Search Results

Search found 27658 results on 1107 pages for 'sql dba'.


  • What is the correct way to increment a field making up part of a composite key?

    - by Tr1stan
    I have a bunch of tables whose primary key is made up of the foreign keys of other tables (a composite key). As a very cut-down example, the attributes might look like this:

        A[aPK, SomeFields]  1:M  B[bPK, aFK, SomeFields]  1:M  C[cPK, bFK, aFK, SomeFields]

    As data this could look like:

        A[aPK, SomeFields]:
        1, Foo
        2, Bar

        B[bPK, aFK, SomeFields]:
        1, 1, FooData1
        2, 1, FooData2
        1, 2, BarData1
        2, 2, BarData2

        C[cPK, bFK, aFK, SomeFields]:
        1, 1, 1, FooData1More
        2, 1, 1, FooData1More
        1, 2, 1, FooData2More
        2, 2, 1, FooData2More
        1, 1, 2, BarData1More
        2, 1, 2, BarData1More
        1, 2, 2, BarData2More
        2, 2, 2, BarData2More

    I have this running in an MSSQL DBMS and I'm looking for the best way to increment the left-most column in each table when a new tuple is added to it. I can't use the auto-increment Identity Specification option, as it has no idea that the column is part of a composite key. I also don't want to use a bare aggregate such as MAX(field)+1, as this will have adverse effects with multiple users inputting data, rollbacks, etc. There might, however, be a nice trigger-based option here, but I'm not sure. This must be a common issue, so I'm hoping that someone has a lovely solution. As an aside, which may or may not affect the answer, I'm using Entity Framework 1.0 as my ORM within a C# MVC application.
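
    One possible approach, sketched here as an assumption rather than a definitive answer: compute the per-parent MAX+1 inside the inserting transaction under UPDLOCK/HOLDLOCK hints. The hints range-lock the parent's rows, which serializes concurrent inserts against the same parent and so avoids the race condition that a bare MAX(field)+1 has. Table and column names are the placeholders from the question; @aFK and the literal data are illustrative.

        BEGIN TRANSACTION;

        DECLARE @aFK INT;
        DECLARE @nextPK INT;
        SET @aFK = 1;  -- parent key for the new row (illustrative value)

        -- Range-lock this parent's rows so no concurrent session can claim the same bPK
        SELECT @nextPK = ISNULL(MAX(bPK), 0) + 1
        FROM B WITH (UPDLOCK, HOLDLOCK)
        WHERE aFK = @aFK;

        INSERT INTO B (bPK, aFK, SomeFields)
        VALUES (@nextPK, @aFK, 'FooData3');

        COMMIT TRANSACTION;

    The same pattern could live inside an INSTEAD OF INSERT trigger so callers (Entity Framework included) do not need to know about it.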

  • Reducing a normalized table to one value

    - by Dio
    Hello, I'm sure this has been asked before, but I'm not quite sure how to search for this question properly; my apologies. I have two tables, Foo and Bar. Foo has one row per food; Bar has many matching descriptor rows per food.

        Foo
        name    id
        Apple   1
        Orange  2

        Bar
        id  description
        1   Tasty
        1   Ripe
        2   Sweet
        etc.

    (Sorry for the somewhat contrived example.) I'm trying to return a query where, for each row in Foo, if Bar contains a descriptor in ('Tasty', 'Juicy'), return true. For example:

        Output
        Apple   True
        Orange  False

    I had been solving this somewhat trivially with a CASE when I only had one item to match:

        select Foo.name,
               case bar.description when 'Tasty' then 'True' else 'False' end
        from Foo
        left join Bar on foo.id = bar.id
        where bar.description = 'Tasty'

    But with multiple items, I keep ending up with extra rows:

        Output
        Apple   True
        Apple   False
        etc.

    Can someone point me in the right direction on how to think about this problem or what I should be doing? Thank you.
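
    A hedged sketch of one way to collapse the matches: move the descriptor filter into the join condition and aggregate per fruit, so each Foo row produces exactly one output row. Table and column names are those from the question.

        SELECT f.name,
               CASE WHEN COUNT(b.id) > 0 THEN 'True' ELSE 'False' END AS matches
        FROM Foo f
        LEFT JOIN Bar b
               ON b.id = f.id
              AND b.description IN ('Tasty', 'Juicy')
        GROUP BY f.id, f.name;

    Keeping the filter in the ON clause (rather than WHERE) preserves the fruits with no matching descriptor, which then count as zero and report 'False'.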

  • Auto increment with a Unit Of Work

    - by Derick
    Context: I'm building a persistence layer to abstract the different types of databases that I'll be needing. On the relational side I have MySQL, Oracle and PostgreSQL. Take the following simplified MySQL tables:

        CREATE TABLE Contact (
          ID varchar(15),
          NAME varchar(30)
        );

        CREATE TABLE Address (
          ID varchar(15),
          CONTACT_ID varchar(15),
          NAME varchar(50)
        );

    I use code to generate system-specific alphanumeric unique IDs, fitting 15 chars in this case. Thus, if I insert a Contact record with its Addresses, I have my generated Contact.ID and Address.CONTACT_IDs before committing. I've created a Unit of Work (amongst others) as per Martin Fowler's patterns to add transaction support. I'm using a key-based Identity Map in the UoW to track the changed records in memory. It works like a charm for the scenario above; all pretty standard stuff so far.

    The question scenario comes in when I have a database that is not under my control and the ID fields are auto-increment (or, in Oracle, sequences). In this case I do not have the db-generated Contact.ID beforehand, so when I create my Address I do not have a value for Address.CONTACT_ID. The transaction has not been started on the DB session, since everything is kept in the Identity Map in memory.

    Question: what is a good approach to address this (avoiding unnecessary db round trips)?

    One idea is to retrieve the last ID. I can do a call to the database to retrieve the last ID, like:

        SELECT Auto_increment
        FROM information_schema.tables
        WHERE table_name = 'Contact';

    But this is MySQL-specific, and probably something similar can be done for the other databases. If I do this, I would need to do the first insert, get the ID and then update the children (Address.CONTACT_IDs), all in the current transaction context.
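
    A hedged sketch of a common alternative, assuming Contact.ID really is an AUTO_INCREMENT column in the externally controlled schema: LAST_INSERT_ID() is scoped to the current connection, so it avoids the concurrency problem of reading information_schema.tables, and the children can be fixed up inside the same transaction. (Oracle's INSERT ... RETURNING ID INTO plays the analogous role for sequence-generated keys.)

        START TRANSACTION;

        INSERT INTO Contact (NAME) VALUES ('Alice');   -- ID generated by the database
        SET @contact_id = LAST_INSERT_ID();            -- per-connection, safe under concurrency

        INSERT INTO Address (CONTACT_ID, NAME) VALUES (@contact_id, 'Home');
        INSERT INTO Address (CONTACT_ID, NAME) VALUES (@contact_id, 'Work');

        COMMIT;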

  • openquery giving different results

    - by Mithil Deshmukh
    I have 2 similar queries:

        select *
        from openquery(powerschool,
          'select * from TEACHERS
           where teachernumber is not null
             and schoolid=''1050''
             and teacherloginid is not null
           order by teachernumber')

    and

        SELECT *
        from openquery(powerschool,
          'SELECT NVL(teachernumber,'''')
           from TEACHERS
           where teachernumber is not null
             and schoolid=''1050''
             and teacherloginid is not null
           order by teachernumber')

    The first one is giving me 182 rows while the second one gives me 83. What's wrong with the queries?

  • MySQL Query Join Table Selecting Highest Date Value

    - by ALHUI
    Here is the query that I run:

        SELECT cl.cl_id, cc_rego, cc_model, cl_dateIn, cl_dateOut
        FROM courtesycar cc
        LEFT JOIN courtesyloan cl ON cc.cc_id = cl.cc_id

    Results:

        1  NXI955  Prado   2013-10-24 11:48:38  NULL
        2  RJI603  Avalon  2013-10-24 11:48:42  2013-10-24 11:54:18
        3  RJI603  Avalon  2013-10-24 12:01:40  NULL

    The result I want is to group by the cc_rego values and print the most recent cl_dateIn value (i.e. only display rows 1 and 3). I've tried to use MAX on the date and a GROUP BY clause, but it combines rows 2 & 3 together, showing both the highest value of dateIn and dateOut. Any help will be appreciated.
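
    A hedged sketch of the usual "latest row per group" pattern: find the most recent cl_dateIn per car in a derived table, then join back to pick up the matching cl_dateOut, rather than letting GROUP BY mix values from different rows. Table and column names are those from the question.

        SELECT cl.cl_id, cc.cc_rego, cc.cc_model, cl.cl_dateIn, cl.cl_dateOut
        FROM courtesycar cc
        JOIN courtesyloan cl
          ON cl.cc_id = cc.cc_id
        JOIN (
            SELECT cc_id, MAX(cl_dateIn) AS latest_in
            FROM courtesyloan
            GROUP BY cc_id
        ) latest
          ON latest.cc_id = cl.cc_id
         AND latest.latest_in = cl.cl_dateIn;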

  • Users Hierarchy Logic

    - by user342944
    Hi guys, I am writing a user security module using SQL Server 2008 and therefore need to design a database accordingly. Formerly I had a Userinfo table with UserID, Username and ParentID to build a recursion and populate a tree representing the hierarchy, but now I have the following criteria to develop. I now have USERS, ADMINISTRATORS and GROUPS. Each node in the user hierarchy is either a user, an administrator or a group.

    User: someone who has login access to my application.

    Administrator: a user who may also manage all their child user accounts (and their children, etc.). This may include creating new users and assigning permissions to those users. There is no limit to the number of administrators in the user structure. The higher up in the hierarchy I go, the more child accounts an administrator has to manage, including other child administrators.

    Group: a user account can be designated as a group. This will be an account which is used to group one or more users together so that they can be managed as a unit, but no one can log in to my application using a group account.

    This is the structure I want to create:

        Super Administrator (administrator)
          |
          +-- Manager A (administrator)
          +-- Manager B (administrator)
          |     |
          |     +-- Employee A (User)
          |     +-- Employee B (User)
          |     +-- Sales Employees (Group)
          |           |
          |           +-- Emp C (User)
          |           +-- Emp D (User)
          |           +-- Emp E (User)
          +-- Manager C (administrator)

    Now, how do I build the table structure to achieve this? Do I need to create a Users table along with a Group table, or what? Please guide me; I would really appreciate it.
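
    A hedged design sketch, not a prescription: since users, administrators and groups all sit in the same tree, one adjacency-list table tagged with a node type may be enough, much like the original Userinfo/ParentID design. The table and column names below are illustrative, not from the original schema.

        CREATE TABLE UserNode (
            UserID    INT IDENTITY(1,1) PRIMARY KEY,
            ParentID  INT NULL REFERENCES UserNode(UserID),  -- NULL only for the super administrator
            UserName  NVARCHAR(50) NOT NULL,
            NodeType  TINYINT NOT NULL,  -- 1 = User, 2 = Administrator, 3 = Group
            CanLogin  AS (CASE WHEN NodeType = 3 THEN 0 ELSE 1 END)  -- group accounts never log in
        );

    A separate membership table would only be needed if one account must belong to several groups at once; with strictly one parent per node, ParentID already expresses group membership.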

  • C# Dataset Dynamically Add DataColumn

    - by Wesley
    I am trying to add an extra column to a dataset after a query has completed. I have a database relationship as follows:

              Employees
               /      \
           Groups    EmployeeGroups

    Employees holds all the data for each individual; I'll name the unique key UserID. Groups holds all the groups that an employee can be a part of, i.e. Super User, Admin, User, etc.; I'll name the unique key GroupID. EmployeeGroups holds all the associations of which groups each employee belongs to (UserID | GroupID).

    What I am trying to accomplish: after querying for all users, I want to loop through each user and record which groups that user is a part of by adding a new column to the dataset named 'Groups', a string into which I insert the values of the next query that gets all the groups the user is a part of. Then, via databinding, I populate a listview with all employees and their group associations.

    My code is as follows; position 5 is the new column I am trying to add to the dataset.

        string theQuery = "select UserID, FirstName, LastName, EmployeeID, Active from Employees";
        DataSet theEmployeeSet = itsDatabase.runQuery(theQuery);

        DataColumn theCol = new DataColumn("Groups", typeof(string));
        theEmployeeSet.Tables[0].Columns.Add(theCol);

        foreach (DataRow theRow in theEmployeeSet.Tables[0].Rows)
        {
            theRow.ItemArray[5] = "1234";
        }

    At the moment, the code will create the new column, but when I assign the data to that column nothing is assigned. What am I missing? If there is any further explanation or information I can provide, please let me know. Thank you all.

  • Can MySQL reasonably perform queries on billions of rows?

    - by haxney
    I am planning on storing scans from a mass spectrometer in a MySQL database and would like to know whether storing and analyzing this amount of data is remotely feasible. I know performance varies wildly depending on the environment, but I'm looking for the rough order of magnitude: will queries take 5 days or 5 milliseconds?

    Input format

    Each input file contains a single run of the spectrometer; each run is comprised of a set of scans, and each scan has an ordered array of datapoints. There is a bit of metadata, but the majority of the file is comprised of arrays of 32- or 64-bit ints or floats.

    Host system

        |----------------+-------------------------------|
        | OS             | Windows 2008 64-bit           |
        | MySQL version  | 5.5.24 (x86_64)               |
        | CPU            | 2x Xeon E5420 (8 cores total) |
        | RAM            | 8 GB                          |
        | SSD filesystem | 500 GiB                       |
        | HDD RAID       | 12 TiB                        |
        |----------------+-------------------------------|

    There are some other services running on the server using negligible processor time.

    File statistics

        |------------------+--------------|
        | number of files  | ~16,000      |
        | total size       | 1.3 TiB      |
        | min size         | 0 bytes      |
        | max size         | 12 GiB       |
        | mean             | 800 MiB      |
        | median           | 500 MiB      |
        | total datapoints | ~200 billion |
        |------------------+--------------|

    The total number of datapoints is a very rough estimate.

    Proposed schema

    I'm planning on doing things "right" (i.e. normalizing the data like crazy) and so would have a runs table, a spectra table with a foreign key to runs, and a datapoints table with a foreign key to spectra.

    The 200 billion datapoint question

    I am going to be analyzing across multiple spectra and possibly even multiple runs, resulting in queries which could touch millions of rows. Assuming I index everything properly (which is a topic for another question) and am not trying to shuffle hundreds of MiB across the network, is it remotely plausible for MySQL to handle this?

    UPDATE: additional info

    The scan data will be coming from files in the XML-based mzML format. The meat of this format is in the <binaryDataArrayList> elements where the data is stored. Each scan produces >= 2 <binaryDataArray> elements which, taken together, form a 2-dimensional (or more) array of the form [[123.456, 234.567, ...], ...]. These data are write-once, so update performance and transaction safety are not concerns.

    My naïve plan for a database schema is:

        runs table
        | column name | type        |
        |-------------+-------------|
        | id          | PRIMARY KEY |
        | start_time  | TIMESTAMP   |
        | name        | VARCHAR     |
        |-------------+-------------|

        spectra table
        | column name    | type        |
        |----------------+-------------|
        | id             | PRIMARY KEY |
        | name           | VARCHAR     |
        | index          | INT         |
        | spectrum_type  | INT         |
        | representation | INT         |
        | run_id         | FOREIGN KEY |
        |----------------+-------------|

        datapoints table
        | column name | type        |
        |-------------+-------------|
        | id          | PRIMARY KEY |
        | spectrum_id | FOREIGN KEY |
        | mz          | DOUBLE      |
        | num_counts  | DOUBLE      |
        | index       | INT         |
        |-------------+-------------|

    Is this reasonable?

  • UUID collision risk using different algorithms

    - by Diego Jancic
    Hi Guys, I have a database where 2 (or maybe 3 or 4) different applications are inserting information. The new information has IDs of the type GUID/UUID, but each application is using a different algorithm to generate the IDs. For example, one is using NHibernate's "guid.comb", another is using SQL Server's NEWID(), and another might want to use .NET's Guid.NewGuid() implementation. Is there an above-normal risk of ID collision or duplicates? Thanks!

  • CTE to build a list of departments and managers (hierarchical)

    - by Milky Joe
    I need to generate a list of users that are managers, or managers of managers, for company departments. I have two tables: one details the departments and one contains the manager hierarchy (simplified):

        CREATE TABLE [dbo].[Manager](
            [ManagerId] [int],
            [ParentManagerId] [int])

        CREATE TABLE [dbo].[Department](
            [DepartmentId] [int],
            [ManagerId] [int])

    Basically, I'm trying to build a CTE that will give me a list of DepartmentIds, together with all ManagerIds that are in the manager hierarchy for that department. So, say Manager 1 is the manager for Department 1, Manager 2 is Manager 1's manager, and Manager 3 is Manager 2's manager; I'd like to see:

        DepartmentId, ManagerId
        1, 1
        1, 2
        1, 3

    Basically, managers are able to deal with all of their sub-managers' departments. Building the CTE to return the manager hierarchy was fairly simple, but I'm struggling to inject the departments in there:

        WITH DepartmentManagers AS
        (
            SELECT ManagerId, ParentManagerId, 0 AS Depth
            FROM Manager

            UNION ALL

            SELECT Manager.ManagerId, Manager.ParentManagerId, DepartmentManagers.Depth + 1 AS Depth
            FROM Manager
            INNER JOIN DepartmentManagers ON DepartmentManagers.ManagerId = Manager.ParentManagerId
        )

    Can anyone help?
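
    A hedged sketch of one way to bring the departments in: seed the recursion with each department's own manager, then walk up the ParentManagerId chain, carrying the DepartmentId along at every level. Names are those from the question.

        WITH DepartmentManagers AS
        (
            SELECT d.DepartmentId, m.ManagerId, m.ParentManagerId, 0 AS Depth
            FROM dbo.Department d
            INNER JOIN dbo.Manager m ON m.ManagerId = d.ManagerId

            UNION ALL

            SELECT dm.DepartmentId, m.ManagerId, m.ParentManagerId, dm.Depth + 1
            FROM dbo.Manager m
            INNER JOIN DepartmentManagers dm ON m.ManagerId = dm.ParentManagerId
        )
        SELECT DepartmentId, ManagerId
        FROM DepartmentManagers
        ORDER BY DepartmentId, Depth;

    With the sample chain in the question (Manager 1 runs Department 1, Manager 2 manages Manager 1, Manager 3 manages Manager 2), this returns the rows 1,1 / 1,2 / 1,3.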

  • Which will be the best query, or is there another one?

    - by serhan
        SELECT k.id, k.adsoyad,
               COUNT(DISTINCT(e.id)) AS iletisimbilgisisay,
               COUNT(DISTINCT(f.id)) AS ilangonderensay,
               COUNT(DISTINCT(g.id)) AS emlaksahibisay,
               isNULL(MAX(eb.yonetici_kisi), 0) AS yoneticiid
        FROM dbo.kisiler k
        LEFT OUTER JOIN dbo.emlaklar e ON e.iletisimbilgisi = k.id
        LEFT OUTER JOIN dbo.emlaklar f ON f.ilangonderen = k.id
        LEFT OUTER JOIN dbo.emlaklar g ON g.emlaksahibi = k.id
        LEFT OUTER JOIN dbo.emlakcibilgileri eb ON eb.yonetici_kisi = k.id
        GROUP BY k.id, k.adsoyad
        ORDER BY yoneticiid DESC, iletisimbilgisisay DESC, ilangonderensay DESC

    Total execution time (above): 28

        SELECT id, adsoyad,
               (select COUNT(id) FROM dbo.emlaklar WHERE iletisimbilgisi = k.id) AS iletisimbilgisisay,
               (select COUNT(id) FROM dbo.emlaklar WHERE emlaksahibi = k.id) AS emlaksahibisay,
               (select COUNT(id) FROM dbo.emlaklar WHERE ilangonderen = k.id) AS ilangonderensay,
               (Select isNULL(MAX(id), 0) FROM dbo.emlakcibilgileri WHERE yonetici_kisi = k.id) AS yoneticiid
        FROM dbo.kisiler k

    Total execution time: 4

    My tables are:

        emlaklar: id int, ilangonderen int, iletisimbilgisi int, emlaksahibi int
        kisiler: id int, kisiadi
        emlakcibilgileri: id int, yonetici_kisi int, firma

    and ilangonderen, iletisimbilgisi, emlaksahibi, yonetici_kisi => kisiler.id

  • Exceptions by DataContext

    - by Bas
    I've been doing some searching on the internet, but I can't seem to find the answer. What exceptions can a DataContext throw? Or, to be more specific, what exceptions does the DataContext.SubmitChanges() method throw?

  • how to increase the limit of exceptions in oracle

    - by Arunachalam
    How do I increase the limit of exceptions in Oracle? I have an Excel sheet with about 900 records to be appended, so I converted the Excel file to a .dat file and wrote a batch file that reads from the .dat file and appends the records to the relevant table. However, the batch file stops execution once the exceptions reach 51 (all integrity constraint "parent key not found" errors), so the remaining valid rows are not loaded. It's very difficult to find which record violates the integrity constraint. Is there a way to increase this exception limit?

  • retrieve columns from sqlite3

    - by John Smith
    I have two tables in sqlite:

        CREATE TABLE fruit ('fid' integer, 'name' text);
        CREATE TABLE basket ('fid1' integer, 'fid2' integer, 'c1' integer, 'c2' integer);

    basket is supposed to have count c1 of fruit fid1 and count c2 of fruit fid2. I created a view fruitbasket:

        create view fruitbasket as
        select * from basket
        inner join fruit a on a.fid = basket.fid1
        inner join fruit b on b.fid = basket.fid2;

    It works (almost) as expected. When I type

        pragma table_info(fruitbasket);

    I get the following output:

        0|fid1|integer|0||0
        1|fid2|integer|0||0
        2|c1|integer|0||0
        3|c2|integer|0||0
        4|fid|integer|0||0
        5|name|text|0||0
        6|fid:1|integer|0||0
        7|name:1|text|0||0

    The problem is that I cannot seem to SELECT name:1. How can I do it, other than going back and re-aliasing the columns?
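
    A hedged sketch of the re-aliasing route, since it removes the ambiguity at the source: recreate the view with explicit aliases for the duplicated columns, so there is no auto-generated "name:1" to fight with later. The alias names name1/name2 are illustrative.

        DROP VIEW IF EXISTS fruitbasket;
        CREATE VIEW fruitbasket AS
        SELECT b.fid1, b.fid2, b.c1, b.c2,
               a.name AS name1,
               f.name AS name2
        FROM basket b
        INNER JOIN fruit a ON a.fid = b.fid1
        INNER JOIN fruit f ON f.fid = b.fid2;

    Quoting the generated identifier directly may also work, e.g. SELECT "name:1" FROM fruitbasket; but relying on the auto-generated name is fragile.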

  • select from multiple tables but ordering by a datetime field

    - by Chris Mccabe
    I have 3 tables that are unrelated (well, related only in that each contains data for a different social network). Each has a datetime field named dated. I'm already grouping by hour, as you can see below (this one is for LinkedIn):

        SELECT count(*), date_format(dated, '%Y:%m:%d %H') as hour
        FROM upd8r_linked_in_accts
        WHERE CAST(dated AS DATE) = '".$start_date."'
        GROUP BY hour

    I would like to know how to get a total across all 3 networks. The tables for the three are:

        CREATE TABLE IF NOT EXISTS `upd8r_facebook_accts` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `owner_id` varchar(50) NOT NULL,
          `user_id` int(11) NOT NULL,
          `fb_id` bigint(30) NOT NULL,
          `dated` datetime NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=80 ;

        CREATE TABLE IF NOT EXISTS `upd8r_linked_in_accts` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `owner_id` varchar(50) NOT NULL,
          `user_id` int(11) NOT NULL,
          `linked_in` varchar(200) NOT NULL,
          `oauth_secret` varchar(100) NOT NULL,
          `first_count` int(11) NOT NULL,
          `second_count` int(11) NOT NULL,
          `dated` datetime NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=200 ;

        CREATE TABLE IF NOT EXISTS `upd8r_twitter_accts` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `owner_id` varchar(50) NOT NULL,
          `user_id` int(11) NOT NULL,
          `twitter` varchar(200) NOT NULL,
          `twitter_secret` varchar(100) NOT NULL,
          `dated` datetime NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=9 ;

    Something like this?

        (SELECT count(*), date_format(dated, '%Y:%m:%d %H') as hour
         FROM upd8r_linked_in_accts
         WHERE CAST(dated AS DATE) = '".$start_date."')
        UNION ALL
        (SELECT count(*), date_format(dated, '%Y:%m:%d %H') as hour
         FROM upd8r_facebook_accts
         WHERE CAST(dated AS DATE) = '".$start_date."')
        UNION ALL
        (SELECT count(*), date_format(dated, '%Y:%m:%d %H') as hour
         FROM upd8r_twitter_accts
         WHERE CAST(dated AS DATE) = '".$start_date."')
        UNION ALL
        GROUP BY hour
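
    A hedged sketch of one way to combine the counts: the trailing UNION ALL / GROUP BY in the attempt above is not valid on its own, but the three per-network counts can be wrapped in a derived table and summed per hour. The PHP string concatenation for $start_date is kept exactly as in the question.

        SELECT hour, SUM(cnt) AS total
        FROM (
            SELECT COUNT(*) AS cnt, DATE_FORMAT(dated, '%Y:%m:%d %H') AS hour
            FROM upd8r_linked_in_accts
            WHERE CAST(dated AS DATE) = '".$start_date."'
            GROUP BY hour
          UNION ALL
            SELECT COUNT(*) AS cnt, DATE_FORMAT(dated, '%Y:%m:%d %H') AS hour
            FROM upd8r_facebook_accts
            WHERE CAST(dated AS DATE) = '".$start_date."'
            GROUP BY hour
          UNION ALL
            SELECT COUNT(*) AS cnt, DATE_FORMAT(dated, '%Y:%m:%d %H') AS hour
            FROM upd8r_twitter_accts
            WHERE CAST(dated AS DATE) = '".$start_date."'
            GROUP BY hour
        ) AS per_network
        GROUP BY hour
        ORDER BY hour;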

  • Insert Stored Procedure does not Create Database Record

    - by SidC
    Hello All,

    I have the following stored procedure:

        ALTER PROCEDURE Pro_members_Insert
            @id int output,
            @LoginName nvarchar(50),
            @Password nvarchar(15),
            @FirstName nvarchar(100),
            @LastName nvarchar(100),
            @signupDate smalldatetime,
            @Company nvarchar(100),
            @Phone nvarchar(50),
            @Email nvarchar(150),
            @Address nvarchar(255),
            @PostalCode nvarchar(10),
            @State_Province nvarchar(100),
            @City nvarchar(50),
            @countryCode nvarchar(4),
            @active bit,
            @activationCode nvarchar(50)
        AS
        declare @usName as varchar(50)
        set @usName = ''
        select @usName = isnull(LoginName, '') from members where LoginName = @LoginName
        if @usName <> ''
        begin
            set @ID = -3
            RAISERROR('User Already exist.', 16, 1)
            return
        end

        set @usName = ''
        select @usName = isnull(email, '') from members where Email = @Email
        if @usName <> ''
        begin
            set @ID = -4
            RAISERROR('Email Already exist.', 16, 1)
            return
        end

        declare @MemID as int
        select @memID = isnull(max(ID), 0) + 1 from members

        INSERT INTO members
            (id, LoginName, Password, FirstName, LastName, signupDate, Company, Phone,
             Email, Address, PostalCode, State_Province, City, countryCode, active, activationCode)
        VALUES
            (@Memid, @LoginName, @Password, @FirstName, @LastName, @signupDate, @Company, @Phone,
             @Email, @Address, @PostalCode, @State_Province, @City, @countryCode, @active, @activationCode)

        if @@error <> 0
            set @ID = -1
        else
            set @id = @memID

    Note that I've "inherited" this sproc and the database. I am trying to insert a new record from my signup.aspx page. My SqlDataSource is as follows:

        <asp:SqlDataSource runat="server" ID="dsAddMember"
            ConnectionString="rmsdbuser"
            InsertCommandType="StoredProcedure"
            InsertCommand="Pro_members_Insert"
            ProviderName="System.Data.SqlClient">

    The click handler for btnSave is as follows:

        Protected Sub btnSave_Click(ByVal sender As Object, ByVal e As EventArgs) Handles btnSave.Click
            Try
                dsAddMember.DataBind()
            Catch ex As Exception
            End Try
        End Sub

    When I run this page, signup.aspx, provide the required fields and click submit, the page simply reloads and the database table does not reflect the newly inserted record.

    Questions:
    1. How do I catch the error messages that might be returned from the sproc?
    2. Please advise how to change signup.aspx so that the insert occurs.

    Thanks, Sid
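
    Before changing the page wiring, it may help to exercise the procedure directly in Management Studio to separate sproc problems from page problems. A hedged test harness follows; all parameter values are placeholders, and the TRY/CATCH surfaces the RAISERROR text ('User Already exist.' / 'Email Already exist.') that the page currently swallows.

        BEGIN TRY
            DECLARE @NewID INT;
            EXEC Pro_members_Insert
                 @id = @NewID OUTPUT,
                 @LoginName = 'testuser', @Password = 'secret',
                 @FirstName = 'Test', @LastName = 'User',
                 @signupDate = '2010-01-01', @Company = 'Acme', @Phone = '555-0100',
                 @Email = 'test@example.com', @Address = '1 Main St',
                 @PostalCode = '00000', @State_Province = 'NY', @City = 'New York',
                 @countryCode = 'US', @active = 1, @activationCode = 'ABC123';
            PRINT 'Returned ID: ' + CAST(@NewID AS VARCHAR(10));
        END TRY
        BEGIN CATCH
            PRINT ERROR_MESSAGE();  -- the message raised by the procedure
        END CATCH;

    If this inserts correctly, the problem is on the page side (the data source's Insert needs to be invoked with its parameters supplied, and the empty Catch block hides whatever exception is thrown).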

  • Merge computed data from two tables back into one of them

    - by Tyler McHenry
    I have the following situation (as a reduced example). Two tables, Measures1 and Measures2, each of which stores an ID, a Weight in grams, and optionally a Volume in fluid ounces. (In reality, Measures1 has a good deal of other data that is irrelevant here.)

    Contents of Measures1:

        +----+----------+--------+
        | ID | Weight   | Volume |
        +----+----------+--------+
        |  1 | 100.0000 | NULL   |
        |  2 | 200.0000 | NULL   |
        |  3 | 150.0000 | NULL   |
        |  4 | 325.0000 | NULL   |
        +----+----------+--------+

    Contents of Measures2:

        +----+----------+----------+
        | ID | Weight   | Volume   |
        +----+----------+----------+
        |  1 | 75.0000  | 10.0000  |
        |  2 | 400.0000 | 64.0000  |
        |  3 | 100.0000 | 22.0000  |
        |  4 | 500.0000 | 100.0000 |
        +----+----------+----------+

    These tables describe equivalent weights and volumes of a substance, e.g. 10 fluid ounces of substance 1 weighs 75 grams. The IDs are related: ID 1 in Measures1 is the same substance as ID 1 in Measures2.

    What I want to do is fill in the NULL volumes in Measures1 using the information in Measures2, but keeping the weights from Measures1 (then, ultimately, I can drop the Measures2 table, as it will be redundant). For the sake of simplicity, assume that all volumes in Measures1 are NULL and all volumes in Measures2 are not.

    I can compute the volumes I want to fill in with the following query:

        SELECT Measures1.ID, Measures1.Weight,
               (Measures2.Volume * (Measures1.Weight / Measures2.Weight)) AS DesiredVolume
        FROM Measures1
        JOIN Measures2 ON Measures1.ID = Measures2.ID;

    Producing:

        +----+----------+-----------------+
        | ID | Weight   | DesiredVolume   |
        +----+----------+-----------------+
        |  4 | 325.0000 | 65.000000000000 |
        |  3 | 150.0000 | 33.000000000000 |
        |  2 | 200.0000 | 32.000000000000 |
        |  1 | 100.0000 | 13.333333333333 |
        +----+----------+-----------------+

    But I am at a loss for how to actually insert these computed values into the Measures1 table. Preferably, I would like to be able to do it with a single query rather than writing a script or stored procedure that iterates through every ID in Measures1. But even then I am worried that this might not be possible, because the MySQL documentation says that you can't use a table in an UPDATE query and a SELECT subquery at the same time, and I think any solution would need to do that.

    I know that one workaround might be to create a new table with the results of the above query (also selecting all of the other non-Volume fields in Measures1) and then drop both tables and replace Measures1 with the newly created table, but I was wondering if there was any better way to do it that I am missing.
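
    A hedged sketch of a single-statement approach: MySQL's multi-table UPDATE joins the two tables directly, which sidesteps the restriction on selecting from the table being updated in a subquery. Table and column names are those from the question.

        UPDATE Measures1 m1
        JOIN Measures2 m2 ON m2.ID = m1.ID
        SET m1.Volume = m2.Volume * (m1.Weight / m2.Weight)
        WHERE m1.Volume IS NULL;

    The WHERE clause restricts the write to the rows that are still missing a volume, so any already-populated values would be left untouched.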

  • Selecting records with NULL values

    - by jjj
    I am trying to select the records that have a NULL TotalTime value from the table NewTimeAttendance. TotalTime's datatype is nchar(10).

        select * from newtimeattendance where TotalTime = 'NULL'   -- nothing
        select * from newtimeattendance where TotalTime = 'null'   -- nothing
        select * from newtimeattendance where TotalTime = 'Null'   -- nothing
        select * from newtimeattendance where TotalTime = null     -- nothing
        select * from newtimeattendance where TotalTime = Null     -- nothing
        select * from newtimeattendance where TotalTime = NULL     -- nothing

    When I select the whole table, I can see that there are some NULL TotalTime values! It is a small select statement; why doesn't it work? Is there a way (a special way) to match NULL with the nchar type? Thanks in advance.
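
    A hedged sketch of the distinction: NULL is not a string and never compares equal to anything (including another NULL), so the IS NULL predicate is needed. If the column instead stores the literal text 'NULL' padded out to nchar(10), trimming before comparing covers that case too.

        -- rows whose TotalTime is actually NULL
        SELECT * FROM NewTimeAttendance WHERE TotalTime IS NULL;

        -- rows whose nchar(10) column holds the literal word 'NULL' plus padding
        SELECT * FROM NewTimeAttendance WHERE RTRIM(TotalTime) = 'NULL';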

  • Getting a linq table to be dynamically sent to a method

    - by Damian Spaulding
    I have a procedure:

        var Edit = (from R in Linq.Products
                    where R.ID == RecordID
                    select R).Single();

    I would like to make "Linq.Products" dynamic. Something like:

        protected void Page_Load(object sender, EventArgs e)
        {
            something(Linq.Products);
        }

        public void something(Object MyObject)
        {
            System.Data.Linq.Table<Product> Dynamic = (System.Data.Linq.Table<Product>)MyObject;

            var Edit = (from R in Dynamic
                        where R.ID == RecordID
                        select R).Single();
        }

    My problem is that my "something" method will not be able to know what table has been sent to it, so the static line:

        System.Data.Linq.Table<Product> Dynamic = (System.Data.Linq.Table<Product>)MyObject;

    would have to be something like:

        System.Data.Linq.Table<T> Dynamic = (System.Data.Linq.Table<T>)MyObject;

    with <T> being a dynamic catch-all type parameter, so that LINQ can just execute the code as if I had hand-coded it statically. I have been pulling my hair out with this one. Please help.

  • How to return a record from a function executed by an INSERT/UPDATE rule (trigger)?

    - by seas
    I have the following schema for my database:

        create sequence data_sequence;

        create table data_table (
          id integer primary key,
          field varchar(100)
        );

        create view data_view as
          select id, field from data_table;

        create function data_insert(_new data_view) returns data_view as $$
        declare
          _id integer;
          _result data_view%rowtype;
        begin
          _id := nextval('data_sequence');
          insert into data_table(id, field) values(_id, _new.field);
          select * into _result from data_view where id = _id;
          return _result;
        end;
        $$ language plpgsql;

        create rule insert as on insert to data_view do instead select data_insert(new);

    Then I type in psql:

        insert into data_view(field) values('abc');

    I would like to see something like:

         id | field
        ----+-------
          1 | abc

    Instead I see:

         data_insert
        -------------
         (1,"abc")

    Is it possible to fix this somehow? Thanks for any ideas.

    The ultimate idea is to use this in other functions, so that I could obtain the id of a just-inserted record without selecting for it from scratch. Something like:

        insert into data_view(field) values('abc') returning id into my_variable

    would be nice, but doesn't work, with the error:

        ERROR:  cannot perform INSERT RETURNING on relation "data_view"
        HINT:   You need an unconditional ON INSERT DO INSTEAD rule with a RETURNING clause.

    I don't really understand that HINT. I use PostgreSQL 8.4.
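
    A hedged sketch following the HINT: replace the SELECT-based rule with a single unconditional DO INSTEAD INSERT whose RETURNING list matches the view's columns. With that in place, INSERT ... RETURNING on data_view should work, including RETURNING id INTO a plpgsql variable. The rule name is kept from the question, quoted because "insert" is a keyword.

        CREATE OR REPLACE RULE "insert" AS ON INSERT TO data_view
        DO INSTEAD
            INSERT INTO data_table (id, field)
            VALUES (nextval('data_sequence'), NEW.field)
            RETURNING data_table.id, data_table.field;

        -- usage, e.g. inside another plpgsql function:
        -- INSERT INTO data_view(field) VALUES ('abc') RETURNING id INTO my_variable;

    This makes the separate data_insert() function unnecessary for the insert path, since the sequence call moves into the rule itself.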

  • Reuse select query in a procedure in Oracle

    - by Jer
    How would I store the result of a select statement so I can reuse the results with an IN clause in other queries? Here's some pseudo code:

        declare
          ids <type?>;
        begin
          ids := select id from table_with_ids;

          select * from table1 where id in (ids);
          select * from table2 where id in (ids);
        end;

    ... or will the optimizer do this for me if I simply put the sub-query in both select statements?
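
    A hedged PL/SQL sketch of one way to fill in the pseudo code: load the ids into a collection once with BULK COLLECT, then expand it with TABLE() in each later query. The collection type must be created at schema level (not inside the block) for TABLE() to see it; table and type names follow the question or are illustrative.

        CREATE TYPE id_list AS TABLE OF NUMBER;
        /
        DECLARE
            ids id_list;
        BEGIN
            SELECT id BULK COLLECT INTO ids FROM table_with_ids;

            FOR r IN (SELECT * FROM table1
                      WHERE id IN (SELECT column_value FROM TABLE(ids))) LOOP
                NULL;  -- process table1 rows here
            END LOOP;

            FOR r IN (SELECT * FROM table2
                      WHERE id IN (SELECT column_value FROM TABLE(ids))) LOOP
                NULL;  -- process table2 rows here
            END LOOP;
        END;
        /

    Whether this beats simply repeating the sub-query depends on how expensive the select against table_with_ids is; the optimizer will not automatically share its result between the two statements.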
