Search Results

Search found 30398 results on 1216 pages for 'embedded sql'.

Page 638/1216 | < Previous Page | 634 635 636 637 638 639 640 641 642 643 644 645  | Next Page >

  • Stored Procedure in Entity Framework

    - by kamal
    Hi, I have added a stored procedure to my Entity Framework model and imported the functions in the .edmx. Is it required to map all three functions (insert, update, and delete) for a table? I tried with insert alone and also with all three, but either way I can't get the name of the stored procedure to appear in the connection string. To be clear about what I have done: I added the stored procedure, imported the functions in the model browser, and mapped the insert, update, and delete functions to the table, with a return type set only for insert and update. I still can't get the name of the SP in the connection string. Please let me know how I can resolve this issue. Thanks in advance, Kamal.

    Read the article

  • How to define a query on an n-m table

    - by user559889
    Hi, I'm having trouble defining a query. I have a Product table and a Category table. A product can belong to multiple categories and vice versa, so there is also a Product-Category table. Now I want to select all products that belong to a certain category, but if the user does not provide a category, I want all products. I tried a query using a join, but that returns a product multiple times when it belongs to multiple categories (in the case where no specific category is queried). What kind of query do I have to create? Thanks
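
    One way to avoid the duplicates is to test category membership with EXISTS instead of a join; a minimal sketch, assuming a Product_Category junction table and a nullable :category_id parameter (names are illustrative):

        SELECT p.*
        FROM   Product p
        WHERE  :category_id IS NULL
           OR  EXISTS (SELECT 1
                       FROM   Product_Category pc
                       WHERE  pc.product_id  = p.product_id
                       AND    pc.category_id = :category_id);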

    Read the article

  • Convert a value based on range

    - by Chris
    I need to convert a number to another value based on a range, e.g. 7 = "A", 106 = "I". I have a range like this:

        from  to        return-val
        1     17        A
        17    35        B
        35    38        C
        38    56        D
        56    72        E
        72    88        F
        88    98        G
        98    104       H
        104   115       I
        115   120       J
        120   123       K
        123   129       L
        129   infinity  M

    The values are fixed and do not change. I was thinking a lookup table would be required, but is there a way it could be done with a function or an analytic function inside Oracle?
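
    A minimal sketch of the lookup-table approach in Oracle, treating each row's lower bound as inclusive and its upper bound as exclusive (table and column names are illustrative assumptions):

        CREATE TABLE range_lookup (
          lo   NUMBER NOT NULL,
          hi   NUMBER,            -- NULL = no upper limit ("infinity")
          val  CHAR(1) NOT NULL
        );

        -- Map a number :n to its letter by joining on the range.
        SELECT r.val
        FROM   range_lookup r
        WHERE  :n >= r.lo
        AND    (:n < r.hi OR r.hi IS NULL);

    Since the boundaries never change, the same mapping could also be hard-coded as a single CASE expression instead of a table.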

    Read the article

  • DB2 CASE Statement

    - by gamerzfuse
    I need to somehow use the CASE syntax (which is beyond me) to adjust the query results based on criteria. I have a bunch of royalties in 0.# form (royalty), a title ID # (title_id), and I need to show the new increase in royalties so that I can use the data. If the current royalty is 0.0 - 0.1, that is a 10% raise; if it is 0.11 - 0.15, a 20% raise; if royalty >= 0.16, a 20% raise. Any help would be much appreciated.

        create table royalites (
          title_id char(6),
          lorange integer,
          hirange integer,
          royalty decimal(5,2));
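
    A minimal sketch of the CASE expression, using the table from the question (the 20% figure for both upper bands is taken from the question as written):

        SELECT title_id,
               royalty,
               CASE
                 WHEN royalty <= 0.10 THEN royalty * 1.10   -- 10% raise
                 WHEN royalty <= 0.15 THEN royalty * 1.20   -- 20% raise
                 ELSE                      royalty * 1.20   -- 20% raise
               END AS new_royalty
        FROM   royalites;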

    Read the article

  • Modeling a Generic Relationship (expressed in C#) in a Database

    - by StevenH
    This is most likely one for all you sexy DBAs out there: how would I efficiently model a relational database where I have a field in an "Event" table which defines a "SportType"? This "SportType" field can hold a link to different sport tables, e.g. "FootballEvent", "RugbyEvent", "CricketEvent" and "F1Event". Each of these sport tables has different fields specific to that sport. My goal is to be able to generically add sport types in the future as required, yet hold sport-specific event data (fields) as part of my Event entity. Is it possible to use an ORM such as NHibernate / Entity Framework / DataObjects.NET which would reflect such a relationship? I have thrown together a quick C# example to express my intent at a higher level:

        public class Event<T> where T : new()
        {
            public T Fields { get; set; }
            public Event() { Fields = new T(); }
        }

        public class FootballEvent
        {
            public Team CompetitorA { get; set; }
            public Team CompetitorB { get; set; }
        }

        public class TennisEvent
        {
            public Player CompetitorA { get; set; }
            public Player CompetitorB { get; set; }
        }

        public class F1RacingEvent
        {
            public List<Player> Drivers { get; set; }
            public List<Team> Teams { get; set; }
        }

        public class Team
        {
            public IEnumerable<Player> Squad { get; set; }
        }

        public class Player
        {
            public string Name { get; set; }
            public DateTime DOB { get; set; }
        }

    Read the article

  • Using Multiple Foreign Keys to the same table in LINQ

    - by Graeme
    I have a Users table and an Items table. In the Items table I have fields such as ModifiedBy, CreatedBy and AssignedTo, which all hold a user ID integer. The database is set up with these as foreign keys back to the Users table. When using LINQ to SQL, the relationships which are automatically built from the .dbml end up giving me names like User, User1 and User2, e.g. myItem.User1.Name or myItem.User2.Name. Obviously this isn't very readable, and I'd like it to be along the lines of myItem.CreatedByUser.Name or myItem.ModifiedByUser.Name etc. I could change the names of the relationships, but that means I have to redo it every time I change the db schema and refresh the .dbml. Is there any way round this?

    Read the article

  • Porting join from Oracle to Postgres

    - by Grasper
        INSERT INTO MISSION_OBJECTIVE (
            MSN_INT_ID, MO_INT_ID, MO_MSN_CLASS_NM, MO_MSN_CLASS_CD, MO_MSN_TYPE,
            MO_PRIORITY, MO_COMMENT, MO_START_DT, MO_END_DT, ASP_AIRSPACE_NM,
            MO_OBJ_LOCATION, MO_ALO_LEG_ID, MO_ALO_ARRIVE_LOC)
        SELECT '1025', '1', 'AIRDROP', 'ADP', 'LAPES', NULL, COALESCE(NULL, ' '),
               TO_TIMESTAMP('1002260900', 'YYMMDDHH24MI'),
               TO_TIMESTAMP('1002260915', 'YYMMDDHH24MI'),
               'TRANSIT ALPHA', 'TRANSIT ALPHA', '1', 'TRANSIT ALPHA'
        FROM   AIRSPACE ASP, apsmain.MISSION_CLASS MC
        WHERE  ASP.ASP_AIRSPACE_NM(+) = 'TRANSIT ALPHA'
        AND    MC.MCS_MISSION_CLASS_NAME = 'AIRDROP'
        AND    'TRANSIT ALPHA' IS NOT NULL

    Is that exactly the same as:

        INSERT INTO MISSION_OBJECTIVE (
            MSN_INT_ID, MO_INT_ID, MO_MSN_CLASS_NM, MO_MSN_CLASS_CD, MO_MSN_TYPE,
            MO_PRIORITY, MO_COMMENT, MO_START_DT, MO_END_DT, ASP_AIRSPACE_NM,
            MO_OBJ_LOCATION, MO_ALO_LEG_ID, MO_ALO_ARRIVE_LOC)
        SELECT '1025', '1', 'AIRDROP', 'ADP', 'LAPES', NULL, COALESCE(NULL, ' '),
               TO_TIMESTAMP('1002260900', 'YYMMDDHH24MI'),
               TO_TIMESTAMP('1002260915', 'YYMMDDHH24MI'),
               'TRANSIT ALPHA', 'TRANSIT ALPHA', '1', 'TRANSIT ALPHA'
        FROM   AIRSPACE ASP, apsmain.MISSION_CLASS MC
        WHERE  ASP.ASP_AIRSPACE_NM = 'TRANSIT ALPHA'
        AND    MC.MCS_MISSION_CLASS_NAME = 'AIRDROP'
        AND    'TRANSIT ALPHA' IS NOT NULL

    I just deleted the (+). The part that is confusing me is that ASP.ASP_AIRSPACE_NM is being right joined to a constant.
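
    For what it's worth, the two statements are not equivalent: the (+) marks AIRSPACE as the outer-joined side, so the Oracle version still returns a row when no AIRSPACE row matches 'TRANSIT ALPHA', while the version without (+) is a plain inner join and would return nothing in that case. A sketch of an ANSI-join rewrite that keeps the outer-join behaviour and runs on PostgreSQL (only the FROM/WHERE portion is shown; the INSERT and SELECT lists stay as above):

        ...
        FROM  apsmain.MISSION_CLASS MC
        LEFT JOIN AIRSPACE ASP
               ON ASP.ASP_AIRSPACE_NM = 'TRANSIT ALPHA'
        WHERE MC.MCS_MISSION_CLASS_NAME = 'AIRDROP'
        AND   'TRANSIT ALPHA' IS NOT NULL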

    Read the article

  • Submitting changes to 2 tables with C# LINQ, but only one table is changing

    - by Laurence Burke
    So I am changing values in 2 different tables, but the only table actually changing is the Address table. Does anyone know why?

        protected void btnSubmit_Click(object sender, EventArgs e)
        {
            TestDataClassDataContext dc = new TestDataClassDataContext();
            var addr = (from a in dc.Addresses
                        where a.AddressID == Convert.ToInt32(ddlAddList.SelectedValue)
                        select a).FirstOrDefault();
            var caddr = (from ca in dc.CustomerAddresses
                         where addr.AddressID == ca.AddressID
                         select ca).FirstOrDefault();
            if (txtZip.Text != "" && txtAdd1.Text != "" && txtCity.Text != "")
            {
                addr.AddressLine1 = txtAdd1.Text;
                addr.AddressLine2 = txtAdd2.Text;
                addr.City = txtCity.Text;
                addr.PostalCode = txtZip.Text;
                addr.StateProvinceID = Convert.ToInt32(ddlState.SelectedValue);
                caddr.AddressTypeID = Convert.ToInt32(ddlAddrType.SelectedValue);
                dc.SubmitChanges();
                lblErrMsg.Visible = false;
                lblSuccess.Visible = true;
            }
            else
            {
                lblErrMsg.Text = "Invalid Input";
                lblErrMsg.Visible = true;
            }
        }

    Read the article

  • Insert a datetime value with the GetDate() function into a SQL Server (2005) table?

    - by David.Chu.ca
    I am working on (or fixing bugs in) an application which was developed in VS 2005 C#. The application saves data to a SQL Server 2005 database. One of the insert SQL statements tries to insert a timestamp value into a field, using the GetDate() T-SQL function as the datetime value:

        INSERT INTO table1 (field1, ... fieldDt) VALUES ('value1', ... GetDate());

    The reason for using GetDate() is that the SQL Server may be at a remote site, and the date time may be in a different time zone; GetDate() will always return the server's own date and time. The function can be verified in SQL Management Studio, and this is what I get:

        SELECT GetDate(), LEN(GetDate());
        -- 2010-06-10 14:04:48.293    19

    One thing I notice is that the length does not extend to the milliseconds, i.e. 19 is actually the length of '2010-06-10 14:04:48'. Anyway, the issue I have right now is that after the insert, fieldDt actually holds a datetime value with precision only to the minute, for example '2010-06-10 14:04:00'. I am not sure why. I don't have permission to alter the table or add a trigger to update the field. My question is: how can I use an INSERT T-SQL statement to add a new row with a datetime value (the SQL Server's local date time) with precision up to milliseconds?
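
    One thing worth checking, offered only as a guess from the symptoms: a value that always comes back rounded to the minute is the classic behaviour of a smalldatetime column (or of a smalldatetime parameter somewhere between the INSERT and the table). A quick sketch to verify the declared type, using the table and column names from the question:

        -- Check the declared type of the target column.
        SELECT DATA_TYPE
        FROM   INFORMATION_SCHEMA.COLUMNS
        WHERE  TABLE_NAME = 'table1' AND COLUMN_NAME = 'fieldDt';

        -- If the column is DATETIME rather than SMALLDATETIME, GetDate() keeps
        -- its millisecond precision with the plain insert from the question:
        INSERT INTO table1 (field1, fieldDt) VALUES ('value1', GetDate());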

    Read the article

  • Binding multiple arrays for WHERE IN in PostgreSQL

    - by Alec
    So I want to prepare a query something like:

        SELECT id FROM users WHERE (branch, cid) IN $1;

    But I then need to bind a variable-length list of arrays like (('a','b'),('c','d')) to it. How do I go about doing this? I've tried using ANY but can't seem to get the syntax right. Cheers, Alec

    Edit: After some fiddling around, this is valid syntactically:

        SELECT id FROM users WHERE (branch, cid) = ANY ($1::text[][]);

    and then binding the string '{{a,b},{c,d}}' to $1, but it throws the error "operator does not exist: record = text". Changing 'text' to 'record' then throws "input of anonymous composite types is not implemented". Any ideas?
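
    One way around the record/text comparison is to bind two parallel arrays (one of branches, one of cids) and join against them; a minimal sketch, noting that the multi-argument form of unnest requires PostgreSQL 9.4 or later:

        -- $1 = '{a,c}', $2 = '{b,d}' : element i of each array forms one (branch, cid) pair.
        SELECT u.id
        FROM   users u
        JOIN   unnest($1::text[], $2::text[]) AS p(branch, cid)
          ON   u.branch = p.branch
         AND   u.cid    = p.cid;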

    Read the article

  • Modeling Tools that understand both Relational and LDAP

    - by jm04469
    I am looking to do some modeling and would like a tool that can capture not only a relational model (as ERwin does) but also allow us to easily port the model to LDAP as an option. NOTE: Visio can connect to an existing LDAP server and draw it, but does not allow you to model first and then deploy, unlike its relational capabilities.

    Read the article

  • Finding the right terminology for a dictionary table

    - by Karl Forner
    My concern is about what I currently call "dictionary tables": database tables containing a list of controlled vocabulary. Let's use an example. Suppose you have a table User containing the fields:

        user_id      : primary key
        first_name
        last_name
        user_type_id : foreign key to the UserType table

    and another table UserType with just two fields:

        user_type_id : primary key
        name         : the name/value of a particular type of user

    For instance, the UserType table may contain (1, Administrator), (2, PowerUser), (3, Normal)... My question is: what is the canonical term for a table like UserType, that only contains a list of (distinct) words? I want to publish some code that helps manage this kind of table, but first I have to name them! Thanks for your help.

    Current state of thought: for now I feel "lookup tables" is a good term. It is also used with the same meaning in these posts:

        http://dbix-class.35028.n2.nabble.com/RFC-Component-for-Lookup-tables-td3504085.html
        http://tonyandrews.blogspot.de/2004/10/otlt-and-eav-two-big-design-mistakes.html
        Lookup Tables Best Practices: DB Tables... or Enumerations

    The only problem is that "lookup table" is also sometimes used to name a junction table.

    Read the article

  • How do I connect to MSSQL 2008 database in Java with JDBC

    - by shuxer
    I have MSSQL 2008 installed on my local PC, and my Java application needs to connect to a MSSQL database. I am new to MSSQL and I would like some help on creating a user login for my Java application and getting a connection via JDBC. So far I have tried to create a user login for my app and used the following connection string, but it doesn't work at all. Any help and hints will be appreciated.

        jdbc:jtds:sqlserver://127.0.0.1:1433/dotcms
        username="shuxer" password="itarator"

    Read the article

  • Optimize date query for large child tables: GiST or GIN?

    - by Dave Jarvis
    Problem: 72 child tables, each having a year index and a station index, are defined as follows:

        CREATE TABLE climate.measurement_12_013
        (
        -- Inherited from table climate.measurement_12_013:  id bigint NOT NULL DEFAULT nextval('climate.measurement_id_seq'::regclass),
        -- Inherited from table climate.measurement_12_013:  station_id integer NOT NULL,
        -- Inherited from table climate.measurement_12_013:  taken date NOT NULL,
        -- Inherited from table climate.measurement_12_013:  amount numeric(8,2) NOT NULL,
        -- Inherited from table climate.measurement_12_013:  category_id smallint NOT NULL,
        -- Inherited from table climate.measurement_12_013:  flag character varying(1) NOT NULL DEFAULT ' '::character varying,
          CONSTRAINT measurement_12_013_category_id_check CHECK (category_id = 7),
          CONSTRAINT measurement_12_013_taken_check CHECK (date_part('month'::text, taken)::integer = 12)
        )
        INHERITS (climate.measurement);

        CREATE INDEX measurement_12_013_s_idx
          ON climate.measurement_12_013
          USING btree (station_id);
        CREATE INDEX measurement_12_013_y_idx
          ON climate.measurement_12_013
          USING btree (date_part('year'::text, taken));

    (Foreign key constraints are to be added later.) The following query runs abysmally slowly due to a full table scan:

        SELECT
          count(1) AS measurements,
          avg(m.amount) AS amount
        FROM climate.measurement m
        WHERE m.station_id IN (
          SELECT s.id
          FROM climate.station s, climate.city c
          WHERE
            -- For one city ...
            c.id = 5182 AND
            -- Where stations are within an elevation range ...
            s.elevation BETWEEN 0 AND 3000 AND
            6371.009 * SQRT(
              POW(RADIANS(c.latitude_decimal - s.latitude_decimal), 2) +
              (COS(RADIANS(c.latitude_decimal + s.latitude_decimal) / 2) *
               POW(RADIANS(c.longitude_decimal - s.longitude_decimal), 2))
            ) <= 50
        ) AND
        -- Begin extracting the data from the database.
        -- The data before 1900 is shaky; insufficient after 2009.
        extract( YEAR FROM m.taken ) BETWEEN 1900 AND 2009 AND
        -- Whittled down by category ...
        m.category_id = 1 AND
        m.taken BETWEEN
          -- Start date.
          (extract( YEAR FROM m.taken )||'-01-01')::date AND
          -- End date. Calculated by checking to see if the end date wraps
          -- into the next year. If it does, then add 1 to the current year.
          (cast(extract( YEAR FROM m.taken ) + greatest( -1 * sign(
            (extract( YEAR FROM m.taken )||'-12-31')::date -
            (extract( YEAR FROM m.taken )||'-01-01')::date
          ), 0 ) AS text)||'-12-31')::date
        GROUP BY extract( YEAR FROM m.taken )

    The sluggishness comes from this part of the query:

        m.taken BETWEEN
          /* Start date. */
          (extract( YEAR FROM m.taken )||'-01-01')::date AND
          /* End date. Calculated by checking to see if the end date wraps
             into the next year. If it does, then add 1 to the current year. */
          (cast(extract( YEAR FROM m.taken ) + greatest( -1 * sign(
            (extract( YEAR FROM m.taken )||'-12-31')::date -
            (extract( YEAR FROM m.taken )||'-01-01')::date
          ), 0 ) AS text)||'-12-31')::date

    The HashAggregate from the plan shows a cost of 10006220141.11, which is, I suspect, on the astronomically huge side. There is a full table scan on the measurement table (itself having neither data nor indexes) being performed. The table aggregates 237 million rows from its child tables.

    Question: what is the proper way to index the dates to avoid full table scans? Options I have considered: GIN, GiST, rewriting the WHERE clause, or separate year_taken, month_taken, and day_taken columns in the tables. What are your thoughts? Thank you!
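
    A minimal sketch of one of the listed options (rewriting the WHERE clause so m.taken is compared against plain, sargable date constants, paired with an ordinary btree index per child table); the composite-index column order is an assumption, the station subquery is omitted for brevity, and no claim is made that this beats the other options:

        -- One btree index per child table, e.g. for the December partition shown above.
        CREATE INDEX measurement_12_013_cts_idx
          ON climate.measurement_12_013 (category_id, taken, station_id);

        -- Compare m.taken against constants so the index (and constraint
        -- exclusion on the child tables) can actually be used.
        SELECT date_part('year', m.taken) AS year,
               count(1)                   AS measurements,
               avg(m.amount)              AS amount
        FROM   climate.measurement m
        WHERE  m.category_id = 1
        AND    m.taken >= DATE '1900-01-01'
        AND    m.taken <  DATE '2010-01-01'
        GROUP  BY date_part('year', m.taken);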

    Read the article

  • jwplayer embedded with recent flash with html5 back end can't hide controls or add autoplay and loop

    - by Daniel Redwood
    Hey all, after the unfortunate realization that Firefox is just not going to play nice with an embedded HTML5 video given my server setup, I've decided to go the JW Player route, since it's compatible with iPads and iPhones. I can get the file to show up on my page, but it's big and heavy. I would like it to be just the video: no controls, autoplay, and on loop. Can you help me out? Here's the code...

        <div id="container">Loading the player ...</div>

        jwplayer("container").setup({
          flashplayer: "jwplayer/player.swf",
          file: "Video/fernando.m4v",
          height: 520,
          width: 780,
        });

        <script type='text/javascript' src='swfobject.js'></script>

    Read the article

  • Why are DB constraints not added during table creation?

    - by Pratik
    Hi all, what is the difference between these two ways of creating a table?

        CREATE TABLE TABLENAME(
          field1....
          field2...
          add constraint constraint1;
          add constraint constraint2;
        )

    and

        CREATE TABLE TABLENAME(
          field1....
          field2...
        )
        ALTER TABLE TABLENAME add constraint1
        ALTER TABLE TABLENAME add constraint2

    Moreover, the first script fails in SQL*Plus but passes in SQL Developer. Thanks! Pratik
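
    For reference, a sketch of the inline syntax that CREATE TABLE itself accepts: the ADD CONSTRAINT keywords belong to ALTER TABLE only, so inside CREATE TABLE the constraint is simply declared (the column types and constraint bodies below are illustrative assumptions):

        CREATE TABLE tablename (
          field1 NUMBER,
          field2 VARCHAR2(50),
          CONSTRAINT constraint1 PRIMARY KEY (field1),
          CONSTRAINT constraint2 CHECK (field2 IS NOT NULL)
        );

        -- Equivalent constraint added after creation:
        ALTER TABLE tablename ADD CONSTRAINT constraint2 CHECK (field2 IS NOT NULL);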

    Read the article

  • Freebase Query with "JOINS"

    - by codemonkey
    ... Yeah, yeah, I know traditional joins don't exist. I actually like the Freebase query methodology in theory, I'm just having a little trouble getting it to actually work for me : ) Does anyone have a dumb-simple example of getting Freebase data via MQL that pulls from two different "tables"? In particular, I'm trying to get automotive data, so for example pulling fields from both /automotive/model_year and /automotive/trim_year. I've read the documentation (for hours, actually). There's a distinct possibility that I'm looking right at such an example somewhere and just not seeing it, because my OLTP brain doesn't comprehend what it's seeing. Note: the two "types" I'm working with above are siblings, not parent/child. Does Freebase even allow joining data between sibling nodes? I see examples of queries pulling from parent/child, but I don't think I've seen any from siblings (or I've overlooked them).

    Read the article

  • C# : DBConnection.Open() timeout is too long.

    - by leo
    Hi, I'm trying to connect to a server that the user inputs. When the server doesn't exist, I'd like to give quick feedback to the end user so he can correct what he's typed. Is there any way to test whether a server exists before trying to connect? Thanks

    Read the article

  • C# SqlDataAdapter not populating DataSet

    - by Wesley
    I have searched the net only to not quite find the problem I am running into. I am currently having an issue getting a SqlDataAdapter to populate a DataSet. I am running Visual Studio 2008 and the query is being sent to a local instance of SQL Server 2008. If I run the query itself in SQL Server, I do get data back. Code is as follows:

        string theQuery = "select Password from Employees where employee_ID = '@EmplID'";
        SqlDataAdapter theDataAdapter = new SqlDataAdapter();
        theDataAdapter.SelectCommand = new SqlCommand(theQuery, conn);
        theDataAdapter.SelectCommand.Parameters.Add("@EmplID", SqlDbType.VarChar).Value = "EmployeeName";
        theDataAdapter.Fill(theSet);

    The code to read the DataSet:

        foreach (DataRow theRow in theSet.Tables[0].Rows)
        {
            //process row info
        }

    If there is any more info I can supply please let me know.
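
    One likely culprit, going only by the snippet above: the placeholder in the query text is wrapped in single quotes, so SQL Server compares employee_ID to the literal string '@EmplID' rather than to the bound parameter, and no rows come back. A sketch of the query text without the quotes:

        select Password from Employees where employee_ID = @EmplID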

    Read the article

  • How To Get A Field Value Based On The Max Of Another Field In VFP v8.0

    - by DaveB
    So, I have a table and I want to get the value from one field in the record with the greatest DateTime() value in another field, where still another field is equal to a certain value. Example data:

        Balance    Created                  MeterNumber
        7924.252   02/02/2010 10:31:48 AM   2743800
        7924.243   02/02/2010 11:01:37 AM   2743876
        7924.227   02/02/2010 03:55:50 PM   2743876

    I want to get the balance for the record with the greatest created datetime for a specific meter number. In VFP 7 I can use:

        SELECT a.balance, MAX(a.created)
        FROM MyTable a
        WHERE a.meternumber = '2743876'

    But with the VFP v8.0 OLE DB driver I am using in my ASP.NET page I must conform to VFP 8, which says you must have a GROUP BY listing each non-aggregate field in the SELECT. This would return a record for each balance if I added GROUP BY a.balance to my query. Yes, I could issue SET ENGINEBEHAVIOR 70, but I wanted to know if this could be done without having to revert to a previous version.
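
    One pattern that satisfies the VFP 8 GROUP BY rule is to push the MAX into a subquery instead of the select list; a sketch, assuming VFP 8 accepts a scalar subquery here (if not, IN in place of = works the same way):

        SELECT a.balance
        FROM MyTable a
        WHERE a.meternumber = '2743876'
          AND a.created = (SELECT MAX(b.created)
                           FROM MyTable b
                           WHERE b.meternumber = '2743876')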

    Read the article

  • SQL2008 merge replication fails to update dependent items when table is added

    - by Dan Puzey
    Setup: an existing SQL 2008 merge replication scenario. A large server database, including views and stored procs, being replicated to client machines.

    What I'm doing:

        * adding a new table to the database
        * marking the new table for replication (using SP_AddMergeArticle)
        * altering a view (which is already part of the replicated content) so it includes fields from this new table (which is joined to the tables in the existing view); a stored procedure is similarly updated

    The problem: the table gets replicated to client machines, but the view is not updated. The stored procedure is also not updated.

    Non-useful workaround: if I run the snapshot agent after calling SP_AddMergeArticle and before updating the view/SP, both the view and the stored procedure changes replicate correctly to the client.

    The bigger problem: I'm running a list of database scripts in a transaction, as part of a larger process. The snapshot agent can't be run during a transaction, and if I interrupt the transaction (e.g. by running the scripts in multiple transactions), I lose the ability to roll back the changes should something fail.

    Does anyone have any suggestions? It seems like I must be missing something obvious, because I don't see why the changes to the view/sproc wouldn't be replicating anyway, regardless of what's going on with the new table.

    Read the article

  • Put stored procedure result set into a table-- any shortcuts besides INSERT INTO...EXEC?

    - by larryq
    Hi everyone, I'm creating a temp table from a stored procedure's result set. Right now I'm using a CREATE TABLE statement followed by INSERT INTO ... EXEC. It gets the job done, but I was curious whether there are other ways of going about this. I wish it were possible to run a SELECT INTO with the stored procedure's result set serving the role of the SELECT statement, so that I wouldn't have to write a CREATE statement beforehand (and so that if the stored proc's columns change, no modifications would be needed). If there are other ways to go about this that might fit my needs better, I'd love to hear about them. Thanks very much.
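
    One workaround in that spirit is SELECT ... INTO from OPENROWSET, which lets the temp table's columns come from whatever the procedure returns; a sketch, assuming 'Ad Hoc Distributed Queries' is enabled on the server (the connection string and procedure name are placeholders):

        SELECT *
        INTO   #proc_results
        FROM   OPENROWSET(
                 'SQLNCLI',
                 'Server=(local);Trusted_Connection=yes;',
                 'EXEC dbo.MyStoredProc') AS r;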

    Read the article

  • MySQL table data transformation -- how can I dis-aggreate MySQL time data?

    - by lighthouse65
    We are coding for a MySQL data warehousing application that stores descriptive data (User ID, Work ID, Machine ID, Start and End Time columns in the first table below) associated with time and production quantity data (Output and Time columns in the first table below), upon which aggregate (SUM, COUNT, AVG) functions are applied. We now wish to dis-aggregate the time data for another type of analysis. Our current data table design:

        +---------+---------+------------+---------------------+---------------------+--------+------+
        | User ID | Work ID | Machine ID | Event Start Time    | Event End Time      | Output | Time |
        +---------+---------+------------+---------------------+---------------------+--------+------+
        | 080025  | ABC123  | M01        | 2008-01-24 16:19:15 | 2008-01-24 16:34:45 | 2120   | 930  |
        +---------+---------+------------+---------------------+---------------------+--------+------+

    The dis-aggregation reprocessing that we would like to do would transform table content based on a granularity of minutes, rather than the current production-event ("Event Start Time" and "Event End Time") granularity. The resulting reprocessing of existing table rows would look like:

        +---------+---------+------------+---------------------+--------+
        | User ID | Work ID | Machine ID | Production Minute   | Output |
        +---------+---------+------------+---------------------+--------+
        | 080025  | ABC123  | M01        | 2010-01-24 16:19    | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:20    | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:21    | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:22    | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:23    | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:24    | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:25    | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:26    | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:27    | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:28    | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:29    | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:30    | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:31    | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:32    | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:33    | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:34    | 133    |
        +---------+---------+------------+---------------------+--------+

    So the reprocessing would take an existing row of data created at the granularity of a production event and change the granularity to minutes, eliminating the now-redundant (Event End Time, Time) columns while doing so. It assumes a constant rate of production and divides Output by the difference in minutes plus one to populate the new table's Output column. I know this can be done in code... but can it be done entirely in a MySQL INSERT statement (or otherwise entirely in MySQL)? I am thinking of an INSERT INTO ... SELECT construction but keep getting stuck. An additional complexity is that there are hundreds of machines to include in the operation, so there will be multiple rows (one for each machine) for each minute of the day. Any ideas would be much appreciated. Thanks.
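
    A minimal sketch of the INSERT INTO ... SELECT approach, joining each event against a persistent integer table so that one row is produced per elapsed minute (the events, production_minutes, and seq table names and columns are assumptions standing in for the real schema):

        -- Helper table holding 0, 1, 2, ... up to the longest event length in minutes.
        CREATE TABLE seq (n INT PRIMARY KEY);

        INSERT INTO production_minutes
               (user_id, work_id, machine_id, production_minute, output)
        SELECT e.user_id,
               e.work_id,
               e.machine_id,
               DATE_FORMAT(e.event_start + INTERVAL s.n MINUTE, '%Y-%m-%d %H:%i'),
               ROUND(e.output / (TIMESTAMPDIFF(MINUTE, e.event_start, e.event_end) + 1))
        FROM   events e
        JOIN   seq s
          ON   s.n <= TIMESTAMPDIFF(MINUTE, e.event_start, e.event_end);

    With the sample row above this yields 16 minute rows and ROUND(2120 / 16) = 133 per minute, matching the expected output.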

    Read the article
