Search Results

Search found 28220 results on 1129 pages for 'sql clr'.


  • ASP.MVC ModelBinding Behaviour

    - by OldBoy
    This one has me stumped, despite the numerous posts on here. The scenario is a basic MVC(2) web application with simple CRUD operations. Whenever the edit form is submitted and UpdateModel() is called, an exception is thrown:

      System.Data.Linq.ForeignKeyReferenceAlreadyHasValueException was unhandled by user code

    This occurs against a DropDownList value which is a foreign key on the entity table. However, there is another DropDownList on the form, representing another foreign key, which does not throw the error (unsurprisingly). Changing the property values manually inside the Edit action:

      Recipe recipe = repository.GetRecipe(int.Parse(formValues["recipeid"]));
      recipe.CategoryId = Convert.ToInt32(formValues["CategoryId"].ToString());
      recipe.Page = int.Parse(formValues["Page"].ToString());
      recipe.PublicationId = Convert.ToInt32(formValues["PublicationId"].ToString());

    allows the CategoryId and Page properties to be updated, and then the error is thrown on PublicationId. All of the referential integrity is checked and the same in the db and the dbml. Any light shed on this would be most welcome.

  • Calculating percent of votes inside a MySQL statement

    - by Beck
      UPDATE polls_options
      SET `votes` = `votes` + 1,
          `percent` = ROUND((`votes` + 1) / (SELECT voters FROM polls WHERE poll_id=? LIMIT 1) * 100, 1)
      WHERE option_id=? AND poll_id=?

    I don't have table data yet to test it properly. :) And by the way, what column type should the percentages be stored as in the database? Thanks for the help!
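
    One wrinkle worth noting: MySQL evaluates UPDATE assignments left to right, so by the time the percent expression runs, `votes` already holds the incremented value and the extra +1 would double-count. A minimal sketch of the same statement with that taken into account (table layout and placeholders as in the question):

      -- `votes` on the right-hand side of the second assignment already
      -- sees the new value, because MySQL applies SET assignments in order.
      UPDATE polls_options
      SET `votes` = `votes` + 1,
          `percent` = ROUND(
              `votes` / (SELECT voters FROM polls WHERE poll_id = ? LIMIT 1) * 100,
              1)
      WHERE option_id = ? AND poll_id = ?;

    As for the column type: a percentage rounded to one decimal place fits a DECIMAL(4,1) rather than an integer type.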

  • how to insert excel-2003 values into SQL2005 database?

    - by vas
    Are there any rules or guidelines for inserting data from XLS sheets into a SQL database? I have a group of Excel templates from 2005. Each relevant cell in an Excel template is named. When the Excel sheets are filled in, saved and submitted, the values are transferred to the database. The sheets have names for the various cells that are to be filled in by the user. For example, for the total amount of milk at the beginning of a given month there is a cell named "mtsBpiPTR180", and for the total at the end of the month a cell named "mtsEpiPTR180". I have added two new cells, named "mtsBpiPTR180PA" and "mtsEpiPTR180PA". Now when I upload the Excel file, I cannot see the data I entered in "mtsBpiPTR180PA" and "mtsEpiPTR180PA" in the related DB table. Those two columns are empty, even though I filled the cells in and submitted the sheet successfully. No matter how much I search the DB and its stored procs, I am unable to find the actual stored proc that moves the data from the Excel sheet into the tables. So I was wondering: are there any rules or guidelines for how data from XLS sheets gets inserted into a SQL database?
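
    For what it's worth, one common way to pull Excel 2003 data into SQL Server 2005 is OPENROWSET with the Jet provider, which can read a workbook-level named range directly. This is only a hedged sketch, not the application's actual import path (that lives in whatever stored proc handles the upload); the file path is a placeholder and 'Ad Hoc Distributed Queries' must be enabled on the server:

      -- Read a named range from a 2003 .xls; the path is a placeholder.
      SELECT *
      FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                      'Excel 8.0;Database=C:\uploads\report.xls',
                      'SELECT * FROM [mtsBpiPTR180PA]');

    If the import proc builds its column list from the named cells it already knows about, brand-new names like "mtsBpiPTR180PA" would be silently ignored until both the proc and the target table are extended, which would explain the empty columns.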

  • how to reverse the order of words in query or c#

    - by Ranjana
    I have a location stored in the database as 'India,Tamilnadu,Chennai,Annanagar'. When I bind it in the grid view it is displayed in that same order, but I need it displayed reversed, as 'Annanagar,Chennai,Tamilnadu,India'. How can I reverse the order of the words in the query or in C#?

  • read text file line by line and insert/update values in table

    - by I__
    I am exploring whether DoCmd.TransferText will do what I need, and it seems like it won't. I need to insert data if it does not exist and update it if it does exist. I am planning to read a text file line by line like this:

      Dim intFile As Integer
      Dim strLine As String
      intFile = FreeFile()
      Open myFile For Input As #intFile
      Line Input #intFile, strLine
      Close #intFile

    I guess each individual line will be a record. It will probably be comma separated, and some fields will have a " text qualifier because the field itself will contain commas. My question is: how would I read a comma-delimited text file that sometimes uses double quotes as text qualifiers into a table in Access?

  • get n records at a time from a temporary table

    - by Claudiu
    I have a temporary table with about 1 million entries. The temporary table stores the result of a larger query. I want to process these records 1000 at a time, for example. What's the best way to set up queries such that I get the first 1000 rows, then the next 1000, etc.? They are not inherently ordered, but the temporary table just has one column with an ID, so I can order it if necessary. I was thinking of creating an extra column in the temporary table to number all the rows, something like:

      CREATE TEMP TABLE tmptmp AS
      SELECT ##autonumber somehow##, id
      FROM ....  --complicated query

    then I can do:

      SELECT * FROM tmptmp WHERE autonumber >= 0 AND autonumber < 1000

    etc... How would I actually accomplish this? Or is there a better way? I'm using Python and PostgreSQL.
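
    A minimal sketch of the numbering approach, assuming PostgreSQL 8.4+ for window functions and using "big_query" as a stand-in for the complicated query:

      -- row_number() supplies the autonumber column; -1 makes it zero-based.
      CREATE TEMP TABLE tmptmp AS
      SELECT row_number() OVER (ORDER BY id) - 1 AS autonumber, id
      FROM (SELECT id FROM big_query) AS q;

      -- then fetch one batch at a time:
      SELECT id FROM tmptmp
      WHERE autonumber >= 0 AND autonumber < 1000;

    On older versions, ORDER BY id LIMIT 1000 OFFSET n gives the same batches, at the cost of re-sorting on every pass.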

  • Querying using table-valued parameter

    - by antmx
    I need help with writing a sproc, please. It takes a table-valued parameter @Locations, whose type is defined as follows:

      CREATE TYPE [dbo].[tvpLocation] AS TABLE(
          [CountryId] [int] NULL,
          [ResortName] [nvarchar](100) NULL,
          [Ordinal] [int] NOT NULL,
          PRIMARY KEY CLUSTERED ([Ordinal] ASC) WITH (IGNORE_DUP_KEY = OFF)
      )

    @Locations will contain at least 1 row. Each row WILL have a non-null CountryId, and MAY have a non-null ResortName. Each row will have a unique Ordinal, the first being 0. The combinations of CountryId and ResortName in @Locations will be unique. The sproc needs to search against a table structure (pictured in the original post) linking tours, tour hotels and hotels. Now this is where I'm stuck. The sproc should be able to find Tours where: the Tour's 1st TourHotel (Ordinal 0) has the same CountryId (and ResortName, if specified) as the 1st row of @Locations (Ordinal 0); and, if @Locations has more than 1 row, the Tour must have additional TourHotels, ALL of which must be in the remaining CountryIds (and ResortNames, if specified) of the remaining @Locations rows.

    Edit: this is the code I finally used, based on Anthony Faull's suggestion. Thank you so much, Anthony:

      select distinct T.Id
      from tblTour T
      join tblTourHotel TH on TH.TourId = T.Id
      join tblHotel H on H.Id = TH.HotelId
      join @Locations L
          on ((L.Ordinal = 0 and TH.Ordinal = 0) or (L.Ordinal > 0 and TH.Ordinal > 0))
          and L.CountryId = H.CountryId
          and (L.ResortName = H.ResortName or L.ResortName is null)
      cross apply (
          select COUNT(TH2.Id) as [Count]
          from tblTourHotel TH2
          where TH2.TourId = TH.TourId
      ) TourHotelCount
      where TourHotelCount.[Count] = @LocationCount
      group by T.Id, T.TourRef, T.Description, T.DepartureDate, T.NumNights,
               T.DepartureAirportId, T.DestinationAirportId, T.AirlineId, T.FEPrice
      having COUNT(distinct TH.Id) = @LocationCount
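
    For completeness, a sketch of the procedure shell around that query; the procedure name is invented here, the TVP must be declared READONLY, and @LocationCount can be derived from @Locations itself:

      CREATE PROCEDURE dbo.FindToursByLocations
          @Locations dbo.tvpLocation READONLY
      AS
      BEGIN
          DECLARE @LocationCount int;
          SELECT @LocationCount = COUNT(*) FROM @Locations;

          -- the SELECT above goes here, joining tblTour, tblTourHotel,
          -- tblHotel and @Locations
      END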

  • Altering sql tables based on condition

    - by Parker
    Is there any way to add a row of data to only some of the tables in a database? I am not sure what parameter I could use to compare the tables to each other. Any ideas? For example: my database has tables that are (let's say) group A tables, and tables that are group B. I want to add a row to only the group B tables while leaving the group A tables untouched. Sorry, I should have been a bit more specific. The tables that need to have a row added will change. My application monitors inventory in different store locations (each table in my database represents a store). When I need to add an item to inventory (the items are rows in the tables), I don't want to have to manually add the row to all the store tables. My problem is: not all the tables in the database represent stores. For instance, one table stores the user login data. Obviously I do not want to add the new row to that table. How do I update only the tables that represent stores?
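
    Given the existing one-table-per-store design, one workable approach is a naming convention (or a catalog table) marking which tables are stores, plus dynamic SQL. A hedged sketch, assuming SQL Server and invented item columns:

      -- Loop over every table whose name marks it as a store and run the
      -- same INSERT against each; non-store tables never match the filter.
      DECLARE @tbl sysname, @sql nvarchar(max);
      DECLARE store_cursor CURSOR FOR
          SELECT TABLE_NAME
          FROM INFORMATION_SCHEMA.TABLES
          WHERE TABLE_TYPE = 'BASE TABLE'
            AND TABLE_NAME LIKE 'store[_]%';  -- convention: store tables share a prefix
      OPEN store_cursor;
      FETCH NEXT FROM store_cursor INTO @tbl;
      WHILE @@FETCH_STATUS = 0
      BEGIN
          SET @sql = N'INSERT INTO ' + QUOTENAME(@tbl)
                   + N' (ItemName, Quantity) VALUES (@name, 0)';
          EXEC sp_executesql @sql, N'@name nvarchar(100)', @name = N'New item';
          FETCH NEXT FROM store_cursor INTO @tbl;
      END
      CLOSE store_cursor;
      DEALLOCATE store_cursor;

    The longer-term fix is usually a single inventory table with a StoreId column, which turns this whole problem into one ordinary INSERT per store.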

  • Oracle: how to "group by" over a range?

    - by Mark Harrison
    If I have a table like this:

      pkey  age
      ----  ---
      1     8
      2     5
      3     12
      4     12
      5     22

    I can "group by" to get a count of each age:

      select age, count(*) n from tbl group by age;

      age  n
      ---  -
      5    1
      8    1
      12   2
      22   1

    What query can I use to group by age ranges?

      age    n
      -----  -
      1-10   2
      11-20  2
      20+    1
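
    One common Oracle approach is to bucket the ages with a CASE expression and group on the bucket:

      SELECT CASE
               WHEN age BETWEEN 1 AND 10  THEN '1-10'
               WHEN age BETWEEN 11 AND 20 THEN '11-20'
               ELSE '20+'
             END AS age_range,
             COUNT(*) AS n
      FROM tbl
      GROUP BY CASE
                 WHEN age BETWEEN 1 AND 10  THEN '1-10'
                 WHEN age BETWEEN 11 AND 20 THEN '11-20'
                 ELSE '20+'
               END
      ORDER BY MIN(age);  -- keeps the ranges in numeric order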

  • create temporary table from cursor

    - by Claudiu
    Is there any way, in PostgreSQL accessed from Python using SQLObject, to create a temporary table from the results of a cursor? Previously I had a query, and I created the temporary table directly from the query. I then had many other queries interacting with that temporary table. Now I have much more data, so I want to only process about 1000 rows at a time. However, I can't do CREATE TEMP TABLE ... AS ... from a cursor, not as far as I can see. Is the only option something like:

      rows = cur.fetchmany(1000)
      cur2 = conn.cursor()
      cur2.execute("""CREATE TEMP TABLE foobar (id INTEGER)""")
      for row in rows:
          cur2.execute("""INSERT INTO foobar VALUES (%d)""" % row)

    or is there a better way? This seems awfully inefficient.
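
    If the rows really do have to round-trip through the client, batching each fetchmany() chunk into a single multi-row INSERT (supported since PostgreSQL 8.2) is far cheaper than one statement per row; the values below are placeholders for one such batch:

      CREATE TEMP TABLE foobar (id INTEGER);
      -- one INSERT per 1000-row batch instead of 1000 INSERTs:
      INSERT INTO foobar (id) VALUES (101), (102), (103);

    Building the VALUES list per batch on the Python side produces exactly this shape.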

  • MySQL COUNT() multiple columns

    - by liam
    Hello, I'm trying to fetch the most popular tags from all videos in my database (ignoring blank tags). I also need the 'flv' for each tag. I have this working as I want if each video has one tag:

      SELECT tag_1, flv, COUNT(tag_1) AS tagcount
      FROM videos
      WHERE NOT tag_1=''
      GROUP BY tag_1
      ORDER BY tagcount DESC
      LIMIT 0, 10

    However, in my database each video is allowed three tags: tag_1, tag_2 and tag_3. Is there a way to get the most popular tags reading from multiple columns? The record structure is:

      +-------+--------------+------+-----+---------+----------------+
      | Field | Type         | Null | Key | Default | Extra          |
      +-------+--------------+------+-----+---------+----------------+
      | id    | int(11)      | NO   | PRI | NULL    | auto_increment |
      | flv   | varchar(150) | YES  |     | NULL    |                |
      | tag_1 | varchar(75)  | YES  |     | NULL    |                |
      | tag_2 | varchar(75)  | YES  |     | NULL    |                |
      | tag_3 | varchar(75)  | YES  |     | NULL    |                |
      +-------+--------------+------+-----+---------+----------------+
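
    A sketch of one way: normalize the three columns with UNION ALL first, then count. Like the original query, this leans on MySQL's loose GROUP BY to pick an arbitrary flv per tag:

      SELECT tag, flv, COUNT(*) AS tagcount
      FROM (
          SELECT tag_1 AS tag, flv FROM videos
          UNION ALL
          SELECT tag_2, flv FROM videos
          UNION ALL
          SELECT tag_3, flv FROM videos
      ) AS t
      WHERE tag IS NOT NULL AND tag <> ''
      GROUP BY tag
      ORDER BY tagcount DESC
      LIMIT 0, 10;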

  • Filter on count(*) in oracle

    - by chris
    I have a grouped query, and would like to filter it based on count(*). Can I do this without a subquery? This is what I have currently:

      select *
      from (select ID, count(*) cnt from name group by ID)
      where cnt > 1;
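
    The subquery can go: HAVING filters on aggregates after grouping, which is exactly this case:

      SELECT ID, COUNT(*) AS cnt
      FROM name
      GROUP BY ID
      HAVING COUNT(*) > 1;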

  • Solr autocommit and autooptimize?

    - by Camran
    I will be uploading my website to a VPS soon. It is a classifieds website which uses Solr integrated with MySQL. Solr is updated whenever a new classified is posted or deleted. I need a way to automate the commit() and optimize() calls, for example once every 3 hours or so. How can I do this? (Details, please.) And when is it ideal to optimize? Thanks

  • help with delete where not in query

    - by kralco626
    I have a lookup table (##lookup). I know it's bad design because I'm duplicating data, but it speeds up my queries tremendously. I have a query that populates this table:

      insert into ##lookup select distinct col1,col2,... from table1...join...etc...

    I would like to simulate this behavior:

      delete from ##lookup
      insert into ##lookup select distinct col1,col2,... from table1...join...etc...

    This would clearly update the table correctly. But this is a lot of inserting and deleting. It messes with my indexes and locks up the table for selecting from. This table could also be updated by something like:

      delete from ##lookup where not in (select distinct col1,col2,... from table1...join...etc...)
      insert into ##lookup (select distinct col1,col2,... from table1...join...etc...) except if it is already in the table

    The second way may take longer, but I can say "with no lock" and I will be able to select from the table. Any ideas on how to write the query the second way?
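
    A hedged sketch of the incremental version, using two stand-in columns since the real column list is elided above; NOT EXISTS handles both the "not in" delete and the "except if already there" insert:

      -- remove rows that fell out of the source query
      DELETE L
      FROM ##lookup AS L
      WHERE NOT EXISTS (
          SELECT 1
          FROM (SELECT DISTINCT col1, col2 FROM table1 /* ...join... */) AS S
          WHERE S.col1 = L.col1 AND S.col2 = L.col2);

      -- add rows that are new since the last refresh
      INSERT INTO ##lookup (col1, col2)
      SELECT S.col1, S.col2
      FROM (SELECT DISTINCT col1, col2 FROM table1 /* ...join... */) AS S
      WHERE NOT EXISTS (
          SELECT 1 FROM ##lookup AS L
          WHERE L.col1 = S.col1 AND L.col2 = S.col2);

    On SQL Server 2008+, a single MERGE statement can express the same delete-and-insert pair.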

  • Fetch multiple rows from SQL in PHP foreach item in array

    - by TrySpace
    I am trying to request an array of IDs and return each row with one of those IDs, pushing each into an array $finalArray. But only the first result from the query is output, and on the second foreach iteration it skips the while loop. I have this working in another script, so I don't understand where it's going wrong. $arrayItems is an array containing "home" and "info".

      $finalArray = array();
      foreach ($arrayItems as $UID_get) {
          $Query = "SELECT * FROM items WHERE (uid = '" . cleanQuery($UID_get) . "') ORDER BY uid";
          if ($Result = $mysqli->query($Query)) {
              print_r($UID_get);
              echo "<BR><-><BR>";
              while ($Row = $Result->fetch_assoc()) {
                  array_push($finalArray, $Row);
                  print_r($finalArray);
                  echo "<BR><><BR>";
              }
          } else {
              echo '{ "returned" : "FAIL" }'; //. mysqli_connect_errno() . ' ' . mysqli_connect_error() . "<BR>";
          }
      }

    (cleanQuery is there to escape and strip slashes.) What I'm trying to get is an array of multiple rows (after I json_encode it), like:

      {"finalArray" :
        { "home": {"id":"1","created":"0000-00-00 00:00:00","css":"{ \"background-color\" : \"red\" }"} },
        { "info": {"id":"2","created":"0000-00-00 00:00:00","css":"{ \"background-color\" : \"blue\" }"} }
      }

    But that's after I get both, or more, results from the db. The print_r($UID_get); does print "info", but then nothing. So why am I not getting the second row, for "info"? I am essentially re-querying for each $arrayItem, right?
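
    Whatever is swallowing the second result set, a single query with an IN list avoids the per-item loop entirely, one round trip for all ids (the values shown are the ones from the question):

      SELECT * FROM items
      WHERE uid IN ('home', 'info')
      ORDER BY uid;

    The rows can then be keyed into $finalArray by their uid column while fetching.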

  • ColdFusion Timeout Error

    - by Jason
    I have a scheduled task that runs once a day and builds an XML file that I pass off to another group. Recently the amount of data has greatly increased and is now causing the task to time out (I think). I have tried to optimize my script as much as possible, but with no luck. It times out long before an hour and I don't get any kind of ColdFusion error. Instead I get a "This page cannot be found" after it runs. Could this be a timeout someplace other than ColdFusion? Is there a more efficient way to build this XML file? The queries involved are:

      select PersonID, FirstName, LastName from People

      select d.DepartmentID, DepartmentName, pd.PersonID
      from Department d inner join PersonDepartment pd on d.DepartmentID = pd.DepartmentID

      select PaperID, PaperTitle, PaperDescription, pp.PersonID
      from Paper p inner join PersonPaper pp on p.PaperID = pp.PaperID

    Then, for each person, two query-of-queries look up that person's rows:

      select DepartmentID, DepartmentName from getDepartments where PersonID = #getPeople.PersonID#
      select PaperID, PaperDescription from getpapers where PersonID = #getPeople.PersonID#

    and the XML is built from:

      #getPeople.PersonID# #getPeople.Firstname# #getPeople.LastName#
      #getPersonDepartments.DepartmentID# #getPersonDepartments.DepartmentName#
      #getPersonPapers.PaperID# #getPersonPapers.PaperDescription#

    Done!
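
    One likely culprit is the per-person query-of-queries pair, which turns the build into N+1 lookups. A hedged sketch of fetching people and departments in one joined result set instead (papers would get the same treatment); table and column names follow the queries above:

      SELECT p.PersonID, p.FirstName, p.LastName,
             d.DepartmentID, d.DepartmentName
      FROM People p
      LEFT JOIN PersonDepartment pd ON pd.PersonID = p.PersonID
      LEFT JOIN Department d ON d.DepartmentID = pd.DepartmentID
      ORDER BY p.PersonID;  -- ordered so the output loop can group by person

    ColdFusion's cfoutput group attribute can then walk the grouped rows in a single pass while writing the XML.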

  • Restricting deletion with NHibernate

    - by FrontSvin
    I'm using NHibernate (Fluent) to access an old third-party database with a bunch of tables that are not related in any explicit way. That is, child tables do have parentID columns containing the primary key of the parent table, but there are no foreign key constraints enforcing these relations. Ideally I would like to add some foreign keys, but I cannot touch the database schema. My application works fine, but I would really like to impose a referential integrity rule that would prohibit deletion of parent objects if they have children, i.e. something similar to 'ON DELETE RESTRICT' but maintained by NHibernate. Any ideas on how to approach this would be appreciated. Should I look into the OnDelete() method on the IInterceptor interface, or are there other ways to solve this? Of course any solution will come with a performance penalty, but I can live with that.

  • How to save to two tables using one SQLAlchemy model

    - by Oatman
    I have an SQLAlchemy ORM class, linked to MySQL, which works great at saving the data I need down to the underlying table. However, I would like to also save the identical data to a second archive table. Here's some pseudocode to try to explain what I mean:

      my_data = Data()  # an ORM class
      my_data.name = "foo"

      # this saves just to the 'data' table
      session.add(my_data)

      # this should save it to the identical 'backup_data' table
      my_data_archive = my_data
      my_data_archive.__tablename__ = 'backup_data'
      session.add(my_data_archive)

      # and commit them both
      session.commit()

    Just a heads up, I am not interested in mapping a class to a JOIN, as in: http://www.sqlalchemy.org/docs/05/mappers.html#mapping-a-class-against-multiple-tables
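
    Setting the ORM question aside, the same mirroring can be done in the database itself with a trigger, so the application code never has to remember the archive copy. A hedged MySQL sketch, with the column list assumed:

      -- copy every new row into the archive table automatically
      CREATE TRIGGER data_archive
      AFTER INSERT ON data
      FOR EACH ROW
      INSERT INTO backup_data (id, name) VALUES (NEW.id, NEW.name);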

  • Create a trigger on Oracle Database that updates a field in one table when a field in another table is updated

    - by GigaPr
    Hi, I have two tables, Order(id, date, note) and Delivery(id, note, date). I want to create a trigger that updates the date in Delivery when the date is updated in Order. I was thinking of doing something like:

      CREATE OR REPLACE TRIGGER your_trigger_name
      BEFORE UPDATE ON Order
      DECLARE
      BEGIN
          UPDATE Delivery SET date = ??? WHERE id = ???;
      END;

    How do I get the date and row id? Thanks
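
    A sketch of how those blanks are usually filled in: making the trigger row-level with FOR EACH ROW exposes :new (and :old), which answers both questions. ORDER and DATE are reserved words in Oracle, hence the quoting; this also assumes Delivery.id holds the order's id:

      CREATE OR REPLACE TRIGGER trg_order_date
      AFTER UPDATE OF "DATE" ON "ORDER"
      FOR EACH ROW
      BEGIN
          UPDATE Delivery
          SET "DATE" = :new."DATE"
          WHERE id = :new.id;  -- assumes Delivery.id references the order
      END;
      /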

  • How to select a subset of results from a select statement

    - by Ankur
    I have a table that stores RDF triples:

      triples(triple_id, sub_id, pre_id, obj_id)

    The method (I need to write) will receive an array of numbers which correspond to pre_id values. I want to select all sub_id values that have a corresponding pre_id for all the pre_ids in the array that is passed in. E.g. if a single pre_id value were passed in, let's call it preId, I would do:

      select sub_id from triples where pre_id = preId;

    However, since I have multiple pre_id values, I want to keep iterating through them and only keep the sub_id values corresponding to the "triples" records that match. E.g. imagine there are five records:

      triples(1, 34, 65, 23)
      triples(2, 31, 35, 28)
      triples(3, 32, 32, 19)
      triples(4, 12, 65, 28)
      triples(5, 76, 32, 34)

    If I pass in an array of pre_id values [65, 32], then I want to select the first, third, fourth and fifth records. What would I do for that?
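
    The example (rows whose pre_id is any of the passed-in values) is a plain IN list; if "has all of them" is what's meant instead, grouping with a distinct count does the classic relational-division trick. Both sketches inline the ids that would really be bound from the array:

      -- any of the ids: matches records 1, 3, 4 and 5 above
      SELECT sub_id FROM triples WHERE pre_id IN (65, 32);

      -- all of the ids: only sub_ids linked to every passed-in pre_id
      SELECT sub_id
      FROM triples
      WHERE pre_id IN (65, 32)
      GROUP BY sub_id
      HAVING COUNT(DISTINCT pre_id) = 2;  -- 2 = number of ids passed in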

  • Embedded Java Databases for Large Data Sets

    - by ExAmerican
    I would like to port a PHP/MySQL-based client/server application to be a standalone desktop application written in Java. The database has grown to be fairly large, with several tables with hundreds of thousands of rows. I expect these could grow to over a million entries for certain tables. What embedded database would best handle this? HSQLDB and Sqlite seem to be the obvious choices, though I'm guessing there are others out there as well. My main priorities are the ability to perform queries on large amounts of data efficiently (this thread seems to confirm Sqlite can handle this) and the ease with which I can import old data from MySQL (I remember HSQLDB being kind of a pain for that). Note: I am aware that similar questions comparing embedded databases have been posted before (for example here and here) but as my priorities differ somewhat from most applications considering the large data migration I thought it justified a new question.
