Search Results

Search found 25022 results on 1001 pages for 'lua table'.


  • MySQL: updating a row and deleting the original in case it becomes a duplicate

    - by Silvio Donnini
    I have a simple table made up of two columns: col_A and col_B. The primary key is defined over both. I need to update some rows and assign to col_A values that may generate duplicates, for example:

        UPDATE `table` SET `col_A` = 66 WHERE `col_B` = 70

    This statement sometimes yields a duplicate key error. I don't want to simply ignore the error with UPDATE IGNORE, because then the rows that generate the error would remain unchanged. Instead, I want them to be deleted when they would conflict with another row after they have been updated. I'd like to write something like:

        UPDATE `table` SET `col_A` = 66 WHERE `col_B` = 70 ON DUPLICATE KEY REPLACE

    which unfortunately isn't legal SQL, so I need help finding another way around it. Also, I'm using PHP and could consider a hybrid solution (i.e. part query, part PHP code), but keep in mind that I have to perform this updating operation many millions of times. Thanks for your attention, Silvio. Reminder: UPDATE's syntax has problems with joins to the same table that is being updated.
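
    One way to get REPLACE-like behaviour is to delete the rows that would collide and then run the plain UPDATE inside a transaction. The sketch below is not from the original post; it reuses the table and column names and the example values from the question, and it relies on MySQL accepting a self-join in a multi-table DELETE (the subquery form is the one that is restricted):

        START TRANSACTION;

        -- Drop rows that would duplicate an already-existing (66, col_B) row after the update.
        DELETE t
        FROM `table` t
        JOIN `table` d ON d.`col_A` = 66 AND d.`col_B` = t.`col_B`
        WHERE t.`col_B` = 70 AND t.`col_A` <> 66;

        -- The update can no longer collide with a pre-existing row.
        UPDATE `table` SET `col_A` = 66 WHERE `col_B` = 70;

        COMMIT;

    If the WHERE clause can match several rows at once, they will still collide with each other after the update, so that case needs extra handling (for example, keeping only one of them before updating).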


  • Does Hibernate's GenericGenerator cause update and saveOrUpdate to always insert instead of update?

    - by Derek Mahar
    When using GenericGenerator to generate unique identifiers, do Hibernate session methods update() and saveOrUpdate() always insert instead of update table rows, even when the given object has an existing identifier (where the identifier is also the table primary key)? Is this the correct behaviour?

        public class User {
            private String id;
            private String name;

            public User(String id, String name) {
                this.id = id;
                this.name = name;
            }

            @GenericGenerator(name="generator", strategy="guid")
            @Id
            @GeneratedValue(generator="generator")
            @Column(name="USER_ID", unique=true, nullable=false)
            public String getId() { return this.id; }
            public void setId(String id) { this.id = id; }

            @Column(name="USER_NAME", nullable=false, length=20)
            public String getUserName() { return this.userName; }
            public void setUserName(String userName) { this.userName = userName; }
        }

        class UserDao extends AbstractDaoHibernate {
            public void updateUser(final User user) {
                HibernateTemplate ht = getHibernateTemplate();
                ht.saveOrUpdate(user);
            }
        }


  • Repeating fields in similar database tables

    - by user1738833
    I have been tasked with working on a database that I have never seen before, and I'm looking at the DB structure. Some of the central, most heavily queried and joined tables look like virtual duplicates of each other. Here's a massively simplified representation of the situation, with business-sensitive information changed, listing hypothetical table names and fields:

        TopLevelGroup: PK_TLGroupId, DisplaysXOnBill, DisplaysYOnBill, IsInvoicedForJ, IsInvoicedForK
        SubGroup:      PK_SubGroupId, FK_ParentTopLevelGroupId, DisplaysXOnBill, DisplaysYOnBill, IsInvoicedForJ, IsInvoicedForK
        SubSubGroup:   PK_SubSubGroupId, FK_ParentSubGroupId, DisplaysXOnBill, DisplaysYOnBill, IsInvoicedForJ, IsInvoicedForK

    I haven't listed the types of the fields, as I don't think that's particularly important to the situation. Also, rather than the four repeated fields in the example above, I'm actually looking at 86 repeated fields. For the most part, those fields genuinely do represent "facts" about the primary table entity, so it's not automatically wrong for that reason.

    In addition, the "groups" represented here have a property-inheritance relationship: if DisplaysXOnBill is NULL in the SubSubGroup, it takes the value of DisplaysXOnBill from its parent, the SubGroup, and so on up to the TopLevelGroup. Further, the requirements will never require the model to extend beyond three levels, so there is no need for flexibility in that area.

    Is there a design smell when several tables that describe very similar entities have almost identical fields? If so, what might be a better design for the example above? I'm using the phrase "design smell" to indicate a possible problem; of course, in any given situation a particular design might well be the best solution. I'm looking for a more general answer - what might be wrong with this design, and what the better design would be were that the case.

    Possibly related, but not the primary questions: Is this database schema in a reasonably normal form (e.g. to 3NF), insofar as can be told from the information I've provided? I can't see a problem with the requirements of 2NF and 3NF, except in their inheriting the requirements of 1NF. Is 1NF satisfied, though - are repeating groups allowed in different tables? Is there a best-practice method for implementing the inheritance relationship I require in a database? The method above feels clunky to me because any query on the SubSubGroup necessarily needs to join onto the SubGroup and TopLevelGroup tables to collect inherited facts, which can make even trivial queries requiring facts from the SubSubGroup table rather long-winded.

    There are, of course, political considerations to making a relatively large change like this. For the purpose of this question, I'm happy to ignore that fact in the interest of keeping the answers ring-fenced to the technical problem.
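
    A way to keep the per-level override columns but hide the three-way join is a view that resolves the inheritance with COALESCE. This is only a sketch using the hypothetical names above (the view name is invented), and with 86 fields it would normally be generated from the catalog rather than hand-written:

        CREATE VIEW SubSubGroupEffective AS
        SELECT ssg.PK_SubSubGroupId,
               COALESCE(ssg.DisplaysXOnBill, sg.DisplaysXOnBill, tlg.DisplaysXOnBill) AS DisplaysXOnBill,
               COALESCE(ssg.DisplaysYOnBill, sg.DisplaysYOnBill, tlg.DisplaysYOnBill) AS DisplaysYOnBill,
               COALESCE(ssg.IsInvoicedForJ,  sg.IsInvoicedForJ,  tlg.IsInvoicedForJ)  AS IsInvoicedForJ,
               COALESCE(ssg.IsInvoicedForK,  sg.IsInvoicedForK,  tlg.IsInvoicedForK)  AS IsInvoicedForK
        FROM SubSubGroup   ssg
        JOIN SubGroup      sg  ON sg.PK_SubGroupId = ssg.FK_ParentSubGroupId
        JOIN TopLevelGroup tlg ON tlg.PK_TLGroupId = sg.FK_ParentTopLevelGroupId;

    Queries that need effective values then read the view, while the base tables keep storing only the explicit overrides.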


  • PlaceHolderMain controlling td width of hard-coded values

    - by Linda
    In my custom .master page I have the following code:

        <asp:ContentPlaceHolder id="PlaceHolderMain" runat="server" Visible="true" />

    This prints out the main content of my page. It contains this structure:

        <table ID="OuterZoneTable" width="100%">
          <tr>...</tr>
          <tr id="OuterRow">
            <td width="80%" id="OuterLeftCell">...</td>
            <td width="180" id="OuterRightCell">...</td>
          </tr>
          ...
        </table>

    I want to control the width of #OuterLeftCell and #OuterRightCell, but it is hard-coded in the html that is returned. How would I change these values?


  • PHP: Strange behaviour while calling custom php functions

    - by baltusaj
    I am facing a strange behavior while coding in PHP with Flex. Let me explain the situation. I have two functions, let's say:

        populateTable()  // puts some data in a table made with Flex
        createXML()      // creates an xml file which is used by Fusion Charts to create a chart

    Now, if I call populateTable() alone, the table gets populated with data, but if I call it together with createXML(), the table doesn't get populated while createXML() does its work, i.e. creates an xml file. Even if I run the following code, only the xml file gets generated and the table remains empty, even though populateTable() is called before createXML(). Any idea what may be going wrong?

    MXML part:

        <mx:HTTPService id="userRequest" url="request.php" method="POST" resultFormat="e4x">
            <mx:request xmlns="">
                <getResult>send</getResult>
            </mx:request>

    and

        <mx:DataGrid id="dgUserRequest" dataProvider="{userRequest.lastResult.user}" x="28.5" y="36" width="525" height="250">
            <mx:columns>
                <mx:DataGridColumn headerText="No." dataField="no" />
                <mx:DataGridColumn headerText="Name" dataField="name"/>
                <mx:DataGridColumn headerText="Age" dataField="age"/>
            </mx:columns>

    PHP part:

        <?php
        //--------------------------------------------------------------------------
        function initialize($username, $password, $database)
        //--------------------------------------------------------------------------
        {
            # Connect to the database
            $link = mysql_connect("localhost", $username, $password);
            if (!$link) {
                die('Could not connect to the database : ' . mysql_error());
            }
            # Select the database
            $db_selected = mysql_select_db($database, $link);
            if (!$db_selected) {
                die('Could not select the DB : ' . mysql_error());
            }
            // populateTable();
            createXML();
            # Close database connection
        }

        //--------------------------------------------------------------------------
        function populateTable()
        //--------------------------------------------------------------------------
        {
            if ($_POST['getResult'] == 'send') {
                $Result = mysql_query("SELECT * FROM session");
                $Return = "<Users>";
                $no = 1;
                while ($row = mysql_fetch_object($Result)) {
                    $Return .= "<user><no>".$no."</no><name>".$row->name."</name><age>".$row->age."</age><salary>".$row->salary."</salary></session>";
                    $no = $no + 1;
                }
                $Return .= "</Users>";
                mysql_free_result($Result);
                print ($Return);
            }
        }

        //--------------------------------------------------------------------------
        function createXML()
        //--------------------------------------------------------------------------
        {
            $users = array(
                "0" => array("", 0),
                "1" => array("Obama", 0),
                "2" => array("Zardari", 0),
                "3" => array("Imran Khan", 0),
                "4" => array("Ahmadenijad", 0)
            );
            $selectedUsers = array(1, 4); // this means only obama and ahmadenijad are selected and the xml file will contain info related to them only

            // Extracting salaries of selected users
            $size = count($users);
            for ($i = 0; $i < $size; $i++) {
                // initialize temp which will calculate total throughput for each protocol separately
                $salary = 0;
                $result = mysql_query("SELECT salary FROM userInfo where name='$users[$selectedUsers[$i]][0]'");
                while ($row = mysql_fetch_array($result)) {
                    $salary = $row['salary'];
                }
                $users[$selectedUsers[$i]][1] = $salary;
            }

            // creating XML string
            $chartContent = "<chart caption=\"Users Vs Salaries\" formatNumberScale=\"0\" pieSliceDepth=\"30\" startingAngle=\"125\">";
            for ($i = 0; $i < $size; $i++) {
                $chartContent .= "<set label=\"".$users[$selectedUsers[$i]][0]."\" value=\"".$users[$selectedUsers[$i]][1]."\"/>";
            }
            $chartContent .= "<styles>" .
                "<definition>" .
                "<style type=\"font\" name=\"CaptionFont\" size=\"16\" color=\"666666\"/>" .
                "<style type=\"font\" name=\"SubCaptionFont\" bold=\"0\"/>" .
                "</definition>" .
                "<application>" .
                "<apply toObject=\"caption\" styles=\"CaptionFont\"/>" .
                "<apply toObject=\"SubCaption\" styles=\"SubCaptionFont\"/>" .
                "</application>" .
                "</styles>" .
                "</chart>";

            $file_handle = fopen('ChartData.xml', 'w');
            fwrite($file_handle, $chartContent);
            fclose($file_handle);
        }

        initialize("root", "", "hiddenpeak");
        ?>


  • Design help: How can I design extended properties for an entity with simple and complex data?

    - by mmtemporary
    I have a design question. I have an entity such as "Person". Person has properties such as FirstName, LastName, Gender, BirthDate, and so on. When creating a person in the application, the end user may need to define another property that is not defined in the database table schema (or the Person class). For example, the end user may need to define "property1", which is a string property, or "property2", which is an image, or "property3", which is a complex type. Please separate your design solution into two levels: 1) database table design, 2) class design. Thank you.
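
    A common pattern for user-defined properties is a property-definition table plus a value table next to Person. The sketch below is not from the question and all names in it are hypothetical; one nullable column per storage kind lets string, image and complex values be attached without schema changes:

        CREATE TABLE PersonPropertyDef (
            PropertyDefId INT           NOT NULL PRIMARY KEY,
            Name          NVARCHAR(100) NOT NULL,
            DataType      NVARCHAR(20)  NOT NULL   -- 'string', 'image', 'complex', ...
        );

        CREATE TABLE PersonPropertyValue (
            PersonId      INT NOT NULL,            -- FK to Person
            PropertyDefId INT NOT NULL,            -- FK to PersonPropertyDef
            StringValue   NVARCHAR(MAX)  NULL,
            BinaryValue   VARBINARY(MAX) NULL,     -- images
            XmlValue      XML            NULL,     -- complex types
            PRIMARY KEY (PersonId, PropertyDefId)
        );

    On the class side this usually maps to an ExtendedProperties collection of name/typed-value objects on Person, rather than extra members on the Person class itself.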


  • Can anyone tell me what the authors mean on this line?

    - by Anirudh Goel
    I was going through this link: FAT16 Basics to Assemble Clusters. I have read the structures involved in defining a directory entry in FAT. Now, when giving the example for a FAT16 file, it says the data cluster is 0x03 for the example file MyFile.txt, which means that if we logically compute the data cluster we will be able to reach the first node, which happens to be cluster no. 3. But what I fail to understand is what the author is trying to say in the next line, where it says "What we can see in the File Allocation Table at this moment?" How did we suddenly reach the File Allocation Table? Weren't we already there when we were going through the information for MyFile.txt? I couldn't find any reason why the author suddenly jumped to offset location 00000200 and is identifying the emptiness of the clusters. It would be great if someone could help me understand. Thanks


  • Cannot loop through Excel 2003 files in SSIS 2008

    - by Techspirit
    Hi, I am trying to execute an SSIS 2008 package on a 64-bit OS and import Excel 2003 files to SQL Server 2008. I have created an OLEDB connection to the Excel file, with a connection string that retrieves the Excel file from a variable, inside the ForEach Loop container. Run64BitRuntime is set to false. I am not able to edit the SQL command on the OLEDB source in the Data Flow task. It returns this error:

        Error 2 Validation error. Load List Staged Table: Load List Staged Table:
        SSIS Error Code DTS_E_CANNOTACQUIRECONNECTIONFROMCONNECTIONMANAGER.
        The AcquireConnection method call to the connection manager "List OLEDB to Excel"
        failed with error code 0xC0202009. There may be error messages posted before this
        with more information on why the AcquireConnection method call failed. 0 0

    Appreciate any help.


  • How to create a backup from SqlAlchemy?

    - by swilliams
    I'm writing a Pylons app, and am trying to create a simple backup system where every table is serialized and tarred up into a single file for an administrator to download, and use to restore the app should something bad happen. I can serialize my table data just fine using the SqlAlchemy serializer, and I can deserialize it fine as well, but I can't figure out how to commit those changes back to the database. In order to serialize my data I am doing this:

        from myproject.model.meta import Session
        from sqlalchemy.ext.serializer import loads, dumps

        q = Session.query(MyTable)
        serialized_data = dumps(q.all())

    In order to test things out, I go ahead and truncate MyTable, and then attempt to restore using serialized_data:

        from myproject.model import meta
        restore_q = loads(serialized_data, meta.metadata, Session)

    This doesn't seem to do anything... I've tried calling Session.commit after the fact, and individually walking through all the objects in restore_q and adding them, but nothing seems to work. What am I missing? Or is there a better way to do what I'm aiming for? I don't want to shell out and directly touch the database, since SqlAlchemy supports different database engines.


  • Need alternative field names for these reserved words

    - by MattSlay
    “type” and “class” are likely reserved or problematic words in C# and/or Ruby, two languages I may use to program against my new database schema in the future. So, in order to avoid potential conflicts with those languages, I’m looking for alternative names for these field names in my tables. In this case, it is in my Machines table, where I have a “class” field (values would be something like “manual” or “computerized”) and a “type” field (values would be “lathe” or “mill”). I could call the fields “machineclass” and “machinetype”, but that is inconsistent with the naming scheme in the rest of my schema (meaning, I do not re-use the table name in the field; for instance, I use Machine.name, not Machine.machinename). Any thoughts on this madness?


  • Rails. Putting update logic in your migrations

    - by Daniel Abrahamsson
    A couple of times I've been in the situation where I've wanted to refactor the design of some model and have ended up putting update logic in migrations. However, as far as I've understood, this is not good practice (especially since you are encouraged to use your schema file for deployment, and not your migrations). How do you deal with these kinds of problems?

    To clarify what I mean, say I have a User model. Since I thought there would only be two kinds of users, namely a "normal" user and an administrator, I chose to use a simple boolean field telling whether the user was an administrator or not. However, after a while I figured I needed some third kind of user, perhaps a moderator or something similar. In this case I add a UserType model (and the corresponding migration), and a second migration for removing the "admin" flag from the users table. And here comes the problem: in the "add_user_type_to_users" migration I have to map the admin flag value to a user type. Additionally, in order to do this, the user types have to exist, meaning I can not use the seeds file, but rather have to create the user types in the migration (also considered bad practice). Here comes some fictional code representing the situation:

        class CreateUserTypes < ActiveRecord::Migration
          def self.up
            create_table :user_types do |t|
              t.string :name, :nil => false, :unique => true
            end
            # Create basic types (can not put in seed, because of future migration dependency)
            UserType.create!(:name => "BASIC")
            UserType.create!(:name => "MODERATOR")
            UserType.create!(:name => "ADMINISTRATOR")
          end

          def self.down
            drop_table :user_types
          end
        end

        class AddTypeIdToUsers < ActiveRecord::Migration
          def self.up
            add_column :users, :type_id, :integer
            # Determine type via the admin flag
            basic = UserType.find_by_name("BASIC")
            admin = UserType.find_by_name("ADMINISTRATOR")
            User.all.each { |u| u.update_attribute(:type_id, (u.admin?) ? admin.id : basic.id) }
            # Remove the admin flag
            remove_column :users, :admin
            # Add foreign key
            execute "alter table users add constraint fk_user_type_id foreign key (type_id) references user_types (id)"
          end

          def self.down
            # Re-add the admin flag
            add_column :users, :admin, :boolean, :default => false
            # Reset the admin flag (this is the problematic update code)
            admin = UserType.find_by_name("ADMINISTRATOR")
            execute "update users set admin=true where type_id=#{admin.id}"
            # Remove foreign key constraint
            execute "alter table users drop foreign key fk_user_type_id"
            # Drop the type_id column
            remove_column :users, :type_id
          end
        end

    As you can see, there are two problematic parts: first, the row-creation part in the first migration, which is necessary if I would like to run all migrations in a row; then the "update" part in the second migration that maps the "admin" column to the "type_id" column. Any advice?


  • How do I save user specific data in an asp.net site?

    - by Greg McNulty
    I just set up user profiles using ASP.NET 3.5 using wvd. For each user I would like to store data that they will be updating every day. For example, every time they go for a run they will update time and distance. I intend to allow them to also look up their history of distance and time from any past date. My question is, what does the database schema usually look like for such a setup? Currently ASP.NET set up a DB for me when I made user profiles. Do I just add an extra table for every user? Should there be one big table with all users' data? How do I relate a user ID to their specific data? Etc. I have never done this before, so any ideas on how this is usually done would be very helpful. Thank you.
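
    For what it's worth, the usual shape is one extra table with a row per logged activity, keyed by the membership user id, rather than a table per user. This is only a sketch; the table and column names are made up, and the parameter names in the query are placeholders:

        CREATE TABLE RunLog (
            RunLogId    INT IDENTITY(1,1) PRIMARY KEY,
            UserId      UNIQUEIDENTIFIER NOT NULL,  -- the membership/profile user id
            RunDate     DATE             NOT NULL,
            DistanceKm  DECIMAL(6,2)     NOT NULL,
            DurationSec INT              NOT NULL
        );

        -- History for one user over any past date range:
        SELECT RunDate, DistanceKm, DurationSec
        FROM RunLog
        WHERE UserId = @UserId AND RunDate BETWEEN @From AND @To
        ORDER BY RunDate;

    One table then serves every user, and the per-user history is just a filtered, ordered query.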


  • Faster jquery selector for finding a number of TD elements

    - by Bernard Chen
    I have a table where each row has 13 TD elements. I want to show and hide the first 10 of them when I toggle a link. These 10 TD elements all have IDs with the prefix "foo" and a two-digit number for their position (e.g., "foo01"). What's the fastest way to select them across the entire table?

        $("td:nth-child(-n+10)")

    or

        $("td[id^=foo]")

    or is it worth concatenating all of the IDs?

        $("#foo01, #foo02, #foo03, #foo04, #foo05, #foo06, #foo07, #foo08, #foo09, #foo10")

    Is there another approach I should be considering as well?


  • CKEditor adds html entities to inline CSS. Is the CSS still valid?

    - by Mihai Secasiu
    I have this piece of code:

        <table style="background-image: url(path/to_image.png)">

    And when I load it in CKEditor it's transformed into:

        <table style="background-image: url(&quot;path/to_image.png&quot;)">

    Is this still valid CSS? Actually, I'm not so interested in whether it's valid, but in whether there would be any problems with any web browser or email client (the editor is used for composing an HTML email). Firefox and Thunderbird seem to be fine with it.


  • ado.net slow updating large tables

    - by brett
    The problem: 100,000+ name and address records in an Access table (2003). I need to iterate through the table and update details with the output from a third-party DLL. I currently use ADO, and it works at an acceptable speed (less than 5 minutes on a network share). We will soon need to update to Access 2007 and its 'non-Jet' accdb format to maintain compatibility with clients. I've tried using ADO.NET datasets, but updating the records takes hours! We process 5-10 of these tables per day, so this cannot be a solution. Any ideas on the fastest way to update individual records using ADO.NET? Surely we didn't take such a huge backward step with ADO.NET? Any help would be appreciated.


  • Delphi, Csv,Import Firebird

    - by Vijesh V.Nair
    Let me explain my problem. I have one ID and two other fields in a CSV file. The ID is connected to a database table. I have to show the corresponding entries in the DB along with the fields from the CSV, and I need to sort the fields too. My idea was to load the CSV into a ClientDataSet, look it up against a query on the table, then sort and show. My CSV has 85K records and it's taking 120 seconds to load and sort, which is not acceptable. Can you tell me whether I can use BatchMove for this, so I can easily pick fields with a simple query? If I can use BatchMove, please give me the guidelines. Also, is there any other technique for this? Thanks and regards, Vijesh V. Nair


  • Future proof Primary Key design in postgresql

    - by John P
    I've always used either auto-generated keys or sequences in the past for my primary keys. With the current system I'm working on there is the possibility of having to eventually partition the data, which has never been a requirement in the past. Knowing that I may need to partition the data in the future, is there any advantage to using UUIDs for PKs instead of the database's built-in sequences? If so, is there a design pattern that can safely generate relatively short keys (say 6 characters instead of the usual long form, e.g. e6709870-5cbc-11df-a08a-0800200c9a66)? 36^6 keys per table is more than sufficient for any table I could imagine. I will be using the keys in URLs, so conciseness is important.
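
    If short keys are worth experimenting with, here is a PostgreSQL sketch (not a recommendation) that uses only built-in functions, so the key is base-16 rather than base-36, and the application still has to retry on the rare unique-constraint collision; the table and column names are invented:

        CREATE TABLE items (
            id         char(6) PRIMARY KEY DEFAULT substr(md5(random()::text), 1, 6),
            created_at timestamptz NOT NULL DEFAULT now()
        );

    Note that 16^6 is only about 16.7 million values, so anywhere near that row count a longer key (or a full UUID plus a separate short slug just for URLs) is the safer default.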


  • Dynamic data-entry value store

    - by simendsjo
    I'm creating a data-entry application where users are allowed to create the entry schema. My first version just created a single table per entry schema, with each entry spanning a single column or multiple columns (for complex types) of the appropriate data type. This allowed for "fast" querying (on small datasets, as I didn't index all columns) and simple synchronization where the data entry was distributed over several databases. I'm not quite happy with this solution though; the only positive thing is the simplicity. I can only store a fixed number of columns, I need to create indexes on all columns, and I need to recreate the table on schema changes.

    Some of my key design criteria are:

    - Very fast querying (using a simple domain-specific query language)
    - Writes don't have to be fast
    - Many concurrent users
    - Schemas will change often
    - Schemas might contain many thousands of columns
    - The data entries might be distributed and need synchronization
    - Preferably MySQL and SQLite; databases like DB2 and Oracle are out of the question
    - Using .Net/Mono

    I've been thinking of a couple of possible designs, but none of them seems like a good choice. Solution 1: a union-like table containing a Type column and one nullable column per type. This avoids joins, but will definitely use a lot of space. Solution 2: a key/value store. All values are stored as strings and converted when needed. This also uses a lot of space, and of course I hate having to convert everything to strings. Solution 3: use an XML database or store values as XML. Without any experience I would think this is quite slow (at least for the relational model, unless there is some very good XPath support). I would also like to avoid an XML database, as other parts of the application fit better as a relational model, and being able to join the data is helpful.

    I cannot help but think that someone has solved (some of) this already, but I'm unable to find anything - and I'm not quite sure what to search for either. I know market research does something like this for their questionnaires, but there are few open-source implementations, and the ones I've found don't quite fit the bill. PSPP has much of the logic I'm thinking of: primitive column types, many columns, many rows, fast querying and merging. Too bad it doesn't work against a database. And of course, I don't need 99% of the provided functionality, but there's a lot of stuff it doesn't include. I'm not sure this is the right place to ask such a design-related question, but I hope someone here has some tips, knows of any existing work, or can point me to a better place to ask. Thanks in advance!
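
    As a point of comparison (not from the original post), Solution 1 does not have to mean one physical table per schema: a single "union-like" value table can keep the typed columns while sharing a small set of indexes across every schema. The sketch below uses invented names and an arbitrary choice of primitive types:

        CREATE TABLE entry_value (
            entry_id   INT           NOT NULL,
            field_id   INT           NOT NULL,   -- FK to the user-defined schema field
            value_type TINYINT       NOT NULL,   -- 1=int, 2=decimal, 3=text, 4=datetime
            int_val    BIGINT            NULL,
            dec_val    DECIMAL(18,6)     NULL,
            text_val   VARCHAR(255)      NULL,
            date_val   DATETIME          NULL,
            PRIMARY KEY (entry_id, field_id)
        );

        CREATE INDEX ix_value_int  ON entry_value (field_id, int_val);
        CREATE INDEX ix_value_text ON entry_value (field_id, text_val);

    Schema changes then only touch the field-definition rows, and the query-language layer rewrites each condition into a self-join on entry_value filtered by field_id.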


  • Grouping with operands question

    - by Filip
    I have a table:

        mysql> desc kursy_bid;
        +-----------+-------------+------+-----+---------+-------+
        | Field     | Type        | Null | Key | Default | Extra |
        +-----------+-------------+------+-----+---------+-------+
        | datetime  | datetime    | NO   | PRI | NULL    |       |
        | currency  | varchar(6)  | NO   | PRI | NULL    |       |
        | value     | varchar(10) | YES  |     | NULL    |       |
        +-----------+-------------+------+-----+---------+-------+
        3 rows in set (0.01 sec)

    I would like to select some rows from the table, grouped by some time interval (can be one day), where I will have the first row and the last row of the group, the max(value) and min(value). I tried:

        select datetime,
               (select value order by datetime asc limit 1) open,
               (select value order by datetime desc limit 1) close,
               max(value), min(value)
        from kursy_bid_test
        where datetime > '2009-09-14 00:00:00' and currency = 'eurpln'
        group by month(datetime), day(datetime), hour(datetime);

    but the output is:

        | open   | close  | datetime            | max(value) | min(value) |
        +--------+--------+---------------------+------------+------------+
        | 1.4581 | 1.4581 | 2009-09-14 00:00:05 | 4.1712     | 1.4581     |
        | 1.4581 | 1.4581 | 2009-09-14 01:00:01 | 1.4581     | 1.4581     |

    As you see, open and close are the same (but they shouldn't be). What should the query be to do what I want?
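
    One common MySQL idiom for first/last within a group is GROUP_CONCAT with ORDER BY plus SUBSTRING_INDEX. The sketch below is written against the kursy_bid table shown above (not the asker's final query); note also that value is varchar(10), so MAX/MIN compare as strings unless cast:

        SELECT MIN(datetime) AS bucket_start,
               SUBSTRING_INDEX(GROUP_CONCAT(value ORDER BY datetime ASC),  ',', 1) AS open,
               SUBSTRING_INDEX(GROUP_CONCAT(value ORDER BY datetime DESC), ',', 1) AS close,
               MAX(CAST(value AS DECIMAL(10,4))) AS high,
               MIN(CAST(value AS DECIMAL(10,4))) AS low
        FROM kursy_bid
        WHERE datetime > '2009-09-14 00:00:00' AND currency = 'eurpln'
        GROUP BY month(datetime), day(datetime), hour(datetime);

    GROUP_CONCAT honours its own ORDER BY, so the first element of the ascending list is the opening value and the first element of the descending list is the closing value for each hourly bucket.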


  • LINQ DataLoadOptions - Retrieval of data via Fulltext/Broken Foreign Key Relationship

    - by Alex
    I've hit a brick wall. I have an SQL function:

        FUNCTION [dbo].[ContactsFTS] (@searchtext nvarchar(4000))
        RETURNS TABLE
        AS RETURN
            SELECT *
            FROM Contacts
            INNER JOIN CONTAINSTABLE(Contacts, *, @searchtext) AS KEY_TBL
                ON Contacts.Id = KEY_TBL.[KEY]

    which I am calling via LINQ:

        public IQueryable<ContactsFTSResult> SearchByFullText(String searchText)
        {
            return db.ContactsFTS(searchText);
        }

    I am projecting the ContactsFTSResult collection into a List<Contact>, which is then given to my viewmodel. Here is the problem: my Contacts table (and therefore the Contact object created via LINQ to SQL) has multiple FK relationships to other information; for example, Contact.BillingAddressId is an FK to Address.Id. That information is missing after I do the full-text search (e.g. if I try to access Contact.BillingAddress it is null). Can I add this information somehow via DataLoadOptions? I tried LoadWith<Contact>(c => c.BillingAddress), but this doesn't work, I assume because I'm calling the function instead of doing the whole query via LINQ.


  • Problems using 2 simplemodal osx calls

    - by Marcos
    I'm developing a webpage that contains two <p> tags (program and events) that show two different contents. When I click the program button (it's a <p> tag), it's OK: it shows the table in the divs. But when I click the events button, it again shows the program <p> tag contents, and I want it to show the events table content. I've read the solution for this question on this site, but it doesn't work; when I change my JS file, the content of both <p> tags disappears. Can somebody help me?


  • Returning partial address matches and mismatch position using L2S or SQL

    - by peter3
    I need to implement a method that takes an address split up into individual parts and returns any matching items from an address table. If no matches are found, I want to be able to return a value indicating where it failed. Each input parameter has a corresponding field in the table. The signature would look something like this:

        List<Address> MatchAddress(string zipCode, string streetName, string houseNumber,
                                   string houseLetter, string floor, string appartmentNo,
                                   out int mismatchPosition)
        {
            // return matching addresses
            // if none found, return the position where it stopped matching
            // zipCode is position 0, appartmentNo is position 5
            //
            // an empty param value indicates "don't check"
        }

    I know I can construct the method such that I start with all the parameters, execute the query, and then remove parameter by parameter (from the right side) until either a match is found or I run out of parameters, but can I construct a query that is more effective than that, i.e. one that minimizes the number of calls to the db, maybe even a single call?
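
    One way to get both answers in a single round trip is to score every candidate row by how many leading parts match, treating an empty parameter as "don't check". This is only a sketch: the Addresses table and its column names are assumed to mirror the method parameters, and only the zip code is used to narrow the candidate set:

        SELECT a.*,
               CASE WHEN @zipCode      <> '' AND a.ZipCode      <> @zipCode      THEN 0
                    WHEN @streetName   <> '' AND a.StreetName   <> @streetName   THEN 1
                    WHEN @houseNumber  <> '' AND a.HouseNumber  <> @houseNumber  THEN 2
                    WHEN @houseLetter  <> '' AND a.HouseLetter  <> @houseLetter  THEN 3
                    WHEN @floor        <> '' AND a.Floor        <> @floor        THEN 4
                    WHEN @appartmentNo <> '' AND a.AppartmentNo <> @appartmentNo THEN 5
                    ELSE 6
               END AS MatchDepth
        FROM Addresses a
        WHERE a.ZipCode = @zipCode OR @zipCode = '';

    Rows with MatchDepth = 6 go into the returned list; if there are none, MAX(MatchDepth) over the candidates can serve as mismatchPosition.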


  • can't insert xml dml expression as a string

    - by 81967
    Here is the code that explains the problem. I create a table with an xml column, declare a variable, initialize it, and insert the value into the xml column:

        create table CustomerInfo
        (XmlConfigInfo xml)

        declare @StrTemp nvarchar(2000)
        set @StrTemp = '<Test></Test>'
        insert into [CustomerInfo](XmlConfigInfo) values (@StrTemp)

    Then comes the part of the question. If I write this:

        update [CustomerInfo]
        set XmlConfigInfo.modify('insert <Info></Info> into (//Test)[1]')
        -- Works fine!

    it works fine, but when I try this:

        set @StrTemp = 'insert <Info></Info> into (//Test)[1]'
        update [CustomerInfo]
        set XmlConfigInfo.modify(@StrTemp)
        -- Doesn't work!

    it throws an error:

        The argument 1 of the xml data type method "modify" must be a string literal.

    Is there a way around this one? I tried this, but it is not working. :(
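
    The usual workaround (a sketch, reusing the names from the question) is to accept that modify() wants a literal and build the whole UPDATE as dynamic SQL, doubling the single quotes inside the expression before embedding it:

        declare @StrTemp nvarchar(2000)
        declare @Sql nvarchar(max)

        set @StrTemp = 'insert <Info></Info> into (//Test)[1]'
        set @Sql = N'update [CustomerInfo] set XmlConfigInfo.modify(N''' +
                   replace(@StrTemp, '''', '''''') + N''')'

        exec sp_executesql @Sql

    Only values can be passed into the expression with sql:variable(); the XML DML expression string itself has to be a literal, hence the dynamic SQL.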


  • Need help with auto-scaffolding template in ASP.NET MVC

    - by DanM
    I'm trying to write an auto-scaffolder for Index views. I'd like to be able to pass in a collection of models or view-models (e.g., IQueryable<MyViewModel>) and get back an HTML table that uses the DisplayName attribute for the headings (th elements) and Html.Display(propertyName) for the cells (td elements). Each row should correspond to one item in the collection. Here's what I have so far:

        <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl" %>
        <%
            var items = (IQueryable<TestProj.ViewModels.TestViewModel>)Model; // Should be generic!
            var properties = items.First().GetMetadata().Properties
                .Where(pm => pm.ShowForDisplay && !ViewData.TemplateInfo.Visited(pm));
        %>
        <table>
            <tr>
            <% foreach(var property in properties) { %>
                <th>
                    <%= property.DisplayName %>
                </th>
            <% } %>
            </tr>
            <% foreach(var item in items) { %>
            <tr>
                <% foreach(var property in properties) { %>
                <td>
                    <%= Html.Display(property.DisplayName) %> // This doesn't work!
                </td>
                <% } %>
            </tr>
            <% } %>
        </table>

    Two problems with this:

    1. I'd like it to be generic. So, I'd like to replace var items = (IQueryable<TestProj.ViewModels.TestViewModel>)Model; with var items = (IQueryable<T>)Model; or something to that effect.
    2. The <td> elements are not working, because the Html in <%= Html.Display(property.DisplayName) %> contains the model for the view, which is a collection of items, not the item itself. Somehow, I need to obtain an HtmlHelper object whose Model property is the current item, but I'm not sure how to do that.

    How do I solve these two problems?

