Search Results

Search found 35571 results on 1423 pages for 'sql developer'.

  • SQL: find entries in 1:n relation that don't comply with condition spanning multiple rows

    - by milianw
    I'm trying to optimize SQL queries in Akonadi and came across the following problem that is apparently not easy to solve with SQL, at least for me. Assume the following table structure (should work in SQLite, PostgreSQL and MySQL):

        CREATE TABLE a (
          a_id INT PRIMARY KEY
        );
        INSERT INTO a (a_id) VALUES (1), (2), (3), (4);

        CREATE TABLE b (
          b_id INT PRIMARY KEY,
          a_id INT,
          name VARCHAR(255) NOT NULL
        );
        INSERT INTO b (b_id, a_id, name) VALUES
          (1, 1, 'foo'), (2, 1, 'bar'), (3, 1, 'asdf'),
          (4, 2, 'foo'), (5, 2, 'bar'),
          (6, 3, 'foo');

    Now my problem is to find entries in a that are missing name entries in table b. E.g. I need to make sure each entry in a has at least the name entries "foo" and "bar" in table b. Hence the query should return something similar to:

        a_id = 3 is missing name "bar"
        a_id = 4 is missing names "foo" and "bar"

    Since both tables are potentially huge in Akonadi, performance is of utmost importance. One solution in MySQL would be:

        SELECT a.a_id,
               CONCAT('|', GROUP_CONCAT(name ORDER BY name ASC SEPARATOR '|'), '|') AS names
        FROM a
        LEFT JOIN b USING (a_id)
        GROUP BY a.a_id
        HAVING names IS NULL OR names NOT LIKE '%|bar|foo|%';

    I have yet to measure the performance tomorrow, but I severely doubt it's at all fast for tens of thousands of entries in a and three times as many in b. Furthermore, we want to support SQLite and PostgreSQL, where to my knowledge the GROUP_CONCAT function is not available. Thanks, good night.
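    A portable alternative (an editor's sketch, not from the original post; it uses the tables above and assumes 'foo' and 'bar' are the required names): build the required names inline and anti-join them against b, so each unmatched pair reports one missing name. This avoids GROUP_CONCAT and runs on SQLite, PostgreSQL and MySQL:

        -- list every (a_id, required name) pair that has no matching row in b
        SELECT a.a_id, req.name AS missing_name
        FROM a
        CROSS JOIN (SELECT 'foo' AS name UNION ALL SELECT 'bar') AS req
        LEFT JOIN b ON b.a_id = a.a_id AND b.name = req.name
        WHERE b.b_id IS NULL;

    With an index on b (a_id, name), the anti-join can be resolved per pair without scanning b.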

    Read the article

  • SQL aggregate query question

    - by Phil
    Hi, can anyone help me with a SQL query in Apache Derby SQL to get a "simple" count? Given a table ABC that looks like this:

        id  a  b  c
        1   1  1  1
        2   1  1  2
        3   2  1  3
        4   2  1  1
        5   2  1  2  **
        6   2  2  1  **
        7   3  1  2
        8   3  1  3
        9   3  1  1

    How can I write a query to get a count of how many distinct values of 'a' have both (b=1 and c=2) AND (b=2 and c=1), to get the correct result of 1? (The two rows marked match the criteria and both have a value of a=2; there is only 1 distinct value of a in this table that matches the criteria.) The tricky bit is that (b=1 and c=2) and (b=2 and c=1) are obviously mutually exclusive when applied to a single row... so how do I apply that expression across multiple rows of distinct values for a? These queries are wrong, but they illustrate what I'm trying to do:

        "SELECT DISTINCT COUNT(a) WHERE b=1 AND c=2 AND b=2 AND c=1 ..." -- (0) no go, as mutually exclusive
        "SELECT DISTINCT COUNT(a) WHERE b=1 AND c=2 OR b=2 AND c=1 ..."  -- (3) gets me the wrong result
        "SELECT COUNT(a) (CASE WHEN b=1 AND c=10 THEN 1 END) FROM ABC WHERE b=2 AND c=1" -- (0) no go, as mutually exclusive

    Cheers, Phil.
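    One way to express this in Derby (an editor's sketch, not part of the original question; it assumes the ABC table above): intersect the sets of a values satisfying each condition, then count the survivors. Derby supports INTERSECT:

        SELECT COUNT(*) AS matching_a
        FROM (
          SELECT a FROM ABC WHERE b = 1 AND c = 2
          INTERSECT
          SELECT a FROM ABC WHERE b = 2 AND c = 1
        ) AS t

    On the sample data the inner sets are {1,2,3} and {2}, so the intersection is {2} and the count is 1.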

    Read the article

  • How can I test my T-SQL syntax?

    - by acidzombie24
    Quick question: how do I get some kind of database to use to test my SQL syntax and create basic data? I have SQLite code which I'll soon put on a server. I have SQL Server 2008 installed with Visual Studio 2010. I tried connecting to the database and had no luck. I also tried using a .mdf file instead, thinking it's a file and I won't have connectivity issues. Wrong, I still couldn't connect, and I used this site to help me (I'm aware it's 2005). In that case I used:

        var conn = new SqlConnection(@"Server=.\SQLExpress;AttachDbFilename=C:\dev\src\test\SQL_DB_VS_Test\test.mdf;Database=dbo;Trusted_Connection=Yes;");

    Exception:

        Unable to open the physical file "C:\dev\src\test\SQL_DB_VS_Test\test.mdf". Operating system error 5: "5(Access is denied.)". Cannot attach the file 'C:\dev\src\test\SQL_DB_VS_Test\test.mdf' as database 'dbo'.

    With trusted = no I get "Login failed for user ''." (What user am I supposed to set...?) I created the .mdf with Visual Studio somehow.
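    A couple of things worth checking (editor's notes, not from the original question): "Database=dbo" asks SQL Server to attach the file as a database literally named dbo, which is a schema name rather than a database name, and OS error 5 usually means the SQL Server Express service account lacks NTFS rights on the folder holding the .mdf. For SQL Server Express, an AttachDbFilename connection string is typically shaped like this (path kept from the question; adjust to taste):

        Server=.\SQLEXPRESS;AttachDbFilename=C:\dev\src\test\SQL_DB_VS_Test\test.mdf;Integrated Security=True;User Instance=True;

    With Integrated Security=True there is no user to set; the connection runs as the logged-on Windows account.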

    Read the article

  • How to add "missing" columns in a column group in reporting services?

    - by Gimly
    Hello, I'm trying to create a report that displays, for each month of the year, the quantity of goods sold. I have a query that returns the list of goods sold for each month; it looks something like this:

        SELECT Seller.FirstName, Seller.LastName, SellingHistory.Month, SUM(SellingHistory.QuantitySold)
        FROM SellingHistory
        JOIN Seller ON SellingHistory.SellerId = Seller.SellerId
        WHERE SellingHistory.Year = @Year
        GROUP BY Seller.FirstName, Seller.LastName, SellingHistory.Month

    What I want to do is display a report that has a column for each month plus a total column that will display, for each Seller, the quantity sold in the selected month:

        Seller Name | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec | Total

    What I managed to do, using a matrix and a column group (grouped on Month), is display the columns for existing data: if I have data from January to March, it displays the first 3 columns and the total. What I would like is to always display all the columns. I thought about doing that by adding the missing months in the SQL request, but I find that a bit weird and I'm sure there must be some cleaner solution, as this is something that must be quite frequent. Thanks. PS: I'm using SQL Server Express 2008
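    One way to pad the result out to all 12 months on the SQL side (an editor's sketch of the approach the poster mentions, reusing the question's table and column names) is to generate the month numbers inline and LEFT JOIN the history onto them, so the column group always sees 12 values:

        SELECT s.FirstName, s.LastName, m.MonthNo,
               COALESCE(SUM(sh.QuantitySold), 0) AS QuantitySold
        FROM (SELECT 1 AS MonthNo UNION ALL SELECT 2 UNION ALL SELECT 3
              UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6
              UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9
              UNION ALL SELECT 10 UNION ALL SELECT 11 UNION ALL SELECT 12) AS m
        CROSS JOIN Seller s
        LEFT JOIN SellingHistory sh
               ON sh.SellerId = s.SellerId
              AND sh.Month = m.MonthNo
              AND sh.Year = @Year
        GROUP BY s.FirstName, s.LastName, m.MonthNo

    The matrix column group can then bind to MonthNo and will always have all 12 columns, with zeros for empty months.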

    Read the article

  • SQL Server 2005: Internal Query Processor Error

    - by Geetha
    I am trying to execute the following procedure in SQL Server 2005. I was able to execute it on my development server, but when I tried to use it on the live server I get the error "Internal Query Processor Error: The query processor could not produce a query plan. For more information, contact Customer Support Services". I am using the same database and the same format. When we searched the web, it showed some fixes to be applied to SQL Server 2005 to avoid this error, but my DBA has confirmed that all the patches are up to date on our server. Can anyone give me some clue on this? Query:

        create Procedure [dbo].[sample_Select]
            @ID as varchar(40)
        as
        Declare @Execstring as varchar(1000)

        set @Execstring = ' Declare @MID as varchar(40) Set @MID = '''+@ID+'''
            select * from (
                select t1.field1, t1.field2 AS field2, t1.field3 AS field3,
                       L.field1 AS field1, L.field2 AS field2
                from table1 AS t1
                INNER JOIN MasterTable AS L ON L.field1 = t1.field2
                where t1.field2 LIKE @MID
            ) as DataTable
            PIVOT ( Count(field2) FOR field3 IN ('

        Select @Execstring = @Execstring + L.field2 + ','
        FROM MasterTable AS L
        inner join table1 AS t1 ON t1.field1 = L.field2
        Where t1.field2 LIKE @ID

        set @Execstring = stuff(@Execstring, len(@Execstring), 1, '')
        set @Execstring = @Execstring + ')) as pivotTable'
        exec (@Execstring)

    Read the article

  • SQL query selecting one name no matter how many rows it is mentioned in

    - by Baruch
    Basically, what I'm trying to do is get the information from column x no matter how many times it is mentioned. That means that if I have this kind of table:

        x     | y     | z
        ------+-------+--------
        hello | one   | bye
        hello | two   | goodbye
        hi    | three | see you

    what I'm trying to do is create a query that gets all of the names that are mentioned in the x column, without duplicates, and puts them into a select list. My goal is a select list with TWO, not THREE, options: hello and hi. This is what I have so far, which isn't working; hope you guys know the answer:

        function getList(){
            $options="<select id='names' style='margin-right:40px;'>";
            $c_id = $_SESSION['id'];
            $sql="SELECT * FROM names";
            $result=mysql_query($sql);
            $options.="<option value='blank'>-- Select something --</option>";
            while ($row=mysql_fetch_array($result)) {
                $name=$row["x"];
                $options.="<option value='$name'>$name</option>";
            }
            $options.= "</SELECT>";
            return "$options";
        }

    Sorry for the confusion... I edited my source.
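    The SQL side of the fix is just a DISTINCT (an editor's note; table and column names come from the post):

        SELECT DISTINCT x FROM names ORDER BY x;

    Looping over that result instead of SELECT * yields one <option> per unique name, i.e. hello and hi.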

    Read the article

  • Beginner having difficulty with SQL query

    - by Vulcanizer
    Hi, I've been studying SQL for 2 weeks now and I'm preparing for an SQL test. Anyway, I'm trying to do this question. For the table:

        create table data (
          id int,
          n1 int not null,
          n2 int not null,
          n3 int not null,
          n4 int not null,
          primary key (id)
        )

    I need to return the relation with tuples (n1, n2, n3) where all the corresponding values for n4 are 0. The problem asks me to solve it WITHOUT using subqueries (nested selects/views). It also gives me an example table and the expected output from my query:

        insert into data (id, n1, n2, n3, n4)
        values (1, 2,4,7,0),
               (2, 2,4,7,0),
               (3, 3,6,9,8),
               (4, 1,1,2,1),
               (5, 1,1,2,0),
               (6, 1,1,2,0),
               (7, 5,3,8,0),
               (8, 5,3,8,0),
               (9, 5,3,8,0);

    It expects (2,4,7) and (5,3,8), and not (1,1,2), since that has a 1 in n4 in one of the cases. The best I could come up with was:

        SELECT DISTINCT n1, n2, n3
        FROM data a, data b
        WHERE a.ID <> b.ID
          AND a.n1 = b.n1
          AND a.n2 = b.n2
          AND a.n3 = b.n3
          AND a.n4 = b.n4
          AND a.n4 = 0

    but I found out that it also prints (1,1,2), since in the example (1,1,2,0) happens twice, for IDs 5 and 6. Any suggestions would be really appreciated.
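    For reference, a subquery-free way to express "all n4 in the group are 0" (an editor's sketch, not part of the original question) is to group by the tuple and constrain the group's extremes:

        SELECT n1, n2, n3
        FROM data
        GROUP BY n1, n2, n3
        HAVING MIN(n4) = 0 AND MAX(n4) = 0  -- every n4 in the group is 0

    On the sample data this returns (2,4,7) and (5,3,8) and excludes (1,1,2), whose group contains an n4 of 1.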

    Read the article

  • SQL Server 2005 FREETEXT() Performance Issue

    - by Zenon
    I have a query with about 6-7 joined tables and a FREETEXT() predicate on 6 columns of the base table in the WHERE clause. Now, this query worked fine (in under 2 seconds) for the last year and practically remained unchanged (I tried old versions and the problem persists). Then today, all of a sudden, the same query takes around 1-1.5 minutes. After checking the execution plan in SQL Server 2005, rebuilding the FULLTEXT index of that table, reorganising the FULLTEXT index, creating the index from scratch, restarting the SQL Server service and restarting the whole server, I don't know what else to try. I temporarily switched the query to use LIKE instead until I figure this out (which takes about 6 seconds now). When I look at the query in the query performance analyser and compare the FREETEXT query with the LIKE query, the former has 350 times as many reads (4,921,261 vs. 13,943) and 20 times the CPU usage (38,937 vs. 1,938) of the latter. So it really is the FREETEXT predicate that causes it to be so slow. Has anyone got any ideas on what the reason might be? Or further tests I could do?
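    One workaround sometimes used for slow full-text joins (an editor's sketch, not from the original post; table and column names are hypothetical) is to isolate the full-text search with FREETEXTTABLE, materialize the matching keys, and only then join the other tables, so the optimizer cannot pick a plan that mixes the full-text rowset into the big join:

        -- stage the full-text matches first
        SELECT ft.[KEY] AS BaseID
        INTO #matches
        FROM FREETEXTTABLE(BaseTable, (Col1, Col2, Col3, Col4, Col5, Col6), @searchText) AS ft;

        -- then run the original joins against the small key set
        SELECT b.*
        FROM #matches m
        JOIN BaseTable b ON b.BaseID = m.BaseID;
        -- ... remaining joins as before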

    Read the article

  • Stop SQL returning the same result twice in a JOIN

    - by nbs189
    I have joined together several tables to get the data I want, but since I am new to SQL I can't figure out how to stop data being returned more than once. Here's the SQL statement:

        SELECT T.url, T.ID, S.status, S.ID, E.action, E.ID, E.timestamp
        FROM tracks T, status S, events E
        WHERE S.ID AND T.ID = E.ID
        ORDER BY E.timestamp DESC

    The data that is returned is something like this:

        +------+----+--------+----+----------------------+----+-----------+
        | URL  | ID | Status | ID | action               | ID | timestamp |
        +------+----+--------+----+----------------------+----+-----------+
        | T.1  | 4  | hello  | 4  | has uploaded a track | 4  | time      |
        | T.2  | 3  | bye    | 3  | has some news        | 3  | time      |
        | t.1  | 4  | more   | 4  | has some news        | 4  | time      |
        +------+----+--------+----+----------------------+----+-----------+

    That's a very basic example, but it does outline what happens. If you look at the third row, the URL is repeated when there is a different status. This is what I want to happen:

        +---------------+----+----------------------+-----------+
        | URL or Status | ID | action               | timestamp |
        +---------------+----+----------------------+-----------+
        | T.1           | 4  | has uploaded a track | time      |
        | hello         | 3  | has some news        | time      |
        | bye           | 4  | has some news        | time      |
        +---------------+----+----------------------+-----------+

    Please notice that the url (in this case the mock one is T.1) is shown when the action is "has uploaded a track". This is very important. The action in the events table is inserted by a trigger on a status or track insert. If a new track is inserted, the action is 'has uploaded a track', and you can guess what it is for a status. The ID and timestamp are also inserted into the events table at this point. Note: there are 3 more tables that go into the query, but I have left them out for simplicity.
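    A sketch of one way to get the second shape (an editor's addition; it assumes the ID columns line up across the three tables the way the sample data suggests): outer-join events to both tracks and status, then pick the url or the status per row with a CASE on the action. Note also that WHERE S.ID AND T.ID = E.ID treats S.ID as a bare boolean (or is an outright syntax error, depending on the database), which is likely the source of the duplicates:

        SELECT CASE WHEN E.action = 'has uploaded a track' THEN T.url
                    ELSE S.status
               END AS url_or_status,
               E.ID, E.action, E.timestamp
        FROM events E
        LEFT JOIN tracks T ON T.ID = E.ID
        LEFT JOIN status S ON S.ID = E.ID
        ORDER BY E.timestamp DESC;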

    Read the article

  • Write out to text file using T-SQL

    - by sasfrog
    I am creating a basic data transfer task using T-SQL, where I retrieve certain records from one database that are more recent than a given datetime value and load them into another database. This will happen periodically throughout the day. It's such a small task that SSIS seems like overkill; I want to just use a scheduled task which runs a .sql file. Where I need guidance: I need to persist the datetime from the last run of this task, then use it to filter the records the next time the task runs. My initial thought is to just store the datetime in a text file, and update (overwrite) it as part of the task each time it runs. I can read the file in without problems using T-SQL, but writing back out has got me stuck. I've seen plenty of examples which make use of a dynamically-built bcp command, which is then executed using xp_cmdshell. Trouble is, security on the server I'm deploying to precludes the use of xp_cmdshell. So, my question is: are there other ways to simply write a datetime value to a file using T-SQL, or should I be thinking about a different approach? EDIT: happy to be corrected about SSIS being "overkill"...
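    On the "different approach" question: one common alternative (an editor's sketch with hypothetical names, not from the original post) is to skip the file entirely and persist the watermark in a one-row table. That avoids xp_cmdshell altogether and lets the watermark update commit in the same transaction as the transfer:

        -- one-time setup
        CREATE TABLE dbo.TransferWatermark (LastRunAt datetime NOT NULL);
        INSERT INTO dbo.TransferWatermark (LastRunAt) VALUES ('19000101');

        -- each run of the scheduled .sql file
        DECLARE @since datetime, @now datetime;
        SELECT @since = LastRunAt FROM dbo.TransferWatermark;
        SET @now = GETDATE();

        -- ... copy rows whose modified datetime is > @since and <= @now ...

        UPDATE dbo.TransferWatermark SET LastRunAt = @now;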

    Read the article

  • Manipulating data in sql / asp.net / c# - how?

    - by SLC
    Not sure how to word the question... Basically, so far all my SQL stuff has been stored procedures dumped into a GridView. The odd case where I had to perform an action based on a value (such as highlighting a row green if a certain value was true) was done as the GridView was rendering, in one of the overrides. Now, however, I have to do something far more complicated: pull three sets of data down, run a series of checks on all three plus some date-related checks and such, then populate a GridView with some of the items. In logic terms, I want to run three queries, store the lists of results (presumably in Lists?), then run some logic, then populate the GridView. Specifically, what I don't know how to do is: the best way of pulling the data and putting it into a List or other data structure that lets me easily run through it and retrieve data based on column (myList.Age, or more likely, myList["Age"]). Once I have compared the data, I assume I create a new list that will be put into the GridView... how do I put the contents of a list INTO a GridView? How would I add other stuff such as buttons or checkboxes at the same time? Any nudge in the right direction would be appreciated! Particularly doing cool stuff with lists and SQL (if there is anything cool you can do with them).

    Read the article

  • Why could "insert (...) values (...)" not insert a new row?

    - by nang
    Hi, I have a simple SQL insert statement of the form:

        insert into MyTable (...) values (...)

    It is used repeatedly to insert rows and usually works as expected. It inserts exactly 1 row into MyTable, which is also the value returned by the Delphi statement AffectedRows := myInsertADOQuery.ExecSQL. After some time there was a temporary network connectivity problem. As a result, other threads of the same application perceived EOleExceptions (connection failure, -2147467259 = unspecified error). Later, the network connection was re-established; these threads reconnected and were fine. The thread responsible for executing the insert statement described above, however, did not perceive the connectivity problems (no exceptions); probably it simply was not executed while the network was down. But after the network connectivity problems, myInsertADOQuery.ExecSQL always returned 0 and no rows were inserted into MyTable anymore. After a restart of the application, the insert statement worked again as expected. For SQL Server, is there any defined case where an insert statement like the one above would not insert a row and would return 0 as the number of affected rows? The primary key is an autogenerated GUID. There are no unique or check constraints (which would result in an exception anyway, rather than in not inserting a row). Are there any known ADO bugs (Provider=SQLOLEDB.1)? Any other explanations for this behaviour? Thanks, Nang.

    Read the article

  • Fixing Robocopy for SQL Server Jobs

    - by Most Valuable Yak (Rob Volk)
    Robocopy is one of, if not the, best life-saving/greatest-thing-since-sliced-bread command line utilities ever to come from Microsoft. If you're not using it already, what are you waiting for? Of course, being a Microsoft product, it's not exactly perfect. ;) Specifically, it sets the ERRORLEVEL to a non-zero value even if the copy is successful. This causes a problem in SQL Server job steps, since non-zero ERRORLEVELs report as failed. You can work around this by having your SQL job go to the next step on failure, but then you can't determine if there was a genuine error. Plus you still see annoying red X's in your job history. One way I've found to avoid this is to use a batch file that runs Robocopy, and I add some commands after it:

        robocopy d:\backups \\BackupServer\BackupFolder *.bak

        rem suppress successful robocopy exit statuses, only report genuine errors (bitmask 16 and 8 settings)
        set /A errlev="%ERRORLEVEL% & 24"

        rem exit batch file with errorlevel so SQL job can succeed or fail appropriately
        exit /B %errlev%

    (The REM statements are simply comments and don't need to be included in the batch file.) The SET command lets you use expressions when you use the /A switch. So I set an environment variable "errlev" to a bitwise AND with the ERRORLEVEL value. Robocopy's exit codes use a bitmap/bitmask to specify its exit status. The bits for 1, 2, and 4 do not indicate any kind of failure, but 8 and 16 do. So by adding 16 + 8 to get 24, and doing a bitwise AND, I suppress any of the other bits that might be set, and allow either or both of the error bits to pass. The next step is to use the EXIT command with the /B switch to set a new ERRORLEVEL value, using the "errlev" variable. This will now return zero (unless Robocopy had real errors) and allow your SQL job step to report success. This technique should also work for other command-line utilities. The only issue I've found is that it requires the commands to be part of a batch file, so if you use Robocopy directly in your SQL job step you'd need to place it in a batch. If you also have multiple Robocopy calls, you'll need to place the SET /A command ONLY after the last one. You'd therefore lose any errors from previous calls, unless you use multiple "errlev" variables and AND them together. (I'll leave this as an exercise for the reader.) The SET /A syntax also permits other kinds of expressions to be calculated. You can get a full list by running "SET /?" at a command prompt.

    Read the article

  • SQL Saturday #162 Cambridge

    - by Most Valuable Yak (Rob Volk)
    Despite the efforts of American Airlines, this past weekend I attended the first SQL Saturday in the UK! Hosted by the SQLCambs Chapter of PASS and organized by Mark (b|t) & Lorraine Broadbent, ably assisted by John Martin (b|t), Mark Pryce-Maher (b|t) and other folks whose names I've unfortunately forgotten, it was held at the Crowne Plaza Hotel, which is completely surrounded by Cambridge University. On Friday, they presented 3 pre-conference sessions given by the brilliant American cloud & DBA guru, Buck Woody (b|t), the brilliant Danish SQL Server internals guru, Mark Rasmussen (b|t), and the brilliant Scottish business intelligence guru and recent Outstanding PASS Volunteer, Jen Stirrup (b|t). While I would have loved to attend any of their pre-cons (having seen them present several times already), finances and American Airlines ultimately made that impossible. But not to worry, I caught up with them during the regular sessions and at the speaker dinner. And I got back the money they all owed me. (Actually I owed Mark some money.) The schedule was jam-packed: even with only 4 tracks, there were 8 regular slots, a lunch session for sponsor presentations, and a 15-minute keynote given by Buck Woody, who, besides giving an excellent history of SQL Server at Microsoft (and before), also explained the source of the "unknown contact" image that appears in Outlook. Hint: it's not Buck himself. Amazingly, and against all better judgment, I even got to present at SQL Saturday 162! I did a 5-minute lightning talk on regular expressions in SSMS, and then a regular 50-minute session on constraints. You can download the content for the regular session at that link, and for the regular expression presentation here. I had a great time and a great audience for both of my sessions. You would never have guessed this was the first event for the organizers; everything went very smoothly, especially given the number of attendees and the relative smallness of the space. The event sponsors also deserve a lot of credit for making themselves fit in a small area and for staying through the entire event until the giveaways at the very end. Overall this was one of the best SQL Saturdays I've ever attended, and I have to congratulate Mark B, Lorraine, John, Mark P-M, and all the volunteers and speakers for making this an astoundingly hard act to follow. Well done!

    Read the article

  • Create Outlook Appointments from PowerShell

    - by BuckWoody
    I've been toying around with a script to create a special set of calendar objects in Outlook that show when my SQL Server Agent jobs are scheduled to run. I haven't finished yet, but I thought I would share the part that creates the Outlook appointments. I have yet to fill a variable with the start and end times, and then loop through that to create the appointments. I'm thinking I'll make the script below into a function, and feed it those variables in a loop. The script below creates a whole new calendar folder in Outlook called "SQL Server Agent Jobs". I also use categories quite a bit, so you'll see that too. Caution: if you plan to play with this script, do it on an isolated workstation, not on your "regular" Outlook calendar. Otherwise, you'll have lots of appointments in there that you don't care about!

        # Add a new calendar item to a new Outlook folder called "SQL Server Agent Jobs"
        $outlook = new-object -com Outlook.Application
        $calendar = $outlook.Session.folders.Item(1).Folders.Item("SQL Server Agent Jobs")
        $appt = $calendar.Items.Add(1) # == olAppointmentItem
        $appt.Start = [datetime]"03/11/2010 11:00"
        $appt.End = [datetime]"03/11/2010 12:00"
        $appt.Subject = "JobName"
        $appt.Location = "ServerName"
        $appt.Body = "Job Details"
        $appt.Categories = "SQL server Agent Job"
        $appt.Save()

    Script disclaimer, for people who need to be told this sort of thing: never trust any script, including those that you find here, until you understand exactly what it does and how it will act on your systems. Always check the script on a test system or virtual machine, not a production system. All scripts on this site are performed by a professional stunt driver on a closed course. Your mileage may vary. Void where prohibited. Offer good for a limited time only. Keep out of reach of small children. Do not operate heavy machinery while using this script. If you experience blurry vision, indigestion or diarrhea during the operation of this script, see a physician immediately.
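    For the part the author says is still to come, pulling the job start times, something along these lines against msdb could feed the loop (an editor's sketch, not from the post; note the date/time columns come back as integers in yyyymmdd and hhmmss form and need decoding before they become [datetime] values):

        -- next scheduled run for each enabled SQL Server Agent job
        SELECT j.name AS JobName,
               js.next_run_date,   -- int, yyyymmdd (0 if nothing scheduled)
               js.next_run_time    -- int, hhmmss
        FROM msdb.dbo.sysjobs AS j
        JOIN msdb.dbo.sysjobschedules AS js ON js.job_id = j.job_id
        WHERE j.enabled = 1;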

    Read the article

  • SQL Server Memory Manager Changes in Denali

    - by SQLOS Team
    The next version of SQL Server will contain significant changes to the memory manager component, which has been rewritten for Denali. In previous versions of SQL Server there were two distinct memory managers: one handled allocation sizes of 8k or less, and another handled sizes greater than 8k. In Denali there will be one memory manager for all allocation sizes. The majority of the changes will be transparent to the end user. However, some changes will be visible to the user. These are listed below:

    • The 'max server memory' configuration option has new lower limits. Specifically, 32-bit versions of SQL Server will have a lower limit of 64 MB; 64-bit versions will have a lower limit of 128 MB.
    • All memory allocations by SQL Server components will observe the 'max server memory' configuration option. In previous SQL versions, only the 8k allocations were limited by the 'max server memory' configuration option; allocations larger than 8k weren't constrained.
    • DMVs which refer to memory manager internals have been modified. This includes adding or removing columns and changing column names.
    • The memory manager configuration messages in the error log have minor changes.
    • DBCC MEMORYSTATUS output has been changed.
    • Address Windowing Extensions (AWE) has been deprecated.

    In the next blog post I will discuss the changes to the memory manager DMVs in greater detail. In future blog posts I will discuss the other changes in greater detail.
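    As a point of reference for the first bullet, 'max server memory' is set through sp_configure; a minimal example (an editor's addition, not from the post; the 2048 MB value is arbitrary):

        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        -- cap the memory manager at 2 GB
        EXEC sp_configure 'max server memory (MB)', 2048;
        RECONFIGURE;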

    Read the article

  • Sensible Way to Pass Web Data in XML to a SQL Server Database

    - by Emtucifor
    After exploring several different ways to pass web data to a database for update purposes, I'm wondering if XML might be a good strategy. The database is currently SQL 2000. In a few months it will move to SQL 2005 and I will be able to change things if needed, but I need a SQL 2000 solution now. First of all, the database in question uses the EAV model. I know that this kind of database is generally highly frowned on, so for the purposes of this question, please just accept that this is not going to change.

    The current update method has the web server inserting values (that have all been converted first to their correct underlying types, then to sql_variant) to a temp table. A stored procedure is then run which expects the temp table to exist and it takes care of updating, inserting, or deleting things as needed. So far, only a single element has needed to be updated at a time. But now, there is a requirement to be able to edit multiple elements at once, and also to support hierarchical elements, each of which can have its own list of attributes. Here's some example XML I hand-typed to demonstrate what I'm thinking of. Note that in this database the Entity is Element and an ID of 0 signifies "create" aka an insert of a new item.

        <Elements>
          <Element ID="1234">
            <Attr ID="221">Value</Attr>
            <Attr ID="225">287</Attr>
            <Attr ID="234">
              <Element ID="99825">
                <Attr ID="7">Value1</Attr>
                <Attr ID="8">Value2</Attr>
                <Attr ID="9" Action="delete" />
              </Element>
              <Element ID="99826" Action="delete" />
              <Element ID="0" Type="24">
                <Attr ID="7">Value4</Attr>
                <Attr ID="8">Value5</Attr>
                <Attr ID="9">Value6</Attr>
              </Element>
              <Element ID="0" Type="24">
                <Attr ID="7">Value7</Attr>
                <Attr ID="8">Value8</Attr>
                <Attr ID="9">Value9</Attr>
              </Element>
            </Attr>
            <Rel ID="3827" Action="delete" />
            <Rel ID="2284" Role="parent">
              <Element ID="3827" />
              <Element ID="3829" />
              <Attr ID="665">1</Attr>
            </Rel>
            <Rel ID="0" Type="23" Role="child">
              <Element ID="3830" />
              <Attr ID="67"
            </Rel>
          </Element>
          <Element ID="0" Type="87">
            <Attr ID="221">Value</Attr>
            <Attr ID="225">569</Attr>
            <Attr ID="234">
              <Element ID="0" Type="24">
                <Attr ID="7">Value10</Attr>
                <Attr ID="8">Value11</Attr>
                <Attr ID="9">Value12</Attr>
              </Element>
            </Attr>
          </Element>
          <Element ID="1235" Action="delete" />
        </Elements>

    Some Attributes are straight value types, such as AttrID 221. But AttrID 234 is a special "multi-value" type that can have a list of elements underneath it, and each one can have one or more values. Types only need to be presented when a new item is created, since the ElementID fully implies the type if it already exists. I'll probably support only passing in changed items (as detected by javascript). And there may be an Action="delete" on Attr elements as well, since NULLs are treated as "unselected"--sometimes it's very important to know if a Yes/No question has intentionally been answered No or if no one's bothered to say Yes yet.

    There is also a different kind of data, a Relationship. At this time, those are updated through individual AJAX calls as things are edited in the UI, but I'd like to include those so that changes to relationships can be canceled (right now, once you change it, it's done). So those are really elements, too, but they are called Rel instead of Element. Relationships are implemented as ElementID1 and ElementID2, so the RelID 2284 in the XML above is in the database as:

        ElementID  2284
        ElementID1 1234
        ElementID2 3827

    Having multiple children in one relationship isn't currently supported, but it would be nice later.
    Does this strategy and the example XML make sense? Is there a more sensible way? I'm just looking for some broad critique to help save me from going down a bad path. Any aspect that you'd like to comment on would be helpful. The web language happens to be Classic ASP, but that could change to ASP.Net at some point. A persistence engine like Linq or nHibernate is probably not acceptable right now--I just want to get this already-working application enhanced without a huge amount of development time. I'll choose the answer that shows experience and has a balance of good warnings about what not to do, confirmations of what I'm planning to do, and recommendations about something else to do. I'll make it as objective as possible. P.S. I'd like to handle unicode characters as well as very long strings (10k+).

    UPDATE

    I have had this working for some time and I used the ADO Recordset Save-To-Stream trick to make creating the XML really easy. The result seems to be fairly fast, though if speed ever becomes a problem I may revisit this. In the meantime, my code works to handle any number of elements and attributes on the page at once, including updating, deleting, and creating new items all in one go. I settled on a scheme like so for all my elements:

    Existing data elements. Example: input name e12345_a678 (element 12345, attribute 678); the input value is the value of the attribute.

    New elements. Javascript copies a hidden template of the set of HTML elements needed for the type into the correct location on the page, increments a counter to get a new ID for this item, and prepends the number to the names of the form items.

        var newid = 0;
        function metadataAdd(reference, nameid, value) {
            var t = document.createElement('input');
            t.setAttribute('name', nameid);
            t.setAttribute('id', nameid);
            t.setAttribute('type', 'hidden');
            t.setAttribute('value', value);
            reference.appendChild(t);
        }
        function multiAdd(target, parentelementid, attrid, elementtypeid) {
            var proto = document.getElementById('a' + attrid + '_proto');
            var instance = document.createElement('p');
            target.parentNode.parentNode.insertBefore(instance, target.parentNode);
            var thisid = ++newid;
            instance.innerHTML = proto.innerHTML.replace(/{prefix}/g, 'n' + thisid + '_');
            instance.id = 'n' + thisid;
            instance.className += ' new';
            metadataAdd(instance, 'n' + thisid + '_p', parentelementid);
            metadataAdd(instance, 'n' + thisid + '_c', attrid);
            metadataAdd(instance, 'n' + thisid + '_t', elementtypeid);
            return false;
        }

    Example: template input name _a678 becomes n1_a678 (a new element, the first one on the page, attribute 678). All attributes of this new element are tagged with the same prefix of n1. The next new item will be n2, and so on. Some hidden form inputs are created:

        n1_t, value is the element type of the element to be created
        n1_p, value is the parent id of the element (if it is a relationship)
        n1_c, value is the child id of the element (if it is a relationship)

    Deleting elements. A hidden input is created in the form e12345_t with value set to 0. The existing controls displaying that attribute's values are disabled so they are not included in the form post. So "set type to 0" is treated as delete.

    With this scheme, every item on the page has a unique name and can be distinguished properly, and every action can be represented properly.
    When the form is posted, here's a sample of building one of the two recordsets used (classic ASP code):

        Set Data = Server.CreateObject("ADODB.Recordset")
        Data.Fields.Append "ElementID", adInteger, 4, adFldKeyColumn
        Data.Fields.Append "AttrID", adInteger, 4, adFldKeyColumn
        Data.Fields.Append "Value", adLongVarWChar, 2147483647, adFldIsNullable Or adFldMayBeNull
        Data.CursorLocation = adUseClient
        Data.CursorType = adOpenDynamic
        Data.Open

    This is the recordset for values; the other is for the elements themselves. I step through the posted form, and for the element recordset I use a Scripting.Dictionary populated with instances of a custom Class that has the properties I need, so that I can add the values piecemeal, since they don't always come in order. New elements are added as negative to distinguish them from regular elements (rather than requiring a separate column to indicate if it is new or addresses an existing element). I use a regular expression to tear apart the form keys:

        "^(e|n)([0-9]{1,10})_(a|p|t|c)([0-9]{0,10})$"

    Then, adding an attribute looks like this:

        Data.AddNew
        ElementID.Value = DataID
        AttrID.Value = Integerize(Matches(0).SubMatches(3))
        AttrValue.Value = Request.Form(Key)
        Data.Update

    ElementID, AttrID, and AttrValue are references to the fields of the recordset. This method is hugely faster than using Data.Fields("ElementID").Value each time. I loop through the Dictionary of element updates and ignore any that don't have all the proper information, adding the good ones to the recordset. Then I call my data-updating stored procedure like so:

        Set Cmd = Server.CreateObject("ADODB.Command")
        With Cmd
            Set .ActiveConnection = MyDBConn
            .CommandType = adCmdStoredProc
            .CommandText = "DataPost"
            .Prepared = False
            .Parameters.Append .CreateParameter("@ElementMetadata", adLongVarWChar, adParamInput, 2147483647, XMLFromRecordset(Element))
            .Parameters.Append .CreateParameter("@ElementData", adLongVarWChar, adParamInput, 2147483647, XMLFromRecordset(Data))
        End With
        Result.Open Cmd ' previously created recordset object with options set

    Here's the function that does the XML conversion:

        Private Function XMLFromRecordset(Recordset)
            Dim Stream
            Set Stream = Server.CreateObject("ADODB.Stream")
            Stream.Open
            Recordset.Save Stream, adPersistXML
            Stream.Position = 0
            XMLFromRecordset = Stream.ReadText
        End Function

    Just in case the web page needs to know, the SP returns a recordset of any new elements, showing their page value and their created value (so I can see that n1 is now e12346, for example). Here are some key snippets from the stored procedure.
    Note this is SQL 2000 for now, though I'll be able to switch to 2005 soon:

        CREATE PROCEDURE [dbo].[DataPost]
            @ElementMetaData ntext,
            @ElementData ntext
        AS
        DECLARE @hdoc int

        --- snip ---

        EXEC sp_xml_preparedocument @hdoc OUTPUT, @ElementMetaData,
            '<xml xmlns:s="uuid:BDC6E3F0-6DA3-11d1-A2A3-00AA00C14882" xmlns:dt="uuid:C2F41010-65B3-11d1-A29F-00AA00C14882" xmlns:rs="urn:schemas-microsoft-com:rowset" xmlns:z="#RowsetSchema" />'

        INSERT #ElementMetadata (ElementID, ElementTypeID, ElementID1, ElementID2)
        SELECT *
        FROM OPENXML(@hdoc, '/xml/rs:data/rs:insert/z:row', 0)
        WITH (
            ElementID int,
            ElementTypeID int,
            ElementID1 int,
            ElementID2 int
        )
        ORDER BY ElementID -- orders negative items (new elements) first so they begin counting at 1 for later ID calculation

        EXEC sp_xml_removedocument @hdoc

        --- snip ---

        UPDATE E
        SET E.ElementTypeID = M.ElementTypeID
        FROM Element E
        INNER JOIN #ElementMetadata M ON E.ElementID = M.ElementID
        WHERE E.ElementID >= 1 AND M.ElementTypeID >= 1

    The following query does the correlation of the negative new element ids to the newly inserted ones:

        UPDATE #ElementMetadata -- correlate the new ElementIDs with the input rows
        SET NewElementID = Scope_Identity() - @@RowCount + DataID
        WHERE ElementID < 0

    Other set-based queries do all the other work of validating that the attributes are allowed and of the correct data type, and of inserting, updating, and deleting elements and attributes. I hope this brief run-down is useful to others some day! Converting ADO Recordsets to an XML stream was a huge winner for me, as it saved all sorts of time and had a namespace and schema already defined that made the results come out correctly. Using a flatter XML format with 2 inputs was also much easier than sticking to some ideal about having everything in a single XML stream.

    Read the article

  • Please guide this self-taught Web Developer.

    - by ChickenPuke
    One of the major regrets of my life is that I didn't do something about my introversion. I didn't manage to get past the first year of college because of it. I have chosen the path where there are no video games and other time sinks; all I have is the internet to quench my thirst for learning the ins and outs of the field of web development/design. Currently, I'm taking a Web Design Associate course at one of the best computer arts schools, and this is the last month of the class. Even though I'm still a sapling, I love this field so much. So basically: at school I'm learning web design, while at home I'm teaching myself web development. First things first, returning to college seems impossible at the moment because of some financial problems. I'm pretty comfortable with CSS and HTML, and I'm into PHP/MySQL at the moment. Could you please provide me a web-development curriculum to follow? And do I need to learn about the theories behind it? And I think I'm still young (I'm 18 at the time of writing). Is it a good thing or a bad thing that I'm choosing this path? I'm glad about my decision, but in all honesty, I'm worrying about my future and employment because I'm an undergrad, coming from a country where companies are degree b!tches; it saddens me so. Thank you. (My questions are the bold parts.)

    Read the article

  • Initial review of Developer's Guide to Collections in Microsoft® .NET

    - by TATWORTH
    The code is well illustrated by diagrams, and the approach is practical. The code is well commented; however, the C# code samples would be better had they been fully StyleCop compliant. I am looking forward to reviewing the rest of this excellent book. I recommend it to all C# and VB.NET development teams. I concur with the author, who states that the book is not for learning C# or VB.NET; it is an excellent book for C# or VB.NET developers to extend their knowledge of the .NET framework. To buy a copy, please go to http://shop.oreilly.com/product/0790145317193.do

    Read the article

  • Umbraco Certified Developer - Level 2 - Chris Houston

    - by Vizioz Limited
    I just thought I'd create a quick blog post to say that I have now been on the Umbraco Level 2 course (which I would recommend!), and although it turned out that I pretty much knew 95% of what was taught, the extra 5% and a chance to have a trip to Copenhagen made it worth it :) I am now officially Umbraco Level 2 certified :) Hopefully over the next month I will have some time to start adding a few more useful posts to my blog. I know I've been a little slack on the posting in the last month; it's just been a busy time for me!

    Read the article

  • What Every Developer Should Know About MSI Components

    - by Alois Kraus
    Hopefully nothing. But if you have to do more than simple XCopy deployment and you need to support updates, upgrades and perhaps side-by-side scenarios, there is no way around MSI. You can create MSI files with a Visual Studio setup project, which is severely limited, or you can use the Windows Installer Toolset. I cannot talk about WIX with my German colleagues because WIX has a very special meaning. It is funny to always use the long name when I talk about deployment possibilities. Alternatively, you can buy commercial tools which help you author MSI files, but I am not sure how good they are. Given enough pain with existing solutions, you can also learn the MSI APIs and create your own packaging solution. If I were you, I would use either a commercial visual tool when you do easy deployments, or the free Windows Installer Toolset. Once you know the WIX schema, you can easily create well-formed wxs XML files with any editor, and then "compile" your MSI package from the wxs files. Recently I had the "pleasure" of getting my hands dirty with C++ (again) and the MSI technology. Installation is a complex topic, but after several months of digging into arcane MSI issues I can safely say that there should be an easier way to install and update files than exists today. I am not alone with this statement, as John Robbins (creator of the cool tool Paraffin) states: ".. It's a brittle and scary API in Windows …". To help other people struggling with installation issues, I present the advice I (and others) found useful, and what will happen if you ignore this advice.

    What is a MSI file?

    A MSI file is basically a database with tables which reference each other to control how your un/installation should work. The basic idea is that you declare via these tables what you want to install, and MSI controls how your stuff gets onto or off your machine. Your "stuff" usually consists of files, registry keys, shortcuts and environment variables. Therefore the most important tables are the File, Registry, Environment and Shortcut tables, which define what will be un/installed. The actual payload consists of compressed files in the CAB format, which can either be embedded into the MSI file or reside beside the MSI file or in a subdirectory below it. To examine MSI files you need Orca, a free MSI editor provided by MS. There is also another free editor called Super Orca, which does support diffs between MSIs and does not lock the MSI files. But since Orca comes with a shell extension, I tend to use only Orca, because it is so easy to right-click on a MSI file and open it with this tool.

    How Do I Install It?

    Double-click it. This works for fresh installations as well as major upgrades. Updates need to be installed via the command line:

        msiexec /i <msi> REINSTALL=ALL REINSTALLMODE=vomus

    This tells the installer to reinstall all already installed features (new features will NOT be installed). The reinstallmode letters force an overwrite of the old cached package in the %WINDIR%\Installer folder. All files, shortcuts and registry keys are redeployed if they are missing or need to be replaced with a newer version. When things go really wrong and you want to overwrite everything unconditionally, use REINSTALLMODE=vamus.

    How To Enable MSI Logs?

    You can download a MSI from Microsoft which installs some registry keys to enable full MSI logging.
The log files can be found in your %TEMP% folder and are called MSIxxxx.log. Alternatively you can add to your msiexec command line the option msiexec …. /l*vx <LogFileName> Personally I find it rather strange that * does not mean full logging. To really get all logs I need to add v and x which is documented in the msiexec help but I still find this behavior unintuitive. What are MSI components? The whole MSI logic is bound to the concept of MSI components. Nearly every msi table has a Component column which binds an installable resource to a component. Below are the screenshots of the FeatureComponents and Component table of an example MSI. The Feature table defines basically the feature hierarchy.  To find out what belongs to a feature you need to look at the FeatureComponents table where for each feature the components are listed which will be installed when a feature is installed. The MSI components are defined in the  Component table. This table has as first column the component name and as second column the component id which is a GUID. All resources you want to install belong to a MSI component. Therefore nearly all MSI tables have a Component_ column which contains the component name. If you look e.g. a the File table you see that every file belongs to a component which is true for all other tables which install resources. The component table is the glue between all other tables which contain the resources you want to install. So far so easy. Why is MSI then so complex? Most MSI problems arise from the fact that you did violate a MSI component rule in one or the other way. When you install a feature the reference count for all components belonging to this feature will increase by one. If your component is installed by more than one feature it will get a higher refcount. When you uninstall a feature its refcount will drop by one. Interesting things happen if the component reference count reaches zero: Then all associated resources will be deleted. That looks like a reasonable thing and it is. What it makes complex are the strange component rules you have to follow. Below are some important component rules from the Tao of the Windows Installer … Rule 16: Follow Component Rules Components are a very important part of the Installer technology. They are the means whereby the Installer manages the resources that make up your application. The SDK provides the following guidelines for creating components in your package: Never create two components that install a resource under the same name and target location. If a resource must be duplicated in multiple components, change its name or target location in each component. This rule should be applied across applications, products, product versions, and companies. Two components must not have the same key path file. This is a consequence of the previous rule. The key path value points to a particular file or folder belonging to the component that the installer uses to detect the component. If two components had the same key path file, the installer would be unable to distinguish which component is installed. Two components however may share a key path folder. Do not create a version of a component that is incompatible with all previous versions of the component. This rule should be applied across applications, products, product versions, and companies. Do not create components containing resources that will need to be installed into more than one directory on the user’s system. 
Below are some important component rules from the Tao of the Windows Installer.

Rule 16: Follow Component Rules

Components are a very important part of the Installer technology. They are the means whereby the Installer manages the resources that make up your application. The SDK provides the following guidelines for creating components in your package:

- Never create two components that install a resource under the same name and target location. If a resource must be duplicated in multiple components, change its name or target location in each component. This rule should be applied across applications, products, product versions, and companies.
- Two components must not have the same key path file. This is a consequence of the previous rule. The key path value points to a particular file or folder belonging to the component that the installer uses to detect the component. If two components had the same key path file, the installer would be unable to distinguish which component is installed. Two components may, however, share a key path folder.
- Do not create a version of a component that is incompatible with all previous versions of the component. This rule should be applied across applications, products, product versions, and companies.
- Do not create components containing resources that will need to be installed into more than one directory on the user's system. The installer installs all of the resources in a component into the same directory; it is not possible to install some resources into subdirectories.
- Do not include more than one COM server per component. If a component contains a COM server, this must be the key path for the component.
- Do not specify more than one file per component as a target for the Start menu or a Desktop shortcut.

And these rules do not even talk about component ids, update packages and upgrades, which you need to understand as well. Let's suppose you install two MSIs (MSI1 and MSI2) which have the same ComponentId but different component names, and both install the same file. What happens when you uninstall MSI2? Hm, the file should stay there, because the component names are different? Yes and no: MSI does not use the component name as the key for the refcount. Instead the ComponentId column of the Component table, which contains a GUID, is the identifier under which the refcount is stored. The components Comp1 and Comp2 are identical from the MSI perspective. After the installation of both MSIs the component with the id {100000…} has a refcount of two. After uninstalling one MSI there is still a refcount of one, which drops to zero, just as expected, when we uninstall the last MSI. Only then is the file, which was the same for both MSIs, deleted. You should remember that MSI keeps a refcount across MSIs for components with the same component id. MSI manages components, not the resources you installed: the resources associated with a component are deleted when, and only when, the refcount of the component reaches zero.

The dependencies between features, components and resources can be described as relations, where m and k are numbers >= 1 and n can be 0. Inside a MSI the following relations are valid:

    Feature    1  –> n Components
    Component  1  –> m Features
    Component  1  –> k Resources

These relations express that one feature can install several components and that features can share components between them. Every (meaningful) component will install at least one resource, which means that its name (its primary key, to stay in database speak) occurs as a value in the Component column of some other table which installs a resource. Let's make it clear with an example. We want to install, with the feature MainFeature, some files, a registry key and a shortcut. We can then create the components Comp1..3, which are referenced by the resources defined in the corresponding tables:

    Feature      Component  Registry      File       Shortcuts
    MainFeature  Comp1      RegistryKey1
    MainFeature  Comp2                    File.txt
    MainFeature  Comp3                    File2.txt  Shortcut to File2.txt

It is illegal for the same resource to be part of more than one component, since this would break the refcount mechanism. Let's illustrate this:

    Feature   ComponentId  Resource   Reference Count
    Feature1  {1000-…}     File1.txt  1
    Feature2  {2000-…}     File1.txt  1

The installation part works well, but what happens when you uninstall Feature2? Component {2000…} gets a refcount of zero, whereupon MSI deletes all resources belonging to this component, in this case File1.txt. But Feature1 still has another component, {1000…}, with a refcount of one, which means that the file was deleted too early. You have just ruined your installation. To fix it you then need to click the Repair button under Add/Remove Programs to let MSI reinstall any missing registry keys, files or shortcuts.
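Authoring the MainFeature example with WiX could look roughly like the following sketch (all ids, GUIDs, registry paths and file names are invented, and light's ICE validation may ask for more shortcut attributes than this bare sketch provides). Note that every resource sits in its own component and that every component has its own KeyPath, which becomes important in a moment.

    <!-- Goes inside <Product>; INSTALLDIR as before, and DesktopFolder is
         the standard desktop folder declared under TARGETDIR via
         <Directory Id="DesktopFolder" Name="Desktop"/> -->
    <DirectoryRef Id="INSTALLDIR">
      <Component Id="Comp1" Guid="DEADBEEF-0000-0000-0000-000000000011">
        <!-- Registry-only component: a registry value is its KeyPath -->
        <RegistryValue Root="HKLM" Key="Software\Example\SampleApp"
                       Name="Installed" Type="integer" Value="1"
                       KeyPath="yes"/>
      </Component>
      <Component Id="Comp2" Guid="DEADBEEF-0000-0000-0000-000000000012">
        <File Id="File1" Source="File.txt" KeyPath="yes"/>
      </Component>
      <Component Id="Comp3" Guid="DEADBEEF-0000-0000-0000-000000000013">
        <File Id="File2" Source="File2.txt" KeyPath="yes">
          <!-- The shortcut targets its parent file -->
          <Shortcut Id="File2Shortcut" Directory="DesktopFolder"
                    Name="File2" Advertise="yes"/>
        </File>
      </Component>
    </DirectoryRef>

    <Feature Id="MainFeature" Level="1">
      <ComponentRef Id="Comp1"/>
      <ComponentRef Id="Comp2"/>
      <ComponentRef Id="Comp3"/>
    </Feature>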
The vigilant reader might have noticed that there is more in the Component table. Beside its name and GUID, a component also has an installation directory, attributes and a KeyPath. The KeyPath is a reference to a file or registry key which is used to detect whether the component is installed. To find out, MSI checks if the registry key or file referenced by the KeyPath exists: if it does, the whole component counts as installed and intact; if it does not, the component counts as missing or broken. This becomes important when you repair or uninstall a component.

Why is this detail so important? Let's put all files into one component. The KeyPath is then one of the files of your component. When your installation becomes corrupt because another file was deleted, you cannot repair it with the Repair button under Add/Remove Programs, because MSI checks the component's integrity only via the resource referenced by its KeyPath. As long as you did not delete the KeyPath file itself, MSI thinks all resources of your component are installed and never executes any repair action. You get even more trouble when you try to remove files from your super component during an upgrade (you cannot remove files during an update). The only way out, and therefore the best practice, is to assign every resource you want to install its own component. This ensures painless updates and repairs, and you have much less effort when you need to remove specific files during an upgrade. In effect you get this best-practice relation:

    Feature    1  –> n Components
    Component  1  –> 1 Resource

MSI Component Rules

Rule 1 – One component per resource

Every resource you want to install (file, registry key or value, environment variable, shortcut, directory, …) must get its own component, which never changes between versions as long as the install location stays the same.

Penalty: If you add more than one resource to a component, you break the repair capability of MSI, because only the KeyPath is used to check whether the component needs repair. Upgrades suffer as well:

    MSI      ComponentId  Files
    MSI 1.0  {1000}       File1-5
    MSI 2.0  {2000}       File2-5

You want to remove File1 in version 2.0 of your MSI. Since you want to keep the other files, you create a new component {2000} and add them there. MSI will delete all five files when the component refcount of {1000} drops to zero. That works if your upgrade uninstalls the old MSI first: this drops the refcount of all previously installed components to zero, which means that all files present in version 1.0 are deleted, and the new component then lays down File2-5 again. But there is a faster way to perform your upgrade: first install the new MSI and then remove the old one. If you choose this upgrade path, removing the old MSI drops {1000} to zero and you lose File1-5 after your upgrade, not only File1 as intended by your new component design.
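Which of the two upgrade paths you get depends on where the RemoveExistingProducts action is scheduled in your new package. With WiX 3.5 and later the MajorUpgrade element makes this a one-liner; the following sketch assumes the rest of the product authoring from above, but the Schedule values themselves are the documented ones.

    <!-- Inside <Product>: remove the old MSI before the new files are
         installed (the "uninstall the old MSI first" path) -->
    <MajorUpgrade Schedule="afterInstallValidate"
                  DowngradeErrorMessage="A newer version is already installed."/>

    <!-- Alternative: install the new files first, then remove the old
         product (the faster path, and the one that bites you when the
         component rules were violated):
    <MajorUpgrade Schedule="afterInstallExecute"
                  DowngradeErrorMessage="A newer version is already installed."/>
    -->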
Rule 2 – Only add, never remove resources from a component

If you followed Rule 1 you will not need Rule 2. You may add more resources to a component in a patch; that is OK. But you can never remove anything from it. There are tricky ways around that, but I do not want to encourage bad component design.

Penalty: Let's assume you have two MSI files which each install one file under the same component:

    MSI1                   MSI2
    {1000} – ComponentId   {1000} – ComponentId
    File1.txt              File2.txt

When you install and then uninstall both MSIs, you end up with an installation where either File1 or File2 is left behind. Why? It seems that MSI does not store the resources associated with each component in its internal database. Instead Windows simply queries the MSI that is currently being uninstalled for all resources belonging to this component. Since it finds only one file and not two, it uninstalls only one file. That is the main reason why you can never remove resources from a component!

Rule 3 – Never remove a component from an update MSI

This is the same as if you changed the GUID of a component by accident in your new update package: the resulting update package does not contain all components of the previously installed package.

Penalty: When you remove a component from a feature, MSI sets the feature state during the update to Advertised and, if you enabled MSI logging, writes a warning like this into its log file:

    SELMGR: ComponentId '{2DCEA1BA-3E27-E222-484C-D0D66AEA4F62}' is registered to feature 'xxxxxxx', but is not present in the Component table.  Removal of components from a feature is not supported!
    MSI (c) (24:44) [07:53:13:436]: SELMGR: Removal of a component from a feature is not supported

Advertised means that MSI treats all components of this feature as not installed. As a consequence, nothing of it is removed during uninstall, since from MSI's point of view it is not installed! Worse, this feature will also not get the required patches: all other features which followed the component versioning rules for update packages will be updated, but the one faulty feature will not. This results in very hard-to-find bugs where an update was only partially successful. Things got better with Windows Installer 4.5, but you cannot rely on nobody using an older installer. It is a good idea to add MSIENFORCEUPGRADECOMPONENTRULES=1 to your msiexec update call, which aborts the installation if you violated this rule:

    msiexec /i <msi> REINSTALL=ALL REINSTALLMODE=vomus MSIENFORCEUPGRADECOMPONENTRULES=1
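If you do not want to rely on everyone remembering that command-line switch, you may be able to author the property directly into the package. I believe the property is honored from the Property table as well, but verify this against your Windows Installer version; the sketch below assumes it is.

    <!-- Inside <Product>: enforce the component rules for every
         upgrade of this package, not only when the switch is passed
         on the command line (assumption: the engine reads it from
         the Property table) -->
    <Property Id="MSIENFORCEUPGRADECOMPONENTRULES" Value="1"/>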
