Search Results

Search found 2966 results on 119 pages for 'procedural generation'.

Page 11/119

  • How to generate a metamodel for POJOs

    - by Stefan Haberl
    I'm looking for a metamodel generation library for simple Java POJOs. I'm thinking of something along the lines of JPA 2.0's metamodel, which can be used to build type-safe criteria queries for JPA entities (see this question on SO for JPA-specific implementations). Does something similar exist for general-purpose JavaBeans, i.e. POJOs, that are not JPA entities? Specifically, I want to call Spring MVC's WebDataBinder.setAllowedFields() method in a type-safe way in my Spring MVC @Controllers.
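
    As a rough illustration of the kind of artifact such a generator would produce, here is a minimal hand-written sketch in Java; the Customer POJO, the Customer_ metamodel class and the controller are hypothetical examples, not part of any existing library:

      import org.springframework.stereotype.Controller;
      import org.springframework.web.bind.WebDataBinder;
      import org.springframework.web.bind.annotation.InitBinder;

      // Hypothetical POJO; getters/setters omitted for brevity.
      class Customer {
          private String firstName;
          private String email;
      }

      // What a metamodel generator could emit: one constant per bean property, so a
      // regenerated metamodel turns a renamed field into a compile error instead of
      // a silently broken binder whitelist.
      final class Customer_ {
          static final String FIRST_NAME = "firstName";
          static final String EMAIL = "email";
          private Customer_() {}
      }

      // Inside a Spring MVC controller: the whitelist no longer relies on
      // hand-typed strings scattered through the code base.
      @Controller
      class CustomerController {
          @InitBinder
          public void initBinder(WebDataBinder binder) {
              binder.setAllowedFields(Customer_.FIRST_NAME, Customer_.EMAIL);
          }
      }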

    Read the article

  • [CODE GENERATION] How to generate DELETE statements in PL/SQL, based on the tables' FK relations?

    - by The chicken in the kitchen
    Is it possible, via a script or tool, to automatically generate many DELETE statements based on the tables' FK relations, using Oracle PL/SQL? For example: I have the table CHICKEN (CHICKEN_CODE NUMBER) and there are 30 tables with FK references to its CHICKEN_CODE that I need to delete from; there are also another 150 tables foreign-key-linked to those 30 tables that I need to delete from first. Is there some PL/SQL tool/script that I can run in order to generate all the necessary delete statements based on the FK relations for me? (By the way, I know about cascade delete on the relations, but please pay attention: I CAN'T USE IT IN MY PRODUCTION DATABASE, because it's dangerous!) I'm using Oracle Database 10g R2. This is a view I have previously written, but of course it is not recursive: CREATE OR REPLACE FORCE VIEW RUN ( OWNER_1, CONSTRAINT_NAME_1, TABLE_NAME_1, TABLE_NAME, VINCOLO ) AS SELECT OWNER_1, CONSTRAINT_NAME_1, TABLE_NAME_1, TABLE_NAME, '(' || LTRIM ( EXTRACT (XMLAGG (XMLELEMENT ("x", ',' || COLUMN_NAME)), '/x/text()'), ',') || ')' VINCOLO FROM ( SELECT CON1.OWNER OWNER_1, CON1.TABLE_NAME TABLE_NAME_1, CON1.CONSTRAINT_NAME CONSTRAINT_NAME_1, CON1.DELETE_RULE, CON1.STATUS, CON.TABLE_NAME, CON.CONSTRAINT_NAME, COL.POSITION, COL.COLUMN_NAME FROM DBA_CONSTRAINTS CON, DBA_CONS_COLUMNS COL, DBA_CONSTRAINTS CON1 WHERE CON.OWNER = 'TABLE_OWNER' AND CON.TABLE_NAME = 'TABLE_OWNED' AND ( (CON.CONSTRAINT_TYPE = 'P') OR (CON.CONSTRAINT_TYPE = 'U')) AND COL.TABLE_NAME = CON1.TABLE_NAME AND COL.CONSTRAINT_NAME = CON1.CONSTRAINT_NAME --AND CON1.OWNER = CON.OWNER AND CON1.R_CONSTRAINT_NAME = CON.CONSTRAINT_NAME AND CON1.CONSTRAINT_TYPE = 'R' GROUP BY CON1.OWNER, CON1.TABLE_NAME, CON1.CONSTRAINT_NAME, CON1.DELETE_RULE, CON1.STATUS, CON.TABLE_NAME, CON.CONSTRAINT_NAME, COL.POSITION, COL.COLUMN_NAME) GROUP BY OWNER_1, CONSTRAINT_NAME_1, TABLE_NAME_1, TABLE_NAME; ... and it also contains the error of using DBA_CONSTRAINTS instead of ALL_CONSTRAINTS...
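
    Purely as a hedged sketch of the recursion involved (not the PL/SQL asked for above): the hard part is walking the FK graph so that child tables come before their parents. The snippet below derives such a deletion order from JDBC metadata in Java; the connection URL, credentials and schema/table names are placeholders, and the printed statements would still need WHERE clauses built from the FK columns:

      import java.sql.Connection;
      import java.sql.DatabaseMetaData;
      import java.sql.DriverManager;
      import java.sql.ResultSet;
      import java.sql.SQLException;
      import java.util.ArrayList;
      import java.util.HashSet;
      import java.util.List;
      import java.util.Set;

      public class DeleteOrder {
          public static void main(String[] args) throws SQLException {
              // Placeholder connection details.
              try (Connection con = DriverManager.getConnection(
                      "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "password")) {
                  List<String> order = new ArrayList<>();
                  visit(con.getMetaData(), "TABLE_OWNER", "CHICKEN", order, new HashSet<>());
                  // Children appear before their parents, so DELETEs issued in this
                  // order never violate a foreign key.
                  order.forEach(t -> System.out.println("DELETE FROM " + t + " /* add WHERE */;"));
              }
          }

          static void visit(DatabaseMetaData md, String schema, String table,
                            List<String> order, Set<String> seen) throws SQLException {
              if (!seen.add(table)) return;            // guard against FK cycles
              List<String> children = new ArrayList<>();
              try (ResultSet rs = md.getExportedKeys(null, schema, table)) {
                  while (rs.next()) children.add(rs.getString("FKTABLE_NAME")); // tables with an FK to 'table'
              }
              for (String child : children) visit(md, schema, child, order, seen);
              order.add(table);                        // a parent only after all of its children
          }
      }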

    Read the article

  • Code generation tool, to create C# adapter classes for unit testing?

    - by RyBolt
    I know I wouldn't need this with Typemock; however, with something like Moq, I need to use the adapter pattern to enable the creation of mocks via interfaces for code I don't control. For example, TcpClient is a .NET class, so I use the adapter pattern to enable mocking of this object, because I need an interface for that class. I then produce an interface ITcpClient, which can be implemented via a TcpClientAdapter class, which is just a plain vanilla adapter pattern implementation. I am looking for a tool to do this automatically (creation of the interface and adapter). I would think there is one out there somewhere? (Or is everyone just hand-coding these?)

    Read the article

  • Should Maven generate JAXB Java code or just use Java code from source control?

    - by Peter Turner
    We're trying to plan how to mash together a build server for our shiny new Java backend. We use a lot of JAXB XSD code generation, and I was getting into a heated argument with whoever cared that the build server should (1) delete the JAXB-created structures that were checked in, (2) generate the code from the XSDs, and (3) use the code generated from those XSDs. Everyone else thought it made more sense to just use the code they checked in (we check in the code generated from the XSDs because Eclipse pretty much forces you to do this, as far as I can tell). My only stale argument, from my reading of the Joel Test, is that making the build in one step means generating from the source code, and the source code is not the Java source but the XSDs, because if you're messing around with the generated code you're gonna get pinched eventually. So, given that we all agree (you may not agree) we should probably be checking in our generated Java files, should we use them to build, or should we regenerate the code from the XSDs?

    Read the article

  • Encourage the use of markup files as documentation in enterprise [closed]

    - by linquize
    To make it easier to version-control and diff documentation files, use markup files, such as HTML (html/xhtml), XML (DocBook), or wiki markup (Markdown), to replace doc/docx. docx is too complex and lengthy. For HTML, no extra document generation is required: programmers can write HTML directly, and end users / managers can use any web browser to view the document. For custom XML or wiki formats, viewers are required to view the document, or converters are used to export to pdf/doc. Is such a move becoming popular in an enterprise context? Why or why not?

    Read the article

  • Executing procedural queries in Perl

    - by KalpaG
    I have a query like this: set @valid_total:=0; set @invalid_total:=0; select week as weekno, measured_week,project_id as project, role_category_id as role_category, valid_count,valid_tickets, (@valid_total := @valid_total + valid_count) as valid_total, invalid_count,invalid_tickets, (@invalid_total := @invalid_total + invalid_count) as invalid_total from metric_fault_bug_project where measured_week = yearweek(curdate()) and role_category_id = 1 and project_id = 11; It executes fine in HeidiSQL (a MySQL client), but when it comes to Perl it gives me this error: DBD::mysql::st execute failed: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ':=0; set :=0; select week as weekno, measured_week,project_id as project' at line 1 at D:\Mx\scripts\test.pl line 35. Can't execute SQL statement: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ':=0; set :=0; select week as weekno, measured_week,project_id as project' at line 1 The problem seems to be in the set @valid_total := 0; line. I am fairly new to Perl. Can anyone help? This is the complete Perl code: #!/usr/bin/perl #use lib '/x01/home/kalpag/libs'; use DBI; use strict; use warnings; my $sid = 'issues'; my $user = 'root'; my $passwd = 'kalpa'; my $connection = "DBI:mysql:database=$sid;host=localhost"; my $dbhh = DBI->connect( $connection, $user, $passwd) || die "Database connection not made: $DBI::errstr"; my $sql_query = 'set @valid_total:=0; set @invalid_total:=0; select week as weekno, measured_week,project_id, role_category_id as role_category, valid_count,valid_tickets, (@valid_total := @valid_total + valid_count) as valid_total, invalid_count,invalid_tickets, (@invalid_total := @invalid_total + invalid_count) as invalid_total from metric_fault_bug_project where measured_week = yearweek(curdate()) and role_category_id = 1 and project_id = 11'; my $sth = $dbhh->prepare($sql_query) or die "Can't prepare SQL statement: $DBI::errstr\n"; $sth->execute() or die "Can't execute SQL statement: $DBI::errstr\n"; while ( my @memory = $sth->fetchrow() ) { print "@memory \n"; }

    Read the article

  • code snippet works when procedural, but doesn't when converted to modular

    - by Delirium tremens
    function sc_HTMLParser(aHTMLString){ var parseDOM = content.document.createElement('div'); parseDOM.appendChild(Components.classes['@mozilla.org/feed-unescapehtml;1'] .getService(Components.interfaces.nsIScriptableUnescapeHTML) .parseFragment(aHTMLString, false, null, parseDOM)); return parseDOM; } becomes this.HTMLParser = function(aHTMLString){ var parseDOM = content.document.createElement('div'); parseDOM.appendChild(Components.classes['@mozilla.org/feed-unescapehtml;1'] .getService(Components.interfaces.nsIScriptableUnescapeHTML) .parseFragment(aHTMLString, false, null, parseDOM)); return parseDOM; } and searchcontents = req.responseText; parsedHTML = sc_HTMLParser(searchcontents); sitefound = sc_sitefound(compareuris, parsedHTML); becomes searchcontents = req.responseText; alert(searchcontents); parsedHTML = this.HTMLParser(searchcontents); alert(parsedHTML); sitefound = this.sitefound(compareuris, parsedHTML); The modular code alerts the search contents, but doesn't alert the parsedHTML. Why, and how can I solve this?

    Read the article

  • BUILDROOT files during RPM generation

    - by khmarbaise
    Currently I have the following spec file to create an RPM. The spec file is generated by a Maven plugin. The question is: after the RPM generation, will I find the files mentioned in the spec file inside the BUILDROOT/SPECS/SOURCES/SRPMS structure? %define _unpackaged_files_terminate_build 0 Name: rpm-1 Version: 1.0 Release: 1 Summary: rpm-1 License: 2009 my org Distribution: My App Vendor: my org URL: www.my.org Group: Application/Collectors Packager: my org Provides: project Requires: /bin/sh Requires: jre >= 1.5 Requires: BASE_PACKAGE PreReq: dependency Obsoletes: project autoprov: yes autoreq: yes BuildRoot: /home/build/.jenkins/jobs/rpm-maven-plugin/workspace/target/it/rpm-1/target/rpm/rpm-1/buildroot %description %install if [ -e $RPM_BUILD_ROOT ]; then mv /home/build/.jenkins/jobs/rpm-maven-plugin/workspace/target/it/rpm-1/target/rpm/rpm-1/tmp-buildroot/* $RPM_BUILD_ROOT else mv /home/build/.jenkins/jobs/rpm-maven-plugin/workspace/target/it/rpm-1/target/rpm/rpm-1/tmp-buildroot $RPM_BUILD_ROOT fi ln -s /usr/myusr/app $RPM_BUILD_ROOT/usr/myusr/app2 ln -s /tmp/myapp/somefile $RPM_BUILD_ROOT/tmp/myapp/somefile2 ln -s name.sh $RPM_BUILD_ROOT/usr/myusr/app/bin/oldname.sh %files %defattr(-,myuser,mygroup,-) %dir "/usr/myusr/app" "/usr/myusr/app2" "/tmp/myapp/somefile" "/tmp/myapp/somefile2" "/usr/myusr/app/lib" %attr(755,myuser,mygroup) "/usr/myusr/app/bin/start.sh" %attr(755,myuser,mygroup) "/usr/myusr/app/bin/filter-version.txt" %attr(755,myuser,mygroup) "/usr/myusr/app/bin/name.sh" %attr(755,myuser,mygroup) "/usr/myusr/app/bin/name-Linux.sh" %attr(755,myuser,mygroup) "/usr/myusr/app/bin/filter.txt" %attr(755,myuser,mygroup) "/usr/myusr/app/bin/oldname.sh" %dir "/usr/myusr/app/conf" %config "/usr/myusr/app/conf/log4j.xml" "/usr/myusr/app/conf/log4j.xml.deliver" %prep echo "hello from prepare" %pre -p /bin/sh #!/bin/sh if [ -s "/etc/init.d/myapp" ] then /etc/init.d/myapp stop rm /etc/init.d/myapp fi %post #!/bin/sh #create soft link script to services directory ln -s /usr/myusr/app/bin/start.sh /etc/init.d/myapp chmod 555 /etc/init.d/myapp %preun #!/bin/sh #the argument being passed in indicates how many versions will exist #during an upgrade, this value will be 1, in which case we do not want to stop #the service since the new version will be running once this script is called #during an uninstall, the value will be 0, in which case we do want to stop #the service and remove the /etc/init.d script. if [ "$1" = "0" ] then if [ -s "/etc/init.d/myapp" ] then /etc/init.d/myapp stop rm /etc/init.d/myapp fi fi; %triggerin -- dependency, dependency1 echo "hello from install" %changelog * Tue May 23 2000 Vincent Danen <[email protected]> 0.27.2-2mdk -update BuildPreReq to include rep-gtk and rep-gtkgnome * Thu May 11 2000 Vincent Danen <[email protected]> 0.27.2-1mdk -0.27.2 * Thu May 11 2000 Vincent Danen <[email protected]> 0.27.1-2mdk -added BuildPreReq -change name from Sawmill to Sawfish The problem I found is that the files (filter.txt in particular) exist after the generation process on an Ubuntu system but not on a SuSE system, which might be caused by different rpm versions? Currently we have an integration test which fails because the file does not exist (filter.txt under a buildroot folder?).

    Read the article

  • "Raid 0 SAS" versus "2nd generation SSD"

    - by Stefano
    Hi everybody, I was planning to buy a SAS system made of two 15k RPM disks in a RAID 0 configuration to give a boost to my OS and my apps... but after I saw that article on Coding Horror, I've started thinking that a new 2nd generation SSD could do the same job, or even better... Does anybody have any information to help me decide?

    Read the article

  • The Most Common and Least Used 4-Digit PIN Numbers [Security Analysis Report]

    - by Asian Angel
    How 'secure' is your 4-digit PIN number? Is your PIN a far too common one, or is it a bit more unique in comparison to others? The folks over at the Data Genetics blog have put together an interesting analysis report that looks at the most common and least used 4-digit PIN numbers chosen by people. Numerically based (0-9) 4-digit PINs only allow for a total of 10,000 possible combinations, so it stands to reason that some combinations are going to be far more common than others. The question is whether or not your personal PIN choices are among the commonly used ones or 'stand out' as being more unique. Note 1: Data Genetics used data condensed from released, exposed, and discovered password tables and security breaches to generate the analysis report. Note 2: The updates section at the bottom has some interesting tidbits concerning people's use of dates and certain words for PIN generation. The analysis makes for very interesting reading, so browse on over to get an idea of where you stand with regard to your personal PIN choices.

    Read the article

  • Caching strategies for entities and collections

    - by Rob West
    We currently have an application framework in which we automatically cache both entities and collections of entities at the business layer (using .NET cache). So the method GetWidget(int id) checks the cache using a key GetWidget_Id_{0} before hitting the database, and the method GetWidgetsByStatusId(int statusId) checks the cache using GetWidgets_Collections_ByStatusId_{0}. If the objects are not in the cache they are retrieved from the database and added to the cache. This approach is obviously quick for read scenarios, and as a blanket approach is quick for us to implement, but requires large numbers of cache keys to be purged when CRUD operations are carried out on entities. Obviously as additional methods are added this impacts performance and the benefits of caching diminish. I'm interested in alternative approaches to handling caching of collections. I know that NHibernate caches a list of the identifiers in the collection rather than the actual entities. Is this an approach other people have tried - what are the pros and cons? In particular I am looking for options that optimise performance and can be implemented automatically through boilerplate generated code (we have our own code generation tool). I know some people will say that caching needs to be done by hand each time to meet the needs of the specific situation but I am looking for something that will get us most of the way automatically.
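
    The identifier-caching idea mentioned above, sketched in Java purely for illustration (the WidgetRepository interface and the key formats are assumptions, not part of the framework described): the collection key stores only ids and each entity is cached once under its own key, so updating an entity does not require purging every collection key.

      import java.util.ArrayList;
      import java.util.List;
      import java.util.Map;
      import java.util.concurrent.ConcurrentHashMap;

      class WidgetCache {
          private final Map<String, Object> cache = new ConcurrentHashMap<>();
          private final WidgetRepository repository;   // hypothetical data-access interface

          WidgetCache(WidgetRepository repository) { this.repository = repository; }

          Widget getWidget(int id) {
              // One cache entry per entity, keyed by id.
              return (Widget) cache.computeIfAbsent("Widget_Id_" + id, k -> repository.load(id));
          }

          @SuppressWarnings("unchecked")
          List<Widget> getWidgetsByStatusId(int statusId) {
              // The collection key caches only the ids...
              List<Integer> ids = (List<Integer>) cache.computeIfAbsent(
                      "Widgets_Ids_ByStatusId_" + statusId, k -> repository.findIdsByStatus(statusId));
              // ...and the entities are resolved through their own per-entity keys.
              List<Widget> result = new ArrayList<>();
              for (int id : ids) result.add(getWidget(id));
              return result;
          }
      }

      interface WidgetRepository {                     // assumed interface for the sketch
          Widget load(int id);
          List<Integer> findIdsByStatus(int statusId);
      }

      record Widget(int id, int statusId, String name) {}

    Note that the id lists themselves still go stale when a widget changes status, so collection keys still need some invalidation; the win is that plain entity updates no longer touch them.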

    Read the article

  • Can I run into legal issues with random names?

    - by Nathan Sabruka
    I'm currently building a game whose NPCs are going to be assigned a random gender and a random name for the right gender. To do this I will be using a "database" of names (actually a text file with tuples). There would also be a list of last names, which will be added to the first name, also randomly. My question is the following. Suppose one such random name is "George Bush", and this person has been randomly assigned the job of president. As you can see, this could easily be seen as having been "copied" from a real-life person. The main issue is this. Names will be randomly generated, yes, but the seed for random-number generation will be constant. In other words, the name of an NPC would be randomly generated, i.e. I wouldn't choose it, but it would be the same for every player. Could this get me in trouble? We cannot verify all possible names, since the number of generated NPCs could potentially be limitless (new NPCs are created whenever needed).
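
    For what it's worth, a tiny Java sketch of the constant-seed behaviour described above (the name lists are hypothetical stand-ins for the tuple file): with a fixed seed, java.util.Random yields the same sequence on every run, so every player would indeed see the same NPC names.

      import java.util.List;
      import java.util.Random;

      public class NpcNamer {
          private static final List<String> FIRST = List.of("George", "Anna", "Louis");
          private static final List<String> LAST  = List.of("Bush", "Smith", "Armstrong");

          public static void main(String[] args) {
              Random rng = new Random(42L);            // constant seed -> deterministic names
              for (int i = 0; i < 3; i++) {
                  String name = FIRST.get(rng.nextInt(FIRST.size())) + " "
                              + LAST.get(rng.nextInt(LAST.size()));
                  System.out.println(name);            // identical output on every run
              }
          }
      }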

    Read the article

  • Seeking a solution for printing/reporting in .NET

    - by Parhs
    I am developing an application that prints, in separate threads, in extreme cases about 20-25 pages per minute to various thermal printers. Currently the templates for these are XAML XPS documents. All the printers have graphics drivers that support EMF/GDI printing, so GDI-to-EMF rendering is done by the operating system, resulting in slower performance. Sending raw text for printing is another good solution, but it doesn't always work, because some clients have old Chinese thermal printers that nobody supports, so it is impossible to change the code page / emulation. Also, most computers running my software are low-end Atom CPUs. So I am thinking of returning to GDI/EMF printing and having both text-only reports and EMF reports. Another reason I want EMF is that here receipts are signed by an electronic fiscal memory device. Most of these don't do a good job of extracting text from XPS, as they don't follow the standard but rather the way Windows converts GDI to XPS. Even in text-only mode, some of them don't support all character encodings, and it is impossible to send a paper-cut command after the signing. I know that using a reporting engine would solve the rendering problem, but I don't want to buy one. All I want is to be able to show tabular data and insert an image and replaced text. I know there is StringTemplate, which could do the template generation, but the problem is that I would somehow have to parse the template and render it using GDI commands. Is there any other solution/approach for this? Or is there anything ready-made?

    Read the article

  • MathWorks offers new parallel computing functionality for faster simulation and improved code generation

    MathWorks offers new parallel computing functionality for faster simulation and improved code generation through the Parallel Computing Toolbox. MathWorks announced today a new capability that speeds up code generation for systems that use model referencing. This improvement is made possible by Real-Time Workshop, a code generation tool that now takes advantage of the performance-enhancement tools of the Parallel Computing Toolbox and the MATLAB Distributed Computing Server (MDCS). The feature also broadens parallel computing support in other MathWorks tools to improve...

    Read the article

  • Distributed sequence number generation?

    - by Jon
    I've generally implemented sequence number generation using database sequences in the past, e.g. using the Postgres SERIAL type: http://neilconway.org/docs/sequences/ I'm curious, though, how to generate sequence numbers for large distributed systems where there is no database. Does anybody have any experience with, or suggestions for, a best practice for achieving sequence number generation in a thread-safe manner for multiple clients?
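
    One commonly cited database-free approach (popularised by Twitter's Snowflake) combines a timestamp, a node id and a per-node counter. A minimal illustrative sketch in Java follows; the epoch, bit widths and node-id range are assumptions, not a vetted design:

      // 41 bits of timestamp, 10 bits of node id, 12 bits of per-millisecond sequence.
      public final class SequenceGenerator {
          private static final long CUSTOM_EPOCH = 1_400_000_000_000L; // arbitrary fixed epoch (ms)
          private final long nodeId;                   // must be unique per process, 0..1023
          private long lastTimestamp = -1L;
          private long sequence = 0L;

          public SequenceGenerator(long nodeId) {
              if (nodeId < 0 || nodeId > 1023) throw new IllegalArgumentException("nodeId out of range");
              this.nodeId = nodeId;
          }

          public synchronized long nextId() {          // thread-safe within one process
              long now = System.currentTimeMillis();
              if (now == lastTimestamp) {
                  sequence = (sequence + 1) & 0xFFF;   // 12-bit counter
                  if (sequence == 0) {                 // counter exhausted for this millisecond
                      while (now <= lastTimestamp) now = System.currentTimeMillis();
                  }
              } else {
                  sequence = 0;
              }
              lastTimestamp = now;
              return ((now - CUSTOM_EPOCH) << 22) | (nodeId << 12) | sequence;
          }
      }

    Ids generated this way are roughly time-ordered and unique as long as each client is given a distinct node id (for example from configuration or a coordination service), which is what makes the scheme workable without a central database.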

    Read the article

  • AST generation for an application developed in both Visual Basic and C#

    - by Dev
    Hi, I'm currently trying to understand an application developed in both Visual Basic and C#. Running through the code is getting tough, as it is around 50 KLOC, so I'm planning to generate an AST (abstract syntax tree). Will it be possible to generate one for both languages together? At least a call graph would be helpful (but I can't find any tool which works for both languages). Please let me know if this question is confusing. Thanks in advance, Dev

    Read the article

  • Data Generation Plan Missing from VS2010 Pro

    - by chobo2
    Hi, I was following this tutorial http://nepomucenobr.com.br/blog/post/Generating-e2809cdummy-datae2809d-with-Visual-Studio.aspx and I got to the point where I should add a data generation plan file. Yet when I click on the folder there is no data generation plan file, and I don't know why. Do I have to install it separately, or is it because I am using 2005 Express?

    Read the article

  • How should I start refactoring my mostly-procedural C++ application?

    - by oob
    We have a program written in C++ that is mostly procedural, but we do use some C++ containers from the standard library (vector, map, list, etc.). We are constantly making changes to this code, so I wouldn't call it a stagnant piece of legacy code that we can just wrap up. There are a lot of issues with this code making it harder and harder for us to make changes, but I see the three biggest issues as being: (1) many of the functions do more (way more) than one thing; (2) we violate the DRY principle left and right; (3) we have global variables and global state up the wazoo. I was thinking we should attack areas 1 and 2 first. Along the way, we can "de-globalize" our smaller functions from the bottom up by passing in information that is currently global as parameters to the lower-level functions from the higher-level functions, and then concentrate on figuring out how to remove the need for global variables as much as possible. I just finished reading Code Complete 2 and The Pragmatic Programmer, and I learned a lot, but I am feeling overwhelmed. I would like to implement unit testing, change from a procedural to an OO approach, automate testing, use a better logging system, fully validate all input, implement better error handling and many other things, but I know if we started all this at once, we would screw ourselves. I am thinking the three I listed are the most important to start with. Any suggestions are welcome. We are a team of two programmers, mostly with experience with in-house scripting. It is going to be hard to justify taking the time to refactor, especially if we can't bill the time to a client. Believe it or not, this project has been successful enough to keep us busy full time and also keep several consultants busy using it for client work.

    Read the article

  • ULogd2.x - Documents - IPFIX data generation

    - by Gomathivinayagam
    I would like to generate IPFIX data from the packets that are coming to my local system, as part of an experiment. It seems ulogd is a good tool to do that, and I am able to capture PCAP data, but there is very little documentation available on ulogd 2.x about IPFIX data generation (there are very few examples provided in ulogd.conf). Can you provide any links that describe how to generate IPFIX data using ulogd 2.x? 1) What are the options available? I saw there is a polling interval configuration, but I have no idea how it works. 2) If I set hash_enable = 0 and uncomment the polling_interval value, I'm getting an exception saying the NFCT plugin requires a hash table, even though I have specified hash_buckets and hash_max_entries. Could you help with this? 3) In general, I would like to know how the NFCT plugin works in ulogd 2.x. I sent mail to the ulogd mailing list, but there are no replies. Could you shed some light?

    Read the article

  • Suggestions for interesting Fortran adventure?

    - by Gnatz
    I've been listening to some podcasts lately that have sparked my interest in revisiting a low-level procedural programming language like Fortran. However, after downloading a compiler and doing some basic language exploration, I've been unable to think of a fun, interesting application to compose. Does anybody have, or know of, a resource that I could tap into for some fun and interesting programming scenarios for a low-level language like Fortran?

    Read the article

  • Indesign Import XML into Automatic Page generation, data merge

    - by taudep
    I've created some InDesign Pages that I want to use as templates. I've created an XML file with all the appropriate data. I want to merge the XML data with the InDesign page and have a few hundred pages automatically generated. I've been reading online and working with InDesign's "Import XML" features without any luck. The documentation has been pretty poor for me. And Google searches haven't returned much fruitful. Here are my present steps: I create a Master Page of my template I add a bunch of text frames where I want the imported data from the XML file to be places I open the "Tags" window and Import and XML file I mark my text frames in the Master Document with the appropriate tags I then add a lot of pages (like 200) to the document Then I use "Import XML" to try and get the data brought in and filled across all 200 pages. This is where I fail. There's something I'm missing. It might be that InDesign doesn't work as I'm expecting... Does anyone have any good tips for mail-merge like functionality with an XML document and auto-generation of InDesign pages? By the way, here's an example of Adobe's great documentation for merging repeated XML elements. There's got to be more... InDesign CS4 Docs: XML-Importing XML-Working with Repeating Data Here's some of the sample XML, notice the ITEM will repeat. I've also truncated the data in the "desc" tag: <output> <item> <user_name>taude</user_name> <date>2009-02-21</date> <title>Wishful Thinking</title> <desc>Skiing up in Vermont on a beautiful day. This photo of</desc> <thumbnail>http://www.blipfoto.com/thumbs/5371/2009/big/color/96104200949a162672e1996.15963073.jpeg</thumbnail> </item> <item> <user_name>taude</user_name> <date>2009-02-22</date> <title>Skiing Self Portrait</title> <desc>I was inspired by ML's self-portrait while </desc> <thumbnail>http://www.blipfoto.com/thumbs/5371/2009/big/color/36547696749a2c5782308e0.91477014.jpeg</thumbnail> </item> </output> Here's what my imported XML looks like with the InDesign Structure:

    Read the article
