Search Results

Search found 9894 results on 396 pages for 'primary interop assembly'.

  • JSON or YAML encoding in GWT/Java on both client and server

    - by KennethJ
    I'm looking for a super simple JSON or YAML library (not particularly bothered which one) written in Java that can be used both in GWT on the client and in its original Java form on the server. What I'm trying to do is this: I have my models, which are shared between the client and the server, and these are the primary source of data interchange. I want to design the web service in between to be as simple as possible, and decided to take the RESTful approach. My problem is that I know our application will grow substantially in the future, and writing all the getters, setters, serialization, factories, etc. by hand fills me with absolute dread. So in order to avoid it, I decided to implement annotations to keep track of attributes on the models. The reason I can't just serialize everything directly, using GWT's own serialization or one that works through reflection, is that we need a certain amount of logic going on in the serialization process, i.e. whether references to other models get serialized during the serialization of the original model, or whether just an ID is passed, and generally simple things like that. I've then written an annotation processor to preprocess my shared models and generate an implementing class with all the getters, setters, serialization, lazy-loading, etc. To make a long story short, I need some type of simple YAML or JSON library which allows me to encode and decode manually, so I can generate this code through my annotation processor. I have had a look around the interwebs, but every single library I ran into supported some reflection, which, while all fine and dandy, makes it pretty much useless for GWT. And in the case of GWT's own JSON library, it uses JSNI for speed purposes, making it useless server side. One solution I did think about involved writing two sets of serialization methods on the models, one for the client and one for the server, but I'd rather not do that. Also, I'm pretty new to GWT, and even though I have done a lot of Java, it was back in the 1.2 days, so it's a bit rusty. So if you think I'm going about this problem completely the wrong way, I'm open to suggestions.

  • Working with Hibernate/DAO problems

    - by Gandalf StormCrow
    Hello everyone, here is my DAO class: public class UsersDAO extends HibernateDaoSupport { private static final Log log = LogFactory.getLog(UsersDAO.class); protected void initDao() { //do nothing } public void save(User transientInstance) { log.debug("saving Users instance"); try { getHibernateTemplate().saveOrUpdate(transientInstance); log.debug("save successful"); } catch (RuntimeException re) { log.error("save failed", re); throw re; } } public void update(User transientInstance) { log.debug("updating User instance"); try { getHibernateTemplate().update(transientInstance); log.debug("update successful"); } catch (RuntimeException re) { log.error("update failed", re); throw re; } } public void delete(User persistentInstance) { log.debug("deleting Users instance"); try { getHibernateTemplate().delete(persistentInstance); log.debug("delete successful"); } catch (RuntimeException re) { log.error("delete failed", re); throw re; } } public User findById( java.lang.Integer id) { log.debug("getting Users instance with id: " + id); try { User instance = (User) getHibernateTemplate() .get("project.hibernate.Users", id); return instance; } catch (RuntimeException re) { log.error("get failed", re); throw re; } } } Now I wrote a test class (not a JUnit test) to check that everything is working. My user has these fields in the database: userID, which is a 5-character-long string and the unique/primary key, and fields such as address, dob, etc. (15 columns total in the database table). Now in my test class I instantiated a User and added the values, like: User user = new User(); user.setAddress("some address"); and so I did for all 15 fields. At the end of assigning data to the User object I called the DAO to save it to the database: UsersDao.save(user); and save works just perfectly. My question is, how do I update/delete users using the same logic? For example, I tried this (to delete a user from the users table): User user = new User(); user.setUserID("1s54f"); // which is the unique key for users, no two keys are the same UsersDao.delete(user); I wanted to delete the user with this key, but it obviously doesn't work that way. Can someone please explain how to do these operations? Thank you.
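
    A sketch of the missing piece: delete() and update() act on the identifier Hibernate knows from the mapping, so the reliable pattern is to load the persistent instance by its key first. This assumes userID (e.g. "1s54f") is the mapped identifier property of User; the findById shown above would then take a String rather than an Integer:

        // Added to UsersDAO: load by primary key, then delete.
        public void deleteById(String userId) {
            log.debug("deleting Users instance with id: " + userId);
            try {
                User instance = (User) getHibernateTemplate().get(User.class, userId);
                if (instance != null) {
                    getHibernateTemplate().delete(instance);
                }
                log.debug("delete successful");
            } catch (RuntimeException re) {
                log.error("delete failed", re);
                throw re;
            }
        }

    Updates work the same way: fetch the instance, change only the fields you care about, and pass that loaded object to update() or saveOrUpdate(), so all the other columns keep their current values.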

  • Many-to-many performance concerns with Fluent NHibernate

    - by Ciel
    I have a situation where I have several many-to-many associations, upwards of 12 to 15 of them. Reading around, I've seen that it's generally believed that many-to-many associations are not 'typical', yet they are the only way I have been able to create the associations appropriate for my case, so I'm not sure how to optimize any further. Here is my basic scenario. class Page { IList<Tag> Tags { get; set; } IList<Modification> Modifications { get; set; } IList<Aspect> Aspects { get; set; } } This is one of my 'core' classes, and coincidentally one of my core tables. Virtually half of the objects in my code can have an IList<Page>, and some of them have IList<T> where T has its own IList<Page>. As you can see, from an object-oriented standpoint, this is not really a problem. But from a database standpoint this begins to introduce a lot of junction tables. So far it has worked fine for me, but I am wondering if anyone has any ideas on how I could improve on this structure. I've spent a long time thinking, and in order to achieve the appropriate level of association required, I cannot think of any way to improve it. The only thing I have come up with is to make intermediate classes for each object that has an IList<Page>, but that doesn't really do anything that the HasManyToMany does not already do except introduce another class. It does not extend the functionality and, from what I can tell, it does not improve performance. Any thoughts? I am also concerned about primary key limits in this scenario. Almost everything needs to be able to have these properties, but the Pages cannot be unique to each object, because they are going to be frequently shared and joined between multiple objects. All relationships are one-sided. (That is, a Page has no knowledge of what owns it.) Because of this, I also have no Inverse() mapped HasManyToMany collections. Also, I have read the similar question: Usage of ORMs like NHibernate when there are many associations - performance concerns. But it really did not answer my concerns.

  • LINQ Query Returning Multiple Copies Of First Result

    - by Mike G
    I'm trying to figure out why a simple query in LINQ is returning odd results. I have a view defined in the database. It basically brings together several other tables and does some data munging. It really isn't anything special except for the fact that it deals with a large data set and can be a bit slow. I want to query this view based on a long. Two sample queries below show different queries to this view. var la = Runtime.OmsEntityContext.Positions.Where(p => p.AccountNumber == 12345678).ToList(); var deDa = Runtime.OmsEntityContext.Positions.Where(p => p.AccountNumber == 12345678).Select(p => new { p.AccountNumber, p.SecurityNumber, p.CUSIP }).ToList(); The first one should hand back a List. The second one will be a list of anonymous objects. When I do these queries in Entity Framework, the first one will hand me back a list of results where they're all exactly the same. The second query will hand me back data where the account number is the one that I queried and the other values differ. This seems to happen on a per-account-number basis, i.e. if I were to query for one account number or another, all the Position objects for one account would have the same value (the first one in the list of Positions for that account) and the second account would have a set of Position objects that all had the same value (again, the first one in its list of Position objects). I can write SQL that is in effect the same as either of the two EF queries. They both come back with results (say four) that show the correct data, one account number with different security numbers. Why does this happen??? Is there something that I could be doing wrong so that if I had four results for the first query above, the first record's data also appears in the 2nd-4th objects??? I cannot fathom what would/could be causing this. I've searched Google for all kinds of keywords and haven't seen anyone with this issue. We partial class out the Positions class for added functionality (smart object) and some smart properties. There are even some constructors that provide some view model type support. None of this is invoked in the request (I'm 99% sure of this). However, we do this same pattern all over the app. The only thing I can think of is that the mapping in the EDMX is screwy. Is there a way that this would happen if the "primary keys" in the EDMX were not in fact unique given the way the view is constructed? I'm thinking that the dev who imported this model into the EDMX let the designer auto-select what would be unique. Any help would give a haggard dev some hope!
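
    That last guess is the usual culprit: when the designer infers an entity key that is not actually unique per row of the view, EF's identity map sees the "same" key on every row and hands back the first materialized object each time (the anonymous-type projection bypasses entity materialization, which is why it looks correct). A quick way to confirm, sketched against an ObjectContext-era model where Positions is an ObjectSet/ObjectQuery exposing MergeOption:

        // With NoTracking the materializer skips identity resolution, so rows
        // that share a bogus inferred key are no longer collapsed into copies
        // of the first object. (MergeOption lives in System.Data.Objects.)
        Runtime.OmsEntityContext.Positions.MergeOption = MergeOption.NoTracking;
        var la = Runtime.OmsEntityContext.Positions
            .Where(p => p.AccountNumber == 12345678)
            .ToList();

    If that returns distinct rows, edit the EDMX so the entity key covers a genuinely unique column combination.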

  • Zend Table Relationship Modeling with Composite Key

    - by emeraldjava
    I have a table with a composite primary key using four columns:

        mysql> describe leaguesummary;
        +---------------------+------------------+------+-----+---------+----------------+
        | Field               | Type             | Null | Key | Default | Extra          |
        +---------------------+------------------+------+-----+---------+----------------+
        | leagueid            | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
        | leaguetype          | enum('I','T')    | NO   | PRI | NULL    |                |
        | leagueparticipantid | int(10) unsigned | NO   | PRI | NULL    |                |
        | leaguestandard      | int(10) unsigned | NO   |     | NULL    |                |
        | leaguedivision      | varchar(5)       | NO   | PRI | NULL    |                |
        | leagueposition      | int(10) unsigned | NO   |     | NULL    |                |
        +---------------------+------------------+------+-----+---------+----------------+

    I have the league object modelled like so (all plain enough mappings):

        <?php
        class Model_DbTable_League extends Zend_Db_Table_Abstract
        {
            protected $_name = 'league';
            protected $_primary = 'id';
            protected $_dependentTables = array('Model_DbTable_LeagueSummary');

    And I've started like this on the new model class. I've mapped a simple reference map which returns all rows linked to the league id.

        // http://files.zend.com/help/Zend-Framework/zend.db.table.relationships.html
        // http://naneau.nl/2007/04/21/a-zend-framework-tutorial-part-one/
        class Model_DbTable_LeagueSummary extends Zend_Db_Table_Abstract
        {
            protected $_name = "leaguesummary";
            protected $_primary = array('leagueid', 'leaguetype', 'leagueparticipantid', 'leaguedivision');
            protected $_referenceMap = array(
                'Summary' => array(
                    'columns'       => array('leagueid'),
                    'refTableClass' => 'Model_DbTable_League',
                    'refColumns'    => array('id')
                ),
                .....
            );
        }
        ?>

    The simple case works when called from my controller:

        public function listAction()
        {
            // action body
            $leagueTable = new Model_DbTable_League();
            $this->view->leagues = $leagueTable->getLeagues();
            $league = $leagueTable->getLeague(6); // works
            $summary = $league->findDependentRowset('Model_DbTable_LeagueSummary', 'Summary');
            Zend_Debug::dump($summary, "", true);

    I'm not sure how I can define extra _referenceMap keys which will take extra constraint key values. I would like to be able to define a set called 'MenA' in which the type and division values are hardcoded, and the league id is taken from the initial rowset:

        'MenA' => array(
            'columns'       => array('leagueid', 'leaguetype', 'leaguedivision'),
            'refTableClass' => 'Model_DbTable_League',
            'refColumns'    => array("id", "I", "A")
        )

    Is this style of mapping possible, i.e. hardcoding the values into the 'refColumns'? The second crazy idea I had was to pass the variable values in as part of the third param of the findDependentRowset() method:

        $menA = $league->findDependentRowset('Model_DbTable_LeagueSummary', 'MenA', array("I", "A"));

    Any suggestions on how I might use the Zend DB Table Relationship mapping correctly to do this would be appreciated. I'm not interested in the plain, old and ugly $db->select(a,b,c)->where(..) style solution.

  • File sizing issue in DOS/FAT

    - by Heather
    I've been tasked with writing a data collection program for a Unitech HT630, which runs a proprietary DOS operating system that can run executables compiled for 16-bit MS DOS with some restrictions. I'm using the Digital Mars C/C++ compiler, which is working well thus far. One of the application requirements is that the data file must be human-readable plain text, meaning the file can be imported into Excel or opened by Notepad. I'm using a variable length record format much like CSV that I've successfully implemented using the C standard library file I/O functions. When saving a record, I have to calculate whether the updated record is larger or smaller than the version of the record currently in the data file. If larger, I first shift all records immediately after the current record forward by the size difference calculated before saving the updated record. EOF is extended automatically by the OS to accommodate the extra data. If smaller, I shift all records backwards by my calculated offset. This is working well, however I have found no way to modify the EOF marker or file size to ignore the data after the end of the last record. Most of the time records will grow in size because the data collection program will be filling some of the empty fields with data when saving a record. Records will only shrink in size when a correction is made on an existing entry, or on a normal record save if the descriptive data in the record is longer than what the program reads in memory. In the situation of a shrinking record, after the last record in the file I'm left with whatever data was sitting there before the shift. I have been writing an EOF delimiter into the file after a "shrinking record save" to signal where the end of my records are and space-filling the remaining data, but then I no longer have a clean file until a "growing record save" extends the size of the file over the space-filled area. The truncate() function in unistd.h does not work (I'm now thinking this is for *nix flavors only?). One proposed solution I've seen involves creating a second file and writing all the data you wish to save into that file, and then deleting the original. Since I only have 4MB worth of disk space to use, this works if the file size is less than 2MB minus the size of my program executable and configuration files, but would fail otherwise. It is very likely that when this goes into production, users would end up with a file exceeding 2MB in size. I've looked at Ralph Brown's Interrupt List and the interrupt reference in IBM PC Assembly Language and Programming and I can't seem to find anything to update the file size or similar. Is reducing a file's size without creating a second file even possible in DOS?
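
    For the record-shrinking case there is a documented DOS idiom: INT 21h function 40h ("write to file via handle") with a byte count of zero sets EOF at the current file position, so seeking to the end of the last record and issuing a zero-length write truncates the file in place, with no second file needed. A sketch, assuming your compiler's <dos.h> provides int86() in the usual 16-bit form (many DOS C runtimes also wrap this behavior as chsize()):

        #include <stdio.h>
        #include <io.h>
        #include <dos.h>

        /* Truncate an open file (DOS handle, e.g. from open()) at pos. */
        int dos_truncate_at(int handle, long pos)
        {
            union REGS r;

            if (lseek(handle, pos, SEEK_SET) == -1L)
                return -1;

            r.h.ah = 0x40;    /* write via handle           */
            r.x.bx = handle;  /* DOS file handle            */
            r.x.cx = 0;       /* zero bytes => set EOF here */
            int86(0x21, &r, &r);
            return r.x.cflag ? -1 : 0;  /* carry flag set on error */
        }

    In Ralph Brown's Interrupt List this is under INT 21/AH=40h, where the CX=0 truncate/extend behavior is called out explicitly.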

  • How to fine tune FluentNHibernate's auto mapper?

    - by Venemo
    Okay, so yesterday I managed to get the latest trunk builds of NHibernate and FluentNHibernate to work with my latest little project. (I'm working on a bug tracking application.) I created a nice data access layer using the Repository pattern. I decided that my entities are nothing special, and also that with the current maturity of ORMs, I don't want to hand-craft the database. So, I chose to use FluentNHibernate's auto mapping feature with NHibernate's "hbm2ddl.auto" property set to "create". It really works like a charm. I put the NHibernate configuration in my app domain's config file, set it up, and started playing with it. (For the time being, I created some unit tests only.) It created all tables in the database, and everything I need for it. It even mapped my many-to-many relationships correctly. However, there are a few small glitches: All of the columns created in the DB allow null. I understand that it can't predict which properties should allow null and which shouldn't, but at least I'd like to tell it that it should allow null only for those types for which null makes sense in .NET (e.g. non-nullable value types shouldn't allow null). All of the nvarchar and varbinary columns it created have a default length of 255. I would prefer to have them on max instead of that. Is there a way to tell the auto mapper about the two simple rules above? If the answer is no, will it work correctly if I modify the tables it created? (So, if I set some columns not to allow null, and change the allowed length for some others, will it correctly work with them?) EDIT: I managed to achieve the above by using Fluent NHibernate's convention API. Thanks to everyone who helped! However, there is one more thing: after checking out the convention API, I really would like my IDs to be called "ID", not "Id", but it seems to me that the PrimaryKey.Name.Is(x => "ID") is not working at all. If I add it to the conventions collection and rewrite my entities' properties to "ID" instead of "Id", it throws an exception that there is no primary key mapped. Any thoughts on this?
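
    One possible explanation, sketched against the newer automapping API: the automapper decides which property is the identifier before the naming conventions run, so PrimaryKey.Name.Is() renames the column but doesn't change which property gets discovered. Overriding the discovery rule itself should help (the class and method below come from FluentNHibernate's automapping configuration; treat the exact overload as an assumption against your trunk build, and Bug is a stand-in for one of your entities):

        // Teach the automapper that "ID" (not "Id") is the identifier property.
        public class MyAutomappingConfiguration : DefaultAutomappingConfiguration
        {
            public override bool IsId(Member member)
            {
                return member.Name == "ID";
            }
        }

        // usage:
        var model = AutoMap.AssemblyOf<Bug>(new MyAutomappingConfiguration());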

  • A better way to delete a list of elements from multiple tables

    - by manyxcxi
    I know this looks like a 'please write the code' request, but some basic pointers/principles for doing this the right way should be enough to get me going. I have the following stored procedure: CREATE PROCEDURE `TAA`.`runClean` (IN idlist varchar(1000)) BEGIN DECLARE EXIT HANDLER FOR NOT FOUND ROLLBACK; DECLARE EXIT HANDLER FOR SQLEXCEPTION ROLLBACK; DECLARE EXIT HANDLER FOR SQLWARNING ROLLBACK; START TRANSACTION; DELETE FROM RunningReports WHERE run_id IN (idlist); DELETE FROM TMD_INDATA_INVOICE WHERE run_id IN (idlist); DELETE FROM TMD_INDATA_LINE WHERE run_id IN (idlist); DELETE FROM TMD_OUTDATA_INVOICE WHERE run_id IN (idlist); DELETE FROM TMD_OUTDATA_LINE WHERE run_id IN (idlist); DELETE FROM TMD_TEST WHERE run_id IN (idlist); DELETE FROM RunHistory WHERE id IN (idlist); COMMIT; END $$ It is called by a PHP script to clean out old run history. It is not particularly efficient, as you can see, and I would like to speed it up. The PHP script gathers the ids to remove from the tables with the following query: $query = "SELECT id, stop_time FROM RunHistory WHERE config_id = $configId AND save = 0 AND NOT(stop_time IS NULL) ORDER BY stop_time"; It keeps the last five run entries and deletes all the rest. So using this query to bring back all the IDs, it determines which ones to delete and keeps the 'newest' five. After gathering the IDs it sends them to the stored procedure to remove them from the associated tables. I'm not very good with SQL, but I ASSUME that using an IN statement and not joining these tables together is probably the least efficient way I can do this, but I don't know enough to ask anything but "how do I do this better?" If possible, I would like to do this all in my stored procedure using a query to gather all the IDs except for the five 'newest', then delete them. Another twist: run entries can be marked save (save = 1) and should not be deleted. The RunHistory table looks like this: CREATE TABLE `TAA`.`RunHistory` ( `id` int(11) NOT NULL auto_increment, `start_time` datetime default NULL, `stop_time` datetime default NULL, `config_id` int(11) NOT NULL, [...] `save` tinyint(1) NOT NULL default '0', PRIMARY KEY (`id`) ) ENGINE=InnoDB AUTO_INCREMENT=0 DEFAULT CHARSET=utf8;
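
    One correctness note before performance: in MySQL, a varchar parameter is a single value, so WHERE run_id IN (idlist) compares run_id to the whole string '12,13,14' (which coerces to just 12). FIND_IN_SET treats the parameter as a comma-separated list instead, and the "keep the newest five" selection can stay in SQL. A sketch, with the config id hardcoded for illustration:

        -- list-aware match for the existing procedure parameter
        DELETE FROM RunningReports WHERE FIND_IN_SET(run_id, idlist);

        -- gather-and-delete in one statement; the derived table works around
        -- MySQL's "can't select from the table being deleted" restriction
        DELETE FROM RunHistory
        WHERE config_id = 42 AND save = 0 AND stop_time IS NOT NULL
          AND id NOT IN (
              SELECT id FROM (
                  SELECT id FROM RunHistory
                  WHERE config_id = 42 AND save = 0 AND stop_time IS NOT NULL
                  ORDER BY stop_time DESC
                  LIMIT 5
              ) AS newest
          );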

  • Gotchas INSERTing into SQLite on Android?

    - by paul.meier
    Hi friends, I'm trying to set up a simple SQLite database in Android, handling the schema via a subclass of SQLiteOpenHelper. However, when I query my tables, the columns I think I've inserted are never present. Namely, in SQLiteOpenHelper's onCreate(SQLiteDatabase db) method, I use db.execSQL() to run CREATE TABLE commands, then have tried both db.execSQL and db.insert() to run INSERT commands on the tables I've just created. This appears to run fine, but when I try to query them I always get 0 rows returned (for debugging, the queries I'm running are simple SELECT * FROM table and checking the Cursor's getCount()). Anybody run into anything like this before? These commands seem to run on command-line sqlite3. Are there gotchas that I'm missing (e.g. INSERTs must/must not be semicolon terminated, or some issue involving multiple tables)? I've attached some of the code below. Thanks for your time, and let me know if I can clarify further. @Override public void onCreate(SQLiteDatabase db) { db.execSQL("CREATE TABLE "+ LEVEL_TABLE +" (" + " "+ _ID +" INTEGER PRIMARY KEY AUTOINCREMENT," + " level TEXT NOT NULL,"+ " rows INTEGER NOT NULL,"+ " cols INTEGER NOT NULL);"); db.execSQL("CREATE TABLE "+ DYNAMICS_TABLE +" (" + " level_id INTEGER NOT NULL," + " row INTEGER NOT NULL,"+ " col INTEGER NOT NULL,"+ " type INTEGER NOT NULL);"); db.execSQL("CREATE TABLE "+ SCORE_TABLE +" (" + " level_id INTEGER NOT NULL," + " score INTEGER NOT NULL,"+ " date_achieved DATE NOT NULL,"+ " name TEXT NOT NULL);"); this.enterFirstLevel(db); } And a sample of the insert code I'm currently using, which gets called in enterFirstLevel() (some values hard-coded just to get it running...): private void insertDynamic(SQLiteDatabase db, int row, int col, int type) { ContentValues values = new ContentValues(); values.put("level_id", "1"); values.put("row", Integer.toString(row)); values.put("col", Integer.toString(col)); values.put("type", Integer.toString(type)); db.insertOrThrow(DYNAMICS_TABLE, "col", values); } Finally, query code looks like this: private Cursor fetchLevelDynamics(int id) { SQLiteDatabase db = this.leveldata.getReadableDatabase(); try { String fetchQuery = "SELECT * FROM " + DYNAMICS_TABLE; String[] queryArgs = new String[0]; Cursor cursor = db.rawQuery(fetchQuery, queryArgs); Activity activity = (Activity) this.context; activity.startManagingCursor(cursor); return cursor; } finally { db.close(); } }
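
    Two things stand out. First, onCreate() only runs when the database file is first created; if an earlier version of the schema ever existed on the device or emulator, the inserts won't run again until you bump the version number (triggering onUpgrade) or clear the app's data. Second, fetchLevelDynamics() closes the database in finally, but a SQLiteCursor runs its query lazily, so by the time getCount() is called there may be nothing left to read. A sketch of the fetch without the close:

        // Leave the database open while the cursor is alive; startManagingCursor
        // ties the cursor's lifetime to the activity, so close the helper/db in
        // the activity's onDestroy() instead of here.
        private Cursor fetchLevelDynamics(int id) {
            SQLiteDatabase db = this.leveldata.getReadableDatabase();
            Cursor cursor = db.rawQuery("SELECT * FROM " + DYNAMICS_TABLE, null);
            Activity activity = (Activity) this.context;
            activity.startManagingCursor(cursor);
            return cursor;
        }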

  • How to create a system-wide independent universal counter object, primarily for database keys?

    - by andora
    I would like to create/use a system-wide independent universal 'counter object' that can be called via COM in a thread-safe manner. The counter object will be passed an ID to identify which counter to return, handle the counting, 'persist' the count (occasionally), have reasonable performance (as fast as possible), perhaps capable of 1000 counts per second or better (1 ms), and be accessible cross-process/out-of-process. The current count status must be persisted between object restarts/shutdowns. The counter object is likely to be a 'singleton' type object implemented in some form of free-threaded dictionary, containing maybe 10 counters (perhaps 50 max). The count needs to be monotonic and consistent (i.e. guaranteed unique sequential values). Each counter should have a few methods, like reset, inc, dec, set, clear, remove. As a luxury, I would like to have a variable increment (i.e. a 'step by' value). To support thread-safety, perhaps some form of critical-section or mutex call. It just needs to return a long/4-byte signed integer. I really want something that can be called from anywhere, including VBScript, so I figure COM is my preferred solution. The primary use of this is for database keys. I am unable to use autoinc or guid type keys and have ruled out database-generated counting systems at this point. I've spent days researching this and I have really struggled to find a solution. The best I can find is a free-threaded dictionary object that can be instantiated using COM+ from Motobit - it seems to offer all the 'basics' and I guess I could create some form of wrapper for this. So, here are my questions: Does such a general-purpose counter object already exist? Can you direct me to it? (MS did do an IIS/ASP object called 'MSWC.Counter', but this isn't a cross-process/out-of-process component and isn't thread-safe. But if it was, it would do!) What is the best way of creating such a component? (I'd prefer VB6 right now [don't ask!] but can do it in VB.NET 2005 if I had to.) I don't have the skills/knowledge/tools to use anything else. I am desperate for a workable solution. I need specific guidance! If anybody can code something up for me I am prepared to pay for it.

  • Large number of UPDATE queries slowing down page

    - by Bryan Lewis
    I am reading and validating large fixed-width text files (ranging from 10-50K lines) that are submitted via our ASP.net website (coded in VB.Net). I do an initial scan of the file to check for basic issues (line length, etc). Then I import each row into an MS SQL table. Each DB row basically consists of a record_ID (primary, auto-incrementing) and about 50 varchar fields. After the insert is done, I run a validation function on the file that checks each field in each row based on a bunch of criteria (trimmed length, isnumeric, range checks, etc). If it finds an error in any field, it inserts a record into the Errors table, which has an error_ID, the record_ID and an error message. In addition, if the field fails in a particular way, I have to do a "reset" on that field. A reset might consist of blanking the entire field, or simply replacing the value with another value (e.g. replacing the string with a new one that has all illegal chars taken out). I have a 5,000 line test file. The upload, initial check, and import takes about 5-6 seconds. The detailed error check and insert into the Errors table takes about 5-8 seconds (this file has about 1200 errors in it). However, the "resets" part takes about 40-45 seconds for 750 fields that need to be reset. When I comment out the resets function (returning immediately without actually calling the UPDATE stored proc), the process is very fast. With the resets turned on, the pages take 50 seconds to return. My UPDATE stored proc is using some recommended code from http://sommarskog.se/dynamic_sql.html, whereby it uses CASE instead of dynamic SQL: UPDATE dbo.Records SET dbo.Records.file_ID = CASE @field_name WHEN 'file_ID' THEN @field_value ELSE file_ID END, . . (all 50 varchar field CASE statements here) . WHERE dbo.Records.record_ID = @record_ID Is there any way I can improve my performance here? Can I somehow group all of these UPDATE calls into a single transaction? Should I be reworking the UPDATE query somehow? Or is it just the sheer quantity of 750+ UPDATEs, and things are just slow (it's a quad proc server with 8GB RAM)? Any suggestions appreciated.
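
    Grouping the calls into one transaction is likely the biggest single win: each autocommitted UPDATE forces its own log flush, and 750 of those adds up. A sketch of the batching idea in VB.Net, where UpdateRecordField and FieldReset are hypothetical stand-ins for your stored proc and the list of pending resets; collapsing multiple resets for the same record_ID into one UPDATE would cut the call count further:

        ' One connection, one transaction, one command reused for every
        ' reset instead of 750 autocommitted round trips.
        Using cn As New SqlConnection(connString)
            cn.Open()
            Using tx As SqlTransaction = cn.BeginTransaction()
                Using cmd As New SqlCommand("dbo.UpdateRecordField", cn, tx)
                    cmd.CommandType = CommandType.StoredProcedure
                    cmd.Parameters.Add("@record_ID", SqlDbType.Int)
                    cmd.Parameters.Add("@field_name", SqlDbType.VarChar, 128)
                    cmd.Parameters.Add("@field_value", SqlDbType.VarChar, 8000)
                    For Each reset As FieldReset In resets ' hypothetical list
                        cmd.Parameters("@record_ID").Value = reset.RecordID
                        cmd.Parameters("@field_name").Value = reset.FieldName
                        cmd.Parameters("@field_value").Value = reset.NewValue
                        cmd.ExecuteNonQuery()
                    Next
                End Using
                tx.Commit()
            End Using
        End Using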

  • Using CSS to center FLOATED input elements wrapped in a DIV

    - by Tim
    There's no shortage of questions and answers about centering, but I've not been able to get it to work given my specific circumstances, which involve floating. I want to center a container DIV that contains three floated input elements (split-button, text, checkbox), so that when my page is resized wider, they go from this: ||.....[ ][v] [ ] [ ] label .....|| to this ||......................[ ][v] [ ] [ ] label.......................|| They float fine, but when the page is made wider, they stay to the left: ||.....[ ][v] [ ] [ ] label .......................................|| If I remove the float so that the input elements are stacked rather than side-by-side: [ ][v] [ ] [ ] label then they DO center correctly when the page is resized. So it is the float being applied to the elements of the DIV#hbox inside the container that is messing up the centering. Is what I want to do impossible because of the way float is designed to work? Here is my DOCTYPE, and the markup does validate at w3c: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd"> Here is my markup: <div id="term1-container"> <div class="hbox"> <div> <button id="operator1" class="operator-split-button">equals</button> <button id="operator1drop">show all operators</button> </div> <div><input type="text" id="term1"></input></div> <div><input type="checkbox" id="meta2"></input><label for="meta2" class="tinylabel">meta</label></div> </div> </div> And here's the (not-working) CSS: #term1-container {text-align: center} .hbox {margin: 0 auto;} .hbox div {float:left; } I have also tried applying display: inline-block to the floated button, text-input, and checkbox; and even though I think it applies only to text, I've also tried applying white-space: nowrap to the #term1-container DIV, based on posts I've seen here on SO. And just to be a little more complete, here's the jQuery that creates the split-button: $(".operator-split-button").button().click( function() { alert( "foo" ); }).next().button( { text: false, icons: { primary: "ui-icon-triangle-1-s" } }).click( function(){positionOperatorsMenu();} ) })
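
    The usual fix is to give the floats a shrink-to-fit wrapper that can itself be centered as inline content. A sketch (older IE needs zoom: 1; *display: inline on the inline-block element):

        /* text-align centers inline-level children, and an inline-block
           wrapper shrink-wraps the three floated cells inside it */
        #term1-container { text-align: center; }
        .hbox { display: inline-block; }
        .hbox div { float: left; }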

  • How to create filtered reports in WPF?

    - by Michael Goyote
    Creating reports in WPF. I have two related tables. Table A - Customer: CustomerID (PK), Names, Phone Number, Customer Num. Table B - Items: Products, Price, CustomerID. I want to be able to generate a report like this: CustomerA Items Price Item A 10 Item B 10 Item C 10 --------------- Total 30 So this is what I have done: <Window x:Class="ReportViewerWPF.MainWindow" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:rv="clr-namespace:Microsoft.Reporting.WinForms; assembly=Microsoft.ReportViewer.WinForms" Title="Customer Report" Height="300" Width="400"> <Grid> <WindowsFormsHost Name="windowsFormsHost1"> <rv:ReportViewer x:Name="reportViewer1"/> </WindowsFormsHost> </Grid> Then I created a dataset and loaded the two tables, followed by a report wizard (I dragged all the available fields and dropped them onto the Values pane). The code behind the WPF window is this: public partial class CustomerReport : Window { public CustomerReport() { InitializeComponent(); _reportViewer.Load += ReportViewer_Load; } private bool _isReportViewerLoaded; private void ReportViewer_Load(object sender, EventArgs e) { if (!_isReportViewerLoaded) { Microsoft.Reporting.WinForms.ReportDataSource reportDataSource1 = new Microsoft.Reporting.WinForms.ReportDataSource(); HM2DataSet dataset = new HM2DataSet(); dataset.BeginInit(); reportDataSource1.Name = "DataSet";//This is the dataset name reportDataSource1.Value = dataset.CustomerTable; this.reportViewer1.LocalReport.DataSources.Add(reportDataSource1); this.reportViewer1.LocalReport.ReportPath = "../../Report3.rdlc"; dataset.EndInit(); HM2DataSetTableAdapters.CustomerTableAdapter funcTableAdapter = new HM2DataSetTableAdapters.CustomerTableAdapter(); funcTableAdapter.ClearBeforeFill = true; funcTableAdapter.Fill(dataset.CustomerTable); _reportViewer.RefreshReport(); _isReportViewerLoaded = true; } } As you might have guessed, this loads the list of customers with items and prices: Customer Items Price Customer A Items A 10 Customer A Items B 10 Customer B Items D 10 Customer B Items C 10 How can I fine-tune this report to look like the one above, where the user can filter the customer he wants displayed on the report? Thanks in advance for the help. I would have preferred to use LINQ when filtering data.
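
    Two pieces are involved: the per-customer grouping and total come from the RDLC itself (add a row group on the customer field and a Sum over Price in the group footer), while the filter can be applied to the data before it is bound. A sketch, where the method name and filter column are assumptions based on the table description:

        // Fill the table, filter it to one customer, and rebind the report.
        private void LoadReportForCustomer(int customerId)
        {
            HM2DataSet dataset = new HM2DataSet();
            new HM2DataSetTableAdapters.CustomerTableAdapter().Fill(dataset.CustomerTable);

            DataView view = new DataView(dataset.CustomerTable);
            view.RowFilter = "CustomerID = " + customerId; // LINQ Where + CopyToDataTable works too

            reportViewer1.LocalReport.DataSources.Clear();
            reportViewer1.LocalReport.DataSources.Add(
                new Microsoft.Reporting.WinForms.ReportDataSource("DataSet", view));
            reportViewer1.LocalReport.ReportPath = "../../Report3.rdlc";
            reportViewer1.RefreshReport();
        }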

  • How to properly preload images, JS and CSS files?

    - by Kenny Bones
    Hi, I'm creating a website from scratch and I was really into this in the late 90's, but the web has changed a lot since then! And I'm more of a designer, so when I started putting this site together, I basically did a system of PHP includes to make the site more "dynamic". When you first visit the site, you'll be presented with a logon screen, if you're not already logged on (cookies). If you're not logged on, a page called access.php is introduced. I thought I'd preload the most heavy images at this point, so that when the user is done logging on, the images are already cached. And this is working as I want. But I still notice that the biggest image still isn't rendered immediately anyway. So it seems kinda pointless. All of this has made me rethink how the site is structured and how scripts and CSS files are loaded. Using FireBug and YSlow with Firefox I see a few pointers like expires headers and reducing the size of each script. But is this really the culprit? For example, would this be really really stupid in the main index.php? The entire site is basically structured like this: <?php require("dbconnect.php"); ?> <?php include ("head.php"); ?> And below this is basically just the body and the content of the site. head.php however consists of the doctype, head portions, linking of two CSS style sheets, the jQuery library, the jQuery validation engine, Cufon and a Cufon font file, and then the small Cufon.Replace snippet. The rest of the body comes with the index.php file, but at the bottom of this again is an include of a file called "footer.php" which basically consists of the loading of a couple of jsLoader scripts and a slidepanel, and then a JS function. All of this makes the end page source look like a typical complete webpage, but I'm wondering if any of you can see immediately that "this is really really stupid" and "don't do that, do this instead" etc. :) Are includes a bad way to go? This site is also pretty image-intensive and I can probably do a little more optimization. But I don't think that's the primary culprit. YSlow gives me a report of what takes up the most space: doc(1) - 5.8K js(5) - 198.7K css(2) - 5.6K cssimage(8) - 634.7K image(6) - 110.8K I know it looks like it's cssimage(8) that weighs the most, but I've already preloaded these images from before and it doesn't really affect the rendering.
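
    For the image side, the classic preload is to create detached Image objects so the requests warm the browser cache without touching layout; note it only pays off if your expires/cache headers let the next page reuse the cached copy without revalidating, which matches the YSlow hint. A sketch with hypothetical paths:

        // Kick off background requests for the heavy images on access.php.
        var heavyImages = ['img/big-background.jpg', 'img/header-sprite.png'];
        var cache = [];
        for (var i = 0; i < heavyImages.length; i++) {
            cache[i] = new Image();   // keep references so nothing is collected early
            cache[i].src = heavyImages[i];
        }

    CSS background images (your cssimage bucket) are fetched lazily when a rule first applies, so preloading them this way is also the usual workaround there.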

  • How to Work Around Limitations in Generic Type Constraints in C#?

    - by Jose
    Okay, I'm looking for some input. I'm pretty sure this is not currently supported in .NET 3.5, but here goes. I want to require a generic type passed into my class to have a constructor like this: new(IDictionary<string,object>) so the class would look like this public MyClass<T> where T : new(IDictionary<string,object>) { T CreateObject(IDictionary<string,object> values) { return new T(values); } } But the compiler doesn't support this; it doesn't really know what I'm asking. Some of you might ask, why do you want to do this? Well, I'm working on a pet project of an ORM, so I get values from the DB and then create the object and load the values. I thought it would be cleaner to allow the object to just create itself with the values I give it. As far as I can tell I have two options: 1) Use reflection (which I'm trying to avoid) to grab the PropertyInfo[] array and then use that to load the values. 2) Require T to support an interface like so: public interface ILoadValues { void LoadValues(IDictionary values); } and then do this public MyClass<T> where T:new(),ILoadValues { T CreateObject(IDictionary<string,object> values) { T obj = new T(); obj.LoadValues(values); return obj; } } The problem I have with the interface I guess is philosophical: I don't really want to expose a public method for people to load the values. Using the constructor, the idea was that if I had an object like this namespace DataSource.Data { public class User { protected internal User(IDictionary<string,object> values) { //Initialize } } } As long as the MyClass<T> was in the same assembly, the constructor would be available. I personally think that the Type constraint should ask (Do I have access to this constructor? I do, great!) Anyways, any input is welcome.
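
    A common workaround keeps the constructor non-public and pays the reflection cost once per closed generic type rather than per object. A sketch (hand-rolled, not a BCL feature):

        using System;
        using System.Collections.Generic;
        using System.Reflection;

        public class MyClass<T>
        {
            // Resolved once per closed type T by the static initializer.
            private static readonly Func<IDictionary<string, object>, T> Factory = BuildFactory();

            private static Func<IDictionary<string, object>, T> BuildFactory()
            {
                ConstructorInfo ctor = typeof(T).GetConstructor(
                    BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic,
                    null, new[] { typeof(IDictionary<string, object>) }, null);

                if (ctor == null)
                    throw new InvalidOperationException(
                        typeof(T).Name + " has no IDictionary<string, object> constructor.");

                // Invoke reaches the protected internal constructor too; a compiled
                // Expression.New would be a faster option if this becomes a hot path.
                return values => (T)ctor.Invoke(new object[] { values });
            }

            public T CreateObject(IDictionary<string, object> values)
            {
                return Factory(values);
            }
        }

    This turns the missing compile-time constraint into a one-time runtime check, and no public LoadValues method has to be exposed.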

  • Methodology for a Rails app

    - by Aaron Vegh
    I'm undertaking a rather large conversion from a legacy database-driven Windows app to a Rails app. Because of the large number of forms and database tables involved, I want to make sure I've got the right methodology before getting too far. My chief concern is minimizing the amount of code I have to write. There are many models that interact together, and I want to make sure I'm using them correctly. Here's a simplified set of models: class Patient < ActiveRecord::Base has_many :PatientAddresses has_many :PatientFileStatuses end class PatientAddress < ActiveRecord::Base belongs_to :Patient end class PatientFileStatus < ActiveRecord::Base belongs_to :Patient end The controller determines if there's a Patient selected; everything else is based on that. In the view, I will be needing data from each of these models. But it seems like I have to write an instance variable in my controller for every attribute that I want to use. So I start writing code like this: @patient = Patient.find(session[:patient]) @patient_addresses = @patient.PatientAddresses @patient_file_statuses = @patient.PatientFileStatuses @enrollment_received_when = @patient_file_statuses[0].EnrollmentReceivedWhen @consent_received = @patient_file_statuses[0].ConsentReceived @consent_received_when = @patient_file_statuses[0].ConsentReceivedWhen The first three lines grab the Patient model and its relations. The next three lines are examples of my providing values to the view from one of those relations. The view has a combination of text fields and select fields to show the data above. For example: <%= select("patientfilestatus", "ConsentReceived", {"val1" => "val1", "val2" => "val2", "Written" => "Written"}, :include_blank => true) %> <%= calendar_date_select_tag "patient_file_statuses[EnrollmentReceivedWhen]", @enrollment_complete_when, :popup => :force %> (BTW, the select tag isn't really working; I think I have to use collection_select?) My questions are: Do I have to manually declare the value of every instance variable in the controller, or can/should I do it within the view? What is the proper technique for displaying a select tag for data that's not the primary model? When I go to save changes to this form, will I have to manually pick out the attributes for each model and save them individually? Or is there a way to name the fields such that ActiveRecord does the right thing? Thanks in advance, Aaron.

  • Implementing coroutines in Java

    - by JUST MY correct OPINION
    This question is related to my question on existing coroutine implementations in Java. If, as I suspect, it turns out that there is no full implementation of coroutines currently available in Java, what would be required to implement them? As I said in that question, I know about the following: You can implement "coroutines" as threads/thread pools behind the scenes. You can do tricksy things with JVM bytecode behind the scenes to make coroutines possible. The so-called "Da Vinci Machine" JVM implementation has primitives that make coroutines doable without bytecode manipulation. There are various JNI-based approaches to coroutines also possible. I'll address each one's deficiencies in turn. Thread-based coroutines This "solution" is pathological. The whole point of coroutines is to avoid the overhead of threading, locking, kernel scheduling, etc. Coroutines are supposed to be light and fast and to execute only in user space. Implementing them in terms of full-tilt threads with tight restrictions gets rid of all the advantages. JVM bytecode manipulation This solution is more practical, albeit a bit difficult to pull off. This is roughly the same as jumping down into assembly language for coroutine libraries in C (which is how many of them work) with the advantage that you have only one architecture to worry about and get right. It also ties you down to only running your code on fully-compliant JVM stacks (which means, for example, no Android) unless you can find a way to do the same thing on the non-compliant stack. If you do find a way to do this, however, you have now doubled your system complexity and testing needs. The Da Vinci Machine The Da Vinci Machine is cool for experimentation, but since it is not a standard JVM its features aren't going to be available everywhere. Indeed I suspect most production environments would specifically forbid the use of the Da Vinci Machine. Thus I could use this to make cool experiments but not for any code I expect to release to the real world. This also has the added problem similar to the JVM bytecode manipulation solution above: won't work on alternative stacks (like Android's). JNI implementation This solution renders the point of doing this in Java at all moot. Each combination of CPU and operating system requires independent testing and each is a point of potentially frustrating subtle failure. Alternatively, of course, I could tie myself down to one platform entirely but this, too, makes the point of doing things in Java entirely moot. So... Is there any way to implement coroutines in Java without using one of these four techniques? Or will I be forced to use the one of those four that smells the least (JVM manipulation) instead?

  • What makes great software?

    - by VirtuosiMedia
    From the perspective of an end user, what makes software great rather than just good or functional? What are some fundamental principles that can shift the way software is used and perceived? What are some of the little finishing touches that help put an application over the top? I'm in the later stages of developing a web app and I'm looking for ideas or concepts that I may have missed. If you have specific examples of software or apps that you absolutely love, please share the reasons or features that make them special. Keep in mind that I'm looking for examples that directly affect the end user, but not necessarily just UI suggestions. Here are some of the principles and little touches I'm trying to use:
    - Keep the UI as simple as possible. Remove absolutely everything that isn't necessary.
    - Use progressive disclosure when more information can be needed sometimes but isn't needed all the time.
    - Provide inline help and useful error messages.
    - Verbs on buttons wherever possible.
    - Make anything that's clickable obvious.
    - Fast, responsive UI.
    - Accessibility (this is a work in progress).
    - Reusable UI patterns. Once a user learns a skill, they will be able to use it in multiple places.
    - Intelligent default settings.
    - Auto-focusing forms when filling out the form is the primary action to be taken on the page.
    - Clear metaphors (like tabs) and headings indicating location within the app.
    - Automating repetitive tasks (with the ability to disable the automation).
    - Use standardized or accepted metaphors for icons (like an "x" for delete).
    - Larger text sizes for improved readability.
    - High contrast so that each section is distinct.
    - Making sure that it's obvious on every page what the user is supposed to do by establishing a clear information hierarchy and drawing the eye to the call to action.
    - Most deletions can be undone.
    - Discoverability - make it easy to learn how to do new tasks.
    - Group similar elements together.

  • How to parse an XML string in TDI

    - by ongle
    I am new to TDI. I have a TDI assembly line that calls a web service (ibmdi.InvokeSoapWS) which returns the result as a string in the work attribute 'xmlString'. I then have an AttributeMap that attempts to parse the xml and extract a value (the node it seeks is a few nodes deep). var parser = system.getParser('ibmdi.SOAP'); var xmlString = work.getString('xmlString'); var entity = parser.parseRequest(xmlString); task.dump(entity); The trouble is, the parsed object does not contain an accurate representation of the XML. It contains only two attributes, the first is correctly the first node following the soap body (e.g., ns0:SomeNodeReply), the second being a node inside the first (e.g., ns0:DetailCount). And as far as I can determine, both attributes are just strings so I cannot recurse into the object graph. Below is a sample soap reply: <?xml version="1.0" encoding="utf-8"?> <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"> <SOAP-ENV:Body> <ns0:SomeNodeReply xmlns:ns0="http://xmlns.example.com/unique/default/namespace/1136581686664"> <ns0:Status> <ns0:StatusCD>000</ns0:StatusCD> <ns0:StatusDesc /> </ns0:Status> <ns0:DetailCount>1</ns0:DetailCount> <ns0:SomeDetail> <ns0:CodeA>Foo</ns0:CodeA> <ns0:CodeB>Bar</ns0:CodeB> </ns0:SomeDetail> </ns0:SomeNodeReply> </SOAP-ENV:Body> </SOAP-ENV:Envelope> And below is a sample dump of the parsed string: 19:03:23 CTGDIS003I *** Start dumping Entry 19:03:23 Operation: generic 19:03:23 Entry attributes: 19:03:23 SOAP_CALL (replace): 'ns0:SomeNodeReply' 19:03:23 ns0:DetailCount(replace): '1' 19:03:23 CTGDIS004I *** Finished dumping Entry All I really need to do is be able to parse out a value that may or may not be there, depending on the value of another node (e.g., DetailCount == 1, get CodeA otherwise return empty string). I am open to changing anything about how this works if I can extract the data into the work Entry.

  • jQuery problem with errorPlacement

    - by Eyla
    Greetings, I have a problem with errorPlacement: I'm trying to place the error message next to the field, but it is appearing at the top of the page. Any advice on how to fix this problem? Here is my code: <%@ Page Title="" Language="C#" MasterPageFile="~/Master.Master" AutoEventWireup="true" CodeBehind="WebForm1.aspx.cs" Inherits="IMAM_APPLICATION.WebForm1" %> <%@ Register assembly="AjaxControlToolkit" namespace="AjaxControlToolkit" tagprefix="asp" %> <asp:Content ID="Content1" ContentPlaceHolderID="head" runat="server"> <script src="js/jquery-1.4.1.js" type="text/javascript"></script> <script src="js/jquery.validate.js" type="text/javascript"></script> <script type="text/javascript"> $(document).ready(function() { $("#aspnetForm").validate({ groups: { username: "fname lname", address: "address1 phone" }, errorPlacement: function(error, element) { if (element.attr("name") == "fname" || element.attr("name") == "lname") error.insertAfter("#lastname"); else error.insertAfter(element); }, debug: true }) }); </script> </asp:Content> <asp:Content ID="Content2" ContentPlaceHolderID="ContentPlaceHolder1" runat="server"> </asp:Content> <asp:Content ID="Content3" ContentPlaceHolderID="ContentPlaceHolder2" runat="server"> <p style="height: 313px"> <label style="position:absolute; top: 227px; left: 22px;">Your Name</label> &nbsp;<input name="fname" value="Pete" style="position:absolute; top: 226px; left: 102px;"/> <input name="lname" id="lastname" style="position:absolute; top: 264px; left: 95px;"/> <input name="address1" style="position:absolute; top: 347px; left: 102px;"/> <input name="phone" id="lastname" style="position:absolute; top: 315px; left: 102px;"/> <br/> <input type="submit" value="Submit Name" style="position:absolute; top: 407px; left: 73px;"/> <input type="submit" value="Submit Address" style="position:absolute; top: 370px; left: 437px;"/> </p> </asp:Content>

  • MySQL select - improve performance

    - by realshadow
    Hey, I am working on an e-shop which sells products only via loans. I display 10 products per page in any category, and each product has 3 different price tags - 3 different loan types. Everything went pretty well during testing time; query execution time was perfect. But today, when I transferred the changes to the production server, the site "collapsed" in about 2 minutes. The query that is used to select loan types sometimes hangs for ~10 seconds, and it happens frequently, thus it can't keep up and it's hella slow. The table that is used to store the data has approximately 2 million records, and each select looks like this: SELECT * FROM products_loans WHERE KOD IN("X17/Q30-10", "X17/12", "X17/5-24") AND 369.27 BETWEEN CENA_OD AND CENA_DO; 3 loan types and the price that needs to be in range between CENA_OD and CENA_DO, thus 3 rows are returned. But since I need to display 10 products per page, I need to run it through a modified select using OR, since I didn't find any other solution to this. I have asked about it here, but got no answer. As mentioned in the referencing post, this has to be done separately since there is no column that could be used in a join (except of course price and code, but that ended very, very badly). Here is the SHOW CREATE TABLE; KOD and CENA_OD/CENA_DO are indexed via INDEX. CREATE TABLE `products_loans` ( `KOEF_ID` bigint(20) NOT NULL, `KOD` varchar(30) NOT NULL, `AKONTACIA` int(11) NOT NULL, `POCET_SPLATOK` int(11) NOT NULL, `koeficient` decimal(10,2) NOT NULL default '0.00', `CENA_OD` decimal(10,2) default NULL, `CENA_DO` decimal(10,2) default NULL, `PREDAJNA_CENA` decimal(10,2) default NULL, `AKONTACIA_SUMA` decimal(10,2) default NULL, `TYP_VYHODY` varchar(4) default NULL, `stage` smallint(6) NOT NULL default '1', PRIMARY KEY (`KOEF_ID`), KEY `CENA_OD` (`CENA_OD`), KEY `CENA_DO` (`CENA_DO`), KEY `KOD` (`KOD`), KEY `stage` (`stage`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 And also selecting all loan types and later filtering them through PHP doesn't work well, since each type has over 50k records and the select takes too much time as well... Any ideas about improving the speed are appreciated. Edit: Here is the EXPLAIN +----+-------------+----------------+-------+---------------------+------+---------+------+--------+-------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+----------------+-------+---------------------+------+---------+------+--------+-------------+ | 1 | SIMPLE | products_loans | range | CENA_OD,CENA_DO,KOD | KOD | 92 | NULL | 190158 | Using where | +----+-------------+----------------+-------+---------------------+------+---------+------+--------+-------------+ I have tried the combined index and it improved the performance on the test server from 0.44 sec to 0.06 sec. I can't access the production server from home though, so I will have to try it tomorrow.
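
    The EXPLAIN shows the single-column KOD index examining ~190k rows per query; the composite index mentioned in the edit lets MySQL seek on KOD and then narrow by at least one price bound within each code. A sketch (the index name is arbitrary):

        ALTER TABLE products_loans
          ADD INDEX idx_kod_cena (KOD, CENA_OD, CENA_DO);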

  • How do I write SQL data into a textbox after a submit event?

    - by Matt
    Finishing up some homework and I'm having trouble figuring out how to take information generated in a SQL column (a primary key set up to assign a record number to a customer, for example 1046) at submit time and write it to my redirected receipt page, which I call recipt.aspx. Any takers? The professor says to use a DataReader... but things go bad after that. public partial class _Default : System.Web.UI.Page { String cnStr = "EDITED FOR THE PURPOSE OF NOT DISPLAYED SQL SERVERta Source=111.11.111.11; uid=xxxxxxx; password=xxxx; database=xxxxxx; "; String insertStr; SqlDataReader reader; SqlConnection myConnection = new SqlConnection(); protected void submitbutton_Click(object sender, EventArgs e) { myConnection.ConnectionString = cnStr; try { //more magic happens as myConnection opens myConnection.Open(); insertStr = "insert into connectAssignment values ('" + TextBox2.Text + "','" + TextBox3.Text + "','" + TextBox4.Text + "','" + TextBox5.Text + "','" + bigtextthing.Text + "','" + DropDownList1.SelectedItem.Value + "')"; //magic happens as Connection string is assigned to connection object and passes in the SQL statment //associate the command to the the myConnection connection object SqlCommand cmd = new SqlCommand(insertStr, myConnection); cmd.ExecuteNonQuery(); Session["passmyvalue1"] = TextBox2.Text; Session["passmyvalue2"] = TextBox3.Text; Session["passmyvalue3"] = TextBox4.Text; Session["passmyvalue4"] = TextBox5.Text; Session["passmyvalue5"] = bigtextthing.Text; Session["I NEED SOME HELP RIGHT HERE"] =Textbox6.Text; Response.Redirect("receipt.aspx"); } catch { bigtextthing.Text = "Error submitting" + "Possible casues: Internet is down,server is down, check your settings!"; } finally { myConnection.Close(); TextBox2.Text = ""; TextBox3.Text = ""; TextBox4.Text = ""; TextBox5.Text = ""; bigtextthing.Text = ""; } //reset validators? } The receipt page: public partial class _Default : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { if(Session["passmyvalue1"] != null) { TextBox1.Text = (string)Session["passmyvalue1"]; TextBox2.Text = (string)Session["passmyvalue2"]; TextBox3.Text = (string)Session["passmyvalue3"]; TextBox4.Text = (string)Session["passmyvalue4"]; TextBox5.Text = (string)Session["passmyvalue5"]; TextBox6.Text = I don't know ; } } } Thanks for the help
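
    A sketch of the missing piece: SCOPE_IDENTITY() returns the identity value generated by the INSERT in the same scope, and ExecuteScalar hands it back in the same round trip, so no DataReader loop is needed. (This keeps the concatenated SQL from the post for brevity; parameterized commands would be the safer habit.)

        // Append the SELECT to the INSERT and read the new key back.
        insertStr = "insert into connectAssignment values ('" + TextBox2.Text + "', ...)" +
                    "; SELECT CAST(SCOPE_IDENTITY() AS int);";
        SqlCommand cmd = new SqlCommand(insertStr, myConnection);
        int recordNumber = (int)cmd.ExecuteScalar();
        Session["recordNumber"] = recordNumber.ToString();
        // on the receipt page: TextBox6.Text = (string)Session["recordNumber"];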

  • What database table structure should I use for versions, codebases, deployables?

    - by Zac Thompson
    I'm having doubts about my table structure, and I wonder if there is a better approach. I've got a little database for version control repositories (e.g. SVN), the packages (e.g. Linux RPMs) built therefrom, and the versions (e.g. 1.2.3-4) thereof. A given repository might produce no packages, or several, but if there are more than one for a given repository then a particular version for that repository will indicate a single "tag" of the codebase. A particular version "string" might be used to tag a version of the source code in more than one repository, but there may be no relationship between "1.0" for two different repos. So if packages P and Q both come from repo R, then P 1.0 and Q 1.0 are both built from the 1.0 tag of repo R. But if package X comes from repo Y, then X 1.0 has no relationship to P 1.0. In my (simplified) model, I have the following tables (the x_id columns are auto-incrementing surrogate keys; you can pretend I'm using a different primary key if you wish, it's not really important): repository - repository_id - repository_name (unique) ... version - version_id - version_string (unique for a particular repository) - repository_id ... package - package_id - package_name (unique) - repository_id ... This makes it easy for me to see, for example, what are valid versions of a given package: I can join with the version table using the repository_id. However, suppose I would like to add some information to this database, e.g., to indicate which package versions have been approved for release. I certainly need a new table: package_version - version_id - package_id - package_version_released ... Again, the nature of the keys that I use are not really important to my problem, and you can imagine that the data column is "promotion_level" or something if that helps. My doubts arise when I realize that there's really a very close relationship between the version_id and the package_id in my new table ... they must share the same repository_id. Only a small subset of package/version combinations are valid. So I should have some kind of constraint on those columns, enforcing that ... ... I don't know, it just feels off, somehow. Like I'm including somehow more information than I really need? I don't know how to explain my hesitance here. I can't figure out which (if any) normal form I'm violating, but I also can't find an example of a schema with this sort of structure ... not being a DBA by profession I'm not sure where to look. So I'm asking: am I just being overly sensitive?
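
    The uneasy feeling has a concrete fix: carry repository_id into the new table and let two composite foreign keys enforce "package and version must agree on the repository". A sketch, assuming the surrogate keys from the post plus supporting unique constraints:

        ALTER TABLE version ADD UNIQUE (version_id, repository_id);
        ALTER TABLE package ADD UNIQUE (package_id, repository_id);

        CREATE TABLE package_version (
            package_id    INT NOT NULL,
            version_id    INT NOT NULL,
            repository_id INT NOT NULL,
            package_version_released BOOLEAN NOT NULL DEFAULT FALSE,
            PRIMARY KEY (package_id, version_id),
            -- both FKs share repository_id, so an inconsistent pair can't be inserted
            FOREIGN KEY (package_id, repository_id)
                REFERENCES package (package_id, repository_id),
            FOREIGN KEY (version_id, repository_id)
                REFERENCES version (version_id, repository_id)
        );

    The apparent redundancy is doing real work here, so this isn't a normalization violation in any harmful sense.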

  • Basic operations for modifying a source document with XSLT

    - by SpliFF
    All the tutorials and examples I've found of XSLT processing seem to assume your destination will be a significantly different format/structure from your source and that you know the structure of the source in advance. I'm struggling to find out how to perform simple "in-place" modifications to an HTML document without knowing anything else about its existing structure. Could somebody show me a clear example that, given an arbitrary unknown HTML source, will:

    1.) delete the classname 'foo' from all divs
    2.) delete a node if it's empty (i.e. <p></p>)
    3.) delete a <p> node if its first child is <br>
    4.) add newattr="newvalue" to all <h1>
    5.) replace 'heading' in text nodes with 'title'
    6.) wrap all <u> tags in <b> tags (i.e. <u>foo</u> -> <b><u>foo</u></b>)
    7.) output the transformed document without changing anything else

    The above examples are the primary types of transform I wish to accomplish. Understanding how to do the above will go a long way towards helping me build more complex transforms. To help clarify/test the examples, here is a sample source and output; however, I must reiterate that I want to work with arbitrary samples without rewriting the XSLT for each source:

        <!doctype html>
        <html>
        <body>
          <h1>heading</h1>
          <p></p>
          <p><br>line</p>
          <div class="foo bar"><u>baz</u></div>
          <p>untouched</p>
        </body>
        </html>

    output:

        <!doctype html>
        <html>
        <body>
          <h1 newattr="newvalue">title</h1>
          <div class="bar"><b><u>baz</u></b></div>
          <p>untouched</p>
        </body>
        </html>
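
    A minimal XSLT 1.0 sketch covering the seven operations, assuming the HTML has first been tidied into well-formed XHTML with no default namespace (XSLT cannot parse tag-soup HTML directly, and output details like the doctype are elided):

        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:output method="html"/>

          <!-- 7) identity: anything not matched below is copied unchanged -->
          <xsl:template match="@*|node()">
            <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
          </xsl:template>

          <!-- 1) remove the space-separated class name 'foo' from divs -->
          <xsl:template match="div/@class[contains(concat(' ', normalize-space(.), ' '), ' foo ')]">
            <xsl:attribute name="class">
              <xsl:value-of select="normalize-space(concat(
                  substring-before(concat(' ', normalize-space(.), ' '), ' foo '), ' ',
                  substring-after(concat(' ', normalize-space(.), ' '), ' foo ')))"/>
            </xsl:attribute>
          </xsl:template>

          <!-- 2) drop empty p elements -->
          <xsl:template match="p[not(node())]"/>

          <!-- 3) drop p elements whose first child element is br -->
          <xsl:template match="p[*[1][self::br]]"/>

          <!-- 4) add newattr to every h1 -->
          <xsl:template match="h1">
            <xsl:copy>
              <xsl:apply-templates select="@*"/>
              <xsl:attribute name="newattr">newvalue</xsl:attribute>
              <xsl:apply-templates select="node()"/>
            </xsl:copy>
          </xsl:template>

          <!-- 5) replace 'heading' with 'title' (first occurrence per text node) -->
          <xsl:template match="text()[contains(., 'heading')]">
            <xsl:value-of select="substring-before(., 'heading')"/>
            <xsl:text>title</xsl:text>
            <xsl:value-of select="substring-after(., 'heading')"/>
          </xsl:template>

          <!-- 6) wrap u elements in b -->
          <xsl:template match="u">
            <b><xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy></b>
          </xsl:template>
        </xsl:stylesheet>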
