Search Results

Search found 38328 results on 1534 pages for 'write xml'.


  • Why can I not map a dynamic texture in D3D?

    - by sebf
    I am trying to map a Texture2D resource in DirectX11 via SharpDX. The resource is declared as a ShaderResource, with Dynamic usage and the 'Write' CPU flag specified. My call, however, fails with a generic exception from SharpDX: _Parent.Context.MapSubresource( _Resource, 0, SharpDX.Direct3D11.MapMode.Write, SharpDX.Direct3D11.MapFlags.None, out stream ); I see from this question that it is supported. The MSDN docs and this other question hint that instead of using Context.MapSubresource() I should be using Texture2D.Map(); however, the Direct3D 11 Texture2D class does not define Map() (though the D3D10 equivalent does). If I call the above with MapMode.WriteDiscard, the call succeeds, but in this case the previous content of the texture is lost, which is no good when I only want to update a section of it. Has the Map() method been removed in Direct3D 11, or am I looking in the wrong place? Is the MapSubresource() method unsuitable, or am I using it wrong? EDIT: I declared my resource as Dynamic with CPU Write flags - not Default as I originally wrote - sorry for the fairly huge 'typo' that changes the entire question!
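    For what it's worth, the D3D11 documentation indicates that D3D11_MAP_WRITE is only valid for staging resources; dynamic resources accept only WriteDiscard (and, for buffers, WriteNoOverwrite), which would explain why only the WriteDiscard call succeeds. A common workaround is to keep a CPU-side shadow copy, edit the region there, and re-upload the whole texture. A minimal SharpDX sketch under that assumption (shadowPixels is a hypothetical RGBA byte array mirroring the texture; assumes a recent SharpDX):

      using SharpDX;
      using SharpDX.Direct3D11;

      static void UploadShadowCopy(DeviceContext context, Texture2D texture,
                                   byte[] shadowPixels, int width, int height)
      {
          DataStream stream;
          // WriteDiscard hands back fresh memory, so every row must be rewritten
          // from the shadow copy (which already contains the edited region).
          DataBox box = context.MapSubresource(texture, 0, MapMode.WriteDiscard,
                                               MapFlags.None, out stream);
          try
          {
              int rowBytes = width * 4; // assumes a 4-bytes-per-pixel format such as R8G8B8A8
              for (int y = 0; y < height; y++)
              {
                  // RowPitch may exceed rowBytes, so seek to each row explicitly.
                  stream.Position = (long)y * box.RowPitch;
                  stream.WriteRange(shadowPixels, y * rowBytes, rowBytes);
              }
          }
          finally
          {
              context.UnmapSubresource(texture, 0);
              stream.Dispose();
          }
      }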


  • SharePoint Apps and Windows Azure

    - by ScottGu
    Last Monday I had an opportunity to present as part of the keynote of this year’s SharePoint Conference.  My segment of the keynote covered the new SharePoint Cloud App Model we are introducing as part of the upcoming SharePoint 2013 and Office 365 releases.  This new app model for SharePoint is additive to the full trust solutions developers write today, and is built around three core tenets:

    1. Simplifying the development model and making it consistent between the on-premises version of SharePoint and SharePoint Online provided with Office 365.
    2. Making the execution model loosely coupled – and enabling developers to build apps and write code that can run outside of the core SharePoint service. This makes it easy to deploy SharePoint apps using Windows Azure, and avoid having to worry about breaking SharePoint and the apps within it when something is upgraded.  This new loosely coupled model also enables developers to write SharePoint applications that can leverage the full capabilities of the .NET Framework – including ASP.NET Web Forms 4.5, ASP.NET MVC 4, ASP.NET Web API, EF 5, Async, and more.
    3. Implementing this loosely coupled model using standard web protocols – like OAuth, JSON, and REST APIs – that enable developers to re-use skills and tools, and easily integrate SharePoint with Web and Mobile application architectures.

    A video of my talk + demos is now available to watch online. In the talk I walked through building an app from scratch – it showed off how easy it is to build solutions using the new SharePoint app model, and highlighted a web + workflow + mobile scenario that integrates SharePoint with code hosted on Windows Azure (all built using Visual Studio 2012 and ASP.NET 4.5 – including MVC and Web API). The new SharePoint Cloud App Model is something that I think is pretty exciting, and it is going to make it a lot easier to build SharePoint apps using the full power of both Windows Azure and the .NET Framework.  Using Windows Azure to easily extend SaaS based solutions like Office 365 is also a really natural fit and one that is going to offer a bunch of great developer opportunities.  Hope this helps, Scott  P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu


  • C# 4.0: Alternative To Optional Arguments

    - by Paulo Morgado
    Like I mentioned in my last post, exposing public methods with optional arguments is a bad practice (that’s why C# resisted having them until now). You might argue that your method or constructor has too many variants and that having ten or more overloads is a maintenance nightmare, and you’re right. But the solution has been there for ages: have an arguments class. The arguments class pattern is used by several classes in the .NET Framework, like XmlReader and XmlWriter, which have used it in their Create methods since version 2.0:

      XmlReaderSettings settings = new XmlReaderSettings();
      settings.ValidationType = ValidationType.Auto;
      XmlReader.Create("file.xml", settings);

    With this pattern, you don’t have to maintain a long list of overloads, and any default values for properties of XmlReaderSettings (or XmlWriterSettings for XmlWriter.Create) can be changed or new properties added in future implementations without breaking existing compiled code. You might now argue that it’s too much code to write, but, with object initializers added in C# 3.0, the same code can be written like this:

      XmlReader.Create("file.xml", new XmlReaderSettings { ValidationType = ValidationType.Auto });

    Looks almost like named and optional arguments, doesn’t it? And, who knows, in a future version of C#, it might even look like this:

      XmlReader.Create("file.xml", new { ValidationType = ValidationType.Auto });
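    For your own APIs, the pattern is just a settings class plus an overload that accepts it. A minimal sketch under invented names (ReportSettings and ReportGenerator are illustrations, not from any framework):

      public class ReportSettings
      {
          // Defaults live here, so adding a property later never breaks compiled callers.
          public bool IncludeSummary { get; set; }
          public int MaxRows { get; set; }

          public ReportSettings()
          {
              IncludeSummary = true;
              MaxRows = 1000;
          }
      }

      public static class ReportGenerator
      {
          public static void Create(string path)
          {
              Create(path, new ReportSettings()); // all defaults
          }

          public static void Create(string path, ReportSettings settings)
          {
              // ... generate the report honoring settings.IncludeSummary and settings.MaxRows ...
          }
      }

    Callers then mirror the XmlReader example: ReportGenerator.Create("report.xml", new ReportSettings { MaxRows = 50 });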


  • .Net to Oracle Connectivity using ODBC .NET

    - by SAMIR BHOGAYTA
    You can use the new ODBC .NET Data Provider that works with the ODBC Oracle 7.x driver or higher. You need to have MDAC 2.6 or later installed and then download ODBC .NET from the MS Web Site http://msdn.microsoft.com/downloads/default.asp?url=/code/sample.asp?url=/msdn-files/027/001/668/msdncompositedoc.xml&frame=true. MDAC (Microsoft Data Access Components) 2.7 contains the core components, including the Microsoft SQL Server and Oracle OLE DB providers and ODBC drivers. Install ODBC .NET from the MS Web Site http://msdn.microsoft.com/downloads/default.asp?URL=/downloads/sample.asp?url=/msdn-files/027/001/943/msdncompositedoc.xml. Create a DSN, using either Microsoft ODBC for Oracle or the Oracle-supplied driver if the Oracle client software is loaded - for example, TrialDSN. While creating the DSN, give the user name along with the password, e.g. scott/tiger.

      using Microsoft.Data.Odbc;

      private void Form1_Load(object sender, System.EventArgs e)
      {
          try
          {
              OdbcConnection myconnection = new OdbcConnection("DSN=TrialDSN");
              OdbcDataAdapter myda = new OdbcDataAdapter("Select * from EMP", myconnection);
              DataSet ds = new DataSet();
              myda.Fill(ds, "Table");
              dataGrid1.DataSource = ds;
          }
          catch (Exception ex)
          {
              MessageBox.Show(ex.Message);
          }
      }
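    One small caveat on the sample above: the connection and adapter are never disposed. A tidier sketch of the same load with using blocks (same DSN and query; in later .NET versions the provider ships as System.Data.Odbc instead of the separate Microsoft.Data.Odbc download):

      using System.Data;
      using System.Data.Odbc;

      DataSet ds = new DataSet();
      using (OdbcConnection connection = new OdbcConnection("DSN=TrialDSN"))
      using (OdbcDataAdapter adapter = new OdbcDataAdapter("Select * from EMP", connection))
      {
          // Fill opens and closes the connection itself if it is not already open.
          adapter.Fill(ds, "Table");
      }
      dataGrid1.DataSource = ds;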


  • Third year in a row - Microsoft MVP again!!

    - by Jalpesh P. Vadgama
    Today is Sunday and I was not expecting this, as today is a holiday, although I knew it was Microsoft MVP renewal day. In the evening I got the congratulations email from Microsoft. Yeah!! I am a Microsoft Most Valuable Professional again. I got the same message as part of the MVP award. Thanks, Microsoft, again. Dear Jalpesh Vadgama, Congratulations! We are pleased to present you with the 2012 Microsoft® MVP Award! This award is given to exceptional technical community leaders who actively share their high quality, real world expertise with others. We appreciate your outstanding contributions in Visual C# technical communities during the past year. The feeling is the same as the first time. I am going to dedicate this award to my family. My parents, who always inspired me to do new things. My wife, who sacrifices her time so that I can write blogs. My brother, who supports me in every possible way. On this occasion, I would also like to thank my readers; without their support it would not have been possible to achieve this. Thanks for reading my blog!! Please do keep reading it. I will try to write as much as possible. I would also like to thank ‘Tanmay Kapoor’, my MVP lead, for continuous support. Once again, thank you all for your continuous support and love. There are lots of new technologies in the Microsoft stack and I am going to write lots of blog posts about all the new stuff. So stay tuned for the same.


  • TFS 2010 RC Agile process template update: new task progress report

    Maybe my next post will just be about why I am so excited and impressed with the out-of-the-box templates. But, for this first blog with my new focus, I thought I would just walk through the process I went through to create a task progress report (to enhance the out-of-the-box Agile template). So, I started with the MSF for Agile Development 5.0 RC template. After reviewing the template, I came away pretty excited about many of the new reports. I am especially excited about the Reporting Services reports. The big advantage I see here is that these query the warehouse directly instead of the Analysis Services cube, which means they are much closer to real-time - something I find very important for reports like burndown and task status. One report that I focused on right away was the User Story Progress Report. An overview is shown below:

    This report is very useful, but a lot of our internal managers really prefer to manage at the task level and either don't have stories in TFS or would like to view this type of report for tasks in addition to the user stories. So, what did I do?

    Step 1: Download the Agile template
    In VS 2010 RC, open the Process Template Manager from Team -> Team Project Collection Settings. Download the MSF for Agile Development template to your local file system. A process template is a folder of xml files. There is a ProcessTemplate.xml in the root and then a bunch of directories for things like work item definitions and queries, reports, shared documents and source control settings.

    Step 2: Copy the folder
    My plan here is to make a new template with all of my modifications. You can also just update the MSF template in place; however, I think it is cleaner, when you start making modifications, to make your own template. So, copy the folder and name it with your new template name.

    Step 3: Change the template name
    Open ProcessTemplate.xml and change the <name> of the template.

    Step 4: Copy the rdl of the report you want to use as a starting point
    In my case, I copied Stories Progress.rdl and named the file Task Progress Breakdown.rdl. I reviewed the requirements for the new report with some of the users here and came up with this plan:
    - Should show tasks and be expandable to show subtasks.
    - Should add Assigned To and Estimated Finish Date as 2 extra columns.

    Step 5: Walk through the existing report to understand how it works
    The main thing I do here is try to get the sql to run in SQL Management Studio, so I can walk through the process of building up the data for the report. After analyzing this particular report I found a couple of very useful things. One, this report is already built to display subtasks if I just flip the IncludeTasks flag to 1 - so if you are using stories and have tasks assigned to each story, this might give you everything you want. For my purposes, I did make that change to the Stories Progress report, as I find it a more useful report when you can see the tasks that comprise each story. But I still wanted a task-only version with the additional fields.

    Step 6: Update the report definition
    I tend to work on rdl directly as xml in Visual Studio. Especially when I am just altering an existing report, I find it easier than trying to deal with the BI Studio designer. For my report I made the following changes:
    - Updated the fields: removed Stack Rank and replaced it with Priority (since we don't use Stack Rank); added FinishDate and AssignedTo.
    - Changed the root deliverable SQL to pull @Task instead of @DeliverableCategory and added a join to CurrentWorkItemView for FinishDate and AssignedTo:

      SELECT cwi.[System_Id] AS ID FROM [CurrentWorkItemView] cwi
      WHERE cwi.[System_WorkItemType] IN (@Task)
      AND cwi.[ProjectNodeGUID] = @ProjectGuid

      SELECT lh.SourceWorkItemID AS ID FROM FactWorkItemLinkHistory lh
      INNER JOIN [CurrentWorkItemView] cwi ON lh.TargetWorkItemID = cwi.[System_Id]
      WHERE lh.WorkItemLinkTypeSK = @ParentWorkItemLinkTypeSK
      AND lh.RemovedDate = CONVERT(DATETIME, '9999', 126)
      AND lh.TeamProjectCollectionSK = @TeamProjectCollectionSK
      AND cwi.[System_WorkItemType] NOT IN (@DeliverableCategory)

    - Added AssignedTo and FinishDate columns to the @Rollups table.
    - Added two columns to the table used for column headers:

      <Tablix Name="ProgressTable">
        <TablixBody>
          <TablixColumns>
            <TablixColumn><Width>2.7625in</Width></TablixColumn>
            <TablixColumn><Width>0.5125in</Width></TablixColumn>
            <TablixColumn><Width>3.4625in</Width></TablixColumn>
            <TablixColumn><Width>0.7625in</Width></TablixColumn>
            <TablixColumn><Width>1.25in</Width></TablixColumn>
            <TablixColumn><Width>1.25in</Width></TablixColumn>
          </TablixColumns>

    - Added cells for the two new headers.
    - Added cells to the data table to include the two new values (AssignedTo & FinishDate).
    - Changed a bunch of widths so that the report displays landscape and has room for the two additional columns.
    - Set the value of the IncludeTasks parameter to 1:

      <ReportParameter Name="IncludeTasks">
        <DataType>Integer</DataType>
        <DefaultValue>
          <Values>
            <Value>=1</Value>
          </Values>
        </DefaultValue>
        <Prompt>IncludeTasks</Prompt>
        <Hidden>true</Hidden>
      </ReportParameter>

    - Changed a few descriptions on how the report should be used.

    This is the resulting report; I have attached the final rdl.

    Step 7: Update ReportTasks.xml
    The last step before the template is ready for use is to update the ReportTasks.xml file in the reports folder. This file defines the reports that are available in the template:

      <report name="Task Progress Breakdown" filename="Reports\Task Progress Breakdown.rdl" folder="Project Management" cacheExpiration="30">
        <parameters>
          <parameter name="ExplicitProject" value="" />
        </parameters>
        <datasources>
          <reference name="/Tfs2010ReportDS" dsname="TfsReportDS" />
        </datasources>
      </report>

    Step 8: Upload the template
    Open the Process Template Manager just like in Step 1 and upload the new template.

    That's it. One other note: if you want to add this report to an existing team project, you will have to go into Report Manager (the Reporting Services portal) and upload the rdl to that project's directory.


  • Deleting multiple objects in an AWS S3 bucket with s3curl.pl?

    - by user183394
    I have been trying to use the AWS "official" command line tool s3curl.pl to test out the recently announced multi-object delete. Here is what I have done. First, I tested out s3curl.pl with a set of credentials without a hitch:

      $ s3curl.pl --id=s3 -- http://testbucket-0.s3.amazonaws.com/|xmllint --format -
      % Total % Received % Xferd Average Speed Time Time Time Current
      Dload Upload Total Spent Left Speed
      100 884 0 884 0 0 4399 0 --:--:-- --:--:-- --:--:-- 5703
      <?xml version="1.0" encoding="UTF-8"?>
      <ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
        <Name>testbucket-0</Name>
        <Prefix/>
        <Marker/>
        <MaxKeys>1000</MaxKeys>
        <IsTruncated>false</IsTruncated>
        <Contents>
          <Key>file_1</Key>
          <LastModified>2012-03-22T17:08:17.000Z</LastModified>
          <ETag>"ee0e521a76524034aaa5b331842a8b4e"</ETag>
          <Size>400000</Size>
          <Owner>
            <ID>e6d81ea69572270e58d3814ab674df8c8f1fd5d502669633a4951bdd5185f7f4</ID>
            <DisplayName>zackp</DisplayName>
          </Owner>
          <StorageClass>STANDARD</StorageClass>
        </Contents>
        <Contents>
          <Key>file_2</Key>
          <LastModified>2012-03-22T17:08:19.000Z</LastModified>
          <ETag>"6b32cbf8219a59690a9f69ba6ff3f590"</ETag>
          <Size>600000</Size>
          <Owner>
            <ID>e6d81ea69572270e58d3814ab674df8c8f1fd5d502669633a4951bdd5185f7f4</ID>
            <DisplayName>zackp</DisplayName>
          </Owner>
          <StorageClass>STANDARD</StorageClass>
        </Contents>
      </ListBucketResult>

    Then, I followed s3curl.pl's usage instructions:

      s3curl.pl --help
      Usage /usr/local/bin/s3curl.pl --id friendly-name (or AWSAccessKeyId) [options] -- [curl-options] [URL]
      options:
        --key SecretAccessKey        id/key are AWSAcessKeyId and Secret (unsafe)
        --contentType text/plain     set content-type header
        --acl public-read            use a 'canned' ACL (x-amz-acl header)
        --contentMd5 content_md5     add x-amz-content-md5 header
        --put <filename>             PUT request (from the provided local file)
        --post [<filename>]          POST request (optional local file)
        --copySrc bucket/key         Copy from this source key
        --createBucket [<region>]    create-bucket with optional location constraint
        --head                       HEAD request
        --debug                      enable debug logging
      common curl options:
        -H 'x-amz-acl: public-read'  another way of using canned ACLs
        -v                           verbose logging

    Then, I tried the following, and always got back an error. I would appreciate it very much if someone could point out where I made a mistake.

      $ s3curl.pl --id=s3 --post multi_delete.xml -- http://testbucket-0.s3.amazonaws.com/?delete
      <?xml version="1.0" encoding="UTF-8"?>
      <Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message><StringToSignBytes>50 4f 53 54 0a 0a 0a 54 68 75 2c 20 30 35 20 41 70 72 20 32 30 31 32 20 30 30 3a 35 30 3a 30 38 20 2b 30 30 30 30 0a 2f 7a 65 74 74 61 72 2d 74 2f 3f 64 65 6c 65 74 65</StringToSignBytes><RequestId>707FBE0EB4A571A8</RequestId><HostId>mP3ZwlPTcRqARQZd6gU4UvBrxGBNIVa0VVe5p0rqGmq5hM65RprwcG/qcXe+pmDT</HostId><SignatureProvided>edkNGuugiSFe0ku4eGzkh8kYgHw=</SignatureProvided><StringToSign>POST
      Thu, 05 Apr 2012 00:50:08 +0000

    The file multi_delete.xml contains the following:

      cat multi_delete.xml
      <?xml version="1.0" encoding="UTF-8"?>
      <Delete>
        <Quiet>true</Quiet>
        <Object>
          <Key>file_1</Key>
          <VersionId> </VersionId>>
        </Object>
        <Object>
          <Key>file_2</Key>
          <VersionId> </VersionId>
        </Object>
      </Delete>

    Thanks for any help! --Zack


  • Programming concepts taken from the arts and humanities

    - by Joey Adams
    After reading Paul Graham's essay Hackers and Painters and Joel Spolsky's Advice for Computer Science College Students, I think I've finally gotten it through my thick skull that I should not be loath to work hard in academic courses that aren't "programming" or "computer science" courses. To quote the former: I've found that the best sources of ideas are not the other fields that have the word "computer" in their names, but the other fields inhabited by makers. Painting has been a much richer source of ideas than the theory of computation. — Paul Graham, "Hackers and Painters" There are certainly other, much stronger reasons to work hard in the "boring" classes. However, it'd also be neat to know that these classes may someday inspire me in programming. My question is: what are some specific examples where ideas from literature, art, humanities, philosophy, and other fields made their way into programming? In particular, ideas that weren't obviously applied the way they were meant to (like most math and domain-specific knowledge), but instead gave utterance or inspiration to a program's design and choice of names.

    Good examples:
    - The term endian comes from Gulliver's Travels by Jonathan Swift (see here), where it refers to the trivial matter of which end people crack open their eggs.
    - The terms journal and transaction refer to nearly identical concepts in both filesystem design and double-entry bookkeeping (financial accounting). mkfs.ext2 even says: Writing superblocks and filesystem accounting information: done

    Off-topic:
    - Learning to write English well is important, as it enables a programmer to document and evangelize his/her software, as well as appear competent to other programmers online.
    - Trigonometry is used in 2D and 3D games to implement rotation and direction aspects.
    - Knowing finance will come in handy if you want to write an accounting package. Knowing XYZ will come in handy if you want to write an XYZ package.

    Arguably on-topic:
    - The Monad class in Haskell is based on a concept by the same name from category theory. Actually, Monads in Haskell are monads in the category of Haskell types and functions. Whatever that means...


  • Setting up Pure-FTPd with admin/user permissions for same directory

    - by modulaaron
    I need to set up two Pure-FTPd accounts - ftpuser and ftpadmin. Both will have access to a directory that contains two subdirectories - upload and download. The permissions criteria need to be as follows:
    - ftpuser can upload to /upload but cannot view the contents (blind drop).
    - ftpuser can download from /download but cannot write to it.
    - ftpadmin has full read/write permissions to both, including file deletion.

    Currently, the first two are not a problem - disabling /upload read access and /download write access for ftpuser did the job. The problem is that when a file is uploaded by ftpuser, its permissions are set to 644, meaning that user ftpadmin can only read it (note that all FTP directories are chown'd to ftpuser:ftpadmin). How can I give ftpadmin the power he so rightfully deserves?


  • Outlook DASL Filter - Custom Search

    - by Ryan B
    I'm trying to write a DASL filter to combine three queries:
    1. Get all mail with no category and no flag: ("urn:schemas-microsoft-com:office:office#Keywords" IS NULL AND "urn:schemas:httpmail:messageflag" IS NULL)
    2. Get all mail that is categorized as "Ryan" and flagged with a red "Today" flag. Don't know how to write this one.
    3. Get all mail that is uncategorized and flagged with a red "Today" flag. Don't know how to write this one.

    Once I have the individual queries, I will combine and OR them. I am stuck on how to filter the flag value.
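    For reference, a sketch of how the first query (the only one spelled out above) can be applied from C# via the Outlook interop assembly; Items.Restrict accepts DASL when the filter is prefixed with @SQL=. The flag and category clauses for queries 2 and 3 are deliberately left out, since their property values are exactly what is being asked:

      using Outlook = Microsoft.Office.Interop.Outlook;

      // Query 1: no category and no flag.
      const string NoCategoryNoFlag =
          "@SQL=(\"urn:schemas-microsoft-com:office:office#Keywords\" IS NULL " +
          "AND \"urn:schemas:httpmail:messageflag\" IS NULL)";

      Outlook.Application app = new Outlook.Application();
      Outlook.MAPIFolder inbox = app.Session.GetDefaultFolder(Outlook.OlDefaultFolders.olFolderInbox);

      // Restrict returns a filtered view of the folder's items.
      Outlook.Items matches = inbox.Items.Restrict(NoCategoryNoFlag);

      // Once the other two clauses are known, the combined filter is just
      // "@SQL=(q1) OR (q2) OR (q3)".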


  • Free CodeSmith License!

    - by Randy Walker
    The catch? Attend the Ozarks .NET User Group meeting on April 1st. Here’s a list of the other prizes for the event:

    GRAND PRIZE
    1 - iPad (Wi-Fi 16GB)

    THIRD PARTY COMPONENTS
    6 - Telerik Premium Collection
    5 - Infragistics NetAdvantage for .NET
    1 - Nevron Chart for .NET Lite
    DevExpress
    Xceed

    PRODUCTIVITY
    2 - CodeRush with Refactor! Pro
    2 - ReSharper
    CodeSmith

    GAMES
    3 - Halo 3: ODST (Xbox 360)
    3 - Forza Motorsport (Xbox 360)

    OTHER SOFTWARE
    3 - Windows 7 Ultimate
    2 - Microsoft Office Standard 2007

    HARDWARE
    2 - Microsoft Arc Mouse

    BOOKS
    12 - O'Reilly eBooks
    12 - Microsoft Press books
    5 - Apress books
    3 - Addison-Wesley books
    2 - Manning books
    2 - Sams books

    The Info: "Be a Professional Developer and Write Clean Code!" by Claudio Lassala on April 1, 2010

    PRESENTATION TOPIC
    "Be a Professional Developer and Write Clean Code!" - by Claudio Lassala
    Poorly written code can be created quickly, but it comes at a cost of high maintenance. Most of the time, code can be improved easily by following some simple practices. Professional developers should know these practices and tools and apply them to their work every day. This session will cover the importance of writing clean code, the kind of attitude all developers should have towards the code they produce, as well as the practices and tools that can be used to aid you in becoming a better developer.

    BIOGRAPHY
    Claudio Lassala is a Senior Developer at EPS Software Corp. He has presented several lectures at Microsoft events such as PDC Brazil and various other Microsoft seminars, as well as several conferences and user groups across North America and Brazil. He is a multiple winner of the Microsoft MVP Award since 2001 (for Visual FoxPro in 2001-2002, and for C# ever since), an INETA speaker, and also holds the MCSD for .NET certification. He has articles published in several magazines, such as MSDN Brazil Magazine, CoDe Magazine, UTMag, Developers Magazine, and FoxPro Advisor. More detailed information regarding his presentations and articles can be found in his MVP Profile. You can also read more about Claudio on his blog or on Twitter.

    Schedule
    5:30 PM - 6:30 PM Social Networking
    6:30 PM - 7:00 PM Prizes
    7:00 PM - 8:30 PM Presentation: "Be a Professional Developer and Write Clean Code!" by Claudio Lassala
    8:30 PM - 9:00 PM Wrap-Up


  • Wrong perspective is showing in Eclipse plugin project [closed]

    - by Arun Kumar Choudhary
    I am working with the Eclipse Modeling Framework (Eclipse plugin development). The tool (the project I am working on) provides three perspectives: 1. Accelerator Analyst, 2. Contract Validation and 3. Underwriter Rules Editor. By default it starts with the Contract Validation perspective (as we define it within plugin_customization.ini). However, switching to another perspective does not always change the perspective shown. All the perspectives (class, id and name) are defined only inside plugin.xml, as it is the task of org.eclipse.ui.perspectives to bring the selected perspective to the forefront. Seven times out of ten it works fine, but I am not getting why it does not work in the other three cases. I am pasting my plugin.xml file:

    <?xml version="1.0" encoding="UTF-8"?> <?eclipse version="3.0"?> <plugin> <extension id="RuleEditor.application" name="Accelerator Tooling" point="org.eclipse.core.runtime.applications"> <application> <run class="com.csc.fs.underwriting.product.UnderWritingApplication"> </run> </application> </extension> <extension point="org.eclipse.ui.perspectives"> <perspective class="com.csc.fs.underwriting.product.ContractValidationPerspective" icon="icons/javadevhov_obj.gif" id="com.csc.fs.underwriting.product.ContractValidationPerspective" name="Contract Validation"> </perspective> </extension> <extension point="org.eclipse.ui.perspectives"> <perspective class="com.csc.fs.underwriting.product.UnderwritingPerspective" icon="icons/javadevhov_obj.gif" id="com.csc.fs.underwriting.product.UnderwritingPerspective" name="Underwriting"> </perspective> </extension> <extension id="product" point="org.eclipse.core.runtime.products"> <product application="com.csc.fs.nba.underwriting.application.RuleEditor.application" name="Rule Configurator Workbench" description="%AppName"> <property name="introTitle" value="Welcome to Accelerator Tooling"/> <property name="introVer" value="%version"/> <property name="introBrandingImage" value="product:csclogo.png"/> <property name="introBrandingImageText" value="CSC FSG"/> <property name="preferenceCustomization" value="plugin_customization.ini"/> <property name="appName" value="Rule Configurator Workbench"> </property> </product> </extension> <extension point="org.eclipse.ui.intro"> <intro class="org.eclipse.ui.intro.config.CustomizableIntroPart" icon="icons/Welcome.gif" id="com.csc.fs.nba.underwriting.intro"/> <introProductBinding introId="com.csc.fs.nba.underwriting.intro" productId="com.csc.fs.nba.underwriting.application.product"/> <intro class="org.eclipse.ui.intro.config.CustomizableIntroPart" id="com.csc.fs.nba.underwriting.application.intro"> </intro> <introProductBinding introId="com.csc.fs.nba.underwriting.application.intro" productId="com.csc.fs.nba.underwriting.application.product"> </introProductBinding> </extension> <extension name="Accelerator Tooling" point="org.eclipse.ui.intro.config"> <config content="$nl$/intro/introContent.xml" id="org.eclipse.platform.introConfig.mytest" introId="com.csc.fs.nba.underwriting.intro"> <presentation home-page-id="news"> <implementation kind="html" os="win32,linux,macosx" style="$nl$/intro/css/shared.css"/> </presentation> </config> <config content="introContent.xml" id="com.csc.fs.nba.underwriting.application.introConfigId" introId="com.csc.fs.nba.underwriting.application.intro"> <presentation home-page-id="root"> <implementation kind="html" os="win32,linux,macosx" style="content/shared.css"> </implementation> </presentation> </config> </extension> <extension
point="org.eclipse.ui.intro.configExtension"> <theme default="true" id="org.eclipse.ui.intro.universal.circles" name="%theme.name.circles" path="$nl$/themes/circles" previewImage="themes/circles/preview.png"> <property name="introTitle" value="Accelerator Tooling"/> <property name="introVer" value="%version"/> </theme> </extension> <extension point="org.eclipse.ui.ide.resourceFilters"> <filter pattern="*.dependency" selected="true"/> <filter pattern="*.producteditor" selected="true"/> <filter pattern="*.av" selected="true"/> <filter pattern=".*" selected="true"/> </extension> <extension point="org.eclipse.ui.splashHandlers"> <splashHandler class="com.csc.fs.nba.underwriting.application.splashHandlers.InteractiveSplashHandler" id="com.csc.fs.nba.underwriting.application.splashHandlers.interactive"> </splashHandler> <splashHandler class="com.csc.fs.underwriting.application.splashHandlers.InteractiveSplashHandler" id="com.csc.fs.underwriting.application.splashHandlers.interactive"> </splashHandler> <splashHandlerProductBinding productId="com.csc.fs.nba.underwriting.application" splashId="com.csc.fs.underwriting.application.splashHandlers.interactive"> </splashHandlerProductBinding> </extension> <extension id="com.csc.fs.pa.security" point="com.csc.fs.pa.security.implementation.secure"> <securityImplementation class="com.csc.fs.pa.security.PASecurityImpl"> </securityImplementation> </extension> <extension id="productApplication.security.pep" name="com.csc.fs.pa.producteditor.application.security.pep" point="com.csc.fs.pa.security.implementation.authorize"> <authorizationManager class="com.csc.fs.pa.security.authorization.PAAuthorizationManager"> </authorizationManager> </extension> <extension point="org.eclipse.ui.editors"> <editor class="com.csc.fs.underwriting.product.editors.PDFViewer" extensions="pdf" icon="icons/pdficon_small.gif" id="com.csc.fs.pa.producteditor.application.editors.PDFViewer" name="PDF Viewer"> </editor> </extension> <extension point="org.eclipse.ui.views"> <category id="com.csc.fs.pa.application.viewCategory" name="%category"> </category> </extension> <extension point="org.eclipse.ui.newWizards"> <category id="com.csc.fs.pa.application.newWizardCategory" name="%category"> </category> <category id="com.csc.fs.pa.application.newWizardInitialize" name="%initialize" parentCategory="com.csc.fs.pa.application.newWizardCategory"> </category> </extension> <extension point="com.csc.fs.pa.common.usability.addNewCategory"> <addNewCategoryId id="com.csc.fs.pa.application.newWizardCategory"> </addNewCategoryId> </extension> <!--extension point="org.eclipse.ui.activities"> <activity description="View Code Generation Option" id="com.csc.fs.pa.producteditor.application.viewCodeGen" name="ViewCodeGen"> </activity> <activityPatternBinding activityId="com.csc.fs.pa.producteditor.application.viewCodeGen" pattern="com.csc.fs.pa.bpd.vpms.codegen/com.csc.fs.pa.bpd.vpms.codegen.bpdCodeGenActionId"> </activityPatternBinding> Add New Product Definition Extension </extension--> </plugin> class="com.csc.fs.underwriting.product.editors.PDFViewer" extensions="pdf" icon="icons/pdficon_small.gif" id="com.csc.fs.pa.producteditor.application.editors.PDFViewer" name="PDF Viewer"> </editor> </extension> <extension point="org.eclipse.ui.views"> <category id="com.csc.fs.pa.application.viewCategory" name="%category"> </category> </extension> <extension point="org.eclipse.ui.newWizards"> <category id="com.csc.fs.pa.application.newWizardCategory" name="%category"> </category> <category 
id="com.csc.fs.pa.application.newWizardInitialize" name="%initialize" parentCategory="com.csc.fs.pa.application.newWizardCategory"> </category> </extension> <extension point="com.csc.fs.pa.common.usability.addNewCategory"> <addNewCategoryId id="com.csc.fs.pa.application.newWizardCategory"> </addNewCategoryId> </extension> <!--extension point="org.eclipse.ui.activities"> <activity description="View Code Generation Option" id="com.csc.fs.pa.producteditor.application.viewCodeGen" name="ViewCodeGen"> </activity> <activityPatternBinding activityId="com.csc.fs.pa.producteditor.application.viewCodeGen" pattern="com.csc.fs.pa.bpd.vpms.codegen/com.csc.fs.pa.bpd.vpms.codegen.bpdCodeGenActionId"> </activityPatternBinding> Add New Product Definition Extension </extension--> </plugin> Inside each class(the qualified classes in above xml) i did only hide and show the view according to perspective and that is working very fine.. Please provide any method that Eclipse provide so that I can override it in each classed so that it can work accordingly.


  • Writing to the UI with MonoDroid using RunOnUIThread

    - by Wallym
    I've been pulling my hair out over the past day or so trying to update the UI in my test app.  I was having problem after problem.  I finally got down to my base problem: I could not write out to my TextView.  WTF could be causing that?  I can write to my UI in other parts of my app.  This is pure craziness.  I thought long and hard and nothing was coming to me.  Wait, the light bulb went on.  I am in the wrong thread.  Great, how do I write in the correct thread?  MonoDroid supports the entire AsyncTask set of objects, but this seemed like overkill.  I was reading and came across RunOnUiThread()... Bing... the light bulb has been invented... BlueStar Airlines (oh wait, wrong context). Anyway, here is what I needed:

      this.RunOnUiThread(() => TextViewControl.Text = "Hello World");

    Enjoy!!!!!!  Remember kiddies, running off-device operations on the main UI thread is bad - not as bad as crossing-the-streams bad, but bad as in trying to drive on a flat tire. It won't kill you, but it does keep you from getting anywhere.
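    To put the one-liner in context, a minimal sketch of the whole pattern (DoSlowWork and Resource.Id.StatusText are illustrative placeholders, not from the post): do the slow work on a background thread, then hop back with RunOnUiThread for the single UI write:

      using System.Threading;
      using Android.App;
      using Android.Widget;

      public class StatusActivity : Activity
      {
          void RefreshStatus()
          {
              ThreadPool.QueueUserWorkItem(_ =>
              {
                  string result = DoSlowWork(); // runs off the UI thread

                  // Views may only be touched on the UI thread, so marshal the write back.
                  RunOnUiThread(() => FindViewById<TextView>(Resource.Id.StatusText).Text = result);
              });
          }

          string DoSlowWork()
          {
              Thread.Sleep(1000); // stand-in for a network or disk call
              return "Hello World";
          }
      }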


  • CoffeeScript Test Framework

    - by Liam McLennan
    Tonight the Brisbane Alt.NET group is doing a coding dojo. I am hoping to talk someone into pairing with me to solve the kata in CoffeeScript. CoffeeScript is an awesome language, half javascript, half ruby, that compiles to javascript. To assist with tonight’s dojo I wrote the following micro test framework for CoffeeScript: <html> <body> <div> <h2>Test Results:</h2> <p class='results' /> </div> <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js" type="text/javascript"></script> <script type="text/coffeescript"> # super simple test framework test: { write: (s) -> $('.results').append(s + '<br/>') assert: (b, message...) -> test.write(if b then "pass" else "fail: " + message) tests: [] exec: () -> for t in test.tests test.write("<br/><b>$t.name</b>") t.func() } # add some tests test.tests.push { name: "First Test" func: () -> test.assert(true) } test.tests.push { name: "Another Test" func: () -> test.assert(false, "You loose") } # run them test.exec(test.tests) </script> <script type="text/javascript" src="coffee-script.js"></script> </body> </html> It’s not the prettiest, but as far as I know it is the only CoffeeScript test framework in existence. Of course, I could just use one of the javascript test frameworks but that would be no fun. To get this example to run you need the coffeescript compiler in the same directory as the page.


  • Create Hidden Partition on USB

    - by Francesco
    I need to split a USB flash disk into two USB drives, each one with its own drive letter, but one of them has to be hidden. In the non-hidden partition I want to place my software, and in the hidden partition I need to place some files that are required by the software in order to work. Moreover, only the software may read, write, delete or execute the files in this partition. I thought about using a little partition viewed as a CD-ROM drive, as they do in many flash drives, but this solution does not allow writing other data at a later time, and it is visible to the user, who can read the files. Obviously the software must be able to access the partition and read, write, delete or execute the content. Is there a solution to do this, preferably one that also works on Linux?


  • Flex 4 + Apache Ant, Cannot Load FlashPunk Libraries

    - by SquareCrow
    I have been searching google, Apache Docs*, and FlashPunk forums looking for an answer to this: I cannot get Ant/Flex to find and compile the FlashPunk libraries. Here is my build.xml:

      <!-- Fetch the JAR full of Flex tasks if it is not already in the source directory -->
      <copy file="${FLEX_HOME}/ant/lib/flexTasks.jar" todir="${SOURCE_PATH}"/>
      <!-- Add flextasks to the project -->
      <taskdef resource="flexTasks.tasks" classpath="${SOURCE_PATH}/flexTasks.jar"></taskdef>
      <!-- Release build Flash Player 10.1 -->
      <target name="build">
          <!-- Build the FlashPunk library -->
          <echo message="building swc..." />
          <compc output="FlashPunk.swc" keep-generated-actionscript="false" incremental="false" optimize="false" debug="true" use-network="false">
              <include-sources dir="${FLASHPUNK_PATH}/net" includes="**/* flashpunk/utils/* flashpunk/masks/*" excludes="**/*.TTF **/*.png"/>
              <load-config filename="${FLEX_HOME}/frameworks/flex-config.xml"/>
          </compc>
          <echo message="building swf..." />
          <mxmlc file="${SOURCE_PATH}/epOne.as" output="${OUTPUT_PATH}/epOne.swf" debug="false" incremental="false" strict="true" accessible="false" link-report="link_report.xml" static-link-runtime-shared-libraries="true">
              <optimize>true</optimize>
          </mxmlc>
      </target>

    This results in many errors of the type "Definition net.flashpunk.masks:Grid could not be found", even though when I open the directories I can see the *.AS files right there. Sorry if this is very basic. I am piecing together knowledge of Ant from docs and tutorials.

    *I decided to use Ant because neither FlashDevelop for Windows nor Eclipse for Linux seem to work for me.


  • What happens to modern-ui apps when they aren't in the foreground?

    - by Mark Allen
    If I start a modern-ui app and then switch to a different app or a normal program running on the desktop, what happens to the first app? I've heard something about the first app being suspended, but realized that I don't really know that for certain. I mean, could you write a SETI@Home (BOINC) app if you wanted to, or will apps that aren't in the foreground always be suspended? Can you change that? I could see changing that based on available resources, running from AC vs. battery, etc. This morning I heard about an iPad being recovered thanks to the "Find my iPad" app, and was wondering whether you could write such a thing as a modern-ui app and have it work without being the running foreground app. (I'm aware that you'd just write a Windows service or similar, that's not what I'm asking about.)


  • Is there any desktop version available for WYSIWYG editors

    - by user825904
    I have been searching for a long time and I am not able to find one editor which I can always keep open for daily note keeping, with some code snippets, and which highlights the code - just like the editor we use to ask questions here. All I want is to be able to write some text as normal; then paste a few lines of PHP code, click a code button (or press Ctrl+K) and have it highlighted; then on the same page write a paragraph of normal, unhighlighted text and some bullet points; then paste some C++ code and wrap it the same way, on a page that grows daily. Currently I use Notepad++, and it does not support syntax highlighting of different languages on the same page or on individual blocks; it highlights the whole file, even the normal text.


  • VSDB to SSDT part 3 : command-line deployment with SqlPackage.exe, replacement for Vsdbcmd.exe

    - by Etienne Giust
    For our continuous integration needs, we use a powershell script to handle deployment. A simpler approach would be to have a deployment task embedded within the build process. See the solution provided by Jakob Ehn (a most interesting read, which also dives into the "deploying from Visual Studio" specifics): http://geekswithblogs.net/jakob/archive/2012/04/25/deploying-ssdt-projects-with-tfs-build.aspx

    For our needs, though, clearly separating our build phase from our deployment phase is important. It allows us to instantly deploy old versions, and it is more convenient for continuous integration. So we stick with the powershell script approach. With VSDB projects, that script used to call the following command (the vsdbcmd executable was locally available, along with the needed libraries):

      vsdbcmd.exe /a:Deploy /dd /cs:<CONNECTIONSTRING TO TARGET DB> /dsp:SQL /manifest:<PATH TO .deploymanifest FILE>

    To do approximately the same thing with an SSDT-produced file (dacpac), you would call this command on a machine which has VS2012 installed (or the SSDT installed, see here: http://msdn.microsoft.com/en-us/library/hh500335%28v=vs.103%29):

      C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\SqlPackage.exe /Action:Publish /SourceFile:<PATH TO Database.dacpac FILE> /Profile:<PATH TO .publish.xml FILE>

    And from within a powershell script:

      & "C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\SqlPackage.exe" /Action:Publish /SourceFile:<PATH TO Database.dacpac FILE> /Profile:<PATH TO .publish.xml FILE>

    The command will consume a publish.xml file where the connection string and the deployment options are specified. You will be familiar with it if you have done some deployments from Visual Studio; if not, please refer to the above-mentioned article by Jakob Ehn. It is also possible to pass those parameters on the command line. The complete SqlPackage.exe syntax is detailed here: http://msdn.microsoft.com/en-us/library/hh550080%28v=vs.103%29.aspx


  • How do I measure performance of a virtual server?

    - by Sergey
    I've got a VPS running Ubuntu. Being a virtual server, I understand that it shares resources with an unknown number of other servers, and I'm noticing that it's considerably slower than my desktop machine. Is there some tool to measure the performance of the virtual machine? I'd be curious to see some approximate measure similar to bogomips, possibly for CPU (operations/sec), memory and disk read/write speed. I'd like to be able to compare those numbers to my desktop machine. I'm not interested in the specs of the actual physical machine my VPS is running on - by doing cat /proc/cpuinfo I can see that it's a nice quad-core Xeon machine, but it doesn't matter to me. I'm basically interested in how fast a program would run in my VPS - how many CPU operations it can make in a second, how many bytes it can write to RAM or to disk. I only have ssh access to the machine, so the tool needs to be command-line. I could write a script which, say, does some calculations in a loop for a second and counts how many loops it was able to do, or something similar to measure disk and RAM performance. But I'm sure something like this already exists.
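    In the spirit of the loop-counting script proposed above, a rough console sketch in C# (an assumption on my part - it would need Mono on the VPS, which the question does not mention): it counts integer operations completed in one second and times a 64 MB sequential write. The numbers are only comparable between machines running the same binary, and noisy neighbours on the host will make them jumpy:

      using System;
      using System.Diagnostics;
      using System.IO;

      class MiniBench
      {
          static void Main()
          {
              // CPU: count loop iterations completed in roughly one second.
              var sw = Stopwatch.StartNew();
              long loops = 0, sink = 0;
              while (sw.ElapsedMilliseconds < 1000)
              {
                  for (int i = 0; i < 1000; i++) sink += i; // small batch per clock check
                  loops += 1000;
              }
              Console.WriteLine("CPU: {0:N0} loops/sec (sink {1})", loops, sink);

              // Disk: time a 64 MB sequential write to a temp file.
              // Note the OS page cache will flatter this number somewhat.
              byte[] block = new byte[1 << 20]; // 1 MB buffer
              string path = Path.GetTempFileName();
              sw = Stopwatch.StartNew();
              using (var fs = new FileStream(path, FileMode.Create, FileAccess.Write))
                  for (int i = 0; i < 64; i++) fs.Write(block, 0, block.Length);
              sw.Stop();
              File.Delete(path);
              Console.WriteLine("Disk: {0:F1} MB/sec", 64 / sw.Elapsed.TotalSeconds);
          }
      }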


  • MMO Data Persistence Question

    - by JasonG
    I wanted to ask a question regarding data persistence strategies for an MMO. I have some experience in the games industry with social synchronous games. At Zynga, we stored static proto data in XML on both the client and the server, and stored instance/runtime data in membase. For clarity's sake, proto data for a Potion would be PotionName or MaxCharges, while runtime/instance data would be something like ChargesRemaining. So basically, if a player picks up a potion, the instance is (via prediction) created from XML data on the client, and the request gets sent to the server, where the instance is created from XML and then added to membase. Is the same strategy what would be used for something like an MMO? Would it be feasible to have static proto data in some kind of in-memory no-sql database on both client and server, with instance data being stored on the server in a more enterprise-level database? Or should all data (proto/instance) be stored on the server, with the client getting everything from the server? I know a lot of this might depend on certain game requirements; however, I'm basically looking for some general opinions/best practices here if there are any.
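    To make the proto/instance split concrete, a small C# sketch using the Potion example from the question (everything beyond PotionName, MaxCharges and ChargesRemaining is an invented illustration):

      using System;

      // Proto data: static, shipped to both client and server (e.g. loaded from XML).
      public class PotionProto
      {
          public string ProtoId { get; set; }     // key used to look the proto up
          public string PotionName { get; set; }
          public int MaxCharges { get; set; }
      }

      // Instance data: created at runtime, persisted server-side (e.g. in membase).
      public class PotionInstance
      {
          public string InstanceId { get; set; }
          public string ProtoId { get; set; }     // reference back to the proto
          public int ChargesRemaining { get; set; }

          public static PotionInstance Create(PotionProto proto)
          {
              // A fresh pickup starts with the full set of charges from the proto.
              return new PotionInstance
              {
                  InstanceId = Guid.NewGuid().ToString("N"),
                  ProtoId = proto.ProtoId,
                  ChargesRemaining = proto.MaxCharges
              };
          }
      }

    Only PotionInstance records ever need to hit the persistent store; the proto never changes at runtime, which is what makes it safe to replicate to clients.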


  • Organizations &amp; Architecture UNISA Studies &ndash; Chap 7

    - by MarkPearl
    Learning Outcomes
    - Name different device categories
    - Discuss the functions and structure of I/O modules
    - Describe the principles of Programmed I/O
    - Describe the principles of Interrupt-driven I/O
    - Describe the principles of DMA
    - Discuss the evolution characteristics of I/O channels
    - Describe different types of I/O interface
    - Explain the principles of point-to-point and multipoint configurations
    - Discuss the way in which a FireWire serial bus functions
    - Discuss the principles of InfiniBand architecture

    External Devices
    An external device attaches to the computer by a link to an I/O module. The link is used to exchange control, status, and data between the I/O module and the external device. External devices can be classified into 3 categories:
    - Human readable – e.g. video display
    - Machine readable – e.g. magnetic disk
    - Communications – e.g. wifi card

    I/O Modules
    An I/O module has two major functions:
    - Interface to the processor and memory via the system bus or central switch
    - Interface to one or more peripheral devices by tailored data links

    Module Functions
    The major functions or requirements for an I/O module fall into the following categories:
    - Control and timing
    - Processor communication
    - Device communication
    - Data buffering
    - Error detection

    The I/O function includes a control and timing requirement, to coordinate the flow of traffic between internal resources and external devices. Processor communication involves the following:
    - Command decoding
    - Data
    - Status reporting
    - Address recognition

    The I/O module must also be able to perform device communication. This communication involves commands, status information, and data. An essential task of an I/O module is data buffering, due to the relatively slow speeds of most external devices. An I/O module is often responsible for error detection and for subsequently reporting errors to the processor.

    I/O Module Structure
    An I/O module functions to allow the processor to view a wide range of devices in a simple-minded way. The I/O module may hide the details of timing, formats, and the electromechanics of an external device so that the processor can function in terms of simple read and write commands. An I/O channel/processor is an I/O module that takes on most of the detailed processing burden, presenting a high-level interface to the processor. Three techniques are possible for I/O operations:
    - Programmed I/O
    - Interrupt-driven I/O
    - Direct memory access (DMA)

    Programmed I/O
    When a processor is executing a program and encounters an instruction relating to I/O, it executes that instruction by issuing a command to the appropriate I/O module. With programmed I/O, the I/O module will perform the requested action and then set the appropriate bits in the I/O status register. The I/O module takes no further action to alert the processor.

    I/O Commands
    To execute an I/O-related instruction, the processor issues an address, specifying the particular I/O module and external device, and an I/O command.
    There are four types of I/O commands that an I/O module may receive when it is addressed by a processor:
    - Control – used to activate a peripheral and tell it what to do
    - Test – used to test various status conditions associated with an I/O module and its peripherals
    - Read – causes the I/O module to obtain an item of data from the peripheral and place it in an internal buffer
    - Write – causes the I/O module to take an item of data from the data bus and subsequently transmit that data item to the peripheral

    The main disadvantage of this technique is that it is a time-consuming process that keeps the processor busy needlessly.

    I/O Instructions
    With programmed I/O there is a close correspondence between the I/O-related instructions that the processor fetches from memory and the I/O commands that the processor issues to an I/O module to execute the instructions. Typically there will be many I/O devices connected through I/O modules to the system; each device is given a unique identifier or address. When the processor issues an I/O command, the command contains the address of the desired device, thus each I/O module must interpret the address lines to determine if the command is for itself. When the processor, main memory and I/O share a common bus, two modes of addressing are possible:
    - Memory-mapped I/O
    - Isolated I/O (for a detailed explanation read page 245 of the book)

    The advantage of memory-mapped I/O over isolated I/O is that it has a large repertoire of instructions that can be used, allowing more efficient programming. The disadvantage is that valuable memory address space is used up.

    Interrupt-driven I/O
    Interrupt-driven I/O works as follows:
    - The processor issues an I/O command to a module and then goes on to do some other useful work
    - The I/O module will then interrupt the processor to request service when it is ready to exchange data with the processor
    - The processor then executes the data transfer and then resumes its former processing

    Interrupt Processing
    The occurrence of an interrupt triggers a number of events, both in the processor hardware and in software. When an I/O device completes an I/O operation, the following sequence of hardware events occurs:
    - The device issues an interrupt signal to the processor
    - The processor finishes execution of the current instruction before responding to the interrupt
    - The processor tests for an interrupt, determines that there is one, and sends an acknowledgement signal to the device that issued the interrupt. The acknowledgement allows the device to remove its interrupt signal
    - The processor now needs to prepare to transfer control to the interrupt routine. To begin, it needs to save information needed to resume the current program at the point of interrupt. The minimum information required is the status of the processor and the location of the next instruction to be executed
    - The processor now loads the program counter with the entry location of the interrupt-handling program that will respond to this interrupt. It also saves the values of the processor registers, because the interrupt operation may modify these
    - The interrupt handler processes the interrupt; this includes examination of status information relating to the I/O operation or other event that caused an interrupt
    - When interrupt processing is complete, the saved register values are retrieved from the stack and restored to the registers
    - Finally, the PSW and program counter values from the stack are restored
    Design Issues
    Two design issues arise in implementing interrupt I/O:
    - Because there will be multiple I/O modules, how does the processor determine which device issued the interrupt?
    - If multiple interrupts have occurred, how does the processor decide which one to process?

    Addressing device recognition, four general categories of techniques are in common use:
    - Multiple interrupt lines
    - Software poll
    - Daisy chain
    - Bus arbitration

    For a detailed explanation of these approaches read page 250 of the textbook. Interrupt-driven I/O, while more efficient than simple programmed I/O, still requires the active intervention of the processor to transfer data between memory and an I/O module, and any data transfer must traverse a path through the processor. Thus it suffers from two inherent drawbacks:
    - The I/O transfer rate is limited by the speed with which the processor can test and service a device
    - The processor is tied up in managing an I/O transfer; a number of instructions must be executed for each I/O transfer

    Direct Memory Access
    When large volumes of data are to be moved, an efficient technique is direct memory access (DMA).

    DMA Function
    DMA involves an additional module on the system bus. The DMA module is capable of mimicking the processor and taking over control of the system from the processor. It needs to do this to transfer data to and from memory over the system bus. DMA must use the bus only when the processor does not need it, or it must force the processor to suspend operation temporarily (most common; referred to as cycle stealing). When the processor wishes to read or write a block of data, it issues a command to the DMA module by sending it the following information:
    - Whether a read or write is requested, using the read or write control line between the processor and the DMA module
    - The address of the I/O device involved, communicated on the data lines
    - The starting location in memory to read from or write to, communicated on the data lines and stored by the DMA module in its address register
    - The number of words to be read or written, communicated via the data lines and stored in the data count register

    The processor then continues with other work; it delegates the I/O operation to the DMA module, which transfers the entire block of data, one word at a time, directly to or from memory without going through the processor. When the transfer is complete, the DMA module sends an interrupt signal to the processor; thus the processor is involved only at the beginning and end of the transfer.

    I/O Channels and Processors
    Characteristics of I/O Channels
    As one proceeds along the evolutionary path, more and more of the I/O function is performed without CPU involvement. The I/O channel represents an extension of the DMA concept. An I/O channel has the ability to execute I/O instructions, which gives it complete control over I/O operations. In a computer system with such devices, the CPU does not execute I/O instructions; such instructions are stored in main memory to be executed by a special-purpose processor in the I/O channel itself. Two types of I/O channels are common:
    - A selector channel controls multiple high-speed devices.
    - A multiplexor channel can handle I/O with multiple devices at the same time, accepting or transmitting characters as fast as possible to multiple devices.
    The External Interface: FireWire and InfiniBand
    Types of Interfaces
    One major characteristic of the interface is whether it is serial or parallel:
    - Parallel interface – there are multiple lines connecting the I/O module and the peripheral, and multiple bits are transferred simultaneously
    - Serial interface – there is only one line used to transmit data, and bits must be transmitted one at a time

    With new-generation serial interfaces, parallel interfaces are becoming less common. In either case, the I/O module must engage in a dialogue with the peripheral. In general terms the dialogue may look as follows:
    - The I/O module sends a control signal requesting permission to send data
    - The peripheral acknowledges the request
    - The I/O module transfers data
    - The peripheral acknowledges receipt of data

    For a detailed explanation of FireWire and InfiniBand technology read pages 264-270 of the textbook.


  • Ultrium 3 tape drive shoe-shining, 3Mb/s: and it's not the cable

    - by mowsala
    I have an HP 960 Ultrium 3 tape drive. Since I got it (second hand, £90), I've been experiencing shoe-shining. Writing with tar in Linux, I average about 3Mb/s write speed. I've tried replacing both the SCSI card and the cable now, both of which made no difference at all. A curious observation I have made is that the write rate is not consistent. Sometimes it will write for over a minute without shoe-shining, but more often, just a few seconds. I've also tried several tapes, different source drives, and even writing from Windows Backup, to no avail.


  • CSS and HTML incoherences when declaring multiple classes

    - by Cesco
    I'm learning CSS "seriously" for the first time, but I find the way you deal with multiple classes in CSS and HTML quite incoherent. For example, I learned that if I want to declare multiple CSS classes with a common style applied to them, I have to write:

      .style1, .style2, .style3 { color: red; }

    Then, if I have to declare an HTML tag that has multiple classes applied to it, I have to write:

      <div class="style1 style2 style3"></div>

    And I'm asking why? From my personal point of view it would be more coherent if both could be declared by using a comma to separate each class, or if both could be declared using a space; after all, IMHO, we're still talking about multiple classes, in both CSS and HTML. I think it would make more sense if I could write this to declare a div with multiple classes applied:

      <div class="style1, style2, style3"></div>

    Am I missing something important? Could you explain to me if there's a valid reason behind these two different syntaxes?


  • Step by Step screencasts to do Behavior Driven Development on WCF and UI using xUnit

    - by oazabir
    I am trying to encourage my team to get into Behavior Driven Development (BDD), so I made two quick video tutorials to show how BDD can be done from the early requirement collection stage to late integration tests. The first explains breaking user stories into behaviors, and then developers and test engineers taking the behavior specs and writing a WCF service and unit tests for it, in parallel, and then eventually integrating the WCF service and doing the integration tests. It introduces how mocking is done using the Moq library. Moreover, it shows a way you can write tests once and do both unit and integration tests at the flip of a config setting. Watch the screencast here: Doing BDD with xUnit, Subspec on a WCF Service. Warning: you might hear some noise in the audio in some places; something is wrong with the audio bit rate. I suggest you let the video download for a while and then play it. If you still get noise, go back a couple of seconds and then resume play; that eliminates the noise. The next video tutorial is about doing BDD for automated UI tests. It shows how test engineers can take behaviors and then write tests that test a prototype UI in isolation (just like a service contract) in order to ensure the prototype conforms to the expected behaviors, while developers write the real code and build the real product in parallel. When the real stuff is done, the same tests can test the real stuff and ensure the agreed behaviors are satisfied. I have used WatiN to automate the UI and test it for expected behaviors: Doing BDD with xUnit and WatiN on an ASP.NET webform. Hope you like it!
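    For a flavour of what the first screencast covers, a hedged sketch of a behavior-style xUnit test that mocks a service contract with Moq (the contract and values here are invented for illustration; the videos layer SubSpec's Context/Do/Assert style on top of plain facts like this):

      using Moq;
      using Xunit;

      // Invented service contract standing in for the one built in the screencast.
      public interface ICustomerService
      {
          string GetCustomerName(int customerId);
      }

      public class When_a_customer_is_looked_up_by_id
      {
          [Fact]
          public void The_service_returns_the_customer_name()
          {
              // Arrange: mock the dependency so the behavior is tested in isolation.
              var service = new Mock<ICustomerService>();
              service.Setup(s => s.GetCustomerName(42)).Returns("Alice");

              // Act
              string name = service.Object.GetCustomerName(42);

              // Assert: the expected behavior, plus proof the call happened exactly once.
              Assert.Equal("Alice", name);
              service.Verify(s => s.GetCustomerName(42), Times.Once());
          }
      }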

