Search Results

Search found 7802 results on 313 pages for 'unit tests'.

  • How to create Custom ListForm WebPart

    - by DipeshBhanani
    Most people who work extensively with SharePoint (me included) don't like to use the out-of-box list forms (DispForm.aspx, EditForm.aspx, NewForm.aspx) as an interface. These OOB list forms tie developers' hands when it comes to customization: adding even one postback event to a dropdown field so that it can populate other fields in NewForm.aspx or EditForm.aspx is a headache, and clients ask for exactly that kind of thing all the time. So here I am going to give you a flying start into the world of SharePoint customization. In this post I will explain how to create a custom ListForm WebPart. In my next posts I will explain easy deployment of list forms through features and, last, give guidance on using SharePoint web controls.

    1. First, create a class library project in Visual Studio and derive the class from the WebPart class.

        public class CustomListForm : WebPart

    2. Declare the variables and properties which we are going to use throughout the class. You will get to know these once you see them in use.

        #region "Variable Declaration"

        Table spTableCntl;
        FormToolBar formToolBar;
        Literal ltAlertMessage;
        Guid SiteId;
        Guid ListId;
        int ItemId;
        string ListName;

        #endregion

        #region "Properties"

        SPControlMode _ControlMode = SPControlMode.New;
        [Personalizable(PersonalizationScope.Shared),
         WebBrowsable(true),
         WebDisplayName("Control Mode"),
         WebDescription("Set Control Mode"),
         DefaultValue(""),
         Category("Miscellaneous")]
        public SPControlMode ControlMode
        {
            get { return _ControlMode; }
            set { _ControlMode = value; }
        }

        #endregion

    The "ControlMode" property identifies the mode of the list form. It is of type SPControlMode, an enum with the values Display, Edit, New and Invalid. When we add this WebPart to DispForm.aspx, EditForm.aspx and NewForm.aspx, we will set the WebPart's ControlMode property to Display, Edit and New respectively.

    3. Now we need to override the CreateChildControls method and write code to manually add the SharePoint web controls for each list field, as well as the toolbar controls.

        protected override void CreateChildControls()
        {
            base.CreateChildControls();

            try
            {
                SiteId = SPContext.Current.Site.ID;
                ListId = SPContext.Current.ListId;
                ListName = SPContext.Current.List.Title;

                if (_ControlMode == SPControlMode.Display || _ControlMode == SPControlMode.Edit)
                    ItemId = SPContext.Current.ItemId;

                SPSecurity.RunWithElevatedPrivileges(delegate()
                {
                    using (SPSite site = new SPSite(SiteId))
                    {
                        //creating a new SPSite with credentials of System Account
                        using (SPWeb web = site.OpenWeb())
                        {
                            //<Custom Code for creating form controls>
                        }
                    }
                });
            }
            catch (Exception ex)
            {
                ShowError(ex, "CreateChildControls");
            }
        }

    Here we are assuming that we are developing this WebPart to plug into list forms, hence we get the List Id and List Name from the current context.
    We can have an Item Id only in Display and Edit mode. We put our code inside "RunWithElevatedPrivileges" to elevate privileges to the System Account. Now let's dig into the main code and expand "//<Custom Code for creating form controls>". Before instantiating any SharePoint control, we need to set the context of the SharePoint web controls explicitly so that they are instantiated with the elevated System Account user. The following line does the job.

        //To create SharePoint controls with new web object and System Account credentials
        SPControl.SetContextWeb(Context, web);

    First, let's add the main table as a container for all controls.

        //Table to render webpart
        Table spTableMain = new Table();
        spTableMain.CellPadding = 0;
        spTableMain.CellSpacing = 0;
        spTableMain.Width = new Unit(100, UnitType.Percentage);
        this.Controls.Add(spTableMain);

    Now we need to add the top toolbar, with Save and Cancel buttons, just as in the OOB forms (screenshot omitted in this excerpt).

        // Add Row and Cell for Top ToolBar
        TableRow spRowTopToolBar = new TableRow();
        spTableMain.Rows.Add(spRowTopToolBar);
        TableCell spCellTopToolBar = new TableCell();
        spRowTopToolBar.Cells.Add(spCellTopToolBar);
        spCellTopToolBar.Width = new Unit(100, UnitType.Percentage);

        ToolBar toolBarTop = (ToolBar)Page.LoadControl("/_controltemplates/ToolBar.ascx");
        toolBarTop.CssClass = "ms-formtoolbar";
        toolBarTop.ID = "toolBarTbltop";
        toolBarTop.RightButtons.SeparatorHtml = "<td class=ms-separator> </td>";

        if (_ControlMode != SPControlMode.Display)
        {
            SaveButton btnSave = new SaveButton();
            btnSave.ControlMode = _ControlMode;
            btnSave.ListId = ListId;

            if (_ControlMode == SPControlMode.New)
                btnSave.RenderContext = SPContext.GetContext(web);
            else
            {
                btnSave.RenderContext = SPContext.GetContext(this.Context, ItemId, ListId, web);
                btnSave.ItemContext = SPContext.GetContext(this.Context, ItemId, ListId, web);
                btnSave.ItemId = ItemId;
            }
            toolBarTop.RightButtons.Controls.Add(btnSave);
        }

        GoBackButton goBackButtonTop = new GoBackButton();
        toolBarTop.RightButtons.Controls.Add(goBackButtonTop);
        goBackButtonTop.ControlMode = SPControlMode.Display;

        spCellTopToolBar.Controls.Add(toolBarTop);

    Here we have used "SaveButton" and "GoBackButton", internal SharePoint web controls that provide the save and cancel functionality. I have set some of the SaveButton properties inside an if-else because we will not have an Item Id in New mode; the ItemId property identifies which SharePoint list item needs to be saved. Now add the FormToolBar to the page, which contains the "Attach File", "Delete Item" etc. buttons.
        // Add Row and Cell for FormToolBar
        TableRow spRowFormToolBar = new TableRow();
        spTableMain.Rows.Add(spRowFormToolBar);
        TableCell spCellFormToolBar = new TableCell();
        spRowFormToolBar.Cells.Add(spCellFormToolBar);
        spCellFormToolBar.Width = new Unit(100, UnitType.Percentage);

        FormToolBar formToolBar = new FormToolBar();
        formToolBar.ID = "formToolBar";
        formToolBar.ListId = ListId;
        if (_ControlMode == SPControlMode.New)
            formToolBar.RenderContext = SPContext.GetContext(web);
        else
        {
            formToolBar.RenderContext = SPContext.GetContext(this.Context, ItemId, ListId, web);
            formToolBar.ItemContext = SPContext.GetContext(this.Context, ItemId, ListId, web);
            formToolBar.ItemId = ItemId;
        }
        formToolBar.ControlMode = _ControlMode;
        formToolBar.EnableViewState = true;

        spCellFormToolBar.Controls.Add(formToolBar);

    The ControlMode property takes care of which buttons are displayed on the toolbar, e.g. "Attach File" and "Delete Item" in new/edit forms, and "New Item", "Edit Item", "Delete Item", "Manage Permissions" etc. in display forms. Now add the main section, which contains the form field controls.

        //Add a blank row with height of 5px to render space between the ToolBar table and the control table
        TableRow spRowLine1 = new TableRow();
        spTableMain.Rows.Add(spRowLine1);
        TableCell spCellLine1 = new TableCell();
        spRowLine1.Cells.Add(spCellLine1);
        spCellLine1.Height = new Unit(5, UnitType.Pixel);
        spCellLine1.Controls.Add(new LiteralControl("<IMG SRC='/_layouts/images/blank.gif' width=1 height=1 alt=''>"));

        //Add Row and Cell for Form Controls Section
        TableRow spRowCntl = new TableRow();
        spTableMain.Rows.Add(spRowCntl);
        TableCell spCellCntl = new TableCell();

        //Create Form Field controls and add them to the table "spTableCntl"
        CreateFieldControls(web);
        //Add the table "spTableCntl" containing all form controls to the page
        spRowCntl.Cells.Add(spCellCntl);
        spCellCntl.Width = new Unit(100, UnitType.Percentage);
        spCellCntl.Controls.Add(spTableCntl);

        TableRow spRowLine2 = new TableRow();
        spTableMain.Rows.Add(spRowLine2);
        TableCell spCellLine2 = new TableCell();
        spRowLine2.Cells.Add(spCellLine2);
        spCellLine2.CssClass = "ms-formline";
        spCellLine2.Controls.Add(new LiteralControl("<IMG SRC='/_layouts/images/blank.gif' width=1 height=1 alt=''>"));

        // Add blank row with height of 5 pixels
        TableRow spRowLine3 = new TableRow();
        spTableMain.Rows.Add(spRowLine3);
        TableCell spCellLine3 = new TableCell();
        spRowLine3.Cells.Add(spCellLine3);
        spCellLine3.Height = new Unit(5, UnitType.Pixel);
        spCellLine3.Controls.Add(new LiteralControl("<IMG SRC='/_layouts/images/blank.gif' width=1 height=1 alt=''>"));

    You can also add a bottom toolbar to get the same look and feel as the OOB forms; I am not adding it here, as the post would get much too long. Finally, you need to write the following lines to allow unsafe updates for the Save and Delete buttons.
        // Allow unsafe update on web for save button and delete button
        if (this.Page.IsPostBack && this.Page.Request["__EventTarget"] != null
            && (this.Page.Request["__EventTarget"].Contains("IOSaveItem")
            || this.Page.Request["__EventTarget"].Contains("IODeleteItem")))
        {
            SPContext.Current.Web.AllowUnsafeUpdates = true;
        }

    So that's all — we have finished writing the custom code for adding the controls. But the most important piece has been skipped so far: in the code above I called "CreateFieldControls(web);" to add the SharePoint field controls to the page. Let's see the implementation of that function:

        private void CreateFieldControls(SPWeb pWeb)
        {
            SPList listMain = pWeb.Lists[ListId];
            SPFieldCollection fields = listMain.Fields;

            //Main Table to render all fields
            spTableCntl = new Table();
            spTableCntl.BorderWidth = new Unit(0);
            spTableCntl.CellPadding = 0;
            spTableCntl.CellSpacing = 0;
            spTableCntl.Width = new Unit(100, UnitType.Percentage);
            spTableCntl.CssClass = "ms-formtable";

            SPContext controlContext = SPContext.GetContext(this.Context, ItemId, ListId, pWeb);

            foreach (SPField listField in fields)
            {
                string fieldDisplayName = listField.Title;
                string fieldInternalName = listField.InternalName;

                //Skip if the field is a system field or hidden
                if (listField.Hidden || listField.ShowInVersionHistory == false)
                    continue;

                //Skip if the control mode is not display and the field is read-only
                if (_ControlMode != SPControlMode.Display && listField.ReadOnlyField == true)
                    continue;

                FieldLabel fieldLabel = new FieldLabel();
                fieldLabel.FieldName = listField.InternalName;
                fieldLabel.ListId = ListId;

                BaseFieldControl fieldControl = listField.FieldRenderingControl;
                fieldControl.ListId = ListId;
                //Assign unique id using Field Internal Name
                fieldControl.ID = string.Format("Field_{0}", fieldInternalName);
                fieldControl.EnableViewState = true;

                //Assign control mode
                fieldLabel.ControlMode = _ControlMode;
                fieldControl.ControlMode = _ControlMode;
                switch (_ControlMode)
                {
                    case SPControlMode.New:
                        fieldLabel.RenderContext = SPContext.GetContext(pWeb);
                        fieldControl.RenderContext = SPContext.GetContext(pWeb);
                        break;
                    case SPControlMode.Edit:
                    case SPControlMode.Display:
                        fieldLabel.RenderContext = controlContext;
                        fieldLabel.ItemContext = controlContext;
                        fieldLabel.ItemId = ItemId;

                        fieldControl.RenderContext = controlContext;
                        fieldControl.ItemContext = controlContext;
                        fieldControl.ItemId = ItemId;
                        break;
                }

                //Add row to display a field row
                TableRow spCntlRow = new TableRow();
                spTableCntl.Rows.Add(spCntlRow);

                //Add the cells containing the field label and control
                TableCell spCellLabel = new TableCell();
                spCellLabel.Width = new Unit(30, UnitType.Percentage);
                spCellLabel.CssClass = "ms-formlabel";
                spCntlRow.Cells.Add(spCellLabel);
                TableCell spCellControl = new TableCell();
                spCellControl.Width = new Unit(70, UnitType.Percentage);
                spCellControl.CssClass = "ms-formbody";
                spCntlRow.Cells.Add(spCellControl);

                //Add the controls to the table cells
                spCellLabel.Controls.Add(fieldLabel);
                spCellControl.Controls.Add(fieldControl);

                //Add the description, if there is any, in New and Edit mode
                if (_ControlMode != SPControlMode.Display && listField.Description != string.Empty)
                {
                    FieldDescription fieldDesc = new FieldDescription();
                    fieldDesc.FieldName = fieldInternalName;
                    fieldDesc.ListId = ListId;
                    spCellControl.Controls.Add(fieldDesc);
                }

                //Disable Name (Title) in Edit mode
                if (_ControlMode == SPControlMode.Edit && fieldDisplayName == "Name")
                {
                    TextBox txtTitlefield = (TextBox)fieldControl.Controls[0].FindControl("TextField");
                    txtTitlefield.Enabled = false;
                }
            }
            fields = null;
        }

    First of all, I get the list object and read its fields into a field collection called "fields". Then I add a table as the container for all controls and assign it the CSS class "ms-formtable" so that it gets the consistent SharePoint look and feel. Now it's time to iterate through all the fields and add them as required. We don't need to add hidden or system fields, and we don't want to display read-only fields in new and edit forms. The following lines do this job.

        //Skip if the field is a system field or hidden
        if (listField.Hidden || listField.ShowInVersionHistory == false)
            continue;

        //Skip if the control mode is not display and the field is read-only
        if (_ControlMode != SPControlMode.Display && listField.ReadOnlyField == true)
            continue;

    Let's move on to the next lines of code.

        FieldLabel fieldLabel = new FieldLabel();
        fieldLabel.FieldName = listField.InternalName;
        fieldLabel.ListId = ListId;

        BaseFieldControl fieldControl = listField.FieldRenderingControl;
        fieldControl.ListId = ListId;
        //Assign unique id using Field Internal Name
        fieldControl.ID = string.Format("Field_{0}", fieldInternalName);
        fieldControl.EnableViewState = true;

        //Assign control mode
        fieldLabel.ControlMode = _ControlMode;
        fieldControl.ControlMode = _ControlMode;

    We use the "FieldLabel" control to display the field title. The advantage of using FieldLabel is that SharePoint automatically adds a red star beside the label of any mandatory field. Here is the most important part to understand: the "BaseFieldControl". It renders the appropriate web control according to the type of the field — for example, a textbox for a single line of text, or a dropdown for a lookup. Additionally, the ControlMode property tells the control which mode (display/edit/new) it should be rendered in. In display mode, it renders a label with the field value; in edit mode, it renders the respective control populated with the item value; and in new mode, it renders the respective control with an empty value. Please note that a dropdown is not always what gets rendered for a Lookup or Choice field; you need to understand which controls are rendered for which list field types.
    I am planning to cover that in a separate post, which I hope to publish very soon. Moreover, we also need to assign the list-field-specific properties such as List Id and Field Name to identify which SharePoint list field is attached to each control.

        switch (_ControlMode)
        {
            case SPControlMode.New:
                fieldLabel.RenderContext = SPContext.GetContext(pWeb);
                fieldControl.RenderContext = SPContext.GetContext(pWeb);
                break;
            case SPControlMode.Edit:
            case SPControlMode.Display:
                fieldLabel.RenderContext = controlContext;
                fieldLabel.ItemContext = controlContext;
                fieldLabel.ItemId = ItemId;

                fieldControl.RenderContext = controlContext;
                fieldControl.ItemContext = controlContext;
                fieldControl.ItemId = ItemId;
                break;
        }

    Here I have separate code paths for New mode and Edit/Display mode because there is no Item Id to assign in New mode. We also need to set the CSS classes on the cells containing the labels and controls so that they get rendered with the SharePoint theme.

        spCellLabel.CssClass = "ms-formlabel";
        spCellControl.CssClass = "ms-formbody";

    The "FieldDescription" control is used to add the field description, if there is any. Now it's time to add some more customization:

        //Disable Name (Title) in Edit mode
        if (_ControlMode == SPControlMode.Edit && fieldDisplayName == "Name")
        {
            TextBox txtTitlefield = (TextBox)fieldControl.Controls[0].FindControl("TextField");
            txtTitlefield.Enabled = false;
        }

    The above code disables the title field in edit mode. You can add more code here to achieve whatever customization your requirements call for. Some examples:

        //Adding a postback event on a UserField to auto-populate some other dependent field
        //in new mode, and disabling it in edit mode
        if (_ControlMode != SPControlMode.Display && fieldDisplayName == "Manager")
        {
            if (fieldControl.Controls[0].FindControl("UserField") != null)
            {
                PeopleEditor pplEditor = (PeopleEditor)fieldControl.Controls[0].FindControl("UserField");
                if (_ControlMode == SPControlMode.New)
                    pplEditor.AutoPostBack = true;
                else
                    pplEditor.Enabled = false;
            }
        }

        //Add a JavaScript event on a dropdown field. Don't forget to add the JavaScript function to the page.
        if (_ControlMode == SPControlMode.Edit && fieldDisplayName == "Designation")
        {
            DropDownList ddlCategory = (DropDownList)fieldControl.Controls[0];
            ddlCategory.Attributes.Add("onchange", string.Format("javascript:DropdownChangeEvent('{0}');return false;", ddlCategory.ClientID));
        }

    Below are screenshots of my custom ListForm WebPart in DispForm.aspx, EditForm.aspx and NewForm.aspx (screenshots not included in this excerpt). Let's play a game: open your OOB SharePoint list forms, compare them with these screens, and find the differences. Enjoy the SharePoint soup!

  • What’s the strategy to recover data during DAO unit test?

    - by Michael Lu
    When I test a DAO module with JUnit, an obvious problem is how to restore the test data in the database. For instance, a record should be deleted in both test methods testA() and testB(), which means the precondition of both test methods is that the record to be deleted exists. My strategy is to insert the record in the setUp() method to restore the data. What's your better solution, or your practical approach in such a case? Thanks
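
    For illustration, here is a minimal JUnit 4 sketch of the setUp()/tearDown() strategy described above — the UserDao and User types and their methods are hypothetical stand-ins for your own DAO. An alternative worth considering is opening a transaction per test and rolling it back afterwards, so the database never actually changes.

        import org.junit.After;
        import org.junit.Before;
        import org.junit.Test;
        import static org.junit.Assert.assertNull;

        public class UserDaoTest {

            private static final long FIXTURE_ID = 42L;  // hypothetical record id
            private UserDao dao;                         // hypothetical DAO under test

            @Before
            public void setUp() {
                dao = new UserDao();
                // Re-insert the record both tests will delete, so each test
                // starts from the same known state regardless of run order.
                dao.insert(new User(FIXTURE_ID, "fixture-user"));
            }

            @After
            public void tearDown() {
                // Remove whatever the test left behind.
                dao.deleteIfExists(FIXTURE_ID);
            }

            @Test
            public void testA() {
                dao.delete(FIXTURE_ID);
                assertNull(dao.findById(FIXTURE_ID));
            }

            @Test
            public void testB() {
                dao.delete(FIXTURE_ID);
                assertNull(dao.findById(FIXTURE_ID));
            }
        }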

  • Can I use MSBUILD to investigate which dependency causes a source unit to be recompiled?

    - by Seb Rose
    I have a legacy C++ application with a deep graph of #includes. Changes to any header file often cause recompiles of seemingly unrelated source files. The application is built using a Visual Studio 2005 solution (sln) file. Can MSBUILD be invoked in a way that it reports which dependency(ies) are causing a source file to be recompiled? Is there any other tool that might be able to help?

  • In Ruby, why can't a method invocation be treated as a unit when "do" and "end" are used?

    - by Jian Lin
    The following question is related to http://stackoverflow.com/questions/2127836/ruby-print-inject-do-syntax The question is: can we insist on using do and end and make it work with puts or p? This works:

        a = [1,2,3,4]
        b = a.inject do |sum, x|
          sum + x
        end
        puts b # prints out 10

    So, is it correct to say that inject is a method of the Array class which takes a block of code and then returns a number? If so, then it should be no different from calling a function and getting back a return value:

        b = foo(3)
        puts b

    or

        b = circle.getRadius()
        puts b

    In the above two cases, we can directly say puts foo(3) and puts circle.getRadius(). But there is no way to make it work directly in the following two ways:

        a = [1,2,3,4]
        puts a.inject do |sum, x|
          sum + x
        end

    This gives:

        ch01q2.rb:7:in `inject': no block given (LocalJumpError)
            from ch01q2.rb:4:in `each'
            from ch01q2.rb:4:in `inject'
            from ch01q2.rb:4

    Grouping the method call using ( ) doesn't work either:

        a = [1,2,3,4]
        puts (a.inject do |sum, x|
          sum + x
        end)

    and this gives:

        ch01q3.rb:4: syntax error, unexpected kDO_BLOCK, expecting ')'
        ch01q3.rb:4: syntax error, unexpected '|', expecting '='
        ch01q3.rb:6: syntax error, unexpected kEND, expecting $end

    Finally, the following version works:

        a = [1,2,3,4]
        puts a.inject { |sum, x| sum + x }

    But why doesn't grouping the method invocation using ( ) work? What if a programmer insists on using do and end — can it be made to work directly with p or puts, without an extra temporary variable?

  • In Ruby, why is a method invocation not able to be treated as a unit when "do" and "end" are used?

    - by Jian Lin
    The following question is related to the question "Ruby Print Inject Do Syntax". My question is: can we insist on using do and end and make it work with puts or p? This works:

        a = [1,2,3,4]
        b = a.inject do |sum, x|
          sum + x
        end
        puts b # prints out 10

    So, is it correct to say that inject is a method of the Array class which takes a block of code and then returns a number? If so, then it should be no different from calling a function and getting back a return value:

        b = foo(3)
        puts b

    or

        b = circle.getRadius()
        puts b

    In the above two cases, we can directly say puts foo(3) and puts circle.getRadius(). But there is no way to make it work directly in the following two ways:

        a = [1,2,3,4]
        puts a.inject do |sum, x|
          sum + x
        end

    This gives:

        ch01q2.rb:7:in `inject': no block given (LocalJumpError)
            from ch01q2.rb:4:in `each'
            from ch01q2.rb:4:in `inject'
            from ch01q2.rb:4

    Grouping the method call using ( ) doesn't work either:

        a = [1,2,3,4]
        puts (a.inject do |sum, x|
          sum + x
        end)

    and this gives:

        ch01q3.rb:4: syntax error, unexpected kDO_BLOCK, expecting ')'
        ch01q3.rb:4: syntax error, unexpected '|', expecting '='
        ch01q3.rb:6: syntax error, unexpected kEND, expecting $end

    Finally, the following version works:

        a = [1,2,3,4]
        puts a.inject { |sum, x| sum + x }

    But why doesn't grouping the method invocation using ( ) work in the earlier example? What if a programmer insists on using do and end — can it be made to work?

  • How can I determine which dependency would cause a C++ compilation unit to be rebuilt?

    - by Seb Rose
    I have a legacy C++ application with a deep graph of #includes. Changes to any header file often cause recompiles of seemingly unrelated source files. The application is built using a Visual Studio 2005 solution (sln) file. Can MSBUILD be invoked in a way that it reports which dependency(ies) are causing a source file to be recompiled? Is there any other tool that might be able to help? NOTE: I'm only looking for a tool to tell me why a file would be rebuilt, not some retrospective magic telling me why it was rebuilt.
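
    (A hedged aside: two commands that may help here, though neither is guaranteed to answer the question fully. The MSVC compiler's /showIncludes switch prints the full #include hierarchy of each translation unit, which lets you trace which header drags in which; MSBuild's diagnostic verbosity logs its up-to-date checks, though mostly at the project level. Both switches exist as shown, but how much rebuild reasoning they surface for a VS2005 C++ solution is worth verifying yourself.)

        rem Print the include tree for one translation unit (MSVC)
        cl /showIncludes /c SomeFile.cpp

        rem Build with diagnostic-level logging of up-to-date checks
        msbuild MySolution.sln /verbosity:diagnostic > build.log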

  • To use packages properly, how should I arrange directories, file names, and unit test files?

    - by Stephen Hsu
    My source file tree is like this:

        /src
          /pkg
            /foo
              foo.go
              foo_test.go

    Inside foo.go:

        package foo

        func bar(n int) {
            ...
        }

    Inside foo_test.go:

        package foo

        func testBar(t *testing.T) {
            bar(10)
            ...
        }

    My questions are: Does the package name relate to the directory name or the source file name? If there is only one source file for a package, do I need to put it in its own directory? Should I put foo.go and foo_test.go in the same package? In foo_test.go, as it's in the same package as foo.go, I didn't import foo. But when I compile foo_test.go with 6g, it says bar() is undefined. What should I do?

  • In Ruby, why is a method invocation not able to be treated as a unit when "do" and "end" are used?

    - by Jian Lin
    The following question is related to the question "Ruby Print Inject Do Syntax". My question is: can we insist on using do and end and make it work with puts or p? This works:

        a = [1,2,3,4]
        b = a.inject do |sum, x|
          sum + x
        end
        puts b # prints out 10

    So, is it correct to say that inject is an instance method of Array, and that this instance method takes a block of code and then returns a number? If so, then it should be no different from calling a function or method and getting back a return value:

        b = foo(3)
        puts b

    or

        b = circle.getRadius()
        puts b

    In the above two cases, we can directly say puts foo(3) and puts circle.getRadius(). But there is no way to make it work directly in the following two ways:

        a = [1,2,3,4]
        puts a.inject do |sum, x|
          sum + x
        end

    This gives:

        ch01q2.rb:7:in `inject': no block given (LocalJumpError)
            from ch01q2.rb:4:in `each'
            from ch01q2.rb:4:in `inject'
            from ch01q2.rb:4

    Grouping the method call using ( ) doesn't work either:

        a = [1,2,3,4]
        puts (a.inject do |sum, x|
          sum + x
        end)

    and this gives:

        ch01q3.rb:4: syntax error, unexpected kDO_BLOCK, expecting ')'
        ch01q3.rb:4: syntax error, unexpected '|', expecting '='
        ch01q3.rb:6: syntax error, unexpected kEND, expecting $end

    Finally, the following version works:

        a = [1,2,3,4]
        puts a.inject { |sum, x| sum + x }

    But why doesn't grouping the method invocation using ( ) work in the earlier example? What if a programmer insists on using do and end — can it be made to work?

  • Need data on disk drive management by OS: getting base I/O unit size, "sync" option, Direct Memory Access

    - by Richard T
    Hello all, I want to ensure I have done all I can to configure a system's disks for serious database use. The three areas I know of (any others?) to be concerned about are:
    1. I/O size: the database engine's and the disk's native I/O sizes should either match, or the database's native I/O size should be a multiple of the disk's native I/O size.
    2. Disks that are capable of Direct Memory Access (e.g. IDE) should be configured for it.
    3. When a disk says it has written data persistently, it must be so! No keeping it in cache and lying about it.
    I have been looking for information on how to ensure these are so for CentOS and Ubuntu, but can't seem to find anything at all! I want to be able to check these things and change them if needed. Any and all input appreciated.

  • StructureMap: How can I unit test the registry class?

    - by Marius
    I have a registry class like this:

        public class StructureMapRegistry : Registry
        {
            public StructureMapRegistry()
            {
                For<IDateTimeProvider>().Singleton().Use<DateTimeProviderReturningDateTimeNow>();
            }
        }

    I want to test that the configuration is according to my intent, so I start writing a test:

        public class WhenConfiguringIOCContainer : Scenario
        {
            private TfsTimeMachine.Domain.StructureMapRegistry registry;
            private Container container;

            protected override void Given()
            {
                registry = new TfsTimeMachine.Domain.StructureMapRegistry();
                container = new Container();
            }

            protected override void When()
            {
                container.Configure(i => i.AddRegistry(registry));
            }

            [Then]
            public void DateTimeProviderIsRegisteredAsSingleton()
            {
                // I want to say "verify that the container contains the expected type
                // and that the expected type is registered as a singleton"
            }
        }

    How can I verify that the registry is according to my expectations? Note: I introduced the container because I didn't see any sort of verification methods available on the Registry class. Ideally, I want to test the registry class directly.

  • Is an Object the smallest pageable unit in the Heap?

    - by DonnieKun
    Hi, if I have 2 GB of RAM and two instances of an object that are 1.5 GB each, the operating system will help out and page memory to and from the hard disk. What if I have one instance that is 3 GB? Can the same paging method break this instance down into two pages, or will I encounter an out-of-memory issue? Thanks.

  • Cloud Based Load Testing Using TF Service & VS 2013

    - by Tarun Arora [Microsoft MVP]
    Originally posted on: http://geekswithblogs.net/TarunArora/archive/2013/06/30/cloud-based-load-testing-using-tf-service-amp-vs-2013.aspx One of the new features announced as part of the Visual Studio 2013 Ultimate Preview is 'Cloud Based Load Testing'. In this blog post I'll walk you through: what Cloud Based Load Testing is; how I have been using this feature (a success story!); and where you can find more resources on it.

    What is Cloud Based Load Testing? It goes without saying that performance testing your application not only gives you the confidence that the application will work under heavy levels of stress, but also gives you the ability to test how scalable its architecture is. It is important to know how much is too much for your application! Working with various clients in the industry, I have realized that the biggest barriers to load testing and performance testing adoption are: the high infrastructure and administration cost that comes with this phase of testing; the time taken to procure and set up the test infrastructure; and finding a use for that infrastructure investment after testing is complete. Is cloud the answer? 100% Visual Studio compatible, scalable and realistic, start testing in under 2 minutes, intuitive, pay only for what you need, and use your existing on-premise tests on the cloud. There are a lot of vendors out there offering cloud based load testing — to name a few: Load Storm, Soasta, Blaze Meter, Blitz, and others.

    The question you may want to ask is: why should you go with Microsoft's Cloud Based Load Test offering? If you are a Microsoft shop or already have investments in Microsoft technologies, you'll see great benefit in the natural integration this offers with existing Microsoft products such as Visual Studio and Windows Azure. For example, your existing web tests authored in Visual Studio 2010 or Visual Studio 2012 will run on the cloud without requiring any modifications whatsoever. Microsoft's cloud test rig also supports API-based testing: for example, if you are building a WPF application which consumes WCF services, you can write unit tests to invoke the WCF service, and these tests can be run on the cloud test rig and loaded with 'N' concurrent users for performance testing. If your assets are already hosted in Azure, possibly in the same data centre as the cloud test rig, your Azure app will not incur a usage cost for the generated traffic, since the traffic comes from the same data centre. The licensing and pricing information on Microsoft's cloud based load test service is yet to be announced, but I would expect it to be priced attractively to match the market competition. The only additional configuration required for running load tests on the service is to set the test run location to "Run tests using Visual Studio Team Foundation Service" (screenshot omitted in this excerpt).

    How have I been using Microsoft's Cloud Based Load Test service? I have been part of the Microsoft Cloud Based Load Test Service advisory council for the last 7 months. This gave me the opportunity to see the product shape up from concept to working solution, and I was also the first person outside of Microsoft to try this offering out. It gave me the opportunity to test real-world applications at various clients using the Microsoft Load Test service and provide real-world feedback to the Microsoft product team. One of the most recent systems I tested using the Load Test service was an insurance quote generation engine.
    This insurance quote generation engine is: hosted in Windows Azure; expected to receive quote requests from across the globe; and expected to handle 5 million quote requests in a day (it was not clear how that load would be distributed across the day). There was no way I could simulate that kind of load from on premise without standing up additional hardware, but Microsoft's Cloud Based Load Test service allowed me to test my key performance testing scenarios: simulating the expected load, endurance testing, threshold testing and testing for latency.

    Simulating expected load — the approach to devising a load pattern. My approach has been to run the test scenario with 1 user to figure out the response time, then work out how many users are required to reach the target load. For example, generating 1 quote from the quote engine takes 0.5 seconds. Now if you do the math: 1 quote request by 1 user takes 0.5 seconds, so 1 user generates 2 quotes per second; quotes generated by 1 user in 24 hours = 2 * 60 * 60 * 24 = 172,800; quotes generated by 30 users in 24 hours = 172,800 * 30 = 5,184,000. This was a very simple example; if your application requires more concurrent users to test scenarios such as caching, then you can devise your own load pattern — some examples of load test patterns can be found here.

    Endurance testing. To test for endurance, I loaded the quote generation engine with the expected fixed user load and ran the test for a very long duration — over 48 hours — and observed the effect of the long-running test on the Azure infrastructure. Currently the Microsoft Load Test service does not support metrics from the machine under test. I used Azure diagnostics to begin with, but later started using Cerebrata Azure Diagnostics Manager to capture the metrics of the machine under test.

    Threshold testing. To figure out how much user load the application could cope with before falling on its belly, I opted to step-load the quote generation engine, incrementing the user load with different variations of incremental users per minute until the application crashed out and forced an IIS reset.

    Testing for latency. Currently the Microsoft Load Test service does not support generating geographically distributed load. I did, however, deploy the insurance quote generation engine in different Azure data centres and ran the same set of performance tests to measure latency. Because I could compare load test results from different runs by exporting the results to Excel (a feature provided out of the box from Visual Studio 2010 onwards), I could see the difference in response times.

    More resources on the Microsoft Cloud Based Load Test service — a few important links to get you started: Download Visual Studio Ultimate 2013 Preview; Getting started guide for load testing using Team Foundation Service; Troubleshooting guide for FAQs and known issues; Team Foundation Service forum for questions and support; Detailed demo and presentation (link to Tech-Ed session recording); Detailed demo and presentation (link to Build session recording). There are a few limits on the usage of the Microsoft Cloud Based Load Test service that you can read about here. If you have any feedback on the service, feel free to share it with the product team via the Visual Studio User Voice forum. I hope you found this useful. Thank you for taking the time to read this blog post. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Stay tuned!

  • Developing a Cost Model for Cloud Applications

    - by BuckWoody
    Note - please pay attention to the date of this post. As much as I attempt to make the information below accurate, the nature of distributed computing means that components, units and pricing will change over time. The definitive costs for Microsoft Windows Azure and SQL Azure are located here, and are more accurate than anything you will see in this post: http://www.microsoft.com/windowsazure/offers/

    When writing software that runs on a Platform-as-a-Service (PaaS) offering like Windows Azure / SQL Azure, one of the questions you must answer is how much the system will cost. I will not discuss the comparisons between on-premise costs (which are nigh impossible to calculate accurately) and cloud costs, but instead focus on creating a general model for estimating costs for a given application. You should be aware that there are (at this writing) two billing mechanisms for Windows and SQL Azure: "pay-as-you-go" or consumption, and "subscription" or commitment. Conceptually, you can consider the former a pay-as-you-go cell phone plan, where you pay by the unit used (at a slightly higher rate), and the latter a standard cell phone plan, where you commit to a contract and thus pay lower rates. In this post I'll stick with the pay-as-you-go mechanism for simplicity, which should be the maximum cost you would pay; from there you may be able to get a lower cost with the other mechanism. In any case, the model you create should hold.

    Developing a good cost model is essential. As a developer or architect, you'll most certainly be asked how much something will cost, and you need a reliable way to estimate that. Businesses and organizations have been used to paying for servers, software licenses and other infrastructure as an up-front cost, and power, the people who manage the systems, and so on as an ongoing (and sometimes unaccounted-for) cost. When presented with a new paradigm like distributed computing, they may not understand the true cost/value proposition, and that's where the architect and developer can guide the conversation to a choice based on the features of the application versus the true costs.

    The two big buckets of use-types for these applications are customer-based and steady-state. In the customer-based use type, each successful use of the program results in a sale or income for your organization. Perhaps you've written an application that provides the spot price of foo, and your customer pays for the use of that application. In that case, once you've estimated your cost for a successful traversal of the application, you can build that into the price you charge the user. It's a standard restaurant model, where the price of the meal is determined by the cost of making it, plus any profit you can make. In the second use type, the application will be used by a more-or-less constant number of processes or users and no direct revenue is attached to the system. A typical example is a customer-tracking system used by the employees within your company. In this case, the cost model is often created "in reverse" - meaning that you pilot the application, monitor the use (and costs), and that cost is held steady. This is where the comparison with an on-premise system becomes necessary, even though those on-premise true costs are more difficult to estimate. For instance, do you know exactly how much the air conditioning costs because you have a team of system administrators on site?
    This may sound trivial, but that, along with the insurance for the building, the wiring, and every other part of the system, is in fact a cost to the business. There are three primary methods that I've been successful with in estimating the cost. None are perfect; all are demand-driven. The general process is to lay out a matrix of components, units and cost per unit, and then multiply that by the usage of the system, based on which components your program uses. That sounds a bit simplistic, but the calculation becomes more detailed as you fill in those metrics. In all of the methods that follow, you need to know your application. The components for a PaaS include computing instances, storage, transactions, bandwidth and, in the case of SQL Azure, database size. In most cases, architects start with the first model and progress through the other methods to gain accuracy.

    Simple Estimation. The simplest way to calculate costs is to architect the application (even in UML or on paper, no coding involved) and then estimate which of the components you'll use and how much of each will be used. Microsoft provides two tools to do this. One is a simple slider application located here: http://www.microsoft.com/windowsazure/pricing-calculator/ The other is a tool you download to create a "Return on Investment" (ROI) spreadsheet, which has the advantage of leading you through various questions to estimate what you plan to use, located here: https://roianalyst.alinean.com/msft/AutoLogin.do?d=176318219048082115 You can also just create a spreadsheet yourself with columns like: Program Element | Azure Component | Unit of Measure | Cost Per Unit | Estimated Use of Component | Total Cost Per Component | Cumulative Cost. Of course, the consideration with this model is that it is difficult to predict a system that is not running or hasn't even been developed yet. Which brings us to the next model type.

    Measure and Project. A more accurate model is to actually write the code for the application, using the Software Development Kit (SDK), which can run entirely disconnected from Azure. The code should be instrumented to estimate the use of the application components, logging to a local file on the development system. A series of unit and integration tests should then be run, which will create load on the test system. You can use standard development concepts to track this usage, and even use Windows Performance Monitor counters. The best place to start with this method is the Windows Azure Diagnostics subsystem in your code, which you can read more about here: http://blogs.msdn.com/b/sumitm/archive/2009/11/18/introducing-windows-azure-diagnostics.aspx This set of APIs greatly simplifies tracking the application, and in fact you can use this information for more than just a cost model. After you have the tracking logs, you can plug the numbers into any of the tools above, which should give a representative cost or, in some cases, a unit cost. The consideration with this model is that the SDK fabric is not a one-to-one comparison with performance on the actual Windows Azure fabric. Those differences are usually small, but they do need to be considered. Also, you may not be able to accurately predict the load on the system, which might lead to an architectural change, which changes the model. This leads us to the next, most accurate method for a cost model.
    Sample and Estimate. Once the application is deployed, you will get a bill each month from Microsoft for your Azure usage. The bill is quite detailed, and you can export the data from it for analysis and, using standard statistical and other predictive methods such as regression, project what the costs will be in the future. I normally advise that the architect also extrapolate a unit cost from those metrics. This is the information that should be reported back to the executives who pay the bills: the past cost, future projected costs, and the unit cost "per click" or "per transaction", as your case warrants. The challenge here is in the model itself - statistical methods are not foolproof, and the size of the sample is key (in this case I recommend the entire population, not a smaller sample).

    References and Tools
    Articles:
    http://blogs.msdn.com/b/patrick_butler_monterde/archive/2010/02/10/windows-azure-billing-overview.aspx
    http://technet.microsoft.com/en-us/magazine/gg213848.aspx
    http://blog.codingoutloud.com/2011/06/05/azure-faq-how-much-will-it-cost-me-to-run-my-application-on-windows-azure/
    http://blogs.msdn.com/b/johnalioto/archive/2010/08/25/10054193.aspx
    http://geekswithblogs.net/iupdateable/archive/2010/02/08/qampa-how-can-i-calculate-the-tco-and-roi-when.aspx
    Other Tools:
    http://cloud-assessment.com/
    http://communities.quest.com/community/cloud_tools

  • How do I implement a dispatch table in a Perl OO module?

    - by Iain
    I want to put some subs that are within an OO package into an array - also within the package - to use as a dispatch table. Something like this:

        package Blah::Blah;
        use fields 'tests';

        sub new {
            my ($class) = @_;
            my $self = fields::new($class);
            $self->{'tests'} = [ $self->_sub1, $self->_sub2 ];
            return $self;
        }

        sub _sub1 { ... }
        sub _sub2 { ... }

    I'm not entirely sure of the syntax for this. Should it be

        $self->{'tests'} = [ $self->_sub1, $self->_sub2 ];

    or

        $self->{'tests'} = [ \&{$self->_sub1}, \&{$self->_sub2} ];

    or

        $self->{'tests'} = [ \&{_sub1}, \&{_sub2} ];

    I don't seem to be able to get this to work within an OO package, whereas it's quite straightforward in a procedural fashion, and I haven't found any examples for OO. Any help is much appreciated, Iain

  • TFS and code coverage for web application (MVC) assemblies not working

    - by Andrew
    I've got an MVC web application with associated controller tests that run under a TFS build as per normal. I can see the tests running and passing in the build log, and they appear in the "Result details for Any CPU/Release" section of the build. I also have a number of other assemblies with associated tests running in the same build; those tests pass, and their details are shown in the results and logs just fine. I've enabled code coverage in the build script and in the testrunconfig. Coverage appears for all assemblies EXCEPT the web application, even though it looks like the tests have been run for it. Is there anything obvious that I have missed, or some sort of workaround that I need to do? I've searched around for a while and haven't found an answer. Has anyone got code coverage working for MVC web applications?

  • Perl: implementing a dispatch table in an OO module?

    - by Iain
    I want to put some subs that are within an OO package into an array - also within the package - to use as a dispatch table. Something like this:

        package Blah::Blah;
        use fields 'tests';

        sub new {
            my ($class) = @_;
            my $self = fields::new($class);
            $self->{'tests'} = [ $self->_sub1, $self->_sub2 ];
            return $self;
        }

        sub _sub1 { ... }
        sub _sub2 { ... }

    I'm not entirely sure of the syntax for this. Should it be

        $self->{'tests'} = [ $self->_sub1, $self->_sub2 ];

    or

        $self->{'tests'} = [ \&{$self->_sub1}, \&{$self->_sub2} ];

    or

        $self->{'tests'} = [ \&{_sub1}, \&{_sub2} ];

    I don't seem to be able to get this to work within an OO package, whereas it's quite straightforward in a procedural fashion, and I haven't found any examples for OO. Any help is much appreciated, Iain

  • ExecutionException and InterruptedException while using Future class's get() method

    - by java_geek
        ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            Task t = new Task(response, inputToPass, pTypes, unit.getInstance(), methodName, unit.getUnitKey());
            Future<SCCallOutResponse> fut = executor.submit(t);
            response = fut.get(unit.getTimeOut(), TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            // if the task is still running, a TimeoutException will occur during fut.get()
            cat.error("Unit " + unit.getUnitKey() + " Timed Out");
            response.setVote(SCCallOutConsts.TIMEOUT);
        } catch (InterruptedException e) {
            cat.error(e);
        } catch (ExecutionException e) {
            cat.error(e);
        } finally {
            executor.shutdown();
        }

    How should I handle the InterruptedException and ExecutionException in this code? And in what cases are these exceptions thrown?
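
    For what it's worth, a common way to rewrite the catch blocks — reusing the variables from the snippet above, so this is a sketch rather than a drop-in answer — is: treat ExecutionException as "the task itself threw" and inspect getCause(); treat InterruptedException as "this waiting thread was interrupted" and restore the interrupt flag; and cancel the task on timeout so it doesn't keep running.

        } catch (TimeoutException e) {
            fut.cancel(true); // ask the still-running task to stop
            cat.error("Unit " + unit.getUnitKey() + " Timed Out");
            response.setVote(SCCallOutConsts.TIMEOUT);
        } catch (ExecutionException e) {
            // Thrown when the task's own code threw; the real error is the cause.
            cat.error("Task failed", e.getCause());
        } catch (InterruptedException e) {
            // Thrown if this thread is interrupted while blocked in fut.get();
            // restore the flag so code further up the stack can see it.
            Thread.currentThread().interrupt();
            cat.error(e);
        } finally {
            executor.shutdown();
        }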

  • Eclipse doesn't see my new JUnit test

    - by morgancodes
    I'm using Eclipse to run the tests in a single JUnit (4) test class. The tests in the class all run just fine. Then I add an additional test and run the class through the test runner in Eclipse again. Only the old tests are run; the new test isn't seen by Eclipse. There's no error or anything - it's just as if Eclipse is looking at an old version of the test class. If I run the tests using Maven, everything works fine. Additionally, after I run the tests in Maven, Eclipse can see and run the new test correctly. Any ideas what's going on? Any ideas how to get Eclipse's test runner to see my new test cases?

  • Maven - Selenium - Possible to run only one test

    - by Jonas Söderström
    Hi, we are using JUnit + Selenium for our web tests. We use Maven to start them and to build a Surefire report. The test suite is pretty large and takes a while to run, and sometimes single tests fail because the browser won't start. I want to be able to run a SINGLE test using Maven, so I can re-run the tests that failed and update the report. I can use mvn test -Dtest=TESTCLASSNAME to run all the tests in one test class, but this is not good enough, since it takes about 10 minutes to run all the tests in our most complicated test classes, and it's very likely that some other test will fail (because the browser won't start), which will mess up my report. I know I can run one test from Eclipse, but that is not what I am looking for. Any help on this would be very much appreciated.
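
    As an aside: recent versions of the Surefire plugin (2.7.3 and later, if memory serves) can run a single JUnit 4 test method with the ClassName#methodName syntax, and the report plugin can regenerate the HTML report without re-running everything — worth checking against your plugin versions (the class and method names below are placeholders):

        mvn test -Dtest=CheckoutFlowTest#testBrowserStartsAndLogsIn
        mvn surefire-report:report-only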

  • BDD-testing using a UI driver (e.g. Selenium for a web-application)

    - by jonathanconway
    Can BDD (Behavior Driven Development) tests be implemented using a UI driver? For example, given a web application, instead of writing tests for the back-end and then more tests in JavaScript for the front-end, should I write the tests as Selenium scripts, which simulate mouse clicks etc. in the actual browser? The advantages I see in doing it this way are: the tests are written in one language, rather than several; they're focused on the UI, which gets developers thinking outside-in; and they run in the real execution environment (the browser), which allows us to test different browsers, test different servers, and get insight into real-world performance. Thoughts?
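
    As a concrete illustration of the "tests as UI scripts" option, here is roughly what such a test looks like with the Selenium RC Java client (the URL, locators and expected text are all hypothetical, and a Selenium server is assumed to be running on localhost:4444):

        import com.thoughtworks.selenium.DefaultSelenium;
        import com.thoughtworks.selenium.Selenium;
        import org.junit.After;
        import org.junit.Before;
        import org.junit.Test;
        import static org.junit.Assert.assertTrue;

        public class WhenRegisteringAnAccount {

            private Selenium selenium;

            @Before
            public void openRegistrationPage() {
                selenium = new DefaultSelenium("localhost", 4444, "*firefox",
                        "http://localhost:8080/");
                selenium.start();
                selenium.open("/register");
            }

            @Test
            public void aConfirmationMessageIsShown() {
                // Behaviour is expressed as user actions in the real browser,
                // not as calls into back-end classes.
                selenium.type("id=email", "user@example.com");
                selenium.click("id=submit");
                selenium.waitForPageToLoad("30000");
                assertTrue(selenium.isTextPresent("Thanks for registering"));
            }

            @After
            public void tearDown() {
                selenium.stop();
            }
        }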

  • JUnit 4 test suite problems

    - by Hypnus
    Hi, I have a problem with some JUnit 4 tests that I run with a test suite. If I run the tests individually they work with no problems, but when run in a suite most of them - 90% of the test methods - fail with errors. What I noticed is that the first test always works fine, but the rest fail. Another thing: in a few of the tests, the methods are not executed in the order they were written (reflection does not work as expected - or rather it does, because the retrieval of the methods is not necessarily in declaration order). This usually happens if there is more than one test class with methods that have the same name. I tried to debug some of the tests, and it seems that from one line to the next the value of some attributes becomes null. Does anyone know what the problem is, or whether this behavior is "normal"? Thanks in advance.
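
    For reference, a minimal JUnit 4 suite looks like the sketch below (the test class names are placeholders). The symptoms described - tests green in isolation, red in the suite, fields unexpectedly becoming null, order-dependent behaviour - are classic signs of shared mutable state: static fields, singletons, or a shared database that one test class dirties for the next. JUnit deliberately guarantees no method ordering, so anything order-sensitive will surface exactly this way.

        import org.junit.runner.RunWith;
        import org.junit.runners.Suite;

        @RunWith(Suite.class)
        @Suite.SuiteClasses({
            FirstTest.class,   // placeholder test classes
            SecondTest.class,
            ThirdTest.class
        })
        public class AllTests {
            // Intentionally empty: the annotations do all the work.
        }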

  • App Engine's back referencing is too slow. How can I make it faster?

    - by Ray Yun
    Google App Engine has a smart feature named back references, and I usually iterate over them where a traditional SQL computed column would be needed. Just imagine needing to accumulate a specific force's total hp:

        class Force(db.Model):
            hp = db.IntegerProperty()

        class UnitGroup(db.Model):
            force = db.ReferenceProperty(reference_class=Force, collection_name="groups")
            hp = db.IntegerProperty()

        class Unit(db.Model):
            group = db.ReferenceProperty(reference_class=UnitGroup, collection_name="units")
            hp = db.IntegerProperty()

    When I coded it like the following, it was horribly slow (almost 3 s) with 20 forces, each with a single group containing a single unit. (I guess back references force a reload of the sub-entities - am I right?)

        def get_hp(self):
            hp = 0
            for group in self.groups:
                group_hp = 0
                for unit in group.units:
                    group_hp += unit.hp
                hp += group_hp
            return hp

    How can I optimize this code? Please consider that there are more properties that should be computed for each force/unit-group, and I don't want to save these collective properties to each entity. :)
