Search Results

Search found 7465 results on 299 pages for 'jsp tags'.


  • Grid Layouts in ADF Faces using Trinidad

    - by frank.nimphius
    ADF Faces provides a data table component, but none for defining grid layouts. Grids are common in web design, and developers often try HTML table markup, wrapped in an f:verbatim tag or added directly to the page, to build the desired layout. Usually these attempts fail with unpredictable results. While ADF Faces does not provide a table layout component, Apache MyFaces Trinidad does. The Trinidad trh:tableLayout component is a thin wrapper around the HTML table element and contains a series of row layout elements, trh:rowLayout. Each trh:rowLayout component may contain one or many trh:cellFormat components to format cell content.

    <trh:tableLayout id="tl1" halign="left">
      <trh:rowLayout id="rl1" valign="top" halign="left">
        <trh:cellFormat id="cf1" width="100" header="true">
          <af:outputLabel value="Label 1" id="ol1"/>
        </trh:cellFormat>
        <trh:cellFormat id="cf2" header="true" width="300">
          <af:outputLabel value="Label 2" id="outputLabel1"/>
        </trh:cellFormat>
      </trh:rowLayout>
      <trh:rowLayout id="rowLayout1" valign="top" halign="left">
        <trh:cellFormat id="cellFormat1" width="100" header="false">
          <af:outputLabel value="Label 3" id="outputLabel2"/>
        </trh:cellFormat>
      </trh:rowLayout>
      ...
    </trh:tableLayout>

    To add the Trinidad tag library to your ADF Faces project: open the Component Palette, right-click in it, choose "Edit Tag Libraries", select the Trinidad components, move them to the "Selected Libraries" section, and OK the dialog. The first time you drag a Trinidad component onto a page, the web.xml file is updated with the required filters.

    Note: The Trinidad tags don't participate in ADF Faces RC geometry management. However, they are JSF components that are part of the JSF request lifecycle.

    ADF Faces RC components work well with Trinidad layout components that don't use PPR. The PPR implementation in Trinidad is different from the one in ADF Faces, so when you mix ADF Faces components with Trinidad components, avoid Trinidad components that have integrated PPR behavior. Only use passive Trinidad components.

    See:
    http://myfaces.apache.org/trinidad/trinidad-api/tagdoc/trh_tableLayout.html
    http://myfaces.apache.org/trinidad/trinidad-api/tagdoc/trh_rowLayout.html
    http://myfaces.apache.org/trinidad/trinidad-api/tagdoc/trh_cellFormat.html

    Read the article

  • FAQ: Highlight GridView Row on Click and Retain Selected Row on Postback

    - by Vincent Maverick Durano
    A couple of months ago I wrote a simple demo about "Highlighting GridView Row on MouseOver". I've noticed many members in the forums (http://forums.asp.net) asking how to highlight a row in a GridView and retain the selected row across postbacks, so I've decided to write this post to demonstrate how to implement it, as a reference for others who might need it. In this demo I am going to use a combination of plain JavaScript and jQuery to do the client-side manipulation. I presume that you already know how to bind the grid with data, because I will not include the code for populating the GridView here. For binding the GridView you can refer to this post: Binding GridView with Data the ADO.Net way, or this one: GridView Custom Paging with LINQ. To get started, let's implement the highlighting of a GridView row on row click and retain the selected row on postback. For simplicity I set up the page like this:

    <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
      <h2>You have selected Row: (<asp:Label ID="Label1" runat="server" />)</h2>
      <asp:HiddenField ID="hfCurrentRowIndex" runat="server"></asp:HiddenField>
      <asp:HiddenField ID="hfParentContainer" runat="server"></asp:HiddenField>
      <asp:Button ID="Button1" runat="server" onclick="Button1_Click" Text="Trigger Postback" />
      <asp:GridView ID="grdCustomer" runat="server" AutoGenerateColumns="false" onrowdatabound="grdCustomer_RowDataBound">
        <Columns>
          <asp:BoundField DataField="Company" HeaderText="Company" />
          <asp:BoundField DataField="Name" HeaderText="Name" />
          <asp:BoundField DataField="Title" HeaderText="Title" />
          <asp:BoundField DataField="Address" HeaderText="Address" />
        </Columns>
      </asp:GridView>
    </asp:Content>

    Note: Since the action is done at the client side, when we do a postback (like clicking a button) the page will be re-created and you will lose the highlighted row. This is normal because the server doesn't know anything about the client/browser unless you do something to notify the server that something has changed. To persist the selection we will use HiddenField controls to store the data so that, when the page posts back, we can reference the values from there.
    Now here are the JavaScript functions:

    <asp:content id="Content1" runat="server" contentplaceholderid="HeadContent">
    <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js" type="text/javascript"></script>
    <script type="text/javascript">
        var prevRowIndex;

        function ChangeRowColor(row, rowIndex) {
            var parent = document.getElementById(row);
            var currentRowIndex = parseInt(rowIndex) + 1;

            if (prevRowIndex == currentRowIndex) {
                return;
            }
            else if (prevRowIndex != null) {
                parent.rows[prevRowIndex].style.backgroundColor = "#FFFFFF";
            }

            parent.rows[currentRowIndex].style.backgroundColor = "#FFFFD6";
            prevRowIndex = currentRowIndex;

            $('#<%= Label1.ClientID %>').text(currentRowIndex);
            $('#<%= hfParentContainer.ClientID %>').val(row);
            $('#<%= hfCurrentRowIndex.ClientID %>').val(rowIndex);
        }

        $(function () {
            RetainSelectedRow();
        });

        function RetainSelectedRow() {
            var parent = $('#<%= hfParentContainer.ClientID %>').val();
            var currentIndex = $('#<%= hfCurrentRowIndex.ClientID %>').val();
            if (parent != null) {
                ChangeRowColor(parent, currentIndex);
            }
        }
    </script>
    </asp:content>

    The ChangeRowColor() function sets the background color of the selected row. It is also where we store the row container and rowIndex values in the HiddenFields. The $(function(){}); is shorthand for the jQuery document.ready event. This event fires once the page is posted back to the server, which is why we call the RetainSelectedRow() function there. The RetainSelectedRow() function reads the currently selected values stored in the HiddenFields and passes them to the ChangeRowColor() function to restore the highlighted row. Finally, here's the code-behind part:

    protected void grdCustomer_RowDataBound(object sender, GridViewRowEventArgs e)
    {
        if (e.Row.RowType == DataControlRowType.DataRow)
        {
            e.Row.Attributes.Add("onclick", string.Format("ChangeRowColor('{0}','{1}');", e.Row.ClientID, e.Row.RowIndex));
        }
    }

    The code above attaches the JavaScript onclick event to each row, calling the ChangeRowColor() function and passing e.Row.ClientID and e.Row.RowIndex to it. Here's the sample output: That's it! I hope someone finds this post useful! Technorati Tags: jQuery,GridView,JavaScript,TipTricks
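    A small addendum to the article above (my own sketch, not part of the original post): the same persisted selection can also be read on the server side after the postback. The snippet below assumes the Button1_Click handler wired up in the markup earlier and simply parses the value the client script stored in hfCurrentRowIndex; treat it as an illustration, not the author's implementation.

    // Sketch only: reading the persisted selection server-side after a postback.
    protected void Button1_Click(object sender, EventArgs e)
    {
        int rowIndex;
        if (int.TryParse(hfCurrentRowIndex.Value, out rowIndex))
        {
            // The client script stores the zero-based row index, so add 1 for display,
            // matching what ChangeRowColor() shows in Label1.
            Label1.Text = (rowIndex + 1).ToString();
        }
    }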

    Read the article

  • 2011 The Year of Awesomesauce

    - by MOSSLover
    So I was talking to one of my friends, Cathy Dew, and I’m wondering how to start out this post.  What kind of title should I put?  Somehow we’re just randomly throwing things out and this title pops into my head the one you see above. I woke up today to the buzz of a text message.  I spent New Years laying around until 3 am watching Warehouse 13 Episodes and drinking champagne.  It was one of the best New Year’s I spent with my boyfriend and my cat.  I figured I would sleep in until Noon, but ended up waking up around 11:15 to that text message buzz.  I guess my DE, Rachel Appel, had texted me “Happy New Years”, because Rachel is that kind of person.  I immediately proceeded to check my email.  I noticed my live account had a hit.  The account I rarely ever use had an email.  I sort of had that sinking suspicion I was going to get Silverlight MVP right?  So I open the email and something out of the blue happens it says “blah blah blah SharePoint Server MVP blah blah…”.  I’m sitting here a little confused what?  Really?  Just about when you give up on something the unexplained happens.  I am grateful for what I have every day. So let me tell you a story.  I was a senior in high school and it was December 31st, 1999.  A couple days prior my grandmother was complaining she had a cold and her assisted living facility was not going to let her see a doctor.  She claimed to be very sick.  New Year’s Eve Day 1999 my grandmother was rushed to the hospital sometime very early in the morning.  My uncle, my little brother, and myself were sitting in the waiting room eagerly awaiting news.  The Sydney Opera House was playing in the background as New Years 2000 for Australia was ringing in.  They come out and they tell us my grandmother has pneumonia.  She is in the ICU in critical condition.  Eventually time passes in the day and my parents take my brother and I home.  So in the car we had a huge fight that ended in the worst new years of my life.  The next 30 days were the worst 30 days of my life.  I went to the hospital every single day to do my homework and watch my grandmother.  Each day was a challenge mentally and physically as my grandmother berated me in her demented state.  On the 30th day my grandmother ended up in critical condition in the ICU maxed out on painkillers.  At approximately 3 am I hear my parents telling me they don’t want to wake me up and that my grandmother had passed away.  I must have cried more collectively that day than any other day in my life.  Every New Years Even since I have cried thinking about who she was and what she represented.  She was human looking back she wasn’t anything great, but she was one of the positive lights in my life.  Her and my dad and my other grandmother constantly tried to make me feel great when my mother was telling me the opposite.  I’d like to think since 2000 the past 11 years have been the best 11 years of my life.  I got out of a bad situation by using the tools that I had in front of me.  Good grades and getting into a college so I could aspire to be the person that I wanted to be.  I had some great people along the way to help me out. So getting to the point I like to help people further there lives somehow in the best way I can possibly help out.  This New Years was one of the great years that helped me forget the past and focus on the present.  It makes me realize how far I’ve come since high school and even since college.  
    The one thing I’ve been grappling with over the years is how to feel good about making money while helping others out. I’d like to think I try really hard to give back to my community. I could not have done what I did without other people’s help. I sent out an email prior to even announcing I got the award today. I can’t say I did everything on my own. It’s not possible. I had the help of others every step of the way. I’m not sure if this makes sense, but the award can’t just be mine. This award is really owned by each and every one who helped me get here. From my dad to my grandmother to Rachel Appel to Bob Hunt to Jason Gallicchio to Cathy Dew to Mark Rackley to Johnny Ennion to Lee Brandt to Jeff Julian to John Alexander to Lori Gowin and to many others. Thank you guys for all the help and support. Technorati Tags: SharePoint Community,MVP Award,Microsoft Community

    Read the article

  • Community Branching

    - by Dane Morgridge
    As some may have noticed, I have taken a liking to Ruby (and Rails in particular) quite a bit recently. This last weekend I spoke at the NYC Code Camp on a comparison of ASP.NET and Rails as well as an intro to Entity Framework talk.  I am speaking at RubyNation in April and have submitted to other ruby conferences around the area and I am also doing a Rails and MongoDB talk at the Philly Code Camp in April. Before you start to think this is my "I'm leaving .NET post", which it isn't so I need to clarify. I am not, nor do I intend to any time in the near future plan on abandoning .NET.  I am simply branching out into another community based on a development technology that I very much enjoy.  If you look at my twitter bio, you will see that I am into Entity Framework, Ruby on Rails, C++ and ASP.NET MVC, and not necessarily in that order.  I know you're probably thinking to your self that I am crazy, which is probably true on several levels (especially the C++ part). I was actually crazy enough at the NYC Code Camp to show up wearing a Linux t-shirt, presenting with my MacBook Pro on Entity Framework, ASP.NET MVC and Rails. (I did get pelted in the head with candy by Rachel Appel for it though) At all of the code camps I am submitting to this year, i will be submitting sessions on likely all four topics, and some sessions will be a combination of 2 or more.  For example, my "ASP.NET MVC: A Gateway To Rails?" talk touches ASP.NET MVC, Entity Framework Code First and Rails. Simply put (and I talk about this in my MVC & Rails talk) is that learning and using Rails has made me a better ASP.NET MVC developer. Just one example of this is helper methods.  When I started working with ASP.NET MVC, I didn't really want to use helpers and preferred to just use standard html tags, especially where links were concerned.  It was just me being stubborn and not really seeing all of the benefit of the helpers.  To my defense, coming from WebForms, I wanted to be as bare metal as possible and it seemed at first like a lot of the helpers were an unnecessary abstraction. I took my first look at Rails back in v1 and didn't spend very much time with it so I dismissed it and went on my merry ASP.NET WebForms way.  Then I picked up ASP.NET MVC and grasped the MVC pattern itself much better. After this, I took another look at Rails and everything made sense.  I decided then to learn Rails. (I think it is important for developers to learn new languages and platforms regularly so it was a natural progression for me) I wanted to learn it the right way, so when I dug into code, everyone used helpers everywhere for pretty much everything possible. I took some time to dig in and found out how helpful they were and subsequently realized how awesome they were in ASP.NET MVC also and started using them. In short, I love Rails (and Ruby in general).  I also love ASP.NET MVC and Entity Framework and yes I still love C++.  I have varying degrees of love for them individually at any given moment and it is likely to shift based on the current project I am working on.  I know you're thinking it so before you ask the question. "Which do I use when?", I'm going to give the standard developer answer of: It depends.  There are a lot of factors that I am not going to even go into that would go into a decision.  The most basic question I would ask though is,  does this project depend on .NET?  If it does, then I'd say that ASP.NET MVC is probably going to be the more logical choice and I am going to leave it at that.  
I am working on projects right now in both technologies and I don't see that changing anytime soon (one project even uses both). With all that being said, you'll find me at code camps, conferences and user groups presenting on .NET, Ruby or both, writing about .NET and Ruby and I will likely be blogging on both in the future.  I know of others that have successfully branched out to other communities and with any luck I'll be successful at it too. On a (sorta) side note, I read a post by Justin Etheredge the other day that pretty much sums up my feelings about Ruby as a language.  I highly recommend checking it out: What Is So Great About Ruby?

    Read the article

  • Build 2012, some thoughts..

    - by Dennis Vroegop
    I think you probably read my rant about the logistics at Build 2012, as posted here, so I am not going into that anymore. Instead, let’s look at the content. (BTW If you did read that post and want some more info then read Nia Angelina’s post about Build. I have nothing to add to that.) As usual, there were good speakers and some speakers who could benefit from some speaker training. I find it hard to understand why Microsoft allows certain people on stage, people who speak English with such strong accents it’s hard for people, especially from abroad, to understand. Some basic training might be useful for some of them. However, it is nice to see that most speakers are project managers, program managers or even devs on the teams that build the stuff they talk about: there was a lot of knowledge on stage! And that means when you ask questions you get very relevant information. I realize I am not the average audience member here, I am regular speaker myself so I tend to look for other things when I am in a room than most audience members so my opinion might differ from others. All in all the knowledge of the speakers was above average but the presentation skills were most of the times below what I would describe as adequate. But let us look at the contents. Since the official name of the conference is Build Windows 2012 it is not surprising most of the talks were focused on building Windows 8 apps. Next to that, there was a lot of focus on Azure and of course Windows Phone 8 that launched the day before Build started. Most sessions dealt with C# and JavaScript although I did see a tendency to use C++ more. Touch. Well, that was the focus on a lot of sessions, that goes without saying. Microsoft is really betting on Touch these days and being a Touch oriented developer I can only applaud this. The term NUI is getting a bit outdated but the principles behind it certainly aren’t. The sessions did cover quite a lot on how to make your applications easy to use and easy to understand. However, not all is touch nowadays; still the majority of people use keyboard and mouse to interact with their machines (or, as I do, use keyboard, mouse AND touch at the same time). Microsoft understands this and has spend some serious thoughts on this as well. It was all about making your apps run everywhere on all sorts of devices and in all sorts of scenarios. I have seen a couple of sessions focusing on the portable class library and on sharing code between Windows 8 and Windows Phone 8. You get the feeling Microsoft is enabling us devs to write software that will be ubiquitous. They want your stuff to be all over the place and they do anything they can to help. To achieve that goal they provide us with brilliant SDK’s, great tooling, a very, very good backend in the form of Windows Azure (I was particularly impressed by the Mobility part of Azure) and some fantastic hardware. And speaking of hardware: the partners such as Acer, Lenovo and Dell are making hardware that puts Apple to a shame nowadays. To illustrate: in Bellevue (very close to Redmond where Microsoft HQ is) they have the Microsoft Store located very close to the Apple Store, so it’s easy to compare devices. And I have to say: the Microsoft offerings are much, much more appealing that what the Cupertino guys have to offer. That was very visible by the number of people visiting the stores: even on the day that Apple launched the iPad Mini there were more people in the Microsoft store than in the Apple store. 
    So, the future looks like it’s going to be fun. Great hardware (did I mention the Nokia Lumia 920? No? It’s brilliant), great software (Windows 8 is in a league of its own), the best dev tools (Visual Studio 2012 is still the champion here) and a fantastic backend (Azure.. need I say more?). It’s up to us devs to fill up the stores with applications that match this. To summarize: it is great to be a Windows developer. PS. Did I mention Surface RT? Man….. People were drooling all over it wherever I went. It is fantastic :-) Technorati Tags: Build,Windows 8,Windows Phone,Lumia,Surface,Microsoft

    Read the article

  • Configuring Application/User Settings in WPF the easy way.

    - by mbcrump
    In this tutorial, we are going to configure the application/user settings in a WPF application the easy way. Most examples that I've seen on the net involve the ConfigurationManager class and creating your own XML file from scratch. I am going to show you an easier way to do it (in my humble opinion). First, the definitions: User Setting – is designed to be something specific to the user. For example, one user may have a requirement to see certain stocks, news articles or local weather. This can be set at run-time. Application Setting – is designed to store information such as a database connection string. These settings are read-only at run-time. 1) Let's create a new WPF Project and play with a few settings. Once you are inside VS, paste the following code snippet inside the <Grid> tags. <Grid> <TextBox Height="23" HorizontalAlignment="Left" Margin="12,11,0,0" Name="textBox1" VerticalAlignment="Top" Width="285" Grid.ColumnSpan="2" /> <Button Content="Set Title" Name="button2" Click="button2_Click" Margin="108,40,96,114" /> <TextBlock Height="23" Name="textBlock1" Text="TextBlock" VerticalAlignment="Bottom" Width="377" /> </Grid> Basically, it's just a TextBox, Button and TextBlock. The main Window should look like the following:   2) Now we are going to set up our configuration settings. Look in the Solution Explorer and double-click on the Settings.settings file. Make sure that your settings file looks just like mine, included below:   What just happened is that the designer created an XML file and the Settings.Designer.cs file, which looks like this: //------------------------------------------------------------------------------ // <auto-generated> // This code was generated by a tool. // Runtime Version:4.0.30319.1 // // Changes to this file may cause incorrect behavior and will be lost if // the code is regenerated.
// </auto-generated> //------------------------------------------------------------------------------ namespace WPFExam.Properties { [global::System.Runtime.CompilerServices.CompilerGeneratedAttribute()] [global::System.CodeDom.Compiler.GeneratedCodeAttribute("Microsoft.VisualStudio.Editors.SettingsDesigner.SettingsSingleFileGenerator", "10.0.0.0")] internal sealed partial class Settings : global::System.Configuration.ApplicationSettingsBase { private static Settings defaultInstance = ((Settings)(global::System.Configuration.ApplicationSettingsBase.Synchronized(new Settings()))); public static Settings Default { get { return defaultInstance; } } [global::System.Configuration.UserScopedSettingAttribute()] [global::System.Diagnostics.DebuggerNonUserCodeAttribute()] [global::System.Configuration.DefaultSettingValueAttribute("ApplicationName")] public string ApplicationName { get { return ((string)(this["ApplicationName"])); } set { this["ApplicationName"] = value; } } [global::System.Configuration.ApplicationScopedSettingAttribute()] [global::System.Diagnostics.DebuggerNonUserCodeAttribute()] [global::System.Configuration.DefaultSettingValueAttribute("SQL_SRV342")] public string DatabaseServerName { get { return ((string)(this["DatabaseServerName"])); } } } } The XML File is named app.config and looks like this: <?xml version="1.0" encoding="utf-8" ?> <configuration> <configSections> <sectionGroup name="userSettings" type="System.Configuration.UserSettingsGroup, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" > <section name="WPFExam.Properties.Settings" type="System.Configuration.ClientSettingsSection, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" allowExeDefinition="MachineToLocalUser" requirePermission="false" /> </sectionGroup> <sectionGroup name="applicationSettings" type="System.Configuration.ApplicationSettingsGroup, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" > <section name="WPFExam.Properties.Settings" type="System.Configuration.ClientSettingsSection, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" /> </sectionGroup> </configSections> <userSettings> <WPFExam.Properties.Settings> <setting name="ApplicationName" serializeAs="String"> <value>ApplicationName</value> </setting> </WPFExam.Properties.Settings> </userSettings> <applicationSettings> <WPFExam.Properties.Settings> <setting name="DatabaseServerName" serializeAs="String"> <value>SQL_SRV342</value> </setting> </WPFExam.Properties.Settings> </applicationSettings> </configuration> 3) The only left now is the code behind the button. Double click the button and replace the MainWindow() method with the following code snippet. public MainWindow() { InitializeComponent(); this.Title = Properties.Settings.Default.ApplicationName; textBox1.Text = Properties.Settings.Default.ApplicationName; textBlock1.Text = Properties.Settings.Default.DatabaseServerName; } private void button2_Click(object sender, RoutedEventArgs e) { Properties.Settings.Default.ApplicationName = textBox1.Text.ToString(); Properties.Settings.Default.Save(); } Run the application and type something in the textbox and hit the Set Title button. Now, restart the application and you should see the text that you entered earlier.   If you look at the button2 click event, you will see that it was actually 2 lines of codes to save to the configuration file. I hope this helps, for more information consult MSDN.
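    One related aside (my addition, not covered in the walkthrough above): user-scoped settings are stored per application version, so after you ship a new version the previously saved values can appear to be lost. A minimal sketch of one common way to carry them forward is shown below; it assumes you add a boolean user setting named UpgradeRequired (defaulting to True) in the Settings designer, and it relies on the standard Upgrade/Save members of the generated settings class.

    public MainWindow()
    {
        InitializeComponent();

        // Hypothetical "UpgradeRequired" user setting (default True) added in the Settings designer.
        if (Properties.Settings.Default.UpgradeRequired)
        {
            // Copies user settings saved by a previous version into the current version's store.
            Properties.Settings.Default.Upgrade();
            Properties.Settings.Default.UpgradeRequired = false;
            Properties.Settings.Default.Save();
        }

        this.Title = Properties.Settings.Default.ApplicationName;
    }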

    Read the article

  • Master Page: Dynamically Adding Rows in ASP Table on Button Click event

    - by Vincent Maverick Durano
    In my previous post here, I wrote an example that demonstrates how to generate table rows dynamically in an ASP.NET Table on click of a Button control. Based on some comments on my previous example and in the forums, people wanted to implement it within a master page. Unfortunately the code in my previous example doesn't work in a master page, for the following main reasons: The Table is dynamically added within the Form tag, and so the TextBox controls will not be generated correctly in the page. The data will not be retained across postbacks, because the SetPreviousData() method is looking for the Table element within the Page and not within the master page. The Request.Form key value should be set correctly, since all controls within the master page are prefixed with the naming container ID to prevent duplicate IDs in the final rendered HTML. For example, a TextBox control with the ID TextBoxRow will end up with an ID like ctl00$MainBody$TextBoxRow. In order for the previous example to work within a master page we will have to correct those three main issues, and this post will guide you through correcting them. Suppose we have this content page declaration below:   <asp:Content ID="Content1" ContentPlaceHolderID="MainHead" Runat="Server"> </asp:Content> <asp:Content ID="Content2" ContentPlaceHolderID="MainBody" Runat="Server"> <asp:PlaceHolder ID="PlaceHolder1" runat="server"> <asp:Button ID="BTNAdd" runat="server" Text="Add New Row" OnClick="BTNAdd_Click" /> </asp:PlaceHolder> </asp:Content> As you notice, I've added a PlaceHolder control within the MainBody ContentPlaceHolder. This is because we are going to generate the Table in the PlaceHolder instead of generating it within the Form element. Now that issue #1 is corrected, let's proceed to the code-behind part.
Here are the full code blocks below:     using System; using System.Web.UI; using System.Web.UI.WebControls; public partial class DynamicControlDemo : System.Web.UI.Page { private int numOfRows = 1; protected void Page_Load(object sender, EventArgs e) { //Generate the Rows on Initial Load if (!Page.IsPostBack) { GenerateTable(numOfRows); } } protected void BTNAdd_Click(object sender, EventArgs e) { if (ViewState["RowsCount"] != null) { numOfRows = Convert.ToInt32(ViewState["RowsCount"].ToString()); GenerateTable(numOfRows); } } private void SetPreviousData(int rowsCount, int colsCount) { Table table = (Table)this.Page.Master.FindControl("MainBody").FindControl("Table1"); // **** if (table != null) { for (int i = 0; i < rowsCount; i++) { for (int j = 0; j < colsCount; j++) { //Extracting the Dynamic Controls from the Table TextBox tb = (TextBox)table.Rows[i].Cells[j].FindControl("TextBoxRow_" + i + "Col_" + j); //Use Request object for getting the previous data of the dynamic textbox tb.Text = Request.Form["ctl00$MainBody$TextBoxRow_" + i + "Col_" + j];//***** } } } } private void GenerateTable(int rowsCount) { //Creat the Table and Add it to the Page Table table = new Table(); table.ID = "Table1"; PlaceHolder1.Controls.Add(table);//****** //The number of Columns to be generated const int colsCount = 3;//You can changed the value of 3 based on you requirements // Now iterate through the table and add your controls for (int i = 0; i < rowsCount; i++) { TableRow row = new TableRow(); for (int j = 0; j < colsCount; j++) { TableCell cell = new TableCell(); TextBox tb = new TextBox(); // Set a unique ID for each TextBox added tb.ID = "TextBoxRow_" + i + "Col_" + j; // Add the control to the TableCell cell.Controls.Add(tb); // Add the TableCell to the TableRow row.Cells.Add(cell); } // And finally, add the TableRow to the Table table.Rows.Add(row); } //Set Previous Data on PostBacks SetPreviousData(rowsCount, colsCount); //Sore the current Rows Count in ViewState rowsCount++; ViewState["RowsCount"] = rowsCount; } }   As you observed the code is pretty much similar to the previous example except for the highlighted lines above. That's it! I hope someone find this post usefu! Technorati Tags: Dynamic Controls,ASP.NET,C#,Master Page
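    One small variation worth noting (my addition, not part of the original article): instead of hard-coding the "ctl00$MainBody$" prefix in SetPreviousData(), you can ask each TextBox for its UniqueID, which is exactly the key ASP.NET uses in Request.Form regardless of how the naming containers are nested. A hedged sketch of that version of the inner loop:

    // Sketch: same SetPreviousData() inner loop, but keyed on UniqueID instead of a hard-coded prefix.
    for (int i = 0; i < rowsCount; i++)
    {
        for (int j = 0; j < colsCount; j++)
        {
            TextBox tb = (TextBox)table.Rows[i].Cells[j].FindControl("TextBoxRow_" + i + "Col_" + j);
            if (tb != null)
            {
                // UniqueID already includes the master page naming container prefix (e.g. ctl00$MainBody$...).
                tb.Text = Request.Form[tb.UniqueID];
            }
        }
    }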

    Read the article

  • Many Different Things Rolled into a Ball

    - by MOSSLover
    Yeah I know I don’t blog much anymore, because life has taken me places that don’t involve the interwebs unfortunately.  I am in the midst of planning two events, starting a non for profit, creating more sessions for various conferences, submitting to various conferences, working a 40 hour a week job, attempting to hang out with boyfriend/friends/family.  So you can see that list does not include this blog sadly that’s how it goes sometimes.  The bottom piece very important over any of the top pieces.  I haven’t seen St. Louis in a while and I get to go back.  I was gone from home for MVP Summit and Best Practices Conference, so the boyfriend and cat didn’t get to see me either for a bit.  Then you have to add in the whole toilet being broken fiasco this week.  Maintenance really thought it would be cool to turn off the ability to flush.  I mean who does that?  Then when we call the owner he comes by turns it on and we figure it was an accident, because well the next day no one came by to tell us there was a leak.  It was all kinds of strangeness and involved me running to other people’s toilets.  As Dan Usher would say, I was a sad panda for a few days.  So I guess I wanted to post a few thoughts here just because I can.  I do not like multiple content editor webparts embedded with html files in numerous pages doing the same thing.  I will tell you why I don’t like these particular webparts and the way they are being used.  First off if you have a bunch of pages with script includes it’s about time you should just dump them into the masterpage.  Why bother finding all 20 pages and changing those pages when you can just use a single masterpage that already exists? The other thing that is bothering me days is screen scraping.  Just don’t do it, because in 2010 you will find the UI is substantially slower.  I understand you are new and you have no idea what to do.  You are also using 2007 am I right?  So then you need to go to codeplex.com and type in a search for SPServices.  Download it, use it, love it and then have it’s babies (well maybe don’t go so far this is not the GRID in Tron). If you have a ton of constants in your code why did you not go in and create a webpart with a bunch of properties and/or link to a configuration list hidden in the browser?  This type of property and list could help you out in the long run.  The power users and administrators can now change the control without you having to compile it over and over again.  It’s good stuff.  Also, you can change the control without compiling it, especially in 2007 where you have to do a farm solution.  In 2010 you can do a sandbox solution I guess, but shouldn’t you make it as easy and supportable as possible for other users? In conclusion I’m an angry person when it comes to viewing something repeatedly and analyzing it in a system.  Now we will move on to the next topic…MVP Summit…So yeah I can’t really talk about particulars, but I can talk about my experience as a person.  Don’t build something up to be cooler than it is only to be dropped from your 10,000 foot perch.  My experience was great, but the content overall was something to be desired.  It’s ok I got to meet a lot of people I would not have met if I had not gone.  Some of it was surreal, such as product group members showing up and talking to us.  It was pretty neat.  Plus I never had the chance to get to that mythical MS Office in Redmond.  Prior to Summit it was like Rainbow Brites unicorn trying taunting me on television when I was a kid.  
So I guess with all that said I give it a B.  It was awesome in some way, but lacking in other ways.  The cool part is that I got to go.  Would I have lived without going? Yes, but it was still cool. I could prattle on about other things and make this post massive, but I’m going to pass and give myself a piece of Sunday to play Rockband and do 800 other things.  I hope the two of you who read this blog are well.  I’ll catch you all at another juncture.  Have a good weekend and varying holidays in between. Technorati Tags: SharePoint,MVP Summit,JQuery,Javascript

    Read the article

  • Silverlight Cream Monday WP7 App Review # 1

    - by Dave Campbell
    I'm going to try something here... if it seems useful, I'll continue, if it doesn't, I'll stop... so give me feedback! There are *lots* of Apps in the WP7 Marketplace, and heaven help me, but the Marketplace sucks for finding stuff. I won't rehash what's already been said in the blogs, but I agree with one and all. I went out last Saturday to find 2 apps that I knew were released, and couldn't do so on my device. Even in the Zune app, it took quite a while to find them... ok, I'll back off a bit, because I just found out I can do 'Search' now if I know the name... I didn't think that was working before. So my thought is on Monday (like today), I will post a review of 5 apps/games I either use or have played with on my device. These are strictly my opinions, you understand, but hey... it's better than a poke in the eye with an iPhone! A few disclaimers:     Feel free to write me about your app and tell me about it. While it would be very cool to receive a whole bunch of xap files to review, at this point, for technical reasons, I'm unable to side-load my device. Since I plan on only doing this one day a week, and only 5, I may never get caught up, so if you send me some info, be patient. Re: games ... remember I'm old... I'm from the era of Colossal Cave and Zork. Duke-Nukem 2D and Captain Comic were awesome. I don't own an XBOX or any other game system, so take game reviews from my perspective -- who knows, it may be refreshing :) I won't pay for an app or game just to try it. If you expect me to test-drive your app, it's going to have to have a Free Trial. In this Issue:   Jingo! is the first app I bought, just to see what the experience was like. It's very much like a game we used to play in school in the Army in 1971 on paper we passed around. Sort of a cross between hangman and Mastermind, you try to figure out the hidden word in 5 tries. You get really good at 5-letter words after a while. I like this because you have to think, and you're not pressured by a clock Jingo! is by James Furdell and is $1.99 I reviewed René Schulte's Pictures Lab a while back, and have not changed my mind. This is an excellent app for playing with any photo on your device... one you've just taken, one you've synced from your PC, or one you've saved from email. I like this because you can get some cool effects for your photos, and it just works. Pictures Lab is by Schulte Software Development and is $1.99 Since I work as a consultant, and from home, I wanted something I could track my time with. I've test-driven all the contenders I could find so far on the phone, and so far I like ONTRACK! the best. If asked, I have some suggestions, but it's probably just the way I work or think. What I do like is I can tap a project to start/stop/restart a counter, and at the end of the day it shows me how much time I've been working. If there's a way to make an adjustment in case you forget to tap the counter, I don't know how to do it, and that's my biggest complaint. I like this because you can get a daily readout which you can also email as a spreadsheet. The daily results display is very good. ONTRACK! is by Qmino and is $2.99 Remember Item 4 above... I've been playing guitar for 48 years... obviously since before the invention of 'tuners', so I'm not as dependent upon these as some folks are. I've tried some in the past and have always felt I can do just as good by ear (I have perfect relative pitch). So, I gave this app by András Velvárt a dance just to see how it works, and it is surprisingly good. 
If you're used to one of the stage tuners this may take a little getting used to, but it does the job. The difference with this one is there is no real 'null' point inside which you can think your guitar is in tune. The soundwave stays visible on the device, and if it's moving to the right, your string is flat, if it's moving to the left, your string is sharp. Getting it exact might be tricky, but it is exact! If you need to rely on a tuner, this is a good choice in my opinion, exactly because of the sensitivity.. tune up with this and you're dead-on. Guitar Tuner is by Kinabalu Innovation Limited and is $0.99 Popper 2 is the WP7 version of a wildly popular game by Bill Reiss named Dr. Popper. You can get a trial, or you can now get a free lite version of the game. Popper 2 is a fast-paced bubble breaker game. I find it something fun to play when I just want to buzz out, but maybe the best review is that my daughter didn't want to give my phone back when I showed it to her, and always wants to grab my phone to play 'that game'. A fun distraction with great graphics and a great price Popper 2 is by Blue Rose Systems, LLC and is $1.29 Let me know what you think of the idea of doing reviews, or the layout/whatever, and Stay in the 'Light!   Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Telerik Releases a new Visual Entity Designer

    Love LINQ to SQL but are concerned that it is a second class citizen? Need to connect to more databases other than SQL Server? Think that the Entity Framework is too complex? Want a domain model designer for data access that is easy, yet powerful? Then the Telerik Visual Entity Designer is for you. Built on top of Telerik OpenAccess ORM, a very mature and robust product, Telerik's Visual Entity Designer is a new way to build your domain model that is very powerful and also real easy to use. How easy? I'll show you here. First Look: Using the Telerik Visual Entity Designer To get started, you need to install the Telerik OpenAccess ORM Q1 release for Visual Studio 2008 or 2010. You don't need to use any of the Telerik OpenAccess wizards, designers, or using statements. Just right click on your project and select Add|New Item from the context menu. Choose Telerik OpenAccess Domain Model from the Visual Studio project templates. (Note to existing OpenAccess users: don't run the Enable ORM wizard or any other OpenAccess menu unless you are building OpenAccess Entities.) You will then have to specify the database backend (SQL Server, SQL Azure, Oracle, MySQL, etc.) and connection. After you establish your connection, select the database objects you want to add to your domain model. You can also name your model; by default it will be NameofyourdatabaseEntityDiagrams. You can click finish here if you are comfortable, or tweak some advanced settings. Many users of domain models like to add prefixes and suffixes to classes, fields, and properties as well as handle pluralization. I personally accept the defaults; however, I hate how DBAs force underscores on me, so I click on the option to remove them. You can also tweak your namespace, mapping options, and define your own code generation template to gain further control over the outputted code. This is a very powerful feature, but for now, I will just accept the defaults. When we click finish, you can see your domain model as a file with the .rlinq extension in the Solution Explorer. You can also bring up the visual designer to view or further tweak your model by double clicking on the model in the Solution Explorer. Time to use the model! Writing a LINQ Query Programming against the domain model is very simple using LINQ. Just set a reference to the model (line 12 of the code below) and write a standard LINQ statement (lines 14-16). (OpenAccess users: notice that you don't need any using statements for OpenAccess or an IObjectScope, just raw LINQ against your model.)
    1: using System;
    2: using System.Linq;
    3: //no need for an OpenAccess using statement
    4:
    5: namespace ConsoleApplication3
    6: {
    7:     class Program
    8:     {
    9:         static void Main(string[] args)
    10:         {
    11:             //a reference to the data context
    12:             NorthwindEntityDiagrams dat = new NorthwindEntityDiagrams();
    13:             //LINQ Statement
    14:             var result = from c in dat.Customers
    15:                          where c.Country == "Germany"
    16:                          select c;
    17:
    18:             //Print out the company name
    19:             foreach (var cust in result)
    20:             {
    21:                 Console.WriteLine("CompanyName: " + cust.CompanyName);
    22:             }
    23:             //keep the console window open
    24:             Console.Read();
    25:         }
    26:     }
    27: }

    Lines 19-24 loop through the result of our LINQ query and display the results. That's it! All of the super powerful features of OpenAccess are available to you to further enhance your experience; however, in most cases this is all you need. In future posts I will show how to use the Visual Designer with some other scenarios. Stay tuned. Enjoy! Technorati Tags: Telerik,OpenAccess,LINQ

    Read the article

  • Converting LINQ to Twitter to Twitter API v1.1

    - by Joe Mayo
    Twitter recently updated their API to v1.1 (Current status: API v1.1). Naturally, LINQ to Twitter  needed to be updated too. This blog post outlines the changes made to LINQ to Twitter during this conversion and highlights important features that LINQ to Twitter developers will want to know. Overall Impact Generally speaking, Twitter API v1.1 is semantically very much the same as it’s predecessor. The base URL changed and so did a few resource segments, but the resources themselves are still intact. The good news is that LINQ to Twitter has always shielded the developer from this plumbing, so the entities, types, and filters didn’t change much at all.  The following sections describe what did  change. Authentication In Twitter API v1.0 authentication was not required for some resources, such as user timelines and search. However, that’s all changed because *all* queries must be authenticated in Twitter API v1.1. LINQ to Twitter has various types of authorizers you can use, supporting whatever OAuth options are available via Twitter.  You can see the LINQ to Twitter documentation, Securing Your Applications, for more info on OAuth support. The New Search One of the larger changes to the API was Search. To be more specific, the Search entity now contains a List<Status>, named Statuses, to hold results.  Additionally, any meta-data associated with the search is now in a property named SearchMetaData. The change to the Search entity and responses is the big change, but the good news is that your Search query syntax doesn’t change. Different Rate Limits The issue of rate limits itself is contentious, but this discussion is focused on the coding experience and I’ll leave the politics to those who prefer to engage in that activity. What’s important here is that both headers and resources have changed. You should review Twitter’s Rate Limit documentation to understand what the changes mean.  A quick explanation is that rate limits are applied individually to each resource in 15 minute time intervals. In LINQ to Twitter these changes surface on the Help entity, via HelpType.RateLimits. The RateLimits query has a Resources filter where you can specify a comma-separated list of categories to return rate limit info for.  The results materialize in the RateLimits dictionary, keyed on category. The Help entity also has a RateLimitsAuthorizationContext, holding the Access Token for the user performing queries – and to whom the rate limits apply. In addition to the new RateLimits query, there are new RateLimit headers that appear in the query response, whose HTTP header name is of the form X-Rate-Limit… which is different from the previous header name. LINQ to Twitter surfaces these headers via the existing properties of the TwitterContext instance. For anyone who retrieved rate limit information via the Headers property of TwitterContext, you should be aware of the new header names.  I haven’t done anything with Feature rate limit properties yet, but they appear to no longer be available – this will require more follow-up. Error Handling Twitter API v1.1 has a new format for Error Codes & Responses. LINQ to Twitter wraps these messages in the TwitterQueryException, which has been updated appropriately. The Message property of TwitterQueryException now reflects the Twitter error message, when available. There’s also a new ErrorCode that’s populated with the message error code. 
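    To make the Search change concrete, here is a small illustrative sketch (my own, not lifted from the LINQ to Twitter documentation) of reading results from the new Statuses list described above. It assumes an already-authorized TwitterContext named twitterCtx and the SearchType.Search/Query filter syntax LINQ to Twitter has used for search; check the docs for the exact member names in your version.

    // Sketch only: a v1.1-style search, reading tweets from the Search entity's Statuses list.
    var searchResponse =
        (from search in twitterCtx.Search
         where search.Type == SearchType.Search &&
               search.Query == "LINQ to Twitter"
         select search)
        .SingleOrDefault();

    if (searchResponse != null && searchResponse.Statuses != null)
    {
        foreach (var status in searchResponse.Statuses)
        {
            // Each entry is a Status, just like the other timeline queries.
            Console.WriteLine(status.Text);
        }
    }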
Parameters Most parameters stayed the same, but one of interest is Include Entities (different from LINQ to Twitter data object entities). Entities are metadata hanging off tweets, that provide start/end position in the tweet and other information for mentions, urls, hash tags, and media. Entities used to not be included unless you specified you wanted them. Now, in v1.1, entities are included by default for all APIs that return a Status.  If you were always setting IncludeEntities to true, then you won’t see a change. However, be aware that you’ll now be receiving additional data in your response from Twitter, which will explain a sudden increase in bandwidth utilization. This might or might not  matter to you  depending on the requirements of your application, but you should be aware of it. Everything Else There might be small changes here and there that I haven’t mentioned, but these were the ones you should be most aware of.  Streams didn’t change, but Twitter will be deprecating username/password authentication on public streams, in favor of OAuth, so you’ll be seeing me make that change some time in the future.  Also, Twitter will continue to evolve the API and you can expect that LINQ to Twitter will change accordingly. Summary The big changes to Twitter API were Authentication, Search, Rate Limits, and Error Handling. All API calls must be authenticated. You’ll need to change your code to read Search results differently, but the query is much the same as you use now. There’s a new RateLimits API, one of the Help queries.  Also, the new error messages are integrated into TwitterQueryException. Besides these changes, I expect  most others to be small or affect a smaller percentage of developers.  You can get the latest version of LINQ to Twitter from NuGet or visit the LINQ to Twitter download page at CodePlex.com.   @JoeMayo

    Read the article

  • 2D metaball liquid effect - how to feed output of one rendering pass as input to another shader

    - by Guye Incognito
    I'm attempting to make a shader for unity3d web project. I want to implement something like in the great answer by DMGregory in this question. in order to achieve a final look something like this.. Its metaballs with specular and shading. The steps to make this shader are. 1. Convert the feathered blobs into a heightmap. 2. Generate a normalmap from the heightmap 3. Feed the normal map and height map into a standard unity shader, for instance transparent parallax specular. I pretty much have all the pieces I need assembled but I am new to shaders and need help putting them together I can generate a heightmap from the blobs using some fragment shader code I wrote (I'm just using the red channel here cus i dont know if you can access the brightness) half4 frag (v2f i) : COLOR{ half4 texcol,finalColor; texcol = tex2D (_MainTex, i.uv); finalColor=_MyColor; if(texcol.r<_botmcut) { finalColor.r= 0; } else if((texcol.r>_topcut)) { finalColor.r= 0; } else { float r = _topcut-_botmcut; float xpos = _topcut - texcol.r; finalColor.r= (_botmcut + sqrt((xpos*xpos)-(r*r)))/_constant; } return finalColor; } turns these blobs.. into this heightmap Also I've found some CG code that generates a normal map from a height map. The bit of code that makes the normal map from finite differences is here void surf (Input IN, inout SurfaceOutput o) { o.Albedo = fixed3(0.5); float3 normal = UnpackNormal(tex2D(_BumpMap, IN.uv_MainTex)); float me = tex2D(_HeightMap,IN.uv_MainTex).x; float n = tex2D(_HeightMap,float2(IN.uv_MainTex.x,IN.uv_MainTex.y+1.0/_HeightmapDimY)).x; float s = tex2D(_HeightMap,float2(IN.uv_MainTex.x,IN.uv_MainTex.y-1.0/_HeightmapDimY)).x; float e = tex2D(_HeightMap,float2(IN.uv_MainTex.x-1.0/_HeightmapDimX,IN.uv_MainTex.y)).x; float w = tex2D(_HeightMap,float2(IN.uv_MainTex.x+1.0/_HeightmapDimX,IN.uv_MainTex.y)).x; float3 norm = normal; float3 temp = norm; //a temporary vector that is not parallel to norm if(norm.x==1) temp.y+=0.5; else temp.x+=0.5; //form a basis with norm being one of the axes: float3 perp1 = normalize(cross(norm,temp)); float3 perp2 = normalize(cross(norm,perp1)); //use the basis to move the normal in its own space by the offset float3 normalOffset = -_HeightmapStrength * ( ( (n-me) - (s-me) ) * perp1 + ( ( e - me ) - ( w - me ) ) * perp2 ); norm += normalOffset; norm = normalize(norm); o.Normal = norm; } Also here is the built-in transparent parallax specular shader for unity. 
Shader "Transparent/Parallax Specular" { Properties { _Color ("Main Color", Color) = (1,1,1,1) _SpecColor ("Specular Color", Color) = (0.5, 0.5, 0.5, 0) _Shininess ("Shininess", Range (0.01, 1)) = 0.078125 _Parallax ("Height", Range (0.005, 0.08)) = 0.02 _MainTex ("Base (RGB) TransGloss (A)", 2D) = "white" {} _BumpMap ("Normalmap", 2D) = "bump" {} _ParallaxMap ("Heightmap (A)", 2D) = "black" {} } SubShader { Tags {"Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent"} LOD 600 CGPROGRAM #pragma surface surf BlinnPhong alpha #pragma exclude_renderers flash sampler2D _MainTex; sampler2D _BumpMap; sampler2D _ParallaxMap; fixed4 _Color; half _Shininess; float _Parallax; struct Input { float2 uv_MainTex; float2 uv_BumpMap; float3 viewDir; }; void surf (Input IN, inout SurfaceOutput o) { half h = tex2D (_ParallaxMap, IN.uv_BumpMap).w; float2 offset = ParallaxOffset (h, _Parallax, IN.viewDir); IN.uv_MainTex += offset; IN.uv_BumpMap += offset; fixed4 tex = tex2D(_MainTex, IN.uv_MainTex); o.Albedo = tex.rgb * _Color.rgb; o.Gloss = tex.a; o.Alpha = tex.a * _Color.a; o.Specular = _Shininess; o.Normal = UnpackNormal(tex2D(_BumpMap, IN.uv_BumpMap)); } ENDCG } FallBack "Transparent/Bumped Specular" }

    Read the article

  • Agile Executives

    - by Robert May
    Over the years, I have experienced many different styles of software development. In the early days, most of the development was Waterfall development. In the last few years, I’ve become an advocate of Scrum. As I talked about last month, many people have misconceptions about what Scrum really is. We do Scrum at Veracity because of the difference it makes in the life of the team doing Scrum. Software is for people, and happy, motivated people will build better software. However, not all executives understand Scrum and how to get the information from development teams that use Scrum. I think that these executives need a support system for managing Agile teams. Historical Software Management When Henry Ford pioneered the assembly line, I doubt he realized the impact he’d have on Management through the ages. Historically, management was about managing the process of building things. The people were just cogs in that process. Like all cogs, they were replaceable. Unfortunately, most of the software industry followed this same style of management. Many of today’s senior managers learned how to manage companies before software was a significant influence on how the company did business. Software development is a very creative process, but too many managers have treated it like an assembly line: ideas go in, working software comes out, and we just have to figure out how to make sure that the ideas going in are perfect, and then the software will be perfect. Lean Manufacturing In the manufacturing industry, Lean manufacturing has revolutionized Henry Ford’s assembly line. Derived from the Toyota process, Lean places emphasis on always providing value for the customer. Anything the customer wouldn’t be willing to pay for is wasteful. Agile is based on similar principles. We’re building software for people, and anything that isn’t useful to them doesn’t add value. Waterfall development would have teams build reams and reams of documentation about how the software should work. Agile development dispenses with this work because excessive documentation doesn’t add value. Instead, teams focus on building documentation only when it truly adds value to the customer. Many other Agile principles are similar. Playing Catch-up Just like in the manufacturing industry, many managers in the software industry have yet to understand the value of the principles of Lean and Agile. They think they can wrap the uncertainties of software development up in a nice little package and then just execute, usually followed by failure. They spend a great deal of time and money trying to exactly predict the future. That expenditure of time and money doesn’t add value to the customer. Managers that understand Agile know that there is a better way. They will instead focus on the priorities of the near term in detail, and leave the future to take care of itself. They have very detailed two-week plans with less detailed quarterly plans. These plans are guided by a general corporate strategy that doesn’t focus on the exact implementation details. These managers also think in smaller features rather than large functionality. This adds a great deal of value to customers, since the features that matter most are the ones the team focuses on in the near term and is then able to deliver to the customers who are paying for them. Agile managers also realize that stale software is very costly.
They know that keeping the technology in their software current is much less expensive and risky than large rewrites that occur infrequently, so they schedule time in each release for refactoring the existing software. Agile Executives Even though Agile is a better way, I’ve still seen failures using the Agile process. While some of these failures can be attributed to the team, most of them are caused by managers, not the team. Managers fail to understand what Agile is, how it works, and how to get the information that they need to make good business decisions. I think this is a shame. I’m very pleased that Veracity understands this problem and is trying to do something about it. Veracity is a key sponsor of Agile Executives. In fact, Galen is this year’s acting president for Agile Executives. The purpose of Agile Executives is to help managers better manage Agile teams and see better success. Agile Executives is trying to build a community of executives that ranges from managers interested in Agile to managers that have successfully adopted Agile. Together, these managers can form a community of support and ideas that will help make Agile teams more successful. Helping Your Team You can help too! Talk with your manager and get them involved in Agile Executives. Help Veracity build the community. If your manager understands Agile better, he’ll understand how to help his teams, which will result in software that adds more value for customers. If you have any questions about how you can be involved, please let me know. Technorati Tags: Agile,Agile Executives

    Read the article

  • Silverlight Cream Monday WP7 App Review # 2

    - by Dave Campbell
    Today's Review (alphabetic order): GooNews, Grocery Shopping List, Need for Speed, SurfCube, and United Nations News. I'm a day late if these are going to be 'Monday' posts, but there are lots of apps, lots of goodness, and lots of email, so I might try to do 2 a week; we'll see. So once again I've got a small review of 5 apps that are either on my phone or have been. Disclaimers at the end. In this Issue: GooNews is a very cool app from Shawn Wildermuth (AgiliTrain). I don't know if he uses this as a demo during his instruction, but it definitely serves a purpose... wanna pick up the top news items from Google on a never-ending basis? ... this is it. You can add your own keyword searches, and send stories to InstaPaper or share via email. I like this because it brings me the news quickly and keeps it updated, and it works great. GooNews is by AgiliTrain and is Free. Grocery Shopping List was a request by the author, and actually surprised me. I'm a big one for lists, but I would have just done a OneNote list to SkyDrive and to my phone. This app is a lot more than that, but it will take you some setup to make it 'yours'. For obvious reasons, there are no unit prices on things, so you have to set that up to get some idea of the cost of what you're shopping for. But if you do that, you'll get a nice total. Lots of thought went into the various categories and you can add your own. There's a bit of animation on the category selection that's nice. He seems to have covered all the bases necessary to use this, even shopping 'plans' that can be saved, and emailing of lists. As I said, I'm more of a raw list person, but if you take the time to set this up, it should work very nicely for you. Grocery Shopping List is by Grocery Shopper and is $0.99 ($1.99 after Feb 1) with a free trial. Need for Speed was the 2nd commercial game I bought, and the one I've played the most. I ran the trial, thought it worked great, and bought it. I've had a lot of fun with this... there's no gas pedal... your foot is in the carburetor from the GO!, and unless you wanna tap the screen and brake like a little girl, just hang onto the steering wheel (the phone) and guide your way through. Hours of fun and challenges here. I like this because it's got some challenge to it, and the cars seem to be very realistic in their reactions. Need for Speed Undercover is by Electronic Arts, is $4.99, and has a free trial. SurfCube Browser is another app by the folks that did the GuitarTuner I reviewed on Monday. You have to see SurfCube to believe it. You've probably seen the YouTube video; if not, check SilverlightCream number 1017. The app works very solidly, just as the video demonstrates. I downloaded and tried this, and immediately did 2 things: bought it and pinned it to my start page. I like this because it's fun to work with, and it works great as a browser. I'm about *this* close to replacing the IE tile on my front page with SurfCube. SurfCube Browser is by Kinabalu Innovation Limited and is $1.99 and has a free trial. Coming in with another news app is United Nations News by Justin Angel. This is definitely a news aggregator for 'grown ups'... news, photos, videos, and radio broadcasts from the international community, all in one very slick app. This is an amazingly well thought-out and complete app. Even better yet, Justin has the code on CodePlex. A very well-done international news aggregator. United Nations News is by Justin Angel and is Free. A few disclaimers: Feel free to write me about your app and tell me about it.
While it would be very cool to receive a whole bunch of xap files to review, at this point, for technical reasons, I'm unable to side-load my device. Since I plan on only doing this one day a week (twice if I find time), and only 5 apps at a time, I may never get caught up, so if you send me some info, be patient. Re: games ... remember I'm old... I'm from the era of Colossal Cave and Zork. Duke Nukem 2D and Captain Comic were awesome. I don't own an XBOX or any other game system, so take game reviews from my perspective -- who knows, it may be refreshing :) I won't pay for an app or game just to try it. If you expect me to test-drive your app, it's going to have to have a Free Trial. I'm still playing with the format; comments are welcome. I decided I should alphabetize the list today... so there's no order implied. Let me know what you think of the idea of doing reviews, or the layout/whatever, and Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • FAQ&ndash;Highlight GridView Row on Click and Retain Selected Row on Postback

    - by Vincent Maverick Durano
    A couple of months ago I wrote a simple demo about “Highlighting GridView Row on MouseOver”. I’ve noticed many members in the forums (http://forums.asp.net) are asking how to highlight a row in a GridView and retain the selected row across postbacks. So I’ve decided to write this post to demonstrate how to implement it, as a reference for others who might need it. In this demo I’m going to use a combination of plain JavaScript and jQuery to do the client-side manipulation. I presume that you already know how to bind the grid with data, because I will not include the code for populating the GridView here. For binding the GridView you can refer to this post: Binding GridView with Data the ADO.Net way or this one: GridView Custom Paging with LINQ. To get started, let’s implement the highlighting of a GridView row on row click and retain the selected row on postback. For simplicity I set up the page like this: <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server"> <h2>You have selected Row: (<asp:Label ID="Label1" runat="server" />)</h2> <asp:HiddenField ID="hfCurrentRowIndex" runat="server"></asp:HiddenField> <asp:HiddenField ID="hfParentContainer" runat="server"></asp:HiddenField> <asp:Button ID="Button1" runat="server" onclick="Button1_Click" Text="Trigger Postback" /> <asp:GridView ID="grdCustomer" runat="server" AutoGenerateColumns="false" onrowdatabound="grdCustomer_RowDataBound"> <Columns> <asp:BoundField DataField="Company" HeaderText="Company" /> <asp:BoundField DataField="Name" HeaderText="Name" /> <asp:BoundField DataField="Title" HeaderText="Title" /> <asp:BoundField DataField="Address" HeaderText="Address" /> </Columns> </asp:GridView> </asp:Content>   Note: Since the action is done at the client side, when we do a postback (like clicking on a button) the page will be re-created and you will lose the highlighted row. This is normal because the server doesn't know anything about the client/browser unless you do something to notify the server that something has changed. To persist the settings we will use some HiddenField controls to store the data so that when the page posts back we can reference the values from there.
Now here are the JavaScript functions below: <asp:content id="Content1" runat="server" contentplaceholderid="HeadContent"> <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js" type="text/javascript"></script> <script type="text/javascript"> var prevRowIndex; function ChangeRowColor(row, rowIndex) { var parent = document.getElementById(row); var currentRowIndex = parseInt(rowIndex) + 1; if (prevRowIndex == currentRowIndex) { return; } else if (prevRowIndex != null) { parent.rows[prevRowIndex].style.backgroundColor = "#FFFFFF"; } parent.rows[currentRowIndex].style.backgroundColor = "#FFFFD6"; prevRowIndex = currentRowIndex; $('#<%= Label1.ClientID %>').text(currentRowIndex); $('#<%= hfParentContainer.ClientID %>').val(row); $('#<%= hfCurrentRowIndex.ClientID %>').val(rowIndex); } $(function () { RetainSelectedRow(); }); function RetainSelectedRow() { var parent = $('#<%= hfParentContainer.ClientID %>').val(); var currentIndex = $('#<%= hfCurrentRowIndex.ClientID %>').val(); if (parent != null) { ChangeRowColor(parent, currentIndex); } } </script> </asp:content>   The ChangeRowColor() function sets the background color of the selected row. It is also where we set the previous row and rowIndex values in the HiddenFields. The $(function(){}) call is shorthand for the jQuery document.ready function. This function will be fired when the page is posted back to the server; that’s why we call the RetainSelectedRow() function there. The RetainSelectedRow() function is where we reference the currently selected values stored in the HiddenFields and pass them to the ChangeRowColor() function to retain the highlighted row. Finally, here’s the code-behind part: protected void grdCustomer_RowDataBound(object sender, GridViewRowEventArgs e) { if (e.Row.RowType == DataControlRowType.DataRow) { e.Row.Attributes.Add("onclick", string.Format("ChangeRowColor('{0}','{1}');", e.Row.ClientID, e.Row.RowIndex)); } } The code above is responsible for attaching the JavaScript onclick event to each row, calling the ChangeRowColor() function and passing e.Row.ClientID and e.Row.RowIndex to it. Here’s the sample output below:   That’s it! I hope someone finds this post useful! Technorati Tags: jQuery,GridView,JavaScript,TipTricks

    Read the article

  • Character-encoding problem spring

    - by aelshereay
    Hi All, I am stuck in a big problem with encoding in my website! I use Spring 3, Tomcat 6, and a MySQL db. I want to support German and Czech along with English in my website. I created all the JSPs as UTF-8 files, and in each JSP I include the following: <%@ page language="java" contentType="text/html; charset=UTF-8" pageEncoding="UTF-8"%> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> I created messages.properties (the default, which is Czech), messages_de.properties, and messages_en.properties, and all of them are saved as UTF-8 files. I added the following to web.xml: <filter> <filter-name>encodingFilter</filter-name> <filter-class>org.springframework.web.filter.CharacterEncodingFilter</filter-class> <init-param> <param-name>encoding</param-name> <param-value>UTF-8</param-value> </init-param> <init-param> <param-name>forceEncoding</param-name> <param-value>true</param-value> </init-param> </filter> <locale-encoding-mapping-list> <locale-encoding-mapping> <locale>en</locale> <encoding>UTF-8</encoding> </locale-encoding-mapping> <locale-encoding-mapping> <locale>cz</locale> <encoding>UTF-8</encoding> </locale-encoding-mapping> <locale-encoding-mapping> <locale>de</locale> <encoding>UTF-8</encoding> </locale-encoding-mapping> </locale-encoding-mapping-list> <filter-mapping> <filter-name>encodingFilter</filter-name> <url-pattern>/*</url-pattern> </filter-mapping> And I added the following to my applicationContext.xml: <bean id="messageSource" class="org.springframework.context.support.ResourceBundleMessageSource" p:basenames="messages"/> <!-- Declare the Interceptor --> <mvc:interceptors> <bean class="org.springframework.web.servlet.i18n.LocaleChangeInterceptor" p:paramName="locale" /> </mvc:interceptors> <!-- Declare the Resolver --> <bean id="localeResolver" class="org.springframework.web.servlet.i18n.SessionLocaleResolver" /> I set the useBodyEncodingForURI attribute to true in the Connector element of server.xml under %CATALINA_HOME%/conf; another time I tried to add URIEncoding="UTF-8" instead. I created all the tables and fields with charset [utf8] and collation [utf8_general_ci]. The encoding in my browser is UTF-8 (BTW, I have IE8 and Firefox 3.6.3). When I open the MySQL Query Browser and insert Czech or German data manually, it's being inserted correctly, and displayed correctly in my app as well. So, here's the list of problems I have: By default the messages.properties (Czech) should load; instead the messages_en.properties loads by default. In the web form, when I enter Czech data and then click submit, in the Controller I print out the data to the console before saving it to the db; what's being printed is not correct and has strange chars, and this is the exact data that gets saved to the db. I don't know where the mistake is! Why can't I get it working although I did what worked for other people? I don't know... Please help me, I have been stuck on this crappy problem for days, and it drives me crazy! Thank you in advance.
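Purely as an illustration, and not part of the original question: Java's ResourceBundle reads .properties files as ISO-8859-1 by default, so one variation of the message source configuration above is to declare the file encoding explicitly. A minimal sketch using Spring's Java config follows; the class name is made up, and the XML equivalent would be a ReloadableResourceBundleMessageSource bean with its defaultEncoding property set to UTF-8.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.support.ReloadableResourceBundleMessageSource;

// Hypothetical configuration class, shown only to illustrate the defaultEncoding setting
@Configuration
public class MessageSourceConfig {

    @Bean
    public ReloadableResourceBundleMessageSource messageSource() {
        ReloadableResourceBundleMessageSource messageSource =
                new ReloadableResourceBundleMessageSource();
        // "classpath:messages" resolves messages.properties, messages_de.properties, ...
        messageSource.setBasename("classpath:messages");
        // Read the bundles as UTF-8 instead of the ISO-8859-1 default
        messageSource.setDefaultEncoding("UTF-8");
        return messageSource;
    }
}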

    Read the article

  • JSTL c:forEach causes @ViewScoped bean to invoke @PostConstruct on every request

    - by Nitesh Panchal
    Hello, again I see that @PostConstruct is firing every time even though no binding attribute is used. See this code: <?xml version='1.0' encoding='UTF-8' ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xmlns:h="http://java.sun.com/jsf/html" xmlns:ui="http://java.sun.com/jsf/facelets" xmlns:c="http://java.sun.com/jsp/jstl/core"> <h:head> <title>Facelet Title</title> </h:head> <h:body> <h:form> <c:forEach var="item" items="#{TestBean.listItems}"> <h:outputText value="#{item}"/> </c:forEach> <h:commandButton value="Click" actionListener="#{TestBean.actionListener}"/> </h:form> </h:body> </html> And this is the simplest possible bean in JSF: package managedBeans; import java.io.Serializable; import java.util.ArrayList; import java.util.List; import javax.annotation.PostConstruct; import javax.faces.bean.ManagedBean; import javax.faces.bean.ViewScoped; @ManagedBean(name="TestBean") @ViewScoped public class TestBean implements Serializable { private List<String> listItems; public List<String> getListItems() { return listItems; } public void setListItems(List<String> listItems) { this.listItems = listItems; } public TestBean() { } @PostConstruct public void init(){ System.out.println("Post Construct fired!"); listItems = new ArrayList<String>(); listItems.add("Mango"); listItems.add("Apple"); listItems.add("Banana"); } public void actionListener(){ System.out.println("Action Listener fired!"); } } Do you see any behaviour that should cause the @PostConstruct callback to fire each time? I think JSF 2.0 is highly unstable. If it has to fire @PostConstruct each and every time, what purpose does @ViewScoped serve? Why not use @RequestScoped only? I thought I had made some mistake in my application, but when I created this simplest possible example in JSF, I still got this behaviour. Am I not understanding the scopes of JSF, or are they not testing it properly? Further, if you remove c:forEach and replace it with ui:repeat, then it works fine. Waiting for replies to confirm whether this is a bug or intentional, to stop programmers from using JSTL.

    Read the article

  • Webdriver PageObject Implementation using PageFactory in Java

    - by kamal
    here is what i have so far: A working Webdriver based Java class, which logs-in to the application and goes to a Home page: import java.io.File; import java.io.IOException; import java.util.concurrent.TimeUnit; import org.apache.commons.io.FileUtils; import org.openqa.selenium.By; import org.openqa.selenium.OutputType; import org.openqa.selenium.TakesScreenshot; import org.openqa.selenium.WebDriver; import org.openqa.selenium.firefox.FirefoxDriver; import org.openqa.selenium.firefox.FirefoxProfile; import org.testng.AssertJUnit; import org.testng.annotations.AfterMethod; import org.testng.annotations.BeforeMethod; import org.testng.annotations.Test; public class MLoginFFTest { private WebDriver driver; private String baseUrl; private String fileName = "screenshot.png"; @BeforeMethod public void setUp() throws Exception { FirefoxProfile profile = new FirefoxProfile(); profile.setPreference("network.http.phishy-userpass-length", 255); profile.setAssumeUntrustedCertificateIssuer(false); driver = new FirefoxDriver(profile); baseUrl = "https://a.b.c.d/"; driver.manage().timeouts().implicitlyWait(30, TimeUnit.SECONDS); } @Test public void testAccountLogin() throws Exception { driver.get(baseUrl + "web/certLogon.jsp"); driver.findElement(By.name("logonName")).clear(); AssertJUnit.assertEquals(driver.findElement(By.name("logonName")) .getTagName(), "input"); AssertJUnit.assertEquals(driver.getTitle(), "DA Logon"); driver.findElement(By.name("logonName")).sendKeys("username"); driver.findElement(By.name("password")).clear(); driver.findElement(By.name("password")).sendKeys("password"); driver.findElement(By.name("submit")).click(); driver.findElement(By.linkText("Account")).click(); AssertJUnit.assertEquals(driver.getTitle(), "View Account"); } @AfterMethod public void tearDown() throws Exception { File screenshot = ((TakesScreenshot) driver) .getScreenshotAs(OutputType.FILE); try { FileUtils.copyFile(screenshot, new File(fileName)); } catch (IOException e) { e.printStackTrace(); } driver.quit(); } } Now as we see there are 2 pages: 1. Login page, where i have to enter username and password, and homepage, where i would be taken, once the authentication succeeds. Now i want to implement this as PageObjects using Pagefactory: so i have : package com.example.pageobjects; import static com.example.setup.SeleniumDriver.getDriver; import java.util.concurrent.TimeUnit; import org.openqa.selenium.support.PageFactory; import org.openqa.selenium.support.ui.ExpectedCondition; import org.openqa.selenium.support.ui.FluentWait; import org.openqa.selenium.support.ui.Wait; public abstract class MPage<T> { private static final String BASE_URL = "https://a.b.c.d/"; private static final int LOAD_TIMEOUT = 30; private static final int REFRESH_RATE = 2; public T openPage(Class<T> clazz) { T page = PageFactory.initElements(getDriver(), clazz); getDriver().get(BASE_URL + getPageUrl()); ExpectedCondition pageLoadCondition = ((MPage) page).getPageLoadCondition(); waitForPageToLoad(pageLoadCondition); return page; } private void waitForPageToLoad(ExpectedCondition pageLoadCondition) { Wait wait = new FluentWait(getDriver()) .withTimeout(LOAD_TIMEOUT, TimeUnit.SECONDS) .pollingEvery(REFRESH_RATE, TimeUnit.SECONDS); wait.until(pageLoadCondition); } /** * Provides condition when page can be considered as fully loaded. 
* * @return */ protected abstract ExpectedCondition getPageLoadCondition(); /** * Provides the page-relative URL. * * @return */ public abstract String getPageUrl(); } And for the login page, I am not sure how I would implement that, as well as the test which would call these pages.
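One possible shape for the missing pieces, offered purely as a sketch rather than a definitive answer: the element names (logonName, password, submit), the relative URL web/certLogon.jsp, and the "DA Logon" title come from the MLoginFFTest class above, while the LoginPage class itself, its load condition, and the loginAs() helper are assumptions about how the abstract MPage<T> base could be extended.

package com.example.pageobjects;

import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.FindBy;
import org.openqa.selenium.support.ui.ExpectedCondition;
import org.openqa.selenium.support.ui.ExpectedConditions;

public class LoginPage extends MPage<LoginPage> {

    // Locators reuse the element names from the original MLoginFFTest
    @FindBy(name = "logonName")
    private WebElement logonName;

    @FindBy(name = "password")
    private WebElement password;

    @FindBy(name = "submit")
    private WebElement submit;

    @Override
    protected ExpectedCondition getPageLoadCondition() {
        // The original test asserts this title on the logon page
        return ExpectedConditions.titleIs("DA Logon");
    }

    @Override
    public String getPageUrl() {
        return "web/certLogon.jsp";
    }

    // Hypothetical helper: fills in the form and submits it
    public void loginAs(String user, String pass) {
        logonName.clear();
        logonName.sendKeys(user);
        password.clear();
        password.sendKeys(pass);
        submit.click();
    }
}

A test could then obtain the page with new LoginPage().openPage(LoginPage.class), call loginAs("username", "password"), and carry on asserting against whichever page object models the screen that follows.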

    Read the article

  • Defaulting the HLSL Vertex and Pixel Shader Levels to Feature Level 9_1 in VS 2012

    - by Michael B. McLaughlin
    I love Visual Studio 2012. But this is not a post about that. This is a post about tweaking one particular parameter that I’ve found a bit annoying. Disclaimer: You will be modifying important MSBuild files. If you screw up you will break your build tools. And maybe your computer will catch fire. I’m not responsible. No warranties or guaranties of any sort. This info is provided “as is”. By default, if you add a new vertex shader or pixel shader item to a project, it will be set to build with shader profile 4.0_level_9_3. If you need 9_3 functionality, this is all well and good. But (especially for Windows Store apps) you really want to target the lowest shader profile possible so that your game will run on as many computers as possible. So it’s a good idea to default to 9_1. To do this you could add in new HLSL files via “Add->New Item->Visual C++->HLSL->______ Shader File (.hlsl)” and then edit the shader files’ properties to set them manually to use 9_1 via “Properties->HLSL Compiler->General->Shader Model”. This is fine unless you forget to do this once and then submit your game with 9_3 shaders instead of 9_1 shaders to the Windows Store or to some other game store. Then you’d wind up with either rejection or angry “this doesn’t work on my computer! ripoff!” messages. There’s another option though. In “Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\ItemTemplates\VC\HLSL\1033\VertexShader” (note the path might vary slightly for you if you are using a 32-bit system or have a non-ENU version of Visual Studio 2012) you will find a “VertexShader.vstemplate” file. If you open this file in a text editor (e.g. Notepad++), then inside the CustomParameters tag within the TemplateContent tag you should see a CustomParameter tag for the ShaderType, i.e.: <CustomParameter Name="$ShaderType$" Value="Vertex"/> On a new line, we are going to add another CustomParameter tag to the CustomParameters tag. It will look like this: <CustomParameter Name="$ShaderModel$" Value="4.0_level_9_1"/> such that we now have:     <CustomParameters>       <CustomParameter Name="$ShaderType$" Value="Vertex"/>       <CustomParameter Name="$ShaderModel$" Value="4.0_level_9_1"/>     </CustomParameters> You can then save the file (you will need to be an Administrator or have Administrator access). Back in the 1033 directory (or whatever the number is for your language), go into the “PixelShader” directory. Edit the “PixelShader.vstemplate” file and make the same change (note that this time $ShaderType$ is “Pixel” not “Vertex”; you shouldn’t be changing that line anyway, but if you were to just copy and replace the above four lines then you will wind up creating pixel shaders that the HLSL compiler would try to compile as vertex shaders, with all sort of weird errors as a result). Once you’ve added the $ShaderModel$ line to “PixelShader.vstemplate” and have saved it, everything should be done. Since Feature Level 9_1 and 9_3 don’t support any of the other shader types, those are set to default to their appropriate minimums already (Compute and Geometry are set to “4.0” and Domain and Hull are set to “5.0”, which are their respective minimums (though not all 4.0 cards support Compute shaders; they were an optional feature added with DirectX 10.1 and only became required for DirectX 11 hardware). 
In case you are wondering where these magic values come from, you can find them all in the “fxc.xml” file in the “\Program Files (x86)\MSBuild\Microsoft.CPP\v4.0\V110\1033” directory (or whatever your language number is; 1033 is ENU and various other product languages have their own respective numbers (see: http://msdn.microsoft.com/en-us/goglobal/bb964664.aspx ) such that Japanese is 1041 (for example), though for all I know MSBuild tasks might be 1033 for everyone). If, like me, you installed VS 2012 to a drive other than the C:\ drive, you will find the vstemplate files in the drive to which you installed VS 2012 (D:\ in my case) but you will find the fxc.xml file on the C:\ drive. You should not edit fxc.xml. You will almost definitely break things by doing that; it’s just something you can look through to see all the other options that the FXC task takes such that you could, if needed, add further CustomParameter tags if you wanted to default to other supported options. I haven’t tried any others though so I don’t have any advice on how to set them.

    Read the article

  • Database Rebuild

    - by Robert May
    I promised I’d have a simpler mechanism for rebuilding the database.  Below is a complete MSBuild targets file for rebuilding the database from scratch.  I don’t know if I’ve explained the rational for this.  The reason why you’d WANT to do this is so that each developer has a clean version of the database on their local machine.  This also includes the continuous integration environment.  Basically, you can do whatever you want to the database without fear, and in a minute or two, have a completely rebuilt database structure. DBDeploy (including the KTSC build task for dbdeploy) is used in this script to do change tracking on the database itself.  The MSBuild ExtensionPack is used in this target file.  You can get an MSBuild DBDeploy task here. There are two database scripts that you’ll see below.  First is the task for creating an admin (dbo) user in the system.  This script looks like the following: USE [master] GO If not Exists (select Name from sys.sql_logins where name = '$(User)') BEGIN CREATE LOGIN [$(User)] WITH PASSWORD=N'$(Password)', DEFAULT_DATABASE=[$(DatabaseName)], CHECK_EXPIRATION=OFF, CHECK_POLICY=OFF END GO EXEC master..sp_addsrvrolemember @loginame = N'$(User)', @rolename = N'sysadmin' GO USE [$(DatabaseName)] GO CREATE USER [$(User)] FOR LOGIN [$(User)] GO ALTER USER [$(User)] WITH DEFAULT_SCHEMA=[dbo] GO EXEC sp_addrolemember N'db_owner', N'$(User)' GO The second creates the changelog table.  This script can also be found in the dbdeploy.net install\scripts directory. CREATE TABLE changelog ( change_number INTEGER NOT NULL, delta_set VARCHAR(10) NOT NULL, start_dt DATETIME NOT NULL, complete_dt DATETIME NULL, applied_by VARCHAR(100) NOT NULL, description VARCHAR(500) NOT NULL ) GO ALTER TABLE changelog ADD CONSTRAINT Pkchangelog PRIMARY KEY (change_number, delta_set) GO Finally, Here’s the targets file. <Projectxmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="4.0" DefaultTargets="Update">   <PropertyGroup>     <DatabaseName>TestDatabase</DatabaseName>     <Server>localhost</Server>     <ScriptDirectory>.\Scripts</ScriptDirectory>     <RebuildDirectory>.\Rebuild</RebuildDirectory>     <TestDataDirectory>.\TestData</TestDataDirectory>     <DbDeploy>.\DBDeploy</DbDeploy>     <User>TestUser</User>     <Password>TestPassword</Password>     <BCP>bcp</BCP>     <BCPOptions>-S$(Server) -U$(User) -P$(Password) -N -E -k</BCPOptions>     <OutputFileName>dbDeploy-output.sql</OutputFileName>     <UndoFileName>dbDeploy-output-undo.sql</UndoFileName>     <LastChangeToApply>99999</LastChangeToApply>   </PropertyGroup>     <ImportProject="$(MSBuildExtensionsPath)\ExtensionPack\4.0\MSBuild.ExtensionPack.tasks"/>   <UsingTask TaskName="Ktsc.Build.DBDeploy" AssemblyFile="$(DbDeploy)\Ktsc.Build.dll"/>   <ItemGroup>     <VariableInclude="DatabaseName">       <Value>$(DatabaseName)</Value>     </Variable>     <VariableInclude="Server">       <Value>$(Server)</Value>     </Variable>     <VariableInclude="User">       <Value>$(User)</Value>     </Variable>     <VariableInclude="Password">       <Value>$(Password)</Value>     </Variable>   </ItemGroup>     <TargetName="Rebuild">     <!--Take the database offline to disconnect any users. Requires that the current user is an admin of the sql server machine.-->     <MSBuild.ExtensionPack.SqlServer.SqlCmd Variables="@(Variable)" Database="$(DatabaseName)" TaskAction="Execute" CommandLineQuery ="ALTER DATABASE $(DatabaseName) SET OFFLINE WITH ROLLBACK IMMEDIATE"/>         <!--Bring it back online.  
If you don't, the database files won't be deleted.-->     <MSBuild.ExtensionPack.Sql2008.DatabaseTaskAction="SetOnline" DatabaseItem="$(DatabaseName)"/>     <!--Delete the database, removing the existing files.-->     <MSBuild.ExtensionPack.Sql2008.DatabaseTaskAction="Delete" DatabaseItem="$(DatabaseName)"/>     <!--Create the new database in the default database path location.-->     <MSBuild.ExtensionPack.Sql2008.DatabaseTaskAction="Create" DatabaseItem="$(DatabaseName)" Force="True"/>         <!--Create admin user-->     <MSBuild.ExtensionPack.SqlServer.SqlCmd TaskAction="Execute" Server="(local)" Database="$(DatabaseName)" InputFiles="$(RebuildDirectory)\0002 Create Admin User.sql" Variables="@(Variable)" />     <!--Create the dbdeploy changelog.-->     <MSBuild.ExtensionPack.SqlServer.SqlCmd TaskAction="Execute" Server="(local)" Database="$(DatabaseName)" LogOn="$(User)" Password="$(Password)" InputFiles="$(RebuildDirectory)\0003 Create Changelog.sql" Variables="@(Variable)" />     <CallTarget Targets="Update;ImportData"/>     </Target>    <TargetName="Update" DependsOnTargets="CreateUpdateScript">     <MSBuild.ExtensionPack.SqlServer.SqlCmd TaskAction="Execute" Server="(local)" Database="$(DatabaseName)" LogOn="$(User)" Password="$(Password)" InputFiles="$(OutputFileName)" Variables="@(Variable)" />   </Target>   <TargetName="CreateUpdateScript">     <ktsc.Build.DBDeploy DbType="mssql"                                        DbConnection="User=$(User);Password=$(Password);Data Source=$(Server);Initial Catalog=$(DatabaseName);"                                        Dir="$(ScriptDirectory)"                                        OutputFile="..\$(OutputFileName)"                                        UndoOutputFile="..\$(UndoFileName)"                                        LastChangeToApply="$(LastChangeToApply)"/>   </Target>     <TargetName="ImportData">     <ItemGroup>       <TestData Include="$(TestDataDirectory)\*.dat"/>     </ItemGroup>     <ExecCommand="$(BCP) $(DatabaseName).dbo.%(TestData.Filename) in&quot;%(TestData.Identity)&quot;$(BCPOptions)"/>   </Target> </Project> Technorati Tags: MSBuild

    Read the article

  • Exception thrown while accessing HTML DataTable in backing bean

    - by Denzil
    Hi folks, I am building a simple application in JSF with the CrUD functionality. I am trying to implement edit functionality using the tomahawk component .I am unable to retrieve the selected row in my backing bean. Here's my JSP file snip: <t:dataTable id="data" binding="#{selectOneRowList.dataTable}" styleClass="scrollerTable" headerClass="standardTable_Header" footerClass="standardTable_Header" rowClasses="standardTable_Row1,standardTable_Row2" columnClasses="standardTable_Column,standardTable_ColumnCentered,standardTable_Column" var="car" value="#{selectOneRowList.list}" sortColumn="#{selectOneRowList.sortColumn}" sortAscending="#{selectOneRowList.sortAscending}" preserveDataModel="false" preserveSort="true" preserveRowStates="true" rows="10" > <h:column> <f:facet name="header"> <h:outputText value="Select"/> </f:facet> <t:selectOneRow groupName="selection" id="hugo" value="#{selectOneRowList.selectedRowIndex}" onchange="submit();" immediate="true" valueChangeListener="#{selectOneRowList.processRowSelection}"/> </h:column> <h:column> <f:facet name="header"> </f:facet> <h:outputText value="#{car.id}" /> </h:column> <h:column> <f:facet name="header"> <h:outputText value="Cars" /> </f:facet> <h:outputText value="#{car.type}" /> </h:column> <t:column sortable="true"> <f:facet name="header"> <h:outputText value="Color" /> </f:facet> <h:outputText value="#{car.color}" /> </t:column> </t:dataTable> Here's my backing bean SelectOneRowList.java : public void editCar(ActionEvent event) { System.out.println("Row number ## " + _selectedRowIndex.toString() + " selected!"); System.out.println("Datatable ::"+ dataTable); System.out.println("Row Count ::" + dataTable.getRowCount()); dataItem = (SimpleCar) getDataTable().getRowData(); //selectedCar = (SimpleCar) _list.get(Integer.parseInt(_selectedRowIndex.toString())); selectedCar = (SimpleCar) dataTable.getRowData(); System.out.println(dataTable.getRowData()); } My DTO{Data Transfer Object} which is SimpleCar.java contains the variables ID, type, color and their respective setters/getters. The dataItem variable is of type "SimpleCar". The dataTable is of type HTMLDataTable. I am able to get the the first 3 SOP's but the 4th SOP isn't printed. I receive the following exception on the server : javax.faces.el.EvaluationException: Exception while invoking expression #{selectOneRowList.editCar} org.apache.myfaces.el.MethodBindingImpl.invoke(MethodBindingImpl.java:156) javax.faces.component.UICommand.broadcast(UICommand.java:89) javax.faces.component.UIViewRoot._broadcastForPhase(UIViewRoot.java:97) javax.faces.component.UIViewRoot.processApplication(UIViewRoot.java:171) org.apache.myfaces.lifecycle.InvokeApplicationExecutor.execute(InvokeApplicationExecutor.java:32) org.apache.myfaces.lifecycle.LifecycleImpl.executePhase(LifecycleImpl.java:95) org.apache.myfaces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:70) javax.faces.webapp.FacesServlet.service(FacesServlet.java:139) org.apache.myfaces.webapp.filter.ExtensionsFilter.doFilter(ExtensionsFilter.java:341) On click of the edit button the editCar method in my backing bean is invoked. I need to get the data of the selected row in my backing bean. Why is the exception occurring ? The above example is taken from the tomawhawk examples WAR distributed on the website. I have gone through many links including the ones on BalusC but none of them have helped. Regards,

    Read the article

  • Silverlight 4 Twitter Client &ndash; Part 7

    - by Max
    Download this article as a PDF Welcome back :) This week we are going to look at something more exciting and a much-required feature for any twitter client – auto refresh, so as to show new status updates. We are going to achieve this using Silverlight 4 timers and a bit of code, and refresh our datagrid every 2 minutes to show new updates. We will do this so that we make only minimal requests to the twitter API, so that twitter does not block us – there is a limit of 150 requests an hour. Let us get started now. Also, we will get the profile user id hyperlinked, so that whenever the user clicks on it, we will take them to their twitter page. Also, it was a pain to always run this application by pressing F5: it would open in a browser, and you would have to right-click, uninstall, and install it again to see any changes. All this and yet we were not able to debug it :( Now there is a solution for this: run a Silverlight application directly out of browser and yet have the debug feature. Super cool, here is how. Right-click on the Silverlight project, go to Debug, then select the Out-Of-Browser application option and choose the *.Web project. Then just right-click on the SL project and set it as the Startup Project. There you go: now every time you press F5, it will automatically run out of browser and still have the debug options. I got to know about this after some Binging. Now let us jump to the core straight away. 1) To get the user id hyperlinked, we need to have a DataGridTemplateColumn and within that have a HyperlinkButton. The code for this will be <data:DataGridTemplateColumn> <data:DataGridTemplateColumn.CellTemplate> <DataTemplate> <HyperlinkButton Click="HyperlinkButton_Click" Content="{Binding UserName}" TargetName="_blank" ></HyperlinkButton> </DataTemplate> </data:DataGridTemplateColumn.CellTemplate> </data:DataGridTemplateColumn> 2) Now let us look at how we are getting this done by looking into the HyperlinkButton_Click event handler. There we will dynamically set the NavigateUri to the twitter page. I tried to do this using some binding, eval-like stuff as in ASP.NET, but no luck! private void HyperlinkButton_Click(object sender, RoutedEventArgs e) { HyperlinkButton hb = (HyperlinkButton)e.OriginalSource; hb.NavigateUri = new Uri("http://twitter.com/" + hb.Content.ToString(), UriKind.Absolute); } 3) Now we need to switch on our timer right in the OnNavigatedTo event on our SL page. So we need to modify our OnNavigatedTo event to something like below: protected override void OnNavigatedTo(NavigationEventArgs e) { image1.Source = new BitmapImage(new Uri(GlobalVariable.profileImage, UriKind.Absolute)); this.Title = GlobalVariable.getUserName() + " - Home"; if (!GlobalVariable.isLoggedin()) this.NavigationService.Navigate(new Uri("/Login", UriKind.Relative)); else { currentGrid = "Timeline-Grid"; TwitterCredentialsSubmit(); myDispatcherTimer.Interval = new TimeSpan(0, 0, 0, 60, 0); myDispatcherTimer.Tick += new EventHandler(Each_Tick); myDispatcherTimer.Start(); } } I use a global string – here it is the currentGrid variable – to indicate what is bound in the datagrid, so that after every timer tick I can rebind the latest data to it again. For example, I will only rebind the friends timeline if the data grid currently holds it, and I’ll only rebind the respective list statuses if a list status is already bound to the data grid. In the above timer code, it’s set to trigger the Each_Tick event handler every 1 minute (60 seconds).
TimeSpan takes in (days, hours, minutes, seconds, milliseconds). 4) Now we need to set the list name in the currentGrid variable when a list button is clicked. So add the code line below to the list button event handler currentGrid = currentList = b.Content.ToString(); 5) Now let us see how Each_Tick event handler is implemented. public void Each_Tick(object o, EventArgs sender) { if (!currentGrid.Equals("Timeline-Grid")) getListStatuses(currentGrid); else { WebRequest.RegisterPrefix("https://", System.Net.Browser.WebRequestCreator.ClientHttp); WebClient myService = new WebClient(); myService.AllowReadStreamBuffering = true; myService.UseDefaultCredentials = false; myService.Credentials = new NetworkCredential(GlobalVariable.getUserName(), GlobalVariable.getPassword()); myService.DownloadStringCompleted += new DownloadStringCompletedEventHandler(TimelineRequestCompleted); myService.DownloadStringAsync(new Uri("https://twitter.com/statuses/friends_timeline.xml")); } } If the data grid hold friends timeline, I just use the same bit of code we had already to bind the friends timeline to the data grid. Copy Paste. But if it is some list timeline that is bound in the datagrid, I then call the getListStatus method with the currentGrid string which will actually be holding the list name. 6) I wanted to make the hyperlinks inside the status message as hyperlinks and when the user clicks on it, we can then open that link. I tried using a convertor and using a regex to recognize a url and wrap it up with a href, but that is not gonna work in silverlight textblock :( Anyways that convertor code is in the zip file. 7) You can get the complete project files from here. 8) Please comment below for your doubts, suggestions, improvements. I will try to reply as early as possible. Thanks for all your support. Technorati Tags: Silverlight 4,Datagrid,Twitter API,Silverlight Timer

    Read the article

  • C#/.NET Little Wonders: Getting Caller Information

    - by James Michael Hare
    Originally posted on: http://geekswithblogs.net/BlackRabbitCoder/archive/2013/07/25/c.net-little-wonders-getting-caller-information.aspx Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here. There are times when it is desirable to know who called the method or property you are currently executing.  Some applications of this could include logging libraries, or possibly even something more advanced that may server up different objects depending on who called the method. In the past, we mostly relied on the System.Diagnostics namespace and its classes such as StackTrace and StackFrame to see who our caller was, but now in C# 5, we can also get much of this data at compile-time. Determining the caller using the stack One of the ways of doing this is to examine the call stack.  The classes that allow you to examine the call stack have been around for a long time and can give you a very deep view of the calling chain all the way back to the beginning for the thread that has called you. You can get caller information by either instantiating the StackTrace class (which will give you the complete stack trace, much like you see when an exception is generated), or by using StackFrame which gets a single frame of the stack trace.  Both involve examining the call stack, which is a non-trivial task, so care should be done not to do this in a performance-intensive situation. For our simple example let's say we are going to recreate the wheel and construct our own logging framework.  Perhaps we wish to create a simple method Log which will log the string-ified form of an object and some information about the caller.  We could easily do this as follows: 1: static void Log(object message) 2: { 3: // frame 1, true for source info 4: StackFrame frame = new StackFrame(1, true); 5: var method = frame.GetMethod(); 6: var fileName = frame.GetFileName(); 7: var lineNumber = frame.GetFileLineNumber(); 8: 9: // we'll just use a simple Console write for now 10: Console.WriteLine("{0}({1}):{2} - {3}", 11: fileName, lineNumber, method.Name, message); 12: } So, what we are doing here is grabbing the 2nd stack frame (the 1st is our current method) using a 2nd argument of true to specify we want source information (if available) and then taking the information from the frame.  This works fine, and if we tested it out by calling from a file such as this: 1: // File c:\projects\test\CallerInfo\CallerInfo.cs 2:  3: public class CallerInfo 4: { 5: Log("Hello Logger!"); 6: } We'd see this: 1: c:\projects\test\CallerInfo\CallerInfo.cs(5):Main - Hello Logger! This works well, and in fact CallStack and StackFrame are still the best ways to examine deeper into the call stack.  But if you only want to get information on the caller of your method, there is another option… Determining the caller at compile-time In C# 5 (.NET 4.5) they added some attributes that can be supplied to optional parameters on a method to receive caller information.  These attributes can only be applied to methods with optional parameters with explicit defaults.  Then, as the compiler determines who is calling your method with these attributes, it will fill in the values at compile-time. 
These are the currently supported attributes available in the  System.Runtime.CompilerServices namespace": CallerFilePathAttribute – The path and name of the file that is calling your method. CallerLineNumberAttribute – The line number in the file where your method is being called. CallerMemberName – The member that is calling your method. So let’s take a look at how our Log method would look using these attributes instead: 1: static int Log(object message, 2: [CallerMemberName] string memberName = "", 3: [CallerFilePath] string fileName = "", 4: [CallerLineNumber] int lineNumber = 0) 5: { 6: // we'll just use a simple Console write for now 7: Console.WriteLine("{0}({1}):{2} - {3}", 8: fileName, lineNumber, memberName, message); 9: } Again, calling this from our sample Main would give us the same result: 1: c:\projects\test\CallerInfo\CallerInfo.cs(5):Main - Hello Logger! However, though this seems the same, there are a few key differences. First of all, there are only 3 supported attributes (at this time) that give you the file path, line number, and calling member.  Thus, it does not give you as rich of detail as a StackFrame (which can give you the calling type as well and deeper frames, for example).  Also, these are supported through optional parameters, which means we could call our new Log method like this: 1: // They're defaults, why not fill 'em in 2: Log("My message.", "Some member", "Some file", -13); In addition, since these attributes require optional parameters, they cannot be used in properties, only in methods. These caveats aside, they do let you get similar information inside of methods at a much greater speed!  How much greater?  Well lets crank through 1,000,000 iterations of each.  instead of logging to console, I’ll return the formatted string length of each.  Doing this, we get: 1: Time for 1,000,000 iterations with StackTrace: 5096 ms 2: Time for 1,000,000 iterations with Attributes: 196 ms So you see, using the attributes is much, much faster!  Nearly 25x faster in fact.  Summary There are a few ways to get caller information for a method.  The StackFrame allows you to get a comprehensive set of information spanning the whole call stack, but at a heavier cost.  On the other hand, the attributes allow you to quickly get at caller information baked in at compile-time, but to do so you need to create optional parameters in your methods to support it. Technorati Tags: Little Wonders,CSharp,C#,.NET,StackFrame,CallStack,CallerFilePathAttribute,CallerLineNumberAttribute,CallerMemberName

    Read the article

  • ReSharper C# Live Template for Read-Only Dependency Property and Routed Event Boilerplate

    - by Bart Read
    Following on from my previous post, where I shared a Live Template for quickly declaring a normal read-write dependency property and its associated property change event boilerplate, here's an unsurprisingly similar template for creating a read-only dependency property.        #region $PROPNAME$ Read-Only Property and Property Change Routed Event        private static readonly DependencyPropertyKey $PROPNAME$PropertyKey =                                             DependencyProperty.RegisterReadOnly(             "$PROPNAME$", typeof ( $PROPTYPE$ ), typeof ( $DECLARING_TYPE$ ),             new PropertyMetadata( $DEF_VALUE$ , On$PROPNAME$Changed ) );       public static readonly DependencyProperty $PROPNAME$Property =                                           $PROPNAME$PropertyKey.DependencyProperty;        public $PROPTYPE$ $PROPNAME$         {             get { return ( $PROPTYPE$ ) GetValue( $PROPNAME$Property ); }             private set { SetValue( $PROPNAME$PropertyKey, value ); }         }       public static readonly RoutedEvent $PROPNAME$ChangedEvent   =                                           EventManager.RegisterRoutedEvent(           "$PROPNAME$Changed",           RoutingStrategy.$ROUTINGSTRATEGY$,           typeof( RoutedPropertyChangedEventHandler< $PROPTYPE$ > ),           typeof( $DECLARING_TYPE$ ) );       public event RoutedPropertyChangedEventHandler< $PROPTYPE$ > $PROPNAME$Changed       {           add { AddHandler( $PROPNAME$ChangedEvent, value ); }           remove { RemoveHandler( $PROPNAME$ChangedEvent, value ); }       }        private static void On$PROPNAME$Changed(           DependencyObject d, DependencyPropertyChangedEventArgs e)         {             var $DECLARING_TYPE_var$ = d as $DECLARING_TYPE$;            var args = new RoutedPropertyChangedEventArgs< $PROPTYPE$ >(               ( $PROPTYPE$ ) e.OldValue,               ( $PROPTYPE$ ) e.NewValue );           args.RoutedEvent    = $DECLARING_TYPE$.$PROPNAME$ChangedEvent;           $DECLARING_TYPE_var$.RaiseEvent( args );$END$        }        #endregion The only real difference here is the addition of the DependencyPropertyKey, which allows your implementation to set the value of the dependency property without exposing the setter code to consumers of your type. You'll probably find that you create read-only dependency properties much less often than read-write properties, but this should still save you some typing when you do need to do so. Technorati Tags: resharper,live template,c#,dependency property,read-only,routed events,property change,boilerplate,wpf

    Read the article

  • Airline mess - what a journey

    - by Mike Dietrich
    What a day, what a journey ... Flew this noon from Munich to Zuerich to catch my onward flight to San Francisco with Swiss. And that day did start very well, as Lufthansa messed up the connecting flight by 42 minutes for a 35-minute flight. And as I was obviously the only passenger connecting to San Francisco, nobody picked me up at the airplane to bring me directly to my connection, as Swiss did for the 8 passengers connecting to Miami. So I missed my flight. What a start - and many thanks to Lufthansa. I was not the only one missing a connection, as Lufthansa/Swiss had canceled the flight before due to "technical problems". In Zuerich, Swiss rebooked me via Frankfurt with Lufthansa to board a United Airlines flight to San Francisco. "Ouch," I thought. I had my share of experience with United already, as they messed up my luggage on the way to San Francisco some years ago and it took them five (!!!) days to fly my bag over and deliver it. But actually it was the only option today. So I said "Yes". A big mistake, as I learned later on. The Frankfurt flight was delayed as well "due to a late incoming aircraft". But there was plenty of time. And I went to the Swiss counter at the gate and let them check whether my baggage was on that flight to Frankfurt. They said "Yes". Boarding the plane with a delay of 45 minutes (the typical Lufthansa delay these days), I spotted my Rimowa trolley right next to the plane on the airfield. So I was sure that it would be sent to Frankfurt. In Frankfurt I went to the United counter once it opened - had to go through the passport check they do for US flights as well - and they said "Yes, your luggage is with us". Well ... Arriving in San Francisco with just a few minutes' delay and a very fast immigration procedure, I saw the first bags with Priority tags getting pushed to the baggage claim - but mine was not there. I did wait ... and wait ... and wait. Well, thanks United, you did it again!!! I have flown United Airlines twice in the past years - and in both cases they messed up my luggage on the way to San Francisco. How lovely is that ... Now the real fun started again, as the lady at the "Lost and Found" counter for luggage spotted my luggage in her system in Zuerich - and told me it was supposed to be sent with LH1191 to Frankfurt on Sept 27. But that was yesterday in Europe - it's already Sept 28 - and I saw my luggage in front of the airplane. So I'd suppose it's in Frankfurt already. But what could she do? Nothing but do the awful paperwork. And "No Mr Dietrich, we don't call international numbers". Thank you, United. Next time I'll try to get a contract for a US land line in advance. They can't even tell you which plane will bring your luggage. It may be tomorrow, with a UA flight arriving around 4pm in SFO. I'm looking forward to some hours in the wonderful United Airlines call center waiting line. Last time I spent 60-90 minutes every day until I got my luggage. If it takes that long again, OOW will be over by then. I love airline travel - and especially with United Airlines. And by the way ... they gave us these nice fancy packages during the flight:  That looks good - what's in that box??? Yes, really ... a bag of potato chips. Pure fat - very healthy.  I doubt that I'll ever fly United Airlines again!!!

    Read the article

< Previous Page | 273 274 275 276 277 278 279 280 281 282 283 284  | Next Page >