Search Results

Search found 62532 results on 2502 pages for 'id string'.


  • Learn About Oracle’s Strategy for a Simple, Modern User Experience at OpenWorld 2012

    - by Applications User Experience
    By Kathy Miedema, Oracle Applications User Experience If you’re interested in what the best possible user experience looks like, you’ll want to hear what Oracle’s Applications User Experience team is planning for OpenWorld 2012, Sept. 30-Oct. 4 in San Francisco. This year, we will talk Fusion, Fusion, Fusion. We were among the first to show Oracle Fusion Applications in the last couple of years, and we’ll be showing it again this year so you can see what Oracle is planning for the next generation of enterprise applications. Attend our sessions to learn more about the user experience strategy in which Oracle is investing. Simplicity is the driving force behind the demos that we are unveiling now, which you can see at OpenWorld. We want to create opportunities for productivity and efficiency, and deliver enterprise data across devices to help you do your work in the way best suited to your job and needs, said Jeremy Ashley, Vice President, Oracle Applications User Experience. You can see the new look for Fusion Applications at a general session led by Ashley at 3:30 p.m. on Wednesday, Oct. 3. You’ll also have the chance to learn more about tailoring in Oracle Fusion Applications, and gain a new understanding of the investment in the user experience behind Fusion Applications at our sessions (see session information below). Inside the Oracle Applications User Experience team’s on-site lab at Oracle OpenWorld 2011. Head to the demogrounds to see new demos from the Applications User Experience team, including the new look for Fusion Applications and what we’re building for mobile platforms. Take a spin on our eye tracker, a very cool tool that we use to research the usability of a particular design. Visit the Usable Apps OpenWorld page to find out where our demopods will be located. We are also recruiting participants for our on-site lab, in which we gather feedback on new user experience designs, and taking reservations for a charter bus that will bring you to Oracle headquarters for a lab tour Thursday, Oct. 4, or Friday, Oct. 5. Tours leave at 10 a.m. and 1:45 p.m. from the Moscone Center in San Francisco. You’ll see more of our newest designs at the lab tour, and some of our research tools in action. Can’t participate in a customer feedback session or take a lab tour this time around? Visit Usable Apps to participate or book a tour another time. For more information on any OpenWorld sessions, check the content catalog – also available at www.oracle.com/openworld. For information on Applications User Experience (Apps UX) sessions and activities, go to the Usable Apps OpenWorld page. APPS UX OPENWORLD SESSIONS Oracle’s Roadmap to a Simple, Modern User Experience Presenter: Jeremy Ashley, Vice President Applications User Experience, Oracle; with Debra Lilley, Fujitsu Consulting; Basheer Khan, Innowave; and Edward Roske, InterRelSession ID: CON9467Date: Wednesday, Oct. 3 Time: 3:30 - 4:30 p.m.Location: Moscone West - 3002/3004 Jeremy Ashley Oracle Fusion Applications: Transforming Insight into Action Presenters: Killian Evers and Kristin Desmond, OracleSession ID: CON8718Date: Thursday, Oct. 4Time: 11:15 a.m. - 12:15 p.m.Location: Moscone West - 2008 “FRIENDS OF UX” OPENWORLD SESSIONS Sessions by the Oracle Usability Advisory Board (OUAB) members: Advances in Oracle Enterprise Governance, Risk, and Compliance Manager  Presenters: Koen Delaure, KPMG Advisory NV, and Oracle Usability Advisory Board member; Russell Stohr, Oracle Session ID: CON9389Date: Tuesday, Oct. 
2Time: 1:15 - 2:15 p.m.Location: Palace Hotel - Concert Optimize Oracle E-Busines Suite Procure-to-Pay: Cut Inefficiences/Fraud with Oracle GRC Apps Presenters: Koen Delaure, KPMG Advisory NV, and Solveig Wagner, Seadrill Management AS, both Oracle Usability Advisory Board members; and Swarnali Bag, OracleSession ID: CON9401Date: Monday, Oct. 1Time: 12:15 - 1:15 p.m.Location: Intercontinental - Sutter Showcase of JD Edwards EnterpriseOne Mobility Presenters: Jon Wells, Westmoreland Coal Co., Oracle Usability Advisory Board member; Rob Mills and Liz Davson, Town of Oakville; Keith Sholes and Louise Farner, Oracle Session ID: CON9123Date: Tuesday, Oct. 2Time: 1:15 - 2:15 p.m.Location: InterContinental - Grand Ballroom B Sessions by the Fusion User Experience Adovcates (FXA) Usability and Features of Oracle Fusion Applications, Built upon Oracle Fusion Middleware Presenters: Debra Lilley, Fujitsu Consulting and Oracle Usability Advisory Board member; John King, King Training ResourcesSession ID: UGF10371Date: Sunday, Sept. 30Time: 11 a.m. - 11:45 a.m. Location: Moscone West – 2010 Ten Things to Love About Oracle Fusion Project Portfolio Management  Presenter: Floyd Teter, EiS TechnologiesSession ID: CON6021Date: Tuesday, Oct. 2Time: 10:15 - 11:15 a.m.Location: Moscone West – 2003

    Read the article

  • RIDC Accelerator for Portal

    - by Stefan Krantz
    What is RIDC?
Remote IntraDoc Client (RIDC) is a Java API that uses simple transport protocols such as Socket, HTTP, and JAX-WS to execute content service operations in WebCenter Content Server. By design, each operation in the Content Server executes statelessly and returns a complete result for the request. Each request object simply specifies, in Map format (key and value pairs), which service to call and which parameter settings to apply; the response is built on the same Map format (key and value pairs). The possibilities with RIDC are endless, since you can consume any available service (even custom-made ones), and RIDC can be executed from any Java SE application that needs WebCenter Content services.
WebCenter Portal and the example Accelerator RIDC adapter framework
WebCenter Portal currently integrates and leverages WebCenter Content services to enable the use cases available in the portal today, like Content Presenter and Doc Lib. However, the current use cases cover only a few of the scenarios the Content Server has to offer, and it is not rare for customer requirements to call for additional steps and functionality that WebCenter Content provides but that are not part of the WebCenter Portal use cases. The good news is RIDC; the second piece of good news is that WebCenter Portal already leverages RIDC and has a connection management framework in place. The million-dollar question is how to leverage this infrastructure for custom use cases. Oracle A-Team has, during its customer interactions, produced an accelerator adapter framework that reuses the existing connections provisioned in the WebCenter Portal application (it works for WebCenter Spaces as well), together with a comprehensive design pattern that minimizes the work involved when exposing functionality. Let me introduce the RIDCCommon framework for accelerating WebCenter Content consumption from WebCenter Portal, including Spaces.
How do I get started?
Through a few easy steps you will be on your way. Extract the zip file RIDCCommon.zip into the WebCenter Portal Application file structure (PortalApp). Open your Portal Application in JDeveloper (PS4/PS5) and choose to open the project in your application - this will add the project as a member of the application. Update the Portal project dependencies to include the new RIDCCommon project. Make sure that your WebCenter Content Server connection is marked as primary (a checkbox at the top of the connection properties form). By this stage you should have a similar structure in your JDeveloper Application: Portal Project, PortalWebAssets Project, RIDCCommon. Since the API comes with some example operations that have already been exposed as DataControl actions, if you open the Data Controls accordion you should see the following:
How do I implement my own operation?
Create a new Java Class in for example com.oracle.ateam.portal.ridc.operation call it (GetDocInfoOperation) Extend the abstract class com.oracle.ateam.portal.ridc.operation.RIDCAbstractOperation and implement the interface com.oracle.ateam.portal.ridc.operation.IRIDCOperation The only method you actually are required to implement is execute(RIDCManager, IdcClient, IdcContext) The best practice to set object references for the operation is through the Constructor, example below public GetDocInfoOperation(String dDocName)By leveraging the constructor you can easily force the implementing class to pass right information, you can also overload the Constructor with more or less parameters as required Implement the execute method, the work you supposed to execute here is creating a new request binder and retrieve a response binder with the information in the request binder.In this case the dDocName for which we want the DocInfo Secondly you have to process the response binder by extracting the information you need from the request and restore this information in a simple POJO Java BeanIn the example below we do this in private void processResult(DataBinder responseData) - the new SearchDataObject is a Member of the GetDocInfoOperation so we can return this from a access method. Since the RIDCCommon API leverage template pattern for the operations you are now required to add a method that will enable access to the result after the execution of the operationIn the example below we added the method public SearchDataObject getDataObject() - this method returns the pre processed SearchDataObject from the execute method  This is it, as you can see on the code below you do not need more than 32 lines of very simple code 1: public class GetDocInfoOperation extends RIDCAbstractOperation implements IRIDCOperation { 2: private static final String DOC_INFO_BY_NAME = "DOC_INFO_BY_NAME"; 3: private String dDocName = null; 4: private SearchDataObject sdo = null; 5: 6: public GetDocInfoOperation(String dDocName) { 7: super(); 8: this.dDocName = dDocName; 9: } 10:   11: public boolean execute(RIDCManager manager, IdcClient client, 12: IdcContext userContext) throws Exception { 13: DataBinder dataBinder = createNewRequestBinder(DOC_INFO_BY_NAME); 14: dataBinder.putLocal(DocumentAttributeDef.NAME.getName(), dDocName); 15: 16: DataBinder responseData = getResponseBinder(dataBinder); 17: processResult(responseData); 18: return true; 19: } 20: 21: private void processResult(DataBinder responseData) { 22: DataResultSet rs = responseData.getResultSet("DOC_INFO"); 23: for(DataObject dobj : rs.getRows()) { 24: this.sdo = new SearchDataObject(dobj); 25: } 26: super.setMessage(responseData.getLocal(ATTR_MESSAGE)); 27: } 28: 29: public SearchDataObject getDataObject() { 30: return this.sdo; 31: } 32: } How do I execute my operation? 
In the previous section we described how to create a operation, so by now you should be ready to execute the operation Step one either add a method to the class  com.oracle.ateam.portal.datacontrol.ContentServicesDC or a class of your own choiceRemember the RIDCManager is a very light object and can be created where needed Create a method signature look like this public SearchDataObject getDocInfo(String dDocName) throws Exception In the method body - create a new instance of GetDocInfoOperation and meet the constructor requirements by passing the dDocNameGetDocInfoOperation docInfo = new GetDocInfoOperation(dDocName) Execute the operation via the RIDCManager instance rMgr.executeOperation(docInfo) Return the result by accessing it from the executed operationreturn docInfo.getDataObject() 1: private RIDCManager rMgr = null; 2: private String lastOperationMessage = null; 3:   4: public ContentServicesDC() { 5: super(); 6: this.rMgr = new RIDCManager(); 7: } 8: .... 9: public SearchDataObject getDocInfo(String dDocName) throws Exception { 10: GetDocInfoOperation docInfo = new GetDocInfoOperation(dDocName); 11: boolean boolVal = rMgr.executeOperation(docInfo); 12: lastOperationMessage = docInfo.getMessage(); 13: return docInfo.getDataObject(); 14: }   Get the binaries! The enclosed code in a example that can be used as a reference on how to consume and leverage similar use cases, user has to guarantee appropriate quality and support.  Download link: https://blogs.oracle.com/ATEAM_WEBCENTER/resource/stefan.krantz/RIDCCommon.zip RIDC API Referencehttp://docs.oracle.com/cd/E23943_01/apirefs.1111/e17274/toc.htm

    Read the article

  • Should this immutable struct be a mutable class?

    - by ChaosPandion
    I showed this struct to a fellow programmer and they felt that it should be a mutable class. They felt it is inconvenient not to have null references and the ability to alter the object as required. I would really like to know if there are any other reasons to make this a mutable class. [Serializable] public struct PhoneNumber : ICloneable, IEquatable<PhoneNumber> { private const int AreaCodeShift = 54; private const int CentralOfficeCodeShift = 44; private const int SubscriberNumberShift = 30; private const int CentralOfficeCodeMask = 0x000003FF; private const int SubscriberNumberMask = 0x00003FFF; private const int ExtensionMask = 0x3FFFFFFF; private readonly ulong value; public int AreaCode { get { return UnmaskAreaCode(value); } } public int CentralOfficeCode { get { return UnmaskCentralOfficeCode(value); } } public int SubscriberNumber { get { return UnmaskSubscriberNumber(value); } } public int Extension { get { return UnmaskExtension(value); } } public PhoneNumber(ulong value) : this(UnmaskAreaCode(value), UnmaskCentralOfficeCode(value), UnmaskSubscriberNumber(value), UnmaskExtension(value), true) { } public PhoneNumber(int areaCode, int centralOfficeCode, int subscriberNumber) : this(areaCode, centralOfficeCode, subscriberNumber, 0, true) { } public PhoneNumber(int areaCode, int centralOfficeCode, int subscriberNumber, int extension) : this(areaCode, centralOfficeCode, subscriberNumber, extension, true) { } private PhoneNumber(int areaCode, int centralOfficeCode, int subscriberNumber, int extension, bool throwException) { value = 0; if (areaCode < 200 || areaCode > 989) { if (!throwException) return; throw new ArgumentOutOfRangeException("areaCode", areaCode, @"The area code portion must fall between 200 and 989."); } else if (centralOfficeCode < 200 || centralOfficeCode > 999) { if (!throwException) return; throw new ArgumentOutOfRangeException("centralOfficeCode", centralOfficeCode, @"The central office code portion must fall between 200 and 999."); } else if (subscriberNumber < 0 || subscriberNumber > 9999) { if (!throwException) return; throw new ArgumentOutOfRangeException("subscriberNumber", subscriberNumber, @"The subscriber number portion must fall between 0 and 9999."); } else if (extension < 0 || extension > 1073741824) { if (!throwException) return; throw new ArgumentOutOfRangeException("extension", extension, @"The extension portion must fall between 0 and 1073741824."); } else if (areaCode.ToString()[1] - 48 > 8) { if (!throwException) return; throw new ArgumentOutOfRangeException("areaCode", areaCode, @"The second digit of the area code cannot be greater than 8."); } else { value |= ((ulong)(uint)areaCode << AreaCodeShift); value |= ((ulong)(uint)centralOfficeCode << CentralOfficeCodeShift); value |= ((ulong)(uint)subscriberNumber << SubscriberNumberShift); value |= ((ulong)(uint)extension); } } public object Clone() { return this; } public override bool Equals(object obj) { return obj != null && obj.GetType() == typeof(PhoneNumber) && Equals((PhoneNumber)obj); } public bool Equals(PhoneNumber other) { return this.value == other.value; } public override int GetHashCode() { return value.GetHashCode(); } public override string ToString() { return ToString(PhoneNumberFormat.Separated); } public string ToString(PhoneNumberFormat format) { switch (format) { case PhoneNumberFormat.Plain: return string.Format(@"{0:D3}{1:D3}{2:D4} {3:#}", AreaCode, CentralOfficeCode, SubscriberNumber, Extension).Trim(); case PhoneNumberFormat.Separated: return 
string.Format(@"{0:D3}-{1:D3}-{2:D4} {3:#}", AreaCode, CentralOfficeCode, SubscriberNumber, Extension).Trim(); default: throw new ArgumentOutOfRangeException("format"); } } public ulong ToUInt64() { return value; } public static PhoneNumber Parse(string value) { var result = default(PhoneNumber); if (!TryParse(value, out result)) { throw new FormatException(string.Format(@"The string ""{0}"" could not be parsed as a phone number.", value)); } return result; } public static bool TryParse(string value, out PhoneNumber result) { result = default(PhoneNumber); if (string.IsNullOrEmpty(value)) { return false; } var index = 0; var numericPieces = new char[value.Length]; foreach (var c in value) { if (char.IsNumber(c)) { numericPieces[index++] = c; } } if (index < 9) { return false; } var numericString = new string(numericPieces); var areaCode = int.Parse(numericString.Substring(0, 3)); var centralOfficeCode = int.Parse(numericString.Substring(3, 3)); var subscriberNumber = int.Parse(numericString.Substring(6, 4)); var extension = 0; if (numericString.Length > 10) { extension = int.Parse(numericString.Substring(10)); } result = new PhoneNumber( areaCode, centralOfficeCode, subscriberNumber, extension, false ); return result.value == 0; } public static bool operator ==(PhoneNumber left, PhoneNumber right) { return left.Equals(right); } public static bool operator !=(PhoneNumber left, PhoneNumber right) { return !left.Equals(right); } private static int UnmaskAreaCode(ulong value) { return (int)(value >> AreaCodeShift); } private static int UnmaskCentralOfficeCode(ulong value) { return (int)((value >> CentralOfficeCodeShift) & CentralOfficeCodeMask); } private static int UnmaskSubscriberNumber(ulong value) { return (int)((value >> SubscriberNumberShift) & SubscriberNumberMask); } private static int UnmaskExtension(ulong value) { return (int)(value & ExtensionMask); } } public enum PhoneNumberFormat { Plain, Separated }

    Read the article

  • Error about 'invalid JSON' with couchDB view but the json's fine...

    - by Chris Huang-Leaver
    I am trying to setup the following view on CouchDB { "_id":"_design/id", "_rev":"1-9be2e55e05ac368da3047841f301203d", "language":"javascript", "views":{ "by_id":{ "map" : "function(doc) { emit(doc.id, doc)}" },"from_user_id":{ "map" : "function(doc) { if (doc.from_user_id) {emit(doc.from_user_id, doc)}}"}, "from_user":{ "map" : "function(doc) { if (doc.from_user) {emit(doc.from_user, doc)}}"}, "to_user_id":{ "map" : "function(doc) {if (doc.to_user_id){ emit(doc.to_user_id, doc)}}"}, "to_user":{ "map" : "function(doc) {if (doc.to_user){ emit(doc.to_user, doc)}}" }, "max_id":{ "map" : "function(doc) { if (doc.id) {emit(doc._id, eval(doc.id))}}", "reduce" :"function(key,value) { a = value[0]; for (i=1; i <value.length; ++i){a = Math.max(a,value[i])} return a}" } } } when I try to 'PUT' this using curl: curl -X PUT -d keys.json $CDB/_design/id {"error":"bad_request","reason":"invalid UTF-8 JSON"} I know it's not invalid JSON, because I tested it using the 'json' library built into Python 2.6, it loads fine. JS screw ups give me the error 'must evaluate to a function' What else might be wrong with it?
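One detail worth checking before blaming the JSON itself: curl only reads the request body from a file when the -d argument carries an @ prefix. Without it, the literal string keys.json is sent as the body, and CouchDB reports exactly this "invalid UTF-8 JSON" error. A corrected call would look like the sketch below.

# The @ prefix makes curl send the file's contents instead of the literal text "keys.json"
curl -X PUT -d @keys.json "$CDB/_design/id"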

    Read the article

  • The name 'GridView1' does not exist in the current context

    - by sameer
    hi all, I have two files named as TimeSheet.aspx.cs and TimSheet.aspx ,code of the file are given below for your reference. when i build the application im getting error "The name 'GridView1' does not exist in the current context" even thought i have a control with the id GridView1 and i have added the runat="server" as well. Im not able to figure out what is causing this issue.Can any one figure whats happen here. Thanks & Regards, ======================================= TimeSheet.aspx.cs ======================================= #region Using directives using System; using System.Data; using System.Configuration; using System.Web; using System.Web.Security; using System.Web.UI; using System.Web.UI.WebControls; using System.Web.UI.WebControls.WebParts; using System.Web.UI.HtmlControls; using TSMS.Web.UI; #endregion public partial class TimeSheets: Page { protected void Page_Load(object sender, EventArgs e) { FormUtil.RedirectAfterUpdate(GridView1, "TimeSheets.aspx?page={0}"); FormUtil.SetPageIndex(GridView1, "page"); FormUtil.SetDefaultButton((Button)GridViewSearchPanel1.FindControl("cmdSearch")); } protected void GridView1_SelectedIndexChanged(object sender, EventArgs e) { string urlParams = string.Format("TimeSheetId={0}", GridView1.SelectedDataKey.Values[0]); Response.Redirect("TimeSheetsEdit.aspx?" + urlParams, true); } protected void GridView1_RowCommand(object sender, GridViewCommandEventArgs e) { } } ======================================================= TimeSheet.aspx ======================================================= <%@ Page Language="C#" Theme="Default" MasterPageFile="~/MasterPages/admin.master" AutoEventWireup="true" CodeFile="TimeSheets.aspx.cs" Inherits="TimeSheets" Title="TimeSheets List" %> <asp:Content ID="Content2" ContentPlaceHolderID="ContentPlaceHolder2" Runat="Server">Time Sheets List</asp:Content> <asp:Content ID="Content1" ContentPlaceHolderID="ContentPlaceHolder1" Runat="Server"> <data:GridViewSearchPanel ID="GridViewSearchPanel1" runat="server" GridViewControlID="GridView1" PersistenceMethod="Session" /> <br /> <data:EntityGridView ID="GridView1" runat="server" AutoGenerateColumns="False" OnSelectedIndexChanged="GridView1_SelectedIndexChanged" DataSourceID="TimeSheetsDataSource" DataKeyNames="TimeSheetId" AllowMultiColumnSorting="false" DefaultSortColumnName="" DefaultSortDirection="Ascending" ExcelExportFileName="Export_TimeSheets.xls" onrowcommand="GridView1_RowCommand" > <Columns> <asp:CommandField ShowSelectButton="True" ShowEditButton="True" /> <asp:BoundField DataField="TimeSheetId" HeaderText="Time Sheet Id" SortExpression="[TimeSheetID]" ReadOnly="True" /> <asp:BoundField DataField="TimeSheetTitle" HeaderText="Time Sheet Title" SortExpression="[TimeSheetTitle]" /> <asp:BoundField DataField="StartDate" DataFormatString="{0:d}" HtmlEncode="False" HeaderText="Start Date" SortExpression="[StartDate]" /> <asp:BoundField DataField="EndDate" DataFormatString="{0:d}" HtmlEncode="False" HeaderText="End Date" SortExpression="[EndDate]" /> <asp:BoundField DataField="DateOfCreation" DataFormatString="{0:d}" HtmlEncode="False" HeaderText="Date Of Creation" SortExpression="[DateOfCreation]" /> <data:BoundRadioButtonField DataField="Locked" HeaderText="Locked" SortExpression="[Locked]" /> <asp:BoundField DataField="ReviewedBy" HeaderText="Reviewed By" SortExpression="[ReviewedBy]" /> <data:HyperLinkField HeaderText="Employee Id" DataNavigateUrlFormatString="EmployeesEdit.aspx?EmployeeId={0}" DataNavigateUrlFields="EmployeeId" DataContainer="EmployeeIdSource" 
DataTextField="LastName" /> </Columns> <EmptyDataTemplate> <b>No TimeSheets Found!</b> </EmptyDataTemplate> </data:EntityGridView> <asp:GridView ID="GridView2" runat="server"> </asp:GridView> <br /> <asp:Button runat="server" ID="btnTimeSheets" OnClientClick="javascript:location.href='TimeSheetsEdit.aspx'; return false;" Text="Add New"></asp:Button> <data:TimeSheetsDataSource ID="TimeSheetsDataSource" runat="server" SelectMethod="GetPaged" EnablePaging="True" EnableSorting="True" EnableDeepLoad="True" > <DeepLoadProperties Method="IncludeChildren" Recursive="False"> <Types> <data:TimeSheetsProperty Name="Employees"/> <%--<data:TimeSheetsProperty Name="TimeSheetDetailsCollection" />--%> </Types> </DeepLoadProperties> <Parameters> <data:CustomParameter Name="WhereClause" Value="" ConvertEmptyStringToNull="false" /> <data:CustomParameter Name="OrderByClause" Value="" ConvertEmptyStringToNull="false" /> <asp:ControlParameter Name="PageIndex" ControlID="GridView1" PropertyName="PageIndex" Type="Int32" /> <asp:ControlParameter Name="PageSize" ControlID="GridView1" PropertyName="PageSize" Type="Int32" /> <data:CustomParameter Name="RecordCount" Value="0" Type="Int32" /> </Parameters> </data:TimeSheetsDataSource> </asp:Content>

    Read the article

  • CDN on Hosted Service in Windows Azure

    - by Shaun
    Yesterday I told Wang Tao, an annoying colleague sitting beside me, about how to make the static content enable the CDN in his website which had just been published on Windows Azure. The approach would be Move the static content, the images, CSS files, etc. into the blob storage. Enable the CDN on his storage account. Change the URL of those static files to the CDN URL. I think these are the very common steps when using CDN. But this morning I found that the new Windows Azure SDK 1.4 and new Windows Azure Developer Portal had just been published announced at the Windows Azure Blog. One of the new features in this release is about the CDN, which means we can enabled the CDN not only for a storage account, but a hosted service as well. Within this new feature the steps I mentioned above would be turned simpler a lot.   Enable CDN for Hosted Service To enable the CDN for a hosted service we just need to log on the Windows Azure Developer Portal. Under the “Hosted Services, Storage Accounts & CDN” item we will find a new menu on the left hand side said “CDN”, where we can manage the CDN for storage account and hosted service. As we can see the hosted services and storage accounts are all listed in my subscriptions. To enable a CDN for a hosted service is veru simple, just select a hosted service and click the New Endpoint button on top. In this dialog we can select the subscription and the storage account, or the hosted service we want the CDN to be enabled. If we selected the hosted service, like I did in the image above, the “Source URL for the CDN endpoint” will be shown automatically. This means the windows azure platform will make all contents under the “/cdn” folder as CDN enabled. But we cannot change the value at the moment. The following 3 checkboxes next to the URL are: Enable CDN: Enable or disable the CDN. HTTPS: If we need to use HTTPS connections check it. Query String: If we are caching content from a hosted service and we are using query strings to specify the content to be retrieved, check it. Just click the “Create” button to let the windows azure create the CDN for our hosted service. The CDN would be available within 60 minutes as Microsoft mentioned. My experience is that about 15 minutes the CDN could be used and we can find the CDN URL in the portal as well.   Put the Content in CDN in Hosted Service Let’s create a simple windows azure project in Visual Studio with a MVC 2 Web Role. When we created the CDN mentioned above the source URL of CDN endpoint would be under the “/cdn” folder. So in the Visual Studio we create a folder under the website named “cdn” and put some static files there. Then all these files would be cached by CDN if we use the CDN endpoint. The CDN of the hosted service can cache some kind of “dynamic” result with the Query String feature enabled. We create a controller named CdnController and a GetNumber action in it. The routed URL of this controller would be /Cdn/GetNumber which can be CDN-ed as well since the URL said it’s under the “/cdn” folder. In the GetNumber action we just put a number value which specified by parameter into the view model, then the URL could be like /Cdn/GetNumber?number=2. 
1: using System; 2: using System.Collections.Generic; 3: using System.Linq; 4: using System.Web; 5: using System.Web.Mvc; 6:  7: namespace MvcWebRole1.Controllers 8: { 9: public class CdnController : Controller 10: { 11: // 12: // GET: /Cdn/ 13:  14: public ActionResult GetNumber(int number) 15: { 16: return View(number); 17: } 18:  19: } 20: } And we add a view to display the number which is super simple. 1: <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<int>" %> 2:  3: <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server"> 4: GetNumber 5: </asp:Content> 6:  7: <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server"> 8:  9: <h2>The number is: <% 1: : Model.ToString() %></h2> 10:  11: </asp:Content> Since this action is under the CdnController the URL would be under the “/cdn” folder which means it can be CDN-ed. And since we checked the “Query String” the content of this dynamic page will be cached by its query string. So if I use the CDN URL, http://az25311.vo.msecnd.net/GetNumber?number=2, the CDN will firstly check if there’s any content cached with the key “GetNumber?number=2”. If yes then the CDN will return the content directly; otherwise it will connect to the hosted service, http://aurora-sys.cloudapp.net/Cdn/GetNumber?number=2, and then send the result back to the browser and cached in CDN. But to be notice that the query string are treated as string when used by the key of CDN element. This means the URLs below would be cached in 2 elements in CDN: http://az25311.vo.msecnd.net/GetNumber?number=2&page=1 http://az25311.vo.msecnd.net/GetNumber?page=1&number=2 The final step is to upload the project onto azure. Test the Hosted Service CDN After published the project on azure, we can use the CDN in the website. The CDN endpoint we had created is az25311.vo.msecnd.net so all files under the “/cdn” folder can be requested with it. Let’s have a try on the sample.htm and c_great_wall.jpg static files. Also we can request the dynamic page GetNumber with the query string with the CDN endpoint. And if we refresh this page it will be shown very quickly since the content comes from the CDN without MCV server side process. We style of this page was missing. This is because the CSS file was not includes in the “/cdn” folder so the page cannot retrieve the CSS file from the CDN URL.   Summary In this post I introduced the new feature in Windows Azure CDN with the release of Windows Azure SDK 1.4 and new Developer Portal. With the CDN of the Hosted Service we can just put the static resources under a “/cdn” folder so that the CDN can cache them automatically and no need to put then into the blob storage. Also it support caching the dynamic content with the Query String feature. So that we can cache some parts of the web page by using the UserController and CDN. For example we can cache the log on user control in the master page so that the log on part will be loaded super-fast. There are some other new features within this release you can find here. And for more detailed information about the Windows Azure CDN please have a look here as well.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
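As a follow-up to the styling issue noted above (the stylesheet was not under the /cdn folder), a small helper can route static references through the CDN endpoint while still hitting the hosted service directly during local debugging. This is a sketch, not part of the original post; the DEBUG switch and class name are assumptions.

// Sketch: build URLs against the CDN endpoint created in the article (az25311.vo.msecnd.net).
public static class CdnHelper
{
    private const string Endpoint = "http://az25311.vo.msecnd.net";

    public static string Url(string relativePath)
    {
#if DEBUG
        return "/cdn/" + relativePath;          // serve from the hosted service while debugging
#else
        return Endpoint + "/" + relativePath;   // serve from the CDN in production
#endif
    }
}

A view could then reference CdnHelper.Url("Content/Site.css") so the stylesheet is cached by the CDN as well.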

    Read the article

  • What's the best name for a non-mutating "add" method on an immutable collection?

    - by Jon Skeet
    Sorry for the waffly title - if I could come up with a concise title, I wouldn't have to ask the question. Suppose I have an immutable list type. It has an operation Foo(x) which returns a new immutable list with the specified argument as an extra element at the end. So to build up a list of strings with values "Hello", "immutable", "world" you could write: var empty = new ImmutableList<string>(); var list1 = empty.Foo("Hello"); var list2 = list1.Foo("immutable"); var list3 = list2.Foo("word"); (This is C# code, and I'm most interested in a C# suggestion if you feel the language is important. It's not fundamentally a language question, but the idioms of the language may be important.) The important thing is that the existing lists are not altered by Foo - so empty.Count would still return 0. Another (more idiomatic) way of getting to the end result would be: var list = new ImmutableList<string>().Foo("Hello"); .Foo("immutable"); .Foo("word"); My question is: what's the best name for Foo? EDIT 3: As I reveal later on, the name of the type might not actually be ImmutableList<T>, which makes the position clear. Imagine instead that it's TestSuite and that it's immutable because the whole of the framework it's a part of is immutable... (End of edit 3) Options I've come up with so far: Add: common in .NET, but implies mutation of the original list Cons: I believe this is the normal name in functional languages, but meaningless to those without experience in such languages Plus: my favourite so far, it doesn't imply mutation to me. Apparently this is also used in Haskell but with slightly different expectations (a Haskell programmer might expect it to add two lists together rather than adding a single value to the other list). With: consistent with some other immutable conventions, but doesn't have quite the same "additionness" to it IMO. And: not very descriptive. Operator overload for + : I really don't like this much; I generally think operators should only be applied to lower level types. I'm willing to be persuaded though! The criteria I'm using for choosing are: Gives the correct impression of the result of the method call (i.e. that it's the original list with an extra element) Makes it as clear as possible that it doesn't mutate the existing list Sounds reasonable when chained together as in the second example above Please ask for more details if I'm not making myself clear enough... EDIT 1: Here's my reasoning for preferring Plus to Add. Consider these two lines of code: list.Add(foo); list.Plus(foo); In my view (and this is a personal thing) the latter is clearly buggy - it's like writing "x + 5;" as a statement on its own. The first line looks like it's okay, until you remember that it's immutable. In fact, the way that the plus operator on its own doesn't mutate its operands is another reason why Plus is my favourite. Without the slight ickiness of operator overloading, it still gives the same connotations, which include (for me) not mutating the operands (or method target in this case). EDIT 2: Reasons for not liking Add. Various answers are effectively: "Go with Add. That's what DateTime does, and String has Replace methods etc which don't make the immutability obvious." I agree - there's precedence here. However, I've seen plenty of people call DateTime.Add or String.Replace and expect mutation. 
There are loads of newsgroup questions (and probably SO ones if I dig around) which are answered by "You're ignoring the return value of String.Replace; strings are immutable, a new string gets returned." Now, I should reveal a subtlety to the question - the type might not actually be an immutable list, but a different immutable type. In particular, I'm working on a benchmarking framework where you add tests to a suite, and that creates a new suite. It might be obvious that: var list = new ImmutableList<string>(); list.Add("foo"); isn't going to accomplish anything, but it becomes a lot murkier when you change it to: var suite = new TestSuite<string, int>(); suite.Add(x => x.Length); That looks like it should be okay. Whereas this, to me, makes the mistake clearer: var suite = new TestSuite<string, int>(); suite.Plus(x => x.Length); That's just begging to be: var suite = new TestSuite<string, int>().Plus(x => x.Length); Ideally, I would like my users not to have to be told that the test suite is immutable. I want them to fall into the pit of success. This may not be possible, but I'd like to try. I apologise for over-simplifying the original question by talking only about an immutable list type. Not all collections are quite as self-descriptive as ImmutableList<T> :)
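A minimal sketch (a hypothetical implementation, not Jon's actual type) may make the semantics concrete: whatever the method ends up being called, it copies and returns a new list instead of mutating the receiver.

using System;

// Hypothetical implementation of the operation under discussion; "Plus" is used here
// purely as a placeholder name. The key point is that 'this' is never modified.
public sealed class ImmutableList<T>
{
    private readonly T[] items;

    public ImmutableList() : this(new T[0]) { }
    private ImmutableList(T[] items) { this.items = items; }

    public int Count { get { return items.Length; } }

    public ImmutableList<T> Plus(T item)
    {
        T[] copy = new T[items.Length + 1];
        Array.Copy(items, copy, items.Length);
        copy[items.Length] = item;
        return new ImmutableList<T>(copy);   // a new list; the original keeps its old Count
    }
}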

    Read the article

  • Trabajando el redireccionamiento de usuarios/Working with user redirect methods

    - by Jason Ulloa
    Protecting an application is something you cannot leave out when building a system. Every piece of code that protects our application must be carefully chosen and written. One of the common needs we run into in ASP.NET when working with users is being able to redirect them to different pages depending on their role. That is exactly what we will do here: we will work with our application's Web.config and add a few small lines of code to make the system a bit more secure and, above all, to handle the redirect.
So let's see how to achieve it. As we know, the web.config lets us manage many elements within ASP.NET, many of them related to security, and it also lets us customize elements to fit our needs. Building on the fact that the web.config can be customized, we will create a custom section and use it to drive the redirect.
Our first step is to go to our web.config and find the following lines:
<configuration>
    <configSections>
        </sectionGroup>
            </sectionGroup>
        </sectionGroup>
Right after them we define a new section:
<section name="loginRedirectByRole" type="crabit.LoginRedirectByRoleSection" allowLocation="true" allowDefinition="Everywhere" />
The section name attribute is the name of our new section. The type attribute is the name of the class (which we will write shortly) that is in charge of the redirect. Since we are working inside the configuration section, once our custom section is defined we must close the section with </configSections>, so our web.config should look like this:
<configuration>
    <configSections>
        </sectionGroup>
            </sectionGroup>
        </sectionGroup>
<section name="loginRedirectByRole" type="crabit.LoginRedirectByRoleSection" allowLocation="true" allowDefinition="Everywhere" />
</configSections>
The section we defined above would be completely useless without the element that gives it life. In our case that is the loginRedirectByRole element, which we define after the last </configSections> we closed:
<loginRedirectByRole>
    <roleRedirects>
        <add role="Administrador" url="~/Admin/Default.aspx" />
        <add role="User" url="~/User/Default.aspx" />
    </roleRedirects>
</loginRedirectByRole>
As you can see, inside our loginRedirectByRole element we have the add role element. This element is what later tells the application where the user should go after a successful login. Looking at this configuration: add role="Administrador" is the name of a role we have defined, and there can be as many add role elements as there are roles defined in our application. The url attribute indicates the path or page the user is sent to once logged in to the application; note that we use the ~ to indicate a relative path.
With that, the web.config configuration is done; now let's look in depth at the code that reads these elements and uses them. For our example we create a new class named LoginRedirectByRoleSection; remember, this is the class we referenced in the type attribute defined in our web.config section. Once the class is created we define a few properties, but before that we make the class inherit from ConfigurationSection so it can read the elements from the web.config:
Inherits ConfigurationSection
Now our first property:
<ConfigurationProperty("roleRedirects")> _
Public Property RoleRedirects() As RoleRedirectCollection
    Get
        Return DirectCast(Me("roleRedirects"), RoleRedirectCollection)
    End Get
    Set(ByVal value As RoleRedirectCollection)
        Me("roleRedirects") = value
    End Set
End Property
End Class
This property is in charge of retrieving all the roles we defined in our custom web.config element. Our second step is to create a second class (inside the same LoginRedirectByRoleSection class); we will call it RoleRedirectCollection, inherit it from ConfigurationElementCollection, and define the following:
Public Class RoleRedirectCollection
    Inherits ConfigurationElementCollection
    Default Public ReadOnly Property Item(ByVal index As Integer) As RoleRedirect
        Get
            Return DirectCast(BaseGet(index), RoleRedirect)
        End Get
    End Property
    Default Public ReadOnly Property Item(ByVal key As Object) As RoleRedirect
        Get
            Return DirectCast(BaseGet(key), RoleRedirect)
        End Get
    End Property
    Protected Overrides Function CreateNewElement() As ConfigurationElement
        Return New RoleRedirect()
    End Function
    Protected Overrides Function GetElementKey(ByVal element As ConfigurationElement) As Object
        Return DirectCast(element, RoleRedirect).Role
    End Function
End Class
Next we create another class, this time called RoleRedirect, and inherit it from ConfigurationElement. Our new class should look like this:
Public Class RoleRedirect
    Inherits ConfigurationElement
    <ConfigurationProperty("role", IsRequired:=True)> _
    Public Property Role() As String
        Get
            Return DirectCast(Me("role"), String)
        End Get
        Set(ByVal value As String)
            Me("role") = value
        End Set
    End Property
    <ConfigurationProperty("url", IsRequired:=True)> _
    Public Property Url() As String
        Get
            Return DirectCast(Me("url"), String)
        End Get
        Set(ByVal value As String)
            Me("url") = value
        End Set
    End Property
End Class
Once our parent class is ready, all that is left is a bit of code in our system's login page (assuming, of course, that you are using the login controls that ASP.NET provides by default).
Here we define our last two methods:
Protected Sub ctllogin_LoggedIn(ByVal sender As Object, ByVal e As System.EventArgs) Handles ctllogin.LoggedIn
    RedirectLogin(ctllogin.UserName)
End Sub
The LoggedIn event is part of the ASP.NET Login control and fires the moment the user logs in to our application successfully. It triggers the following procedure to perform the redirect:
Private Sub RedirectLogin(ByVal username As String)
    Dim roleRedirectSection As crabit.LoginRedirectByRoleSection = DirectCast(ConfigurationManager.GetSection("loginRedirectByRole"), crabit.LoginRedirectByRoleSection)
    For Each roleRedirect As crabit.RoleRedirect In roleRedirectSection.RoleRedirects
        If Roles.IsUserInRole(username, roleRedirect.Role) Then
            Response.Redirect(roleRedirect.Url)
        End If
    Next
End Sub
With this, our application should be able to redirect without problems and handle the roles. Also remember that our example relies on the default membership database schema that ASP.NET provides.
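For readers working in C#, the redirect routine translates almost line for line; the following is a sketch (not part of the original post) that assumes the same crabit configuration classes defined above, plus System.Configuration and System.Web.Security.

// Rough C# equivalent of RedirectLogin, kept as a sketch.
private void RedirectLogin(string username)
{
    var section = (crabit.LoginRedirectByRoleSection)
        ConfigurationManager.GetSection("loginRedirectByRole");

    foreach (crabit.RoleRedirect roleRedirect in section.RoleRedirects)
    {
        if (Roles.IsUserInRole(username, roleRedirect.Role))
        {
            Response.Redirect(roleRedirect.Url);
        }
    }
}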

    Read the article

  • Asp.Net MVC Create/Update/Delete with composite key

    - by nubm
    Hi, I'm not sure how to use a composite key. My Categories table has CategoryId (PK, FK), LanguageId (PK, FK), and CategoryName:
CategoryId | LanguageId | CategoryName
1 | 1 | Car
1 | 2 | Auto
1 | 3 | Automobile
etc. I'm following this design. The default action looks like // // GET: /Category/Edit/5 public ActionResult Edit(int id) { return View(); } and the ActionLink is <%= Html.ActionLink("Edit", "Edit", new { id= item.CategoryId }) %> Should I use something like <%= Html.ActionLink("Edit", "Edit", new { id= (item.CategoryId + "-" + item.LanguageId) }) %> so the URL is /Category/Edit/5-5 and // // GET: /Category/Edit/5-5 public ActionResult Edit(string id) { // parse id return View(); } or change the route to something like /Category/Edit/5/5? Or is there some better way?
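One option the question already hints at, sketched below (route name and parameter names are illustrative, not from the original post), is to give the route two placeholders so the URL becomes /Category/Edit/5/2 and no string parsing is needed.

// In Global.asax.cs, inside RegisterRoutes -- a sketch, not the poster's routing table.
routes.MapRoute(
    "CategoryEdit",
    "Category/Edit/{categoryId}/{languageId}",
    new { controller = "Category", action = "Edit" });

// Matching action: both parts of the composite key arrive as separate parameters.
public ActionResult Edit(int categoryId, int languageId)
{
    // load the entity by its composite key here (repository call omitted)
    return View();
}

The corresponding link would then pass both values, e.g. Html.ActionLink("Edit", "Edit", new { categoryId = item.CategoryId, languageId = item.LanguageId }).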

    Read the article

  • How to optimize simple linked server select query?

    - by tomaszs
    Hello, I have a table called Table with columns ID (int, primary key, clustered, unique index) and TEXT (varchar 15) on an MSSQL linked server called LS. The linked server is on the same server computer. When I call: SELECT ID, TEXT FROM OPENQUERY(LS, 'SELECT ID, TEXT FROM Table') it takes 400 ms. When I call: SELECT ID, TEXT FROM LS.dbo.Table it takes 200 ms. And when I run the query directly on the LS server: SELECT ID, TEXT FROM dbo.Table it takes 100 ms. In many places I've read that OPENQUERY is faster, but in this simple case it does not seem to work. What can I do to make this query faster when I call it from another server, rather than on LS directly?

    Read the article

  • Inheritance in C# question - is overriding internal methods possible?

    - by Jeff Dahmer
    Is it possible to override an internal method's behavior? using System; class TestClass { public string Name { get { return this.ProtectedMethod(); } } protected string ProtectedMethod() { return InternalMethod(); } string InternalMethod() { return "TestClass::InternalMethod()"; } } class OverrideClassProgram : TestClass { // try to override the internal method using new? (compiler warning) new string InternalMethod() { return "OverrideClassProgram::InternalMethod()"; } static int Main(string[] args) { // TestClass::InternalMethod() Console.WriteLine(new TestClass().Name); // TestClass::InternalMethod() ?? are we just screwed? Console.WriteLine(new OverrideClassProgram().Name); return (int)Console.ReadKey().Key; } }
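For comparison, a sketch (not the original poster's exact code): the new modifier only hides the method, so the base property keeps calling its own copy. Making the member virtual, and at least protected so the derived class can see it, is what allows a real override.

using System;

// Sketch: virtual + override replaces the behavior; "new" in the original only hides it.
class TestClass
{
    public string Name { get { return InternalMethod(); } }

    protected virtual string InternalMethod()
    {
        return "TestClass::InternalMethod()";
    }
}

class OverrideClass : TestClass
{
    protected override string InternalMethod()
    {
        return "OverrideClass::InternalMethod()";
    }
}

class Demo
{
    static void Main()
    {
        Console.WriteLine(new TestClass().Name);      // TestClass::InternalMethod()
        Console.WriteLine(new OverrideClass().Name);  // OverrideClass::InternalMethod()
    }
}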

    Read the article

  • Control JSON Serialization format of a custom type in .NET

    - by mrjoltcola
    I have a PhoneNumber class that stores a normalized string, and I've defined implicit operators for string <- Phone to simplify treatment of the PhoneNumber as a string. I've also overridden the ToString() method to always return the cleaned version of the number (no hyphens or parentheses or spaces). In any MVC.NET code where I explicitly display the number, I can explicitly call phone.Format(). The problem here is serializing an entity that has a PhoneNumber to JSON; JavaScriptSerializer serializes it as [object Object]. I want to serialize it as a string in (555)555-5555 format. I've looked at writing a custom JavaScriptConverter, but JavaScriptConverter.Serialize() method returns a dictionary of name-value pairs. I don't want PhoneNumber to be treated as an object with fields, I want to simply serialize it as a string.
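One pragmatic workaround, sketched below (an assumption, not necessarily the approach the poster settled on): shape the object before serialization so the phone number goes out as an already-formatted string, which sidesteps JavaScriptConverter's dictionary-only contract. The contact variable and the Format() call are illustrative names taken from the post's description.

// Sketch: project to an anonymous type before serializing, so PhoneNumber is emitted
// as a formatted string such as "(555)555-5555".
var serializer = new System.Web.Script.Serialization.JavaScriptSerializer();
string json = serializer.Serialize(new
{
    contact.Name,
    Phone = contact.Phone.Format()
});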

    Read the article

  • Implementing RSA-SHA1 signature algorithm in Java (creating a private key for use with OAuth RSA-SHA

    - by The Elite Gentleman
    Hi everyone, as you know, OAuth supports the RSA-SHA1 signature method. I have an OAuthSignature interface with the following method: public String sign(String data, String consumerSecret, String tokenSecret) throws GeneralSecurityException; I have successfully implemented and tested the HMAC-SHA1 signature (which OAuth supports) as well as the PLAINTEXT "signature". From searching Google, it appears I have to create a private key if I want to use a SHA1withRSA signature. Sample code: /** * Signs the data with the given key and the provided algorithm. */ private static byte[] sign(PrivateKey key, String data) throws GeneralSecurityException { Signature signature = Signature.getInstance("SHA1withRSA"); signature.initSign(key); signature.update(data.getBytes()); return signature.sign(); } Now, how can I take the OAuth key (which is key = consumerSecret&tokenSecret) and create a PrivateKey to use with the SHA1withRSA signature? Thanks

    Read the article

  • NerdDinner form validation DataAnnotations ERROR in MVC2 when a form field is left blank.

    - by Edward Burns
    Platform: Windows 7 Ultimate IDE: Visual Studio 2010 Ultimate Web Environment: ASP.NET MVC 2 Database: SQL Server 2008 R2 Express Data Access: Entity Framework 4 Form Validation: DataAnnotations Sample App: NerdDinner from Wrox Pro ASP.NET MVC 2 Book: Wrox Professional MVC 2 Problem with Chapter 1 - Section: "Integrating Validation and Business Rule Logic with Model Classes" (pages 33 to 35) ERROR Synopsis: NerdDinner form validation ERROR with DataAnnotations and db nulls. DataAnnotations in sample code does not work when the database fields are set to not allow nulls. ERROR occurs with the code from the book and with the sample code downloaded from codeplex. Help! I'm really frustrated by this!! I can't believe something so simple just doesn't work??? Steps to reproduce ERROR: Set Database fields to not allow NULLs (See Picture) Set NerdDinnerEntityModel Dinner class fields' Nullable property to false (See Picture) Add DataAnnotations for Dinner_Validation class (CODE A) Create Dinner repository class (CODE B) Add CREATE action to DinnerController (CODE C) This is blank form before posting (See Picture) This null ERROR occurs when posting a blank form which should be intercepted by the Dinner_Validation class DataAnnotations. Note ERROR message says that "This property cannot be set to a null value. WTH??? (See Picture) The next ERROR occurs during the edit process. Here is the Edit controller action (CODE D) This is the "Edit" form with intentionally wrong input to test Dinner Validation DataAnnotations (See Picture) The ERROR occurs again when posting the edit form with blank form fields. The post request should be intercepted by the Dinner_Validation class DataAnnotations. Same null entry error. WTH??? (See Picture) See screen shots at: http://www.intermedia4web.com/temp/nerdDinner/StackOverflowNerdDinnerQuestionshort.png CODE A: [MetadataType(typeof(Dinner_Validation))] public partial class Dinner { } [Bind(Include = "Title, EventDate, Description, Address, Country, ContactPhone, Latitude, Longitude")] public class Dinner_Validation { [Required(ErrorMessage = "Title is required")] [StringLength(50, ErrorMessage = "Title may not be longer than 50 characters")] public string Title { get; set; } [Required(ErrorMessage = "Description is required")] [StringLength(265, ErrorMessage = "Description must be 256 characters or less")] public string Description { get; set; } [Required(ErrorMessage="Event date is required")] public DateTime EventDate { get; set; } [Required(ErrorMessage = "Address is required")] public string Address { get; set; } [Required(ErrorMessage = "Country is required")] public string Country { get; set; } [Required(ErrorMessage = "Contact phone is required")] public string ContactPhone { get; set; } [Required(ErrorMessage = "Latitude is required")] public double Latitude { get; set; } [Required(ErrorMessage = "Longitude is required")] public double Longitude { get; set; } } CODE B: public class DinnerRepository { private NerdDinnerEntities _NerdDinnerEntity = new NerdDinnerEntities(); // Query Method public IQueryable<Dinner> FindAllDinners() { return _NerdDinnerEntity.Dinners; } // Query Method public IQueryable<Dinner> FindUpcomingDinners() { return from dinner in _NerdDinnerEntity.Dinners where dinner.EventDate > DateTime.Now orderby dinner.EventDate select dinner; } // Query Method public Dinner GetDinner(int id) { return _NerdDinnerEntity.Dinners.FirstOrDefault(d => d.DinnerID == id); } // Insert Method public void Add(Dinner dinner) { 
_NerdDinnerEntity.Dinners.AddObject(dinner); } // Delete Method public void Delete(Dinner dinner) { foreach (var rsvp in dinner.RSVPs) { _NerdDinnerEntity.RSVPs.DeleteObject(rsvp); } _NerdDinnerEntity.Dinners.DeleteObject(dinner); } // Persistence Method public void Save() { _NerdDinnerEntity.SaveChanges(); } } CODE C: // ************************************** // GET: /Dinners/Create/ // ************************************** public ActionResult Create() { Dinner dinner = new Dinner() { EventDate = DateTime.Now.AddDays(7) }; return View(dinner); } // ************************************** // POST: /Dinners/Create/ // ************************************** [HttpPost] public ActionResult Create(Dinner dinner) { if (ModelState.IsValid) { dinner.HostedBy = "The Code Dude"; _dinnerRepository.Add(dinner); _dinnerRepository.Save(); return RedirectToAction("Details", new { id = dinner.DinnerID }); } else { return View(dinner); } } CODE D: // ************************************** // GET: /Dinners/Edit/{id} // ************************************** public ActionResult Edit(int id) { Dinner dinner = _dinnerRepository.GetDinner(id); return View(dinner); } // ************************************** // POST: /Dinners/Edit/{id} // ************************************** [HttpPost] public ActionResult Edit(int id, FormCollection formValues) { Dinner dinner = _dinnerRepository.GetDinner(id); if (TryUpdateModel(dinner)){ _dinnerRepository.Save(); return RedirectToAction("Details", new { id=dinner.DinnerID }); } return View(dinner); } I have sent Wrox and one of the authors a request for help but have not heard back from anyone. Readers of the book cannot even continue to finish the rest of chapter 1 because of these errors. Even if I download the latest build from Codeplex, it still has the same errors. Can someone please help me and tell me what needs to be fixed? Thanks - Ed.

    Read the article

  • Using jQuery Live instead of jQuery Hover function

    - by hajan
    Let’s say we have a case where we need to create mouseover / mouseout functionality for a list which will be dynamically filled with data on client-side. We can use jQuery hover function, which handles the mouseover and mouseout events with two functions. See the following example: <!DOCTYPE html> <html lang="en"> <head id="Head1" runat="server">     <title>jQuery Mouseover / Mouseout Demo</title>     <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jquery/jquery-1.4.4.js"></script>     <style type="text/css">         .hover { color:Red; cursor:pointer;}     </style>     <script type="text/javascript">         $(function () {             $("li").hover(               function () {                   $(this).addClass("hover");               },               function () {                   $(this).removeClass("hover");               });         });     </script> </head> <body>     <form id="form2" runat="server">     <ul>         <li>Data 1</li>         <li>Data 2</li>         <li>Data 3</li>         <li>Data 4</li>         <li>Data 5</li>         <li>Data 6</li>     </ul>     </form> </body> </html> Now, if you have situation where you want to add new data dynamically... Lets say you have a button to add new item in the list. Add the following code right bellow the </ul> tag <input type="text" id="txtItem" /> <input type="button" id="addNewItem" value="Add New Item" /> And add the following button click functionality: //button add new item functionality $("#addNewItem").click(function (event) {     event.preventDefault();     $("<li>" + $("#txtItem").val() + "</li>").appendTo("ul"); }); The mouse over effect won't work for the newly added items. Therefore, we need to use live or delegate function. These both do the same job. The main difference is that for some cases delegate is considered a bit faster, and can be used in chaining. In our case, we can use both. I will use live function. $("li").live("mouseover mouseout",   function (event) {       if (event.type == "mouseover") $(this).addClass("hover");       else $(this).removeClass("hover");   }); The complete code is: <!DOCTYPE html> <html lang="en"> <head id="Head1" runat="server">     <title>jQuery Mouseover / Mouseout Demo</title>     <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jquery/jquery-1.4.4.js"></script>     <style type="text/css">         .hover { color:Red; cursor:pointer;}     </style>     <script type="text/javascript">         $(function () {             $("li").live("mouseover mouseout",               function (event) {                   if (event.type == "mouseover") $(this).addClass("hover");                   else $(this).removeClass("hover");               });             //button add new item functionality             $("#addNewItem").click(function (event) {                 event.preventDefault();                 $("<li>" + $("#txtItem").val() + "</li>").appendTo("ul");             });         });     </script> </head> <body>     <form id="form2" runat="server">     <ul>         <li>Data 1</li>         <li>Data 2</li>         <li>Data 3</li>         <li>Data 4</li>         <li>Data 5</li>         <li>Data 6</li>     </ul>          <input type="text" id="txtItem" />     <input type="button" id="addNewItem" value="Add New Item" />     </form> </body> </html> So, basically when replacing hover with live, you see we use the mouseover and mouseout names for both events. Check the working demo which is available HERE. Hope this was useful blog for you. Hope it’s helpful. 
Hajan. Reference blog: http://codeasp.net/blogs/hajan/microsoft-net/1260/using-jquery-live-instead-of-jquery-hover-function

    Read the article

  • Java enums vs constants for Strings

    - by Marcus
    I've switched from using constants for Strings: public static final String OPTION_1 = "OPTION_1"; ... to enums: public enum Options { OPTION_1; } With constants, you'd just refer to the constant: String s = TheClass.OPTION_1 But with enums, you have to specify toString(): String s = Options.OPTION_1.toString(); I don't like that you have to make the toString() call, and in some cases you can forget to include it, which can lead to unintended results, e.g.: Object o = map.get(Options.OPTION_1); //This won't work as intended if the Map key is a String Is there a better way to use enums for String constants?

    Read the article

  • Why is my onItemSelectedListener not called in a ListView?

    - by Janusz
    I'm using a ListView that is set up like this: <ListView android:id="@android:id/list" android:layout_width="fill_parent" android:layout_height="fill_parent" android:longClickable="false" android:choiceMode="singleChoice"> </ListView> In my code I add an OnItemSelectedListener to the ListView like this: getListView().setAdapter(adapter); getListView().setOnItemSelectedListener(this); My Activity implements the listener like this: @Override public void onItemSelected(AdapterView<?> parent, View view, int position, long id) { Log.d("LocateByCategory", "ListItemSelected: Parent: " + parent.toString() + " View: " + view.toString() + " Position: " + position + " Id: " + id); } My hope was that I would see this debug output the moment I click on something in the list. But the debug output is never shown in LogCat.

    Read the article

  • jQuery Javascript array 'contains' functionality?

    - by YourMomzThaBomb
    I'm trying to use the jQuery $.inArray function to iterate through an array and if there's an element whose text contains a particular keyword, remove that element. $.inArray is only returning the array index though if the element's text is equal to the keyword. For example given the following array named 'tokens': - tokens {...} Object [0] "Starbucks^25^http://somelink" String [1] "McDonalds^34^" String [2] "BurgerKing^31^https://www.somewhere.com" String And a call to removeElement(tokens, 'McDonalds'); would return the following array: - tokens {...} Object [0] "Starbucks^25^http://somelink" String [1] "BurgerKing^31^https://www.somewhere.com" String I'm guessing this may be possible using the jQuery $.grep or $.each function, or maybe regex. However, I'm not familiar enough with jQuery to accomplish this. Any help would be appreciated!
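
    A typical jQuery answer would build the new array with $.grep and a predicate rather than $.inArray. Purely as an illustration of the same filtering idea (sketched in C# rather than jQuery, so treat it as a language-neutral sketch; RemoveElement is just the hypothetical helper named in the question):

    using System;
    using System.Linq;

    class TokenFilter
    {
        // Keep every token whose text does not contain the keyword.
        static string[] RemoveElement(string[] tokens, string keyword)
        {
            return tokens.Where(t => !t.Contains(keyword)).ToArray();
        }

        static void Main()
        {
            var tokens = new[]
            {
                "Starbucks^25^http://somelink",
                "McDonalds^34^",
                "BurgerKing^31^https://www.somewhere.com"
            };

            // Removes the McDonalds entry, leaving the other two tokens.
            foreach (var token in RemoveElement(tokens, "McDonalds"))
                Console.WriteLine(token);
        }
    }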

    Read the article

  • June 2013 Release of the Ajax Control Toolkit

    - by Stephen.Walther
    I’m happy to announce the June 2013 release of the Ajax Control Toolkit. For this release, we enhanced the AjaxFileUpload control to support uploading files directly to Windows Azure. We also improved the SlideShow control by adding support for CSS3 animations. You can get the latest release of the Ajax Control Toolkit by visiting the project page at CodePlex (http://AjaxControlToolkit.CodePlex.com). Alternatively, you can execute the following NuGet command from the Visual Studio Library Package Manager window: Uploading Files to Azure The AjaxFileUpload control enables you to efficiently upload large files and display progress while uploading. With this release, we’ve added support for uploading large files directly to Windows Azure Blob Storage (You can continue to upload to your server hard drive if you prefer). Imagine, for example, that you have created an Azure Blob Storage container named pictures. In that case, you can use the following AjaxFileUpload control to upload to the container: <toolkit:ToolkitScriptManager runat="server" /> <toolkit:AjaxFileUpload ID="AjaxFileUpload1" StoreToAzure="true" AzureContainerName="pictures" runat="server" /> Notice that the AjaxFileUpload control is declared with two properties related to Azure. The StoreToAzure property causes the AjaxFileUpload control to upload a file to Azure instead of the local computer. The AzureContainerName property points to the blob container where the file is uploaded. To use the AjaxFileUpload control, you need to modify your web.config file so it contains some additional settings. You need to configure the AjaxFileUpload handler and you need to point your Windows Azure connection string to your Blob Storage account. <configuration> <appSettings> <!--<add key="AjaxFileUploadAzureConnectionString" value="UseDevelopmentStorage=true"/>--> <add key="AjaxFileUploadAzureConnectionString" value="DefaultEndpointsProtocol=https;AccountName=testact;AccountKey=RvqL89Iw4npvPlAAtpOIPzrinHkhkb6rtRZmD0+ojZupUWuuAVJRyyF/LIVzzkoN38I4LSr8qvvl68sZtA152A=="/> </appSettings> <system.web> <compilation debug="true" targetFramework="4.5" /> <httpRuntime targetFramework="4.5" /> <httpHandlers> <add verb="*" path="AjaxFileUploadHandler.axd" type="AjaxControlToolkit.AjaxFileUploadHandler, AjaxControlToolkit"/> </httpHandlers> </system.web> <system.webServer> <validation validateIntegratedModeConfiguration="false" /> <handlers> <add name="AjaxFileUploadHandler" verb="*" path="AjaxFileUploadHandler.axd" type="AjaxControlToolkit.AjaxFileUploadHandler, AjaxControlToolkit"/> </handlers> <security> <requestFiltering> <requestLimits maxAllowedContentLength="4294967295"/> </requestFiltering> </security> </system.webServer> </configuration> You supply the connection string for your Azure Blob Storage account with the AjaxFileUploadAzureConnectionString property. If you set the value “UseDevelopmentStorage=true” then the AjaxFileUpload will upload to the simulated Blob Storage on your local machine. After you create the necessary configuration settings, you can use the AjaxFileUpload control to upload files directly to Azure (even very large files). Here’s a screen capture of how the AjaxFileUpload control appears in Google Chrome: After the files are uploaded, you can view the uploaded files in the Windows Azure Portal. 
You can see that all 5 files were uploaded successfully: New AjaxFileUpload Events In response to user feedback, we added two new events to the AjaxFileUpload control (on both the server and the client): · UploadStart – Raised on the server before any files have been uploaded. · UploadCompleteAll – Raised on the server when all files have been uploaded. · OnClientUploadStart – The name of a function on the client which is called before any files have been uploaded. · OnClientUploadCompleteAll – The name of a function on the client which is called after all files have been uploaded. These new events are most useful when uploading multiple files at a time. The updated AjaxFileUpload sample page demonstrates how to use these events to show the total amount of time required to upload multiple files (see the AjaxFileUpload.aspx file in the Ajax Control Toolkit sample site). SlideShow Animated Slide Transitions With this release of the Ajax Control Toolkit, we also added support for CSS3 animations to the SlideShow control. The animation is used when transitioning from one slide to another. Here’s the complete list of animations: · FadeInFadeOut · ScaleX · ScaleY · ZoomInOut · Rotate · SlideLeft · SlideDown You specify the animation which you want to use by setting the SlideShowAnimationType property. For example, here is how you would use the Rotate animation when displaying a set of slides: <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="ShowSlideShow.aspx.cs" Inherits="TestACTJune2013.ShowSlideShow" %> <%@ Register TagPrefix="toolkit" Namespace="AjaxControlToolkit" Assembly="AjaxControlToolkit" %> <script runat="Server" type="text/C#"> [System.Web.Services.WebMethod] [System.Web.Script.Services.ScriptMethod] public static AjaxControlToolkit.Slide[] GetSlides() { return new AjaxControlToolkit.Slide[] { new AjaxControlToolkit.Slide("slides/Blue hills.jpg", "Blue Hills", "Go Blue"), new AjaxControlToolkit.Slide("slides/Sunset.jpg", "Sunset", "Setting sun"), new AjaxControlToolkit.Slide("slides/Winter.jpg", "Winter", "Wintery..."), new AjaxControlToolkit.Slide("slides/Water lilies.jpg", "Water lillies", "Lillies in the water"), new AjaxControlToolkit.Slide("slides/VerticalPicture.jpg", "Sedona", "Portrait style picture") }; } </script> <!DOCTYPE html> <html > <head runat="server"> <title></title> </head> <body> <form id="form1" runat="server"> <div> <toolkit:ToolkitScriptManager ID="ToolkitScriptManager1" runat="server" /> <asp:Image ID="Image1" Height="300" Runat="server" /> <toolkit:SlideShowExtender ID="SlideShowExtender1" TargetControlID="Image1" SlideShowServiceMethod="GetSlides" AutoPlay="true" Loop="true" SlideShowAnimationType="Rotate" runat="server" /> </div> </form> </body> </html> In the code above, the set of slides is exposed by a page method named GetSlides(). The SlideShowAnimationType property is set to the value Rotate. The following animated GIF gives you an idea of the resulting slideshow: If you want to use either the SlideDown or SlideRight animations, then you must supply both an explicit width and height for the Image control which is the target of the SlideShow extender. 
For example, here is how you would declare an Image and SlideShow control to use a SlideRight animation: <toolkit:ToolkitScriptManager ID="ToolkitScriptManager1" runat="server" /> <asp:Image ID="Image1" Height="300" Width="300" Runat="server" /> <toolkit:SlideShowExtender ID="SlideShowExtender1" TargetControlID="Image1" SlideShowServiceMethod="GetSlides" AutoPlay="true" Loop="true" SlideShowAnimationType="SlideRight" runat="server" /> Notice that the Image control includes both a Height and Width property. Here’s an approximation of this animation using an animated GIF: Summary The Superexpert team worked hard on this release. We hope you like the new improvements to both the AjaxFileUpload and the SlideShow controls. We’d love to hear your feedback in the comments. On to the next sprint!

    Read the article

  • Using AesCryptoServiceProvider in VB.NET

    - by Collegeman
    My problem is actually a bit more complicated than just how to use AES in VB.NET, since what I'm really trying to do is use AES in VB.NET from within a Java application via JACOB. But for now, what I need to focus on is the AES implementation itself. Here's my encryption code: Public Function EncryptAES(ByVal toEncrypt As String, ByVal key As String) As Byte() Dim keyArray = Convert.FromBase64String(key) Dim toEncryptArray = Encoding.Unicode.GetBytes(toEncrypt) Dim aes = New AesCryptoServiceProvider aes.Key = keyArray aes.Mode = CipherMode.ECB aes.Padding = PaddingMode.ISO10126 Dim encryptor = aes.CreateEncryptor() Dim encrypted = encryptor.TransformFinalBlock(toEncryptArray, 0, toEncryptArray.Length) aes.Clear() Return encrypted End Function Once back in the Java code, I turn the byte array into a hexadecimal String. Now, to reverse the process, here's my decryption code: Public Function DecryptAES(ByVal toDecrypt As String, ByVal key As String) As Byte() Dim keyArray = Convert.FromBase64String(key) Dim toDecryptArray = Convert.FromBase64String(toDecrypt) Dim aes = New AesCryptoServiceProvider aes.Key = keyArray aes.Mode = CipherMode.ECB aes.Padding = PaddingMode.ISO10126 Dim decryptor = aes.CreateDecryptor() Dim decrypted = decryptor.TransformFinalBlock(toDecryptArray, 0, toDecryptArray.Length) aes.Clear() Return decrypted End Function When I run the decryption code, I get the following error message: Padding is invalid and cannot be removed.
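
    One observation, offered as an assumption rather than something the question states: the encrypted bytes are hex-encoded on the Java side, but DecryptAES calls Convert.FromBase64String on its input, so unless the hex string is converted back to the exact same bytes, the ciphertext is corrupted before AES ever sees it, and a padding exception is the usual symptom. A minimal C# sketch of a consistent Base64 round-trip with the same algorithm settings (ECB and ISO10126 only to mirror the question):

    using System;
    using System.Security.Cryptography;
    using System.Text;

    static class AesRoundTrip
    {
        public static string Encrypt(string plainText, string base64Key)
        {
            using (var aes = new AesCryptoServiceProvider())
            {
                aes.Key = Convert.FromBase64String(base64Key);
                aes.Mode = CipherMode.ECB;          // mirrors the question; CBC with an IV is generally preferable
                aes.Padding = PaddingMode.ISO10126;
                using (var encryptor = aes.CreateEncryptor())
                {
                    byte[] input = Encoding.Unicode.GetBytes(plainText);
                    byte[] cipher = encryptor.TransformFinalBlock(input, 0, input.Length);
                    return Convert.ToBase64String(cipher); // hand this exact string to the other side
                }
            }
        }

        public static string Decrypt(string base64Cipher, string base64Key)
        {
            using (var aes = new AesCryptoServiceProvider())
            {
                aes.Key = Convert.FromBase64String(base64Key);
                aes.Mode = CipherMode.ECB;
                aes.Padding = PaddingMode.ISO10126;
                using (var decryptor = aes.CreateDecryptor())
                {
                    byte[] cipher = Convert.FromBase64String(base64Cipher);
                    byte[] plain = decryptor.TransformFinalBlock(cipher, 0, cipher.Length);
                    return Encoding.Unicode.GetString(plain);
                }
            }
        }

        static void Main()
        {
            // Demo key: 32 random bytes, Base64-encoded, shared by both sides.
            byte[] keyBytes = new byte[32];
            using (var rng = RandomNumberGenerator.Create())
                rng.GetBytes(keyBytes);
            string key = Convert.ToBase64String(keyBytes);

            string cipher = Encrypt("hello", key);
            Console.WriteLine(Decrypt(cipher, key));   // prints: hello
        }
    }

    If the ciphertext really must travel as hex, the decrypt side needs a hex-to-byte conversion instead of Convert.FromBase64String.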

    Read the article

  • Accessing a Service from within an XNA Content Pipeline Extension

    - by David Wallace
    I need to allow my content pipeline extension to use a pattern similar to a factory. I start with a dictionary type: public delegate T Mapper<T>(MapFactory<T> mf, XElement d); public class MapFactory<T> { Dictionary<string, Mapper<T>> map = new Dictionary<string, Mapper<T>>(); public void Add(string s, Mapper<T> m) { map.Add(s, m); } public T Get(XElement xe) { if (xe == null) throw new ArgumentNullException( "Invalid document"); var key = xe.Name.ToString(); if (!map.ContainsKey(key)) throw new ArgumentException( key + " is not a valid key."); return map[key](this, xe); } public IEnumerable<T> GetAll(XElement xe) { if (xe == null) throw new ArgumentNullException( "Invalid document"); foreach (var e in xe.Elements()) { var val = e.Name.ToString(); if (map.ContainsKey(val)) yield return map[val](this, e); } } } Here is one type of object I want to store: public partial class TestContent { // Test type public string title; // Once test if true public bool once; // Parameters public Dictionary<string, object> args; public TestContent() { title = string.Empty; args = new Dictionary<string, object>(); } public TestContent(XElement xe) { title = xe.Name.ToString(); args = new Dictionary<string, object>(); xe.ParseAttribute("once", once); } } XElement.ParseAttribute is an extension method that works as one might expect. It returns a boolean that is true if successful. The issue is that I have many different types of tests, each of which populates the object in a way unique to the specific test. The element name is the key to MapFactory's dictionary. This type of test, while atypical, illustrates my problem. public class LogicTest : TestBase { string opkey; List<TestBase> items; public override bool Test(BehaviorArgs args) { if (items == null) return false; if (items.Count == 0) return false; bool result = items[0].Test(args); for (int i = 1; i < items.Count; i++) { bool other = items[i].Test(args); switch (opkey) { case "And": result &= other; if (!result) return false; break; case "Or": result |= other; if (result) return true; break; case "Xor": result ^= other; break; case "Nand": result = !(result & other); break; case "Nor": result = !(result | other); break; default: result = false; break; } } return result; } public static TestContent Build(MapFactory<TestContent> mf, XElement xe) { var result = new TestContent(xe); string key = "Or"; xe.GetAttribute("op", key); result.args.Add("key", key); var names = mf.GetAll(xe).ToList(); if (names.Count() < 2) throw new ArgumentException( "LogicTest requires at least two entries."); result.args.Add("items", names); return result; } } My actual code is more involved as the factory has two dictionaries, one that turns an XElement into a content type to write and another used by the reader to create the actual game objects. I need to build these factories in code because they map strings to delegates. I have a service that contains several of these factories. The mission is to make these factory classes available to a content processor. Neither the processor itself nor the context it uses as a parameter have any known hooks to attach an IServiceProvider or equivalent. Any ideas?
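
    For what it's worth, one workaround sometimes suggested for this situation is sketched below with made-up names (TestContentFactories and TestDefinitionProcessor are not from the post): since the content pipeline runs at build time, the processor cannot reach services in the running game anyway, so the factory can live in a static registry inside (or referenced by) the pipeline extension assembly itself, populated once in a static constructor.

    using System.Xml.Linq;
    using Microsoft.Xna.Framework.Content.Pipeline;

    // Hypothetical registry owned by the pipeline extension assembly.
    public static class TestContentFactories
    {
        public static readonly MapFactory<TestContent> Tests = new MapFactory<TestContent>();

        static TestContentFactories()
        {
            // element name -> builder delegate, using the Build method from the question
            Tests.Add("LogicTest", LogicTest.Build);
        }
    }

    [ContentProcessor(DisplayName = "Test Definition Processor")]
    public class TestDefinitionProcessor : ContentProcessor<XElement, TestContent>
    {
        public override TestContent Process(XElement input, ContentProcessorContext context)
        {
            // No IServiceProvider hook needed: the processor reads the registry directly.
            return TestContentFactories.Tests.Get(input);
        }
    }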

    Read the article

  • Only send populated object properties over WCF?

    - by dlanod
    I have an object that is being sent across WCF that is essentially a property holder - it can potentially have a large number of properties, i.e. up to 100, but in general only a small subset will be set, up to 10 for instance. Example: [DataContract(Namespace = "...")] public class Monkey { [DataMember] public string Arms { get; set; } [DataMember] public string Legs { get; set; } [DataMember] public string Heads { get; set; } [DataMember] public string Feet { get; set; } [DataMember] public string Bodies { get; set; } /* repeat another X times */ } Is there a way to tell WCF to only send the populated properties over the wire? It seems like a potential waste of bandwidth to send over the full object.
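
    One built-in option here, sketched with the property names reused from the question: the DataContract serializer honors DataMember's EmitDefaultValue setting, so members still holding their default value (null for strings) are simply left out of the serialized message, which addresses the bandwidth concern for sparsely populated objects.

    using System.Runtime.Serialization;

    [DataContract(Namespace = "...")]
    public class Monkey
    {
        // With EmitDefaultValue = false, a property that is still null is not
        // written to the wire at all, so only the populated subset is sent.
        [DataMember(EmitDefaultValue = false)]
        public string Arms { get; set; }

        [DataMember(EmitDefaultValue = false)]
        public string Legs { get; set; }

        [DataMember(EmitDefaultValue = false)]
        public string Heads { get; set; }

        // ... repeat for the remaining properties
    }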

    Read the article

  • C# WPF application is using too much memory while GC.GetTotalMemory() is low

    - by Dmitry
    I wrote a little WPF application with 2 threads: the main thread is the GUI thread and the other thread is a worker. The app has one WPF form with some controls. There is a button that allows selecting a directory. After selecting a directory, the application scans for .jpg files in that directory and checks if their thumbnails are in a hashtable. If they are, it does nothing; otherwise it adds their full filenames to a queue for the worker. The worker takes filenames from this queue, loads the JPEG images (using WPF's JpegBitmapDecoder and BitmapFrame), makes thumbnails of them (using WPF's TransformedBitmap) and adds them to the hashtable. Everything works fine, except the memory consumption of this application when making thumbnails of big images (like 5000x5000 pixels). I've added textboxes on my form to show memory consumption (GC.GetTotalMemory() and Process.GetCurrentProcess().PrivateMemorySize64) and was very surprised, because GC.GetTotalMemory() stays close to 1-2 MB, while the private memory size constantly grows, especially when loading a new image (~ +100 MB per image). Even after loading all images, making thumbnails of them and freeing the original images, the private memory size stays at ~700-800 MB. My VirtualBox VM is limited to 512 MB of physical memory and Windows in VirtualBox starts to swap a lot to handle this huge memory consumption. I guess I'm doing something wrong, but I don't know how to investigate this problem, because according to the GC, the allocated memory size is very low. Attaching the code of the thumbnail loader class: class ThumbnailLoader { Hashtable thumbnails; Queue<string> taskqueue; EventWaitHandle wh; Thread[] workers; bool stop; object locker; int width, height, processed, added; public ThumbnailLoader() { int workercount,i; wh = new AutoResetEvent(false); thumbnails = new Hashtable(); taskqueue = new Queue<string>(); stop = false; locker = new object(); width = height = 64; processed = added = 0; workercount = Environment.ProcessorCount; workers=new Thread[workercount]; for (i = 0; i < workercount; i++) { workers[i] = new Thread(Worker); workers[i].IsBackground = true; workers[i].Priority = ThreadPriority.Highest; workers[i].Start(); } } public void SetThumbnailSize(int twidth, int theight) { width = twidth; height = theight; if (thumbnails.Count!=0) AddTask("#resethash"); } public void GetProgress(out int Added, out int Processed) { Added = added; Processed = processed; } private void AddTask(string filename) { lock(locker) { taskqueue.Enqueue(filename); wh.Set(); added++; } } private string NextTask() { lock(locker) { if (taskqueue.Count == 0) return null; else { processed++; return taskqueue.Dequeue(); } } } public static string FileNameToHash(string s) { return FormsAuthentication.HashPasswordForStoringInConfigFile(s, "MD5"); } public bool GetThumbnail(string filename,out BitmapFrame thumbnail) { string hash; hash = FileNameToHash(filename); if (thumbnails.ContainsKey(hash)) { thumbnail=(BitmapFrame)thumbnails[hash]; return true; } AddTask(filename); thumbnail = null; return false; } private BitmapFrame LoadThumbnail(string filename) { FileStream fs; JpegBitmapDecoder bd; BitmapFrame oldbf, bf; TransformedBitmap tb; double scale, dx, dy; fs = new FileStream(filename, FileMode.Open); bd = new JpegBitmapDecoder(fs, BitmapCreateOptions.None, BitmapCacheOption.OnLoad); oldbf = bd.Frames[0]; dx = (double)oldbf.Width / width; dy = (double)oldbf.Height / height; if (dx > dy) scale = 1 / dx; else scale = 1 / dy; tb = new TransformedBitmap(oldbf, new ScaleTransform(scale, scale)); bf = BitmapFrame.Create(tb); fs.Close(); 
oldbf = null; bd = null; GC.Collect(); return bf; } public void Dispose() { lock(locker) { stop = true; } AddTask(null); foreach (Thread worker in workers) { worker.Join(); } wh.Close(); } private void Worker() { string curtask,hash; while (!stop) { curtask = NextTask(); if (curtask == null) wh.WaitOne(); else { if (curtask == "#resethash") thumbnails.Clear(); else { hash = FileNameToHash(curtask); try { thumbnails[hash] = LoadThumbnail(curtask); } catch { thumbnails[hash] = null; } } } } } }
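
    A guess at the cause, not something the question confirms: JpegBitmapDecoder with BitmapCacheOption.OnLoad decodes each full 5000x5000 frame into unmanaged imaging buffers, which show up in private bytes but never in GC.GetTotalMemory(). One common alternative is to let the decoder produce thumbnail-sized pixels directly via BitmapImage.DecodePixelWidth; a minimal sketch of a drop-in LoadThumbnail variant (same signature as in the class above, using types from System.Windows.Media.Imaging):

    private BitmapFrame LoadThumbnail(string filename)
    {
        var bmp = new BitmapImage();
        bmp.BeginInit();
        bmp.UriSource = new Uri(filename);
        bmp.DecodePixelWidth = width;               // decode straight to thumbnail width;
                                                    // height scales to keep the aspect ratio
        bmp.CacheOption = BitmapCacheOption.OnLoad; // read and release the file during EndInit
        bmp.EndInit();
        bmp.Freeze();                               // safe to hand across threads
        return BitmapFrame.Create(bmp);
    }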

    Read the article

  • International Radio Operators Alphabet in F# &amp; Silverlight &ndash; Part 1

    - by MarkPearl
    So I have been delving into F# more and more and thought the best way to learn the language is to write something useful. I have been meaning to get some more Silverlight knowledge (up to now I have mainly been doing WPF) so I came up with a really simple project that I can actually use at work. Simply put – I often get support calls from clients wanting new activation codes. One of our main apps was written in VB6 and had its own "security" where it would require about a 45-character sequence for it to be activated. The catch is that each time you reopen the program it requires a different character sequence, which means that when we activate clients' systems we have to do it live! This involves us either referring them to a website, or reading the characters to them over the phone, and since nobody in the office knows the IROA off by heart we would come up with some interesting words to represent characters… 9 times out of 10 the client would type in the wrong character and we would have to start all over again… with this app I am hoping to reduce the errors of reading characters over the phone by treating it like a ham radio. My "Silverlight" application will allow the user to input a series of characters and the system will then generate the equivalent IROA words… very basic stuff e.g. Character Input – abc Words Generated – Alpha Bravo Charlie Listening to Anders Hejlsberg on Dot Net Rocks Show 541, I heard him mention that he felt many applications could make use of F#, but on an almost silo basis – meaning that you would write modules that lend themselves to functional programming in F# and then incorporate them into a solution where the front end may be in C# or where you would have some other sort of glue. I buy into this kind of approach, so in this project I will use F# to do my very intensive "Business Logic" and will use Silverlight/C# to do the front end. F# Business Layer I am no expert at this, so I am sure to get some feedback on ways I could improve my algorithm. My approach was really simple. I would need a function that would convert a single character to a string – i.e. 'A' –> "Alpha" and then I would need a function that would take a string of characters, convert them into a sequence of characters, and then apply my converter to return a sequence of words… make sense? Let's start with the CharToString function: let CharToString (element:char) = match element.ToString().ToLower() with | "1" -> "1" | "5" -> "5" | "9" -> "9" | "2" -> "2" | "6" -> "6" | "0" -> "0" | "3" -> "3" | "7" -> "7" | "4" -> "4" | "8" -> "8" | "a" -> "Alpha" | "b" -> "Bravo" | "c" -> "Charlie" | "d" -> "Delta" | "e" -> "Echo" | "f" -> "Foxtrot" | "g" -> "Golf" | "h" -> "Hotel" | "i" -> "India" | "j" -> "Juliet" | "k" -> "Kilo" | "l" -> "Lima" | "m" -> "Mike" | "n" -> "November" | "o" -> "Oscar" | "p" -> "Papa" | "q" -> "Quebec" | "r" -> "Romeo" | "s" -> "Sierra" | "t" -> "Tango" | "u" -> "Uniform" | "v" -> "Victor" | "w" -> "Whiskey" | "x" -> "XRay" | "y" -> "Yankee" | "z" -> "Zulu" | element -> "Unknown" Quite simple: an element is passed in, this element is then converted to a lowercase single-character string and then matched up with the equivalent word. 
If by some chance a character is not recognized, "Unknown" will be returned… I now need a function that can take a string, parse each character of the string, and generate a new sequence with the converted words… let ConvertCharsToStrings (s:string) = s |> Seq.toArray |> Seq.map(fun elem -> CharToString(elem)) Here, the Seq.toArray call converts the string to an array of characters. I then searched for some way to run through every element in the sequence. Originally I tried Seq.iter, but I think my understanding of what iter does was incorrect. Eventually I found Seq.map, which applies a function to every element in a sequence and creates a new collection with the processed elements. It turned out to be exactly what I needed… To test that everything worked I created one more function that runs through every element in a sequence and prints it. At this point I realized that Seq.iter would be ideal for this… So my testing code is below… let PrintStrings items = items |> Seq.iter(fun x -> Console.Write(x.ToString() + " ")) let newSeq = ConvertCharsToStrings("acdefg123") PrintStrings newSeq Console.ReadLine()   Pretty basic stuff I guess… I hope my approach was right. In Part 2 I will look into doing a simple Silverlight front end, referencing the projects together, and deploying….

    Read the article

  • Django foreign key question

    - by Hulk
    All, I have the following models defined: class header(models.Model): title = models.CharField(max_length = 255) created_by = models.CharField(max_length = 255) def __unicode__(self): return self.id() class criteria(models.Model): details = models.CharField(max_length = 255) headerid = models.ForeignKey(header) def __unicode__(self): return self.id() class options(models.Model): opt_details = models.CharField(max_length = 255) headerid = models.ForeignKey(header) def __unicode__(self): return self.id() And in my views I have: p = header(title=name,created_by=id) p.save() Now the data will be saved to the header table. My question is: given the id generated in the header table, how do I save the related data to the criteria and options tables? Please let me know. Thanks.

    Read the article
