Search Results

Search found 3169 results on 127 pages for 'identity ranges'.


  • Type '_Default' already contains a definition

    - by salvationishere
    I am developing a C# VS 2008 / SQL Server 2008 website. I have a Gridview. I included the Default.aspx and aspx.cs files below. But when I build this I get the below error: The Type '_Default' already contains a definition for 'btnOWrite' What do I need to do to fix this? I am not getting any errors now; just that this grid does not show up. Thanks! ASPX file: <%@ Page Language="C#" MasterPageFile="~/Site.master" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" Title="Untitled Page" %> <asp:Content ID="Content1" ContentPlaceHolderID="MainContent" runat="Server"> <asp:Panel runat="server" ID="AuthenticatedMessagePanel"> <asp:Label runat="server" ID="WelcomeBackMessage"></asp:Label> <table> <tr> <td> <asp:Label ID="tableLabel" runat="server" Font-Bold="True" Text="Select target table:"></asp:Label> </td> <td> <asp:Label ID="inputLabel" runat="server" Font-Bold="True" Text="Select input file:"></asp:Label> </td> </tr> <tr> <td valign="top"> <asp:Label ID="feedbackLabel" runat="server"></asp:Label> <asp:SqlDataSource ID="SelectTables" runat="server" ConnectionString="<%$ ConnectionStrings:AdventureWorks3_SelectTables %>" SelectCommand="getTableNames" SelectCommandType="StoredProcedure"> <SelectParameters> <asp:QueryStringParameter DefaultValue="Person" Name="SchemaName" QueryStringField="SchemaName" Type="String" /> </SelectParameters> </asp:SqlDataSource> <asp:GridView ID="GridView1" DataSourceID="SelectTables" runat="server" Style="width: 400px;" CellPadding="4" ForeColor="#333333" GridLines="None" OnSelectedIndexChanged="GridView1_SelectedIndexChanged" AutoGenerateSelectButton="True" DataKeyNames="TABLE_NAME"> <RowStyle BackColor="#F7F6F3" ForeColor="#333333" /> <Columns> <asp:BoundField HeaderText="TABLE_NAME" DataField="TABLE_NAME" /> </Columns> <FooterStyle BackColor="#5D7B9D" Font-Bold="True" ForeColor="White" /> <PagerStyle BackColor="#284775" ForeColor="White" HorizontalAlign="Center" /> <SelectedRowStyle BackColor="#E2DED6" Font-Bold="True" ForeColor="#333333" /> <HeaderStyle BackColor="#5D7B9D" Font-Bold="True" ForeColor="White" /> <EditRowStyle BackColor="#999999" /> <AlternatingRowStyle BackColor="White" ForeColor="#284775" /> </asp:GridView> </td> <td valign="top"> <input id="uploadFile" type="file" size="26" runat="server" name="uploadFile" title="UploadFile" class="greybar" enableviewstate="True" /> </td> </tr> </table> <table> <tr> <td style="width:150px; height:50px"></td> <td valign="bottom" style="width:150px; height:50px"> <input id="btnOWrite" type="submit" value="Overwrite Data" runat="server" class="greybar" onserverclick="btnOWrite_Click" name="btnOWrite" />&nbsp; </td> <td style="width:100px"></td> <td valign="bottom" style="width:150px; height:50px"> <input id="btnAppend" type="submit" value="Append Data" runat="server" class="greybar" onserverclick="btnAppend_Click" name="btnAppend" /> </td> </tr> </table> </asp:Panel> <asp:Panel runat="Server" ID="AnonymousMessagePanel"> <asp:HyperLink runat="server" ID="lnkLogin" Text="Log In" NavigateUrl="~/Login.aspx"> </asp:HyperLink> </asp:Panel> </asp:Content> ASPX.CS file: using System; using System.Collections; using System.Configuration; using System.Data; using System.Linq; using System.Web; using System.Web.Security; using System.Web.UI; using System.Web.UI.HtmlControls; using System.Web.UI.WebControls; using System.Web.UI.WebControls.WebParts; using System.Xml.Linq; using System.Collections.Generic; using System.IO; using System.Drawing; using System.ComponentModel; using System.Data.SqlClient; 
using ADONET_namespace; using System.Security.Principal; //using System.Windows; public partial class _Default : System.Web.UI.Page //namespace AddFileToSQL { //protected System.Web.UI.HtmlControls.HtmlInputFile uploadFile; protected System.Web.UI.HtmlControls.HtmlInputButton btnOWrite; protected System.Web.UI.HtmlControls.HtmlInputButton btnAppend; protected System.Web.UI.WebControls.Label Label1; protected static string inputfile = ""; public static string targettable; public static string selection; // Number of controls added to view state protected int default_NumberOfControls { get { if (ViewState["default_NumberOfControls"] != null) { return (int)ViewState["default_NumberOfControls"]; } else { return 0; } } set { ViewState["default_NumberOfControls"] = value; } } protected void uploadFile_onclick(object sender, EventArgs e) { } protected void Load_GridData() { //GridView1.DataSource = ADONET_methods.DisplaySchemaTables(); //GridView1.DataBind(); } protected void btnOWrite_Click(object sender, EventArgs e) { if (uploadFile.PostedFile.ContentLength > 0) { feedbackLabel.Text = "You do not have sufficient access to overwrite table records."; } else { feedbackLabel.Text = "This file does not contain any data."; } } protected void btnAppend_Click(object sender, EventArgs e) { string fullpath = Page.Request.PhysicalApplicationPath; string path = uploadFile.PostedFile.FileName; if (File.Exists(path)) { // Create a file to write to. try { StreamReader sr = new StreamReader(path); string s = ""; while (sr.Peek() > 0) s = sr.ReadLine(); sr.Close(); } catch (IOException exc) { Console.WriteLine(exc.Message + "Cannot open file."); return; } } if (uploadFile.PostedFile.ContentLength > 0) { inputfile = System.IO.File.ReadAllText(path); Session["Message"] = inputfile; Response.Redirect("DataMatch.aspx"); } else { feedbackLabel.Text = "This file does not contain any data."; } } protected void Page_Load(object sender, EventArgs e) { if (Request.IsAuthenticated) { WelcomeBackMessage.Text = "Welcome back, " + User.Identity.Name + "!"; // Reference the CustomPrincipal / CustomIdentity CustomIdentity ident = User.Identity as CustomIdentity; if (ident != null) WelcomeBackMessage.Text += string.Format(" You are the {0} of {1}.", ident.Title, ident.CompanyName); AuthenticatedMessagePanel.Visible = true; AnonymousMessagePanel.Visible = false; if (!Page.IsPostBack) { Load_GridData(); } } else { AuthenticatedMessagePanel.Visible = false; AnonymousMessagePanel.Visible = true; } } protected void GridView1_SelectedIndexChanged(object sender, EventArgs e) { GridViewRow row = GridView1.SelectedRow; targettable = row.Cells[2].Text; } }
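    The error CS0102 ("already contains a definition") means the _Default class ends up declaring the same member twice: typically either a stray second copy of the code-behind class somewhere in the project, or a manual field declaration colliding with the field ASP.NET generates for a runat="server" control of the same ID. A minimal sketch of the second case, assuming that is what is happening here (the markup declaration is the one to keep):

    ```csharp
    // Sketch (hedged): btnOWrite / btnAppend are already declared in the markup with
    // runat="server", so ASP.NET generates matching fields in the other half of the
    // partial class. Removing the manual declarations removes the duplicate definition.
    public partial class _Default : System.Web.UI.Page
    {
        // protected System.Web.UI.HtmlControls.HtmlInputButton btnOWrite;  // remove: duplicates the generated member
        // protected System.Web.UI.HtmlControls.HtmlInputButton btnAppend;  // remove: duplicates the generated member

        protected void btnOWrite_Click(object sender, EventArgs e)
        {
            // The controls remain accessible here through the generated members.
        }
    }
    ```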


  • How to display table in ASP.NET website?

    - by salvationishere
    I am developing a C# VS 2008 / SQL Server 2008 website, but now I cannot display the Gridview containing my table. I included the Default.aspx and aspx.cs files below. What do I need to do to fix this? I am not getting any errors now; just that this grid does not show up. Thanks! ASPX file: <%@ Page Language="C#" MasterPageFile="~/Site.master" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" Title="Untitled Page" %> <asp:Content ID="Content1" ContentPlaceHolderID="MainContent" Runat="Server"> <asp:Panel runat="server" ID="AuthenticatedMessagePanel"> <asp:Label runat="server" ID="WelcomeBackMessage"></asp:Label> <table> <tr > <td> <asp:Label ID="tableLabel" runat="server" Font-Bold="True" Text="Select target table:"></asp:Label> </td> <td> <asp:Label ID="inputLabel" runat="server" Font-Bold="True" Text="Select input file:"></asp:Label> </td></tr> <tr><td valign="top"> <asp:Label ID="feedbackLabel" runat="server"></asp:Label> <asp:SqlDataSource ID="SelectTables" runat="server" ConnectionString="<%$ ConnectionStrings:AdventureWorks3_SelectTables %>" SelectCommand="SELECT TABLE_NAME FROM information_schema.Tables WHERE TABLE_TYPE = 'BASE TABLE' AND TABLE_SCHEMA = @SchemaName"> <SelectParameters> <asp:Parameter Name="SchemaName" Type="String" DefaultValue="" /> </SelectParameters> <InsertParameters> <asp:Parameter Name="TABLE_NAME" Direction="Output" Type="String" DefaultValue="" /> </InsertParameters> </asp:SqlDataSource> <asp:GridView ID="GridView1" DatasourceID="SelectTables" runat="server" style="WIDTH: 400px;" CellPadding="4" ForeColor="#333333" GridLines="None" onselectedindexchanged="GridView1_SelectedIndexChanged" AutoGenerateSelectButton="True" DataKeyNames="TABLE_NAME"> <RowStyle BackColor="#F7F6F3" ForeColor="#333333" /> <Columns> <asp:BoundField HeaderText="TABLE_NAME" DataField="TABLE_NAME" /> </Columns> <FooterStyle BackColor="#5D7B9D" Font-Bold="True" ForeColor="White" /> <PagerStyle BackColor="#284775" ForeColor="White" HorizontalAlign="Center" /> <SelectedRowStyle BackColor="#E2DED6" Font-Bold="True" ForeColor="#333333" /> <HeaderStyle BackColor="#5D7B9D" Font-Bold="True" ForeColor="White" /> <EditRowStyle BackColor="#999999" /> <AlternatingRowStyle BackColor="White" ForeColor="#284775" /> </asp:GridView> </td> <td valign="top"> <input id="uploadFile" type="file" size="26" runat="server" name="uploadFile" title="UploadFile" class="greybar" enableviewstate="True" /> </td></tr> </table> </asp:Panel> <asp:Panel runat="Server" ID="AnonymousMessagePanel"> <asp:HyperLink runat="server" ID="lnkLogin" Text="Log In" NavigateUrl="~/Login.aspx"> </asp:HyperLink> </asp:Panel> </asp:Content> ASPX.CS file: using System; using System.Collections; using System.Configuration; using System.Data; using System.Linq; using System.Web; using System.Web.Security; using System.Web.UI; using System.Web.UI.HtmlControls; using System.Web.UI.WebControls; using System.Web.UI.WebControls.WebParts; using System.Xml.Linq; using System.Collections.Generic; using System.IO; using System.Drawing; using System.ComponentModel; using System.Data.SqlClient; using ADONET_namespace; using System.Security.Principal; //using System.Windows; public partial class _Default : System.Web.UI.Page //namespace AddFileToSQL { //protected System.Web.UI.HtmlControls.HtmlInputFile uploadFile; protected System.Web.UI.HtmlControls.HtmlInputButton btnOWrite; protected System.Web.UI.HtmlControls.HtmlInputButton btnAppend; protected System.Web.UI.WebControls.Label Label1; protected static string inputfile = ""; 
public static string targettable; public static string selection; // Number of controls added to view state protected int default_NumberOfControls { get { if (ViewState["default_NumberOfControls"] != null) { return (int)ViewState["default_NumberOfControls"]; } else { return 0; } } set { ViewState["default_NumberOfControls"] = value; } } protected void uploadFile_onclick(object sender, EventArgs e) { } protected void Load_GridData() { //GridView1.DataSource = ADONET_methods.DisplaySchemaTables(); //GridView1.DataBind(); } protected void btnOWrite_Click(object sender, EventArgs e) { if (uploadFile.PostedFile.ContentLength > 0) { feedbackLabel.Text = "You do not have sufficient access to overwrite table records."; } else { feedbackLabel.Text = "This file does not contain any data."; } } protected void btnAppend_Click(object sender, EventArgs e) { string fullpath = Page.Request.PhysicalApplicationPath; string path = uploadFile.PostedFile.FileName; if (File.Exists(path)) { // Create a file to write to. try { StreamReader sr = new StreamReader(path); string s = ""; while (sr.Peek() > 0) s = sr.ReadLine(); sr.Close(); } catch (IOException exc) { Console.WriteLine(exc.Message + "Cannot open file."); return; } } if (uploadFile.PostedFile.ContentLength > 0) { inputfile = System.IO.File.ReadAllText(path); Session["Message"] = inputfile; Response.Redirect("DataMatch.aspx"); } else { feedbackLabel.Text = "This file does not contain any data."; } } protected void Page_Load(object sender, EventArgs e) { if (Request.IsAuthenticated) { WelcomeBackMessage.Text = "Welcome back, " + User.Identity.Name + "!"; // Reference the CustomPrincipal / CustomIdentity CustomIdentity ident = User.Identity as CustomIdentity; if (ident != null) WelcomeBackMessage.Text += string.Format(" You are the {0} of {1}.", ident.Title, ident.CompanyName); AuthenticatedMessagePanel.Visible = true; AnonymousMessagePanel.Visible = false; if (!Page.IsPostBack) { Load_GridData(); } } else { AuthenticatedMessagePanel.Visible = false; AnonymousMessagePanel.Visible = true; } } protected void GridView1_SelectedIndexChanged(object sender, EventArgs e) { GridViewRow row = GridView1.SelectedRow; targettable = row.Cells[2].Text; } }
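    One thing visible in this markup: the SchemaName parameter has DefaultValue="", so the information_schema query returns no rows, and a GridView bound to an empty result renders nothing by default. A sketch of a version that should produce rows; the "Person" default is borrowed from the previous question's markup and is only an assumption about the intended schema:

    ```aspx
    <!-- Sketch: give the parameter a usable default and show a message when no rows come back. -->
    <asp:SqlDataSource ID="SelectTables" runat="server"
        ConnectionString="<%$ ConnectionStrings:AdventureWorks3_SelectTables %>"
        SelectCommand="SELECT TABLE_NAME FROM information_schema.Tables
                       WHERE TABLE_TYPE = 'BASE TABLE' AND TABLE_SCHEMA = @SchemaName">
        <SelectParameters>
            <asp:QueryStringParameter Name="SchemaName" QueryStringField="SchemaName"
                                      DefaultValue="Person" Type="String" />
        </SelectParameters>
    </asp:SqlDataSource>

    <asp:GridView ID="GridView1" runat="server" DataSourceID="SelectTables"
                  DataKeyNames="TABLE_NAME" AutoGenerateSelectButton="True"
                  EmptyDataText="No tables found for this schema."
                  OnSelectedIndexChanged="GridView1_SelectedIndexChanged" />
    ```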


  • WCF Service returning 400 error: The body of the message cannot be read because it is empty

    - by Josh
    I have a WCF service that is causing a bit of a headache. I have tracing enabled, I have an object with a data contract being built and passed in, but I am seeing this error in the log: <TraceData> <DataItem> <TraceRecord xmlns="http://schemas.microsoft.com/2004/10/E2ETraceEvent/TraceRecord" Severity="Error"> <TraceIdentifier>http://msdn.microsoft.com/en-US/library/System.ServiceModel.Diagnostics.ThrowingException.aspx</TraceIdentifier> <Description>Throwing an exception.</Description> <AppDomain>efb0d0d7-1-129315381593520544</AppDomain> <Exception> <ExceptionType>System.ServiceModel.ProtocolException, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089</ExceptionType> <Message>There is a problem with the XML that was received from the network. See inner exception for more details.</Message> <StackTrace> at System.ServiceModel.Channels.HttpRequestContext.CreateMessage() at System.ServiceModel.Channels.HttpChannelListener.HttpContextReceived(HttpRequestContext context, Action callback) at System.ServiceModel.Activation.HostedHttpTransportManager.HttpContextReceived(HostedHttpRequestAsyncResult result) at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.HandleRequest() at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.BeginRequest() at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.OnBeginRequest(Object state) at System.Runtime.IOThreadScheduler.ScheduledOverlapped.IOCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* nativeOverlapped) at System.Runtime.Fx.IOCompletionThunk.UnhandledExceptionFrame(UInt32 error, UInt32 bytesRead, NativeOverlapped* nativeOverlapped) at System.Threading._IOCompletionCallback.PerformIOCompletionCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* pOVERLAP) </StackTrace> <ExceptionString> System.ServiceModel.ProtocolException: There is a problem with the XML that was received from the network. See inner exception for more details. ---&amp;gt; System.Xml.XmlException: The body of the message cannot be read because it is empty. 
--- End of inner exception stack trace --- </ExceptionString> <InnerException> <ExceptionType>System.Xml.XmlException, System.Xml, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089</ExceptionType> <Message>The body of the message cannot be read because it is empty.</Message> <StackTrace> at System.ServiceModel.Channels.HttpRequestContext.CreateMessage() at System.ServiceModel.Channels.HttpChannelListener.HttpContextReceived(HttpRequestContext context, Action callback) at System.ServiceModel.Activation.HostedHttpTransportManager.HttpContextReceived(HostedHttpRequestAsyncResult result) at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.HandleRequest() at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.BeginRequest() at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.OnBeginRequest(Object state) at System.Runtime.IOThreadScheduler.ScheduledOverlapped.IOCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* nativeOverlapped) at System.Runtime.Fx.IOCompletionThunk.UnhandledExceptionFrame(UInt32 error, UInt32 bytesRead, NativeOverlapped* nativeOverlapped) at System.Threading._IOCompletionCallback.PerformIOCompletionCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* pOVERLAP) </StackTrace> <ExceptionString>System.Xml.XmlException: The body of the message cannot be read because it is empty.</ExceptionString> </InnerException> </Exception> </TraceRecord> </DataItem> </TraceData> So, here is my service interface: [ServiceContract] public interface IRDCService { [OperationContract] Response<Customer> GetCustomer(CustomerRequest request); [OperationContract] Response<Customer> GetSiteCustomers(CustomerRequest request); } And here is my service instance public class RDCService : IRDCService { ICustomerService customerService; public RDCService() { //We have to locate the instance from structuremap manually because web services *REQUIRE* a default constructor customerService = ServiceLocator.Locate<ICustomerService>(); } public Response<Customer> GetCustomer(CustomerRequest request) { return customerService.GetCustomer(request); } public Response<Customer> GetSiteCustomers(CustomerRequest request) { return customerService.GetSiteCustomers(request); } } The configuration for the web service (server side) looks like this: <system.serviceModel> <diagnostics> <messageLogging logMalformedMessages="true" logMessagesAtServiceLevel="true" logMessagesAtTransportLevel="true" /> </diagnostics> <services> <service behaviorConfiguration="MySite.Web.Services.RDCServiceBehavior" name="MySite.Web.Services.RDCService"> <endpoint address="http://localhost:27433" binding="wsHttpBinding" contract="MySite.Common.Services.Web.IRDCService"> <identity> <dns value="localhost:27433" /> </identity> </endpoint> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" /> </service> </services> <behaviors> <serviceBehaviors> <behavior name="MySite.Web.Services.RDCServiceBehavior"> <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment --> <serviceMetadata httpGetEnabled="true"/> <!-- To receive exception details in faults for debugging purposes, set the value below to true. 
Set to false before deployment to avoid disclosing exception information --> <serviceDebug includeExceptionDetailInFaults="true"/> <dataContractSerializer maxItemsInObjectGraph="6553600" /> </behavior> </serviceBehaviors> </behaviors> </system.serviceModel> Here is what my request object looks like [DataContract] public class CustomerRequest : RequestBase { [DataMember] public int Id { get; set; } [DataMember] public int SiteId { get; set; } } And the RequestBase: [DataContract] public abstract class RequestBase : IRequest { #region IRequest Members [DataMember] public int PageSize { get; set; } [DataMember] public int PageIndex { get; set; } #endregion } And my IRequest interface public interface IRequest { int PageSize { get; set; } int PageIndex { get; set; } } And I have a wrapper class around my service calls. Here is the class. public class MyService : IMyService { IRDCService service; public MyService() { //service = new MySite.RDCService.RDCServiceClient(); EndpointAddress address = new EndpointAddress(APISettings.Default.ServiceUrl); BasicHttpBinding binding = new BasicHttpBinding(BasicHttpSecurityMode.None); binding.TransferMode = TransferMode.Streamed; binding.MaxBufferSize = 65536; binding.MaxReceivedMessageSize = 4194304; ChannelFactory<IRDCService> factory = new ChannelFactory<IRDCService>(binding, address); service = factory.CreateChannel(); } public Response<Customer> GetCustomer(CustomerRequest request) { return service.GetCustomer(request); } public Response<Customer> GetSiteCustomers(CustomerRequest request) { return service.GetSiteCustomers(request); } } and finally, the response object. [DataContract] public class Response<T> { [DataMember] public IEnumerable<T> Results { get; set; } [DataMember] public int TotalResults { get; set; } [DataMember] public int PageIndex { get; set; } [DataMember] public int PageSize { get; set; } [DataMember] public RulesException Exception { get; set; } } So, when I build my CustomerRequest object and pass it in, for some reason it's hitting the server as an empty request. Any ideas why? I've tried upping the object graph and the message size. When I debug it stops in the wrapper class with the 400 error. I'm not sure if there is a serialization error, but considering the object contract is 4 integer properties I can't imagine it causing an issue.
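    One mismatch that stands out in the posted code: the service endpoint is configured with wsHttpBinding, while the hand-built ChannelFactory in the wrapper uses BasicHttpBinding with TransferMode.Streamed. The two sides have to agree on binding and transfer mode, and a mismatch like this is the first thing to rule out when the dispatcher reports an unreadable (effectively empty) message body. A sketch of making the client match the server, assuming the server stays on its default wsHttpBinding:

    ```csharp
    // Sketch: inside the MyService constructor, build the channel with the same binding
    // the server endpoint declares (wsHttpBinding) instead of BasicHttpBinding.
    // APISettings.Default.ServiceUrl is the existing setting from the question.
    EndpointAddress address = new EndpointAddress(APISettings.Default.ServiceUrl);
    WSHttpBinding binding = new WSHttpBinding();   // defaults match the server's default wsHttpBinding
    ChannelFactory<IRDCService> factory = new ChannelFactory<IRDCService>(binding, address);
    service = factory.CreateChannel();
    ```

    The alternative, if BasicHttpBinding is the one actually wanted, is to change the server endpoint's binding attribute to basicHttpBinding so both sides agree.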


  • Gridview Datasource Server error

    - by salvationishere
    I am developing a C# VS 2008 and SQL Server 2008 website. However, I get the below error now when I first run this: The DataSourceID of 'GridView1' must be the ID of a control of type IDataSource. A control with ID 'AdventureWorks3.mdf' could not be found What is causing this error? Here is my default.aspx file. I have configured GridView1 to use my AdventureWorks3.mdf file, stored in my App_Data folder. Do I need to add this folder name to this ASPX file? <%@ Page Language="C#" MasterPageFile="~/Site.master" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" Title="Untitled Page" %> <asp:Content ID="Content1" ContentPlaceHolderID="MainContent" Runat="Server"> <asp:Panel runat="server" ID="AuthenticatedMessagePanel"> <asp:Label runat="server" ID="WelcomeBackMessage"></asp:Label> <table> <tr > <td> <asp:Label ID="tableLabel" runat="server" Font-Bold="True" Text="Select target table:"></asp:Label> </td> <td> <asp:Label ID="inputLabel" runat="server" Font-Bold="True" Text="Select input file:"></asp:Label> </td></tr> <tr><td valign="top"> <asp:Label ID="feedbackLabel" runat="server"></asp:Label> <asp:GridView ID="GridView1" runat="server" style="WIDTH: 400px;" CellPadding="4" ForeColor="#333333" GridLines="None" onselectedindexchanged="GridView1_SelectedIndexChanged" AutoGenerateSelectButton="True" DataSourceID="AdventureWorks3.mdf" > <RowStyle BackColor="#F7F6F3" ForeColor="#333333" /> <FooterStyle BackColor="#5D7B9D" Font-Bold="True" ForeColor="White" /> <PagerStyle BackColor="#284775" ForeColor="White" HorizontalAlign="Center" /> <SelectedRowStyle BackColor="#E2DED6" Font-Bold="True" ForeColor="#333333" /> <HeaderStyle BackColor="#5D7B9D" Font-Bold="True" ForeColor="White" /> <EditRowStyle BackColor="#999999" /> <AlternatingRowStyle BackColor="White" ForeColor="#284775" /> </asp:GridView> </td> <td valign="top"> <input id="uploadFile" type="file" size="26" runat="server" name="uploadFile" title="UploadFile" class="greybar" enableviewstate="True" /> </td></tr> </table> </asp:Panel> <asp:Panel runat="Server" ID="AnonymousMessagePanel"> <asp:HyperLink runat="server" ID="lnkLogin" Text="Log In" NavigateUrl="~/Login.aspx"> </asp:HyperLink> </asp:Panel> </asp:Content> Or what about my ASPX.CS file? Is this the problem? 
using System; using System.Collections; using System.Configuration; using System.Data; using System.Linq; using System.Web; using System.Web.Security; using System.Web.UI; using System.Web.UI.HtmlControls; using System.Web.UI.WebControls; using System.Web.UI.WebControls.WebParts; using System.Xml.Linq; using System.Collections.Generic; using System.IO; using System.Drawing; using System.ComponentModel; using System.Data.SqlClient; using ADONET_namespace; using System.Security.Principal; //using System.Windows; public partial class _Default : System.Web.UI.Page //namespace AddFileToSQL { //protected System.Web.UI.HtmlControls.HtmlInputFile uploadFile; protected System.Web.UI.HtmlControls.HtmlInputButton btnOWrite; protected System.Web.UI.HtmlControls.HtmlInputButton btnAppend; protected System.Web.UI.WebControls.Label Label1; protected static string inputfile = ""; public static string targettable; public static string selection; // Number of controls added to view state protected int default_NumberOfControls { get { if (ViewState["default_NumberOfControls"] != null) { return (int)ViewState["default_NumberOfControls"]; } else { return 0; } } set { ViewState["default_NumberOfControls"] = value; } } protected void uploadFile_onclick(object sender, EventArgs e) { } protected void Load_GridData() { GridView1.DataSource = ADONET_methods.DisplaySchemaTables(); GridView1.DataBind(); } protected void btnOWrite_Click(object sender, EventArgs e) { if (uploadFile.PostedFile.ContentLength > 0) { feedbackLabel.Text = "You do not have sufficient access to overwrite table records."; } else { feedbackLabel.Text = "This file does not contain any data."; } } protected void btnAppend_Click(object sender, EventArgs e) { string fullpath = Page.Request.PhysicalApplicationPath; string path = uploadFile.PostedFile.FileName; if (File.Exists(path)) { // Create a file to write to. try { StreamReader sr = new StreamReader(path); string s = ""; while (sr.Peek() > 0) s = sr.ReadLine(); sr.Close(); } catch (IOException exc) { Console.WriteLine(exc.Message + "Cannot open file."); return; } } if (uploadFile.PostedFile.ContentLength > 0) { inputfile = System.IO.File.ReadAllText(path); Session["Message"] = inputfile; Response.Redirect("DataMatch.aspx"); } else { feedbackLabel.Text = "This file does not contain any data."; } } protected void Page_Load(object sender, EventArgs e) { if (Request.IsAuthenticated) { WelcomeBackMessage.Text = "Welcome back, " + User.Identity.Name + "!"; // Reference the CustomPrincipal / CustomIdentity CustomIdentity ident = User.Identity as CustomIdentity; if (ident != null) WelcomeBackMessage.Text += string.Format(" You are the {0} of {1}.", ident.Title, ident.CompanyName); AuthenticatedMessagePanel.Visible = true; AnonymousMessagePanel.Visible = false; //if (!Page.IsPostBack) //{ // Load_GridData(); //} } else { AuthenticatedMessagePanel.Visible = false; AnonymousMessagePanel.Visible = true; } } protected void GridView1_SelectedIndexChanged(object sender, EventArgs e) { GridViewRow row = GridView1.SelectedRow; targettable = row.Cells[2].Text; } }
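    The message itself points at the cause: DataSourceID must be the ID of a data source control on the page (something implementing IDataSource, such as a SqlDataSource), not the name of the database file, so there is no control called 'AdventureWorks3.mdf' to find. A sketch of wiring the grid the way the earlier question does; "AdventureWorks3" is an assumed connection-string name defined in web.config to attach the .mdf from App_Data:

    ```aspx
    <!-- Sketch: bind the grid to a SqlDataSource control; the .mdf itself is referenced only
         through the connection string (e.g. AttachDbFilename=|DataDirectory|\AdventureWorks3.mdf). -->
    <asp:SqlDataSource ID="SelectTables" runat="server"
        ConnectionString="<%$ ConnectionStrings:AdventureWorks3 %>"
        SelectCommand="SELECT TABLE_NAME FROM information_schema.Tables WHERE TABLE_TYPE = 'BASE TABLE'" />

    <asp:GridView ID="GridView1" runat="server"
        DataSourceID="SelectTables"
        DataKeyNames="TABLE_NAME"
        AutoGenerateSelectButton="True"
        OnSelectedIndexChanged="GridView1_SelectedIndexChanged" />
    ```

    Alternatively, drop the DataSourceID attribute entirely and keep the code-behind path (GridView1.DataSource = ...; GridView1.DataBind();), since DataSource and DataSourceID cannot both be set on the same grid.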


  • Nginx and client certificates from hierarchical OpenSSL-based certification authorities

    - by Fmy Oen
    I'm trying to set up root certification authority, subordinate certification authority and to generate the client certificates signed by any of this CA that nginx 0.7.67 on Debian Squeeze will accept. My problem is that root CA signed client certificate works fine while subordinate CA signed one results in "400 Bad Request. The SSL certificate error". Step 1: nginx virtual host configuration: server { server_name test.local; access_log /var/log/nginx/test.access.log; listen 443 default ssl; keepalive_timeout 70; ssl_protocols SSLv3 TLSv1; ssl_ciphers AES128-SHA:AES256-SHA:RC4-SHA:DES-CBC3-SHA:RC4-MD5; ssl_certificate /etc/nginx/ssl/server.crt; ssl_certificate_key /etc/nginx/ssl/server.key; ssl_client_certificate /etc/nginx/ssl/client.pem; ssl_verify_client on; ssl_session_cache shared:SSL:10m; ssl_session_timeout 5m; location / { proxy_pass http://testsite.local/; } } Step 2: PKI infrastructure organization for both root and subordinate CA (based on this article): # mkdir ~/pki && cd ~/pki # mkdir rootCA subCA # cp -v /etc/ssl/openssl.cnf rootCA/ # cd rootCA/ # mkdir certs private crl newcerts; touch serial; echo 01 > serial; touch index.txt; touch crlnumber; echo 01 > crlnumber # cp -Rvp * ../subCA/ Almost no changes was made to rootCA/openssl.cnf: [ CA_default ] dir = . # Where everything is kept ... certificate = $dir/certs/rootca.crt # The CA certificate ... private_key = $dir/private/rootca.key # The private key and to subCA/openssl.cnf: [ CA_default ] dir = . # Where everything is kept ... certificate = $dir/certs/subca.crt # The CA certificate ... private_key = $dir/private/subca.key # The private key Step 3: Self-signed root CA certificate generation: # openssl genrsa -out ./private/rootca.key -des3 2048 # openssl req -x509 -new -key ./private/rootca.key -out certs/rootca.crt -config openssl.cnf Enter pass phrase for ./private/rootca.key: You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. ----- Country Name (2 letter code) [AU]: State or Province Name (full name) [Some-State]: Locality Name (eg, city) []: Organization Name (eg, company) [Internet Widgits Pty Ltd]: Organizational Unit Name (eg, section) []: Common Name (eg, YOUR name) []:rootca Email Address []: Step 4: Subordinate CA certificate generation: # cd ../subCA # openssl genrsa -out ./private/subca.key -des3 2048 # openssl req -new -key ./private/subca.key -out subca.csr -config openssl.cnf Enter pass phrase for ./private/subca.key: You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. 
----- Country Name (2 letter code) [AU]: State or Province Name (full name) [Some-State]: Locality Name (eg, city) []: Organization Name (eg, company) [Internet Widgits Pty Ltd]: Organizational Unit Name (eg, section) []: Common Name (eg, YOUR name) []:subca Email Address []: Please enter the following 'extra' attributes to be sent with your certificate request A challenge password []: An optional company name []: Step 5: Subordinate CA certificate signing by root CA certificate: # cd ../rootCA/ # openssl ca -in ../subCA/subca.csr -extensions v3_ca -config openssl.cnf Using configuration from openssl.cnf Enter pass phrase for ./private/rootca.key: Check that the request matches the signature Signature ok Certificate Details: Serial Number: 1 (0x1) Validity Not Before: Feb 4 10:49:43 2013 GMT Not After : Feb 4 10:49:43 2014 GMT Subject: countryName = AU stateOrProvinceName = Some-State organizationName = Internet Widgits Pty Ltd commonName = subca X509v3 extensions: X509v3 Subject Key Identifier: C9:E2:AC:31:53:81:86:3F:CD:F8:3D:47:10:FC:E5:8E:C2:DA:A9:20 X509v3 Authority Key Identifier: keyid:E9:50:E6:BF:57:03:EA:6E:8F:21:23:86:BB:44:3D:9F:8F:4A:8B:F2 DirName:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca serial:9F:FB:56:66:8D:D3:8F:11 X509v3 Basic Constraints: CA:TRUE Certificate is to be certified until Feb 4 10:49:43 2014 GMT (365 days) Sign the certificate? [y/n]:y 1 out of 1 certificate requests certified, commit? [y/n]y ... # cd ../subCA/ # cp -v ../rootCA/newcerts/01.pem certs/subca.crt Step 6: Server certificate generation and signing by root CA (for nginx virtual host): # cd ../rootCA # openssl genrsa -out ./private/server.key -des3 2048 # openssl req -new -key ./private/server.key -out server.csr -config openssl.cnf Enter pass phrase for ./private/server.key: You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. ----- Country Name (2 letter code) [AU]: State or Province Name (full name) [Some-State]: Locality Name (eg, city) []: Organization Name (eg, company) [Internet Widgits Pty Ltd]: Organizational Unit Name (eg, section) []: Common Name (eg, YOUR name) []:test.local Email Address []: Please enter the following 'extra' attributes to be sent with your certificate request A challenge password []: An optional company name []: # openssl ca -in server.csr -out certs/server.crt -config openssl.cnf Step 7: Client #1 certificate generation and signing by root CA: # openssl genrsa -out ./private/client1.key -des3 2048 # openssl req -new -key ./private/client1.key -out client1.csr -config openssl.cnf Enter pass phrase for ./private/client1.key: You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. 
----- Country Name (2 letter code) [AU]: State or Province Name (full name) [Some-State]: Locality Name (eg, city) []: Organization Name (eg, company) [Internet Widgits Pty Ltd]: Organizational Unit Name (eg, section) []: Common Name (eg, YOUR name) []:Client #1 Email Address []: Please enter the following 'extra' attributes to be sent with your certificate request A challenge password []: An optional company name []: # openssl ca -in client1.csr -out certs/client1.crt -config openssl.cnf Step 8: Client #1 certificate converting to PKCS12 format: # openssl pkcs12 -export -out certs/client1.p12 -inkey private/client1.key -in certs/client1.crt -certfile certs/rootca.crt Step 9: Client #2 certificate generation and signing by subordinate CA: # cd ../subCA/ # openssl genrsa -out ./private/client2.key -des3 2048 # openssl req -new -key ./private/client2.key -out client2.csr -config openssl.cnf Enter pass phrase for ./private/client2.key: You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. ----- Country Name (2 letter code) [AU]: State or Province Name (full name) [Some-State]: Locality Name (eg, city) []: Organization Name (eg, company) [Internet Widgits Pty Ltd]: Organizational Unit Name (eg, section) []: Common Name (eg, YOUR name) []:Client #2 Email Address []: Please enter the following 'extra' attributes to be sent with your certificate request A challenge password []: An optional company name []: # openssl ca -in client2.csr -out certs/client2.crt -config openssl.cnf Step 10: Client #2 certificate converting to PKCS12 format: # openssl pkcs12 -export -out certs/client2.p12 -inkey private/client2.key -in certs/client2.crt -certfile certs/subca.crt Step 11: Passing server certificate and private key to nginx (performed with OS superuser privileges): # cd ../rootCA/ # cp -v certs/server.crt /etc/nginx/ssl/ # cp -v private/server.key /etc/nginx/ssl/ Step 12: Passing root and subordinate CA certificates to nginx (performed with OS superuser privileges): # cat certs/rootca.crt > /etc/nginx/ssl/client.pem # cat ../subCA/certs/subca.crt >> /etc/nginx/ssl/client.pem client.pem file look like this: # cat /etc/nginx/ssl/client.pem -----BEGIN CERTIFICATE----- MIID6TCCAtGgAwIBAgIJAJ/7VmaN048RMA0GCSqGSIb3DQEBBQUAMFYxCzAJBgNV BAYTAkFVMRMwEQYDVQQIEwpTb21lLVN0YXRlMSEwHwYDVQQKExhJbnRlcm5ldCBX aWRnaXRzIFB0eSBMdGQxDzANBgNVBAMTBnJvb3RjYTAeFw0xMzAyMDQxMDM1NTda ... -----END CERTIFICATE----- Certificate: Data: Version: 3 (0x2) Serial Number: 1 (0x1) ... -----BEGIN CERTIFICATE----- MIID4DCCAsigAwIBAgIBATANBgkqhkiG9w0BAQUFADBWMQswCQYDVQQGEwJBVTET MBEGA1UECBMKU29tZS1TdGF0ZTEhMB8GA1UEChMYSW50ZXJuZXQgV2lkZ2l0cyBQ dHkgTHRkMQ8wDQYDVQQDEwZyb290Y2EwHhcNMTMwMjA0MTA0OTQzWhcNMTQwMjA0 ... -----END CERTIFICATE----- It looks like everything is working fine: # service nginx reload # Reloading nginx configuration: Enter PEM pass phrase: # nginx. # Step 13: Installing *.p12 certificates in browser (Firefox in my case) gives the problem I've mentioned above. Client #1 = 200 OK, Client #2 = 400 Bad request/The SSL certificate error. Any ideas what should I do? 
Update 1: Results of SSL connection test attempts: # openssl s_client -connect test.local:443 -CAfile ~/pki/rootCA/certs/rootca.crt -cert ~/pki/rootCA/certs/client1.crt -key ~/pki/rootCA/private/client1.key -showcerts Enter pass phrase for tmp/testcert/client1.key: CONNECTED(00000003) depth=1 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = rootca verify return:1 depth=0 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = test.local verify return:1 --- Certificate chain 0 s:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=test.local i:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca -----BEGIN CERTIFICATE----- MIIDpjCCAo6gAwIBAgIBAjANBgkqhkiG9w0BAQUFADBWMQswCQYDVQQGEwJBVTET MBEGA1UECBMKU29tZS1TdGF0ZTEhMB8GA1UEChMYSW50ZXJuZXQgV2lkZ2l0cyBQ dHkgTHRkMQ8wDQYDVQQDEwZyb290Y2EwHhcNMTMwMjA0MTEwNjAzWhcNMTQwMjA0 ... -----END CERTIFICATE----- 1 s:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca i:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca -----BEGIN CERTIFICATE----- MIID6TCCAtGgAwIBAgIJAJ/7VmaN048RMA0GCSqGSIb3DQEBBQUAMFYxCzAJBgNV BAYTAkFVMRMwEQYDVQQIEwpTb21lLVN0YXRlMSEwHwYDVQQKExhJbnRlcm5ldCBX aWRnaXRzIFB0eSBMdGQxDzANBgNVBAMTBnJvb3RjYTAeFw0xMzAyMDQxMDM1NTda ... -----END CERTIFICATE----- --- Server certificate subject=/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=test.local issuer=/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca --- Acceptable client certificate CA names /C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca /C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=subca --- SSL handshake has read 3395 bytes and written 2779 bytes --- New, TLSv1/SSLv3, Cipher is AES256-SHA Server public key is 2048 bit Secure Renegotiation IS supported Compression: zlib compression Expansion: zlib compression SSL-Session: Protocol : TLSv1 Cipher : AES256-SHA Session-ID: 15BFC2029691262542FAE95A48078305E76EEE7D586400F8C4F7C516B0F9D967 Session-ID-ctx: Master-Key: 23246CF166E8F3900793F0A2561879E5DB07291F32E99591BA1CF53E6229491FEAE6858BFC9AACAF271D9C3706F139C7 Key-Arg : None PSK identity: None PSK identity hint: None SRP username: None TLS session ticket: 0000 - c2 5e 1d d2 b5 6d 40 23-b2 40 89 e4 35 75 70 07 .^...m@#[email protected]. 0010 - 1b bb 2b e6 e0 b5 ab 10-10 bf 46 6e aa 67 7f 58 ..+.......Fn.g.X 0020 - cf 0e 65 a4 67 5a 15 ba-aa 93 4e dd 3d 6e 73 4c ..e.gZ....N.=nsL 0030 - c5 56 f6 06 24 0f 48 e6-38 36 de f1 b5 31 c5 86 .V..$.H.86...1.. ... 0440 - 4c 53 39 e3 92 84 d2 d0-e5 e2 f5 8a 6a a8 86 b1 LS9.........j... Compression: 1 (zlib compression) Start Time: 1359989684 Timeout : 300 (sec) Verify return code: 0 (ok) --- Everything seems fine with Client #2 and root CA certificate but request returns 400 Bad Request error: # openssl s_client -connect test.local:443 -CAfile ~/pki/rootCA/certs/rootca.crt -cert ~/pki/subCA/certs/client2.crt -key ~/pki/subCA/private/client2.key -showcerts Enter pass phrase for tmp/testcert/client2.key: CONNECTED(00000003) depth=1 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = rootca verify return:1 depth=0 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = test.local verify return:1 ... 
Compression: 1 (zlib compression) Start Time: 1359989989 Timeout : 300 (sec) Verify return code: 0 (ok) --- GET / HTTP/1.0 HTTP/1.1 400 Bad Request Server: nginx/0.7.67 Date: Mon, 04 Feb 2013 15:00:43 GMT Content-Type: text/html Content-Length: 231 Connection: close <html> <head><title>400 The SSL certificate error</title></head> <body bgcolor="white"> <center><h1>400 Bad Request</h1></center> <center>The SSL certificate error</center> <hr><center>nginx/0.7.67</center> </body> </html> closed Verification fails with Client #2 certificate and subordinate CA certificate: # openssl s_client -connect test.local:443 -CAfile ~/pki/subCA/certs/subca.crt -cert ~/pki/subCA/certs/client2.crt -key ~/pki/subCA/private/client2.key -showcerts Enter pass phrase for tmp/testcert/client2.key: CONNECTED(00000003) depth=1 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = rootca verify error:num=19:self signed certificate in certificate chain verify return:0 ... Compression: 1 (zlib compression) Start Time: 1359990354 Timeout : 300 (sec) Verify return code: 19 (self signed certificate in certificate chain) --- GET / HTTP/1.0 HTTP/1.1 400 Bad Request ... Still getting 400 Bad Request error with concatenated CA certificates and Client #2 (but still everything ok with Client #1): # cat certs/rootca.crt ../subCA/certs/subca.crt > certs/concatenatedca.crt # openssl s_client -connect test.local:443 -CAfile ~/pki/rootCA/certs/concatenatedca.crt -cert ~/pki/subCA/certs/client2.crt -key ~/pki/subCA/private/client2.key -showcerts Enter pass phrase for tmp/testcert/client2.key: CONNECTED(00000003) depth=1 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = rootca verify return:1 depth=0 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = test.local verify return:1 --- ... Compression: 1 (zlib compression) Start Time: 1359990772 Timeout : 300 (sec) Verify return code: 0 (ok) --- GET / HTTP/1.0 HTTP/1.1 400 Bad Request ... Update 2: I've managed to recompile nginx with enabled debug. 
Here is the part of successfull conection by Client #1 track: 2013/02/05 14:08:23 [debug] 38701#0: *119 accept: <MY IP ADDRESS> fd:3 2013/02/05 14:08:23 [debug] 38701#0: *119 event timer add: 3: 60000:2856497512 2013/02/05 14:08:23 [debug] 38701#0: *119 kevent set event: 3: ft:-1 fl:0025 2013/02/05 14:08:23 [debug] 38701#0: *119 malloc: 28805200:660 2013/02/05 14:08:23 [debug] 38701#0: *119 malloc: 28834400:1024 2013/02/05 14:08:23 [debug] 38701#0: *119 posix_memalign: 28860000:4096 @16 2013/02/05 14:08:23 [debug] 38701#0: *119 http check ssl handshake 2013/02/05 14:08:23 [debug] 38701#0: *119 https ssl handshake: 0x16 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL server name: "test.local" 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_do_handshake: -1 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_get_error: 2 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL handshake handler: 0 2013/02/05 14:08:23 [debug] 38701#0: *119 verify:1, error:0, depth:1, subject:"/C=AU /ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca",issuer: "/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca" 2013/02/05 14:08:23 [debug] 38701#0: *119 verify:1, error:0, depth:0, subject:"/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=Client #1",issuer: "/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca" 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_do_handshake: 1 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL: TLSv1, cipher: "AES256-SHA SSLv3 Kx=RSA Au=RSA Enc=AES(256) Mac=SHA1" 2013/02/05 14:08:23 [debug] 38701#0: *119 http process request line 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_read: -1 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_get_error: 2 2013/02/05 14:08:23 [debug] 38701#0: *119 http process request line 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_read: 1 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_read: 524 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_read: -1 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_get_error: 2 2013/02/05 14:08:23 [debug] 38701#0: *119 http request line: "GET / HTTP/1.1" And here is the part of unsuccessfull conection by Client #2 track: 2013/02/05 13:51:34 [debug] 38701#0: *112 accept: <MY_IP_ADDRESS> fd:3 2013/02/05 13:51:34 [debug] 38701#0: *112 event timer add: 3: 60000:2855488975 2013/02/05 13:51:34 [debug] 38701#0: *112 kevent set event: 3: ft:-1 fl:0025 2013/02/05 13:51:34 [debug] 38701#0: *112 malloc: 28805200:660 2013/02/05 13:51:34 [debug] 38701#0: *112 malloc: 28834400:1024 2013/02/05 13:51:34 [debug] 38701#0: *112 posix_memalign: 28860000:4096 @16 2013/02/05 13:51:34 [debug] 38701#0: *112 http check ssl handshake 2013/02/05 13:51:34 [debug] 38701#0: *112 https ssl handshake: 0x16 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL server name: "test.local" 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_do_handshake: -1 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_get_error: 2 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL handshake handler: 0 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_do_handshake: -1 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_get_error: 2 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL handshake handler: 0 2013/02/05 13:51:34 [debug] 38701#0: *112 verify:0, error:20, depth:1, subject:"/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=subca",issuer: "/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca" 2013/02/05 13:51:34 [debug] 38701#0: *112 verify:0, error:27, depth:1, subject:"/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=subca",issuer: "/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca" 2013/02/05 13:51:34 [debug] 
38701#0: *112 verify:1, error:27, depth:0, subject:"/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=Client #2",issuer: "/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=subca" 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_do_handshake: 1 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL: TLSv1, cipher: "AES256-SHA SSLv3 Kx=RSA Au=RSA Enc=AES(256) Mac=SHA1" 2013/02/05 13:51:34 [debug] 38701#0: *112 http process request line 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_read: 1 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_read: 524 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_read: -1 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_get_error: 2 2013/02/05 13:51:34 [debug] 38701#0: *112 http request line: "GET / HTTP/1.1" So I'm getting OpenSSL error #20 and then #27. According to verify documentation: 20 X509_V_ERR_UNABLE_TO_GET_ISSUER_CERT_LOCALLY: unable to get local issuer certificate the issuer certificate could not be found: this occurs if the issuer certificate of an untrusted certificate cannot be found. 27 X509_V_ERR_CERT_UNTRUSTED: certificate not trusted the root CA is not marked as trusted for the specified purpose.
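    Error 20 (unable to get local issuer certificate) followed by 27 at depth 1 is what nginx logs when it cannot finish building the chain client certificate -> subca -> rootca. One setting worth checking before anything else: ssl_verify_depth defaults to 1 in nginx, which is only enough for client certificates signed directly by a CA listed in ssl_client_certificate; a two-level hierarchy needs at least 2. A sketch of the relevant lines, keeping the concatenated client.pem from step 12 as is:

    ```nginx
    # Sketch: let verification walk the full chain client -> subca -> rootca.
    # ssl_verify_depth defaults to 1, which only covers client certificates
    # signed directly by a CA present in ssl_client_certificate.
    server {
        # ... other directives from step 1 unchanged ...
        ssl_client_certificate /etc/nginx/ssl/client.pem;   # rootca.crt + subca.crt from step 12
        ssl_verify_client      on;
        ssl_verify_depth       2;
    }
    ```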


  • Bugs in Excel's ActiveX combo boxes?

    - by k.robinson
    I have noticed that I get all sorts of annoying errors when: I have ActiveX comboboxes on a worksheet (not an excel form) The comboboxes have event code linked to them (eg, onchange events) I use their listfillrange or linkedcell properties (clearing these properties seems to alleviate a lot of problems) (Not sure if this is connected) but there is data validation on the targeted linkedcell. I program a fairly complex excel application that does a ton of event handling and uses a lot of controls. Over the months, I have been trying to deal with a variety of bugs dealing with those combo boxes. I can't recall all the details of each instance now, but these bugs tend to involve pointing the listfillrange and linkedcell properties at named ranges, and often have to do with the combo box events triggering at inappropriate times (such as when application.enableevents = false). These problems seemed to grow bigger in Excel 2007, so that I had to give up on these combo boxes entirely (I now use combo boxes contained in user forms, rather than directly on the sheets). Has anyone else seen similar problems? If so, was there a graceful solution? I have looked around with Google and so far haven't spotted anyone with similar issues. Some of the symptoms I end up seeing are: Excel crashing when I start up (involves combobox_onchange, listfillrange-named range on another different sheet, and workbook_open interactions). (note, I also had some data validation on the the linked cells in case a user edited them directly.) Excel rendering bugs (usually when the combo box changes, some cells from another sheet get randomly drawn over the top of the current sheet) Sometimes it involves the screen flashing entirely to another sheet for a moment. Excel losing its mind (or rather, the call stack) (related to the first bullet point). Sometimes when a function modifies a property of the comboboxes, the combobox onchange event fires, but it never returns control to the function that caused the change in the first place. The combobox_onchange events are triggered even when application.enableevents = false. Events firing when they shouldn't (I posted another question on stack overflow related to this). At this point, I am fairly convinced that ActiveX comboboxes are evil incarnate and not worth the trouble. I have switched to including these comboboxes inside a userform module instead. I would rather inconvenience users with popup forms than random visual artifacts and crashing (with data loss).
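    One of the symptoms above has a known explanation: Application.EnableEvents only suppresses Excel object-model events (Workbook, Worksheet, Application); it does not silence events raised by ActiveX (MSForms) controls, so a ComboBox Change handler will still fire while it is False. The usual workaround is a module-level guard flag, sketched below with illustrative names:

    ```vba
    ' Sketch: guard ActiveX control events with a flag, since Application.EnableEvents
    ' does not apply to them. Control and sheet names here are illustrative.
    Private mSuppressEvents As Boolean

    Private Sub ComboBox1_Change()
        If mSuppressEvents Then Exit Sub
        ' ... normal handling ...
    End Sub

    Private Sub RepopulateCombo()
        mSuppressEvents = True
        ComboBox1.Clear
        ComboBox1.List = Worksheets("Lists").Range("A1:A10").Value   ' avoids ListFillRange
        mSuppressEvents = False
    End Sub
    ```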


  • How to best design a date/geographic proximity query on GAE?

    - by Dane
    Hi all, I'm building a directory for finding athletic tournaments on GAE with web2py and a Flex front end. The user selects a location, a radius, and a maximum date from a set of choices. I have a basic version of this query implemented, but it's inefficient and slow. One way I know I can improve it is by condensing the many individual queries I'm using to assemble the objects into bulk queries. I just learned that was possible. But I'm also thinking about a more extensive redesign that utilizes memcache. The main problem is that I can't query the datastore by location because GAE won't allow multiple numerical comparison statements (<, <=, >=, >) in one query. I'm already using one for date, and I'd need TWO to check both latitude and longitude, so it's a no go. Currently, my algorithm looks like this: 1.) Query by date and select 2.) Use destination function from geopy's distance module to find the max and min latitudes and longitudes for supplied distance 3.) Loop through results and remove all with lat/lng outside max/min 4.) Loop through again and use distance function to check exact distance, because step 2 will include some areas outside the radius. Remove results outside supplied distance (is this 2/3/4 combination inefficient?) 5.) Assemble many-to-many lists and attach to objects (this is where I need to switch to bulk operations) 6.) Return to client Here's my plan for using memcache... let me know if I'm way out in left field on this as I have no prior experience with memcache or server caching in general. -Keep a list in the cache filled with "geo objects" that represent all my data. These have five properties: latitude, longitude, event_id, event_type (in anticipation of expanding beyond tournaments), and start_date. This list will be sorted by date. -Also keep a dict of pointers in the cache which represent the start and end indices in the cache for all the date ranges my app uses (next week, 2 weeks, month, 3 months, 6 months, year, 2 years). -Have a scheduled task that updates the pointers daily at 12am. -Add new inserts to the cache as well as the datastore; update pointers. Using this design, the algorithm would now look like: 1.) Use pointers to slice off appropriate chunk of list based on supplied date. 2-4.) Same as above algorithm, except with geo objects 5.) Use bulk operation to select full tournaments using remaining geo objects' event_ids 6.) Assemble many-to-manys 7.) Return to client Thoughts on this approach? Many thanks for reading and any advice you can give. -Dane
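    For reference, steps 2-4 of the algorithm (bounding box from the radius, then an exact distance pass) look roughly like the sketch below in Python with geopy; the object and field names are assumptions, not taken from the actual app:

    ```python
    # Sketch of steps 2-4: bounding-box prefilter, then exact great-circle distance.
    # "results" is assumed to be an iterable of objects with .latitude / .longitude.
    from geopy.distance import distance

    def filter_by_radius(results, center, radius_km):
        reach = distance(kilometers=radius_km)
        north = reach.destination(center, bearing=0)
        south = reach.destination(center, bearing=180)
        east = reach.destination(center, bearing=90)
        west = reach.destination(center, bearing=270)

        min_lat, max_lat = south.latitude, north.latitude
        min_lng, max_lng = west.longitude, east.longitude

        nearby = []
        for r in results:
            # cheap box test first (step 3) ...
            if not (min_lat <= r.latitude <= max_lat and min_lng <= r.longitude <= max_lng):
                continue
            # ... then the exact distance test (step 4)
            if distance(center, (r.latitude, r.longitude)).km <= radius_km:
                nearby.append(r)
        return nearby
    ```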


  • Google Chrome audit on caching

    - by Álvaro G. Vicario
    If I run an audit on my sites with Google Chrome, I get this message in the Leverage browser caching section: The following resources are missing a cache expiration. Resources that do not specify an expiration may not be cached by browsers: A list of all the pictures follows. I get a similar notice in Leverage proxy caching: Consider adding a "Cache-Control: public" header to the following resources: Apart from pictures, I also get a notice about HTML, CSS and JavaScript files: The following resources are explicitly non-cacheable. Consider making them cacheable if possible: Its funny because I've worked hard to cache all static contents (except for pictures, where I just left Apache's default settings). Firefox does indeed store all these items in cache. Is there anything I should improve in my HTTP headers? Here's the complete header set of some items as loaded after removing the browser caché. Pictures use default settings I didn't really check before, the rest should be cachéd for three hours. I can set headers with both .htaccess and PHP. PNG HTTP/1.1 200 OK Date: Sat, 31 Jul 2010 12:46:14 GMT Server: Apache Last-Modified: Thu, 18 Mar 2010 21:40:54 GMT Etag: "c48024-230-4821a15d6c580" Accept-Ranges: bytes Content-Length: 560 Keep-Alive: timeout=4 Connection: Keep-Alive Content-Type: image/png HTML HTTP/1.1 200 OK Date: Sat, 31 Jul 2010 12:46:13 GMT Server: Apache X-Powered-By: PHP/5.2.11 Expires: Sat, 31 Jul 2010 15:46:13 GMT Cache-Control: max-age=10800, s-maxage=10800, must-revalidate, proxy-revalidate Content-Encoding: gzip Vary: Accept-Encoding Last-Modified: Wed, 24 Mar 2010 20:30:36 GMT Keep-Alive: timeout=4 Connection: Keep-Alive Transfer-Encoding: chunked Content-Type: text/html; charset=ISO-8859-15 CSS HTTP/1.1 200 OK Date: Sat, 31 Jul 2010 12:48:21 GMT Server: Apache X-Powered-By: PHP/5.2.11 Expires: Sat, 31 Jul 2010 15:48:21 GMT Cache-Control: max-age=10800, s-maxage=10800, must-revalidate, proxy-revalidate Content-Encoding: gzip Vary: Accept-Encoding Last-Modified: Thu, 18 Mar 2010 21:40:12 GMT Keep-Alive: timeout=4 Connection: Keep-Alive Transfer-Encoding: chunked Content-Type: text/css JavaScript HTTP/1.1 200 OK Date: Sat, 31 Jul 2010 12:48:21 GMT Server: Apache X-Powered-By: PHP/5.2.11 Expires: Sat, 31 Jul 2010 15:48:21 GMT Cache-Control: max-age=10800, s-maxage=10800, must-revalidate, proxy-revalidate Content-Encoding: gzip Vary: Accept-Encoding Last-Modified: Thu, 18 Mar 2010 21:40:12 GMT Keep-Alive: timeout=4 Connection: Keep-Alive Transfer-Encoding: chunked Content-Type: application/x-javascript Update I've tested Jumby's suggestion and set my CSS's expire to 1 year: Cache-Control:max-age=31536000, s-maxage=31536000, must-revalidate, proxy-revalidate Connection:Keep-Alive Content-Encoding:gzip Content-Length:4198 Content-Type:text/css Date:Mon, 02 Aug 2010 20:48:56 GMT Expires:Tue, 02 Aug 2011 20:48:56 GMT Keep-Alive:timeout=5, max=99 Last-Modified:Thu, 18 Mar 2010 20:40:12 GMT Server:Apache/2.2.14 (Win32) PHP/5.3.1 Vary:Accept-Encoding X-Powered-By:PHP/5.3.1 However, Chrome still claims "explicitly non-cacheable".


  • Find location using only distance and range?

    - by pinnacler
    Triangulation works by checking your angle to three KNOWN targets. "I know that that's the Lighthouse of Alexandria, it's located here (X,Y) on a map, and it's to my right at 90 degrees." Repeat 2 more times for different targets and angles. Trilateration works by checking your distance from three KNOWN targets. "I know that that's the Lighthouse of Alexandria, it's located here (X,Y) on a map, and I'm 100 meters away from that." Repeat 2 more times for different targets and ranges. But both of those methods rely on knowing WHAT you're looking at. Say you're in a forest and you can't differentiate between trees, but you know where key trees are. These trees have been hand-picked as "landmarks." You have a robot moving through that forest slowly. Do you know of any ways to determine location based solely off of angle and range, exploiting geometry between landmarks? Note, you will see other trees as well, so you won't know which trees are key trees. Ignore the fact that a target may be occluded. Our pre-algorithm takes care of that. 1) If this exists, what's it called? I can't find anything. 2) What do you think the odds are of having two identical location 'hits?' I imagine it's fairly rare. 3) If there are two identical location 'hits,' how can I determine my exact location after I move the robot next? (I assume the chances of having 2 occurrences of EXACT angles in a row, after I reposition the robot, would be statistically impossible, barring a forest growing in rows like corn). Would I just calculate the position again and hope for the best? Or would I somehow incorporate my previous position estimate into my next guess? If this exists, I'd like to read about it, and if not, develop it as a side project. I just don't have time to reinvent the wheel right now, nor have the time to implement this from scratch. So if it doesn't exist, I'll have to figure out another way to localize the robot since that's not the aim of this research; if it does, let's hope it's semi-easy.



  • Function Point Analysis -- a seriously over-estimating technique?

    - by kizzx2
    I know questions about FPA has been asked numerous times before, but this time I'm taking a more analytical angle at it, backed up with data. 1. First, some data This question is based on a tutorial. He had a "Sample Count" section where he demonstrated it step by step. You can see some screenshots of his sample application here. In the end, he calculated the unadjusted FP to be 99. There is another article on InformIT with industry data on typical hour/FP. It ranges from 2 hours/FP to 27.4 hours/FP. Let's try to stick with 2 for the moment (since SO readers are probably the more efficient crowd :p). 2. Reality check!? Now just check out the screenshots again. Do a little math here 99 * 2 = 198 hours 198 hours / 40 hours per week = 5 weeks Seriously? That sample application is going to take 5 weeks to implement? Is it just my feeling that it wouldn't take any decent programmer longer than one week to have it completed? Now let's try estimating the cost of the project. We'll use New York's minimum wage at the moment (Wikipedia), which is $7.25 198 * 7.25 = $1435.5 From what I could see from the screenshots, this application is a small excel-improvement app. I could have bought MS Office Pro for 200 bucks which gives me greater interoperability (.xls files) and flexibility (spreadsheets). (For the record, that same Web site has another article discussing productivity. It seems like they typically use 4.2 hours/FP, which gives us even more shocking stats: 99 * 4.2 = 415 hours = 10 weeks = almost 3 whopping months! 415 hours * $7.25 = $3000 zomg (That's even assuming that all our poor coders get the minimum wage!) 3. Am I missing something here? Right now, I could come up with several possible explanation: FPA is really only suited for bigger projects (1000+ FPs) so it becomes extremely inaccurate at smaller scale. The hours/FP metric fluctuates abruptly from team to team, project to project. For a small project like this, we could have used something like 0.5 hour/FP or something. (Now this kind of makes the whole estimation thing pointless, unless my firm does the same type of projects for several years with the same team, not really common.) From my experience with several software metrics, Function Point is really not a lightweight metric. If the hour/FP thing fluctuates so much, then what's the point, maybe I could have gone with User Story Points which is a lot faster to get and arguably almost as uncertain. What would be the FP experts' answers to this?

    Read the article

  • PostgreSQL - fetch the row which has the Max value for a column

    - by Joshua Berry
    I'm dealing with a Postgres table (called "lives") that contains records with columns for time_stamp, usr_id, transaction_id, and lives_remaining. I need a query that will give me the most recent lives_remaining total for each usr_id There are multiple users (distinct usr_id's) time_stamp is not a unique identifier: sometimes user events (one by row in the table) will occur with the same time_stamp. trans_id is unique only for very small time ranges: over time it repeats remaining_lives (for a given user) can both increase and decrease over time example: time_stamp|lives_remaining|usr_id|trans_id ----------------------------------------- 07:00 | 1 | 1 | 1 09:00 | 4 | 2 | 2 10:00 | 2 | 3 | 3 10:00 | 1 | 2 | 4 11:00 | 4 | 1 | 5 11:00 | 3 | 1 | 6 13:00 | 3 | 3 | 1 As I will need to access other columns of the row with the latest data for each given usr_id, I need a query that gives a result like this: time_stamp|lives_remaining|usr_id|trans_id ----------------------------------------- 11:00 | 3 | 1 | 6 10:00 | 1 | 2 | 4 13:00 | 3 | 3 | 1 As mentioned, each usr_id can gain or lose lives, and sometimes these timestamped events occur so close together that they have the same timestamp! Therefore this query won't work: SELECT b.time_stamp,b.lives_remaining,b.usr_id,b.trans_id FROM (SELECT usr_id, max(time_stamp) AS max_timestamp FROM lives GROUP BY usr_id ORDER BY usr_id) a JOIN lives b ON a.max_timestamp = b.time_stamp Instead, I need to use both time_stamp (first) and trans_id (second) to identify the correct row. I also then need to pass that information from the subquery to the main query that will provide the data for the other columns of the appropriate rows. This is the hacked up query that I've gotten to work: SELECT b.time_stamp,b.lives_remaining,b.usr_id,b.trans_id FROM (SELECT usr_id, max(time_stamp || '*' || trans_id) AS max_timestamp_transid FROM lives GROUP BY usr_id ORDER BY usr_id) a JOIN lives b ON a.max_timestamp_transid = b.time_stamp || '*' || b.trans_id ORDER BY b.usr_id Okay, so this works, but I don't like it. It requires a query within a query, a self join, and it seems to me that it could be much simpler by grabbing the row that MAX found to have the largest timestamp and trans_id. The table "lives" has tens of millions of rows to parse, so I'd like this query to be as fast and efficient as possible. I'm new to RDBM and Postgres in particular, so I know that I need to make effective use of the proper indexes. I'm a bit lost on how to optimize. I found a similar discussion here. Can I perform some type of Postgres equivalent to an Oracle analytic function? Any advice on accessing related column information used by an aggregate function (like MAX), creating indexes, and creating better queries would be much appreciated! P.S. You can use the following to create my example case: create TABLE lives (time_stamp timestamp, lives_remaining integer, usr_id integer, trans_id integer); insert into lives values ('2000-01-01 07:00', 1, 1, 1); insert into lives values ('2000-01-01 09:00', 4, 2, 2); insert into lives values ('2000-01-01 10:00', 2, 3, 3); insert into lives values ('2000-01-01 10:00', 1, 2, 4); insert into lives values ('2000-01-01 11:00', 4, 1, 5); insert into lives values ('2000-01-01 11:00', 3, 1, 6); insert into lives values ('2000-01-01 13:00', 3, 3, 1);

    Read the article

  • (resolved) empty response body in ajax (or 206 Partial Content)

    - by Nikita Rybak
    Hi guys, I'm feeling completely stupid because I've spent two hours solving task which should be very simple and which I solved many times before. But now I'm not even sure in which direction to dig. I fail to fetch static content using ajax from local servers (Apache and Mongrel). I get responses 200 and 206 (depending on the server), empty response text (although Content-Length header is always correct), firebug shows request in red. Javascript is very generic, I'm getting same results even here: http://www.w3schools.com/ajax/tryit.asp?filename=tryajax_first (just change document location to 'http://localhost:3000/whatever') So, it's probably not the cause. Well, now I'm out of ideas. I can also post http headers, if it'll help. Thanks! Response Headers Connection close Date Sat, 01 May 2010 21:05:23 GMT Last-Modified Sun, 18 Apr 2010 19:33:26 GMT Content-Type text/html Content-Length 7466 Request Headers Host localhost:3000 User-Agent Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 Accept text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language en-us,en;q=0.5 Accept-Encoding gzip,deflate Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive 115 Connection keep-alive Referer http://www.w3schools.com/ajax/tryit_view.asp Origin http://www.w3schools.com Response Headers Date Sat, 01 May 2010 21:54:59 GMT Server Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 mod_jk/1.2.28 Etag "3d5cbdb-fb4-4819c460d4a40" Accept-Ranges bytes Content-Length 4020 Cache-Control max-age=7200, public, proxy-revalidate Expires Sat, 01 May 2010 23:54:59 GMT Content-Range bytes 0-4019/4020 Keep-Alive timeout=5, max=100 Connection Keep-Alive Content-Type application/javascript Request Headers Host localhost User-Agent Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 Accept text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language en-us,en;q=0.5 Accept-Encoding gzip,deflate Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive 115 Connection keep-alive Origin null UPDATED: I've found a problem, it was about cross-domain requests. I knew that there are restrictions, but thought they're relaxed for local filesystem and local servers. (and expected more descriptive error message, anyway) Thanks everybody!

    Read the article

  • How To Create A Download Quota.

    - by snikolov
    I need to create an handy file down loader which will count the amount of bytes downloaded and stop when it has exceed a preset limit. i need to mirror some files but i only have 7 gb per moth of bandwidth and i dont want to exceed the limit. Example limits can be in bytes or number of files, each user has their own limit, as well as a limit for Download Quota itself. So if you set a limit of 2 gigabytes for Download Quota, downloads stop at 2 gigabytes, even if you have 3 users with a limit of 1 gigabyte each. if ($range) { //pass client Range header to rapidshare // _insert($range); $cookie .= "\r\nRange: $range"; $multipart = true; header("X-UR-RANGE-Range: $range"); } //octet-stream + attachment => client always stores file header('Content-type: application/octet-stream'); header('Content-Disposition: attachment; filename="' . $fn . '"'); //always included so clients know this script supports resuming header("Accept-Ranges: bytes"); //awful hack to pass rapidshare the premium cookie $user_agent = ini_get("user_agent"); ini_set("user_agent", $user_agent . "\r\nCookie: enc=$cookie"); $httphandle = fopen($url, "r"); $headers = stream_get_meta_data($httphandle); //let's check the return header of rapidshare for range / length indicators //we'll just pass these to the client foreach ($headers["wrapper_data"] as $header) { $header = trim($header); if (substr(strtolower($header), 0, strlen("content-range")) == "content-range") { // _insert($range); header($header); header("X-RS-RANGE-" . $header); $multipart = true; //content-range indicates partial download } elseif (substr(strtolower($header), 0, strlen("Content-Length")) == "content-length") { // _insert($range); header($header); header("X-RS-CL-" . $header); } } //now show the client he has a partial download if ($multipart) header('HTTP/1.1 206 Partial Content'); flush(); $download_rate = 100; while (!feof($httphandle)) { // send the current file part to the browser $var_stat = fread($httphandle, round($download_rate * 1024)); $var12 = strlen($var_stat); ////////////////////////////////// echo $var_stat; ///////////////////////////////// // flush the content to the browser flush(); // sleep one second sleep(1); }

    Read the article

  • Php miniserver downloader bandwidth

    - by snikolov
    $httpsock = @socket_create_listen("9090"); if (!$httpsock) { print "Socket creation failed!\n"; exit; } while (1) { $client = socket_accept($httpsock); $input = trim(socket_read ($client, 4096)); $input = explode(" ", $input); $input = $input[1]; $fileinfo = pathinfo($input); switch ($fileinfo['extension']) { default: $mime = "text/html"; } if ($input == "/") { $input = "index.html"; } $input = ".$input"; if (file_exists($input) && is_readable($input)) { echo "Serving $input\n"; $contents = file_get_contents($input); $output = "HTTP/1.0 200 OK\r\nServer: APatchyServer\r\nConnection: close\r\nContent-Type: $mime\r\n\r\n$contents"; } else { //$contents = "The file you requested doesn't exist. Sorry!"; //$output = "HTTP/1.0 404 OBJECT NOT FOUND\r\nServer: BabyHTTP\r\nConnection: close\r\nContent-Type: text/html\r\n\r\n$contents"; $filename = "dada"; $file = fopen($filename, 'r'); $filesize = filesize($filename); $buffer = fread($file, $filesize); $send = array("Output"=$buffer,"filesize"=$filesize,"filename"=$filename); $file = $send['filename']; $filesize = $send['filesize']; $output = 'HTTP/1.0 200 OK\r\n'; $output .= "Content-type: application/octet-stream\r\n"; $output .= 'Content-Disposition: attachment; filename="'.$file.'"\r\n'; $output .= "Content-Length:$filesize\r\n"; $output .= "Accept-Ranges: bytes\r\n"; $output .= "Cache-Control: private\n\n"; $output .= $send['Output']; $output .= "Content-Transfer-Encoding: binary"; $output .= "Connection: Keep-Alive\r\n"; } socket_write($client, $output); socket_close ($client); } socket_close ($httpsock); Hello i have create a code here to work as a webserver downloader the client can request files and download, i got it working at last however, i would like to know if the webserver can show much bandwidth the user request via sockets, perl has the same option as php but its more hardcore than php, i dont understand much about perl, i even saw that a miniwebserver can show much the client user pulls from the server would it be possible that you can assist me with this coding, i much aprreciate it thank you guys.

    Read the article

  • Function Point Analysis -- a seriously overestimating technique?

    - by kizzx2
    I know questions about FPA has been asked numerous times before, but this time I'm taking a more analytical angle at it, backed up with data. 1. First, some data This question is based on a tutorial. He had a "Sample Count" section where he demonstrated it step by step. You can see some screenshots of his sample application here. In the end, he calculated the unadjusted FP to be 99. There is another article on InformIT with industry data on typical hour/FP. It ranges from 2 hours/FP to 27.4 hours/FP. Let's try to stick with 2 for the moment (since SO readers are probably the more efficient crowd :p). 2. Reality check!? Now just check out the screenshots again. Do a little math here 99 * 2 = 198 hours 198 hours / 40 hours per week = 5 weeks Seriously? That sample application is going to take 5 weeks to implement? Is it just my feeling that it wouldn't take any decent programmer longer than one week (I"m not even saying weekend) to have it completed? Now let's try estimating the cost of the project. We'll use New York's minimum wage at the moment (Wikipedia), which is $7.25 198 * 7.25 = $1435.5 From what I could see from the screenshots, this application is a small excel-improvement app. I could have bought MS Office Pro for 200 bucks which gives me greater interoperability (.xls files) and flexibility (spreadsheets). (For the record, that same Web site has another article discussing productivity. It seems like they typically use 4.2 hours/FP, which gives us even more shocking stats: 99 * 4.2 = 415 hours = 10 weeks = almost 3 whopping months! 415 hours * $7.25 = $3000 zomg (That's even assuming that all our poor coders get the minimum wage!) 3. Am I missing something here? Right now, I could come up with several possible explanation: FPA is really only suited for bigger projects (1000+ FPs) so it becomes extremely inaccurate at smaller scale. The hours/FP metric fluctuates abruptly from team to team, project to project. For a small project like this, we could have used something like 0.5 hour/FP or something. (Now this kind of makes the whole estimation thing pointless, unless my firm does the same type of projects for several years with the same team, not really common.) From my experience with several software metrics, Function Point is really not a lightweight metric. If the hour/FP thing fluctuates so much, then what's the point, maybe I could have gone with User Story Points which is a lot faster to get and arguably almost as uncertain. What would be the FP experts' answers to this?

    Read the article

  • PHP miniwebsever file download

    - by snikolov
    $httpsock = @socket_create_listen("9090"); if (!$httpsock) { print "Socket creation failed!\n"; exit; } while (1) { $client = socket_accept($httpsock); $input = trim(socket_read ($client, 4096)); $input = explode(" ", $input); $input = $input[1]; $fileinfo = pathinfo($input); switch ($fileinfo['extension']) { default: $mime = "text/html"; } if ($input == "/") { $input = "index.html"; } $input = ".$input"; if (file_exists($input) && is_readable($input)) { echo "Serving $input\n"; $contents = file_get_contents($input); $output = "HTTP/1.0 200 OK\r\nServer: APatchyServer\r\nConnection: close\r\nContent-Type: $mime\r\n\r\n$contents"; } else { //$contents = "The file you requested doesn't exist. Sorry!"; //$output = "HTTP/1.0 404 OBJECT NOT FOUND\r\nServer: BabyHTTP\r\nConnection: close\r\nContent-Type: text/html\r\n\r\n$contents"; function openfile() { $filename = "a.pl"; $file = fopen($filename, 'r'); $filesize = filesize($filename); $buffer = fread($file, $filesize); $array = array("Output"=$buffer,"filesize"=$filesize,"filename"=$filename); return $array; } $send = openfile(); $file = $send['filename']; $filesize = $send['filesize']; $output = 'HTTP/1.0 200 OK\r\n'; $output .= "Content-type: application/octet-stream\r\n"; $output .= 'Content-Disposition: attachment; filename="'.$file.'"\r\n'; $output .= "Content-Length:$filesize\r\n"; $output .= "Accept-Ranges: bytes\r\n"; $output .= "Cache-Control: private\n\n"; $output .= $send['Output']; $output .= "Content-Transfer-Encoding: binary"; $output .= "Connection: Keep-Alive\r\n"; } socket_write($client, $output); socket_close ($client); } socket_close ($httpsock); Hello, I am snikolov i am creating a miniwebserver with php and i would like to know how i can send the client a file to download with his browser such as firefox or internet explore i am sending a file to the user to download via sockets, but the cleint is not getting the filename and the information to download can you please help me here,if i declare the file again i get this error in my server Fatal error: Cannot redeclare openfile() (previously declared in C:\User s\fsfdsf\sfdsfsdf\httpd.php:31) in C:\Users\hfghfgh\hfghg\httpd.php on li ne 29, if its possible, i would like to know if the webserver can show much banwdidth the user request via sockets, perl has the same option as php but its more hardcore than php i dont understand much about perl, i even saw that a miniwebserver can show much the client user pulls from the server would it be possible that you can assist me with this coding, i much aprreciate it thank you guys.

    Read the article

  • Multiplying matrices: error: expected primary-expression before 'struct'

    - by justin
    I am trying to write a program that is supposed to multiply matrices using threads. I am supposed to fill the matrices using random numbers in a thread. I am compiling in g++ and using PTHREADS. I have also created a struct to pass the data from my command line input to the thread so it can generate the matrix of random numbers. The sizes of the two matrices are also passed in the command line as well. I keep getting: main.cpp:7: error: expected primary-expression before 'struct' my code @ line 7 =: struct a{ int Arow; int Acol; int low; int high; }; My inpust are : Sizes of two matrices ( 4 arguments) high and low ranges in which o generate the random numbers between. Complete code: [headers] using namespace std; void *matrixACreate(struct *); void *status; int main(int argc, char * argv[]) { int Arow = atoi(argv[1]); // Matrix A int Acol = atoi(argv[2]); // WxX int Brow = atoi(argv[3]); // Matrix B int Bcol = atoi(argv[4]); // XxZ, int low = atoi(argv[5]); // Range low int high = atoi(argv[6]); struct a{ int Arow; // Matrix A int Acol; // WxX int low; // Range low int high; }; pthread_t matrixAthread; //pthread_t matrixBthread; pthread_t runner; int error, retValue; if (Acol != Brow) { cout << " This matrix cannot be multiplied. FAIL" << endl; return 0; } error = pthread_create(&matrixAthread, NULL, matrixACreate, struct *a); //error = pthread_create(&matrixAthread, NULL, matrixBCreate, sendB); retValue = pthread_join(matrixAthread, &status); //retValue = pthread_join(matrixBthread, &status); return 0; } void matrixACreate(struct * a) { struct a *data = (struct a *) malloc(sizeof(struct a)); data->Arow = Arow; data->Acol = Acol; data->low = low; data->high = high; int range = ((high - low) + 1); cout << Arow << endl<< Acol << endl; }// just trying to print to see if I am in the thread

    Read the article

  • Techniques for querying a set of object in-memory in a Java application

    - by Edd Grant
    Hi All, We have a system which performs a 'coarse search' by invoking an interface on another system which returns a set of Java objects. Once we have received the search results I need to be able to further filter the resulting Java objects based on certain criteria describing the state of the attributes (e.g. from the initial objects return all objects where x.y z && a.b == c). The criteria used to filter the set of objects each time is partially user configurable, by this I mean that users will be able to select the values and ranges to match on but the attributes they can pick from will be a fixed set. The data sets are likely to contain <= 10,000 objects for each search. The search will be executed manually by the application user base probably no more than 2000 times a day (approx). It's probably worth mentioning that all the objects in the result set are known domain object classes which have Hibernate and JPA annotations describing their structure and relationship. Off the top of my head I can think of 3 ways of doing this: For each search persist the initial result set objects in our database, then use Hibernate to re-query them using the finer grained criteria. Use an in-memory Database (such as hsqldb?) to query and refine the initial result set. Write some custom code which iterates the initial result set and pulls out the desired records. Option 1 seems to involve a lot of toing and froing across a network to a physical Database (Oracle 10g) which might result in a lot of network and disk activity. It would also require the results from each search to be isolated from other result sets to ensure that different searches don't interfere with each other. Option 2 seems like a good idea in principle as it would allow me to do the finer query in memory and would not require the persistence of result data which would only be discarded after the search was complete. Gut feeling is that this could be pretty performant too but might result in larger memory overheads (which is fine as we can be pretty flexible on the amount of memory our JVM gets). Option 3 could be very performant but is something I would like to avoid as any code we write would require such careful testing that the time taken to acheive something flexible and robust enough would probably be prohibitive. I don't have time to prototype all 3 ideas so I am looking for comments people may have on the 3 options above, plus any further ideas I have not considered, to help me decide which idea might be most suitable. I'm currently leaning toward option 2 (in memory database) so would be keen to hear from people with experience of querying POJOs in memory too. Hopefully I have described the situation in enough detail but don't hesitate to ask if any further information is required to better understand the scenario. Cheers, Edd

    Read the article

  • Finding open contiguous blocks of time for every day of a month, fast

    - by Chris
    I am working on a booking availability system for a group of several venues, and am having a hard time generating the availability of time blocks for days in a given month. This is happening server-side in PHP, but the concept itself is language agnostic -- I could be doing this in JS or anything else. Given a venue_id, month, and year (6/2012 for example), I have a list of all events occurring in that range at that venue, represented as unix timestamps start and end. This data comes from the database. I need to establish what, if any, contiguous block of time of a minimum length (different per venue) exist on each day. For example, on 6/1 I have an event between 2:00pm and 7:00pm. The minimum time is 5 hours, so there's a block open there from 9am - 2pm and another between 7pm and 12pm. This would continue for the 2nd, 3rd, etc... every day of June. Some (most) of the days have nothing happening at all, some have 1 - 3 events. The solution I came up with works, but it also takes waaaay too long to generate the data. Basically, I loop every day of the month and create an array of timestamps for each 15 minutes of that day. Then, I loop the time spans of events from that day by 15 minutes, marking any "taken" timeslot as false. Remaining, I have an array that contains timestamp of free time vs. taken time: //one day's array after processing through loops (not real timestamps) array( 12345678=>12345678, // <--- avail 12345878=>12345878, 12346078=>12346078, 12346278=>false, // <--- not avail 12346478=>false, 12346678=>false, 12346878=>false, 12347078=>12347078, // <--- avail 12347278=>12347278 ) Now I would need to loop THIS array to find continuous time blocks, then check to see if they are long enough (each venue has a minimum), and if so then establish the descriptive text for their start and end (i.e. 9am - 2pm). WHEW! By the time all this looping is done, the user has grown bored and wandered off to Youtube to watch videos of puppies; it takes ages to so examine 30 or so days. Is there a faster way to solve this issue? To summarize the problem, given time ranges t1 and t2 on day d, how can I determine the remaining time left in d that is longer than the minimum time block m. This data is assembled on demand via AJAX as the user moves between calendar months. Results are cached per-page-load, so if the user goes to July a second time, the data that was generated the first time would be reused. Any other details that would help, let me know. Edit Per request, the database structure (or the part that is relevant here) *events* id (bigint) title (varchar) *event_times* id (bigint) event_id (bigint) venue_id (bigint) start (bigint) end (bigint) *venues* id (bigint) name (varchar) min_block (int) min_start (varchar) max_start (varchar)

    Read the article

  • OAM OVD integration - Error Encounterd while performance test "LDAP response read timed out, timeout used:2000ms"

    - by siddhartha_sinha
    While working on OAM OVD integration for one of my client, I have been involved in the performance test of the products wherein I encountered OAM authentication failures while talking to OVD during heavy load. OAM logs revealed the following: oracle.security.am.common.policy.common.response.ResponseException: oracle.security.am.engines.common.identity.provider.exceptions.IdentityProviderException: OAMSSA-20012: Exception in getting user attributes for user : dummy_user1, idstore MyIdentityStore with exception javax.naming.NamingException: LDAP response read timed out, timeout used:2000ms.; remaining name 'ou=people,dc=oracle,dc=com' at oracle.security.am.common.policy.common.response.IdentityValueProvider.getUserAttribute(IdentityValueProvider.java:271) ... During the authentication and authorization process, OAM complains that the LDAP repository is taking too long to return user attributes.The default value is 2 seconds as can be seen from the exception, "2000ms". While troubleshooting the issue, it was found that we can increase the ldap read timeout in oam-config.xml.  For reference, the attribute to add in the oam-config.xml file is: <Setting Name="LdapReadTimeout" Type="xsd:string">2000</Setting> However it is not recommended to increase the time out unless it is absolutely necessary and ensure that back-end directory servers are working fine. Rather I took the path of tuning OVD in the following manner: 1) Navigate to ORACLE_INSTANCE/config/OPMN/opmn folder and edit opmn.xml. Search for <data id="java-options" ………> and edit the contents of the file with the highlighted items: <category id="start-options"><data id="java-bin" value="$ORACLE_HOME/jdk/bin/java"/><data id="java-options" value="-server -Xms1024m -Xmx1024m -Dvde.soTimeoutBackend=0 -Didm.oracle.home=$ORACLE_HOME -Dcommon.components.home=$ORACLE_HOME/../oracle_common -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/opt/bea/Middleware/asinst_1/diagnostics/logs/OVD/ovd1/ovdGClog.log -XX:+UseConcMarkSweepGC -Doracle.security.jps.config=$ORACLE_INSTANCE/config/JPS/jps-config-jse.xml"/><data id="java-classpath" value="$ORACLE_HOME/ovd/jlib/vde.jar$:$ORACLE_HOME/jdbc/lib/ojdbc6.jar"/></category></module-data><stop timeout="120"/><ping interval="60"/></process-type> When the system is busy, a ping from the Oracle Process Manager and Notification Server (OPMN) to Oracle Virtual Directory may fail. As a result, OPMN will restart Oracle Virtual Directory after 20 seconds (the default ping interval). To avoid this, consider increasing the ping interval to 60 seconds or more. 2) Navigate to ORACLE_INSTANCE/config/OVD/ovd1 folder.Open listeners.os_xml file and perform the following changes: · Search for <ldap id=”Ldap Endpoint”…….> and point the cursor to that line. · Change threads count to 200. · Change anonymous bind to Deny. · Change workQueueCapacity to 8096. Add a new parameter <useNIO> and set its value to false viz: <useNIO>false</useNio> Snippet: <ldap version="8" id="LDAP Endpoint"> ....... .......  <socketOptions><backlog>128</backlog>         <reuseAddress>false</reuseAddress>         <keepAlive>false</keepAlive>         <tcpNoDelay>true</tcpNoDelay>         <readTimeout>0</readTimeout>      </socketOptions> <useNIO>false</useNIO></ldap> Restart OVD server. For more information on OVD tuneup refer to http://docs.oracle.com/cd/E25054_01/core.1111/e10108/ovd.htm. Please Note: There were few patches released from OAM side for performance tune-up as well. Will provide the updates shortly !!!

    Read the article

  • SQL SERVER – Disable Clustered Index and Data Insert

    - by pinaldave
    Earlier today I received following email. “Dear Pinal, [Removed unrelated content] We looked at your script and found out that in your script of disabling indexes, you have only included non-clustered index during the bulk insert and missed to disabled all the clustered index. Our DBA[name removed] has changed your script a bit and included all the clustered indexes. Since our application is not working. When DBA [name removed] tried to enable clustered indexes again he is facing error incorrect syntax error. We are in deep problem [word replaced] [Removed Identity of organization and few unrelated stuff ]“ I have replied to my client and helped them fixed the problem. What really came to my attention is the concept of disabling clustered index. Let us try to learn a lesson from this experience. In this case, there was no need to disable clustered index at all. I had done necessary work when I was called in to work on tuning project. I had removed unused indexes, created few optimal indexes and wrote a script to disable few selected high cost indexes when bulk insert (and similar) operations are performed. There was another script which rebuild all the indexes as well. The solution worked till they included clustered index in disabling the script. Clustered indexes are in fact original table (or heap) physically ordered (any more things – not scope of this article) according to one or more keys(columns). When clustered index is disabled data rows of the disabled clustered index cannot be accessed. This means there will be no insert possible. When non clustered indexes are disabled all the data related to physically deleted but the definition of the index is kept in the system. Due to the same reason even reorganization of the index is not possible till the clustered index (which was disabled) is rebuild. Now let us come to the second part of the question, regarding receiving the error when clustered index is ‘enabled’. This is very common question I receive on the blog. (The following statement is written keeping the syntax of T-SQL in mind) Clustered indexes can be disabled but can not be enabled, they have to rebuild. It is intuitive to think that something which we have ‘disabled’ can be ‘enabled’ but the syntax for the same is ‘rebuild’. This issue has been explained here: SQL SERVER – How to Enable Index – How to Disable Index – Incorrect syntax near ‘ENABLE’. Let us go over this example where inserting the data is not possible when clustered index is disabled. USE AdventureWorks GO -- Create Table CREATE TABLE [dbo].[TableName]( [ID] [int] NOT NULL, [FirstCol] [varchar](50) NULL, CONSTRAINT [PK_TableName] PRIMARY KEY CLUSTERED ([ID] ASC) ) GO -- Create Nonclustered Index CREATE UNIQUE NONCLUSTERED INDEX [IX_NonClustered_TableName] ON [dbo].[TableName] ([FirstCol] ASC) GO -- Populate Table INSERT INTO [dbo].[TableName] SELECT 1, 'First' UNION ALL SELECT 2, 'Second' UNION ALL SELECT 3, 'Third' GO -- Disable Nonclustered Index ALTER INDEX [IX_NonClustered_TableName] ON [dbo].[TableName] DISABLE GO -- Insert Data should work fine INSERT INTO [dbo].[TableName] SELECT 4, 'Fourth' UNION ALL SELECT 5, 'Fifth' GO -- Disable Clustered Index ALTER INDEX [PK_TableName] ON [dbo].[TableName] DISABLE GO -- Insert Data will fail INSERT INTO [dbo].[TableName] SELECT 6, 'Sixth' UNION ALL SELECT 7, 'Seventh' GO /* Error: Msg 8655, Level 16, State 1, Line 1 The query processor is unable to produce a plan because the index 'PK_TableName' on table or view 'TableName' is disabled. 
*/ -- Reorganizing Index will also throw an error ALTER INDEX [PK_TableName] ON [dbo].[TableName] REORGANIZE GO /* Error: Msg 1973, Level 16, State 1, Line 1 Cannot perform the specified operation on disabled index 'PK_TableName' on table 'dbo.TableName'. */ -- Rebuliding should work fine ALTER INDEX [PK_TableName] ON [dbo].[TableName] REBUILD GO -- Insert Data should work fine INSERT INTO [dbo].[TableName] SELECT 6, 'Sixth' UNION ALL SELECT 7, 'Seventh' GO -- Clean Up DROP TABLE [dbo].[TableName] GO I hope this example is clear enough. There were few additional posts I had written years ago, I am listing them here. SQL SERVER – Enable and Disable Index Non Clustered Indexes Using T-SQL SQL SERVER – Enabling Clustered and Non-Clustered Indexes – Interesting Fact Reference : Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Constraint and Keys, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • BizTalk and IBM WebSphere MQ Errors

    - by Christopher House
    The project I'm currently working on is going to make heavy use of IBM WebShere MQ to send messages from BizTalk to the client's iSeries box.  I'd never previously worked with WebSphere MQ, so I didn't really have any idea what it would take to get this to work.  I was pleasantly surprised that it wasn't too difficult to configure a send port and pass messages through it to a queue.  Or so I thought... A couple of weeks ago, the client gave me the name of a host, queue manager and queue that I'd been using for my development.  Everything was going great, I was able to put messages onto the queue, I was happy, the client was happy.  Life was good.  Then the client tells me that the host I've been connecting to is actually a Solaris box and that in prod, we'll actually be sending to an iSeries.  We both agree that it would behoove us to start pointing my dev environment to their dev iSeries box in order to flush out any weirdness there might be.  As it turns out, it was a good thing we made the change.  As soon as I reconfigured my BRE policy that sets endpoint information to point to the iSeries queue, we started seeing failures in the event log.  An example from the event log: Event Type: Error Event Source: BizTalk Server 2009 Event Category: BizTalk Server 2009 Event ID: 5754 Date:  6/9/2010 Time:  10:16:41 AM User:  N/A Computer: WINDOWS2003 Description: A message sent to adapter "MQSC" on send port "<my dynamic sendport name>" with URI "mqsc://client/tcp/<hostname>(1414)/<queue manager name>/<queue name>" is suspended.  Error details: Failure encountered while attempting to open queue. queue = <queue name> queueManager = <queue manager name>, reasonCode = 6124  MessageId:  {76825C7C-611A-4A56-8A6F-35E1124BDB5C}  InstanceID: {BA389103-DF9B-493F-8C61-44574822AAD6} The key piece of information in the event entry is the reasonCode, 6124.  A quick Google search shows that reasonCode 6124 is the code for MQRC_NOT_CONNECTED.  According to IBM's docs, this means that you've tried to send a message without first opening a connection to the queue manager.  Obviously, in the context of BizTalk, this is an unexpected error, since this sort of thing should be managed entirely by the send adapter. Perusing IBM's documentation a bit more, I came across some info on how to turn on tracing for MQ.  With tracing enabled, I tried sending a message again, then went and reviewed the trace files.  The bulk of the information in the trace files didn't mean a thing to me, but at the end of one of the files, I did notice this: 00006257 15:40:20.327795   3500.4      RSESS:000009 ------{  reqReleaseConn 00006258 15:40:20.328714   3500.4      RSESS:000009 ------}  reqReleaseConn (rc=OK) 00006259 15:40:20.328727   3500.4      RSESS:000009 ------{  xcsClearTraceIdent 0000625A 15:40:20.328739   3500.4           :       ------}  xcsClearTraceIdent (rc=OK) 0000625B 15:40:20.328752   3500.4           :       -----}! trmzstMQCONNX (rc=MQRC_NOT_AUTHORIZED) 0000625C 15:40:20.328765   3500.4           :       ----}! MQCONNX (rc=MQRC_NOT_AUTHORIZED) 0000625D 15:40:20.328766   3500.4           :       ---}! ImqQueueManager::connect (rc=MQRC_NOT_AUTHORIZED) 0000625E 15:40:20.328767   3500.4           :       --}! ImqObject::open (rc=MQRC_NOT_CONNECTED) 0000625F 15:40:20.328768   3500.4           :       --{  ImqQueue::lock 00006260 15:40:20.328769   3500.4           :       --}! 
ImqQueue::lock (rc=Unknown(1)) 00006261 15:40:20.328769   3500.4           :       --{  ImqQueue::unlock 00006262 15:40:20.328769   3500.4           :       --}! ImqQueue::unlock (rc=Unknown(1)) It seemed like the MQRC_NOT_CONNECTED error was being caused by a security related issue (MQRC_NOT_AUTHORIZED).  I did notice something earlier in the log where it appeared that MQ was passing a field named UID with a value equal to the account name that my BizTalk service was running under.  I ended up creating a new local account on the BizTalk server that had the same name as a user which had access to the queue manager on the iSeries.  I then created a new host instance that ran under this new account, created a send handler for the MQSC adapter on this new host instance and reconfigured my orchestration to run on the new host instance.  After bouncing all my host instances, I was now able to send messages to the iSeries. It's still not clear to me why we were able to connect to the Solaris server.  I ended up contacting IBM's support and they did confirm that the process sending to MQ does in fact pass the identity to the queue manager it's connecting to.

    Read the article

  • A basic T4 template for generating Model Metadata in ASP.NET MVC2

    - by rajbk
    I have been learning about T4 templates recently by looking at the awesome ADO.NET POCO entity generator. By using the POCO entity generator template as a base, I created a T4 template which generates metadata classes for a given Entity Data Model. This speeds coding by reducing the amount of typing required when creating view specific model and its metadata. To use this template, Download the template provided at the bottom. Set two values in the template file. The first one should point to the EDM you wish to generate metadata for. The second is used to suffix the namespace and classes that get generated. string inputFile = @"Northwind.edmx"; string suffix = "AutoMetadata"; Add the template to your MVC 2 Visual Studio 2010 project. Once you add it, a number of classes will get added to your project based on the number of entities you have.    One of these classes is shown below. Note that the DisplayName, Required and StringLength attributes have been added by the t4 template. //------------------------------------------------------------------------------ // <auto-generated> // This code was generated from a template. // // Changes to this file may cause incorrect behavior and will be lost if // the code is regenerated. // </auto-generated> //------------------------------------------------------------------------------   using System; using System.ComponentModel; using System.ComponentModel.DataAnnotations;   namespace NorthwindSales.ModelsAutoMetadata { public partial class CustomerAutoMetadata { [DisplayName("Customer ID")] [Required] [StringLength(5)] public string CustomerID { get; set; } [DisplayName("Company Name")] [Required] [StringLength(40)] public string CompanyName { get; set; } [DisplayName("Contact Name")] [StringLength(30)] public string ContactName { get; set; } [DisplayName("Contact Title")] [StringLength(30)] public string ContactTitle { get; set; } [DisplayName("Address")] [StringLength(60)] public string Address { get; set; } [DisplayName("City")] [StringLength(15)] public string City { get; set; } [DisplayName("Region")] [StringLength(15)] public string Region { get; set; } [DisplayName("Postal Code")] [StringLength(10)] public string PostalCode { get; set; } [DisplayName("Country")] [StringLength(15)] public string Country { get; set; } [DisplayName("Phone")] [StringLength(24)] public string Phone { get; set; } [DisplayName("Fax")] [StringLength(24)] public string Fax { get; set; } } } The gen’d class can be used from your project by creating a partial class with the entity name and setting the MetadataType attribute.namespace MyProject.Models{ [MetadataType(typeof(CustomerAutoMetadata))] public partial class Customer { }} You can also copy the code in the metadata class generated and create your own ViewModel class. Note that the template is super basic  and does not take into account complex properties. I have tested it with the Northwind database. This is a work in progress. Feel free to modify the template to suite your requirements. Standard disclaimer follows: Use At Your Own Risk, Works on my machine running VS 2010 RTM/ASP.NET MVC 2 AutoMetaData.zip Mr. Incredible: Of course I have a secret identity. I don't know a single superhero who doesn't. Who wants the pressure of being super all the time?

    Read the article

  • Developer’s Life – Every Developer is a Superman

    - by Pinal Dave
    I enjoyed comparing developers to Spiderman so much, that I have decided to continue the trend and encourage some of my favorite people (developers) with another favorite superhero – Superman.  Superman is probably the most famous superhero – and one of the most inspiring. Everyone has their own favorite, but Superman has been the longest enduring of all comic book characters.  Clark Kent has inspired multiple movie series, TV shows, books, cartoons, and costumes.  Superman’s enduring popularity has been attributed to his superhuman strength, integrity, dedication to good, and his humility in keeping his identity a secret. So how are developers like Superman? Well, read on my list of reasons. Secret Identities They have secret identities.  I’m not saying that all developers wear thick glasses and go by an alias like “Clark Kent.”  But developers certainly work in the background, making sure everything runs smoothly, often without recognition.  Like Superman, when they have done their job right, no one knows they were there. Working Alone You don’t have to work alone.  Superman doesn’t have a sidekick like Robin or Bat Girl, but he is a major player in the Justice League.  Developers have amazing skills, and they shouldn’t be afraid to unite those skills to solve some of the world’s major problems (like slow networks). Daily Inspiration Developers are inspiring.  Clark Kent works at The Daily Planet, Metropolis’ newspaper, which is lucky because he can keep some of the publicity Superman inspires under wraps.  Developers might go unnoticed sometimes, but when people hear about some of the tasks they accomplish on a daily basis, it inspires awe. Discover Your Superpowers You have to discover your superpowers.  Clark Kent didn’t just wake up one morning with the full understanding that he could fly, leap tall buildings in a single bound, and was stronger than a speeding locomotive.  He slowly discovered these powers (after a few comic book-worthy misunderstandings!).  Developers are always learning and growing as well.  You probably won’t wake up with super powers, either, but years of practice and continuing education can get you close. Every Day is a New Day The story continues.  The Superman comic books are still being printed, and have been in print since 1938.  There have been two TV series, (one, Smallville, was on TV for ten seasons) and multiple cartoon adaptations.  There have been multiple movies, with many different actors.  A new reboot came out last year, and another is set to premier in 2016.   So, developers, when you are having a bad day or a problem seems unsolvable – remember, the story will continue!  There is always tomorrow. I hope you are all enjoying reading about developers-as-superheroes as much as I am enjoying writing about them.  Please tell me how else developers are like Superheroes in the comments – especially if you know any developers who are faster than a speeding bullet and can leap tall buildings in a single bound. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Developer, Superhero

    Read the article

< Previous Page | 96 97 98 99 100 101 102 103 104 105 106 107  | Next Page >