Search Results

Search found 64621 results on 2585 pages for 'asp net performance'.


  • Does DataAdapter.Fill() close its connection when an Exception is thrown?

    - by motto
    Hi, I am using ADO.NET (.NET 1.1) in a legacy app. I know that DataAdapter.Fill() opens and closes the connection if it hasn't been opened manually before it's given to the DataAdapter. My question: does it also close the connection if the .Fill() throws an exception (because SQL Server cannot be reached, or whatever)? Does it leak a connection, or does it have a built-in Finally clause to make sure the connection is closed? Code example:
        Dim cmd As New SqlCommand
        Dim da As New SqlDataAdapter
        Dim ds As New DataSet
        cmd.Connection = New SqlConnection(strConnection)
        cmd.CommandText = strSQL
        da.SelectCommand = cmd
        da.Fill(ds)
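    If you would rather not depend on Fill() cleaning up after an exception, a defensive pattern (a minimal C# sketch, not taken from the original post) is to own the connection explicitly and close it in a finally block:

        // Sketch only: open the connection yourself so Fill() leaves it alone,
        // and close it in finally so an exception inside Fill() cannot leak it.
        using System.Data;
        using System.Data.SqlClient;

        public class FillHelper
        {
            public static DataSet FillSafely(string connectionString, string sql)
            {
                DataSet ds = new DataSet();
                SqlConnection conn = new SqlConnection(connectionString);
                SqlDataAdapter da = new SqlDataAdapter(sql, conn);
                try
                {
                    conn.Open();      // Fill() does not close a connection it did not open
                    da.Fill(ds);
                }
                finally
                {
                    conn.Close();     // runs even if Fill() throws
                }
                return ds;
            }
        }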

    Read the article

  • Asp.net Login Status Question: It Aint Working

    - by contactmatt
    I'm starting to use Role Management in my website, and I'm currently following along with the tutorial at http://www.asp.net/Learn/Security/tutorial-02-vb.aspx . I'm having a problem with the asp:LoginStatus control: it does not show that I am currently logged in after a successful login, even though my LoggedInTemplate is shown, so the login itself seems to work. The usernames and passwords are simply stored in an array. Here's the Login.aspx code-behind:
        Protected Sub btnLogin_Click(ByVal sender As Object, ByVal e As System.EventArgs) _
            Handles btnLogin.Click
            ' Three valid username/password pairs: Scott/password, Jisun/password, and Sam/password.
            Dim users() As String = {"Scott", "Jisun", "Sam"}
            Dim passwords() As String = {"password", "password", "password"}
            For i As Integer = 0 To users.Length - 1
                Dim validUsername As Boolean = (String.Compare(txtUserName.Text, users(i), True) = 0)
                Dim validPassword As Boolean = (String.Compare(txtPassword.Text, passwords(i), False) = 0)
                If validUsername AndAlso validPassword Then
                    FormsAuthentication.RedirectFromLoginPage(txtUserName.Text, chkRemember.Checked)
                End If
            Next
            ' If we reach here, the user's credentials were invalid
            lblInvalid.Visible = True
        End Sub
    Here is the content placeholder on the master page specifically designed to hold the login information. On successful login, the page is redirected to '/Default.aspx' and the LoggedInTemplate below is shown... but the status link still says "Log In".
        <asp:ContentPlaceHolder Id="LoginContent" runat="server">
            <asp:LoginView ID="LoginView1" runat="server">
                <LoggedInTemplate>
                    Welcome back, <asp:LoginName ID="LoginName1" runat="server" />.
                </LoggedInTemplate>
                <AnonymousTemplate>
                    Hello, stranger.
                </AnonymousTemplate>
            </asp:LoginView>
            <br />
            <asp:LoginStatus ID="LoginStatus1" runat="server" LogoutAction="Redirect"
                LogoutPageUrl="~/Logout.aspx" />
        </asp:ContentPlaceHolder>
    Forms authentication is enabled. I'm not sure what to do about this :o.
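    A quick way to confirm whether the forms-authentication ticket is actually reaching the page (a C# sketch for the target page's code-behind, not taken from the original post):

        protected void Page_Load(object sender, EventArgs e)
        {
            // LoginStatus keys off the authenticated state of the request; if this
            // prints "False", the auth cookie never made it back from the redirect.
            Response.Write("Authenticated: " + Request.IsAuthenticated
                + ", user: " + Page.User.Identity.Name);
        }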

    Read the article

  • ASP.NET MVC.NET JQueryUI datepicker inside a div loaded/updated with ajax.actionlink

    - by ArjanW
    I'm trying to incorporate jQuery UI's datepicker inside a partial view like this:
        <% using (Ajax.BeginForm("/EditData", new AjaxOptions { HttpMethod = "POST", UpdateTargetId = "div1" })) {%>
            Date: <%= Html.TextBox("date", String.Format("{0:g}", Model.date), new { id = "datePicker"})%>
        <% } %>
        <script type="text/javascript">
            $(function() {
                $("#datePicker").datepicker();
            });
        </script>
    When I call the URL of this partial view directly, so that only this view is rendered, the datepicker works perfectly (for the purpose of testing this I added the needed jQuery and jQuery UI script and CSS references directly to the partial view). But if I use an Ajax.ActionLink to load this partial view inside a div (called div2; submitting the above form should update div1) like this:
        <div id="div1">
            <%= Ajax.ActionLink("Edit", "/EditData", new { id = Model.id },
                new AjaxOptions { HttpMethod = "GET", UpdateTargetId = "div2" } )%>
        </div>
        <div2>placeholder for the form</div>
    the datepicker won't appear anymore. My best guess is that the JavaScript included in the loaded HTML doesn't get executed ($(document).ready(function() { $("#datepicker").datepicker(); }); doesn't work either). If that's the case, how and where should I call $("datepicker").datepicker()? (Putting it in the AjaxOptions of the Ajax.ActionLink as OnComplete = "$(function() { $('#datepicker').datepicker();});" still doesn't work.) If that's not the case, then where's my problem?

    Read the article

  • AJAX - ASP.NET - Timer delay problem

    - by Julian
    Hi, I'm trying to make a web application with an AJAX countdown timer. Whenever I push a button, the countdown should go back to 30 and keep counting down. The problem is that when I push the button, the timer keeps counting down for a second or two, and most of the time after that it sits on 30 for too long. WebForm code:
        <asp:UpdatePanel ID="UpdatePanel1" runat="server">
            <ContentTemplate>
                <asp:Label ID="Label1" runat="server" Text="geen verbinding"></asp:Label>
                <br />
                <asp:Button ID="Button1" runat="server" onclick="Button1_Click" Text="Button" />
                <br />
            </ContentTemplate>
            <Triggers>
                <asp:AsyncPostBackTrigger ControlID="Timer1" EventName="Tick" />
            </Triggers>
        </asp:UpdatePanel>
        <asp:Timer ID="Timer1" runat="server" Interval="1000" ontick="Timer1_Tick">
        </asp:Timer>
        </form>
    Code behind:
        static int timer = 30;

        protected void Page_Load(object sender, EventArgs e)
        {
            Label1.Text = timer.ToString();
        }

        protected void Timer1_Tick(object sender, EventArgs e)
        {
            timer--;
        }

        protected void Button1_Click(object sender, EventArgs e)
        {
            timer = 30;
        }
    Hope somebody knows what the problem is and whether there is any way to fix it. Thanks in advance!
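    One thing worth noting: a static field is shared by every request and every visitor, so it is a fragile place to keep a per-user countdown. A sketch of the same code-behind using Session instead (an assumed alternative, not the original author's fix):

        // Keep the remaining seconds per user in Session rather than in a static field.
        private int Remaining
        {
            get { return Session["remaining"] == null ? 30 : (int)Session["remaining"]; }
            set { Session["remaining"] = value; }
        }

        protected void Timer1_Tick(object sender, EventArgs e)
        {
            Remaining = Remaining - 1;
            Label1.Text = Remaining.ToString();
        }

        protected void Button1_Click(object sender, EventArgs e)
        {
            Remaining = 30;
            Label1.Text = Remaining.ToString();
        }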

    Read the article

  • ASP.NET MVC & ADO.NET Entity Framework clientside validation

    - by JK
    Using ASP.NET MVC 2 with the model auto-generated by Entity Framework: is it possible to tell Entity Framework to auto-annotate all fields? E.g.: if a database field is NOT NULL, add [Required]; if a DB field is nvarchar(x), add [StringLength(x)]; and so on. What if the field name contains the string "email", e.g. CustomerEmail - can I get EF to auto-annotate that with an appropriate annotation ([Regex()] maybe)? As I understand it, if the model fields are annotated, and I use Html.ValidationMessageFor() in the view and check if (ModelState.IsValid) in my controller, that is all I need to do to get basic client-side input validation working? Thanks
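    The EF designer does not emit validation attributes on its own, so the usual MVC 2 workaround is a "buddy" metadata class attached to the generated partial class. A sketch (Customer and Name are illustrative names; CustomerEmail is the property mentioned in the question):

        using System.ComponentModel.DataAnnotations;

        [MetadataType(typeof(CustomerMetadata))]
        public partial class Customer    // extends the partial class the EF designer generates
        {
        }

        public class CustomerMetadata
        {
            [Required]
            [StringLength(50)]
            public string Name { get; set; }

            [Required]
            [RegularExpression(@"^[^@\s]+@[^@\s]+\.[^@\s]+$", ErrorMessage = "Invalid email address")]
            public string CustomerEmail { get; set; }
        }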

    Read the article

  • Label, ASP.NET, C#.NET

    - by viahal
    I have a label and a text box associated with it. I have added some text to the text box, which is invisible at first. Now I want to display the content when I hover over the label. Please help me out.
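    One possible reading of the question, as a C# sketch (control names are assumptions): show the hidden text box when the mouse moves over the label by attaching a client-side handler from the code-behind.

        protected void Page_Load(object sender, EventArgs e)
        {
            // Hide with CSS rather than Visible="false" so the control is still rendered.
            TextBox1.Style["display"] = "none";
            Label1.Attributes.Add("onmouseover",
                "document.getElementById('" + TextBox1.ClientID + "').style.display = 'block';");
        }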

    Read the article

  • Control Reference Static Method Performance

    - by dotnetguts
    I have just asked which one is better, static or non-static: http://stackoverflow.com/questions/3016717/static-vs-non-static-method-performance-c . I would like to take this discussion one step further. Consider: if I pass a reference to a Panel control as a parameter to a public static method, will the static method still win on performance?
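    Passing a Panel only copies a reference (one pointer-sized argument), so the parameter itself should not change the picture much. A rough micro-benchmark sketch to measure it directly (assumed setup, not from the question):

        using System;
        using System.Diagnostics;
        using System.Web.UI.WebControls;

        public class PerfCheck
        {
            public static void TouchStatic(Panel p) { p.Enabled = true; }
            public void TouchInstance(Panel p) { p.Enabled = true; }

            public static void Main()
            {
                const int N = 10000000;
                Panel panel = new Panel();
                PerfCheck obj = new PerfCheck();

                Stopwatch sw = Stopwatch.StartNew();
                for (int i = 0; i < N; i++) TouchStatic(panel);
                Console.WriteLine("static:   " + sw.ElapsedMilliseconds + " ms");

                sw = Stopwatch.StartNew();
                for (int i = 0; i < N; i++) obj.TouchInstance(panel);
                Console.WriteLine("instance: " + sw.ElapsedMilliseconds + " ms");
            }
        }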

    Read the article

  • Create pdf file dynamically in vb.net for .net 1.1 framework

    - by Urbycoz
    I need to create a PDF file dynamically in VB.NET. It needs to contain several images and lines of text. I am using VS 2003, so whatever solution I use will need to be compatible with the .NET 1.1 framework. The current method I am using is wpcubed, but this requires that all images be converted to BMP format before adding them to the PDF, which can be extremely slow when dealing with a large number of images. I am aware that there are an awful lot of other third-party products that claim to do this, and I have had a search through them, but without registering, downloading, installing and writing code to use each of them in turn, it is very difficult to differentiate between them. So far I have looked into evo pdf and PDFsharp, but neither seems to work with .NET 1.1 (although they don't make this abundantly clear). Has anyone else found a method that works that they would recommend (a free one, if possible)?
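    One library that is often suggested for this is iTextSharp; older releases were built against .NET 1.1, although the exact version would need checking. The basic usage looks roughly like the C# sketch below (file names are placeholders):

        using System.IO;
        using iTextSharp.text;
        using iTextSharp.text.pdf;

        public class PdfBuilder
        {
            public static void Build(string outputPath, string imagePath)
            {
                Document doc = new Document();
                PdfWriter.GetInstance(doc, new FileStream(outputPath, FileMode.Create));
                doc.Open();
                doc.Add(new Paragraph("Some line of text"));
                // Images can be added directly from common formats such as JPEG,
                // without a BMP conversion step.
                doc.Add(Image.GetInstance(imagePath));
                doc.Close();
            }
        }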

    Read the article

  • Which one has a faster runtime performance: WPF or Winforms?

    - by Joan Venge
    I know WPF is more complex and flexible, so it could be thought to do more calculations. But since the rendering is done on the GPU, wouldn't it be faster than WinForms for the same application (functionally and visually)? I mean, when you are not running any games or heavy 3D rendering, the GPU isn't doing heavy work, right? Whereas the CPU is always busy. Is this a valid assumption, or is the GPU utilization of WPF a very minor part of its pipeline?

    Read the article

  • Implicit and Explicit implementations for Multiple Interface inheritance

    The following C#.NET demo walks through all the scenarios for implementing interface methods in a class. There are two ways you can implement an interface method in a class: 1. implicit implementation, 2. explicit implementation. Please go through the sample.
        using System;

        namespace ImpExpTest
        {
            class Program
            {
                static void Main(string[] args)
                {
                    C o3 = new C();
                    Console.WriteLine(o3.fu());

                    I1 o1 = new C();
                    Console.WriteLine(o1.fu());

                    I2 o2 = new C();
                    Console.WriteLine(o2.fu());

                    var o4 = new C();       // var is considered as C
                    Console.WriteLine(o4.fu());

                    var o5 = (I1)new C();   // var is considered as I1
                    Console.WriteLine(o5.fu());

                    var o6 = (I2)new C();   // var is considered as I2
                    Console.WriteLine(o6.fu());

                    D o7 = new D();
                    Console.WriteLine(o7.fu());

                    I1 o8 = new D();
                    Console.WriteLine(o8.fu());

                    I2 o9 = new D();
                    Console.WriteLine(o9.fu());
                }
            }

            interface I1
            {
                string fu();
            }

            interface I2
            {
                string fu();
            }

            class C : I1, I2
            {
                #region Implicitly Defined I1 Members
                public string fu()
                {
                    return "Hello C";
                }
                #endregion Implicitly Defined I1 Members

                #region Explicitly Defined I1 Members
                string I1.fu()
                {
                    return "Hello from I1";
                }
                #endregion Explicitly Defined I1 Members

                #region Explicitly Defined I2 Members
                string I2.fu()
                {
                    return "Hello from I2";
                }
                #endregion Explicitly Defined I2 Members
            }

            class D : C
            {
                #region Implicitly Defined I1 Members
                public string fu()
                {
                    return "Hello from D";
                }
                #endregion Implicitly Defined I1 Members
            }
        }
    Output:
        Hello C
        Hello from I1
        Hello from I2
        Hello C
        Hello from I1
        Hello from I2
        Hello from D
        Hello from I1
        Hello from I2

    Read the article

  • How to enable extended logging for classic asp on IIS7 on Windows 2008 R2

    - by Neil Trodden
    I had to deploy an application that was not written by me onto the above configuration. It is a rather bizarre hybrid of ASP.NET and classic ASP, and it's the classic ASP that is proving troublesome. The client is having problems with 500 Internal Server Errors appearing. I can see some of these in the logs, but I only get the error code and the page name, little else. What I would like to see is the actual error message, to at least give me an idea what is going on (or not going on, depending on your point of view). I don't want to display errors in the browser, as I don't know the code well enough and this could (for all I know) display some crazy code where the db password is hard-coded into the site.

    Read the article

  • Classic ASP on large memory server

    - by Steve Evans
    I have a client with a large classic ASP app that apparently is fairly memory intensive. I'm helping them migrate to new hardware they have running Win2k8 R2. They have 4 physical servers with 32 GB of RAM each. I'm making the assumption that classic ASP apps run as a 32-bit process, so I see two options: (1) enable web gardens on the application pool, or (2) use the physical servers as VM hosts and split each box into, say, 4 web servers. Any thoughts on which path will give us better performance? I'm just not really sure how ASP will handle a machine with lots of memory, and I'm worried it won't really be able to address the memory well. (You can ignore all the obvious stuff like the increased maintenance of 16 web servers vs 4, or the flexibility virtualization gives us over physical servers, etc.)

    Read the article

  • Asp.net error messages when on server are not displayed

    - by asn187
    I have been tasked with setting up ASP.NET websites on a Windows Server 2008 machine; they are all in debug mode. When browsing a website on the server and an error occurs, for example the database connection cannot be opened, I would expect to receive the normal ASP.NET server error page with an error dump, something like http://www.codeproject.com/KB/books/1861005040/image091.gif . However, what actually happens is that I get random characters on the web page. For example:
        <?)=????*??2o????v??YK?WuZ,?6[N??f?O??b??@!???u]S??yQ?iN?&e???E???j??1z??x??????o?y????U??M???2d?i?4
    This is not the correct or expected behaviour. The event log does, however, show what has gone wrong. How do I get the server error page to render properly? Am I missing something in the server's ASP.NET setup?

    Read the article

  • Deploying ASP.Net MVC 4 application to IIS 6 - Bundles are not working

    - by ShaneC
    We have an ASP.NET MVC 4 application we are trying to deploy to a Windows 2003 machine running IIS 6. It runs in a separate app pool and is set up to use ASP.NET 4.0. We added a wildcard application mapping to aspnet_isapi.dll, which was required to get the pages to appear at all. The problem we've run into now is that the bundling which is part of ASP.NET MVC 4 isn't working: when you follow the /js?v=ASDfljkFSDlkjDSF link you get a 404 back. I know bundles use extensionless URLs, but these should be handled by the wildcard application mapping if I'm not mistaken? Has anyone got this working, or have any ideas?
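    For reference, a typical MVC 4 bundle registration looks like the sketch below (the bundle path "~/bundles/js" is a placeholder, not from the question). On IIS 6 the bundle URL is extensionless, so it only reaches ASP.NET if the wildcard mapping was added without the "Verify that file exists" option ticked; leaving that box checked is a common cause of 404s for extensionless URLs.

        using System.Web.Optimization;

        public class BundleConfig
        {
            public static void RegisterBundles(BundleCollection bundles)
            {
                bundles.Add(new ScriptBundle("~/bundles/js").Include(
                    "~/Scripts/jquery-{version}.js",
                    "~/Scripts/site.js"));

                // Bundles are only served from the single versioned URL when optimizations are on.
                BundleTable.EnableOptimizations = true;
            }
        }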

    Read the article

  • How to host ASP.NET application externally?

    - by Josh
    I have an ASP.NET application that I can get to locally by going to 192.168.1.102:81/TestApp. I would like to host the application externally by going to domain.com:81/TestApp (I already have my domain pointing to my router and this works fine - I have apache running on port 80 on another server). I modified the router settings to point any request coming through port 81 to 192.168.1.102. I am still having trouble accessing the ASP.NET site (I get the error message that "This link appears to be broken"). Am I missing something? How can I redirect domain.com:81/TestApp to my ASP.NET application? Thanks.

    Read the article

  • Integrating WIF with WCF Data Services

    - by cibrax
    A while ago I discussed how a custom REST Starter Kit interceptor could be used to parse a SAML token in the HTTP Authorization header and wrap it in a ClaimsPrincipal that the WCF services could use. The thing is that the code was initially created for the Geneva framework, so it got deprecated quickly. I recently needed that piece of code for one of the projects I am currently working on, so I decided to update it for WIF. As this interceptor can be injected in any host for WCF REST services, it also represents an excellent solution for integrating claims-based security into WCF Data Services (previously known as ADO.NET Data Services). The interceptor basically expects a SAML token in the Authorization header. If a token is found, it is parsed and a new ClaimsPrincipal is initialized and injected into the WCF authorization context.
        public class SamlAuthenticationInterceptor : RequestInterceptor
        {
          SecurityTokenHandlerCollection handlers;

          public SamlAuthenticationInterceptor()
            : base(false)
          {
            this.handlers = FederatedAuthentication.ServiceConfiguration.SecurityTokenHandlers;
          }

          public override void ProcessRequest(ref RequestContext requestContext)
          {
            SecurityToken token = ExtractCredentials(requestContext.RequestMessage);

            if (token != null)
            {
              ClaimsIdentityCollection claims = handlers.ValidateToken(token);
              var principal = new ClaimsPrincipal(claims);
              InitializeSecurityContext(requestContext.RequestMessage, principal);
            }
            else
            {
              DenyAccess(ref requestContext);
            }
          }

          private void DenyAccess(ref RequestContext requestContext)
          {
            Message reply = Message.CreateMessage(MessageVersion.None, null);
            HttpResponseMessageProperty responseProperty = new HttpResponseMessageProperty() { StatusCode = HttpStatusCode.Unauthorized };
            responseProperty.Headers.Add("WWW-Authenticate",
                String.Format("Basic realm=\"{0}\"", ""));

            reply.Properties[HttpResponseMessageProperty.Name] = responseProperty;
            requestContext.Reply(reply);
            requestContext = null;
          }

          private SecurityToken ExtractCredentials(Message requestMessage)
          {
            HttpRequestMessageProperty request = (HttpRequestMessageProperty)requestMessage.Properties[HttpRequestMessageProperty.Name];
            string authHeader = request.Headers["Authorization"];

            if (authHeader != null && authHeader.Contains("<saml"))
            {
              XmlTextReader xmlReader = new XmlTextReader(new StringReader(authHeader));
              var col = SecurityTokenHandlerCollection.CreateDefaultSecurityTokenHandlerCollection();
              SecurityToken token = col.ReadToken(xmlReader);

              return token;
            }

            return null;
          }

          private void InitializeSecurityContext(Message request, IPrincipal principal)
          {
            List<IAuthorizationPolicy> policies = new List<IAuthorizationPolicy>();
            policies.Add(new PrincipalAuthorizationPolicy(principal));
            ServiceSecurityContext securityContext = new ServiceSecurityContext(policies.AsReadOnly());

            if (request.Properties.Security != null)
            {
              request.Properties.Security.ServiceSecurityContext = securityContext;
            }
            else
            {
              request.Properties.Security = new SecurityMessageProperty() { ServiceSecurityContext = securityContext };
            }
          }

          class PrincipalAuthorizationPolicy : IAuthorizationPolicy
          {
            string id = Guid.NewGuid().ToString();
            IPrincipal user;

            public PrincipalAuthorizationPolicy(IPrincipal user)
            {
              this.user = user;
            }

            public ClaimSet Issuer
            {
              get { return ClaimSet.System; }
            }

            public string Id
            {
              get { return this.id; }
            }

            public bool Evaluate(EvaluationContext evaluationContext, ref object state)
            {
              evaluationContext.AddClaimSet(this, new DefaultClaimSet(System.IdentityModel.Claims.Claim.CreateNameClaim(user.Identity.Name)));
              evaluationContext.Properties["Identities"] = new List<IIdentity>(new IIdentity[] { user.Identity });
              evaluationContext.Properties["Principal"] = user;
              return true;
            }
          }
        }
    A WCF Data Service, like any other WCF service, contains a service host where this interceptor can be injected. The following code illustrates how that can be done in the "svc" file.
        <%@ ServiceHost Language="C#" Debug="true" Service="ContactsDataService"
                        Factory="AppServiceHostFactory" %>

        using System;
        using System.ServiceModel;
        using System.ServiceModel.Activation;
        using Microsoft.ServiceModel.Web;

        class AppServiceHostFactory : ServiceHostFactory
        {
          protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
          {
            WebServiceHost2 result = new WebServiceHost2(serviceType, true, baseAddresses);
            result.Interceptors.Add(new SamlAuthenticationInterceptor());
            return result;
          }
        }
    WCF Data Services includes a specific WCF host out of the box (DataServiceHost). However, the service is not affected at all if you replace it with a custom one, as I am doing in the code above (WebServiceHost2 is part of the REST Starter Kit). Finally, the client application needs to pass the SAML token to the data service somehow. If you are using a plain HTTP client library to consume the data service, that's easy to do: you only need to include the SAML token in the "Authorization" header. If you are using the auto-generated data service proxy, a little piece of code is needed to inject the SAML token into the DataServiceContext instance. That class provides a "SendingRequest" event that any client application can leverage to run custom code that modifies the HTTP request before it is sent to the service. So you can easily create an extension method for the DataServiceContext that negotiates the SAML token with an existing STS and adds that token to the "Authorization" header.
        public static class DataServiceContextExtensions
        {
          public static void ConfigureFederatedCredentials(this DataServiceContext context, string baseStsAddress, string realm)
          {
            string address = string.Format(STSAddressFormat, baseStsAddress, realm);

            string token = NegotiateSecurityToken(address);
            context.SendingRequest += (source, args) =>
            {
              args.RequestHeaders.Add("Authorization", token);
            };
          }

          private string NegotiateSecurityToken(string address)
          {
          }
        }
    I left the NegotiateSecurityToken method empty for this extension, as it depends pretty much on how you negotiate tokens from your existing STS. In case you want an end-to-end REST solution that involves an HTTP endpoint for the STS, you should definitely take a look at the Thinktecture starter STS project on CodePlex.

    Read the article

  • Implementing a generic repository for WCF data services

    - by cibrax
    The repository implementation I am going to discuss here is not exactly what someone would call a repository in terms of DDD, but it is an abstraction layer that becomes handy when unit testing the code around it. In other words, you can easily create a mock to replace the real repository implementation. The WCF Data Services update for .NET 3.5 introduced a nice feature to support two-way data binding, which is very helpful for developing WPF or Silverlight based applications but also for implementing the repository I am going to talk about. As part of this feature, the WCF Data Services client library introduced a new collection, DataServiceCollection<T>, that implements INotifyPropertyChanged to notify the data context (DataServiceContext) about any change in the association links. This means that it is no longer necessary to manually set or remove the links in the data context when an item is added or removed from a collection. Before having this new collection, you basically used the following code to add a new item to a collection.
        Order order = new Order
        {
          Name = "Foo"
        };

        OrderItem item = new OrderItem
        {
          Name = "bar",
          UnitPrice = 10,
          Qty = 1
        };

        var context = new OrderContext();
        context.AddToOrders(order);
        context.AddToOrderItems(item);
        context.SetLink(item, "Order", order);
        context.SaveChanges();
    Now, thanks to this new collection, everything is much simpler and similar to what you have in other ORMs like Entity Framework or L2S.
        Order order = new Order
        {
          Name = "Foo"
        };

        OrderItem item = new OrderItem
        {
          Name = "bar",
          UnitPrice = 10,
          Qty = 1
        };

        order.Items.Add(item);

        var context = new OrderContext();
        context.AddToOrders(order);
        context.SaveChanges();
    In order to use this new feature, you first need to enable V2 in the data service, and then use some specific arguments in the datasvcutil tool (you can find more information about this new feature and how to use it in this post).
        DataSvcUtil /uri:"http://localhost:3655/MyDataService.svc/" /out:Reference.cs /dataservicecollection /version:2.0
    Once you use those two arguments, the generated proxy classes will use DataServiceCollection<T> rather than a simple ObjectCollection<T>, which was the default collection in V1. There are some aspects that you need to know to use this feature correctly.
    1. All the entities retrieved directly from the data context with a query track changes and report them to the data context automatically.
    2. An entity created with "new" does not track any changes in its properties or associations. In order to enable change tracking on this entity, you need to do the following trick.
        public Order CreateOrder()
        {
          var collection = new DataServiceCollection<Order>(this.context);
          var order = new Order();
          collection.Add(order);
          return order;
        }
    You basically need to create a collection and add the entity to that collection with the "Add" method to enable change tracking on that entity.
    3. If you need to attach an existing entity (for example, if you created the entity with the "new" operator rather than retrieving it from the data context with a query) to a data context for tracking changes, you can use the "Load" method of the DataServiceCollection.
        var order = new Order
        {
          Id = 1
        };

        var collection = new DataServiceCollection<Order>(this.context);
        collection.Load(order);
    In this case, the order with Id = 1 must exist in the data source exposed by the data service. Otherwise, you will get an error because the entity does not exist.
    The cool extension methods discussed by Stuart Leeks in this post, which replace all the magic strings in the "Expand" operation with expression trees, are another feature I am going to use to implement this generic repository. Thanks to these extension methods, you can replace the following query built with magic strings with a piece of code that only uses expressions.
    Magic strings:
        var customers = dataContext.Customers
            .Expand("Orders")
            .Expand("Orders/Items");
    Expressions:
        var customers = dataContext.Customers
            .Expand(c => c.Orders.SubExpand(o => o.Items));
    That query basically returns all the customers with their orders and order items. OK, now that we have the automatic change tracking support and the expression support for explicitly loading entity associations, we are ready to create the repository. The interface for this repository looks like this:
        public interface IRepository
        {
          T Create<T>() where T : new();
          void Update<T>(T entity);
          void Delete<T>(T entity);
          IQueryable<T> RetrieveAll<T>(params Expression<Func<T, object>>[] eagerProperties);
          IQueryable<T> Retrieve<T>(Expression<Func<T, bool>> predicate, params Expression<Func<T, object>>[] eagerProperties);
          void Attach<T>(T entity);
          void SaveChanges();
        }
    The Retrieve and RetrieveAll methods are used to execute queries against the data service context. While both methods receive an array of expressions to load associations explicitly, only the Retrieve method receives a predicate representing the "where" clause. The following code represents the final implementation of this repository.
        public class DataServiceRepository : IRepository
        {
          DataServiceContext context;

          public DataServiceRepository()
            : this(new DataServiceContext())
          {
          }

          public DataServiceRepository(DataServiceContext context)
          {
            this.context = context;
          }

          private static string ResolveEntitySet(Type type)
          {
            var entitySetAttribute = (EntitySetAttribute)type.GetCustomAttributes(typeof(EntitySetAttribute), true).FirstOrDefault();
            if (entitySetAttribute != null)
              return entitySetAttribute.EntitySet;
            return null;
          }

          public T Create<T>() where T : new()
          {
            var collection = new DataServiceCollection<T>(this.context);
            var entity = new T();
            collection.Add(entity);
            return entity;
          }

          public void Update<T>(T entity)
          {
            this.context.UpdateObject(entity);
          }

          public void Delete<T>(T entity)
          {
            this.context.DeleteObject(entity);
          }

          public void Attach<T>(T entity)
          {
            var collection = new DataServiceCollection<T>(this.context);
            collection.Load(entity);
          }

          public IQueryable<T> Retrieve<T>(Expression<Func<T, bool>> predicate, params Expression<Func<T, object>>[] eagerProperties)
          {
            var entitySet = ResolveEntitySet(typeof(T));
            var query = context.CreateQuery<T>(entitySet);
            foreach (var e in eagerProperties)
            {
              query = query.Expand(e);
            }
            return query.Where(predicate);
          }

          public IQueryable<T> RetrieveAll<T>(params Expression<Func<T, object>>[] eagerProperties)
          {
            var entitySet = ResolveEntitySet(typeof(T));
            var query = context.CreateQuery<T>(entitySet);
            foreach (var e in eagerProperties)
            {
              query = query.Expand(e);
            }
            return query;
          }

          public void SaveChanges()
          {
            this.context.SaveChanges(SaveChangesOptions.Batch);
          }
        }
    For instance, you can use the following code to retrieve customers with first name equal to "John" and all their orders in a single call.
        repository.Retrieve<Customer>(
            c => c.FirstName == "John", // Where
            c => c.Orders.SubExpand(o => o.Items));
    In case you want to have some pre-defined queries that you are going to use in several places, you can put them in a specific class.
        public static class CustomerQueries
        {
          public static Expression<Func<Customer, bool>> LastNameEqualsTo(string lastName)
          {
            return c => c.LastName == lastName;
          }
        }
    And then use it with the repository.
        repository.Retrieve<Customer>(
            CustomerQueries.LastNameEqualsTo("foo"),
            c => c.Orders.SubExpand(o => o.Items));

    Read the article

  • VSTO Development - Key Improvements In VS2010 / .NET 4.0?

    - by dferraro
    Hi all, I am trying to make a case to my bosses for using VS2010 for an upcoming Excel workbook VSTO application. I haven't used VSTO before, but I have used VBA. With 2010 just around the corner, I wanted to read about the improvements to see if it was worth using 2010 to develop this application. So far I have read that the two major improvements are easier deployment and better debugging / COM interop... I was just wondering if there is anything else I'm not aware of, or if anyone here is actually developing in VSTO, has used both 2010 and 2008, and could help make the case / arm me with information. The main concern of my bosses is deploying the .NET 4.0 runtime on the Citrix servers here... however, it seems that with 3.5 we would have to deploy the VSTO runtime and PIAs, etc. So wouldn't deployments actually be easier with 2010, because installing just the 4.0 runtime is better than having to install the 'VSTO Runtime' as well as the PIAs? Or is there something I'm missing here? Has anyone here deployed a VSTO app in an enterprise and can speak to this? Also, I'm trying to fight to use C# over VB.NET for this app. Does anyone know any key reasons why (apart from my bias on syntax preference) it would be better to use C# over VB for this? Any key features lacking in VB VSTO development? I've read about the VSTO Power Tools, and one of them describes LINQ enablement of the Excel object model classes - however, it says 'a set of C# classes'... Does anyone know if they literally mean C#, so this would not work with VB.NET, or do they just mean the code is written in C#? Has anyone ever used these power tools with VB? I am going to download and play with them now, but any help is greatly appreciated. Thanks very much for any information.

    Read the article

  • MySQL performance - 100Mb ethernet vs 1Gb ethernet

    - by Rob Penridge
    Hi all, I've just started a new job and noticed that the analysts' computers are connected to the network at 100 Mbps. The ODBC queries we run against the MySQL server can easily return 500 MB+, and it seems that at times when the servers are under high load the DBAs kill low-priority jobs because they are taking too long to run. My question is this: how much of this server time is spent executing the request, and how much is spent returning the data to the client? Could the query speeds be improved by upgrading the network connections to 1 Gbps? (Updated with the "why": the database in question was built to accommodate reporting needs and contains massive amounts of data. We usually work with subsets of this data at a granular level in external applications such as SAS or Excel, hence the large amounts of data being transmitted. The queries are not poorly structured - they are very simple, and the appropriate joins/indexes etc. are being used. I've removed 'query' from the title of the post as I realised this question is more about general MySQL performance than query-related performance. I was kind of hoping that someone with a Gigabit connection might be able to quantify some results for me by running a query that returns a decent amount of data, then limiting their connection speed to 100 Mb and rerunning the same query. Hopefully this could be done in an environment where loads are reasonably stable so as not to skew the results. If Ethernet speed can improve the situation, I wanted some quantifiable results to help argue my case for upgrading the network connections.) Thanks, Rob
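    A back-of-the-envelope calculation for the transfer part alone (ignoring protocol overhead and server time): 500 MB is roughly 4,000 megabits, so at 100 Mbps the wire time is at least 4,000 / 100 = 40 seconds, while at 1 Gbps the same transfer needs only about 4 seconds. If a noticeable share of the total job time is data transfer rather than query execution, the faster link would claw most of that back.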

    Read the article

  • Creating Array of settings names and values using ADO.NET Entities

    - by jordan.baucke
    I'm using an ADO.NET Entities (.edmx) data model along with MVC2 to build an application. I have a DB table where I want to store settings for methods that run elsewhere. MVC2 allows me to create a view, editor, etc. to update this table, which is great, but now when I want to do simple assignments based on column titles I'm a bit confused. For example, I would like to easily build an array that I could index into by the record's "Title" column:
        var entities = new ManagerEntities();
        Setting[] settings = entities.settings.ToArray();
    This returns something like Settings[0].[SettingTitle][SettingValue]. However, I would like to index into the value more easily than looping through all the returned settings when they're already indexed, e.g. string URL_ID_NEED = [NeededUrl][http://www.url.com]. Am I missing something relatively simple? Thanks!
    Update: OK, I think I've got a solution, but I'm wondering why it has to be so complicated, and whether I'm just not thinking in the right context for ADO.NET objects. Here's what I did:
        public string GetSetting(string SettingName)
        {
            var entities = new LabelManagerEntities();
            IEnumerable<KeyValuePair<string, object>> entityKeyValues = new KeyValuePair<string, object>[]
            {
                new KeyValuePair<string, object>("SettingTitle", SettingName)
            };
            EntityKey key = new EntityKey("LabelManagerEntities.Settings", entityKeyValues);

            // Get the object from the context or the persisted store by its key.
            Setting settingvalue = (Setting)entities.GetObjectByKey(key);
            return settingvalue.SettingValue.ToString();
        }
    This method handles the job of querying the entities by "Key" to get back the correct value as a string (which I can then strip, or cast to an integer, etc.). Am I just duplicating functionality that already exists in ADO.NET's design patterns (I'm pretty new to it), or is this a reasonable solution?
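    A lighter-weight alternative (a sketch based on the names used in the question; adjust to the actual model, and it assumes SettingValue is a string): pull the settings once and index them by title in a dictionary, which avoids building EntityKey instances by hand.

        using System.Collections.Generic;
        using System.Linq;

        public static class SettingsCache
        {
            public static Dictionary<string, string> LoadAll()
            {
                using (var entities = new ManagerEntities())
                {
                    // Executes the query once and keys each row by its title.
                    return entities.settings.ToDictionary(s => s.SettingTitle, s => s.SettingValue);
                }
            }
        }

        // Usage:
        //   var settings = SettingsCache.LoadAll();
        //   string url = settings["NeededUrl"];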

    Read the article

  • VB.Net Custom Controls

    - by Paul
    This might be a basic question; however, I am confused about some .NET concepts. I am trying to create a "data browser" in VB.NET, similar to a web browser, except each tab in the data browser is a view of some data (from a database or flat files), not a web page. The UI on each tab is mostly the same: a ListBox (showing data types, etc.), a TextBox (where you can create a filter), a DataGridView, a data source picker, etc. The only thing that would change on each tab is that there could be a custom "viewer". In most cases (depending on the data source) this would be the data grid, but in other cases it would be a tree control. From reading through the .NET documents, it appears that I need to create a custom control (MyDataBrowser) consisting of a panel with all the common controls (except the viewer). Every time the user says "New Tab", a new tab page is created and this MyDataBrowser control is added. The MyDataBrowser control would contain some function that creates the appropriate viewer based on the data at hand. If this is the suggested route, what is the best way to go about creating the MyDataBrowser control? (A) Is this a Custom Control Library? (B) Is this an inherited form? (C) Is this an inherited user control? I assume that I have to create a .DLL and add it as a reference. Any direction on this would be appreciated. Does the custom control have its own properties (I would like to save/load these from a configuration file)? Is it possible to add a BackgroundWorker to this custom control? Thanks.
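    A rough sketch of the user-control route (shown in C#; the names and layout are assumptions, and the same shape translates directly to VB.NET): the common pieces live on the control, and a host panel swaps in the right viewer for the data.

        using System.Windows.Forms;

        public class DataBrowserControl : UserControl
        {
            private readonly ListBox typeList = new ListBox();
            private readonly TextBox filterBox = new TextBox();
            private readonly Panel viewerHost = new Panel();

            public DataBrowserControl()
            {
                typeList.Dock = DockStyle.Left;
                filterBox.Dock = DockStyle.Top;
                viewerHost.Dock = DockStyle.Fill;
                // The Fill-docked host is added first so it takes the remaining space.
                Controls.Add(viewerHost);
                Controls.Add(typeList);
                Controls.Add(filterBox);
            }

            // Swap the viewer: a grid for tabular data, a tree for hierarchical data.
            public void ShowData(object data, bool hierarchical)
            {
                viewerHost.Controls.Clear();
                Control viewer = hierarchical ? (Control)new TreeView() : new DataGridView();
                viewer.Dock = DockStyle.Fill;
                var grid = viewer as DataGridView;
                if (grid != null)
                {
                    grid.DataSource = data;
                }
                viewerHost.Controls.Add(viewer);
            }
        }

    Packaging it in a Windows Forms control library (a DLL you reference) versus keeping it in the application project is mostly a distribution decision; either way the control can expose its own properties, and a BackgroundWorker can be added to it like any other component.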

    Read the article

  • iPhone openGLES performance tuning

    - by genesys
    Hey there! I've been trying for quite a while now to optimize the framerate of my game without really making progress. I'm running on the newest iPhone SDK and have an iPhone 3G with OS 3.1.2. I invoke around 150 draw calls, rendering about 1900 triangles in total (all objects are textured with two texture layers using multitexturing; most textures come from the same texture atlas stored as a PVRTC 2bpp compressed texture). This renders on my phone at around 30 fps, which appears to me to be way too low for only 1900 triangles. I tried many things to optimize the performance, including batching the objects together, transforming the vertices on the CPU and rendering them in a single draw call. This yields 8 draw calls (as opposed to 150), but performance is about the same (fps drops to around 26). I'm using 32-byte vertices stored in an interleaved array (12 bytes position, 12 bytes normals, 8 bytes UV). I'm rendering triangle lists and the vertices are ordered in tri-strip order. I did some profiling but I don't really know how to interpret it. Profiling with Instruments and Sampling yields this result: http://neo.cycovery.com/instruments_sampling.gif , telling me that a lot of time is spent in "mach_msg_trap". I googled it and it seems this function is called in order to wait for other things. But wait for what? Instruments with the OpenGL module yields this result: http://neo.cycovery.com/intstruments_openglES_debug.gif , but here I have really no idea what those numbers are telling me. Profiling with Shark didn't tell me much either: http://neo.cycovery.com/shark_profile_release.gif - the largest number is 10%, spent by DrawTriangles, and the whole rest is spent in functions with very small percentages. Can anyone tell me what else I could do to figure out the bottleneck, and help me interpret this profiling information? Thanks a lot!

    Read the article

  • Java performance problem with LinkedBlockingQueue

    - by lofthouses
    Hello, this is my first post on Stack Overflow... I hope someone can help me. I have a big performance regression with the Java 6 LinkedBlockingQueue. In the first thread I generate some objects which I push into the queue; in the second thread I pull these objects out. The performance regression occurs when the take() method of the LinkedBlockingQueue is called frequently. I monitored the whole program and the take() method claimed the most time overall, and the throughput goes from ~58 Mb/s down to 0.9 Mb/s... The queue's put and take methods are called through static methods on this class:
        public class C_myMessageQueue {

            private static final LinkedBlockingQueue<C_myMessageObject> x_queue =
                new LinkedBlockingQueue<C_myMessageObject>(50000);

            /**
             * @param message
             * @throws InterruptedException
             * @throws NullPointerException
             */
            public static void addMyMessage(C_myMessageObject message)
                    throws InterruptedException, NullPointerException {
                x_queue.put(message);
            }

            /**
             * @return the first message in the message queue
             * @throws InterruptedException
             */
            public static C_myMessageObject getMyMessage() throws InterruptedException {
                return x_queue.take();
            }
        }
    How can I tune the take() method to reach at least 25 Mb/s, or is there another class I can use that will block when the "queue" is full or empty? Kind regards, Bart. P.S.: sorry for my bad English, I'm from Germany ;)

    Read the article

  • Reporting System architecture for better performance

    - by pauloya
    Hi, we have a product that runs SQL Server Express 2005 and uses mainly ASP.NET. The database has around 200 tables, with a few (4 or 5) that can grow from 300 to 5000 rows per day and keep a history of 5 years, so they can grow to 10 million rows. We have built a reporting platform that allows customers to build reports based on templates, fields and filters. We have faced performance problems almost since the beginning. We try to keep report display times under 10 seconds, but some of them go up to 25 seconds (especially for those customers with a long history). We keep checking indexes and trying to improve the queries, but we get the feeling there's only so much we can do. Of course, the fact that the queries are generated dynamically doesn't help with the optimization. We also added a few tables that keep redundant data, but then we have the added problem of keeping this data up to date, and SQL Express has a limit on database size. We are now at a point where we have to decide whether to give up real-time reports, or perhaps cut the history, to get better performance. I would like to ask what the recommended approach is for this kind of system. Also, should we start looking at third-party tools/platforms? I know OLAP can be an option, but can we make it work on SQL Server Express, or at least with a license that is cheap enough to distribute to thousands of deployments? Thanks

    Read the article
