Search Results

Search found 22065 results on 883 pages for 'performance testing'.


  • July, the 31 Days of SQL Server DMO’s – Day 19 (sys.dm_exec_query_stats)

    - by Tamarick Hill
    The sys.dm_exec_query_stats DMV is one of the most useful DMVs out there when it comes to performance tuning. If you have been keeping up with this blog series this month, you know that I started out on Day 1 reviewing many of the DMVs within the 'exec' namespace. I'm not sure how I missed this one considering how valuable it is, but hey, they say it's better late than never, right? On Day 7 and Day 8 we reviewed sys.dm_exec_procedure_stats and sys.dm_exec_trigger_stats respectively. This sys.dm_exec_query_stats DMV is very similar to those two. As a matter of fact, it returns all of the information you saw in the other two DMVs, but in addition you can see stats for all queries that have cached execution plans on your server. You can even see stats for statements that are run ad hoc, as long as their plans are still in the plan cache. To better illustrate this DMV, let's have a quick look at it:

    ```sql
    SELECT * FROM sys.dm_exec_query_stats
    ```

    As you can see, there is a lot of information returned from this DMV. I won't go into detail about each and every one of these columns, but I will touch on a few of them briefly. The first column is the 'sql_handle', which, if you remember from Day 4 of our blog series, you can use to extract the actual SQL text that was executed. The next columns, statement_start_offset and statement_end_offset, provide a way of extracting the exact SQL statement that was executed as part of a batch. The plan_handle column is used to extract the execution plan that was used, which we talked about during Day 5 of this blog series. Later in the result set, you have columns that identify how many times a particular statement was executed, how much CPU time it used, how many reads/writes it performed, its duration, how many rows were returned, and so on. These columns provide you with a solid avenue to begin your performance optimization.

    The last column I will touch on is query_plan_hash. A lot of times when you have dynamic SQL running on your server, you have similar statements with different parameter values being passed in. These statements will often get similar execution plans, and a binary hash value is generated from each plan. The query plan hash can be used to find the cost of all queries that have similar execution plans; you can then tune based on that plan to improve the performance of all of the individual queries. This is a very powerful way of identifying and tuning ad hoc statements that run on your server.

    As I stated earlier, sys.dm_exec_query_stats is a very powerful and recommended DMV for performance tuning. You are able to quickly identify statements that are running on your server and analyze their impact on system resources. Using this DMV to track down the biggest performance killers on your server will allow you to make the biggest gains once you focus your tuning efforts on those top offenders. For more information about this DMV, please see the Books Online link: http://msdn.microsoft.com/en-us/library/ms189741.aspx Follow me on Twitter @PrimeTimeDBA
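    To make the offset columns concrete, here is a typical pattern (a sketch, not from the original post) that resolves each cached statement's text through sys.dm_exec_sql_text and ranks statements by CPU:

    ```sql
    -- Top CPU consumers; the offsets are byte positions into the Unicode
    -- batch text, hence the division by 2. An end offset of -1 means
    -- "to the end of the batch".
    SELECT TOP (10)
        qs.execution_count,
        qs.total_worker_time / qs.execution_count AS avg_cpu_time,
        qs.total_logical_reads,
        SUBSTRING(st.text,
                  (qs.statement_start_offset / 2) + 1,
                  ((CASE qs.statement_end_offset
                        WHEN -1 THEN DATALENGTH(st.text)
                        ELSE qs.statement_end_offset
                    END - qs.statement_start_offset) / 2) + 1) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_worker_time DESC;
    ```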


  • SQLAuthority News – Community Service and Public Speaking Engagements

    - by pinaldave
    Today is the last day of the year and I was going over my memories of 2010. Almost all of them are good, and I feel I am, for sure, a better person in terms of knowledge, nature, and overall as a human being. Looking back at the year is very satisfying, as I was able to go out in public and help the community in various capacities. Though most of the time my contribution was as a speaker, many times I reached out and helped organize events, working in any capacity to get the event out. I have taken part in many TechEds, PASS events, Virtual Tech Days, and various community events around the globe, and my contribution is not limited to my country only. Overall, I feel good to be part of this wonderful and supportive community.

    - SQLAuthority News – A Successful Community TechDays at Ahmedabad – December 11, 2010
    - SQLAuthority News – A Successful Performance Tuning Seminar at Pune – Dec 4-5, 2010
    - SQL SERVER – A Successful Performance Tuning Seminar – Hyderabad – Nov 27-28, 2010 – Next Pune
    - SQLAuthority News – SQLPASS Nov 8-11, 2010 – Seattle – An Alternative Look at Experience
    - SQLAuthority News – Statistics and Best Practices – Virtual Tech Days – Nov 22, 2010
    - SQLAuthority News – SQL Server Performance Optimizations Seminar – Grand Success – Colombo, Sri Lanka – Oct 4-5, 2010
    - SQL SERVER – Visiting Alma Mater – Delivering Session on Database Performance and Career – Nirma Institute of Technology
    - SQLAuthority News – Feedback Received for Virtual Tech Days Sessions on Spatial Database
    - SQLAuthority News – Community Tech Days, Ahmedabad – July 24, 2010
    - SQLAuthority News – SQL Data Camp, Chennai, July 17, 2010 – A Huge Success
    - SQLAuthority News – 2 Sessions at TechInsight 2010 – June 29 – July 1, 2010
    - SQLAuthority News – Author Visit – SQL Server 2008 R2 Launch
    - SQLAuthority News – Professional Development and Community
    - SQLAuthority News – TechEd India – April 12-14, 2010 Bangalore – An Unforgettable Experience – An Opportunity of A Lifetime
    - SQLAuthority News – Speaking Sessions at TechEd India – 3 Sessions – 1 Panel Discussion
    - SQLAuthority News – Meeting with Allen Bailochan Tuladhar – An Unlimited Experience
    - SQLAuthority News – Author Visit Review – TechMela Nepal – March 29-30, 2010
    - SQLAuthority News – Excellent Event – TechEd Sri Lanka – Feb 8, 2010
    - SQLAuthority News – Hyderabad Techies February Fever Feb 11, 2010 – Indexing for Performance
    - SQLAuthority News – MUGH – Microsoft User Group Hyderabad – Feb 2, 2010 Session Review
    - SQLAuthority News – Ahmedabad Community Tech Days – Jan 30, 2010 – Huge Success

    For earlier years' contributions you can check my webpage over here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Book Review, SQLAuthority News, T SQL, Technology


  • How to convert from amateur web app developer to professional web apper?

    - by Nilesh
    This is more of a practical question on the web app development and deployment process. Here is some background information: I use PHP for server-side scripting and JavaScript on the client side. I use Netbeans and Notepad++, and I use Firefox with Firebug for debugging and testing. The process I use is very amateurish: I code something in Netbeans, something in Notepad++, and since there is nothing to compile, I just refresh the Firefox browser and test it. This is convenient and fast compared to a Java development environment, where you would have to at least compile and deploy the jar files before you could run them. I have been thinking of putting a formal process in place for my development and find it hard to put one together. There are so many things to do before you can deploy your final web app. I keep hearing about JSLint, compression, unit testing (Selenium), Ant, YUI Compressor, etc., but I am now looking for some steps I can take to make me more organized.

    For example, I use Netbeans but don't use any projects within it; I directly update the files. I don't use any source control, but my Iomega backup saves each save as a different version, and at the end of the day I back up the dev directory to my Amazon S3 account. For me the development environment is just a DEV directory, TEST is my intermediate stage, and PROD is the final directory that gets pushed out to the server. But all these directories are under the same Apache home. I have a few PHP scripts that just copy the needed files into the production directory. That's about it for my development approach. I know I am missing the following (see the Ant sketch below):

    - Regression testing (manual or automated?)
    - Automated testing (Selenium?)
    - Automated deployment (Ant?)
    - Source control (SVN?)
    - Quality control (JSLint?)

    Can someone explain what the missing steps are and how to go about filling them in, in order to have a more professional approach? I am looking for tools with example tutorials for streamlining the whole development-to-deployment stage. For me, just getting the hang of database, server-side, and client-side development all in synchronization was itself a huge accomplishment, and now I feel there is a lot missing before you can produce a quality web application. For example, I see a lot of mention of automated testing, but how do I put it to use with respect to JavaScript and PHP? How do I use Ant for deployment? Is this all too much for a one- or two-person development team? Is there a way to automate all of the above so that I just keep coding in Netbeans and then run a batch file, configured once, that produces the code in the production directory? A lot of this information is scattered on the web and here; if someone can guide me, I would be happy to consolidate it here. Thank you for your patience :)
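    One concrete starting point for the "automated deployment (Ant?)" item is a minimal Ant build file that stages the DEV tree into PROD (a sketch only; the directory names and target names here are hypothetical, not taken from the question):

    ```xml
    <!-- build.xml: run with `ant deploy`; dev.dir and prod.dir are placeholders -->
    <project name="webapp" default="deploy" basedir=".">
        <property name="dev.dir"  value="dev"/>
        <property name="prod.dir" value="prod"/>

        <target name="clean">
            <delete dir="${prod.dir}"/>
        </target>

        <!-- Copy the dev tree into prod, leaving out logs and editor backups -->
        <target name="deploy" depends="clean">
            <copy todir="${prod.dir}">
                <fileset dir="${dev.dir}" excludes="**/*.log,**/*~"/>
            </copy>
        </target>
    </project>
    ```

    From here, extra targets can run JSLint, YUI Compressor, or a Selenium suite before the copy, which is how the pieces listed above usually chain together.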


  • ArchBeat Top 10 for December 2-8, 2012

    - by Bob Rhubart
    The Top 10 most-clicked items shared on the OTN ArchBeat Facebook page for the week of December 2-8, 2012:

    - Configure Oracle SOA JMSAdapter to Work with WLS JMS Topics: Another of the four posts published on Dec 4 by the Fusion Middleware A-Team blogger identified as "fip" illustrates "how to configure the JMS Topic, the JmsAdapter connection factory, as well as the composite so that the JMS Topic messages will be evenly distributed to same composite running off different SOA cluster nodes without causing duplication."
    - Web Service Example - Part 3: Asynchronous: Part 3 in this series from the Oracle ADF Mobile blog looks at "firing the web service asynchronously and then filling in the UI when it completes." Denis says, "This can be useful when you have data on the device in a local store and want to show that to the user while the application uses lazy loading from a web service to load more data."
    - Advanced Oracle SOA Suite Oracle Open World 2012 SOA Presentations: Oracle SOA & BPM Partner Community blogger Juergen Kress shares a list of 13 SOA presentations delivered or moderated by Oracle SOA Product Management at OOW12 in San Francisco.
    - Oracle WebLogic Server WLS Domain Browser: My colleague Jeff Davies, a frequent speaker at OTN Architect Day events and a genuinely nice guy, emailed me last night with this message: "I just came across this app on Google Play. It allows WebLogic administrators to browse WLS 12c domain information. I installed it on my phone and tried it out. Works very fast." I'm an iPhone guy, but I'm perfectly comfortable taking Jeff at his word. The app is called WLS Domain Browser. Follow the link for more info from the Google Play site.
    - Retrieve Performance Data from SOA Infrastructure Database: Another of the four blog posts published on Dec 4 by the very busy Oracle Fusion Middleware A-Team member "fip," this one offers "examples of some basic SQL queries you can run against the infrastructure database of Oracle SOA Suite 11G to acquire the performance statistics for a given period of time."
    - How to Achieve OC4J RMI Load Balancing: "Having returned from a customer who faced challenges with OC4J RMI load balancing, I felt there is still some confusion in the field [about] how OC4J RMI load balancing works," says the Oracle Fusion Middleware A-Team member known only as "fip." "Hence I decide to dust off an old tech note that I wrote a few years back and share it with the general public."
    - From XaaS to Java EE – Which damn cloud is right for me in 2012?: Oracle ACE Director Markus Eisele wrestles with a timely technical issue and shares his observations on several of the alternatives.
    - Exalogic 2.0.1 Tea Break Snippets - Creating a ModifyJeOS VirtualBox: "One of the main advantages of this is that Templates can be created away from the Exalogic Environment," explains The Old Toxophilist. (BTW: I had to look it up: a toxophilist is one who collects bows and arrows.)
    - ADF Mobile - Implementing Reusable Mobile Architecture: "Reusability was always a strong part of ADF," says Oracle ACE Director Andrejus Baranovskis. "The same high reusability level is supported now in ADF Mobile." The objective of this post is "to prove technically that [the] reusable architecture concept works for ADF Mobile."
    - Using BPEL Performance Statistics to Diagnose Performance Bottlenecks: Someone had a busy day... This post, one of four published on Dec 4 by a member of the Oracle Fusion Middleware A-Team identified only as "fip," offers details on how to "enable, retrieve and interpret the performance statistics, before the future versions provides a more pleasant user experience."

    Thought for the Day: "If you're afraid to change something it is clearly poorly designed." — Martin Fowler. Source: SoftwareQuotes.com


  • ArchBeat Link-o-Rama Top 10 for August 2012

    - by Bob Rhubart
    The Top 10 most popular items shared via the OTN ArchBeat Facebook page for the month of August 2012:

    - Now Available: Oracle SQL Developer 3.2 (3.2.09.23): New features include APEX listener, UI enhancements, and 12c database support.
    - The Role of Oracle VM Server for SPARC in a Virtualization Strategy: In this article, Matthias Pfutzner discusses hardware, desktop, and operating system virtualization, along with various Oracle virtualization technologies, including Oracle VM Server for SPARC.
    - How to Manually Install Flash Player Plugin to see the Oracle Enterprise Manager Performance Page | Kai Yu: So, you're a DBA and you want to check the Performance page in Oracle Enterprise Manager (11g or 12c). So you click the Performance tab and... nothing. Zip. Nada. The Flash plugin is a no-show. Relax! Oracle ACE Director Kai Yu shows you what you need to do to see all the pretty colors instead of that dull grey screen.
    - Relationally Challenged (CX - CRM - EQ/RQ/CRQ) | Chris Warticki: Self-proclaimed Oracle Support "spokesmodel" Chris Warticki has some advice for those interested in Customer Relationship Management: "How about we just dumb it down, strip it to the core, keep it simple and LISTEN?! No more focus groups, no more surveys, and no need to gather more data. We have plenty of that. Why not just provide the customer what they are asking for?"
    - Free WebLogic Server Course | Middleware Magic: So you want to sharpen your Oracle WebLogic Server skills, but you prefer to skip the whole classroom bit and don't want to be bothered with dealing with an instructor? No problem! Oracle ACE Rene van Wijk, a prolific Middleware Magic blogger, has information on an Oracle WebLogic course you can take on your own time, at your own pace.
    - Oracle VM VirtualBox 4.1.20 released: Oracle VM VirtualBox 4.1.20 was just released at the community and Oracle download sites, reports the Fat Bloke. This is a maintenance release containing bug fixes and stability improvements.
    - Optimizing OLTP Oracle Database Performance using Dell Express Flash PCIe SSDs | Kai Yu: Oracle ACE Director Kai Yu shares resources based on "several extensive performance studies on a single node Oracle 11g R2 database as well as a two node 11gR2 Oracle Real Application clusters (RAC) database running on Dell PowerEdge R720 servers with Dell Express Flash PCIe SSDs on Oracle Enterprise Linux 6.2 platform."
    - Oracle ACE sessions at Oracle OpenWorld: With so many great sessions at this year's event, building your Oracle OpenWorld schedule can involve making a lot of tough choices. But you'll find that the sessions led by Oracle ACEs just might be the icing on the cake for your OpenWorld experience.
    - MySQL Update: The Cleveland MySQL Meetup (Independence, OH): Oracle MySQL team member Benjamin Wood, a MySQL engineer and five-year veteran of the MySQL organization, will speak at the Cleveland MySQL Meetup event on September 12. The presentation will include a MySQL 5.5 overview, Oracle's roadmap for MySQL (including specifics on MySQL 5.6), best practices and how to overcome development and operational MySQL challenges, and the new MySQL commercial extensions. Click the link for time and location information.
    - Parsing XML in Oracle Database | Martijn van der Kamp: Martijn van der Kamp's post deals with processing XML in PL/SQL code and processing the data into the database.

    Thought for the Day: "Walking on water and developing software from a specification are easy if both are frozen." — Edward V. Berard. Source: SoftwareQuotes.com


  • Profiling Silverlight Applications after installing Visual Studio 2010 Service Pack 1

    - by mbcrump
    Introduction: Now that the dust has settled and everyone has downloaded and installed Visual Studio 2010 Service Pack 1, it's time to talk about a new feature it includes that will help Silverlight developers profile their applications. Let's take a look at what the official documentation says about it:

    Performance Wizard for Silverlight (taken from the VS2010 SP1 KB): Visual Studio 2010 SP1 enables you to tune Silverlight application performance by profiling the code. A traditional code profiler cannot tune the rendering performance for Silverlight applications. Many higher-level profilers are added to Visual Studio 2010 SP1 so that you can better determine which parts of the application consume time.

    So, how do you do it? After you finish installing VS2010 SP1, make sure the install took by going to Help –> About. You should see SP1Rel under Visual Studio 2010, as shown below. Now that we have verified you are on the most current release, let's load up a Silverlight application. I'm going to take my hobby Silverlight project that I created a month or so ago. The reason I'm picking this project is that I didn't focus much on performance; it was built for fun and to see what I could do with Silverlight. I believe this makes it the perfect application to profile.

    After the project is loaded, click on Analyze, then Launch Performance Wizard. Go ahead and click on CPU Sampling (recommended). You will notice that it asks which application to target. By default, it will select the .Web project in a Silverlight solution. Go ahead and leave the default web project checked, and leave the client as Internet Explorer. Now, go ahead and click Finish, and your Silverlight application will launch.

    While your application is running, you will see the following inside of Visual Studio 2010. Here is where you will need to attach your Silverlight application to the web application that is currently being profiled. Simply click on the Attach/Detach button below and find your application to attach to the profiler. In my case, I am using IE8 and could find it by the title. After you close your browser, you will notice it generated a report. These files end with .VSP, and if you click on the .VSP you will see the generated report.

    We could turn off "Just My Code", but it may pick up things that we didn't want to profile, as shown below. One other feature to note is that you may want to export the data to a CSV or XML file. You can do that by looking at the toolbar and clicking the button highlighted below.

    Conclusion: The profiler for Silverlight is a great addition to an already great product. So before you ship a Silverlight application, run it through the profiler and see what comes up. Since it's included and free, I can't see a reason not to do this. Thanks again for reading and I hope you subscribe to my blog or follow me on Twitter for more Silverlight/WP7 fun. Subscribe to my feed


  • Rethinking Oracle Optimizer Statistics for P6 Part 2

    - by Brian Diehl
    In the previous post (Part 1), I tried to draw some key insights about the relationship between P6 and Oracle Optimizer Statistics. The first is that average cardinality has the greatest impact on query optimization, and that the particular queries generated by P6 are more likely to use this average during calculations. The second is that these are statistics that are unlikely to change greatly over the life of the application. Ultimately, our goal is to get the best query optimization possible. Or is it?

    Stability. No application administrator wants to get the call at 9am that their application users cannot get their work done because everything is running slowly. This is a possibility with a regularly scheduled nightly collection of statistics. It may not just be slow performance, but a complete loss of service because one or more queries are optimized poorly. Ideally, this should not be the case: the database optimizer should make better decisions with more up-to-date data. Better statistics may give an incremental performance benefit, but this benefit must be balanced against the potential cost of system downtime. It is stability that we ultimately desire, not absolute optimal performance. We do want the benefit from more accurate statistics and better query plans, but not at the risk of an unusable system. As a result, I've developed the following methodology around managing database statistics for the P6 database.

    1. No automatic re-gathering. A daily, weekly, or other interval of statistics gathering is unlikely to be beneficial. Quite the opposite: it is more likely to cause problems.

    2. Smart re-gathering. The time to collect statistics is when things have changed significantly. For a new installation of P6, this happens more often because the data is growing from a few rows to thousands and more. But for a mature system, the data is not changing significantly from week to week. There are times to collect statistics: new releases of the application; changes in the underlying hardware or software versions (e.g., a new Oracle RDBMS version); when additional user groups are added, since the new groups may use the software in significantly different ways; and after significant changes in the data, which may be monthly, quarterly, or yearly.

    3. Always test. If you take away one thing from this post, it is to always have a plan to test after changing statistics. In reality, statistics can be collected as often as you desire, provided there are tests in place to verify that performance is the same or better. These might be automated tests or simply a manual script of application functions.

    4. Have a way out. Never change the statistics without a way to return to the previous set. Think of the statistics as one part of the overall application code, alongside the source code of both the application and the RDBMS. It would be foolish to change to new code without a way to get back to the previous version.

    In the final post, I will talk about the actual script I created for P6 PMDB and a possible future direction for managing query performance.
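    To make point 4 concrete, Oracle's DBMS_STATS package can snapshot the current statistics into a user stat table before any regathering (a minimal sketch; the P6ADMIN schema and STATS_BACKUP table names here are hypothetical, not from the post):

    ```sql
    -- Export the current schema statistics so they can be re-imported
    -- if the freshly gathered set causes plans to regress.
    BEGIN
      DBMS_STATS.CREATE_STAT_TABLE(ownname => 'P6ADMIN', stattab => 'STATS_BACKUP');
      DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'P6ADMIN', stattab => 'STATS_BACKUP');
      DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'P6ADMIN');
    END;
    /

    -- The way out, if the tests from point 3 show a regression:
    -- EXEC DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => 'P6ADMIN', stattab => 'STATS_BACKUP');
    ```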


  • Django "comment_was_flagged" signal

    - by tsoporan
    Hello. This is my first time working with Django signals, and I would like to hook the "comment_was_flagged" signal provided by the comments app to notify me when a comment is flagged. This is my code, but it doesn't seem to work; am I missing something?

    ```python
    from django.contrib.comments.signals import comment_was_flagged
    from django.core.mail import send_mail

    def comment_flagged_notification(sender, **kwargs):
        send_mail('testing moderation', 'testing', 'test@localhost',
                  ['[email protected]'])

    comment_was_flagged.connect(comment_flagged_notification)
    ```

    (I am just testing the email for now, but I have verified that email sends properly.) Thanks!
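    For reference, this signal documents comment, flag, created, and request as keyword arguments, so a receiver can unpack them explicitly. The sketch below assumes the stock django.contrib.comments app and that the module containing the receiver is imported at startup (e.g., from models.py), which is a common reason connect() appears to do nothing:

    ```python
    from django.contrib.comments.signals import comment_was_flagged

    def comment_flagged_notification(sender, comment=None, flag=None,
                                     created=None, request=None, **kwargs):
        # comment: the Comment instance; flag: the CommentFlag applied;
        # created: True if this flag is new; request: the triggering request.
        ...

    comment_was_flagged.connect(comment_flagged_notification)
    ```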


  • What are good design practices when working with Entity Framework

    - by AD
    This will apply mostly to an ASP.NET application where the data is not accessed via SOA, meaning that you get access to the objects loaded from the framework, not transfer objects, although some recommendations still apply. This is a community post, so please add to it as you see fit. Applies to: Entity Framework 1.0, shipped with Visual Studio 2008 SP1.

    Why pick EF in the first place? Considering it is a young technology with plenty of problems (see below), it may be a hard sell to get on the EF bandwagon for your project. However, it is the technology Microsoft is pushing (at the expense of Linq2Sql, which is a subset of EF). In addition, you may not be satisfied with NHibernate or other solutions out there. Whatever the reasons, there are people out there (including me) working with EF, and life is not bad.

    EF and inheritance. The first big subject is inheritance. EF does support mapping for inherited classes, persisted in two ways: table per class and table per hierarchy. The modeling is easy and there are no programming issues with that part. (The following applies to the table-per-class model, as I don't have experience with table per hierarchy, which is, anyway, limited.) The real problem comes when you are trying to run queries that include one or many objects that are part of an inheritance tree: the generated SQL is incredibly awful, takes a long time to get parsed by the EF, and takes a long time to execute as well. This is a real show stopper. Enough that EF should probably not be used with inheritance, or as little as possible.

    Here is an example of how bad it was. My EF model had ~30 classes, ~10 of which were part of an inheritance tree. On running a query to get one item from the base class, something as simple as Base.Get(id), the generated SQL was over 50,000 characters. Then, when you are trying to return some associations, it degenerates even more, going as far as throwing SQL exceptions about not being able to query more than 256 tables at once.

    Ok, this is bad. EF's concept is to allow you to create your object structure without (or with as little as possible) consideration of the actual database implementation of your tables. It completely fails at this. So, recommendations? Avoid inheritance if you can; the performance will be so much better. Use it sparingly where you have to. In my opinion, this makes EF a glorified SQL-generation tool for querying, but there are still advantages to using it, and ways to implement mechanisms that are similar to inheritance.

    Bypassing inheritance with interfaces. The first thing to know when trying to get some kind of inheritance going with EF is that you cannot assign a non-EF-modeled class as a base class. Don't even try it; it will get overwritten by the modeler. So what to do? You can use interfaces to enforce that classes implement some functionality. For example, here is an IEntity interface that allows you to define associations between EF entities where you don't know at design time what the type of the entity will be:

    ```csharp
    public enum EntityTypes { Unknown = -1, Dog = 0, Cat }

    public interface IEntity
    {
        int EntityID { get; }
        string Name { get; }
        EntityTypes EntityType { get; }
    }

    public partial class Dog : IEntity
    {
        // Implement EntityID and Name, which could actually be fields
        // from your EF model.
        public EntityTypes EntityType
        {
            get { return EntityTypes.Dog; }
        }
    }
    ```

    Using this IEntity, you can then work with undefined associations in other classes:
    ```csharp
    // Let's take a class that you defined in your model;
    // that class has a mapping to the columns PetID and PetType.
    public partial class Person
    {
        public IEntity GetPet()
        {
            return IEntityController.Get(PetID, PetType);
        }
    }
    ```

    which makes use of some helper functions:

    ```csharp
    public class IEntityController
    {
        public static IEntity Get(int id, EntityTypes type)
        {
            switch (type)
            {
                case EntityTypes.Dog:
                    return Dog.Get(id);
                case EntityTypes.Cat:
                    return Cat.Get(id);
                default:
                    throw new Exception("Invalid EntityType");
            }
        }
    }
    ```

    Not as neat as having plain inheritance, particularly considering you have to store the PetType in an extra database field, but considering the performance gains, I would not look back. It also cannot model one-to-many or many-to-many relationships, but with creative use of 'Union' it could be made to work. Finally, it creates the side effect of loading data in a property/function of the object, which you need to be careful about. Using a clear naming convention like GetXYZ() helps in that regard.

    Compiled queries. Entity Framework performance is not as good as direct database access with ADO (obviously) or Linq2Sql. There are ways to improve it, however, one of which is compiling your queries. The performance of a compiled query is similar to Linq2Sql. What is a compiled query? It is simply a query for which you tell the framework to keep the parsed tree in memory so it doesn't need to be regenerated the next time you run it. So on the next run, you save the time it takes to parse the tree. Do not discount this, as it is a very costly operation that gets even worse with more complex queries. There are two ways to compile a query: creating an ObjectQuery with EntitySQL, and using the CompiledQuery.Compile() function. (Note that by using an EntityDataSource in your page, you will in fact be using ObjectQuery with EntitySQL, so that gets compiled and cached.)

    An aside here, in case you don't know what EntitySQL is: it is a string-based way of writing queries against the EF. Here is an example: "select value dog from Entities.DogSet as dog where dog.ID = @ID". The syntax is pretty similar to SQL syntax. You can also do pretty complex object manipulation, which is well explained [here][1]. Ok, so here is how to do it using ObjectQuery<>:

    ```csharp
    string query = "select value dog " +
                   "from Entities.DogSet as dog " +
                   "where dog.ID = @ID";
    ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance);
    oQuery.Parameters.Add(new ObjectParameter("ID", id));
    oQuery.EnablePlanCaching = true;
    return oQuery.FirstOrDefault();
    ```

    The first time you run this query, the framework will generate the expression tree and keep it in memory, so the next time it gets executed you will save on that costly step. In this example EnablePlanCaching = true, which is unnecessary since that is the default option.

    The other way to compile a query for later use is the CompiledQuery.Compile method. This uses a delegate:

    ```csharp
    static readonly Func<Entities, int, Dog> query_GetDog =
        CompiledQuery.Compile<Entities, int, Dog>((ctx, id) =>
            ctx.DogSet.FirstOrDefault(it => it.ID == id));
    ```

    or, using Linq:

    ```csharp
    static readonly Func<Entities, int, Dog> query_GetDog =
        CompiledQuery.Compile<Entities, int, Dog>((ctx, id) =>
            (from dog in ctx.DogSet where dog.ID == id select dog).FirstOrDefault());
    ```

    To call the query:

    ```csharp
    query_GetDog.Invoke( YourContext, id );
    ```

    The advantage of CompiledQuery is that the syntax of your query is checked at compile time, whereas EntitySQL is not. However, there are other considerations...
    Includes. Let's say you want the data for the dog's owner to be returned by the query, to avoid making two calls to the database. Easy to do, right?

    EntitySQL:

    ```csharp
    string query = "select value dog " +
                   "from Entities.DogSet as dog " +
                   "where dog.ID = @ID";
    ObjectQuery<Dog> oQuery =
        new ObjectQuery<Dog>(query, EntityContext.Instance).Include("Owner");
    oQuery.Parameters.Add(new ObjectParameter("ID", id));
    oQuery.EnablePlanCaching = true;
    return oQuery.FirstOrDefault();
    ```

    CompiledQuery:

    ```csharp
    static readonly Func<Entities, int, Dog> query_GetDog =
        CompiledQuery.Compile<Entities, int, Dog>((ctx, id) =>
            (from dog in ctx.DogSet.Include("Owner")
             where dog.ID == id
             select dog).FirstOrDefault());
    ```

    Now, what if you want to have the Include parametrized? What I mean is that you want a single Get() function that is called from different pages that care about different relationships for the dog. One cares about the Owner, another about his FavoriteFood, another about his FavoriteToy, and so on. Basically, you want to tell the query which associations to load. It is easy to do with EntitySQL:

    ```csharp
    public Dog Get(int id, string include)
    {
        string query = "select value dog " +
                       "from Entities.DogSet as dog " +
                       "where dog.ID = @ID";
        ObjectQuery<Dog> oQuery =
            new ObjectQuery<Dog>(query, EntityContext.Instance).IncludeMany(include);
        oQuery.Parameters.Add(new ObjectParameter("ID", id));
        oQuery.EnablePlanCaching = true;
        return oQuery.FirstOrDefault();
    }
    ```

    The include simply uses the passed string. Easy enough. Note that it is possible to improve on the Include(string) function (which accepts only a single path) with an IncludeMany(string) that lets you pass a string of comma-separated associations to load. Look in the extension section further on for this function. If we try to do it with CompiledQuery, however, we run into numerous problems. The obvious:

    ```csharp
    static readonly Func<Entities, int, string, Dog> query_GetDog =
        CompiledQuery.Compile<Entities, int, string, Dog>((ctx, id, include) =>
            (from dog in ctx.DogSet.Include(include)
             where dog.ID == id
             select dog).FirstOrDefault());
    ```

    will choke when called with:

    ```csharp
    query_GetDog.Invoke( YourContext, id, "Owner,FavoriteFood" );
    ```

    because, as mentioned above, Include() only wants to see a single path in the string, and here we are giving it two: "Owner" and "FavoriteFood" (which is not to be confused with "Owner.FavoriteFood"!). Then let's use IncludeMany(), which is an extension function:

    ```csharp
    static readonly Func<Entities, int, string, Dog> query_GetDog =
        CompiledQuery.Compile<Entities, int, string, Dog>((ctx, id, include) =>
            (from dog in ctx.DogSet.IncludeMany(include)
             where dog.ID == id
             select dog).FirstOrDefault());
    ```

    Wrong again, this time because the EF cannot parse IncludeMany: it is not among the functions the EF recognizes; it is an extension. Ok, so you want to pass an arbitrary number of paths to your function, and Include() only takes a single one. What to do? You could decide that you will never, ever need more than, say, 20 includes, and pass each separate string in a struct to CompiledQuery. But now the query looks like this:

    ```csharp
    from dog in ctx.DogSet.Include(include1).Include(include2).Include(include3)
                          .Include(include4).Include(include5).Include(include6)
                          .[...].Include(include19).Include(include20)
    where dog.ID == id
    select dog
    ```

    which is awful as well. Ok then, but wait a minute. Can't we return an ObjectQuery<> from CompiledQuery, and then set the includes on that?
    Well, that is what I would have thought as well:

    ```csharp
    static readonly Func<Entities, int, ObjectQuery<Dog>> query_GetDog =
        CompiledQuery.Compile<Entities, int, ObjectQuery<Dog>>((ctx, id) =>
            (ObjectQuery<Dog>)(from dog in ctx.DogSet
                               where dog.ID == id
                               select dog));

    public Dog GetDog(int id, string include)
    {
        ObjectQuery<Dog> oQuery = query_GetDog(YourContext, id);
        oQuery = oQuery.IncludeMany(include);
        return oQuery.FirstOrDefault();
    }
    ```

    That should have worked, except that when you call IncludeMany (or Include, Where, OrderBy...) you invalidate the cached compiled query, because it is an entirely new one now! The expression tree needs to be reparsed and you take that performance hit again.

    So what is the solution? You simply cannot use CompiledQuery with parametrized Includes. Use EntitySQL instead. This doesn't mean there aren't uses for CompiledQuery. It is great for localized queries that will always be called in the same context. Ideally, CompiledQuery should always be used, because the syntax is checked at compile time; but due to this limitation, that's not possible. An example of use: you may want to have a page that queries which two dogs have the same favorite food, which is a bit narrow for a business-layer function, so you put it in your page and know exactly what type of includes are required.

    Passing more than 3 parameters to a CompiledQuery. Func is limited to 5 parameters, of which the last one is the return type and the first one is your Entities object from the model. So that leaves you with 3 parameters. A pittance, but it can be improved on very easily:

    ```csharp
    public struct MyParams
    {
        public string param1;
        public int param2;
        public DateTime param3;
    }

    static readonly Func<Entities, MyParams, IEnumerable<Dog>> query_GetDog =
        CompiledQuery.Compile<Entities, MyParams, IEnumerable<Dog>>((ctx, myParams) =>
            from dog in ctx.DogSet
            where dog.Age == myParams.param2
               && dog.Name == myParams.param1
               && dog.BirthDate > myParams.param3
            select dog);

    public List<Dog> GetSomeDogs(int age, string name, DateTime birthDate)
    {
        MyParams myParams = new MyParams();
        myParams.param1 = name;
        myParams.param2 = age;
        myParams.param3 = birthDate;
        return query_GetDog(YourContext, myParams).ToList();
    }
    ```

    Return types. (This does not apply to EntitySQL queries, as they aren't compiled at the same time during execution as the CompiledQuery method.) Working with Linq, you usually don't force the execution of the query until the very last moment, in case some other function downstream wants to change the query in some way:

    ```csharp
    static readonly Func<Entities, int, string, IEnumerable<Dog>> query_GetDog =
        CompiledQuery.Compile<Entities, int, string, IEnumerable<Dog>>((ctx, age, name) =>
            from dog in ctx.DogSet
            where dog.Age == age && dog.Name == name
            select dog);

    public IEnumerable<Dog> GetSomeDogs(int age, string name)
    {
        return query_GetDog(YourContext, age, name);
    }

    public void DataBindStuff()
    {
        IEnumerable<Dog> dogs = GetSomeDogs(4, "Bud");
        // But I want the dogs ordered by BirthDate!
        gridView.DataSource = dogs.OrderBy(it => it.BirthDate);
    }
    ```

    What is going to happen here? By still playing with the original ObjectQuery (the actual return type of the Linq statement, which implements IEnumerable), you invalidate the compiled query and force a re-parse. So the rule of thumb is to return a List<> of objects instead:
    ```csharp
    static readonly Func<Entities, int, string, IEnumerable<Dog>> query_GetDog =
        CompiledQuery.Compile<Entities, int, string, IEnumerable<Dog>>((ctx, age, name) =>
            from dog in ctx.DogSet
            where dog.Age == age && dog.Name == name
            select dog);

    public List<Dog> GetSomeDogs(int age, string name)
    {
        return query_GetDog(YourContext, age, name).ToList(); // <== change here
    }

    public void DataBindStuff()
    {
        List<Dog> dogs = GetSomeDogs(4, "Bud");
        // But I want the dogs ordered by BirthDate!
        gridView.DataSource = dogs.OrderBy(it => it.BirthDate);
    }
    ```

    When you call ToList(), the query gets executed as per the compiled query; the OrderBy is executed later, against the objects in memory. It may be a little bit slower, but I'm not even sure. One sure thing is that you have no worries about mishandling the ObjectQuery and invalidating the compiled query plan. Once again, that is not a blanket statement: ToList() is a defensive programming trick, but if you have a valid reason not to use ToList(), go ahead. There are many cases in which you would want to refine the query before executing it.

    Performance. What is the performance impact of compiling a query? It can actually be fairly large. A rule of thumb is that compiling and caching the query for reuse takes at least double the time of simply executing it without caching. For complex queries (read: inheritance), I have seen upwards of 10 seconds. So the first time a pre-compiled query gets called, you take a performance hit. After that first hit, performance is noticeably better than the same non-pre-compiled query, practically the same as Linq2Sql. When you load a page with pre-compiled queries the first time, you will take a hit. It will load in maybe 5-15 seconds (obviously more than one pre-compiled query will end up being called), while subsequent loads will take less than 300 ms. A dramatic difference, and it is up to you to decide whether it is ok for your first user to take the hit, or whether you want a script to call your pages to force compilation of the queries.

    Can this query be cached?

    ```csharp
    Dog dog = (from d in YourContext.DogSet
               where d.ID == id
               select d).FirstOrDefault();
    ```

    No. Ad-hoc Linq queries are not cached, and you will incur the cost of generating the tree every single time you call it.

    Parametrized queries. Most search capabilities involve heavily parametrized queries. There are even libraries available that will let you build a parametrized query out of lambda expressions. The problem is that you cannot use pre-compiled queries with those. One way around that is to map out all the possible criteria in the query and flag which ones you want to use:

    ```csharp
    public struct MyParams
    {
        public string name;
        public bool checkName;
        public int age;
        public bool checkAge;
    }

    static readonly Func<Entities, MyParams, IEnumerable<Dog>> query_GetDog =
        CompiledQuery.Compile<Entities, MyParams, IEnumerable<Dog>>((ctx, myParams) =>
            from dog in ctx.DogSet
            // A disabled flag short-circuits its criterion so the other
            // criteria can still match.
            where (myParams.checkAge == false || dog.Age == myParams.age)
               && (myParams.checkName == false || dog.Name == myParams.name)
            select dog);

    protected List<Dog> GetSomeDogs()
    {
        MyParams myParams = new MyParams();
        myParams.name = "Bud";
        myParams.checkName = true;
        myParams.age = 0;
        myParams.checkAge = false;
        return query_GetDog(YourContext, myParams).ToList();
    }
    ```

    The advantage here is that you get all the benefits of a pre-compiled query.
    The disadvantages are that you will most likely end up with a where clause that is pretty difficult to maintain, that you incur a bigger penalty for pre-compiling the query, and that each query you run is not as efficient as it could be (particularly with joins thrown in).

    Another way is to build an EntitySQL query piece by piece, as we all did with SQL:

    ```csharp
    protected List<Dog> GetSomeDogs(string name, int age)
    {
        string query = "select value dog from Entities.DogSet where 1 = 1 ";
        if (!String.IsNullOrEmpty(name))
            query = query + " and dog.Name = @Name ";
        if (age > 0)
            query = query + " and dog.Age = @Age ";

        ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, YourContext);
        if (!String.IsNullOrEmpty(name))
            oQuery.Parameters.Add(new ObjectParameter("Name", name));
        if (age > 0)
            oQuery.Parameters.Add(new ObjectParameter("Age", age));

        return oQuery.ToList();
    }
    ```

    Here the problems are:

    - There is no syntax checking during compilation.
    - Each different combination of parameters generates a different query, which will need to be pre-compiled when it is first run. In this case, there are only 4 different possible queries (no params, age-only, name-only, and both params), but you can see that there can be way more with a normal search.
    - No one likes to concatenate strings!

    Another option is to query a large subset of the data and then narrow it down in memory. This is particularly useful if you are working with a definite subset of the data, like all the dogs in a city. You know there are a lot, but you also know there aren't that many... so your CityDog search page can load all the dogs for the city in memory with a single pre-compiled query, and then refine the results:

    ```csharp
    protected List<Dog> GetSomeDogs(string name, int age, string city)
    {
        string query = "select value dog from Entities.DogSet " +
                       "where dog.Owner.Address.City = @City ";
        ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, YourContext);
        oQuery.Parameters.Add(new ObjectParameter("City", city));

        List<Dog> dogs = oQuery.ToList();
        if (!String.IsNullOrEmpty(name))
            dogs = dogs.Where(it => it.Name == name).ToList();
        if (age > 0)
            dogs = dogs.Where(it => it.Age == age).ToList();
        return dogs;
    }
    ```

    It is particularly useful when you start by displaying all the data and then allow for filtering. Problems:

    - This could lead to serious data transfer if you are not careful about your subset.
    - You can only filter on the data that you returned. That means that if you don't return the Dog.Owner association, you will not be able to filter on Dog.Owner.Name.

    So what is the best solution? There isn't one. You need to pick the solution that works best for you and your problem:

    - Use lambda-based query building when you don't care about pre-compiling your queries.
    - Use a fully-defined pre-compiled Linq query when your object structure is not too complex.
    - Use EntitySQL/string concatenation when the structure could be complex and the possible number of different resulting queries is small (which means fewer pre-compilation hits).
    - Use in-memory filtering when you are working with a smallish subset of the data, or when you had to fetch all of the data anyway (if the performance is fine with all the data, then filtering in memory will not cause any extra time to be spent in the db).
    Singleton access. The best way to deal with your context and entities across all your pages is to use the singleton pattern:

    ```csharp
    public sealed class YourContext
    {
        private const string instanceKey = "On3GoModelKey";

        YourContext() {}

        public static YourEntities Instance
        {
            get
            {
                HttpContext context = HttpContext.Current;
                if (context == null)
                    return Nested.instance;

                if (context.Items[instanceKey] == null)
                {
                    YourEntities entity = new YourEntities();
                    context.Items[instanceKey] = entity;
                }
                return (YourEntities)context.Items[instanceKey];
            }
        }

        class Nested
        {
            // Explicit static constructor to tell the C# compiler
            // not to mark the type as beforefieldinit.
            static Nested() {}

            internal static readonly YourEntities instance = new YourEntities();
        }
    }
    ```

    NoTracking: is it worth it? When executing a query, you can tell the framework to track the objects it will return or not. What does it mean? With tracking enabled (the default option), the framework will track what is going on with the object (has it been modified? created? deleted?) and will also link objects together when further queries are made from the database, which is what is of interest here.

    For example, let's assume that the Dog with ID == 2 has an owner whose ID == 10:

    ```csharp
    Dog dog = (from d in YourContext.DogSet
               where d.ID == 2
               select d).FirstOrDefault();
    // dog.OwnerReference.IsLoaded == false

    Person owner = (from o in YourContext.PersonSet
                    where o.ID == 10
                    select o).FirstOrDefault();
    // dog.OwnerReference.IsLoaded == true
    ```

    If we were to do the same with no tracking, the result would be different:

    ```csharp
    ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>)
        (from d in YourContext.DogSet
         where d.ID == 2
         select d);
    oDogQuery.MergeOption = MergeOption.NoTracking;
    Dog dog = oDogQuery.FirstOrDefault();
    // dog.OwnerReference.IsLoaded == false

    ObjectQuery<Person> oPersonQuery = (ObjectQuery<Person>)
        (from o in YourContext.PersonSet
         where o.ID == 10
         select o);
    oPersonQuery.MergeOption = MergeOption.NoTracking;
    Person owner = oPersonQuery.FirstOrDefault();
    // dog.OwnerReference.IsLoaded == false
    ```

    Tracking is very useful, and in a perfect world without performance issues, it would always be on. But in this world there is a price for it, in terms of performance. So, should you use NoTracking to speed things up? It depends on what you are planning to use the data for. Is there any chance that the data you query with NoTracking will be used to make updates/inserts/deletes in the database? If so, don't use NoTracking, because associations are not tracked and exceptions will be thrown.

    In a page where there are absolutely no updates to the database, you can use NoTracking. Mixing tracking and NoTracking is possible, but it requires you to be extra careful with updates/inserts/deletes. The problem is that if you mix them, you risk having the framework try to Attach() a NoTracking object to the context where another copy of the same object exists with tracking on. Basically, what I am saying is that

    ```csharp
    Dog dog1 = (from d in YourContext.DogSet
                where d.ID == 2
                select d).FirstOrDefault();

    ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>)
        (from d in YourContext.DogSet
         where d.ID == 2
         select d);
    oDogQuery.MergeOption = MergeOption.NoTracking;
    Dog dog2 = oDogQuery.FirstOrDefault();
    ```

    dog1 and dog2 are two different objects, one tracked and one not. Using the detached object in an update/insert will force an Attach() that will say, "Wait a minute, I already have an object here with the same database key. Fail."
    And when you Attach() one object, all of its hierarchy gets attached as well, causing problems everywhere. Be extra careful.

    How much faster is it with NoTracking? It depends on the queries. Some are much more susceptible to tracking than others. I don't have a fast and easy rule for it, but it helps.

    So should I use NoTracking everywhere, then? Not exactly. There are some advantages to tracking objects. The first one is that the object is cached, so subsequent calls for that object will not hit the database. That cache is only valid for the lifetime of the YourEntities object which, if you use the singleton code above, is the same as the page lifetime: one page request == one YourEntities object. So multiple calls for the same object will load it only once per page request. (Other caching mechanisms could extend that.) What happens when you are using NoTracking and try to load the same object multiple times? The database will be queried each time, so there is an impact there. How often do/should you call for the same object during a single page request? As little as possible, of course, but it does happen.

    Also remember the piece above about having the associations connected automatically for you? You don't have that with NoTracking, so if you load your data in multiple batches, you will not have a link between them:

    ```csharp
    ObjectQuery<Dog> oDogQuery =
        (ObjectQuery<Dog>)(from d in YourContext.DogSet select d);
    oDogQuery.MergeOption = MergeOption.NoTracking;
    List<Dog> dogs = oDogQuery.ToList();

    ObjectQuery<Person> oPersonQuery =
        (ObjectQuery<Person>)(from o in YourContext.PersonSet select o);
    oPersonQuery.MergeOption = MergeOption.NoTracking;
    List<Person> owners = oPersonQuery.ToList();
    ```

    In this case, no dog will have its .Owner property set. Some things to keep in mind when you are trying to optimize the performance.

    No lazy loading; what am I to do? This can be seen as a blessing in disguise. Of course it is annoying to load everything manually. However, it decreases the number of calls to the db and forces you to think about when you should load data. The more you can load in one database call, the better. That was always true, but it is enforced now with this 'feature' of EF. Of course, you can call

    ```csharp
    if (!ObjectReference.IsLoaded)
        ObjectReference.Load();
    ```

    if you want to, but a better practice is to force the framework to load the objects you know you will need in one shot. This is where the discussion about parametrized Includes begins to make sense. Let's say you have your Dog object:

    ```csharp
    public class Dog
    {
        public Dog Get(int id)
        {
            return YourContext.DogSet.FirstOrDefault(it => it.ID == id);
        }
    }
    ```

    This is the type of function you work with all the time. It gets called from all over the place, and once you have that Dog object, you will do very different things with it in different functions. First, it should be pre-compiled, because you will call it very often. Second, each different page will want to have access to a different subset of the Dog data. Some will want the Owner, some the FavoriteToy, etc.

    Of course, you could call Load() for each reference you need, any time you need one. But that will generate a call to the database each time. Bad idea. So instead, each page will ask for the data it wants to see when it first requests the Dog object:

    ```csharp
    static public Dog Get(int id) { return Get(id, ""); }

    static public Dog Get(int id, string includePath)
    {
        string query = "select value o " +
                       " from YourEntities.DogSet as o " +
    ```


  • How to improve Java performance on Informix for Windows

    - by Michal Niklas
    I have a problem with the performance of Java UDR functions on Informix on Windows. On this server I already have some functions in C and SPL. I chose one function, wrote it in all 3 languages, and measured its performance on a test table. The function calculates a kind of checksum, so it does not use any DB libraries, etc., only string and math operations. I observed performance on 30k records with SQL like:

    ```sql
    select function(txt) from _tmp_perf_test
    ```

    changing `function` to function_c, function_spl, or function_java. My performance tests showed that the C function is the fastest, the SPL function is about 5 times slower, and Java is 100 (one hundred!) times slower than C. I checked it a few times and the 1:100 ratio didn't improve. I changed the Java function to simply return the length of the string, but even this did not help, so it looks like there is a general problem with Java function invocation: there was no difference in time between the Java function that calculates the checksum and the Java function that returns the length of the string. I increased JVM_MAX_HEAP_SIZE to 128 and that did not help either. I use IBM Informix Dynamic Server Version 11.50.TC6DE. The same test on a Linux server (IBM Informix Dynamic Server Version 11.50.FC6) shows more "normal" results, i.e. Java is slower than C and SPL, but only 2 to 5 times. What can I do to improve Java performance on an Informix server on Windows?

    More info about Java on the servers:

    ```
    c:\Informix\extend\krakatoa\jre\bin>java -version
    java version "1.5.0"
    Java(TM) 2 Runtime Environment, Standard Edition (build pwi32dev-20081129a (SR9-0))
    IBM J9 VM (build 2.3, J2RE 1.5.0 IBM J9 2.3 Windows Server 2003 x86-32 j9vmwi3223-20081129 (JIT enabled)
    J9VM - 20081126_26240_lHdSMr
    JIT  - 20081112_1511ifx1_r8
    GC   - 200811_07)
    JCL  - 20081129
    ```

    ```
    [root@informix11 bin]# ./java -version
    java version "1.5.0"
    Java(TM) 2 Runtime Environment, Standard Edition (build pxa64devifx-20071025 (SR6b))
    IBM J9 VM (build 2.3, J2RE 1.5.0 IBM J9 2.3 Linux amd64-64 j9vmxa6423-20071005 (JIT enabled)
    J9VM - 20071004_14218_LHdSMr
    JIT  - 20070820_1846ifx1_r8
    GC   - 200708_10)
    JCL  - 20071025
    ```


  • CodeIgniter and OOP general question regarding calling functions, and the parent constructor

    - by thrice801
    Ok, so I have some gaps in my understanding of PHP OOP, classes and functions specifically, and this whole constructor deal. I use both Zend and CI, but right now I'm trying to figure this out in CI, as it is less complicated. All I'm trying to do is understand how to call a function from a view page in CodeIgniter. I understand that might go against MVC, but I'm working with an API and search results not from my database; basically, I want to define a function in my class that I am able to call from one of my view pages. I keep getting "Fatal error: Call to undefined function functionname" no matter what I try. I thought I just had to declare:

    ```php
    public function testing()
    {
        echo "testing testing 123";
    }
    ```

    but calling that from the view I get that error. Then I read something about having to call parent::Controller(); in the index() of the class where the testing function also resides? But that didn't work either. Anyway, can someone explain what I need to do in order to call the testing() function from one of my view pages? Clarification on the constructor and what exactly parent::Controller() even does would be much appreciated as well.
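    One idiomatic CodeIgniter route for this (a sketch only; the helper file and function names below are hypothetical, not from the question) is to put the function in a helper rather than in the controller, since helpers loaded in the controller become plain functions callable from any view:

    ```php
    <?php
    // application/helpers/search_helper.php
    // CodeIgniter picks up files named *_helper.php by convention.
    function format_result($text)
    {
        return strtoupper($text);
    }
    ```

    ```php
    <?php
    // In the controller, before loading the view:
    $this->load->helper('search');   // loads application/helpers/search_helper.php
    $this->load->view('results');

    // In the view (application/views/results.php), the function is now defined:
    echo format_result('testing 123');
    ```

    As for parent::Controller(): in CodeIgniter 1.x a controller's constructor must call it so the base Controller class can initialize the loader and the rest of the framework plumbing; in CodeIgniter 2.x the equivalent call is parent::__construct().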


  • Test Case Design and Responsibility

    - by Sakamoto Kazuma
    So it seems like a lot of people are playing the blame game around where I work, and it brings up an interesting question.

    Knowns:

    - The requirements team writes requirements for the product.
    - Developers create their own unit tests out of the requirements.
    - The testing team creates their general tests out of the requirements and past customer issues.
    - The product is released if and only if X% of test cases from the testing team pass.
    - The customer response team gets bugs from the field, and lets the testing team know about these issues.

    Question: If the customer ends up filing a lot of defects, who is to blame? Is it the testing team for not covering those? Or is it the requirements team for not writing better requirements? And how does one improve upon the system?


  • Are there any worse sorting algorithms than Bogosort (a.k.a Monkey Sort)?

    - by womp
    My co-workers took me back in time to my university days with a discussion of sorting algorithms this morning. We reminisced about our favorites like StupidSort, and one of us was sure we had seen a sorting algorithm that was O(n!). That got me started looking around for the "worst" sorting algorithms I could find. We postulated that a completely random sort would be pretty bad (i.e., randomize the elements; is it in order? No? Randomize again), and I looked around and found out that it's apparently called BogoSort, or Monkey Sort, or sometimes just Random Sort. Monkey Sort appears to have a worst-case performance of O(∞), a best-case performance of O(n), and an average performance of O(n * n!). Are there any named algorithms that have worse average performance than O(n * n!)? Or are there any that are just sillier than Monkey Sort in general?
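    For reference, the algorithm under discussion is only a few lines; a sketch in Python:

    ```python
    import random

    def is_sorted(xs):
        return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))

    def bogosort(xs):
        # Shuffle until the list happens to be in order. Each attempt costs an
        # O(n) check; on average O(n!) shuffles are needed, giving O(n * n!).
        while not is_sorted(xs):
            random.shuffle(xs)
        return xs
    ```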


  • Implications of using many USB web cameras

    - by Martin
    I'm looking into connecting multiple low-resolution USB webcams to a single computer. What implications might this have on performance? How would, for example, four 320x240 cameras fare against a single 640x480 camera? I'm not well versed in the architecture of the USB interface; what are the performance caveats? By performance I mean how it would affect the time to read the image data from multiple cameras compared to a single one.
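    As a rough back-of-the-envelope comparison (assuming uncompressed YUY2 frames at 16 bits per pixel and 30 fps, a common webcam default; these numbers are illustrative, not from the question):

    ```
    four 320x240 cameras: 4 x (320 x 240 x 2 B x 30 fps) = 4 x 4.6 MB/s = 18.4 MB/s
    one 640x480 camera:        640 x 480 x 2 B x 30 fps              = 18.4 MB/s
    ```

    The raw pixel rate is identical, so on paper both setups consume a similar share of USB 2.0's roughly 35 MB/s of practical throughput; the difference in practice tends to come from per-device isochronous bandwidth reservation and host-controller overhead when several cameras share one bus.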

    Read the article

  • Run a universal app as a 'legacy' iPhone app on an iPod

    - by Paul Alexander
    I do most development testing on my iPad. When I test an iPhone app, it runs in 'compatibility' mode, where the iPhone app runs in a small window or at x2 magnification. Now that I've created a universal app, it runs as a native iPad app. For testing, I'd like to use the simulated iPhone mode when I don't have an iPhone handy. How can I build the project so that the iPad will run the app in compatibility mode?

    Read the article

  • Outlook VBA - Find & Replace Incoming Emails

    - by user1912198
    Good morning everyone at Stack Overflow, I am trying to find a VBA script that finds and replaces certain text in incoming e-mails. So far I've been unable to find such a script that works. I found several scripts that find and replace text in an e-mail, but these don't work as a rule on incoming e-mails; they only work on outgoing e-mails as they are being composed. Does anyone have such a script they can share with me? I would like to find & replace certain text in every incoming e-mail. I would really appreciate it! Regards, Kris. PS: I don't know how to program a whole VBA script, that's why I am asking here :) Current code:

        Sub testing(MyMail As MailItem)
            Dim mail As MailItem
            Dim Inbox As Outlook.Folder
            Set Inbox = Session.GetDefaultFolder(olFolderInbox)
            For Each mail In Inbox.Items
                ' Change the subject
                mail.Subject = "TESTING"
                ' Replace the body text
                If mail.BodyFormat = olFormatHTML Then
                    mail.HTMLBody = Replace(mail.HTMLBody, "Test 123", "TESTING")
                Else
                    mail.Body = Replace(mail.Body, "Test 123", "TESTING")
                End If
                mail.Save ' Edits to a MailItem are not persisted without an explicit Save
            Next mail
        End Sub

    Read the article

  • Java mail attachment not working on Tomcat

    - by losintikfos
    Hello guys, I have an application which e-mails confirmations. The email part uses the Apache Commons Email API. The simple code which sends the mail is shown below:

        import org.apache.commons.mail.*;
        ...
        // Create the attachment
        EmailAttachment attachment = new EmailAttachment();
        attachment.setURL(new URL("http://cashew.org/doc.pdf"));
        attachment.setDisposition(EmailAttachment.ATTACHMENT);
        attachment.setDescription("Testing attach");
        attachment.setName("doc.pdf");

        // Create the email message
        MultiPartEmail email = new MultiPartEmail();
        email.setHostName("mail.cashew.com");
        email.addTo("[email protected]");
        email.setFrom("[email protected]");
        email.setSubject("Testing");
        email.setMsg("testing message");

        // Add the attachment
        email.attach(attachment);

        // Send the email
        email.send();

    My problem is: when I execute this application from Eclipse, the email is sent with the attachment without any issues. But when I deploy the application to Tomcat (I have tried both versions 5 and 6, no joy), the e-mail arrives with the content below:

        ------=_Part_0_25002283.1275298567928
        Content-Type: text/plain; charset=us-ascii
        Content-Transfer-Encoding: 7bit

        testing Regards, los

        ------=_Part_0_25002283.1275298567928
        Content-Type: application/pdf; name="doc.pdf"
        Content-Transfer-Encoding: base64
        Content-Disposition: attachment; filename="doc.pdf"
        Content-Description: Testing attach

        JVBERi0xLjQNJeLjz9MNCjYzIDAgb2JqDTw8L0xpbmVhcml6ZWQgMS9MIDMxMzE4Mi9PIDY1L0Ug
        [base64 payload truncated]
        CnN0YXJ0eHJlZg0KMTE2DQolJUVPRg0K
        ------=_Part_0_25002283.1275298567928--

    One thing I have also noticed is that the header information does not show the To and Subject values, which is pretty weird. I should point out that the above is not debug output; it is the actual message received in my Outlook client. Can someone help me, please? Does anyone know what's going on?

    Read the article

  • What is the "box model?"

    - by Chris
    During a recent interview for a front-end developer position I was asked what the box model was. I thought the interviewer was referring to testing (i.e. white box testing, black box testing). I was wrong. What is the box model, in reference to front-end development?
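    In case it helps future readers: the CSS box model describes how an element's rendered size is built up from its content, padding, border, and margin. A small illustrative calculation in Python (the pixel values are made up):

        # Hypothetical element: width: 200px; padding: 10px; border: 2px; margin: 15px
        content = 200
        padding = 10
        border = 2
        margin = 15

        # Under the standard W3C box model, `width` sets the content box only;
        # padding and border are added outside it, and margin outside that.
        painted_box = content + 2 * (padding + border)  # 224px-wide visible box
        layout_space = painted_box + 2 * margin         # 254px of horizontal space used
        print(painted_box, layout_space)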

    Read the article

  • Passing a string representing my format specifier into stringWithFormat and seeing issues

    - by Jeff
    If I call [NSString stringWithFormat:@"Testing \n %@", variableString]; I get what I would expect: Testing, followed by a newline, then the contents of variableString. However, if I try NSString *testString = @"Testing \n %@"; // forgive the shorthand here, followed by [NSString stringWithFormat:testString, variableString], the output literally writes \n to the screen instead of a newline. Is there any workaround for this? It seems odd to me.

    Read the article

  • Many DIVs inside parent DIV, CSS height issue

    - by Benjamin
    Hi everyone, I am putting together a dynamic photo gallery and getting stuck trying to place thumbnails. Basically, I am trying to place each thumbnail and caption in its own DIV, floated to the left. The thumbnails are working just as I want them to, but for some reason the parent DIV refuses to cover the height of the thumbnail area. Here is the CSS I am using:

        #galleryBox { width: 650px; background: #fff; margin: auto; padding: 5px; text-align: center; }
        .item { display: block; margin: 10px; padding: 10px; float: left; background: #353535; min-width: 120px; }
        .label { display: block; color: #fff; }

    I have tried height: auto and that hasn't done anything. Here is what I am trying to style:

        <div id="galleryBox" class="ui-corner-all">
          <div class="item ui-corner-all">
            <img src="http://tapp-essexvfd.org/images/ajax-loader.gif" alt="test"/><br/>
            <p><span class="label">Testing</span></p>
          </div>
          <!-- ...six more identical .item divs... -->
        </div>

    Thanks!

    Read the article

  • Python logger dynamic filename

    - by sharjeel
    I want to configure my Python logger so that each logger instance logs to a file with the same name as the logger itself, e.g.:

        log_hm = logging.getLogger('healthmonitor')
        log_hm.info("Testing Log")      # Should log to /some/path/healthmonitor.log
        log_sc = logging.getLogger('scripts')
        log_sc.debug("Testing Scripts") # Should log to /some/path/scripts.log
        log_cr = logging.getLogger('cron')
        log_cr.info("Testing cron")     # Should log to /some/path/cron.log

    I want to keep it generic and don't want to hardcode every kind of logger name I can have. Is that possible?
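    This is doable with a small factory that attaches a FileHandler named after the logger; a minimal sketch (the log directory is assumed from the example paths above):

        import logging
        import os

        LOG_DIR = "/some/path"  # assumed from the example above

        def get_logger(name):
            """Return a logger that writes to <LOG_DIR>/<name>.log."""
            logger = logging.getLogger(name)
            if not logger.handlers:  # avoid stacking duplicate handlers on repeat calls
                handler = logging.FileHandler(os.path.join(LOG_DIR, name + ".log"))
                handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
                logger.addHandler(handler)
                logger.setLevel(logging.DEBUG)
            return logger

        log_hm = get_logger("healthmonitor")
        log_hm.info("Testing Log")  # -> /some/path/healthmonitor.log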

    Read the article

  • NTFS-compressing Virtual PC disks (on host and/or guest)

    - by nlawalker
    I'm hoping someone here can answer these definitively:
    1. Does putting a VHD file in an NTFS-compressed folder on the host improve the virtual machine's performance, diminish it, or neither?
    2. What about using NTFS compression within the guest?
    3. Does using compression on either the host or the guest lead to any problems, like read or write errors?
    4. If I were to put a VHD in a compressed folder on the host, would I benefit from compacting it?
    I've seen references to NTFS compression in quite a few VPC "tips and tricks" blog posts, and it seems like half of them say never to do it, while the other half say that it not only saves disk space but can actually improve performance if you have a fast CPU and your primary performance bottleneck is the disk.
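    One way to get an empirical answer for your own setup is to time sequential reads of the same VHD stored compressed and uncompressed; a rough Python sketch (the paths are hypothetical, and you would need to reboot or otherwise clear the OS file cache between runs for the numbers to mean anything):

        import time

        def read_mb_per_s(path, block=1024 * 1024):
            """Time a sequential read of `path` and return throughput in MB/s."""
            start = time.perf_counter()
            total = 0
            with open(path, "rb") as f:
                while True:
                    chunk = f.read(block)
                    if not chunk:
                        break
                    total += len(chunk)
            elapsed = time.perf_counter() - start
            return (total / (1024 * 1024)) / elapsed

        # Hypothetical locations for the two copies of the same VHD:
        # print(read_mb_per_s(r"C:\vms\compressed\disk.vhd"))
        # print(read_mb_per_s(r"C:\vms\plain\disk.vhd"))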

    Read the article

  • What do the dot and hash symbols mean in jQuery

    - by kwokwai
    Hi all, I am confused by the dot and hash symbols in the following example:

        <DIV ID="row">
          <DIV ID="c1">
            <Input type="radio" name="testing" id="testing" VALUE="1">testing1
          </DIV>
        </DIV>

    Code 1:
        $('#row DIV').mouseover(function(){ $('#row DIV').addClass('testing'); });

    Code 2:
        $('.row div').mouseover(function(){ $(this).addClass('testing'); });

    Codes 1 and 2 look very similar, so I am confused about when I should use ".row div" to refer to a specific DIV instead of using "#row div".

    Read the article

< Previous Page | 305 306 307 308 309 310 311 312 313 314 315 316  | Next Page >