Search Results

Search found 13757 results on 551 pages for 'performance diagnostics'.


  • SQL Rally Presentations

    - by AllenMWhite
    As I drove to Dallas for this year's SQL Rally conference (yes, I like to drive) I got a call asking if I could step in for another presenter who had to cancel at the last minute. Life happens, and it's best to be flexible, so I said sure, I can do that. Which presentation would you like me to do? (I'd submitted a few presentations, so it wasn't a problem.) So yesterday I presented "Gathering Performance Metrics With PowerShell" at 8:45AM, and my newest presentation, "Manage SQL Server 2012 on Windows...(read more)

    Read the article

  • Outstanding SQL Saturday

    - by merrillaldrich
    I had the privilege to attend the SQL Saturday held in Redmond today, and it was really outstanding. Among the many sessions, I especially enjoyed and took a lot of useful information away from Greg Larsen’s Dynamic Management Views session, Kalen Delaney’s Compression Session – I am planning to implement 2008 Enterprise compression on my company’s data warehouse later this year – Remus Rusanu’s session on Service Broker to process NAP data, and Matt Masson’s presentation on high performance SSIS...(read more)

    Read the article

  • How to draw texture to screen in Unity?

    - by user1306322
    I'm looking for a way to draw textures to the screen in Unity, similar to XNA's SpriteBatch.Draw method. Ideally, I'd like to write a few helper methods so that all my XNA code works in Unity. This is the first issue I've faced on this seemingly long journey. I guess I could just use quads, but I'm not sure that's the cheapest option performance-wise. I could have done that in XNA too, but I assume SpriteBatch exists for a reason.
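    For anyone landing here with the same question, a minimal sketch of the closest built-in route (assuming Unity's immediate-mode GUI; the Rect coordinates and the tex field name are placeholders, not from the original question):

        using UnityEngine;

        // Sketch: draws a texture in screen coordinates each GUI pass,
        // roughly analogous to a single SpriteBatch.Draw call in XNA.
        public class TextureDrawer : MonoBehaviour
        {
            public Texture2D tex; // assign in the inspector

            void OnGUI()
            {
                // Destination rectangle in screen (GUI) coordinates.
                GUI.DrawTexture(new Rect(10, 10, 128, 128), tex);
            }
        }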

    Read the article

  • How to enable multiple displays with Catalyst drivers in Ubuntu 13.04?

    - by Lokitez
    First, I installed Ubuntu 13.04. I have an ATI Radeon HD 7850. The open source drivers allowed multiple displays, but were horrendously laggy (even opening a browser window took several seconds). When I installed the Catalyst proprietary drivers, performance was perfect. The only problem is that the option to enable dual monitors is grayed out in the Catalyst Control Center, and trying it in the Ubuntu settings results in a resolution error. Is there any way around this?

    Read the article

  • A problem with texture atlasing in Unity

    - by Hamzeh Soboh
    I have the texture below and I need to get rectangular parts from it. I can combine the meshes of several quads to improve performance, but quads with different tilings require different materials, and combining meshes across different materials fails. Can anybody tell me how to map a quad to just a part of that texture in C#, so that all quads share the same material and combining their meshes succeeds? Thanks in advance.
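    One common approach (a sketch under assumptions, not from the original post): keep a single material that maps the whole atlas, and remap each quad's UVs into the sub-rectangle it needs, so every quad shares the same material and the meshes can be combined. The region values below are hypothetical:

        using UnityEngine;

        // Sketch: remaps a quad's UVs into a normalized (0..1) sub-rectangle
        // of the atlas, so all quads can share one atlas material.
        public class AtlasQuad : MonoBehaviour
        {
            public Rect region = new Rect(0.0f, 0.5f, 0.25f, 0.25f); // hypothetical region

            void Start()
            {
                Mesh mesh = GetComponent<MeshFilter>().mesh;
                Vector2[] uvs = mesh.uv;
                for (int i = 0; i < uvs.Length; i++)
                {
                    // Squeeze the quad's default 0..1 UVs into the region.
                    uvs[i] = new Vector2(region.x + uvs[i].x * region.width,
                                         region.y + uvs[i].y * region.height);
                }
                mesh.uv = uvs;
            }
        }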

    Read the article

  • Success Story

    - by Wesley Faria
    This month the SPARC processor turns 25. It all began in 1992, when Sun launched the first high-end SPARC server. Today the SPARC processor family is used in Oracle's enterprise servers, providing an architecture optimized for maximum performance in every type of application, from CRM and ERP to Java/Web. See SPARC's glorious trajectory at the link: http://www.oracle-downloads.com/sparc25info/ Congratulations on this trajectory of success, and long live SPARC!

    Read the article

  • The .NET 4.5 async/await Commands in Promise and Practice

    The .NET 4.5 async/await feature provides an opportunity for improving the scalability and performance of applications, particularly where tasks are more effectively done in parallel. The question is: do the scalability gains come at a cost of slowing individual methods? In this article Jon Smith investigates this issue by conducting a side-by-side evaluation of the standard synchronous methods and the new async methods in real applications.
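    As a rough illustration of the pattern under test (a sketch, not Jon's actual benchmark code; the URL list is a placeholder), the async version lets I/O-bound calls overlap instead of blocking a thread per request:

        using System.Net.Http;
        using System.Threading.Tasks;

        public static class Downloader
        {
            // Sketch: awaiting Task.WhenAll lets the downloads run
            // concurrently without tying up threads during the I/O waits,
            // which is where the scalability gains come from.
            public static async Task<string[]> FetchAllAsync(string[] urls)
            {
                using (var client = new HttpClient())
                {
                    var downloads = new Task<string>[urls.Length];
                    for (int i = 0; i < urls.Length; i++)
                        downloads[i] = client.GetStringAsync(urls[i]);
                    return await Task.WhenAll(downloads);
                }
            }
        }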

    Read the article

  • Why IBM DB2 DBAs Love Load Testing

    A load test gives the database administrator quite a lot of valuable information and may make the difference between poor and acceptable application performance. Here are some proactive tips to make your IBM DB2 production implementation a success.

    Read the article

  • Proactive Project Decision Making

    This Industry AppsCast will discuss the importance of proactive project decision making. Oracle Primavera enables you to track project status in real-time, calculate ongoing project performance metrics, and forecast project completion metrics so that you no longer react to changing project needs, but instead avoid surprises and proactively manage projects to successful conclusion.

    Read the article

  • T-SQL User-Defined Functions: the good, the bad, and the ugly (part 2)

    - by Hugo Kornelis
    In a previous blog post , I demonstrated just how much you can hurt your performance by encapsulating expressions and computations in a user-defined function (UDF). I focused on scalar functions that didn’t include any data access. In this post, I will complete the discussion on scalar UDFs by covering the effect of data access in a scalar UDF. Note that, like the previous post, this all applies to T-SQL user-defined functions only. SQL Server also supports CLR user-defined functions (written in...(read more)

    Read the article

  • New Slides - and a discussion about Dictionary Statistics

    - by Mike Dietrich
    First of all, we have just uploaded a new version of the Upgrade and Migration Workshop slides with some added information, so please feel free to download them from here. The slides contain one new piece of information which led to a discussion I've had in the past days with a very large customer regarding their upgrades - and internally on a mailing list targeting an EBS database upgrade from Oracle 10.2 to Oracle 11.2: why are we creating dictionary statistics during upgrade?

    I believe this forced dictionary statistics creation was introduced with the desupport of the Rule Based Optimizer in Oracle 10g. The goal: as the RBO is not supported anymore, we have to make sure that the data dictionary has fresh, non-stale statistics. In Oracle 9i this would actually have led to strange behaviour in some databases, so in Oracle 9i it was strongly discouraged; the upgrade scripts were then hardcoded to create these stats.

    But during tests we had the following findings: it's important to create dictionary statistics the night before the upgrade. Not two weeks before, not 60 minutes before your downtime begins, but very close to the upgrade. From Oracle 10g onwards you'd just run:

        $ execute DBMS_STATS.GATHER_DICTIONARY_STATS;

    This matters because you want fresh dictionary statistics during the upgrade for performance reasons: tests have shown that running an upgrade without valid dictionary statistics can slow down the whole upgrade by a factor of 2x-3x. It's also a great idea to gather fresh dictionary statistics again after the upgrade if you suppressed the statistics creation during the upgrade process. Suppress? Yes, you can set this underscore parameter in the init.ora to suppress the forced dictionary statistics collection during an upgrade:

        _optim_dict_stats_at_db_cr_upg=FALSE

    We strongly believe that people should (a) use the default statistics creation process, which creates dictionary statistics by default, and (b) create fresh statistics on the dictionary before the upgrade. Once you have followed that advice, we consider it safe to use the underscore parameter during the upgrade. And we've taken the forced statistics collection during upgrade out of the next release of the database.

    Please note: if you are using the DBUA for the upgrade, it will remove underscore parameters for the upgrade run to improve performance - which is generally a good idea. So you'll have to start the DBUA with this call:

        $ dbua -initParam "_optim_dict_stats_at_db_cr_upg"=FALSE

    -Mike

    Read the article

  • Why does DVD playback still not work after installing libdvdcss2?

    - by mac9416
    I have installed libdvdcss2, but I still get this error when trying to play DVDs: libdvdread4 was installed by default (this is a new System76 Pangolin Performance). I ran the install-css.sh script, and it completed with no problems. I can confirm that libdvdread4 and libdvdcss2 are installed:

        mac9416@charlotte:~$ dpkg -l | grep dvdcss
        ii  libdvdcss2   1.2.12-0.0medibuntu1  Simple foundation for reading DVDs - runtime libraries
        mac9416@charlotte:~$ dpkg -l | grep dvdread
        ii  libdvdread4  4.2.0-1ubuntu3        library for reading DVDs

    Read the article

  • SQLAuthority News - Three Posts on Reporting - T-SQL Tuesday #005

    If you are following my blog, you already know that I am more of a T-SQL and performance tuning type of person. I do have a good understanding of the Business Intelligence suite, and I also run training sessions on that subject. When I was writing the blog post for T-SQL Tuesday #005 - Reporting, [...]

    Read the article

  • thunar-archive-plugin not working

    - by Sergio
    After I experienced serious, still-unresolved performance issues with Nautilus, I decided to move to Xubuntu, so I installed its metapackage on Ubuntu and started using it. It turns out that the archive plugin for Thunar (which provides the "Extract here" option in the context menu when right-clicking a compressed archive) is not working, even after I purged and reinstalled it with apt-get. It simply doesn't show its options in the context menu. What should I do to make it work?

    Read the article

  • Parallel MSBuild FTW - Build faster in parallel

    - by deadlydog
    Hey everyone, I just discovered this great post yesterday that shows how to have MSBuild build projects in parallel. Basically, all you need to do is pass the switches "/m:[NumOfCPUsToUse] /p:BuildInParallel=true" into MSBuild. Example to use 4 cores/processes (if you just pass in "/m" it will use all CPU cores):

        MSBuild /m:4 /p:BuildInParallel=true "C:\dev\Client.sln"

    Obviously this trick will only be useful on PCs with multi-core CPUs (which we should all have by now) and solutions with multiple projects, so there's no point using it for solutions that only contain one project. Also, testing shows that using multiple processes does not speed up Team Foundation Database deployments either, in case you're curious. And I found that if I didn't explicitly use "/p:BuildInParallel=true" I would get many build errors (even though the MSDN documentation says that it is true by default).

    The poster boasts compile time improvements up to 59%, but the performance boost you see will vary depending on the solution and its project dependencies. I tested by building a solution at my office, and here are my results (runs are in seconds):

        # of Processes | 1st Run | 2nd Run | 3rd Run | Avg    | Performance
        1              | 192     | 195     | 200     | 195.67 | 100%
        2              | 155     | 156     | 156     | 155.67 | 79.56%
        4              | 146     | 149     | 146     | 147.00 | 75.13%
        8              | 136     | 136     | 138     | 136.67 | 69.85%

    So I updated all of our build scripts to build using 2 cores (~20% speed boost), since that gives us the biggest bang for our buck on our solution without bogging down a machine, and developers may sometimes compile more than one solution at a time. I've put the any-PC-safe batch script code at the bottom of this post.

    The poster also has a follow-up post showing how to add a button and keyboard shortcut to the Visual Studio IDE to have VS build in parallel as well (so you don't have to use a build script); if you do this, make sure you use the .NET 4.0 MSBuild, not the 3.5 one that he shows in the screenshot. While this did work for me, I found it left an MSBuild.exe process hanging around afterwards for some reason, so watch out (the batch file doesn't have this problem). Also, you do get build output, but it may not be what you're used to, and it doesn't say "Build succeeded" in the status bar when completed, so I chose not to make this my default Visual Studio build option, but you may still want to. Happy building!

        :: Calculate how many processes to use to do the build.
        SET NumberOfProcessesToUseForBuild=1
        SET BuildInParallel=false
        if %NUMBER_OF_PROCESSORS% GTR 2 (
            SET NumberOfProcessesToUseForBuild=2
            SET BuildInParallel=true
        )
        MSBuild /maxcpucount:%NumberOfProcessesToUseForBuild% /p:BuildInParallel=%BuildInParallel% "C:\dev\Client.sln"

    Read the article

  • What is difference between install desktop-environment and run directly distro?

    - by Pandya
    My question is: what is the difference between installing a particular desktop environment on Ubuntu, versus directly using the distro/flavour of Ubuntu that ships with that environment by default? For example, the two options are:

    1. On Ubuntu (default Unity desktop), install ubuntu-gnome-desktop, kubuntu-desktop, xubuntu-desktop, etc. (official and recognized derivatives); or
    2. Run the particular distro/flavour directly: Ubuntu GNOME, Kubuntu, Xubuntu, etc.

    Do both methods give the same performance, and which is the proper way to use a desktop environment?

    Read the article

  • First PC Build (Part 1)

    - by Anthony Trudeau
    Originally posted on: http://geekswithblogs.net/tonyt/archive/2014/08/05/157959.aspx

    A couple of months ago I made the decision to build myself a new computer. The intended use is gaming and using the last real version of Photoshop. I was motivated by the poor state of console gaming and a simple desire to do something I haven't done before - build a PC from the ground up. I've been using PCs for more than two decades. I've replaced a component here and there, but for the last 10 years or so I've only used laptops. Therefore, this article is written from the perspective of someone familiar with PCs, but completely new at building. I'm not an expert and this is not a definitive guide for building a PC, but I do hope that it encourages you to try it yourself.

    Component List

    Research

    There was a lot of research necessary, because building a PC is completely new to me, and I haven't kept up with what's out there. The first thing you want to do is nail down what your goals are. Your goals are going to be driven by what you want to do with your computer and personal choice. Don't neglect the second one, because if you're doing this for fun you want to get what you want. In my case, I focused on three things: performance, longevity, and aesthetics.

    The performance aspect is important for gaming and Photoshop. This will drive what components you get. For example, heavy gaming use is going to drive your choice of graphics card. Longevity is relevant to me, because I don't want to be changing things out anytime soon for the next hot game. The consequence of performance and longevity is cost. Finally, aesthetics was my next consideration. I could have just built a box, but it wouldn't have been nearly as fun for me. Aesthetics might not be important to you. They are for me. I also like gadgets, and that played into at least one purchase for this build.

    I used PC Part Picker to put together my component list. I found it invaluable during the process and I'd recommend it to everyone. One caveat is that I wouldn't trust the compatibility aspects. It does a pretty good job of not steering you wrong, but do your own research. The rest of it isn't really sexy. I started out with what appealed to me and then I made changes and additions as I dived deep into researching each component and interaction I could find. The resources I used are innumerable: reviews, product descriptions, forum posts (praises and problems), et al. I also asked friends into gaming what they thought about my component list. And when I got near the end I posted my list to the Reddit /r/buildapc forum. I cannot stress enough the value of extra sets of eyeballs and first-hand experiences. Some of the resources I used:

    - PC Part Picker
    - Tom's Hardware
    - bit-tech
    - Reddit

    Purchase

    PC Part Picker favors certain vendors. You should look at others too. In my case I found their favorites to be the best. My priorities were out-the-door price and shipping time. I knew that once I started getting parts I'd want to start building. Luckily, I timed it well and everything arrived within the span of a few days. Here are my opinions on the vendors I ended up using, in alphabetical order.

    Amazon.com is a good, reliable choice. They have excellent customer service in my experience, and I knew I wouldn't have trouble with them. However, shipping time is often a problem when you use their free shipping unless you order expensive items (I've found items over $100 ship quickly). Ultimately though, the price wasn't always the best, and their collection of sales tax in my state turned me off them. I did purchase my case from them. I ordered the mouse as well, but I cancelled after it was stuck four days in a "shipping soon" state. I purchased the mouse locally.

    Best Buy is not my favorite place to do business. There's a lot of history with poor, uninterested sales representatives, and they used to have a lot of bad anti-consumer policies. That's a lot better now, but the bad taste is still in my mouth. I ended up purchasing the accessories from them, including the mouse (locally) and headphones.

    NCIX is a company that I'd never heard of before. It popped up as a recommendation for my CPU cooler on PC Part Picker. I didn't do a lot of research on the company, because their policy of having you buy insurance for your orders turned me off. That policy makes it clear to me that the company holds me responsible for the shipment once it leaves their dock. That's not right, and may run afoul of state laws. Regardless, they shipped my CPU cooler quickly and I didn't have a problem.

    NewEgg.com is a well-known company. I had never done business with them, but I'm glad I did. They shipped quickly and provided good visibility over everything. The prices were also the best in most cases. My main complaint is that they have a lot of exchange-only return policies on components. To their credit, those policies are listed in the cart underneath each item. That visibility tells me that they're not playing any shenanigans and made me comfortable dealing with that risk. The vast majority of what I ordered came from them.

    Coming Next

    In the next part I'll tackle my build experience.

    Read the article

  • Exadata ROI cases

    - by Javier Puerta
    The following cases illustrate the type of ROI benefits that customers can obtain from their investment in Exadata infrastructure. Australian Finance Group will achieve a 42% ROI and break even in three years by consolidating Oracle E-Business Suite and Siebel applications on Oracle Exadata. Read the ROI case at: http://www.oracle.com/us/corporate/customers/afg-1-exadata-cs-1354807.pdf

    In addition to this study, there are Oracle Exadata Mainstay ROI case studies for the following:

    - Merck (pharma): Oracle Exadata Achieves Fivefold Performance Increase for Critical Product Research Platform
    - Turkcell: Accelerates Reporting Tenfold, Saves on Storage and Energy Costs with Consolidated Oracle Exadata Platform

    Read the article

  • Oracle Technology Network April 2012 Special Offers

    - by programmarketingOTN
    Several of our book publishing partners have added new titles to the list of books they are offering discounts on. To see full details and get discount links/codes, please visit the OTN Member Discount page. The Oracle Store has also extended its 15% discount until the end of the month. Happy shopping!

    Oracle Press
    - Effective MySQL: Backup and Recovery
    - Oracle Database 11g Performance Tuning Tips & Techniques

    Packt Publishing
    - Oracle WebCenter 11g PS3 Administration Cookbook
    - Oracle Service Bus 11g Development Cookbook

    Pearson
    - Java Application Architecture

    Read the article

  • PostSharp, Obfuscation, and IL

    - by simonc
    Aspect-oriented programming (AOP) is a relatively new programming paradigm. Originating at Xerox PARC in 1994, the paradigm was first made available for general-purpose development as an extension to Java in 2001. From there, it has quickly been adapted for use in all the common languages used today. In the .NET world, one of the primary AOP toolkits is PostSharp.

    Attributes and AOP

    Normally, attributes in .NET are entirely a metadata construct. Apart from a few special attributes in the .NET framework, they have no effect whatsoever on how a class or method executes within the CLR. Only by using reflection at runtime can you access any attributes declared on a type or type member. PostSharp changes this. By declaring a custom attribute that derives from PostSharp.Aspects.Aspect, applying it to types and type members, and running the resulting assembly through the PostSharp postprocessor, you can essentially declare 'clever' attributes that change the behaviour of whatever the aspect has been applied to at runtime. A simple example of this is logging. By declaring a TraceAttribute that derives from OnMethodBoundaryAspect, you can automatically log when a method has been executed:

        public class TraceAttribute : PostSharp.Aspects.OnMethodBoundaryAspect
        {
            public override void OnEntry(MethodExecutionArgs args)
            {
                MethodBase method = args.Method;
                System.Diagnostics.Trace.WriteLine(
                    String.Format("Entering {0}.{1}.",
                        method.DeclaringType.FullName, method.Name));
            }

            public override void OnExit(MethodExecutionArgs args)
            {
                MethodBase method = args.Method;
                System.Diagnostics.Trace.WriteLine(
                    String.Format("Leaving {0}.{1}.",
                        method.DeclaringType.FullName, method.Name));
            }
        }

        [Trace]
        public void MethodToLog() { ... }

    Now, whenever MethodToLog is executed, the aspect will automatically log entry and exit, without having to add the logging code to MethodToLog itself.

    PostSharp Performance

    Now this does introduce a performance overhead - as you can see, the aspect allows access to the MethodBase of the method the aspect has been applied to. If you were limited to C#, you would be forced to retrieve each MethodBase instance using Type.GetMethod(), matching on the method name and signature. This is slow. Fortunately, PostSharp is not limited to C#. It can use any instruction available in IL. And in IL, you can do some very neat things.

    Ldtoken

    C# allows you to get the Type object corresponding to a specific type name using the typeof operator:

        Type t = typeof(Random);

    The C# compiler compiles this operator to the following IL:

        ldtoken [mscorlib]System.Random
        call class [mscorlib]System.Type [mscorlib]System.Type::GetTypeFromHandle(valuetype [mscorlib]System.RuntimeTypeHandle)

    The ldtoken instruction obtains a special handle to a type called a RuntimeTypeHandle, and from that, the Type object can be obtained using GetTypeFromHandle. These are both relatively fast operations - no string lookup is required, only direct assembly and CLR constructs are used. However, a little-known feature is that ldtoken is not just limited to types; it can also get information on methods and fields, encapsulated in a RuntimeMethodHandle or RuntimeFieldHandle:

        // get a MethodBase for String.EndsWith(string)
        ldtoken method instance bool [mscorlib]System.String::EndsWith(string)
        call class [mscorlib]System.Reflection.MethodBase [mscorlib]System.Reflection.MethodBase::GetMethodFromHandle(valuetype [mscorlib]System.RuntimeMethodHandle)

        // get a FieldInfo for the String.Empty field
        ldtoken field string [mscorlib]System.String::Empty
        call class [mscorlib]System.Reflection.FieldInfo [mscorlib]System.Reflection.FieldInfo::GetFieldFromHandle(valuetype [mscorlib]System.RuntimeFieldHandle)

    These usages of ldtoken aren't usable from C# or VB, and aren't likely to be added anytime soon (Eric Lippert's done a blog post on the possibility of adding infoof, methodof or fieldof operators to C#). However, PostSharp deals directly with IL, and so can use ldtoken to get MethodBase objects quickly and cheaply, without having to resort to string lookups.

    The kicker

    However, there are problems. Because ldtoken for methods or fields isn't accessible from C# or VB, it hasn't been as well-tested as ldtoken for types. This has resulted in various obscure bugs in most versions of the CLR when dealing with ldtoken and methods - specifically, generic methods and methods of generic types. This means that PostSharp was behaving incorrectly, or just plain crashing, when aspects were applied to methods that were generic in some way. So, PostSharp has to work around this. Without using the metadata tokens directly, the only way to get the MethodBase of generic methods is to use reflection: Type.GetMethod(), passing in the method name as a string along with information on the signature. Now, this works fine. It's slower than using ldtoken directly, but it works, and it only has to be done for generic methods. Unfortunately, this poses problems when the assembly is obfuscated.

    PostSharp and Obfuscation

    When using ldtoken, obfuscators don't affect how PostSharp operates. Because the ldtoken instruction directly references the type, method or field within the assembly, it is unaffected if the name of the object is changed by an obfuscator. However, the indirect loading used for generic methods was breaking, because it uses the name the method had when the assembly was put through the PostSharp postprocessor to look up the MethodBase at runtime. If the name then changes, PostSharp can't find it anymore, and the assembly breaks. So, PostSharp needs to know about any changes an obfuscator makes to an assembly. The way PostSharp does this is by adding another layer of indirection. When PostSharp obfuscation support is enabled, it includes an extra 'name table' resource in the assembly, consisting of a series of method & type names. When PostSharp needs to look up a method using reflection, instead of encoding the method name directly, it looks up the method name at a fixed offset inside that name table:

        MethodBase genericMethod = typeof(ContainingClass).GetMethod(GetNameAtIndex(22));

        PostSharp.NameTable resource:
        ...
        20: get_Prop1
        21: set_Prop1
        22: DoFoo
        23: GetWibble

    When the assembly is later processed by an obfuscator, the obfuscator can replace all the method and type names within the name table with their new names. That way, the reflection lookups performed by PostSharp will use the new names, and everything will work as expected:

        MethodBase genericMethod = typeof(#kGy).GetMethod(GetNameAtIndex(22));

        PostSharp.NameTable resource:
        ...
        20: #kkA
        21: #zAb
        22: #EF5a
        23: #2tg

    As you can see, this requires direct support by an obfuscator in order to perform these rewrites. Dotfuscator supports it, and now, starting with SmartAssembly 6.6.4, SmartAssembly does too. So, a relatively simple solution to a tricky problem, with some CLR bugs thrown in for good measure. You don't see those every day!

    Cross posted from Simple Talk.

    Read the article

  • Help needed on a UI/Developer Interview

    - by AJ Seth
    I have a phone interview with a major Internet company for a mostly front-end developer position. If anyone has experience with UI/developer interviews and can share advice, typical questions asked, etc., that would be great. Additionally, what resources can be read and reviewed for the following:

    - Designing for performance, scalability and availability
    - Internet and OS security fundamentals

    EDIT: Now I am told that the interview will be mostly on coding, data structures, design questions, etc. Anyone?

    Read the article

  • Google I/O 2012 - Measuring the End-to-End Value of Your App

    Google I/O 2012 - Measuring the End-to-End Value of Your App. Presented by Neil Rhodes, Nick Mihailovski, and Mike Kwong (1:04:12). We've rethought mobile app analytics from the ground up. If you are a mobile app developer, come see what's new from the land of Google Analytics: understand how to measure the end-to-end value of your app, and improve its performance to drive usage and retention. For all I/O 2012 sessions, go to developers.google.com

    Read the article

  • What are the differences between Bigloo and ECL?

    - by Pubby
    I've been looking to embed Lisp in some C++ code. Two options I'm interested in are Bigloo Scheme and ECL. Reading through the docs, they seem to support a very similar feature set. Obviously Bigloo is Scheme and ECL is Common Lisp, but what other differences do they have? In particular, I'm interested in the following criteria:

    - Ease of embedding (for C++, not just C)
    - Performance
    - Style of coding
    - Size
    - Tail call support

    I'm targeting this question towards someone who has used both.

    Read the article
