Search Results

Search found 19055 results on 763 pages for 'high performance'.

Page 496/763 | < Previous Page | 492 493 494 495 496 497 498 499 500 501 502 503  | Next Page >

  • How to Test and Deploy Applications Faster

    - by rickramsey
    photo courtesy of mtoleric via Flickr If you want to test and deploy your applications much faster than you could before, take a look at these OTN resources. They won't disappoint. Developer Webinar: How to Test and Deploy Applications Faster - April 10 Our second developer webinar, conducted by engineers Eric Reid and Stephan Schneider, will focus on how the zones and ZFS filesystem in Oracle Solaris 11 can simplify your development environment. This is a cool topic because it will show you how to test and deploy apps in their likely real-world environments much quicker than you could before. April 10 at 9:00 am PT Video Interview: Tips for Developing Faster Applications with Oracle Solaris 11 Express We recorded this a while ago, and it talks about the Express version of Oracle Solaris 11, but most of it applies to the production release. George Drapeau, who manages a group of engineers whose sole mission is to help customers develop better, faster applications for Oracle Solaris, shares some tips and tricks for improving your applications: how ZFS and Zones create the perfect developer sandbox, what the best way is for a developer to use DTrace, and how Crossbow's network bandwidth controls can improve an application's performance. To borrow the classic Ed Sullivan accolade, it's a "really good show." White Paper: What's New For Application Developers Excellent in-depth analysis of exactly how the capabilities of Oracle Solaris 11 help you test and deploy applications faster. Covers the tools in Oracle Solaris Studio and what you can do with each of them, plus source code management, scripting, and shells. How to replicate your development, test, and production environments, and how to make sure your application runs as it should in those different environments. How to migrate Oracle Solaris 10 applications to Oracle Solaris 11. How to find and diagnose faults in your application. And lots, lots more. - Rick

    Read the article

  • SQL SERVER – 2012 – Summary of All the Analytic Functions – MSDN and SQLAuthority

    - by pinaldave
    SQL Server 2012 (RC0 available here) has introduced new analytic functions. These functions were long awaited and I am glad that they are here. Previously, when any of these functions was needed, people used to write long T-SQL code to simulate it; now there is no need to do so. Having native functions available also helps performance as well as readability. In the last few days I have written many articles on this subject on my blog. The goal was to make these complex analytic functions easy to understand and widely accepted. As these new functions are available and awareness spreads, we should start using them. Here is a quick list of the new functions with the relevant SQLAuthority and MSDN links:

        Function          SQLAuthority        MSDN
        CUME_DIST         CUME_DIST           CUME_DIST
        FIRST_VALUE       FIRST_VALUE         FIRST_VALUE
        LAST_VALUE        LAST_VALUE          LAST_VALUE
        LEAD              LEAD                LEAD
        LAG               LAG                 LAG
        PERCENTILE_CONT   PERCENTILE_CONT     PERCENTILE_CONT
        PERCENTILE_DISC   PERCENTILE_DISC     PERCENTILE_DISC
        PERCENT_RANK      PERCENT_RANK        PERCENT_RANK

    I also enjoyed three different puzzles during the course of this series, which gave a clear idea of the SQL Server 2012 analytic functions:

        SQL SERVER – Puzzle to Win Print Book – Functions FIRST_VALUE and LAST_VALUE with OVER clause and ORDER BY
        SQL SERVER – Puzzle to Win Print Book – Write T-SQL Self Join Without Using LEAD and LAG
        SQL SERVER – Puzzle to Win Print Book – Explain Value of PERCENTILE_CONT() Using Simple Example

    This series will always be dear to me, as during it I went through the unique experience of my book going out of stock and becoming available again after 48 hours. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
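
    To give a flavor of how much these functions simplify things, here is a minimal sketch (the Orders table and its values are hypothetical, made up purely for illustration) that compares each row's amount with its neighbors using LAG and LEAD instead of a self join:

        -- Hypothetical sample data, for illustration only
        CREATE TABLE #Orders (OrderID INT, OrderDate DATE, Amount DECIMAL(10,2));
        INSERT INTO #Orders VALUES
            (1, '2012-01-01', 100.00),
            (2, '2012-01-02', 150.00),
            (3, '2012-01-03', 120.00);

        -- LAG/LEAD read the previous/next row's value directly,
        -- with 0 as the default where no such row exists
        SELECT OrderID, Amount,
            LAG(Amount, 1, 0)  OVER (ORDER BY OrderDate) AS PreviousAmount,
            LEAD(Amount, 1, 0) OVER (ORDER BY OrderDate) AS NextAmount
        FROM #Orders;

        DROP TABLE #Orders;

    Before SQL Server 2012, the same result would typically require a numbered self join or a correlated subquery.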

    Read the article

  • SQL SERVER – Capturing Wait Types and Wait Stats Information at Interval – Wait Type – Day 5 of 28

    - by pinaldave
    Earlier, I tried to cover some important points about wait stats in detail. Here are some points that we had covered earlier:

        The DMV related to wait stats resets when we restart SQL Server services
        The DMV related to wait stats resets when we manually reset the wait types

    However, at times, there is a need to make this data persistent so that we can take a look at it later on. Sometimes, performance tuning experts make some modifications to the server and try to measure the wait stats at that point in time and again after some duration. I use the following method to measure the wait stats over time.

        -- Create Table
        CREATE TABLE [MyWaitStatTable](
            [wait_type] [nvarchar](60) NOT NULL,
            [waiting_tasks_count] [bigint] NOT NULL,
            [wait_time_ms] [bigint] NOT NULL,
            [max_wait_time_ms] [bigint] NOT NULL,
            [signal_wait_time_ms] [bigint] NOT NULL,
            [CurrentDateTime] DATETIME NOT NULL,
            [Flag] INT)
        GO
        -- Populate Table at Time 1
        INSERT INTO MyWaitStatTable
            ([wait_type],[waiting_tasks_count],[wait_time_ms],[max_wait_time_ms],[signal_wait_time_ms],[CurrentDateTime],[Flag])
        SELECT [wait_type],[waiting_tasks_count],[wait_time_ms],[max_wait_time_ms],[signal_wait_time_ms], GETDATE(), 1
        FROM sys.dm_os_wait_stats
        GO
        -- Desired Delay (for one hour)
        WAITFOR DELAY '01:00:00'
        -- Populate Table at Time 2
        INSERT INTO MyWaitStatTable
            ([wait_type],[waiting_tasks_count],[wait_time_ms],[max_wait_time_ms],[signal_wait_time_ms],[CurrentDateTime],[Flag])
        SELECT [wait_type],[waiting_tasks_count],[wait_time_ms],[max_wait_time_ms],[signal_wait_time_ms], GETDATE(), 2
        FROM sys.dm_os_wait_stats
        GO
        -- Check the difference between Time 1 and Time 2
        SELECT T1.wait_type,
            T1.wait_time_ms Original_WaitTime,
            T2.wait_time_ms LaterWaitTime,
            (T2.wait_time_ms - T1.wait_time_ms) DifferenceWaitTime
        FROM MyWaitStatTable T1
        INNER JOIN MyWaitStatTable T2 ON T1.wait_type = T2.wait_type
        WHERE T2.wait_time_ms > T1.wait_time_ms
            AND T1.Flag = 1 AND T2.Flag = 2
        ORDER BY DifferenceWaitTime DESC
        GO
        -- Clean up
        DROP TABLE MyWaitStatTable
        GO

    If you notice the script, I have used an additional column called Flag. I use it to record when I captured the wait stats, and then use it in my SELECT query to select the wait stats related to that time group. Many times, I collect more than 5 or 6 different sets of wait stats, and I find this method very convenient for finding the difference between them. In a future blog post, we will talk about specific wait stats. Read all the posts in the Wait Types and Queue series. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL DMV, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
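
    As a side note, the manual reset mentioned above is done with DBCC SQLPERF. A one-line sketch (be careful: this clears the cumulative wait statistics for the entire instance):

        -- Clears the cumulative counters in sys.dm_os_wait_stats
        DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR);
        GO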

    Read the article

  • What is the definition of "Big Data"?

    - by Ben
    Is there one? All the definitions I can find describe the size, complexity/variety, or velocity of the data. Wikipedia's definition is the only one I've found with an actual number: "Big data sizes are a constantly moving target, as of 2012 ranging from a few dozen terabytes to many petabytes of data in a single data set." However, this seemingly contradicts the MIKE2.0 definition, referenced in the next paragraph, which indicates that "big" data can be small and that 100,000 sensors on an aircraft creating only 3GB of data could be considered big. IBM, despite saying that "Big data is more than simply a matter of size", have emphasised size in their definition. O'Reilly has stressed "volume, velocity and variety" as well. Though explained well, and in more depth, the definition seems to be a re-hash of the others - or vice-versa of course. I think that a Computer Weekly article title sums up a number of articles fairly well: "What is big data and how can it be used to gain competitive advantage". But ZDNet wins with the following from 2012: "'Big Data' is a catch phrase that has been bubbling up from the high performance computing niche of the IT market... If one sits through the presentations from ten suppliers of technology, fifteen or so different definitions are likely to come forward. Each definition, of course, tends to support the need for that supplier's products and services. Imagine that." Basically, "big data" is "big" in some way, shape, or form. What is "big"? Is it quantifiable at the current time? If "big" is unquantifiable, is there a definition that does not rely solely on generalities?

    Read the article

  • SyncToBlog #10 Lots of Azure and Cloud Links including MIX10 videos

    - by Eric Nelson
    Just getting a few interesting cloud links “down on paper”. I last did one of these on Azure in Feb 2010. Cloud Links:

        Article on Debugging in the Cloud
        http://code.msdn.microsoft.com/azurescale - A sample app that demonstrates monitoring and automatically scaling an Azure application in response to dropping performance etc. Basically a console app that checks perf stats and then uses the Service Management API to spin up new instances when needed.
        Azure In Action book is imminent :)
        Running Memcached in Windows Azure, from the MS UK team
        Using Microsoft Codename Dallas as a data source for Drupal, also from the MS UK team - I often mention them, but this post is the biz!
        Metodi on fault and upgrade domains
        Detailed blog post comparing Azure AppFabric Service Bus REST support to the free Faye Ruby+JavaScript gem that implements the JSON publish/subscribe protocol Bayeux
        AppFabric LABS, which allow you to test out and play with experimental AppFabric technologies
        Details of the upcoming VM support in Windows Azure
        Nice series of posts from J D Meier in the Patterns and Practices team: How To Use ASP.NET Forms Auth with Azure Tables; How To Use ASP.NET Forms Auth with Roles in Azure Tables; How To Use ASP.NET Forms Auth with SQL Server on Windows Azure

    And sessions from MIX10, held March 15th to 17th:

        Lap around the Windows Azure Platform – Steve Marx
        Building and Deploying Windows Azure Based Applications with Microsoft Visual Studio 2010 – Jim Nakashima
        Building PHP Applications using the Windows Azure Platform – Craig Kitterman, Sumit Chawla
        Using Ruby on Rails to Build Windows Azure Applications – Sriram Krishnan
        Microsoft Project Code Name “Dallas": Data for your apps – Moe Khosravy
        Using Storage in the Windows Azure Platform – Chris Auld
        Building Web Applications with Windows Azure Storage – Brad Calder
        Building Web Application with Microsoft SQL Azure – David Robinson
        Connecting Your Applications in the Cloud with Windows Azure AppFabric – Clemens Vasters
        Microsoft Silverlight and Windows Azure: A Match Made for the Web – Matt Kerner

    Something for everyone :)

    Read the article

  • Advanced Oracle SOA Suite Oracle Open World 2012 SOA Presentations

    - by JuergenKress
    The list below only includes SOA presentations delivered or moderated by Oracle SOA Product Management. For a complete list of Oracle Open World 2012 presentations, please go here.

        Oracle SOA Suite, the Most Capable Tool for Every Possible Integration Challenge
        Using the Right Tools, Techniques, and Technologies for Integration Projects
        Administration and Management Essentials for Oracle SOA Suite 11g
        Extreme Performance and Scale Delivered by SOA on Oracle Exalogic
        Successful Application Integration and SOA Projects: Customer Panel
        How to Integrate Cloud Applications with Oracle SOA Suite
        Transforming the Utilities Industry with Oracle Fusion Middleware
        Cloud and On-Premises Applications Integration, Using Oracle Integration Adapters
        Delivering High Value B2B Gateways with Oracle SOA Suite 11g
        Implementing Successful Healthcare Applications with Oracle SOA Suite
        Migrating to Oracle SOA Suite: A Sun Java CAPS Customer Experience
        If Mobile Enablement Is on Your Mind, Oracle SOA Suite and Oracle Service Bus Can Help
        Building Shared Services Infrastructure with Oracle Service Bus: Customer Panel

    For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration, please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Technorati Tags: OOW,OOW presentations,OOW soa ppt,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • SQL SERVER – Copy Column Headers from Resultset – SQL in Sixty Seconds #027 – Video

    - by pinaldave
    SQL Server Management Studio returns results in Grid View, Text View, and to a file. When we copy results from Grid View to Excel, there is a common complaint that the column headers displayed in the resultset are not copied to Excel. I often spend time performance tuning databases, and I run many DMVs in SSMS to get a quick view of the server. In my case, it is almost certain that I will need the column headers every time I copy my data to Excel or any other place. SQL Server Management Studio has two different ways to do this. Method 1: Ad-hoc. When the result is rendered, you can right click on the resultset and click on Copy with Headers. This will copy the headers along with the resultset. Additionally, you can use the shortcut key CTRL+SHIFT+C for copying column headers along with the resultset. Method 2: Option setting at the SSMS level. This is an SSMS-level setting, and I keep this option always selected as I often need the column headers when I select the resultset. Go to Tools >> Options >> Query Results >> SQL Server >> Results to Grid >> check the box “Include column headers when copying or saving the results.” Both of the methods are discussed in the following SQL in Sixty Seconds video. Here is the code used in the video. Related Tips in SQL in Sixty Seconds: Copy Column Headers in Query Analyzers in Result Set; Getting Columns Headers without Result Data – SET FMTONLY ON. If we like your idea, we promise to share educational material with you. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video
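
    For instance, a quick server-health query like the sketch below (illustrative only; any grid resultset behaves the same way) produces exactly the kind of output where you want the column headers to travel with the data into Excel:

        -- Illustrative example: top 10 wait types by accumulated wait time
        SELECT TOP (10) wait_type, waiting_tasks_count, wait_time_ms
        FROM sys.dm_os_wait_stats
        ORDER BY wait_time_ms DESC;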

    Read the article

  • What needs to be done to port a Windows Flash game to Android?

    - by Russell Yan
    My team has developed a game using Flash (with Lua embedded) that targets Windows, and we want to port that game to Android (and iOS in the future). It's acceptable to rewrite the Flash part of the client, and another team in our company recommended Unity3D to replace the Flash part. Since we have no experience in Android/iOS development, we would need to learn some new tool/language anyway. So we would like that learning to still be useful after the porting and when we start a new game. Any thoughts? Edit: I think it is worth noticing the background of the game: the project was started by a tester and a developer, both without knowing much about Flex and ActionScript. They built the game while learning, so most of the code is hard to maintain. Two other developers and I joined the project after a year or so, when one of the original pair had left (and the other had become our manager). The other two developers are recent graduates with little experience and little knowledge of Flex. I am good at the server part and the C# language. Given that the code is hard to maintain (and we do need to change a lot of code to make the game easier to play on a mobile device), and that I am good at C# (and learning), I still tend to do the porting with Unity, which could get better performance and possibly save time.

    Read the article

  • LobsterPot Solutions in the USA

    - by Rob Farley
    We’re expanding! I’m thrilled to announce that Microsoft Gold Partner LobsterPot Solutions has started another branch, appointing the amazing Ted Krueger (5-time SQL MVP awardee) as the US lead. Ted is well-known in the SQL Server world, having written books on indexing, consulting and on being a DBA (not to mention contributing chapters to both MVP Deep Dives books). He is an expert on replication and high availability, and strong in the Business Intelligence space – vast experience which is both broad and deep. Ted is based in the south east corner of Wisconsin, just north of Chicago. He has been a consultant for eons and has helped many clients with their projects and problems, taking the role of both technical lead and consulting lead. He is also tireless in supporting and developing the SQL Server community, presenting at conferences across America, and helping people through his blog, Twitter and more. Despite all this – it’s neither his technical excellence with SQL Server nor his consulting skill that made me want him to lead LobsterPot’s US venture. I wanted Ted because of his values. In the time I’ve known Ted, I’ve found his integrity to be excellent, and found him to be morally beyond reproach. This is the biggest priority I have when finding people to represent the LobsterPot brand. I have no qualms in recommending Ted’s character or work ethic. It’s not just my thoughts on him – all my trusted friends that know Ted agree about this. So last week, LobsterPot Solutions LLC was formed in the United States, and in a couple of weeks, we will be open for business! LobsterPot Solutions can be contacted via email at [email protected], on the web at either www.lobsterpot.com.au or www.lobsterpotsolutions.com, and on Twitter as @lobsterpot_au and @lobsterpot_us. Ted Krueger blogs at LessThanDot, and can also be found on Twitter and LinkedIn. This post is cross-posted from http://lobsterpotsolutions.com/lobsterpot-solutions-in-the-usa

    Read the article

  • How to read Scala code with lots of implicits?

    - by Petr Pudlák
    Consider the following code fragment (adapted from http://stackoverflow.com/a/12265946/1333025):

        // Using scalaz 6
        import scalaz._, Scalaz._

        object Example extends App {
          case class Container(i: Int)

          def compute(s: String): State[Container, Int] =
            state { case Container(i) => (Container(i + 1), s.toInt + i) }

          val d = List("1", "2", "3")
          type ContainerState[X] = State[Container, X]

          println( d.traverse[ContainerState, Int](compute) ! Container(0) )
        }

    I understand what it does on a high level. But I wanted to trace what exactly happens during the call to d.traverse at the end. Clearly, List doesn't have traverse, so it must be implicitly converted to another type that does. Even though I spent a considerable amount of time trying to find out, I wasn't very successful. First I found that there is a method in scalaz.Traversable:

        traverse[F[_], A, B](f: (A) => F[B], t: T[A])(implicit arg0: Applicative[F]): F[T[B]]

    but clearly this is not it (although it's most likely that "my" traverse is implemented using this one). After a lot of searching, I grepped the scalaz source code and found scalaz.MA's method:

        traverse[F[_], B](f: (A) => F[B])(implicit a: Applicative[F], t: Traverse[M]): F[M[B]]

    which seems to be very close. Still, I don't know what List is converted to in my example, or whether it uses MA.traverse or something else. The question is: what procedure should I follow to find out what exactly is called at d.traverse? Having even such simple code be so hard to analyze seems to me like a big problem. Am I missing something very simple? How should I proceed when I want to understand such code that uses a lot of imported implicits? Is there some way to ask the compiler what implicits it used? Or is there something like Hoogle for Scala so that I can search for a method just by its name?

    Read the article

  • The fastest way to resize images from ASP.NET. And it’s (more) supported-ish.

    - by Bertrand Le Roy
    I’ve shown before how to resize images using GDI, which is fairly common but is explicitly unsupported because we know of very real problems that this can cause. Still, many sites use that method because those problems are fairly rare, and because most people assume it’s the only way to get the job done. Plus, it works in medium trust. More recently, I’ve shown how you can use WPF APIs to do the same thing and get JPEG thumbnails, 2.5 times faster than GDI (even now that GDI really ultimately uses WIC to read and write images). The boost in performance is great, but it comes at a cost, that you may or may not care about: it won’t work in medium trust. It’s also just as unsupported as the GDI option. What I want to show today is how to use the Windows Imaging Components directly from ASP.NET, without going through WPF. The approach has the great advantage that it’s been tested and proven to scale very well. The WIC team tells me you should be able to call support and get answers if you hit problems. Caveats exist though. First, this is using interop, so until a signed wrapper sits in the GAC, it will require full trust. Second, the APIs have a very strong smell of native code and are definitely not .NET-friendly. And finally, the most serious problem is that older versions of Windows don’t offer MTA support for image decoding. MTA support is only available on Windows 7, Vista and Windows Server 2008. But on 2003 and XP, you’ll only get STA support. That means that the thread safety that we so badly need for server applications is not guaranteed on those operating systems. To make it work, you’d have to spin specialized threads yourself and manage the lifetime of your objects, which is outside the scope of this article. We’ll assume that we’re fine with all this and that we’re running on 7 or 2008 under full trust. Be warned that the code that follows is not simple or very readable. This is definitely not the easiest way to resize an image in .NET. Wrapping native APIs such as WIC in a managed wrapper is never easy, but fortunately we won’t have to: the WIC team already did it for us and released the results under MS-PL. The InteropServices folder, which contains the wrappers we need, is in the WicCop project but I’ve also included it in the sample that you can download from the link at the end of the article.

    In order to produce a thumbnail, we first have to obtain a decoding frame object that WIC can use. Like with WPF, that object will contain the command to decode a frame from the source image but won’t do the actual decoding until necessary. Getting the frame is done by reading the image bytes through a special WIC stream that you can obtain from a factory object that we’re going to reuse for lots of other tasks:

        var photo = File.ReadAllBytes(photoPath);
        var factory = (IWICComponentFactory)new WICImagingFactory();
        var inputStream = factory.CreateStream();
        inputStream.InitializeFromMemory(photo, (uint)photo.Length);
        var decoder = factory.CreateDecoderFromStream(inputStream, null,
            WICDecodeOptions.WICDecodeMetadataCacheOnLoad);
        var frame = decoder.GetFrame(0);

    We can read the dimensions of the frame using the following (somewhat ugly) code:

        uint width, height;
        frame.GetSize(out width, out height);

    This enables us to compute the dimensions of the thumbnail, as I’ve shown in previous articles. We now need to prepare the output stream for the thumbnail. WIC requires a special kind of stream, IStream (not implemented by System.IO.Stream) and doesn’t directly understand .NET streams. It does provide a number of implementations but not exactly what we need here. We need to output to memory because we’ll want to persist the same bytes to the response stream and to a local file for caching. The memory-bound version of IStream requires a fixed-length buffer but we won’t know the length of the buffer before we resize. To solve that problem, I’ve built a derived class from MemoryStream that also implements IStream. The implementation is not very complicated, it just delegates the IStream methods to the base class, but it involves some native pointer manipulation. Once we have a stream, we need to build the encoder for the output format, which could be anything that WIC supports. For web thumbnails, our only reasonable options are PNG and JPEG. I explored PNG because it’s a lossless format, and because WIC does support PNG compression. That compression is not very efficient though and JPEG offers good quality with much smaller file sizes. On the web, it matters. I found the best PNG compression option (adaptive) to give files that are about twice as big as 100%-quality JPEG (an absurd setting), 4.5 times bigger than 95%-quality JPEG and 7 times larger than 85%-quality JPEG, which is more than acceptable quality. As a consequence, we’ll use JPEG. The JPEG encoder can be prepared as follows:

        var encoder = factory.CreateEncoder(Consts.GUID_ContainerFormatJpeg, null);
        encoder.Initialize(outputStream,
            WICBitmapEncoderCacheOption.WICBitmapEncoderNoCache);

    The next operation is to create the output frame:

        IWICBitmapFrameEncode outputFrame;
        var arg = new IPropertyBag2[1];
        encoder.CreateNewFrame(out outputFrame, arg);

    Notice that we are passing in a property bag. This is where we’re going to specify our only parameter for encoding, the JPEG quality setting:

        var propBag = arg[0];
        var propertyBagOption = new PROPBAG2[1];
        propertyBagOption[0].pstrName = "ImageQuality";
        propBag.Write(1, propertyBagOption, new object[] { 0.85F });
        outputFrame.Initialize(propBag);

    We can then set the resolution for the thumbnail to be 96, something we weren’t able to do with WPF and had to hack around:

        outputFrame.SetResolution(96, 96);

    Next, we set the size of the output frame and create a scaler from the input frame and the computed dimensions of the target thumbnail:

        outputFrame.SetSize(thumbWidth, thumbHeight);
        var scaler = factory.CreateBitmapScaler();
        scaler.Initialize(frame, thumbWidth, thumbHeight,
            WICBitmapInterpolationMode.WICBitmapInterpolationModeFant);

    The scaler is using the Fant method, which I think is the best looking one even if it seems a little softer than cubic. [Zoomed comparison images: Cubic, Fant, Linear, Nearest neighbor.] We can write the source image to the output frame through the scaler:

        outputFrame.WriteSource(scaler, new WICRect {
            X = 0, Y = 0, Width = (int)thumbWidth, Height = (int)thumbHeight });

    And finally we commit the pipeline that we built and get the byte array for the thumbnail out of our memory stream:

        outputFrame.Commit();
        encoder.Commit();
        var outputArray = outputStream.ToArray();
        outputStream.Close();

    That byte array can then be sent to the output stream and to the cache file. Once we’ve gone through this exercise, it’s only natural to wonder whether it was worth the trouble.
    I ran this method, as well as GDI and WPF resizing, over thirty twelve-megapixel images for JPEG qualities between 70% and 100% and measured the file size and time to resize. Here are the results. [Charts: size of resized images; time to resize thirty 12-megapixel images.] Not much to see on the size graph: sizes from WPF and WIC are equivalent, which is hardly surprising as WPF calls into WIC. There is just an anomaly at 75% for WPF that I noted in my previous article and that disappears when using WIC directly. But overall, using WPF or WIC over GDI represents a slight win in file size. The time to resize is more interesting. WPF and WIC get similar times although WIC seems to always be a little faster. Not surprising considering WPF is using WIC. The margin of error on these results is probably fairly close to the time difference. As we already knew, the time to resize does not depend on the quality level, only the size does. This means that the only decision you have to make here is size versus visual quality. This third approach to server-side image resizing on ASP.NET seems to converge on the fastest possible one. We have marginally better performance than WPF, but with some additional peace of mind that this approach is sanctioned for server-side usage by the Windows Imaging team. It still doesn’t work in medium trust. That is a problem and shows the way for future server-friendly managed wrappers around WIC. The sample code for this article can be downloaded from: http://weblogs.asp.net/blogs/bleroy/Samples/WicResize.zip The benchmark code can be found here (you’ll need to add your own images to the Images directory and then add those to the project, with content and copy if newer in the properties of the files in the solution explorer): http://weblogs.asp.net/blogs/bleroy/Samples/WicWpfGdiImageResizeBenchmark.zip WIC tools can be downloaded from: http://code.msdn.microsoft.com/wictools To conclude, here are some of the resized thumbnails at 85% Fant. [Thumbnail images not included in this excerpt.]

    Read the article

  • Learning frameworks without learning languages

    - by Tom Morris
    I've been reading up on GUI frameworks including WPF, GTK and Cocoa (UIKit). I don't really do anything related to Windows (I'm a Mac and Linux guy) or .NET, but I'd like to be able to throw together GUIs for various operating systems. We are now in the enviable position of having high-level scripting languages that work with all of the major GUI toolkits. If you are doing Linux GUI programming, you could use GTK in C, but why not just use PyGTK (or PyQt)? Similarly, for Java, one can use JRuby. For Mac, there's MacRuby. And on .NET, there's IronRuby. This is all fine and good, and if you are building a serious project, there are tradeoffs that you might encounter when deciding whether to, say, build a WPF app in C# or in IronRuby, or whether you are going to use PyGTK or not. The subjective question I have is: what about learning those frameworks? Are there strong reasons why one should or should not learn something like WPF or Cocoa in a language one is familiar with, rather than having to learn a new language as well? I'm not saying you should never learn the language. If you are building Windows applications and you don't know C#, that might be a bit of a problem. But do you think it is okay to learn the framework first? This is both a general question and a specific question. I've used some Cocoa classes from Ruby and Python using things like PyObjC, and there always seems to be an impedance mismatch because of the way Objective-C libraries get built. Experiences and strong opinions welcome!

    Read the article

  • SQL SERVER – Copy Column Headers from Resultset – SQL in Sixty Seconds #026 – Video

    - by pinaldave
    SQL Server Management Studio returns results in Grid View, Text View, and to a file. When we copy results from Grid View to Excel, there is a common complaint that the column headers displayed in the resultset are not copied to Excel. I often spend time performance tuning databases, and I run many DMVs in SSMS to get a quick view of the server. In my case, it is almost certain that I will need the column headers every time I copy my data to Excel or any other place. SQL Server Management Studio has two different ways to do this. Method 1: Ad-hoc. When the result is rendered, you can right click on the resultset and click on Copy with Headers. This will copy the headers along with the resultset. Additionally, you can use the shortcut key CTRL+SHIFT+C for copying column headers along with the resultset. Method 2: Option setting at the SSMS level. This is an SSMS-level setting, and I keep this option always selected as I often need the column headers when I select the resultset. Go to Tools >> Options >> Query Results >> SQL Server >> Results to Grid >> check the box “Include column headers when copying or saving the results.” Both of the methods are discussed in the following SQL in Sixty Seconds video. Here is the code used in the video. Related Tips in SQL in Sixty Seconds: Copy Column Headers in Query Analyzers in Result Set; Getting Columns Headers without Result Data – SET FMTONLY ON. If we like your idea, we promise to share educational material with you. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video

    Read the article

  • What is the best way to evaluate new programmers?

    - by Rafael
    What is the best way to evaluate candidates for a new job (talking merely in terms of programming skills)? In my company we have had a lot of bad experiences with people who have good grades but do not have real programming skills. Their skills are merely those of code monkeys, without the ability to analyze problems and find solutions. Some more things that I have to note: The education system in my country sucks--really sucks. The people that are good at this kind of job are good because they have talent for it or really try to learn on their own. A university/graduate/post-grad degree doesn't necessarily mean that you know exactly how to do things. Certifications also mean nothing here, because the people in charge of the certification courses also don't have skills (or are in low-paying jobs). We really need to get good candidates that are flexible and don't have mechanical thinking (because, in our experience, this type of person performs poorly). We are in a government institution and the candidates don't necessarily come from outside, but we have the ability to accept or reject candidates until we find the correct one. I hope I'm not sounding too aggressive in my question; and BTW I'm a programmer myself. Edit: I figured out that I asked something really complex here. I will un-toggle "the correct answer" only to let the discussion flow freely, without any bias.

    Read the article

  • Free CodeSmith License!

    - by Randy Walker
    The catch? Attend the Ozarks .Net User Group meeting on April 1st. Here's a list of the other prizes for the event:

        GRAND PRIZE: 1 - iPad (Wi-Fi 16GB)
        THIRD PARTY COMPONENTS: 6 - Telerik Premium Collection; 5 - Infragistics NetAdvantage for .NET; 1 - Nevron Chart for .NET Lite; DevExpress; Xceed
        PRODUCTIVITY: 2 - CodeRush with Refactor! Pro; 2 - ReSharper; CodeSmith
        GAMES: 3 - Halo 3 ODST (Xbox 360); 3 - Forza Motorsport (Xbox 360)
        OTHER SOFTWARE: 3 - Windows 7 Ultimate; 2 - Microsoft Office Standard 2007
        HARDWARE: 2 - Microsoft Arc Mouse
        BOOKS: 12 - OReilly eBooks; 12 - Microsoft Press books; 5 - Apress books; 3 - Addison-Wesley books; 2 - Manning books; 2 - Sams books

    The Info: "Be a Professional Developer and Write Clean Code!" by Claudio Lassala on April 1, 2010. PRESENTATION TOPIC: "Be a Professional Developer and Write Clean Code!" - by Claudio Lassala. Poorly written code can be created quickly, but it comes at a cost of high maintenance. Most of the time, code can be improved easily by following some simple practices. Professional developers should know these practices and tools and apply them to their work every day. This session will cover the importance of writing clean code, the kind of attitude all developers should have towards the code they produce, as well as the practices and tools that can be used to aid you in becoming a better developer. BIOGRAPHY: Claudio Lassala is a Senior Developer at EPS Software Corp. He has presented several lectures at Microsoft events such as PDC Brazil and various other Microsoft seminars, as well as several conferences and user groups across North America and Brazil. He is a multiple winner of the Microsoft MVP Award since 2001 (for Visual FoxPro in 2001-2002, and for C# ever since), an INETA speaker, and also holds the MCSD for .NET certification. He has articles published in several magazines, such as MSDN Brazil Magazine, CoDe Magazine, UTMag, Developers Magazine, and FoxPro Advisor. More detailed information regarding his presentations and articles can be found in his MVP Profile. You can also read more about Claudio on his blog or on Twitter. Schedule:

        5:30 PM - 6:30 PM Social Networking
        6:30 PM - 7:00 PM Prizes
        7:00 PM - 8:30 PM Presentation: "Be a Professional Developer and Write Clean Code!" by Claudio Lassala
        8:30 PM - 9:00 PM Wrap-Up

    Read the article

  • Make it simple. Make it work.

    - by Sean Feldman
    In 2010 I had the experience of working for a business that had lots of challenges. One of those challenges was a lack of technical architecture and business-value recognition, which translated into spending an enormous amount of manpower and money on creating C++ solutions for the desktop client without using .NET, to minimize the "footprint" (#2) of the client application in deployment environments. This was an awkward experience, considering that C++ custom code was created from scratch to make clients talk to the .NET backend, while simply having .NET as a dependency would cut time to market by at least 50% (and I'm downplaying the estimate). Regardless, the recent Microsoft announcement about .NET vNext has reminded me of that experience and how short-sighted the architecture at that company was. The investment made into a C++ client that cannot be maintained internally by a team that specializes in .NET has created a situation where the code will be more brutal to maintain over time, and the number of developers who understand it will keep shrinking. Not only that: the ability to go cross-platform (#3) and the performance gained with native compilation (#1) would be an immediate payback. Why am I saying all this? To make a simple point to myself and remind myself again: when working on a product that needs to get to market, make it simple, make it work, and then see how technology is changing and how you can adapt. Simplicity will not let you down. But a complex solution always will.

    Read the article

  • On contract work, obligations to said contract, and looking out for yourself…

    - by jlnorsworthy
    Without boring you all with details, my last two contract assignments were cut short; I was given 3 days notice on one, and 4 weeks notice on the other. Neither of these was due to performance; they both basically came down to budget issues. On my second contract, I got the feeling that it may not have been a great place to stay for the duration of my contract. Because of the money/time spent getting me in the door, and the possible negative effect on my employer/recruiter, I decided to stay at least for a few months (and start looking several weeks before the end of my supposedly "extendable" contract). These experiences have left me a little wary of contract work. It seems that if I land a bad contract, my recruiter would take a hit (reputation or otherwise) if I quickly found another job. But on the other hand, the client company won't think twice about ending the contract early for any reason. I know that the counter argument to this is "maybe your recruiter shouldn't have put you into a crappy assignment"; either way, it seems that since I am relying on him to provide me with work, I should try not to damage his reputation with client companies. I'm basically brand new to contracting (these were my first two contracts) so these concerns are new to me. TL;DR: Is contract work, by its very nature, largely unstable? Am I worried too much about my recruiter? Should I be quicker to start looking for a new job even after just weeks at a new company (when the environment seems unstable)? If so, do I look through my recruiter or just find another position by any means necessary?

    Read the article

  • Desktop Fun: Beaches Theme Wallpapers

    - by Asian Angel
    Is your vacation still a few weeks or months away? Make the waiting a little easier by adding some of that vacation scenery to your desktop with our Beaches Theme Wallpapers collection. Note: Click on the picture to see the full-size image—these wallpapers vary in size so you may need to crop, stretch, or place them on a colored background in order to best match them to your screen’s resolution. For more fun wallpapers be certain to visit our new Desktop Fun section.

    Read the article

  • Java - System design with distributed Queues and Locks

    - by sunny
    Looking for inputs to evaluate a design for a system (Java) which would have a distributed queue serving several (but not too many) nodes. These nodes would process objects present in the distributed queue and on occasion require a distributed lock across the cluster on arbitrary (distributed) data structures. These (distributed) data structures could potentially lie in a distributed cache. Eliminating Terracotta (DSO), Hazelcast, and Akka, what could be alternative choices? Currently considering ZooKeeper as a distributed locking mechanism. Since the recommendation is that a znode not exceed 1MB in size, the understanding is that ZooKeeper should not be used as a distributed queue; see also Netflix Curator tech note 4. So should a distributed cache, say memcached or Redis, be used to emulate a distributed queue? I.e., the distributed queue would be stored in the cache and locked cluster-wide via ZooKeeper. Are there potential pitfalls with this high-level approach? The objects don't need to be taken off the queue. Each object passes through a lifecycle which determines its removal from the queue. There would be about 10k+ objects in the queue at a given time, changing states, and any node could service one stage of an object's lifecycle. (Although not strictly necessary; one node could serve the entire lifecycle if that is more efficient.) Any suggestions/alternatives? Sidenote: new to ZooKeeper, Redis, etc.

    Read the article

  • TechEd 2012: Office, SharePoint And More Office

    - by Tim Murphy
    I haven’t spent any time looking at Office 365 up to this point. I met Donovan Follette on the flight down from Chicago, and I also got to spend some time discussing the product offerings with him at the TechExpo, which sealed my decision to attend this session. The main actor of his presentation is the BCS – Business Connectivity Services. He explained that while this feature has existed in on-site SharePoint, it is a valuable new addition to Office 365 SharePoint Online. If you aren’t familiar with the BCS, it allows you to leverage non-SharePoint enterprise data sources from SharePoint. The greatest benefactors are the end users, who can leverage the data using a variety of Office products and more. The one thing I haven’t shaken is my skepticism of the use of SharePoint Designer, which Donovan used to create a WCF service. It is mostly my tendency to try to create solutions that can be managed through the whole application life cycle. In the past, migrating through test environments has been near impossible with anything other than content created by SharePoint Designer. There is a lot of end user power here. The biggest consideration I think you need to examine when reaching from your enterprise LOB data stores out to an online service and back is that you are going to take a performance hit. This means that you have to be very aware of how you configure these integrated self-serve solutions. As a rule, make sure you are using the right tool for the right situation. I appreciated that he showed both no-code and code solutions for the consumer of the LOB data. I came out of this session much better informed about the possibilities around this product. del.icio.us Tags: Office 365,SharePoint Online,TechEd,TechEd 2012

    Read the article

  • Squeezing hardware

    - by [email protected]
    It's very common that high availability means duplicate hardware, so costs grow. Nowadays, CIOs and DBAs face the challenge of reducing the money spent while increasing performance and availability. Since Grid Infrastructure 11gR2, there is a new feature that helps them meet this challenge: Server Pools. In Grid Infrastructure 11gR2, you can define server pools across the cluster, setting up the minimum number of servers, the maximum, and how important the pool is. For example, consider that "Velasco, Boixeda & co" has 3 apps in a 6-server cluster:

        The first is the main core-business app
        The second is a mid-range app
        The third is a database that is not very important

    We define the following resource requirements for the expected workload:

        1. The main app requires 2 servers
        2. The mid-range app requires 1 server
        3. The third app is not required in case of disaster

    Then we define 3 server pools across the cluster:

        1. Main pool: min two servers, max three servers, importance four
        2. Mid pool: min one server, max two servers, importance two
        3. Test pool: min zero servers, max one server, importance one

    So the initial configuration is:

        Main pool has three servers
        Mid pool has two servers
        Test pool has one server

    Logically, we can see the cluster like this. [Diagram not included in this excerpt.] If any server fails, the following algorithm is applied:

        1. A server is taken from the server pool of least importance
        2. If server pools are of the same importance, the server pool that has more than its defined minimum number of servers is chosen

    Hope it helps
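
    As a rough sketch of how pools like these are created (illustrative only: the pool names are made up, and you should confirm the exact crsctl syntax against the 11gR2 documentation for your release):

        crsctl add serverpool mainpool -attr "MIN_SIZE=2,MAX_SIZE=3,IMPORTANCE=4"
        crsctl add serverpool midpool  -attr "MIN_SIZE=1,MAX_SIZE=2,IMPORTANCE=2"
        crsctl add serverpool testpool -attr "MIN_SIZE=0,MAX_SIZE=1,IMPORTANCE=1"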

    Read the article

  • A new version of Oracle Enterprise Manager Ops Center Doctor (OCDoctor) Utility released

    - by Anand Akela
    In February, we posted a blog about the Oracle Enterprise Manager Ops Center Doctor (aka OCDoctor) utility. This utility assists in various stages of an Ops Center deployment and can be a real life saver. It is updated on a regular basis with additional knowledge (similar to an antivirus subscription) to help you identify and resolve known issues or suggest ways to improve performance. A new version (4.00) of the OCDoctor is now available. This new version adds full support for the recently announced Oracle Enterprise Manager Ops Center 12c, including prerequisites checks, troubleshoot tests, log collection, tuning, and product metadata updates. In addition, it adds several bug fixes and enhancements to the OCDoctor utility. To download OCDoctor for new installations: https://updates.oracle.com/OCDoctor/OCDoctor-latest.zip For existing installations, simply run:

        # /var/opt/sun/xvm/OCDoctor/OCDoctor.sh --update

    Tip: If you have Oracle Enterprise Manager Ops Center 12c EC installed, your OCDoctor will automatically update overnight. Join the Oracle launch webcast "Total Cloud Control for Systems" on April 12th at 9 AM PST to learn more about Oracle Enterprise Manager Ops Center 12c from Oracle Senior Vice President John Fowler, Oracle Vice President of Systems Management Steve Wilson, and a panel of Oracle executives.

    Read the article

  • IASA South East Florida Chapter February Meeting Report

    - by Rainer Habermann
    IASA South East Florida Chapter – February Meeting. The topic for our February chapter meeting was Legal Issues in IT. Ms. Kennedy, an Intellectual Property Attorney with an active litigation, trademark, and copyright practice, presented: How Google, Wal-Mart & Apple Make their Millions – The Secret Ingredient: Intellectual Property. This topic generated great interest, and the meeting room at Microsoft Ft. Lauderdale filled up to the last seat. Most architects, engineers, and MBAs are not aware of intellectual property, basic patent and trademark concepts, or legal issues related to the web. After clarifying the basic definitions, Ms. Kennedy explained in detail how intellectual property issues can make or break a company. Members had the opportunity at the end of the presentation to ask questions and discuss legal problems, and several members shared their experiences related to intellectual property and other IT-related issues. If you want to protect your ideas and intellectual property, you have to be aware of the implications and need to take the right steps to protect them. All chapter members agreed that it was an outstanding and lively presentation; Ms. Kennedy presented high-quality content and made participants aware of legal IT issues. On behalf of all chapter members, thank you, Ms. Kennedy, for taking the time for this amazing presentation, and thanks to Quent Herschelman for hosting the meeting. Rainer Habermann, President, IASA South East Florida Chapter

    Read the article

  • Keyboard locking up in Visual Studio 2010

    - by Jim Wang
    One of the initiatives I’m involved with on the ASP.NET and Visual Studio teams is the Tactical Test Team (TTT), which is a group of testers who dedicate a portion of their time to roaming around and testing different parts of the product. What this generally translates to is a day and a bit a week helping out with areas of the product that have been flagged as risky, or tackling problems that span both ASP.NET and Visual Studio. There is also a separate component of this effort outside of TTT, which is to help with customer scenarios and design. I enjoy being on TTT because it allows me the opportunity to look at the entire product and gain expertise in a wide range of areas. This week, I’m looking at Visual Studio 2010 performance problems, and this gem with the keyboard in Visual Studio locking up ended up catching my attention. First of all, here’s a link to one of the many Connect bugs describing the problem: Microsoft Connect. I like this problem because it really highlights the challenges of reproducing customer bugs. There aren’t any clear steps provided here, and I don’t know a lot about your environment: not just the basics like your OS version, but also what third-party plug-ins or antivirus software you might be running that might contribute to the problem. In this case, my gut tells me that there is more than one bug here, just by the sheer volume of reports. Here’s another thread where users talk about it: Microsoft Connect. The volume and variety of configurations are staggering. From a customer perspective, this is a very clear-cut case of basic functionality not working in the product, but from our perspective, it’s hard to find something reproducible: even customers don’t quite agree on what causes the problem (installing ReSharper seems to cause a problem… or does it?). So this, then, is the start of a QA investigation. If anybody can provide isolated repro steps (just comment on this post), that will immensely help us nail down the issue(s). I’ll be doing a multi-part series on my progress and methodologies as I look into the problem.

    Read the article

  • Tuxedo 11gR1 Released

    - by todd.little
    I've been a little quiet the last several months as the Tuxedo team has been very busy. Today Oracle announced the 11gR1 release of the Tuxedo product family. This release includes updates to Tuxedo, TSAM, and SALT, as well as 3 new products that Oracle is announcing today: the Oracle Tuxedo Application Runtime for CICS and Batch, the Oracle Application Rehosting Workbench, and the Tuxedo JCA Adapter. By providing a CICS-equivalent runtime and a rehosting workbench to automate the rehosting of COBOL CICS code, JCL procedures, data definitions, and data, Oracle has significantly lowered the effort and risk of rehosting mainframe CICS and Batch applications onto the Tuxedo runtime on open systems. By moving off proprietary legacy mainframes, customers have experienced better performance and achieved a 50-80% reduction in their total cost of ownership. The rehosting tools allow the COBOL business logic to remain unchanged and automate the replacement of CICS statements with calls to Tuxedo. The rehosted code can then run on open systems 'as-is'. Users can still use the same TN3270 interfaces they are used to, eliminating the need for retraining. Batch procedures can be run and managed under a JES2-like environment. For the first time, customers have the tools and an enterprise-class runtime environment to move their key legacy assets off the mainframe and on to distributed open systems, whether the application uses 250 MIPS, 25,000 MIPS, or more. More on these exciting new options in additional blog entries.

    Read the article

< Previous Page | 492 493 494 495 496 497 498 499 500 501 502 503  | Next Page >