Search Results

Search found 9983 results on 400 pages for 'fuzzy c means'.

  • Is Nick Clegg a man or a mouse?

    - by BizTalk Visionary
    Well, we got the hung parliament so many of us wanted! I believe it really is time for electoral change. Why? Consider: the ConMen under Cameroon have polled 36% of the great British voting public – well, those that got to vote!! That means 64% of us don’t want him as PM. So what gives him the right to govern? An ancient voting system ideal for two-party politics. But for the last 30 years we’ve had multi-party politics, and going forward we may see 4 or 5 parties stepping up. We have to set in place a system that makes this work! So what does that mean today: Nick has a golden chance to push forward the case – and in fact the absolute right – for that change. He needs to keep this in mind when he discusses coalition with both Labour and the ConMen. So the mouse approach: decide it is only fair to side with the ‘biggest’ vote and team up with the ConMen. Chances of electoral change? Big fat zero. Chances of achieving any of his other targets? Big fat zero. Why? Simple (as the Meerkat would say). Cameroon needs to become PM by hook or by crook. Once PM he holds the whip hand. Labour will dump Brown and head off into leadership-race land, Clegg will be knocking on Number 10, having meaningless meetings and seeing no reward. Finally, while Labour is at sixes and sevens, the ‘new’ PM will call a new election, gain the majority they need and dump luckless Nick!! So the man approach: team up with Labour. As one of the conditions – Brown to go. Run a referendum for PR. Get PR through, then force Labour to hold a new election under PR. Nick is now a hero and should be in a much better place following a PR election!! The man bit is standing up to the media attack for supporting Labour. Come on Nick – be a man for a better Britain!!

    Read the article

  • Solution 6 : Kill a Non-Clustered Process during Two-Node Cluster Failover

    - by StanleyGu
    Using Visual Studio 2008 and C#, I developed a Windows service A and deployed it to two nodes of a Windows Server 2008 failover cluster. Service A is part of the failover cluster service, which means that when failover occurs at node 1, the cluster service will fail service A over from node 1 to node 2. One of the tasks implemented by service A is to start, monitor or kill a process B. Process B is installed on the two nodes but is not part of the failover cluster service. When a failover occurs at node 1, the cluster service does not fail process B over from node 1 to node 2, and process B continues running at node 1. The requirement is: when failover occurs at node 1, we want the process B running at node 1 to be killed, but we do not want process B to be part of the failover cluster service. The first idea that pops up immediately is to put some code in an event handler of service A triggered by the failover. The effect of a failover on service A is similar to using Task Manager to kill the service's process, but there is no Windows service event that is triggered by killing the service's process. The events related to terminating a Windows service are OnStop and OnShutDown, but killing the process of service A triggers neither of them. The OnStop event can only be triggered by stopping the service using the Services Control Manager or the Services Management Console. Apparently, the first idea is not feasible. The second idea that emerges is to put code into the OnStart event handler of service A. When failover occurs at node 1, service A is killed at node 1 and started at node 2. During startup, service A at node 2 kills the process B that is still running at node 1. It is a workaround and works very well. The C# implementation within the OnStart event handler is as follows: 1. Capture the server names of the two nodes from App.config. 2. Determine the server name of the remote node. 3. Kill the process B running on the remote node. Check here for sample code.
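
    The linked sample is not reproduced in this excerpt, but a minimal sketch of the idea might look like the following. This is an illustration only: the App.config key names and the process name ProcessB.exe are assumptions, and the remote kill goes through WMI (Win32_Process.Terminate), since Process.Kill cannot terminate a process on another machine. The account running service A would also need the rights to do this on the remote node.

        using System;
        using System.Configuration;
        using System.Management;
        using System.ServiceProcess;

        public class ServiceA : ServiceBase
        {
            protected override void OnStart(string[] args)
            {
                // 1. Capture the two node names from App.config (key names are assumptions)
                string node1 = ConfigurationManager.AppSettings["ClusterNode1"];
                string node2 = ConfigurationManager.AppSettings["ClusterNode2"];

                // 2. The remote node is whichever name is not this machine's
                string remoteNode = string.Equals(Environment.MachineName, node1,
                    StringComparison.OrdinalIgnoreCase) ? node2 : node1;

                // 3. Kill process B on the remote node via WMI (Win32_Process.Terminate)
                var scope = new ManagementScope(string.Format(@"\\{0}\root\cimv2", remoteNode));
                scope.Connect();
                var query = new SelectQuery("SELECT * FROM Win32_Process WHERE Name = 'ProcessB.exe'");
                using (var searcher = new ManagementObjectSearcher(scope, query))
                {
                    foreach (ManagementObject proc in searcher.Get())
                    {
                        proc.InvokeMethod("Terminate", null);
                    }
                }

                // ... remaining start-up work for service A ...
            }
        }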

    Read the article

  • Paying by Cash

    - by David Dorf
    I'll grant you paying by cash in the context of stores isn't particularly interesting, but in my quest to try new payment methods I decided to pay by cash at an online store. Using a credit card means I have to hoist myself off the couch, find the card, and enter all those digits. Google Checkout certainly makes that task easier by storing my credit card information, but what happens to all those people that don't have a credit card? What about the ones that are afraid to use credit cards over the internet? There are three main options for cash payment, not all of which are accepted by every merchant. The most popular is PayPal. The issue I have with them is that returns and disputes have to be handled with PayPal, not the merchant. I once used PayPal at a shady online store and lost my money. Yeah, my bad, but they wouldn't help me at all. PayPal was purchased by eBay in 2002. BillMeLater is best for larger purchases, because at checkout they actually run a credit check to make sure you're credit worthy. Assuming you are, they pay the merchant on your behalf and mail you a bill, which you had better pay quickly or interest will start to accrue. That's nice for the merchant because they get paid right away, and I presume there are no charge-backs. BillMeLater was purchased by eBay in 2008. Last night I tried eBillMe for the first time. After checkout, they send you a bill via email and expect you to pay either via online banking (they provide the instructions to set everything up) or at walk-in locations across the US (typically banks). The process was quick and easy. The merchant doesn't ship the product until the bill is paid, so there's a day or two of delay. For the merchant there are no charge-backs, and the fees are less than credit cards. For the shopper, they provide buyer protection similar to that offered by credit cards, and 1% cashback on purchases. Once the online bill-pay is set up, it's easy to reuse in the future. Seems like a win-win for merchants and shoppers.

    Read the article

  • Indian government departments have more insecure websites than others.

    - by Anirudha
    Originally posted on: http://geekswithblogs.net/anirugu/archive/2013/10/26/indian-government-department-have-more-unsecure-website-then-others.aspx
    One of my friends shared his college experience with me. He is not involved with computer science. One day he told me that Ankit Fadia came to their college and, in front of many students, showed how to hack the BSNL website with a few tricks, breaking down how the BSNL site works. I told him that BSNL runs one of the most insecure websites in India. If you log in to the website it may respond in a few seconds, but sometimes it takes 58 minutes. That is not a typo – 58 minutes, just under an hour: open a tab, request the link, and it will open hours later. If you are using IE8, Chrome or Firefox you are forced to use IE7 or downgrade; I simply use IE7 mode in IE to make it work. This happens because they use something called DynaTrace. The site is also deeply insecure – now guess how! Suppose my username is xyz and my password is abc. When I go to the site to reset my password, it asks me to fill in a password, but my portal password does not work there – remember that the broadband username and password are different from the portal login. So if I want to reset your password, I only need to know your broadband username: I log in with my own account, open the password reset page, fill in your broadband username, and it works. I have not actually tried this, but broadband usernames are easy to guess, since they all follow the same pattern. Is this safe? Nope. There are many things on the site that make it feel like a nineteenth-century website; they still live in a world of popups. These sites are nothing but crap – they don't work most of the time, and when they do work they run far too slowly.

    Read the article

  • Incremental Statistics Maintenance – what statistics will be gathered after DML occurs on the table?

    - by Maria Colgan
    Incremental statistics maintenance was introduced in Oracle Database 11g to improve the performance of gathering statistics on large partitioned tables. When incremental statistics maintenance is enabled for a partitioned table, Oracle accurately generates global level statistics by aggregating partition level statistics. As more people begin to adopt this functionality we have gotten more questions around how they expect incremental statistics to behave in a given scenario. For example, last week we got a question about which partitions should have statistics gathered on them after DML has occurred on the table. The person who asked the question assumed that statistics would only be gathered on partitions that had stale statistics (10% of the rows in the partition had changed). However, what they actually saw when they did a DBMS_STATS.GATHER_TABLE_STATS was that all of the partitions affected by the DML had statistics re-gathered on them. This is the expected behavior: incremental statistics maintenance is supposed to yield the same statistics as gathering table statistics from scratch, just faster. This means incremental statistics maintenance needs to gather statistics on any partition that will change the global or table level statistics. For instance, the min or max value for a column could change after just one row is inserted or updated in the table. It might be easier to demonstrate this using an example. Let’s take the ORDERS2 table, which is partitioned by month on order_date. We will begin by enabling incremental statistics for the table and gathering statistics on the table. After the statistics gather, the last_analyzed date for the table and all of the partitions now shows 13-Mar-12. And we now have the following column statistics for the ORDERS2 table. We can also confirm that we really did use incremental statistics by querying the dictionary table sys.HIST_HEAD$, which should have an entry for each column in the ORDERS2 table. So, now that we have established a good baseline, let’s move on to the DML. Information is loaded into the latest partition of the ORDERS2 table once a month. Existing orders may also be updated to reflect changes in their status. Let’s assume the following transactions take place on the ORDERS2 table this month. After these transactions have occurred we need to re-gather statistics, since the partition ORDERS_MAR_2012 now has rows in it and the number of distinct values and the maximum value for the STATUS column have also changed. Now if we look at the last_analyzed date for the table and the partitions, we will see that the global statistics and the statistics on the partitions where rows have changed due to the update (ORDERS_FEB_2012) and the data load (ORDERS_MAR_2012) have been updated. The column statistics also reflect the changes, with the number of distinct values in the STATUS column increasing to reflect the update. So, incremental statistics maintenance will gather statistics on any partition whose data has changed, when that change will impact the global level statistics.

    Read the article

  • Willy Rotstein on Analytics and Social Media in Retail

    - by sarah.taylor(at)oracle.com
    Recently I came across a presentation from Dan Zarrella on "The Science of Retweets" (http://www.slideshare.net/HubSpot/the-science-of-retweets-with-dan-zarrella). It is an insightful, fact-based analysis of how tweets propagate and what makes them successful. The analysis is of course very interesting for those of us interested in Tweeting. However, what really caught my attention is how well it illustrates, from a very different angle, some of the issues I am discussing with retailers these days. In particular the opportunities that e-commerce and social media open to those retailers with the appetite and vision to tackle the associated analytical challenges. And these challenges are of course not straightforward. In his presentation Dan introduces the concept of Observability. I haven't had the opportunity to discuss with Dan his specific definition of the term. However, in practical retail terms, I would say that it means that through social media (and other web channels such as search) we can analyze and track processes by measuring indicators that were not measurable before. The focus is on identifying patterns across a large number of consumers rather than what a particular individual "Likes". The potential impact for retailers is huge. It opens the opportunity to monitor changes in consumer preference and plan the business accordingly. And you can do this almost "real time" rather than through infrequent surveys that provide a "rear view" picture of your consumer behaviour. For instance, you could envision identifying when a particular set of fashion styles is breaking out from the pack, and committing to a re-buy. Or you could monitor when the preference for a specific mobile device has declined and hence markdowns should be considered; or how demand for a specific ready-made food typically flows across regions, and manage the inventory accordingly. Search, blogging, website and store data may all need to be considered in identifying these trends. The data volumes involved are huge (check Andrea Morgan's recent post on "Big Data" in retail) but so are the benefits. As Andrea says, for the first time we can start getting insight into "why" the business is performing in a certain way rather than just reporting on what is happening. And it is not just about the data volumes. Tackling the challenge also calls for integrated planning systems that can bring data and insight into the context of the decision-making process Buyers, Merchandisers and Supply Chain managers are following. I strongly believe that only when data and process come together can you move from the anecdotal to systematically improving business performance. I would love to hear your opinions on these trends and where you think Retail is heading to exploit these topics - please email me: [email protected]

    Read the article

  • Using Dependency Walker

    - by Valter Minute
    Dependency Walker is a very useful tool that can be used to find the dependencies of a Portable Executable module. The PE format is also used on Windows CE, which means that Dependency Walker can be used to analyze Windows CE/Windows Embedded Compact modules as well. On Win32 it can also be used to monitor the modules loaded by an application at runtime, but this feature is not supported on CE. You can download Dependency Walker for free here: http://dependencywalker.com/. To analyze the dependencies of a Windows CE/Windows Embedded Compact 7 module you can just open it using Dependency Walker. If you want to check if a specific module can run on a Windows CE/Windows Compact 7 OS Image you can copy the executable into the same directory that contains your OS binaries (FLATRELEASEDIR). In this way Dependency Walker will highlight missing DLLs or missing entry points inside existing DLLs. Let’s do a quick sample. You need to check if myapp.exe (an application from a third party) can run on an image generated with your Test01 OS Design. Copy myapp.exe to the flat release directory of your OS Design. Launch depends.exe and use the File\Open option of its main menu to open the application executable file you just copied. You may receive an error if some of the modules required by your application are missing. Before you analyze the module dependencies it is important to configure Dependency Walker to check DLLs in the same folder where your application file is stored. This is needed because some Windows CE DLLs have the same name as Win32 system DLLs but different entry points. To configure the DLL search path select “Options\Configure Module Search Order…” from Dependency Walker's main menu. Select “The application directory” in the “Current Search Order” list and move it to the top of the list using the “Move Up” button. The system will ask to refresh the window contents to reflect your configuration change; click on “Yes” to proceed. Now you can inspect myapp.exe's dependencies. Some DLLs are missing (XAMLRUNTIME.DLL and TILEENGINE.DLL) and OLE32.DLL exists but does not export the “CoInitialize” entry point that is required by myapp.exe. The bad news is that myapp.exe will not run on your OS Image; the good news is that now you know what’s missing and you can add the required modules to your OS Design and fix the problem!

    Read the article

  • .NET development on a “Retina” MacBook Pro

    - by Jeff
    The rumor that Apple would release a super high resolution version of its 15” laptop has been around for quite a while, and one I watched closely. After more than three years with a 17” MacBook Pro, and all of the screen real estate it offered, I was ready to replace it with something much lighter. It was a fantastic machine, still doing 6 or 7 hours after 460 charge cycles, but I wanted lighter and faster. With the SSD I put in it, I was able to sell it for $750. The appeal of higher resolution goes way back to when I would plug into a projector and scale up. Consolas, as it turns out, is a nice looking font for code when it’s bigger. While I have mostly indifference for iOS, I have to admit that a higher pixel density on the iPhone and iPad is pretty to look at. So I ordered the new 15” “Retina” model as soon as the Apple Store went live with it, and got it seven days later. I’ve been primarily using Parallels as my VM of choice from OS X for about five years. They recently put out an update for compatibility with the display, though I’m not entirely sure what that means. I figured there would have to be some messing around to get the VM to look right. The combination that seems to work best is this: set the display in OS X to “more room,” which is roughly the equivalent of the 1920x1200 that my 17” did. It’s not as stunning as the text at the default 1440x900 equivalent (in OS X), but it’s still quite readable. Parallels still doesn’t entirely know what to do with the high resolution, though what it should do is somehow treat it as native. That flaw aside, I set the Windows 7 scaling to 125%, and it generally looks pretty good. It’s not really taking advantage of the display for sharpness, but hopefully that’s something that Parallels will figure out. Screen tweaking aside, I got the base model with 16 gigs of RAM, so I give the VM 8. I can boot a Windows 7 VM in 9 seconds. Nine seconds! The Windows Experience Index scores are all 7 and above, except for graphics, which are both at 6. Again, that’s in a VM. It’s hard to believe there’s something so fast in a little slim package like that. Hopefully this one gets me at least three years, like the last one.

    Read the article

  • Mono for Android Book has been Released!!!!!

    - by Wallym
    If I understand things correctly, and I make no guarantees that I do, our Mono for Android book has been RELEASED!  I'm not quite sure what this means, but my guess is that it has been printed and is being shipped to various book sellers. So, if you have pre-ordered a copy, it's now up to Amazon to send it to you.  It's fully out of the control of me, Wrox, Wiley, and everyone else but Amazon. If you haven't bought a copy already, why?  Seriously, go order 8-10 copies for the ones you love.  They'll make great romantic gifts.  Just think of the look on the other person's face when you give them a copy of our book. Here's a little about the book: The wait is over! For the millions of .NET/C# developers who have been eagerly awaiting the book that will guide them through the white-hot field of Android application programming, this is the book. As the first guide to focus on Mono for Android, this must-have resource dives into writing applications against Mono with C# and compiling executables that run on the Android family of devices. Putting the proven Wrox Professional format into practice, the authors provide you with the knowledge you need to become a successful Android application developer without having to learn another programming language. You'll explore screen controls, UI development, tables and layouts, and MonoDevelop as you become adept at developing Android applications with Mono for Android. Develop Android apps using tools you already know—C# and .NET. Aimed at providing readers with a thorough, reliable resource that guides them through the field of Android application programming, this must-have book shows how to write applications using Mono with C# that run on the Android family of devices. A team of authors provides you with the knowledge you need to become a successful Android application developer without having to learn another programming language. You'll explore screen controls, UI development, tables and layouts, and MonoDevelop as you become adept at planning, building, and developing Android applications with Mono for Android. Professional Android Programming with Mono for Android and .NET/C#:
    • Shows you how to use your existing C# and .NET skills to build Android apps
    • Details optimal ways to work with data and bind data to controls
    • Explains how to program with Android device hardware
    • Dives into working with the file system and application preferences
    • Discusses how to share code between Mono for Android, MonoTouch, and Windows® Phone 7
    • Reveals tips for globalizing your apps with internationalization and localization support
    • Covers development of tablet apps with Android 4
    Wrox Professional guides are planned and written by working programmers to meet the real-world needs of programmers, developers, and IT professionals. Focused and relevant, they address the issues technology professionals face every day. They provide examples, practical solutions, and expert education in new technologies, all designed to help programmers do a better job. Now, go buy a bunch of copies!!!!! If you are interested in iPhone and Android and would like to get a little more knowledgeable in the area of development, you can purchase the 3-pack of books from Wrox on Mobile Development with Mono.  This will cover MonoTouch, Mono for Android, and cross-platform methods for using both tools.  A great package in and of itself.  The name of that package is: Wrox Cross Platform Android and iOS Mobile Development Three-Pack

    Read the article

  • SharpDX and game engines, back to zero?

    - by Baboon
    I'm a desktop developer (I mainly do WPF for a living) but I want to make games as a hobby. So a few months ago, I started reading blogs, gamedevSE, you name it. I understand that in the C++/DirectX world you have engines such as Unity3D with designers and whatnot, but as much as I'm OK with spending months understanding game development, I'd prefer to stay with my comfy C#. So I thought I'd develop my first games in C#/DirectX through SharpDX. But then, I can't use game engines anymore, since they're made for C++/DirectX and not SharpDX (OK, I could do P/Invokes but that defeats the purpose of SharpDX). I do know about XNA, and I also know I can't publish to the marketplace with it (and quite frankly, I don't really want to learn an API that is in jeopardy). So how do you reconcile writing games in C# with using existing game engines instead of reinventing the wheel? Wait for ports? So far I've found out the following:
    1) MOgre: after doing the first tutorials and digging around, it seems MOgre gives you the worst of both worlds. You can't port it with Mono because it directly references a C++/CLI dll (Ogre), and C++/CLI isn't supported by Mono. And since it's not C++ itself, you can't make a pure build. Which, as far as I understand, means I'd be stuck on Windows (and not even WinRT/Metro compatible), without the capability of porting anything to mobiles or other OSes (Mac/Linux). Even though it looks really nice to develop with MOgre, I'd like to learn something a little more open to future broadening.
    2) MonoGame: on the other hand, MonoGame seems to be a rewrite of XNA on top of SharpDX, which sounds very promising: Mono allows me to easily port my games to other platforms, mobiles included. SharpDX allows me to access the latest DirectX versions. XNA would be my first choice if only MS showed some hope for its future. It really looks like MonoGame is nothing other than XNA on SharpDX.
    3) Axiom looks good too, but I lack info on the subject (and pages with poor design don't give me a good feeling about an API...).
    4) XNA with SunBurn looks good: it should be portable to Mono (can anyone with experience give us feedback on that?), thus multi-platform. It's then marketplace-able since Mono itself is.
    Did I miss something or are 2) and 4) my best options (aside from the fact Mono doesn't support any XAML)?

    Read the article

  • Gawker Passwords

    - by Nick Harrison
    There has been much news about the hack of the Gawker web sites. There has even been an analysis of the common passwords found. This list is embarrassing in many ways. The most common password was "123456". The second most common password was "password". Much has also been written providing advice on how to create good passwords. This article provides some interesting advice, none of which should be taken. Anyone reading my blog probably already knows the importance of strong passwords, so I am not going to reiterate the reasons here. My target audience is more the folks defining password complexity requirements. A user cannot come up with a strong password if we have complexity requirements that don't make sense. With that in mind, here are a few guidelines:
    Long Passwords: Insist on long passwords. In some cases, you may need to make changes to allow a long password. I have seen many places that cap passwords at 8 characters. Passwords need to be 8 characters at an absolute minimum. Consider how much stronger the passwords would be if you doubled the length. Passwords that are 15-20 characters will be that much harder to crack. There is no need to limit passwords to 8 characters.
    Don't Require Special Characters: Many complexity rules will require that your password include a capital letter, a lower case letter, a number, and one of the "special" characters, the shifted symbols above the number keys. The problem with such rules is that the resulting passwords are harder to remember. It also means that you will have a smaller set of characters in the resulting passwords. If you must include one of the 10 digits and one of the 10 "special" characters, then you have dramatically reduced the character set that will make up the final password. Two characters will be one of 10 possible values instead of one of 70. Two additional characters will be one of 26 possible characters instead of a 70 character potential character set. If you limit passwords to 8 characters, you are left with only 4 characters having the full set of 70 potential values. With these character restrictions in place, there are 1.6 x 10^12 possible passwords. Without these special character restrictions, but allowing numbers and special characters, you get a total of 5.76 x 10^14 possible passwords. Even if you only allowed upper and lower case letters and digits, you would still have 2.18 x 10^14 passwords. You can do the math any number of ways: requiring special characters will always weaken passwords. Now imagine the number of passwords when you require more than 8 characters.
    If you are responsible for defining complexity rules, I urge you to take these guidelines into account. What other guidelines do you follow?
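
    To make the arithmetic concrete, here is a quick back-of-the-envelope calculation in C# (my own illustration, not from the original post). It assumes the roughly 70-character keyboard set mentioned above and that the composition rule pins exactly four of the eight positions to specific character classes, which is what the 1.6 x 10^12 figure works out to.

        using System;

        class PasswordSpace
        {
            static void Main()
            {
                const int length = 8;
                const double fullSet = 70;   // upper + lower + digits + shifted "special" characters
                const double alphaNum = 62;  // letters plus digits only

                // No composition rules: every position can be any of the 70 characters
                double unrestricted = Math.Pow(fullSet, length);                            // ~5.76 x 10^14

                // Forced composition: one upper, one lower, one digit, one special,
                // with the remaining four positions free (ordering of the forced picks ignored)
                double restricted = 26d * 26d * 10d * 10d * Math.Pow(fullSet, length - 4);  // ~1.6 x 10^12

                // Letters and digits only, no special characters required
                double lettersAndDigits = Math.Pow(alphaNum, length);                       // ~2.18 x 10^14

                Console.WriteLine("Unrestricted 70-char set:  {0:E2}", unrestricted);
                Console.WriteLine("Forced character classes:  {0:E2}", restricted);
                Console.WriteLine("Letters and digits only:   {0:E2}", lettersAndDigits);
                Console.WriteLine("Same 70-char set, 16 long: {0:E2}", Math.Pow(fullSet, 16));
            }
        }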

    Read the article

  • Silverlight Cream for June 03, 2010 -- #875

    - by Dave Campbell
    In this Issue: Ben Hodson, Fons Sonnemans, SilverLaw, Mike Snow, John Papa, René Schulte, Walt Ritscher, and David Anson. Shoutouts: René Schulte announced a whole batch of new features for WriteableBitmap that are now available: Filled To The Bursting Point - WriteableBitmapEx 0.9.5.0 Check out John Papa's Sticky Seesmic Desktop Plugin ... download it, play with it... he's going to blog about building plugins later Tim Heuer reported a Silverlight 4 minor update–June 2010 Erik Mork and Crew have a new Podcast up: This Week in Silverlight: Redmond Exodus? From SilverlightCream.com: Tutorial for Configuring Silverlight 4, Entity Framework and WCF RIA Services in Separate Component Assemblies (DLL’s) Ben Hodson is a new author(to me) that submitted his post at SilverlightCream.com... this is a good-looking tutorial on configuring separate component assemblies for all your project pieces. SpiralText in Silverlight 4 Fons Sonnemans had a good time playing with the PathListBox in Blend and produced a demo of text on a Spiral... you can run it right on the post, then grab the code. How To: Starting A Storyboard Not Before The Application Has Completed Loading - Silverlight 4 SilverLaw takes a look at the problem of having a Storyboard start too early, and demonstrates code to avoid the problem. Silverlight Tip of the Day#27 – Displaying Special Characters in XAML Mike Snow's latest Tip of the day is on encoding 'special' characters for use in XAML... simple looking at it, frustrating to debug if you don't do it right. Diving into the RichTextBox (Silverlight TV #31) John Papa talks about the RichTextBox with Mark Rideout in this edition of Silverlight TV. Mark provides a great video tutorial for the control. Push and Pull - Silverlight Webcam Capturing Details Boy, René Schulte doesn't slow down does he?... his latest is (in his words from a section heading) "Silverlight Webcam 101" ... and he means it... this is one to save to OneNote or as a PDF! Looking for Silverlight BiDi or RTL? Use the FlowDirection property If you need RTL or BiDi in Silverlight and you haven't checked it out yet, Walt Ritscher has a nice intro up on using the FlowDirection property, with demos and code. How to: Show text labels on a numeric axis with Silverlight/WPF Toolkit Charting David Anson has another charting puzzle resolved on his site... putting text labels on the dependent axis. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • GlassFish Back from Devoxx 2011 Mature Java EE 6 and EE 7 well on its way

    - by alexismp
    I'm back from my 8th (!) Devoxx conference (I don't think I've missed one since 2004) and this conference keeps delivering on the promise of a Java developer paradise week. GlassFish was covered in many different ways, and I was not involved in a good number of them, which can only be a good sign! Several folks asked me when my Java EE 6 session with Antonio Goncalves was scheduled (we've been covering this for the past two years in University sessions, hands-on labs and regular sessions). It turns out we didn't team up this year (Antonio was crazy busy preparing for Devoxx France) and I had a regular GlassFish session. Instead, this year, Bert Ertman and Paul Bakker covered the 3-hour Java EE 6 University session ("Duke’s Duct Tape Adventures") on the very first day (using GlassFish) with great success it seems. The Java EE 6 lab was also a hit, with a full room of folks covering a lot of technical ground in 2.5 hours (with GlassFish of course). GlassFish was also mentioned during Cameron Purdy's keynote (pretty natural, even if that surprised a number of folks that had not been closely following GlassFish) but also in Stephan Janssen's keynote as the engine powering Parleys.com. In fact Stephan was a speaker in the GlassFish session, describing how they went from a single-instance Tomcat setup to a clustered GlassFish + MQ environment. Also in the session was Johan Vos (of Mollom fame, among other things). Both of these customer testimonials were made possible because GlassFish has been delivering full Java EE 6 implementations for almost two years now, which is plenty of time to see serious production deployments on it. The Java EE Gathering (BOF) was very well attended and very lively, with many spec leads participating and discussing progress and also pain points with folks in the room. Thanks to all those attending this session; a good number of RFEs and priority points came out of it. While this wasn't a GlassFish session by any means, it's great to have the current RESTful Admin and upcoming Java EE 7 planned features be a satisfactory answer to some of the requests from the attendees. Last but certainly not least, the GlassFish team is busy with Java EE 7 and version 4 of the product. This was discussed and shown during the Java EE keynote and in greater detail in Jerome Dochez's session. If the tweets on his demo (virtualization, provisioning, etc.) are any indication, it was very well received. Java EE 6 adoption is doing great and GlassFish, being a production-quality reference implementation, is one of the first to benefit from this. And with GlassFish 4.0, we're looking at increasing product and community adoption by offering a pragmatic technical solution to Java EE PaaS deployments. Stay tuned! (The impatient among you are encouraged to grab a 4.0 build and provide feedback.)

    Read the article

  • Partition Wise Joins II

    - by jean-pierre.dijcks
    One of the things that I did not talk about in the initial partition wise join post was the effect it has on resource allocation on the database server. When Oracle applies a different join method - e.g. not PWJ - what you will see in SQL Monitor (in Enterprise Manager) or in an Explain Plan is a set of producers and a set of consumers. The producers scan the tables in the join. If there are two tables the producers first scan one table, then the other. The producers thus provide data to the consumers, and when the consumers have the data from both scans they do the join and give the data to the query coordinator. Now that behavior means that if you choose a degree of parallelism of 4 to run such a query with, Oracle will allocate 8 parallel processes. Of these 8 processes 4 are producers and 4 are consumers. The consumers only actually do work once the producers are fully done with scanning both sides of the join. In the plan above you can see that the producers access table SALES [line 11] and then do a PX SEND [line 9]. That is the producer set of processes working. The consumers receive that data [line 8] and twiddle their thumbs while the producers go on and scan CUSTOMERS. The producers send that data to the consumers, indicated by PX SEND [line 5]. After receiving that data [line 4] the consumers do the actual join [line 3] and give the data to the QC [line 2]. BTW, the myth that you see twice the number of processes due to the setting PARALLEL_THREADS_PER_CPU=2 is obviously not true. The above is why you will see two times the number of processes of the DOP. In a PWJ plan the consumers are not present. Instead of producing rows and giving those to different processes, a PWJ only uses a single set of processes. Each process reads its piece of the join across the two tables and performs the join. The plan here is notably different from the initial plan. First of all the hash join is done right on top of both table scans [line 8]. This query is a little more complex than the previous one, so there is a bit of noise above that bit of info, but for this post let’s ignore that (sort stuff). The important piece here is that the PWJ plan will typically be faster and, in terms of PX process count and resources, typically cheaper. You may want to look out for those plans and try to get those to appear a lot... CREDITS: credits for the plans and some of the info on the plans go to Maria, as she actually produced these plans and is the expert on plans in general... You can see her talk about explaining the explain plan and other optimizer stuff over here: ODTUG in Washington DC, June 27 - July 1; on the Optimizer blog; at OpenWorld in San Francisco, September 19 - 23. Happy joining and hope to see you all at ODTUG and OOW...

    Read the article

  • Miracle Growth Of Organs From Our Own Cells

    - by Rekha
    At present, there is a shortage of healthy organs. The donor and patient also have to be closely matched, and there is a chance the patient’s immune system may reject the transplant. Right now, researchers are seriously involved in a new kind of solution: "bioartificial" organs are being grown from the patient’s own cells. There are a few people who have already received lab-grown bladders. The bladder technique was developed by Anthony Atala of the Wake Forest Institute for Regenerative Medicine in Winston-Salem, North Carolina. Healthy cells are taken from the patient’s diseased bladder and caused to multiply profusely in petri dishes. The muscle cells go on the outside and the urothelial cells on the inside, layered one layer at a time. The bladder-to-be is then incubated at body temperature until the cells form functioning tissue. This process could take six to eight months. Organs with lots of blood vessels, such as kidneys or livers, are harder to grow than hollow ones like bladders. Atala’s group works on 22 organs and tissues, including ears, and recently made a functioning piece of human liver. Others in the list include: Columbia University – Jawbone, Yale University – Lung, University of Minnesota – Rat heart, University of Michigan – Artificial Kidney. Growing a copy of the patient’s own organ is not always possible – for instance, when the original is completely damaged by cancer. One alternative is a bank of stem cells collected, without harming human embryos, from amniotic fluid in the womb; those cells are coaxed into becoming heart, liver and other organ cells. A bank of 100,000 stem cell samples would have enough genetic variety to match nearly any patient. Surgeons could order organs grown as needed instead of waiting for the perfect donor. "There are few things as devastating for a surgeon as knowing you have to replace the tissue and you’re doing something that’s not ideal," says Atala, a urologic surgeon himself. "Wouldn’t it be great if they had their own organ?" Great for the patient especially, he means. Via National Geographic. This article, titled Miracle Growth Of Organs From Our Own Cells, was originally published at Tech Dreams.

    Read the article

  • Is your test method self-validating ?

    - by mehfuzh
    Writing state-of-the-art unit tests that can validate every part of your framework is challenging and interesting at the same time; it's like becoming a samurai. One of the key concepts is to keep our tests synced all the time as the underlying code changes, and to break them down into the smallest units possible. This also means we should avoid multiple conditions embedded in a single test. Let's consider the following example of transferring funds:

        [Fact]
        public void ShouldAssertTranserFunds()
        {
            var currencyService = Mock.Create<ICurrencyService>();
            //// current rate
            Mock.Arrange(() => currencyService.GetConversionRate("AUS", "CAD")).Returns(0.88f);

            Account to = new Account { Currency = "AUS", Balance = 120 };
            Account from = new Account { Currency = "CAD" };

            AccountService accService = new AccountService(currencyService);

            Assert.Throws<InvalidOperationException>(() => accService.TranferFunds(to, from, 200f));

            accService.TranferFunds(to, from, 100f);

            Assert.Equal(from.Balance, 88);
            Assert.Equal(20, to.Balance);
        }

    At first look it seems OK, but as you look more closely it is actually doing two tasks in one test. The Assert.Throws line is trying to validate the exception for an invalid fund transfer, and the final asserts check whether the currency conversion was made successfully. The name of the test itself is also pretty vague. The first rule is that unit tests should always reflect the inner workings of the target code: just by looking at their names they should be self-explanatory. Having an obscure name for a test method not only increases the chances of cluttering the test code, but also invites adding multiple paths into it, eventually making things as messy as possible. I would rather have two test methods that explicitly describe their intent and are more self-validating (a sketch of the split follows below):
    • ShouldThrowExceptionForInvalidTransferOperation
    • ShouldAssertTransferForExpectedConversionRate
    Having this type of breakdown also helps us pinpoint reported bugs easily, rather than wasting time debugging something more general, and it can minimize confusion among team members. Finally, we should always make our tests F.I.R.S.T (Fast, Independent, Repeatable, Self-validating, Timely) [Bob Martin – Clean Code]. Only this will be enough to ensure our tests are as simple and clean as possible. Hope that helps.
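
    Here is one way the split might look. This is an illustrative sketch only, not code from the original post; it reuses the same Account, AccountService and ICurrencyService types and the xUnit [Fact] and JustMock-style Mock calls from the snippet above.

        using System;
        using Telerik.JustMock;
        using Xunit;

        public class AccountServiceTests
        {
            [Fact]
            public void ShouldThrowExceptionForInvalidTransferOperation()
            {
                var currencyService = Mock.Create<ICurrencyService>();
                var accService = new AccountService(currencyService);

                Account to = new Account { Currency = "AUS", Balance = 120 };
                Account from = new Account { Currency = "CAD" };

                // transferring more than the available balance should fail
                Assert.Throws<InvalidOperationException>(() => accService.TranferFunds(to, from, 200f));
            }

            [Fact]
            public void ShouldAssertTransferForExpectedConversionRate()
            {
                var currencyService = Mock.Create<ICurrencyService>();
                Mock.Arrange(() => currencyService.GetConversionRate("AUS", "CAD")).Returns(0.88f);

                Account to = new Account { Currency = "AUS", Balance = 120 };
                Account from = new Account { Currency = "CAD" };
                var accService = new AccountService(currencyService);

                accService.TranferFunds(to, from, 100f);

                // the 100 transferred at a rate of 0.88 shows up as 88, leaving 20 in the
                // source account (the same asserts as the original test)
                Assert.Equal(88, from.Balance);
                Assert.Equal(20, to.Balance);
            }
        }

    Each test now fails for exactly one reason, which is what makes it self-validating.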

    Read the article

  • How to Write an E-Book

    A few days ago my attention was drawn to a tweet spat between Karl Seguin and Scott Hanselman around the relaunch of ASP.NET and the title element in HTML. Tempest in a teapot of course, but worthwhile as I did some googling on Karl and found his blog at codebetter.com. From there it was a short jump to his free e-book, The Foundations of Programming. This short book is distinguished by its orientation (opinionated), its tone (mentoring) and its honesty, which is refreshing. In Foundations, Karl covers what he considers the basics of programming and good design, including test driven development, dependency injection and domain driven design. Karl is opinionated, as the topics suggest, and doesn't bother to pretend that he doesn't think what he's suggesting is the better way, not just another way. He is aligned with ALT.NET, and gives an excellent overview of what that means; an overview more enlightening than the ALT.NET site. ALT.NET has its critics, but presenting a strong opinion grabbed my attention as a reader. It is a short walk from opinionated to hectoring, but Karl held my attention without insulting me. He takes the time to explain, with examples, from the ground up, the problems that test driven development and dependency injection solve. So for dependency injection he builds it up from no DI, to a hand crafted approach, to a full fledged DI framework. This approach is more persuasive than just prescriptive, and engaged me as the reader to follow along with his train of thought. Foundations is not as pedantic as I am making it sound. The final ingredient in Karl's mix is honesty. He acknowledges that sometimes unit testing does cost more up front and take more time. He admits that sometimes he designs something a certain way just to be testable. He also warns that focusing too much on DI and loose coupling can lead to the poor design you are trying to avoid. These points add depth to his argument as I could tell he's speaking from experience, with some hard-won lessons. I enjoyed The Foundations of Programming. When I was done with it, I was amazed at how much I got out of its 80-some pages. It is a rarity these days to come across something worthwhile that is longer than a tweet but shorter than a tome. Well done Karl.   -- Relevant Links -- The now titled and newly relaunched page in question: http://www.asp.net/ The pleasantly confusing ALT.NET homepage: http://altdotnet.org/ A longer review, with details, chapter listings and all that important stuff: http://accidentaltechnologist.com/book-reviews/book-review-foundations-of-programming-by-karl-seguin/

    Read the article

  • Silverlight Cream for May 05, 2010 -- #856

    - by Dave Campbell
    In this Issue: Jeremy Alles(-2-), Kunal Chowdhury, anand iyer, Yochay Kiriaty(-2-, -3-), Max Paulousky, David Kelley, smartyP, Tim Heuer, and Dan Wahlin. Shoutout: Tim Heuer provides links for all the Ways to give feedback on Silverlight From SilverlightCream.com: [WP7] Bug when using NavigationService in Windows Phone 7 Jeremy Alles has blogged about a bug he found using the Navigation service in WP7. He gives the steps to reproduce and a couple possible workarounds. [WP7] Using the camera in the emulator Jeremy Alles is also digging into the camera functionality in the emulator. He has code demonstrating launching a camera task, and a list of other tasks available. Silverlight Tutorials Chapter 3: Introduction to Panels Kunal Chowdhury has Chapter 3 of his Silverlight 4 Tutorial series up and he's talking about Panels this time out. Push Notifications in Windows Phone 7 developer tools CTP April Refresh anand iyer is discussing the Push Notifications, only from a code perspective. Good information and good additional links to follow. Windows Phone Application Life Cycle Yochay Kiriaty talks with Tudor Toma and Jaime Rodriguez about the WP7 application lifecycle on Channel 9. Understanding Microsoft Push Notifications for Windows Phones Yochay Kiriaty has a 2-part post up on WP7 Push Notifications. The first part is explaining what Push Notifications are and why we need them... as a developer and as an end user viewing Toast or Tile notifications. Understanding How Microsoft Push Notification Works – Part 2 In the 2nd part of his Push Notification series, Yochay Kiriaty discusses how the Push Notification works under the covers. To Remember: Deployment of Silverlight Applications With Wcf Ria Services Max Paulousky has a post up for reference on what to look into when you get "Load Operation Failed" in WCF RIA services. Launching a URL from an OOB Silverlight Application David Kelley has a quick post up on launching URLs from an OOB app. If you haven't tried it, you may be surprised as he was at first. Creating a Windows Phone 7 XNA Game in Landscape Orientation smartyP is looking at recreating a landscape WP7 game in XNA and is detailing some of the issues he's been dealing with, and is also sharing a project file. New Silverlight 4 Themes available–get the raw bits Tim Heuer provided 'raw' versions of 3 new themes. Read his post to see exactly what he means by 'raw' ... they're definitely good looking, and are going to get a lot of play. Handling WCF Service Paths in Silverlight 4 – Relative Path Support Dan Wahlin shares his technique for avoiding the pain involved with ServiceReferences.ClientConfig by using Silverlight 4 relative path support. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • What will be important in Training in 2011?

    - by anders.northeved
      Now that we have started a new year I would like to give you a list of topics I think we will be discussing in training and learning in 2011. Some of the areas we have discussed earlier will still be just as important in 2011:
    • Time-to-knowledge – still one of the most important issues for the training department.
    • Internal content production – related to time-to-knowledge. How do we convert internal knowledge to a format that can be used for teaching others?
    • LMS integration – how do we get our existing LMS fully integrated with our other ERP modules like HCM, Order Management, Finance, Payroll etc.?
    Some areas have been discussed before, but we’ll focus more on these in 2011:
    • Combining internal and external training – a majority of training departments use a combination of external and internal training. Having the right mix is vital for the quality and efficiency of most training organizations.
    • Certification – more rules and regulations mean managing all employee certifications is more important than ever.
    Evolving trends in 2011:
    • Social Learning – we have been talking about this for a long time, but 2011 will be the year where we will start using it for real (OK, I also said so last year – but this year I’m right…).
    • Real-life use of SCORM 2004 – again a topic we have talked about for a long time, but we are now actually starting to use it to give learners a better e-learning experience.
    • How do we engage and delight the learner? – e-learning makes economic sense, it can be easy to understand, and it is convenient – but how do we make it more engaging and delight our learners?
    • How to include more types of training in the LMS – one of the main focus areas of 2011 will be how to manage and measure mobile learning, on-the-job training and other forms of training in the LMS.
    • Mobile Learning – with the ever-growing use of smartphones, mobile learning will be THE hot topic of 2011 in the training world.
    New topics we will begin discussing in 2011:
    • What is beyond web 2.0 and social learning? – could it be content verification and personal accreditation?
    • Why gaming will not be the silver bullet for all types of e-learning – many people believe gaming can be used for any kind of training, but the creation is too expensive and time-consuming for most applications.
    Do you agree with these predictions? What are your own predictions? Let me see your comments! (photo: © Marti, photoxpress.com)

    Read the article

  • Development processes, the use of version control, and unit-testing

    - by ct01
    Preface: I've worked at quite a few "flat" organizations in my time. Most of the version control policy/process has been "only commit after it's been tested". We were constantly committing at each place to "trunk" (cvs/svn). The same was true with unit-testing - it's always been a "we need to do this" mentality but it never really materializes in a substantive form b/c there is no institutional knowledge base to do it - no mentorship.
    Version Control: The emphasis for version control management at one place was a very strict protocol for commit messages (format & content). The other places let employees just do "whatever". The branching, tagging, committing, rolling back, and merging aspects of things were always ill-defined and almost never used. This sort of seems to leave the version control system in the position of being a fancy file-storage mechanism with a meta-data component that never really gets accessed/utilized. (The same was true for unit testing and committing code to the source tree.)
    Unit tests: It seems there's a prevailing "we must/should do this" mentality in most places I've worked. As a policy or standard operating procedure it never gets implemented because there seems to be a very ill-defined understanding about what that means, what is going to be tested, and how to do it.
    Summary: It seems most places I've been to think version control and unit testing are "important" b/c the trendy trade journals say so, but if there's very little mentorship to use these tools or any real business policies, then the full power of version control/unit testing is never really expressed. So grunts, like myself, never really have a complete understanding of the point beyond that "it's a good thing" and "we should do it".
    Question: I was wondering if there are blogs, books, white-papers, or online journals about what one could call the business process or "standard operating procedures" or use cases for version control and unit testing? I want to know more than the trade journals tell me and get serious about doing these things.
    PS: @Henrik Hansen had a great comment about the lack of definition for the question. I'm not interested in a specific unit-testing/versioning product or methodology (like XP) - my interest is more about workflow at the individual team/developer level than evangelism. This is more-or-less a byproduct of the management situation I've operated under, more than a lack of reading software engineering books or magazines about development processes. A lot of what I've seen/read is more marketing-oriented material than any specifically enumerated description of "well, this is how our shop operates".

    Read the article

  • SQL SERVER – Introduction to PERCENTILE_DISC() – Analytic Functions Introduced in SQL Server 2012

    - by pinaldave
    SQL Server 2012 introduces the new analytic function PERCENTILE_DISC(). Books Online gives the following definition of this function: Computes a specific percentile for sorted values in an entire rowset or within distinct partitions of a rowset in Microsoft SQL Server 2012 Release Candidate 0 (RC 0). For a given percentile value P, PERCENTILE_DISC sorts the values of the expression in the ORDER BY clause and returns the value with the smallest CUME_DIST value (with respect to the same sort specification) that is greater than or equal to P. If you are clear on your understanding of the function – no need to read further. If you got lost, here is the same thing in simple words – find the value of the column whose CUME_DIST is equal to or greater than the given percentile. Before you continue reading this blog I strongly suggest you read about the CUME_DIST function over here: Introduction to CUME_DIST – Analytic Functions Introduced in SQL Server 2012. Now let’s have fun with the following query:

        USE AdventureWorks
        GO
        SELECT SalesOrderID, OrderQty, ProductID,
               CUME_DIST() OVER (PARTITION BY SalesOrderID ORDER BY ProductID) AS CDist,
               PERCENTILE_DISC(0.5) WITHIN GROUP (ORDER BY ProductID)
                   OVER (PARTITION BY SalesOrderID) AS PercentileDisc
        FROM Sales.SalesOrderDetail
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY SalesOrderID DESC
        GO

    The above query will give us the following result: You can see that I have used PERCENTILE_DISC(0.5) in the query, which is similar to finding the median, but not exactly the same. The PERCENTILE_DISC() function takes a percentile as its parameter. It returns as its answer the value whose CUME_DIST is equal to or greater than the percentile passed in. For example, in the query above we are passing 0.5 into the PERCENTILE_DISC() function. It goes through the resultset and identifies which rows have CUME_DIST values equal to or greater than 0.5. In the first window it finds two rows whose CUME_DIST equals 0.5, and the ProductID of that row is the answer of PERCENTILE_DISC(). In the third windowed resultset there is only a single row, with a CUME_DIST value of 1, which is certainly higher than 0.5, making it the answer. To make sure that we are clear on this, here is one more example where I am passing 0.6 as the percentile:

        USE AdventureWorks
        GO
        SELECT SalesOrderID, OrderQty, ProductID,
               CUME_DIST() OVER (PARTITION BY SalesOrderID ORDER BY ProductID) AS CDist,
               PERCENTILE_DISC(0.6) WITHIN GROUP (ORDER BY ProductID)
                   OVER (PARTITION BY SalesOrderID) AS PercentileDisc
        FROM Sales.SalesOrderDetail
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY SalesOrderID DESC
        GO

    The above query will give us the following result: The result of PERCENTILE_DISC(0.6) is the ProductID whose CUME_DIST is greater than or equal to 0.6. This means that for SalesOrderID 43670 the row with CUME_DIST 0.75 is the qualifying row, giving 773 as the ProductID answer. I hope this explanation makes it clearer. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
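
    If it helps to see the same rule outside of SQL, here is a small, hypothetical C# sketch (not from the original post, and the values are made up) that mimics what PERCENTILE_DISC does within a single partition: compute CUME_DIST for each value in sort order and return the first value whose CUME_DIST is greater than or equal to the requested percentile.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class PercentileDiscDemo
        {
            // Returns the smallest value (in sort order) whose cumulative distribution
            // is greater than or equal to the requested percentile p.
            static int PercentileDisc(IEnumerable<int> values, double p)
            {
                var sorted = values.OrderBy(v => v).ToList();
                int n = sorted.Count;
                foreach (int current in sorted)
                {
                    // CUME_DIST = rows with a value <= the current value, divided by total rows
                    double cumeDist = (double)sorted.Count(v => v <= current) / n;
                    if (cumeDist >= p)
                        return current;
                }
                return sorted[n - 1];
            }

            static void Main()
            {
                var productIds = new[] { 708, 711, 712, 715 };       // one partition's values
                Console.WriteLine(PercentileDisc(productIds, 0.5));  // 711 (CUME_DIST 0.50)
                Console.WriteLine(PercentileDisc(productIds, 0.6));  // 712 (CUME_DIST 0.75)
            }
        }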

    Read the article

  • 10.10 freezing when sftp downloading

    - by aGr
    My Ubuntu 10.10 is freezing while downloading from my other computer over sftp. I thought it might be a Nautilus issue, so I tried it via the command line and got the same thing - after a few minutes the whole computer freezes. Usually the Num Lock LED is blinking (I've heard somewhere that this means a kernel panic), but not in 100% of cases. I don't know if it helps, but here is a log from /var/log/message from the time this happened. At least I hope so - it wasn't that big when it happened before. But this looks quite "errorish", right? (It isn't complete - see the bottom.)
    Jan 5 17:57:49 tomas-ntb kernel: imklog 4.2.0, log source = /proc/kmsg started.
    Jan 5 17:57:49 tomas-ntb rsyslogd: [origin software="rsyslogd" swVersion="4.2.0" x-pid="953" x-info="http://www.rsyslog.com"] (re)start
    Jan 5 17:57:49 tomas-ntb rsyslogd: rsyslogd's groupid changed to 103
    Jan 5 17:57:49 tomas-ntb rsyslogd: rsyslogd's userid changed to 101
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] Initializing cgroup subsys cpuset
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] Initializing cgroup subsys cpu
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] Linux version 2.6.35-24-generic (buildd@vernadsky) (gcc version 4.4.5 (Ubuntu/Linaro 4.4.4-14ubuntu5) ) #42-Ubuntu SMP Thu Dec 2 01:41:57 UTC 2010 (Ubuntu 2.6.35-24.42-generic 2.6.35.8)
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-provided physical RAM map:
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 0000000000000000 - 000000000009e800 (usable)
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 000000000009e800 - 00000000000a0000 (reserved)
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 00000000000e0000 - 0000000000100000 (reserved)
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 0000000000100000 - 00000000dffa8000 (usable)
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 00000000dffa8000 - 00000000dffb0000 (ACPI NVS)
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 00000000dffb0000 - 00000000dffbff00 (ACPI data)
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 00000000dffbff00 - 00000000dfff0000 (ACPI NVS)
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 00000000dfff0000 - 00000000e0000000 (reserved)
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 00000000fec00000 - 00000000fec01000 (reserved)
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 00000000fee00000 - 00000000fee01000 (reserved)
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 00000000ff700000 - 0000000100000000 (reserved)
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 0000000100000000 - 0000000120000000 (usable)
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 0000000120000000 - 0000000140000000 (reserved)
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] Notice: NX (Execute Disable) protection cannot be enabled: non-PAE kernel!
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] DMI 2.5 present.
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] AMI BIOS detected: BIOS may corrupt low RAM, working around it.
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] last_pfn = 0xdffa8 max_arch_pfn = 0x100000
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] x86 PAT enabled: cpu 0, old 0x7040600070406, new 0x7010600070106
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] Scanning 0 areas for low memory corruption
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] modified physical RAM map:
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] modified: 0000000000000000 - 0000000000010000 (reserved)
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] modified: 0000000000010000 - 000000000009e800 (usable)
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] modified: 000000000009e800 - 00000000000a0000 (reserved)
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] modified: 00000000000e0000 - 0000000000100000 (reserved)
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] modified: 0000000000100000 - 00000000dffa8000 (usable)
    ... too long - for the whole log and a better view, click here

    Read the article

  • Windows Azure Use Case: High-Performance Computing (HPC)

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx
    Description: High-Performance Computing (also called Technical Computing) at its most simplistic is a layout of computer workloads where a “head node” accepts work requests and parses them out to “worker nodes”. This is useful in cases such as scientific simulations, drug research, MATLAB work, and other situations where large compute loads are required. It’s not the immediate-result type of computing many are used to; instead, a “job” or group of work requests is sent to a cluster of computers, and the worker nodes work on individual parts of the calculations and return the work to the scheduler or head node for the requestor in a batch-request fashion. This is typical of the way many mainframe computing use-cases work. You can use commodity-based computers to create an HPC cluster, for example with the Linux software called Beowulf, and Microsoft has a server product for HPC using standard computers, called the Windows Compute Cluster, that you can read more about here. The issue with HPC (from any vendor) that some organizations have is the number of compute nodes they need. Having too many results in excess infrastructure, including computers, buildings, storage, heat and so on. Having too few means that the work is slower and takes longer to return a result to the calling application. Unless there is a consistent level of work requested, predicting the number of nodes is problematic.
    Implementation: Recently, Microsoft announced an internal partnership between the HPC group (now called the Technical Computing Group) and Windows Azure. You now have two options for implementing an HPC environment using Windows. You can extend the current infrastructure you have for HPC by adding Compute Nodes in Windows Azure, using a “Broker Node”. You can then purchase time for the added machines and stop paying for them when the work is completed. This is a common pattern in groups that have a constant need for HPC but need to “burst” that load under certain conditions. The second option is to install only a Head Node and a Broker Node on-site and host all Compute Nodes in Windows Azure. This is often the pattern for organizations that need HPC on a scheduled and periodic basis, such as financial analysis or actuarial table calculations.
    References: Blog entry on Hybrid HPC with Windows Azure: http://blogs.msdn.com/b/ignitionshowcase/archive/2010/12/13/high-performance-computing-on-premise-and-in-the-windows-azure-cloud.aspx
    Links for further research on HPC, including Windows Azure information: http://blogs.msdn.com/b/ncdevguy/archive/2011/02/16/handy-links-for-hpc-and-azure.aspx

    Read the article

  • TSAM 11gR1

    - by todd.little
    The Tuxedo System and Application Monitor (TSAM) 11gR1 release provides powerful new application monitoring capabilities, as well as significant improvements in ease of use. The first thing users will notice is the completely redesigned user interface of the TSAM console. Based on Oracle ADF, the console is much easier to navigate, provides a Web 2.0 style interface with dynamically updating panels, and has a look and feel familiar to those who have used Oracle Enterprise Manager. Monitoring data can be viewed in both tabular and graphical form and exported to Excel for further analysis.
    A number of new metrics are collected and displayed in this release. Call path monitoring now displays CPU time, message size, total transport time, and client address, giving even more end-to-end information about a specific Tuxedo request. In addition, the call path display has been completely revamped to make it much easier to see the branches of the call path. The call pattern display now provides statistics on successful vs. failed calls, system and application failures, and end-to-end average elapsed time. Service monitoring now displays minimum and maximum message size, CPU usage, and client address. System server monitoring now includes the SALT gateway servers, providing detailed performance metrics about those servers.
    Perhaps the most significant new feature is the consolidation of alert definitions and policy management. In previous versions of TSAM, some alerts were defined and checked on the monitored systems while others were defined and checked in the console. Policy management could be performed both on the monitored node, via environment variables or commands, and from the console. Now all alert and policy definitions are made only in the console. For alerts this means that, regardless of where an alert is evaluated, it is defined in one and only one place. The plug-in alert mechanism of previous releases can thus be managed from the TSAM console as well, making SLA alert definition much easier and cleaner.
    Finally, there is support in TSAM for monitoring rehosted mainframe applications. The newly announced Oracle Tuxedo Application Runtime for CICS and Batch can be monitored in the TSAM console using traditional mainframe views of the application, such as regions. Look for a future blog entry with more details on this, as well as some entries providing a glimpse of the console. TSAM gives users a single point for monitoring the performance of all of their Tuxedo applications.

    Read the article

  • Live Webcast for Skire Customers - 8 November

    - by Melissa Centurio Lopes
    Join our Important Customer Briefing live webcast with Oracle Executive Mike Sicilia to learn more about the product strategy for the combined Oracle Primavera and Skire offering. Mike Sicilia, Senior Vice President and General Manager, Oracle Primavera Global Business Unit, invites you to join him for an exclusive update on Oracle’s acquisition of substantially all of Skire’s assets. Don’t miss this special live webcast on November 8th. Attend this online event and listen to Mike Sicilia share with you:
    - The strategic reasons behind Oracle’s acquisition of substantially all of Skire’s assets and what it means to you and your organization
    - Oracle’s vision to deliver the most comprehensive Enterprise Project Portfolio Management (EPPM) offering to manage the complete project lifecycle, from capital planning and construction to operations and maintenance
    - Exciting new product releases to help organizations manage their projects and facilities with more predictability and financial control, improving profitability and operational efficiency
    - Oracle’s consistent commitment to customer success and product support
    Save your seat: register now to attend this exclusive online event and learn how the combination of Oracle Primavera and Skire can help your organization succeed. For more information about the combination of Oracle and Skire, please visit oracle.com/skire

    Read the article
