Search Results

Search found 9744 results on 390 pages for 'k means'.


  • Silverlight Cream for May 05, 2010 -- #856

    - by Dave Campbell
    In this Issue: Jeremy Alles(-2-), Kunal Chowdhury, anand iyer, Yochay Kiriaty(-2-, -3-), Max Paulousky, David Kelley, smartyP, Tim Heuer, and Dan Wahlin.

    Shoutout: Tim Heuer provides links for all the Ways to give feedback on Silverlight.

    From SilverlightCream.com:

    - [WP7] Bug when using NavigationService in Windows Phone 7: Jeremy Alles has blogged about a bug he found using the Navigation service in WP7. He gives the steps to reproduce and a couple of possible workarounds.
    - [WP7] Using the camera in the emulator: Jeremy Alles is also digging into the camera functionality in the emulator. He has code demonstrating launching a camera task, and a list of other tasks available.
    - Silverlight Tutorials Chapter 3: Introduction to Panels: Kunal Chowdhury has Chapter 3 of his Silverlight 4 Tutorial series up and he's talking about Panels this time out.
    - Push Notifications in Windows Phone 7 developer tools CTP April Refresh: anand iyer is discussing the Push Notifications, only from a code perspective. Good information and good additional links to follow.
    - Windows Phone Application Life Cycle: Yochay Kiriaty talks with Tudor Toma and Jaime Rodriguez about the WP7 application lifecycle on Channel 9.
    - Understanding Microsoft Push Notifications for Windows Phones: Yochay Kiriaty has a 2-part post up on WP7 Push Notifications. The first part explains what Push Notifications are and why we need them... as a developer and as an end user viewing Toast or Tile notifications.
    - Understanding How Microsoft Push Notification Works – Part 2: In the 2nd part of his Push Notification series, Yochay Kiriaty discusses how Push Notification works under the covers.
    - To Remember: Deployment of Silverlight Applications With Wcf Ria Services: Max Paulousky has a post up for reference on what to look into when you get "Load Operation Failed" in WCF RIA services.
    - Launching a URL from an OOB Silverlight Application: David Kelley has a quick post up on launching URLs from an OOB app. If you haven't tried it, you may be surprised as he was at first.
    - Creating a Windows Phone 7 XNA Game in Landscape Orientation: smartyP is looking at recreating a landscape WP7 game in XNA, detailing some of the issues he's been dealing with, and sharing a project file.
    - New Silverlight 4 Themes available – get the raw bits: Tim Heuer provided 'raw' versions of 3 new themes. Read his post to see exactly what he means by 'raw'... they're definitely good looking, and are going to get a lot of play.
    - Handling WCF Service Paths in Silverlight 4 – Relative Path Support: Dan Wahlin shares his technique for avoiding the pain involved with ServiceReferences.ClientConfig by using Silverlight 4 relative path support.

    Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group

    Technorati Tags: Silverlight, Silverlight 3, Silverlight 4, Windows Phone, MIX10

    Read the article

  • How to Write an E-Book

    A few days ago my attention was drawn to a tweet spat between Karl Seguin and Scott Hanselman around the relaunch of ASP.NET and the title element in HTML. Tempest in a teapot of course, but worthwhile as I did some googling on Karl and found his blog at codebetter.com. From there it was a short jump to his free e-book, The Foundations of Programming. This short book is distinguished by its orientation (opinionated), its tone (mentoring) and its honesty, which is refreshing. In Foundations, Karl covers what he considers the basics of programming and good design, including test driven development, dependency injection and domain driven design. Karl is opinionated, as the topics suggest, and doesn't bother to pretend that he doesn't think what he's suggesting is the better way, not just another way. He is aligned with ALT.NET, and gives an excellent overview of what that means; an overview more enlightening than the ALT.NET site. ALT.NET has its critics, but presenting a strong opinion grabbed my attention as a reader. It is a short walk from opinionated to hectoring, but Karl held my attention without insulting me. He takes the time to explain, with examples, from the ground up, the problems that test driven development and dependency injection solve. So for dependency injection he builds it up from no DI, to a hand-crafted approach, to a full-fledged DI framework. This approach is more persuasive than merely prescriptive and engaged me as the reader to follow along with his train of thought. Foundations is not as pedantic as I am making it sound. The final ingredient in Karl's mix is honesty. He acknowledges that sometimes unit testing does cost more up front and take more time. He admits that sometimes he designs something a certain way just to be testable. He also warns that focusing too much on DI and loose coupling can lead to the poor design you are trying to avoid. These points add depth to his argument, as I could tell he's speaking from experience, with some hard-won lessons. I enjoyed The Foundations of Programming. When I was done with it, I was amazed how much I got out of its 80-some pages. It is a rarity these days to come across something worthwhile that is longer than a tweet but shorter than a tome. Well done Karl.

    -- Relevant Links --
    The now titled and newly relaunched page in question: http://www.asp.net/
    The pleasantly confusing ALT.NET homepage: http://altdotnet.org/
    A longer review, with details, chapter listings and all that important stuff: http://accidentaltechnologist.com/book-reviews/book-review-foundations-of-programming-by-karl-seguin/
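    The "no DI, then hand-crafted DI" progression the review describes is easy to picture in code. Here is a minimal sketch of that first step; the class and interface names are made up for illustration and are not taken from the book:

```csharp
// Hypothetical types for illustration only; none of these names come from the book.
public class Order { public decimal Total { get; set; } }

public interface IOrderRepository { Order Load(int orderId); }

// A concrete repository that would normally talk to a database.
public class SqlOrderRepository : IOrderRepository
{
    public Order Load(int orderId)
    {
        // Real data access would go here.
        return new Order { Total = 0m };
    }
}

// "No DI": the service news up its own repository, so a test cannot swap it out.
public class OrderService
{
    private readonly SqlOrderRepository _repository = new SqlOrderRepository();

    public decimal GetOrderTotal(int orderId)
    {
        return _repository.Load(orderId).Total;
    }
}

// Hand-crafted DI: the repository is handed in through the constructor,
// so a test can pass a fake IOrderRepository. A DI framework simply
// automates this wiring at application start-up.
public class InjectedOrderService
{
    private readonly IOrderRepository _repository;

    public InjectedOrderService(IOrderRepository repository)
    {
        _repository = repository;
    }

    public decimal GetOrderTotal(int orderId)
    {
        return _repository.Load(orderId).Total;
    }
}
```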

    Read the article

  • Some of my favourite Visual Studio 2012 things&ndash;Teams

    - by Aaron Kowall
    Getting when and how many team projects to create right has always been a bit of a balancing act. On large initiatives, there are often teams who work toward a common system. These teams often have quite a bit of autonomy, but need to roll up to some higher level initiative. In TFS 2010, people were often tempted to create separate Team Projects for each of the sub-teams and then do some magic with reporting and cross-team queries to get the consolidated view. My recommendation was always to use Areas as a means of separating work across the team, but that always resulted in a large number of queries that needed to be maintained and just seemed confusing. When doing anything you had to remember to filter the query or view by Area in order to get correct results.

    Along with the awesome web access portal that comes in TFS 2012 (which I will cover details of in another post), the product group has introduced the concept of Teams. A team is a sub-group within a TFS 2012 Team Project which allows us to more easily divide work along team boundaries. Technically, a Team is defined by an Area Path and a TFS Group, both of which could be created manually in TFS 2012. However, by allowing for creation of a ‘Team’ in TFS 2012, the web portal is able to do a bunch of ‘magic’ for us. We can view the project site (backlog, taskboard, etc.) for the team, we can assign items to the team and we can view the burndown for the team. Basically, all the stuff that we had to prepare manually we now get created and managed for us with a nice UI.

    When you create a Team Project in TFS 2012, a ‘Default’ team is created with the same name as the Team Project. So, if you only have 1 team working on the project, you are set. If you want to divide the work into additional teams, you can create teams by using the Team Web Client. Teams are created using the ‘Administer Server’ icon in the top right of the web site. You can select the team site by using the team chooser. Once you have selected a team, the Product Backlog, TaskBoard, Burndown Charts, etc. are all filtered to that team.

    NOTE: You always have the ability to choose the ‘Default’ team to see items for the entire project.

    PS: It’s been a long while since I shared on this blog. To help with that I’m in a blogging challenge with some other developer and agilist friends. Please check out their blogs as well: Steve Rogalsky: http://winnipegagilist.blogspot.ca Dylan Smith: http://www.geekswithblogs.net/optikal Tyler Doerkson: http://blog.tylerdoerksen.com David Alpert: http://www.spinthemoose.com Dave White: http://www.agileramblings.com

    Technorati Tags: TFS 2012, Agile, Team
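    To make the "Areas plus queries" approach from TFS 2010 concrete, here is a rough sketch of the kind of per-team work item query you ended up writing and maintaining for each sub-team, using the TFS client object model. The collection URL, project and area names are placeholders, and the exact WIQL text is only an assumption about how such a filter might look, not code from the post:

```csharp
using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

class AreaFilteredBacklog
{
    static void Main()
    {
        // Placeholder collection URL and names - adjust for a real server.
        var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
            new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
        var store = collection.GetService<WorkItemStore>();

        // One query like this had to be written (and kept up to date) per sub-team.
        const string wiql =
            @"SELECT [System.Id], [System.Title], [System.State]
              FROM WorkItems
              WHERE [System.TeamProject] = 'BigInitiative'
                AND [System.AreaPath] UNDER 'BigInitiative\TeamA'
                AND [System.WorkItemType] = 'User Story'";

        foreach (WorkItem item in store.Query(wiql))
            Console.WriteLine("{0}: {1} ({2})", item.Id, item.Title, item.State);
    }
}
```

    With TFS 2012 Teams, that per-team filtering is handled for you by the portal once the team's default area path is set.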

    Read the article

  • Miracle Growth Of Organs From Our Own Cells

    - by Rekha
    There is currently a shortage of healthy organs. The donor and patient also have to be closely matched, and there is a chance that the patient's immune system may reject the transplant. Right now, researchers are seriously involved in a new kind of solution: "bioartificial" organs grown from the patient's own cells. A few people have already received lab-grown bladders. The bladder technique was developed by Anthony Atala of the Wake Forest Institute for Regenerative Medicine in Winston-Salem, North Carolina. Healthy cells are taken from the patient's diseased bladder and made to multiply profusely in petri dishes. The cells are layered one layer at a time - muscle cells on the outside, urothelial cells on the inside. The bladder-to-be is then incubated at body temperature until the cells form functioning tissue. This process could take six to eight months. Organs with lots of blood vessels, such as kidneys or livers, are harder to grow than hollow ones like bladders. Atala's group works on 22 organs and tissues including ears, and recently made a functioning piece of human liver. Others in the list include: Columbia University – jawbone, Yale University – lung, University of Minnesota – rat heart, University of Michigan – artificial kidney. Growing a copy of the patient's own organ is not always possible – for instance, when the original is completely damaged by cancer. One alternative is a bank of stem cells collected, without harming human embryos, from amniotic fluid in the womb; those cells are coaxed into becoming heart, liver and other organ cells. A bank of 100,000 stem cell samples would have enough genetic variety to match nearly any patient. Surgeons could then order organs grown as needed instead of waiting for the perfect donor. "There are few things as devastating for a surgeon as knowing you have to replace the tissue and you're doing something that's not ideal," says Atala, a urologic surgeon himself. "Wouldn't it be great if they had their own organ?" Great for the patient especially, he means. Via National Geographic.

    Read the article

  • How Microsoft listens

    - by Stacy Vicknair
    This being my freshman year as an MVP, I had a realization that perhaps I should be embarrassed hasn't happened sooner. The realization comes much like the iconic M&Ms commercial where the M&Ms run into Santa and exclaim, "He does exist!" My personal realization arguably has a greater implication: Microsoft does listen. This is the most important lesson that I received this year attending the MVP Summit. My hope is that I can convince you that we are empowered to make a difference. Instead of using "Man I hate how this works / doesn't work!" as water-cooler conversation, we can use it as true interaction with Microsoft. We as customers of Microsoft need to stop asking the question "Will this work for me?" and instead ask "How can this work for me?" There are three quick resources that the average developer has access to today that they can use to be heard by the product teams, and by no means should you think twice if you have a concern that you'd like a real response on.

    MVPs
    MVPs are members of your community who have a deep relationship with Microsoft and will have connections to their associated product group. Don't think of them as just a resource for answers, but also as your ambassador for getting your experiences heard. You can find your local MVPs by browsing the directory at: https://mvp.support.microsoft.com/communities/mvp.aspx

    Evangelists
    Evangelists are employees of Microsoft who work to foster and grow communities in their assigned region. They are first-class citizens of Microsoft and are often deeply involved with the product groups. As a result, they will be more than glad to direct your questions or concerns to those who can answer them most expertly. With that said, evangelists are also very busy people (who do amazing things for the community) and might not be able to get you that conversation as quickly as a local MVP. You can find your local evangelist at the following website: http://msdn.microsoft.com/en-us/bb905078.aspx

    Microsoft Connect
    This is one of the resources that I haven't used enough, but its value cannot be overstated. Connect is the starting point of the social conversation that happens between Microsoft and the community daily. Connect acts as a portal where you can provide new feedback as well as comment on and rate the feedback provided by others. Power is in numbers when it comes to Connect, so the exposure that your feedback can get not only lets you know that you aren't the only one who wants change, but also lets Microsoft know the same. https://connect.microsoft.com

    Technorati Tags: Microsoft, MVP, Feedback, Connect

    Read the article

  • What will be important in Training in 2011?

    - by anders.northeved
    Now that we have started a new year I would like to give you a list of topics I think we will be discussing in training and learning in 2011.

    Some of the areas we have discussed earlier will still be just as important in 2011:

    - Time-to-knowledge: Still one of the most important issues for the training department.
    - Internal content production: Related to time-to-knowledge. How do we convert internal knowledge to a format that can be used for teaching others?
    - LMS integration: How do we get our existing LMS fully integrated with our other ERP modules like HCM, Order Management, Finance, Payroll etc.?

    Some areas have been discussed before, but we'll focus more on these in 2011:

    - Combining internal and external training: A majority of training departments use a combination of external and internal training. Having the right mix is vital for the quality and efficiency of most training organizations.
    - Certification: More rules and regulations mean that managing all employee certifications is more important than ever.

    Evolving trends in 2011:

    - Social Learning: We have been talking about this for a long time, but 2011 will be the year where we will start using it for real (OK, I also said so last year – but this year I'm right…).
    - Real-life use of SCORM 2004: Again a topic we have talked about for a long time, but we are now actually starting to use it to give learners a better e-learning experience.
    - How do we engage and delight the learner? e-learning makes economic sense, it can be easy to understand, it is convenient – but how do we make it more engaging and delight our learners?
    - How to include more types of training in the LMS: One of the main focus areas of 2011 will be how to manage and measure mobile learning, on-the-job training and other forms of training in the LMS.
    - Mobile Learning: With the ever-growing use of smart phones, mobile learning will be THE hot topic of 2011 in the training world.

    New topics we will begin discussing in 2011:

    - What is beyond web 2.0 and social learning? Could it be content verification and personal accreditation?
    - Why gaming will not be the silver bullet for all types of e-learning: Many people believe gaming can be used for any kind of training, but the creation is too expensive and time consuming for most applications.

    Do you agree with these predictions? What are your own predictions? Let me see your comments!

    (photo: © Marti, photoxpress.com)

    Read the article

  • SQL SERVER – Introduction to PERCENTILE_DISC() – Analytic Functions Introduced in SQL Server 2012

    - by pinaldave
    SQL Server 2012 introduces the new analytic function PERCENTILE_DISC(). Books Online gives the following definition of this function: Computes a specific percentile for sorted values in an entire rowset or within distinct partitions of a rowset in Microsoft SQL Server 2012 Release Candidate 0 (RC 0). For a given percentile value P, PERCENTILE_DISC sorts the values of the expression in the ORDER BY clause and returns the value with the smallest CUME_DIST value (with respect to the same sort specification) that is greater than or equal to P.

    If you are clear with the understanding of the function – no need to read further. If you got lost, here is the same in simple words – find the value of the column whose CUME_DIST is equal to or greater than the given percentile. Before you continue reading this blog I strongly suggest you read about the CUME_DIST function over here: Introduction to CUME_DIST – Analytic Functions Introduced in SQL Server 2012.

    Now let's run the following query:

```sql
USE AdventureWorks
GO
SELECT SalesOrderID, OrderQty, ProductID,
       CUME_DIST() OVER (PARTITION BY SalesOrderID ORDER BY ProductID) AS CDist,
       PERCENTILE_DISC(0.5) WITHIN GROUP (ORDER BY ProductID)
           OVER (PARTITION BY SalesOrderID) AS PercentileDisc
FROM Sales.SalesOrderDetail
WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
ORDER BY SalesOrderID DESC
GO
```

    The above query will give us the following result:

    You can see that I have used PERCENTILE_DISC(0.5) in the query, which is similar to finding the median but not exactly the same. The PERCENTILE_DISC() function takes a percentile as a parameter. It returns the value whose CUME_DIST is equal to or greater than the percentile value passed in. For example, in the query above we are passing 0.5 into the PERCENTILE_DISC() function. It goes through the resultset and identifies which rows have CUME_DIST values equal to or greater than 0.5. In the first example it found two rows whose CUME_DIST is equal to 0.5, and the ProductID of that row is the answer of PERCENTILE_DISC(). In the third windowed resultset there is only a single row, with a CUME_DIST() value of 1, which is certainly higher than 0.5, making it the answer.

    To make sure that we are clear with this example, here is one more example where I am passing 0.6 as the percentile:

```sql
USE AdventureWorks
GO
SELECT SalesOrderID, OrderQty, ProductID,
       CUME_DIST() OVER (PARTITION BY SalesOrderID ORDER BY ProductID) AS CDist,
       PERCENTILE_DISC(0.6) WITHIN GROUP (ORDER BY ProductID)
           OVER (PARTITION BY SalesOrderID) AS PercentileDisc
FROM Sales.SalesOrderDetail
WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
ORDER BY SalesOrderID DESC
GO
```

    The above query will give us the following result:

    The result of PERCENTILE_DISC(0.6) is the ProductID whose CUME_DIST() is greater than or equal to 0.6. This means that for SalesOrderID 43670, the row with CUME_DIST() 0.75 is the qualifying row, resulting in the answer 773 for ProductID. I hope this explanation makes it clearer.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
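    If it helps to see the rule outside of SQL, here is a tiny C# sketch of the same idea, written for this summary rather than taken from the original post: compute CUME_DIST for each distinct value in a partition, then return the smallest value whose CUME_DIST is greater than or equal to P. The ProductID values are hypothetical.

```csharp
using System;
using System.Linq;

class PercentileDiscSketch
{
    // Returns the smallest value whose cumulative distribution >= p,
    // mirroring how PERCENTILE_DISC picks a row within a partition.
    static int PercentileDisc(int[] partition, double p)
    {
        var ordered = partition.OrderBy(v => v).ToArray();
        int n = ordered.Length;
        foreach (var value in ordered.Distinct())
        {
            // CUME_DIST = (rows with a value <= this one) / (total rows)
            double cumeDist = ordered.Count(v => v <= value) / (double)n;
            if (cumeDist >= p)
                return value;
        }
        return ordered[n - 1];
    }

    static void Main()
    {
        // Hypothetical ProductID values for one SalesOrderID partition.
        int[] productIds = { 707, 711, 773, 778 };
        Console.WriteLine(PercentileDisc(productIds, 0.5)); // 711 (CUME_DIST 0.50)
        Console.WriteLine(PercentileDisc(productIds, 0.6)); // 773 (CUME_DIST 0.75)
    }
}
```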

    Read the article

  • TSAM 11gR1

    - by todd.little
    The Tuxedo System and Application Monitor (TSAM) 11gR1 release provides powerful new application monitoring capabilities, as well as significant improvements in ease of use. The first thing users will notice is the completely redesigned user interface in the TSAM console. Based on Oracle ADF, the console is much easier to navigate, provides a Web 2.0 style interface with dynamically updating panels, and a look and feel familiar to those that have used Oracle Enterprise Manager. Monitoring data can be viewed in both tabular and graphical form and exported to Excel for further analysis. A number of new metrics are collected and displayed in this release. Call path monitoring now displays CPU time, message size, total transport time, and client address giving even more end-to-end information about a specific Tuxedo request. As well the call path display has been completely revamped to make it much easier to see the branches of the call path. The call pattern display now provides statistics on successful vs failed calls, system and application failures, and end-to-end average elapsed time. Service monitoring now displays minimum and maximum message size, CPU usage, and client address. System server monitoring now includes monitoring the SALT gateway servers to provide detailed performance metrics about those servers. Perhaps the most significant new feature is the consolidation of alert definitions and policy management. In previous versions of TSAM, some alerts were defined and checked on the monitored systems while others were defined and checked in the console. Policy management could be performed on both the monitored node via environment variable or command, as well as from the console. Now all alert definitions and policy definitions are only made using the console. For alerts this means that regardless of where the alert is evaluated it is defined in one and only one place. Thus the plug-in alert mechanism of previous releases can now be managed using the TSAM console, making SLA alert definition much easier and cleaner. Finally there is support in TSAM for monitoring rehosted mainframe applications. The newly announced Oracle Tuxedo Application Runtime for CICS and Batch can be monitored in the TSAM console using traditional mainframe views of the application such as regions. Look for a future blog entry with more details on this as well as some entries providing a glimpse of the console. TSAM gives users a single point for monitoring the performance of all of their Tuxedo applications.

    Read the article

  • ASP.NET MVC 4: Short syntax for script and style bundling

    - by DigiMortal
    ASP.NET MVC 4 introduces new methods for style and script bundling. I found something brilliant there that I want to show you. In this posting I will show you how easy it is to include a whole folder of stylesheets or JavaScript files in your page. I'm using the ASP.NET MVC 4 Internet Site template for this example. When we open the layout page located in the shared views folder, we can see something like this in the layout file header:

```html
<link href="@System.Web.Optimization.BundleTable.Bundles.ResolveBundleUrl("~/Content/css")" rel="stylesheet" type="text/css" />
<link href="@System.Web.Optimization.BundleTable.Bundles.ResolveBundleUrl("~/Content/themes/base/css")" rel="stylesheet" type="text/css" />
<script src="@System.Web.Optimization.BundleTable.Bundles.ResolveBundleUrl("~/Scripts/js")"></script>
```

    Let's take the last line and modify it so it looks like this:

```html
<script src="/Scripts/js"></script>
```

    After saving the layout page, let's run the browser and see what is coming in over the network. As you can see, the request to the folder ended up with result code 200, which means the request was successful. 327.2 KB was received, and that is not the markup size of an error page or a directory index. Here is the body of the response:

    I scrolled down to the point where one script ends and another one starts when I made the screenshot above. All scripts delivered with ASP.NET MVC project templates start with this green note. So now we can be sure that the request to the Scripts folder ended up with the bundled script and not with something else.

    Conclusion: Script and style bundling currently uses the long syntax by default, where bundling is done through the bundling classes. We can still avoid those long lines and use an extremely short syntax for script and style bundling – we just write the usual script or link tag and give the folder URL as the source. ASP.NET MVC 4 is smart enough to combine the styles or scripts when a request like this comes in.
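    For comparison, here is roughly what the explicit ("long syntax") registration looks like with the System.Web.Optimization API in the form it later shipped; the bundle virtual paths and file names below are placeholder assumptions, not part of the original post:

```csharp
using System.Web.Optimization;

public class BundleConfig
{
    // Explicit bundle registration - the "long syntax" alternative to simply
    // pointing a script tag at a folder. Bundle names here are examples only.
    public static void RegisterBundles(BundleCollection bundles)
    {
        bundles.Add(new ScriptBundle("~/bundles/site-js")
            .IncludeDirectory("~/Scripts", "*.js"));

        bundles.Add(new StyleBundle("~/bundles/site-css")
            .Include("~/Content/site.css"));
    }
}

// In Global.asax Application_Start:
//   BundleConfig.RegisterBundles(BundleTable.Bundles);
// In the layout page:
//   @Scripts.Render("~/bundles/site-js")
//   @Styles.Render("~/bundles/site-css")
```

    The folder-URL shortcut in the post trades this explicit control (ordering, naming, cache busting per bundle) for brevity.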

    Read the article

  • How to migrate ASP.NET MVC 3 , MVC4 project to ASP.NET MVC5 ?

    - by Anirudha
    Originally posted on: http://geekswithblogs.net/anirugu/archive/2013/10/16/how-to-migrate-asp.net-mvc-3--mvc4-project-to.aspx

    Soon you will see a new version of MVC in VS2013: MVC5 will be included in VS2013, and MVC3 will not be supported in it (I confirmed this on Channel 9 a while back). So people who have installed only VS2013, or who don't have the older version, will run into trouble with a project that is still on MVC3. The error happens because the MVC4 and MVC5 installations don't contain the DLLs that are used by version 3 of ASP.NET MVC.

    Don't panic. You want to upgrade your project, and here is a trick to solve the issue.

    When you open the project you will see that under References there are some DLLs with a yellow icon. This means those DLLs are missing or not found in your configuration or system.

    Note those DLL names. Remove them from References and add them back via Add Reference. I tell you to remove them first so VS will not prevent you from adding the new version of the same assembly. Add all of those assemblies; they will be the following: System.Web.Mvc and the Razor and WebPages DLLs.

    Remember that in MVC3 we used the old versions of these assemblies. When you are done adding all the assemblies, open web.config.

    There are two web.config files in our MVC project: one in the root folder and a second in the Views folder. You need to update all the assembly version numbers in both. This is not a big deal if you know the names of the assemblies. If your web.config shows an assembly version starting with 3, that 3 should be replaced with 4 or 5 according to the version you installed. The same change needs to be applied for all the DLLs in both web.config files.

    Note: In the VS template, views go in the ~/Views folder, but if someone uses any other folder for views and that folder also has a web.config, remember to update it as well. Your project will compile with no warnings or errors, but it certainly will not work if you miss one. For example, areas/views and themes/views folders that contain a web.config also need to be updated with the newer assembly version numbers.

    After doing these things you can compile your project and it will work as it should. Thanks for reading my post. Follow me on FB and Twitter to stay updated.

    Read the article

  • Open World Day 1 Continued

    - by Antony Reynolds
    A Day in the Life of an Oracle OpenWorld Attendee Part II. A couple of things I forgot to mention about yesterday's OpenWorld. First I attended a presentation on SOA Suite and Virtualization which explained how Oracle Virtual Assembly Builder (OVAB) can be used to accelerate the deployment of an Enterprise Deployment Guide (EDG) compliant SOA Suite infrastructure. OVAB provides the ability to introspect a deployed software component such as WebLogic Server, SOA Suite or other components and extract the configuration and package it up for rapid deployment into an Oracle Virtual Machine. OVAB allows multiple machines to be configured and connections made between the machines and outside resources such as databases. That by itself is pretty cool and has been available for a while in OVAB. What is new is that Oracle has done this for EDG compliant installations and made it available as an OVAB assembly for customers to use, significantly accelerating an EDG deployment. A real help for customers standing up EDG environments, particularly in test, dev and QA environments.

    The other thing I forgot to mention was the most memorable demo I saw at OpenWorld. This was done by my co-author Matt Wright, who was showcasing the products of his company Rubicon Red. They showed a really cool application called OneSpot which puts all the information about a single user's business processes in one spot! Apparently a customer suggested the name. It allows business flows to be defined that map onto events. As events occur the status of the business flow is updated to reflect the change. The interface is strongly reminiscent of social media sites and provides a graphical view of business flows. So how does this differ from BPEL and BPM process flows? The OneSpot process flow is more like a BAM process flow: it is based on events arriving from multiple sources, and is focused on the client's view of the process, not the actual business process. This is important because it allows an end user to get a view of where his current business flow is and what actions, if any, are required of him. This by itself is great, but better still is that OneSpot has a real-time updating view of events that have occurred (BAM style, no need to refresh the browser). This means that as new events occur the end user can see them and jump to the business flow or take other appropriate actions. Under the covers OneSpot makes use of Oracle Human Workflow to provide a forms interface, but this is not the HWF GUI you know! The HWF GUI screens are much prettier and have more of a social media feel about them due to their use of images and pulling in relevant related information. If you are at OOW I strongly recommend you visit Matt or John at the Rubicon Red stand and ask, no, demand a demo of OneSpot!

    Read the article

  • 10.10 freezing when sftp downloading

    - by aGr
    My Ubuntu 10.10 is freezing while downloading from my other computer over sftp. I thought it might be some Nautilus issue, so I tried it via the command line and got the same thing - after a few minutes the whole computer freezes. Mostly the numlock LED is blinking (I've heard somewhere that this means a kernel panic), but not in 100% of cases. I don't know if it helps, but here is a log from /var/log/message from the time this happened. At least I hope so - it wasn't that big when it happened before. But this looks quite "errorish", right? (isn't complete - see bottom)

    Jan 5 17:57:49 tomas-ntb kernel: imklog 4.2.0, log source = /proc/kmsg started.
    Jan 5 17:57:49 tomas-ntb rsyslogd: [origin software="rsyslogd" swVersion="4.2.0" x-pid="953" x-info="http://www.rsyslog.com"] (re)start
    Jan 5 17:57:49 tomas-ntb rsyslogd: rsyslogd's groupid changed to 103
    Jan 5 17:57:49 tomas-ntb rsyslogd: rsyslogd's userid changed to 101
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] Initializing cgroup subsys cpuset
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] Initializing cgroup subsys cpu
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] Linux version 2.6.35-24-generic (buildd@vernadsky) (gcc version 4.4.5 (Ubuntu/Linaro 4.4.4-14ubuntu5) ) #42-Ubuntu SMP Thu Dec 2 01:41:57 UTC 2010 (Ubuntu 2.6.35-24.42-generic 2.6.35.8)
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-provided physical RAM map:
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 0000000000000000 - 000000000009e800 (usable)
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 000000000009e800 - 00000000000a0000 (reserved)
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 00000000000e0000 - 0000000000100000 (reserved)
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 0000000000100000 - 00000000dffa8000 (usable)
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 00000000dffa8000 - 00000000dffb0000 (ACPI NVS)
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 00000000dffb0000 - 00000000dffbff00 (ACPI data)
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 00000000dffbff00 - 00000000dfff0000 (ACPI NVS)
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 00000000dfff0000 - 00000000e0000000 (reserved)
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 00000000fec00000 - 00000000fec01000 (reserved)
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 00000000fee00000 - 00000000fee01000 (reserved)
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 00000000ff700000 - 0000000100000000 (reserved)
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 0000000100000000 - 0000000120000000 (usable)
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 0000000120000000 - 0000000140000000 (reserved)
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] Notice: NX (Execute Disable) protection cannot be enabled: non-PAE kernel!
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] DMI 2.5 present.
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] AMI BIOS detected: BIOS may corrupt low RAM, working around it.
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] last_pfn = 0xdffa8 max_arch_pfn = 0x100000
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] x86 PAT enabled: cpu 0, old 0x7040600070406, new 0x7010600070106
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] Scanning 0 areas for low memory corruption
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] modified physical RAM map:
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] modified: 0000000000000000 - 0000000000010000 (reserved)
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] modified: 0000000000010000 - 000000000009e800 (usable)
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] modified: 000000000009e800 - 00000000000a0000 (reserved)
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] modified: 00000000000e0000 - 0000000000100000 (reserved)
    Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] modified: 0000000000100000 - 00000000dffa8000 (usable)

    ... too long - for the whole and better view click here

    Read the article

  • 2 Birds, 1 Stone: Enabling M2M and Mobility in Healthcare

    - by Eric Jensen
    Jim Connors has created a video showcase of a comprehensive healthcare solution, connecting a mobile application directly to an embedded patient monitoring system. In the demo, Jim illustrates how you can easily build solutions on top of the Java embedded platform, using Oracle products like Berkeley DB and Database Mobile Server. Jim is running Apache Tomcat on an embedded device, using Berkeley DB as the data store. BDB is transparently linked to an Oracle Database backend using Database Mobile Server. Information protection is important in healthcare, so it is worth pointing out that these products offer strong data encryption, for storage as well as transit. In his video, Jim does a great job of demystifying M2M. What's compelling about this demo is that it uses a solution architecture that enterprise developers are already comfortable and familiar with: a Java apps server with a database backend. The additional pieces used to embed this solution are Oracle Berkeley DB and Database Mobile Server. It functions transparently, from the perspective of Java apps developers. This means that organizations who understand Java apps (basically everyone) can use this technology to develop embedded M2M products. The potential uses for this technology in healthcare alone are immense; any device that measures and records some aspect of the patient could be linked, securely and directly, to the medical records database. Breathing, circulation, other vitals, sensory perception, blood tests, x-rays or CAT scans. The list goes on and on. In this demo case, it's a testament to the power of the Java embedded platform that they are able to easily interface the device, called a Pulse Oximeter, with the web application. If Jim had stopped there, it would've been a cool demo. But he didn't; he actually saved the most awesome part for the end! At 9:52 Jim drops a bombshell: he's also created an Android app, something a doctor would use to view patient health data from his mobile device. The mobile app is seamlessly integrated into the rest of the system, using the device agent from Oracle's Database Mobile Server. In doing so, Jim has really showcased the full power of this solution: the ability to build M2M solutions that integrate seamlessly with mobile applications. In closing, I want to point out that this is not a hypothetical demo using beta or even v1.0 products. Everything in Jim's demo is available today. What's more, every product shown is mature, and already in production at many customer sites, albeit not in the innovative combination Jim has come up with. If your customers are in the market for these types of solutions (and they almost certainly are) I encourage you to download the components and try it out yourself! All the Oracle products showcased in this video are available for evaluation download via Oracle Technology Network.

    Read the article

  • Managing Social Relationships for the Enterprise – Part 1

    - by Michael Hylton
    Reggie Bradford, Senior Vice President, Oracle

    Today, Mark Hurd, President of Oracle, Thomas Kurian, Executive Vice President of Oracle, and I discussed the strategic importance of how social media is impacting the enterprise and how it is changing the way customers, prospects, employees and investors interact with brands worldwide. Oracle understands that the consumer is in control and, as such, brands must evolve and change to meet growing needs. In addition, according to social media thought leader and Analyst from Altimeter Group, Jeremiah Owyang, companies now average 178 corporate-owned social media accounts. When Oracle added leading social marketing, listening analytics and development tools from Vitrue, Collective Intellect and Involver to Oracle's Cloud Services Suite, we went beyond providing a single set of tools. We developed an entire framework to include a comprehensive social relationship management suite to help companies move beyond the social enterprise and achieve the social-enabled enterprise. The fundamental shift from transaction to engagement means that enterprises need not only a social strategy, but should also ensure that the information and data received from social initiatives flow back to marketing, sales, support and service. Doing so enables companies to deliver a proactive and compelling experience and provides analytics to turn engagement into opportunity – and ultimately that opportunity into revenue. On September 13, 2012, I am delighted to sit down with Jeremiah to further the discussion about how enterprises are addressing social media strategies and managing content. In addition, we will be taking your questions after the webinar via Twitter (@Oracle, @ReggieBradford, @cwfinn, @jowyang). Use #oracle and #socbiz to submit questions and follow the conversation. I look forward to speaking with you and answering your questions online. For more information about becoming a social-enabled enterprise, visit www.oracle.com/social. And don't miss the insights of other social business thought leaders at www.oracle.com/goto/socialbusiness.

    Read the article

  • Transfer websites and domains to new server

    - by Albert
    We currently have around 40 websites and 80+ domains/sub-domains in a shared 1&1 hosting package, and we just acquired a managed dedicated server with 1&1 as well. Now it's time to start transferring everything over to the new server. Transferring just the websites and databases wouldn't be a problem; it would take time but it's pretty straightforward. The problem comes when transferring the domains; let me explain why. Many of the websites we have are accessible via sub-domains of a parent domain. Ideally, we would like to transfer the sites one by one, in order to check for each one that everything works fine in the new server. However, since we also need to transfer the domain so it's managed in the new server, once we do that, all the websites using that domain need to already be on the new server before transferring that domain, thus not allowing the "one by one" philosophy. Another issue is the downtime when transferring the domain, from the moment it stops working in the hosting package and becomes active in the new server. I believe there's nothing we can do here. So my question is if there's any way we can do the "one by one" transferring of the websites (and their corresponding sub-domains) in the circumstances described above. One idea I had would be:

    1. Let's say we have website A, which is accessible using subdomain.mydomain.com (and there are many other websites accessible via other sub-domains of mydomain.com)
    2. Transfer the files of website A to the new server
    3. Point a test domain in the new server to website A's folder (the new server comes with a "test" domain)
    4. Test if website A works with that "test" domain
    5. In the old hosting, somehow point the real sub-domain (subdomain.mydomain.com) to the new location of website A, in a way that the user always sees the same URL as always
    6. Repeat 2-5 for every website belonging to the same domain
    7. Once all are working in the new server, do the actual transfer of the domain to the new server, and then re-create all the sub-domains and point them to their corresponding websites

    That way, users wouldn't notice that there's been a change (except for a small downtime of the websites when doing the domain transfer). The part I'm not sure about is point 5 above. Is there any way to do that? I mean, do it in a way that users see the original domain all the time in their browser, even for internal pages (so not only for the "home page", which would be sub-domain.mydomain.com, but also, for example, for the contact page, which would be sub-domain.mydomain.com/contact.php). Is there any way to do this? Or are we SOL and are we going to have to transfer everything at the same time?

    Read the article

  • Dynamically loading Assemblies to reduce Runtime Depencies

    - by Rick Strahl
    I've been working on a request to the West Wind Application Configuration library to add JSON support. The config library is a very easy to use code-first approach to configuration: you create a class that holds the configuration data and inherits from a base configuration class, and then assign a persistence provider at runtime that determines where and how the configuration data is stored. Currently the library supports .NET Configuration stores (web.config/app.config), XML files, SQL records and string storage.

    About once a week somebody asks me about JSON support and I've deflected this question for the longest time because frankly I think that JSON as a configuration store doesn't really buy a heck of a lot over XML. Both formats require the user to perform some fixup of the plain configuration data - in XML into XML tags, with JSON using JSON delimiters for properties and property formatting rules. Sure JSON is a little less verbose and maybe a little easier to read if you have hierarchical data, but overall the differences are pretty minor in my opinion. And yet - the requests keep rolling in.

    Hard Link Issues in a Component Library

    Another reason I've been hesitant is that I really didn't want to pull a dependency on an external JSON library - in this case JSON.NET - into the core library. If you're not using JSON.NET elsewhere I don't want a user to have to take a hard dependency on JSON.NET unless they want to use the JSON feature. JSON.NET is also sensitive to versions and doesn't play nice with multiple versions when hard linked. For example, when you have a reference to V4.4 in your project but the host application has a reference to version 4.5 you can run into assembly load problems. NuGet's Update-Package can solve some of this *if* you can recompile, but that's not ideal for a component that's supposed to be just plug and play. This is no criticism of JSON.NET - this really applies to any dependency that might change. So hard linking the DLL can be problematic for a number of reasons, but the primary reason is to not force loading of JSON.NET unless you actually need it when you use the JSON configuration features of the library.

    Enter Dynamic Loading

    So rather than adding an assembly reference to the project, I decided that it would be better to dynamically load the DLL at runtime and then use dynamic typing to access the various classes. This allows me to run without a hard assembly reference and allows more flexibility with version number differences now and in the future. But there are also a couple of downsides:

    - No assembly reference means only dynamic access - no compiler type checking or Intellisense
    - The host application must have a reference to JSON.NET or else you get runtime errors

    The former is minor, but the latter can be problematic. Runtime errors are always painful, but in this case I'm willing to live with this. If you want to use JSON configuration settings, JSON.NET needs to be loaded in the project. If this is a Web project, it'll likely be there already.

    So there are a few things that are needed to make this work:

    - Dynamically create an instance and optionally attempt to load an Assembly (if not loaded)
    - Load types into dynamic variables
    - Use Reflection for a few tasks like statics/enums

    The dynamic keyword in C# makes the formerly most difficult Reflection part - method calls and property assignments - fairly painless. But as cool as dynamic is, it doesn't handle all aspects of Reflection.
Specifically it doesn't deal with object activation, truly dynamic (string based) member activation or accessing of non instance members, so there's still a little bit of work left to do with Reflection.Dynamic Object InstantiationThe first step in getting the process rolling is to instantiate the type you need to work with. This might be a two step process - loading the instance from a string value, since we don't have a hard type reference and potentially having to load the assembly. Although the host project might have a reference to JSON.NET, that instance might have not been loaded yet since it hasn't been accessed yet. In ASP.NET this won't be a problem, since ASP.NET preloads all referenced assemblies on AppDomain startup, but in other executable project, assemblies are just in time loaded only when they are accessed.Instantiating a type is a two step process: Finding the type reference and then activating it. Here's the generic code out of my ReflectionUtils library I use for this:/// <summary> /// Creates an instance of a type based on a string. Assumes that the type's /// </summary> /// <param name="typeName">Common name of the type</param> /// <param name="args">Any constructor parameters</param> /// <returns></returns> public static object CreateInstanceFromString(string typeName, params object[] args) { object instance = null; Type type = null; try { type = GetTypeFromName(typeName); if (type == null) return null; instance = Activator.CreateInstance(type, args); } catch { return null; } return instance; } /// <summary> /// Helper routine that looks up a type name and tries to retrieve the /// full type reference in the actively executing assemblies. /// </summary> /// <param name="typeName"></param> /// <returns></returns> public static Type GetTypeFromName(string typeName) { Type type = null; // Let default name binding find it type = Type.GetType(typeName, false); if (type != null) return type; // look through assembly list var assemblies = AppDomain.CurrentDomain.GetAssemblies(); // try to find manually foreach (Assembly asm in assemblies) { type = asm.GetType(typeName, false); if (type != null) break; } return type; } To use this for loading JSON.NET I have a small factory function that instantiates JSON.NET and sets a bunch of configuration settings on the generated object. The startup code also looks for failure and tries loading up the assembly when it fails since that's the main reason the load would fail. Finally it also caches the loaded instance for reuse (according to James the JSON.NET instance is thread safe and quite a bit faster when cached). 
Here's what the factory function looks like in JsonSerializationUtils:/// <summary> /// Dynamically creates an instance of JSON.NET /// </summary> /// <param name="throwExceptions">If true throws exceptions otherwise returns null</param> /// <returns>Dynamic JsonSerializer instance</returns> public static dynamic CreateJsonNet(bool throwExceptions = true) { if (JsonNet != null) return JsonNet; lock (SyncLock) { if (JsonNet != null) return JsonNet; // Try to create instance dynamic json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer"); if (json == null) { try { var ass = AppDomain.CurrentDomain.Load("Newtonsoft.Json"); json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer"); } catch (Exception ex) { if (throwExceptions) throw; return null; } } if (json == null) return null; json.ReferenceLoopHandling = (dynamic) ReflectionUtils.GetStaticProperty("Newtonsoft.Json.ReferenceLoopHandling", "Ignore"); // Enums as strings in JSON dynamic enumConverter = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.Converters.StringEnumConverter"); json.Converters.Add(enumConverter); JsonNet = json; } return JsonNet; }This code's purpose is to return a fully configured JsonSerializer instance. As you can see the code tries to create an instance and when it fails tries to load the assembly, and then re-tries loading.Once the instance is loaded some configuration occurs on it. Specifically I set the ReferenceLoopHandling option to not blow up immediately when circular references are encountered. There are a host of other small config setting that might be useful to set, but the default seem to be good enough in recent versions. Note that I'm setting ReferenceLoopHandling which requires an Enum value to be set. There's no real easy way (short of using the cardinal numeric value) to set a property or pass parameters from static values or enums. This means I still need to use Reflection to make this work. I'm using the same ReflectionUtils class I previously used to handle this for me. The function looks up the type and then uses Type.InvokeMember() to read the static property.Another feature I need is have Enum values serialized as strings rather than numeric values which is the default. To do this I can use the StringEnumConverter to convert enums to strings by adding it to the Converters collection.As you can see there's still a bit of Reflection to be done even in C# 4+ with dynamic, but with a few helpers this process is relatively painless.Doing the actual JSON ConversionFinally I need to actually do my JSON conversions. For the Utility class I need serialization that works for both strings and files so I created four methods that handle these tasks two each for serialization and deserialization for string and file.Here's what the File Serialization looks like:/// <summary> /// Serializes an object instance to a JSON file. 
/// </summary> /// <param name="value">the value to serialize</param> /// <param name="fileName">Full path to the file to write out with JSON.</param> /// <param name="throwExceptions">Determines whether exceptions are thrown or false is returned</param> /// <param name="formatJsonOutput">if true pretty-formats the JSON with line breaks</param> /// <returns>true or false</returns> public static bool SerializeToFile(object value, string fileName, bool throwExceptions = false, bool formatJsonOutput = false) { dynamic writer = null; FileStream fs = null; try { Type type = value.GetType(); var json = CreateJsonNet(throwExceptions); if (json == null) return false; fs = new FileStream(fileName, FileMode.Create); var sw = new StreamWriter(fs, Encoding.UTF8); writer = Activator.CreateInstance(JsonTextWriterType, sw); if (formatJsonOutput) writer.Formatting = (dynamic)Enum.Parse(FormattingType, "Indented"); writer.QuoteChar = '"'; json.Serialize(writer, value); } catch (Exception ex) { Debug.WriteLine("JsonSerializer Serialize error: " + ex.Message); if (throwExceptions) throw; return false; } finally { if (writer != null) writer.Close(); if (fs != null) fs.Close(); } return true; }You can see more of the dynamic invocation in this code. First I grab the dynamic JsonSerializer instance using the CreateJsonNet() method shown earlier which returns a dynamic. I then create a JsonTextWriter and configure a couple of enum settings on it, and then call Serialize() on the serializer instance with the JsonTextWriter that writes the output to disk. Although this code is dynamic it's still fairly short and readable.For full circle operation here's the DeserializeFromFile() version:/// <summary> /// Deserializes an object from file and returns a reference. /// </summary> /// <param name="fileName">name of the file to serialize to</param> /// <param name="objectType">The Type of the object. Use typeof(yourobject class)</param> /// <param name="binarySerialization">determines whether we use Xml or Binary serialization</param> /// <param name="throwExceptions">determines whether failure will throw rather than return null on failure</param> /// <returns>Instance of the deserialized object or null. Must be cast to your object type</returns> public static object DeserializeFromFile(string fileName, Type objectType, bool throwExceptions = false) { dynamic json = CreateJsonNet(throwExceptions); if (json == null) return null; object result = null; dynamic reader = null; FileStream fs = null; try { fs = new FileStream(fileName, FileMode.Open, FileAccess.Read); var sr = new StreamReader(fs, Encoding.UTF8); reader = Activator.CreateInstance(JsonTextReaderType, sr); result = json.Deserialize(reader, objectType); reader.Close(); } catch (Exception ex) { Debug.WriteLine("JsonNetSerialization Deserialization Error: " + ex.Message); if (throwExceptions) throw; return null; } finally { if (reader != null) reader.Close(); if (fs != null) fs.Close(); } return result; }This code is a little more compact since there are no prettifying options to set. 
Here JsonTextReader is created dynamically and it receives the output from the Deserialize() operation on the serializer.You can take a look at the full JsonSerializationUtils.cs file on GitHub to see the rest of the operations, but the string operations are very similar - the code is fairly repetitive.These generic serialization utilities isolate the dynamic serialization logic that has to deal with the dynamic nature of JSON.NET, and any code that uses these functions is none the wiser that JSON.NET is dynamically loaded.Using the JsonSerializationUtils WrapperThe final consumer of the SerializationUtils wrapper is an actual ConfigurationProvider, that is responsible for handling reading and writing JSON values to and from files. The provider is simple a small wrapper around the SerializationUtils component and there's very little code to make this work now:The whole provider looks like this:/// <summary> /// Reads and Writes configuration settings in .NET config files and /// sections. Allows reading and writing to default or external files /// and specification of the configuration section that settings are /// applied to. /// </summary> public class JsonFileConfigurationProvider<TAppConfiguration> : ConfigurationProviderBase<TAppConfiguration> where TAppConfiguration: AppConfiguration, new() { /// <summary> /// Optional - the Configuration file where configuration settings are /// stored in. If not specified uses the default Configuration Manager /// and its default store. /// </summary> public string JsonConfigurationFile { get { return _JsonConfigurationFile; } set { _JsonConfigurationFile = value; } } private string _JsonConfigurationFile = string.Empty; public override bool Read(AppConfiguration config) { var newConfig = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfiguration)) as TAppConfiguration; if (newConfig == null) { if(Write(config)) return true; return false; } DecryptFields(newConfig); DataUtils.CopyObjectData(newConfig, config, "Provider,ErrorMessage"); return true; } /// <summary> /// Return /// </summary> /// <typeparam name="TAppConfig"></typeparam> /// <returns></returns> public override TAppConfig Read<TAppConfig>() { var result = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfig)) as TAppConfig; if (result != null) DecryptFields(result); return result; } /// <summary> /// Write configuration to XmlConfigurationFile location /// </summary> /// <param name="config"></param> /// <returns></returns> public override bool Write(AppConfiguration config) { EncryptFields(config); bool result = JsonSerializationUtils.SerializeToFile(config, JsonConfigurationFile,false,true); // Have to decrypt again to make sure the properties are readable afterwards DecryptFields(config); return result; } }This incidentally demonstrates how easy it is to create a new provider for the West Wind Application Configuration component. Simply implementing 3 methods will do in most cases.Note this code doesn't have any dynamic dependencies - all that's abstracted away in the JsonSerializationUtils(). From here on, serializing JSON is just a matter of calling the static methods on the SerializationUtils class.Already, there are several other places in some other tools where I use JSON serialization this is coming in very handy. With a couple of lines of code I was able to add JSON.NET support to an older AJAX library that I use replacing quite a bit of code that was previously in use. 
And for any other manual JSON operations (in a couple of apps I use JSON serialization for 'blob'-like document storage) this is also going to be handy.

Performance?

Some of you might be thinking that using dynamic and Reflection can't be good for performance. And you'd be right… In performing some informal testing it looks like the performance of the native code is nearly twice as fast as the dynamic code. Most of the slowness is attributable to type lookups. To test, I created a native class that uses an actual reference to JSON.NET, and performance was consistently around 85-90% faster with the referenced code. That being said though - I serialized 10,000 objects in 80ms vs. 45ms, so this is hardly slouchy. For the configuration component speed is not that important because both read and write operations typically happen once on first access and then every once in a while. But for other operations - say a serializer trying to handle AJAX requests on a Web Server - one would be well served to create a hard dependency.

Dynamic Loading - Worth it?

On occasion dynamic loading makes sense. But there's a price to be paid in added code complexity and a performance hit. For operations that are not pivotal to a component or application and are only used under certain circumstances, though, dynamic loading can be beneficial to avoid having to ship extra files and loading down distributions. These days, when you create new projects in Visual Studio with 30 assemblies before you even add your own code, trying to keep file counts under control seems a good idea. It's not the kind of thing you do on a regular basis, but when needed it can be a useful tool.

Hopefully some of you find this information useful…

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in .NET, C#
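As a footnote to the Performance? section above, here is a rough sketch of the kind of timing harness one could use to repeat that informal comparison. This is not the author's actual benchmark: the payload, iteration count and temp file path are invented, and the hard Newtonsoft.Json reference exists only to provide the baseline measurement. Because both sides write to disk here, file I/O will dominate and the measured gap will be smaller than for pure in-memory serialization.

using System;
using System.Diagnostics;
using System.IO;
using Newtonsoft.Json;   // hard reference, used only for the baseline

public static class SerializationTimingDemo
{
    public static void Run()
    {
        var item = new { Name = "Test", Value = 42 };   // made-up payload
        string path = Path.Combine(Path.GetTempPath(), "perf-test.json");
        const int iterations = 10000;

        // dynamically loaded wrapper
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            JsonSerializationUtils.SerializeToFile(item, path);
        sw.Stop();
        Console.WriteLine("Dynamic wrapper:     " + sw.ElapsedMilliseconds + "ms");

        // hard-referenced JSON.NET
        sw.Restart();
        for (int i = 0; i < iterations; i++)
            File.WriteAllText(path, JsonConvert.SerializeObject(item));
        sw.Stop();
        Console.WriteLine("Referenced JSON.NET: " + sw.ElapsedMilliseconds + "ms");
    }
}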

    Read the article

  • Exploding maps in Reporting Services 2008 R2

    - by Rob Farley
    Kaboom! Well, that was the imagery that secretly appeared in my mind when I saw “USA By State Exploded” in the list of installed maps in Report Builder 3.0 – part of the spatial offering of SQL Server Reporting Services 2008 R2. Alas, it just means that the borders are bigger. Clicking on it showed me. Unfortunately, I’m not interested in maps of the US. None of my clients are there (at least, not yet – feel free to get in touch if you want to change this ‘feature’ of my company). So instead, I’ve recently been getting hold of some data for Australian areas. I’ve just bought some PostCode shapes for South Australia, and will use this in demos for conferences and for showing clients how this kind of report can really impact their reporting. One of the companies I was talking to about getting shape files sent me a sample. So I chose the “ESRI shapefile” option you see above, and browsed to my file. It appeared in the window like this: Australians will immediately recognise this as the area around Wollongong, just south of Sydney. Well, apart from me. I didn’t. I had to put a Bing Maps layer behind it to work that out, but that’s not for this post. The thing I discovered was that if I selected the Exploded USA option (but without clicking Next), and then chose my shape file, my area around Wollongong would be exploded too! Huh! I think this is actually a bug, but a potentially useful one! Some further investigation (involving creating two identical reports, one with this exploded view, one without) showed that the Exploded view is done by reducing the ScaleFactor property of the PolygonLayer in the map control. The Exploded version has it below 1. If you set it to above one, your shapes overlap. I discovered this by accident… I guess I hadn’t looked through all the PolygonLayer options to work out what they all do. And because this post is about Reporting, it can qualify for this month’s T-SQL Tuesday, hosted by Aaron Nelson (@sqlvariant).

    Read the article

  • Windows 8: Everything from design, build, and how to sell a Metro style app

    - by Thomas Mason
    For me, there are a lot of similarities between an application developed for Windows Phone and a Metro style app developed for Windows 8. A Windows Phone 7 application (rather than an XNA game) is built in .NET and XAML against a subset of the .NET framework, and the application has a lifecycle which needs to be conscious of battery life and so is split out into foreground/background pieces. The application is sandboxed in terms of its interactions with the local device and is packaged with a manifest which describes those interactions. The app needs to be aware of network connectivity status, and its work on the network is done asynchronously to preserve the user experience. The app is packaged and deployed to a Marketplace which the user browses to find the app, read reviews, perhaps purchase it, and then install it and receive updates over time. Quite a lot of those statements are as true of a Windows 8 Metro style app as they are of a Windows Phone app, so a Windows Phone app developer already has a good head start when it comes to building Metro style apps for Windows 8. With that in mind, there is an event to help developers with a Windows Phone app in Marketplace begin the process of looking at Windows 8 and whether they can get a quick win by bringing their Phone application onto Windows. The idea of the event is to provide a space where developers can get together over 2 days and take the time out to look at what it means to take their app from Windows Phone to Windows 8. Kicking off on Saturday 16th June at 10am, we are told there will be plenty of power sockets, WiFi, whiteboards, drinks, pizza, games, prizes and some quiet space that you can work in, plus people on hand with Windows Phone and Windows 8 experience to help everything along the way. There will be an attendee-voted schedule of talks, but these will be kept out of your way if you just want to get on and code, and there will also be information around submitting your app to the Windows Store. If you have a Windows Phone app in Marketplace, now’s a great time to look at getting it onto Windows 8. Sign up. Bring your laptop. Bring your app. Bring Windows 8 and Visual Studio 11. And everyone will do their best to help you get your app onto Windows 8. Location and venue are TBA, but it will be in central London, accessible by major railway and underground transportation. Day 1: Saturday 16th June, 10am – 9pm. Day 2: Sunday 17th June, 10am – 4pm.

    Read the article

  • SQLAuthority News – A Quick Note on @Pluralsight Video – Call Me Maybe Developer Way

    - by pinaldave
    I write a lot about how important learning and training are. Any of my readers will know that I think the key to success is staying current with your education and taking every opportunity to increase your “tool kit” of skills. I hope that I have not given the impression that it is all in the employees’ hands to make sure they are happy and satisfied at their jobs. I also firmly believe that a good boss will make good employees. A boss who is good at communicating and leading, who knows how to nip problems in the bud and allocate resources wisely, will have a well-oiled machine. This means happy employees and a great work environment. It is important to have a healthy work environment because you will not succeed without one. Successful businesses will always have the type of environment that fosters creativity and has efficient employees. A healthy environment doesn’t force employees to produce results, but allows them to progress and create the results themselves. The result of a healthy work environment is that employees will enjoy their work and then work harder. This can bring the company more revenue, and hopefully the employees will see the result of their hard work in bonuses and raises. Money is important, but it is certainly secondary – the important part is the dedication of the employees to their work and to their company. This is the true key to success. Any employee who recognizes this description as their working environment should consider themselves fortunate. They are allowed to grow and do better, and employees being treated fairly can be a rarity in this world. One company that I believe adheres to this principle is Pluralsight – as evidenced by this fun video, which I have blogged about earlier (check out my cameo at 0:37). It was great fun to work with the employees at Pluralsight while making this video. They are a great bunch and clearly have a great work environment – we wouldn’t have had this much fun otherwise! I have to tell you a little bit about making this video. My wife shot it with her mobile phone, which was certainly a different but exciting experience! It was hard to get the look of the video right, since I was trying to portray a body builder – this was a little outside of my own personal experience. I have what I like to call a “healthy” body type, so trying to look extremely fit like some of the other “actors” in this video was a challenge – but I do hope that you all think I succeeded. All in all, it was great fun to participate in this video and I hope to see my friends at Pluralsight again soon. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology, Video

    Read the article

  • Hadoop growing pains

    - by Piotr Rodak
    This post is not going to be about SQL Server. I have been reading more and more recently about “Big Data” – a very catchy term that describes the untamed increase of the data that mankind produces each day and the struggle to capture the meaning of these data. Ten years ago, and perhaps even three years ago, this need was not so widely recognized. The increasing number of smartphones, and the discernible trend of mainstream Internet traffic moving to smartphone-generated traffic, mean that there is a bigger and bigger stream of information that has to be stored, transformed, analysed and perhaps monetized. The nature of this traffic makes it very difficult to wrap it into the boundaries of relational database engines, and the amount of data makes it nearly impossible to process it in relational databases within a reasonable time. This is where ‘cloud’ technologies come into play. I just read a good article about the growing pains of Hadoop, which has become one of the leading players in the distributed processing arena within the last year or two. In it, Toby Baer concludes that a lack of enterprise-ready toolsets hinders Hadoop’s adoption in the enterprise world. While this is true, something else drew my attention. According to the article there are already about half a dozen commercially supported distributions of Hadoop. For me, someone who has not been involved in the intricacies of the open-source world, this is quite an interesting observation. On one hand, it is good that there is competition, as it is ultimately beneficial to the customer. On the other hand, the customer is faced with the difficulty of choosing the right distribution. In future, when Hadoop distributions fork even more, this choice will be even harder. The distributions will have overlapping sets of features, yet will be quite incompatible with each other. I suppose it will take a few years until leaders emerge and the market begins to resemble what we see in the Linux world. There are myriads of distributions, but only a few are acknowledged by the industry as enterprise standard. Others are honed by bearded individuals with too much time to spend. In any case, the third thing I can’t help but notice about the proliferation of Hadoop distributions is that IT professionals will have jobs. BuzzNet Tags: Hadoop, Big Data, Enterprise IT

    Read the article

  • How to become a solid python web developer [closed]

    - by Estarius
    Possible Duplicate: How do I learn Python from zero to web development? I have started Python recently with the goal of becoming a solid developer and eventually building a web application. However, as time goes by I am wondering if I am being optimal about how I will achieve my goal. I would compare it to a game, for example: to get better you must spend time playing and trying new things... However, if you just log in and sit in the lobby chatting, you are most likely not progressing. So far, this is my plan (feel free to comment on or judge it): Review basic programming concepts. Start coding slowly in Python. Once comfortable in Python, learn about web development in Python. Learn about those things we hear about: SQLAlchemy, MVC, TDD, Git, Agile (group project). To achieve these things, I started the Learn Python the Hard Way exercises, which I am doing at the rate of 5 per day. I also started to read Think Python at the same time and plan to move on to Dive Into Python. As far as my research goes, these resources, along with the official Python documentation, are usually what is most recommended for learning Python. I consider this enough to cover points 1 and 2. While learning Python is really great, my goal remains to do quality web development. I know there are books about Django etc.; however, I would like to become comfortable with any Python web development. This means without a framework and with a framework... any framework, and then being able to choose the one which best fits my needs. For this I would like to know if some people have suggestions. Should I just get a book on Django and expect it to apply to everything? What would be the best method to go from Python to web Python and not end up creating crappy code which would turn into nightmares for other programmers? Then finally, those "things we hear about". While I understand what they all do basically, I am fairly sure that, like everything, there are right and wrong ways of making use of them. Should I go through at least a whole book on each before starting to use them, or stick to their respective online documentation? Is there some kind of documentation which links their use to Python? Also, from looking at Django and Pyramid, they seem to use something other than MVC; while the Django model looks similar, the Pyramid one seems to cut out a whole part of it... Is learning MVC still worth it? Sorry for the wall of text. Thanks in advance!

    Read the article

  • Should we exclude code for the code coverage analysis?

    - by romaintaz
    I'm working on several applications, mainly legacy ones. Currently, their code coverage is quite low: generally between 10 and 50%. For several weeks now, we have been having recurrent discussions with the Bangalore teams (the main part of the development is done offshore in India) regarding the exclusion of packages or classes from Cobertura (our code coverage tool, even though we are currently migrating to JaCoCo). Their point of view is the following: as they will not write any unit tests on some layers of the application (1), these layers should simply be excluded from the code coverage measure. In other words, they want to limit the code coverage measure to the code that is tested or should be tested. Also, when they work on unit tests for a complex class, the benefit - purely in terms of code coverage - will go unnoticed in a large application. Reducing the scope of the code coverage would make this kind of effort more visible... The interest of this approach is that we will have a code coverage measure that indicates the current status of the part of the application we consider testable. However, my point of view is that we are somehow faking the figures. This solution is an easy way to reach a higher level of code coverage without any effort. Another point that bothers me is the following: if we show a coverage increase from one week to another, how can we tell whether this good news is due to the good work of the developers, or simply due to new exclusions? In addition, we will not be able to know exactly what is considered in the code coverage measure. For example, if I have a 10,000-line application with 40% code coverage, I can deduce that 40% of my code base is tested (2). But what happens if we set exclusions? If the code coverage is now 60%, what can I deduce exactly? That 60% of my "important" code base is tested? How can I… As far as I am concerned, I prefer to keep the "real" code coverage value, even if we can't be cheerful about it. In addition, thanks to Sonar, we can easily navigate in our code base and know, for any module / package / class, its own code coverage. But of course, the global code coverage will still be low. What is your opinion on this subject? How do you do it on your projects? Thanks. (1) These layers are generally related to the UI / Java beans, etc. (2) I know that's not true. In fact, it only means that 40% of my code base

    Read the article

  • Executable Resumes

    - by Liam McLennan
    Over the past twelve months I have been thinking a lot about executable specifications. Long considered the holy grail of agile software development, executable specifications mean expressing a program’s functionality in a way that is both readable by the customer and computer verifiable in an automatic, repeatable way. With the current generation of BDD and ATDD tools, executable specifications finally seem within the reach of a significant percentage of the development community. Lately, and partly as a result of my craftsmanship tour, I have decided that soon I am going to have to get a job (gasp!). As Dave Hoover describes in Apprenticeship Patterns, “you … have mentors and kindred spirits that you meet with periodically, [but] when it comes to developing software, you work alone.” The time may have come when the only way for me to feel satisfied and enriched by my work is to seek out a work environment where I can work with people smarter and more knowledgeable than myself. Having been on both sides of the interview desk many times, I know how difficult and unreliable the process can be. Therefore, I am proposing the idea of executable resumes. As a journeyman programmer looking for a fruitful work environment, I plan to write an application that demonstrates my understanding of the state of the art. Potential employers can download, view and execute my executable resume and judge whether my aesthetic sensibility matches their own. The concept of the executable resume is based upon the following assertion: a line of code answers a thousand interview questions. Asking people about their experiences and skills is not a direct way of assessing their value to your organisation. Often it simply assesses their ability to mislead an interviewer. An executable resume demonstrates: the highest quality code that the person is able to produce; that the person is sufficiently motivated to produce something of value in their own time; that the person loves their craft. The idea of publishing a program to demonstrate a developer’s skills comes from Rob Conery, who suggested that each developer should build their own blog engine, since it is the public representation of their level of mastery. Rob said: Luke had to build his own lightsaber – geeks should have to build their own blogs. And that should be their resume. In honour of Rob’s inspiration I plan to build a blog engine as my executable resume. While it is true that the world does not need another blog engine, it is as good a project as any, it is a well understood domain, and I have not found an existing blog engine that I like. Executable resumes fit well with the software craftsmanship metaphor. It is not difficult to imagine that under the guild system master craftsmen may have accepted journeymen based on the quality of the work they had produced in the past. We now understand that when it comes to the functionality of an application, code is the final arbiter. Why not apply the same rule to hiring?

    Read the article

  • Finally home - and something fully off topic

    - by Mike Dietrich
    Arrived at Munich Pasing last night at 0:50am ... finally :-) On Sunday I left the Dylan Hotel in Dublin (thanks to the staff there as well: you were REALLY helpful!!) around 7:30pm to go to the port - and came home on Tuesday morning at 1:15am. So all together 29:45hrs door-to-door - not bad for nearly 2000km relying just on public transport. It could have been faster if there had been seats left in earlier TGVs. But I'm not complaining at all ;-) Just checked the website of Dublin Airport - it currently says: 17.00pm: Latest on flight disruptions at Dublin Airport The IAA have advised us that based on the latest Volcanic Ash Advisory Centre London Dublin Airport will remain closed for all inbound and outbound commercial flights until 20.00hours. This effectively means that no flights will land or take off at Dublin Airport until then. A further update will be posted this afternoon. When traveling I always have my iPod with me. It has gotten a bit old now (I think I bought it 3 years ago, in November 2007) but it has a 160GB hard disk, so it fits most of my music collection (not the entire collection anymore, as I'm currently re-ripping everything to Apple Lossless because, at least to my ears, it makes a big difference - but I listen to good ol' vinyl as well ... and I don't download compressed music ;-) ). The battery of my little travel companion is still good for more than 20 hours of continuous music playback - and a band from Texas called Midlake was in my ears for most of the journey. I hadn't heard of them before I asked a lady at a Munich store a few weeks ago what she was playing on the speakers in the shop. She was amazed and came back with the CD cover, but I hesitated to buy it as I always want to listen to the tunes first - and that day I had no time left to do so. But in Dublin I had a bit of spare time on Saturday, and I always enter record stores - Tower Records was the sort of store I really enjoy, so I spent nearly two hours there - leaving with 3 Midlake CDs in my bag. So if you are interested, just listen to those tunes, which may remind some people of Fleetwood Mac. As I said in the title, fully off topic ;-)

    Read the article

  • Walking to the North Pole to raise money to protect children from cruelty.

    - by jessica.ebbelaar
    Hi, my name is Luca. I joined Oracle in 2005 and I am currently working as Dell EMEA Channel Manager for UK, Ireland and Iberia, responsible for the Oracle-Dell relationship in those 3 countries. On the 31st of March 2011 I will set out to complete the ultimate challenge. I will walk and ski across the frozen Arctic to the Top of The World: the GEOGRAPHIC North Pole, dragging all my supplies over 60 nautical miles of moving sea ice, in temperatures as low as minus 30 degrees Celsius. I will spend 8 to 10 days preparing, working, living and travelling to the North Pole at 90 degrees north. In November I spent a full week training for this trip (watch my video). This gave me the opportunity to meet the rest of the team, test all the gear and carry an 18-inch tyre around the countryside for 8 hours per day. I am honoured to embark on this challenging journey to support the National Society for the Prevention of Cruelty to Children (NSPCC). The NSPCC has helped more than 750,000 young people to speak out for the first time about abuse they had suffered. I am a firm believer that in order to build a stronger, healthier and wiser society we need to support and help future generations from the beginning of their life journey. This is why cruelty to children must stop. FULL STOP. Through Virgin Money Giving, you can sponsor me and donations will be quickly processed and passed to the NSPCC. Virgin Money Giving is a non-profit organization and will claim Gift Aid on a charity's behalf where the donor is eligible. If you are a UK taxpayer, please don't forget to select Gift Aid. Gift Aid is great because it means charities get extra money added to their donations at no extra cost to the donor. For every £1 donated, the charity currently receives £1.28 when you add Gift Aid. Anyone who would like to find out more can visit my Facebook page ‘Luca North Pole charity fundraising trip’. I really appreciate all your support and thank you for supporting the NSPCC. Technorati Tags: Channel Manager, challenge, Arctic, North Pole, NSPCC, cruelty to children, Luca North Pole charity fundraising trip. If you have any questions related to this article, contact [email protected].

    Read the article

< Previous Page | 229 230 231 232 233 234 235 236 237 238 239 240  | Next Page >