Search Results

Search found 5885 results on 236 pages for 'and finally'.


  • Security and the Mobile Workforce

    - by tobyehatch
    Now that many organizations are moving to the BYOD philosophy (bring your own devices), security for phones and tablets accessing company-sensitive information is of paramount importance. I had the pleasure of interviewing Brian MacDonald, Principal Product Manager for Oracle Business Intelligence (BI) Mobile Products, about this subject, and he shared some wonderful insight about how the Oracle Mobile Security Tool Kit is addressing mobile security and doing some pretty cool things.

    With the rapid proliferation of phones and tablets, there is a perception that mobile devices are a security threat to corporate IT, that mobile operating systems are not secure, and that there are simply too many ways to inadvertently provide access to critical analytic data outside the firewall. Every day, I see employees working on mobile devices at the airport while waiting for their airplanes, and using public Wi-Fi connections at coffee houses and in restaurants. These methods are not typically secure ways to access confidential company data. I asked Brian to explain why.

    “The native controls for mobile devices and applications are indeed insufficiently secure for corporate deployments of Business Intelligence, and most certainly for businesses where data is extremely critical - such as financial services or defense - although it really applies across the board. The traditional approach for accessing data from outside a firewall is using a VPN connection, which is not a viable solution for mobile. The problem is that once you open up a VPN connection on your phone or tablet, you are creating an opening for the whole device, for all the software and installed applications. Often the VPN connection by itself provides insufficient encryption – if any – which means that data can potentially be intercepted.”

    For this reason, most organizations that deploy Business Intelligence data via mobile devices will only do so with some additional level of control. So, how has the industry responded? What are companies doing to address this very real threat?

    Brian explained that “Mobile Device Management (MDM) and Mobile Application Management (MAM) software vendors have rapidly created solutions for mobile devices that provide a vast array of services for controlling, managing and establishing enterprise mobile usage policies. On the device front, vendors now support full levels of encryption behind the firewall, encrypted local data storage, credential management such as federated single sign-on, as well as remote wipe, geo-fencing and other risk-reducing features (should a device be lost or stolen). More importantly, these software vendors have created methods for providing these capabilities on a per-application basis, allowing for complete isolation of the application from the mobile operating system. Finally, there are tools which allow the applications themselves to be distributed through enterprise application stores, allowing IT organizations to manage who has access to the apps, when updates to the applications will happen, and to revoke access after an employee leaves.” So even though an employee may be using a personal device, access to company data can be controlled while on or near the company premises.

    So do the Oracle BI mobile products integrate with the MDM and MAM vendors? Brian explained that our customers use a wide variety of mobile security vendors and may even have more than one in-house. Therefore, Oracle is ensuring that users have a choice and a mechanism for linking together Oracle’s BI offering with their chosen vendor’s secure technology. The Oracle BI Mobile Security Toolkit, which is a version of the Oracle BI Mobile HD application delivered through the Oracle Technology Network (OTN) in its component parts, helps Oracle users build their own version of the Mobile HD application, sign it with their own enterprise development certificates, link it with their security vendor of choice, and then deploy the combined application through whichever means they feel most appropriate, including enterprise application stores.

    Brian further explained that Oracle currently supports most of the major mobile security vendors, has close relationships with each, and maintains strong partnerships enabling both Oracle and the vendors to test, update and release a cooperating solution in lock-step. Oracle also ensures that as new versions of the Oracle HD application are made available on the Apple iTunes store, the same version is also immediately made available through the Security Toolkit on OTN.

    Rest assured that as our workforce continues down the mobile path, company-sensitive information can be secured. To listen to the entire podcast, click here. To learn more about Oracle BI Mobile HD, click here. To learn more about the BI Mobile Security Toolkit, click here.

    Read the article

  • Open source adventures with... wait for it... Microsoft

    - by Jeff
    Last week, Microsoft announced that it was going to open source the rest of the ASP.NET MVC Web stack. The core MVC framework has been open source for a long time now, but the other pieces around it are also now out in the wild. Not only that, but it's not what I call "big bang" open source, where you release the source with each version. No, they're actually committing in real time to a public repository. They're also taking contributions where it makes sense. If that weren't exciting enough, CodePlex, which used to be a part of the team I was on, has been re-org'd to a different part of the company where it is getting the love and attention (and apparently money) that it deserves. For a period of several months, I lobbied to get a PM gig with that product, but got nowhere. A year and a half later, I'm happy to see it finally treated right. In any case, I found a bug in Razor, the rendering engine, before the beta came out. I informally sent the bug info to some people, but it wasn't fixed for the beta. Now, with the project being developed in the open, I was able to submit the issue, and went back and forth with the developer who wrote the code (I met him once at a meet up in Bellevue, I think), and he committed a fix. I tried it a day later, and the bug was gone. There's a lot to learn from all of this. That open source software is surprisingly efficient and often of high quality is one part of it. For me the win is that it demonstrates how open and collaborative processes, as light as possible, lead to better software. In other words, even if this were a project being developed internally, at a bank or something, getting stakeholders involved early and giving people the ability to respond leads to awesomeness. While there is always a place for big thinking, experience has shown time and time again that trying to figure everything out up front takes too long, and rarely meets expectations. This is a lesson that probably half of Microsoft has yet to learn, including the team I was on before I split. It's the reason that team still hasn't shipped anything to general availability. But I've seen what an open and iterative development style can do for teams, at Microsoft and other places that I've worked. When you can have a conversation with people, and take ideas and turn them into code quickly, you're winning. So why don't people like winning? I think there are a lot of reasons, and they can generally be categorized into fear, skepticism and bad experiences. I can't give the Web stack teams enough credit. Not only did they dream big, but they changed a culture that often seems immovable and hopelessly stuck. This is a very public example of this culture change, but it's starting to happen at every scale in Microsoft. It's really interesting to see in a company that has been written off as dead the last decade.

    Read the article

  • SQL Server Date Comparison Functions

    - by HighAltitudeCoder
    A few months ago, I found myself working with a repetitive cursor that looped until the data had been manipulated enough times that it was finally correct. The cursor was heavily dependent upon dates, every time requiring the earlier of two (or several) dates in one stored procedure, while requiring the later of two dates in another stored procedure. In short, what I needed was a function that would allow me to perform the following evaluation:

        WHERE MAX(Date1, Date2) < @SomeDate

    The problem is, the MAX() function in SQL Server does not perform this functionality. So, I set out to put these functions together. They are titled EarlierOf() and LaterOf().

        /**********************************************************
                              EarlierOf.sql
        **********************************************************/
        /**********************************************************
        Return the earlier of two DATETIME variables.

        Parameter 1: DATETIME1
        Parameter 2: DATETIME2

        Works for a variety of DATETIME or NULL values. Even though
        comparisons with NULL are actually indeterminate, we know
        conceptually that NULL is not earlier or later than any
        other date provided.

        SYNTAX:
        SELECT dbo.EarlierOf('1/1/2000','12/1/2009')
        SELECT dbo.EarlierOf('2009-12-01 00:00:00.000','2009-12-01 00:00:00.521')
        SELECT dbo.EarlierOf('11/15/2000',NULL)
        SELECT dbo.EarlierOf(NULL,'1/15/2004')
        SELECT dbo.EarlierOf(NULL,NULL)
        **********************************************************/
        USE AdventureWorks
        GO

        IF EXISTS
              (SELECT *
              FROM sysobjects
              WHERE name = 'EarlierOf'
              AND xtype = 'FN'
              )
        BEGIN
              DROP FUNCTION EarlierOf
        END
        GO

        CREATE FUNCTION EarlierOf
        (
              @Date1      DATETIME,
              @Date2      DATETIME
        )
        RETURNS DATETIME
        AS
        BEGIN
              DECLARE @ReturnDate DATETIME

              IF (@Date1 IS NULL AND @Date2 IS NULL)
              BEGIN
                    SET @ReturnDate = NULL
                    GOTO EndOfFunction
              END

              ELSE IF (@Date1 IS NULL AND @Date2 IS NOT NULL)
              BEGIN
                    SET @ReturnDate = @Date2
                    GOTO EndOfFunction
              END

              ELSE IF (@Date1 IS NOT NULL AND @Date2 IS NULL)
              BEGIN
                    SET @ReturnDate = @Date1
                    GOTO EndOfFunction
              END

              ELSE
              BEGIN
                    SET @ReturnDate = @Date1
                    IF @Date2 < @Date1
                          SET @ReturnDate = @Date2
                    GOTO EndOfFunction
              END

              EndOfFunction:
              RETURN @ReturnDate

        END -- End Function
        GO

        ---- Set Permissions
        --GRANT SELECT ON EarlierOf TO UserRole1
        --GRANT SELECT ON EarlierOf TO UserRole2
        --GO

    The inverse of this function is only slightly different.

        /**********************************************************
                              LaterOf.sql
        **********************************************************/
        /**********************************************************
        Return the later of two DATETIME variables.

        Parameter 1: DATETIME1
        Parameter 2: DATETIME2

        Works for a variety of DATETIME or NULL values. Even though
        comparisons with NULL are actually indeterminate, we know
        conceptually that NULL is not earlier or later than any
        other date provided.

        SYNTAX:
        SELECT dbo.LaterOf('1/1/2000','12/1/2009')
        SELECT dbo.LaterOf('2009-12-01 00:00:00.000','2009-12-01 00:00:00.521')
        SELECT dbo.LaterOf('11/15/2000',NULL)
        SELECT dbo.LaterOf(NULL,'1/15/2004')
        SELECT dbo.LaterOf(NULL,NULL)
        **********************************************************/
        USE AdventureWorks
        GO

        IF EXISTS
              (SELECT *
              FROM sysobjects
              WHERE name = 'LaterOf'
              AND xtype = 'FN'
              )
        BEGIN
              DROP FUNCTION LaterOf
        END
        GO

        CREATE FUNCTION LaterOf
        (
              @Date1      DATETIME,
              @Date2      DATETIME
        )
        RETURNS DATETIME
        AS
        BEGIN
              DECLARE @ReturnDate DATETIME

              IF (@Date1 IS NULL AND @Date2 IS NULL)
              BEGIN
                    SET @ReturnDate = NULL
                    GOTO EndOfFunction
              END

              ELSE IF (@Date1 IS NULL AND @Date2 IS NOT NULL)
              BEGIN
                    SET @ReturnDate = @Date2
                    GOTO EndOfFunction
              END

              ELSE IF (@Date1 IS NOT NULL AND @Date2 IS NULL)
              BEGIN
                    SET @ReturnDate = @Date1
                    GOTO EndOfFunction
              END

              ELSE
              BEGIN
                    SET @ReturnDate = @Date1
                    IF @Date2 > @Date1
                          SET @ReturnDate = @Date2
                    GOTO EndOfFunction
              END

              EndOfFunction:
              RETURN @ReturnDate

        END -- End Function
        GO

        ---- Set Permissions
        --GRANT SELECT ON LaterOf TO UserRole1
        --GRANT SELECT ON LaterOf TO UserRole2
        --GO

    The interesting thing about these functions is their simplicity and the built-in NULL handling. It's interesting, because it seems like something should already exist in SQL Server that does this. From a different vantage point, if you create this functionality and it is easy to use (ideally, intuitively self-explanatory), you have made a successful contribution. Interesting is good. Self-explanatory, or intuitive, is FAR better. Happy coding! Graeme
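    To close the loop on the original requirement, here is a quick usage sketch; the dbo.Orders table and its OrderDate/ShipDate columns are hypothetical and simply stand in for whichever two dates you need to compare:

        -- The original goal was effectively: WHERE MAX(Date1, Date2) < @SomeDate
        -- With the functions above, that becomes:
        DECLARE @SomeDate DATETIME
        SET @SomeDate = '2009-06-30'

        SELECT *
        FROM dbo.Orders                                        -- hypothetical table
        WHERE dbo.LaterOf(OrderDate, ShipDate) < @SomeDate     -- both dates fall before @SomeDate

        SELECT *
        FROM dbo.Orders
        WHERE dbo.EarlierOf(OrderDate, ShipDate) < @SomeDate   -- the earlier of the two falls before @SomeDate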

    Read the article

  • Using T4 to generate Configuration classes

    - by Justin Hoffman
    I wanted to try to use T4 to read a web.config and generate all of the appSettings and connectionStrings as properties of a class. I elected in this template only to output appSettings and connectionStrings, but you can see it would be easily adapted for app-specific settings, bindings, etc. This allows for quick access to config values as well as removing the potential for typos when accessing values from the ConfigurationManager. One caveat: a developer would need to remember to run the .tt file after adding an entry to the web.config. However, one would quickly notice when trying to access the property from the generated class (it wouldn't be there). Additionally, there are other options as noted here.

    The first step was to create the .tt file. Note that this is a basic example; it could be extended even further, I'm sure. In this example I just manually input the path to the web.config file.

        <#@ template debug="false" hostspecific="true" language="C#" #>
        <#@ output extension=".cs" #>
        <#@ assembly name="System.Configuration" #>
        <#@ assembly name="System.Xml" #>
        <#@ assembly name="System.Xml.Linq" #>
        <#@ assembly name="System.Net" #>
        <#@ assembly name="System" #>
        <#@ import namespace="System.Configuration" #>
        <#@ import namespace="System.Xml" #>
        <#@ import namespace="System.Net" #>
        <#@ import namespace="Microsoft.VisualStudio.TextTemplating" #>
        <#@ import namespace="System.Xml.Linq" #>
        using System;
        using System.Configuration;
        using System.Xml;
        using System.Xml.Linq;
        using System.Linq;

        namespace MyProject.Web
        {
            public partial class Configurator
            {
        <#
            var xDocument = XDocument.Load(@"G:\MySolution\MyProject\Web.config");
            var results = xDocument.Descendants("appSettings");
            const string key = "key";
            const string name = "name";
            foreach (var xElement in results.Descendants())
            {#>
                public string <#= xElement.Attribute(key).Value #> { get { return ConfigurationManager.AppSettings[<#= string.Format("{0}{1}{2}", "\"", xElement.Attribute(key).Value, "\"") #>]; } }
        <#}#>
        <#
            var connectionStrings = xDocument.Descendants("connectionStrings");
            foreach (var connString in connectionStrings.Descendants())
            {#>
                public string <#= connString.Attribute(name).Value #> { get { return ConfigurationManager.ConnectionStrings[<#= string.Format("{0}{1}{2}", "\"", connString.Attribute(name).Value, "\"") #>].ConnectionString; } }
        <#}
        #>
            }
        }

    The resulting .cs file:

        using System;
        using System.Configuration;
        using System.Xml;
        using System.Xml.Linq;
        using System.Linq;

        namespace MyProject.Web
        {
            public partial class Configurator
            {
                public string ClientValidationEnabled { get { return ConfigurationManager.AppSettings["ClientValidationEnabled"]; } }
                public string UnobtrusiveJavaScriptEnabled { get { return ConfigurationManager.AppSettings["UnobtrusiveJavaScriptEnabled"]; } }
                public string ServiceUri { get { return ConfigurationManager.AppSettings["ServiceUri"]; } }
                public string TestConnection { get { return ConfigurationManager.ConnectionStrings["TestConnection"].ConnectionString; } }
                public string SecondTestConnection { get { return ConfigurationManager.ConnectionStrings["SecondTestConnection"].ConnectionString; } }
            }
        }

    Next, I extended the partial class for easy access to the Configuration. However, you could just use the generated class file itself.

        using System;
        using System.Linq;
        using System.Xml.Linq;

        namespace MyProject.Web
        {
            public partial class Configurator
            {
                private static readonly Configurator Instance = new Configurator();

                public static Configurator For
                {
                    get { return Instance; }
                }
            }
        }

    Finally, in my example, I used the Configurator class like so:

        [TestMethod]
        public void Test_Web_Config()
        {
            var result = Configurator.For.ServiceUri;
            Assert.AreEqual(result, "http://localhost:30237/Service1/");
        }

    Read the article

  • Microsoft Build 2012 Day 1 Keynote Summary

    - by Tim Murphy
    So I have finally dried the tears after watching the keynote for Build 2012. This wasn’t because it was an emotional presentation, but because for the second year I missed the goodies. Each on-site attendee got a Surface RT, a Lumia 920 and a voucher for 100GB of SkyDrive storage.

    The event was opened with the announcement that in the three days since the launch of Windows 8 over 4 million upgrades have been sold. I don’t care who you are, that is an impressive stat. Ballmer then spent a fair amount of time remaking the case for the Windows and Windows Phone platforms, similar to what we have heard over the last two launch events.

    There were some cool, but non-essential demos. The one that was the most fun was the Perceptive Pixel 82” slate device. At first glance I wondered why I would ever want such a device, but then Ballmer explained its possible use for schools and boardrooms. That actually made sense.

    Then things got strange. Steve started explaining features that developers could leverage. Usually this type of information is left to the product leads. He focused on the integration with the Charms features such as Search and Share.

    Steve “Guggs” Guggenheim showed off an app from Disney called “Agent P”, based on Phineas and Ferb, that would appeal to my kids. Then he got to the meat of the presentation. We found out that you could add a tile that can be used to sell ad space. In the same vein we also found out that you could use Microsoft’s, PayPal’s or any commerce engine of your own creation or choosing.

    For those who are interested in sports, and especially in developing sports apps, there was a small presentation from Michael Bayle of ESPN. He introduced the ESPN app, which has tons of features. For the developers in the crowd he also mentioned that ESPN has an API available at developer.espn.com.

    During the launch events we were told apps were coming. In this presentation we were actually shown a scrolling list of logos and told about a couple of them. Ballmer mentioned specifically Twitter, SAP and DropBox. These are impressive names, and they were just a couple from the list.

    Steve Ballmer addressed the question of why you should develop for the Windows 8 platform. He feels that Microsoft has the best commercial terms for developers, a better way to build apps than other platforms and a variety of form factors. His key point though was the available volume of customers given the current Windows install base and assuming even flat growth of the platform. This he backed with a promise that Microsoft is going to do better at marketing and you won’t be able to avoid the ads that they are bringing out.

    The last section of the keynote was presented by Kevin Gallo from the Windows Phone team. This was the real reason I tuned into the webcast. He impressed upon those watching that the strength of developing for the Microsoft platform is the common programming model that now exists. While there are differences between form factor implementations, you can leverage code across them.

    He claimed that 90% of developer requests for Windows Phone 8 had been implemented. These include:

    - More controls with better performance
    - Better live tiles, including lock screen integration
    - Speech support in custom apps
    - Easier submission to the marketplace
    - App camera integration
    - VOIP and chat support
    - Bluetooth and NFC support
    - Native C++ development
    - Direct 3D development

    The quote from Kevin that stood out for me was that “Take your Dramamine and buckle your seatbelt type of games are coming to Windows Phone 8”. He backed this up by displaying a list of game development frameworks and then having Unity come out and do a demo.

    Ok, almost done … The last two things of note for me were the announcement that the SDK is immediately available at dev.windowsphone.com and that they were reducing the cost of an individual developer account to $8 for the next 8 days.

    Let the development commence.

    del.icio.us Tags: Build 2012,Windows 8,Windows Phone 8,Windows Phone

    Read the article

  • A story of Murphy – my technical issues at TechDays Switzerland #chtd

    - by Laurent Bugnion
    I had two sessions at the recent Swiss TechDays. While the first one (Advanced Development for Windows Phone 8) went extremely well (I think), I had a very annoying technical issue at the beginning of my second session.

    First let me add that I talked to Microsoft about it and I hope they will change a few things in the room assignment for next year. My two sessions were one right after the other, with only 15 minutes break to change room. I don’t mind having two sessions so close to each other, but I would really like them to be in the same room in order to avoid having to move my laptops (plural, that will become important later) and redoing the tech check. That being said, I am guilty of not checking where my talks would be before the day before the conference, and when I did notice, it was too late to change it.

    After my first session, I quickly moved to the other room and set up my main laptop, a Dell Precision. We tested the video output (VGA) and didn’t notice anything special. The projectors are using a fairly high resolution (kudos to the Basel conference center for not having old school 1024x768 projectors anymore, that makes Blend really hard to demo ;) but since everything went great during the first talk, I was not worried. In fact I even had some time to chat with some early attendees about my Microsoft Surface and the Samsung Slate 7, which I had carried with me in addition to the Precision. I just thought it would be nice to show the hardware that Windows 8 can run on, without thinking any further.

    When the session started, I immediately noticed that the main screen was not showing anything. I thought I had just forgotten to switch to “duplicate” for the video output, and did that with a quick Win-P. However it didn’t “hold”. After 2 seconds, it reverted back to a black display for my attendees. Then I started to really worry. We tried everything, switching from VGA to HDMI, changing the resolution, setting the projector as primary display, but nothing did the trick. The projector was just refusing to show my screen. Now, to show you how desperate I started to be, I even considered using the “extend” setting (which worked just fine) and using one of the feedback monitors on the floor, but really it was super cumbersome.

    Eventually, my last resort arrived: I started my Samsung Slate 7, which by chance has Visual Studio 12 and Blend 5 installed, plugged the HDMI projector into the dock (yes, I had the dock with me, which I usually don’t!), connected it to the internet (had to enter a long password for that), loaded the source code from my main machine using a USB stick and… finally started to give my presentation. All in all I think we lost about 10 minutes. Amongst the most horrible minutes of my whole life, truly (yes I am blessed, I didn’t have that many horrible minutes in my life ;)

    I really want to apologize to my attendees. We joked a bit during the attempts to resolve the issue, and the reactions I had after the session were all very nice and sympathetic. Only a handful of people left my session while I was having the issues, and I really don’t blame them (who knew how long the problem would last!!). But still, I probably talked at more than 60 sessions over the years, and this was by far my most painful moment.

    What did I learn?

    So what did I learn from this? Well, from now on I will always have my slate ready with the latest source code, internet connection and every tool I might need during the presentation.
This way, if I detect even a hint that the Precision might not work, I will just switch to the Slate. The experience of presenting on the slate is actually not bad at all, it is just a bit slow for my taste, but it does work. By the way, I will be posting the code and slides for my sessions very soon, I just need to “clean it and zip it”. Stay tuned, and thanks again for your patience in that horrible circumstance. Cheers Laurent   Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article

  • IE9 and the Mystery of the Broken Video Tag

    - by David Wesst
    I was very excited when Microsoft released the Internet Explorer 9 Release Candidate. As far as I was concerned, this was another nail in the coffin for IE6 and a step in the right direction for us .NET web developers, as our base camp was finally starting to support the latest and greatest future-web standards. Unfortunately, my celebration was short lived, as I soon hit a snag while loading up an HTML5 site I was building in Visual Studio 2010.

    The Mystery

    After updating Internet Explorer, I ran my HTML5 site that had the oh-so-lovely HTML5 video tag showing a video. Even though this worked in IE9 Beta, it appeared that IE9 RC could not load the same file. I figured that it was the video codec. Maybe IE9 RC no longer supported the video codec I used to encode my video. Here's the code I used:

        <video width="854" height="480" id="myOtherVideo" autoplay="" controls="">
            <source src="/DemoSite1/Media/big_buck_bunny.mp4"/>
            <div>
                <p>Your browser does not support HTML5 Video.</p>
            </div>
        </video>

    As you can see from the code, I had the "fail-safe" code inside the video tag. The idea there being that if the video tag, or the video files themselves, are not supported by the browser, my video should fail gracefully. What was even more strange was the fact that it worked in all the other HTML5 browsers that supported video.

    The Investigation

    Whoa! DJ, stop the music. How can any of that make sense? Would the IE team really take such huge strides forward only to forget to include a feature that was already in the beta? I don't think so. I did plenty of searching and asking around on the web, but could not seem to find anyone else having the same problem. Eventually I came across this post talking about declaring the MIME type in the .htaccess file. That got me thinking: does my web server support the video MIME type? I was using VS2010, so how do I know what kind of MIME types are supported by default? Still, my page hosted in Cassini (the web development server in VS2010) works on the other browsers. Why wouldn't it work with IE9 RC? To answer that, it was time to open up the upgraded toolbox known as the Developer Tools in IE9 and use the new Network tab.

    The Conclusion

    If you take a closer look at the results displayed from the Network tab, you can see that IE9 RC has interpreted the video file as text/html rather than video/mp4. To make this work, I decided to use IIS to debug my HTML5 web application by setting the web project's properties. Then, I added the MIME types that I want to support (i.e. video/mp4, video/ogg, video/webm). Et voila! The Mystery of the Broken Video Tag is solved.

    After Thoughts

    After solving the mystery, I still had the question about why my site worked in Chrome, Safari, and Firefox 3.6. After asking around, the best answer that I received was from my colleague Tyler Doerksen. He said that IE9 likely depends on the server telling it what kind of file it is downloading, rather than trying to read the metadata about the data it is trying to download before doing anything. I have no facts to back this up, but it makes sense to me. In a browser war where milliseconds can make your browser fall back a few places in the race for supremacy, maybe the IE team opted to depend on the server knowing what kind of content it is serving up. In any case, that is just an educated guess. If you have any comments, feel free to post them below.

    This post also appears at http://david.wes.st

    Read the article

  • Changes in Language Punctuation [closed]

    - by Wes Miller
    More social curiosity than actual programming question... (I got shot for posting this on Stack Overflow. They sent me here. At least I hope here is where they meant.) Based on the few responses I got before the content police ran me off Stack Overflow, I should note that I am legally blind, and neatness and consistency in programming are my best friends.

    A thousand years ago when I took my first programming class (Fortran 66), and a mere 500 years ago when I took my first C and C++ classes, there were some pretty standard punctuation practices across languages. I saw them in Basic (shudder), PL/1, PL/AS, Rexx, even Pascal. Ok, APL2 is not part of this discussion. Each language has its own peculiar punctuation: Pascal's periods, Fortran's comma-separated do loops, almost everybody else's semicolons. As I learned it, each language also has KEYWORDS (if, for, do, while, until, etc.) which are set off by whitespace (or the left margin): if, etc. Each language has functions, subroutines or whatever they're called. Some built-in, some user-coded. They were set off by function_name( parameters );. As in sqrt( x ) or rand( y );

    Lately, there seems to be a new set of punctuation rules. Especially in C++, where initializers get glued onto the end of variable declarations: int x(0); or auto_ptr p(new gizmo); This usually, briefly, fools me into thinking someone is declaring a function prototype or using a function as an integer. Then "if" and "for" seem to have grown parens: if(true) for(;;), etc. Since when did keywords become functions? I realize some people think they ARE functions with iterators as parameters. But if "for" is a function, where did the arg-separating commas go? And finally, functions seem to have shed their parens: sqrt (2) select (...)

    I know, I know, loosening whitespace rules is good. Keep reading.

    Question: when did the old ways disappear and this new way come into vogue? Does anyone besides me find it irritating to read, and miss the information that the placement of punctuation used to convey? I know full well that K&R put the { at the end of the "if" or "for" to save a byte here and there. Can't use that excuse here. Space as an excuse for loss of readability died as HDD space soared past 100 MiB.

    Your thoughts are solicited. If there is a good reason to do this, I'll gladly learn it and maybe in another 50 years I'll get used to it. Of course it's good that compilers recognize these (IMHO) typos and keep right on going, but just because you CAN code it that way doesn't mean you HAVE to, right?

    Read the article

  • How views are changing in future versions of SQL

    - by Rob Farley
    April is here, and this weekend, SQL v11.0 (previously known as Denali, now known as SQL Server 2012) reaches general availability. And so I thought I’d share some news about what’s coming next. I didn’t hear this at the MVP Summit earlier this year (where there was lots of NDA information given, but I didn’t go), so I think I’m free to share it.

    I’ve written before about CTEs being query-scoped views. Well, the actual story goes a bit further, and will continue to develop in future versions. A CTE is like a “temporary temporary view”, scoped to a single query. Due to globally-scoped temporary objects using a two-hashes naming style, and session-scoped (or ‘local’) temporary objects a one-hash naming style, this query-scoped temporary object uses a cunning zero-hash naming style. We see this implied in Books Online in the CREATE TABLE page, but as we know, temporary views are not yet supported in SQL Server.

    However, in a breakaway from ANSI-SQL, Microsoft is moving towards consistency with their naming. We know that a CTE is a “common table expression” – this is proving to be more strategic than you may have appreciated. Within the Microsoft product group, the term “Table Expression” is far more widely used than just CTEs. Anything that can be used in a FROM clause is referred to as a Table Expression, so long as it doesn’t actually store data (which would make it a Table, rather than a Table Expression). You can see this is not just restricted to the product group by doing an internet search for how the term is used without ‘common’.

    In the past, Books Online has referred to a view as a “virtual table” (but notice that there is no SQL 2012 version of this page). However, it was generally decided that “virtual table” was a poor name because it wasn’t completely accurate, and it’s typically accepted that virtualisation and SQL is frowned upon. That page I linked to says “or stored query”, which is slightly better, but when the SQL 2012 version of that page is actually published, the line will be changed to read: “A view is a stored table expression (STE)”.

    This change will be the first of many. During the SQL 2012 R2 release, the keyword VIEW will become deprecated (this will be SQL v11 SP1.5). Three versions later, in SQL 14.5, you will need to be in compatibility mode 140 to allow “CREATE VIEW” to work. Also consistent with Microsoft’s deprecation policy, the execution of any query that refers to an object created as a view (rather than the new “CREATE STE”) will cause a Deprecation Event to fire. This will all be in preparation for the introduction of Single-Column Table Expressions (to be introduced in SQL 17.3 SP6), which will finally shut up those people waiting for a decent implementation of Inline Scalar Functions. And of course, CTEs are “Common” because the Table Expression definition needs to be repeated over and over throughout a stored procedure.

    ...or so I think I heard at some point. Oh, and congratulations to all the new MVPs on this April 1st.

    @rob_farley

    Read the article

  • Creating a Corporate Data Hub

    - by BuckWoody
    The Windows Azure Marketplace has a rich assortment of data and software offerings for you to use – a type of Software as a Service (SaaS) for IT workers, not necessarily for end-users. Among those offerings is the “Data Hub” – a codename for a project that, ironically, actually does what the codename says.

    In many of our organizations, we have multiple data quality issues. Finding data is one problem, but finding it just once is often a bigger problem. Lots of departments and even individuals have stored the same data more than once, and in some cases, made changes to one of the copies. It’s difficult to know which location or version of the data is authoritative. Then there’s the problem of accessing the data. It’s fairly straightforward to publish a database, share or other location internally to store the data. But then you have to figure out who owns it, how it is controlled, and pass out the various connection strings to those who want to use it. And then you need to figure out how to let folks access the internal data externally – bringing up all kinds of security issues. Finally, in many cases our user community wants us to combine data from the internal sources with external data, bringing the security, strings, and exploration features up all over again.

    Enter the Data Hub. This is an online offering, where you assign an administrator and data stewards. You import the data into the service, and it’s available to you - and only you and your organization if you wish. The basic steps for this service are to set up the portal for your company, assign administrators and permissions, and then you assign data areas and import data into them. From there you make them discoverable, and then you have multiple options by which you or your users can access that data. You’re then able, if you wish, to combine that data with other data in one location.

    So how does all that work? What about security? Is it really that easy? And can you really move the data definition off to the Subject Matter Experts (SMEs) that know the particular data stack better than the IT team does? Well, nothing good is easy – but using the Data Hub is actually pretty simple. I’ll give you a link in a moment where you can sign up and try this yourself. Once you sign up, you assign an administrator. From there you’ll create data areas, and then use a simple interface to bring the data in. All of this is done in a portal interface – nothing to install, configure, update or manage. After the data is entered in, and you’ve assigned meta-data to describe it, your users have multiple options to access it. They can simply use the portal – which actually has powerful visualizations you can use on any platform, even mobile phones or tablets.

    Your users can also hit the data with Excel – which gives them ultimate flexibility for display, all while using an authoritative, single reference for the data. Since the service is online, they can do this wherever they are – given the proper authentication and permissions. You can also hit the service with simple API calls, like this one from C#: http://msdn.microsoft.com/en-us/library/hh921924  You can make HTTP calls instead of code, and the data can even be exposed as an OData Feed. As you can see, there are a lot of options.

    You can check out the offering here: http://www.microsoft.com/en-us/sqlazurelabs/labs/data-hub.aspx and you can read the documentation here: http://msdn.microsoft.com/en-us/library/hh921938

    Read the article

  • RPi and Java Embedded GPIO: Big Data and Java Technology

    - by hinkmond
    Java Embedded and Big Data go hand-in-hand, especially as demonstrated by prototyping on a Raspberry Pi to show how well the Java Embedded platform can perform on a small embedded device, which then becomes the proof-of-concept for industrial controllers, medical equipment, networking gear or any type of sensor-connected device generating large amounts of data. The key is a fast and reliable way to access that data using Java technology.

    In the previous blog posts you've seen the integration of a static electricity sensor and the Raspberry Pi through the GPIO port, then accessing that data through Java Embedded code. It's important to point out how this works and why it works well with Java code.

    First, the version of Linux (Debian Wheezy/Raspbian) that is found on the RPi has a very convenient way to access the GPIO ports through the use of Linux OS managed file handles. This is key in avoiding terrible and complex coding using register manipulation in C code, or having to program in a less elegant and clumsy procedural scripting language such as Python. Instead, using Java Embedded allows a fast way to access those GPIO ports through those same Linux file handles.

    Java already has a very easy-to-program way to access file handles with a high degree of performance that matches direct access of those file handles with the Linux OS. Using the Java API java.io.FileWriter lets us open the same file handles that the Linux OS has for accessing the GPIO ports. Then, by first resetting the ports using the unexport and export file handles, we can initialize them for easy use in a Java app.

        // Open file handles to GPIO port unexport and export controls
        FileWriter unexportFile = new FileWriter("/sys/class/gpio/unexport");
        FileWriter exportFile = new FileWriter("/sys/class/gpio/export");
        ...
        // Reset the port
        unexportFile.write(gpioChannel);
        unexportFile.flush();

        // Set the port for use
        exportFile.write(gpioChannel);
        exportFile.flush();

    Then, another set of file handles can be used by the Java app to control the direction of the GPIO port by writing either "in" or "out" to the direction file handle.

        // Open file handle to input/output direction control of port
        FileWriter directionFile =
            new FileWriter("/sys/class/gpio/gpio" + gpioChannel + "/direction");

        // Set port for input
        directionFile.write("in");   // Or, use "out" for output
        directionFile.flush();

    And, finally, a RandomAccessFile handle can be used with a high degree of performance on par with native C code (only milliseconds to read in data and write out data) with low overhead (unlike Python) to manipulate the data going in and out on the GPIO port, while the object-oriented nature of Java programming allows for an easy way to construct complex analytic software around that data access functionality to the external world.

        RandomAccessFile[] raf = new RandomAccessFile[GpioChannels.length];
        ...
        // Reset file seek pointer to read latest value of GPIO port
        raf[channum].seek(0);
        raf[channum].read(inBytes);
        inLine = new String(inBytes);

    It's Big Data from sensors and industrial/medical/networking equipment meeting complex analytical software on a small, constrained device (like a Linux/ARM RPi), where Java Embedded allows you to shine as an Embedded Device Software Designer.

    Hinkmond

    Read the article

  • SQL Server IO handling mechanism can be severely affected by high CPU usage

    - by sqlworkshops
    Are you using an SSD or a SAN/NAS-based storage solution and sporadically observe SQL Server experiencing high IO wait times, or from time to time does your DAS/HDD become very slow according to SQL Server statistics? Read on… I need your help to up vote my connect item – https://connect.microsoft.com/SQLServer/feedback/details/744650/sql-server-io-handling-mechanism-can-be-severely-affected-by-high-cpu-usage. Instead of taking a few seconds, queries could take minutes/hours to complete when the CPU is busy.

    In SQL Server, when a query / request needs to read data that is not in the data cache, or when the request has to write to disk, like transaction log records, the request / task will queue up the IO operation and wait for it to complete (task in suspended state; this wait time is the resource wait time). When the IO operation is complete, the task will be queued to run on the CPU. If the CPU is busy executing other tasks, this task will wait (task in runnable state) until other tasks in the queue either complete or get suspended due to waits or exhaust their quantum of 4ms (this is the signal wait time, which along with resource wait time will increase the overall wait time). When the CPU becomes free, the task will finally be run on the CPU (task in running state).

    The signal wait time can be up to 4ms per runnable task; this is by design. So if a CPU has 5 runnable tasks in the queue, then this query, after the resource becomes available, might wait up to a maximum of 5 x 4ms = 20ms in the runnable state (normally less, as other tasks might not use the full quantum).

    In case the CPU usage is high, let’s say many CPU-intensive queries are running on the instance, there is a possibility that the IO operations that are completed at the hardware and operating system level are not yet processed by SQL Server, keeping the task in the resource wait state for longer than necessary. In case of an SSD, the IO operation might even complete in less than a millisecond, but it might take SQL Server 100s of milliseconds, for instance, to process the completed IO operation. For example, let’s say you have a user inserting 500 rows in individual transactions. When the transaction log is on an SSD or battery-backed-up controller that has write cache enabled, all of these inserts will complete in 100 to 200ms. With a CPU-intensive parallel query executing across all CPU cores, the same inserts might take minutes to complete. WRITELOG wait time will be very high in this case (both under sys.dm_io_virtual_file_stats and sys.dm_os_wait_stats). In addition you will notice a large number of WRITELOG waits, since log records are written by the LOG WRITER, and hence very high signal_wait_time_ms, leading to more query delays. However, the Performance Monitor counter PhysicalDisk, Avg. Disk sec/Write will report very low latency times.

    Such delayed IO handling also occurs with read operations, with artificially very high PAGEIOLATCH_SH wait time (with the number of PAGEIOLATCH_SH waits remaining the same). This problem will manifest more and more as customers start using SSD-based storage for SQL Server, since they drive the CPU usage to the limits with faster IOs. We have a few workarounds for specific scenarios, but we think Microsoft should resolve this issue at the product level.
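    If you want a quick way to see whether this pattern is visible on your own instance, here is a hedged sketch (not from the original post) that splits the cumulative wait time for the wait types discussed above into their signal (waiting for CPU) and resource components using sys.dm_os_wait_stats:

        -- Signal wait time is the time a task spent waiting for CPU after its wait ended;
        -- a large signal share on IO-related waits is consistent with the behavior described above.
        SELECT  wait_type,
                waiting_tasks_count,
                wait_time_ms,
                signal_wait_time_ms,
                wait_time_ms - signal_wait_time_ms AS resource_wait_time_ms
        FROM    sys.dm_os_wait_stats
        WHERE   wait_type IN ('WRITELOG', 'PAGEIOLATCH_SH', 'PAGEIOLATCH_EX')
        ORDER BY wait_time_ms DESC;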
    We have a connect item open – https://connect.microsoft.com/SQLServer/feedback/details/744650/sql-server-io-handling-mechanism-can-be-severely-affected-by-high-cpu-usage - (with example scripts) to reproduce this behavior, please up vote the item so the issue will be addressed by the SQL Server product team soon.

    Thanks for your help and best regards,
    Ramesh Meyyappan
    Home: www.sqlworkshops.com
    LinkedIn: http://at.linkedin.com/in/rmeyyappan

    Read the article

  • How do you update live web sites with code changes?

    - by Aaron Anodide
    I know this is a very basic question. If someone could humor me and tell me how they would handle this, I'd be grateful. I decided to post this because I am about to install SynchToy to remedy the issue below, and I feel a bit unprofessional using a "Toy", but I can't think of a better way. Many times I find when I am in this situation, I am missing some painfully obvious way to do things - this comes from being the only developer in the company.

    - ASP.NET web application developed on my computer at work
    - Solution has 2 projects: Website (files) and WebsiteLib (C#/dll)
    - Using a Git repository
    - Deployed on a GoGrid 2008R2 web server

    Deployment:

    1. Make code changes.
    2. Push to Git.
    3. Remote desktop to server.
    4. Pull from Git.
    5. Overwrite the live files by dragging/dropping with Windows Explorer.

    In step 5 I delete all the files from the website root... this can't be a good thing to do. That's why I am about to install SynchToy...

    UPDATE: THANKS for all the useful responses. I can't pick which one to mark as the answer - between using a web deployment and the other ideas below, it looks like I have several useful suggestions:

    - Web Project = whole site packaged into a single DLL - downside for me: I can't push simple updates - being a lone developer in a company of 50, this remains something that is simpler at times.
    - Pulling straight from SCM into the web root of the site - I originally didn't do this out of fear that my SCM hidden directory might end up being exposed, but the answers here helped me get over that (although I still don't like having one more thing to worry about forgetting to make sure is still true over time).
    - Using a web farm, and systematically deploying to nodes - this is the ideal solution for zero downtime, which is actually something I care about since the site is essentially a real-time revenue source for my company - I might have a hard time convincing them to double the cost of the servers though.
    - Finally, the reinforcement of the basic principle that there needs to be a single-click deployment for the site OR ELSE THERE IS SOMETHING WRONG is probably the most useful thing I got out of the answers.

    UPDATE 2: I thought I'd come back to this and update with the actual solution that's been in place for many months now and is working perfectly (for my single web server solution). The process I use is:

    1. Make code changes
    2. Push to Git
    3. Remote desktop to server
    4. Pull from Git
    5. Run the following batch script:

        cd C:\Users\Administrator
        %systemroot%\system32\inetsrv\appcmd.exe stop site "/site.name:Default Web Site"
        robocopy Documents\code\da\1\work\Tree\LendingTreeWebSite1 c:\inetpub\wwwroot /E /XF connectionsconfig Web.config
        %systemroot%\system32\inetsrv\appcmd.exe start site "/site.name:Default Web Site"

    As you can see, this brings the site down, uses robocopy to intelligently copy the files that have changed, then brings the site back up. It typically runs in less than 2 seconds. Since peak traffic on this site is about 2 requests per second, missing 4 requests per site update is acceptable. Since I've gotten more proficient with Git I've found that the first four steps above being a "manual process" is also acceptable, although I'm sure I could roll the whole thing into a single click if I wanted to.

    The documentation for AppCmd.exe is here. The documentation for Robocopy is here.

    Read the article

  • Crime Scene Investigation: SQL Server

    - by Rodney Landrum
    “The packages are running slower in Prod than they are in Dev.” My week began with this simple declaration from one of our lead BI developers, quickly followed by an emailed spreadsheet demonstrating that, over 5 executions, an extensive ETL process was running on average 630 seconds faster on Dev than on Prod. The situation needed some scientific investigation to determine why the same code, the same data, the same schema would yield consistently slower results on a more powerful server. Prod had yet to be officially christened with a “Go Live” date, so I had the time, and having recently been binge-watching CSI: New York, I also had the inclination.

    An inspection of the two systems, Prod and Dev, revealed the first surprise: although Prod was indeed a “bigger” system, with double the amount of RAM of Dev, the latter actually had twice as many processor cores. On neither system did I see much sign of resources being heavily taxed while the ETL process was running. Without any real supporting evidence, I jumped to a conclusion that my years of performance tuning should have helped me avoid: that the hardware differences explained the better performance on Dev. We spent time setting up a Test system, similarly scoped to Prod except with 4 times the cores, and ported everything across. The results of our careful benchmarks left us truly bemused; the ETL process on the new server was slower than on both other systems. We burned more time tweaking server configurations, monitoring IO and network latency, several times believing we’d uncovered the smoking gun, until the results of subsequent test runs pitched us back into confusion.

    Finally, I decided, enough was enough. Hadn’t I learned very early in my DBA career that almost all bottlenecks were caused by code and database design, not hardware? It was time to get back to basics. With over 100 SSIS packages and hundreds of queries, each handling specific tasks such as file loads, bulk inserts, transforms, logging, and so on, the task seemed formidable. And yet, after barely an hour spent with Profiler, Extended Events, and wait statistics DMVs, I had a lead in the shape of a query that joined three tables containing millions of rows, returned 3279 results, but performed 239K logical reads. As soon as I looked at the execution plans for the query in Dev and Test I saw the culprit: an implicit conversion warning on a join predicate field that was numeric in one table and a varchar(50) in another! I turned this information over to the BI developers, who quickly resolved the data type mismatches and found and fixed “several” others as well. After the schema changes, the same query with the same databases ran in under 1 second on all systems and reduced the logical reads down to fewer than 300.

    The analysis also revealed that on Dev the ETL task was pulling data across a LAN, whereas Prod and Test were connected across a slower WAN, in large part explaining why the same process ran slower on the latter two systems. Loading the data locally on Prod delivered a further 20% gain in performance.

    As we progress through our DBA careers we learn valuable lessons. Sometimes, with a project deadline looming and pressure mounting, we choose to forget them. I was close to giving in to the temptation to throw more hardware at the problem. I’m pleased at least that I resisted, though I still kick myself for not looking at the code on day one.
It can seem a daunting prospect to return to the fundamentals of the code so close to roll out, but with the right tools, and surprisingly little time, you can collect the evidence that reveals the true problem. It is a lesson I trust I will remember for my next 20 years as a DBA, if I’m ever again tempted to bypass the evidence.
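    For readers who have not hit this particular culprit before, here is a small, hypothetical sketch (the table and column names are invented for illustration, not taken from the investigation above) of the kind of join predicate that produces an implicit conversion warning, and the type of schema change that removes it:

        -- Hypothetical repro: the join compares an int column to a varchar(50) column,
        -- so SQL Server implicitly converts one side on every row, which can prevent
        -- index seeks and drive up logical reads.
        SELECT  s.StageID, o.OrderTotal
        FROM    dbo.StagedOrders AS s          -- s.OrderRef is varchar(50)
        JOIN    dbo.Orders AS o                -- o.OrderID is int
            ON  s.OrderRef = o.OrderID;        -- implicit conversion warning in the plan

        -- The fix described in the story was a schema change: align the data types so the
        -- join compares like with like, and the warning (and the extra reads) go away.
        ALTER TABLE dbo.StagedOrders
        ALTER COLUMN OrderRef int NOT NULL;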

    Read the article

  • History of Mobile Technology

    - by David Dorf
    Over the last ten years, mobile phones have gone through several incremental technology leaps that have added capabilities that impact the retail industry. I've listed the six major ones below, along with their long-lasting impact.

    1. Location. In the US, the FCC required mobile phones to implement E911 (emergency calls) by 2006, requiring the caller to be located to within 300 meters. Back in 2000, GPS was opened up for civilian use, and by 2004 Qualcomm had figured out how to use GPS in mobile phones. So mobile operators moved from cell tower triangulation to GPS, principally for E911. But then lots of other uses became apparent, especially navigation. The earliest mobile apps from retailers made it easy to find nearby stores, and companies are looking at ways to use WiFi triangulation inside stores.

    2. Computer Vision. In 1997 Philippe Kahn shared a photo of his newborn using a mobile phone, thus launching the popularity of instant visual communications. Over the years the quality of the cameras got better, reaching the point where barcodes could be read around 2008. That's when Occipital came on the scene with their Red Laser application, which was eventually acquired by eBay. This opened up the ability for consumers to easily price compare inside stores. Other interesting apps included Tesco's Wine Finder and Amazon's Price Checker, both allowing products to be identified by picture.

    3. Augmented Reality. Once the mobile phone had GPS, a video camera, and compass functionality, it was suddenly possible to overlay digital information on the screen in real time. Yelp, which was using GPS to find nearby merchants, created a backdoor called Monocle on the iPhone that showed nearby merchants overlaid on the video camera view. Today AR apps are mostly used by retailers for marketing, like Moosejaw's app that undresses models in their catalog.

    4. Geo-Fencing. So if we're able to track the location of a mobile phone, why not use that context to offer timely information? My first experience with geo-fencing came courtesy of North Face, the outdoor enthusiast store. When a mobile phone enters a predetermined area, like near a store, a text message is sent to the phone with an offer or useful information. Of course retailers can geo-fence their competitors as well and find out which customers aren't so loyal.

    5. Digital Wallet. Mobile payments leverage different technologies such as NFC, QR codes, Bluetooth, and SMS to facilitate communication between the consumer's phone and the retailer's point-of-sale. The key here is the potential to consolidate loyalty cards, coupons, and bank cards into the mobile phone and enable faster checkout. Nobody does this better than Starbucks today, but McDonald's and Dunkin' Donuts aren't far behind. Google, Isis, PayPal, Square, and MCX are all vying for leadership in this area. If NFC does finally take off, it will be leveraged by retailers in more places than just the POS.

    6. Voice Response. Mobile phones have had the ability to interpret simple voice commands for a while, but Google and Amazon were the first to use voice to allow searches for products. Allowing searches by text, barcode, and voice makes it easy to comparison shop in the aisles. Walmart even uses voice to build shopping lists, and if the Siri API is ever opened we could see lots more innovation in this area.

    Read the article

  • Friday Tips #6, Part 1

    - by Chris Kawalek
    We have a two-parter this week, with this post focusing on desktop virtualization and the next one on server virtualization.

    Question: Why would I use the Oracle Secure Global Desktop Secure Gateway?

    Answer by Rick Butland, Principal Sales Consultant, Oracle Desktop Virtualization: Well, for the benefit of those who might not be familiar with client connections in Oracle Secure Global Desktop (SGD), let me back up and briefly explain. An SGD client connects to an SGD server using two distinct protocols, which, by default, require two distinct TCP ports. The first is the HTTP protocol, used by the web browser to connect to the SGD webserver on TCP port 80, or, if secure connections are enabled (SSL/TLS), then TCP port 443, commonly identified as the "HTTPS" port, that is, "SSL-encrypted HTTP." The second protocol from the client to the server is the Adaptive Internet Protocol, or AIP, which is used for displaying applications, transferring drive mapping data, print jobs, and so on. By default, AIP uses TCP port 3144, or port 5307 when SSL is enabled. When SGD clients need to access SGD over a firewall, the ports that AIP requires are typically "closed"; and most administrators are reluctant, to put it mildly, to change their firewall configurations to allow AIP traffic on 3144/5307.

    To avoid this problem, SGD introduced "Firewall Forwarding", a technique where, in effect, both HTTP and AIP traffic are "multiplexed" onto a single "well-known" TCP port, that is, port 443, the HTTPS port. This is also known as single-port firewall traversal. This technique takes advantage of the fact that, as a "well-known service", port 443 is usually "open", allowing (encrypted) traffic to pass. At the target SGD server, the two protocols are de-multiplexed and routed appropriately.

    The Secure Gateway was developed in response to requirements from customers for SGD to support multi-stage DMZs, and to avoid exposing SGD servers and the information they contain directly to connections from the Internet. The Secure Gateway acts as a reverse proxy in the first tier of the DMZ, accepting, authenticating, and terminating incoming client connections, and then re-encrypting the connections and proxying them, routing them on to SGD servers deeper in the network. The client no longer needs to know the name/IP address of the SGD servers in their network; they connect to the gateway only. The gateway takes care of those internal network details.

    The Secure Gateway supports the same "single-port firewall" capability as does "Firewall Forwarding", but offers the additional advantage of load-balancing incoming client connections amongst SGD array members, which could be cumbersome without a forward-deployed secure gateway. Load-balancing weights and policies can be monitored and tuned using the "Balancer Manager" application, and Apache mod_proxy_balancer directives.

    Going forward, our architects recommend the use of the Secure Gateway over "Firewall Forwarding" for single-port firewall traversal, due to its architectural advantages, its greater flexibility and enhanced features. Finally, it should be noted that the Secure Gateway is not separately priced; any licensed SGD customer may use the Secure Gateway component at no additional cost. For more information, see the "Secure Gateway Administrator's Guide".

    Read the article

  • Where to store front-end data for "object calculator"

    - by Justin Grahn
    I have recently completed a language library that acts as a giant filter for food items, and flows a bit like this: Products -> Recipes -> MenuItems -> Meals and finally, upon submission, creates an Order. I have also completed a database structure that stores all the pertinent information for each class, and it seems to fit my needs. The issue I'm having is linking the two. I imagined all of the information being local to each instance of the product, where there exists one backend user who edits and manipulates data, and multiple front-end users who select their Meal(s) to create an Order. Ideally, all of the front-end users would have all of this information stored locally within the library, and would update the library on startup from a database. How should I go about storing the data so that I can load it into the library every time the user opens the application? Do I package a database onboard and just load and populate every time? The only method I can currently conceive of, even if I only have 500 possible Product objects, would require me to foreach the list for every Product that I need to match to a Recipe, and so on and so forth, every time I relaunch the program, which seems like a lot of wasteful loading. Here is a general flow of my architecture:
    Products:
        public class Product : IPortionable
        {
            public Product(string n, uint pNumber = 0)
            {
                name = n;
                productNumber = pNumber;
            }

            public string name { get; set; }
            public uint productNumber { get; set; }
        }
    Recipes:
        public Recipe(string n, decimal yieldAmt, Volume.Unit unit)
        {
            name = n;
            yield = new Volume(yieldAmt, unit);
            yield.ConvertUnit();
        }

        /// <summary>
        /// Creates a new ingredient object
        /// </summary>
        /// <param name="n">Name</param>
        /// <param name="yieldAmt">Recipe Yield</param>
        /// <param name="unit">Unit of Yield</param>
        public Recipe(string n, decimal yieldAmt, Weight.Unit unit)
        {
            name = n;
            yield = new Weight(yieldAmt, unit);
        }

        public Recipe(Recipe r)
        {
            name = r.name;
            yield = r.yield;
            ingredients = r.ingredients;
        }

        public string name { get; set; }
        public IMeasure yield;
        public Dictionary<IPortionable, IMeasure> ingredients = new Dictionary<IPortionable, IMeasure>();
    MenuItems:
        public abstract class MenuItem : IScalable
        {
            public static string title = null;
            public string name { get; set; }
            public decimal maxPortionSize { get; set; }
            public decimal minPortionSize { get; set; }
            public Dictionary<IPortionable, IMeasure> ingredients = new Dictionary<IPortionable, IMeasure>();
        }
    and Meal:
        public class Meal
        {
            public Meal(int guests)
            {
                guestCount = guests;
            }

            public int guestCount { get; private set; }

            //TODO: Make a new MainCourse class that holds pasta and Entree
            public Dictionary<string, int> counts = new Dictionary<string, int>()
            {
                {MainCourse.title, 0},
                {Side.title, 0},
                {Appetizer.title, 0}
            };

            public List<MenuItem> items = new List<MenuItem>();
        }
    The database just stores and links each of these basic names and amounts together using IDs (RecipeID, ProductID and MenuItemID).

    Read the article

  • Ubiquitous BIP

    - by Tim Dexter
    The last number I heard from Mike and the PM team was that BIP is now embedded in more than 40 Oracle products. That's a lot of products to keep track of and to help out with new releases, etc. It's interesting to see how internal Oracle product groups have integrated BIP into their products. Just as you might integrate BIP, they have had to make a choice about how to integrate.
    1. Library level - BIP is a pure Java app, and at the bottom of the architecture is a group of Java libraries that expose APIs you can use. They fall into three main areas: data extraction, template processing and formatting, and delivery. There are post-processing capabilities, but those APIs are embedded within the template processing libraries. Taking this integration route, you are going to need to manage templates, data extraction and processing. You'll have your own UI to allow users to control all of this for themselves. Ultimate control, but some effort to build and maintain. I have been trawling some of the products during a coffee break. I found a great post on the reporting capabilities provided by BIP in the records management product within WebCenter Content 11g. This integration falls into the first category: content manager looks after the report artifacts itself and provides you the UI to manage and run the reports.
    2. Web Service level - further up in the stack is the web service layer. This sits on the BI Publisher server as a set of services; runReport and scheduleReport are the main protagonists. However, you can also manage the reports and users (locally managed) on the server, and the catalog itself, via the services layer. Taking this route, you still need to provide the user interface to choose reports and run them, but the creation and management of the reports is all handled by the Publisher server. I have worked with a few customers on this approach. The web services provide the ability to retrieve a list of reports the user can access, then the parameters and LOVs for the selected report, and finally a service to submit the report on the server.
    3. Embedded BIP server UI - the final level is not so well supported yet. You can currently embed a report and its various levels of surrounding 'chrome' inside another HTML-based application using a URL. Check the docs here. The look and feel can be customized but, again, not easily, nor is it documented. I have messed with running the server pages inside an IFRAME; not bad, but not great. Taking this path should present the least amount of effort on your part to get BIP integrated, but there are a few gotchas you need to get around.
    So, a reasonable number of choices with varying amounts of effort involved. There is another option coming soon for all you ADF developers out there: the ability to drop a BIP report into your application pages. But that's for another post.

    Read the article

  • Efficiently separating Read/Compute/Write steps for concurrent processing of entities in Entity/Component systems

    - by TravisG
    Setup: I have an entity-component architecture where Entities can have a set of attributes (which are pure data with no behavior) and there exist systems that run the entity logic which act on that data. Essentially, in somewhat pseudo-code:
    Entity {
        id;
        map<id_type, Attribute> attributes;
    }
    System {
        update();
        vector<Entity> entities;
    }
    A system that just moves along all entities at a constant rate might be:
    MovementSystem extends System {
        update() {
            for each entity in entities
                position = entity.attributes["position"];
                position += vec3(1,1,1);
        }
    }
    Essentially, I'm trying to parallelise update() as efficiently as possible. This can be done by running entire systems in parallel, or by giving each update() of one system a couple of components so different threads can execute the update of the same system, but for a different subset of entities registered with that system.
    Problem: In reality, these systems sometimes require that entities interact (read/write data from/to each other), sometimes within the same system (e.g. an AI system that reads state from other entities surrounding the currently processed entity), but sometimes between different systems that depend on each other (i.e. a movement system that requires data from a system that processes user input). Now, when trying to parallelize the update phases of entity/component systems, the phases in which data (components/attributes) from Entities are read and used to compute something, and the phase where the modified data is written back to entities, need to be separated in order to avoid data races. Otherwise the only way (not taking into account just "critical section"ing everything) to avoid them is to serialize parts of the update process that depend on other parts. This seems ugly.
    To me it would seem more elegant to be able to (ideally) have all processing running in parallel, where a system may read data from all entities as it wishes, but doesn't write modifications to that data back until some later point. The fact that this is even possible is based on the assumption that modification write-backs are usually very small in complexity and don't require much performance, whereas computations are very expensive (relatively). So the overhead added by a delayed-write phase might be evened out by more efficient updating of entities (by having threads work a greater share of the time instead of waiting).
    A concrete example of this might be a system that updates physics. The system needs to both read and write a lot of data to and from entities. Optimally, there would be a system in place where all available threads update a subset of all entities registered with the physics system. In the case of the physics system this isn't trivially possible because of race conditions. So without a workaround, we would have to find other systems to run in parallel (which don't modify the same data as the physics system), otherwise the remaining threads are waiting and wasting time. However, that has disadvantages: Practically, the L3 cache is pretty much always better utilized when updating a large system with multiple threads, as opposed to multiple systems at once, which all act on different sets of data. Finding and assembling other systems to run in parallel can be extremely time consuming to design well enough to optimize performance. Sometimes, it might even not be possible at all because a system just depends on data that is touched by all other systems.
    Solution? In my thinking, a possible solution would be a system where reading/updating and writing of data is separated, so that in one expensive phase, systems only read data and compute what they need to compute, and then in a separate, performance-wise cheap write phase, attributes of entities that needed to be modified are finally written back to the entities.
    The Question: How might such a system be implemented to achieve optimal performance, as well as making programmer life easier? What are the implementation details of such a system, and what might have to be changed in the existing EC architecture to accommodate this solution?
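    One hedged way to realize the read/compute/write split described above is to have systems record their intended modifications into a command buffer during the parallel read/compute phase, then apply all buffered writes in a short, serial write phase. The C++ sketch below is illustrative only, not a statement of how it must be done: the type and member names are invented, and a real engine would likely use per-thread buffers (avoiding the lock shown here) and partition the apply step by entity.

    #include <functional>
    #include <mutex>
    #include <unordered_map>
    #include <utility>
    #include <vector>

    struct Vec3 { float x = 0, y = 0, z = 0; };

    struct Entity {
        int id = 0;
        Vec3 position;   // a single attribute, kept simple for the sketch
    };

    // A command produced during the read/compute phase; applied later in the write phase.
    using Command = std::function<void(Entity&)>;

    class DeferredWriteBuffer {
    public:
        void record(int entityId, Command cmd) {
            std::lock_guard<std::mutex> lock(mutex_);
            pending_.push_back({entityId, std::move(cmd)});
        }
        // Write phase: cheap, runs after all systems have finished computing.
        void apply(std::unordered_map<int, Entity>& entities) {
            for (auto& [id, cmd] : pending_) {
                auto it = entities.find(id);
                if (it != entities.end()) cmd(it->second);
            }
            pending_.clear();
        }
    private:
        std::mutex mutex_;
        std::vector<std::pair<int, Command>> pending_;
    };

    // Read/compute phase of a movement system: reads freely, never mutates entities directly.
    void movementSystemUpdate(const std::unordered_map<int, Entity>& entities,
                              DeferredWriteBuffer& writes) {
        for (const auto& [id, e] : entities) {
            Vec3 delta{1, 1, 1};                         // the expensive computation would go here
            writes.record(id, [delta](Entity& target) { // deferred modification
                target.position.x += delta.x;
                target.position.y += delta.y;
                target.position.z += delta.z;
            });
        }
    }

    int main() {
        std::unordered_map<int, Entity> entities{{1, {1}}, {2, {2}}};
        DeferredWriteBuffer writes;
        movementSystemUpdate(entities, writes);  // could run on many threads over entity slices
        writes.apply(entities);                  // single, cheap write phase
    }

    The design point being illustrated is simply that update() never mutates shared entity state directly, so any number of systems or threads can read concurrently during the expensive phase.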

    Read the article

  • Lead, Follow, or Get out of the way

    - by Daniel Moth
    This is one of the sayings (attributed to Thomas Paine) that totally resonated with me from the first time I heard it, which was only 3 years ago during some training course at work: "Lead, Follow, or Get out of the way". You'll find many books with this title and you'll find it quoted by politicians and other leaders in various countries at various times... the quote is open to interpretation and works on many levels.
    To set the tone of what this means to me, I'll use a simple micro example: In any given conversation, you are either leading it or following it, at different times/snapshots of the conversation. If you are not willing or able to lead it, and you are not willing or able to follow it, then you should depart. The bad alternative which this guidance encourages you NOT to do is to stick around and obstruct progress by not following, not leading, and simply complaining or trying to derail the discussion in no particular direction.
    The same pattern applies to your position/role at work. Either follow your management/leadership team, or try to lead them to what you think is a better place, or change jobs. Don't stick around complaining about the direction things are going, while not actively trying to either change things or make peace with it. In the previous paragraph you can replace the word "your management" with "the people reporting to you" and the guidance still holds. Either lead your direct reports to where you think they should go, or follow their lead, or change jobs. Complaining about folks not taking direction while doing nothing is not a maintainable state.
    To me this quote is not about a permanent state; it is not about some people always leading and some always following: It is about a role/hat that anybody can play/wear at any given moment. One minute I am leading you, the next I am following you, and the next we are both following someone else, and so on... When there is disagreement, debate the different directions for as long as it takes for you to be comfortable that you can either follow or lead. If you don't become comfortable with either of those, get out of the way. Something to remember is that it is impossible to learn how to lead well without learning how to follow well (probably deserves its own blog entry)...
    Things go wrong when someone thinks that they must always be leading, or when everybody wants to follow and nobody steps up to lead... Things go wrong when more than one person wants to lead and they don't try to reach agreement on a shared direction, stubbornly sticking to their guns and pulling the rest of the team in multiple directions... Things go wrong when more than one person wants to lead and, after numerous and lengthy discussions, none of them decides to follow or get out of the way... Things go wrong when people don't want to lead, don't want to follow, and insist on sticking around...
    While there are a few ways things can go wrong, as enumerated in the previous paragraph, the most common one in my experience is the last one I mentioned. You'll recognize these folks as the ones that always complain about everything that is wrong with their company/product but do nothing about it. Every time you hear someone giving feedback on how something is wrong or suboptimal, ask them: "So now that you identified the problem, what do you think the solution is, and what are you doing to drive us to that solution?" The next time things start going wrong, step up and remind everyone: Lead, Follow, or Get out of the way.
    For more perspectives, and for input to help you form your own interpretation, search the web for this phrase to see in what contexts it is being used (Bing, Google). Finally, regardless of your political views, I hope you can appreciate, if only as an example, this perspective of someone leading by actually getting out of the way. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • How should I plan the inheritance structure for my game?

    - by Eric Thoma
    I am trying to write a platform shooter in C++ with a really good class structure for robustness. The game itself is secondary; it is the learning process of writing it that is primary. I am implementing an inheritance tree for all of the objects in my game, but I find myself unsure on some decisions. One specific issue that is bugging me is this: I have an Actor that is simply defined as anything in the game world. Under Actor is Character. Both of these classes are abstract. Under Character is the Philosopher, who is the main character that the user commands. Also under Character is NPC, which uses an AI module with stock routines for friendly, enemy and (maybe) neutral alignments. So under NPC I want to have three subclasses: FriendlyNPC, EnemyNPC and NeutralNPC. These classes are not abstract, and will often be subclassed in order to make different types of NPCs, like Engineer, Scientist and the most evil Programmer. Still, if I want to implement a generic NPC named Kevin, it would be nice to be able to put him in without making a new class for him. I could just instantiate a FriendlyNPC and pass some values for the AI machine and for the dialogue; that would be ideal. But what if Kevin is the one benevolent Programmer in the whole world? Now we must make a class for him (but what should it be called?). Now we have a character that should inherit from Programmer (as Kevin has all the same abilities but just uses the friendly AI functions) but also should inherit from FriendlyNPC. Programmer and FriendlyNPC branched away from each other on the inheritance tree, so inheriting from both of them would have conflicts, because some of the same functions have been implemented in different ways on the two of them.
    1) Is there a better way to order these classes to avoid these conflicts? Having three subclasses (Friendly, Enemy and Neutral) for each type of NPC (Engineer, Scientist, and Programmer) would amount to a huge number of classes. I would share specific implementation details, but I am writing the game slowly, piece by piece, and so I haven't implemented past Character yet.
    2) Is there a place where I can learn these programming paradigms? I am already trying to take advantage of some good design patterns, like MVC architecture and Mediator objects. The whole point of this project is to write something in good style. It is difficult to tell what should become a subclass and what should become a state (i.e. Friendly boolean v. Friendly class). Having many states slows down code with if statements and makes classes long and unwieldy. On the other hand, having a class for everything isn't practical.
    3) Are there good rules of thumb or resources to learn more about this?
    4) Finally, where does templating come into this? How should I coordinate templates into my class structure? I have never actually taken advantage of templating, honestly, but I hear that it increases modularity, which means good code.
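    One common way to sidestep the diamond described in question 1 is to favor composition over inheritance: make the alignment (friendly/enemy/neutral) a pluggable behavior object instead of a branch of the class tree, so "Kevin, the one benevolent Programmer" is just a Programmer configured with a friendly AI. The C++ sketch below is only a hedged illustration of that idea, not the author's design; every class and method name in it is invented.

    #include <iostream>
    #include <memory>
    #include <string>

    // Alignment is a strategy object, not a branch of the inheritance tree.
    class AIBehavior {
    public:
        virtual ~AIBehavior() = default;
        virtual void act(const std::string& npcName) = 0;
    };

    class FriendlyAI : public AIBehavior {
    public:
        void act(const std::string& npcName) override {
            std::cout << npcName << " offers to help the Philosopher\n";
        }
    };

    class HostileAI : public AIBehavior {
    public:
        void act(const std::string& npcName) override {
            std::cout << npcName << " attacks on sight\n";
        }
    };

    // NPC subclasses only capture role-specific abilities (Engineer, Scientist, Programmer...).
    class NPC {
    public:
        NPC(std::string name, std::unique_ptr<AIBehavior> ai)
            : name_(std::move(name)), ai_(std::move(ai)) {}
        virtual ~NPC() = default;
        void update() { ai_->act(name_); }
    protected:
        std::string name_;
    private:
        std::unique_ptr<AIBehavior> ai_;
    };

    class Programmer : public NPC {
    public:
        using NPC::NPC;
        void writeCode() { std::cout << name_ << " writes suspiciously evil code\n"; }
    };

    int main() {
        // Kevin: a Programmer with friendly behavior -- no extra class needed.
        Programmer kevin("Kevin", std::make_unique<FriendlyAI>());
        kevin.update();
        kevin.writeCode();
    }

    With this shape, Engineer/Scientist/Programmer capture role-specific abilities, the AIBehavior strategy captures alignment, and the Friendly/Enemy/Neutral times role class explosion disappears.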

    Read the article

  • PowerShell Script To Find Where SharePoint 2010 Features Are Activated

    - by Brian Jackett
    The script on this post will find where features are activated within your SharePoint 2010 farm.
    Problem: Over the past few months I've gotten literally dozens of emails, blog comments, or personal requests from people asking "how do I find where a SharePoint feature has been activated?" I wrote a script to find which features are installed on your farm almost 3 years ago. There is also the Get-SPFeature PowerShell commandlet in SharePoint 2010. The problem is that these only tell you if a feature is installed, not where it has been activated. This is especially important to know if you have multiple web applications, site collections, and/or sites.
    Solution: The default call (no parameters) for Get-SPFeature will return all features in the farm. Many of the parameter sets accept filters for specific scopes such as web application, site collection, and site. If those are supplied, then only the enabled/activated features are returned for that filtered scope. Taking the concept of recursively traversing a SharePoint farm and merging that with calls to Get-SPFeature at all levels of the farm, you can find out what features are activated at each level. Store the results into a variable and you end up with all features that are activated at every level.
    Below is the script I came up with (slight edits for posting on blog). With no parameters the function lists all features activated at all scopes. If you provide an Identity parameter you will find where a specific feature is activated. Note that the display name for a feature you see in the SharePoint UI rarely matches the "internal" display name. I would recommend using the feature ID instead. You can download a full copy of the script by clicking on the link below.
    Note: This script is not optimized for medium to large farms. In my testing it took 1-3 minutes to recurse through my demo environment. This script is provided as-is with no warranty. Run this in a smaller dev/test environment first.

    function Get-SPFeatureActivated {
        # see full script for help info, removed for formatting
        [CmdletBinding()]
        param(
            [Parameter(position = 1, valueFromPipeline=$true)]
            [Microsoft.SharePoint.PowerShell.SPFeatureDefinitionPipeBind]
            $Identity
        )#end param

        Begin
        {
            # declare empty array to hold results. Will add custom member
            # for Url to show where activated at on objects returned from Get-SPFeature.
            $results = @()
            $params = @{}
        }
        Process
        {
            if([string]::IsNullOrEmpty($Identity) -eq $false)
            {
                $params = @{Identity = $Identity
                            ErrorAction = "SilentlyContinue"
                }
            }

            # check farm features
            $results += (Get-SPFeature -Farm -Limit All @params |
                         % {Add-Member -InputObject $_ -MemberType noteproperty `
                            -Name Url -Value ([string]::Empty) -PassThru} |
                         Select-Object -Property Scope, DisplayName, Id, Url)

            # check web application features
            foreach($webApp in (Get-SPWebApplication))
            {
                $results += (Get-SPFeature -WebApplication $webApp -Limit All @params |
                             % {Add-Member -InputObject $_ -MemberType noteproperty `
                                -Name Url -Value $webApp.Url -PassThru} |
                             Select-Object -Property Scope, DisplayName, Id, Url)

                # check site collection features in current web app
                foreach($site in ($webApp.Sites))
                {
                    $results += (Get-SPFeature -Site $site -Limit All @params |
                                 % {Add-Member -InputObject $_ -MemberType noteproperty `
                                    -Name Url -Value $site.Url -PassThru} |
                                 Select-Object -Property Scope, DisplayName, Id, Url)

                    # check site features in current site collection
                    foreach($web in ($site.AllWebs))
                    {
                        $results += (Get-SPFeature -Web $web -Limit All @params |
                                     % {Add-Member -InputObject $_ -MemberType noteproperty `
                                        -Name Url -Value $web.Url -PassThru} |
                                     Select-Object -Property Scope, DisplayName, Id, Url)
                        $web.Dispose()
                    }
                    # dispose the site collection after its webs have been processed
                    $site.Dispose()
                }
            }
        }
        End
        {
            $results
        }
    } #end Get-SPFeatureActivated

    Snippet of output from Get-SPFeatureActivated
    Conclusion: This script has been requested for a long time and I'm glad to finally have a working "clean" version. If you find any bugs or issues with the script please let me know. I'll be posting this to the TechNet Script Center after some internal review. Enjoy the script and I hope it helps with your admin/developer needs.
    -Frog Out

    Read the article

  • How to solve conflicts with another programmer..

    - by Tio
    Hi all.. I've read this question, but I think it doesn't really apply in my case.. I started to work at a new company about 2 months ago with the position of senior web developer; there was already one programmer there. My position is above his, but I'm not his boss.. I don't tell him what to do.. Since the day I started to work at the company, I managed to implement a kind of a test server, which he refuses to use, implemented a project management tool, which he also refuses to use, and I'm in the process of implementing version control using Mercurial ( damn Mercurial, that's giving me so many headaches ), which he is going to use.. He is a nice guy, but just the other day we had a big discussion about "best practices" and "coding standards".. for me it's absolutely necessary to have these two things at the place I'm currently working... otherwise it's not going to work..
    This discussion basically revolved around using short tags and the echo shortcut, and how we shouldn't use them anymore ( because I sometimes use short tags ).. this went on for about 15 minutes, until I finally dropped the subject because I had work to do.. and of course he didn't budge even a millimeter; he's continuing to use short tags and the echo shortcut, and he doesn't even care about what I think.. When I mentioned that we are a team, he told me: "We are not a team, you work on your projects and I work on mine".. Let's just say that the switch in my brain flipped, I raised my voice, and I told him that he was going nuts.. this was the most improper way to deal with it, I know, but there are certain things that can't be said to me..
    The question here is: how do I deal with this? I want to implement more changes in our workflow, and I know that it's going to be a pain, with him always complaining and saying things like this.. Our boss is going to intervene in a few days; I talked to him today, and the other programmer sent him an email complaining on the day we had the discussion.. Just to clarify, when I talk about implementing changes, I don't just show up at work the next day with a sheet of paper and say: "This is how we are doing things from now on! And there is no discussion.." For example, when I was trying to implement the project management tool, I took the time to talk to everybody that was going to be involved in it, to see what they think about it.. everyone was positive except him; he responded that it was just a means to control us even more.. Does anyone have any idea on how to deal with this?
    PS: Truth be told, I didn't start the best way with him; on the 4th day of work at the company, I found a really bad piece of code in our custom CMS, one of those things that I would only expect to find in code produced by a programmer that has only 1 month of training in programming, and I talked to my boss about it... he showed up at the time I saw that piece of code, he saw my face and asked about it, so I told him.. I know the worst thing that can be said to a programmer is that their code is awful, but I've already learned that my code isn't the best in the world, so I take criticism in a completely different way now.. maybe he doesn't..

    Read the article

  • Process Power to the People that Create Engagement

    - by Michael Snow
    Organizations often speak about their engagement problems as if the problem is the people they are trying to engage - employees, partners, customers and citizens. The reality of most engagement problems is that the processes put in place to engage are impersonal, inflexible, unintuitive, and often completely ignorant of the population they are trying to serve. Life, Liberty and the Pursuit of Delight? How appropriate during this short week of the US Independence Day Holiday that we're focusing on People, Process and Engagement. As we celebrate this holiday in the US and the historic independence we gained (sorry Brits!) - it's interesting to think back to 1776 to the creation of that pivotal document, the Declaration of Independence. What tremendous pressure to create an engaging document and founding experience they must have felt. "On June 11, 1776, in anticipation of the impending vote for independence from Great Britain, the Continental Congress appointed five men — Thomas Jefferson, John Adams, Benjamin Franklin, Roger Sherman, and Robert Livingston — to write a declaration that would make clear to people everywhere why this break from Great Britain was both necessary and inevitable. The committee then appointed Jefferson to draft a statement. Jefferson produced a "fair copy" of his draft declaration, which became the basic text of his "original Rough draught." The text was first submitted to Adams, then Franklin, and finally to the other two members of the committee. Before the committee submitted the declaration to Congress on June 28, they made forty-seven emendations to the document. During the ensuing congressional debates of July 1-4, 1776, Congress adopted thirty-nine further revisions to the committee draft. (http://www.constitution.org) If anything was an attempt for engaging the hearts and minds of the 13 Colonies at the time, this document certainly succeeded in its mission. ...Their tools at the time were pen and ink and parchment. Although the final document would later be typeset with lead type for a printing press to distribute to the colonies, all of the original drafts were hand written. And today's enterprise complains about using "Review and Track Changes" at times. Can you imagine the manual revision control process? or lack thereof? Collaborative process? Time delays? Would implementing a better process have helped our founding fathers collaborate better? Declaration of Independence rough draft below. One of many during the creation process. Great comparison across multiple versions of the document here. (from http://www.ushistory.org/): While you may not be creating a new independent nation, getting your employees to engage is crucial to your success as a company in today's world. Oracle WebCenter provides the tools that power engagement. Employees that have better tools for communication, collaboration and getting their job done are more engaged employees. Better engaged employees create more engaged customers and partners.

    Read the article

  • Business School graduate joins Oracle

    - by jessica.ebbelaar(at)oracle.com
    My name is Mathias, I work as an Applications Inside Sales Rep for the French market, and I'd like to give you a brief snapshot of my experience at Oracle. First things first, how did you hear about Oracle? Where have you seen the sharp and recognizable red logo? Was it in Charles de Gaulle Airport when your eyes crossed the 20-metre banner with a picture of a strange big machine in the middle? Was it through reading the Forbes ranking of the top 10 IT companies worldwide? Or is it because IT is your thing and you cannot but know one of the "big four"?
    Meeting with a Grenoble Alumnus
    My story is a little different. My plan was to work in sales, in the IT industry. I had heard about Oracle, but my opinion at the time was that this kind of multinational company was way out of reach for a young graduate, even with high enthusiasm and great excitement to be (finally) on the job market. So, I was really surprised when I had an interesting conversation with a top alumnus of my business school. We were at the Grenoble Ecole de Management graduation ceremony (our graduation!), and before the party got really started, I got to chat with her. She told me of the great experience she was getting by living and working in Dublin. She had already figured it all out: "you work with another 100 young people from 10 different nationalities across Europe, you can be based in Dublin, but then once you work really hard you can move to Malaga Spain or other BUs around the world, you can work with different lines of business and learn about new "techy" and business oriented products, move to the field in your home country or elsewhere, etc." What, what, what? Moving around Europe, trained by the best sales coaches in the world, acquiring strong IT knowledge and getting on board with one of the fastest-growing and most watched companies in the world? Well, I was in. The next day (OK, 3 days later, the time to recover), I sent her my CV, and 3 months later I started as a Business Development Consultant at Oracle in Dublin, representing the latest cloud based CRM across the French market. That was 15 months ago. Since then, I have moved lines of business twice, I'm always learning new things and working with different and senior stakeholders; I have attended hundreds of hours of sales and product training (priceless when you come from a business background); I passed the Dublin Institute of Technology Sales Certification through different trainings given onsite within Oracle; I've led projects based around social media and I've gotten involved in various sales deals going on in my market. Despite all of these great things, two will remain in my spirit: the multiculturalism that I experience every day in the office, and the American style of management - more direct and open than what you can find in "regular French companies".
    Sales Progression Board
    In May 2012, I passed what we call a 'Sales Progression Board' to be promoted to an Inside Sales position. I am now in charge of generating revenue through the sale of Oracle applications on my specific territory. Always keeping in my mind my personal ambition: going to the field one day. Interested to join Oracle in the same role as Mathias? Visit http://campus.oracle.com.

    Read the article
