Search Results

Search found 6401 results on 257 pages for 'constant expression'.


  • IIS Logfile Visualization with XNA

    - by BobPalmer
    In my office, I have a wall-mounted monitor whose whole purpose in life is to display perfmon stats from our various servers. And on a fairly regular basis, I have folks walk by asking what the lines mean. After providing the requisite explanation about CPU utilization, disk I/O bottlenecks, etc., this is usually followed by some blank stares from the user in question, and a distillation of all of our engineering wizardry down to the phrase 'So when the red line goes up that's bad then?'

    This of course would not do. So I talked to my friends and our network admin about an option to show something more eye-catching and visual, with which we could get, at a glance, a feel for what was up with our site. He initially pointed me to a video showing GLTail and Chipmunk done in Ruby. Realizing this was both awesome, and that I needed an excuse to do something in XNA, I decided to knock out a proof of concept for something very similar, but with a few tweaks. Here's a link to a video of the current prototype: http://www.youtube.com/watch?v=jM_PWZbtH2I

    Essentially this app opens up a log file (even an active one) and begins pulling out the lines of text. (Here's a good Code Project link that covers how to do tail reading from an active text file: http://www.codeproject.com/KB/files/tail.aspx). As new data is added, a bubble is generated in the application - a GET statement comes from the left, and a POST from the right. I then run each line through a series of expression checkers, and based on the kind of statement and the pattern, a bubble of an appropriate color is generated. For example, if I get a 500, a huge red bubble pops out. Others are based on the part of the system the page is from - i.e. green bubbles are from our claims management subsystem, and blue bubbles are from the pages our scheduling staff use to schedule patients. Others include the purple bubbles for security and login, and yellow bubbles for some miscellaneous pages. The little grey bubbles represent things like images, JS, CSS, etc. - and their small size makes them work like grease to keep the larger page bubbles moving.

    The app is also smart enough that if it is starting to bog down with handling the physics and interactions, it will suspend new bubbles until enough have dropped off that performance can resume (you can see this slight stuttering in the sample video). The net result is that anyone will be able to look up at the wall monitor and instantly get a quick feel for how things are going on the floor. Website slow? You can get a feel for both volume and utilized modules with one glance. Website crashing? Look for a wall of giant red bubbles. No activity at all? Maybe the site is down. Now couple this with utilization within a farm, cross-referenced with a second app showing the same kind of data from your SQL database...

    As for the app itself, it's a Windows XNA project with the code in C#. The physics are handled by the Farseer physics engine for XNA (http://www.codeplex.com/FarseerPhysics), which is just pure goodness. The samples are great, and I had the app up and working in two evenings (half of that was fine tuning, and the other was me coding with a kid in my lap). My next steps include wiring this to SQL (I have some ideas...), and adding a nice configuration module. For example, you could use polygons, etc. to tie to your regex - or more entertaining things like having a little human ragdoll to represent a user login.
    Once that's wrapped up and I have a chance to complete some hardening, I will be releasing the whole thing into the wild as open source. Feel free to ping me if you have any questions! -Bob
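    The two building blocks the post describes (tailing an active log file, and running each new line through a series of regex "expression checkers" to pick a bubble colour) are easy to sketch outside of XNA. Below is a minimal C# console sketch of just those two pieces; the file path, regex patterns and colour names are illustrative assumptions, not code from the actual app.

    using System;
    using System.IO;
    using System.Text.RegularExpressions;
    using System.Threading;

    class LogBubbleClassifier
    {
        static void Main()
        {
            // Tail an active IIS log: FileShare.ReadWrite lets IIS keep writing while we read.
            using (var stream = new FileStream(@"C:\logs\u_ex.log",
                FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
            using (var reader = new StreamReader(stream))
            {
                stream.Seek(0, SeekOrigin.End);   // start at the end; only react to new traffic
                while (true)
                {
                    string line = reader.ReadLine();
                    if (line == null) { Thread.Sleep(250); continue; }   // no new data yet

                    // GET bubbles enter from the left, POST bubbles from the right.
                    string side = line.Contains(" GET ") ? "left" : "right";

                    // Run the line through a series of "expression checkers" to pick a colour.
                    string colour;
                    if (Regex.IsMatch(line, @"\s500\s"))                                      colour = "Red";    // server error
                    else if (Regex.IsMatch(line, @"/claims/", RegexOptions.IgnoreCase))       colour = "Green";  // claims subsystem
                    else if (Regex.IsMatch(line, @"/scheduling/", RegexOptions.IgnoreCase))   colour = "Blue";   // scheduling pages
                    else if (Regex.IsMatch(line, @"login|security", RegexOptions.IgnoreCase)) colour = "Purple"; // security and login
                    else if (Regex.IsMatch(line, @"\.(gif|png|js|css)(\s|\?)", RegexOptions.IgnoreCase)) colour = "Grey"; // static content
                    else                                                                      colour = "Yellow"; // miscellaneous pages

                    Console.WriteLine("{0} bubble from the {1}: {2}", colour, side, line);
                }
            }
        }
    }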

    Read the article

  • Silverlight Cream for February 02, 2011 -- #1039

    - by Dave Campbell
    In this Issue: Tony Champion, Gill Cleeren, Alex van Beek, Michael James, Ollie Riches, Peter Kuhn, Mike Ormond, WindowsPhoneGeek(-2-), Daniel N. Egan, Loek Van Den Ouweland, and Paul Thurott. Above the Fold: Silverlight: "Using the AutoCompleteBox" Peter Kuhn WP7: "Windows Phone Image Button" Loek Van Den Ouweland Training: "New WP7 Virtual Labs" Daniel N. Egan Shoutouts: SilverlightShow has their top 5 most popular news articles up: SilverlightShow for Jan 24-30, 2011 Rudi Grobler posted answers he gives to questions about Silverlight - Where do I start? Brian Noyes starts a series of Webinars at SilverlightShow this morning at 10am PDT: Free Silverlight Show Webinar: Querying and Updating Data From Silverlight Clients with WCF RIA Services Join your fellow geeks at Gangplank in Chandler Arizona this Saturday as Scott Cate and AZGroups brings you Azure Boot Camp – Feb 5th 2011 From SilverlightCream.com: Deploying Silverlight with WCF Services Tony Champion takes a step out of his norm (Pivot) and has a post up about deploying WCF Services with your SL app, and how to take the pain out of that without pulling out your hair. Getting ready for Microsoft Silverlight Exam 70-506 (Part 3) Gill Cleeren's part 3 of getting ready for the Silverlight Exam is up at SilverlightShow... with links to the first two parts. There's so much good information linked off these... thanks Gill and 'The Show'! A guide through WCF RIA Services attributes Alex van Beek has a post up you will probably want to bookmark unless you're not using WCF RIA... do you know all the attributes by heart? ... how about an excellent explanation of 10 of them? Using DeferredLoadListBox in a Pivot Control Michael James discusses using the DeferredLoadListBox, and then also using it with the Pivot control... but not without some pain points which he defines and gives the workaround for. WP7: Know your data Ollie Riches' latest is about Data and WP7 ... specifically 'knowing' what data you're needing/using to avoid the 90MB memory limit... He gives a set of steps to follow to measure your data model to avoid getting in trouble. Using the AutoCompleteBox Peter Kuhn takes a great look at the AutoCompleteBox... the basics, and then well beyond with custom data, item templates, custom filters, asynchronous filtering, and a behavior for MVVM async filtering. OData and Windows Phone 7 Part 2 Mike Ormond has part 2 of his OData/WP7 post up... lashing up the images to go along with the code this time out... nice looking app. WP7 RoundToggleButton and RoundButton in depth WindowsPhoneGeek is checking out the RoundToggleButton and RoundButton controls from the Coding4fun Toolkit in detail... of course where to get them, and then the setup, demo project included. All about Dependency Properties in Silverlight for WP7 WindowsPhoneGeek's latest post is a good dependency-property discussion related to WP7 development, but if you're just learning, it's a good place to learn about the subject. New WP7 Virtual Labs Daniel N. Egan posted links to 6 new WP7 Virtual Labs released on 1/25. Windows Phone Image Button Loek Van Den Ouweland has a style up on his blog that gives you an imageButton for your WP7 apps, and a sweet little video showing how it's done in Expression Blend too. Yet another free Windows Phone book for developers Paul Thurott found a link to another Free eBook for WP7 development. This one is by Puja Pramudya and is an English translation of the original, and is an introductory text, but hey... it's free... give it a look! 

    Read the article

  • Talend Enterprise Data Integration overperforms on Oracle SPARC T4

    - by Amir Javanshir
    The SPARC T microprocessor, released in 2005 by Sun Microsystems, and now continued at Oracle, has a good track record in parallel execution and multi-threaded performance. However, it was less suited for pure single-threaded workloads. The new SPARC T4 processor is now filling that gap by offering 5x better single-thread performance over previous generations. Following our long-term relationship with Talend, a fast-growing ISV positioned by Gartner in the “Visionaries” quadrant of the “Magic Quadrant for Data Integration Tools”, we decided to test some of their integration components with the T4 chip, more precisely on a T4-1 system, in order to verify first hand whether this new processor lives up to its promises. Several tests were performed, mainly focused on:

    - Single-thread performance of the new SPARC T4 processor compared to an older SPARC T2+ processor
    - Overall throughput of the SPARC T4-1 server using multiple threads

    The tests consisted of reading large amounts of data --tens of gigabytes--, processing it and writing it back to a file or an Oracle 11gR2 database table. They are CPU, memory and IO bound tests. Given the main focus of this project --CPU performance--, bottlenecks were removed as much as possible on the memory and IO sub-systems. When possible, the data to process was put into the ZFS filesystem cache, for instance. Also, two external storage devices were directly attached to the servers under test, each one divided into two ZFS pools for read and write operations.

    Multi-thread: Testing throughput on the Oracle T4-1

    The tests were performed with different numbers of simultaneous threads (1, 2, 4, 8, 12, 16, 32, 48 and 64) and using different storage devices: Flash, Fibre Channel storage, two striped internal disks and one single internal disk. All storage devices used ZFS for filesystem and volume management. Each thread read a dedicated 1 GB file containing 12.5M lines with the following structure:

    customerID;FirstName;LastName;StreetAddress;City;State;Zip;Cust_Status;Since_DT;Status_DT
    1;Ronald;Reagan;South Highway;Santa Fe;Montana;98756;A;04-06-2006;09-08-2008
    2;Theodore;Roosevelt;Timberlane Drive;Columbus;Louisiana;75677;A;10-05-2009;27-05-2008
    3;Andrew;Madison;S Rustle St;Santa Fe;Arkansas;75677;A;29-04-2005;09-02-2008
    4;Dwight;Adams;South Roosevelt Drive;Baton Rouge;Vermont;75677;A;15-02-2004;26-01-2007
    […]

    The following graphs present the results of our tests. Unsurprisingly, up to 16 threads all files fit in the ZFS cache (a.k.a. the L2ARC): once the cache is hot there is no performance difference depending on the underlying storage. From 16 threads upwards, however, it is clear that IO becomes a bottleneck; having a good IO subsystem is thus key. Single-disk performance collapses, whereas the Sun F5100 and ST6180 arrays allow the T4-1 to scale quite seamlessly. From 32 to 64 threads, the performance is almost constant with just a slow decline. For the database load tests, only the best IO configuration --using external storage devices-- was used, hosting the Oracle tablespaces and redo log files. Using the Sun Storage F5100 array allows the T4-1 server to scale up to 48 parallel JVM processes before saturating the CPU. The final result is a staggering 646K lines per second inserted into an Oracle table using 48 parallel threads.

    Single-thread: Testing single-thread performance

    Seven different tests were performed on both servers.
    Given that only one thread, and thus one file, was read, no IO bottleneck was involved, all data being served from the ZFS cache.

    - Read File → Filter → Write File: Read file, filter data, write the filtered data to a new file. The filter is set on the “Status” column: only lines with status set to “A” are selected. This limits each output file to about 500 MB.
    - Read File → Load Database Table: Read file, insert into a single Oracle table.
    - Average: Read file, compute the average of a numeric column, write the result to a new file.
    - Division & Square Root: Read file, perform a division and square root on a numeric column, write the result data to a new file.
    - Oracle DB Dump: Dump the content of an Oracle table (12.5M rows) into a CSV file.
    - Transform: Read file, transform, write the result data to a new file. The transformations applied are: set the address column to upper case and add an extra column at the end, which is the concatenation of two columns.
    - Sort: Read file, sort a numeric and an alphanumeric column, write the result data to a new file.

    The following table and graph present the final results of the tests. The throughput unit is thousand lines per second processed (K lines/second). Improvement is the % of improvement between the T5140 and the T4-1.

    Test                    | T4-1 (Time s.) | T5140 (Time s.) | Improvement | T4-1 (Throughput) | T5140 (Throughput)
    Read/Filter/Write       | 125            | 806             | 645%        | 100               | 16
    Read/Load Database      | 195            | 1111            | 570%        | 64                | 11
    Average                 | 96             | 557             | 580%        | 130               | 22
    Division & Square Root  | 161            | 1054            | 655%        | 78                | 12
    Oracle DB Dump          | 164            | 945             | 576%        | 76                | 13
    Transform               | 159            | 1124            | 707%        | 79                | 11
    Sort                    | 251            | 1336            | 532%        | 50                | 9

    The improvement in single-thread performance is quite dramatic: depending on the test, the T4 is between 5.4 and 7 times faster than the T2+. It seems clear that the SPARC T4 processor has gone a long way towards filling the gap in single-thread performance, without sacrificing multi-threaded capability, as it still shows very impressive scaling on heavy-duty multi-threaded jobs. Finally, as always at Oracle ISV Engineering, we are happy to help our ISV partners test their own applications on our platforms, so don't hesitate to contact us and let's see what the SPARC T4-based systems can do for your application!

    "As described in this benchmark, Talend Enterprise Data Integration has overperformed on T4. I was generally happy to see that the T4 gave scaling opportunities for many scenarios like complex aggregations. Row by row insertion in Oracle DB is faster with more than 650,000 rows per second without using any bulk Oracle capabilities!" Cedric Carbone, Talend CTO.
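    To make the single-thread tests concrete, here is a rough C# approximation of the Read File → Filter → Write File case (keep only rows whose Cust_Status column is “A”). It is not Talend's generated job; the file names are assumptions, and only the semicolon-delimited layout and the filter rule come from the post.

    using System.IO;

    class ReadFilterWrite
    {
        static void Main()
        {
            // Approximation of the benchmark's Read -> Filter -> Write test:
            // keep only customers whose Cust_Status column (index 7) is "A".
            using (var reader = new StreamReader("customers.csv"))           // assumed input file name
            using (var writer = new StreamWriter("customers_active.csv"))    // assumed output file name
            {
                string line;
                while ((line = reader.ReadLine()) != null)
                {
                    string[] fields = line.Split(';');                       // semicolon-delimited, per the sample rows
                    if (fields.Length > 7 && fields[7] == "A")
                        writer.WriteLine(line);
                }
            }
        }
    }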

    Read the article

  • Visual Studio Little Wonders: Box Selection

    - by James Michael Hare
    So this week I decided I’d do a Little Wonder of a different kind and focus on an underused IDE improvement: Visual Studio’s Box Selection capability. This is a handy feature that many people still don’t realize was made available in Visual Studio 2010 (and beyond).  True, there have been other editors in the past with this capability, but now that it’s fully part of Visual Studio we can enjoy it’s goodness from within our own IDE. So, for those of you who don’t know what box selection is and what it allows you to do, read on! Sometimes, we want to select beyond the horizontal… The problem with traditional text selection in many editors is that it is horizontally oriented.  Sure, you can select multiple rows, but if you do you will pull in the entire row (at least for the middle rows).  Under the old selection scheme, if you wanted to select a portion of text from each row (a “box” of text) you were out of luck.  Box selection rectifies this by allowing you to select a box of text that bounded by a selection rectangle that you can grow horizontally or vertically.  So let’s think a situation that could occur where this comes in handy. Let’s say, for instance, that we are defining an enum in our code that we want to be able to translate into some string values (possibly to be stored in a database, output to screen, etc.). Perhaps such an enum would look like this: 1: public enum OrderType 2: { 3: Buy, // buy shares of a commodity 4: Sell, // sell shares of a commodity 5: Exchange, // exchange one commodity for another 6: Cancel, // cancel an order for a commodity 7: } 8:  Now, let’s say we are in the process of creating a Dictionary<K,V> to translate our OrderType: 1: var translator = new Dictionary<OrderType, string> 2: { 3: // do I really want to retype all this??? 4: }; Yes the example above is contrived so that we will pull some garbage if we do a multi-line select. I could select the lines above using the traditional multi-line selection: And then paste them into the translator code, which would result in this: 1: var translator = new Dictionary<OrderType, string> 2: { 3: Buy, // buy shares of a commodity 4: Sell, // sell shares of a commodity 5: Exchange, // exchange one commodity for another 6: Cancel, // cancel an order for a commodity 7: }; But I have a lot of junk there, sure I can manually clear it out, or use some search and replace magic, but if this were hundreds of lines instead of just a few that would quickly become cumbersome. The Box Selection Now that we have the ability to create box selections, we can select the box of text to delete!  Most of us are familiar with the fact we can drag the mouse (or hold [Shift] and use the arrow keys) to create a selection that can span multiple rows: Box selection, however, actually allows us to select a box instead of the typical horizontal lines: Then we can press the [delete] key and the pesky comments are all gone! You can do this either by holding down [Alt] while you select with your mouse, or by holding down [Alt+Shift] and using the arrow keys on the keyboard to grow the box horizontally or vertically. So now we have: 1: var translator = new Dictionary<OrderType, string> 2: { 3: Buy, 4: Sell, 5: Exchange, 6: Cancel, 7: }; Which is closer, but we still need an opening curly, the string to translate to, and the closing curly and comma. Fortunately, again, this is easy with box selections due to the fact box selection can even work for a zero-width selection! 
That is, hold down [Alt] and either drag down with no width, or hold down [Alt+Shift] and arrow down and you will define a selection range with no width, essentially, a vertical line selection: Notice the faint selection line on the right? So why is this useful? Well, just like with any selected range, we can type and it will replace the selection. What does this mean for box selections? It means that we can insert the same text all the way down on each line! If we have the same selection above, and type a curly and a space, we’d get: Imagine doing this over hundreds of lines and think of what a time saver it could be! Now make a zero-width selection on the other side: And type a curly and a comma, and we’d get: So close! Now finally, imagine we’ve already defined these strings somewhere and want to paste them in: 1: const private string BuyText = "Buy Shares"; 2: const private string SellText = "Sell Shares"; 3: const private string ExchangeText = "Exchange"; 4: const private string CancelText = "Cancel"; We can, again, use our box selection to pull out the constant names: And clicking copy (or [CTRL+C]) and then selecting a range to paste into: And finally clicking paste (or [CTRL+V]) to get the final result: 1: var translator = new Dictionary<OrderType, string> 2: { 3: { Buy, BuyText }, 4: { Sell, SellText }, 5: { Exchange, ExchangeText }, 6: { Cancel, CancelText }, 7: };   Sure, this was a contrived example, but I’m sure you’ll agree that it adds myriad possibilities of new ways to copy and paste vertical selections, as well as inserting text across a vertical slice. Summary: While box selection has been around in other editors, we finally get to experience it in VS2010 and beyond. It is extremely handy for selecting columns of information for cutting, copying, and pasting. In addition, it allows you to create a zero-width vertical insertion point that can be used to enter the same text across multiple rows. Imagine the time you can save adding repetitive code across multiple lines!  Try it, the more you use it, the more you’ll love it! Technorati Tags: C#,CSharp,.NET,Visual Studio,Little Wonders,Box Selection

    Read the article

  • Silverlight Cream for December 27, 2010 -- #1016

    - by Dave Campbell
    In this Issue: Sacha Barber, David Anson, Jesse Liberty, Shawn Wildermuth, Jeff Blankenburg(-2-), Martin Krüger, Ryan Alford(-2-), Michael Crump, Peter Kuhn(-2-). Above the Fold: Silverlight: "Part 4 of 4 : Tips/Tricks for Silverlight Developers" Michael Crump WP7: "Navigating with the WebBrowser Control on WP7" Shawn Wildermuth Shoutouts: John Papa posted that the open call is up for MIX11 presenters: Your Chance to Speak at MIX11 From SilverlightCream.com: Aspect Examples (INotifyPropertyChanged via aspects) If you're wanting to read a really in-depth discussion of aspect oriented programming (AOP), check out the article Sacha Barber has up at CodeProject discussing INPC via aspects. How to: Localize a Windows Phone 7 application that uses the Windows Phone Toolkit into different languages David Anson has a nice tutorial up on localizing your WP7 app, including using the Toolkit and controls such as DatePicker... remember we're talking localized Windows Phone From Scratch – Animation Part 1 Jesse Liberty continues in his 'From Scratch' series with this first post on WP7 Animation... good stuff, Jesse! Navigating with the WebBrowser Control on WP7 In building his latest WP7 app, Shawn Wildermuth ran into some obscure errors surrounding browser.InvokeScript. He lists the simple solution and his back, refresh, and forward button functionality for us. What I Learned In WP7 – Issue #7 In the time I was out, Jeff Blankenburg got ahead of me, so I'll catch up 2 at a time... in this number 7 he discusses making videos of your apps, links to the Learn Visual Studio series, and his new website What I Learned In WP7 – Issue #8 Jeff Blankenburg's number 8 is a very cool tip on using the return key on the keyboard to handle the loss of focus and handling of text typed into a textbox. Resize of a grid by using thumb controls Martin Krüger has a sample in the Expression Gallery of a grid that is resizable by using 'thumb controls' at the 4 corners... all source, so check it out! Silverlight 4 – Productivity Power Tools and EF4 Ryan Alford found a very interesting bug associated with EF4 and the Productivity Power Tools, and the way to get out of it is just weird as well. Silverlight 4 – Toolkit and Theming Ryan Alford also had a problem adding a theme from the Toolkit, and what all you might have to do to get around this one.... Part 4 of 4 : Tips/Tricks for Silverlight Developers. Michael Crump has part 4 of his series on Silverlight Development tips and tricks. This is numbers 16 through 20 and covers topics such as Version information, Using Lambdas, Specifying a development port, Disabling ChildWindow Close button, and XAML cleanup. The XML content importer and Windows Phone 7 Peter Kuhn wanted to use the XML content inporter with a WP7 app and ran into problems implementing the process and a lack of documentation as well... he pounded through it all and has a class he's sharing for loading sounds via XML file settings. WP7 snippet: analyzing the hyperlink button style In a second post, Peter Kuhn responds to a forum discussion about the styles for the hyperlink button in WP7 and why they're different than SL4 ... and styles-to-go to get all the hyperlink goodness you want... wrapped text, or even non-text content. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Using Hadooop (HDInsight) with Microsoft - Two (OK, Three) Options

    - by BuckWoody
    Microsoft has many tools for “Big Data”. In fact, you need many tools – there’s no product called “Big Data Solution” in a shrink-wrapped box – if you find one, you probably shouldn’t buy it. It’s tempting to want a single tool that handles everything in a problem domain, but with large, complex data, that isn’t a reality. You’ll mix and match several systems, open and closed source, to solve a given problem. But there are tools that help with handling data at large, complex scales. Normally the best way to do this is to break up the data into parts, and then put the calculation engines for that chunk of data right on the node where the data is stored. These systems are in a family called “Distributed File and Compute”. Microsoft has a couple of these, including the High Performance Computing edition of Windows Server. Recently we partnered with Hortonworks to bring the Apache Foundation’s release of Hadoop to Windows. And as it turns out, there are actually two (technically three) ways you can use it. (There’s a more detailed set of information here: http://www.microsoft.com/sqlserver/en/us/solutions-technologies/business-intelligence/big-data.aspx, I’ll cover the options at a general level below)  First Option: Windows Azure HDInsight Service  Your first option is that you can simply log on to a Hadoop control node and begin to run Pig or Hive statements against data that you have stored in Windows Azure. There’s nothing to set up (although you can configure things where needed), and you can send the commands, get the output of the job(s), and stop using the service when you are done – and repeat the process later if you wish. (There are also connectors to run jobs from Microsoft Excel, but that’s another post)   This option is useful when you have a periodic burst of work for a Hadoop workload, or the data collection has been happening into Windows Azure storage anyway. That might be from a web application, the logs from a web application, telemetrics (remote sensor input), and other modes of constant collection.   You can read more about this option here:  http://blogs.msdn.com/b/windowsazure/archive/2012/10/24/getting-started-with-windows-azure-hdinsight-service.aspx Second Option: Microsoft HDInsight Server Your second option is to use the Hadoop Distribution for on-premises Windows called Microsoft HDInsight Server. You set up the Name Node(s), Job Tracker(s), and Data Node(s), among other components, and you have control over the entire ecostructure.   This option is useful if you want to  have complete control over the system, leave it running all the time, or you have a huge quantity of data that you have to bulk-load constantly – something that isn’t going to be practical with a network transfer or disk-mailing scheme. You can read more about this option here: http://www.microsoft.com/sqlserver/en/us/solutions-technologies/business-intelligence/big-data.aspx Third Option (unsupported): Installation on Windows Azure Virtual Machines  Although unsupported, you could simply use a Windows Azure Virtual Machine (we support both Windows and Linux servers) and install Hadoop yourself – it’s open-source, so there’s nothing preventing you from doing that.   Aside from being unsupported, there are other issues you’ll run into with this approach – primarily involving performance and the amount of configuration you’ll need to do to access the data nodes properly. 
But for a single-node installation (where all components run on one system) such as learning, demos, training and the like, this isn’t a bad option. Did I mention that’s unsupported? :) You can learn more about Windows Azure Virtual Machines here: http://www.windowsazure.com/en-us/home/scenarios/virtual-machines/ And more about Hadoop and the installation/configuration (on Linux) here: http://en.wikipedia.org/wiki/Apache_Hadoop And more about the HDInsight installation here: http://www.microsoft.com/web/gallery/install.aspx?appid=HDINSIGHT-PREVIEW Choosing the right option Since you have two or three routes you can go, the best thing to do is evaluate the need you have, and place the workload where it makes the most sense.  My suggestion is to install the HDInsight Server locally on a test system, and play around with it. Read up on the best ways to use Hadoop for a given workload, understand the parts, write a little Pig and Hive, and get your feet wet. Then sign up for a test account on HDInsight Service, and see how that leverages what you know. If you're a true tinkerer, go ahead and try the VM route as well. Oh - there’s another great reference on the Windows Azure HDInsight that just came out, here: http://blogs.msdn.com/b/brunoterkaly/archive/2012/11/16/hadoop-on-azure-introduction.aspx  

    Read the article

  • EU Digital Agenda scores 85/100

    - by trond-arne.undheim
    If the Digital Agenda was a bottle of wine and I were wine critic Robert Parker, I would say the Digital Agenda has "a great bouquet, many good elements, with astringent, dry and puckering mouth feel that will not please everyone, but still displaying some finesse. A somewhat controlled effort with no surprises and a few noticeable flaws in the delivery. Noticeably shorter aftertaste than advertised by the producers. Score: 85/100. Enjoy now". The EU Digital Agenda states that "standards are vital for interoperability" and has a whole chapter on interoperability and standards. With this strong emphasis, there is hope the EU's outdated standardization system finally is headed for reform. It has been 23 years since the legal framework of standardisation was completed by Council Decision 87/95/EEC8 in the Information and Communications Technology (ICT) sector. Standardization is market driven. For several decades the IT industry has been developing standards and specifications in global open standards development organisations (fora/consortia), many of which have transparency procedures and practices far superior to the European Standards Organizations. The Digital Agenda rightly states: "reflecting the rise and growing importance of ICT standards developed by certain global fora and consortia". Some fora/consortia, of course, are distorted, influenced by single vendors, have poor track record, and need constant vigilance, but they are the minority. Therefore, the recognition needs to be accompanied by eligibility criteria focused on openness. Will the EU reform its ICT standardization by the end of 2010? Possibly, and only if DG Enterprise takes on board that Information and Communications Technologies (ICTs) have driven half of the productivity growth in Europe over the past 15 years, a prominent fact in the EU's excellent Digital Competitiveness report 2010 published on Monday 17 May. It is ok to single out the ICT sector. It simply is the most important sector right now as it fuels growth in all other sectors. Let's not wait for the entire standardization package which may take another few years. Europe does not have time. The Digital Agenda is an umbrella strategy with deliveries from a host of actors across the Commission. For instance, the EU promises to issue "guidance on transparent ex-ante disclosure rules for essential intellectual property rights and licensing terms and conditions in the context of standard setting", by 2011 in the Horisontal Guidelines now out for public consultation by DG COMP and to some extent by DG ENTR's standardization policy reform. This is important. The EU will issue procurement guidance as interoperability frameworks are put into practice. This is a joint responsibility of several DGs, and is likely to suffer coordination problems, controversy and delays. We have seen plenty of the latter already and I have commented on the Commission's own interoperability elsewhere, with mixed luck. :( Yesterday, I watched the cartoonesque Korean western film The Good, the Bad and the Weird. In the movie (and I meant in the movie only), a bandit, a thief, and a bounty hunter, all excellent at whatever they do, fight for a treasure map. Whether that is a good analogy for the situation within the Commission, others are better judges of than I. However, as a movie fanatic, I still await the final shoot-out, and, as in the film, the only certainty is that "life is about chasing and being chased". 
The missed opportunity (in this case not following up the push from Member States to better define open standards based interoperability) is a casualty of the chaos ensued in the European Wild West (and I mean that in the most endearing sense, and my excuses beforehand to actors who possibly justifiably cannot bear being compared to fictional movie characters). Instead of exposing the ongoing fight, the EU opted for the legalistic use of the term "standards" throughout the document. This is a term that--to the EU-- excludes most standards used by the IT industry world wide. So, while it, for a moment, meant "weapon down", it will not lead to lasting peace. The Digital Agenda calls for the Member States to "Implement commitments on interoperability and standards in the Malmö and Granada Declarations by 2013". This is a far cry from the actual Ministerial Declarations which called upon the Commission to help them with this implementation by recognizing and further defining open standards based interoperability. Unless there is more forthcoming from the Commission, the market's judgement will be: you simply fall short. Generally, I think the EU focus now should be "from policy to practice" and the Digital Agenda does indeed stop short of tackling some highly practical issues. There is need for progress beyond the Digital Agenda. Here are some suggestions that would help Europe re-take global leadership on openness, public sector reform, and economic growth: A strong European software strategy centred around open standards based interoperability by 2011. An ambitious new eCommission strategy for 2011-15 focused on migration to open standards by 2015. Aligning the IT portfolio across the Commission into one Digital Agenda DG by 2012. Focusing all best practice exchange in eGovernment on one social networking site, epractice.eu (full disclosure: I had a role in getting that site up and running) Prioritizing public sector needs in global standardization over European standardization by 2014.

    Read the article

  • Partner Blog: Hub City Media Introduces iPad Application for Oracle Identity Analytics

    - by Tanu Sood
    About the Writer:Steve Giovannetti is CTO of Hub City Media, Inc., a company that specializes in implementation and product development on the Oracle Identity Management platform. Recently, Hub City Media announced the introduction of iPad application IdentityCert for Oracle Identity Analytics. This post explore the business use cases and application of IdentityCert.Hub City Media(HCM) has been deploying certification solutions based on Oracle Identity Analytics since it first appeared on the market as Vaau RBACx. With each deployment we've seen the same pattern repeat time and time again:1. Customers suffering under the weight of manual access certification regimens deploy Oracle Identity Analytics (OIA) for automated certification. 2. OIA improves the frequency, speed, accuracy, and participation of certifications across the organization. 3. Then the certifiers, typically managers and supervisors, ask, “Is there any easier way to do these certifications offline?”The current version of OIA has a way to export certification data to a spreadsheet.  For some customers, we've leveraged this feature and combined it with some of our own custom code to provide a solution based on spreadsheet exports and imports.  Customers export the certification to Microsoft Excel, complete it, and then import the spreadsheet to OIA. It worked well for offline certification, but if the user accidentally altered the format of the spreadsheet, the import of the data could fail. We were close to a solution but it wasn’t reliable.Over the past few years, we've seen the proliferation of Apple iOS devices, specifically the iPhone and iPad, in the enterprise.  As our customers were asking for offline certification, we noticed the same population of users traditionally responsible for access certification, were early adopters of the iPad. The environment seemed ideal for us to create an iPad application to support offline certifications using Oracle Identity Analytics. That’s why we created IdentityCert™.IdentityCert allows users to view their analytics dashboard, complete user certifications, and resolve policy violations with OIA, from their iPads.The current IdentityCert analytics dashboard displays the same charts that are available in the Oracle Identity Analytics product. However, we plan to expand the number of available analytics in future releases.The main function of IdentityCert is user certification which can be performed quickly and efficiently using a simple touch interface. Managers tap into a certification, use simple gestures to claim users and certify their access.  Certifications can be securely downloaded to IdentityCert and can be completed with or without a network connection. The user can upload the completed certifications once they are connected to a cellular or wi-fi network.Oracle Identity Analytics can generate policy violation notifications based on detective scans of identity warehouse or via preventative analysis of identity access requests. IdentityCert allows users to view all policy violations, resolve, or delegate them to appropriate users. IdentityCert also analyzes the policy violation expression and produces more human friendly descriptions of the policy violation which improves the ability of users to resolve the violation. IdentityCert can be deployed quickly into a customer's environment. 
It is deployed with Hub City Media's ID Services to connect Oracle Identity Analytics securely with the iPad application.Oracle Identity Management 11g R2 is an important evolutionary release. Oracle's Identity Management suite has more characteristics of a cohesive platform. This platform provides an integrated set of identity services that can be used to protect, manage, and audit security within the enterprise. At HCM we take the platform concept a step further and see it as an opportunity to create unique solutions for Oracle Identity Management customers. IdentityCert is our commitment to this platform. You can download IdentityCert from the Apple iOS App Store today. It includes a demo dataset that you can use to explore the functions of the product without any server infrastructure. Download it. Give it a try. We would appreciate your interest and welcome any feedback.Resources:Press Release: Hub City Media Introduces iPad Application IdentityCert™ for Oracle Identity AnalyticsApp Store Download: http://bit.ly/IdentityCertOracle Identity Governance Suite

    Read the article

  • LINQ to Twitter Queries with LINQPad

    - by Joe Mayo
    LINQPad is a popular utility for .NET developers who use LINQ a lot.  In addition to standard SQL queries, LINQPad also supports other types of LINQ providers, including LINQ to Twitter.  The following sections explain how to set up LINQPad for making queries with LINQ to Twitter. LINQPad comes in a couple versions and this example uses LINQPad4, which runs on the .NET Framework 4.0. 1. The first thing you'll need to do is set up a reference to the LinqToTwitter.dll. From the Query menu, select query properties. Click the Browse button and find the LinqToTwitter.dll binary. You should see something similar to the Query Properties window below. 2. While you have the query properties window open, add the namespace for the LINQ to Twitter types.  Click the Additional Namespace Imports tab and type in LinqToTwitter. The results are shown below: 3. The default query type, when you first start LINQPad, is C# Expression, but you'll need to change this to support multiple statements.  Change the Language dropdown, on the Main window, to C# Statements. 4. To query LINQ to Twitter, instantiate a TwitterContext, by typing the following into the LINQPad Query window: var ctx = new TwitterContext(); Note: If you're getting syntax errors, go back and make sure you did steps #2 and #3 properly. 5. Next, add a query, but don't materialize it, like this: var tweets = from tweet in ctx.Status where tweet.Type == StatusType.Public select new { tweet.Text, tweet.Geo, tweet.User }; 6. Next, you want the output to be displayed in the LINQPad grid, so do a Dump, like this: tweets.Dump(); The following image shows the final results:   That was an unauthenticated query, but you can also perform authenticated queries with LINQ to Twitter's support of OAuth.  Here's an example that uses the PinAuthorizer (type this into the LINQPad Query window): var auth = new PinAuthorizer { Credentials = new InMemoryCredentials { ConsumerKey = "", ConsumerSecret = "" }, UseCompression = true, GoToTwitterAuthorization = pageLink => Process.Start(pageLink), GetPin = () => { // this executes after user authorizes, which begins with the call to auth.Authorize() below. Console.WriteLine("\nAfter you authorize this application, Twitter will give you a 7-digit PIN Number.\n"); Console.Write("Enter the PIN number here: "); return Console.ReadLine(); } }; // start the authorization process (launches Twitter authorization page). auth.Authorize(); var ctx = new TwitterContext(auth, "https://api.twitter.com/1/", "https://search.twitter.com/"); var tweets = from tweet in ctx.Status where tweet.Type == StatusType.Public select new { tweet.Text, tweet.Geo, tweet.User }; tweets.Dump(); This code is very similar to what you'll find in the LINQ to Twitter downloadable source code solution, in the LinqToTwitterDemo project.  For obvious reasons, I changed the value assigned to ConsumerKey and ConsumerSecret, which you'll have to obtain by visiting http://dev.twitter.com and registering your application. One tip, you'll probably want to make this easier on yourself by creating your own DLL that encapsulates all of the OAuth logic and then call a method or property on you custom class that returns a fully functioning TwitterContext.  This will help avoid adding all this code every time you want to make a query. Now, you know how to set up LINQPad for LINQ to Twitter, perform unauthenticated queries, and perform queries with OAuth. Joe
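    Following the post's closing tip about wrapping the OAuth plumbing in your own class, a hedged sketch of such a helper is shown below. It only reuses the types already demonstrated above (PinAuthorizer, InMemoryCredentials, TwitterContext); the class name and console prompt are my own, and you still need the consumer key and secret from registering your application at http://dev.twitter.com.

    using System;
    using System.Diagnostics;
    using LinqToTwitter;

    public static class TwitterContextFactory
    {
        // Wraps the PIN-based OAuth dance from the article and hands back a ready-to-use context.
        public static TwitterContext Create(string consumerKey, string consumerSecret)
        {
            var auth = new PinAuthorizer
            {
                Credentials = new InMemoryCredentials
                {
                    ConsumerKey = consumerKey,
                    ConsumerSecret = consumerSecret
                },
                UseCompression = true,
                GoToTwitterAuthorization = pageLink => Process.Start(pageLink),
                GetPin = () =>
                {
                    Console.Write("Enter the PIN Twitter gave you: ");
                    return Console.ReadLine();
                }
            };

            // Launches the Twitter authorization page, then waits for the PIN.
            auth.Authorize();

            return new TwitterContext(auth, "https://api.twitter.com/1/", "https://search.twitter.com/");
        }
    }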

    Read the article

  • Logging connection strings

    If you use some of the dynamic features of SSIS such as package configurations or property expressions then sometimes trying to work out where your connections are pointing can be a bit confusing. You will work it out in the end, but it can be useful to explicitly log this information so that when things go wrong you can just review the logs. You may wish to develop this idea further and encapsulate such logging into a custom task, but for now let's keep it simple and use the Script Task. The Script Task code below will raise an Information event showing the name and connection string for a connection.

    Imports System
    Imports Microsoft.SqlServer.Dts.Runtime

    Public Class ScriptMain
        Public Sub Main()
            Dim fireAgain As Boolean

            ' Get the connection string, we need to know the name of the connection
            Dim connectionName As String = "My OLE-DB Connection"
            Dim connectionString As String = Dts.Connections(connectionName).ConnectionString

            ' Format the message and log it via an information event
            Dim message As String = String.Format("Connection ""{0}"" has a connection string of ""{1}"".", _
                connectionName, connectionString)
            Dts.Events.FireInformation(0, "Information", message, Nothing, 0, fireAgain)

            Dts.TaskResult = Dts.Results.Success
        End Sub
    End Class

    Building on that example, it is probably more flexible to log all connections in a package, as shown in the next example.

    Imports System
    Imports Microsoft.SqlServer.Dts.Runtime

    Public Class ScriptMain
        Public Sub Main()
            Dim fireAgain As Boolean

            ' Loop through all connections in the package
            For Each connection As ConnectionManager In Dts.Connections
                ' Get the connection string and log it via an information event
                Dim message As String = String.Format("Connection ""{0}"" has a connection string of ""{1}"".", _
                    connection.Name, connection.ConnectionString)
                Dts.Events.FireInformation(0, "Information", message, Nothing, 0, fireAgain)
            Next

            Dts.TaskResult = Dts.Results.Success
        End Sub
    End Class

    Using the Information event makes the output readily available in the designer, for example in the Visual Studio Output window (Ctrl+Alt+O) or the package designer Execution Results tab, and also allows you to readily control the logging by choosing which events to log in the normal way. Now before somebody starts commenting that this is a security risk, I would like to highlight good practice for building connection managers. Firstly, the Password property, or any other similar sensitive property, is always defined as write-only, and secondly, the connection string property only uses the public properties to assemble the connection string value when requested. In other words the connection string will never contain the password. I have seen a couple of cases where this is not true, but that was just bad development by third parties; you won’t find anything like that in the box from Microsoft.

    Whilst writing this code it made me wish that there was a custom log entry that you could just turn on that did this for you, but alas connection managers do not even seem to support custom events. It did however remind me of a very useful event that is often overlooked and fits rather well alongside connection string logging, the Execute SQL Task’s custom ExecuteSQLExecutingQuery event. To quote the help reference under Custom Messages for Logging: "Provides information about the execution phases of the SQL statement. Log entries are written when the task acquires connection to the database, when the task starts to prepare the SQL statement, and after the execution of the SQL statement is completed. The log entry for the prepare phase includes the SQL statement that the task uses." It is the last part that is so useful: how often have you used an expression to derive a SQL statement and wanted to log it to make sure the correct SQL is being returned? You need to turn it on, as by default no custom log events are captured, but I’ll refer you to a walkthrough on setting up the logging for ExecuteSQLExecutingQuery by Jamie.
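    If you prefer C# for your Script Tasks (an option from SSIS 2008 onwards), the "log every connection" loop translates almost line for line. The sketch below is only the Main method body and assumes the designer-generated ScriptMain class, with its usual Microsoft.SqlServer.Dts.Runtime import and ScriptResults enum, around it.

    // C# version of the "log all connections" Script Task body (SSIS 2008+).
    // Paste inside the designer-generated ScriptMain class.
    public void Main()
    {
        bool fireAgain = false;

        // Loop through all connection managers in the package
        foreach (ConnectionManager connection in Dts.Connections)
        {
            string message = string.Format(
                "Connection \"{0}\" has a connection string of \"{1}\".",
                connection.Name, connection.ConnectionString);

            // Raise an Information event, just like the VB example above
            Dts.Events.FireInformation(0, "Information", message, null, 0, ref fireAgain);
        }

        Dts.TaskResult = (int)ScriptResults.Success;
    }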

    Read the article

  • Silverlight Cream for May 27, 2010 -- #871

    - by Dave Campbell
    In this Issue: Phil Middlemiss, Max Paulousky, Jeff Wilcox, David Anson, René Schulte, Xianzhong Zhu, Jeff Handley, John Papa, Jeremy Likness, and Marlon Grech. Shoutouts: SilverLaw has a great demo at the Expression Gallery, and we're all going to look forward to the blog post explaining it: Flexible Surface Effect SilverLaw> has another use for the above in this text morphing Effect: Morphing Text Effect Matthias Shapiro contributed a chapter for a book on Visualization and it's available as a free download: Free Chapter From Beautiful Visualization Andy Beaulieu has a demo up as almost a spoiler for a future Coding4Fun app... and how cool is this: Shuffleboard: A Windows Phone 7 Sample Game From SilverlightCream.com: Separating Content and Presentation with the ContentControl Phil Middlemiss' latest is out on SilverlightShow and is all about the ContentControl and separating layout and content ... demo project source included Search Engine Optimization (SEO) for Silverlight Applications. Part 1 Max Paulousky has part one of a long series he's starting on a demo project to explain a bunch of MEF, MVVM, and WCF RIA concepts. This first one contains the overview and also discusses SEO. There is a link to the app and material in the post if you read Russian :) Updated Silverlight Unit Test Framework bits for Windows Phone and Silverlight 3 Jeff Wilcox has available updated Unit Test bits for Silverlight 3 -- read that as WP7... read the rest of the information on his post. Easily animate orientation changes for any Windows Phone application with this handy source code David Anson has some code up that you're going to want if you're programming WP7 ... just watch the video ... you'll be downloading the code just like I did :) SilverShader – Introduction to Silverlight and WPF Pixel Shaders René Schulte has a post up at Coding4Fun about PixelShaders... how to write them and an application that uses them... this is a great long tutorial... a must read. Developing Freecell Game Using Silverlight 3 Part 2 Xianzhong Zhu has part 2 of his FreeCell game development posted ... lots of detailed descriptions and code, plus all the code of course! Async Validation with RIA Services Jeff Handley has a post up that is sort of a follow-on to a year-old post on async validation with RIA services and DataForm and how it's all much easier now in SL4. Learning Blend with .toolbox (Silverlight TV #29) John Papa and Arturo Toledo discuss .toolbox in Silverlight TV #29 -- have you made yourself an avatar yet? ... well go get on-board with this great learning tool! Silverlight Out of Browser Dynamic Modules in Offline Mode OOB isn't difficult, dynamic modules can become a bit more, but what if you're OOB... ok what if you're OOB and offline? ... Jeremy Likness has a possible solution for this with an OfflineCatalog. MEFedMVVM v1.0 Explained Marlon Grech has a great into to MEFedMVVM in this post. If you're trying to get your head around MEF and MVVM in either WPF or Silverlight, here's a good starting point. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Using the Script Component as a Conditional Split

    This is a quick walkthrough on how you can use the Script Component to perform Conditional Split-like behaviour, splitting your data across multiple outputs. We will use C# code to decide what flows to which output, rather than the expression syntax of the Conditional Split transformation.

    Start by setting up the source. For my example the source is a list of SQL objects from sys.objects, just a quick way to get some data:

    SELECT type, name FROM sys.objects

    type  name
    S     syssoftobjrefs
    F     FK_Message_Page
    U     Conference
    IT    queue_messages_23007163

    Shown above is a small sample of the data you could expect to see. Once you have set up your source, add the Script Component, selecting Transformation when prompted for the type, and connect it up to the source. Now open the component, but don’t dive into the script just yet. First we need to select some columns. Select the Input Columns page and then select the columns we want to use as part of our filter logic. You don’t need to choose columns that you may want later; this is just the set of columns used in the script itself.

    Next we need to add our outputs. Select the Inputs and Outputs page. You get one output by default, but we need to add some more; it wouldn’t be much of a split otherwise. For this example we’ll add just one more. Click the Add Output button, and you’ll see a new output is added. Now we need to set some properties, so make sure our new Output 1 is selected. In the properties grid change the SynchronousInputID property to be our input Input 0, and change the ExclusionGroup property to 1. Now select Output 0 and change the ExclusionGroup property to 2. This value itself isn’t important, provided each output has a different value other than zero. Setting this property on both outputs allows us to split the data down one or the other, making each exclusive. If we left it at 0, that output would get all the rows. It can be a useful feature, allowing you to copy selected rows to one output whilst retaining the full set of data in the other.

    Now we can go back to the Script page and start writing some code. For the example we will do a very simple test: if the value of the type column is U, for user table, then it goes down the first output, otherwise it ends up in the other. This mimics the exclusive behaviour of the Conditional Split transformation.

    public override void Input0_ProcessInputRow(Input0Buffer Row)
    {
        // Filter all user tables to the first output,
        // the remaining objects down the other
        if (Row.type.Trim() == "U")
        {
            Row.DirectRowToOutput0();
        }
        else
        {
            Row.DirectRowToOutput1();
        }
    }

    The code itself is very simple: a basic if clause that determines which of the DirectRowToOutput methods we call; there is one for each output. Of course you could write a lot more code to implement some very complex logic, but the final direction is still just a method call. If we now close the script component, we can hook up the outputs and test the package. Your numbers will vary depending on the sample database, but as you can see we have clearly split our input data into two outputs. As a final tip, when adding the outputs I would normally rename them, changing the Name in the Properties grid. This means the generated method names follow the pattern, as do the path labels shown on the design surface, making everything that much easier to recognise.
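    For comparison, the equivalent exclusive split with the stock Conditional Split transformation would be a single expression on the first output, something like the line below. The TRIM call mirrors the Trim() in the script above; whether you actually need it depends on your source data.

    TRIM(type) == "U"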

    Read the article

  • The Oracle Graduate Experience...A Graduates Perspective by Angelie Tierney

    - by david.talamelli
    [Note: Angelie has just recently joined Oracle in Australia in our 2011 Graduate Program. Last week I shared my thoughts on our 2011 Graduate Program, this week Angelie took some time to share her thoughts of our Graduate Program. The notes below are Angelie's overview from her experience with us starting with our first contact last year - David Talamelli] How does the 1 year program work? It consists of 3 weeks of training, followed by 2 rotations in 2 different Lines of Business (LoB's). The first rotation goes for 4 months, while your 2nd rotation goes for 7, when you are placed into your final LoB for the program. The interview process: After sorting through the many advertised graduate jobs, submitting so many resumes and studying at the same time, it can all be pretty stressful. Then there is the interview process. David called me on a Sunday afternoon and I spoke to him for about 30 minutes in a mini sort of phone interview. I was worried that working at Oracle would require extensive technical experience, but David stressed that even the less technical, and more business-minded person could, and did, work at Oracle. I was then asked if I would like to attend a group interview in the next weeks, to which I said of course! The first interview was a day long, consisting of a brief introduction, a group interview where we worked on a business plan with a group of other potential graduates and were marked by 3 Oracle employees, on our ability to work together and presentation. After lunch, we then had a short individual interview each, and that was the end of the first round. I received a call a few weeks later, and was asked to come into a second interview, at which I also jumped at the opportunity. This was an interview based purely on your individual abilities and would help to determine which Line of Business you would go to, should you land a graduate position. So how did I cope throughout the interview stages? I believe the best tool to prepare for the interview, was to research Oracle and its culture and to see if I thought I could fit into that. I personally found out about Oracle, its partners as well as competitors and along the way, even found out about their part (or Larry Ellison's specifically) in the Iron Man 2 movie. Armed with some Oracle information and lots of enthusiasm, I approached the Oracle Graduate Interview process. Why did I apply for an Oracle graduate position? I studied a Bachelor of Business/Bachelor of Science in IT, and wanted to be able to use both my degrees, while have the ability to work internationally in the future. Coming straight from university, I wasn't sure exactly what I wanted to do in terms of my career. With the program, you are rotated across various lines of business, to not only expose you to different parts of the business, but to also help you to figure out what you want to achieve out of your career. As a result, I thought Oracle was the perfect fit. So what can an Oracle ANZ Graduate expect? First things first, you can expect to line up for your visitor pass. Really. Next you enter a room full of unknown faces, graduates just like you, and then you realise you're in this with 18 other people, going through the same thing as you. 3 weeks later you leave with many memories, colleagues you can call your friends, and a video of your presentation. Vanessa, the Graduate Manager, will also take lots of photos and keep you (well) fed. 
Well that's not all you leave with, you are also equipped with a wealth of knowledge and contacts within Oracle, both that will help you throughout your career there. What training is involved? We started our Oracle experience with 3 weeks of training, consisting of employee orientation, extensive product training, presentations on the various lines of business (LoB's), followed by sales and presentation training. While there was potential for an information overload, maybe even death by Powerpoint, we were able to have access to the presentations for future reference, which was very helpful. This period also allowed us to start networking, not only with the graduates, but with the managers who presented to us, as well as through the monthly chinwag, HR celebrations and even with the sharing of tea facilities. We also had a team bonding day when we recorded a "commercial" within groups, and learned how to play an Irish drum. Overall, the training period helped us to learn about Oracle, as well as ourselves, and to prepare us for our transition into our rotations. Where to now? I'm now into my 2nd week of my first graduate rotation. It has been exciting to finally get out into the work environment and utilise that knowledge we gained from training. My manager has been a great mentor, extremely knowledgeable, and it has been good being able to participate in meetings, conference calls and make a contribution towards the business. And while we aren't necessarily working directly with the other graduates, they are still reachable via email, Pidgin and lunch and they are important as a resource and support, after all, they are going through a similar experience to you. While it is only the beginning, there is a lot more to learn and a lot more to experience along the way, especially because, as we learned during training, at Oracle, the only constant is change.

    Read the article

  • Using the latest (stable release) of Oracle Developer Tools for Visual Studio 11.1.0.7.20.

    - by mbcrump
    Simple and safe data connections. This guide is for someone wanting to use the latest ODP.NET quickly without reading the official documentation, and it will get you up and running in about 15 minutes. I have reviewed the referral links to my simple Setting up ODP.NET with Win7 x64 post and noticed most people were searching for one of the following terms: “how to use odp.net with vs”, “setup connection odp.net”, “query db using odp and vs”. While that article provided links and a sample tnsnames.ora file, it really didn’t tell you how to use it. I’m hoping that this brief tutorial will help. Before we get started, you will need the following: download the provider from www.oracle.com/technology/software/tech/dotnet/utilsoft.html (it is the first one on the page) and install it, plus Visual Studio 2008 or 2010. It should be noted that the System.Data.OracleClient namespace is the old .NET Framework Data Provider for Oracle; it has been deprecated and should not be used anymore. The current provider, which is what we are using here, is Oracle.DataAccess.Client. First things first: add a reference to Oracle.DataAccess.Client after you install ODP.NET. Then copy and paste the following C# code into your project, replace the relevant info (connection details and query string), and you should be able to return data. I have commented several lines of code to assist in understanding what it is doing.

        using System;
        using System.Data;
        using Oracle.DataAccess.Client;

        namespace ConsoleApplication1
        {
            class Program
            {
                static void Main(string[] args)
                {
                    try
                    {
                        //Setup DataSource
                        string oradb = "Data Source=(DESCRIPTION ="
                                     + "(ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = hostname)(PORT = 1521)))"
                                     + "(CONNECT_DATA = (SERVICE_NAME = SERVICENAME)));"
                                     + "Persist Security Info=True;User ID=USER;Password=PASSWORD;";

                        //Open Connection to Oracle - this could be moved outside the try.
                        OracleConnection conn = new OracleConnection(oradb);
                        conn.Open();

                        //Create cmd and use parameters to prevent SQL injection attacks.
                        OracleCommand cmd = new OracleCommand();
                        cmd.Connection = conn;
                        cmd.CommandText = "select username from table where username = :username";

                        OracleParameter p1 = new OracleParameter("username", OracleDbType.Varchar2);
                        p1.Value = "USERNAME"; //Replace with the value you are looking up.
                        cmd.Parameters.Add(p1);

                        cmd.CommandType = CommandType.Text;

                        OracleDataReader dr = cmd.ExecuteReader();
                        dr.Read();

                        //Contains the value of the datarow
                        Console.WriteLine(dr["username"].ToString());

                        //Disposes of objects.
                        dr.Dispose();
                        cmd.Dispose();
                        conn.Dispose();
                    }
                    catch (OracleException ex) // Catches only Oracle errors
                    {
                        switch (ex.Number)
                        {
                            case 1:
                                Console.WriteLine("Error attempting to insert duplicate data.");
                                break;
                            case 12545:
                                Console.WriteLine("The database is unavailable.");
                                break;
                            default:
                                Console.WriteLine(ex.Message);
                                break;
                        }
                    }
                    catch (Exception ex) // Catches any error not previously caught
                    {
                        Console.WriteLine("Unidentified Error: " + ex.Message);
                    }
                }
            }
        }

    At this point, you should have a working program that returns data from an Oracle database. If you are still having trouble, drop me a line and I will be happy to assist. As of this writing, Oracle has announced the latest beta release of ODP.NET, 11.2.0.1.1 Beta. This release includes .NET Framework 4 and .NET Framework 4 Client Profile support. You may want to hold off on this version for a while as it is a beta, and I wouldn’t want any production code using beta software.
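    One small, optional refinement to the sample above: the Dispose() calls live inside the try block, so they are skipped whenever an exception is thrown before they run. Wrapping the connection, command, and reader in using blocks gives you the same cleanup even on failure. The following is a minimal sketch of that pattern, assuming the same placeholder connection string and query as the sample (hostname, SERVICENAME, USER, PASSWORD, and the table/column names are stand-ins you would replace):

        using System;
        using System.Data;
        using Oracle.DataAccess.Client;

        class UsingBlockExample
        {
            static void Main()
            {
                // Same placeholder connection string as the sample above.
                string oradb = "Data Source=(DESCRIPTION ="
                             + "(ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = hostname)(PORT = 1521)))"
                             + "(CONNECT_DATA = (SERVICE_NAME = SERVICENAME)));"
                             + "Persist Security Info=True;User ID=USER;Password=PASSWORD;";

                // using blocks call Dispose() automatically, even if an exception is thrown.
                using (OracleConnection conn = new OracleConnection(oradb))
                using (OracleCommand cmd = new OracleCommand("select username from table where username = :username", conn))
                {
                    cmd.Parameters.Add(new OracleParameter("username", OracleDbType.Varchar2) { Value = "USERNAME" });
                    conn.Open();

                    using (OracleDataReader dr = cmd.ExecuteReader())
                    {
                        while (dr.Read())
                        {
                            Console.WriteLine(dr["username"].ToString());
                        }
                    }
                }
            }
        }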

    Read the article

  • Database-as-a-Service on Exadata Cloud

    - by Gagan Chawla
    Note – Oracle Enterprise Manager 12c DBaaS is platform agnostic and is designed to work on Exadata/non-Exadata, physical/virtual, Oracle/non Oracle platforms and it’s not a mandatory requirement to use Exadata as the base platform. Database-as-a-Service (DBaaS) is an important trend these days and the top business drivers motivating customers towards private database cloud model include constant pressure to reduce IT Costs and Complexity, and also to be able to improve Agility and Quality of Service. The first step many enterprises take in their journey towards cloud computing is to move to a consolidated and standardized environment and Exadata being already a proven best-in-class popular consolidation platform, we are seeing now more and more customers starting to evolve from Exadata based platform into an agile self service driven private database cloud using Oracle Enterprise Manager 12c. Together Exadata Database Machine and Enterprise Manager 12c provides industry’s most comprehensive and integrated solution to transform from a typical silo’ed environment into enterprise class database cloud with self service, rapid elasticity and pay-per-use capabilities.   In today’s post, I’ll list down the important steps to enable DBaaS on Exadata using Enterprise Manager 12c. These steps are chalked down based on a recent DBaaS implementation from a real customer engagement - Project Planning - First step involves defining the scope of implementation, mapping functional requirements and objectives to use cases, defining high availability, network, security requirements, and delivering the project plan. In a Cloud project you plan around technology, business and processes all together so ensure you engage your actual end users and stakeholders early on in the project right from the scoping and planning stage. Setup your EM 12c Cloud Control Site – Once the project plan approval and sign off from stakeholders is achieved, refer to EM 12c Install guide and these are some important tips to follow during the site setup phase - Review the new EM 12c Sizing paper before you get started with install Cloud, Chargeback and Trending, Exadata plug ins should be selected to deploy during install Refer to EM 12c Administrator’s guide for High Availability, Security, Network/Firewall best practices and options Your management and managed infrastructure should not be combined i.e. EM 12c repository should not be hosted on same Exadata where target Database Cloud is to be setup Setup Roles and Users – Cloud Administrator (EM_CLOUD_ADMINISTRATOR), Self Service Administrator (EM_SSA_ADMINISTRATOR), Self Service User (EM_SSA_USER) are the important roles required for cloud lifecycle management. Roles and users are managed by Super Administrator via Setup menu –> Security option. For Self Service/SSA users custom role(s) based on EM_SSA_USER should be created and EM_USER, PUBLIC roles should be revoked during SSA user account creation. Configure Software Library – Cloud Administrator logs in and in this step configures software library via Enterprise menu –> provisioning and patching option and the storage location is OMS shared filesystem. Software Library is the centralized repository that stores all software entities and is often termed as ‘local store’. Setup Self Update – Self Update is one of the most innovative and cool new features in EM 12c framework. 
Self update can be accessed via Setup -> Extensibility option by Super Administrator and is the unified delivery mechanism to get all new and updated entities (Agent software, plug ins, connectors, gold images, provisioning bundles etc) in EM 12c. Deploy Agents on all Compute nodes, and discover Exadata targets – Refer to Exadata discovery cookbook for detailed walkthrough to ensure successful discovery of Exadata targets. Configure Privilege Delegation Settings – This step involves deployment of privilege setting template on all the nodes by Super Administrator via Setup menu -> Security option with the option to define whether to use sudo or powerbroker for all provisioning and patching operations. Provision Grid Infrastructure with RAC Database on Compute Nodes – Software is provisioned in this step via a provisioning profile using EM 12c database provisioning. In case of Exadata, Grid Infrastructure and RAC Database software is already deployed on compute nodes via OneCommand from Oracle, so SSA Administrator just needs to discover Oracle Homes and Listener as EM targets. Databases will be created as and when users request for databases from cloud. Customize Create Database Deployment Procedure – the actual database creation steps are "templatized" in this step by Self Service Administrator and the newly saved deployment procedure will be used during service template creation in next step. This is an important step and make sure you have locked all the required variables marked as locked as ‘Y’ in this table. Setup Self Service Portal – This step involves setting up of zones, user quotas, service templates, chargeback plan. The SSA portal is setup by Self Service Administrator via Setup menu -> Cloud -> Database option and following guided workflow. Refer to DBaaS cookbook for details. You also have an option to customize SSA login page via steps documented in EM 12c Cloud Administrator’s guide Final Checks – Define and document process guidelines for SSA users and administrators. Get your SSA users trained on Self Service Portal features and overall DBaaS model and SSA administrators should be familiar with Self Service Portal setup pieces, EM 12c database lifecycle management capabilities and overall EM 12c monitoring framework. GO LIVE – Announce rollout of Database-as-a-Service to your SSA users. Users can login to the Self Service Portal and request/monitor/view their databases in Exadata based database cloud. Congratulations! You just delivered a successful database cloud implementation project! In future posts, we will cover these additional useful topics around database cloud – DBaaS Implementation tips and tricks – right from setup to self service to managing the cloud lifecycle ‘How to’ enable real production databases copies in DBaaS with rapid provisioning in database cloud Case study of a customer who recently achieved success with their transformational journey from traditional silo’ed environment on to Exadata based database cloud using Enterprise Manager 12c. More Information – Podcast on Database as a Service using Oracle Enterprise Manager 12c Oracle Enterprise Manager 12c Installation and Administration guide, Cloud Administration guide DBaaS Cookbook Exadata Discovery Cookbook Screenwatch: Private Database Cloud: Set Up the Cloud Self-Service Portal Screenwatch: Private Database Cloud: Use the Cloud Self-Service Portal Stay Connected: Twitter |  Face book |  You Tube |  Linked in |  Newsletter

    Read the article

  • C#/.NET Little Wonders &ndash; Cross Calling Constructors

    - by James Michael Hare
    Just a small post today, it’s the final iteration before our release and things are crazy here!  This is another little tidbit that I love using, and it should be fairly common knowledge, yet I’ve noticed many times that less experienced developers tend to have redundant constructor code when they overload their constructors.

    The Problem – repetitive code is less maintainable

    Let’s say you were designing a messaging system, and so you want to create a class to represent the properties for a Receiver, so perhaps you design a ReceiverProperties class to represent this collection of properties. Perhaps you decide to make ReceiverProperties immutable, and so you have several constructors that you can use for alternative construction:

        // Constructs a set of receiver properties.
        public ReceiverProperties(ReceiverType receiverType, string source, bool isDurable, bool isBuffered)
        {
            ReceiverType = receiverType;
            Source = source;
            IsDurable = isDurable;
            IsBuffered = isBuffered;
        }

        // Constructs a set of receiver properties with buffering on by default.
        public ReceiverProperties(ReceiverType receiverType, string source, bool isDurable)
        {
            ReceiverType = receiverType;
            Source = source;
            IsDurable = isDurable;
            IsBuffered = true;
        }

        // Constructs a set of receiver properties with buffering on and durability off.
        public ReceiverProperties(ReceiverType receiverType, string source)
        {
            ReceiverType = receiverType;
            Source = source;
            IsDurable = false;
            IsBuffered = true;
        }

    Note: keep in mind this is just a simple example for illustration, and in some cases default parameters can also help clean this up, but they have issues of their own. While strictly speaking there is nothing wrong with this code, logically it suffers from maintainability flaws.  Consider what happens if you add a new property to the class: you have to remember to guarantee that it is set appropriately in every constructor call. This can cause subtle bugs and becomes even uglier when the constructors do more complex logic or error handling, or there are numerous potential overloads (especially if you can’t easily see them all on one screen’s height).

    The Solution – cross-calling constructors

    I’d wager nearly everyone knows how to call your base class’s constructor, but you can also cross-call to one of the constructors in the same class by using the this keyword in the same way you use base to call a base constructor.

        // Constructs a set of receiver properties.
        public ReceiverProperties(ReceiverType receiverType, string source, bool isDurable, bool isBuffered)
        {
            ReceiverType = receiverType;
            Source = source;
            IsDurable = isDurable;
            IsBuffered = isBuffered;
        }

        // Constructs a set of receiver properties with buffering on by default.
        public ReceiverProperties(ReceiverType receiverType, string source, bool isDurable)
            : this(receiverType, source, isDurable, true)
        {
        }

        // Constructs a set of receiver properties with buffering on and durability off.
        public ReceiverProperties(ReceiverType receiverType, string source)
            : this(receiverType, source, false, true)
        {
        }

    Notice there is much less code.  In addition, the code you have has no repetitive logic.  You can define the main constructor that takes all arguments, and the remaining constructors with defaults simply cross-call the main constructor, passing in the defaults.
    Yes, in some cases default parameters can ease some of this for you, but default parameters only work for compile-time constants (null, string and number literals).  For example, if you were creating a TradingDataAdapter that relied on an implementation of ITradingDao, which is the data access object to retrieve records from the database, you might want two constructors: one that takes an ITradingDao reference, and a default constructor which constructs a specific ITradingDao for ease of use:

        public TradingDataAdapter(ITradingDao dao)
        {
            _tradingDao = dao;

            // other constructor logic
        }

        public TradingDataAdapter()
        {
            _tradingDao = new SqlTradingDao();

            // same constructor logic as above
        }

    As you can see, this isn’t something we can solve with a default parameter, but we could with cross-calling constructors:

        public TradingDataAdapter(ITradingDao dao)
        {
            _tradingDao = dao;

            // other constructor logic
        }

        public TradingDataAdapter()
            : this(new SqlTradingDao())
        {
        }

    So in cases like this, where you have constructors with non-compile-time-constant defaults, default parameters can’t help you and cross-calling constructors is one of your best options.

    Summary

    When you have just one constructor doing the job of initializing the class, you can consolidate all your logic and error-handling in one place, thus ensuring that your behavior will be consistent across the constructor calls. This makes the code more maintainable and even easier to read.  There will be some cases where cross-calling constructors may be sub-optimal or not possible (if, for example, the overloaded constructors take completely different types and are not just “defaulting” behaviors). You can also use default parameters, of course, but default parameter behavior in a class hierarchy can be problematic (default values are not inherited and in fact can differ; a short sketch of that behavior follows at the end of this post), so sometimes multiple constructors are actually preferable. Regardless of why you may need to have multiple constructors, consider cross-calling where you can to reduce redundant logic and clean up the code.

    Technorati Tags: C#,.NET,Little Wonders
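    To make the summary's caveat about default parameters concrete: in C#, a default value is substituted at the call site from the compile-time type of the reference, so an override that declares a different default can surprise callers. A small sketch (BaseLogger and DerivedLogger are my own illustrative names, not from the post):

        using System;

        class BaseLogger
        {
            // Default declared on the base virtual method.
            public virtual void Log(string message = "base default")
            {
                Console.WriteLine(message);
            }
        }

        class DerivedLogger : BaseLogger
        {
            // The override declares a different default.
            public override void Log(string message = "derived default")
            {
                Console.WriteLine(message);
            }
        }

        class DefaultParameterDemo
        {
            static void Main()
            {
                BaseLogger viaBase = new DerivedLogger();
                viaBase.Log();      // prints "base default": the compiler fills in the default
                                    // from the compile-time type (BaseLogger), even though the
                                    // runtime type is DerivedLogger.

                DerivedLogger viaDerived = new DerivedLogger();
                viaDerived.Log();   // prints "derived default"
            }
        }

    Cross-calling constructors sidestep this entirely, because the "default" lives in exactly one place: the chained constructor call.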

    Read the article

  • What is the SharePoint Action Framework and Why do I need it ?

    - by SAF
    For those out there that are a little curious as to whether SAF is any use to your organisation, please read this FAQ.  What is SAF ? SAF is free to use. SAF is the "SharePoint Action Framework", it was built by myself and Hugo (plus a few others along the way). SAF is written entirely in C# code available from : http://saf.codeplex.com.   SAF is a way to automate SharePoint configuration changes. An Action is a command/class/task/script written in C# that performs a unit of execution against SharePoint such as "CreateWeb"  or "AddLookupColumn". A SAF Macro is collection of one or more Actions. SAF Macro can be run from Msbuild, a Feature, StsAdm or common plain old .Net code. Parameters can be passed to a Macro at run-time from a variety of sources such as "Environment Variable", "*.config", "Msbuild Properties", Feature Properties, command line args, .net code. SAF emits lots of trace statements at run-time, these can be viewed using "DebugView". One Action can pass parameters to another Action. Parameters can be set using Expression Syntax such as "DateTime.Now".  You should consider SAF is you suffer from one of the following symptoms... "Our developers write lots of code to deploy changes at release time - it's always rushed" "I don't want my developers shelling out to Powershell or Stsadm from a Feature". "We have loads of Console applications now, I have lost track of where they are, or what they do" "We seem to be writing similar scripts against SharePoint in lots of ways, testing is hard". "My scripts often have lots of errors - they are done at the last minute". "When something goes wrong - I have no idea what went wrong or how to solve it". "Our Features get stuck and bomb out half way through - there no way to roll them back". "We have tons of Features now - I can't keep track". "We deploy Features to run one-off tasks" "We have a library of reusable scripts, but, we can only run it in one way, sometimes we want to run it from MSbuild and a Feature". "I want to automate the deployment of changes to our development environment". "I would like to run a housekeeping task on a scheduled basis"   So I like the sound of SAF - what's the problems ?  Realistically, there are few things that need to be considered: Someone on your team will need to spend a day or 2 understanding SAF and deciding exactly how you want to use it. I would suggest a Tech Lead, SysAdm or SP Architect will need to download it, try out the examples, look through the unit tests. Ask us questions. Although, SAF can be downloaded and set to go in a few minutes, you will still need to address issues such as - "Do you want to execute your Macros in MsBuild or from a Feature ?" You will need to decide who is going to do your deployments - is it each developer to themself, or do you require a dedicated Build Manager ? As most environments (Dev, QA, Live etc) require different settings (e.g. Urls, Database names, accounts etc), you will more than likely want to define these and set a properties file up for each environment. (These can then be injected into Saf at run-time). There may be no Action to solve your particular problem. If this is the case, suggest it to us - we can try and write it, or write it yourself. It's very easy to write a new Action - we have an approach to easily unit test it, document it and author it. For example, I wrote one to deploy  a WSP in 2 hours the other day. Alternatively, Saf can also call Stsadm commands and Powershell scripts.   Anyway, I do hope this helps! 
If you still need help, or a quick start, we can also offer consultancy around SAF. If you want to know more give us a call or drop an email to [email protected]
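    For readers trying to picture what an "Action" and a "Macro" look like in practice: the real types and the macro format are defined by SAF itself (see http://saf.codeplex.com), so the snippet below is deliberately not SAF's API. It is only a hypothetical command-pattern sketch, with invented names (IAction, Macro, CreateWebAction), to illustrate the idea of a unit of execution that receives run-time parameters and a macro that runs an ordered list of such units:

        using System;
        using System.Collections.Generic;

        // Hypothetical interface - NOT the actual SAF type - illustrating an "Action"
        // as a single unit of execution that receives named parameters at run-time.
        interface IAction
        {
            void Execute(IDictionary<string, string> parameters);
        }

        // A stand-in for something like "CreateWeb": it only writes to the console here,
        // since the real thing would call the SharePoint object model.
        class CreateWebAction : IAction
        {
            public void Execute(IDictionary<string, string> parameters)
            {
                Console.WriteLine("Creating web at url: " + parameters["WebUrl"]);
            }
        }

        // A "Macro" in this sketch is just an ordered list of actions sharing one
        // parameter bag, which could be populated from MSBuild properties, *.config,
        // environment variables, or Feature properties.
        class Macro
        {
            private readonly List<IAction> _actions = new List<IAction>();

            public void Add(IAction action) { _actions.Add(action); }

            public void Run(IDictionary<string, string> parameters)
            {
                foreach (IAction action in _actions)
                {
                    action.Execute(parameters);   // tracing/rollback hooks would go here
                }
            }
        }

        class MacroDemo
        {
            static void Main()
            {
                var macro = new Macro();
                macro.Add(new CreateWebAction());
                macro.Run(new Dictionary<string, string> { { "WebUrl", "/sites/demo/teamweb" } });
            }
        }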

    Read the article

  • Some non-generic collections

    - by Simon Cooper
    Although the collections classes introduced in .NET 2, 3.5 and 4 cover most scenarios, there are still some .NET 1 collections that don't have generic counterparts. In this post, I'll be examining what they do, why you might use them, and some things you'll need to bear in mind when doing so. BitArray System.Collections.BitArray is conceptually the same as a List<bool>, but whereas List<bool> stores each boolean in a single byte (as that's what the backing bool[] does), BitArray uses a single bit to store each value, and uses various bitmasks to access each bit individually. This means that BitArray is eight times smaller than a List<bool>. Furthermore, BitArray has some useful functions for bitmasks, like And, Xor and Not, and it's not limited to 32 or 64 bits; a BitArray can hold as many bits as you need. However, it's not all roses and kittens. There are some fundamental limitations you have to bear in mind when using BitArray: It's a non-generic collection. The enumerator returns object (a boxed boolean), rather than an unboxed bool. This means that if you do this: foreach (bool b in bitArray) { ... } Every single boolean value will be boxed, then unboxed. And if you do this: foreach (var b in bitArray) { ... } you'll have to manually unbox b on every iteration, as it'll come out of the enumerator an object. Instead, you should manually iterate over the collection using a for loop: for (int i=0; i<bitArray.Length; i++) { bool b = bitArray[i]; ... } Following on from that, if you want to use BitArray in the context of an IEnumerable<bool>, ICollection<bool> or IList<bool>, you'll need to write a wrapper class, or use the Enumerable.Cast<bool> extension method (although Cast would box and unbox every value you get out of it). There is no Add or Remove method. You specify the number of bits you need in the constructor, and that's what you get. You can change the length yourself using the Length property setter though. It doesn't implement IList. Although not really important if you're writing a generic wrapper around it, it is something to bear in mind if you're using it with pre-generic code. However, if you use BitArray carefully, it can provide significant gains over a List<bool> for functionality and efficiency of space. OrderedDictionary System.Collections.Specialized.OrderedDictionary does exactly what you would expect - it's an IDictionary that maintains items in the order they are added. It does this by storing key/value pairs in a Hashtable (to get O(1) key lookup) and an ArrayList (to maintain the order). You can access values by key or index, and insert or remove items at a particular index. The enumerator returns items in index order. However, the Keys and Values properties return ICollection, not IList, as you might expect; CopyTo doesn't maintain the same ordering, as it copies from the backing Hashtable, not ArrayList; and any operations that insert or remove items from the middle of the collection are O(n), just like a normal list. In short; don't use this class. If you need some sort of ordered dictionary, it would be better to write your own generic dictionary combining a Dictionary<TKey, TValue> and List<KeyValuePair<TKey, TValue>> or List<TKey> for your specific situation. ListDictionary and HybridDictionary To look at why you might want to use ListDictionary or HybridDictionary, we need to examine the performance of these dictionaries compared to Hashtable and Dictionary<object, object>. 
For this test, I added n items to each collection, then randomly accessed n/2 items: So, what's going on here? Well, ListDictionary is implemented as a linked list of key/value pairs; all operations on the dictionary require an O(n) search through the list. However, for small n, the constant factor that big-o notation doesn't measure is much lower than the hashing overhead of Hashtable or Dictionary. HybridDictionary combines a Hashtable and ListDictionary; for small n, it uses a backing ListDictionary, but switches to a Hashtable when it gets to 9 items (you can see the point it switches from a ListDictionary to Hashtable in the graph). Apart from that, it's got very similar performance to Hashtable. So why would you want to use either of these? In short, you wouldn't. Any gain in performance by using ListDictionary over Dictionary<TKey, TValue> would be offset by the generic dictionary not having to cast or box the items you store, something the graphs above don't measure. Only if the performance of the dictionary is vital, the dictionary will hold less than 30 items, and you don't need type safety, would you use ListDictionary over the generic Dictionary. And even then, there's probably more useful performance gains you can make elsewhere.
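    One footnote to the BitArray section above: the post mentions writing a wrapper class if you want BitArray behind IEnumerable<bool> without the boxing that Cast<bool>() would introduce. Here is a minimal sketch of that kind of wrapper (BitList is my own illustrative name, not a framework type), assuming indexed read/write access and typed enumeration are all you need:

        using System;
        using System.Collections;
        using System.Collections.Generic;

        // Minimal wrapper exposing a BitArray as IEnumerable<bool>
        // without boxing each value during enumeration.
        class BitList : IEnumerable<bool>
        {
            private readonly BitArray _bits;

            public BitList(int length) { _bits = new BitArray(length); }

            public int Count { get { return _bits.Length; } }

            public bool this[int index]
            {
                get { return _bits[index]; }
                set { _bits[index] = value; }
            }

            public IEnumerator<bool> GetEnumerator()
            {
                // Index directly into the BitArray so each bool comes out unboxed.
                for (int i = 0; i < _bits.Length; i++)
                {
                    yield return _bits[i];
                }
            }

            IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
        }

        class BitListDemo
        {
            static void Main()
            {
                var flags = new BitList(8);
                flags[3] = true;

                foreach (bool flag in flags)   // no boxing, unlike foreach over a raw BitArray
                {
                    Console.Write(flag ? "1" : "0");
                }
                Console.WriteLine();           // prints 00010000
            }
        }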

    Read the article

  • A proposal for #DAX Code Formatting #ssas #powerpivot #tabular

    - by Marco Russo (SQLBI)
    I recently published a set of rules for DAX code formatting. The following is an example of what I obtain: CALCULATE (     SUMX (         Orders,         Orders[Amount]     ),     FILTER (         ALL ( Customers ),         CALCULATE (             COUNTROWS ( Sales ),             ALL ( Calendar[Date] )         ) > 42 + 8 – 25 * ( 3 - 1 )             + 2 – 1 + 2 – 1             + CALCULATE (                   2 + 2 – 2                   + 2 - 2               )             – CALCULATE ( 4 )     ) ) The goal is to improve code readability and I look forward to implement a code formatting feature in DAX Studio. The DAX Editor already supports the rules described in the article. I am also considering whether to add a rule specific for ADDCOLUMNS / SUMMARIZE because I would like to see the “pairs” of arguments to define a column in the same row or with a special indentation rule (DAX expression for a column is indented in the line following the column name). EVALUATE CALCULATETABLE (        CALCULATETABLE (         SUMMARIZE (             Audience,             'Date'[Year],             Individuals[Gender],             Individuals[AgeRange],             "Num of Rows", FORMAT (COUNTROWS (Audience), "#,#"),             "Weighted Mean Age",                 SUMX (Audience, Audience[Weight] * Audience[Age]) / SUM (Audience[Weight])         ),         SUMMARIZE (             BridgeIndividualsTargets,             Individuals[ID_Individual]         ),         Audience[Weight] > 0        ),        Targets[Target] = "Maschi",     'Date'[Year] = 2010,     'Date'[MonthName] = "January" ) I would like to get feedback for that – you can use comments here or comments in original article. Thanks!

    Read the article

  • ArchBeat Link-o-Rama for 2012-08-30

    - by Bob Rhubart
    Next Generation Mobile Clients for Oracle Applications & the role of Oracle Fusion Middleware | Manish Palaparthy Manish Palaparthy examines some of Oracle's mobile applications, and takes a look at the underlying technology. Master Data Management: A Foundation for Big Data Analysis | Manouj Tahiliani "Businesses that have embraced MDM to get a single, enriched and unified view of Master data by resolving semantic discrepancies and augmenting the explicit master data information from within the enterprise with implicit data from outside the enterprise like social profiles will have a leg up in embracing Big Data solutions. This is especially true for large and medium-sized businesses in industries like Retail, Communications, Financial Services, etc that would find it very challenging to get comprehensive analytical coverage and derive long-term success without resolving the limitations of the heterogeneous topology that leads to disparate, fragmented and incomplete master data." — Manouj Tahiliani Architect Day: Boston - Agenda Update Here's the latest updated information on the session schedule and content for Oracle Technology Network Architect Day in Boston, MA on September 12, 2012. Registration is open, but seating is limited. OTN Architect Day: Boston is being held on Wednesday September 12, 2012, 8:00 a.m. – 5:00 p.m., at the Boston Marriott Burlington, One Burlington Mall Road, Burlington, MA 01803. Integrating Coherence & Java EE 6 Applications using ActiveCache | Ricardo Ferreira The seamless integration between Oracle Coherence and Oracle WebLogic Server "provides a comprehensive environment to develop applications without the complexity of extra Java code to manage cache as a dependency," explains Ricardo Ferreira, "since Oracle provides a DI (Dependency Injection) mechanism for Coherence, the same DI mechanism available in standard Java EE applications. This feature is called ActiveCache." Ricardo shows you how to configure ActiveCache in WebLogic and your Java EE application. Cloud Infrastructure has a new standard from the DMTF "Unlike a de facto standard where typically one vendor has change control over the interface, and everyone else has to reverse engineer the inner workings of it, [Cloud Infrastructure Management Interface (CIMI)] is a de jure standard that is under change control of a standards body. One reason the standard took two years to create is that we factored in use cases, requirements and contributed APIs from multiple vendors. These vendors have products shipping today and as a result CIMI has a strong foundation in real world experience." Oracle GoldenGate 11g Release Launch Webcast- September 12 The new release of Oracle GoldenGate 11g is now available for major databases and platforms. Register for this webcast and live Q&A with product experts to learn about the solution's new features. September 12, 2012. 8:00am AM and 10:00AM PT. Speakers: Doug Reid (Director, Product Management, Oracle GoldenGate), Irem Radzik (Director, Product Marketing, Oracle Data Integration Products) Thought for the Day "[When] asking skilled architects…what they do when confronted with highly complex problems… [they] would most likely answer, 'Just use Common Sense.' [A] better expression than 'common sense' is 'contextual sense'—a knowledge of what is reasonable within a given context. 
Practicing architects through education, experience and examples accumulate a considerable body of contextual sense by the time they're entrusted with solving a system-level problem…" — Eberhardt Rechtin (January 16, 1926 – April 14, 2006) Source: SoftwareQuotes.com

    Read the article

  • Agile Development Requires Agile Support

    - by Matt Watson
    Agile developmentAgile development has become the standard methodology for application development. The days of long term planning with giant Gantt waterfall charts and detailed requirements is fading away. For years the product planning process frustrated product owners and businesses because no matter the plan, nothing ever went to plan. Agile development throws the detailed planning out the window and instead focuses on giving developers some basic requirements and pointing them in the right direction. Constant collaboration via quick iterations with the end users, product owners, and the development team helps ensure the project is done correctly.  The various agile development methodologies have helped greatly with creating products faster, but not without causing new problems. Complicated application deployments now occur weekly or monthly. Most of the products are web-based and deployed as a software service model. System performance and availability of these apps becomes mission critical. This is all much different from the old process of mailing new releases of client-server apps on CD once per quarter or year.The steady stream of new products and product enhancements puts a lot of pressure on IT operations to keep up with the software deployments and adding infrastructure capacity. The problem is most operations teams still move slowly thanks to change orders, documentation, procedures, testing and other processes. Operations can slow the process down and push back on the development team in some organizations. The DevOps movement is trying to solve some of these problems by integrating the development and operations teams more together. Rapid change introduces new problemsThe rapid product change ultimately creates some application problems along the way. Higher rates of change increase the likelihood of new application defects. Delivering applications as a software service also means that scalability of applications is critical. Development teams struggle to keep up with application defects and scalability concerns in their applications. Fixing application problems is a never ending job for agile development teams. Fixing problems before your customers do and fixing them quickly is critical. Most companies really struggle with this due to the divide between the development and operations groups. Fixing application problems typically requires querying databases, looking at log files, reviewing config files, reviewing error logs and other similar tasks. It becomes difficult to work on new features when your lead developers are working on defects from the last product version. Developers need more visibilityThe problem is most developers are not given access to see server and application information in the production environments. The operations team doesn’t trust giving all the developers the keys to the kingdom to log in to production and poke around the servers. The challenge is either give them no access, or potentially too much access. Those with access can still waste time figuring out the location of the application and how to connect to it over VPN. In addition, reproducing problems in test environments takes too much time and isn't always possible. System administrators spend a lot of time helping developers track down server information. Most companies give key developers access to all of the production resources so they can help resolve application defects. The problem is only those key people have access and they become a bottleneck. 
They end up spending 25-50% of their time on a daily basis trying to solve application issues because they are the only ones with access. These key employees’ time is best spent on strategic new projects, not addressing application defects. This job should fall to entry level developers, provided they have access to all the information they need to troubleshoot the problems.The solution to agile application support is giving all the developers limited access to the production environment and all the server information they need to see. Some companies create their own solutions internally to collect log files, centralize errors or other things to address the problem. Some developers even have access to server monitoring or other tools. But they key is giving them access to everything they need so they can see the full picture and giving access to the whole team. Giving access to everyone scales up the application support team and creates collaboration around providing improved application support.Stackify enables agile application supportStackify has created a solution that can give all developers a secure and read only view of the entire production server environment without console or remote desktop access.They provide a web application that provides real time visibility to the important information that developers need to see. An application centric view enables them to see all of their apps across multiple datacenters and environments. They don’t need to know where the application is deployed, just the name of the application to find it and dig in to see more. All your developers can see server health, application health, log files, config files, windows event viewer, deployment history, application notes, and much more. They can receive email and text alerts when problems arise and even safely query your production databases.Stackify enables companies that do agile development to scale up their application support team by getting more team members involved. The lead developers can spend more time on new projects. Application issues can be fixed quicker than ever. Operations can spend less time helping developers collect server information. Agile application support starts with Stackify. Visit Stackify.com to learn more.

    Read the article

  • Bug in Delphi XE RegularExpressions Unit

    - by Jan Goyvaerts
    Using the new RegularExpressions unit in Delphi XE, you can iterate over all the matches that a regex finds in a string like this: procedure TForm1.Button1Click(Sender: TObject); var RegEx: TRegEx; Match: TMatch; begin RegEx := TRegex.Create('\w+'); Match := RegEx.Match('One two three four'); while Match.Success do begin Memo1.Lines.Add(Match.Value); Match := Match.NextMatch; end end; Or you could save yourself two lines of code by using the static TRegEx.Match call: procedure TForm1.Button2Click(Sender: TObject); var Match: TMatch; begin Match := TRegEx.Match('One two three four', '\w+'); while Match.Success do begin Memo1.Lines.Add(Match.Value); Match := Match.NextMatch; end end; Unfortunately, due to a bug in the RegularExpressions unit, the static call doesn’t work. Depending on your exact code, you may get fewer matches or blank matches than you should, or your application may crash with an access violation. The RegularExpressions unit defines TRegEx and TMatch as records. That way you don’t have to explicitly create and destroy them. Internally, TRegEx uses TPerlRegEx to do the heavy lifting. TPerlRegEx is a class that needs to be created and destroyed like any other class. If you look at the TRegEx source code, you’ll notice that it uses an interface to destroy the TPerlRegEx instance when TRegEx goes out of scope. Interfaces are reference counted in Delphi, making them usable for automatic memory management. The bug is that TMatch and TGroupCollection also need the TPerlRegEx instance to do their work. TRegEx passes its TPerlRegEx instance to TMatch and TGroupCollection, but it does not pass the instance of the interface that is responsible for destroying TPerlRegEx. This is not a problem in our first code sample. TRegEx stays in scope until we’re done with TMatch. The interface is destroyed when Button1Click exits. In the second code sample, the static TRegEx.Match call creates a local variable of type TRegEx. This local variable goes out of scope when TRegEx.Match returns. Thus the reference count on the interface reaches zero and TPerlRegEx is destroyed when TRegEx.Match returns. When we call MatchAgain the TMatch record tries to use a TPerlRegEx instance that has already been destroyed. To fix this bug, delete or rename the two RegularExpressions.dcu files and copy RegularExpressions.pas into your source code folder. Make these changes to both the TMatch and TGroupCollection records in this unit: Declare FNotifier: IInterface; in the private section. Add the parameter ANotifier: IInterface; to the Create constructor. Assign FNotifier := ANotifier; in the constructor’s implementation. You also need to add the ANotifier: IInterface; parameter to the TMatchCollection.Create constructor. Now try to compile some code that uses the RegularExpressions unit. The compiler will flag all calls to TMatch.Create, TGroupCollection.Create and TMatchCollection.Create. Fix them by adding the ANotifier or FNotifier parameter, depending on whether ARegEx or FRegEx is being passed. With these fixes, the TPerlRegEx instance won’t be destroyed until the last TRegEx, TMatch, or TGroupCollection that uses it goes out of scope or is used with a different regular expression.

    Read the article

  • How Estimates Became Quotes

    - by Lee Brandt
    It’s our fault. Well, not completely, but we haven’t helped the situation any. All of what follows comes from my own experiences which, from talking to lots of other developers about it, seem to be pretty much par for the course.

    Where We Started

    When we first started estimating, we estimated pretty clearly. We would try to imagine something we’d done that was similar to the project being estimated and we’d toss it about in our heads a bit and see how much bigger or smaller we thought this new thing was, and add or subtract accordingly. We wouldn’t spend too much time on it, because we wanted to get to writing the software. Eventually, we’d come across some huge problem that there was no way we could’ve known about ahead of time. Either we didn’t see this thing, or we didn’t realize that this particular version of a problem would be so… problematic. We usually call this “not knowing what we don’t know”. It’s unavoidable. We just can’t know. Until we wade in and start putting some code together, there are just some things we won’t know… and some things we don’t even know that we don’t know. Y’know? So what happens? We go over budget. Project managers scream and dance the dance of the stressed-out project manager, and there is nothing we can do (or could’ve done) about it. We didn’t know. We thought about it for a bit and we didn’t see this herculean task sitting in the middle of our nice quiet project, and it has bitten us in the rear end. We now know how to handle this in the future, though. We will take some more time to pick around the requirements and discover all those things we don’t know. We’ll do some prototyping, we’ll read some blogs about similar projects, we’ll really grill the customer with questions during the requirements gathering phase. We’ll keep asking “what else?” until they shove us down the stairs. We’ll take our time and uncover it all.

    We Learned, But Good

    The next time comes, and you know what happens? We do it. We grill the customer for weeks and prototype and read and research and we estimate everything down to the last button on the last form. Know what that gets us? It gets us three months of wasted time, and our estimate will still be off. Possibly off by a factor of four. WTF, mate? No way we could be surprised by something! We uncovered every particle. We turned every stone. How is it we still came across unknowns? Because we STILL didn’t know what we didn’t know. How could we? We didn’t know to ask. The worst part is, we’ve now convinced the product owner that this is NOT an estimate. It is a solid number based on massive research and an endless number of questions that they answered. There is absolutely no way you don’t know everything there is to know about this project now. No way there is anything you haven’t uncovered. And their faith in that “Esti-Quote” goes through the roof. When the project goes over this time, they might even begin to question whether or not you know what you’re doing. Who could blame them? You drilled them for weeks about every little thing, and when they complained about all the questions, you told them you wanted to uncover everything so there would be no surprises. So we set them up to fail.

    Guess, Then Plan

    We had a chance. At the beginning we could have just said, “That’s just a gut-feeling estimate, based on my past experience with similar projects.
    There could still be surprises.” If we spend SOME time doing SOME discovery and then bounce that against our own past experiences, we can come up with a fairly healthy estimate. We can then help the product owner understand that an estimate is a guess. Sure, it’s an educated guess, but it is still a guess. If we get it right, it will be almost completely luck. Then, we help them to plan the development by taking that guess (yes, they still need the guess for planning purposes) and start measuring early and often to see if we still think we are right. We should adjust the estimate and alert the product owner as soon as we see problems (bad news does not age well) and we should be able to see any problems immediately if we are constantly measuring our pace. In lean software, we start with that guess and begin measuring cycle times immediately. Then we can make projections based on those cycle times and compare them to our estimate. This constant feedback is the best way to ensure that there are no surprises at the END of the project. There will still be surprises, but we’ll see them sooner and have a better understanding of how they will affect our overall timeline. What do you think?

    Read the article

  • Books are Dead! Long Live the Books!

    - by smisner
    We live in interesting times with regard to the availability of technical material. We have lots of free written material online in the form of vendor documentation online, forums, blogs, and Twitter. And we have written material that we can buy in the form of books, magazines, and training materials. Online videos and training – some free and some not free – are also an option. All of these formats are useful for one need or another. As an author, I pay particular attention to the demand for books, and for now I see no reason to stop authoring books. I assure you that I don’t get rich from the effort, and fortunately that is not my motivation. As someone who likes to refer to books frequently, I am still a big believer in books and have evidence from book sales that there are others like me. If I can do my part to help others learn about the technologies I work with, I will continue to produce content in a variety of formats, including books. (You can view a list of all of my books on the Publications page of my site and my online training videos at Pluralsight.) As a consumer of technical information, I prefer books because a book typically can get into a topic much more deeply than a blog post, and can provide more context than vendor documentation. It comes with a table of contents and a (hopefully accurate) index that helps me zero in on a topic of interest, and of course I can use the Search feature in digital form. Some people suggest that technology books are outdated as soon as they get published. I guess it depends on where you are with technology. Not everyone is able to upgrade to the latest and greatest version at release. I do assume, however, that the SQL Server 7.0 titles in my library have little value for me now, but I’m certain that the minute I discard the book, I’m going to want it for some reason! Meanwhile, as electronic books overtake physical books in sales, my husband is grateful that I can continue to build my collection digitally rather than physically as the books have a way of taking over significant square footage in our house! Blog posts, on the other hand, are useful for describing the scenarios that come up in real-life implementations that wouldn’t fit neatly into a book. As many years that I have working with the Microsoft BI stack, I still run into new problems that require creative thinking. Likewise, people who work with BI and other technologies that I use share what they learn through their blogs. Internet search engines help us find information in blogs that simply isn’t available anywhere else. Another great thing about blogs, also, is the connection to community and the dialog that can ensue between people with common interests. With the trend towards electronic formats for books, I imagine that we’ll see books continue to adapt to incorporate different forms of media and better ways to keep the information current. At the moment, I wish I had a better way to help readers with my last two Reporting Services books. In the case of the Microsoft® SQL Server™ 2005 Reporting Services Step by Step book, I have heard many cases of readers having problems with the sample database that shipped on CD – either the database was missing or it was corrupt. So I’ve provided a copy of the database on my site for download from http://datainspirations.com/uploads/rs2005sbsDW.zip. 
Then for the Microsoft® SQL Server™ 2008 Reporting Services Step by Step book, we decided to avoid the database problem by using the AdventureWorks2008 samples that Microsoft published on Codeplex (although code samples are still available on CD). We had this silly idea that the URL for the download would remain constant, but it seems that expectation was ill-founded. Currently, the sample database is found at http://msftdbprodsamples.codeplex.com/releases/view/37109 but I have no idea how long that will remain valid. My latest books (#9 and #10 which are milestones I never anticipated), Building Integrated Business Intelligence Solutions with SQL Server 2008 R2 and Office 2010 (McGraw Hill, 2011) and Business Intelligence in Microsoft SharePoint 2010 (Microsoft Press, 2011), will not ship with a CD, but will provide all code samples for download at a site maintained by the respective publishers. I expect that the URLs for the downloads for the book will remain valid, but there are lots of references to other sites that can change or disappear over time. Does that mean authors shouldn’t make reference to such sites? Personally, I think the benefits to be gained from including links are greater than the risks of the links becoming invalid at some point. Do you think the time for technology books has come to an end? Is the delivery of books in electronic format enough to keep them alive? If technological barriers were no object, what would make a book more valuable to you than other formats through which you can obtain information?

    Read the article

  • SQL Authority News – FalafelCON 2014: 2 days with the Best Developers in the World

    - by Pinal Dave
    I love presenting at various forums on various technologies. I am extremely excited that I got invited to speak at Falafel Conference 2014 in San Francisco, where I will present two technology sessions on SQL Server. If you are into web development, or if you just want to attend a conference with the best of the industry's speakers, this may be the right conference for you. What sets this conference apart from others is the technology presented as well as the speakers. Usually one has to attend a very expensive, large-scale event to hear good speakers; at this conference, you will find quite a few industry legends presenting on bleeding-edge technology. Here are a few of the reasons why I believe you should attend this conference: Choose from four tracks covering Web, Mobile development and testing, Sitefinity, and Automated Testing, or attend sessions from all four! Learn from the best developers and testers in the business in an intimate setting. Surround yourself with your peers and the opportunity to network. Learn about the latest platforms and technologies including Kendo UI, AngularJS, ASP.NET MVC, WebAPI, and more! Here are the details for the sessions which I am going to present at Falafel Conference.

    Secrets of SQL Server: Database Worst Practices

    Abstract: Chances are you have heard, or even uttered, this expression. This demo-oriented session will show many examples where database professionals were dumbfounded by their own mistakes, and it could even bring back memories of your own early DBA days. The goal of this session is to expose the small details that can be dangerous to the production environment and SQL Server as a whole, as well as to talk about worst practices and how to avoid them. Shedding light on some of these perils and the tricks to avoid them may even save your current job. After attending this session, developers will only need 60 seconds to improve performance of their database server in their SharePoint implementation. We will have a quiz during the session to keep the conversation alive. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session. Additionally, all attendees of the session will have access to the learning material presented in the session.

    The Unsung Hero

    Abstract: Slow running queries are the most common problem that developers face while working with SQL Server. While it is easy to blame SQL Server for unsatisfactory performance, the issue often lies with the way queries have been written and how indexes have been set up. The session will focus on ways of identifying problems that slow down SQL Server, and indexing tricks to fix them. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session.

    Register Now! I have learned from the Falafel team that they are running out of tickets and will soon close registration. For the next 10 days the price for registration is only USD 149. Trust me, you can’t get such world-class training and networking at such a low price. Click to Register Here! Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL

    Read the article

< Previous Page | 210 211 212 213 214 215 216 217 218 219 220 221  | Next Page >