Search Results

Search found 5908 results on 237 pages for 'cody short'.


  • When creating a library published on CodePlex, how "bad" would it be for the unit-test projects to rely on commercial products?

    - by Lasse V. Karlsen
    I have started a project on CodePlex for a WebDAV server implementation for .NET, so that I can host a WebDAV server in my own programs. This is both a learning/research project (WebDAV + the server portion) and a project I think I can have much fun with, both in terms of making it and using it. However, I see a need to mock types here in order to unit-test properly. For instance, I will be relying on HttpListener for the web server portion of the WebDAV server, and since this type has no interface and is sealed, I cannot easily make mocks or stubs out of it. Unless I use something like TypeMock. So if I used TypeMock in the unit-test projects on this library, how bad would this be for potential users? The projects are made in C# 3.5 for .NET 3.5 and 4.0, and the project files were created with Visual Studio 2010 Professional. The actual class libraries you would end up referencing in your software would of course not be encumbered with anything remotely like this, only the unit-test libraries. What are your thoughts on this?

    As an example, I have in my old code-base, which is private, the ability to initiate a WebDAV server with just this: var server = new WebDAVServer(); This constructs, and owns, an HttpListener instance internally, and I would like to verify through unit tests that if I dispose of this server object, the internal listener is disposed of. If, on the other hand, I use the overload where I hand it a listener object, that object should not be disposed of. Short of exposing the internal listener object to the outside world, something I'm a bit loath to do, how can I reliably verify that the object was disposed of? With TypeMock I can mock away parts of this object even though it isn't accessed through interfaces. The alternative would be for me to wrap everything in wrapper classes, where I have complete control.
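    A minimal sketch of the wrapper-class alternative mentioned above (the interface and class names here are hypothetical, not taken from the actual project):

        public interface IHttpListenerWrapper : IDisposable
        {
            void Start();
            void Stop();
        }

        // Thin adapter over the sealed HttpListener; only the members the
        // WebDAV server actually uses need to be surfaced here.
        public class HttpListenerWrapper : IHttpListenerWrapper
        {
            private readonly System.Net.HttpListener _listener = new System.Net.HttpListener();

            public void Start() { _listener.Start(); }
            public void Stop() { _listener.Stop(); }

            public void Dispose()
            {
                // HttpListener implements IDisposable explicitly, hence the cast.
                ((IDisposable)_listener).Dispose();
            }
        }

    With the server coded against IHttpListenerWrapper, an ordinary mocking framework (Moq, Rhino Mocks) can verify that Dispose was called when the server owns the listener, and never called when the listener was handed in, without needing TypeMock.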

    Read the article

  • Controlling a GameObject from another GameObject's script component

    - by OhMrBigshot
    I'm creating a game where, when the game starts, a Cube is duplicated GridSize * GridSize times. Now, after the cubes are duplicated, I want to attach a variable to them, say "Flag", which is a bool, from another script component (let's say I have a Prefab that generates the cloned cubes). In short, I have something like this:

    CreateTiles.cs (attached to the Prefab):

        void Start()
        {
            createMyTiles();   // a function that clones the tiles
            flagRandomTiles(); // a function that (what I'm trying to do) "Flags" 10 random cubes
        }

    CubeBehavior.cs (attached to each Cube):

        public bool hasFlag;
        // other stuff

    Now, I want flagRandomTiles() to set a Cube's hasFlag property via code, assuming I have access to the cubes via a GameObject[] array. Here's what I've tried:

    Cubes[x].hasFlag = true; - no access.
    Making a function such as Cubes[x].setHasFlag(true) - still no access.
    Initializing Cubes as a CubeBehavior object array, then doing the above - "GameObjects can't be converted to CubeBehaviors" is the error I get when I try to assign the Cubes into the array.

    How do I do this?
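    For what it's worth, the standard Unity answer to the third attempt above is to keep the GameObject[] array and fetch the script component at the point of use with GetComponent. A minimal sketch, reusing the names from the question (and ignoring the possibility of picking the same cube twice):

        void flagRandomTiles()
        {
            for (int i = 0; i < 10; i++)
            {
                int x = Random.Range(0, Cubes.Length);
                // GetComponent bridges from the GameObject to the script attached to it.
                CubeBehavior behavior = Cubes[x].GetComponent<CubeBehavior>();
                behavior.hasFlag = true;
            }
        }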

    Read the article

  • Introduction to the ADF Debugger

    - by Shay Shmeltzer
    Not that you'll ever need this blog entry - after all, there are never bugs in the code that YOU write. But maybe one day one of your peers will ask you for help debugging their ADF application, so here we go... One of the cool features of JDeveloper and ADF is the ADF Debugger - a way to debug the declarative parts of Oracle ADF. The debugger goes beyond your regular Java debugger and clearly shows you information specific to Oracle ADF - things like where you are in the taskflow/region hierarchy, what is in your various scopes, what the value of a specific EL expression is, and much more. However, judging from the number of posts on OTN where people say "I placed a System.out.println() to see what the value was...", it seems that not many are familiar with the power of the debugger. So here is a short demo that shows you some aspects of the debugger, such as:

    Setting breakpoints on various ADF artifacts
    The ADF Structure window
    The ADF Data window
    The EL Evaluator window

    Want to learn more about debugging ADF applications? I highly recommend that you go back in time to 2009 and attend Steve Muench's OOW presentation about ADF debugging. Can't travel in time yet? Then the second best option is to look at his very clear ADF Debugging Slides, which were the inspiration for the demo above.

    Read the article

  • Code Behaviour via Unit Tests

    - by Dewald Galjaard
    Some four months ago my car started acting up. Symptoms included a sputtering as my car's computer switched between gears intermittently. Imagine building up speed, and then, when you reach 80 km/h, the car magically and mysteriously decides to switch back to third or even second gear. Clearly it was confused! I managed to track down a technician, an expert in his field, to help me out. As he fitted his handheld computer to some hidden port under the dash, he started to explain: "These cars are quite intelligent, you know. When they sense something is wrong they run in a restrictive program, which probably accounts for how you managed to drive here in the first place..." I was surprised and thought this was certainly going to be an interesting test drive. The car ran smoothly down the first couple of stretches as the technician ran through routine checks. Then he said, "OK, all looking good. We need to start testing aspects of the gearbox. Inside the gearbox there are a couple of sensors. One of them is a speed sensor which talks to the computer, which in turn decides which gear to switch to. The restrictive program avoids these sensors altogether and allows the computer to obtain its input from other [non-affected] sources". Then, as soon as he forced the speed sensor to come back online, the symptoms and ill behaviour re-emerged...

    What an incredible analogy for getting into a discussion on unit testing software! Besides, I should probably put my ill fortune to some good use, right? This example provides a lot of insight into how and why we should conduct unit tests when writing code. More importantly, it captures what is easily, and unfortunately often, the most overlooked goal of writing unit tests, by those new to the art and those who oppose it alike: the goal of writing unit tests is to test the behaviour of our code under predefined conditions. Although it is very possible to test the intrinsic workings of each and every component in your code, writing several tests for each method will in practice soon prove to be an exhausting and ultimately fruitless exercise, given the ever-changing nature of business requirements. Consequently, it is quite normal while conducting proper unit tests to call any single method several times as you examine and contemplate different scenarios.

    Let's write some code to demonstrate what I mean. In my example I make use of the Moq framework and NUnit to create my tests. Truly, you can use whatever you're comfortable with. First we'll create an ISpeedSensor interface. This is to represent the speed sensor located in the gearbox. Then we'll create a Gearbox class which we'll pass to a constructor when we instantiate an object of type Computer.
    All three are described below.

    ISpeedSensor.cs:

        namespace AutomaticVehicle
        {
            public interface ISpeedSensor
            {
                int ReportCurrentSpeed();
            }
        }

    Gearbox.cs:

        namespace AutomaticVehicle
        {
            public class Gearbox
            {
                private ISpeedSensor _speedSensor;

                public Gearbox( ISpeedSensor gearboxSpeedSensor )
                {
                    _speedSensor = gearboxSpeedSensor;
                }

                /// <summary>
                /// This method obtains its reading from the speed sensor.
                /// </summary>
                public int ReportCurrentSpeed()
                {
                    return _speedSensor.ReportCurrentSpeed();
                }
            }
        }

    Computer.cs:

        namespace AutomaticVehicle
        {
            public class Computer
            {
                private Gearbox _gearbox;

                public Computer( Gearbox gearbox )
                {
                    _gearbox = gearbox;
                }

                public int GetCurrentSpeed()
                {
                    return _gearbox.ReportCurrentSpeed( );
                }
            }
        }

    Since this post is about unit testing, that is exactly what we'll create next. Create a second project in your solution. I called mine AutomaticVehicleTests, and I immediately referenced the respective NUnit, Moq and AutomaticVehicle DLLs. We're going to write a test to examine what happens inside the Computer class.

    ComputerTests.cs:

        using System;
        using AutomaticVehicle;
        using Moq;
        using NUnit.Framework;

        namespace AutomaticVehicleTests
        {
            [TestFixture]
            public class ComputerTests
            {
                [Test]
                public void Computer_Gearbox_SpeedSensor_DoesThrow()
                {
                    // Mock ISpeedSensor in gearbox
                    Mock<ISpeedSensor> speedSensor = new Mock<ISpeedSensor>( );
                    speedSensor.Setup( n => n.ReportCurrentSpeed() ).Throws<Exception>();

                    Gearbox gearbox = new Gearbox( speedSensor.Object );

                    // Create a Computer instance to test its behaviour towards an exception in the gearbox
                    Computer carComputer = new Computer( gearbox );

                    // For simplicity, let's assume for now the car only travels at 60 km/h.
                    Assert.AreEqual( 60, carComputer.GetCurrentSpeed( ) );
                }
            }
        }

    What is happening in this test? We have created a mocked object using the ISpeedSensor interface, which we've passed to our Gearbox object. Notice that I created the mocked object using an interface, not the implementation. I'll talk more about this in future posts, but in short I do this to accentuate the fact that I'm not really concerned with how SpeedSensor works internally at this particular point in time. Next, I've created a scenario where I've declared the speed sensor in Gearbox to be faulty by forcing it to throw an exception should we ask Gearbox to report on its current speed. Sneaky, sneaky. This test is a simulation of how things may behave in the real world. Inevitably things break, whether caused by mechanical failure, some logical error on your part, or a fellow developer who didn't consult the documentation (or the lack thereof) - whether you're calling a speed sensor, making a call to a database, calling a web service or just trying to write a file to disk. It's a scenario I've created, and this test is about how the code within the Computer instance will behave towards any such error as I've depicted. Now, if you've followed closely, in my final assert method you would have noticed I did something quite unexpected. I might be getting ahead of myself now, but I'm testing to see if the value returned is equal to what I expect it to be under perfect conditions – I'm not testing to see if an error has been thrown!
    Why is that? Well, in short, this is TDD. Test Driven Development is about first writing your test to define the result we want, then going back and changing the implementation within your class to obtain the desired output (I need to make sure I can drive back to the repair shop. Remember?). So let's go ahead and run our test as is. It fails miserably... Good! Let's go back to our Computer class and make a small change to the GetCurrentSpeed method.

    Computer.cs:

        public int GetCurrentSpeed()
        {
            try
            {
                return _gearbox.ReportCurrentSpeed( );
            }
            catch
            {
                return RunRestrictiveProgram( );
            }
        }

    This is a simple solution, I know, but it does provide a way to allow for different behaviour. You're more than welcome to provide an implementation for RunRestrictiveProgram (here it returns the fallback speed) should you feel the need to. It's not within the scope of this post or related to the point I'm trying to make. What is important is to notice how the focus has shifted in our approach from how things can break - to how things behave when broken.

    Happy coding!

    Read the article

  • Edd strikes again – IronRuby for Rubyists on InfoQ

    - by Eric Nelson
    Colleague, friend and generally top guy on IronRuby Edd Morgan has just been published over on InfoQ. To whet the appetite… a snippet or three.

    IronRuby for Rubyists
    IronRuby is Microsoft's implementation of the Ruby language we all know and love with the added bonus of interoperability with the .NET framework — the Iron in the name is actually an acronym for 'Implementation running on .NET'. It's supported by the .NET Common Language Runtime as well as, albeit unofficially, the Mono project. You'd be forgiven for harbouring some question in your mind about running a dynamic language such as Ruby atop the CLR - that's where the DLR (Dynamic Language Runtime) comes in. The DLR is Microsoft's way of providing dynamic language capability on top of the CLR. Both IronRuby and the DLR are, as part of Microsoft's commitment to open source software, available as part of the Microsoft Public License on GitHub and CodePlex respectively…

    And Metaprogramming with IronRuby
    The art and science of metaprogramming — especially in Ruby, where it's an absolute joy — is something that could very easily span an entire article. As you would hope, IronRuby code is fully able to manipulate itself, allowing you to bend your classes to your whim just as you would expect with a good dynamic language…

    And Riding the irails?
    So let's get to the point. I think it's a solid bet to make that a large proportion of Ruby programmers are familiar with the Rails framework - perhaps it's even safe to assume that most were first led to the Ruby language by the siren song of the Rails framework itself. Long story short, IronRuby is compatible enough to run your Rails app…

    Now… get yourself over to the full article and also check out some of Edd's other work below.

    Related Links:
    5 Steps to getting started with IronRuby
    Mini Book Review of IronRuby Unleashed by Shay Friedman
    Guest Post: Using IronRuby and .NET to produce the 'Hello World of WPF' – also by Edd
    Getting PHP and Ruby working on Windows Azure and SQL Azure
    Guest Post: What's IronRuby, and how do I put it on Rails? – also by Edd

    Read the article

  • How to subtract 1 from an original count in an ASP.NET GridView

    - by SAMIR BHOGAYTA
    I have a GridView that contains a count (which is Quantity), where I have a button that adds a row under the original row, and I need the sub-row's count (Quantity) to subtract one from the original row's Quantity. Example: before the button click, the original row = 3; after the click, the original row = 2 and the sub-row = 1.

    Code:

        // FUNCTION : Adds a new subrow
        protected void gvParent_RowCommand(object sender, GridViewCommandEventArgs e)
        {
            if (e.CommandName.Equals("btn_AddRow", StringComparison.OrdinalIgnoreCase))
            {
                // Get the row that was clicked (index 0. Meaning that 0 is 1, 1 is 2 and so on)
                // Objects can be null, Int32s cannot.
                // Int16 = 2 bytes long (short)
                // Int32 = 4 bytes long (int)
                // Int64 = 8 bytes long (long)
                int i = Convert.ToInt32(e.CommandArgument);

                // Create a DataTable based off the view state
                DataTable dataTable = (DataTable)ViewState["gvParent"];

                for (int part = 0; part < dataTable.Rows.Count; part++)
                {
                    int oldQuantitySubtract = Convert.ToInt32(dataTable.Rows[part]["Quantity"]);
                    if (oldQuantitySubtract > 1)
                    {
                        dataTable.Rows[part]["Quantity"] = oldQuantitySubtract - 1;

                        // Insert a new row at a specific index
                        DataRow dtAdd = dataTable.NewRow();
                        for (int k = 0; k < dataTable.Columns.Count; k++)
                            dtAdd[k] = dataTable.Rows[part][k];
                        dataTable.Rows.InsertAt(dtAdd, i + 1);
                        break;
                        //dataTable.Rows.Add(dtAdd);
                    }
                }

                // Rebind the data
                gvParent.DataSource = dataTable;
                gvParent.DataBind();
            }
        }

    Read the article

  • ASP.NET Session Management

    - by geekrutherford
    Great article (a little old but still relevant) about the inner workings of session management in ASP.NET: Underpinnings of the Session State Management Implementation in ASP.NET.

    Using StateServer, and the BinaryFormatter serialization it performs, caused me quite the headache over the last few days. Curiously, it appears the w3wp.exe process actually consumes more memory when utilizing StateServer and storing somewhat large and complex data types in session.

    Users began experiencing Out Of Memory exceptions in the production environment. The stack trace related it to serialization using the BinaryFormatter. Using remote debugging against our QA server, I noted that the code in the application functioned without issue. The exception occurred outside the context of the application itself, when the request had completed and the web server was trying to serialize session state into the StateServer.

    The short-term solution is switching back to the InProc method. Thus far this has proven to consume considerably less memory and has caused no issues. Long term, the complex object stored in session will be off-loaded into a web service used to access the information directly from the database, outside the context of the object used to encapsulate it.
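    A detail worth keeping in mind when weighing the two modes (a minimal illustration; the type and key names are made up): under StateServer, everything placed in session must survive a BinaryFormatter round-trip at the end of every request, whereas InProc stores live object references and never serializes.

        // Must be marked serializable (along with everything it references),
        // or StateServer will throw when the request ends.
        [Serializable]
        public class ReportFilter
        {
            public string Region { get; set; }
            public DateTime From { get; set; }
        }

        // (inside a Page method) Under InProc this stores a reference; under
        // StateServer the whole object graph is serialized out of process
        // after the request completes.
        Session["reportFilter"] = new ReportFilter { Region = "EMEA", From = DateTime.Today };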

    Read the article

  • ArchBeat Link-o-Rama for 2012-06-27

    - by Bob Rhubart
    Resource Kit: Oracle Exadata for the Communications industry
    In addition to several customer case studies, in video and white paper formats, this resource kit also includes a technical overview of Oracle Exadata Database Machine and a product datasheet. Registration is required for those who don't already have a free Oracle.com membership account.

    Call for Nominations: Oracle Fusion Middleware Innovation Awards 2012 - Win a free pass to #OOW12
    These awards honor customers for their cutting-edge solutions using Oracle Fusion Middleware. Either a customer, their partner, or an Oracle representative can submit the nomination form on behalf of the customer. Submission deadline: July 17. Winners receive a free pass to Oracle OpenWorld 2012 in San Francisco.

    BPM – Disable DBMS job to refresh B2B Materialized View | Mark Nelson
    "If you are running BPM and you are not using B2B, you might want to disable the DBMS job that refreshes the B2B materialized view," says Fusion Middleware A-Team blogger Mark Nelson. Learn how in his short post.

    A Universal JMX Client for Weblogic – Part 1: Monitoring BPEL Thread Pools in SOA 11g | Stefan Koser
    A concise how-to from Oracle Fusion Middleware A-Team blogger Stefan Koser.

    Thought for the Day
    "There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." — C. A. R. Hoare

    Source: SoftwareQuotes.com

    Read the article

  • Small change in MVVM Light Toolkit templates for Blend 4 RC

    - by Laurent Bugnion
    Ah, the joy of new releases… You will find that the MVVM Light Toolkit works fine with Visual Studio 2010 RTM and Blend 4 RC, except for a few adjustments:

    Blend templates

    The path to the Expression Blend 4 project templates changed. If you start Expression Blend 4 RC now, you will likely not see the MVVM Light templates in the New Project dialog.

    [Screenshot: New Project dialog with MVVM Light]

    To restore the templates, follow these steps:
    Open Windows Explorer.
    Navigate to C:\Users\[username]\Documents\Expression (or simply type My Documents in Windows Explorer and then open the Expression folder).
    Rename the "Blend 4 beta" folder to "Blend 4".

    That's it; you should now see the templates in the New Project dialog in Blend 4. Note that since the new name is "Blend 4", I hope that I won't need to repeat this exercise when Blend 4 RTM is released!

    Windows Phone 7 templates

    Since the Windows Phone 7 tools are not ready yet for Visual Studio 2010 RTM and Blend 4 RC, the templates in the Silverlight for Windows Phone folders will not work. You will get an error if you try to create such a project in the newly released environment. I hesitated to remove these templates from the current packages, but honestly that is a lot of trouble for a very short time before the tools for Windows Phone 7 are released (note: I don't have any information as to when these tools will be released). In the meantime, just don't create a WinPhone7 application. Reminder: if you want to write code for Windows Phone 7, you need to keep Visual Studio 2010 RC as well as Expression Blend 4 beta.

    Updated package

    I uploaded an update to the Blend 4 templates. It is available, like before, on the "Install manually" page and on the CodePlex page.

    Laurent Bugnion (GalaSoft)
    Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article

  • One Does Like To Code: DevoxxUK

    - by Tori Wieldt
    What's happening at Devoxx UK? I'll be talking to rock star speakers, community leaders, authors, JSR leads and more. This video is a short introduction.

    Check out these great sessions:

    Thursday, June 12
    Perchance to Stream with Java 8, by Paul Sandoz, 13:40 - 14:30 | Room 1
    Making the Internet-of-Things a Reality with Embedded Java, by Simon Ritter, 11:50 - 12:40 | Room 4
    Java SE 8 Lambdas and Streams Lab, by Simon Ritter, 17:00 - 20:00 | Room Mezzanine
    Safety Not Guaranteed: Sun. Misc. Unsafe and the Quest for Safe Alternatives, by Paul Sandoz, 18:45 - 19:45 | Room 3
    Join the Java Evolution, Heather VanCura and Patrick Curran, 19:45 – 20:45 | Room 2
    Glassfish is Here to Stay, David Delabassee and Antonio Goncalves, 19:45 – 20:45 | Room Expo

    Here is the full line-up of sessions. Devoxx UK includes a Hackergarten, where devs can work on an Open Source project of their choice. The Adopt OpenJDK and Adopt a JSR Program folks will be there to help attendees contribute back to Java SE and Java EE itself!

    Saturday includes a special Devoxx4Kids event in conjunction with the London Java Community. It's designed to teach 10-16 year-olds simple programming concepts, robotics, electronics, and games making. Workshops include LEGO Mindstorms (robotic engineering), Greenfoot (programming), Arduino (electronics), Scratch (games making), Minecraft Modding (game hacking) and NAO (robotic programming). There is a small fee, and you must register. If you can't attend Devoxx UK in person, stay tuned to the YouTube/Java channel. I'll be doing plenty of interviews so you can join the fun from around the world.

    Read the article

  • A Modern Marketing Marvel: Eloqua Experience 2013

    - by kristin.jellison
    Hey there, partners— You'd be hard pressed to find a more convincing example of modern marketing than the one that descended upon San Francisco last week. We're talking about Eloqua Experience 2013, of course. It is remarkable that a marketing technology conference has become a case study in successful 21st-century marketing practices. Eloqua Experience 2013 (#EE13) was all about customer-focused, targeted messaging, multichannel content, analytics and real-time multiscreen engagement. It made for a busy, yet interactive experience for over 2,000 eager attendees. This year's event brought together some of the world's most innovative marketers for three days of immersive sessions covering marketing best practices, customer stories and deep-dive technical classes. With 70 breakout sessions, product announcements, and a special conversation with Vince Gilligan, creator and executive producer of "Breaking Bad," #EE13 brought a lot of critical marketing news to light. Oracle's goal: to make sure our partners stay updated. As you know, Eloqua joined Oracle in late 2012, further rounding out our Customer Experience applications platform. Eloqua is a marketing automation solution and marketing cloud centerpiece that partners can use to target the right buyers, easily execute campaigns, bring leads to sales and bring in high ROIs. The resources below will help you stay on top of the industry's best practices for marketing, plus all the advantages Eloqua can bring to partners.

    Partner Opportunities and Strategy with Eloqua: The latest Eloqua partner strategy.
    Interview with Oracle Eloqua GM Kevin Akeroyd on Eloqua Experience: A short recap of 2013's Experience.
    Eloqua Product Announcements: John Stetic, VP of Products for Oracle Eloqua, highlights the top product news, including a new profiler app and the ability to integrate display advertising into multichannel campaigns.
    Eloqua Experience Highlight Reel: See what all the bustle was about.
    Eloqua Experience Session Overviews: A quick look at what the keynote and breakout sessions covered, with links to session content.
    Modern Marketing Essentials Library: Tips, blueprints, and strategies for success based on the 5 Tenets of Modern Marketing.

    Over and out,
    Your OPN Marketing Allies

    Read the article

  • Penne alla MVP

    - by Valter Minute
    I’m sorry for the long silence on this blog and the long delay in replying to the friends that commented on my articles. I’ve been quite busy in the last weeks and I spent a lot of time traveling around Italy (not for pleasure!). In the meantime I’ve been renewed as an MVP on April the 1st (nice date to renew someone with such a bad sense of humor…). I decided to celebrate my MVP award with a new recipe (to be honest, I celebrated by eating the results of this recipe!) and I decided to call it “penne alla MVP”… just because I’m not good in finding nice names for my recipes. Ingredients (for 4 people): 360g pasta (penne or other short pasta) 300g small shrimps 1 cup of whipped cream 2 tablespoons of olive oil 1 small leek 1 glass of beer (I used Hoegaarden dutch white beer… but just because I like it and I finished the rest of the bootle while cooking) Chives Salt, pepper Prepare the pasta by boiling it in salted water, as usual. In the meantime chop the leek in very small bits, heat the oil inside a pan and when the oil is hot, drop the leek chops and let them cook for a few minutes. Add the shrimps and the glass of beer. Let them cook inside beer until they are cooked (if you used pre-cooked shrimps a couple of minutes would be enough to heat them and gave them the flavour of beer). Add the whipped cream and mix it well with the shrimps and the sauce. Dry the pasta and drop the sauce on top of it and then add the chives finely chopped.

    Read the article

  • EBS Workflow Overview & Best Practices - US

    - by Annemarie Provisero
    ADVISOR WEBCAST: EBS Workflow Overview & Best Practices - US
    PRODUCT FAMILY: ATG - Workflow
    February 17, 2011 at 17:00 UK / 18:00 CET / 09:00 am Pacific / 10:00 am Mountain / 12:00 Eastern

    This 1.5-hour session is recommended for technical and functional users who want a general overview of Workflow, and of the tools and utilities available for it, in an E-Business Suite environment.

    TOPICS WILL INCLUDE:
    Introduction of Workflow
    Useful Utilities and Tools
    Best Practices
    Q&A

    A short, live demonstration (only if applicable) and question and answer period will be included. Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. Click here to register for this session.

    -------------------------------------------------------------------------------------------------------------

    The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • Advanced Level Troubleshooting for Oracle Process Manufacturing Financials

    - by Annemarie Provisero
    ADVISOR WEBCAST: Advanced Level Troubleshooting for Oracle Process Manufacturing Financials
    PRODUCT FAMILY: Oracle Process Manufacturing
    February 16, 2011 at 8 am PT, 9 am MT, 11 am ET

    This one-hour session provides basic to advanced level troubleshooting information for functional users, system administrators, DBAs and customers.

    TOPICS WILL INCLUDE:
    Finding log files and error messages for important processes in OPM Financials
    Important SQL queries and filtering transaction-related issues
    Enabling Debug mode in OPM Financials and SLA

    A short, live demonstration (only if applicable) and question and answer period will be included. Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. Click here to register for this session.

    -------------------------------------------------------------------------------------------------------------

    The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • Branching and CI Builds with Agile

    - by Bob Horn
    We follow many agile processes, including automated tests, continuous integration, sprint reviews, etc... We're currently having a debate about how often we should branch release builds. We've been doing two-week sprints and trying to deploy to production at the end of each sprint. Some of us think we should be branching every sprint. Some of us think that's overkill. If a project encompasses three Visual Studio solutions, and we branch every sprint, then that's three branches, and three CI builds to create every two weeks. If we do this for six months, we'll end up with 36 branches and 36 CI builds. There is overhead involved in that. For those of us that think that branching every sprint is overkill, we don't have a very good alternative. On my last project, we deployed some solutions from the Main trunk. Yeah, that's not good, but it saved on some of the overhead. What's the right way to manage branching/releasing and CI builds, using agile, when we have such short (two-week) sprint cycles?

    Read the article

  • Oracle Advanced Benefits: Plan Design Maintenance for Open Enrollment

    - by Annemarie Provisero
    ADVISOR WEBCAST: Oracle Advanced Benefits: Plan Design Maintenance for Open Enrollment
    PRODUCT FAMILY: Oracle HCM - Benefits
    July 13, 2011 at 1 pm PT, 2 pm MT, 4 pm ET

    This session gives you the information to define new, and maintain existing, Compensation Objects used in your Benefits setup. The course highlights things to consider when getting ready for Open Enrollment or when there is a need to change compensation objects. We will review creating a new, or ending an old, program, plan, or option. We also review what to do when you need to move from an Unrestricted program to a Restricted one.

    TOPICS WILL INCLUDE:
    Adding or Modifying Compensation Objects
    Ending Compensation Objects
    Elements and Element Links
    Standard and Variable Rates
    Dependents and Beneficiaries
    Moving from Oracle Standard Benefits to Oracle Advanced Benefits

    A short, live demonstration (only if applicable) and question and answer period will be included. Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. Click here to register for this session.

    -------------------------------------------------------------------------------------------------------------

    The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • Making The EBS Upgrade From 11.5.10 Easier - Part III

    - by Annemarie Provisero
    ADVISOR WEBCAST: Making The EBS Upgrade From 11.5.10 Easier - Part III
    PRODUCT FAMILY: E-Business Suite
    July 19, 2011 at 8 am PT, 9 am MT, 11 am ET

    This one-hour session is recommended for technical users who are responsible for upgrading their E-Business Suite applications from Release 11.5.10 to Release 12.1.x. As you begin your upgrade process, there are a number of tools available to assist you in a successful upgrade. A successful upgrade requires careful planning, correct upgrade processing, detailed testing, and user (re)training prior to upgrade. Over three sessions we discuss the tools that you can use to assist in your upgrade tasks. These tools are available to you via My Oracle Support and as part of the E-Business Suite product offerings. In this third session, we'll cover the best practices for using the upgrade tools. Additionally, this session includes an extended question and answer period.

    In the first part of the three-session series, we covered the following topics:
    Overview of Tools Available for Upgrading
    Upgrade versus Re-implementing
    Upgrade Community
    Upgrade Product Information Center Page
    Detailed Look at Upgrade Advisor

    In the second session, we covered the following topics:
    Recap of Part I
    Detailed Look at Maintenance Wizard
    Detailed Look at Patch Wizard

    A replay of those sessions is available via Note 740964.1, Advisor Webcast Archive.

    A short, live demonstration (only if applicable) and question and answer period will be included. Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. Click here to register for this session.

    -------------------------------------------------------------------------------------------------------------

    The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • Announcing Oracle Database Mobile Server 11gR2

    - by Eric Jensen
    I'm pleased to announce that Oracle Database Mobile Server 11gR2 has been released. It's available now for download by existing customers, or anyone who wants to try it out. New features include:

    Support for J2ME platforms, specifically CDC platforms including OJEC (this is in addition to our existing support for Java SE and SE Embedded)
    Per-application integration with Berkeley DB on Android
    Server-side support for the Apache TomEE platform

    Adding support for Oracle Java Micro Edition Embedded Client (OJEC for short) is an important milestone for us; it enables Database Mobile Server to work with any of the incredibly wide array of devices that run J2ME. In particular, it enables management of networks of embedded devices, AKA machine-to-machine (M2M) networks. As these types of networks become more common in areas like healthcare, automotive, and manufacturing, we're seeing demand for Database Mobile Server from new and different areas. This is in addition to our existing array of mobile device use cases. The Android integration feature with Berkeley DB represents the completion of phase I of our Android support plan; we now offer a full set of sync, device and app management features for that platform. Going forward, we plan to continue the dual-focus approach, supporting mobile platforms such as Android and iOS (hint) on the one hand, and networks of embedded M2M devices on the other. In either case, Database Mobile Server continues to be the best way to connect data-driven applications to an Oracle backend.

    Read the article

  • An Oracle Exastack Recap

    - by Kristin Rose
    For those ISVs and OEMs who tuned into Oracle's FY13 Partner Kickoff, thank you! Your participation and presence helped make this year's show another great success. The OPN Communications team was lucky enough to get a chance to sit down with Chris Baker, Oracle SVP of Worldwide ISV, OEM & Java, all the way from London, as he recapped the achievements of the past year with the Oracle Exastack Program. Be sure to watch his short video below.

    Here are some highlights:
    1000 "Readies" - those partners that are ready to use the latest version of our products
    Over 100 partners that are ready to use Oracle Exastack, Oracle Exadata Database Machine, and Oracle Exalogic Elastic Cloud
    The new Oracle Exalytics machine for analysis

    In less than one year, more than 100 ISV applications have achieved Oracle Exastack Ready status, and more than 35 ISV applications have achieved Oracle Exastack Optimized status. These partners can be found by Oracle customers in the Oracle Solutions Catalog. Demonstrating to customers that their solutions are tuned to deliver optimum speed, scalability and reliability on Oracle Engineered Systems, Oracle partners are rapidly achieving Oracle Exastack Optimized certification. Read the press release here. By simplifying your company's architecture with the Oracle Exastack program, both ISVs and OEMs are able to better concentrate on their applications and deliver enhanced benefits to their customers.

    Cheers!
    The OPN Communications Team

    Read the article

  • Many small scripts, one repository or multiple?

    - by The Jug
    A co-worker and I have run into an issue that we have multiple opinions on. Currently we have a git repository in which we keep all of our cronjobs. There are about 20 crons, and they are not really related except for the fact that they are all small Python scripts and essential for some activity. We are using a fabric.py file to deploy and a requirements.txt file to manage the requirements for all of the scripts.

    Our question is basically: do we keep all of these scripts in one git repository, or should we separate them out into their own repositories? By keeping them in one repository, it is easier to deploy them onto one server, and we can use just one cron file for all the scripts. However, this feels wrong, as the 20 cronjobs are not logically related. Additionally, when using one requirements.txt file for all the scripts, it's hard to figure out what the dependencies are for a particular script, and they all have to use the same versions of packages. We could separate all of the scripts out into their own repositories, but this creates 20 different repositories that need to be remembered and dealt with. Most of these scripts are not very large, and that solution seems to be overkill.

    A related question: do we use one big crontab file for all cronjobs, or a separate file for each? If each has its own, how does one crontab's installation avoid overwriting the other 19? This also seems like a pain, as there would then be 20 different cron files to keep track of.

    In short, our main question is: do we keep them all closely bundled as one repository, or do we separate them out into their own repositories, each with its own requirements.txt and fabfile.py? We feel like we're probably overlooking some really simple solution. Is there an easier way to deal with this issue?

    Read the article

  • Meet the New Windows Azure

    - by ScottGu
    Today we are releasing a major set of improvements to Windows Azure. Below is a short summary of just a few of them:

    New Admin Portal and Command Line Tools

    Today's release comes with a new Windows Azure portal that will enable you to manage all features and services offered on Windows Azure in a seamless, integrated way. It is very fast and fluid, supports filtering and sorting (making it much easier to use for large deployments), works on all browsers, and offers a lot of great new features – including built-in VM, Web site, Storage, and Cloud Service monitoring support.

    The new portal is built on top of a REST-based management API within Windows Azure – and everything you can do through the portal can also be programmed directly against this Web API. We are also today releasing command-line tools (which, like the portal, call the REST management APIs) to make it even easier to script and automate your administration tasks. We are offering both a PowerShell (for Windows) and Bash (for Mac and Linux) set of tools to download. Like our SDKs, the code for these tools is hosted on GitHub under an Apache 2 license.

    Virtual Machines

    Windows Azure now supports the ability to deploy and run durable VMs in the cloud. You can easily create these VMs using a new Image Gallery built into the new Windows Azure Portal, or alternatively upload and run your own custom-built VHD images.

    Virtual Machines are durable (meaning anything you install within them persists across reboots) and you can use any OS with them. Our built-in image gallery includes both Windows Server images (including the new Windows Server 2012 RC) as well as Linux images (including Ubuntu, CentOS, and SUSE distributions). Once you create a VM instance you can easily Terminal Server or SSH into it in order to configure and customize the VM however you want (and optionally capture your own image snapshot of it to use when creating new VM instances). This provides you with the flexibility to run pretty much any workload within Windows Azure.

    The new Windows Azure Portal provides a rich set of management features for Virtual Machines – including the ability to monitor and track resource utilization within them. Our new Virtual Machine support also enables the ability to easily attach multiple data-disks to VMs (which you can then mount and format as drives). You can optionally enable geo-replication support on these – which will cause Windows Azure to continuously replicate your storage to a secondary data-center at least 400 miles away from your primary data-center as a backup.

    We use the same VHD format that is supported with Windows virtualization today (and which we've released as an open spec), which enables you to easily migrate existing workloads you might already have virtualized into Windows Azure. We also make it easy to download VHDs from Windows Azure, which also provides the flexibility to easily migrate cloud-based VM workloads to an on-premise environment. All you need to do is download the VHD file and boot it up locally, no import/export steps required.

    Web Sites

    Windows Azure now supports the ability to quickly and easily deploy ASP.NET, Node.js and PHP web-sites to a highly scalable cloud environment that allows you to start small (and for free) and then scale up as your traffic grows.
    You can create a new web site in Azure and have it ready to deploy to in under 10 seconds. The new Windows Azure Portal provides built-in administration support for Web sites – including the ability to monitor and track resource utilization in real-time.

    You can deploy to web-sites in seconds using FTP, Git, TFS and Web Deploy. We are also releasing tooling updates today for both Visual Studio and Web Matrix that enable developers to seamlessly deploy ASP.NET applications to this new offering. The VS and Web Matrix publishing support includes the ability to deploy SQL databases as part of web site deployment – as well as the ability to incrementally update database schema with a later deployment.

    You can integrate web application publishing with source control by selecting the "Set up TFS publishing" or "Set up Git publishing" links on a web-site's dashboard. Doing so will enable integration with our new TFS online service (which enables a full TFS workflow – including elastic build and testing support), or create a Git repository that you can reference as a remote and push deployments to. Once you push a deployment using TFS or Git, the deployments tab will keep track of the deployments you make, and enable you to select an older (or newer) deployment and quickly redeploy your site to that snapshot of the code. This provides a very powerful DevOps workflow experience.

    Windows Azure now allows you to deploy up to 10 web-sites into a free, shared/multi-tenant hosting environment (where a site you deploy will be one of multiple sites running on a shared set of server resources). This provides an easy way to get started on projects at no cost. You can then optionally upgrade your sites to run in a "reserved mode" that isolates them so that you are the only customer within a virtual machine. And you can elastically scale the amount of resources your sites use – allowing you to increase your reserved instance capacity as your traffic scales.

    Windows Azure automatically handles load balancing traffic across VM instances, and you get the same, super fast deployment options (FTP, Git, TFS and Web Deploy) regardless of how many reserved instances you use. With Windows Azure you pay for compute capacity on a per-hour basis – which allows you to scale your resources up and down to match only what you need.

    Cloud Services and Distributed Caching

    Windows Azure also supports the ability to build cloud services that support rich multi-tier architectures, automated application management, and scaling to extremely large deployments. Previously we referred to this capability as "hosted services" – with this week's release we are now referring to this capability as "cloud services". We are also enabling a bunch of new features with them.

    Distributed Cache

    One of the really cool new features being enabled with cloud services is a new distributed cache capability that enables you to use and set up a low-latency, in-memory distributed cache within your applications. This cache is isolated for use just by your applications, and does not have any throttling limits.

    This cache can dynamically grow and shrink elastically (without you having to redeploy your app or make code changes), and supports the full richness of the AppFabric Cache Server API (including regions, high availability, notifications, local cache and more).
    In addition to supporting the AppFabric Cache Server API, it also now supports the Memcached protocol – allowing you to point code written against Memcached at it (no code changes required).

    The new distributed cache can be set up to run in one of two ways:

    1) Using a co-located approach. In this option you allocate a percentage of memory in your existing web and worker roles to be used by the cache, and then the cache joins the memory into one large distributed cache. Any data put into the cache by one role instance can be accessed by other role instances in your application – regardless of whether the cached data is stored on it or another role. The big benefit with the "co-located" option is that it is free (you don't have to pay anything to enable it) and it allows you to use what might otherwise have been unused memory within your application VMs.

    2) Alternatively, you can add "cache worker roles" to your cloud service that are used solely for caching. These will also be joined into one large distributed cache ring that other roles within your application can access. You can use these roles to cache 10s or 100s of GBs of data in-memory very effectively – and the cache can be elastically increased or decreased at runtime within your application.

    New SDKs and Tooling Support

    We have updated all of the Windows Azure SDKs with today's release to include new features and capabilities. Our SDKs are now available for multiple languages, and all of the source in them is published under an Apache 2 license and maintained in GitHub repositories.

    The .NET SDK for Azure has in particular seen a bunch of great improvements with today's release, and now includes tooling support for both VS 2010 and the VS 2012 RC. We are also now shipping Windows, Mac and Linux SDK downloads for languages that are offered on all of these systems – allowing developers to develop Windows Azure applications using any development operating system.

    Much, Much More

    The above is just a short list of some of the improvements that are shipping in either preview or final form today – there is a LOT more in today's release. These include new Virtual Private Networking capabilities, new Service Bus runtime and tooling support, the public preview of the new Azure Media Services, new Data Centers, significantly upgraded network and storage hardware, SQL Reporting Services, new Identity features, support within 40+ new countries and territories, and much, much more.

    You can learn more about Windows Azure and sign up to try it for free at http://windowsazure.com. You can also watch a live keynote I'm giving at 1pm June 7th (later today) where I'll walk through all of the new features. We will be opening up the new features I discussed above for public usage a few hours after the keynote concludes. We are really excited to see the great applications you build with them.

    Hope this helps,
    Scott
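    To make the caching description above concrete, here is a rough sketch of what using the cache from a role looks like with the AppFabric Cache client API (the type, configuration and key names are all made up for illustration; this is not an official sample):

        using System;
        using Microsoft.ApplicationServer.Caching;

        [Serializable]  // cached objects cross a process boundary, so they must serialize
        public class Product
        {
            public string Id;
            public string Name;
        }

        public class ProductLookup
        {
            // DataCacheFactory reads its endpoints from the role's cache client configuration.
            private static readonly DataCacheFactory Factory = new DataCacheFactory();
            private static readonly DataCache Cache = Factory.GetDefaultCache();

            public Product GetProduct(string id)
            {
                // Try the distributed cache first; fall back to the data store on a miss.
                Product product = Cache.Get(id) as Product;
                if (product == null)
                {
                    product = new Product { Id = id, Name = "(loaded from the database)" };
                    Cache.Put(id, product, TimeSpan.FromMinutes(10)); // 10-minute time-to-live
                }
                return product;
            }
        }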

    Read the article

  • F# Simple Twitter Update

    - by mroberts
    A short while ago I posted some code for a C# twitter update. I decided to move the same functionality / logic to F#. Here is what I came up with:

        namespace Server.Actions

        open System
        open System.IO
        open System.Net
        open System.Text

        type public TwitterUpdate() =

            //member variables
            [<DefaultValue>] val mutable _body : string
            [<DefaultValue>] val mutable _userName : string
            [<DefaultValue>] val mutable _password : string

            //Properties
            member this.Body with get() = this._body and set(value) = this._body <- value
            member this.UserName with get() = this._userName and set(value) = this._userName <- value
            member this.Password with get() = this._password and set(value) = this._password <- value

            //Methods
            member this.Execute() =
                let login = String.Format("{0}:{1}", this._userName, this._password)
                let creds = Convert.ToBase64String(Encoding.ASCII.GetBytes(login))
                let tweet = Encoding.ASCII.GetBytes(String.Format("status={0}", this._body))
                let request = WebRequest.Create("http://twitter.com/statuses/update.xml") :?> HttpWebRequest

                request.Method <- "POST"
                request.ServicePoint.Expect100Continue <- false
                request.Headers.Add("Authorization", String.Format("Basic {0}", creds))
                request.ContentType <- "application/x-www-form-urlencoded"
                request.ContentLength <- int64 tweet.Length

                let reqStream = request.GetRequestStream()
                reqStream.Write(tweet, 0, tweet.Length)
                reqStream.Close()

                let response = request.GetResponse() :?> HttpWebResponse

                match response.StatusCode with
                | HttpStatusCode.OK -> true
                | _ -> false

    While the above seems to work, it feels to me like it is not taking advantage of some functional concepts. I'd love to get some feedback on how to make the above more "functional" in nature. For example, I don't like the mutable properties.

    Read the article

  • Infrastructure to effectively set up experiments and learn from them

    - by David
    Open-org.com is in the early stages of creating our first product, a place on the web where one can ask lawyers questions at a fraction of their normal cost. An early-stage front page can be found here. I was inspired by this video, recommended by Jeff Atwood, which talks about getting feedback faster; that is the reason for this question.

    The problem: needless to say, we want our conversion rates to be as high as possible. Therefore, we want to be able to rapidly set up a new experiment where we change something on the site (like moving an image slightly, rewriting a sentence, etc.). We then want to present the modified page to a random subset of the users. After that, we will compare the conversion rates of the experiment with another version. I could very well imagine that we want to run 10-100 experiments simultaneously, and it would be nice to have features where experiments that clearly have worse results are ended ahead of schedule.

    My question: does infrastructure to support the whole process exist? A short description of our infrastructure: we use EC2 and PHP, and have a script to automatically start up new instances with all needed software. Still, starting up a new server for every experiment seems like overkill, so I am wondering what other options exist.

    Btw, if you feel like working for Open-org.com, you can pick a task and start working, or suggest a new task. All profits are given out to the contributors.
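    The core mechanic described above, deterministically assigning each visitor to a variant so the same person always sees the same page, is small enough to sketch. The snippet below is illustrative C# (the site itself runs PHP), and every name in it is invented:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        public static class Experiments
        {
            // Deterministic bucketing: hashing (userId, experiment) means a returning
            // visitor always lands in the same variant, keeping results consistent.
            public static int VariantFor(string userId, string experimentName, int variantCount)
            {
                using (var md5 = MD5.Create())
                {
                    byte[] hash = md5.ComputeHash(Encoding.UTF8.GetBytes(userId + ":" + experimentName));
                    return hash[0] % variantCount; // first byte is plenty for a bucket index
                }
            }
        }

        // Usage: int variant = Experiments.VariantFor(visitorId, "front-page-headline", 2);

    Conversions are then logged per (experiment, variant) pair and the rates compared; an experiment whose variant is clearly losing can simply be retired early.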

    Read the article

  • Workflow workarounds: tracking individual column changes

    - by PeterBrunone
    This post is long overdue, but since the question keeps popping up on various SharePoint discussion lists, I figured I'd document the answer here (next time I can just post a link instead of typing the whole thing out again).

    In short, you cannot trigger a SharePoint workflow when a column changes; you can only use the ItemChanged event. To get more granular, then, you need to add some extra bits.

    Let's say you have a list called "5K Races" with a column called StartTime, and you want to execute some actions when the StartTime value changes. Simply perform the following steps:

    1) Create an additional column (same datatype) called OldStartTime.
    2) When the workflow starts, compare StartTime to OldStartTime.
       a) If they are equal, then do nothing (end).
       b) If they are NOT equal, proceed with your workflow.
    3) If 2b, then set OldStartTime to the value of StartTime.

    By performing step 3, you ensure that by the end of the workflow, OldStartTime will be equal to StartTime -- this is important because the workflow will continue to run every time a particular item is changed, but by taking away the criterion that would cause the workflow to run a second time, you have avoided an endless loop situation.
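    For readers who can deploy code, the same compare-and-sync guard can also be expressed as an event receiver. Below is a rough sketch against the SharePoint 2010 object model, with hypothetical class and field names; the workflow-only approach above remains the answer when deploying code isn't an option:

        using System;
        using Microsoft.SharePoint;

        public class StartTimeReceiver : SPItemEventReceiver
        {
            public override void ItemUpdated(SPItemEventProperties properties)
            {
                SPListItem item = properties.ListItem;
                object start = item["StartTime"];
                object oldStart = item["OldStartTime"];

                if (Equals(start, oldStart))
                    return; // StartTime didn't change, so do nothing

                // ... actions that should run when StartTime changes go here ...

                // Sync the shadow column without re-triggering this receiver.
                EventFiringEnabled = false;
                item["OldStartTime"] = start;
                item.Update();
                EventFiringEnabled = true;
            }
        }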

    Read the article

  • Converting raw data type to enumerated type

    - by Jim Lahman
    There are times when an enumerated type is preferred over the raw data type. An example is when we need to check the health of x-ray gauges in use on a production line. Rather than using a raw scheme like 0, 1 and 2, we can use an enumerated type:

        /// <summary>
        /// POR Healthy status indicator
        /// </summary>
        /// <remarks>The healthy status is for each POR x-ray gauge; each has its own status.</remarks>
        [Flags]
        public enum POR_HEALTH : short
        {
            /// <summary>
            /// POR1 healthy status indicator
            /// </summary>
            POR1 = 0,
            /// <summary>
            /// POR2 healthy status indicator
            /// </summary>
            POR2 = 1,
            /// <summary>
            /// Both POR1 and POR2 healthy status indicator
            /// </summary>
            BOTH = 2
        }

    By using the [Flags] attribute, we are treating the enumerated type as a bit mask. We can then use bitwise operations such as AND, OR, NOT, etc. Now, when we want to check the health of a specific gauge, we would rather use the name of the gauge than its numeric identity; it makes for better reading and better programming practice. To translate the numeric identity to the enumerated value, we use the Parse method of the Enum class:

        POR_HEALTH GaugeHealth = (POR_HEALTH)Enum.Parse(typeof(POR_HEALTH), XrayMsg.Gauge_ID.ToString());

    The Parse method creates an instance of the enumerated type. Now, we can use the name of the gauge rather than the numeric identity:

        if (GaugeHealth == POR_HEALTH.POR1 || GaugeHealth == POR_HEALTH.BOTH)
        {
            XrayHealthyTag.Name = Properties.Settings.Default.POR1XRayHealthyTag;
        }
        else if (GaugeHealth == POR_HEALTH.POR2)
        {
            XrayHealthyTag.Name = Properties.Settings.Default.POR2XRayHealthyTag;
        }

    Read the article
