Search Results

Search found 1568 results on 63 pages for 'ben mills'.


  • Towards Ultra-Reusability for ADF - Adaptive Bindings

    - by Duncan Mills
    The task flow mechanism embodies one of the key value propositions of the ADF Framework, its primary contribution being the componentization of your applications and, implicitly, the introduction of a re-use culture, particularly in large applications. However, what if we could do more? How could we make task flows even more re-usable than they are today? Well, one great technique is to take advantage of a feature that is already present in the framework, a feature which I will call, for want of a better name, "adaptive bindings".

    What's an adaptive binding? Well, consider a simple use case. I have several screens within my application which display tabular data and which are all essentially identical; the only difference is that they happen to be based on different data collections (View Objects, Bean collections, whatever) and have a different set of columns. Apart from that, however, they happen to be identical: same toolbar, same key functions and so on. So wouldn't it be nice if I could have a single parametrized task flow to represent that type of UI and reuse it? Hold on, you say, great idea; however, to do that we'd run into problems. Each different collection that I want to display needs different entries in the pageDef file, and:

    - I want to continue to use the ADF Bindings mechanism rather than dropping back to passing the whole collection into the task flow
    - If I do use bindings, there is no way I want to have to declare iterators and tree bindings for every possible collection that I might want the flow to handle

    Ah, joy! I reply, no need to panic, you can just use adaptive bindings.

    Defining an Adaptive Binding

    It's easiest to explain with a simple before and after use case. Here's a basic pageDef definition for our familiar Departments table:

        <executables>
          <iterator Binds="DepartmentsView1" DataControl="HRAppModuleDataControl" RangeSize="25"
                    id="DepartmentsView1Iterator"/>
        </executables>
        <bindings>
          <tree IterBinding="DepartmentsView1Iterator" id="DepartmentsView1">
            <nodeDefinition DefName="oracle.demo.model.vo.DepartmentsView" Name="DepartmentsView10">
              <AttrNames>
                <Item Value="DepartmentId"/>
                <Item Value="DepartmentName"/>
                <Item Value="ManagerId"/>
                <Item Value="LocationId"/>
              </AttrNames>
            </nodeDefinition>
          </tree>
        </bindings>

    Here's the adaptive version:

        <executables>
          <iterator Binds="${pageFlowScope.voName}" DataControl="HRAppModuleDataControl" RangeSize="25"
                    id="TableSourceIterator"/>
        </executables>
        <bindings>
          <tree IterBinding="TableSourceIterator" id="GenericView">
            <nodeDefinition Name="GenericViewNode"/>
          </tree>
        </bindings>

    You'll notice three changes here:

    - Most importantly, you'll see that the hard-coded View Object name that formerly populated the iterator Binds attribute is gone and has been replaced by an expression (${pageFlowScope.voName}). This, of course, is key: we can pass a parameter to the task flow, telling it exactly what VO to instantiate to populate this table!
    - I've changed the IDs of the iterator and the tree binding, simply to reflect that they are now re-usable.
    - The tree binding itself has been simplified and the node definition is now empty. What this effectively means is that the #{node} map exposed through the tree binding will expose every attribute of the underlying iterator's collection - neat!
    (Kudos to Eugene Fedorenko at this point, who reminded me that this was even possible, in his excellent "deep dive" session at OpenWorld this year.)

    Using the Adaptive Binding in the UI

    Now that we have a parametrized binding we have to make changes in the UI as well, first of all to reflect the new ID that we've assigned to the binding (of course), but also to change the column list from being a fixed, known list to being a generic, metadata-driven set:

        <af:table value="#{bindings.GenericView.collectionModel}" rows="#{bindings.GenericView.rangeSize}"
                  fetchSize="#{bindings.GenericView.rangeSize}"
                  emptyText="#{bindings.GenericView.viewable ? 'No data to display.' : 'Access Denied.'}"
                  var="row" rowBandingInterval="0"
                  selectedRowKeys="#{bindings.GenericView.collectionModel.selectedRow}"
                  selectionListener="#{bindings.GenericView.collectionModel.makeCurrent}"
                  rowSelection="single" id="t1">
          <af:forEach items="#{bindings.GenericView.attributeDefs}" var="def">
            <af:column headerText="#{bindings.GenericView.labels[def.name]}" sortable="true"
                       sortProperty="#{def.name}" id="c1">
              <af:outputText value="#{row[def.name]}" id="ot1"/>
            </af:column>
          </af:forEach>
        </af:table>

    Of course you are not constrained to a simple read-only table here. It's a normal tree binding and iterator that you are using behind the scenes, so you can do all the usual things, and you can see the value of using ADFBC as the back-end model, as you have the rich pantheon of UI hints to use to derive things like labels (and validators and converters...).

    One Final Twist

    To finish on a high note I wanted to point out that you can take this even further and achieve the ultra-reusability I promised. Here's the new version of the pageDef iterator; see if you can spot the subtle change?

        <iterator Binds="${pageFlowScope.voName}" DataControl="${pageFlowScope.dataControlName}" RangeSize="25"
                  id="TableSourceIterator"/>

    Yes, as well as parametrizing the collection (VO) name, we can also parametrize the name of the data control. So your task flow can graduate from being re-usable within an application to being truly generic. If you have some really common patterns within your app you can wrap them up and reuse them across multiple developments without having to dictate data control names, or connection names. This also demonstrates the importance of interacting with data only via the binding layer APIs. If you keep any code in the task flow generic in that way you can deal with data from multiple types of data controls, not just one flavour. Enjoy!
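
    To make that last point concrete, here's a minimal, hedged sketch of what generic task-flow code might look like when it touches data only through the binding layer. The iterator ID matches the adaptive pageDef above, but the class name and the dump method are invented for illustration:

        import oracle.adf.model.BindingContext;
        import oracle.adf.model.binding.DCBindingContainer;
        import oracle.adf.model.binding.DCIteratorBinding;
        import oracle.jbo.Row;

        // Sketch only: reads whatever collection the task flow was parametrized
        // with, without ever referencing a concrete View Object class.
        public class GenericFlowHelper {
            public void dumpCurrentRows() {
                DCBindingContainer bindings =
                    (DCBindingContainer) BindingContext.getCurrent().getCurrentBindingsEntry();
                // Same iterator ID as in the adaptive pageDef above
                DCIteratorBinding iter = bindings.findIteratorBinding("TableSourceIterator");
                for (Row row : iter.getAllRowsInRange()) {
                    // Attribute names come from runtime metadata, not a hard-coded VO
                    for (String attr : row.getAttributeNames()) {
                        System.out.println(attr + "=" + row.getAttribute(attr));
                    }
                }
            }
        }

    Because nothing here names a VO or a data control, the same code works unchanged whichever collection the calling page passed in.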

    Read the article

  • Can I use Ubuntu One Icons for 3rd party thingy?

    - by Joseph Mills
    So I am wondering what the guidelines are for using Ubuntu One icons, as I have heard from a number of people that Ubuntu One has some proprietary things to it. So I am not sure if I am allowed to use their logo in something like this: http://bazaar.launchpad.net/~josephjamesmills/ubuntutv/fan_art/download/josephjamesmills%40gmail.com-20120728145710-sy00cvq1ja8o9qad/ubuntuoneactive.png-20120728145613-jtjdupswpqiocpb2-266/ubuntuone-active.png Is that OK? I know that this might be a silly question, but I do not want to get myself in trouble. Thanks so much for reading this and helping me with a project that helps others ;)

    Read the article

  • Controlling the Sizing of the af:messages Dialog

    - by Duncan Mills
    Over the last day or so a small change in behaviour between 11.1.2.n releases of ADF and earlier versions has come to my attention. This concerns the default sizing of the dialog that the framework automatically generates to handle the display of JSF messages being handled by the <af:messages> component. Unlike a normal popup, you don't have a physical <af:dialog> or <af:window> to set the sizing on in your page definition, so you're at the mercy of what the framework provides. In this case the framework now defines a fixed 250x250 pixel content area dialog for these messages, which can look a bit weird if the message is either very short or very long. Unfortunately this is not something that you can control through the skin; instead you have to be a little more creative. Here's the solution I've come up with. Unfortunately, I've not found a supportable way to reset the dialog so as to say "just size yourself based on your contents". It is actually possible to do this by tweaking the correct DOM objects, but I wanted to start with a mostly supportable solution that only uses the best practice of working through the ADF client side APIs.

    The Technique

    The basic approach I've taken is really very simple. The af:messages dialog is just a normal richDialog object; it just happens to be one that is pre-defined for you with a particular known name, "msgDlg" (which hopefully won't change). Knowing this, you can call the accepted APIs to control the content width and height of that dialog. As our meerkat friends would say, "simples" (1).

    The JavaScript

    For this example I've defined three JavaScript functions. The first does all the hard work and is designed to be called from server side Java or from a page load event to set the default. The second is a utility function used by the first to validate the values you're about to use for height and width. The final function is one that can be called from the page load event to set an initial default sizing if that's all you need to do.

    Function resizeDefaultMessageDialog()

        /**
         * Function that actually resets the default message dialog sizing.
         * Note that the width and height supplied define the content area,
         * so the actual physical dialog size will be larger to account for
         * the chrome containing the header / footer etc.
         * @param docId Faces component id of the document
         * @param contentWidth - new content width you need
         * @param contentHeight - new content height
         */
        function resizeDefaultMessageDialog(docId, contentWidth, contentHeight) {
          // Warning: this value may change from release to release
          var defMDName = "::msgDlg";
          // Find the default messages dialog
          var msgDialogComponent = AdfPage.PAGE.findComponentByAbsoluteId(docId + defMDName);
          // In your version add a check here to ensure we've found the right object!
          // Check the new width is supplied and is a positive number; if so apply it.
          if (dimensionIsValid(contentWidth)) {
            msgDialogComponent.setContentWidth(contentWidth);
          }
          // Check the new height is supplied and is a positive number; if so apply it.
          if (dimensionIsValid(contentHeight)) {
            msgDialogComponent.setContentHeight(contentHeight);
          }
        }

    Function dimensionIsValid()

        /**
         * Simple function to check that sensible numeric values are
         * being proposed for a dimension
         * @param sampleDimension
         * @return boolean
         */
        function dimensionIsValid(sampleDimension) {
          return (!isNaN(sampleDimension) && sampleDimension > 0);
        }

    Function initializeDefaultMessageDialogSize()

        /**
         * This function will re-define the default sizing applied by the framework
         * in 11.1.2.n versions.
         * It is designed to be called with the document onLoad event.
         */
        function initializeDefaultMessageDialogSize(loadEvent) {
          // Get the configuration information
          var documentId = loadEvent.getSource().getProperty('documentId');
          var newWidth = loadEvent.getSource().getProperty('defaultMessageDialogContentWidth');
          var newHeight = loadEvent.getSource().getProperty('defaultMessageDialogContentHeight');
          resizeDefaultMessageDialog(documentId, newWidth, newHeight);
        }

    Wiring in the Functions

    As usual, the first thing we need to do when using JavaScript with ADF is to define an af:resource in the document metaContainer facet:

        <af:document>
          ...
          <f:facet name="metaContainer">
            <af:resource type="javascript" source="/resources/js/hackMessagedDialog.js"/>
          </f:facet>
        </af:document>

    This makes the script functions available to call. Next, if you want to use the option of defining an initial default size for the dialog, you use a combination of <af:clientListener> and <af:clientAttribute> tags like this:

        <af:document title="MyApp" id="doc1">
          <af:clientListener method="initializeDefaultMessageDialogSize" type="load"/>
          <af:clientAttribute name="documentId" value="doc1"/>
          <af:clientAttribute name="defaultMessageDialogContentWidth" value="400"/>
          <af:clientAttribute name="defaultMessageDialogContentHeight" value="150"/>
          ...

    Just in Time Dialog Sizing

    So what happens if you have a variety of messages that you might add, and in some cases you need a small dialog and in other cases a large one? Well, in that case you can re-size these dialogs just before you submit the message. Here's some example Java code:

        FacesContext ctx = FacesContext.getCurrentInstance();
        // Reset the default dialog size for this message
        ExtendedRenderKitService service =
            Service.getRenderKitService(ctx, ExtendedRenderKitService.class);
        service.addScript(ctx, "resizeDefaultMessageDialog('doc1',100,50);");
        FacesMessage msg = new FacesMessage("Short message");
        msg.setSeverity(FacesMessage.SEVERITY_ERROR);
        ctx.addMessage(null, msg);

    So there you have it. This technique should, at least, allow you to control the dialog sizing just enough to stop really objectionable whitespace or scrollbars.

    (1) Don't worry if you don't get the reference; let's just say my kids watch too many adverts.

    Read the article

  • ADF - Now with Robots!

    - by Duncan Mills
    I mentioned this briefly in a tweet the other day, just before the full rush of OOW really kicked off, so I thought it was worth re-visiting. Check out this video, and then read on. So why so interesting? Well, you probably guessed from the title: ADF is involved. Indeed this is about as far from the traditional ADF data entry application as you can get. Instead of a database at the back-end there's basically a robot. That's right, this remarkable tape drive is controlled through ADF, using all your usual friends of ADF Faces, Controller and Binding (but no ADFBC, for obvious reasons). ADF is used both on the touch screen you see on the front of the device in the video, and also for the remote management console, which provides a visual representation of the slots and drives. The latter uses ADF's Active Data Framework to provide a real-time view of what's going on in the rack. What's even more interesting (for the techno-geeks) is the fact that all of this is running out of flash storage on a ridiculously small form factor with a tiny processor - I probably shouldn't reveal the actual specs, but take my word for it, don't complain about the capabilities of your laptop ever again! This is a project that I've been personally involved in and I'm pumped to see such a good result, and, I have to say, those hardware guys are great to work with (and have way better toys on their desks than we do). More info on the SL150 (should you feel the urge to own one) is here.

    Read the article

  • Selective Suppression of Log Messages

    - by Duncan Mills
    Those of you who regularly read this blog will probably have noticed that I have a strange predilection for logging related topics, so why break this habit, I ask? Anyway, here's an issue which came up recently that I thought was a good one to mention in a brief post. The scenario really applies to production applications where you are seeing entries in the log files which are harmless: you know why they are there and are happy to ignore them, but at the same time you either can't or don't want to risk changing the deployed code to "fix" it to remove the underlying cause. (I'm not judging here.) The good news is that the logging mechanism provides a filtering capability which can be applied to a particular logger to selectively "let a message through" or suppress it. This is the technique outlined below.

    First Create Your Filter

    You create a logging filter by implementing the java.util.logging.Filter interface. This is a very simple interface and basically defines one method, isLoggable(), which simply has to return a boolean value. A return of false will suppress that particular log message and not pass it on to the handler. The method is passed the log record of type java.util.logging.LogRecord, which provides you with access to everything you need to decide if you want to let this log message pass through or not, for example getLoggerName(), getMessage() and so on. So an example implementation might look like this if we wanted to filter out all the log messages that start with the string "DEBUG" when the logging level is not set to FINEST:

        public class MyLoggingFilter implements Filter {
            public boolean isLoggable(LogRecord record) {
                if (!record.getLevel().equals(Level.FINEST) && record.getMessage().startsWith("DEBUG")) {
                    return false;
                }
                return true;
            }
        }

    Deploying

    This code needs to be put into a JAR and added to your WebLogic classpath. It's too late to load it as part of an application, so instead you need to put the JAR file into the WebLogic classpath using a mechanism such as the PRE_CLASSPATH setting in your domain setDomainEnv script. Then restart WLS, of course.

    Using

    The final piece is to actually assign the filter. The simplest way to do this is to add the filter attribute to the logger definition in the logging.xml file. For example, you may choose to define a logger for a specific class that is raising these messages and only apply the filter in that case:

        <logger name="some.vendor.adf.ClassICantChange"
                filter="oracle.demo.MyLoggingFilter"/>

    You can also apply the filter using WLST if you want a more script-y solution.
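
    As a footnote, the same filter class can also be attached programmatically through the standard java.util.logging API, which is handy for testing the filter logic in isolation before you build the JAR. A minimal sketch (reusing the MyLoggingFilter class from above; the logger name is just the example one):

        import java.util.logging.Logger;

        public class FilterDemo {
            public static void main(String[] args) {
                Logger logger = Logger.getLogger("some.vendor.adf.ClassICantChange");
                // Equivalent in effect to the logging.xml filter attribute,
                // but only for this JVM run.
                logger.setFilter(new MyLoggingFilter());
                logger.info("DEBUG noise we no longer want");  // suppressed by the filter
                logger.info("A message we still care about");  // passes through
            }
        }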

    Read the article

  • Adaptive Connections For ADFBC

    - by Duncan Mills
    Some time ago I wrote an article on Adaptive Bindings showing how the pageDef for an ADF UI does not have to be wedded to a fixed data control or collection / View Object. This article has proved pretty popular, so as a follow up I wanted to cover another "adaptive" feature of your ADF applications: the ability to make multiple different connections from an Application Module, at runtime. Now, I'm sure you'll be aware that if you define your application to use a data-source rather than a hard-coded JDBC connection string, then you have the ability to change the target of that data-source after deployment to point to a different database. So that's great, but the reality is that this single connection is effectively fixed within the application, right? Well, no; this, it turns out, is a common misconception. To be clear, yes, a single instance of an ADF Application Module is associated with a single connection, but there is nothing to stop you from creating multiple instances of the same Application Module within the application, all pointing at different connections. In fact this has been possible for a long time using a custom extension point with code which extends oracle.jbo.http.HttpSessionCookieFactory. This approach, however, involves writing code, and no-one likes to write any more code than they need to. So, is there an easier way? Yes indeed. It is in fact a little-publicized feature that's available in all versions of 11g: the ELEnvInfoProvider.

    What Does it Do?

    The ELEnvInfoProvider is a pre-existing class (the full path is oracle.jbo.client.ELEnvInfoProvider) which you can plug into your Application Module configuration using the jbo.envinfoprovider property. Visually you can set this in the editor, or you can also set it directly in the bc4j.xcfg (see below for an example). Once you have plugged in this envinfoprovider, here's the fun bit: rather than defining the hard-coded name of a datasource, you can instead plug in an EL expression for the connection to use. So what's the benefit of that? Well, it allows you to defer the selection of a connection until the point in time that you instantiate the AM. To define the expression itself you'll need to do a couple of things:

    - First of all you'll need a managed bean of some sort - e.g. a sessionScoped bean defined in your ViewController project. This will need a getter method that returns the name of the connection. Now this connection itself needs to be defined in your Application Server, and can be managed through Enterprise Manager, WLST or through MBeans. (You may need to read the documentation [http://docs.oracle.com/cd/E28280_01/web.1111/b31974/deployment_topics.htm#CHDJGBDD] here on how to configure connections at runtime if you're not familiar with this.)
    - The EL expression (e.g. ${connectionManager.connection}) is then defined in the configuration by editing the bc4j.xcfg file (there is a hyperlink directly to this file on the configuration editing screen in the Application Module editor). You simply replace the hardcoded JDBCName value with the expression.
    So your cfg file would end up looking something like this (notice the reference to the ELEnvInfoProvider that I talked about earlier):

        <BC4JConfig version="11.1" xmlns="http://xmlns.oracle.com/bc4j/configuration">
          <AppModuleConfigBag ApplicationName="oracle.demo.model.TargetAppModule">
            <AppModuleConfig DeployPlatform="LOCAL"
                             JDBCName="${connectionManager.connection}"
                             jbo.project="oracle.demo.model.Model"
                             name="TargetAppModuleLocal"
                             ApplicationName="oracle.demo.model.TargetAppModule">
              <AM-Pooling jbo.doconnectionpooling="true"/>
              <Database jbo.locking.mode="optimistic"/>
              <Security AppModuleJndiName="oracle.demo.model.TargetAppModule"/>
              <Custom jbo.envinfoprovider="oracle.jbo.client.ELEnvInfoProvider"/>
            </AppModuleConfig>
          </AppModuleConfigBag>
        </BC4JConfig>

    Still Don't Quite Get It?

    So far you might be thinking, well, that's fine, but what difference does it make if the connection is resolved "just in time" rather than up front and changed as required through Enterprise Manager? Well, a trivial example would be where you have a single application deployed to your application server, but for different users you want to connect to different databases. Because the evaluation of the connection is deferred until you first reference the AM, you have a decision point that can take the user identity into account. However, think about it for a second: under what circumstances does a new AM get instantiated? Well, at the first reference of the AM within the application, yes, but also whenever a Task Flow is entered - if the data control scope for the Task Flow is ISOLATED. So the reality is that on a single screen you can embed multiple Task Flows, all of which are pointing at different database connections concurrently. Hopefully you'll find this feature useful, let me know...
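
    To make the managed bean half of this concrete, here's a minimal sketch of a bean that could back ${connectionManager.connection}. Everything in it is illustrative: the bean would need to be registered under the name connectionManager (e.g. with session scope in your ViewController project), and the data source names and selection logic are invented for the example.

        // Sketch of a session-scoped managed bean; ${connectionManager.connection}
        // resolves to getConnection() at AM instantiation time.
        public class ConnectionManager {

            public String getConnection() {
                // Hypothetical decision logic: pick a data source per user region.
                // "HRConnectionEast" / "HRConnectionWest" are made-up names that
                // would have to exist as connections on the application server.
                String region = lookupRegionForCurrentUser();
                return "west".equals(region) ? "HRConnectionWest" : "HRConnectionEast";
            }

            private String lookupRegionForCurrentUser() {
                // Stub for the example; a real bean might consult the security
                // context or a user profile service here.
                return "east";
            }
        }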

    Read the article

  • ADF Logging In Deployed Apps

    - by Duncan Mills
    Harking back to my series on using the ADF logger and the related ADF Insider video, I've had a couple of queries this week about using the logger from Enterprise Manager (EM). I've alluded in those previous materials to how EM can be used, but it's evident that folks need a little help. So in this article I'll quickly look at how you can switch logging on from the EM console for an application, and how you can view the output. Before we start, I'm assuming that you have EM up and running; in my case I have a small test install of Fusion Middleware Patchset 5 with an ADF application deployed to a managed server.

    Step 1 - Select your Application

    In the EM navigator, select the app you're interested in. At this point you can actually bring up the context (right mouse click) menu to jump to the logging, but let's do it another way.

    Step 2 - Open the Application Deployment Menu

    At the top of the screen, underneath the application name, you'll find a drop down menu which will take you to the options to view log messages and configure logging.

    Step 3 - Set your Logging Levels

    Just like the log configuration within JDeveloper, we can set up transient or permanent (not recommended!) loggers here. In this case I've filtered the class list down to just oracle.demo, and set the log level to config. You can now go away and do stuff in the app to generate log entries.

    Step 4 - View the Output

    Again from the Application Deployment menu we can jump to the log viewer screen and, as I have here, start to filter down the logging output to the stuff you're interested in. In this case I've filtered by module name. You'll notice here that you can again look at related log messages. Importantly, you'll also see the name of the log file that holds each message, so if you'd rather analyse the log in more detail offline, through the ODL log analyser in JDeveloper, then you can see which log to download.

    Read the article

  • Dutch for once: looking for a new challenge!

    - by Dennis Vroegop
    Originally posted on: http://geekswithblogs.net/dvroegop/archive/2013/10/11/dutch-for-once-op-zoek-naar-een-nieuwe-uitdaging.aspx

    I apologize to my non-Dutch speaking readers: this post is about me looking for a new job, and since I am based in the Netherlands I originally wrote it in Dutch… Next time I will be technical (and thus in English) again!

    The nice thing about interim work is that every engagement comes to an end at some point. I consciously chose the freelance life: I want to get to know a great many different people and organisations, and this work is perfectly suited to that. After all, with every engagement I not only bring new ideas and knowledge, I also learn an enormous amount myself every time. I can then use that knowledge in a follow-up engagement, and in that way I spread it among companies in the Netherlands. And there is nothing better than seeing what I bring take an organisation to the next level!

    Looking for a different company every time means leaving an organisation every time. The hard part is finding the right moment, which is difficult to judge from the outside: when can I no longer contribute anything new, and is it time to move on? When is it time to say that the organisation knows everything I can teach it? In my current engagement that moment has now arrived. Over the past eleven months I have watched this company change from a small but enthusiastic group of developers into a professional organisation with more than twice as many developers. That change process has been very instructive, and I am very glad that I was able, and allowed, to guide it. Growing from three teams of five or six developers each to six teams of seven to eight developers per team means you have to approach your development process quite differently. It also means organising your teams differently, remodelling the organisation itself, and having people interact with each other differently. To achieve that, everyone worked very hard, a number of mistakes were made, a great deal was learned from those mistakes, and in the end an almost entirely new company emerged.

    It is time to leave this company. I am curious where I will end up next: I am looking around at the possibilities. What I do know is that the company I am looking for is one that is open to change. Change, but with an eye for the individual; after all, people are at the heart of software development! In any case, I am looking forward to it!

    Read the article

  • Troubleshooting Windows Blue Screen Errors

    The so-called ‘Blue Screen of Death’ has inspired fear in the hearts of mere mortals, but Systems Administrators are expected to be capable of casually beating back this sinister beast. So imagine Ben Lye’s distress when he discovered that many aspiring SysAdmins had no structured approach to tackling the root of the problem. Setting out to remedy the situation, Ben lays out a simple 3-step plan and dispenses some good advice.

    Read the article

  • Woolrich Parka is elegant in design and quality

    - by WoolrichParka
    The teflon security broker must once again be a technology leader in the new creation of outfit shops. Cool Woolrich prices on the internet should be the upcoming new Woolrich ranges here, which are ideal for your options. Not only is it durable and well protected, with 625 full-power white goose down, but its famous Giubbotti Woolrich design also features a highly efficient selection of external pockets, ideal for easy equipment storage and hand-warming. It is well known in the design market.

    Read the article

  • ANTS Memory Profiler 8 released!

    - by Ben Emmett
    I’m excited to say that we’ve just released ANTS Memory Profiler 8! The big news is support for profiling .NET’s usage of unmanaged memory. There are two main parts to this. Firstly, you can see a breakdown of unmanaged memory usage by module. This lets you see at a high level where unmanaged memory is being used - for example, in the image below it’s being used by a PDF generation library. Separately, when looking at a list of .NET classes, you can see how much unmanaged memory those classes are responsible for holding on to. You can also see that information for individual instances of those classes. Some clues that you might need this:

    - You’re using system objects or 3rd party components which deal with unmanaged memory under the hood (this includes things like the GDI+ functions used for working with bitmaps)
    - Your application still relies on some legacy Delphi / C++ / etc. code left over from the days before your company moved over to using .NET
    - You’ve used a previous version of ANTS Memory Profiler, and have seen the summary pie chart dominated by unmanaged memory

    You’ll also notice that the startup process has been entirely redesigned, bringing it in line with ANTS Performance Profiler 8, which was released earlier in the year. This makes it faster to start profiling and to run repeat profiling sessions, lets you profile using any browser instead of Internet Explorer, and also provides a host of stability improvements, particularly when launching websites in IIS. Download the new version (there’s a free trial), and as always I’d love to know what you think - just email [email protected]. Cheers! Ben

    Read the article

  • I'm tasked with leading the documentation effort for an existing, entirely undocumented, software product - what resources are there to help me?

    - by Ben Rose
    I'm a software developer at a technology company. I have been tasked with leading the documentation effort for the product I work on. The goal is to produce documentation internal to developers, and the project spills over into the business side, where it covers requirements documentation. This project is challenging. Specifically, I'm dealing with a product which:

    - has been around for a long time, at least 6 years
    - has no form of documentation other than some small, outdated pieces here and there
    - has comments in the code, but they are technical and do not convey any over-arching behavior (even on the technical side)
    - as a consequence of having little to no documentation, is often unnecessarily complex under the covers

    In addition, we have not been given a lot of time to work on this project. I do not have any formal documentation or writing background, training, or experience. I have displayed some ability in writing/communication around the office, which may be why I was assigned to this project. Please share your advice or recommend resources to help me prepare for and deal with this project. I'm looking for references to books/websites/forums/whatever, to help me come up with a plan with milestones, learn about best practices, task delegation, templates, buy-in, etc. I'm hoping specifically for resources targeting, or giving special mention of, introducing good documentation to existing, undocumented projects. I would be very grateful for your responses. Ben

    Read the article

  • Are there good resources for leading documentation for an existing software product having none?

    - by Ben Rose
    Hello. I'm a software developer at a technology company. I have been tasked with leading the documentation effort for the product I work on, both internal to developers as well as spilling over into facilitating the business side of requirements documentation. This internal product has been around for at least 6 years. One challenge is that this software application has no form of documentation other than some small, outdated pieces here and there. There are comments in the code, but they are technical and do not convey any over-arching behavior (even on the technical side). As a consequence of having little to no documentation, this product is often unnecessarily complex under the covers, which adds to the challenge. We are very limited on the time we will be given to work on documentation. Another thing about me is that I've displayed some ability in writing/communication around the office, but I'm not coming from any sort of documentation or formal writing background (beyond my academic career). Please share your advice or recommend resources, book/website/forum/whatever, for helping me come up with a plan with milestones, best practices, task delegation, templates, buy-in, etc. I'm hoping for a resource targeting, or giving special mention of, introducing good documentation on existing projects where there previously was none. I would be very grateful for your responses. Ben

    Read the article

  • New database profiling support in ANTS Performance Profiler

    - by Ben Emmett
    In May last year, the ANTS Performance Profiler team added the ability to profile database requests your application makes to SQL Server or Oracle. The really cool thing is that you’re shown those requests in the application’s call tree, so you can see what .NET code caused those queries to run. It’s particularly helpful if you’re using an ORM which automagically generates and runs queries for you, but which doesn’t necessarily do it in the most efficient way possible. Now, by popular demand, we’ve added support for profiling MySQL (or MariaDB) and PostgreSQL, so you can see queries run against those databases too. Some of you have also said that you’re using the Devart dotConnect data providers instead of the native .NET ones, so we’ve added support for those drivers too. Hope it helps! For the record, here’s a list of supported connectors (new ones are marked):

    SQL Server
    - .NET Framework Data Provider
    - Devart dotConnect for SQL Server (new)

    Oracle
    - .NET Framework Data Provider
    - Oracle Data Provider for .NET
    - Devart dotConnect for Oracle (new)

    MySQL / MariaDB
    - MySQL Connector/Net (new)
    - Devart dotConnect for MySQL (new)

    PostgreSQL
    - Npgsql .NET Data Provider for PostgreSQL (new)
    - Devart dotConnect for PostgreSQL (new)

    SQL Server Compact Edition
    - .NET Framework Data Provider for SQL Server Compact Edition
    - Devart dotConnect for SQL Server Pro (new)

    Have we missed a connector or database which you’d find useful? Tell us about it in the comments or by emailing [email protected]. Ben

    Read the article

  • Checkpoint Endpoint Connect Imaging - Are there any gotchas?

    - by Ben
    I am about to install CheckPoint Endpoint Connect (EPC) on a load of new laptops being rolled out. I plan to add it to the image, but want to check if there are any gotchas. By this I mean: are there any UIDs that need removing before the image is taken, to "depersonalise" the install? For example, when you install LANDesk or McAfee (when using ePolicy) you need to remove the identifiers - do I need to do the same with EPC? Thanks in advance, Ben

    Read the article

  • Outlook uses > 700mb of RAM. How do I fix this?!

    - by Ben Baril
    I've been using Outlook for years now, and I've never run into this problem before. Using Microsoft Outlook 2007, with only 1 email account and no more than 100 emails in my inbox (though I have many, many folders with emails in them), Outlook can sit around and eventually get up to 700MB of RAM usage. I've tried different tips I've read, like compacting my folders and not using the Internet Calendars / RSS features, and right now I've even disabled Xobni... but still no effect. Any ideas?! Thanks! Ben

    Read the article

  • What is causing my newly built PC to BSOD?

    - by Ben S
    I recently built my own PC from parts and installed Windows 7 and I have been getting BSODs with various different stop codes. The latest was 0x24, but I've also had 0xd1 and 0x1e. However, Windows does not let me know where the fault occurred, so I have no idea how to go about resolving this. I've uploaded the last three minidumps in case someone can make sense out of them and let me know what could be causing my BSODs. Thanks, Ben

    Read the article

  • Account to read AD, join machine to domain, delete computer accounts and move computers to OUs

    - by Ben
    I want to create an account that will perform the following: Join computers to a domain (not restricted to 10, like a normal user) Check for computer accounts in AD Delete computers from AD Move computers between OUs I don't want to allow it to do anything else, so don't want a domain admin account. Can anyone guide me in the right direction in terms of permissions? Not sure if I should be using delegation of control wizard? Cheers, Ben

    Read the article

  • How to interpret Rackspace server diagnostics?

    - by Ben
    We've been having some trouble lately with our site timing out during times of high traffic. We're working on a number of things to resolve it. During this process I came across our server diagnostics page on Rackspace, and it has the following line: The host server's load is: 0.08 0.08 0.03 1/204 2437 I couldn't find an explanation on their site or Google. Can anyone explain what these numbers mean? For I am a lowly programmer. Much appreciated, -Ben

    Read the article

  • Running a shell script in *nix

    - by Ben
    Bit of a newb using *nix (I'm actually using a bash shell in OS X 10.5) and I wondered what the probably very simple answer is to this... When I write a script (called, say, my_script) and I've saved it to the current directory, why do I have to put a period and a forward slash in front of its name to run it? Like this: ./my_script Can't the shell tell that I want to run it from the current directory? Windows seems to handle that situation. Cheers Ben

    Read the article

  • Zend Server + Daemontools

    - by Ben
    Hey, is anyone running Zend Server under Daemontools? I know I can use -D NO_DETACH to run Apache under Daemontools. But I'm not sure if the other Zend Server components (monitor, lighthttpd, scd, jobqueue) have similar options. (The end goal is "run Zend Server with service supervision", so upstart would be fine too. For reference, my servers are running Ubuntu 10.04 LTS.) Ben

    Read the article

  • Clever memory usage through the years

    - by Ben Emmett
    A friend and I were recently talking about the really clever tricks people have used to get the most out of memory. I thought I’d share my favorites, and would love to hear yours too!

    Interleaving on drum memory

    Back in the ye olde days before I’d been born (we’re talking the 50s / 60s here), working memory commonly took the form of rotating magnetic drums. These would spin at a constant speed, and a fixed head would read from memory when the correct part of the drum passed it by, a bit like a primitive platter disk. Because each revolution took a few milliseconds, programmers took to manually arranging information non-sequentially on the drum, timing when an instruction or memory address would need to be accessed, then spacing information accordingly around the edge of the drum, thus reducing the access delay. Similar techniques were still used on hard disks and floppy disks into the 90s, but have become irrelevant with modern disk technologies.

    The Hashlife algorithm

    Conway’s Game of Life has attracted numerous implementations over the years, but Bill Gosper’s Hashlife algorithm is particularly impressive. Taking advantage of the repetitive nature of many cellular automata, it uses a quadtree structure to store the hashes of pieces of the overall grid. Over time there are fewer and fewer new structures which need to be evaluated, so it starts to run faster with larger grids, drastically outperforming other algorithms both in terms of speed and the size of grid which can be simulated. The actual amount of memory used is huge, but it’s used in a clever way, so it makes the list.

    Elite’s procedural generation

    Ok, so this isn’t exactly a memory optimization - more a storage optimization - but it gets an honorable mention anyway. When writing Elite, David Braben and Ian Bell wanted to build a rich world which gamers could explore, but their 22K memory was something of a limitation (for comparison that’s about the size of my avatar picture at the top of this page). They procedurally generated all the characteristics of the 2048 planets in their virtual universe, including the names, which were stitched together using a lookup table of parts of names (there's a rough sketch of the idea at the end of this post). In fact the original plans were for 2^52 planets, but it was decided that that was probably too many. Oh, and they did all that in assembly language. Other games of the time used similar techniques too - The Sentinel’s landscape generation algorithm being another example.

    Modern Garbage Collectors

    Garbage collection in managed languages like Java and .NET ensures that, most of the time, developers stop needing to care about how they use and clean up memory, as the garbage collector handles it automatically. Achieving this without killing performance is a near-miraculous feat of software engineering. Much like when learning chemistry, you find that every time you think you understand how the garbage collector works, it turns out to be a mere simplification; that there are yet more complexities and heuristics to help it run efficiently. Of course introducing memory problems is still possible (and there are tools like our memory profiler to help if that happens to you), but they’re much, much rarer.

    A cautionary note

    In the examples above, there were good and well understood reasons for the optimizations, but cunningly optimized code has usually had to trade away readability and maintainability to achieve its gains. Trying to optimize memory usage without being pretty confident that there’s actually a problem is doing it wrong.

    So what have I missed? Tell me about the ingenious (or stupid) tricks you’ve seen people use. Ben
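
    P.S. Since the Elite name trick is so compact, here's a rough sketch of the idea in Java. To be clear, this is not Braben and Bell's actual algorithm - the syllable table and seeding scheme here are invented - but it shows how a seed plus a small lookup table yields thousands of reproducible names at almost no storage cost:

        import java.util.Random;

        public class PlanetNames {
            // A tiny lookup table of name fragments; the real game packed its
            // letter-pair tables far more tightly than this.
            private static final String[] SYLLABLES = {
                "la", "ve", "ti", "so", "qu", "ra", "di", "ge", "bi", "xe", "on", "us"
            };

            public static String nameFor(long planetIndex) {
                Random rng = new Random(planetIndex);   // same index -> same name, every time
                int parts = 2 + rng.nextInt(3);         // 2 to 4 syllables
                StringBuilder name = new StringBuilder();
                for (int i = 0; i < parts; i++) {
                    name.append(SYLLABLES[rng.nextInt(SYLLABLES.length)]);
                }
                name.setCharAt(0, Character.toUpperCase(name.charAt(0)));
                return name.toString();
            }

            public static void main(String[] args) {
                for (long i = 0; i < 5; i++) {
                    System.out.println(nameFor(i));     // a stable universe of names, stored nowhere
                }
            }
        }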

    Read the article

  • Simple Cisco ASA 5505 config issue

    - by Ben Sebborn
    I have a Cisco ASA setup with two interfaces:

    inside: 192.168.2.254 / 255.255.255.0, SecLevel: 100
    outside: 192.168.3.250 / 255.255.255.0, SecLevel: 0

    I have a static route setup to allow PCs on the inside network to access the internet via a gateway on the outside interface (3.254):

    outside 0.0.0.0 0.0.0.0 192.168.3.254

    This all works fine. I now need to be able to access a PC on the outside interface (3.253) from a PC on the inside interface on port 35300. I understand I should be able to do this with no problems, as I'm going from a higher security level to a lower one. However I can't get any connection. Do I need to set up a separate static route? Perhaps the route above is overriding what I need to be able to do (is it routing ALL traffic through the gateway?). Any advice on how to do this would be appreciated. I am configuring this via ASDM, but the config can be seen below.

    Result of the command: "show running-config"

        : Saved
        :
        ASA Version 8.2(5)
        !
        hostname ciscoasa
        domain-name xxx.internal
        names
        name 192.168.2.201 dev.xxx.internal description Internal Dev server
        name 192.168.2.200 Newserver
        !
        interface Ethernet0/0
         switchport access vlan 2
        !
        interface Ethernet0/1
        !
        interface Ethernet0/2
        !
        interface Ethernet0/3
         shutdown
        !
        interface Ethernet0/4
         shutdown
        !
        interface Ethernet0/5
         shutdown
        !
        interface Ethernet0/6
         shutdown
        !
        interface Ethernet0/7
         shutdown
        !
        interface Vlan1
         nameif inside
         security-level 100
         ip address 192.168.2.254 255.255.255.0
        !
        interface Vlan2
         nameif outside
         security-level 0
         ip address 192.168.3.250 255.255.255.0
        !
        !
        time-range Workingtime
         periodic weekdays 9:00 to 18:00
        !
        ftp mode passive
        clock timezone GMT/BST 0
        clock summer-time GMT/BDT recurring last Sun Mar 1:00 last Sun Oct 2:00
        dns domain-lookup inside
        dns server-group DefaultDNS
         name-server Newserver
         domain-name xxx.internal
        same-security-traffic permit inter-interface
        object-group service Mysql tcp
         port-object eq 3306
        object-group protocol TCPUDP
         protocol-object udp
         protocol-object tcp
        access-list inside_access_in extended permit ip any any
        access-list outside_access_in remark ENABLES OUTSDIE ACCESS TO DEV SERVER!
        access-list outside_access_in extended permit tcp any interface outside eq www time-range Workingtime inactive
        access-list outside_access_in extended permit tcp host www-1.xxx.com interface outside eq ssh
        access-list inside_access_in_1 extended permit tcp any any eq www
        access-list inside_access_in_1 extended permit tcp any any eq https
        access-list inside_access_in_1 remark Connect to SSH services
        access-list inside_access_in_1 extended permit tcp any any eq ssh
        access-list inside_access_in_1 remark Connect to mysql server
        access-list inside_access_in_1 extended permit tcp any host mysql.xxx.com object-group Mysql
        access-list inside_access_in_1 extended permit tcp any host mysql.xxx.com eq 3312
        access-list inside_access_in_1 extended permit object-group TCPUDP host Newserver any eq domain
        access-list inside_access_in_1 extended permit icmp any any
        access-list inside_access_in_1 remark Draytek Admin
        access-list inside_access_in_1 extended permit tcp any 192.168.3.0 255.255.255.0 eq 4433
        access-list inside_access_in_1 remark Phone System
        access-list inside_access_in_1 extended permit tcp any 192.168.3.0 255.255.255.0 eq 35300 log disable
        pager lines 24
        logging enable
        logging asdm warnings
        logging from-address [email protected]
        logging recipient-address [email protected] level errors
        mtu inside 1500
        mtu outside 1500
        ip verify reverse-path interface inside
        ip verify reverse-path interface outside
        ipv6 access-list inside_access_ipv6_in permit tcp any any eq www
        ipv6 access-list inside_access_ipv6_in permit tcp any any eq https
        ipv6 access-list inside_access_ipv6_in permit tcp any any eq ssh
        ipv6 access-list inside_access_ipv6_in permit icmp6 any any
        icmp unreachable rate-limit 1 burst-size 1
        icmp permit any outside
        no asdm history enable
        arp timeout 14400
        global (outside) 1 interface
        nat (inside) 1 0.0.0.0 0.0.0.0
        static (inside,outside) tcp interface www dev.xxx.internal www netmask 255.255.255.255
        static (inside,outside) tcp interface ssh dev.xxx.internal ssh netmask 255.255.255.255
        access-group inside_access_in in interface inside control-plane
        access-group inside_access_in_1 in interface inside
        access-group inside_access_ipv6_in in interface inside
        access-group outside_access_in in interface outside
        route outside 0.0.0.0 0.0.0.0 192.168.3.254 10
        route outside 192.168.3.252 255.255.255.255 192.168.3.252 1
        timeout xlate 3:00:00
        timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
        timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00
        timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00
        timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute
        timeout tcp-proxy-reassembly 0:01:00
        timeout floating-conn 0:00:00
        dynamic-access-policy-record DfltAccessPolicy
        aaa authentication telnet console LOCAL
        aaa authentication enable console LOCAL

    Read the article
