Search Results

Search found 2170 results on 87 pages for 'workflow versioning'.

Page 20 of 87

  • Tellago && Tellago Studios 2010

    - by gsusx
    With 2011 around the corner, we at Tellago and Tellago Studios have been spending a lot of time evaluating our successes and failures (yes, those too ;)) of 2010 and delineating some of our goals and strategies for 2011. When I look at 2010, here are some of the things that quickly jump off the page: growing Tellago by 300%, launching a brand new company (Tellago Studios), expanding our customer base, and establishing our business intelligence practice http://tellago.com/what-we-say/events/business-intelligence...(read more)

    Read the article

  • DonXml does WCF in NYC

    - by gsusx
    Tomorrow is WCF day in New York City! My good friend and Tellago's CTO Don Demsak will be doing a session on WCF Data and RIA Services at the WCF Firestarter event hosted at the Microsoft offices in New York City. Don has an encyclopedic knowledge of both technologies and will be sharing lots of best practices learned from applying these technologies in large service-oriented environments. In addition to Don, my crazy Cuban friend Miguel Castro will also be presenting three sessions at the...(read more)

    Read the article

  • Tips/Process for web-development using Django in a small team

    - by Mridang Agarwalla
    We're developing a web app using Django and we're a small team of 3-4 programmers, some doing the UI work and some doing the backend work. I'd love some tips and suggestions from the people here. This is our current setup:

      - We're using Git as our SCM tool and following this branching model.
      - We're following PEP 8 as our style guide.
      - Agile is our software development methodology and we're using Jira for that.
      - We're using the Confluence plugin for Jira for documentation, and I'm going to be writing a script that also dumps the PyDocs into Confluence (see the sketch below).
      - We're using virtualenv for sandboxing.
      - We're using zc.buildout for building.

    That's everything I can think of off the top of my head. Any other suggestions/tips would be welcome. I feel we have a pretty good setup, but I'm also confident we could do more. Thanks.
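
    For the PyDoc-to-Confluence script mentioned above, here is a minimal sketch of one way it could work, assuming Confluence's legacy XML-RPC API (the confluence2 endpoint) is enabled. The URL, space key, page-title format and the publish_module_docs helper are all made up for illustration, and depending on your Confluence version you may need to convert the raw pydoc HTML into Confluence's own storage format first:

        import pydoc
        import xmlrpc.client  # xmlrpclib on Python 2

        # Hypothetical values -- replace with your own Confluence instance.
        CONFLUENCE_URL = "https://confluence.example.com/rpc/xmlrpc"
        SPACE_KEY = "DEV"

        def publish_module_docs(module_name, user, password):
            """Render a module's pydoc as HTML and store it as a Confluence page."""
            html = pydoc.HTMLDoc().docmodule(pydoc.locate(module_name))
            proxy = xmlrpc.client.ServerProxy(CONFLUENCE_URL)
            token = proxy.confluence2.login(user, password)
            page = {"space": SPACE_KEY,
                    "title": "PyDoc: %s" % module_name,
                    "content": html}
            proxy.confluence2.storePage(token, page)
            proxy.confluence2.logout(token)

        publish_module_docs("os.path", "someuser", "somepassword")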

    Read the article

  • Announcing SO-Aware Test Workbench

    - by gsusx
    Yesterday was a big day for Tellago Studios. After a few months of heads-down work, we announced the release of the SO-Aware Test Workbench, a tool that brings sophisticated performance testing and test visualization capabilities to the WCF world. This work is the result of feedback received from many of our SO-Aware and Tellago customers on how to improve WCF testing. More importantly, with the SO-Aware Test Workbench we are trying to address what has been one of the biggest challenges...(read more)

    Read the article

  • Handling HumanTask attachments in Oracle BPM 11g PS4FP+ (II)

    - by ccasares
    Retrieving uploaded attachments (UCM)

    As stated in my previous blog entry, Oracle BPM 11g 11.1.1.5.1 (aka PS4FP) introduced a cool new feature whereby you can use Oracle WebCenter Content (previously known as Oracle UCM) as the repository for human task attached documents. For more information about how to use or enable this feature, have a look here. The attachment scope (either TASK or PROCESS) also applies to UCM attachments. But even with this feature, one question arises when using UCM attachments: how can I get them from within the process?

    The first answer would be to use the same getTaskAttachmentContents() XPath function already explained in my previous blog entry. In fact, that's the way it should be. But in Oracle BPM 11g 11.1.1.5.1 (PS4FP) and 11.1.1.6.0 (PS5) there's a bug that prevents you from doing that: if you invoke that function against a UCM attachment, you get a null content response (bug #13907552), even if the attachment was correctly uploaded. Until this bug gets fixed, the workaround shown below lets me retrieve UCM-attached documents from within a BPM process. As a bonus, the sample also shows how to interact with the WCC API from within a BPM process.

    Aside note: I suggest you read my previous blog entry about Human Task attachments, where I briefly describe some concepts that are used next, such as the execData/attachment[] structure.

    Sample Process

    I will be using the following sample process: a dummy UserTask using the "HumanTask2" Human Task, followed by an Embedded Subprocess that retrieves the attachments' payload. In this case, and here's the key point of the sample, we retrieve that payload using the WebCenter Content WebService API (IDC), and once retrieved, we write each attachment back to a file on the server using a File Adapter service. In detail:

      - We use the same attachmentCollection XSD structure and the same BusinessObject definition as in the previous blog entry, but we create a separate variable, named attachmentUCM, based on that BusinessObject.
      - We still need to keep a copy of the HumanTask output's execData structure, so we create a new variable of type TaskExecutionData (a different one than that used for non-UCM attachments).
      - As in the non-UCM attachments flow, in the output tab of the UserTask mapping we keep a copy of the execData structure.
      - Inside the embedded subprocess, an XSLT transformation feeds the attachmentUCM variable with: the name of each attachment (from the execData/attachment/name element), and the WebCenter Content ID of the uploaded attachment. That ID is stored in the execData/attachment/URI element in the format ecm://<id>; as we just want the numeric <id>, we strip the "ecm://" protocol prefix with a couple of XPath string functions.
      - We again set the target payload element to an empty string, so that an empty <payload></payload> tag gets created. The XSLT transformation uses an XSLT for-each node to create as many target structures as necessary.

    Once the attachmentsUCM structure contains the name of each attachment along with its WCC unique id (dID), it is time to iterate through it and get the payload. For that we use a new embedded subprocess of type MultiInstance that iterates over the attachmentsUCM/attachment[] element. Each iteration uses a Service activity that invokes the WCC API through a WebService. Follow these steps to create and configure the Partner Link needed:

      - Log in to the WCC console with an administrator user (e.g. weblogic).
      - Go to the Administration menu and click the "Soap Wsdls" link. We will use the GetFile service to retrieve a file based on its dID, so we need that service's WSDL definition, which can be downloaded by clicking the GetFile link.
      - Save the WSDL file in your JDev project folder.
      - In the BPM project's composite view, drag & drop a WebService adapter to create a new External Reference based on the just-added GetFile.wsdl. Name it UCM_GetFile.
      - WCC services are secured through basic HTTP authentication, so we need to enable the just-created reference for that: right-click the reference and click Configure WS Policies; under the Security section, click "+" to add the "oracle/wss_username_token_client_policy" policy.
      - The last step is to set the credentials for the security policy. For the sample we use the WCC admin user (weblogic/welcome1). Open the composite.xml file, select the Source view, search for the UCM_GetFile entry and add the following elements into it:

        <reference name="UCM_GetFile" ui:wsdlLocation="GetFile.wsdl">
          <interface.wsdl interface="http://www.stellent.com/GetFile/#wsdl.interface(GetFileSoap)"/>
          <binding.ws port="http://www.stellent.com/GetFile/#wsdl.endpoint(GetFile/GetFileSoap)"
                      location="GetFile.wsdl" soapVersion="1.1">
            <wsp:PolicyReference URI="oracle/wss_username_token_client_policy"
                                 orawsp:category="security" orawsp:status="enabled"/>
            <property name="weblogic.wsee.wsat.transaction.flowOption"
                      type="xs:string" many="false">WSDLDriven</property>
            <property name="oracle.webservices.auth.username"
                      type="xs:string">weblogic</property>
            <property name="oracle.webservices.auth.password"
                      type="xs:string">welcome1</property>
          </binding.ws>
        </reference>

    Now the new external reference is ready and we should be able to use it from our BPM process. However, we hit a problem here. The WCC GetFile service operation that we will use, GetFileByID, accepts as input a structure similar to the following, where all element tags are optional:

        <get:GetFileByID xmlns:get="http://www.stellent.com/GetFile/">
          <get:dID>?</get:dID>
          <get:rendition>?</get:rendition>
          <get:extraProps>
            <get:property>
              <get:name>?</get:name>
              <get:value>?</get:value>
            </get:property>
          </get:extraProps>
        </get:GetFileByID>

    We need to fill in just the <get:dID> tag element. Due to some kind of restriction or bug in WCC, the rest of the tag elements must NOT be sent, not even empty (i.e. <get:rendition></get:rendition> or <get:rendition/>). A sample request that queries just by dID must therefore be in the following format:

        <get:GetFileByID xmlns:get="http://www.stellent.com/GetFile/">
          <get:dID>12345</get:dID>
        </get:GetFileByID>

    The issue is that simple mapping in BPM does create empty tags, a sample result being:

        <get:GetFileByID xmlns:get="http://www.stellent.com/GetFile/">
          <get:dID>12345</get:dID>
          <get:rendition/>
          <get:extraProps/>
        </get:GetFileByID>

    Although the above structure is perfectly valid XML, it is not accepted by WCC, so we need to bypass the problem. The workaround we use (many others are available) is to add a Mediator component between the BPM process and the Service that simply copies the input structure from BPM while getting rid of the empty tags. Follow these steps to configure the Mediator:

      - Drag & drop a new Mediator component into the composite. Uncheck the creation of the SOAP bindings, use the Interface Definition from WSDL template, and select the existing GetFile.wsdl.
      - Double-click the Mediator to edit it. Add a static routing rule to the GetFileByID operation, of type Service, and select the References/UCM_GetFile/GetFileByID target service.
      - Create the request and reply XSLT mappers. Make sure you map only the dID element in the request, and do an auto-map for the whole response.

    Finally, we can add and configure the Service activity in the BPM process: drag & drop it into the embedded subprocess, select the NormalizedGetFile service and the getFileByID operation, and map both the input and the output. Once this embedded subprocess ends, we have all attachments (name + payload) in the attachmentsUCM variable, which is the main goal of this sample.

    But to verify that everything runs fine, we finish the sample by writing each attachment to a file. To that end we include a final embedded subprocess that concurrently iterates through each attachmentsUCM/attachment[] element. Each iteration uses a Service activity that invokes a File Adapter write service, and here we have two important parameters to set. First, the payload itself: the file adapter expects binary data in base64 format (string), and we have to map it using XPath (simple mapping doesn't recognize a String as a valid base64-binary target). Second, we must set the target filename using the Service Properties dialog box. Again, note how we make use of the loopCounter index variable to get the right element within the embedded subprocess iteration.

    The final blog entry about attachments will cover how to inject documents into Human Tasks from the BPM process and how to share attachments between different User Tasks. Coming soon. Once I finish all posts on this matter, I will upload the whole sample project to java.net.

    Read the article

  • Is WCF suitable for writing an application which is shared among applications?

    - by RPK
    I have developed and deployed a few ASP.NET applications. Sometimes I want to stop users from inserting or updating a record when: (1) maintenance is going on, or (2) operations are stopped due to a payment delay. In one of my recent applications I implemented this feature by first checking the database operations for a locked status; if either of the above conditions is fulfilled, database operations like insert and update are not carried out. I now need this feature in all the old applications and in future applications I build. I want to know whether WCF is suitable in this scenario, as I want to share methods, or an independent locking application, among various other applications. Is WCF appropriate for this type of scenario?

    Read the article

  • SSIS and StreamInsight Working Together.

    I have been thinking a lot recently about what it would be like to have StreamInsight and SSIS working together. Well, the CAT team have produced a paper on some of our options here. Here are some of my thoughts.

    There is of course a slight mismatch in their types of usage. StreamInsight is an event stream processing engine capable of operating on new data in the sub-second timeframe; the engine allows you to do real-time analytics and take decisions on events that have potentially only just happened. SSIS, on the other hand, is a batch processing engine. In general I do not like having to invoke the same package more than once every 90 seconds or so, as it can start to get expensive. Usually when doing batch processing we have an hour or longer of grace before we have to move data from A -> B.

    StreamInsight operates on streams of data. Before anyone mentions it: yes, I know StreamInsight is equally adept at using the IEnumerable interface, but I would argue live streaming and real-time analytics is a primary goal of the product. SSIS does not have an "Always On" button.

    I do not like the idea of embedding StreamInsight inside SSIS using a transform, particularly. It means StreamInsight becomes a batch processing engine, because it can only operate when the SSIS package is running and SSIS is in charge of when that happens. If I am to have StreamInsight within SSIS, then I prefer to have StreamInsight on the adapters; this way you can force the adapters to stay open and introduce events into your pipeline. SSIS has a much richer set of transforms out of the box than StreamInsight, and although "Always On" was not a design goal of SSIS, I have used it like this and it works just fine.

    SSIS being called from within StreamInsight, now that excites me. For a while now I have been thinking what it would be like to decouple the Data Flow task from the SSIS package and expose it as something with which you can interact. Anything could instantiate this version of a DFT, as it would expose one or more input interfaces and one or more output interfaces. I can imagine that this would be a big hit when moving to "The Cloud" as well; I could see the Data Flow task being hosted in Azure AppFabric or some such layer. StreamInsight would be able to take advantage of this too.

    I am interested to see where this goes and will be pressing for more meat around the subject when I visit Redmond soon.

    Read the article

  • How to improve designer and developer workflow?

    - by mbdev
    I work in a small startup with two front-end developers and one designer. Currently the process starts with the designer sending a PNG file with the whole page design and assets if needed. My task as a front-end developer is to convert it to an HTML/CSS page. My workflow currently looks like this:

      - Lay out the distinct parts using HTML elements.
      - Style each element very roughly (floats, minimal fonts and padding) so I can modify it using inspection.
      - Using Chrome Developer Tools (inspect), add/change CSS attributes while updating the CSS file.
      - Refresh the page after X amount of changes.
      - Use Pixel Perfect to refine the design further.
      - Sit with the designer to make the last adjustments.

    Inferring the padding, margins and font sizes by trial and error takes a lot of time, and I feel the process could become more efficient, but I'm not sure how to improve it. Using PSD files is not an option, since buying Photoshop for each developer is currently not being considered. A design guide is also not available, since the design is still evolving and new features are being introduced. Ideas for improving the process above, and hearing how the process looks at your company, would be great.

    Read the article

  • How can I find out a file's path in the text encoding used by PosteRazor?

    - by ændrük
    PosteRazor uses an apparently outdated GUI that is incapable of properly displaying my filenames. For the sake of convenience, I want to be able to open any file in PosteRazor by copying and pasting its path from Nautilus. This works in other applications, but sadly, PosteRazor is unable to understand the path. How can I convert the path that Nautilus generates into a text encoding that is compatible with PosteRazor?
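
    One workaround, sketched below on the assumption that PosteRazor simply chokes on non-ASCII bytes in the path: create a sibling symlink whose name is plain ASCII and open that instead. The ascii_safe_link helper and its fallback name are made up for illustration:

        import os
        import unicodedata

        def ascii_safe_link(path):
            """Create a sibling symlink with an ASCII-only name and return it."""
            directory, name = os.path.split(os.path.abspath(path))
            # Decompose accented characters, then drop anything non-ASCII.
            safe = unicodedata.normalize("NFKD", name)
            safe = safe.encode("ascii", "ignore").decode("ascii") or "posterazor_input"
            link = os.path.join(directory, safe)
            if not os.path.exists(link):
                os.symlink(os.path.join(directory, name), link)
            return link

        # Paste the printed path into PosteRazor instead of the original one.
        print(ascii_safe_link("/home/user/Pósters/fotografía.png"))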

    Read the article

  • How do you remember where in your code you want to continue next time?

    - by bitbonk
    When you interrupt your work on some code (be it because you have to work on something else, are going on vacation, or simply because it is the end of the day) and close that Visual Studio project, what is your preferred way of remembering what to do next when you start working on that code again? Do you set a Visual Studio bookmark, or do you write down something like // TODO: continue here next time? Maybe you have a special tag like // NEXT:? Do you put a sticky note on your monitor? Do you use a cool tool or Visual Studio plugin I should know about? Do you have any personal trick that helps you find the place in the code where you left off last time?

    Read the article

  • Database change management tools?

    - by zodeus
    We are currently in the process of solidifying a database change management process. We run MySQL 5 on RedHat 5. I have selected LiquiBase as the tool because it's open source and allows us to expand its functionality later if needed. It also seems to be one of the few free projects that are still active. Has anyone here had any experience using LiquiBase or other DB versioning tools? Company background: we are a SaaS company providing a 24/7 hosted application. There are dozens of instances running different versions of the same database, and we need a way to manage the deploy process as it's starting to get out of control. The databases have hundreds of tables and we typically do releases once every 3 months.

    Read the article

  • What version numbering scheme to use?

    - by deamon
    I'm looking for a version numbering scheme that expresses the extent of change, especially compatibility. Apache APR, for example, uses the well-known <major>.<minor>.<patch> scheme (example: 4.5.11). Maven suggests a similar but more detailed scheme: <major>.<minor>.<patch>-<qualifier>-<build number> (example: 4.5.11-RC1-3732). Where is the Maven versioning scheme defined? Are there conventions for qualifier and build number? It is probably a bad idea to use Maven but not follow the Maven version scheme... What other version numbering schemes do you know? Which scheme would you prefer, and why?
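
    Whatever scheme you pick, one practical test is whether versions sort correctly as tuples rather than as strings. A small sketch of parsing and ordering the Maven-style form; the parsing rules here are my own convention, not Maven's official ones:

        import re

        def parse(version):
            """Split e.g. '4.5.11-RC1-3732' into a sortable tuple."""
            m = re.match(r"(\d+)\.(\d+)\.(\d+)(?:-([^-]+))?(?:-(\d+))?$", version)
            major, minor, patch, qualifier, build = m.groups()
            # No qualifier means a final release, which sorts after RC/beta builds.
            return (int(major), int(minor), int(patch),
                    qualifier is None, qualifier or "", int(build or 0))

        versions = ["4.5.11", "4.5.11-RC1-3732", "4.5.2"]
        print(sorted(versions, key=parse))
        # ['4.5.2', '4.5.11-RC1-3732', '4.5.11'] -- tuple order, not string order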

    Read the article

  • rails wiki site - article edit highlighting/strikethrough with htmldiff maxes cpu

    - by mark
    Hi, I'm implementing a wiki-style site and want to highlight changes made to articles between successive versions. Using htmldiff to highlight changes works great, except that it is rather CPU intensive. I'm using the awesome vestal_versions plugin for versioning. So how best to handle this? I considered having an on_create callback on version creation enqueue a delayed job that processes the diff and then stores the htmldiff-processed article (in the version table row). If this is a good approach, how can I extend vestal_versions without touching the gem? Or maybe there is a better approach. Any advice is much appreciated. :)

    Read the article

  • How to validate Windows VC++ DLL on Unix systems

    - by Guildencrantz
    I have a solution, mostly C# but with a few VC++ projects, that is pushed through our standard release process (Perl and bash scripts on Unix boxes). Currently the initiative is to validate DLL and EXE versions as they pass through the process. All the versioning is set up so that File Version is of the format $Id: $ (between the colon and the second dollar should be a git commit hash), and the Product Version is of the format $Hudson Build: $ (between the colon and the second dollar should be a string representing the Hudson build details). Currently this system works extremely well for the C# projects, because the version information is stored as plain strings within the compiled code (you can literally use the Unix strings command and see the version information). The problem is that the VC++ projects do not expose this information as strings (I have used a Windows system to verify that the version information is correctly being set), so I'm not sure how to extract the version on a Unix system. Any suggestions for either A) getting a string representation of the version embedded in the compiled code, or B) a utility/script that can extract this information?
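
    On option A: the likely reason strings misses the VC++ version info is that VERSIONINFO resources are stored as UTF-16, which an ASCII-oriented strings scan skips. A rough sketch of a UTF-16LE-aware scan that could run on a Unix box; the markers and minimum length are illustrative:

        import re
        import sys

        def utf16_strings(path, min_len=6):
            """Yield printable UTF-16LE strings from a binary (rough heuristic)."""
            with open(path, "rb") as f:
                data = f.read()
            # A printable ASCII char followed by a NUL byte is UTF-16LE text.
            for m in re.finditer(rb"(?:[\x20-\x7e]\x00){%d,}" % min_len, data):
                yield m.group().decode("utf-16-le")

        for s in utf16_strings(sys.argv[1]):
            if "$Id:" in s or "$Hudson Build:" in s:
                print(s)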

    Read the article

  • Design ideas for a versioned db schema with related tables also versioned

    - by vfilby
    Here is the drill: I want to version a database. I have done this before using multiple rows, where the table's primary key becomes a combination of the row id and either a datestamp or a version #. Now I want to version a table that depends on many other small tables. Versioning each table will be a giant PITA, so I am looking for good options to version a schema where the data to be versioned is spread over multiple tables. All related tables are properly keyed with foreign key relationships. The database is currently on SQL Server 2005.

    Read the article

  • Git Workflow With Capistrano

    - by jerhinesmith
    I'm trying to get my head around a good git workflow using Capistrano. I've found a few good articles, but I'm either not completely grasping what they're suggesting (likely) or they're somewhat lacking. Here's roughly what I had in mind so far, but I get caught up on when to merge back into the master branch (i.e. before moving to stage? after?) and on hooking it into Capistrano for deployments:

      1. Make sure you're up to date with all the changes made on the remote master branch by other developers:

        git checkout master
        git pull

      2. Create a new branch that pertains to the particular bug you're trying to fix:

        git checkout -b bug-fix-branch

      3. Make your changes:

        git status
        git add .
        git commit -m "Friendly message about the commit"

    So, this is usually where I get stuck. At this point, I have a master branch that is healthy and a new bug-fix-branch that contains my (untested, other than unit tests) changes. If I want to push my changes to stage (through cap staging deploy), do I have to merge my changes back into the master branch? I'd prefer not to, since it seems like master should be kept free of untested code. Do I even deploy from master (or should I be tagging a release first and then modifying my production.rb file to deploy from that tag)? git-deployment seems to address some of these workflow issues, but I can't seem to find out how on earth it actually hooks into cap staging deploy and cap production deploy. Thoughts? I assume there's a likely canonical way to do this, but I either can't find it or I'm too new to git to recognize that I have found it. Help!

    Read the article

  • Understanding the workflow of the messages in a generic server implementation in Erlang

    - by Chiron
    The following code is from "Programming Erlang, 2nd Edition". It is an example of how to implement a generic server in Erlang.

        -module(server1).
        -export([start/2, rpc/2]).

        %% Spawn the server loop, seeded with the callback module's initial
        %% state, and register it under Name so clients can reach it by name.
        start(Name, Mod) ->
            register(Name, spawn(fun() -> loop(Name, Mod, Mod:init()) end)).

        %% Client side: send {self(), Request} to the server, then block
        %% waiting for the {Name, Response} reply.
        rpc(Name, Request) ->
            Name ! {self(), Request},
            receive
                {Name, Response} -> Response
            end.

        %% Server side: receive a request, let the callback module handle it,
        %% reply to the caller, and recurse with the new state.
        loop(Name, Mod, State) ->
            receive
                {From, Request} ->
                    {Response, State1} = Mod:handle(Request, State),
                    From ! {Name, Response},
                    loop(Name, Mod, State1)
            end.

        -module(name_server).
        -export([init/0, add/2, find/1, handle/2]).
        -import(server1, [rpc/2]).

        %% client routines
        add(Name, Place) -> rpc(name_server, {add, Name, Place}).
        find(Name) -> rpc(name_server, {find, Name}).

        %% callback routines
        init() -> dict:new().
        handle({add, Name, Place}, Dict) -> {ok, dict:store(Name, Place, Dict)};
        handle({find, Name}, Dict) -> {dict:find(Name, Dict), Dict}.

        server1:start(name_server, name_server).
        name_server:add(joe, "at home").
        name_server:find(joe).

    I tried very hard to understand the workflow of the messages. Would you please help me understand the workflow of this server implementation during the execution of the functions server1:start, name_server:add and name_server:find?
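
    For what it's worth, here is a rough Python analogue of that message flow, with threads and queues standing in for Erlang processes and mailboxes. It is purely illustrative (not from the book): rpc drops a (reply_box, request) pair into the server's mailbox, the loop handles it via the callbacks and replies, and rpc blocks until the reply arrives.

        import queue
        import threading

        def start(init, handle):
            """Spawn the server loop; the returned queue plays the process mailbox."""
            mailbox = queue.Queue()
            def loop():
                state = init()
                while True:
                    reply_box, request = mailbox.get()   # receive {From, Request}
                    response, state = handle(request, state)
                    reply_box.put(response)              # From ! {Name, Response}
            threading.Thread(target=loop, daemon=True).start()
            return mailbox

        def rpc(mailbox, request):
            reply_box = queue.Queue()                    # stands in for self()
            mailbox.put((reply_box, request))            # Name ! {self(), Request}
            return reply_box.get()                       # blocking receive

        # name_server callbacks
        def init(): return {}
        def handle(request, d):
            if request[0] == "add":
                _, name, place = request
                return "ok", {**d, name: place}
            return d.get(request[1]), d

        server = start(init, handle)
        rpc(server, ("add", "joe", "at home"))
        print(rpc(server, ("find", "joe")))   # -> at home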

    Read the article

  • Need help with version number schemes

    - by Davy8
    I know the standard Major.Minor.Build.Revision, but there are several considerations for us that are somewhat unique:

      - We do internal releases almost daily, occasionally more than once a day.
      - Windows Installer doesn't check Revision, so that's almost moot for our purposes.
      - Major and Minor numbers are ideally only updated for public releases, and should be done manually.
      - That leaves the Build # that needs to be automatically updated.
      - We want internal releases to be able to be performed from any developer's machine, so that rules out using x.x.* in Visual Studio, because different numbers could be generated from different machines and each build isn't guaranteed to be larger than the previous one.
      - We have about 15 or so projects as part of the product, so saving the version numbers in SVN isn't ideal, since on every release we'd have to commit all those files.

    Given those criteria I can't really come up with a good versioning scheme. The last two criteria could be dropped, but meeting all of them seems ideal. A date stamp alone is insufficient because we might do more than one release a day, and given the maximum size of a version field (a UInt16, so around 64000; WiX actually complains about numbers higher than Int32.MaxValue) a full date/time won't fit.
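
    One deterministic, machine-independent option for the Build/Revision pair is a date-based scheme (the same idea Visual Studio's auto-increment uses, if I recall correctly): Build = days since a fixed epoch, Revision = half-seconds since midnight. A minimal sketch, with the epoch date purely illustrative:

        from datetime import datetime

        EPOCH = datetime(2000, 1, 1)  # illustrative base date

        def build_and_revision(now=None):
            now = now or datetime.utcnow()
            build = (now - EPOCH).days  # fits a UInt16 until ~2179
            midnight = now.replace(hour=0, minute=0, second=0, microsecond=0)
            revision = int((now - midnight).total_seconds() // 2)  # 0..43199
            return build, revision

        build, revision = build_and_revision()
        print("1.0.%d.%d" % (build, revision))

    Note the caveat for your case: Build only changes once per day, so two same-day internal releases would differ only in Revision, which Windows Installer ignores.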

    Read the article

  • .NET WF4: Should it be in the middle of everything?

    - by stimpy77
    I am aware that WF4 (Windows Workflow Foundation 4.0, part of .NET 4.0) is a significant rework and redesign of WF3, and that much of what made WF3 a poor technology choice has been cleaned up in WF4. For example, as far as I can tell, WF4 activities are more or less testable with [TestMethod] and mocking. This, among other things like improved performance, has drawn my attention back to the technology, whereas I had previously pooh-poohed WF3. I'm working on a new architecture for essentially an n-tier collaborative application (not enterprise-class, just a smallish project with the potential to grow significantly) where I'm already trying to discipline myself to use IoC and, to some extent, TDD, and I'm wondering, in general terms, whether it is wiser to just hand-code the workflow logic or to delve into learning and integrating WF4 so that WF becomes literally the controller of the entire application, i.e. the practical C in "MVC" (not ASP.NET MVC but rather the pattern). So should workflow activities in WF4 be the primary controller for a highly expandable/growable web-based collaborative application? Or am I asking entirely the wrong question? This is a vague question, I'm sure, so abstract answers are as welcome as specific ones.

    Read the article

  • What workflow engines are companies using and would you use it again? [on hold]

    - by cbmeeks
    I've been asked to find out "what's out there" when it comes to workflow engines. We have projects where a workflow-based development environment makes sense. I've looked a little into jBPM, but it seemed to have a steep learning curve, and Google keeps taking me to commercial products, or to products that I think are open source but instead have very limited "community editions". I could simply be searching for the wrong terms. What I would like to know is which actual workflow-based products you have used at your company, and with what degree of success or failure. Would you use it again? Thanks.

    Read the article

  • Workflow Automation software for SVN

    - by KyleMit
    We're currently using IBM's ClearQuest for task management and ClearCase for change management; they plug and play very well with each other. Users can create tasks in ClearQuest as defects and enhancements, and developers can use those tasks to check out and modify code in source control. We're looking to upgrade to a better, more modern source control system, like SVN, although we're not married to that as our source control system. There are loads of source control systems out there, but I'm having difficulty finding one that also includes the ability to have users enter and track tasks, especially in a way that is native to the source control system itself. Are there any products that replace ClearQuest for systems like SVN? Are there any other cheap / open source application pairs that handle both sides of the coin?

    Read the article

  • F# Async workflow

    - by akaphenom
    Is there a way to look at the definition of the Async workflow? What goes on under the hood that makes a line of code behave differently inside it than outside of it?

    Read the article

  • What's the workflow of Continuous Integration With Hudson?

    - by Satoru.Logic
    Hi, all. I was referred to Hudson today. I had heard about continuous integration before, but I have no idea what the heck a CI server is. Hudson was really easy to install on Ubuntu, and within several minutes I managed to set up an instance of it. But I don't quite understand the workflow of a CI server, or how I am supposed to use it. Please share if you have experience with CI. Thanks in advance.

    Read the article
