Search Results

Search found 1678 results on 68 pages for 'workflow'.


  • TFS 2010 build template failing to open in designer - how to fix?

    - by Duncan Bayne
    I can open the DefaultTemplate.xaml that was installed as part of our TFS 2010 RC setup. I created a copy of this template called ApplicationTemplate.xaml and modified it slightly, using the workflow designer in Visual Studio. Now I can no longer open ApplicationTemplate.xaml. When I try, I receive many errors like the following: Error 2 Assembly 'Microsoft.TeamFoundation.Build.Client' was not found. Verify that you are not missing an assembly reference. Also, verify that your project and all referenced assemblies have been built. C:\Projects\tfs\Hydraulics\BuildProcessTemplates\ApplicationTemplate.xaml 1 1828 Miscellaneous Files. However, I can still open and edit the DefaultTemplate.xaml file without any issues. Has anyone else come across this problem, and if so, did you manage to resolve it, or did you have to recreate the template?

    Read the article

  • WF - Creating a selective list

    - by Michael Bendtsen
    I have a stupid question. I have created a workflow designer host where I publish my own activities. I also have a property grid which only displays the properties decorated with a special attribute. This designer is going to be used by non-IT staff. What I want is for the user to be able to select a property value on an activity from a list. I know I could just create an enum, but I would like the list to be dynamic - i.e. all events on a specific interface (extracted using reflection). Is this at all possible, or am I stuck with enums?
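    For the "extracted using reflection" part, here is a minimal sketch of the idea in F# (the designer host itself is presumably C#, and INotifyPropertyChanged below is only a stand-in for whatever interface the activities actually expose): System.Reflection can enumerate an interface's events, and the resulting names could feed the property grid's drop-down instead of a hard-coded enum.

    open System
    open System.ComponentModel
    open System.Reflection

    // Names of all public instance events declared on a type; these names
    // would populate the dynamic list shown in the property grid.
    let eventNames (t: Type) =
        t.GetEvents(BindingFlags.Public ||| BindingFlags.Instance)
        |> Array.map (fun e -> e.Name)
        |> Array.toList

    // INotifyPropertyChanged is just an example; in the designer the Type
    // would come from the activity's configuration or a loaded assembly.
    eventNames typeof<INotifyPropertyChanged>
    |> List.iter (printfn "%s")

    Note that GetEvents on an interface type only returns events declared directly on that interface, so inherited interfaces would need to be walked as well if that matters.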

    Read the article

  • How to link to a folder in document library from sharepoint list item?

    - by kyrisu
    Hi. Background: I have items on a SharePoint list. I also have a corresponding folder in a document library that contains documents about each item. I want to be able to get to this folder straight from the item's properties. I have tried to create a lookup column containing the folder ID, but that doesn't help, because a folder is not a supported type and it just doesn't work. Another solution would be to create a link column, but if I create it statically, then after creating an alternate access mapping (and reaching the page from the internet, for example) it won't work (so the solution posted here won't work for me). I want to create this link from a SharePoint workflow. I have a custom action that can return any info about the folder I want (ID, URL, etc.). Question: how do I link from a SharePoint list item to a folder in a document library?

    Read the article

  • Is it possible to run an Automator workflow when a USB device is connected?

    - by pkaeding
    I would like to run an Automator workflow every time I connect a certain thumbdrive to my OS X Snow Leopard system. This workflow will copy certain files from my hard drive onto the thumb drive. I have the workflow part figured out, but I don't know how to trigger it. Ideally, I wouldn't need to do anything; it would run as soon as the drive is mounted. Is this possible? Am I approaching this from the wrong angle? The solution only really needs to work on Snow Leopard. Thanks!

    Read the article

  • Enabling support of EUS and Fusion Apps in OUD

    - by Sylvain Duloutre
    Since the 11gR2 release, OUD supports Enterprise User Security (EUS) for database authentication and also Fusion Apps. I plan to blog about that soon. Meanwhile, the 11gR2 OUD graphical setup does not let you configure both EUS and Fusion Apps support at the same time. However, it can be done manually using the dsconfig command line. The simplest way to proceed is to select EUS from the setup tool, then manually add support for Fusion Apps with the dsconfig commands below.

    Create a FA workflow element with eusWfe as its next element:

    dsconfig create-workflow-element \
      --set enabled:true \
      --set next-workflow-element:Eus0 \
      --type fa \
      --element-name faWfe

    Modify the workflow so that it starts from your FA workflow element instead of Eus:

    dsconfig set-workflow-prop \
      --workflow-name userRoot0 \
      --set workflow-element:faWfe

    Note: the configuration changes may differ slightly in case multiple databases/suffixes are configured on OUD.

    Read the article

  • changing user permissions dynamically

    - by ephieste
    I am designing a system on SharePoint. There is an approval list for the items; its members can approve, reject and edit the items. While approving an item, someone from the approval list has to fill in its "assigned to" field. The user who is added to the "assigned to" field should then be able to edit the content of the item after it is approved. So, how can I give edit permission to users once they are added to the "assigned to" field of a specific item? The situation is: approval list: A, B, C (edit and view permission); users: x, y, z, ... (no permission, view after approval); items: item1, item2, item3, ... (invisible to the users). A approves item1 and adds X to the "assigned to" field, meaning the item is now under X's responsibility. But X hasn't got edit permission, and we can't give X edit permission for every item - he should be able to edit an item only after he is written into its "assigned to" field. How can I create this workflow in SharePoint? Urgent help needed, please.

    Read the article

  • Long running stateful service in .NET

    - by Asaf R
    Hi, I need to create a service in .NET that maintains (inner) state in memory, spawns multiple threads and is generally long-running. There are a lot of options - a good old Windows Service, Windows Communication Foundation (WCF), Windows Workflow Foundation (WF) - and I really don't know which to choose. Most of the functionality is in a library used by this service, so the service itself is rather simple. On one hand, it's important that the service host be as close to "simply working" as possible, which excludes a plain Windows Service. On the other hand, it's important that the service is not taken down by the host just because there's no external activity, which makes WCF kind of "scary". As for WF, its strongest selling point is the ability to create processes as, um..., workflows, which is something I don't need nor want. To sum it up, the plethora of Microsoft technologies has got me a bit confused. I'd appreciate help regarding the pros and cons of each solution (or others I've failed to mention) for the problem of a stateful, long-running service in .NET. Thanks, Asaf. P.S. I'm using .NET 4. EDIT: What I mean by the host "simply working" is, for example, that the service I create be reactivated if it crashes. I guess the reason for this question is that I've created Windows Services in the past (I think it was in plain C++ with the Win32 API), and I don't want to miss out on something simpler if there is such a thing. Thanks for all the replies thus far! Asaf.

    Read the article

  • [F#] Parallelize code in nested loops

    - by Juliet
    You always hear that functional code is inherently easier to parallelize than non-functional code, so I decided to write a function which does the following: given an input of strings, total up the number of occurrences of each character across all of the strings. So, given the input [ "aaaaa"; "bbb"; "ccccccc"; "abbbc" ], the method returns a: 6; b: 6; c: 8. Here's what I've written:

    (* seq<#seq<char>> -> Map<char,int> *)
    let wordFrequency input =
        input
        |> Seq.fold (fun acc text ->
            (* This inner loop can be processed on its own thread *)
            text
            |> Seq.choose (fun char -> if Char.IsLetter char then Some(char) else None)
            |> Seq.fold (fun (acc : Map<_,_>) item ->
                match acc.TryFind(item) with
                | Some(count) -> acc.Add(item, count + 1)
                | None -> acc.Add(item, 1)) acc) Map.empty

    This code is ideally parallelizable, because each string in the input can be processed on its own thread. It's not as straightforward as it looks, though, since the inner loop adds items to a Map shared between all of the inputs. I'd like the inner loop factored out onto its own thread, and I don't want to use any mutable state. How would I re-write this function using an Async workflow?
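    One way to factor the inner loop out - a sketch of one possible rewrite, specialized to string inputs for brevity, not the only answer - is to build a private Map per string inside an async, run the asyncs with Async.Parallel, and merge the per-string maps afterwards, so no state is shared between threads:

    open System

    // Count occurrences of each letter in one string; nothing is shared,
    // so each string can safely be processed on its own thread.
    let letterCounts (text: string) =
        text
        |> Seq.filter (fun c -> Char.IsLetter c)
        |> Seq.fold (fun (acc: Map<char, int>) c ->
            match acc.TryFind c with
            | Some n -> acc.Add(c, n + 1)
            | None   -> acc.Add(c, 1)) Map.empty

    // Merge two count maps by summing the counts per character.
    let mergeCounts (a: Map<char, int>) (b: Map<char, int>) =
        b |> Map.fold (fun (acc: Map<char, int>) c n ->
            match acc.TryFind c with
            | Some m -> acc.Add(c, m + n)
            | None   -> acc.Add(c, n)) a

    // Build one map per input string in parallel, then merge the results.
    let wordFrequencyParallel (input: seq<string>) =
        input
        |> Seq.map (fun text -> async { return letterCounts text })
        |> Async.Parallel
        |> Async.RunSynchronously
        |> Array.fold mergeCounts Map.empty

    wordFrequencyParallel [ "aaaaa"; "bbb"; "ccccccc"; "abbbc" ]
    |> Map.iter (printfn "%c: %d")

    For the sample input this prints a: 6, b: 6 and c: 8, the same as the sequential version.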

    Read the article

  • Vim: yank and replace -- the same yanked input -- multiple times, and two other questions

    - by Hassan Syed
    Now that I am using vim for everything I type, rather than just for configuring servers, I want to sort out the following trivialities. I tried to formulate Google search queries, but the results didn't address my questions :D. Question one: how do I yank and replace multiple times? Once I have something in the yank history (if that is what it's called) and then highlight and use 'p' in command mode, the replaced text is put at the front of the yank history; therefore subsequent replace operations do not use the text I intended. I imagine this is a useful feature under certain circumstances, but I do not have a need for it in my workflow. Question two: how do I type text without causing the line to ripple forward? I use hard tab stops to align my code in a certain way - e.g., FunctionNameX ( lala * land ); FunctionNameProto ( ); When I figure out what needs to go into the second function, how do I insert it without moving the text up? Question three: is there a way of having a uniform yank history across gvim instances on the same machine? I have multiple monitors. Just wondering; at the moment I am using highlight + mouse middle click.

    Read the article

  • Why should I care about RVM's Gemset feature when I use Bundler?

    - by t6d
    I just don't get it. I thought Bundler was developed to resolve version conflicts between gems, so that I just have to require "bundler/setup" and everything is fine, knowing that Bundler will load the correct versions of all my gems and their dependencies. Now, RVM is great for managing multiple Rubies, I know, but why should I care about the Gemset feature? Am I missing something here? Can it make my development even easier? Maybe some of you can give me some hints on the perfect RVM + Bundler workflow for both development and production. I also don't know when RVM starts switching to another Ruby. I know that I can have an .rvmrc file in my project, but do I have to cd into this directory for the switch to happen? Furthermore, I usually use Passenger for development since, thanks to the Passenger.prefpane, integration with Mac OS is great. Can I still do that with RVM, or is there a better way to do it? Does Passenger recognize .rvmrc files and switch to the correct Gemset?

    Read the article

  • Refresh page in browser without resubmitting form

    - by Michael
    I'm an ASP.NET developer, and I usually find myself leaving the webpage that I'm working on open in my browser (Chrome is my browser of choice, but this question is relevant for any browser). My workflow typically goes like this: I write code, I rebuild my project in Visual Studio, and then I flip back to my browser with Alt-Tab and hit F5 to refresh the page. This is fine and dandy if a form hasn't been submitted since the page was opened. But if I've been clicking around on ASP.NET form controls, the page has posted form data a number of times, so hitting F5 causes the browser to (sensibly) pop up a confirmation message, e.g., "Confirm Form Resubmission: The page that you're looking for used information that you entered...". Sometimes I do want to resubmit the form, but more often than not, I just want to start over with the page (rather than resubmit form data). The way I usually get around this is to simply add some query string data to the URL so that the browser sees it as a fresh page request, e.g.: page.aspx becomes page.aspx? (or vice-versa). My question is: Is there a better way to quickly request a fresh version of a webpage (and not submit form data) in any of the major browsers? It seems like a no-brainer to me for web development, but maybe I'm missing something. What I'd love to see is something like the last item in this list:

    F5: refresh page
    Ctrl-F5: refresh page (and force cache refresh)
    Alt-F5: request fresh copy of the page without resubmitting the form

    Read the article

  • Javascript libraries + JQuery plugins contradict? How to debug?

    - by Metafaniel
    This is somewhat of a newbie question... I make an effort every day to learn, so please understand ;) I'm not the very best expert, but I can do a decent job building good-looking and functional websites or web applications. My main tools are PHP 5, HTML5, CSS 2 and 3, a database (SQLite, MySQL), and JavaScript with jQuery. I'm not an expert at all in JavaScript. I often find interesting jQuery plugins or tutorials and try to mix them up to build the functionality I need. This time I'm mixing maybe too many plugins and js files from different sources. In fact, my app does what I want except for certain behaviors... There are no errors, everything looks fine, but the misbehavior persists. So maybe I need to specify a class I don't know about, or one plugin contradicts another and I just can't understand, for example, why a <button type="button">DON'T submit</button> just submits the form... Anyway, my point is: do you know a way to debug these situations? Is there a generic tool, suggestion, workflow or something to help me understand conflicts or omissions between libraries or plugins (JavaScript libraries, my own JavaScript files and jQuery plugins)? I hope there is a way! Thanks a lot for your help and understanding! =)

    Read the article

  • What's the proper approach for writing multi-path "story" flows?

    - by Basiclife
    Hi, I wonder if you can help me. I'm writing a 2D game which allows players to take multiple routes, some of which branch/merge - perhaps even loop. Each section of the game will decide which section is loaded next. I'm calling each section an IStoryElement, and I'm wondering how best to link these elements up in a way that is easily changed/configured and, at the same time, graphable. I'm going to have an engine/factory assembly which will load the appropriate StoryElement(s) based on various config options. I initially planned to give each StoryElement a NextElement() As IStoryElement property and a Completed() event. When the event fires, the engine reads the NextElement property to find the next StoryElement. The downside to this is that if I ever wanted to graph all the routes through the game, I would be unable to - I couldn't determine all possible targets for each StoryElement. I considered a couple of other solutions, but they all feel a little clunky - e.g. do I need an additional layer of abstraction, i.e. StoryElementPlayers or similar? Each one would be responsible for stringing together multiple StoryElements - perhaps a Series and a ChoicePlayer, each responsible for graphing its own StoryElements - but this just moves the problem up a layer. In short, I need some way of emulating a simple but dynamic workflow (but I'd rather not actually use WF). Is there a pattern for something this simple? All the ones I've managed to find relate to more advanced control flow (parallel processing, etc.).
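    One pattern that keeps the flow graphable - sketched below in F# with made-up section names, purely as a shape rather than a drop-in for the poster's project - is to have each element declare all of its possible successors up front and keep the run-time choice separate. The engine can then enumerate every edge for a diagram without executing any story logic, while the game still decides the actual next section dynamically.

    // Each element declares every element it might hand off to (for graphing)
    // and a separate run-time decision (what the Completed event would carry).
    type StoryElement =
        { Name: string
          PossibleNext: string list
          ChooseNext: unit -> string option }   // None ends the game

    // Hypothetical sections standing in for real game content.
    let ending  = { Name = "Ending";  PossibleNext = [];                     ChooseNext = (fun () -> None) }
    let forest  = { Name = "Forest";  PossibleNext = [ "Ending" ];           ChooseNext = (fun () -> Some "Ending") }
    let village = { Name = "Village"; PossibleNext = [ "Forest"; "Ending" ]; ChooseNext = (fun () -> Some "Forest") }

    let elements = dict [ for e in [ village; forest; ending ] -> e.Name, e ]

    // Graphing: every possible route is known without running the game.
    for e in elements.Values do
        for next in e.PossibleNext do
            printfn "%s -> %s" e.Name next

    // Running: the engine follows ChooseNext until an element returns None.
    let rec play name =
        let e = elements.[name]
        printfn "Playing %s" e.Name
        match e.ChooseNext () with
        | Some next -> play next
        | None -> ()

    play "Village"

    The same idea works with a Completed event that carries the chosen successor; the key point is that the static list of possible targets is part of each element's declaration, so graphing never depends on running the game.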

    Read the article

  • How can I replicate Google Page Speed's lossless image compression as part of my workflow?

    - by Keefer
    I love that Google's Page Speed is able to losslessly compress a lot of my images, but I'd love to make that part of my workflow, prior to uploading a site and making it live. Is there anything I can run locally to give me the same lossless compression? I currently export images using Export for Web in Photoshop, and use a little application called PNGCrusher to reduce the file size of PNGs. I'd love to find a faster way, though, than saving out and replacing the individual images from Page Speed's results.

    Read the article

  • Workflows gradually take longer to load in a workflow designer that is re-hosted in the .NET Framework 2.0

    KB 981145: Workflows gradually take longer to load in a workflow designer that is re-hosted in the .NET Framework 2.0.

    Read the article

  • Creating a new naming context in OUD

    - by Sylvain Duloutre
    A naming context (also known as a directory suffix) is a DN that identifies the top entry in a locally held directory hierarchy. A new naming context can be created using ODSM, the OUD GUI admin console, as described in http://docs.oracle.com/cd/E29407_01/admin.111200/e22648/server_config.htm#CBDGCJGF. It can also be created using the dsconfig command line, as described below. Creation of a new naming context consists of 3 steps.

    First, create a Local Backend Workflow element (myNewDb in this example), responsible for the naming context base DN, e.g. o=example:

    dsconfig create-workflow-element \
      --set base-dn:o=example \
      --set enabled:true \
      --type db-local-backend \
      --element-name myNewDb \
      --hostname <your host> \
      --port <admin port> \
      --bindDN cn=Directory\ Manager \
      --bindPasswordFile ****** \
      --no-prompt

    Second, create a Workflow element (workFlowForMyNewDb in this example) associated with the Local Backend Workflow element. Workflow elements are used to route LDAP requests to the appropriate database, based on the target base DN:

    dsconfig create-workflow \
      --set base-dn:o=example \
      --set enabled:true \
      --set workflow-element:myNewDb \
      --type generic \
      --workflow-name workFlowForMyNewDb \
      --hostname <your host name> \
      --port <admin port> \
      --bindDN cn=Directory\ Manager \
      --bindPasswordFile ****** \
      --no-prompt

    Then, the workflow element must be made visible outside of the directory, i.e. added to the internal "routing table". This is done by adding the workflow to the appropriate Network Group. A Network Group is used to classify incoming client connections and route requests to workflows:

    dsconfig set-network-group-prop \
      --group-name network-group \
      --add workflow:workFlowForMyNewDb \
      --hostname <your hostname> \
      --port <admin port> \
      --bindDN cn=Directory\ Manager \
      --bindPasswordFile ****** \
      --no-prompt

    At that stage, it is possible to import entries into the new naming context o=example.

    Read the article

  • How do you increase the number of processes in parallel with Powershell 3?

    - by Mark Shay
    I am trying to run 20 processes in parallel. I changed the session as below, but am having no luck; I am getting only up to 5 parallel processes per session.

    $wo = New-PSWorkflowExecutionOption -MaxSessionsPerWorkflow 50 -MaxDisconnectedSessions 200 -MaxSessionsPerRemoteNode 50 -MaxActivityProcesses 50
    Register-PSSessionConfiguration -Name ITWorkflows -SessionTypeOption $wo -Force
    Get-PSSessionConfiguration ITWorkflows | Format-List -Property *

    Is there a switch parameter to increase the number of processes? This is what I am running:

    Workflow MyWorkflow1 {
        Parallel {
            InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 2 and 2975416" }
            InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 2975417 and 5950831" }
            InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 5950832 and 8926246" }
            InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 8926247 and 11901661" }
            InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 11901662 and 14877076" }
            InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 14877077 and 17852491" }
            InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 17852492 and 20827906" }
            InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 20827907 and 23803321" }
            InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 23803322 and 26778736" }
            InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 26778737 and 29754151" }
            InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 29754152 and 32729566" }
            InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 32729567 and 35704981" }
            InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 35704982 and 38680396" }
            InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 38680397 and 432472144" }
            InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 432472145 and 435447559" }
            InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 435447560 and 438422974" }
            InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 864944289 and 867919703" }
            InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 867919704 and 870895118" }
            InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 870895119 and 1291465602" }
            InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 1291465603 and 1717986945" }
        }
    }

    Read the article

  • How to manage multiple versions of the same record

    - by Darvis Lombardo
    I am doing short-term contract work for a company that is trying to implement a check-in/check-out type of workflow for their database records. Here's how it should work:

    1) A user creates a new entity within the application. There are about 20 related tables that will be populated in addition to the main entity table.
    2) Once the entity is created, the user will mark it as the master.
    3) Another user can make changes to the master only by "checking out" the entity. Multiple users can check out the entity at the same time.
    4) Once the user has made all the necessary changes to the entity, they put it in a "needs approval" status.
    5) After an authorized user reviews the entity, they can promote it to master, which will put the original record in a tombstoned status.

    The way they are currently accomplishing the "check out" is by duplicating the entity records in all the tables. The primary keys include EntityID + EntityDate, so they duplicate the entity records in all related tables with the same EntityID and an updated EntityDate, and give the copy a status of "checked out". When the record is put into the next state (needs approval), the duplication occurs again. Eventually it will be promoted to master, at which time the final record is marked as master and the original master is marked as dead. This design seems hideous to me, but I understand why they've done it. When someone looks up an entity from within the application, they need to see all current versions of that entity, and this was a very straightforward way of making that happen. But the fact that they are representing the same entity multiple times within the same table(s) doesn't sit well with me, nor does the fact that they are duplicating EVERY piece of data rather than only storing deltas. I would be interested in hearing your reaction to the design, whether positive or negative. I would also be grateful for any resources you can point me to that might be useful for seeing how someone else has implemented such a mechanism. Thanks! Darvis

    Read the article

  • NServiceBus pipeline with Distributors

    - by David
    I'm building a processing pipeline with NServiceBus, but I'm having trouble with the configuration of the distributors in order to make each step in the process scalable. Here's some info:

    The pipeline will have a master process that says "OK, time to start" for a WorkItem, which will then start a process like a flowchart. Each step in the flowchart may be computationally expensive, so I want the ability to scale out each step; this tells me that each step needs a Distributor. I want to be able to hook additional activities onto events later; this tells me I need to Publish() messages when a step is done, not Send() them. A process may need to branch based on a condition; this tells me that a process must be able to publish more than one type of message. A process may need to join forks; I imagine I should use Sagas for this. Hopefully these assumptions are good, otherwise I'm in more trouble than I thought.

    For the sake of simplicity, let's forget about forking or joining and consider a simple pipeline, with Step A followed by Step B, and ending with Step C. Each step gets its own distributor and can have many nodes processing messages. NodeA workers contain an IHandleMessages processor and publish EventA. NodeB workers contain an IHandleMessages processor and publish EventB. NodeC workers contain an IHandleMessages processor, and then the pipeline is complete. Here are the relevant parts of the config files, where # denotes the number of the worker (i.e. there are input queues NodeA.1 and NodeA.2):

    NodeA:
    <MsmqTransportConfig InputQueue="NodeA.#" ErrorQueue="error" NumberOfWorkerThreads="1" MaxRetries="5" />
    <UnicastBusConfig DistributorControlAddress="NodeA.Distrib.Control" DistributorDataAddress="NodeA.Distrib.Data">
      <MessageEndpointMappings>
      </MessageEndpointMappings>
    </UnicastBusConfig>

    NodeB:
    <MsmqTransportConfig InputQueue="NodeB.#" ErrorQueue="error" NumberOfWorkerThreads="1" MaxRetries="5" />
    <UnicastBusConfig DistributorControlAddress="NodeB.Distrib.Control" DistributorDataAddress="NodeB.Distrib.Data">
      <MessageEndpointMappings>
        <add Messages="Messages.EventA, Messages" Endpoint="NodeA.Distrib.Data" />
      </MessageEndpointMappings>
    </UnicastBusConfig>

    NodeC:
    <MsmqTransportConfig InputQueue="NodeC.#" ErrorQueue="error" NumberOfWorkerThreads="1" MaxRetries="5" />
    <UnicastBusConfig DistributorControlAddress="NodeC.Distrib.Control" DistributorDataAddress="NodeC.Distrib.Data">
      <MessageEndpointMappings>
        <add Messages="Messages.EventB, Messages" Endpoint="NodeB.Distrib.Data" />
      </MessageEndpointMappings>
    </UnicastBusConfig>

    And here are the relevant parts of the distributor configs:

    Distributor A:
    <add key="DataInputQueue" value="NodeA.Distrib.Data"/>
    <add key="ControlInputQueue" value="NodeA.Distrib.Control"/>
    <add key="StorageQueue" value="NodeA.Distrib.Storage"/>

    Distributor B:
    <add key="DataInputQueue" value="NodeB.Distrib.Data"/>
    <add key="ControlInputQueue" value="NodeB.Distrib.Control"/>
    <add key="StorageQueue" value="NodeB.Distrib.Storage"/>

    Distributor C:
    <add key="DataInputQueue" value="NodeC.Distrib.Data"/>
    <add key="ControlInputQueue" value="NodeC.Distrib.Control"/>
    <add key="StorageQueue" value="NodeC.Distrib.Storage"/>

    I'm testing using 2 instances of each node, and the problem seems to come up in the middle at Node B. There are basically 2 things that might happen:

    1) Both instances of Node B report that they are subscribing to EventA, and also that NodeC.Distrib.Data@MYCOMPUTER is subscribing to the EventB that Node B publishes. In this case, everything works great.
    2) Both instances of Node B report that they are subscribing to EventA; however, one worker says NodeC.Distrib.Data@MYCOMPUTER is subscribing TWICE, while the other worker does not mention it. In this second case, which seems to be controlled only by the way the distributor routes the subscription messages, if the "overachiever" node processes an EventA, all is well; if the "underachiever" processes EventA, then the publish of EventB has no subscribers and the workflow dies.

    So, my questions: Is this kind of setup possible? Is the configuration correct? It's hard to find any examples of configuration with distributors beyond a simple one-level publisher/2-worker setup. Would it make more sense to have one central broker process that does all the non-computationally-intensive traffic-cop operations, and only sends messages to processes behind distributors when the task is long-running and must be load balanced? Then the load-balanced nodes could simply reply back to the central broker, which seems easier. On the other hand, that seems at odds with the decentralization that is NServiceBus's strength. And if this is the answer, and the long-running process's done event is a reply, how do you keep the Publish that enables later extensibility on published events?

    Read the article

  • Github Workflow: Pushing small fix branches to remote, or keep them local?

    - by Isaac Hodes
    In Scott Chacon's workflow (explained e.g. in this SO answer), with essentially two silos (development and master): if, say, I have a small bug to fix (e.g. one that can be fixed with a few characters), which is the optimal way of doing that? a) Branch a branch called e.g. fix_123 off of development, and push this branch to origin as I work on it; when it's done, code-reviewed, whatever, merge it into development and push development to origin. b) Same as above, but without pushing fix_123 to origin.

    Read the article

  • What diagrams, other than the class diagram and the workflow diagram, are useful for explaining how an application works?

    - by Goran_Mandic
    I am working on a small Delphi project composed of two units. One unit is for the GUI, and the other is for data management, file parsing, list iterating and so on. I've already made a class diagram, and my workflow diagram looks like hell - it's too complex for anyone to read. I've considered making a dataflow diagram, but it would be even more complex. A use case diagram wouldn't be of use either. Am I missing some diagram type which could somehow represent the relationship between my two units?

    Read the article

  • How to improve workflow between developer and designer with Expression Blend?

    - by Amenti
    We use WPF and Expression Blend 4. I'm trying to improve our workflow by tutoring one of our designers to use Blend for styling and animation. Slowly but surely I get the impression that Blend in itself is too technical for the designer in question. I myself use it only occasionally (it's great for Visual States, for instance) because a lot of things are easier done in code or not possible at all in Blend alone. It seems a developer with design experience is a lot more productive with it than a sole designer. Are there any good online resources, or advice you could give me, on how to improve this situation?

    Read the article

  • Optimized workflow through a new geo data warehouse: a successful project for LINZ AG – by CISS TDI and Primebird

    - by A&C Redaktion
    Satisfied customers are the best marketing strategy. That is why we offer specialized partners the opportunity to have professional case studies written about their own successful Oracle projects. Here on the blog we present, in a loose series, reference reports that partners are already using successfully in their marketing. Today: the Oracle partners CISS TDI and PRIMEBIRD and their joint large-scale project for LINZ AG. As an energy utility, Austria's LINZ AG is responsible for, among other things, electricity, gas and district heating, the water and sewer system, and local public transport. For years it has been using Oracle solutions to manage its constantly growing stock of geodata. In 2012, the Oracle partners CISS TDI and PRIMEBIRD expanded the existing Oracle solution into a "geo data warehouse" and optimized the data model for the online plan information service (Internet-Planauskunft). The new data warehouse presents LINZ AG's geographic data in a uniform structure and thus enables a significant workflow optimization. The benefits: administrative effort has been reduced, the data-collection process has been unified, and the necessary data exports, for example to property developers or the municipality, run smoothly with the new web application. Details on the exact course of the project, the specific requirements of geodata, and the cooperation between LINZ AG, CISS TDI and PRIMEBIRD can be found here in the LINZ AG case study. All specialized partners who have completed a representative Oracle project can use this opportunity to present themselves and their work profitably. Experienced trade journalists interview both the partner and the end customer and produce a detailed, attractively prepared report. Publication takes place through various marketing channels. The partners can of course also use the case studies for their own marketing purposes, e.g. for events. Interested? Then please contact Marion Aschenbrenner. We need a few key details from you, such as the customer name, contact person and Oracle products used, a description of the project in 3-4 sentences, and your contact person in-house. And then: let your good work speak for itself!

    Read the article

  • How should I track approval workflow when users at every security level can create a request?

    - by Eric Belair
    I am writing a new application that allows users to enter requests. Once a request is entered, it must follow an approval workflow to be finally approved by a user at the highest security level. So, let's say a user at Security Level 1 enters a request. This request must be approved by his superior - a user at Security Level 2. Once the Security Level 2 user approves it, it must be approved by a user at Security Level 3. Once the Security Level 3 user approves it, it is considered fully approved. However, users at any of the three security levels can enter requests. So, if a Security Level 3 user enters a request, it is automatically considered "fully approved". And if a Security Level 2 user enters a request, it must only be approved by a Security Level 3 user. I'm currently storing each approval status in a database log table, like so:

    STATUS_ID (PK)  REQUEST_ID  STATUS           STATUS_DATE
    --------------  ----------  ---------------  -----------------------
    1               1           USER_SUBMIT      2012-09-01 00:00:00.000
    2               1           APPROVED_LEVEL2  2012-09-01 01:00:00.000
    3               1           APPROVED_LEVEL3  2012-09-01 02:00:00.000
    4               2           USER_SUBMIT      2012-09-01 02:30:00.000
    5               2           APPROVED_LEVEL2  2012-09-01 02:45:00.000

    My question is, which is the better design: 1) record all three statuses for every request, or 2) record only the statuses needed according to the security level of the user submitting the request. In Case 2, the data might look like this for two requests - one submitted by a Security Level 2 user and another submitted by a Security Level 3 user:

    STATUS_ID (PK)  REQUEST_ID  STATUS           STATUS_DATE
    --------------  ----------  ---------------  -----------------------
    1               3           APPROVED_LEVEL2  2012-09-01 01:00:00.000
    2               3           APPROVED_LEVEL3  2012-09-01 02:00:00.000
    3               4           APPROVED_LEVEL3  2012-09-01 02:00:00.000
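    To make the comparison concrete, here is a small F# sketch of the Case 2 rule as the two tables above suggest it (the names are illustrative, not the application's actual schema): the submitting user's own entry is the first status recorded, and one approval is recorded for each security level above the submitter.

    // Statuses recorded for a request under Case 2, given the submitter's
    // security level (1, 2 or 3). One reading of the tables: submission by
    // a level 2 or 3 user counts as that level's approval, so only higher
    // levels still have to approve.
    let statusesFor (submitterLevel: int) =
        let submitted =
            if submitterLevel = 1 then "USER_SUBMIT"
            else sprintf "APPROVED_LEVEL%d" submitterLevel
        submitted :: [ for level in submitterLevel + 1 .. 3 -> sprintf "APPROVED_LEVEL%d" level ]

    statusesFor 1 |> printfn "%A"   // ["USER_SUBMIT"; "APPROVED_LEVEL2"; "APPROVED_LEVEL3"]
    statusesFor 2 |> printfn "%A"   // ["APPROVED_LEVEL2"; "APPROVED_LEVEL3"]
    statusesFor 3 |> printfn "%A"   // ["APPROVED_LEVEL3"]

    Under Case 1 the same request would always carry all three statuses regardless of the submitter's level, which is exactly the trade-off the question is weighing.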

    Read the article
