Search Results

Search found 29203 results on 1169 pages for 'state machine workflow'.


  • Brand powder packing machine price: marketplace boom, uniform wear and a long service life

    - by user74606
    In mining and stone crushing, our machinery company's experience shows clearly. With a production capacity of 600~800 t/h, the mining stone crusher (Mobile Cone Crushing Plant) delivers a crushing ratio of 25~40, effectively solving the low yields and maintenance problems of the original mining crusher operation, and it handles the full run of mined stone. The maximum feed size for crushing is 1000x1200 mm, an effective answer to the original supply of mined stone, since storing large chunks of stone was a problem mines could not work around. The finished product is small, only 2~15 mm, an effective solution for the original stone size, which used to block the chutes ahead of the grinding machine. Two kinds of material can be mixed with good uniformity, and the amount of mined stone added for desulfurization by weight can reach 60%, effectively reducing the cost of raw materials. Electrical energy consumption has also fallen, by 1~2 kWh per ton of mined stone, an annual electricity saving of 100,000 yuan. Workers' labor intensity and environment have improved: because the stone powder packing machine has a high degree of automation, with no human contact with the materials, working conditions have improved significantly. Beyond its advantages for mined stone crushing, the CS series cone crusher has the following performance characteristics. The CS series crushing chamber comes in three different designs, which the user can choose according to the situation on site; crushing efficiency is high, product size and grain shape are uniform, and wear on the rolling mortar wall is even, giving the crushing cavity a long service life. The CS series cone crusher uses a unique dust-proof seal that is reliable and effectively extends the lubricant replacement cycle and the life of parts. Key components of the CS series (as with the spiral sand washer) are made from specially selected materials. Each stroke of the crushing cone toward the rolling mortar wall lets more material into the crushing cavity, forming a large discharge volume and speeding materials through the crushing chamber. This machine uses a full-chamber, laminated crushing principle with particle-on-particle fragmentation, so the proportion of cubical pieces in the finished product is greatly improved and the share of needle-shaped stones is reduced.

    Read the article

  • tradeoffs of iSCSI vs. AFP when using Time Machine with a NAS?

    - by Ajit George
    I'm setting up a home NAS device (Synology DS409) that I'm planning to use for Time Machine backups (amongst other things). What are the tradeoffs between using iSCSI or AFP to mount the backup volume? The Synology wiki suggests that iSCSI is better if the Mac will be frequently disconnected from the network or sleeping, from the point of view of the volume automatically remounting. What about filesystem consistency? Given that unplugging a USB drive without properly unmounting it often requires the Time Machine volume to be repaired, would iSCSI have the same issues?

    Read the article

  • Is there a Time Machine equivalent for Windows that can back up network files?

    - by Jim Thio
    This question is similar to Does an equivalent of Time Machine exist for Windows?, with one difference: the files I want to back up are on a network drive. The computer serving that network drive is running Windows XP; I want to back up the data on Windows 7. How would I do so? I'd like something similar to Mac OS X's Time Machine: a copy of the data every hour, day, and week, with automatic thinning as the data ages. For example, the last day is kept as hourly snapshots, the last week as daily snapshots, and the last month as weekly snapshots. How can I achieve this?
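    Whatever tool ends up doing the copying, the thinning policy itself is easy to state in code. Here is a minimal, hypothetical C# sketch of just the retention rule described above (hourly for the last day, daily for the last week, weekly for the last month); everything in it is illustrative, and it only decides which snapshot timestamps to delete:

    using System;
    using System.Collections.Generic;
    using System.Globalization;
    using System.Linq;

    static class SnapshotPruner
    {
        // Returns the snapshots that the thinning policy no longer needs.
        public static List<DateTime> SnapshotsToDelete(List<DateTime> snapshots, DateTime now)
        {
            var keep = new HashSet<DateTime>();
            foreach (var bucket in snapshots.GroupBy(s => BucketFor(s, now)))
                if (bucket.Key != null)
                    keep.Add(bucket.Max()); // keep only the newest snapshot per bucket
            return snapshots.Where(s => !keep.Contains(s)).ToList();
        }

        // One bucket per hour for the last day, per day for the last week,
        // per week for the last month; anything older gets no bucket at all.
        static string BucketFor(DateTime s, DateTime now)
        {
            TimeSpan age = now - s;
            if (age < TimeSpan.FromDays(1)) return "h" + s.ToString("yyyyMMddHH");
            if (age < TimeSpan.FromDays(7)) return "d" + s.ToString("yyyyMMdd");
            if (age < TimeSpan.FromDays(30))
            {
                int week = CultureInfo.InvariantCulture.Calendar.GetWeekOfYear(
                    s, CalendarWeekRule.FirstDay, DayOfWeek.Monday);
                return "w" + s.Year + "-" + week;
            }
            return null; // too old for any tier: a deletion candidate
        }
    }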

    Read the article

  • SQL SERVER – Database in RESTORING State for Long Time

    - by Pinal Dave
    A very interesting question I received the other day: "Our database has been in the restoring state for a long time. We have already restored all the necessary files there. After restoring the files we expected the database to be operational, however, it is continuously in restoring mode. Any suggestions?" The question is very common. I sent the user follow-up emails to understand what was actually going on. I realized that after restoring their bak files and log files, their database was in the restoring state because they had not restored the last log file with the RECOVERY option. As they had completed the whole restore sequence (bak and logs, in order), all they really needed was to bring the database out of the NORECOVERY state. You can keep restoring log files as long as the database is in NORECOVERY mode. Once the database is recovered, it becomes operational and normal database activity can continue, but no further logs can be restored: the log chain after the point of recovery is meaningless. This is why the database has to stay in the NORECOVERY state while it is being restored. There are three different ways to recover the database. 1) Recover the database manually with the following command: RESTORE DATABASE database_name WITH RECOVERY 2) Recover the database while restoring the last log file: RESTORE LOG database_name FROM backup_device WITH RECOVERY 3) Recover the database while restoring the bak file: RESTORE DATABASE database_name FROM backup_device WITH RECOVERY To understand how the backup restore timeline works, read Backup Timeline and Understanding of Database Restore Process in Full Recovery Model. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Building Simple Workflows in Oozie

    - by dan.mcclary
    Introduction More often than not, data doesn't come packaged exactly as we'd like it for analysis. Transformation, match-merge operations, and a host of data munging tasks are usually needed before we can extract insights from our Big Data sources. Few people find data munging exciting, but it has to be done. Once we've suffered that boredom, we should take steps to automate the process. We want to codify our work into repeatable units and create workflows which we can leverage over and over again without having to write new code. In this article, we'll look at how to use Oozie to create a workflow for the parallel machine learning task I described on Cloudera's site. Hive Actions: Prepping for Pig In my parallel machine learning article, I use data from the National Climatic Data Center to build weather models on a state-by-state basis. NCDC makes the data freely available as gzipped files of day-over-day observations stretching from the 1930s to today. In reading that post, one might get the impression that the data came in handy, ready-to-model files with convenient delimiters. The truth of it is that I need to perform some parsing and projection on the dataset before it can be modeled. If I get more observations, I'll want to retrain and test those models, which will require more parsing and projection. This is a good opportunity to start building up a workflow with Oozie. I store the data from the NCDC in HDFS and create an external Hive table partitioned by year. This gives me the flexibility of Hive's query language when I want it, but lets me put the dataset in a directory of my choosing in case I want to treat the same data with Pig or MapReduce code. CREATE EXTERNAL TABLE IF NOT EXISTS historic_weather(column1, column2) PARTITIONED BY (yr string) STORED AS ... LOCATION '/user/oracle/weather/historic'; As new weather data comes in from NCDC, I'll need to add partitions to my table. That's an action I should put in the workflow. Similarly, the weather data requires parsing in order to be useful as a set of columns. Because of their long history, the weather data is broken up into fields of specific byte lengths: x bytes for the station ID, y bytes for the dew point, and so on. The delimiting is consistent from year to year, so writing a SerDe or a parser for transformation is simple. Once that's done, I want to select columns on which to train, classify certain features, and place the training data in an HDFS directory for my Pig script to access. ALTER TABLE historic_weather ADD IF NOT EXISTS PARTITION (yr='2011') LOCATION '/user/oracle/weather/historic/yr=2011'; INSERT OVERWRITE DIRECTORY '/user/oracle/weather/cleaned_history' SELECT w.stn, w.wban, w.weather_year, w.weather_month, w.weather_day, w.temp, w.dewp, w.weather FROM ( FROM historic_weather SELECT TRANSFORM(...) USING '/path/to/hive/filters/ncdc_parser.py' as stn, wban, weather_year, weather_month, weather_day, temp, dewp, weather ) w; Since I'm going to prepare training directories with at least the same frequency that I add partitions, I should also add that to my workflow. Oozie is going to invoke these Hive actions using what's somewhat obviously referred to as a Hive action. Hive actions amount to Oozie running a script file containing our query language statements, so we can place them in a file called weather_train.hql. Starting Our Workflow Oozie offers two types of jobs: workflows and coordinator jobs. Workflows are straightforward: they define a set of actions to perform as a sequence or directed acyclic graph.
Coordinator jobs can take all the same actions of Workflow jobs, but they can be automatically started either periodically or when new data arrives in a specified location. To keep things simple we'll make a workflow job; coordinator jobs simply require another XML file for scheduling. The bare minimum for workflow XML defines a name, a starting point, and an end point: <workflow-app name="WeatherMan" xmlns="uri:oozie:workflow:0.1"> <start to="ParseNCDCData"/> <end name="end"/> </workflow-app> To this we need to add an action, and within that we'll specify the Hive parameters. Also, keep in mind that actions require <ok> and <error> tags to direct the next action on success or failure. <action name="ParseNCDCData"> <hive xmlns="uri:oozie:hive-action:0.2"> <job-tracker>localhost:8021</job-tracker> <name-node>localhost:8020</name-node> <configuration> <property> <name>oozie.hive.defaults</name> <value>/user/oracle/weather_ooze/hive-default.xml</value> </property> </configuration> <script>ncdc_parse.hql</script> </hive> <ok to="WeatherMan"/> <error to="end"/> </action> There are a couple of things to note here: I have to give the FQDN (or IP) and port of my JobTracker and NameNode. I have to include a hive-default.xml file. I have to include a script file. The hive-default.xml and script file must be stored in HDFS. That last point is particularly important. Oozie doesn't make assumptions about where a given workflow is being run. You might submit workflows against different clusters, or have different hive-default.xml files on different clusters (e.g. MySQL or Postgres-backed metastores). A quick way to ensure that all the assets end up in the right place in HDFS is just to make a working directory locally, build your workflow.xml in it, and copy the assets you'll need to it as you add actions to workflow.xml. At this point, our local directory should contain: workflow.xml hive-default.xml (make sure this file contains your metastore connection data) ncdc_parse.hql Adding Pig to the Ooze Adding our Pig script as an action is slightly simpler from an XML standpoint. All we do is add an action to workflow.xml as follows: <action name="WeatherMan"> <pig> <job-tracker>localhost:8021</job-tracker> <name-node>localhost:8020</name-node> <script>weather_train.pig</script> </pig> <ok to="end"/> <error to="end"/> </action> Once we've done this, we'll copy weather_train.pig to our working directory. However, there's a bit of a "gotcha" here. My Pig script registers the Weka jar and a chunk of Jython. If those aren't also in HDFS, our action will fail from the outset -- but where do we put them? The Jython script goes into the working directory at the same level as the Pig script, because Pig attempts to load Jython files in the directory from which the script executes. However, that's not where our Weka jar goes. While Oozie doesn't assume much, it does make an assumption about the Pig classpath. Anything under working_directory/lib gets automatically added to the Pig classpath and no longer requires a REGISTER statement in the script. Anything that uses a REGISTER statement cannot be in the working_directory/lib directory. Instead, it needs to be in a different HDFS directory and attached to the Pig action with an <archive> tag. Yes, that's as confusing as you think it is. You can get the exact rules for adding jars to the distributed cache from Oozie's Pig Cookbook. Making the Workflow Work We've got a workflow defined and have collected all the components we'll need to run.
But we can't run anything yet, because we still have to define some properties about the job and submit it to Oozie. We need to start with the job properties, as this is essentially the "request" we'll submit to the Oozie server. In the same working directory, we'll make a file called job.properties as follows: nameNode=hdfs://localhost:8020 jobTracker=localhost:8021 queueName=default weatherRoot=weather_ooze mapreduce.jobtracker.kerberos.principal=foo dfs.namenode.kerberos.principal=foo oozie.libpath=${nameNode}/user/oozie/share/lib oozie.wf.application.path=${nameNode}/user/${user.name}/${weatherRoot} outputDir=weather-ooze While some of the pieces of the properties file are familiar (e.g., the JobTracker address), others take a bit of explaining. The first is weatherRoot: this is essentially an environment variable for the script (as are jobTracker and queueName). We're simply using them to simplify the directives for the Oozie job. The oozie.libpath piece is extremely important. This is a directory in HDFS which holds Oozie's shared libraries: a collection of jars necessary for invoking Hive, Pig, and other actions. It's a good idea to make sure this has been installed and copied up to HDFS. The last two lines are straightforward: run the application defined by workflow.xml at the application path listed and write the output to the output directory. We're finally ready to submit our job! After all that work we only need to do a few more things: Validate our workflow.xml Copy our working directory to HDFS Submit our job to the Oozie server Run our workflow Let's do them in order. First validate the workflow: oozie validate workflow.xml Next, copy the working directory up to HDFS: hadoop fs -put working_dir /user/oracle/working_dir Now we submit the job to the Oozie server. We need to ensure that we've got the correct URL for the Oozie server, and we need to specify our job.properties file as an argument. oozie job -oozie http://url.to.oozie.server:port_number/ -config /path/to/working_dir/job.properties -submit We've submitted the job, but we don't see any activity on the JobTracker. All we got was this funny bit of output: 14-20120525161321-oozie-oracle This is because submitting a job to Oozie creates an entry for the job and places it in PREP status. What we got back, in essence, is a ticket for our workflow to ride the Oozie train. We're responsible for redeeming our ticket and running the job. oozie job -oozie http://url.to.oozie.server:port_number/ -start 14-20120525161321-oozie-oracle Of course, if we really want to run the job from the outset, we can change the "-submit" argument above to "-run". This will prep and run the workflow immediately. Takeaway So, there you have it: the somewhat laborious process of building an Oozie workflow. It's a bit tedious the first time out, but it does present a pair of real benefits to those of us who spend a great deal of time data munging. First, when new data arrives that requires the same processing, we already have the workflow defined and ready to run. Second, as we build up a set of useful action definitions over time, creating new workflows becomes quicker and quicker.

    Read the article

  • The new free Oracle Data Miner 11gR2 with graphical workflow is available to try

    - by Fekete Zoltán
    Oracle Data Mining technology information page. Oracle Data Miner 11g Release 2 - Early Adopter page. The new graphical interface for Oracle Data Mining, Oracle Data Miner 11gR2, has been released and can be downloaded and tried out. To get Oracle Data Miner we simply download SQL Developer, since the data mining interface is launched from it. Oracle Data Mining is the data mining engine embedded in the Oracle database, an option of Oracle Database Enterprise Edition. Data mining is a sophisticated tool and process for analyzing data warehouses. Oracle Data Mining offers the advantages of in-database mining: - no unnecessary data movement; the data stays in the database throughout the entire data mining process - the data mining models also live in the Oracle database - the data mining results, cluster data, decisions, probabilities, etc. are likewise produced in the database and can be analyzed there directly. The new free Data Miner interface has been "greatly enriched" compared to the previous version: - graphical data mining workflow editing and execution has arrived! - it is still free - the interface has been extended - new analysis capabilities have been added - it can be launched from SQL Developer 3.0, which makes it easy to invoke the data mining functions from the database when we don't want to use the graphical interface for that. The free Data Miner interface is available as an extension of Oracle SQL Developer, so analysts can work directly with the data in the database and with the Data Miner GUI as well: they can build, evaluate and run models, make predictions and analyses, with support for implementing the data mining methodology. The earlier Oracle Data Miner interface now runs under the name Data Miner Classic and can still be downloaded from OTN. A screenshot from the new Data Miner GUI: What kinds of tasks does Oracle Data Mining address: - predicting customer behavior - effectively targeting the "best" customers - customer retention and churn management - finding and examining customer segments, clusters and profiles - detecting anomalies and fraud - etc.

    Read the article

  • Creating packages in code - Workflow

    This is just a quick one prompted by a question on the SSIS Forum: how to programmatically add a precedence constraint (aka workflow) between two tasks. To keep the code simple I've actually used two Sequence containers, which are often used as anchor points for a constraint. Very often this is when you have a task that you wish to conditionally execute based on an expression. If it's the first or only task in the package, you need somewhere to anchor the constraint to, so that you can then set the expression on it and control the flow of execution. Anyway, back to my code sample; here's a quick screenshot of the finished article: Now for the code, which is actually pretty simple, and hopefully the comments explain exactly what is going on. Package package = new Package(); package.Name = "SequenceWorkflow"; // Add the two sequence containers to provide anchor points for the constraint // If you use tasks, it follows exactly the same pattern, they all derive from Executable Sequence sequence1 = package.Executables.Add("STOCK:Sequence") as Sequence; sequence1.Name = "SEQ Start"; Sequence sequence2 = package.Executables.Add("STOCK:Sequence") as Sequence; sequence2.Name = "SEQ End"; // Add the precedence constraint, here we use the package's constraint collection // as it hosts the two objects we want to constrain (link) // The default constraint is a basic On Success constraint just like in the designer PrecedenceConstraint constraint = package.PrecedenceConstraints.Add(sequence1, sequence2); // Change the settings to use a (dummy) expression only constraint.EvalOp = DTSPrecedenceEvalOp.Expression; constraint.Expression = "1 == 1"; The complete code file is available to download below. SequenceWorkflow.cs

    Read the article

  • Session state provider and atomic operations

    - by vtortola
    Hi, I've been thinking about this and it is blowing my mind... How does a session state provider properly work internally? I mean, I tried to write a custom session state provider based on Azure Tables or Blobs, but I quickly realized that because there is no way to ensure an atomic operation or establish a lock, race conditions are likely to happen when several web servers operate on that shared information. I know that there is a SQL Server session state provider and people are happy with it, so I guess that it's using some kind of transaction isolation level to accomplish some degree of concurrency safety, like checking whether the data is locked (a simple column), locking it if not, and returning the data in one atomic operation. But is that so? What happens if the data is locked? Does it return an error? Block the call for a while? Return it in a read-only fashion? Cloud computing paradigms may be somewhat new, but web farms have been here for a while, and I'm pretty new to all this... do you recommend any good reading on the topic? Thanks.
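    For what it's worth, the ASP.NET provider contract answers the "what happens if the data is locked?" part directly: GetItemExclusive returns no data plus locked=true and the lock's age, and ASP.NET then polls the store (roughly every half second) until the lock is released or the request times out; read-only requests go through GetItem instead. The SQL provider gets its atomicity by doing the check-and-lock in a single T-SQL statement. Below is a hedged C# sketch of just that "lock it if free, otherwise report who holds it" semantic, with a ConcurrentDictionary standing in for the shared store (in a real web farm the store itself, not the process, must supply the atomicity):

    using System;
    using System.Collections.Concurrent;

    class SessionLockDemo
    {
        class Entry { public bool Locked; public DateTime LockedAt; public Guid LockId; public string Blob = ""; }

        static readonly ConcurrentDictionary<string, Entry> store =
            new ConcurrentDictionary<string, Entry>();

        // Atomic check-and-set: lock and return the data if free; otherwise
        // return false plus lock metadata so the caller can poll and retry.
        static bool TryLockAndRead(string id, out string blob, out TimeSpan lockAge, out Guid lockId)
        {
            Entry e = store.GetOrAdd(id, _ => new Entry());
            lock (e) // stands in for the shared store's atomicity guarantee
            {
                if (e.Locked)
                {
                    blob = null; lockAge = DateTime.UtcNow - e.LockedAt; lockId = e.LockId;
                    return false;
                }
                e.Locked = true; e.LockedAt = DateTime.UtcNow; e.LockId = Guid.NewGuid();
                blob = e.Blob; lockAge = TimeSpan.Zero; lockId = e.LockId;
                return true;
            }
        }

        static void Main()
        {
            string blob; TimeSpan age; Guid id;
            Console.WriteLine(TryLockAndRead("user1", out blob, out age, out id)); // True: lock acquired
            Console.WriteLine(TryLockAndRead("user1", out blob, out age, out id)); // False: locked, poll later
        }
    }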

    Read the article

  • What does your Lisp workflow look like?

    - by Duncan Bayne
    I'm learning Lisp at the moment, coming from a language progression that is Locomotive BASIC - Z80 Assembler - Pascal - C - Perl - C# - Ruby. My approach is to simultaneously: write a simple web-scraper using SBCL, QuickLisp, closure-html, and drakma watch the SICP lectures I think this is working well; I'm developing good 'Lisp goggles', in that I can now read Lisp reasonably easily. I'm also getting a feel for how the Lisp ecosystem works, e.g. Quicklisp for dependencies. What I'm really missing, though, is a sense of how a seasoned Lisper actually works. When I'm coding for .NET, I have Visual Studio set up with ReSharper and VisualSVN. I write tests, I implement, I refactor, I commit. Then when I'm done enough of that to complete a story, I write some AUATs. Then I kick off a Release build on TeamCity to push the new functionality out to the customer for testing & hopefully approval. If it's an app that needs an installer, I use either WiX or InnoSetup, obviously building the installer through the CI system. So, my question is: as an experienced Lisper, what does your workflow look like? Do you work mostly in the REPL, or in the editor? How do you do unit tests? Continuous integration? Packaging & deployment? When you sit down at your desk, steaming mug of coffee to one side and a framed photo of John McCarthy to the other, what is it that you do? Currently, I feel like I am getting to grips with Lisp coding, but not Lisp development ...

    Read the article

  • Help us with our git workflow

    - by Brandon Cordell
    We have a web application that gets deployed to multiple regions around our state, with an instance of the application for each region. We maintain a staging and a production (master) branch in our repository, but we were wondering what the best way of maintaining each instance's codebase is. The application is similar at the core, but we have to give each region the ability to make specific requests that may not make it into the core of the application. Right now we have branches for each region, like region_one_staging and region_one_production. At the rate we're growing we'll have hundreds of branches within the next few years. Is there a better way to do this?

    Read the article

  • How to get this Qt state machine to work?

    - by Ton van den Heuvel
    I have two widgets (wa and wb below) that can be checked, and a numeric entry field that should contain a value greater than zero. Whenever both widgets have been checked and the numeric entry field contains a value greater than zero, a button should be enabled. I am struggling with defining a proper state machine for this situation. So far I have the following: QStateMachine *machine = new QStateMachine(this); QState *buttonDisabled = new QState(QState::ParallelStates); buttonDisabled->assignProperty(ui_->button, "enabled", false); QState *a = new QState(buttonDisabled); QState *aUnchecked = new QState(a); QFinalState *aChecked = new QFinalState(a); aUnchecked->addTransition(wa, SIGNAL(checked()), aChecked); a->setInitialState(aUnchecked); QState *b = new QState(buttonDisabled); QState *bUnchecked = new QState(b); QFinalState *bChecked = new QFinalState(b); bUnchecked->addTransition(wb, SIGNAL(checked()), bChecked); b->setInitialState(bUnchecked); QState *weight = new QState(buttonDisabled); QState *weightZero = new QState(weight); QFinalState *weightGreaterThanZero = new QFinalState(weight); weightZero->addTransition(this, SIGNAL(validWeight()), weightGreaterThanZero); weight->setInitialState(weightZero); QState *buttonEnabled = new QState(); buttonEnabled->assignProperty(ui_->button, "enabled", true); buttonDisabled->addTransition(buttonDisabled, SIGNAL(finished()), buttonEnabled); buttonEnabled->addTransition(this, SIGNAL(invalidWeight()), weightZero); machine->addState(buttonDisabled); machine->addState(buttonEnabled); machine->setInitialState(buttonDisabled); machine->start(); The problem here is that the following transition: buttonEnabled->addTransition(this, SIGNAL(invalidWeight()), weightZero); causes all the child states of the buttonDisabled state to be reverted to their initial states. This is unwanted behaviour: I want the a and b states to remain as they are. How do I prevent this?

    Read the article

  • How to update the original InfoPath form from within Workflow?

    - by Allen.Zhang
    Hi All, I have created an InfoPath form (e.g. Form_ExpenseReport) for collecting data from end users, and a number of task forms (also InfoPath forms, e.g. TaskForm_1, TaskForm_2) for my state machine workflow to use. The users want to see all the comments from the task forms (TaskForm_1 & TaskForm_2) in the original InfoPath form (Form_ExpenseReport). How can I update the first form from within the workflow? Can anybody give me some tips? My environment: MOSS 2007 Enterprise license, VS 2008
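    The originating item is reachable from the workflow through SPWorkflowActivationProperties.Item, so one common approach is to write the task comments back to it whenever a task completes. A hedged C# sketch follows; the field name is invented, and for a form library you would really be rewriting the form's underlying XML (loading the item's file and updating the matching nodes) rather than setting a column:

    using Microsoft.SharePoint;
    using Microsoft.SharePoint.Workflow;

    public static class CommentsHelper
    {
        // Append one task's comments to a (hypothetical) field on the item
        // the workflow was started on, so the original form can show them.
        public static void AppendTaskComments(
            SPWorkflowActivationProperties workflowProperties, string taskComments)
        {
            SPListItem original = workflowProperties.Item; // Form_ExpenseReport's item
            string existing = original["WorkflowComments"] as string ?? "";
            original["WorkflowComments"] = existing + "\n" + taskComments;
            original.Update();
        }
    }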

    Read the article

  • Do MOSS 2007 workflows support calling external methods?

    - by Mina Samy
    Hi all, I have a custom SharePoint workflow that needs to call an external method defined in a local service, and it always throws an exception: System.InvalidOperationException: Could not find service of type 'ListItemCheckService.IListItemCheck' through the currently configured services. Consider adding the service to ExternalDataExchangeService. at System.Workflow.Activities.CallExternalMethodActivity.Execute(ActivityExecutionContext executionContext) at System.Workflow.ComponentModel.ActivityExecutor`1.Execute(T activity, ActivityExecutionContext executionContext) at System.Workflow.ComponentModel.ActivityExecutor`1.Execute(Activity activity, ActivityExecutionContext executionContext) at System.Workflow.ComponentModel.ActivityExecutorOperation.Run(IWorkflowCoreRuntime workflowCoreRuntime) at System.Workflow.Runtime.Scheduler.Run() The question is: does the SharePoint workflow system support calling external methods from a local service? Thanks
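    It does, but the service has to be registered where SharePoint's workflow runtime can find it; the exception above is exactly what you get when it isn't. A hedged sketch of the moving parts, assuming WSS 3.0 SP1 or later (which added pluggable workflow services): mark the contract with [ExternalDataExchange], derive the service from SPWorkflowExternalDataExchangeService, and register the class in web.config under <SharePoint>/<WorkflowServices> so SharePoint adds it to the runtime for you. Calling ExternalDataExchangeService.AddService yourself only works in a self-hosted WorkflowRuntime, not inside SharePoint. The override signatures below are from memory and worth double-checking against the SDK:

    using System;
    using System.Workflow.Activities;
    using System.Workflow.Runtime;
    using Microsoft.SharePoint.Workflow;

    [ExternalDataExchange]
    public interface IListItemCheck
    {
        bool CheckItem(int itemId);
    }

    // Registered in web.config under <WorkflowServices>; SharePoint then
    // exposes it to CallExternalMethodActivity like any local service.
    public class ListItemCheckService : SPWorkflowExternalDataExchangeService, IListItemCheck
    {
        public bool CheckItem(int itemId)
        {
            return itemId > 0; // placeholder for the real validation
        }

        // Required overrides; a call-only service (no events) never uses them.
        public override void CallEventHandler(Type eventType, string eventName,
            object[] eventData, SPWorkflow workflow, string identity,
            IPendingWork workHandler, object workItem)
        { throw new NotImplementedException(); }

        public override void CreateSubscription(MessageEventSubscription subscription)
        { throw new NotImplementedException(); }

        public override void DeleteSubscription(Guid subscriptionId)
        { throw new NotImplementedException(); }
    }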

    Read the article

  • Any recommendations for implementing a user-defined workflow in Ruby?

    - by midas06
    I'm interested in creating a system where the user can define the steps in a workflow. Is there a gem that already handles this? I thought about one of the state machine gems, but they all seem to be for pre-defined states. I've been thinking maybe I can use a state machine for the individual step types: an email step could have a few states [New, Assigned, Done], and the workflow could just be a list of these stateful steps (see the sketch below). Are there other solutions out there?
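    The idea in the question is small enough to sketch (rendered in C# here purely for illustration; the same shape carries over to Ruby): a workflow is an ordered list of user-chosen steps, and each step type owns a tiny state machine:

    using System;
    using System.Collections.Generic;

    enum StepState { New, Assigned, Done }

    class Step
    {
        public string Kind;                      // user-defined, e.g. "email"
        public StepState State = StepState.New;

        // Each step type could carry its own transition table; this is the
        // simple New -> Assigned -> Done machine from the question.
        public bool Advance()
        {
            switch (State)
            {
                case StepState.New: State = StepState.Assigned; return true;
                case StepState.Assigned: State = StepState.Done; return true;
                default: return false; // Done is terminal
            }
        }
    }

    class Workflow
    {
        public List<Step> Steps = new List<Step>();
        public bool Complete
        {
            get { return Steps.TrueForAll(s => s.State == StepState.Done); }
        }
    }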

    Read the article

  • Implementing a post-notification function to perform custom validation

    - by Alejandro Sosa
    Introduction The Oracle Workflow Notification System can be extended to perform extra validation or processing via PLSQL procedures when the notification is being responded to. These PLSQL procedures are called post-notification functions since they are executed after a notification action such as Approve, Reject, Reassign or Request Information is performed. The standard signature for the post-notification function is procedure <procedure_name> (itemtype in varchar2, itemkey in varchar2, actid in varchar2, funcmode in varchar2, resultout in out nocopy varchar2); Modes The post-notification function receives the parameter 'funcmode', which will have one of the following values: 'RESPOND', 'VALIDATE' and 'RUN' when a notification is responded to (Approve, Reject, etc.) 'FORWARD' for a notification being forwarded to another user 'TRANSFER' for a notification being transferred to another user 'QUESTION' for a request for more information from one user to another 'ANSWER' for a response to a request for more information 'TIMEOUT' for a timed-out notification 'CANCEL' when the notification is being re-executed in a loop. Context Variables Oracle Workflow provides the post-notification function with different context information corresponding to the current notification being acted upon. WF_ENGINE.context_nid - the notification ID WF_ENGINE.context_new_role - the new role to which the action on the notification is directed WF_ENGINE.context_user_comment - comments appended to the notification WF_ENGINE.context_user - the user who is responsible for taking the action that updated the notification's state WF_ENGINE.context_recipient_role - the role currently designated as the recipient of the notification. This value may be the same as the value of WF_ENGINE.context_user, or it may be a group role of which the context user is a member. WF_ENGINE.context_original_recipient - the role that has ownership of and responsibility for the notification. This value may differ from WF_ENGINE.context_recipient_role if the notification has previously been reassigned. Example Let us assume there is an EBS transaction that can only be approved by certain people, so any attempt to transfer or delegate such a notification should be allowed only to the users SPIERSON or CBAKER. The way to implement this functionality would be as follows: Edit the corresponding workflow definition in Workflow Builder and open the notification.
In the Function Name field enter the name of the procedure where the custom code is handled, for instance TEST_PACKAGE.Post_Notification. In PLSQL create the corresponding package TEST_PACKAGE with a procedure named Post_Notification, as follows: procedure Post_Notification (itemtype in varchar2, itemkey in varchar2, actid in varchar2, funcmode in varchar2, resultout in out nocopy varchar2) is l_count number; begin if funcmode in ('TRANSFER','FORWARD') then select count(1) into l_count from WF_ROLES where WF_ENGINE.context_new_role in ('SPIERSON','CBAKER'); --and/or any other conditions if l_count<1 then WF_CORE.TOKEN('ROLE', WF_ENGINE.context_new_role); WF_CORE.RAISE('WFNTF_TRANSFER_FAIL'); end if; end if; end Post_Notification; Launch the workflow process with the changed notification and attempt to reassign or transfer it. When trying to reassign the notification to the user CBROWN, the screen would look like the one below. Check the Workflow API Reference Guide, section Post-Notification Functions, to see all the standard, seeded WF_ENGINE variables available for extending notification processing.

    Read the article

  • I don't understand the definition of side effects

    - by Chris Okyen
    I don't understand the Wikipedia article on side effects: "In computer science, a function or expression is said to have a side effect if, in addition to returning a value, it also 1.) modifies some state or 2.) has an observable interaction with calling functions or the outside world." I know an example of the first thing that causes a function or expression to have side effects - modifying some state. A function and an expression modifying state: foo(int x) { return x = x % x; } and a = a + 1; What does 2.) - "has an observable interaction with calling functions or the outside world" - mean? Please give an example. The article continues: "For example, a function might modify a global or static variable, modify one of its arguments, raise an exception, write data to a display or file, read data, or call other side-effecting functions..." Are all these examples examples of 1.), modifying some state, or are they also part of 2.), having an observable interaction with calling functions or the outside world?
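    A hedged C# illustration of both clauses may help; the point of clause 2 is that something outside the function (a file, the screen, the network) can tell the call happened, even when the return value never changes:

    using System;
    using System.IO;

    class SideEffectExamples
    {
        static int counter; // shared state

        // Clause 1: returns a value AND modifies state that outlives the call.
        static int Increment() { return ++counter; }

        // Clause 2: returns the same value every time, but the call is
        // observable from outside the program: the file grows and the
        // console prints. That interaction with the world is the side effect.
        static int Answer()
        {
            File.AppendAllText("audit.log", "Answer() was called\n");
            Console.WriteLine("computing...");
            return 42;
        }

        // Neither clause: same input, same output, nothing else can tell.
        static int Square(int x) { return x * x; }

        static void Main()
        {
            Console.WriteLine(Increment()); // 1
            Console.WriteLine(Increment()); // 2 - the state change persisted
            Console.WriteLine(Answer());    // 42, plus a longer audit.log
            Console.WriteLine(Square(5));   // 25, observably inert
        }
    }

    By that reading, the Wikipedia examples split across both clauses: modifying a global, a static, or an argument falls under 1.), while raising an exception, writing to a display or file, reading data, and calling other side-effecting functions fall under 2.).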

    Read the article

  • How do you handle animations that are for transitioning between states?

    - by yaj786
    How does one usually handle animations that are for moving between a game object's states? For example, imagine a very simple game in which a character can only crouch or stand normally. Currently, I use a custom Animation class like this: class Animation{ int numFrames; int curFrame; Bitmap spriteSheet; //... various functions for pausing, returning frame, etc. } and an example Character class class Character{ int state; Animation standAni; Animation crouchAni; //... etc, etc. } Thus, I use the state of the character to draw the necessary animation: if(state == STATE_STAND) draw(standAni.updateFrame()); else if(state == STATE_CROUCH) draw(crouchAni.updateFrame()); Now I've come to the point where I want to draw "in-between" animations, because right now the character just snaps into a crouch instead of bending down. What is a good way to handle this? And if the way I handle storing Animations in the Character class is not a good way, what is? I thought of creating states like STATE_STANDING_TO_CROUCHING, but I feel like that may get messy fast.
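    One hedged sketch of a middle road, in C#: keep explicit transition states, but drive them from a small table so they stay manageable. A transition state plays its animation and, on the last frame, automatically advances to its target state (all names here are illustrative):

    using System.Collections.Generic;

    class Animation
    {
        public int NumFrames;
        public int CurFrame;
        public bool IsFinished { get { return CurFrame >= NumFrames - 1; } }
        public void UpdateFrame() { if (!IsFinished) CurFrame++; }
        public void Reset() { CurFrame = 0; }
    }

    enum CharState { Stand, Crouch, StandToCrouch, CrouchToStand }

    class Character
    {
        public CharState State = CharState.Stand;
        readonly Dictionary<CharState, Animation> anims;

        // Where each transient state settles once its animation finishes.
        static readonly Dictionary<CharState, CharState> onFinished =
            new Dictionary<CharState, CharState>
            {
                { CharState.StandToCrouch, CharState.Crouch },
                { CharState.CrouchToStand, CharState.Stand },
            };

        public Character(Dictionary<CharState, Animation> anims) { this.anims = anims; }

        public void RequestCrouch()
        {
            if (State == CharState.Stand) Enter(CharState.StandToCrouch);
        }

        public void Update()
        {
            Animation a = anims[State];
            a.UpdateFrame(); // draw(a) would happen here, as in the question
            CharState next;
            if (a.IsFinished && onFinished.TryGetValue(State, out next))
                Enter(next); // in-between animation done: settle in target
        }

        void Enter(CharState s) { State = s; anims[s].Reset(); }
    }

    The table keeps the messiness bounded: adding a new pose means adding its transition states to the enum and one or two rows to the table, not new branches scattered through the draw code.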

    Read the article

  • How should I fix problems with file permissions while restoring from Time Machine?

    - by Andrew Grimm
    While restoring files from a Time Machine backup, I got the error message "The operation can’t be completed because you don’t have permission to access some of the items." because of problems with files in one folder. What's the safest way to deal with this? The folder in question has permissions like: Andrew-Grimms-MacBook-Pro:kmer agrimm$ pwd /Volumes/Time Machine Backups/Backups.backupdb/Andrew Grimm’s MacBook Pro/2010-12-09-224309/Macintosh HD/Users/agrimm/ruby/kmer Andrew-Grimms-MacBook-Pro:kmer agrimm$ ls -ltra total 6156896 drwxrwxrwx@ 19 agrimm staff 680 18 Jan 2008 Saccharomyces_cerevisiae -r--------@ 1 agrimm staff 60221852 4 Aug 2009 hs_ref_GRCh37_chrY.fa -r--------@ 1 agrimm staff 157488804 4 Aug 2009 hs_ref_GRCh37_chrX.fa (snip a few files) -r--------@ 1 agrimm staff 676063 27 Oct 2009 NC_001143.fna -rw-r--r--@ 1 agrimm staff 6148 23 Mar 2010 .DS_Store drwxr-xr-x@ 3 agrimm staff 1530 23 Mar 2010 . drwxr-xr-x@ 30 agrimm staff 1054 20 Nov 14:43 .. Is it ok to do sudo chmod, or is there a safer approach? Background: Files within the original folder on my computer also had weird permissions - I suspect I may have used sudo to copy some files from a thumbdrive onto my computer.

    Read the article

  • How small (spec wise) can a virtual machine be and still boot up and run some sort of OS?

    - by IllvilJa
    One of the advantages of virtual machines is that you can be very flexible with their sizes. If the host system permits it, you can have a very large virtual machine with a lot of virtual RAM and disk. You can also go the other way around: give the virtual machine a very modest amount of RAM and disk and then choose and configure the OS appropriately. The question is, how small a virtual machine have people managed to set up (and get to both boot up and run)? Virtual machines doing something useful are preferable, even though I know "useful" in this context is awfully subjective, but laboratory cases with a configuration stripped beyond common sense could be interesting as well, just to see what people manage to boot and run. The question is quite open-ended and quite academic, but think of it: an extremely small VM (which still does something useful) takes very little memory and disk and can be quite quickly saved to and restored from disk. If it's also gentle on CPU resources, one might consider having a huge number of such VMs up and running on a host. (Imagine a VM running just an old Commodore 64 or Commodore Amiga in it. OK, completely the wrong CPU architecture for modern virtualization software running on an x86-based PC, but still an interesting thought. You could have quite a few such small VMs running on a modern PC.)

    Read the article

  • Best PerfCounters for monitoring system health of IIS, WCF, WWF and .Net for a Workflow based solution

    - by Gineer
    We have a solution built in .Net that will be installed into a client environment. The solution will span multiple servers and run on multiple tiers. The client makes use of MOM (Microsoft Operations Manager) to monitor the system. What are the best counters to use for monitoring the overall health of the system? Are there any built-in counters that we could add to a MOM pack (as an alert) to test a given scenario? Any thoughts or suggestions would be much appreciated. Thanks
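    As a starting point, the usual suspects can be read (and MOM can alert on them) with System.Diagnostics.PerformanceCounter. The category and counter names below exist on stock Windows/.Net installs; the WCF categories (e.g. "ServiceModelService 3.0.0.0") only appear after performance counters are enabled in the service's config, and any thresholds are something to tune, not to take from this sketch:

    using System;
    using System.Diagnostics;
    using System.Threading;

    class HealthProbe
    {
        static void Main()
        {
            var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total");
            var mem = new PerformanceCounter("Memory", "Available MBytes");
            var queued = new PerformanceCounter("ASP.NET", "Requests Queued");

            cpu.NextValue();    // the first sample of a rate counter is always 0
            Thread.Sleep(1000); // sample over one second

            Console.WriteLine("CPU:              {0,6:F1} %", cpu.NextValue());
            Console.WriteLine("Available memory: {0,6:F0} MB", mem.NextValue());
            Console.WriteLine("ASP.NET queued:   {0,6:F0}", queued.NextValue());
            // WCF and WF expose their own categories (e.g. "ServiceModelService
            // 3.0.0.0" with counters like "Calls Per Second") whose instance
            // names encode the service; enumerate instances rather than
            // hard-coding them.
        }
    }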

    Read the article

  • Understanding branching strategy/workflow correctly

    - by burnersk
    I've been using svn without branches (trunk-only) at my workplace for a very long time, and I have experienced most or all of the issues that hit projects without any branching strategy. That is unlikely to change at my workplace, but it will for my private projects. My private projects mostly involve coworkers working at the same time on different features, so I'd like a robust branching strategy, powered by git, that supports long-term releases. I found the Atlassian toolchain (JIRA, Stash and Bamboo) most helpful, and it also recommends a branching strategy which I'd like to verify against the team's needs. The branching strategy was taken directly from the Atlassian Stash recommendation, with a small modification to the hotfix branch tree: all hotfixes should also be merged into mainline. The branching strategy in words: mainline (also known as master with git or trunk with svn) contains the "state of the art" development release. Everything here has been successfully checked with various automated tests (through Bamboo) and looks like it is working. It is not proven to work, because tests may be missing. It is ready to use but not recommended for production. feature covers all new features which are not completely finished. Once a feature is finished it will be merged into mainline. Sample branch: feature/ISSUE-2-A-nice-Feature bugfix fixes non-critical bugs which can wait for the next normal release. Sample branch: bugfix/ISSUE-1-Some-typos production owns the latest release. hotfix fixes critical bugs which have to be released urgently to mainline, production and all affected long-term releases. Sample branch: hotfix/ISSUE-3-Check-your-math release is for long-term maintenance. Sample branches: release/1.0, release/1.1, release/1.0-rc1 I am not an expert, so please provide feedback. Which problems might appear? Which parts are missing or would slow down productivity?

    Read the article

  • Trying to back up system state on Server 2003 SP2, getting "Faulting application vssvc.exe - system state backup failed" in the application log

    - by IT_Fixr
    Trying to back up the system state on Windows Server 2003 (SP2), I get "Faulting application vssvc.exe - system state backup failed" in the application log. Volume shadow copy creation: attempt 1. "MSDEWriter" has reported an error 0x800423f2. This is part of System State. The volume shadow copy operation can be retried. "Event Log Writer" has reported an error 0x800423f2. This is part of System State. The volume shadow copy operation can be retried. "Registry Writer" has reported an error 0x800423f2. This is part of System State. The volume shadow copy operation can be retried. "COM+ REGDB Writer" has reported an error 0x800423f2. This is part of System State. The volume shadow copy operation can be retried. "Removable Storage Manager" has reported an error 0x0. This is part of System State. The backup cannot continue. Error returned while creating the volume shadow copy: 800423f2. Aborting Backup.

    Read the article

  • Send regular keyboard samples OR keyboard state changes over network

    - by Ciaran
    Building a multiplayer asteroids game where ships compete with each other, using UDP. I want to minimize the traffic sent to the server. Which would you do: Send periodic keyboard state samples from the client at a rate matching the server physics update rate, e.g. 50 times per second. Highly resilient to packet loss and other reliability problems; out-of-date packets are discarded by the server; but it generates a lot of unnecessary traffic. Or only send the keyboard state when it changes (key up, key down). Radically less traffic sent from client to server. However, UDP can lose packets without you being informed, so a vital packet could simply never arrive unless I detect and resend it in a timely manner.
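    A hedged middle road, sketched in C#: send a packet whenever the key state changes, but also repeat the current state a few times per second so a lost "key up" is repaired within a bounded time, and stamp every packet with a sequence number so the server can ignore stale or duplicated ones. The wire format and rates here are illustrative only:

    using System;
    using System.Net;
    using System.Net.Sockets;

    class InputSender
    {
        readonly UdpClient udp = new UdpClient();
        readonly IPEndPoint server;
        uint seq;
        byte lastState;
        DateTime lastSend = DateTime.MinValue;

        public InputSender(string host, int port)
        {
            server = new IPEndPoint(IPAddress.Parse(host), port);
        }

        // Call once per client frame with the packed key bits
        // (e.g. bit 0 = thrust, bit 1 = left, bit 2 = right, bit 3 = fire).
        public void Update(byte keyState)
        {
            bool changed = keyState != lastState;
            bool refresh = DateTime.UtcNow - lastSend > TimeSpan.FromMilliseconds(200); // ~5 Hz floor
            if (!changed && !refresh) return;

            byte[] packet = new byte[5];
            BitConverter.GetBytes(++seq).CopyTo(packet, 0); // server keeps the highest seq seen
            packet[4] = keyState;
            udp.Send(packet, packet.Length, server);

            lastState = keyState;
            lastSend = DateTime.UtcNow;
        }
    }

    The server drops any packet whose sequence number is lower than the last one it applied; worst case, a lost key change is corrected by the next 200 ms refresh, while idle traffic stays at about 5 packets per second instead of 50.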

    Read the article
