Search Results

Search found 2157 results on 87 pages for 'sequential workflow'.

  • PHP remote development workflow: git, symfony and hudson

    - by user2022
    I'm looking to develop a website, and all the work will be done remotely (no local dev server). The reason for this is that my shared hosting company, a2hosting, has a specific configuration (symfony, mysql, git) that I don't want to spend time duplicating when I can just ssh in and develop remotely or through NetBeans' remote editing features. My question is how I can use git to separate my site into three areas: live, staging and dev. Here's my initial thought:
        public_html : live site and git repo
        testing : a mirror of the site used for visual tests (full git repo)
        dev/ticket# : git branches of public_html used for features and bug fixes (full git repo)
    Version control with git. Initial setup:
        cd public_html
        git init
        git add *
        git commit -m 'initial commit of the site'
        cd ..
        git clone public_html testing
        mkdir dev
    Development:
        cd dev
        git clone ../testing ticket#
    All work is done in ./dev/ticket#, then I visit www.domain.com/dev/ticket# to visually test. I make granular commits as necessary until the dev work is done, then git push origin master:ticket#. If the push fails, I merge the latest testing state into the current dev work (git merge origin/master) and try the push again, then mark ticket# as ready for integration.
    Integration and deployment:
        cd ../../testing
        git merge ticket# -m "integration test for ticket#" --no-ff    (check for conflicts)
    Run the hudson tests and visit www.domain.com/testing for a visual test. If all tests pass and this ticket marks the end of a big dev sprint, make a snapshot with git tag and git push --tags origin; otherwise just git push origin. Then cd ../public_html and git checkout -f, and the live site should have the latest dev from ticket#. If the tests fail, revert the merge (git checkout master~1; git commit -m "reverting ticket#") and update ticket# with the failure details.
    Snapshots: each major deployment sprint should have a standard name and should be tracked. Method: git tag. Naming convention: TBD.
    Reverting the site to a previous state: if something goes wrong, revert to the previous snapshot and debug the issue in dev with a new ticket#. Once the bug is fixed, follow the deployment process again.
    My questions: does this workflow make sense, and if not, do you have any recommendations? Is my approach for reverting correct, or is there a better way to say 'revert to before commit x'?
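
    On the reverting question, a minimal sketch of what 'revert to before commit x' can look like with plain git (the tag names and SHA below are placeholders):

        # tag each deployment so there is a named snapshot to fall back to
        git tag -a sprint-12 -m "end of sprint 12"
        git push --tags origin

        # undo a bad merge without rewriting history; -m 1 keeps the first
        # parent, i.e. the state of master before the merge was made
        git revert -m 1 <merge-commit-sha>

        # or move the working tree back to a known-good snapshot for inspection
        git checkout -f sprint-11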

  • Ruby workflow in Windows

    - by Rig
    I've done some searching and haven't quite come across the answer I am looking for. I do not think this is a duplicate of this question. I believe Windows could be a suitable development environment based on the mix of answers in that question. I have been developing in Ruby (mostly Rails but not entirely) for about a year now for personal projects on a MacBook Pro; however, that machine has faced an untimely death and has been replaced with a nice Windows 7 machine. Ruby development felt almost natural on the Mac after doing some research and setting up the typical stack. My environment then included the standard (Linux-like) stuff built into OSX, TextWrangler, Git, RVM, et al. Not too much of a deviation from what the 'devotees' tend to assume. Now I am setting up my new Windows box to continue that development. What would my development environment look like? Should I just cave and run Linux in a VM? Ideally I would develop natively in Windows. I am aware of the Windows Ruby installer. It seems decent, but it's not exactly as nice as RVM in the osx/linux world. Mercurial/Git are available, so I would assume they play into the stack. Does one develop entirely in Windows? Does one run a webserver in a Linux VM and use it as a test bed while developing in Windows? Do it all in a VM? What does the standard Windows Ruby developer environment look like, and what is the workflow? What would a typical step-through be for adding a new feature to an ongoing project, and what would the technology stack look like?
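
    For what it's worth, a minimal check that a native Windows stack works end to end looks the same as on osx/linux once a RubyInstaller build is on the PATH (myapp is a placeholder project name):

        gem install rails        # pulls in Rails and its dependencies
        rails new myapp          # generates a fresh project skeleton
        cd myapp
        bundle install           # resolves the gems declared in the Gemfile
        rails server             # serves the app on http://localhost:3000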

  • Creating packages in code - Workflow

    This is just a quick one prompted by a question on the SSIS Forum: how to programmatically add a precedence constraint (aka workflow) between two tasks. To keep the code simple I've actually used two Sequence containers, which are often used as anchor points for a constraint. Very often this is when you have a task that you wish to conditionally execute based on an expression. If it's the first or only task in the package, you need somewhere to anchor the constraint to, so you can then set the expression on it and control the flow of execution. Anyway, back to my code sample; here's the finished article (screenshot in the original post). Now for the code, which is actually pretty simple, and hopefully the comments should explain exactly what is going on.
        Package package = new Package();
        package.Name = "SequenceWorkflow";
        // Add the two sequence containers to provide anchor points for the constraint.
        // If you use tasks, it follows exactly the same pattern; they all derive from Executable.
        Sequence sequence1 = package.Executables.Add("STOCK:Sequence") as Sequence;
        sequence1.Name = "SEQ Start";
        Sequence sequence2 = package.Executables.Add("STOCK:Sequence") as Sequence;
        sequence2.Name = "SEQ End";
        // Add the precedence constraint; here we use the package's constraint collection
        // as it hosts the two objects we want to constrain (link).
        // The default constraint is a basic On Success constraint, just like in the designer.
        PrecedenceConstraint constraint = package.PrecedenceConstraints.Add(sequence1, sequence2);
        // Change the settings to use a (dummy) expression only.
        constraint.EvalOp = DTSPrecedenceEvalOp.Expression;
        constraint.Expression = "1 == 1";
    The complete code file is available to download below. SequenceWorkflow.cs
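
    A possible round-trip of the result, assuming the standard SSIS runtime API (the output path is a placeholder): save the generated package to disk and execute it in-process.

        // Application and Package live in Microsoft.SqlServer.Dts.Runtime
        Application app = new Application();
        app.SaveToXml(@"C:\temp\SequenceWorkflow.dtsx", package, null);  // persist as a .dtsx file
        DTSExecResult result = package.Execute();                        // run it in-process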

  • The new free Oracle Data Miner 11gR2, with graphical workflows, is available to try

    - by Fekete Zoltán
    Oracle Data Mining technology information page. Oracle Data Miner 11g Release 2 - Early Adopter page. The new graphical interface for Oracle Data Mining, Oracle Data Miner 11gR2, has been released and can be downloaded and tried out. To get Oracle Data Miner you simply download SQL Developer, since the data mining interface is launched from it. Oracle Data Mining is the data mining engine embedded in the Oracle database server, an option of Oracle Database Enterprise Edition. Data mining is a sophisticated tool and process for analyzing data warehouses. Oracle Data Mining offers the advantages of in-database mining: there is no unnecessary data movement, as the data stays in the database throughout the entire data mining process; the data mining models themselves live in the Oracle database; and the data mining results (cluster data, decisions, probabilities, etc.) are likewise produced in the database, where they can be analyzed directly. The new free Data Miner interface has been enormously enriched compared to the previous version: graphical editing and execution of data mining workflows has arrived; it is still free; the interface has been expanded; new analysis capabilities have been added; and it launches from SQL Developer 3.0, which also makes it easy to invoke the data mining functions from the database when you don't want to use the graphical interface for that. The free Data Miner interface is available as an extension of Oracle SQL Developer, so analysts can work directly with the data in the database and with the Data Miner GUI as well, building, evaluating and running models, making and analyzing predictions, with support for carrying out the data mining methodology. The previous Oracle Data Miner interface lives on under the name Data Miner Classic and can still be downloaded from OTN. (A screenshot of the new Data Miner GUI appears in the original post.) Tasks Oracle Data Mining offers solutions for: predicting customer behavior; effectively targeting the "best" customers; customer retention and churn management (churn); finding and examining customer segments, clusters and profiles; detecting anomalies and fraud; etc.
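
    Since the same engine is callable without the GUI, here is a minimal sketch of invoking it straight from PL/SQL; the model, table and column names are placeholders, and the optional settings table is omitted for brevity:

        BEGIN
          DBMS_DATA_MINING.CREATE_MODEL(
            model_name          => 'CHURN_MODEL',
            mining_function     => DBMS_DATA_MINING.CLASSIFICATION,
            data_table_name     => 'CUSTOMERS',
            case_id_column_name => 'CUSTOMER_ID',
            target_column_name  => 'CHURNED');
        END;
        /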

  • What does your Lisp workflow look like?

    - by Duncan Bayne
    I'm learning Lisp at the moment, coming from a language progression that is Locomotive BASIC - Z80 Assembler - Pascal - C - Perl - C# - Ruby. My approach is to simultaneously: write a simple web-scraper using SBCL, QuickLisp, closure-html, and drakma; watch the SICP lectures. I think this is working well; I'm developing good 'Lisp goggles', in that I can now read Lisp reasonably easily. I'm also getting a feel for how the Lisp ecosystem works, e.g. Quicklisp for dependencies. What I'm really missing, though, is a sense of how a seasoned Lisper actually works. When I'm coding for .NET, I have Visual Studio set up with ReSharper and VisualSVN. I write tests, I implement, I refactor, I commit. When I've done enough of that to complete a story, I write some AUATs. Then I kick off a Release build on TeamCity to push the new functionality out to the customer for testing & hopefully approval. If it's an app that needs an installer, I use either WiX or InnoSetup, obviously building the installer through the CI system. So, my question is: as an experienced Lisper, what does your workflow look like? Do you work mostly in the REPL, or in the editor? How do you do unit tests? Continuous integration? Packaging & deployment? When you sit down at your desk, steaming mug of coffee to one side and a framed photo of John McCarthy to the other, what is it that you do? Currently, I feel like I am getting to grips with Lisp coding, but not Lisp development ...
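
    By way of illustration, a minimal sketch of the REPL-driven loop many Lispers describe, assuming Quicklisp is installed and using FiveAM as a stand-in test library (any other would do):

        ;; pull in dependencies from the REPL
        (ql:quickload '(:drakma :fiveam))

        (defun fetch-page (url)
          "Fetch URL and return the raw response body."
          (drakma:http-request url))

        ;; define a test, run it, redefine the function, run it again --
        ;; all without leaving the running image
        (fiveam:test fetch-page-returns-something
          (fiveam:is (plusp (length (fetch-page "http://example.com")))))

        (fiveam:run! 'fetch-page-returns-something)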

  • Tomcat + Spring + CI workflow

    - by ex3v
    We're starting our very first project with Spring and the Java web stack. This project will mainly be about rewriting a quite large ERP/CRM from Zend Framework to Java. An important factor in my question is that I come from php territory, where things (in terms of quality) tend to look different than in the java world. Facts: there will be 2-3 developers; at least one of the developers uses Windows, the rest use Linux; there is one remote linux-based machine, which should handle the test and production instances; after struggling with buggy legacy code, we want to introduce good programming and development practices (CI, tests, clean code and so on); the client is internal, with frequent business logic changes, scrum, daily deployments. What I want to achieve is a good workflow across as many development stages as possible (coding - committing - testing - deploying). The problem is that I've never done this before, so I don't know what the best practices are. What I have so far is: developers code locally; there is a vagrant instance on every development machine, managed by puppet, containing the same linux, jenkins and tomcat versions as the production machine; while coding, the developer deploys to the vagrant machine; after a local merge to the test branch, jenkins on vagrant handles tests; when everything is fine, the developer pushes commits and merges; jenkins on the remote machine pulls the commit from the test branch, runs tests and so on; if everything looks green, jenkins deploys to the test tomcat instance. Deployment to production is manual (although it can be done using helper scripts) once the business logic has been tested by other divisions and everything looks fine to the client. Now, the real question: does the above make any sense? Things that I'm not sure about: Remote machine: won't there be any problems with two (or even three, as jenkins might need one) instances of the same app on tomcat? Using vagrant to develop in a php environment is just wise; isn't it overkill with Tomcat? I mean, is there a higher probability that tomcat will act the same on every machine? Is there any sense in having a local jenkins on vagrant?
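
    On the multiple-instances worry: a common arrangement is one Tomcat per role, isolated via CATALINA_BASE, so test and production never share a JVM. A rough sketch (paths and the instance name are placeholders):

        export CATALINA_HOME=/opt/tomcat            # shared Tomcat binaries
        export CATALINA_BASE=/srv/tomcat-test       # per-instance conf/, webapps/, logs/
        # in $CATALINA_BASE/conf/server.xml, give this instance its own
        # HTTP and shutdown ports so it cannot collide with production
        $CATALINA_HOME/bin/startup.sh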

  • How to update the original InfoPath form from within Workflow?

    - by Allen.Zhang
    Hi All, I have created an InfoPath form (e.g. Form_ExpenseReport) to collect data from end users, and a number of task forms (also InfoPath forms, e.g. TaskForm_1, TaskForm_2) for my state machine workflow to use. The users want to see all the comments from the task forms (TaskForm_1 & TaskForm_2) in the original IP form (Form_ExpenseReport). How can I update the first form from within the workflow? Can anybody give me some tips? My environment: MOSS 2007 Enterprise license, VS 2008
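
    One direction sometimes suggested (a sketch, not a confirmed answer): if the comment fields are promoted from the form into columns on the form library, a workflow code activity can write them back through the list item. Note this updates the promoted columns only, not the form's underlying XML; the field and variable names below are placeholders.

        // inside a code activity of the MOSS 2007 workflow
        SPListItem item = workflowProperties.Item;      // the Form_ExpenseReport item
        item["TaskComments"] = taskFormComments;        // a column promoted from the form
        item.Update();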

  • Do MOSS 2007 workflows support calling external methods?

    - by Mina Samy
    Hi all, I have a custom sharepoint workflow that needs to call an external method defined in a local service. It always throws an exception:
        System.InvalidOperationException: Could not find service of type 'ListItemCheckService.IListItemCheck' through the currently configured services. Consider adding the service to ExternalDataExchangeService.
        at System.Workflow.Activities.CallExternalMethodActivity.Execute(ActivityExecutionContext executionContext)
        at System.Workflow.ComponentModel.ActivityExecutor`1.Execute(T activity, ActivityExecutionContext executionContext)
        at System.Workflow.ComponentModel.ActivityExecutor`1.Execute(Activity activity, ActivityExecutionContext executionContext)
        at System.Workflow.ComponentModel.ActivityExecutorOperation.Run(IWorkflowCoreRuntime workflowCoreRuntime)
        at System.Workflow.Runtime.Scheduler.Run()
    The question is: does the sharepoint workflow system support calling external methods from a local service? Thanks
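
    For reference, the exception's suggestion maps to code like the following in a self-hosted WF 3.x runtime. SharePoint hosts the runtime itself, so this is a sketch of the mechanism rather than a MOSS recipe; ListItemCheckService and CheckItem stand in for the asker's own types.

        // the contract must be marked for WF data exchange
        [ExternalDataExchange]
        public interface IListItemCheck
        {
            bool CheckItem(int itemId);
        }

        // registering the local service when you own the runtime
        WorkflowRuntime runtime = new WorkflowRuntime();
        ExternalDataExchangeService exchange = new ExternalDataExchangeService();
        runtime.AddService(exchange);
        exchange.AddService(new ListItemCheckService());  // implements IListItemCheck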

  • Any recommendations for implementing a user-defined workflow in Ruby?

    - by midas06
    I'm interested in creating a system where the user can define the steps in a workflow. Is there a gem that already handles this? I thought about one of the state machine gems, but they all seem to be for pre-defined states. I've been thinking maybe I can use a state machine for the individual step types... An email step could have a few states [New, Assigned, Done], and the workflow could just be a list of these stateful steps. Are there other solutions out there?
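
    As a sketch of that last idea (plain Ruby, no gem; the step kinds and state names are illustrative): each step carries its own state, and the workflow is just the ordered list.

        Step = Struct.new(:kind, :state) do
          def advance!(to)
            self.state = to
          end
        end

        workflow = [Step.new(:email, :new), Step.new(:review, :new)]
        workflow.first.advance!(:assigned)
        workflow.first.advance!(:done)
        # the next actionable step is simply the first one not yet done
        current = workflow.find { |s| s.state != :done }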

  • Implementing a post-notification function to perform custom validation

    - by Alejandro Sosa
    Introduction: the Oracle Workflow notification system can be extended to perform extra validation or processing via PLSQL procedures when a notification is being responded to. These PLSQL procedures are called post-notification functions, since they are executed after a notification action such as Approve, Reject, Reassign or Request Information is performed. The standard signature for the post-notification function is:
        procedure <procedure_name> (itemtype  in varchar2,
                                    itemkey   in varchar2,
                                    actid     in varchar2,
                                    funcmode  in varchar2,
                                    resultout in out nocopy varchar2);
    Modes: the post-notification function receives the parameter 'funcmode', which will have one of the following values:
    - 'RESPOND', 'VALIDATE' and 'RUN' when a notification is responded to (Approve, Reject, etc.)
    - 'FORWARD' for a notification being forwarded to another user
    - 'TRANSFER' for a notification being transferred to another user
    - 'QUESTION' for a request for more information from one user to another
    - 'ANSWER' for a response to a request for more information
    - 'TIMEOUT' for a timed-out notification
    - 'CANCEL' when the notification is being re-executed in a loop
    Context variables: Oracle Workflow provides the post-notification function with context information corresponding to the current notification being acted upon:
    - WF_ENGINE.context_nid - the notification ID
    - WF_ENGINE.context_new_role - the new role to which the action on the notification is directed
    - WF_ENGINE.context_user_comment - comments appended to the notification
    - WF_ENGINE.context_user - the user who is responsible for taking the action that updated the notification's state
    - WF_ENGINE.context_recipient_role - the role currently designated as the recipient of the notification. This value may be the same as the value of WF_ENGINE.context_user, or it may be a group role of which the context user is a member.
    - WF_ENGINE.context_original_recipient - the role that has ownership of and responsibility for the notification. This value may differ from the value of WF_ENGINE.context_recipient_role if the notification has previously been reassigned.
    Example: let us assume there is an EBS transaction that can only be approved by certain people, so any attempt to transfer or delegate such a notification should be allowed only to users SPIERSON or CBAKER. The way to implement this functionality would be as follows. Edit the corresponding workflow definition in Workflow Builder and open the notification. In the Function Name field, enter the name of the procedure where the custom code is handled, for instance TEST_PACKAGE.Post_Notification. Then, in PLSQL, create the corresponding package TEST_PACKAGE with a procedure named Post_Notification, as follows:
        procedure Post_Notification (itemtype  in varchar2,
                                     itemkey   in varchar2,
                                     actid     in varchar2,
                                     funcmode  in varchar2,
                                     resultout in out nocopy varchar2) is
          l_count number;
        begin
          if funcmode in ('TRANSFER','FORWARD') then
            select count(1) into l_count
            from WF_ROLES
            where WF_ENGINE.context_new_role in ('SPIERSON','CBAKER');
            --and/or any other conditions
            if l_count < 1 then
              WF_CORE.TOKEN('ROLE', WF_ENGINE.context_new_role);
              WF_CORE.RAISE('WFNTF_TRANSFER_FAIL');
            end if;
          end if;
        end Post_Notification;
    Launch the workflow process with the changed notification and attempt to reassign or transfer it. When trying to reassign the notification to user CBROWN, the reassignment fails with the WFNTF_TRANSFER_FAIL error (screenshot in the original article). Check the Workflow API Reference Guide, section Post-Notification Functions, to see all the standard, seeded WF_ENGINE variables available for extending notification processing.

  • Best PerfCounters for monitoring system health of IIS, WCF, WWF and .Net for a Workflow based soluti

    - by Gineer
    We have a solution built in .Net that will be installed into a client environment. The solution will span multiple servers and run on multiple tiers. The client makes use of MOM (Microsoft Operations Manager) to monitor the system. What are the best counters to use for monitoring the overall health of the system? Are there any built-in counters that we could add into a MOM Pack (as an alert) to test a given scenario? Any thoughts or suggestions would be much appreciated. Thanks
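
    Not a full answer, but a sketch of how such counters can be sampled from .Net code; the category, counter and instance names below are standard Windows/.Net counters, and which ones matter depends on the tiers being monitored:

        using System.Diagnostics;

        PerformanceCounter cpu = new PerformanceCounter(
            "Processor", "% Processor Time", "_Total");
        PerformanceCounter clrExceptions = new PerformanceCounter(
            ".NET CLR Exceptions", "# of Exceps Thrown / sec", "_Global_");

        cpu.NextValue();                       // first sample primes the counter
        System.Threading.Thread.Sleep(1000);   // let a measurement interval pass
        Console.WriteLine("CPU: " + cpu.NextValue().ToString("F1") + "%");
        Console.WriteLine("CLR exceptions/sec: " + clrExceptions.NextValue());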

  • Maven antrun with sequential ant-contrib fails to run

    - by codevour
    We have a special routine that explodes files in a subfolder into extensions, which are then copied and jarred into single extension files. For this special approach I wanted to use the maven-antrun-plugin; for the sequential iteration over the dirset and the jar packaging, we need the ant-contrib library. The plugin configuration below fails with an error. What did I misconfigure? Thank you.
    Plugin configuration:
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-antrun-plugin</artifactId>
          <version>1.6</version>
          <executions>
            <execution>
              <phase>validate</phase>
              <goals>
                <goal>run</goal>
              </goals>
              <configuration>
                <target>
                  <for param="extension">
                    <path>
                      <dirset dir="${basedir}/src/main/webapp/WEB-INF/resources/extensions/">
                        <include name="*" />
                      </dirset>
                    </path>
                    <sequential>
                      <basename property="extension.name" file="${extension}" />
                      <echo message="Creating JAR for extension '${extension.name}'." />
                      <jar destfile="${basedir}/target/extension-${extension.name}-1.0.0.jar">
                        <zipfileset dir="${extension}" prefix="WEB-INF/resources/extensions/${extension.name}/">
                          <include name="**/*" />
                        </zipfileset>
                      </jar>
                    </sequential>
                  </for>
                </target>
              </configuration>
            </execution>
          </executions>
          <dependencies>
            <dependency>
              <groupId>ant-contrib</groupId>
              <artifactId>ant-contrib</artifactId>
              <version>1.0b3</version>
              <exclusions>
                <exclusion>
                  <groupId>ant</groupId>
                  <artifactId>ant</artifactId>
                </exclusion>
              </exclusions>
            </dependency>
            <dependency>
              <groupId>org.apache.ant</groupId>
              <artifactId>ant-nodeps</artifactId>
              <version>1.8.1</version>
            </dependency>
          </dependencies>
        </plugin>
    Error:
        [ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (default) on project extension-platform: An Ant BuildException has occured: Problem: failed to create task or type for
        [ERROR] Cause: The name is undefined.
        [ERROR] Action: Check the spelling.
        [ERROR] Action: Check that any custom tasks/types have been declared.
        [ERROR] Action: Check that any <presetdef>/<macrodef> declarations have taken place.
        [ERROR] -> [Help 1]
        [ERROR]
        [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
        [ERROR] Re-run Maven using the -X switch to enable full debug logging.
        [ERROR]
        [ERROR] For more information about the errors and possible solutions, please read the following articles:
        [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
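
    A likely culprit (an assumption, but a common one): the ant-contrib jar is on the plugin's classpath, yet its tasks are never declared, so Ant cannot resolve the for task. The usual fix is a taskdef at the top of the target:

        <target>
          <!-- make the ant-contrib tasks (for, if, foreach, ...) visible to this run -->
          <taskdef resource="net/sf/antcontrib/antlib.xml" />
          <for param="extension">
            ...
          </for>
        </target>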

  • How to get notification of workflow errors?

    - by Greg McGuffey
    I am having issues where a workflow is stalled because there is an issue with sending an email (send email activity). Typically, this is solved simply by resuming the workflow. I'm wondering if there is any way to react to a workflow error, so the user knows they need to go in and resume the workflow. I'm also wondering about this relative to a workflow that attempts to assign a task to a user who no longer exists in the CRM, or one that has an invalid email address, which I'm assuming would cause errors in workflows as well. Any other suggestions related to this sort of issue would be welcome. Thanks!

  • WorkFlow and WCF dynamically launching WorkFlows

    - by Raj73
    I have a WF which will be hosted in WCF. The service contract will contain a single operation taking two parameters: parameter1 will be a string containing the name of the workflow to invoke, and parameter2 will contain the input for the invoked workflow. All operations will take the same parameters and return the same return value. I have created the service implementation, and depending on the value of parameter1 I would like to start executing the appropriate workflow and return its value (there can be a number of workflow classes, say Operation1, Operation2, ..., whose names will be passed as the value of parameter1). How can I instantiate the different workflow classes, pass parameters to them, and get the return values from them, which I should then pass back to the calling client? (Also, should I be using ReceiveActivities in all of my launchable workflow classes?) Any code samples or pointers would help.
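
    One possible shape for the dispatch step, assuming a self-hosted WF 3.x runtime and workflows that expose an "Input" property (the namespace, property name and types are placeholders):

        // resolve the workflow type from the string passed by the client
        Type wfType = Type.GetType("MyWorkflows." + parameter1);  // e.g. "MyWorkflows.Operation1"

        Dictionary<string, object> args = new Dictionary<string, object>();
        args["Input"] = parameter2;   // bound to a public property on the workflow

        WorkflowInstance instance = workflowRuntime.CreateWorkflow(wfType, args);
        instance.Start();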

  • sequential SSH command execution not working in Ubuntu/Bash

    - by kumar
    My requirement is: I will have a set of commands in a text file that need to be executed. My shell script has to read each command, execute it and store the results in a separate file. Here is the snippet which does the above:
        while read command
        do
            echo 'Command :' $command >> "$OUTPUT_FILE"
            redirect_pos=`expr index "$command" '>>'`
            if [ `expr index "$command" '>>'` != 0 ]; then
                redirect_fn "$redirect_pos" "$command";
            else
                $command
                state=$?
                if [ $state != 0 ]; then
                    echo "command failed." >> "$OUTPUT_FILE"
                else
                    echo "executed successfully." >> "$OUTPUT_FILE"
                fi
            fi
            echo >> "$OUTPUT_FILE"
        done < "$INPUT_FILE"
    A sample Commands.txt will be like this:
        tar -rvf /var/tmp/logs.tar -C /var/tmp/ Commands_log.txt
        gzip /var/tmp/logs.tar
        rm -f /var/tmp/list.txt
    This works fine for commands which need to be executed on the local machine. But when I try to execute the following ssh commands, only the 1st command gets executed. Here are some of the ssh commands added in my text file:
        ssh uname@hostname1 tar -rvf /var/tmp/logs.tar -C /var/tmp/ Commands_log.txt
        ssh uname@hostname2 gzip /var/tmp/logs.tar
        ssh .. etc
    When I execute these on the cli they work fine. Could anybody help me with this?
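
    A likely explanation (worth verifying against the script above): ssh reads from the same stdin the while loop is consuming, so the first ssh call swallows the rest of $INPUT_FILE. Keeping ssh away from the loop's stdin usually fixes it:

        # either pass -n so ssh takes its stdin from /dev/null ...
        ssh -n uname@hostname1 'tar -rvf /var/tmp/logs.tar -C /var/tmp/ Commands_log.txt'
        # ... or redirect it explicitly
        ssh uname@hostname2 'gzip /var/tmp/logs.tar' < /dev/null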

  • Using wget to save sequential files as well as renaming the file extension

    - by Ian
    I run a cron job that requests a snapshot from a remote webcam at a local address: wget http://user:[email protected]/snapshot.cgi This creates the files snapshot.cgi, snapshot.cgi.1, snapshot.cgi.2, each time it's run. My desired result would be for the file to be named similar to file.1.jpg, file.2.jpg. Basically, sequentially or date/time named files with the correct file extension instead of .cgi. Any ideas?
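
    One approach (an untested sketch): skip wget's automatic .1/.2 suffixing entirely and choose the output name yourself with -O, stamping the date/time and the .jpg extension (the output directory is a placeholder):

        wget -O "/path/to/snapshots/snapshot-$(date +%Y%m%d-%H%M%S).jpg" \
             "http://user:[email protected]/snapshot.cgi"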

  • Parallel processing slower than sequential?

    - by zebediah49
    EDIT: For anyone who stumbles upon this in the future: ImageMagick uses a multithreading (OpenMP) library. It's faster to use the available cores if they're around, but if you have parallel jobs, it's unhelpful. Do one of the following:
    - do your jobs serially (with ImageMagick in parallel mode), or
    - set MAGICK_THREAD_LIMIT=1 for your invocation of the ImageMagick binary in question.
    By making ImageMagick use only one thread, it slows down by 20-30% in my test cases, but this meant I could run one job per core without issues, for a significant net increase in performance.
    Original question: While converting some images using ImageMagick, I noticed a somewhat strange effect. Using xargs was significantly slower than a standard for loop. Since xargs limited to a single process should act like a for loop, I tested that and found it to be about the same. Thus, we have this demonstration: quad core (AMD Athlon X4, 2.6GHz), working entirely on a tmpfs (16g ram total; no swap), no other major loads.
        /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 1 convert -auto-level
        real 0m3.784s   user 0m2.240s   sys 0m0.230s
        /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 2 convert -auto-level
        real 0m9.097s   user 0m28.020s  sys 0m0.910s
        /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 10 convert -auto-level
        real 0m9.844s   user 0m33.200s  sys 0m1.270s
    Can anyone think of a reason why running two instances of this program takes more than twice as long in real time, and more than ten times as long in processor time, to complete the same task? After that initial hit, more processes do not seem to have as significant an effect. I thought it might have to do with disk seeking, so I did that test entirely in ram. Could it have something to do with how convert works, with having more than one copy at once meaning it cannot use the processor cache as efficiently?
    EDIT: When done with 1000x 769KB files, performance is as expected. Interesting.
        /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 1 convert -auto-level
        real 3m37.679s  user 5m6.980s   sys 0m6.340s
        /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 1 convert -auto-level
        real 3m37.152s  user 5m6.140s   sys 0m6.530s
        /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 2 convert -auto-level
        real 2m7.578s   user 5m35.410s  sys 0m6.050s
        /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 4 convert -auto-level
        real 1m36.959s  user 5m48.900s  sys 0m6.350s
        /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 10 convert -auto-level
        real 1m36.392s  user 5m54.840s  sys 0m5.650s
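
    For the record, the single-thread workaround from the edit looks like this in practice (a sketch of the same conversion loop):

        # one ImageMagick thread per process; parallelism comes from xargs instead
        export MAGICK_THREAD_LIMIT=1
        for f in *.bmp; do echo "$f" "${f%bmp}png"; done | xargs -n 2 -P 4 convert -auto-level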

  • What IDE setup and workflow is used for OSGi development?

    - by Falx
    I made quite a few easy OSGi test projects in Eclipse RCP. My typical workflow would always be: make 3 different projects (APIproject, Clientproject and Serverproject); edit the MANIFEST.MF of APIproject to export the api package; edit the MANIFEST.MF files of Clientproject and Serverproject to add the required API package; choose "Run as..." "Plugin Framework"; the OSGi console starts in eclipse and everything seems to work. I also tried wiring things up using Declarative Services, which worked well like this too. Now recently I wanted to try out iPOJO. The problem is that I get the feeling I've been doing my OSGi development the wrong way. Could it be that I should instead make 1 project and make it work as if no OSGi were involved, and then afterwards just export each package to its own bundle by means of (for instance) the BNDL tool? Should development be done in a normal Eclipse (java, not RCP), or any other java IDE for that matter? So that's why I have these questions: What IDE setup is normally used to develop OSGi with iPOJO? And what is the normal workflow to be used when developing OSGi projects (maybe with iPOJO)?
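
    For flavor, a minimal sketch of what an iPOJO component looks like with the annotation approach (org.apache.felix.ipojo.annotations; the Greeter interface and its method are illustrative). This style is largely IDE-agnostic, since the bundling and manipulation happen at build time:

        import org.apache.felix.ipojo.annotations.Component;
        import org.apache.felix.ipojo.annotations.Provides;
        import org.apache.felix.ipojo.annotations.Requires;
        import org.osgi.service.log.LogService;

        @Component
        @Provides
        public class GreeterImpl implements Greeter {
            @Requires
            private LogService log;   // injected by iPOJO at runtime

            public String greet(String name) {
                log.log(LogService.LOG_INFO, "greeting " + name);
                return "Hello, " + name;
            }
        }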

  • How would you structure your workflow for a web application ?

    - by cx42net
    Hi! When designing a web application (or anything else), it's good to have a workflow, and better to have a well-ordered one. Starting from this idea, I'd like to know what your process is from having an idea to maintaining the resulting working project. For me, actually, the process is the following:
    - Having the idea
    - Checking whether this project already exists, and how it works
    - Describing its functionality on paper
    - Finding a good and adequate name for it (and checking the domain availability with WHOISMyProject)
    - Making a quick layout of the project on paper
    - Designing the project (via TheGimp, Photoshop, etc)
    - Making a complete mockup of each page
    - Developing a prototype of the client-side application (with fake data)
    - Developing the server side
    - Testing
    - Writing the documentation/help/faq
    - Releasing the project
    - Maintaining it
    Would you change the order of some points? Add or remove some? I would be pleased to know how you do this. I'm looking to set up a perfect workflow in order to make my project become real in the best way possible. Thank you for your opinion!

  • Sequential GUID in Linq-to-Sql?

    - by JacobE
    I just read a blog post about NHibernate's ability to create a GUID from the system time (Guid.Comb), thus avoiding a good amount of database fragmentation. You could call it the client-side equivalent to the SQL Server Sequential ID. Is there a way I could use a similar strategy in my Linq-to-Sql project (by generating the Guid in code)?
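
    For illustration, a comb-style generator along the lines of the algorithm the post refers to (a sketch; assign the result to your entity's key property before submitting changes). The last six bytes of the GUID are overwritten with a timestamp, so values sort roughly by creation time:

        static Guid NewCombGuid()
        {
            byte[] guidBytes = Guid.NewGuid().ToByteArray();
            DateTime baseDate = new DateTime(1900, 1, 1);
            DateTime now = DateTime.Now;

            // days since the base date, plus 1/300-second ticks since midnight --
            // the same layout SQL Server uses when comparing datetime values
            TimeSpan days = new TimeSpan(now.Ticks - baseDate.Ticks);
            TimeSpan msecs = now.TimeOfDay;

            byte[] daysBytes = BitConverter.GetBytes(days.Days);
            byte[] msecsBytes = BitConverter.GetBytes((long)(msecs.TotalMilliseconds / 3.333333));

            // reverse to match SQL Server's byte ordering for uniqueidentifier
            Array.Reverse(daysBytes);
            Array.Reverse(msecsBytes);

            Array.Copy(daysBytes, daysBytes.Length - 2, guidBytes, 10, 2);
            Array.Copy(msecsBytes, msecsBytes.Length - 4, guidBytes, 12, 4);
            return new Guid(guidBytes);
        }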
