Configuration management in support of scientific computing

Posted by Sharpie on Server Fault
Published on 2010-03-30T02:04:41Z

For the past few years I have been involved with developing and maintaining a system for forecasting near-shore waves. Our team has just received a significant grant for further development and as a result we are taking the opportunity to refactor many components of the old system.

We will also be receiving a new server to run the model and so I am taking this opportunity to consider how we set up the system. Basically, the steps that need to happen are:

  1. Some standard packages and libraries such as compilers and databases need to be downloaded and installed.

  2. Some custom scientific models need to be downloaded and compiled from source as they are not commonly provided as packages.

  3. New users need to be created to manage the databases and run the models.

  4. A suite of scripts that manage model-database interaction needs to be checked out from source code control and installed.

  5. Crontabs need to be set up to run the scripts at regular intervals in order to generate forecasts.
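For concreteness, the five steps above could be captured in a small bootstrap script with a dry-run mode, which is roughly what any of the candidate tools would automate. Every package name, user name, path, and URL below is a placeholder for illustration, not part of our actual system:

```python
#!/usr/bin/env python
"""Sketch of the server bootstrap, mirroring steps 1-5 above.
All package names, users, paths, and URLs are illustrative placeholders."""

import subprocess

STEPS = [
    # 1. Standard packages: a compiler and a database.
    ("install packages", ["apt-get", "install", "-y", "gfortran", "postgresql"]),
    # 2. Custom scientific model built from source (placeholder tarball).
    ("build model", ["sh", "-c",
     "tar xzf wave-model.tar.gz && cd wave-model && "
     "./configure && make && make install"]),
    # 3. Dedicated user to own the database and run the model.
    ("create user", ["useradd", "--system", "--create-home", "forecast"]),
    # 4. Glue scripts checked out from source control.
    ("checkout scripts", ["git", "clone",
     "https://example.org/forecast-scripts.git", "/opt/forecast-scripts"]),
    # 5. Cron entry that regenerates the forecast every six hours.
    ("install crontab", ["sh", "-c",
     "echo '0 */6 * * * /opt/forecast-scripts/run_forecast.sh'"
     " | crontab -u forecast -"]),
]

def run_steps(steps, dry_run=True):
    """Run each step in order; with dry_run=True just report the commands."""
    executed = []
    for name, cmd in steps:
        executed.append((name, cmd))
        if not dry_run:
            subprocess.check_call(cmd)
    return executed

if __name__ == "__main__":
    for name, cmd in run_steps(STEPS, dry_run=True):
        print("%-18s %s" % (name, " ".join(cmd)))
```

The point of writing it this way is that steps 1, 3, 4, and 5 map directly onto built-in resources in a tool like Puppet, while step 2 is the odd one out: it is an arbitrary build pipeline that the tool can only treat as an opaque command.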

I have been pondering applying tools such as Puppet, Capistrano, or Fabric to automate the above steps. It seems perfectly possible to implement most of this functionality, but there are a couple of use cases I am wondering about:

  • During my preliminary research, I have found few examples and little discussion on how to use these systems to abstract and automate the process of building custom components from source.

  • We may have to deploy on machines that are isolated from the Internet; that is, all configuration and setup files will have to arrive on a USB key inserted into a terminal that can connect to the server that will run the models.
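Whatever tool we end up with, the air-gapped case means it needs a notion of "fetch from local media first, network second". A minimal sketch of that resolution logic, with hypothetical directory and URL names:

```python
import os

def resolve_source(name, usb_mirror="/media/usb/mirror",
                   fallback_url="https://example.org/packages"):
    """Prefer a copy of `name` staged on the USB mirror; fall back to
    the network URL only when no local copy is present.
    Both the mirror path and the URL are illustrative placeholders."""
    local = os.path.join(usb_mirror, name)
    if os.path.exists(local):
        return local
    return "%s/%s" % (fallback_url, name)
```

On a connected development box the mirror directory is simply absent and everything comes from the network; on the isolated production server the same configuration runs unchanged against the USB key.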

I see this as an opportunity to learn a new tool that will help me automate my workflow, but I am unsure which tool I should start with. If any member of the community could suggest a tool that would support the above workflow and the issues specific to scientific computing, I would be very grateful.

Our production server will be running Linux, but support for OS X would be a bonus, as it would allow the development team to set up test installations outside of VirtualBox.

© Server Fault or respective owner
