Search Results

Search found 306 results on 13 pages for 'the daemons advocate'.


  • Connect two daemons in Python

    - by Simon
    What is the best way to connect two daemons in Python? I have daemon A and daemon B, and I'd like A's module to receive data generated by B (maybe bidirectionally). Both daemons support plugins, so I'd like to put the communication in plugins. What's the best cross-platform way to do that? I know a few low-level mechanisms - shared memory (C/C++), Linux pipes, sockets (TCP/UDP), etc. - and a few high-level ones - message queues (JMS, RabbitMQ), RPC. Both daemons will run on the same host, but obviously the better approach is to abstract away the connection type. What are the typical solutions/libraries in Python? I'm looking for an elegant and lightweight solution. I don't need an external server, just two processes talking to each other. What should I use in Python to do that?
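
    One lightweight, server-free option that stays in the standard library is multiprocessing.connection, which provides authenticated message channels over TCP or UNIX sockets and pickles Python objects for you. A minimal sketch for a modern Python 3 (the port, authkey and payload are illustrative, not from the question):

        # daemon_b.py - the producing side (sketch)
        from multiprocessing.connection import Listener

        with Listener(('localhost', 6000), authkey=b'secret') as listener:
            with listener.accept() as conn:
                conn.send({'event': 'sample', 'value': 42})  # any picklable object

        # daemon_a.py - the receiving side (sketch)
        from multiprocessing.connection import Client

        with Client(('localhost', 6000), authkey=b'secret') as conn:
            print(conn.recv())  # -> {'event': 'sample', 'value': 42}

    The connection is symmetric - either side can send and recv - so the same channel covers the bidirectional case, and a plugin would simply own one end of it.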

    Read the article

  • How to advocate Stack Overflow at work

    - by Gordon
    I am thinking of doing a short presentation at work about using Stack Overflow as a resource for your day job. Has anyone had experience doing this? Would you deem it a valid resource to tell your colleagues about, or is it similar to telling them about Google as a resource? Is there a better way of doing it? I was leaning toward the asking-questions side of Stack Overflow rather than answering, to avoid the you-shouldn't-be-doing-this-on-work-time argument. Just as a follow-up: originally I didn't want to make the question too specific to my own case. My presentation will only be a quick four-minute talk, which I will repeat over an hour to different groups. I may ask a question on Stack Overflow before the talk and refer to it during the presentation; hopefully I will get some activity during the hour. I am also going to talk briefly about some of the other Stack Exchange sites that would fit the audience, as they are not all developers. I think Super User, Server Fault and Programmers should work well. I will not be doing the presentation for another couple of months as it has been rescheduled, but I will update on how I got on.

    Read the article

  • How can I relaunch juju daemons without stop/starting local LXC VMs

    - by vaab
    I'm using juju in local mode with LXC. I updated juju on the host with a classic:

        apt-get install juju

    and then got plenty of Python errors in juju debug-log, probably because the juju processes weren't restarted. In order to restart the juju processes, I had to do a complete:

        juju destroy-environment
        juju bootstrap

    which deleted all contents in /var/lib/lxc. I was warned, and this was not an issue this time, but it will be once I'm in a production environment. Is there a way to relaunch the juju daemons without stop/starting the local LXC VMs?

    Read the article

  • Daemons die with bus error when their binaries live on NFS

    - by mbac32768
    We have some daemons executing on a number of hosts. The daemon executable images are very large binaries that are hosted on NFS. When the binaries are updated on the NFS server, the previously running daemons sometimes drop dead with a Bus error. I'm assuming what's happening is that the NFS server is replacing the binaries in a way that's invisible to the VFS layer on the NFS clients, so they end up loading pages from the updated binary, which of course leads to madness. We tried moving the new binaries into place instead of cp, but that doesn't seem to fix it. I'm considering simply mlock()'ing the binary in the daemon startup script, but surely there are magic NFS options or semantics that we should be abusing. Is there a better way to fix this?
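
    Since even a rename can leave NFS clients holding a stale file handle once the old inode is recycled on the server, one approach that sidesteps the problem entirely is to never reuse a binary's name: install each build under a versioned name and flip a symlink, deleting old versions only after the daemons running them have restarted. A sketch of that deploy step, in Python for illustration (the paths and naming scheme are hypothetical):

        import os

        def install_versioned(staged_path, base, version):
            """Publish base-<version> and atomically repoint the 'base' symlink.

            The previous version's file is neither modified nor unlinked here,
            so daemons still executing it keep paging from an intact inode.
            """
            target = '%s-%s' % (base, version)
            os.rename(staged_path, target)      # staged on the same filesystem
            tmp_link = base + '.new'
            if os.path.lexists(tmp_link):       # clean up a failed earlier run
                os.unlink(tmp_link)
            os.symlink(os.path.basename(target), tmp_link)
            os.rename(tmp_link, base)           # atomic symlink swap

        # install_versioned('/nfs/bin/.staging/mydaemon', '/nfs/bin/mydaemon', '42')

    Daemons started after the flip exec the new version through the symlink; the ones already running keep their own file until you garbage-collect it.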

    Read the article

  • Daemons did not start automatically on Ubuntu 10.04

    - by Anton Prokofiev
    Hello, all! I have a strange behavior on Ubuntu 10.04: a few daemons (apache2 and postgresql, 8.4SS from EnterpriseDB) do not start automatically. The funny thing is that from time to time they do. (If I just restart my computer everything looks OK, but if I turn it off for the night, nothing works, so I have to start them manually.) I've googled this problem a little bit, but the only answer I found was to run:

        sudo update-rc.d apache2 defaults

    I ran it, but the answer was: System start/stop links for /etc/init.d/apache2 already exist. Any ideas?

    Read the article

  • Should I advocate migrating from Access to (My)SQL

    - by HotOil
    Hi: We have a Windows MFC app that is written against an Access database on a company server. The db is not that big: 19 MB. There are at most 2-3 users accessing it at any one time. It is used in a factory environment where access speed (or lack thereof) over the intranet becomes noticeable, as it is part of the manufacturing time for our widgets. The scenario is this: as each widget is completed, it gets a record in the db. By the end of the year the db is larger, and searching for a record takes longer and longer. The solution so far has been to manually move older records to an archival table about once a year. We are reworking other portions of this app right now, and it would be a good time to move to another db if we are going to do it. It is my understanding that if we were using a SQL server, the search time would not go up as the table gets bigger, because the entire .mdb does not have to be sent over the network each time. Is this correct? Does anyone have any insight about whether it could be worth the trouble (time and money) of migrating to a new db, or should I just add more functionality to the application we have now, maybe automatically purge the older records from time to time, and add additional facilities to the app to get at the older records when needed? Thanks for any wisdom you can share.
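
    That understanding is basically right: Access/Jet is a file-server database, so each client pulls index and data pages of the shared .mdb across the network, while a client/server database executes the query on the server and only the matching rows cross the wire; an index then keeps lookups fast as the table grows. A minimal sketch of what the lookup becomes against MySQL, assuming the MySQL Connector/Python package and a hypothetical widgets table:

        import mysql.connector  # pip install mysql-connector-python

        cnx = mysql.connector.connect(host='factory-db', user='app',
                                      password='...', database='widgets')
        cur = cnx.cursor()

        # With an index on serial_no this stays fast as the table grows;
        # only the matching row travels over the network, never the whole file.
        cur.execute("SELECT serial_no, completed_at FROM widget_records"
                    " WHERE serial_no = %s", ('W-12345',))
        print(cur.fetchone())

        cnx.close()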

    Read the article

  • Hang while starting several daemons [solved]

    - by Adrian Lang
    I’m running a Debian Squeeze AMD64 server. The target runlevel after boot is runlevel 2, which includes rsyslogd, cron, sshd and some other stuff, but not dovecot, postfix, apache2, etc. The system fails to reach runlevel 2, with several symptoms:

    - The system hangs while trying to start rsyslogd.
    - Booting into runlevel 1 works, and login from the console works.
    - Starting rsyslogd from runlevel 1 via /etc/init.d/rsyslog hangs.
    - Starting runlevel 2 with rsyslogd disabled works, but then logging in via console fails: I get the motd, and then nothing.
    - Starting sshd from runlevel 1 succeeds, but then I cannot log in via ssh. Sometimes password ssh login gives me the motd and then nothing, sometimes not even that. Trying to offer a public key seems to annoy sshd enough to not talk to me any further.
    - When rebooting from runlevel 1, the server hangs while trying to stop apache2 (which is not running, so this really should be trivial). Trying to stop apache2 when logged in in runlevel 1 hangs as well.

    And that’s just the stuff that fails every time. RAM has been tested, and dmesg shows no problems. I have no clue.

    Update: (shortened) output from rsyslogd -c4 -d called in runlevel 1:

        rsyslogd 4.6.4 startup, compatibility mode 4, module path ''
        caller requested object 'net', not found (iRet -3003)
        Requested to load module 'lmnet'
        loading module '/user/lib/rsyslog/lmnet.so'
        module of type 2 being loaded
        conf.c requested ref for 'lmnet', refcount 1
        rsylog runtime initialized, version 4.6.4, current users 1
        syslogd.c requested ref for 'lmnet', refcount now 2

    I can kill rsyslogd with Ctrl+C, then. /var/log shows none of the configured log files, though.

    Update 2: Thanks to @DerfK I still have no clue, but at least I narrowed down the problem. I’m now testing with /etc/init.d/apache2 stop (without an apache2 running, of course), which hangs as well and looks like an even more obvious failure. After some testing I found out that a script consisting of the single line

        /usr/sbin/apache2ctl configtest > /dev/null 2>&1

    hangs, while the same line executed in an interactive shell works. I was not able to reduce this line further, i.e. every single part, the stream redirections and the command itself, is necessary to reproduce the hang. @DerfK also pointed me to strace, which gave a shallow hint about what kind of hang we have here:

        wait4(-1, ...                                        for the init scripts
        futex(0xsomepointer, FUTEX_WAIT_PRIVATE, 2, NULL     for the rsyslogd / apache2 binaries called by the init scripts

    The system was installed as a Debian Lenny by my hoster in autumn 2011; I upgraded it to Squeeze immediately and kept it up to date with Squeeze, which was testing at the time. There were no big changes, though. I guess I never tried to reboot the system before.

    Update 3: I found the problem. My /etc/nsswitch.conf specified ldap as a backup for hosts lookups, and LDAP is not available at that point of the boot. Relying solely on dns fixes my boot problems.
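
    For anyone hitting the same symptom: the culprit is the hosts line in /etc/nsswitch.conf. The exact entries below are an assumption (the question only says ldap was listed as a hosts fallback), but the shape of the fix looks like this:

        # /etc/nsswitch.conf - before: every name lookup can block on LDAP,
        # which is unreachable this early in the boot
        hosts: files dns ldap

        # after: boot-safe
        hosts: files dns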

    Read the article

  • CentOS send mail with external SMTP server and without local daemons

    - by Vilx-
    I've got a little old server with CentOS 6.5 on it. The hardware is old and crappy, but enough for what it has to do, which consists of SSH (+SFTP), Apache, PHP and MySQL. Still, I'm trying to cut away all that I can. One thing it does not need to do is be an SMTP server. There are no mailboxes on it and nobody will ever route mail through it. However, I do want it to send me an email when something goes wrong, and the webpages will send emails from PHP. So that brings me to the question: can I set up the mail system in such a way that there isn't an expensive mailer daemon sitting in the background with queues and whatnot, but rather every email is directly and immediately delivered to an external SMTP server? And how do I go about it?
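
    For the monitoring scripts, one daemon-free pattern is to speak SMTP to the external relay directly at send time (PHP can do the same through an SMTP mailer library instead of mail()). A minimal sketch for Python 3, with a hypothetical relay host and credentials:

        import smtplib
        from email.message import EmailMessage

        msg = EmailMessage()
        msg['From'] = 'server@example.com'       # hypothetical addresses
        msg['To'] = 'admin@example.com'
        msg['Subject'] = 'something went wrong'
        msg.set_content('details of the failure go here')

        # Talk straight to the external relay - no local MTA, no queue.
        with smtplib.SMTP('smtp.example.com', 587) as smtp:
            smtp.starttls()
            smtp.login('server@example.com', 'app-password')
            smtp.send_message(msg)

    The trade-off of having no local queue is that if the relay is unreachable the send fails on the spot, and the caller has to handle the error itself.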

    Read the article

  • Delayed job problem in Rails

    - by krunal shah
    My controller, data_files_controller.rb:

        def upload_balances
          DataFile.load_balances(params)
        end

    My model, data_file.rb:

        def self.load_balances(params)
          # Pull the file out of the http request, write it to file system
          name = params['Filename']
          directory = "public/uploads"
          errors_table_name = "snapshot_errors"
          upload_file = File.join(directory, name)
          File.open(upload_file, "wb") { |f| f.write(params['Filedata'].read) }
          # Remove the old data from the table
          Balance.destroy_all
          # ------ more code -----
        end

    It's working fine. Now I want to use delayed_job in my controller to call my model method, like this in data_files_controller.rb:

        def upload_balances
          DataFile.send_later(:load_balances, params)
        end

    Is that possible? What's the other way to do it? Does it create any problem? With send_later I am getting this error in the last_error column of the delayed_jobs table:

        uninitialized stream
        C:/cyncabc/app/models/data_file.rb:12:in `read'
        C:/cyncabc/app/models/data_file.rb:12:in `load_balances'
        C:/cyncabc/app/models/data_file.rb:12:in `open'
        C:/cyncabc/app/models/data_file.rb:12:in `load_balances'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/performable_method.rb:35:in `send'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/performable_method.rb:35:in `perform'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/backend/base.rb:66:in `invoke_job'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/worker.rb:120:in `run'
        c:/ruby/lib/ruby/1.8/timeout.rb:62:in `timeout'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/worker.rb:120:in `run'
        c:/ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.8/lib/active_support/core_ext/benchmark.rb:10:in `realtime'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/worker.rb:119:in `run'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/worker.rb:180:in `reserve_and_run_one_job'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/worker.rb:104:in `work_off'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/worker.rb:103:in `times'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/worker.rb:103:in `work_off'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/worker.rb:78:in `start'
        c:/ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.8/lib/active_support/core_ext/benchmark.rb:10:in `realtime'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/worker.rb:77:in `start'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/worker.rb:74:in `loop'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/worker.rb:74:in `start'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/command.rb:93:in `run'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/command.rb:72:in `run_process'
        c:/ruby/lib/ruby/gems/1.8/gems/daemons-1.0.10/lib/daemons/application.rb:215:in `call'
        c:/ruby/lib/ruby/gems/1.8/gems/daemons-1.0.10/lib/daemons/application.rb:215:in `start_proc'
        c:/ruby/lib/ruby/gems/1.8/gems/daemons-1.0.10/lib/daemons/application.rb:225:in `call'
        c:/ruby/lib/ruby/gems/1.8/gems/daemons-1.0.10/lib/daemons/application.rb:225:in `start_proc'
        c:/ruby/lib/ruby/gems/1.8/gems/daemons-1.0.10/lib/daemons/application.rb:255:in `start'
        c:/ruby/lib/ruby/gems/1.8/gems/daemons-1.0.10/lib/daemons/controller.rb:72:in `run'
        c:/ruby/lib/ruby/gems/1.8/gems/daemons-1.0.10/lib/daemons.rb:188:in `run_proc'
        c:/ruby/lib/ruby/gems/1.8/gems/daemons-1.0.10/lib/daemons/cmdline.rb:105:in `call'
        c:/ruby/lib/ruby/gems/1.8/gems/daemons-1.0.10/lib/daemons/cmdline.rb:105:in `catch_exceptions'
        c:/ruby/lib/ruby/gems/1.8/gems/daemons-1.0.10/lib/daemons.rb:187:in `run_proc'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/command.rb:71:in `run_process'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/command.rb:65:in `daemonize'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/command.rb:63:in `times'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/command.rb:63:in `daemonize'
        script/delayed_job:5

    Without send_later it works fine. Is there any solution?

    Read the article

  • Installing Daemons

    - by Shahmir Javaid
    Dear Stackoverflowers, a simple link would be nice for me to understand how to install my C++ program as a daemon in UNIX. Now, I know some will say this should be on Server Fault, but as far as I understand it I need the init.d shell script to actually create the start and stop for the daemon. If you guys can show me a simple shell script for the daemon, and the file directories everything required is associated with, that would be great. I was going to follow http://www.linux.com/archive/feed/46892 but if you read the comments everyone is moaning x(. P.S. I've already written the C++ code to run as a daemon; I just need to know how to actually install it as a daemon. At the moment I'm using crontab, which is just not a good idea for the future of my project. Thanks in advance.

    Read the article

  • Ruby daemons and frequency

    - by mplacona
    Hi, I've written this Ruby daemon, and was wondering if somebody could have a look at it and tell me whether the approach I've taken is correct:

        #!/usr/bin/env ruby
        require 'logger'

        # You might want to change this
        ENV["RAILS_ENV"] ||= "production"

        require File.dirname(__FILE__) + "/../../config/environment"

        $running = true
        Signal.trap("TERM") do
          $running = false
        end

        service = Post.new('http://feed.com/feeds')
        logger = Logger.new('reader.log')

        while($running) do
          # Log my calls
          logger.info "Run at #{Time.now}"
          service.update_from_feed_continuously

          # only run it every 5 minutes or so
          sleep 300
        end

    I feel like this last loop is not quite the right thing to do and can be memory intensive, but I'm not sure. Also, the 5 minutes never seem to happen exactly every 5 minutes; I'll see variations of 4-6 minutes. Thanks in advance.
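
    On the interval drift: sleeping a fixed 300 seconds after each run makes the period 300 seconds plus however long update_from_feed_continuously took, and signal deliveries can cut a sleep short, so the observed spacing wanders. The usual fix is to schedule against the clock rather than sleeping a fixed amount. A language-agnostic sketch of that pattern, in Python for illustration (do_work is a placeholder):

        import time

        INTERVAL = 300  # seconds

        def do_work():
            pass  # placeholder for the real job

        next_run = time.monotonic()
        running = True
        while running:
            do_work()
            # Step the deadline forward from the previous deadline, not from
            # "now", so the period stays at INTERVAL however long the work took.
            next_run += INTERVAL
            delay = next_run - time.monotonic()
            if delay > 0:
                time.sleep(delay)
            else:
                next_run = time.monotonic()  # the work overran; resync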

    Read the article

  • How would you advocate not using a shared spreadsheet to track bugs/issues?

    - by Sylvain Defresne
    In our company, the developers want to use a proper bug tracking tool to manage issues in our application. The management, however, insists on using a shared spreadsheet (formerly a shared Excel file, now a spreadsheet on a web-based solution allowing concurrent access). Their argument is that the spreadsheet allows them to get a more high-level view of the state of the project, as they can see how many bugs are open at a quick glance. It also allows them to see who is working on each bug, and to get an estimate of the time required to close them all (as developers are required to fill in a time estimate for the bug they are working on). As you can understand, this is not really practical for the developers to use (bug tracking software was invented for a reason). So how can I advocate bug tracking software to ease the work of the developers? As a bonus, which software would you recommend that would give the management the feedback they want (number of open bugs, who is working on them, time estimates) in a high-level view?

    Read the article

  • Firing a Perl script through HTTP to send signals to daemons

    - by Eric Fortis
    Hello guys, I'm using apache2 on Ubuntu. I have a Perl script which basically reads the file names of a directory, then rewrites a text file, then sends a signal to a daemon. How can this be done, as securely as possible, through a web page? Actually I can run the code below, but not if I remove the comments. I'm looking for advice considering:

    - Using HTTP requests?
    - How about Apache file permissions on the directory shown in the code?
    - Is htaccess enough to enable user/pass access to the cgi?
    - Should I use a database instead of writing to a file, and run a cron job querying the db, with permission granted to it to write and send the signal?
    - Granting as few permissions as possible to the webserver.
    - Should I set up a VPN?

        #!/usr/bin/perl -wT
        use strict;
        use CGI;

        #@fileList = </home/user/*>;  #read a directory listing

        my $query = CGI->new();
        print $query->header( "text/html" ),
              $query->p( "FirstFileNameInArray" ),
              #$query->p( $fileList[0] ),  #output the first file in directory
              $query->end_html;
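
    The pattern the last bullets point toward is privilege separation: the CGI script, running as the unprivileged web user, only drops a small validated request into a spool area (a file or a database row), while a separate privileged job picks requests up, rewrites the real file, and signals the daemon, so the webserver itself never needs those rights. A sketch of the privileged side, in Python for illustration (the spool path, pidfile and request format are all hypothetical):

        import os
        import signal

        SPOOL = '/var/spool/myapp'          # the only place the web user can write
        PIDFILE = '/var/run/mydaemon.pid'   # hypothetical daemon pidfile

        def process_requests():
            """Run from root's cron: apply queued requests, then nudge the daemon."""
            handled = False
            for name in sorted(os.listdir(SPOOL)):
                path = os.path.join(SPOOL, name)
                with open(path) as f:
                    request = f.read().strip()
                if request == 'rewrite':     # validate strictly; ignore junk
                    rewrite_text_file()       # hypothetical helper
                    handled = True
                os.unlink(path)
            if handled:
                with open(PIDFILE) as f:
                    os.kill(int(f.read()), signal.SIGHUP)  # ask the daemon to reload

        def rewrite_text_file():
            pass  # regenerate the text file from trusted data only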

    Read the article

  • What is good practice for writing web applications that control daemons (and their config files)

    - by Jones R
    Can someone suggest some basic advice on dealing with web applications that interact with configuration files like httpd.conf, BIND zone files, etc.? I understand that it's bad practice, in fact very dangerous, to allow arbitrary execution of code without fully validating it, and so on. But say you are tasked with writing a small app that allows one to add vhosts to an Apache configuration. Do you have your code execute with full privileges, or do you write future variables into a database and have a cron job (with full privileges) execute a script that pulls the vars from the database and throws them into a template config file, etc.? Some thoughts and contributions on this issue would be appreciated. tl;dr - how can you securely write a web app to update/create entries in a config file like Apache's httpd.conf, etc.?
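
    The database-plus-template approach sketched in the question has a nice property: user input is never executed, it only ever fills named slots in a fixed template, after whitelist validation. A minimal sketch of that rendering step, in Python for illustration (the template and validation rules are assumptions):

        import re
        from string import Template

        VHOST_TEMPLATE = Template("""\
        <VirtualHost *:80>
            ServerName $server_name
            DocumentRoot $doc_root
        </VirtualHost>
        """)

        HOSTNAME_RE = re.compile(
            r'^[a-z0-9]([a-z0-9-]*[a-z0-9])?(\.[a-z0-9]([a-z0-9-]*[a-z0-9])?)+$')

        def render_vhost(server_name, doc_root):
            # Reject anything that fails the whitelist rather than trying to
            # escape it, so nothing unexpected ever reaches httpd.conf.
            if not HOSTNAME_RE.match(server_name):
                raise ValueError('bad hostname: %r' % server_name)
            if not re.match(r'^/srv/www/[a-z0-9_-]+$', doc_root):
                raise ValueError('bad docroot: %r' % doc_root)
            return VHOST_TEMPLATE.substitute(server_name=server_name,
                                             doc_root=doc_root)

        # print(render_vhost('example.com', '/srv/www/example'))

    The privileged cron job would then pull validated rows from the database, render them through this template, and write the result to the real config file, keeping the web-facing code entirely unprivileged.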

    Read the article

  • Is LSB's init script function "start_daemon" really used for real daemons or should I stick to start-stop-daemon?

    - by Fred
    In the context of init scripts, according to the LSB specification, "Each conforming init script shall execute the commands in the file /lib/lsb/init-functions", which then defines a couple of functions to be used when dealing with daemons. One of those functions is start_daemon, which obviously "runs the specified program as a daemon" while checking whether the daemon is already running. I'm in the process of daemonizing a service app of mine, and I'm looking at how other daemons are run to try to "fit in". In the process of looking at how it's done elsewhere, I noticed that not a single daemon on my Ubuntu 10.04 machine uses start_daemon. They all call start-stop-daemon directly. The same goes for my Fedora 14 machine. Should I try to play nice and be the first one to use start_daemon, or is there really no point, and start-stop-daemon is the way to go since everybody is already using it? Why are there no daemons using LSB's functions?

    Read the article

  • Are there any downsides in using C++ for network daemons?

    - by badcat
    Hey guys! I've been writing a number of network daemons in different languages over the past years, and now I'm about to start a new project which requires a new custom implementation of a proprietary network protocol. The protocol is pretty simple - some basic JSON-formatted messages which are transmitted in some basic frame wrapping so clients know that a message has arrived completely and is ready to be parsed. The daemon will need to handle a number of connections (about 200 at the same time), do some management of them, and pass messages along, like in a chat room. In the past I've mostly used C++ to write my daemons, often with the Qt4 framework (the network parts, not the GUI parts!), because that's what I also used for the rest of the projects and it was simple to do and very portable. This usually worked just fine, and I didn't have much trouble. Having been a Linux administrator for a good while now, I've noticed that most of the network daemons in the wild are written in plain C (of course some are written in other languages too, but I get the feeling that 80% of the daemons are written in plain C). Now I wonder why that is. Is this due to a purely historic UNIX background (like KISS), or for plain portability, or reduction of bloat? What are the reasons not to use C++ or any "higher-level" languages for things like daemons? Thanks in advance! Update 1: For me, using C++ is usually more convenient because I have objects with getter and setter methods and such. Plain C's "context" objects can be a real pain at some point, especially when you are used to object-oriented programming. Yes, I'm aware that C++ is a superset of C, and that C code is basically C++. But that's not the point. ;)

    Read the article

  • How to make/monitor/deploy daemon processes in JRuby

    - by nazdrug
    I'm currently porting a Rails app that uses REE over to JRuby, so I can offer an easy-to-install JRuby alternative. I've bundled the app into a WAR file using Bundler, which I'm currently deploying to GlassFish. However, this app has a couple of daemon processes, and it would be ideal if these could be part of the WAR file and potentially monitored by GlassFish (if possible). I've looked at QuartzScheduler, and while it meets my needs for a couple of things, I have a daemon process that must execute every 20 seconds, as it polls the database for any delayed mail to send. If anyone can provide any insight as to how best to set up daemon processes in a JRuby/Java/Glassfish environment, any help will be greatly appreciated! :)

    Read the article

  • Log rotation daemons (e.g. logadm) versus custom bash scripts?

    - by victorhooi
    Hi, we have a number of applications that generate fairly large (500 MB a day) logfiles that we need to archive/compress on a daily basis. Currently, log rotation/moving/compression is done either via custom bash scripts scheduled through cron, or in the application code itself. What (if any) are the advantages of using a system daemon like logadm? (These are Solaris boxes.) Cheers, Victor

    Read the article

  • Can a pool of memcache daemons be used to share sessions more efficiently?

    - by Tom
    We are moving from a one-webserver setup to a two-webserver setup, and I need to start sharing PHP sessions between the two load-balanced machines. We already have memcached installed (and started), so I was pleasantly surprised that I could accomplish sharing sessions between the new servers by changing only 3 lines in the php.ini file (the session.save_handler and session.save_path). I replaced:

        session.save_handler = files

    with:

        session.save_handler = memcache

    Then on the master webserver I set the session.save_path to point to localhost:

        session.save_path="tcp://localhost:11211"

    and on the slave webserver I set the session.save_path to point to the master:

        session.save_path="tcp://192.168.0.1:11211"

    Job done, I tested it and it works. But... Obviously, using memcache means the sessions are in RAM and will be lost if a machine is rebooted or the memcache daemon crashes. I'm a little concerned by this, but I am a bit more worried about the network traffic between the two webservers (especially as we scale up), because whenever someone is load-balanced to the slave webserver their sessions will be fetched across the network from the master webserver. I was wondering if I could define two save_paths, so the machines look in their own session storage before using the network. For example:

        Master: session.save_path="tcp://localhost:11211, tcp://192.168.0.2:11211"
        Slave:  session.save_path="tcp://localhost:11211, tcp://192.168.0.1:11211"

    Would this successfully share sessions across the servers AND help performance? i.e. save network traffic 50% of the time. Or is this technique only for failover (e.g. when one memcache daemon is unreachable)? Note: I'm not really asking specifically about memcache replication - more about whether the PHP memcache client can peek inside each memcache daemon in a pool, return a session if it finds one, and only create a new session if it doesn't find one in all the stores. As I'm writing this I'm thinking I'm asking a bit much from PHP, lol... Assume: no sticky sessions, round-robin load balancing, LAMP servers.

    Read the article

  • How do I roll back Capistrano tasks upon failure?

    - by dchua
    In my Capistrano deploy.rb, I have a couple of daemons, like delayed_job and fetcher, starting and stopping depending on where they are in the deployment process. This approach creates problems if a deployment fails, because the daemons wouldn't be managed properly (i.e. two processes spawned instead of one, or processes shut down without being restarted until the next deployment). Is there any way to prevent this from happening, like rollback code? How is deployment of daemons normally done with Capistrano?

    Read the article
