Search Results

Search found 28 results on 2 pages for 'disown'.


  • disown a process in ksh

    - by fahdshariff
    The "disown" command works in bash, but not in ksh. If I have started a process in ksh, how can I "disown" it, so I can exit my shell. (I know about nohup, but the process has already started!)

    Read the article

  • Restarting shell script with &disown using Monit

    - by Solas Admin
    I have a shell script that runs a C++ backend mail system (PluginHandler). I need to monitor this process in Monit and restart it if it fails. The script: export LD_LIBRARY_PATH=/usr/local/lib/:/CONFIDENTAL/CONFIDENTAL/Common/ cd PluginHandler/ ./PluginHandler This script does not have a PID file, and we run it by executing ./rundaemon.sh &disown. ./PluginHandler starts the process and starts logging into /etc/output/output.log. I stop the process by identifying the process ID with [ps -f | grep PluginHandler] and then killing it. I can check the process in Monit just fine, and I think Monit does start the process if it is not running, but it can't do &disown, so the process ends as soon as it starts. This is the code in the monitrc file for checking this process: check process Backend matching "PluginHandler" if not exist then alert start "PATH/TO/SCRIPT/rundaemon.sh &disown" alert [email protected] only on {timeout} with mail-format {subject: "[BLAH"} I tried to stop the script from terminating by modifying it as follows, but this does not work either: export LD_LIBRARY_PATH=/usr/local/lib/:/home/CONFIDENTAL/production/CONFIDENTAL/Common/ cd PluginHandler/ (nohup ./PluginHandler &) return Any help writing proper Monit rules to resolve this issue would be greatly appreciated. :)
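
    One hedged way around the &disown problem is to have the wrapper script detach the daemon itself and write a PID file, so Monit never has to background anything; all paths below are placeholders.

        #!/bin/sh
        # rundaemon.sh (sketch): detach PluginHandler and record its PID
        export LD_LIBRARY_PATH=/usr/local/lib/
        cd /path/to/PluginHandler || exit 1
        nohup ./PluginHandler >> /etc/output/output.log 2>&1 &
        echo $! > /var/run/pluginhandler.pid

    Monit could then supervise it with "check process Backend with pidfile /var/run/pluginhandler.pid" plus plain start/stop program lines, instead of matching on the process name and relying on the shell's &disown.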

    Read the article

  • How do I execute a shell-command in the background?

    - by Adobe
    Here's a simple defun to run a shell script: (defun bk-konsoles () "Calls: bk-konsoles.bash" (interactive) (shell-command (concat (expand-file-name "~/its/plts/goodies/bk-konsoles.bash ") (if (buffer-file-name) (file-name-directory (buffer-file-name))) " &") nil nil)) If I start the program with no ampersand, it starts the script but blocks Emacs until I close the program; if I do put an ampersand, it gives an error: /home/boris/its/plts/goodies/bk-konsoles.bash /home/boris/scl/geekgeek/: exited abnormally with code 1. Edit: So now I'm using: (defun bk-konsoles () "Calls: bk-konsoles.bash" (interactive) (shell-command (concat (expand-file-name "~/its/plts/goodies/bk-konsoles.bash ") (if (buffer-file-name) (file-name-directory (buffer-file-name))) " & disown") nil nil) (kill-buffer "*Shell Command Output*")) Edit 2: Nope - doesn't work: (defun bk-konsoles () "Calls: bk-konsoles.bash" (interactive) (let ((curDir default-directory)) ;; (shell-command (concat "nohup " (expand-file-name "~/its/plts/goodies/bk-konsoles.bash ") curDir) nil nil) (shell-command (concat (expand-file-name "~/its/plts/goodies/bk-konsoles.bash ") curDir "& disown") nil nil) (kill-buffer "*Shell Command Output*"))) It keeps Emacs busy - either with disown or with nohup. Here's the script I'm running, in case it helps: bk-konsoles.bash
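
    Independent of the elisp, shell-command only returns immediately if the child is fully detached from the pipes Emacs creates, so one hedged fix is on the shell side; the konsole invocation below is an assumption about what bk-konsoles.bash does.

        #!/bin/bash
        # bk-konsoles.bash (sketch): detach completely so the calling Emacs is not kept busy
        dir="${1:-$PWD}"
        setsid konsole --workdir "$dir" </dev/null >/dev/null 2>&1 &
        exit 0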

    Read the article

  • Clipboard: Copy image, get path back when pasting in file input.

    - by disown
    Hi all. First question here on Super User. I sometimes get ad-hoc bug reports from customers, which I need to transfer to our online bug tracker. This works fine for text, but pictures are tedious. I'm looking for a way to copy-paste images from documents (like Excel sheets) so that if you paste an image into a file input (or text input) on an HTML page, the file is automatically written to disk (a tmp dir) and the path is written to the file input field. This question is related to Directly paste clipboard image into gmail message, but I would like to ask if there is a solution using a local program only. I'm interested in solutions for all OSes.

    Read the article

  • Strip X Window System from OpenRD (Fedora core 8)

    - by disown
    I just got myself an OpenRD Client embedded box with an old Fedora 8 on it. It runs like a charm; the only problem is that the default install has X on it, which I would like to remove in order to free up some space. I found a similar post here on the same topic, but yum groupremove doesn't work in my case: yum grouplist returns nothing, so I presume that no groups are defined. I don't know if this is because Fedora 8 didn't have any groups, or because the OpenRD build is special in some way. In any case, I would like some tips on which package(s) to remove in order to get rid of as big a chunk of X as possible. Thanks in advance.
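
    Without group metadata, one hedged approach is to query the installed X packages directly and let yum's dependency resolution drag out the rest; review the proposed removal list before confirming.

        rpm -qa | grep -i '^xorg-x11' | sort     # list the installed X packages
        yum remove xorg-x11-server-Xorg          # the X server itself; pulls in many X-only clients
        yum remove 'xorg-x11*'                   # more aggressive wildcard removal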

    Read the article

  • How to disconnect a running bash job from the shell in Linux?

    - by raven
    I have a script that starts a server on a remote VM. All works great until I close the shell where I executed the script. When the shell closes, so does the server. After some looking around I found the following: & will send the job to the background when appended to the command, and disown -h will disconnect the job from the shell and allow it to run regardless of the shell. The command I used is: ./startServer.sh nasb_wxscat160_catalog-4.1.6 1.0.8 > catalog-log.txt & disown -h When I closed the shell and checked using ps -ef | grep java to see if the job was still working, I did see it in the list. However, when I tried to connect to the server it was unresponsive. On deeper inspection, the log file was filled only up to the point where I closed the shell, and using the ps -m flag I saw that the process's threads were not working. Has anyone encountered something of this sort?
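
    One defensive variant, on the assumption that the server stalled because its stdin and stderr still pointed at the now-closed terminal, redirects all three streams and starts the job in its own session (arguments copied from the question):

        setsid ./startServer.sh nasb_wxscat160_catalog-4.1.6 1.0.8 \
            > catalog-log.txt 2>&1 < /dev/null &

    With stdin, stdout and stderr all detached from the terminal and setsid breaking the tie to the login session, the job has nothing left that can hang or signal it when the terminal goes away.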

    Read the article

  • Long-running transactions structured approach

    - by disown
    I'm looking for a structured approach to long-running (hours or more) transactions. As mentioned here, these types of interactions are usually handled by optimistic locking and manual merge strategies. It would be very handy to have a more structured approach to this type of problem using standard transactions. Various long-running interactions such as user registration, order confirmation etc. all have transaction-like semantics, and it is both error-prone and tedious to invent your own fragile manual roll-back and/or time-out/clean-up strategies. Taking an RDBMS as an example, I realize that there would be a major performance cost associated with keeping all the transactions open. As an alternative, I could imagine a database supporting two isolation levels/strategies simultaneously, one for short-running and one for long-running conversations. Long-running conversations could then, for instance, have stricter limitations on data access to facilitate them taking more time (read-only semantics on some data, optimistic locking semantics, etc.). Are there any solutions which could do something similar?

    Read the article

  • Autotools automatic invocation of lcov after 'make check'

    - by disown
    I have successfully set up an autotools project where the tests compile with instrumentation, so I can get a test coverage report by running lcov in the source dir after a successful 'make check'. I now face the problem that I want to automate this step. I would like to add it to 'make check' or to make it a separate goal, 'make check-coverage'. Ideally I would like to parse the result and fail if the coverage falls below a certain percentage. The problem is that I cannot figure out how to add a custom target at all. The closest I got was finding this example autotools config, but I can't see where in that project the goal 'make lcov' is added. I can only see some configure flags in m4/auxdevel.m4. Any tips?
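
    For reference, the commands such a check-coverage target would wrap look roughly like this (lcov/genhtml flags as documented; the threshold check is left as a sketch):

        make check
        lcov --capture --directory . --output-file coverage.info
        genhtml coverage.info --output-directory coverage-report
        lcov --summary coverage.info    # grep/awk this output to fail below a chosen percentage

    Wrapping these in a phony check-coverage target in the top-level Makefile.am is one way to hook them into the build.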

    Read the article

  • Spring constructor injection of SLF4J logger - how to get injection target class?

    - by disown
    I'm trying to use Spring to inject an SLF4J logger into a class like so: @Component public class Example { private final Logger logger; @Autowired public Example(final Logger logger) { this.logger = logger; } } I've found the FactoryBean class, which I've implemented. But the problem is that I cannot get any information about the injection target: public class LoggingFactoryBean implements FactoryBean<Logger> { @Override public Class<?> getObjectType() { return Logger.class; } @Override public boolean isSingleton() { return false; } @Override public Logger getObject() throws Exception { return LoggerFactory.getLogger(/* how do I get a hold of the target class (Example.class) here? */); } } Is FactoryBean even the right way to go? When using PicoContainer's factory injection, you get the Type of the target passed in. In Guice it is a bit trickier. But how do you accomplish this in Spring?

    Read the article

  • C++ Serialization Clean XML Similar to XSTREAM

    - by disown
    I need to write a Linux C++ app which saves its settings in XML format (for easy hand editing) and also communicates with existing apps through XML messages over sockets and HTTP. The problem is that I haven't been able to find any intelligent libs to help me; I don't particularly feel like writing DOM or SAX code just to write and read some very simple messages. Boost Serialization was almost a match, but it adds a lot of Boost-specific data to the XML it generates, which obviously doesn't work well for interchange formats. I'm wondering if it is possible to make Boost Serialization or some other C++ serialization library generate clean XML. I don't mind if there are some required extra attributes - like a version attribute - but I'd really like to be able to control their naming and also get rid of 'features' that I don't use - tracking_level and class_id, for instance. Ideally I would just like to have something similar to XStream in Java. I am aware of the fact that C++ lacks introspection and that it is therefore necessary to do some manual coding - but it would be nice if there were a clean solution to just read and write simple XML without kludges! If this cannot be done, I am also interested in tools where the XML schema is the canonical resource (contract first) - a good JAXB alternative for C++. So far I have only found commercial solutions like CodeSynthesis XSD; I would prefer open source solutions. I have tried gSOAP, but it generates really ugly code and it is also SOAP-specific. In desperation I also started looking at alternative serialization formats for protocol buffers. This exists - but only for Java! It really surprises me that protocol buffers seem to be a better supported data interchange format than XML. I'm going mad just finding libs for this app and I really need some new ideas. Anyone?

    Read the article

  • Why not use javascript handlers on the body element?

    - by disown
    As an answer to the question 'How do you automatically set the focus to a textbox when a web page loads?', Espo suggests using <body onLoad="document.getElementById('<id>').focus();"> Ben Scheirman replies (without further explanation): Any JavaScript book will tell you not to put handlers on the body element like that. Why would this be considered bad practice? In Espo's answer, an 'override' problem is illustrated. Is this the only reason, or are there any other problems? Compatibility issues?

    Read the article

  • Maven - 'all' or 'parent' project for aggregation?

    - by disown
    For educational purposes I have set up a project layout like so (flat, in order to suit Eclipse better): -product | |-parent |-core |-opt |-all Parent contains an aggregate project with core, opt and all. Core implements the mandatory part of the application. Opt is an optional part. All is supposed to combine core with opt, and has these two modules listed as dependencies. I am now trying to make the following artifacts: product-core.jar product-core-src.jar product-core-with-dependencies.jar product-opt.jar product-opt-src.jar product-opt-with-dependencies.jar product-all.jar product-all-src.jar product-all-with-dependencies.jar Most of them are fairly straightforward to produce. I do have some problems with the aggregating artifacts, though. I have managed to make the product-all-src.jar with a custom assembly descriptor in the 'all' module which downloads the sources for all non-transitive deps, and this works fine. This technique also allows me to make the product-all-with-dependencies.jar. However, I recently found out that you can use the source:aggregate goal in the source plugin to aggregate the sources of the entire aggregate project. The same is true for the javadoc plugin, which also aggregates through the usage of the parent project. So I am torn between my 'all' module approach and ditching the 'all' module to just use the 'parent' module for all aggregation. It feels unclean to have some aggregate artifacts produced in 'parent' and others produced in 'all'. Is there a way of making a 'product-all' jar in the parent project, or of aggregating javadoc in the 'all' project? Or should I just keep both? Thanks
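
    For what it's worth, a sketch of driving both aggregating goals from the reactor root (assuming 'parent' is the aggregator pom in this flat layout):

        cd product/parent
        mvn package source:aggregate javadoc:aggregate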

    Read the article

  • Name that blog entry - Modelling changes over time with two db columns only.

    - by disown
    I vaguely remember reading a blog entry (written by a well-known blogger, I think) about how to model price changes over time, and that you could model most changes by saving two dates only (two columns in a db). The blog talked about prices on a website changing over time and how you could figure out the right price to charge knowing only when the purchase had been made. Very vague, I know, but my google-fu is failing me, everyone on IRC is busy talking about other stuff and I don't know what to do! :)

    Read the article

  • Java abstract visitor - guaranteed to succeed? If so, why?

    - by disown
    I was dealing with Hibernate, trying to figure out the run-time class behind proxied instances by using the visitor pattern. I then came up with an AbstractVisitable approach, but I wonder if it will always produce correct results. Consider the following code: interface Visitable { public void accept(Visitor v); } interface Visitor { public void visit(Visitable visitorHost); } abstract class AbstractVisitable implements Visitable { @Override public void accept(Visitor v) { v.visit(this); } } class ConcreteVisitable extends AbstractVisitable { public static void main(String[] args) { final Visitable visitable = new ConcreteVisitable(); final Visitable proxyVisitable = (Visitable) Proxy.newProxyInstance( Thread.currentThread().getContextClassLoader(), new Class<?>[] { Visitable.class }, new InvocationHandler() { @Override public Object invoke(Object proxy, Method method, Object[] args) throws Throwable { return method.invoke(visitable, args); } }); proxyVisitable.accept(new Visitor() { @Override public void visit(Visitable visitorHost) { System.out.println(visitorHost.getClass()); } }); } } This makes a ConcreteVisitable which inherits the accept method from AbstractVisitable. In C++, I would consider this risky, since this in AbstractVisitable could be referring to AbstractVisitable::this and not ConcreteVisitable::this. I was worried that the code under certain circumstances would print class AbstractVisitable. Yet the code above outputs class ConcreteVisitable, even though I hid the real type behind a dynamic proxy (the most difficult case I could come up with). Is the abstract visitor approach above guaranteed to work, or are there some pitfalls with this approach? What guarantees are given in Java with respect to the this pointer?

    Read the article

  • XML and XSD - use element name as replacement of xsi:type for polymorphism

    - by disown
    Taking the W3C vehicle XSD as an example: <schema xmlns="http://www.w3.org/2001/XMLSchema" targetNamespace="http://cars.example.com/schema" xmlns:target="http://cars.example.com/schema"> <complexType name="Vehicle" abstract="true"/> <complexType name="Car"> <complexContent> <extension base="target:Vehicle"/> ... </complexContent> </complexType> <complexType name="Plane"> <complexContent> <extension base="target:Vehicle"/> <sequence> <element name="wingspan" type="integer"/> </sequence> </complexContent> </complexType> </schema>, and the following definition of 'meansOfTravel': <complexType name="MeansOfTravel"> <complexContent> <sequence> <element name="transport" type="target:Vehicle"/> </sequence> </complexContent> </complexType> <element name="meansOfTravel" type="target:MeansOfTravel"/> With this definition you need to specify the type of your instance using xsi:type, like this: <meansOfTravel> <transport xsi:type="Plane"> <wingspan>3</wingspan> </transport> </meansOfTravel> I would just like to achieve a 'name of type' - 'name of element' mapping so that this could be replaced with just <meansOfTravel> <plane> <wingspan>3</wingspan> </plane> </meansOfTravel> The only way I have found to do this so far is by making it explicit: <complexType name="MeansOfTravel"> <sequence> <choice> <element name="plane" type="target:Plane"/> <element name="car" type="target:Car"/> </choice> </sequence> </complexType> <element name="meansOfTravel" type="target:MeansOfTravel"/> But this means that I have to list all possible sub-types in the 'MeansOfTravel' complex type. Is there no way of making the XML parser assume that you mean a 'Plane' if you call the element 'plane'? Or do I have to make the choice explicit? I would just like to keep my design DRY - if you have any other suggestions (like groups or so) - I would love to hear them.

    Read the article

  • UDP + total order, non-reliable

    - by disown
    I'm trying to find a version of UDP which just alleviates the restriction of a maximum size of the message sent. I don't care about reliability or partial retransmission, if all chunks arrive I want the message to be assembled from the chunks in sending order and delivered to the listening app. If one or more chunks are missing I would just like to discard the message. The goal is to have a low-latency notification mechanism about real time data, but with the added support for bigger messages than what would fit in an IP datagram. I would like the protocol to be one way only, and not have long connection setup times. An optional feature to be able to respond to a received message wouldn't hurt (a concept of an unreliable connection), but is not necessary.

    Read the article

  • Running a program on boot without login, using the screen

    - by configurator
    Preface: I have a server running on an old laptop. The screen is always on with a login prompt, but because its keyboard is in pretty bad shape, I use it exclusively via ssh. The screen is in a good position, though; I want to use it to display a clock and some stats about what my server is doing. I have scripts to display all those things, but I want to always show them on the monitor screen. My question is: how do I get my script (called HUD) to run on /dev/tty1 instead of the login prompt? Hopefully, it should be possible to accept keyboard input as well as display its output, so that it can use the keyboard to show more info where needed in a future version. I'd also like tty2 etc. to remain active as login screens, in case I actually do need to log in locally. For a start, I tried creating a script that I can run from ssh to start the HUD. It goes something like this: ( flock -n 9 watch --interval 0.2 --precise --color --notitle --exec /path/to/script & disown ) 9> /var/lock/hud > /dev/tty1 2> /dev/tty1 < /dev/tty1 (I had to use & disown instead of nohup because nohup recognized the tty and redirects output to nohup.out instead.) This sort-of works. However, it has a few issues: It doesn't steal the terminal's keyboard input, so you can't ctrl+c to get out of it (nor change the script to actually use the keyboard input), and if you press enter it shows it and scrolls the display, never refreshing it correctly afterwards. Oddly, if I disconnect the ssh session which created it, it stops working and shows a message: exec: No such file or directory. If I reconnect to ssh, it resumes functioning properly. It feels hackish. Is there a better way to do this? How?
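
    On a classic sysvinit setup, the usual trick is to let init own tty1 and respawn the HUD there instead of a getty; a hedged /etc/inittab sketch (id field and runlevels vary by distro):

        # /etc/inittab: replace the tty1 getty line with the HUD
        1:2345:respawn:/bin/sh -c '/path/to/hud </dev/tty1 >/dev/tty1 2>&1'

    After editing, telinit q makes init reread the file; tty2 and onward keep their normal getty lines, so local login still works there.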

    Read the article

  • Clarification on signals (sighup), jobs, and the controlling terminal

    - by asolberg
    So I've read two different perspectives and I'm trying to figure out which one is right. 1) Some sources online say that signals sent from the controlling terminal are ONLY sent to the foreground process group. That means if you want a process to continue running in the background when you log out, it is sufficient to simply suspend the job (ctrl-Z) and resume it in the background (bg). Then you can log out and it will continue to run, because SIGHUP is only sent to the foreground job. See: http://blog.nelhage.com/2010/01/a-brief-introduction-to-termios-signaling-and-job-control/ ...In addition, if any signal-generating character is read by a terminal, it generates the appropriate signal to the foreground process group.... 2) Other sources claim you need to use the "nohup" command at the time the program is executed, or, failing that, issue a "disown" command during execution to remove it from the jobs table that listens for SIGHUP. They say that if you don't do this, when you log out your process will also exit, even if it's running in a background process group. For example: http://docstore.mik.ua/orelly/unix3/upt/ch23_11.htm ...If I log out anyway, the shell sends my background job a HUP signal... In my own experiments with Ubuntu Linux, it seems like 1) is correct. I executed a command, "sleep 20 &", then logged out, logged back in and did a "ps aux". Sure enough, the sleep command was still running. So then why is it that so many people seem to believe number 2? And if all you have to do is place a job in the background to keep it running, why do so many people use "nohup" and "disown"?
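
    The discrepancy is easy to test in one's own shell; a quick bash sketch:

        shopt huponexit       # often reports "off", which matches the sleep experiment above
        shopt -s huponexit    # turn it on for this interactive login shell
        sleep 600 &
        disown -h %1          # exempt this one job from SIGHUP even with huponexit on
        exit                  # any other background jobs would now be HUPed on logout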

    Read the article

  • background jobs and ssh connections

    - by petrelharp
    This question has come up quite a lot (really a lot), but I'm finding the answers to be generally incomplete. The general question is "Why does/doesn't my job get killed when I exit/kill ssh?", and here's what I've found. The first question is: how general is the following information? The following seems to be true for modern Debian Linux, but I am missing some bits; and what do others need to know? (1) All child processes, backgrounded or not, of a shell opened over an ssh connection are killed with SIGHUP when the ssh connection is closed, but only if the huponexit option is set: run shopt huponexit to see if this is true. (2) If huponexit is true, then you can use nohup or disown to dissociate the process from the shell so it does not get killed when you exit. (3) If huponexit is false, which is the default on at least some Linuxes these days, then backgrounded jobs will not be killed on normal logout. But even if huponexit is false, if the ssh connection gets killed or drops (different from a normal logout), then backgrounded processes will still get killed. This can be avoided by disown or nohup as in (2). (4) There is some distinction between (a) processes whose parent process is the terminal and (b) processes that have stdin, stdout, or stderr connected to the terminal. I don't know what happens to processes that are (a) and not (b), or vice versa. Final question: how can I avoid behavior (3)? In other words, by default in Debian, backgrounded processes run along merrily by themselves after logout, but not after the ssh connection is killed. I'd like the same thing to happen to processes regardless of whether the connection was closed normally or killed. Or is this a bad idea?
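
    A small experiment over two ssh sessions makes the distinction in (3) visible; the sleep is just a stand-in for a real job and <PID> is a placeholder.

        sleep 600 &                              # plain background job, no nohup or disown
        echo $!                                  # note the PID
        # kill the connection uncleanly from the client (e.g. the ~. escape), ssh back in, then:
        ps -o pid,ppid,stat,tty,cmd -p <PID>     # did it survive, and who is its parent now?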

    Read the article

  • Why can't I see my desktop icons in Ubuntu 13.04?

    - by Edgar
    I just installed Ubuntu 13.04 on my laptop (Aspire-M3-5871TG), and I can't see anything (no icons, ...). Only the desktop background is visible, but I can open and work with the terminal. Maybe the problem is related to the Nvidia GeForce GT 640M vs Unity. I've tried several commands: dconf reset -f /org/compiz/ unity --reset-icons &disown and unity --replace & but nothing happens. I've tried other commands as well: sudo add-apt-repository ppa:ubuntu-x-swat/x-updates sudo apt-get update sudo apt-get install nvidia-current and nothing happens. I've also tried to install Tweak, but it is not possible to find it. Thus, I can't do anything ... Is my laptop definitely not compatible with Ubuntu 13.04?

    Read the article

  • Linux process management

    - by tanascius
    Hello, I started a long-running background process (dd with /dev/urandom) in my ssh console. Later I had to disconnect. When I logged in again (this time directly, without ssh), the process still seemed to run. I am not sure what happened - I did not use disown. When I logged in later, the process was not listed in top at first, but after a while it reclaimed a high CPU percentage, as I expected. So I assume dd is still running. Now I'd like to see the progress. I use kill -USR1 <pid> but nothing is printed. Is there any way to get the output again?
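
    One likely explanation, offered as an assumption since the original terminal is gone: GNU dd prints its statistics on SIGUSR1 to its own stderr, which is still attached to the vanished ssh terminal, so nothing appears in the new session. A sketch of how the progress output is normally captured, plus a Linux-only fallback for the already-running copy:

        dd if=/dev/urandom of=/path/to/target bs=1M 2> dd-progress.log &
        echo $! > dd.pid
        kill -USR1 "$(cat dd.pid)"       # statistics land in dd-progress.log
        tail dd-progress.log
        cat /proc/"$(cat dd.pid)"/io     # raw byte counters for a dd whose stderr is lost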

    Read the article

  • How to start a process as a daemon?

    - by Markus Johansson
    I have created a service which consists of a web frontend (nginx), a Python runner/glue handler (uwsgi) and my own Python code (fetcher). I have made a script (deploy.sh) to start the different services: nginx uwsgi --ini inifie.ini python fetcher.py & disown My question is regarding how I start my Python daemon. I want it to run in the background, and it should not print anything to my current terminal. If I add "print" calls to my fetcher script, I currently see them in the terminal window. So my question is: how do I start my fetcher.py script as a daemon?
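
    On Debian/Ubuntu, one hedged option is start-stop-daemon, which backgrounds the process and writes the PID file for you; the paths below are assumptions.

        start-stop-daemon --start --background \
            --make-pidfile --pidfile /var/run/fetcher.pid \
            --chdir /opt/service \
            --exec /usr/bin/python -- fetcher.py

    Because --background detaches the process from the terminal, any "print" output no longer reaches your session; have fetcher.py log to a file, or keep the nohup approach but redirect its output: nohup python fetcher.py >> /var/log/fetcher.log 2>&1 &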

    Read the article

  • After logging out of SSH, screen sessions disappear on Arch Linux

    - by Ivan
    On Arch Linux (I'm on a single dedicated server, where my domain name points to only one IP), when I SSH into a user (say, for example, user mc), and then do screen -S test (or -dmS, the resulting issue is the same), run a command, and then detach from it, then exit out of my SSH session, and log back in, the screen session disappears. screen -ls returns No Sockets found in /run/screens/S-mc. The only way I can reattach to my sessions is if I never logged out of my SSH. How do I fix this? I do have read/write access in /run/screens/S-mc I detach from screen sessions with Ctrl-A,D disown -a && exit gives me the same problem shopt huponexit returns "huponexit off" There is no ~/.logout, and ~/.bash_logout is empty, with 3 lines of comments, telling me it's the ~/.bash_logout file ls -l /usr/bin | grep screen returns lrwxrwxrwx 1 root root 12 Oct 31 2012 screen -> screen-4.0.3 -rwsr-xr-x 1 root root 363672 Oct 31 2012 screen-4.0.3
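
    One hedged thing to check on a systemd-based distro like Arch is whether logind is killing the user's processes (screen sessions included) when the last login session closes:

        grep -i KillUserProcesses /etc/systemd/logind.conf   # is it set to yes?
        loginctl enable-linger mc                            # let user 'mc' keep processes after logout
        # or set KillUserProcesses=no in logind.conf and restart systemd-logind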

    Read the article

  • Ubuntu Desktop does not load

    - by Niklas
    When I log in on my Ubuntu 14.04, the desktop does not load properly. This weird behavior appeared after I executed sudo apt-get update && sudo apt-get upgrade and restarted my computer. Don't know why, though. On my Ubuntu I have tried the following (nothing seems to work so far). Fix any broken packages: sudo apt-get update sudo apt-get autoclean sudo apt-get clean sudo apt-get autoremove Locate any broken packages and reinstall them: sudo apt-get install debsums sudo apt-get clean sudo debsums_init sudo debsums -cs sudo apt-get install --reinstall $(sudo dpkg -S $(sudo debsums -c) | cut -d : -f 1 | sort -u) Remove some Compiz files: rm -r ~/.cache/compizconfig-1 rm -r ~/.compiz Purge NVIDIA and install nvidia-prime: sudo apt-get install --reinstall ubuntu-desktop sudo apt-get install unity sudo apt-get purge nvidia* bumblebee* sudo apt-get install nvidia-prime sudo shutdown -r now CompizConfig Settings Manager: sudo apt-get install compizconfig-settings-manager export DISPLAY=:0 ccsm // Back to UI and enabling of the Unity plugin A Unity replace, which stopped after a while and did nothing afterwards: unity --replace A dconf reset: dconf reset -f /org/compiz/ unity --reset-icons &disown Actually, dconf did not work and I got this error: error: Cannot autolaunch D-Bus without X11 $DISPLAY Can anybody help me with that? This is my hardware (hope it helps in any way): Intel® Core™ i7-3770 ASUS GTX660TI-DC2-OG-2GD5 (NVIDIA driver is/was installed) ASUS P8Z77-V LX Corsair DIMM 8 GB DDR3-1600 Kit Samsung 830series 2,5" 256 GB (Windows is installed here) Seagate ST31000524AS 1 TB (3/4 are reserved for files; 1/4 is for Ubuntu (16GB swap included))

    Read the article
