Search Results

Search found 7807 results on 313 pages for 'ricardo h bin'.


  • How do I use command line and wmctrl to make a window larger than the screen to get a huge screenshot?

    - by Mnebuerquo
    I use a program which produces a large image that I have to scroll to view. The program has no way to save the image, and I have no access to the source to modify it. The only way I have to get the image from the program is by screenshot. My goal is to save the full-size image without having to piece together individual screenshots. I'm using this script to try taking a screenshot:

        #!/bin/bash
        window=$(wmctrl -l | grep "Program$" | awk '{print $1}')
        wmctrl -v -i -r $window -e '0,0,0,6030,5828'
        wmctrl -i -a $window
        import -window $window ~/Desktop/screenshot.png

    This uses wmctrl to get the window id ($window) for a window named "Program". It then tries to resize the window to the desired dimensions, and uses ImageMagick (import) to save a screenshot.png on the user's Desktop. All of this works except the resize step. I can resize the window using wmctrl -r -e, but sizes greater than the screen size don't work. I'm using Ubuntu 10.04 and the GNOME desktop. I run two monitors, but I've tried this with one of them disabled. Is there a way to resize the window larger than my screen to get a huge screenshot?
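    The X server generally refuses to resize a window beyond the root-window geometry, which is why the wmctrl step fails. One hedged workaround is to run the program on a virtual X server whose screen is as large as the target image. This is a sketch, assuming the xvfb and imagemagick packages are installed; "your-program" is a placeholder, and a lightweight window manager may be needed inside the virtual display for wmctrl to act:

        Xvfb :99 -screen 0 6030x5828x24 &      # virtual screen bigger than any monitor
        DISPLAY=:99 your-program &             # placeholder for the real binary
        sleep 5                                # crude wait for the window to map
        window=$(DISPLAY=:99 wmctrl -l | grep "Program$" | awk '{print $1}')
        DISPLAY=:99 wmctrl -i -r $window -e '0,0,0,6030,5828'
        DISPLAY=:99 import -window $window ~/Desktop/screenshot.png

    Nothing appears on the physical display; the capture happens entirely inside the virtual screen.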


  • Ubuntu Launcher Items Don't Have Correct Environment Vars under NX

    - by ivarley
    I've got an environment variable issue I'm having trouble resolving. I'm running Ubuntu (Karmic, 9.10) and coming in via NX (NoMachine) on a Mac. I've added several environment variables in my .bashrc file, e.g.:

        export JAVA_HOME=$HOME/dev/tools/Linux/jdk/jdk1.6.0_16/

    Sitting at the machine, this environment variable is available on the command line, as well as for apps I launch from the Main Menu. Coming in over NX, however, the environment variable shows up correctly on the command line, but NOT when I launch things via the launcher. As an example, I created a simple shell script called testpath in my home folder:

        #!/bin/sh
        echo $PATH && sleep 5
        quit

    I gave it execute privileges:

        chmod +x testpath

    And then I created a launcher item in my Main Menu that simply runs:

        ./testpath

    When I'm sitting at the computer, this launcher runs and shows all the stuff I put into the $PATH variable in my .bashrc file (e.g. $JAVA_HOME, etc). But when I come in over NX, it shows a totally different value for the $PATH variable, despite the fact that if I launch a terminal window (still in NX) and echo $PATH, it shows up correctly. I assume this has to do with which files get loaded by the windowing system over NX, and that it's some other file. But I have no idea how to fix it. For the record, I also have a .profile file with the following in it:

        # if running bash
        if [ -n "$BASH_VERSION" ]; then
            # include .bashrc if it exists
            if [ -f "$HOME/.bashrc" ]; then
                . "$HOME/.bashrc"
            fi
        fi
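    A likely culprit, offered as a hedged guess: the stock Ubuntu ~/.bashrc begins with an interactivity guard ([ -z "$PS1" ] && return), so when the NX session startup sources .profile, which sources .bashrc non-interactively, the file returns before ever reaching the exports; differences in how the local GDM login and the NX session source these files would then explain the asymmetry. Moving the exports into ~/.profile itself, above the .bashrc include, sidesteps the guard entirely:

        # ~/.profile -- keep environment exports here, not in .bashrc,
        # so non-interactive session startup (like the NX login) sees them
        export JAVA_HOME=$HOME/dev/tools/Linux/jdk/jdk1.6.0_16
        export PATH=$JAVA_HOME/bin:$PATH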


  • Setting up logging for a remote backup script

    - by Brian Dainis
    So I wrote up a short script that I am planning to run via a cron job daily to package up my site files and send them to a remote location. I also plan to incorporate DB dumps, but I have not gotten that far yet. My issue today, however, is that I am uncertain how to log the output of each command for errors, warnings, or other pertinent information the command may output. I would also like to install some type of fail-safe, so if something goes horribly wrong the script will stop dead in its tracks and notify me via email or something. OK, the email thing is not as critical, but it would be nice. Does anybody have any ideas for that? Here is what I have so far. By the way, both servers are CentOS 6.2 running a standard LAMP stack.

        #!/bin/sh

        #################################
        ### Set Vars
        #################################
        THEDATE=`date +%m%d%y%H%M`

        #################################
        ### Create Archives
        #################################
        tar -cf /root/backups/files/server_BAK_${THEDATE}.tar -C / var/www/vhosts
        gzip /root/backups/files/server_BAK_${THEDATE}.tar

        #################################
        ### Send Data to Remote Server
        #################################
        scp /root/backups/files/server_BAK_${THEDATE}.tar.gz user@host:/home/bak1/ftp/backups/

        #################################
        ### Remove Data from this Server
        #################################
        rm -rf /root/backups/files/server_BAK_${THEDATE}.tar.gz
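    A hedged sketch of the logging and fail-safe idea (the log path and the admin@example.com address are placeholders, and it assumes a working mail command): chain the steps with && so the sequence dies at the first failure, capture everything in a dated log, and mail that log when the chain breaks.

        #!/bin/sh
        THEDATE=`date +%m%d%y%H%M`
        ARCHIVE=/root/backups/files/server_BAK_${THEDATE}.tar
        LOG=/root/backups/logs/backup_${THEDATE}.log

        {
            tar -cf "$ARCHIVE" -C / var/www/vhosts &&
            gzip "$ARCHIVE" &&
            scp "${ARCHIVE}.gz" user@host:/home/bak1/ftp/backups/ &&
            rm -f "${ARCHIVE}.gz"
        } >"$LOG" 2>&1 || mail -s "backup FAILED on `hostname`" admin@example.com <"$LOG"

    Because each step only runs if the previous one succeeded, a failed scp can never lead to the rm deleting the only copy of the archive.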


  • How to configure iptables so apt-get works on a server?

    - by segaco
    I'm starting to use iptables (newbie) to protect a Linux server (specifically Debian 5.0). Before I configure the iptables settings, I can use apt-get without a problem. But after I configure iptables, apt-get stops working. This is the script I use:

        #!/bin/sh
        IPT=/sbin/iptables

        ## FLUSH
        $IPT -F
        $IPT -X
        $IPT -t nat -F
        $IPT -t nat -X
        $IPT -t mangle -F
        $IPT -t mangle -X

        $IPT -P INPUT DROP
        $IPT -P OUTPUT DROP
        $IPT -P FORWARD DROP

        $IPT -A INPUT -i lo -j ACCEPT
        $IPT -A OUTPUT -o lo -j ACCEPT

        $IPT -A INPUT -p tcp --dport 22 -j ACCEPT
        $IPT -A OUTPUT -p tcp --sport 22 -j ACCEPT
        $IPT -A INPUT -p tcp --dport 80 -j ACCEPT
        $IPT -A OUTPUT -p tcp --sport 80 -j ACCEPT
        $IPT -A INPUT -p tcp --dport 443 -j ACCEPT
        $IPT -A OUTPUT -p tcp --sport 443 -j ACCEPT

        # Allow FTP connections @ port 21
        $IPT -A INPUT -p tcp --sport 21 -m state --state ESTABLISHED -j ACCEPT
        $IPT -A OUTPUT -p tcp --dport 21 -m state --state NEW,ESTABLISHED -j ACCEPT

        # Allow Active FTP Connections
        $IPT -A INPUT -p tcp --sport 20 -m state --state ESTABLISHED,RELATED -j ACCEPT
        $IPT -A OUTPUT -p tcp --dport 20 -m state --state ESTABLISHED -j ACCEPT

        # Allow Passive FTP Connections
        $IPT -A INPUT -p tcp --sport 1024: --dport 1024: -m state --state ESTABLISHED -j ACCEPT
        $IPT -A OUTPUT -p tcp --sport 1024: --dport 1024: -m state --state ESTABLISHED,RELATED -j ACCEPT

        # DNS
        $IPT -A OUTPUT -p udp --dport 53 --sport 1024:65535 -j ACCEPT

        $IPT -A INPUT -p tcp --dport 1:1024
        $IPT -A INPUT -p udp --dport 1:1024
        $IPT -A INPUT -p tcp --dport 3306 -j DROP
        $IPT -A INPUT -p tcp --dport 10000 -j DROP
        $IPT -A INPUT -p udp --dport 10000 -j DROP

    Then when I run apt-get I get:

        core:~# apt-get update
        0% [Connecting to ftp.us.debian.org] [Connecting to security.debian.org]

    and it stalls. What rules do I need to configure to make it work? Thanks.
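    A hedged guess at the missing pieces: with both policies at DROP, nothing ever lets reply packets back in, and the box itself is never allowed out to port 80 (the existing port-80 rules only cover it acting as a server). Two state rules along these lines should let apt-get reach its HTTP mirrors and receive DNS answers:

        # let replies to our own outbound connections back in (covers DNS answers too)
        $IPT -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        # let the server itself make outbound HTTP requests (apt mirrors)
        $IPT -A OUTPUT -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT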


  • Configure X connections over TCP without using an X connection

    - by Darren Cook
    I want to run a GUI application on a remote machine I only have ssh access to. I don't need to, or want to, see the GUI window. (I know I could use something like ssh -C -X remote_server if I wanted the GUI on my client.) I know X is running on the remote machine, as ps shows this:

        root ... /usr/bin/Xorg :0 -br -audit 0 -auth /var/gdm/:0.Xauth -nolisten tcp vt7

    I set DISPLAY=:0.0, but I then get "Xlib: connection to ":0.0" refused by server" when I try to use it. At "Get remote x display working in linux without ssh tunneling" and "Xserver doesn't work unless DISPLAY=0.0" I see the advice to use gdmsetup to allow X to listen on TCP. But gdmsetup is a GUI application! And trying to run it over ssh -X did not work ("X11 connection rejected because of wrong authentication"). So, is there a text file I can edit to remove -nolisten? And, after editing it, how do I safely restart X, remotely? (There is other stuff running on this machine, so requesting a reboot is possible, but undesirable.) If not, should gdmsetup be able to run over ssh, and should I persevere in that direction?

    UPDATE: I had to run the ssh -X session as root (ssh as a normal user, then sudo or su, does not work). So I did the edit with gdmsetup, then restarted X with gdm-restart. I've also done xhost + from that ssh -X session. The ps line no longer shows the -nolisten tcp part. But still no luck connecting to it, with either DISPLAY=:0 or DISPLAY=localhost:0.
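    For the text-file route: GDM reads a plain config file, though the path and section vary by distro and GDM version; on many systems it is /etc/gdm/custom.conf (elsewhere gdm.conf). A hedged sketch of the relevant stanza:

        # /etc/gdm/custom.conf  (path is a guess; varies by distro/version)
        [security]
        DisallowTCP=false

    If, as in the UPDATE, ps shows -nolisten gone but connections still fail, the next hedged checks are whether TCP port 6000 is reachable at all (a quick telnet to port 6000, or checking the local firewall rules) and whether the client holds valid credentials (xhost +, or copying the cookie shown by xauth list on the server).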


  • IPTables configuration help

    - by Sam
    I'm after some help with setting up iptables. Mostly the configuration is working, but regardless of what I try I cannot allow localhost to access the local Apache only (i.e. localhost should access localhost:80 only). Here is my script:

        #!/bin/bash

        # Allow root to access external web and ftp
        iptables -t filter -A OUTPUT -p tcp --dport 21 --match owner --uid-owner 0 -j ACCEPT
        iptables -t filter -A OUTPUT -p tcp --dport 80 --match owner --uid-owner 0 -j ACCEPT

        # Allow DNS queries
        iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
        iptables -A OUTPUT -p tcp --dport 53 -j ACCEPT

        # Allow in- and outbound SSH to/from any server
        iptables -A INPUT -p tcp -s 0/0 --dport 22 -j ACCEPT
        iptables -A OUTPUT -p tcp -d 0/0 --sport 22 -j ACCEPT

        # Accept ICMP requests
        iptables -A INPUT -p icmp -s 0/0 -j ACCEPT
        iptables -A OUTPUT -p icmp -d 0/0 -j ACCEPT

        # Accept connections from any local machines but disallow
        # localhost access to networked machines
        iptables -A INPUT -s 10.0.1.0/24 -j ACCEPT
        iptables -A OUTPUT -d 10.0.1.0/24 -j DROP

        # Drop ALL other traffic
        iptables -A OUTPUT -p tcp -d 0/0 -j DROP
        iptables -A OUTPUT -p udp -d 0/0 -j DROP

    Now, I have tried many permutations and I'm obviously missing something. I placed them above the inbound/outbound SSH rules, so it's not rule precedence. If someone could give me a heads-up on allowing only the local machine to access the local web server, that'd be great. Cheers guys.
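    A hedged reading of why every permutation fails: the script never gives loopback traffic a pass, so the final catch-all OUTPUT drops also kill localhost-to-localhost packets on port 80 (they traverse the INPUT and OUTPUT chains on the lo interface like everything else). Inserting lo rules before the drops, pinned to port 80 to match the stated goal, is a sketch of the fix:

        # localhost -> localhost:80 requests, on the loopback interface only
        iptables -A OUTPUT -o lo -p tcp --dport 80 -j ACCEPT
        iptables -A INPUT  -i lo -p tcp --dport 80 -j ACCEPT
        # and the replies, which flow back with the ports swapped
        iptables -A OUTPUT -o lo -p tcp --sport 80 -j ACCEPT
        iptables -A INPUT  -i lo -p tcp --sport 80 -j ACCEPT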


  • Allow outgoing connections for DNS

    - by Jimmy
    I'm new to iptables, but I am trying to set up a secure server to host a website and allow SSH. This is what I have so far:

        #!/bin/sh
        i=/sbin/iptables

        # Flush all rules
        $i -F
        $i -X

        # Set up default filter policy
        $i -P INPUT DROP
        $i -P OUTPUT DROP
        $i -P FORWARD DROP

        # Respond to ping requests
        $i -A INPUT -p icmp --icmp-type any -j ACCEPT

        # Force SYN checks
        $i -A INPUT -p tcp ! --syn -m state --state NEW -j DROP

        # Drop all fragments
        $i -A INPUT -f -j DROP

        # Drop XMAS packets
        $i -A INPUT -p tcp --tcp-flags ALL ALL -j DROP

        # Drop NULL packets
        $i -A INPUT -p tcp --tcp-flags ALL NONE -j DROP

        # Stateful inspection
        $i -A INPUT -m state --state NEW -p tcp --dport 22 -j ACCEPT

        # Allow established connections
        $i -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

        # Allow unlimited traffic on loopback
        $i -A INPUT -i lo -j ACCEPT
        $i -A OUTPUT -o lo -j ACCEPT

        # Open nginx
        $i -A INPUT -p tcp --dport 443 -j ACCEPT
        $i -A INPUT -p tcp --dport 80 -j ACCEPT

        # Open SSH
        $i -A INPUT -p tcp --dport 22 -j ACCEPT

    However, I've locked down my outgoing connections, and it means I can't resolve any DNS. How do I allow that? Also, any other feedback is appreciated. James
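    A hedged addition: with the OUTPUT policy at DROP and only loopback allowed out, neither resolver queries nor replies from nginx and sshd ever leave the box. Letting DNS out plus a general established-state rule for OUTPUT covers both:

        # resolver traffic out (UDP for normal queries, TCP for large answers)
        $i -A OUTPUT -p udp --dport 53 -j ACCEPT
        $i -A OUTPUT -p tcp --dport 53 -j ACCEPT
        # replies belonging to connections the INPUT rules already accepted
        $i -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT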


  • Script is not dumping files to the expected path

    - by user3319390
    I have a script that needs to extract the matching cname from my log file and then, depending on the matching status code, dump each line into files like access and error logs under $cname/$year/$month/$day/:

        #!/bin/sh
        #base_dir="/home/vizion/Desktop"
        path="/home/vizion/Desktop/adn_DF9D_20140515_0005.log"
        name=$(basename "$path" ".log")

        for x in *.log; do
            year=${x:9:4}
            month=${x:13:2}
            day=${x:15:2}
        done

        while read -r line
        do
            cname=$(echo ${line} | awk '{split($7,c,"/"); print c[3]}')
            scode=$(echo ${line} | awk -F"[ ]" '{print $9}')
            [[ ! -d "$cname/$year/$month/$day" ]] && mkdir -p "$cname/$year/$month/$day/"
            [[ ( ${scode} -ge 200 ) && ( ${scode} -le 399 ) ]] && {
                # [[ ! -d "$cname/$year/$month/$day" ]] && mkdir -p "$cname/$year/$month/$day/"
                echo ${line} >> /home/vizion/Desktop/$cname/$year/$month/$day/${cname}_${name}_access.log
            }
            [[ ( ${scode} -ge 400 ) && ( ${scode} -le 599 ) ]] && {
                [[ ! -d "$cname/$year/$month/$day" ]] && mkdir -p "$cname/$year/$month/$day"
                echo ${line} >> ${cname}_${name}_error.log
            }
        done < $path

    I am able to filter the logs, but they are not dumped to the exact location; they end up elsewhere. Please suggest a correction to the script.
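    The access branch writes to an absolute path, but the error branch appends to a file named relative to wherever the script happened to be started, which matches the "ends up elsewhere" symptom. A one-line fix sketch, giving the error branch the same absolute destination:

        echo ${line} >> /home/vizion/Desktop/$cname/$year/$month/$day/${cname}_${name}_error.log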


  • Fix stubborn 'Setting locale failed.'

    - by user60129
    I have a very stubborn, well-known locale error on Ubuntu 9.10:

        perl: warning: Setting locale failed.
        perl: warning: Please check that your locale settings:
            LANGUAGE = (unset),
            LC_ALL = (unset),
            LC_TIME = "custom.UTF-8",
            LANG = "en_US.UTF-8"

    I have tried the following:

    - Added LANG=en_US.UTF-8 and LC_ALL=en_US.UTF-8 to /etc/environment.
    - Ran apt-get install --reinstall locales (error: perl: warning: Falling back to the standard locale ("C"). /usr/bin/mandb: can't set the locale; make sure $LC_* and $LANG are correct).
    - Ran sudo dpkg-reconfigure locales. Result: Cannot set LC_ALL to default locale: No such file or directory, and then it updates all locales, including en_US.UTF-8.
    - sudo locale-gen updates all locales successfully, including en_US.UTF-8.
    - sudo locale-gen un_US en_US.UTF-8 gives no error nor other output.
    - /etc/default/locale says LANG="en_US.UTF-8".
    - echo $LANG gives en_US.UTF-8.
    - /var/lib/locales/supported.d/local says en_US.UTF-8 UTF-8.
    - locale -a gives me: C en_AG en_AU.utf8 en_BW.utf8 en_CA.utf8 en_DK.utf8 en_GB.utf8 en_HK.utf8 en_IE.utf8 en_IN en_NG en_NZ.utf8 en_PH.utf8 en_SG.utf8 en_US.utf8 en_ZA.utf8 en_ZW.utf8 POSIX

    So, well... I am pretty much out of options I can think of. Anybody any idea? Thanks!
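    One thing stands out in the warning: LC_TIME = "custom.UTF-8", a locale that does not appear anywhere in the locale -a output. A hedged cleanup is to hunt down whatever sets that variable and reset the default; update-locale rewrites /etc/default/locale:

        # find the file that exports LC_TIME=custom.UTF-8 (shell dotfiles included)
        grep -R "custom.UTF-8" /etc ~/.profile ~/.bashrc 2>/dev/null
        # then reset the defaults and log in again
        sudo update-locale LANG=en_US.UTF-8 LC_TIME=en_US.UTF-8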


  • Run Python script at startup using upstart

    - by MarcusMaximus
    I'm trying to create an upstart script to run a Python script on startup. In theory it looks simple enough, but I just can't seem to get it to work. I'm using a skeleton script I found and altered:

        description "Used to start python script as a service"
        author "Me <[email protected]>"

        # Stanzas
        #
        # Stanzas control when and how a process is started and stopped
        # See a list of stanzas here: http://upstart.ubuntu.com/wiki/Stanzas#respawn

        # When to start the service
        start on runlevel [2345]

        # When to stop the service
        stop on runlevel [016]

        # Automatically restart process if crashed
        respawn

        # Essentially lets upstart know the process will detach itself to the background
        expect fork

        # Start the process
        script
            exec python /usr/local/scripts/script.py
        end script

    The test script I want it to run is currently a simple Python script that runs without any issue from a terminal:

        #!/usr/bin/python2
        import os, sys, time

        if __name__ == "__main__":
            for i in range(10000):
                message = "UpstartTest ", i, time.asctime(), " - Username: ", os.getenv("USERNAME")
                #print message
                time.sleep(60)
                out = open("/var/log/scripts/scriptlogfile", "a")
                print >> out, message
                out.close()

    The location /var/log/scripts has permissions 777. The file /usr/local/scripts/script.py has permissions 775. The upstart script /etc/init.d/pythonupstart.conf has permissions 755.
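    Two hedged observations on the job file: upstart only reads jobs from /etc/init/*.conf (not /etc/init.d), and "expect fork" tells upstart to track a child that this Python script never creates, which typically leaves the job stuck. A simplified stanza along these lines may behave better:

        # /etc/init/pythonupstart.conf  -- note the directory
        description "Used to start python script as a service"

        start on runlevel [2345]
        stop on runlevel [016]
        respawn

        # no "expect fork": the script stays in the foreground,
        # so upstart tracks its real pid
        exec /usr/bin/python2 /usr/local/scripts/script.py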


  • How to automatically start VM created by virt-manager?

    - by Jeff Shattock
    I have created a virtual machine with virt-manager that runs on kvm/qemu. The machine works well when started through virt-manager. However, I would like to be able to start and stop the VM through a script in init.d, so that it comes up and down along with the host. I need to have virt-manager show that the machine is running, and to be able to connect to its console through there. When I use the command line that is produced by running ps -eaf | grep kvm after starting the VM through virt-manager, I get some console messages about redirected character devices, but the machine does start and runs properly. However, I do not get any indication from virt-manager that it has started. How can I modify the command line to get virt-manager to pick up the running VM? Is there anything else about the command line that should change when starting outside of virt-manager? The command line is (slightly reformatted for readability):

        /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 512 -smp 1 -name BORON \
          -uuid fa7e5fbd-7d8e-43c4-ebd9-1504a4383eb1 \
          -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/BORON.monitor,server,nowait \
          -monitor chardev:monitor -localtime -boot c \
          -drive file=/dev/FS1/BORON,if=ide,index=0,boot=on,format=raw \
          -net nic,macaddr=52:54:00:20:0b:fd,vlan=0,name=nic.0 \
          -net tap,fd=41,vlan=0,name=tap.0 -chardev pty,id=serial0 -serial chardev:serial0 \
          -parallel none -usb -usbdevice tablet -vnc 127.0.0.1:1 -k en-us -vga cirrus
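    virt-manager talks to libvirtd and only shows guests that libvirt itself started, so replaying the raw kvm command line will always be invisible to it. A hedged alternative for the init.d script is to drive the guest through libvirt instead ("BORON" taken from the -name flag above):

        virsh autostart BORON    # start the guest whenever libvirtd (i.e. the host) comes up
        virsh start BORON        # manual start, fully visible to virt-manager
        virsh shutdown BORON     # clean ACPI shutdown for the stop action

    With autostart set, the init.d wrapper may not be needed at all.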


  • Relative path incorrect in the view layer when hosting a rails3 app in a subdirectory using passenger and apache

    - by Saifis
    I want to host multiple Rails apps on a single server using sub-directories, and I have encountered some relative path problems. I have made a symbolic link to the app's public directory and placed it in the /var/www/html directory:

        /var/www/html/
            test_app (symbolic link to the public folder of test_app)

    and set up Apache like so:

        LoadModule passenger_module /usr/local/lib/ruby/gems/1.9.1/gems/passenger-3.0.12/ext/apache2/mod_passenger.so
        PassengerRoot /usr/local/lib/ruby/gems/1.9.1/gems/passenger-3.0.12
        PassengerRuby /usr/local/bin/ruby

        <VirtualHost *:80>
            ServerName test.com
            DocumentRoot /var/www/html
            Options Indexes FollowSymLinks -MultiViews
            RailsBaseURI /test_app
        </VirtualHost>

    The links in the app itself work just fine; all the links acknowledge the test_app/ directory. However, when it comes to showing images from the public directory in the view, the relative path goes wrong. Say I have /system/files/1/aaa.png: the browser goes looking for it in /var/www/html/system/files/1/aaa.png rather than /var/www/html/test_app/system/files/1/aaa.png. As far as I understand, this is an Apache setting problem rather than something to be done in Rails; if possible, I would prefer to have it contained in the Apache conf file rather than having to alter the code.
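    A hedged Apache-side workaround: map the un-prefixed asset path that the views emit back onto the app's public directory. This only stays sane while a single app emits /system/... URLs, since several apps would collide on the same alias:

        Alias /system /var/www/html/test_app/system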


  • apache segmentation error

    - by lush
    I can't start Apache; it fails with the following errors:

        [root@web]# /etc/init.d/httpd start
        Starting httpd: /bin/bash: line 1: 19232 Segmentation fault /usr/sbin/httpd
        [root@web]# /usr/sbin/apachectl -k start
        /usr/sbin/apachectl: line 102: 19919 Segmentation fault $HTTPD $OPTIONS $ARGV

    I use the Webmin control panel and I've already tried re-installing Apache from scratch. Can someone advise what else I should try? Many thanks.

    UPDATE: The only line ever written to the error logs seems unimportant:

        [Mon Nov 14 19:00:09 2011] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)

    UPDATE 2: I've recently had the error below in the logs. It looks like some modules are incompatible, so I've disabled the fileinfo and mcrypt extensions in my php.ini; I should be able to start the web server without them.

        PHP Warning: PHP Startup: fileinfo: Unable to initialize module
        Module compiled with module API=20050922, debug=0, thread-safety=0
        PHP compiled with module API=20060613, debug=0, thread-safety=0
        These options need to match
        in Unknown on line 0
        PHP Warning: Module 'mcrypt' already loaded in Unknown on line 0

    UPDATE 3:

        [root@web]# file /usr/sbin/httpd
        /usr/sbin/httpd: ELF 64-bit LSB shared object, AMD x86-64, version 1 (SYSV), for GNU/Linux 2.6.9, stripped
        [root@web]# uname -m
        x86_64
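    A hedged way to replace guesswork with a backtrace: run a single worker in the foreground under gdb (the -X flag keeps httpd from forking), then look at where the SIGSEGV lands; the top frames usually name the guilty module:

        gdb /usr/sbin/httpd
        (gdb) run -X
        # ...wait for the Segmentation fault to fire...
        (gdb) bt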


  • Quick introduction to Linux needed

    - by 0xDEAD BEEF
    Hi guys! I have to get into Linux ASAP, and I really mean ASAP. I have installed Cygwin, but as always, things don't go as easily as one would like. The first problem I encountered: I chose the KDE package, but there is no sign of KDE files anywhere in the Cygwin folder. How do I run KDE windows? Currently startx fires up, but everything looks ugly. My goal is to download and run Qt Creator. It seems there is no Cygwin package, but downloading the source and compiling should be good to go. Only, I have forgotten every Linux command I ever knew! :D Please - what are the default commands used on Linux? What does exec do? What does ./ stand for? What is the directory structure, and why is there such a mess in the bin folder? Thank god I have Windows alongside Cygwin, so downloading files is not a problem, but again - how do I unpack them Linux-style, and how do I build? Do I simply issue a "make" command from the folder where I extracted the files? Please help!
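    The usual unpack-and-build dance looks like the sketch below (file names are placeholders, and some projects, Qt Creator among them, use qmake or cmake instead of ./configure):

        tar xzf some-source.tar.gz   # unpack: x=extract, z=gunzip, f=this file
        cd some-source
        ./configure                  # "./" just means "run the program in this directory"
        make                         # build using the generated Makefile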


  • can't install anything anymore with apt-get

    - by Aymane Shuichi
    Welcome. This is the log I get when trying to install anything (php5-fpm, after removing it):

        apt-get install php5-fpm
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        php5-fpm is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        1 not fully installed or removed.
        After this operation, 0 B of additional disk space will be used.
        Do you want to continue [Y/n]? y
        Setting up php5-fpm (5.4.4-14+deb7u10) ...
        insserv: warning: script 'S55IptabLes' missing LSB tags and overrides
        insserv: warning: script 'S55IptabLex' missing LSB tags and overrides
        insserv: There is a loop between service IptabLes and mountnfs if started
        insserv: loop involving service mountnfs at depth 8
        insserv: loop involving service networking at depth 7
        insserv: loop involving service mountnfs-bootclean at depth 10
        insserv: There is a loop between service rc.local and mountall if started
        insserv: loop involving service mountall at depth 6
        insserv: loop involving service checkfs at depth 5
        insserv: loop involving service kbd at depth 11
        insserv: There is a loop between service rc.local and mountall-bootclean if started
        insserv: loop involving service mountall-bootclean at depth 7
        insserv: loop involving service urandom at depth 9
        insserv: There is a loop between service IptabLes and mountdevsubfs if started
        insserv: loop involving service mountdevsubfs at depth 2
        insserv: loop involving service udev at depth 1
        insserv: There is a loop at service rc.local if started
        insserv: There is a loop at service IptabLes if started
        insserv: Starting IptabLes depends on rc.local and therefore on system facility `$all' which can not be true! (repeated 99 times)
        insserv: Max recursions depth 99 reached
        insserv: loop involving service postfix at depth 2
        insserv: There is a loop between service IptabLes and udev if started
        insserv: loop involving service mountkernfs at depth 1
        insserv: loop involving service IptabLes at depth 1

    And here is the error I get:

        insserv: exiting now without changing boot order!
        update-rc.d: error: insserv rejected the script header
        dpkg: error processing php5-fpm (--configure):
        subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
        php5-fpm
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    The biggest operation I performed before this was updating nginx from 1.2 to 1.6, following this guide: How to upgrade nginx from 1.2 to 1.6 on debian 7. Please help!
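    A hedged way out of the loop: the two scripts insserv complains about (S55IptabLes, S55IptabLex) have no LSB headers and do not belong to any standard Debian package, so taking them out of the boot order and re-running the failed configure step may unblock dpkg; they are worth inspecting carefully before ever reinstating them:

        update-rc.d -f IptabLes remove
        update-rc.d -f IptabLex remove
        dpkg --configure -a          # retry the pending php5-fpm configuration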


  • Debian 100% cpu every 30 minutes but not loggable?

    - by user654123
    I have a Debian 7 x64 machine running with Digital Ocean that hits 100% CPU usage for about one minute every 30 minutes. A couple of days ago it stayed there for a couple of hours, so the server finally crashed and I had to repair my MySQL databases. The server is a pure web server running Apache2 and MySQL. I tried tracing which processes use the CPU, but with no luck. The script I used:

        #!/bin/sh
        while true; do
            ps -A -eo pcpu,pid,user,args | sort -k 1 -r | head -3 >> proclog.txt
            echo "\n" >> proclog.txt
            sleep 2
        done

    I was monitoring htop as well while this was happening, but the top processes' CPU usage didn't add up to even ~15%, even though htop's CPU meter showed a constant 100%. htop was configured to show all users' processes, and both user and kernel threads.

    Edit: By stopping Apache2 and MySQL prior to the expected 100% usage, I can tell that neither is responsible for it; the 100% usage occurred anyway. This is what the graph looked like over the past hours: [CPU-usage graph omitted]
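    A hedged capture idea: a 2-second ps loop misses short-lived processes, and on a shared KVM droplet the gap between per-process usage and the CPU meter can also be steal time taken by the hypervisor. The sysstat tools sample both (this sketch assumes the sysstat package is installed):

        pidstat -u 1 120 >> /root/pidstat.log    # per-process CPU, 1-second samples
        sar -u 1 120 >> /root/sar.log            # system-wide, including %steal

    Running these across one of the expected 30-minute spikes should show either a culprit process or a %steal column that accounts for the missing usage.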


  • Prevent rmdir -p from traversing above a certain directory

    - by thepurplepixel
    I hacked together this script to rsync some files over ssh. The --remove-source-files option of rsync removes the files it transfers, which is what I want. However, I also want the directories those files were placed in to be gone as well. The current part of the find command, -exec rmdir -p {} \;, tries to remove the parent directory (in this case, /srv/torrents), but fails because it doesn't have the right permissions. What I'd like to do is stop rmdir from traversing above the directory find is run in, or find another solution to get rid of all the empty folders. I've thought of using some kind of loop with find and running rmdir without the -p switch, but I thought it wouldn't work out. Essentially, is there an alternative way to remove all the empty directories under the parent directory? Thanks in advance!

        #!/bin/bash

        HOST='<hostname>'
        USER='<username>'
        DIR='<destination directory>'
        SOURCE='/srv/torrents/'

        rsync -e "ssh -l $USER" --remove-source-files -h -4 -r --stats -m --progress -i $SOURCE $HOST:$DIR
        find $SOURCE -mindepth 1 -type d -empty -prune -exec rmdir -p \{\} \;
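    A hedged alternative that never touches parents above $SOURCE: let find do the bottom-up walk itself. With -depth, children are visited before their directories, so a directory emptied earlier in the walk already tests -empty by the time find reaches it, and rmdir without -p never climbs anywhere:

        find "$SOURCE" -mindepth 1 -depth -type d -empty -exec rmdir \{\} \;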


  • What's the safest way to kick off a root-level process via cgi on an Apache server?

    - by MartyMacGyver
    The problem: I have a script that runs periodically via a cron job as root, but I want to give people a way to kick it off asynchronously too, via a webpage. (The script will be written to ensure it doesn't run overlapping instances or such.) I don't need the users to log in or have an account; they simply click a button, and if the script is ready to be run, it'll run. The users may select arguments for the script (heavily filtered as inputs), but for simplicity we'll say they just have the button to press. As a simple test, I created a Python script in cgi-bin. chown-ing it to root:root and then applying "chmod ug+" to it didn't have the desired results: it still thinks it has the effective group of the web server account... from what I can tell, this isn't allowed. I read that wrapping it with a compiled cgi program would do the job, so I created a C wrapper that calls my script (its permissions restored to normal) and gave the executable root ownership and the setuid bit. That worked: the script ran as if root ran it. My main question is, is this normal (the need for the binary wrapper to get the job done), and is this the secure way to do this? It's not world-facing, but still, I'd like to learn best practices. More broadly, I often wonder why a compiled binary is more "trusted" than a script in practice. I'd think you'd trust a human-readable file over a cryptic binary. If an attacker can edit a file then you're already in trouble, more so if it's one you can't easily examine. In short, I'd expect it to be the other way 'round on that basis. Your thoughts?
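    For the record, Linux ignores the setuid bit on interpreter scripts, which is why the plain Python file kept the web server's identity while the compiled wrapper did not; needing the binary is therefore normal. A common alternative that avoids hand-rolled setuid binaries is a narrow sudo grant; this is a sketch where "www-data" and the script path are assumptions:

        # /etc/sudoers.d/kickoff  -- edit with visudo
        www-data ALL=(root) NOPASSWD: /usr/local/sbin/periodic-job

    The CGI handler then invokes sudo /usr/local/sbin/periodic-job, keeping all argument filtering on the web side and the privileged command pinned to one exact path.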


  • package manager cannot create users? installation script fails?

    - by RapidWebs
    It seems that when trying to install any package which requires the creation of a system user and group, the installer fails with the error:

        install: invalid user 'username'

    When it happened the first time, I assumed it was related to the package or its installation script. But now I am attempting a different package, and alas, the same issue arises!

    Note: /etc/passwd and /etc/group are not set immutable (chattr +i).

    Here is the example output from an attempted installation (of varnish):

        Selecting previously unselected package varnish.
        Unpacking varnish (from .../varnish_3.0.2-1ubuntu0.1_amd64.deb) ...
        Processing triggers for man-db ...
        Setting up libvarnishapi1 (3.0.2-1ubuntu0.1) ...
        Setting up varnish (3.0.2-1ubuntu0.1) ...
        install: invalid user `varnish'
        dpkg: error processing varnish (--configure):
        subprocess installed post-installation script returned error exit status 1
        Processing triggers for libc-bin ...
        ldconfig deferred processing now taking place
        Errors were encountered while processing:
        varnish

    Note: running Ubuntu 12.04. For now, I have resolved the issue by manually creating the users and running apt-get install -f, but this does not resolve the actual issue. Any ideas?
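    A hedged first check, to separate "the maintainer scripts are odd" from "user creation is broken system-wide": create and remove a throwaway system user by hand ("probeuser" is a placeholder name):

        sudo /usr/sbin/useradd -r -s /bin/false probeuser \
            && id probeuser \
            && sudo userdel probeuser

    If that fails too, the problem sits in useradd or the name service lookups (nsswitch.conf, the libnss libraries) rather than in any one package.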


  • Autossh startup on Ubuntu 10.04 - fails after powering off

    - by grant
    I'm using upstart to keep a reverse ssh tunnel alive using autossh, similar to "Using Upstart to Manage AutoSSH Reverse Tunnel". This works fine, except that after a manual power-down I can no longer connect to the machine through the "central server" using the tunnel; I receive "ssh_exchange_identification: Connection closed by remote host". The autossh process is running on the client, and I can connect again after re-starting networking. I'm trying to figure out why this fails consistently after a manual shutdown. Is it possible that I need to do some cleanup on startup that would allow the tunnel to work in this situation, or are there some other debugging/troubleshooting steps I can take to determine the problem?

    Machine A is the client machine, using autossh. It sits behind a firewall and uses the following command in upstart to create the ssh tunnel:

        /usr/bin/autossh -fN -i /keyfile -o StrictHostKeyChecking=no -R 20098:localhost:22 user@centralserver

    Machine B, which we'll call the "central server", sits in the cloud and is the host ("centralserver" in the command above). When Machine A is hard-powered off and back on, I cannot connect to it by SSH'ing from my machine (C) to Machine B in the cloud and then using the following command to get to Machine A:

        ssh -p 20098 user@localhost

    Again, after a reboot of the client (A), this works fine. It is only after a hard power-down that the problem occurs. There are autossh processes running on the client machine (A) after powering down and back up; they just don't seem to be doing their job.
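    A hedged explanation that fits the symptom: after a hard power-off, the central server still holds the half-dead old tunnel, so remote port 20098 stays bound; the new autossh session connects, silently fails to establish the forward, and sits there anyway. Making the client exit when the forward fails (so autossh retries) and having both ends notice dead peers usually cures it:

        /usr/bin/autossh -fN -i /keyfile \
            -o StrictHostKeyChecking=no \
            -o ExitOnForwardFailure=yes \
            -o ServerAliveInterval=30 -o ServerAliveCountMax=3 \
            -R 20098:localhost:22 user@centralserver

        # and on the central server, in /etc/ssh/sshd_config, so stale
        # tunnels are reaped instead of holding the port:
        #   ClientAliveInterval 30
        #   ClientAliveCountMax 3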


  • Log Problem and bash script

    - by GvWorker
    Hello guys. I have 11 Debian servers running on Rackspace cloud hosting, all running VHCS2 for hosting management. One server is used for the application and 10 are used only for SMTP; each server hosts one domain. My question is regarding the SMTP servers. When my clients use SMTP, logs are created in /var/log/, and within 24 hours the drives are full and the server refuses all SMTP connections. I deleted the logs and ran the following command to check the disk space:

        df -h

    but it showed the disk still full, and the server kept refusing SMTP connections. Then I ran the following command to see the truth:

        du --max-depth=1 -h

    It showed the real disk space used. I rebooted the server, and then it worked fine again. But after a few hours the same situation happened, so I created the following script:

        #!/bin/sh
        rm -fr /var/log/*
        rm -fr /var/log/apache2/*.log
        rm -fr /var/log/apache2/*.log.*
        rm -fr /var/log/apache2/users/*
        rm -fr /var/log/apache2/backup/*
        reboot

    It worked for days, but now the logs are again filling the HDD. I want the following solutions, if anybody can help me:

    - When I delete files, the HDD space should free up without rebooting.
    - Logs should stay within a specific range, like a fixed-size file where old data is overwritten with new data.
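    The df/du disagreement is the classic sign of deleted-but-still-open files: rm unlinks the name, but the daemon keeps writing to its open handle, so the space only returns when the daemon (or the whole box) restarts. A hedged logrotate policy avoids both problems by truncating in place instead of deleting (paths are examples; a file like this would go in /etc/logrotate.d/):

        /var/log/mail.log /var/log/apache2/*.log {
            daily
            rotate 7
            size 200M
            compress
            copytruncate    # truncate the live file; daemons keep their handles
            missingok
            notifempty
        }

    Short of that, truncating a log with > /var/log/mail.log instead of rm frees the space immediately, no reboot needed.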


  • Removing all traces of GNU java and openjdk and replacing with Sun JDK

    - by user61766
    I have installed the latest Sun JDK. But when I do:

        java -version

    I still get the OpenJDK version. So I completely removed OpenJDK. But now when I run java -version, I get an even older GNU Java, a 1.5-something libgcj. So I completely removed that too, though it asked to remove a bunch of dependent apps like OpenOffice.org Writer. Even though I need Writer, I let it go, because I never want to see the face of any GNU Java on my Linux again. So everything related to GNU Java is removed. Luckily I am able to start Eclipse, and it works fine and starts normally (apparently using the installed Sun JDK, which is what I want). But now when I run java -version I get:

        bash: /usr/bin/java: No such file or directory

    What do I need to do so that when I open any terminal window and enter java -version, I get the Sun JDK version? The Sun JDK is installed in /usr/java/jdk1.6.0_21. I also have symlinks /usr/java/latest and /usr/java/default pointing to the Sun JDK.
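    A hedged fix using the /usr/java/latest symlink already in place: register the Sun binary with the alternatives system so /usr/bin/java exists again (or, more bluntly, just symlink it):

        sudo update-alternatives --install /usr/bin/java java /usr/java/latest/bin/java 100
        sudo update-alternatives --config java    # pick it if several candidates remain
        # or simply:
        sudo ln -s /usr/java/latest/bin/java /usr/bin/java

    The alternatives route is the tidier one: a future JDK upgrade only has to adjust priorities instead of overwriting hand-made symlinks.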


  • Unable to create files in a directory

    - by vamsi360
    I have created a directory inside an Ubuntu VM in VirtualBox. Using VBoxManage from the Ubuntu host OS, I am executing a script that lives in that directory. But if the script contains commands that create a new file, they do not execute; "echo" commands before and after the touch command work fine. I even ran VBoxManage as the root user. I think the directory is not allowing the files to be created. How can I make a directory in Linux 777 so that all new files are created in it automatically? I mean, even after I make the directory world-writable (chmod 777 dir), I am unable to execute the script from the host. It may be a simple permissions problem; even root is unable to execute it.

        VBoxManage guestcontrol "Ubuntu_10_04" execute --image "/bin/bash" \
            "/home/cloudlet/Desktop/temp2/three" --username root --password root \
            --verbose --wait-exit --wait-stdout -- -l /usr

    Please help. I have been struggling with this problem for the past week.
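    A hedged diagnostic to drop into the guest script itself, to see what identity guestcontrol actually executes under and whether this is really a permissions problem; note that a directory's 777 mode does not decide the mode of new files (the creating process's umask does):

        #!/bin/bash
        id                                    # who is actually running this?
        ls -ld /home/cloudlet/Desktop/temp2   # what does the target dir allow?
        umask                                 # what will new files be created with?
        touch /home/cloudlet/Desktop/temp2/testfile && echo "create OK"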


  • How to avoid duplicates when copying files that have been renamed at the destination

    - by Benoitt
    I have to fetch pictures from a folder - with subfolders that are updated automatically - keeping their extensions. These files have to be copied into a folder where a PHP-based website will edit them (by renaming them and creating an XML file) to make them downloadable and integrated into an XML feed. Because the script renames the files, every time I perform the copy again, all the files are duplicated: the script has already renamed the original ones. I've tried a few things with rsync, but I'm looking for something more powerful, because I can't copy files with an external "history".

        #!/bin/bash
        find '/home/name/picture' -name '*.jpg' | while read FILE ; do
            rsync --backup --backup-dir=incremental --suffix=.old "$FILE" /var/www/media
        done
        wget --spider 'http://myscript.php'
        #exit 0

    PS: As a little addition, I'd like to replace '.' with a space just after the *.jpg copy. My PHP script has some problems defining files with dots because of the extension. I'm thinking about a find command - like the one above - with a sed function? Is that a good idea?
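    Since the PHP side renames everything, name-based tricks like rsync's --ignore-existing cannot work; the copy step needs its own memory. A hedged sketch keyed on content checksums (the manifest path is an assumption):

        #!/bin/bash
        # remember what was already copied, so files renamed at the
        # destination are not re-sent on the next run
        manifest=/var/www/media/.copied.md5
        touch "$manifest"
        find /home/name/picture -name '*.jpg' | while read -r file; do
            sum=$(md5sum "$file" | cut -d' ' -f1)
            grep -qx "$sum" "$manifest" && continue     # already delivered once
            rsync "$file" /var/www/media/ && echo "$sum" >> "$manifest"
        done
        wget --spider 'http://myscript.php'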


  • Sharepoint web part fails intermittently

    - by pringly
    I have a MOSS 2007 environment: two web servers and a DB server, load-balanced between the two web servers. I deployed a web part recently, which worked fine for a while but failed on web server 2 after a day. When it fails, it gives the error message:

        A Web Part or Web Form Control on this Page cannot be displayed or imported. The type could not be found or it is not registered as safe

    Once it has failed, it stays that way until an IIS reset is done. The other web server never fails. I tried to force the second web server to fail in order to recreate the issue and have been unable to do it: I placed it under heavy HTTP traffic and it handled it fine; I put it back in the pool, and it failed again after about 7 hours. Oddly, if I remove the web part's .dll from the affected web server, the web part doesn't stop working. Is this normal behavior? I checked the site's bin directory and the global assembly cache, and there is no other copy of the .dll anywhere else on the server. Also, when checking the web part gallery: if the web part has failed, it still appears in the gallery, but when trying to add a new web part, the .dll isn't listed. I have no idea how to continue troubleshooting from here, or even how to fix it. Any ideas?

