I am searching for an encrypted file system for Linux that can be mounted in a write-only mode, by that I mean you should be able to write/append files, but not be able to read the files you have written. Access to the files should only be given when the filesystem is mounted via a password. The purpose of this is to write log files and such, without having the log files themselves be accessible.
Does such a thing exist on Linux? Or if not, what would be the best alternative for creating encrypted log files? My current workaround consists of simply piping the data through gpg --encrypt, which works but is very cumbersome, as you can't get easy access to the file system as a whole; you have to pipe each file through gpg --decrypt manually.
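For context, the current pipeline is essentially this (the recipient, paths, and daemon name are placeholders):

# Encrypt the log stream on the fly with a public key; no plaintext ever touches the disk
some_daemon 2>&1 | gpg --encrypt --recipient admin@example.com > /var/log/secure/app.log.gpg

# Reading anything back means decrypting file by file, which is the cumbersome part
gpg --decrypt /var/log/secure/app.log.gpg | less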
In a previous version of Nautilus, the location bar had a button for toggling between a text field for the directory path and a row of buttons (one per directory in the path). With the install of Ubuntu 10.04, Nautilus 2.30.1 has just the button bar and no button to toggle to the text field. How do I get the control that was available before?
I have 100+ GB of files per day incoming on one machine. (File size is arbitrary and can be adjusted as needed.)
I have several other machines that do some work on these files.
I need to reliably deliver each incoming file to the worker machines. A worker machine should be able to free its HDD from a file once it is done working with it.
It is preferable that a file be uploaded to a worker only once, processed in place, and then deleted without being copied anywhere else, to minimize the already high HDD load. (The worker itself requires quite a bit of bandwidth.)
Please advise a solution that is not based on Java. None of the existing replication solutions I've seen can do the "free the HDD from the file once processed" part, but maybe I'm missing something...
The preferred solution should work with files (from the point of view of our business logic code) and not require the business logic to connect to some queue or other. (Internally the solution may use whatever technology it needs to, except Java.)
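To illustrate the flow I have in mind (hostnames, paths, and the processing command are made up):

# Deliver a finished file to a worker and remove it from the source once transferred
rsync --partial --remove-source-files /incoming/batch-0001.dat worker01:/work/incoming/

# On the worker, the business logic just sees a file, processes it in place, then frees the disk
process_file /work/incoming/batch-0001.dat && rm /work/incoming/batch-0001.dat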
I have a collection of Word files with lots of formula objects made in MathType.
I know there's a way to mass-convert .doc files to the OpenOffice format, but it doesn't guarantee that the formulas will be transferred smoothly.
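The mass conversion I'm referring to is something along these lines (assuming LibreOffice/OpenOffice with headless mode available):

# Batch-convert every .doc in the current directory to ODF Text, writing results to converted/
soffice --headless --convert-to odt --outdir converted/ *.doc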
I was wondering if someone has already figured out how to do that.
Thanks in advance.
I have my server set up with several public IP addresses, with a network configuration as follows (with example IPs):
eth0
 \- br0 - 1.1.1.2
     |- [VM 1's eth0]
     |   |- 1.1.1.3
     |   \- 1.1.1.4
     \- [VM 2's eth0]
         \- 1.1.1.5
My question is, how do I set up iptables with different rules for the actual physical server as well as the VMs? I don't mind having the VMs doing their own iptables, but I'd like br0 to have a different set of rules. Right now I can only let everything through, which is not the desired behavior (as br0 is exposed).
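For illustration, the kind of separation I'm after is roughly this (untested; interface names as above):

# Rules for traffic addressed to the host itself on br0
iptables -A INPUT -i br0 -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -i br0 -j DROP

# Bridged traffic to/from the VMs traverses the FORWARD chain and can be matched separately
iptables -A FORWARD -m physdev --physdev-is-bridged -j ACCEPT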
Thanks!
I have a project hosted with gitolite on my own server, and I would like to deploy the whole project from the gitolite bare repository to an Apache-accessible location via a post-receive hook.
My post-receive hook contains the following:
echo "starting deploy..."
WWW_ROOT="/var/www_virt.hosting/domain_name/htdocs/"
GIT_WORK_TREE=$WWW_ROOT git checkout -f
exec chmod -R 750 $WWW_ROOT
exec chown -R www-data:www-data $WWW_ROOT
echo "finished"
The hook can't finish without an error message:
chmod: changing permissions of `/var/www_virt.hosting/domain_name/file_name': Operation not permitted
which means git doesn't have enough rights to do it.
The git source path is /var/lib/gitolite/project.git/, which is owned by gitolite:gitolite
And with these permissions, Redmine (running under the www-data user) can't access the git repository to fetch changes.
The whole project should be placed here: /var/www_virt.hosting/domain_name/htdocs/, which is owned by www-data:www-data.
What changes should I make so that the post-receive hook works properly and Redmine can work with the repository?
What I did was:
# id www-data
uid=33(www-data) gid=33(www-data) groups=33(www-data),119(gitolite)
# id gitolite
uid=110(gitolite) gid=119(gitolite) groups=119(gitolite),33(www-data)
That did not help.
I want Apache (serving the project), Redmine (reading the project's source files from git), and git (deploying to the www-data-accessible path) to all work without problems.
What should I do?
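For example, I imagine something along the lines of a shared group on the deploy directory would be involved (untested; paths and group as above):

# Let both gitolite and www-data work under the web root via the www-data group
chgrp -R www-data /var/www_virt.hosting/domain_name/htdocs/
chmod -R g+rwX /var/www_virt.hosting/domain_name/htdocs/
# Make new subdirectories keep the group
find /var/www_virt.hosting/domain_name/htdocs/ -type d -exec chmod g+s {} +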
I set up an OpenVPN server on my VPS, using this guide:
http://vpsnoc.com/blog/how-to-install-openvpn-on-a-debianubuntu-vps-instantly/
And I can connect to it without problems.
Connect, that is, but no traffic is being redirected: when I try to load a webpage while connected to the VPN, I just get an error.
This is the config file it generated:
dev tun
server 10.8.0.0 255.255.255.0
ifconfig-pool-persist ipp.txt
ca ca.crt
cert server.crt
key server.key
dh dh1024.pem
push "route 10.8.0.0 255.255.255.0"
push "redirect-gateway"
comp-lzo
keepalive 10 60
ping-timer-rem
persist-tun
persist-key
group daemon
daemon
This is my iptables.conf
# Generated by iptables-save v1.4.4 on Sat May 7 13:09:44 2011
*raw
:PREROUTING ACCEPT [37938267:10998335127]
:OUTPUT ACCEPT [35616847:14165347907]
COMMIT
# Completed on Sat May 7 13:09:44 2011
# Generated by iptables-save v1.4.4 on Sat May 7 13:09:44 2011
*nat
:PREROUTING ACCEPT [794948:91051460]
:POSTROUTING ACCEPT [1603974:108147033]
:OUTPUT ACCEPT [1603974:108147033]
-A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
-A POSTROUTING -s 10.8.0.0/24 -o eth1 -j MASQUERADE
-A POSTROUTING -s 10.8.0.0/24 -o venet0 -j MASQUERADE
COMMIT
# Completed on Sat May 7 13:09:44 2011
# Generated by iptables-save v1.4.4 on Sat May 7 13:09:44 2011
*mangle
:PREROUTING ACCEPT [37938267:10998335127]
:INPUT ACCEPT [37677226:10960834925]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [35616847:14165347907]
:POSTROUTING ACCEPT [35680187:14169930490]
COMMIT
# Completed on Sat May 7 13:09:44 2011
# Generated by iptables-save v1.4.4 on Sat May 7 13:09:44 2011
*filter
:INPUT ACCEPT [37677226:10960834925]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [35616848:14165347947]
-A INPUT -i eth0 -j LOG --log-prefix "BANDWIDTH_IN:" --log-level 7
-A FORWARD -o eth0 -j LOG --log-prefix "BANDWIDTH_OUT:" --log-level 7
-A FORWARD -i eth0 -j LOG --log-prefix "BANDWIDTH_IN:" --log-level 7
-A OUTPUT -o eth0 -j LOG --log-prefix "BANDWIDTH_OUT:" --log-level 7
COMMIT
# Completed on Sat May 7 13:09:44 2011
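One thing I have not verified is IP forwarding, which I assume is needed for the clients' traffic to leave the box; checking and enabling it would be something like:

# Check whether the kernel forwards IPv4 packets
sysctl net.ipv4.ip_forward
# Enable it temporarily if it reports 0
sysctl -w net.ipv4.ip_forward=1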
I'm having issues adding a second IP address to one interface. Below is my /etc/network/interfaces:
# The loopback network interface
auto lo
iface lo inet loopback
#eth0 is our main IP address
auto eth0
iface eth0 inet static
address 198.58.103.*
netmask 255.255.255.0
gateway 198.58.103.1
#eth0:0 is our private address
auto eth0:0
iface eth0:0 inet static
address 192.168.129.134
netmask 255.255.128.0
#eth0:1 is for www.site.com
auto eth0:1
iface eth0:1 inet static
address 198.58.104.*
netmask 255.255.255.0
gateway 198.58.104.1
When I run /etc/init.d/networking restart, I get a failure bringing up eth0:1:
RTNETLINK answers: File exists
Failed to bring up eth0:1.
Any idea why this would be? I didn't have any problems when I first set up eth0 and eth0:0.
I have 3 subdomain websites: subdomain1.mydomain.com, subdomain2.mydomain.com, and subdomain3.mydomain.com.
I have pointed these to the external IP address.
BIND is OK, dig shows no errors, and the Apache2 reload is OK.
1.) If I set the following, I can only see the first one alphabetically.
NameVirtualHost *:80
<VirtualHost *:80>
ServerName subdomain1.mydomain.com
...
<VirtualHost *:80>
ServerName subdomain2.mydomain.com
...
2.) If I set the following, I get file not found. Apache2 reload ok.
NameVirtualHost mydomain.com:80
<VirtualHost mydomain.com:80>
ServerName subdomain1.mydomain.com
...
<VirtualHost mydomain.com:80>
ServerName subdomain2.mydomain.com
Please help! What else should I do?
On my firewall I have three interfaces:
eth0 : public IP (46.X.X.X)
eth0:0 : public IP (46.X.X.Y)
eth1 : public IP (88.X.X.X)
eth2 : private LAN (172.X.X.X)
I've set up basic NAT, which works great until I turn on the eth1 interface; then I basically lose connectivity. When I turn the interface off (ifconfig eth1 down), the NAT works again.
I've added some policy routing via iproute, which makes my three public IPs available.
I don't understand why turning eth1 on makes the LAN unavailable.
PS: even weirder, when I turn on eth1 BUT remove the NAT, the firewall is reachable on its public IPs. So to me it's exclusively a NAT issue: without the NAT the network works, and with the NAT but without the second public interface, the NAT also works.
Regards
EDIT:
I've been able to make it work by using iproute2 rules. That was definitely a routing issue. Here is what I did:
ip rule add prio 50 table main
ip rule add prio 201 from ip1/netmask table 201
ip rule add prio 202 from ip2/netmask table 202
ip route add default via gateway1 dev interface1 src ip1 proto static table 201
ip route append prohibit default table 201 metric 1 proto static
ip route add default via gateway2 dev interface2 src ip2 proto static table 202
ip route append prohibit default table 202 metric 1 proto static
# multipath
ip rule add prio 221 table 221
ip route add default table 221 proto static \
    nexthop via gateway1 dev interface1 weight 2 \
    nexthop via gateway2 dev interface2 weight 3
Is it possible to set timeout directives within a location block to prevent nginx from returning a 504 from a long-running PHP script (PHP-FPM)?
location /myurlsegment/ {
client_body_timeout 1000000;
send_timeout 1000000;
fastcgi_read_timeout 1000000;
}
This has no effect when making a request to example.com/myurlsegment. The timeout occurs after approximately 60 seconds. PHP is configured to allow the script to run until completion (set_time_limit(0)).
I don't want to set a global timeout for all scripts.
What I need is a program that, given a directory (as a command-line argument) containing more directories with 4 pics inside each, makes thumbnails of the 4 files, glues them together (2 rows, 2 columns), and names the resulting image after the directory.
I think it could be done with a combination of a program and shell scripting (I'm experienced on Windows, but new to Linux).
Some real examples would be great.
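To make it concrete, here is the rough shape of what I'm imagining, assuming ImageMagick's montage tool and JPEG input (paths and sizes are made up):

#!/bin/sh
# For each subdirectory of the directory given as $1, tile its four pictures
# into a 2x2 sheet of thumbnails named after that subdirectory
for dir in "$1"/*/; do
    name=$(basename "$dir")
    montage "$dir"*.jpg -tile 2x2 -geometry 200x200+2+2 "$1/$name.jpg"
done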
Thanks in advance
I have a remote VPS that I would like to set up Samba on and only allow my IP access to it. I currently have this in my smb.conf:
[global]
netbios name = apollo
security = user
encrypt passwords = true
socket options = TCP_NODELAY
printing = bsd
log level = 3
log file = /var/log/samba/log/%m
debug timestamp = yes
max log size = 100
[hosting]
path = /hosting/
comment = Hosting Folder
browseable = yes
read only = yes
guest account = yes
valid users = nick
I have the ports (137,138,139,445) open in iptables (they're open to everyone right now while I debug) and I see nothing in the syslog about iptables blocking my requests.
When I try to open a file browser to my address \\ipaddress, it hangs for a good thirty seconds and then opens a login box. I enter my username and password for the server and hit OK. It then opens the same box; I enter my credentials again and hit Enter. Windows then tells me it could not connect. My user account has already been added to Samba.
Anybody have any suggestions what I can do to get this working?
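For reference, to restrict access to just my IP I was planning on adding something like this to the [global] section (the IP is a placeholder):

# Only answer connections coming from my own address and loopback
hosts allow = 203.0.113.5 127.0.0.1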
Tech Drive-in: "But apart from these eye candy, in a more subtle way, a number of new applications are also in the pipeline. Let's explore these new comers."
I've been using dpkg -S <package_name> to list the contents of a package. Sometimes I pipe to grep bin to quickly scan for executables.
I just ran into a case where this didn't work out for me:
$ which virtualenv
$ sudo apt-get install python-virtualenv
Reading package lists... Done
...
Setting up python-virtualenv (1.7.1.2-1) ...
$ which virtualenv
/usr/bin/virtualenv
$ dpkg -S /usr/bin/virtualenv
python-virtualenv: /usr/bin/virtualenv
$ dpkg -S python-virtualenv | grep bin
$
/usr/bin/virtualenv seems to be provided by python-virtualenv, but isn't listed in the package contents provided by dpkg -S. All the while, passing /usr/bin/virtualenv to dpkg -S returns that the file comes from python-virtualenv. Can you all explain this?
Good day,
I am having problems manually extracting domains from a Plesk 9.5 backup that was FTPed onto my backup server. I have followed the article http://kb.parallels.com/en/1757 using method 2. The problem is here:
zcat DUMP_FILE.gz > DUMP_FILE
My backup file CP_1204131759.tar is a tar archive, and zcat does not work with it. So I proceeded to run the command: cat CP_1204131759.tar > CP_1204131759.
But when I try # cat CP_1204131759 | munpack
I get an error that munpack did not find anything to read from standard input.
I went on to extract the tar backup file using the xvf flags and got a lot of files (20) similar to these ones:
CP_sapp-distrib.7686-0_1204131759.tgz CP_sapp-distrib.7686-35_1204131759.tgz CP_sapp-distrib.7686-6_1204131759.tgz
How best can I extract the httpdocs of a domain from this server wide Plesk 9.5.4 backup?
I found this in my log server:
sm-mta[11410]: r9BKb6YY021119: to=<[email protected]>, ctladdr=<[email protected]> (33/33), delay=2+07:24:18, xdelay=00:00:01, mailer=esmtp, pri=29911032, relay=mail1.mkuku.com. [58.22.50.83], dsn=4.0.0, stat=Deferred: Connection refused by mail1.mkuku.com.
This message is repeated every 10-30 seconds with a different "to" address.
What is this? Is my server being used to send spam?
I'm using Celery 2.5.1 with Django on a micro EC2 instance with 613 MB of memory, and as such I have to keep memory consumption down.
Currently I'm using it only for the scheduler "celery beat" as a web interface to cron, though I hope to use it for more in the future. I've noticed it is the biggest consumer of memory on my micro machine even though I have configured the number of workers to one. I don't have many other options set in settings.py:
import djcelery
djcelery.setup_loader()
BROKER_BACKEND = 'djkombu.transport.DatabaseTransport'
CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'
CELERY_RESULT_BACKEND = 'database'
BROKER_POOL_LIMIT = 2
CELERYD_CONCURRENCY = 1
CELERY_DISABLE_RATE_LIMITS = True
CELERYD_MAX_TASKS_PER_CHILD = 20
CELERYD_SOFT_TASK_TIME_LIMIT = 5 * 60
CELERYD_TASK_TIME_LIMIT = 6 * 60
Here are the details via top:
PID USER NI CPU% VIRT SHR RES MEM% Command
1065 wuser 10 0.0 283M 4548 85m 14.3 python manage_prod.py celeryd --beat
1025 wuser 10 1.0 577M 6368 67m 11.2 python manage_prod.py celeryd --beat
1071 wuser 10 0.0 578M 2384 62m 10.6 python manage_prod.py celeryd --beat
That's about 214mb of memory (and not much shared) to run a cron job occasionally. Have I done anything wrong, or can this be reduced about ten-fold somehow? ;)
Update: here's my upstart config:
description "Celery Daemon"
start on (net-device-up and local-filesystems)
stop on runlevel [016]
nice 10
respawn
respawn limit 5 10
chdir /home/wuser/wuser/
env CELERYD_OPTS=--concurrency=1
exec sudo -u wuser -H /usr/bin/python manage_prod.py celeryd --beat --concurrency=1 --loglevel info --logfile /var/tmp/celeryd.log
Update 2:
I notice there is one root process, one user child process, and two grandchildren from that. So I think it isn't a matter of duplicate startup.
root 34580 1556 sudo -u wuser -H /usr/bin/python manage_prod.py celeryd
wuser 577M 67548 +- python manage_prod.py celeryd --beat --concurrency=1
wuser 578M 63784 +- python manage_prod.py celeryd --beat --concurrency=1
wuser 271M 76260 +- python manage_prod.py celeryd --beat --concurrency=1
I'm looking to add a bit of memory to MySQL on a Linode VPS server on which I've got a small Facebook (canvas app) PHP app running with MySQL. I'm not super familiar with MySQL optimization, so I'm hoping to find a simple answer. I think I want to increase the key_buffer size (the default is 16M) to something like 32M to start, but I'm not sure if I need to tweak anything else as well. All I've done so far is increase the query_cache_size to 32M from 16M.
There's also key_buffer under [mysqld] and key_buffer under [isamchk]. What is the difference between those two?
If I have a Linode 2048 MB (http://www.linode.com) VPS, what would you recommend I set the buffers to? I don't expect this site to have tons of visitors, but I'd like it to be as optimized as possible. It's definitely much heavier on database access than on PHP, with very few HTTP requests.
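For reference, the relevant part of my my.cnf currently looks roughly like this (the values are the ones mentioned above):

[mysqld]
key_buffer       = 16M   # the default; considering 32M
query_cache_size = 32M   # raised from the default 16M

[isamchk]
key_buffer       = 16M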
$ loopy
Command 'loopy' is available in '/usr/games/loopy'
The command could not be located because '/usr/games' is not included in the PATH environment variable.
loopy: command not found
Every answer I've found just says to add it to my .profile... but this should be in the PATH for all users, and was up until recently (I have no idea what would have caused it to change).
How can I solve this on my system for all users? What could have caused this to change?
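For what it's worth, on Ubuntu I believe the system-wide PATH normally comes from /etc/environment, so a first check would be something like:

# See whether /usr/games still appears in the system-wide PATH definitions
grep -n games /etc/environment /etc/profile 2>/dev/null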
Thanks.
I am running Kubuntu Hardy Heron with a dual-monitor setup, and have VirtualBox on it running Windows XP in seamless mode.
My problem is, I can't get VirtualBox to extend to the second monitor. Has anyone been able to achieve this, or know if it can be achieved?
I'm trying to run mysqld inside a chroot environment. Here's the situation.
When I run mysqld as root, I can connect to my databases. But when I start mysqld via the init.d script, the mysql client gives me an error.
$ mysql --user=root --password=password
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111)
So I guess I need to change the file permissions of some files. But which ones? Oh, and in case you are wondering, '/var/run/mysqld/mysqld.sock' is owned by the 'mysql' user.
EDIT: the strace output looks something like this:
[pid 20599] <... select resumed> ) = 0 (Timeout)
[pid 20599] time (NULL) = 12982215237
[pid 20599] select(0, NULL, NULL, NULL, {1, 0} <unfinished ...>
Is there any way to untar and only extract those files that are newer than a certain date, including the directory structure?
I restored a backup on a play server, but it was a few days old. However, I have a tar archive of the entire structure that is more up to date and healthy, so now I want to extract all files (including the directory structure) based on a date filter on the files, if possible.
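If tar can't filter by date on extraction directly, the fallback I'm considering is extracting to a staging area and copying only the newer files over, roughly (GNU find/cp assumed; date and paths are examples):

# Unpack everything into a staging directory
mkdir /tmp/staging && tar -xf backup.tar -C /tmp/staging
# Copy only files newer than the cutoff, preserving the directory structure
cd /tmp/staging && find . -type f -newermt "2011-05-01" -exec cp -a --parents {} /restore/target/ \;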
So I ran apt-get install httperf on my system and I can now run httperf. But how can I run 'autobench'? I downloaded the file and unarchived it, and if I go into the directory and run autobench it says: -bash: command not found
I think it's a Perl script, but if I run perl autobench, it says:
root@example:/tmp/autobench-2.1.2# perl autobench
Autobench configuration file not found
- installing new copy in /root/.autobench.conf
cp: cannot stat `/etc/autobench.conf': No such file or directory
Installation complete - please rerun autobench
Even if I run it again it says the same thing.