How can I specify a port for the destination IP? When I do scp -p 0000 it still tries to connect on port 22, not the one I'm specifying.
scp svn_backup.tgz user@host:/path/to/new/svn/
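That is, the full command I'm attempting looks like this (0000 standing in for the real port number):
scp -p 0000 svn_backup.tgz user@host:/path/to/new/svn/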
I need to allow members of the group 'ftpusers' to be able to change permissions on all objects inside a certain directory. I was looking into how to do this, but all I have found is how to do it on BSD:
chmod +a "ftpgroup allow writesecurity" /some/dir
I need exactly the same thing, but for Debian GNU/Linux.
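For reference, the closest thing I have found on Linux is POSIX ACLs via setfacl, but as far as I can tell that only grants read/write/execute, not the right to change permissions:
# grants the group rwx on the directory, but nothing like BSD's "writesecurity"
setfacl -m g:ftpusers:rwx /some/dir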
Thanks
I have two CentOS 5 servers running iptables and apf.
I am having issues connecting with ssh from server 1 to server 2.
I can connect from server 1 to a third server and from that third server to both 1 and 2.
In all cases I am using the IP address and not a host name.
I have stopped iptables and apf on all servers and it doesn't seem to change anything.
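For reference, this is roughly how I stopped them on each box (assuming the stock CentOS 5 service names):
service iptables stop   # stops the firewall service and flushes its rules
apf -f                  # flushes/stops APF's rules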
What is the best way to debug this process?
I'm trying to find the best way to upgrade from PHP 5.1 to 5.3. The CRM software I am running on this server requires the upgrade; otherwise I probably wouldn't even perform it, because it seems like it's going to be trickier than I'd hoped. Being still new to the programming world, I find these upgrades worrisome.
I am running Apache 2.2.6 (Fedora), PHP 5.1.6 and MySQL 5.0.27 on this server.
I have a jar that runs forever (an infinite loop with a socket-listening thread) and I need it to run in the background at all times. An example would be: java -jar test.jar. How do I do this? Thanks in advance!
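The closest I have come is backgrounding it with nohup, something like the line below, but I am not sure this is the proper way (and it would not survive a reboot):
nohup java -jar test.jar > test.jar.log 2>&1 &   # keeps running after logout, output goes to the log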
What is the most efficient way to block access to 8-10k IP addresses?
My server runs nginx on Debian.
If possible, I would like to serve all of those addresses an Internal Server Error (500).
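The only approach I have come up with so far is generating a big include file of deny rules from my plain list of IPs, roughly like this - though as far as I understand, deny answers with a 403 rather than the 500 I want (file names are just examples):
# blocked_ips.txt holds one address per line
awk '{print "deny " $1 ";"}' blocked_ips.txt > /etc/nginx/blocked.conf
# then reference it from the server block:  include /etc/nginx/blocked.conf;
/etc/init.d/nginx reload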
Tomcat is leaving me with CLOSE_WAIT sockets which ultimately saturate the maximum number of connections.
I've tried many methods in my client and server code to get rid of these to no avail, including closing connections, calling System.gc(), etc.
Now I'm trying to find a way to simply time these out quickly in the OS. I've got conntrack working, but am not sure how to use that to kill these connections. I've also set /proc/sys/net/ipv4/netfilter/ip_conntrack_tcp_timeout_close_wait to 1, which of course is too low but the connections persist.
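For reference, this is how I am watching them pile up:
# count Tomcat's lingering CLOSE_WAIT sockets
netstat -ant | grep CLOSE_WAIT | wc -l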
Is there a way to kill these zombie sockets?
Running Ubuntu.
Hello All,
I have Fedora Core 10 installed on a PC, occupying the whole disk (no free space). I want to add Ubuntu 10.04 desktop edition.
Does Ubuntu 10.04 allow you to resize existing partitions during the installation process in order to free up some space for the Ubuntu installation? Without losing or trashing existing data, obviously.
Thanks.
I have the following scenario:
A single partition mounted as /, with lots of disk space.
There is a range of directories (/pg/tbs1, /pg/tbs2, /pg/tbs3 and so on), and I would like to limit the total size of these directories.
One option is to make some big files, run mkfs on them, mount them over loopback, and then set quotas, but this makes expansion a bit problematic.
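In concrete terms, I mean something like this (sizes and paths are placeholders):
# create a fixed-size file, put a filesystem on it, mount it at the directory
dd if=/dev/zero of=/pg/tbs1.img bs=1M count=10240   # 10 GB cap
mkfs.ext3 -F /pg/tbs1.img
mount -o loop /pg/tbs1.img /pg/tbs1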
Is there any other way to make the quota work per directory?
I've been using rsync to automatically send files from one box to another, which is great compared to scp, since it supports resuming. However, when resuming a very large file (10 GB), rsync has to read both files and compare them, which is very slow.
I don't need fancy error handling, just "scp with resume", so here's my question:
Is there an alternative to rsync/scp, that supports resuming without having to read both source and destination files?
I've read the manuals without finding anything I can use; please let me know if I've missed something. This is the rsync line I've been using:
rsync -av --partial --progress --inplace SRC DST
I have an old server running Fedora 4 that occasionally just stops responding over the network for about an hour. This happens 1-2 times a week.
Also no connection from the server itself to any other computer on the network is possible when it happens. The network settings and routes look fine. There are no unusual log messages and no unusual processes running at that time.
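By that I mean checks along these lines all look normal while it is wedged:
ifconfig eth0                 # interface is up and has its address
route -n                      # routing table unchanged
tail -n 50 /var/log/messages  # nothing unusual logged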
If I restart the network or just do an ifconfig eth0 down & ifconfig eth0 up it works fine afterwards.
I know that the server should be updated to a currently supported OS, but that is not really an easy option right now.
Any ideas on how I could diagnose and fix that problem?
I need to:
Copy all of a tree's folders (with all files, including hidden ones) under /sourcefolder/* to /destfolder/, preserving user privileges.
If there is a conflict with a file (a file with the same name exists in destfolder), then
rename file in destfolder with a standard rule, like add "old" prefix to filename (readme.txt will become oldreadme.txt)
copy the conflicted file from source to destination
Conflicts between folders should be transparent - if the same directory exists in both sourcefolder and destfolder, then preserve it and recursively copy its content according to the above rules.
I need also a .txt report that describes all files/folders added to destfolder and files that were renamed.
How can I accomplish this?
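Here is my own rough attempt, in case it helps show what I mean (assumes GNU tools; I am not confident it handles all the cases):
#!/bin/bash
# rough sketch: copy /sourcefolder into /destfolder, renaming clashes with an "old" prefix
src=/sourcefolder
dst=/destfolder
report=report.txt
: > "$report"
(cd "$src" && find . -mindepth 1) | while IFS= read -r rel; do
    rel=${rel#./}
    s="$src/$rel"
    d="$dst/$rel"
    if [ -d "$s" ]; then
        mkdir -p "$d"                      # folder conflicts are transparent
    else
        if [ -e "$d" ]; then
            mv "$d" "$(dirname "$d")/old$(basename "$d")"
            echo "renamed: $rel" >> "$report"
        fi
        cp -p "$s" "$d"                     # -p preserves mode/owner/times
        echo "added: $rel" >> "$report"
    fi
done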
I use PuTTY to log in to the remote server, then set the environment and change to a particular directory. Now, from this directory, I need to copy a folder to my desktop, which is on Windows.
How can I achieve this?
Some of my failed attempts are as follows:
scp -r remote_foldername srao@my_ipaddress:C:\srao\Users\Desktop
That is, from the remote server (which I am logged into through PuTTY) to my_username_in_windows@ip_address:path_to_destination.
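I also wondered whether I should be running PuTTY's pscp from the Windows side instead, something like the following, but I have not got that working either (remote_host is a placeholder):
pscp -r srao@remote_host:/path/to/remote_foldername C:\Users\srao\Desktop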
I am attempting to set up my Raspberry Pi as a bridge (but I think this is not a question specific to the hardware) - using Debian wheezy.
I have a hostapd.conf (some details changed for security):
interface=wlan0
bridge=br0
driver=nl80211
auth_algs=1
macaddr_acl=0
ignore_broadcast_ssid=0
logger_syslog=-1
logger_syslog_level=0
hw_mode=g
ssid=MY_SSID
channel=11
wep_default_key=0
wep_key0=MY_KEY
wpa=0
(yes, I know WEP is no good)
And this is in /etc/network/interfaces:
auto lo
iface lo inet loopback
iface eth0 inet dhcp
allow-hotplug wlan0
iface wlan0 inet manual
wpa-roam /etc/wpa_supplicant/wpa_supplicant.conf
iface default inet dhcp
auto br0
iface br0 inet dhcp
bridge-ports eth0 wlan0
Everything seems to come up ok, but I cannot associate with the bridged wireless connection - even though the flashing lights on the USB stick suggest packets are being exchanged.
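For what it is worth, these are the checks I have been running to confirm things are up (output omitted):
brctl show br0                          # both eth0 and wlan0 show as bridge ports
iw dev wlan0 info                       # wlan0 reports type AP
hostapd -dd /etc/hostapd/hostapd.conf   # verbose run starts without errors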
I have read somewhere that not all cards/devices will run in hostap mode - they won't pass packets in one direction. Is that right? (The info was a bit old.) This is my card:
[ 3.663245] usb 1-1.3.1: new high-speed USB device number 5 using dwc_otg
[ 3.794187] usb 1-1.3.1: New USB device found, idVendor=0cf3, idProduct=9271
[ 3.804321] usb 1-1.3.1: New USB device strings: Mfr=16, Product=32, SerialNumber=48
[ 3.816994] usb 1-1.3.1: Product: USB2.0 WLAN
[ 3.823790] usb 1-1.3.1: Manufacturer: ATHEROS
[ 3.830645] usb 1-1.3.1: SerialNumber: 12345
So, what have I got wrong here?
We have an environment of a few thousand users running applications on about 40 clusters ranging in size from 20 compute nodes to 98,000 compute nodes. Users on these systems generate massive files (sometimes 1PB) controlled by traditional unix permissions (ACLs usually aren't available or practical due to the specialized nature of the filesystem).
We currently have a program called "give", which is a suid-root program that allows a user to "give" a file to another user when group permissions are insufficient. So, a user would type something like the following to give a file to another user:
> give username-to-give-to filename-to-give ...
The receiving user can then use a command called "take" (part of the give program) to receive the file:
> take filename-to-receive
The permissions of the file are then effectively transferred over to the receiving user.
This program has been around for years and we'd like to revisit things from a security and functional point of view.
Our current plan of action is to remove the bit rot in our current implementation of "give" and package it up as an open source app before we redeploy it into production.
Does anyone have another method they use to transfer extremely large files between users when only traditional unix permissions are available?
I have a server with a faulty power button that likes to reboot itself. Usually there are warning signs: the acpid log file in /var/log starts spamming garbage for about 10 hours or so.
Is there an easy way I can have something monitor the acpid log and email me when it has new activity?
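The only idea I have had so far is something crude with inotifywait and mail, along these lines (assumes the inotify-tools package and a working local MTA, neither of which I have set up yet; the address is a placeholder):
#!/bin/bash
# mail a snippet of the acpid log whenever it is written to
while inotifywait -e modify /var/log/acpid; do
    tail -n 20 /var/log/acpid | mail -s "acpid activity on $(hostname)" me@example.com
    sleep 600   # throttle so a 10-hour flood does not mean hundreds of mails
done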
I wouldn't consider myself extremely advanced so any "guides" you may have for accomplishing something like this would be very helpful and much appreciated. Thank you!
I am interested in setting up a Raspberry Pi as the local end of a printing solution. In my testing the Pi chokes on acting as a complete CUPS-based print server. It seems a little underpowered for some of the Ghostscript processing and other filtering that occurs; particularly on larger or complex documents, the processing time can be 5 or more minutes.
My question is: can the processing largely be done elsewhere, with the prepared end product of the processing chain fed to the Pi for output on the connected printer?
So in this scenario, any arbitrary document (HTML, PDF, text) is initially 'printed' on a relatively powerful machine, but the output is stored in a file. This file is then grabbed by the Pi and, with all the heavy work out of the way, easily printed using CUPS. I know files can be pushed through CUPS in raw mode, but I am fuzzy on the pros and cons and the applicability to what I describe. I have tested this with pdftops creating a PS file and then feeding that raw to CUPS, and I think it works, but it seems like there may be a cleaner solution.
This scenario would ideally work for any number of printer types.
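Concretely, my current test pipeline looks roughly like this (the queue name is a placeholder):
pdftops document.pdf document.ps      # heavy rendering on the powerful machine
lp -d my_printer -o raw document.ps   # on the Pi, bypassing the filter chain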
I am working on a PHP script that transfers files using FTP functions. It has always worked on my production server (which is a hosting service). The development server I have just set up (I am a novice to servers) is Debian Lenny with Apache2, PHP5, and MySQL5.
The file transfer works correctly, but once the file has been written to the server, it has permissions of 600. This makes it impossible for me to view the file (JPEG) in the web browser, as permission is denied. I have scoured the internet and even broken my server installation and reinstalled it trying to figure this out (which has been fun, nonetheless!).
I know it is unwise to set 777 permissions on publicly accessible files, but even that does not solve the problem. The only thing that works is if I chmod 777 thefile.jpg after it has been transferred, which is not a workable solution.
I tried changing the owner of my site files to www-data per this post, but that also does not work.
My user is mike, and it still does not work whether the owner of the files is mike or root.
Would somebody point me in the right direction? Thanks! And, of course, let me know if I can clarify anything.
I have set up a single laptop (I have 10 of these exact models) for a demonstration next week. I want to copy the configuration (Fedora 13 with the software for demonstration) to the 9 other machines. Is this simply a dd operation? I may be overestimating what is really needed.
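By 'a dd operation' I mean something along these lines from a live CD (device names are guesses; I have not tried it yet):
dd if=/dev/sda bs=4M | gzip > /mnt/usb/fedora13.img.gz    # image the configured laptop
gunzip -c /mnt/usb/fedora13.img.gz | dd of=/dev/sda bs=4M # restore onto each of the others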
Thanks for the assistance!
My Ubuntu eats more memory than the task manager is showing:
sudo ps -e --format rss | awk 'BEGIN{c=0} {c+=$1} END{print c/1024}'
1043.84
free -m
             total       used       free     shared    buffers     cached
Mem:          3860       1878       1982          0         20        679
-/+ buffers/cache:       1178       2681
Swap:         2729       1035       1693
That's strange. Can someone explain this difference?
But what is more important:
I'd like to know how much memory a process is really using. I don't want to know the virtual memory size, but rather the resident memory plus swap of a process.
I have also tried outputting the format param "sz" of 'ps', but the sum of that is too high (5450 MB) (the param 'size' gives 8323.45 MB). Are there any other options?
I really want to use this to determine which programs/processes are eating too much memory (and swap), so I can kill them, because hibernation might not work if the swap partition is too small.
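In other words, I am after something per-process like this (VmSwap in /proc needs a fairly recent kernel, so I am not sure it is available everywhere; 1234 is a placeholder PID):
# resident plus swapped memory of a single process, in kB
awk '/VmRSS|VmSwap/ {sum += $2} END {print sum " kB"}' /proc/1234/status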
I've been using an external USB drive on a Debian server for backup. The drive is formatted as NTFS and mounted with ntfsmount. This was working fine, but I was filling up a directory with lots of files. Eventually the backup failed. When I then tried to look at the directory using ls, it reported:
ls: reading directory .: Numerical result out of range
Looking in syslog, I also saw this:
Sep 23 07:35:31 tosh ntfsmount[28040]: Failed to read index block: Numerical result out of range.
Is this simply that I've reached the upper limit of number of files in a directory? If so, is there any way to extend the number of allowed files?
I'm working on an embedded system (based on Ubuntu 12.04 LTS) and I'm customizing its kernel. I'm having some problems with upstart, mountall and plymouth. Nothing unsolvable, I suppose, but the real problem is that I can't properly diagnose what's going on, because the kernel (or maybe plymouth) changes the video mode in the middle of the boot process. This completely wipes entire lines of log output and prevents any debugging of kernel misconfigurations.
My Grub2 config seems to be ok with:
GRUB_CMDLINE_LINUX=""
GRUB_CMDLINE_LINUX_DEFAULT="acpi=force noplymouth"
GRUB_GFXMODE=1024x768x32
GRUB_GFXPAYLOAD_LINUX=keep
Here is some relevant output of lspci:
00:00.0 Host bridge: Intel Corporation Mobile 945GSE Express Memory Controller Hub (rev 03)
00:02.0 VGA compatible controller: Intel Corporation Mobile 945GSE Express Integrated Graphics Controller (rev 03)
00:02.1 Display controller: Intel Corporation Mobile 945GM/GMS/GME, 943/940GML Express Integrated Graphics Controller (rev 03)
And here is the relevant portion of my kernel configuration:
CONFIG_AGP=y
CONFIG_AGP_INTEL=y
CONFIG_VGA_ARB=y
CONFIG_VGA_ARB_MAX_GPUS=16
CONFIG_DRM=y
CONFIG_DRM_KMS_HELPER=y
CONFIG_DRM_I915=y
CONFIG_DRM_I915_KMS=y
CONFIG_VIDEO_OUTPUT_CONTROL=y
CONFIG_FB=y
CONFIG_FB_BOOT_VESA_SUPPORT=y
CONFIG_FB_CFB_FILLRECT=y
CONFIG_FB_CFB_COPYAREA=y
CONFIG_FB_CFB_IMAGEBLIT=y
CONFIG_FB_MODE_HELPERS=y
CONFIG_FB_VESA=y
CONFIG_BACKLIGHT_LCD_SUPPORT=y
CONFIG_BACKLIGHT_CLASS_DEVICE=y
CONFIG_VGA_CONSOLE=y
CONFIG_VGACON_SOFT_SCROLLBACK=y
CONFIG_VGACON_SOFT_SCROLLBACK_SIZE=640
CONFIG_DUMMY_CONSOLE=y
CONFIG_FRAMEBUFFER_CONSOLE=y
CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y
CONFIG_FONT_8x8=y
CONFIG_FONT_8x16=y
CONFIG_LOGO=y
CONFIG_LOGO_LINUX_MONO=y
CONFIG_LOGO_LINUX_VGA16=y
CONFIG_LOGO_LINUX_CLUT224=y
Every other custom/stock kernel boots fine with that Grub2 config. What I would like to have is a single flow of messages on a single console (retaining one screen resolution) from the bootup logo till the login prompt. Does anybody know what I have to tweak to achieve this?
I am running Ubuntu 9.10 and the latest version of Skype, but I have a problem with my webcam. When I try the webcam with Cheese it works fine, but when I try to use it in Skype, Skype crashes. Can anybody help me?
My Canon camera produces files like IMG_1234.JPG and MVI_1234.AVI. It also timestamps those files. Unfortunately, during a trip to another timezone several cameras were used, one of which did not have the correct time zone set - a metadata mess. Now I would like to correct this (not the EXIF data, but the file's "modify" timestamp on disk).
Proposed algorithm:
1. read the file's modify date
2. add a delta, i.e. hhmmss (preferred: change the timezone)
3. write the new timestamp
Unless someone knows a tool or a combination of tools that does the trick directly, maybe one could simplify the calculation using epoch time (seconds since 1970) and whip up a shell script.
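Here is the kind of epoch-based script I had in mind (GNU date/stat assumed; the delta is hardcoded as an example):
#!/bin/bash
# shift each file's mtime by a fixed offset, e.g. +7 hours
delta=$((7 * 3600))
for f in IMG_*.JPG MVI_*.AVI; do
    old=$(stat -c %Y "$f")    # current mtime as seconds since 1970
    new=$((old + delta))
    touch -m -d "@$new" "$f"  # write the corrected timestamp
done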
Any help appreciated!
Hi,
I have two systems with IP addresses, say 64.103.56.1 (A, device wlan0) and 64.103.225.18 (B). Every time I ping B from system A, I want the traffic to be routed via a router, say with address 10.0.0.251 (C) - I want this to be my next hop to reach B - but this router is on a different subnetwork than the two systems. How do I do this?
Things I tried:
I used route add -host B gw C wlan0, and got an error saying "no such process exists or no such device found".
I pinged C and ran traceroute, and found that the gateway address on my side is 63.103.236.3 (D), so I added another entry, route add -host C gw D wlan0; I was able to do this without any error!
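With the literal addresses, that is:
route add -host 64.103.225.18 gw 10.0.0.251 wlan0   # fails: no such process / no such device
route add -host 10.0.0.251 gw 63.103.236.3 wlan0    # succeeds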