I am trying to find a workaround for incorrect grouping of windows in Docky, and I believe the problem lies with the WMClass attribute that is set for each window. However, I do not know how to view this attribute for open windows. Is there any way to do this?
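For what it's worth, I assume something like the following would show it (xprop is from x11-utils; I'm not sure this is the intended way):
xprop WM_CLASS    # then click the target window; prints e.g. WM_CLASS(STRING) = "docky", "Docky"
wmctrl -lx        # if wmctrl is installed, lists all open windows with their WM_CLASS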
For example, if I have a directory containing files file1 and file2, and a directory dir1, then "ls -l file1" will show details just for file1. Doing the same thing for dir1 will instead show the contents of dir1. Is there a way to treat dir1 like file1?
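To illustrate with a hypothetical layout:
$ ls -l file1    # shows details for file1 itself
$ ls -l dir1     # lists the contents of dir1 instead of dir1 itself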
I have an Ubuntu server. It is going to be a web server with a URI of www.example.com. I have a DNS A record pointing www.example.com to the server's IP address.
Let's say I pick "trinity" as the hostname for this server.
I want to set up the DNS records correctly. I need reverse DNS on the server's IP to resolve to www.example.com, so a CNAME for www.example.com doesn't seem appropriate. Here's my question:
Is it considered best practice to set up two DNS records (which in my case would likely be two A records), one for www.example.com and one for trinity.example.com, both pointing to this server's IP address? (Or, even if it is not accepted as a best practice, is it a good idea?)
If so, would the following be a proper /etc/hosts file?
$ cat /etc/hosts
127.0.1.1 trinity.local trinity
99.100.101.102 trinity.example.com trinity www.example.com
This server is a Linode, and Linode's docs seem to imply that the above approach is best (if I am reading them correctly). Here's the relevant section; the line about the FQDN not needing to relate to hosted websites is the one that seems to apply here.
Update /etc/hosts
Next, edit your /etc/hosts file to resemble the following example,
replacing "plato" with your chosen hostname, "example.com" with your
system's domain name, and "12.34.56.78" with your system's IP address.
As with the hostname, the domain name part of your FQDN does not
necessarily need to have any relationship to websites or other services
hosted on the server (although it may if you wish). As an example, you
might host "www.something.com" on your server, but the system's FQDN
might be "mars.somethingelse.com."
File: /etc/hosts
127.0.0.1 localhost.localdomain localhost
12.34.56.78 plato.example.com plato
The value you assign as your system's FQDN should have an "A" record
in DNS pointing to your Linode's IP address. For more information on
configuring DNS, please see our guide on configuring DNS with the
Linode Manager.
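For reference, I assume that once the records are in place I can verify them with dig, along these lines (untested against my zone):
$ dig +short www.example.com A          # should print 99.100.101.102
$ dig +short trinity.example.com A      # should print 99.100.101.102
$ dig +short -x 99.100.101.102          # reverse (PTR) lookup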
I need to organise an external HDD such that there are no more than 500 folders on it.
Ubuntu's "Properties" pane shows only the file count, not the folder count.
Is there a simple CLI line that will tell me the number of subdirectories?
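Something along these lines is what I'm imagining, if I have the flags right (the mount point is just an example):
find /media/mydisk -type d | wc -l    # counts every directory, including the top-level one
Subtracting one for the top-level directory itself would then give the subdirectory count.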
Thanks!
I bought a VPS server yesterday. The server company's support is not available right now, and I am going to set up DNS records. I know one IP address they gave me; how can I find out how many IPs I have and what they are? Is there an Ubuntu command for that, or some other way?
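In case it matters, I know how to list what is configured on the machine itself, though I assume this only shows configured addresses, not everything the provider may have allocated:
ip addr show    # all interfaces and their IPv4/IPv6 addresses
ifconfig -a     # the older equivalent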
I'm serious - is it really fun for *nix sysadmins to spend half their lives spotting typos in httpd.conf? Why not use XML or JSON? (Writing GUI tools for those would be easy.)
I am having problems booting a new Ubuntu 10 (server) install. My primary HD (/dev/sda) is laid out as follows:
Device     Boot      Start       End        Blocks  Id  System
/dev/sda1  *             1        18       144553+  83  Linux                  <-- /BOOT
/dev/sda2               19    182401   1464991447+   5  Extended
/dev/sda5               19      2207      17583111  fd  Linux raid autodetect
/dev/sda6             2208     11934      78132096  fd  Linux raid autodetect  <-- / (ROOTFS)
/dev/sda7            11935    182401    1369276146  fd  Linux raid autodetect
The rootfs is part of a RAID1 (software) array (currently degraded):
# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sda6[1]
78132032 blocks [2/1] [_U]
The UUIDs for the partitions are as follows:
# blkid /dev/sda1
/dev/sda1: UUID="b25dd301-41b9-4f4d-9b0a-0e31713dd74c" TYPE="ext2"
# blkid /dev/sda6
/dev/sda6: UUID="af7b9ede-fa53-c0c1-74be-31ec752c5cd5" TYPE="linux_raid_member"
# blkid /dev/md2
/dev/md2: UUID="a0602d42-6855-482f-870c-6f6ecdcdae3f" TYPE="ext4"
Finally, I have my grub2 menuentry set up as follows:
### BEGIN /etc/grub.d/10_linux ###
menuentry 'Ubuntu, with Linux 2.6.32-25-server' --class ubuntu --class gnu-linux --class gnu --class os {
insmod ext2
insmod raid
insmod mdraid
set root='(hd0,1)'
search --no-floppy --fs-uuid --set b25dd301-41b9-4f4d-9b0a-0e31713dd74c
linux /vmlinuz-2.6.32-25-server root=UUID=a0602d42-6855-482f-870c-6f6ecdcdae3f ro nosplash noplymouth
initrd /initrd.img-2.6.32-25-server
}
When I attempt to boot, GRUB loads OK; however, I eventually get the following error message:
Gave up waiting for root device. ALERT /dev/disk/by-uuid/a0602d42-6855-482f-870c-6f6ecdcdae3f does not exist. Dropping to a shell!
If I open a GRUB command line from the bootloader, I can ls (hd0,) and it lists the correct partitions with the UUIDs shown above - sda6 shows 'a0602d42-6855-482f-870c-6f6ecdcdae5' replaced with the correct value 'af7b9ede-fa53-c0c1-74be-31ec752c5cd5'? No: sda6 shows 'a0602d42-6855-482f-870c-6f6ecdcdae3f' (the RAID UUID). If I ls (md2)/ it properly lists all the files on the RAID1 filesystem (ext4), so it doesn't appear to be an issue with accessing the RAID device.
Does anyone have any suggestions as to what the problem might be? I can't figure this one out.
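In case it's useful, here is what I believe I can try from the (initramfs) shell when it drops out (guesswork on my part):
cat /proc/mdstat         # is md2 assembled at this point?
mdadm --assemble --scan  # try assembling arrays from their superblocks
ls /dev/disk/by-uuid/    # does a0602d42-... show up afterwards?
exit                     # resume booting if it does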
I have the following command that lists all files with the extension doc, docx, etc.
find . -maxdepth 1 -iname \*.doc\*
The command returns numerous files, some of which I would like to delete. For example, the results returned are:
Example.docx
Dummydata.doc
Sample.doc
I would like to delete Sample.doc and Dummydata.docx. How do I delete the files using the -exec option? Am I able to pass in the names of the files, e.g. rm Dummydata.docx Sample.doc, so that the command would look as follows:
find . -maxdepth 1 -iname \*.doc\* -exec rm Dummydata.docx Sample.doc
Can I pass the names of the files within {} after rm? E.g.:
find . -maxdepth 1 -iname \*.doc\* -exec rm {Dummydata.docx} Sample.doc
Is there a better way of doing it?
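From the man page, my rough understanding is that {} is replaced by each matched pathname in turn, so the options would look like this (not yet tested against my files):
find . -maxdepth 1 -iname '*.doc*' -exec rm {} \;        # would delete every match
find . -maxdepth 1 -iname 'Sample.doc' -exec rm {} \;    # narrowing the pattern instead
But the first form deletes everything matched, which is not quite what I want.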
I'm looking for a simple method that will log file system operations. It should display the name of the file being accessed or modified.
I'm familiar with powertop, and it appears to work to an extent, insofar as it shows the files that were written to. Are there any other utilities that support this feature?
Some of my findings:
powertop: best for write access logging, but more focused on CPU activity
iotop: shows real time disk access by process, but not file name
lsof: shows the open files per process, but not real time file access
iostat: shows the real time I/O performance of disk/arrays but does not indicate file or process
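One more thing I have only read about and not tried: the inotify-tools package apparently provides a recursive watcher along these lines (the path is illustrative):
inotifywait -m -r --format '%w%f %e' /home/me    # print each touched file and the event type
I don't know how well it scales to a whole filesystem, hence the question.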
I have a whole lot of photos and it's time to clean up the mess and free some disk space.
I know mogrify is great to batch-resize things down. The problem is, in some directories I have small images mixed with the big ones. I'd like to batch-downsize all the big ones but not upsize the small ones.
As an example, I have a directory of pictures that are tens of MB each, in the 3000x2000 range. Some of them I have already downsized so I could email them; they may be 1024x768. I'd like to downsize the big ones to 1600x1200, a disk-space-to-quality tradeoff I like. But then, with mogrify or convert, the small ones will be upsized, which would be a waste of disk space.
I found some tricky ways to use identify with cut and some scripting to filter the small pics out and mogrify the others, but man, there has got to be a way to tell mogrify not to upsize my pics... How?
Is there some other tool better suited?
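For reference, the tricky identify-plus-scripting route I mentioned looks roughly like this (a sketch from memory, assuming .jpg files):
for f in *.jpg; do
  w=$(identify -format '%w' "$f")                        # image width in pixels
  [ "$w" -gt 1600 ] && mogrify -resize 1600x1200 "$f"    # only touch the big ones
done
It works, but it feels like something mogrify should handle by itself.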
I recently purchased a Netgear 150 USB wireless dongle for use with my Xubuntu 11.10 amd64 system. Using the network-manager interface, I can see local wireless networks and enter the authentication details for my local wireless LAN. Unfortunately, the connection does not seem to work: I keep getting notifications that my wireless has disconnected (but none indicating that I've connected). When I examine syslog, it seems to indicate that I've successfully associated with the wireless switch and that DHCP has successfully acquired an IP address, but the log shows that the DHCP process keeps sending requests, eventually dropping the connection. 'ifconfig wlan0' never shows the DHCP address logged in syslog.
I suspect that the problem lies with the USB dongle, my configuration, or the wireless switch, but I am not certain how to isolate it. Can anyone provide some insight on how I should go about homing in on the cause of this problem, or on verifying the functionality of the individual components? Thanks.
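For reference, these are the commands I have been using to watch what happens:
tail -f /var/log/syslog | grep -iE 'wlan0|dhcp'    # association and DHCP messages
iwconfig wlan0                                     # current association status
ifconfig wlan0                                     # the leased address never appears here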
I want to write code in Dev C++ so that when I execute it in Ubuntu 8, it clones my Windows 7 from the D: partition to its child partitions E:, F:, and so on.
I have made my partitions equal sizes, and I have tested manually using ntfsclone, so there will be no problem with the cloning itself.
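Roughly what I ran for the manual test (the device names here are only examples; the real ones correspond to the D: and E: partitions on my disk):
ntfsclone --overwrite /dev/sda3 /dev/sda2    # clone the D: partition over the E: partition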
This is part of a kiosk system, and I hope you understand what I am up to.
Some reference or help will be appreciated.
Thanks.
I'm trying to use Skype with Ubuntu Karmic and I just don't understand how to configure PulseAudio properly. The previous version of Skype allowed me to talk through, and hear the voice on, my USB phone while the ringing sounds went through my laptop speaker. I'm not able to do this with the new version (2.1.0.47).
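My vague understanding is that individual streams can be moved between sinks with pacmd, something like the following (the sink name and index here are made up), but I don't see how that would split the voice and the ringing sounds:
pacmd list-sinks                                 # find the USB phone sink and the built-in sink
pacmd list-sink-inputs                           # find the index of Skype's audio stream
pacmd move-sink-input 3 alsa_output.usb_phone    # route that stream to the USB phone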
We have about 10 heterogeneous machines we would like to run various jobs on. The current situation is that people log in on a machine with ssh, see if other people are running stuff on it, then use screen to run the job.
I'd like to automate this process, but I don't have enough time to install a full-fledged cluster solution. So what's the simplest thing I can do?
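To be concrete, the manual routine for each job currently looks like this (machine name and script are placeholders):
ssh machine07          # pick a machine
who; uptime            # is anyone else on it, and how loaded is it?
screen -S myjob        # start a detachable session
./run_job.sh           # run the job inside it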
I want the process to look like:
1. I choose the correct scan settings (dpi, color depth, etc.)
2. I lay the first page on the scanner and trigger the process
3. The scanner scans the page and waits for me to position the next page correctly
4. I confirm that the next page is ready for scanning
5. Repeat the above two steps until I tell the scanner that there are no more pages to come
6. The scanner saves everything into a single PDF.
I tried both xsane and gscan2pdf. First problem: they want me to know how many pages will be scanned. This is already a nuisance, but I can do the counting if needed.
The main problem is that in step 3, the scanner does not pause. It is probably optimised for being fed loose sheets. The next scan process is triggered automatically as soon as the CCD has returned to the start position. The time the scanner needs to return the CCD is very short and I can't turn the page and position the book properly.
Is there software that can do the scan process in the way I described above, or did I just miss a setting in xsane or gscan2pdf that makes the scanner pause?
If it makes any difference, the scanner is an Epson Stylus SX620FW, I run it using the manufacturer-provided driver.
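One idea I have read about but not verified: SANE's command-line frontend apparently has a batch mode that pauses between pages, which I could combine with ImageMagick to build the PDF:
scanimage --batch=page%03d.pnm --batch-prompt --resolution 300   # waits for Enter before each page
convert page*.pnm book.pdf                                       # combine the pages into one PDF
If that is the intended route I can live with it, but I'd still prefer a GUI if one exists.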
I was trying to bring up my custom kernel. I did the following:
make menuconfig && make modules && make modules_install && make install
I would like to change the install path. How can I do that?
I tried doing
export INSTALL_PATH=<my custom path>
But then it only creates the vmlinux.bin (it does not create the ramdisk image!).
If I do not set it, make install will automatically create the ramdisk image in the default /boot folder.
How can I change that?
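My current (possibly wrong) understanding is that INSTALL_PATH only affects where make install copies the kernel image and map file, while the ramdisk is generated by the distro's installkernel hook, so I may need to create it myself, e.g.:
export INSTALL_PATH=/custom/boot    # /custom/boot is a placeholder
make install                        # copies vmlinuz, System.map, config there
mkinitramfs -o /custom/boot/initrd.img-custom 2.6.x.y   # version string is a placeholder
Is that the right approach?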
Thanks,
Sen
I have two user accounts, foo and bar.
I want to allow user foo to execute commands as root and any other user, i.e.:
sudo su root -c './run-my-script'
sudo su bar -c './another-script'
sudo su another -c './yet-another-script'
I also want to allow user bar to execute commands as other users, but only a subset and not root, i.e.:
sudo su bar -c './run-my-script'
but not
sudo su root -c './run-my-script'
Is this possible?
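A rough sketch of the sudoers rules I'm imagining (untested; the script paths are placeholders, and I gather this maps more naturally onto 'sudo -u user command' than onto 'sudo su'):
foo    ALL=(ALL) ALL                                 # foo may run anything as any user, root included
bar    ALL=(bar,another) /home/bar/run-my-script     # bar only as the listed users, never as root
Edits would go through visudo, of course. Would something like this work?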
If I run a program from the shell, and it segfaults:
$ buggy_program
Segmentation fault
It tells me so; however, is there a way to get programs to print a backtrace, perhaps by running something like this:
$ print_backtrace_if_segfault buggy_program
Segfault in main.c:35
(rest of the backtrace)
I'd also rather not use strace or ltrace for that kind of information, as they'll print either way...
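The closest thing I know of is enabling core dumps and loading the core into gdb afterwards, which is more steps than I'd like (a sketch, assuming gdb is installed and the binary has debug symbols):
ulimit -c unlimited                       # allow core files in this shell
./buggy_program                           # segfaults and writes ./core
gdb -batch -ex bt ./buggy_program core    # print the backtrace non-interactively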
I tried to find a previous question on SU pertaining to this, but I'm surprised this has not been asked before.
I have seen some deals lately for really cheap SDHC Class 4 cards, and would like to know whether these are a feasible alternative to USB flash drives for running an OS.
I would like to limit an installed binary so that it can only use up to a certain amount of RAM. I don't want it to get killed if it exceeds that; I only want that to be the maximum amount it can use.
The problem I am facing is that I am running an Apache 2.2 server with PHP and some custom code that a developer is writing for us. Somewhere in their code they launch a PHP exec call that runs ImageMagick's 'convert' to create a resized image file.
I'm not privy to many details of the project or the code, but I need to find a solution to keep them from killing the server until they can find a way to optimize the code.
I had thought that I could do this with /etc/security/limits.conf and setting a limit on the apache user, but it seems to have no effect. This is what I used:
www-data hard as 500
If I understand it correctly, this should have limited any apache user process to a maximum of 500 KB; however, when I ran a test script that chewed up a lot of RAM, it actually got up to 1.5 GB before I killed it. Here is the output of 'ps auxf' after the setting change and a system reboot:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 5268 0.0 0.0 401072 10264 ? Ss 15:28 0:00 /usr/sbin/apache2 -k start
www-data 5274 0.0 0.0 402468 9484 ? S 15:28 0:00 \_ /usr/sbin/apache2 -k start
www-data 5285 102 9.4 1633500 1503452 ? Rl 15:29 0:58 | \_ /usr/bin/convert ../tours/28786/.….
www-data 5275 0.0 0.0 401072 5812 ? S 15:28 0:00 \_ /usr/sbin/apache2 -k start
Next I thought I could do it with Apache's RLimitMEM setting, but I get the same result of it not being limited. Here is what I have in my apache.conf file:
RLimitMEM 500000 512000
It wasn't until many hours later that I figured out that if the process actually reached that amount, it would die with an OOM error.
Would love any ideas on how to set this limit so other things could function on the server, and all of them could play together nicely.
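One avenue I have read about but not yet tested: ImageMagick has its own resource limits that can be passed on the command line, which might at least tame the convert calls (the values here are guesses):
convert -limit memory 256MiB -limit map 512MiB in.jpg -resize 50% out.jpg
I'd still prefer a general per-process cap, hence the question.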
Hi,
I need to install a pair of 1 TB disks into a server that has a hardware RAID card.
How long is it likely to take to configure the RAID controller? Sticking the disks in is only a 5-minute job, but is there likely to be significant downtime while the two disks mirror (even though they are both blank)? Am I looking at 10 minutes overall, or more like 2 hours?
Thanks
I am building a web application where my users will be able to upload files. After the files are uploaded, I need to send them to two other servers, and afterwards they will be deleted from the server they were just uploaded to.
I am wondering: is it a good idea to keep the uploaded files in the tmp/ folder while they are being sent to the other two servers, or should I move them to another folder in case they get deleted? I am also wondering whether I need to build a cron script to get rid of the files that have been transferred to the other servers, so that I get my disk space back.
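For the cleanup part, I'm imagining a cron entry along these lines (the path and age are placeholders):
0 3 * * * find /var/www/app/uploads -type f -mtime +1 -delete   # nightly purge of day-old files
But I'd rather hear how others structure this.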