Search Results

Search found 26947 results on 1078 pages for 'util linux'.

  • Video Player/Library for Ubuntu with ratings and thumbs

    - by greggannicott
    I've just made the switch to Ubuntu on my main PC and I've been looking for a media player that can:
    - Play all the usual video formats
    - Rate (and ideally, tag) each file
    - Display thumbnails for each file
    Other than that there isn't much I'm after. Banshee comes close, but doesn't display thumbnails. I've Googled a lot but I'm running out of search terms to try. Does anyone have any suggestions? Cheers!

  • SSD suddenly full

    - by Daniel
    Today the hard drive of our server was suddenly full. The disk usage always stayed around 50% in the weeks and months before (old data is regularly expunged from the server). I deleted 10 GB of files in /tmp, which strangely freed 51 GB. Here is what I did:

        root@***:~# df -h
        Dateisystem  Size  Used Avail Use% Eingehängt auf
        /dev/sda3    139G  137G     0 100% /
        tmpfs        3,9G     0  3,9G   0% /lib/init/rw
        udev         3,9G  116K  3,9G   1% /dev
        tmpfs        3,9G     0  3,9G   0% /dev/shm
        /dev/sda1    985M   25M  910M   3% /boot

        root@***:/var# du -hs *
        3,3M  backups
        438M  cache
        9,4G  lib
        4,0K  local
        12K   lock
        76M   log
        24K   mail
        4,0K  opt
        88K   run
        184K  spool
        10G   tmp
        12K   www

        root@***:/var/tmp# find -type f -print0 | xargs -0 rm
        root@***:/var/tmp# df -h
        Dateisystem  Size  Used Avail Use% Eingehängt auf
        /dev/sda3    139G   81G   51G  62% /
        tmpfs        3,9G     0  3,9G   0% /lib/init/rw
        udev         3,9G  116K  3,9G   1% /dev
        tmpfs        3,9G     0  3,9G   0% /dev/shm
        /dev/sda1    985M   25M  910M   3% /boot

    Any explanation as to why deleting 10 GB in /tmp gave me back 51 GB on the disk? Could this point to an SSD failure? Are there any tools for Debian to test SSD health? I have already checked syslog. The first entry relating to this incident is a MySQL message:

        1:22:02 [ERROR] /usr/sbin/mysqld: Disk is full writing...

    So I have absolutely no idea what caused this.
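
    A common cause of this kind of discrepancy is files that were deleted while a process still held them open; the space is only returned once the process closes them or exits. Two quick checks (the second assumes the smartmontools package is installed for SSD health):

        # List files that are deleted but still held open by a process
        lsof +L1
        # Query the drive's SMART health data (package: smartmontools)
        smartctl -a /dev/sda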

  • Server suddenly running out of entropy

    - by Creshal
    Since a reboot yesterday, one of our virtual servers (Debian Lenny, virtualized with Xen) is constantly running out of entropy, leading to timeouts etc. when trying to connect over SSH / TLS-enabled protocols. Is there any way to check which process(es) is (/are) eating up all the entropy?

    Edit: What I tried:
    - Adding additional entropy sources: timer_entropyd, rng-tools feeding urandom back into random, pseudorandom file accesses. This netted about 1 MiB additional entropy per second; the problems still persisted.
    - Checking for unusual activity via lsof, netstat and tcpdump: nothing. No noticeable load or anything.
    - Stopping daemons, restarting permanent sessions, rebooting the entire VM: no change in behaviour.

    What in the end worked: waiting. Since about yesterday noon, there are no connection problems anymore. Entropy is still somewhat low (128 bytes peak), but TLS/SSH sessions have no noticeable delay anymore. I'm slowly switching our clients back to TLS (all five of them!), but I don't expect any change in behaviour now.
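
    For reference, the kernel exposes its current entropy estimate, which makes it easy to watch the pool drain in real time while reproducing the problem:

        # Current entropy estimate, in bits
        cat /proc/sys/kernel/random/entropy_avail
        # Watch it refresh every second
        watch -n1 cat /proc/sys/kernel/random/entropy_avail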

  • Processes slow after some time of actively running

    - by Yervand Aghababyan
    I have several cron jobs running on an Ubuntu machine. Each one does some pretty heavy-load stuff: the cron jobs parse files, and the bigger the file, the longer it takes to parse it. The strange thing is that if I make the files too big (like 30 MB) the script kind of hangs. It starts processing them really enthusiastically, but after some time (something like 5-10 minutes) the CPU usage of the process drops a lot and it gets into some "zombie" state. If prior to this the process in htop was using 70-80% of the CPU, after this drop occurs it slows down to something like 5-10%. The load average drops as well. The status of the processes sometimes changes to D in htop, which stands for uninterruptible sleep (usually waiting on I/O), not zombie. Today I noticed the same behavior in MySQL processes when executing heavy queries (a query took something like 4 hours to execute). The cron jobs are mostly PHP, and during their processing most of the CPU is eaten by the php process and not mysql, so I think the issue is not with a specific language/program but with the way the processes are "managed". The only other place I've seen similar behavior was on my Amazon EC2 micro instance: after some aggressive use of CPU, the CPU quota would take effect and everything would slow down dramatically. But this is a dedicated machine running Ubuntu. What may be the cause?
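
    When a process sits in D state it is blocked inside the kernel, usually waiting for I/O, so the useful question is what it is waiting on. A quick check (wchan names vary by kernel version), plus disk utilisation from the sysstat package:

        # PID, state, kernel wait channel and command for processes in D state
        ps -eo pid,stat,wchan:32,cmd | awk '$2 ~ /^D/'
        # Watch extended disk statistics every 2 seconds (package: sysstat)
        iostat -x 2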

  • Resolve a `strace` file descriptor to a filename

    - by Mike Pennington
    I am debugging a problem where MoinMoin on CentOS is throwing a permissions error, but I can't track down where the problematic file/directory is. I ran strace -vp <pid> on the apache pid; when I have the problem I see this:

        epoll_wait(10, {{EPOLLIN, {u32=3487534344, u64=140367313734920}}}, 2, 10000) = 1
        accept4(6, {sa_family=AF_INET6, sin6_port=htons(52621), inet_pton(AF_INET6, "::ffff:105.193.30.91", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, [28], SOCK_CLOEXEC) = 11
        ## Later on...
        read(7, 0x7fffa658ad7f, 1) = -1 EAGAIN (Resource temporarily unavailable)

    However, since apache is already running, I see no corresponding open() on the file referred to as 7; thus I see the permissions problem, but I still don't know which file is the problem. I know I could try to catch all the file opens when I respawn apache, but I'm hoping there is a way to map file descriptor 7 to a real filename... is there a way to do this?
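
    For a process that is still running, the kernel already exposes the mapping from descriptor to file, so there is no need to catch the open() (substitute the apache pid):

        # Show what file descriptor 7 currently points to
        ls -l /proc/<pid>/fd/7
        # Or the same via lsof (-a ANDs the conditions)
        lsof -p <pid> -a -d 7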

  • What are the minimum required modules to run WordPress

    - by Mister IT Guru
    Recently a 'consultant' came in to talk to the bean counters at my place of employment about being more efficient with our IT infrastructure. They suggested that to be more efficient we should only load the Apache modules that are required on our web servers. (This is just one of thousands of suggestions.) The bean counters are very excited, and are prepared for me to spend the time to investigate this avenue of cost cutting. I don't mind this mundane exercise; I see it as a learning experience! I guess this leads me to the actual question: how can I determine the minimum required Apache modules for a PHP-based application without actually going through the code, or plain old trial and error?
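
    As a starting point, Apache will list the modules it currently has loaded, which gives a concrete baseline to prune from (the command name depends on the distribution):

        apache2ctl -M   # Debian/Ubuntu
        httpd -M        # Red Hat/CentOS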

  • How can a driver change the kernel page table?

    - by Naruto
    I am encountering an issue with kernel memory. When my driver finishes running, other processes in the kernel fail to run. For example, when I run ls, the command crashes the kernel with a "Corrupted page table" error at a specific address. I do not know whether the page table of my driver is related to the page tables of other processes. How can my driver change the page tables of other processes? And how does a process's driver relate to the kernel page table? As I understand it, when the driver runs, execution switches to kernel context. The kernel has its own page table and the driver has its own one. What is the relation among the kernel page table, the page table of my driver, and the page tables of the other processes when they run in kernel context?

  • Hourly cron task running more frequently than once an hour

    - by Justin
    I have a cron task that calls a special PHP script via wget. Here is the crontab entry:

        0 * * * * wget http://www....

    It will work perfectly for several days, running on the hour. However, after a few days the cron job will start to be called several times an hour. I have never seen cron drift like this, so I imagine it can't really be a cron issue. However, the logs of the script that is called clearly show it running several times an hour. Server details:
    - Ubuntu Lucid
    - Apache
    - MySQL
    - PHP5
    - Time is showing correct at the command line
    - Server is set up to sync with an NTP server

    In order for the script to run it must be passed a unique 50-character hash key in the URL, so this script isn't being called from any other source accidentally. What might cause cron to drift like this?
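
    One thing worth ruling out is the same wget line living in more than one crontab, or in a cron.hourly script, since each copy fires independently (paths below are the standard Ubuntu locations):

        # Search every crontab location for the job
        grep -r "wget" /etc/crontab /etc/cron.d /etc/cron.hourly /var/spool/cron/crontabs 2>/dev/null
        # And check the cron log for which entry actually fired
        grep CRON /var/log/syslog | tail -50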

  • How to bulk-rename files with invalid encoding or bulk-replace invalid encoded characters?

    - by qdoe
    I have a Debian server and I'm hosting music for an internet radio station. I have trouble with file names and paths because a lot of files have an invalid encoding, for example:

        ./music/Bändname - Some Title - additional Info/B?ndname - 07 - This Title Is Cörtain, The EncÃ?ding Not.mp3

    Ideally, I would like to remove everything that is not a letter (A-Z/a-z), a number (0-9), a dash (-) or an underscore (_). The result should look something like this:

        ./music/Bndname-SomeTitle-additionalInfo/Bndname-07-ThisTitleIsCrtain,TheEnc?dingNot.mp3

    How can I achieve this for a batch of many files and directories? I've seen this similar question: bulk rename (or correctly display) files with special characters. But that only fixes the encoding; I would prefer a stricter approach as described above.
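
    One possible approach, assuming the Perl rename utility (sometimes installed as prename) is available: walk the tree depth-first so files are renamed before their parent directories, and strip every character outside an allowed class. Dots and slashes are kept here so extensions and the path separator survive; note that stripping can create name collisions, so test on a copy first:

        find ./music -depth -execdir rename 's{[^A-Za-z0-9._/-]}{}g' '{}' \;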

  • How do I find the cause for a huge difference in performance between two identical Ubuntu servers?

    - by the.duckman
    I am running two Dell R410 servers in the same rack of a data center. Both have the same hardware configuration, run Ubuntu 10.04, have the same packages installed and run the same Java web servers. No other load. One of them is 20-30% faster than the other, very consistently. I used dstat to figure out whether there are more context switches, IO, swapping or anything, but I see no reason for the difference. With the same workload (no swapping, virtually no IO), the CPU usage and load are higher on one server. So the difference appears to be mainly CPU bound, but while a simple CPU benchmark using sysbench (with all other load turned off) did yield a difference, it was only 6%. So maybe it is not only CPU but also memory performance. I tried to figure out whether the BIOS settings differ in some parameter, did a dump using dmidecode, but that yielded no difference. I compared /proc/cpuinfo: no difference. I compared the output of cpufreq-info: no difference. I am lost. What can I do to figure out what is going on?
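
    Since memory performance is the remaining suspect, sysbench can benchmark memory throughput as well; running the same command on both machines would confirm or rule it out (flag syntax shown for sysbench 1.x; the 0.4 series used --test=memory instead):

        sysbench memory --memory-block-size=1M --memory-total-size=10G run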

  • Production Instance : CLOSE_WAIT Connection Issue

    - by rajnikant
    I am using 10 EC2 instances behind one ELB, with the ELB configured to map ports 80 and 443 to 8080. All 10 EC2 instances have Apache Tomcat installed, and the ELB receives around 8,000 to 10,000 requests per minute. I am facing a problem with CLOSE_WAIT connections on the 10 EC2 instances running Apache Tomcat. EC2 instance type: m1.xlarge. When we restart Apache Tomcat, all CLOSE_WAIT connections are cleared, but that is no way to operate production instances. Please help me out.
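
    For monitoring, either of these counts the sockets stuck in CLOSE_WAIT. A socket stays in CLOSE_WAIT when the remote end has closed but the local application never calls close(), so a steadily growing count usually points at a connection leak in the application rather than at the OS:

        ss -tan state close-wait | wc -l
        netstat -ant | awk '$6 == "CLOSE_WAIT"' | wc -l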

  • How to determine the best byte size for the dd command

    - by James
    I know that doing a dd if=/dev/hda of=/dev/hdb does a deep hard drive copy. I've heard that people have been able to speed up the process by increasing the number of bytes that are read and written at a time (512 by default) with the "bs" option. People have suggested that the optimal byte size is due to sector size. I personally think it would have something to do with the amount of cache that the hard drive has. My question is: what determines the ideal byte size for copying from a hard drive, and why does it determine the ideal byte size?
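
    The empirical answer is easy to get: time the same read at several block sizes and let dd report the throughput itself. A rough sketch (reads the first GiB of /dev/sda; run as root, and drop the page cache between runs so the numbers are honest):

        for bs in 512 4096 65536 1048576 4194304; do
            echo 3 > /proc/sys/vm/drop_caches   # flush cached data
            echo "bs=$bs"
            dd if=/dev/sda of=/dev/null bs=$bs count=$((1073741824 / bs))
        done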

  • Optimal dir structure for keeping millions of files on an ext4 system

    - by Alex Flo
    I need to keep millions of files on an ext4 system. I understand that a structure with multiple subdirectories is the generally accepted solution. I wonder what the optimal approach would be in terms of the number of dirs/subdirs. For example, I tried a structure like 16/16/16/16 (that is, (sub)directories from 1 to 16) and found that I am able to move 100K files into this structure in 2m50s. When moving 100K files into an 8/8/8/8/8/8 structure it took 11 minutes. So the 16/16/16/16 approach seems better, but I was wondering if anyone has empirical experience with an even better dir/subdir distribution.
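
    For what it's worth, a common way to spread files evenly over a 16/16/16/16 layout is to bucket on the leading hex digits of a hash of the file name (a bash sketch; the file name is hypothetical):

        f="somefile.dat"
        h=$(printf '%s' "$f" | md5sum)            # leading characters are hex digits 0-f
        d="${h:0:1}/${h:1:1}/${h:2:1}/${h:3:1}"   # e.g. a/3/f/0
        mkdir -p "$d" && mv "$f" "$d/"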

  • Zabbix not getting data for one filesystem

    - by Dennis Williamson
    I have Zabbix monitoring disk space for several volumes on several servers. It works fine on all of them except for one of the volumes on one of the servers, which always reports 0. However, when I run ./zabbix_get -s localhost -p 10050 -k 'vfs.fs.size[/home, free]' locally on the machine in question, it gives me the correct, non-zero size, which matches the output of df. How can I go about troubleshooting and correcting this problem?
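
    Two checks that can help split the problem in half (assuming zabbix_agentd is on the path): ask the agent binary to evaluate the key directly, and make sure the agent's own user can see the mount at all:

        # Evaluate a single item key with the agent binary
        zabbix_agentd -t 'vfs.fs.size[/home,free]'
        # Verify the zabbix user can stat the filesystem
        sudo -u zabbix df /home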

  • SSH reverse tunnel to monitor and manage remote devices

    - by acid_crucifix
    I have a distributed set of devices running Ubuntu 12.04 that I am distributing to clients. I would like to manage them remotely. They may not have fixed IPs and could be behind firewalls. What I am planning to do is have the devices (permanently connected to the net) poll a request URL and, based on the response, open a reverse tunnel to my server, so that I can access them via that tunnel. Most of what I've read about reverse tunnels over SSH covers single-use cases, and very little covers heavy production usage. Is there some reason for this, such as security issues or stability? Any help would be much obliged.
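
    For reference, the usual shape of a persistent reverse tunnel looks like the sketch below (host, user and ports are placeholders; autossh re-establishes the connection when it drops):

        autossh -M 0 -N \
            -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
            -o "ExitOnForwardFailure yes" \
            -R 2222:localhost:22 tunnel@example.com
        # Then, from the management server:
        ssh -p 2222 deviceuser@localhost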

  • Does a VPS need a firewall?

    - by Camran
    Do I need a firewall on my VPS, which I ordered today? If so, which one would you recommend? I plan on running a classifieds website with Java, PHP and MySQL. My OS is Ubuntu 9.10. Thanks. By the way: what is iptables?
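
    (iptables is the Linux kernel's built-in packet filter; most "firewalls" on a VPS are front-ends to it. A minimal sketch for a web server follows; the ACCEPT rules are added before the policy is switched to DROP, because doing it the other way around over SSH can lock you out.)

        # Allow loopback, established traffic, SSH, HTTP and HTTPS; drop everything else
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT
        iptables -A INPUT -p tcp --dport 80 -j ACCEPT
        iptables -A INPUT -p tcp --dport 443 -j ACCEPT
        iptables -P INPUT DROP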

  • Successful su for user by root in /var/log/auth.log

    - by grs
    I have these sorts of entries in my /var/log/auth.log:

        Apr 3 12:32:23 machine_name su[1521]: Successful su for user1 by root
        Apr 3 12:32:23 machine_name su[1654]: Successful su for user2 by root
        Apr 3 12:32:24 machine_name su[1772]: Successful su for user3 by root

    The situation:
    - All users are real accounts in /etc/passwd.
    - None of the users has their own crontab.
    - All of these users logged in to the machine some time before, via SSH or NX (NoMachine); the time varies from a few minutes to a few hours.
    - No cron jobs are scheduled to run at that time; anacron is removed.
    - I can see similar entries on other days and at other times. The common element is that the users are logged in when it appears. It does not appear during login, but some time afterwards.

    This machine has a similar setup to a few others, but it is the only one where I see these entries. What causes them? Thanks

  • Debian: SSH: "PermitRootLogin=forced-commands-only" stopped working

    - by Brent
    I have several servers running Debian Lenny. Just recently I discovered the PermitRootLogin=forced-commands-only directive for ssh, which allows me to run a scripted rsync as root with an SSH key, without enabling more general root SSH access. However, last week this stopped working, apparently on all of my servers, and I can't figure out why. Everything continues to work fine with PermitRootLogin=yes, but I would prefer to block root logins, especially via passwords. The day it stopped working, we reconfigured some of the ports on one of our switches (which we later reverted), but I can't see that affecting this, since it still works with PermitRootLogin set to yes. How can I diagnose why the forced-commands-only directive has apparently stopped working?
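
    A useful way to diagnose this (the port and key path below are placeholders) is to run a second sshd in debug mode on a spare port and connect with verbose output; the server side then prints exactly why the key's forced command is accepted or rejected:

        # On the server: one-shot sshd with debug output, alternate port
        /usr/sbin/sshd -d -p 2222
        # From the client:
        ssh -v -p 2222 -i /path/to/rsync_key root@server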

  • Set custom mount point and mount options for USB stick

    - by kayahr
    Hello, I have a USB stick which contains private stuff like my SSH key. I want to mount this stick in my own home directory with 0700 permissions. Currently I do this with this line in /etc/fstab:

        LABEL=KAYSTICK /home/k/.kaystick auto rw,user,noauto,umask=077,fmask=177 0 0

    This works great, but there is one minor problem: in Nautilus (the GNOME file manager) the mount point ".kaystick" is displayed. I guess Nautilus simply scans the /etc/fstab file and displays everything it finds there. This entry is pretty useless because it can't be usefully clicked when the device is not present, nor when the device is present (because then it is already mounted). I know this is a really minor problem and I could simply ignore it, but I'm a perfectionist, so I want to get rid of this useless mount point in Nautilus. Is there another way to customize the mount point and mount options for a specific USB device? Maybe it can be configured in udev? If yes, how?
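
    One possibility, assuming the udisks stack that GNOME of this era uses: a udev rule can mark the device so the desktop hides it, while the fstab line above still controls where and how it is mounted (the rule file name is arbitrary):

        # /etc/udev/rules.d/99-hide-kaystick.rules
        ENV{ID_FS_LABEL}=="KAYSTICK", ENV{UDISKS_PRESENTATION_HIDE}="1"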

  • Primary/secondary ethernet interfaces via NetworkManager in Ubuntu 9.10

    - by Josh
    I have an Ubuntu 9.10 machine with three ethernet interfaces: eth0, eth1 and eth2. eth2 is connected to a private network. eth0 and eth1 are connected to two different LANs. Either one will provide access to the internet. All three networks have DHCP servers. Using Ubuntu's default settings (and GNOME), when I boot up, all the interfaces are active and my system gets three IP addresses. However, any attempt to access the internet results in connection timeouts and other weirdness. I suspect that traffic is going out on one NIC (like eth0) and coming back in on another (like eth1). I'm not sure what's going on. The only way I can access the internet at the moment is to bring two of the devices down with ifdown. How can I configure eth0 as my primary interface, so all traffic goes out by default on that interface, while keeping the other two active? Also, I want to make sure Avahi broadcasts properly on all three IPs so that the computers on the LAN of eth1 can still connect to myHostname.local...

    EDIT: Here's my routing table:

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags MSS Window irtt Iface
        172.16.151.0    0.0.0.0         255.255.255.0   U     0   0      0    eth2
        172.16.30.0     0.0.0.0         255.255.255.0   U     0   0      0    eth0
        10.1.0.0        0.0.0.0         255.255.0.0     U     0   0      0    eth1
        169.254.0.0     0.0.0.0         255.255.0.0     U     0   0      0    eth1
        0.0.0.0         172.16.30.2     0.0.0.0         UG    0   0      0    eth0
        0.0.0.0         10.1.0.1        0.0.0.0         UG    0   0      0    eth1

    I want the 172.16.30.0 network to be the primary one and the 10.1.0.0 network to be the secondary one.

    EDIT2: My nameservers are also incorrect. It seems like Ubuntu is bringing the networks up in order (eth0, then 1, then 2), and the DHCP information from eth1 is overriding eth0, and eth2 is overriding eth1. How can I reverse this so the DHCP information from eth0 is the "master"?

    EDIT3: This seems to be an issue with GNOME's NetworkManager.
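
    As a quick, non-persistent test, the second default route can be removed so that eth0's gateway handles all outbound traffic (addresses taken from the routing table above; NetworkManager may re-add the route on the next DHCP renewal):

        sudo ip route del default via 10.1.0.1 dev eth1
        sudo ip route replace default via 172.16.30.2 dev eth0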

  • Httpd and LDAP Authentication not working for sub-pages

    - by DavisTasar
    I just recently installed a Nagios implementation, and I'm trying to get LDAP authentication working for httpd on Red Hat. (nagios.conf for Apache below, sanitized of course.)

        ScriptAlias /nagios/cgi-bin "/usr/local/nagios/sbin"
        <Directory "/usr/local/nagios/sbin">
            #SSLRequireSSL
            Options ExecCGI
            AllowOverride none
            AuthType Basic
            AuthName "LDAP Authentication"
            AuthLDAPURL "ldap://my.domain.controller:389/OU=Users,DC=my,DC=domain,DC=controller?sAMAccountName?sub?(objectClass=user)" NONE
            AuthzLDAPAuthoritative off
            AuthLDAPBindDN "CN=NagiosAdmin,DC=my,DC=domain,DC=controller"
            AuthLDAPBindPassword "myPassword"
            require valid-user
        </Directory>

        Alias /nagios "/usr/local/nagios/share"
        <Directory /usr/local/nagios/share>
            #SSLRequireSSL
            Options None
            AllowOverride none
            AuthBasicProvider ldap
            AuthType Basic
            AuthName "LDAP Authentication"
            AuthzLDAPAuthoritative off
            AuthLDAPURL "ldap://my.domain.controller:389/OU=Users,DC=my,DC=domain,DC=controller?sAMAccountName?sub?(objectClass=user)" NONE
            AuthLDAPBindDN "CN=NagiosAdmin,DC=my,DC=domain,DC=controller"
            AuthLDAPBindPassword "myPassword"
            require valid-user
        </Directory>

    Now, the initial authentication works: when you first hit the page you can log in just fine. However, when you go anywhere else, it prompts you for authentication, fails (asking for a re-prompt), and gives this error message:

        [Mon Oct 21 14:46:23 2013] [error] [client 172.28.9.30] access to /nagios/cgi-bin/statusmap.cgi failed, reason: verification of user id '<myuseraccount>' not configured, referer: http://<nagiosserver>/nagios/side.php

    I'm almost certain it's a simple flag or option, but I just can't find it, and I don't have a lot of experience working with Apache. Any assistance would be greatly appreciated.
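
    One detail worth testing, though not a confirmed fix: the cgi-bin <Directory> block above lacks the AuthBasicProvider ldap line that the share block has, and with mod_authnz_ldap that directive is what ties Basic authentication to the LDAP provider:

        # Inside <Directory "/usr/local/nagios/sbin">
        AuthBasicProvider ldap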

  • How can I set deadline as the I/O scheduler for USB Flash devices by using udev rules?

    - by ????
    I have set CFQ as the default I/O scheduler. I often get bad performance when I write data to a flash device; this is resolved if I use deadline as the I/O scheduler for USB flash devices. I can't always change the scheduler manually, right? I think writing udev rules is a good idea. Can someone please write rules for me? I want:
    - When I plug in a USB device, detect the type of the device.
    - If it is a portable USB hard disk, do nothing. (I think if a device has more than one partition, it is always a portable hard disk.)
    - If it is a USB flash device, set deadline as its scheduler.
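
    A starting point might be the rule below (a sketch: it matches whole USB disks and writes the scheduler sysfs attribute; telling flash sticks apart from portable hard disks reliably is the hard part, since many sticks misreport the rotational flag, and the partition-count heuristic would need extra scripting on top):

        # /etc/udev/rules.d/60-usb-flash-scheduler.rules (hypothetical file name)
        ACTION=="add|change", KERNEL=="sd[a-z]", ENV{ID_BUS}=="usb", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="deadline"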

  • 2 Printers 1 Queue

    - by Shazburg
    My issue: when an order is processed, the same document needs to be printed on two printers. My proposed solution: create a single queue in CUPS with a backend script that spits the job out to the two real printer queues. My problem: documentation. Maybe I'm looking at every ring around the bullseye, but I can't find anything that lays out the rules for writing a CUPS backend script. In the end, I have several questions:
    - Is there already an option to do this in CUPS that I've missed?
    - The line I use to add my queue is "lpadmin -p MultiPass -E -v multipass -P Generic PostScript Printer". But the DeviceURI is bad unless I specify a directory like "-v multipass:/tmp". Why is this?
    - For testing, my script does nothing but capture ARGV and write it out to a text file, one line per argument. Problem is, I'm getting nothing. Logs show the job as successful, but I'm pretty sure my meager attempt at a backend isn't even being run.
    I've tried to keep this question brief, so please ask for more info, as I'm sure I've left out the most important part in all this. Honestly, I'm just done chasing my own tail. Thank you for your time.
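
    For what it's worth, the backend contract is roughly this: invoked with no arguments, it must print a device-discovery line and exit; invoked with 5 or 6 arguments (job-id user title copies options [file]), it reads the job from the file argument or from stdin. A minimal tee-style sketch under those rules; the queue names printer1/printer2 are hypothetical, and backends run with a minimal environment, hence the absolute paths:

        #!/bin/sh
        # Hypothetical /usr/lib/cups/backend/multipass (root-owned, mode 0755)
        if [ $# -eq 0 ]; then
            # Discovery line: class, URI, make-and-model, description
            echo 'direct multipass "Unknown" "Send one job to two queues"'
            exit 0
        fi
        TMP=$(mktemp /tmp/multipass.XXXXXX) || exit 1
        if [ $# -ge 6 ]; then
            cat "$6" > "$TMP"    # job data passed as a file
        else
            cat > "$TMP"         # job data arrives on stdin
        fi
        /usr/bin/lp -d printer1 "$TMP"
        /usr/bin/lp -d printer2 "$TMP"
        rm -f "$TMP"
        exit 0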
