Search Results

Search found 26947 results on 1078 pages for 'util linux'.

Page 431 of 1078

  • What's a good tool for collecting statistics on filesystem usage?

    - by Kamil Kisiel
    We have a number of filesystems for our computational cluster, with many users who store a lot of really large files. We'd like to monitor the filesystems, help optimize their usage, and plan for expansion. In order to do this, we need some way to monitor how these filesystems are used. Essentially I'd like to know all sorts of statistics about the files:

      - Age
      - Frequency of access
      - Last accessed times
      - Types
      - Sizes

    Ideally this information would be available in aggregate form for any directory, so that we could monitor it per project or per user. Short of writing something up myself in Python, I haven't been able to find any tools capable of performing these duties. Any recommendations?
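
    For reference, a rough aggregate can be pulled together while looking for a proper tool. The sketch below is only a starting point: it assumes GNU find and gawk, and /scratch is a placeholder for whichever filesystem you want to survey.

        find /scratch -type f -printf '%h\t%s\t%A@\n' 2>/dev/null |
        awk -F'\t' '{
            bytes[$1] += $2                      # total size per directory
            files[$1]++                          # file count per directory
            if ($3 > atime[$1]) atime[$1] = $3   # most recent access per directory
        }
        END {
            for (d in bytes)
                printf "%s\t%d files\t%.1f GiB\tlast access %s\n",
                       d, files[d], bytes[d]/2^30, strftime("%F", atime[d])
        }' | sort

    Aggregating by top-level project directory instead of the immediate parent is just a matter of trimming %h before it is used as the array key.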

    Read the article

  • credit or minclass does not work well with pam_cracklib.so in common-password (openSUSE 11.3)

    - by Mario
    I'm trying to implement password complexity on my PDC. It's a Samba PDC with an OpenLDAP backend. I tried cracklib-check, but it looks like I would need a decent, localized version of the password dictionary, since the one available usually only covers English. I also have another consideration: we will allow users to choose any kind of password, even a dictionary-based one, as long as it mixes lower/upper-case letters, digits, and other characters such as '$' or '_' (pam_cracklib.so calls these classes). So here is my /etc/pam.d/common-password:

        #password requisite pam_pwcheck.so nullok cracklib
        password  requisite pam_cracklib.so minclass=4 reject_username
        ##password requisite pam_cracklib.so \
        ##    dcredit=-1 ucredit=-1 lcredit=-1 ocredit=-1 reject_username
        password  optional  pam_gnome_keyring.so use_authtok
        password  required  pam_unix2.so use_authtok nullok

    The first commented line (with #) was the default configuration of openSUSE 11.3. The 2nd/3rd lines (with leading ##) are another configuration I use when the minclass=4 line is commented out. By the way, I have 'check password script' = /usr/local/sbin/crackcheck -d /usr/share/cracklib/pw_dict and passdb backend = ldapsam:ldap://127.0.0.1 in smb.conf, and cracklib-check works fine too. Here is the test I conduct: I log on to Windows and then change my password. Sometimes it works fine and throws an error message, which is what I wanted, but a simple password with only lower-case letters can still pass the Windows password change. Maybe I should build a new library which incorporates local vocabulary, but a guy out there (raise your hand please if you read this :) ) also experienced the same trouble with English words. Besides, what we really want is to let users choose passwords drawing on 2 or 3 of the 4 classes. Is there a bug or something with the PAM module in openSUSE 11.3? Thank you in advance. Regards, Mario
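
    If the intent is to accept any password that draws on at least 3 of the 4 classes, a minimal sketch of the cracklib line (assuming openSUSE's common-password layout; minlen=8 is an illustrative choice, not something from the question) would be:

        password requisite pam_cracklib.so minclass=3 minlen=8 reject_username
        password required  pam_unix2.so use_authtok nullok

    minclass counts how many distinct character classes must appear, while the dcredit/ucredit/lcredit/ocredit options reward or require individual classes; mixing the two approaches in one stack is what most often produces surprising results.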

    Read the article

  • How to write rules for persistent net names?

    - by ndemou
    I know that a process generates persistent network card names based on rules found in /lib/udev/rules.d/75-persistent-net-generator.rules. I also know how to completely disable this process with a simple

        echo '#' > /etc/udev/rules.d/75-persistent-net-generator.rules

    but I've read that I "could also write my own rules file to give the interface a name — the persistent rules generator ignores the interface if a name has already been set" (/etc/udev/rules.d/README confirms that this is possible). Do you have any pointers to documentation about how to write such rules? (I mostly care about Debian/Ubuntu and a bit less about CentOS.) As a specific example of why I want to write custom rules: I have two identical servers, each with one onboard LAN and one PCI LAN. In case of hardware failure I want to be able to move the disks from HW#1 to HW#2, and it's important for eth0 to keep pointing to the onboard card and eth1 to the PCI card (no one wants to mess with cabling in the middle of a hardware-failure panic). My current workaround works but is a lot of work[1], so I wonder if writing custom rules would allow me to express something as simple as this (see the sketch below):

      - cards with MAC A or B should be named eth0
      - cards with MAC C or D should be named eth1
      - follow the default naming scheme for anything else

    [1] Install the OS on HW#1 and keep a copy of /etc/udev/rules.d/70-persistent-net.rules. Move the disks to HW#2 and keep a second copy of the same file. Concatenate the two copies and manually edit the NAME="ethX" part. Replace /etc/udev/rules.d/70-persistent-net.rules with my version. Finally, disable auto-creation of a new 70-persistent-net.rules using echo '#' > /etc/udev/rules.d/75-persistent-net-generator.rules
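
    A hand-written /etc/udev/rules.d/70-persistent-net.rules expressing exactly that mapping might look like the sketch below; the MAC addresses are placeholders for the onboard and PCI cards of both servers, and anything whose MAC is not listed is left to the generator's default behaviour.

        # onboard NICs of HW#1 and HW#2 -> eth0
        SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:11:22:33:44:aa", ATTR{type}=="1", NAME="eth0"
        SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:11:22:33:44:bb", ATTR{type}=="1", NAME="eth0"
        # PCI NICs of HW#1 and HW#2 -> eth1
        SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:11:22:33:44:cc", ATTR{type}=="1", NAME="eth1"
        SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:11:22:33:44:dd", ATTR{type}=="1", NAME="eth1"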

    Read the article

  • PHP unable to allocate memory.

    - by AlReece45
    On my way to the office this morning, every website on our shared VPS started giving the same error (several times, and not the typical fatal memory_limit error):

        Warning: Unknown: Unable to allocate memory for pool. in Unknown on line 0

    The shared server is a 64-bit OpenVZ container running cPanel. There are only ~6 VPSes on the host; this is the largest one at only 4 GB. The host itself has 24 GB RAM. As the graphs showed (VPS memory usage and host memory usage, not reproduced here), memory usage on both the host and the VPS is rather low. CPU usage, disk, and the host all seem to be normal. RLimitMEM was set to 583653034, yet memory usage is about the same as it usually is. Apache 2.2, PHP 5.2 (mod_php). Restarting Apache has corrected the problem for now; however, I'd like to prevent it from happening again, and I'm not sure what was limiting the memory. There seems to be plenty of memory, so what caused this error? APC information:

        apc.ttl=0
        apc.shm_size=0
        apc.mmap_file_mask=(blank)
        1 Segment(s) with 32.0 MBytes (mmap memory, pthread mutex locking)
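
    For what it's worth, that particular warning is typically emitted by APC when its shared-memory segment is full or fragmented, not by PHP's own memory_limit. A minimal sketch of the relevant php.ini settings (values are illustrative, not a recommendation tuned for this box; very old APC builds expect a bare number of megabytes, e.g. apc.shm_size = 128):

        ; give the cache more room than the 32 MB default segment
        apc.shm_size = 128M
        ; let stale entries expire so the pool can be reclaimed under pressure
        apc.ttl = 7200
        apc.user_ttl = 7200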

    Read the article

  • How to mount an iSCSI/SAN storage drive to a stable device name (one that can't change on re-connect)?

    - by jcalfee314
    We need stable device paths for our TwinStrata SAN drives. Many guides for setting up iSCSI connections simply say to use a device path like /dev/sda or /dev/sdb. This is far from correct; I doubt that any setup would be happy to have its device name suddenly change (from /dev/sda to /dev/sdb, for example). The fix I found was to install multipath and start multipathd at boot, which then provides a stable mapping from the storage's WWID to a device path such as /dev/mapper/firebird_database. This is the method described in the CentOS/Red Hat documentation here: http://www.centos.org/docs/5/html/5.1/DM_Multipath/setup_procedure.html. It seems a little complicated, though. We noticed that it is common to see UUIDs in fstab on new installs. So, the question is: why do we need an external program (multipathd) running to provide a stable device mount? Shouldn't there be a way to provide the WWID directly in /etc/fstab?
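
    When there is only a single path to the LUN, udev already creates stable symlinks, so an fstab entry can reference the filesystem UUID or the WWN-based link under /dev/disk/by-id without involving multipathd at all. A sketch, with placeholder identifiers and mount point:

        # by filesystem UUID (shown by blkid)
        UUID=0a1b2c3d-1111-2222-3333-444455556666  /srv/firebird  ext4  _netdev,defaults  0 2

        # or by the WWN/WWID symlink udev creates for the LUN's first partition
        /dev/disk/by-id/wwn-0x60a98000aabbccdd-part1  /srv/firebird  ext4  _netdev,defaults  0 2

    The _netdev option keeps the mount from being attempted before the network (and the iSCSI session) is up. multipathd only becomes necessary when the same LUN is reachable over more than one path and you want failover between them.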

    Read the article

  • How do you delete an iFolder from the iFolder admin interface?

    - by cheshirekow
    There are only two buttons at the bottom of the screen, "Enable" and "Disable". When I check the box next to an iFolder, one of them becomes active (depending on the state of the folder), but there is no button to delete the folder (as it seems there should be, judging from the documentation). There is a delete button on the "Orphaned" tab, but how do you "orphan" an iFolder? I'm logged in to the admin interface as admin, who is currently the owner of the folder I wish to delete.

    Read the article

  • Securing Debian with fail2ban or iptables

    - by Jimmy
    I'm looking to secure my server. Initially my first thought was to use iptables, but then I also learnt about Fail2ban. I understand that Fail2ban is built on top of iptables, but it has the advantage of being able to ban IPs after a number of failed attempts. Let's say I want to block FTP completely; should I (see the sketch below):

      - write a separate iptables rule to block FTP, and use Fail2ban just for SSH, or
      - instead put all rules, even the FTP-blocking rule, within the Fail2ban config?

    Any help on this would be appreciated. James
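
    A static block doesn't really belong in Fail2ban, since Fail2ban exists to react to log entries rather than to hold permanent policy. A minimal sketch of the split, assuming a stock Debian layout: the permanent rule goes in your firewall script,

        # drop inbound FTP control connections outright
        iptables -A INPUT -p tcp --dport 21 -j DROP

    while /etc/fail2ban/jail.local carries the reactive SSH jail (the section is named [ssh] on older Fail2ban releases and [sshd] on newer ones):

        [ssh]
        enabled  = true
        port     = ssh
        maxretry = 5
        bantime  = 3600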

    Read the article

  • Running two Magento installations, one of which has 3 stores set up as multi-store. Which server?

    - by Pedro Peixoto
    I want to run 4 Magento stores across 2 different installations. One is a standalone installation with 3 languages; the other is a multi-store setup with 3 different online stores on different domains. At the moment we have a VPS with 1 GB of memory; would that be enough? I ask because I've finished the standalone store and already put it online, and the server is already at 62% memory usage. Ideally this would be enough, as my company would rather not move to a dedicated server (because of the cost involved). I'm sure I can try to optimize Magento to run on less memory (I'm expecting visits averaging 2000/day across all sites); if I could have some tips on the best way to do that, I'd appreciate it too.

    Read the article

  • Router that allows custom Dynamic DNS server [closed]

    - by Thuy
    I've made my own DDNS service and it works fine using an application running on the clients to update the IP. But if for some reason I can't use my software and instead need the router itself to update the IP, it becomes troublesome. For example, I needed to set up IPsec from a customer to us, and the customer's router/firewall (Netgear SRX5308) gets a dynamic IP from an ISP that can't offer static IPs, so dynamic DNS is needed for it to work. In this case there really isn't a client to run the software on, since it's a router/firewall. Unfortunately, most routers are rather unfriendly towards custom DDNS solutions and offer only dyndns.com or similar templates, which was the case with this router too, leaving me with no way to use my own dynamic DNS server. I have the option of swapping out the customer's router, and I've been looking around for alternatives, so I was wondering if anyone on this great site has been in a similar situation or knows of a router/firewall that is friendlier towards custom DDNS solutions. Thanks in advance for any help or guidance!

    Read the article

  • File/folder write/delete permissions: is my server secure?

    - by acidzombie24
    I wanted to know: if someone got access to my server using a non-root account, how much damage could they do? After I su to someuser, I used this command to find all files and folders that are writable:

        find / -writable >> list.txt

    Here is the result. It is mostly /dev/something and /proc/something, plus these:

        /var/lock
        /var/run/mysqld/mysqld.sock
        /var/tmp
        /var/lib/php5

    Is my system secure? /var/tmp makes sense, but I'm unsure why this user has write access to the other locations. Should I change them? stat /var/lib/php5 gives me 1733, which is odd: why write access, and why no read? Is this some kind of weird use of a temp directory?
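
    As a quick reading of those permissions: the leading 1 in 1733 is the sticky bit, so, much like /tmp and /var/tmp (mode 1777), everyone may create files there but only the owner may remove them; the missing read bit simply stops users from listing each other's PHP session files, which is deliberate on Debian/Ubuntu. A small audit sketch:

        # show mode, owner and group for the paths in question
        stat -c '%a %U:%G %n' /var/lock /var/run/mysqld/mysqld.sock /var/tmp /var/lib/php5

        # world-writable entries outside /proc and /dev that lack the sticky bit
        # are the ones worth a closer look
        find / -xdev -perm -0002 ! -perm -1000 ! -type l 2>/dev/null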

    Read the article

  • What software does this log file come from? [closed]

    - by mickula
    What software does this log file come from? Please specify the full name.

        Internal IP Threshold FlowsDiff 40 flows/s, Diff: 73 flows/s
        Sum 26.962 flows/300s (89 flows/s), 32.162.000 packets/300s (107.206 packets/s), 1,198 GByte/300s (32 MBit/s)
        External 87.98.238.221, 26.958 flows/300s (89 flows/s), 32.156.000 packets/300s (107.186 packets/s), 1,198 GByte/300s (32 MBit/s)
        External 89.230.69.49, 2 flows/300s (0 flows/s), 2.000 packets/300s (6 packets/s), 0,000 GByte/300s (0 MBit/s)
        External 89.231.190.149, 1 flows/300s (0 flows/s), 3.000 packets/300s (10 packets/s), 0,000 GByte/300s (0 MBit/s)
        External 89.239.101.20, 1 flows/300s (0 flows/s), 1.000 packets/300s (3 packets/s), 0,000 GByte/300s (0 MBit/s)

    Read the article

  • Rewriting mod_pagespeed filenames back to the normal files with mod_rewrite

    - by British Sea Turtle
    I am hoping someone can help me with this problem. I am moving to a new server and will no longer be using mod_pagespeed, but we have lots of external links to images on our site that use the rewritten mod_pagespeed filenames. The move itself is not an issue, but we do not want to end up with lots of 404 errors. So I have lots of links like the following:

        http://www.domain.com/images/150x150xlink.png.pagespeed.ic.pPXw45HSQm.png
        http://www.domain.com/images/paris_01.gif.pagespeed.ce.vfrkuKUaj0.gif
        http://www.doamin.com/images/1st2.gif.pagespeed.ce.OUg38q6VbZ.gif

    How can I redirect them to:

        http://www.domain.com/images/150x150xlink.png
        http://www.domain.com/images/paris_01.gif
        http://www.doamin.com/images/1st2.gif

    There are thousands of files like this, so I am hoping for a simple solution with mod_rewrite. I tried this, but it does not work:

        RewriteCond %{REQUEST_URI} \.gif\.pagespeed\. [NC]
        RewriteRule ^(.*?\.gif)\..*\.gif$ $1 [NC,L]

    Any help would be appreciated.
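
    A sketch of a single rule that strips the whole .pagespeed.<filter>.<hash>.<ext> suffix for any file type, assuming the rules live in the site's .htaccess or vhost and the original files still exist on disk:

        RewriteEngine On
        # /images/foo.png.pagespeed.ic.pPXw45HSQm.png  ->  /images/foo.png
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.+)\.pagespeed\.[^/]+$ $1 [NC,L,R=301]

    The RewriteCond leaves files that really exist alone, and the 301 tells external sites and search engines to update to the plain URL.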

    Read the article

  • How do you enable webcam support in Facebook on Ubuntu 10.04?

    - by Jonathan
    I think I have finally arrived at an unsolvable equation: Chromium v7 + Ubuntu 10.04 + Sun Java 6 + webcam + Facebook + Flash 10 = non-functional. All of the items listed above are potential points of failure in this situation, and any help narrowing them down would be fantastic. I am simply trying to enable webcam support directly through Facebook's website. Forum searches and the usual googling turn up few posts related to this specific equation. Two of the major suggestions include:

      1. Installing the Sun-provided (I refuse to say Oracle, sob) Java implementation instead of the OpenJDK normally installed on Ubuntu. And yes, after installing it, I did update all my defaults to use the Sun commands over the OpenJDK ones.
      2. Somehow enabling Facebook as a permitted site to access my webcam using the Flash settings. I have not been able to explore this option because I cannot find a way to adjust the Flash settings in Chromium 7.

    Other factors that do not help include the fact that I am pretty sure Facebook changes its webcam interface every 10 seconds just to keep troubleshooters and support personnel on their toes. If anyone has an OTP that informs us of the next shift in the app, a leak would be greatly appreciated! Cheers!

    Read the article

  • tailwatchd - chkservd on host.domain.com status: hang

    - by Zim3r
    I was notified of this error by email on the destination server while transferring a server:

        The chkservd sub-process with pid 17420 was running for 602 seconds. The sub-process was terminated as it exceeded the time between checks of 300 seconds. Please check /var/log/chkservd.log and /usr/local/cpanel/logs/tailwatchd_log to discover the

    What does it mean? This also happened:

        ftpd failed @ Wed Aug 8 11:26:38 2012. A restart was attempted automagically.
        Service Check Method:  [socket connect]
        Reason: Timeout while trying to get data from service: Died at /usr/local/cpanel/Cpanel/TailWatch/ChkServd.pm line 607.
        Number of Restart Attempts: 1
        Startup Log: Starting pure-config.pl: Running: /usr/sbin/pure-ftpd -O clf:/var/log/xferlog --daemonize -A -c50 -B -C8 -D -fftp -H -I15 -lextauth:/var/run/ftpd.sock -L10000:8 -m4 -s -U133:022 -u100 -Oxferlog:/usr/local/apache/domlogs/ftpxferlog -k99 -Z -Y1 -JHIGH:MEDIUM:+TLSv1:!SSLv2:+SSLv3 [ OK ]
        Starting pure-authd:

    Read the article

  • How do I install yum on Red Hat Enterprise 4?

    - by Bob Cross
    For historical reasons, one of the machines that I manage has a Red Hat Enterprise 4 boot disk (among others). Every now and then we have to boot into RHEL 4 to bring up some of the legacy software that we support and connect to. Since it's a fringe system, the Red Hat support has long since lapsed, and I can't convince myself that it would be worth paying just to get RPMs that I can go and get for myself. That said, the default RHEL tools are heavily biased against letting you do exactly that. I would like to install yum and use it for package discovery and installation. So, is there an installation guide for integrating yum with an older RHEL 4 system?
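
    A rough bootstrap sketch, assuming you can fetch yum and its Python dependencies as RPMs from a CentOS 4 vault mirror for the matching architecture (exact package names and versions vary by update level, so treat the file names below as placeholders):

        # install yum and the helpers it needs in one rpm transaction
        rpm -Uvh sqlite-*.rpm python-sqlite-*.rpm python-elementtree-*.rpm \
                 python-urlgrabber-*.rpm yum-*.rpm

        # then point yum at a repository that still carries RHEL4-era packages,
        # either in /etc/yum.conf or via a .repo file in /etc/yum.repos.d/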

    Read the article

  • LVS / IPVS difference in ActiveConn since upgrading

    - by Hans
    I've recently migrated from an old version of LVS / ldirectord (Ultra Monkey) to a new Debian install with ldirectord. Now the number of active connections is usually higher than the number of inactive connections, where it used to be the other way around. On the old load balancer the connections looked something like:

        -> RemoteAddress:Port    Forward  Weight  ActiveConn  InActConn
        -> 10.84.32.21:0         Masq     1       12          252
        -> 10.84.32.22:0         Masq     1       18          368

    However, since migrating to the new load balancer it looks more like:

        -> RemoteAddress:Port    Forward  Weight  ActiveConn  InActConn
        -> 10.84.32.21:0         Masq     1       313         141
        -> 10.84.32.22:0         Masq     1       276         183

    Old load balancer: Debian 3.1, ipvsadm 1.24, ldirectord 1.2.3. New load balancer: Debian 6.0.5, ipvsadm 1.25, ldirectord 1.0.3 (I guess the versioning scheme changed). Is it because the old load balancer was running a kernel from 2005 and ldirectord from 2004, and things have simply changed in the past 7-8 years? Did I miss some sysctl settings that I should be enforcing for it to behave in the same way? Everything appears to be working fine, but can anyone see an issue with this behaviour? Thanks in advance! Additional info: I'm using LVS in masquerading mode, and the real servers have the load balancer as their gateway. The real servers are running Apache, which hasn't changed during the upgrade. The boxes themselves show roughly the same number of inactive connections as ipvsadm does.
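
    One way to see where the difference comes from, as a sketch: ActiveConn counts entries in ESTABLISHED state, while InActConn counts everything else (SYN_RECV, FIN_WAIT, TIME_WAIT, ...), so breaking the connection table down by state and comparing the timeouts on the two generations of load balancer usually explains a shift like this.

        # connection table, counted per TCP state
        ipvsadm -Lcn | awk 'NR>2 {print $3}' | sort | uniq -c

        # current timeouts (seconds: tcp tcpfin udp)
        ipvsadm -L --timeout

        # and, if the new box turns out to use different values, set them explicitly
        ipvsadm --set 900 120 300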

    Read the article

  • Creating a link to a directory whose name changes

    - by groove1534
    I have Ubuntu 12.04 installed using Wubi alongside Windows 7. I'm trying to create a link to the "My Documents" directory, which is located on my C: drive at C:\Users\Myuser\My Documents\. Since Ubuntu is installed on D:\, which is the "host", my C: drive is accessible via /media/some_changing_hex, and this hex changes each time I restart the machine. So I need, somehow, to create a link that uses a pattern, OR a link that somehow resolves to the first (in this case, only) subdirectory of /media (something like all_subdirectories[0]). How do I do that?
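
    One way around the changing hex, as a sketch: mount the Windows C: partition at a fixed mount point yourself instead of relying on the automounter, then symlink to that. The device UUID and user name below are placeholders; the real UUID comes from sudo blkid.

        # create a stable mount point and describe it in /etc/fstab
        sudo mkdir -p /mnt/windows-c
        # fstab line (find the real UUID with: sudo blkid):
        #   UUID=0123456789ABCDEF  /mnt/windows-c  ntfs-3g  defaults,ro,uid=1000  0  0
        sudo mount /mnt/windows-c

        # a symlink that survives reboots
        ln -s "/mnt/windows-c/Users/Myuser/My Documents" ~/my-documents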

    Read the article

  • Enter response once prompt returns?

    - by mjb
    It's neither a secure idea nor one I'd recommend elsewhere, but I have a situation where it occasionally takes a while for my Ansible ad-hoc command to get around to prompting. I'd love to pipe, pass arguments, or do whatever is needed to push the required text into the prompt so I can walk away and know it will finish. For example:

        $ ansible all -m shell -a "reboot" --ask-pass
        Password:
        blah blah blah it worked

    I'd love to send an argument, or use <<, or something else to get the password in. Is that possible?
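
    Two common ways to avoid the interactive prompt entirely, as a sketch (both leave the password readable on the control machine, which matches the "not secure, wouldn't recommend elsewhere" caveat): set ansible_ssh_pass in the inventory, or pass it as an extra variable instead of using --ask-pass.

        # inventory (e.g. /etc/ansible/hosts)
        # [all:vars]
        # ansible_ssh_pass=SuperSecret

        # or ad hoc on the command line
        ansible all -m shell -a "reboot" -e "ansible_ssh_pass=SuperSecret"

    Either way, sshpass has to be installed on the control machine, since that is what Ansible uses to drive password-based SSH non-interactively.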

    Read the article

  • Maximise network transfer speed of various applications

    - by Alex
    When using nc, scp, or wget to transfer files between 2 machines on a dedicated 2 Mbps link, I get speeds between 0.5 and 1 Mbps. However, when I use iperf -c 10.0.1.4 -t 20 -P 12 (for example) I can maximise the speed of the link (getting a stable 2 Mbps). Is there a way to make single-stream transfers (such as those done by scp) utilise all, or most, of the link? Some kind of TCP settings, or iptables...?
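
    A single TCP stream is bounded by window size divided by round-trip time, so on a long or lossy link the usual first step is to confirm window scaling is on and raise the buffer ceilings. A sketch of the relevant sysctls (the values are illustrative; persist them in /etc/sysctl.conf once they prove out):

        # allow windows larger than 64 KB
        sysctl -w net.ipv4.tcp_window_scaling=1
        # raise the maximum socket buffer sizes (bytes)
        sysctl -w net.core.rmem_max=4194304
        sysctl -w net.core.wmem_max=4194304
        # min / default / max auto-tuned TCP buffer sizes
        sysctl -w net.ipv4.tcp_rmem="4096 87380 4194304"
        sysctl -w net.ipv4.tcp_wmem="4096 65536 4194304"

    If the link drops packets rather than being merely slow, one stream may still fall short, which is exactly why twelve parallel iperf streams manage to fill it.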

    Read the article

  • CentOS 6 init script doesn't work properly

    - by user711643
    I'm setting up my Ruby production server based on CentOS 6. I need a process called god (which is a process monitoring tool) to start at boot. I'm using an init script that I found here. Just as stated in the guide, I ran:

        chkconfig --add god

    and then:

        chkconfig --level 345 god on

    After this, if I run service god start|restart, everything works: it loads the available configurations and brings up the related processes (if they are not running). The problem is that it doesn't work at boot. If I reboot the system and then run ps aux | grep god, god is running, but apparently it didn't load the configuration files. If I then run service god restart, it loads everything without problems. What am I doing wrong?
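
    A sketch of things to check: at boot the script runs with a minimal environment and no particular working directory, so a relative path to the god configuration (or a Ruby that isn't on root's early PATH) can leave god running with nothing loaded. The paths below are assumptions to be adapted to the actual install.

        # confirm the runlevel links chkconfig created
        chkconfig --list god
        ls -l /etc/rc3.d/ /etc/rc5.d/ | grep god

        # inside the init script, avoid relying on $PATH or $PWD, e.g.:
        #   daemon /usr/local/bin/god -c /etc/god/master.god --log /var/log/god.log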

    Read the article

  • Back up all Plesk MySQL databases to individual files

    - by Michael
    Hi, because I'm new to shell scripting I need a hand. I currently back up all my databases to a single file, which makes restores pretty hard. The second problem is that my MySQL password doesn't work because of a Plesk bug, so I take the password from /etc/psa/.psa.shadow. Here is the code that I use to back up all my databases to a single file:

        mysqldump -uadmin -p`cat /etc/psa/.psa.shadow` --all-databases | bzip2 -c > /root/21.10.2013.sql.bz2

    I found some scripts on the web that back up each database to an individual file, but I don't know how to adapt them to my situation. Here is an example script:

        for db in $(mysql -e 'show databases' -s --skip-column-names); do
            mysqldump $db | gzip > "/backups/mysqldump-$(hostname)-$db-$(date +%Y-%m-%d-%H.%M.%S).gz";
        done

    Can someone help me make the script above work for my situation? Requirements: back up each database to an individual file, using the Plesk password location.
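
    A minimal sketch combining the two, assuming the admin account and the /etc/psa/.psa.shadow trick work for the database listing as well as for the dumps (internal schemas such as information_schema are skipped):

        #!/bin/sh
        PASS=$(cat /etc/psa/.psa.shadow)
        STAMP=$(date +%Y-%m-%d-%H.%M.%S)

        for db in $(mysql -uadmin -p"$PASS" -e 'show databases' -s --skip-column-names \
                    | grep -Ev '^(information_schema|performance_schema)$'); do
            mysqldump -uadmin -p"$PASS" "$db" \
                | bzip2 -c > "/backups/mysqldump-$(hostname)-$db-$STAMP.sql.bz2"
        done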

    Read the article

  • Multiple user directories on EC2

    - by Joseph
    I'm trying to set up multiple user directories on EC2 running Ubuntu, but I'm not sure how to set it up correctly so that I can serve files in the following format:

        http://<ec2 ip address>/user_1/public_html/file1.html

    and

        http://<ec2 ip address>/user_2/public_html/file3.html

    and so on for every user that I add. I tried looking for the httpd.conf file, but I couldn't find it; I only found apache2.conf. Thank you, guys.
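
    On Ubuntu the Apache configuration is split up, so this belongs in a file under /etc/apache2/ (a vhost in sites-available/, or a snippet in conf.d/ on older releases) rather than in httpd.conf. A minimal sketch, assuming each user's files live under /home/<user>/public_html and using Apache 2.2 access directives (on 2.4 the last two lines become "Require all granted"):

        Alias /user_1/public_html /home/user_1/public_html
        Alias /user_2/public_html /home/user_2/public_html

        <Directory /home/*/public_html>
            Options Indexes FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

    If URLs of the form /~user_1/ are acceptable instead, enabling the stock userdir module (a2enmod userdir) gives the same result without one Alias line per user.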

    Read the article

  • NVidia TwinView - slow rendering on dual desktop [closed]

    - by lisak
    Hey, does anybody have experience with it? I've set it up 4 times on 4 different machines, and there were always problems with slow rendering (for instance, scrolling pages in the browser is not fluent). But there was always something that finally made it work perfectly... I remember that one time this option helped, but not now:

        Option "RenderAccel" "1"

    Nvidia GeForce 8400GS or Zotac GeForce 9500GT, monitors connected via DVI and HDMI connectors, proper nvidia driver installed.

        Section "ServerLayout"
            Identifier     "X.org Configured"
            Screen      0  "Screen0" 0 0
            InputDevice    "Mouse0" "CorePointer"
            InputDevice    "Keyboard0" "CoreKeyboard"
            Option         "Xinerama" "0"
        EndSection

        Section "Files"
            ModulePath   "/usr/lib64/xorg/modules"
            FontPath     "/usr/share/fonts/local"
            FontPath     "/usr/share/fonts/TTF"
            FontPath     "/usr/share/fonts/OTF"
            FontPath     "/usr/share/fonts/Type1"
            FontPath     "/usr/share/fonts/misc"
            FontPath     "/usr/share/fonts/CID"
            FontPath     "/usr/share/fonts/75dpi/:unscaled"
            FontPath     "/usr/share/fonts/100dpi/:unscaled"
            FontPath     "/usr/share/fonts/75dpi"
            FontPath     "/usr/share/fonts/100dpi"
            FontPath     "/usr/share/fonts/cyrillic"
        EndSection

        Section "Module"
            Load  "dri2"
            Load  "glx"
            Load  "extmod"
            Load  "record"
            Load  "dbe"
        EndSection

        Section "InputDevice"
            Identifier  "Keyboard0"
            Driver      "kbd"
        EndSection

        Section "InputDevice"
            Identifier  "Mouse0"
            Driver      "mouse"
            Option      "Protocol" "auto"
            Option      "Device" "/dev/input/mice"
            Option      "ZAxisMapping" "4 5 6 7"
        EndSection

        Section "Monitor"
            Identifier   "Monitor0"
            VendorName   "Unknown"
            ModelName    "Acer AL1715"
            HorizSync    30.0 - 83.0
            VertRefresh  50.0 - 75.0
        EndSection

        Section "Device"
            Identifier  "Nvidia"
            Driver      "nvidia"
            VendorName  "NVIDIA Corporation"
            BoardName   "MSI big bang-fuzion"
        EndSection

        Section "Device"
            Identifier  "Device0"
            Driver      "nvidia"
            VendorName  "NVIDIA Corporation"
            BoardName   "GeForce 8400 GS"
        EndSection

        Section "Screen"
            Identifier    "Screen0"
            Device        "Device0"
            Monitor       "Monitor0"
            DefaultDepth  24
            Option        "RenderAccel" "1"
            Option        "AllowGLXWithComposite" "1"
            Option        "TwinView" "1"
            Option        "TwinViewXineramaInfoOrder" "DFP-1"
            Option        "metamodes" "CRT: 1280x1024 +1920+0, DFP: 1920x1080 +0+0"
            SubSection "Display"
                Depth  24
            EndSubSection
        EndSection

    Read the article

  • Understanding ulimit -u

    - by tripleee
    I'd like to understand what's going on here:

        linvx$ ( ulimit -u 123; /bin/echo nst )
        nst
        linvx$ ( ulimit -u 122; /bin/echo nst )
        -bash: fork: Resource temporarily unavailable
        Terminated
        linvx$ ( ulimit -u 123; /bin/echo one; /bin/echo two; /bin/echo three )
        one
        two
        three
        linvx$ ( ulimit -u 123; /bin/echo one & /bin/echo two & /bin/echo three )
        -bash: fork: Resource temporarily unavailable
        Terminated
        one

    I speculate that the first 122 processes are consumed by Bash itself, and that the remaining ulimit governs how many concurrent processes I am allowed to have. The documentation is not very clear on this. Am I missing something? More importantly, for a real-world deployment, how can I know what sort of ulimit is realistic? It's a long-running daemon which spawns worker threads on demand and reaps them when the load decreases. I've had it spin the server to its death a few times. The most important limit is probably memory, which I have now limited to 200M per process, but I'd like to figure out how I can enforce a limit on the number of children (the program does allow me to configure a maximum, but how do I know there are no bugs in that part of the code?)
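
    One detail that explains the numbers, offered as a hedge: ulimit -u maps to RLIMIT_NPROC, which the kernel checks against the total number of processes and threads owned by the user, not just the children of the current shell, so the workable floor depends on whatever else the account is already running. A sizing and enforcement sketch (the user name in limits.conf is a placeholder):

        # how many processes/threads does the user own right now?
        ps -L -u "$USER" --no-headers | wc -l

        # enforce a ceiling for the daemon's account in /etc/security/limits.conf
        # (applied by PAM at the next login/session for that user)
        #   daemonuser  soft  nproc  500
        #   daemonuser  hard  nproc  600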

    Read the article
