Search Results

Search found 821 results on 33 pages for 'redhat'.

Page 16/33 | < Previous Page | 12 13 14 15 16 17 18 19 20 21 22 23  | Next Page >

  • How can I measure actual memory usage from my running processes?

    - by NullUser
    I have two servers, server1 and server2. Both are identical HP blades running the exact same OS (RHEL 5.5). Here's the output of free for both of them:

      ### server1:
                   total       used       free     shared    buffers     cached
      Mem:       8017848    2746596    5271252          0     212772    1768800
      -/+ buffers/cache:     765024    7252824
      Swap:     14188536          0   14188536

      ### server2:
                   total       used       free     shared    buffers     cached
      Mem:       8017848    4494836    3523012          0     212724    3136568
      -/+ buffers/cache:    1145544    6872304
      Swap:     14188536          0   14188536

    If I understand correctly, server2 is using significantly more memory for disk I/O caching, which still counts as memory used. But both are running the same OS, and if I remember correctly I configured both with the same parameters when they were installed; a diff on /etc/sysctl.conf shows they are identical. The problem is that I am collecting memory usage and other metrics (e.g. vmstat, iostat) over a period of time while a load is generated on the system, and the memory used for caching is throwing off my calculations. How can I measure actual memory usage from my running processes, rather than system usage? Is used - (buffers + cached) a valid way to measure this?
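    A rough way to get at process memory rather than cache, sketched here with stock procps tools: the awk arithmetic simply reproduces the -/+ buffers/cache line, and summing RSS double-counts shared pages, so treat that figure as an upper bound.

      # memory actually held by applications, ignoring buffers and page cache
      awk '/^(MemTotal|MemFree|Buffers|Cached):/ {m[$1]=$2}
           END {print m["MemTotal:"] - m["MemFree:"] - m["Buffers:"] - m["Cached:"], "kB"}' /proc/meminfo

      # or sum the resident set size of every running process
      ps -eo rss= | awk '{sum += $1} END {print sum, "kB total RSS"}'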

    Read the article

  • USER_LOGIN audit log with incorrect auid value?

    - by hijinx
    We have a CentOS 6.2 x86_64 system that's logging what looks to be erroneous audit information. We were receiving alerts for failed logins by a user who wasn't actually trying to log in. After some diagnosis, we figured out that the source of the events is our tool that periodically checks whether SSH is answering. When that happens, we see this log entry:

      type=USER_LOGIN msg=audit(1340312224.011:489216): user pid=28787 uid=0 auid=501 ses=8395 subj=unconfined_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=login acct=28756E6B6E6F776E207A01234567 exe="/usr/sbin/sshd" hostname=? addr=127.0.0.1 terminal=ssh res=failed'

    This is the entry we get whenever there is an incomplete SSH connection, but usually the auid is the same as the ses= value. For some reason, on this system, it uses one particular user's auid regardless of the login user. For example, ssh'ing to this system as [email protected] and cancelling before providing a password generates this error. Attempting to log in to an unrelated account with a bogus password will also create an entry with the incorrect auid value.
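    If it helps to compare what auditd itself recorded, one hedged way to pull the recent failed USER_LOGIN events (assuming the audit userspace tools, i.e. ausearch, are installed) is:

      ausearch -m USER_LOGIN --success no -ts recent -i    # -i interprets auid/uid numbers into names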

    Read the article

  • Making a bootable image of Red Hat Enterprise Linux ES for a VM

    - by djshortbus
    I have an old server running Red Hat that has some valuable apps installed. I would like to create a bootable image of the drive and install it in a VM on a newer server; I am trying to avoid reinstalling Red Hat, the apps, and the data. Any useful links or advice would be greatly appreciated. (Not yet decided on the VM software.)
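    One common physical-to-virtual route is a raw disk image plus an optional format conversion; a hedged sketch with illustrative device and file names (boot the old box from a live CD so the disk is quiescent while copying):

      dd if=/dev/sda of=/mnt/external/redhat-server.img bs=4M conv=noerror,sync
      # most hypervisors can boot a raw image directly, or convert it to their native format, e.g.:
      qemu-img convert -f raw -O vmdk /mnt/external/redhat-server.img redhat-server.vmdk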

    Read the article

  • RHEL 5.3 Kickstart - How do I specify the location of an individual package in the Workstation folder?

    - by Ed
    I keep getting "package does not exist" errors during the install. I made a kickstart ISO to create an unattended install of a RHEL 5.3 build machine for C++ software releases. It pulls the kickstart config file from our internal web server. This is handy; it makes it easy to test and modify without having to make a new ISO, and I plan to check it in to version control if I can get it working. Anyway, the rpm packages are located in two folders on the disk: Client and Workstation. The packages physically located under the Client folder install fine. It cannot find the ones under the Workstation folder, such as doxygen and subversion, and complains that the packages do not exist. Is there a way to specify the individual package location?

      # -----------------------------------------------------------------------------
      # P A C K A G E S
      # -----------------------------------------------------------------------------
      %packages
      @gnome-desktop
      @core
      @base
      @base-x
      @printing
      @development-tools
      emacs
      kexec-tools
      fipscheck
      xorg-x11-server-Xnest
      xorg-x11-server-Xvfb
      # Packages located in the Workstation folder *** the install cannot find any of these ??
      bison
      doxygen
      gcc-c++
      subversion
      zlib-devel
      freetype-devel
      libxml2-devel

    Thanks in advance, -Ed
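    For what it's worth, RHEL 5 kickstart can point at additional install trees with the repo directive; a hedged sketch in which the server name and paths are placeholders for wherever the Workstation tree is exported:

      repo --name=Client      --baseurl=http://kickstart.example.com/rhel53/Client/
      repo --name=Workstation --baseurl=http://kickstart.example.com/rhel53/Workstation/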

    Read the article

  • NTP configuration not recognized?

    - by Eugene S
    I'm trying to configure NTP on my machine but it seems that the parameters I set are not being read by the system. Below is my /etc/ntp.conf file (I applied the most basic configuration to eliminate other issues):

      server 10.45.68.47
      server 127.0.0.1

    After I set the above configuration, I restart the ntpd process with:

      service ntpd restart

    And then I get the following output:

      Shutting down ntpd:                       [  OK  ]
      ntpd: Synchronizing with time server:     [FAILED]
      Starting ntpd:                            [  OK  ]

    Moreover, I can see the following in /var/log/messages:

      Apr  2 10:54:07 hsystem1a ntpd[21067]: ntpd exiting on signal 15
      Apr  2 10:54:07 hsystem1a ntpdate[21537]: can't find host ntpServer1
      Apr  2 10:54:07 hsystem1a ntpdate[21537]: can't find host ntpServer2
      Apr  2 10:54:07 hsystem1a ntpdate[21537]: no servers can be used, exiting

    So it seems that ntpServer1 and ntpServer2 are being read from somewhere else, instead of the IPs I configured in /etc/ntp.conf. NOTE: I did an init 6 on the machine, just in case. Thanks!
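    One hedged place those hostnames could be coming from: on a stock RHEL init script, the "Synchronizing with time server" step runs ntpdate against the hosts listed in /etc/ntp/step-tickers rather than /etc/ntp.conf, so a quick check would be:

      cat /etc/ntp/step-tickers                               # hosts used by the init script's ntpdate step
      grep -r ntpServer /etc/ntp* /etc/sysconfig/ntpd 2>/dev/null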

    Read the article

  • Sharing accounts between multiple computers running Ubuntu Linux

    - by john
    My school has a computer lab full of machines running Red Hat Linux. They have it set up so that you can log into any computer in the lab and it automatically loads your desktop, home directory, etc., which makes all computers in the lab look the same to you, regardless of which one you're using. I have two computers at home running Ubuntu Linux. Could I do the same thing with my computers at home? What's it called, and how do I find documentation on how to set it up? Thanks!
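    The usual name for this is roaming or networked home directories backed by a shared account database (NIS or LDAP plus NFS-mounted /home). A very rough sketch of the NFS half, with illustrative hostnames and addresses, assuming both machines use the same usernames and uids:

      # on the machine that will hold the home directories
      echo '/home 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
      exportfs -ra
      # on the other machine
      mount -t nfs homeserver.local:/home /home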

    Read the article

  • SMBfs mounting OK, listing OK, Read KO, smbclient OK

    - by Kwaio
    I've tried to make the title the most meaningful I could, but it still looks ugly. The premises: we are using RHEL3-U8 as the OS on most servers here (don't ask me why or suggest an upgrade, it's not on today's schedule), which means the kernel is 2.4.21. I have no access to the remote server, but I know it is a NetApp NAS rack.

      $> smbclient --version
      Version 3.0.9-1.3E.9

    Here is the /etc/fstab line:

      //NASHOSTNAME/share /mnt/mydir smbfs ro,uid=123,gid=123,workgroup=XXXX,credentials=/somefile 0 0

    Here is the corresponding mount output line:

      //NASHOSTNAME/share on /mnt/mydir type smbfs (0)

    The symptoms: I can list the share without problems, even cd in there. The issue appears if I try to read any file:

      $> cat /mnt/mydir/fileX.txt
      cat: /mnt/mydir/fileX.txt: Input/output error

    In the system logs (/var/log/kernel for example) the following errors appear:

      Jul 30 15:40:02 hostname kernel: smb_errno: class ERRHRD, code 31 from command 0x2
      Jul 30 15:40:02 hostname kernel: smb_errno: class ERRHRD, code 31 from command 0x2
      Jul 30 15:40:02 hostname kernel: smb_open: fileX.txt open failed, result=-5
      Jul 30 15:40:02 hostname kernel: smb_errno: class ERRHRD, code 31 from command 0x2
      Jul 30 15:40:02 hostname kernel: smb_errno: class ERRHRD, code 31 from command 0x2
      Jul 30 15:40:02 hostname kernel: smb_open: fileX.txt open failed, result=-5
      Jul 30 15:40:02 hostname kernel: smb_readpage_sync: fileX.txt open failed, error=-5

    The ERRHRD code 0x001F error is "General hardware failure", although samba sometimes uses it for a different purpose, see http://www.ubiqx.org/cifs/SMB.html [Strange behaviour Alert]. Additional information: there is another SMB mountpoint on the system pointing to a (Linux) host running Samba, and that one works. What I have tried: I added debug=4 to the mount options and remounted the share, but the logs still look the same. I mounted the share with smbclient and I am able to fetch files with the get command. Both targets are in the same subnet, so a network problem should be ruled out, even though the LAN goes through a VPN with optimizers; the MTU has already been decreased to 1450. I can also mount the share through NFS, but then the files are all root.root 700 and I need to read them with another user...

    Read the article

  • How to install Red Hat Enterprise Linux on an Apple MacBook Pro (MacBookPro4,1)

    - by Todd V. Rovito
    I have a one-year-old MacBook Pro that I am trying to get RHEL 5.4 installed on via Boot Camp. No matter what I do I can't get the installer to boot. I have tried multiple DVDs and even verified the install works on a new MacBook Pro. Most of the time the installer simply locks up. I usually use "linux text" with all-generic-ide on the boot line; I also tried removing the ide parameter and using just "linux text". The result is that a bunch of kernel messages appear, then the background turns blue and a thin text box pops up saying it is loading ata... something (it disappears too fast for me to read). Then the machine freezes. I pressed the Alt-function keys to see if I could look at the system log; here is what it says:

      Alt-F3: "trying to mount CD device hda"
      Alt-F4: status error: hda: lastFailedSense
              hda: Failed opcode was: unknown
              hda: Lost interrupt
              hda: Drive not ready for command
              ide-cd: command 0x3 timed out

    Above this junk it looks like it found the partition, because it knew it was 20 GB and listed it as /dev/sda3. I think it has something to do with the CD drive; is that possible? Thanks again for the support. PS: I posted in the Apple support forums (Apple.com Support Discussions, Boot Camp, Installation and Storage) and didn't get an answer.
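    A few hedged installer boot-line variations that are sometimes tried when the IDE/ATA probe hangs (all standard anaconda/kernel options, none guaranteed to help on this particular hardware):

      linux text noprobe
      linux text acpi=off irqpoll
      linux text all-generic-ide pci=nommconf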

    Read the article

  • rhn_register through HTTP Proxy with Authentication

    - by kjloh
    Is there any limitation to the proxy authentication support of rhn_register? The proxy on the network I'm on sends the following 407:

      HTTP/1.1 407 Proxy Authentication Required ( The ISA Server requires authorization to fulfill the request. Access to the Web Proxy filter is denied. )
      Via: 1.1 VANESSA
      Proxy-Authenticate: Negotiate
      Proxy-Authenticate: Kerberos
      Proxy-Authenticate: NTLM

    It seems that rhn_register is not able to use any of the authentication schemes above. Any advice?
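    For reference, the RHN client reads its proxy settings from /etc/sysconfig/rhn/up2date; a hedged sketch with placeholder values. Note this is basic proxy auth, so a proxy that only offers Negotiate/Kerberos/NTLM may still refuse it, in which case a local translating proxy such as cntlm is a common workaround.

      enableProxy=1
      httpProxy=proxy.example.com:8080
      enableProxyAuth=1
      proxyUser=DOMAIN\myuser
      proxyPassword=secret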

    Read the article

  • NFS host is not exporting the "share"

    - by user1345260
    I have an NFS server, usanfsd01, and a remote machine, usafssd01. I tried mounting a directory from the NFS server onto the remote machine by adding the following line to /etc/fstab as root:

      usanfsdo1:/home/dblogs /home/data/dblogs nfs rw 0 0

    And when I run the following command to see if NFS is exporting the share, it's not shown:

      showmount -e usanfsdo1

    Can someone please help? Also, a point of interest would be that there is another mount that works between the same servers, defined as below in /etc/fstab:

      usanfsdo1:/home/files /home/data/files nfs rw 0 0

    /etc/exports:

      /nfs/home/dblogs 'IPADDRESS'(rw,no_root_squash)
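    A hedged checklist to run on the NFS server itself (standard nfs-utils commands; the path in /etc/exports has to match the path the client asks for in fstab, and the export list must be re-read after edits):

      exportfs -ra             # re-read /etc/exports
      exportfs -v              # list what is actually exported, with options
      showmount -e localhost   # what clients will be shown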

    Read the article

  • RHEL 5 list missing critical patches/packages

    - by Vinnie Biros
    I'm trying to figure out if there is an easy way to identify the missing critical patches/packages on my RHEL5 boxes. This is for audit purposes, and I was trying to figure out if there is an RPM command or something of the sort that would accomplish this easily. I know that with my Solaris 10 boxes I can run the "smpatch analyze" command, which displays this information for me. Does anyone know of anything similar for RHEL5? Thanks.
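    A hedged RHEL-side equivalent, assuming the box is registered to RHN and the yum-security plugin is available for RHEL 5:

      yum install yum-security
      yum --security check-update   # pending security errata only
      yum list-security             # advisory IDs and the packages they apply to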

    Read the article

  • How can I make the NetworkManager work?

    - by Yang Jy
    I am running a version of RHEL 6 on my laptop, and lately I've been trying various things with network configuration from the command line. Last night I removed NetworkManager with "yum remove NetworkManager" so that I could have more control of the network through the command line. The result is that I didn't manage to configure the wireless connection through wpa_supplicant, and I need a wireless connection during my travel to another place, so I need the wireless function back as soon as possible. I typed "yum install NetworkManager" and some version installed, but I don't get an icon on the taskbar and, of course, the network doesn't work. The package I previously removed (about 24MB) was much larger than the one I just installed (about 2MB), so I think some dependencies must be missing. How can I install all these dependencies? Please help!
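    A hedged recovery path (package names are the RHEL 6 defaults, and the transaction ID is a placeholder): the panel icon lives in a separate package, and yum can often replay the removal.

      yum history list                                    # find the transaction that erased NetworkManager and friends
      yum history undo <ID>                               # put everything from that transaction back
      yum install NetworkManager NetworkManager-gnome     # NetworkManager-gnome provides the taskbar applet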

    Read the article

  • yum update with shared cache

    - by Sammitch
    We've got a big batch of RHEL6 machines that are due for patching, and for some reason the process here does not involve a local repo. I'm new here; I've asked why ["it just didn't work"] and I don't have enough time to make one work before the window that's already scheduled. The usual method here is to install yum-downloadonly and run

      yum update --downloadonly --downloaddir=/mnt/cifs_share

    and then

      yum update /mnt/cifs_share/*.rpm

    which just does not look right to me, since not all of these machines have the same set of installed packages. The method I tried today was mounting the share at /var/cache/yum/x86_64/6Server/rhel-x86_64-server-6/packages/, which worked, but then yum automatically deleted everything once it finished. I've looked over the yum man page, but I don't see any flag I can feed it to stop it from deleting everything, nor a flag like up2date's --tmpdir=/mnt/cifs_share. Can anyone out there help me kludge this together until I can get a local repository working?
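    For what it's worth, the delete-after-install behaviour is controlled by yum.conf rather than a command-line flag; a hedged sketch (keepcache and cachedir are real [main] options, though pointing cachedir at a shared CIFS path is untested here):

      # /etc/yum.conf
      [main]
      keepcache=1
      cachedir=/mnt/cifs_share/yum-cache/$basearch/$releasever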

    Read the article

  • How can I remount an NFS volume on Red Hat Linux?

    - by user76177
    I changed the user id of a user on an NFS client that mounts a volume from another server. My goal is to get the two users to have the same id, so that both servers can read and write to the volume. I changed the id successfully on the client system, but now when I look at the NFS mount from that system, it reports the files as being owned by the old id. So it looks like I need to "refresh" that mount. I have found many instructions on how to remount, but each seems slightly different according to the type of system. Is there a simple command I can run to get the mounted volume to refresh so that it picks up the new user settings?
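    A hedged note with an illustrative mount point: the simplest "refresh" is just to unmount and remount, but NFS reports ownership by numeric uid, so files created under the old uid will keep showing it until they are chowned on the server side.

      umount /mnt/share && mount /mnt/share    # re-reads the entry in /etc/fstab
      # on the NFS server, if the old uid still owns the files:
      # chown -R newuid:newgroup /exported/path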

    Read the article

  • Linux environment variables

    - by George2
    I am using Red Hat Enterprise Linux 5. I always use the export command to set environment variables. I am wondering what other ways there are to set environment variables, and what the pros and cons of the alternatives are. Any answers or recommended reading? Thanks in advance!
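    A quick sketch of the usual mechanisms and their scope (all standard bash/RHEL conventions; FOO and the file names are illustrative):

      export FOO=bar                                   # current shell and its children, gone on logout
      FOO=bar some_command                             # that one command's environment only
      echo 'export FOO=bar' >> ~/.bash_profile         # per-user, applied at login
      echo 'export FOO=bar' > /etc/profile.d/foo.sh    # system-wide, applied at login for all users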

    Read the article

  • What could cause a file system to spontaneously unmount or become invalid for a short time?

    - by Ichorus
    We've got DB2 LUW running on a RHEL box. We had a crash of DB2, and IBM came back and said that a file DB2 was trying to access (through open64()) had unmounted or become invalid. We have done nothing but restart the database and things seem to be running fine. Also, the file in question looks perfectly normal now:

      $ cd /db/log/TEAMS/tmsinst/NODE0000/TEAMS/T0000000/
      $ ls -l
      total 557604
      -rw------- 1 tmsinst tmsinst 570425344 Jan 14 10:24 C0000000.CAT
      $ file C0000000.CAT
      C0000000.CAT: data
      $ lsattr C0000000.CAT
      ------------- C0000000.CAT
      $ ls -l
      total 557604
      -rw------- 1 tmsinst tmsinst 570425344 Jan 14 10:24 C0000000.CAT

    With those facts in hand (please correct me if I am misinterpreting the data at hand), what could cause a file system to 'spontaneously unmount or become invalid for a short time'? What should my next step be? This is on Dell hardware; we ran their diagnostic tools against it and it came back clean.
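    Hedged places to look for evidence of a transient unmount or I/O error around the crash time (standard logs; the pattern list is only a starting point):

      grep -iE 'i/o error|read-only|remount|ext3|multipath' /var/log/messages*
      dmesg | grep -iE 'error|abort|offline'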

    Read the article

  • What are the common Linux commands for SAN-related activities? How do I check if a LUN is attached to the computer?

    - by Nishant
    How do I check if a LUN has been presented to my server? What are the Linux commands for that? Do the LUNs show up in an fdisk -l listing the way a normal /dev/sda does? What other commands are associated with general SAN-related checks in Linux? What is a WWN, and what relevance does it have? If we have LUNs, what is the use of multipathing? A bit lengthy, but I am not able to get a grasp on the topic. Any help would be appreciated.
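    A hedged toolbox for those checks (availability depends on the HBA driver and on device-mapper-multipath being installed; host0 is illustrative):

      cat /proc/scsi/scsi                               # SCSI devices the kernel currently sees
      ls /sys/class/fc_host/host*/port_name             # WWPNs of the local fibre-channel HBAs
      echo "- - -" > /sys/class/scsi_host/host0/scan    # rescan one HBA for newly presented LUNs
      fdisk -l                                          # presented LUNs do appear as /dev/sd* block devices
      multipath -ll                                     # the same LUN grouped across its multiple paths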

    Read the article

  • "service"-command and environment variables

    - by varesa
    I am trying to start a service that requires an environment variable to be set to a certain path. I set this variable in /etc/profile.d/, but when I start the service using the service command it doesn't work. From man service: "service runs a System V init script in as predictable environment as possible, removing most environment variables and with current working directory set to /". So it seems that service is removing my variables. How should I set the variables up to keep them from being removed? Or is that something I should not do? I could start the service manually using the init scripts, or even hardcode the path into the script, but I'd like to know how to use it with the service command.
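    The usual Red Hat convention is to put such settings in a file under /etc/sysconfig and source it from the init script itself, so they survive the scrubbed environment; a hedged sketch with an illustrative service name and variable:

      # /etc/sysconfig/myservice
      MY_APP_HOME=/opt/myapp

      # near the top of /etc/init.d/myservice
      [ -f /etc/sysconfig/myservice ] && . /etc/sysconfig/myservice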

    Read the article

  • iptables blank after reboot

    - by theillien
    We've started encountering an issue with iptables on our RHEL 6.3 systems: after a reboot, when the service starts, the rules are not loaded. We get the empty ruleset:

      [msnyder@matt-test ~]$ sudo iptables -L
      Chain INPUT (policy ACCEPT)
      target     prot opt source               destination

      Chain FORWARD (policy ACCEPT)
      target     prot opt source               destination

      Chain OUTPUT (policy ACCEPT)
      target     prot opt source               destination

    This is in spite of the fact that we have rules defined and the service is, indeed, running. I know that because when I run service iptables start it simply drops back to the prompt, if I run service iptables restart it actually stops and then restarts the service, and if I run service iptables stop it indicates that iptables is actually stopping. Knowing that I need to restart the service, I do so and the rules load up properly. They simply don't get loaded after a reboot. Unless they get loaded differently during a reboot, I don't see how our rules could be wrong; if they were, they wouldn't even load during a service restart. Has anyone else ever encountered this? EDIT: The rules are already saved in /etc/sysconfig/iptables. They are not added on the fly from the command line, so service iptables save is unnecessary.
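    Hedged things worth verifying on a box that behaves this way (standard RHEL 6 locations; in particular, the *_SAVE_ON_STOP options can silently overwrite the saved ruleset with an empty one at shutdown):

      chkconfig --list iptables                       # is the init script on for the runlevel you boot into?
      ls -l /etc/sysconfig/iptables                   # the file the init script restores at boot
      grep -i save /etc/sysconfig/iptables-config     # IPTABLES_SAVE_ON_STOP / IPTABLES_SAVE_ON_RESTART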

    Read the article

  • What to do after a fresh Linux install in a production server?

    - by Rhyuk
    I haven't had previous experience with the 'serious' IT scene. At work I've been handed a server that will host an application and MySQL (I will install and configure everything); this will be a production server. Soon I will be installing RHEL5 on it, but I would like to know: if you get a new production server, what are the first 5 things you would do after a fresh Linux install (configuration/security/reliability wise)? EDIT: Added more information regarding the server environment and server roles:
      - The server will be inside my company's intranet/firewall.
      - The server will receive files (GBs) in binary form from another internal server. The application installed on this server is in charge of "translating" all that binary into human-readable input, and the server will get queried for this information.
      - Only 2-3 (max) users will be logging in.
      - (2) 145GB HDs in RAID1 for the OS and (2) 600GB HDs in RAID1 for data.
    I mean, I know I may not get the perfect guideline, but at least something that's better than leaving everything on default.
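    One hedged, opinionated first-pass sketch (commands assume RHEL 5 defaults and an illustrative account name; treat it as a starting checklist rather than a standard):

      yum update -y                                    # apply current errata before anything else
      useradd appadmin && passwd appadmin              # day-to-day work under a non-root account
      sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config && service sshd reload
      chkconfig --list | grep ':on'                    # review and disable services you don't need
      # plus: configure NTP, set up backups of /etc and the data volume, and document the build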

    Read the article

  • Kill program after it outputs a given line, from a shell script

    - by Paul
    Background: I am writing a test script for a piece of computational biology software. The software I am testing can take days or even weeks to run, so it has recovery functionality built in for system crashes or power failures. I am trying to figure out how to test the recovery system; specifically, I can't figure out a way to "crash" the program in a controlled manner. I was thinking of somehow timing a SIGKILL to be sent after some amount of time. This is probably not ideal, as the test case isn't guaranteed to run at the same speed every time (it runs in a shared environment), so comparing the logs to the desired output would be difficult. The software DOES print a line for each section of analysis it completes. Question: is there a good/elegant way (in a shell script) to capture output from a program and then kill the program when a given line (or number of lines) is output by the program?
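    A minimal sketch of that idea with GNU tail and grep (the program name and marker line are placeholders); the output is still captured in run.log for later comparison, and the kill fires the moment the chosen line appears:

      ./analysis_prog > run.log 2>&1 &
      prog_pid=$!
      # follow the log as it grows; stop following if the program exits on its own
      tail --pid="$prog_pid" -n +1 -f run.log | grep -q -m 1 '^Finished section 3$' && kill -KILL "$prog_pid"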

    Read the article

  • PHP potential issues with compiling 5.3.8 extensions against RHEL 6 / CentOS 6 PHP 5.3.3 package

    - by user101203
    I'm working on getting a Red Hat 6 LAMP server going, and while the PHP that comes with it has many of the extensions we use, it doesn't have all of them. To solve this, I was thinking about one of the following:
      1. compiling the PHP extensions which come in the ext folder of the downloadable source code of PHP 5.3.3 from php.net
      2. the same as #1, but using the extensions from the latest PHP version (currently 5.3.8)
      3. doing #1, but manually deciding which updates to backport from the latest version of the PHP extensions into the older version, and then compiling the backported result
    A drawback to #1 is that security and bug fixes come out which we wouldn't be able to take advantage of. A drawback to #3 is that it might be a lot of work. Does anyone know what the drawbacks to #2 are? I don't want to go down that route if it might result in some unexpected negative outcomes. Also, are there any other drawbacks to the other options, or a better way to go altogether? I want to use the PHP 5.3.3 which comes with the Linux distro because I don't want us to get to a place again where we are forced to upgrade to a new version of PHP to stay on top of security updates and there are backwards-incompatible changes, like from PHP 5.2.x to 5.3.x (this is the situation we're in now, with PHP 5.2.x no longer being supported).
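    For reference, a hedged sketch of building one bundled extension against the distro PHP (the extension name is illustrative; php-devel supplies phpize and the headers the configure step checks against):

      yum install php-devel
      cd php-5.3.3/ext/intl                    # ext/ folder from the source tarball matching the distro PHP
      phpize && ./configure && make && make install
      echo 'extension=intl.so' > /etc/php.d/intl.ini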

    Read the article

  • Cannot umount: device is busy

    - by user132199
    Situation: I am running a RHEL server via a VM on my laptop. I have a Win7 desktop sharing out a folder, and the VM on my laptop running RHEL6 has a CIFS Windows mount at /mnt/win. When I go to unmount the device I get a "device is busy" message. So I went to my laptop and checked to see if there were any users connected to the share; since it listed none, I turned off sharing. I went back to my RHEL instance and attempted another umount /mnt/win, but received the same error. Question: what are the alternatives for unmounting a shared drive?
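    Hedged ways to see what is still holding the mount point, plus the usual last resort (standard psmisc/lsof/util-linux tools; a shell simply cd'd into /mnt/win is enough to cause this):

      fuser -vm /mnt/win      # processes with open files or a working directory on the mount
      lsof +D /mnt/win        # same idea with more detail (can be slow on large trees)
      umount -l /mnt/win      # lazy unmount: detach now, clean up when the last user lets go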

    Read the article

  • Permission denied when trying to execute a binary burned to a CD-R

    - by user16654
    On an Ubuntu 9.10 (Karmic Koala) machine, I burned a CD from the command prompt using:

      cdrecord -v speed=16 dev=0,1,0 /FPS.iso

    The CD contains an executable and some files. I tested the CD by loading it onto another machine (Red Hat 5.3), and when I try to run the program I get the following message:

      bash: ./FPS1_1: Permission denied

    I can open other files, like text documents (the executable also comes with shared libraries). I realized I had burned the CD as root, so I burned another one as another user, but I still have the same problem. How can I remove this permission problem, or what is the cause? P.S. The image was in / if that helps.
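    Two hedged things to check on the machine that refuses to run it (the mount point below is illustrative): whether the CD is mounted noexec, and whether the execute bits survived mastering at all; permissions are only preserved when the image was built with Rock Ridge extensions (e.g. mkisofs -R).

      mount | grep -i cdrom                 # look for 'noexec' in the mount options
      ls -l /media/cdrom/FPS1_1             # were the execute bits preserved in the ISO?
      mount -o remount,exec /media/cdrom    # or copy the binary off the CD and chmod +x it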

    Read the article
