Search Results

Search found 41561 results on 1663 pages for 'linux command'.


  • /usr/bin/python (Python 2.4) was deleted on CentOS 5. I compiled from source but yum is still broken. How can I get everything back to the way it was?

    - by Maxwell
    I saw a lot of other questions like this but none of them answered the exact part I am having trouble with (actually installing the Python RPM). Someone on my system deleted /usr/bin/python and /usr/bin/python2.4 on my 64 bit CentOS 5.8 installation. I recompiled Python 2.4 from source, but now whenever I try to yum install anything I get the following error: [root@cerulean-OW1 ~]# yum install httpd There was a problem importing one of the Python modules required to run yum. The error leading to this problem was: No module named yum Please install a package which provides this module, or verify that the module is installed correctly. It's possible that the above module doesn't match the current version of Python, which is: 2.4 (#1, Dec 16 2012, 09:16:56) [GCC 4.1.2 20080704 (Red Hat 4.1.2-52)] If you cannot solve this problem yourself, please go to the yum faq at: http://wiki.linux.duke.edu/YumFaq I checked http://wiki.linux.duke.edu/YumFaq and it said the following: If you are getting a message that yum itself is the missing module then you probably installed it incorreclty (or installed the source rpm using make/make install). If possible, find a prebuilt rpm that will work for your system like one from Fedora or CentOS. Or, you can download the srpm and do a rpmbuild --rebuild yum*.src.rpm I tried going to http://rpm.pbone.net/index.php3/stat/4/idpl/17838875/dir/centos_5/com/python-2.4.3-46.el5.x86_64.rpm.html to install Python, which resulted in the following error: [root@cerulean-OW1 ~]# rpm -Uvh python-2.4.3-46.el5.x86_64.rpm error: Failed dependencies: python-libs-x86_64 = 2.4.3-46.el5 is needed by python-2.4.3-46.el5.x86_64 So I tried installing python-libs-x86_64, which resulted in the following: [root@cerulean-OW1 ~]# rpm -Uvh python-libs-2.4.3-46.el5_8.2.x86_64.rpm warning: python-libs-2.4.3-46.el5_8.2.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 192a7d7d Preparing... ########################################### [100%] package python-libs-2.4.3-46.el5_8.2.x86_64 is already installed file /usr/lib64/libpython2.4.so.1.0 from install of python-libs-2.4.3-46.el5_8.2.x86_64 conflicts with file from package python-libs-2.4.3-46.el5_8.2.x86_64
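
    One hedged way forward (the exact package versions, and the idea of pulling them from a CentOS 5.8 mirror or the vault, are assumptions, not verified on this box): fetch python and python-libs RPMs of the same version and install them in a single rpm transaction, so the dependency resolves and the source-built files get overwritten:

      rpm -Uvh --replacepkgs --replacefiles \
          python-2.4.3-46.el5_8.2.x86_64.rpm \
          python-libs-2.4.3-46.el5_8.2.x86_64.rpm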

    Read the article

  • Temporarily redirect STDOUT to another file descriptor, but still show it on screen

    - by Carlos Campderrós
    I'm making a script that executes some commands, and these commands show some output on STDOUT (and STDERR as well, but that's no problem). I need my script to generate a .tar.gz file on STDOUT, so the output of the other commands executed in the script also goes to STDOUT and this ends up as an invalid .tar.gz file in the output. So, in short, is it possible to send the output of the first commands to the screen (as I still want to see it) but not via STDOUT? Also, I would like to keep STDERR untouched so only error messages appear there. A simple example of what I mean. This would be my script: #!/bin/bash # the output of these commands shouldn't go to STDOUT, but still appear on screen some_cmd foo bar other_cmd baz #the following command creates a tar.gz of the "whatever" folder, #and outputs the result to STDOUT tar zc whatever/ I've tried messing with exec and the file descriptors, but I still can't get it to work: #!/bin/bash # save STDOUT to #3 exec 3>&1 # the output of these commands should go to #3 and screen, but not STDOUT some_cmd foo bar other_cmd baz # restore STDOUT exec 1>&3 # the output of this command should be the only one that goes to STDOUT tar zc whatever/ I guess I'm missing closing STDOUT after the first exec and reopening it again or something, but I can't find the right way to do it (right now the result is the same as if I hadn't added the execs).
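
    A minimal sketch of one way to do this (not tested here; some_cmd and other_cmd are the placeholders from the question, and /dev/tty only works when a real terminal is attached): park the real STDOUT on a spare descriptor and point FD 1 at the terminal while the noisy commands run, then restore it before the tar stream starts:

      #!/bin/bash
      exec 3>&1            # keep the original STDOUT on FD 3
      exec 1>/dev/tty      # or: exec 1>&2  to send the noise to STDERR instead
      some_cmd foo bar
      other_cmd baz
      exec 1>&3 3>&-       # restore STDOUT and close FD 3
      tar zc whatever/     # only this stream reaches the caller's STDOUT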

    Read the article

  • CentOS networking BNX2

    - by james moore
    Having some trouble with my NICs. The server starts fine and I can wget/ping etc. However, when I run /etc/init.d/networking restart I then receive the following error: Bringing up interface eth0: bnx2: fw sync timeout, reset code = 1030009 SIOCSIFFLAGS: Device or resource busy Consequently, the task fails. I have searched around on Google; users suggest disabling PnP in the BIOS, but I see no such option. Here is some system information: $ ethtool -i eth0 driver: bnx2 version: 2.0.8-rh firmware-version: bc 2.9.1 $ uname -a Linux host 2.6.18-238.9.1.el5 #1 SMP Tue Apr 12 18:10:13 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux $/sbin/lspci | grep Broadcom 04:00.0 PCI bridge: Broadcom EPB PCI-Express to PCI-X Bridge (rev c3) 05:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM5700 Gigabit Ethernet (rev 12) 08:00.0 PCI bridge: Broadcom EPB PCI-Express to PCI-X Bridge (rev c3) 09:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM5700 Gigabit Ethernet (rev 12) $ lsmod | grep bnx2 bnx2i 81704 0 cnic 109512 1 bnx2i libiscsi2 77765 6 be2iscsi,ib_iser,iscsi_tcp,bnx2i,cxgb3i,libiscsi_tcp scsi_transport_iscsi2 73945 8 be2iscsi,ib_iser,iscsi_tcp,bnx2i,cxgb3i,libiscsi2 bnx2 224780 0 scsi_mod 199001 15 mpt2sas,scsi_transport_sas,mptctl,be2iscsi,ib_iser,iscsi_tcp,bnx2i,cxgb3i,libiscsi2,scsi_transport_iscsi2,scsi_dh,sg,libata,megaraid_sas,sd_mod $ rmmod bnx2; modprobe bnx2 PCI: Enabling device 0000:05:00.0 (0158 -> 015a) PCI: Enabling device 0000:09:00.0 (0158 -> 015a) bnx2: fw sync timeout, reset code = 10300003 Any help would be appreciated as I am at a loss.
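
    One hedged thing to try (it assumes nothing is actively using the iSCSI offload): bnx2i and cnic sit on top of bnx2, so the driver may not reset cleanly while they are resident. Unloading the whole stack before bringing the interface back up sometimes clears the firmware sync error:

      modprobe -r bnx2i cnic bnx2   # remove the dependent modules first
      modprobe bnx2
      service network restart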

    Read the article

  • Files on ext4 on Drobo with corrupt, zeroed-out blocks

    - by Patrick
    I have a 2TB ext4 file system (Ubuntu running Linux kernel 2.6.31-22-server x86_64). This file system is the second drive on a Drobo box plugged in via USB. We've not had problems on the first drive (Drobo limits drive size to 2TB due to some OS limitations, so if you have more space than that it appears as two separate drives). I am sharing these files with Samba (smbd 3.4.0) with a mix of Windows and Linux workstations. Recently we've been experiencing some data corruption in multiple files. In many cases I have an un-corrupt original file stored on one of the workstations. These are binary files of various formats (e.g. SQLite, but others as well). I used "split" to split a corrupt and an uncorrupt file into 4096-byte chunks (this is the block size of the ext4 file system). I then ran md5sum on pairs of chunks and discovered that the chunks matched in many cases and in every case where they did not match, the corrupt chunk was a solid chunk of zeroes (620f0b67a91f7f74151bc5be745b7110 for what it's worth). I'm trying to track down a culprit but am a bit at a loss. I don't believe Samba is at fault since I'm using it without issue on the first drive exported by the Drobo. What can I do to narrow this down and find out what's going on?
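
    For reference, the chunk comparison described above can be scripted roughly as follows (a sketch only; good.db and bad.db are placeholder names for the intact and corrupt copies):

      mkdir good bad
      split -b 4096 -d -a 6 good.db good/chunk_
      split -b 4096 -d -a 6 bad.db  bad/chunk_
      for f in good/chunk_*; do
          g="bad/${f#good/}"
          cmp -s "$f" "$g" || echo "differs: ${f#good/} -> $(md5sum "$g" | cut -d' ' -f1)"
      done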

    Read the article

  • How can I find the USB wireless adapter in the dmesg log file?

    - by AndreaNobili
    I am pretty new to Linux (Raspbian on a Raspberry Pi, but I think there is no difference) and I have to install a USB wireless network adapter (the product is the TP-Link TL-WN725N, this one: http://www.tp-link.it/products/details/?model=TL-WN725N ). Now, I think that this is not automatically recognized by my system, because if I execute the ifconfig command I obtain the following output: pi@raspberrypi ~ $ ifconfig eth0 Link encap:Ethernet HWaddr b8:27:eb:2a:9f:b0 inet addr:192.168.1.8 Bcast:192.168.1.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:475 errors:0 dropped:0 overruns:0 frame:0 TX packets:424 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:34195 (33.3 KiB) TX bytes:89578 (87.4 KiB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) So right now it sees only my Ethernet network interface and not the wireless one. So I was thinking of looking into dmesg, but I don't know what to look for or how to filter it out of the dmesg output. For example, with the following command I can see the lines of the dmesg log file related to my Ethernet port: pi@raspberrypi ~ $ cat /var/log/dmesg |grep -i eth [ 3.177620] smsc95xx 1-1.1:1.0 eth0: register 'smsc95xx' at usb-bcm2708_usb-1.1, smsc95xx USB 2.0 Ethernet, b8:27:eb:2a:9f:b0 [ 18.030389] smsc95xx 1-1.1:1.0 eth0: hardware isn't capable of remote wakeup [ 19.642167] smsc95xx 1-1.1:1.0 eth0: link up, 100Mbps, full-duplex, lpa 0x45E1 But what can I search for to find the USB wireless adapter? Thanks
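
    A hedged starting point (the exact IDs the adapter reports are an assumption): even without a driver the stick should show up as a USB device, so look for the USB enumeration rather than for a network interface:

      lsusb                                      # the TL-WN725N normally shows a TP-Link or Realtek ID
      dmesg | grep -i usb                        # "new high-speed USB device" lines from when it was plugged in
      dmesg | grep -iE 'wlan|80211|rtl|firmware' # driver/firmware messages, if a driver did bind to it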

    Read the article

  • Hardware and network infrastructure for running a gaming server on VirtualGL

    - by archer
    Found a nice project, VirtualGL (http://www.virtualgl.org/). Tried to run 3D games (EVE Online, Prototype) on the server and display the output on a thin client over a 100Mbps network. Server: Gentoo Linux on an AMD Phenom II X6 at 3.4GHz, 8GB RAM, 2x NVIDIA 9800 GTX, running a single session with a display resolution of 1024x768 on the client. Performance is very promising. Going to increase network speed to 1Gbps (using either Ethernet or fiber) and run 5-6 clients simultaneously. My questions are: a) What would be better for the network - 1Gbps Ethernet or fiber (clients are distributed within at most 20m of the server)? Is a managed switch a must for better network performance? b) Should I increase the number of video cards and put them in SLI on the server (going to use a Gigabyte GA-890FXA-UD7, which has 6 PCI Express slots [2 x4, 2 x8 and 2 x16])? Will it impact performance significantly? If I need to increase the number of video cards, what would be better: 2 banks of video cards with 3 in a bank using SLI, or 3 banks with 2 in a bank? Would Linux recognize that and properly use all banks of video cards? c) Any suggestions on good thin clients supporting 1920x1080 HDMI video and a 1Gbps network? I understand that my questions can't be answered definitively (unless someone has already managed to use this kind of stuff ;)), although any suggestions would be very helpful.

    Read the article

  • lm-sensors - always returns 32 degrees (celsius) for temperature

    - by mopoke
    On my VIA EPIA motherboard (using VIA VT8231 ISA bridge), I get strange output for the lm-sensors temperature reading. It always returns 32 degrees (celsius). I have previously had correct output for temperature (my munin graphs show temperatures typically in the range of 50 to 60 degrees. I've tried uninstalling (and purging) the lm-sensors package, have re-run sensors-detect a number of times and rebooted but nothing seems to change the output. I am running Ubuntu Karmic Koala (9.10). Anyone got any bright ideas on what I might have missed? uname -a: Linux george 2.6.31-16-386 #53-Ubuntu SMP Tue Dec 8 06:39:34 UTC 2009 i686 GNU/Linux cpuinfo: processor : 0 vendor_id : CentaurHauls cpu family : 6 model : 7 model name : VIA Samuel 2 stepping : 3 cpu MHz : 399.000 cache size : 64 KB fdiv_bug : no hlt_bug : no f00f_bug : no coma_bug : no fpu : yes fpu_exception : yes cpuid level : 1 wp : yes flags : fpu de tsc msr cx8 mtrr pge mmx 3dnow up bogomips : 800.04 clflush size : 32 power management: lspci: 00:00.0 Host bridge: VIA Technologies, Inc. VT8601 [Apollo ProMedia] (rev 05) 00:01.0 PCI bridge: VIA Technologies, Inc. VT8601 [Apollo ProMedia AGP] 00:11.0 ISA bridge: VIA Technologies, Inc. VT8231 [PCI-to-ISA Bridge] (rev 10) 00:11.1 IDE interface: VIA Technologies, Inc. VT82C586A/B/VT82C686/A/B/VT823x/A/C PIPC Bus Master IDE (rev 06) 00:11.2 USB Controller: VIA Technologies, Inc. VT82xxxxx UHCI USB 1.1 Controller (rev 1e) 00:11.3 USB Controller: VIA Technologies, Inc. VT82xxxxx UHCI USB 1.1 Controller (rev 1e) 00:11.4 Bridge: VIA Technologies, Inc. VT8235 ACPI (rev 10) 00:11.5 Multimedia audio controller: VIA Technologies, Inc. VT82C686 AC97 Audio Controller (rev 40) 00:12.0 Ethernet controller: VIA Technologies, Inc. VT6102 [Rhine-II] (rev 51) 01:00.0 VGA compatible controller: Trident Microsystems CyberBlade/i1 (rev 6a) sensors: acpitz-virtual-0 Adapter: Virtual device temp1: +32.0°C (crit = +60.0°C)

    Read the article

  • CIFS mounted drive setting "stick-bit" on all files, cannot change permissions or modify files

    - by mattmcmanus
    I have a folder mounted on an Ubuntu 8.10 server through cifs whose permissions I simply cannot change once it is mounted. Here is a breakdown of what's going on: All files within the mounted folder automatically have their permissions set to -rwxrwSrwx, regardless of whether the file is created on the Windows server or on the Linux machine. I have the same directory mounted on two other Linux servers (both running 9.10 instead of 8.10) with no problems at all. They all are using the same fstab options and the same credentials. //server/folder /media/backups cifs credentials=/etc/samba/.arcadia_cred,noexec,noserverino 0 0 I've run the chmod command a million different ways, all of which report successfully changing the permissions. However, it doesn't change anything. The issue began after I updated from 8.04 to 8.10. Any idea why this may be happening on one machine? Since it started after an upgrade I'm not sure what the best thing to do is. Any help you could give would be great! None of my automated backup scripts are working because of this!
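
    As a point of comparison (a sketch, not a diagnosis of this particular box): on a cifs mount the client normally synthesizes the permission bits, so they are usually controlled from the mount options rather than with chmod. An fstab entry along these lines pins the modes explicitly; the 0644/0755 values are only illustrative:

      //server/folder /media/backups cifs credentials=/etc/samba/.arcadia_cred,noexec,noserverino,nounix,file_mode=0644,dir_mode=0755 0 0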

    Read the article

  • Opening an existing process

    - by Grasper
    I am using Eclipse in Linux through a remote connection (xrdp). My internet got disconnected, so I got disconnected from the server while eclipse was running. Now I logged in again, and I do the "top" command I can see that eclipse is running and still under my user name. Is there some way I can bring that process back into my view (I do not want to kill it because I am in the middle of checking in a large swath of code)? It doesnt show up on the bottom panel after I logged in again. Here is the "top" output: /home/mclouti% top top - 08:32:31 up 43 days, 13:06, 29 users, load average: 0.56, 0.79, 0.82 Tasks: 447 total, 1 running, 446 sleeping, 0 stopped, 0 zombie Cpu(s): 6.0%us, 0.7%sy, 0.0%ni, 92.1%id, 1.1%wa, 0.1%hi, 0.1%si, 0.0%st Mem: 3107364k total, 2975852k used, 131512k free, 35756k buffers Swap: 2031608k total, 59860k used, 1971748k free, 817816k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 13415 mclouti 15 0 964m 333m 31m S 21.2 11.0 83:12.96 eclipse 16040 mclouti 15 0 2608 1348 888 R 0.7 0.0 0:00.12 top 31395 mclouti 15 0 29072 20m 8524 S 0.7 0.7 611:08.08 Xvnc 2583 root 20 0 898m 2652 1056 S 0.3 0.1 139:26.82 automount 28990 postgres 15 0 13564 868 304 S 0.3 0.0 26:33.36 postgres 28995 postgres 16 0 13808 1248 300 S 0.3 0.0 6:54.95 postgres 31440 mclouti 15 0 3072 1592 1036 S 0.3 0.1 6:01.54 gam_server 1 root 15 0 2072 524 496 S 0.0 0.0 0:03.00 init 2 root RT -5 0 0 0 S 0.0 0.0 0:04.53 migration/0 3 root 34 19 0 0 0 S 0.0 0.0 0:00.04 ksoftirqd/0 4 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/0 5 root RT -5 0 0 0 S 0.0 0.0 0:01.72 migration/1 6 root 34 19 0 0 0 S 0.0 0.0 0:00.07 ksoftirqd/1 7 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/1 8 root RT -5 0 0 0 S 0.0 0.0 0:04.33 migration/2 9 root 34 19 0 0 0 S 0.0 0.0 0:00.05 ksoftirqd/2

    Read the article

  • Backup solution, or, how Duplicati duped me

    - by blarghmaster
    TL/DR version: Mono + Duplicati.commandline.exe restore etc. etc. spits this out for several files regardless of what I try. I am able to list sets, list files in said sets, even do a verify, but each time I do a restore of any kind, I get errors to the effect of: Failed to restore file: "snapshot/blahblah/2005-11-07.tar.gz", Error message: The partial file record for snapshot/blahblah/2005-11-07.tar.gz does not match the file Any advice here, or an idea of where to look for a better solution? FULL STORY: I've recently put together a nice, clean, friendly backup solution for several servers, predominantly Linux, but occasionally a Windows box is added too. The solution as is meets all my requirements and does it well... save one: cross-compatibility. The solution is based on a combination of several elements, but eventually comes down to using Duplicity and Duplicati for the actual storage of files. The entire solution was ready to go before I realized that Duplicati does not, in fact, allow me to restore my files to a Linux box, regardless of what the command line under Mono might tell you. It just spits out errors on random zip and image files, for apparently no good reason, as I have tried several options to get it to restore, and several versions of Mono, including installing it pretty much lib-for-lib. There is no effective log file for the reasons for these errors, and even the "--debug-output=true" flag does nothing. I am able to list sets, list files in said sets, even do a verify, but each time I do a restore of any kind, I get errors to the effect of: Failed to restore file: "snapshot/blahblah/2005-11-07.tar.gz", Error message: The partial file record for snapshot/blahblah/2005-11-07.tar.gz does not match the file Now I could most likely use the friendly instructions on Duplicati's site and script a bash equivalent of the restore, but that's not exactly ideal. Any advice on this? Or possibly an alternative solution that offers the same benefits as Duplicati/Duplicity but actually works across platforms?

    Read the article

  • Postfix + procmail - delivery fails because "can't create user output file" - on CentOS 6.2

    - by jshin47
    I verified that my postfix installation / relaying setup worked. Now I am having trouble with procmail. I have it wired to postfix with the following command: mailbox_command = /usr/bin/procmail -f -a "$USER" I have nothing in my procmail config but the following: LOGFILE=/var/procmailrc/log And I send an email to a recipient that previously worked (before I attached procmail). Now it fails with error: Apr 6 14:07:05 localhost postfix/qmgr[15194]: D0C3DFF6E1: from=<[email protected]>, size=938, nrcpt=1 (queue active) Apr 6 14:07:05 localhost postfix/local[1953]: D0C3DFF6E1: to=<[email protected]>, orig_to=<postmaster>, relay=local, delay=0.05, delays=0.02/0.01/0/0.02, dsn=5.2.0, status=bounced (can't create user output file. Command output: procmail: Couldn't create "/var/spool/mail/nobody" procmail: Couldn't read "//root" ) Apr 6 14:07:05 localhost postfix/bounce[1955]: warning: D0C3DFF6E1: undeliverable postmaster notification discarded Apr 6 14:07:05 localhost postfix/qmgr[15194]: D0C3DFF6E1: removed It seems like there is some sort of permissions issue but I do not know what the problem is, nor do I understand how I would go about diagnosing it further. The logfile that I specified is empty, by the way. How can I make procmail+postfix work?

    Read the article

  • Kernel Logging disabled?

    - by Tiffany Walker
    uname -a Linux host 2.6.32-279.9.1.el6.i686 #1 SMP Tue Sep 25 20:26:47 UTC 2012 i686 i686 i386 GNU/Linux And the startup scripts: ls /etc/init.d/ abrt-ccpp certmonger dovecot irqbalance matahari-broker mdmonitor nfs proftpd rpcbind single ypbind abrtd cgconfig functions kdump matahari-host messagebus nfslock psacct rpcgssd smartd abrt-oops cgred haldaemon killall matahari-network mysqld ntpd qpidd rpcidmapd sshd acpid cpuspeed halt ktune matahari-rpc named ntpdate quota_nld rpcsvcgssd sssd atd crond httpd lfd matahari-service netconsole oddjobd rdisc rsyslog sysstat auditd csf ip6tables lvm2-lvmetad matahari-sysconfig netfs portreserve restorecond sandbox tuned autofs cups iptables lvm2-monitor matahari-sysconfig-console network postfix rngd saslauthd udev-post But when I installed CSF/LFD I am getting nothing. LFD does not create lfd.log, nor are any blocks being logged in /var/log/messages from the firewall. This is not natural. I looked for klogd, but maybe I am looking in the wrong place to see if it is enabled? ls /etc/init.d/syslog ls: cannot access /etc/init.d/syslog: No such file or directory Also noticed no syslog? Also noticed this: csf -d 84.113.21.201 Adding 84.113.21.201 to csf.deny and iptables DROP... iptables: No chain/target/match by that name. iptables: No chain/target/match by that name. I've never seen this before and this is a dedicated box. Also: ./csftest.pl Testing ip_tables/iptable_filter...OK Testing ipt_LOG...OK Testing ipt_multiport/xt_multiport...OK Testing ipt_REJECT...OK Testing ipt_state/xt_state...OK Testing ipt_limit/xt_limit...OK Testing ipt_recent...OK Testing xt_connlimit...OK Testing ipt_owner/xt_owner...OK Testing iptable_nat/ipt_REDIRECT...OK Testing iptable_nat/ipt_DNAT...OK RESULT: csf should function on this server iptables -L Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination
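
    A hedged first check (not specific to CSF): on CentOS 6 the old sysklogd/klogd pair is replaced by rsyslog, which is why there is no /etc/init.d/syslog, and the empty iptables -L output suggests csf has not loaded its chains yet. Confirming that rsyslog is enabled and that csf can reload its rules would be a reasonable start:

      chkconfig --list rsyslog      # should show it on for runlevels 2-5
      chkconfig rsyslog on
      service rsyslog restart
      logger "syslog test"          # this line should then appear in /var/log/messages
      csf -r                        # reload csf so it recreates its iptables chains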

    Read the article

  • GRE Tunnel over IPsec with Loopback

    - by Alek
    Hello, I'm having a really hard time trying to establish a VPN connection using a GRE over IPsec tunnel. The problem is that it involves some sort of "loopback" connection which I don't understand -- let alone be able to configure --, and the only help I could find is related to configuring Cisco routers. My network is composed of a router and a single host running Debian Linux. My task is to create a GRE tunnel over an IPsec infrastructure, which is particularly intended to route multicast traffic between my network, which I am allowed to configure, and a remote network, for which I only have a form containing some setup information (IP addresses and phase information for IPsec). For now it suffices to establish a communication between this single host and the remote network, but in the future it will be desirable for the traffic to be routed to other machines on my network. As I said this GRE tunnel involves a "loopback" connection which I have no idea of how to configure. From my previous understanding, a loopback connection is simply a local pseudo-device used mostly for testing purposes, but in this context it might be something more specific that I do not have the knowledge of. I have managed to properly establish the IPsec communication using racoon and ipsec-tools, and I believe I'm familiar with the creation of tunnels and addition of addresses to interfaces using ip, so the focus is on the GRE step. The worst part is that the remote peers do not respond to ping requests and the debugging of the general setup is very difficult due to the encrypted nature of the traffic. There are two pairs of IP addresses involved: one pair for the GRE tunnel peer-to-peer connection and one pair for the "loopback" part. There is also an IP range involved, which is supposed to be the final IP addresses for the hosts inside the VPN. My question is: how (or if) can this setup be done? Do I need some special software or another daemon, or does the Linux kernel handle every aspect of the GRE/IPsec tunneling? Please inform me if any extra information could be useful. Any help is greatly appreciated.
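
    For the GRE step itself the kernel handles everything natively (the ip_gre module), so no daemon beyond racoon should be needed. A minimal sketch with iproute2 follows; all addresses are placeholders, and the assumption that the "loopback" pair from the form becomes the tunnel's local/remote endpoints depends entirely on the remote side's configuration:

      modprobe ip_gre
      ip tunnel add gre1 mode gre local 192.0.2.1 remote 198.51.100.1 ttl 255
      ip addr add 10.10.10.1/30 dev gre1      # the peer-to-peer addresses for the tunnel
      ip link set gre1 up
      ip route add 10.20.0.0/16 dev gre1      # the remote VPN range, if it should be routed over the tunnel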

    Read the article

  • How do I create a "here document" within a shell function?

    - by BenU
    I'm working my way through William Shotts Jr.'s great The Linux Command Line on my Mac OS X 10.7.5 system. 90% of the Linux that Shotts covers is close enough to Darwin that I can figure out, or GTEM to figure out, what's going on. I've made it to chapter 27 on "Writing Shell Scripts" and am getting hung up creating "here files" within a function. I get a syntax error: unexpected end of file error when I include the following function: report_uptime () { cat <<- _EOF_ <H2>System Uptime</H2> <PRE>$(uptime)</PRE> _EOF_ return } The error goes away if I use the following function placeholder: report_uptime () { return } Also, elsewhere in the script, outside of a function, I use the cat << _EOF_ format to create a "here file" with no trouble: cat << _EOF_ <HTML> <HEAD> <TITLE>$TITLE</TITLE> </HEAD> <BODY> <H1>$TITLE</H1> <P>$TIME_STAMP</P> $(report_uptime) $(report_disk_space) $(report_home_space) </BODY> </HTML> _EOF_ If anyone has any idea what I'm doing wrong I would be grateful!
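
    A likely culprit (a guess, since the original indentation is not visible here): with the <<- form the terminator is only recognized when its leading whitespace consists of literal tab characters, so if the function body is indented with spaces, _EOF_ never matches and the shell runs off the end of the file, producing exactly this error. Either indent the here document with real tabs, or drop the dash and left-align the terminator. A sketch of the second option (shown flush left because the whitespace is significant):

report_uptime () {
    cat << _EOF_
<H2>System Uptime</H2>
<PRE>$(uptime)</PRE>
_EOF_
    return
}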

    Read the article

  • Why is SSH finding remote keys for other accounts?

    - by Brian Pontarelli
    This is a strange issue I'm having with SSH from my MacBook Pro to a Linux (Ubuntu 11.10) server. I have a DSA key set up on the remote Linux server under my home directory like this: /home/me/.ssh/authorized_keys I also have the same DSA key set up for a few other accounts on the machine named "foo" and "bar". I can log into all of the accounts fine without any password. Therefore, the DSA keys are all set up correctly. The strange behavior I'm seeing is when debugging the SSH connection. During the connection, the SSH debug is outputting this: debug2: key: /Users/me/.ssh/id_dsa (0x7f91a1424220) debug2: key: /home/foo/.ssh/id_dsa (0x7f91a1425620) debug2: key: /home/bar/.ssh/id_rsa (0x7f91a1425c60) debug2: key: /Users/me/.ssh/id_rsa (0x0) This is strange for so many reasons, but essentially, why is SSH listing out keys on the server (/home/foo/.ssh/id_dsa and /home/bar/.ssh/id_rsa)? These files don't even exist on the server, so why are they listed? I'm not logging into the "foo" or "bar" accounts, so why is SSH even listing those? On my MacBook Pro, I only have a DSA key, but SSH is listing an RSA key; what's that all about? Another user on the server doesn't get any of these messages when they log in, and they have the exact same setup for their DSA key and the exact same MacBook Pro setup as mine. Does anyone know what these messages are and why SSH is outputting them?
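
    A hedged reading of that output: the debug2 "key:" lines are the identities the local ssh client will offer, and the ones with a non-zero pointer come from the local ssh-agent; the /home/foo and /home/bar paths are just the comments stored when those keys were added to the agent (for example via agent forwarding while logged in to those accounts), not files on either machine. A quick way to check and clean this up (me@server is a placeholder):

      ssh-add -l                                           # list what the agent is currently holding
      ssh-add -D                                           # drop all agent identities, if the extra ones are unwanted
      ssh -o IdentitiesOnly=yes -i ~/.ssh/id_dsa me@server # offer only the intended key on this connection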

    Read the article

  • guest crash on long backup via rsync

    - by ToreTrygg
    I recently upgraded the host to Ubuntu 9.10 with VMware Server 2.0.2; I had two guest machines. One is an SME server; I had several crashes during backup sessions with rsync to another PC. Normal activities run regularly. The other guest has been up without problems for 25 days. I found in the log a lot of rows like these: Dec 20 05:29:27.445: vcpu-1| VLANCE: Ethernet0 skipped 2560 time(s) Dec 20 05:29:27.445: vcpu-1| VLANCE: 66 12 5 8 2 3 3 0 1 0 0 1 0 1 2 0 Dec 20 05:29:27.445: vcpu-1| VLANCE: 0 0 1 0 1 0 0 0 1 0 0 0 0 1 0 2452 Dec 20 05:29:27.651: vmx| ide0:0: Command WRITE(10) took 1.947 seconds (ok) Dec 20 05:29:37.945: vmx| ide0:0: Command WRITE(10) took 1.033 seconds (ok) When the virtual machine crashes the log reports the following (I paste here only some parts to limit the length of the message): Dec 27 01:48:05.686: Worker#2| Caught signal 6 -- tid 700 Dec 27 01:48:05.686: Worker#2| SIGNAL: eip 0x460422 esp 0xb124c024 ebp 0xb124c03 Dec 27 01:48:05.712: Worker#2| SymBacktrace12 00000000 eip 0x39d7ee in function clone in object /lib/tls/i686/cmov/libc.so.6 loaded at 0x2d1000 Dec 27 01:48:05.719: Worker#2| Unexpected signal: 6. Dec 27 01:48:05.720: Worker#2| Core dump limit is 0 KB. Dec 27 01:48:05.762: Worker#2| Child process 10455 failed to dump core (status 0 x6). Dec 27 01:48:05.762: Worker#2|SymBacktrace13 00000000 eip 0x39d7ee in function clone in object /lib/tls/i686/cmov/libc.so.6 loaded at 0x2d1000 Dec 27 01:48:05.779: Worker#2|Msg_Post: Error Dec 27 01:48:05.780: Worker#2|http://msg.log.error.unrecoverable VMware Server unrecoverable error: (Worker#2) Dec 27 01:48:05.780: Worker#2|Unexpected signal: 6. I have no idea how to solve the problem with this installation; I am thinking of downgrading the host to a version more compatible with VMware Server 2. I read a lot of posts about installation difficulties, and I think the compilation problems during install could be related to my problem. Excuse me if the post isn't very clear, it's my first post here. Any help or suggestion will be appreciated. Thanks

    Read the article

  • Authenticate VNC session with ConsoleKit?

    - by lori
    I have a linux machine running Fedora 16 in a cupboard. It has no screen or keyboard. I connect to it using a combination of vnc and ssh. Recently, after an update, I have had issues with authentication on the machine. If I vnc to it, the kde desktop pops up an error dialog every few minutes saying Authorization failed. Failed to obtain authentication. If I plug in a USB drive it fails to mount, Dolphin reports an authentication issue again. I have had limited success finding the solution. AFAICT, it is an issue with ConsoleKit deeming me to be a non-local user so it prevents authentication. This is the output from ck-list-sessions: $ ck-list-sessions Session5: unix-user = '1000' realname = 'steve' seat = 'Seat6' session-type = '' active = FALSE x11-display = ':1' x11-display-device = '' display-device = '' remote-host-name = '' is-local = FALSE on-since = '2012-09-16T08:07:03.137011Z' login-session-id = '1' I have tried to update my .vnc/xstartup script to include ck-launch-session as follows: $ cat ~/.vnc/xstartup #!/bin/sh exec ck-launch-session vncconfig -iconic & unset SESSION_MANAGER unset DBUS_SESSION_BUS_ADDRESS export XKL_XMODMAP_DISABLE=1 OS=`uname -s` if [ $OS = 'Linux' ]; then case "$WINDOWMANAGER" in *gnome*) if [ -e /etc/SuSE-release ]; then PATH=$PATH:/opt/gnome/bin export PATH fi ;; esac fi if [ -x /etc/X11/xinit/xinitrc ]; then exec ck-launch-session /etc/X11/xinit/xinitrc fi if [ -f /etc/X11/xinit/xinitrc ]; then exec ck-launch-session sh /etc/X11/xinit/xinitrc fi [ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources exec ck-launch-session xsetroot -solid grey exec ck-launch-session xterm -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" & exec ck-launch-session twm & This has not helped. How can I either authenticate myself to ConsoleKit, or trick it into believing I am a local user?

    Read the article

  • Wget save cookies not working

    - by TrymBeast
    I've been trying to login in the pyload through the web api, but wget is not saving the cookies and I don't understand why. I'm using the following command: wget --delete-after --keep-session-cookies --save-cookies=my_cookies.txt --post-data="username=USERNAME&password=PASSWORD" http://localhost:8000/api/login But the content of my_cookies.txt is: # HTTP cookie file. # Generated by Wget on 2012-06-23 22:31:33. # Edit at your own risk. When I run the same command but in debug mode I get the following output that includes the set cookie in the header response: DEBUG output created by Wget 1.10.2 (Red Hat modified) on linux-gnueabi. --22:31:11-- http://localhost:8000/api/login Resolving localhost... 127.0.0.1 Caching localhost => 127.0.0.1 Connecting to localhost|127.0.0.1|:8000... connected. Created socket 3. Releasing 0x000504d0 (new refcount 1). ---request begin--- POST /api/login HTTP/1.0 User-Agent: Wget/1.10.2 (Red Hat modified) Accept: */* Host: localhost:8000 Connection: Keep-Alive Content-Type: application/x-www-form-urlencoded Content-Length: 32 ---request end--- [POST data: username=USERNAME&password=PASSWORD] HTTP request sent, awaiting response... ---response begin--- HTTP/1.1 200 OK Content-Length: 34 Content-Type: application/json Cache-Control: no-cache, must-revalidate Set-cookie: beaker.session.id=405390ddc809efed54820638c95d7997; expires=Tue, 19-Jan-2038 04:14:07 GMT; Path=/ Connection: Keep-Alive Date: Sat, 23 Jun 2012 21:31:11 GMT Server: CherryPy/3.1.2 WSGI Server ---response end--- 200 OK hs->local_file is: login (not existing) Registered socket 3 for persistent reuse. TEXTHTML is on. Length: 34 [application/json] Saving to: `login' 100%[=======================================>] 34 --.-K/s in 0s 22:31:11 (1.28 MB/s) - `login' saved [34/34] Removing file due to --delete-after in main(): Removing login. Saving cookies to my_cookies.txt. Done saving cookies. Can anyone tell me what am I doing wrong? Thanks in advance!

    Read the article

  • VirtualBox communication from Linux to/from Windows 7

    - by J. Otto Tennant
    VirtualBox is running in Windows 7 as the host. VirtualBox has the two modifications (one is called Guest Additions; don't remember the other). The Virtual machine has "bridged" networking selected. I have SAMBA set up (now, the problem may be here; it has been three or four years since I last did this) on the Linux guest machine. Neither guest nor host sees the other. From the Windows 7 command prompt, the IP address of the Linux guest pings. The IP address of another computer (a separate Windows 7 on the wireless network) pings from the Linux guest. (I have no idea what IP address the Windows 7 host itself has. The output of "netstat" does not seem to be useful.) So, it seem to me that something should be working. The only workgroup on the LAN is inventively named WORKGROUP. SMB4K should be seeing something. There must be a simple setup step that I am missing. (FWIW, there are two processes running smbd, and no process is running nmbd. YaST says that nmbd is set to run. I am not sure what this means.)

    Read the article

  • ssh connection operation timed out using rsync

    - by Mark Molina
    I use rsync to back up my remote server to my local device, but when I combine it with a cron job my ssh times out. Just to be clear, the data is stored on a remote server and I want it stored on my local server. The backup request must be sent from my local server to the remote server. The command for backing up the data works when I just type it in the terminal like this: rsync -chavzP --stats USERNAME@IPADDRES: PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP but when I combine it with a cron job like this: 10 11 * * * rsync -chavzP --stats USERNAME@IP_ADDRESS: PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP the ssh connection times out. When the cron job executes it sends a mail to the root user with output like this: From local.xx.xx.xx Tue Jul 2 11:20:17 2013 X-Original-To: username Delivered-To: [email protected] From: [email protected] (Cron Daemon) To: [email protected] Subject: Cron <username@server> rsync -chavzP --stats USERNAME@IPADDRES: PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP X-Cron-Env: <SHELL=/bin/sh> X-Cron-Env: <PATH=/usr/bin:/bin> X-Cron-Env: <LOGNAME=username> X-Cron-Env: <USER=username> X-Cron-Env: <HOME=/Users/username> Date: Tue, 2 Jul 2013 11:20:17 +0200 (CEST) ssh: connect to host IP_ADDRESS port XX: Operation timed out rsync: connection unexpectedly closed (0 bytes received so far) [receiver] rsync error: unexplained error (code 255) at /SourceCache/rsync/rsync-42/rsync/io.c(452) [receiver=2.6.9] So the rsync command works when typed in the terminal but not when used by a cron job. Can anybody explain this?
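
    One hedged way to narrow this down (paths, key name and log file are placeholders): have the cron entry use an explicit key and capture ssh's verbose output so the failing run leaves a trace. A timeout at the TCP connect stage usually points at the network or a firewall at that moment rather than at authentication:

      10 11 * * * rsync -chavzP --stats -e "ssh -vv -i /Users/username/.ssh/id_rsa" USERNAME@IP_ADDRESS:PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP >> /Users/username/rsync-cron.log 2>&1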

    Read the article

  • Free tiered storage automation in linux?

    - by NginUS
    I have a couple virtualized fileservers running in QEMU/KVM on ProxmoxVE. The physical host has 4 storage tiers with significant performance variances. They're attached both locally and via NFS. These will be provided to the fileserver(s) as local disks, abstracted into pools, and handling multiple streams of data for the network. My aim is for this abstraction layer to intelligently pool the tiers. There's a similar post on the site here: Home-brew automatic tiered storage solutions with Linux? (Memory - SSD - HDD - remote storage) in which the accepted answer was a suggestion to abandon a linux solution for NexentaStor. I like the idea of running NexentaStor. It almost fits the bill. NexentaStor provides Hybrid Storage Pools, and I love the idea of checksumming. 16TB without incurring licensing fees is a huge plus as well. After the expense of the hardware, free is about all my budget can handle. I don't know if zfs pools are adaptive or dynamically allocated based on load, but it becomes irrelevant since NexentaStor doesn't support virtio network or block drivers, which is a must in my environment. Then I saw a commercial solution called SmartMove: http://www.enigmadata.com/smartmove.html And it looks like a step in the right direction, but I'm so broke I'd be wasting their time to even ask for a quote, so I'm looking for another option. I'm after a linux implementation that supports virtio drivers, and I'm at a loss as to which software is up to it.

    Read the article

  • How to manage process-to-CPU core affinities?

    - by Philippe
    I use a distributed user-space filesystem (GlusterFS) and I would like to be sure GlusterFS processes will always have the computing power they need. Each execution node of my grid has 2 CPUs, with 4 cores per CPU and 2 threads per core (16 "processors" are seen by Linux). My goal is to guarantee that GlusterFS processes have enough processing power to be reliable, responsive and fast. (There is no marketing here, just the dreams of a sysadmin ;-) I consider two main points: the GlusterFS processes, and the I/O for data access (on local disks, or remote disks). I thought about binding the Linux kernel and the GlusterFS instances to a specific "processor". I would like to be sure that no grid job will impact the kernel and the GlusterFS instances, and that researchers' jobs won't be affected by system processes (I'd like to reserve a pool of cores for job execution and be sure that no system process will use these CPUs). But what about I/O? As we handle a huge amount of data (several terabytes), we'll have a lot of interruptions. How can I distribute these operations over my processors? What are the "best practices"? Thanks for your comments!
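
    A sketch of the process side of this (core numbers are illustrative; check /proc/cpuinfo to see which logical CPUs share a physical core, and note that cgroups/cpusets are the more complete tool than taskset):

      # pin the GlusterFS brick daemons to a reserved set of cores
      for pid in $(pgrep -f glusterfsd); do
          taskset -cp 0-3 "$pid"
      done
      # launch research jobs on the remaining cores
      taskset -c 4-15 ./my_job
      # interrupt (IRQ) affinity is configured separately, per interrupt
      echo 0f > /proc/irq/24/smp_affinity    # IRQ number and CPU mask are placeholders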

    Read the article

  • How to move Mdadm RAID drive (EBS based) to different AWS Instance

    - by Stanley
    We have a media-rich web application that is hosted on AWS. We have several Web Servers and we have an NFS server. On the NFS server (Linux server) we have several EBS volumes that are mounted and we've used mdadm to implement the different mounted volumes as a single RAID volume. The Web Servers simply access the NFS storage through a mount point. Amazon has now let us know that they will be performing power maintenance on this server in a couple of days time. Since all our media is on here it would render our site unusable for the hours while Amazon is working on it. We want to try and prevent this downtime. I was thinking that we can prevent server downtime by perhaps setting up a new server temporarily and attaching the EBS drives (raid volume) to that server and have our web servers point there during maintenance. This is a very high risk operation since this involves several terabytes of our production data. What would be the safe way to move over our logical raid drive (md0) to a new amazon instance? I was hoping that I could start with building the new server, mounting the ebs volumes and assembling the RAID partition using mdadm --assemble --scan before unmounting from the existing instance so that I can first test that everything works and thus having it mounted on two instances at the same time, but I don't believe that is possible with the way that filesystems work. How do I move a Linux software RAID to a new machine? suggests a way to move drives, but isn't really a cloud-based question. Perhaps there are simpler ways to prevent system downtime with our solution being hosted on the cloud? I have considered taking an EBS snapshot, but that tries to replicate all the many terabytes of mounted storage, so this is not a practical solution. Any ideas?
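
    For what it's worth, a hedged sketch of the move itself (device names and mount points are placeholders, and the array must only ever be assembled on one instance at a time):

      # on the current NFS server, after stopping NFS and anything using the array
      umount /path/to/md0/mountpoint
      mdadm --stop /dev/md0
      mdadm --detail --scan        # note the ARRAY line / UUID for reference
      # detach the EBS volumes (console or ec2-detach-volume), attach them to the
      # new instance, then on the new instance:
      mdadm --assemble --scan      # or: mdadm --assemble /dev/md0 /dev/xvdf /dev/xvdg ...
      mount /dev/md0 /path/to/md0/mountpoint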

    Read the article

  • How can I recover my system after running 'mkfs' on the system partition?

    - by Filip Podgórny
    I am not a Linux user, and while doing some homework I blindly typed sudo mkfs ext3 dev/sda2 (I had Ubuntu installed from within Windows). I did a few more things, then shut Ubuntu down to switch back to Windows. "No operating system installed" - this is the message I'm getting. I plugged my HDD into another computer and all my files are still there. What should I do to get my Windows installation back? df -l (before mkfs) /dev/loop0 29G 2,0G 27G 8% / udev 3,0G 4,0K 3,0G 1% /dev tmpfs 1,2G 900K 1,2G 1% /run none 5,0M 0 5,0M 0% /run/lock none 3,0G 1,3M 3,0G 1% /run/shm /dev/sda3 455G 123G 333G 27% /host /dev/sdb1 1,9G 820M 1,1G 43% /media/PHONE CARD mkfs output (originally in Polish, translated): mke2fs 1.41.14 (22-Dec-2010) Filesystem label= OS type: Linux Block size=1024 (log=0) Fragment size=1024 (log=0) Stride=0 blocks, Stripe width=0 blocks 25688 inodes, 102400 blocks 5120 blocks (5.00%) reserved for the super user First data block=1 Maximum filesystem blocks=67371008 13 block groups 8192 blocks per group, 8192 fragments per group 1976 inodes per group Superblock backups stored on blocks: 8193, 24577, 40961, 57345, 73729 Writing inode tables: done Creating journal (4096 blocks): done Writing superblocks and filesystem accounting information: done This filesystem will be automatically checked every 30 mounts or 180 days, whichever comes first. Use tune2fs -c or -i to override.

    Read the article

  • How to debug a kernel created using ubuntu-vm-builder?

    - by user265592
    Aim: Trying to perform a code walkthrough of what functions are getting called for sending and receiving packets over the network. I am building a kernel and using gdb for debugging/tracing purposes. I have built a VM using the following command: time sudo ubuntu-vm-builder qemu precise --arch 'amd64' --mem '1024' --rootsize '4096' --swapsize '1024' --kernel-flavour 'generic' --hostname 'ubuntu' --components 'main' --name 'Bob' --user 'ubuntu' --pass 'ubuntu' --bridge 'br0' --libvirt 'qemu:///system' And I can run the VM successfully in qemu using the following command: qemu-system-x86_64 -smp 1 -drive file=tmpGgEOzK.qcow2 "$@" -net nic -net user -serial stdio -redir tcp:2222::22 Now, I want to debug the kernel using gdb. For this I need an executable with debug symbols (vmlinux), which apparently I don't have, as the vm-builder never asked for any such options and simply created a .qcow2 file. Question 1: Am I taking the correct approach to solve the problem, and is there an easier way to do it? Question 2: Is there a way to debug this kernel using GDB? P.S: I don't have hardware support for KVM. Please correct me if I am wrong. Thanks.
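
    A hedged sketch of one workable setup: qemu exposes a gdb stub, so the guest kernel can be debugged from the host without rebuilding the image. The missing piece is a vmlinux with debug symbols matching the guest's kernel (for Ubuntu precise this would typically be the corresponding -dbgsym package from ddebs.ubuntu.com; the exact path below is a placeholder):

      qemu-system-x86_64 -smp 1 -drive file=tmpGgEOzK.qcow2 -s -S \
          -net nic -net user -serial stdio -redir tcp:2222::22
      # -s listens for gdb on tcp/1234, -S halts the CPU until gdb continues
      gdb /usr/lib/debug/boot/vmlinux-3.2.0-XX-generic
      (gdb) target remote :1234
      (gdb) break dev_queue_xmit      # e.g. stop where the kernel transmits a packet
      (gdb) continue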

    Read the article
