Search Results

Search found 26640 results on 1066 pages for 'a troubled linux newbie'.

Page 387 of 1066

  • strange behaviour - dhclient needs to be run twice in order to connect to wireless

    - by splicer
    I am trying to connect to my wlan without the use of NetworkManager. I run the following commands after boot:

        iwconfig wlan0 enc <WEP passwd> mode managed essid <name> channel 6
        ifconfig wlan0 up
        dhclient wlan0

    At this point, dhclient stalls for ages (perhaps 2 minutes), then returns with:

        PING 192.168.1.254 (192.168.1.254) from 192.168.1.65 wlan0: 56(84) bytes of data.
        --- 192.168.1.254 ping statistics ---
        3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 3000ms
        pipe 3

    The strange thing is that when I run pkill dhclient; dhclient wlan0 right after this, it connects in under 3 seconds. Any idea what could be the cause of this problem? Edit: I did try using the -timeout flag on dhclient, but that didn't seem to make any difference (it still stalled for ages).
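
    A possible workaround, sketched from the behaviour described above (the second dhclient run always succeeds): give the card a moment to associate, bound the first attempt, and fall back to one retry. The sleep and timeout values are illustrative assumptions.

        iwconfig wlan0 enc <WEP passwd> mode managed essid <name> channel 6
        ifconfig wlan0 up
        sleep 2                                    # let the card finish associating before DHCP starts
        timeout 15 dhclient -1 wlan0 || { pkill dhclient; dhclient wlan0; }   # -1: try once, then retry manually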

    Read the article

  • Hybrid gmail MX + postfix for local accounts

    - by krunk
    Here's the setup: we have a domain, mydomain.com. Everything is on our own server, except general email accounts, which are through Gmail. Currently Gmail is set as the MX record. The server also has various email aliases it needs to support for bug trackers and such, e.g.:

        [email protected] |/path/to/issuetracker.script

    I'm struggling with a setup that allows the following, both locally and from users' email clients:

        guser1 - has a Gmail account and a local account
        guser2 - only has a Gmail account
        bugs   - has a pipe alias in /etc/aliases for the issue tracker

    Scenarios:

        mail to [email protected] from the local host (crons and such) needs to go to the Gmail account
        mail to [email protected] from the local host
        mail to [email protected] needs to be piped to the local issue tracker script

    The first stab was creating a transport map. In this scenario, our server would be set as the MX, and guser*-destined emails are sent on to Gmail. Put the Gmail users in a map like so:

        [email protected] smtp:gmailsmtp:25
        [email protected] smtp:gmailsmtp:25

    Problems:

        Ignores extensions such as [email protected]
        Only works if append_at_myorigin = no (if set to yes, Gmail refuses to connect with: E4C7E3E09BA3: to=, relay=none, delay=0.05, delays=0.02/0.01/0.02/0, dsn=4.4.1, status=deferred (connect to gmail-smtp-in.l.google.com[209.85.222.57]:25: Connection refused))
        Since append_at_myorigin is set to no, all received emails show (unknown sender)

    The second stab was to set explicit localhost aliases in /etc/aliases and do a domain-wide forward on mydomain. This too requires setting the local server as the MX:

        root: root@localhost

        # transport
        mydomain.com smtp:gmailsmtp:25

    Problems: if I create a transport map for a domain that matches "$myhostname", the aliases file is never parsed. So when a local user (or daemon) sends an email like mail -s "testing" root < text.txt, Postfix ignores the /etc/aliases entry, maps it to [email protected], and attempts to send it via the Gmail transport mapping.

    Third stab: create a subdomain for the bugs, something like bugs.mydomain.com. Set the MX for this domain to the local server and leave the MX for mydomain.com pointing at the Gmail server. Problems: this does not solve the issue with local accounts. So when the bug tracker responds to an email from [email protected], it uses a local transport and the user never receives the email.
        % postconf -n
        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        append_at_myorigin = no
        append_dot_mydomain = no
        biff = no
        config_directory = /etc/postfix
        inet_interfaces = all
        mailbox_command = procmail -a "$EXTENSION"
        mailbox_size_limit = 0
        mydestination = $myhostname, localhost.$myhostname, localhost
        myhostname = mydomain.com
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        myorigin = /etc/mailname
        readme_directory = no
        recipient_delimiter = +
        relayhost =
        smtp_tls_cert_file = /etc/ssl/certs/kspace.pem
        smtp_tls_enforce_peername = no
        smtp_tls_key_file = /etc/ssl/certs/kspace.pem
        smtp_tls_note_starttls_offer = yes
        smtp_tls_scert_verifydepth = 5
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtp_use_tls = yes
        smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
        smtpd_recipient_restrictions = permit_mynetworks, reject_invalid_hostname, reject_non_fqdn_sender, reject_non_fqdn_recipient, reject_unknown_sender_domain, reject_unknown_recipient_domain, reject_unauth_destination
        smtpd_tls_ask_ccert = yes
        smtpd_tls_req_ccert = no
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        tls_random_source = dev:/dev/urandom
        transport_maps = hash:/etc/postfix/transport
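
    A minimal sketch of the per-user transport approach described above; the Gmail MX host and addresses are illustrative assumptions, and the map must be rebuilt with postmap after editing:

        # /etc/postfix/transport -- route Gmail-backed users out, keep the tracker local
        guser1@mydomain.com    smtp:gmail-smtp-in.l.google.com:25
        guser2@mydomain.com    smtp:gmail-smtp-in.l.google.com:25
        bugs@mydomain.com      local:

        # rebuild the hash map and reload Postfix
        postmap /etc/postfix/transport
        postfix reload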

    Read the article

  • RHEL5 php5-curl install fails

    - by The Rook
    PHP's curl bindings are nowhere to be found in yum. By looking in yum.repos.d I can see that rpmforge is being used. Build from source? phpize isn't installed, and it isn't in yum either. What do I do? How do I repair the repo? This is an RHEL5 machine, i686.
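
    A hedged starting point, assuming the stock RHEL channels are reachable (package names on RHEL5 may differ, and on RHEL the curl bindings usually ship inside the base php packages rather than as a separate php5-curl):

        yum clean all
        yum list available 'php*'     # see what the enabled repos actually offer
        php -m | grep -i curl         # check whether the binding is already compiled in
        yum install php-devel         # provides phpize, needed to build a curl extension from source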

    Read the article

  • needs updated glibc package (GLIBCXX 3.4.15 or later) for RHEL6

    - by Tejas
    I want to upgrade my currently running applications to the latest version, but due to a package issue I am unable to install them. I get a common error:

        /usr/lib64/libstdc++.so.6: version 'GLIBCXX_3.4.15' not found

    When I tried to update the glibc package I got the following output:

        [root@agastya ~]# yum install glibc
        Loaded plugins: refresh-packagekit, rhnplugin
        epel/metalink                     | 3.8 kB     00:00
        epel                              | 4.3 kB     00:00
        epel/primary_db                   | 5.0 MB     01:33
        epel-testing/metalink             | 3.8 kB     00:00
        epel-testing                      | 4.3 kB     00:00
        epel-testing/primary_db           | 295 kB     00:03
        rhel-x86_64-server-6              | 1.8 kB     00:00
        rhel-x86_64-server-6/primary      |  11 MB     02:02
        rhel-x86_64-server-6                           8816/8816
        Setting up Install Process
        Package glibc-2.12-1.80.el6_3.6.x86_64 already installed and latest version
        Nothing to do
        [root@agastya ~]#

    Should I add some more repositories? If yes, how?
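
    One hedged observation worth checking first: GLIBCXX_* version symbols come from libstdc++ (part of gcc), not from glibc, which is why updating glibc reports nothing to do. A quick check:

        strings /usr/lib64/libstdc++.so.6 | grep ^GLIBCXX   # list the symbol versions actually present
        yum list available libstdc++                        # GLIBCXX_3.4.15 corresponds to the gcc 4.6+
                                                            # runtime, newer than stock RHEL 6.3 ships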

    Read the article

  • Wpa supplicant suddenly stopped working

    - by Grzenio
    Hi, recently my wireless stopped working on my Debian testing system. It just doesn't connect. The best I get (only after a reboot) is that it says it connected but failed to get an IP address. But usually it just tries to connect, disconnects straight away, connects again, and so on, so it never manages to associate correctly. I am sure it worked about a month ago; it stopped working after recent upgrades from the repository. Any ideas how to find the issue and fix it?
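
    A hypothetical first step for narrowing this down: run wpa_supplicant in the foreground with debugging while watching the system log (interface name and config path are assumptions):

        wpa_supplicant -i wlan0 -c /etc/wpa_supplicant/wpa_supplicant.conf -d   # verbose association log
        tail -f /var/log/syslog | grep -i -e wlan -e wpa                        # driver side, in another terminal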

    Read the article

  • Cut (smart edit) .mts (AVCHD Progressive) files in Ubuntu Lucid

    - by pts
    I have a bunch of .mts files containing AVCHD Progressive video recorded by a Panasonic camera, and I need software on Ubuntu Lucid with which I can remove the boring parts and concatenate the interesting parts, all without re-encoding the video stream. It's OK for me to cut at keyframe boundaries. If Avidemux were able to open the files, it would take about 60 hours of work for me to cut them. (At least that's what it took last time I tried with similar videos, in a file format supported by Avidemux.) So I need a fast, powerful and stable video editor, because I don't want those 60 hours of work to grow to 240 or even 480 hours just because the tool is too slow or unstable or has a terrible UI.

    I've tried Avidemux 2.5.5 and 2.5.6, but they crash trying to open such a file, even if I convert the file to .avi first using mencoder -oac copy -ovc copy. mplayer can play the files. I've tried Avidemux 2.6.0, which can open the file, but it cannot jump to the previous or next keyframe reliably (if I make it jump to the next keyframe and then to the previous keyframe, it doesn't end up at the original keyframe, and sometimes displays an error). Also, I'm not sure if Avidemux 2.6.x would let me save the result without re-encoding. I've tried Kdenlive 0.7.7.1, but playback is very choppy and it cannot play audio at all (complaining that SDL cannot find the device, although many other programs on the system can play audio). It would be a pain to work with. I've tried converting the .mts file to .mkv using ffmpeg -i input.mts -vcodec copy -sameq -acodec copy -f matroska output.mkv, but that caused too many visible distortions in the video in both mplayer and Avidemux. I've tried converting the .mts file with TsRemux.exe, but Avidemux 2.5.x still can't open that file.

    Is there another program to cut and concatenate the files? Is there a preprocessor which would create a file (without re-encoding the video) on which Avidemux wouldn't crash?
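
    One hedged detail about the remux attempt above: -sameq only affects re-encoding, so it does nothing alongside stream copy and can simply be dropped; if the distortions persist, they may come from the container rather than the conversion itself.

        ffmpeg -i input.mts -vcodec copy -acodec copy -f matroska output.mkv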

    Read the article

  • How do I re-enable the IPMI temperature sensors?

    - by NobleUplift
    I've never had a problem reading temperature sensors with ipmitool on my server, but recently the temperature readings started showing up as disabled:

        # ipmitool sdr list
        Temp             | disabled     | ns
        Temp             | disabled     | ns
        Ambient Temp     | 21 degrees C | ok
        CMOS Battery     | 0x00         | ok
        VCORE            | 0x00         | ok
        VDDIO            | 0x00         | ok
        VDDA             | 0x00         | ok
        VTT              | 0x00         | ok
        VCORE            | 0x00         | ok
        VDDIO            | 0x00         | ok
        VDDA             | 0x00         | ok
        VTT              | 0x00         | ok
        VDD 1.2V PG      | 0x00         | ok
        Linear PG        | 0x00         | ok

    I am using OpenIPMI 2.0.19 and ipmitool 1.8.12. How can I re-enable my temperature sensors?
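
    A common first attempt in this situation, under the assumption that the BMC itself is wedged rather than the sensors having failed: cold-reset the management controller and re-read the sensor repository.

        ipmitool mc reset cold    # restarts the BMC; the host OS keeps running
        sleep 60                  # give the controller time to come back up
        ipmitool sdr list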

    Read the article

  • Better logging for cronjob output using /usr/bin/logger

    - by Stefan Lasiewski
    I am looking for a better way to log cronjobs. Most cronjobs tend to spam email or the console, get ignored, or create yet another logfile. In this case, I have a Nagios NSCA script which sends data to a central Nagios server. This send_nsca script also prints a single status line to STDOUT, indicating success or failure.

        0 * * * * root /usr/local/nagios/sbin/nsca_check_disk

    This emails the following message to root@localhost, which is then forwarded to my team of sysadmins. Spam.

        forwarded nsca_check_disk: 1 data packet(s) sent to host successfully.

    I'm looking for a log method which:

        Doesn't spam the messages to email or the console
        Doesn't create yet another crufty logfile which requires cleanup months or years later
        Captures the log information somewhere, so it can be viewed later if desired
        Works on most Unixes
        Fits into an existing log infrastructure
        Uses common syslog conventions like 'facility'

    Some of these are third-party scripts, and don't always do logging internally.

    UPDATE 2010-04-30: In the process of writing this question, I think I have answered it myself. So I'll answer myself "Jeopardy-style". Is there any problem with this method? The following will send any cron output to /usr/bin/logger, which will send it to syslog with a 'tag' of 'nsca_check_disk'. Syslog handles it from there. My systems (CentOS and FreeBSD) already handle log rotation.

        */5 * * * * root /usr/local/nagios/sbin/nsca_check_disk 2>&1 | /usr/bin/logger -t nsca_check_disk

    /var/log/messages now has one additional message which says this:

        Apr 29 17:40:00 192.168.6.19 nsca_check_disk: 1 data packet(s) sent to host successfully.

    I like /usr/bin/logger because it works well with an existing syslog configuration and infrastructure, and is included with most Unix distros. Most *nix distributions already do log rotation, and do it well.
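
    The same idea with an explicit syslog facility and priority, so syslog.conf can route these messages wherever the existing infrastructure wants them (local0 is an illustrative choice):

        */5 * * * * root /usr/local/nagios/sbin/nsca_check_disk 2>&1 | /usr/bin/logger -t nsca_check_disk -p local0.info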

    Read the article

  • How do I make a virtualised WAN?

    - by EnchantedEggs
    I want to create a virtualised WAN. As in, I want to have a couple of VMs (VBox) on one physical host machine that exist on separate LANs but can talk to each other. Do I make the VMs, set them up with different IP addresses (e.g. 1.2.3.4 and 5.6.7.8), and then configure port forwarding between them somehow? I've seen articles that set up port forwarding on port 2222, but I don't really understand why this works. How is setting up the VM to listen on port 2222 and then forwarding from there to, say, port 80 any different from just telling the VM to listen on port 80 in the first place? FYI, the VMs run Ubuntu Desktop 14.x.
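
    A sketch of one alternative to port forwarding entirely: model each LAN as a VirtualBox internal network and route between them with a third VM (VM and network names are illustrative assumptions):

        VBoxManage modifyvm "vm1"    --nic1 intnet --intnet1 lanA
        VBoxManage modifyvm "vm2"    --nic1 intnet --intnet1 lanB
        VBoxManage modifyvm "router" --nic1 intnet --intnet1 lanA --nic2 intnet --intnet2 lanB
        # inside the "router" VM, enable forwarding so traffic crosses the two LANs:
        sysctl -w net.ipv4.ip_forward=1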

    Read the article

  • Force gdm login screen to the primary monitor

    - by JIa3ep
    I have two monitors attached to my video card. Primary monitor has a resolution equal to 1280x1024 and second has 1920x1200. My gdm login screen always appears on the second monitor even if it is switched off. My question is how to force gdm to show login screen always on primary monitor with resolution 1280x1024? I use Ubuntu 10.04.
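
    One commonly suggested approach for gdm on Ubuntu 10.04, hedged because output names vary by driver: run xrandr from gdm's Init script so the layout is set before the greeter draws (VGA1/DVI1 are assumptions; check the names with xrandr -q):

        # appended to /etc/gdm/Init/Default, before the final exit
        xrandr --output VGA1 --primary --mode 1280x1024
        xrandr --output DVI1 --off        # or keep it visible: --right-of VGA1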

    Read the article

  • sshd: How to enable PAM authentication for specific users under

    - by Brad
    I am using sshd and allow logins with public key authentication. I want to allow select users to log in with a PAM two-factor authentication module. Is there any way I can allow PAM two-factor authentication for a specific user? I don't want it for all users. By the same token, I only want to enable password authentication for specific accounts: I want my SSH daemon to reject password authentication attempts, to lead would-be hackers into thinking that I will not accept password authentication at all - except for the case in which someone knows my heavily guarded secret account, which is password-enabled. I want to do this for cases in which my SSH clients will not let me do either secret-key or two-factor authentication.
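
    A minimal sketch of the password-per-account side using an sshd_config Match block (user name illustrative; the per-user two-factor side would additionally need matching PAM configuration, which is assumed here rather than shown):

        # /etc/ssh/sshd_config
        PasswordAuthentication no            # global default: keys only
        Match User secretadmin
            PasswordAuthentication yes       # the one guarded, password-enabled account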

    Read the article

  • 284 GiB of data, 217.4 GiB of space

    - by Malfist
    I want to reinstall my OS, but I don't have the hard drive space to back up any more (I have a RAID 1 array, so I haven't done it for a while). In my /home I have 284.8 GiB of data, and I have a spare 250 GB (i.e. 217.4 GiB) hard drive that I've been using for backup. What type of compression algorithm (if any) can achieve this ratio? I don't care about the time; I have a quad core, though, so something that utilizes all 4 cores would be great. I have tried 7zip with no success: it ran on one core for two days and failed for lack of space. Any ideas?
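
    A hedged approach that sidesteps the out-of-space failure: stream the archive straight to the backup drive so no intermediate file is needed, using a parallel compressor for the quad core. The mount point is illustrative, and whether /home actually compresses below 217.4 GiB depends entirely on the data.

        tar -cf - /home | pbzip2 -c -p4 > /mnt/backup/home.tar.bz2   # pbzip2: bzip2 spread across 4 cores
        # restore later with:
        pbzip2 -dc /mnt/backup/home.tar.bz2 | tar -xf -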

    Read the article

  • When and where does Wubi mount its virtual disks

    - by TuxPotato
    Using the wubi-new-virtual-disk script made Wubi start mounting a new virtual home disk over my /home folder. After the script failed, I was left with Wubi constantly remounting an empty virtual disk over my /home folder. I followed the instructions on the Ubuntu website to revert the change, but the mounting continues. Where did Wubi put the mount operation, and how can I remove it? Thanks in advance!
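
    A hypothetical place to look, based on how Wubi loop-mounts its disk images (the exact entry is an assumption; paths vary by install):

        grep -n disk /etc/fstab
        # a stale Wubi entry typically looks something like:
        #   /host/ubuntu/disks/home.disk  /home  ext4  loop  0  0
        # comment it out, then re-test the mounts:
        sudo umount /home && sudo mount -a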

    Read the article

  • Debian Bluetooth headphones not working

    - by cYrus
    Hardware: headphones, Bluetooth dongle. Maybe not exactly these models.

    Setup: I tried to follow some guides; here's what I've done so far. Install the software:

        sudo apt-get install bluez-utils bluez-alsa

    Reboot (just to be sure):

        $ dmesg | grep -i bluetooth
        [   20.268212] Bluetooth: Core ver 2.16
        [   20.268230] Bluetooth: HCI device and connection manager initialized
        [   20.268233] Bluetooth: HCI socket layer initialized
        [   20.268235] Bluetooth: L2CAP socket layer initialized
        [   20.268239] Bluetooth: SCO socket layer initialized
        [   20.284685] Bluetooth: RFCOMM TTY layer initialized
        [   20.284692] Bluetooth: RFCOMM socket layer initialized
        [   20.284693] Bluetooth: RFCOMM ver 1.11
        [   20.335375] Bluetooth: BNEP (Ethernet Emulation) ver 1.3
        [   20.335378] Bluetooth: BNEP filters: protocol multicast

    The daemon is running:

        $ /etc/init.d/bluetooth status
        [ ok ] bluetooth is running.

    Plug in the dongle:

        $ dmesg | tail
        [...]
        [23108.352034] usb 5-2: new full-speed USB device number 2 using ohci_hcd
        [23108.571131] usb 5-2: New USB device found, idVendor=0a12, idProduct=0001
        [23108.571136] usb 5-2: New USB device strings: Mfr=0, Product=0, SerialNumber=0
        [23108.629042] usbcore: registered new interface driver btusb

    Put the headphones in pairing mode and try scanning:

        $ hcitool scan
        Scanning ...

    Found nothing. What's next? What should I try? I'll update this question as soon as you provide me hints.
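
    A hypothetical next step: confirm the adapter actually came up as an HCI device before blaming the scan (hci0 is an assumption):

        hciconfig -a              # is an hci0 adapter listed, and is it UP RUNNING?
        sudo hciconfig hci0 up    # bring it up if it shows DOWN
        hcitool dev               # list local adapters
        hcitool scan              # retry with the headphones back in pairing mode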

    Read the article

  • Tool to convert a file of HEX to ASCII character set?

    - by Aaron
    Question: is there a known tool to convert a file consisting of 2-byte hex into ASCII?

    Note: maintain the file offset listing in bytes.

    Example file contents:

        00000000 0054 0065 0073 0074 0020 0054 0065 0073
        00000008 0074 0020 0054 0065 0073 0074 0020 0054
        00000016 0065 0073 0074 0020 0054 0065 0073 0074
        00000024 0020 0054 0065 0073 0074 0020 0054 0065
        00000032 0073 0074 0020 0054 0065 0073 0074 0020
        00000040 0054 0065 0073 0074 000a 0054 0065 0073
        00000048 0074 0020 0054 0065 0073 0074 0020 0054
        00000056 0065 0073 0074 0020 0054 0065 0073 0074
        00000064 0020 0054 0065 0073 0074 0020 0054 0065

    Expected output:

        00000016 0065 0073 0074 0020 0054 0065 0073 0074 |est Test Test Te|
        00000032 0073 0074 0020 0054 0065 0073 0074 0020 |st Test Test.Tes|
        00000048 0074 0020 0054 0065 0073 0074 0020 0054 |t Test Test Test|
        00000064 0020 0054 0065 0073 0074 0020 0054 0065 | Test Test Test |
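
    A hedged approximation with standard tools; note that od counts offsets in bytes, while the example above appears to count 16-bit words, so the numbering will differ:

        od -A d -t x2z -w16 file.bin   # -A d: decimal offsets; -t x2z: 2-byte hex units plus a printable-text column
        xxd -g 2 -c 16 file.bin        # the hex-offset alternative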

    Read the article

  • Causes of sudden massive filesystem damage? ("root inode is not a directory")

    - by poolie
    I have a laptop running Maverick (very happily until yesterday), with a Patriot Torx SSD; LUKS encryption of the whole partition; one lvm physical volume on top of that; then home and root in ext4 logical volumes on top of that. When I tried to boot it yesterday, it complained that it couldn't mount the root filesystem. Running fsck, basically every inode seems to be wrong. Both home and root filesystems show similar problems. Checking a backup superblock doesn't help:

        e2fsck 1.41.12 (17-May-2010)
        lithe_root was not cleanly unmounted, check forced.
        Resize inode not valid.  Recreate? no
        Pass 1: Checking inodes, blocks, and sizes
        Root inode is not a directory.  Clear? no
        Root inode has dtime set (probably due to old mke2fs).  Fix? no
        Inode 2 is in use, but has dtime set.  Fix? no
        Inode 2 has a extra size (4730) which is invalid  Fix? no
        Inode 2 has compression flag set on filesystem without compression support.  Clear? no
        Inode 2 has INDEX_FL flag set but is not a directory.  Clear HTree index? no
        HTREE directory inode 2 has an invalid root node.  Clear HTree index? no
        Inode 2, i_size is 9581392125871137995, should be 0.  Fix? no
        Inode 2, i_blocks is 40456527802719, should be 0.  Fix? no
        Reserved inode 3 (<The ACL index inode>) has invalid mode.  Clear? no
        Inode 3 has compression flag set on filesystem without compression support.  Clear? no
        Inode 3 has INDEX_FL flag set but is not a directory.  Clear HTree index? no
        ....

    Running strings across the filesystems, I can see there are what look like filenames and user data there. I do have sufficiently good backups (touch wood) that it's not worth grovelling around to pull back individual files, though I might save an image of the unencrypted disk before I rebuild, just in case. smartctl doesn't show any errors, neither does the kernel log. Running a write-mode badblocks across the swap lv doesn't find problems either. So the disk may be failing, but not in an obvious way. At this point I'm basically, as they say, fscked? Back to reinstalling, perhaps running badblocks over the disk, then restoring from backup? There doesn't even seem to be enough data to file a meaningful bug... I don't recall that this machine crashed last time I used it. At this point I suspect a bug or memory corruption caused it to write garbage across the disks when it was last running, or some kind of subtle failure mode for the SSD. What do you think would have caused this? Is there anything else you'd try?
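
    Before rebuilding, a hedged way to preserve evidence and test the drive, along the lines already mentioned above (device names illustrative): image the raw device first, then run a non-destructive read-write badblocks pass.

        dd if=/dev/sda of=/mnt/external/sda.img bs=1M conv=noerror,sync   # raw image, tolerating read errors
        badblocks -nsv /dev/sda                                           # -n: non-destructive read-write test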

    Read the article

  • SCP command Clarification

    - by david.colais
    I'm using the scp command to pull some files from a remote server, and one variation of the command is not working. I have two files named one.xml and two.xml on a remote server, and I'm pulling them into the current directory using the following commands:

        scp [email protected]:/student/class/Intermediate/one.xml .
        scp [email protected]:/student/class/Intermediate/two.xml .

    The above commands work fine, but if I use a wildcard to pull all the xml files in a single shot, as shown below, it returns scp: No match.

        scp [email protected]:/student/class/Intermediate/*.xml .

    Why does it work if I pull the files individually, but not if I try to pull them using a wildcard?
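
    A hedged explanation and fix: "No match." is the error a csh-family shell prints when a glob matches nothing, which suggests the local shell is trying to expand *.xml against local files before scp ever runs. Quoting the argument passes the pattern through to the remote side intact (user and host are illustrative placeholders):

        scp 'user@remotehost:/student/class/Intermediate/*.xml' .   # quotes stop local glob expansion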

    Read the article

  • Difference between ps output and top output?

    - by Soumya Prasad Ukil
    I find it difficult to understand the output produced by ps and top. This is the output from top:

        PID   PSID  USERNAME  TID   PRI  NICE  SIZE   RES    STATE  TIME    CPU     COMMAND
        26439 23712 soumyau   26439  15  0     7512M  5234M  sleep  286:25  16.67%  or_lse2 (18)
        26523 23712 soumyau   26439  -2  0     7512M  5234M  cpu9   143:10   8.33%  or_lse2
        26522 23712 soumyau   26439  -2  0     7512M  5234M  cpu3   143:10   8.33%  or_lse2

    This is from ps (ps -L -p 26439 -o pcpu,psr,pid,user,tid):

        %CPU  PSR  PID    USER     TID
        99.9  3    26439  soumyau  26522
        99.9  9    26439  soumyau  26523
        0.0   8    26439  soumyau  26439

    Why are there differences between the two results? Can you briefly explain the significance of the two CPU% figures?
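
    A hedged note on comparing the two, assuming procps ps/top on Linux: ps reports %CPU averaged over each thread's whole lifetime (cputime divided by elapsed time), while top samples usage over its refresh interval, so an apples-to-apples comparison means putting top into per-thread mode:

        top -H -p 26439                        # -H: show individual threads, sampled per interval
        ps -L -p 26439 -o pcpu,psr,tid,comm    # lifetime average per thread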

    Read the article

  • switchover in postgresql

    - by user1010280
    I am using PostgreSQL 9.0 with streaming replication. During switchover I follow these steps:

        1. Get the server timestamp on the primary.
        2. Get the current log position on the primary.
        3. Set/verify the log location.
        4. Verify the transaction received location.
        5. Shut down the DB on production.
        6. Synchronize the transaction logs from PR to DR.
        7. Trigger a failover on the DR database by creating the trigger file specified in recovery.conf.
        8. Verify the DB mode on DR.
        9. Copy the control file from DR to the primary.
        10. Copy the temporary stats file from DR to the primary.
        11. Copy the history file from DR to the primary.
        12. Create the recovery.conf file.
        13. Start the database in standby mode on the primary.
        14. Verify the DB mode on PR.

    At step 6, I have to copy the last WAL generated on the primary to the standby and sync both PR and standby, but this takes time because the copy is remote. So Postgres keeps searching for the WAL for a long time and then stops the server. I want to know: is there any way to ask Postgres to stop searching for or locating the WAL after shutdown? It tries to locate this WAL every 5 seconds. Please reply as soon as possible, it's urgent...
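
    A hedged sketch of the recovery.conf from step 12: a restore_command that fails fast when the WAL segment is absent, rather than blocking on a remote copy, shortens each retry cycle. Note this is a mitigation, not a full stop; standby mode itself will still re-poll. Paths and connection values are illustrative.

        # recovery.conf (PostgreSQL 9.0 syntax)
        standby_mode = 'on'
        primary_conninfo = 'host=dr-server port=5432 user=replicator'
        restore_command = 'test -f /archive/%f && cp /archive/%f %p'   # exits non-zero immediately if missing
        trigger_file = '/tmp/pg_failover_trigger'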

    Read the article

  • Fedora 17 command-line 'mail' program cannot send to Hotmail

    - by Eric Leschinski
    I am trying to use the console in Fedora 17 to send an automated email to myself. I run this:

        echo "email content" | mail -s "blah" [email protected]

    It works fine; Google treats it as a spam email, but when you mark it not-spam everything is cool. For Hotmail there are policies that prevent the email from being delivered. I do this:

        echo "email content" | mail -s "blah" [email protected]

    And the email comes back as undeliverable; it does not even appear in the spam folder, and I get this as a response:

        ----- Transcript of session follows -----
        ... while talking to mx3.hotmail.com.:
        >>> MAIL From:<[email protected]> SIZE=685
        <<< 550 DY-001 (BAY0-MC3-F8) Unfortunately, messages from 184.90.101.28 weren't sent. Please contact your Internet service provider. You can tell them that Hotmail does not relay dynamically-assigned IP ranges. You can also refer your provider to http://mail.live.com/mail/troubleshooting.aspx#errors.
        554 5.0.0 Service unavailable

    So apparently Hotmail blocks anything sent directly from a dynamically-assigned IP range, while Google does not. What is the easiest way to get around this and send an email to Hotmail, even if it ends up in their spam folder to be unblocked later by the user?
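
    A hedged workaround consistent with Hotmail's advice above: relay outbound mail through an authenticated smarthost (the ISP's, or any mailbox provider's submission port) instead of delivering directly from the dynamic IP. A minimal sketch in ssmtp style, with illustrative values (Fedora's default MTA may differ):

        # /etc/ssmtp/ssmtp.conf
        mailhub=smtp.example-isp.com:587
        AuthUser=myuser
        AuthPass=mypass
        UseSTARTTLS=YES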

    Read the article

  • How do I tell mdadm to start using a missing disk in my RAID5 array again?

    - by Jon Cage
    I have a 3-disk RAID array running in my Ubuntu server. It ran flawlessly for over a year, but I was recently forced to strip, move, and rebuild the machine. When I had it all back together and brought up Ubuntu, I had some problems with disks not being detected. A couple of reboots later I'd solved that issue. The problem now is that the 3-disk array shows up as degraded every time I boot. For some reason it seems that Ubuntu has made a new array and added the missing disk to it. I've tried stopping the new 1-disk array and adding the missing disk, but I'm struggling. On startup I get this:

        root@uberserver:~# cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md_d1 : inactive sdf1[2](S)
              1953511936 blocks
        md0 : active raid5 sdg1[2] sdc1[3] sdb1[1] sdh1[0]
              2930279808 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

    I have two RAID arrays, and the one that normally pops up as md1 isn't appearing. I read somewhere that calling mdadm --assemble --scan would re-assemble the missing array, so I first tried stopping the existing array that Ubuntu started:

        root@uberserver:~# mdadm --stop /dev/md_d1
        mdadm: stopped /dev/md_d1

    ...and then tried to tell Ubuntu to pick the disks up again:

        root@uberserver:~# mdadm --assemble --scan
        mdadm: /dev/md/1 has been started with 2 drives (out of 3).

    So that's started md1 again, but it's not picking up the disk from md_d1:

        root@uberserver:~# cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md1 : active raid5 sde1[1] sdf1[2]
              3907023872 blocks level 5, 64k chunk, algorithm 2 [3/2] [_UU]
        md_d1 : inactive sdd1[0](S)
              1953511936 blocks
        md0 : active raid5 sdg1[2] sdc1[3] sdb1[1] sdh1[0]
              2930279808 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

    What's going wrong here? Why is Ubuntu trying to pick up sdd into a different array? How do I get that missing disk back home again?

    [Edit] After adding md1 to mdadm.conf it now tries to mount the array on startup, but it's still missing the disk. If I tell it to try and assemble automatically, I get the impression it knows it needs sdd but can't use it:

        root@uberserver:~# mdadm --assemble --scan
        /dev/md1: File exists
        mdadm: /dev/md/1 already active, cannot restart it!
        mdadm: /dev/md/1 needed for /dev/sdd1...

    What am I missing?
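
    A hedged sequence matching the symptoms above: stop the stray single-disk array, hand its member back to md1, and persist the layout so the next boot assembles it the same way (device names taken from the question):

        mdadm --stop /dev/md_d1                           # release sdd1 from the stray array
        mdadm /dev/md1 --add /dev/sdd1                    # re-add it; the degraded RAID5 then resyncs onto it
        watch cat /proc/mdstat                            # follow the recovery
        mdadm --detail --scan >> /etc/mdadm/mdadm.conf    # persist the ARRAY definitions
        update-initramfs -u                               # so early boot sees the same config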

    Read the article

  • Clear OS always showing "Operation too slow. Less than 1 bytes/sec"

    - by Blue Gene
    Have been trying to install a ClearOS addon, but nothing works, as I hit this error on every mirror in the .repo file when running yum install squid:

        http://mirror2-dallas.clearsdn.com/clearos/core/6/x86_64/repodata/primary.sqlite.bz2:
        [Errno 12] Timeout on http://mirror2-dallas.clearsdn.com/clearos/core/6/x86_64/repodata/primary.sqlite.bz2:
        (28, 'Operation too slow. Less than 1 bytes/sec transfered the last 30 seconds')
        Trying other mirror.

    The same timeout repeats for mirror2-dc.clearsdn.com, mirror1.timburgess.net, mirror2-houston.clearsdn.com, and mirror3-toronto.clearsdn.com, each followed by "Trying other mirror.", and then again around the mirror list until yum gives up:

        Error: failure: repodata/primary.sqlite.bz2 from clearos-core: [Errno 256] No more mirrors to try.

    How can I fix this? I am able to access the repo through the web, and nothing seems wrong with the repo itself, so where can the problem be? yum clean all didn't help. Is there a way to fix it? I am not able to install any package.
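
    A hedged mitigation aimed at that exact error: the '(28, Operation too slow...)' message is yum's minimum-rate check aborting stalled transfers, and /etc/yum.conf can relax it (values illustrative):

        # appended to the [main] section of /etc/yum.conf
        timeout=300    # seconds a transfer may stay below the minimum rate before aborting (default 30)
        retries=20     # attempts per metadata/package file (default 10)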

    Read the article

  • System time wrong after running ntpdate because DST ignored

    - by Ian Dunn
    When I run ntpdate, my system clock displays the time as an hour behind what it should be. I know that ntpdate does everything in UTC, so I'm guessing a timezone setting is wrong and Daylight Savings Time is being ignored, but I can't figure it out. Here's what I've done so far:

        ln -sf /usr/share/zoneinfo/EST /etc/localtime    # to set the timezone
        # set UTC=true in /etc/sysconfig/clock so that DST will be automatically applied
        date -s hh:mm:ss                                 # to set the system clock correctly
        hwclock --systohc --utc                          # to set the hardware clock correctly

    At this point date and hwclock both display the correct time. But if I then run ntpdate 0.us.pool.ntp.org, the date output is an hour behind what it should be. I've looked at a dozen tutorials and can't figure out what I'm doing wrong. Does anyone have any ideas?
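
    A hedged observation matching the symptom: /usr/share/zoneinfo/EST is a fixed UTC-5 zone with no DST rules, so during daylight savings it reads an hour behind; linking a location-based zone applies DST automatically.

        ln -sf /usr/share/zoneinfo/America/New_York /etc/localtime
        # keep ZONE in /etc/sysconfig/clock consistent, e.g.:
        #   ZONE="America/New_York"
        hwclock --systohc --utc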

    Read the article
