Search Results

Search found 38288 results on 1532 pages for 'oracle linux partners'.


  • Help with Ubuntu and Windows, separate HDs

    - by LuxuryMode
    Need some major help. I'm running a Dell XPS/Dimension 630i. It came with "SATA 2 RAID 0 with dual 500GB hard drives." I have installed a new, third non-raided drive and installed Ubuntu on it. So now I have Windows on the original hard drive and Ubuntu Linux on the new HD. When I get to the boot menu where I can select an OS, if I select Windows I get an error: "No such drive, no such disk." Also, strangely, in order to even get to the bootloader menu in the first place I have had to disable ALL ports under the RAID config. Unless I do this, I just get a never-ending blinking cursor. I have tried every conceivable CMOS config and nothing else works. I tried setting port 3 (the new HD with Ubuntu) to first hard disk boot priority, and tried disabling all other ports and enabling the Ubuntu HD port and vice versa. I have some pictures of the boot-up: the first one is the strange error I get after messing with the CMOS to finally get the Ubuntu install to work: http://imgur.com/5sqJa, then the boot menu: http://imgur.com/TWtLq, then the error: http://imgur.com/TJ1mS. Also, please note that I can actually access all files on the raided Windows drive through Ubuntu.
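    For what it's worth, a rough sketch of one avenue from the Ubuntu side, on the assumption that GRUB 2 lives on the non-raided drive and that the Windows volume is a BIOS fakeraid set which os-prober can only map once dmraid support is present (package names are Ubuntu's; this is a guess, not a confirmed fix):

        # install fakeraid support so the RAID 0 Windows volume becomes visible to the OS
        sudo apt-get install dmraid
        # regenerate the GRUB menu; os-prober should now emit a Windows entry that
        # points at a device mapping which actually exists at boot time
        sudo update-grub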

    Read the article

  • User-unique .vimrc file for servers as root user

    - by Scott
    I'm getting thrown into an IDE war at the office, where multiple users have root access on our servers and like to have everything their own way with VIM. Unfortunately, our servers are locked down enough that if you want to do anything, you need root access. We get tired of typing sudo before each command (which would mean constantly retyping the wonderfully complex passwords mandated on us), so naturally we all just execute sudo su - upon login to avoid all of this, even though that is obviously frowned upon. Of course, when it comes to VIM and custom .vimrc files, we often end up stepping on someone else's custom .vimrc, and some of those files contain whacked-out functionality that may override behaviour the rest of us know nothing about, much less have the patience to learn. When we're root on a Linux box, is there any way for each of us to still maintain our own .vimrc without overwriting the file over and over again every time someone wants to use VIM? Ideally we'd like a universal solution: we have many virtual machines, all with VIM installed, and we do have our Microsoft Windows user-specific home directories mounted on the servers under /home/username. Any recommendations for accommodating this?
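    For what it's worth, a minimal sketch of one workaround, assuming everyone becomes root via sudo su - from their own login and that the per-user home directories really are reachable as /home/<username> as described: "who am i" typically still reports the original tty login after switching to root, so root's ~/.bashrc can point vim at the right personal vimrc.

        # in root's ~/.bashrc -- illustrative only
        REAL_USER=$(who am i | awk '{print $1}')
        if [ -n "$REAL_USER" ] && [ -f "/home/$REAL_USER/.vimrc" ]; then
            alias vim="vim -u /home/$REAL_USER/.vimrc"   # -u tells vim which vimrc to load
        fi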

    Read the article

  • Software RAID 1 Configuration

    - by Corve
    I created a software RAID 1 quite a while ago and it always seemed to work for me. However, I am not completely sure that I have configured everything right, and I don't have the experience to check, so I would be very grateful for some advice or just verification that all seems right so far. I am using Linux Fedora 20 (32-bit, with plans to upgrade to 64-bit). The RAID 1 should consist of two 1TB SATA hard drives. This is the output of mdadm --detail /dev/md0:
      /dev/md0:
              Version : 1.2
        Creation Time : Sun Jan 29 11:25:18 2012
           Raid Level : raid1
           Array Size : 976761424 (931.51 GiB 1000.20 GB)
        Used Dev Size : 976761424 (931.51 GiB 1000.20 GB)
         Raid Devices : 2
        Total Devices : 1
          Persistence : Superblock is persistent
          Update Time : Sat Jun 7 10:38:09 2014
                State : clean, degraded
       Active Devices : 1
      Working Devices : 1
       Failed Devices : 0
        Spare Devices : 0
                 Name : argo:0  (local to host argo)
                 UUID : 1596d0a1:5806e590:c56d0b27:765e3220
               Events : 996387

          Number   Major   Minor   RaidDevice State
             0       0        0        0      removed
             1       8        0        1      active sync   /dev/sda
    The RAID is mounted successfully:
      friedrich@argo:~ ? sudo mount -l | grep md0
      /dev/md0 on /mnt/raid type ext4 (rw,relatime,data=ordered)
    Basically my questions are: Why do I only have one active device? What does the "removed" state at the bottom mean? I have also noticed some strange error messages on the console at system start and shutdown, which keep repeating in the background when I switch with Ctrl + Alt + F2:
      ...
      ata2: irq_stat 0x00000040 connection status changed
      ata2: SError: { CommWake DevExch }
      ata2: COMRESET failed (errno=-32)
      ata2: exception Emask 0x10 SAct 0x0 SErr 0x4040000 action 0xe frozen
      ata2: irq_stat 0x00000040 connection status changed
      ata2: SError: { CommWake DevExch }
      ata2: exception Emask 0x10 SAct 0x0 SErr 0x4040000 action 0xe frozen
      ...
    Are these errors related to the RAID? Something seems wrong with the SATA devices. All together the system works (I can read and write to the mounted RAID), but I have always had these strange errors on startup and shutdown, and probably continuously in the background. Thanks for your help.
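    As a side note, a sketch of the usual repair for this kind of degraded state, assuming the second 1TB disk is still installed and healthy. The device name /dev/sdb below is purely illustrative (confirm the real one with lsblk or fdisk -l), and the ata2 messages above do hint at a cable or link problem on that port that is worth checking before anything else:

        cat /proc/mdstat                         # confirm md0 is currently running on one member only
        smartctl -H /dev/sdb                     # sanity-check the disk that dropped out (smartmontools)
        mdadm --manage /dev/md0 --add /dev/sdb   # re-add it; the array then resyncs onto it
        watch cat /proc/mdstat                   # follow the rebuild progress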

    Read the article

  • Family server setup

    - by Manny
    Hi all, I really hope some of you can give me some direction. I have set up a Linux server at home, and through Samba I can access files from different computers in my home. I would like to use this server as a file server for my family (brothers, sisters and parents, who all live in their own homes). I really like the way it is set up right now with user and permission controls, but I've read that it is a bad idea to open up the Samba port to the world. The requirements are simple: 1) it should be easy to access, using standard web browsers or by mounting the drive (shouldn't have to use any VPN setup or use PuTTY etc.); 2) it should be somewhat secure. We just want to share family pictures instead of putting them on Facebook or Picasa or some other web service, nothing top secret. Here is what I've looked into: 1) WebDAV. It seems decent, but it seems like Windows 7 doesn't like it very much, even with digest-mode authentication. User controls and permissions are not as flexible as Samba (or at least to my knowledge). I really like the user and group permissions in Samba, but I could live with WebDAV if it worked seamlessly with Windows; it should just work, shouldn't it? 2) I read somewhere to stay away from FTP, as it is outdated and there are newer and better internet file-server setups. Was that a reference to WebDAV? I am so confused, please help... Manny
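    Since WebDAV is already on the shortlist, a rough outline of an Apache + mod_dav share behind HTTPS is sketched below. Every path, realm and file name is a placeholder, the accounts here would be htpasswd entries rather than the existing Samba users, and Windows 7's built-in WebDAV client generally behaves better when the share is served over SSL on port 443 (it refuses Basic auth over plain HTTP by default):

        # inside an SSL-enabled virtual host -- illustrative only
        DavLockDB /var/lock/apache2/DavLock

        Alias /family /srv/family
        <Directory /srv/family>
            Dav On
            AuthType Basic
            AuthName "Family files"
            AuthUserFile /etc/apache2/family.passwd   # add users with: htpasswd /etc/apache2/family.passwd <name>
            Require valid-user
        </Directory>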

    Read the article

  • Trouble getting started with the STEALTH monitoring package

    - by dlanced
    Is anyone here familiar with the Linux-based STEALTH package (for monitoring the filesystem integrity of client systems)? I'm trying to get started with a very simple configuration, but I'm running into trouble (this is running under Ubuntu 14.04):
      Config line `USE BASE/root/stealth/10.0.0.79' invalid
      STEALTH (2.11.02) started at Fri, 30 May 2014 15:25:00 +0000
      Program terminated due to non-zero exit value for
      -type f -exec /usr/bin/sha1sum {} \; (EOC Fri May 30 15:25:00 2014 127)
    Stealth is creating a binary tmp file in the Stealth server root and generating a "report" file in the start directory, but not much else. Regarding the "USE BASE...invalid" error, and just to be sure, I manually created the directories in /root, but it didn't help. And, by the way, I am running stealth with sudo. Everything seems to be configured correctly: I'm able to ssh into root@client from the stealth machine without a password. Here's my "policy" file (I've removed the email directives just for simplicity):
      DEFINE SSHCMD /usr/bin/ssh [email protected] -T -q exec /bin/bash --noprofile
      DEFINE EXECSHA1 -xdev -perm +u+s,g+s ( -user root -or -group root ) \
          -type f -exec /usr/bin/sha1sum {} \;
      USE BASE/root/stealth/10.0.0.79
      USE SSH ${SSHCMD}
      USE DD /bin/dd
      USE DIFF /usr/bin/diff
      USE PIDFILE /var/run/stealth-
      USE REPORT report
      USE SH /bin/sh
      GET /usr/bin/sha1sum /root/tmp
      LABEL \nchecking the client's /usr/bin/find program
      CHECK LOG = remote/binfind /usr/bin/sha1sum /usr/bin/find
      LABEL \nsuid/sgid/executable files uid or gid root on the / partition
      CHECK LOG = remote/setuidgid /usr/bin/find / ${EXECSHA1}
      LABEL \nconfiguration files under /etc
      CHECK LOG = remote/etcfiles \
          /usr/bin/find /etc -type f -not -perm /6111 \
          -not -regex "/etc/(adjtime\|mtab)" \
          -exec /usr/bin/sha1sum {} \;
    Any ideas? Thanks
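    One detail that stands out in that policy file, offered as a guess rather than a confirmed fix: every other USE directive has whitespace between the keyword and its value, while the BASE line does not, which would line up with the "Config line ... invalid" message quoting exactly that line:

        # as written (rejected by stealth):
        USE BASE/root/stealth/10.0.0.79
        # with the same separator the other USE lines have:
        USE BASE /root/stealth/10.0.0.79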

    Read the article

  • DHCPD (Slackware) - Disabling auto-generation of gateway as DNS server

    - by Dogbert
    Good day. I am using a Linux workstation on Slackware 13.37. One "problem" I have had to deal with ever since 11.0 is the following: DNS servers are queried and determined at startup by the DHCP daemon (DHCPD), which is invoked at startup by a script located at /etc/rc.d/rc.dhcpd. My ISP's DNS servers are picked up correctly and are stored in a list in /etc/resolv.conf. However, the one annoying problem is that my gateway IP (i.e. 192.168.1.1) is always automatically put at the top of the list in resolv.conf, meaning I always have to wait for a timeout before a valid DNS server is used to resolve an address (i.e. the query to 192.168.1.1 times out because it is not actually a DNS server, and only then is the next server in the list used). I could lower my DNS resolution timeout so the gateway query times out quicker, but that's not what I want, as I don't want to degrade the abilities of legitimate DNS servers. What I would like to do is change how DHCPD operates so that it does NOT put my gateway IP address at the beginning of this list. I've searched via "man dhcpd", etc., and haven't found the exact answer yet. Any help on this issue is appreciated. Thank you all in advance for your time and assistance.
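    Assuming the lease is actually obtained by ISC dhclient (Slackware can use either dhclient or dhcpcd depending on how the rc scripts are set up, so this is a sketch rather than a drop-in fix), the usual way to keep the router's address out of resolv.conf is to override the offered name servers; the addresses below are placeholders for the real ISP servers:

        # /etc/dhclient.conf -- illustrative values
        # ignore the DNS servers offered in the lease and write these instead:
        supersede domain-name-servers 203.0.113.10, 203.0.113.11;

        # (with dhcpcd the rough equivalent is "nohook resolv.conf" in /etc/dhcpcd.conf,
        #  and then maintaining /etc/resolv.conf by hand)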

    Read the article

  • The munin plugin always times out

    - by haoX
    I want to use munin to graph the temperatures reported on ttyACM0 in Linux, but munin cannot create the graph. I found some information in "munin-node.log": it shows "Service 'temperature' timed out". So I changed the timeout to 60 or 120 in /munin/plugin-conf.d/munin-node, but that does not help; the plugin still times out. Here is part of my code:
      if [ "$1" = "config" ]; then
          echo 'graph_title Temperature of board'
          echo 'graph_args --base 1000 -l 0'
          echo 'graph_vlabel temperature(°C)'
          echo 'graph_category temperature'
          echo 'graph_scale no'
          echo 'graph_info This graph shows the temperature of board'
          for i in 1 2 3 4 5; do
              case $i in
                  1) TYPE="Under PCB" ;;
                  2) TYPE="HDD" ;;
                  3) TYPE="PHY" ;;
                  4) TYPE="CPU" ;;
                  5) TYPE="Ambience" ;;
              esac
              name=$(clean_name $TYPE)
              if [ "$TYPE" != "NA" ]; then
                  echo "temp_$name.label $TYPE"
              fi
          done
          exit 0
      fi
      for i in 1 2 3 4 5; do
          case $i in
              1) TYPE="Under PCB"
                 VALUE=$(head -1 /dev/ttyACM0 | awk '{print $1}') ;;
              2) TYPE="HDD"
                 VALUE=$(head -1 /dev/ttyACM0 | awk '{print $2}') ;;
              3) TYPE="PHY"
                 VALUE=$(head -1 /dev/ttyACM0 | awk '{print $3}') ;;
              4) TYPE="CPU"
                 VALUE=$(head -1 /dev/ttyACM0 | awk '{print $4}') ;;
              5) TYPE="Ambience"
                 VALUE=$(head -1 /dev/ttyACM0 | awk '{print $5}') ;;
          esac
          name=$(clean_name $TYPE)
          if [ "$TYPE" != "NA" ]; then
              echo "temp_$name.value $VALUE"
          fi
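    A minimal sketch of the usual fix for this kind of timeout, assuming the board only prints a line on /dev/ttyACM0 every so often: read the serial device once and split the fields, instead of blocking on head -1 five separate times (clean_name and the field order are taken from the plugin above):

        # read the serial line a single time; five separate head -1 calls can each
        # block until the board emits a new line, which is what exhausts the timeout
        LINE=$(head -1 /dev/ttyACM0)
        i=0
        for TYPE in "Under PCB" "HDD" "PHY" "CPU" "Ambience"; do
            i=$((i + 1))
            VALUE=$(echo "$LINE" | awk -v n="$i" '{print $n}')
            name=$(clean_name "$TYPE")
            [ "$TYPE" != "NA" ] && echo "temp_$name.value $VALUE"
        done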

    Read the article

  • Raspberry Pi slows down my entire network

    - by gnusouth
    Whenever my Raspberry Pi is connected to the network (via ethernet), the entire network is slowed to a crawl. On my main computer, ping times for google.com go from ~10ms to ~200ms and it takes forever to load web pages. Connections are also slow on the Pi, with an apt-get update showing pathetic speeds in the order of 1KB/s. Turning off the Pi completely removes the drag from the network. I've tried static and dynamic IP addresses for the Pi, but both have the same problems. I'm currently using Raspbian (downloaded today), but I also had this problem with Arch Linux. I've checked the connection's duplex with dmesg | grep -i duplex, which shows that the Pi's connection is running at 100Mbps, full-duplex, as expected. My modem/router is a Billion 7404VNPX (an Australian thing); relatively high-end, albeit a bit buggy at times (it will occasionally delete all its firewall settings). It assigns IPs in the range 192.168.1.1 to 192.168.1.20 and has 192.168.1.254 as its own IP. When I assign static IPs I tend to use the 192.168.1.200 area. Does anyone have any idea as to what could be causing this weird slowdown? Or any tests I could try? Thanks
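    A few quick tests that might narrow it down, sketched on the assumption that the Pi's wired interface is eth0 and that ethtool/tcpdump can be installed; they only gather information and change nothing:

        # look for error/drop counters climbing on the Pi's NIC
        ip -s link show eth0
        # confirm what was actually negotiated with the switch (dmesg can be stale)
        sudo ethtool eth0
        # watch for a broadcast/multicast storm coming from or hitting the Pi
        sudo tcpdump -i eth0 -n -c 50 'broadcast or multicast'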

    Read the article

  • My yum repository is able to search packages but not able to install them in RHEL?

    - by mandy
    I set up yum from the DVD. The following is the content of my .repo file:
      [dvd]
      name=Red Hat Enterprise Linux Installation DVD
      baseurl=file:///media/dvd
      enabled=0
    I'm able to search packages. However, during installation I get the error below:
      [root@localhost dvd]# yum install libstdc++.x86_64
      Loaded plugins: rhnplugin, security
      This system is not registered with RHN. RHN support will be disabled.
      Setting up Install Process
      Nothing to do
    My yum search output:
      [root@localhost dvd]# yum search gcc
      Loaded plugins: rhnplugin, security
      This system is not registered with RHN. RHN support will be disabled.
      ==================== Matched: gcc ====================
      compat-libgcc-296.i386 : Compatibility 2.96-RH libgcc library
      compat-libstdc++-296.i386 : Compatibility 2.96-RH standard C++ libraries
      compat-libstdc++-33.i386 : Compatibility standard C++ libraries
      compat-libstdc++-33.x86_64 : Compatibility standard C++ libraries
      cpp.x86_64 : The C Preprocessor.
      libgcc.i386 : GCC version 4.1 shared support library
      libgcc.x86_64 : GCC version 4.1 shared support library
      libgcj.i386 : Java runtime library for gcc
      libgcj.x86_64 : Java runtime library for gcc
      libstdc++.i386 : GNU Standard C++ Library
      libstdc++.x86_64 : GNU Standard C++ Library
      libtermcap.i386 : A basic system library for accessing the termcap database.
      libtermcap.x86_64 : A basic system library for accessing the termcap database.
    Please guide me on this; I want to install gcc on my RHEL.
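    For what it's worth, the pasted .repo file has enabled=0, which leaves the DVD repository switched off for installs; a sketch of the two usual ways around that (the file name /etc/yum.repos.d/dvd.repo is an assumption, adjust to wherever the stanza actually lives):

        # enable the repository permanently
        sed -i 's/^enabled=0/enabled=1/' /etc/yum.repos.d/dvd.repo
        yum clean all
        yum install gcc gcc-c++
        # or enable it only for this one transaction
        yum --enablerepo=dvd install gcc gcc-c++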

    Read the article

  • How do I fix a wrong UUID in grub.cfg?

    - by mozerella
    I run Debian Wheezy alone on my PC, and I recently copied the root partition to another partition with rsync, as I found that worked well (I also know about dd and ddrescue, but they leave unusable space on the new partition). I generated a new random UUID for the new partition with sudo tune2fs -U random /dev/hda9 and also updated the / and /home entries in fstab. Then, as I know so little about GRUB, I used a GUI (GRUB Customizer) to probe for the new OS and add an entry to GRUB and the MBR; it creates an /etc/grub.d entry and then updates GRUB. On startup, the GRUB list contains the new OS (on sda9), but it boots the first OS (the one I copied from, sda5). /boot/grub/grub.cfg contains the new Debian OS, but its entry looks like this:
      set root='(hd0,msdos9)'
      search --no-floppy --fs-uuid --set=root 64662470-0e58-4dfd-90ac-43227d773556
      linux /boot/vmlinuz-3.2.0-2-amd64 root=UUID=cc3bca0d-aee4-4b9c-95c2-57212cc36d4d ro quiet
      initrd /boot/initrd.img-3.2.0-2-amd64
    The first UUID is that of sda9, but the second UUID is that of sda5. I can change the second UUID at startup (with E) and it then boots sda9. So how can I get grub.cfg corrected so that the sda9 entry in the GRUB list boots from sda9 permanently?
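    A sketch of the usual way to make this stick, assuming the copy on sda9 boots fine once the root= UUID is corrected by hand: boot the sda9 system (editing the entry with E as described), make sure its /etc/fstab already carries the new UUID, and regenerate grub.cfg from inside it so every entry is rebuilt from the current UUIDs instead of being hand-edited:

        # from the system running on /dev/sda9
        sudo grub-mkconfig -o /boot/grub/grub.cfg   # Debian also ships this as: sudo update-grub
        # only if sda9's GRUB should own the MBR as well
        sudo grub-install /dev/sda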

    Read the article

  • Getting Started in SuSE as an Ubuntu User

    - by Subhamoy Sengupta
    I am not a Linux newbie, but I haven't touched SuSE in a very, very long time (the last time I tried it, it was SuSE 7!). Now I finally felt like giving it a try, and many things seem strange or unnecessarily complex. I have a series of questions. How do I ensure that my packages are up to date? It sounds silly, but I have tried the obvious methods already. I have disabled the default repositories that show up when you do zypper lr, and added the Tumbleweed and Packman repositories (Essentials, Multimedia, Extra). Then I did sudo zypper ref --force and then sudo zypper dup, and it tells me many dependencies are not met. I have already added solver.allowVendorChange=true to /etc/zypp/zypp.conf, so it should not care which repository the latest versions are in and just upgrade to them. Even when I chose to skip the packages with unmet dependencies, and quite a bit seemed to happen in the background, I opened Firefox afterwards and the version was 7! I am guessing things did not go as expected. Of course this is not a problem with SuSE, but with me not understanding the system right. How do I do it right? When I start typing the arguments of a command, for example sudo zypper install, and I type sudo zypper ins and keep hitting TAB, nothing happens! That always worked in Ubuntu and I feel very uneasy without it. Is this how SuSE is supposed to be? When I try to install something and start typing its name, even though the package exists and I am sure of it, hitting TAB does not autocomplete it. This is also quite inconvenient. Why is it not happening? There are many things in SuSE that are really great, and I think I will stay with it and not go back to Ubuntu once I settle these very rudimentary issues. But right now they are giving me a lot of grief! Please help!
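    A rough sketch of the usual "switch everything over" sequence, with the repository aliases below as placeholders for whatever zypper lr actually shows (the explicit --from flags make the vendor change deliberate rather than relying only on zypp.conf), plus the package that provides tab completion:

        sudo zypper ref
        sudo zypper dup --from Tumbleweed --from Packman_Essentials --from Packman_Multimedia --from Packman_Extra
        # tab completion for zypper subcommands comes from the bash-completion package;
        # how far it completes package names depends on the zypper version
        sudo zypper install bash-completion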

    Read the article

  • How can I resize images in multiple subdirectories more effectively?

    - by jtfairbank
    I have the original images in a directory structure that looks like this:
      ./Alabama/1.jpg
      ./Alabama/2.jpg
      ./Alabama/3.jpg
      ./Alaska/1.jpg
      ...the rest of the states...
    I wanted to convert all of the original images into thumbnails so I can display them on a website. After a bit of digging and experimenting, I came up with the following Linux command:
      find . -type f -iname '*.jpg' | sed -e 's/\.jpg$//' | xargs -I Y convert Y.jpg -thumbnail x100\> Y-small.jpg
    It recursively finds all the jpg images in my subdirectories, removes the file type (.jpg) from each name so I can rename it later, then makes the image into a thumbnail and renames it with '-small' appended before the file type. It worked for my purposes, but it's a tad complicated and it isn't very robust. For example, I'm not sure how I would insert 'small-' at the beginning of the file's name (so ./Alabama/small-1.jpg). Questions: Is there a better, more robust way of creating thumbnails from images that are located in multiple subdirectories? Can I make the existing command more robust (for example, by using sed to rename the outputted thumbnail before it is saved, basically modifying the Y-small.jpg part)?
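    One hedged sketch of a sturdier variant, still using ImageMagick's convert as above: -execdir runs the command inside each file's own directory, so the "small-" prefix lands next to the original, and the extra -name test keeps already-generated thumbnails from being reprocessed on a second run.

        find . -type f -iname '*.jpg' ! -name 'small-*' -execdir sh -c '
            for f; do
                f=${f#./}                                   # -execdir hands names over as ./name
                convert "$f" -thumbnail "x100>" "small-${f%.*}.jpg"
            done
        ' sh {} +

    Whether this is "better" is a judgment call; mogrify -path with a separate output directory is another common route.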

    Read the article

  • Massive number of context switches on ksoftirqd

    - by Pace
    We have two servers that are grinding to a halt. One is a VM and the other is bare metal. Neither of them runs similar code, but they are on the same network. It appears that an incredible number of context switches are arising from ksoftirqd (which is taking up a lot of CPU).
    vmstat output:
      procs -----------memory---------- ---swap-- -----io---- -system-- -----cpu------
       r  b   swpd   free   buff  cache   si   so    bi    bo    in     cs us sy id wa st
       1  0      0 605092 182496 2637556    0    0     0     0  4177 519187  8 19 73  0  0
       2  0      0 605092 182496 2637556    0    0     0     0  4792 520980  8 19 74  0  0
       3  0      0 605092 182496 2637552    0    0     0     0  2137 659640 18 26 56  0  0
      ...
    pidstat output:
      TCK4-BM-06A:~ # pidstat -w -I 5
      Linux 2.6.32.12-0.7-default (TCK4-BM-06A)  07/02/2012  _x86_64_
      03:03:01 PM       PID   cswch/s  nvcswch/s  Command
      03:03:06 PM         1      0.20       0.00  init
      03:03:06 PM         4 386666.27       0.00  ksoftirqd/0
      03:03:06 PM         6      0.60       0.00  ksoftirqd/1
      03:03:06 PM         8 378213.17       0.00  ksoftirqd/2
      03:03:06 PM        10      0.20       0.00  ksoftirqd/3
      03:03:06 PM        12      0.20       0.00  ksoftirqd/4
      03:03:06 PM        26 377115.37       0.00  ksoftirqd/11
      03:03:06 PM        27      1.80       0.00  events/0
      03:03:06 PM        28      1.00       0.00  events/1
      03:03:06 PM        29      1.00       0.00  events/2
      03:03:06 PM        30      1.00       0.00  events/3
      03:03:06 PM        31      0.80       0.00  events/4
      03:03:06 PM        32      0.80       0.00  events/5
      ...
    My initial thought is that, since both are on the same network, something is flooding the network. Is this consistent with the data?
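    A small diagnostic sketch that would help confirm or rule out a network flood, assuming the NIC is eth0 and sysstat (for sar) is installed; interface names are illustrative and nothing here changes system state:

        # softirq load usually tracks an interrupt source; see which counters are climbing
        watch -n 1 'grep -E "eth|NET_RX|NET_TX" /proc/interrupts /proc/softirqs'
        # per-interface packet rates; a broadcast/multicast storm shows up here
        sar -n DEV 1 10
        # raw look at what is actually arriving on the wire
        sudo tcpdump -i eth0 -n -c 100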

    Read the article

  • TCP Keepalive and firewall killing idle sessions

    - by Carlos A. Ibarra
    In a customer site, the network team added a firewall between the client and the server. This is causing idle connections to get disconnected after about 40 minutes of idle time. The network people say that the firewall doesn't have any idle connection timeout, but the fact is that the idle connections get broken. In order to get around this, we first configured the server (a Linux machine) with TCP keepalives turned on with tcp_keepalive_time=300, tcp_keepalive_intvl=300, and tcp_keepalive_probes=30000. This works, and the connections stay viable for days or more. However, we would also like the server to detect dead clients and kill the connection, so we changed the settings to time=300,intvl=180,probes=10, thinking that if the client was indeed alive, the server would probe every 300s (5 minutes) and the client would respond with an ACK and that would keep the firewall from seeing this as an idle connection and killing it. If the client was dead, after 10 probes, the server would abort the connection. To our surprise, the idle but alive connections get killed after about 40 minutes as before. Wireshark running on the client side shows no keepalives at all between the server and client, even when keepalives are enabled on the server. What could be happening here? If the keepalive settings on the server are time=300,intvl=180,probes=10, I would expect that if the client is alive but idle, the server would send keepalive probes every 300 seconds and leave the connection alone, and if the client is dead, it would send one after 300 seconds, then 9 more probes every 180 seconds before killing the connection. Am I right? One possibility is that the firewall is somehow intercepting the keepalive probes from the server and failing to pass them on to the client, and the fact that it got a probe makes it think that the connection is active. Is this common behavior for a firewall? We don't know what kind of firewall is involved. The server is a Teradata node and the connection is from a Teradata client utility to the database server, port 1025 on the server side, but we have seen the same problem with an SSH connection so we think it affects all TCP connections.
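    One check worth adding to this investigation, sketched below on the assumption that ss from iproute2 is available on the server: the tcp_keepalive_* sysctls only apply to sockets that have SO_KEEPALIVE set by the application, so it is worth confirming that the idle Teradata session actually has a keepalive timer running (port 1025 is taken from the question).

        # established sessions on the Teradata port together with their timers;
        # a socket using keepalive reports something like timer:(keepalive,4min32sec,0)
        ss -tno state established '( sport = :1025 )'
        # the values the kernel is really using right now
        sysctl net.ipv4.tcp_keepalive_time net.ipv4.tcp_keepalive_intvl net.ipv4.tcp_keepalive_probes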

    Read the article

  • Allowing access to company files across the internet

    - by Renaud Bompuis
    The premise: I've been tasked with finding a solution to the following scenario:
      - our main file server is a Linux machine
      - on the LAN, users simply access the files using SMB
      - each user has an account on the file server and his/her own access rights
      - user accounts are simple passwd/group security accounts, not NIS/LDAP
    The problem: We want to give users (or at least some of them, say if they belong to a particular group) the ability to access the files from the Internet while travelling. Ideally I'd like a seamless solution. Maybe something that allows the user to access a mapped drive would be ideal. A web-oriented solution is also good, but it should present files in a way that is familiar to users, in an explorer-like fashion for instance. Security is a must of course, and users would be expected to log in. The connection to the server should also be encrypted. Does anyone have pointers to neat solutions? Any experiences? Edit: The client machines are Windows only.
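    For what it's worth, one common way to cover "mapped drive, per-user login, encrypted" without exposing SMB is SSH/SFTP restricted to a dedicated group; the sketch below is only an outline (group name and chroot path are made up), and the Windows clients would then use WinSCP or an SFTP drive-mapping tool rather than a native share:

        # /etc/ssh/sshd_config -- illustrative excerpt (replaces any existing "Subsystem sftp" line)
        Subsystem sftp internal-sftp

        Match Group travellers
            ChrootDirectory /srv/files      # must be owned by root and not group/world writable
            ForceCommand internal-sftp
            AllowTcpForwarding no
            X11Forwarding no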

    Read the article

  • Monitoring instantaneous network throughput at one second intervals?

    - by Shaddi
    For a testing setup I have, I need to monitor the throughput through a "router"* at regular intervals of around 5 seconds or less (sub-second intervals would be very nice, but not required). Ideally, I would be able to generate a file which contains both the number of bytes and the number of packets seen during each interval. I will eventually be generating a time series of throughput from this data. On a previous setup using an older version of FreeBSD, there was a tool called "bpfmon" which gave me this information. However, I need to do this under a modern version of Linux (namely, Ubuntu 11.04). I have looked at both iptraf and iftop, but these do not appear to provide the resolution I need, nor do they seem to easily allow scraping the data I need. I understand iptables statistics may be able to give me what I'm after, but the examples I've seen of this seem to rely on repeatedly reading and resetting traffic counters, which seems like it could give inaccurate results since read/reset is not an atomic operation. I already capture a tcpdump trace of the traffic I'm interested in on the link I want to monitor, so I am open to approaches which simply parse that. I feel like this must be a common problem though, so I am hoping there will be a standard "best practice" tool for accomplishing this. *I say "router" in quotes because I am really talking about a machine with two bridged NICs through which all the traffic I'm interested in passes.
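    In case it helps, a minimal sketch of the simplest possible sampler, which just reads the kernel's own per-interface counters and prints deltas; the interface name is an assumption (pick one of the bridged NICs, or the bridge itself), and since the counters are only read, there is no read/reset race:

        #!/bin/bash
        # print per-interval byte and packet deltas for one interface
        IFACE=${1:-eth1}
        INTERVAL=${2:-1}

        counters() {
            cat /sys/class/net/"$IFACE"/statistics/{rx_bytes,rx_packets,tx_bytes,tx_packets}
        }

        read rb rp tb tp <<< "$(counters | tr '\n' ' ')"
        while sleep "$INTERVAL"; do
            read nrb nrp ntb ntp <<< "$(counters | tr '\n' ' ')"
            echo "$(date +%s) rx_bytes=$((nrb - rb)) rx_pkts=$((nrp - rp)) tx_bytes=$((ntb - tb)) tx_pkts=$((ntp - tp))"
            rb=$nrb rp=$nrp tb=$ntb tp=$ntp
        done

    Redirecting the output to a file gives the time series directly; parsing the existing tcpdump capture with a script would give the same numbers after the fact.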

    Read the article

  • postfix test and configuration problem

    - by Woho87
    Hi guys! I installed postfix using sudo yum install postfix postfix-mysql. I'm a newbie to mail systems, but I have one Amazon EC2 instance with a public DNS name. I used that public DNS name in most cases when I configured main.cf. The public DNS name I have is from Amazon and it is a long string (ec2-123-34-234-677.....amazon.com). This is what I configured in main.cf, with example.com replaced by ec2-123-.......amazon.com:
      myhostname = mail.example.com
      mydomain = example.com
      myorigin = $mydomain
      mydestination = example.com, $transport_maps
      local_recipient_maps = $alias_maps $virtual_mailbox_maps unix:passwd.byname
      home_mailbox = Maildir/
    How do I test postfix? I just want it to send emails for my web application. I tried to test it with telnet localhost 25 after typing sudo postfix start over SSH, but I receive a message that the telnet command cannot be found. I should also mention that I use the Amazon Linux distribution, because it is free. What have I done wrong? Are there any more configurations required? Please help!
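    A small sketch of one way to test locally, on the assumption that this is the stock Amazon Linux AMI, where the telnet client simply is not installed by default (which would explain the "command not found"):

        sudo yum install telnet        # just the client; postfix itself is already running
        telnet localhost 25
        # then, at the SMTP prompt, a hand-typed test message looks like:
        #   EHLO localhost
        #   MAIL FROM:<test@localhost>
        #   RCPT TO:<ec2-user@localhost>     (any local account works; ec2-user is a guess)
        #   DATA
        #   Subject: test
        #
        #   hello
        #   .
        #   QUIT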

    Read the article

  • Ubuntu 10.04 on VirtualBox gives error: Target filesystem doesn't have /sbin/init. No init found. Try passing init= bootarg

    - by Philip
    I'm a Linux newbie, and the only reason I have it installed is so I can stop having Windows incompatibility issues with Ruby on Rails. Having said that, it sure has been nice, and much faster, and I don't think I'll be doing any Winrails stuff anytime soon. So I created a virtual machine using VirtualBox and have had Ubuntu on it for the last 3 weeks. Recently Ubuntu asked if it could update a few things; I clicked 'ok'. Now it won't boot and I get this error:
      mount: mounting /dev on /root/dev failed: No such file or directory
      mount: mounting /sys on /root/sys failed: No such file or directory
      ...
      Target filesystem doesn't have /sbin/init.
      No init found. Try passing init= bootarg
      BusyBox v1.13.3...
      (initramfs) _
    So I cruised the forums, and there are a variety of solutions, but they all have to do with booting from the live CD (which I assume is the ISO image I used to install Ubuntu in the first place). But when I boot from that CD, it just hangs on the Ubuntu screen, and the little dots keep cycling white to red; it hung there for an hour, so I think it was stuck. Not sure what I can do; can I do anything from the BusyBox shell (or whatever that is) to fix things? The thing is, it took about 10 hours to get everything the way I needed with all the gems and whatnot. And I didn't really write down what I tweaked, and I'm middle-aged, so all that information has leaked out by now and I don't want to do it again. I'd really like to repair my existing install. One question you might have is: is there something wrong with the ISO? I don't think so, because I made a new virtual machine and used that same ISO file to install a fresh Ubuntu. Any help much appreciated. Phil
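    In case it helps, a rough sketch of the usual live-CD repair route once the ISO does come up (choose "Try Ubuntu" rather than install); the partition name is an assumption, so it is worth confirming with sudo fdisk -l first, and the commands inside the chroot simply finish the interrupted update and rebuild the initramfs:

        sudo mount /dev/sda1 /mnt            # root partition of the broken install (verify first)
        sudo mount --bind /dev  /mnt/dev
        sudo mount --bind /proc /mnt/proc
        sudo mount --bind /sys  /mnt/sys
        sudo chroot /mnt
        # inside the chroot:
        dpkg --configure -a
        apt-get -f install
        update-initramfs -u -k all
        update-grub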

    Read the article

  • krenew command not working: Permission Denied

    - by prathmesh.kallurkar
    I am using a Linux server to perform my simulations. Login and the file system of the server are protected using Kerberos; the file system is served over NFS. Since my simulations take a lot of time to run, my ssh sessions used to hang regularly, so I have started running my simulations in byobu (similar to screen). In order to make sure that my Kerberos session remains active, I am using the krenew command. I have entered the following commands in my .bash_profile file (I am sure that it is called on every login):
      killall -9 krenew 2> /dev/null
      krenew -b -t -K 10
    So every time I ssh to the server, I kill the existing krenew command, then spawn a new krenew with -b (which runs it in the background), -t (I forget why I was using this option!), and -K 10 (it runs every 10 minutes and refreshes the Kerberos cache). When I run the simulations, they run for 14 hours and then suddenly I get a "Permission denied" error when reading files. Is the command that I am running incorrect?
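    A couple of checks worth running, sketched on the assumption that the ticket is simply hitting a lifetime limit: krenew can only renew a ticket for as long as the KDC allows, so if the ticket is not renewable (or its maximum renewable life is short), the credential cache goes stale no matter how often krenew fires.

        # is the current ticket renewable, and until when?
        klist -f                    # look for the R flag and the "renew until" timestamp
        # request an explicitly renewable ticket before starting the long job
        kinit -r 7d
        # then let krenew keep it fresh inside byobu, as before
        krenew -b -K 10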

    Read the article

  • Simple Distributed Disconnected way to sync a directory

    - by Rory
    I want to start regularly backing up the home directory on my Ubuntu laptop, machine X. Suppose I have access to 2 different remote (Linux) servers that I can back up to, machines A & B. Machine X will be the master and should be synced to A and B. I could just regularly run rsync from X to A and then from X to B. That's all I need. However, I'm curious whether there's a more bandwidth-efficient, and hence faster, way to do it. Assume X is going to be on residential-style broadband lines; since I don't want to soak up the bandwidth, I would limit the transfer from X. A and B will be on all the time, but X will not be, so I'd also like to reduce the amount of time that X spends transferring, potentially allowing A and B to spend more time transferring. Also, X won't be connected all the time. What's the best way to do this? rsync from X to A, then from A to B? Timing that right could be troublesome. I don't want to keep old files around, so if I were to rsync, the --del option would be used. Could that mean something might get transferred from A to B, then deleted from B, then transferred from A to B again? That's suboptimal. I know there are fancy distributed filesystems like gluster, but I think that's overkill in this case, and it might not fit with the disconnected nature.
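    A minimal sketch of the chained approach, with host names and paths made up and key-based ssh assumed from X to A and from A to B: push once from X to A with a bandwidth cap, then let A fan out to B on its own time, so X's uplink is only used once per run.

        #!/bin/bash
        # push the laptop's home directory to A, capping upstream usage (KiB/s)
        rsync -az --delete --bwlimit=100 "$HOME"/ serverA:backups/home/
        # then have A mirror itself to B over its (presumably faster) link
        ssh serverA 'rsync -az --delete backups/home/ serverB:backups/home/'

    Because the second rsync only runs after the first completes, the delete-then-retransfer scenario between A and B does not arise within a single run.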

    Read the article

  • Preventing endless forwarding with two routers

    - by jarmund
    The network in question looks basically like this:
                             /----Inet1
                            /
      H1---[111.0/24]---GW1---[99.0/24]
                               \----GW2-----Inet2
    Device explanation:
      H1: host with IP 192.168.111.47
      GW1: Linux box with IPs 192.168.111.1 and 192.168.99.2, as well as its own route to the internet.
      GW2: generic wireless router with IP 192.168.99.1 and its own route to the internet.
      Inet1 & Inet2: two possible routes to the internet
    In short: H1 has more than one possible route to the internet. H1 is supposed to access the internet only via GW2 when that link is up, so GW1 has some policy-based routing just for H1:
      ip rule add from 192.168.111.47 table 991
      ip route add default via 192.168.99.1 table 991
    While this works as long as GW2 has a direct link to the internet, the problem occurs when that link is down. What then happens is that GW2 forwards the packet back to GW1, which again forwards it back to GW2, creating an endless loop of TCP ping-pong. The preferred result would be that the packet was simply dropped. Is there something that can be done with iptables on GW1 to prevent this? Basically, an iptables-friendly version of "if the packet comes from GW2 but originated from H1, drop it". Note 1: it is preferable not to change anything on GW2. Note 2: H1 needs to be able to talk to both GW1 and GW2, and vice versa, but only GW2 should lead to the internet. TL;DR: H1 should only be allowed internet access via GW2, but still needs to be able to talk to both GW1 and GW2. EDIT: The interfaces on GW1 are br0.105 for the '99' network and br0.111 for the '111' network. The solution may or may not be obnoxiously simple, but I have not been able to produce the proper iptables syntax myself, so help would be most appreciated. PS: This is a follow-up question from this question
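    For what it's worth, a sketch of one rule on GW1 that matches exactly the "came back in from GW2 but originated at H1" case, using the interface names given in the edit (worth testing before relying on it): legitimate replies from GW2 to H1 carry H1's address as the destination rather than the source, so they pass untouched, while the looped-back packets arrive on br0.105 still claiming H1 as the source and get dropped.

        # on GW1: drop forwarded packets that come back in on the 99-side interface
        # with H1's address as their source
        iptables -I FORWARD -i br0.105 -s 192.168.111.47 -j DROP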

    Read the article

  • Strange issue with 74.125.79.118

    - by Domenic
    I'm facing a strange issue on a Linux server. After frequent crashes, analysis found that the server is brought to collapse by a huge number of connections to the IP 74.125.79.118, originating from PHP scripts of the hosted web sites. After an in-depth analysis of the files, I found that no malware infections are present. IP 74.125.79.118 belongs to Google. After a Google search I realized that connections to this IP are generated by YouTube videos embedded in the web sites, among other Google features like safe search. But I don't understand how this type of behavior can bring the server to collapse, and the uniqueness of the situation leads me to think that it is far from being attributable only to Google and YouTube. I've also found that blocking connections from eth0 to 74.125.79.118:80 doesn't solve the issue, but if I stop DNS traffic from eth0 to the internet, the connections to 74.125.79.118 stop. I'm really confused about this. Any suggestions? Cheers.
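    A quick way to see exactly which processes and scripts hold those connections, sketched below on the assumption that lsof and the ss tool from iproute2 are installed (purely diagnostic, nothing is blocked or changed):

        # which processes own sockets to that address right now
        ss -tnp dst 74.125.79.118
        # same view with process/file context, useful for tying a socket back to a vhost
        lsof -nP -i@74.125.79.118
        # a simple count, to watch whether a block or limit is actually taking effect
        ss -tn dst 74.125.79.118 | wc -l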

    Read the article

  • CentOS repository packages vs latest developer release

    - by fran
    I have started to run a personal server using CentOS, and I have noticed that many packages available to install from the repository are old compared with the latest release from the developer. I know that installing packages from the repository is very easy, and I guess that the supplied versions are stable and prepared to work without any trouble, but I still find it odd having so much software that lags behind the current version. It's my first time with Linux and I don't know what the "normal" thing is: should I stick to whatever version the repository supplies, or try to get the latest from the developer? To be more precise, the repository supplies the Apache httpd web server at version 2.2. I wanted to update to 2.4, so I started removing Apache and the dependency packages that come with CentOS in order to use the latest ones, but when I was about to remove pcre v6 to replace it with v8, I found out that 132 installed packages depend on it and that it is probably not a good idea to remove it. That made me think twice about getting the latest software instead of using the packages supplied by the official repositories. Should I leave things as they are instead of going on an upgrade rampage? Thanks

    Read the article

  • Logfile deleted on Oracle database: how to re-create it?

    - by Daniel
    For my database assignment we were looking into 'database corruption', and I was asked to delete the second redo log file, which I did with the command rm log02a.rdo in the $HOME/ORADATA/u03 directory. I then started up my database using startup pfile=$PFILE nomount and mounted it with alter database mount;. Now when I try to open it with alter database open; it gives me the error:
      ORA-03113: end-of-file on communication channel
      Process ID: 22125
      Session ID: 25 Serial number: 1
    I am assuming this is because the second redo log file is missing. log01a.rdo is still there, just not the one I deleted. How can I go about recovering from this so that I can open my database again? I have looked into the database creation scripts, and they specify the log02a.rdo file to be 10M in size and part of group 2. If I run select group#, member from v$logfile; I get:
      1 /oradata/student_db/user06/ORADATA/u03/log01a.rdo
      2 /oradata/student_db/user06/ORADATA/u03/log02a.rdo
      3 /oradata/student_db/user06/ORADATA/u03/log03a.rdo
      4 /oradata/student_db/user06/ORADATA/u03/log04a.rdo
    So it is part of group 2. If I try to add the log02a.rdo file again, I'm told it is "already part of the database". If I drop group 2 and then add it again with this command:
      ALTER DATABASE ADD LOGFILE GROUP 2 ('$HOME/ORADATA/u03/log02a.rdo') SIZE 10M;
    nothing happens. It supposedly alters the database, but it still won't open. Any ideas what I can do to re-create this file and be able to open my database again?
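    For reference, a sketch of the standard recovery path for a lost redo log group that is not the current one; it is worth checking v$log first, because if group 2 happened to be CURRENT or ACTIVE when it was deleted, this simple route does not apply and incomplete recovery would be needed instead:

        sqlplus / as sysdba <<'SQL'
        -- with the database mounted (as in the question):
        SELECT group#, status FROM v$log;
        -- if group 2 shows INACTIVE or UNUSED, recreate its missing file in place:
        ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 2;
        ALTER DATABASE OPEN;
        SQL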

    Read the article

  • ping incorrectly pinging 127.0.0.1

    - by AlexW
    I've got an odd DNS issue. I'm running a dual IPv4/IPv6 environment on Linux. Pinging some sites results in ping pinging 127.0.0.1, e.g.:
      #> ping authserver.mojang.com
      PING authserver.mojang.com (127.0.0.1) 56(84) bytes of data.
      64 bytes from localhost.localdomain (127.0.0.1): icmp_seq=1 ttl=64 time=0.045 ms
      64 bytes from localhost.localdomain (127.0.0.1): icmp_seq=2 ttl=64 time=0.043 ms
      64 bytes from localhost.localdomain (127.0.0.1): icmp_seq=3 ttl=64 time=0.058 ms

      --- authserver.mojang.com ping statistics ---
      3 packets transmitted, 3 received, 0% packet loss, time 2000ms
      rtt min/avg/max/mdev = 0.043/0.048/0.058/0.010 ms
    Dig, however, correctly returns the following:
      # dig authserver.mojang.com

      ; <<>> DiG 9.9.3-P2 <<>> authserver.mojang.com
      ;; global options: +cmd
      ;; Got answer:
      ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 15800
      ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

      ;; OPT PSEUDOSECTION:
      ; EDNS: version: 0, flags:; udp: 512
      ;; QUESTION SECTION:
      ;authserver.mojang.com.    IN    A

      ;; ANSWER SECTION:
      authserver.mojang.com.    5    IN    A    54.235.119.47

      ;; Query time: 14 msec
      ;; SERVER: 2001:4860:4860::8888#53(2001:4860:4860::8888)
      ;; WHEN: Sat Nov 09 15:34:40 GMT 2013
      ;; MSG SIZE  rcvd: 66
    I'm confused! My web browser returns the correct website, and the same computer booted into Windows also works correctly.
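    Since ping resolves names through the NSS stack while dig talks straight to the DNS server, a sketch of where to look for the discrepancy (no particular distro assumed; all read-only checks):

        # what the resolver library (the path ping uses) actually returns
        getent hosts authserver.mojang.com
        getent ahosts authserver.mojang.com
        # a stray override in the hosts file is the usual culprit
        grep -i mojang /etc/hosts
        # check the lookup order and any extra sources (mdns, nscd, etc.)
        grep ^hosts: /etc/nsswitch.conf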

    Read the article
