Search Results

Search found 26263 results on 1051 pages for 'linux guest'.

Page 427/1051 | < Previous Page | 423 424 425 426 427 428 429 430 431 432 433 434  | Next Page >

  • RHEL 5.3 Kickstart - How to specify the location of an individual package in the Workstation folder?

    - by Ed
    I keep getting "package does not exist" errors during the install. I made a kickstart ISO to create an unattended install of a RHEL 5.3 build machine for C++ software releases. It pulls the kickstart config file from our internal web server, which is handy; it makes the config easy to test and modify without burning a new ISO, and I plan to check it into version control once it works. Anyway, the RPM packages are located in two folders on the disc: Client and Workstation. Packages physically located under the Client folder install fine, but the installer cannot find those under the Workstation folder, such as doxygen and subversion, and complains that the packages do not exist. Is there a way to specify the location of individual packages?

        # ---------------------------------------------------------------------
        # P A C K A G E S
        # ---------------------------------------------------------------------
        %packages
        @gnome-desktop
        @core
        @base
        @base-x
        @printing
        @development-tools
        emacs
        kexec-tools
        fipscheck
        xorg-x11-server-Xnest
        xorg-x11-server-Xvfb
        # Packages located in the Workstation folder *** install cannot find any of these ??
        bison
        doxygen
        gcc-c++
        subversion
        zlib-devel
        freetype-devel
        libxml2-devel

    Thanks in advance, -Ed
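
    One possible direction (a sketch, not verified against RHEL 5.3 media; the baseurl path is an assumption about where the installer mounts the disc) is to declare the Workstation folder as an additional repository in the kickstart file, so anaconda indexes its packages alongside the Client ones:

        # hypothetical: point anaconda at the second package folder on the install media
        repo --name=Workstation --baseurl=file:///mnt/source/Workstation

    This only works if the Workstation directory carries its own repodata/ metadata; if it does not, running createrepo against that directory when mastering the ISO would be the first step.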

    Read the article

  • What should I encrypt in Debian during install?

    - by ianfuture
    I have seen various guides and recommendations on the web about how best to do this, but nothing that clearly explains the best way and why. I understand that part of Debian must remain unencrypted on its own partition so the system can boot; most guides call this /boot and set the boot flag on it. Next, I believe the best approach is to create a second partition out of all the remaining disk space, encrypt it, create an LVM volume group on top of that, and then carve my various partitions out of the LVM, naming them and selecting their sizes and filesystem types. Can I include swap in the encrypted LVM part? Is this approach sound? If so, what partitions should I use? (This is going to be a minimal server install, with a view to installing whatever I need for a dev server as and when I need it.) Finally, how does the installer know what to put in each partition I define? I appreciate this is more than one question, but any help and suggestions would be appreciated. If further clarification is needed, please ask in the comments. Thanks.. Ian
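
    Putting swap inside the encrypted LVM is the usual arrangement, since an unencrypted swap partition can leak memory contents to disk. As a rough sketch of the layout described above, done by hand rather than through the installer (device names and sizes are illustrative assumptions):

        # /dev/sda1 -> small unencrypted /boot; /dev/sda2 -> everything else, encrypted
        cryptsetup luksFormat /dev/sda2
        cryptsetup luksOpen /dev/sda2 cryptlvm
        pvcreate /dev/mapper/cryptlvm
        vgcreate vg0 /dev/mapper/cryptlvm
        lvcreate -L 2G  -n swap vg0
        lvcreate -L 10G -n root vg0
        lvcreate -l 100%FREE -n home vg0
        mkswap /dev/vg0/swap
        mkfs.ext4 /dev/vg0/root
        mkfs.ext4 /dev/vg0/home

    The Debian installer's guided "encrypted LVM" option produces essentially this layout, and it knows what goes in each partition because every volume you define is identified by its mount point.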

    Read the article

  • Kickstart Partitioning Configuration

    - by Flo
    I've been trying to run a kickstart script with the following partition configuration:

        # Clear the master boot record
        zerombr
        bootloader --location=mbr --driveorder=sda --append=" rhgb crashkernel=auto quiet"
        # Set up the partitions/logical volumes/volume groups
        clearpart --all
        part /boot --fstype=ext4 --asprimary --size=512 --ondisk=sda
        part swap --size=2048 --fstype=swap --ondisk=sda
        part pv.01 --fstype=ext4 --grow --size=200 --ondisk=sda
        part pv.02 --fstype=ext4 --grow --size=200 --ondisk=sdb
        volgroup VolGroup pv.01 pv.02 --pesize=32768
        logvol /opt --fstype=ext4 --name=opt.fs --vgname=VolGroup --size=40000
        logvol / --fstype=ext4 --name=root.fs --vgname=VolGroup --size=78000

    I have two hard drives, and this looks to me like a really simple configuration. When I run the kickstart, I keep getting a stream of errors from the Python code that configures partitions. The only potentially useful piece of information in them is "KeyError: /dev/sda/". I have tried a number of variations of this configuration, but nothing has worked. Any ideas?
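
    Two details worth checking (suggestions, not verified against this exact setup): anaconda usually wants clearpart to initialize fresh disk labels, and physical-volume definitions should not carry a filesystem type. A sketch of the adjusted lines:

        # initialize disk labels on both target drives
        clearpart --all --initlabel --drives=sda,sdb
        # PVs take no --fstype; LVM formats them itself
        part pv.01 --grow --size=200 --ondisk=sda
        part pv.02 --grow --size=200 --ondisk=sdb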

    Read the article

  • Why not block ICMP?

    - by Agvorth
    I think I almost have my iptables setup complete on my CentOS 5.3 system. Here is my script:

        # Establish a clean slate
        iptables -P INPUT ACCEPT
        iptables -P FORWARD ACCEPT
        iptables -P OUTPUT ACCEPT
        iptables -F    # Flush all rules
        iptables -X    # Delete all chains

        # Disable routing. Drop packets if they reach the end of the chain.
        iptables -P FORWARD DROP

        # Drop all packets with a bad state
        iptables -A INPUT -m state --state INVALID -j DROP

        # Accept any packets that have something to do with ones we've sent on outbound
        iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT

        # Accept any packets coming or going on localhost (this can be very important)
        iptables -A INPUT -i lo -j ACCEPT

        # Accept ICMP
        iptables -A INPUT -p icmp -j ACCEPT

        # Allow ssh
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT

        # Allow httpd
        iptables -A INPUT -p tcp --dport 80 -j ACCEPT

        # Allow SSL
        iptables -A INPUT -p tcp --dport 443 -j ACCEPT

        # Block all other traffic
        iptables -A INPUT -j DROP

    For context, this machine is a VPS web-app host. In a previous question, Lee B said that I should "lock down ICMP a bit more." Why not just block it altogether? What bad thing would happen if I did? And if I shouldn't block ICMP entirely, how could I go about locking it down more?
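
    Blocking ICMP outright breaks more than ping: ICMP carries error signalling such as "fragmentation needed", which path-MTU discovery depends on, so dropping everything can cause mysterious stalls for some clients. A middle ground (a sketch to adapt, replacing the blanket ACCEPT above) is to allow only the types a server actually needs and rate-limit echo requests:

        # allow the ICMP types hosts rely on, rate-limit ping, drop the rest
        iptables -A INPUT -p icmp --icmp-type destination-unreachable -j ACCEPT
        iptables -A INPUT -p icmp --icmp-type time-exceeded -j ACCEPT
        iptables -A INPUT -p icmp --icmp-type echo-request -m limit --limit 5/s -j ACCEPT
        iptables -A INPUT -p icmp -j DROP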

    Read the article

  • Create a video stream from an image

    - by skerit
    I have a network camera that can only give me a still image, not a video stream. On my home network this image is at http://192.168.1.16/loginfree.jpg, and I'm able to fetch it a few times per second. I would now like to serve it up as a video stream so I can use it in ZoneMinder. Any idea how I'd do that? I've messed around with named pipes and ffmpeg's image2pipe, but I can't get it to work properly.
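
    One way to build on the image2pipe idea (an untested sketch; the frame rate and output target are assumptions) is to let a shell loop do the fetching and feed the JPEGs to ffmpeg on stdin:

        # poll the snapshot URL ~5x/s and wrap the JPEGs into an MPEG-TS stream
        while true; do
            curl -s http://192.168.1.16/loginfree.jpg
            sleep 0.2
        done | ffmpeg -f image2pipe -c:v mjpeg -r 5 -i - -f mpegts udp://127.0.0.1:1234

    ZoneMinder could then consume the resulting stream; alternatively, ZoneMinder's "Remote" monitor type can poll a snapshot URL directly, which may avoid the re-encoding step entirely.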

    Read the article

  • Amplified reflection attack on DNS

    - by Mike Janson
    The term is new to me, so I have a few questions about it. I've heard it mostly happens with DNS servers? How do you protect against it? How do you know whether your servers can be abused this way? This is a configuration issue, right? Here is my named.conf:

        include "/etc/rndc.key";

        controls {
            inet 127.0.0.1 allow { localhost; } keys { "rndc-key"; };
        };

        options {
            /* make named use port 53 for the source of all queries, to allow
             * firewalls to block all ports except 53:
             */
            // query-source port 53;
            /* We no longer enable this by default as the DNS poison exploit has
               forced many providers to open up their firewalls a bit */

            // Put files that named is allowed to write in the data/ directory:
            directory "/var/named"; // the default
            pid-file "/var/run/named/named.pid";
            dump-file "data/cache_dump.db";
            statistics-file "data/named_stats.txt";
            /* memstatistics-file "data/named_mem_stats.txt"; */
            allow-transfer { "none"; };
        };

        logging {
            /* If you want to enable debugging, eg. using the 'rndc trace' command,
             * named will try to write the 'named.run' file in the $directory (/var/named).
             * By default, SELinux policy does not allow named to modify the /var/named
             * directory, so put the default debug log file in data/ :
             */
            channel default_debug {
                file "data/named.run";
                severity dynamic;
            };
        };

        view "localhost_resolver" {
            /* This view sets up named to be a localhost resolver (caching only nameserver).
             * If all you want is a caching-only nameserver, then you need only define this view:
             */
            match-clients { 127.0.0.0/24; };
            match-destinations { localhost; };
            recursion yes;

            zone "." IN {
                type hint;
                file "/var/named/named.ca";
            };

            /* these are zones that contain definitions for all the localhost
             * names and addresses, as recommended in RFC1912 - these names should
             * ONLY be served to localhost clients:
             */
            include "/var/named/named.rfc1912.zones";
        };

        view "internal" {
            /* This view will contain zones you want to serve only to "internal" clients
               that connect via your directly attached LAN interfaces - "localnets". */
            match-clients { localnets; };
            match-destinations { localnets; };
            recursion yes;

            zone "." IN {
                type hint;
                file "/var/named/named.ca";
            };

            // include "/var/named/named.rfc1912.zones";
            // you should not serve your rfc1912 names to non-localhost clients.

            // These are your "authoritative" internal zones, and would probably
            // also be included in the "localhost_resolver" view above :
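
    On the questions themselves: amplification works because DNS answers over UDP can be far larger than the queries that trigger them, and UDP source addresses can be spoofed, so an open recursive resolver will happily fire large responses at a forged victim address. The views above already limit recursion to localhost and localnets, which is the main defence. A sketch of the extra knobs (the rate-limit block needs a BIND build with Response Rate Limiting, 9.9.4 or later, so treat it as optional):

        options {
            // never recurse for the world, only for explicitly trusted clients
            allow-recursion { 127.0.0.0/8; localnets; };
            allow-query-cache { 127.0.0.0/8; localnets; };
            // RRL, if the build supports it: cap identical responses per client
            rate-limit { responses-per-second 5; };
        };

    A quick external test is to dig the server's public IP from outside for a zone it is not authoritative for; if it answers, it can be used as a reflector.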

    Read the article

  • OpenVPN bridge network from routed clients

    - by gphilip
    I have the following setup:

        subnet 1 - 10.0.1.0/24, with a machine used as NAT that also runs an OpenVPN client
        subnet 2 - 192.168.1.0/24, with an OpenVPN server (the machine in subnet 1 connects here)
        subnet 3 - 10.0.2.0/24, which uses the NAT machine in subnet 1 to access the internet,
                   so all non-local traffic is routed to its eth0 interface

    The OpenVPN client creates the tun0 interface and the appropriate routing, so from the NAT machine I can reach hosts in 192.168.1.0/24:

        [root@ip-10-0-1-208 ~]# telnet 192.168.1.186 8081
        Trying 192.168.1.186...
        Connected to 192.168.1.186.
        Escape character is '^]'.

        [root@ip-10-0-1-208 ~]# route -n
        Kernel IP routing table
        Destination      Gateway     Genmask          Flags Metric Ref  Use Iface
        0.0.0.0          10.0.1.1    0.0.0.0          UG    0      0      0 eth0
        10.0.1.0         0.0.0.0     255.255.255.0    U     0      0      0 eth0
        10.8.0.1         10.8.0.5    255.255.255.255  UGH   0      0      0 tun0
        10.8.0.5         0.0.0.0     255.255.255.255  UH    0      0      0 tun0
        169.254.169.254  0.0.0.0     255.255.255.255  UH    0      0      0 eth0
        192.168.0.0      10.8.0.5    255.255.0.0      UG    0      0      0 tun0

    However, when I try the same from subnet 3, it cannot reach that machine:

        [root@ip-10-0-2-61 ~]# telnet 192.168.1.186 8081
        Trying 192.168.1.186...

    I suspect this is because traffic from subnet 3 arrives on eth0 of the NAT machine and never jumps over to tun0. What is the easiest way to resolve this? I don't want to use iptables. I can't change the routing on the machines in subnet 1, because it is done in AWS and only works with specific interfaces. Also, the NAT machine gets its IP via DHCP, so bridging is a bit complicated. IP forwarding is enabled on the NAT machine:

        [root@ip-10-0-1-208 ~]# cat /proc/sys/net/ipv4/ip_forward
        1

    Thank you!
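
    One thing the tunnel itself is probably missing (a suggestion, assuming access to the OpenVPN server's configuration): the server only knows how to send replies back to the VPN client's own address, not to 10.0.2.0/24 behind it, so return packets die at the server. OpenVPN's stock answer is a route/iroute pair:

        # server.conf: kernel route for the extra subnet into the tun interface
        route 10.0.2.0 255.255.255.0
        # client-config-dir file for this client: tell OpenVPN which client owns it
        iroute 10.0.2.0 255.255.255.0

    This avoids iptables entirely; the NAT machine forwards as it already does, and the server learns a return path.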

    Read the article

  • Ubuntu X doesn't start

    - by den-javamaniac
    I'm running desktop Ubuntu 9.10 on my Dell laptop; previously it was Ubuntu 9.04. After some period of time (let's say 3-4 months), X fails to start automatically after a restart. When that happens, my network manager applet doesn't start either (after I run startx). Can anyone point out what I'm missing, or what the problem is?
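
    A starting point for diagnosis (a sketch; these are the stock X/GDM log locations, which may differ on a customized system):

        # look for hard errors from the last failed X start
        grep '(EE)' /var/log/Xorg.0.log
        # and check whether GDM itself died before X did
        less /var/log/gdm/:0.log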

    Read the article

  • SSH reverse tunnel to monitor and manage remote devices

    - by acid_crucifix
    I have a distributed set of devices running Ubuntu 12.04 that I am shipping to clients and would like to manage remotely. They may not have fixed IPs and will potentially be behind firewalls. My plan is to have the devices (which are permanently connected to the net) poll a request URL and, based on the response, open a reverse tunnel to my server, so that I can access them through that tunnel. Most of what I have read about reverse tunneling over SSH covers single-use cases, and very little covers heavy production usage. Is there a reason for this, such as security or stability issues? Any help would be much obliged.
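
    For the long-lived tunnels themselves, the usual production pattern is autossh rather than bare ssh, so dropped connections re-establish automatically. A sketch (the host name, port mapping, and dedicated tunnel account are assumptions):

        # device side: keep a reverse tunnel from the server's port 2201 back to local sshd
        autossh -M 0 -N \
            -o ServerAliveInterval=30 -o ServerAliveCountMax=3 \
            -o ExitOnForwardFailure=yes \
            -R 2201:localhost:22 tunnel@manage.example.com

    On the server, ssh -p 2201 localhost then lands on the device. Stability is mostly a matter of keepalives and restart-on-failure; security is mostly a matter of giving the tunnel account no shell and a forced command.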

    Read the article

  • MTD mtd3ro backup returns BCH decoding failed

    - by saeed144
    Backing up the kernel partition of an MTD (Memory Technology Device) from /dev/mtd/mtd3ro on a TI board produces many "BCH decoding failed" errors. Here is the system info:

        # cat /proc/mtd
        dev:    size   erasesize  name
        mtd0: 00080000 00020000 "X-Loader"
        mtd1: 00140000 00020000 "U-Boot"
        mtd2: 000c0000 00020000 "U-Boot Env"
        mtd3: 00500000 00020000 "Kernel"
        mtd4: 1f880000 00020000 "File System"

    Here is the method used:

        dd if=/dev/mtd/mtd3ro of=/data/local/tmp/mtd3.bin

    Doing a cat on the device returns the same error:

        BCH decoding failed
        BCH decoding failed

    Yes, the destination has enough space ;) What do you think? Thanks
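
    BCH errors come from the NAND's ECC layer, so a raw read will trip over blocks the controller cannot correct. One alternative worth trying (a sketch; assumes the mtd-utils package is available on the board) is nanddump, which understands bad blocks:

        # skip bad blocks instead of failing on them
        nanddump --bb=skipbad -f /data/local/tmp/mtd3.bin /dev/mtd3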

    Read the article

  • Lost Page Write I/O Errors on CentOS LVM setup

    - by Gregg Leventhal
    I have a CentOS 6 box with an LVM setup in which one of the PVs is a USB disk (I know). It is producing this error:

        Oct 30 10:57:07 alpha01 kernel: lost page write due to I/O error on dm-3
        Oct 30 10:57:07 alpha01 kernel: Buffer I/O error on device dm-3, logical block 4

    which is causing problems for all of the LVs on it. pvs shows the PV as "unknown device". I can ls into the logical volumes and they show up in lvdisplay, but only after a burst of I/O errors. I have made sure the cables to the USB drive are secure. What should I do to get this back up and running for the meanwhile? Should I unmount each LV and run an fsck.ext4 on each one, like this?

        fsck.ext4 -y /dev/vg1/lv_logvolname
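
    Roughly, yes; the sequence below is a sketch of one sane recovery order once the disk is reseated (device and VG names are taken from the question and may need adjusting):

        pvscan                                 # confirm the PV is seen again
        vgchange -ay vg1                       # reactivate the volume group
        umount /dev/vg1/lv_logvolname          # if anything is still mounted
        fsck.ext4 -fy /dev/vg1/lv_logvolname   # check before remounting
        mount /dev/vg1/lv_logvolname /srv/data # or wherever it belongs

    If the kernel still holds a stale device-mapper table, a reboot is often simpler than untangling it; and longer term, a flaky USB PV will keep doing this.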

    Read the article

  • DNS server BIND is not working [closed]

    - by user1742080
    I just installed BIND on RHEL 6 and pointed a domain to that server, but when I ping the domain it returns error 1214. Here is my named.conf:

        //
        // named.conf
        //
        // Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
        // server as a caching only nameserver (as a localhost DNS resolver only).
        //
        // See /usr/share/doc/bind*/sample/ for example named configuration files.
        //

        options {
            listen-on port 53 { any; };
            listen-on-v6 port 53 { ::1; };
            directory "/var/named";
            dump-file "/var/named/data/cache_dump.db";
            statistics-file "/var/named/data/named_stats.txt";
            memstatistics-file "/var/named/data/named_mem_stats.txt";
            allow-query { any; };
            recursion yes;

            dnssec-enable yes;
            dnssec-validation yes;
            dnssec-lookaside auto;

            /* Path to ISC DLV key */
            bindkeys-file "/etc/named.iscdlv.key";

            managed-keys-directory "/var/named/dynamic";
        };

        logging {
            channel default_debug {
                file "data/named.run";
                severity dynamic;
            };
        };

        zone "." IN {
            type hint;
            file "named.ca";
        };

        include "/etc/named.rfc1912.zones";
        include "/etc/named.root.key";

        zone "mydomain.com" {
            type master;
            file "/var/named/data/named.mydomain.com";
            allow-update { none; };
        };

    And the content of /var/named/data/named.mydomain.com:

        $TTL 38400

        mydomain.com.   IN  SOA  ns1.mydomain.com. milad.yahoo.com. (
                                 2012101201 ; serial number YYMMDDNN
                                 28800      ; Refresh
                                 7200       ; Retry
                                 864000     ; Expire
                                 38400 )    ; Min TTL

        mydomain.com.      IN  A   1.2.3.4
        www                IN  A   1.2.3.4
        ns1.mydomain.com.  IN  A   1.2.3.4
        ns2.mydomain.com.  IN  A   1.2.3.4
        mydomain.com.      IN  NS  ns1.mydomain.com.
        mydomain.com.      IN  NS  ns2.mydomain.com.

    And I'm sure the named service is running:

        [root@server ~]# service named status
        version: 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6_3.3
        CPUs found: 8
        worker threads: 8
        number of zones: 20
        debug level: 0
        xfers running: 0
        xfers deferred: 0
        soa queries in progress: 0
        query logging is OFF
        recursive clients: 0/0/1000
        tcp clients: 0/100
        server is up and running
        named (pid 26299) is running...
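
    Before suspecting the zone data, it is worth validating the configuration and querying the server directly (a sketch; run on the server itself):

        named-checkconf /etc/named.conf
        named-checkzone mydomain.com /var/named/data/named.mydomain.com
        dig @127.0.0.1 mydomain.com A

    If all three succeed, the usual remaining suspects are the registrar's NS delegation not yet pointing at this server, and port 53 (both TCP and UDP) being blocked by the host firewall.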

    Read the article

  • Primary/secondary ethernet interfaces in Ubuntu 9.10

    - by Josh
    I have an Ubuntu 9.10 machine with three ethernet interfaces: eth0, eth1, and eth2. eth2 is connected to a private network; eth0 and eth1 are connected to two different LANs, either of which will provide access to the internet. All three networks have DHCP servers. Using Ubuntu's default settings (and GNOME), all the interfaces come up at boot and my system gets three IP addresses, but any attempt to access the internet results in connection timeouts and other weirdness. I suspect that traffic is going out on one NIC (like eth0) and coming back in on another (like eth1), but I'm not sure what's going on. The only way I can access the internet at the moment is to bring two of the devices down with ifdown. How can I configure eth0 as my primary interface, so that all traffic goes out on it by default, while keeping the other two active? Also, I want to make sure Avahi broadcasts properly on all three IPs so that the computers on eth1's LAN can still connect to myHostname.local...
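
    The underlying problem is that each DHCP lease installs its own default route. One way to pin the default route to eth0 (a sketch for /etc/network/interfaces on that era of Ubuntu; interface names per the question):

        auto eth0
        iface eth0 inet dhcp

        auto eth1
        iface eth1 inet dhcp
            # drop the default route DHCP installs on this interface
            post-up ip route del default dev eth1 || true

        auto eth2
        iface eth2 inet dhcp
            post-up ip route del default dev eth2 || true

    Avahi needs no per-route help; by default it announces on all multicast-capable interfaces unless allow-interfaces in /etc/avahi/avahi-daemon.conf restricts it.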

    Read the article

  • How can I set deadline as the I/O scheduler for USB Flash devices by using udev rules?

    - by ????
    I have set CFQ as the default I/O scheduler, but I often get bad performance when writing data to a flash device. This is resolved if I use deadline as the I/O scheduler for USB flash devices, but I can't always change the scheduler manually, right? I think writing udev rules is a good idea; can someone please write the rules for me? I want the following: when I plug in a USB device, detect the type of the device. If it is a portable USB hard disk, do nothing. (I think that if a device has more than one partition, it is always a portable hard disk.) If it is a USB flash device, set deadline as its scheduler.
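
    A sketch of such a rule (untested; it keys on the USB bus rather than the partition-count heuristic, which udev cannot easily express on its own):

        # /etc/udev/rules.d/60-usb-flash-scheduler.rules (hypothetical file name)
        # For whole USB disks, switch the request queue to the deadline scheduler
        ACTION=="add|change", SUBSYSTEM=="block", KERNEL=="sd[a-z]", ENV{ID_BUS}=="usb", ATTR{queue/scheduler}="deadline"

    This applies deadline to every USB disk, portable hard disks included; distinguishing the two by partition count would need an external helper script (via RUN+=) that inspects the partition table and writes /sys/block/<dev>/queue/scheduler itself.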

    Read the article

  • Setting Keyboard Shortcuts in Ubuntu

    - by joemangrove
    Is it possible to do the following in Ubuntu? If so, can someone point me in the right direction? Say you want to set a keyboard shortcut, Alt+F for example, to do the following: open Firefox and maximize it, but only if Firefox is not already running; if it is running and not maximized, maximize the most recently touched Firefox window; if it is maximized, minimize Firefox. Thanks, Joe
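
    There is no built-in toggle like this, but a custom shortcut can point at a script. A rough sketch using wmctrl (assumed installed; window matching by class is approximate):

        #!/bin/sh
        # toggle-firefox.sh: bind this to Alt+F in the keyboard-shortcuts settings
        if ! pgrep -x firefox >/dev/null; then
            firefox &
            sleep 2
            wmctrl -x -r firefox -b add,maximized_vert,maximized_horz
        else
            # flip the maximized state of the most recent Firefox window
            wmctrl -x -r firefox -b toggle,maximized_vert,maximized_horz
        fi

    Note the toggle branch un-maximizes rather than truly minimizing; actual minimization would need xdotool windowminimize with a window ID from xdotool search --class firefox.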

    Read the article

  • How do I change the .bash_history file location?

    - by Brian Graham
    I'm running CentOS 6.x and want to move .bash_history to a different location. Because I run a VPS, the home directories of my users are in /var/www/vhost/<domain>.<tld>, which is FTP-accessible (and it should be). For this reason I have already moved the AuthorizedKeysFile for SSH connections out of the normal ~/.ssh/authorized_keys, since FTP connections would easily be able to locate the keys. In the same spirit, I want to move the .bash_history file to /home/%u/.bash_history, where %u is the current user.
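
    Bash keeps the history path in the HISTFILE variable, so a global profile snippet is enough (a sketch; the file name is hypothetical, and the /home/$USER directories must already exist and be writable):

        # /etc/profile.d/histfile.sh
        export HISTFILE="/home/$USER/.bash_history"

    This takes effect for login shells. Since HISTFILE is an ordinary environment variable rather than a hard setting, users can still override it themselves.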

    Read the article

  • How to limit reverse SSH tunneling ports?

    - by funktku
    We have a public server which accepts SSH connections from multiple clients behind firewalls. Each of these clients creates a reverse SSH tunnel using the ssh -R command, from port 80 on their web servers to our public server. The destination port (at the client side) of the reverse tunnel is 80, and the source port (at the public server side) depends on the user; we plan to maintain a map of port addresses for each user. For example, client A would tunnel their web server's port 80 to our port 8000, client B from 80 to 8001, and client C from 80 to 8002:

        Client A: ssh -R 8000:internal.webserver:80 clienta@publicserver
        Client B: ssh -R 8001:internal.webserver:80 clientb@publicserver
        Client C: ssh -R 8002:internal.webserver:80 clientc@publicserver

    Basically, we are trying to bind each user to a port and not allow them to tunnel to any other ports. If we were using SSH's forward tunneling feature with ssh -L, we could control which ports may be tunneled with the permitopen=host:port option, but there is no equivalent for reverse tunnels. Is there a way of restricting reverse tunneling ports per user?
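
    Newer OpenSSH does grow an exact equivalent: the PermitListen directive (OpenSSH 7.8 and later) constrains what a client may bind with -R. A sketch for sshd_config on the public server, assuming one account per client:

        Match User clienta
            PermitListen localhost:8000
        Match User clientb
            PermitListen localhost:8001
        Match User clientc
            PermitListen localhost:8002

    On servers older than that, there is no built-in per-user restriction for -R (which is what this question runs into); the usual workarounds are a patched sshd or terminating the tunnels in a dedicated process that owns the allowed ports.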

    Read the article

  • Best practice to create an ftp administrator account on vsftpd

    - by jtd
    Background: my manager would like me to create an administration account for our FTP server. When logged in via FTP, it should immediately display all of the users' home directories and be able to modify any directory or file in any way. What would be the best way to go about this? I planned on chrooting this FTP admin to /home, but I don't know how to set up the permissions properly. Maybe make a group called ftp_admins and chgrp the /home folder? But then wouldn't that affect the users accessing their own folders? Any help is appreciated.
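
    vsftpd can give one account a different root without touching /home's ownership at all, via a per-user configuration (a sketch; the file paths and the ftpadmin user are assumptions):

        # in vsftpd.conf
        user_config_dir=/etc/vsftpd/users

        # in /etc/vsftpd/users/ftpadmin
        local_root=/home
        write_enable=YES

    The account still accesses files under its own Unix permissions, so full modify rights need either group-write bits or, less invasively, ACLs (setfacl -R -m g:ftp_admins:rwX /home), which grant the group access without changing the files' owner or group, and so avoid the side effect on other users that the chgrp approach would have.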

    Read the article

  • How to deny a particular MAC address client from obtaining an IP/name from the DHCP & DNS servers

    - by Deepak Narwal
    Hello friends. I configured a DHCP server on my RHEL 5.4 machine, and clients are getting IPs from it with no problem. Now I want a particular MAC address client not to get an IP from this DHCP server. The same question applies to my DNS server: I want a particular MAC address client not to get name resolution from it. Please discuss in a little detail; I am very new to this field and am still learning these things. Thanks in advance, friends.
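
    On the DHCP side, ISC dhcpd can refuse a specific MAC directly (a sketch for dhcpd.conf; the MAC is a placeholder):

        host blocked-client {
            hardware ethernet 00:11:22:33:44:55;
            deny booting;    # never offer this host a lease
        }

    The DNS side is different: BIND identifies clients by IP address, not MAC, because the MAC never survives past the local network segment, so there is no per-MAC control in named. Blocking would have to happen at layer 2 (a switch or host firewall), or by pinning the machine to a known fixed IP via DHCP and then denying that IP in BIND's allow-query.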

    Read the article

  • Intermittent apt-get 'no installation candidate' error on fabric deploy

    - by jberryman
    I'm experiencing a strange issue with a Fabric script I'm using to bootstrap a server on EC2. I launch a stock Ubuntu 12.04 AMI, wait for it to start, then proceed with:

        with settings(host_string="ubuntu@%s" % i.dns_name, connection_attempts=30):
            sudo('apt-get -qy update')
            sudo('apt-get -qy install --no-install-recommends mdadm')  # don't install postfix
            # etc...

    The apt-get update appears to run fine and gives no errors; however, roughly two thirds of the time, installing mdadm throws a "no installation candidate" error. When I ssh into the server and run apt-get install mdadm, I get the same error. After running apt-get update by hand, the package installs fine. Any ideas on what might be happening, or how to debug it?
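
    The symptoms fit a race on first boot: cloud-init runs its own apt work as the instance comes up, and an apt-get update that overlaps with it can end up with incomplete package lists. A sketch of a defensive preamble for the remote commands (the lock-wait loop is a common workaround, not a Fabric feature):

        # wait until nothing else holds the apt lists lock, then refresh and install
        while fuser /var/lib/apt/lists/lock >/dev/null 2>&1; do sleep 1; done
        apt-get -qy update
        apt-get -qy install --no-install-recommends mdadm

    Simply retrying the update once when the install reports "no installation candidate" achieves much the same with less machinery.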

    Read the article
