Search Results

Search found 13331 results on 534 pages for 'fluent interface'.

Page 421/534

  • Change the order of IP addresses returned by ifconfig?

    - by erikcw
    I have an Ubuntu server with several IP addresses attached to it. 127.0.0.1 is listed under venet0 by ifconfig. I'm using Chef to configure the server. The problem is that Chef lists 127.0.0.1 as the IP address of the server instead of one of the server's "real" IPs (apparently ohai's ipaddress attribute uses the first IP listed by ifconfig to determine the server's IP). How can I change the order so the server's main IP is listed first instead of 127.0.0.1? Can venet0 be deleted and venet0:0 be "promoted" to take its place, since 127.0.0.1 is already listed on the lo interface?

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:334 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:334 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:16700 (16.7 KB)  TX bytes:16700 (16.7 KB)

        venet0    Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
                  inet addr:127.0.0.1  P-t-P:127.0.0.1  Bcast:0.0.0.0  Mask:255.255.255.255
                  UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1
                  RX packets:7622207 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:8183436 errors:0 dropped:1 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:2102750761 (2.1 GB)  TX bytes:2795213667 (2.7 GB)

        venet0:0  Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
                  inet addr:XXX.XXX.XXX.XX1  P-t-P:XXX.XXX.XXX.XX1  Bcast:0.0.0.0  Mask:255.255.255.255
                  UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1

        venet0:1  Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
                  inet addr:XXX.XXX.XXX.XX2  P-t-P:XXX.XXX.XXX.XX2  Bcast:0.0.0.0  Mask:255.255.255.255
                  UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1

    Output of route -n:

        Kernel IP routing table
        Destination     Gateway         Genmask          Flags Metric Ref  Use Iface
        192.0.2.1       0.0.0.0         255.255.255.255  UH    0      0      0 venet0
        0.0.0.0         192.0.2.1       0.0.0.0          UG    0      0      0 venet0
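    A hedged first check, before reordering anything: see what ohai resolves and which source address the kernel actually uses for outbound traffic (standard iproute2; the closing comment is an assumption about a Chef-side workaround, not a verified fix):

        # What ohai currently resolves for this node
        ohai ipaddress

        # Which source address the kernel picks for outbound traffic; on
        # OpenVZ guests the venet0 primary (127.0.0.1) often comes first in
        # ifconfig even though routing uses the real IP
        ip route get 8.8.8.8

        # If reordering proves impossible, pinning the attribute on the Chef
        # side (a role/recipe override) may be less fragile than fighting ifconfig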

  • Same netmask or /32 for secondary IP on Linux

    - by derobert
    There appear to be (at least) two ways to add a secondary IP address to an interface on Linux. By secondary, I mean that it'll accept traffic to the IP address, and responses to connections made to that IP will use it as a source, but any traffic the box originates (e.g., an outgoing TCP connection) will not use the secondary address. Both ways start with adding the primary address, e.g., ip addr add 172.16.8.10/24 dev lan. Then I can add the secondary address with either a netmask of /24 (matching the primary) or /32. If I add it with a /24, it gets flagged secondary, so it will not be used as the source of outgoing packets, but that leaves a risk of the two addresses being added in the wrong order by mistake. If I add it with /32, the wrong order can't happen, but it doesn't get flagged as secondary, and I'm not sure what the bad effects of that may be. So I'm wondering: which approach is least likely to break? (If it matters, the main service on this machine is MySQL, but it also runs NFSv3. I'm adding a second machine as a warm standby, and hope to switch between them by changing which machine owns the secondary IP.)
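    For concreteness, a minimal sketch of the two variants using the interface name from the question (the promote_secondaries sysctl is an assumption about what you may want alongside the /24 form; it isn't mentioned in the question):

        # Variant 1: same /24 as the primary; the kernel flags it "secondary"
        ip addr add 172.16.8.10/24 dev lan
        ip addr add 172.16.8.11/24 dev lan
        ip -4 addr show dev lan     # the second address carries the "secondary" flag

        # If the primary is ever deleted, promote the secondary rather than
        # losing both addresses at once:
        sysctl -w net.ipv4.conf.lan.promote_secondaries=1

        # Variant 2: /32 form; insertion order can't matter, but no flag is set
        ip addr add 172.16.8.11/32 dev lan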

  • Linux VLAN Bridge

    - by raspi
    I have a home network with VLANs: one for LAN, one for WLAN and one for internet. I'd like to use bridging so that instead of configuring these same VLANs on every machine, each would have its own VLAN ID and the bridges would be LAN, WLAN and internet. I've tried it, but for some reason keep-alive/TTL seems to get broken: SSH sessions etc. suddenly disconnect. We have this same setup working at my workplace, for 4+ years with 100+ customers, but it's custom firewall/router hardware, so accessing it is impossible; I only know that it runs Linux. So what are the Debian/Ubuntu default network settings doing wrong, or is it just a NIC driver/hardware problem? I've tried to mess around with TTL and similar settings without any luck. The bad stuff is happening in the bridge, because the current VLAN-only setup works fine.

    /etc/network/interfaces:

        auto lo
        iface lo inet loopback

        # The primary network interface
        allow-hotplug eth0
        allow-hotplug eth1
        iface eth0 inet static
        iface eth1 inet static

        auto vlan111
        auto vlan222
        auto vlan333
        auto vlan444
        auto br0
        auto br1
        auto br2

        # LAN
        iface vlan111 inet static
            vlan_raw_device eth0
        # WLAN
        iface vlan222 inet static
            vlan_raw_device eth0
        # ADSL Modem
        iface vlan333 inet static
            vlan_raw_device eth1
        # Internet
        iface vlan444 inet static
            vlan_raw_device eth0

        # LAN bridge
        iface br0 inet static
            address 192.168.0.1
            netmask 255.255.255.0
            bridge_ports eth0.111
            bridge_stp on

        # Internet bridge
        iface br1 inet static
            address x.x.x.x
            netmask x.x.x.x
            gateway x.x.x.x
            bridge_ports eth1.333 eth0.444
            bridge_stp on
            post-up iptables -t nat -A POSTROUTING -o br1 -j MASQUERADE
            pre-down iptables -t nat -D POSTROUTING -o br1 -j MASQUERADE

        # WLAN bridge
        iface br2 inet static
            address 192.168.1.1
            netmask 255.255.255.0
            bridge_ports eth0.222
            bridge_stp on

    Sysctl:

        net.ipv4.conf.default.forwarding=1
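    One thing worth ruling out (an assumption on my part, not something shown in the question) is bridge-netfilter, which pushes bridged frames through iptables on stock Debian/Ubuntu kernels and is a known cause of long-lived sessions mysteriously dying over a bridge:

        # Is bridged traffic being run through iptables?
        sysctl net.bridge.bridge-nf-call-iptables

        # Temporarily take netfilter out of the bridge path, then retest SSH
        sysctl -w net.bridge.bridge-nf-call-iptables=0
        sysctl -w net.bridge.bridge-nf-call-ip6tables=0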

  • Internet explorer rejects cookies in kerberos protected intranet sites

    - by remix_tj
    I'm trying to build an intranet site using Joomla. The webserver uses HTTP Kerberos authentication with mod_auth_kerb. Everything works fine: the users get authenticated and so on. But if I try to log in to the administrator panel, I can't, because IE does not accept the needed cookies. No such problem with Firefox. The intranet site is called "intranet_new" and is hosted by webintranet04, under the directory /var/www/vhosts/joomla/intranet_new/. My virtualhost for intranet_new contains this:

        <Location />
            AuthType Kerberos
            AuthName "Kerberos Login"
            KrbMethodNegotiate On
            KrbMethodK5Passwd On
            KrbAuthRealms PROV.TV.LOCAL
            Krb5KeyTab /etc/apache2/HTTP.keytab
            require valid-user
        </Location>

    The same goes for the webintranet04 virtualhost, which is the default, points to /var/www, and contains:

        <Location /vhosts/joomla/>
            AuthType Kerberos
            AuthName "Kerberos Login"
            KrbMethodNegotiate On
            KrbMethodK5Passwd On
            KrbAuthRealms PROV.TV.LOCAL
            Krb5KeyTab /etc/apache2/HTTP.keytab
            require valid-user
        </Location>

    The very strange thing is that if I open http://webintranet04/vhosts/joomla/intranet_new/administrator, IE lets me log in and accepts the cookie. If I open http://intranet_new/administrator instead, I loop on the login page. Lastly, intranet_new is a CNAME record for webintranet04. This is an IE-only problem. I need:

    - the admin interface to work with IE
    - the "kerberized" zone to accept cookies, because I am deploying other programs that require them.
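    A hedged way to compare the two hostnames from a command line, assuming a curl build with GSS/Negotiate support (the URLs are the ones from the question):

        # Authenticate via Kerberos and watch the cookie exchange for each name
        curl -v --negotiate -u : http://webintranet04/vhosts/joomla/intranet_new/administrator 2>&1 | grep -i cookie
        curl -v --negotiate -u : http://intranet_new/administrator 2>&1 | grep -i cookie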

  • 10 GigE interfaces limit single-connection throughput to 1 Gb on a ProCurve 4208vl

    - by wazoox
    The setup is as follows: 3 Linux servers with Intel CX4 10 GigE controllers and an X-Serve with a Myricom 10 GigE CX4 controller are connected to a ProCurve 4208vl switch, with a myriad of other machines connected through good ol' 1000BASE-T. The interfaces are actually set up as 10 Gig, according to both the switch monitoring interface and the servers (ethtool, etc.). However, a single connection between two 10 GigE equipped machines through the switch is limited to exactly 1 Gb. If I connect two of the 10 GigE machines directly with a CX4 cable, netperf reports the link bandwidth as 9000 Mb/s, and NFS achieves about 550 MB/s transfers. But when I'm using the switch, the connection tops out at 950 Mb/s through netperf and 110 MB/s with NFS. When I open several connections from 3 of the machines to the 4th, I get 350 MB/s of NFS transfer speed. So each individual 10 GigE port can actually reach much more than 1 Gb, but individual connections are strictly limited to 1 Gb. Conclusion: the 10 GigE connection through the switch behaves exactly like a trunk of ten 1 Gb connections. That doesn't make any sense to me, unless HP planned these ports only for cascading switches or strictly for many-clients-to-single-server connections. Unfortunately this is NOT the envisioned setup; we need big throughput from machine to machine. Is this a not-so-known (or carefully hidden...) limitation of this type of switch? Should I suggest seppuku to the HP representative? Does anyone have any idea how to enable proper behaviour? I upgraded for a hefty price from bonded 1 Gb links to 10 GigE and see exactly ZERO gain! That's absolutely unacceptable.
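    For reference, a sketch of the measurement that separates a per-flow cap from a per-port cap (the target address is illustrative):

        # Single stream through the switch: tops out around 950 Mb/s here
        netperf -H 10.1.1.2 -t TCP_STREAM -l 30

        # Four parallel streams to the same host; if the aggregate climbs well
        # past 1 Gb/s, the limit is per-flow (trunk-like), not per-port
        for i in 1 2 3 4; do netperf -H 10.1.1.2 -t TCP_STREAM -l 30 & done; wait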

  • Problem routing between directly connected Subnets w/ ASA-5510

    - by Zephyr Pellerin
    This is an issue I've been struggling with for quite some time, with a seemingly simple answer (aren't all IT problems?): the problem of passing traffic between two directly connected subnets with an ASA. While I'm aware that best practice is to have Internet - Firewall - Router, in many cases this isn't possible. For example, I have an ASA with two interfaces, named OutsideNetwork (10.19.200.3/24) and InternalNetwork (10.19.4.254/24). You'd expect Outside to be able to get to, say, 10.19.4.1, or at LEAST 10.19.4.254, but pinging the interface gives only bad news.

        Result of the command: "ping OutsideNetwork 10.19.4.254"

        Type escape sequence to abort.
        Sending 5, 100-byte ICMP Echos to 10.19.4.254, timeout is 2 seconds:
        ?????
        Success rate is 0 percent (0/5)

    Naturally, you'd assume that you could add a static route, to no avail.

        [ERROR] route Outsidenetwork 10.19.4.0 255.255.255.0 10.19.4.254 1
        Cannot add route, connected route exists

    At this point, you might wonder whether it's a NAT or access list problem.

        access-list Outsidenetwork_access_in extended permit ip any any
        access-list Internalnetwork_access_in extended permit ip any any

    There is no dynamic NAT (or static NAT for that matter), and unNATted traffic is permitted. When I try pinging the above address (10.19.4.254 from Outsidenetwork), I get this error message from level 0 logging (debugging):

        Routing failed to locate next hop for icmp from NP Identity Ifc:10.19.200.3/0 to Outsidenetwork:10.19.4.1/0

    This led me to set same-security traffic to permit, and to try assigning the same, lesser and greater security levels to the two interfaces. Am I overlooking something obvious? Is there a command to set static routes that are classified higher than connected routes?
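    For reference, the commands the last paragraph alludes to look like this in stock ASA 8.x syntax (a sketch, not a verified fix for this topology), along with the built-in way to ask the ASA why a packet dies; 10.19.200.5 is a made-up outside host:

        same-security-traffic permit inter-interface

        ! Trace a hypothetical ping from an outside host to an inside host
        packet-tracer input OutsideNetwork icmp 10.19.200.5 8 0 10.19.4.1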

  • Solaris 10 invalid ARP requests from 0.0.0.0? Link up/down every hour or 2

    - by JWD
    The guys at the data center where I'm hosting a server running Solaris 10 tell me that my server is making a lot of invalid ARP requests. This is an example of a portion of what was sent to me from the logs (with MAC addresses and IP addresses changed):

        [mymacaddress]/0.0.0.0/0000.0000.0000/[myipaddress]/[Datestamp]

    It's being logged every hour. I don't see anything in the ARP tables (arp -a) or routing tables (netstat -r), and I don't see anything relating to 0.0.0.0 when snooping the ARP requests. The only place I see any reference to 0.0.0.0 is in netstat -a, under SCTP:

        SCTP:
                Local Address                   Remote Address                  Swind  Send-Q Rwind  Recv-Q StrsI/O State
        ------------------------------- ------------------------------- ------ ------ ------ ------ ------- -----------
        0.0.0.0                         0.0.0.0                              0      0 102400      0   32/32 CLOSED

    But I'm not really sure what that means, and it doesn't seem like I can disable SCTP. There are some tunable SCTP parameters, but it's not something I'm familiar with. Do I have to add changes to /etc/system? It looks like sctp_heartbeat_interval might be what I need to change? If it makes any difference, I have a few Solaris zones running on this server, each with its own IP address on a virtual interface (eth0:0, eth0:1, etc.). Does anyone have any idea what might be causing this and how to stop it? I think the switch I'm connected to doesn't like it and momentarily drops the connection. Is there any way to at least block those requests using ipfilter or something else?

    Update: This was happening more frequently, but now it seems to happen roughly every hour or every two hours; it's not consistent. I tried setting the link speed and duplex to match the switch port, and that seemed to make it stop for a few hours, but then it started again.
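    A hedged capture recipe to see the offending frames on the wire (the device name is illustrative; substitute the physical NIC, not the eth0:N logical interfaces):

        # Watch every ARP frame the box sends and receives
        snoop -d e1000g0 arp

        # Double-check the negotiated link settings the switch sees
        dladm show-dev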

  • Issue with multiple bridging for KVM hosts

    - by Henry-Nicolas Tourneur
    I'm using KVM and libvirt on my host (Debian lenny) + 2 bridges per guest (one for mgmt, one for public traffic). That setup isn't stable at all: sometimes I can ping a management IP, sometimes not. I don't know if my bridging parameters are correct; could you check whether there is anything wrong? Please also note that the interface on the guest doesn't flap, and that I see no logs on my host. Of course forwarding is enabled.

        iface eth3 inet manual

        auto bond0
        iface bond0 inet manual
            slaves eth1 eth2
            pre-up ip link set bond0 up
            down ip link set bond0 down

        auto br0
        iface br0 inet static
            address 10.160.0.7
            netmask 255.255.255.128
            bridge_ports eth3
            bridge_fd 9
            bridge_hello 2
            bridge_maxage 12
            bridge_stp off

        auto br0:1
        iface br0:1 inet static
            address 10.160.0.9
            netmask 255.255.255.128

        auto br0:2
        iface br0:2 inet static
            address 10.160.0.10
            netmask 255.255.255.128

        auto br1
        iface br1 inet static
            address 217.4.40.242
            netmask 255.255.255.240
            gateway 217.4.40.241
            pre-up /etc/network/firewall start
            bridge_ports bond0
            bridge_fd 9
            bridge_hello 2
            bridge_maxage 12
            bridge_stp off

        auto br1:1
        iface br1:1 inet static
            address 217.4.40.252
            netmask 255.255.255.240

        auto br1:2
        iface br1:2 inet static
            address 217.4.40.253
            netmask 255.255.255.240
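    Some quick state checks that usually narrow this kind of intermittent reachability down (a sketch; it assumes bridge-utils is installed and the bonding driver exposes its usual proc file):

        brctl show                     # do br0/br1 contain the expected ports?
        brctl showstp br1              # port states, even with STP off
        cat /proc/net/bonding/bond0    # are both slaves up, and which is active?
        ip -s link show bond0          # error/drop counters on the bond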

  • Debian can't connect to internet using LAN

    - by tampe125
    I have a headless Raspberry Pi running Debian Wheezy. I have a wifi dongle, and if I connect my Raspberry using it, everything works fine: I can connect to the Internet, I can ping, I can update. However, if I take the wifi down and set up the LAN interface, I lose my internet connection. I can still connect to it locally, using my laptop, but outbound traffic never makes it out (i.e., ping is not working). Some useful info:

        $ cat /etc/network/interfaces
        auto lo
        auto eth0
        iface eth0 inet static
            address 192.168.0.105
            netmask 255.255.255.0
            gateway 192.168.0.1

        $ ping www.google.com
        (nothing; request timed out)

        $ ifconfig eth0
        eth0   Link encap:Ethernet  HWaddr b8:27:eb:a2:b5:20
               inet addr:192.168.0.105  Bcast:192.168.0.255  Mask:255.255.255.0
               UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
               RX packets:1130 errors:0 dropped:0 overruns:0 frame:0
               TX packets:1116 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:1000
               RX bytes:97223 (94.9 KiB)  TX bytes:146140 (142.7 KiB)

        $ ping 192.168.0.1
        PING 192.168.0.1 (192.168.0.1) 56(84) bytes of data.
        ^C
        --- 192.168.0.1 ping statistics ---
        19 packets transmitted, 0 received, 100% packet loss, time 18007ms

        $ cat /etc/resolv.conf
        nameserver 8.8.8.8

        $ netstat -r
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
        default         192.168.0.1     0.0.0.0         UG        0 0          0 eth0
        192.168.0.0     *               255.255.255.0   U         0 0          0 eth0

    Well, I think that's all... Any ideas?
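    Since the gateway doesn't answer ping, the first thing to establish is whether it resolves at layer 2 at all; a diagnostic sketch (arping may need installing):

        # Does 192.168.0.1 answer ARP on eth0?
        arping -I eth0 -c 4 192.168.0.1

        # Watch the wire while pinging from another terminal
        tcpdump -ni eth0 arp or icmp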

  • Automate creation of Windows startup script?

    - by Niten
    Is there a good way to automate installing local startup (rather than login) scripts in Windows XP and Windows 7, via the command line, WMI, COM, or otherwise (even Win32 if it comes to that)? I need to set up a local startup script on a large number of computers, and unfortunately, Active Directory is absolutely not an option. I would like to write a script or small program that I can run on each computer to perform the startup script installation, in order to save myself a lot of error-prone point-and-click manual labor. I see that when one uses gpedit.msc to create a local startup script, information about the script gets stored in the registry here:

        HKLM\Software\Policies\Microsoft\Windows\System\Scripts\Startup

    However, if you create such a script and then delete its registry key, the script remains listed in the local Group Policy editor; as is so often the case in Windows, apparently there is more going on there than meets the eye. This leads me to question whether it's safe to manually add subkeys for new startup scripts here (I wouldn't want my script to be overwritten by later changes made using the local Group Policy editor, for instance). Another option that's occurred to me is to create an item in the Task Scheduler configured to run at system startup. However, my concerns there are twofold:

    - Can this be automated any more easily? For instance, the at command doesn't appear to let you schedule a task for system startup, and WMI's Win32_ScheduledJob interface looks unreliable (it fails to show any of my currently scheduled tasks, for one thing).
    - Would I be able to prevent users from logging in until the scheduled startup task is completed, as can be done with "normal" Windows startup scripts?

    Thanks in advance for any suggestions, I've been banging my head against this one for a bit...
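    On the Task Scheduler side, the non-interactive setup is a one-liner with schtasks, which ships with both XP Pro and Windows 7 (the task name and script path are placeholders):

        rem Create a machine startup task that runs as SYSTEM
        schtasks /create /tn "LocalStartupScript" /tr "C:\scripts\startup.cmd" /sc onstart /ru System

    Note this does not address the second concern: a scheduled onstart task runs in parallel with logon rather than blocking it the way a Group Policy startup script can.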

  • How important is dual-gigabit LAN for a super user's home NAS?

    - by Andrew
    Long story short: I'm building my own home server based on Ubuntu with 4 drives in RAID 10. Its primary purpose will be NAS and backup. Would I be making a terrible mistake by building a NAS Server with a single Gigabit NIC? Long story long: I know the absolute max I can get out of a single Gigabit port is 125MB/s, and I want this NAS to be able to handle up to 6 computers accessing files simultaneously, with up to two of them streaming video. With Ubuntu NIC-bonding and the performance of RAID 10, I can theoretically double my throughput and achieve 250MB/s (ok, not really, but it would be faster). The drives have an average read throughput of 83.87MB/s according to Tom's Hardware. The unit itself will be based on the Chenbro ES34069-BK-180 case. With my current hardware choices, it'll have this motherboard with a Core i3 CPU and 8GB of RAM. Overkill, I know, but this server will be doing other things as well (like transcoding video). Unfortunately, the only Mini-ITX boards I can find with dual-gigabit and 6 SATA ports are Intel Atom-based, and I need more processing power than an Atom has to offer. I would love to find a board with 6 SATA ports and two Gigabit LAN ports that supports a Core i3 CPU. So far, my search has come up empty. Thus, my dilemma. Should I hold out for such a board, go with an Atom-based solution, or stick with my current single-gigabit configuration? I know there are consumer NAS units with just one gigabit interface (probably most of them), but I think I will demand a lot more from my server than the average home user. Any advice is appreciated. Thanks.

  • Knowledge and user generated content management system to track files, research, proposals, etc.?

    - by Eshwar
    I'll try to keep it short. Here's the scenario: we have employees all over the world performing similar work, i.e. research, generating PowerPoint slides, Word documents, graphics, etc. Much of this previous work can often be reused for a future project. The current arrangement is email and phone calls, which, as you would agree, is quick if you know where to look, but otherwise archaic and very, very inefficient. So I am looking for software that will allow me to do the following:

    - Tag files, e.g. an investor presentation on cellphone usage in Kenya would be tagged investor, cellphone, Kenya
    - Manage references, e.g. if we read something on the internet, we should be able to paste that link in some fashion and tag it as above
    - Preferably cloud based, so that it can be accessed by anybody; additionally it would be nice (though NOT a must) to have access levels (director, manager, everyone)
    - A nice interface that non technically savvy folks can warm up to ;)
    - A desktop app would be handy, so that people don't always have to click upload or something

    A tree based system is inefficient in this case because content is usually linked across branches, and also people might not quite agree on one format of a tree. Tagging works around this very nicely. What I have considered so far:

    - Evernote (for its more professional look)
    - Springpad (for its versatility with content)
    - Mendeley (this is a research manager and in some ways ideal, but I fear it's limited to PDFs)

    The goal is that when somebody wants to look for a document, they don't have to ask a colleague; they can just search with keywords and all relevant information shows up. Thanks!

  • UDP blocked by Windows XP Firewall when sending to local machine

    - by user36367
    I work for a software development company, but the issue doesn't seem to be programming-related. Here is my setup:

    - Windows XP Professional with Service Pack 3, all updated
    - Program that sends UDP datagrams
    - Program that receives UDP datagrams
    - Windows Firewall set to allow inbound UDP datagrams on a specific port (Scope: Subnet)

    If I send a UDP datagram on any port to other, similar machines, it goes through. If I send the UDP datagram to the same computer running the sender program (whether using broadcast, the localhost IP or the specific IP of the machine), the receiver program gets nothing. I've traced the problem down to the Windows XP Firewall, as Windows 7 does not have this problem (and I do not wish to sully my hands with Vista). If the exception I create for that UDP port in the WinXP firewall is set to a Scope of Subnet, the datagram is blocked, but if I set it to All Computers, or specifically enter my network settings (192.168.2.161 or 192.168.2.0/255.255.255.0), it works fine. Using different UDP ports makes no difference. I've tried different programs to reproduce this problem (ServerTalk to send and either IP Port Spy or PortPeeker to receive) to make sure it's not our code that's the issue, and those programs' datagrams were blocked as well. Also, that computer has only one network interface, so there's no additional network weirdness. I receive my IP from a DHCP server, so this is a straightforward setup. Given that it doesn't happen in Windows 7, I must assume it's a defect in the Windows XP Firewall, but I'd think someone else would have encountered this problem before. Has anyone encountered anything like this? Any ideas?
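    If it helps to script the comparison, the XP firewall exposes the same scope settings through netsh (the port number and rule name are illustrative):

        rem Exception scoped to the local subnet (the failing configuration)
        netsh firewall add portopening UDP 5000 "UDP test" ENABLE SUBNET

        rem The variants that reportedly work: all addresses, or an explicit scope
        netsh firewall add portopening UDP 5000 "UDP test" ENABLE ALL
        netsh firewall add portopening UDP 5000 "UDP test" ENABLE CUSTOM 192.168.2.0/255.255.255.0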

  • debugging connection to mysql from python script using MySQLdb

    - by timpone
    I am a Python newbie and have a Python 2.5 script that uses MySQLdb to connect on OS X 10.5.8. I haven't been able to successfully connect to the database of interest with this. However, I am able to connect using PHP's mysqli and also via the mysql CLI interface. I get the error:

        File "build/bdist.macosx-10.5-i386/egg/MySQLdb/connections.py", line 188, in __init__
        _mysql_exceptions.OperationalError: (1045, "Access denied for user 'arc_development'@'localhost' (using password: YES)")

    On my Linux box, which has the same MySQL perms, the script logs in fine. On my OS X laptop, I am able to create a database named test_python, which bypasses MySQL's authentication scheme. This makes me think that issues like 32-bit/64-bit incompatibilities aren't occurring. If I turn on the query log, I get access denied:

        100610 20:56:55  4 Connect  Access denied for user 'arc_development'@'localhost' (using password: YES)

    I'm a little bit at a loss as to what to do next. Is there any way I can specify in the general log or binary log the actual password set on the connection string? How about writing out the value from the connections.py file (although I'm not sure how I'd do that)? Thanks
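    One classic gotcha worth ruling out from the shell (an assumption, not something visible in the logs): with host "localhost" the MySQL client library normally connects over the Unix socket, so credentials that work over the socket may behave differently over TCP, and vice versa.

        # Same credentials over the socket, then forced over TCP
        mysql -h localhost -u arc_development -p
        mysql --protocol=TCP -h 127.0.0.1 -u arc_development -p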

  • Thecus N5200, disk has dropped out of RAID5

    - by Anders Ekdahl
    We have a Thecus N5200 NAS here at work with five WD Caviar Black 2TB disks in a RAID5 array. Yesterday, disk 4 dropped out of the array, and in the NAS web interface there's a warning about the RAID array being "degraded". When I go into Storage > Disks, disks 1 and 4 have a warning next to them. When I click on the warnings, this information about the disks is displayed:

        Tray Number                4
        Model                      WD2001FASS-00W2B
        Power On Hours             2403 Hours
        Temperature Celsius        34
        Reallocated Sector Count   66
        Current Pending Sector     1447
        Raw Read Error Rate        61
        Seek Error Rate            0
        Hardware ECC Recovered     N/A

        Tray Number                1
        Model                      WD2001FASS-00W2B
        Power On Hours             2403 Hours
        Temperature Celsius        32
        Reallocated Sector Count   0
        Current Pending Sector     1465
        Raw Read Error Rate        0
        Seek Error Rate            0
        Hardware ECC Recovered     N/A

    I'm not really an expert on either disks or RAID arrays. Does this indicate that the fourth disk is damaged and needs to be replaced? And what about disk number 1? It has a warning, but it's still in the array. Is it safe to add the fourth disk back into the array as a spare? I can't find any way to add it back as it was before.
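    Those Current Pending Sector counts (1447 and 1465) are the numbers to worry about: they are sectors the drive could not read and is waiting to remap. If you can attach a drive to a Linux box (the NAS itself may not offer a shell), a hedged sketch for a closer look:

        # Full SMART dump; the device name is illustrative
        smartctl -a /dev/sdd

        # Kick off a long surface self-test, then read the result afterwards
        smartctl -t long /dev/sdd
        smartctl -l selftest /dev/sdd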

  • Destination NAT Onto the Same Network from internal clients

    - by mivi
    I have a DSL router which acts as NAT (SNAT & DNAT). I have set up a server on the internal network (10.0.0.2 at port 43201), and the DSL router is configured to "port forward" (DNAT) all incoming connections to 10.0.0.2:43201. I created a virtual server for port forwarding on the DSL router and also added the following iptables rules:

        iptables -t nat -A PREROUTING -p tcp -i ppp_0_1_32_1 --dport 43201 -j DNAT --to-destination 10.0.0.2:43201
        iptables -I FORWARD 1 -p tcp -m state --state NEW,ESTABLISHED,RELATED -d 10.0.0.2 --dport 43201 -j ACCEPT

        # ppp_0_1_32_1 is the router's external interface.
        # The router's internal IP address is 10.0.0.1 and the server is at 10.0.0.2:43201.

    The problem is that connections coming from external IP addresses can access the internal server using the external IP address, but internal clients (behind the NAT) cannot. For example, http://<external_address>:43201 works for external clients, but internal clients are not able to access http://<external_address>:43201. This seems to be similar to the problem described in the NAT HOWTO, "Destination NAT Onto the Same Network" (http://www.netfilter.org/documentation/HOWTO/NAT-HOWTO-10.html). Firstly, I am not able to understand why this is a problem for internal clients. Secondly, what iptables rule will enable internal clients to access the server using the external IP address? Please suggest.
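    For the "firstly": the DNATed packet reaches 10.0.0.2 with the internal client's source address intact, so the server replies directly across the LAN, and the client then sees a reply from 10.0.0.2 instead of from <external_address> and drops it. The usual cure, a sketch following the linked HOWTO (assuming the LAN is 10.0.0.0/24 and keeping the question's <external_address> placeholder), is to hairpin both translations on the router:

        # DNAT requests arriving from the LAN side as well
        iptables -t nat -A PREROUTING -s 10.0.0.0/24 -d <external_address> \
            -p tcp --dport 43201 -j DNAT --to-destination 10.0.0.2:43201

        # ...and SNAT them so the server's replies come back via the router
        iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -d 10.0.0.2 \
            -p tcp --dport 43201 -j SNAT --to-source 10.0.0.1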

  • HP Power Manager SMTP setup doesn't have space for username & password

    - by Martha
    Is there some way to configure HP Power Manager to not assume that there's an email server running locally? We recently acquired an HP T1500 G3 UPS, which we're trying to control using HP Power Manager 4.2. The main reason we wanted to get this particular UPS is because it says it's capable of sending notifications (of the "Yo, the power's out, you may want to look into it" type) via email, as opposed to SNMP. Turns out, that's not entirely true. The server is running Windows Server 2003. It is not running an email server of any sort - we do that via two different providers. Outlook email is provided by Verizon, and our SMTP email service is provided by a small local company. When we use CDO to send auto-generated notification emails, we have to provide the SMTP server name, port, username, and password. The HP Power Manager interface only allows us to enter the server name and the username. Thus, not surprisingly, the emails never go anywhere. Help?

  • Is Cherokee (probably) the best static content server for beginner sysadmins?

    - by Bad Learner
    I have read the pros and cons of most of the popular web servers and have come to the conclusion that Apache would (probably) be the best web server for serving dynamic content; no wonder YouTube, Flickr and Facebook, among many others, use it. I do not know if the C10K problem applies to Apache even when serving dynamic content only, but I think any web server used to serve dynamic content needs some good tweaking for optimized performance, and given that nothing beats Apache when it comes to documentation, resources and support on the web, I will go with Apache for dynamic content. That apart, the confusion begins when it comes to choosing web servers for static content (including streaming videos). I see that Nginx, Cherokee and Lighttpd are among the best (I am not considering non-open-source or non-Linux stuff here). So, which to choose? I know that:

    1. one cannot go wrong with any of the three (Nginx, Cherokee, Lighttpd);
    2. Lighttpd's development has evidently gotten slower than it was a good while ago;
    3. the documentation is pretty good for all three, and hopefully so are the resources (knowledge of these among the users of the Stack Overflow/Server Fault sites, the web, etc.).

    Precisely, and noting points [2] and [3], if I am not wrong, I should go with either Nginx or Cherokee. I would love to see someone clarify these points:

    - Is Cherokee just as fast (MB/s), performant (connections/s), and reliable (think downtime/restarting the server) as Nginx for serving static content and load balancing, for small, medium, large (and really large) websites and applications? (Think the size of YouTube, Apache or Facebook.)
    - If the answer to the question above is a big "hell, yes!", then I should probably prefer Cherokee, right? Because, since I am a beginner, it would be a lot easier to set up Cherokee, as it has a graphical admin user interface + really good documentation. Yes?

    I could be wrong, I could be right. I put down what I know so that you can offer the most relevant advice. Pardon me if anything I've said is offensive.

  • Install VirtualBox on Ubuntu 12.04.1 (on [Samsung] Chromebook)

    - by iphonedev7
    I have dual-booted Ubuntu Linux 12.04.1 LTS on my Samsung Series 5 Chromebook and am trying to run/install Oracle VirtualBox (from the generic .run file downloaded from their website). However, every time I try to run it (as root from the command line), the following error occurs:

        Please install the build and header files for your current Linux kernel.
        The current kernel version is 3.4.0
        Problems were found which would prevent VirtualBox from installing.

    I have tried the version from the Software Center, as well as the command line installation, both of which gave me errors based on my linux-headers/linux-kernel/linux-[kernel]-image. Here's an error I keep getting (on the command line):

        First Installation: checking all kernels...
        It is likely that 3.4.0 belongs to a chroot's host
        Building only for 3.5.0-18-generic
        Building initial module for 3.5.0-18-generic
        ERROR (dkms apport): kernel package linux-headers-3.5.0-18-generic is not supported
        Error! Bad return status for module build on kernel: 3.5.0-18-generic (x86_64)
        Consult /var/lib/dkms/virtualbox/4.1.12/build/make.log for more information.
        Setting up virtualbox-qt (4.1.12-dfsg-2ubuntu0.2) ...
        Processing triggers for libc-bin ...
        ldconfig deferred processing now taking place

    ...and one of the more cryptic errors I get when trying to start any virtual machine:

        Result Code: NS_ERROR_FAILURE (0x80004005)
        Component: Machine
        Interface: IMachine {5eaa9319-62fc-4b0a-843c-0cb1940f8a91}
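    The first error is about headers for the running 3.4.0 kernel, while dkms is building against the 3.5.0-18 packages. A hedged first step (whether headers matching a ChrUbuntu 3.4.0 kernel are actually published is a separate question):

        # Install dkms plus headers that match the *running* kernel
        sudo apt-get install dkms linux-headers-$(uname -r)

        # Then rebuild the VirtualBox kernel modules (path used by the 4.1-era .run installer)
        sudo /etc/init.d/vboxdrv setup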

  • Best all-in-one Linux-based proxy, firewall, DHCP and WINS server

    - by BeStRaFe
    I help to run a LAN in Sydney. We need a proxy/gateway solution to allow those pesky games that require internet access to work. I have been doing this with an ISA server and it has worked quite well; however, I now wish to port this over to run on the same hardware as our Cacti/Nagios box, under a VMware VM. ISA server is horridly bad due to the massive RAM and I/O requirements of something that is basically port blocking and handing out IPs. The needs are as follows:

    1. DHCP
    2. WINS (otherwise network devices fight over who is the WINS master)
    3. Filtering based on PORT for outbound traffic
    4. Ability to whitelist IPs/MACs for internet access
    5. Web interface

    I had been thinking of using pfSense, however there is no option for a WINS server, and I can't be bothered working my way around BSD.

  • Fedora 15: em1 recently disappeared and hostapd no longer serves internet to wirelessly connected devices

    - by Daniel K
    I have a laptop running hostapd, phpd, and mysql. This laptop uses an Ethernet connection to connect to the internet and acts as a wireless access point for my workplace's wifi devices. After installing some software and reconnecting the Ethernet elsewhere, my "em1" device is no longer present, and wirelessly connected devices can no longer reach the internet. The software I recently installed: pptp, pptpd, and some updated Fedora libraries. I have also recently moved my desk and laptop to another location, and thus had to reconnect the Ethernet elsewhere. Wifi devices no longer have access to the internet: wirelessly connected devices are able to successfully log into the laptop, showing full strength, the correct SSID, and using the proper password. However, when I try to connect to a site like Google, the request times out. The device "em1" also no longer appears on my machine. Running:

        # ifup em1

    gives me the following output:

        ERROR : [/etc/sysconfig/network-scripts/ifup-eth] Device em1 does not seem to be present, delaying initialization.

    And running:

        # dhclient em1

    outputs:

        Cannot find device "em1"

    When I run # dmesg | grep renamed, I get:

        renamed network interface eth0 to p4p1

    I've tried to connect to the internet through p4p1 directly from the laptop and was successful. However, my wireless devices connected to the laptop are not able to connect to the internet. I have uninstalled pptp and pptpd using # yum erase ..., but the problem persists. To install pptp I used:

        # yum install pptp

    To install pptpd I did the following:

        # rpm -Uvh http://poptop.sourceforge.net/yum/stable/fc15/pptp-release-current.noarch.rpm
        # yum install pptpd

    To update my Fedora libraries I used:

        # yum check-update
        # yum update

    EDIT: Running # route produces the following:

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        default         10.11.200.1     0.0.0.0         UG    0      0        0 p4p1
        10.11.200.0     *               255.255.252.0   U     0      0        0 p4p1
        172.16.100.0    *               255.255.255.0   U     0      0        0 wlan0
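    Since dmesg shows eth0 being renamed to p4p1 rather than em1, the quickest fix is probably to re-point the old configuration at the new name (a sketch; it assumes a stock ifcfg-em1 existed, and the MASQUERADE line is only an example of the kind of NAT rule that would still reference em1):

        # Re-point the interface config at the new device name
        mv /etc/sysconfig/network-scripts/ifcfg-em1 \
           /etc/sysconfig/network-scripts/ifcfg-p4p1
        sed -i 's/^DEVICE=.*/DEVICE=p4p1/' /etc/sysconfig/network-scripts/ifcfg-p4p1
        ifup p4p1

        # Any forwarding/NAT rules that named em1 must follow suit, e.g.:
        iptables -t nat -A POSTROUTING -o p4p1 -j MASQUERADE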

  • VPN messes up DNS resolution

    - by user124114
    After connecting to a server with the Kerio VPN client (OS X Leopard), the internet (~web browsing) stopped working for the client. After poking around, the issue seems to be a bad DNS server (i.e., entering IPs directly works). After disconnecting from the VPN, the invalid DNS server disappears from scutil --dns and all's well again. Now, I don't understand why OS X on the client even changes the DNS settings; internet traffic should be routed through a different interface, through the default gateway, not through the VPN. Questions:

    1. By what mechanism does connecting the VPN client change the "default" DNS server?
    2. How can I stop the VPN client from changing routing/DNS rules? Where is this stuff stored/modified?

    Before VPN:

        $ scutil --dns
        DNS configuration
        resolver #1
          nameserver[0] : 10.66.77.1   # <---- default gateway = home router; all good
          order : 200000
        resolver #2
          domain  : local
          options : mdns
          timeout : 2
          order   : 300000
        ...

    VPN connected:

        $ scutil --dns
        DNS configuration
        resolver #1
          nameserver[0] : 192.168.1.1   # <--- rubbish
          nameserver[1] : 192.168.2.1
          order : 200000
        resolver #2
          domain  : local
          options : mdns
          timeout : 2
          order   : 300000
        ...

    The VPN doesn't appear among $ networksetup -listallnetworkservices.
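    While hunting for the real answer, you can at least inspect and pin the resolver per network service from Terminal (the service name "Ethernet" is illustrative; use whatever -listallnetworkservices prints for your primary interface):

        networksetup -listallnetworkservices
        networksetup -getdnsservers "Ethernet"

        # Pin DNS for the primary service back to the gateway while the VPN is up
        networksetup -setdnsservers "Ethernet" 10.66.77.1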

  • Setting the default permissions for files uploaded via FTP to a directory

    - by Kerri
    Disclaimer: I'm just a web designer/coder, and server admin stuff is my weakest point of all. So be easy on me (and very specific). I'm using a simple CMS (Unify) on a site, where part of the functionality is that the client can upload files to a specified directory (using FTP). The permissions for the upload directory are set to 755. But when files are uploaded through the interface, they are uploaded with permissions set to 640 (instead of 644), so site visitors cannot access the files. When I emailed the CMS's support about this, they told me that it was a server setting, and that I need to make sure files uploaded through FTP are set to 644. Makes perfect sense, but I have no idea how to do this. Any help would be greatly appreciated. This is a shared site hosted by Network Solutions (Unix), so my access options are limited: I can edit .htaccess files and php.ini, but that's about all I have access to. It appears I can't even log on via shell.

    ETA 11/11/2010: Thanks all. I was able to work around this problem by setting up the CMS's settings in a different way. I'd be interested in following up on Nick O'Niel's suggestions, because I think he's on the right track, but unfortunately I can't access the necessary files on this particular server. So, anyway, I'm leaving this open, since the original question isn't exactly resolved. Unfortunately, I probably can't put a correct answer to the test, since the shared server in question has nearly all of its config files tightly locked down.
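    For reference, 640 versus 644 is exactly one umask step (a umask of 026/027 instead of 022), and on the common FTP daemons it is a one-line server-side setting, which is presumably what support meant by "a server setting". Sketches for the two usual suspects, assuming the host runs one of them (Network Solutions would have to change this; it isn't reachable from .htaccess or php.ini):

        # vsftpd: in vsftpd.conf
        local_umask=022

        # ProFTPD: in proftpd.conf
        Umask 022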

  • cannot reach munin port on other AWS instance

    - by Amedee Van Gasse
    I have 2 AWS instances, in the same region but different availability zones; one is in regular EC2 and the other is in a VPC. Both have an Elastic IP, and both are 64-bit Amazon Linux AMI 2014.03.1. Both are running munin-node; the instance in the VPC also runs munin-cron. I have added incoming TCP and UDP port 4949 to the security groups of both instances. On the munin node, I added an allow line with the IP address (as a regular expression) of the munin server to /etc/munin/munin-node.conf, and I bind munin-node to any interface using host *. Then I ran sudo service munin-node restart and checked with netstat:

        $ sudo netstat -at | grep munin
        tcp        0      0 *:munin        *:*         LISTEN

    So the port is open there. On the munin server AND on the munin node:

        $ nmap AMAZON-IP -p 80,4949 | grep tcp
        80/tcp   open   http
        4949/tcp closed munin

    On the munin node:

        $ nmap localhost -p 80,4949 | grep tcp
        80/tcp   open munin
        4949/tcp open munin

    So from the outside, the HTTP port is open (Apache is running) but the munin port is closed. The node can't even reach the munin port on its own public IP address, but it can on localhost. I added port 80 as a sanity check, to be sure that there is network connectivity at all. So what am I overlooking here?
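    Two quick checks that separate a host-level filter from a listener or security-group problem (a sketch; the allow line mirrors munin-node.conf's regex syntax, and the addresses are placeholders):

        # Is anything on the node itself filtering 4949?
        sudo iptables -L -n | grep 4949

        # Probe from the munin server; "closed" (RST) vs "filtered" (timeout) matters
        nc -vz <node-elastic-ip> 4949

        # In /etc/munin/munin-node.conf, the server must match an allow regex:
        #   allow ^10\.0\.1\.5$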
