Search Results

Search found 15248 results on 610 pages for 'slashdot configuration'.

  • Redirect local, not internal, requests using SuSEfirewall2 or an iptables rule

    - by James
    I have a server running a web application deployed on Tomcat, sitting in a test network. We're running SuSE 11 SP1 and have some redirection rules for incoming requests. For example, we don't bind port 80 in Tomcat's server.xml file; instead we listen on port 9640 and have a configuration line in SuSEfirewall2 to redirect port 80 to 9640, because Tomcat doesn't run as root and can't open port 80. My web application needs to be able to make requests to port 80, since that is the port it will be using when deployed. What rule can I add so that local requests also get redirected by iptables?

    I tried looking at this question: How do I redirect one port to another on a local computer using iptables? but the suggestions there didn't seem to help me. I tried running tcpdump on eth0 and then connecting to my local IP address (not 127.0.0.1, but the actual address), and I didn't see any activity; I did see activity if I connected from an external machine. Then I ran tcpdump on lo, again tried to connect, and this time I saw activity. So this leads me to believe that any requests made locally to my own IP address aren't getting handled by iptables. Just for reference, here's what my NAT table looks like now:

        Chain PREROUTING (policy ACCEPT)
        target     prot opt source     destination
        REDIRECT   tcp  --  anywhere   anywhere     tcp dpt:http redir ports 9640
        REDIRECT   tcp  --  anywhere   anywhere     tcp dpt:xfer redir ports 9640
        REDIRECT   tcp  --  anywhere   anywhere     tcp dpt:https redir ports 8443

        Chain POSTROUTING (policy ACCEPT)
        target     prot opt source     destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source     destination
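
    Locally generated packets never traverse PREROUTING; they hit only the nat table's OUTPUT chain, which matches the tcpdump observations above. A minimal sketch of the usual fix, assuming Tomcat listens on 9640 (the destination address below is a placeholder for the server's own IP):

        # redirect locally generated requests to port 80; PREROUTING only
        # sees packets arriving on an interface, OUTPUT sees local ones
        iptables -t nat -A OUTPUT -p tcp -d 172.16.0.10 --dport 80 -j REDIRECT --to-ports 9640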

  • Software to automate website screenshot capture

    - by Leniel Macaferi
    Do you know of any software that can automate the process of taking screenshots of every page of a website? It would act like a spider/crawler/robot, you name it. For example: I developed a website and now I'd like to get a screenshot of every page of the site. I could of course do it manually (a lot of work): for each module of the site (Student, Payment, etc.) I have different pages/forms (Create, Edit, Details, Delete, etc.). What I'm looking for is software that can visit every link of the site and then capture the screen, automating the whole process. It would also be good if the software allowed the user to pass in a list of URLs to capture, allowing even more fine-grained control.

    EDIT: I tried Selenium, mentioned by Aaron in his answer, but I managed to find an app that does exactly what I needed. It's called Paparazzi!. I wrote a blog post to showcase my attempt at Selenium and my findings regarding Paparazzi!'s batch capture functionality: Software to automate website screenshot capture
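
    For a purely command-line route, a minimal sketch using the wkhtmltoimage renderer (any headless capture tool could be substituted; urls.txt is a hypothetical file listing one page URL per line):

        #!/bin/sh
        # render one PNG per URL listed in urls.txt
        n=0
        while read -r url; do
            n=$((n+1))
            wkhtmltoimage "$url" "shot-$n.png"
        done < urls.txt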

  • Dual Monitor setup issues between laptop and external led monitor

    - by Julian
    I have two challenges:

        1. The monitor will not connect to the laptop through HDMI.
        2. Watching HD video content sometimes causes the laptop to turn off, especially when I'm streaming from tekzilla.com.

    Setup: I got my new HP 2311x LED LCD monitor this week and have it running as the main monitor, extended by the 15-inch screen on my Dell Studio 1558. Right now I have to connect the external monitor through VGA. For the HDMI connection issue, I suspect that the appropriate drivers are not installed, because I don't see any HDMI device in Device Manager; I've checked and I don't see any HDMI-specific drivers listed online. For the shutdown issue, I suspect the laptop might be overheating, though I'm not sure why it would be - it never did that while I watched movies on the laptop's built-in screen.

    My laptop configuration:

        15" LED LCD screen at 1366 x 768
        Intel i5 processor
        integrated graphics card
        4 GB DDR3 RAM
        500 GB hard drive

    I've tried everything from switching the source on my monitor to HDMI to various start-up combinations, and nothing has worked. What could be the issue and how do I solve it?

  • Can I autoregister my clients/servers in local DNS?

    - by Christian Wattengård
    Right now I have a W2k12 server at home that I run as a domain controller. This has the extra benefit of registering every "subordinate" computer's name in its DNS, so that I don't have to go around remembering IPs all the time (and it lets me easily run DHCP on my servers too). I need to rework my home network for several odd reasons, and in this new scenario there is no place for a big honking W2k12 server box. I have a RasPi, and I have other smallish Linux boxen I can use. (In a worst-case scenario I'll use my NUC, but then I'll be forced to use my home cinema's UPnP client for media... The HORROR!!)

    Is it possible to set up a DNS-server "appliance" where each machine somehow autoregisters its own hostname? Scenario:

        Router (N66u) on 172.20.20.1; runs DHCP on the 172.20.20.100-200 range
        Server [verdant], *nix flavor, on 172.20.20.2
        Laptop [speedy], W8 flavor, DHCP-assigned
        Laptop [canary], W8 flavor, DHCP-assigned
        Desktop [lianyu], Ubuntu flavor, DHCP-assigned

    What I would like is for all of the above machines (except possibly the router) to be available as verdant.starling.lan, canary.starling.lan and so on. This is how it works right now (except for the Ubuntu box... I haven't cracked that one yet), because Windows just does this for you. I would also like to do this without any manual labor on the server: when I tell my box its name is smoak, it should "immediately" be available as smoak.starling.lan without any extra configuration on my part. How can I do this in a Linux (Ubuntu) environment? (Bonus comment upvote for naming the naming scheme :P )
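
    One common answer, sketched under the assumption that dnsmasq takes over both DHCP and DNS for the LAN (e.g. on the RasPi, with DHCP disabled on the N66u): dnsmasq automatically publishes each DHCP client's self-reported hostname in DNS, which is essentially the Windows behaviour described above.

        # /etc/dnsmasq.conf - addresses from the scenario above, MAC is a placeholder
        domain=starling.lan          # qualify bare hostnames with this domain
        expand-hosts                 # also qualify names from /etc/hosts
        local=/starling.lan/         # never forward .lan lookups upstream
        dhcp-range=172.20.20.100,172.20.20.200,12h
        dhcp-host=00:11:22:33:44:55,verdant,172.20.20.2   # optional fixed lease

    Any client that sends its hostname in the DHCP request (Windows does by default; on Ubuntu, dhclient's "send host-name" option covers it) then resolves as name.starling.lan with no per-host work on the server.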

  • How to use the correct SSH private key?

    - by Dail
    I have a private key at /home/myuser/.ssh/privateKey. I have a problem connecting to the SSH server, because I always get:

        Permission denied (publickey).

    I tried to debug the problem and found that ssh is reading the wrong file. Take a look at the output:

        [damiano@Damiano-PC .ssh]$ ssh -v root@vps1
        OpenSSH_5.8p2, OpenSSL 1.0.0g-fips 18 Jan 2012
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for vps1
        debug1: Applying options for *
        debug1: Connecting to 111.111.111.111 [111.111.111.111] port 2000.
        debug1: Connection established.
        debug1: identity file /home/damiano/.ssh/id_rsa type -1
        debug1: identity file /home/damiano/.ssh/id_rsa-cert type -1
        debug1: identity file /home/damiano/.ssh/id_dsa type -1
        debug1: identity file /home/damiano/.ssh/id_dsa-cert type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.8p1 Debian-7ubuntu1
        debug1: match: OpenSSH_5.8p1 Debian-7ubuntu1 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.8
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Server host key: RSA 74:8f:87:fe:b8:25:85:02:d4:b6:5e:03:08:d0:9f:4e
        debug1: Host '[111.111.111.111]:2000' is known and matches the RSA host key.
        debug1: Found key in /home/damiano/.ssh/known_hosts:1
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Trying private key: /home/damiano/.ssh/id_rsa
        debug1: Trying private key: /home/damiano/.ssh/id_dsa
        debug1: No more authentication methods to try.

    As you can see, ssh is trying to read /home/damiano/.ssh/id_rsa, but I don't have this file: I named it differently. How can I tell SSH to use the correct private key file? Thanks!
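
    Two standard ways to point ssh at a non-default key, sketched with the key path from the question:

        # one-off, on the command line:
        ssh -i ~/.ssh/privateKey -p 2000 root@vps1

        # or permanently, in ~/.ssh/config:
        Host vps1
            HostName 111.111.111.111
            Port 2000
            User root
            IdentityFile ~/.ssh/privateKey

    Note that the key file must not be group- or world-readable (chmod 600), or ssh will refuse to use it.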

  • Problem: IIS 7.0 locking files during upload

    - by viscious
    I am running Server 2008 with IIS 7.0 and the FTP add-on for IIS 7.0. I have the FTP site configured and mostly working, except that about 70% of the time a file transfer will hang forever during upload. If I disconnect the FTP client, reconnect, and try to upload the same file, I get an error on the client saying the file is locked, and I have to restart the FTP service to clear the lock.

    I fired up Process Explorer and did a search on the file in question, and sure enough the FTP service has a lock on the file; it takes around 20 minutes (and sometimes longer) to release the lock on its own. This lock stays around even after I disconnect the client. Like I said, this only happens about 70% of the time; the other 30% of the time the upload goes through just fine.

    Things I have verified:

        - Not a firewall issue: the server uses the passive port range 8000-9000, which is allowed on the firewall.
        - Not a NAT issue: the server has a globally routable IP address.
        - All recommended/required updates are installed.

    I have 5 other servers in a very similar configuration, and this is the only one I have problems with.

  • Can I disable this Windows (XP) Security Warning?

    - by FumbleFingers
    I recently reformatted my hard drive and reinstalled Windows XP (I know I'll have to take the plunge and commit to Win8 "real soon now", but I'm just not quite ready for the upheaval yet! :) I used to use WinRar (and later, when I got fed up with the "nag" messages, 7-Zip), but I haven't installed either of them in my new configuration, so I must be using the built-in XP facility when I open *.zip files.

    For years, I've been opening downloaded *.zip archives and using "drag & drop" to copy to a File Explorer window open on the folder where I want the files to end up (usually My Documents\Downloads). But now I find that when I "drop" the file(s), I get a pop-up Windows Security Warning saying:

        Are you sure you want to copy or move files to this folder?
        You should only move or copy files from locations that you trust

    Can anyone explain why I'm getting this message, and is there any (reasonably easy, please! :) way to suppress it? Since I've already put the *.zip file on my computer, it seems a bit late to ask if I trust it. (Thus far, the files in question have always been plain text, so it's not a matter of dodgy programs, etc.)

    Apologies for the low-quality image - I don't have the appropriate tools or knowledge to do any better, and it doesn't help that my "PrtScr" screen capture has included what would have been on my second monitor (TV) if it had been turned on. If you can't read it, trust me - I have copied the text verbatim.

  • How to use WPA2 client mode in the Linux-based Cisco WAP4410N access point

    - by joechip
    I have a Cisco WAP4410N access point that I want to use as a client to connect to a WPA2 wireless network (for WLAN service-monitoring purposes). Supposedly this access point supports a "Wireless Client/Repeater" mode that allows this. The Repeater function is optional (I have that box unchecked so that nobody can connect to this access point wirelessly). I have verified through SSH that the access point gets configured as a client and not as a Master, but it never associates to the SSID I ask it to. This is what iwconfig shows:

        ath04     IEEE 802.11ng  ESSID:"myownssid"
                  Mode:Managed  Channel:0  Access Point: Not-Associated
                  Bit Rate:0 kb/s   Tx-Power:14 dBm   Sensitivity=1/3
                  Retry:off   RTS thr:off   Fragment thr:off
                  Encryption key:off
                  Power Management:off
                  Link Quality=0/94  Signal level=161/162  Noise level=161/161
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:0  Invalid misc:0   Missed beacon:0

    Although I've never done this from the command line, I suppose I could use wpa_supplicant or wpa_cli to associate it, but I don't know how to do that without editing configuration files, and the filesystem is read-only. Besides, I would have to run those commands manually after every reboot. I'd like to know how to do this the Cisco way, if possible. If not, any trick to make this work would be useful.

    Edit: This is with the latest firmware, 2.0.4.2. And I found that not all of the filesystem is read-only, since /var and /tmp are mounted with type ramfs.
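
    A sketch of the manual association route, assuming the client interface is the ath04 shown above and writing to /tmp since it is one of the writable ramfs mounts (SSID and passphrase are placeholders, and the setup would not survive a reboot):

        # build a wpa_supplicant network block from the passphrase
        wpa_passphrase myownssid 'secret-passphrase' > /tmp/wpa.conf
        # associate in station mode, running in the background
        wpa_supplicant -B -i ath04 -c /tmp/wpa.conf

    A -D driver option may also be needed on this Atheros build; running wpa_supplicant with no arguments prints the drivers it was compiled with.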

  • PHP include() through HTTP makes Apache time out

    - by Adam Interact
    I have a problem with ExpressionEngine 2 after moving from an old server to WHM/cPanel running on CentOS 6.4. Simple test code to reproduce the issue:

        <?php
        $protocol = strpos(strtolower($_SERVER['SERVER_PROTOCOL']), 'https') === FALSE ? 'http' : 'https';
        $host = $_SERVER['HTTP_HOST'];
        include($protocol . '://' . $host . '/header.html');
        ?>
        <p> Main text...</p>
        <?php include($protocol . '://' . $host . '/footer.html'); ?>

    Where header.html looks like:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        <title>Untitled Document</title>
        </head>
        <body>

    and footer.html looks like:

        </body>
        </html>

    This makes Apache time out:

        Warning: include(http://www.domain.com/header.html) [function.include]: failed to open stream: Connection timed out in /home/domain/public_html/test/index.php on line 5
        Warning: include() [function.include]: Failed opening 'http://www.domain.com/header.html' for inclusion (include_path='.:/usr/lib/php:/usr/local/lib/php') in /home/domain/public_html/test/index.php on line 5
        Main text...
        Warning: include(http://www.domain.com/footer.html) [function.include]: failed to open stream: Connection timed out in /home/domain/public_html/test/index.php on line 12
        Warning: include() [function.include]: Failed opening 'http://www.domain.com/footer.html' for inclusion (include_path='.:/usr/lib/php:/usr/local/lib/php') in /home/domain/public_html/test/index.php on line 12

    Any clue what could be wrong with the Apache or PHP configuration? Thanks
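
    The timeout pattern suggests the server cannot reach its own public hostname from inside (commonly a firewall rule or a missing /etc/hosts entry on the new box). Whatever the cause, a sketch of the usual remedy is to include the fragments from disk rather than over a loopback HTTP request, which is faster and avoids the network entirely:

        <?php
        // resolve header/footer on the local filesystem instead of via HTTP
        include($_SERVER['DOCUMENT_ROOT'] . '/header.html');
        ?>
        <p> Main text...</p>
        <?php include($_SERVER['DOCUMENT_ROOT'] . '/footer.html'); ?>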

  • X.org - mouse gets stuck on press

    - by grawity
    I'm using Arch Linux. Very often, when I click on something, the OS sees the mouse press but not the release. If it was a link or file I clicked, moving the cursor drags it too; hammering the same mouse button again has no effect. Usually, if I tap the touchpad (ALPS), the system finally sees both press and release of that, and I can continue working. (This might be because the touchpad uses a different driver - synaptics instead of evdev.) As you can imagine, this is quite annoying, even for someone who spends 70% of his life in front of a terminal app.

    This is not a mouse issue - I'm on a laptop, and it affects both the TrackPoint and an external USB mouse. This is not a DE or window-manager issue - I have used GNOME (with Metacity, Compiz and Xfwm4), Xfce (with Metacity and Xfwm4), mwm, twm, awesome, and wmii. It doesn't seem to be a hardware thing either - after rebooting into Windows XP, everything works fine. hal is used for the auto-configuration of devices (as I have to disconnect the USB mouse often), so Xorg.conf really has nothing of relevance. Xorg -version shows:

        X.Org X Server 1.6.3.901 (1.6.4 RC 1)
        Release Date: 2009-8-25
        X Protocol Version 11, Revision 0

    If it changes anything, the laptop is a stone-age Dell Latitude C840. I kinda suspect either hal or the evdev thing to be causing it, but I really have no idea what to check further. In other words, HALP!#$ This thing is driving me nuts.

  • 1and1 ssh - connection refused

    - by kitensei
    I'm having trouble connecting to my 1&1 account through SSH. When I try to connect with the command ssh userXXX@host -p 22 -vv, I get the following output:

        OpenSSH_5.8p1 Debian-7ubuntu1, OpenSSL 1.0.0e 6 Sep 2011
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to mySite.com [ip_here] port 22.
        debug1: connect to address ip_here port 22: Connection refused

    Moreover, once I try to connect through SSH and it fails, even HTTP access is dead: I cannot access the website through a browser anymore. Please help! I'm running Ubuntu 11.10.

    EDIT: I don't know if it helps, but here's the .htaccess of the 1&1 server:

        Options +Indexes
        Satisfy any
        Order Deny,Allow
        Allow from 212.227.X.X
        Deny from all
        RemoveType .html .gif
        AuthType Basic
        AuthName "Access to /logs"
        AuthUserFile /kunden/homepages/43/d376072470/htpasswd
        Require user "user_here"

    and sftp.log:

        Mar 26 09:21:24 193.251.X USER_HERE Connection from 193.251.X port 51809
        Mar 26 09:21:30 193.251.X USER_HERE Failed password for USER_HERE from 193.251.X port 51809 ssh2
        Mar 26 09:23:39 193.251.X USER_HERE Failed password for USER_HERE from 193.251.X port 51809 ssh2
        Mar 26 09:23:41 193.251.X USER_HERE Failed password for USER_HERE from 193.251.X port 51809 ssh2
        Mar 26 09:23:45 193.251.X USER_HERE Failed password for USER_HERE from 193.251.X port 51809 ssh2
        Mar 26 09:23:57 193.251.X USER_HERE Failed password for USER_HERE from 193.251.X port 51809 ssh2
        Mar 26 10:53:36 212.227.X tmp64459736-3228 Connection from 212.227.X port 23275
        Mar 26 10:53:36 212.227.X tmp64459736-3228 Accepted password for tmp64459736-3228 from 212.227.X port 23275 ssh2
        Mar 26 11:53:37 212.227.X tmp64459736-3228 Connection closed by 212.227.X
        Mar 26 18:58:17 212.227.X tmp64459736-5363 Connection from 212.227.X port 23353
        Mar 26 18:58:17 212.227.X tmp64459736-5363 Accepted password for tmp64459736-5363 from 212.227.X port 23353 ssh2
        Mar 26 19:53:36 212.227.X tmp64459736-8525 Connection from 212.227.X port 5166
        Mar 26 19:53:36 212.227.X tmp64459736-8525 Accepted password for tmp64459736-8525 from 212.227.X port 5166 ssh2
        Mar 26 19:58:17 212.227.X tmp64459736-5363 Connection closed by 212.227.X

  • Lucid Lynx login issue

    - by Bart Silverstrim
    I recently upgraded from Karmic to Lucid. The upgrade seemed to go well, with no noticeable issues. Then I logged in, and my window manager wasn't starting: applications would appear, but sans control buttons and borders, so I figured the window manager needed to be given a swift kick. I opened a web browser, and a quick google had me run "metacity --replace &", and everything popped up. I re-ran the Compiz configuration tool to enable my rotating desktop cube the way I liked it, and had to reconfigure my desktop switcher to the right number of desktops (although the first time I ran it, it crashed on the panel and reloaded... odd, but once it relaunched it seemed fine).

    Today I installed updates, rebooted, and logged in for the second time since my upgrade. Again the window manager was dead, my Compiz settings were gone, and the workspaces were set back to four (and when I clicked on the preferences to change them, it crashed on the panel and reloaded again). Resetting everything made things look somewhat normal again. I'm guessing it'll work until I reboot again.

    Googling around isn't turning up similar complaints about Lucid Lynx and the window manager. Before I go deleting preference files, does anyone else know of this kind of issue and what could be done about it? Or should I start taking the stab-in-the-dark approach of deleting preference files, hoping one of them is corrupt or has something unsupported in it that's throwing Lucid for a loop?

  • How do I specify the emergency location in CDP?

    - by chrish
    The LLDP-MED and Cisco Discovery Protocol whitepaper compares LLDP-MED and CDP. The part I am interested in is emergency location configuration. With LLDP-MED, I can specify the Emergency Line Identification Number (ELIN), and that number will be used by some IP phones (e.g. Aastra) when placing emergency calls. The whitepaper states:

        Location Identification Discovery
        This capability is important because it normally provides location information from the switch to the phone. (If the phone is configured with location information or can determine its location, then it may send this information to the switch. However, the real value is getting this information from the switch to the phone for phones that cannot determine their own location.) Location identification discovery allows the phone to be aware of its location - information that can be used for location-based applications on the phone. More importantly, this capability can be used to provide location information when making emergency calls. Both Cisco Discovery Protocol and LLDP-MED support the transportation of location information. However, LLDP-MED has more supported data formats than Cisco Discovery Protocol.

    I have found documentation on how to set the location and associate it with interfaces for LLDP-MED. How is this done for CDP? Is ELIN supported for CDP?

  • RRAS with DHCP when the IP pool is on a different subnet

    - by John B
    I run a small business network, and over the last couple of days I have been setting up equipment to add VPN capabilities to the network. I've got the following set up:

        Windows 2008 R2 with RRAS - 172.22.200.50
        Cisco RV082 router - 172.22.100.1 / 172.22.200.1

    The Cisco router only supports DHCP on a single class C network, 172.22.100.0/24; the DHCP range is 172.22.100.200-254. On the Cisco router I have also set up an additional subnet, 172.22.200.0/24.

    When a PPTP connection comes in to the router, it is forwarded to my RRAS at 172.22.200.50. If I configure RRAS to assign IPs from a static pool on the 172.22.200.0/24 subnet, everything works fine except the DNS suffix / search domain. However, if I set RRAS to use DHCP, I am no longer able to contact any devices on the network: the IP I receive is on a different subnet (172.22.100.0/24).

    Is it possible to still use DHCP as the method of IP assignment in RRAS, even when the IP addresses assigned are in a different subnet? If yes, what piece of configuration am I missing to fix the VPN connection issues described above? The reason I want RRAS with DHCP to work is that, from what I have understood, this is the "only" way to hand out a DNS suffix to VPN clients. Any help on this matter is greatly appreciated!

  • Multiple WAN interfaces on SonicWall TZ 100?

    - by Chad Decker
    I'm using a SonicWall TZ 100 with a basic configuration: X0 for the LAN and X1 for the WAN. The WAN uses DHCP to obtain its routable IP address. I want to obtain a second routable IP from my ISP; I'm in luck, because my cable company will provide me with an additional dynamic IP for $5/mo. How do I bind this IP to my SonicWall? The additional dynamic IP will not be consecutive with the original one - it won't even be on the same class C.

    I think what I want to do is use one of the empty ports/interfaces (X2, X3, or X4), tell that interface to use DHCP, and then add that interface to the WAN "zone". I can't figure out how to do this, though. Here's what I've tried so far:

        1. I looked in Network > Interfaces. I see X0 and X1, but the other, unused interfaces don't show up, and I don't see an "Add" button to add the new interfaces.
        2. I looked in Network > Zones. I see that X0, X2, X3, X4 are in the LAN zone. I tried to drag X3 into the WAN zone, but I can't; nor does clicking the "Configure" button allow me to move an unused interface from LAN to WAN.
        3. I read the post entitled "Splitting up multiple WAN's on Sonicwall", but it doesn't seem applicable to me.

    Any thoughts?

  • Installing WindowsAuthentication breaks authentication / web.config?

    - by Ian Quigley
    I have a clean Windows 2008 R2 box (on a VM) and have installed IIS 7.5 with default options. I then copied a website to it (from Windows 7, IIS 7), and after a little tweaking the website is working fine; it is currently using and working with Anonymous Authentication. I then went back to Server Manager, Roles > Security, and ticked and installed Windows Authentication. When I check my server in IIS (top level, above sites) under Authentication, I see:

        Anonymous Authentication (enabled)
        ASP.NET Impersonation (disabled)
        Forms Authentication (disabled)
        Windows Authentication (enabled)

    When I check my default website under Authentication, I see the same list but with "Retrieving status" and an error dialog saying:

        There was an error while performing this operation.
        Details:
        Filename: c:\inetpub\wwwroot\screwturnwiki\web.config
        Line number: 96
        Error: This configuration section cannot be used in this path. This happens when the section is being locked at the parent level. Locking is either by default (overrideModeDefault="Deny"), or set explicitly by a location tag with overrideMode="Deny" or the legacy allowOverride="False".

    I have tried hand-editing the web.config with no success. Uninstalling Windows Authentication happily returns my site to working with Anonymous Authentication, and allows me to enable/disable these three options. FYI, I am using ScrewTurn Wiki with the Active Directory plug-in.
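
    The error means the windowsAuthentication section is locked at the server level, so the site's web.config (line 96) is not allowed to set it. A sketch of the usual fix, unlocking the section with appcmd from an elevated command prompt:

        %windir%\system32\inetsrv\appcmd unlock config -section:system.webServer/security/authentication/windowsAuthentication

    After that, the <windowsAuthentication> element in the site's web.config should be honoured instead of rejected.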

  • PHP Output buffer flush issue on Apache/Linux

    - by Iiro Vaahtojärvi
    Hi, I'm running into issues with the PHP output buffer flushing on my Linux web server. The output buffer is maintained correctly and all the right data is pushed to it in my code, but the usual flushing mechanisms won't flush it to the browser. I have tried everything posted at http://php.net/manual/en/function.flush.php, but with no success so far. I got a small script from php.net to test it:

        <?php
        ob_start();
        for ($i = 0; $i < 70; $i++) {
            echo 'printing...<br />';
            ob_get_flush();
            flush();
            usleep(300000);
        }
        ?>

    This should print "printing..." to the browser 70 times, one line every 0.3 seconds. This works fine on my other testing environment, which is Windows-based (still Apache, via the XAMPP package), but on my Linux server it doesn't: it waits for the script to finish before giving anything to the browser, basically ignoring the whole flush command. If anyone has experienced this before or knows of anything that could help (be it server configuration or an adjustment to the code), it would be greatly appreciated!
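
    When flush() works on one server and not another, the output is usually being held in a layer above PHP. A sketch of the settings worth checking, assuming compression is the buffer in the way (both zlib and mod_deflate hold output until enough bytes accumulate):

        ; php.ini
        output_buffering = Off
        zlib.output_compression = Off

        # Apache - exempt the test script from mod_deflate, which buffers
        # the response body and defeats flush()
        SetEnv no-gzip 1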

  • Fixed and dynamic IPs in ISC DHCPD lead to double lease

    - by GorillaPatch
    I would like to have a small dynamic address range, with most clients assigned a fixed IP address. My dhcpd.conf looks like this:

        use-host-decl-names on;
        authoritative;
        allow client-updates;
        ddns-updates on;

        # settings for DHCP leases
        default-lease-time 3600;
        max-lease-time 86400;
        lease-file-name "/var/lib/dhcpd/dhcpd.leases";

        subnet 192.168.11.0 netmask 255.255.255.0 {
            ddns-updates on;
            pool {
                # IP range which will be assigned statically
                range 192.168.11.1 192.168.11.240;
                deny all clients;
            }
            pool {
                # small dynamic range, used for temporary devices
                range 192.168.11.241 192.168.11.254;
            }
        }

        group {
            host pc1 {
                hardware ethernet xx:xx:xx:xx:xx:xx;
                fixed-address 192.168.11.11;
            }
        }

    The motivation for the pool declaration with "deny all clients" comes from the ISC DHCPD homepage, http://www.isc.org/files/auth.html. The idea is that hosts can first be added to the network, where they receive a temporary IP from the 241-254 address range, and an explicit host declaration is written later; upon the next connect, each host receives the right configuration.

    The problem is that I am getting error messages saying that 192.168.11.13 has both a dynamic and a static lease. I am a bit confused, as I expected that the pool declaration with "deny all clients" would not count as dynamic:

        Dynamic and static leases present for 192.168.11.13.
        Remove host declaration pc1 or remove 192.168.11.13
        from the dynamic address pool for 192.168.11.0/24

    Is there a way to have the DHCP server send a DHCPNAK to clients that have a host statement, while retaining this dynamic range?
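
    One arrangement dhcpd supports directly for this pattern, sketched below: clients with a matching host declaration count as "known", so the dynamic pool can refuse exactly those, and the deny-all static pool becomes unnecessary:

        subnet 192.168.11.0 netmask 255.255.255.0 {
            pool {
                # temporary devices only: anything with a host declaration
                # is a "known" client and is refused here
                range 192.168.11.241 192.168.11.254;
                deny known-clients;
            }
        }

    Once a host declaration is written for a machine, it should be refused the dynamic range on its next renewal and pick up its fixed-address instead.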

  • Rewriting html links with modproxyperlhtml

    - by Juancho
    I'm trying to set up an Apache reverse proxy using mod_proxy and ModProxyPerlHtml. This is my scenario:

        Domain for the proxy:                           http://www.myserver.com/
        Destination server (the one behind the proxy):  http://myserver.foo.com/myapp/

    The idea is to map http://www.myserver.com/ to http://myserver.foo.com/myapp/. Note that the path on the proxy is / and the path on the destination server is /myapp/. All of the examples I can find on the net (like the one in the official ModProxyPerlHtml documentation) are the other way around, i.e. path /myapp/ on the proxy and path / on the destination server. This is my current config, which doesn't work:

        ProxyPass / http://myserver.foo.com/myapp/
        ProxyPassReverse / http://myserver.foo.com/myapp/
        PerlInputFilterHandler Apache2::ModProxyPerlHtml
        PerlOutputFilterHandler Apache2::ModProxyPerlHtml
        SetHandler perl-script
        PerlSetVar ProxyHTMLVerbose "On"
        LogLevel Info

        <Location / >
            # ProxyPassReverse /myapp/
            PerlAddVar ProxyHTMLURLMap "/myapp/ /"
            PerlAddVar ProxyHTMLURLMap "http://myserver.foo.com /"
        </Location>

    The examples use ProxyPassReverse inside the Location directive, but in my case that doesn't work; it only works when it is outside. With this configuration the links aren't being replaced as they should be; my guess is that the Location isn't being matched, and thus the rewrite rules aren't being applied. The error log only shows that it uncompresses the content and searches it, but doesn't find anything:

        [Tue Nov 13 08:42:05 2012] [warn] [ModProxyPerlHtml] Uncompressing text/html; charset=UTF-8, Content-Encoding: gzip\n
        [Tue Nov 13 08:42:05 2012] [warn] [ModProxyPerlHtml] Content-type 'text/html; charset=UTF-8' match: /(text\\/javascript|text\\/html|text\\/css|text\\/xml|application\\/.*javascript|application\\/.*xml)/is\n
        [Tue Nov 13 08:42:05 2012] [warn] [ModProxyPerlHtml] Compressing output as Content-Encoding: gzip\n
        [Tue Nov 13 08:42:06 2012] [warn] [ModProxyPerlHtml] Content-type 'text/html; charset=UTF-8' match: /(text\\/javascript|text\\/html|text\\/css|text\\/xml|application\\/.*javascript|application\\/.*xml)/is\n

    What could be wrong?

  • Installing and running a guest OS on KVM-qemu with only serial console access

    - by nixnotwin
    I am trying to install a BSD distro with virt-install. With a Linux distro I used this:

        virt-install -n debian -r 1024 --vcpus=1 --accelerate -v \
            --disk /var/kvm/installation-disks/debian.img,size=6 --nographics \
            --network=bridge:br0,model=ne2k_pci,mac=52:54:00:66:68:09 \
            -l http://ftp.de.debian.org/debian/dists/squeeze/main/installer-amd64/current/images/ \
            -x console=ttyS0,115200

    This loads the installer directly from the online mirror. With Fedora I used this mirror: http://www.nic.funet.fi/pub/mirrors/fedora.redhat.com/pub/fedora/linux/releases/16/Fedora/x86_64/os/

    Are there such mirrors for FreeBSD or OpenBSD? The reason I want directly installable FTP/HTTP mirrors is that I can access my physical server only via SSH, and it doesn't have an X server or a window manager to give me a VNC GUI.

    When I tried installing CentOS 6 with an online mirror, I was able to finish the installation via serial console, but after I rebooted it, the serial console never worked for me. I tried everything possible - editing menu.lst, inittab and securetty. Fedora 16 booted fine from the serial console, but got stuck when it loaded the Anaconda installer. I tried editing the FreeBSD ISO installation media by adding a serial console option to the boot options, and the installation was successful, but I couldn't boot into it because it wasn't giving console access, and I couldn't edit any files since a UFS partition cannot be mounted with write access on my Ubuntu Server 10.04. Only Debian Squeeze worked well; it worked for me even without editing a single configuration file.

    I want to have CLI versions of Fedora/CentOS and FreeBSD/OpenBSD. But it looks like there isn't much hope for me to have them, as I have to depend on a serial console to do everything.
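
    For the CentOS 6 guest that installed but lost its serial console after reboot, a sketch of the standard RHEL/CentOS serial-console setup (speed is an assumption to adjust; on CentOS 6, Upstart should spawn a getty on the kernel console automatically, while older releases needed an explicit inittab entry):

        # /boot/grub/menu.lst - append to the kernel line:
        console=tty0 console=ttyS0,115200n8

        # /boot/grub/menu.lst - optionally, to get the GRUB menu itself over serial:
        serial --unit=0 --speed=115200
        terminal --timeout=5 serial console

        # /etc/securetty - add a line so root may log in there:
        ttyS0

    With those in place, "virsh console <guest>" should reach both the boot output and a login prompt.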

  • What does a connection timeout indicate when performing an NFS mount?

    - by DeeDee
    We have a shiny new QNAP NAS (TS-879U-RP), and I'm trying to mount it on our big ol' RHEL server in the same manner as our other two QNAP NAS devices. The IT department won't give me root privileges on the NAS, so I can't SSH in (I know, I know). The first thing I did was to create a network share named "Runs" via the QNAP web admin interface, and I then added the IP of the RHEL server to the share's permissions list.

    On the RHEL server, I then added the following line to /etc/fstab (aside from the IP and the specific mount directory name, this is exactly how I mounted the other two NAS devices):

        [IP of NAS]:/Runs  /mnt/gsrnas3  nfs  defaults  0 0

    I then created the gsrnas3 directory under /mnt/ and ran "mount /mnt/gsrnas3", and got the following error:

        mount.nfs: Connection timed out

    My first thought is that it's a ports issue, but I don't have enough specific experience with this issue to know for sure. I have two other NAS devices from the same manufacturer already mounted to this RHEL server, which leads me to believe the configuration issue is on the NAS side of things. I can ping the NAS device successfully from the RHEL server. Not being able to SSH into said NAS is a huge hassle, though. Any ideas?
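
    A connection timeout (as opposed to "access denied") generally means the NFS and portmapper ports never answered at all - typically the NFS service not running on the NAS, a NAS-side firewall, or an export that doesn't exist. A sketch of the usual first checks from the RHEL side (the NAS IP is a placeholder):

        rpcinfo -p <NAS-IP>      # are portmapper, mountd and nfs registered?
        showmount -e <NAS-IP>    # is /Runs actually exported, and to this host?
        # pin the version/transport, in case v4 negotiation is what stalls:
        mount -o nfsvers=3,tcp <NAS-IP>:/Runs /mnt/gsrnas3

    If rpcinfo itself times out while ping works, the NFS service or its ports are blocked on the NAS, which matches the "configuration issue on the NAS side" suspicion above.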

  • iptables advanced routing

    - by Shamanu4
    I have a CentOS server acting as a NAT box in my network. This server has one external interface (ext1 below) and three internal ones (int1, int2 and int3). Egress traffic comes from users via int1 and, after MASQUERADE, goes out via ext1. Ingress traffic comes in on ext1, is MASQUERADEd, and goes out via int2 or int3 according to static routes:

                | ext1
                | x.x.x.x/24
            +---------|----------------------+
            |                                |
            |      CentOS server (NAT)       |
            |                                |
            +---|------|---------------|-----+
                |      |               |
                | int1 | int2          | int3
                | 10.30.1.10/24        |
                |      | 10.30.2.10/24 | 10.30.3.10/24
                ^      v               v
                | 10.30.1.1/24         |
                |      | 10.30.2.1/24  | 10.30.3.1/24
            +---|------|---------------|-----+
            |   |      |               |     |
            |   |      v               v     |
            |   ^   -Traffic policer-        |
            |   |_____________|              |
            |                                |
            +------------------|-------------+
                               | 192.168.0.1/16
                               |
                     Clients 192.168.0.0/16

    The problem: egress traffic seems to be dropped after the PREROUTING table; the packet counters on the MASQUERADE rule in POSTROUTING are not changing. If I change the routes to clients so that the traffic goes back via int1, everything works perfectly. The current iptables configuration is very simple:

        # cat /etc/sysconfig/iptables
        *filter
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        -I INPUT 1 -i int1 -j ACCEPT
        -A FORWARD -j ACCEPT
        COMMIT
        *nat
        -A POSTROUTING -o ext1 -j MASQUERADE
        COMMIT

    Can anyone point out what I'm missing? Thanks.

    UPDATE: the routing table looks like this:

        192.168.100.60 via 10.30.2.1 dev int2 proto zebra   # routes to clients ...
        192.168.100.61 via 10.30.3.1 dev int3 proto zebra   # ... I have a lot of them
        x.x.x.0/24 dev ext1 proto kernel scope link src x.x.x.x
        10.30.1.0/24 dev int1 proto kernel scope link src 10.30.1.10
        10.30.2.0/24 dev int2 proto kernel scope link src 10.30.2.10
        10.30.3.0/24 dev int3 proto kernel scope link src 10.30.3.10
        169.254.0.0/16 dev ext1 scope link metric 1003
        169.254.0.0/16 dev int1 scope link metric 1004
        169.254.0.0/16 dev int2 scope link metric 1005
        169.254.0.0/16 dev int3 scope link metric 1006
        blackhole 192.168.0.0/16
        default via x.x.x.y dev ext1

    Clients have 192.168.0.1 as their gateway, which redirects them to 10.30.1.1.
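
    One hedged guess worth testing, since the flows here are deliberately asymmetric (in via int1, back out via int2/int3): the kernel's reverse-path filter drops packets whose reply route does not point back out the interface they arrived on, and it does so between PREROUTING and the routing decision, which matches "dropped after PREROUTING" exactly. A sketch of loosening it (mode 2 is "loose" reverse-path filtering, available on kernels 2.6.31 and later):

        # inspect the current settings
        sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.int1.rp_filter
        # loose mode: accept if the source is reachable via any interface
        sysctl -w net.ipv4.conf.all.rp_filter=2
        sysctl -w net.ipv4.conf.int1.rp_filter=2

    If that fixes it, the change can be made permanent in /etc/sysctl.conf.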

  • How to configure networking on an appliance such that it can plug and play on any corporate network?

    - by Joshua Lim
    I recently had the chance to configure a Moxa NPort device server appliance on a client's network, and it was very easy to do - done in just two minutes. Here's what I did:

        1. The Moxa device server has a preset IP address of 192.168.127.254 and subnet mask 255.255.255.0 (http://www.moxa.com/doc/manual/nport/5400/NPort_5400_Series_Users_Manual_v4.pdf).
        2. Moxa provides a Windows tool which I used to "scan" for the device server. It worked like magic: the software returns a list of device servers found, each identified by MAC address, and by selecting a device server in the software I can reset its default IP address and subnet mask!

    In comparison, during an earlier project I spent two hours trying to get KVM to work for a Windows 7 Embedded appliance I was installing on a client's network - http://superuser.com/questions/380305/how-to-configure-windows-7-professional-appliance-pc-on-my-clients-network-usin . Prior to that, I had already tried pre-configuring the IP address and subnet mask my client provided, yet the appliance still couldn't connect to the client's network; I also tried a cross cable, which didn't work either. After KVM worked, I discovered that the network settings were "lost" after I plugged the machine into the client's network.

    Now my question: what can I do to set up my Windows 7 Embedded appliance so that it can connect to any network, like the Moxa device server can? I tried experimenting with this on my own network, using a Windows machine configured with IP address 192.168.127.254 and subnet mask 255.255.255.0, but it doesn't connect to my network, which uses 192.168.0.*. :(

    EDIT: I would like to point out that the Moxa Windows configuration software seems able to reach any Moxa device connected to the network, even one on a different subnet, as long as the network adapter shows "connected". This is important because the Moxa device has no VGSM port or interface through which to configure the IP address.

  • Apache 2.2 + mod_fcgid + PHP 5.4: (104) Connection reset by peer

    - by Michele Piccirillo
    On a Debian 6 VPS, I'm running PHP 5.4 via mod_fcgid on a couple of different virtual hosts, managed by Virtualmin GPL. At random, I get 500 Internal Server Errors; restarting Apache brings everything back to normal. Examining the logs, I find messages of this kind:

        [Thu Oct 04 15:39:35 2012] [warn] [client 173.252.100.117] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server
        [Thu Oct 04 15:39:35 2012] [error] [client 173.252.100.117] Premature end of script headers: index.php

    Any ideas about what is happening?

    UPDATE: I found a similar question whose author reported solving the problem by disabling APC. I tried following that advice, but I'm still getting the same errors.

    VirtualHost configuration:

        SuexecUserGroup "#1000" "#1000"
        ServerName example.com
        DocumentRoot /home/example/public_html
        ScriptAlias /cgi-bin/ /home/example/cgi-bin/
        DirectoryIndex index.html index.htm index.php index.php4 index.php5

        <Directory /home/example/public_html>
            Options -Indexes +IncludesNOEXEC +FollowSymLinks +ExecCGI
            allow from all
            AllowOverride All
            AddHandler fcgid-script .php
            AddHandler fcgid-script .php5
            FCGIWrapper /home/example/fcgi-bin/php5.fcgi .php
            FCGIWrapper /home/example/fcgi-bin/php5.fcgi .php5
        </Directory>

        <Directory /home/example/cgi-bin>
            allow from all
        </Directory>

        RemoveHandler .php
        RemoveHandler .php5
        IPCCommTimeout 61
        FcgidMaxRequestLen 1073741824

    php5.fcgi:

        #!/bin/bash
        PHPRC=$PWD/../etc/php5
        export PHPRC
        umask 022
        export PHP_FCGI_CHILDREN
        PHP_FCGI_MAX_REQUESTS=99999
        export PHP_FCGI_MAX_REQUESTS
        SCRIPT_FILENAME=$PATH_TRANSLATED
        export SCRIPT_FILENAME
        exec /usr/bin/php5-cgi

    Package versions:

        webmin-virtual-server/virtualmin-universal 3.94.gpl-2
        apache2/squeeze 2.2.16-6+squeeze8
        libapache2-mod-fcgid/squeeze 1:2.3.6-1+squeeze1
        php5 5.4.7-1~dotdeb.0
        php5-apc 5.4.7-1~dotdeb.0
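
    One plausible culprit, given the wrapper above: php-cgi exits after PHP_FCGI_MAX_REQUESTS requests, and if mod_fcgid hands a request to a process that is just shutting down, the client sees exactly this (104) connection reset. The documented pairing is to cap mod_fcgid at or below PHP's own limit, sketched here with the value from the wrapper:

        # Apache, server-wide or per-vhost; keep <= PHP_FCGI_MAX_REQUESTS
        FcgidMaxRequestsPerProcess 99999

    That way mod_fcgid retires each FastCGI process itself before PHP can exit underneath it.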

  • How can I debug solutions in Visual Studio 2010 from a network share?

    - by alastairs
    I've recently got a new Mac laptop and am running VS2010 in a Parallels virtual machine. It's mostly working out well for me, but I'm having problems debugging specific project types, related to the fact that the projects are accessed via a network share:

        - Test projects don't run, because the test runner can't load the tests' DLL.
        - Web projects fail to run in the Visual Studio mini web server, throwing the following exception: "An error occurred loading a configuration file: Failed to start monitoring changes to path\to\web.config".

    I've spent the evening trawling the web with little luck on this. After reading these two posts, I tried out the usual CasPol changes, but then found a post from one of the early VS2010 betas indicating that CasPol is no longer needed/supported in .NET 4.0 and VS2010.

    The network share is accessible via both a mapped drive and the UNC path. The virtual machine runs its applications under the administrator account, which appears to have all the necessary permissions on the network share to create, read, write and delete files and folders. I say "appears to have" because I can't view the Security properties of the relevant folder via Explorer: the Security tab just isn't present.

    Has anyone managed to successfully load and debug web and test projects from a network share in VS2010?
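
    A sketch of the .NET 4 replacement for those CasPol tweaks: CLR 4 no longer grants full trust to assemblies loaded from remote locations such as UNC shares, but a host process can opt back in through its configuration file. The element below is standard; which .exe.config to put it in (the test runner's, devenv.exe.config, or the web server host's) is the part to verify for this setup:

        <configuration>
          <runtime>
            <!-- treat assemblies from UNC/mapped-drive locations as full trust -->
            <loadFromRemoteSources enabled="true" />
          </runtime>
        </configuration>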
