Search Results

Results for 'configuration database'.


  • Nagios shell script cannot be executed

    - by MeinAccount
    I'm trying to monitor GitLab with Nagios. I've created the following command definition and shell script, but when checking the service I receive the following e-mail. How can I solve this? The file is executable.

        [...] nagios : 3 incorrect password attempts ; TTY=unknown ; PWD=/ ; USER=git ; COMMAND=/bin/bash -c /var/lib/nagios/custom_plugins/check_gitlab.sh

    Command definition:

        define command {
            command_name custom_check_gitlab
            command_line /var/lib/nagios/custom_plugins/check_gitlab.sh
        }

    Shell script:

        #! /bin/sh
        # [...]
        RAILS_ENV="production"

        # Script variable names should be lower-case not to conflict with
        # internal /bin/sh variables such as PATH, EDITOR or SHELL.
        app_root="/home/git/gitlab"
        app_user="git"
        unicorn_conf="$app_root/config/unicorn.rb"
        pid_path="$app_root/tmp/pids"
        socket_path="$app_root/tmp/sockets"
        web_server_pid_path="$pid_path/unicorn.pid"
        sidekiq_pid_path="$pid_path/sidekiq.pid"
        ### Here ends user configuration ###

        # Switch to the app_user if it is not he/she who is running the script.
        if [ "$USER" != "$app_user" ]; then
            sudo -u "$app_user" -H -i $0 "$@"; exit;
        fi

        # Switch to the gitlab path; if it fails, exit with an error.
        if ! cd "$app_root" ; then
            echo "Failed to cd into $app_root, exiting!"; exit 1
        fi

        ### Init script functions

        check_pids(){
            if ! mkdir -p "$pid_path"; then
                echo "Could not create the path $pid_path needed to store the pids."
                exit 1
            fi
            # If there exists a file which should hold the value of the Unicorn pid: read it.
            if [ -f "$web_server_pid_path" ]; then
                wpid=$(cat "$web_server_pid_path")
            else
                wpid=0
            fi
            if [ -f "$sidekiq_pid_path" ]; then
                spid=$(cat "$sidekiq_pid_path")
            else
                spid=0
            fi
        }

        # Checks whether the different parts of the service are already running or not.
        check_status(){
            check_pids
            # If the web server is running, kill -0 $wpid returns true, or rather 0.
            # Checks of *_status should only check for == 0 or != 0, never anything else.
            if [ $wpid -ne 0 ]; then
                kill -0 "$wpid" 2>/dev/null
                web_status="$?"
            else
                web_status="-1"
            fi
            if [ $spid -ne 0 ]; then
                kill -0 "$spid" 2>/dev/null
                sidekiq_status="$?"
            else
                sidekiq_status="-1"
            fi
        }

        check_pids
        check_status
        if [ "$web_status" != "0" -a "$sidekiq_status" != "0" ]; then
            echo "GitLab is not running."
            exit 2
        fi
        if [ "$web_status" != "0" ]; then
            printf "The GitLab Unicorn webserver is \033[31mnot running\033[0m.\n"
            exit 1
        fi
        if [ "$sidekiq_status" != "0" ]; then
            printf "The GitLab Sidekiq job dispatcher is \033[31mnot running\033[0m.\n"
            exit 1
        fi
        if [ "$web_status" = "0" -a "$sidekiq_status" = "0" ]; then
            printf "GitLab and all its components are \033[32mup and running\033[0m.\n"
            exit 0
        fi
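
    The sudo log line above ("3 incorrect password attempts") means the script's own `sudo -u git` call is prompting for a password the nagios user cannot supply. A minimal sudoers sketch that would let it through non-interactively, assuming your Nagios daemon runs as a user named nagios; edit with visudo and verify the paths on your system:

        # /etc/sudoers.d/nagios-gitlab (a sketch, not verified on your box)
        # sudo -i runs the target through git's login shell, so allow the shell form too.
        Defaults:nagios !requiretty
        nagios ALL=(git) NOPASSWD: /var/lib/nagios/custom_plugins/check_gitlab.sh
        nagios ALL=(git) NOPASSWD: /bin/bash -c /var/lib/nagios/custom_plugins/check_gitlab.sh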


  • Exchange 2010 Internal Auto Discover Migrate away from current .local DNS name

    - by Bryan
    We have an Exchange 2010 server, running within our Active Directory domain, with an internal hostname of server.example.local. The server is configured for Outlook Anywhere, but currently has a self-signed certificate with a name of server.example.local installed. Internally, clients connect and work fine; externally, we get certificate errors, as you would expect. I'm about to purchase a UCC SSL certificate with all the relevant SANs to correct this, but because of the obvious problem of obtaining a trusted cert with .local as a subject alternative name, I'm looking to configure clients on the internal network so that they don't use any reference to the .local hostname.

    I've configured our external DNS name for the server as exchange.example.com, and have created a CNAME for autodiscover.example.com which also (correctly) points to exchange.example.com. I've also configured internal DNS records for these two hostnames, pointing to the internal interface of the same server. I don't anticipate any problems here.

    I'm now trying to reconfigure Autodiscover internally, so that Outlook attempts to connect to exchange.example.com. I've followed the steps in KB940726 to prepare for this, and it appeared to work fine: no errors were generated and I was able to verify the CAS name in AD using ADSI Edit. I've just tested this with a newly created test user account, complete with a new Exchange mailbox, and Outlook 2007 connects fine on the internal network. But looking deeper in the Exchange profile, Outlook is still resolving the server name as server.example.local. Could it be the self-signed cert that is causing Outlook to display the server name as server.example.local, or is there still something wrong with my internal Autodiscover configuration?

    Edit: I've proven it isn't the certificate that is responsible for Outlook returning server.example.local, by installing another self-signed certificate with a name of test.example.com. When creating a new Outlook profile, I get the mismatch error I'm expecting, but after accepting the cert and finishing the configuration of the Outlook profile, it still shows server.example.local as the server name. This means that if I were to purchase the UCC cert now, external clients would work fine, but internal clients would show a certificate name mismatch. Any ideas where to start diagnosing this?
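
    For what it's worth, KB940726 only repoints the Autodiscover SCP; the URLs Autodiscover then hands out come from the virtual directory settings. A hedged sketch of the Exchange Management Shell commands that set the internal URLs (the server and domain names are taken from the description above, so adjust to match your environment):

        Set-ClientAccessServer -Identity SERVER -AutoDiscoverServiceInternalUri "https://autodiscover.example.com/Autodiscover/Autodiscover.xml"
        Set-WebServicesVirtualDirectory -Identity "SERVER\EWS (Default Web Site)" -InternalUrl "https://exchange.example.com/EWS/Exchange.asmx"
        Set-OABVirtualDirectory -Identity "SERVER\OAB (Default Web Site)" -InternalUrl "https://exchange.example.com/OAB"

    Note that the server name shown in the Outlook profile is the RPC (MAPI) endpoint, which these URLs do not change, so the .local name appearing there is not by itself proof that Autodiscover is broken.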


  • How to specify this 'symbolic link' for the Jungo WinDriver?

    - by user252098
    I'm trying to install the Jungo WinDriver on Ubuntu 13.10, but I am puzzled by its manual:

        4.2.3. Linux WinDriver Installation Instructions
        4.2.3.1. Preparing the System for Installation

        In Linux, kernel modules must be compiled with the same header files that the kernel itself was compiled with. Since WinDriver installs kernel modules, it must compile with the header files of the Linux kernel during the installation process. Therefore, before you install WinDriver for Linux, verify that the Linux source code and the file version.h are installed on your machine.

        Install the Linux kernel source code:
        If you have yet to install Linux, install it, including the kernel source code, by following the instructions for your Linux distribution.
        If Linux is already installed on your machine, check whether the Linux source code was installed. You can do this by looking for 'linux' in the /usr/src directory. If the source code is not installed, either install it, or reinstall Linux with the source code, by following the instructions for your Linux distribution.

        Install version.h:
        The file version.h is created when you first compile the Linux kernel source code. Some distributions provide a compiled kernel without the file version.h. Look under /usr/src/linux/include/linux to see whether you have this file. If you do not, follow these steps:
        1. Become super user: su
        2. Change directory to the Linux source directory: cd /usr/src/linux
        3. Type: make xconfig
        4. Save the configuration by choosing Save and Exit.
        5. Type: make dep
        6. Exit super user mode: exit

        To run GUI WinDriver applications (e.g., DriverWizard [5]; Debug Monitor [7.2]) you must also have version 5.0 of the libstdc++ library, libstdc++.so.5. If you do not have this file, install it from the relevant RPM in your Linux distribution (e.g., compat-libstdc++).

        Before proceeding with the installation, you must also make sure that you have a linux symbolic link. If you do not, create one by typing:
        /usr/src$ ln -s 'target kernel'/ linux
        For example, for the Linux 2.4 kernel type:
        /usr/src$ ln -s linux-2.4/ linux

    I can't understand how to specify the 'target kernel' for the symbolic link, or where version.h lives, on my Ubuntu system.
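
    On Ubuntu there is no kernel source tree under /usr/src by default; the headers package provides what out-of-tree module builds need. A hedged sketch of what that could look like on 13.10 (package and path names assumed, check them before running):

        sudo apt-get install linux-headers-$(uname -r)
        cd /usr/src
        sudo ln -s linux-headers-$(uname -r) linux

    On kernels of that era, version.h is shipped pre-generated inside the headers package (under include/generated/ rather than include/linux/), so the "make xconfig / make dep" steps from the manual should not be needed.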


  • Nagios core Event Handler not working

    - by sivashanmugam
    The Nagios event handler is not triggering when the service takes too long to respond or goes down. My configuration is below.

    nagios.cfg:

        enable_event_handlers=1

    localhost.cfg:

        define service {
            use                     generic-service
            host_name               Server
            service_description     test-server
            servicegroups           test-service
            check_command           check-service
            is_volatile             0
            check_period            24x7
            max_check_attempts      4
            normal_check_interval   2
            retry_check_interval    2
            contact_groups          testcontacts
            notification_period     24x7
            notification_options    w,u,c,r
            notifications_enabled   1
            event_handler_enabled   1
            event_handler           recheck-service
        }

    command.cfg:

        define command{
            command_name recheck-service
            command_line /usr/local/nagios/libexec/alert.sh $SERVICESTATE$ $SERVICESTATETYPE$ $SERVICEATTEMPT$
        }

    alert.sh file:

        #!/bin/sh
        set -x

        case "$1" in
        OK)
            # The service just came back up, so don't do anything...
            ;;
        WARNING)
            # We don't really care about warning states, since the service is probably still running...
            ;;
        UNKNOWN)
            # We don't know what might be causing an unknown error, so don't do anything...
            ;;
        CRITICAL)
            # Aha! The HTTP service appears to have a problem - perhaps we should restart the server...
            # Is this a "soft" or a "hard" state?
            case "$2" in
            # We're in a "soft" state, meaning that Nagios is in the middle of retrying the
            # check before it turns into a "hard" state and contacts get notified...
            SOFT)
                # What check attempt are we on? We don't want to restart the web server on the
                # first check, because it may just be a fluke!
                case "$3" in
                # Wait until the check has been tried 3 times before restarting the web server.
                # If the check fails on the 4th time (after we restart the web server), the
                # state type will turn to "hard" and contacts will be notified of the problem.
                # Hopefully this will restart the web server successfully, so the 4th check will
                # result in a "soft" recovery. If that happens no one gets notified because we
                # fixed the problem!
                3)
                    echo -n "Going To Ping the Virtual Machine (3rd soft critical state)..."
                    # Call the init script to restart the HTTPD server
                    myresult=`/usr/local/nagios/libexec/check_http xyz.com -t 100 | grep 'time' | awk '{print $10}'`
                    echo "Your Service Is taking the following time Delay" "$myresult Seconds" | mail -s "WARNING : Service Taken More Time To Response" [email protected]
                    ;;
                esac
                ;;
            # The HTTP service somehow managed to turn into a hard error without getting fixed.
            # It should have been restarted by the code above, but for some reason it didn't.
            # Let's give it one last try, shall we?
            # Note: Contacts have already been notified of a problem with the service at this
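
    Two things worth verifying before digging deeper, since they account for most "handler never fires" cases; a sketch assuming the stock Nagios user name:

        ls -l /usr/local/nagios/libexec/alert.sh        # must be executable by the nagios user
        chmod 755 /usr/local/nagios/libexec/alert.sh
        # Run it by hand the way Nagios would; 'set -x' will trace what actually happens:
        sudo -u nagios /usr/local/nagios/libexec/alert.sh CRITICAL SOFT 3

    Also note that with max_check_attempts 4, the "3)" branch only matches the third soft failure; a hard CRITICAL (attempt 4) falls through the inner case and does nothing.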


  • How do you splice out part of an Xvid-encoded AVI file with ffmpeg? (no problems with other files)

    - by yegor
    I'm using the following command, which works for most files except what seem to be Xvid-encoded ones:

        /usr/bin/ffmpeg -sameq -i file.avi -ss 00:01:00 -t 00:00:30 -ac 2 -r 25 -copyts output.avi

    This should splice out 30 seconds of video and audio, starting from the 1 minute mark. It does START encoding at the 00:01:00 mark, but for some reason it goes all the way to the end of the file, ignoring that I want just 30 seconds. The output looks like this:

        FFmpeg version git-ecc4bdd, Copyright (c) 2000-2010 the FFmpeg developers
          built on May 31 2010 04:52:24 with gcc 4.4.3 20100127 (Red Hat 4.4.3-4)
          configuration: --enable-libx264 --enable-libxvid --enable-libmp3lame --enable-libopenjpeg --enable-libfaac --enable-libvorbis --enable-gpl --enable-nonfree --enable-libxvid --enable-pthreads --enable-libfaad --extra-cflags=-fPIC --enable-postproc --enable-libtheora --enable-libvorbis --enable-shared
          libavutil     50.15. 2 / 50.15. 2
          libavcodec    52.67. 0 / 52.67. 0
          libavformat   52.62. 0 / 52.62. 0
          libavdevice   52. 2. 0 / 52. 2. 0
          libavfilter    1.20. 0 /  1.20. 0
          libswscale     0.10. 0 /  0.10. 0
          libpostproc   51. 2. 0 / 51. 2. 0
        [mpeg4 @ 0x17cf770]Invalid and inefficient vfw-avi packed B frames detected
        Input #0, avi, from 'file.avi':
          Metadata:
            ISFT : VirtualDubMod 1.5.10.2 (build 2540/release)
          Duration: 00:02:00.00, start: 0.000000, bitrate: 1587 kb/s
            Stream #0.0: Video: mpeg4, yuv420p, 672x368 [PAR 1:1 DAR 42:23], 25 tbr, 25 tbn, 25 tbc
            Stream #0.1: Audio: ac3, 48000 Hz, 5.1, s16, 448 kb/s
        File 'lol6.avi' already exists. Overwrite ? [y/N] y
        Output #0, avi, to 'lol6.avi':
          Metadata:
            ISFT : Lavf52.62.0
            Stream #0.0: Video: mpeg4, yuv420p, 672x368 [PAR 1:1 DAR 42:23], q=2-31, 200 kb/s, 25 tbn, 25 tbc
            Stream #0.1: Audio: mp2, 48000 Hz, 2 channels, s16, 64 kb/s
        Stream mapping:
          Stream #0.0 -> #0.0
          Stream #0.1 -> #0.1
        Press [q] to stop encoding
        [mpeg4 @ 0x17cf770]Invalid and inefficient vfw-avi packed B frames detected
        [buffer @ 0x184b610]Buffering several frames is not supported. Please consume all available frames before adding a new one.
        frame= 1501 fps=104 q=0.0 Lsize= 15612kB time=30.02 bitrate=4259.7kbits/s ts/s
        video:15303kB audio:235kB global headers:0kB muxing overhead 0.482620%

    If I convert this file to mp4, for example, and then perform the same action, it works perfectly.
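
    One thing that may be worth trying, sketched under the assumption that you don't actually need a re-encode: seek before the input and stream-copy both tracks, which sidesteps the packed-B-frame decode path entirely (flag spellings match ffmpeg builds of that era, so verify against your version):

        /usr/bin/ffmpeg -ss 00:01:00 -i file.avi -t 00:00:30 -acodec copy -vcodec copy output.avi

    With -ss placed before -i, the cut lands on the nearest keyframe rather than an exact frame, which is usually acceptable for a splice.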


  • Routing all data through a VPN tunnel with ppp

    - by Oliver
    I'm trying to create a VPN tunnel that forwards all data from the local machine to the VPN server. I'm using ppp-2.4.5 for this, with the following configuration:

        pty "pptp <VPNServer> --nolaunchpppd"
        name <my login name>
        remotename PPTP
        usepeerdns
        require-mppe-128
        file /etc/ppp/options.pptp
        persist
        maxfail 0
        holdoff 5

    I have a script in if-up.d with the following content:

        route del default eth0
        route add default dev ppp0

    Before starting the VPN tunnel my routing looks like:

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        0.0.0.0         192.168.0.1     0.0.0.0         UG    2      0        0 eth0
        127.0.0.0       127.0.0.1       255.0.0.0       UG    0      0        0 lo
        192.168.0.0     0.0.0.0         255.255.0.0     U     0      0        0 eth0

    After starting the tunnel (via pon) it looks like:

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        0.0.0.0         0.0.0.0         0.0.0.0         U     0      0        0 ppp0
        12.34.56.1      0.0.0.0         255.255.255.255 UH    0      0        0 ppp0
        127.0.0.0       127.0.0.1       255.0.0.0       UG    0      0        0 lo
        192.168.0.0     0.0.0.0         255.255.0.0     U     0      0        0 eth0

    Now the problem is that the VPN tunnel seems to be looped into itself. This is ifconfig after a few seconds without any traffic:

        eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
              inet 192.168.0.10  netmask 255.255.0.0  broadcast 192.168.255.255
              ether 00:01:2e:2f:ff:35  txqueuelen 1000  (Ethernet)
              RX packets 39931  bytes 6784614 (6.4 MiB)
              RX errors 0  dropped 90  overruns 0  frame 0
              TX packets 34980  bytes 7633181 (7.2 MiB)
              TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
              device interrupt 20  memory 0xfbdc0000-fbde0000

        ppp0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1496
              inet 12.34.56.78  netmask 255.255.255.255  destination 12.34.56.1
              ppp  txqueuelen 3  (Point-to-Point Protocol)
              RX packets 7  bytes 94 (94.0 B)
              RX errors 0  dropped 0  overruns 0  frame 0
              TX packets 782863  bytes 349257986 (333.0 MiB)
              TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

    It shows that over 300 MiB have already been sent, although ppp0 has only been online for a few seconds and the connection isn't working anyway. Can someone please help me fix the routing table, so that the traffic from ppp0 is not sent through ppp0 again but instead goes to the remote server?
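
    A hedged guess at the loop: once the default route points into ppp0, the encapsulated PPTP packets destined for the VPN server itself are also routed into ppp0. A sketch of the if-up.d script with a host route to the server pinned via the old gateway first (addresses taken from the output above, <VPNServer> left as your placeholder):

        # keep reaching the PPTP server via the LAN gateway:
        route add -host <VPNServer> gw 192.168.0.1 dev eth0
        route del default eth0
        route add default dev ppp0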


  • Excessive Outbound DNS Traffic

    - by user1318414
    I have a VPS which I ran for 3 years on one host without issue. Recently, the host started sending an extreme amount of outbound DNS traffic to 31.193.132.138. Due to the way that Linode responded to this, I have recently left Linode and moved to 6sync. The server was completely rebuilt on 6sync, with the exception of the postfix mail configuration. Currently, the daemons running are:

        sshd
        nginx
        postfix
        dovecot
        php5-fpm (localhost only)
        spampd (localhost only)
        clamsmtpd (localhost only)

    Given that the server was 100% rebuilt, I can't find any serious exploits against the above daemons, passwords have changed, SSH keys don't even exist on the rebuild yet, and so on; it seems extremely unlikely that this is a compromise being used to DoS the address. The provided IP is noted online as a known spam source. My initial assumption was that something was attempting to use my postfix server as a relay, and the bogus addresses it was providing were domains with that IP registered as their nameservers. I would imagine, given my postfix configuration, that DNS queries for things such as SPF information would come in at an equal or greater rate than the number of attempted spam e-mails sent. But both Linode and 6sync have said that the outbound traffic is extremely disproportionate.

    The following is all the information I received from Linode regarding the outbound traffic:

        21:28:28.647263 IP 97.107.134.33.32775 > 31.193.132.138.53: 28720 op8+% [b2&3=0x4134] [17267a] [30550q] [28773n] [14673au][|domain]
        21:28:28.647264 IP 97.107.134.33 > 31.193.132.138: udp
        21:28:28.647264 IP 97.107.134.33.32775 > 31.193.132.138.53: 28720 op8+% [b2&3=0x4134] [17267a] [30550q] [28773n] [14673au][|domain]
        21:28:28.647265 IP 97.107.134.33 > 31.193.132.138: udp
        21:28:28.647265 IP 97.107.134.33.32775 > 31.193.132.138.53: 28720 op8+% [b2&3=0x4134] [17267a] [30550q] [28773n] [14673au][|domain]
        21:28:28.647266 IP 97.107.134.33 > 31.193.132.138: udp

    6sync cannot confirm whether or not the recent spike in outbound traffic was to the same IP or over DNS, but I have presumed as much. For now my server is blocking the entire 31.0.0.0/8 subnet to help deter this while I figure it out. Anyone have any idea what is going on?
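
    A sketch of how the sending process could be identified on the new box (standard Ubuntu tools; the interface name eth0 is assumed):

        # watch the packets and confirm the destination/port:
        sudo tcpdump -n -i eth0 -c 20 'udp port 53 and dst host 31.193.132.138'
        # list UDP sockets with owning PID/program:
        sudo netstat -anup
        # or log the UID of whatever generates the traffic:
        sudo iptables -A OUTPUT -d 31.193.132.138 -p udp --dport 53 -j LOG --log-uid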


  • Nvidia Drivers on Debian / Lenny (Stable) -> Installation successful -> Monitor goes black

    - by David
    I have successfully installed the proprietary drivers for my Nvidia (GeForce 7300 GT) graphics card on Debian Lenny. I know it's not the best installation method to choose (see http://wiki.debian.org/NvidiaGraphicsDrivers#non-freedrivers ), but the two ways seem possible for me (nvidia-kernel module compilation). The problem is that the monitor goes black and its power light starts blinking after I launch the X server.

    Here is a short look at the logs (output truncated from /var/log/Xorg.0.log):

        (II) Setting vga for screen 0.
        (**) NVIDIA(0): Depth 24, (--) framebuffer bpp 32
        (==) NVIDIA(0): RGB weight 888
        (==) NVIDIA(0): Default visual is TrueColor
        (==) NVIDIA(0): Using gamma correction (1.0, 1.0, 1.0)
        (**) Jul 28 17:10:11 NVIDIA(0): Enabling RENDER acceleration
        (II) Jul 28 17:10:11 NVIDIA(0): Support for GLX with the Damage and Composite X extensions is
        (II) Jul 28 17:10:11 NVIDIA(0):     enabled.
        (II) Jul 28 17:10:11 NVIDIA(0): NVIDIA GPU GeForce 7300 GT (G73) at PCI:1:0:0 (GPU-0)
        (--) Jul 28 17:10:11 NVIDIA(0): Memory: 262144 kBytes
        (--) Jul 28 17:10:11 NVIDIA(0): VideoBIOS: 05.73.22.25.00
        (II) Jul 28 17:10:11 NVIDIA(0): Detected PCI Express Link width: 16X
        (--) Jul 28 17:10:11 NVIDIA(0): Interlaced video modes are supported on this GPU
        (--) Jul 28 17:10:11 NVIDIA(0): Connected display device(s) on GeForce 7300 GT at PCI:1:0:0:
        (--) Jul 28 17:10:11 NVIDIA(0):     Samsung SyncMaster (CRT-0)
        (--) Jul 28 17:10:11 NVIDIA(0):     Samsung SyncMaster (DFP-0)
        (--) Jul 28 17:10:11 NVIDIA(0): Samsung SyncMaster (CRT-0): 400.0 MHz maximum pixel clock
        (--) Jul 28 17:10:11 NVIDIA(0): Samsung SyncMaster (DFP-0): 165.0 MHz maximum pixel clock
        (--) Jul 28 17:10:11 NVIDIA(0): Samsung SyncMaster (DFP-0): Internal Single Link TMDS
        (II) Jul 28 17:10:11 NVIDIA(0): Assigned Display Device: CRT-0
        (==) Jul 28 17:10:11 NVIDIA(0):
        (==) Jul 28 17:10:11 NVIDIA(0): No modes were requested; the default mode "nvidia-auto-select"
        (==) Jul 28 17:10:11 NVIDIA(0):     will be used as the requested mode.
        (==) Jul 28 17:10:11 NVIDIA(0):
        (II) Jul 28 17:10:11 NVIDIA(0): Validated modes:
        (II) Jul 28 17:10:11 NVIDIA(0):     "nvidia-auto-select"
        (II) Jul 28 17:10:11 NVIDIA(0): Virtual screen size determined to be 1280 x 1024
        (--) Jul 28 17:10:11 NVIDIA(0): DPI set to (85, 86); computed from "UseEdidDpi" X config
        (--) Jul 28 17:10:11 NVIDIA(0):     option
        (==) Jul 28 17:10:11 NVIDIA(0): Enabling 32-bit ARGB GLX visuals.
        (--) Depth 24 pixmap format is 32 bpp

    Here is the /etc/X11/xorg.conf file as generated by nvidia-xconfig:

        # nvidia-xconfig: X configuration file generated by nvidia-xconfig
        # nvidia-xconfig: version 256.35 (buildmeister@builder101) Wed Jun 16 19:25:59 PDT 2010

        Section "ServerLayout"
            Identifier     "Layout0"
            Screen      0  "Screen0"
            InputDevice    "Keyboard0" "CoreKeyboard"
            InputDevice    "Mouse0" "CorePointer"
        EndSection

        Section "Files"
        EndSection

        Section "Module"
            Load           "dbe"
            Load           "extmod"
            Load           "type1"
            Load           "freetype"
            Load           "glx"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Mouse0"
            Driver         "mouse"
            Option         "Protocol" "auto"
            Option         "Device" "/dev/psaux"
            Option         "Emulate3Buttons" "no"
            Option         "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Keyboard0"
            Driver         "kbd"
        EndSection

        Section "Monitor"
            Identifier     "Monitor0"
            VendorName     "Unknown"
            ModelName      "Unknown"
            Hor
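
    Since the driver validates only "nvidia-auto-select" and the display then loses signal, one hedged experiment is to pin an explicit mode and the monitor's real sync ranges instead of relying on EDID. A sketch only; the frequency values are placeholders (take the real ones from the SyncMaster's manual) and "Device0" assumes nvidia-xconfig's default identifier:

        Section "Monitor"
            Identifier     "Monitor0"
            HorizSync       30.0 - 81.0
            VertRefresh     56.0 - 75.0
        EndSection

        Section "Screen"
            Identifier     "Screen0"
            Device         "Device0"
            Monitor        "Monitor0"
            DefaultDepth    24
            SubSection     "Display"
                Depth       24
                Modes      "1280x1024"
            EndSubSection
        EndSection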


  • How to set up a file server in a restricted corporate environment

    - by Emilio M Bumachar
    I work in a big corporation, and the disk space my team gets on the corporate file server is so low that I am considering turning my work PC into a file server. I ask this community for links to tutorials, software suggestions, and advice in general about how to set it up.

    My machine is an Intel Core2Duo E7500 @ 3GHz with 3 GB of RAM, running Windows XP Service Pack 3. Upgrading, formatting or installing another OS is out of the question, but I do have Administrator privileges on the PC, and I can install programs (at least for now). A lot of security software I don't even know about is, and must remain, installed. I only need communication within the corporate network, which is not restricted.

    People have usernames (logins) on the corporate network, and I need to use them to restrict access. Simply put, I have a list of logins of team members, and only people on the list should access the files. I have about 150 GB of free disk space and am thinking of allocating 100 GB to the team's shared files. I plan monthly backups to co-workers' machines with the same configuration, but automation of backups is a nice, unnecessary feature: it's totally acceptable for me to manually copy the contents to a different machine once a month. Uptime is important, as everyone would use these files in their daily work.

    I have experience as a Python and C programmer, but no experience whatsoever as a sysadmin, and almost nothing of my programming experience is network programming. I'm a complete beginner at this. Thanks in advance for any help.

    EDIT: I honestly appreciate all the warnings, I really do, but what I plan to make available is mostly stuff that now sits solely on DVDs, just for space reasons. It's 'daily work' to read them, but 'daily work write' files will remain on the corporate server. As for the importance of uptime, I think I overstated it: a few outages are OK, and it's already an improvement over fetching the DVDs. As for policy, my manager is kind of on my side, and I will confirm that before making my move. As for getting more space through the proper channels, well, that was Plan A, and it's still on the table... but I don't have much hope. I'm not as "core business" as I'd like.
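
    For the concrete mechanics, plain Windows file sharing may already cover this; a sketch with placeholder share and account names (and one real caveat: desktop XP Pro caps concurrent inbound SMB sessions at 10, which can bite a whole team):

        net share TeamFiles=D:\TeamFiles
        rem restrict NTFS permissions to the listed logins, one /G per person:
        cacls D:\TeamFiles /E /G CORPDOMAIN\alice:C
        cacls D:\TeamFiles /E /G CORPDOMAIN\bob:C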


  • Windows Server 2008 - one MAC address: assigning multiple external IPs to VirtualBox guests on the host

    - by Sise
    I couldn't find any help on Google or here. The scenario: Windows Server 2008 Std x64 on an i7-975 with 12 GB RAM, running in a data centre. One hardware NIC (Realtek PCIe GBE), so one MAC address. The data centre provides us 4 static external IPs, the first of which is assigned to the host by default. I have ordered all 4 IPs, but the data centre can only assign the available IPs to the physical MAC address of the given NIC. This means one NIC, one MAC address, 4 IPs. Everything works fine so far.

    Now, what I would like to have: 1-3 VirtualBox guests running, each with its own external IP assigned, each a standalone Windows Server 2008. It looks like the easiest way would be to put the guests into a virtual subnet and route all data coming to the 2nd through 4th external IPs to the guests' subnet IPs. I have been through the VirtualBox user manual regarding networking. What's not working:

    - Bridged networking on its own is out, because the IPs are assigned to the one MAC address only.
    - NAT networking does not allow access from outside or from the host to the guest, and I do not want to use port forwarding.
    - Host-only networking by itself would not allow internet access; by sharing the host's default internet connection, internet is granted from the guest to the outside, but not from outside or from the host to the guest.
    - Internal networking is not really an option here.

    What I have tried is to create an additional MS Loopback adapter for a routed subnet where the VirtualBox guests live; the idea was to NAT the internet connection to the loopback 'subnet'. But I can't ping the gateway from the guests, and using the route command in the command shell or RRAS (static route, NAT) I didn't get there either. Solutions like the following work for one direction, but not for the way back:

        For your situation, it might be best to use the Host-Only adapter for ICS. Go to the preferences of VB itself and select network. There you can change the configuration for the interface. Set the IP address to 192.168.0.1, netmask 255.255.255.0. Disable the DHCP server if it isn't already and that's it. Now the Guest should get an IP from Windows itself and be able to get onto the internet, while you can also access the Host.

    Slowly I'm pretty stuck with this topic. There is a possibility I've just overlooked something, especially when using RRAS, but it's kind of hard to find useful how-tos on this. Thanks in advance! Best regards, Simon
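
    One hedged sketch of the routed approach, assuming the 2nd external IP can be added to the host NIC and a guest sits on a host-only subnet at 192.168.56.2: bind the extra IP on the host (so it answers on the host's MAC), then forward inward with RRAS NAT or, for single services, a port proxy. The netsh commands are standard on Server 2008; all addresses are placeholders:

        rem add the 2nd external IP to the host's NIC:
        netsh interface ipv4 add address "Local Area Connection" <2nd-external-IP> <netmask>
        rem forward a service port from that IP to the guest:
        netsh interface portproxy add v4tov4 listenaddress=<2nd-external-IP> listenport=3389 connectaddress=192.168.56.2 connectport=3389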


  • Nginx + Wordpress Multisite 3.4.2 + subdirectories + static pages and permalinks

    - by UrkoM
    I am trying to set up WordPress Multisite, using subdirectories, with Nginx, php5-fpm, APC, and Batcache. Like many other people, I am getting stuck on the rewrite rules for permalinks. I have followed these two guides, which seem to be as official as you can get:

    http://evansolomon.me/notes/faster-wordpress-multisite-nginx-batcache/
    http://codex.wordpress.org/Nginx#WordPress_Multisite_Subdirectory_rules

    It is partially working:

    http://blog.ssis.edu.vn works.
    http://blog.ssis.edu.vn/umasse/ works.

    But other permalinks, like these two to a post or to a static page, don't work:

    http://blog.ssis.edu.vn/umasse/2008/12/12/hello-world-2/
    http://blog.ssis.edu.vn/umasse/sample-page/

    They either take you to a 404 error, or to some other blog! Here is my configuration:

        server {
            listen 80 default_server;
            server_name blog.ssis.edu.vn;
            root /var/www;
            access_log /var/log/nginx/blog-access.log;
            error_log /var/log/nginx/blog-error.log;

            location / {
                index index.php;
                try_files $uri $uri/ /index.php?$args;
            }

            # Add trailing slash to */wp-admin requests.
            rewrite /wp-admin$ $scheme://$host$uri/ permanent;

            # Add trailing slash to */username requests
            rewrite ^/[_0-9a-zA-Z-]+$ $scheme://$host$uri/ permanent;

            # Directives to send expires headers and turn off 404 error logging.
            location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
                expires 24h;
                log_not_found off;
            }

            # this prevents hidden files (beginning with a period) from being served
            location ~ /\. {
                access_log off;
                log_not_found off;
                deny all;
            }

            # Pass uploaded files to wp-includes/ms-files.php.
            rewrite /files/$ /index.php last;
            if ($uri !~ wp-content/plugins) {
                rewrite /files/(.+)$ /wp-includes/ms-files.php?file=$1 last;
            }

            # Rewrite multisite '.../wp-.*' and '.../*.php'.
            if (!-e $request_filename) {
                rewrite ^/[_0-9a-zA-Z-]+(/wp-.*) $1 last;
                rewrite ^/[_0-9a-zA-Z-]+.*(/wp-admin/.*\.php)$ $1 last;
                rewrite ^/[_0-9a-zA-Z-]+(/.*\.php)$ $1 last;
            }

            location ~ \.php$ {
                # Forbid PHP on upload dirs
                if ($uri ~ "uploads") {
                    return 403;
                }
                client_max_body_size 25M;
                try_files $uri =404;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include /etc/nginx/fastcgi_params;
            }
        }

    Any ideas are welcome! Have I done something wrong? I have disabled Batcache to see if it makes any difference, but still no go.
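
    If nothing obvious jumps out, nginx can narrate which rewrite actually fires for one of the failing URLs; a small debugging sketch (both directives are stock nginx, just noisy, so remove them afterwards):

        error_log /var/log/nginx/blog-error.log notice;
        rewrite_log on;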


  • Some domain names not resolving on local network

    - by Solignis
    I am not really sure where to start with this one... I have a small network with some Linux servers (Ubuntu 11.04 Server). Two servers (NS01, NS02) run BIND 9, configured as master and slave respectively. One server (MX01) runs Zimbra ZCS 7.1.1; it has a private BIND 9 server running to achieve a split-DNS configuration. That DNS server does not interact with the other two beyond forwarding the queries it cannot resolve to them; there are no zone transfers. Zimbra is hosting 3 domains at the moment: solignis.local, solignis.com, and campbellsurvey.net.

    The problem: from within my network I cannot connect to mail.campbellsurvey.net. By that I mean, if I open Firefox and type https://mail.campbellsurvey.net, I go nowhere; the address is supposed to reach my Zimbra webmail. The odd thing is that if I try the same from outside the network, the website comes up like normal. If I try to create an account in Thunderbird to connect to the same server using IMAP4 or POP3, I get an error saying that Thunderbird cannot find the domain name; even the Zimbra client fails. It is as if, from within my own walls, campbellsurvey.net does not exist, yet if I step outside, it works with no problem at all.

    I thought maybe the problem was with the DNS server (BIND 9), so just to eliminate it as a possibility I configured a Windows server I use for VMware vCenter as a DNS server to see what would happen. The result was the same: it is like something is preventing connections to those domains, but I have checked the various firewalls, port forwards, and so on.

    I know this is not a lot of information to work from, and I can give more details about certain things as needed. I am just trying to figure out what could be going wrong. Any help you could offer would be much appreciated.
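
    A hedged first step is to separate name resolution from connectivity, querying each resolver directly for the failing name (the server IPs are placeholders; substitute your NS01 and MX01 addresses):

        dig @<NS01-IP> mail.campbellsurvey.net A
        dig @<MX01-IP> mail.campbellsurvey.net A
        # then see whether the address the client actually got answers at all:
        nslookup mail.campbellsurvey.net
        telnet mail.campbellsurvey.net 443

    If the internal view returns no A record, or returns the public address and your firewall won't hairpin that back inside, that would explain both the browser and the IMAP/POP failures.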


  • Subdomains with /etc/hosts and Apache for Gitorious

    - by QLands
    I managed to get a local install of Gitorious working. Now I need to finalize the Apache integration using a virtual host, but nothing seems to work. See for example my /etc/hosts file:

        127.0.0.1       localhost
        172.26.17.70    darkstar.ilri.org darkstar
        172.26.17.70    git.darkstar.ilri.org

    My vhosts.conf has the following entries:

        #
        # Use name-based virtual hosting.
        #
        NameVirtualHost *:80

        <VirtualHost *:80>
            <Directory /srv/httpd/htdocs>
                Options Indexes FollowSymLinks ExecCGI
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
            ServerName darkstar.ilri.org
            DocumentRoot /srv/httpd/htdocs
            ErrorLog /var/log/httpd/error_log
            AddHandler cgi-script .cgi
        </VirtualHost>

        <VirtualHost *:80>
            <Directory /srv/httpd/git.darkstar.ilri.org/gitorious/public>
                Options FollowSymLinks ExecCGI
                AllowOverride None
                Order allow,deny
                Allow from All
            </Directory>
            AddHandler cgi-script .cgi
            DocumentRoot /srv/httpd/git.darkstar.ilri.org/gitorious/public
            ServerName git.darkstar.ilri.org
            ErrorLog /var/www/git.darkstar.ilri.org/log/error.log
            CustomLog /var/www/git.darkstar.ilri.org/log/access.log combined
            AddOutputFilterByType DEFLATE text/html text/plain text/xml text/javascript text/css application/x-javascript
            BrowserMatch ^Mozilla/4 gzip-only-text/html
            BrowserMatch ^Mozilla/4\.0[678] no-gzip
            BrowserMatch \bMSIE !no-gzip !gzip-only-text/html
            <FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf)$">
                ExpiresActive On
                ExpiresDefault "access plus 1 year"
            </FilesMatch>
            FileETag None
            RewriteEngine On
            RewriteCond %{DOCUMENT_ROOT}/system/maintenance.html -f
            RewriteCond %{SCRIPT_FILENAME} !maintenance.html
            RewriteRule ^.*$ /system/maintenance.html [L]
        </VirtualHost>

    Now, when I go with Firefox to darkstar.ilri.org it shows the default Apache screen: "It works!". But when I go to git.darkstar.ilri.org it waits a few seconds, then falls back to darkstar.ilri.org and the default Apache page. No error is reported. If I run httpd -S I get:

        VirtualHost configuration:
        wildcard NameVirtualHosts and _default_ servers:
        *:80           is a NameVirtualHost
                default server darkstar.ilri.org (/etc/httpd/extra/httpd-vhosts.conf:21)
                port 80 namevhost darkstar.ilri.org (/etc/httpd/extra/httpd-vhosts.conf:21)
                port 80 namevhost git.darkstar.ilri.org (/etc/httpd/extra/httpd-vhosts.conf:37)
        Syntax OK

    The funny thing is that if I configure Gitorious on a host called gitrepository, add "127.0.0.1 gitrepository" to /etc/hosts, and go with Firefox to gitrepository, Gitorious works. But why not with git.darkstar.ilri.org? Many thanks in advance.
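
    Since httpd -S shows both vhosts registered, a hedged way to tell Apache's name matching apart from DNS and browser behaviour is to hit the IP directly with a forced Host header:

        curl -s -o /dev/null -w '%{http_code}\n' -H 'Host: git.darkstar.ilri.org' http://172.26.17.70/
        curl -s -H 'Host: git.darkstar.ilri.org' http://172.26.17.70/ | head

    If that returns the Gitorious front page, the vhost itself is fine and the problem sits in name resolution on the client, or possibly in Gitorious's own gitorious.yml host setting if it redirects to its configured canonical hostname.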


  • Error sending email to alias with Postfix

    - by Burning the Codeigniter
    I'm on Ubuntu 11.04 64-bit, trying to set up Postfix on my VPS. It is configured, but when I send an email to an alias, e.g. [email protected], it should deliver it to [email protected]. When I sent the email from my GMail account, I got this returned:

        Delivery to the following recipient failed permanently:

            [email protected]

        Technical details of permanent failure:
        Google tried to deliver your message, but it was rejected by the recipient domain.
        We recommend contacting the other email provider for further information about the
        cause of this error. The error that the other server returned was:
        550 550 #5.1.0 Address rejected [email protected] (state 14).

        ----- Original message -----

        DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
                h=mime-version:date:message-id:subject:from:to:content-type;
                bh=R1WtjVRWywfkWCR2g4QKbSjAfUaU9DAAMKbg9UAWqvs=;
                b=FiSfdhEaV4pEq/76ENlH4tvOgm35Ow3ulRg06kDYrIQTaDf3eOEgfSEgH25PjZuAj/
                7Hg1CL++o6Rt/tl80ZiR2AWekhA0zIn2JkqE7KssMG7WbBmMmbf8V9KDo2jOw+mZv+C/
                KDKsQ65AudBZ/NYLDDpTT7MkKf8DzqeGCKj9MAct6sHDoC0wCciXYxNfTf+MKxrZvRHQ
                oICTkH5LOugKW9wEjPF2AoO8X0qgYmTLYeSUtXxu46VeNKRBGmdRkkpPOoJlQN9ank7i
                SW6kU6M9bk2bYOgKwV/YPsaantmYlu1XdmYx+kWeJkNJAyYOfXfZZ8WUJhbbFFD9bZCi
                m/hw==
        MIME-Version: 1.0
        Received: by 10.101.3.5 with SMTP id f5mr783908ani.86.1334247306547; Thu, 12 Apr 2012 09:15:06 -0700 (PDT)
        Received: by 10.236.73.136 with HTTP; Thu, 12 Apr 2012 09:15:06 -0700 (PDT)
        Date: Thu, 12 Apr 2012 17:15:06 +0100
        Message-ID: <CAN+9S2aB=xjiDxVZx3qYZoBMFD4XuadUyR_3OYWaxw1ecrZmOQ@mail.gmail.com>
        Subject: Test Email
        From: My Name <[email protected]>
        To: [email protected]
        Content-Type: multipart/alternative; boundary=001636c597eabfd21504bd7da8fd

    I don't understand why it isn't working: my aliases are set up correctly, and I see no error messages in /var/log/mail.log or any other mail logs, which makes it harder to debug. This is my Postfix configuration (postconf -n):

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        append_dot_mydomain = no
        biff = no
        config_directory = /etc/postfix
        inet_interfaces = all
        mailbox_command = procmail -a "$EXTENSION"
        mailbox_size_limit = 0
        mydestination = $mydomain, $myhostname, localhost, localhost.localdomain, localhost
        mydomain = domain.com
        myhostname = localhost
        mynetworks = 192.168.1.0/24 127.0.0.0/8
        readme_directory = no
        recipient_delimiter = +
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes

    Does anyone know how to solve this specific issue?
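
    "550 Address rejected" during the SMTP dialogue usually means the recipient never matched a valid local address, before aliasing even enters the picture. A hedged sketch of checks that would narrow it down (the alias name 'myalias' is a placeholder for yours):

        # how does Postfix itself resolve the alias?
        postalias -q myalias hash:/etc/aliases
        # rebuild the alias database in case it is stale:
        newaliases && postfix reload
        # and confirm the domain is really treated as local:
        postconf mydestination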


  • Weird SSH connection timeouts

    - by bran
    This problem started when I tried to log in to my brand-new VPS. I remember that on my first SSH try to the server I actually got prompted for the password several times, which would mean there is no port blocking from my ISP. The password didn't work for me for some reason, so I had a lot of authentication failures. After that, attempting to log in to the server just timed out. I did the same at Media Temple (which used to work before with SFTP): I put in the wrong password, and now trying to SSH (or even SFTP) gives me a timeout error.

    So some kind of security feature is preventing me from trying to log in too many times, either on my side or on the server side. Any idea what it could be? Traceroute and ping work to the IPs. I am using a Zyxel WiMAX modem (MAX-206M1R, if that's relevant).

        c:\Program Files (x86)\OpenSSH\bin>ssh.exe [email protected]
        ssh: connect to host 109.169.7.136 port 22: Connection timed out

        c:\Program Files (x86)\OpenSSH\bin>ssh.exe [email protected]
        ssh: connect to host 109.169.7.131 port 22: Connection timed out

        c:\Program Files (x86)\OpenSSH\bin>ssh.exe [email protected]
        ssh: connect to host 87.117.249.227 port 22: Connection timed out

        c:\Program Files (x86)\OpenSSH\bin>ssh.exe [email protected] -vv
        OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011
        debug1: Reading configuration data /etc/ssh_config
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to 87.117.249.227 [87.117.249.227] port 22.
        debug1: connect to address 87.117.249.227 port 22: Connection timed out
        ssh: connect to host 87.117.249.227 port 22: Connection timed out

        c:\Program Files (x86)\OpenSSH\bin>ssh.exe s122797.gridserver.com
        Could not create directory '/home/pavs/.ssh'.
        The authenticity of host 's122797.gridserver.com (205.186.175.110)' can't be established.
        RSA key fingerprint is 33:24:1e:38:bc:fd:75:02:81:d8:39:42:16:f6:f6:ff.
        Are you sure you want to continue connecting (yes/no)? yes
        Failed to add the host to the list of known hosts (/home/pavs/.ssh/known_hosts).
        Password:
        Password:
        Password:
        [email protected]'s password:
        Permission denied, please try again.
        [email protected]'s password:
        Permission denied, please try again.
        [email protected]'s password:
        Received disconnect from 205.186.175.110: 2: Too many authentication failures for pavs

        c:\Program Files (x86)\OpenSSH\bin>ssh.exe s122797.gridserver.com
        ssh: connect to host s122797.gridserver.com port 22: Connection timed out

        c:\Program Files (x86)\OpenSSH\bin>ssh.exe s122797.gridserver.com
        ssh: connect to host s122797.gridserver.com port 22: Connection timed out
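
    The "works once, then times out after failed passwords" pattern fits an automatic ban tool on the server (fail2ban, DenyHosts, or a hosting-side equivalent). If you can reach a console through your provider's control panel, a hedged checklist (tool names and paths are the common defaults, not confirmed for your hosts):

        grep -i ssh /etc/hosts.deny        # DenyHosts / TCP wrappers entries
        iptables -L -n | grep -i 22        # fail2ban or other firewall bans
        tail -n 50 /var/log/auth.log       # what sshd recorded for your address

    Such bans usually expire on their own, which would also be consistent with the connection coming back later.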


  • Linux Software RAID1 Rebuild Completes, but after reboot, it's degraded again

    - by zimmy6996
    I have been beating my head against an issue here, and I'm now turning to the internet for help. I have a system running Mandrake Linux with the following configuration:

    - /dev/hda - an IDE drive with the partitions that boot the system and make up most of the file system.
    - /dev/sda - drive 1 of 2 in the software RAID /dev/md0
    - /dev/sdb - drive 2 of 2 in the software RAID /dev/md0

    md0 gets mounted via fstab as /data-storage, so it is not critical to the system's ability to boot; we can comment it out of fstab and the system works just fine either way.

    The problem is that sdb failed. So I shut the box down, pulled the failed disk and installed a new one. When the system boots up, /proc/mdstat shows only sda as part of the RAID. I then run the various commands to rebuild the RAID onto /dev/sdb. Everything rebuilds correctly, and upon completion /proc/mdstat shows two drives, sda1(0) and sdb1(1). Everything looks great. Then you reboot the box... and once rebooted, sdb is missing from the RAID again, as if the rebuild never happened. I can walk through the commands to rebuild it again and it will work, but after a reboot the box seems to make sdb just vanish!

    The really odd thing is that if, after a reboot, I pull sda out of the box and try to get the system to load with only the rebuilt sdb drive in place, the system actually throws an error just after GRUB, saying something about a drive error, and the system has to shut down.

    Thoughts? I'm starting to wonder if GRUB has something to do with this mess, i.e. whether the drive isn't being set up within GRUB to be visible at boot. This RAID array isn't necessary for the system to boot, but with the replacement drive in the system and without sda, it won't boot, so it makes me believe there is something to that. On top of that, there just seems to be something wonky about the drive falling out of the RAID after a reboot. I've hit the point of pounding my head on the keyboard. Any help would be greatly appreciated!
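
    A hedged sketch of the usual persistence suspects after a disk swap (device names follow your layout; double-check the direction before copying partition tables):

        # does the new disk's partition carry a RAID superblock?
        mdadm --examine /dev/sdb1
        # copy the partition layout from the surviving disk, including the 'fd'
        # (Linux raid autodetect) partition type that boot-time autodetection needs:
        sfdisk -d /dev/sda | sfdisk /dev/sdb
        # after re-adding and resync, record the array so it assembles at boot:
        mdadm --detail --scan >> /etc/mdadm.conf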


  • IPv6 routing to another interface

    - by Robert
    I'm trying to get an IPv6-enabled router to forward data from one interface to the other, and I'm having issues. When following this example (http://www.cisco.com/en/US/tech/tk872/technologies_configuration_example09186a0080ba6106.shtml) I am able to get full connectivity between all 3 routers in my simulator. However, I can't manage the same with only one router: I can't get connectivity to the other interfaces on the same router.

    My PC is directly attached to FA 1/0 and it can ping the router's interface, but it cannot ping any other interface on the router (which, unless I'm missing something, it should be able to do). The router, on the other hand, can ping everything. I thought static routes might help, but the router already has routes for everything. My thinking is: the packet comes in, the router looks up the destination in its IPv6 routing table, realizes it's for itself, and should respond. I thought maybe it couldn't respond directly, so I tried pinging a device like 2001:0000:0000:1000::2, but I don't get a response. I'm running IOS 12.4. I'm probably missing something (hopefully simple), but I just can't see what it is. With only one router, how do I enable my PC to talk to the other subnets? Thank you in advance, Robert

    Topology:

        R1
          FA 0/0:     2001:0000:0000:0000::1/52
          FA 0/1:     2001:0000:0000:1000::1/52
          FA 1/0:     2001:0000:0000:2000::1/52
          Loopback 0: 2001:0000:0000:3000::1/52
        PC: 2001:0000:0000:2000::2/52
        The PC plugs directly into FA 1/0 on the router.

    Configuration:

        ipv6 cef
        ipv6 unicast-routing
        !
        interface Loopback0
         no ip address
         ipv6 address 2001:0000:0000:3000::1/52
         ipv6 enable
        !
        interface FastEthernet0/0
         no ip address
         duplex auto
         speed auto
         ipv6 address 2001:0000:0000::1/52
         ipv6 enable
        !
        interface FastEthernet0/1
         no ip address
         duplex auto
         speed auto
         ipv6 address 2001:0000:0000:1000::1/52
         ipv6 enable
        !
        interface FastEthernet1/0
         no ip address
         duplex auto
         speed auto
         ipv6 address 2001:0000:0000:2000::1/52
         ipv6 enable

    Routing table:

        IPV6Lab#show ipv6 route
        IPv6 Routing Table - 10 entries
        Codes: C - Connected, L - Local, S - Static, R - RIP, B - BGP
               U - Per-user Static route
               I1 - ISIS L1, I2 - ISIS L2, IA - ISIS interarea, IS - ISIS summary
               O - OSPF intra, OI - OSPF inter, OE1 - OSPF ext 1, OE2 - OSPF ext 2
               ON1 - OSPF NSSA ext 1, ON2 - OSPF NSSA ext 2
        C   2001:0000:0000::/52 [0/0]
             via ::, FastEthernet0/0
        L   2001:0000:0000::1/128 [0/0]
             via ::, FastEthernet0/0
        C   2001:0000:0000:1000::/52 [0/0]
             via ::, FastEthernet0/1
        L   2001:0000:0000:1000::1/128 [0/0]
             via ::, FastEthernet0/1
        C   2001:0000:0000:2000::/52 [0/0]
             via ::, FastEthernet1/0
        L   2001:0000:0000:2000::1/128 [0/0]
             via ::, FastEthernet1/0
        C   2001:0000:0000:3000::/52 [0/0]
             via ::, Loopback0
        L   2001:0000:0000:3000::1/128 [0/0]
             via ::, Loopback0
        L   FE80::/10 [0/0]
             via ::, Null0
        L   FF00::/8 [0/0]
             via ::, Null0
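
    Since the router answers on the attached interface but nothing beyond it, a hedged guess is that the PC simply has no default gateway for the other prefixes. Two quick checks, sketched for IOS plus a Windows PC (adjust the interface name and commands to your PC's actual OS):

        ! on the router: does it see the PC as a neighbor?
        show ipv6 neighbors
        ping 2001:0:0:2000::2

        rem on the PC (if Windows): point a default route at the router's FA 1/0 address
        netsh interface ipv6 add route ::/0 "Local Area Connection" 2001:0:0:2000::1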


  • configuring mod_proxy_html properly?

    - by tobinjim
    I have an Apache 2 web server that handles reverse proxying for a Rails 3 app running on another machine. The setup works, except that URLs generated within the webapp aren't getting rewritten by my configuration for mod_proxy_html. The "Reverse Proxy Scenario" is exactly what I'm trying to do, so I've followed that tutorial as completely as I know how, and I've applied the answers supplied here on Stack Overflow, to no effect.

    According to the "Reverse Proxy Scenario" you want a number of modules loaded. All those instructions are in my httpd.conf file, and when I examine the output from apachectl -t -D DUMP_MODULES, all the expected modules show up in the listing. My external web server doing the reverse proxy is at www.ourdomain.org and the Rails app is internally available at apphost.local (the server is Mac OS X Server 10.6, the Rails app server is Mac OS X 10.6).

    What's working right now is access to the webapp via the reverse proxy as:

        http://www.ourdomain.org/apphost/railsappname/controllername/action

    But none of the JavaScript files, CSS files or other assets get loaded, and links internal to the web app come out missing the apphost portion of the URL, as if my rewrite rule were configured incorrectly (so of course I've focused on that, but I can't seem to get anything added or deleted as the HTML passes in from apphost and out through the Apache server). For instance, hovering over an action link in the HTML returned by the web app you'll get:

        http://www.ourdomain.org/railsappname/controllername/action

    Here's what my Apache directives look like:

        LoadModule proxy_html_module /usr/libexec/apache2/mod_proxy_html.so
        LoadModule xml2enc_module /usr/libexec/apache2/mod_xml2enc.so
        ProxyHTMLLogVerbose On
        LogLevel Debug

        ProxyPass /apphost/ http://apphost.local/

        <Location /apphost/>
            SetOutputFilter INFLATE;proxy-html;DEFLATE
            ProxyPassReverse /
            ProxyHTMLExtended On
            ProxyHTMLURLMap railsappname/ apphost/railsappname/
            RequestHeader unset Accept-Encoding
        </Location>

    After every change I make to httpd.conf I religiously check apachectl -t just to stay sane. I'm definitely not an Apache expert, but the directives that follow mine don't seem to overrule what I'm doing here, and nothing I try alters the URLs I see in my browser after hitting the Apache server with a request for the web app. Even if you can't tell what I've done incorrectly, I'd welcome ideas on how to get Apache to show what it's doing to the HTML coming from my web app; that's what I understood ProxyHTMLLogVerbose On and LogLevel Debug to be setting up, but I'm not seeing anything in the log files.
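
    One hedged observation about the map itself: mod_proxy_html matches URLs as they appear in the emitted HTML, and a Rails app typically emits root-relative links like /railsappname/..., which the relative pattern railsappname/ only catches mid-string. A sketch of maps anchored both ways (the names are taken from your setup):

        ProxyHTMLURLMap http://apphost.local /apphost
        ProxyHTMLURLMap ^/railsappname/ /apphost/railsappname/ R

    The R flag makes the second pattern a regular expression, so the leading-slash form is caught too.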


  • Win-XP Browsers Hang on page load - (waiting for...)

    - by CHarmon
    Hello, I'm having problems with my browsers hanging on loading pages on my desktop machine. I'm using Windows XP Pro with SP3, fully updated except for IE 8. All three of my browsers (IE 7, Chrome and Firefox) have the same problem: pages are not being loaded and hang on "waiting for ...", either the page being loaded or ad servers. Sometimes a page will load but the loading graphic continues to be displayed as if the page were still loading, when it appears to be fully loaded. The problem is bad enough that I can't really use any of my browsers, though I can eventually get most pages to load by stopping and restarting the page load.

    I have a DSL modem with a wireless router, and I have been able to eliminate the modem and router as the source of my problem: my laptop doesn't have any problems, even when hardwired to the router with the wireless connection disabled. I deleted the NIC and let XP re-install it, tried a different network cable, and tried the same router port used in the laptop test.

    One clue that may be important is that I can't connect to my router's admin page from the desktop machine; the page hangs while trying to connect, although I can ping the router and can quickly connect to it from the laptop. I also can't use the Windows Update process; the page never fully loads. The problem affects other user accounts and even happens in safe mode. I am convinced the problem is with part of the OS, some layer able to affect all of the browsers. The purpose of this post is to see if anyone has some ideas before I do an XP repair. I have done quite a bit of troubleshooting:

    - Ran a full anti-virus scan with AVG - no problems.
    - Ran full scans with Spybot, MalwareBytes and Sophos anti-rootkit - no problems.
    - Ran chkdsk with both options checked.
    - Ran Disk Cleanup.
    - Defragged.
    - Re-installed IE 7.
    - Cleared all the browser caches.
    - Ran CCleaner (registry tool).
    - Ran HijackThis - nothing unusual (the problem happens in safe mode too).
    - Ran Process Explorer - no unusual processes.
    - Used System Restore and fell back several days - no change.
    - Booted to the last known good configuration - no change.
    - Ran MicrosoftFixit50199.msi - no change.

    Any ideas or suggestions would be appreciated... I'm not looking forward to doing a repair on XP. Thanks in advance for any help.
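
    One low-risk thing the list doesn't mention, offered as a hedged suggestion: this exact symptom (every browser stalls mid-page while ping works, across accounts and safe mode) is often a corrupted Winsock/LSP stack on XP. Resetting it is quick, though note it removes any third-party LSPs your security software may have installed:

        netsh winsock reset
        netsh int ip reset resetlog.txt

    Reboot afterwards and retest before moving on to an XP repair.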


  • LdapErr: DSID-0C0903AA, data 52e: authenticating against AD '08 with pam_ldap

    - by Stefan M
    I have full admin access to the AD 2008 server I'm trying to authenticate against. The error code means invalid credentials, but I wish this were as simple as me typing in the wrong password.

    First of all, I have a working Apache mod_ldap configuration against the same domain:

        AuthType basic
        AuthName "MYDOMAIN"
        AuthBasicProvider ldap
        AuthLDAPUrl "ldap://10.220.100.10/OU=Companies,MYCOMPANY,DC=southit,DC=inet?sAMAccountName?sub?(objectClass=user)"
        AuthLDAPBindDN svc_webaccess_auth
        AuthLDAPBindPassword mySvcWebAccessPassword
        Require ldap-group CN=Service_WebAccess,OU=Groups,OU=MYCOMPANY,DC=southit,DC=inet

    I'm showing this because it works without the use of any Kerberos, which so many other guides out there recommend for system authentication to AD. Now I want to translate this into pam_ldap.conf for use with OpenSSH. The /etc/pam.d/common-auth part is simple; this line is processed before any other:

        auth sufficient pam_ldap.so debug

    I believe the real issue is configuring pam_ldap.conf:

        host 10.220.100.10
        base OU=Companies,MYCOMPANY,DC=southit,DC=inet
        ldap_version 3
        binddn svc_webaccess_auth
        bindpw mySvcWebAccessPassword
        scope sub
        timelimit 30
        pam_filter objectclass=User
        nss_map_attribute uid sAMAccountName
        pam_login_attribute sAMAccountName
        pam_password ad

    I've been monitoring LDAP traffic on the AD host using Wireshark, comparing a successful session from Apache's mod_ldap with a failed session from pam_ldap. The first bind request succeeds using the svc_webaccess_auth account, the search request succeeds and returns 1 result, and the last bind request, using my user, fails with the above error code. Everything looks identical except for this one line in the filter of the search request, here showing mod_ldap:

        Filter: (&(objectClass=user)(sAMAccountName=ivasta))

    The second one is pam_ldap:

        Filter: (&(&(objectclass=User)(objectclass=User))(sAMAccountName=ivasta))

    My user is named ivasta. However, the search request does not fail; it returns 1 result. I've also tried this with ldapsearch on the CLI. It's the bind request that follows the search request that fails with error code 52e. Here is the failure message of the final bind request:

        resultcode: invalidcredentials (49)
        80090308: LdapErr: DSID-0C0903AA, comment: AcceptSecurityContext error, data 52e, v1772

    This should mean invalid password, but I've tried other users and very simple passwords. Does anyone recognize this from their own struggles with pam_ldap and AD?

    Edit: It's worth noting that I've also tried pam_password crypt, and pam_filter sAMAccountName=User, because the following worked when using ldapsearch:

        ldapsearch -LLL -h 10.220.100.10 -x -b "ou=Users,ou=mycompany,dc=southit,dc=inet" -v -s sub -D svc_webaccess_auth -W '(sAMAccountName=ivasta)'

    This works using the svc_webaccess_auth account password. That account has scan access to the OU for use with Apache's mod_ldap.
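
    A hedged way to reproduce pam_ldap's failing second bind by hand: bind as the user's full DN, which is what pam_ldap binds with after the search, rather than the short name the working examples use. The DN below is a guess at the layout, so take the real one from the earlier search result:

        ldapsearch -LLL -h 10.220.100.10 -x \
            -D "CN=ivasta,OU=Users,OU=mycompany,DC=southit,DC=inet" -W \
            -b "OU=Companies,MYCOMPANY,DC=southit,DC=inet" '(sAMAccountName=ivasta)'

    If that bind also returns data 52e, the problem is with the user's credentials or account flags on the AD side rather than with the pam_ldap configuration.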


  • 2nd Year College - Learning - Microsoft Server Products

    - by Ryan
    As the title says, I just finished my first year of college (majoring in Software Engineering). Fortunately my school likes Microsoft enough that I can get pretty much anything Microsoft sells, and I can get IBM WebSphere and the like for free as well.

    Earlier this year, I set up an oldish computer (2.6 GHz Pentium D, x64) to run Ubuntu Server headless. I'm predominantly a Java developer, so Apache, Maven, Nexus, Sonar, SVN, etc. made it onto the machine. It worked really well for personal and school projects, especially team projects (quick ramp-up). Anyway, I've started to pick up C# to complement my Java knowledge (don't judge me :P), and am interested in working with some of the associated Microsoft equivalents. The machine currently has the Ubuntu install as well as Windows 7 Ultimate. I do all of my actual development work from my laptop, also running Windows 7 Ultimate. I was wondering what software you would recommend putting on the machine. I'm not actually serving anything off the machine itself, but under Ubuntu I had it doing integration tests with Hudson on every commit, profiling my applications, and so on. The machine would run headless, and I would remote into it.

    Here is what I am currently leaning towards / wondering about:

    - Windows 7 Ultimate vs Windows Server 2008 (R2) (no one is really clear on why I should go with one over the other)
    - Windows Team Foundation Server
    - SharePoint (never used it before, kind of meh about it)
    - IBM WebSphere or GlassFish (some Java EE application server)
    - SQL Server 2008
    - A DVCS

    In order to better control product conflicts and limit resource use, I'm wondering if I should install things into virtual machines (I can get VMware or Microsoft's virtualization products). I also plan on installing everything I had running under Linux (it's almost entirely Java-based development software, so it'll run on both; the only reason I went with Ubuntu during the year was that the Apache build seemed better). I'm primarily looking to become familiar with enterprise software development tools, as well as get something functional that will help my development process (i.e., I'll still use Project and assign tasks, even though I might be the only one to assign tasks to, just to practice doing so). Is there any other software or configuration detail I should explore? Opinions on my current list? I primarily use C#, Java, and PHP, and I'm familiar with Ruby and Python as well. Thanks!


  • SSH stops at "using username" with IPTables in effect

    - by Rautamiekka
    We used UFW but couldn't get the Source Dedicated ports open, which was weird, so we purged UFW and switched to IPTables, using Webmin to configure. If the inbound chain is on DENY and the SSH port is open [judged from Webmin], PuTTY prints using username "root" and stops there instead of asking for the public-key passphrase. With the inbound chain on ACCEPT, the passphrase prompt appears. This problem didn't happen with UFW. Picture of the IPTables configuration in Webmin: http://s284544448.onlinehome.us/public/PlusLINE%20Dedicated%20Server,%20Webmin,%20IPTables,%200.jpg (the address points to the previous rautamiekka.org).

    iptables-save when on INPUT DENY:

        # Generated by iptables-save v1.4.8 on Wed Apr 11 16:09:20 2012
        *mangle
        :PREROUTING ACCEPT [1430:156843]
        :INPUT ACCEPT [1430:156843]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [1415:781598]
        :POSTROUTING ACCEPT [1415:781598]
        COMMIT
        # Completed on Wed Apr 11 16:09:20 2012
        # Generated by iptables-save v1.4.8 on Wed Apr 11 16:09:20 2012
        *nat
        :PREROUTING ACCEPT [2:104]
        :POSTROUTING ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        COMMIT
        # Completed on Wed Apr 11 16:09:20 2012
        # Generated by iptables-save v1.4.8 on Wed Apr 11 16:09:20 2012
        *filter
        :INPUT DROP [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [1247:708906]
        -A INPUT -i lo -m comment --comment "Machine-within traffic - always allowed" -j ACCEPT
        -A INPUT -p tcp -m comment --comment "Services - TCP" -m tcp -m multiport --dports 22,80,443,10000,20,21 -m state --state NEW,ESTABLISHED -j ACCEPT
        -A INPUT -p tcp -m comment --comment "Minecraft - TCP" -m tcp --dport 25565 -j ACCEPT
        -A INPUT -p udp -m comment --comment "Minecraft - UDP" -m udp --dport 25565 -j ACCEPT
        -A INPUT -p tcp -m comment --comment "Source Dedicated - TCP" -m tcp --dport 27015 -j ACCEPT
        -A INPUT -p udp -m comment --comment "Source Dedicated - UDP" -m udp -m multiport --dports 4380,27000:27030 -j ACCEPT
        -A INPUT -p udp -m comment --comment "TS3 - UDP - main port" -m udp --dport 9987 -j ACCEPT
        -A INPUT -p tcp -m comment --comment "TS3 - TCP - ServerQuery" -m tcp --dport 10011 -j ACCEPT
        -A OUTPUT -o lo -m comment --comment "Machine-within traffic - always allowed" -j ACCEPT
        COMMIT
        # Completed on Wed Apr 11 16:09:20 2012

    iptables --list when on INPUT DENY:

        Chain INPUT (policy DROP)
        target prot opt source destination
        ACCEPT all -- anywhere anywhere /* Machine-within traffic - always allowed */
        ACCEPT tcp -- anywhere anywhere /* Services - TCP */ tcp multiport dports ssh,www,https,webmin,ftp-data,ftp state NEW,ESTABLISHED
        ACCEPT tcp -- anywhere anywhere /* Minecraft - TCP */ tcp dpt:25565
        ACCEPT udp -- anywhere anywhere /* Minecraft - UDP */ udp dpt:25565
        ACCEPT tcp -- anywhere anywhere /* Source Dedicated - TCP */ tcp dpt:27015
        ACCEPT udp -- anywhere anywhere /* Source Dedicated - UDP */ udp multiport dports 4380,27000:27030
        ACCEPT udp -- anywhere anywhere /* TS3 - UDP - main port */ udp dpt:9987
        ACCEPT tcp -- anywhere anywhere /* TS3 - TCP - ServerQuery */ tcp dpt:10011

        Chain FORWARD (policy ACCEPT)
        target prot opt source destination

        Chain OUTPUT (policy ACCEPT)
        target prot opt source destination
        ACCEPT all -- anywhere anywhere /* Machine-within traffic - always allowed */

    The UFW rules prior to purging, on INPUT DENY:

        127.0.0.1 ALLOW IN 127.0.0.1
        3306 DENY IN Anywhere
        20,21/tcp ALLOW IN Anywhere
        22/tcp (OpenSSH) ALLOW IN Anywhere
        80/tcp ALLOW IN Anywhere
        443/tcp ALLOW IN Anywhere
        989 ALLOW IN Anywhere
        990 ALLOW IN Anywhere
        8075/tcp ALLOW IN Anywhere
        9987/udp ALLOW IN Anywhere
        10000/tcp ALLOW IN Anywhere
        10011/tcp ALLOW IN Anywhere
        25565/tcp ALLOW IN Anywhere
        27000:27030/tcp ALLOW IN Anywhere
        4380/udp ALLOW IN Anywhere
        27014:27050/tcp ALLOW IN Anywhere
        30033/tcp ALLOW IN Anywhere
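
    A plausible culprit (an assumption on my part; the question doesn't confirm it): with the INPUT policy at DROP there is no general connection-tracking rule, so replies to connections the server opens itself (for example, sshd's reverse-DNS lookup of the connecting client) are silently dropped, and PuTTY hangs after sending the username until that lookup times out. UFW adds such a rule by default, which would explain why it didn't show the problem. A minimal sketch of the usual fix:

        # Sketch: accept reply traffic for connections this machine initiated.
        # Inserted at position 1 so it is evaluated before the per-port rules.
        iptables -I INPUT 1 -m state --state ESTABLISHED,RELATED -j ACCEPT

    Alternatively, setting UseDNS no in /etc/ssh/sshd_config stops sshd from attempting the reverse lookup at all.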

    Read the article

  • Ubuntu getting wrong hostname from DHCP

    - by sam
    When provisioning new Ubuntu Precise (12.04) servers, the hostname they get seems to be generated from the DNS search path, not from a reverse lookup of the address. Take the following configuration. BIND is configured with the hostname and the reverse name. Normal zone:

        $TTL 600
        $ORIGIN srv.local.net.
        @ IN SOA ns0.local.net. hostmaster.local.net. (
                2014082101
                10800
                3600
                604800
                600 )
        @ IN NS ns0.local.net.
        @ IN MX 5 mail.local.net.
        my-new-server IN A 10.32.2.30

    And reverse:

        @ IN SOA ns0.local.net. hostmaster.local.net. (
                2014082101
                10800
                3600
                604800
                600 )
        @ IN NS ns0.local.net.
        $ORIGIN 32.10.in-addr.arpa.
        30.2 IN PTR my-new-server.srv.local.net.

    Then dhcpd is configured to hand out static leases based on MAC addresses, like so:

        subnet 10.32.2.0 netmask 255.255.254.0 {
            option subnet-mask 255.255.254.0;
            option routers 10.32.2.1;
            option domain-name-servers 10.32.2.1;
            option domain-name "util.of1.local.net of1.local.net srv.local.net";
            site-option-space "pxelinux";
            option pxelinux.magic f1:00:74:7e;
            if exists dhcp-parameter-request-list {
                option dhcp-parameter-request-list = concat(option dhcp-parameter-request-list,d0,d1,d2,d3);
            }
            group {
                option pxelinux.configfile "pxelinux.cfg/pxeboot";
                host my-new-server {
                    fixed-address my-new-server.srv.local.net;
                    hardware ethernet aa:aa:aa:bb:bb:bb;
                }
            }
        }

    So the hostname should be my-new-server.srv.local.net; however, when building an Ubuntu 12.04 node, the hostname ends up as my-new-server.util.of1.local.net. When building Lucid (10.04) hosts the hostname is correct; it's only on Precise/12.04 nodes that we have the problem. Normal and reverse lookups of the host and IP return the correct result:

        Sams-MacBook-Pro:~ sam$ host my-new-server
        my-new-server.srv.local.net has address 10.32.2.30
        Sams-MacBook-Pro:~ sam$ host my-new-server.srv.local.net
        my-new-server.srv.local.net has address 10.32.2.30
        Sams-MacBook-Pro:~ sam$ host 10.32.2.30
        30.2.32.10.in-addr.arpa domain name pointer my-new-server.srv.local.net.

    The contents of the hosts file are incorrect too:

        127.0.0.1 localhost
        127.0.1.1 my-new-server.util.of1.local.net of1.local.net srv.local.net my-new-server

    So it looks like when the installer creates the hosts file, it puts the entire DNS search path into the loopback entry, and the FQDN according to the server becomes the short hostname followed by the first domain in the search path. Is there a way to get around this behaviour, or to fix it so the hostname is picked up correctly? It's picking up the first part of the hostname, and the rest is wrong.
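
    A likely explanation (my reading of the config, not something stated in the question): DHCP option 15, domain-name, is defined to carry a single domain, while a search list belongs in option 119, domain-search. Packing three space-separated domains into domain-name appears to make the Precise installer treat the whole string as the host's domain and build the FQDN from its first word. A minimal sketch of the conventional split in dhcpd.conf:

        # Sketch: one domain in domain-name; the search path in domain-search.
        option domain-name "srv.local.net";
        option domain-search "util.of1.local.net", "of1.local.net", "srv.local.net";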

    Read the article

  • Thunderbird 3: can't change column width?

    - by rumtscho
    I recently installed Thunderbird 3.0.3 and just noticed a suboptimal UI setting: in the upper pane, which lists the e-mails in the current folder, the Date column is about 200px wide. So when I keep the window at 480x600, all I see in a row is:

        | tree icon | favourites icon | attachment icon | read icon | junk icon | Date and time, followed by 5cm of whitespace | ... | P

    where "P" is the first letter of the sender's name. The "..." is actually shown that way; I have no idea which column it is meant to be. I see neither the sender nor the message subject, which makes scrolling a folder for a certain mail rather pointless. Both appear when I maximize the window; the columns are then not only bigger, they are also arranged in a different sequence. But I feel that keeping a mail client permanently maximised at 1600x1200 is a waste of screen real estate. My naive attempt at a solution was to move the mouse cursor to the right edge of the Date column and shrink it by dragging left while holding the left mouse button. Not only is this the default behaviour for every resizable column I've ever encountered in a GUI, the cursor actually turns into a horizontal double-headed arrow. But dragging has no effect at all: I cannot make the wide column narrow, and I cannot make the narrow columns wide. I didn't find anything in the preferences either. So can somebody please explain how to get the columns arranged sensibly?

    Edit: I found out that I only have the problem when I drag the Thunderbird window to a GridMove screen area. It gets automatically resized, but doesn't notice the resize event or something, so the column width remains the same as under a maximized window. Making the window narrow with the mouse first helps with the column width, but the mail pane is still too wide (rows don't reflow). Anyway, this seems to be a bug caused by the combination of the two applications rather than a configuration problem, so I guess I'll have to live with it.

    Read the article

  • Proxy Error 502 "Reason: Error reading from remote server" with Apache 2.2.3 (Debian) mod_proxy and Jetty 6.1.18

    - by Martin
    Apache receives requests on port :80 and proxies them to Jetty on port :8080. The error page reads:

        The proxy server received an invalid response from an upstream server.
        The proxy server could not handle the request GET /.

    My dilemma: everything works fine normally (fast requests, and requests taking a few seconds or a few tens of seconds, are processed OK). Problems occur when request processing takes long (a few minutes?). If I instead issue the request directly to Jetty on port :8080, it is processed OK. So the problem most likely sits between Apache and Jetty, where I am using mod_proxy. How do I solve this? I have already tried some "tricks" related to KeepAlive settings, without luck. Here is my current configuration; any suggestions?

        #keepalive Off ## I have tried this, does not help
        #SetEnv force-proxy-request-1.0 1 ## I have tried this, does not help
        #SetEnv proxy-nokeepalive 1 ## I have tried this, does not help
        #SetEnv proxy-initial-not-pooled 1 ## I have tried this, does not help
        KeepAlive 20 ## I have tried this, does not help
        KeepAliveTimeout 600 ## I have tried this, does not help
        ProxyTimeout 600 ## I have tried this, does not help
        NameVirtualHost *:80
        <VirtualHost _default_:80>
            ServerAdmin [email protected]
            ServerName www.mydomain.fi
            ServerAlias mydomain.fi mydomain.com mydomain www.mydomain.com
            ProxyRequests On
            ProxyVia On
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyRequests Off
            ProxyPass / http://www.mydomain.fi:8080/ retry=1 acquire=3000 timeout=600
            ProxyPassReverse / http://www.mydomain.fi:8080/
            RewriteEngine On
            RewriteCond %{SERVER_NAME} !^www\.mydomain\.fi
            RewriteRule /(.*) http://www.mydomain.fi/$1 [redirect=301L]
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            ServerSignature On
        </VirtualHost>

    Here are also the access-log and error-log entries from a failing request:

        74.125.43.99 - - [29/Sep/2010:20:15:40 +0300] "GET /?wicket:bookmarkablePage=newWindow:com.mydomain.view.application.reports.SaveReportPage HTTP/1.1" 502 355 "https://www.mydomain.fi/?wicket:interface=:0:2:::" "Mozilla/5.0 (Windows; U; Windows NT 6.1; fi; rv:1.9.2.10) Gecko/20100914 Firefox/3.6.10"
        [Wed Sep 29 20:20:40 2010] [error] [client 74.125.43.99] proxy: error reading status line from remote server www.mydomain.fi, referer: https://www.mydomain.fi/?wicket:interface=:0:2:::
        [Wed Sep 29 20:20:40 2010] [error] [client 74.125.43.99] proxy: Error reading from remote server returned by /, referer: https://www.mydomain.fi/?wicket:interface=:0:2:::
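
    One detail worth flagging (an observation, not a confirmed fix): the failing request dies exactly 300 seconds after it arrives (20:15:40 in the access log, 20:20:40 in the error log), which matches Apache 2.2's default core Timeout of 300 seconds rather than any of the 600-second values in the configuration. That suggests the timeout directives aren't taking effect where expected. A minimal sketch of what I'd try first, in the main server config (for example /etc/apache2/apache2.conf), followed by a reload:

        # Sketch, assuming the 300 s gap in the logs means the core Timeout
        # directive (default 300 in Apache 2.2) is ending the backend read.
        Timeout 600

    Separately, the vhost turns ProxyRequests On with an open <Proxy *> "Allow from all" block before turning it Off again; for a pure reverse proxy, ProxyRequests Off alone is the safe form, since the forward-proxy section is at best redundant and at worst an open proxy.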

    Read the article
