Search Results

Search found 98447 results on 3938 pages for 'sql server denali'.


  • Using dnsmasq for accessing multiple nameservers assigned by DHCP

    - by Ash
    At my work desktop running openSUSE 11.4, I have a local network which gets its address, domain (work.site) and nameserver (10.100.1.1, 10.100.1.2) info through DHCP; these get written into /etc/resolv.conf. I access the internet through the work network, and these two nameservers end up answering any public domain name lookups.

    I also have a private VPN that I connect to. The nameserver (10.111.1.1) and domain (private.site) for this network rarely change, but currently they're pushed by the OpenVPN client into NetworkManager, which merges them into the existing /etc/resolv.conf. My resolv.conf ultimately ends up looking like this:

        search private.site work.site
        nameserver 127.0.0.1
        nameserver 10.111.1.1
        nameserver 10.100.1.1

    As you can see, the second nameserver from my work network was pushed out because of the three-entry limit. That is still fine, but it would be a problem if the remaining work nameserver goes down for maintenance or something. So I found out that dnsmasq could help me here, and I set up dnsmasq as a local DNS resolver without any DHCP support. This is my /etc/dnsmasq.conf right now:

        resolv-file=/etc/resolv.conf
        server=/private.site/10.111.1.1
        server=/1.111.10.in-addr.arpa/10.111.1.1
        listen-address=127.0.0.1
        bind-interfaces
        log-queries

    I've made dnsmasq get the list of nameservers from /etc/resolv.conf, since NetworkManager seems to be updating this list correctly (for a maximum of three nameservers). I'm able to resolve the host names in both networks correctly. These are the questions I have:

    Is there a way I can make either NetworkManager or dhclient write the list of nameservers somewhere else, which I can then point dnsmasq at as its resolv-file? How do I make dnsmasq use certain nameservers as the default for all queries? Right now I notice that lookups for public domains on the internet are usually sent to both nameservers - the one on work.site as well as private.site. It would be good if I could limit this to work.site only.
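
    For reference, a minimal sketch of one possible approach, assuming the upstream list can be kept out of /etc/resolv.conf entirely (the option names are standard dnsmasq settings; the addresses are the ones given in the question):

        # /etc/dnsmasq.conf - illustrative sketch only
        no-resolv                                  # ignore /etc/resolv.conf altogether
        server=/private.site/10.111.1.1            # VPN domain -> VPN nameserver
        server=/1.111.10.in-addr.arpa/10.111.1.1   # reverse lookups for the VPN subnet
        server=10.100.1.1                          # default upstreams: work nameservers only
        server=10.100.1.2
        listen-address=127.0.0.1
        bind-interfaces

    With no-resolv and explicit server= lines, public lookups would go only to the work nameservers, while private.site queries stay on the VPN resolver. Whether this fits depends on how often the work nameservers change, since they are hard-coded here rather than picked up from DHCP.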

    Read the article

  • XAMPP - Apache service stops running after a few seconds

    - by Fábio Antunes
    Hello, I have a big problem with my XAMPP server: for some reason the Apache service stops running a few seconds after it has been started. I have no idea what the problem is, and the error logs don't say much about it:

        [Fri May 07 01:09:32 2010] [notice] Digest: generating secret for digest authentication ...
        [Fri May 07 01:09:32 2010] [notice] Digest: done
        [Fri May 07 01:09:33 2010] [notice] Apache/2.2.14 (Win32) DAV/2 mod_ssl/2.2.14 OpenSSL/0.9.8l mod_autoindex_color PHP/5.3.1 mod_apreq2-20090110/2.7.1 mod_perl/2.0.4 Perl/v5.10.1 configured -- resuming normal operations
        [Fri May 07 01:09:33 2010] [notice] Server built: Nov 11 2009 14:29:03
        [Fri May 07 01:09:33 2010] [crit] (22)Invalid argument: Parent: Failed to create the child process.
        [Fri May 07 01:09:33 2010] [crit] (OS 6)O identificador é inválido. : master_main: create child process failed. Exiting.
        [Fri May 07 01:09:33 2010] [notice] Parent: Forcing termination of child process 36

    ("O identificador é inválido" is pt_PT for "the identifier is invalid.")

    Note: no other application is using the Apache port. I have made some changes to the httpd.conf file, but it had worked well for a long time afterwards: I added some virtual hosts and enabled Xdebug. Has this happened to anyone who could tell me what the problem is? Thanks for your time.

    Read the article

  • Wildcard DNS, VirtualHosts on apache2, 404 for unused subdomains

    - by niel
    On an Apache2 server pointed at by a DNS zone that includes a wildcard entry, e.g. *.example.com, subdomains that are not defined as ServerNames in any VirtualHost fall through to the first defined VirtualHost - in my case 000-default.

    My question: how would one get unused subdomains (subdomains not used in any VirtualHost) to return a 404 error to the requesting client? This should preferably show up in the server logs as a 404 as well.

    I have looked into the following possibilities:

    1. Redirect any invalid subdomain to the home page or some other page. The problem with this method is that when someone links to your site as this.company.sucks.example.com, the client will see your home page (or, in my case, 000-default if I do not redirect). Thanks to Mike for pointing this out. (A regex for "sucks" etc. is definitely not an option.)

    2. Let the default VirtualHost point to a non-existent directory. Apache does not like this one bit and warns on every reload. Beyond the warning, everything seems fine, but it feels like a hack. Does this seem like a problem (however small) to anyone?

    3. Point the default VirtualHost to a folder where index.php is forbidden, producing a 403 status code. This is confusing and makes things like the following overly complicated: say you use a subdomain per user (a big reason to use wildcard DNS, apparently), and users can view each other's profiles at username.example.com. This solution is confusing to the user and not at all what I want to do.

    My ideal solution will let the user know there is nothing to view at the URL he entered - preferably with a 404 and an error log entry for the address entered (not some other address). Any help would be greatly appreciated!
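
    For illustration, a minimal sketch of a catch-all VirtualHost that answers unmatched Host headers with a 404 - the directives are standard Apache/mod_alias syntax, but the paths and ServerName are placeholders:

        # 000-default - catch-all for subdomains with no matching ServerName
        <VirtualHost *:80>
            ServerName catchall.invalid        # placeholder name, never matched directly
            DocumentRoot /var/www/empty        # assumed empty directory
            # mod_alias: answer every request on this vhost with 404
            RedirectMatch 404 ^/
            ErrorLog /var/log/apache2/catchall-error.log
            CustomLog /var/log/apache2/catchall-access.log combined
        </VirtualHost>

    Because this vhost is defined first (or marked as the default), any request whose Host header matches no other ServerName would land here, be answered with a 404, and be logged under the catch-all logs rather than the real site's.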

    Read the article

  • Unknown nginx Error Messages

    - by Sparsh Gupta
    Hello, I am getting some nginx errors in my error.log which I am unable to understand. They look like this:

        2011/03/13 21:48:21 [crit] 14555#0: *323314343 open() "/usr/local/nginx/proxy_temp/0/95/0000000950" failed (13: Permission denied) while reading upstream, client: XX.XX.XX.XX, server: , request: "GET /abc.jpg 2 HTTP/1.0", upstream: "http://192.168.162.141:80/abc.jpg", host: "example.com", referrer: "http://domain.com"
        2011/03/13 22:00:07 [crit] 14552#0: *324171134 open() "/usr/local/nginx/proxy_temp/1/95/0000000951" failed (13: Permission denied) while reading upstream, client: XX.XX.XX.XY, server: , request: "GET mno.png HTTP/1.1", upstream: "http://192.168.162.141:80/mno.png", host: "example.com", referrer: "http://domain2.com"

    I also looked at these locations but found that there is no file by that name:

        root@li235-57:/var/log/nginx# /usr/local/nginx/proxy_temp/1/
        00/ 01/ 02/ 03/ 04/ 05/ 06/ 07/ 08/ 09/ 10/ 11/ 12/ 13/ 14/ 15/ 16/ 17/ 18/ 19/ 20/ 21/ 22/ 23/ 24/ 25/ 26/ 27/ 28/ 29/ 30/ 31/ 32/ 33/ 34/ 35/ 36/ 37/
        root@li235-57:/var/log/nginx# ls /usr/local/nginx/proxy_temp/0/
        01/ 02/ 03/ 04/ 05/ 06/ 07/ 08/ 09/ 10/ 11/ 12/ 13/ 14/ 15/ 16/ 17/ 18/ 19/ 20/ 21/ 22/ 23/ 24/ 25/ 26/ 27/ 28/ 29/ 30/ 31/ 32/ 33/ 34/ 35/ 36/ 37/

    Can someone help me understand what is going on, how I can debug this further, and how best to fix it? Thanks.
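
    A "Permission denied" on files under proxy_temp usually points at a mismatch between the user the worker processes run as and the owner of the proxy_temp tree. A minimal sketch of how one might check and correct that (the paths are taken from the logs above; the worker user is an assumption to verify first):

        # Which user do the workers run as? (the "user" directive in nginx.conf)
        grep -E '^\s*user' /usr/local/nginx/conf/nginx.conf
        ps -o user= -C nginx

        # Who owns the temp tree the errors refer to?
        ls -ld /usr/local/nginx/proxy_temp /usr/local/nginx/proxy_temp/0 /usr/local/nginx/proxy_temp/1

        # If they differ, hand the tree to the worker user (assumed here to be "www-data")
        chown -R www-data /usr/local/nginx/proxy_temp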

    Read the article

  • RAID 5 - DELL 2850 and others

    - by Kiara
    I have installed Ubuntu on a Dell 2850 and configured an array of 5 disks (SCSI, 73 GB, 10K) in RAID 5. I wanted to simulate a drive failure, so in the middle of normal operation I pulled one of the drives out and put it back in after a bit. The drive now shows an orange light and seems to be rebuilding, but it has been taking hours and hours with no result.

    So I went into the PERC utility (Ctrl+M), where the disk shows "REBLD", but it never reaches an online state. Under Objects - Physical drives - Rebuilding - View rebuild process I can see a progress bar moving from 0%, but if I reboot before it finishes and get into the PERC utility again, the rebuild seems to start over from 0% - so it is not rebuilding automatically.

    My concern is: what would happen in a real failure? Do I have to shut the server down and go into the PERC utility to start the rebuild manually? I thought the whole point was to have this happen automatically, without having to stop the server. Or does it in fact rebuild automatically, but needs enough uninterrupted time, because otherwise the rebuild starts from scratch after a reboot? It seems to take more than 3 hours for a 73 GB disk!

    My second question is: can I mix hard drives? If I have a RAID array of 5x73 GB 10K disks, can I use a different size (146 GB) or speed (15K)? Apparently someone said it is OK here: Poweredge 2850: replace disk with larger in RAID?

    Read the article

  • Debian Squeeze and exim4: cannot send mail

    - by Fernando Campos
    Hello guys, I got this error after installing and configuring the exim4-daemon-light and mailutils packages on Debian Squeeze. The setup is meant to send automatic messages from websites, like email confirmations and such.

    Configuration after package install:

        dpkg-reconfigure exim4-config

    You'll be presented with a welcome screen, followed by a screen asking what type of mail delivery you'd like to support. Choose the option for "internet site" and select "Ok" to continue. After many configuration screens you can test mail with:

        echo "test message" | mail -s "test message" [email protected]

    Here is the response:

        root@server:/etc# echo "test message" | mail -s "test message" [email protected]
        2011-03-02 20:34:59 1PuxRT-0001Aj-T9 Cannot open main log file "/var/log/exim4/mainlog": Permission denied: euid=101 egid=103
        2011-03-02 20:34:59 1PuxRT-0001Aj-T9 <= root@debian U=root P=local S=331
        2011-03-02 20:34:59 1PuxRT-0001Aj-T9 Cannot open main log file "/var/log/exim4/mainlog": Permission denied: euid=101 egid=103
        exim: could not open panic log - aborting: see message(s) above
        Can't send mail: sendmail process failed with error code 1

    There is no /var/log/exim4 directory on my server. I tried to create it, but that didn't work. Please, can someone help me? Best regards, Fernando
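
    The error shows exim running as euid=101/egid=103, so one common line of attack is to recreate /var/log/exim4 owned by whatever user and group those IDs map to. A short sketch, assuming the usual Debian layout where the exim user is Debian-exim and the log directory belongs to group adm (verify the IDs on your own system first):

        # See which user/group the euid/egid in the error correspond to
        getent passwd 101
        getent group 103

        # Recreate the log directory owned by that user (ownership/mode are the usual Debian defaults, adjust to match your IDs)
        mkdir -p /var/log/exim4
        chown Debian-exim:adm /var/log/exim4
        chmod 2750 /var/log/exim4

        # Re-run the test
        echo "test message" | mail -s "test message" [email protected]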

    Read the article

  • Monitor Exchange Email Address and run scripts

    - by WernerCD
    Okay... not sure how "out there" this thought is. Right now, to send a pager message (aka text message), a user logs into our AS400, logs into the program, enters a user name and a message, and hits F10 to send. With a little looking, it seems you can run remote commands on the AS400 via FTP, so I'm working on building a script (batch or otherwise) that, given two parameters (user, message), will FTP into the AS400 and run a remote command:

        c:\>ftp server
        user: admin
        password: *****
        ftp> quote rcmd SNDPGRMSG TOPGR(JDOE) MSG('This is a Test')
        ftp> quit

    So what I want to do is:

    1. Set up an email account on our Exchange server.
    2. Monitor the account for incoming mail.
    3. Upon receipt of incoming mail, parse it - say, for example, the subject is defined as "Recipient" and the email text is defined as "Pager message".
    4. Run a batch that uses the above-mentioned TOPGR and MSG as parameters, via FTP to the AS400.
    5. Mark the email as "read".

    The main thing I'm not sure about is monitoring an Exchange account and running a script on incoming emails. I'm sure what I want to do is possible... but where would I start?

    EDIT: Clarification. The main reasons for using this four-part system are logging (messages sent this way are logged and reported by the AS400 program) and the existing scheduler for redirecting pages (for example, the weekly on-call person = TOPGR(oncall) gets updated by the AS400 program). I'm also trying to remove duplicate work. If I can get this setup working, I can redirect pages from OTHER systems into this one, and then I don't have to update two (soon to be three) systems with current phone numbers, carriers, on-call schedules, etc. Systems #2 and #3 can just "email" [email protected].

    Read the article

  • Ubuntu Hardy: Testing for environment variables in udev rules doesn't seem to work

    - by Fred
    I have a Ubuntu 8.04 LTS (server edition) machine, and I need to write a udev rule for it to act upon plugging in a USB thumb drive. However, I need a different action depending on the filesystem of the drive. I know I can use the ID_FS_TYPE environment variable to check for the filesystem on the drive. Following instructions found here, I try a dummy udev rule as such:

        KERNEL!="sd[a-z][0-9]", GOTO="my_udev_rule_end"
        ACTION=="add", RUN+="/usr/bin/touch /tmp/test_udev_%E{ID_FS_TYPE}"
        ACTION=="add", ENV{ID_FS_TYPE}=="vfat", RUN+="/usr/bin/touch /tmp/test_udev_it_works"
        LABEL="my_udev_rule_end"

    However, when I plug in a thumb drive with a vfat filesystem (which should trigger both rules), I end up with a file called /tmp/test_udev_vfat, meaning the first rule was triggered successfully and that the ID_FS_TYPE environment variable is "vfat", but I don't have the other file - meaning that although I know the ID_FS_TYPE env variable is "vfat", I can't seem to check against it for a match.

    I tried googling this, but pretty much every result seems to assume ENV{ID_FS_TYPE}=="vfat" works. I also tested the exact same udev rule on Ubuntu 10.04 LTS server, and I have the same result. I'm probably missing something very simple, but I just don't get it. Does anyone see what is wrong with my udev rule that would prevent it from matching on ENV{ID_FS_TYPE}? Thanks.

    Read the article

  • RAID 1 and high load average

    - by melocoton
    I have a server with a high load average, and I think the problem is the RAID 1 array.

        cat /proc/mdstat
        Personalities : [raid1]
        md0 : active raid1 sdb1[1] sda1[0]
              256896 blocks [2/2] [UU]
        md3 : active raid1 sdb3[1] sda3[0]
              2562240 blocks [2/2] [UU]
        md4 : active raid1 sdb5[1] sda5[0]
              958566272 blocks [2/2] [UU]
        md1 : active raid1 sdb2[1] sda2[0]
              15366080 blocks [2/2] [UU]

        model name : Intel(R) Core(TM)2 Duo CPU E8400 @ 3.00GHz

        Linux 2.6.18-164.6.1.el5.centos.plus (local)   04/19/2010

        avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
                  17.37   0.01     6.02    26.17    0.00  50.43

        Device:   tps      Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
        sda       61.09    562.65      893.73      1557214   2473546
        sda1      0.01     0.27        0.02        751       42
        sda2      6.11     195.50      169.78      541075    469888
        sda3      0.01     0.23        0.00        641       0
        sda4      0.00     0.01        0.00        18        0
        sda5      54.96    366.54      723.94      1014449   2003616
        sdb       54.40    433.22      893.73      1199015   2473546
        sdb1      0.01     0.16        0.02        436       42
        sdb2      5.31     169.00      169.78      467729    469888
        sdb3      0.01     0.31        0.00        865       0
        sdb4      0.00     0.00        0.00        10        0
        sdb5      49.05    263.65      723.94      729695    2003616
        md1       29.96    364.39      166.68      1008498   461312
        md4       124.15   630.07      713.28      1743822   1974112
        md3       0.05     0.43        0.00        1192      0
        md0       0.04     0.32        0.00        872       10
        dm-0      7.96     83.29       23.02       230530    63720
        dm-1      3.67     51.81       2.73        143394    7560
        dm-2      7.63     67.76       27.35       187546    75696
        dm-3      8.20     134.60      14.02       372514    38792
        dm-4      5.90     10.66       39.35       29498     108912
        dm-5      17.39    24.52       121.79      67850     337080
        dm-6      27.19    229.60      139.89      635442    387168
        dm-7      0.14     1.07        0.28        2970      776
        dm-8      25.84    4.23        202.89      11698     561536
        dm-9      14.77    8.38        112.35      23202     310960
        dm-10     5.29     12.78       29.55       35376     81784
        dm-11     0.16     1.25        0.05        3450      128

    The server runs LVM on md4.

    Read the article

  • nginx + passenger + static websites = problems

    - by Eugene K
    I've got a Rails app that nginx serves through Passenger. I'd also like to serve some static content for a different domain name, but when I add another server block to my config, both websites become unavailable and return HTTP 204. What have I done wrong? What can I do to fix it?

    Here's the http block of my nginx.conf: https://gist.github.com/4243256

    Here's what I added:

        server {
            listen 80;
            server_name website2;
            root /var/www/website2;

            location / {
                index index.html;
            }
        }

    It's going to be a Rails app as well at some point in the future (though I'm not really sure about that; maybe I'm going to use a different back-end solution). Either way, I don't want anything dynamic eating away at the resources just yet. As of now, this website consists of nothing but index.html and stylesheet.css files in the root directory. What should I do? Thank you in advance. Sincerely yours, Eugene.

    Read the article

  • Snort/Barnyard2 Logging

    - by Eric
    I need some help with my Snort/Barnyard2 setup. My goal is to have Snort write unified2 logs for Barnyard2 and then have Barnyard2 send the data on to other locations. Here is my current setup:

        OS:                Scientific Linux 6
        Snort version:     2.9.2.3
        Barnyard2 version: 2.1.9

        Snort command:
        snort -c /etc/snort/snort.conf -i eth2 &

        Barnyard2 command:
        /usr/local/bin/barnyard2 -c /etc/snort/barnyard2.conf -d /var/log/snort -f snort.log -w /var/log/snort/barnyard.waldo &

        snort.conf:
        output unified2: filename snort.log, limit 128

        barnyard2.conf:
        output alert_syslog: host=127.0.0.1
        output database: log, mysql, user=snort dbname=snort password=password host=localhost

    With this setup, Barnyard2 is showing all of the correct information in the database, and I'm using BASE to view it in the web GUI. I was hoping to be able to send the full packet data to syslog with Barnyard2, but after reading around, it seems that is impossible. So I started modifying the snort.conf file, adding lines like "output alert_full: alert.full". This definitely gave me a lot more information, but still not the full packet data like I want.

    So my question is: is there any way I can use Barnyard2 to send the full packet data of alerts to a human-readable file? Since I can't send it directly to syslog, I can create another process to take the data from that file and ship it off to another server. If not, what flags and/or snort.conf configuration would you recommend to get the most data possible while still being able to handle quite a bit of traffic? In the end, these alerts will be shipped to a central server via an SSH tunnel. I'm trying to stay away from databases.
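
    For what it's worth, a sketch of two output lines that are commonly used for this sort of thing, assuming the stock Barnyard2 output plugins are compiled into your build (check your build's documentation for exact plugin names; the paths are placeholders). alert_full writes a human-readable alert record with packet headers, while log_tcpdump keeps the full payloads in pcap form - binary, but easy to ship over the SSH tunnel and read with tcpdump or Wireshark:

        # barnyard2.conf - illustrative additions
        output alert_full: /var/log/snort/alert.full
        output log_tcpdump: /var/log/snort/packets.pcap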

    Read the article

  • getting "No LoginModules configured" for JAAS login under WebSphere security domain

    - by user1739040
    I have a JAX-RPC web service running on WebSphere V7. It requires a UserNameToken for security. I have a custom login module (MyLoginModule) which extracts the username and password, and that module is defined as a JAAS application login in the WebSphere admin console. Using IBM RAD 8.0, I have bound the token consumer to the login module using the JAAS config name of the module. This all works fine and happy on my development server.

    Now I realize that, for deployment to another server, I am required to move the JAAS login from global security to a security domain. When I do that, it breaks my web service. I get this SOAP fault message:

        com.ibm.wsspi.wssecurity.SoapSecurityException: WSEC6520E: Construction of the login context failed. The exception is : javax.security.auth.login.LoginException: No LoginModules configured for MyLoginModule

    According to the IBM docs:

        The JAAS application logins, the JAAS system logins, and the JAAS J2C authentication data aliases can all be configured at the domain level. By default, all of the applications in the system have access to the JAAS logins configured at the global level. The security runtime first checks for the JAAS logins at the domain level. If it does not find them, it then checks for them in the global security configuration. Configure any of these JAAS logins at a domain only when you need to specify a login that is used exclusively by the applications in the security domain.

    So I am looking to make sure my application is in the domain, and I have tried everything I can think of. (I have assigned the domain to "all scopes", to the entire cell, etc.) No luck; I keep getting the same error response to my web service client. Any help or hints are appreciated.

    Read the article

  • VSFTPD does not allow upload with virtual users

    - by Mr. Squig
    I am attempting to set up vsftpd with virtual users on a server running Ubuntu 12.04. I have configured the server to allow virtual users to log in, but I am having trouble getting it to allow uploads. My vsftpd.conf is as follows:

        listen=YES
        anonymous_enable=NO
        local_enable=YES
        write_enable=YES
        local_umask=022
        anon_upload_enable=YES
        dirmessage_enable=YES
        use_localtime=YES
        xferlog_enable=YES
        connect_from_port_20=YES
        chroot_local_user=YES
        virtual_use_local_privs=YES
        guest_enable=YES
        guest_username=virtual
        user_sub_token=$USER
        local_root=/var/www/$USER
        hide_ids=YES
        secure_chroot_dir=/var/run/vsftpd/empty
        pam_service_name=vsftpd
        rsa_cert_file=/etc/ssl/private/vsftpd.pem

    /etc/pam.d/vsftpd contains:

        auth    required pam_pwdfile.so pwdfile /etc/vsftpd.passwd crypt=hash
        account required pam_permit.so crypt=hash

    I have two virtual users set up, one of which has the same name as a local user. They each have a directory in /var/www/ owned by 'virtual'. As I understand it, when a virtual user logs in this way they appear to the system as the user 'virtual'. With this configuration users can log on but cannot upload files. The error given in /var/log/vsftpd.log is:

        Tue Nov 20 19:49:00 2012 [pid 2] CONNECT: Client "96.233.116.53"
        Tue Nov 20 19:49:07 2012 [pid 1] [zac] OK LOGIN: Client "96.233.116.53"
        Tue Nov 20 19:49:11 2012 [pid 2] CONNECT: Client "96.233.116.53"
        Tue Nov 20 19:49:11 2012 [pid 1] [zac] OK LOGIN: Client "96.233.116.53"
        Tue Nov 20 19:49:11 2012 [pid 3] [zac] FAIL CHMOD: Client "96.233.116.53", "/test.ppm 644"

    I have tried changing the permissions of these directories in all sorts of ways, but nothing seems to work. I have a feeling that it is something simple related to permissions. Any ideas?

    Read the article

  • Sticky sessions not sticky on coldfusion cluster

    - by GreatSeaSpider
    We're trying to deploy a legacy ColdFusion site onto a new CF8 cluster. The cluster consists of three CF instances running under JRun 4 on a single Windows 2008 server. I've got the cluster set to not replicate sessions, and sticky sessions turned on. Each instance is set to use J2EE session variables. The application tag for the site has:

        sessionmanagement="Yes" setclientcookies="Yes" setdomaincookies="Yes"

    When each instance starts, no errors are reported in the instance log and they join the cluster without any issues, though the instances do log:

        16/10 08:31:25 info SessionReplicationService successfully joined a JINI lookup service (assigned JINI-ID .....)

    and

        16/10 08:31:25 info Clusterable service SessionReplicationService discovered a SessionReplicationService peer on a JRun server named "xxxx" on host xxxx

    which is interesting, since session replication is definitely off - is the SessionReplicationService responsible for sticky sessions as well?

    That's enough background. The problem is that the sticky sessions simply don't appear to work: each request is bounced to a different instance, and it seems as if the sessions are being lost on each instance anyway. As soon as the cluster is down to a single instance, the web app works exactly as expected and the sessions seem fine. Has anybody got any ideas for me? I've been trawling the web and I can't seem to find any answers.

    Read the article

  • Solaris 10 invalid ARP requests from 0.0.0.0?

    - by JWD
    The guys at the data center where I'm hosting a server running Solaris 10 are telling me that my server is making a lot of invalid ARP requests. This is an example of a portion of what was sent to me from the logs (with MAC addresses and IP addresses changed):

        xxxx:xxxx:xxxx/0.0.0.0/0000.0000.0000/[myipaddress]/[Datestamp])

    I don't see anything in the ARP tables (arp -a) or routing tables (netstat -r), and I don't see anything relating to 0.0.0.0 when snooping the ARP requests. The only place I see any reference to 0.0.0.0 is in the SCTP section of netstat -a:

        SCTP:
                Local Address                   Remote Address                  Swind  Send-Q Rwind  Recv-Q StrsI/O  State
        ------------------------------- ------------------------------- ------ ------ ------ ------ ------- -----------
        0.0.0.0                         0.0.0.0                              0      0 102400      0   32/32 CLOSED

    But I'm not really sure what that means, and it doesn't seem like I can disable SCTP. Does anyone have any idea what might be causing this and how to stop it? I think the switch I'm connected to doesn't like it and momentarily drops the connection. Is there any way to at least block these requests using ipfilter or something else?

    Read the article

  • authbind, privbind or iptables REDIRECT (port 80 to 8080)?

    - by chris_l
    Hi, I'd like to run GlassFish v3 as a non-privileged user on Linux (Debian), but make it available on port 80. I'm currently doing this with iptables:

        iptables -t nat -I PREROUTING -p tcp -d x.x.x.x --dport 80 -j REDIRECT --to-port 8080

    This works, but I wonder:

    - whether this has any significant performance impact compared to binding directly to port 80;
    - whether I could make a similar setup also work for HTTPS (or if that must run on 443);
    - whether there's a way to keep other users from binding to port 8080 (in case my server crashes) - maybe block that port permanently for other users somehow;
    - ...or whether I should use authbind/privbind instead?

    Problem: I couldn't make it work with authbind or privbind so far. For authbind, I edited asadmin's last line to:

        exec authbind --deep "$JAVA" -Djava.net.preferIPv4Stack=true -jar ...

    For privbind:

        exec privbind -u glassfish "$JAVA" -Djava.net.preferIPv4Stack=true -jar ...

    Only with these settings can I successfully perform a create-domain --domainport 80. This proves that authbind and privbind actually work (the authbind version of the script is called by the glassfish user; the privbind version is called by root, of course). However, in both cases I get the following exception when starting the domain (start-domain):

        [#|2010-03-20T13:25:21.925+0100|SEVERE|glassfishv3.0|javax.enterprise.system.core.com.sun.enterprise.v3.server|_ThreadID=11;_ThreadName=FelixStartLevel;|Shutting down v3 due to startup exception : Permission denied: 80=com.sun.enterprise.v3.services.impl.monitor.MonitorableSelectorHandler@1fc25e5|#]

    I haven't found a solution for that yet (after searching the web, it seems this isn't so easy). But maybe the iptables solution is good enough - what do you think? Thanks, Chris
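
    For the HTTPS part, a sketch of the analogous rules, assuming the HTTPS listener sits on GlassFish's default 8181 (adjust to your actual listener port). The extra OUTPUT rules cover connections made from the host itself, which the PREROUTING chain does not see:

        # HTTPS: forward external traffic on 443 to the unprivileged listener
        iptables -t nat -I PREROUTING -p tcp -d x.x.x.x --dport 443 -j REDIRECT --to-port 8181

        # Locally generated traffic bypasses PREROUTING; cover it with OUTPUT rules
        iptables -t nat -I OUTPUT -p tcp -d x.x.x.x --dport 80  -j REDIRECT --to-port 8080
        iptables -t nat -I OUTPUT -p tcp -d x.x.x.x --dport 443 -j REDIRECT --to-port 8181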

    Read the article

  • Help diagnosing Likewise Open Active Directory authentication problem

    - by purpletonic
    I have two servers which were, up until recently, authenticating against the company's Active Directory domain controller. I believe a recent change to the Active Directory administrator password caused the servers to stop authenticating against AD. I tried to add the servers back to the domain using the command:

        domainjoin-cli join example.com adusername

    This seemed to work without complaints, but when I try to log in via ssh with my domain account, I get an invalid password error. When I run the command:

        lw-enum-users

    it prints all of the domain users, and looking up my own account, I see that it is valid and my password hasn't expired. I also ran lw-get-status and received the following:

        LSA Server Status:
        Agent version: 5.0.0
        Uptime: 0 days 3 hours 35 minutes 46 seconds
        [Authentication provider: lsa-activedirectory-provider]
            Status: Online
            Mode: Un-provisioned
            Domain: example.com
            Forest: example.com
            Site: Default-First-Site-Name
            Online check interval: 300 seconds
            [Trusted Domains: 1]
            [Domain: EXAMPLE]
                DNS Domain: example.com
                Netbios name: EXAMPLE
                Forest name: example.com
                Trustee DNS name:
                Client site name: Default-First-Site-Name
                Domain SID: S-1-5-24-1081533780-4562211299-822531512
                Domain GUID: 057f0239-7715-4711-e64b-eb5eeed20e65
                Trust Flags: [0x001d]
                             [0x0001 - In forest]
                             [0x0004 - Tree root]
                             [0x0008 - Primary]
                             [0x0010 - Native]
                Trust type: Up Level
                Trust Attributes: [0x0000]
                Trust Direction: Primary Domain
                Trust Mode: In my forest Trust (MFT)
                Domain flags: [0x0001]
                              [0x0001 - Primary]
                [Domain Controller (DC) Information]
                    DC Name: dc1.example.com
                    DC Address: 10.11.0.103
                    DC Site: Default-First-Site-Name
                    DC Flags: [0x000003fd]
                    DC Is PDC: yes
                    DC is time server: yes
                    DC has writeable DS: yes
                    DC is Global Catalog: yes
                    DC is running KDC: yes
        [Authentication provider: lsa-local-provider]
            Status: Online
            Mode: Local system

    Anyone got any ideas what might be occurring? Thanks in advance!

    Read the article

  • So Close: How to get this SSH login working (.bashrc)

    - by This_Is_Fun
    Objective: SSH login (and eliminate the warning message) / run two commands / stay logged in.

    EDIT: Oops, I made a mistake (see below). This code does ~95% of what I wanted to do:

        # .bashrc
        # Run two commands and stay logged in to new server.
        alias gr='ssh -t -p 5xx4x [email protected] 2> /dev/null "cd /var; ls; /bin/bash -i"'

    Now, after a successful login, verifying the logged-in user gives:

        root pts/0 2011-01-30 22:09

    Trying to 'logout' gives:

        bash: logout: not login shell: use `exit'

    So I seem to have full root access without being in a login shell? (The "/bin/bash -i" was added to 'stay logged in', but doesn't work quite as expected.)

    FYI: the question is "How to get this SSH login working", and it is mostly solved; sorry I made a mess.

    Original question here:

        # .bashrc
        # Run two commands and stay logged in to new server.
        alias gr='ssh -t -p 5xx4x [email protected] "cd /var; ls; /bin/bash -i"'

        # (hack) Hide "map back to the address - POSSIBLE BREAK-IN ATTEMPT!" message.
        alias gr='ssh -p 5xx4x [email protected] 2> /dev/null'

    Both examples 'work' as shown. When I try to add the '2> /dev/null' to the first example, the whole thing breaks. I'm out of time trying to solve the warning message other ways, so: is it possible to combine both examples to make example #1 work without the warning message? Thank you.

    PS: if you also know a proper way to kill the login warning message, please do tell (the 'standard' "edit host file" advice isn't working for me).
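
    A minimal sketch of one way the two might be combined - the redirection is placed at the end of the alias so it applies to the whole local ssh invocation (the BREAK-IN warning is printed to ssh's stderr on the client side), and `exec /bin/bash -l` replaces `bash -i` so that `logout` behaves as in a login shell. Port and address are the placeholders from the question:

        # .bashrc - illustrative sketch only
        alias gr='ssh -t -p 5xx4x [email protected] "cd /var; ls; exec /bin/bash -l" 2> /dev/null'

    The trade-off is that this silences all of ssh's stderr output, not just the reverse-DNS warning, so genuine connection errors would be hidden too.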

    Read the article

  • WebDAV through Apache2 permissions/missing files

    - by Strifariz
    I have a WebDAV setup on Apache2 on a server running Debian 5.0 (Lenny), which I am accessing through a mapped network drive under Windows 7. The setup appears to run fine: I receive no permission errors when copying a file to the share the first time, but the file never shows up in the directory (it's invisible; doing an ls -lha on the directory as root on the server also shows no files). When attempting to copy the file once more, though, I am informed that the file already exists and asked if I wish to overwrite it. When I select "Yes", I receive a permission error saying I'm not able to write to the folder. My logs aren't reporting any access violations of any kind. What could be the problem? (See log excerpt below.)

        [17/Jan/2011:10:26:34 +0100] "PUT /1.png HTTP/1.1" 401 525 "-" "Microsoft-WebDAV-MiniRedir/6.1.7600"
        [17/Jan/2011:10:26:34 +0100] "PUT /1.png HTTP/1.1" 201 304 "-" "Microsoft-WebDAV-MiniRedir/6.1.7600"
        [17/Jan/2011:10:26:34 +0100] "LOCK /1.png HTTP/1.1" 401 525 "-" "Microsoft-WebDAV-MiniRedir/6.1.7600"
        [17/Jan/2011:10:26:34 +0100] "LOCK /1.png HTTP/1.1" 200 447 "-" "Microsoft-WebDAV-MiniRedir/6.1.7600"
        [17/Jan/2011:10:26:34 +0100] "PROPPATCH /1.png HTTP/1.1" 401 525 "-" "Microsoft-WebDAV-MiniRedir/6.1.7600"
        [17/Jan/2011:10:26:34 +0100] "PROPPATCH /1.png HTTP/1.1" 207 389 "-" "Microsoft-WebDAV-MiniRedir/6.1.7600"
        [17/Jan/2011:10:26:34 +0100] "HEAD /1.png HTTP/1.1" 401 - "-" "Microsoft-WebDAV-MiniRedir/6.1.7600"
        [17/Jan/2011:10:26:34 +0100] "HEAD /1.png HTTP/1.1" 200 - "-" "Microsoft-WebDAV-MiniRedir/6.1.7600"
        [17/Jan/2011:10:26:34 +0100] "PUT /1.png HTTP/1.1" 401 525 "-" "Microsoft-WebDAV-MiniRedir/6.1.7600"
        [17/Jan/2011:10:26:35 +0100] "PUT /1.png HTTP/1.1" 204 - "-" "Microsoft-WebDAV-MiniRedir/6.1.7600"
        [17/Jan/2011:10:26:35 +0100] "PROPPATCH /1.png HTTP/1.1" 401 525 "-" "Microsoft-WebDAV-MiniRedir/6.1.7600"
        [17/Jan/2011:10:26:35 +0100] "PROPPATCH /1.png HTTP/1.1" 207 389 "-" "Microsoft-WebDAV-MiniRedir/6.1.7600"
        [17/Jan/2011:10:26:35 +0100] "UNLOCK /1.png HTTP/1.1" 401 525 "-" "Microsoft-WebDAV-MiniRedir/6.1.7600"
        [17/Jan/2011:10:26:35 +0100] "UNLOCK /1.png HTTP/1.1" 204 - "-" "Microsoft-WebDAV-MiniRedir/6.1.7600"
        [17/Jan/2011:10:26:38 +0100] "PROPFIND / HTTP/1.1" 401 525 "-" "Microsoft-WebDAV-MiniRedir/6.1.7600"
        [17/Jan/2011:10:26:38 +0100] "PROPFIND / HTTP/1.1" 207 1634 "-" "Microsoft-WebDAV-MiniRedir/6.1.7600"

    Read the article

  • Configuration for a two machine ESXi cluster using VSA to present local storage to VMs

    - by MDMarra
    I'm designing a little vSphere 5 cluster for one of our remote sites. We have some IBM x3650s that have 6x 300GB 10K RPM drives in them, along with dual quad core CPUs and 24GB RAM. Because we use HP P4500 G2s at our primary site, we have licenses available for HP P4000 VSAs. I thought that this would be the perfect opportunity to use them. Below is a basic drawing of what I want to accomplish:

    I want to run a P4000 VSA on each server and run them in a Network RAID-10 (LeftHand speak for network mirroring; think of it as RAID 1 across nodes, or as an active/active storage cluster). I will then present this storage to guests that will run on this mini-cluster. It will be managed by a vCenter Server on our main site. All connections will be GbE, with two dedicated to storage. Management and data will share a pair of connections, since I don't expect there to be high load. These servers are just there to provide directory services, DHCP, printing, etc.

    Does anyone see anything potentially wrong with this approach? Is this the best way to do this without adding additional dedicated storage heads? Are there any pitfalls in this design, besides the lack of dedicated data/management interfaces?

    Read the article

  • Endian Destination NAT

    - by Ben Swinburne
    I have installed Endian Community Firewall 2.3 and am clearly misunderstanding or doing something wrong with it. I'm trying to create some destination NAT rules to allow incoming connections to various services within the network.

        Router - RED I/F   - x.x.x.x
        Router - GREEN I/F - 192.168.11.253
        ECF    - RED I/F   - 192.168.11.254/24
        ECF    - GREEN I/F - 192.168.12.254/24
        Target server      - 192.168.12.1

    Please ignore the haphazard choice of subnets and addresses - I'm trying to quickly plop Endian into an existing network before a complete rework in 6-12 months, so this is what it is for now. Everything works except destination NAT: outgoing connections are fine, the routes between the two subnets are OK, etc.

    I want to create various incoming NATs, but let's take, for the sake of argument, SMTP port 25 from the Internet to the target server 192.168.12.1. I've tried almost every combination of options in the Destination NAT section to achieve this and am clearly doing something wrong. I suspect my confusion is somewhere in the Access From and/or Target section; the rest seems OK:

        Filter Policy     = Allow
        Service           = SMTP
        Protocol          = TCP
        Port              = 25
        Translate to type = IP
        DNAT Policy       = NAT
        Insert IP         = 192.168.12.1
        Port Range        = 25
        Enabled           = Checked
        Position          = First

    I can't work out what I'm doing wrong - or am I doing it right and it's just not working!? Any help would be greatly appreciated.

    Read the article

  • Apache & SVN on Ubuntu - Post-commit hook fails silently, pre-commit hook “Permission Denied”

    - by 113169587962668775787
    I've been struggling for the past couple of days to get post-commit email notifications working on my SVN server (running over HTTP with Apache2 on Ubuntu 9.10). SVN commits work fine, but for some reason the hooks are not being executed properly.

    Here are the configuration settings: users access the repo via HTTP with the Apache dav_svn module (I created users/passwords via htpasswd in a dav_svn.passwd file). dav_svn.conf:

        <Location /svn/repos>
          DAV svn
          SVNPath /home/svn/repos
          AuthType Basic
          AuthName "Subversion Repository"
          AuthUserFile /etc/apache2/dav_svn.passwd
          Require valid-user
        </Location>

    I created a post-commit hook file that writes a simple message to a file in the repository root, /home/svn/repos/hooks/post-commit:

        #!/bin/sh
        REPOS="$1"
        REV="$2"
        /bin/echo 'worked' > ${REPOS}/postcommit.log

    I set the entire repository to be owned by www-data (the Apache user) and assigned 755 permissions to the post-commit script. When I test the post-commit script as the www-data user in an empty environment, it works:

        sudo -u www-data env - /home/svn/repos/hooks/post-commit /home/svn/repos 7

    But when I commit from a client machine, the commit succeeds and the post-commit script does not seem to be executed. I also tried running a simple script for the pre-commit hook, and I get an error, even with an empty pre-commit script:

        "Commit failed (details follow): Can't create null stdout for hook '/home/svn/repos/hooks/pre-commit': Permission denied"

    I did a few searches on Google for this error, and I presume this is an issue with the Apache user (www-data) not having adequate permissions, specifically to open /dev/null. I also read that the reason post-commit fails silently is that it doesn't report via stdout. Anyway, I've also tried giving the Apache user (www-data) ownership of the entire repository and edited the Apache virtual host to allow operations on the server root, and I'm still getting permission denied. /etc/apache2/sites-available/primarydomain.conf:

        <Directory />
          Options FollowSymLinks
          AllowOverride None
          Order allow,deny
          Allow from all
        </Directory>

    Any ideas/suggestions would be greatly appreciated! Thanks
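
    "Can't create null stdout" points at opening /dev/null rather than at the repository itself, so one quick check worth making is whether /dev/null still has its normal world-writable character-device permissions - a short sketch (the expected mode shown is the standard one; the chmod is only needed if something has clobbered it):

        ls -l /dev/null
        # expected: crw-rw-rw- 1 root root 1, 3 ... /dev/null

        # if the mode is wrong, restore world read/write
        chmod 666 /dev/null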

    Read the article

  • PAM Winbind Expired Password

    - by kernelpanic
    We've got Winbind/Kerberos set up on RHEL for AD authentication. It is working fine, however I noticed that when a password has expired, we get a warning but shell access is still granted. What's the proper way of handling this? Can we tell PAM to close the session once it sees the password has expired? Example:

        login as: ad-user
        [email protected]'s password:
        Warning: password has expired.
        [ad-user@server ~]$

    Contents of /etc/pam.d/system-auth:

        auth        required      pam_env.so
        auth        sufficient    pam_unix.so nullok try_first_pass
        auth        requisite     pam_succeed_if.so uid >= 500 quiet
        auth        sufficient    pam_krb5.so use_first_pass
        auth        sufficient    pam_winbind.so use_first_pass
        auth        required      pam_deny.so

        account     [default=2 success=ignore] pam_succeed_if.so quiet uid >= 10000000
        account     sufficient    pam_succeed_if.so user ingroup AD_Admins debug
        account     requisite     pam_succeed_if.so user ingroup AD_Developers debug
        account     required      pam_access.so
        account     required      pam_unix.so broken_shadow
        account     sufficient    pam_localuser.so
        account     sufficient    pam_succeed_if.so uid < 500 quiet
        account     [default=bad success=ok user_unknown=ignore] pam_krb5.so
        account     [default=bad success=ok user_unknown=ignore] pam_winbind.so
        account     required      pam_permit.so

        password    requisite     pam_cracklib.so try_first_pass retry=3
        password    sufficient    pam_unix.so md5 shadow nullok try_first_pass use_authtok
        password    sufficient    pam_krb5.so use_authtok
        password    sufficient    pam_winbind.so use_authtok
        password    required      pam_deny.so

        session     [default=2 success=ignore] pam_succeed_if.so quiet uid >= 10000000
        session     sufficient    pam_succeed_if.so user ingroup AD_Admins debug
        session     requisite     pam_succeed_if.so user ingroup AD_Developers debug
        session     optional      pam_mkhomedir.so umask=0077 skel=/etc/skel
        session     optional      pam_keyinit.so revoke
        session     required      pam_limits.so
        session     optional      pam_mkhomedir.so
        session     [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
        session     required      pam_unix.so
        session     optional      pam_krb5.so

    Read the article

  • Apache HTTPD - Segmentation fault when loading mod_jk module

    - by hansengel
    I just set up mod_jk with my Apache httpd 2.0.52 installation, but now when I try to start Apache, it segfaults. I've checked that I am using the mod_jk compiled for 2.0.x - built against the same version I have, in fact. I've also verified that the path I'm giving to LoadModule is correct, and that the permissions and ownership of the file are the same as the rest of the modules'. When I remove the LoadModule line for mod_jk from my httpd.conf, there is no segmentation fault. Nothing shows up in Apache's error logs. I have tried restarting the server with this module using both service httpd restart and httpd.

    These are the last few lines returned by strace httpd -X:

        gettimeofday({1292100295, 434487}, NULL) = 0
        socket(PF_INET6, SOCK_STREAM, IPPROTO_IP) = -1 EAFNOSUPPORT (Address family not supported by protocol)
        socket(PF_NETLINK, SOCK_RAW, 0) = 3
        bind(3, {sa_family=AF_NETLINK, pid=0, groups=00000000}, 12) = 0
        getsockname(3, {sa_family=AF_NETLINK, pid=22378, groups=00000000}, [12]) = 0
        time(NULL) = 1292100295
        sendto(3, "\24\0\0\0\26\0\1\3\307\342\3M\0\0\0\0\0\305\333\267", 20, 0, {sa_family=AF_NETLINK, pid=0, groups=00000000}, 12) = 20
        recvmsg(3, {msg_name(12)={sa_family=AF_NETLINK, pid=0, groups=00000000}, msg_iov(1)=[{"<\0\0\0\24\0\2\0\307\342\3MjW\0\0\2\10\200\376\1\0\0\0"..., 4096}], msg_controllen=0, msg_flags=0}, 0) = 664
        recvmsg(3, {msg_name(12)={sa_family=AF_NETLINK, pid=0, groups=00000000}, msg_iov(1)=[{"\24\0\0\0\3\0\2\0\307\342\3MjW\0\0\0\0\0\0\1\0\0\0\10\0"..., 4096}], msg_controllen=0, msg_flags=0}, 0) = 20
        close(3) = 0
        socket(PF_INET, SOCK_STREAM, IPPROTO_IP) = 3
        --- SIGSEGV (Segmentation fault) @ 0 (0) ---
        +++ killed by SIGSEGV +++
        Process 22378 detached

    Has anyone had a similar problem using Apache 2.0.52 with mod_jk? I might try downloading and building the source for the Apache server and mod_jk myself if there isn't a known fix for this.

    Read the article

  • Virtualbox HTTP load testing, host CPU overload issues

    - by aschuler
    I'm running HTTP load-testing benchmarks (using Apache Bench and Siege) against a small Java EE 1.7.0 / Tomcat 7.0.26 application running on Debian Squeeze 6.0.4 x64, virtualized with VirtualBox 4.1.8. The host machine is Ubuntu 11.10 x64. I've modified these parameters in the Tomcat server.xml:

        <Connector port="8080" protocol="HTTP/1.1"
                   connectionTimeout="200000"
                   redirectPort="8443"
                   acceptCount="2000"
                   maxThreads="150"
                   minSpareThreads="50" />

    The request handled by the application takes around 300 ms. The app runs well until a certain number of concurrent connections, for example:

        ab -n 500 -c 150 http://xx.xx.xx.xx:8080/myapp/
        ab -n 1000 -c 50 http://xx.xx.xx.xx:8080/myapp/
        siege -b -c 100 -r 20 http://xx.xx.xx.xx:8080/myapp/

    At that point a lot of socket connection timeouts occur, and this completely overloads the host processor (while the CPU load inside the VM stays normal). Running htop on the host, I can see that the VirtualBox process sits at around 300% CPU and never comes down, even after the load test is finished. (I've allocated 4 processors to the VM; if I allocate only one processor, CPU load stays under 100%.) Restarting Tomcat doesn't help; I'm forced to restart the whole VM. I've tried launching the same ab/siege commands locally on the VM and everything goes well.

    I first thought it was related to a Linux network limit, as explained here: "Running some benchmarks using ab, and tomcat starts to really slow down". So I've modified these TCP parameters:

        echo 15 > /proc/sys/net/ipv4/tcp_fin_timeout
        echo 30 > /proc/sys/net/ipv4/tcp_keepalive_intvl
        echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle
        echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse

    It seems a bit better, but the host CPU still gets overloaded and socket connections still time out once the number of concurrent connections gets high enough. I'm wondering whether this is related to how VirtualBox handles external concurrent connections.
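
    As a side note, if those /proc tweaks do turn out to help, a sketch of the equivalent persistent form - the same standard sysctl keys, written to /etc/sysctl.conf so they survive a reboot:

        # /etc/sysctl.conf - equivalent of the echo commands above
        net.ipv4.tcp_fin_timeout = 15
        net.ipv4.tcp_keepalive_intvl = 30
        net.ipv4.tcp_tw_recycle = 1
        net.ipv4.tcp_tw_reuse = 1

        # apply without rebooting:
        # sysctl -p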

    Read the article
