Search Results

Search found 10169 results on 407 pages for 'port'.

Page 326/407 | < Previous Page | 322 323 324 325 326 327 328 329 330 331 332 333  | Next Page >

  • Installing Mysql Ruby gem on 64-bit CentOS

    - by Jacek
    Hi, I have a problem installing the mysql Ruby gem on a 64-bit CentOS machine.

        [jacekb@vitaidealn ~]$ uname -a
        Linux vitaidealn.local 2.6.18-92.el5 #1 SMP Tue Jun 10 18:51:06 EDT 2008 x86_64 x86_64 x86_64 GNU/Linux

    The mysql and mysql-devel packages are installed. mysql_config reports the following paths:

        Usage: /usr/lib64/mysql/mysql_config [OPTIONS]
        Options:
          --cflags         [-I/usr/include/mysql -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -fno-strict-aliasing -fwrapv]
          --include        [-I/usr/include/mysql]
          --libs           [-L/usr/lib64/mysql -lmysqlclient -lz -lcrypt -lnsl -lm -L/usr/lib64 -lssl -lcrypto]
          --libs_r         [-L/usr/lib64/mysql -lmysqlclient_r -lz -lpthread -lcrypt -lnsl -lm -lpthread -L/usr/lib64 -lssl -lcrypto]
          --socket         [/var/lib/mysql/mysql.sock]
          --port           [3306]
          --version        [5.0.45]
          --libmysqld-libs [-L/usr/lib64/mysql -lmysqld -lz -lpthread -lcrypt -lnsl -lm -lpthread -lrt -L/usr/lib64 -lssl -lcrypto]

    Trying to install:

        [jacekb@vitaidealn ~]$ gem install mysql -- --with-mysql-include=/usr/include/mysql --with-mysql-libs=/usr/lib64/mysql
        ...
        ERROR:  Error installing mysql:
        ERROR:  Failed to build gem native extension.
        /usr/bin/ruby extconf.rb --with-mysql-include=/usr/include/mysql --with-mysql-libs=/usr/lib64/mysql
        checking for mysql_query() in -lmysqlclient... no
        checking for main() in -lm... no
        checking for mysql_query() in -lmysqlclient... no
        checking for main() in -lz... no
        checking for mysql_query() in -lmysqlclient... no
        checking for main() in -lsocket... no
        checking for mysql_query() in -lmysqlclient... no
        checking for main() in -lnsl... no
        checking for mysql_query() in -lmysqlclient... no
        *** extconf.rb failed ***
        Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options.

    I would appreciate any help. Thanks for reading :).
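
    A minimal sketch of a likely next step, assuming the gem's extconf accepts mysql_config directly (the paths below are taken from the output above; package names are the usual CentOS ones):

        # Point the build at mysql_config so it picks up the 64-bit lib directory itself:
        gem install mysql -- --with-mysql-config=/usr/lib64/mysql/mysql_config

        # If linking still fails, make sure the toolchain and Ruby headers are present:
        yum install gcc ruby-devel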

    Read the article

  • Issues with sustained traffic with PFSense

    - by Farseeker
    Last week we had to replace our PFSense firewall because it had a catastrophic hardware failure. All but one of the NICs were taken out of the old server and put into the new one. The one NIC that was not moved was the LAN NIC, as this is on-board. The other NICs are all WAN connections and they must all be present (i.e. I can't disable one just for the sake of testing). After re-installing PFSense and restoring our backup of the configuration, everything came back online just fine, however on the new hardware any download that takes longer than about 10 seconds just times out in the middle. Example 1: downloading from Microsoft.com goes at about 900k/sec and times out after about 10 seconds (thus, just under 10 MB of content). Example 2: downloading from cnet.com goes at about 300k/sec and times out after about 10 seconds (thus, about 3 MB of content). By "times out" I mean that the download just stops, and you have to pause/resume to get the next part, rinse and repeat until the download is complete. However it's not consistent: sometimes it's 10 seconds, sometimes it's 4 seconds, and sometimes you can't even load a heavy HTML page because the page never finishes. I assume this is most likely because PFSense does not like the onboard NIC, as this is the primary difference between the two servers. It's recognised as NFE0, and there's no room in the server for any more NICs, and I don't have any dual-port NICs handy to experiment with a different LAN connection. I've never had to troubleshoot this sort of issue before. Can anyone give me some pointers about where to start? Linux is not my forte so please be kind!
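
    One hedged, low-risk test, assuming the nfe(4) onboard NIC really is the difference: pfSense is FreeBSD underneath, and flaky onboard NICs are often made worse by hardware offloading. Disabling checksum/TSO/LRO offload on the LAN interface (there are checkboxes for this under System > Advanced in recent pfSense releases, or temporarily from a shell) is a quick way to rule it in or out:

        # Temporary test from the pfSense shell; the interface name nfe0 is taken from the
        # question, and the exact set of supported flags depends on the driver:
        ifconfig nfe0 -txcsum -rxcsum -tso -lro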

    Read the article

  • Solaris Fibre Channel target - Configure QLogic QLA2340

    - by growse
    I'm currently trying to set up a small storage system as a fibre channel target. This is for testing, so I'm currently using Solaris (Nexenta) and a QLogic QLA2340 HBA. For some reason the qlc and qlt drivers don't support the QLA2340, so I'm using the qla2300 driver from QLogic's website. I've also got the scli utility installed for configuration. The HBA is detected by the system. That said, it's not clear how I get from this point to a point where I have a ZFS volume being exposed as an FC target. I was originally following this guide (http://www.youtube.com/watch?v=yzEBd3l7Qn4) but it seems that without the qlc/qlt drivers, Sun's configuration tools won't work. Does that imply that COMSTAR also won't work? What's the best way to expose an FC target with this setup? Most of the options I'm seeing in scli complain that the port state is LinkDown (it is, I've not plugged anything in yet). Do I have to have my FC client plugged in and working before I can configure the target? Apologies for the slight vagueness of the question, but I'm not overly familiar with the terminology.
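
    For reference, the usual COMSTAR flow for exposing a ZFS volume over FC looks like the sketch below. It assumes the HBA can be bound to the qlt target-mode driver, which is exactly what is in doubt here, so treat it as the end goal rather than a guaranteed recipe (pool and volume names are examples):

        # Create a zvol and register it as a SCSI logical unit:
        zfs create -V 20g tank/fctest
        sbdadm create-lu /dev/zvol/rdsk/tank/fctest
        stmfadm list-lu                  # note the GUID of the new LU
        stmfadm add-view <GUID>          # expose it to all initiators (no host/target groups)
        stmfadm list-target -v           # FC targets only appear if a target-mode (qlt) port exists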

    Read the article

  • Strange performance differences in read/write from/to USB flash drive

    - by Mario De Schaepmeester
    When copying files from my 8GB USB 2.0 flash drive with Windows 7 to a traditional hard drive, the average speed is between 25 and 30 MB/s. When doing the reverse, copying to the USB drive, the speed averages 5 MB/s. I have tested this with about 4.5GB of files, a mixture of smaller and larger ones. The observations were the same with both FAT32 and exFAT file systems on the USB drive, and NTFS on the internal hard disk. I don't think I can be mistaken in saying that flash memory has much higher performance than a spinning hard drive in terms of both reading and writing. For both memory types, reading should be faster than writing too. Now I wonder: how can it be that copying files from a fast-read memory to a faster-write memory is actually slower than copying files from a fast-read memory to a slow-write memory? I think the files are stored in RAM before being copied over, and there's caching as well, but I don't see how even that could tip the balance. If anything, that should work in the advantage of writing to the USB drive, since the cache is "closer" to the SATA system than to the USB port and will receive data from the internal SATA HDD faster. Perhaps my way of thinking is all wrong, or it just depends on the manufacturer of the USB pen drive. But I am curious.

    Read the article

  • LogMeIn Hamachi for Linux

    - by tlunter
    So far most of my work using LogMeIn Hamachi has been from either a Mac OS X or Windows system to a Windows or Linux computer. Recently I purchased a mini computer and have been running Ubuntu Server on it as my little server. I knew LogMeIn had a Linux client that is command line only, but I often do all my work via the command line anyway, so that wasn't an issue. I added my user to the correct local file so that I could run the hamachi daemon without sudo, and was able to connect to LogMeIn's service. I decided to set up my Linux server as a git server as well, and set it up correctly. The thing is, the server is behind my school's firewall and I need to use Hamachi to get around that. Since most of the time I was using either Mac or Windows, I never had an issue SSHing onto any of my computers, since LogMeIn is fully featured for those OSes. From Linux (Arch), though, it seems like the client cannot correctly route to the LogMeIn IPs. I know from Windows I can connect to both of the Linux computers. From Linux (Arch), though, I can't connect to my Mac, Windows, or Linux server; it just keeps dropping the connection. I was wondering if there is some configuration I need to make for this to work. I assume it will be some static configuration, since the problem seems to be that the computer does not understand that 5.*.*.* actually refers to another IP:port. Has anyone had any experience getting this to work?
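
    A hedged first diagnostic, assuming the Linux client brings up the usual ham0 tunnel interface: check whether the 5.0.0.0/8 Hamachi network is actually present and routed through that interface, since a missing tunnel route would explain connections that immediately drop:

        hamachi list            # peers and their 5.x.x.x addresses as the client sees them
        ip addr show ham0       # does the tunnel interface exist and carry a 5.x.x.x address?
        ip route | grep '^5\.'  # is 5.0.0.0/8 (or the individual peers) routed via ham0?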

    Read the article

  • OSX: Mimic Ubuntu IP Masquerading via iptables with ipfw

    - by Dogbert
    Good day, I am attempting to replicate a setup I have between a router and an Ubuntu PC, and get the same setup working on my MacBook (10.6, Snow Leopard). First, I have a router that has a USB port. When I plug it into my Ubuntu PC, it creates an RNDIS connection, allowing me to connect to the router over the USB cable via an IP connection. When I plug it in via USB, it gets assigned an IP address of 172.16.84.1, and a new adapter appears when I type ifconfig. I can then SSH into the device (ssh admin@172.16.84.1). When I log in to the device, I flush the routes, then create the default route:

        admin@localhost> route -f
        admin@localhost> route add default 172.16.84.2

    Now, on my Ubuntu machine, I use iptables to enable IP masquerading:

        root@Valhalla> sudo iptables -t nat -A POSTROUTING -s 172.16.84.2 -j MASQUERADE

    Once this is all done, the router has internet access over the USB connection to my PC. I am now trying to replicate this exact setup on my MacBook (Snow Leopard), but iptables does not exist for OS X; not even a MacPorts version exists. I have scoured other questions on StackOverflow that cover the usage of the ipfw command, which apparently works as a drop-in replacement for iptables. However, the syntax is significantly different, and I'm pretty much lost. Does anyone with ipfw experience have suggestions on how I could accomplish this and create a NAT connection via IP masquerading, like I can with my Ubuntu PC? Thank you for your assistance.
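
    A minimal sketch of the usual Snow Leopard equivalent, assuming en0 is the interface with the MacBook's internet connection (adjust to en1 for Wi-Fi); the natd/ipfw pair together plays the role of the iptables MASQUERADE rule:

        # Enable forwarding, start the userland NAT daemon on the outbound interface,
        # and divert traffic crossing that interface through it:
        sudo sysctl -w net.inet.ip.forwarding=1
        sudo natd -interface en0
        sudo ipfw add 100 divert natd all from any to any via en0

    Note that ipfw is not a literal drop-in for iptables here: ipfw only filters/diverts packets, and the actual masquerading is done by natd.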

    Read the article

  • Apple Service Diagnostic application on USB key?

    - by Matt 'Trouble' Esse
    I found the following in a text file, and I would like to use the Apple Service Diagnostic application from a bootable USB key, but I cannot find where to download it or how to set it up. Also, is this free software or does it require a separate licence? It sounds like it would be a useful tool for diagnosing Mac problems.

        The Apple Service Diagnostic application is designed to run both EFI and Mac OS X tests from an external USB hard drive. Apple Service Diagnostic (EFI) runs low-level tests of the hardware directly and does not require Mac OS X, while Apple Service Diagnostic (OS) uses Mac OS X to run tests.

        Booting and Using the Apple Service Diagnostic Application
        - Before using Apple Service Diagnostic, disconnect any Ethernet network, USB, and audio cables.
        - With the USB hard drive containing ASD 3S123 plugged into a USB port, restart the computer and hold down the option key as the computer boots up into the Startup Manager. To run ASD (EFI), select the "ASD EFI 3S123" drive icon and press return or select it with a mouse click. To run ASD (OS), select the "ASD OS 3S123" drive icon and press return or select it with a mouse click. ASD (EFI) will load in 20-30 seconds; ASD (OS) will load in 2-3 minutes.
        - After running ASD (OS) or ASD (EFI), press the Restart button to restart the computer back into the normal startup volume, or hold down the option key to get back to the Startup Manager.

        ASD is no longer delivered as an image to be restored onto a DVD. ASD 3S117 and newer versions require installation onto an external USB hard drive. For more information, please refer to the document "Installing ASD on a USB hard drive".

    Read the article

  • Apache2 with lighttpd as proxy

    - by andrzejp
    Hi, I am using apache2 as my web server, and I would like to use lighttpd alongside it as a proxy for static content. Unfortunately I cannot get lighttpd and apache2 set up correctly together. (OS: Debian) The important parts of lighttpd.conf:

        server.modules = (
            "mod_access",
            "mod_alias",
            "mod_accesslog",
            "mod_proxy",
            "mod_status",
        )
        server.document-root = "/www/"
        server.port          = 82
        server.bind          = "localhost"
        $HTTP["remoteip"] =~ "127.0.0.1" {
            alias.url += (
                "/doc/"    => "/usr/share/doc/",
                "/images/" => "/usr/share/images/"
            )
            $HTTP["url"] =~ "^/doc/|^/images/" {
                dir-listing.activate = "enable"
            }
        }

    I would like to use lighttpd for only one site, set up as a virtual directory on apache2. The configuration of this virtual host:

        ProxyRequests Off
        ProxyPreserveHost On
        ProxyPass /images http://0.0.0.0:82/
        ProxyPass /imagehosting http://0.0.0.0:82/
        ProxyPass /pictures http://0.0.0.0:82/
        ProxyPassReverse / http://0.0.0.0:82/
        ServerName MY_VALUES
        ServerAlias www.MY_VALUES
        UseCanonicalName Off
        DocumentRoot /www/MYAPP/forum
        <Directory "/www/MYAPP/forum">
            DirectoryIndex index.htm index.php
            AllowOverride None
        ...

    As you can see (or not ;)), my site physically lives at /www/MYAPP/forum, and I would like lighttpd to handle these folders:

        /www/MYAPP/forum/images
        /www/MYAPP/forum/imagehosting
        /www/MYAPP/forum/pictures

    and leave the rest (the PHP scripts) to apache. After starting lighttpd and apache2 the site works, but no images are served from those locations. What is wrong?
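
    A hedged guess at the mismatch: the ProxyPass lines strip the path (every request is sent to lighttpd's "/"), and lighttpd's document-root is /www/ rather than the forum directory, so the files are never found. A sketch of a more consistent pairing, assuming lighttpd should serve the files straight from the forum tree:

        # lighttpd.conf
        server.document-root = "/www/MYAPP/forum/"

        # Apache virtual host: keep the path when proxying, and talk to 127.0.0.1 rather than 0.0.0.0
        ProxyPass /images       http://127.0.0.1:82/images
        ProxyPass /imagehosting http://127.0.0.1:82/imagehosting
        ProxyPass /pictures     http://127.0.0.1:82/pictures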

    Read the article

  • Make Nginx fail when SSL certificate not present, instead of hopping to only available certificate

    - by Oli
    I've got a bunch of websites on a server, all hosted through nginx. One site has a certificate, the others do not. Here's an example of two sites, using (fairly accurate) representations of the real configuration:

        server {
            listen 80;
            server_name ssl.example.com;
            return 301 https://ssl.example.com$request_uri;
        }
        server {
            listen 443 ssl;
            server_name ssl.example.com;
        }
        server {
            listen 80;
            server_name nossl.example.com;
        }

    SSL works great on ssl.example.com. If I visit http://nossl.example.com, that works great too, but if I try to visit https://nossl.example.com (note the SSL), I get ugly warnings about the certificate being for ssl.example.com. By the sounds of it, because ssl.example.com is the only site listening on port 443, all requests on that port are being sent to it, regardless of domain name. Is there anything I can do to make sure an nginx server directive only responds to domains it's responsible for?
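
    One common approach, sketched below: on nginx versions of this era there is no way to refuse the TLS handshake cleanly without a certificate, so the usual workaround is a catch-all default_server on 443 with a dummy self-signed certificate that closes the connection (the certificate paths are placeholders):

        # Catch-all for HTTPS requests to names that have no real certificate:
        server {
            listen 443 ssl default_server;
            server_name _;
            ssl_certificate     /etc/nginx/ssl/dummy-self-signed.crt;
            ssl_certificate_key /etc/nginx/ssl/dummy-self-signed.key;
            return 444;   # close the connection without sending a response
        }

    Browsers will still warn about the dummy certificate, but requests for nossl.example.com no longer land in the ssl.example.com block. (Much newer nginx releases add ssl_reject_handshake for exactly this case.)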

    Read the article

  • How do I get a subdomain on Xampp Apache @ localhost?

    - by jasondavis
    UPDATE: I got it working now, I just had to change to ... (the port number is important here).

    I modified my Windows hosts file at C:\Windows\System32\drivers\etc and added this to the end of it:

        127.0.0.1 images.localhost
        127.0.0.1 www.friendproject.com
        127.0.0.1 friendproject.com

    Then I modified my httpd-vhosts.conf file for Apache under XAMPP at C:\webserver\apache\conf\extra. Under the part where it shows examples for adding a virtual host, I added this code:

        NameVirtualHost *:80
        <VirtualHost *:80>
            DocumentRoot /htdocs/images/
            ServerName images.localhost
        </VirtualHost>
        <VirtualHost *:80>
            DocumentRoot /htdocs/
            ServerName friendproject.com/
        </VirtualHost>
        <VirtualHost *:80>
            DocumentRoot /htdocs/
            ServerName www.friendproject.com/
        </VirtualHost>

    Now the problem is that when I go to any of the newly added domains in the browser, I get the error below, and even worse, I now get this error even when going to http://localhost/, which worked fine before doing this. I realize I can change everything back, but I really need to at least get http://images.localhost to work. What do I do?

        Access forbidden!
        You don't have permission to access the requested directory. There is either no index document or the directory is read-protected.
        If you think this is a server error, please contact the webmaster.
        Error 403
        localhost
        07/25/09 21:20:14
        Apache/2.2.11 (Win32) DAV/2 mod_ssl/2.2.11 OpenSSL/0.9.8i PHP/5.2.9
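
    A hedged sketch of the usual fix for this 403, assuming the XAMPP install lives at C:\webserver: DocumentRoot in a vhost needs the full filesystem path, and Apache 2.2 must be told it may serve that directory:

        <VirtualHost *:80>
            DocumentRoot "C:/webserver/htdocs/images"
            ServerName images.localhost
            <Directory "C:/webserver/htdocs/images">
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    It is also worth adding a vhost for plain "localhost" pointing at the original htdocs: once NameVirtualHost *:80 is in effect, the first defined vhost answers every unmatched name, which is why http://localhost/ broke too.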

    Read the article

  • OpenVPN: ERROR: could not read Auth username from stdin

    - by user56231
    I managed to set up OpenVPN, but now I want to integrate user/pass authentication. Even though I haven't added auth-nocache to the server config, whenever I try to connect the client side reports:

        ERROR: could not read Auth username from stdin

    My server.conf contains basic stuff; everything works up until I try to implement this form of authentication.

        mode server
        dev tun
        proto tcp
        port 1194
        keepalive 10 120
        plugin /usr/lib/openvpn/openvpn-auth-pam.so login
        client-cert-not-required
        username-as-common-name
        auth-user-pass-verify /etc/openvpn/auth.pl via-env
        ca /etc/openvpn/easy-rsa/2.0/keys/ca.crt
        cert /etc/openvpn/easy-rsa/2.0/keys/server.crt
        key /etc/openvpn/easy-rsa/2.0/keys/server.key
        dh /etc/openvpn/easy-rsa/2.0/keys/dh1024.pem
        user nobody
        group nogroup
        server 10.8.0.0 255.255.255.0
        persist-key
        persist-tun
        #persist-local-ip
        status openvpn-status.log
        verb 3
        client-to-client
        push "redirect-gateway def1"
        push "dhcp-option DNS 10.8.0.1"
        log-append /var/log/openvpn
        comp-lzo

    I searched all over the net for a solution, and all the answers seem to be related to the auth-nocache parameter, which I haven't set. The directive auth-user-pass-verify /etc/openvpn/auth.pl via-env points to a script which is executed to perform the authentication. A failed authentication should result in exit 1, while a successful one should result in exit 0. For testing, the auth.pl script returns exit 0 no matter what the input is, but it seems that the file is not even executed before the error is raised. auth.pl contents:

        #!/usr/bin/perl
        my $user = $ENV{username};
        my $passwd = $ENV{password};
        printf("$user : $passwd\n");
        exit 0;

    Any ideas?
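
    Worth noting: this error text comes from the client, not the server. It means the client was asked for a username/password but has no way to read one. A minimal sketch of the client-side directive that is usually missing (the credentials file is an assumption; it would hold the username on line 1 and the password on line 2, useful when the client runs as a daemon with no terminal):

        # client.conf
        auth-user-pass                         # prompt interactively, or:
        # auth-user-pass /etc/openvpn/passwd   # read the credentials from a file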

    Read the article

  • Possible DNS issue?

    - by durilai
    I am having an issue which I think stems from DNS. I have 2 servers. Server 1 is an AD server with DNS, which was automatically configured when installing AD. The second server is a web server that is part of the domain, but it does not run AD or any other role. I can remote desktop in from server 1 using the internal IP address, but when I attempt to connect from any other computer it fails, even though that computer can connect to server 1. I am able to ping both servers, as well as nslookup both using their FQDNs. I am also able to telnet to port 3389. Any help is appreciated. UPDATE: I no longer think it is DNS, but I am not sure what it is. The remote desktop session connects and I get to the login prompt, but when I start to enter credentials it disconnects. I then am unable to reconnect. If I wait for about 10 minutes it will let me repeat the process, but with the same results. UGH!!!

    Read the article

  • USB keyboard stopped working in Ubuntu with error -71

    - by tapan
    I have a USB keyboard which was working perfectly until now (about a year since I have been regularly using Ubuntu). It suddenly stopped working when I connected a USB HDD. Now the keyboard works randomly: it works for a while and then stops working for a longer time. Here is the dmesg output:

        [  705.817076] usb 5-1: device not accepting address 8, error -71
        [  705.928032] usb 5-1: new low speed USB device using uhci_hcd and address 9
        [  706.336060] usb 5-1: device not accepting address 9, error -71
        [  706.448055] usb 5-1: new low speed USB device using uhci_hcd and address 10
        [  706.568044] usb 5-1: device descriptor read/64, error -71
        [  706.792049] usb 5-1: device descriptor read/64, error -71
        [  707.008060] usb 5-1: new low speed USB device using uhci_hcd and address 11
        [  707.128041] usb 5-1: device descriptor read/64, error -71
        [  707.352052] usb 5-1: device descriptor read/64, error -71
        [  707.456068] hub 5-0:1.0: unable to enumerate USB device on port 1

    Based on the suggestions here I tried the following two things:

        echo -1 > /sys/module/usbcore/parameters/autosuspend
        echo Y > /sys/module/usbcore/parameters/old_scheme_first

    However, I am still facing the same problem. Can anyone help me out with this? I'm using Ubuntu 10.04 Lucid Lynx.

    Read the article

  • SSL with nginx on subdomain not working

    - by peppergrower
    I'm using nginx to serve three sites: example1.com (which redirects to www.example1.com), example2.com (which redirects to www.example2.com), and a subdomain of example2.com, call it sub.example2.com. This all works fine without SSL. I recently got SSL certs (from StartSSL), one for www.example1.com, one for www.example2.com, and one for sub.example2.com. I got them set up and everything seems to work (I'm using SNI to make all this work on a single IP address), except for sub.example2.com. I can still access it fine over non-SSL, but on SSL I just get a timeout. If I go directly to my server's IP address, I get served the SSL certificate for sub.example2.com, so I know nginx is loading the certificate properly... but somehow it doesn't seem to be listening for sub.example2.com on port 443, even though I told it to. I'm running nginx 1.4.2 on Debian 6 (squeeze); here's my config for sub.example2.com (the other domains have similar configs):

        server {
            server_name sub.example2.com;
            listen 80;
            listen 443 ssl;
            ssl_certificate /etc/nginx/ssl/sub.example2.com/server-unified.crt;
            ssl_certificate_key /etc/nginx/ssl/sub.example2.com/server.key;
            root /srv/www/sub.example2.com;
        }

    Does anything look amiss? What am I missing? I don't know if it matters, but StartSSL lists the base domain as a subject alternative name (SAN); not sure if that would somehow pose problems if both subdomains list the same SAN.
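
    A hedged way to separate an SNI problem from a connectivity problem (a timeout by name versus a correct certificate by IP suggests the latter, e.g. DNS pointing elsewhere or a different path through a firewall):

        # Does the name resolve to the same address you tested directly?
        dig +short sub.example2.com
        # Ask for the certificate explicitly via SNI:
        openssl s_client -connect sub.example2.com:443 -servername sub.example2.com

    If the openssl call against the server's IP (still passing -servername) returns the right certificate while the call against the hostname times out, the problem is in front of nginx rather than in this server block.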

    Read the article

  • MySQL 5.6 won't start on OS X - ambiguous option

    - by MaticPetek
    I would like to try MySQL 5.6 on my machine, but I cannot start it. I always get this error:

        [ERROR] /usr/local/mysql-5.6.5-m8-osx10.6-x86/bin/mysqld: ambiguous option '--log=/var/log/mysqld.log' (log-bin, log_slave_updates)

    my.cnf:

        [mysqld]
        pid-file=/usr/local/mysql-5.6.5-m8-osx10.6-x86/mysql.pid
        log-error=/usr/local/mysql-5.6.5-m8-osx10.6-x86/data/mysql-error.log
        log-slow-queries=/usr/local/mysql-5.6.5-m8-osx10.6-x86/data/mysql-slowquery.log
        log-bin=/usr/local/mysql-5.6.5-m8-osx10.6-x86/data/mysql-bin.log
        general_log_file=/usr/local/mysql-5.6.5-m8-osx10.6-x86/data/mysql-general_log_file.log
        log=/usr/local/mysql-5.6.5-m8-osx10.6-x86/data/mysql.log

    I tried setting the "log" and "log-bin" parameters in the my.cnf file and also as start parameters for mysqld, but with no luck. Any idea what I can do? Thank you. My environment: OS X 10.6.8, mysql-5.6.5-m8-osx10.6-x86 (not the _x64 version). Note: I'm also running MySQL 5.5 on this machine (different port and socket). I also tried stopping that instance, but I get the same error.
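
    A sketch of the likely fix: the old "log" option (and "log-slow-queries") was removed in MySQL 5.6, so 5.6's mysqld can no longer match "--log" unambiguously; the replacements are the general_log and slow_query_log settings:

        # my.cnf, assuming you want the same log destinations as above
        [mysqld]
        general_log = 1
        general_log_file = /usr/local/mysql-5.6.5-m8-osx10.6-x86/data/mysql-general_log_file.log
        slow_query_log = 1
        slow_query_log_file = /usr/local/mysql-5.6.5-m8-osx10.6-x86/data/mysql-slowquery.log

    Note also that the path in the error (/var/log/mysqld.log) does not match this my.cnf, which suggests mysqld is reading another option file as well (perhaps the 5.5 instance's); "mysqld --print-defaults" or the option-file list at the top of "mysqld --help --verbose" shows where the stray --log is coming from.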

    Read the article

  • Configuring DNS & MX records for exchange 2010

    - by Mahmoud Saleh
    I am trying to configure Exchange Server 2010 on Windows Server 2008 R2 to receive email from the internet, following the danscourses tutorials; I followed this video for the DNS & MX records: http://www.youtube.com/watch?v=jdf_3DRssks

    I don't have any Windows administration skills, and I am stuck with the DNS configuration. The following is the domain configuration I got from the hosting provider; these are the steps I made:

        1. Added a new name server ns1.centors.com with the Exchange server's public IP: 41.233.26.131
        2. Changed the A record to point to the public IP address 41.233.26.131
        3. New CNAME record for www, resolving to centors.com
        4. New MX record for mail.centors.com
        5. New A record for mail.centors.com: name mail, IP 41.233.26.131
        6. New A record for ns1: IP 41.233.26.131
        7. Set up port forwarding on the router for SMTP and POP3 to the Exchange server's local IP address.

    ISSUE: I have a user account in Active Directory, and the user is a member of the domain. The user is [email protected], and when trying to log in with this account in Outlook 2010 on another machine using the following settings:

        account type: POP3
        incoming mail server: mail.centors.com
        outgoing mail server: mail.centors.com

    I always get the error "Authorization failed, check your server settings." Please advise what's wrong with the configuration, thanks in advance.
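
    A hedged check on the Exchange side, independent of DNS: the POP3 service is not started by default on Exchange 2010, and by default it only accepts secure (TLS) logins, so a plain POP3 account in Outlook fails to authenticate. A sketch from the Exchange Management Shell (the plain-text login setting is only sensible for testing):

        Set-Service MSExchangePOP3 -StartupType Automatic
        Start-Service MSExchangePOP3
        Set-PopSettings -LoginType PlainTextLogin
        Restart-Service MSExchangePOP3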

    Read the article

  • Joining an Ubuntu 14.04 machine to active directory with realm and sssd

    - by tubaguy50035
    I've tried following this guide to set up realmd and sssd with Active Directory: http://funwithlinux.net/2014/04/join-ubuntu-14-04-to-active-directory-domain-using-realmd/

    When I run the command

        realm --verbose join domain.company.com --user-principal=c-u14-dev1/[email protected] --unattended

    everything seems to connect. My sssd.conf looks like the following:

        [nss]
        filter_groups = root
        filter_users = root
        reconnection_retries = 3

        [pam]
        reconnection_retries = 3

        [sssd]
        domains = DOMAIN.COMPANY.COM
        config_file_version = 2
        services = nss, pam

        [domain/DOMAIN.COMPANY.COM]
        ad_domain = DOMAIN.COMPANY.COM
        krb5_realm = DOMAIN.COMPANY.COM
        realmd_tags = manages-system joined-with-adcli
        cache_credentials = True
        id_provider = ad
        krb5_store_password_if_offline = True
        default_shell = /bin/bash
        ldap_id_mapping = True
        use_fully_qualified_names = True
        fallback_homedir = /home/%d/%u
        access_provider = ad

    My /etc/pam.d/common-auth looks like this:

        auth    [success=3 default=ignore]  pam_krb5.so minimum_uid=1000
        auth    [success=2 default=ignore]  pam_unix.so nullok_secure try_first_pass
        auth    [success=1 default=ignore]  pam_sss.so use_first_pass
        # here's the fallback if no module succeeds
        auth    requisite                   pam_deny.so
        # prime the stack with a positive return value if there isn't one already;
        # this avoids us returning an error just because nothing sets a success code
        # since the modules above will each just jump around
        auth    required                    pam_permit.so
        # and here are more per-package modules (the "Additional" block)
        auth    optional                    pam_cap.so

    However, when I try to SSH into the machine with my Active Directory user, I see the following in auth.log:

        Aug 21 10:35:59 c-u14-dev1 sshd[11285]: Invalid user nwalke from myip
        Aug 21 10:35:59 c-u14-dev1 sshd[11285]: input_userauth_request: invalid user nwalke [preauth]
        Aug 21 10:36:10 c-u14-dev1 sshd[11285]: pam_krb5(sshd:auth): authentication failure; logname=nwalke uid=0 euid=0 tty=ssh ruser= rhost=myiphostname
        Aug 21 10:36:10 c-u14-dev1 sshd[11285]: pam_unix(sshd:auth): check pass; user unknown
        Aug 21 10:36:10 c-u14-dev1 sshd[11285]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=myiphostname
        Aug 21 10:36:10 c-u14-dev1 sshd[11285]: pam_sss(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=myiphostname user=nwalke
        Aug 21 10:36:10 c-u14-dev1 sshd[11285]: pam_sss(sshd:auth): received for user nwalke: 10 (User not known to the underlying authentication module)
        Aug 21 10:36:12 c-u14-dev1 sshd[11285]: Failed password for invalid user nwalke from myip port 34455 ssh2

    What do I need to do to allow Active Directory users the ability to log in?
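
    A sketch of the most likely explanation, given "Invalid user nwalke" in the log: with use_fully_qualified_names = True, sssd only answers for the fully qualified name, so the short login "nwalke" is unknown. Two standard ways around it:

        # 1) Log in with the fully qualified name:
        ssh -l nwalke@domain.company.com c-u14-dev1

        # 2) Or allow short names by editing sssd.conf and restarting sssd:
        #      use_fully_qualified_names = False
        sudo service sssd restart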

    Read the article

  • fail2ban log parsing too slow on Raspberry Pi - options? [migrated]

    - by Gordon Morehouse
    I'm running fail2ban on a Raspberry Pi at 950MHz which I cannot overclock further. The Pi is occasionally subject to SYN floods on particular ports. I've set up iptables to throttle the rate of SYNs on the port of interest; when the throttle limits are exceeded, hosts which send SYNs are dropped into the REJECT chain and the particular SYN packet which exceeded the limit is logged. fail2ban then watches for these logged SYNs and, after seeing a few, temporarily bans the host for a short time (this is a transient issue in the app I'm working with). The problem is that the SYN floods can occasionally reach rates which are too fast for fail2ban to keep up with; I'll see 20-40 log messages per second, and eventually fail2ban falls behind and becomes ineffective. To add insult to injury, it continues consuming a LOT of CPU as it tries to catch up. I have verified that DROP chained packets from hosts already banned by fail2ban are not logged, and thus do not add to its load. What are my options here? I have a few ideas, but no clear path forward:

    - Could I make the log-parse regex "easier" so it takes fewer cycles? Would using iptables --log-prefix to put a token near the start of the log message, and/or otherwise simplifying/altering the fail2ban regex, help? Here is the current fail2ban config line containing the regex:

        failregex = kernel:.*?SRC=(?:::f{4,6}:)?(?P<host>[\w\-.^_]+) DST.*?SYN

    - Is there a faster way for fail2ban to watch for the packets exceeding the limits than parsing kern.log?
    - Could fail2ban be run under PyPy instead of CPython with minimal nonstandard wizardry (the OS is Raspbian 7, so, mostly Debian 7)?
    - Is there something better than fail2ban that I could use to watch for the packets which exceed the SYN limits, and after N exceeds in X seconds, temporarily put the offending IP into the iptables DROP bucket, and take it out when the ban timer expires? Again, I'd vastly prefer a solution that uses as much software available in Debian as possible, though I can build Debian packages in a pinch.
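
    On the last point, a hedged all-in-kernel alternative that avoids log parsing entirely is the iptables "recent" match: let the kernel count offending SYNs per source and drop that source for a window, so nothing in userspace has to keep up. A sketch, with the port, hit count and window as placeholder values:

        # Track each source address that sends a SYN to the protected port...
        iptables -A INPUT -p tcp --syn --dport 12345 -m recent --name synwatch --set
        # ...and drop sources that sent more than 10 SYNs in the last 60 seconds:
        iptables -A INPUT -p tcp --syn --dport 12345 -m recent --name synwatch --update --seconds 60 --hitcount 10 -j DROP

    Because --update refreshes the timestamp on every hit, a flooding host stays dropped while it keeps sending and falls out of the list once it goes quiet for the window, which approximates fail2ban's ban timer without the CPU cost.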

    Read the article

  • iRedMail home setup - use different SMTP relay for different destination domains

    - by John
    Hello helpful server folks, I'm messing with iRedMail. I've mostly been successful; I think I have an SMTP problem. I have changed RoundCube (webmail) to use BrightHouse's (my ISP's) SMTP server for outgoing mail. It works fine: I click send and, poof, it arrives in Gmail. I can reply from Gmail to my email server, and it works. It took 10 hours for the email to show up, which I think is a different problem, but it does work. But when I send from my server TO my own server, my ISP's postmaster account sends me a cryptic blurb. I just got off the phone with them, and they say it "should work" and that they can't reach my POP3 server. (POP3, POP3S, IMAP, and IMAPS are all open on my router and forwarded to the server; I'm not sure which I need, I'm just covering my bases.) POP3 and/or IMAP as external interfaces are just formalities; I really just want webmail to work. RoundCube only takes one SMTP server in its configs. How can I configure Postfix to relay/forward outgoing email to my ISP's SMTP server while handling messages bound for my own domain itself, since my ISP won't let me "bounce" my emails off of it? Maybe I'm vastly misunderstanding how e-mail works in general: to receive mail, I should only need port 25 (SMTP) open to the internet, correct? Should I be concerned about some authentication failure from the outside to my relay? (My relay requires user/pass to use; my ISP's requires none.)
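
    A hedged sketch of the usual Postfix arrangement, assuming iRedMail's Postfix already lists your own domain as a local/virtual destination (it does out of the box): point RoundCube at your own Postfix on localhost and let Postfix hand everything that is not for your domain to the ISP via relayhost. Mail addressed to a domain Postfix is final destination for never goes through relayhost, so no per-domain transport table is needed:

        # /etc/postfix/main.cf  (hostname is a placeholder for the BrightHouse relay)
        relayhost = [smtp.example-isp.com]:25

        # apply the change
        postfix reload

    With RoundCube's smtp_server set to localhost, local-to-local mail is delivered directly instead of being bounced off the ISP.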

    Read the article

  • Remote Desktop access Windows 7 system from Windows 8

    - by Prabhat
    I have 2 systems; Windows 7 & Windows 8. Both are connected to WiFi router. They have been assigned address 192.168.2.8 & 192.168.2.9 respectively. I have added them to home group. I am able to ping and connect Windows 8 system from Windows 7. I am having trouble connecting Windows 7 system from Windows 8 system. I can't even ping Windows 7 system. Windows 7 system's user is administrator (default administrator account from secpol.msc). File sharing, Remote Access, network discovery are all enabled. Someone please help me connect. EDIT : I found that this is the issue of Kaspersky Internet Security 2012. If I disable firewall, it works. I tried opening port 3389 in Kaspersky. It is still blocking access.

    Read the article

  • Can't pop3 from exchange server after a reboot

    - by BLAKE
    Last night I shut down my Exchange 2003 virtual machine, added a new VHD (for backups), and booted it again. Now I can't POP3 email from it with Outlook 2007. In Outlook I get the error:

        Task '[email protected] - Receiving' reported error (0x800CCC0F) : 'The connection to the server was interrupted. If this problem continues, contact your server administrator or Internet service provider (ISP).'

    Does anybody know what is wrong? All I did was a reboot. I haven't formatted the added disk. There are no weird errors in the event log. I can still send mail with Outlook over port 25. I can send and receive mail with OWA. I can POP3 the mail to my phone (it takes about 15 minutes after sending a message, but I do get it eventually). EDIT: The 'Microsoft Exchange POP3' service says that it is started, but if I stop it and try to start it again, it fails with 'Could not start the Microsoft Exchange POP3 service on Local Computer. Error 1053: The service did not respond to the start or control request in a timely fashion.' I did some googling and someone on exchangefreaks.com said that if I use Task Manager to 'End Task' on inetinfo.exe, then I can start the POP3 service fine. Does anyone know what causes this problem? I am fine for now since I did get the service started, but what if it does this after every reboot...

    Read the article

  • Ubuntu 10.04 Keyboard and Mouse Freezing Problem

    - by nitbuntu
    I had a partition setup with Windows XP and Ubuntu 8.04 dual booting. I recently upgraded to Ubuntu 10.04 by installing fresh from CD but leaving the previous /home folder as is. Things seemed to be working fine, but then I started finding that my mouse and keyboard were freezing. After a quick search on the internet, I found the following suggestion on the Ubuntu Forums:

        Edit /etc/default/grub and find the line that begins with:
            GRUB_CMDLINE_LINUX_DEFAULT=
        Change it to:
            GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi=off"
        Then run:
            sudo update-grub
        and reboot.

    This seemed to resolve the issue, but after a couple of days I again find my mouse and keyboard freezing. I also find that my parallel port printer has stopped working. I have saved the output of dmesg and my syslog. The first can be viewed here, but the syslog had too many characters, so if someone can suggest an alternative to freetexthost, I can post it there. Moreover, if there is any other information that should be provided, do let me know. I do hope we can get to the bottom of this issue. Thank you in advance for any help that could be provided.

    Read the article

  • How can find the USB wireless adapter into the dmesg log file?

    - by AndreaNobili
    I am pretty new to Linux (Raspbian on a Raspberry Pi, but I don't think that makes much difference) and I have to install a USB wireless network adapter (the product is the TP-Link TL-WN725N: http://www.tp-link.it/products/details/?model=TL-WN725N ). I think it is not being recognised automatically by my system, because if I execute the ifconfig command I obtain the following output:

        pi@raspberrypi ~ $ ifconfig
        eth0      Link encap:Ethernet  HWaddr b8:27:eb:2a:9f:b0
                  inet addr:192.168.1.8  Bcast:192.168.1.255  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:475 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:424 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:34195 (33.3 KiB)  TX bytes:89578 (87.4 KiB)

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  UP LOOPBACK RUNNING  MTU:65536  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

    So it only sees my Ethernet interface and not the wireless one. I was thinking of looking in dmesg, but I don't know what to search for or how to pick it out of the dmesg output. For example, with the following command I can see the dmesg lines related to my Ethernet port:

        pi@raspberrypi ~ $ cat /var/log/dmesg | grep -i eth
        [    3.177620] smsc95xx 1-1.1:1.0 eth0: register 'smsc95xx' at usb-bcm2708_usb-1.1, smsc95xx USB 2.0 Ethernet, b8:27:eb:2a:9f:b0
        [   18.030389] smsc95xx 1-1.1:1.0 eth0: hardware isn't capable of remote wakeup
        [   19.642167] smsc95xx 1-1.1:1.0 eth0: link up, 100Mbps, full-duplex, lpa 0x45E1

    But what can I search for to find the USB wireless adapter? Thanks
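
    A hedged way to look for it, given that an adapter without a working driver will never show up in ifconfig: first check whether the USB device itself is detected, then look for driver messages. The grep patterns below are just reasonable guesses for this chipset family:

        lsusb                                  # the stick should appear as a TP-Link/Realtek USB device
        dmesg | grep -i -E 'usb|wlan|rtl|8188' # kernel messages printed when the stick is plugged in
        iwconfig                               # lists wireless interfaces only once a driver has claimed the device

    If lsusb shows the device but no wlan0 ever appears, the missing piece is the driver (for the TL-WN725N v2 that is typically an rtl8188eu/8188eu module), not the hardware.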

    Read the article

  • Mac OS X Client With Static DHCP Assignment Requests Wrong IP via Option 50

    - by Starchy
    I have a number of Mac (and a few Linux) laptops getting DHCP from a Force10 layer 3 switch, the only DHCP server on the subnet. There's a global dynamic pool, and for each full-time employee's laptop I have a single-IP static pool keyed by MAC address. One and only one of the clients, running OS X 10.7.5, consistently fails to get its static assignment. The MAC address in the static pool definition has been carefully re-checked. Running tcpdump on a mirrored port when the laptop connects, I see that it is specifically requesting 10.100.0.252 (a dynamic address):

        11:32:10.108280 IP (tos 0x0, ttl 255, id 28293, offset 0, flags [none], proto UDP (17), length 328)
            0.0.0.0.bootpc > broadcasthost.bootps: [udp sum ok] BOOTP/DHCP, Request from 3c:07:54:xx:xx:xx (oui Unknown), length 300, xid 0x1399da89, Flags [none] (0x0000)
            Client-Ethernet-Address 3c:07:54:xx:xx:xx (oui Unknown)
            Vendor-rfc1048 Extensions
              Magic Cookie 0x63825363
              DHCP-Message Option 53, length 1: Request
              Parameter-Request Option 55, length 9:
                Subnet-Mask, Default-Gateway, Domain-Name-Server, Domain-Name
                Option 119, LDAP, Option 252, Netbios-Name-Server
                Netbios-Node
              MSZ Option 57, length 2: 1500
              Client-ID Option 61, length 7: ether 3c:07:54:xx:xx:xx
              Requested-IP Option 50, length 4: 10.100.0.252
              Lease-Time Option 51, length 4: 7776000
              Hostname Option 12, length 10: "host-name"
              END Option 255, length 0
              PAD Option 0, length 0, occurs 8

    I haven't been able to find any extra system prefs or unusual software on the laptop. Disabling the interface and rebooting, or temporarily setting the IP manually, both fail to make any difference. Any suggestions appreciated.
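
    A hedged client-side step, on the assumption that the Mac is simply re-requesting a stale cached lease (which is what Option 50 carries): clear the stored lease and force a fresh DHCP exchange. The exact lease filename varies, so list the directory first; en0 is an assumed interface name:

        # On the laptop:
        ls /var/db/dhcpclient/leases/
        sudo rm /var/db/dhcpclient/leases/en0-1,3c:07:54:xx:xx:xx   # whichever file matches
        sudo ipconfig set en0 DHCP

    If the client then broadcasts a DISCOVER and still ends up with a dynamic address, the problem is back in the Force10 static-pool definition rather than on the client.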

    Read the article

  • HP Procurve 2610 intervlan routing

    - by user19039
    Can anyone tell me why inter-VLAN routing is working for all VLANs except my newly created VLAN 4? The switch is an HP ProCurve 2610; any help would be appreciated. I basically have this one switch as the core with unmanaged switches attached to it, and a second 2610 on port 28. Running configuration:

        ; J9085A Configuration Editor; Created on release #R.11.25
        hostname "Core_HP"
        interface 22
           speed-duplex 100-full
        exit
        ip routing
        snmp-server community "public" Unrestricted
        vlan 1
           name "DEFAULT_VLAN"
           untagged 1-12,17-22,26-27
           ip address 192.168.4.6 255.255.255.0
           tagged 25
           no untagged 13-16,23-24,28
        exit
        vlan 2
           name "WAN"
           untagged 28
           ip address 10.254.254.3 255.255.255.0
        exit
        vlan 3
           name "Wireless"
           untagged 13-16,24
           ip address 192.168.7.6 255.255.255.0
           ip helper-address 192.168.4.2
           tagged 27
        exit
        vlan 35
           name "guest"
           untagged 23
           tagged 24
        exit
        vlan 4
           name "esxi"
           untagged 25
           ip address 10.10.1.1 255.255.248.0
        exit
        ip route 192.168.5.0 255.255.255.0 10.254.254.1
        ip route 192.168.6.0 255.255.255.0 10.254.254.1
        ip route 0.0.0.0 0.0.0.0 192.168.4.10

        show ip route
                                    IP Route Entries
          Destination        Gateway         VLAN Type      Sub-Type   Metric  Dist.
          ------------------ --------------- ---- --------- ---------- ------- -----
          0.0.0.0/0          192.168.4.10    1    static               1       1
          10.10.0.0/21       esxi            4    connected            0       0
          10.254.254.0/24    WAN             2    connected            0       0
          127.0.0.0/8        reject               static               0       250
          127.0.0.1/32       lo0                  connected            0       0
          192.168.4.0/24     DEFAULT_VLAN    1    connected            0       0
          192.168.5.0/24     10.254.254.1    2    static               1       1
          192.168.6.0/24     10.254.254.1    2    static               1       1
          192.168.7.0/24     Wireless        3    connected            0       0

        show ip
         Internet (IP) Service
          IP Routing : Enabled
          Default TTL : 64
          Arp Age : 20
          VLAN         | IP Config  IP Address      Subnet Mask     Proxy ARP
          ------------ + ---------- --------------- --------------- ---------
          DEFAULT_VLAN | Manual     192.168.4.6     255.255.255.0   No
          WAN          | Manual     10.254.254.3    255.255.255.0   No
          Wireless     | Manual     192.168.7.6     255.255.255.0   No
          esxi         | Manual     10.10.1.1       255.255.248.0   No
          guest        | Disabled
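
    A hedged guess rather than a definitive answer: VLAN 4's only member is port 25, which is untagged in VLAN 4 but tagged in VLAN 1. If the host on port 25 is an ESXi box whose port groups are configured with VLAN ID 4 (i.e. it sends 802.1Q-tagged frames for VLAN 4), the switch will never place that traffic into VLAN 4, and nothing in the VLAN works even though the SVI at 10.10.1.1 is up. In that case the port would need to carry VLAN 4 tagged instead, along these lines; also check that VLAN 4 hosts use 10.10.1.1 as their gateway with the /21 mask:

        vlan 4
           tagged 25
           no untagged 25
        exit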

    Read the article
