Search Results

Search found 22211 results on 889 pages for 'client identifier'.

Page 385/889

  • Mac and L2TP VPN no problems, xp, vista and 7 no go :s

    - by The_cobra666
    Hi all, I've got a weird problem and I'm out of options. The situation: when connecting from my Mac to the VPN server (Windows Server 2003 R2) with L2TP and a pre-shared key, everything works as it should. However, when I connect from a Windows PC, nothing happens; it spits out error 809 and sometimes 789. I know my ports are OK, since the Mac can connect without any problems. It's the same for XP, Vista SP2 and 7: none of them can connect. If I connect to the VPN server directly (to the internal IP instead of the WAN address on the router), it connects without a problem, and connecting with PPTP works... now if only L2TP would work, thank you very much Windows! I have checked the counters on my Linux router with iptables -L -nv and they do not increase when a Windows client connects: not on ACCEPT and not on DROP; they only move when connecting from the Mac. I found the Microsoft guide for enabling AssumeUDPEncapsulationContextOnSendRule in the registry and set it to 2 on both the server and the client (see the sketch below); still no go, although after adding the key the error changed from 809 to 789. The IPsec services are running on both client and server. Can anyone please help me with this? I've been working on it for two days and I'm out of options. Thanks!
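    For reference, this is roughly how I added the key on the Windows 7 client; treat it as a sketch, since on XP/2003 the value lives under Services\IPSec rather than Services\PolicyAgent, and a reboot is needed afterwards:

      reg add HKLM\SYSTEM\CurrentControlSet\Services\PolicyAgent /v AssumeUDPEncapsulationContextOnSendRule /t REG_DWORD /d 2 /f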

    Read the article

  • CentOS centralised logging, syslogd, rsyslog, syslog-ng, logstash sender?

    - by benbradley
    I'm trying to figure out the best way to set up a central place to store and interrogate server logs: syslog, Apache, MySQL etc. I've found a few different options but I'm not sure which would be best. I'm looking for something that is easy to install and keep updated on many virtual machines; I can add it to a VM template going forward, but I'd also like it to be easy to install so the VM complexity stays down. The options I've found so far are:

      - syslogd
      - syslog-ng
      - rsyslog
      - syslogd/syslog-ng/rsyslog forwarding to logstash/ElasticSearch
      - a logstash agent in each log "client" sending to Redis/logstash/ElasticSearch

    and all sorts of permutations of the above. What's the most resilient and lightweight option from the log "client" perspective? I'd like to avoid the situation where log "clients" hang because they are unable to send their logs to the logging server (see the forwarding sketch below). I would also still like to keep local logging and the rotation/retention provided by logrotate in place. Any ideas/suggestions, or reasons for or against any of the above? Or suggestions for a different structure entirely? Cheers, B
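    For the rsyslog option, this is the kind of client-side forwarding I have in mind; a minimal sketch assuming a central host named loghost.example.com, with a disk-assisted queue so the client buffers locally instead of hanging when the server is unreachable:

      # /etc/rsyslog.d/forward.conf (sketch)
      $ActionQueueType LinkedList        # in-memory queue that can spill to disk
      $ActionQueueFileName fwdRule1      # enables the disk-assisted part
      $ActionQueueMaxDiskSpace 100m
      $ActionQueueSaveOnShutdown on
      $ActionResumeRetryCount -1         # retry forever rather than dropping messages
      *.* @@loghost.example.com:514      # @@ = TCP, a single @ would be UDP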

    Read the article

  • Backing up server and multiple clients

    - by inquam
    I'm running an Amahi server; it's basically a Fedora 14 x64 installation. I'm looking for a good solution to back up the 200 GB system drive on the server to an external USB/eSATA drive every night. I looked into using dd, but since other things might be running on the server at the same time it didn't feel quite safe. I would like the backups to be incremental so that the runs after the initial one are quite fast. The backup should also be bootable, or perhaps be able to produce a bootable disk after booting from a CD or something. I would also like the server to be able to do similar backups of my clients running Ubuntu, Windows 7 x64, Windows 7 Starter, OSX Lion, Windows XP and so on, so not an application that backs up only shared folders or something like that. My guess is a client daemon would have to exist to lock the system and allow backup of a Windows system drive, which can otherwise be quite cranky. Booting a CD in a crashed client, connecting to the server, restoring the latest backup and being up and running again is my ideal goal. Is there anything out there that would fit these needs?
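    For the incremental part on the server itself, something like rsync with hard-linked snapshots is what I've been picturing; a rough sketch only (the destination path is made up, and this on its own doesn't give me a bootable disk or cover the Windows/OSX clients):

      #!/bin/bash
      # Nightly snapshot of / to an external drive; unchanged files become hard links
      # to the previous snapshot, so each run only stores what actually changed.
      DEST=/mnt/backupdrive/snapshots
      TODAY=$(date +%F)
      rsync -aAXH --delete \
            --exclude={"/proc/*","/sys/*","/dev/*","/tmp/*","/run/*","/mnt/*"} \
            --link-dest="$DEST/latest" / "$DEST/$TODAY"
      ln -sfn "$DEST/$TODAY" "$DEST/latest"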

    Read the article

  • Apache 2.2 + mod_fcgid + PHP 5.4: (104) Connection reset by peer

    - by Michele Piccirillo
    On a Debian 6 VPS, I'm running PHP 5.4 via mod_fcgid on a couple of different virtual hosts, managed by Virtualmin GPL. At random I get 500 Internal Server Errors; restarting Apache brings everything back to normal. Examining the logs, I find messages of this kind:

      [Thu Oct 04 15:39:35 2012] [warn] [client 173.252.100.117] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server
      [Thu Oct 04 15:39:35 2012] [error] [client 173.252.100.117] Premature end of script headers: index.php

    Any ideas about what is happening? UPDATE: I found a similar question whose author reported solving the problem by disabling APC. I tried following that advice, but I'm still getting the same errors.

    VirtualHost configuration:

      SuexecUserGroup "#1000" "#1000"
      ServerName example.com
      DocumentRoot /home/example/public_html
      ScriptAlias /cgi-bin/ /home/example/cgi-bin/
      DirectoryIndex index.html index.htm index.php index.php4 index.php5
      <Directory /home/example/public_html>
          Options -Indexes +IncludesNOEXEC +FollowSymLinks +ExecCGI
          allow from all
          AllowOverride All
          AddHandler fcgid-script .php
          AddHandler fcgid-script .php5
          FCGIWrapper /home/example/fcgi-bin/php5.fcgi .php
          FCGIWrapper /home/example/fcgi-bin/php5.fcgi .php5
      </Directory>
      <Directory /home/example/cgi-bin>
          allow from all
      </Directory>
      RemoveHandler .php
      RemoveHandler .php5
      IPCCommTimeout 61
      FcgidMaxRequestLen 1073741824

    php5.fcgi:

      #!/bin/bash
      PHPRC=$PWD/../etc/php5
      export PHPRC
      umask 022
      export PHP_FCGI_CHILDREN
      PHP_FCGI_MAX_REQUESTS=99999
      export PHP_FCGI_MAX_REQUESTS
      SCRIPT_FILENAME=$PATH_TRANSLATED
      export SCRIPT_FILENAME
      exec /usr/bin/php5-cgi

    Package versions:

      webmin-virtual-server/virtualmin-universal 3.94.gpl-2
      apache2/squeeze 2.2.16-6+squeeze8
      libapache2-mod-fcgid/squeeze 1:2.3.6-1+squeeze1
      php5 5.4.7-1~dotdeb.0
      php5-apc 5.4.7-1~dotdeb.0
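    One thing I'm going to experiment with (a guess at this point, not a confirmed fix) is pinning mod_fcgid's process-recycling and I/O limits so they can't fall out of step with the PHP_FCGI_MAX_REQUESTS value in the wrapper, since a PHP child exiting mid-request could plausibly produce this kind of "connection reset by peer":

      # added to the mod_fcgid/vhost configuration (sketch)
      FcgidMaxRequestsPerProcess 99999   # keep at or below PHP_FCGI_MAX_REQUESTS
      FcgidIOTimeout 121                 # allow slow scripts more than the 40 s default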

    Read the article

  • IIS permission configuration issue

    - by Dan
    Sorry, the title of this question is a little ambiguous, but I don't really have any idea where the issue lies; I'm seeking some clarification of the server error logs. Basically, I had a dedicated server running Windows 2003 and Plesk (v8 I think). Last week the server hardware failed and the entire thing had to be rebuilt from scratch: new hardware was put in, a new operating system (Win2008), a new Plesk installation (v9.5), new software (MSSQL etc), then all data was ported over manually from the old C and D drives to restore all 30 client sites. It was hell! All has been okay for a couple of days now, but about an hour ago, POP! Suddenly all sites went down giving a 500 error. Restarting all services eventually brought everything back online, but I'm now living in total fear: it can, and probably will, happen again. The guys on support gave me the following errors from the server log:

      The Template Persistent Cache initialization failed for Application Pool 'ASP.NET v4.0 Classic' because of the following error: Could not create a Disk Cache Sub-directory for the Application Pool. The data may have additional error codes.

      The worker process for application pool 'domain1.com(domain)(2.0)(pool)' encountered an error 'Cannot read configuration file ' trying to read configuration data from file '\\?\C:\inetpub\temp\apppools\domain1.com(domain)(2.0)(pool).config', line number '0'. The data field contains the error code.

      The worker process for application pool 'PleskControlPanel' encountered an error 'Cannot read configuration file ' trying to read configuration data from file '\\?\C:\inetpub\temp\apppools\PleskControlPanel.config', line number '0'. The data field contains the error code.

    The support guys are very vague about this and it scares me horribly. Can anyone positively identify the cause of this error, which led to all client websites going offline? What can be done to prevent it from happening again? Any pointers would be very much appreciated! Thanks folks...
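    In case it helps with the diagnosis: the first thing I've been asked to verify (I have not confirmed this is the actual cause) is that the application pool identities can write to IIS's temporary configuration-cache directory, e.g. something along the lines of:

      icacls C:\inetpub\temp /grant "IIS_IUSRS:(OI)(CI)(M)" /t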

    Read the article

  • Cannot connect with Cisco VPN but can connect with ShrewSoft VPN

    - by rodey
    EDIT: We connected an air card to the computer to use a different Internet connection, and over that connection the Cisco software connected to our VPN server successfully. I just don't understand why the ShrewSoft VPN client will connect but the Cisco client won't. I'm not our network admin, so sorry if I butcher some of the terminology. I have a computer at a remote site that connects to our network through Cisco VPN, using the Cisco VPN client software. The problem is that the computer at this site cannot connect to our VPN; it gets the error "Reason 412: The remote peer is no longer responding." To see if perhaps something on their network was blocking the connection, I installed the ShrewSoft VPN client on the computer, imported our .pcf file and connected with no problem. I have tried two different versions of the Cisco VPN software (4.8.0.* and 5.0.03.*) and have the same problem with both. I installed Wireshark on the computer and have confirmed (while trying to connect through the Cisco client) that the computer is sending traffic to the VPN server but is not receiving a response. We are not having any other problems with users being unable to connect. I'm at a loss as to what else to check. I'll be monitoring this and have access to the computer at any time.

    Read the article

  • emails not sending from CentOS 5.6 VM on Win7 via PHP code

    - by crmpicco
    I am experiencing an issue where my CentOS 5.6 (Final) VM running on Windows 7 has stopped sending emails from my PHP code. I'm confident this isn't a coding issue, as I have the exact same code running in my office and emails send correctly from there, hence why I believe this to be a networking/configuration issue. In the /etc/hosts file on my VM I have the following:

      127.0.0.1    localhost.localdomain localhost
      192.168.0.9  crmpicco.co.uk m.crmpicco.co.uk dev53.localdomain

    When I run setup on my VM, the DNS configuration is set to dev53.localdomain and my primary DNS is 192.168.0.1. In my /var/log/maillog file I see a lot of this sort of thing:

      Nov 19 14:36:58 dev53 sendmail[21696]: qAJEawI7021696: from=<[email protected]>, size=12858, class=0, nrcpts=1, msgid=<1353335817.9103820024efb30b451d006dc4ab3370@PHPMAILSERVER>, proto=ESMTP, daemon=MTA, relay=localhost.localdomain [127.0.0.1]
      Nov 19 14:36:58 dev53 sendmail[21693]: qAJEawvd021693: [email protected], [email protected] (48/48), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=42681, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (qAJEawI7021696 Message accepted for delivery)
      Nov 19 14:36:59 dev53 sendmail[21698]: qAJEawI7021696: to=<[email protected]>, delay=00:00:01, xdelay=00:00:01, mailer=esmtp, pri=132858, relay=mailserver.fletcher.co.uk. [213.171.216.114], dsn=5.0.0, stat=Service unavailable

    Is this likely to be a configuration issue?
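    If it turns out the home connection is simply blocked from delivering directly on port 25 (which the stat=Service unavailable on the relay attempt makes me suspect), I assume the usual workaround is to relay through a smarthost; a sketch of what I'd add to sendmail.mc (the hostname is a placeholder) and how I'd rebuild the config:

      define(`SMART_HOST', `smtp.myisp.example')dnl
      # regenerate sendmail.cf and restart
      m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf
      service sendmail restart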

    Read the article

  • Upgrade an Ubuntu 8.04 installation with VMware Server 1.0.8 and lots of guest OSes to Something Else

    - by Glyph
    I have an Ubuntu 8.04 (Hardy Heron) host machine which is running a whole slew of virtual machines in VMware Server 1.0.8. Among other guest OSes, there is every release of Ubuntu since 6.06, OpenSolaris 2009.06, and Windows XP. Right now I access these VMs from a variety of client OSes as well: Linux and Windows via the VMware Server console, and MacOS by X-forwarding the host machine's server console. I'd like to upgrade the host to Ubuntu 10.04 (Lucid Lynx), but from what I can tell, getting VMware Server 1.x to work on a more recent version of Linux is a real pain. While VMware Server 2.x is a bit easier, it's still not packaged as Debian packages, so installing security updates is a big chore. As long as I'm upgrading anyway, I'd like to move to a virtualization solution that will allow me to automate applying updates. The options that I'm aware of right now are KVM (managed via virt-manager) and VirtualBox (managed by its own tools or via its own libvirt bindings), but I'm open to other suggestions. For each option, I'd like to know:

      - how do I convert my guest images to the new format? (see the conversion sketch below)
      - am I going to have to re-activate my Windows guests (alternatively, "If the virtual hardware is different by default, can I avoid re-activation by changing some virtualization configuration to provide me with more similar virtual hardware")?
      - what are the management options like for each client OS (Mac, Linux, Windows)?

    Thanks.
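    For the conversion question, what I've gathered so far (untested on my own images, and the file names are placeholders) is that KVM can either use .vmdk files directly or the disks can be converted with qemu-img, roughly:

      # convert a VMware disk to qcow2 for use with KVM/virt-manager
      qemu-img convert -f vmdk -O qcow2 ubuntu-6.06.vmdk ubuntu-6.06.qcow2
      # sanity-check the result
      qemu-img info ubuntu-6.06.qcow2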

    Read the article

  • Windows 7, network shares, and authentication via local group instead of local user

    - by Donovan
    I have been doing some troubleshooting of my home network lately and have come to an odd conclusion that I was hoping to get some clarification on. I'm used to managing share permissions in a domain environment via groups instead of individual user accounts. I have a box at home running Windows 7 Ultimate and I decided to share some directories on that machine. I set it up to disallow guest access and require specifically granted permissions (password mode?). Anyway, after a whole bunch of time I figured out that even though the shares I created were allowed via a local group, I could not access them until I granted access to the intended user specifically. I just didn't think I would have to do that. So here is the breakdown:

      - Network is a Windows workgroup, not a homegroup or NT domain
      - PC_1 - Win 7 Ultimate - sharing in classic mode - user BOB - groups: Administrators
      - PC_2 - Win 7 Starter - client - user BOB - groups: Administrators
      - PC_3 - Win XP Pro - client - user BOB - groups: Administrators

    The share on PC_1 granted permission only to the local group Administrators, and local user BOB on PC_1 was a member of Administrators. Neither PC_2 nor PC_3 could browse the intended share on PC_1 because they were denied access. Also, no challenge was presented; they were simply denied. After adding BOB specifically to the intended share, everything works just fine. Remember, it's not an NT domain, just a workgroup. But still, shouldn't I be able to manage share permissions via groups instead of individual user accounts? D.

    Read the article

  • HAProxy is caching the forwarding?

    - by shadow_of__soul
    I'm trying to set up a server structure for an application I'm building in Node.js with socket.io. My setup is: an HAProxy frontend forwards to apache2 as the default backend (or nginx; it's Apache in this local test), and to the node.js app if the URL has socket.io in the request AND a given domain name. I have something like:

      global
          log 127.0.0.1 local0
          log 127.0.0.1 local1 notice
          maxconn 4096
          user haproxy
          group haproxy
          daemon

      defaults
          log global
          mode http
          maxconn 2000
          contimeout 5000
          clitimeout 50000
          srvtimeout 50000

      frontend all 0.0.0.0:80
          timeout client 5000
          default_backend www_backend
          acl is_soio url_dom(host) -i socket.io   #if the request contains socket.io
          acl is_chat hdr_dom(host) -i chaturl     #if the request comes from chaturl.com
          use_backend chat_backend if is_chat is_soio

      backend www_backend
          balance roundrobin
          option forwardfor          # This sets X-Forwarded-For
          timeout server 5000
          timeout connect 4000
          server server1 localhost:6060 weight 1 maxconn 1024 check   #forwards to apache2

      backend chat_backend
          balance roundrobin
          option forwardfor          # This sets X-Forwarded-For
          timeout queue 50000
          timeout server 50000
          timeout connect 50000
          server server1 localhost:5558 weight 1 maxconn 1024 check   #forward to node.js app

    The problem comes when I make a request to something like www.chaturl.com/index.html: the page loads perfectly, but it fails to load the socket.io files (www.chaturl.com/socket.io/socket.io.js) because they are routed to Apache instead of the node.js app that should serve them. The weird thing is that if I access the socket.io file directly and refresh a few times, it eventually loads, so I suppose HAProxy is "caching" the forwarding decision for the client after the first request reaches the Apache server. Any suggestion of how this can be solved, or what I can try or read up on?
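    Two things I'm planning to test in the frontend (guesses on my part, not verified fixes): matching socket.io by path instead of by host, and making HAProxy re-evaluate the routing on every request rather than once per keep-alive connection, roughly:

      option http-server-close                 # route per request, not per keep-alive connection
      acl is_soio path_beg -i /socket.io       # match socket.io by URL path rather than host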

    Read the article

  • How to disable horizontal scrolling within virtualbox on Ubuntu guest, Windows 7 host?

    - by Steven Rosato
    I am using Windows 7 as the host and Ubuntu Karmic as the guest OS, with the guest additions installed, and I get an annoying glitch when switching from the host to the guest machine: vertical scrolling with the mouse wheel turns into horizontal scrolling! Since I don't really care about horizontal scrolling, how can I disable it? I have checked the web and the only thing I found was to edit the xorg.conf file, adding Option "ZAxisMapping" "4 5" to the "InputDevice" section, which would enable vertical scrolling only. The thing is, I don't have that section in my config file, so I guessed that I would need to add:

      Section "InputDevice"
          Identifier "VBoxMouse"
          Driver     "vboxmouse"
          Option     "ZAxisMapping" "4 5"
      EndSection

    But that does not seem to work after restarting the X server. Any workaround for this?

    Read the article

  • (ZyWALL USG 300) NAT bypassed when accessing in-house server from LAN via domain name

    - by mschr
    My situation is like this: I host a number of websites from within our joint network solution. On the network there are basically three categories:

      - the known public, registered via MAC address and given a static DHCP lease
      - the anonymous LAN connections, given a lease from a specific DHCP range
      - switches, unix hosts, firewall

    Now consider the following hosts, which are of interest here:

      - 111.111.111.111 (ZyWALL USG 300 WAN)
      - 192.168.1.1 (ZyWALL USG 300 LAN) - load balances and monitors bandwidth, plus handles NAT
      - 192.168.1.2 (Linux www) - serves mydomain1.tld and mydomain2.tld
      - 192.168.123.123 (random LAN client) - accesses mydomain1.tld from the LAN
      - 23.234.12.253 (random external client) - accesses mydomain1.tld via the WAN

    DNS A records are set up so that both mydomain1.tld and mydomain2.tld point to 111.111.111.111, and the Linux www host serves the HTTP side with VirtualHost configurations, setting the document roots per ServerName; that part is not so interesting though. A NAT rule translates 111.111.111.111:80 to 192.168.1.2:80 (1:1 NAT). Our problem follows: when accessing http://mydomain1.tld from outside the joint network (the 23.234.12.253 example host), everything is fine; the ZyWALL receives the request on port 80 and maps it to the Linux host's httpd. However, when trying to go through the NAT from the LAN side (in-house, the 192.168.123.123 example host), the request gets filtered by the ZyWALL's port 80 firewall. I only know this because port 443 is open for the administration interface and https://mydomain1.tld prompts for the ZyWALL login. So my conclusion is that LAN clients accessing 111.111.111.111 are in fact routed to 192.168.1.1 while bypassing the NAT table. I need to know how to set up NAT / policy routing so that LAN -> WAN -> LAN access works with proper network translation, instead of doing the 'quick nameserver lookup' or whatever this might be.
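    If hairpin/loopback NAT turns out not to be configurable on the USG, the fallback I'm considering is split DNS, i.e. having the internal resolver hand LAN clients the LAN address of the web server directly. A sketch assuming a dnsmasq-style resolver on the inside (which is an assumption on my part, not part of the current setup):

      # internal resolver only: answer with the LAN address of the web server
      address=/mydomain1.tld/192.168.1.2
      address=/mydomain2.tld/192.168.1.2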

    Read the article

  • In spite of correct DNS, Exchange sending to wrong destination server for single outbound domain

    - by beporter
    My company uses an SBS 2003 server and makes use of Exchange to host our own email. We also have a linux server hosting domains for some of our clients. In order for us to send to those clients, we had internal DNS set up to shadow the client domains to provide "correct" MX records inside our network. For example, public DNS for a domain abc.com might point to 1.2.3.4, but internally we have MX records set up to route mail for abc.com to 172.16.0.4, which is the linux email server. This setup was entirely functional; this is just back story. We've recently moved one of our client domains from our internal linux server to an external email provider. When we did that, we naturally deleted our internal shadow DNS records so our Exchange server would fetch correct (public) DNS records and route mail out to the new external host. This has NOT had any effect on Exchange though. Even after rebooting the Exchange server and completely flushing the DNS cache (nslookups on the Exchange machine itself correctly resolve to the new external address) Exchange still attempts to deliver messages for the domain to our internal server! Exchange correctly routes to all other internal and external domains when sending email. Somehow Exchange is trying to deliver to a machine that by all accounts it has no business trying to use for just this one domain. Is there a DNS cache that Exchange uses internally? Is there a way to flush that internal cache? What else could I be missing?

    Read the article

  • Balancing internal services using a Cisco CSS 11501

    - by Ladadadada
    First, the background to the problem: I have a Cisco CSS 11501 that I am using to load balance a few web servers. These web servers have two network interfaces, one internal and one external, and we are sending the requests to the internal interface. We have the CSS configured to do NAT because our webservers need to see the client's IP address. Because the TCP packets hit the webservers with a source address on the Internet, the webserver tries to send the packet back to the client over the external interface and not through the load balancer. In order to stop these requests being sent back out to the Internet via the external interface, we added a routing rule on these boxes so that all traffic with a source address on the Internet will use the load balancer as the gateway. This part works fine.

    What I would also like to do is use the CSS as a load balancer for internal services such as our MySQL slaves. When I do this, I run into a similar problem: the TCP connection goes from the web server to the load balancer and then from the load balancer to the MySQL slave, but the CSS spoofs a source address of the original webserver. The MySQL slave then tries to send the response directly to the webserver via the internal network and not via the load balancer. The ideal solution would be to tell the CSS not to do source address spoofing on the internal network and only do it for requests originating on the Internet. Is this possible? Failing that, is there a way of directing the load-balanced traffic back through the load balancer while keeping the other traffic (say SSH) purely on the internal network? Is there another way of using the CSS 11501 to load balance internal services?

    Read the article

  • Windows Server 2008 backup VHD's - is it possible to mount/open in Windows 7?

    - by Simon
    Hi All, Is it possible to mount the VHD files created by the Windows Server 2008 backup utility onto a Windows 7 (release) client? Following an array failure I was very worried that there was a problem with both backup sets on the two USB drives, as attaching the VHD to a Win 7 box did not show the expected structure (instead the volumes behaved like unformatted disk space). Subsequently, I attached the backup drive to the 2008 R2 machine that I'd intended to be the replacement, and the backup set can be browsed without issue (seemingly). When the new disks arrive I'll go through the recovery process and see where we are, but it looks promising so far. Is it simply the case that you can't take server-created VHDs and mount them on desktop machines? (Rather than hyperventilating at the thought of years of lost photos and email, I'm now just mildly curious.) Edit: One thing that has confused matters is that the backup utility in Win 7 is more restrictive about restoring from external devices than the equivalent in 2008 R2. With R2, I can restore files 'from another server' and browse to external storage; Win 7 only allows the backup to be located on a network share. Once my box of new disks arrives and I've got something to restore onto, I'll move the smaller of the backup VHDs onto network storage reachable by Win 7 and see if the VHD is readable. I haven't read up on the VHD process used by the backup app; I'm assuming it's a base VHD plus differencing files for the incremental backups, and that the restore app understands this. Finally: in retrospect the question should have been, 'can I restore a 2008 R2 backup set via a Win 7 client?' Thanks
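    For reference, this is how I was attaching the VHD on the Windows 7 box (from an elevated command prompt; the path is illustrative, and read-only so the backup chain doesn't get modified):

      diskpart
      DISKPART> select vdisk file="E:\WindowsImageBackup\server\backupdisk.vhd"
      DISKPART> attach vdisk readonly
      (the volume then appears in Explorer; "detach vdisk" removes it again)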

    Read the article

  • Vagrant doesn't detect chef-solo unless re-installed

    - by nightowl
    I am using Vagrant to test my Chef recipes in Amazon AWS, and I am encountering an irritating issue. I initially assumed that Vagrant would install Chef itself (as it does when using VirtualBox as the provider), but it seems that this needs to be done via the cloud-init script. However, even after I successfully installed the chef gem via cloud-init I was still getting the following error:

      The chef binary (either chef-solo or chef-client) was not found

    A quick google of this error suggested three probable causes:

      - Chef had failed to install
      - it had installed, but the directory was not in the $PATH environment variable
      - it had installed and was in the $PATH, but with incorrect permissions

    I logged in and double-checked: chef-solo and chef-client were installed, the path variable for the user, sudo and root all included /usr/local/bin, and permissions were all fine. I managed to solve the problem by uninstalling and reinstalling the gem using sudo gem install chef. I don't understand why this should resolve the issue, and it is a bit of a problem if I have to SSH into a test box and manually install the gem every time. Does anyone have any suggestions why this might be happening?
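    For completeness, this is roughly the user-data script I'm feeding to cloud-init to install Chef before Vagrant's provisioner runs (a sketch, and the --no-ri/--no-rdoc flags are only there to speed the install up):

      #!/bin/bash
      # cloud-init user-data sketch: install the chef gem system-wide
      gem install chef --no-ri --no-rdoc
      # confirm the binaries Vagrant looks for are on the default PATH
      which chef-solo chef-client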

    Read the article

  • Loopback connection via PHP's getimagesize crashes server (Magento's CMS)

    - by Alex
    We were able to trace a problem that is crashing our NGINX server running Magento down to the following point. Background info: the Magento backend has a CMS function with a WYSIWYG editor. This editor loads some pictures via a controller in Magento (cms/directive). When we set the NGINX error_log level to info, we get the following lines (line break inserted for better readability):

      2012/10/22 18:05:40 [info] 14105#0: *1 client closed prematurely connection, so upstream connection is closed too while sending request to upstream, client: XXXXXXXXX, server: test.local, request: "GET index.php/admin/cms_wysiwyg/directive/___directive/BASEENCODEDIMAGEURL,,/ HTTP/1.1", upstream: "fastcgi://127.0.0.1:9024", host: "test.local"

    When stepping through the code in the debugger, the following call never returns (in Varien_Image_Adapter_Abstract::getMimeType()):

      # $this->_fileName is http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif
      # $_SERVER['REQUEST_URI'] = http://test.local/admin/cms_wysiwyg/directive/___directive/BASEENCODEDIMAGEURL
      list($this->_imageSrcWidth, $this->_imageSrcHeight, $this->_fileType, ) = getimagesize($this->_fileName);

    The requested filename is a URL pointing back to the same server that is executing the script, a link to a static .gif that does not exist, for example: http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif. When the above line is executed, the NGINX server stops responding to any subsequent request. After waiting around 10 minutes, it starts answering requests again. I tried to reproduce the error with a simple test script that only calls getimagesize() with the given URL, but that does not crash anything; it simply fails with a message saying the URL could not be loaded (which is fine, as the URL is wrong).

    Read the article

  • How to set CA cert file for LDAP backend server in smbpasswd configuration

    - by hayalci
    I am having a problem with smbpasswd, an LDAP backend server and SSL/TLS certificates. The client machine that I run smbpasswd on is a Debian Etch machine, and the LDAP server is Sun DS running on Solaris. All of the following occurs on the client. When I disable SSL, by setting "ldap ssl = no" in smb.conf, the smbpasswd program works without errors. When I set "ldap ssl = start tls", the following messages are printed by smbpasswd and there is a long timeout before it asks for a password:

      Failed to issue the StartTLS instruction: Connect error
      Connection to LDAP server failed for the 1 try!
      ..... long delay .....
      New SMB password:
      Retype new SMB password:
      Failed to issue the StartTLS instruction: Connect error
      Connection to LDAP server failed for the 1 try!
      smbpasswd: /tmp/buildd/openldap2-2.1.30/libraries/liblber/io.c:702: ber_get_next: Assertion `0' failed.
      Aborted

    I conducted some tests with "ldapsearch -ZZ". It was not working at first, but after I added the TLS_CACERT line to /etc/ldap/ldap.conf, /etc/libnss-ldap.conf and /etc/pam_ldap.conf, it started working. The relevant TLS sections in all those files are:

      ssl start_tls
      tls_checkpeer no
      tls_cacertfile /path/to/ca-root.pem
      TLS_CACERT /path/to/ca-root.pem

    But the smbpasswd program continues giving the error. I also tried creating an /etc/smbldap-tools/smbldap.conf file with the following content (after consulting the Debian docs for the smbldap-tools package), but as far as I can see smbpasswd comes with the samba-common package and does not use the configuration for the smbldap-tools utilities:

      verify="optional"
      cafile="/path/to/ca-root.pem"

    My question is: how can I set which SSL CA certificate is used by the smbpasswd program?

    Read the article

  • Ubuntu upgrade process failed

    - by Spin0us
    I tried to dist-upgrade my Ubuntu server in my Percona cluster, but it failed with this message:

      The following packages have unmet dependencies:
       libmysqlclient18 : Depends: libmariadbclient18 (= 5.5.33a+maria-1~precise) but it is not installable

    And here is the package listing:

      # dpkg --list | grep -E 'percona|mysql'
      ii  libdbd-mysql-perl                    4.020-1build2              Perl5 database interface to the MySQL database
      iU  libmysqlclient18                     5.5.33a+maria-1~precise    Virtual package to satisfy external depends
      ii  mariadb-common                       5.5.33a+maria-1~precise    MariaDB database common files (e.g. /etc/mysql/conf.d/mariadb.cnf)
      ii  percona-xtrabackup                   2.1.5-680-1.precise        Open source backup tool for InnoDB and XtraDB
      ii  percona-xtradb-cluster-client-5.5    5.5.31-23.7.5-438.precise  Percona Server database client binaries
      ii  percona-xtradb-cluster-common-5.5    5.5.33-23.7.6-496.precise  Percona Server database common files (e.g. /etc/mysql/my.cnf)
      ii  percona-xtradb-cluster-galera-2.x    157.precise                Galera components of Percona XtraDB Cluster
      ii  percona-xtradb-cluster-server-5.5    5.5.31-23.7.5-438.precise  Percona Server database server binaries
      ii  php5-mysql                           5.3.10-1ubuntu3.8          MySQL module for php5

    During the install of the server, MariaDB and Galera cluster were installed first, then removed to be replaced by Percona XtraDB Cluster, so I think that is the source of the problem. But how can I resolve this without reinstalling everything?

    UPDATE 1

      # apt-cache policy libmariadbclient18
      libmariadbclient18:
        Installed: (none)
        Candidate: (none)
        Version table:
           5.5.32+maria-1~precise 0
              100 /var/lib/dpkg/status
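    What I'm tempted to try next (not yet tested, and I'd back up the cluster node first) is removing the leftover MariaDB client packages so the half-installed libmysqlclient18 stops pulling in the missing libmariadbclient18, roughly:

      apt-get remove libmysqlclient18 mariadb-common
      apt-get -f install      # let apt repair whatever dependencies remain
      apt-get dist-upgrade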

    Read the article

  • How to send mail with PHP [migrated]

    - by roth66
    My little problem is with the mail() function in PHP: it doesn't want to send emails to my local server or anywhere else. I don't think that function was ever supposed to send mail to addresses like [email protected]. So:

      - I've installed a mail server: hMailServer
      - I installed a client: dream-mail
      - I installed sendmail.exe (actually unzipped it into a folder, then in php.ini set sendmail_path to point to it)

    After countless trials and errors, it still doesn't work. My system comprises an Apache 2.2 server and PHP (latest version, I think 5.3 or something), running on Windows. And now, to head off the usual questions (did you make rules in your firewall, etc.), I should mention that there aren't any connectivity issues: everything is set to local (localhost), and ports 25, 110 and 143 are all open. After a few days of fiddling with my brand new mail server, I managed to make it work: the dream-mail client has a test which checks its connections, and according to it the SMTP and POP3 connections are both successful; it even sends a test email. So yes, the server works. The problem remains: PHP's mail() function. And I really need it, since my website has a contact form which is useless right now. I've also checked the form itself, and it seems to be alright.
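    For reference, these are the php.ini settings I believe matter for mail() on Windows; the values shown are what I think they should be for a local hMailServer (a guess to verify, not a confirmed fix), and the sender address and sendmail path are placeholders:

      [mail function]
      ; used by mail() on Windows when no sendmail_path is set
      SMTP = localhost
      smtp_port = 25
      sendmail_from = webmaster@mydomain.example
      ; or, alternatively, route through the fake-sendmail binary:
      ; sendmail_path = "C:\sendmail\sendmail.exe -t -i"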

    Read the article

  • Cisco ASA user authentication options - OpenID, public RSA sig, others?

    - by Ryan
    My organization has a Cisco ASA 5510 which I have made act as a firewall/gateway for one of our offices. Most resources a remote user would come looking for exist inside. I've implemented the usual deal - basic inside networks with outbound NAT, one primary outside interface with some secondary public IPs in the PAT pool for public-facing services, a couple of site-to-site IPsec links to other branches, etc. - and I'm working now on VPN. I have the WebVPN (clientless SSL VPN) working and even traversing the site-to-site links. At the moment I'm leaving a legacy OpenVPN AS in place for thick-client VPN.

    What I would like to do is standardize on an authentication method for all VPN users and then switch to the Cisco's IPsec thick-client VPN server. I'm trying to figure out what's really possible for authentication for these VPN users (thick-client and clientless). My organization uses Google Apps and we already use dotnetopenauth to authenticate users for a couple of internal services. I'd like to be able to do the same thing for thin and thick VPN. Alternatively, a signature-based solution using RSA public keypairs (ssh-keygen type) would be useful to identify user@hardware. I'm trying to get away from legacy username/password auth, especially if it's internal to the Cisco (just another password set to manage and for users to forget). I know I can map against an existing LDAP server, but we have LDAP accounts created for only about 10% of the user base (mostly developers with Linux shell access).

    I guess what I'm looking for is a piece of middleware which appears to the Cisco as an LDAP server but will interface with the user's existing OpenID identity. Nothing I've seen in the Cisco suggests it can do this natively. But RSA public keys would be a runner-up, and much much better than standalone or even LDAP auth. What's really practical here?

    Read the article

  • Different background color for multiple filetypes in vim

    - by puk
    Is it possible to have different background colors in vim (i.e. one light, one dark) when dealing with files that have multiple filetypes (i.e. :set ft=html.php)? In PHP code with HTML embedded, it can be difficult to see one single PHP statement amongst dozens of HTML lines. I'll settle for anything, be it a different background color, a marker in the margin, a second left margin (one vim plugin does this for marks), or maybe highlighting the <?php tag (although that's less than ideal). EDIT: I don't think this is possible at the syntax level, as the syntax appears to use a limited number of elements (String, Function, Identifier...). This is no doubt to allow easy integration with colorschemes. SyntaxAttr is a good plugin to demonstrate this: put the cursor over any part of the code and it will tell you what syntax group it belongs to.
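    The closest thing I've come up with so far is giving the embedded-PHP syntax region its own background from my vimrc; treat this as a sketch, since I'm not certain phpRegion is the right group name in every version of php.vim, and the colors are arbitrary:

      " give code between <?php ... ?> a darker background than the surrounding HTML
      highlight phpRegion ctermbg=235 guibg=#2b2b2b
      " re-apply it whenever a colorscheme resets the highlight groups
      autocmd ColorScheme * highlight phpRegion ctermbg=235 guibg=#2b2b2b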

    Read the article

  • Gwibber doesn't launch post-upgrade

    - by Arcath
    I updated my PC from Ubuntu 9.10 to 10.04, and one of the things I was looking forward to was the "me menu" and Gwibber. For some reason Gwibber doesn't launch at all. When I try to launch it from a terminal I get:

      [21:02:20][arcath@Highgate ~]$ gwibber
      ** (gwibber:8182): WARNING **: Trying to register gtype 'WnckWindowState' as enum when in fact it is of type 'GFlags'
      ** (gwibber:8182): WARNING **: Trying to register gtype 'WnckWindowActions' as enum when in fact it is of type 'GFlags'
      ** (gwibber:8182): WARNING **: Trying to register gtype 'WnckWindowMoveResizeMask' as enum when in fact it is of type 'GFlags'
      Traceback (most recent call last):
        File "/usr/bin/gwibber", line 50, in <module>
          from gwibber import client
        File "/usr/lib/python2.6/dist-packages/gwibber/client.py", line 3, in <module>
          import gtk, gobject, gwui, util, resources, actions, json, gconf
        File "/usr/lib/python2.6/dist-packages/gwibber/gwui.py", line 2, in <module>
          import os, json, urlparse, resources, util
        File "/usr/lib/python2.6/dist-packages/gwibber/util.py", line 2, in <module>
          from microblog.util.couch import RecordMonitor
        File "/usr/lib/python2.6/dist-packages/gwibber/microblog/util/couch.py", line 10, in <module>
          OAUTH_DATA = desktopcouch.local_files.get_oauth_tokens()
        File "/usr/lib/python2.6/dist-packages/desktopcouch/local_files.py", line 323, in get_oauth_tokens
          oauth_token_secrets = cf.items_in_section("oauth_token_secrets")[0]
        File "/usr/lib/python2.6/dist-packages/desktopcouch/local_files.py", line 189, in items_in_section
          raise ValueError("Section %r not present." % (section_name,))
      ValueError: Section 'oauth_token_secrets' not present.

    and I can't work out what's wrong with it. Can anyone point me in the right direction?

    Read the article

  • Domain Controller DNS Best Practice/Practical Considerations for Domain Controllers in Child Domains

    - by joeqwerty
    I'm setting up several child domains in an existing Active Directory forest and I'm looking for some conventional wisdom/best practice guidance for configuring both the DNS client settings on the child domain controllers and the DNS zone replication scope. Assuming a single domain controller in each domain, and assuming that each DC is also the DNS server for its domain (for simplicity's sake): should the child domain controller point to itself for DNS only, or should it point to some combination (primary vs. secondary) of itself and the DNS server in the parent or root domain? If a parent > child > grandchild domain hierarchy exists (with a contiguous DNS namespace), how should DNS be configured on the grandchild DC? Regarding the DNS zone replication scope: if each domain's DNS zone is stored on all DNS servers in that domain, then I'm assuming a DNS delegation from the parent to the child needs to exist and that a forwarder from the child to the parent needs to exist. With a parent > child > grandchild hierarchy, does each child forward to its direct parent for the direct parent's zone or to the root zone? Does the delegation occur at the direct parent zone or from the root zone? If all DNS zones are stored on all DNS servers in the forest, does that make the above questions about the replication scope moot? Does the replication scope have some bearing on the DNS client settings on each DC?

    Read the article

  • How to email photo from Ubuntu F-Spot application via Gmail?

    - by Norman Ramsey
    My father runs Ubuntu and wants to be able to use the Gnome photo manager, F-Spot, to email photos. However, he must use Gmail as his client because (a) it's the only client he knows how to use and (b) his ISP refuses to reveal his SMTP password. I've got as far as setting up Firefox to use GMail to handle mailto: links and I've also configured Firefox as the system default mailer using gnome-default-applications-properties. F-Spot presents a mailto: URL with an attach=file:///tmp/mumble.jpg header. So here's the problem: the attachment never shows up. I can't tell if Firefox is dropping the attachment header, if GMail doesn't support the header, or what. I've learned that:

      - There's no official header in the mailto: URL RFC that explains how to add an attachment.
      - I can't find documentation on how Firefox handles mailto: URLs that would explain to me how to communicate to Firefox that I want an attachment.
      - I can't find any documentation for GMail's URL API that would enable me to tell GMail directly to start composing a message with a given file as an attachment.

    I'm perfectly capable of writing a shell script to interpolate around F-Spot to massage the URL that F-Spot presents into something that will coax Firefox into doing the right thing. But I can't figure out how to persuade Firefox to start composing a GMail message with a local file attached. Any help would be greatly appreciated.

    Read the article
