Search Results

Search found 26176 results on 1048 pages for 'stream socket client'.


  • multiple streaming servers behind a Bastion Host

    - by Bond
    I am using the open source streaming server Red5 on multiple servers, all running behind a bastion host. The world knows these sites as http://site1.mydomain.com, http://site2.mydomain.com, http://site3.mydomain.com and http://site4.mydomain.com. HTTP traffic reaches the front end through an Apache reverse proxy. Each of these websites also streams video over RTMP, and to reach the streaming server I embed the following in the HTML pages: Code: <embed ..... var="rtmp://site1.my_domain.com" >. The problem is that there are four sites (site1.mydomain.com through site4.mydomain.com), each on a separate physical server with its own Red5 installation, and the front end for all four is the common bastion host. If I run RTMP on each subdomain at a different port, how do I make sure that a request such as rtmp://site1.mydomain.com or rtmp://site2.mydomain.com is forwarded from the front-end server to its respective backend? What do I need to handle in this case? IPTABLES came to mind instantly, but when someone on the internet requests rtmp://site1.mydomain.com from a client browser, how do I make sure that RTMP request is mapped to a port other than 1935, given that three other streaming servers also have to answer their own requests?
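    Since an Apache reverse proxy only handles HTTP and RTMP carries no Host header to dispatch on, one common workaround is to expose each site's Red5 on its own public port on the bastion and DNAT it to the right backend with iptables. A rough sketch, with hypothetical backend addresses, ports and interface names:

      # On the bastion: one public port per site, forwarded to that site's Red5
      iptables -t nat -A PREROUTING -p tcp --dport 1935 -j DNAT --to-destination 10.0.0.11:1935   # site1
      iptables -t nat -A PREROUTING -p tcp --dport 1936 -j DNAT --to-destination 10.0.0.12:1935   # site2
      iptables -t nat -A PREROUTING -p tcp --dport 1937 -j DNAT --to-destination 10.0.0.13:1935   # site3
      iptables -t nat -A PREROUTING -p tcp --dport 1938 -j DNAT --to-destination 10.0.0.14:1935   # site4
      # eth1 stands in for the bastion's internal interface
      iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE

    The embed on site2 would then point at rtmp://site2.mydomain.com:1936, and so on; the port in the URL, not the hostname, is what selects the backend.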

    Read the article

  • Symbolic directory link shared in domain

    - by Sabre
    We have a file server that is 2008 R2 STD; it is a member server in a 2008 AD. I need to relocate some of the files and directories and would like to do it behind the scenes, more or less without impacting the users. (The reason is that, due to recent software changes, some of the files HAVE to be located locally on one of the workstations, but they can still be accessed by other applications remotely.) Symbolic links seem the panacea here: I moved a directory to another network share in the same domain (Windows 7 Professional), created a symlink to it in the location it used to be in, named it the same thing, and to the local user it seems almost transparent. That is, when logged into the desktop of the file server, I can go to the directory, open the link, and it jumps to the other share as if it were local, exactly what would be expected. Then I tried it from another client computer (also Windows 7 Professional) and went through the normal provisioning of R2R and L2R with fsutil... No joy. What I get is an access denied error, "Logon failure: Unknown username or bad password.", using the same account that I log on to the file server with locally (which happens to be the domain admin), so I cannot believe it is telling the truth; I assume the credentials I used to connect to the first share are not being passed all the way through the symlink. The end result I want is for users on the domain to browse to share A, where share A contains a mixture of directories/files that reside there and symlinks to directories/files on the second machine over the network in the same domain. Possible? Or am I misunderstanding how the symlink should work?
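    For reference, the remote-to-remote evaluation mentioned above is toggled with fsutil on the machine that follows the link; a typical sequence (a sketch, run in an elevated prompt on the client) is:

      rem Allow remote-to-remote and local-to-remote symlinks to be followed
      fsutil behavior set SymlinkEvaluation L2L:1 L2R:1 R2R:1 R2L:1
      rem Confirm what is currently in effect
      fsutil behavior query SymlinkEvaluation

    Note that even with R2R enabled, the second hop is authenticated by the client that follows the link, not relayed by the file server, so the credential error described above usually means the client account itself needs rights on the target share.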

    Read the article

  • How do I make dnsmasq serve IP addresses via IPoIB?

    - by Matt
    I have a cluster farm that I'm setting up. The nodes (computers in the farm) are connected via Ethernet and IP over InfiniBand. I need to netboot the nodes and thought dnsmasq would fit well, as it provides all the required features, including support for DHCP over IB, and it works great for our Ethernet setup. However, I can't seem to get it to provide IP addresses to the InfiniBand adapters on the nodes. Each node runs Ubuntu Desktop 12.04 LTS. The dnsmasq server runs on Ubuntu Server 12.04 LTS and has the following test config:

      dhcp-authoritative
      domain-needed
      bogus-priv
      expand-hosts
      no-hosts
      domain=local
      dhcp-range=eth0,10.0.0.10,10.0.0.255,12h
      dhcp-option=eth0,3,10.0.0.1
      dhcp-range=ib0,10.1.1.10,10.1.1.255,12h
      dhcp-option=ib0,3,10.1.1.1
      log-queries
      log-dhcp

    IPoIB works between nodes when configured statically, but not with DHCP. On the nodes, /etc/network/interfaces contains:

      auto lo
      iface lo inet loopback

      auto ib0
      iface ib0 inet dhcp
      #iface ib0 inet static
      #address 10.1.1.5
      #netmask 255.0.0.0
      up echo connected >`find /sys -name mode | grep ib0`

    Is there something I need to do on the client or server end to make this work?
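    One detail that often bites DHCP over IPoIB: ib0 has no 48-bit MAC address, so the client has to identify itself with a DHCP client identifier and the server has to hand out leases keyed on that identifier. A hedged sketch, where the identifier is a placeholder derived from the port GUID rather than a real value:

      # On each node, /etc/dhcp/dhclient.conf
      interface "ib0" {
          send dhcp-client-identifier ff:00:00:00:00:00:02:00:00:02:c9:00:xx:xx:xx:xx:xx:xx:xx:xx;
      }

      # On the dnsmasq server, match that identifier to a lease
      dhcp-host=id:ff:00:00:00:00:00:02:00:00:02:c9:00:xx:xx:xx:xx:xx:xx:xx:xx,10.1.1.10

    The log-dhcp output (already enabled in the config above) should show whether DHCPDISCOVERs arrive on ib0 at all, and whether they carry a client identifier the server can use.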

    Read the article

  • Synergy on macbook with osx mavericks wifi connection

    - by user332956
    I'm trying to set up Synergy with my MacBook Pro running OS X 10.9.3 as a client and my Windows 7 desktop as a server. I'm having some pretty bad connection problems, though, when I try to use my Mac. Every couple of seconds the mouse or the keyboard stops working entirely, then comes back. I ran some tests and found that the ping from my desktop to my Mac would be very high every third ping or so (1000+ ms), or would sometimes even time out. If I ping my desktop from my Mac, the pings are all reasonably low. I believe this is a power-saving feature of Mavericks, and I have found a way to get around it by continually pinging my router from my Mac, which keeps the Wi-Fi card from going to sleep. I'm using this right now to type this up over Synergy and have had zero issues. Has anyone else run into this issue and found a better solution? So far, I think my best bet would be to buy an Ethernet adapter, but I'd rather not have yet another cable running across my desk.
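    For what it's worth, the keep-alive workaround can be scripted so it doesn't need a terminal window left open; a minimal sketch, with 192.168.1.1 as a placeholder for the router's address:

      # Ping the router every 5 seconds in the background to keep the
      # Wi-Fi card from idling; run from Terminal or a login item.
      nohup ping -i 5 192.168.1.1 > /dev/null 2>&1 &

    A launchd job would be the tidier way to start this automatically at login, but the one-liner is enough to confirm the power-saving theory.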

    Read the article

  • Hostname error on my Slicehost Ubuntu server

    - by allesklar
    Like many folks who upgraded to Rails 2.2, I got an exception raised when sending an email; this version of Rails and later require TLS for sending email. The message in the production log file says: "hostname was not match with the server certificate". I did a whole lot of research and work on this and did everything I could. I changed my slice's hostname to ohlalaweb.com; if I run the command 'hostname' at the command line I get: ohlalaweb.com. Postfix seems to work fine: I can send emails from the command line to my Gmail, Yahoo, and Google Apps Gmail accounts with no problems. Here is the result of cat /etc/postfix/main.cf:

      # See /usr/share/postfix/main.cf.dist for a commented, more complete version
      # Debian specific: Specifying a file name will cause the first
      # line of that file to be used as the name. The Debian default
      # is /etc/mailname.
      myorigin = /etc/mailname
      smmtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
      biff = no
      # appending .domain is the MUA's job.
      append_dot_mydomain = no
      # Uncomment the next line to generate "delayed mail" warnings
      #delay_warning_time = 4h
      readme_directory = no
      # TLS parameters
      smtpd_tls_cert_file=/etc/ssl/certs/ohlalaweb.pem
      smtpd_tls_key_file=/etc/ssl/certs/ohlalaweb.pem
      smtpd_use_tls=yes
      # SA created next line to force postfix to use self create certificate
      smtpd_tls_auth_only=yes
      smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
      smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
      # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
      # information on enabling SSL in the smtp client.
      myhostname = ohlalaweb.com
      alias_maps = hash:/etc/aliases
      alias_database = hash:/etc/aliases
      mydestination = localhost.localdomain, localhost
      relayhost =
      mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
      mailbox_size_limit = 0
      recipient_delimiter = +
      inet_interfaces = all

    I have regenerated the SSL keys with the ohlalaweb.com hostname. Any ideas or suggestions?
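    The error itself comes from TLS verification in the Ruby SMTP client: the hostname ActionMailer connects to has to match the CN in the certificate Postfix presents. A quick way to see what that certificate actually says (a diagnostic sketch, not a fix):

      # Show the subject and validity of the certificate offered on port 25 via STARTTLS
      openssl s_client -connect ohlalaweb.com:25 -starttls smtp 2>/dev/null \
        | openssl x509 -noout -subject -dates

    If the subject CN is anything other than ohlalaweb.com (for example an old self-signed certificate Postfix is still loading), reloading Postfix after regenerating the certificate, and making sure the Rails smtp_settings use that exact hostname as the address, should line the two up.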

    Read the article

  • Can I use Outlook 2010 (beta) with OWA account?

    - by Dan
    One of the new features of Outlook 2010 (beta) is support for multiple Exchange accounts. I'm wondering if there is any way to use this together with a (different) Outlook Web Access account to also get that email in Outlook. Specifically, in addition to my regular corporate (Exchange) account, I also use another corporate account through OWA. With this second account, the only supported access is through OWA; while POP3 access is available, it is not actually supported. I'm not very familiar with configuring Exchange servers, but in talking to those who are, it sounds like enabling Outlook Web Access is (slightly) different from allowing access from Outlook via HTTP(S). Is that correct? If so, it doesn't really seem quite right, since in the absolute worst case one could (theoretically) resort to screen-scraping OWA. Edit: this looks to be about the same question as Activesync/OWA Desktop Client. (This doesn't have anything to do with the question, but I'm currently getting this second corporate account into Outlook by POP3'ing it into Gmail and then using IMAP4 from Gmail to Outlook. Obviously, it would be much nicer to add it as a second Exchange account.)

    Read the article

  • Solaris Fibre Channel target - Configure QLogic QLA2340

    - by growse
    I'm currently trying to set up a small storage system as a fibre channel target. This is for testing, so I'm currently using Solaris (Nexenta) and a QLogic QLA2340 HBA. For some reason, the qlc and qlt drivers don't support the QLA2340, so I'm using the qla2300 driver from QLogic's website. I've also got the scli utility installed for configuration. The HBA is detected by the system. That said, it's not clear how I get from this point to a point where I have a ZFS volume being exposed as an FC target. I was originally following this guide (http://www.youtube.com/watch?v=yzEBd3l7Qn4) but it seems that without the qlc/qlt drivers, Sun's configuration tools won't work. Does that also imply that COMSTAR also won't work? What's the best way to expose an FC target with this setup? Most of the options I'm seeing in scli complain that the port state is LinkDown (it is, I've not plugged anything in yet). Do I have to have my FC client plugged up and working before I can configure the target? Apologies for the slight vagueness of the question, but I'm not overly familiar with the terminology.
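    For what it's worth, the usual COMSTAR sequence does presuppose a target-mode driver (qlt) bound to the HBA, which answers part of the question: with only the initiator-mode qla2300 driver the port will never appear as a target. As a sketch of where you would end up once a target-capable driver is in place (names invented):

      # Back the LUN with a zvol and publish it through STMF
      zfs create -V 100G tank/fc-lun0
      svcadm enable stmf
      sbdadm create-lu /dev/zvol/rdsk/tank/fc-lun0
      stmfadm add-view 600144f0...        # GUID reported by 'sbdadm list-lu'
      stmfadm list-target -v              # FC ports appear here only in target mode

    The LinkDown state by itself shouldn't block configuration, but nothing will be testable until an initiator is cabled up or a loopback is in place.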

    Read the article

  • Exchange eMails In A Mailbox Appear To Be Blocked By A Dud Message

    - by John Judd
    I have a client with an Exchange server on which there are quite a few mailboxes. One mailbox in particular is causing some problems. When an email from a certain address arrives, it appears to prevent Exchange from successfully delivering mail to the Outlook Express inbox. The address in question is from a Bigpond account, or at least I think it is; I didn't check whether it was spoofed (that only just occurred to me). Any emails in the queue ahead of the suspect email are delivered, then Outlook Express times out. When send/receive is retried, those emails are received again and the process times out again. The process I have for fixing this is to log in to the server, load Outlook, open the recipient's inbox, and delete the suspect email. Retrying send/receive in Outlook Express then successfully retrieves all the messages (except the deleted one). This solves the immediate problem, but it has happened several times now, and each time requires the process above to correct it. What I am wondering is whether there is anything I can do to fix this permanently. It seems to me that Exchange should reject a dud email message rather than getting stuck. Does anyone know what could be causing this, and how I can fix it?

    Read the article

  • ssh keys rejected each day

    - by EddyR
    I've had OpenSSH server running on my Debian server for a couple of weeks, and all of a sudden, when I go to log in the next day, it rejects my SSH key and I have to manually add a new one each time. Not only that, but I have the "tunneling with clear-text passwords" option enabled, and the non-root account for that is rejected too (login as root is disabled). I'm at a loss as to why this is happening, and I can't find any SSH options that would explain it. --update-- I just changed the log level to DEBUG, but before that I was seeing a lot of the following in auth.log:

      Feb  1 04:23:01 greenpages CRON[7213]: pam_unix(cron:session): session opened for user root by (uid=0)
      Feb  1 04:23:01 greenpages CRON[7213]: pam_unix(cron:session): session closed for user root
      ...
      Feb  1 04:36:26 greenpages sshd[7217]: reverse mapping checking getaddrinfo for nat-pool-xx-xx-xx-xx.myinternet.net [xx.xx.xx.xx] failed - POSSIBLE BREAK-IN ATTEMPT!
      ...
      Feb  1 04:37:31 greenpages sshd[7223]: Did not receive identification string from xx.xx.xx.xx
      ...

    My sshd_config settings are:

      # Package generated configuration file
      # See the sshd(8) manpage for details

      # What ports, IPs and protocols we listen for
      Port xxx
      # Use these options to restrict which interfaces/protocols sshd will bind to
      #ListenAddress ::
      #ListenAddress 0.0.0.0
      Protocol 2
      # HostKeys for protocol version 2
      HostKey /etc/ssh/ssh_host_rsa_key
      HostKey /etc/ssh/ssh_host_dsa_key
      #Privilege Separation is turned on for security
      UsePrivilegeSeparation yes

      # Lifetime and size of ephemeral version 1 server key
      KeyRegenerationInterval 3600
      ServerKeyBits 768

      # Logging
      SyslogFacility AUTH
      LogLevel DEBUG

      # Authentication:
      LoginGraceTime 120
      PermitRootLogin no
      StrictModes yes

      RSAAuthentication yes
      PubkeyAuthentication yes
      #AuthorizedKeysFile %h/.ssh/authorized_keys

      # Don't read the user's ~/.rhosts and ~/.shosts files
      IgnoreRhosts yes
      # For this to work you will also need host keys in /etc/ssh_known_hosts
      RhostsRSAAuthentication no
      # similar for protocol version 2
      HostbasedAuthentication no
      # Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication
      #IgnoreUserKnownHosts yes

      # To enable empty passwords, change to yes (NOT RECOMMENDED)
      PermitEmptyPasswords no

      # Change to yes to enable challenge-response passwords (beware issues with
      # some PAM modules and threads)
      ChallengeResponseAuthentication no

      # Change to no to disable tunnelled clear text passwords
      PasswordAuthentication yes

      # Kerberos options
      #KerberosAuthentication no
      #KerberosGetAFSToken no
      #KerberosOrLocalPasswd yes
      #KerberosTicketCleanup yes

      # GSSAPI options
      #GSSAPIAuthentication no
      #GSSAPICleanupCredentials yes

      X11Forwarding no
      X11DisplayOffset 10
      PrintMotd no
      PrintLastLog yes
      TCPKeepAlive yes
      #UseLogin no

      #MaxStartups 10:30:60
      #Banner /etc/issue.net

      # Allow client to pass locale environment variables
      AcceptEnv LANG LC_*

      Subsystem sftp /usr/lib/openssh/sftp-server

      UsePAM no
      ClientAliveInterval 60
      AllowUsers myuser
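    Not an answer, but when keys that used to work start being refused with StrictModes enabled, the first things worth ruling out are home-directory permissions and the server-side log entry for the actual rejection; a quick check sketch:

      # sshd (StrictModes yes) refuses pubkey auth if these are group/world writable
      chmod go-w ~
      chmod 700 ~/.ssh
      chmod 600 ~/.ssh/authorized_keys
      # then watch the server's reason while reproducing the failure
      tail -f /var/log/auth.log | grep sshd

    With LogLevel DEBUG already set above, the auth.log line logged at the moment of the failed login should say precisely why the key or password was rejected.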

    Read the article

  • How to Change the Kerberos Default Ticket Lifetime

    - by user40497
    Our KDC servers are running either Ubuntu Dapper (2.6.15-28) or Hardy (2.6.24-19). The Kerberos software is the MIT implementation of Kerberos 5. By default, a Kerberos ticket lasts for 10 hours. However, we'd like to increase it a bit (e.g. to 14 hours) to suit our needs better. I have done the following, but the ticket lifetime still stays at 10 hours:

    1) On all the KDC servers, set the following parameter under [realms] in /etc/krb5kdc/kdc.conf and restarted the KDC daemon:

       max_life = 14h 0m 0s

    2) Via "kadmin", changed the "maxlife" for a test principal via "modprinc -maxlife 14hours ". "getprinc " shows that the maximum ticket life is indeed 14 hours:

       Maximum ticket life: 0 days 14:00:00

    3) On a Kerberos client machine, set the following parameters under [libdefaults], [realms], [domain_realm], and [login] in /etc/krb5.conf (everywhere, basically, since nothing I tried had worked):

       ticket_lifetime = 13hrs
       default_lifetime = 13hrs

    With the above settings, I would expect the ticket lifetime to be capped at 13 hours. When I do "k5start -l 14h -t ", I see that the end time on the "renew until" line is now 14 hours after the starting time:

       Valid starting     Expires            Service principal
       04/13/10 16:42:05  04/14/10 02:42:05  krbtgt/@
           renew until 04/14/10 06:42:03

    "-l 13h" would make the end time on the "renew until" line 13 hours after the starting time. However, the ticket still expires in 10 hours (04/13 16:42:05 - 04/14 02:42:05). Am I not changing the right configuration file(s)/parameter(s), not specifying the right option when obtaining a Kerberos ticket, or something else? Any feedback is greatly appreciated! Thank you!
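    One cap that is easy to miss: the effective lifetime is the minimum of the client request, the principal's maxlife, the kdc.conf max_life, and the maxlife of the krbtgt/REALM principal, which also defaults to 10 hours. A sketch using a placeholder realm:

      # Raise the cap on the ticket-granting principal as well
      kadmin -q "modprinc -maxlife 14hours krbtgt/EXAMPLE.COM@EXAMPLE.COM"
      # Then ask for the longer ticket explicitly and check what came back
      kinit -l 14h user@EXAMPLE.COM
      klist

    If the krbtgt principal was the limiting factor, klist should now show the 14-hour expiry without any further krb5.conf changes.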

    Read the article

  • TeamCity sends inadequate responses after Selenium tests

    - by Dmitriy Sukharev
    I have TeamCity 7.0.2 on a CentOS 6.2 server without an X server. I've installed x11-fonts*, xvfb, firefox and xauth, exported the environment variable DISPLAY=localhost:1, and started Xvfb. After that I could run the Selenium tests using Maven. The tests are executed, but there's an issue with TeamCity. TeamCity starts behaving absolutely inadequately: it confuses images on the page, sends XML or strange text (ampersands and numbers) in responses, and is a bit slower. The tests also run about 4 times slower on the server (1h 15m) than on a tester's Windows 7 machine (25m). It's worth noting that the tests launch two Jetty servers for the application under test (one for the REST services application and another for the client). In TeamCity I set the JVM command line parameters -Xms256m -Xmx1224m -XX:MaxPermSize=320m, and the additional Maven command line parameters end with "-DMAVEN_OPTS=-Xmx1024m" (without quotes). Both the web services and TeamCity use the same Oracle server (but different Oracle users). Finally, TeamCity and its build agent are on the same server. The server has only 4 GB of RAM; during testing there are 400 MB of RAM and 1.2 GB of swap in use, and TeamCity and Firefox use about 65% of the CPU. There is no firefox process left after the end of testing. My knowledge of Selenium is weak; I only know that we use version 2.20.0 of the selenium-java Maven dependency. Please help me determine why TeamCity sends wrong responses after the Selenium tests. I've tried to give you all the information I have, but feel free to ask me for more.
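    Exporting DISPLAY globally on a box that also runs the TeamCity server means the browsers and the server share an environment; a common pattern is to give each build its own Xvfb instance instead of the global localhost:1. A sketch of a build step (the display number is arbitrary):

      # Start a throwaway virtual display for this build only
      Xvfb :99 -screen 0 1280x1024x24 &
      XVFB_PID=$!
      export DISPLAY=:99
      mvn verify        # or whatever goal runs the Selenium suite
      kill $XVFB_PID

    That keeps Firefox's rendering off the agent's shared environment; if the garbled TeamCity responses persist even then, memory pressure (4 GB shared by TeamCity, the agent, two Jetty servers and Firefox) is the next suspect.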

    Read the article

  • OS X AFP shares and access

    - by gbrandt
    I am running 10.5.6 Client as a mini server and am having problems with AFP shares. All clients are OS X 10.5.7. I have created three users for 'File Sharing' only on the 'server', created groups and placed these users into specific groups, and created ACLs to give each group access to certain shares. Two of those users can read and write to any share; one user cannot write to the shares, with different results. When copying a directory, only the directory is created, no files inside are copied, and the OS does not give any errors. When copying a single file I get three dialogs: "You may need to enter the name and password for an administrator on this computer to change the item named 'xxxx'", "The item 'xxxxx' contains one or more items you do not have permission to read. Do you want to copy the items you are allowed to read?", and "The operation cannot be completed because you do not have sufficient privileges for some of the items." With the single file, a file gets created on the server but is empty. My ACL for the group this user belongs to is:

      0: group:projectmembers allow list,add_file,search,delete,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,file_inherit,directory_inherit
      1: group:informationtechnology inherited allow list,add_file,search,delete,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,file_inherit,directory_inherit
      2: group:executive inherited allow list,add_file,search,delete,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,file_inherit,directory_inherit
      3: group:everyone inherited deny list,add_file,search,delete,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,file_inherit,directory_inherit

    Users 1 and 2 belong to informationtechnology, executive and projectmembers, and they can read and write freely on the share. User 3 belongs only to projectmembers and cannot read and write freely. I have read that this is a UID issue; however, users 1 and 2 do not have matching UIDs across clients and server and they work, so I don't think this is the case. Any ideas?

    Read the article

  • Incomplete Apache logging

    - by Manz
    I have a problem with Apache running on a Linux server, for example with PHP "undefined index" notices. The problem is that my Apache server doesn't log the entire error message. Some lines from the error.log file:

      [Thu Nov 29 05:29:06 2012] [warn] mod_fcgid: stderr: PHP Notice: Undefined index: lin
      [Thu Nov 29 05:29:06 2012] [warn] mod_fcgid: stderr: 9
      [Thu Nov 29 05:31:30 2012] [warn] mod_fcgid: stderr: PHP Notice: Undefined index: link in /var/www/html/sit
      [Thu Nov 29 06:01:18 2012] [warn] mod_fcgid: stderr: PHP Notice: Undefined index: link in /var
      [Thu Nov 29 06:06:09 2012] [warn] mod_fcgid: stderr: PHP Notice: Undefined
      [Thu Nov 29 06:06:15 2012] [warn] mod_fcgid: stderr: PHP Notice: Undefined index:
      [Thu Nov 29 06:13:04 2012] [warn] mod_fcgid: stderr: PH
      [Thu Nov 29 07:14:16 2012] [warn] mod_fcgid: stderr: PHP Notice: Undef
      [Thu Nov 29 07:32:16 2012] [warn] mod_fcgid: stderr: PHP Notice: Undefined index: link in /var/www/ht
      [Thu Nov 29 07:34:26 2012] [warn] mod_fcgid: stderr: PHP Notice: Undefined index: link
      [Thu Nov 29 07:34:30 2012] [warn] mod_fcgid: stderr: PHP Notice: Undefined index: link in /var/www/html/site.com/
      [Thu Nov 29 07:41:10 2012] [warn] mod_fcgid: stderr: PHP Notice: Und
      [Thu Nov 29 07:41:11 2012] [warn] mod_fcgid: stderr: PHP Notice: Und
      [Thu Nov 29 07:41:12 2012] [warn] mod_fcgid: stderr: PHP Notice: Und
      [Thu Nov 29 08:14:20 2012] [warn] mod_fcgid: stderr: PHP Notice: Undef
      [Thu Nov 29 12:36:54 2012] [warn] mod_fcgid: stderr: PHP Notice: Undefined index: li
      [Thu Nov 29 12:37:04 2012] [warn] mod_fcgid: stderr: PHP Notice: Unde
      [Thu Nov 29 12:46:52 2012] [warn] mod_fcgid: stderr: PHP Notice: Undefined index: link in /var/www/htm
      [Thu Nov 29 13:00:33 2012] [warn] mod_fcgid: stderr: line 35
      [Thu Nov 29 13:10:55 2012] [error] [client XXX.XX.XX.XX] File does not exist: /var/www/h

    Some lines are incomplete and truncate the error message. Does anyone know why Apache is saving incomplete error messages?
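    The truncation is a property of how mod_fcgid relays PHP's stderr: long notices get chopped into buffer-sized fragments, each logged as its own warn line. A workaround sketch is to stop sending PHP errors through stderr at all and let PHP write its own log:

      ; php.ini (or the per-vhost/FastCGI wrapper's ini): log notices to PHP's
      ; own file instead of stderr, which mod_fcgid fragments
      log_errors = On
      error_log = /var/log/php_errors.log
      display_errors = Off

    The path is only an example and must be writable by the FastCGI user; once in place, the complete "Undefined index" messages show up in that file rather than as truncated mod_fcgid warnings.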

    Read the article

  • GoDaddy SSL on Shared Hosting

    - by Jon
    So I'm very new to using SSL certificates, and I have been trying to install one on a site for a client. He is using shared hosting for multiple domains through GoDaddy, and the site we're working on is not the primary domain. He purchased a UCC certificate for multiple domains, and I installed it on the shared hosting account. My thought was that since the domains were under the same hosting account, they would each be protected by the certificate. This was not the case, apparently. I checked both domains with an SSL checker; the primary domain checked out, but the domain we wanted the SSL on showed the following error: "None of the common names in the certificate match the name that was entered (www.CLIENTDOMAIN.com). You may receive an error when accessing this site in a web browser." I'm not sure how to fix this. It was just purchased yesterday, so if necessary I guess I could un-install it or re-key it (???). Is there a way to just change the common name to www.CLIENTDOMAIN.com (the correct domain)?
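    A quick way to confirm which names the installed certificate actually covers (and therefore whether re-keying is needed) is to read its Subject Alternative Names directly:

      # Show the SAN list the server presents for this hostname
      openssl s_client -connect www.CLIENTDOMAIN.com:443 -servername www.CLIENTDOMAIN.com 2>/dev/null \
        | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"

    A UCC certificate only protects the names that were listed when it was issued, not every domain on the hosting account; if www.CLIENTDOMAIN.com is missing from that list, adding it through GoDaddy's certificate management (which re-keys the certificate) and re-installing is the usual path.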

    Read the article

  • mrepo and grouplist/groupinstall?, mrepo not working as expected with group

    - by user52874
    All, I'm trying to set up mrepo so we can have internal repositories. After quite the slog, things seem to be working as expected EXCEPT for groups. From man createrepo:

      EXAMPLES
         Here is an example of a repository with a groups file. Note that the
         groups file should be in the same directory as the rpm packages
         (i.e. /path/to/rpms/comps.xml).

         createrepo -g comps.xml /path/to/rpms

    So here's what I'm doing:

      wget -c http://ftp.scientificlinux.org/linux/scientific/6/x86_64/os/repodata/comps-sl6-x86_64.xml
      cp comps-sl6-x86_64.xml /var/mrepo/SL6-x86_64/os/Packages/comps-sl6-x86_64.xml
      createrepo -g comps-sl6-x86_64.xml /var/mrepo/SL6-x86_64/os/Packages/

    There is lots of output, with no apparent errors or warnings. BUT, from a client:

      yum grouplist
      Loaded plugins: refresh-packagekit
      Setting up Group Process
      Error: No group data available for configured repositories

    Here's /etc/mrepo.conf:

      ### Configuration file for mrepo

      ### The [main] section allows to override mrepo's default settings
      ### The mrepo-example.conf gives an overview of all the possible settings
      [main]
      srcdir = /var/mrepo
      wwwdir = /var/www/mrepo
      confdir = /etc/mrepo.conf.d
      arch = x86_64
      mailto = root@localhost
      smtp-server = localhost
      pxelinux = /usr/lib/syslinux/pxelinux.0
      tftpdir = /tftpboot
      #rhnlogin = username:password

      ### Any other section is considered a definition for a distribution
      ### You can put distribution sections in /etc/mrepo.conf.d
      ### Examples can be found in the documentation.

    Here's /etc/mrepo.conf.d/sl6.mrepo:

      ### Scientific Linux 6
      [SL6]
      name = Scientific Linux 6
      release = 6
      arch = x86_64
      metadata = repomd repoview
      os = rsync://rsync.scientificlinux.org/scientific/$release/$arch/os/
      updates = rsync://rsync.scientificlinux.org/scientific/$release/$arch/updates/
      security = rsync://rsync.scientificlinux.org/scientific/$release/$arch/updates/security/
      fastbugs = rsync://rsync.scientificlinux.org/scientific/$release/$arch/updates/fastbugs/
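    One thing worth checking: yum only sees group data that is referenced from the repodata directory its baseurl actually points at, and mrepo generates that metadata itself under wwwdir, so a createrepo -g run against srcdir's Packages/ directory may never be served to clients (or may be overwritten on the next sync). A hedged sketch; the wwwdir path is a guess based on the config above:

      # Run createrepo where the clients' baseurl points, not in srcdir
      cd /var/www/mrepo/SL6-x86_64/os
      createrepo -g comps-sl6-x86_64.xml .
      # On a client, force the group metadata to be re-fetched
      yum clean all
      yum grouplist

    If mrepo's own periodic sync/metadata step rewrites that repodata afterwards, the -g step has to be repeated or hooked in after each sync.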

    Read the article

  • PPTP server connection closes - Too much data?

    - by Sebastian Hoitz
    I set up a PPTP server for my company. However, every time another computer is connected to this server (e.g. our backup server) and a lot of data gets transferred, the connection to that computer closes. In the syslog on the PPTP server I find these messages:

      Apr 22 12:44:34 komola-chase pptpd[2581]: CTRL: Reaping child PPP[2583]
      Apr 22 12:44:34 komola-chase pppd[2583]: MPPE disabled
      Apr 22 12:44:34 komola-chase pppd[2583]: Connection terminated.
      Apr 22 12:44:34 komola-chase pppd[2583]: Exit.
      Apr 22 12:44:34 komola-chase pptpd[2581]: CTRL: Client 192.168.0.11 control connection finished
      Apr 22 12:55:11 komola-chase pptpd[2674]: GRE: xmit failed from decaps_hdlc: No buffer space available
      Apr 22 12:55:11 komola-chase pptpd[2674]: CTRL: PTY read or GRE write failed (pty,gre)=(6,7)
      Apr 22 12:55:11 komola-chase pppd[2675]: Modem hangup
      Apr 22 12:55:11 komola-chase pppd[2675]: Connect time 23.0 minutes.

    Hopefully you can help me figure out what is wrong. As far as I can tell, no compression is enabled on the PPTP server (the nobsdcomp option). Thank you!

    Read the article

  • Need help setting up OpenLDAP on OSX Mountain Lion

    - by rjcarr
    I'm trying to get OpenLDAP manually configured on OS X Mountain Lion. I'd prefer to do it manually instead of installing OS X Server, but if that's the only option (i.e., OpenLDAP on OS X isn't meant to be used without Server), then I'll just install it. I've seen guides that mostly just say to change the password in slapd.conf and then start the server, and it should work. However, whenever I try to do anything with the client, it tells me:

      ldap_bind: Invalid credentials (49)

    I've tried encrypting the password as well as leaving it plain; it doesn't seem to matter. The version is 2.4.28, and I've read that as of 2.4 OpenLDAP uses slapd.d directories, but that doesn't seem to be the case on OS X. There was mention of an 'olcRootPW' I should use (instead of 'rootpw' in slapd.conf), but I only found that in a file named slapd.ldif. Anyway, I tried setting a password in there, but it didn't make a difference. So... I'm really confused. Has anyone gotten OpenLDAP working on OS X Mountain Lion without the server tools?
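    The "Invalid credentials (49)" almost always means the bind DN/password pair doesn't match the rootdn/rootpw that the running slapd actually loaded, so it helps to make the test explicit. A minimal sketch with an example suffix (paths as they appear to ship on 10.8; treat them as assumptions):

      # Generate a hashed password and put it in /etc/openldap/slapd.conf:
      #   rootdn "cn=admin,dc=example,dc=com"
      #   rootpw {SSHA}...output-of-slappasswd...
      slappasswd

      # Run slapd in the foreground with debugging so config errors are visible
      sudo /usr/libexec/slapd -d 255 -f /etc/openldap/slapd.conf

      # Bind with the full rootdn, not a bare username
      ldapsearch -x -H ldap://localhost -D "cn=admin,dc=example,dc=com" -W -b "dc=example,dc=com"

    If slapd was actually started from a different config (for instance a converted slapd.d/cn=config tree), the rootpw edited in slapd.conf never takes effect, which would produce exactly the symptom described.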

    Read the article

  • LAMP server VM issues

    - by nullArray
    After getting a recommendation to salvage a wiki by installing a LAMP server, I went on the prowl for a good virtualized one and used the VMware Player version. Since the Windows box has Bonjour, I can, for example, go to http://lamp.local. and see the web client. The problem is, I can't SSH in to scp the files I need, mount a USB thumbdrive (usbfs is unsupported), or get Samba working. I can't even update the Ubuntu installation; it fails. I've tried the bridged, NAT and host-only networking settings in VMware Player. Bridged gives me an undefined IP, while the other two each have different IPs. All three settings allow me to access the web config, but none of them give me Samba access; Windows usually freezes, then reports that it cannot connect. I'd rather not wipe a box to do a dedicated install, so is there a way I can get this VM working, or are there better LAMP VMs out there? This one came already working and set up for VMware Player, so I thought it would be perfect... Thanks,

    Read the article

  • Cisco ASA: How to route PPPoE-assigned subnet?

    - by Martijn Heemels
    We've just received a fiber uplink, and I'm trying to configure our Cisco ASA 5505 to properly use it. The provider requires us to connect via PPPoE, and I managed to configure the ASA as a PPPoE client and establish a connection. The ASA is assigned an IP address by PPPoE, and I can ping out from the ASA to the internet, but I should have access to an entire /28 subnet. I can't figure out how to get that subnet configured on the ASA, so that I can route or NAT the available public addresses to various internal hosts. My assigned range is: 188.xx.xx.176/28 The address I get via PPPoE is 188.xx.xx.177/32, which according to our provider is our Default Gateway address. They claim the subnet is correctly routed to us on their side. How does the ASA know which range it is responsible for on the Fiber interface? How do I use the addresses from my range? To clarify my config; The ASA is currently configured to default-route to our ADSL uplink on port Ethernet0/0 (interface vlan2, nicknamed Outside). The fiber is connected to port Ethernet0/2 (interface vlan50, nicknamed Fiber) so I can configure and test it before making it the default route. Once I'm clear on how to set it all up, I'll fully replace the Outside interface with Fiber. My config (rather long): : Saved : ASA Version 8.3(2)4 ! hostname gw domain-name example.com enable password ****** encrypted passwd ****** encrypted names name 10.10.1.0 Inside-dhcp-network description Desktops and clients that receive their IP via DHCP name 10.10.0.208 svn.example.com description Subversion server name 10.10.0.205 marvin.example.com description LAMP development server name 10.10.0.206 dns.example.com description DNS, DHCP, NTP ! interface Vlan2 description Old ADSL WAN connection nameif outside security-level 0 ip address 192.168.1.2 255.255.255.252 ! interface Vlan10 description LAN vlan 10 Regular LAN traffic nameif inside security-level 100 ip address 10.10.0.254 255.255.0.0 ! interface Vlan11 description LAN vlan 11 Lab/test traffic nameif lab security-level 90 ip address 10.11.0.254 255.255.0.0 ! interface Vlan20 description LAN vlan 20 ISCSI traffic nameif iscsi security-level 100 ip address 10.20.0.254 255.255.0.0 ! interface Vlan30 description LAN vlan 30 DMZ traffic nameif dmz security-level 50 ip address 10.30.0.254 255.255.0.0 ! interface Vlan40 description LAN vlan 40 Guests access to the internet nameif guests security-level 50 ip address 10.40.0.254 255.255.0.0 ! interface Vlan50 description New WAN Corporate Internet over fiber nameif fiber security-level 0 pppoe client vpdn group KPN ip address pppoe ! interface Ethernet0/0 switchport access vlan 2 speed 100 duplex full ! interface Ethernet0/1 switchport trunk allowed vlan 10,11,30,40 switchport trunk native vlan 10 switchport mode trunk ! interface Ethernet0/2 switchport access vlan 50 speed 100 duplex full ! interface Ethernet0/3 shutdown ! interface Ethernet0/4 shutdown ! interface Ethernet0/5 switchport access vlan 20 ! interface Ethernet0/6 shutdown ! interface Ethernet0/7 shutdown ! 
boot system disk0:/asa832-4-k8.bin ftp mode passive clock timezone CEST 1 clock summer-time CEDT recurring last Sun Mar 2:00 last Sun Oct 3:00 dns domain-lookup inside dns server-group DefaultDNS name-server dns.example.com domain-name example.com same-security-traffic permit inter-interface same-security-traffic permit intra-interface object network inside-net subnet 10.10.0.0 255.255.0.0 object network svn.example.com host 10.10.0.208 object network marvin.example.com host 10.10.0.205 object network lab-net subnet 10.11.0.0 255.255.0.0 object network dmz-net subnet 10.30.0.0 255.255.0.0 object network guests-net subnet 10.40.0.0 255.255.0.0 object network dhcp-subnet subnet 10.10.1.0 255.255.255.0 description DHCP assigned addresses on Vlan 10 object network Inside-vpnpool description Pool of assignable addresses for VPN clients object network vpn-subnet subnet 10.10.3.0 255.255.255.0 description Address pool assignable to VPN clients object network dns.example.com host 10.10.0.206 description DNS, DHCP, NTP object-group service iscsi tcp description iscsi storage traffic port-object eq 3260 access-list outside_access_in remark Allow access from outside to HTTP on svn. access-list outside_access_in extended permit tcp any object svn.example.com eq www access-list Insiders!_splitTunnelAcl standard permit 10.10.0.0 255.255.0.0 access-list iscsi_access_in remark Prevent disruption of iscsi traffic from outside the iscsi vlan. access-list iscsi_access_in extended deny tcp any interface iscsi object-group iscsi log warnings ! snmp-map DenyV1 deny version 1 ! pager lines 24 logging enable logging timestamp logging asdm-buffer-size 512 logging monitor warnings logging buffered warnings logging history critical logging asdm errors logging flash-bufferwrap logging flash-minimum-free 4000 logging flash-maximum-allocation 2000 mtu outside 1500 mtu inside 1500 mtu lab 1500 mtu iscsi 9000 mtu dmz 1500 mtu guests 1500 mtu fiber 1492 ip local pool DHCP_VPN 10.10.3.1-10.10.3.20 mask 255.255.0.0 ip verify reverse-path interface outside no failover icmp unreachable rate-limit 10 burst-size 5 asdm image disk0:/asdm-635.bin asdm history enable arp timeout 14400 nat (inside,outside) source static any any destination static vpn-subnet vpn-subnet ! 
object network inside-net nat (inside,outside) dynamic interface object network svn.example.com nat (inside,outside) static interface service tcp www www object network lab-net nat (lab,outside) dynamic interface object network dmz-net nat (dmz,outside) dynamic interface object network guests-net nat (guests,outside) dynamic interface access-group outside_access_in in interface outside access-group iscsi_access_in in interface iscsi route outside 0.0.0.0 0.0.0.0 192.168.1.1 1 timeout xlate 3:00:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00 timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00 timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute timeout tcp-proxy-reassembly 0:01:00 dynamic-access-policy-record DfltAccessPolicy aaa-server SBS2003 protocol radius aaa-server SBS2003 (inside) host 10.10.0.204 timeout 5 key ***** aaa authentication enable console SBS2003 LOCAL aaa authentication ssh console SBS2003 LOCAL aaa authentication telnet console SBS2003 LOCAL http server enable http 10.10.0.0 255.255.0.0 inside snmp-server host inside 10.10.0.207 community ***** version 2c snmp-server location Server room snmp-server contact [email protected] snmp-server community ***** snmp-server enable traps snmp authentication linkup linkdown coldstart snmp-server enable traps syslog crypto ipsec transform-set TRANS_ESP_AES-256_SHA esp-aes-256 esp-sha-hmac crypto ipsec transform-set TRANS_ESP_AES-256_SHA mode transport crypto ipsec transform-set ESP-AES-256-MD5 esp-aes-256 esp-md5-hmac crypto ipsec transform-set ESP-DES-SHA esp-des esp-sha-hmac crypto ipsec transform-set ESP-DES-MD5 esp-des esp-md5-hmac crypto ipsec transform-set ESP-AES-192-MD5 esp-aes-192 esp-md5-hmac crypto ipsec transform-set ESP-3DES-MD5 esp-3des esp-md5-hmac crypto ipsec transform-set ESP-AES-256-SHA esp-aes-256 esp-sha-hmac crypto ipsec transform-set ESP-AES-128-SHA esp-aes esp-sha-hmac crypto ipsec transform-set ESP-AES-192-SHA esp-aes-192 esp-sha-hmac crypto ipsec transform-set ESP-AES-128-MD5 esp-aes esp-md5-hmac crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac crypto ipsec security-association lifetime seconds 28800 crypto ipsec security-association lifetime kilobytes 4608000 crypto dynamic-map outside_dyn_map 20 set pfs group5 crypto dynamic-map outside_dyn_map 20 set transform-set TRANS_ESP_AES-256_SHA crypto dynamic-map SYSTEM_DEFAULT_CRYPTO_MAP 65535 set transform-set ESP-AES-128-SHA ESP-AES-128-MD5 ESP-AES-192-SHA ESP-AES-192-MD5 ESP-AES-256-SHA ESP-AES-256-MD5 ESP-3DES-SHA ESP-3DES-MD5 ESP-DES-SHA ESP-DES-MD5 crypto map outside_map 65535 ipsec-isakmp dynamic SYSTEM_DEFAULT_CRYPTO_MAP crypto map outside_map interface outside crypto isakmp enable outside crypto isakmp policy 1 authentication pre-share encryption 3des hash sha group 2 lifetime 86400 telnet 10.10.0.0 255.255.0.0 inside telnet timeout 5 ssh scopy enable ssh 10.10.0.0 255.255.0.0 inside ssh timeout 5 ssh version 2 console timeout 30 management-access inside vpdn group KPN request dialout pppoe vpdn group KPN localname INSIDERS vpdn group KPN ppp authentication pap vpdn username INSIDERS password ***** store-local dhcpd address 10.40.1.0-10.40.1.100 guests dhcpd dns 8.8.8.8 8.8.4.4 interface guests dhcpd update dns interface guests dhcpd enable guests ! 
threat-detection basic-threat threat-detection scanning-threat threat-detection statistics host number-of-rate 2 threat-detection statistics port number-of-rate 3 threat-detection statistics protocol number-of-rate 3 threat-detection statistics access-list threat-detection statistics tcp-intercept rate-interval 30 burst-rate 400 average-rate 200 ntp server dns.example.com source inside prefer webvpn group-policy DfltGrpPolicy attributes vpn-tunnel-protocol IPSec l2tp-ipsec group-policy Insiders! internal group-policy Insiders! attributes wins-server value 10.10.0.205 dns-server value 10.10.0.206 vpn-tunnel-protocol IPSec l2tp-ipsec split-tunnel-policy tunnelspecified split-tunnel-network-list value Insiders!_splitTunnelAcl default-domain value example.com username martijn password ****** encrypted privilege 15 username marcel password ****** encrypted privilege 15 tunnel-group DefaultRAGroup ipsec-attributes pre-shared-key ***** tunnel-group Insiders! type remote-access tunnel-group Insiders! general-attributes address-pool DHCP_VPN authentication-server-group SBS2003 LOCAL default-group-policy Insiders! tunnel-group Insiders! ipsec-attributes pre-shared-key ***** ! class-map global-class match default-inspection-traffic class-map type inspect http match-all asdm_medium_security_methods match not request method head match not request method post match not request method get ! ! policy-map type inspect dns preset_dns_map parameters message-length maximum 512 policy-map type inspect http http_inspection_policy parameters protocol-violation action drop-connection policy-map global-policy class global-class inspect dns inspect esmtp inspect ftp inspect h323 h225 inspect h323 ras inspect http inspect icmp inspect icmp error inspect mgcp inspect netbios inspect pptp inspect rtsp inspect snmp DenyV1 ! service-policy global-policy global smtp-server 123.123.123.123 prompt hostname context call-home profile CiscoTAC-1 no active destination address http https://tools.cisco.com/its/service/oddce/services/DDCEService destination address email [email protected] destination transport-method http subscribe-to-alert-group diagnostic subscribe-to-alert-group environment subscribe-to-alert-group inventory periodic monthly subscribe-to-alert-group configuration periodic monthly subscribe-to-alert-group telemetry periodic daily hpm topN enable Cryptochecksum:a76bbcf8b19019771c6d3eeecb95c1ca : end asdm image disk0:/asdm-635.bin asdm location svn.example.com 255.255.255.255 inside asdm location marvin.example.com 255.255.255.255 inside asdm location dns.example.com 255.255.255.255 inside asdm history enable
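    The provider routing the /28 to the PPPoE-assigned address means the ASA never needs to "own" the block on an interface; the spare addresses are simply used as mapped addresses in NAT statements on the fiber interface. A sketch in 8.3 syntax, with 188.xx.xx.178 standing in for one of the unused addresses from the block:

      ! Publish the internal SVN host on a second public address via the fiber link
      object network svn-fiber
       host 10.10.0.208
       nat (inside,fiber) static 188.xx.xx.178
      !
      ! Post-8.3 ACLs reference the real (inside) address
      access-list fiber_access_in extended permit tcp any host 10.10.0.208 eq www
      access-group fiber_access_in in interface fiber

    Once traffic is confirmed working, swapping the default route from the outside (ADSL) interface to fiber completes the cutover; the existing per-object NAT rules that point at "outside" would need fiber equivalents at the same time.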

    Read the article

  • Flash alternative for iBook Mac?

    - by Hunter Dolan
    I have an old Apple iBook G4 that I decided to hook up to my main TV. I like the setup because I can now surf the internet on my TV. The only thing I can't seem to do is watch Flash videos. Apparently Flash Player 10 doesn't play nice with the iBook's graphics card's GPU, leaving all the graphics processing to the CPU, which is a disaster. Others suggested downgrading to Flash Player 9; I did that, and YouTube worked fine, but Hulu (the main reason I wanted to hook it up to the TV in the first place) did not. Does anyone know of a Flash alternative or a Flash 10 fix for the iBook? Or even a Hulu client that doesn't require Flash? Here are my iBook's specs:

      Model Name: iBook G4
      Model Identifier: PowerBook6,5
      Processor Name: PowerPC G4 (1.2)
      Processor Speed: 1.2 GHz
      Number Of CPUs: 1
      L2 Cache (per CPU): 512 KB
      Memory: 512 MB
      Bus Speed: 133 MHz
      Boot ROM Version: 4.8.7f1
      Mac OS X Version: 10.5.8

    PS: Don't tell me that I need to buy a new computer. I know that I would have better results with a new computer, but I don't want to buy a new computer just for Hulu.

    Read the article

  • Configuring port forwarding for SSH - no response outside LAN

    - by WinnieNicklaus
    I recently moved, and at the same time purchased a new router (Linksys E1200). Prior to the move, I had my old router set up to forward a port for SSH to servers on my LAN, and I was using DynDNS to manage the external IP address. Everything worked great. I moved and set up the new router (unfortunately, the old one is busted so I can't try things out with it), updated the DynDNS address, and attempted to restore my port forwarding settings. No joy. SSH connections time out, and pings go unanswered. But here's the weird part (i.e., key to the whole thing?): I can ping and SSH just fine from within this LAN. I'm not talking about the local 192.168.1.* addresses. I can actually SSH from a computer on my LAN to the DynDNS external address. It's only when the client is outside the LAN that connections are dropped. This surely suggests a particular point of failure, but I don't know enough to figure out what it is. I can't figure out why it would make a difference where the connections originate, unless there's a filter for "trusted" IP addresses, which is perhaps just restricted to my own. No settings have been touched on the servers, and I can't find any settings suggesting this on the router admin interface. I disabled the router's SPI firewall and "Filter anonymous traffic" setting to no avail. Has anyone heard of this behavior, and what can I do to get past it?

    Read the article

  • Route forwarded traffic through eth0 but local traffic through tun0

    - by Ross Patterson
    I have an Ubuntu 12.04/Zentyal 2.3 server configured with the WAN NATed on eth0, local interfaces eth1 and wlan0 bridged as br1 (on which DHCP runs), and an OpenVPN connection on tun0. I only need the VPN for some things running on the gateway itself, and I need to make sure that everything running on the gateway goes through the VPN's tun0.

      root:~# route
      Kernel IP routing table
      Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
      default         gw...           0.0.0.0         UG    100    0        0 eth0
      link-local      *               255.255.0.0     U     1000   0        0 br1
      192.168.1.0     *               255.255.255.0   U     0      0        0 br1
      A.B.C.0         *               255.255.255.0   U     0      0        0 eth0

      root:~# ip route
      169.254.0.0/16 dev br1  scope link  metric 1000
      192.168.1.0/24 dev br1  proto kernel  scope link  src 192.168.1.1
      A.B.C.0/24 dev eth0  proto kernel  scope link  src A.B.C.186

      root:~# ip route show table main
      169.254.0.0/16 dev br1  scope link  metric 1000
      192.168.1.0/24 dev br1  proto kernel  scope link  src 192.168.1.1
      A.B.C.0/24 dev eth0  proto kernel  scope link  src A.B.C.D

      root:~# ip route show table default
      default via A.B.C.1 dev eth0

    How can I configure routing (or otherwise) so that all forwarded traffic for other hosts on the LAN goes through eth0, but all traffic from the gateway itself goes through the VPN on tun0? Also, since the OpenVPN client changes routing on startup/shutdown, how can I make sure that everything running on the gateway itself loses all network access if the VPN goes down, and never goes out eth0?
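    A policy-routing sketch that matches this split (A.B.C.1 taken from the default-table output above; treat it as the ISP gateway): forwarded traffic that comes in on br1 gets its own table holding the eth0 default, while the main table keeps only the VPN's routes, so gateway-local traffic follows tun0 and simply loses connectivity when the tunnel drops.

      # Table 100: the ISP default, used only for traffic arriving on br1
      ip route add default via A.B.C.1 dev eth0 table 100
      ip rule add iif br1 lookup 100 priority 100

      # Remove the eth0 default from the main table so locally generated traffic
      # cannot fall back to it when OpenVPN withdraws its routes
      ip route del default via A.B.C.1 dev eth0

      # Keep a host route to the VPN endpoint so OpenVPN can still (re)connect
      ip route add VPN_SERVER_IP via A.B.C.1 dev eth0

    VPN_SERVER_IP is a placeholder; Zentyal may also manage these tables itself, so the rules might belong in its hook scripts rather than in rc.local.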

    Read the article

  • vmware vmdk disk problem

    - by dmtr
    I have a VMware ESXi 4 server and 2 storage servers (mounted via NFS). Between the storage servers (Fedora 14) there is a drbd cluster (dual primary) with an ocfs2 filesystem; each server also has a local partition with an ext4 filesystem, and both are mounted via NFS on the ESXi server. When I copied a virtual machine (powered off, naturally) from the ext4 partition to the ocfs2 partition, the total reported by ls for the vmdk was different, but the md5sum was the same.

    On the ext4 partition:

      # ls -la
      total 28492228
      -rw------- 1 root root 42949672960 Jan 14 14:46 disk-flat.vmdk
      # md5sum disk-flat.vmdk
      0eaebe3138beb32f54ea5de6dfe5a987

    On the ocfs2 partition:

      # ls -la
      total 13974660
      -rw------- 1 root root 42949672960 Jan 14 16:16 disk-flat.vmdk
      # md5sum disk-flat.vmdk
      0eaebe3138beb32f54ea5de6dfe5a987

    When I power on the virtual machine from the ocfs2 partition, it doesn't work. The virtual machine runs Windows, and it freezes after the Windows logo. From the ext4 partition the virtual machine works. I tested with Linux (created and installed on the ext4 partition and then copied to ocfs2) and the same problem appears. When I create a virtual machine directly on the ocfs2 partition, there are no problems. I also tried copying via the vSphere client, and I have the same problem. Any suggestions?

    Read the article

  • 24TB RAID 6 configuration

    - by Phil
    I am in charge of a new website in a niche industry that stores lots of data (10+ TB per client, growing to 2 or 3 clients soon). We are considering ordering about $5000 worth of 3TB drives (10 in a RAID 6 configuration and 10 for backup), which will give us approximately 24 TB of production storage. The data will be written once and remain unmodified for the lifetime of the website, so we only need to do a backup one time. I understand basic RAID theory, however I am not experienced with it. My question is, does this sound like a good configuration? What potential problems could this setup cause? Also, what is the best way to do a one-time backup? Have two RAID 6 arrays, one for offsite backup and one for production? Or should I backup the RAID 6 production array to a JBOD? EDIT: The data server is running Windows 2008 Server x64. EDIT 2: To reduce rebuild time, what would you think about using two RAID 5's instead of one RAID 6?

    Read the article

  • Moving automatically spam messages to a folder in Postfix

    - by cad
    Hi. My problem is that I want to automatically move spam messages to a folder and am not sure how. I have a Linux box providing email access. The MTA is Postfix, IMAP is Courier, and as the webmail client I use SquirrelMail. To filter spam I use SpamAssassin, and it is working OK. SpamAssassin rewrites subjects to [--- SPAM 14.3 ---] Viagra... and also adds headers:

      X-Spam-Flag: YES
      X-Spam-Checker-Version: SpamAssassin 3.2.5 (2008-06-10) on xxxx
      X-Spam-Level: **************
      X-Spam-Status: Yes, score=14.3 required=2.0 tests=BAYES_99,
              DATE_IN_FUTURE_24_48,HTML_MESSAGE,MIME_HTML_ONLY,RCVD_IN_PBL,
              RCVD_IN_SORBS_WEB,RCVD_IN_XBL,RDNS_NONE,URIBL_RED,URIBL_SBL
              autolearn=no version=3.2.5
      X-Spam-Report:
              *  0.0 URIBL_RED Contains an URL listed in the URIBL redlist
              *      [URIs: myimg.de]
              *  3.5 BAYES_99 BODY: Bayesian spam probability is 99 to 100%
              *      [score: 1.0000]
              *  0.9 RCVD_IN_PBL RBL: Received via a relay in Spamhaus PBL
              *      [113.170.131.234 listed in zen.spamhaus.org]
              *  3.0 RCVD_IN_XBL RBL: Received via a relay in Spamhaus XBL
              *  0.6 RCVD_IN_SORBS_WEB RBL: SORBS: sender is a abuseable web server
              *      [113.170.131.234 listed in dnsbl.sorbs.net]
              *  3.2 DATE_IN_FUTURE_24_48 Date: is 24 to 48 hours after Received: date
              *  0.0 HTML_MESSAGE BODY: HTML included in message
              *  1.5 MIME_HTML_ONLY BODY: Message only has text/html MIME parts
              *  1.5 URIBL_SBL Contains an URL listed in the SBL blocklist
              *      [URIs: myimg.de]
              *  0.1 RDNS_NONE Delivered to trusted network by a host with no rDNS

    I want to automatically move spam messages to a folder. Ideally (not sure if it's possible) I would only move messages scoring 5.0 or more to the folder; spam scored between 2.0 and 5.0 should stay in the Inbox. (I plan to switch autolearn on later.) After reading a lot on the procmail, Postfix and SpamAssassin sites and googling a lot (lots of outdated howtos), I found two solutions, but I'm not sure which is best or whether there is another one:

    1. Put a rule in SquirrelMail (dirty solution?)
    2. Use procmail

    Which is the best option? Do you have any updated howto about it? Thanks
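    For the procmail route, a minimal recipe sketch: SpamAssassin adds one asterisk to X-Spam-Level per point of score, so matching five asterisks catches everything scored 5.0 and up while 2.0-5.0 mail still lands in the Inbox. This assumes Postfix hands local delivery to procmail (mailbox_command = /usr/bin/procmail) and that the Courier Maildir folder .Spam already exists (maildirmake ~/Maildir/.Spam) and is subscribed in SquirrelMail.

      # ~/.procmailrc
      MAILDIR=$HOME/Maildir

      :0:
      * ^X-Spam-Level: \*\*\*\*\*
      .Spam/

    The trailing slash tells procmail the destination is a Maildir; without it the message would be appended in mbox format and Courier would not see it.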

    Read the article
