Search Results

Search found 21717 results on 869 pages for 'setup versions'.

Page 674/869 | < Previous Page | 670 671 672 673 674 675 676 677 678 679 680 681  | Next Page >

  • Preseeding Ubuntu partman recipe using LVM and RAID

    - by Swav
     I'm trying to preseed an Ubuntu 12.04 server installation and created a recipe that would create RAID 1 on 2 drives and then partition that using LVM. Unfortunately partman complains when creating the LVM volumes, saying there are no partitions in the recipe that could be used with LVM (on the console it complains about an unusable recipe). The layout I'm after is RAID 1 on sdb and sdc (I'm installing from a USB stick, so it takes sda) and then LVM on top of that to create boot, root and swap. The odd thing is that if I change the mount point of boot_lv to home the recipe works fine (apart from mounting in the wrong place), but when mounting at /boot it fails. I know I could use a separate primary /boot partition, but can anybody tell me why this fails? Recipe and relevant options below.

       ## Partitioning using RAID
       d-i partman-auto/disk string /dev/sdb /dev/sdc
       d-i partman-auto/method string raid
       d-i partman-lvm/device_remove_lvm boolean true
       d-i partman-md/device_remove_md boolean true
       #d-i partman-lvm/confirm boolean true
       d-i partman-auto-lvm/new_vg_name string main_vg
       d-i partman-auto/expert_recipe string \
           multiraid :: \
               100 512 -1 raid \
                   $lvmignore{ } \
                   $primary{ } \
                   method{ raid } \
               . \
               256 512 256 ext3 \
                   $defaultignore{ } \
                   $lvmok{ } \
                   method{ format } \
                   format{ } \
                   use_filesystem{ } \
                   filesystem{ ext3 } \
                   mountpoint{ /boot } \
                   lv_name{ boot_lv } \
               . \
               2000 5000 -1 ext4 \
                   $defaultignore{ } \
                   $lvmok{ } \
                   method{ format } \
                   format{ } \
                   use_filesystem{ } \
                   filesystem{ ext4 } \
                   mountpoint{ / } \
                   lv_name{ root_lv } \
               . \
               64 512 300% linux-swap \
                   $defaultignore{ } \
                   $lvmok{ } \
                   method{ swap } \
                   format{ } \
                   lv_name{ swap_lv } \
               .
       d-i partman-auto-raid/recipe string \
           1 2 0 lvm - \
               /dev/sdb1#/dev/sdc1 \
           .
       d-i mdadm/boot_degraded boolean true
       #d-i partman-md/confirm boolean true
       #d-i partman-partitioning/confirm_write_new_label boolean true
       #d-i partman/choose_partition select Finish partitioning and write changes to disk
       #d-i partman/confirm boolean true
       #d-i partman-md/confirm_nooverwrite boolean true
       #d-i partman/confirm_nooverwrite boolean true

     EDIT: After a bit of googling I found the snippet below in partman-auto-lvm, but I still don't understand why they would prevent that setup, given that it is possible to do manually and that booting from a /boot partition on LVM works.

       # Make sure a boot partition isn't marked as lvmok
       if echo "$scheme" | grep lvmok | grep -q "[[:space:]]/boot[[:space:]]"; then
           bail_out unusable_recipe
       fi
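
     For reference, one way around the bail-out quoted in the EDIT is to keep /boot out of LVM but still on the RAID 1 array, by creating two md devices (a small one formatted ext3 for /boot, and a large one used as the LVM physical volume). The fragment below is only a sketch of that shape; the sizes are guesses and it has not been tested against this exact preseed:

       d-i partman-auto/expert_recipe string \
           multiraid :: \
               256 512 256 raid \
                   $lvmignore{ } \
                   $primary{ } \
                   method{ raid } \
               . \
               1000 5000 -1 raid \
                   $lvmignore{ } \
                   $primary{ } \
                   method{ raid } \
               . \
               (root_lv and swap_lv stanzas as in the original recipe; the /boot stanza is dropped)
       d-i partman-auto-raid/recipe string \
           1 2 0 ext3 /boot \
               /dev/sdb1#/dev/sdc1 \
           . \
           1 2 0 lvm - \
               /dev/sdb2#/dev/sdc2 \
           .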

    Read the article

  • Configuration management in support of scientific computing

    - by Sharpie
     For the past few years I have been involved in developing and maintaining a system for forecasting near-shore waves. Our team has just received a significant grant for further development, and as a result we are taking the opportunity to refactor many components of the old system. We will also be receiving a new server to run the model, so I am taking this opportunity to reconsider how we set up the system. Basically, the steps that need to happen are:

       - Some standard packages and libraries, such as compilers and databases, need to be downloaded and installed.
       - Some custom scientific models need to be downloaded and compiled from source, as they are not commonly provided as packages.
       - New users need to be created to manage the databases and run the models.
       - A suite of scripts that manage model-database interaction needs to be checked out from source control and installed.
       - Crontabs need to be set up to run the scripts at regular intervals in order to generate forecasts.

     I have been pondering applying tools such as Puppet, Capistrano or Fabric to automate the above steps (a rough shell outline of the same steps follows below). It seems perfectly possible to implement most of the above functionality, except that there are a couple of use cases I am wondering about: During my preliminary research, I have found few examples and little discussion on how to use these systems to abstract and automate the process of building custom components from source. Also, we may have to deploy on machines that are isolated from the Internet; i.e. all configuration and setup files will have to come in on a USB key that can be inserted into a terminal that can connect to the server that will run the models. I see this as an opportunity to learn a new tool that will help me automate my workflow, but I am unsure which tool I should start with. If any member of the community could suggest a tool that supports the above workflow and the issues specific to scientific computing, I would be very grateful. Our production server will be running Linux, but support for OS X would be a bonus, as it would allow the development team to set up test installations outside of VirtualBox.
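
     For concreteness, here is a rough shell outline of those provisioning steps, which any of Puppet, Capistrano or Fabric would essentially be scripting. The package names, paths and USB-key layout are invented placeholders, not a recommendation for a particular tool:

       #!/bin/bash
       # Sketch of the provisioning steps listed above, for an offline box fed from a USB key.
       set -e

       # 1. Standard packages and libraries (from a local mirror or the USB key when offline)
       sudo apt-get install -y build-essential gfortran postgresql    # hypothetical package list

       # 2. Custom scientific models compiled from source
       mkdir -p ~/build
       tar xzf /media/usb/wavemodel-1.2.tar.gz -C ~/build             # assumed tarball location
       (cd ~/build/wavemodel-1.2 && ./configure --prefix=/opt/wavemodel && make && sudo make install)

       # 3. A service user to own the databases and model runs
       sudo useradd --system --create-home forecast

       # 4. The glue scripts, checked out from source control (a bare repo on the key when offline)
       sudo -u forecast git clone /media/usb/forecast-scripts.git /home/forecast/scripts

       # 5. A crontab entry to generate forecasts at regular intervals
       echo "0 */6 * * * /home/forecast/scripts/run_forecast.sh" | sudo -u forecast crontab -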

    Read the article

  • ClearOS - how to create a site to site VPN between two ClearOS boxes?

    - by Scott Szretter
     I plan on setting up ClearOS boxes at several sites, and would like to set up a site-to-site VPN between the remote sites and a main site (all running ClearOS Enterprise 5.2 SP1 / latest version). I have found references for how to set up ClearOS to VPN in to devices such as Cisco with IPsec, and others with PPTP, but these did not mention how you might configure two ClearOS boxes to talk to each other over IPsec or PPTP. I also saw documentation on installing OpenVPN and using the OpenVPN client software to VPN in to the ClearOS box. I will probably use that for individual users to VPN in, but I have some small sites (1 to 10 users) that will have their own ClearOS box and need a site-to-site VPN link back to the main site's OpenVPN box. Is this possible? Can you point me to docs or other info? Basically, how? A couple of updates: I did find a thread that asks the same basic question, where the user has a VPN set up between the two ClearOS machines (after installing the IPsec VPN modules), just not transporting traffic between the LANs, and the very last post claims you have to edit some files (/etc/ipsec.conf) and set the leftnexthop and rightnexthop values to %direct. After that, it's supposed to work. Could it be that simple? I also posted to ClearFoundation, and they pointed me to some documentation for setting up an unmanaged IPsec VPN. This looks pretty good, but I will most likely need to figure out how to handle a dynamic-DNS type setup on at least one end. Also, what does it mean by multi-WAN? Finally, what exactly happens when a VPN connection goes down? Does someone have to reboot the box, or what?
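
     For what it's worth, in the Openswan-style /etc/ipsec.conf that the thread appears to refer to, an unmanaged site-to-site tunnel is usually described with a conn block along these lines. The connection name, addresses and subnets below are placeholders, and the %direct values are exactly the ones mentioned in that last post:

       conn main-to-branch
           left=203.0.113.10              # main site WAN IP (placeholder)
           leftsubnet=192.168.1.0/24
           leftnexthop=%direct
           right=198.51.100.20            # branch site WAN IP (placeholder)
           rightsubnet=192.168.2.0/24
           rightnexthop=%direct
           authby=secret
           auto=start

     For an end with a dynamic address, Openswan also accepts right=%any on the side with the fixed address (with auto=add so it waits for the peer to initiate), though whether ClearOS's modules expose that is something to verify against their docs.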

    Read the article

  • cPanel web servers mounting home partition to a NAS or SAN

    - by Scott
     Hello, I currently have two cPanel web servers that are little 1RU dual-CPU quad-core Xeons. They have plenty of resources for processing and handling web requests, never exceed 10% CPU usage, and have plenty of RAM. The problem, though, is that they both have RAID 1 160 GB SAS hard drives that are 75% full and growing by the day. I didn't think the disk usage would be so high, but due to the nature of the sites hosted, this has become an issue. The easy fix would be just to upgrade the hard drives to something bigger (probably not of the SAS variety), but I am thinking of keeping the current machines as "processing servers" and buying a central "storage server" with about 12 TB of storage. The /home/ partition on each of the 1RU servers would be mounted to a NAS or SAN point on this central storage server. My questions are:

       - Has anyone got a cPanel setup where they mount /home/ to a NAS or SAN elsewhere? If so, can you provide details as to what you did and how it went? :)
       - Any recommendations on networking? Is gigabit Ethernet enough? Is TCP/IP going to be a noticeable performance problem? Has anyone used a TOE (TCP offload engine)?
       - Has anyone benchmarked, or had any performance issues with, SAN versus NAS?

     Any help greatly appreciated. Scott
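
     To make the question concrete, the NAS (NFS) variant of this is typically a one-line fstab entry on each web server plus the matching export on the storage box. The host name, export path and mount options below are assumptions for illustration, not a recommendation:

       # /etc/fstab on each cPanel server ("storage01" and the export path are placeholders)
       storage01:/export/home   /home   nfs   rw,hard,intr,noatime,rsize=32768,wsize=32768   0 0

       # /etc/exports on the storage server
       /export/home   10.0.0.0/24(rw,sync,no_root_squash)

     no_root_squash (or careful UID mapping) tends to matter in this scenario, since cPanel creates and chowns account directories under /home as root.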

    Read the article

  • Setting up a network e-mail server

    - by Jason
    Hello, My boss just asked me to buy a new server for our office network. I know next to nothing about servers and networking, so I need someone to point me in the right direction. He said he wants this to be our e-mail server with a network login. I have no idea how to set up an e-mail server, especially one that sends/receives e-mail using our domain name. We use a terrible piece of order/inventory software called Mail Order Manager (MOM). Our computers currently connect to the MOM database through a networked drive. My boss would like to move away from this peer-to-peer MOM setup. The software publisher offers a SQL version of MOM, but it's way overpriced. Is there a better way to connect to these databases without using the SQL version? Finally, the server needs to be running Windows. Does this question make sense, is it possible, and can someone help me get started? Thanks!

    Read the article

  • How do I use a Zyxel P660 router as just a modem so that I can connect a WRT54GL router in cascade?

    - by Kenji Kina
     I have a Zyxel P660HW-T1 v2 router (which has a DSL port) and a WRT54GL router (which does not), and the exact same situation as in this thread. (UPDATE: the connection between the two devices is the important part, since I have been able to set the Zyxel router to act as a bridge by itself quite nicely. I have accessed my internet connection directly through a PC using PPPoE without any problems; the issues arise when I try to connect the WRT54GL router between the Zyxel "modem" and my PCs.) I've been trying to use my Zyxel P660 as a modem only:

       - Set the P660 to bridge mode.
       - Changed the WRT54GL's IP address to 192.168.2.1 to avoid a conflict on the network.
       - Configured the PPPoE settings as required on the WRT54GL.

     The thing is that when I connect the Zyxel modem/router to the WRT54GL's internet port, the light doesn't turn on. I can confirm that this port has been working OK, so I'm not really sure what's going on between the devices. I checked several settings such as IPs, tried disabling DHCP on the Zyxel and the Linksys and the firewall on both, and still nothing. Also, I tried connecting the Zyxel in bridge mode directly to a computer and dialed successfully. I have even posted a question here before, thinking that what I asked there was the only thing I needed to get this done. Unfortunately it wasn't, and the guy who solved his issue didn't give enough details in his post (and is quite unlikely to give more, since he was an anonymous user). For one, I don't know how to do this part: "connected to the Zyxel through telnet and forced LAN port 1 to be at 100mb as well". I can't find the option that does this on the Zyxel router, either through telnet or the web admin. Can anyone help me solve this?

    Read the article

  • mercurial hgwebdir error with basicauth in apache2

    - by Dio
     Hello, I'm having kind of a strange error that I'm trying to track down. I was trying to set up Mercurial on my home server this weekend, and I seem to have it running up to the point where I'm trying to get repositories published correctly. I'm running Ubuntu 10.04 LTS with Mercurial Distributed SCM (version 1.4.3). I followed the hgwebdir guide (http://mercurial.selenic.com/wiki/HgWebDirStepByStep) and everything seems to work great: I can pull and push my local repositories. Then I tried to add basic auth, changing

       ScriptAliasMatch ^/hg(.*) /var/hg/hgwebdir.cgi$1
       <Directory "/var/hg">
           Options ExecCGI FollowSymLinks
           AllowOverride None
       </Directory>

     to

       ScriptAliasMatch ^/hg(.*) /var/hg/hgwebdir.cgi$1
       <Directory "/var/hg">
           Options ExecCGI FollowSymLinks
           AllowOverride None
           AuthType Basic
           AuthName hgwebdir
           AuthUserFile /usr/local/etc/httpd/users
           Require valid-user
       </Directory>

     This works exactly as I'd expect when I navigate to the directory via my web browser, but when I hg push I get a long repeating section of

       File "/usr/lib/python2.6/urllib2.py", line 369, in _call_chain
         result = func(*args)
       File "/usr/lib/python2.6/urllib2.py", line 855, in http_error_401
         url, req, headers)
       File "/usr/lib/python2.6/urllib2.py", line 833, in http_error_auth_reqed
         return self.retry_http_basic_auth(host, req, realm)
       File "/usr/lib/python2.6/urllib2.py", line 843, in retry_http_basic_auth
         return self.parent.open(req, timeout=req.timeout)

     followed by

       File "/usr/lib/pymodules/python2.6/mercurial/keepalive.py", line 249, in do_open
         self._start_transaction(h, req)
       File "/usr/lib/pymodules/python2.6/mercurial/url.py", line 419, in _start_transaction
         return keepalive.HTTPHandler._start_transaction(self, h, req)
       File "/usr/lib/pymodules/python2.6/mercurial/keepalive.py", line 342, in _start_transaction
         h.endheaders()
       File "/usr/lib/python2.6/httplib.py", line 904, in endheaders
         self._send_output()
       File "/usr/lib/python2.6/httplib.py", line 776, in _send_output
         self.send(msg)
       File "/usr/lib/pymodules/python2.6/mercurial/url.py", line 247, in _sendfile
         connection.send(self, data)
       File "/usr/lib/pymodules/python2.6/mercurial/keepalive.py", line 519, in safesend
         self.connect()
       File "/usr/lib/pymodules/python2.6/mercurial/url.py", line 273, in connect
         keepalive.HTTPConnection.connect(self)
       RuntimeError: maximum recursion depth exceeded while calling a Python object

     I'm a bit at a loss on this one. I'm really not sure why adding the authorization works fine via my web browser but throws these errors from hg. Any help would be greatly appreciated.
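
     A workaround that often comes up for this particular recursion (the client keeps re-answering the 401 challenge through the keepalive handler until Python's recursion limit is hit) is to hand Mercurial the credentials up front, so no Basic-auth retry is needed. Assuming Mercurial 1.3 or later, that can be done with an [auth] section in the client's ~/.hgrc; the prefix, user and password below are placeholders:

       [auth]
       home.prefix   = https://myserver/hg
       home.username = dio
       home.password = secret

     Embedding the credentials in the push URL (https://user:pass@myserver/hg/repo) is the quick-and-dirty equivalent.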

    Read the article

  • Need Help Accessing the Vista Wampserver localhost from Virtual PC 2007 running an XP VM.

    - by Reg
     (I had posted this on Stack Overflow, but it was suggested there that I post it here instead.) I have a Vista laptop on which I'm running WampServer. I have Virtual PC 2007 set up with Windows XP running in the VM. My goal is to be able to use the XP VM to run IE6 to view the localhost of the Vista WampServer. I'm not interested in giving the XP VM any access to the internet -- only to my Vista WampServer's localhost. The Vista WampServer works fine. As suggested on a blog I read, I installed the loopback adapter on Vista, set the loopback to 192.168.21.1, and set the XP VM's IP to 192.168.21.2. I am able to successfully ping the Vista loopback adapter from the XP VM. I've turned WampServer to "server online", and I've disabled the firewalls on both the Vista host and the XP VM. But for some reason, I still can't get the virtual XP to see the localhost on the Vista WampServer. I've tried using the Vista //name, and I've tried the IP 192.168.21.1 directly and with the port. For whatever it's worth, I'm not able to see anything under the XP VM's network places (though I don't know whether I'm supposed to be able to see anything). So at this point I'm stuck, and I'm still not sure how to get this XP VM to "talk" to my Vista WampServer localhost. Any advice on how to fix this problem is much appreciated. Thanks in advance for your help. -R

    Read the article

  • Getting 401 when using client certificate with IIS 7.5

    - by Jacob
    I'm trying to configure a web site hosted under IIS 7.5 so that requests to a specific location require client certificate authentication. With my current setup, I still get a "401 - Unauthorized: Access is denied due to invalid credentials" when accessing the location with my client cert. Here's the web.config fragment that sets things up: <location path="MyWebService.asmx"> <system.webServer> <security> <access sslFlags="Ssl, SslNegotiateCert"/> <authentication> <windowsAuthentication enabled="false"/> <anonymousAuthentication enabled="false"/> <digestAuthentication enabled="false"/> <basicAuthentication enabled="false"/> <iisClientCertificateMappingAuthentication enabled="true" oneToOneCertificateMappingsEnabled="true"> <oneToOneMappings> <add enabled="true" certificate="MIICFDCCAYGgAwIBAgIQ+I0z6z8OWqpBIJt2lJHi6jAJBgUrDgMCHQUAMCQxIjAgBgNVBAMTGURldiBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkwHhcNMTAxMjI5MjI1ODE0WhcNMzkxMjMxMjM1OTU5WjAaMRgwFgYDVQQDEw9kZXYgY2xpZW50IGNlcnQwgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBANJi10hI+Zt0OuNr6eduiUe6WwPtyMxh+hZtr/7eY3YezeJHC95Z+NqJCAW0n+ODHOsbkd3DuyK1YV+nKzyeGAJBDSFNdaMSnMtR6hQG47xKgtUphPFBKe64XXTG+ueQHkzOHmGuyHHD1fSli62i2V+NMG1SQqW9ed8NBN+lmqWZAgMBAAGjWTBXMFUGA1UdAQROMEyAENGUhUP+dENeJJ1nw3gR0NahJjAkMSIwIAYDVQQDExlEZXYgQ2VydGlmaWNhdGUgQXV0aG9yaXR5ghB6CLh2g6i5ikrpVODj8CpBMAkGBSsOAwIdBQADgYEAwwHjpVNWddgEY17i1kyG4gKxSTq0F3CMf1AdWVRUbNvJc+O68vcRaWEBZDo99MESIUjmNhjXxk4LDuvV1buPpwQmPbhb6mkm0BNIISapVP/cK0Htu4bbjYAraT6JP5Km5qZCc0iHZQJZuch7Uy6G9kXQXaweJMiHL06+GHx355Y="/> </oneToOneMappings> </iisClientCertificateMappingAuthentication> </authentication> </security> </system.webServer> </location> The client certificate I'm using in my web browser matches what I've placed in the web.config. What am I doing wrong here?

    Read the article

  • Why does RSA SSH authentication only work after console log-in?

    - by smorhaim
     I set up RSA authentication on one of my Ubuntu servers; however, after every restart I can't log in via SSH with the RSA key. In order to log in with SSH, I first need to log in on the console; then the RSA authentication starts working. Why??? Below are my sshd config file and the output of ssh -vv before and after a console log-in.

     Before console log-in:

       debug1: SSH2_MSG_SERVICE_ACCEPT received
       debug2: key: /Users/smorhaim/.ssh/smorhaim (0x7ff8d8c242c0)
       debug2: key: /Users/smorhaim/.ssh/id_rsaadmin (0x7ff8d8c24cf0)
       debug1: Authentications that can continue: publickey
       debug1: Next authentication method: publickey
       debug1: Offering RSA public key: /Users/smorhaim/.ssh/smorhaim
       debug2: we sent a publickey packet, wait for reply
       debug1: Authentications that can continue: publickey
       debug1: Offering RSA public key: /Users/smorhaim/.ssh/id_rsaadmin
       debug2: we sent a publickey packet, wait for reply
       debug1: Authentications that can continue: publickey
       debug2: we did not send a packet, disable method
       debug1: No more authentication methods to try.
       Permission denied (publickey).

     After console log-in:

       debug1: SSH2_MSG_SERVICE_ACCEPT received
       debug2: key: /Users/smorhaim/.ssh/smorhaim (0x7f91c14242c0)
       debug2: key: /Users/smorhaim/.ssh/id_rsaadmin (0x7f91c1424ae0)
       debug1: Authentications that can continue: publickey
       debug1: Next authentication method: publickey
       debug1: Offering RSA public key: /Users/smorhaim/.ssh/smorhaim
       debug2: we sent a publickey packet, wait for reply
       debug1: Server accepts key: pkalg ssh-rsa blen 279
       debug2: input_userauth_pk_ok: fp b1:d5:90:43:be:43:52:a9:7f:05:c7:04:86:57:b3:ff
       debug1: Authentication succeeded (publickey).
       Authenticated to 10.10.30.151 ([10.10.30.151]:22).

     sshd config:

       Port 22
       Protocol 2
       ListenAddress 10.10.30.151
       UsePrivilegeSeparation yes
       SyslogFacility AUTHPRIV
       PermitRootLogin no
       PasswordAuthentication no
       ChallengeResponseAuthentication no
       UsePAM yes
       X11Forwarding yes
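
     For what it's worth, this exact symptom (publickey works only after a console login) is commonly caused on Ubuntu by an encrypted or auto-mounted home directory: sshd cannot read ~/.ssh/authorized_keys until the home directory gets mounted at the first interactive login. Assuming that is the cause here, one workaround is to keep the public keys somewhere that is always readable; the paths and user name below are only an example:

       # Copy the keys outside the (possibly ecryptfs-encrypted) home directory
       sudo mkdir -p /etc/ssh/authorized-keys
       sudo cp /home/smorhaim/.ssh/authorized_keys /etc/ssh/authorized-keys/smorhaim
       sudo chmod 0644 /etc/ssh/authorized-keys/smorhaim

       # Point sshd at the new location by adding this line to /etc/ssh/sshd_config:
       #   AuthorizedKeysFile /etc/ssh/authorized-keys/%u
       sudo service ssh restart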

    Read the article

  • Cannot delete old NFS directory: Device or resource busy

    - by Jakobud
     On server1, we had an NFS share mounted from server2 like this: /nfs/server2/share. Recently, we took down server2 to install a new OS on it. Now we can't get NFS set up the way it was. When I run ls -l /nfs I get this:

       drwxr-xr-x 2 root root 0 2010-03-15 09:59 server2

     Notice how the directory size is 0 instead of 4096 like usual? Anyway, I go into server2 expecting to see a share directory, but I don't; it's empty. So I cannot mount my share at /nfs/server2/share. When I try to create the /nfs/server2/share directory, I get:

       mkdir: cannot create directory `share': No such file or directory

     I think this is because it doesn't believe the /nfs/server2 directory really exists. Even if I use the -p option with mkdir, it doesn't work. Next I tried to remove /nfs/server2 so I could just recreate it, but rm -r /nfs/server2 gives me:

       rm: cannot remove directory `/nfs/server2': Device or resource busy

     So now I'm at a loss. I need to mount this NFS share in the exact same place on server1 (at /nfs/server2/share) because other software on server1 depends on it. But if I can't create that share directory and I can't remove that directory, what do I do? Also, just for testing, I attempted to mount the share at /nfs/testing/share and it mounted just fine. But like I said, I need to mount it back in the same location.
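
     A hedged first check: the zero-size directory and the "Device or resource busy" error both look like a stale NFS mount left over from the old server2, which usually clears with a forced or lazy unmount rather than rm. Something like the following (run as root on server1):

       # Is anything under /nfs/server2 still listed as mounted?
       mount | grep server2

       # Force the stale mount off; fall back to a lazy unmount if -f hangs
       umount -f /nfs/server2/share
       umount -f /nfs/server2
       umount -l /nfs/server2

       # If it still reports busy, see which processes are holding it open
       fuser -vm /nfs/server2

     Once the stale mount is gone, /nfs/server2 should behave like a normal local directory again and the new export can be mounted in the old location.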

    Read the article

  • Last (I think and hope) problems configuring SSL certificate with Apache and VirtualHosts

    - by user65567
     Finally I set up apache2 to use a single certificate for all subdomains. [...]

       # Go ahead and accept connections for these vhosts
       # from non-SNI clients
       SSLStrictSNIVHostCheck off

       # Apache setup which will listen for and accept SSL connections on port 443.
       Listen 443

       # Listen for virtual host requests on all IP addresses
       NameVirtualHost *:443

       # Because this virtual host is defined first, it will
       # be used as the default if the hostname is not received
       # in the SSL handshake, e.g. if the browser doesn't support
       # SNI.
       <VirtualHost *:443>
           ServerName domain.localhost
           DocumentRoot "/Users/<my_user_name>/Sites/domain/public"
           <Directory "/Users/<my_user_name>/Sites/domain/public">
               Order allow,deny
               Allow from all
           </Directory>
           # SSL Configuration
           SSLEngine on
           ...
       </VirtualHost>

       <VirtualHost *:443>
           ServerName subdomain1.domain.localhost
           DocumentRoot "/Users/<my_user_name>/Sites/subdomain1/public"
           <Directory "/Users/<my_user_name>/Sites/subdomain1/public">
               Order allow,deny
               Allow from all
           </Directory>
           # SSL Configuration
           SSLEngine on
           ...
       </VirtualHost>

       <VirtualHost *:443>
           ServerName subdomain2.domain.localhost
           DocumentRoot "/Users/<my_user_name>/Sites/subdomain2/public"
           <Directory "/Users/<my_user_name>/Sites/subdomain2/public">
               Order allow,deny
               Allow from all
           </Directory>
           # SSL Configuration
           SSLEngine on
           ...
       </VirtualHost>

     So, for example, I can correctly access https://subdomain1.domain.localhost, https://subdomain2.domain.localhost, and so on. However, I now have problems accessing http://subdomain1.domain.localhost, http://subdomain2.domain.localhost, etc. Since I'm on Mac OS, when I access the http:// version I get a default page saying "Your website." (instead of an error). Why does this happen?
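
     A likely explanation, given the config above: only *:443 virtual hosts are defined, so a plain-HTTP request on port 80 doesn't match any of them and Apache falls back to its default site, which is presumably the "Your website." placeholder page shipped with OS X. A minimal sketch of matching port-80 hosts, assuming the same names and paths as above, would be:

       NameVirtualHost *:80

       <VirtualHost *:80>
           ServerName subdomain1.domain.localhost
           DocumentRoot "/Users/<my_user_name>/Sites/subdomain1/public"
           # Or, to force HTTPS instead of serving the site over plain HTTP:
           # Redirect permanent / https://subdomain1.domain.localhost/
       </VirtualHost>

       # ...and one such block per name (domain.localhost, subdomain2.domain.localhost, ...)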

    Read the article

  • Nginx + uWSGI on a fresh Ubuntu install - bind error port 80

    - by knuckfubuck
     I know this is a common problem, usually having to do with Apache or another service already running on port 80, but I have done a lot of searching and running netstat and still have not figured out why I am getting this error. I rebuilt my slice, did a fresh install of Ubuntu 10.04 and set up nginx + uWSGI. It worked and I was able to see my Django site. I then installed Postgres 8.4 and the rest of the stack needed for GeoDjango from this link. After that was done I tried to restart nginx, and I get this error:

       sudo /etc/init.d/nginx start
       Starting nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
       configuration file /usr/local/nginx/conf/nginx.conf test is successful
       [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
       [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
       [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
       [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
       [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
       [emerg]: still could not bind()

     I have nginx set to listen 80. Here's the output from netstat -l --numeric-ports | grep 80:

       tcp   0   0 0.0.0.0:80       0.0.0.0:*   LISTEN
       tcp   0   0 127.0.0.1:8000   0.0.0.0:*   LISTEN

     Output from sudo lsof +M -i4:

       nginx   2330   root       8u  IPv4  3195  0t0  TCP *:www (LISTEN)
       nginx   2331   www-data   8u  IPv4  3195  0t0  TCP *:www (LISTEN)
       uwsgi   2335   s          4u  IPv4  3259  0t0  TCP localhost:8000 (LISTEN)
       uwsgi   2352   s          4u  IPv4  3259  0t0  TCP localhost:8000 (LISTEN)
       uwsgi   2353   s          4u  IPv4  3259  0t0  TCP localhost:8000 (LISTEN)
       uwsgi   2354   s          4u  IPv4  3259  0t0  TCP localhost:8000 (LISTEN)
       uwsgi   2355   s          4u  IPv4  3259  0t0  TCP localhost:8000 (LISTEN)

     Does anyone have any other ideas how I can figure out what is blocking port 80?

     Edit: a paste of my /etc/init.d/nginx script is here: http://dpaste.com/hold/400937/
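
     One thing worth ruling out, based on the lsof output above: port 80 is already held by an nginx master/worker pair (PIDs 2330 and 2331), so the init script appears to be starting a second copy on top of an instance that is already running (perhaps left over from before the GeoDjango stack was installed). A hedged sequence to clear that; the pid-file path is the default for a source build under /usr/local/nginx and may differ on this box:

       # Confirm which process owns port 80
       sudo lsof -i :80

       # Stop the running instance via the init script first
       sudo /etc/init.d/nginx stop

       # If a stray master process is still around, stop it directly
       sudo kill $(cat /usr/local/nginx/logs/nginx.pid)     # assumed pid-file location
       # or, as a last resort:
       sudo pkill -f "nginx: master process"

       # Then start it cleanly
       sudo /etc/init.d/nginx start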

    Read the article

  • Zimbra MTA settings

    - by user192702
     Hi, I have some questions about Zimbra v8.0.6 GA. Under Configure - MTA - Network, I'm seeing a few settings and am not very clear on what to do with them.

       - Web mail MTA Host name: Is this for delivering local mail only (i.e. not for external mail)? According to this link, it says: "Webmail MTA is used by the Zimbra server for composed messages and must be the location of the Postfix server in the Zimbra MTA." That's a mouthful, but what are "composed messages"? Is this for a multi-server deployment where the Postfix server for Zimbra isn't installed on the same box as the rest of the servers?

       - Relay MTA for external delivery: My understanding after reading the doc is that if my ISP doesn't force me to relay outgoing mail through them, and I have enabled DNS lookup, I can leave this blank?

       - Inbound SMTP host name: Sorry, I know this is explained as "If your MX records point to a spam-relay or any other external non-Zimbra server, enter the name of that server in the Inbound SMTP host name field", but I'm not following. Can someone provide an example?

       - MTA Trusted Networks: The admin doc says "To set up MTA trusted networks on a per server basis, make sure that MTA trusted networks have been set up as global settings and then go to the Configure > Servers > MTA page and in the MTA Trusted Networks field enter the trusted network addresses for the server." However, I see that out of the box it has default networks set up for the server, whereas at the global level it's blank. Does this mean there is a bug in the installer and I have to copy the setting from the server to the global setting?

    Read the article

  • Giving Select Windows Domain Users Symbolic Link Privilege

    - by fp0n
     I would like to set up select users on our domain to have the ability to create symbolic links on local NTFS drives and network shares without needing to run as Administrator, as part of an application which will call the CreateSymbolicLink() API directly. The default configuration for our users is to be Administrator of their computer, and I think that is why I am fighting UAC to make the privilege work the way I want. I found this link on MSDN: http://social.msdn.microsoft.com/Forums/en-SG/windowssdk/thread/fa504848-a5ea-4e84-99b7-0eb4e469cbef which describes the interaction between the SeCreateSymbolicLinkPrivilege, UAC and a domain, but does not really have a solution. Here are the options I've come up with:

       1) Create a new group, give SeCreateSymbolicLinkPrivilege to the group, and assign users to the group
       2) Give each individual user (2 now, more later) the privilege
       3) Give the privilege to the default Users group, which opens it up to all users
       4) Change the configuration so users are not Admins by default (would probably work, but not likely to happen)

     Based on my testing, only 3 works for me, and that is the least desirable, but I've only got a local server to test with, not a domain. I need to recommend to the admin how to set this up, and also have something that we can easily explain to other users of our application, whether they are on their own domain or not on a domain at all. The other option seems to be to create a service that runs under the SYSTEM account and creates the links for the application, but I'd rather not go that route. Thanks.

    Read the article

  • NSclient++ NRPE issues

    - by Kyle
     I have had NSClient++ working with Nagios for a while now. Recently I started testing Nagwin just to see how it would work, out of pure curiosity. I stopped checking a test server with my main Nagios config, set NSClient++ to NRPE mode, and pointed Nagwin at it. It worked great for a few hours; then suddenly I started seeing "UNKNOWN: No Handler for that command." I figured it had to be Nagwin's fault since it's so new, so I unloaded the NRPE listener DLL and returned the server to being monitored by check_NT. However, now check_NT doesn't work either: my main Nagios server returns timeout errors and is unable to connect at all. My Nagwin server can connect to it; the server just doesn't know how to handle the check_NRPE commands, even though it did, with no changes, a few hours earlier. I have been working on this for a day now and am fairly certain it is NSClient++ that is to blame here. My Nagwin box has successfully stayed connected to a similar server throughout the night without any issues, and my main Nagios config is not having any problems at all. I have been able to successfully switch another server between being monitored by Nagios and Nagwin, without any problems, simply by loading and unloading the NRPE DLL. I have tried uninstalling NSClient++ and reinstalling with a fresh configuration, but I still receive the errors. As of now the firewall is off on the server, NSClient++ is set up to accept connections from any server, there is no password, I have also turned SSL off, and the NRPE module is loaded. Any ideas would be appreciated; I am not an advanced Nagios user, but I do know my way around it and can easily tear it down and set it up again. I also want to add that in test mode, NSClient++ is unable to handle check_NRPE commands either.
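
     As a diagnostic data point, "No handler for that command" is normally produced by NSClient++ itself (the NRPE listener answered, but no loaded module implements the requested check), while the check_NT timeouts point at the NSClient listener/port instead. Testing both channels directly from the monitoring hosts helps separate the two; these are standard plugin invocations with a placeholder address and typical Linux plugin paths:

       # From the Nagwin/Nagios side: does the NRPE listener (default port 5666) answer at all?
       /usr/local/nagios/libexec/check_nrpe -H 192.0.2.10 -p 5666

       # Ask for a specific check (CheckCPU lives in NSClient++'s CheckSystem module)
       /usr/local/nagios/libexec/check_nrpe -H 192.0.2.10 -c CheckCPU -a warn=80 crit=90 time=5m

       # From the main Nagios side: is the check_nt listener (default port 12489) reachable?
       /usr/local/nagios/libexec/check_nt -H 192.0.2.10 -p 12489 -v CLIENTVERSION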

    Read the article

  • Self-hosting vs. Budget hosting - What are the economics?

    - by cdonner
     My current hosting provider (shared Linux, unlimited domains, < $10 per month, with about 20 sites) has been giving me a lot of grief lately. I am contemplating just ditching them, repurposing the old Sun V20z that is sitting in my basement rack, and moving the hosting in-house, literally. My math goes as follows:

       - My company pays up to $80 a month for my home internet service, which would cover the upgrade from my current FiOS to Comcast business internet with 5 static IPs. So this comes free.
       - Running the server will cost me about $180/year at the current rate of approx. $0.20/kWh.
       - My time is free.

     So it seems that my net cost of doing this would be about $80 annually (the $180 or so in electricity, less the roughly $100 a year I would no longer pay the hosting provider), plus the work that goes into setup and maintenance. I will still have to get email hosting somewhere, which I do not want to do myself. On the other side of the balance sheet, I'd likely get better uptime than my provider based on recent stats, I will not get suspended, and I won't have to spend hours with customer support. Overall, I am not convinced. Has anybody actually done this? What was your experience, and did it pay off?

    Read the article

  • Issue configuring Oracle database for SSL

    - by Santhosha Kaldambe
     Hello, I want to set up Oracle for SSL communication. I am not using SSL authentication for the database user. As a first requirement, I generated a self-signed certificate using OpenSSL and added it to the wallet. The wallet location is specified in the server configuration. I created the listener and it starts; however, it does not provide any services. The default (non-SSL) listener is working fine. When I execute LSNRCTL.EXE status SSLLISTENER it gives the output below:

       STATUS of the LISTENER
       Alias                     SSLLISTENER
       Version                   TNSLSNR for 32-bit Windows: Version 11.1.0.6.0 - Production
       Start Date                14-NOV-2009 01:47:08
       Uptime                    16 days 22 hr. 14 min. 3 sec
       Trace Level               off
       Security                  ON: Local OS Authentication
       SNMP                      OFF
       Listener Parameter File   C:\app\Administrator\product\11.1.0\db_1\network\admin\listener.ora
       Listener Log File         c:\app\administrator\diag\tnslsnr\\ssllistener\alert\log.xml
       Listening Endpoints Summary...
         (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=)(PORT=2484)))
       The listener supports no services
       The command completed successfully

     Here is the exact content of the various files after configuration.

     1) tnsnames.ora:

       ORCL =
         (DESCRIPTION =
           (ADDRESS_LIST =
             (ADDRESS = (PROTOCOL = TCP)(HOST = )(PORT = 1521))
           )
           (CONNECT_DATA =
             (SERVER = DEDICATED)
             (SERVICE_NAME = orcl)
           )
         )

     2) sqlnet.ora:

       SSL_VERSION = 0
       NAMES.DIRECTORY_PATH = (TNSNAMES, EZCONNECT)
       sqlnet.authentication_services = (NONE)
       tcp.validnode_checking = no
       tcp.invited_nodes = (PS0803.oraebs.com, PS2948, PS5098)
       SSL_CLIENT_AUTHENTICATION = FALSE
       WALLET_LOCATION =
         (SOURCE =
           (METHOD = FILE)
           (METHOD_DATA =
             (DIRECTORY = C:\app\Administrator\admin\orcl\Server_Wallet)
           )
         )

     3) listener.ora:

       SSL_CLIENT_AUTHENTICATION = FALSE
       WALLET_LOCATION =
         (SOURCE =
           (METHOD = FILE)
           (METHOD_DATA =
             (DIRECTORY = C:\app\Administrator\admin\orcl\Server_Wallet)
           )
         )
       LISTENER =
         (DESCRIPTION_LIST =
           (DESCRIPTION =
             (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
           )
           (DESCRIPTION =
             (ADDRESS = (PROTOCOL = TCP)(HOST = )(PORT = 1521))
           )
         )
       SSLLISTENER =
         (DESCRIPTION =
           (ADDRESS = (PROTOCOL = TCPS)(HOST = )(PORT = 2484))
         )

     Thanks, Santhosh
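
     "The listener supports no services" usually just means that nothing has registered with SSLLISTENER: the instance registers itself dynamically with the default listener on port 1521 but knows nothing about the second one. One hedged way to address that, assuming a single local instance named orcl, is a static registration in listener.ora (setting LOCAL_LISTENER so the instance registers with both endpoints is the dynamic alternative):

       SID_LIST_SSLLISTENER =
         (SID_LIST =
           (SID_DESC =
             (GLOBAL_DBNAME = orcl)
             (ORACLE_HOME = C:\app\Administrator\product\11.1.0\db_1)
             (SID_NAME = orcl)
           )
         )

     After LSNRCTL.EXE reload SSLLISTENER (or a restart of that listener), the status output should list the orcl service instead of "supports no services".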

    Read the article

  • Static routing on a TP-Link TL-WR1043ND

    - by igor
     My home network setup is as follows (both routers are TP-Link TL-WR1043ND routers): the basement router handles all devices in the house that are connected via cable, handing out addresses for the 10.89.49.0/24 network via DHCP. Wireless doesn't really work from the basement, as the signal is too weak, so I have disabled it. To provide WiFi, I have added a second (identical) router downstairs. On the WAN side it is assigned the 10.89.49.101 IP address by the basement router, and on its LAN it provides the 10.89.7.0/24 network. Basic internet access works flawlessly from any device this way. The problem I am now facing is that I am not able to communicate (e.g. SSH) between all devices, wired or wireless. I am able to connect from a wireless device to a wired device, for example SSH-ing from 10.89.7.X to 10.89.49.Y, but it doesn't work the other way round, despite the fact that I have added a static route on the basement router. Does anybody have any idea how to solve this? Both routers have already been upgraded to the most recent firmware from TP-Link.com (Build 110429), to no avail. Note: I would like to stick with the official firmware, switching to something like DD-WRT or OpenWrt only as a last resort.

    Read the article

  • Make Thunderbird use and sync with Gmail's spam facility

    - by Senthil
     I am using Thunderbird 3.1.7 with Gmail. When I mark a message as spam in Thunderbird, I want it to go to Gmail's spam folder, but that is not happening: the mail is only marked as spam and stays in Gmail's inbox. How can I set up Thunderbird so that when I mark a message as spam in Thunderbird, it goes to the spam folder in both Thunderbird and Gmail? FYI: I have disabled Thunderbird's own adaptive spam controls so that they don't interfere. I am perfectly fine with Gmail's spam facility and don't want Thunderbird to add another layer of spam filtering. Still, Thunderbird doesn't send emails to Gmail's spam folder. Update: I found that dragging and dropping the email into the spam folder in Thunderbird does the job. But is that the same as marking the email as spam in Gmail's interface? That is, Gmail learns from messages you mark as spam so that it can filter out similar messages in the future. Does that happen when I do the drag-and-drop thing too? Are they the same?

    Read the article

  • Error when trying to deploy Windows XP SP3 with WDS

    - by Nic Young
     I have created a WDS server running Windows Server 2008 R2. I have built my custom images of Windows 7 using WAIK and MDT 2010, which are installed on the server; I used this guide to help me through the process. The Windows 7 images that I have created capture and deploy properly. I am attempting to follow the same steps from the guide I linked to in order to capture and deploy a Windows XP SP3 image. I am able to sysprep and capture the reference machine with no errors, and I am then able to import the custom .wim that I just captured into MDT 2010 with no issues either. However, when I try to deploy this image to a test virtual machine, I get a deployment error (screenshot not reproduced here). I have made sure that the .iso from which I originally imported the source files to create the sysprep-and-capture sequence is indeed a Windows XP SP3 ISO. When I select a PE boot environment before I deploy, I pick the x86 PE boot image that I created originally for my Windows 7 deployments. Could this be the issue? If so, how do I make a boot image specific to Windows XP SP3 deployments? I have Googled around for this error, and some places point to the deployment image not being able to find setup.exe and other important system files for installing the operating system. If so, how do I add these to the image? Any ideas?

    Read the article

  • SSL certificates work fine from the command line but fail in a script

    - by jrallison
     I'm trying to set up email notifications for my continuous integration server. I have a script which uses nail to send the email when the build works:

       #!/bin/bash
       echo "Build Worked!" | nail -A myisp -s 'Build Success' [email protected]

     When I run this from the command line with sh build-worked, it works and I receive the email. However, when the continuous integration server executes the same script, I get the following error:

       nail: /opt/bitnami/common/lib/libssl.so.0.9.8: no version information available (required by nail)
       nail: /opt/bitnami/common/lib/libcrypto.so.0.9.8: no version information available (required by nail)
       Error with certificate at depth: 0
        issuer = /C=ZA/ST=Western Cape/L=Cape Town/O=Thawte Consulting cc/OU=Certification Services Division/CN=Thawte Premium Server CA/[email protected]
        subject = /C=US/ST=California/L=Mountain View/O=Google Inc/CN=smtp.gmail.com
        err 20: unable to get local issuer certificate
       Continue (y/n)? could not initiate SSL/TLS connection: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
       . . . message not sent.

     I must be missing some configuration. Any ideas?
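
     A hedged reading of the first two lines: the CI server (a Bitnami stack, judging by /opt/bitnami) puts its own libssl/libcrypto on the library path, and in that environment nail ends up unable to verify Google's certificate chain. Assuming a nail/heirloom-mailx build that supports the -S variables below (that is an assumption, as is the Ubuntu CA-bundle path), something along these lines may be worth trying in the script:

       #!/bin/bash
       # Prefer the system OpenSSL over the Bitnami copy for this one command
       unset LD_LIBRARY_PATH

       # Point nail at the distribution's CA bundle so the Thawte/Google chain can be verified
       echo "Build Worked!" | nail -A myisp \
           -S ssl-ca-file=/etc/ssl/certs/ca-certificates.crt \
           -s 'Build Success' [email protected]

       # Blunter fallback (skips verification entirely):
       #   echo "Build Worked!" | nail -A myisp -S ssl-verify=ignore -s 'Build Success' [email protected]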

    Read the article

  • Exchange 2010: Replication Service Still Trying to Replicate Deleted Mailbox Store

    - by ThaKidd
     In advance, thank you for your opinions! I just migrated from Server/Exchange 2003 to Server 2008 R2 running Exchange 2010. There was an extra mailbox store that appeared with some system mailboxes in it; I used the EMS to move those mailboxes over and then deleted the store in the EMC. Since then, every so often I get an error in Event Viewer:

       Source: MSExchangeRepl
       ID: 4098
       Error: The Microsoft Exchange Replication service couldn't find a valid configuration for database '5f012f40-3bad-4003-a373-dbc0ffb6736f' on server 'EXCHSERVER'. Error: (nothing after this)

     I can confirm that the above GUID is the mailbox store that I deleted. No other Exchange errors occur. How can I tell Exchange Replication to ignore this store? The setup is one Exchange Server 2003 transitioned over to 2010, with no other Exchange servers. Is there a way to fix this? Do I need to change a setting to stop replication? I plan to add a second Exchange server in the next few days, so stopping replication would be a bad thing. Thanks again in advance. Jason

    Read the article

  • nginx with stub_status.. need help with nginx.conf

    - by Amar
     Hello, I am trying to set up nginx with stub_status so I can monitor nginx requests, etc., with serverdensity.com. I needed to put something like this in nginx.conf:

       server {
           listen 82.113.147.xxx;
           location /nginx_status {
               stub_status on;
               access_log off;
               allow 82.113.147.xxx;
               deny all;
           }
       }

     And with this, monitoring actually works. However, it seems I lost the "include" part of my nginx.conf, and now none of the vhosts in sites-enabled work. Here is a bit more of my nginx.conf:

       http {
           include /etc/nginx/mime.types;
           default_type application/octet-stream;
           server_tokens off;

           access_log /var/log/nginx/access.log;

           sendfile on;
           #tcp_nopush on;

           #keepalive_timeout 0;
           keepalive_timeout 65;
           tcp_nodelay on;

           gzip on;
           gzip_comp_level 2;
           gzip_proxied any;
           gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;

           include /etc/nginx/conf.d/*.conf;
           include /etc/nginx/sites-enabled/*;

           server {
               listen 82.113.147.226;
               location /nginx_status {
                   stub_status on;
                   access_log off;
                   allow 82.113.147.226;
                   deny all;
               }
           }
       }

     Hope someone can help me with this, as I believe it's a minor issue; it's just that I don't see it. Thanks!
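
     One detail worth checking besides the includes (which do appear to be present in the config quoted above): "listen 82.113.147.226;" with no port means port 80 on that specific address, and nginx matches an address-specific listen ahead of the wildcard "listen 80;" that sites-enabled vhosts typically use, so this small status server can end up answering every request that arrives on that IP. A less intrusive sketch is to keep the stub_status location inside a normal name-based vhost (the server_name below is made up for illustration), or simply to add the location block to one of the existing vhosts:

       server {
           listen 80;
           server_name status.example.com;   # hypothetical name used only by the monitoring agent

           location /nginx_status {
               stub_status on;
               access_log  off;
               allow 82.113.147.226;
               allow 127.0.0.1;
               deny  all;
           }
       }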

    Read the article
