Search Results

Search found 14546 results on 582 pages for 'mod authz host'.


  • Sendmail in Nexenta core 3.0.1

    - by maximdim
    I'm trying to set up sendmail on Nexenta Core 3.0.1 (a Solaris-based OS). All I want is to be able to send email from that host - notifications about failures, cron job output, etc. Nexenta Core doesn't ship with sendmail, so here is what I've done: apt-get install sunwsndmu. Now there is a sendmail binary at /usr/sbin/sendmail. When I try to send email from the command line ($ mail maxim, then the message "test", then "." to finish), it doesn't give me any error, but in the log file I see:
      Dec 20 12:41:08 nas sendmail[12295]: [ID 801593 mail.info] oBKHf8u7012295: from=maxim, size=107, class=0, nrcpts=1, msgid=<201012201741.oBKHf8u7012295@nas>, relay=maxim@localhost
      Dec 20 12:41:08 nas sendmail[12295]: [ID 801593 mail.info] oBKHf8u7012295: to=maxim, ctladdr=maxim (1000/10), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=30107, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection refused by [127.0.0.1]
    So I guess I need to have an SMTP service running. How do I do that on Nexenta? svcs -a | grep sendmail doesn't return anything, and # svcadm enable sendmail gives: svcadm: Pattern 'sendmail' doesn't match any instances. I'm not married to sendmail, so if there are easier ways to achieve my goal I'm open to suggestions as well. Thanks.
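
    One direction worth trying - a sketch based on how sendmail is managed by SMF on stock Solaris 10, so the manifest path and FMRI below are assumptions that may differ on Nexenta - is to register the sendmail service with SMF and enable it, so that something is listening on 127.0.0.1:25 for the deferred messages:

      # assumption: the sunwsndmu package ships an SMF manifest for sendmail
      svccfg import /var/svc/manifest/network/smtp-sendmail.xml   # register the service with SMF
      svcadm enable svc:/network/smtp:sendmail                    # start the SMTP listener
      svcs -l svc:/network/smtp:sendmail                          # confirm it is online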


  • Automatically moving spam messages to a folder in Postfix

    - by cad
    Hi. My problem is that I want to automatically move spam messages to a folder and I'm not sure how. I have a Linux box providing email access. The MTA is Postfix, IMAP is Courier. As the webmail client I use SquirrelMail. To filter spam I use SpamAssassin, and it is working OK. SpamAssassin is rewriting subjects as [--- SPAM 14.3 ---] Viagra... It is also adding headers: X-Spam-Flag: YES X-Spam-Checker-Version: SpamAssassin 3.2.5 (2008-06-10) on xxxx X-Spam-Level: ************** X-Spam-Status: Yes, score=14.3 required=2.0 tests=BAYES_99, DATE_IN_FUTURE_24_48,HTML_MESSAGE,MIME_HTML_ONLY,RCVD_IN_PBL, RCVD_IN_SORBS_WEB,RCVD_IN_XBL,RDNS_NONE,URIBL_RED,URIBL_SBL autolearn=no version=3.2.5 X-Spam-Report: * 0.0 URIBL_RED Contains an URL listed in the URIBL redlist * [URIs: myimg.de] * 3.5 BAYES_99 BODY: Bayesian spam probability is 99 to 100% * [score: 1.0000] * 0.9 RCVD_IN_PBL RBL: Received via a relay in Spamhaus PBL * [113.170.131.234 listed in zen.spamhaus.org] * 3.0 RCVD_IN_XBL RBL: Received via a relay in Spamhaus XBL * 0.6 RCVD_IN_SORBS_WEB RBL: SORBS: sender is a abuseable web server * [113.170.131.234 listed in dnsbl.sorbs.net] * 3.2 DATE_IN_FUTURE_24_48 Date: is 24 to 48 hours after Received: date * 0.0 HTML_MESSAGE BODY: HTML included in message * 1.5 MIME_HTML_ONLY BODY: Message only has text/html MIME parts * 1.5 URIBL_SBL Contains an URL listed in the SBL blocklist * [URIs: myimg.de] * 0.1 RDNS_NONE Delivered to trusted network by a host with no rDNS I want to automatically move spam messages to a folder. Ideally (not sure if it's possible) I would only move messages scoring 5.0 or more to the folder; spam scoring between 2.0 and 5.0 I want stored in the Inbox. (I plan to switch autolearn on later.) After reading a lot on the procmail, Postfix and SpamAssassin sites and googling a lot (lots of outdated howtos) I found two solutions, but I'm not sure which is best or whether there is another one: put a rule in SquirrelMail (dirty solution?), or use Procmail. Which is the best option? Do you have any updated howto about it? Thanks
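
    For the Procmail route, a minimal sketch (assuming Maildir delivery under Courier, where IMAP folders are dot-directories inside ~/Maildir, and assuming the target folder is called Spam) could go in ~/.procmailrc:

      MAILDIR=$HOME/Maildir         # Courier-style Maildir location (assumption)
      DEFAULT=$MAILDIR/             # unmatched mail is delivered to the Inbox

      # file a message as spam when SpamAssassin scored it 5.0 or higher
      # (X-Spam-Level carries one '*' per point, so five stars means score >= 5)
      :0:
      * ^X-Spam-Level: \*\*\*\*\*
      .Spam/

      # anything not matched above, including spam scored between 2.0 and 5.0,
      # falls through to DEFAULT, i.e. the Inbox

    Postfix would also need to hand local deliveries to procmail (for example via mailbox_command in main.cf) for the recipe to run at all.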


  • Apache 403 after configuring varnish

    - by w0rldart
    I just don't know where else to look or what else to do. I keep getting a 403 error on all my vhosts after setting up Varnish 3.0. Apache log: [error] [client 127.0.0.1] client denied by server configuration: /etc/apache2/htdocs Headers: http://domain.com/ GET / HTTP/1.1 Host: domain.com User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:16.0) Gecko/20100101 Firefox/16.0 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-US,en;q=0.5 Accept-Encoding: gzip, deflate DNT: 1 Connection: keep-alive Cookie: __utma=106762181.277908140.1348005089.1354040972.1354058508.6; __utmz=106762181.1348005089.1.1.utmcsr=OTHERDOMAIN.com|utmccn=(referral)|utmcmd=referral|utmcct=/galerias/cocinas Cache-Control: max-age=0 HTTP/1.1 403 Forbidden Vary: Accept-Encoding Content-Encoding: gzip Content-Type: text/html; charset=iso-8859-1 X-Cacheable: YES Content-Length: 223 Accept-Ranges: bytes Date: Sat, 01 Dec 2012 20:35:14 GMT X-Varnish: 1030961813 1030961811 Age: 26 Via: 1.1 varnish Connection: keep-alive X-Cache: HIT ---------------------------------------------------------- /etc/default/varnish: DAEMON_OPTS="-a ip.ip.ip.ip:80 \ -T localhost:6082 \ -f /etc/varnish/main.domain.vcl \ -S /etc/varnish/secret \ -s file,/var/lib/varnish/$INSTANCE/varnish_storage.bin,1G" #-s malloc,256m" My vcl file: http://pastebin.com/axJ57kD8 So, any ideas what I could be missing? Update: Just so you know, the ports are: NameVirtualHost *:8000 Listen 8000 and <VirtualHost 205.13.12.12:8000>
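
    One likely explanation, given that the 403 comes from the default /etc/apache2/htdocs DocumentRoot: if the Varnish backend in the VCL points at 127.0.0.1 (or localhost), the request reaches Apache on an address that the IP-based <VirtualHost 205.13.12.12:8000> never matches, so it falls through to the default server and is denied. A sketch of a vhost that matches whatever address the backend connects to (ServerName and DocumentRoot are placeholders, Apache 2.2 syntax assumed):

      NameVirtualHost *:8000
      Listen 8000
      <VirtualHost *:8000>
          ServerName domain.com
          DocumentRoot /var/www/domain.com
          <Directory /var/www/domain.com>
              Order allow,deny
              Allow from all
          </Directory>
      </VirtualHost>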


  • Apache permission errors

    - by Wilduck
    I'm trying to set up Apache on an Arch Linux box as a testing environment (I'm only using localhost, not trying to serve anything to the greater web). When setting up Django with mod_wsgi, it was recommended that I set up a WSGIScriptAlias from / to /usr/local/django/mysite/apache/django.wsgi. I've done this, as well as added the /usr/.../apache directory to my httpd.conf. When I try to access http://localhost I get a 403 Forbidden error. I have no idea why this is happening. Things I've tried so far: 1) chown -R http .../apache 2) chmod -R 777 .../apache 3) using a simple Alias directive to host a static file from that directory. None of these have worked. I'm at a loss for what I'm doing wrong. Below is a relevant excerpt from my httpd.conf:
      Alias / /usr/local/django/mysite/apache
      <Directory "/usr/local/django/mysite/apache">
          Order deny,allow
          Allow from all
      </Directory>
    So my question is: what am I doing wrong?
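
    For comparison, a minimal mod_wsgi setup of the kind the Django docs describe uses WSGIScriptAlias rather than a plain Alias (with a plain Alias, Apache tries to serve the directory itself, which is forbidden unless indexes are allowed). A sketch using the paths quoted above, with Apache 2.2-style access directives assumed:

      WSGIScriptAlias / /usr/local/django/mysite/apache/django.wsgi
      <Directory "/usr/local/django/mysite/apache">
          Order deny,allow
          Allow from all
      </Directory>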


  • What to do before connecting Ubuntu Server to the internet for the first time?

    - by CodeMonkey
    I just finished installing Ubuntu Server 12.10 on an Asus Eee PC 1000H (to be used as a home server/sandbox) from USB. I installed this software during installation: 1) OpenSSH server, 2) LAMP server, 3) Samba file server, 4) Virtual Machine host. I won't use 2, 3 or 4 for a while though. Can/should I turn these off somehow? I have turned home directory encryption on. Security updates are installed automatically. I have chosen a strong password for the single user. I have never plugged in the internet cable so far. Before doing so I'd like to ask: what can/should I do or install to increase security before connecting to the internet? Firewall? Fail2ban? Users/passwords? Encryption? Enabling/disabling functionality? etc. I'm sorry if you get this question a lot. I've searched around quite a while, but it still feels like I might overlook something important.
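
    A minimal first hardening pass of the kind being asked about might look like the sketch below (for Ubuntu Server 12.10; the service names and the Upstart-vs-SysV split are assumptions and may need adjusting):

      # firewall: allow only SSH in, deny everything else by default
      sudo ufw allow OpenSSH
      sudo ufw enable

      # brute-force protection for sshd (the default jail covers ssh)
      sudo apt-get install fail2ban

      # keep unused daemons from listening until they are actually needed
      sudo service apache2 stop && sudo update-rc.d apache2 disable             # SysV-style service
      sudo service smbd stop && echo manual | sudo tee /etc/init/smbd.override  # Upstart job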


  • iptables port forwarding works only for localhost

    - by Venki
    Below is my iptables config. I used it to make a Node.js website running on port 9000 reachable through port 80. This works fine only if I access the website through localhost / loopback. When I try to use the IP of eth0, which is assigned by my router through DHCP, it does not work - i.e. when I use an IP like 192.168.0.103 to access the website. I am not able to figure out what is wrong here; I've already burnt a day on this and still can't figure it out :( Edit (more information): Earlier I was using this configuration to develop the website, and I had configured the domain name to point to 127.0.0.1 in the /etc/hosts file. That was working fine, but now I am trying to deploy the website on a VPS with a static IP, and this configuration does not work with the static IP either.
      # redirect port 80 to port 9000
      *nat
      :PREROUTING ACCEPT [57:3896]
      :INPUT ACCEPT [0:0]
      :OUTPUT ACCEPT [4229:289686]
      :POSTROUTING ACCEPT [4239:290286]
      -A PREROUTING -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9000
      -A OUTPUT -d 127.0.0.1/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9000
      COMMIT
      # Allow HTTP and HTTPS connections from anywhere (the normal ports for websites and SSL).
      -A INPUT -p tcp --dport 80 -j ACCEPT
      -A INPUT -p tcp --dport 443 -j ACCEPT
      -A INPUT -p tcp --dport 9000 -j ACCEPT
      -A INPUT -j REJECT
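
    If the test is being run from the same machine against its own eth0 address, the PREROUTING rule never sees those packets: locally generated traffic goes through the nat OUTPUT chain, and the OUTPUT rule above only matches destination 127.0.0.1. A sketch of the kind of extra rule that covers that case, plus a quick way to see which rules are actually being hit (192.168.0.103 is the address quoted in the question; on the VPS it would be the static IP):

      # also redirect locally generated traffic aimed at the box's own address
      iptables -t nat -A OUTPUT -p tcp -d 192.168.0.103 --dport 80 -j REDIRECT --to-ports 9000

      # show NAT rules with packet counters to confirm what is matching
      iptables -t nat -L -n -v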


  • Port forwarding to put my web server on the Internet

    - by Chadworthington
    I went to http://canyouseeme.org/ to check what my external IP address is. Regardless of what port I enter, it tells me that the port is blocked. I have a LinkSys router that basically has the default settings, with the exception that I have WEP encryption set up and I have forwarded a few ports, including 80 and 69. I forwarded them to the 192.x.x.103 IP address of the PC which is running IIS. That PC runs Symantec Endpoint Protection, which I right-clicked in the tray to disable. These steps used to make my PC visible so I could host my own web site in IIS on port 80, or some other port, like 69. Yet the Open Port tool cannot see my IP when it checks either port, and when I navigate to http://my external ip/ I get "page can't be displayed". At first I was thinking that maybe Comcast is blocking port 80, but 69 doesn't work either. I do not see any other blocking set up in my router and, as I mentioned, I went with the defaults except where discussed. This is a corporate PC and Symantec Endpoint Protection is new to it (this previously worked on the same PC with Symantec Protection Agent), but I thought that disabling it from the tray would effectively neutralize it. I do not have the rights to kill the program itself. Any suggestions on what else to try to make my PC externally visible?


  • Replacing local home server with VPS: Suggestions?

    - by CamronBute
    So right now, I'm running an old box with a 2TB HDD in it. I use this as a file server for the home network, as well as a box for downloading large files which are synced via Dropbox. Lots of other tinkering things, too. Basically, I'm sick of paying extra for the power and having to worry about drive failures and whatnot. I'd rather get a remote server, let someone else manage it and provide access from the Internet. So, I've been looking for a Windows VPS that would give me access to install things and tinker, and I'm having a problem finding a host that offers more than 100GB of hard drive space. If they do offer a package with 100GB of storage, everything else is waaayyyy more than what I actually need. The idea is to create a permanent VPN connection from the cloud server to my home network to provide a transparent solution so I'm not having to go to lengths to transfer files or whatnot. I think a VPS solution will allow me to do this. I would like 1TB of storage space, minimum 100Mbps Internet connection, minimum 250GB bandwidth, admin access. Anyone have anything? Or am I being unreasonable? If I am, why?


  • SSH timeout issue connecting to an EC2 instance on OS X

    - by mamusr
    I am new to AWS and not a networking expert, but curious to know more about it. I created a VPC with a public subnet only. Then I created an EC2 instance using an Ubuntu 14.04 64-bit PV AMI image (ami-e84d8480), as well as generating the key pair needed to connect to it through ssh. I followed Amazon's instructions to connect to an EC2 instance via ssh, which did not work. Here is my attempted input and debug log, running on OS X 10.9.4:
      user$ ssh -vvv -i key.pem [email protected]
      OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011
      debug1: Reading configuration data /etc/ssh_config
      debug1: /etc/ssh_config line 20: Applying options for *
      debug1: /etc/ssh_config line 102: Applying options for *
      debug2: ssh_connect: needpriv 0
      debug1: Connecting to xxx.xxx.xxx.xxx [xxx.xxx.xxx.xxx] port 22.
      debug1: connect to address xxx.xxx.xxx.xxx port 22: Operation timed out
      ssh: connect to host xxx.xxx.xxx.xxx port 22: Operation timed out
    To attempt to resolve the issue: I enabled the SSH port. I tried different usernames other than ubuntu, like ec2-user and root. I initially set an inbound ssh rule in the security group to allow only my IP address; when that did not work, I changed it to allow any IP to connect. But those actions did not fix the problem. Here are my guesses as to what I am missing in getting the EC2 instance connection to work: my /etc/ssh_config file may be preventing the connection from taking place; I may have missed an important networking detail when setting up the VPC; I do not have a public IP address specified for the instance - I am connecting through the private IP address. My questions for the community: am I going about it the wrong way by connecting to the instance through the private IP address? If so, do I need to specify a public IP address for it to connect, or some other method?
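
    Worth noting: a private 10.x address inside a VPC is not reachable from the Internet at all, so a timeout is the expected symptom. Before ssh from outside can work, the instance needs a public or Elastic IP, an internet gateway attached to the VPC, and a route to that gateway from the subnet. A sketch with the AWS CLI (the instance and allocation IDs are placeholders):

      # allocate an Elastic IP in the VPC and attach it to the instance
      aws ec2 allocate-address --domain vpc
      aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-0123456789abcdef0

      # then connect to the public address with the key pair
      ssh -i key.pem ubuntu@<elastic-ip>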


  • SPF record for Gmail?

    - by Chris
    I have DNS, with a SPF TXT record, configured for a domain name. The primary user of the domain name now needs to be able to send both from our SMTP servers, and also from her GMail account. I've seen all the information about adding "include:_spf.google.com" to the SPF TXT record, but, as I look into it, it appears that record is outdated. In particular, I had the user send me a test message, and note that it was: Received: from mail-la0-f50.google.com (mail-la0-f50.google.com [209.85.215.50]) However, _spf.google.com doesn't list that IP address: $ dig +short _spf.google.com txt "v=spf1 ip4:216.239.32.0/19 ip4:64.233.160.0/19 ip4:66.249.80.0/20 ip4:72.14.192.0/18 ip4:209.85.128.0/17 ip4:66.102.0.0/20 ip4:74.125.0.0/16 ip4:64.18.0.0/20 ip4:207.126.144.0/20 ip4:173.194.0.0/16 ?all" (Note that a 209.85.218.0 network is listed, but not 209.85.215.0.) Is there a better way to enable sending from GMail? This user sends to at least one recipient with a strict SPF policy that bounces mail not from a designated host... Many thanks!
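
    Two observations that may help (offered as a sketch, not a definitive answer). First, the record shown does appear to cover the sending host: 209.85.128.0/17 spans 209.85.128.0 through 209.85.255.255, so 209.85.215.50 falls inside it. Second, the usual pattern is still to chain Google's include after the domain's own mechanisms, along the lines of the TXT record below (the mx/a mechanisms and host name stand in for whatever the existing record contains):

      example.com.  IN  TXT  "v=spf1 mx a:smtp.example.com include:_spf.google.com ~all"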


  • Xen guests accessing LUNs

    - by mechcow
    We are using RHEL 5.3 with a CLARiiON SAN attached by FC. Our situation is that we have a number of LUNs presented to hosts and we want to dynamically present the LUNs to Xen guests. We are not sure what the best-practice approach is to set this up. The Xen guests will form a cluster together and need the LUNs only for data partitions, i.e. when they are actively running services. So one approach would be to always present all disks to all Xen guests, and then rely on the cluster software, and on the mounts themselves, not to mount a disk twice in two locations. This sounds kinda risky and also is not very secure (one cracked guest can see/destroy all the data). Another approach would be to dynamically add and remove the disks from the Xen guests at the dom0 level (using xm block-attach). This could work but sounds slightly complicated; I'm wondering whether Red Hat Cluster Suite supports this in some way or whether there are scripts to do this. Yet another approach would be to have the LUNs terminated at the Xen guests themselves - I'm not sure whether this is technically possible, since the multipathing has to be done at the host level.
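
    For the dynamic attach/detach approach, the dom0-side commands look roughly like the sketch below (the guest name, the multipath device path and the frontend device name are all placeholders, and the multipath device is assumed to already exist in dom0):

      # attach a multipathed LUN to a running guest as xvdb, writable
      xm block-attach guest1 phy:/dev/mapper/lun01 xvdb w

      # detach it again when the service moves to another node
      xm block-detach guest1 xvdb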


  • Mac OS X Client With Static DHCP Assignment Requests Wrong IP via Option 50

    - by Starchy
    I have a number of Mac (and a few Linux) laptops getting DHCP from a Force10 layer 3 switch, the only DHCP server on the subnet. There's a global dynamic pool, and for each full-time employee's laptop I have a single IP static pool set by MAC address. One and only one of the clients, running OS X 10.7.5, consistently fails to get a static assignment. The MAC address in the static pool definition has been carefully re-checked. Running tcpdump on a mirrored port when the laptop connects, I see that it is specifically requesting 10.100.0.252 (a dynamic address): 11:32:10.108280 IP (tos 0x0, ttl 255, id 28293, offset 0, flags [none], proto UDP (17), length 328) 0.0.0.0.bootpc > broadcasthost.bootps: [udp sum ok] BOOTP/DHCP, Request from 3c:07:54:xx:xx:xx (oui Unknown), length 300, xid 0x1399da89, Flags [none] (0x0000) Client-Ethernet-Address 3c:07:54:xx:xx:xx (oui Unknown) Vendor-rfc1048 Extensions Magic Cookie 0x63825363 DHCP-Message Option 53, length 1: Request Parameter-Request Option 55, length 9: Subnet-Mask, Default-Gateway, Domain-Name-Server, Domain-Name Option 119, LDAP, Option 252, Netbios-Name-Server Netbios-Node MSZ Option 57, length 2: 1500 Client-ID Option 61, length 7: ether 3c:07:54:xx:xx:xx Requested-IP Option 50, length 4: 10.100.0.252 Lease-Time Option 51, length 4: 7776000 Hostname Option 12, length 10: "host-name" END Option 255, length 0 PAD Option 0, length 0, occurs 8 I haven't been able to find any extra system prefs or unusual software on the laptop. Disabling the interface and rebooting or temporarily setting the IP manually both fail to make any difference. Any suggestions appreciated.
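
    Since Option 50 is just the client asking for the last lease it remembers, one avenue worth trying is clearing the cached lease on that one laptop and forcing a fresh request. A sketch for OS X (the interface name en0 and the exact lease file name are assumptions):

      # see which lease files the DHCP client has cached
      ls /var/db/dhcpclient/leases/

      # remove the stale lease for the relevant interface, then renew
      sudo rm /var/db/dhcpclient/leases/en0-1,3c:07:54:xx:xx:xx
      sudo ipconfig set en0 DHCP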


  • NAT causes huge external (actually internal) bandwidth usage

    - by user67953
    We have 4 servers running in a data center, with internal IPs 192.168.3.* assigned. A hardware firewall (a FortiGate) is configured for NAT, and it maps the traffic as:
      external IP 111.222.333.10 -> 192.168.3.10 www.server1.com
      111.222.333.11 -> 192.168.3.11 www.server2.com
      111.222.333.12 -> 192.168.3.12 www.server3.com
    In DNS, we have www.server1.com A 111.222.333.10. Now if I send a lot of data to www.server1.com from www.server2.com, the data will be sent through 111.222.333.10 (the external IP), and this makes our bandwidth usage huge (expensive!). The workaround I have is to add a local hosts mapping to server2: 192.168.3.10 www.server1.com. That way, when sending files from server2 to www.server1.com, the traffic stays internal. However, we are adding more and more servers, and it would be hard to manually add mappings to every server. Just wondering, is there another solution for this? Can we do something on the FortiGate firewall? P.S. The DNS servers being used are public ones, such as OpenDNS, Google DNS, etc.
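
    Besides enabling hairpin NAT on the FortiGate, the usual fix is split-horizon DNS: run a small internal resolver that all the servers use, answer your own names with the internal addresses, and forward everything else upstream. A dnsmasq sketch (the addresses are the ones from the question; the upstream resolver is an assumption):

      # /etc/dnsmasq.conf on an internal resolver the servers point at
      address=/www.server1.com/192.168.3.10
      address=/www.server2.com/192.168.3.11
      address=/www.server3.com/192.168.3.12
      server=8.8.8.8        # forward all other queries to a public resolver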


  • Remote server security: handling compiler tools

    - by Gonzolas
    Hello! I was wondering whether to remove compiler tools (gcc, make, ...) from a remote production server, mainly for security purposes. Background: The server runs a web application on Linux. Consider Apache jailed. Otherwise, only OpenSSHd faces the public network. Of course there is no compiler stuff within the jail, so this is about the actual OS outside of any jails. Here's my personal PRO/CON list (regarding removal) so far: PRO: I had been reading some suggestions to remove compiler tools in order to inhibit custom building of trojans etc. from within the host if an attacker attains unprivileged user permissions. CON: I can't live without Perl/Python, and a trojan/whatever could be written in a scripting language like that anyway, so why bother about removing gcc et al. at all. There is a need to build new Linux kernels as well as some security tools from source directly on the server, because the server runs in 64-bit mode and (to my understanding) I can't (cross-)compile locally/elsewhere due to lack of another 64-bit hardware system. OK, so here are my questions for you: (a) Is my PRO/CON assessment correct? (b) Do you know of other PROs / CONs to removing all compiler tools? Do they weigh in more? (c) Which binaries should I consider dangerous if the given PRO statement holds? Only gcc, or also make, or what else? Should I remove the entire software packages they come with? (d) Is it OK to just move those binaries to a root-only accessible directory when they are not needed? Or is there a gain in security if I "scp them in" every time? Thank you!


  • Apache2, Tomcat6, and proxy redirects

    - by Randal Hale
    So here is my question - go easy and slow. I'm a GIS consultant and general hack with Linux. I inherited this volunteer job essentially because I knew more than the rest of the team - or the rest of the team isn't as stubborn as I am... With that said, a number of people had been mucking around in the server before I got involved, so I've been cleaning up a lot of things. The domain names have been changed to protect the innocent. I have a server running Apache2 (port 80) and Tomcat6 (8080) on Ubuntu Server 10.04. There is a virtual host on Apache2 called "Runner" (the domain is runner.org). I have mod_proxy loaded. I am trying to redirect everyone that visits runner.org to http://some.ip.address:8080/openrunner-webapp/ So far I've gotten runner.org assigned to the Apache2 server. Someone set up a redirect in the httpd.conf file, but I believe it needs to go into the virtualhost. I tried setting the redirect in the virtualhost as: ProxyPass / http://localhost:8080/openrunner-webapp All that does is show me the root of the Apache webserver. Anyway, I'm stuck.
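
    For reference, a proxying vhost of the sort being described usually pairs ProxyPass with ProxyPassReverse and keeps the trailing slash on both sides. A sketch (ServerName and the backend path are taken from the question, everything else is assumed; mod_proxy and mod_proxy_http both need to be enabled):

      <VirtualHost *:80>
          ServerName runner.org
          ProxyPreserveHost On
          ProxyPass        / http://localhost:8080/openrunner-webapp/
          ProxyPassReverse / http://localhost:8080/openrunner-webapp/
      </VirtualHost>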


  • Cannot Install Bootcamp on Mac Fusion Drive

    - by user3377019
    I have two drives installed in my mid-2013 MacBook Pro, running OS X Mavericks. I set the two drives (an SSD and a regular HD) up as a Fusion Drive (following the DIY Fusion Drive guides out there). I went ahead and created a 40 GB partition to host my Windows install. I removed the SSD, installed Windows, then reinstalled the SSD. As soon as the SSD was placed back in, the Boot Camp partition stopped booting: I get a blinking cursor on a black screen. I checked out the partition info in Disk Utility and it appears that the Windows partition is not marked bootable. Below is some info I managed to gather. I am wondering if there is a way to fix the partition table so my Boot Camp will boot.
      /dev/disk0
      #: TYPE NAME                      SIZE       IDENTIFIER
      0: GUID_partition_scheme          *120.0 GB  disk0
      1: EFI EFI                        209.7 MB   disk0s1
      2: Apple_CoreStorage              119.7 GB   disk0s2
      3: Apple_Boot Boot OS X           134.2 MB   disk0s3
      /dev/disk1
      #: TYPE NAME                      SIZE       IDENTIFIER
      0: GUID_partition_scheme          *500.1 GB  disk1
      1: EFI EFI                        209.7 MB   disk1s1
      2: Microsoft Basic Data BOOTCAMP  40.0 GB    disk1s2
      3: Apple_CoreStorage              459.2 GB   disk1s3
      4: Apple_Boot Boot OS X           650.0 MB   disk1s4
      /dev/disk2
      #: TYPE NAME                      SIZE       IDENTIFIER
      0: Apple_HFS Macintosh HD         *573.4 GB  disk2
      Name: BOOTCAMP   Type: Partition   Disk Identifier: disk1s2   Mount Point: /Volumes/BOOTCAMP
      File System: Windows NT File System (NTFS)   Connection Bus: SATA
      Device Tree: IODeviceTree:/PCI0@0/SATA@1F,2/PRT1@1/PMP@0   Writable: No
      Universal Unique Identifier: 584BAED6-4C46-4F18-93B3-957F6E27003C
      Capacity: 40 GB (39,998,980,096 Bytes)   Free Space: 16.34 GB (16,339,972,096 Bytes)   Used: 23.66 GB (23,659,003,904 Bytes)
      Number of Files: 86,424   Number of Folders: 0
      Owners Enabled: No   Can Turn Owners Off: No   Can Be Formatted: No   Bootable: No
      Supports Journaling: No   Journaled: No   Disk Number: 1   Partition Number: 2


  • Does a 3ware "ECC-ERROR" matter on a JBOD when I have ZFS?

    - by Stefan Lasiewski
    I have a FreeBSD 8.x machine running ZFS with a 3ware 9690SA controller. The 3ware controller shows an ECC-ERROR for one of the disks:
      //host> /c0 show
      VPort Status    Unit Size      Type Phy Encl-Slot Model
      ------------------------------------------------------------------------------
      p0    OK        u0   279.39 GB SAS  0   -         SEAGATE ST3300657SS
      p1    OK        u0   279.39 GB SAS  1   -         SEAGATE ST3300657SS
      p2    OK        u1   931.51 GB SAS  2   -         SEAGATE ST31000640SS
      p3    ECC-ERROR u2   931.51 GB SAS  3   -         SEAGATE ST31000640SS
      p4    OK        u3   931.51 GB SAS  4   -         SEAGATE ST31000640SS
    /c0 show events shows no ECC errors in its recent history. ZFS does not currently detect any errors; zpool status says "No known data errors". My question: Is this ECC-ERROR something that I need to be concerned about? According to the 3ware CLI 9.5.2 manual, an ECC-ERROR means that the 3ware controller caught a read error for one or more sectors on this drive. This sometimes occurs when a RAID array is recovering from a failed disk. I believe that ECC-ERRORs can also be detected when the 3ware controller verifies each disk. None of the drives have failed and thus there was no drive rebuild, so I assume that 3ware discovered a bad sector when it ran its weekly auto-verify scan of the disks. Is this a safe assumption? According to our logs, ZFS has not detected any bad sectors on this drive. ZFS can work around read errors -- if ZFS detects a bad sector on the drive, it will simply mark that sector as bad and never use it again. From the ZFS perspective one bad sector isn't a big deal, although it might indicate that the drive is starting to go bad.
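
    A reasonable way to cross-check (a sketch; the unit number is the one shown above, the pool name is a placeholder) is to have the controller re-verify the unit and have ZFS re-read and checksum every block; if both come back clean, the sector was most likely remapped or the read error was transient:

      # ask the 3ware controller to verify the unit containing the ECC-ERROR drive
      tw_cli /c0/u2 start verify

      # have ZFS read every block in the pool and report anything it cannot repair
      zpool scrub tank
      zpool status -v tank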


  • How to configure DNS server to forward queries about particular domain AND all of its subdomains

    - by user71061
    I have a DNS server (a Linux box with BIND 9) which is authoritative for some domains and forwards all other queries to my ISP's external DNS server. So far no problem. Now I want queries about some specific domains to be forwarded to my internal DNS server, e.g.:
      zone "some_domain" {
          type forward;
          forwarders { some_internal_dns_ip; };
      };
    So far still no problem, all works OK. But then I want to forward some reverse DNS queries to my internal DNS as well. So I have added:
      zone "16.172.in-addr.arpa" {
          type forward;
          forwarders { some_internal_dns_ip; };
      };
    And this doesn't work as I expect. Queries about "16.172.in-addr.arpa" (for example 1.16.172.in-addr.arpa) are resolved correctly, but reverse queries about a full address (for example 1.1.16.172.in-addr.arpa) are not. I understand that my server should use some recursive query here, but I could not configure it. I have already tried adding the following options:
      recursion yes;
      allow-recursion { 127.0.0.1; };
      allow-recursion-on { 127.0.0.1; };
    but with no success. (I have used the loopback address here because I need this functionality only for my DNS host, and not for its clients.) Any suggestions?
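
    One thing that often bites in this situation: a forward zone can be bypassed when the server thinks it can resolve the name by ordinary recursion. A sketch of the same zone with forwarding made mandatory (the forwarder address is a placeholder, as in the question):

      zone "16.172.in-addr.arpa" {
          type forward;
          forward only;                       // never fall back to iterative resolution
          forwarders { some_internal_dns_ip; };
      };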


  • Two computers on the same network cannot ping each other or view NetBIOS resources

    - by slava
    I'd like to find the problem in my network configuration. My network is laid out as in this diagram: The problem is between laptop1 and laptop2. At first I thought it was a Samba server problem. I was configuring a Samba server on one of the laptops and I wasn't able to access the shares from the second laptop no matter what I did. After installing/removing/configuring the Samba server a couple of times I realized that the problem lies somewhere else. Laptop configurations: Laptop1: Ubuntu 12.04; Laptop2: Windows 7 / Ubuntu 12.04 (dual boot); Server: Ubuntu 12.04. When I do "ping 192.168.0.10" from laptop2, I get "Destination host unreachable". The same happens when I ping in the other direction. When I access Laptop1 shares from Laptop2 with Windows 7 loaded, I get the error message: "Error code: 0x80070035 The network path was not found." When I ping "server", "router" or "wifi router" from either laptop I get a reply. The same goes for Windows shares: I am able to access "server"'s shares from Windows and Ubuntu on either laptop. NetBIOS obviously can't function correctly - I am unable to access Windows shares between the laptops. I assume there is a misconfiguration on the "wifi router", but I can't find what exactly. The "wifi router" works as a hub + WiFi; it is connected to the "router" not via its WAN port but via LAN1. Please help me configure the router correctly so the laptops can see each other, or at least make NetBIOS work correctly between the laptops so they can access each other's Windows shares. Thanks!


  • Using mixed disks and OpenFiler to create RAID storage

    - by Cylindric
    I need to improve my home storage to add some resilience. I currently have four disks, as follows:
      D0: 500 GB (System, Boot)
      D1: 1 TB
      D2: 500 GB
      D3: 250 GB
    There's a mix of partitions on there, so it's not JBOD, but data is pretty spread out and not redundant. As this is my primary PC and I don't want to give up the entire OS to storage, my plan is to use OpenFiler in a VM to create a virtual SAN. I will also use Windows Software RAID to mirror the OS. Partitions will be created as follows:
      D0 P1: 100 MB: System-Reserved Boot
      D0 P2: 50 GB: Virtual Machine VMDKs for OS
      D0 P3: 350 GB: Data
      D1 P1: 100 MB: System-Reserved Boot
      D1 P2: 50 GB: Virtual Machine VMDKs for OS
      D1 P3: 800 GB: Data
      D2 P1: 450 GB: Data
      D3 P1: 200 GB: Data
    This will result in: a mirrored boot partition, a mirrored operating system, mirrored virtual machine OS disks, and four partitions for data. In the four data partitions I will create several large VMDK files, which I will "mount" into OpenFiler as block-storage devices, combined into three RAID arrays (due to the differing disk sizes). In effect, I'll end up with the following usable partitions:
      SYSTEM 100 MB: the small boot partition created by the Windows 7 installer (RAID-1)
      HOST 50 GB: the Windows 7 partition (RAID-1)
      GUESTS 50 GB: virtual machine guest VMDKs (RAID-1)
      VG1 900 GB: volume group consisting of a RAID-5 and two RAID-1
      VG2 300 GB: volume group consisting of a single disk
    On VG1 I can dynamically assign storage for my media, photographs, documents, whatever, and it will be safe. On VG2 I can dynamically assign storage for data that is not critical and is easily recoverable, as it is not safe. Are there any particular 'gotchas' when implementing a virtual OpenFiler like this? Is the recovery process for a failing disk going to be very problematic? Thanks.


  • Recommendations for good FTP server for Win 2008 x64

    - by sfhtimssf1970
    I spent a bunch of time learning/configuring the "all new and better" FTP feature for IIS7. In my opinion, it still fails hard:
      - In order to have multiple FTP sites on the same machine, you have to use host|user usernames (like domain.com|jason) for every account.
      - Using IIS Manager auth doesn't seem to work at all. I'm sure I'm doing something wrong, but I can't figure out what the hell it is. I've read all the official articles on it and configured it a hundred different ways.
      - It doesn't play well with passive connection types. That has to be disabled on the client in order for it to work.
      - It doesn't have any way to allow one user to see multiple sites no matter what binding they are connected to. For instance, if "jason" connects to ftp.domain.com, he should be able to see domain2.com and domain3.com without seeing domain4.com and domain5.com. It takes an act of God to set this up with IIS7.
    So I'm wanting to install a third-party FTP server instead. I've looked at both FileZilla Server and zFTPServer. Anyone know of any pros/cons of these? Any other recommendations?


  • Trying to Set Up SSH Tunneling To MySQL Server for MySQL Query Browser

    - by Teno
    I'm trying to set up SSH tunneling on a remote web server to another MySQL server so that the database can be browsed easily with MySQL Query Browser. I'm following this page but cannot connect to the MySQL server: http://www.howtogeek.com/howto/ubuntu/access-your-mysql-server-remotely-over-ssh/ What I've done: logged in to the web server with PuTTY via SSH, then typed ssh -L 33060:[database]:3306 [myusername]@[webserver_address], where the [...] parts are replaced by the actual information. I was asked for a password, typed it, and got the following message, so it seems login was successful:
      socket: Protocol not supported
      Last login: .... 2012 from ....
      Copyright (c) 1980, 1983, 1986, 1988, 1990, 1991, 1993, 1994 The Regents of the University of California. All rights reserved.
      FreeBSD 7.1-RELEASE.... Welcome to FreeBSD!
    I then opened MySQL Query Browser in Windows and entered:
      Server Host: localhost
      Port: 33060
      UserName: myusername
      PassWord: mypassword
    And it says:
      Could not connect to the specified instance.
      MySQL Error Number 2003
      Can't connect to MySQL Server on 'localhost' (10061)
    Sorry if this is too basic. Thanks for your information.
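
    The likely catch with the steps above: the ssh -L command was run inside the PuTTY session, i.e. on the web server itself, so port 33060 is listening on the web server, not on the Windows machine where MySQL Query Browser runs. The forward has to be opened from the local machine. A sketch (host names are the same placeholders as in the question; plink.exe is assumed to be installed alongside PuTTY):

      rem run this on the local Windows box, not on the web server
      plink.exe -L 33060:[database]:3306 [myusername]@[webserver_address]

      rem on a Mac/Linux client the equivalent would be:
      rem ssh -L 33060:[database]:3306 [myusername]@[webserver_address]

    Query Browser would then connect to 127.0.0.1 port 33060; using 127.0.0.1 rather than "localhost" avoids any socket-vs-TCP ambiguity.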


  • Connection established to google DNS, can't resolve any hosts

    - by Tar
    As you can see from the picture above, I am connected to Google DNS but am unable to resolve any hostnames. When I try to ping sites like google.com, yahoo.com, etc., I get 'ping: unknown host'. Yes, I am able to ping localhost, and I am able to ping hostname.domain.com, but not domain.com. I can't ping my nameservers. I can ping all hosts by IP address and that works. The contents of my /etc/resolv.conf:
      nameserver 8.8.8.8
      nameserver 8.8.4.4
    Anyone know what the problem could be?
      23:30:04.304955 IP my_server.44457 > 8.8.8.8.domain: 28349+ A? google.com. (28)
      23:30:06.137985 IP 112.100.0.78.19781 > my_server.domain: 18717 [1au] A? www.my_domain.com. (46)
      23:30:06.138286 IP my_server.domain > 112.100.0.78.19781: 18717*- 2/0/1 CNAME my_domain.com., A my_server (76)
      23:30:06.686582 IP 112.100.0.74.19181 > my_server.domain: 65046 [1au] A? my_domain.com. (42)
      23:30:06.686811 IP my_server.domain > 112.100.0.74.19181: 65046*- 1/0/1 A my_server (58)
      23:30:07.043764 IP my_server.50465 > 4.2.2.1.domain: 13865+ PTR? 142.254.22.67.in-addr.arpa. (44)
      23:30:09.065904 IP my_server.45242 > 8.8.4.4.domain: 29011+ PTR? 123.72.117.130.in-addr.arpa. (45)
      23:30:09.310021 IP my_server.45440 > 8.8.4.4.domain: 28349+ A? google.com. (28)
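
    The capture shows queries leaving for 8.8.8.8/8.8.4.4 (and the box answering queries sent to it), but no replies coming back from the upstream resolvers, which usually points at outbound-only UDP/53 (responses dropped by a local firewall) or the provider intercepting DNS. A couple of checks worth running (a sketch; nothing here is specific to this host):

      # does a direct query get an answer at all?
      dig @8.8.8.8 google.com +time=3 +tries=1

      # is anything in the filter table touching port 53 traffic?
      iptables -L -n -v | grep -i 53

      # if stateful return traffic is not already allowed, this rule (an assumption
      # about the local firewall) lets the DNS replies back in
      iptables -I INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT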


  • How can I use two Internet connections in Ubuntu?

    - by Martin
    My goal is to be able to do something like this:
      curl google.com --interface ppp0
      curl google.com --interface p2p2
    ppp0 is a DSL connection, and p2p2 is a separate direct Internet connection. Currently I can only get one of these to work at a time. When I enable one, the other one stops working.
    /etc/network/interfaces:
      # The loopback network interface
      auto lo
      iface lo inet loopback
      # DSL
      auto p2p1
      iface p2p1 inet manual
      auto dsl-provider
      iface dsl-provider inet ppp
      pre-up /sbin/ifconfig p2p1 up # line maintained by pppoeconf
      provider dsl-provider
      # DIRECT
      auto p2p2
      iface p2p2 inet dhcp
    ifconfig:
      lo   Link encap:Local Loopback  inet addr:127.0.0.1  Mask:255.0.0.0  inet6 addr: ::1/128 Scope:Host  UP LOOPBACK RUNNING  MTU:65536  Metric:1
      p2p1 Link encap:Ethernet  inet6 addr: fe80::20a:ebff:fe21:99c6/64 Scope:Link  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
      p2p2 Link encap:Ethernet  inet addr:192.168.1.101  Bcast:192.168.1.255  Mask:255.255.255.0  inet6 addr: fe80::20a:ebff:fe17:1249/64 Scope:Link  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
      ppp0 Link encap:Point-to-Point Protocol  inet addr:53.193.231.167  P-t-P:53.193.224.1  Mask:255.255.255.255  UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1492  Metric:1
    route -n:
      Kernel IP routing table
      Destination     Gateway         Genmask          Flags Metric Ref Use Iface
      0.0.0.0         0.0.0.0         0.0.0.0          U     0      0   0   ppp0
      10.0.10.0       0.0.0.0         255.255.255.0    U     0      0   0   eth2
      53.193.224.1    0.0.0.0         255.255.255.255  UH    0      0   0   ppp0
      192.168.1.0     0.0.0.0         255.255.255.0    U     0      0   0   p2p2
    By default, only ppp0 works. If I run "route add default gw 192.168.1.1 p2p2" then I can use p2p2 but ppp0 stops working. If I then run "route add default gw 53.193.224.1 ppp0" then I can use ppp0 again but p2p2 stops working. What can I do to be able to use both interfaces selectively?
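
    Running both uplinks at once is normally done with policy routing: give each uplink its own routing table with its own default route, then add rules keyed on the source address so traffic bound to one interface leaves through that interface's gateway. A sketch using the addresses shown above (the table numbers are arbitrary):

      # table 100: the direct connection on p2p2
      ip route add default via 192.168.1.1 dev p2p2 table 100
      ip rule add from 192.168.1.101 lookup 100

      # table 200: the DSL link; ppp0 is point-to-point, so the peer is the gateway
      ip route add default via 53.193.224.1 dev ppp0 table 200
      ip rule add from 53.193.231.167 lookup 200

    Since curl --interface binds to the interface's address, the source-based rules steer each request out of the matching uplink, while the main table keeps whichever default route is preferred for everything else.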


  • Xen networking is inconsistent in multiple ways

    - by WildVelociraptor
    I've been running Xen for a few weeks now on an Ubuntu 12.04 server. I've got 3 guests: a Windows Server 2003 guest, an Ubuntu guest, and a Windows 7 guest. My Server 2003 guest seems to work fine; I can ping it from the network, the hostname resolves correctly, and it can see the internet. This guest is attached to xenbr0, and its IP is 10.100.1.21. My Win7 guest is what is driving me crazy. I use the same configuration script as a base, changing the important parts (hostname and boot disk, mainly). It installed correctly, and is currently running, but I am unable to ping this guest. Its hostname is "alexander", with an IP of 10.100.1.22. It is also using xenbr0. The guest can ping the firewall and various IP addresses, but seems unable to resolve hostnames. Now here's the weird part: when I use rdesktop (an RDP client) from my laptop (not the Xen host) to connect to alexander, it works just fine. It apparently resolves the hostname fine, and does the same with the IP address. So, can someone tell me why I can access this guest using RDP, but not using ping, nslookup, traceroute, etc.? It's apparently invisible to all but RDP. Also, is it okay to use two guests on the same bridge, or do I need different ones for each guest? Thanks in advance for any help. Regards
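
    Reachable over RDP but not over ping is exactly what a Windows 7 guest with the default firewall profile looks like: inbound ICMP echo requests are blocked, while TCP 3389 is opened when Remote Desktop is enabled. A quick way to test that theory from inside the guest (standard Windows commands, offered as a sketch):

      rem allow inbound ICMPv4 echo requests on the Windows 7 guest
      netsh advfirewall firewall add rule name="Allow ICMPv4 ping" protocol=icmpv4:8,any dir=in action=allow

      rem or temporarily turn the firewall off entirely to confirm the diagnosis
      netsh advfirewall set allprofiles state off

    As for the bridge question: multiple guests on the same bridge is fine; that is the normal way to put several guests on one network segment.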

