Search Results

Search found 25521 results on 1021 pages for 'static objects'.


  • Configuring network route between two routers on home network

    - by Paul
    I have a home network. The main router connected to the internet (with wifi) is a Netopia box; connected to it is a Linksys router. Everything currently works: I can connect via the wireless network and get to the internet, and machines connected to the Linksys can reach each other and the internet. Both routers are configured to serve addresses via DHCP (Netopia: 192.168.1.1 - 192.168.1.99; Linksys: 192.168.0.1 - 192.168.0.100). Here's how they are connected:

        Internet <-> Netopia w/wifi (192.168.1.254) <-> Linksys (192.168.0.1)

    I decided I really need to allow wireless connections to also communicate with machines behind the Linksys router. Currently the Linksys is configured to obtain an IP address via DHCP, so I thought this would be straightforward. I configured the Linksys to have a static IP address:

        IP:   192.168.1.100
        Mask: 255.255.255.0
        GW:   192.168.1.254

    Then I configured a static route on the Netopia:

        Network: 192.168.0.0
        Mask:    255.255.255.0
        GW:      192.168.1.100

    So it should now look like this:

        Internet <-> Netopia w/wifi (192.168.1.254) <-> (192.168.1.100) Linksys (192.168.0.1)

    I reset both routers. I cannot ping the Netopia (192.168.1.254) from inside the Linksys network, and if I attempt to ping 192.168.0.1 from a wifi connection I get a "Destination host not available" error. Obviously I'm missing something, but I'm not sure where. Any ideas on what I'm missing?
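
    A likely wrinkle, offered as an assumption rather than something the post confirms: consumer Linksys units NAT and firewall their WAN port by default, which would block pings from the Netopia side even with the static route in place. A quick way to see where packets die, from a wifi client:

        # if the last hop that answers is 192.168.1.100, the static route
        # works and the Linksys itself is filtering or NATing the traffic
        traceroute 192.168.0.10    # 192.168.0.10 is a hypothetical host behind the Linksys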

  • nginx - proxy_pass is working - Apache isn't doing what it should...

    - by matthewsteiner
    So, I've got this in my nginx.conf:

        location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js)$ {
            root /var/www/vhosts/example.com/public/;
            access_log off;
            expires 30d;
        }

        location / {
            proxy_pass http://127.0.0.1:8080/;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

    So any "static file" that exists is served by nginx directly; otherwise the request should be passed off to Apache. Right now, static files are working correctly. However, if something is passed to Apache for example.com or subdomain.example.com, Apache just spits out the "Apache 2 Test Page" that you get when there's nothing there. Apache worked fine before, so I'm guessing it has to do with the way nginx is "asking". I'm not sure though. Any ideas?
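
    Since nginx forwards the original Host header, one common culprit (a guess based on the symptom, not confirmed by the post) is that Apache's vhosts are still bound to port 80 while Apache now listens on 8080, so no vhost matches and the default test page wins. A minimal sketch of a matching vhost, with hypothetical paths, in Apache 2.2 syntax:

        NameVirtualHost 127.0.0.1:8080
        <VirtualHost 127.0.0.1:8080>
            ServerName example.com
            ServerAlias *.example.com
            DocumentRoot /var/www/vhosts/example.com/public
        </VirtualHost>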

  • Recognizing Dell EqualLogic with Nagios

    - by user3677595
    EDIT: All firmware and models are compatible; that is why nothing is posted about it.

    Okay, there will be a lot here, so please bear with me. I've been working on this now for a few hours (reading manuals and such), so I'm not just coming here right out of the blue. I am working on a PRE-EXISTING Nagios server with several other existing plugins and checks running and working. Now I want to add another device to check, so I made the following modifications.

    First and foremost, I added a file to /usr/local/nagios/libexec named check_equallogic.sh. Its permissions are 755, the same as all the others, and I have chowned it to nagios:nagios, so the listing shows the owner as nagios.

    I then added a command to the commands.cfg file in /usr/local/nagios/etc/objects:

        # 'check_equallogic' command definition
        define command{
            command_name    check_equallogic
            command_line    $USER1$/check_equallogic -H $HOSTADDRESS$ -C $ARG1$ -t $ARG2$ $ARG3$
            }

    Following this, I created a file named equallogic.cfg in the objects directory, and it contains (more or less):

        define host{
            use             linux-server   ; Inherit default values from a template
            host_name       172.16.50.11   ; The name we're giving to this device
            alias           EqualLogic     ; A longer name associated with the device
            address         172.16.50.11   ; IP address of the device
            contact_groups  admins
            }

        # Check EqualLogic information
        define service{
            use                 generic-service
            host_name           172.16.50.11
            service_description General Information
            check_command       check_equallogic!public!info
            }

    After ensuring that permissions are okay for all files, I restart the nagios service with no errors. When I go into the web GUI, I get the following error AFTER the check runs:

        (Return code of 127 is out of bounds - plugin may be missing)

    Extra, probably unrelated problem: when I log into the EqualLogic server, under Audit logs I get the following error:

        Level: AUDIT
        Time: 26/05/2014 3:59:13 PM
        Member: ps4100-1
        Subsystem: agent
        Event ID: 22.7.1
        SNMP packet validation failed, request received from 172.16.10.11

    An snmpwalk receives a timeout, whereas others succeed. I will work on importing the MIBs tomorrow; I only mention this because I want to make sure it really is just a MIB issue on the SNMP side. If it is, then ignore this area. I am entirely unsure of what to do here.
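
    For reference, a quick way to reproduce a 127 outside of Nagios (command names as given above; note the command definition calls $USER1$/check_equallogic while the file on disk is check_equallogic.sh):

        # run as the nagios user, the same way the daemon would
        su - nagios -s /bin/bash
        /usr/local/nagios/libexec/check_equallogic -H 172.16.50.11 -C public -t info
        echo $?    # 127 here means the shell itself cannot find or run the plugin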

  • Migrating to AWS Cloud with auto-scaling - where to put Redis and ElasticSearch?

    - by RobMasters
    I've been trying to research this topic but haven't found anywhere that recommends where to install services such as Redis and ElasticSearch when migrating to a cloud framework. I'm currently running a Symfony2 application on 2 static servers: one runs MySQL, and the other is the public-facing web server, which also runs Redis and ElasticSearch. Both of these servers are virtualised, but they're static in the sense that they can't be replicated at present (various aspects are still dependent on the local filesystem).

    The goal is to migrate to AWS and use auto-scaling to spin up and kill web servers as required, but I'm not clear on what I should put on each EC2 instance. Should they be single-responsibility only? I.e., set up individual instances for the web server(s), Redis, and ElasticSearch, most likely an RDS instance for MySQL, and only set up auto-scaling on the web server(s)?

    I don't foresee having to scale the ElasticSearch server anytime soon, as it's only driving the search functionality, but it's possible that Redis may need to be replicated at some point. Should this be done manually? I'm not sure how it could be done automatically, as each instance needs to be configured to know about its master/slave(s) as far as I know. I'd appreciate advice on this.

    One more quick question while I'm here: how would I be able to deploy code changes when there are X web servers currently active? I'm using a Capifony deployment script (the Symfony2 version of Capistrano), which I think can handle multiple servers easily enough by specifying an array of :domain addresses... but how should this be handled when the number of web servers can vary?
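
    On the deployment question, one common pattern (a sketch under assumptions: the AWS CLI is installed and web instances carry a role=web tag; the tag name is hypothetical) is to resolve the current fleet at deploy time instead of hard-coding the :domain array:

        # list the public DNS names of all running tagged web servers,
        # then feed the result into the Capifony server list
        aws ec2 describe-instances \
            --filters "Name=tag:role,Values=web" \
                      "Name=instance-state-name,Values=running" \
            --query "Reservations[].Instances[].PublicDnsName" \
            --output text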

  • How to setup GIT repo on server with need for working dir (non-bare)

    - by OrangeTux
    I want to configure a git repo for a website. Multiple users will have a clone of the repo on their local machines, and at the end of each day they push their work to the server. I can set up a bare repo, but I want a working dir/non-bare repository: the idea is that the working dir of the repository will be the root folder of the website, so at the end of each day all changes are directly visible.

    But I can't find a way to do this. Initializing the server repo with git init gives the following error when a client tries to push some files:

        git push origin master
        [email protected]'s password:
        Counting objects: 3, done.
        Writing objects: 100% (3/3), 227 bytes, done.
        Total 3 (delta 0), reused 0 (delta 0)
        remote: error: refusing to update checked out branch: refs/heads/master
        remote: error: By default, updating the current branch in a non-bare repository
        remote: error: is denied, because it will make the index and work tree inconsistent
        remote: error: with what you pushed, and will require 'git reset --hard' to match
        remote: error: the work tree to HEAD.
        remote: error:
        remote: error: You can set 'receive.denyCurrentBranch' configuration variable to
        remote: error: 'ignore' or 'warn' in the remote repository to allow pushing into
        remote: error: its current branch; however, this is not recommended unless you
        remote: error: arranged to update its work tree to match what you pushed in some
        remote: error: other way.
        remote: error:
        remote: error: To squelch this message and still keep the default behaviour, set
        remote: error: 'receive.denyCurrentBranch' configuration variable to 'refuse'.
        To ssh://[email protected]/home/orangetux/www/
         ! [remote rejected] master -> master (branch is currently checked out)
        error: failed to push some refs to 'ssh://[email protected]/home/orangetux/www/'

    So I'm wondering if this is the right way to set up a git repo for a website. If so, how do I do it? If not, what is a better way to set up a git repo for the development of a website?

    EDIT: "you can't push to a non-bare repository". Okay, clear. But what's the way to solve my problem? Create a bare repository on the server and keep a clone of it in the htdocs folder on the same server? This looks a bit clumsy to me: to see the result of a commit I'd have to clone the repository each time.
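
    A common alternative, sketched here with hypothetical paths: push to a bare repo and let a post-receive hook check the pushed tree out into the web root, so no second clone or manual pull is needed.

        # on the server, once:
        git init --bare /home/orangetux/site.git
        cat > /home/orangetux/site.git/hooks/post-receive <<'EOF'
        #!/bin/sh
        GIT_WORK_TREE=/home/orangetux/www git checkout -f master
        EOF
        chmod +x /home/orangetux/site.git/hooks/post-receive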

  • Mail not piping in postfix

    - by user220912
    I have set up a Postfix server and wanted to test piping mail to a script of mine, where I can make use of it and filter the mails. I wrote a test script that just logs some information to a txt file, but I don't see any changes when I send mail.

    My postconf -n output:

        alias_database = hash:/etc/aliases
        append_dot_mydomain = no
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        data_directory = /var/lib/postfix
        debug_peer_level = 2
        debugger_command = PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin ddd $daemon_directory/$process_name $process_id & sleep 5
        html_directory = no
        inet_interfaces = all
        inet_protocols = all
        mail_owner = postfix
        mailbox_size_limit = 0
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        mydestination = yantratech.co.in, localhost.localdomain, localhost
        myhostname = tcmailer8.in
        mynetworks = 103.8.128.62, 103.8.128.69/101, 168.100.189.0/28, 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        myorigin = $mydomain
        newaliases_path = /usr/bin/newaliases.postfix
        queue_directory = /var/spool/postfix
        readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES
        recipient_delimiter = +
        relayhost =
        sample_directory = /usr/share/doc/postfix-2.6.6/samples
        sendmail_path = /usr/sbin/sendmail.postfix
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_tls_cert_file = /etc/pki/tls/certs/tcmailer8.in.cert
        smtpd_tls_key_file = /etc/pki/tls/private/localhost.key
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes
        transport_maps = hash:/etc/postfix/transport
        virtual_alias_maps = hash:/etc/postfix/virtual
        virtual_gid_maps = static:5000
        virtual_mailbox_base = /home/vmail
        virtual_mailbox_domains = /etc/postfix/vhosts
        virtual_mailbox_maps = hash:/etc/postfix/vmaps
        virtual_minimum_uid = 1000
        virtual_uid_maps = static:5000

    Here's my transport file:

        [email protected]    email_route

    My main.cf declaration:

        transport_maps = hash:/etc/postfix/transport

    My master.cf declaration:

        email_route unix - n n - - pipe
          flags=FR user=nobody argv=/etc/postfix/test.php -f $(sender) -- $(recipient)

    And my PHP script:

        #!/usr/bin/php
        <?php
        $fh = fopen('/etc/postfix/testmail.txt','a');
        fwrite($fh, "Hello it works\n");
        fclose($fh);
        ?>

    I am sending mails through telnet on localhost.
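
    A few hedged checks for a silent pipe like this (assumptions throughout: the paths above, and that the pipe transport is actually being selected):

        # the pipe target must be executable by the pipe's user (nobody),
        # and that user must be able to write the log file
        chmod 755 /etc/postfix/test.php
        touch /etc/postfix/testmail.txt && chmod 666 /etc/postfix/testmail.txt
        # transport_maps changes need a rebuilt map and a reload
        postmap /etc/postfix/transport && postfix reload
        # then watch what the pipe transport actually does
        tail -f /var/log/maillog    # or /var/log/mail.log, depending on distro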

  • CloudFront with Custom Origin and ELB

    - by kmfk
    We are using CloudFront for our static assets but also wanted to allow for gzip. We set up a new distribution with a custom origin pointing back to our application servers, which are behind an elastic load balancer. We manually keep the files in sync across the cluster and update them when we publish. However, with this setup we get nothing but Misses and RefreshHits from CloudFront, which so far has defeated the purpose.

    Are there any additional settings needed in order to use an ELB as your custom origin? The docs reference this as a viable solution. It appears that when we point the distribution at a single server in our production cluster, CloudFront caches our assets properly. Is it possible that the sticky-sessions cookie, and the subsequent header that gets added by it, could be an issue?

        Cache-Control: no-cache="set-cookie"  // added by the load balancer

    Any ideas? FYI: currently we have our custom origin pointing to a single EC2 instance, so caching is working correctly, in case you try to curl the file below. Example headers:

        curl -I http://static.quick-cdn.com/css/9850999.css
        HTTP/1.0 200 OK
        Accept-Ranges: bytes
        Cache-Control: max-age=3700
        Cache-Control: no-cache="set-cookie"
        Content-Length: 23038
        Content-Type: text/css
        Date: Thu, 12 Apr 2012 23:03:52 GMT
        Last-Modified: Thu, 12 Apr 2012 23:00:14 GMT
        Server: Apache/2.2.17 (Ubuntu)
        Vary: Accept-Encoding
        X-Cache: RefreshHit from cloudfront
        X-Amz-Cf-Id: K_q7Zy3_jdzlEJ85ukELVtdx1GmuXqApAbZZ7G0fPt0mxRMqPKX5pQ==,RzJmPku-rEIO9WlvuSoKa8hiAaR3dLk5KC4cQMWWrf_MDhmjWe8n6A==
        Via: 1.0 28c34f9fbf559a21ee16594849e4fc9c.cloudfront.net (CloudFront)
        Connection: close
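
    One hedged thing to compare, an assumption based on the manual sync mentioned above: if Last-Modified or ETag values differ between the servers behind the ELB, each CloudFront revalidation can hit a different backend and look stale, producing exactly this Miss/RefreshHit pattern. Checking a couple of backends directly would confirm or rule it out (hostnames hypothetical):

        curl -sI http://web1.internal.example.com/css/9850999.css | grep -Ei 'last-modified|etag'
        curl -sI http://web2.internal.example.com/css/9850999.css | grep -Ei 'last-modified|etag'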

  • How to route between 2 networks with a server with 2 network cards?

    - by LumenAlbum
    This is the first time I've been faced with routing, and it seems I have hit a dead end. I have the following scenario:

        client1: 192.168.1.10  255.255.255.0
            gateway:    192.168.1.100
            DNS server: 192.168.1.100

        client2: 192.168.1.20  255.255.255.0
            gateway:    192.168.1.100
            DNS server: 192.168.1.100

        server (Windows Server 2008 R2 with RAS & Routing services enabled)
            network card 1 (connected to a switch along with the clients):
                192.168.1.100  255.255.255.0
                DNS server: 127.0.0.1
            network card 2 (connected to the router):
                192.168.2.100  255.255.255.0
                gateway:    192.168.2.1
                DNS server: 127.0.0.1 (DNS forwarding to 192.168.2.1)

        ISP router (with connection to the internet): 192.168.2.1

    In this scenario I have tried to route traffic from the 192.168.1.0/24 network with the clients to the 192.168.2.0/24 network with the router, to connect the clients to the internet. However, no matter what I do, I get no positive ping to the router at 192.168.2.1. Pinging from 192.168.1.10:

        to 192.168.1.20:  success
        to 192.168.1.100: success
        to 192.168.2.100: success
        to 192.168.2.1:   not reachable

    The routing table contains the two routes 192.168.1.0 and 192.168.2.0 as directly connected. Does anyone know where the routing fails? I have searched different forums but mostly found nothing relevant. One post, however, pointed out that in a similar situation the problem was that the router doesn't know the way back, and the internet router would need a static route back to the first router. If that really is the case, I take it there is no solution with my equipment, because the standard ISP router doesn't allow setting any static routes.
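
    A couple of hedged checks on the server itself (assuming RRAS is meant to forward between the two cards):

        :: elevated prompt; confirms Windows is actually forwarding IPv4
        reg query HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v IPEnableRouter
        :: shows whether both subnets really appear as connected routes
        route print -4

    If forwarding is on and the pings still die at 192.168.2.1, the missing return route on the ISP router is the likely culprit; enabling NAT on the RRAS server instead of plain routing (so all LAN traffic leaves as 192.168.2.100) would sidestep that, since the ISP router then needs no route back.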

  • How to handle files that don't need version control in mercurial

    - by richardh
    I am new to Mercurial, and for the most part I do LaTeX reports and statistical calculations in R using .csv and/or .sqlite files. Regarding LaTeX, all I really care about is the .tex file. Regarding R, I don't need version control on the .csv or .sqlite files because they are static. When I do 'hg add' for a repo with a .csv and/or .sqlite file, I get a warning like:

        rev2.sqlite: up to 3070 MB of RAM may be required to manage this file
        (use 'hg revert rev2.sqlite' to cancel pending addition)

    So I revert and subsequently use adds like hg add -X *.sqlite. I guess I really have two questions:

    (1) Should I ignore these warnings? Because these large files are static, can I just add them to the repo, knowing that the diffs will always be empty, and not worry about wasted resources?

    (2) If I should keep excluding these files from the repo, is there a way to make this the default? I.e., can I add something to my .hgrc file that always appends options like -I *.tex -I *.R to my 'hg add' commands? Thanks!
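
    A hedged sketch of the usual way to make the exclusion stick: an .hgignore file in the repo root, so the files never show up as unknown in the first place.

        syntax: glob
        *.sqlite
        *.csv

    Mercurial also honours a [defaults] section in .hgrc (e.g. a line like "add = -X **.sqlite" under "[defaults]"), though that feature is discouraged in newer versions.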

  • Unable to connect to server after a certain amount of time

    - by Troy
    I am a business FIOS subscriber with 5 static IPs. I have the following network setup:

        - Verizon-provided ONT: Alcatel-Lucent I-211M-H
        - Switch: D-Link Web Smart Switch DES-1228P
        - Server: Dell Optiplex 755 running Ubuntu 12.04, with iptables enabled and a static IP address

    I have iptables running on the server with the http, https and ssh ports open. I can connect to a website on the server from an external computer, but after a certain amount of time (minutes to hours) I can no longer connect. All I have to do to re-enable connectivity is connect to the server via SSH from a computer INSIDE the network. I don't have to actually log in; I just have to establish a connection. I can then access the website externally again.

    I did some googling, and it seems some of Verizon's equipment had an ARP bug where the ARP entries would expire after a certain time period, but those issues all seem to be from back in 2009-2010. I know the switch has an 'auto learning MAC address' feature, but I'm not sure if that could be the problem or not. Does anyone have any ideas or advice on how I can troubleshoot this? Thanks!
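
    If stale ARP upstream is the suspect, a hedged workaround is to keep the entry fresh from the server side rather than waiting for an inbound SSH connection to do it:

        # cron entry (gateway address hypothetical) sending one ping every
        # 5 minutes so the ONT/gateway never ages out the server's MAC
        */5 * * * * ping -c 1 -q 203.0.113.1 > /dev/null 2>&1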

  • esx5 debian VM vlan setup

    - by Kstro21
    I have a server with ESX5 and a switch with about 20 VLANs. This is how the trunk port is set up:

        interface GigabitEthernet0/1/1
         description ToOper
         port link-type trunk
         undo port trunk allow-pass vlan 1
         port trunk allow-pass vlan 2 to 14
         stp disable
         ntdp enable
         ndp enable
         bpdu enable

    Then I created a standard switch (sw1) using the vSphere Client, with the VLAN ID set to All (4095). I also created a VM with Debian 6, with a NIC connected to sw1. Now I want to configure this NIC for a selected group of VLANs:

        auto vlan10
        iface vlan10 inet static
            address 11.10.1.0
            netmask 255.255.255.224
            mtu 1500
            vlan_raw_device eth0

        auto vlan14
        iface vlan14 inet static
            address 11.10.1.65
            netmask 255.255.255.248
            mtu 1500
            vlan_raw_device eth0

    But when I restart networking using /etc/init.d/networking restart, I get this error (just part of it):

        Reconfiguring network interfaces...SIOCSIFADDR: No such device
        vlan14: ERROR while getting interface flags: No such device
        SIOCSIFNETMASK: No such device
        SIOCSIFBRDADDR: No such device
        vlan14: ERROR while getting interface flags: No such device
        SIOCSIFMTU: No such device
        vlan14: ERROR while getting interface flags: No such device
        Failed to bring up vlan14.
        done.

    So, my questions: Is what I'm trying to achieve with ESX virtual machines and VLANs possible at all? Is this a Debian problem, and can it be solved? I've read about a file named z25_persistent-net.rules in Debian, but it doesn't exist in my installation. In the vSphere Networking for ESX5 guide, you can read:

        If you enter 0 or leave the option blank, the port group can see only
        untagged (non-VLAN) traffic. If you enter 4095, the port group can see
        traffic on any VLAN while leaving the VLAN tags intact.

    So, in theory, it should work, right? Hope you can help me out with this one. Thanks
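
    A hedged guess at the missing piece on the Debian side: vlan_raw_device stanzas need the vlan package and the 8021q kernel module, without which ifup cannot create the tagged sub-interfaces and fails with exactly this "No such device" error.

        apt-get install vlan
        modprobe 8021q
        echo 8021q >> /etc/modules   # load it on every boot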

  • Backup of Xen servers and laptops with encryption

    - by Konrads
    Hello, I have the following backup objects:

        - multiple Xen Linux instances running on the same box
        - a few Windows laptops that also contain private data, which needs to be stored encrypted

    I have this backup media:

        - a disk attached to the Xen box
        - a disk available remotely for off-site backups

    I want to do daily-weekly-monthly backups of the Xen systems, and to back up the laptops in approximately the same mode but in push mode (backup initiated by the laptop), with encryption. Any good free solutions?
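
    One free option that fits the encrypted, push-style requirement (a sketch, not a recommendation the post settles on) is duplicity, which does GPG-encrypted incremental backups over SFTP; key ID and paths below are hypothetical, and on the Windows laptops it would need Cygwin or a similar port:

        # daily run on the Xen box, pushing to the off-site disk
        duplicity --encrypt-key ABCD1234 /var/lib/xen \
            sftp://backup@offsite.example.com//backups/xen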

  • Connecting to SVN server from a computer outside of my LAN

    - by Tom Auger
    I've got a Fedora server running Subversion and svnserve on port 3690. My repo is at /var/svn/project_name. I have my router forwarding port 3690 to the local server (as well as ports 80, 21, 22 and a few others). When I connect locally to svn://192.168.0.2/project_name it works great. When I connect from an external server to svn://my.static.ip/project_name I get a timeout connecting to the host. However, if I browse to http://my.static.ip there is no problem, so port forwarding is working (at least for port 80). I don't want to run WebDAV or svn via HTTP/S; I'd like it to work using svnserve, as documented in the svn book. What have I misconfigured?

    EDIT: Here is the last part of my iptables dump. I'm not an expert, but it looks OK to me:

        ACCEPT  tcp  --  anywhere  anywhere  state NEW tcp dpt:svn
        ACCEPT  udp  --  anywhere  anywhere  state NEW udp dpt:svn
        ACCEPT  tcp  --  anywhere  anywhere  state NEW tcp dpts:6680:6699
        ACCEPT  udp  --  anywhere  anywhere  state NEW udp dpts:6680:6699
        REJECT  all  --  anywhere  anywhere  reject-with icmp-host-prohibited

    EDIT 2: Results from sudo netstat -tulpn:

        tcp  0  0  0.0.0.0:3690  0.0.0.0:*  LISTEN  1455/svnserve
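
    Since svnserve is listening globally and the iptables rules look permissive, a hedged next step is proving whether the forwarded port is reachable at all, independent of SVN:

        # from an external host: does anything answer on 3690?
        nc -vz my.static.ip 3690
        # on the server: confirm the router actually delivers the packets
        sudo tcpdump -i eth0 -n tcp port 3690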

  • Configure linux machine as bridge/switch and end device

    - by leemes
    At my home, I have two desktop PCs in two rooms. The router / DSL modem is in one of these rooms. Now I want to configure a home server (having 2 LAN ports, running 24/7) in the corridor between the two rooms, using only one LAN cable at each door. This gives me the following physical configuration:

            (door)              (door)
        .----/-/----.     .-----/-/----------._  FritzBox
        |           |     |              .----´´  DSL Router
        PC1        Server |             PC2

    As just said, the server has 2 network interfaces and is running Ubuntu. What I need now is a network configuration that lets both the server and PC1 connect to the router; I think the server needs to act as a bridge or switch. Currently, all computers are configured with static IP addresses.

    If I'm understanding it correctly, a bridge / switch doesn't have its own IP address, but since the server also needs to be its own end device, it needs to have one. My first question is: do I have to configure both interfaces separately, giving both the same static IP address? My next question is: how do I bridge the two physical networks into one? I have a basic understanding of bridges and switches (but am always confused again and again), but I don't know how to configure this in software; I only know that it's possible to do so :) The third question is: is it possible to configure this in a way that network packets from/to PC1 to/from the router only go through hardware, or at least consume little CPU on the server? Can you help me? Thanks in advance!
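
    A hedged sketch of the usual software answer on Ubuntu: bridge-utils, with the bridge interface itself (not the physical ports) carrying the server's single static address; the addresses below are hypothetical:

        # apt-get install bridge-utils, then in /etc/network/interfaces:
        auto br0
        iface br0 inet static
            address 192.168.178.2      # the server's own (end-device) IP
            netmask 255.255.255.0
            gateway 192.168.178.1      # the FritzBox
            bridge_ports eth0 eth1     # the two physical NICs
            bridge_stp off

    Bridging happens in the kernel, so PC1-to-router traffic does cross the server's CPU, but forwarding frames at home-network rates is a negligible load for an always-on box.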

  • Using a nat rule to translate 80/443 traffic to web server, but internal users cannot access it using external ip/domain name

    - by Josh
    I am using Cisco ASDM for the ASA. I have my internal network, called soa, and my outside interface, called outside. Let's say the outside IP given to me by my ISP is y.y.y.y. I have a web server inside my network with a static IP of x.x.x.110. I have configured 2 static NAT rules (one for http, the other for https): the source is x.x.x.110, the interface is outside, and the service is http or https.

    Maybe I am doing this wrong, but when I run the packet tracer, I choose the outside interface, use 8.8.8.8 as the source IP, and set the destination to my outside IP, y.y.y.y. When I run that, it shows the packet traversing successfully, in 9 steps. For my other test, I switch to the soa interface, input an IP on that network, and leave the destination the same. This test finishes in 2 steps and then fails on my access list; the rule that fails is my catch-all (source: any, destination: any, service: ip, action: deny).

    What rule do I need to make to allow my soa network to go out and come back in by my external IP address (using a domain name attached to that IP in my DNS, of course)?
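
    What's being described is hairpin (U-turn) NAT. A hedged sketch in CLI form, assuming ASA 8.3-or-later NAT semantics and hypothetical object names:

        object network obj-web
         host x.x.x.110
        object network obj-public
         host y.y.y.y
        ! let traffic enter and leave the same (soa) interface
        same-security-traffic permit intra-interface
        ! inside clients hitting y.y.y.y get translated to the web server,
        ! with their own source PATed to the interface so replies return
        nat (soa,soa) source dynamic any interface destination static obj-public obj-web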

  • Ubuntu server apt-get says "(-5 - No address associated with hostname)"

    - by Srini
    I have an Ubuntu 12.04 server. Running sudo apt-get update on it produces errors like this:

        W: Failed to fetch http://au.archive.ubuntu.com/ubuntu/dists/precise-backports/main/binary-i386/Packages
        Something wicked happened resolving 'au.archive.ubuntu.com:http' (-5 - No address associated with hostname)

    I am able to ping all the other hosts on the network and also Google's DNS at 8.8.8.8, but I am unable to ping www.google.com. So I'm guessing something is wrong with my DNS setup, but I'm not sure what. I use a static IP, and my /etc/network/interfaces looks like this:

        auto eth0
        iface eth0 inet static
            address 192.168.1.50
            netmask 255.255.255.0
            network 192.168.1.0
            broadcast 192.168.0.255
            gateway 192.168.1.1
            #dns-nameserver 203.12.160.35 203.12.160.36
            #nameserver 203.12.160.35 203.12.160.36

    My /etc/resolv.conf and /etc/resolvconf/resolv.conf.d/base are both empty, and my /etc/resolvconf/resolv.conf.d/original says:

        nameserver 192.168.1.1

    Any help would be greatly appreciated. P.S. I've googled it a bit, and the common resolution is to switch to DHCP, which I don't want to do since this is my home server. Thanks, Srini
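
    A hedged reading of that interfaces file: both commented-out lines use keywords resolvconf ignores; on 12.04 the supported stanza is dns-nameservers (plural), which resolvconf picks up when the interface comes up:

        iface eth0 inet static
            # ...existing address/netmask/gateway lines...
            dns-nameservers 203.12.160.35 203.12.160.36

    After editing, sudo ifdown eth0 && sudo ifup eth0 (or a reboot) should regenerate /etc/resolv.conf.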

  • Raspberry pi slows down my entire network

    - by gnusouth
    Whenever my Raspberry Pi is connected to the network (via ethernet), the entire network slows to a crawl. On my main computer, ping times for google.com go from ~10ms to ~200ms and it takes forever to load web pages. Connections are also slow on the Pi itself, with apt-get update showing pathetic speeds on the order of 1KB/s. Turning off the Pi completely removes the drag from the network.

    I've tried static and dynamic IP addresses for the Pi, but both have the same problems. I'm currently using Raspbian (downloaded today), but I also had this problem with Arch Linux. I've checked the connection's duplex with dmesg | grep -i duplex, which shows that the Pi's connection is running at 100Mbps, full duplex, as expected.

    My modem/router is a Billion 7404VNPX (an Australian thing); relatively high-end, albeit a bit buggy at times (it will occasionally delete all its firewall settings). It assigns IPs in the range 192.168.1.1 to 192.168.1.20 and has 192.168.1.254 as its own IP. When I assign static IPs I tend to use the 192.168.1.200 area. Does anyone have any idea as to what could be causing this weird slowdown? Or any tests I could try? Thanks
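
    A couple of hedged tests, run on the Pi, that narrow down whether it is flooding the wire or merely fighting a bad link (interface name assumed to be eth0):

        # error/drop counters climbing rapidly suggests a cabling or PHY issue
        ifconfig eth0
        # unexpected sustained traffic suggests a loop or a chatty daemon
        sudo tcpdump -i eth0 -n -c 100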

  • Juniper SSG20 IP settings for email server

    - by codemonkie
    We have 5 usable external static IP addresses leased from our ISP, .49 to .53:

        - .49 is assigned to the Juniper SSG20 firewall and NATed for 172.16.10.0/24
        - .50 is assigned to a Windows box acting as web server and domain controller
        - .51 is assigned to another Windows box running Exchange (domain: mycompany1.com); the MX record points to 20x.xx.xxx.51

    Currently there is a policy that forwards all incoming SMTP traffic addressed to .51 to the NATed address of the Exchange box (private IP: 172.16.10.194). We can send and receive emails, both internal and external, but Gmail says mail from mycompany1.com is not sent from the same IP as the MX lookup returns; it arrives from 20x.xx.xxx.49 instead:

        Received-SPF: neutral (google.com: 20x.xx.xxx.49 is neither permitted nor denied by best guess record for domain of [email protected]) client-ip=20x.xx.xxx.49;
        Authentication-Results: mx.google.com; spf=neutral (google.com: 20x.xx.xxx.49 is neither permitted nor denied by best guess record for domain of [email protected]) [email protected]

    The MX record, both in global DNS space and in the domain controller at .50, sets mail.mycompany1.com to 20x.xx.xxx.51. My attempt to resolve the above issue was to:

        1. Update the MX record from 20x.xx.xxx.51 to 20x.xx.xxx.49
        2. Create a new VIP for SMTP traffic addressed to 20x.xx.xxx.49, forwarding to 172.16.10.194

    After my changes, incoming email stopped working. I believe it has something to do with a Juniper setting, such that SMTP addressed to .49 is not forwarded to 172.16.10.194. Also, I have been wondering: is it mandatory to assign an external static IP address to the Juniper firewall? Any help appreciated. TIA
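
    A hedged alternative that avoids touching the MX at all: outbound mail leaving via .49 is an egress-NAT matter, so either source-NAT the Exchange box's outbound SMTP to .51 on the SSG20 (for example via a MIP or an egress policy specific to that host, rather than the default interface NAT), or simply declare .49 a legitimate sender in DNS:

        ; hypothetical zone entry authorising both addresses
        mycompany1.com.  IN  TXT  "v=spf1 ip4:20x.xx.xxx.49 ip4:20x.xx.xxx.51 -all"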

  • port forwarding/network settings preventing from game hosting

    - by Xitcod13
    I asked where to post this question on Stack Overflow meta and they directed me here. I'm on a wireless connection and I want to host games in StarCraft: Brood War, and I've been looking everywhere for how to accomplish that. My internet is amazingly fast, so it's not an internet problem (and when I play other people's games I don't experience lag).

    I found out that I need a static IP, but I have already checked that I do (I downloaded a program to make my IP static and it already was; the program asked which router I use, so I think it checked the router settings already). I found out that I need to allow SC access through the firewall, which I already did (I have ZoneAlarm, but I allowed SC everything possible except receiving emails, lol). I have recently noticed that a few people can actually join my games, but most cannot. I don't know what's going on here. I really want to be able to host games. Overall, how do I go about checking what is wrong with the network?

    Update: Alright, I figured out what I did wrong in the first part: I did not actually set up forwarding on the router -.- I have tried to fix my mistake. I went to the forwarding options in my router (as this guide for my specific router suggests), but when I click OK I get an "incorrect IP address" message. 192.168.1.1 is my router's address, and the default address that appears there is 192.168.1.(blank). I have set it to my computer's current IPv4 address, which is 192.168.1.23. I hope this works; if so, I will post it as an answer and mark it.
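
    For what it's worth, classic Battle.net games use UDP port 6112, so a hedged sketch of the forwarding entry would be:

        External port: 6112  ->  192.168.1.23 : 6112   (protocol: UDP)

    Making 192.168.1.23 a DHCP reservation (or a truly static address) matters too; otherwise the rule silently breaks the next time the lease changes.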

  • Serving images from another hostname vs Apache overload for the rewrites

    - by luison
    We are trying to further improve the speed of some sites with older HTML, partly to obtain better SEO results. We have now applied some minification measures: combined HTML, CSS, etc. We use a small virtualized infrastructure, and we've always wanted a light + standard HTTP server split, where one server handles images and static content and the other handles PHP, rewrites, etc. We can easily do that now with a VM using the same files and vhost configuration (bind mounts) as Apache, but with hardly any modules loaded. This means the light httpd will have a smaller footprint, allowing us to serve more, and more quickly, run more MinSpareServers, and so on.

    Since browsers also benefit from loading static content from different hostnames, we've thought about building a rewrite rule on our main server (main.com) to "redirect" all images and CSS (*.jpg, *.gif, *.css, etc.) to the same files at, say, cdn.main.com, thus letting the browser open more connections.

    The question is: given that we already have a very complex rewrite ruleset (we manually manipulate many old URLs for SEO), will it be worth it? I mean, will the additional load on main's Apache, which has to redirect main.com/image.jpg (I understand we'll have to do a 301) to cdn.main.com/image.jpg, plus cdn.main.com then having to serve it, be larger than the gain we'd achieve in the browser? Could the excess of 301s for all images on a page be penalized by Google? How do large companies work this out? Does the original code already include images linked from the CDN with absolute paths?
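
    If the redirect route is still tempting, a minimal mod_rewrite sketch (hostnames as in the post; the extension list is illustrative) shows how little rule complexity it adds; note, though, that each 301 still costs the browser a full extra round-trip per asset, which is why rewriting the HTML itself is the more common approach:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^main\.com$ [NC]
        RewriteRule \.(jpe?g|gif|png|css|js)$ http://cdn.main.com%{REQUEST_URI} [R=301,L]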

  • Openvpn - stuck on Connecting

    - by user224277
    I've got a problem with my OpenVPN server: every time I try to connect to the VPN, I get a window with login and password boxes, so I type my login and password (the login is the Common Name (user1), and the password is the challenge password from the client certificate). Logs:

        Jun 7 17:03:05 test ovpn-openvpn[5618]: Authenticate/Decrypt packet error: packet HMAC authentication failed
        Jun 7 17:03:05 test ovpn-openvpn[5618]: TLS Error: incoming packet authentication failed from [AF_INET]80.**.**.***:54179

    client.ovpn:

        client
        #dev tap
        dev tun
        #proto tcp
        proto udp
        remote [Server IP] 1194
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        ca ca.crt
        cert user1.crt
        key user1.key
        <tls-auth>
        -----BEGIN OpenVPN Static key V1-----
        d1e0...
        -----END OpenVPN Static key V1-----
        </tls-auth>
        ns-cert-type server
        cipher AES-256-CBC
        comp-lzo yes
        verb 0
        mute 20

    My openvpn.conf:

        port 1194
        #proto tcp
        proto udp
        #dev tap
        dev tun
        #dev-node MyTap
        ca /etc/openvpn/keys/ca.crt
        cert /etc/openvpn/keys/VPN.crt
        key /etc/openvpn/keys/VPN.key
        dh /etc/openvpn/keys/dh2048.pem
        server 10.8.0.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        #push "route 192.168.5.0 255.255.255.0"
        #push "route 192.168.10.0 255.255.255.0"
        keepalive 10 120
        tls-auth /etc/openvpn/keys/ta.key 0
        #cipher BF-CBC        # Blowfish
        #cipher AES-128-CBC   # AES
        #cipher DES-EDE3-CBC  # Triple-DES
        comp-lzo
        #max-clients 100
        #user nobody
        #group nogroup
        persist-key
        persist-tun
        status openvpn-status.log
        #log openvpn.log
        #log-append openvpn.log
        verb 3
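
    A hedged observation about the HMAC failure: the server declares a tls-auth key direction (tls-auth ta.key 0), while the client's inline <tls-auth> block sets no direction at all, and mismatched tls-auth directions produce exactly this packet HMAC error. Assuming that diagnosis, the client config would need, alongside the <tls-auth> block:

        key-direction 1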

  • Windows 7: "Replace All Child Object Permissions" Doesn't Stay Checked

    - by raywood
    I right-click on a top-level folder in Windows Explorer and choose Properties > Security tab > Advanced > Change Permissions. I check "Replace all child object permissions with inheritable permissions from this object" and click Apply. I get a Windows Security dialog that says "Setting security information on", with the list of objects flashing by. But afterwards the "Replace all child object permissions" box is unchecked again. What is happening here?
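
    For context, a command-line equivalent of that checkbox (the folder path is hypothetical): icacls performs the same one-shot replacement, which is also why the GUI box doesn't stay checked; it is an action, not a persistent setting.

        :: replace all child ACLs with inherited ones, recursively
        icacls "C:\TopFolder" /reset /T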
