Search Results

  • hadoop: port appears open locally but not remotely

    - by miguel
    I am new to Linux and Hadoop, and I am having the same issue as in this question. I think I understand what is causing it, but I don't know how to solve it (I don't know what they mean by "Edit the Hadoop server's configuration file so that it includes its NIC's address."). The other post that they link to says that the configuration files should refer to the machine's externally accessible host name. I think I got this right, as every Hadoop configuration file refers to "master" and the /etc/hosts file lists the master by its private IP address. How can I solve this?

    Edit: I have 5 nodes: master, slavec, slaved, slavee and slavef, all running Debian.

    This is the hosts file in master:

        127.0.0.1 master
        10.0.1.201 slavec
        10.0.1.202 slaved
        10.0.1.203 slavee
        10.0.1.204 slavef

    This is the hosts file in slavec (it looks similar in the other slaves):

        10.0.1.200 master
        127.0.0.1 slavec
        10.0.1.202 slaved
        10.0.1.203 slavee
        10.0.1.204 slavef

    The masters file in master:

        master

    The slaves file in master:

        master
        slavec
        slaved
        slavee
        slavef

    The masters and slaves files in slavex have only one line: slavex
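
    A likely culprit, judging from the listing above, is that master's own hosts file maps the name "master" to 127.0.0.1, so the Hadoop daemons bind to the loopback interface and their ports are only reachable locally. A minimal sketch of a corrected /etc/hosts on master, assuming 10.0.1.200 is master's LAN address as the slaves' files suggest:

        127.0.0.1  localhost
        10.0.1.200 master
        10.0.1.201 slavec
        10.0.1.202 slaved
        10.0.1.203 slavee
        10.0.1.204 slavef

    After a change like this, the Hadoop daemons need to be restarted so they re-bind to the non-loopback address.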

    Read the article

  • Apache logging issues

    - by Dan
    I'm trying to parse Apache log files, but I'm finding some strange results and I'm not sure what they mean. Hopefully someone can provide some insight. (All of the IP addresses were altered; none actually start with 192. I didn't figure the search engines mattered, though.)

    In the first example, multiple IP addresses are showing up in the host field:

        192.249.71.25 - - [04/Aug/2009:04:21:44 -0500] "GET /publications/example.pdf HTTP/1.1" 200 2738
        192.0.100.93, 192.20.31.86 - - [04/Aug/2009:04:21:22 -0500] "GET /docs/another.pdf HTTP/1.0" 206 371469

    What causes this? Does it have to do with proxy servers? Is there a way to have Apache log only one?

    In the second example, a bunch of information is just completely missing! What would cause this?

        msnbot-65-55-207-50.search.msn.com - - [29/Dec/2009:15:45:16 -0600] "GET /publications/example.pdf HTTP/1.1" 200 3470073 "-" "msnbot/2.0b (+http://search.msn.com/msnbot.htm)" 266 3476792
        - - - - "-" - - "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; InfoPath.1)" 285 594
        - - - - "-" - - "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; InfoPath.1)" 285 4195
        - - - - "-" - - "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; InfoPath.1)" 299 109218
        crawl-17c.cuil.com - - [29/Dec/2009:15:45:46 -0600] "GET /publications/another.pdf HTTP/1.0" 200 101481 "-" "Mozilla/5.0 (Twiceler-0.9 http://www.cuil.com/twiceler/robot.html)" 253 101704

    My CustomLog configuration says:

        LogFormat "%h %l %u %t \"%r\" %s %b \"%{Referer}i\" \"%{User-agent}i\" %I %O" common
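
    A hedged note on the first example: a comma-separated pair like "192.0.100.93, 192.20.31.86" is the characteristic shape of an X-Forwarded-For header, which gains one entry per proxy hop, so something in front of (or inside) this Apache - a proxy, load balancer, or a module that rewrites the client address - is the most likely source. One way to tell the two apart is to log both fields explicitly for a while; a sketch (the format nickname and log path are placeholders, and %I/%O still require mod_logio as in the existing format):

        LogFormat "%h \"%{X-Forwarded-For}i\" %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\" %I %O" proxydebug
        CustomLog logs/proxydebug_log proxydebug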

    Read the article

  • 554 5.7.1 <mail_addr>: Relay access denied centos postfix

    - by Relicset
    I have a problem sending mail from Postfix on CentOS. I have Postfix set up as the sending mail server, but I am getting an error. As in the link, I tried the following commands:

        telnet localhost smtp
        Trying ::1...
        Connected to localhost.
        Escape character is '^]'.
        220 mydomain.com ESMTP Postfix
        ehlo localhost
        250-mydomain.com
        250-PIPELINING
        250-SIZE 10240000
        250-VRFY
        250-ETRN
        250-ENHANCEDSTATUSCODES
        250-8BITMIME
        250 DSN
        mail from:<domain.com>
        250 2.1.0 Ok
        rcpt to:<[email protected]>
        554 5.7.1 <[email protected]>: Relay access denied

    Edit 1: In a terminal this works:

        echo TEST | mail -v -s "Test mail" [email protected]

    My postconf -n shows the following information:

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        data_directory = /var/lib/postfix
        debug_peer_level = 2
        home_mailbox = Maildir/
        html_directory = no
        inet_interfaces = localhost
        inet_protocols = all
        mail_owner = postfix
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain
        mydomain = dummy.com
        myhostname = dummy.com
        mynetworks = all
        mynetworks_style = host
        myorigin = $mydomain
        newaliases_path = /usr/bin/newaliases.postfix
        queue_directory = /var/spool/postfix
        readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES
        sample_directory = /usr/share/doc/postfix-2.6.6/samples
        sendmail_path = /usr/sbin/sendmail.postfix
        setgid_group = postdrop
        unknown_local_recipient_reject_code = 550

    What configuration do I have to perform to send mail from my server?
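
    Judging from the postconf output above, relay permission is the thing to look at: when mynetworks is set explicitly it takes precedence over mynetworks_style, and "all" is not a network specification Postfix understands, so even connections from localhost are not treated as trusted. A minimal sketch of a safer setting, assuming only local clients should be allowed to relay:

        postconf -e 'mynetworks = 127.0.0.0/8 [::1]/128'
        postfix reload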

    Read the article

  • LSI 9260-8i w/ 6 256gb SSDs - RAID 5, 6, 10, or bad idea overall?

    - by Michael Pearson
    We're provisioning a new production server for our reasonably busy website. Our choice of host has available a 6-drive configuration with an LSI 9260-8i card. My initial thought was to fill all six bays with SSDs (Intel 520 256 GB) and set them up in RAID. Good, bad, or terrible idea? Can the card handle it? Should we be using RAID 5, 6 or 10? This would be the first time the provider has filled all six slots of this rackmount with SSDs, so they're a bit hesitant. I'm wondering if somebody else with this card has done something similar in a production environment. We do about 43 GB of writes per day and currently use about 300 GB of storage. The server acts as webserver, database, and image store for approximately 1 million files. The plan is to underprovision the SSDs by approximately 10% to 20% to increase their overall lifespan and performance. The fallback option is 2x 480 GB SSDs in RAID 1 plus another 2x 1 TB HDDs in RAID 1. The motivation behind this is that the server rental cost difference between 2 SSDs and 6 SSDs is minimal (compared to the overall cost of the rental). We do not have any special high-IOPS requirements. However, if the configuration is known to work, I don't see a good reason not to use it, and it would save us from having to worry about separate 'fast and small' and 'slow and large' disks.

    Read the article

  • Hyper-V Virtual Machine Networking issues related to Max Ethernet Frame Size

    - by Goatmale
    I fixed an issue earlier today, but I'm interested in learning WHY the fix worked. We set up a new Hyper-V virtual machine only to discover that HTTP traffic wasn't working. HTTPS, pings, everything else was working fine. After months of prodding around I took a shot in the dark. On the Hyper-V host server, the physical NIC had an advanced setting, "Max Ethernet Frame Size", set to 1500. After setting it to 1514 the issue was fixed. Alternatively, setting it to 1512 did not solve the issue; 1514 is the magic number. My best guess is that when the setting was 1500 it still allowed incoming pings because their data payload is a lot smaller than, say, HTTP traffic. As for HTTPS traffic, I read about something called "Path MTU Discovery", which I'm going to assume is why HTTPS traffic was getting through fine, albeit more slowly. Looking at this post, people agree that 1518 is the maximum total frame size. Why didn't I need to change the setting to 1518 instead of 1514 bytes? And why is the default frame size 1500 if that's the maximum size of the Ethernet payload and not the maximum size of the frame?
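
    For reference, the standard Ethernet II sizes work out as follows; the NIC setting here presumably counts the header but not the trailing checksum, which would explain why 1514 rather than 1518 is the threshold:

        1500 bytes    maximum payload (the IP MTU)
        +  6 bytes    destination MAC address
        +  6 bytes    source MAC address
        +  2 bytes    EtherType
        = 1514 bytes  header + payload
        +  4 bytes    frame check sequence (FCS)
        = 1518 bytes  maximum untagged frame on the wire (an 802.1Q VLAN tag adds 4 more)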

    Read the article

  • Running Windows Update inside Windows XP Mode

    - by Noam Gal
    I am working on a Win 7 Ultimate machine and was using XP Mode for some development tasks (compatibility checks). Everything worked like a charm on the XP side, including updates. Two days ago I had to switch computers (mainly a new motherboard/CPU), and I just stuck my old HD inside the newer case. Win 7 worked like a charm - installed all the new drivers, identified everything automatically, no sweat. The trouble started when I tried running my old XP Mode - it won't launch, complaining about the CPU change. I figured it wasn't a big deal, so I deleted the VM and re-ran XP Mode. It told me it couldn't find it and offered to create a new one, which is just what I wanted. I have finished setting up the new XP Mode VM, and it seems to work just fine. I got it to use the host network adapter, so I can surf from "inside". But I can't get Windows Update to run. Whenever I click the "Custom" button on the WU site, after a short while I get the [Error number: 0x80072EFD] page. I tried several solutions for it from around the web (clearing some cache and restarting wuauserv, even a Microsoft Fix it run), but still nothing seems to work. Does anyone here have any new tip for me? Thanks.

    Read the article

  • mod_deflate doesn't work [closed]

    - by kikio
    I want to gzip my static files, so I put this in .htaccess:

        <IfModule mod_deflate.c>
        AddOutputFilterByType DEFLATE text/text text/html text/plain text/xml text/css application/x-javascript application/javascript
        </IfModule>

    I also looked for mod_deflate in the Loaded Modules section of the phpinfo() output, and I found it. But when I track server responses with Firebug, no gzipped file can be found:

        HTTP/1.1 200 OK
        Date: Sat, 08 Sep 2012 21:41:21 GMT
        Last-Modified: Sat, 08 Sep 2012 21:26:04 GMT
        Accept-Ranges: bytes
        Cache-Control: max-age=604800
        Expires: Sat, 15 Sep 2012 21:41:21 GMT
        Vary: Accept-Encoding
        Keep-Alive: timeout=3, max=50
        Connection: Keep-Alive
        Content-Type: text/css
        Content-Length: 18206

    What's the problem? I'm sure I have mod_deflate enabled (according to PHP's apache_get_modules()).

    UPDATE: the request headers:

        GET /d/jquery-ui.css HTTP/1.1
        Host: 127.0.0.1
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:15.0) Gecko/20100101 Firefox/15.0.1
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip, deflate
        DNT: 1
        Connection: keep-alive
        Pragma: no-cache
        Cache-Control: no-cache
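
    A couple of hedged checks that might narrow this down (the paths are Debian/Ubuntu-style guesses; adjust for the actual layout): confirm that Apache itself, not just PHP's view of it, has the filter module loaded, and that the directory allows .htaccess overrides.

        # list modules Apache has actually loaded (look for deflate_module)
        apachectl -M | grep -i deflate

        # on Debian/Ubuntu-style layouts, enable the module if it is missing
        sudo a2enmod deflate && sudo service apache2 restart

        # AllowOverride must permit the .htaccess directives; "None" would make
        # Apache silently ignore the AddOutputFilterByType line
        grep -R "AllowOverride" /etc/apache2/sites-enabled/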

    Read the article

  • How to disable TLS in postfix?

    - by Hamlet
    Hi, how can I disable TLS in Postfix? I have problems sending email to a specific host; they said I should disable TLS since I am not using it. Emails to indamail.hu never arrive.

        Apr 6 16:21:15 server postfix/smtp[12059]: < mail13.indamail.hu[91.83.45.53]: 220 mail13.indamail.hu ESMTP
        Apr 6 16:21:15 server postfix/smtp[12059]: > mail13.indamail.hu[91.83.45.53]: EHLO server.xxx.hu
        Apr 6 16:21:15 server postfix/smtp[12059]: < mail13.indamail.hu[91.83.45.53]: 250-mail13.indamail.hu
        Apr 6 16:21:15 server postfix/smtp[12059]: < mail13.indamail.hu[91.83.45.53]: 250-STARTTLS
        Apr 6 16:21:15 server postfix/smtp[12059]: < mail13.indamail.hu[91.83.45.53]: 250-PIPELINING
        Apr 6 16:21:15 server postfix/smtp[12059]: < mail13.indamail.hu[91.83.45.53]: 250-8BITMIME
        Apr 6 16:21:15 server postfix/smtp[12059]: < mail13.indamail.hu[91.83.45.53]: 250-SIZE 10485760
        Apr 6 16:21:15 server postfix/smtp[12059]: < mail13.indamail.hu[91.83.45.53]: 250 AUTH LOGIN PLAIN CRAM-MD5
        Apr 6 16:21:15 server postfix/smtp[12059]: > mail13.indamail.hu[91.83.45.53]: MAIL FROM:<[email protected]> SIZE=2365 BODY=8BITMIME
        Apr 6 16:21:15 server postfix/smtp[12059]: > mail13.indamail.hu[91.83.45.53]: RCPT TO:<[email protected]>
        Apr 6 16:21:15 server postfix/smtp[12059]: > mail13.indamail.hu[91.83.45.53]: DATA
        Apr 6 16:21:15 server postfix/smtp[12059]: send attr reason = lost connection with mail13.indamail.hu[91.83.45.53] while sending MAIL FROM
        Apr 6 16:21:15 server postfix/smtp[12059]: 70B132188649: to=<[email protected]>, relay=mail13.indamail.hu[91.83.45.53]:25, delay=163321, delays=163319/0.02/1.4/0.04, dsn=4.4.2, status=deferred (lost connection with mail13.indamail.hu[91.83.45.53] while sending MAIL FROM)

    main.cf contains:

        smtp_use_tls = no
        smtpd_use_tls = no

    Thanks, hamlet
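
    Two hedged observations on the log above: the session is already running in plain text (the client never issues STARTTLS), and it drops at MAIL FROM, so TLS may not be the real cause of the deferral. That said, if the goal is simply to make sure client-side TLS is off, on Postfix 2.3 and later the smtp_tls_security_level parameter takes precedence over the legacy smtp_use_tls setting when it is set, so it is worth pinning it explicitly - a sketch:

        postconf -e 'smtp_tls_security_level = none'
        postconf smtp_tls_security_level smtp_use_tls
        postfix reload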

    Read the article

  • Fatal Execution Engine Error on Windows Server 2008 R2, IIS 7.5

    - by user66524
    Hi guys, we are running some ASP.NET (3.5) applications on Windows Server 2008 R2 with IIS 7.5. Recently we have been getting some event log entries that we find difficult to interpret; we have no idea what is wrong and hope someone can help.

    1. EventID: 1334 (9-1-2011 8:41:57). Error message:

        An error occurred during a process host idle check.
        Exception: System.AccessViolationException
        Message: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
        StackTrace:
            at System.Collections.Hashtable.GetEnumerator()
            at System.Web.Hosting.ApplicationManager.IsIdle()
            at System.Web.Hosting.ProcessHost.IsIdle()

    2. EventID: 1023 (9-1-2011 19:44:02). Error message:

        .NET Runtime version 2.0.50727.4952 - Fatal Execution Engine Error (742B851A) (80131506)

    3. EventID: 1000 (9-1-2011 19:44:03). Error message:

        Faulting application name: w3wp.exe, version: 7.5.7600.16385, time stamp: 0x4a5bcd2b
        Faulting module name: mscorwks.dll, version: 2.0.50727.4952, time stamp: 0x4bebd49a
        Exception code: 0xc0000005
        Fault offset: 0x0000c262
        Faulting process id: 0x%9
        Faulting application start time: 0x%10
        Faulting application path: %11
        Faulting module path: %12
        Report Id: %13

    4. EventID: 5011 (9-1-2011 19:44:03). Error message:

        A process serving application pool 'AppPoolName' suffered a fatal communication error with the Windows Process Activation Service. The process id was '2552'. The data field contains the error number.

    5. Some more info: we got memory.hdmp (234 MB) and minidump.mdmp (19.2 MB) from Control Panel's Action Center, but I do not know how to use them. :(
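
    Regarding point 5, one hedged way to get something out of those dump files is WinDbg (from the Debugging Tools for Windows) with the SOS extension for the .NET 2.0 runtime, which matches the faulting mscorwks.dll in event 1000. The dump path below is a placeholder; this is a sketch, not a guaranteed recipe:

        windbg -z C:\path\to\memory.hdmp

    Then, inside the debugger:

        .symfix
        .reload
        .loadby sos mscorwks
        !analyze -v
        !clrstack

    .symfix points WinDbg at the public Microsoft symbol server, .loadby sos mscorwks loads the managed-code extension, !analyze -v runs the automated crash analysis, and !clrstack shows the managed call stack of the faulting thread.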

    Read the article

  • adding trac to apache2 configuration file

    - by Michael
    I currently have Apache2 running from a MythTV/MythWeb install. This made two config files available in sites-enabled. One of them ("default-mythbuntu") has the VirtualHost directive and seems like a normal file (except for a change to the directory index). There is also a mythweb.conf file that only has <Directory> directives and sets various variables for MythWeb. I want to host a Trac site as well. According to this site: http://trac.edgewall.org/wiki/TracOnUbuntu there are some settings I need to set for the Trac site. They give directions for making a VirtualHost, but I think I should use the current VirtualHost and just add the directives (I'll need to change the default location they point to, from the site above, so that it points to the Trac location). Where should I put these directives? Can I make a trac.conf with just the settings for Trac and enable it, or do I need to put them in the default-mythbuntu file? I don't like the latter because it doesn't separate out the Trac config. How does Apache know that mythweb.conf (and the trac.conf I want to make) belongs to the VirtualHost defined in default-mythbuntu? It is the only VirtualHost defined on my system, if that matters.
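
    On the last question: Apache concatenates everything it is told to Include (which, on this layout, covers both conf.d/ and sites-enabled/) into one configuration, and <Directory> or <Location> sections that sit outside any <VirtualHost> belong to the main server and are inherited by every virtual host - which is why mythweb.conf applies to the VirtualHost in default-mythbuntu without naming it. So a separate file for Trac should work; a hedged sketch of the workflow, with placeholder file names and assuming the Debian/Ubuntu Apache layout:

        # put only the Trac directives from the tutorial into their own file
        sudoedit /etc/apache2/conf.d/trac.conf

        # check the syntax and reload; anything under conf.d/ is pulled into the
        # global configuration, so the existing VirtualHost inherits it
        sudo apache2ctl configtest
        sudo /etc/init.d/apache2 reload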

    Read the article

  • How to set up multiple DNS servers on an intranet

    - by Brent
    We have an Active Directory network with a mixture of Windows DNS and Linux BIND servers, and we want to use OpenDNS as our external DNS provider. I am wondering what the best way is to set up these servers (regarding forwarders, recursion, etc.).

    Active Directory is our main internal DNS for our domain and has 3 redundant servers; DHCP and all our servers use these as their DNS servers. Then we have a legacy AD server from an old network that is still authoritative for a bunch of domains. Finally, we have a couple of Linux BIND servers that are authoritative for a bunch of websites we host.

    Should our main AD servers point to our legacy AD server, which points to one of our BIND servers, which points to the other BIND server, which finally points out to OpenDNS? Or should our main AD servers point to all of these directly? Or is there a better option? What happens if a domain is listed in 2 places? Does DNS process the forwarders in order? What about root servers - if I want to use OpenDNS for "everything else", do I just list them as the last forwarders and delete the root hints from all my DNS servers? How does recursion work - in this scenario, should I be using recursion or not?
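
    For the "everything else goes to OpenDNS" part on the BIND side, a minimal sketch looks like the following; the resolver addresses are OpenDNS's publicly documented 208.67.222.222 and 208.67.220.220, and "forward only" stops the server from walking the root servers itself. Zones a server is authoritative for are always answered locally, regardless of forwarding:

        // named.conf options - sketch
        options {
            recursion yes;                   // answer recursive queries from internal clients
            allow-recursion { internal; };   // "internal" stands for an acl of your own subnets
            forwarders {
                208.67.222.222;              // OpenDNS
                208.67.220.220;
            };
            forward only;                    // never fall back to the root servers directly
        };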

    Read the article

  • Apache2 process stuck at 100% cpu, CLOSE_WAIT socket lingering

    - by mmazing
    I've troubleshot the heck out of this today, and I can't seem to find any information on how to determine what is happening exactly. Basically, on my development server, another developer is causing CLOSE_WAIT connections that eat up one or more apache2 processes for several hours unless I restart apache2. Running strace on any of the processes yields no information, only that it was able to attach. mod_proxy is not enabled. KeepAlive is on, KeepAliveTimeout is 15 seconds, MaxKeepAliveRequests is 100. From what I've been reading, this may or may not be an Apache issue at all; that may just be how CLOSE_WAIT works (the server is waiting for a FIN packet to close the connection). I just can't believe that a server would be crippled so easily by not receiving a packet from a remote host telling it to close the connection - especially without any intervention for well over an hour. Any tips? I'm about to pull my hair out.

    Edit: Also, there are no unusual entries in any Apache log files.

    Edit 2: lsof -i shows only a single CLOSE_WAIT per hanging process. (That's what has been bothering me about this: most other discussions talk about many CLOSE_WAIT connections, while I only have one per process.) The nature of the code that is running (PHP) doesn't really lend itself to leaving connections open and whatnot. I can run the same code that he is executing, with the same session data, and not end up with a hanging process.

    Read the article

  • Bounce backs from web-generated e-mails are missing

    - by JerSchneid
    We use Google Apps to host my company's mail. On our website, we send some e-mails on behalf of our users. In those e-mails we include lines like this:

        Return-Path: <[email protected]>
        Sender: <[email protected]>

    Sending the messages works great (they pass SPF tests), but when a message is sent TO an invalid e-mail address, we expect to get a bounce-back message sent to "[email protected]". That message never arrives. (If we send an e-mail manually from within the Gmail interface to the same bad address, the bounce does arrive.) We used to receive the bounce-back messages as expected, but it seems like they are always quietly blocked now (they're not in spam or anything). Is there a new policy that blocks bounce-backs when the "From" does not match the "Return-Path", or something? We would really like to get these bounce-backs to verify the delivery of the messages. Is there any way to prevent them from being blocked?! Thank you!

    Read the article

  • iptables: allowing incoming for 192.168.1.0/24 allowed incoming for all?

    - by nortally
    The internal side of my ISP router has three devices:

        ISP router       128.128.43.1
        Firewall router  128.128.43.2
        Server           128.128.43.3

    Behind the Firewall router is a NAT network using 192.168.100.n/24. This question is regarding iptables running on the Server. I wanted to allow access to port 8080 only from the NAT clients behind the Firewall router, so I used this rule:

        -A Firewall-1-INPUT -s 192.168.100.0/24 -m state --state NEW -m tcp -p tcp --dport 8080 -j ACCEPT

    This worked, but UNEXPECTEDLY ALLOWED GLOBAL ACCESS, which resulted in our JBOSS server getting compromised. I now know that the correct rule is to use the Firewall router's address instead of the internal network, but can anyone explain why the first rule allowed global access? I would have expected it to just fail. Full config, mostly lifted from a RedHat server:

        *filter
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        :Firewall-1-INPUT - [0:0]
        -A INPUT -j Firewall-1-INPUT
        -A FORWARD -j Firewall-1-INPUT
        -A Firewall-1-INPUT -i lo -j ACCEPT
        -A Firewall-1-INPUT -p icmp --icmp-type any -j ACCEPT
        -A Firewall-1-INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        -A Firewall-1-INPUT -m comment --comment "allow ssh from all"
        -A Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
        -A Firewall-1-INPUT -m comment --comment "allow https from all"
        -A Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT
        -A Firewall-1-INPUT -m comment --comment "allow JBOSS from Firewall"
        ### THIS RESULTED IN GLOBAL ACCESS TO PORT 8080 ###
        -A Firewall-1-INPUT -s 192.168.100.0/24 -m state --state NEW -m tcp -p tcp --dport 8080 -j ACCEPT
        ### THIS WORKED
        -A Firewall-1-INPUT -s 128.128.43.2 -m state --state NEW -m tcp -p tcp --dport 8080 -j ACCEPt
        ###
        -A Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
        COMMIT
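
    A hedged aside: since the Firewall router NATs its clients, their traffic reaches the Server from 128.128.43.2, which is consistent with the address-based rule being the one that works; that by itself does not explain the global access, so the next time a ruleset like this is loaded it may be worth inspecting the live table rather than the saved file, where the packet counters and rule numbers show which ACCEPT is actually matching:

        # show the chain as the kernel sees it, with hit counters and rule numbers
        iptables -L Firewall-1-INPUT -n -v --line-numbers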

    Read the article

  • My email server is being blocked by Yahoo: TS03 Message permanently deferred.

    - by bilygates
    Hello, my mail server has been getting the following error from Yahoo's mail servers for about a month:

        postfix/smtp[23791]: host g.mx.mail.yahoo.com[98.137.54.238] refused to talk to me: 421 4.7.1 [TS03] All messages from [my ip] will be permanently deferred; Retrying will NOT succeed. See http://postmaster.yahoo.com/421-ts03.html

    I have exchanged about 4 emails with Yahoo's support team. The first three seemed like automated messages, and the fourth told me that there is nothing they can do, but that if I change my policies I can send them another email in 6 months. They also told me:

        However, based on the information you have provided us, we cannot systematically deliver your email to the Inbox at this time. We suggest that you ask your users to set up a filter in Yahoo! Mail to ensure that they get your email messages in their Inbox.

    The problem is that my email doesn't even get to their Spam folder; the server won't allow any connections. I have never sent spam messages, not even newsletters. I only send emails to new users so they can activate their accounts. I've also implemented DKIM and told Yahoo about this. I have checked my configuration with http://www.myiptest.com/staticpages/index.php/DomainKeys-DKIM-SPF-Validator-test and it reports that both SPF and DKIM are set up correctly. What should I do? Basically, I'm losing new users every day. Any help will be appreciated. P.S.: I apologize if this particular question has already been asked; I searched for it but didn't find it.

    Read the article

  • Windows 7 machine, can't connect remotely until after ping

    - by rjohnston
    I have a Windows 7 (Home Premium) machine that doubles as a media centre and Subversion server. There are a couple of problems with this setup when connecting to the server from an XP (SP3) machine.

    Firstly, the machine won't respond to its machine name until after its IP address has been pinged. Here's an example:

        Microsoft Windows XP [Version 5.1.2600]
        (C) Copyright 1985-2001 Microsoft Corp.

        C:\Documents and Settings\Rob>ping damascus
        Ping request could not find host damascus. Please check the name and try again.

        C:\Documents and Settings\Rob>ping 192.168.1.17
        Pinging 192.168.1.17 with 32 bytes of data:
        Reply from 192.168.1.17: bytes=32 time=2ms TTL=128
        ...
        Ping statistics for 192.168.1.17:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 1ms, Maximum = 2ms, Average = 1ms

        C:\Documents and Settings\Rob>ping damascus
        Pinging damascus [192.168.1.17] with 32 bytes of data:
        Reply from 192.168.1.17: bytes=32 time<1ms TTL=128
        ....
        Ping statistics for 192.168.1.17:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 0ms, Maximum = 1ms, Average = 0ms

        C:\Documents and Settings\Rob>

    Likewise, Subversion commands with either the machine name or IP address will fail until the machine's IP address is pinged. Occasionally, the machine won't respond to pings on its IP address at all; it just comes back with "Request timed out". The svn server is VisualSVN, if that helps... Any ideas?
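
    The symptom (the name only resolving after its IP has been pinged) may point at NetBIOS broadcast name resolution rather than DNS doing the work here. A blunt but reliable workaround sketch, until the underlying resolution issue is found, is a static entry on the XP machine; the IP below is the one from the transcript:

        rem run from an administrator command prompt on the XP machine
        echo 192.168.1.17    damascus>>%SystemRoot%\system32\drivers\etc\hosts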

    Read the article

  • What is the difference between Startup programs in windows and the same programs being started manually

    - by sup
    I am no Windows guy, but I am trying to get seamless integration of Windows programs, through a VirtualBox Windows guest, onto my Ubuntu machine. I more or less followed this tutorial: https://nowhere.dk/articles/running-windows-applications-natively-with-seamlessrdp. Basically I start up Windows in VirtualBox and then try to launch an application (on the Ubuntu host) like this:

        rdesktop -A -s "c:\Program Files\ThinLinc\WTSTools\seamlessrdpshell.exe notepad.exe" 192.168.123.103:3389 -u user -p password

    That just gives me the full Windows desktop, which I do not want. However, when I run (on the Windows guest):

        "c:\Program Files\ThinLinc\WTSTools\seamlessrdpshell.exe" "notepad"

    the command works and I get just the window I want. I got it to use the host network adapter, so I can surf from "inside". So I thought I would put this command into the Startup folder of the Windows machine and everything would be fine. But it says "Unable to set up the virtual channel". (By googling, I nailed it down to this file: https://sourceforge.net/p/rdesktop/code/1686/tree/seamlessrdp/trunk/ServerExe/vchannel.c - the warning is triggered (by main.c in the same directory) when vchannel_open() returns a value the if condition treats as true.) I have no idea why it works when I launch this command manually via a bat file but not when I put it in Startup programs. Any ideas?

    Read the article

  • Failing to send email on Ubuntu box (Karmic Koala)

    - by user25312
    I have a home network with an XP box and an Ubuntu (9.10) box. I have created a small test PHP script to check that I can send emails from my machine. I am using the same php.ini file with the same [mail] settings, yet the script works on my XP box and fails on the Ubuntu box. I have included the script here; hopefully someone can spot what's going wrong:

        <?php
        // send e-mail to ...
        $to = "[email protected]";

        // Your subject
        $subject = "Test Email";

        // From
        //$header = "from: test script";
        $header = 'From: host-email-username@hostdomain_here' . "\r\n";

        // Your message
        $message  = "Hello \r\n";
        $message .= "This is test\r\n";
        $message .= "Test again ";

        // send email
        $sentmail = mail($to, $subject, $message, $header);

        // if your email succesfully sent
        if ($sentmail) {
            echo "Email Has Been Sent .";
        } else {
            echo "Cannot Send Email ";
        }
        ?>

    The email addresses have been redacted for obvious reasons, but otherwise the script is exactly the one I tested.

    [Edit] I have since installed the mailutils package on my Ubuntu box; now the script runs and returns "Email Has Been Sent". However, the mail never arrives in my inbox (I've waited 1 day so far). Is there something else I need to be looking at?
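
    On Linux, PHP's mail() hands the message to the local sendmail binary (sendmail_path in php.ini), so delivery depends entirely on a working MTA behind it; installing mailutils typically pulls one in, which would explain why the call now returns success even though nothing arrives. A hedged sketch of where to look next, assuming a Debian/Ubuntu layout:

        # was the message handed off, queued, or bounced?
        tail -n 50 /var/log/mail.log

        # is anything stuck in the local queue?
        mailq

        # which MTA is actually installed behind /usr/sbin/sendmail?
        ls -l /usr/sbin/sendmail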

    Read the article

  • Use shared 404 page for virtual hosts in Nginx

    - by Choy
    I'd like to have a shared 404 page to use across my virtual hosts. The following is my setup. Two sites, each with their own config file in /sites-available/ and /sites-enabled/:

        www.foo.com
        bar.foo.com

    The www directory is set up as:

        www/
        foo.com/
        foo.com/index.html
        bar.foo.com/
        bar.foo.com/index.html
        shared/
        shared/404.html

    Both config files in /sites-available are the same except for the root and server name:

        root /var/www/bar.foo.com;
        index index.html index.htm index.php;
        server_name bar.foo.com;

        location / {
            try_files $uri $uri/ /index.php;
        }

        error_page 404 /404.html;
        location = /404.html {
            root /var/www/shared;
        }

    I've tried the above code and also tried setting error_page 404 /var/www/shared/404.html (without the following location block). I've also double checked to make sure my permissions are set to 775 for all folders and files in www. When I try to access a non-existent page, Nginx serves the respective index.php of the virtual host I'm trying to access. Can anyone point out what I'm doing wrong? Thanks!
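
    A hedged reading of the config above: with try_files $uri $uri/ /index.php;, a request for a missing file is never answered with a 404 by nginx itself - it is internally rewritten to /index.php, which exists, so error_page 404 never fires; that matches the behaviour described. Two common ways around this, sketched under the assumption that PHP is served via FastCGI:

        # option 1: for purely static locations, let nginx produce the 404 itself
        location / {
            try_files $uri $uri/ =404;
        }

        # option 2: keep the /index.php fallback, but have nginx replace the
        # application's own 404 response with the shared page
        location ~ \.php$ {
            fastcgi_intercept_errors on;
            # ... existing fastcgi_pass / include fastcgi_params lines ...
        }
        error_page 404 /404.html;
        location = /404.html {
            root /var/www/shared;
            internal;
        }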

    Read the article

  • Dedicated server with a lot of storage and good support - and cost-effective

    - by Martin Burger
    Hello, I am from Germany and looking for a dedicated server located in the US with a lot of storage: 750-1500 GB. CPU speed and amount of memory are secondary; the server will host large amounts of media files via HTTP and FTP - the basic task is to help people exchange media files. In Germany there are some good offers, like "Root Server EQ6" at www.hetzner.de. For example, that company provides support of high quality, and their plans are very cost-effective. The plan mentioned above costs about $90 per month and provides two 1500 GB SATA-II HDDs (software RAID 1). In the US, I found (amongst others) Go Daddy and Rackspace. Go Daddy offers some "Storage Monster" plans that include 2 x 1,000 GB hard drives for about $180 per month - already twice as much as Hetzner above. However, I found some blog and forum entries that complain about the support provided by Go Daddy. Rackspace seems to provide decent support, but they are very "upscale": their dedicated servers are customizable and start at $419 - thus, about 4.5 times as much as Hetzner. Can anybody recommend a solution / plan that is comparable to the one by Hetzner? Or are prices for dedicated servers in general much higher in the US than in Germany? Regards, Martin

    Read the article

  • cURL hangs trying to upload file from stdin

    - by SidneySM
    I'm trying to PUT a file with cURL. This hangs:

        curl -vvv --digest -u user -T - https://example.com/file.txt < file

    This does not:

        curl -vvv --digest -u user -T file https://example.com/file.txt

    What's going on?

        * About to connect() to example.com port 443 (#0)
        *   Trying 0.0.0.0... connected
        * Connected to example.com (0.0.0.0) port 443 (#0)
        * SSLv3, TLS handshake, Client hello (1):
        * SSLv3, TLS handshake, Server hello (2):
        * SSLv3, TLS handshake, CERT (11):
        * SSLv3, TLS handshake, Server key exchange (12):
        * SSLv3, TLS handshake, Server finished (14):
        * SSLv3, TLS handshake, Client key exchange (16):
        * SSLv3, TLS change cipher, Client hello (1):
        * SSLv3, TLS handshake, Finished (20):
        * SSLv3, TLS change cipher, Client hello (1):
        * SSLv3, TLS handshake, Finished (20):
        * SSL connection using DHE-RSA-AES256-SHA
        * Server certificate:
        *   subject: serialNumber=jJakwdOewDicmqzIorLkKSiwuqfnzxF/, C=US, O=*.example.com, OU=GT01234567, OU=See www.example.com/resources/cps (c)10, OU=Domain Control Validated - ExampleSSL(R), CN=*.example.com
        *   start date: 2010-01-26 07:06:33 GMT
        *   expire date: 2011-01-28 11:22:07 GMT
        *   common name: *.example.com (matched)
        *   issuer: C=US, O=Equifax, OU=Equifax Secure Certificate Authority
        * SSL certificate verify ok.
        * Server auth using Digest with user 'user'
        > PUT /file.txt HTTP/1.1
        > User-Agent: curl/7.19.4 (universal-apple-darwin10.0) libcurl/7.19.4 OpenSSL/0.9.8l zlib/1.2.3
        > Host: example.com
        > Accept: */*
        > Transfer-Encoding: chunked
        > Expect: 100-continue
        >
        < HTTP/1.1 100 Continue
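
    A hedged reading of the transcript: with -T - curl cannot know the size of stdin, so it switches to Transfer-Encoding: chunked (visible in the request headers above), and a server that accepts the 100-continue handshake but never processes a chunked PUT body would hang in exactly this way. One workaround sketch, assuming the chunked body is indeed the problem, is to spool stdin to a temporary file first so curl can send a normal Content-Length:

        tmp=$(mktemp)
        cat > "$tmp"
        curl -vvv --digest -u user -T "$tmp" https://example.com/file.txt
        rm -f "$tmp"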

    Read the article

  • fedora apache/nginx pylons

    - by microchasm
    I'm trying to wrap my head around Pylons and how it works. So far... it's been confusing... I'm using EC2 with Fedora 8. Everything is working so far (i.e. I have Pylons/Python et al. installed, and after creating a test app and running paster serve I can access the default page via my domain name). As the Pylons docs explain, and as I understand it, the built-in paster serve server is not suited for a production environment. What I am not clear on, then, is what to do next... It seems like nginx is a good option, but I am more familiar with Apache (like .0002%). I plan on having virtual hosts (which nginx says it can accommodate). However, I am totally unclear on how the big picture is supposed to work.

    In order to serve an app, does paster serve need to be running? Does nginx/Apache then basically just act as a proxy to shuttle connections to the paster server? How do I start it so it doesn't terminate after closing the SSH connection?

    If running multiple apps, what do I set as the host/port in development.ini to differentiate the apps? Or, if this is not the right way, how do I differentiate between apps?

    I am more familiar with MySQL, but willing to negotiate PostgreSQL if it's a better fit. Is it? Is virtualenv a prerequisite to running multiple apps on the same machine?

    Thanks in advance for any tips.
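
    On the big-picture question: in the usual nginx-fronted setup the Pylons app keeps running under paster (or another WSGI server) on a local port, and nginx proxies requests to it - one app per port, one server block per host name. A minimal sketch, assuming the app listens on 127.0.0.1:5000 as set in its .ini file:

        # nginx server block - sketch
        server {
            listen 80;
            server_name app1.example.com;

            location / {
                proxy_pass http://127.0.0.1:5000;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    To keep paster alive after the SSH session closes, it can be started detached, e.g. paster serve production.ini --daemon (or run under a supervisor of your choice); a second app would simply use a different port in its .ini and get its own server block.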

    Read the article

  • Trying to use a SmartHost with my Exchange 2010 server

    - by Pure.Krome
    Hi folks, I'm trying to use a SmartHost with my Exchange 2010 server. SmartHost details:

        Secure SMTPS: securemail.internode.on.net 465   <-- note: that's port 465
        Configure your existing SMTP settings (in your email program) to:
        - use authentication (enter your Internode username and password, enter your username as [email protected])
        - enable SSL for sending email (SMTPS)

    So I've added the smart host details to my Org Config -> Hub Transport. I then used PowerShell to set the port:

        Set-SendConnector "securemail.internode.on.net" -port 465

    I've then added my username/password (as suggested above) to the SmartHost as Basic Authentication (with no TLS). Then I try sending an email and I get the following error message:

        451 4.4.0 Primary target IP address responded with: "421 4.4.2 Connection dropped due to ConnectionReset."

    So I'm not sure how to continue. I also tried ticking the TLS box, but I still get the same error. If I don't use SMTPS (secure SMTP, on port 465) and instead use basic SMTP on port 25 with no authentication, email gets sent. Any ideas?

    EDIT: By the way, I can telnet to that server on port 465 from my mail server... just to make sure I'm not getting firewalled, etc.
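
    One hedged observation: port 465 is the implicit-SSL (SMTPS) port, where the client is expected to start TLS immediately, while an Exchange 2010 Send Connector speaks plain SMTP and only upgrades the session via STARTTLS - so a connection reset right after connecting on 465 is the symptom that mismatch would produce. The usual workaround, assuming the provider also offers a STARTTLS submission endpoint (port 587 is the convention, but that is an assumption to confirm with Internode), looks like this sketch:

        Set-SendConnector "securemail.internode.on.net" -Port 587
        Set-SendConnector "securemail.internode.on.net" -RequireTLS $true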

    Read the article

  • IIS7 binding to subdomain causing authentication errors (TFS 2010)

    - by Tommy Jakobsen
    I'm trying to bind an IIS web site (Team Foundation Server 2010) to a subdomain, which is causing authentication errors. First I'll explain what I've done to set it up. This is the first time I've done this, so please correct me if I'm wrong. The web server is a stand-alone Windows Server 2008 R2 x64, running IIS 7.5 with .NET Framework 4. I have the following A records pointing to my server:

        server.mydomain.com
        *.server.mydomain.com

    So all subdomains of server.mydomain.com point to the server. In IIS 7.5 I have a web site (TFS 2010) on port 8080, with a virtual directory (named tfs) that is using Windows Authentication. I have one binding on the web site, pointing to all unassigned IP addresses, port 8080, with a host name of tfs.server.mydomain.com.

    Now, shouldn't I be able to access the virtual directory through:

        http://tfs.server.mydomain.com/tfs

    That is not working. However, I can access it through:

        http://tfs.server.mydomain.com:8080/tfs

    But it won't let me authenticate using a Windows account (Server\Username) - a Windows account that I can authenticate with when accessing the site through http://localhost:8080/tfs. What am I missing here?
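
    On the first point: a URL without an explicit port goes to port 80, and the site only has an 8080 binding, so http://tfs.server.mydomain.com/tfs cannot work until a port-80 binding for that host name is added as well. A sketch using appcmd (the site name is a placeholder - use whatever the site is called in IIS Manager):

        %windir%\system32\inetsrv\appcmd.exe set site "Team Foundation Server" /+bindings.[protocol='http',bindingInformation='*:80:tfs.server.mydomain.com']

    As for the authentication failure: if the failing test is run on the server itself, the Windows loopback check on custom host names is a frequent cause and worth ruling out; if it fails from other machines too, that would need separate investigation.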

    Read the article

  • Updating modules on VPS hosted under OpenVZ

    - by tertle
    Been trying to install OpenVPN on a VPS but come into a few problems when trying to start the openvpn server: Service deferred error: IPTablesServiceBase: failed to run iptables-restore [status=1]: ['FATAL: Could not load /lib/modules/2.6.18-028stab070.14/modules.dep: No such file or directory', 'FATAL: Could not load /lib/modules/2.6.18-028stab070.14/modules.dep: No such file or directory', 'iptables-restore: line 46 failed']: internet/base:1175,internet/base:752,internet/process:45,internet/process:306,internet/_baseprocess:48,internet/process:775,internet/_baseprocess:60,svc/pp:116,svc/svcnotify:26,internet/defer:238,internet/defer:307,internet/defer:323,sagent/ipts:105,sagent/ipts:39,util/error:52,util/error:32 service failed to start due to unresolved dependencies: set(['user', 'iptables_openvpn']) service failed to start due to unresolved dependencies: set(['user', 'iptables_openvpn']) service failed to start due to unresolved dependencies: set(['iptables_openvpn']) Anyway so after a bit of playing around and some advice, I found that the Linux kernel and modules don't match on my server. uname -r returns 2.6.18-028stab070.14 and ls /lib/modules returns 2.6.18-028stab070.7 The server is running OpenVZ and my container uses Ubuntu 9.10. So my question is, is it possible for me to update my modules on a VPS and if so how would I do this, or is this something I'll need to try get my host to do? Thanks in advance.

    Read the article
