Search Results

Search found 14154 results on 567 pages for 'sp send dbmail'.

  • Bridging my laptop's wireless and wired adaptors

    - by stacey.richards
    I would like to connect a desktop computer that has no wireless adapter to my wireless network. I could just run a network cable from my ADSL/wireless router to the desktop computer, but sometimes this is not practical. What I would really like to do is bridge my laptop's wireless and wired adapters in such a way that I can run a network cable from my laptop to a switch, and another network cable from the switch to a desktop computer, so that the desktop computer can access the Internet through my ADSL/wireless router via my laptop:

        +--------------------+
        |ADSL/wireless router|
        +--------------------+
                  |
        +-------------------------+
        |laptop's wireless adaptor|
        |                         |
        |laptop's wired adaptor   |
        +-------------------------+
                  |
               +------+
               |switch|
               +------+
                  |
        +-----------------------+
        |desktop's wired adapter|
        +-----------------------+

    A bit of Googling suggests that I can do this by bridging my laptop's wireless and wired adapters. In Windows XP's Network Connections I select both the Local Area Connection and the Wireless Network Connection, right-click and select Bridge Connections. From what I gather, this (layer 2?) bridge examines the MAC address of traffic coming from the wireless network and passes it through to the wired network if it suspects that a network adapter with that MAC address may be on the wired side, and vice versa. If this is the case, I would assume that when the desktop computer attempts to get an IP address from a DHCP server (which is running on the ADSL/wireless router), it would send a DHCP broadcast packet which would pass through the laptop's bridge to the router, and the reply would return through the laptop's bridge back to the desktop. This doesn't happen. With some more Googling I find some instructions for how this can be done with Linux. I reboot into Ubuntu 9.10 and type the following:

        sudo apt-get install bridge-utils
        sudo brctl addbr br0
        sudo brctl addif br0 wlan0
        sudo brctl addif br0 eth0
        sudo ifconfig wlan0 0.0.0.0
        sudo ifconfig eth0 0.0.0.0

    Once again, the desktop cannot reach the ADSL/wireless router. I suspect that I'm missing some simple but important step. Can anyone shed some light on this for me?
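
    A minimal sketch of the usually missing steps, assuming Ubuntu with bridge-utils (bringing the bridge interface itself up and giving it an address is easy to overlook):

        sudo ifconfig wlan0 0.0.0.0
        sudo ifconfig eth0 0.0.0.0
        sudo brctl addbr br0
        sudo brctl addif br0 wlan0
        sudo brctl addif br0 eth0
        sudo ifconfig br0 up     # the bridge is a new interface and starts down
        sudo dhclient br0        # optional: let the laptop get its own IP via the bridge

    Even with that, many wireless drivers silently drop bridged frames whose source MAC is not the card's own; where the driver supports it, "iw dev wlan0 set 4addr on" (4-address/WDS mode) works around this, and otherwise routing/NAT on the laptop is the usual fallback.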

  • I think installing PostFix solved my problem, but it seemed *too* easy

    - by Joel Marcey
    Hi, this is a follow-up to a Server Fault post I made a while ago: http://serverfault.com/questions/21633/how-do-i-target-a-different-mail-server-depending-on-domain-with-exim (more context here too: http://forum.slicehost.com/comments.php?DiscussionID=3806 ). I have a slice/VPS at Slicehost. I basically decided to scrap Exim (i.e., purge it) and start anew with my email infrastructure. In case you didn't read any of the above threads: my goal was a send-only mail infrastructure that relays all outgoing email to Google Apps, where email from domain1 (a WordPress installation) shows as coming from domain1.com and email from domain2 (a normal website) shows as coming from domain2.com. So I decided to give Postfix a try. I literally followed the surprisingly simple instructions here: http://sudhanshuraheja.com/2009/02/slicehost-setup-outgoing-mail-google-apps-postfix/ And voila, all seems to be working as I expected. My email tests show email coming from the proper locations (either domain1 or domain2, depending on where the emails were sent from). But this all seemed too simple to me. So simple, in fact, that I feel something is amiss. When I installed Postfix according to the instructions in the post above and it worked, I was surprised that I didn't have to specify an SMTP server, a port number, any authentication credentials, etc. My slice is set up such that I have MX records for Google Apps (e.g., ASPMX.L.GOOGLE.COM.) in my DNS settings, but I am not sure if that is why it is working. My email infrastructure knowledge is admittedly limited, but with this setup am I exposed to:

    - Spammers using my email infrastructure?
    - My emails going to people as spam?
    - Something else sinister?

    I have actually stopped running Postfix until I understand this better. Thanks!
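
    A quick way to check the open-relay worry without changing anything (postconf and these parameter names are standard Postfix):

        postconf inet_interfaces mynetworks smtpd_recipient_restrictions
        # inet_interfaces = loopback-only   -> only local processes can submit mail
        # mynetworks = 127.0.0.0/8 (or so)  -> nothing remote gets relayed

    As for why no SMTP server or credentials were needed: Postfix delivers direct-to-MX by default, looking up each recipient domain's MX records itself, so a send-only box works without any relayhost. The MX records on the slice's own domain only affect inbound mail, not this.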

  • Linux - real-world hardware RAID controller tuning (scsi and cciss)

    - by ewwhite
    Most of the Linux systems I manage feature hardware RAID controllers (mostly HP Smart Array). They're all running RHEL or CentOS. I'm looking for real-world tunables to help optimize performance for setups that incorporate hardware RAID controllers with SAS disks (Smart Array, Perc, LSI, etc.) and battery-backed or flash-backed cache. Assume RAID 1+0 and multiple spindles (4+ disks). I spend a considerable amount of time tuning Linux network settings for low-latency and financial trading applications, but many of those options are well documented (changing send/receive buffers, modifying TCP window settings, etc.). What are engineers doing on the storage side? Historically, I've made changes to the I/O scheduling elevator, recently opting for the deadline and noop schedulers to improve performance within my applications. As RHEL versions have progressed, I've also noticed that the compiled-in defaults for SCSI and CCISS block devices have changed as well. This has had an impact on the recommended storage subsystem settings over time. However, it's been a while since I've seen any clear recommendations, and I know that the OS defaults aren't optimal. For example, it seems that the default read-ahead buffer of 128 KB is extremely small for a deployment on server-class hardware. The following articles explore the performance impact of changing read-ahead cache and nr_requests values on the block queues:

    - http://zackreed.me/articles/54-hp-smart-array-p410-controller-tuning
    - http://www.overclock.net/t/515068/tuning-a-hp-smart-array-p400-with-linux-why-tuning-really-matters
    - http://yoshinorimatsunobu.blogspot.com/2009/04/linux-io-scheduler-queue-size-and.html

    For example, these are suggested changes for an HP Smart Array RAID controller:

        echo "noop" > /sys/block/cciss\!c0d0/queue/scheduler
        blockdev --setra 65536 /dev/cciss/c0d0
        echo 512 > /sys/block/cciss\!c0d0/queue/nr_requests
        echo 2048 > /sys/block/cciss\!c0d0/queue/read_ahead_kb

    What else can be reliably tuned to improve storage performance? I'm specifically looking for sysctl and sysfs options in production scenarios.
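
    To keep settings like these across reboots, one low-tech option is a boot script that loops over the controller's block queues; a sketch (the glob and values are illustrative, not recommendations):

        #!/bin/bash
        # apply block-queue tunables to every Smart Array (cciss) volume
        for q in /sys/block/cciss!*/queue; do
            echo noop > "$q/scheduler"
            echo 512  > "$q/nr_requests"
            echo 2048 > "$q/read_ahead_kb"
        done

    On newer RHEL releases a udev rule or a tuned profile achieves the same thing persistently; the loop above is just the lowest common denominator.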

  • Backing up SQL NetApp Snapshots using TSM

    - by WerkkreW
    In our environment we have a 3-node SQL 2005 cluster which is on NetApp storage. We are currently using SMSQL (NetApp SnapManager for SQL) to take snapshot backups of the data. This works great, but due to some audit requirements we are also forced to maintain some copies on tape. We have used NDMP in other places across the enterprise, but we do not want to use it in this specific instance. Basically what I need to do is get the most recent snapshot copy of the databases onto tape, via Tivoli Storage Manager (TSM). What I have done is obtained a basic Windows Server 2003 VM with SnapDrive installed, which is SAN-attached and zoned to the NetApp, and I have written a batch file to do the following:

    1. Mount the latest __RECENT snapshot LUN to the host, using a specific drive letter
    2. Perform a TSM-based incremental backup
    3. Dismount the LUN

    This seems to work fine, except that sometimes the LUNs do not mount due to some sort of timeout. Also, due to my limited knowledge of Windows batch scripting, I have no way to monitor the success or failure of these backups, since I do not know how to send a valid return code back to the TSM scheduling service. Is there a more efficient/elegant way to accomplish this without NDMP?

  • Postfix misconfigured? 550 Sender rejected by receiving server

    - by wnstnsmth
    We use Postfix on our CentOS 6 machine with the following configuration. We use PHP's mail() function to send rudimentary password-reset emails, but there is a problem. As you will see, mydomain and myhostname are correctly set, as far as I know.

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        data_directory = /var/lib/postfix
        debug_peer_level = 2
        html_directory = no
        inet_interfaces = localhost
        inet_protocols = all
        mail_owner = postfix
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        mydestination = $myhostname, localhost.$mydomain, localhost
        mydomain = ***.ch
        myhostname = test.***.ch
        newaliases_path = /usr/bin/newaliases.postfix
        queue_directory = /var/spool/postfix
        readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES
        sample_directory = /usr/share/doc/postfix-2.6.6/samples
        sendmail_path = /usr/sbin/sendmail.postfix
        setgid_group = postdrop
        unknown_local_recipient_reject_code = 550

    This is what appears in Postfix's /var/log/maillog upon sending an email to ***.***@***.ch, with ***.ch being the same domain our sending server test.***.ch is on:

        Dec 13 16:55:06 R12X0210 postfix/pickup[6831]: E6D6311406AB: uid=48 from=<apache>
        Dec 13 16:55:06 R12X0210 postfix/cleanup[6839]: E6D6311406AB: message-id=<20121213155506.E6D6311406AB@test.***.ch>
        Dec 13 16:55:07 R12X0210 postfix/qmgr[6832]: E6D6311406AB: from=<apache@test.***.ch>, size=1276, nrcpt=1 (queue active)
        Dec 13 16:55:52 R12X0210 postfix/smtp[6841]: E6D6311406AB: to=<***.***@***.ch>, relay=mail.***.ch[**.**.249.3]:25, delay=46, delays=0.18/0/21/24, dsn=5.0.0, status=bounced (host mail.***.ch[**.**.249.3] said: 550 Sender Rejected (in reply to RCPT TO command))
        Dec 13 16:55:52 R12X0210 postfix/cleanup[6839]: 8562C11406AC: message-id=<20121213155552.8562C11406AC@test.***.ch>
        Dec 13 16:55:52 R12X0210 postfix/bounce[6848]: E6D6311406AB: sender non-delivery notification: 8562C11406AC
        Dec 13 16:55:52 R12X0210 postfix/qmgr[6832]: 8562C11406AC: from=<>, size=3065, nrcpt=1 (queue active)
        Dec 13 16:55:52 R12X0210 postfix/qmgr[6832]: E6D6311406AB: removed
        Dec 13 16:55:52 R12X0210 postfix/local[6850]: 8562C11406AC: to=<root@test.***.ch>, orig_to=<apache@test.***.ch>, relay=local, delay=0.13, delays=0.07/0/0/0.05, dsn=2.0.0, status=sent (delivered to mailbox)
        Dec 13 16:55:52 R12X0210 postfix/qmgr[6832]: 8562C11406AC: removed

    So the receiving server rejects the sender (line 4 of the log output). We have tested it with one other recipient and it worked, so this problem might be completely unrelated to our settings and instead related to the recipient. Still, with this question I want to make sure we're not making an obvious misconfiguration on our side.
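
    One way to separate the sender from the rest of the setup is to replay the same envelope by hand; a sketch using swaks, if it is available (the addresses are the redacted ones from the log):

        # does mail.***.ch reject this envelope sender on its own?
        swaks --server mail.***.ch --from apache@test.***.ch --to ***.***@***.ch --quit-after RCPT

    If this single command reproduces the 550, the rejection is a policy decision on mail.***.ch (sender verification, missing reverse DNS or SPF for test.***.ch, a blacklist), not something in the main.cf shown above.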

  • Moving a Lotus database causes incoming mail failure

    - by Logman
    Hey all, I moved a couple of Lotus Notes databases to another HD (we ran out of space) by using a directory link/pointer on the Domino server: a text file with the .DIR extension containing the desired new path, etc. That works fine. I open a Lotus Notes client, use the ID and then open the database; the directory link works, and the database opens from its new location with no problems. However, we found out that we couldn't FORWARD any mail. We did the following to make it work:

    1. Go to menu FILE | MOBILE | EDIT CURRENT LOCATION
    2. Go to the MAIL tab and enter the correct path for "Mail File:"; for example, it should read "mail\morespace\flabor" and not "mail\flabor" ("morespace" = the directory link/pointer)

    We can forward and reply to emails fine after that fix. The remaining problem is that the user receives no incoming email. Domino is still trying to deliver the user's incoming mail to the old database location "mail\flabor" instead of "mail\morespace\flabor", giving a delivery error saying the user does not exist. Is this a cache problem? We have reset the server ("Q" at the console prompt), though we have not completely shut it down. Thanks, Frank

  • Some process does ICMP port scans on my OS X box and I am afraid my Mac got a virus

    - by Jamgold
    I noticed that my 10.6.6 box has some process sending out ICMP messages to "random" hosts, which concerns me a lot. When running tcpdump icmp I see a lot of the following:

        15:41:14.738328 IP macpro > bzq-109-66-184-49.red.bezeqint.net: ICMP macpro udp port websm unreachable, length 36
        15:41:15.110381 IP macpro > 99-110-211-191.lightspeed.sntcca.sbcglobal.net: ICMP macpro udp port 54045 unreachable, length 36
        15:41:23.458831 IP macpro > 188.122.242.115: ICMP macpro udp port websm unreachable, length 36
        15:41:23.638731 IP macpro > 61.85-200-21.bkkb.no: ICMP macpro udp port websm unreachable, length 36
        15:41:27.329981 IP macpro > c-98-234-88-192.hsd1.ca.comcast.net: ICMP macpro udp port 54045 unreachable, length 36
        15:41:29.349586 IP macpro > c-98-234-88-192.hsd1.ca.comcast.net: ICMP macpro udp port 54045 unreachable, length 36

    I got suspicious when my router notified me about a lot of ICMP messages that don't get a response:

        [INFO] Mon Jan 10 16:31:47 2011 Blocked outgoing ICMP packet (ICMP type 3) from 192.168.1.189 to 212.25.57.90

    Does anyone know how to trace which process (or, worse, kernel module) might be responsible for this? I rebooted and logged in with a virgin user account, and tcpdump showed the same results. Any dtrace magic welcome. Thanks in advance
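
    Two hedged starting points, both with tools that ship with 10.6. Note that "udp port ... unreachable" ICMP replies are generated by the kernel in response to inbound UDP packets, so the first question is what is being received, not what is sending:

        # which processes hold UDP sockets (is anything on port 54045 / websm)?
        sudo lsof -nP -i UDP

        # dtrace: count UDP/socket sends per process, to catch an active sender
        sudo dtrace -n 'syscall::sendto:entry { @[execname, pid] = count(); }'

    If nothing local listens on the probed ports, the pattern looks more like inbound scanning or P2P backscatter aimed at the Mac than like malware running on it.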

  • 530 5.7.1 Client was not authenticated (Exchange 2010) for some computers within the mask

    - by user1636309
    We have a classic "Client was not authenticated" problem, but with a specific twist:

    - We have an Exchange 2010 cluster, let's say EX01 and EX02; the connection is always to smtp.acme.com, which is switched through a load balancer.
    - We have an application server, call it APP01.
    - There are clients connected to APP01.
    - There is a need for anonymous mail relay from both the clients and APP01.

    The Anonymous Users setting of the Exchange is DISABLED, but the specific computers - APP01 and clients by the mask, let's say, 192.168.2.* - are enabled. For internal relay, a "Send Connector" is created, and the above IP addresses are added for the connector to allow computers, servers, or any other device such as a copy machine to use the Exchange server to relay email to recipients. The problem is that the relay works for APP01 and some clients, but not others (we get "Client not Authenticated") - all inside the same network and the same mask. This is basically what we do to test it outside of our application: http://smtp25.blogspot.sk/2009/04/530-571-client-was-not-authenticated.html So, I am looking for ideas: What can be the reason for such strange behaviour? Where can I see a trace of what's going on on the Exchange side?
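
    A scripted version of that manual telnet test can show whether a failing client even reaches the same connector; a sketch (smtp.acme.com and the addresses are placeholders from the question, and a traditional netcat is assumed):

        { sleep 1; printf 'EHLO client.test\r\n'; sleep 1
          printf 'MAIL FROM:<app@acme.com>\r\n'; sleep 1
          printf 'RCPT TO:<someone@example.com>\r\n'; sleep 1
          printf 'QUIT\r\n'; } | nc smtp.acme.com 25

    Run it from one working and one failing client and compare the responses. A load balancer that SNATs some clients (so Exchange sees the balancer's address instead of 192.168.2.*) produces exactly this split behaviour; enabling protocol logging on the receive connectors shows which remote IP each session really arrived from.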

  • SQL Server Offsite Backups

    - by Eric Maibach
    We have about 1 TB of SQL Server databases, and these databases generate about 200 GB of data changes each day. Up to this point we have been doing weekly full backups, daily diff backups, and hourly transaction log backups. The full and diff backups are backed up to tape and taken offsite each day. We have been trying to move away from tapes, and our IT department purchased a Barracuda Backup device that backs up data and then sends it offsite using our internet connection. I have been trying to get this to work for our SQL Server backups and have run into a number of problems. I normally like to use SQL Server itself to perform backups instead of an agent, so that is what I tried first. However, the Barracuda device was not able to dedupe these files very well, so it ended up being too much data to send offsite and archive. I then tried installing the Barracuda agent and using it to back up the SQL Server databases. The problem I am having there is that on some of the database servers I also have files that need to be backed up, and I cannot find a way to create separate backup schedules for the file backups and the SQL Server backups. Barracuda only does full or transaction log backups. So if I want to do hourly transaction log backups, I end up doing a file system backup every hour (which is not good); if I only schedule the backups to run once a night, I either have to do a full backup every night or only do a transaction backup once a day. None of these scenarios are good options. My question is: how is everyone else getting their large SQL Server database backups offsite? Are you just using tape, or have you found an offsite backup device that works well? Is anybody else using Barracuda to back up their SQL Server databases? If so, how do you have it set up?

  • Append $myorigin to the localpart of 'from', append a different domain to the localpart of incomplete recipient addresses

    - by PJ P
    We have been having some trouble getting Postfix to behave in a very specific fashion, in which sender and recipient addresses with only a localpart (i.e., no @domain) are handled differently. We have a number of applications that use mailx to send messages, and we would like to know the username and hostname of the sending party. For example, if root sends an email from db001.company.local, we would like the email to be addressed from root@db001.company.local. This is accomplished by ensuring $myorigin is set to $myhostname. We also want unqualified recipients to have a different domain appended. For example, a message sent to 'dbadmin' should qualify to 'dbadmin@company.com'. However, by the nature of Postfix and $myorigin, an unqualified recipient would instead qualify to dbadmin@db001.company.local. We do not want to adjust the aliases on all servers to forward appropriately (in fact, not every possible recipient has an entry in /etc/passwd). All company employees have mailboxes on Exchange, which Postfix eventually routes to, and no local Linux/Unix mailboxes are used or accessed. We would love to tell our application owners to ensure they use a fully qualified email address for all recipients, but the powers that be dictate that any negligence must be accommodated. If we were to keep $myorigin equal to $myhostname, we could resolve this issue by having an entry such as the following in recipient_canonical_maps:

        @$myorigin    @company.com

    Unfortunately, however, we cannot use variables in these map files. We also want to avoid having to manually enter and maintain the actual hostname in recipient_canonical_maps for each server. Perhaps once our servers are 'puppetized' we can dynamically adjust this file, but we're not there yet. After an afternoon of fiddling I've decided to reach out. Any thoughts? Thanks in advance.
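
    Since the blocker is only that map files cannot contain $myorigin, one workaround is to generate the map at deploy or boot time; a minimal sketch (the map path is an assumption, matching a recipient_canonical_maps = hash:... setting):

        #!/bin/bash
        # regenerate the per-host recipient canonical map
        host=$(postconf -h myhostname)
        printf '@%s\t@company.com\n' "$host" > /etc/postfix/recipient_canonical
        postmap /etc/postfix/recipient_canonical
        postfix reload

    The same script can be dropped onto every server unchanged, which sidesteps maintaining per-host values until the Puppet setup arrives.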

  • Bash Parallelization of CPU-intensive processes

    - by ehsanul
    tee forwards its stdin to every single file specified, while pee does the same, but for pipes. These programs send every single line of their stdin to each and every file/pipe specified. However, I was looking for a way to "load balance" the stdin to different pipes, so that one line is sent to the first pipe, another line to the second, etc. It would also be nice if the stdout of the pipes were collected into one stream as well. The use case is simple parallelization of CPU-intensive processes that work on a line-by-line basis. I was doing a sed on a 14 GB file, and it could have run much faster if I could use multiple sed processes. The command was like this:

        pv infile | sed 's/something//' > outfile

    To parallelize, the best would be if GNU parallel supported this functionality, like so (I made up the --demux-stdin option):

        pv infile | parallel -u -j4 --demux-stdin "sed 's/something//'" > outfile

    However, there's no option like this, and parallel always uses its stdin as arguments for the command it invokes, like xargs. So I tried this, but it's hopelessly slow, and it's clear why:

        pv infile | parallel -u -j4 "echo {} | sed 's/something//'" > outfile

    I just wanted to know if there's any other way to do this (short of coding it up myself). If there were a "load-balancing" tee (let's call it lee), I could do this:

        pv infile | lee >(sed 's/something//' >> outfile) >(sed 's/something//' >> outfile) >(sed 's/something//' >> outfile) >(sed 's/something//' >> outfile)

    Not pretty, so I'd definitely prefer something like the made-up parallel version, but this would work too.
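
    For what it's worth, later versions of GNU parallel grew exactly this feature: --pipe (originally called --spreadstdin) splits stdin into chunks, load-balances them across jobs, and merges the jobs' stdout. A sketch (the block size is an arbitrary choice):

        pv infile | parallel --pipe --block 10M -j4 "sed 's/something//'" > outfile

    Chunks are split on newline boundaries by default, so line-oriented filters like sed stay correct; add -k if the output lines must keep the input order.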

  • Recover data from a physically damaged hard drive. What are my options?

    - by Michael Kniskern
    I was trying to replace the power supply in my desktop PC and ended up physically damaging the data connection from the hard drive to the motherboard. The plastic shelf for the copper prongs on the hard drive broke into the cable. Here is a picture of my handiwork: [image] I went to the Best Buy Geek Squad to discuss my options, and they said they would need to send it to the recovery center, where it could cost anywhere between $250 and $1600 USD to recover the data from the hard drive. Is this reasonable for data recovery from a physically damaged hard drive? Are there any other options I can explore? I am going to talk to Data Doctors to see what my options are.

    UPDATE: I took the HD to Data Doctors, and they told me that the SATA connector was broken, so they would need to replace the data connector and then copy the data to a brand new hard drive. So, with the initial analysis, the cost of replacement parts, and the data recovery fee, it came out to $865.00 USD. The technician specifically stated that if this were an older hard drive, they would just need to replace the data connector; but because there is information specific to the individual drive in its flash ROM, they need to transfer the data to a brand new hard drive.

  • SSH into remote server using Public-private keys

    - by maria
    Hi, I have recently set up SSH on two Linux machines (let's call them server-a and client-b). I generated an SSH key pair on client-b using ssh-keygen and can see both the public and private files in the .ssh dir; I named them 'example' and 'example.pub'. Then I added example.pub to server-a's authorized_keys file. When I try to ssh into server-a, it still requests password authentication, whereas I want a passwordless login (the private key on client-b is set up without a passphrase). When I ssh with '-v' I get the following output:

        debug1: Next authentication method: publickey
        debug1: Trying private key: /Users/abc/.ssh/identity
        debug1: Offering public key: /Users/abc/.ssh/id_rsa
        debug2: we sent a publickey packet, wait for reply
        debug1: Authentications that can continue: publickey,keyboard-interactive
        debug1: Offering public key: /Users/abc/.ssh/id_dsa
        debug2: we sent a publickey packet, wait for reply
        debug1: Authentications that can continue: publickey,keyboard-interactive
        debug2: we did not send a packet, disable method
        debug1: Next authentication method: keyboard-interactive
        debug2: userauth_kbdint
        debug2: we sent a keyboard-interactive packet, wait for reply
        debug2: input_userauth_info_req
        debug2: input_userauth_info_req: num_prompts 1
        Password:

    Please help.
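
    The debug output shows the client only offering the default identities (identity, id_rsa, id_dsa) and never ~/.ssh/example, so server-a never sees the installed key. A sketch of the usual fix (paths assume the key really is ~/.ssh/example on client-b):

        # one-off: point ssh at the right private key
        ssh -i ~/.ssh/example user@server-a

        # permanent: register the identity in ~/.ssh/config
        cat >> ~/.ssh/config <<'EOF'
        Host server-a
            IdentityFile ~/.ssh/example
        EOF
        chmod 600 ~/.ssh/example ~/.ssh/config

    If it still prompts for a password, check permissions on server-a as well: ~/.ssh must be 700 and ~/.ssh/authorized_keys 600, both owned by the login user.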

  • My system administrator set up 2 databases that sync, master-master. However, these two databases are out of sync

    - by Alex
    DB1 and DB2. I made changes to DB1, and they do not seem to be on DB2. When I do "SHOW SLAVE STATUS\G" on DB2, there seems to be an error:

        mysql> show slave status\G
        *************************** 1. row ***************************
        Slave_IO_State: Waiting for master to send event
        Master_Host:
        Master_User:
        Master_Port:
        Connect_Retry: 60
        Master_Log_File: mysql-bin.0005496
        Read_Master_Log_Pos: 5445649315
        Relay_Log_File: mysqld-relay-bin.0041705
        Relay_Log_Pos: 1624302119
        Relay_Master_Log_File: mysql-bin.0004461
        Slave_IO_Running: Yes
        Slave_SQL_Running: No
        Replicate_Do_DB:
        Replicate_Ignore_DB:
        Replicate_Do_Table:
        Replicate_Ignore_Table:
        Replicate_Wild_Do_Table:
        Replicate_Wild_Ignore_Table:
        Last_Errno: 1062
        Last_Error: Error 'Duplicate entry '4779' for key 1' on query. Default database: 'falc'. Query: 'INSERT INTO `log` (`anon_id`, `created_at`, `query`, `episode_url`, `detail_id`, `ip`) VALUES ('fdzn1d45kMavF4qbyePv', '2009-11-19 04:19:13', 'amazon', '', '', '130.126.40.57')'
        Skip_Counter: 0
        Exec_Master_Log_Pos: 162301982
        Relay_Log_Space: 136505187184
        Until_Condition: None
        Until_Log_File:
        Until_Log_Pos: 0
        Master_SSL_Allowed: No
        Master_SSL_CA_File:
        Master_SSL_CA_Path:
        Master_SSL_Cert:
        Master_SSL_Cipher:
        Master_SSL_Key:
        Seconds_Behind_Master: NULL
        1 row in set (0.00 sec)

    Then I did SHOW TABLES, and it seems DB2 is lacking a table that I created on DB1 - which means that, for some reason, DB2 stopped syncing with DB1. How can I simply get them back into full synchronization? All I want is for DB2 to be exactly the same as DB1!
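
    The usual quick fix for a single duplicate-key event is to skip it and let replication resume; a sketch, to be run on DB2 only after verifying that the offending row already exists there:

        mysql -e "STOP SLAVE; SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1; START SLAVE;"
        mysql -e "SHOW SLAVE STATUS\G" | grep -E 'Slave_(IO|SQL)_Running|Last_Error|Seconds_Behind'

    In a master-master pair, though, a 1062 usually means the two sides have already diverged (e.g. both inserted id 4779), so skipping events hides the symptom rather than curing it; re-seeding DB2 from a fresh dump of DB1 and restarting replication from that point is the safer route to "exactly the same as DB1".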

  • Set up Firefox to save .pages as .zip automatically

    - by Mike Dtrick
    What do I want to do? I would like Firefox to save files with the .pages extension as .zip files automatically.

    Scenario: You are browsing through your emails and you notice your friend has sent you an email with a file attached (a .pages file in this example). Unfortunately, you have a laptop that runs Windows. Your friend continues to send tons of emails with .pages files attached, and you are tired of manually saving the files as .zip files. Ultimately, you would like Firefox to be set up so that the download/file manager recognizes the .pages extension and automatically converts it to a .zip file.

    What have I done? I have saved files manually by selecting Save As > "All Files" and setting the extension to .zip. I've gone through Firefox and its documentation and have not found anything on how to accomplish this task.

    Why am I doing this? To save time (only a few seconds - not the main reason); to set up a simple solution that "converts" a file automatically, without having to recall the manual steps (for clients who aren't exactly tech-savvy); and so that clients with Windows can access the files.

    IMPORTANT NOTE: I am not trying to save the web page, but rather an Apple document, equivalent to a Microsoft Word file.

    UPDATE: The really easy method would be to save one file, right-click it, choose Properties, and open all .pages files with WinRAR (or any other program that extracts files from a compressed folder). For the sake of learning, I am going to "neglect" this method and continue to do some research on Firefox add-ons. I would still like Firefox or the download manager to do the bulk of the work in converting the file.

  • How to prevent remote hosts from delivering mail to Postfix with a spoofed From header?

    - by Hongli Lai
    I have a host, let's call it foo.com, on which I'm running Postfix on Debian. Postfix is currently configured to do these things:

    - All mail with @foo.com as recipient is handled by this Postfix server. It forwards all such mail to my Gmail account. The firewall thus allows port 25.
    - All mail with another domain as recipient is rejected.
    - SPF records have been set up for the foo.com domain, saying that foo.com is the sole origin of all mail from @foo.com.
    - Applications running on foo.com can connect to localhost:25 to deliver mail, with an @foo.com address as sender.

    However, I recently noticed that some spammers are able to send spam to me while passing the SPF checks. Upon further inspection, it looks like they connect to my Postfix server and then say:

        HELO bar.com
        MAIL FROM:<somebody@foo.com>    <---- this!
        RCPT TO:<me@foo.com>
        DATA
        From: "Buy Viagra" <somebody@foo.com>    <--- and this!
        ...

    How do I prevent this? I only want applications running on localhost to be able to say MAIL FROM with an @foo.com address. Here's my current config (main.cf): https://gist.github.com/1283647
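
    The standard Postfix answer is a sender-access check that trusted clients bypass; a sketch (the parameter and lookup types are real Postfix, while the file path and the 'somebody' localparts above are placeholders):

        postconf -e 'smtpd_sender_restrictions = permit_mynetworks, check_sender_access hash:/etc/postfix/sender_access'

        # any envelope sender under foo.com gets rejected...
        echo 'foo.com REJECT sender address spoofing' > /etc/postfix/sender_access
        postmap /etc/postfix/sender_access
        postfix reload

    ...except when permit_mynetworks matches first, which keeps localhost (part of $mynetworks by default) free to send as @foo.com. Note this restricts the envelope sender (MAIL FROM); policing the header From: line would need separate header_checks.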

  • Monitoring memcached with plink

    - by kojiro
    I need a telnet client that can take commands from a file or stdin, so I can do some quick-and-dirty automatic monitoring of memcached. I thought plink would be good for this, but it seems to be doing something beyond what I need. If I telnet into localhost 11211 and type stats, I get the memcached stats, like so:

        $ telnet localhost 11211
        Trying 127.0.0.1...
        Connected to localhost.
        Escape character is '^]'.
        stats
        STAT pid 25099
        STAT uptime 91182
        STAT time 1349191864
        STAT version 1.4.5
        STAT pointer_size 64
        STAT rusage_user 3.570000
        STAT rusage_system 2.740000
        STAT curr_connections 5
        STAT total_connections 23
        STAT connection_structures 11
        STAT cmd_get 0
        STAT cmd_set 0
        STAT cmd_flush 0
        STAT get_hits 0
        STAT get_misses 0
        STAT delete_misses 0
        STAT delete_hits 0
        STAT incr_misses 0
        STAT incr_hits 0
        STAT decr_misses 0
        STAT decr_hits 0
        STAT cas_misses 0
        STAT cas_hits 0
        STAT cas_badval 0
        STAT auth_cmds 0
        STAT auth_errors 0
        STAT bytes_read 82184
        STAT bytes_written 7210
        STAT limit_maxbytes 67108864
        STAT accepting_conns 1
        STAT listen_disabled_num 0
        STAT threads 4
        STAT conn_yields 0
        STAT bytes 0
        STAT curr_items 0
        STAT total_items 0
        STAT evictions 0
        STAT reclaimed 0
        END

    But with plink, I get an odd error. I'm using this command:

        watch -n 30 plink -v -telnet -P 11211 127.0.0.1 <<< $'\nstats'

    The first time through I get:

        Looking up host "127.0.0.1"
        Connecting to 127.0.0.1 port 11211
        client: WILL NAWS
        client: WILL TSPEED
        client: WILL TTYPE
        client: WILL NEW_ENVIRON
        client: DO ECHO
        client: WILL SGA
        client: DO SGA
        ERROR
        STAT pid 25099
        STAT uptime 91245
        STAT time 1349191927
        STAT version 1.4.5
        …
        END

    But when watch repeats the command I just get:

        Looking up host "127.0.0.1"
        Connecting to 127.0.0.1 port 11211
        client: WILL NAWS
        client: WILL TSPEED
        client: WILL TTYPE
        client: WILL NEW_ENVIRON
        client: DO ECHO
        client: WILL SGA
        client: DO SGA
        Failed to connect to 127.0.0.1: Connection reset by peer
        Connection reset by peer
        FATAL ERROR: Connection reset by peer

    What is plink doing here that is different from normal telnet? How should I be going about this? (I'm not married to plink, but I need a way to continuously send simple telnet commands to memcached without writing a full-fledged Perl script.)
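
    The stray ERROR in the first run is memcached answering plink's telnet option negotiation - the WILL/DO bytes are not a valid memcached command. A plain TCP client avoids the negotiation entirely; a sketch assuming a traditional netcat:

        watch -n 30 "printf 'stats\r\nquit\r\n' | nc 127.0.0.1 11211"

    Bash can even do it with no extra tool, via its /dev/tcp redirection:

        exec 3<>/dev/tcp/127.0.0.1/11211
        printf 'stats\r\nquit\r\n' >&3
        cat <&3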

  • Local references to old server name remain after Windows 2003 server rename

    - by imagodei
    I have a standalone Windows 2003 server with Windows SharePoint Services (WSS3) running on it. I had to rename the server, and I have had a bunch of problems resulting from this. Note that the server is not in an AD environment. The most obvious problems were with SharePoint, which didn't work. I was somewhat naive to think it would work in the first place, but OK - I've solved that using steps 1 and 3 from this site (TNX). Other curious behaviour/problems remain. Most disturbing is that SharePoint isn't able to send email notifications to participants. I noticed there are several references to the old server name everywhere I look: in the registry, and in the Windows Internal Database (MICROSOFT##SSEE). I also see instances of the old server name in SharePoint Central Administration - Operations - Servers in Farm, where there are references to these servers:

        oldname.domain.local
        oldname.local

    On one of those servers there is also a Windows SharePoint Services Outgoing E-Mail Service (Stopped). Also, when I telnet locally to the mail server (the Simple Mail Transfer Protocol (SMTP) service), I get the response:

        220 oldname.domain.local Microsoft ESMTP MAIL Service, Version: 6.0.3790.4675 ready at Tue, 15 Jun 2010 13:56:19 +0200

    IMO these strange naming problems are also the reason why email notifications from within SharePoint don't work. Can anyone tell me how to correct/replace those references to the old server name? Why is the email service insisting on the old name? Of course, I would like to do this without reinstalling the server. TNX!

  • Sending email with Exim and an external sender address

    - by Tronic
    I have the following problem: I want to send emails from a Rails webapp. I set up an Exim server, and when looking into the logs the sending works, but the emails aren't really sent. I had the same problem with another ISP. The sender address is hosted on another mail server, at another ISP, and I think the problem is that sending doesn't work because the sender address isn't hosted on the same server. Do you have any advice on this? The Exim logs tell me the following:

        2011-01-01 14:38:06 1PZ1eo-0000Ga-38 <= <> R=1PZ1eo-0000GY-1p U=Debian-exim P=local S=1778
        2011-01-01 14:38:08 1PZ1eo-0000Ga-38 => webapp@mydomain.com R=dnslookup T=remote_smtp H=mx1.emailsrvr.com [98.129.184.131] X=TLS1.0:RSA_AES_256_CBC_SHA1:32 DN="C=US,O=mx1.emailsrvr.com,OU=GT21850092,OU=See www.geotrust.com/resources/cps (c)08,OU=Domain Control Validated - QuickSSL(R),CN=mx1.emailsrvr.com"
        2011-01-01 14:38:08 1PZ1eo-0000Ga-38 Completed

    (webapp@mydomain.com is the external sender address!)

    Edit, with more details: when sending a mail from the command line with

        echo "Test" | mail -s Testmail someone@gmail.com

    the logs say

        2011-01-01 20:45:24 1PZ7OG-0001Vp-Rx <= root@gustav U=root P=local S=360
        2011-01-01 20:45:26 1PZ7OG-0001Vp-Rx => someone@gmail.com R=dnslookup T=remote_smtp H=gmail-smtp-in.l.google.com [209.85.229.27] X=TLS1.0:RSA_ARCFOUR_MD5:16 DN="C=US,ST=California,L=Mountain View,O=Google Inc,CN=mx.google.com"
        2011-01-01 20:45:26 1PZ7OG-0001Vp-Rx Completed

    and I get the mail in my Gmail account. But when sending via the webapp (when testing locally with sendmail it works fine), I only get this log output:

        2011-01-01 20:50:08 1PZ7Sq-0001X9-L4 <= <> R=1PZ7Sq-0001X7-Jo U=Debian-exim P=local S=1780
        2011-01-01 20:50:11 1PZ7Sq-0001X9-L4 => webapp@mydomain.com R=dnslookup T=remote_smtp H=mx1.emailsrvr.com [98.129.184.3] X=TLS1.0:RSA_AES_256_CBC_SHA1:32 DN="C=US,O=mx1.emailsrvr.com,OU=GT21850092,OU=See www.geotrust.com/resources/cps (c)08,OU=Domain Control Validated - QuickSSL(R),CN=mx1.emailsrvr.com"
        2011-01-01 20:50:11 1PZ7Sq-0001X9-L4 Completed

    Thank you!
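
    A hedged reading of those logs: the webapp entries begin with "<= <>", an empty envelope sender, which is how Exim records a bounce. So what reaches mx1.emailsrvr.com is a delivery-failure notice about the webapp's original message, not the message itself; the interesting question is why the original submission (referenced as 1PZ1eo-0000GY-1p) failed. Some standard Exim checks:

        exim -bp                                    # anything frozen or stuck in the queue?
        exim -bt some.recipient@example.com         # how would Exim route a given recipient?
        grep 1PZ1eo-0000GY /var/log/exim4/mainlog   # the original message the bounce refers to

    (The log path is Debian's default; the message ID is taken from the first log line above.)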

  • Postfix qmgr process causes heavy overload on mail servers

    - by Mattias
    We are using Postfix as the MTA for our e-mail marketing software, and once in a while we see the load on one of the mail servers rise above 5. The load is caused by the qmgr process, which is the heart of Postfix, and I can see that it is consuming a lot of CPU resources. The process seems to be stuck: after 15 minutes it is still doing the same thing and still increasing the load. Once I restart the postfix service, the load rapidly decreases to below 1 and Postfix continues to send e-mails without any problems. I'm wondering if anyone else has encountered this problem and whether people have suggestions on how to prevent it. The problem shows up on all our mail servers, but almost never on more than one at a time. It seems to be triggered only when we are sending a mailing, but the size (10 or 100,000 e-mails) doesn't seem to make a difference. It happens maybe once a week or even less often, and the time and day are different every time. We tried to solve the problem by decreasing the number of messages qmgr is allowed to process, but this didn't solve it. We are using Postfix 2.5.5 on Debian Lenny 5.0.8 (Postfix is installed through the default Debian repository). No special messages can be found in the logs (syslog, messages, mail.*). Thank you for your time.
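
    A few low-risk observations taken during the next spike would narrow down what qmgr is chewing on; a sketch (qshape ships with Postfix, though it may be packaged separately):

        postqueue -p | tail -1   # total queue depth at the moment of the spike
        qshape active            # age/domain histogram of the active queue
        qshape deferred          # large, old deferred queues are a classic qmgr load source

    If the deferred histogram is huge, qmgr repeatedly rescanning undeliverable mail is a plausible culprit, and pruning dead addresses from the mailing lists or tuning queue_run_delay / maximal_queue_lifetime is more likely to help than restarting Postfix.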

  • cURL - Unknown SSL protocol error - OS X 10.9

    - by saq7
    I am trying to use cURL and get the following error on every HTTPS request I make. The error is always the same, and HTTP requests work flawlessly. The verbose output is quite useless:

        Saquibs-MacBook-Pro:~ skothawala$ curl https://google.com -vv
        * Adding handle: conn: 0x7fe09b803a00
        * Adding handle: send: 0
        * Adding handle: recv: 0
        * Curl_addHandleToPipeline: length: 1
        * - Conn 0 (0x7fe09b803a00) send_pipe: 1, recv_pipe: 0
        * About to connect() to google.com port 443 (#0)
        *   Trying 74.125.226.129...
        * Connected to google.com (74.125.226.129) port 443 (#0)
        * Unknown SSL protocol error in connection to google.com:-9805
        * Closing connection 0
        curl: (35) Unknown SSL protocol error in connection to google.com:-9805

    I have looked through other answers on this forum and other places on the internet but haven't found an answer. Most people's issues involve particular servers and the configuration of SSL on those servers. Mine, however, is problematic any time HTTPS is used, with any website. Can someone please suggest what I should be looking into to solve the problem? Could it be that something is not properly configured? What should I be looking for?
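
    Since every site fails, the suspects are local to the machine; a hedged checklist (all standard curl options and shell commands):

        curl --version                       # which SSL backend this curl build uses
        env | grep -i proxy                  # stale http(s)_proxy variables are a frequent
                                             # cause of "Unknown SSL protocol error"
        curl --tlsv1 -v https://google.com   # force the TLS version to rule out negotiation

    The -9805 code appears to be SecureTransport's errSSLClosedGraceful (the 10.9 system curl is built against SecureTransport), i.e. the far end - or something in the path such as a proxy, firewall, or antivirus - closed the connection during the handshake.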

  • Process to replace motherboard and keep CPU

    - by jolivier
    My motherboard has been diagnosed with the Sandy Bridge issue (http://vip.asus.com/eservice/changeSandybridge_MB.aspx?slanguage=en-us), so my reseller has asked me to send the motherboard back to receive a new one compatible with the previous one. My problem is that I have a not-exactly-cheap Intel CPU currently on it, with its standard heatsink/fan (HSF), and I would obviously like to keep it and fit it to the new motherboard. I am quite worried about the thermal paste. I was planning to:

    1. Remove the CPU and the HSF together (I think they are stuck to each other).
    2. Try to separate the CPU and the HSF (I'm not sure how).
    3. Clean both of the surfaces.
    4. When the new motherboard is here, put the CPU back on it.
    5. Apply new thermal paste to the CPU.
    6. Fit the HSF again.

    Do you see any problem with this process? Any recommendations? Is it possible to keep the CPU and the HSF together for the whole process, or is it impossible to plug the CPU back into the new motherboard in that case? Thanks in advance for your answers. Olivier

  • AWS VPC ELB vs. Custom Load Balancing

    - by CP510
    So I'm wondering if this is a good idea. I have an Amazon AWS VPC set up with public and private subnets, so I already get the Internet Gateway and NAT. I was going to set up all my web servers (Apache2 instances) and DB servers in the private subnet and use a load balancer/reverse proxy to pick up requests and send them to the private subnet's cluster of servers. My question, then: are Amazon's ELBs a good fit for this, or is it better to set up my own custom instance to handle the public requests and run them through the NAT using nginx or Pound? I like the second option just for the sake of having an instance I can log into and check, as well as for taking advantage of caching and fail2ban DDoS prevention, and possibly using failsafes to redirect traffic. But I have no experience with ELBs, so I thought I'd ask your opinions. Also, if you have an opinion on this as well: would using the second option allow me to have only one public IP address and route SSH connections through port numbers to the respective instances? Thanks in advance!
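
    On the last point: port-based SSH fan-out is normally done with DNAT on the public-facing instance rather than in nginx or Pound; a sketch (the interface name and addresses are placeholders):

        # on the proxy instance: forward public port 2201 to one backend's sshd
        sysctl -w net.ipv4.ip_forward=1
        iptables -t nat -A PREROUTING  -i eth0 -p tcp --dport 2201 \
                 -j DNAT --to-destination 10.0.1.11:22
        # make replies return via this box even if the backend's default
        # route points at the NAT instance
        iptables -t nat -A POSTROUTING -d 10.0.1.11 -p tcp --dport 22 -j MASQUERADE

    One PREROUTING rule per backend/port. (If plain routing without MASQUERADE were used instead, EC2's source/destination check on the proxy would also need to be disabled.)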

  • IP Phone over VPN - one-way calls unless default route?

    - by dannymcc
    I have come across a strange problem with our VPN and BCM 50 (Nortel/Avaya) phone system. As you can tell from my other questions, I have been doing some work on setting up a VPN from one location to another, and it's all working well - with one exception. We have an IP phone at a remote location, connected straight to a router which has a VPN tunnel to our main practice. The phone mostly works, but every few calls it turns into a one-way call: the caller (on the remote phone) can't hear the receiver, but the receiver can hear the caller. This is fixed by setting the VPN tunnel to be the default route for all traffic. The problem with fixing it that way is that all traffic then goes through the tunnel, which slows internet access etc. down considerably. The router is set to send the following subnets over the VPN:

        192.168.1.0/24
        192.168.2.0/24
        192.168.4.0/24

    The remote location's subnet is 192.168.3.0/24. The remote router (where the phone is) is a DrayTek 2830n, and the local router (at the main practice) is a DrayTek 2820. We are using an IPSec tunnel with AES encryption - a result of a previous answer pointing out the incompatibility in the hardware encryption. Any advice would be appreciated!
