Search Results

Search found 6520 results on 261 pages for 'sent'.


  • Amavisd-new (2.6.4-3) failing to do "lookup_sql_dsn" when a large number of emails need to be processed

    - by sandip
    Amavis fails to do its SQL lookup when a large number of emails are sent to it. It throws an error after scanning 40 to 50 emails:

      (!!)TROUBLE in process_request: sql exec: err=7, 57P01, DBD::Pg::st bind_param failed:
      FATAL: terminating connection due to administrator command\n
      SSL connection has been closed unexpectedly at (eval 103) line 164, <GEN50> line 5.
      at (eval 104) line 280, <GEN50> line 5.

    As soon as this error appears in the logs, Amavis stops and port 10024 is closed. Thinking it was an error in the SSL connection to the database (PostgreSQL 8.4), I disabled SSL in Postgres, but it made no difference. I have tried configuring Amavis on another server and got the same error again. This is happening on a production server, so I am not able to scan emails according to user settings. Does anybody have any idea what the source of this error may be? Please help. Thanks in advance.
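    A hedged diagnostic sketch, not a fix: the 57P01 "administrator command" error means the PostgreSQL server itself terminated the backend, so it is worth watching the server side while Amavis scans. The database name, user, and log path below are assumptions; adjust them to your setup.

      # confirm SSL is really off and no statement timeout is set
      psql -U amavis -d mail -c "SHOW ssl;"
      psql -U amavis -d mail -c "SHOW statement_timeout;"
      # watch the server log while amavisd scans; 57P01 usually follows a
      # pg_terminate_backend() call, a reload/restart, or a connection reaper
      tail -f /var/log/postgresql/postgresql-8.4-main.log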

    Read the article

  • Does having TRIM enabled affect other hard drives on a computer (and how do you know when Windows is using it)?

    - by Breakthrough
    I recently purchased a solid state drive (an OCZ Vertex 2, 80 GB) to use as my primary operating system partition. I also have three other SATA hard drives of assorted sizes. I successfully installed Windows 7 Professional onto the SSD (works awesome, great response time and transfer rate) and used the other three HDDs for data storage. I was browsing through the Bible of OCZ SSDs and noticed the following in Section 60-76 - Tweaks and TRIM:

      Q. How do I know if TRIM is enabled on my OCZ SSD?
      A. In Windows 7, go to Start > Run > cmd and type the following:

        fsutil.exe behavior query DisableDeleteNotify

      It should respond with DisableDeleteNotify = 0 if TRIM support is ready
      and active. If it's not, then type:

        fsutil.exe behavior set DisableDeleteNotify 0

    After a bit of searching on Google, I found similar results elsewhere: set DisableDeleteNotify to 0, which makes sense, since for TRIM to work the solid-state drive needs to be notified when deletes occur (for the garbage collector), unlike a normal hard drive. When I run the query on fsutil, I get the following result:

      DisableDeleteNotify = 48

    Following the instructions I found, I set this to 0 instead of 48. However, I am beginning to wonder: is this all the proof I really need that the OS is using TRIM? Also, since this setting applies globally for the computer, is TRIM data being sent to the other hard drives connected to the computer? And if so, would this cause any degradation in disk performance?

    Read the article

  • How to Set Up an SMTP Submission Server on Linux

    - by Kevin Cox
    I was trying to set up a mail server with no luck. I want it to accept mail from authenticated users only and deliver it. I want the users to be able to connect over the internet. Ideally the mail server wouldn't accept any incoming mail. Essentially, I want it to accept messages on a submission port and transfer them to the intended recipient out port 25. If anyone has some good links and guides, that would be awesome. I am quite familiar with Linux but have never played around with MTAs, and I am currently running Debian 6.

    More specific problem! Sorry, that was general, and Postfix is complex. I am having trouble enabling the submission port with encryption and authentication. What works:

    - Sending mail from the local machine (sendmail [email protected]).
    - Ports are open (25 and 587).
    - Connecting to 587 appears to work: I get a "need to starttls" warning, and starttls appears to work. But when I try to connect with the next command, I get the error below.

      # openssl s_client -connect localhost:587 -starttls smtp
      CONNECTED(00000003)
      depth=0 /CN=localhost.localdomain
      verify error:num=18:self signed certificate
      verify return:1
      depth=0 /CN=localhost.localdomain
      verify return:1
      ---
      Certificate chain
       0 s:/CN=localhost.localdomain
         i:/CN=localhost.localdomain
      ---
      Server certificate
      -----BEGIN CERTIFICATE-----
      MIICvDCCAaQCCQCYHnCzLRUoMTANBgkqhkiG9w0BAQUFADAgMR4wHAYDVQQDExVs
      b2NhbGhvc3QubG9jYWxkb21haW4wHhcNMTIwMjE3MTMxOTA1WhcNMjIwMjE0MTMx
      OTA1WjAgMR4wHAYDVQQDExVsb2NhbGhvc3QubG9jYWxkb21haW4wggEiMA0GCSqG
      SIb3DQEBAQUAA4IBDwAwggEKAoIBAQDEFA/S6VhJihP6OGYrhEtL+SchWxPZGbgb
      VkgNJ6xK2dhR7hZXKcDtNddL3uf1YYWF76efS5oJPPjLb33NbHBb9imuD8PoynXN
      isz1oQEbzPE/07VC4srbsNIN92lldbRruDfjDrAbC/H+FBSUA2ImHvzc3xhIjdsb
      AbHasG1XBm8SkYULVedaD7I7YbnloCx0sTQgCM0Vjx29TXxPrpkcl6usjcQfZHqY
      ozg8X48Xm7F9CDip35Q+WwfZ6AcEkq9rJUOoZWrLWVcKusuYPCtUb6MdsZEH13IQ
      rA0+x8fUI3S0fW5xWWG0b4c5IxuM+eXz05DvB7mLyd+2+RwDAx2LAgMBAAEwDQYJ
      KoZIhvcNAQEFBQADggEBAAj1ib4lX28FhYdWv/RsHoGGFqf933SDipffBPM6Wlr0
      jUn7wler7ilP65WVlTxDW+8PhdBmOrLUr0DO470AAS5uUOjdsPgGO+7VE/4/BN+/
      naXVDzIcwyaiLbODIdG2s363V7gzibIuKUqOJ7oRLkwtxubt4D0CQN/7GNFY8cL2
      in6FrYGDMNY+ve1tqPkukqQnes3DCeEo0+2KMGuwaJRQK3Es9WHotyrjrecPY170
      dhDiLz4XaHU7xZwArAhMq/fay87liHvXR860tWq30oSb5DHQf4EloCQK4eJZQtFT
      B3xUDu7eFuCeXxjm4294YIPoWl5pbrP9vzLYAH+8ufE=
      -----END CERTIFICATE-----
      subject=/CN=localhost.localdomain
      issuer=/CN=localhost.localdomain
      ---
      No client certificate CA names sent
      ---
      SSL handshake has read 1605 bytes and written 354 bytes
      ---
      New, TLSv1/SSLv3, Cipher is DHE-RSA-AES256-SHA
      Server public key is 2048 bit
      Secure Renegotiation IS supported
      Compression: NONE
      Expansion: NONE
      SSL-Session:
          Protocol  : TLSv1
          Cipher    : DHE-RSA-AES256-SHA
          Session-ID: E07926641A5EF22B15EB1D0E03FFF75588AB6464702CF4DC2166FFDAC1CA73E2
          Session-ID-ctx:
          Master-Key: 454E8D5D40380DB3A73336775D6911B3DA289E4A1C9587DDC168EC09C2C3457CB30321E44CAD6AE65A66BAE9F33959A9
          Key-Arg   : None
          Start Time: 1349059796
          Timeout   : 300 (sec)
          Verify return code: 18 (self signed certificate)
      ---
      250 DSN
      read:errno=0

    If I try to connect from Evolution, I get the following error: The reported error was "HELO command failed: TCP connection reset by peer".
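    For reference, a minimal sketch of the usual Postfix approach: the submission service is switched on per-service in /etc/postfix/master.cf with TLS and SASL options. This is essentially the block that ships commented out in Debian's stock master.cf; the exact SASL backend and certificate setup are assumptions you would still need to fill in:

      submission inet n       -       -       -       -       smtpd
        -o smtpd_tls_security_level=encrypt
        -o smtpd_sasl_auth_enable=yes
        -o smtpd_client_restrictions=permit_sasl_authenticated,reject
        -o smtpd_recipient_restrictions=permit_sasl_authenticated,reject_unauth_destination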

    Read the article

  • KVM virtual machine unable to access internet

    - by peachykeen
    I have KVM set up to run a virtual machine (Windows Home Server 2011, acting as a build agent) on a dedicated server (CentOS 6.3). Recently, I ran updates on the host, and the virtual machine is now unable to connect to the internet.

    The virtual network is running through NAT. The host has an interface (eth0:0) set up with a static IP (virt-manager shows the network and its IP correctly), and all connections to that IP should be sent to the guest. The host and guest can ping one another, but the guest cannot ping anything above the host, nor can I ping the guest from anywhere else (I can ping the host). Pings from the guest to another server under my control, and from an external system to the guest, both return "Destination port unreachable". Running tcpdump on the host and on the destination shows the host replying to the ping, but the destination never sees it (it doesn't even look like the host is bothering to forward it at all, which leads me to suspect iptables). The ping output matches that, listing replies from 192.168.100.1. The guest can resolve DNS, however, which I find rather odd.

    The guest's network settings (connection TCP/IPv4 properties) are set up with a static local IP (192.168.100.128), mask of 255.255.255.0, and gateway and DNS at 192.168.100.1.

    When originally setting up the VM and network, I had set up some iptables rules to enable bridging, but after my hosting company complained about the bridge, I set up a new virtual network using NAT and believe I removed all the rules. The VM's network was working perfectly fine for the last few months, until yesterday. I haven't heard anything from the hosting company and didn't change anything on the guest, so as far as I know nothing else has changed (unfortunately the list of updated packages has since fallen off scrollback and I didn't note it down).
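    A hedged diagnostic sketch: libvirt normally maintains the masquerade and forwarding rules for a NAT network itself, and a host update can flush or reorder them. These checks only assume the default virbr0 / 192.168.100.0/24 layout described above:

      sysctl net.ipv4.ip_forward             # should print 1
      iptables -t nat -L POSTROUTING -n -v   # expect a MASQUERADE rule for 192.168.100.0/24
      iptables -L FORWARD -n -v              # expect ACCEPT rules in and out of virbr0
      # restarting the virtual network makes libvirt reinstall its rules
      virsh net-destroy default && virsh net-start default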

    Read the article

  • How can I automatically edit an email before auto-forwarding it?

    - by Miss Cellanie
    Is there a way to automatically edit emails before forwarding them? I'm getting email notifications from Foursquare that I want to send to my phone as text messages. I know how to send messages to my number using an email address (I'm in the US and use Verizon), but I don't know how to strip out any unnecessary formatting, like HTML, before the email gets sent.

    What I want:
    - Ability to strip out HTML.
    - Ability to start forwarding at a specific part of the email based on a search (e.g., I might know that Foursquare starts their messages with "Hey hey!" and only want content after that phrase occurs).
    - Ability to truncate at 160 characters.

    Things I've tried:
    - I'm not using Foursquare DM pings through Twitter because I have two Twitter accounts, and Twitter only allows a phone to be linked to one account at a time. I'm not willing to change which account it's linked to.
    - I tried to work around the Twitter limitation using Google Voice, but they don't support SMS short codes.

    I'll compromise on the features I want if I can find a free solution that doesn't require me to set up my own server. I do think this is computer related, because it will happen on my computer, not on my phone.

    Edit: My current setup is Gmail in Firefox 3.0.15 on Windows XP. I use a netbook as my only personal computer. However, if the only way to accomplish this well is to set up my own mail server or something, I would still want to know that.

    Read the article

  • Western Digital My Book not recognized by WD software

    - by Kari
    A few years ago I bought a WD My Book Pro 2. It worked fine for a while, then one of the drives failed and I sent it back to be replaced under warranty. I never got around to setting up the new one when I got it back. I finally ran out of room on my internal drive, so I tried to use the external: no go. Both drives spin up but aren't recognized by either Disk Utility (Mac) or the WD Drive Manager. I tried on a PC as well, with freshly installed software. Then I pulled the drives out of the enclosure (the warranty has already expired) and plugged them straight into the PC. Both were recognized and worked 100% in RAID 0. The BIOS recognizes either disk as functional; Windows only sees them when both are connected, due to the RAID, which I can't change without the WD software. The drives that were returned to me are the "Green" drives, which I've read are NOT recommended for RAID. Is it possible that this is interfering with them being read externally? Any other ideas? My main computer is a laptop, so using them internally isn't an option :(

    Read the article

  • postfix: received and delivered have the same values (?)

    - by thinkingbig
    I have configured my first server (Debian with ISPConfig). Generally, I want to send bulk e-mails to our users, so I configured and turned on Postfix... but... after 1 hour of sending emails I have logs like this:

      Grand Totals
      ------------
      messages

        21886   received
        21883   delivered
            0   forwarded
            0   deferred
          234   bounced
            0   rejected (0%)
            0   reject warnings
            0   held
            0   discarded (0%)

       30805k   bytes received
       31280k   bytes delivered
            3   senders
            3   sending hosts/domains
        12588   recipients
            3   recipient hosts/domains

      Per-Hour Traffic Summary
          time          received  delivered   deferred    bounced     rejected
          --------------------------------------------------------------------
          0000-0100            0          0          0          0          0
          0100-0200            0          0          0          0          0
          0200-0300            0          0          0          0          0
          0300-0400            0          0          0          0          0
          0400-0500            0          0          0          0          0
          0500-0600            0          0          0          0          0
          0600-0700            0          0          0          0          0
          0700-0800            0          0          0          0          0
          0800-0900            0          0          0          0          0
          0900-1000            0          0          0          0          0
          1000-1100            0          0          0          0          0
          1100-1200            0          0          0          0          0
          1200-1300            0          0          0          0          0
          1300-1400            0          0          0          0          0
          1400-1500            0          0          0          0          0
          1500-1600        15311      15306          0        168          0
          1600-1700         6575       6577          0         66          0
          1700-1800            0          0          0          0          0
          1800-1900            0          0          0          0          0
          1900-2000            0          0          0          0          0
          2000-2100            0          0          0          0          0
          2100-2200            0          0          0          0          0
          2200-2300            0          0          0          0          0
          2300-2400            0          0          0          0          0

      Host/Domain Summary: Message Delivery
       sent cnt   bytes   defers   avg dly  max dly  host/domain
       --------  ------   ------   -------  -------  -----------
          21521  30353k        0    3.4 m   15.5 m   wp.pl
            355    919k        0   54.9 s   13.0 m   mysenderdomainexample.pl
              7    8477        0    1.7 s    1.9 s   prokonto.pl

      Host/Domain Summary: Messages Received
       msg cnt    bytes   host/domain
       -------   ------   -----------
         21879   30786k   mysenderdomainexample.pl
             5    16196   mx4.wp.pl
             1     3200   mx3.wp.pl

      Senders by message count
         21783   [email protected]
            96   [email protected]
             6   from=<

    So, my questions are:
    1. Why do received and delivered have (approximately) the same values?
    2. How can I check whether an email has actually been delivered?
    3. How do I change the default "root" and "www-data" user (From / Return-Path) to another? I have changed this in the script, but Postfix ignores the script's values and sends every mail from root (we have PHP send crons in /etc/crontab).
    4. Why have almost 100% of received mails been addressed to my sender host?

    Waiting for a response. Regards, TB
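    For question 2, a hedged sketch: the summary above has the shape of pflogsumm output, and individual deliveries can be checked straight from the Postfix log. The log path is the Debian default and an assumption:

      # regenerate today's summary
      pflogsumm -d today /var/log/mail.log
      # per-message delivery status, as logged by the smtp delivery agent
      grep 'status=sent' /var/log/mail.log | tail
      grep 'status=bounced' /var/log/mail.log | tail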

    Read the article

  • email attachments [closed]

    - by Alan Doolan
    My company currently uses software on a local machine that takes an email from the email server, extracts the attachment, renames it, and then adds it to a folder on a webserver using FTP. This works well, but they are now asking whether it can be done 'in the cloud', or, what they really mean, not locally. Is there anything that would do this on the server itself?

    I should clarify a bit. The attachments are various reports being sent to different email addresses (mostly Google corporate and free accounts). We need the reports to be in a folder on a webserver so that internal pages can take the information in the reports (CSV) and use it on the webpages or add it to a separate database. The key part is that the files need to be in the particular folders. Though it does work to have a computer running software that takes the files, renames them to the required name, and uploads them to the folder, it relies too heavily on one computer working all the time. This is not something we can depend on at this point. I'll be honest: I'm a web developer and not strong with server systems past my particular standard requirements, so this is beyond me. Though yes, I am aware that my boss is not 100% sure what 'cloud' means but likes the word.
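    A hedged sketch of the server-side equivalent, assuming the reports can be delivered to a local mailbox on the webserver (the tool is real, but the paths and filename convention are illustrative only):

      # extract MIME attachments from a raw message with ripmime (Debian
      # package 'ripmime') and move the CSV reports into the web folder
      ripmime -i /var/mail/reports/message.eml -d /tmp/attachments
      mv /tmp/attachments/*.csv /var/www/reports/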

    Read the article

  • redirecting output from telnet / nc to file in script fails when cron'd

    - by qhartman
    So, I have a device on my network which sits there listening on a port for a connection, and when a connection is made it dumps ASCII data out. I need to capture that data to a file. I wrote a dead simple shell script that does this:

      #!/bin/bash
      # Config Variables. Age is in Days.
      DATA_ROOT=/root/data
      FILENAME=data_`date +%F`.dat
      HOST=device
      COMPRESS_AGE=3

      # Sanity Checks
      if [ ! -e $DATA_ROOT ]
      then
          echo "The directory $DATA_ROOT seems to not exist. Please create it."
          exit 1
      fi

      if [ -e $DATA_ROOT/$FILENAME ]
      then
          echo "You seem to have extracted data already today. Aborting"
          exit 1
      fi

      # Get Data
      nc $HOST 2202 > $DATA_ROOT/$FILENAME

      # Compress old Data
      find $DATA_ROOT -type f -mtime +$COMPRESS_AGE -exec gzip {} \;

      exit 0

    It works great when I run it by hand, but when I run it from cron, it doesn't capture any of the output. If I replace nc with telnet, I see the initial telnet headers about escape sequences and whatnot, but not the data. Ideas? I've tried forcing bash to act like an interactive shell with -i. I've tried redirecting both stderr and stdout. I know it's got to be some silly simple thing, but I'm utterly failing. This is driving me nuts...

    EDIT: I also just noticed that the nc processes from all my previous attempts at this have been sitting there sleeping, and when I killed them, cron sent me a bunch of nonsensical error messages. At least now I have something to dig into!
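    A hedged guess at the cron-specific failure, given those sleeping nc processes: under cron there is no tty, and some netcat variants block on the closed stdin instead of reading from the socket. Feeding stdin from /dev/null, plus an idle timeout as a safety net, targets exactly that symptom:

      nc -w 60 "$HOST" 2202 < /dev/null > "$DATA_ROOT/$FILENAME"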

    Read the article

  • Gmail: Received an email intended for another person. [closed]

    - by jonescb
    I'm not really sure how email routing works, but someone ordered something on Amazon and I received the email instead of them. Or maybe we both got it, I don't know. The order doesn't show up in my account, so I'm certain I wasn't charged for it, but I shouldn't be getting other people's emails.

    We'll say that my email is [email protected], and somebody whose email is [email protected] places an order on Amazon. The confirmation email is sent to me at [email protected]. I checked the email header, and it did say To: [email protected], which is not my email address. At first I thought that Google ignores periods in email addresses, but I tested the account setup and it doesn't give any error when you put a period in the address. I didn't create the account; I just used the "check availability" function, and the address I chose with a period was fine.

    Maybe someone with knowledge about how email works could tell me why this happened. Is this a bug in the way Amazon sends emails? Or is it a bug in how Google receives them? Who should I report this issue to?

    Read the article

  • Serve web application error messages from the HTTP server

    - by licorna
    I have nginx as an HTTP server with Tomcat as a backend (using proxy_pass). It works great, but I want to define my own error pages (404, 500, etc.) and have them served by nginx and not Tomcat. For example, I have the following resource, which doesn't exist:

      https://domain.com/resource

    If I [GET] that URL, then I get a Not Found message from Tomcat and not from nginx. What I want is that every time Tomcat responds with a 404 (or any other error code), nginx itself sends a message to the user: some HTML file accessible by nginx. The way I have my nginx server configured is very simple:

      location / {
          proxy_pass http://localhost:8080/<webapp-name>/;
      }

    And I've configured port 8080, which is Tomcat, as not accessible from outside this machine. I don't think that using different location directives in the nginx configuration will work, because there are some resources that depend on the URL:

      https://domain.com/customer/<non-existent-customer-name>/ [GET]

    will always return 404 (or another error code), while:

      https://domain.com/customer/<existent-customer>/ [GET]

    will return something other than 404 (the customer exists). Is there any way of serving Tomcat (application server) error messages with nginx (HTTP server)? To check the status sent back through the proxy_pass directive and act upon it?
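    For reference, nginx has a directive aimed at exactly this case, proxy_intercept_errors, which makes nginx apply its own error_page handling to upstream responses with status 300 or higher. A hedged sketch (file paths and page names are illustrative):

      location / {
          proxy_pass http://localhost:8080/<webapp-name>/;
          proxy_intercept_errors on;
          error_page 404 /404.html;
          error_page 500 502 503 504 /50x.html;
      }
      location = /404.html  { root /usr/share/nginx/html; }
      location = /50x.html  { root /usr/share/nginx/html; }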

    Read the article

  • Windows Server doesn't connect to a network share

    - by Dmitriy N. Laykom
    Windows Server doesn't connect to a network share, although the share itself is working:

      Pinging 109.123.146.223 with 32 bytes of data:
      Reply from 109.123.146.223: bytes=32 time<1ms TTL=63
      Reply from 109.123.146.223: bytes=32 time<1ms TTL=63
      Reply from 109.123.146.223: bytes=32 time<1ms TTL=63

      Ping statistics for 109.123.146.223:
          Packets: Sent = 3, Received = 3, Lost = 0 (0% loss),
      Approximate round trip times in milli-seconds:
          Minimum = 0ms, Maximum = 0ms, Average = 0ms

      net view \\shareaddress
      System error 53 has occurred.
      The network path was not found.

    When I connected the network share, I observed this error message:

      "Mapped disk letter" refers to a location that is unavailable. It could be
      on a hard drive on this computer, or on a network. Check to make sure that
      the disk is properly inserted, or that you are connected to the Internet
      or your network, and then try again. If it still cannot be located, the
      information might have been moved to a different location.

    The network share was mounted via Group Policy. Does anyone know how I can avoid this error? The problem was solved once the OS had been restored from disk.
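    A hedged diagnostic sketch for the next occurrence: error 53 while ping succeeds usually points at name resolution or the Workstation service rather than connectivity. The share name below is a placeholder:

      nslookup shareaddress
      net use * \\shareaddress\sharename
      sc query lanmanworkstation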

    Read the article

  • Cloning a git repository from a machine running OS X

    - by Mike
    Hi folks, I'm trying to host a git repository from my home OS X machine, and I'm stuck on the last step of cloning the repository from a remote system. Here's what I've done so far:

    1. On the OS X (10.6.6) machine (heretofore dubbed the "server") I created a new admin user.
    2. Logged into the new user's account.
    3. Installed git.
    4. Created an empty git repository via "git init".
    5. Turned on remote login.
    6. Set port mapping on my router (AirPort Extreme) to send SSH traffic to the server.
    7. Added a ".ssh" directory to the user's home directory.
    8. From the remote machine (also an OS X 10.6.6 machine), I sent that machine's public key to the server using scp and the login credentials of the user created in step 1.
    9. To test that the server would use the remote machine's public key, I ssh'd to the server using the username of the user created in step 1 and indeed was able to connect successfully without being asked for a password.
    10. I installed git on the remote machine.
    11. From the remote machine I attempted to "git clone ssh://[email protected]:myrepo" (where "user", "my.server.address", and "myrepo" are all replaced by the actual username, server address and repo folder name, respectively).

    However, every time I try the command in step 11, I get asked to confirm the server's RSA fingerprint, then I'm asked for a password, but the password for the user I set up for that machine never works. Any advice on how to make this work would be greatly appreciated!
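    A hedged observation on the URL in step 11: with the ssh:// scheme, git treats the text after the colon as a port number, not a path, so "ssh://user@host:myrepo" is malformed. One of these forms is the usual fix (the absolute path is illustrative):

      git clone ssh://user@my.server.address/Users/user/myrepo
      # or the scp-style syntax, where the colon does introduce a path
      git clone user@my.server.address:myrepo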

    Read the article

  • Allowing Sharepoint to relay email through Exchange

    - by dunxd
    I have written a SharePoint 2007 web part that sends a field from a form to a specified email address. I have got the form working as I require, but at present it can only send to internal email addresses. SharePoint's email functions use SMTP to send to our Exchange 2003 server, but because our Exchange server is configured to prevent relaying, if the To: address is not at a local domain, it won't deliver the mail. I don't want to open up our Exchange server to be a completely open relay. What I want is to allow my SharePoint servers to send mail to addresses outside our domain. The following seem possible:

    - Allow all mail sent from one of the SharePoint servers to be relayed.
    - Allow all mail from a web application pool account to be relayed (I am not sure that the application pool authenticates to the SMTP server, though).
    - A combination of the two.

    Can anyone advise on the best way of doing this? Is setting up a dedicated SMTP virtual server on the Exchange server (not a separate physical server) the right way of going about this?

    EDIT: Note this is for Exchange 2003. There is a post on setting this up in Exchange 2007, which appears to have recognised the frequent requirement to do what I need. It doesn't give much detail on 2003, though. Can anyone expand?

    Read the article

  • OpenSSL force client to use specific protocol

    - by Ex Umbris
    When Subversion attempts to connect to an https URL, the underlying protocol library (OpenSSL) starts the secure protocol negotiation at the most basic level, plain SSL. Unfortunately, I have to connect to a server that requires SSLv3 or TLSv1 and refuses to respond to SSL or SSLv2. I've done some troubleshooting using s_client and confirmed that if I let s_client start with the default protocol, the server never responds to the CLIENT HELLO:

      $ openssl s_client -connect server.domain.com:443
      CONNECTED(00000003)
      write:errno=104
      ---
      no peer certificate available
      ---
      No client certificate CA names sent
      ---
      SSL handshake has read 0 bytes and written 320 bytes
      ---
      New, (NONE), Cipher is (NONE)
      Secure Renegotiation IS NOT supported
      Compression: NONE
      Expansion: NONE
      ---

    Watching this in Wireshark I see:

      Client                   Server
      -------syn---------->
      <------ack-----------
      ---CLIENT HELLO----->
      <------ack-----------
      [60 second pause]
      <------rst-----------

    If I tell s_client to use ssl2, the server immediately closes the connection. Only ssl3 and tls1 work. Is there any way to configure OpenSSL to skip SSL and SSLv2 and start the negotiation with TLS or SSLv3? I've found the OpenSSL config file, but that seems to control only certificate generation.
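    For the s_client side at least, the protocol can be pinned with flags that exist in the OpenSSL of that era (this confirms the server behaviour but does not by itself change what Subversion's library negotiates):

      openssl s_client -connect server.domain.com:443 -ssl3
      openssl s_client -connect server.domain.com:443 -tls1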

    Read the article

  • iptables rule on INPUT between 2 ethernet cards on the same host

    - by user1495181
    I have 2 Ethernet cards in the same host, connected directly with a LAN cable:

    - eth0 has IP 192.168.1.2
    - eth1 has IP 192.168.1.1

    I set this rule (there are no other rules; I ran iptables -X and -F):

      iptables -A INPUT -p tcp -j NFQUEUE --queue-num 0

    I send a TCP SYN packet (with a C++ program using a raw socket) from 192.168.1.2 to 192.168.1.1. In Wireshark I see that the packet is received on eth0, but the iptables rule above doesn't apply to this packet. When I sent the packet to a remote host and applied the rule on the remote host, it worked correctly. So I guess this is due to the fact that both Ethernet cards are in the same host.

    I need to create an iptables INPUT rule for a local Ethernet card (destination and source on the same machine). I need it to simplify a test. Did I guess the problem correctly? Is there a way to bypass this? PS: connecting them via a switch didn't help; the rule wasn't applied. Running on Ubuntu. tcpdump shows the packet:

      10:48:42.365002 IP 192.168.1.2.38550 > 192.168.1.1.34298: Flags [S], seq 0, win 5840, length 0

    but logging with iptables rules like these captures nothing:

      iptables -A INPUT -p tcp -j LOG --log-prefix '*****************'
      iptables -A OUTPUT -p tcp -j LOG --log-prefix '#################'
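    A hedged diagnosis sketch: when source and destination are both local addresses, Linux routes the traffic via the local table (so normal replies use lo), and a frame arriving on eth0 whose source is one of the host's own IPs can be dropped as a martian before it ever reaches the INPUT chain. The sysctls below relax those checks; treat them as test-only assumptions, not a recommended production setting:

      ip route get 192.168.1.1 from 192.168.1.2   # local table: expect 'dev lo'
      sysctl -w net.ipv4.conf.eth0.rp_filter=0
      sysctl -w net.ipv4.conf.eth0.accept_local=1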

    Read the article

  • Can sendmail be configured to discard routed email that has been rejected by the next hop?

    - by Guy Bolton King
    Background: We have a handful of hosts (running sendmail) acting as the MXs for a few domains each. Each domain is handled via the sendmail/cf /etc/mail/virtusertable, with a set of known recipients and a catch-all reject rule. Mail to postmaster on each host is aliased to root, and root is aliased to root+<host>@ourdomain.com. The MX for ourdomain.com is Google Apps, and [email protected] is a simple group that forwards to the admins. Google Apps will reject some emails at the SMTP stage, usually because of illegal attachments (instead of accepting them and filing them as spam).

    Problem: Given a particular spam email sent to a domain in a virtusertable entry:

    1. If the recipient address rejects the mail, then sendmail will try to send a DSN to the sender.
    2. If that sender also rejects the mail (because it's a falsified sender, and the MX for the sender rejects the mail as spam), then sendmail sends a DSN to the postmaster.
    3. The routing detailed above takes place, and... Google Apps rejects the mail as well.
    4. sendmail now gives up with a "savemail panic" and leaves the mail in the queue forever.
    5. Our mail queue fills up with garbage.

    Is there any way I can get sendmail to discard messages that have been rejected by the next virtusertable hop (i.e. after step 1 in the problem description)? Or does anyone have any other solutions to this?
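    One hedged workaround sketch, rather than sendmail tuning: break the chain at step 2 by keeping the double-bounce copy local, so the dead letter never has to traverse Google Apps. Sendmail aliases can deliver straight to a file; the alias name and mailbox path are illustrative:

      # /etc/aliases (run newaliases afterwards)
      root: local-bounces
      local-bounces: /var/spool/mail/bounce-archive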

    Read the article

  • BackupExec 12 + RALUS - VERY slow backups

    - by LVDave
    We use Backup Exec 12 and the Remote Agent for Linux/Unix Servers (RALUS) to back up a large RHEL5 system. For various reasons, we need to do a daily working-set job. These working-set jobs run abysmally slowly. The link between the target machine and the BE server is gigabit, and any other type of job runs at 1-3 GB/min. These working-set jobs start out at perhaps 40 MB/min and over the course of the backup job slowly drop so low that the BE job rate display in "current jobs" goes blank. Since we usually only back up changed files for one day, the job is normally small and finishes overnight, and we don't worry about the slowness. But we had some issues with the backup server and missed about 6 days of fairly heavy work on the Linux box, so this working-set job will be a doozy. We have support with Symantec, and I've pestered them a lot about this; they've had me run RALUS in debug mode, sent them that log and a VXgather from the BE host, and they had no fix or workaround. To give an idea, the working-set job in question has been running for the last 3.5 hours and has backed up just under 10 megabytes. I'm posting this here to see if anybody in the "real world" has seen this and/or has any ideas what might be causing these abysmally slow jobs, since Symantec seems to be clueless...

    Read the article

  • How do I tell Websphere 7 about a front end load balancer so that re-directs are handled correctly?

    - by TiGz
    On WebLogic 11g I can use the console to set the FrontendHost and FrontendPort on a server or on a cluster so that redirects are handled correctly and end up resolving to the front-end load balancer instead of the local host. The MBean associated with this on WebLogic is, for example:

      MBean Name:          com.bea:Name=AdminServer,Type=WebServer,Server=AdminServer
      Attribute Name:      FrontendHost
      Description:         The name of the host to which all redirected URLs will
                           be sent. If specified, WebLogic Server will use this
                           value rather than the one in the HOST header. Sets the
                           HTTP frontendHost. Provides a method to ensure that the
                           webapp will always have the correct HOST information,
                           even when the request is coming through a firewall or a
                           proxy. If this parameter is configured, the HOST header
                           will be ignored and the information in this parameter
                           will be used in its place.
      Type:                java.lang.String
      Readable / Writable: RW

    How is the same thing achieved under WebSphere 7?

    Follow-up info: So I have two use cases, actually. One is that I have a web app running under WebSphere on host A on port 9002 and an LB running on host B at port 80. When I visit the home page of the app via the LB at http://hostb/app, the app redirects my browser to http://hostb:9002/app and it 404s. I think this is WebSphere's fault, but I guess it could be the app's fault?

    The second is that the web app in question needs to send emails containing URLs that the customer can click on to get back into the web app; obviously these need to go via the LB. On WebLogic the app uses MBeans to derive the LB URL, and I was hoping to use a similar mechanism on WebSphere.

    Read the article

  • iPhone all day meeting request bug?

    - by RodH257
    We've come across a bit of a weird bug in the office. Our office is closing for a week or so over Christmas, and our admin staff sent a meeting request to everyone in the office to run from 5pm on the 23rd of December until we return to work at 8.30 on the 5th. However, there was some confusion: the same meeting request is showing up with different start times for different people. While most people get the 5pm start, for others it shows up as an all-day event! Staff have been planning their activities for the 23rd only to find out that their calendar is wrong and they are required to work.

    With some investigation, we noticed that all of the people with the incorrect time own iPhones or iPads. So perhaps they accepted the meeting on their phones, and that has entered the meeting incorrectly? There are people with the correct time who have iPhones, but perhaps they are still on iOS 4, or perhaps they didn't accept the meeting request on their phone. Has anyone else come across this error at all? Is there a fix?

    Read the article

  • Changing the modified date of a message in Exchange 2010

    - by jgoldschrafe
    My organization is in the middle of a process to move their Exchange 2010 messaging system from one archiving platform to another. As part of this process, we need to restore all archived messages back into users' email accounts and then let the new system import them again. The problem is that when the messages are dumped back, the modified date on each message is set to the date it was restored, which trips up message archiving and basically means nobody will have anything archived for six months.

    So you don't have to ask: no, our archiving platform only uses the modified timestamp on the message and cannot be altered to temporarily use the sent or received timestamp instead to determine whether to archive it. We and others have asked for the feature, but it doesn't exist right now.

    What we're looking for is a method to go through each user's mailbox and set the modified timestamp of each message (preferably only those received more than X months ago) to the received date of the message. We also don't want to spend more on this tool per user than we're spending on the archiving solution in the first place. We've run across a few tools that cost something ridiculous like $25 per user; I don't think we're even paying close to that for Exchange and the archiving solution put together. Whatever we settle on should work on a live mailbox with no downtime. Playing around with PST imports and hacky little things like that isn't going to work. We're fine with programming/scripting, if anyone knows the best way through PowerShell, COM automation, or some other route to handle this.

    Read the article

  • How to deal with ssh's "WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!"?

    - by Vi.
    I often need to log in to multiple remote stations that are simply assigned the same static IPs in turn. SSH complains about changed keys in this case:

      $ ssh [email protected]
      @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
      @    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
      @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
      ...
      Offending RSA key in /home/vi/.ssh/known_hosts:70
      ...

    I usually just run vim /home/vi/.ssh/known_hosts +70, dd, wq, and re-run the SSH command. How can I do this more simply? Requirements:

    - The warning should still be displayed, not a message like "The authenticity of host '172.1.2.3 (172.1.2.3)' can't be established."
    - It should be easy to accept the key change. I expect something like this:

      $ ssh [email protected]
      @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
      @    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
      @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
      ...
      The fingerprint for the RSA key sent by the remote host is
      82:cd:be:7a:ae:1b:91:2c:23:c1:74:4d:8a:38:10:32.
      Change the host key in /home/vi/.ssh/known_hosts (yes/no)? yes
      Warning: Changed host key for '172.1.2.3' (RSA) in the list of known hosts.
      [email protected]'s password:

    Simple, and different from the usual "The authenticity of host can't be established." message.
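    A hedged half-measure with stock OpenSSH: it doesn't implement the interactive prompt sketched above, but it does replace the vim dance with one command. After the warning appears, remove the offending key by host and reconnect:

      ssh-keygen -R 172.1.2.3
      ssh [email protected]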

    Read the article

  • /etc/hosts: What is loghost? (fresh install of Solaris 10 update 9)

    - by cjavapro
      #
      # Internet host table
      #
      ::1           localhost
      127.0.0.1     localhost
      XX.XX.XX.XX   myserver    loghost

    What is the purpose of loghost? If it were not for loghost being in there, all the /etc/hosts files on all the servers in this particular network could be identical.

    Edit: I looked at /etc/syslog.conf:

      #ident "@(#)syslog.conf 1.5 98/12/14 SMI" /* SunOS 5.0 */
      #
      # Copyright (c) 1991-1998 by Sun Microsystems, Inc.
      # All rights reserved.
      #
      # syslog configuration file.
      #
      # This file is processed by m4 so be careful to quote (`') names
      # that match m4 reserved words. Also, within ifdef's, arguments
      # containing commas must be quoted.
      #
      *.err;kern.notice;auth.notice             /dev/sysmsg
      *.err;kern.debug;daemon.notice;mail.crit  /var/adm/messages
      *.alert;kern.err;daemon.err               operator
      *.alert                                   root
      *.emerg                                   *
      # if a non-loghost machine chooses to have authentication messages
      # sent to the loghost machine, un-comment out the following line:
      #auth.notice ifdef(`LOGHOST', /var/log/authlog, @loghost)
      mail.debug ifdef(`LOGHOST', /var/log/syslog, @loghost)
      #
      # non-loghost machines will use the following lines to cause "user"
      # log messages to be logged locally.
      #
      ifdef(`LOGHOST', ,
      user.err                                  /dev/sysmsg
      user.err                                  /var/adm/messages
      user.alert                                `root, operator'
      user.emerg                                *
      )

    Very interesting: when shutting down, alerts go to all users, probably through the *.emerg * line. Looking at ifdef, it seems that the first parameter checks whether the current machine is the loghost, the second parameter is what to do if it is, and the third parameter is what to do if it is not.

    Edit: If you want to test a logging rule, you can use svcadm restart system-log to restart the logging service and then logger -p notice "test" to send a test log message, where notice can be replaced with any facility.level such as user.err, auth.notice, etc.

    Read the article

  • Nginx conditional not evaluating correctly

    - by cjc
    I'm running into a weird problem with nginx and how it evaluates conditionals. Here's the relevant configuration:

      set $cors FALSE;

      if ($http_origin ~* (http://example.com|http://dev.example.com:8000|http://dev2.example.com)) {
          set $cors TRUE;
      }

      if ($request_method = 'OPTIONS') {
          set $cors $cors$request_method;
      }

      if ($cors = 'TRUE') {
          add_header 'Access-Test' "$cors";
          add_header 'Access-Control-Allow-Origin' "$http_origin";
          add_header 'Access-Control-Allow-Methods' 'POST, OPTIONS';
          add_header 'Access-Control-Max-Age' '1728000';
      }

      if ($cors = 'TRUEOPTIONS') {
          add_header 'Access-Test' "$cors";
          add_header 'Access-Control-Allow-Origin' "$http_origin";
          add_header 'Access-Control-Allow-Methods' 'POST, OPTIONS';
          add_header 'Access-Control-Allow-Headers' 'X-Requested-With, X-Prototype-Version';
          add_header 'Access-Control-Max-Age' '1728000';
          add_header 'Content-Type' 'text/plain';
      }

    The conditional blocks never trigger. When I remove the conditions, I see that the Access-Test and Access-Control-Allow-Origin headers are set correctly, but, as noted, enabling the conditionals causes the headers not to be sent. I'm testing by running:

      curl -Iv -i --request "OPTIONS" -H "Origin: http://example.com" http://staging.example.com/

    Am I missing something obvious? I've tried the if with and without quotes, etc. This is nginx 1.2.9.
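    A hedged alternative sketch: nginx's if handling inside locations is notoriously fragile ("if is evil" in the nginx wiki's words), and a map block, which lives at the http{} level and computes the variable once per request, usually behaves more predictably for exactly this origin-matching pattern:

      map $http_origin $cors {
          default                       FALSE;
          http://example.com            TRUE;
          http://dev.example.com:8000   TRUE;
          http://dev2.example.com       TRUE;
      }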

    Read the article

  • What are the possible problems, when wget returns code 500 but same request works in normal browsers?

    - by markus
    What should I be looking for when wget returns 500 but the same URL works fine in my web browser? I don't see any access_log entries that seem to be related to the error.

      DEBUG output created by Wget 1.14 on linux-gnu.
      <SSL negotiation info stripped out>
      ---request begin---
      GET /survey/de/tools/clear-caches/password/<some-token> HTTP/1.1
      User-Agent: Wget/1.14 (linux-gnu)
      Accept: */*
      Host: testing.thesurveylab.net
      Connection: Keep-Alive
      ---request end---
      HTTP request sent, awaiting response...
      ---response begin---
      HTTP/1.0 500 Internal Server Error
      Date: Wed, 12 Dec 2012 14:53:07 GMT
      Server: Apache/2.2.3 (CentOS)
      Set-Cookie: blueprint2-staging=8jnbmkqapl30hjkgo0u6956pd1; path=/
      Expires: Thu, 19 Nov 1981 08:52:00 GMT
      Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
      Pragma: no-cache
      Strict-Transport-Security: max-age=8640000;includeSubdomains
      X-UA-Compatible: IE=Edge,chrome=1
      Content-Length: 5
      Connection: close
      Content-Type: text/html; charset=UTF-8
      ---response end---
      500 Internal Server Error
      Stored cookie testing.thesurveylab.net -1 (ANY) / <session> <insecure>
        [expiry none] blueprint2-staging 8jnbmkqapl30hjkgo0u6956pd1
      Closed 3/SSL 0x0000000001f33430
      2012-12-12 15:53:07 ERROR 500: Internal Server Error.
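    A hedged next step: the request clearly reaches the application (it sets a session cookie), so replaying it with browser-like headers can narrow down which header the app depends on. The header values here are illustrative guesses, not known requirements of this site:

      wget -d --header="User-Agent: Mozilla/5.0" \
           --header="Accept-Language: en" \
           --header="Cookie: blueprint2-staging=8jnbmkqapl30hjkgo0u6956pd1" \
           "https://testing.thesurveylab.net/survey/de/tools/clear-caches/password/<some-token>"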

    Read the article
