Search Results


  • HA for Resque & Redis

    - by Chris Go
    Trying to avoid SPOFs for Resque and Redis. Ultimately the client is going to be PHP via (https://github.com/chrisboulton/php-resque). After going through and finding some workable HA for nginx+php-fpm and MySQL (a MySQL master-master setup as a way to simplify master-slave promotion), next up is Resque+Redis. A standard install of Resque uses a localhost Redis (this is at DigitalOcean). I am depending heavily on Amazon Route 53 DNS failover to try to solve this:

    resque1.domain.com points to localhost Redis (redis1.domain.com) = same server
    resque2.domain.com points to localhost Redis (redis2.domain.com) = same server

    Then set up resque.domain.com with FAILOVER, resque1 as primary and resque2 as secondary. What this means is that most of the time (99%), resque1 should be getting hit, with resque2 as just a hot backup. This lets me get by with only 2 servers and makes sure that any hit to resque.domain.com goes somewhere.

    The other way to do this is to break Resque and Redis out onto 4 servers, as follows:

    resque1.domain.com - redis.domain.com
    resque2.domain.com - redis.domain.com
    redis1.domain.com
    redis2.domain.com

    Then set up DNS failover: resque.domain.com - primary: resque1, secondary: resque2; redis.domain.com - primary: redis1, secondary: redis2.

    I'd like to get away with 2 servers if I can, but is this second setup much better, or is the difference negligible? Thanks, Chris
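    For reference, the failover pair described above can be expressed through Route 53's record API; this is a minimal sketch using the AWS CLI, where the hosted-zone ID, health-check ID, and IP addresses are all placeholders:

        aws route53 change-resource-record-sets --hosted-zone-id Z1EXAMPLE \
          --change-batch '{
            "Changes": [
              {"Action": "CREATE", "ResourceRecordSet": {
                "Name": "resque.domain.com", "Type": "A",
                "SetIdentifier": "resque-primary", "Failover": "PRIMARY",
                "TTL": 60, "HealthCheckId": "11111111-example-health-check",
                "ResourceRecords": [{"Value": "203.0.113.10"}]}},
              {"Action": "CREATE", "ResourceRecordSet": {
                "Name": "resque.domain.com", "Type": "A",
                "SetIdentifier": "resque-secondary", "Failover": "SECONDARY",
                "TTL": 60,
                "ResourceRecords": [{"Value": "203.0.113.11"}]}}
            ]}'

    A low TTL matters here: failover only takes effect once clients re-resolve the name, so a long TTL stretches the outage window.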

    Read the article

  • Vserver: secure mails from a hacked webservice

    - by lukas
    I plan to rent and set up a vServer with Debian xor CentOS. I know from my host that the vServers are virtualized with linux-vserver. Assume there is a lighttpd and some mail transfer agent running, and we have to ensure that if the lighttpd is hacked, the stored e-mails are not easily readable. To me, this sounds impossible, but maybe I missed something, or at least you guys can validate the impossibility... :)

    I think there are basically three obvious approaches. The first is to encrypt all the data. Nevertheless, the server would have to store the key somewhere, so an attacker (w|c)ould figure that out. Secondly, one could isolate the critical services like lighttpd. Since I am not allowed to do 'mknod' or remount /dev in a linux-vserver, it is not possible to set up a nested vServer with lxc or similar techniques. The last approach would be a chroot, but I am not sure it would provide enough security. Furthermore, I have not yet tried whether I can even chroot inside a linux-vserver...? Thanks in advance!
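    On the chroot approach specifically: lighttpd can confine itself at startup through its own config, which needs no mknod or /dev tricks. A minimal sketch, assuming a jail directory at /srv/jail (a hypothetical path):

        # /etc/lighttpd/lighttpd.conf (excerpt)
        server.chroot        = "/srv/jail"   # chroot() into the jail after startup
        server.username      = "www-data"    # drop root privileges after binding port 80
        server.groupname     = "www-data"
        server.document-root = "/htdocs"     # resolved relative to the chroot

    This limits what a compromised lighttpd can read to the jail's contents; it does nothing for the MTA side, so the mail spool would still need to live outside /srv/jail.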

    Read the article

  • Ubuntu 12.04 compiz - disabled all compiz plugins - empty screen

    - by gotqn
    A friend of mine has installed Ubuntu 10.04 on my new machine (I have always been a Windows user and have no experience with Linux). I started to watch some tutorials about how to make the 'Rotated Cube' using Compiz, but the cube appears in the form of a list (only two slides). I thought this could be a result of my video cards (only two - one from the processor and one from the motherboard) not supporting this option. Anyway, I decided to disable all compiz plugins and options, because my friend had set some and I started to think there was some conflict between the plugins. After that, I got only an empty screen (no menu, no icons, nothing) and can do nothing. How do I fix this?

    EDIT: When I remove the compiz packages (from the console), the menu is shown again. Then I install compiz again (some of the effects are still not working). After a restart or log out/in, the menu is hidden again. I suppose I have broken some settings which are saved somewhere in the system; removing compiz does not delete them, and as a result they are activated again once compiz is reinstalled and the PC is restarted?
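    If the theory about broken saved settings is right, wiping the per-user Compiz configuration and letting the defaults regenerate is one way to test it. A hedged sketch for a 12.04/Unity setup (the dconf path is the commonly cited one; verify it exists on your system before running):

        # reset all per-user compiz settings (12.04 stores them in dconf)
        dconf reset -f /org/compiz/
        # restart the shell so the defaults take effect
        setsid unity
        # on older gconf-based releases (e.g. 10.04) the equivalent is:
        # gconftool-2 --recursive-unset /apps/compiz

    Because these settings are per-user, this also explains why reinstalling the compiz packages alone does not help: package removal leaves the user's config store untouched.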

    Read the article

  • SIP and NAT routers?

    - by OverTheRainbow
    Hello. SIP was not built with NAT routers in mind, and I'd like to get to the bottom of this issue to check what needs to be done on all devices so it works with NAT routers, and to understand in what contexts it just can't be used and I should look at more NAT-friendly alternatives like IAX. A picture being worth a thousand words, here's the layout I need to use: http://img62.imageshack.us/img62/4077/sipandnatrouters.jpg

    - The PBX server is located in the private LAN behind a NAT router connected to the Internet (I know it'd be easier if it were located in the public network, but this router doesn't support DMZs, so the server has to be in the private network)
    - A couple of (soft|hard)phones are located on the same LAN and connected to the PBX server, along with a PSTN gateway (Linksys 3102 or a Digium PCI card)
    - Remote users using (soft|hard)phones are located somewhere on the Net with dynamic IPs and are also behind NAT routers
    - I may or may not have control over the local NAT router where the PBX server is located, but I have no control over the remote NAT routers, either because the users don't have the computer knowledge to map ports or because the routers are off-limits (eg. web cafés, hotel LANs, etc.)

    Is it possible to configure the PBX server, the (soft|hard)phones, and the PSTN gateway so that all conversations work fine, no matter the endpoints (POTS caller/local phone, POTS caller/remote phone, local phone/local phone, remote phone/local phone)? In which cases should I expect problems, and are there solutions? FWIW, I'm leaning toward Freeswitch, but I could end up using Asterisk if there are technical advantages to it in this context. Thank you for any info.
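    For what it's worth, if the Asterisk route is taken, the usual starting point for a PBX behind NAT is telling it its public address and local subnet in sip.conf; a minimal sketch (addresses are placeholders, and option names vary somewhat by Asterisk version):

        ; /etc/asterisk/sip.conf (excerpt) -- PBX behind a NAT router
        [general]
        nat=yes                          ; rewrite SIP/SDP based on the source address:port
        externip=198.51.100.7            ; the NAT router's public IP (placeholder)
        localnet=10.0.1.0/255.255.255.0  ; peers on this subnet keep private addresses

    The router must still forward UDP 5060 (SIP signaling) plus the configured RTP port range to the PBX; the remote phones' NAT routers generally need no configuration as long as the phones keep the connection alive (registration/keepalives) from the inside.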

    Read the article

  • What does this ssh error mean?

    - by kevin
    This is my last resort. I've been trying to figure out the problem here for hours. Here's the deal: I have copied my private key from machine #1 onto machine #2. Machine #1 is able to connect via ssh to a server with my public key just fine, but machine #2 gives the following output when trying to connect to the server:

        $ ssh -vvv -i /home/kevin/.ssh/kev_rsa [email protected] -p 22312
        OpenSSH_5.3p1 Debian-3ubuntu6, OpenSSL 0.9.8k 25 Mar 2009
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to 192.168.1.244 [192.168.1.244] port 22312.
        debug1: Connection established.
        debug3: Not a RSA1 key file /home/kevin/.ssh/kev_rsa.
        debug2: key_type_from_name: unknown key type '-----BEGIN'
        debug3: key_read: missing keytype
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        ...
        Permission denied (publickey).

    There is obviously more debug output that I have omitted, and I can provide it upon request. I am convinced, however, that it doesn't like my private key file. I also suspect it has to do with how I copied it from machine #1 to machine #2: I copy/pasted the text of the private key onto a flash drive. This might be the problem; however, when I duplicated this method on another working private key file and diffed the original against the copy/pasted one, they were identical. I've been struggling with this. If I could just get a little more information on why it doesn't like my key, I'm sure I could fix it. Anyone have any ideas? Is there some metadata somewhere that tells ssh that a file is in fact an RSA key?
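    One quick way to test the "ssh doesn't like the key file" theory is to ask ssh-keygen to parse the private key and re-derive its public half; a sketch using the path from the post:

        # prints the public key if the file is structurally valid;
        # errors out if it is corrupt (e.g. mangled line endings from copy/paste)
        ssh-keygen -y -f /home/kevin/.ssh/kev_rsa
        # private keys must not be group/world readable, or ssh refuses them
        chmod 600 /home/kevin/.ssh/kev_rsa
        # stray carriage returns from a Windows detour are a common culprit:
        # tr -d '\r' < kev_rsa > kev_rsa.fixed

    The 'unknown key type '-----BEGIN'' line in the debug output is consistent with a header that no longer reads exactly '-----BEGIN RSA PRIVATE KEY-----' on its own line, which is exactly what broken line endings produce.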

    Read the article

  • Mac Management and Security

    - by Bart Silverstrim
    I was going through some literature on managing OS X laptops and asked someone some questions about usage scenarios for the MacBooks. Specifically, I asked someone more knowledgeable than I am whether my Mac could be taken over if I were visiting another site for a conference, or joined the wifi network at a local coffee house, where policies are pushed from an OS X Server with Workgroup Manager (either legitimately for the site, or by someone running a copy of OS X Server on hardware they have hidden somewhere on the network). Apparently such a server can be set up to do things like limit my access to Finder or impose other neat whiz-bang management features. He said that it is indeed possible: the server would be assigned via DHCP, the OS X server would assume my Mac is a guest, and it could hand out restrictions that my Mac would happily accept without notifying me or giving me an option - unlike Windows, which I believe would need to be joined to a domain before it becomes "managed" by Active Directory.

    So my question, as network admins and sysadmins with users traveling with MacBooks: is there a way to reasonably protect your users from having their machines hijacked, without resorting to just turning off networking all the time? Or isn't this much of a security hazard? What threat does this pose to the road warriors in your businesses?

    Read the article

  • Firefox isn't using my download manager (flash videos)

    - by John22
    I installed "Free Download Manager." I see the plugin in Tools-Add-ons (it doesn't have any options). I use several different flash video downloaders, because I haven't found one that works period on any site. When I save the video with two I tried, they are being downloaded by Firefox's default download manager (which means simultaneously - which is why I installed the download manager - I need them to download one at a time - in a prioritized queue.) [I used to use Flashgot (long ago), and it worked with some download manager I had installed - but over time it failed to see most videos. I installed Flashgot again, and it still fails to see anything but images and video ads.] Currently, I have to manually start Free Download Manager (from outside of Firefox), start the download in Firefox, stop it, copy the link location from Firefox's download menu, and then add it manually in Free Download Manager. Yuck. Do I need a different download manager (that takes over - recommendations?), or did I somehow install this one wrong or miss a setting somewhere in Firefox? Thanks for any help.

    Read the article

  • How to multiseat with HW 3d accel on CentOS 6.3 Final?

    - by user35070
    I would like to set up a multiseat configuration on CentOS 6.3 (two video cards, two keyboards, two mice, two monitors) and have hardware-accelerated 3D on both monitors. 3D HW acceleration rules out Xephyr. I saw somewhere that recent versions of GDM (3.3 and newer?) don't support multiseat, so do I have to install KDM to make this work? If I just create a duplicate section with new device identifiers in my xorg.conf file, will this 'just work'? Using different ports on the same video card and separate keyboards, mice, and displays, the result was a desktop which spanned both monitors, with both keyboards and mice acting as the same input in the GUI. I will power down, put in the new video card, and report on the results soon. Both video cards are nvidia.

    UPDATE: After putting in another NVIDIA video card, the default behavior (before changing xorg.conf) is that one screen works normally, with both mice and keyboards connected to it. After changing xorg.conf and the display manager to KDM, and following the directions at https://help.ubuntu.com/community/MultiseatX#Ubuntu_10.04_.28Lucid.29 , I have 2 mirrored screens connected to separate video cards, DRI enabled, and 2 mice both connected to the same pointer. The keyboards don't do anything; however, I probably just need to fix a setting in xorg.conf. I would still like to get multiseat functionality, eg. separate screens with separate input devices. I have verified that the separate X processes are running (see page above) using 'ps aux | grep 'X [01]''.
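    For reference, the duplicate-section approach usually means one Device/Screen pair per card, tied together by separate ServerLayout sections, one per seat. A trimmed sketch (BusIDs are placeholders to be read off lspci, and the input-device wiring from the linked guide is omitted here):

        # /etc/X11/xorg.conf (excerpt) -- one block set per seat
        Section "Device"
            Identifier "Card0"
            Driver     "nvidia"
            BusID      "PCI:1:0:0"
        EndSection
        Section "Device"
            Identifier "Card1"
            Driver     "nvidia"
            BusID      "PCI:2:0:0"
        EndSection
        Section "Screen"
            Identifier "Screen0"
            Device     "Card0"
        EndSection
        Section "Screen"
            Identifier "Screen1"
            Device     "Card1"
        EndSection
        Section "ServerLayout"
            Identifier "Seat0"
            Screen     "Screen0"
        EndSection
        Section "ServerLayout"
            Identifier "Seat1"
            Screen     "Screen1"
        EndSection
        # each seat then gets its own X server, roughly:
        #   X :0 -layout Seat0 -sharevts -novtswitch
        #   X :1 -layout Seat1 -sharevts

    The keyboards-do-nothing symptom usually comes down to input routing: each layout also needs its own InputDevice sections (with the evdev device paths) so that a keyboard is bound to one seat instead of being auto-added to the first server.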

    Read the article

  • Apache directory structure with multiple hosted languages

    - by anomareh
    I just got a new work machine up and running and I'm trying to decide how to set everything up directory-wise. I've done some digging around and really haven't been able to find anything conclusive. I know it's a question with a variety of answers, but I'm hoping there are some general guidelines or best practices to go by. With that said, here are a few things specific to my situation:

    - I will be doing actual development and testing on the same machine as the server.
    - It is a single-user machine in the sense that I will be the only one working on it.
    - There will be multiple hosted languages, specifically PHP and RoR, possibly expanding later.
    - I'd like the setup to translate well to a production environment.

    With those things in mind, there are a couple of questions in the back of my mind. Seeing as it's a single-user machine, I haven't been able to decide whether I should be working on things out of my home directory or whether they should be located outside of it. I feel that outside of a user directory would be better, as it would translate better to a production environment, but I'm also not sure if that will come with any permission annoyances or concerns, seeing as I'll be working on the same machine.

    Hosting multiple languages seems like it may be a bit quirky. With PHP, I've found you're generally just dumping the project somewhere in the document root, whereas with something like a Rails app you have the entire project and you only want its public directory in the document root.

    Thanks for any insight, opinion, or just personal preference from experience anyone can offer.
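    As one hedged illustration of the PHP-vs-Rails docroot difference, a name-based vhost pair might look like this (the /srv/www paths and server names are placeholders, and the Rails vhost assumes something like Passenger is serving the app):

        # PHP project: the whole tree sits in the document root
        <VirtualHost *:80>
            ServerName   php-app.local
            DocumentRoot /srv/www/php-app
        </VirtualHost>

        # Rails project: only the app's public/ directory is exposed
        <VirtualHost *:80>
            ServerName   rails-app.local
            DocumentRoot /srv/www/rails-app/public
            <Directory /srv/www/rails-app/public>
                AllowOverride all
                Options -MultiViews
            </Directory>
        </VirtualHost>

    Keeping projects under a shared root like /srv/www (owned by a group containing both your user and the web server's user) is one common way to stay out of the home directory without permission headaches.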

    Read the article

  • USB Wifi will not connect on Windows 7 (Even though the driver installs OK)

    - by Pete Roberts
    Windows 7 will not connect to a WiFi network using a USB network adapter. I have 3 adapters: a Senoa SUB 364 (EXT), a Repeatit SU2410 USB V2, and a ZYXEL G202. All of these devices install OK on Windows 7 Home Premium, both on my desktop PC (64-bit) and on my Asus Wii netbook (32-bit). In each case the adapter can be enabled/disabled, and the driver properties say it is working correctly. When I try to connect to a network, Windows 7 behaves as though the adapter does not exist and reports no networks. The Wii has an integrated adapter which works perfectly under Windows and connects to any of the 3 networks available to me.

    I have done all the checks I can on the configuration. What seems odd to me is that it happens with all 3 devices on 2 different Windows 7 PCs, both of which are working perfectly in every other respect. This suggests the common denominator is me, and I must be doing something wrong... What's also strange is that I cannot find any similar problems reported on any of the forums. From what reading I've been able to do, it seems like the new wifi virtualisation thingy in W7 is not recognising the adapters, which suggests I'm missing a configuration option somewhere. Looking forward to finding out whether I'm not alone or just being stupid. Pete

    Read the article

  • nginx starts up before apache

    - by paullb
    I've been fumbling through setting up redmine on an Ubuntu (12.04) box, and somewhere along the line NginX got set up; now Apache no longer loads because nginx has already grabbed the port. I tried removing NginX with the command below, but that didn't seem to make any difference: when I restarted the server and pointed my web browser at it, I still got the "Welcome to NginX" message.

        sudo apt-get purge nginx

    I have confirmed that NginX is gone, because when I run the above now I get as output:

        Package nginx is not installed, so not removed

    Yet every time I start the machine it is running again. I noticed the following among the running processes (if that is helpful):

        root      923  0.0  0.0  76784  1280 ?  Ss  03:00  0:00 nginx: master process /usr/sbin/nginx
        www-data  925  0.0  0.0  77092  1704 ?  S   03:00  0:00 nginx: worker process
        www-data  926  0.0  0.1  77092  2204 ?  S   03:00  0:00 nginx: worker process
        www-data  927  0.0  0.0  77092  1704 ?  S   03:00  0:00 nginx: worker process
        www-data  928  0.0  0.0  77092  1704 ?  S   03:00  0:00 nginx: worker process

    Any advice for bringing back apache2 as the "default" (for lack of a better term) web server?
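    A hedged sketch of how one might track down the stray nginx: the process listing above points at /usr/sbin/nginx, so the questions are which package (if any) still owns that binary, and what launches it at boot:

        # which package owns the running binary? purging 'nginx' alone can miss
        # nginx-full/nginx-light/nginx-common, or a copy compiled from source
        dpkg -S /usr/sbin/nginx
        # is an init script still registered to start it at boot?
        ls /etc/init.d/ | grep -i nginx
        sudo update-rc.d -f nginx remove     # deregister it if present
        # stop the running instance and hand port 80 back to apache
        sudo /usr/sbin/nginx -s stop 2>/dev/null || sudo killall nginx
        sudo service apache2 restart

    If dpkg -S reports the file is not owned by any package, the binary was installed outside apt (e.g. during a redmine guide's source build), which would explain why the purge had no effect.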

    Read the article

  • How do I keep folders synced and backed up between two macs using a Linux NAS (rsync?)

    - by Hultner
    I've got two primary computers: a Mac Pro, and a MacBook Pro for when I'm on the go. I've also got a Linux server which also acts as a NAS. Currently I back up both computers in their entirety to an external drive with Time Machine, which is rather useless and doesn't sync anything. What I really want is to keep my important files synced between both computers and my NAS (which is running RAID 5). That way I'm not backing up easily replaceable system files, and I've got all my important files in 3 places, two of them running RAID, so at least 5 drives would have to crash at the same time before any actual data loss occurred.

    The folders I want to keep synced are basically my photo, documents, development, MAMP and work folders; I also want the user Library folder backed up, but not synced. I'm thinking I'd have to use rsync, but I don't know how.

    Before anyone suggests Dropbox or similar services: I don't want to use them, for several reasons -

    - security (Dropbox obviously proved this),
    - speed (sometimes I'll sync gigabytes of data, and that will be significantly faster locally, and probably even through VPN, as I have a Gigabit pipe),
    - space (space on my NAS is cheap and practically limited only by my needs),
    - reliability (even if my internet were to go down, I still need to be able to keep my files synced in case I need to go somewhere on the fly),
    - price (I already have all the hardware, and for the number of gigabytes and the bandwidth I'd need, I doubt there's any free or cheap service).

    Those are my main reasons for wanting to keep it local. I'm sorry for any spelling or grammatical mistakes I might have made; I'm writing this on my smartphone from a shaky train, and English isn't my mother tongue. I gratefully appreciate any answers, even ones only partly solving my problem.
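    A minimal rsync sketch for one of the synced folders, assuming SSH access to the NAS (the hostname, user, and volume paths are placeholders):

        # push local changes of the photo folder to the NAS, preserving metadata
        rsync -avz --delete ~/Pictures/ user@nas:/volume1/sync/Pictures/
        # pull the NAS copy back down on the other Mac
        rsync -avz --delete user@nas:/volume1/sync/Pictures/ ~/Pictures/
        # for the backed-up-but-not-synced user Library: push only, and keep
        # deleted files by omitting --delete
        rsync -avz ~/Library/ user@nas:/volume1/backup/$(hostname)/Library/

    One caveat worth stating plainly: naive two-way rsync can overwrite a newer file with an older one if both machines changed it between runs; for true bidirectional sync, a tool built for it (unison is the usual suggestion) is safer, with rsync kept for the one-way backup jobs.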

    Read the article

  • Need to link WP Blog with Rails App on Heroku

    - by John Glass
    I have a client who wants to migrate his Rails app to Heroku. However, the client also has a blog associated with his domain that runs on WordPress. Currently, the WordPress blog is running happily alongside the Rails app, but once we migrate to Heroku, that clearly won't be possible. The url for the app is like http://mydomain.com, and the url for the blog is like http://mydomain.com/blog. I realize that the best long-term solution is to redo the blog in a Rails-friendly format like Toto or Jekyll. But in the short term, what is the best way to continue hosting the WP blog where it is (or somewhere) while using Heroku to run the app? The client doesn't want the blog on a subdomain; it should remain at mydomain.com/blog for SEO reasons, and also since there is traffic to the blog. I have two ideas:

    1. Use rack_rewrite or refraction (or just a regular old 301 and Apache mod_rewrite) on the old (non-Heroku) server to redirect the main url from the old site to Heroku. In this case, I can just leave the WordPress blog running happily where it is. I think?? Is there a reason to choose one of those options (rack_rewrite, refraction, or mod_rewrite) over the others if I do it this way?
    2. Switch the DNS info to point to the Heroku site, and then use a 301 redirect from the blog to the old site. But then I'll have to get the old (non-Heroku) site onto a subdomain and use some kind of rewrite rules anyway so it doesn't look like a subdomain.

    Is either of these approaches preferable, or is there another, easier way to do it that I'm missing?
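    A sketch of what option 1 could look like with plain Apache on the old server: everything except /blog is proxied to the Heroku app, while WordPress keeps serving the blog at the same URL. The app name is a placeholder, and mod_proxy/mod_proxy_http must be enabled:

        <VirtualHost *:80>
            ServerName mydomain.com
            # exclusion must come before the catch-all: WordPress keeps /blog
            ProxyPass /blog !
            # everything else goes to the Heroku app
            ProxyPass        / http://myapp.herokuapp.com/
            ProxyPassReverse / http://myapp.herokuapp.com/
            # send the herokuapp.com hostname upstream so Heroku's router
            # can find the app (the default, but worth being explicit)
            ProxyPreserveHost Off
        </VirtualHost>

    Proxying rather than 301-redirecting keeps mydomain.com in the visitor's address bar, at the cost of routing all app traffic through the old server; a 301 to a Heroku-hosted domain avoids that hop but changes the visible URL.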

    Read the article

  • cygwin sshd times out for remote login

    - by reve_etrange
    I have configured SSHD using Cygwin on Windows 7. I have checked and double-checked all of the following points:

    - Port forwarding is correctly configured
    - Windows Firewall is configured to pass port 22
    - Local login attempts (using Cygwin SSH) succeed
    - sshd_config has UseDNS No
    - Using nmap from a remote machine confirms port 22 is accessible
    - /etc/passwd and /etc/group are correctly populated

    However, remote login attempts time out, including from the local network:

        user@host:~$ ssh -vvv [email protected]
        OpenSSH_5.5p1 Debian-4ubuntu6, OpenSSL 0.9.8o 01 Jun 2010
        debug1: Reading configuration data /home/user/.ssh/config
        debug1: Applying options for *
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to the.ip.add.ress [the.ip.add.ress] port 22.
        debug1: connect to address the.ip.add.ress port 22: Connection timed out
        ssh: connect to the.ip.add.ress port 22: Connection timed out

    No messages are logged to /var/log/sshd.log. I suspect there is a permissions issue with a particular file somewhere; however, I have checked the permissions of all my Cygwin binaries, DLLs and the particular files important to Cygwin sshd, including all of: /etc/passwd, /etc/group, /var, /var/log/sshd.log, /var/empty. Others who have reported this or similar errors appear to have missed one of the points enumerated above. Can anyone point me to a possible solution?

    Read the article

  • IIS8 behind a VPN + Windows Server 2012 - how to properly bind IP+Port

    - by ryugen
    This is my first question, so I hope I'm going to give you enough information. I'm running Windows Server 2012 within the Hyper-V environment of my Windows 8 machine. Within Windows Server 2012, I'm running a VPN tool based on openVPN to hide my real IP. When I run IIS8 with the VPN disconnected, it works flawlessly over the Internet (port 80 forwarded correctly). But as soon as I connect to the VPN, I can't reach my site through the domain anymore.

    Now I've tried basically everything I know, which is why I'm asking you guys. I tried binding IIS8 to the IP of my virtual ethernet card. I tried changing the priority of the NIC through the advanced tab of the "Network and Sharing Center". I used ipconfig /flushdns in case there was something wrong in the DNS handling. Hell, I even turned off the Windows firewall. I also used a port scanner to verify the problem: the webserver is reachable on port 80 with the VPN disconnected and becomes unreachable the moment I connect. Theoretically both IPs (my regular one AND the VPN's) should be reachable, or at least not impair each other, right? Do you have any other suggestions? Do I have to route something somewhere somehow?
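    Two hedged things worth checking from an elevated command prompt: which addresses HTTP.sys (the layer that actually binds IIS8's IP+port) is listening on, and whether connecting the VPN replaces the default route, so that replies to Internet clients leave via the tunnel instead of the LAN:

        REM addresses HTTP.sys listens on; an empty list means "all addresses"
        netsh http show iplisten
        REM pin listening to the LAN adapter's address if needed (placeholder IP)
        netsh http add iplisten ipaddress=192.168.1.50
        REM inspect the routing table; if the VPN interface now owns 0.0.0.0/0,
        REM responses to non-VPN clients are sent into the tunnel and never arrive
        route print

    The symptom described (reachable until the tunnel comes up) is the classic signature of the second case: inbound packets still arrive via the forwarded port, but the SYN-ACKs go out the VPN with the VPN's source address, so the client discards them.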

    Read the article

  • Juniper router dropping pings to external interface

    - by Alexander Garden
    My organization has a Juniper SSG20-WLAN that routes our traffic to the outside world. We've been having intermittent problems with our internet connection, so I wrote a Python script to ping the internal interface of the router, the external interface, a couple of our internal servers, the ISP router our router talks to, their upstream provider, and Google and Yahoo for good measure. It does that about every minute. What I have found is that when our internet goes out, our Juniper router stops responding to pings on the external interface. Everything past that is, of course, unreachable. The internal interface and our internal servers continue to echo back without interruption. None of the counters indicate dropped packets of any type; they all look normal. The logs complain about VIP servers being unavailable, but otherwise show nothing indicative of network issues.

    My questions are these: Does this exonerate our ISP? Or, conversely, might a problem with the connection be causing the external interface to go down? Is there somewhere else in the SSG20, besides the system log and counters, that might help me track down info on the problem?

    UPDATE: It turned out that one of the switches between my monitoring box and the router was a router itself, and was occasionally diverting traffic from the gateway to itself. Kudos to those who made suggestions along those lines. I'm not really sure which answer to mark as accepted, as it was really stuff in the comments that turned out to be right. Thanks for the suggestions.

    Read the article

  • Joining two routers together, but I have no access to the second router, although I know its IP address and gateway

    - by JohnnyVegas
    I have temporarily moved into a rented apartment for 4 months, which has wireless. The trouble I am having is that the access points here are wifi only, with no RJ45, and I need RJ45 to connect some equipment that I am working with. I have purchased an RT-N66U, installed Tomato (Shibby ver. 1.28), and successfully replaced the existing access point, but now I want to re-enable the access point that I replaced, as it links wirelessly to 3 others. Can I plug a cable from the access point into my RT-N66U and get it to access the internet via my router? I have no admin access to the existing wireless access point, and I don't want to reset it, as it's not mine. There is another router situated in the roof somewhere, which I also have no access to, but it supplies my RT-N66U with internet, and I most definitely have a double NAT. Although that isn't the best way of doing things, I am limited in what I can do.

    Any suggestions on routing tables, VLANs etc. would be helpful, but I have no prior experience in these areas - though I know the Tomato firmware can cater for this. My router is set to IP 10.0.1.1 and DHCP is 10.0.1.100-200. The wireless access point's address was 192.168.1.2, assigned by the router in the roof, which has the address 192.168.1.1. There is a cable from that router going to a wall socket, which my RT-N66U is now attached to via its WAN port.

    I understand it's scruffy and it isn't the way to do things, but I have tried to ask for the admin details, and since the wireless network is looked after by a third party and nobody knows their details, I am stuck with this dilemma. I could buy three wireless access points and replace the existing ones, but this isn't what I want to do, and although I have installed plenty of DD-WRT wireless repeater bridges, they simply don't work here for some unknown reason. The phone line here is very noisy too, and I don't have the right to install ADSL in a building that isn't mine; 3G coverage isn't good enough either. Thanks for your time.

    Read the article

  • Drop database on DB2 9.5 - SQL1035N "The database is currently in use"

    - by Tommy
    I've never gotten this to work on the first try, but now I can't seem to do it at all. There is a connection pool somewhere using the database, and trying to drop the database while an application is using it should give this error. The problem is that there are no connections to the database when I issue these commands:

        db2 connect to mydatabase
        db2 quiesce database immediate force connections
        db2 connect reset
        db2 drop database mydatabase

    This always gives:

        SQL1035N The database is currently in use. SQLSTATE=57019

    Running this command shows no connections/applications:

        db2 list applications

    I can even deactivate the database, but still can't drop it:

        db2 => deactivate database mydatabase
        DB20000I The DEACTIVATE DATABASE command completed successfully.
        db2 => drop database mydatabase
        SQL1035N The database is currently in use. SQLSTATE=57019
        db2 =>

    Anyone got any clues? I'm running the command windows as the local administrator (Windows 2008), who is also the admin for DB2. The connection-pool user cannot connect while the database is quiesced.
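    One hedged thing to try before the drop: force off anything DB2 still considers attached, and then end the command window's own back-end attachment, since a lingering CLP connection is itself enough to keep the database "in use":

        REM kick every application DB2 still tracks
        db2 force application all
        db2 deactivate database mydatabase
        REM terminate ends this CLP session's back-end process; without it,
        REM the session that ran the earlier commands can block the drop
        db2 terminate
        db2 drop database mydatabase

    Note that 'db2 connect reset' only resets the connection to the database; 'db2 terminate' additionally kills the CLP back-end, which is the part that tends to matter for SQL1035N.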

    Read the article

  • Issues installing new drivers

    - by Luke
    I have a Windows XP Home SP3 system that won't detect anything on USB. It works with Ubuntu Live (run off USB), and the USB keyboard and mouse work in the BIOS, so physically speaking, I'm sure it's fine. I installed the SMBus drivers and the USB driver from the motherboard's website, and that went fine. If I plug anything in, Windows can detect the type of thing it is (i.e. keyboard, mouse, flash drive, etc.) and sometimes even the name (i.e. Microsoft 5-button mouse), but it won't accept any drivers. I have tried putting the Windows CD in the drive, but that didn't help. I have scanned for viruses and run CHKDSK with no issues, and ran MemTest86 with no issues.

    I am limited to one PS/2 connection for input, so I'm using the keyboard and haven't tried Windows Update yet. A colleague suggested trying a new USB controller, so I put in a PCI card that only had drivers for 9x on its CD, so I assume XP has them built in. It goes through the Found New Hardware wizard, but never actually finds drivers. I have also tried running SFC /SCANNOW and System Restore. SFC just flashes and goes away, making me believe there may be a hidden virus somewhere, but everything else seems to work, including MSE. I have reason to believe it's just an issue with detecting hardware, since even the USB controller card can't seem to find drivers, yet the system can detect WHEN a USB device is connected. Has anyone else run into this, or have a suggestion short of re-installing Windows?

    Read the article

  • SSH to VM rejecting password, works from virt-manager console

    - by boundless08
    First of all, I'm sorry if there is a duplicate post somewhere; I searched for a while, but none of the posts I found fixed my problem. It's fairly annoying. I created a new VM on our network, and when using virt-manager I can log into the VM fine with the username and password. When I try to ssh to the VM from anywhere else, it rejects the password, but I know the password is correct; I've even changed it multiple times to make sure. The address I'm ssh'ing to is definitely pointing at the right VM as well - I've tested all this. The VM is still usable, but the virt-manager console is very limited, so the sooner I can get to the bottom of this the better. The VM is running Ubuntu 12.04, btw.

    EDIT 1: I checked auth.log, and all I'm getting is "sshd[29304]: Connection closed by 'server.ip.address' [preauth]". I also tried allowing login as root, and even turned off password auth altogether in sshd_config - still nothing! I then turned on "AllowEmptyPasswords" - still a whole lot of nothing.
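    A hedged way to see exactly why the daemon drops the connection is to run a second sshd in the foreground in debug mode on a spare port (2222 here is arbitrary) and connect to it from outside:

        # on the VM, from the virt-manager console: a one-off debug instance
        sudo /usr/sbin/sshd -d -p 2222
        # from the remote machine:
        ssh -v -p 2222 user@vm.address
        # the -d instance prints the precise failure (PAM error, hostkey problem,
        # a Match block in sshd_config, etc.) to the console before exiting

    A terse "[preauth]" close with no auth failure logged can also mean the remote client never finished key exchange, which would point at something between the two hosts (MTU, a proxying firewall) rather than at the password itself; the debug run distinguishes the two cases.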

    Read the article

  • Postfix aliases and duplicate e-mails, how to fix?

    - by macke
    I have aliases set up in Postfix, such as the following:

        [email protected]: [email protected], [email protected] ...

    When an email is sent to [email protected], and any of the recipients in that alias is cc:ed - which is quite common (ie: "Reply all") - the e-mail is delivered in duplicate. For instance, if an e-mail is sent to [email protected] and [email protected] is cc:ed, it'll get delivered twice. According to the Postfix FAQ, this is by design, as Postfix sends e-mail in parallel without expanding the groups, which makes it faster than sendmail. Now that's all fine and dandy, but is it possible to configure Postfix to actually remove duplicate recipients before sending the e-mail? I've found a lot of posts from people all over the net with the same problem, but I have yet to find an answer. If this is not possible in Postfix, is it possible to do somewhere along the way?

    I've tried educating my users, but it's rather futile, I'm afraid... I'm running Postfix on Mac OS X Server 10.6; amavis is set as content_filter and dovecot is set as mailbox_command. I've tried setting up procmail as a content_filter for smtp delivery (as per a suggestion below), but I can't seem to get it right. For various reasons, I can't replace the standard OS X configuration, meaning postfix, amavis and dovecot stay put. I can, however, add to it if I wish.
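    For the procmail angle, the classic duplicate-suppression recipe keeps a small cache of recently seen Message-IDs and discards any message whose ID has already been delivered; a sketch for a ~/.procmailrc (the cache size and file name are arbitrary):

        # formail -D exits 0 (a match) when the Message-ID is already in the
        # cache; the W flag waits for that result, h feeds only the header,
        # and the msgid.lock lockfile serializes concurrent deliveries
        :0 Wh: msgid.lock
        | formail -D 8192 .msgid.cache

    Note this deduplicates per delivered mailbox rather than fixing the alias expansion itself, which matches the constraint above that the stock postfix/amavis/dovecot chain has to stay put.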

    Read the article

  • Change the Default Date setting in Word 2010

    - by Chris
    I am using Word 2010 and Windows 7. You know how, when you start typing a date in Word, it will automatically suggest what it thinks you want? For example, if I start typing "6/29", a little grey bubble will display "6/29/13 (Press ENTER to Insert)". How do I get the bubble to display the year in a 4-digit format, such as "6/29/2013 (Press Enter to Insert)"? The picture below (a screenshot, not reproduced here) showed how this looks when typing a date into Word.

    I have already gone to the Date & Time option under the Insert menu, and the date format I want is already selected. I think this only applies to Quick Parts anyway, so the date automatically updates when you open a document. The Region and Language settings in the Control Panel are correct as well. I thought at one point I had found the setting somewhere under Options, but I am sure I have looked through everything many times and I can't find it.

    I posted this exact question on the Microsoft website and someone replied: "Go to the Windows Control Panel and click on Clock, Language and Region and then on Change the date, time, or number format and then modify the Short date format so that it is what you want to be used." So please don't suggest this again, because in my question I did say that I already tried this and it doesn't work - at least not for Word, in this situation. Thanks.

    Read the article

  • OpenSSL response 404 issue on CentOS 6

    - by dsp_099
    I followed this tutorial (though it's for 5.2, I figured I'd be alright). The changes I had to make, which seemed to work:

    - Rename ca.csr to ca.cslr (that's the one the command generated)
    - List it in ssl.conf as ca.cslr instead of ca.csr

    I have the following in httpd.conf:

        <VirtualHost *:80>
            DocumentRoot /etc/test
            ServerName site.com
        </VirtualHost>

        <VirtualHost *:433>
            SSLEngine on
            SSLCertificateFile /etc/pki/tls/certs/ca.crt
            SSLCertificateKeyFile /etc/pki/tls/private/ca.key
            <Directory /etc/test>
                AllowOverride All
            </Directory>
            DocumentRoot /etc/test
            ServerName cryptokings.com
        </VirtualHost>

    /test contains a folder inside of it, accessible via http://site.com/test/foo; however, attempting to access it via https://site.com/test/foo results in a warning that the certificate is untrusted (self-signed, no biggie) and a 404 error. Chrome's complaints about the certificate are the following:

        The identity of this website has not been verified.
        • Server's certificate does not match the URL.
        • Server's certificate is not trusted.

    I think those warnings are a side effect of a self-signed certificate - or is the first one something that needs to be addressed? I do seem to be able to fetch the root page via https just fine, though; it shows a standard CentOS setup page. (That said, I haven't added a VirtualHost entry for it, so I suppose that makes sense.)

    I think I've made a mistake somewhere during the setup, as I'm not too familiar with the process. During setup, I was prompted for a type of password that would be required when apache restarts, but running service httpd restart does not seem to prompt me for one. Any help would be appreciated.
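    One thing stands out in the config above: the SSL vhost is bound to port 433, while https clients connect to 443, so https requests are answered by whatever serves 443 - here the stock mod_ssl default vhost, whose docroot has no /test, hence the CentOS page at the root and the 404 below it. A hedged corrected sketch:

        # https is port 443, not 433; mod_ssl's ssl.conf normally provides Listen 443
        <VirtualHost *:443>
            SSLEngine on
            SSLCertificateFile /etc/pki/tls/certs/ca.crt
            SSLCertificateKeyFile /etc/pki/tls/private/ca.key
            DocumentRoot /etc/test
            ServerName site.com   # should match the CN in the self-signed cert
            <Directory /etc/test>
                AllowOverride All
            </Directory>
        </VirtualHost>

    The "certificate does not match the URL" warning separately suggests the cert's CN doesn't match the hostname being browsed; regenerating the self-signed cert with the right CN would silence that one, while "not trusted" is inherent to self-signing.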

    Read the article

  • Windows Media Sharing not 'always' being detected by PS3

    - by Ahmad
    I'm having a weird problem with Windows Media Sharing on Windows 7. I have the following hardware in my network:

    PC 1 --- my main PC --- runs Windows 7 Ultimate x64
    PC 2 --- my backup PC --- runs Windows 7 Ultimate x32
    PS3

    PC 1 is my main PC, which has all my data/media on it. PC 2 is a backup PC that I use maybe once in 2 months; it has nothing installed apart from some very, very basic software. The problem is, my PS3 always sees the media sharing service coming from PC 2, but it never initially sees the media sharing service coming from PC 1. Both PC 1 and PC 2 have the same media sharing configuration (allow everything, on all devices, on all networks). When I restart both PCs, the PS3 will only detect PC 2's media sharing service, not PC 1's.

    However, here's the twist. When PC 1 is restarted, if I view my 'Network' on PC 2, I do see PC 1's media sharing service, and I'm able to play from it on PC 2 too. To get my PS3 to also see PC 1's media sharing service, I have to do either of the following 2 things:

    1) Play something from PC 1's media sharing service on PC 2; the PS3 will then magically also detect PC 1's media sharing service.
    2) Go into the Services area on PC 1 and restart the 'Windows Media Player Network Sharing Service'; after this, the PS3 instantly starts to see PC 1's media sharing service.

    Since my PS3 is only about a month old and is properly detecting PC 2's media sharing service, I think the problem is somewhere in the configuration of PC 1's media sharing service. Also, on PC 1 I have Norton Internet Security 2012 installed, but I've disabled it completely, and I've also disabled Windows Firewall (on PC 1 only). Can someone shed some light on this?
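    Since manually restarting the sharing service wakes discovery up, one hedged stopgap is to script that restart on PC 1 (for example as a scheduled task at logon) until the root cause turns up; the display name is the one quoted above, and WMPNetworkSvc is its short service name:

        REM run from an elevated command prompt on PC 1
        net stop  "Windows Media Player Network Sharing Service"
        net start "Windows Media Player Network Sharing Service"
        REM equivalently, using the short service name:
        REM   net stop WMPNetworkSvc
        REM   net start WMPNetworkSvc

    That both workarounds involve making PC 1's service announce itself again points at its SSDP/UPnP announcements not reaching the PS3 at boot, which is worth bearing in mind even with Norton and the firewall disabled.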

    Read the article

  • Googlebot repeatedly looks for files that aren't on my server

    - by John at CashCommons
    I'm hosting a site for a volunteer organization. I've moved the site to WordPress, but it wasn't always that way; I suspect at one point it was hacked badly. My Apache error log file has grown to 122 kB in just the past 18 hours. The large majority of the errors logged are of this form, repeated hundreds of times today alone:

        [Mon Nov 12 18:29:27 2012] [error] [client xx.xxx.xx.xxx] File does not exist: /home/*******/public_html/*******.org/calendar.php
        [Mon Nov 12 18:29:27 2012] [error] [client xx.xxx.xx.xxx] File does not exist: /home/*******/public_html/*******.org/404.shtml

    (I verified that xx.xxx.xx.xxx was a Google server.) I suspect there was a security hole somewhere before, likely in calendar.php, that was exploited. The files don't exist anymore, but there may be many backlinks referencing them, hence why Googlebot is so interested in crawling them. How do I fix this gracefully? I would still like Google to index the site; I just want to tell it somehow not to look for these files anymore.
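    A hedged sketch of the usual "tell Google it's gone for good" approach: answer those URLs with 410 Gone instead of 404 (crawlers generally retry 404s for longer than 410s), e.g. from the site's .htaccess; the path is taken from the log above:

        # mod_alias: mark the retired script as permanently gone (HTTP 410)
        Redirect gone /calendar.php
        # or with mod_rewrite, if several retired URLs share a pattern:
        # RewriteEngine On
        # RewriteRule ^calendar\.php$ - [G,L]

    This keeps the rest of the site indexable while giving Googlebot an explicit signal to drop the dead URLs; the 404.shtml entries should fade on their own once the handler that referenced them stops being hit.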

    Read the article
