Search Results

Search found 14745 results on 590 pages for 'setting'.


  • A separate user for each task?

    - by Mark Tomlin
    I just got a VPS server the other day. I'm new to server administration, but not that new to Ubuntu (11.04): I use it in my living room as the HTPC, and I had a previous VPS that I used on and off for a TeamSpeak server. This one I'm setting up for long-term use, so I would like to know the best practice when it comes to the websites and tasks I have the server performing. I understand that it could be beneficial to separate each website into its own user group or under its own username. I would set up nginx so that it could read all of the users' directories (and thus each website) but could not touch anything else. The same with TeamSpeak: should I make a user for TeamSpeak so that it operates within its own confined area, or is this overkill? I do have access to root on the server and my current plan is to run about 4 websites and a TeamSpeak server. My stack is Linux (Ubuntu 11.04 LTS), nginx, and PHP 5.4.3 (using the built-in PDO SQLite 3 driver for the database). Should PHP have its own user group or is it OK to place it in with nginx?
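
    One common way to get that separation, sketched here purely as an illustration (the pool name, user, file location and socket path are placeholders, not anything from this setup), is to give each site its own PHP-FPM pool running as its own user, and point that site's nginx server block at the matching socket:

        ; one pool per site, each under its own user (file location depends on how
        ; your php-fpm.conf includes pool files)
        [site1]
        user = site1
        group = site1
        listen = /var/run/php-fpm-site1.sock
        listen.owner = www-data          ; whatever user the nginx workers run as
        listen.group = www-data
        pm = dynamic
        pm.max_children = 5
        pm.start_servers = 2
        pm.min_spare_servers = 1
        pm.max_spare_servers = 3

    nginx itself then only needs read access to the document roots, while each site's PHP runs as that site's user (fastcgi_pass unix:/var/run/php-fpm-site1.sock;), so one compromised site cannot write into another's files.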

    Read the article

  • Socket options for a tcp server with 3G clients & frequent disconnections

    - by Joel
    I have a TCP server, written in Java, sending and receiving many short messages, from 500 bytes to 100 KB long. It's a chess game and chat server, to make it simple. The server is running Debian 6. Half of the clients are connecting from 3G networks, and half over standard DSL. A portion of the 3G clients lose connection pretty often. The error I get on the server and on the client socket is Connection reset. I have come across this page at Oracle documentation: socketOpt. I am wondering what I could tune there to lower the number of disconnections from 3G clients. I don't mind about the ping or transfer rate, but just about the TCP disconnections. I am not skilled enough to understand the impact of each setting, but I sort of understood that the TCP window was important, although I don't know exactly how. So I'm asking if anyone here has an idea? Thanks if you can help.
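
    One thing often suggested for mobile clients that silently disappear, and only a sketch with illustrative values here, is to enable TCP keepalive on the accepted sockets (setKeepAlive(true) on the java.net.Socket) and shorten the kernel's keepalive timers on the Debian server so dead connections are detected and cleaned up sooner:

        # probe idle connections sooner; Linux defaults are 7200/75/9, far too slow for 3G churn
        sysctl -w net.ipv4.tcp_keepalive_time=120
        sysctl -w net.ipv4.tcp_keepalive_intvl=30
        sysctl -w net.ipv4.tcp_keepalive_probes=4
        # add the same keys to /etc/sysctl.conf so they survive a reboot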

    Read the article

  • How to install a private user script in Chrome 21+?

    - by Mathias Bynens
    In Chrome 20 and older versions, you could simply open any .user.js file in Chrome and it would prompt you to install the user script. However, in Chrome 21 and up, it downloads the file instead, and displays a warning at the top saying “Extensions, apps, and user scripts can only be added from the Chrome Web Store”. The “Learn More” link points to http://support.google.com/chrome_webstore/bin/answer.py?hl=en&answer=2664769, but that page doesn’t say anything about user scripts, only about extensions in .crx format, apps, and themes. This part sounded interesting: Enterprise Administrators: You can specify URLs that are allowed to install extensions, apps, and themes directly through the ExtensionInstallSources policy. So, I ran the following commands, then restarted Chrome and Chrome Canary: defaults write com.google.Chrome ExtensionInstallSources -array "https://gist.github.com/*" defaults write com.google.Chrome.canary ExtensionInstallSources -array "https://gist.github.com/*" Sadly, these settings only seem to affect extensions, apps, and themes (as it says in the text), not user scripts. (I’ve filed a bug asking to make this setting affect user scripts as well.) Any ideas on how to install a private user script (that I don’t want to add to the Chrome Web Store) in Chrome 21+? Update: The problem was that gist.github.com’s raw URLs redirect to a different domain. So, use these commands instead: # Allow installing user scripts via GitHub or Userscripts.org defaults write com.google.Chrome ExtensionInstallSources -array "https://*.github.com/*" "http://userscripts.org/*" defaults write com.google.Chrome.canary ExtensionInstallSources -array "https://*.github.com/*" "http://userscripts.org/*" This works!

    Read the article

  • DisableCrossAccountCopy not working on some Outlook installs, working on others, both going against Exchange

    - by MikeBaz
    As part of a mail migration project from one Exchange organization to another, we need to be able to prevent users from moving/copying messages between their accounts in each organization. (Yes, users will think this is evil; no, it's not my decision; yes, users will hate us.) Luckily, we thought, Outlook 2010 provides the DisableCrossAccountCopy registry value/policy (cf. http://technet.microsoft.com/en-us/library/ff800883.aspx). (Because you can't do multiple Exchange organizations in a single profile before Outlook 2010, this only matters on Outlook 2010. Yes, I'm ignoring for the sake of this question copy/move to/from the filesystem.) In our test lab, in a test forest with a test Exchange organization, with a second Exchange account added to the profile in either of the "real" Exchange organizations, with the value set to "*", everything works as expected. On a workstation in one of the production domains, however, the setting does not seem to work. We have tried it under HKCU, HKLM, HKCU\Software\Policies, and HKLM\Software\Policies. It simply seems to be ignored. The value was set in the OCT on a test machine, but the OCT (and the ADM/ADMX file) have the wrong type for the value. We have located the value in the registry and removed it everywhere it is found, we think, and put it back in HKCU, but it still isn't taking. At the moment, a clean Outlook install is not an option - even if it was, we at this point would need to know what to do to fix the pushed copy (I didn't push the copy out to thousands of machines, I've just been asked to help clean up the current mess). Thoughts?

    Read the article

  • Cannot access SMC8014WG-SI provided by TimeWarner/RoadRunner administrative interface...

    - by Matt Rogish
    I just received installation of RoadRunner internet/TV/Voice and I was given a wi-fi router from the TimeWarner folks. The model is a SMC SMC8014WG-SI. Unfortunately, the password it uses is WEP and that is, as we all know, completely insecure. The tech that was here didn't know how to change it to something like WPA2 w/TKIP, and I was on hold for 20 minutes with the TimeWarner folks before I gave up. My problem is that the default web interface (http://192.168.0.1) isn't responding. I can ping it, I can access the internet through it, but I can't get to the admin interface. I did a "hard reset" of the device but still no dice. My suspicion is that the wi-fi admin interface is disabled (a common setting) but the wired interface isn't working on either of my two laptops (I've tried two laptops with two different cables, no link light activated). Am I SOL? Did they lock this down so I can't do what I want to do? Worst-case is I just hook up my go-to WRT54G router to the other modem and leave this one turned off, but I'd rather use their hardware to avoid any "It's not our problem" in the future. Any thoughts? Thanks!!

    Read the article

  • Wireless USB keyboard and mouse can wake system, but then receiver is inactive

    - by BlueMonkMN
    I have a Microsoft brand USB device that acts as a receiver for a wireless Microsoft Keyboard and a wireless Mouse. When it's operating normally, there are LEDs on the device indicating Caps Lock, Num Lock and Function Lock, of which the latter 2 are usually lit. It is plugged into a Dell Inspiron 531 with Windows 7 32-bit running on an AMD Athlon 64 X2 Dual Core processor 5000+. When the computer goes to sleep (the power indicator on the main box is flashing), I can wake it by moving the mouse. So far all is good. However, something changed in, I think, the past couple weeks (I suspect due to a Microsoft driver update problem). Before the change, after waking the computer, everything would operate normally as far as I could tell, but now after waking the computer, the receiver has no lights on, and the keyboard and mouse are completely unresponsive (which is odd, considering the mouse woke up the computer). There is a button on the receiver that's supposed to reset the wireless connection and flash the lights while it does so, but it has no effect in this state. It's like the receiver doesn't have power (but how would the system know I moved the mouse, unless the power was on until it woke up?). I have checked the BIOS/CMOS settings or whatever you call them, and did not see anything related to USB in the power management section. I have checked Windows 7 device manager and ensured that all the USB Root Hub devices have the setting unchecked for allowing the USB power to be turned off. Like I said, this was working before, and the only thing I can think of that's changed is applying Windows Updates.
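
    As a diagnostic sketch (standard Windows 7 powercfg commands from an elevated prompt; the device name below is a placeholder for whatever the receiver is called in Device Manager), it is worth checking whether the receiver is still armed to wake the machine and re-arming it if it has dropped off the list:

        powercfg -devicequery wake_armed
        powercfg -devicequery wake_programmable
        powercfg -deviceenablewake "Microsoft USB Dual Receiver Wireless Mouse and Keyboard"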

    Read the article

  • chrooting php-fpm with nginx

    - by dragonmantank
    I'm setting up a new server with PHP 5.3.9 and nginx, so I compiled PHP with the php-fpm SAPI options. By itself it works great using the following server entry in nginx: server { listen 80; server_name domain.com www.domain.com; root /var/www/clients/domain.com/www/public; index index.php; log_format gzip '$remote_addr - $remote_user [$time_local] "$request" $status $bytes_sent "$http_referer" "$http_user_agent" "$gzip_ratio"'; access_log /var/www/clients/domain.com/logs/www-access.log; error_log /var/www/clients/domain.com/logs/www-error.log error; location ~\.php$ { fastcgi_pass 127.0.0.1:9001; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /var/www/clients/domain.com/www/public$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_script_name; include /etc/nginx/fastcgi_params; } } It serves my PHP files just fine. For added security I wanted to chroot my FPM instance, so I added the following lines to my conf file for this FPM instance: # FPM config chroot = /var/www/clients/domain.com and changed the nginx config: #nginx config for chroot location ~\.php$ { fastcgi_pass 127.0.0.1:9001; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME www/public$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_script_name; include /etc/nginx/fastcgi_params; } With those changes, nginx gives me a File not found message for any PHP scripts. Looking in the error log I can see that it's prepending the root path to my DOCUMENT_ROOT variable that's passed to fastcgi, so I tried to override it in the location block like this: fastcgi_param DOCUMENT_ROOT /www/public/; fastcgi_param SCRIPT_FILENAME $fastcgi_script_name; but I still get the same error, and the debug log shows the full, unchrooted path being sent to PHP-FPM. What am I missing to get this to work?
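
    For reference, one commonly suggested arrangement once FPM is chrooted, sketched here and not verified against this exact setup, is to keep the include first and then override both parameters with the paths as PHP will see them inside the chroot (leading slash, relative to /var/www/clients/domain.com):

        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9001;
            fastcgi_index index.php;
            include /etc/nginx/fastcgi_params;
            # paths below are what PHP sees inside the chroot, not on the real filesystem
            fastcgi_param SCRIPT_FILENAME /www/public$fastcgi_script_name;
            fastcgi_param DOCUMENT_ROOT /www/public;
        }

    If /etc/nginx/fastcgi_params on this server happens to set SCRIPT_FILENAME itself, that assumption breaks and the duplicate values would need to be reconciled first.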

    Read the article

  • A Domain Admin user doesn't have effective Administrative rights on a Domain Computer

    - by rwetzeler
    I am a developer who is setting up a virtual domain environment for testing purposes and am having trouble with the setup. I have created a new DC on a new forest... call it dev.contoso.com. I have set up a virtual internal network for all machines that are going to be a part of this virtual test environment and have given each machine a static IP address in the 192.169.150.0 subnet. I have added machine1.dev.contoso.com to the domain dev.contoso.com. I have also provisioned a user account (adminuser) in the domain and made that user a member of the Domain Admins group. Upon logging into machine1 using my newly created Domain Admin account, I cannot access/run any files on machine1. When I go into the advanced permissions for the C:\ folder (Properties - Security tab - Advanced - Effective Permissions) and search for dev\adminuser (mentioned above), I get an error saying: Windows can't calculate the effective permissions for admin user. What do I need to do to get administrative rights on machine1? I am using Server 2008 R2 for both the AD controller and machine1.

    Read the article

  • Is there a reason to use internal DNS over 8.8.8.8?

    - by skylarking
    I've inherited a LAN where there is really no name resolution being done for local resources, i.e. all users enter IP addresses manually to access printers and network shares. There are no LDAP servers or domains either; workstations simply connect to the network without authentication. DHCP is handled via a core switch, and DNS settings are also handed out by this same core switch. Currently, the DNS assignments are as such, and in this order: 10.1.1.50 / old Pentium III Windows 2003 box running the DNS service with 128 MB RAM; 169.200.x.x / ISP; 4.2.2.2 / the well-known public one. There are a couple thousand clients on the LAN, and most of the activity is web browsing (this is an educational setting). First of all, the server seems woefully underpowered for this task, yet there is virtually no slowness when web surfing by clients. How much horsepower should a heavily used DNS server have? I have also heard using 4.2.2.2 is a bad idea, since it has been so overused. Finally, wouldn't it make sense to have a robust external DNS server listed first? (Google's 8.8.8.8 would seem to be a logical candidate.)
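
    A quick way to put numbers on the "which resolver first" question, sketched below with a placeholder hostname, is to compare query times from a client on the LAN:

        dig @10.1.1.50 www.example.org | grep "Query time"
        dig @8.8.8.8   www.example.org | grep "Query time"
        dig @4.2.2.2   www.example.org | grep "Query time"

    Cached answers from the local box will usually beat any external resolver on latency, which is one argument for keeping an internal DNS server first even on modest hardware.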

    Read the article

  • Segmentation fault on login to mysql

    - by numberwhun
    Hello everyone! I recently did a fresh install of Ubuntu on my laptop (HP dv7, AMD Dual Core with 4 gigs RAM). I am working on installing my development environment and tools and one of the first things I was working on is getting MySQL installed. The following was my configure statement with options: ./configure --prefix=/usr/local/mysql --with-big-tables --with-unix-socket-path=/usr/local/mysql/tmp/mysql.sock --with-named-curses-libs=/lib/libncurses.so.5.7 After I did the make;make install, I did the post configuration such as setting the root password and installing the mysqld daemon in its rightful place. My issue is when I try to log in to mysql to start using it, the following shows what happens: $ mysql -u root -p Enter password: Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 1 Server version: 5.1.42 Source distribution Segmentation fault I have searched Google extensively, I have searched through the mysql bugs database and I have yet to find anything that matches my issue. Here is the contents of my my.cnf file, in case you want to see it: $ cat /etc/my.cnf [mysqld] basedir=/usr/local/mysql datadir=/usr/local/mysql socket=/usr/local/mysql/tmp/mysql.sock [mysql.server] user=mysql #basedir=/var/lib [client] socket=/usr/local/mysql/tmp/mysql.sock [mysqld_safe] err-log=/usr/local/mysql/logs/mysqld.log pid-file=/var/run/mysqld/mysqld.pid I am really hoping that someone here can tell me what has gone wrong with my installation as I would really love to know. I welcome and look forward to all responses. Thank you in advance! Best regards, Jeff
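
    Since the build was pointed at a specific curses library (--with-named-curses-libs=/lib/libncurses.so.5.7), a first diagnostic sketch, assuming gdb is installed, is to confirm what the client binary actually linked against and grab a backtrace of the crash:

        # check which curses library the mysql client is really using
        ldd /usr/local/mysql/bin/mysql | grep -i curses
        # reproduce the segfault under gdb and print a backtrace
        gdb -ex run -ex bt --args /usr/local/mysql/bin/mysql -u root -p

    A mismatched or hand-specified libncurses is a common culprit for a client that segfaults right after printing the banner, but that is an assumption to verify, not a diagnosis.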

    Read the article

  • Samba between Ubuntu server 10.10 and Windows Vista, Windows 7

    - by chepukha
    Hi all, I have a Linux box running Ubuntu Server 10.10. I have installed Samba on this box and want to share files with my laptops, which run Windows Vista Home and Windows 7 Home. I have been struggling with the setup for almost a month but couldn't get it right. If I try to access a shared folder from Windows Vista, I get the message "Windows cannot access \\server_ip_address". Error code: 0x80070035. The network path was not found. If I access from Windows 7, then after entering the password to log in I can see the list of shared folders on the Linux box. But if I click on a shared folder, I get the same error message as above. Tailing /var/log/samba/log.windows7-pc I got the following message: [2011/03/16 00:17:41.427238, 0] smbd/service.c:988(make_connection_snum) canonicalize_connect_path failed for service sharemedia, path /root/sharemedia Here is my setting in smb.conf: [global] share modes = yes netbios name = Samba workgroup = WORKGROUP wins support = yes encrypt passwords = true [sharemedia] comment = Tesing sharing using Samba path=/root/sharemedia/ public = yes valid users = samba_usr_name ; make sure all files are sensible permissions create mask = 0660 force create mask = 0660 directory mask = 2770 force directory mask = 2770 directory security mask = 0000 ; Normal share parameters read only = no browseable = yes writable = yes guest ok = no
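
    The canonicalize_connect_path failure is consistent with smbd being unable to reach a share that lives under /root (mode 700) when connecting as samba_usr_name. A sketch of one workaround, with /srv/sharemedia as an assumed new location rather than anything from this setup, is to move the share somewhere world-traversable and repoint smb.conf:

        mkdir -p /srv/sharemedia
        chown samba_usr_name:samba_usr_name /srv/sharemedia
        chmod 2770 /srv/sharemedia
        # then in smb.conf:  path = /srv/sharemedia
        testparm            # sanity-check the edited smb.conf before restarting Samba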

    Read the article

  • MacOSX: remove write-protect flag from file in Terminal

    - by Albert
    Hi, I have a file on a FAT32 volume which is shown as write-protected in Finder (so I cannot move it). Removing that write-protected flag in the information dialog works just fine. However, I have many more such files and I thus want to do it via Terminal. I already tried 'chmod +w' but that didn't work. 'ls -la' showed me that the permissions are already fine ("-rwxrwxrwx 1 az az" where az is my user account). Then I thought this might be stored in some xattr properties, but 'xattr -l' didn't give me any entry. Then I thought this might be some ACL setting (whereby I thought they would be stored as xattr, but let's try it anyway), and a Google search turned up something with 'chmod -a' or 'chmod -i' or so. All these tries only give me "chmod: No ACL currently associated with file" or "chmod: Failed to set ACL on file...: Operation not permitted". But I definitely have no write access to the file, because I cannot move it or make any other change to it (in Terminal). Removing the write-protect flag in Finder solves that.
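
    The Finder "Locked" checkbox maps to the BSD user-immutable flag rather than a POSIX mode bit or an xattr, which is why chmod and xattr show nothing. A sketch (the volume path is a placeholder, and exactly how OS X maps FAT32's read-only attribute is worth confirming with ls -lO first):

        ls -lO /Volumes/FATDISK/somefile             # a locked file shows the "uchg" flag
        chflags nouchg /Volumes/FATDISK/somefile     # clear it for one file
        chflags -R nouchg /Volumes/FATDISK/somedir   # or for a whole directory tree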

    Read the article

  • Keep-Alive header not sent from Tomcat 5.5 http connector?

    - by Codek
    We're currently using a hardware load balancer, which then goes to Apache, and that then goes to Tomcat 5.5 via the AJP connector. We've decided to dump Apache for various reasons - in our current system it doesn't provide any advantage. However, when I look at the headers sent when we do this, the "Keep-Alive: timeout=15 max=96" header doesn't get sent when you use the Tomcat HTTP connector. Interestingly, I can find no documentation on "keepAliveTimeout" for Tomcat 5.5, but I can for Tomcat 6. But neither can I find evidence that Tomcat 5.5 doesn't support this setting. Here's my connector: <Connector port="8090" maxHttpHeaderSize="8192" maxThreads="400" minSpareThreads="150" maxSpareThreads="300" enableLookups="false" connectionTimeout="2" maxKeepAliveRequests="400" disableUploadTimeout="true" /> So: is there any way I can specify the keepalive timeout if we use the HTTP connector with Tomcat 5.5, and force this header entry to be sent? Just to be clear - the exact header entry I see back from the server with Apache is: Keep-Alive: timeout=2, max=100. But nothing from Tomcat/Coyote. I've looked at this some more, and I don't think the Keep-Alive header entry really matters. The problem seems to be that keep-alives are simply not supported in the Tomcat 5.5 HTTP connector? They do seem to work in Tomcat 6 (+ Java 6). Thanks, Dan
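
    For comparison, the connector below is roughly what the Tomcat 6 documentation describes (keepAliveTimeout is in milliseconds); whether the 5.5 Coyote HTTP connector honours the attribute is exactly the open question, so treat it as a sketch to test rather than a confirmed fix. Note in passing that connectionTimeout is also in milliseconds, so the value of "2" above may itself be dropping connections almost immediately:

        <Connector port="8090" maxHttpHeaderSize="8192"
                   maxThreads="400" minSpareThreads="150" maxSpareThreads="300"
                   enableLookups="false" connectionTimeout="20000"
                   keepAliveTimeout="15000" maxKeepAliveRequests="400"
                   disableUploadTimeout="true" />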

    Read the article

  • Server reporting incorrect mime type for css files

    - by Becky
    We have a VPS server that we host our websites on. I have written a CMS using CodeIgniter. On one of the interfaces, I am attempting to upload a css file to the system. This worked correctly when we had it hosted on shared hosting. Since we've moved it to the VPS, I am getting an "incorrect filetype" error. It all comes down to the fact that the server is reporting a mime type of text/x-c for the css file rather than text/css. I logged in via shell and ran the following command on an existing valid css file (to make sure it wasn't an issue with either CodeIgniter or with PHP): file --brief --mime 'filename.css' 2>&1. The server gave me the following in response to my command: text/x-c; charset=us-ascii. My question: is there some sort of server setting that I need to tweak to get the server to correctly identify the css file as text/css? Do I just have to add a mime type for css files to the server? I found the mime types file (/etc/mime.types), and it just has video types and a couple of others that I have no idea about. There is nothing in there for css or images or html files, unless I'm looking in the wrong spot. I'm not a server person, so I'm hoping someone can help me out. Some server specs: Apache/2.2.22 (Unix), PHP 5.3.13, Server API = CGI/FastCGI, and the fileinfo PHP extension appears to be disabled.
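
    Since the reported type comes from file(1)'s magic database (the same detection the manual file --brief --mime test exercises) rather than from Apache's mime.types, one commonly used workaround is to tell CodeIgniter's upload class that this server's fingerprint for CSS is acceptable. A sketch, assuming a stock CodeIgniter layout with application/config/mimes.php:

        // application/config/mimes.php -- accept what file(1) on this VPS reports for .css
        'css' => array('text/css', 'text/x-c', 'text/plain'),

    Enabling the fileinfo extension, or updating the file/libmagic package so it recognises CSS, would address the underlying detection instead; both are assumptions to test, not guaranteed fixes.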

    Read the article

  • Cannot print certain colours on Ubuntu with HP Laser Printer

    - by ILMV
    We have a load of machines running Ubuntu in our office; they are either on 8.04 or 9.10. We have a server which connects to an HP JetDirect, which in turn connects to an HP 3550 Colour Laser printer using CUPS. The problem we are having is that we cannot print red, magenta or yellow at 100% (I have a picture of the Ubuntu test page to demonstrate the problem). This is obviously a pretty big problem as we are constantly receiving documents with these colours and cannot successfully print them off; we cannot just switch to grayscale, as our business depends on being able to print colour (seems trivial, but we handle lots of artwork). We're using the recommended driver, HP Color LaserJet 3550 Foomatic/pxljr (recommended); there is another driver in the list labelled HP Color LaserJet 3550 Foomatic/hpijs. These are production printers, so I need to make sure any setting change won't kick us in the nuts. It would appear HPIJS is for HP inkjets, which makes sense I guess. The problem doesn't occur in Windows. RESOLVED: I've managed to solve the problem. I did indeed use the HPIJS driver (apparently for inkjets) but it seems to have worked; we're going to roll with it for now to see how we get on with it.

    Read the article

  • 'txn-current-lock': Permission denied [500, #13] - Subversion + Apache Configuration Issue

    - by wfoster
    Current setup: Fedora 13 32-bit, Apache 2.2.16, Subversion repositories set up under /var/www/svn. I have two different repositories under this directory, so my /etc/httpd/conf.d/subversion.conf is set up this way: LoadModule dav_svn_module modules/mod_dav_svn.so LoadModule authz_svn_module modules/mod_authz_svn.so <Location /svn> DAV svn SVNListParentPath on SVNParentPath /var/www/svn <LimitExcept GET PROPFIND OPTIONS REPORT> AuthType Basic AuthName "Subversion Repository" AuthUserFile /etc/httpd/.htpasswd Require valid-user </LimitExcept> </Location> After copying over my repos and using: chmod 755 -R /var/www/svn chcon -R -t httpd_sys_content_t /var/www/svn chown apache:apache -R /var/www/svn I can browse my repos fine through the browser, and I can update all my working copies; however, when I try to check in from anywhere I get the same error: Can't open file '/var/www/svn/repo/db/txn-current-lock': Permission denied. I have been working on this issue for a while now and can't seem to find a solution. It might be of some use to know that the repo existed on a different server before this; it has now been moved to this new server. Everything I have read seems to indicate that the permissions for Apache are incorrect, however Apache is set to run as User apache and Group apache, so as far as I can tell my setup is correct. The behavior is not, though. Any ideas? Solution: The only way I was able to get this to work was to disable SELinux. It could also be done by setting the proper booleans with setsebool and getsebool; since this is just a home server, I decided to disable SELinux, and am reaping the benefits now.
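
    For anyone who would rather keep SELinux enforcing, the context/boolean route mentioned in the solution looks roughly like this; it is a sketch of the usual approach, not something verified on this particular box:

        # label the repositories with a context Apache is allowed to write to,
        # instead of the read-only httpd_sys_content_t used above
        chcon -R -t httpd_sys_rw_content_t /var/www/svn
        # inspect the Apache-related booleans before flipping any of them
        getsebool -a | grep httpd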

    Read the article

  • Virtual host config issues in osx 10.7 server app

    - by Benno
    I have two Mac mini Lion servers set up to run as production and staging machines. My sysadmin decided on these machines over the previous CentOS box we had because they have an "interface" to manage them, rather than just the terminal. To be honest, I prefer the terminal. My problem is that the Mac OS X 10.7 Server.app seems to be having issues with the creation of virtual hosts in the 'Web' section. It seems VERY touchy. For example, I cannot create an http virtual host first. I have to create an https host first with a unique DNS name (e.g. vuly6), then create the http host with a different DNS name to the first (e.g. www), or it appears to override the first one, even though one is SSL and one is non-SSL. Further, it seems to override perfectly good configurations at random. For example, the default sites directory is usually /Users/default/Sites/Customsites or something, but sometimes when I load the Server.app it changes to /var/empty. Also, if I change or add extra virtual hosts after the first one or two, it starts to mess up and the first two virtual hosts start having issues. Has anyone had any experience with setting up virtual hosts via this app? Am I able to manually create these virtual hosts, without using the app, and without the app overriding my settings when I restart Apache?

    Read the article

  • Configuring port forwarding for SSH - no response outside LAN

    - by WinnieNicklaus
    I recently moved, and at the same time purchased a new router (Linksys E1200). Prior to the move, I had my old router set up to forward a port for SSH to servers on my LAN, and I was using DynDNS to manage the external IP address. Everything worked great. I moved and set up the new router (unfortunately, the old one is busted so I can't try things out with it), updated the DynDNS address, and attempted to restore my port forwarding settings. No joy. SSH connections time out, and pings go unanswered. But here's the weird part (i.e., key to the whole thing?): I can ping and SSH just fine from within this LAN. I'm not talking about the local 192.168.1.* addresses. I can actually SSH from a computer on my LAN to the DynDNS external address. It's only when the client is outside the LAN that connections are dropped. This surely suggests a particular point of failure, but I don't know enough to figure out what it is. I can't figure out why it would make a difference where the connections originate, unless there's a filter for "trusted" IP addresses, which is perhaps just restricted to my own. No settings have been touched on the servers, and I can't find any settings suggesting this on the router admin interface. I disabled the router's SPI firewall and "Filter anonymous traffic" setting to no avail. Has anyone heard of this behavior, and what can I do to get past it?

    Read the article

  • Configuring OS X L2TP VPN to use Certificate for IPSEC layer instead of Pre Shared Key

    - by Matthew Savage
    I'm trying to set up an L2TP VPN on an OS X Snow Leopard Server setup, and have had success using a pre-shared key; however, I would rather not rely on a simple string, and use a certificate instead. Setting this up on the server side is seemingly easy: you simply select a certificate you have generated from the list and hit apply. However, when I try to use the certificate on the client side it fails. I have exported the certificate into a P12 file, transferred it to the client, and imported it into the login keychain; however, when I try to choose the certificate (from Network preferences, clicking Authentication Settings, then selecting Certificate and pressing Select) I am shown the following error: No machine certificates found. Certificate authentication cannot be used because your keychain does not contain any suitable certificates. Use Keychain Access to import the certificate into your keychain. If you do not have the certificates required for authentication, contact your network administrator. Unfortunately, even when I try to generate a certificate where I override the defaults and ensure the DNS name etc. are set properly, this doesn't change. When I select Certificate Authentication for the User Auth and click Select, the certificate for the server shows up there, but obviously this isn't where I need it to be available.

    Read the article

  • How do I install the main repositories for RHEL6

    - by eisaacson
    We've set up RHEL6 on a new server. As far as we can tell, our subscription is all set up properly. However, when I run yum repolist, it doesn't show any repositories, and /etc/yum.repos.d/redhat.repo is empty. I tried pasting in the content from another RHEL6 server's redhat.repo, but as soon as I run yum, it wipes it out again. I just need to get the basic Red Hat repositories set up so I can install packages. EDIT: Using the GUI, I went to System > Administration > Red Hat Subscription Manager. Under the 'Products' tab, it did not show any products. EDIT: When I run yum update, here's what I get: # yum update Loaded plugins: product-id, refresh-packagekit, security, subscription-manager This system is receiving updates from Red Hat Subscription Management. Setting up Update Process No Packages marked for Update When I log in to the Red Hat customer portal, it shows the subscription as active. EDIT: To make sure I wasn't having a subscription issue, I re-registered and re-subscribed; I get all the same results. # subscription-manager register --force # subscription-manager subscribe --pool=*redacted* EDIT: contents of /etc/yum.conf: [main] cachedir=/var/cache/yum/$basearch/$releasever keepcache=0 debuglevel=2 logfile=/var/log/yum.log exactarch=1 obsoletes=1 gpgcheck=1 plugins=1 installonly_limit=3 contents of /etc/yum/pluginconf.d/rhnplugin.conf: [main] enabled = 0 gpgcheck = 1
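
    A few standard subscription-manager/yum commands that show whether the subscription really attached and which repos it should be writing into redhat.repo; this is just a diagnostic sketch:

        subscription-manager identity
        subscription-manager list --consumed
        subscription-manager repos --list
        yum repolist all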

    Read the article

  • Exim and receiving email with large recipient lists

    - by AceJordin
    I have Exim4 running on Debian configured to receive mail on multiple domains. Exim is set to forward all email that is received to one of the domains to another box. This box is configured with a catchall mailbox that everything goes in. My issue is that when an email is sent to the domain, which contains a large amount of addresses (all to the same domain, but different users), Exim will receive the single email over multiple connections. This means that the catchall mailbox receives multiple copies of the single email all containing the full recipient list. For example, I was able to reproduce it by sending an email from my gmail account that contained 500 recipients (eg [email protected]; [email protected]; [email protected]; etc. for a total of 500). Exim received the message as 20 messages (25 recipients per; appears to be a gmail server setting). So the catchall mailbox received 20 messages, each containing all 500 addresses. I'm pretty sure I understand why this is happening but is there any way I can configure Exim to only receive it once, or to combine it into one? Is there anything that can be done on my end, or am I at the mercy of the sending email server? This is causing havoc with a process that polls the catchall mailbox and parses each recipient in each email.

    Read the article

  • ssh timeout issue connecting to an EC2 instance on OS X

    - by mamusr
    I am new to AWS and not a networking expert, but curious to know more about it. I created a VPC with a public subnet only. Then I created an EC2 instance using an Ubuntu 14.04 64-bit PV AMI image (ami-e84d8480), as well as generating the key pair needed to connect to it through ssh. I followed Amazon's instructions to connect to an EC2 instance via ssh, which did not work. Here is my attempted input and debug log: Running on OS X 10.9.4 user$ ssh -vvv -i key.pem [email protected] OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011 debug1: Reading configuration data /etc/ssh_config debug1: /etc/ssh_config line 20: Applying options for * debug1: /etc/ssh_config line 102: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to xxx.xxx.xxx.xxx [xxx.xxx.xxx.xxx] port 22. debug1: connect to address xxx.xxx.xxx.xxx port 22: Operation timed out ssh: connect to host xxx.xxx.xxx.xxx port 22: Operation timed out To attempt to resolve the issue: I enabled the SSH port. I tried different usernames other than ubuntu, like ec2-user and root. I initially set an inbound ssh rule in the security group to allow connections from only my IP address; when that did not work, I changed it to allow any IP to connect. But those actions did not fix the problem. Here are my guesses as to what I am missing in getting the EC2 instance connection to work. My /etc/ssh_config file may be preventing the connection from taking place. I may have missed an important networking detail when setting up the VPC. I do not have a public IP address specified for the instance; I am connecting through the private IP address. My questions for the community: am I going about it the wrong way connecting to the instance through the private IP address? If so, do I need to specify a public IP address for it to connect, or some other method?
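
    A private VPC address is not reachable from the internet at all, so the usual checklist is a public or Elastic IP on the instance plus a 0.0.0.0/0 route to an internet gateway on the subnet. Sketched below with the AWS CLI, where every ID is a placeholder; this assumes the CLI is installed and configured:

        aws ec2 allocate-address --domain vpc
        aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-0abc1234
        # confirm the public subnet's route table sends 0.0.0.0/0 to an internet gateway
        aws ec2 describe-route-tables --filters Name=vpc-id,Values=vpc-0abc1234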

    Read the article

  • Test whether svn REPO changes are reflected in Working Copy

    - by user492160
    Requirement: changes will be made to the REPO directory and this should get propagated to the WC (working copy), as opposed to the normal direction of WC to REPO. Scenario: my svn repo is /var/www/svn/drupal and my checkout dir / working copy is /var/www/html/drupalsite. So I've done this: edited the post-commit hook to contain "/usr/bin/svn update /var/www/html/drupalsite". I won't make any change to the svn WC. I'll make changes to the svn REPO (/var/www/svn/drupal). After changes are made to the svn repo, run "svn commit /var/www/html/drupalsite". This will trigger the post-commit hook, which in turn will run "svn update /var/www/html/drupalsite", and thus my WC will get updated with the changes from the REPO. Query: a. Would the above steps 1-3 help achieve my 'Requirement'? b. I'd need advice on how to test whether the above setup works successfully or not. I'm at a loss about the success of steps 1-3, which is why query (a) is present; this is a bit more of a concern for me. NB: I'm new to Subversion. Whatever I've configured till now has been done by reading articles online. The reason for query (b) is that I'm not into development. It seems to be a PHP Drupal website and I happen to be setting it up, so I'm not aware of how to make a "PROPER" change in the REPO so that it gets reflected in the WC. If it is reflected, my configs are right and the team can start on development. I manually put a random file/folder into the REPO dir to try to see a change in the WC and ran steps 1-3, but to no avail, and later on learned that that was NOT the way to make a change to a REPO. Please advise. Thanks
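
    For what it's worth, a minimal version of the hook and of a test for it might look like the sketch below (paths are the ones from the question; the log file is an assumption). Two things to keep in mind: copying files straight into the repository's directory on disk is never a valid change (commits must go through an svn client, which is why the manual file/folder test had no effect), and the hook runs as the user that owns the server process handling the commit, so that user must be able to write to the working copy:

        #!/bin/sh
        # /var/www/svn/drupal/hooks/post-commit  -- must be executable (chmod +x)
        REPOS="$1"
        REV="$2"
        /usr/bin/svn update /var/www/html/drupalsite >> /var/log/svn-postcommit.log 2>&1

    To test: commit any trivial change from any checkout of the repo, then compare revisions; if the hook fired, the two numbers match:

        svnlook youngest /var/www/svn/drupal
        svn info /var/www/html/drupalsite | grep Revision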

    Read the article

  • CentOS yum install git-svn

    - by bob
    Running yum install git-svn on CentOS produces the following errors: Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * addons: mirror.eshk.hk * base: centos.01link.hk * epel: mirror.bjtu.edu.cn * extras: mirror.eshk.hk * rpmforge: apt.sw.be * updates: mirror.vpshosting.com.hk Setting up Install Process Resolving Dependencies --> Running transaction check ---> Package git-svn.i386 0:1.7.3.4-1.el5.rf set to be updated --> Processing Dependency: perl(SVN::Core) for package: git-svn --> Processing Dependency: perl(Error) for package: git-svn --> Processing Dependency: perl(Term::ReadKey) for package: git-svn --> Running transaction check ---> Package perl-Error.noarch 1:0.17010-1.el5 set to be updated ---> Package perl-TermReadKey.i386 0:2.30-4.el5 set to be updated ---> Package subversion-perl.i386 0:1.4.2-4.el5_3.1 set to be updated --> Processing Dependency: subversion = 1.4.2-4.el5_3.1 for package: subversion-perl --> Finished Dependency Resolution subversion-perl-1.4.2-4.el5_3.1.i386 from base has depsolving problems --> Missing Dependency: subversion = 1.4.2-4.el5_3.1 is needed by package subversion-perl-1.4.2-4.el5_3.1.i386 (base) Error: Missing Dependency: subversion = 1.4.2-4.el5_3.1 is needed by package subversion-perl-1.4.2-4.el5_3.1.i386 (base) You could try using --skip-broken to work around the problem You could try running: package-cleanup --problems package-cleanup --dupes rpm -Va --nofiles --nodigest The program package-cleanup is found in the yum-utils package.
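
    The missing dependency usually means the subversion package already installed (or preferred from another repo such as rpmforge) is newer than the 1.4.2 build that base's subversion-perl was built against. A couple of diagnostic commands, as a sketch, to see which versions are in play before deciding which repository to prefer or exclude:

        rpm -q subversion subversion-perl
        yum --showduplicates list subversion subversion-perl
        yum info subversion        # shows which repo the installed copy came from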

    Read the article

  • IIS 6 ASP.NET default handler-mappings and virtual directories

    - by mlauter
    I'm having a problem with setting a default mapping in IIS 6. I want to secure *.HTML files with ASP.NET forms authentication. The problem seems to have something to do with using virtual directories to hold the html files. Here's how it's set up: sample directory tree c:/inetpub/ (nothing in here) d:/web_files/my_web_apps d:/web_files/my_web_apps/app1/ d:/web_files/my_web_apps/app2/ d:/web_files/my_web_apps/html_files/ app1 and app2 both access the same html_files directory, so html_files is set as a virtual directory in the web apps in IIS... sample web directory tree //app1/html_files/ (points to physical directory: d:/web_files/my_web_apps/html_files/) //app2/html_files/ (points to physical directory: d:/web_files/my_web_apps/html_files/) If I put a file called test.html in the root of //app1/ and then add the default mapping to the ASP.NET DLL and set up my security on the root folder with deny="?", then accessing test.html works exactly as expected. If I'm not authenticated, it takes me to the login.aspx page, and if I am authenticated then it displays test.html. If I put the test.html file in the html_files directory I get a totally different behavior. Now the login.aspx page loads, and I stuck some code in to check if I was still authenticated: <p>autheticated: <%=User.Identity.IsAuthenticated%></p> I figured it would say false, because why else would it bother to load the login page? Nope, it says true - so it knows I'm authenticated, but it won't give me access to the test.html file. I've spent several hours on this and haven't been able to solve it. I'm going to spend some more time on Google to see if I've missed something. Fingers crossed.

    Read the article
