Search Results

Search found 23079 results on 924 pages for 'local variables'.

Page 373 of 924

  • Postfix + procmail - delivery fails because "can't create user output file" - on CentOS 6.2

    - by jshin47
    I verified that my postfix installation / relaying setup worked. Now I am having trouble with procmail. I have it wired to postfix with the following command:

        mailbox_command = /usr/bin/procmail -f -a "$USER"

    I have nothing in my procmail config but the following:

        LOGFILE=/var/procmailrc/log

    And I send an email to a recipient that previously worked (before I attached procmail). Now it fails with this error:

        Apr 6 14:07:05 localhost postfix/qmgr[15194]: D0C3DFF6E1: from=<[email protected]>, size=938, nrcpt=1 (queue active)
        Apr 6 14:07:05 localhost postfix/local[1953]: D0C3DFF6E1: to=<[email protected]>, orig_to=<postmaster>, relay=local, delay=0.05, delays=0.02/0.01/0/0.02, dsn=5.2.0, status=bounced (can't create user output file. Command output: procmail: Couldn't create "/var/spool/mail/nobody" procmail: Couldn't read "//root" )
        Apr 6 14:07:05 localhost postfix/bounce[1955]: warning: D0C3DFF6E1: undeliverable postmaster notification discarded
        Apr 6 14:07:05 localhost postfix/qmgr[15194]: D0C3DFF6E1: removed

    It seems like there is some sort of permissions issue, but I do not know what the problem is, nor do I understand how I would go about diagnosing it further. The logfile that I specified is empty, by the way. How can I make procmail+postfix work?
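
    The bounce names two separate failures: procmail could not write /var/spool/mail/nobody and could not read //root, which suggests the delivery is running as an unexpected user (the mail was to postmaster, which is normally aliased). As a first diagnostic pass, a sketch of read-only checks that usually narrow this down:

        # confirm what postfix actually hands to procmail and who owns the mail spool
        postconf mailbox_command
        ls -ld /var/spool/mail
        ls -l /var/spool/mail/ | head
        # postmaster is delivered as whatever it is aliased to, so check where it points
        grep '^postmaster' /etc/aliases
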

  • Cron won't use msmtp to send emails in case of failed cronjob

    - by Glister
    I'm trying to configure a machine so that it will send me an email if one of the cronjobs outputs something in case of an error. I'm using Debian Wheezy. Cron is working normally (without the email functionality). msmtp is installed and configured. I have already symlinked /usr/{bin|sbin}/sendmail to /usr/bin/msmtp. I can send email by using:

        echo "test" | mail -s "subject" [email protected]

    or by executing:

        echo "test" | /usr/sbin/sendmail

    Without the symlink (/usr/sbin/sendmail) cron will tell me that:

        (CRON) info (No MTA installed, discarding output)

    With the symlinks I get:

        (root) MAIL (mailed 1 byte of output; but got status 0x004e, #012)

    Can you suggest how to configure the cron/msmtp pair? Thanks!

    EDIT: Note: I wrote "msmtpd" by mistake. It is not a daemon but rather an SMTP client named just "msmtp" (without the trailing "d"). It is executed on demand and it is not running in the background all the time. When I try to send an email by using msmtp directly it works:

        echo "test" | msmtp [email protected]

    On the far side, in the logs of the SMTP server, I read:

        Nov 2 09:26:10 S01 postfix/smtpd[12728]: connect from unknown[CLIENT_IP]
        Nov 2 09:26:12 S01 postfix/smtpd[12728]: 532301C318: client=unknown[CLIENT_IP], sasl_method=CRAM-MD5, [email protected]
        Nov 2 09:26:12 S01 postfix/cleanup[12733]: 532301C318: message-id=<>
        Nov 2 09:26:12 S01 postfix/qmgr[2404]: 532301C318: from=<[email protected]>, size=191, nrcpt=1 (queue active)
        Nov 2 09:26:12 S01 postfix/local[12734]: 532301C318: to=<[email protected]>, orig_to=<[email protected]>, relay=local, delay=0.62, delays=0.59/0.01/0/0.03, dsn=2.0.0, status=sent (delivered to command: IFS=' ' && exec /usr/bin/procmail -f- || exit 75 #1001)
        Nov 2 09:26:12 S01 postfix/qmgr[2404]: 532301C318: removed
        Nov 2 09:26:13 S01 postfix/smtpd[12728]: disconnect from unknown[CLIENT_IP]

    And the email is delivered to the target user. So it looks like the msmtp client itself is working properly. It has to be something in the cron/msmtp integration, but I have no clue what that thing might be. Can you help me?
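
    Since msmtp works when called by hand, the difference is almost certainly in how cron invokes it: cron runs the job as the crontab's owner, passes its own sendmail-style options, and msmtp then needs a configuration it can read for that user. A hedged way to reproduce this outside of cron (the exact flags cron passes vary by implementation, so treat these as illustrative):

        # run as the same user the crontab belongs to (root here)
        printf 'To: root\nSubject: cron test\n\nbody\n' | /usr/sbin/sendmail -t
        # print the configuration msmtp would actually use, without sending anything
        msmtp --pretend root
        # a system-wide /etc/msmtprc with a default account, readable by that user,
        # avoids the usual per-user configuration surprises
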

  • DNS server and fallback outside home

    - by Jens
    I have my own DNS server at home to access local names, and that is working fine. But my laptop obviously leaves the home now and then, and on other networks my DNS server is not reachable... So I figured I would just add Google as the secondary DNS. But when I do that, I suddenly can't access my local stuff any more: the pages won't resolve (at home, that is), as if the laptop is getting a quicker response from Google's DNS, which of course can't find anything on the names I use locally. If I then remove the secondary DNS and keep only my own, it works fine again. So do I somehow need to use different DNS servers depending on which network I'm on? I already use separate DNS settings when I connect using my 3G modem, but hotspots seem to use the same settings regardless (at least on the train). Can it also differ between wired connections? Is there another solution? OS: Windows 7 Ultimate, x64 EDIT: Currently trying this hack/fix for the time being: http://blog.johnruiz.com/2011/12/windows-does-not-always-honor-dns-order.html

  • ADSL to T1, Is it worth it for us?

    - by Jack Hickerson
    The company I work for has roughly 45-55 simultaneous users (local and remote/VPN) logged in at a given time. We currently subscribe to an ADSL connection, but we have been experiencing slower upload/download speeds as our number of users increases. So I have a few questions with regard to upgrading our connection to a T1 line. I am aware that the number of channels on a T1 line is much greater than that of our current ADSL connection, but I have heard that the number of active users on a T1 line should be no greater than ~30 for optimal performance. I would think this statement depends on what each user is using the connection for and could change with that variable. That being said, I have tried to break down how the line would be used in our organization based on our major departments: Sales (~60% of total users) - everyday surfing, email, research, occasional streaming media. Marketing (~15% of total users) - heavy reliance on uploading/downloading, streaming media, file sharing. Other (~25% of total users) - email, rare use of any connection-intensive activities. I have considered keeping the ADSL for our local users and dedicating the T1 to our remote users (or vice versa), but the cost is significantly higher than what we had hoped for. All factors being equal (number of users, frequency of downloads/uploads from our current activities), would you expect a significant performance increase in moving from our current ADSL line to a T1 line? What are your thoughts or recommendations?
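
    For scale, a T1 is a fixed 1.544 Mbit/s in each direction, so the per-user share is easy to estimate. This is back-of-envelope arithmetic only; real contention depends on how many users are active at the same instant, and the 768 kbit/s ADSL upstream figure below is a typical value, not the poster's actual plan:

        # rough per-user share of a T1 if all ~50 users were active simultaneously
        echo "scale=1; 1544/50" | bc     # ~31 kbit/s each
        # compared with a typical 768 kbit/s ADSL upstream
        echo "scale=1; 768/50" | bc      # ~15 kbit/s each
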

  • Hosting websites in our Workplace custom-built datacentre

    - by i.h4d35
    I'm faced with a unique learning opportunity at work at the moment. Due to the slowdown (amongst other reasons), the powers that be at my office have decided to abandon our shared hosting providers (both shared and dedicated hosting) and to host the websites in our office's datacentre. We're running 7 websites, where the average unique hits per day at the moment are about 900. We have 2 servers set aside for this - one is a Dell PowerEdge 1850 (Intel Xeon 3 GHz x2, 4GB RAM, 73GB HDD) and the other is an HP DL380 G3 (Intel Xeon 2.8 GHz, 6GB RAM, 73GB HDD).
    a) I would like to know the pros and cons of going ahead with this project. All the sites will be hosted on a single IP. In all probability, the OS is going to be CentOS.
    b) Do you think I should bring virtualization into this equation (KVM/Xen)? I was thinking in terms of separate instances for the DB server and the frontend, though I do not know if this is the best way to go.
    c) Should I be trying to use cloud stacks like OpenStack and try to make it look like websites hosted on some sort of public cloud? (something that I checked out here). Here is something else I came across, which looks similar to what needs to be done at our office.
    About the websites - of the 7 websites, 4 are basic static websites which basically give a whole lot of information about a few local institutions. The remaining 3 are local product-based websites developed in PHP wherein the end user can view products and order them online. I am trying to take this as a learning experience wherein I can learn to build something from scratch and save the company a little something in the process. The migration needs to be completed by Easter, so I guess that gives us some time (or am I being overly optimistic??). I am confused here and would appreciate all the help I can get. Thanks in advance.

  • Why does sendmail resolve to the ISP domain?

    - by digital illusion
    I wish to set up a local mail server for debugging purposes using Fedora 15. I set up sendmail, but there is a problem. When I'm not connected to the internet, the local mail server delivers correctly (to localhost), and in /var/log/mail I see that I correctly delivered a mail to [email protected]:

        Jun 21 18:24:56 PowersourceII sendmail[6019]: p5LGOttt006019: [email protected], size=328, class=0, nrcpts=1, msgid=<[email protected]>, relay=adriano@localhost
        Jun 21 18:24:56 PowersourceII sendmail[6020]: p5LGOuSV006020: from=<[email protected]>, size=506, class=0, nrcpts=1, msgid=<[email protected]>, proto=ESMTP, daemon=MTA, relay=PowersourceII.localdomain [127.0.0.1]
        Jun 21 18:24:56 PowersourceII sendmail[6019]: p5LGOttt006019: [email protected], [email protected] (500/500), delay=00:00:01, xdelay=00:00:00, mailer=relay, pri=30328, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (p5LGOuSV006020 Message accepted for delivery)

    When I connect, NetworkManager fills in /etc/resolv.conf with:

        domain fastwebnet.it
        search fastwebnet.it localdomain
        nameserver 62.101.93.101
        nameserver 83.103.25.250

    Now sendmail does not work any longer and tries to send messages to my ISP domain, as seen in the log:

        Jun 21 18:40:02 PowersourceII sendmail[6348]: p5LGe1LV006348: [email protected], [email protected] (500/500), delay=00:00:01, xdelay=00:00:01, mailer=relay, pri=30327, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (p5LGe10n006352 Message accepted for delivery)
        Jun 21 18:40:02 PowersourceII sendmail[6354]: p5LGe10n006352: to=<[email protected]>, delay=00:00:01, xdelay=00:00:00, mailer=esmtp, pri=120651, relay=mx3.fastwebnet.it. [85.18.95.21], dsn=5.1.1, stat=User unknown

    As you can see, it tries to deliver a mail to [email protected], and fails. The setup works under other ISPs. How can I avoid the fastweb ISP DNS relay? Thank you
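
    What the second log shows is sendmail qualifying the unqualified local address with the search domain that NetworkManager wrote into resolv.conf, so a bare recipient ends up at the ISP's mail exchanger. One hedged approach, a sketch only, is to pin sendmail's idea of its own canonical name so it stops following the ISP search domain (confDOMAIN_NAME is a standard sendmail m4 macro; the hostname value is taken from the poster's own log lines):

        # in /etc/mail/sendmail.mc:
        #   define(`confDOMAIN_NAME', `PowersourceII.localdomain')dnl
        # then rebuild the config and restart (Fedora's sendmail.mc already includes cf.m4)
        m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf
        service sendmail restart
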

  • Specific DNS sometimes resolves to wildcard, incorrectly

    - by Mojo
    I have an intermittent problem, and I'm not sure where to start trying to troubleshoot it. In our dev environment, we have two visible IP addresses on load balancers, one to the front-end, and one to a number of back-end service machines. The front-end is configured to take a wildcard DNS name to support generic "portals":

        dev.example.com         A      10.1.1.1
        *.dev.example.com       CNAME  dev.example.com

    The back-end servers are all specific names within the same space:

        core.dev.example.com    A      10.1.1.2
        cms.dev.example.com     CNAME  core.dev.example.com
        search.dev.example.com  CNAME  core.dev.example.com

    Here's the problem. Periodically a developer or a program trying to reach, say, cms.dev.example.com will get a result that points to the front-end, instead of the back-end load balancer:

        cms.dev.example.com is an alias to core.dev.example.com
        core.dev.example.com is an alias to dev.example.com   (WRONG!)
        dev.example.com 10.1.1.1

    The developers are all on Mac OS X machines, though I've seen the problem occur on an Ubuntu machine as well, using a local cloud host DNS resolver. Sometimes the developer is using a VPN, which directs the DNS to its own resolver, and sometimes he's on the local net using a DNS resolver assigned by the NAT router. Sometimes clearing the Mac OS X DNS cache, logging into the VPN, then logging out of the VPN, will make the problem go away. The origin authoritative server is on zerigo, and a dig directly to their name servers always seems to give the correct answer. The published DNS cache time for these records is 15 minutes, but the problem has been intermittent for about a week. Any troubleshooting suggestions?
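
    When the answer differs between resolvers, capturing the full response from the suspect resolver and from the authoritative servers at the moment the failure occurs is usually the quickest way to see which cache is handing back the wildcard. A sketch of the checks (the zerigo name-server hostname below is a placeholder, not taken from the post):

        dig cms.dev.example.com                      # whatever resolver the client is currently using
        dig cms.dev.example.com @a.ns.zerigo.net     # authoritative answer, for comparison
        scutil --dns                                 # macOS: which resolvers are actually in effect
        sudo dscacheutil -flushcache                 # macOS: clear the local DNS cache before retesting
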

  • Painfully slow login to AD bound Mac OS X Leopard machine when off home network

    - by GeeBee
    Dear all, just looking for a little help with this problem that seems to trip a lot of people up and is causing me no end of grief. I have a number of fully patched OS X Leopard machines that are bound to my AD (Server 2003). When on the home network, logging in is swift and works as expected. When users take the machines off site, login can take 5 minutes or more: the user enters correct credentials but the desktop does not appear for a very long time. Outside the office, I have tried logging in using a local admin account, switching off AirPort and then logging in using an AD account. In this situation login is immediate again. It all seems as if Leopard is finding a suitable wireless network, spending far too long looking for the domain before eventually giving up and using the cached credentials instead. I have read that disabling Bonjour on the machine will stop this problem (I have not yet tested it) http://www.macwindows.com/leopardAD.html#111607z ...but I am reluctant to use this "solution" as I would like to be able to use Bonjour on the local network as well as having AD-bound machines. However, is disabling Bonjour really the only answer? Is there not some time-out setting somewhere that could be amended to stop Leopard spending forever looking for home? Any help would be very gratefully received. Thanks, Gordon

  • Simple options for port forwarding to a different port?

    - by Nick
    I have three network printers at our local office, all of which listen on port 9100. None of them offer the option of changing the listening port. We have a single public static IP address, and access to our main network is through a Linksys WRT-54G. We need to be able to print to these printers from outside the office. The problem is, with the 54G, I can only forward a port to the SAME port on a particular IP address. What I really need is a way to forward to an IP address and a DIFFERENT port. I need to do this:

        In port    Destination
        9100       192.168.1.1 : 9100
        9101       192.168.1.2 : 9100
        9102       192.168.1.3 : 9100

    So I'm looking for options. I could set up an old computer with two network cards and iptables, I suppose, but that seems like a lot of overhead for something relatively simple. Is there a way a virtual machine (read: one network card) could do the advanced port forwarding? Where I forward all traffic to it, and it forwards it on to the right printer? Or what about those mini Linux distros that replace the WRT-54G's firmware? Do any of those support what I need "out of the box"? I have a spare WRT - could I make it an iptables router? Recommendations for mini distros? Or is there an off-the-shelf product that does this (cheap/local preferred)? Any advice / options appreciated. Thanks!
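
    Whether it ends up on a spare box, a one-NIC VM, or a WRT54G reflashed with dd-wrt/OpenWrt, the translation itself is just one DNAT rule per printer. A minimal iptables sketch (printer addresses from the table above; whether the MASQUERADE lines are needed depends on whether this box is the printers' default gateway):

        echo 1 > /proc/sys/net/ipv4/ip_forward
        # rewrite incoming 9101/9102 to port 9100 on the second and third printers
        iptables -t nat -A PREROUTING -p tcp --dport 9101 -j DNAT --to-destination 192.168.1.2:9100
        iptables -t nat -A PREROUTING -p tcp --dport 9102 -j DNAT --to-destination 192.168.1.3:9100
        # if this box is NOT the printers' default gateway, also source-NAT toward them
        # so the replies come back through the same machine
        iptables -t nat -A POSTROUTING -p tcp -d 192.168.1.2 --dport 9100 -j MASQUERADE
        iptables -t nat -A POSTROUTING -p tcp -d 192.168.1.3 --dport 9100 -j MASQUERADE
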

  • Cannot install mercurial properly - PYTHONPATH error

    - by evident
    Hi, I have a server running Ubuntu 10.04 on which I wanted to install Mercurial via:

        % sudo apt-get install mercurial

    It seems to have installed successfully and doesn't show me any error messages. But when I try it I get:

        % hg
        abort: couldn't find mercurial libraries in [/usr/bin /usr/lib/python2.6 /usr/lib/python2.6/plat-linux2 /usr/lib/python2.6/lib-tk /usr/lib/python2.6/lib-old /usr/lib/python2.6/lib-dynload /usr/lib/python2.6/dist-packages /usr/lib/pymodules/python2.6 /usr/local/lib/python2.6/dist-packages]
        (check your install and PYTHONPATH)

    I've googled for a while now and found some sites with the same problem, but I still have no idea how to fix it, since none of them really say what I need to look for or what I need to add to my PYTHONPATH... By the way, right now my PYTHONPATH seems to be empty:

        % echo $PYTHONPATH
        %

    This is what I get if I look into my /usr/lib/ directory for mercurial:

        % find /usr/lib/py* -name 'mercurial*'
        /usr/lib/pymodules/python2.6/mercurial
        /usr/lib/pymodules/python2.6/mercurial-1.4.3.egg-info
        /usr/lib/pyshared/python2.6/mercurial

    Can anybody please help me with that? What (and how) should I set my PYTHONPATH to? I already tried reinstalling, installing with "easy_install mercurial" or with "aptitude reinstall mercurial", but nothing helped. I always get this same error. Would be great if anyone could help... thanks!

    ADDITION: Building from scratch didn't work out well... when I am logged in as root I can use hg, but when I access with my normal user I get:

        % hg
        Traceback (most recent call last):
          File "/usr/local/bin/hg", line 4, in <module>
            import pkg_resources
          File "/usr/lib/python2.6/dist-packages/pkg_resources.py", line 2659, in <module>
            parse_requirements(__requires__), Environment()
          File "/usr/lib/python2.6/dist-packages/pkg_resources.py", line 546, in resolve
            raise DistributionNotFound(req)
        pkg_resources.DistributionNotFound: mercurial==1.7.2
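
    The find output shows the Ubuntu package put its libraries under /usr/lib/pymodules/python2.6 (managed by python-support), while the traceback comes from /usr/local/bin/hg, which looks like a leftover easy_install/source build pinned to mercurial==1.7.2. A hedged first check to tell the two installs apart:

        # which hg is actually first on the PATH for the failing user?
        which hg
        type -a hg
        # exercise the packaged libraries directly through the packaged binary
        PYTHONPATH=/usr/lib/pymodules/python2.6 /usr/bin/hg version
        # if /usr/local/bin/hg turns out to be a stale manual install, moving it aside
        # lets the distribution package's /usr/bin/hg take over
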

  • Using GUI ftp on Win7 and Vista without additional software

    - by Stephen Jones
    Goal: provide a 'no-software' method for 'less technical' users to access a password-protected FTP location from Win7 and Vista (the existing approach for WinXP works). 'No software' means without installing additional software (e.g. FileZilla, WinSCP) - the solution is supplied to external non-technical users.
    WinXP (works): Using Windows Explorer, WinXP supports non-technical FTP access by pasting ftp://username:[email protected] into the address bar. The remote FTP site's files / directory structure becomes available and can be copied to / from easily (in the style of local file copy / paste) by a 'less technical' user.
    Win7 / Vista (doesn't work): Pasting the same URL into Windows Explorer on Win7 or Vista causes an error: "An error occurred opening that folder on the FTP server. Make sure you have permission to access that folder. Details: The connection with the server was reset."
    Notes: a) The same username/password/server typed from the (DOS) command line achieves access to the server, but this is a more 'technical' solution than desired; I am looking for a WinXP-equivalent solution. b) Under 'Control Panel' / 'Internet options' / 'Advanced' tab, the boxes for 'Enable FTP folder view' and 'Use Passive FTP' are ticked (enabled). c) Adding an inbound firewall rule for local port 20 (TCP) was attempted with no difference in results (i.e. failure).

  • Sharepoint (WSS 3.0) on SBS 2008 broken.

    - by tcv
    I recently ran the SharePoint Products and Technologies Wizard. I had hoped this would bring up SharePoint and allow me to access it so I could begin to learn. But it's not working. Here is some data that I hope is relevant. I am doing all my testing on the SBS 2008 server itself. I changed the host header in IIS to reflect an external FQDN I plan to deploy. The SBS server is remote and there are no domain-connected workstations. If I browse "localhost" over SSL, I can get to the site, albeit with a self-signed certificate warning. If I attempt to connect via SSL using either the internal FQDN (.local), the external FQDN (.net) or any other permutation thereof, I am prompted for credentials three times but am not allowed access. My account is a domain admin. The site is inaccessible using port 80 whether using localhost, the internal FQDN (.local), or the external FQDN (.net). Right now, I suspect my problem is within IIS, but I don't know. My plan is to publish the SharePoint site to the web so my partner and I can check documents in/out. Can someone help me get started in the right direction?

  • Restarting shell script with &disown using Monit

    - by Solas Admin
    I have a shell script that runs a C++ backend mail system (PluginHandler). I need to monitor this process in Monit and restart it if it fails. The script:

        export LD_LIBRARY_PATH=/usr/local/lib/:/CONFIDENTAL/CONFIDENTAL/Common/
        cd PluginHandler/
        ./PluginHandler

    This script does not have a PID file, and we run it by executing ./rundaemon.sh &disown. ./PluginHandler starts the process and starts logging into /etc/output/output.log. I stop the process by identifying the process ID with [ps -f | grep PluginHandler] and then killing the process. I can check the process in Monit just fine, but I think Monit is starting the process if it is not running, and since it can't do &disown the process ends as soon as it starts. This is the code in the monitrc file for checking this process:

        check process Backend matching "PluginHandler"
            if not exist then alert
            start "PATH/TO/SCRIPT/rundaemon.sh &disown"
            alert [email protected] only on {timeout} with mail-format {subject: "[BLAH"}

    I tried to stop the script from terminating by modifying it as follows, but this does not work either:

        export LD_LIBRARY_PATH=/usr/local/lib/:/home/CONFIDENTAL/production/CONFIDENTAL/Common/
        cd PluginHandler/
        (nohup ./PluginHandler &)
        return

    Any help to write proper Monit rules to resolve this issue would be greatly appreciated :)
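
    Monit's start program is not run through a login shell, so shell tricks like &disown do not apply; the usual pattern is to have the wrapper background the process itself and record a PID file, then point Monit at that file. A hedged sketch (the /path/to/ components are placeholders for the redacted paths above):

        #!/bin/sh
        # rundaemon.sh - start PluginHandler detached and record its PID for monit
        export LD_LIBRARY_PATH=/usr/local/lib/:/path/to/Common/
        cd /path/to/PluginHandler/ || exit 1
        nohup ./PluginHandler >> /etc/output/output.log 2>&1 &
        echo $! > /var/run/pluginhandler.pid

        # corresponding monitrc stanza (sketch):
        #   check process Backend with pidfile /var/run/pluginhandler.pid
        #     start program = "/path/to/rundaemon.sh"
        #     stop program  = "/bin/sh -c 'kill $(cat /var/run/pluginhandler.pid)'"
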

  • PHP enable sqlite phpinfo states --without-sqlite

    - by Jahmic
    I've seen similar questions but none that address my situation adequately. I'm running Apache and PHP 5.3.6 on an Amazon cloud server. phpinfo keeps stating that sqlite is disabled - at least that's what it seems from the configure line:

        './configure' ... '--without-sqlite'

    In other parts of the phpinfo() output:

        PDO
        PDO drivers                 mysql, sqlite
        PDO Driver for SQLite 3.x   enabled
        SQLite Library              3.6.20

        sqlite3
        SQLite3 support             enabled
        SQLite3 module version      0.7-dev
        SQLite Library              3.6.20
        Directive                   Local Value    Master Value
        sqlite3.extension_dir       no value       no value

    and at least one of the following PHP checks fails:

        if (!extension_loaded('SQLite') OR !function_exists('sqlite_open'))

    Yum install states that both sqlite and pdo-lite are already installed. I've tried to enable sqlite by editing my local php.ini and adding:

        ; Enable sqlite3 extension module
        extension=sqlite3.so

    I've checked the main php.ini (/etc/php.ini) and there is nothing specific about disabling it. In fact, there is a sub-config file loaded in php.d that also specifies this extension, as well as another for pdo-sqlite. I'm running out of things to look for or try. Any suggestions? How do I find where the PHP configure line is set? Thanks
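
    One distinction worth separating out: '--without-sqlite' refers to the legacy sqlite (SQLite 2) extension that provides sqlite_open(), which is a different extension from sqlite3 and PDO_SQLITE, and the phpinfo() output above says those two are enabled. A quick, read-only way to see which extensions this PHP actually loads and from which ini files:

        php --ini                 # which php.ini and php.d/*.ini files are read
        php -m | grep -i sqlite   # loaded modules: expect sqlite3 and pdo_sqlite
        php -r 'var_dump(extension_loaded("sqlite3"), extension_loaded("sqlite"), class_exists("PDO"));'
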

  • Can't make nodejs with mingw32: pkg-config can't find gnutls

    - by valya
    I'm trying to compile nodejs using MSYS, mingw32 on Windows 7 64-bit:

        Valentin Golev@VALYASNOTEBOOK /home/Valentin_Golev/nodejs
        $ ./configure
        Checking for program CL  : ok C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\BIN\x86_amd64\CL.exe
        Checking for program CL  : ok C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\BIN\CL.exe
        Checking for program CL  : ok C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\BIN\amd64\CL.exe
        Checking for program CL  : ok c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\CL.exe
        Checking for program CL  : ok c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\CL.exe
        Checking for program CL  : ok c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\x86_amd64\CL.exe
        Checking for program CL  : ok c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\CL.exe
        Checking for program CL  : ok c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\amd64\CL.exe
        Checking for program CL  : ok c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\amd64\CL.exe
        Checking for program LINK : ok c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\amd64\LINK.exe
        Checking for program LIB  : ok c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\amd64\LIB.exe
        Checking for program MT   : ok C:\Program Files\\Microsoft SDKs\Windows\v6.0A\bin\x64\MT.exe
        Checking for program RC   : ok C:\Program Files\\Microsoft SDKs\Windows\v6.0A\bin\x64\RC.exe
        Checking for msvc : ok
        Checking for msvc : ok
        Checking for library dl : not found
        Checking for library execinfo : not found
        Checking for gnutls >= 2.5.0 : fail
        --- libeio ---
        Checking for library pthread : not found
        Checking for function pthread_create : not found
        error: the configuration failed (see 'C:\\msys\\1.0\\home\\Valentin_Golev\\nodejs\\build\\config.log')

    I have gnutls built and installed! I've checked the config.log, and there was this command:

        pkg-config --errors-to-stdout --print-errors --atleast-version=2.5.0 gnutls

    I typed it in the console:

        Valentin Golev@VALYASNOTEBOOK /home/Valentin_Golev/nodejs
        $ pkg-config --errors-to-stdout --print-errors --atleast-version=2.5.0 gnutls
        Package gnutls was not found in the pkg-config search path.
        Perhaps you should add the directory containing `gnutls.pc' to the PKG_CONFIG_PATH environment variable
        No package 'gnutls' found

    But:

        Valentin Golev@VALYASNOTEBOOK ~
        $ $PKG_CONFIG_PATH
        sh: c:/msys/1.0/local/lib/pkgconfig: is a directory

        Valentin Golev@VALYASNOTEBOOK ~
        $ cd $PKG_CONFIG_PATH

        Valentin Golev@VALYASNOTEBOOK /local/lib/pkgconfig
        $ ls
        gnutls-extra.pc gnutls.pc

    What am I doing wrong?
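
    Running $PKG_CONFIG_PATH as a command only proves the shell variable expands to an existing directory; it says nothing about whether the variable is exported into the environment that ./configure (and the pkg-config it spawns) actually sees. A hedged check from the same MSYS shell:

        # is the variable exported, and does pkg-config itself pick it up?
        env | grep PKG_CONFIG_PATH
        export PKG_CONFIG_PATH=/local/lib/pkgconfig   # the MSYS path where the ls above found gnutls.pc
        pkg-config --modversion gnutls                # should now print the gnutls version
        ./configure
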

  • Apache fails to start after WHM easyapache update

    - by Vigrond
    Trying to get some light shed on this issue. Running CentOS, I upgraded Apache to 2.2 using easyapache. All was well. I then used WHM to update MySQL to 5.5. This succeeded, but now Apache will not start. The error log was reporting things like:

        [Sun Apr 15 00:44:57 2012] [alert] getpwuid: couldn't determine user name from uid 4294967295, you probably need to modify the User directive
        [Sun Apr 15 02:27:30 2012] [notice] suEXEC mechanism enabled (wrapper: /usr/local/apache/bin/suexec)
        [Sun Apr 15 02:27:30 2012] [warn] pid file /usr/local/apache/logs/httpd.pid overwritten -- Unclean shutdown of previous Apache run?
        [Sun Apr 15 02:27:30 2012] [alert] getpwuid: couldn't determine user name from uid 4294967295, you probably need to modify the User directive
        (the same getpwuid alert repeats several more times)
        [Sun Apr 15 02:27:30 2012] [notice] Apache/2.2.22 (Unix) mod_ssl/2.2.22 OpenSSL/0.9.8e-fips-rhel5 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 configured -- resuming normal operations
        [Sun Apr 15 02:27:30 2012] [alert] Child 4063 returned a Fatal error... Apache is exiting!

    So I tried to recompile using easyapache again, but easyapache just hangs. I tried with base PHP settings, and it always gets stuck on:

        bf804000-bf819000 rw-p 7ffffffe9000 00:00 0      [stack]

    At this point in cPanel the status says "create srm.conf and access.conf for mod_frontpage". I have tried things like:

        rpm --rebuilddb
        yum clean all
        yum update

    with no luck. I'm kind of running out of ideas, and wondering if anyone could point me in the right direction.
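
    The uid 4294967295 in the alert is (uint32)-1, i.e. Apache could not turn the account named by its User/Group directives into a uid at all, which points at the user lookup rather than at Apache itself. A hedged set of read-only checks (paths assume the /usr/local/apache layout that the log lines show):

        # what user/group is httpd.conf asking for, and do those accounts resolve?
        grep -E '^[[:space:]]*(User|Group)' /usr/local/apache/conf/httpd.conf
        id nobody
        getent passwd nobody
        # verify the passwd/group databases themselves are intact (read-only check)
        pwck -r
        grpck -r
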

  • Fully FOSS EMail solution

    - by Ravi
    I am looking at various FOSS options to build a robust email solution for a government-funded university. Commercial options are to be chosen only in the worst-case scenario. Here are the requirements:

        - Approx 1000-1500 users - Postfix or Exim? (Sendmail is out ;-))
        - Mailing lists for different groups / need a web-based archive - Mailman? Sympa?
        - Centralised identity store - OpenLDAP? Fedora 389DS?
        - Secure IMAP only - no POP3 required - Courier? Dovecot? Cyrus?
        - Anti-spam - SpamAssassin? What else?
        - Calendaring - ??
        - Webmail - good to have, not mandatory - needs to be very secure... so SquirrelMail is out ;-)?

    Other questions:

        - What mailbox storage format to use? Where to store it? Database / file system?
        - Simple and effective HA options?
        - Is there a web proxy equivalent to Squid in the mail server world? Software load balancers? CARP?
        - Monitoring and alerting?
        - Backup?

    The government wants to stimulate the local economy by buying hardware locally from whitebox vendors. Also, local consultants and university students will do the integration. We looked at out-of-the-box integrated solutions like Axigen, Zimbra and GMail, but each was ruled out in favour of a DIY approach in the hope of full control over the data and avoiding vendor lock-in - which I thought was a smart thing to do. I wish more provincial governments in the developing world would think of these sorts of initiatives. As for the OS - Debian or FreeBSD would be first preference; commercial OSs need not apply. CentOS is a second-tier option...

  • Need help with custom init script

    - by churnd
    I'm trying to set up an init script for a process on Red Hat Linux:

        #!/bin/sh
        #
        # Startup script for Conquest
        #
        # chkconfig: 345 85 15      - start or stop process definition within the boot process
        # description: Conquest DICOM Server
        # processname: conquest
        # pidfile: /var/run/conquest.pid

        # Source function library. This creates the operating environment for the process to be started
        . /etc/rc.d/init.d/functions

        CONQ_DIR=/usr/local/conquest

        case "$1" in
        start)
            echo -n "Starting Conquest DICOM server: "
            cd $CONQ_DIR && daemon --user mruser ./dgate -v    # - Starts only one process of a given name.
            echo
            touch /var/lock/subsys/conquest
            ;;
        stop)
            echo -n "Shutting down Conquest DICOM server: "
            killproc conquest
            echo
            rm -f /var/lock/subsys/conquest
            rm -f /var/run/conquest.pid    # - Only if process generates this file
            ;;
        status)
            status conquest
            ;;
        restart)
            $0 stop
            $0 start
            ;;
        reload)
            echo -n "Reloading process-name: "
            killproc conquest -HUP
            echo
            ;;
        *)
            echo "Usage: $0 {start|stop|restart|reload|status}"
            exit 1
        esac
        exit 0

    However, the cd $CONQ_DIR is getting ignored, because the script errors out:

        # ./conquest start
        Starting Conquest DICOM server: -bash: ./dgate: No such file or directory
                                                                   [FAILED]

    For some reason, I have to run dgate as ./dgate; I cannot specify the full path /usr/local/conquest/dgate. The software came with an init script for a Debian system, so that script uses start-stop-daemon with the option --chdir to where dgate is, but I haven't found a way to do this with the Red Hat daemon function.
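
    When --user is given, the daemon() function from /etc/rc.d/init.d/functions hands the command to a fresh shell for that user, so the cd done in the parent script does not carry over. A hedged workaround is to put the cd and the binary inside the string that gets passed to that shell; this assumes the stock RHEL/CentOS functions file, which runs the arguments through a user shell with -c. If that assumption does not hold on this version, su mruser -c with the same quoted command is the equivalent direct form.

        start)
            echo -n "Starting Conquest DICOM server: "
            # run the cd and dgate in the same shell that drops to mruser
            daemon --user mruser "cd $CONQ_DIR && ./dgate -v"
            echo
            touch /var/lock/subsys/conquest
            ;;
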

  • Workstations cannot see new MS Server 2008 domain, but can access DHCP. (solved)

    - by Radix
    The XP Pro workstations do not see the new replacement domain upon boot; they only see their cached entry for the old (Server 2003) domain controller. The old server is not connected to the network. I have DHCP working with the same scope as the old server. In my "before-asking" search for a solution I came across the following two articles, and I recall doing things as suggested by them:
    http://www.windowsreference.com/windows-server-2008/how-to-setup-dhcp-server-in-windows-server-2008-step-by-step-guide/
    http://www.windowsreference.com/windows-server-2008/step-by-step-guide-for-windows-server-2008-domain-controller-and-dns-server-setup/
    The only possible issue is this: I was under the impression that the domain NetBIOS name needed to match the DC's NetBIOS name. The DC's NetBIOS name is city01 while the domain's FQDN is city.domain.org (I think this is mistaken and should have been just domain.org). But the second link led me to a post which I believe answers my question. I did as it instructed by opening Local Area Connection Properties, then selecting TCP/IPv4 and setting the sole preferred DNS server to the local host's static IP (10.10.1.1). Search for "Your problems should clear up" for the post I'm referencing: http://forums.techarena.in/active-directory/1032797.htm
    Have I misunderstood the instructions? I am hoping to reach the point where I can define users and user groups. Also, does TechNet have a single theoretical overview document I could read? I really don't like treating computers as magic. I will be watching this closely and will quickly answer any questions. If I've left anything out it is because I did not know it was needed.
    PS: I am loath to ask obviously basic questions, but I am tired and wish to fix this before tomorrow. Also, this is my first server installation; thank you for your help.

  • Windows Server 2008 R2 - Cannot Change DNS Domain Context on Some Machines

    - by Richie086
    So I have a small Windows Server 2008 R2 network consisting of a domain controller, a file server, a SQL server, etc. All machines are joined to a Windows domain (CPUSHIELD.COM) and show up in Active Directory Users and Computers under the Computers OU. Each computer also has a DNS record that was populated when I joined it to the domain. However, when I go to my SQL server VM (which is joined to CPUSHIELD.COM) and try to add domain users or groups to the local users or groups on my file server (which is a physical machine) or my SQL server (which is a virtual machine), for some reason I cannot change the context to the CPUSHIELD.COM domain. For example: here is the really strange thing - I have two other servers on my network that do show CPUSHIELD.COM in the "From this location" field (as I would expect with any machine joined to a domain), and on those I am able to search the local machine and/or domain for users/groups to add. I have done hundreds of Windows Server 2008 installs and this is the first time I have run into this issue. Any ideas? Let me know if you need more info.

  • Xen PV packet loss

    - by Delphinator
    I'm having some serious issues with packet loss on one of my servers. This server is a somewhat old (P4-era) machine running Debian Squeeze and Xen 4.0. There are two domUs running on it (both also Debian Squeeze), one gateway and a fileserver. Unfortunately the processor has no virtualization extensions, therefore only PV can be used. While investigating why our network seems to be slower than it should be, I found some pretty bad packet loss (~25%). After further investigation and several experiments I did a measurement between the dom0 and one of the domUs:

        Server listening on UDP port 5001
        Receiving 1470 byte datagrams
        UDP buffer size:  110 KByte (default)
        ------------------------------------------------------------
        ------------------------------------------------------------
        Client connecting to dom0, UDP port 5001
        Sending 1470 byte datagrams
        UDP buffer size:  110 KByte (default)
        ------------------------------------------------------------
        [  3] local 192.168.1.2(domU) port 33817 connected with 192.168.1.100(dom0) port 5001
        [  4] local 192.168.1.2(domU) port 5001 connected with 192.168.1.100(dom0) port 48606
        [ ID] Interval       Transfer     Bandwidth
        [  3]  0.0-10.0 sec  46.3 MBytes  38.7 Mbits/sec
        [  3] Sent 33020 datagrams
        [  3] Server Report:
        [  3]  0.0-10.0 sec  46.2 MBytes  38.6 Mbits/sec   0.030 ms    89/33019 (0.27%)
        [  3]  0.0-10.0 sec  1 datagrams received out-of-order
        [  4]  0.0-10.2 sec  43.0 MBytes  35.3 Mbits/sec  13.074 ms 11575/42256 (27%)

    tl;dr: 27% packet loss from dom0 to domU with 50Mbit UDP packets. The same thing happens from anywhere in the network. The problem gets better at smaller bandwidths (0.047% for 5Mbit) and worse at higher ones (59% for 200Mbit). I did increase the CPU weight of the dom0, there is no swapping going on, and no physical networking hardware is involved. I never expected Xen (or anything related) to drop packets, and I'm completely clueless what to try next.
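
    On a CPU-bound PV host, two things are worth ruling out before digging deeper: checksum/segmentation offload on the virtual interfaces (a common source of odd loss between dom0 and PV domUs) and drops already visible in the vif counters. A hedged sketch; the vif/eth names below are placeholders for whichever interfaces back the affected domU:

        # in dom0: per-vif counters and offload settings for the domU's backend interface
        ifconfig vif1.0                       # look at the dropped/overrun counters
        ethtool -k vif1.0                     # show current offload settings
        ethtool -K vif1.0 tx off tso off      # experiment: disable offloads on the vif
        # inside the domU, the same experiment on its own interface
        ethtool -K eth0 tx off tso off
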

  • Login with Enterprise Principal Name using sssd AD backend in Ubuntu 14.04 LTS

    - by Vinícius Ferrão
    I'm running sssd version 1.11 with the AD backend in Ubuntu 14.04 LTS (1.11.5-1ubuntu3) to authenticate users from Active Directory running on Windows Server 2012 R2, and I'm trying to achieve logins with the User Principal Name for all users of the domain. But the UPNs are always Enterprise Principal Names. Let me illustrate the problem with my user account:

        Domain: local.example.com
        sAMAccountName: ferrao
        UPN: [email protected] (there's no "local" in the UPN)

    I can successfully log in with the sAMAccountName attribute, which is fine, but I can't log in with [email protected], which is my UPN. The optimal solution for me is to allow logins with both the sAMAccountName and the UPN (User Principal Name). If that's not possible, the UPN should be the right way, rather than the sAMAccountName. Another annoyance is the homedir pattern with these options in sssd.conf:

        default_shell = /bin/bash
        fallback_homedir = /home/%d/%u

    What I would like to achieve is separate home directories derived from the EPN. For example:

        /home/example.com/user
        /home/whatever.example.com/user

    But with this pattern I can't map it the way I would like. I've looked through the man pages and was unable to find any answers for these issues. Thanks,
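
    A quick way to see which name forms this sssd build will actually accept, and to make sure stale cache entries are not masking configuration changes, is to query the NSS side directly. The sAMAccountName comes from the post; the UPN was redacted there, so the address below is a placeholder, and the fully qualified form is the one sssd derives from the domain name:

        getent passwd ferrao                          # sAMAccountName lookup
        getent passwd ferrao@example.com              # UPN form (placeholder for the redacted UPN)
        getent passwd ferrao@local.example.com        # fully qualified form sssd builds itself
        sudo sss_cache -E                             # expire sssd caches after editing sssd.conf
        sudo service sssd restart
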

  • 403 Forbidden error on Mac OSX - Apache and nginx

    - by tlianza
    Hi All, There are a million questions like this on Google, but I haven't found a solution to my problem. The default Apache install on my Mac is giving 403 Forbidden errors for everything (default directory, user home directory, virtual server, etc). After sifting through the config files, I figured I'd give nginx a try. Nginx serves files fine from its home directory, but it won't serve files from a subfolder of my user directory. I've configured a simple virtual host, and requesting index.html returns a 403 Forbidden. The error message in nginx's log file is pretty clear - it can't read the file:

        2011/01/04 16:13:54 [error] 96440#0: *11 open() "/Users/me/Documents/workspace/mobile/index.html" failed (13: Permission denied), client: 127.0.0.1, server: local.test.com, request: "GET /index.html HTTP/1.1", host: "local.test.com"

    I've opened up this directory to everyone:

        drwxrwxrwx   6 me  admin   204B Dec 31 20:49 mobile

    And all the files in it:

        $ ls -lah mobile/
        total 24
        drwxrwxrwx   6 me  admin   204B Dec 31 20:49 .
        drwxr-xr-x  71 me  me      2.4K Dec 31 20:41 ..
        -rw-r--r--@  1 me  me      6.0K Jan  2 18:58 .DS_Store
        -rwxrwxrwx   1 me  admin   2.1K Jan  4 14:22 index.html
        drwxrwxrwx   5 me  admin   170B Dec 31 20:45 nbproject
        drwxrwxrwx   5 me  admin   170B Jan  2 18:58 script

    And yet, I cannot figure out why the nginx process cannot read index.html. It's running as the "nobody" user, but the permissions are set such that anyone can read them.
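
    open() failing with "Permission denied" despite world-readable files usually means one of the parent directories is not searchable (missing the execute bit) for the worker's user: the whole chain from / down to the file has to be traversable, not just the final folder. A hedged check on the Mac, using the paths from the error log:

        # -d shows the directory itself, -e shows ACLs (macOS ls); check every level of the path
        ls -led /Users/me /Users/me/Documents /Users/me/Documents/workspace /Users/me/Documents/workspace/mobile
        # reproduce exactly what the worker can see, as its user
        sudo -u nobody cat /Users/me/Documents/workspace/mobile/index.html
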

  • hosts.deny not blocking ip addresses

    - by Jamie
    I have the following in my /etc/hosts.deny file:

        #
        # hosts.deny    This file describes the names of the hosts which are
        #               *not* allowed to use the local INET services, as decided
        #               by the '/usr/sbin/tcpd' server.
        #
        # The portmap line is redundant, but it is left to remind you that
        # the new secure portmap uses hosts.deny and hosts.allow. In particular
        # you should know that NFS uses portmap!

        ALL:ALL

    and this in /etc/hosts.allow:

        #
        # hosts.allow   This file describes the names of the hosts which are
        #               allowed to use the local INET services, as decided
        #               by the '/usr/sbin/tcpd' server.
        #

        ALL:xx.xx.xx.xx , xx.xx.xxx.xx , xx.xx.xxx.xxx , xx.x.xxx.xxx , xx.xxx.xxx.xxx

    but I am still getting lots of these emails:

        Time:     Thu Feb 10 13:39:55 2011 +0000
        IP:       202.119.208.220 (CN/China/-)
        Failures: 5 (sshd)
        Interval: 300 seconds
        Blocked:  Permanent Block

        Log entries:

        Feb 10 13:39:52 ds-103 sshd[12566]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=202.119.208.220 user=root
        Feb 10 13:39:52 ds-103 sshd[12567]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=202.119.208.220 user=root
        Feb 10 13:39:52 ds-103 sshd[12568]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=202.119.208.220 user=root
        Feb 10 13:39:52 ds-103 sshd[12571]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=202.119.208.220 user=root
        Feb 10 13:39:53 ds-103 sshd[12575]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=202.119.208.220 user=root

    What's worse is that csf is trying to auto-block these IPs when they attempt to get in, and although it does put the IPs in the csf.deny file, they do not get blocked either. So I am trying to block all IPs with /etc/hosts.deny and allow only the IPs I use with /etc/hosts.allow, but so far it doesn't seem to work. Right now I'm having to manually block each one with iptables; I would rather it automatically blocked the attackers in case I was away from a PC or asleep.
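
    Whether hosts.deny affects ssh at all depends on this particular sshd having been built against tcp_wrappers (libwrap); if it was not, the ALL:ALL line is never consulted for ssh and the csf/iptables layer has to do the blocking on its own. A hedged set of checks:

        # was sshd compiled with tcp_wrappers support?
        ldd /usr/sbin/sshd | grep -i libwrap
        # confirm the wrapper files exist and are readable
        ls -l /etc/hosts.allow /etc/hosts.deny
        # if csf entries are not taking effect, check whether the rule really reached iptables
        iptables -L -n | grep 202.119.208.220
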

  • Creating an Apache Virtual Directory, but updating Active Directory DNS

    - by SnoConeGod
    Hello all, I'm just getting started with the Zend Framework and am following a recommended procedure where I am supposed to create an Apache Virtual Directory for the public-facing portion of a new Zend project. I don't THINK I had any issues creating the Virtual Directory, but my knowledge of the required DNS changes is rather lacking. The dev server I'm using is on a Microsoft Windows Active Directory domain, so I've added A records for both the server name and the subdomain. Still, trying to browse to the site from a Windows 7 PC isn't working properly. What am I missing? What's the proper set of steps for getting an Apache-served subdomain to appear properly in a peer computer's web browser? Details below:

        server: Debian, command-line only, freshly installed today with the Zend Server CE LAMP stack
        server name: ZENDEV
        subdomain: SQUARE.ZENDEV
        AD Domain functional level: 2008 mixed (run by a mishmash of 03 and 08 servers)
        attempting to visit the sites: http://square.zendev and http://square.zendev.domain.local (name of domain redacted, but using the .local (not .com) suffix)

    Apache Virtual Directory added to httpd.conf:

        NameVirtualHost *:80
        <VirtualHost *:80>
            DocumentRoot "/var/www/square/public"
            ServerName square.localhost
        </VirtualHost>

    Is this only a problem with DNS? Or with DNS and my Virtual Directory? Thanks! John
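
    Two things can be verified independently here: whether the Windows 7 client can resolve the new names at all, and whether Apache is actually answering for them (the vhost above is named square.localhost, which matches neither URL being tested). A hedged sketch of the checks, using the hostnames from the post:

        # on the Windows 7 PC (run in cmd): do the AD DNS records resolve?
        #   nslookup square.zendev.domain.local
        #   nslookup square.zendev
        # on the Debian box: which vhosts did Apache load, and under what ServerName?
        apache2ctl -S
        # note: Zend Server CE may ship its own Apache control script in a different
        # location; /usr/sbin/apache2ctl is the stock Debian path
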
