Search Results

Search found 9559 results on 383 pages for 'mail rule'.

Page 315/383 | < Previous Page | 311 312 313 314 315 316 317 318 319 320 321 322  | Next Page >

  • A possible case of hacked email account. What kind of an attack is this?

    - by Rickesh John
    I own a Yahoo mail account. I am using this account for sending resumes and receiving notifications from various job portals. But yesterday, I found that some 10-15 mails had been sent to random addresses from my account. Most of them had this format: hr@<companyname>.com I am pretty sure that I didn't send any mails to such addresses. Initially, I thought the job portals may be sending mails on my behalf and Yahoo is logging them, but then I saw the contents. The contents of all those mails were a URL, which I did not click. SCARED. Also, to top it off, my "Sending Name" has been changed to 'Nice Maria'!! o_0 I have taken the necessary measures and changed my password and the secret question. I cannot delete this account as this email is registered with all the job portals and other companies. Is this a simple case of my account being compromised, or was I a victim of some web vulnerability? All the mails seem to be bot generated, with only a URL as the message body. Please advise.

    Read the article

  • Bugzilla email issue

    - by xian
    My Bugzilla system keeps hitting the following error:

        There was an error sending mail from '[email protected]' to '[email protected]': Can't send data

    I think there is some problem with my settings and configuration. First is the urlbase: I have tried setting it to bugzilla.example.com, http://127.0.0.1:81/, and http://10.0.0.236/ (my laptop's IP address; I use this laptop to set up Bugzilla), but the error still persists. What should I actually put in the urlbase field? On the Parameters page, under Email: for mail_delivery_method I chose SMTP; under mailfrom I put bugzilla-daemon; for smtpserver, I tried leaving it blank and also setting it to 220.181.12.12, but neither solved my problem. For MySQL, the following is the data and command I used:

        C:\mysql\bin>mysql --user=root -p mysql
        Enter password: 1234

    (When I installed MySQL on my laptop, it asked me to key in a username and password; I keyed in the username 'cvuser' and the password '1234', but here it never asks me for any username.)

        Welcome to the MySQL monitor. Commands end with ; or \g.
        Your MySQL connection id is 1
        Server version: 5.5.15 MySQL Community Server (GPL)

        mysql> GRANT ALL PRIVILEGES ON bugs.* TO 'bugs'@'localhost' IDENTIFIED BY '123456';
        Query OK, 0 rows affected (0.03 sec)

    In C:\Bugzilla\localconfig, I put the following info:

        #
        # How to access the SQL database:
        #
        $db_host = "localhost";    # where is the database?
        $db_port = 3306;           # which port to use
        $db_name = "bugs";         # name of the MySQL database
        $db_user = "bugs";         # user to attach to the MySQL database
        #
        # Enter your database password here. It's normally advisable to specify
        # a password for your bugzilla database user.
        # If you use apostrophe (') or a backslash (\) in your password, you'll
        # need to escape it by preceding it with a \ character. (\') or (\\)
        #
        $db_pass = '123456';

    Can someone tell me where my mistake is? I have googled this issue for a few days but still cannot find the solution.

    Read the article

  • VPN being blocked somewhere between my BT2700HGV and DLink DFL-210

    - by Dom
    Hi, For some time I have been unable to get VPN working through my setup. I have a BT2700HGV router (a 2Wire model) and, as a firewall, the DLink DFL-210. You can't turn the firewall completely off on the BT2700HGV, so I have it set to DMZplus mode for the firewall. In theory this should then allow the VPN ports through. On the firewall I have a series of rules set up, and one is the pptp-allow rule, which should allow access on the correct ports as well. When I try to connect via VPN, however, the client machine gets an error 809. If I check the log on the DLink firewall, I see this record: http://dl.dropbox.com/u/1041315/packetdrop.PNG The laptop I am testing the VPN with is connected directly to the BT2700HGV router and I am trying to VPN from it onto 81.138.86.217. I can't work out whether I have some sort of problem in the setup of the rules on my firewall, or if the BT router (even though it's in DMZplus mode) is still blocking port 1723. I read somewhere that there were problems because BT's Openzone held onto this port for some reason. Any help would be greatly appreciated. If you need further screenshots or information then please let me know. I wasn't able to create new tags for the router and firewall name or insert the picture as I am new to the forum. Dom :-)

    Read the article

  • Need to get SMTP server on MS Server 2003

    - by Matt Dawdy
    Long story short, client paid networking company to move their website in house. Now I have to figure out how to email out from their website even though they don't have an SMTP server. At least until I install one. Their email is hosted with Gmail right now (the client's domain through Google Apps for Your Domain). I changed my code to connect as one of their users "[email protected]" and send email. Worked great for about 12 hours. All of a sudden none of the automated emails are going out now, and Google is sending the emails back saying that it is a permanent failure and Message Rejected. The link they direct me to, http://mail.google.com/support/bin/answer.py?answer=69585 is telling me that our emails look like spam. They aren't. They are emails we send to our clients about the status of their applications. Seriously, they are NOT spam. So... long story short is out the window, sorry... but I need to get an SMTP server set up inside their domain that I can send emails out of. This thing won't need to receive emails ever, and really only needs one email account, customercare. What can I do? Will I have to have the networking company open a port in the firewall? Is there one built into Server 2003?
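
    Server 2003 does ship an SMTP service as an optional IIS 6.0 component (Add/Remove Windows Components -> Application Server -> IIS -> SMTP Service); it only needs outbound port 25 open on the firewall. Assuming the site is ASP.NET (an assumption, and the from address below is a placeholder), pointing the application at a local SMTP service afterwards is a small web.config change; a minimal sketch:

        <!-- web.config (sketch): local IIS SMTP service listening on 127.0.0.1:25 -->
        <system.net>
          <mailSettings>
            <smtp deliveryMethod="Network" from="customercare@clientdomain.example">
              <network host="localhost" port="25" />
            </smtp>
          </mailSettings>
        </system.net>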

    Read the article

  • Cannot get mod_rewrite to work on Mac OSX Mountain Lion

    - by Joel Joel Binks
    I have tried everything I can think of and it still doesn't work. I am trying to get the example code from Larry Ullman's Advanced PHP book to work. His instructions were a bit lacking so I had to do some research. Here is what I have configured:

    username.conf:

        <Directory "/Users/me/Sites/">
            Options Indexes MultiViews FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>

    httpd.conf:

        LoadModule rewrite_module libexec/apache2/mod_rewrite.so
        DocumentRoot "/Users/me/Sites"
        <Directory />
            Options Indexes MultiViews FollowSymLinks
            AllowOverride All
            Order deny,allow
            Allow from all
        </Directory>
        <Directory "Users/me/Sites">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>

    .htaccess:

        <IfModule mod_rewrite.so>
            RewriteEngine on
            RewriteBase /phplearning/ADVANCED/ch02/
            # Redirect certain paths to index.php:
            RewriteRule ^(about|contact|this|that|search)/?$ index.php?p=$1
            RewriteLog "/var/log/apache/rewrite.log"
            RewriteLogLevel 2
        </IfModule>

    Nothing has worked and it won't even log to the rewrite.log file. What have I done wrong? FYI, even when I set up an extremely simple rule or use the root as the rewrite base, it still fails. I have also verified the mod_rewrite module is running. I am really angry.
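
    For what it's worth, two things in the .htaccess above stand out: the IfModule test should reference mod_rewrite.c (the module's source-file name), not mod_rewrite.so, and RewriteLog / RewriteLogLevel are only valid in the server or virtual-host configuration, never in .htaccess (which would normally produce a 500, and the whole block being silently skipped here suggests the bad IfModule test). A minimal .htaccess keeping only the rewrite rule might look like this (a sketch, not the book's exact code):

        <IfModule mod_rewrite.c>
            RewriteEngine on
            RewriteBase /phplearning/ADVANCED/ch02/
            # Send the friendly paths to index.php
            RewriteRule ^(about|contact|this|that|search)/?$ index.php?p=$1
        </IfModule>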

    Read the article

  • SpamAssassin bayesian score discrepancies

    - by CaptSaltyJack
    This makes my brain hurt. For some reason, SpamAssassin is giving high scores to certain emails, but when I test them on the command line, they get a low score. This one particular email has this in the header:

        X-Spam-Flag: YES
        X-Spam-Score: 8.521
        X-Spam-Level: ********
        X-Spam-Status: Yes, score=8.521 tagged_above=-9999 required=5
            tests=[BAYES_99=3.5, BAYES_999=0.2, HTML_MESSAGE=0.001, NO_RECEIVED=-0.001,
            NO_RELAYS=-0.001, RAZOR2_CF_RANGE_51_100=0.5, RAZOR2_CF_RANGE_E8_51_100=1.886,
            RAZOR2_CHECK=0.922, URIBL_RHS_DOB=1.514] autolearn=no

    Yet when I dump the raw email into a file msg and run sudo su amavis -c 'spamassassin -t msg', I get this output:

        Content analysis details:   (3.8 points, 5.0 required)

         pts rule name              description
        ---- ---------------------- --------------------------------------------------
         1.5 URIBL_RHS_DOB          Contains an URI of a new domain (Day Old Bread)
                                    [URIs: cliobeads.com]
        -1.0 ALL_TRUSTED            Passed through trusted hosts only via SMTP
         0.0 HTML_MESSAGE           BODY: HTML included in message
        -0.0 BAYES_20               BODY: Bayes spam probability is 5 to 20%
                                    [score: 0.1855]
         1.9 RAZOR2_CF_RANGE_E8_51_100 Razor2 gives engine 8 confidence level above 50%
                                    [cf: 100]
         0.5 RAZOR2_CF_RANGE_51_100 Razor2 gives confidence level above 50% [cf: 100]
         0.9 RAZOR2_CHECK           Listed in Razor2 (http://razor.sf.net/)

    I'm really confused as to why the email gets a completely different score when it comes in than when I run spamassassin -t. Is there some other way I should be testing emails? Also, my users have the ability to drag false positives into a folder called "False Positives," and every day a cron job fires off that runs this on every message in every user's folder:

        sa-learn --dbpath=/var/lib/amavis/.spamassassin --ham /tmp/*-*.eml >/dev/null

    I ran sudo locate bayes_toks and there's definitely only one Bayes DB on the system, in /var/lib/amavis/.spamassassin. I'm clueless; any help would be great and may help restore my sanity!
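
    One way to see which Bayes database, configuration and trust path the command-line run is actually using (the ALL_TRUSTED / NO_RELAYS difference hints that trusted_networks or per-user config, not the message itself, explains the gap — an assumption) is to repeat the test with debug output limited to the relevant areas:

        # run as the same user amavis uses, with Bayes and config debugging enabled
        sudo -u amavis spamassassin -D bayes,config -t < msg 2>&1 | less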

    Read the article

  • LDAP Authentication woes

    - by Marcelo de Moraes Serpa
    Hello list, I have a local OpenLDAP server with a couple of users. I'm using it for development purposes; here's the ldif:

        #Top level - the organization
        dn: dc=site, dc=com
        dc: site
        description: My Organization
        objectClass: dcObject
        objectClass: organization
        o: Organization

        #Top level - manager
        dn: cn=Manager, dc=site, dc=com
        objectClass: organizationalRole
        cn: Manager

        #Second level - organizational units
        dn: ou=people, dc=site, dc=com
        ou: people
        description: All people in the organization
        objectClass: organizationalunit

        dn: ou=groups, dc=site, dc=com
        ou: groups
        description: All groups in the organization
        objectClass: organizationalunit

        #Third level - people
        dn: uid=celoserpa, ou=people, dc=site, dc=com
        objectclass: pilotPerson
        objectclass: uidObject
        uid: celoserpa
        cn: Marcelo de Moraes Serpa
        sn: de Moraes Serpa
        userPassword: secret_12345
        mail: [email protected]

    So far, so good. I can bind with "cn=Manager,dc=site,dc=com" and the 12345678 password (the local server password, set up in slapd.conf). However, I would like to bind with any user under the people OU. In this case, I'd like to bind with:

        dn: uid=celoserpa, ou=people, dc=site, dc=com
        userPassword: secret_12345

    But I'm getting a "(49) - Invalid Credentials" error every time. I have tried through CLI tools (such as ldapadd, ldapwhoami, etc.) and also ruby/ldap. The bind with these credentials fails with an invalid credentials error. I thought that it could be an ACL issue; however, the ACLs in slapd.conf seem to be right:

        access to attrs=userPassword
            by self write
            by dn.sub="ou=people,dc=site,dc=com" read
            by anonymous auth

        access to *
            by * read

    I was suspecting that maybe OpenLDAP doesn't compare against userPassword? Or maybe there is some ACL configuration I am missing that is somehow affecting read access to userPassword for that specific DN. I'm really lost here; any suggestion appreciated! Cheers, Marcelo.
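
    A quick way to separate an ACL problem from a stored-password problem (a sketch against the DIT above, assuming the server is on localhost) is to bind directly as the user with ldapwhoami and, if that fails, re-set the password as the rootdn so the stored value is known for certain, then retry:

        # simple bind as the user; success prints the bound DN
        ldapwhoami -x -H ldap://localhost -D "uid=celoserpa,ou=people,dc=site,dc=com" -W

        # re-set the user's password as the rootdn, then repeat the bind above
        ldappasswd -x -H ldap://localhost -D "cn=Manager,dc=site,dc=com" -W \
            -s secret_12345 "uid=celoserpa,ou=people,dc=site,dc=com"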

    Read the article

  • How can I avoid an error in this .htaccess file?

    - by mipadi
    I have a blog. The blog is stored under the /blog/ prefix on my website. It has the usual URLs for a blog, so articles have URLs in the format /blog/:year/:month/:day/:title/. First and foremost, I want to automatically redirect visitors to the www subdomain (in case they leave that off), and internally rewrite the root URL to /blog/, so that the front page of the blog appears on the front page of the site. I have accomplished that with the following set of rewrite rules in my .htaccess file:

        RewriteEngine On

        # Rewrite monkey-robot.com to www.monkey-robot.com
        RewriteCond %{HTTP_HOST} ^monkey-robot\.com$
        RewriteRule ^(.*)$ http://www.monkey-robot.com/$1 [R=301,L]

        RewriteRule ^$ /blog/ [L]
        RewriteRule ^feeds/blog/?$ /feeds/blog/atom.xml [L]

    That works fine. The problem is that the front page of the blog now appears at two distinct URLs: / and /blog/. So I'd like to redirect the /blog/ URL to the root URL. Initially I tried to accomplish this with the following set of rewrite rules:

        RewriteEngine On

        # Rewrite monkey-robot.com to www.monkey-robot.com
        RewriteCond %{HTTP_HOST} ^monkey-robot\.com$
        RewriteRule ^(.*)$ http://www.monkey-robot.com/$1 [R=301,L]

        RewriteRule ^$ /blog/ [L]
        RewriteRule ^blog/?$ / [R,L]
        RewriteRule ^feeds/blog/?$ /feeds/blog/atom.xml [L]

    But that gave me an infinite redirect (maybe because of the preceding rule?). So then I tried this set:

        RewriteEngine On

        # Rewrite monkey-robot.com to www.monkey-robot.com
        RewriteCond %{HTTP_HOST} ^monkey-robot\.com$
        RewriteRule ^(.*)$ http://www.monkey-robot.com/$1 [R=301,L]

        RewriteRule ^$ /blog/ [L]
        RewriteRule ^blog/?$ http://www.monkey-robot.com/ [R,L]
        RewriteRule ^feeds/blog/?$ /feeds/blog/atom.xml [L]

    But I got a 500 Internal Server Error with the following log message:

        Invalid command '[R,L]', perhaps misspelled or defined by a module not included in the server configuration

    What gives? I don't think [R,L] is a syntax error.
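
    The loop itself is expected: in a per-directory (.htaccess) context mod_rewrite re-runs the rule set after every internal rewrite, so ^$ -> /blog/ and ^blog/?$ -> / keep feeding each other, and the 500 usually means the flags ended up on a different line from their RewriteRule when the file was edited. One common workaround, sketched here rather than taken from the original post, is to key the external redirect on %{THE_REQUEST}, which always holds the client's original request line and is untouched by internal rewrites:

        RewriteEngine On

        RewriteCond %{HTTP_HOST} ^monkey-robot\.com$
        RewriteRule ^(.*)$ http://www.monkey-robot.com/$1 [R=301,L]

        # Redirect only when the browser literally asked for /blog/ ...
        RewriteCond %{THE_REQUEST} ^[A-Z]+\ /blog/?(\?|\ )
        RewriteRule ^blog/?$ http://www.monkey-robot.com/ [R=301,L]

        # ... then quietly serve the blog on the front page.
        RewriteRule ^$ /blog/ [L]
        RewriteRule ^feeds/blog/?$ /feeds/blog/atom.xml [L]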

    Read the article

  • Munin with postgresql 9.2

    - by jreid9001
    I am trying to set up Munin to collect stats on a server with PostgreSQL 9.1 and 9.2 (the server is currently running 9.1; I have tested on a fresh VM with 9.2 to rule out some weird problem on the running server. I had to patch some of the plugins for 9.2 due to renamed columns (e.g. procpid to pid), but that's no problem). Munin is installed from the EPEL repos, Postgres from the official one. Both up to date. When I try to run munin-node-configure --suggest, I get this output:

        # The following plugins caused errors:
        # postgres_bgwriter:      Junk printed to stderr
        # postgres_cache_:        Junk printed to stderr
        # postgres_checkpoints:   Junk printed to stderr
        # postgres_connections_:  Junk printed to stderr
        # postgres_connections_db: Junk printed to stderr
        # postgres_locks_:        Junk printed to stderr
        # postgres_querylength_:  Junk printed to stderr
        # postgres_scans_:        Junk printed to stderr
        # postgres_size_:         Junk printed to stderr
        # postgres_transactions_: Junk printed to stderr
        # postgres_tuples_:       Junk printed to stderr
        # postgres_users:         Junk printed to stderr
        # postgres_xlog:          Junk printed to stderr

    After a lot of searching around, I edited /etc/munin/plugin-conf.d/munin-node and added the following:

        [postgres*]
        user postgres

    This stops munin-node-configure complaining about stderr and lets me add the plugins, but when I telnet to the server on 4949 and try to fetch the stats, I just get "Bad exit". When I run the plugin individually via munin-run (e.g. munin-run postgres_size_ALL), it works completely fine. Looking at /var/log/munin/munin-node.log, this is the output:

        Error output from postgres_size_ALL:
        DBI connect('dbname=template1','',...) failed: could not connect to server: Permission denied
        Is the server running locally and accepting connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
        at /usr/share/perl5/vendor_perl/Munin/Plugin/Pgsql.pm line 377
        Service 'postgres_size_ALL' exited with status 1/0.

    I am now out of ideas... the socket definitely exists, and pg_hba.conf is set to allow all users/databases from localhost with trust.
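
    Since munin-run succeeds but the daemon's run is denied access to the Unix socket, one plausible suspect is that munin-node executes the plugin in a different security context (SELinux is a common culprit on EPEL/RHEL-type systems). A low-effort sketch that sidesteps the socket entirely is to point the plugins at TCP localhost via the standard libpq environment variables, which the Postgres plugins pass through (names assumed from libpq, not verified against this exact plugin version):

        # /etc/munin/plugin-conf.d/munin-node (sketch)
        [postgres*]
        user postgres
        env.PGHOST localhost
        env.PGPORT 5432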

    Read the article

  • Returning a 404 page when a folder is accessed from one domain, but allowing access from other domains and IP addresses

    - by okw
    Situation: I want to return a 404 page ("404.php") when a folder ("hidden") is accessed from the example.com domain. I want the same folder to be accessible from a subdomain ("hidden.example.com") or from a different domain ("hidden.com"), which are both configured in a single VirtualHost entry. The server has multiple IP addresses that it listens on. Each IP address serves identical content from the example.com domain (sharing a VirtualHost entry). I want the folder to be accessible from the IP address. The server is configured to use SSL/TLS/HTTPS. HTTPS is optional on example.com, but HTTPS is enforced in the .htaccess file for the hidden folder using the rewrite rule shown below.

    /www/hidden/.htaccess:

        RewriteCond %{HTTPS} !=on
        RewriteRule .* https://%{SERVER_NAME}%{REQUEST_URI} [R,L]

    I know that {SERVER_ADDR} gives the server's IP address, but does it return the one that the client is requesting from? I'm also starting to think that something in the VirtualHosts file would be more appropriate. Any thoughts on this?

    What should be allowed:

        http://87.65.43.21/hidden/
        https://87.65.43.21/hidden/
        http://12.34.56.78/hidden/
        https://12.34.56.78/hidden/
        http://hidden.example.com/
        https://hidden.example.com/
        http://hidden.com/
        https://hidden.com/
        http://www.hidden.com/
        https://www.hidden.com/

    What should be 404-ed with 404.php:

        http://example.com/hidden/
        https://example.com/hidden/
        http://www.example.com/hidden/
        https://www.example.com/hidden/
        http://example.com/hidden/hiddenfile.php
        https://example.com/hidden/hiddenfile.php
        etc.

    Thanks.
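
    Since the deciding factor is the Host header rather than the IP the request arrived on, one sketch (assuming mod_rewrite stays the tool of choice) is to key a rule in the hidden folder's .htaccess on %{HTTP_HOST}, so requests made by bare IP or via hidden.example.com / hidden.com fall through untouched:

        # /www/hidden/.htaccess (sketch)
        RewriteEngine On
        # Serve the 404 page only when the folder is reached as (www.)example.com
        RewriteCond %{HTTP_HOST} ^(www\.)?example\.com$ [NC]
        RewriteRule ^ /404.php [L]
        # 404.php should itself emit the 404 status, e.g. header("HTTP/1.0 404 Not Found")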

    Read the article

  • Advanced DNS - Redirecting Emails to new webhost

    - by Martin
    I am not too sure if this question belongs here, but I will surely find out soon enough. I have two web hosts (not sure why it has been set up this way, but it has). I do not want to use the original web host to handle the emails, as the data allowance we get from them is 500 MB, which is already used up by hosting the website. The second web host has an unlimited data plan and was added so we could use it for the email accounts. Now the problem is I have reset the Advanced DNS Zone records on both accounts and I am not sure what they were before. (Silly me, I should have taken a backup of how it was set up beforehand, I know.) Emails were working before and going to the second host's server; now they are going to the first host, but it has no email addresses set up for use, so all emails are bouncing saying that the address does not exist.

        Host 1 IP: 192.185.96.110
        Host 2 IP: 27.54.88.66

    So far I have changed the Advanced DNS Zone record on Host 1 with the following:

        A Record: mail.australisinstitute.qld.edu.au - 27.54.88.66

    I have not made any changes on Host 2, and both hosts have the default MX records. If I need to provide any more information I can, but I just hope someone can decipher what I have said haha. Cheers in advance!
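
    For mail to land on Host 2, the domain's MX record has to point at a name that resolves to 27.54.88.66; leaving the "default" MX in place typically keeps delivery going to Host 1. A zone-style sketch of what that could look like (names taken from the A record above, priority chosen arbitrarily, entered in whatever form the host's Advanced DNS Zone editor expects):

        mail.australisinstitute.qld.edu.au.    IN  A     27.54.88.66
        australisinstitute.qld.edu.au.         IN  MX 10 mail.australisinstitute.qld.edu.au.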

    Read the article

  • Emails sent to outlook.com not being delivered

    - by imukcedup
    I'm having an issue that is a little strange. I have a cPanel webserver that I own and have root on. I was testing out emailing and noticed some issues. When I send an email to an outlook.com address, the email sends OK but nothing is received in the outlook.com mailbox. I also don't get an 'email delivery failure notification' in any mailbox.

        2014-06-12 09:53:47 SMTP connection from [127.0.0.1]:45334 (TCP/IP connection count = 1)
        2014-06-12 09:53:47 1Wv5Rr-0003rA-2K <= [email protected] H=localhost (ourdomain.com) [127.0.0.1]:45334 P=esmtpa A=dovecot_login:joe S=667 [email protected] T="This is a test message" for [email protected]
        2014-06-12 09:53:47 SMTP connection from localhost (ourdomain.com) [127.0.0.1]:45334 closed by QUIT
        2014-06-12 09:53:50 cwd=/var/spool/MailScanner/incoming/1029481 5 args: /usr/sbin/exim -C /etc/exim_outgoing.conf -Mc 1Wv5Rr-0003rA-2K
        2014-06-12 09:53:50 1Wv5Rr-0003rA-2K SMTP connection outbound 1402581230 1Wv5Rr-0003rA-2K ourdomain.com [email protected]
        2014-06-12 09:53:50 1Wv5Rr-0003rA-2K => Test Account <[email protected]> R=archive_outgoing_email T=archiver_outgoing
        2014-06-12 09:53:52 1Wv5Rr-0003rA-2K => [email protected] R=dkim_lookuphost T=dkim_remote_smtp H=mx1.hotmail.com [65.54.188.110] X=UNKNOWN:AES128-SHA256:128 C="250 <[email protected]> Queued mail for delivery"
        2014-06-12 09:53:52 1Wv5Rr-0003rA-2K Completed

    I have checked the outlook.com spam folders and it's not in there either. This is a new IP address allocation from our ISP, and there was a block on Gmail addresses, so we know it was previously used for spam. But with Gmail we got a notification of failure, and I know Outlook/Microsoft also send out notifications. Does anyone know what could be happening here? Thanks
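
    Outlook.com is known to accept a message ("250 ... Queued mail for delivery") and then silently discard it when the sending IP has a poor reputation or the domain lacks sender authentication, so publishing SPF (and ideally DKIM) before asking Microsoft's sender support to delist the IP is usually the first step. A minimal SPF TXT record sketch (the ip4 value is a documentation placeholder, not taken from the log above):

        ourdomain.com.    IN TXT "v=spf1 a mx ip4:203.0.113.25 -all"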

    Read the article

  • Proper Network Infrastructure Setup DMZ, VPN, Routing Hardware Question

    - by NickToyota
    Greetings Server Fault Universe, So here's a quick background. Two weeks ago I started a new position as the systems administrator for an expanding health services company of just over 100 persons. The individual I was replacing left the company with little to no notice. Basically, I have inherited a network of one main HQ (where I am situated), which has existed for over 10 years, with five smaller offices (fewer than 20 persons each). I am trying to make sense of the current setup. The network at the HQ includes:

        - Linksys RV082 router providing internet access for employees and site-to-site VPN connecting the smaller offices (using an RV042 each). We have both cable and DSL lines connected to balance traffic (however this does not work at all and is not my main concern right now).
        - Cisco IronPort appliance. This is the main gateway for our incoming and outgoing emails. This also has an external IP and internal IP.
        - Lotus Domino inbound and outbound email servers connected to the mentioned Cisco gateway. These also have an external IP and internal IP.
        - Two Windows 2003 and 2008 boxes running as domain controllers with DNS, of course. These also have both an external IP and internal IP.
        - Website and webmail servers, also on both external and internal IPs.

    I am still confused as to why there are so many servers connected directly to the internet. I am seriously looking to redesign this setup with proper security practices in mind (my highest concern) and am in need of a proper firewall setup for the external/internal servers, along with a VPN solution for about 50 employees. Budget is not a concern as I have been given some flexibility to purchase necessary solutions. I have been told a Cisco ASA appliance may help. Does anyone out in the Server Fault Universe have some recommendations? Thank you all in advance.

    Read the article

  • Copying email with qmail and Plesk

    - by Greg
    I need to keep a copy of all outgoing and incoming email (for a single domain if possible) using qmail or Plesk. I can't recompile qmail, so qmailtap is out of the question, as is setting QUEUE_EXTRA in extra.h. I'm pretty sure it should be possible with Plesk's mailmng utility, aka Mail Handlers, but I'm having trouble getting them to work. I've registered 2 hooks:

    incoming hook:

        ./mailmng --add-handler --handler-name=incoming --recipient-domain=example.com --executable=/xxx/incoming.sh --context=/xxx/incoming/ --hook=before-local

    incoming.sh:

        #!/bin/bash
        # The email is passed on stdin - grab it to a variable
        e=`cat -`
        # $1 = context (/xxx/incoming)
        # $3 = recipient ([email protected])
        # Create /xxx/incoming/[email protected]
        mkdir -p $1$3
        # Save the email to /xxx/incoming/[email protected]/0123456789.txt
        echo "$e" > $1$3/`date +%s%N`.txt
        # Echo PASS to stderr
        echo 'PASS' >&2
        # Echo the email to stdout
        echo "$e"

    outgoing hook:

        # ./mailmng --add-handler --handler-name=outgoing --sender-domain=holidaysplease.com --executable=/xxx/outgoing.sh --context=/xxx/outgoing/ --hook=before-remote

    The outgoing.sh file is the same as incoming.sh, except it uses $2 (sender) in place of $3 (recipient). The incoming hook does work, but saves 2 copies of each email - one before and one after SpamAssassin has run. The outgoing hook doesn't seem to get called at all. So finally, my questions are:

        1. How can I make the incoming hook save only a single copy (preferably after SpamAssassin has run)?
        2. How can I get the outgoing hook to work?

    Read the article

  • Default Gateway solution on NAT'd network (best options)

    - by kwiksand
    I've recently changed a network from a bunch of machines exposed directly to the net to a more security-conscious, firewall-fronted network with a DMZ for public services. Everything's mostly working perfectly now, but I've got the old problem of NAT loopback, where a machine within the LAN wants to access a public service via the public/external IP. I've solved this problem previously in a small/SOHO environment simply by using the NAT loopback feature of the router in use, or a simple iptables rule to do the same, but I want to make sure I make the most resilient choice with the least concern. It seems I can:

        1. Use iptables, as I've said, to DNAT and MASQUERADE so the source/destination are changed and the connection works correctly, i.e.

            iptables -A PREROUTING -t nat -d ip.of.eth0.here -p tcp --dport 8080 -j DNAT --to 192.168.0.201:8080
            iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -p tcp --dport 8080 -d 192.168.0.201 -j MASQUERADE

        2. Use split DNS, with internal mappings for public IPs.

        3. Potentially do some route nastiness by setting the default gateway to use a different externally exposed IP to then come back in via the public route (messy).

    Someone mentioned putting the default gateway within the DMZ as well (on Server Fault), but I can't find the post again. I'm sure this is a common issue for many with NAT'd networks, but I've not really seen the perfect solve-all when it comes to fixing this problem. What is your opinion?
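
    For comparison, the split-DNS option can be as small as one line if something like dnsmasq (an assumption; the hostname and address below are placeholders) already serves DNS to the LAN — internal clients then resolve the public name straight to the DMZ address and the traffic never has to hairpin through the firewall at all:

        # /etc/dnsmasq.conf on the internal resolver (sketch)
        # LAN lookups of the public name return the internal server directly
        address=/www.example.com/192.168.0.201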

    Read the article

  • Trying to use a SmartHost with my Exchange 2010 server

    - by Pure.Krome
    Hi folks, I'm trying to use a SmartHost with my Exchange 2010 server. SmartHost details:

        Secure SMTPS: securemail.internode.on.net 465   <-- Note: that's port 465
        Configure your existing SMTP settings (in your email program) to:
        - use authentication (enter your Internode username and password, enter your username as [email protected]).
        - enable SSL for sending email (SMTPS).

    So I've added the smart host details to my Org Config -> Hub Transport. I then used PowerShell to add the port:

        Set-SendConnector "securemail.internode.on.net" -Port 465

    I've then added my username/password (as suggested above) to the SmartHost as Basic Authentication (with no TLS). Then I try sending an email and I get the following error message:

        451 4.4.0 Primary target IP address responded with: "421 4.4.2 Connection dropped due to ConnectionReset."

    So I'm not sure how to continue. I also tried ticking the TLS box, but I still get the same error. If I don't use SMTPS (secure SMTP, on port 465) and use basic SMTP on port 25 with no authentication, email gets sent. Any ideas? EDIT: Btw, I can telnet to that server on port 465 from my mail server... just to make sure I'm not getting firewall'd, etc.
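
    One likely explanation (an assumption, since only the connection reset is visible): port 465 expects the client to begin TLS immediately (implicit TLS/SMTPS), whereas an Exchange 2010 send connector only speaks plain SMTP upgraded via STARTTLS, so the handshake never gets going. If the provider also offers a STARTTLS submission port (587 is the usual convention, but that would need confirming with Internode), a sketch of pointing the connector at it would be:

        # sketch: switch the connector to the STARTTLS submission port and require TLS
        Set-SendConnector "securemail.internode.on.net" -Port 587 -RequireTLS $true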

    Read the article

  • Installing Silverstripe on 000webhost.com (free web host)

    - by benwad
    Hi, I'm trying to learn how to work with SilverStripe, so I extracted the tar file to my free hosting account. I then went to install.php and edited the permissions to meet the requirements set out in install.php, but I still get two warnings from the 'webserver configuration' section:

        I can't tell what webserver you are running. Without Apache I can't tell if mod_rewrite is enabled.
        I can't tell whether mod_rewrite is running. You may need to configure a rewriting rule yourself.

    I looked in phpinfo() and mod_rewrite appears to be installed. I contacted the web host and they said it was to do with virtual directory paths, and that I should add 'RewriteBase /' to the top of my .htaccess file in the public_html directory. However, I did this and still had the same problem. The install.php script says that I can install it even with these warnings, but when I press 'install' it just refreshes the install.php page. It doesn't even overwrite the .htaccess file. 000webhost.com says they have successfully installed SilverStripe on their user accounts without much configuration, but I can't seem to find out how.

    EDIT: I managed to get to the next page, but now there is another warning which is stopping the install:

        Friendly URLs are not working. This is most likely because mod_rewrite isn't configured correctly on your site. Please check the following things in your Apache configuration; you may need to get your web host or server administrator to do this for you:
        * mod_rewrite is enabled
        * AllowOverride All is set for your directory

    I also get this error message from the server:

        Warning: unlink(mysite/_config.php) [function.unlink]: Permission denied in /home/a2716553/public_html/install.php on line 701

    Read the article

  • OpenLDAP Authentication UID vs CN issues

    - by user145457
    I'm having trouble authenticating services using uid for authentication, which I thought was the standard method for authenticating as the user. So basically, my users are added in LDAP like this:

        # jsmith, Users, example.com
        dn: uid=jsmith,ou=Users,dc=example,dc=com
        uidNumber: 10003
        loginShell: /bin/bash
        sn: Smith
        mail: [email protected]
        homeDirectory: /home/jsmith
        displayName: John Smith
        givenName: John
        uid: jsmith
        gecos: John Smith
        gidNumber: 10000
        cn: John Smith
        title: System Administrator

    But when I try to authenticate using typical webapps or services with jsmith / password, I get:

        ldapsearch -x -h ldap.example.com -D "cn=jsmith,ou=Users,dc=example,dc=com" -W -b "dc=example,dc=com"
        Enter LDAP Password:
        ldap_bind: Invalid credentials (49)

    But if I use:

        ldapsearch -x -h ldap.example.com -D "uid=jsmith,ou=Users,dc=example,dc=com" -W -b "dc=example,dc=com"

    it works. HOWEVER... most webapps and authentication methods seem to use another method. So on a webapp I'm using, nothing works unless I specify the user as uid=jsmith,ou=users,dc=example,dc=com. In the webapp I just need users to put jsmith in the user field. Keep in mind my LDAP is using the "new" cn=config method of storing settings. So if someone has an obvious ldif I'm missing, please provide it. Let me know if you need further info. This is OpenLDAP on Ubuntu 12.04. Thanks, Dave
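
    Webapps that take a bare login name usually do a two-step "search then bind": they look the user up by uid using a service (or anonymous) bind, then re-bind with the DN that comes back. So what normally needs configuring in the app is a user base DN and a search filter such as (uid=%s), rather than a fixed cn=... DN template. Sketched with the CLI tools against the entry above:

        # 1. find the DN for the login name (anonymously here; a webapp would use its service account)
        ldapsearch -x -H ldap://ldap.example.com -b "ou=Users,dc=example,dc=com" "(uid=jsmith)" dn

        # 2. bind as the DN that came back, using the user's own password
        ldapwhoami -x -H ldap://ldap.example.com -D "uid=jsmith,ou=Users,dc=example,dc=com" -W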

    Read the article

  • Postfix selective header_checks: smtpd_relay_restrictions vs. smtpd_recipient_restrictions

    - by luke
    Some of my customers have implemented commercial software that violates email RFCs, so we have had to relax our header checks. In consequence, we receive more spam. Prolog: I know the domains (customer.com) and IP addresses (a.b.c.d/C) these emails come from. Kind request for help: Is it possible to set up one Postfix (2.11) instance on Linux such that:

        - it applies only some header checks for emails from .*@customer.com,
        - but applies all header checks for all other email sources?

    I thought of a combination of mynetworks, including the subnet a.b.c.d/C, in smtpd_recipient_restrictions -- allowing all these messages through -- while simultaneously avoiding an open relay with smtpd_relay_restrictions. However, this has not worked out as expected. Any idea or help is highly appreciated. Thanks in advance. Luke

    ==EDIT== For the current issue, I solved the problem by prepending REDIRECTs to header_checks as follows:

        /^received: from.*customer.com.*by mail.own.com.*for.*luke@own.*/ REDIRECT [email protected]

    This works so far as needed. Irrespective thereof, I am still looking for a Postfix configuration that would turn this text-based setting into an IP-address-range-based forwarding rule... Thanks. Luke
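
    Since header_checks are applied by cleanup(8) for the whole instance, they cannot be switched per client within a single listener; the usual way to get per-source behaviour without a second full Postfix instance (a sketch, with the port chosen arbitrarily) is an extra smtpd entry in master.cf whose -o overrides skip header/body checks, reachable only from the customer's a.b.c.d/C range (restricted by firewall, or with mynetworks narrowed for that service):

        # /etc/postfix/master.cf (sketch): a second listener that skips header/body checks
        10025     inet  n       -       n       -       -       smtpd
            -o syslog_name=postfix/relaxed
            -o receive_override_options=no_header_body_checks
            -o smtpd_client_restrictions=permit_mynetworks,reject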

    Read the article

  • Exchange Server 2010: move mailboxes from recovered and mounted edb to user's mailbox

    - by user36090
    One of our Exchange servers crashed, and I am trying to recover the mailboxes. We had one Exchange 2003 server named "apex" and one Exchange 2010 server named "2008Enterprise". The Exchange 2010 server named "2008Enterprise" crashed. I created a new Exchange 2010 server named "Providence". I ran this command on Providence:

        New-MailboxDatabase -Recovery -Name JBCMail -Server Providence -EdbFilePath "c:\data\Exchange\Mailbox\Mailbox Database 0579285147\Mailbox Database 0579285147.edb" -LogFolderPath "c:\data\Exchange\Mailbox\Mailbox Database 0579285147"

    This command executed and finished without error. I then ran the command:

        eseutil /p E00

    This command was executed from the directory below:

        c:\data\Exchange\Mailbox\Mailbox Database 0579285147

    I then mounted JBCMail with the mount command (note: I did not keep the full command I typed). Inside my Exchange Management Console (EMC) I can view the new mailbox database named JBCMail. The JBCMail database is shown as mounted on the Exchange server named Providence. I can see the crashed Exchange server named 2008Exchange. In the EMC, the crashed Exchange server's Copy Status under Server Configuration -> Mailbox is ServiceDown. From here I need to recover three mailboxes. The mailboxes are on the apex server. How do I move the mailboxes from apex to Providence? How do I restore the mailboxes from the mounted JBCMail database to the users' mailboxes? I do not fully understand how to use the Restore-Mailbox command, because when I use it as below it tries to restore the mailbox to the dead apex server:

        Restore-Mailbox -ID 'Jason Young' -RecoveryDatabase JBCMail
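
    With a recovery database, Restore-Mailbox needs two identities: -Identity names the live target mailbox the data should land in (it must already exist on a mounted, non-recovery database, e.g. on Providence), and -RecoveryMailbox names the source mailbox inside the recovery database. A sketch (the target folder name is arbitrary):

        # restore Jason Young's data from the recovery DB into his live mailbox
        Restore-Mailbox -Identity "Jason Young" -RecoveryDatabase JBCMail `
            -RecoveryMailbox "Jason Young" -TargetFolder "Recovered Items"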

    Read the article

  • Postfix Whitelist before recipient restrictions

    - by GruffTech
    Alright. Some background. We have an anti-spam cluster handling about 2-3 million emails per day, blocking somewhere in the range of 99% of spam email from our end users. The underlying SMTP server is Postfix 2.2.10. The "frontline defense", before mail gets carted off to SpamAssassin/ClamAV/etc., is attached below.

        ...basic config....
        smtpd_recipient_restrictions =
            reject_unauth_destination,
            reject_rbl_client b.barracudacentral.org,
            reject_rbl_client cbl.abuseat.org,
            reject_rbl_client bl.mailspike.net,
            check_policy_service unix:postgrey/socket
        ...more basic config....

    As you can see, standard RBL services from various companies, as well as a Postgrey service. The problem is, I have one client (out of thousands) who is very upset that we blocked an important email of theirs. It was sent through a Russian freemailer which was currently blocked in two of our three RBL servers. I explained the situation to them; however, they are insisting we do not block any of their emails. So I need a method of whitelisting ANY email that comes to domain.com, and I need it to take place before any of the recipient restrictions; they want no RBL or Postgrey blocking at all. I've done a bit of research myself: http://www.howtoforge.com/how-to-whitelist-hosts-ip-addresses-in-postfix seemed to be a good guide at first, almost fixing my problem, but I want it to accept based on the TO address, not the originating server.
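
    Since smtpd_recipient_restrictions is evaluated left to right and an OK result ends the list for that recipient, one common approach (a sketch, keeping reject_unauth_destination first so the relay check still applies) is a check_recipient_access lookup placed before the RBLs and the Postgrey policy check:

        # main.cf (sketch)
        smtpd_recipient_restrictions =
            reject_unauth_destination,
            check_recipient_access hash:/etc/postfix/rbl_exceptions,
            reject_rbl_client b.barracudacentral.org,
            reject_rbl_client cbl.abuseat.org,
            reject_rbl_client bl.mailspike.net,
            check_policy_service unix:postgrey/socket

        # /etc/postfix/rbl_exceptions -- run "postmap /etc/postfix/rbl_exceptions" after editing
        domain.com      OK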

    Read the article

  • DansGuardian/Squid Traffic doesn't get back to user

    - by DKNUCKLES
    I've purchased a Squid appliance that I'm attempting to implement; however, the lack of documentation has left me a bit high and dry. Forgive me if this is a silly question, but this is my first attempt at implementing Squid. From what I can ascertain from the documentation (or lack thereof), the users connect to DansGuardian first at port 8080, where the filtering is done, at which point it forwards the traffic to Squid at port 3128. The traffic is then sent to the internet. The setup I have is as follows:

        Gateway (MikroTik router): 192.168.88.1
        Squid/DansGuardian:        192.168.88.100
        Client:                    192.168.88.238

        Client --- Gateway --- Proxy --- Internet

    I have set up a simple NAT rule to forward all traffic from the client machine (for testing purposes) to the DansGuardian. The traffic seems to get there, although I see a lot of SYN_RECV with a netstat -antp command on the virtual appliance machine. From this I gather that the traffic is NOT being routed back to the client machine.

        Proto Recv-Q Send-Q Local Address            Foreign Address           State      PID/Program name
        tcp        0      0 0.0.0.0:8080             0.0.0.0:*                 LISTEN     -
        tcp        0      0 192.168.88.100:8080      192.168.88.238:55786      SYN_RECV   -
        tcp        0      0 192.168.88.100:8080      192.168.88.238:55787      SYN_RECV   -
        tcp        0      0 192.168.88.100:8080      192.168.88.238:55785      SYN_RECV   -
        tcp        0      0 192.168.88.100:8080      192.168.88.238:55788      SYN_RECV   -
        tcp        0      0 0.0.0.0:10000            0.0.0.0:*                 LISTEN     -
        tcp        0      0 0.0.0.0:22               0.0.0.0:*                 LISTEN     -

    Is this a routing issue or an issue with the Squid appliance?
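
    SYN_RECV piling up usually means the SYN-ACK replies are not making it back to the client in a form it accepts (a common pattern when a gateway dst-nats traffic to a proxy on the same subnet: the reply then comes straight from the proxy with an unexpected source address and the client drops it). Two quick checks worth running on the appliance (a sketch; the interface name is assumed):

        # which path and gateway would the appliance use to answer the client?
        ip route get 192.168.88.238

        # watch the handshake and confirm whether the SYN-ACKs actually leave
        tcpdump -ni eth0 host 192.168.88.238 and port 8080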

    Read the article

  • How can I tell GoogleBot that a subdirectory is now a subdomain? [migrated]

    - by cwd
    I had about a million pages of a catalog indexed under a subdirectory, and now that's moved to a subdomain. GoogleBot is crawling each one of them and getting a 301 redirect to the new location. Even though I have set up the redirect rule in the Apache sites-enabled configuration file (i.e. Apache answers the redirect early on, before PHP is even loaded), the server isn't handling the load well. GoogleBot is making around 5 requests per second, and on top of my normal traffic that hikes up the CPU for a few hours at a time. I checked in Webmaster Tools and the corresponding documentation for a way to let Google know that the content had been moved from a subdirectory to a subdomain, but with little luck. Basically the most helpful thing I saw said to just send 301 headers for the new location. How can I tell GoogleBot that a subdirectory is now a subdomain? If that is not an option, how can I more efficiently send 301 redirects out for a particular subdomain? I was thinking perhaps the Nginx server, but I'm not sure that I can run both Apache and Nginx side by side on port 80 for different subdomains.

    Read the article

  • How to configure SCSI hard drives and RAID for Poweredge 2850 web server

    - by Saul
    I'm trying to set up a PowerEdge 2850 as a web server, but as a server novice it's causing me some confusion. It's a virgin install, so there is no data to be lost as yet, and I'd like to get the best arrangement for setting up Windows Server 2008. The box will run IIS and a mail and FTP server. The current physical arrangement of the hot-swap drives is:

        1 73GB     3 146GB    5 blank
        0 73GB     2 146GB    4 146GB (but flashes green, amber off)

    When I enter the PERC config screens on boot-up I've got:

        Raid Ch- 0
        ID 0  ONLIN  A00-00
           1  ONLIN  A00-01
           2  ONLIN  A01-00
           3  ONLIN  A01-01
           4  HOTSP

    I think that drives 0 and 1 are set to RAID 1 and drives 2 and 3 are also set to RAID 1; certainly I can see two logical drives, both RAID 1, of 69880MB and 139900MB. Now, what I think I am getting here is that the two 73GB drives mirror each other and the two 146GB drives mirror each other too? So by my noob thinking, if a drive fails I can pull it, insert a new one and it will rebuild from its matching pair? I think the flashing amber probably indicates a failing drive in slot 4; should that just be binned? What confuses me, coming from a home-user XP background, is that when I load up Windows Server 2008, under My Computer I only see a C drive of about 70GB capacity - i.e. where's the 146GB drive? Any advice appreciated.

    Read the article

  • Does Windows 7 Authenticate Cached Credentials on Startup

    - by Farray
    Problem

    I have a Windows domain user account that gets automatically locked out semi-regularly.

    Troubleshooting Thus Far

    The only rule on the domain that should automatically lock an account is too many failed login attempts. I do not think anyone nefarious is trying to access my account. The problem started occurring after I changed my password, so I think it's a stored credential problem. Further to that, in the Event Viewer's System log I found Warnings from Security-Kerberos that say:

        The password stored in Credential Manager is invalid. This might be caused by the user changing the password from this computer or a different computer. To resolve this error, open Credential Manager in Control Panel, and reenter the password for the credential mydomain\myuser.

    I checked the Credential Manager and all it has are a few TERMSRV/servername credentials stored by Remote Desktop. I know which stored credential was incorrect, but it was stored for Remote Desktop access to a specific machine and was not being used (at least not by me) at the time of the warnings. The Security-Kerberos warning appears when the system is starting up (after a Windows Update reboot) and also appeared earlier this morning when nobody was logged into the machine.

    Clarification after SnOrfus's answer: There was one set of invalid credentials, stored for a terminal server. The rest of the credentials are known to be valid (used often and recently without issues). I logged on to the domain this morning without issue. I then ran Windows Update, which rebooted the computer. After the restart, I couldn't log in (due to the account being locked out). After unlocking and logging on to the domain, I checked Event Viewer, which showed a problem with credentials after restarting. Since the only stored credentials (according to Credential Manager) are for terminal servers, why would there be a credential problem on restart when Remote Desktop was not being used?

    Question

    Does anyone know if Windows 7 "randomly" checks the authentication of cached credentials?

    Read the article
