I installed MINIX in VirtualBox, but I don't know whether it installed correctly or not. Please check this screenshot and let me know how to test it.
Postfix serves my virtual domains and works fine.
But for one of my domains:
- it bounces mails targeted at [email protected]
- it rejects mails targeted at [email protected]
The problem is that [email protected] does not exist either.
Here is my postconf:
Why does it bounce mail to [email protected], yet reject other nonexistent addresses?
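For context, whether Postfix rejects an unknown recipient at SMTP time or accepts the mail and bounces it later usually depends on how the domain is declared. A hedged sketch, not taken from the original postconf (domain names and map paths are placeholders):

```
# If a domain is in relay_domains but relay_recipient_maps is empty,
# Postfix accepts any recipient for it and the mail bounces later;
# listing valid recipients makes unknown ones be rejected at SMTP time.
relay_domains = domain-a.com
relay_recipient_maps = hash:/etc/postfix/relay_recipients

# For virtual domains, the equivalent check is the alias/mailbox map:
virtual_alias_domains = domain-b.com
virtual_alias_maps = hash:/etc/postfix/virtual
```

Comparing how the bouncing domain and the rejecting domains are listed in these parameters is a good starting point.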
Thanks.
If I have
>ip ro
192.168.14.0/24 dev eth0
another host can obtain my MAC address. But if I flush the routing info:
>ip ro flush table main
ARP resolution doesn't work. Broadcast packets asking "Who has 192.168.14.149" reach eth0, but the OS (Linux) doesn't respond even though eth0 has the address 192.168.14.149. What connection exists between routing and ARP resolution?
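The connection is that Linux consults the routing table before answering ARP: to reply to "who-has 192.168.14.149", the kernel must be able to route a packet back to the sender's IP, and with table main flushed there is no such route, so the request is silently ignored. A minimal sketch of restoring the behaviour (addresses taken from the question):

```
# Re-add the connected route; ARP replies on eth0 resume once the
# kernel can again route back to hosts in 192.168.14.0/24.
ip route add 192.168.14.0/24 dev eth0
```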
I have a PSD for a website that must be sliced. I cannot work in a Windows or OS X box (or a virtual machine).
What are the solutions for editing a PSD in Ubuntu 9.10?
Which one do you recommend?
Thank you.
Some mail sent from sites on my server bounces back with the following mail.log message:
Nov 26 17:27:53 blogu postfix/smtp[16858]: C4DD22908EC0: to=, relay=rejecting-domain.ro[rejecting-ip]:25, delay=2.5, delays=0.1/0/2.3/0.04, dsn=5.0.0, status=bounced (host rejecting-domain.ro[rejecting-ip] said: 550 Access denied - Invalid HELO name (See RFC2821 4.1.1.1) (in reply to MAIL FROM command))
On the receiving end, my emails are logged like this:
2011-11-22 15:09:35 H=static.39.80.4.46.clients.your-server.de (Ubuntu-1004-lucid-64-minimal) [my-server-ip] rejected MAIL : Access denied - Invalid HELO name (See RFC2821 4.1.1.1)
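For reference, the receiving server is rejecting the HELO name `Ubuntu-1004-lucid-64-minimal`, which is not a fully qualified domain name. A hedged main.cf sketch (the hostname is a placeholder; use the server's real FQDN, ideally with matching forward and reverse DNS):

```
# /etc/postfix/main.cf
myhostname = mail.example.com      # placeholder: your real FQDN
smtp_helo_name = $myhostname       # name announced in HELO/EHLO
```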
I managed to get a local install of Gitorious working. Now I need to finalize the Apache integration using a virtual host, but nothing seems to work. For example, here is my /etc/hosts file:
127.0.0.1 localhost
172.26.17.70 darkstar.ilri.org darkstar
172.26.17.70 git.darkstar.ilri.org
My vhosts.conf has the following entries:
#
# Use name-based virtual hosting.
#
NameVirtualHost *:80
<VirtualHost *:80>
<Directory /srv/httpd/htdocs>
Options Indexes FollowSymLinks ExecCGI
AllowOverride None
Order allow,deny
Allow from all
</Directory>
ServerName darkstar.ilri.org
DocumentRoot /srv/httpd/htdocs
ErrorLog /var/log/httpd/error_log
AddHandler cgi-script .cgi
</VirtualHost>
<VirtualHost *:80>
<Directory /srv/httpd/git.darkstar.ilri.org/gitorious/public>
Options FollowSymLinks ExecCGI
AllowOverride None
Order allow,deny
Allow from All
</Directory>
AddHandler cgi-script .cgi
DocumentRoot /srv/httpd/git.darkstar.ilri.org/gitorious/public
ServerName git.darkstar.ilri.org
ErrorLog /var/www/git.darkstar.ilri.org/log/error.log
CustomLog /var/www/git.darkstar.ilri.org/log/access.log combined
AddOutputFilterByType DEFLATE text/html text/plain text/xml text/javascript text/css application/x-javascript
BrowserMatch ^Mozilla/4 gzip-only-text/html
BrowserMatch ^Mozilla/4\.0[678] no-gzip
BrowserMatch \bMSIE !no-gzip !gzip-only-text/html
<FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf)$">
ExpiresActive On
ExpiresDefault "access plus 1 year"
</FilesMatch>
FileETag None
RewriteEngine On
RewriteCond %{DOCUMENT_ROOT}/system/maintenance.html -f
RewriteCond %{SCRIPT_FILENAME} !maintenance.html
RewriteRule ^.*$ /system/maintenance.html [L]
</VirtualHost>
Now, when I go to darkstar.ilri.org in Firefox, it shows the default Apache page ("It works!"), but when I go to git.darkstar.ilri.org it waits a few seconds, then falls back to darkstar.ilri.org and the default Apache page. No error is reported. If I run httpd -S I get:
VirtualHost configuration:
wildcard NameVirtualHosts and _default_ servers:
*:80 is a NameVirtualHost
default server darkstar.ilri.org (/etc/httpd/extra/httpd-vhosts.conf:21)
port 80 namevhost darkstar.ilri.org (/etc/httpd/extra/httpd-vhosts.conf:21)
port 80 namevhost git.darkstar.ilri.org (/etc/httpd/extra/httpd-vhosts.conf:37)
Syntax OK
The funny thing is that if I configure Gitorious in a host called gitrepository, add 127.0.0.1 gitrepository, and browse to gitrepository, Gitorious works. But why not with git.darkstar.ilri.org?
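One way to narrow this down is to separate Apache's name-based vhost selection from client-side name resolution, for example by forcing the Host header (IP and hostname taken from the question; a diagnostic sketch, not tested against this setup):

```
# If Gitorious answers here, the vhost configuration is fine and the
# problem is how the client resolves git.darkstar.ilri.org (its own
# /etc/hosts or DNS), not Apache.
curl -H "Host: git.darkstar.ilri.org" http://172.26.17.70/
```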
Many thanks in advance.
In OpenVZ, an example of container-based virtualization, the host and all guests share the filesystem cache. That sounds paradoxical for virtualization, but it is actually a feature of OpenVZ, and it makes sense: because only one kernel is running, identical pages of filesystem cache can be shared in memory. Yet while it sounds beneficial, I think my setup actually suffers in performance from it. Here's why: my machines aren't actually sharing any files on disk, so I can't benefit from this feature.
Several OpenVZ machines are running MySQL with MyISAM tables. Unlike InnoDB with its buffer pool, MyISAM relies on the system's filesystem cache for caching its data files. Some virtual machines are also known to perform heavy, large I/O operations on the same filesystem in the host.
For example, when running cat *.MYD > /dev/null on a large database in one machine, I watched (in htop) the filesystem cache shrink in another. This essentially flushes all the useful filesystem cache of the guests (FIFO), and with it the MySQL caches in the guests.
Now users are complaining that MySQL is very slow. And it is: some simple SELECT queries take several seconds while disk I/O is being used heavily by other machines.
So, simply put:
Is there a way to avoid filesystem cache being wiped out by other virtual machines in container-based virtualization?
Some thoughts:
Choosing the algorithm the kernel uses to evict filesystem cache (possible? how?)
Reserving a certain number of pages for a single VM (reading man vzctl, there seems to be no option for filesystem-cache pages)
Would running MySQL on another filesystem get me anywhere?
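One more mitigation worth noting for batch jobs like the `cat *.MYD` example above: the job itself can advise the kernel to drop the pages it just pulled in, limiting how much of the shared cache it evicts from the other containers. A minimal sketch, assuming Linux and Python 3.3+ (`read_then_drop_cache` is a hypothetical helper name, not an existing tool):

```python
import os

def read_then_drop_cache(path, chunk=1 << 20):
    """Read a file sequentially, then advise the kernel that its cached
    pages are no longer needed, so this job's bulk reads do not keep
    other containers' hot data evicted. Returns the bytes read."""
    total = 0
    fd = os.open(path, os.O_RDONLY)
    try:
        while True:
            buf = os.read(fd, chunk)
            if not buf:
                break
            total += len(buf)
        # Best-effort hint: ask the kernel to drop this file's page cache.
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
    finally:
        os.close(fd)
    return total
```

The same effect is available from C via posix_fadvise(2), or by doing the bulk reads with O_DIRECT so they bypass the page cache entirely.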
If not, I think my alternatives are:
Use KVM for the VMs running MySQL with MyISAM. KVM actually assigns memory to the VM and does not allow caches to be swapped out unless a balloon driver is used.
Move to InnoDB and tune the buffer pool, dirty pages, etc. This is currently considered a long-term "nice to have", as not everyone responsible for administering the system understands InnoDB.
More suggestions are welcome.
System software: Proxmox (currently 1.9; could be upgraded to 2.x). One big LV is assigned to the VMs.
I've been going through the awstats docs for a while now; it just seems to be failing on the LogFormat:
http://pastebin.com/raw.php?i=J1Ecfu4c
I'm using the following in awstats,
LogFormat = "%host - - %host_r %time1 %methodurl %code %bytesd %refererquot %uaquot %otherquot"
(from nginx)
log_format main
'$remote_addr - $remote_user [$time_local] $request '
'"$status" $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log logs/access.log main;
sample hits: http://pastebin.com/raw.php?i=qD9PKN52
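One thing that stands out: the nginx format above quotes `$status` but not `$request`, so the resulting log matches neither the custom awstats LogFormat nor the standard combined layout. A hedged sketch that lines the two up, assuming the standard combined format is acceptable:

```
# nginx: quote $request, leave $status bare -> standard combined format
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent"';

# awstats.conf: 1 selects the predefined NCSA combined log format
LogFormat = 1
```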
We have set up IPsec and L2TP on Linux. One question that came up (due to firewall management policy) is whether it's possible to have one virtual interface instead of one per connected client.
Now we have:
ppp0 serverip clientip1
ppp1 serverip clientip2
Want to have:
l2tp_tun serverip serverip
as with OpenVPN's tun interfaces, and then be able to push an IP address and a route to each client.
I want the PCs that receive an IP from my Ubuntu dhcp3-server to be able to retrieve the GPOs that are on my Windows 2003 server.
I'm using VirtualBox and 3 virtual machines:
1 Windows 2003 Server, 192.168.0.2, with 1 NIC (internal network).
1 Ubuntu Server 10.04 LTS, 192.168.0.1, with 1 NIC (internal network) and 3 aliases: 192.168.21.0, 192.168.22.0, 192.168.100.0.
1 Windows XP machine with 3 NICs (internal network).
I have bonding on two interfaces. I'd like to monitor whether they are connected to different switches (the switches have hostnames).
ethX should be connected to switchX and ethY to switchY.
Currently I'm checking this with the following command:
tcpdump -vv -s0 -i ethX ether host 01:00:0c:cc:cc:cc
After a minute it prints out the hostname (and much more information) from the switch.
Are there any other solutions to monitor this?
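The tcpdump filter in the question captures CDP announcements (01:00:0c:cc:cc:cc is Cisco's CDP multicast address), which carry the switch's Device-ID roughly once a minute. That same check can be scripted for monitoring; a rough sketch (interface and switch names are placeholders, and it must run as root):

```
#!/bin/sh
# Wait for one CDP frame per bonding slave and compare the advertised
# Device-ID against the expected switch name.
for pair in "ethX switchX" "ethY switchY"; do
  set -- $pair
  tcpdump -nn -v -s 1500 -c 1 -i "$1" ether host 01:00:0c:cc:cc:cc 2>/dev/null \
    | grep -q "Device-ID.*$2" || echo "WARNING: $1 does not see $2"
done
```

If the switches speak LLDP instead of (or as well as) CDP, running lldpd on the host and querying lldpctl is a tidier alternative.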
Greetings
I'm frequently getting 503 Service Unavailable when I have limit_req turned on. In my logs:
[error] 22963#0: *70136 limiting requests, excess: 1.000 by zone "blitz", client: 64.xxx.xxx.xx, server: dat.com, request: "GET /id/85 HTTP/1.1", host: "dat.com"
My nginx configuration:
limit_req_zone $binary_remote_addr zone=blitz:60m rate=5r/s;
limit_req zone=blitz;
How do I resolve this issue? Isn't 60m already big enough? All my static files are hosted on Amazon S3.
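Worth noting: the `60m` is the size of the shared-memory zone that stores per-IP state (enough for millions of addresses), not a request allowance. With `rate=5r/s` and no `burst`, two requests from one IP arriving less than 200 ms apart can already trigger a 503. A hedged sketch (the burst value is an arbitrary example):

```
limit_req_zone $binary_remote_addr zone=blitz:60m rate=5r/s;
# Let short bursts queue instead of failing immediately; append
# "nodelay" to serve queued requests without throttling them.
limit_req zone=blitz burst=20;
```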
I have read many other posts but cannot figure this out.
eth0 is my external connected to a Comcast modem. The server has internet access with no issues.
eth1 is internal and running DHCP for the clients. I have DHCP working just fine, all my clients can get an IP and ping the server but they cannot access the internet.
I am using ISC-DHCP-SERVER and have set /etc/default/isc-dhcp-server to INTERFACE="eht1"
Here is my dhcpd.conf file located in /etc/dhcp/dhcpd.conf
ddns-update-style interim;
ignore client-updates;
subnet 10.0.10.0 netmask 255.255.255.0 {
range 10.0.10.10 10.0.10.200;
option routers 10.0.10.2;
option subnet-mask 255.255.255.0;
option domain-name-servers 208.67.222.222, 208.67.220.220; #OpenDNS
# option domain-name "example.com";
default-lease-time 21600;
max-lease-time 43200;
authoritative;
}
I have made the net.ipv4.ip_forward=1 change in /etc/sysctl.conf
here is my interfaces file:
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet dhcp
iface eth1 inet static
address 10.0.10.2
netmask 255.255.255.0
network 10.0.10.0
auto eth1
And finally, here is my iptables.conf file:
# Firewall configuration written by system-config-firewall
# Manual customization of this file is not recommended.
*nat
:PREROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -s 10.0.10.0/24 -o eth0 -j MASQUERADE
#-A PREROUTING -i eth0 -p tcp --dport 59668 -j DNAT --to-destination 10.0.10.2:59668
COMMIT
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -i eth1 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 53 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 53 -j ACCEPT
-A FORWARD -s 10.0.10.0/24 -o eth0 -j ACCEPT
-A FORWARD -d 10.0.10.0/24 -m state --state ESTABLISHED,RELATED -i eth0 -j ACCEPT
-A FORWARD -p icmp -j ACCEPT
-A FORWARD -i lo -j ACCEPT
-A FORWARD -i eth1 -j ACCEPT
#-A FORWARD -i eth0 -m state --state NEW -m tcp -p tcp -d 10.0.10.2 --dport 59668 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT
I am completely stuck. I cannot figure out why the clients cannot access the internet. Am I missing a service? Is a service not running? Any help would be greatly appreciated. I tried to be as thorough as possible but please let me know if I have missed something. Thank you!
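Two quick things worth checking on a gateway like this (a sketch; it also assumes the iptables.conf shown was actually loaded, e.g. via iptables-restore):

```
# sysctl.conf alone does not take effect until 'sysctl -p' or a reboot:
sysctl net.ipv4.ip_forward            # should print: net.ipv4.ip_forward = 1

# Confirm the NAT rule is active and its packet counters increase
# while a client tries to reach the internet:
iptables -t nat -L POSTROUTING -n -v
```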
I configured local authentication, which was working fine. Today I wanted to implement RADIUS too, but after I did, I am unable to log in to my firewall.
user-identity default-domain LOCAL
aaa authentication ssh console LOCAL
and
RADIUS
aaa-server RADIUS protocol radius
aaa-server RADIUS (inside) host xyzabc
key zzzzzz
aaa authentication ssh console RADIUS
aaa authentication enable console RADIUS
aaa authentication http console RADIUS
Can someone help me log in to my firewall?
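For reference, the usual way to avoid exactly this lockout on an ASA is to keep LOCAL as a fallback method, so authentication falls back to the local user database when the RADIUS server is unreachable. A sketch of that change (not a transcript of the questioner's config):

```
aaa authentication ssh console RADIUS LOCAL
aaa authentication enable console RADIUS LOCAL
aaa authentication http console RADIUS LOCAL
```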
Our network of Windows 2003 and Windows 2008 servers suddenly has DNS issues. There are 7 DCs: two at our main office and one at each branch site (one branch has two, a 2008 R2 and a Win2k3). Only two are Windows 2008 R2.
Running DCDIAG on the Win2k3 at the main site (DC1) reports no issues. Running it at any branch site reports two issues; all other tests pass. The server DC1 can be pinged by name from any site.
Starting test: frsevent
There are warning or error events within the last 24 hours after the
SYSVOL has been shared. Failing SYSVOL replication problems may cause
Group Policy problems.
Starting test: FsmoCheck
Warning: DcGetDcName(PDC_REQUIRED) call failed, error 1355
A Primary Domain Controller could not be located.
The server holding the PDC role is down.
Netdom.exe /query DC reports the expected servers.
netdom query fsmo
This reports the server at the main office holds the following roles:
Schema owner
Domain role owner
PDC role
RID pool manager
Infrastructure owner
In the DNS management snap-in, DC1 appears as DNS server but does not appear in
_msdcs-dc-_sites-Default-First-Site-Name-_TCP
There is no _ldap or _kerberos record pointing to DC1.
The same issue exists under msdcs-dc-_sites- -_TCP:
again, there is no _ldap or _kerberos record pointing to DC1.
Under Domain DNS Zones there is no entry for the server. This is the case for any _tcp folder in the DNS.
The server DC1 appears correctly as a name server in the reverse lookup zone, and there is a Host (A) record for DC1. But in the forward lookup zone there is no "(same as parent folder)" Host (A) record for DC1, although such an entry exists for the other DCs at the branch sites and for the other DC at the main office.
We have tried stopping and starting the netlogon service, restarting DNS and also dcdiag /fix.
Netdiag reports error:
Trust relationship test. . . . . . : Failed
[FATAL] Secure channel to domain 'XXX' is broken. [ERROR_NO_LOGON_SERVERS]
[WARNING] Failed to query SPN registration on DC (one entry for each branch DC)
All branches list the problem server, and it can be pinged by name from any branch.
Fixing this is the number one priority, but we would also like to determine the cause.
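Since DC1's SRV records (_ldap, _kerberos) are missing rather than wrong, one avenue is to force DC1 to re-register its locator records and then re-test. A sketch of the usual commands, run on DC1 (nltest comes from the Windows 2003 Support Tools):

```
rem Force DC1 to re-register its DC locator (SRV) records in DNS:
nltest /dsregdns
ipconfig /registerdns
rem Then verify the records were created:
dcdiag /test:dns /v
```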
I'm trying to set up a .htaccess file which will allow users to bypass the password prompt if they come from a host which does not start with preview, e.g. http://preview.example.com would trigger the password and http://example.com would not.
Here's what I've got so far:
SetEnvIfNoCase Host preview(.*\.)? preview_site
AuthUserFile /Users/me/.htpasswd
AuthGroupFile /dev/null
AuthType Basic
AuthName "Development Area"
Require valid-user
Order deny,allow
Allow from 127
deny from env=preview_site
Satisfy any
Any ideas?
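One thing to check: SetEnvIf patterns are unanchored searches, so `preview(.*\.)?` matches "preview" anywhere in the host header. A hedged variant of the same idea with the match anchored to the start of the hostname:

```
SetEnvIfNoCase Host ^preview\. preview_site
AuthUserFile /Users/me/.htpasswd
AuthType Basic
AuthName "Development Area"
Require valid-user
Order Deny,Allow
Deny from env=preview_site
Allow from 127
Satisfy Any
```

With Satisfy Any, non-preview hosts pass the host-based check and skip the password; preview hosts are denied by the env match and must authenticate.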
I plan to create my first DC and forest on a physical server, and then run a second DC on a virtual server that replicates the first. I understand that this provides redundancy for AD: if the first domain controller went down, the second would take over until the first is back online. Would this work, and how?
I have an application installed on my RHEL 6 box that has a GUI (AppGui.sh). My problem is that a few non-technical users would like to access this GUI remotely. I've tried several guides on the internet, but I still can't make it work.
I tried:
- Installing the X Window System
- Enabling FORWARDX11=yes in my sshd_config
- Exporting the $DISPLAY variable
- Connecting through ssh -X user@host (it simply hangs there)
How can I set up my box from scratch to make this work?
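For what it's worth, the server-side option is spelled `X11Forwarding` in sshd_config (`ForwardX11` is the client-side ssh_config name), and RHEL 6 also needs xauth installed for forwarding to work. A sketch of the usual server-side setup:

```
# /etc/ssh/sshd_config on the RHEL 6 box must contain:
#   X11Forwarding yes
yum install xorg-x11-xauth
service sshd restart

# Then from the user's machine (no manual $DISPLAY export needed,
# ssh -X sets it for the session):
ssh -X user@host ./AppGui.sh
```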
I'm using Squid as a reverse proxy to host multiple web servers on one internet IP. It works fine and has been doing so for the past few months. I have just noticed that every request sent to my servers is logged as coming from the Squid server's IP address.
Is there any way to make Squid pass the originating IP to the web servers?
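For reference, Squid can append the original client address in an X-Forwarded-For header, and the backends can log that instead of the proxy's IP. A sketch (the LogFormat line assumes the backends run Apache; adjust for other servers):

```
# squid.conf: include the client IP in X-Forwarded-For (on by default)
forwarded_for on

# Apache backend: log X-Forwarded-For instead of the connecting IP
LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b" xff_log
CustomLog logs/access_log xff_log
```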
I have seen in many developer talks the presenter using a demo.local URL instead of the conventional localhost/demo for faster access.
I've read about editing host entries here: How can I create shorter URLs to sites on my computer? But my question is: since the localhost IP is the same 127.0.0.1 for every folder inside my var/www or htdocs, how do I make each one accessible in the shorter format?
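The short names work because each one is a separate hostname mapped to 127.0.0.1 plus a name-based virtual host that picks the folder: the IP is shared, but the Host header is not. A minimal sketch (names and paths are examples):

```
# /etc/hosts (or C:\Windows\System32\drivers\etc\hosts)
127.0.0.1   demo.local

# Apache vhost: requests carrying "Host: demo.local" get this root
<VirtualHost *:80>
    ServerName demo.local
    DocumentRoot /var/www/demo
</VirtualHost>
```

Repeat the pair (hosts entry plus vhost) for each project folder you want a short URL for.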
What is the recommended procedure to install sshd on Windows 7 Enterprise? I have already installed Cygwin and OpenSSH on this computer.
I'm open to a non-Cygwin sshd as long as there won't be any compatibility issues with Cygwin-related programs.
UPDATE
I want setup/config information as well. Currently, I get an error if I try to connect to localhost.
$ ssh localhost
ssh: connect to host localhost port 22: Connection refused
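Since Cygwin's OpenSSH is already installed, the usual remaining step is to create and start the sshd service; the "Connection refused" suggests nothing is listening on port 22 yet. A sketch, run from an elevated Cygwin shell (prompts vary by Cygwin version):

```
ssh-host-config -y      # generates host keys and installs the sshd service
net start sshd          # or: cygrunsrv -S sshd
```

After that, `ssh localhost` should prompt for credentials instead of being refused.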
We use Apache 2.2 to host an SVN repository on a Windows 2003 machine.
It works fine, except that over a couple of weeks the httpd process inflates and starts consuming something like 1.5 gigabytes of virtual memory. All operations on the repository become very slow.
What can I tweak to prevent httpd from consuming so many resources?
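A common stopgap for slow leaks in Apache on Windows is to recycle the worker process periodically: mpm_winnt runs a single child process, and MaxRequestsPerChild makes Apache replace it after a set number of requests, returning the leaked memory. A sketch (the threshold is an arbitrary example; by default it is 0, i.e. never recycle):

```
# httpd.conf: replace the worker process after 100000 requests
MaxRequestsPerChild 100000
```

Upgrading mod_dav_svn/Subversion to the latest compatible release is also worth trying, since the leak may already be fixed upstream.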
We have a host that will be used for creating VM clones from time to time for testing purposes. It is used actively for testing, and users tend to keep a lot of files in their profiles.
Is there a way to impose a limit on user profiles at the OU level without introducing roaming profiles?