Hello! Is it possible to multicast over the internet? I think IGMP is not allowed by ISPs. Also, when the server sends a stream to the internet, what is the upload bandwidth requirement on the server side? Thanks.
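On the bandwidth part: without multicast, the server sends one full copy of the stream per viewer, so upload scales linearly with audience size. A back-of-envelope sketch (the bitrate and viewer count are made-up illustration numbers):

```python
# Rough unicast upload estimate: one copy of the stream per viewer.
stream_kbps = 1500                        # assumed per-viewer bitrate
viewers = 100                             # assumed audience size
total_mbps = stream_kbps * viewers / 1000
print(f"{total_mbps:.0f} Mbit/s upload needed")  # prints: 150 Mbit/s upload needed
```

With multicast the server would send a single copy regardless of audience size, which is exactly why it rarely works across ISPs that don't forward it.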
Hi
I'm setting up a SQL cluster (SQL 2008), Windows 2008 R2.
I enable network access on the local DTC and then create a DTC resource in my cluster. The problem is that when I start the resource, it does not pull through my settings to enable network access.
the log shows this:
MSDTC started with the following settings:
Security Configuration (OFF = 0 and ON = 1):
Allow Remote Administrator = 0,
Network Clients = 0,
Transaction Manager Communication:
Allow Inbound Transactions = 0,
Allow Outbound Transactions = 0,
Transaction Internet Protocol (TIP) = 0,
Enable XA Transactions = 0,
Enable SNA LU 6.2 Transactions = 1,
MSDTC Communications Security = Mutual Authentication Required,
Account = NT AUTHORITY\NetworkService,
Firewall Exclusion Detected = 0
Transaction Bridge Installed = 0
Filtering Duplicate Events = 1
whereas when I restart the local DTC service, it says this:
Security Configuration (OFF = 0 and ON = 1):
Allow Remote Administrator = 0,
Network Clients = 1,
Transaction Manager Communication:
Allow Inbound Transactions = 1,
Allow Outbound Transactions = 1,
Transaction Internet Protocol (TIP) = 0,
Enable XA Transactions = 1,
Enable SNA LU 6.2 Transactions = 1,
MSDTC Communications Security = No Authentication Required,
Account = NT AUTHORITY\NetworkService,
Firewall Exclusion Detected = 0
Transaction Bridge Installed = 0
Filtering Duplicate Events = 1
The settings on both nodes in the cluster are the same. I have reinstalled and restarted too many times to mention.
Any ideas?
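One thing that may be worth comparing: a clustered DTC resource keeps its own settings in the cluster registry hive rather than the local MSDTC key, so the local DTC snap-in settings don't necessarily apply to the resource. A sketch of where to look (the resource GUID under the cluster hive is machine-specific):

```shell
rem Local (non-clustered) DTC security settings
reg query "HKLM\SOFTWARE\Microsoft\MSDTC\Security"

rem Clustered DTC resource settings live under the cluster hive;
rem search for the resource's MSDTCPRIVATE subtree
reg query "HKLM\Cluster\Resources" /s /f MSDTCPRIVATE /k
```

If the Security values under the cluster hive show 0 while the local key shows 1, that would match the log output you're seeing.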
I've read quite a bit and am close, I feel, and I'm pulling my hair out ... please help!
I have an OpenVPN client whose server pushes local routes and also changes the default gateway (I know I can prevent that with --route-nopull). I'd like all outgoing HTTP and SSH traffic to go via the local gateway, and everything else via the VPN.
Local IP is 192.168.1.6/24, gw 192.168.1.1.
OpenVPN local IP is 10.102.1.6/32, gw 10.102.1.5.
OpenVPN server is at {OPENVPN_SERVER_IP}
Here's the route table after openvpn connection:
# ip route show table main
0.0.0.0/1 via 10.102.1.5 dev tun0
default via 192.168.1.1 dev eth0 proto static
10.102.1.1 via 10.102.1.5 dev tun0
10.102.1.5 dev tun0 proto kernel scope link src 10.102.1.6
{OPENVPN_SERVER_IP} via 192.168.1.1 dev eth0
128.0.0.0/1 via 10.102.1.5 dev tun0
169.254.0.0/16 dev eth0 scope link metric 1000
192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.6 metric 1
This makes all packets go via the VPN tunnel except those destined for 192.168.1.0/24.
Doing wget -qO- http://echoip.org shows the VPN server's address, as expected; the packets have 10.102.1.6 as source address (the VPN local IP) and are routed via tun0, as reported by tcpdump -i tun0 (tcpdump -i eth0 sees none of this traffic).
What I tried was:
create a 2nd routing table holding the 192.168.1.0/24 routing info (copied from the main table above)
add an iptables -t mangle -I PREROUTING rule to mark packets destined for port 80
add an ip rule to match on the mangled packet and point it to the 2nd routing table
add ip rules for "to 192.168.1.6" and "from 192.168.1.6" pointing to the 2nd routing table (though this is superfluous)
disabled reverse-path filtering with net.ipv4.conf.tun0.rp_filter=0 and net.ipv4.conf.eth0.rp_filter=0
I also tried an iptables mangle OUTPUT rule and an iptables nat PREROUTING rule. It still fails and I'm not sure what I'm missing:
iptables mangle prerouting: packet still goes via the VPN
iptables mangle output: packet times out
Isn't it the case that, to achieve what I want, when doing wget http://echoip.org the packet's source address should be changed to 192.168.1.6 before it is routed out? But if I do that, won't the response from the HTTP server be routed back to 192.168.1.6, so wget never sees it, since it is still bound to tun0 (the VPN interface)?
Can a kind soul please help? What commands would you execute after the openvpn connects to achieve what I want?
Looking forward to hair regrowth ...
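For reference, the steps described above might be sketched as below. The table number and mark value are arbitrary choices of mine, and two details are assumptions worth flagging: locally generated packets never traverse PREROUTING, so the mark has to be set in the OUTPUT chain, and an SNAT rule lets conntrack rewrite the replies so they return via eth0 without the application needing to rebind:

```shell
# Second routing table that always uses the local gateway
ip route add default via 192.168.1.1 dev eth0 table 100
ip route add 192.168.1.0/24 dev eth0 src 192.168.1.6 table 100

# Mark locally generated HTTP and SSH packets (OUTPUT, not PREROUTING)
iptables -t mangle -A OUTPUT -p tcp --dport 80 -j MARK --set-mark 1
iptables -t mangle -A OUTPUT -p tcp --dport 22 -j MARK --set-mark 1

# Rewrite the source address so replies come back to 192.168.1.6 via eth0
iptables -t nat -A POSTROUTING -o eth0 -m mark --mark 1 -j SNAT --to-source 192.168.1.6

# Route marked packets through the second table; relax reverse-path filtering
ip rule add fwmark 1 table 100
sysctl -w net.ipv4.conf.eth0.rp_filter=0
```

This is a sketch of the approach, not a tested recipe for this exact setup.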
I ssh into a server, then start a job (for instance rsync), and then I just want to be able to log out from the server and let the job run its course. But if I just do rsync ... & I think it's still connected to the tty in some way, and the job dies when the tty goes away at logout. Is there any (easy) way to disconnect the process from the tty so I can log out without the process terminating?
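A few common ways to detach a job from the tty, assuming bash and a long-running rsync (paths and hosts are placeholders):

```shell
# Option 1: start the job immune to hangup, with output captured to a file
nohup rsync -a /src/ user@host:/dst/ > rsync.log 2>&1 &

# Option 2: detach a job that's already running in the foreground:
# press Ctrl-Z to suspend it, then
bg          # resume it in the background
disown -h   # remove it from the shell's job table so SIGHUP isn't sent

# Option 3: run it inside a detachable session you can reattach later
screen -dmS backup rsync -a /src/ user@host:/dst/
```

screen (or tmux) has the advantage that you can log back in and reattach to see the job's output.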
I'm trying to map WebDAV with SSL as a network drive in Windows XP (I've been at this for several hours). I can read the share just fine using a browser and via Network Places, but it refuses to mount as a network drive.
I've tried it using the Windows explorer interface and net use.
net use with the \\server@ssl:443\webdav form gives System error 53; https://server/webdav gives error 67.
Any help would be appreciated.
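For what it's worth, the WebDAV redirector depends on the WebClient service (error 53, "network path not found", is a common symptom when it isn't running), and the UNC syntax uses @SSL with any non-default port after a second @ rather than a colon. A sketch of things to try (server and share names are placeholders):

```shell
rem The WebClient service must be running for WebDAV UNC paths
net start WebClient

rem @SSL selects HTTPS; a non-default port goes after a second @
net use Z: \\server@SSL\webdav
net use Z: \\server@SSL@443\webdav
```

This is based on the documented redirector syntax, not verified against your server.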
I have a CentOS 6.3 client that needs to access NFS storage. There are two NFS servers that serve up the same content stored on a SAN with a clustered filesystem. How do I set up CentOS to failover to the backup NFS server if needed? When I Google, I keep reading that Linux does not support this, but that would be strange since there is plenty of information out there on how to set up a clustered Linux NFS server farm...
I understand that the maximum number of connections available in a connection pool should match the maxThreads configured for your Tomcat server (which corresponds to the number of requests that can be handled concurrently).
For Tomcat the default is 200. I assume there is a maximum you can safely configure before things start getting out of control, which I assume is also governed by the resources of the machine it is running on.
I am trying to get an understanding of the size of maxThreads that people are using with success, is 1000 too big?
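For reference, the thread cap is set per connector in server.xml; a sketch with illustrative values (these numbers are examples, not a sizing recommendation):

```xml
<!-- conf/server.xml: maxThreads caps concurrent request-processing threads;
     acceptCount queues further connections before they are refused -->
<Connector port="8080" protocol="HTTP/1.1"
           maxThreads="200"
           acceptCount="100"
           connectionTimeout="20000" />
```

Whether 1000 is "too big" mostly comes down to per-thread stack memory and how long your requests block on the database.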
At the moment I am struggling with the configuration of lighttpd together with ViewVC. The configuration was ported from Apache 2.2.x, which is still running on the machine, serving the WebDAV/SVN stuff, being proxied through.
Now, the problem I am having appears to be with the rewrite rules and I'm not really sure what I am missing here. Here's my configuration (slightly condensed to keep it concise):
var.hgwebfcgi = "/var/www/vcs/bin/hgweb.fcgi"
var.viewvcfcgi = "/var/www/vcs/bin/wsgi/viewvc.fcgi"
var.viewvcstatic = "/var/www/vcs/templates/docroot"
var.vcs_errorlog = "/var/log/lighttpd/error.log"
var.vcs_accesslog = "/var/log/lighttpd/access.log"
$HTTP["host"] =~ "domain.tld" {
$SERVER["socket"] == ":443" {
protocol = "https://"
ssl.engine = "enable"
ssl.pemfile = "/etc/lighttpd/ssl/..."
ssl.ca-file = "/etc/lighttpd/ssl/..."
ssl.use-sslv2 = "disable"
setenv.add-environment = ( "HTTPS" => "on" )
url.rewrite-once += ("^/mercurial$" => "/mercurial/" )
url.rewrite-once += ("^/$" => "/viewvc.fcgi" )
alias.url += ( "/viewvc-static" => var.viewvcstatic )
alias.url += ( "/robots.txt" => var.robots )
alias.url += ( "/favicon.ico" => var.favicon )
alias.url += ( "/mercurial" => var.hgwebfcgi )
alias.url += ( "/viewvc.fcgi" => var.viewvcfcgi )
$HTTP["url"] =~ "^/mercurial" {
fastcgi.server += (
".fcgi" => ( (
"bin-path" => var.hgwebfcgi,
"socket" => "/tmp/hgwebdir.sock",
"min-procs" => 1,
"max-procs" => 5
) )
)
} else $HTTP["url"] =~ "^/viewvc\.fcgi" {
fastcgi.server += (
".fcgi" => ( (
"bin-path" => var.viewvcfcgi,
"socket" => "/tmp/viewvc.sock",
"min-procs" => 1,
"max-procs" => 5
) )
)
}
expire.url = ( "/viewvc-static" => "access plus 60 days" )
server.errorlog = var.vcs_errorlog
accesslog.filename = var.vcs_accesslog
}
}
Now, when I access the domain.tld, I correctly see the index of the repositories. However, when I look at the links for each respective repository (or click them, for that matter), it's of the form https://domain.tld/viewvc.fcgi/reponame instead of the intended https://domain.tld/reponame.
What do I have to change/add to achieve this? Do I have to "abuse" the index file mechanism somehow? Goal is to keep the /mercurial alias functional.
So far I've tried sifting through the lighttpd book from Packt again, also through the lighttpd documentation, but found nothing that seemed to match the problem.
I have a Citrix Web Interface (as part of XenApp 6.0 on Windows Server 2008 R2) that is behind a NAT. I can access the web interface fine (via both SSL and standard port 80), but when I go to launch an application, the connection is still being made to the server's internal IP address.
How do I configure the web interface to default to the external IP address of the box instead of its internal LAN IP?
I have a tmpfs mount defined in /etc/fstab with a size of 1024m, but when I restart the server it sizes itself to 5.9G. If I run mount -o remount /dev/shm, the size corrects itself to 1G, but it reverts the next time the server is restarted.
The entry in fstab is:
tmpfs /dev/shm tmpfs size=1024m 0 0
Could there be another file that mount reads during startup? How might I find that file?
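On several distros an init script (re)mounts /dev/shm at boot, which can override fstab; without tmpfs size options, tmpfs defaults to half of RAM, which would explain 5.9G. A way to hunt for the culprit (paths assume a RHEL/CentOS-style layout):

```shell
# Look for startup scripts that touch /dev/shm or mount tmpfs themselves
grep -rl "/dev/shm" /etc/rc.d /etc/init.d /etc/rc.sysinit 2>/dev/null

# Confirm what is actually mounted and with which options
mount | grep /dev/shm
df -h /dev/shm
```

If a script mounts it without size=, the fstab entry never gets a chance to apply.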
I have an HTTP POST application that lets me upload files to my server. I'm hosting the application on a Fedora server running Tomcat. When I disable Firestarter I can post; when Firestarter is running I cannot. What can I do to enable POSTs to my application?
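Firestarter is a front end over iptables, so one way to diagnose is to inspect the generated rules with the firewall up and then allow the Tomcat port through (8080 here is an assumption; use whatever port your connector listens on):

```shell
# See which rule is rejecting the POSTs (look for REJECT/DROP counters rising)
iptables -L -n -v

# Allow inbound connections to Tomcat (adjust the port to your connector)
iptables -I INPUT -p tcp --dport 8080 -j ACCEPT
```

Firestarter's own "Allow service" policy dialog is the persistent way to do the same thing, since it rewrites the rules on restart.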
Can someone give me any tips on setting up some sort of rsync server/client on Windows 7, to run rsync between my web hosting server and a backup server I have running Ubuntu? I've tried setting it up with this tutorial:
http://www.youtube.com/watch?v=CvwdkZLNtnA
Using copssh and cwRsync. I ran into all sorts of trouble, including not being able to get cwRsync to run (it installs properly but never starts up) and copssh not generating the keys at all. The guy was running Windows Server 2003, though, so I'm guessing the problems could just be because I'm running Windows 7.
I've been trying to set it up with my Windows machine as the rsync server, and Ubuntu and my web hosting VPS as the clients, but I realize it may be easier (and make more sense) to set up the rsync server on Ubuntu and an rsync client on Windows 7?
Can anyone point me in the right direction? I'm thinking of using this guide:
http://www.gaztronics.net/rsync.php
It seems a bit outdated, though.
I am new to Ubuntu; I have installed rdiff-backup.
I have a folder called sqlfiles on a remote FTP server. The SQL files are kept for the last three days and then deleted, but I want to download all copies to my local computer.
I want incremental backups on my local server so that:
1) If a file is the same, it should not be copied
2) If it is different, overwrite it
3) If a file is in the local directory but not on the FTP server, leave it as it is
How can I apply these rules with rdiff-backup?
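One caveat: rdiff-backup talks to remote hosts over SSH, not FTP, so the remote side would need shell access. Purely to illustrate the three rules themselves, here is a minimal Python sketch (my own naming and logic, not rdiff-backup's actual behaviour):

```python
import filecmp
import os
import shutil

def sync(src_dir: str, dst_dir: str) -> None:
    """Copy files from src_dir into dst_dir under three rules:
    1) identical files are skipped,
    2) new or changed files are (over)written,
    3) files that exist only in dst_dir are left untouched."""
    os.makedirs(dst_dir, exist_ok=True)
    for name in os.listdir(src_dir):
        src = os.path.join(src_dir, name)
        dst = os.path.join(dst_dir, name)
        if not os.path.isfile(src):
            continue
        if os.path.exists(dst) and filecmp.cmp(src, dst, shallow=False):
            continue            # rule 1: unchanged, skip
        shutil.copy2(src, dst)  # rule 2: overwrite or create
    # rule 3: nothing in dst_dir is ever deleted
```

Note that plain rsync without --delete follows exactly these three rules, which may be simpler than rdiff-backup if you don't need the incremental history.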
I've configured XAMPP and the firewall so I can access my desktop PC's localhost over the local network via the desktop PC's IP.
But I'm not able to access the actual projects:
I can access:
http://192.168.x.x/xampp or http://192.168.x.x/phpMyAdmin
But I cannot access:
http://192.168.x.x/myWebsite/
I get an error:
Server error
We're sorry! The server encountered an internal error and was unable to complete your request. Please try again later.
error 500
I'm following this guide: http://atik97.wordpress.com/2010/06/12/how-to-upgrade-to-php-5-3-in-ubuntu-9-10/
I've done all the steps, but it's still showing PHP 5.2.6 - any ideas?
I have also tried -cgi instead of -cli, neither have any effect.
update
I've tried rebooting the server to see if that would have any effect, and unfortunately it didn't.
update
Output of dpkg -l *php*:
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Cfg-files/Unpacked/Failed-cfg/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Hold/Reinst-required/X=both-problems (Status,Err: uppercase=bad)
||/ Name Version Description
+++-=============================================-=============================================-==========================================================================================================
un libapache2-mod-php4 <none> (no description available)
ii libapache2-mod-php5 5.2.6.dfsg.1-3ubuntu4.6 server-side, HTML-embedded scripting language (Apache 2 module)
un libapache2-mod-php5filter <none> (no description available)
ii php-pear 5.2.6.dfsg.1-3ubuntu4.6 PEAR - PHP Extension and Application Repository
un php4-cli <none> (no description available)
un php4-dev <none> (no description available)
un php4-mysql <none> (no description available)
un php4-pear <none> (no description available)
ii php5 5.2.6.dfsg.1-3ubuntu4.6 server-side, HTML-embedded scripting language (metapackage)
ii php5-cgi 5.2.6.dfsg.1-3ubuntu4.6 server-side, HTML-embedded scripting language (CGI binary)
ii php5-cli 5.2.6.dfsg.1-3ubuntu4.6 command-line interpreter for the php5 scripting language
ii php5-common 5.2.6.dfsg.1-3ubuntu4.6 Common files for packages built from the php5 source
ii php5-curl 5.2.6.dfsg.1-3ubuntu4.6 CURL module for php5
un php5-dev <none> (no description available)
ii php5-gd 5.2.6.dfsg.1-3ubuntu4.6 GD module for php5
ii php5-imap 5.2.6-0ubuntu5.1 IMAP module for php5
un php5-json <none> (no description available)
ii php5-mcrypt 5.2.6-0ubuntu2 MCrypt module for php5
ii php5-mysql 5.2.6.dfsg.1-3ubuntu4.6 MySQL module for php5
un php5-mysqli <none> (no description available)
ii php5-xsl 5.2.6.dfsg.1-3ubuntu4.6 XSL module for php5
un phpapi-20060613+lfs <none> (no description available)
ii phpmyadmin 4:3.1.2-1ubuntu0.2 MySQL web administration tool
update
The following commands and their outputs:
grep php53 /etc/apt/sources.list
deb http://php53.dotdeb.org stable all
deb-src http://php53.dotdeb.org stable all
apt-cache search -f "libapache2-mod-php5"
http://pastebin.com/XNXdsXYC
update
I've updated the question with more details on installed packages.
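Given the dotdeb sources listed above while every installed package still shows 5.2.6, it may help to check which candidate version apt actually selects, and whether the Ubuntu archive is winning over dotdeb:

```shell
apt-get update
apt-cache policy php5-cli libapache2-mod-php5

# If dotdeb's 5.3 packages are listed but not selected,
# request them from that suite explicitly
apt-get install -t stable php5-cli
php -v
```

The -t stable flag assumes dotdeb publishes under the "stable" suite, as your sources.list lines suggest.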
I just made some changes to a DNS zone in Webmin and clicked the "Apply Changes" button. I received the error message:
rndc: connection to remote host closed. This may indicate that the remote server is using an older version of the command protocol, this host is not authorized to connect, or the key is invalid.
How can I troubleshoot / repair this? I copied parts of the BIND config from a failing server, so I suspect that's what's causing it...
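A common cause of that exact message is a key mismatch between rndc's key file and the controls/key stanza in named.conf after copying configs. Regenerating the key and testing rndc directly may narrow it down (file paths vary by distro):

```shell
# Write a fresh shared key to /etc/rndc.key
rndc-confgen -a

# Ensure named.conf's controls/key stanza references the same key,
# restart named, then test:
rndc status

# named logs the reason it refused the connection
tail -n 50 /var/log/syslog    # or /var/log/messages on Red Hat-style systems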
I'm running Exchange 2007 SP3, which exposes Outlook Web Access over HTTPS only. However, the server delivers the session-ID cookie without the secure flag set. Even though I don't have port 80 open, the cookie is still vulnerable to being stolen over port 80 in a man-in-the-middle attack. It also contributes to a PCI-DSS failure.
Does anyone know if I can persuade the web server/application to set the secure flag?
Is it possible to add a wildcard ServerAlias (example: *.somesite.com) to an Apache server without modifying httpd.conf manually? I use a DNS provider different from my hosting server, and I have added a wildcard A record to my DNS to point all requests (test.somesite.com, test2.somesite.com, ...) to my hosting server's IP, but I don't see any way of adding wildcard ServerAliases to Apache's httpd.conf in my cPanel. Please, is there a solution?
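For reference, outside of cPanel this is a single directive in the virtual host; a minimal sketch (document root is a placeholder, and cPanel setups usually want such changes in its vhost include files so its rebuilds don't overwrite them):

```
# httpd.conf / vhost include
<VirtualHost *:80>
    ServerName somesite.com
    ServerAlias *.somesite.com
    DocumentRoot /home/user/public_html
</VirtualHost>
```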
I rent a server from a German company. I have remote access to it as well as WHM and cPanel. I decided to use Google's mail servers for obvious reasons. I am not an admin, just an average guy trying to set up what needs to be set up. The problem is that I am unable to make the necessary settings. I watched YouTube tutorials and followed written ones as well as Google's help, but there is (at least) one serious problem with my domain settings.
The domain console always says: Your MX records are incorrect.
When I check dappwall.com in mxtoolbox.com it says
Pref Hostname IP Address TTL
10 mail.dappwall.com 46.4.88.247 24 hrs
But this is not the host name. I checked WHM and my hostname is server1.dappwall.com. I can confirm it by typing the hostname command in putty.
However, if I do an mx lookup at mxtoolbox.com on server1.dappwall.com or mail.dappwall.com I get
Lookup failed after 1 name servers timed out or responded non-authoritatively
I ran checks in the Google Apps Toolbox on dappwall.com and two problems emerged:
1. No Google mail exchangers found. Relayhost configuration?
10 mail.dappwall.com
In Google Apps > Settings for Gmail > Advanced settings it also says that my current MX records for dappwall.com is
Priority Points to
10 MAIL.DAPPWALL.COM.
So mail.dappwall.com again.
I also have access to a robot provided by the company I rent the server from. There I see this mail entry in two places, but how should I (if it's necessary) modify it?
I set Email routing to Automatically Detect Configuration.
2. There SHOULD be a valid SPF record.
"v=spf1 include:_spf.google.com ~all"
In the DNS Zone Editor I added this spf record:
Name TTL Class Type Record
dappwall.com. 1440 IN TXT v=spf1 include:_spf.google.com ~all
In the cPanel Email Authentication page it says
SPF:
Status: Enabled Warning: cPanel is unable to verify that this server is an authoritative nameserver for dappwall.com. [?]
Your current raw SPF record is : v=spf1 include:_spf.google.com ~all
How can I confirm that my server is an authoritative nameserver for dappwall.com?
In WHM > Service Configuration > Mailserver Selection, Dovecot was set, but I disabled it (I don't know if that's OK).
What am I missing here? Where is that mail.dappwall.com coming from?
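To separate cPanel's internal view from what the world actually resolves, querying the published zone directly with dig may help (replace the nameserver with one returned by the NS query):

```shell
# Who is authoritative for the zone, and what MX do they publish?
dig +short NS dappwall.com
dig +short MX dappwall.com

# Ask one authoritative server directly to rule out caching
dig MX dappwall.com @ns1.dappwall.com

# Check the SPF TXT record as published
dig +short TXT dappwall.com
```

If the authoritative servers still answer with 10 mail.dappwall.com, the Google MX records were never published in the zone, regardless of what cPanel shows.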
I want to restart ssh or sshd, but I get this error:
qqqq@Matrix-Server:/$ sudo /etc/init.d/ssh stop
sudo: /etc/init.d/ssh: command not found
qqqq@Matrix-Server:/$
Do I need to install ssh or sshd or does it come with Ubuntu?
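Ubuntu ships the ssh client by default but not the server; a missing /etc/init.d/ssh usually means the server package isn't installed:

```shell
# Install the OpenSSH server (provides sshd and /etc/init.d/ssh)
sudo apt-get install openssh-server

# Then stop/start/restart it
sudo /etc/init.d/ssh restart    # or: sudo service ssh restart
```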
Hi,
I followed your instructions and everything works.
I have a DHCP server that assigns client IPs without a gateway.
Internet browsing with IE or Firefox works, but the FTP service doesn't.
In squid.conf I have put this line:
acl Safe_ports port 80 21 443 389 5307 8080 3144 8282 88 8443 20443 11438 1443 8050 30021 10443 4747 4774 1384
Do I have to set a gateway in the DHCP server?
Do you have any suggestions for me?
Thanks for your help.
I'd like to start a process when the PC starts up, but before the user logs in. Then, after the user logs in, they see the console/GUI for the already-running process. If they log off, the process continues to run in the background until they log back in again.
Is this possible in Windows Server 2008 R2?
It seems perfect for daemon/server applications.
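The boot-time, logon-independent part is essentially the definition of a Windows service; a sketch of wrapping an existing executable with sc (service name and path are placeholders). One caveat: since Vista/Server 2008, services run in isolated session 0, so the "see the GUI after logon" part would need a separate client process in the user's session that talks to the service (e.g. over a named pipe), rather than the service showing a window itself.

```shell
rem Create a service that starts at boot, before any logon
rem (the space after each '=' is required by sc's syntax)
sc create MyDaemon binPath= "C:\apps\mydaemon.exe" start= auto
sc start MyDaemon
```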
In a vsftpd server environment that shares various directories from NFS mountpoints, I can log in without problems, and when I send the first ls, vsftpd gives me the directory listing:
lftp [email protected]:~ ls
-rw-rw-rw- 1 1160 1016 392 Jun 06 09:28 test.gif
but it never returns me to the prompt (lftp client). In the server log I can see that the last message is:
"150 Here comes the directory listing."
Why does this happen?
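A transfer that stalls around "150 Here comes the directory listing." is the classic symptom of the data connection being interfered with, often by a firewall or NAT between client and server. If that is the case here, the usual approach is to pin vsftpd's passive-mode ports to a known range and open that range in the firewall; a vsftpd.conf sketch (the port range is arbitrary):

```
# vsftpd.conf: pin passive-mode data connections to a known range
pasv_enable=YES
pasv_min_port=30000
pasv_max_port=30100
```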
Assume I have a dedicated server running multiple instances of MySQL and PostgreSQL. Without iotop, how can I determine which instance at a particular time is generating the most IO (and so driving up iowait)? /proc/<pid>/io only shows counters accumulated over some period of time.
When lots of people are working on a DB, I can clearly see which instance is creating the load because of high CPU usage, but I've had a situation where CPU usage was normal yet very high iowait put a huge load on the server, and I had trouble finding the process doing the outstanding IO.
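Since /proc/<pid>/io exposes cumulative counters, sampling it twice and diffing gives per-process bytes over exactly the window you care about, which is essentially what iotop does. A minimal Python sketch (Linux only; requires a kernel with task IO accounting):

```python
import time

def read_io(pid: int) -> dict:
    """Parse /proc/<pid>/io into a dict of cumulative counters (Linux only)."""
    counters = {}
    with open(f"/proc/{pid}/io") as f:
        for line in f:
            key, _, value = line.partition(":")
            counters[key.strip()] = int(value)
    return counters

def io_delta(pid: int, interval: float = 1.0) -> dict:
    """Block-device bytes read/written by pid during a sampling interval."""
    before = read_io(pid)
    time.sleep(interval)
    after = read_io(pid)
    return {k: after[k] - before[k] for k in ("read_bytes", "write_bytes")}
```

Running io_delta over each mysqld/postgres pid during a spike points at the responsible instance; read_bytes/write_bytes count actual block-device IO, which is what drives iowait, unlike rchar/wchar which include cached reads.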
I just set up my new remote office network. The problem is that I cannot access shared folders at the home office (without turning on the VPN).
I control the servers remotely but would really like to access ports 139 and 445.
The problem is that they are open on the server side, but it appears the packets are being dropped before they get to the server. Is there any way I can tell where the packets are being dropped?
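To narrow down where the packets die, probing from the remote side and watching on the server at the same time can help; a sketch with a placeholder hostname (nmap and tcptraceroute are separate installs):

```shell
# From the remote office: are 139/445 reported open, filtered, or closed?
nmap -p 139,445 fileserver.example.com

# Trace the path using TCP to port 445 so firewalls treat it like real SMB
tcptraceroute fileserver.example.com 445

# On the server: watch whether the SYNs ever arrive at the interface
tcpdump -ni eth0 'tcp port 445 or tcp port 139'
```

"Filtered" from nmap plus no SYNs in tcpdump would point at a device in between; note many ISPs block 139/445 outright, which is one reason SMB is usually tunnelled over a VPN.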