Is there a good tool that can do the same thing as Windows' 'mstsc' and also has some extra features, like
saving different session info, so I don't need to remember different IPs/IDs/passwords? Thanks.
Nagios is a wonderful tool for monitoring servers. Its web interface is not bad, either. However, I am not crazy about using the HTTP authentication that comes standard.
Is there a way to use another method of authentication? (And I don't mean restricting access by IP address in the .htaccess file.) Something with a form-based login would be wonderful, but perhaps there is no such thing. I'm hoping you guys have found something I haven't.
Is it possible to add a wildcard ServerAlias (example: *.somesite.com) on an Apache server without modifying httpd.conf manually? I use a DNS provider different from my hosting server, and I have added a wildcard A record to my DNS to point all requests like test.somesite.com and test2.somesite.com to my hosting server's IP, but I don't see any way of adding wildcard ServerAliases to Apache's httpd.conf file in my cPanel. Is there a solution?
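For reference, if you do have shell access to the Apache includes (keeping in mind cPanel may overwrite manual edits on rebuild), a wildcard alias is a one-line directive inside the vhost; the paths below are placeholders:

```apache
<VirtualHost *:80>
    ServerName somesite.com
    # wildcard alias: matches test.somesite.com, test2.somesite.com, etc.
    ServerAlias *.somesite.com
    DocumentRoot /home/user/public_html
</VirtualHost>
```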
I am running a Tomcat server on a Fedora machine.
When I start Tomcat using the following command:
service tomcat start
it runs and is reachable on localhost,
but when I try to connect to the server remotely using its public IP address, as follows:
http://xxx.xxx.xxx.xxx:8080
it does not respond.
Could someone help me with this issue?
Thanks in advance for any help.
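For what it's worth, Tomcat's HTTP connector listens on all interfaces by default, so when a page loads on localhost but not remotely, the usual suspects are the firewall or a connector bound only to 127.0.0.1 rather than Tomcat itself. A couple of hedged checks, assuming a default Fedora setup (the port matches the question):

```shell
# Is Tomcat listening on all interfaces (0.0.0.0:8080) or only 127.0.0.1?
netstat -tlnp | grep 8080

# Temporarily open TCP 8080 in iptables (as root), then retry remotely
iptables -I INPUT -p tcp --dport 8080 -j ACCEPT
```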
I'm interested in any software, experiences, or guidelines that help with keeping track of installed services, their primary user (or the business person responsible for each service), domain names, IP addresses, and ports on your servers.
The servers are both Windows and Linux, so licenses are also good to track along with all of this information.
Scale of the infrastructure in question: 20-50 servers.
Currently we have no better idea than to use Excel for it.
Our websites are being crawled by content thieves on a regular basis. We obviously want to let through the nice bots and legitimate user activity, but block questionable activity.
We have tried IP blocking at our firewall, but the block lists become difficult to manage. We have also used IIS handlers, but that complicates our web applications.
Is anyone familiar with network appliances, firewalls or application services (say for IIS) that can reduce or eliminate the content scrapers?
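As one IIS-level option that avoids custom handlers, the URL Rewrite module can block requests purely from web.config; the user-agent strings below are placeholders for whatever your logs actually show, and this only catches scrapers that identify themselves:

```xml
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="Block scrapers" stopProcessing="true">
          <match url=".*" />
          <conditions>
            <!-- hypothetical agent names; replace with patterns from your logs -->
            <add input="{HTTP_USER_AGENT}" pattern="BadBot|ContentGrabber" />
          </conditions>
          <action type="CustomResponse" statusCode="403" reasonPhrase="Forbidden" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
```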
Hi,
Assume I have 2 EC2 accounts (say A and B), both with different lists of security groups. Now I want to open a particular port (say 80) on an instance running in account A to account B. That is, I want to allow only account B's instances to access port 80 on account A's instances. Could anyone tell me whether there is a way to do this?
Additionally, can I access account A's instance from account B's instance by using its private IP address/hostname?
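For what it's worth, EC2 security group rules can name a group in another AWS account as the traffic source, which keeps the rule tied to account B's instances rather than to their changing IP addresses. A sketch with the classic EC2 API tools, assuming EC2-Classic (the group names and account ID are placeholders):

```shell
# In account A: allow TCP 80 only from account B's group "web-clients"
# (-u is the source account ID, -o the source group in that account)
ec2-authorize my-servers -P tcp -p 80 -u 111122223333 -o web-clients
```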
Thanks in Advance,
I got a quote from a colo, saying
"I could do 3U, the power and 20 megs burstable to 100 for $250 a
month with each additional meg billed to the 95th percentile at 6.50
per meg..so you would have he ability to burst if you needed it but
not pay for the full 24/7 amount of the IP."
I'm assuming it means: 20 Mbit/s unmetered, with anything above that billed at the 95th percentile at a rate of $6.50/Mbit. Am I right? And how do you measure the $x.xx per meg at the 95th percentile?
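As a sketch of how 95th-percentile billing is usually measured: the provider samples your throughput every 5 minutes for the whole month, sorts the samples, throws away the top 5%, and bills the highest remaining value; anything over the committed 20 Mbit/s is charged at the per-meg rate. Assuming one Mbit/s sample per line in samples.txt:

```shell
# Build example 5-minute samples (a real month has ~8640; 100 keeps the sketch small)
seq 1 100 > samples.txt

# Sort the samples, take the value at the 95% position (the top 5% of
# samples is ignored), then bill the overage above 20 Mbit at $6.50/Mbit
sort -n samples.txt | awk '{v[NR]=$1}
    END {
        p95 = v[int(NR*0.95)]
        over = (p95 > 20) ? p95 - 20 : 0
        printf "p95=%d overage=$%.2f\n", p95, over*6.50
    }'
# prints: p95=95 overage=$487.50
```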
I was inspecting my Apache access logs (I use the default combined log format) and I came across a weird entry:
69.171.247.0 - - [22/Oct/2012:18:15:20 +0200] "GET /some site resources HTTP/1.1" 404 514 "-" "facebookexternalhit/1.0 (+http://www.facebook.com/externalhit_uatext.php)"
As you can see, this request comes from a Facebook robot that extracts objects from a site when somebody posts a link.
What I find weird is the logged IP address: 69.171.247.0
Does anybody know how that is possible?
I have created a web site and added it to IIS 7; in the binding I set the host name to "mysite.com". (Here "mysite.com" is my registered domain, which points to my IP address.)
So when I assigned port 8095 and opened the site as mysite.com:8095, it successfully opens both on my local PC and on a PC outside my network; but if I set up port 80 there, http://mysite.com opens only on my PC, not on an outside PC. The firewall is disabled.
How do I resolve this problem? Please help!
I will have 2 servers in different datacenters (different countries) and I want to use DNS load balancing, mainly for high availability of the website hosted on those 2 servers. It is just an ad tracking site, which records hits in a local database and returns a few lines of HTML code.
I want to return 2 A records each time because of DNS pinning in browsers (if one server fails, browser will try second A record which it has already cached).
Both servers will be acting also as DNS servers for redundancy. Now comes my proposed solution: I will use BIND and have both servers as a master for that zone. On each server there will be running script, which will periodically test availability (http) of both servers and remove IP from DNS in case of failure.
Now the questions :)
1) Is BIND suitable for this solution? I think BIND's performance is good and it is easy to manipulate the zone file via a script. And since I will modify the zone only in case of failure/maintenance, modifications (and thus BIND reloads) won't be frequent.
2) I plan to use TTL of 5 minutes. The website will have about 1000-3000 req/s but from distinct clients (each IP only 1-3 requests), so I think the DNS load won't be too much. I suppose their ISPs will cache the responses for those 5 mins. Is there any reason to lower the TTL even more?
3) Is my master-master approach good? Or should I make one of the servers a master and the other one a slave? Right now each server can monitor both itself and the other one. If only the web service fails, both DNS nodes will notice it. If the whole server fails, then the remaining DNS node will notice it, and the failed node will not answer DNS queries anyway.
4) Is it a big issue when one NS server does not respond to queries? If yes, I can make a third DNS, so anytime at least 2 of them would accept queries...
5) Should I rewrite the zone file via a script, or just use dynamic DNS updates (for example via the nsupdate utility)?
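If you go the dynamic-update route, the failover script doesn't have to rewrite the zone file at all; it can feed nsupdate a small command script over a TSIG key. A sketch of removing a failed server's A record (the zone, key path, and addresses are placeholders):

```shell
# Delete the failed server's A record via RFC 2136 dynamic update
nsupdate -k /etc/bind/update.key <<'EOF'
server 127.0.0.1
zone example.com
update delete www.example.com. A 192.0.2.2
send
EOF
```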
Our internal DNS queries go through Active Directory. We are hosting a site that is not in our domain, but internal users need to get the internal IP address for routing.
How do I configure Active Directory to return A records for a few arbitrary domain names, not just those in our own domain?
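One common approach, assuming the Windows DNS Server role is what serves your AD clients, is a so-called pin-point zone: create a tiny authoritative zone named after just that host and put a single A record at its root, so only that name is overridden. The zone name and address below are placeholders:

```
dnscmd /ZoneAdd www.example.org /DsPrimary
dnscmd /RecordAdd www.example.org @ A 10.1.2.3
```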
I have a handful of (content-unrelated) sites with decent PRs, and I'm considering hosting them all on the same server. I've heard that if you do this, internal linking between two separate domains on that server may be seen as less "valid" by Google in PageRank terms (since you obviously own both of the sites, as they share an IP address).
Anyone have any experience in this? I'd love to save some hosting cash by consolidating, but not at the expense of losing the ability to link my sites together powerfully.
I have a DHCP server in my home and I would like to set up a DNS server too.
I would like to implement a Linux solution, but I don't think I can get hands-on without understanding, at least superficially, whether I can achieve such a result.
My PC (hostname: test) gets 192.168.1.7 from DHCP.
Its DNS server is my router (192.168.1.1).
How can the router relate my IP change (as soon as the lease is over) to my hostname?
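For context, this DHCP-to-DNS linkage is exactly what dnsmasq (the DHCP/DNS daemon on many home routers) provides: each DHCP client's hostname becomes resolvable for the lifetime of its lease, and updates when the lease changes. A minimal sketch of such a configuration; the interface, range, and domain are assumptions:

```
# /etc/dnsmasq.conf (sketch)
interface=eth0
dhcp-range=192.168.1.50,192.168.1.150,12h
# hostnames sent by DHCP clients become resolvable, e.g. "test" -> leased IP
expand-hosts
domain=home.lan
```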
We have a SQL Server that holds important databases for our clients. If the server goes down, we want another server to be ready to switch over to (we would just change the IP). The question is: how can we automatically sync the primary SQL Server to the secondary one periodically throughout the day, or even in real time?
Thanks!
Interesting question. I have this Python code:
import sys, bottle, gevent
from bottle import *
from gevent import *
from gevent.wsgi import WSGIServer

@route("/")
def index():
    yield "/"

application = bottle.default_app()
WSGIServer(('', port), application, spawn=None).serve_forever()
that runs standalone with nginx in front of it as a reverse proxy.
Each of these pieces of code runs separately, but I run multiple of them per domain, per project (directory). The code thinks for some reason that it is top level, and it's not: when you go to mydomain.com/something it works, but if you go to mydomain.com/something/ you get an error. I have tested this and figured out that nginx is stripping the "something" from the request/query, so when you go to mydomain.com/something/ the code thinks you are going to mydomain.com//. How do I get nginx to stop removing this information?
Nginx site code:
upstream mydomain {
    server 127.0.0.1:10100 max_fails=5 fail_timeout=10s;
}

upstream subdirectory {
    server 127.0.0.1:10199 max_fails=5 fail_timeout=10s;
}

server {
    listen 80;
    server_name mydomain.com;
    access_log /var/log/nginx/access.log;

    location /sub {
        proxy_pass http://subdirectory/;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_max_temp_file_size 0;
        client_max_body_size 10m;
        client_body_buffer_size 128k;
        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
    }

    location /subdir {
        proxy_pass http://subdirectory/;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_max_temp_file_size 0;
        client_max_body_size 10m;
        client_body_buffer_size 128k;
        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
    }
}
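For reference, the stripping described above is standard proxy_pass behavior rather than a bug: when proxy_pass carries a URI part (here the bare trailing slash), nginx replaces the matched location prefix with that URI before forwarding; without a URI part the request URI passes through untouched. The two variants below are alternatives for the same location:

```nginx
location /sub {
    # URI part present (the trailing /): the matched prefix "/sub" is cut
    # off and replaced, so a request for /sub/ reaches the backend as "//"
    proxy_pass http://subdirectory/;
}

location /sub {
    # no URI part: the request URI passes through unchanged, so /sub/
    # reaches the backend as /sub/
    proxy_pass http://subdirectory;
}
```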
Good evening everyone, or possibly early morning if you are in my neck of the woods.
My problem seems trivial, but after several hours of testing, researching, and fiddling I can't seem to get this simple nginx rewrite to work.
There are several rewrites we need, and some will have multiple parameters, but I can't even get this simple one-parameter URL to change into the desired form at all.
Current: website.com/public/viewpost.php?id=post-title
Desired: website.com/public/post/post-title
Can someone kindly point out what I have done wrong? I am baffled / very tired...
For testing purposes before we launch, we were just using a simple port on the server. Here is that section.
# Listen on port 7774 for dev test
server {
    listen 7774;
    server_name localhost;
    root /usr/share/nginx/html/paa;
    index index.php home.php index.html index.htm /public/index.php;

    location ~* /uploads/.*\.php$ {
        if ($request_uri ~* (^\/|\.jpg|\.png|\.gif)$ ) {
            break;
        }
        return 444;
    }

    location ~ \.php$ {
        try_files $uri @rewrite =404;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_pass php5-fpm-sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_intercept_errors on;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location @rewrite {
        rewrite ^/viewpost.php$ /post/$arg_id? permanent;
    }
}
I have tried countless attempts, such as the @rewrite above and simpler ones:
location / {
    rewrite ^/post/(.*)$ /viewpost.php?id=$1 last;
}

location ~ \.php$ {
    try_files $uri =404;
    fastcgi_index index.php;
    include fastcgi_params;
    fastcgi_pass php5-fpm-sock;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_intercept_errors on;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
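For comparison, with the desired URLs living under /public, the /public prefix has to appear in both the matching pattern and the rewritten target; a minimal sketch of that shape, assuming the paths from the Current/Desired examples above:

```nginx
# Serve /public/post/<title> by internally rewriting to the real script
location /public/post/ {
    rewrite ^/public/post/(.+)$ /public/viewpost.php?id=$1 last;
}
```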
I cannot seem to get anything to work at all; I have tried changing the location and tried multiple rules...
Please tell me what I have done wrong.
Pause for facepalm
[relocated from stack overflow as per mod suggestion]
I've been following all the OpenVPN bridge tutorials I can, but I'm still missing something. Does anyone know of a super detailed tutorial/explanation of bridging?
If anyone has bridging running, could I get a copy of your interfaces file to see how you've got it going? (Obviously change the IP addresses; just please change them consistently.)
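In case a generic sketch helps while waiting for a real-world copy: a typical Debian-style /etc/network/interfaces for an OpenVPN bridge joins the physical NIC and the tap device into one bridge. The addresses and device names below are placeholders, and bridge-utils is assumed to be installed:

```
# /etc/network/interfaces (sketch; tap0 is created by OpenVPN at startup)
auto br0
iface br0 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    gateway 192.0.2.1
    bridge_ports eth0 tap0
```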
I need to configure almost 3 dozen laptops. We need half of them to have different IP addresses. Would configuring one laptop and then ghosting the rest be the fastest way to do this or is there a better way?
I just installed Ubuntu Server 12.04 on an office machine with OpenSSH, DNS, and a LAMP server. I also made the IP static, and I can access the server within my office premises easily, but when I try to access my server from my home it does not work.
I know I have to make some changes and need to set up some firewall rules (I have just gone through a couple of posts), but I guess expert advice will save me some time here.
I am running Exchange 2013 on Windows Server 2012 R2.
When I add my exchange account to Outlook, it seems to work perfectly (sending/receiving email, syncing everything), but when I open the account settings it has the following set as the Server:
[email protected]
I would have expected this to be mail.domain.com, since that is the DNS A record pointing to the IP of my server. Where is it getting this server name?
I have the following network:
http://i.stack.imgur.com/rapkH.jpg
I want to send all the traffic from the devices that connect to the 192.168.0.1 router to the 192.168.10.1 router (and eventually to the Internet), passing through the server and an additional router. Almost 2 days have passed and I can't figure out what is wrong.
While searching on the Internet for some similar configuration I found some articles that are somehow related to my needs, but the proposed solutions don't seem to work for me. This is a similar article: iptables forwarding between two interface
I did the following steps for the configuration process:
Set static IP address 192.168.1.90 for the eth0 on the server from the 192.168.1.1 router
Set static IP address 192.168.0.90 for the eth1 on the server from the 192.168.0.1 router
Forwarded all the traffic from the 192.168.0.1 router to the server on the eth1 interface, which seems to be working. The router firmware has an option to redirect all the traffic from all ports to a specified address.
Added the following rules on the server (only the following; there are no additional rules):
iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
iptables -A FORWARD -i eth1 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT
I also tried changing
iptables -A FORWARD -i eth1 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
into
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
but it is still not working.
I also added the following to enable packet forwarding on the server, which is running CentOS:
echo 1 > /proc/sys/net/ipv4/ip_forward
sysctl -w net.ipv4.ip_forward=1
After a server restart and an extra check to confirm that all the configuration from above was still in place, I tried again to ping the router at 192.168.1.1 from a computer connected to the 192.168.0.1/24 LAN, but it didn't work.
The server has tshark (console Wireshark) installed, and I found that while sending a ping from a computer connected to the 192.168.0.1 router to 192.168.1.1, the 192.168.0.90 (eth1) interface receives the ping but does not forward it to the eth0 interface as the rule tells it to:
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
and I don't know why this is happening.
Questions:
The iptables rules don't seem to work as I expect. Do I need to add rules to the NAT table to redirect the traffic to the proper location, or is something else wrong with what I've done?
I want to use tshark to view the traffic on the server because I think it is the best at doing this. Do you know anything better than tshark for capturing and analyzing the traffic?
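For comparison with the rules above, the textbook layout for pushing eth1's LAN out through eth0 applies MASQUERADE on the egress interface (eth0 here, not eth1), with the stateful ACCEPT on the return path; a sketch assuming the interface roles described in the question:

```shell
# Enable IPv4 forwarding (persist it in /etc/sysctl.conf to survive reboots)
sysctl -w net.ipv4.ip_forward=1

# NAT outbound traffic on the interface that faces the upstream router
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

# Forward the LAN out, and allow only replies back in
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
```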
1. Doesn't Amazon AWS provide a DNS service?
2. I can only assign a static IP through EC2.
So the only way to assign a domain name is to use a third-party DNS service? Which do you all recommend? I need one that is able to add SRV records.