Search Results

Search found 11483 results on 460 pages for 'ip contracts'.


  • The canonical "blocking BitTorrent" question

    - by Aphex5
    How can one block, or severely slow down, BitTorrent and similar peer-to-peer (P2P) services on one's small home/office network? In searching Server Fault I wasn't able to find a question that served as a rallying point for the best technical ideas on this. The existing questions are all about specific situations, and the dominant answers are social/legal in nature. Those are valid approaches, but a purely technical discussion would be useful to a lot of people, I suspect. Let's assume that you don't have access to the machines on the network. With encryption use increasing in P2P traffic, it seems like stateful packet inspection is becoming a less workable solution. One idea that seems to make sense to me is simply throttling down heavy users by IP, regardless of what they're sending or receiving -- but it doesn't seem many routers support that functionality at the moment. What's your preferred method to throttle P2P/BitTorrent traffic? My apologies if this is a dupe.
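
    One concrete shape for the per-IP throttling idea, as a hedged sketch rather than a recommendation: on a Linux-based router (DD-WRT, OpenWrt, or a dedicated box) tc can cap a heavy user's bandwidth regardless of protocol or encryption. The interface name and addresses below are assumptions.

      # assumes eth0 faces the LAN and 192.168.1.50 is the heavy downloader
      tc qdisc add dev eth0 root handle 1: htb default 10
      tc class add dev eth0 parent 1: classid 1:10 htb rate 100mbit            # everyone else
      tc class add dev eth0 parent 1: classid 1:20 htb rate 1mbit ceil 2mbit   # throttled host
      tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
          match ip dst 192.168.1.50/32 flowid 1:20

    This only shapes traffic toward that host (downloads); a mirror of the same classes on the WAN-facing interface would be needed to slow uploads as well.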

    Read the article

  • Conditional Directory Index In Htaccess

    - by icelizard
    This relates to the question in: http://stackoverflow.com/questions/1599717/conditional-directoryindex-in-htaccess The answer states that the following should work: SetEnvIf Remote_Addr ^127\.0\.0\.0$ owner <IfDefine owner> DirectoryIndex index.html </IfDefine> <IfDefine !owner> DirectoryIndex index.php </IfDefine> I am not sure this works. The setting of the Env var definitely does, but no matter what IP I visit the site from, the DirectoryIndex is always index.php. Is there something wrong with the conditional, or should I be using something else? Thanks in advance
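
    For what it's worth, <IfDefine> tests only -D parameters given at server startup, not variables set by SetEnvIf, which would explain why the index.php branch always wins regardless of IP. A hedged .htaccess sketch of one alternative, assuming mod_rewrite is available (the 127.0.0.1 address is illustrative):

      DirectoryIndex index.php
      RewriteEngine On
      # serve index.html instead when the request for the bare directory comes from the owner's address
      RewriteCond %{REMOTE_ADDR} ^127\.0\.0\.1$
      RewriteRule ^$ index.html [L]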

    Read the article

  • Changing shared printer settings to default to greyscale

    - by Chris
    My company has about 60 employees all running Windows Vista or 7 and a gigantic Minolta printer hooked up to an EFI Fiery Image Processor. We're burning about $300 a month in printer supplies alone. I'm trying to find a way to cause the printer to default to grayscale in order to save money. So far I've tried: Changing settings on the image processor Changing settings on the print server Looking through the Group Policy editor to see if I can find anything useful Creating a new printer on the print server and setting it to be grayscale only Adding the printer to my computer directly (through a TCP/IP port) and setting it to be greyscale only Has anybody successfully done this before? If so, how did you go about it? I don't expect anybody to know the specifics of my environment; I'm just not sure what the right direction is.

    Read the article

  • Grant relay to servers based on AD security group membership

    - by john
    We're moving our relay from an Exchange 2003 server to an Exchange 2010 server. I was hoping the "Grant or deny relay permissions to specific users or groups" option would still be available in some form, but I can't find out how to do it. I've read up on receive connectors and so far I can't get it to work. I have edited the security on the Receive Connector to allow the following extended rights to the group and added computer accounts to that group: Accept Routing Headers Bypass Anti-spam Submit to Server Accept any Sender Accept any Recipient Then I suddenly realised while testing... How would the receive connector resolve the permission to a particular AD object -- maybe a reverse DNS lookup? What I'd like to know is if what I'm trying to achieve is possible, and how it would be possible. I would rather not revert to an IP-based list as this is not as manageable, and I'm trying to avoid creating static IPs/reservations for a number of workstations that would otherwise not need them.
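
    On the resolution question: the connector does not do a reverse DNS lookup; the extended rights are evaluated against whatever identity the SMTP session authenticates as, so the sending servers' computer accounts have to authenticate (e.g. Integrated Windows Authentication enabled on the connector) before their group membership can matter. A hedged Exchange Management Shell sketch, with connector and group names as placeholders:

      Get-ReceiveConnector "MYEXCH\Relay Connector" |
          Add-ADPermission -User "MYDOMAIN\Relay Servers" -ExtendedRights "Ms-Exch-SMTP-Accept-Any-Recipient"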

    Read the article

  • apxs cannot install mod_cloudflare on centos

    - by Adam
    [ Linux - CentOS - Apache 2.2 - mod_cloudflare - apxs2 ] I have changed my nameservers to point to CloudFlare. The problem is that all the IP addresses are coming in as CloudFlare's. This is no good, because I have to monitor and block some specific traffic. mod_cloudflare is supposed to resolve this but I have been unable to get this installed. The command in the documentation uses apxs2. I can't figure out how to install this, or if it just means for 'apache 2.4'. I'm running 2.2.3, and I can use 'apxs'. When I run: apxs -aic mod_cloudflare.c I get the error apxs:Error: Command failed with rc=65536 Does this mean I need apxs2 or something else? How do I get mod_cloudflare working on my server? I appreciate any help, the documentation is vague and limited.
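
    An rc=65536 from apxs usually just means the compile step it shells out to failed, most often because a compiler or the Apache development headers are missing; apxs2 is simply the Debian/Ubuntu name for Apache 2's apxs, so plain apxs is the right tool on CentOS. A hedged sketch of the usual fix (package names assumed):

      yum install gcc httpd-devel
      apxs -a -i -c mod_cloudflare.c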

    Read the article

  • How to upgrade a single instance's size without downtime

    - by Justin Meltzer
    I'm afraid there may not be a way to do this since we're not load balancing, but I'd like to know if there is any way to upgrade an EC2 EBS-backed instance to a larger size without downtime. First of all, we have everything on one instance: both our app and our database (mongodb). This is along the lines I'm thinking: I know you can create snapshots of your EBS and an AMI of your instance. We already have an AMI and we create hourly snapshots. If I spin up a new separate instance of a larger size and then implement (not sure what the right term is here) the snapshots so that our database is up to date, then I could switch the A record of our domain from the old IP address to the new one. However, I'm afraid that after copying over the data from the snapshot, in the time it takes to change the A record and have that change propagate, the data could potentially become stale. Is there a way to prevent this, and is there a better way to do this than I am suggesting?
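
    One hedged way around the stale-data window, sketched under the assumption that MongoDB replication is acceptable: instead of restoring from a snapshot, run the new instance as a replica set member so it stays current until the moment of cutover, and lower the DNS TTL on the A record well in advance.

      // on the existing instance (mongod must be started with --replSet rs0)
      rs.initiate()
      rs.add("new-bigger-instance.example.com:27017")   // hostname is a placeholder
      // once the new member has caught up, demote the old primary and flip the A record
      rs.stepDown()

    Alternatively, since the instance is EBS-backed, a plain stop / change instance type / start keeps the data intact and reduces the outage to minutes rather than eliminating it.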

    Read the article

  • Self-hosted browser-based remote desktop script?

    - by rlsaj
    I need a self-hosted browser based remote desktop script that will connect me from any PC to my work PC. I need to either host this script within my own dedicated hosting or on my work PC. The PC that I need to remote into is always the one PC (Win7) and the IP never changes, and I have access to the Router/Firewall within. I have tried many remote desktop services and applications - LogMeIn, Team Viewer, (Ultra/Tight) VNC, GoToMyPC and iTeleport Connect and even Windows Remote Desktop - and the web services (or ports) are blocked at whatever free wi-fi/hotel/coffee shop I am at. Note that I will need to be able to access this from any PC, so I won't be able to install any applications (or use any portable software) - hence my thinking that it will need to be browser based on a standard (not blocked) port. If I can set up a web based remote desktop application - really a homebrew LogMeIn - then I should solve my problem. What is the best option here?

    Read the article

  • Routing from the DMZ to the interior network only

    - by Allan
    I have a home network connected to Verizon's FIOS service. Verizon's ActionTec router is connected to the ONT via coax to establish the MOCA network. My DD-WRT router's WAN port is connected to one of the ActionTec's LAN ports. The DD-WRT router is configured with a static IP address and assigned to the DMZ (this is done so I only have to configure port forwarding once). My issue is that this does not allow computers connected to the DD-WRT to serve streaming audio/video to the MOCA network using Verizon's Media Manager. I know that Media Manager uses ports 18001, 5050, and 5060, but I don't know how to forward those ports so that they are only available to the ActionTec's network and not the rest of the internet.
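
    Rather than the DMZ catch-all, source-restricted forwards in the DD-WRT firewall script (Administration -> Commands) could expose those ports to the ActionTec side only. A hedged sketch; the WAN interface name, subnets and serving host are assumptions, and the rules would be repeated for ports 5050 and 5060 (and for UDP if Media Manager needs it):

      # vlan2 = DD-WRT WAN port, 192.168.1.0/24 = ActionTec LAN, 192.168.10.50 = serving PC
      iptables -t nat -I PREROUTING -i vlan2 -s 192.168.1.0/24 -p tcp --dport 18001 \
          -j DNAT --to-destination 192.168.10.50
      iptables -I FORWARD -s 192.168.1.0/24 -d 192.168.10.50 -p tcp --dport 18001 -j ACCEPT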

    Read the article

  • Questions about Domains and DNS

    - by ShoX
    Hi, I am totally new to the DNS and server hosting world and not quite sure what I need. I want to get a domain and forward it to my own server, so that the user sees example.com in the URL bar and example.com/foo/bar will work. Depending on the subdomain, it should do different things (a different base directory on the web server, FTP, etc.). Also, my email should be able to be sent to and received by that server. What confuses me is that in the A record I can only list IP addresses, not ports. So do I have to set up a nameserver on my own server, or do I accomplish this via vhosts on my web server? I would appreciate any help or a link to a tutorial. I know how DNS works and know some basic Apache stuff, so no need to explain that. Thanks
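
    To the port question: DNS only maps names to IP addresses; ports and per-subdomain directories are the web server's job, handled with name-based virtual hosts, so a nameserver of your own is not required for that part. A hedged Apache sketch (paths assumed):

      NameVirtualHost *:80   # needed on Apache 2.2

      <VirtualHost *:80>
          ServerName example.com
          ServerAlias www.example.com
          DocumentRoot /var/www/example.com
      </VirtualHost>

      <VirtualHost *:80>
          ServerName foo.example.com
          DocumentRoot /var/www/foo
      </VirtualHost>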

    Read the article

  • Hosting several domains on one server using IIS 7

    - by Øyvind Knobloch-Bråthen
    I have created several web sites inside IIS7 on my server. All of them use the same IP and port, but different host names. Currently I have set the host name to www.mydomain.com. Now my question is: how do I get my actual domains to target the different sites on my server? Second question: can I set my host name to only mydomain.com to make sure that all requests to that domain are handled by the same application? Primarily, I want both www.mydomain.com and mydomain.com to work when the user types the address in their browser.
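
    IIS matches each request to a site by its host-header binding, so adding both mydomain.com and www.mydomain.com as bindings on the same site sends both names to one application; at the DNS host, both names then point at the server's IP. A hedged appcmd sketch with the site name as a placeholder:

      appcmd set site /site.name:"MySite" /+bindings.[protocol='http',bindingInformation='*:80:mydomain.com']
      appcmd set site /site.name:"MySite" /+bindings.[protocol='http',bindingInformation='*:80:www.mydomain.com']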

    Read the article

  • How to invalidate nginx reverse proxy cache in front of other nginx servers?

    - by Olivier Lance
    I'm running a Proxmox server on a single IP address, that will dispatch HTTP requests to containers depending on the requested host. I am using nginx on the Proxmox side to listen to HTTP requests and I am using the proxy_pass directive in my different server blocks to dispatch requests according to the server_name. My containers run on Ubuntu and are also running a nginx instance. I'm having troubles with caching on a particular website that is fully static: nginx keeps on serving me stale content after files updates, until I: Clear /var/cache/nginx/ and restart nginx or set proxy_cache off for this server and reload the config Here's the detail of my configuration: On the server (proxmox): /etc/nginx/nginx.conf: user www-data; worker_processes 8; pid /var/run/nginx.pid; events { worker_connections 768; # multi_accept on; use epoll; } http { ## # Basic Settings ## sendfile on; #tcp_nopush on; tcp_nodelay on; #keepalive_timeout 65; types_hash_max_size 2048; server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; client_body_buffer_size 1k; client_max_body_size 8m; large_client_header_buffers 1 1K; ignore_invalid_headers on; client_body_timeout 5; client_header_timeout 5; keepalive_timeout 5 5; send_timeout 5; server_name_in_redirect off; ## # Logging Settings ## access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; ## # Gzip Settings ## gzip on; gzip_disable "MSIE [1-6]\.(?!.*SV1)"; gzip_vary on; gzip_proxied any; gzip_comp_level 6; # gzip_buffers 16 8k; gzip_http_version 1.1; gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; limit_conn_zone $binary_remote_addr zone=gulag:1m; limit_conn gulag 50; ## # Virtual Host Configs ## include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } /etc/nginx/conf.d/proxy.conf: proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_hide_header X-Powered-By; proxy_intercept_errors on; proxy_buffering on; proxy_cache_key "$scheme://$host$request_uri"; proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=cache:10m inactive=7d max_size=700m; /etc/nginx/sites-available/my-domain.conf: server { listen 80; server_name .my-domain.com; access_log off; location / { proxy_pass http://my-domain.local:80/; proxy_cache cache; proxy_cache_valid 12h; expires 30d; proxy_cache_use_stale error timeout invalid_header updating; } } On the container (my-domain.local): nginx.conf: (everything is inside the main config file -- it's been done quickly...) user www-data; worker_processes 1; error_log logs/error.log; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; sendfile on; #tcp_nopush on; keepalive_timeout 65; gzip off; server { listen 80; server_name .my-domain.com; root /var/www; access_log logs/host.access.log; } } I've read many blog posts and answers before resolving to posting my own questions... most answers I can see suggest setting sendfile off; but that didn't work for me. I have tried many other things, double checked my settings and all seems fine. So I'm wondering whether I am not expecting nginx's cache to do something it's not meant to...? 
Basically, I thought that if one of the static files in my container was updated, the cache in my reverse proxy would be invalidated and my browser would get the new version of the file when it requested it... But now I have the feeling I misunderstood many things. For one, I now wonder how nginx on the server could even know that a file in the container has changed. I have seen a directive proxy_header_pass (or something like it); should I use this to let the nginx instance in the container somehow inform the one on Proxmox about updated files? Is this expectation just a dream, or can I do it with nginx on my current architecture?
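
    To the asker's suspicion: that is indeed how it works. The proxy cache never asks the backend whether a file changed; it simply serves the stored copy until proxy_cache_valid (12h here) runs out, and the expires 30d line additionally tells browsers to keep files for a month. A hedged tweak along those lines, with the X-Refresh header name being an arbitrary choice of mine:

      location / {
          proxy_pass http://my-domain.local:80/;
          proxy_cache cache;
          proxy_cache_valid 200 10m;            # much shorter than 12h
          # a request sent with "X-Refresh: 1" skips the cached copy and re-stores the fresh response
          proxy_cache_bypass $http_x_refresh;
      }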

    Read the article

  • open-iscsi does not login into targets on boot

    - by Creshal
    We have a Debian Lenny server with open-iscsi that's configured to log into a target automatically: hostname:~# grep \\.startup /etc/iscsi/iscsid.conf node.startup = automatic hostname:~# grep \\.startup /etc/iscsi/nodes/iqn..../the.correct.ip.address\,port node.startup = automatic node.conn[0].startup = automatic hostname:~# If I issue a restart of open-iscsi via init.d, it works fine. But if I reboot the machine, iscsi starts, but does not even search for targets. I have to manually restart it before it works. Any ideas how to make it find the target on boot?

    Read the article

  • Machine account authentication on Radius server

    - by O.Shevchenko
    My workstation runs Linux. I have an Active Directory domain controller + Radius server on Windows 2008. I can verify the user account 'radius-01' using the 'radtest' tool: $ radtest -t pap radius-01 password123 195.234.133.32 1812 password123 Sending Access-Request of id 98 to 195.234.73.2 port 1812 User-Name = "radius-01" User-Password = "password123" NAS-IP-Address = 127.0.1.1 NAS-Port = 1812 rad_recv: Access-Accept packet from host 195.234.133.32 port 1812, id=98, length=84 Framed-MTU = 1344 Framed-Protocol = PPP Service-Type = Framed-User Class = 0x537004f00000013700010200ac1c0... I have joined my Linux PC to the Active Directory domain ARB-HRK using Samba: [root@shev-arb]# net ads testjoin Join is OK I can dump the machine password: [root@shev-arb]# tdbdump /var/lib/samba/private/secrets.tdb { key(34) = "SECRETS/MACHINE_PASSWORD/ARB-HRK" data(15) = "yGgXJsquRnpT0g\00" } How can I authenticate my machine account on the Radius server? Does anybody know of any tools for this, something like: radtest shev-arb$ yGgXJsquRnpT0g 195.234.133.32 1812 password123 (this command fails)
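
    Machine authentication against NPS normally happens inside PEAP-MSCHAPv2 with an identity of the form host/machine-fqdn, which radtest cannot exercise; eapol_test from the wpa_supplicant package can. A hedged sketch, where the FQDN suffix is a guess at the domain's DNS name and password123 is the RADIUS shared secret from the radtest example above:

      # machine.conf -- eapol_test configuration
      network={
          key_mgmt=IEEE8021X
          eap=PEAP
          identity="host/shev-arb.arb-hrk.local"
          password="yGgXJsquRnpT0g"
          phase2="auth=MSCHAPV2"
      }

      eapol_test -c machine.conf -a 195.234.133.32 -s password123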

    Read the article

  • dig and dig -x are answering different

    - by erdemkeren
    I don't want the name provider to manage my records; I want to handle them myself. So I installed bind9 and did some configuration, reading some articles and following some examples. bind didn't show any errors after I created/edited the required files, but when I run dig www.foo.com, I see the IP of my name provider's advertisement page. When I run dig -x server_ip_address, I see the name I purchased. What am I doing wrong? Can't a server be its own nameserver? Is it a must to configure the records at the company I bought the name from? By the way, I realised that my previous question was not clear, so I deleted it and am asking the same question in a different way.
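
    Two checks usually separate "my BIND is misconfigured" from "the delegation still points at the registrar", and it helps to remember that forward and reverse lookups are independent (dig -x queries the PTR zone, which whoever owns the IP block controls):

      dig @localhost www.foo.com    # what your own BIND answers
      dig +short NS foo.com         # where the public delegation actually points

    A server can be its own nameserver, but the registrar still has to delegate the domain to it (NS records, plus glue if the nameserver sits inside the domain); until that is done, the world keeps asking the provider's servers and sees the advertisement page.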

    Read the article

  • Source of Unexplained Requests in Server Logs

    - by Synetech inc.
    Hi, I am baffled by some entries in my server logs, specifically the web-server logs. Other than normal, expected traffic, I have noticed three types of request errors (eg 404, etc.): Broken links, ie links from old, external pages that point to pages that are no longer here Sequences of probes, ie some jerk trying to hack in by scanning my server for a series of exploitable admin type pages and such What appear to be completely random requests for things that have never existed on the server or even have anything to do with the server, and appear by themselves (ie not a series of requests like the probes) Could it somehow be a mistyped URL or IP? That’s about the only thing that I can think of, but still, how could I get a request on say, foobar.dyndns.org (12.34.56.78) for something like www.wantsfly.com/prx2.php or /MNG/LIVE or http://ant.dsabuse.com/abc.php?auth=45V456b09m&strPassword=X%5BMTR__CBZ%40VA&nLoginId=43. (Those are a few actual requests from my logs.) Can someone please explain scenario three to me? Thanks.

    Read the article

  • How to route traffic through a specific SOCKS proxy on a per-app basis?

    - by GJ.
    I'm running a certain desktop app (actually via AIR if it makes any difference) which doesn't have any built-in proxy configuration settings. I need to get all traffic just from this app directed through a secure SOCKS proxy. This implies I can't use the global network preferences, as these would affect many other apps. Is there any way to force all network communication through a given SOCKS proxy on a per-app basis? It would also be helpful to know if there's a way to perform such routing globally, based on specific IP addresses (as this could allow for some reasonable workaround).
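
    On Linux or OS X, a "socksifier" wrapper such as proxychains or tsocks can push a single process's TCP connections through a SOCKS proxy without touching global settings; whether it catches everything an AIR runtime does is not guaranteed, so treat this as a sketch (proxy address and launch path are placeholders):

      # tail of /etc/proxychains.conf
      [ProxyList]
      socks5 127.0.0.1 1080

      # then launch only that application through the proxy
      proxychains /path/to/the/air-app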

    Read the article

  • Configuring BIND to use VM's DNS for specific domain

    - by Srirangan
    I work on a project for which I use an Ubuntu server VM on an Ubuntu host. The VM runs all the services/webapps through haproxy and nginx and serves them on the domain (xyz.com). I manually modify my resolv.conf to use the VM's IP address as the nameserver, and then I can open my app in the host's browser. The problem is that I am modifying an auto-generated file (resolv.conf) and I need to do it each time. Is there a smart way to say: are you accessing xyz.com? If yes, use the VM's DNS server; otherwise use the host's.
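
    One way to stop hand-editing resolv.conf is a tiny forwarder on the host, for example dnsmasq, which can send just that one domain to the VM and everything else upstream; the host's resolver then points permanently at 127.0.0.1. A hedged sketch with an assumed VM address:

      # /etc/dnsmasq.conf on the host
      server=/xyz.com/192.168.56.10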

    Read the article

  • Is there a way to submit a batch of commands to a Cisco router and have them execute from the router?

    - by atroon
    I need to change the configuration of a remote (6 hours' drive) client's Cisco 871 (IOS 12.4.15T) from my location because of some new internet service at his location. To be more precise, I need to change the default route, ip address of the outside interface (Fa4) and disable the PPPoE setup there. Unfortunately, doing any of this will (obviously) break the connection to the router. I do not have an out-of-band management modem set up (I know, I know). Is there any way to enter the commands I need to have run and have them execute one after the other, from a file on flash:? I have never tried anything like that before. Essentially a DOS-style batch file is exactly what I need. Nothing like it seems to be out there except using kron to execute CLI commands, but that is specified here as only taking EXEC commands, not configuration ones. Is there hope, or do I travel?
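
    One avenue worth checking for that IOS level is an Embedded Event Manager applet: the command list lives on the router and executes there, so it should carry on even if the change cuts off the SSH session. A heavily hedged sketch, with placeholder addresses, the PPPoE removal omitted, and the caveat that the countdown-timer event and CLI actions need to be supported by the EEM version shipped in 12.4(15)T:

      event manager applet WAN-CUTOVER
       event timer countdown time 120
       action 1.0 cli command "enable"
       action 2.0 cli command "configure terminal"
       action 3.0 cli command "interface FastEthernet4"
       action 4.0 cli command "ip address 198.51.100.2 255.255.255.252"
       action 5.0 cli command "exit"
       action 6.0 cli command "ip route 0.0.0.0 0.0.0.0 198.51.100.1"
       action 7.0 cli command "end"
       action 8.0 cli command "write memory"

    The 120-second countdown starts as soon as the applet is configured, which leaves a short window to double-check before the cutover fires. An out-of-band fallback is still strongly advisable.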

    Read the article

  • I just got a linode VPS a week ago and I've been flagged for SSH scanning...

    - by meder
    I got a 32-bit Debian VPS from http://linode.com and I really haven't done any sort of advanced configuration to secure it (port 22 open; password authentication enabled). It seems that somehow SSH scanning is going on from my IP, and I'm being flagged since this is against the TOS. I've only been SSHing in from my home Comcast connection, where I run Linux. Is this a common thing when getting a new VPS? Are there any standard security configuration tips? I'm quite confused as to how my machine could have been accused of this SSH scanning.

    Read the article

  • Windows Server 2003 - passwordless access to \\myhost\ but not \\myhost.mydomain.net\

    - by Charles Duffy
    I have a Windows Server 2003 system on which passwordless access to local UNC paths is possible using the server's unqualified hostname or its IP address, but not via its FQDN -- even when the hosts file is used to map that FQDN directly to 127.0.0.1. That is: \\127.0.0.1\ - passwordless \\myhost\ - passwordless \\myhost.mydomain.com\ - brings up an authentication dialog Unfortunately, I have a local application trying to resolve UNC paths including the host's FQDN. I've tried resolving myhost.mydomain.com to 127.0.0.1 in both hosts and lmhosts, and calling ping myhost.mydomain.com at the command prompt gives the appearance that this resolution has taken effect; even so, attempting to open \\myhost.mydomain.com\ from Windows Explorer brings up a password prompt, while \\127.0.0.1\ does not. The system is using an OpenDirectory server (Apple's Kerberos+LDAP directory service) for authentication.
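
    This pattern (works via the NetBIOS name and 127.0.0.1, prompts via the FQDN) is what the NTLM loopback check produces when a machine is accessed locally under a name it does not recognise as its own. If the FQDN requests are indeed falling back to NTLM here, the usual workaround is to register the name in BackConnectionHostNames and then reboot (or restart the Server service); a hedged sketch:

      reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0" /v BackConnectionHostNames /t REG_MULTI_SZ /d "myhost.mydomain.com"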

    Read the article

  • Finding the most common errors in event logs using PowerShell

    - by Paul
    I have the event logs for one of our servers locally in .evtx format. I can load the log file into PS using the command: Get-WinEvent -Path D:\Desktop\serverlogs.evtx What I would like to do is group events on the Message field where the text is mostly the same (say 80% identical). We have stack traces for errors in the details, which will be the same, but we also log the client's IP and the URL that was accessed, which will likely be different. I want to group them so that I can work out the most common errors and prioritize fixing them; as there are 25,000+ errors in the log file, I would rather not do it manually. I think I can work out how to do most of this, but I'm not sure how to do the 'group fields which are mostly the same' part. Does PowerShell have anything like this built in?
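
    Nothing built in clusters strings by an 80% similarity score, but normalizing the message before grouping often gets close enough, since the variable parts (IPs, URLs, numeric IDs) can be masked out. A hedged sketch along those lines:

      Get-WinEvent -Path D:\Desktop\serverlogs.evtx |
          Where-Object { $_.Level -eq 2 } |                                   # errors only
          Group-Object { ($_.Message -split "`n")[0] -replace '\d+', '#' } |  # first line, digits masked
          Sort-Object Count -Descending |
          Select-Object -First 20 Count, Name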

    Read the article

  • Connect to MySql on other machine on LAN

    - by Ankur Sachdeva
    I am facing a problem connecting to a MySQL database on another machine on the same network: Could not connect to the specified instance. MySQL error number 1130: Host 'abc' is not allowed to connect to this MySQL server (pinging is OK, time 1-3 ms, TTL=128). I have checked the following: TCP/IP is enabled; RegEdit under HKEY_LOCAL_MACHINE ... Parameters ... MaxUserPort and the timed-wait delay; Grant all ... to 'root'@'Myipaddress'. Please help at the earliest.
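
    Error 1130 means there is simply no row in mysql.user matching that client host, so the grant has to name the connecting machine (or a wildcard range), not the server's own address, and be followed by FLUSH PRIVILEGES; bind-address in my.cnf must also not be pinned to 127.0.0.1. A hedged sketch with an assumed client subnet:

      -- run on the MySQL server
      GRANT ALL PRIVILEGES ON *.* TO 'root'@'192.168.1.%' IDENTIFIED BY 'your_password';
      FLUSH PRIVILEGES;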

    Read the article

  • How to correctly set up iWARP? Preferably on loopback

    - by ajdecon
    iWARP is a protocol for doing remote direct memory access (RDMA) on top of TCP/IP, so that it can work with Ethernet and other network types as opposed to Infiniband. It works with many of the standard IB interfaces - the IB verbs, for example - so it's all pretty transparent. I'm doing some IB-verbs programming (mostly for the sake of learning about how they work better), and it'd be wonderfully convenient for me if I could use iWARP to do RDMA over my loopback interface, so that I could test some of my code without getting on our IB-connected cluster. :-) But I cannot figure out how to get a "local development environment" set up: there are no tutorials I'm aware of for even setting up iWARP from scratch on a server or a network interface. Can anyone give me a tutorial or point me in the right direction? Environment is Fedora 16 running in VirtualBox.
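
    For the "local development environment" part: on recent kernels (well after Fedora 16, so this may not apply to that VM as-is) the soft-iWARP driver can attach a software RDMA device to any netdev, including lo, which is exactly the no-hardware test setup described. A sketch under that assumption:

      modprobe siw
      rdma link add siw0 type siw netdev lo   # needs a reasonably new iproute2
      ibv_devices                             # siw0 should now appear to the verbs library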

    Read the article

  • Where is debian storing its network settings?

    - by user13743
    I have a debian machine that is supposed to have a static ip, but insists on getting its address from the DHCP server. Here's this settings file: $> cat /etc/network/interfaces # This file describes the network interfaces available on your system # and how to activate them. For more information, see interfaces(5). # The loopback network interface auto lo iface lo inet loopback # The primary network interface allow-hotplug eth0 iface eth0 inet static address 192.168.1.99 gateway 192.168.1.1 netmask 255.255.255.0 network 192.168.1.0 broadcast 192.168.1.255 Yet $> sudo /etc/init.d/networking restart Reconfiguring network interfaces...done. $> sudo ifconfig eth0 Link encap:Ethernet HWaddr 00:e0:03:09:05:2e inet addr:192.168.1.205 Bcast:255.255.255.255 Mask:255.255.255.0 ... Where is it being told to use dhcp?
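
    The interfaces file shown is the right place, so the DHCP lease is most likely coming from something else that is still claiming eth0, typically a leftover dhclient or NetworkManager. A few hedged checks:

      ps ax | grep -E '[d]hclient|[d]hcpcd|NetworkManager'   # is a DHCP client still running?
      dpkg -l | grep network-manager                         # is NetworkManager installed?
      ifdown eth0 && ifup eth0                               # re-apply /etc/network/interfaces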

    Read the article

  • VirtualBox "Bridged Adapter" when host NIC is turned off

    - by chris_l
    Hi, I'm running Linux (Debian Etch) in a VirtualBox VM on my MacBook. I usually ssh from my Mac terminal to the guest machine. I also want to access the internet from my guest, so I set up my host's WLAN card (en1) as a bridged adapter for eth0 on the client. This works fine, but when I turn off the WLAN card (e.g. to reduce battery consumption), I'd still like to ssh from my host to the guest. This fails of course, because en1 loses its IP address. Is a bridged adapter the best option for what I want to do? How can I make it work? (A simple "ifconfig en1 add 10.0.0.4" didn't do the trick...) Thanks Chris
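
    A common workaround is to give the VM a second, host-only adapter used purely for host-to-guest SSH, so the bridged adapter can come and go with the WLAN card. A hedged VBoxManage sketch (VM name is a placeholder; run while the VM is powered off), after which eth1 in the guest gets a static address on the vboxnet0 subnet:

      VBoxManage hostonlyif create                      # creates vboxnet0 on the host
      VBoxManage modifyvm "DebianVM" --nic2 hostonly --hostonlyadapter2 vboxnet0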

    Read the article
