Search Results

Search found 7776 results on 312 pages for 'configure in'.


  • Network topology for both direct and routed traffic between two nodes

    - by IndigoFire
    Despite its small size, this is the most difficult network design problem I've faced. There are three nodes in this network:

        PC: running Windows XP with an internal WiFi adapter
        Base station: with both WiFi and a Wireless Modem (WiModem)
        Mobile device: with both WiFi and WiModem

    The modem is a low-bandwidth but high-reliability connection. We'd like to use WiFi for high-bandwidth stuff like file transfers when the mobile is nearby, and the modem for control information. Here's the tricky part: we'd like the WiFi traffic to go directly from the mobile to the PC, as rebroadcasting packets on the same WiFi channel takes up double the bandwidth. We can do that with a manual configuration by giving both the PC and the base station two IP addresses for their WiFi interfaces: one on a subnet shared with the mobile, and one on their own subnet. The routes on the PC are set up so that any traffic going to the mobile via WiModem goes through the secondary IP address, so that return traffic from the mobile also goes through the WiModem. Here's what that looks like:

        PC
          WiFi 1: 192.168.2.10/24
          WiFi 2: 192.168.3.10/24
          Default route: 192.168.2.1
        Base station
          WiFi 1: 192.168.2.1/24
          WiFi 2: 192.168.3.1/24
          WiModem: 192.168.4.1/24
        Mobile
          WiFi: 192.168.3.20/24
          WiModem: 192.168.4.20/24

    We'd like to move to having the base station automatically configure the mobile and PC, as the manual setup is problematic once you start having multiple mobiles and PCs. This means that the PC can only have one IP address and needs to be treated as being pretty simple. Is it possible to have a setup driven by DHCP on the base station that is efficient with bandwidth?
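
    One way for the base station to push per-destination routes without any manual client setup is DHCP option 121 (classless static routes): keep the PC and mobile on one shared WiFi subnet so their traffic stays direct, and hand out a route to the WiModem subnet via the base station. A minimal sketch, assuming dnsmasq as the DHCP server on the base station; the interface name, range and lease time are made up:

        # /etc/dnsmasq.conf on the base station
        interface=wlan0
        dhcp-range=192.168.3.50,192.168.3.150,12h
        # option 121: reach the WiModem subnet via the base station's WiFi address
        dhcp-option=option:classless-static-route,192.168.4.0/24,192.168.3.1

    Clients that honour option 121 would then send WiModem-bound traffic to the base station while keeping mobile-to-PC WiFi traffic on the shared subnet.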


  • Error on LDAP Login - xsessions error - Session lasted less than 10 seconds

    - by Draineh
    I have two machines, both running CentOS 5.6 64-bit. The LDAP machine runs DHCP, BIND and an OpenLDAP server. LDAP is correctly configured and users can authenticate against it. Using root I configure machine 2 to use LDAP for authentication, and when trying to log in it successfully authenticates against a saved user on the LDAP server but produces the following errors and then throws me back to the login screen. I can still sign in as root and use the machine as normal. The syslog doesn't show any errors, and I disabled SELinux to see if it was interfering. The error:

        Your session only lasted less than 10 seconds. If you have not logged out
        yourself, this could mean that there is some installation problem or that
        you may be out of disk space. Try logging in with one of the failsafe
        sessions to see if you can fix this problem.

    There is then a tickbox to view the contents of ~/.xsession-errors, which contains:

        /etc/gdm/PreSession/Default: Registering your session with utmp
        /etc/gdm/PreSession/Default: running: /usr/bin/sessreg -a -u /var/run/utmp -x "/var/gdm:0:Xservers" -h "" -l ":0" "admin"
        localuser:admin being added to access control list
        No profile for user 'admin' found
        /bin/sh: /usr/bin/dbus-launch --exit-with-session /etc/X11/xinit/Xclients: No such file or directory
        /bin/sh: line 0: exec: /usr/bin/dbus-launch --exit-with-session /etc/X11/xinit/Xclients: cannot execute: No such file or directory

    Apologies if something isn't spelt quite right or doesn't sound right; the system never actually creates or saves this file, so I have had to type it across from the screen. Through the authentication panel in CentOS on the client I have set it to create the user's home directory on login. The user is being correctly authenticated and the /home/admin folder has been created, but this error would suggest it has not? The client is a new install on an 80 GB hard drive, so well over 80% of the drive is still available. Thanks for any suggestions or pointers.
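
    The two "No such file or directory" lines point at /usr/bin/dbus-launch being absent on the client. A first check, as a sketch (package names assume the stock CentOS 5.x repositories):

        # on the client machine
        ls -l /usr/bin/dbus-launch /etc/X11/xinit/Xclients
        yum install dbus dbus-x11      # dbus-x11 provides dbus-launch

    If dbus-launch is already present, the next suspects would be permissions or SELinux contexts on the freshly created /home/admin.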


  • 4.4.1 timeouts at 10-minute intervals with SMTP on batch email jobs

    - by TEEKAY
    I am running a job that uses SMTP and can run in excess of an hour, emailing the entire time. It's not my code but a workflow-based app, so I just get a form to configure the mail server, subject, message, etc. and can't see its implementation. I know it is .NET and SmtpClient. I have been seeing 4.4.1 timeouts every 10 minutes, reported by the application as the response from the server. The number of emails in those 10-minute sessions varies, roughly between 100 and 150, which leads me to ask about the 10-minute timeout specifically. I have found there are several Exchange properties (though I don't know what version they are running) that set timeout limits (http://technet.microsoft.com/en-us/library/bb232205%28v=exchg.150%29.aspx). Would the values of ConnectionInactivityTimeOut and ConnectionTimeout be controlling the timeouts? And finally, does Exchange treat the persistent connection(s) it keeps receiving from the same source as one continuous connection, causing the timeout every 10 minutes? I am using a static IP for the mail server. Thanks if anyone can shed any light on my problem. EDIT - It is my belief that the library is just keeping the connections around and isn't wrapped in any cleanup code or using statement. That said, I still haven't made any progress on this issue in the last year and just requeue the failed ones as I see them.
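
    For what it's worth, the default ConnectionTimeout on an Exchange Receive connector is reportedly 10 minutes, which would match the pattern described. A sketch of how the values could be inspected and raised from the Exchange Management Shell (the connector name is a placeholder, and raising timeouts only papers over a client that never closes its connection):

        Get-ReceiveConnector | Format-List Name,ConnectionTimeout,ConnectionInactivityTimeout
        Set-ReceiveConnector "MAILSRV\Default MAILSRV" -ConnectionTimeout 01:00:00 -ConnectionInactivityTimeout 00:20:00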


  • Can't access the server sound card when VNC'd into an Ubuntu server

    - by Corey Kennedy
    I've set up my Ubuntu 10 server with Xfce, NX server, and now TightVNC server so that I can control it remotely from my Windows 7 laptop. NX is working fine for remote access, but when I run (for example) Exaile, no sound is sent through the server's sound card. I installed TightVNC server and connected, but ran into the same problem. Exaile opens, sound isn't muted, and I can see that sound cards are installed (via cat /proc/asound/cards), but I can't seem to get the remote sessions to access the server's sound card. Also, just to confirm that the sound card was working, I hooked up a monitor and keyboard to the server and opened a local Xfce session. That worked fine. While I had the local session running, I was also able to open a remote session with NX Client and start Exaile - which then successfully piped sound to the local card. After disconnecting the monitor and keyboard and moving the box back to its normal spot, though, I was not able to play sound via either an NX or VNC session. Does anyone have any suggestions? Surely it's possible to configure my remote sessions to pipe sound to the server's sound card, right? Or at least get Xfce up and running without a monitor or keyboard but with access to the sound card so I can VNC into it? Thanks!
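
    On a headless box the usual causes are device permissions (the local console session gets access to /dev/snd/* via ConsoleKit/udev ACLs, remote sessions don't) and no sound daemon running inside the remote session. A sketch of both fixes, with the user name as a placeholder and assuming PulseAudio is the sound server:

        sudo adduser corey audio        # permanent access to the ALSA devices

        #!/bin/sh
        # ~/.vnc/xstartup
        pulseaudio --start              # per-session sound daemon
        startxfce4 &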


  • Does migrating 2 domain controllers between 2 datacentres require both virtual machines to be shut down at the same time?

    - by Imagineer
    I was attempting to migrate 2 virtual machines that are domain controllers between 2 datacentres running ESX 3.5 and ESX 4.1. I was advised to shut down both domain controllers at the same time during the migration process, to avoid USN rollback and other replication issues. These are the steps I was planning to perform:

        1. Shut down both DCs.
        2. Copy both VMs' files across to the new datacentre using Veeam FastSCP (connecting to both vCentres through IP address instead of hostname).
        3. Power them up at the new datacentre.
        4. Configure network interface/DNS/DHCP for both DCs in the new datacentre.

    I chose Veeam FastSCP rather than VMware Standalone Converter because it copies rather than converts. Someone also suggested that I use a backup and restore app like Veeam Backup and Replication. Sounds like a simple job, but after shutting down both DCs the transfer rate using FastSCP is so slow that it registers only 1 KB/s, as opposed to the normal 1 MB/s (or more). When that transfer attempt failed, I tried to cold clone both DCs, which resulted in both ESX hosts getting disconnected. I tried troubleshooting by referring to this - VMware KB - Diagnosing an ESX Server that is Disconnected or Not Responding in VirtualCenter. It seems that the DNS being down was the cause of all the unusual occurrences. The moment I powered up the DCs via the VMware console command, the ESX hosts were able to connect to the vCentre again. How can I avoid such a pitfall again? Am I doing it correctly? Any help would be greatly appreciated! Thank you.


  • virtual web folder served by PHP script

    - by Martin
    I am trying to configure my Apache to be able to display (virtual) pages like:

        mywebpage.com/something1
        mywebpage.com/something2
        mywebpage.com/folder/something3

    I would like these "somethingX" and "folder" folders to be only virtual, not physical directories. For a start it would be great to send all requests to mywebpage to one PHP script which would somehow receive the original path information (there is some $_SERVER array, as far as I know) and call the necessary PHP functions (so far I use addresses like mywebpage.com/index.php?page=blabla&otherparameters=values...). Is that possible? I am struggling with different combinations; currently I have the following file in /etc/apache2/conf.d/something.conf (not working, of course). What is the correct way to proceed? Thanks.

        <Location /myweb>
            SetHandler my-handler
            Action my-handler /srv/www/htdocs/myweb/product.php virtual
        </Location>

    My pages are in /srv/www/htdocs/myweb. I tried with Location, with Directory, with Action and SetHandler, with AddHandler... ;-) Some configurations were ignored, some caused "object not found" with nothing relevant in the error log.
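
    The usual way to get this behaviour is the mod_rewrite front-controller pattern: anything that is not a real file or directory is handed to one script, which then reads the requested path from $_SERVER['REQUEST_URI']. A sketch, assuming mod_rewrite is enabled and product.php is the front controller (the RewriteBase depends on whether /myweb is the document root or a subdirectory):

        <Directory /srv/www/htdocs/myweb>
            RewriteEngine On
            RewriteBase /
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule . /product.php [L]
        </Directory>

    Inside product.php, $_SERVER['REQUEST_URI'] then carries "/something1", "/folder/something3", and so on, and the script can dispatch from there.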


  • Securing NTP: which method to use?

    - by Harry
    Can someone good at NTP configuration please share which method is the best/easiest to implement a secure, tamper-proof version of NTP? Here are some difficulties... I don't have the luxury of having my own stratum 0 time source, so I must rely on external time servers. Should I read up on the AutoKey method, or should I try to go the MD5 route? Based on what I know about symmetric cryptography, it seems that the MD5 method relies on a pre-agreed set of keys (symmetric cryptography) between the client and the server, and so is prone to man-in-the-middle attack. AutoKey, on the other hand, does not appear to work behind a NAT or a masquerading host. Is this still true, by the way? (This reference link is dated 2004, so I'm not sure what the state of the art is today.) Are public AutoKey-talking time servers available? I browsed through the NTP book by David Mills. The book looks excellent in a way (coming from the NTP creator, after all), but the information therein is also overwhelming. I just need to first configure a secure version of NTP, and maybe later worry about its architectural and engineering underpinnings. Can someone please wade me through these drowning NTP waters? I don't necessarily need a working config from you, just info on which NTP mode/config to try and maybe also a public time server that supports that mode/config. Many thanks, /HS
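
    For the symmetric-key route the configuration itself is short; the hard part is agreeing on a key with the operator of the upstream server, which is why it is rarely an option with public pools. A sketch of the client side (key ID, key value and server name are made up):

        # /etc/ntp.keys  (readable only by root)
        1 M s3cretSharedKey

        # /etc/ntp.conf
        keys /etc/ntp.keys
        trustedkey 1
        server ntp.example.org key 1

    Without a pre-arranged key, Autokey (or simply polling several independent unauthenticated servers and letting the selection algorithm discard outliers) is the remaining choice.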


  • Windows thinks outgoing connections are incoming connections?

    - by Slayer537
    I have a rather weird issue. I'm trying to configure Windows Firewall to block all outgoing connections for a certain app, but allow all incoming. This app is used to transfer files across a network. The reason for this type of setup is to only allow certain users (IP addresses) access to the files I have, but to still allow others to see what's available. Since Windows Firewall defaults to allowing all outgoing connections, I made a rule to deny all outgoing connections that were not in the IP ranges I specified. For the incoming connections, I'd like to leave it at "allow all", but at the moment it is set to only allow the connections that also have outgoing permissions set. If I blanket-allow all incoming connections, I observe that unauthorized IP addresses are able to actually download files, even though their IPs were blocked in the outgoing rule. To shed a little more visibility on this, I used NetLimiter to see what was going on. NetLimiter showed me that the connection was an incoming connection. Shouldn't this be an outgoing connection, as I am uploading files to them, not the other way around? Is there a way to make the connection type be correct and show up as outgoing instead of incoming?
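
    The direction the firewall sees is the direction the TCP connection is opened, not the direction the file data flows: remote peers connect in to the transfer app, so the upload rides on an inbound connection and outbound rules never apply to it. A sketch of scoping the app by an inbound rule instead (program path and addresses are placeholders), with the default inbound action left at "block":

        netsh advfirewall firewall add rule name="FileApp peers only" dir=in action=allow ^
            program="C:\Apps\filetransfer.exe" remoteip=203.0.113.10,203.0.113.0/24

    With only this allow rule present (and no broad inbound allow for the program), peers outside the listed addresses cannot establish the connection in the first place.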


  • In Windows 7 power management, is it possible to set different sleep settings for different SATA disks?

    - by Ben Voigt
    I'm having an issue with Windows 7 either freezing up or generating a BSOD coming out of sleep. I suspect that it is related to my boot/OS drive, an OCZ Vertex SE SSD, because numerous other Vertex users have reported sleep problems. Notably, if I put the computer to sleep, it almost always wakes correctly. If it goes to sleep after a timeout, it almost always BSODs. I disabled timed sleep and now it freezes when left unattended. My next step is to disable "Put hard disks to sleep after X minutes", but I'd like to change this setting only for the SSD and not for the rotating data disks, which I would like to spin down normally. Does anyone know a place to configure sleep on a per-disk basis? I don't need to set different timeouts on different disks (although that would be nice); simply setting "this disk sleeps" and "sleep is disabled for this disk" would be great. Additional system information: Windows 7 Ultimate x64, Core i5 - P55 chipset, Intel RST drivers are installed. One SSD, two rotating HDDs, and a DVD-RW drive are all connected to the Intel SATA ports. I could potentially move some of these to my motherboard's other SATA controller if that would help.


  • Debian Wheezy IPv6 isn't configured with ifup post-up hook

    - by aef
    We recently set up a server on Debian Wheezy Beta 3 (x86_64) which has a native IPv6 connection. We configured the eth0 interface to get its IPv6 configuration through some post-up hook commands in /etc/network/interfaces. The result is that after booting the system up, only IPv4 and an auto-configured link-local IPv6 address are present on the interface, as if the commands had never been executed. When we additionally place the commands after the call to ifup -a inside the /etc/init.d/networking init script, everything works as expected and we have a fully configured interface after booting up. This is quite an ugly way to configure the interface. What are we doing wrong with the ifup post-up hooks? Or is this a bug? The section from /etc/network/interfaces looks like this (IP addresses changed):

        allow-hotplug eth0
        iface eth0 inet static
            address 1.2.3.1
            netmask 255.255.255.192
            network 1.2.3.0
            broadcast 1.2.3.63
            gateway 1.2.3.62
            dns-nameservers 8.8.8.8
            dns-search mydomain.tld
            post-up ip -6 addr add 2001:db8:100:3022::2 dev eth0
            post-up ip -6 route add fe80::1 dev eth0
            post-up ip -6 route add default via fe80::1 dev eth0

    I also tried it in this alternative way:

        auto eth0
        iface eth0 inet static
            address 1.2.3.1
            netmask 255.255.255.192
            network 1.2.3.0
            broadcast 1.2.3.63
            gateway 1.2.3.62
            dns-nameservers 8.8.8.8
            dns-search mydomain.tld

        iface eth0 inet6 static
            address 2001:db8:100:3022::2
            netmask 64
            gateway fe80::1

    What we added to /etc/init.d/networking:

        ...
        case "$1" in
        start)
            process_options
            check_ifstate
            if [ "$CONFIGURE_INTERFACES" = no ]
            then
                log_action_msg "Not configuring network interfaces, see /etc/default/networking"
                exit 0
            fi
            set -f
            exclusions=$(process_exclusions)
            log_action_begin_msg "Configuring network interfaces"
            if ifup -a $exclusions $verbose && ifup_hotplug $exclusions $verbose
                # Our additions
                ip -6 addr add 2001:db8:100:3022::2 dev eth0
                ip -6 route add fe80::1 dev eth0
                ip -6 route add default via fe80::1 dev eth0
            then
                log_action_end_msg $?
            else
                log_action_end_msg $?
            fi
            ;;
        ...


  • Apache mod_proxy: how to forward requests to a local network IP (server)

    - by Beck
    I can't figure out how to configure mod_proxy for this. I have two domains; one is working fine at the moment. The second is bound to the same IP. I need to forward requests for the second domain to another server on the local network, like this:

        domain1.com => 192.168.1.101
        domain2.com => 192.168.1.102

    What configuration or directives should I use? Thanks ;) Update:

        <VirtualHost *:80>
            ServerName www.domain2.com
            ProxyRequests Off
            ProxyPreserveHost On
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyPass / http://192.168.1.103:8080/
            ProxyPassReverse / http://192.168.1.103:8080/
        </VirtualHost>

    It just doesn't redirect to the second server. That's it. And when I restart Apache, it warns about port 80 overlapping:

        [warn] _default_ VirtualHost overlap on port 80, the first has precedence
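
    That warning is the clue: with Apache 2.2, name-based virtual hosts on one address only take effect once NameVirtualHost is declared; otherwise the first VirtualHost on *:80 wins for every request. A sketch (document root and the backend address are placeholders, following the mapping above):

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName www.domain1.com
            ServerAlias domain1.com
            DocumentRoot /var/www/domain1
        </VirtualHost>

        <VirtualHost *:80>
            ServerName www.domain2.com
            ServerAlias domain2.com
            ProxyRequests Off
            ProxyPreserveHost On
            ProxyPass / http://192.168.1.102:8080/
            ProxyPassReverse / http://192.168.1.102:8080/
        </VirtualHost>

    With both names declared, only requests carrying the domain2.com Host header are proxied to the internal server.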


  • reverse proxying with NGINX to two back-end servers

    - by aag
    I am trying to learn how to configure the Nginx proxy. All requests from external (www.external.com) should go to internal server 10.10.10.16:2080, except for www.external.com/nagios requests, which should go to internal 10.10.10.18. My location block looks as follows:

        location ~* / {
            proxy_buffers 16 4k;
            proxy_buffer_size 2k;
            proxy_buffering off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Accept-Encoding "";
            proxy_pass http://10.10.10.16:2080;
        }

        #
        # nagios server
        location ~* /nagios/ {
            proxy_buffers 16 4k;
            proxy_buffer_size 2k;
            proxy_buffering off;
            # proxy_set_header Host $host;
            # proxy_set_header X-Real-IP $remote_addr;
            # proxy_set_header Accept-Encoding "";
            proxy_pass http://10.10.10.18;
        }

    The first location seems to work fine. However, any request to www.external.com/nagios sends the browser into the eternal pastures. Of course, 10.10.10.18/nagios was tested and works fine. What am I missing?
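
    Both blocks use "~*", which makes them regular-expression locations; nginx checks regex locations in the order they appear, and "~* /" matches every URI, so the /nagios/ block is never reached. Prefix locations avoid the ordering problem, since the longest matching prefix wins regardless of order. A sketch:

        location /nagios/ {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass http://10.10.10.18;
        }

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass http://10.10.10.16:2080;
        }

    Alternatively, simply placing the /nagios/ regex block before the catch-all would also work, but prefix matching states the intent more directly.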


  • Partition is missing in /dev

    - by haimg
    I'm having a strange problem since I moved from CentOS 5 to CentOS 6. I have three disks; the first two are used as a RAID1, and the third one is a stand-alone backup disk that is not listed in /etc/fstab (it is mounted when needed and then unmounted). My problem: after a boot, /dev/sdc exists but /dev/sdc1 does not. The links in /dev/disk are also absent for the first partition of sdc. The disk itself is fine, and if I hot-remove it and plug it back in, /dev/sdc1 appears and everything works. My question: what subsystem manages auto-discovery of disks, partitions, etc. during the boot process (e.g. what creates /dev/disk/by-label)? How do I configure it to scan /dev/sdc too and create all relevant files and links in /dev? Edit: here's the relevant part of the dmesg output (the only place sdc appears). It does list sdc1, but it's not in /dev!

        sd 1:0:0:0: [sdb] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
        sd 3:0:0:0: [sdc] 976773168 512-byte logical blocks: (500 GB/465 GiB)
        sd 1:0:0:0: [sdb] Write Protect is off
        sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
        sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
        sd 3:0:0:0: [sdc] Write Protect is off
        sd 3:0:0:0: [sdc] Mode Sense: 00 3a 00 00
        sd 3:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
        sdb:
        sdc:
        sd 0:0:0:0: [sda] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
        sd 0:0:0:0: [sda] Write Protect is off
        sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
        sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
        sda:
        DMAR:[DMA Read] Request device [00:1e.0] fault addr 361bc000
        DMAR:[fault reason 06] PTE Read access is not set
        sdb1 sdb2 sdb3
        sdc1
        sda1
        sd 1:0:0:0: [sdb] Attached SCSI disk
        sd 3:0:0:0: [sdc] Attached SCSI disk
        sda2 sda3
        sd 0:0:0:0: [sda] Attached SCSI disk
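
    Partition device nodes and the /dev/disk/by-* links are created by udev, reacting to the events the kernel emits when it reads the partition table. If the node is missing after boot, asking the kernel to re-read the table (and re-triggering udev) usually recreates it without hot-plugging; a sketch:

        partprobe /dev/sdc                        # re-read the partition table (parted package)
        # or, with util-linux:
        partx -a /dev/sdc
        udevadm trigger --subsystem-match=block   # replay block-device events for udev

    The DMAR fault lines that appear right where sdc's partitions are announced also look suspicious; if the rescan workaround is needed on every boot, checking for VT-d/IOMMU-related errors against that controller would be a reasonable next step.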


  • VMware Player: changing DHCP server settings

    - by Tathagata
    I have a Windows Server 2003 VM running in VMware Player on a Windows 7 box. The idea is to test Windows Deployment Services in the virtual network. Is it possible to configure the VMware DHCP server with the WDS-related options (option 66, 67)? I found a few references where people were using vnetlib.exe to start and stop the DHCP server, change the subnet mask, etc., but there's no info on how to set the DHCP server options.

    DHCP config from the Virtual Network Editor: I do have Workstation, but without a license for it. In the Virtual Network Editor, the DHCP settings for the network I'm using only allow me to set the subnet mask, IP ranges and things like that, but not the DHCP options.

    DHCP server on the WDS server: authorizing the DHCP server in the guest WDS server fails. VMware Player can run its own DHCP server for the virtual network without any authorization from Active Directory - can I do the same with the Windows DHCP server in the guest Windows Server? "Can I authorize W2K8 DHCP server for private network, even when prohibited in enterprise network?" says we have to run a third-party DHCP server... :/.
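
    The VMware DHCP service reads an ISC-dhcpd-style configuration file that can be edited directly; on a Windows 7 host this is normally C:\ProgramData\VMware\vmnetdhcp.conf (treat the exact path and the addressing below as assumptions for your setup). A sketch of the WDS-related lines inside the subnet block of the NAT or host-only network, followed by a restart of the "VMware DHCP Service":

        subnet 192.168.111.0 netmask 255.255.255.0 {
            range 192.168.111.128 192.168.111.254;
            option routers 192.168.111.2;
            next-server 192.168.111.10;            # option 66: the WDS/TFTP server
            filename "boot\\x86\\wdsnbp.com";      # option 67: the boot program
        }

    Edits to that file can be overwritten when the virtual network is reconfigured, so keeping a copy of the changes is worthwhile.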


  • User permissions on Linux (ProFTPD / nginx)

    - by user55745
    I've been having a complete nightmare trying to configure ProFTPD. I've got the ProFTPD server working with an SQL database. However, I want any uploaded files to be viewable by the web server running on the same box. The folders get created in /var/tmp/ as:

        drwx------ 2 ftpuser ftpgroup 4096 Oct  8 20:35 50730c4346512
        drwx------ 2 ftpuser ftpgroup 4096 Oct  8 20:38 50730f3a811ca

    I've tried adding www-data to the group with the following:

        usermod -g www-data ftpuser

    But this doesn't allow the web server access. In proftpd.conf I have the following umask set:

        Umask 0022

    It doesn't seem to make a difference what I set that value to. /etc/group (I'm sure I've messed up one of these two, but I'm getting desperate):

        ftpgroup:x:2001:www-data
        www-data:x:33:ftpgroup

    /etc/passwd:

        www-data:x:33:33:www-data:/var/www:/bin/sh
        proftpd:x:108:65534::/var/run/proftpd:/bin/false
        ftp:x:109:65534::/srv/ftp:/bin/false
        ftpuser:x:2001:33:proftpd user www-data:/bin/null:/bin/false

    The ftpuser table in the database has uid/gid set to 2001 for both. I'm going absolutely crazy trying to solve this; any help would be greatly appreciated. P.S. If I connect to the FTP server manually I can upload files via FileZilla, but it isn't working for the web camera, although there is talky talky going on between the server and the camera.
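
    Two things have to line up for the web server to read those uploads: www-data needs to be in the group that owns the directories, and ProFTPD has to stop stripping group permissions when it creates them. A sketch (the directive and commands assume a Debian-style www-data user):

        # add www-data to the ftp group as a supplementary group,
        # rather than changing anyone's primary group
        usermod -a -G ftpgroup www-data

        # in proftpd.conf: the second Umask value applies to newly created directories
        Umask 022 022

    With that in place, new upload directories should come out group-readable (drwxr-xr-x, or drwxr-x--- with a stricter 027 027 umask), and existing ones can be fixed once with chgrp plus chmod -R g+rX.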


  • Need advice on setting up a server: FastCGI, suEXEC, speed, security, etc.

    - by lewisqic
    I am running my own dedicated server with CentOS 5 and WHM/cPanel. I would like to configure my server to meet my needs, but I need a little help. Only my own websites will be run on this server. I'm still a little green when it comes to server administration, so please forgive my ignorance.

    What I would like to have: I need some public directories to be writable (for user image uploads and things like that), but I don't want those directories to have 777 permissions. I need individual accounts to have the ability to set custom PHP settings for their own account without affecting other accounts, whether through a php.ini file, through .htaccess, or any other method. I would like things to run as fast as possible, whether that means using a PHP optimizer or cacher such as eAccelerator or XCache, or anything else. I need things to be as secure as possible.

    Here are my questions: What should I use for my PHP handler - DSO, CGI, FastCGI, suPHP, or something else? Should I be using suEXEC, and what are the benefits or downfalls of it? Which PHP optimizer/cacher is best to use? Are there any other security tips I need to know about all of this? I'd appreciate any advice or direction that can be offered. Thanks!


  • Problem with Ubuntu 10.10 running from a USB drive

    - by Surjya Narayana Padhi
    I recently downloaded Ubuntu 10.10 and created a USB drive with it. I started to run Ubuntu from that USB drive, but I am facing a lot of problems and keep wondering why it's not as easy as Windows to get all my work done in Ubuntu. I always get some error message or have to install something. This time I am getting the following errors. I am trying to download and install aircrack-ng, so I used the command sudo apt-get install aircrack-ng, but the installation stops with the following error:

        update-initramfs: deferring update (trigger activated)
        cp: cannot stat `/vmlinuz': No such file or directory
        dpkg: error processing bcmwl-kernel-source (--configure):
         subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
         initramfs-tools
         bcmwl-kernel-source
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I don't even have the aptitude command installed yet. Are all these errors because I am running Ubuntu from a USB drive? Is there any simple and easy way to go to the Ubuntu Software Center and download all the required essentials in one shot, and then aircrack-ng? I could not find aircrack-ng in the Ubuntu Software Center. Can anybody give me detailed steps to solve the problems above? I am frustrated searching for updates and installations, where some things work and some don't. Can anybody suggest how I should proceed after installing Ubuntu to run on a USB drive, so that I can use the OS like Windows - software downloads, wireless driver, sound, video, documents, C:, D:, all of it should be there? Please, somebody help.


  • How to provide users with isolated drive letters in Windows 2008 R2 (Terminal Server)

    - by Pierre
    I need to be able to host several RDP sessions on a Terminal Server, where users of group A see a drive X: mapped to a given folder of the server and users of group B see the same drive letter X: mapped to another folder. For instance:

        User 1, Group A    X: --> C:\data\A
        User 2, Group A    X: --> C:\data\A
        User 3, Group B    X: --> C:\data\B
        User 4, Group C    X: --> C:\data\C

    Is this possible? If so, how do I configure the virtual drive mapping so that the user has nothing special to do; i.e. I want the letter X: to be available to RemoteApps launched by the user, or if the user logs in to the remote desktop. Can I somehow use subst to get this to work? I would like to avoid, if possible, mounting drive letters on local shares (i.e. I don't like the idea of having to go through \\localhost\data-A to reach the user's data).
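
    Because drive letters created with subst live in the per-logon-session device map, a logon script that picks the folder by group membership is one way to get this without shares. A sketch (domain, group and folder names are placeholders) that could be assigned as a logon script via Group Policy or the user profile:

        @echo off
        rem map X: per group; first match wins
        whoami /groups | findstr /i /c:"MYDOMAIN\GroupA" >nul && subst X: C:\data\A && goto :eof
        whoami /groups | findstr /i /c:"MYDOMAIN\GroupB" >nul && subst X: C:\data\B && goto :eof
        whoami /groups | findstr /i /c:"MYDOMAIN\GroupC" >nul && subst X: C:\data\C

    Whether the substituted letter is visible to RemoteApp processes depends on the script running in that user's session before the app starts, so it is worth testing with a published app as well as a full desktop.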


  • How to distribute multiple executions of an app across many machines

    - by Salec
    I've got a simulation app (64-bit Windows) that runs without any user interaction. This app gathers information and pushes it to a remote MS SQL Server. What I'd like to do is execute this simulation as many times as I can on multiple machines after our nightly build has finished and has passed the test suite. If possible, I'd love to have the ability to configure it to stop after x total runs or if the entire batch has taken over y hours. I've tried using Visual Studio's built-in test framework, since we already have a test lab set up with multiple agents. I created a single unit test that simply runs the simulation, then I created an ordered test and added that single test multiple times (from what I gather, this is the only way to execute the same unit test more than once). I found that ordered tests are only run on a single agent and not distributed, which is very limiting. We use TeamCity to perform our nightly builds and I suspect it's possible to implement this on top of that, but I'm fairly new to TeamCity. We also have Jenkins and Bamboo available, and I'm open to any other software that would get the job done, presuming it runs on a 64-bit Windows OS. Any suggestions?


  • How to set a static route for an external IP address

    - by HorusKol
    Further to my earlier question about bridging different subnets - I now need to route requests for one particular IP address differently to all other traffic. I have the following routing in my iptables on our router:

        # Allow established connections, and those !not! coming from the public interface
        # eth0 = public interface
        # eth1 = private interface #1 (10.1.1.0/24)
        # eth2 = private interface #2 (129.2.2.0/25)
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -m state --state NEW ! -i eth0 -j ACCEPT
        iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A FORWARD -i eth0 -o eth2 -m state --state ESTABLISHED,RELATED -j ACCEPT

        # Allow outgoing connections from the private interfaces
        iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
        iptables -A FORWARD -i eth2 -o eth0 -j ACCEPT

        # Allow the two private connections to talk to each other
        iptables -A FORWARD -i eth1 -o eth2 -j ACCEPT
        iptables -A FORWARD -i eth2 -o eth1 -j ACCEPT

        # Masquerade (NAT)
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

        # Don't forward any other traffic from the public to the private
        iptables -A FORWARD -i eth0 -o eth1 -j REJECT
        iptables -A FORWARD -i eth0 -o eth2 -j REJECT

    This configuration means that users will be forwarded through a modem/router with a public address - this is all well and good for most purposes, and in the main it doesn't matter that all computers are hidden behind the one public IP. However, some users need to be able to access a proxy at 192.111.222.111:8080 - and the proxy needs to identify this traffic as coming through a gateway at 129.2.2.126 - it won't respond otherwise. I tried adding a static route on our local gateway with:

        route add -host 192.111.222.111 gw 129.2.2.126 dev eth2

    I can successfully ping 192.111.222.111 from the router. When I trace the route, it lists the 129.2.2.126 gateway, but I just get * on each of the following hops (I think this makes sense since this is just a web proxy and requires authentication). When I try to ping this address from a host on the 129.2.2.0/25 network it fails. Should I do this in the iptables chain instead? How would I configure this routing?
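
    If the proxy only answers traffic arriving from the 129.2.2.0/25 network, the client packets not only have to be routed out of eth2, they also have to carry a source address from that network rather than being masqueraded to the public IP. A rough sketch, assuming 129.2.2.126 is the upstream gateway on the eth2 side and using 129.2.2.1 as a stand-in for the router's own eth2 address:

        # send traffic for the proxy out via the 129.2.2.x gateway
        ip route add 192.111.222.111/32 via 129.2.2.126 dev eth2

        # rewrite the source of LAN clients' proxy traffic to the router's eth2 address
        iptables -t nat -A POSTROUTING -o eth2 -d 192.111.222.111 -j SNAT --to-source 129.2.2.1

    Whether this is enough depends on how the proxy's network routes replies back, so treat it only as a starting point.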


  • Configuring two nearby WLANs: should I use the same ssid?

    - by Rory
    I'm configuring a home network for basic internet use (i.e. I don't really need connectivity between workstations on the network). My brick walls mean a single wireless router doesn't provide good coverage throughout the house, so I have purchased two powerline adapters; the incoming modem/wireless router at one end of the house is plugged into one powerline adapter, and at the other end of the house the other powerline adapter is plugged into a second wireless router. Currently the two wireless networks have different SSIDs. (The powerline adapters only do power-to-Ethernet; they're not wireless access points themselves.) This works well, except that when I move between rooms I would ideally like my devices (iPad, phones, laptops) to switch from the weak to the strong signal. Sometimes there's enough signal that they hold on to the weak connection instead of switching to the strong one. Should I give the two networks the same SSID, and if so, what is the actual effect? Do the signals get confused, is the bandwidth affected, will this help my devices seamlessly move from one to the other, or is the SSID just a cosmetic thing that doesn't actually have any impact on this situation? Are there any other settings that I should configure to make my setup optimal?


  • Error authenticating git repository with Redmine

    - by woni
    I've set up Redmine 2.1 on my Debian Squeeze server following this tutorial: HowTo configure Redmine for advanced git integration (I tried to use the grack path). The Redmine server is running properly, but I have a problem granting users access to git repositories. When I try to clone a repository it says:

        error: The requested URL returned error: 500 while accessing

    The Apache error.log shows this entry:

        [Fri Sep 28 15:50:56 2012] [crit] [client xx.xx.xx.xx] configuration error: couldn't check user. Check your authn provider!: /repo.git/info/refs

    It also asks me for user and password when cloning, but it shouldn't if I understand the tutorial correctly. I'm using the Redmine authentication module:

        <VirtualHost *:80>
            ServerName my.server.at
            DocumentRoot "/var/www/my.server.at/public"

            PerlLoadModule Apache::Redmine

            <Directory "/var/www/my.server.at/public">
                Options None
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>

            SetEnv REMOTE_USER=$REDIRECT_REMOTE_USER"
            SetEnv GIT_PROJECT_ROOT /var/git/my.server.at/
            SetEnv GIT_HTTP_EXPORT_ALL
            ScriptAlias /git/ /usr/lib/git-core/git-http-backend

            <Location />
                Order allow,deny
                Allow from all

                AuthType Basic
                AuthName Git
                Require valid-user
                AuthBasicAuthoritative Off
                AuthUserFile /dev/null
                AuthGroupFile /dev/null

                PerlAccessHandler Apache::Authn::Redmine::access_handler
                PerlAuthenHandler Apache::Authn::Redmine::authen_handler

                RedmineDSN "DBI:mysql:database=redmine;host=localhost"
                RedmineDbUser "user"
                RedmineDbPass "password"
                RedmineGitSmartHttp yes
            </Location>
        </VirtualHost>

    Can someone help me, please, and explain the error and what I can do to solve my problem?


  • Dual-boot install of Ubuntu 12.04 with Windows 7 (64-bit) on a non-UEFI system fails

    - by Randnum
    I cannot seem to install the correct boot loader for a non-UEFI firmware system. I'm trying to install Ubuntu 12.04 and Windows 7 (64-bit), which are both technically compatible with GPT, but for Windows only if the firmware is UEFI-enabled. My system uses the old BIOS firmware and does not support UEFI. Therefore, whenever I finish my Ubuntu install and try to install Windows, I get a "cannot install to GPT partition type" error. Even if I use GParted to format a special NTFS partition for Windows, it can't handle the GPT partition style because there is no UEFI. But my Ubuntu install always uses GPT during installation and never asks if I want the old BIOS-style MBR instead. How do I resolve this? Both OSes will install fine on their own; the problem is that when I try to install the second OS it doesn't recognize any of the other's partitions and tries to write its own on top of them. I've tried both OSes first and always run into the same problem. Since there is no way to make Windows recognize GPT without upgrading my motherboard, how do I tell Ubuntu to use the old BIOS MBR on install? Do I have to download a special Ubuntu with a specific GRUB version, or should I manually configure my partitions somehow to force it not to use GPT? Thank you,
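
    The Ubuntu installer tends to keep whatever partition table already exists on the disk, so one way out is to give the disk an MBR (msdos) label before installing either OS. A destructive sketch - it wipes everything on the disk, /dev/sda is only an assumption for the target drive, and sgdisk comes from the gdisk package, which may need installing in the live session:

        sudo sgdisk --zap-all /dev/sda       # remove GPT (and any old MBR) structures
        sudo parted /dev/sda mklabel msdos   # create a classic MBR partition table

    gdisk's recovery and transformation menu (r, then g) can also convert GPT to MBR in place when the layout allows it, which avoids wiping the disk.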


  • Memory overcommitment on VMware ESXi 5.0

    - by Tibor
    I would like to better understand the possibilities of VMware ESXi memory overcommitment. I've read this paper from VMware, so I am familiar with the general concepts, such as hypervisor swapping, memory ballooning and page sharing. It seems that a combination of these techniques allows for quite a large degree of overcommitment. However, I am not sure. I am deploying a virtual test lab comprising 4 identical sets of virtual servers and workstations and a couple of virtual router instances. Overall, I expect to be running around 20 virtual machines with Windows XP, Windows 7 and Ubuntu for workstation hosts, as well as CentOS and Windows 2008 Server instances for servers. The problem, however, is that the host machine only has 12 GB of RAM and I don't have the option to add more. I would like to know the best way to configure the hosts in order to achieve reasonable performance within the constraints. I have these two options: allocate as little RAM as possible to each virtual machine, or allocate an extraordinary amount (such as 4 GB per instance) and let the balloon driver do the rest. Something else? Which would work better? The machines will mostly be idle, so I don't have any major performance expectations, but they should run reasonably smoothly nevertheless.


  • Assigning cores to VM in vSphere

    - by user114933
    Complete vSphere newbie here... Background: I have a 12-core machine with 24 VMs on it. Currently, all the processing power is shared between these VMs equally. The question: can I configure one VM to be given two CPUs' worth of processing no matter what's happening on the other machines? My research: I tried two things in vSphere. I set the reservation and limit on one VM to equal the same as two cores. To test whether my objective was being reached, I measured the time it would take to gzip a file when the other VMs were running nothing and when they were running CPU-intensive operations. I expected the time to gzip the file to be the same, because this VM gets priority for some processing. Unfortunately, the time taken to gzip the file when the other VMs were running something was significantly more than when they were not running anything. I also tried setting the Hyperthreaded Core Sharing mode to Internal, hoping that this would mean my VM would get at least an entire core to itself. This did not work either. Thanks in advance!
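
    A CPU reservation sized at two cores' worth of clock speed is the usual way to express "this VM always gets at least this much", and it can be set from PowerCLI as well as the client. A sketch (the VM name and the 2 x 2.6 GHz figure are placeholders for the actual core speed):

        Get-VM "TestVM01" | Get-VMResourceConfiguration |
            Set-VMResourceConfiguration -CpuReservationMhz 5200 -CpuSharesLevel High

    A reservation guarantees scheduling time rather than exclusive physical cores; if the gzip test still slows down under load, checking the VM's CPU ready time (%RDY in esxtop) would show whether it is actually waiting on the scheduler.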

