Search Results

Search found 26176 results on 1048 pages for 'stream socket client'.


  • How to remap IPs visible from local machine to IPs visible from a machine I have SSH access to?

    - by gooli
    I'm so far out of my depth I don't even know what to google for. There's a server I can connect to via SSH. Via that server I can access other servers on its subnet via SSH. What I want to do is access the machines that server can reach directly. Say the server IP is 192.168.7.7 and it is the only one in the 192.168.x.x range I have access to. I'd like to configure things so that when I try to access, say, 192.168.7.100 from my machine, the connection goes through an SSH tunnel I open to 192.168.7.7 and out to 192.168.7.100. I would like this to work for any port if at all possible. I know I can set up an HTTP proxy and even a SOCKS proxy, but I'm wondering if there is a way to actually remap some of the IPs my machine sees to IPs only visible from the remote machine. What would this configuration be called? Is this NAT, VPN, IP2IP or something else? How can I set this up on a Windows client box that connects via SSH to a Linux box? It sounds to me like I need to set up some kind of filtering on the network driver, or possibly a virtual NIC, but I'm not sure where to go next.
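
    One hedged sketch of the tunnelling approaches this describes (the user name and subnet are placeholders): plain SSH can forward individual ports or act as a SOCKS proxy (PuTTY's "Dynamic" forwarding on Windows), while a tool such as sshuttle can transparently route a whole remote subnet over the SSH session on a Linux/macOS client.

        # forward one port: localhost:8080 reaches 192.168.7.100:80 via the jump host
        ssh -L 8080:192.168.7.100:80 user@192.168.7.7

        # dynamic SOCKS proxy on localhost:1080 (any port, but applications must speak SOCKS)
        ssh -D 1080 user@192.168.7.7

        # route the whole subnet through the jump host without per-port setup (Linux/macOS)
        sshuttle -r user@192.168.7.7 192.168.7.0/24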

    Read the article

  • KVM Guest installed from console. But how to get to the guest's console?

    - by badbishop
    I'm trying to install a fully virtualized guest (Fedora 14 x86_64) on KVM (RHEL 6), using command-line only (both hypervisor and guest). It goes without errors, and without a tangible result . I'd like to know how to do a text-only installation. So, here's what I've done: # virt-install \ --name=FE --ram=756 --vcpus=1 \ --file=/var/lib/libvirt/images/FE.img --network bridge:br0 \ --nographics --os-type=linux \ --extra-args='console=tty0' -v \ --cdrom=/media/usb/Fedora-14-x86_64-Live-Desktop.iso Starting install... Creating domain... | 0 B 00:00 Connected to domain FE Escape character is ^] ÿ Now what? As I understand after googling for a couple of days, I should see the guest's output from the text installation, but nothing happens. virt-viewer cannot connect to it, kindly suggesting that I explore all the options by adding --help (which I did). If I reconnect with virsh, I see this: Domain installation still in progress. You can reconnect to the console to complete the installation process. [root@v ~] # virsh console FEConnected to domain FE Escape character is ^] This shows that VM is running # virsh list Id Name State ---------------------------------- 8 FE running Qemu log: LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin /usr/libexec/qemu-kvm -S -M rhel6.0.0 -enable-kvm -m 756 -smp 1,sockets=1,cores=1,threads=1 -name FE -uuid 6989d008-7c89-424c-d2d3-f41235c57a18 -nographic -nodefconfig -nodefaults -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/FE.monitor,server,nowait -mon chardev=monitor,mode=control -rtc base=utc -no-reboot -boot d -drive file=/var/lib/libvirt/images/FE.img,if=none,id=drive-ide0-0-0,format=raw,cache=none -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -drive file=/media/usb/Fedora-14-x86_64-Live-Desktop.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev tap,fd=20,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=52:54:00:0a:65:8d,bus=pci.0,addr=0x2 -chardev pty,id=serial0 -device isa-serial,chardev=serial0 -usb -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3 char device redirected to /dev/pts/1 Output of /etc/libvirt/qemu/FE.xml # cat /etc/libvirt/qemu/FE.xml <domain type='kvm'> <name>FE</name> <uuid>6989d008-7c89-424c-d2d3-f41235c57a18</uuid> <memory>774144</memory> <currentMemory>774144</currentMemory> <vcpu>1</vcpu> <os> <type arch='x86_64' machine='rhel6.0.0'>hvm</type> <boot dev='hd'/> </os> <features> <acpi/> <apic/> <pae/> </features> <clock offset='utc'/> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <devices> <emulator>/usr/libexec/qemu-kvm</emulator> <disk type='file' device='disk'> <driver name='qemu' type='raw' cache='none'/> <source file='/var/lib/libvirt/images/FE.img'/> <target dev='hda' bus='ide'/> <address type='drive' controller='0' bus='0' unit='0'/> </disk> <disk type='block' device='cdrom'> <driver name='qemu' type='raw'/> <target dev='hdc' bus='ide'/> <readonly/> <address type='drive' controller='0' bus='1' unit='0'/> </disk> <controller type='ide' index='0'> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/> </controller> <interface type='bridge'> <mac address='52:54:00:0a:65:8d'/> <source bridge='br0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </interface> <serial type='pty'> <target port='0'/> </serial> <console type='pty'> <target port='0'/> </console> <memballoon model='virtio'> <address type='pci' 
domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> </memballoon> </devices> </domain> I'm obviously missing something that many others don't, but what is it? Thanks in advance!
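
    For reference, a hedged sketch of a text-mode install: virt-install only honours --extra-args together with --location (an install tree rather than a Live ISO, which has no text installer), and the serial console the guest prints to is ttyS0, not tty0. The mirror URL below is only an illustration.

        virt-install \
          --name=FE --ram=756 --vcpus=1 \
          --file=/var/lib/libvirt/images/FE.img \
          --network bridge:br0 \
          --nographics --os-type=linux \
          --location=http://dl.fedoraproject.org/pub/fedora/linux/releases/14/Fedora/x86_64/os/ \
          --extra-args='console=ttyS0,115200n8 serial'
        # reattach later with:  virsh console FE   (escape with Ctrl+])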

    Read the article

  • How Do I Install Latest Java Plugin on Ubuntu 8.04 LTS with Custom Firefox 3.6.3?

    - by Volomike
    I have Ubuntu 8.04 LTS and will need to remain on that for a while, as I am a programmer and cannot disturb what I'm doing in the middle of a project. As you know, it doesn't want to install the latest Firefox by default and keeps me in the stone age. So, I had to install my own Firefox. I did so, and with relative ease I was able to install the Flash plugin using the normal 'ln -s' technique one usually does with /usr/lib/firefox/plugins. Now I need to do a Gotomypc.com task and it requires Java -- moan -- so I need to figure out how to install Java. I downloaded and installed the latest Java plugin in /usr/java, and I do see underneath that layer of folders a plugins folder with a .so file. So, I went to /usr/lib/firefox/plugins and did this: ln -s /usr/java/jre1.6.0_20/plugin/i386/ns7/libjavaplugin_oji.so Then, I also read on the web that one has to create a ~/.mozilla/plugins folder, cd into it, and then run the same ln -s command again. Another site recommended finding one's ~/.mozilla/firefox and renaming it to ~/.mozilla/firefox.old (after backing up your bookmarks) and then launching Firefox again so that it creates ~/.mozilla/firefox and uses the new Java plugin. Well, all these attempts have completely failed and it is incredibly frustrating. I do about:plugins to see my plugins and all I get are Flash and the default null plugin. I do not see the Java one. Also, I read that the Tools menu in Firefox 3.6.3 would show a Java Console entry, and I don't see that either. I found a pluginreg.dat somewhere deep under ~/.mozilla/firefox, but it does not list the Java plugin inside -- only Flash and the null plugin. Please help me install Java. I need to help a client out and need to connect to his PC remotely with GoToMyPC, which requires Java inside Firefox.
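
    A hedged sketch of the symlink that usually works for this combination: from JRE 6u10 onwards the download also ships a "next-generation" plugin, libnpjp2.so, which is the one Firefox 3.6 picks up; the exact path below is assumed from the version mentioned above, so adjust it to the real install directory.

        # per-user plugin directory, picked up by any Firefox build this user runs
        mkdir -p ~/.mozilla/plugins
        ln -sf /usr/java/jre1.6.0_20/lib/i386/libnpjp2.so ~/.mozilla/plugins/

        # or system-wide, next to the self-installed Firefox
        ln -sf /usr/java/jre1.6.0_20/lib/i386/libnpjp2.so /usr/lib/firefox/plugins/
        # then restart Firefox and re-check about:plugins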

    Read the article

  • Loading dependencies for custom puppet functions

    - by Ben Smith
    I have written a custom Puppet function, which works fine, that depends on the cloudservers gem (a Rackspace client library). This is fine if I have pre-installed the gem on a server before running Puppet, but it breaks completely if I have not, as the function seems to be run during the 'compilation' sweep, well before my package definition is realised. Here's what my .pp looks like, with get_hosts being the function that requires the cloudservers gem. package { "rubygems": ensure => installed, provider => "gem"; } package { "cloudservers": ensure => installed, provider => "gem", require => Package["rubygems"]; } class hosts::us { $hosts = get_hosts("us") hostentry { $hosts: } } define hostentry() { $parts = split($name, ',') $address = $parts[0] $ip = $parts[1] $aliases = $parts[2] host{ $address: ip => $ip, host_aliases => $aliases } } Is there a way to stop the function being run so early, or at least to make its run depend upon the library being installed? Alternatively, is there a way I can add dependencies somewhere in the functions folder that will be available to the function?
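
    Custom functions normally execute on the puppet master during catalog compilation, not on the agent, so a package resource in the catalog can never be installed "in time" for the function that is compiling that same catalog. A hedged workaround sketch, assuming a standard master/agent setup, is simply to install the gem on the master out of band; alternatively the function itself can rescue LoadError and return an empty list so compilation degrades gracefully where the gem is absent.

        # on the puppet master (where get_hosts actually runs), once:
        gem install cloudservers
        /etc/init.d/puppetmaster restart   # or the equivalent service command on this distro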

    Read the article

  • Remote Desktop *from* Windows 2008 R2 Server

    - by freefaller
    Summary: how do I create an RDC connection from a Windows 2008 server to another server? Our client will only allow us to connect to their server via a static IP address (which is fair enough), but unfortunately, as we're a very small company, we don't have one in the office. As a workaround, we had the connection working through our old Windows 2003 server (dynamic-cloud from 1and1). However, we have just rebuilt the server to run under Windows 2008 R2 (don't ask, but it was necessary), and now I simply cannot get the connection working. I have added an "Outbound Rule" to Windows Firewall with Advanced Security (TCP, all local ports, remote port 3389 - I have also tried the other way around). I have added a packet filter IP security rule with the same details. The 1and1 firewall rules (through their online control panel) allow for 3389 TCP and UDP. But it is simply not connecting (yes, the server is definitely on and able to accept connections), with the general error of... Remote Desktop can’t connect to the remote computer for one of these reasons: 1) Remote access to the server is not enabled 2) The remote computer is turned off 3) The remote computer is not available on the network Is there anything obvious I've missed - or something I can use to find out where the request is being blocked? The new server is using the exact same IP address as before, so I don't believe that would be an issue. Unless it's trying to use an IPv6 address rather than the old IPv4 address it used before? I apologise that I am not a network person by trade, but I know more than anybody else in my office!!

    Read the article

  • mount nfs subdirectory and still apply parent directory permissions

    - by Christophe Drevet
    An NFS server exports: /export/home computers /export/cont1 computers On the filesystem, there are these permissions: $ ls -al /export/cont1 drwxr-x--- 6 root group1 4096 2010-05-04 10:57 . drwxrwxrwx 5 root root 4096 2010-05-07 14:52 .. drwxrwxrwx 2 root root 4096 2010-05-06 20:33 .snapshot drwxr-xr-x 2 user1 group1 4096 2010-05-04 10:57 user1 drwxr-xr-x 2 user2 group1 4096 2010-05-04 10:57 user2 drwxr-xr-x 2 user3 group1 4096 2010-05-04 10:57 user3 So user4, who is not in group1, can't access this directory and its subdirectories. Now, on his client machine, this user can do: $ sudo mount server:/export/cont1/user3 /mnt/temp and then access the directory without having permissions on /export/cont1: $ id uid=7943(user4) gid=7943(user4) groupes=1189(group4) $ ls -al /mnt/temp/ drwxr-xr-x 3 user3 group1 4096 2010-05-04 10:57 . drwxr-xr-x 7 root root 4096 2010-05-04 11:02 .. -rw-r--r-- 1 user3 group1 6 2010-05-04 10:56 README Is there a way to apply the /export/cont1 permissions even if it is not mounted? The goal is to let users mount /home/user3 and only access it if they can access /export/cont1 on the NFS server. Put another way: how can I allow a machine to mount /export/cont1/user3 and still not allow user4 to access it? Maybe NFSv4 and Kerberos can help?
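
    With AUTH_SYS the server simply trusts whatever UID the client sends, and mounting a subdirectory bypasses the parent directory's mode bits, so Kerberos is indeed the usual answer. A rough sketch, assuming NFSv4 with sec=krb5 and leaving out the keytab and pseudo-root details (hostnames are placeholders):

        # /etc/exports on the server -- require Kerberos-authenticated users
        /export/cont1   *.example.com(rw,sync,sec=krb5,no_subtree_check)

        # on the client -- user4 now needs a valid ticket and can no longer impersonate user3
        mount -t nfs4 -o sec=krb5 server:/export/cont1 /mnt/cont1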

    Read the article

  • Can I get advice on my nginx configuration (as a proxy in front of Jira and Confluence)?

    - by Nate
    I was wondering if I could get some advice on my nginx configuration. The config seems to be working, but I'm unsure if I'm doing everything properly. The basic idea is to have a Jira and Confluence server (in separate Tomcat instances) running on the same machine, with nginx in front to handle SSL for both. I want only SSL connections to be made to Jira/Confluence. Jira is running on 127.0.0.1:9090 and Confluence on 127.0.0.1:8080. Here is my nginx.conf, any advice or tips would be greatly appreciated. user nginx; worker_processes 1; error_log /var/log/nginx/error.log; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] $request ' '"$status" $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; #gzip on; # Load config files from the /etc/nginx/conf.d directory include /etc/nginx/conf.d/*.conf; # Our self-signed cert ssl_certificate /etc/ssl/certs/fissl.crt; ssl_certificate_key /etc/ssl/private/fissl.key; # redirect non-ssl Confluence to ssl server { listen 80; server_name confluence.example.com; rewrite ^(.*) https://confluence.example.com$1 permanent; } # redirect non-ssl Jira to ssl server { listen 80; server_name jira.example.com; rewrite ^(.*) https://jira.example.com$1 permanent; } # # The Confluence server # server { listen 443; server_name confluence.example.com; ssl on; access_log /var/log/nginx/confluence.access.log main; error_log /var/log/nginx/confluence.error.log; location / { proxy_pass http://127.0.0.1:8080; proxy_set_header X-Forwarded-Proto https; proxy_set_header Host $http_host; } error_page 404 /404.html; location = /404.html { root /usr/share/nginx/html; } redirect server error pages to the static page /50x.html error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } } # # The Jira server # server { listen 443; server_name jira.example.com; ssl on; access_log /var/log/nginx/jira.access.log main; error_log /var/log/nginx/jira.error.log; location / { proxy_pass http://127.0.0.1:9090/; proxy_set_header X-Forwarded-Proto https; proxy_set_header Host $http_host; } error_page 404 /404.html; location = /404.html { root /usr/share/nginx/html; } # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } } }
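
    A few directives commonly added when proxying Jira/Confluence behind nginx, offered only as a sketch rather than a verified drop-in (note, too, that the comment line "redirect server error pages to the static page /50x.html" in the Confluence block above appears to have lost its leading #):

        location / {
            proxy_pass          http://127.0.0.1:8080;
            proxy_set_header    Host               $http_host;
            proxy_set_header    X-Real-IP          $remote_addr;
            proxy_set_header    X-Forwarded-For    $proxy_add_x_forwarded_for;
            proxy_set_header    X-Forwarded-Proto  https;
            client_max_body_size 100m;   # large Jira/Confluence attachment uploads
        }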

    Read the article

  • Network Load Balancing, intermittent port problem

    - by Jimmy Chandra
    Trying to troubleshoot an intermittent problem; I think it might be related to an NLB issue. We are using Windows Network Load Balancing to balance load for our multi-server SharePoint front ends. Say Web Front End 1's IP is 192.168.1.100 and Web Front End 2's IP is 192.168.1.101; the NLB is set up to load-balance both WFE servers for any incoming traffic to the IP 192.168.1.200. Sometimes we get an intermittent issue where, when we try to access the SharePoint site using 192.168.1.200:8080 (say the site is set up to run on port 8080) from a remote client, it displays 'page not found'. Pinging 192.168.1.200 gives responses, but trying to telnet to 192.168.1.200:8080 just won't connect. However, browsing the SharePoint site directly on the individual WFEs (192.168.1.100 and 192.168.1.101) shows no problem whatsoever. My guess (we haven't had a chance to try it yet, but I think it should work) is that if I try connecting remotely to an individual server, it will respond just fine, but any attempt to connect using the virtual IP (192.168.1.200) will fail miserably. The funny thing is, after a while it returns to normal. Has anyone had a similar experience with this type of problem while implementing NLB? We are doing this in a virtual environment.

    Read the article

  • Durability of Websockets Server

    - by smitchell360
    I am starting to experiment with websockets. Does anyone know of a websockets server (open source or paid) that provides a durable store of the websocket "channel"? All of the examples that I have found do not address durability -- if a websockets server goes down, all "channel" data is lost. Services such as Pusher do not really discuss whether they address the durability issue (and I have not received a response from tech support yet). Happy to roll my own, but would rather not reinvent the wheel. EDIT: I'm not looking for websockets 101 information. That is readily available and understood. I'm looking for a server (open source or paid) that supports websockets and has a durable store for the websocket data so that, in the event that a server fails, a new server can take over where the original one left off. Two main purposes: 1. support failover scenarios contemplated by the websockets Network Working Group http://tools.ietf.org/html/draft-ibc-websocket-dns-srv-02#section-5.1 (most importantly so that missed messages are sent when a client connects to a failover server) 2. support scenarios where new subscribers must receive all past messages that were published. Of course this can be handled at the application layer...but that is not what I am looking for.

    Read the article

  • vmware vmdk disk problem

    - by dmtr
    Hello, I have a VMware ESXi 4 server and 2 storage servers (mounted as NFS). The storage servers (Fedora 14) form a DRBD cluster (dual primary) with an OCFS2 filesystem; each server also has a local partition with an ext4 filesystem, and both are mounted as NFS datastores on the ESXi server. When I tried to copy a virtual machine's files (naturally, it was powered off) from the ext4 partition to the OCFS2 partition, the vmdk's total size was different, but the md5sum was the same. On the ext4 partition: # ls -la total 28492228 -rw------- 1 root root 42949672960 Jan 14 14:46 disk-flat.vmdk # md5sum disk-flat.vmdk 0eaebe3138beb32f54ea5de6dfe5a987 On the ocfs2 partition: # ls -la total 13974660 -rw------- 1 root root 42949672960 Jan 14 16:16 disk-flat.vmdk # md5sum disk-flat.vmdk 0eaebe3138beb32f54ea5de6dfe5a987 When I power on the virtual machine from the OCFS2 partition it doesn't work. The virtual machine runs Windows and it freezes after the Windows logo. From the ext4 partition the virtual machine works. A test with Linux (create and install on the ext4 partition, then copy) shows the same problem. When I create a virtual machine directly on the OCFS2 partition, there are no problems. I also tried copying via the vSphere client, and I have the same problem. Any suggestions?
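
    On the size question: the "total" column in ls -la counts allocated blocks, so a sparse flat vmdk can legitimately show different totals on ext4 and OCFS2 while md5sum (which reads the full logical 40 GB) stays identical. A quick check, as a sketch:

        du -h --apparent-size disk-flat.vmdk   # logical size -- what md5sum actually reads
        du -h disk-flat.vmdk                   # blocks really allocated on this filesystem
        ls -ls disk-flat.vmdk                  # first column = allocated blocks

    If the checksums match, the copy itself is intact, so the freeze is more likely about how the OCFS2-backed NFS datastore behaves for a running VM (locking, caching) than about the file contents.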

    Read the article

  • Remote assistance from Remote Desktop sessions: unable to control

    - by syneticon-dj
    Since Remote Control (aka Session Shadowing) is gone for good on Server 2012 Remote Desktop Session Hosts, I am looking for a replacement to support users in a cross-domain environment. Since Remote Assistance is supposed to work for Remote Desktop sessions as well, I tried leveraging that for support purposes by enabling unsolicited remote assistance for all Remote Desktop Session Hosts via Group Policy. All seems to be working well, except that the "expert" seems to be unable to actually exercise any mouse or keyboard control when the remote assistance session has been initiated from a Remote Desktop session itself. Mouse clicks and keystrokes from the "expert" session (Server 2012) seem to simply be ignored, even after the assisted user has acknowledged the request for control. I would like to see this working through RD sessions for the support staff for a number of reasons: not every support agent would have the appropriate client system version to support users on a specific terminal server (e.g. an agent might have a Windows Vista or Windows 7 station and thus be unable to offer assistance to users on Server 2012 RDSHs), and a support agent would not necessarily have a station which is a member of the specific destination domain (mainly because more than a single domain's users are supported). What am I missing?

    Read the article

  • Virus / Malware: Explorer window with strange user logged into Hotmail

    - by abel
    I was looking into a PC, the user of which had complained that he couldn't connect to the internet and that the PC was experiencing random restarts. The PC runs WinXP SP3. On examination, I found that the Wireless Zero Configuration service was stopped. I enabled that and the internet was back on(The pc connected through wifi). Then I started firefox and browsed to gmail.com. I did not launch any other program, except for a few explorer windows. It was then I noticed a window had popped up(it was not a pop up). It had the explorer folder icon and instead of explorer folder contents, it showed a hotmail page, with a user named "Homer Stinson" logged in. The titlebar was empty and there were no toolbars. I asked the client whether this was his email id, which he said it was not. I opened task manager, which did not show this explorer window in it's Application tab. I switched back to the 'rogue' window and found that the hotmail settings page was now open, which later changed to the hotmail edit profile page for the same user. I was not clicking anything. Then suddenly the window closed. I checked the autorun locations, fired up a Malwarebytes Anti Malware scan which gave a clean result. The system also had an updated installation of AVG. I don't want a solution for this virus(?) problem. I asked this here because I wanted to know if somebody has come across something similar. What kind of malware can this be? The user had not seen a similar window before and I should have taken screenshots. (PS:Homer Stinson is an imaginary name. I searched for the other real name with some relevant keywords but could not come up with a virus/malware discussion post.) UPDATE: When I checked the PC later a DEP error had popped up closing which restarted the PC.

    Read the article

  • Abysmal transfer speeds on gigabit network

    - by Vegard Larsen
    I am having trouble getting my Gigabit network to work properly between my desktop computer and my Windows Home Server. When copying files to my server (connected through my switch), I am seeing file transfer speeds of below 10MB/s, sometimes even below 1MB/s. The machine configurations are: Desktop Intel Core 2 Quad Q6600 Windows 7 Ultimate x64 2x WD Green 1TB drives in striped RAID 4GB RAM AB9 QuadGT motherboard Realtek RTL8810SC network adapter Windows Home Server AMD Athlon 64 X2 4GB RAM 6x WD Green 1,5TB drives in storage pool Gigabyte GA-MA78GM-S2H motherboard Realtek 8111C network adapter Switch dLink Green DGS-1008D 8-port Both machines report being connected at 1Gbps. The switch lights up with green lights for those two ports, indicating 1Gbps. When connecting the machines through the switch, I am seeing insanely low speeds from WHS to the desktop measured with iperf: 10Kbits/sec (WHS is running iperf -c, desktop is iperf -s). Using iperf the other way (WHS is iperf -s, desktop iperf -c) speeds are also bad (~20Mbits/sec). Connecting the machines directly with a patch cable, I see much higher speeds when connecting from desktop to WHS (~300 Mbits/sec), but still around 10Kbits/sec when connecting from WHS to the desktop. File transfer speeds are also much quicker (both directions). Log from desktop for iperf connection from WHS (through switch): C:\temp>iperf -s ------------------------------------------------------------ Server listening on TCP port 5001 TCP window size: 8.00 KByte (default) ------------------------------------------------------------ [248] local 192.168.1.32 port 5001 connected with 192.168.1.20 port 3227 [ ID] Interval Transfer Bandwidth [248] 0.0-18.5 sec 24.0 KBytes 10.6 Kbits/sec Log from desktop for iperf connection to WHS (through switch): C:\temp>iperf -c 192.168.1.20 ------------------------------------------------------------ Client connecting to 192.168.1.20, TCP port 5001 TCP window size: 8.00 KByte (default) ------------------------------------------------------------ [148] local 192.168.1.32 port 57012 connected with 192.168.1.20 port 5001 [ ID] Interval Transfer Bandwidth [148] 0.0-10.3 sec 28.5 MBytes 23.3 Mbits/sec What is going on here? Unfortunately I don't have any other gigabit-capable devices to try with.
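
    One thing worth ruling out before blaming hardware: both logs show iperf using the default 8 KB TCP window, which caps throughput well below gigabit on its own (though it would not explain 10 Kbit/s by itself). A hedged re-test sketch with a bigger window, a longer run and parallel streams:

        # receiving side
        iperf -s -w 256k

        # sending side -- run this in both directions
        iperf -c 192.168.1.20 -w 256k -t 30 -P 4

    If the WHS-to-desktop direction stays pathological, the Realtek drivers and the NICs' duplex/flow-control settings on that path are the usual suspects.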

    Read the article

  • BSOD during Cygwin install

    - by Mike Pennington
    I have been running Cygwin (1.7.7) under Windows Vista 64-bit, SP2 for at least a year, and had no problems with my installation at all until today. When I tried to install apt-cyg, I realized that I needed to get the svn client. During the installation of the package dependencies, Vista threw a blue screen of death. I hoped this was a one-time occurrence, so I rebooted into Vista and tried to install svn again; same result. Next I completely removed Cygwin's directory, and all Cygwin registry entries; at this point, I tried reinstalling the base Cygwin system again. During the installation of bash, I got another BSOD: Apologies for the fuzzy pic, I had to shoot with my phone camera. I googled for BSOD and Cygwin and found a post by Dave Korn mentioning that the only reasons Cygwin should cause a BSOD is because of either ioperm.sys or some USB utilities. I have neither on my system. Every time I have tried reinstalling, Vista BSODs; I have not had problems installing any other software packages on this Vista machine. How can I get Cygwin installed again (without reinstalling Vista)?

    Read the article

  • Redirect specific domains with DNS

    - by user66377
    Currently we filter internet content using OpenDNS: our internal Windows DC/DNS servers point to the router's DNS, which then points to the OpenDNS servers. This works well to block all computers on the network equally. New issue: we now need to separate which computers can go to which sites. Facebook is blocked for everyone right now, but I need to open it up to the 3 community computers. The 3 community computers will be on an untrusted network separate from the company computers, so they can have their own DNS server behind their own router. The issue, though, is that they still must connect to the internet using the same IP address, so OpenDNS sees the same IP and blocks them the same way. We are looking into getting a second IP, but that's not likely an option without going up to the next major level with our ISP, which we don't want to do. My thought is this: can I set up a DNS server on the untrusted network and then, depending on the request that comes in, have it send it to either OpenDNS or our ISP's DNS? For example, www.facebook.com and www.youtube.com are both on the OpenDNS blacklist. If they go to www.youtube.com, the local DNS server goes to the ISP's DNS to get the IP, and thus the client gets the right IP and can go to the site. This would be manually entered for each allowed site, creating a whitelist. Then if they go to www.facebook.com, since the local DNS server does not find an entry, it sends the request to OpenDNS, which sees the site is on the blacklist and sends back its blocked page. The local DNS server can be either BIND on Linux or MS DNS on Windows 2008. If this can be done, can you give some direction as well, as I've never set up a DNS server like this before? Thanks
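
    The split described here maps naturally onto dnsmasq on the community-side DNS box: forward everything to OpenDNS by default, and send only whitelisted domains to the ISP's resolver. A sketch (the ISP resolver address is a placeholder); BIND and Windows Server DNS can do the same thing with per-domain (conditional) forwarders.

        # /etc/dnsmasq.conf
        no-resolv                          # ignore /etc/resolv.conf, use only the servers below
        server=208.67.222.222              # default upstream: OpenDNS (filtered)
        server=208.67.220.220
        server=/youtube.com/203.0.113.53   # whitelisted domain -> ISP resolver (placeholder IP)
        # facebook.com has no entry, so it falls through to OpenDNS and stays blocked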

    Read the article

  • Router recommendation to virtualize 800 IPs

    - by delerious010
    I've recently been looking at getting some new load balancers for our environment, as we are expecting to double our client base in the next 12 months. Currently we have 400 public IPs serving 800 clusters (2 clusters per IP due to ports) on Coyote Point balancers, distributing connections to 3 web servers serving about 6 GB outgoing and 2 GB incoming per day. If we double, this would be about 800 IPs, possibly 1600 clusters, and about 6 servers per cluster (for a total of 9600 so-called "real servers", in Barracuda's lingo). Due to the number of clusters, most solutions I've looked at (Coyote, Barracuda, Loadbalancer.org) seem unsure whether they'll be able to handle our planned growth, mostly because of the health checks performed on the servers... which makes total sense when you think about it. So the fine folk at loadbalancer.org recommended that we may be better off offloading the 400-800 public IPs, which we require for SSL eCommerce solutions, onto a forward-facing router. From that point on, the router could do some mangling to route EXT_IP:443 to INT_IP:INT_PORT, which would then allow us to reduce the load balancer configuration to 1 or 2 clusters, thus resolving the health check problem. Does this idea make sense to y'all? Or would you have other recommendations? Secondly, what router would you recommend for such an undertaking? I'd be looking at something that has some form of failover mechanism built in. On a totally unrelated note, I've got to admit that I'm extremely pleased with the responses I got from loadbalancer.org. Their responses to my inquiries were surprisingly helpful (i.e. I didn't feel as if I was talking to a sales guy trying to push something). (No, I don't work for them, and sadly nor are they sending me free gear.)
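
    The "mangling" step described above is essentially one destination-NAT rule per public IP, which an ordinary Linux router (or a pair running keepalived/VRRP for the requested failover) can handle. A hedged sketch with placeholder addresses:

        # public 198.51.100.10:443 -> internal cluster VIP 10.0.0.10:8443
        iptables -t nat -A PREROUTING -d 198.51.100.10 -p tcp --dport 443 \
                 -j DNAT --to-destination 10.0.0.10:8443

        # generate one rule per public IP from a mapping file instead of maintaining 800 by hand
        while read pub internal port; do
            iptables -t nat -A PREROUTING -d "$pub" -p tcp --dport 443 \
                     -j DNAT --to-destination "$internal:$port"
        done < /etc/ip-map.txt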

    Read the article

  • Docs for OpenSSH CA-based certificate authentication

    - by Zoredache
    OpenSSH 5.4 added a new method for certificate authentication (changes). * Add support for certificate authentication of users and hosts using a new, minimal OpenSSH certificate format (not X.509). Certificates contain a public key, identity information and some validity constraints and are signed with a standard SSH public key using ssh-keygen(1). CA keys may be marked as trusted in authorized_keys or via a TrustedUserCAKeys option in sshd_config(5) (for user authentication), or in known_hosts (for host authentication). Documentation for certificate support may be found in ssh-keygen(1), sshd(8) and ssh(1) and a description of the protocol extensions in PROTOCOL.certkeys. Are there any guides or documentation beyond what is mentioned in the ssh-keygen man page? The man page covers how to generate certificates and use them, but it doesn't really provide much information about the certificate authority setup. For example, can I sign the keys with an intermediate CA, and have the server trust the parent CA? The description of the new feature seems to mean that I could set up my servers to trust the CA, then set up a method to sign keys, and then users would not have to publish their individual keys on the server. This also seems to support key expiration, which is great, since getting rid of old/invalid keys is more difficult than it should be. But I am hoping to find more documentation describing the complete CA, SSH server, and SSH client configuration needed to make this work.
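
    As far as I can tell the OpenSSH certificate format has no chain concept: the key listed in TrustedUserCAKeys must be the one that actually signed the user certificate, so an intermediate CA in the X.509 sense is not possible. A minimal end-to-end sketch (filenames and the principal name are examples):

        # 1. create the CA keypair (keep the private half somewhere safe)
        ssh-keygen -f user_ca

        # 2. sign a user's public key; -I is a logged identity, -n the allowed principals (usernames)
        ssh-keygen -s user_ca -I "alice-2010" -n alice alice_id_rsa.pub
        #    (newer releases also accept -V for a validity interval, giving you expiring keys)

        # 3. on every server, trust the CA instead of per-user authorized_keys entries
        #    in /etc/ssh/sshd_config:
        #        TrustedUserCAKeys /etc/ssh/user_ca.pub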

    Read the article

  • DNS-Based Environment Determination

    - by zvolkov
    Found the following here. The question is: where can I find more details on exactly how to implement this on Windows? Any guide or how-to, anybody? Or maybe you can provide your invaluable suggestions? Specifically, how do I make it so that "all QA servers would first resolve entries in qa.example.com first and then if that lookup failed they would try example.com"? (I'm a dev, not a DNS specialist, but our IT Support has refused to help on this :() Use DNS Based Environment Determination for your servers. Do this by initially splitting your top level domain into a number of sub domains depending on their function, and then creating DNS Service Names in each of the sub domains pointing to the relevant server for that service. Based on the list above we would then have: * clientdb.prod.example.com for Production * clientdb.perf.example.com for Performance Testing * clientdb.qa.example.com for QA * clientdb.dev.example.com for Development Servers then resolve entries in their relevant sub domain by function. That is, all QA servers would first resolve entries in qa.example.com first and then if that lookup failed they would try example.com. This allows you to have a single configuration entry for your client database hostname (clientdb) that would resolve correctly in all environments. This technique has the added advantage of still having global services defined in a common top level domain. This seems to be related to Providing "split horizon" DNS service. Reading that, I see that I will probably need a separate DNS server for each environment. Is this true, or does Windows support some form of "tagging" the records to be visible depending on the requestor's IP?
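
    The "resolve qa.example.com first, then example.com" behaviour is just the resolver's suffix search list, so it needs neither split-horizon DNS nor a server per environment. On Linux that is one line in resolv.conf; on Windows it is the DNS suffix search list, set per adapter or via Group Policy. A sketch:

        # /etc/resolv.conf on a QA host -- unqualified names try qa.example.com before example.com
        search qa.example.com example.com
        nameserver 10.0.0.53

        # an app configured with the short name "clientdb" then resolves to
        # clientdb.qa.example.com on QA boxes and clientdb.example.com elsewhere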

    Read the article

  • What does it mean for the file name to be shown with a red background?

    - by user56614
    I'm trying to install Cisco VPN client on Linux Ubuntu 10.04. The installer creates the directory, places all the necessary files in it, and then fails to launch the binary. I tried to launch it myself, the system rebukes me too. Closer inspection yields the following: eugene@eugene-desktop:/opt/cisco/vpn/bin$ sudo chmod u+x vpnagentd eugene@eugene-desktop:/opt/cisco/vpn/bin$ ls -la total 5124 drwxr-xr-x 2 root root 4096 2010-10-23 11:51 . drwxr-xr-x 6 root root 4096 2010-10-23 11:51 .. -rwxr-xr-x 1 root root 1607236 2010-10-23 11:51 vpn -rwsr-xr-x 1 root root 1204692 2010-10-23 11:51 vpnagentd -r--r--r-- 1 root root 697380 2010-10-23 11:51 vpndownloader.sh -rwxr-xr-x 1 root root 1712708 2010-10-23 11:51 vpnui -rwxr-xr-x 1 root root 3654 2010-10-23 11:51 vpn_uninstall.sh eugene@eugene-desktop:/opt/cisco/vpn/bin$ ./vpnagentd bash: ./vpnagentd: No such file or directory eugene@eugene-desktop:/opt/cisco/vpn/bin$ sudo ./vpnagentd sudo: unable to execute ./vpnagentd: No such file or directory The file name "vpnagentd" is shown in white letters with red background. The other three executables are in green letters with black background, as expected. Any ideas?
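
    Two separate things appear to be going on. In the default ls colour scheme a white-on-red name marks a setuid executable (note the "s" in -rwsr-xr-x), which is expected for vpnagentd. And "No such file or directory" from a file that plainly exists is the classic symptom of a 32-bit binary whose ELF interpreter (/lib/ld-linux.so.2) is missing on a 64-bit install. A hedged diagnostic sketch:

        file vpnagentd                            # likely "ELF 32-bit LSB executable ..."
        readelf -l vpnagentd | grep interpreter   # the loader the binary expects
        ls -l /lib/ld-linux.so.2                  # often absent on a bare 64-bit Ubuntu 10.04

        # if the loader is missing, the 32-bit compatibility packages should provide it
        sudo apt-get install libc6-i386 ia32-libs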

    Read the article

  • Incorrect directory permissions with OpenSSH on Cygwin on Windows Server 2008 SP2

    - by Davy Brion
    I ran into a weird directory permission problem when logged in to a Win2008SP2 (not R2) server through SSH. When I open a local Cygwin shell on the server, I can do this: myUser@myServer ~ $ cd /cygdrive/c/Windows/System32/inetsrv/ myUser@myServer /cygdrive/c/Windows/System32/inetsrv $ cd config myUser@myServer /cygdrive/c/Windows/System32/inetsrv/config $ I have no issues accessing the 'config' directory when using a local Cygwin shell. 'myUser' has all necessary permissions to access the directory as well. In fact, 'myUser' is a local administrator on the machine. Listing the permissions of the config folder through the local Cygwin shell shows the following output: 4 drwx------+ 1 SYSTEM SYSTEM 0 Aug 2 09:38 config But when I log into the server with an SSH client (in this case PuTTY), I run into the following problem: myUser@myServer ~ $ cd /cygdrive/c/Windows/System32/inetsrv/ myUser@myServer /cygdrive/c/Windows/System32/inetsrv $ cd config -bash: cd: config: Permission denied It also doesn't list the proper permissions through SSH: 0 drwxr-x--- 1 ???????? ???????? 0 Aug 2 09:38 config When I look at the running processes on the server with Task Manager (over a remote desktop connection), it shows that all bash.exe processes are running under the 'myUser' account, so I don't understand why I can't access that particular directory through SSH but have no problems accessing it in a local Cygwin shell. I'm using OpenSSH 5.9p1-1. I'm not sure what the Cygwin version is... I used the latest setup.exe (version 2.738) of Cygwin, but I can't seem to find any other Cygwin-related version number. I doubt that it's related to SSH/Cygwin though, because when I connect from the Win2008SP2 server to my local Win7 machine through SSH (using the same OpenSSH/Cygwin versions) I can access the /cygdrive/c/Windows/System32/inetsrv/config folder without issues. Does anyone have an idea on what the issue could be?
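
    One plausible explanation, offered as a guess: with public-key authentication Cygwin's sshd has no password to hand to Windows, so it constructs the user's token itself, and that token can lack group memberships and privileges an interactive (password-based) logon would have, which is why the same account can see different ACLs over SSH. Two quick checks, as a sketch:

        # run in both the local Cygwin shell and the SSH session and compare the group lists
        id

        # Cygwin-specific: store the account password so sshd can perform a real Windows logon
        passwd -R
        # ...then reconnect over SSH and retry the directory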

    Read the article

  • VNC on Xen failure

    - by BCable
    The following config works and creates a good VM in Xen: # Kernel Setup kernel = "/boot/vmlinuz-2.6.18.8-xenU" # Memory memory = "256" # Disk disk = [ "file:/opt/xen/domains/110/sda1.img,sda1,w", "file:/opt/xen/domains/110/swap.img,sda2,w" ] # container name name = "110" hostname = "boo" # Networking vif = ["type=ieomu, bridge=xenbr0"] # VNC vnc = 1 #vfb = [ 'type=vnc,vncdisplay=2,vnclisten=0.0.0.0,vncpasswd=110' ] # Behavior Settings root = "/dev/sda1" extra = "fastboot" But when I uncomment the VFB line, I get the following error after it hangs for at least 30 seconds: [root@customer 110]# xm create boo.cfg Using config file "./boo.cfg". Error: Device 0 (vkbd) could not be connected. Hotplug scripts not working. Any ideas? Part two of this question: Sometimes it actually works, and a port is opened. When this happens, nmap shows the VNC ports open and I can connect via the VNC client, but it just hangs at "Connection established." and no VNC display shows up. I've tried multiple VNC clients (TightVNC, TightVNC Java Console, RealVNC), but they all fail to connect. Does VNC through Xen require X to be started in order to function? I was under the impression that it would show the console screen, so I'm confused as to why all these issues are occurring. Thanks!
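
    A couple of hedged observations. For a paravirtualised guest like this one the framebuffer is normally defined only through the vfb line (the top-level vnc = 1 flag is really an HVM-style option, and having both can confuse the vfb/vkbd hotplug), and the VNC framebuffer is separate from the text console: it only shows something graphical if the guest starts X or directs its console to the virtual framebuffer, while installer and boot output live on xm console. A sketch of the relevant pieces:

        # in the domain config: drop "vnc = 1" and keep just the vfb definition
        vfb = [ 'type=vnc,vnclisten=0.0.0.0,vncdisplay=2,vncpasswd=110' ]

        # the framebuffer is served by dom0 on TCP port 5900 + vncdisplay
        vncviewer dom0-host:5902

        # the text console (what a text-mode guest actually prints) is reached separately
        xm console 110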

    Read the article

  • Web Interfaces not opening even after Port Forwarding is said to be working!

    - by Ahmad
    I'm encountering a strange problem which has me completely baffled, and which I haven't run into even after years of doing port forwarding .. ! I am hoping somebody here can help me solve this mystery .. :) My network configuration is as follows: I have a DSL modem (custom made and branded by my ISP) which is receiving a DSL stream ... it has an external IP which is visible to the world, say, 11.22.33.44 ... This modem has DHCP enabled and has an internal IP for itself, which is 192.168.1.1 .. it is connected to 2 laptops via an Ethernet cable .. Laptop 1 has IP 192.168.1.2, and Laptop 2 has IP 192.168.1.3 ... On Laptop 1, two applications are running, jDownloader and Media Player Classic, which have their web interfaces on ports 8765 and 13579, respectively ... I can access both of these web interfaces from Laptop 2 by opening these addresses: 192.168.1.2:8765 and 192.168.1.2:13579 ... both of their web interfaces open up, meaning the web interfaces are working fine .. Moving on, I now want to access these web interfaces from outside my network as well, and so I've configured port forwarding in my PTCL modem to forward all traffic on ports between 8000 and 14000 (both TCP and UDP) to IP 192.168.1.2 ... I have verified that port forwarding is working by testing it with PortForward.com's port checker tool, and this website too: http://www.yougetsignal.com/tools/open-ports/ When I use the website, if I'm running the applications on Laptop 2, the website reports that the port is open .. if I then close the application, the website reports the port is closed ... This makes sense, as nothing is listening on my machine in the latter case .. Also, if I disable port forwarding in my modem, again, the website reports the port is closed ... so the website's results seem to be okay ... The same as above can be said when I use PortForward.com's port checker tool ... So again, everything okay so far ... Now, here comes the problem !! ... Despite the above tools reporting that port forwarding is working, I am unable to open the web interfaces from outside my network ... So for example, if I try to browse 11.22.33.44:8765 or 11.22.33.44:13579, nothing opens in my browser ... But if I access these web servers locally from Laptop 3, by typing in 192.168.1.2:8765 or 192.168.1.2:13579, they open ... So where is the problem here ?? The tools report unanimously that port forwarding is working, and yet I am unable to open the web interfaces from outside the network .. Also note that I have disabled the firewall on my computer, and have also made sure that any option in the above programs (whose web interfaces I am trying to open) that says only local connections are to be accepted is disabled ... So what's the problem ... ?! Any ideas ??
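
    One detail worth checking, offered as a guess: if the browser test against 11.22.33.44 is run from a machine inside the same LAN, it only works when the modem supports NAT loopback (hairpinning), which many ISP-branded DSL modems do not, even though forwarding for genuinely external clients is fine. A quick sketch of a clean test:

        # from a host that is really outside the network (a remote shell, a phone off Wi-Fi, etc.)
        curl -I http://11.22.33.44:8765/
        curl -I http://11.22.33.44:13579/

        # from inside the LAN, test against the internal address instead
        curl -I http://192.168.1.2:8765/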

    Read the article

  • Should I use Evernote or Org-mode for taking notes?

    - by tobeannounced
    I am looking for an app that will help me manage my notes, and after coming across Org-mode, I was wondering whether Org-mode's functionality is strong enough that it can remove the need for me to use another note taking app (because org is more of a task management app), such as Evernote. My wishes for a note taking app are: can be accessed offline in some form, eg through an iPhone app or desktop client Org-Mode and Evernote can both do this, however it seems like MobileOrg is more aimed at tasks, rather than notes? If this is the case, I probably would use Evernote in addition to MobileOrg. I can clip web content into easily for research Evernote has the browser extension, how is it with Org-Mode? I know I can use c-c c-l, but how suited is it really for taking notes on stuff I am browsing in Chrome/Firefox? has voice notes on the iPhone and computer too, if possible Org-Mode cannot do this on the iPhone, on the computer could I record audio externally and then link the files in? I can add notes too on my iPhone & computer while not connected to the internet both can do this. The types of notes I am likely to have include: howtos/things I have learnt, documentation on my setup/stuff, research on things I may do in the future, ideas, and task specific notes. I have thought about where I would want to access each of these notes and will post that here if you think it would help. So, is Org-mode strong enough in note-taking and the requirements I listed that I can avoid the need to use a separate tool for taking notes?

    Read the article

  • Setup git repository on gentoo server using gitosis & ssh

    - by ikso
    I installed git and gitosis as described in this guide. Here are the steps I took: Server: Gentoo Client: Mac OS X 1) git install emerge dev-util/git 2) gitosis install cd ~/src git clone git://eagain.net/gitosis.git cd gitosis python setup.py install 3) added git user adduser --system --shell /bin/sh --comment 'git version control' --no-user-group --home-dir /home/git git In /etc/shadow now: git:!:14665:::::: 4) On local computer (Mac OS X) (local login is ipx, server login is expert) ssh-keygen -t dsa got 2 files: ~/.ssh/id_dsa.pub ~/.ssh/id_dsa 5) Copied id_dsa.pub onto server ~/.ssh/id_dsa.pub Added content from file ~/.ssh/id_dsa.pub into file ~/.ssh/authorized_keys cp ~/.ssh/id_dsa.pub /tmp/id_dsa.pub sudo -H -u git gitosis-init < /tmp/id_rsa.pub sudo chmod 755 /home/git/repositories/gitosis-admin.git/hooks/post-update 6) Added 2 params to /etc/ssh/sshd_config RSAAuthentication yes PubkeyAuthentication yes Full sshd_config: Protocol 2 RSAAuthentication yes PubkeyAuthentication yes PasswordAuthentication no UsePAM yes PrintMotd no PrintLastLog no Subsystem sftp /usr/lib64/misc/sftp-server 7) Local settings in file ~/.ssh/config: Host myserver.com.ua User expert Port 22 IdentityFile ~/.ssh/id_dsa 8) Tested: ssh [email protected] Done! 9) Next step. Here I have a problem: git clone [email protected]:gitosis-admin.git cd gitosis-admin SSH asked for a password for user git. Why should ssh allow me to log in as user git? The git user doesn't have a password. The ssh key I created is for the user expert. How should this work? Do I have to add some params to sshd_config?
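
    A password prompt for the git user means public-key authentication failed, and step 5 above looks like the likely culprit: the file copied to /tmp is id_dsa.pub, but gitosis-init is fed /tmp/id_rsa.pub. A hedged sketch of the fix, plus a way to see which key the Mac actually offers:

        # on the server: re-run the init with the DSA key that was really copied over
        sudo -H -u git gitosis-init < /tmp/id_dsa.pub

        # on the Mac: watch which identity is offered and whether the server accepts it
        ssh -v git@myserver.com.ua 2>&1 | grep -iE 'offering|accepted|denied'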

    Read the article

  • Strange IIS hits originating from Trend Micro

    - by TesterTurnedDeveloper
    I'm trying to trace through an error on an extranet site I maintain. I've had a look through the logs, and I'm seeing hits originating from these IP addresses: 216.104.15.130 216.104.15.138 216.104.15.142 216.104.15.13 150.70.84.49 150.70.84.44 Network-tools.com gives 'TREND MICRO INCORPORATED' as the owner of all these IPs. The hits fail as they aren't sending any cookies (and therefore aren't considered logged in). The hits are to pages with URLs that only a logged-in user would see, e.g. ImageEdit.aspx?ImageId=467424. That is, the server isn't guessing these URLs; someone would have to log in to the site to know they exist. Theory: the Trend antivirus client grabs URLs and sends them to Trend's servers for 'extra processing'? Googling around gives me this: http://www.forumpostersunion.com/showthread.php?p=51272 - where people report comment spam from these addresses. The article says their servers were hacked (a few months ago, presumably fixed now?). But a hacked server wouldn't explain how the URLs were plucked off users' PCs. Has anyone seen this before? Anything nefarious going on here?

    Read the article
