Search Results

Search found 4578 results on 184 pages for 'connections'.


  • IPTables forward from only one IP on my server

    - by user1307079
    I was able to get my server to forward connections on a certain port to a different IP, but when I add -d to specify an IP to forward from, it does not work. This is what I am trying:

        iptables -t nat -A PREROUTING -d 173.208.230.107 -p tcp --dport 80 -j DNAT --to-destination 38.105.20.226:80
        iptables -t nat -nvL

    It works fine without the -d. Here is my ifconfig dump:

        em1       Link encap:Ethernet  HWaddr 00:A0:D1:ED:D0:54
                  inet addr:173.208.230.106  Bcast:173.208.230.111  Mask:255.255.255.248
                  inet6 addr: fe80::2a0:d1ff:feed:d054/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:100058 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:18941701 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:12779711 (12.1 MiB)  TX bytes:825498499 (787.2 MiB)
                  Memory:fbde0000-fbe00000
        em1:9     Link encap:Ethernet  HWaddr 00:A0:D1:ED:D0:54
                  inet addr:173.208.230.107  Bcast:173.208.230.111  Mask:255.255.255.248
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  Memory:fbde0000-fbe00000
        em1:10    Link encap:Ethernet  HWaddr 00:A0:D1:ED:D0:54
                  inet addr:173.208.230.108  Bcast:173.208.230.111  Mask:255.255.255.248
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  Memory:fbde0000-fbe00000
        em1:11    Link encap:Ethernet  HWaddr 00:A0:D1:ED:D0:54
                  inet addr:173.208.230.109  Bcast:173.208.230.111  Mask:255.255.255.248
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  Memory:fbde0000-fbe00000
        em1:12    Link encap:Ethernet  HWaddr 00:A0:D1:ED:D0:54
                  inet addr:173.208.230.110  Bcast:173.208.230.111  Mask:255.255.255.248
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  Memory:fbde0000-fbe00000
        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
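
    Assuming the intent is a single PREROUTING rule for the address in the question, a DNAT to an external destination generally also needs forwarding enabled and source NAT so replies come back through the same box. A minimal, hypothetical sketch (the addresses are the ones from the question; MASQUERADE and ip_forward are standard pieces of such a setup, not something stated in the original post):

        iptables -t nat -A PREROUTING -d 173.208.230.107 -p tcp --dport 80 -j DNAT --to-destination 38.105.20.226:80
        # replies must come back through this host, so masquerade the forwarded traffic
        iptables -t nat -A POSTROUTING -d 38.105.20.226 -p tcp --dport 80 -j MASQUERADE
        # forwarding has to be enabled in the kernel
        echo 1 > /proc/sys/net/ipv4/ip_forward
        # check which rules are actually matching packets
        iptables -t nat -nvL PREROUTING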


  • then an error occurred during the login process - Connection Error 233

    - by scott brunner
    We have SQL Server 2008 installed on 64-bit Windows Server 2003. When we try to connect to the local SQL Server using SQL Server Management Studio at the console, we get the error:

        A connection was successfully established with the server, but then an error occurred during the login process.
        provider: Shared Memory Provider, error: 0 - No process is on the other end of the pipe.

    When we try TCP from the same local SSMS to the local server, we get the same error, but instead of the pipe message it's something like "connection forcibly closed". Now, here is the strange part: we CAN connect to this SQL Server from any other machine on the network using SSMS, AND we CAN'T connect to ANY SQL Server from the problem server. So it seems the SQL Server instance is fine and accepting remote connections; however, SSMS on that machine will not connect to any SQL Server, even remotely. When we try an ADO.NET connection from C# remotely we can connect; run that same code on the console of the trouble server and we get the same errors. How can this be solved?
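
    Since the failures all seem to come from the client stack on the problem server rather than from the instance, one way to narrow it down is to force each client protocol in turn with sqlcmd from that console. A diagnostic sketch (the lpc:/tcp:/np: prefixes select shared memory, TCP and named pipes respectively; a default instance on the local machine is assumed):

        sqlcmd -S lpc:localhost -E -Q "SELECT @@SERVERNAME"
        sqlcmd -S tcp:localhost -E -Q "SELECT @@SERVERNAME"
        sqlcmd -S np:localhost -E -Q "SELECT @@SERVERNAME"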


  • Different external IP addresses from different sites

    - by user630286
    My router is ClearOS 6 (CentOS 6). In my router, I have two external (internet) network connections from two ISPs. The primary connection is eth2, connected to a cable modem, and the second one is ppp0, connected to a DSL modem. I have assigned eth2 as the primary connection (with a high metric value); in fact this is done through ClearOS's multi-WAN web interface. I have a test in my Nagios to monitor whether the primary connection is in use. This check is based on the result of curl ifconfig.me. But it seems that ifconfig.me is always giving the IP address of my secondary connection. I tested it through a browser: yes, ifconfig.me gives the secondary internet's (ppp0) IP address, but whatismyipaddress.[com|org] give my primary IP address (eth2). I checked the default route on the router through ip route list 0/0, which also shows the primary connection (eth2) as the default route. traceroute www.google.com and traceroute ifconfig.me both seem to trace through the primary connection (eth2). As our secondary internet connection has only got a limited download allowance, I don't want to end up having to pay a large sum at the end of the month. Has somebody got an idea why ifconfig.me shows my secondary address? What is the best way to ensure that my router (and thus the LAN) uses the right internet connection?
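
    A quick way to see which route and source address the router itself would pick for that destination, and to compare what each uplink reports, is to ask the kernel for the chosen route and then repeat the check bound to each interface. A diagnostic sketch (it assumes dig is installed and that ifconfig.me resolves to a single address):

        ip route get $(dig +short ifconfig.me | head -n1)
        curl --interface eth2 ifconfig.me
        curl --interface ppp0 ifconfig.me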


  • Strange WiFi problem - network is "not connected" but works

    - by GalacticCowboy
    Background I'm using a Windows XP tablet PC connected to my home network. I do broadcast the SSID, but otherwise the wireless network is locked down as tight as I can make it - WPA2 with a strong key, MAC filtering, etc. I've had this computer for about 4 years and the router for about 6 months. Before that, the previous router was set up in the same way, and I've never had any particular problems. Problem This morning - as I type this, as a matter of fact - Windows is reporting that none of my network connections are connected. Yet somehow my Internet connectivity still works! ipconfig reports a valid IP address, domain, etc., even though Windows apparently doesn't know about it. I tried repairing the connection and it had no effect - the repair eventually timed out. I'm pretty sure that I could reboot the computer and/or the router, etc. and it would fix the problem. However, I'm more interested in knowing if any of you have ever seen anything like this, and what might have caused it? Since it works I'm not inclined to mess with it too much. My concern is that it's a precursor to bigger problems.


  • Virtual Machine Network Services causes networking problems in Vista Enterprise 64 bit install

    - by Bill
    I have a quad-core/8GB Vista Enterprise 64-bit (SP2) installation on which I installed Virtual PC 2007. My problem is the opposite of everything I found searching around the Internet, where everybody has problems making network connections from their guest VM. When Virtual Machine Network Services is enabled in the protocol stack for my network card across a reboot, it causes access problems to the network: the time to log in using a domain credentialed account is upwards of 3 minutes, and after reaching the desktop the Network and Sharing Center shows that my connection to the domain is unauthenticated. Disabling and re-enabling Virtual Machine Network Services (uncheck in network properties/apply/recheck/apply) fixes the problem, and as long as I have VMNS disabled when I shut down, the restart runs smoothly. I just have to remember to enable it after login and disable it before shutdown. I have uninstalled and re-installed Virtual PC 2007 multiple times with restarts between; the install consists of SP1 plus a KB patch for the guest resolution fix. Any help would be greatly appreciated. Some additional information: at one point during my hair-pulling and teeth-gnashing with this, I tried to ping my primary DC and observed some weird responses (our DC is 10.10.10.25, my dynamic IP was 10.10.10.203):

        Reply from 10.10.10.203: Destination host unreachable.
        Request timed out.
        Reply from 10.10.10.25: ...

    This is not consistently repeatable, but I thought it might strike a chord with someone.


  • Secure data from a server to a workstation using jumper hosts

    - by apalsson
    Hello. I have a WWW server, and my problem is that the content is sensitive and should not be accessible to people without proper credentials. How can I improve ease of use while still maintaining security in the following scenario? The server is accessed through a "jumper host": the client connects to the jumper using a VPN connection and uses Remote Desktop to access the jumper. From the jumper he uses Remote Desktop again to access the server. Finally, on the server the user can access content using a WWW browser. Every step from the VPN client to the WWW browser requires authentication using a SmartCard token. This seems quite secure to me: content only gets mirrored over the Remote Desktop session between server and jumper, so there are no cached files to worry about, and the connection between jumper and client is protected using VPN (SSL), so no eavesdropping. But it is quite cumbersome for the clients, with many steps and connections to open. :( So, how can I improve the user experience of accessing my server without compromising security? Thanks.


  • Is a wildcard SSL the only option in this multiple VHOST/1 IP setup?

    - by solsol
    I have a web app set up that needs the following SSL encryption:

        secure.myapp.com     -> SSL
        www.myapp.com/login  -> SSL
        www.myapp.com/signup -> SSL

    If I'm correct, I could run one SSL certificate for all of my www.myapp.com/* pages. The problem is that I have a subdomain called secure.myapp.com that needs to be on a separate IP address to work with SSL. Right now I have one server, one public IP and a number of virtual hosts in Apache to make this work. I'd rather not buy an expensive wildcard SSL certificate to secure just one subdomain. What is your advice on this? If it IS the only solution, any tips on getting a reasonably priced wildcard SSL cert are appreciated. I have read about SNI, which allows the use of multiple SSL certs, but not all browsers (IE6!) support this. Since we are building a web app for the public, we cannot have IE6 users running over unencrypted connections. Thanks for your help.
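
    If SNI does turn out to be acceptable, a quick way to check how a given vhost setup behaves is to request the certificate with and without the SNI server name and compare the subjects. A diagnostic sketch using the hostnames from the question:

        openssl s_client -connect www.myapp.com:443 -servername secure.myapp.com </dev/null 2>/dev/null | openssl x509 -noout -subject
        openssl s_client -connect www.myapp.com:443 </dev/null 2>/dev/null | openssl x509 -noout -subject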


  • Internet Troubles - PPPoE vs PPPoA?

    - by AkkA
    I have been having some internet troubles at home (ADSL2+ connection in Australia). We get random drop-outs from the authentication connection: it will keep the connection to the DSL service, but we lose authentication and either have to restart the router/modem (it's a combined unit, a Belkin one, not sure of the model number) or unplug the phone cable, wait about 30 seconds and plug it in again. I've called the ISP (Telstra) a few times, but they only offer limited support when we don't use their supported hardware. Apparently something had happened on their side, they checked the box again (at least it sounded that simple), and told me it would be fine. It wasn't. I've replaced all the filters around the house, but that didn't help either. We do live a little bit away from the exchange (we get a sync speed of about 3000/900), so I thought it could be due to line noise, but that hasn't helped. Telstra allow both PPPoE and PPPoA connections (which I'm configuring through my router; I don't have software on the PC side). I've been running PPPoA the whole time; would it make any difference changing it to PPPoE? If not, are there any other theories as to why we would be experiencing these drop-outs? It had been fine for at least 12 months, then this suddenly started about 2 months ago.


  • How do I connect over SSH without the password being requested every time? - I already followed some answers here but it doesn't work

    - by MEM
    Mac OS X Lion 10.7.3.
    1) On the host, I've created an authorized_keys file inside the .ssh folder, by doing: touch authorized_keys
    2) I've copied my public ssh key into the host's .ssh folder by doing: scp ~/.ssh/mykey.pub [email protected]:/home/userhost/.ssh/mykey.pub
    3) I've placed its contents inside the authorized_keys file by doing: cat mykey.pub >> authorized_keys
    4) Then I've removed the mykey.pub file: rm mykey.pub
    5) On my terminal, locally, inside my ~/.ssh folder I ran: ssh-add mykey (notice that it is without the .pub extension);
    6) I've closed and opened the terminal again.
    When I first connect to this host, it gets added to the known_hosts file inside ~/.ssh; I've opened known_hosts in pico and the hash is there. Still, every time I connect by doing ssh [email protected] it requests a password! What am I missing here?
    UPDATE: I've done EVEN TWO MORE THINGS here:
    7) Set your key to be the default identity - if it doesn't exist, create it: touch ~/.ssh/config and place inside it the following line: IdentityFile ~/.ssh/yourkeyname (id_rsa is normally your default key; you should switch it to your key. This tells outgoing ssh connections to use this as the default identity.)
    8) Add a bash process to your ssh-agent: ssh-agent bash, then ssh-add ~/.ssh/yourkeyname
    Lisinge's answer helped but it's not definitive: if we restart the machine, the password gets prompted again!!! How can we debug this? What can we do here? How can we check where this process is failing?
    UPDATE 2: If I use ssh -v -i <keyfile> [email protected] I get, among other things:

        OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011
        Warning: Identity file yourkeyname not accessible: No such file or directory.

    What does this message refer to? Is the identity file not accessible on the localhost, or is it not accessible on the remote host? Please advise.
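
    A minimal sketch of the usual checklist for this kind of setup (user@host is a placeholder for the real account, and the key name mykey is taken from the question): the server-side permissions have to be strict, -i has to point at the private key with a path that resolves locally (the "Identity file ... not accessible" warning is printed by the local ssh client), and on OS X the passphrase can be stored in the keychain so it survives reboots, if the Apple-patched ssh-add supports -K:

        ssh user@host 'chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys'
        ssh -v -i ~/.ssh/mykey user@host
        ssh-add -K ~/.ssh/mykey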


  • Windows Vista/7 dropping Mac Server share points

    - by Hooligancat
    My Windows Vista and Windows 7 clients are having problems maintaining access to SMB shares on a Mac server. The initial connection to the server appears to be OK, as the Windows clients can see all of the server share points. However, the client randomly drops a couple of the server share points, although the clients can still see the server. For example, if I have the following share points on the Mac server:

        Share A
        Share B
        Share C
        Share D
        Share E

    the Windows client can see and access these shares most of the time. But randomly a couple of the shares will just get dropped or go missing from the Windows client's view, so I end up with something like:

        Share B
        Share D
        Share E

    All the share points are established in the same way with the same permission settings. My Mac OS X Server is set up with the following for SMB:

        SMB sharing enabled
        Standalone Server
        Workgroup of `CORPORATE`
        Allow Guest Access = YES
        Client connections limit = 100
        Authentication: NTLMv2 & Kerberos and NTLM
        Code Page is Latin US (437)
        This is a workgroup master browser
        WINS registration is set to Enable WINS server (tried with the setting off)
        Enable virtual share points for homes = YES

    I noticed in my SMB file service log that the clients appear to connect OK, but I get the following error, which implies a reset by either the server or the client:

        /SourceCache/samba/samba-187.9/samba/source/lib/util_sock.c:read_data(534) read_data: read failure for 4 bytes to client 192.168.0.99. = Connection reset by peer

    I am a bit stumped as to which direction to turn to try and get this to resolve. Continued attempts to access the server from the client will reconnect to the share points, but they inevitably get dropped again in the near future. Any and all help much appreciated.


  • How to send from my Z88 to my PC

    - by Bevan
    I've got a Cambridge Z88 that I want to get working with my PC. Around 6 years ago - in 2004 - I made heavy use of my Z88 to do a whole bunch of writing on the train while commuting to and from work. The Z88 is solid state, lightweight and has a full size silent keyboard, so it works very well as a writing instrument. I still have the serial cable I soldered up back then and used successfully in 2004. It has these connections:

        Z88           9 pin
        -----         -------
        2 TxD ------>  RxD 2
        3 RxD <------  TxD 3
        7 GND <----->  GND 5
        4 RTS ------>  CTS 8
        5 CTS <-+      RTS 7
        8 DCD <-+----  DTR 4
        9 DTR ----+->  DCD 1
                  +->  DSR 6

    Unfortunately, I haven't been able to find my notes from 2004 that describe how I got it to work back then. I've spent several hours trying to Google a result, but to no avail. I'm pretty sure the cable is fine - after all, it's what I used successfully six years ago, and I've checked it out with a multimeter - so I'm focusing on the PC end of things, which is where I'd like some assistance.
    Q1: In my recent attempts, I've been using both Hyperterminal (as built into Windows XP) and the command line (copy com2: con:), but with no success. What's a good (better!) serial communications application to use? Is there one that allows me to see as deep as the signalling that's occurring on the wire?
    Q2: If you have a Z88 that works correctly with your PC, what software do you use on the PC end, and what's the pinout of your cable?
    I'm pretty sure that the Z88 itself is working properly: when using the built-in Import/Export tool to send a file, I see different behaviour when my serial cable is connected compared to disconnected. When disconnected, the transmission appears to work, with a progress meter counting up and then finishing; when connected, nothing happens 'cept a timeout if I wait long enough.


  • A little guidance setting up FTP server authentication on Windows Server 2008 R2 standard?

    - by Ropstah
    I have a (clean) server running Windows Server 2008 R2 Standard. I would just like to use it for serving a website and an FTP server through IIS. IIS is installed and serves my website properly. I have now added an FTP site, but when I try to log on using my user/pass I get the following error:

        530 User cannot login

    From this article (http://support.microsoft.com/kb/200475) I understand that these four causes can be pointed out:

        1. The "Allow only anonymous connections" security setting has been turned on in the Microsoft Management Console (MMC). - Not the case.
        2. The username does not have the "Log on locally" permission in User Manager. - The user is in the Users group, however I'm not able to log on through RDP. I tried configuring this by following this article through GPMC, however that only works when I'm logged in as a domain user on a domain controller, which I'm not: I'm logged in as administrator.
        3. The username does not have the "Access this computer from the network" permission in User Manager. - Not sure what this implies...?
        4. The domain name was not specified together with the username (in the form of DOMAIN\username). - Tried adding the server name: server\username, not working...

    I am an absolute server noob and I'd just like to be able to connect through FTP... Any guidance is highly appreciated!


  • Effects of internet connection speeds on server queries

    - by SephMerah
    Can my internet connection significantly affect queries run in phpMyAdmin? I am currently 18 down and 30 up. I switched internet connections today and noticed a deep drop in query performance. The query that I am running is SELECT * FROM table. Simple. The table has one row of data. The MySQL server is on the same server as everything else. It is a VPS; GoDaddy hosts. I don't have any other information. CentOS 6.3, MySQL 5.1, phpMyAdmin 3.4. Okay, I used Google's developer tools to inspect the XHR going out and coming in, and this is what it reported:

        {"success":true,"message":"<div class=\"success\">Your SQL query has been executed successfully ( Query took 0.0033 sec )<\/div>","sql_query":"<div id=\"result_query\" align=\"\">\n<div class=\"success\">Your SQL query has been executed successfully ( Query took 0.0033 sec ) SNIP..................."}

    So apparently my server is fine. The strange thing, though, is that the returned XHR comes back as soon as I execute the query on the page - within less than a second - yet phpMyAdmin does not display the result immediately. I am going to try a re-install.


  • Windows 8 Internet Explorer 11 proxy automation script

    - by Stefan Bollmann
    Similar to this post, I'd like to change my proxy settings using a script. However, it fails: when I am behind the proxy, IE does not connect to the internet. Here I try the first solution from craig:

        function FindProxyForURL(url, host) {
            if (isInNet(myIpAddress(), "myactualip", "myactualsubnetip"))
                return "PROXY proxyasshowninpicture:portihavetouseforthisproxy_see_picture";
            else
                return "DIRECT";
        }

    This script is saved as proxy.pac in c:\windows and my configuration* in LAN settings is:

        Automatically detect settings: no
        Use automatic configuration script: yes, file://c:/windows/proxy.pac
        Proxy server: no

    So, what am I doing wrong?
    ---------------- update --------------
    However, when I set up a proxy in my LAN configuration (IE -> Internet Options -> Connections -> LAN Settings):

        check: Use a proxy server for your LAN
        Address: <a pingable proxy>
        Port: <portnr>

    everything is fine for this environment. Now I try a simpler script like:

        function FindProxyForURL(url, host) {
            return "PROXY <pingable proxy>:<portnr>; DIRECT";
        }

    With the configuration described above** I am not able to get through the proxy.


  • Datacentre Rack naming convention with flexibility for reassignment of server roles

    - by g18c
    We are just shifting across to a new rack and until now have used names of cartoon characters. This is not going to work anymore, and we need a better naming convention. Physically I would like to name the servers by location, and then have an alias for each server's actual function/customer, i.e.:

        Physical name: LONS1R1SVR1, meaning London, suite 1, rack 1, server 1
        Customer alias: since the servers can be reassigned from time to time, for the above physical server name I would have an alias as a column in a spreadsheet, set to the customer's host name, i.e. wwww.customerserver1.com
        Patching: for patching, I am looking at labeling up the physical connections, i.e. LON1S1R1SVR1-PWR1, LON1S1R1SVR1-PWR2, LON1S1R1SVR1-ETH0, LON1S1R1SVR1-KVM

    Ultimately, if I am labeling cables, I really want to avoid putting LON1S1R1SQLSVR on any patch cord, in case the server gets formatted and changed from a SQL server to a WWW server, which would mean relabeling all the patch cords as well. In addition, throwing in virtual machines, I have got confused very quickly. I appreciate that it may be confusing having both a physical host name and a customer alias. Please let me know what you run with, and any other standards or best practices that I can follow.


  • The best way to hide data: Encryption, Connection, Hardware

    - by Tico Raaphorst
    Say I have a VPS which I own now, and I wanted to make the most secure and stable system that I can. How would I do that? Just to try, I installed Debian 7 with LVM encryption via the installer: you get two partitions, a /boot and an encrypted partition. When booting you will be prompted to fill in the password to unlock the encrypted partition, which then contains more partitions like /home, /usr and swap space that mount automatically. Now, I do need to fill in the password over a VNC-SSL connection via the control panel website of the VPS hoster, so they can see my disk encryption password if they want to; they have the option to look at what I have as data, right? (Data encryption on a VPS - is it possible to have a 100% secure virtual private server?) So let's say I have my server and it is sitting, well locked, next to me, with the following layers covered, all on the same system:

        BIOS (you have to replace the BIOS)
        RAID (you have to unlock the RAID config)
        disk (you have to unlock the disk encryption)
        file-level zip/tar (files are stored in encrypted archives) which are in some other encrypted file mounted as a partition (archives mounted as partitions)

    It will be slow, but it would be extremely difficult to crack the encryption, say if you stole the server. Then I only need to make connections like SSH safer with single-use passwords, and block all incoming and outgoing connections but give one "exception" for myself - and maybe one more in case I somehow lose my identity for the "exception". What other overkill but realistic security options are available? I have heard about SELinux.


  • Large, high performance object or key/value store for HTTP serving on Linux

    - by Tommy
    I have a service that serves images to end users at a very high rate using plain HTTP. The images vary between 4 and 64 kbytes, and there are 1,300,000,000 of them in total. The dataset is about 30 TiB in size, and changes (new objects, updates, deletes) make up less than 1% of the requests. The number of requests per second varies from 240 to 9000 and is dispersed pretty much all over, with few objects being especially "hot". As of now, these images are files on an ext3 filesystem distributed read-only across a large number of mid-range servers. This poses several problems:

        Using a filesystem is very inefficient, since the metadata size is large, the inode/dentry cache is volatile on Linux, and some daemons tend to stat()/readdir() their way through the directory structure, which in my case becomes very expensive.
        Updating the dataset is very time consuming and requires remounting between set A and B.
        The only reasonable handling is operating on the block device for backup, copying, etc.

    What I would like is a daemon that:

        speaks HTTP (get, put, delete and perhaps update)
        stores data in an efficient structure; the index should remain in memory, and considering the number of objects, the overhead must be small
        can handle massive numbers of connections with slow (if any) time needed to ramp up
        reads the index into memory at startup
        statistics would be nice, but not mandatory

    I have experimented a bit with Riak, Redis, MongoDB, Kyoto and Varnish with persistent storage, but I haven't had the chance to dig in really deep yet.


  • CentOS 5.7 issues with iptables

    - by Corey Whitaker
    I'm trying to set up iptables on a new CentOS server. This server will function as an FTP server that I need to be accessible from the outside; however, I want to lock down SSH to only accept internal IP connections. I need to allow SSH for 10.0.0.0/8 and 172.16.132.0/24. Below I've posted my /etc/sysconfig/iptables file. Whenever I apply this, I essentially lock myself out and I have to access it via console using vSphere. Can somebody show me what I'm doing wrong? I'm connecting from my laptop with an IP of 172.16.132.226.

        *filter
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [115:15604]
        :RH-Firewall-1-INPUT - [0:0]
        -A INPUT -j RH-Firewall-1-INPUT
        -A FORWARD -j RH-Firewall-1-INPUT
        -A RH-Firewall-1-INPUT -i lo -j ACCEPT
        -A RH-Firewall-1-INPUT -p icmp -m icmp --icmp-type any -j ACCEPT
        -A RH-Firewall-1-INPUT -p esp -j ACCEPT
        -A RH-Firewall-1-INPUT -p ah -j ACCEPT
        -A RH-Firewall-1-INPUT -d 224.0.0.251 -p udp -m udp --dport 5353 -j ACCEPT
        -A RH-Firewall-1-INPUT -p udp -m udp --dport 631 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m tcp --dport 631 -j ACCEPT
        -A RH-Firewall-1-INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A RH-Firewall-1-INPUT -s 10.0.0.0/8 -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
        -A RH-Firewall-1-INPUT -s 172.16.132.0/24 -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 20 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 21 -j ACCEPT
        -A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
        COMMIT
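
    When changing firewall rules on a remote box over SSH, one common safety net is to schedule an automatic restore of the known-good rules before loading the new ones, and cancel it only once the new rules are confirmed to still let you in. A sketch, assuming the at daemon is running on the CentOS box:

        iptables-save > /root/iptables.good
        echo "iptables-restore < /root/iptables.good" | at now + 5 minutes
        service iptables restart
        # if SSH still works after the restart, cancel the pending restore with atq / atrm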


  • How to limit reverse SSH tunnelling ports?

    - by funktku
    We have a public server which accepts SSH connections from multiple clients behind firewalls. Each of these clients creates a reverse SSH tunnel using the ssh -R command from their web servers on port 80 to our public server. The destination port (at the client side) of the reverse SSH tunnel is 80 and the source port (at the public server side) depends on the user. We are planning on maintaining a map of port addresses for each user. For example, client A would tunnel their web server at port 80 to our port 8000; client B from 80 to 8001; client C from 80 to 8002.

        Client A: ssh -R 8000:internal.webserver:80 clienta@publicserver
        Client B: ssh -R 8001:internal.webserver:80 clientb@publicserver
        Client C: ssh -R 8002:internal.webserver:80 clientc@publicserver

    Basically, what we are trying to do is bind each user to a port and not allow them to tunnel to any other ports. If we were using the forward tunneling feature of SSH with ssh -L, we could control which port may be tunneled by using the permitopen=host:port configuration. However, there is no equivalent for reverse SSH tunnels. Is there a way of restricting reverse tunneling ports per user?
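
    For what it's worth, newer OpenSSH releases (7.8 and later, which postdate the version this question was written against) do add a reverse-tunnel counterpart to permitopen: PermitListen in sshd_config and permitlisten= in authorized_keys. A sketch of how the per-user mapping above could look with that option, assuming such a version is available on the public server:

        # sshd_config on the public server
        Match User clienta
            PermitListen 8000
        Match User clientb
            PermitListen 8001
        Match User clientc
            PermitListen 8002

        # or per key in ~/.ssh/authorized_keys
        permitlisten="8000" ssh-rsa AAAA... clienta-key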


  • Mac updated just now, postgres now broken

    - by user52224
    I run Postgres 9.1 / Ruby 1.9.2 / Rails 3.1.0 on a MacBook Air for local dev. It's all been running smoothly for months (though this is the first time I've done development on a Mac). It's a MacBook Air from last year, and today I got the Mac OS X software update message as I have a few times before; my system downloaded approx 450 MB of updates and restarted. It now says it's on OS X 10.7.3. Point is, Postgres has stopped working: when I start my thin server (mirroring Heroku Cedar) as normal and then browse to my Rails app I get:

        PG::Error
        could not connect to server: Permission denied
        Is the server running locally and accepting connections on Unix domain socket "/var/pgsql_socket/.s.PGSQL.5432"?

    What happened? After browsing around a few questions I'm still confused, but here's some extra info:

        Running psql from the command line gives the same error
        I can run pgAdmin 3, connect via it and run SQL with no problems
        Running which psql shows the version as /usr/bin/psql
        I created a PostgreSQL user back when I got the Mac (it's always been on Lion); I've no idea why, almost certainly I was following a tutorial which I neglected to store in my notes. Point is, I am aware there is a _postgres user as well.

    I know it's rubbish, but apart from a note on passwords, I don't have any extra info on how I configured Postgres - though the obvious implication is that I did not use the _postgres user. Anyone have suggestions or information on what might have changed / what I can try to debug and fix? Thanks.
    Edit: Playing around based on this question and answer: http://stackoverflow.com/questions/7975414/check-status-of-postgresql-server-mac-os-x, see this string of commands:

        $ sudo su postgreSQL
        bash-3.2$ /Library/PostgreSQL/9.1/bin/pg_ctl start -D /Library/PostgreSQL/9.1/data
        pg_ctl: another server might be running; trying to start server anyway
        server starting
        bash-3.2$ 2012-04-08 19:03:39 GMT FATAL: lock file "postmaster.pid" already exists
        2012-04-08 19:03:39 GMT HINT: Is another postmaster (PID 68) running in data directory "/Library/PostgreSQL/9.1/data"?
        bash-3.2$ exit
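
    Since the error mentions the /var/pgsql_socket directory while the server install lives under /Library/PostgreSQL, one thing worth checking is which psql binary is being picked up and where the running server actually listens. A diagnostic sketch (the paths are the ones mentioned in the question; forcing TCP with -h localhost sidesteps the socket entirely, and /tmp is only a guess at the server's real socket directory):

        which -a psql
        ps aux | grep postgres
        psql -h localhost -p 5432 -U postgres
        # or point the client at the socket directory the server really uses, e.g.
        psql -h /tmp -p 5432 -U postgres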


  • Moving Microsoft Exchange server to the private network.

    - by Alexey Shatygin
    In one of the offices, we have a 50-computer network which had only one server machine, running:

        Windows 2003 Server
        Microsoft ISA Server
        Microsoft Exchange 2003

    This server worked as a gateway (proxy server), mail server, file server, firewall and domain controller. It had two network interfaces, one for the WAN (let's say 222.222.222.222) and one for the LAN (192.168.1.1). I set up a Linux box to be the gateway (without a proxy), so the Linux box now has the following interfaces: 222.222.222.222 (our external IP, which we removed from the Windows machine) and 192.168.1.100 (internal IP). But we need to keep the old Windows server as a mail server and a proxy for some of our users until we prepare another Linux machine for that, so I need the mail server on that machine to be available from the Internet. I set up iptables rules to redirect all incoming connections on ports 25 and 110 of our external IP to 192.168.1.1:25 and 192.168.1.1:110. When I telnet to our SMTP service (telnet 222.222.222.222 25) I get the greeting from our Windows server's (192.168.1.1) SMTP service, and that works fine. But when I telnet to the POP3 service (telnet 222.222.222.222 110) I only get a blank black screen, and the connection seems to disappear if I press any key. I've checked the ISA rules - everything seems to be the same for ports 110 and 25. When I telnet to port 110 of our Windows server from our new gateway machine, like this: telnet 192.168.1.1 110, I get access to its POP3 service:

        +OK Microsoft Exchange Server 2003 POP3 server version 6.5.7638.1 (...) ready.

    What should I do to make the POP3 service available through our new gateway?
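
    For reference, a DNAT setup of this kind usually needs matching FORWARD rules, and the replies have to route back through the Linux gateway. A minimal sketch using the addresses from the question (eth0 as the WAN interface is an assumption), where the SNAT line matters if the Windows box would otherwise send its replies out through a different default gateway:

        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 25 -j DNAT --to-destination 192.168.1.1:25
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 110 -j DNAT --to-destination 192.168.1.1:110
        iptables -A FORWARD -p tcp -d 192.168.1.1 -m multiport --dports 25,110 -j ACCEPT
        iptables -t nat -A POSTROUTING -p tcp -d 192.168.1.1 -m multiport --dports 25,110 -j SNAT --to-source 192.168.1.100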


  • how to set up dual wireless routers (G and N) with single broadband connection

    - by user15973
    A local cafe has an 802.11g wireless router attached to either an 8 or 12 Mbps broadband connection. Although many customers still have -g devices, an increasing number are showing up with 802.11n devices (e.g. Mac laptops, iPads). The cafe owner is content with his router's area coverage, but he would like his customers to be able to take advantage of the higher -n download speeds. He could simply replace the -g router with a -n router, but reportedly -n routers slow down considerably when servicing both -g and -n connections. Another option would be to buy the -n router, run an Ethernet cable between the two routers, and (of course) hook up the broadband connection to one of the two routers. Still another option would be to attach an ordinary, wired switch to the broadband connection and then attach both routers to the switch. Aside from the cost issue, what are the tradeoffs of those approaches? Which would you recommend, and why? Is there a better option than the ones I mentioned? Thank you in advance for your advice.


  • Connection between Asp.Net and Oracle 10g Express Edition

    - by l3gion
    Hello, I'm struggling to find a way to connect my ASP.NET + C# application to my Oracle 10g Express Edition. Here's my scenario: I'm on Mac OS and I have 2 virtual machines, one for Win 7 (the VS 2010 app) and another with a Parallels Virtual Appliance with Oracle 10g Express Edition 1.1. Which provider (OleDb, ODP.NET, etc.) should I use? How do I make the connection to the server in C#? Right now I have this:

        <appSettings>
            <add key="conn" value="Data Source=10.211.55.11;Persist Security Info=True;User ID=l3gion;Password=l3gion;" />
        </appSettings>

    And in the .cs file:

        SqlCommand cmd = new SqlCommand("insert_thing", new SqlConnection(ConfigurationManager.AppSettings["conn"]));
        cmd.CommandType = CommandType.StoredProcedure;

    (insert_thing is a stored procedure.) Using this I got this error:

        A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)

    I've searched for some possible solutions and tried some, including: firewall disabled, allowing remote connections to Oracle Express Edition using this command line ("EXEC DBMS_XDB.SETLISTENERLOCALACCESS(FALSE);")... The error persists. Can anyone guide me in the right direction? I'm a newbie with this type of thing. Thank you for your patience. Regards


  • Issue with InnoDB engine while enabling and disabling [ skip-innodb ]

    - by Ahn
    How do I enable InnoDB, which was previously disabled with the skip-innodb option?
    Case 1: Disabled InnoDB with the skip-innodb option, and SHOW ENGINES gives the following:

        Engine | Support ...
        InnoDB | NO ......

    Case 2: As I want to enable InnoDB, I commented out the skip-innodb option (#skip-innodb) and restarted. But now SHOW ENGINES is not even showing the InnoDB engine in the list. Why?

        MySQL version: 5.1.57-community-log
        OS: CentOS release 5.7 (Final)

    Log:

        120622 13:06:36 InnoDB: Initializing buffer pool, size = 8.0M
        120622 13:06:36 InnoDB: Completed initialization of buffer pool
        InnoDB: No valid checkpoint found.
        InnoDB: If this error appears when you are creating an InnoDB database,
        InnoDB: the problem may be that during an earlier attempt you managed
        InnoDB: to create the InnoDB data files, but log file creation failed.
        InnoDB: If that is the case, please refer to
        InnoDB: http://dev.mysql.com/doc/refman/5.1/en/error-creating-innodb.html
        120622 13:06:36 [ERROR] Plugin 'InnoDB' init function returned error.
        120622 13:06:36 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        120622 13:06:36 [Note] Event Scheduler: Loaded 0 events
        120622 13:06:36 [Note] /usr/sbin/mysqld: ready for connections.
        Version: '5.1.57-community-log' socket: '/data/mysqlsnd/mysql.sock1' port: 3307 MySQL Community Server (GPL)
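
    The log lines point at the InnoDB data and log files rather than at the config option itself, so a quick sanity check is to confirm what the running config actually contains and what files are sitting next to the data. A diagnostic sketch (/etc/my.cnf and the /data/mysqlsnd directory are assumptions based on the socket path in the log, since the layout is clearly non-standard):

        grep -n "innodb" /etc/my.cnf
        ls -lh /data/mysqlsnd/ib_logfile* /data/mysqlsnd/ibdata* 2>/dev/null
        mysql -S /data/mysqlsnd/mysql.sock1 -e "SHOW ENGINES"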


  • Ping with explicit next-hop selection (aka Monitoring multiple default gateways)

    - by Michuelnik
    I have a Linux (Debian) router with two internet connections, (A) and (B). (A) is preferred, (B) is fallback. I want to monitor the internet connection (and not only the availability of the gateways!) and change the default route appropriately:

        1. If (A) is not providing internet, switch to (B).
        2. If (A) is providing internet again, switch back to (A).

    The only problem I have is in case (2): my routing table points towards a working internet, so I cannot easily detect whether internet is working over link (A) again. I am searching for a ping or traceroute (or other diagnostic tool) which can select the next hop explicitly. ping -r looks promising, but can only ping a host on the LAN. (It only has to write another destination address in the packet, damnit!) traceroute -g gateway looks even more promising and nearly does what I want - but it sets source routing options which my next hops deny. (Not within my administrative boundary...) I just want a ping that can:

        select a source interface (and address)
        select a next hop on that interface
        ping any arbitrary IP address

    I could do evil trickery with policy-based routing, but that would have production impact for all users. I would like to see a side-effect-free solution.
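
    For what it's worth, policy-based routing can be scoped so that it only affects the monitoring traffic itself and leaves the production default route alone. A sketch with placeholder values (the gateway and the local address on link A are assumptions to be filled in):

        # a dedicated table whose default route always points at link A's gateway
        ip route add default via <gateway_A> dev <iface_A> table 100
        # only packets sourced from link A's own address use that table
        ip rule add from <local_addr_on_A> lookup 100
        # the probe: bind the source address so it follows table 100
        ping -c 4 -I <local_addr_on_A> 8.8.8.8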

