Search Results

Search found 4578 results on 184 pages for 'connections'.


  • How to limit reverse SSH tunnelling ports?

    - by funktku
    We have a public server which accepts SSH connections from multiple clients behind firewalls. Each of these clients creates a reverse SSH tunnel using the ssh -R command from their web servers at port 80 to our public server. The destination port (at the client side) of the reverse SSH tunnel is 80 and the source port (at the public server side) depends on the user. We are planning on maintaining a map of port addresses for each user. For example, client A would tunnel their web server at port 80 to our port 8000; client B from 80 to 8001; client C from 80 to 8002.
    Client A: ssh -R 8000:internal.webserver:80 clienta@publicserver
    Client B: ssh -R 8001:internal.webserver:80 clientb@publicserver
    Client C: ssh -R 8002:internal.webserver:80 clientc@publicserver
    Basically, what we are trying to do is bind each user to a port and not allow them to tunnel to any other ports. If we were using the forward tunnelling feature of SSH with ssh -L, we could restrict which port can be tunnelled using the permitopen=host:port option. However, there is no equivalent for reverse SSH tunnels. Is there a way of restricting reverse tunnelling ports per user?
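
    One possible approach, assuming the public server can run OpenSSH 7.8 or later: those versions add a permitlisten option in authorized_keys, the ssh -R counterpart of permitopen. A minimal sketch of per-user entries (key material elided):

      # /home/clienta/.ssh/authorized_keys - client A may only bind port 8000
      permitlisten="8000" ssh-rsa AAAA... clienta-key
      # /home/clientb/.ssh/authorized_keys - client B may only bind port 8001
      permitlisten="8001" ssh-rsa AAAA... clientb-key

    On older servers the same effect needs a patched sshd or a wrapper, which is presumably why no stock equivalent existed when this was asked.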


  • IPTables: forward from only one IP on my server

    - by user1307079
    I was able to get my server to forward connections on a certain port to a different IP, but when I add -d to specify the IP to forward from, it does not work. This is what I am trying:
    iptables -t nat -A PREROUTING -d 173.208.230.107 -p tcp --dport 80 -j DNAT --to-destination 38.105.20.226:80
    It works fine without the -d. Here is my ifconfig dump:
    em1    Link encap:Ethernet HWaddr 00:A0:D1:ED:D0:54
           inet addr:173.208.230.106 Bcast:173.208.230.111 Mask:255.255.255.248
           inet6 addr: fe80::2a0:d1ff:feed:d054/64 Scope:Link
           UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
           RX packets:100058 errors:0 dropped:0 overruns:0 frame:0
           TX packets:18941701 errors:0 dropped:0 overruns:0 carrier:0
           collisions:0 txqueuelen:1000
           RX bytes:12779711 (12.1 MiB) TX bytes:825498499 (787.2 MiB)
           Memory:fbde0000-fbe00000
    em1:9  Link encap:Ethernet HWaddr 00:A0:D1:ED:D0:54
           inet addr:173.208.230.107 Bcast:173.208.230.111 Mask:255.255.255.248
           UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
           Memory:fbde0000-fbe00000
    em1:10 Link encap:Ethernet HWaddr 00:A0:D1:ED:D0:54
           inet addr:173.208.230.108 Bcast:173.208.230.111 Mask:255.255.255.248
           UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
           Memory:fbde0000-fbe00000
    em1:11 Link encap:Ethernet HWaddr 00:A0:D1:ED:D0:54
           inet addr:173.208.230.109 Bcast:173.208.230.111 Mask:255.255.255.248
           UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
           Memory:fbde0000-fbe00000
    em1:12 Link encap:Ethernet HWaddr 00:A0:D1:ED:D0:54
           inet addr:173.208.230.110 Bcast:173.208.230.111 Mask:255.255.255.248
           UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
           Memory:fbde0000-fbe00000
    lo     Link encap:Local Loopback
           inet addr:127.0.0.1 Mask:255.0.0.0
           inet6 addr: ::1/128 Scope:Host
           UP LOOPBACK RUNNING MTU:16436 Metric:1
           RX packets:0 errors:0 dropped:0 overruns:0 frame:0
           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
           collisions:0 txqueuelen:0
           RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
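
    For what it's worth, a DNAT whose target lives on another machine usually also needs forwarding enabled and a return path through this box. A hedged sketch reusing the addresses above (it assumes 38.105.20.226 is reachable from this server):

      sysctl -w net.ipv4.ip_forward=1
      iptables -t nat -A PREROUTING -d 173.208.230.107 -p tcp --dport 80 -j DNAT --to-destination 38.105.20.226:80
      iptables -t nat -A POSTROUTING -d 38.105.20.226 -p tcp --dport 80 -j MASQUERADE
      iptables -A FORWARD -d 38.105.20.226 -p tcp --dport 80 -j ACCEPT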


  • Effects of internet connection speeds on server queries

    - by SephMerah
    Can my internet connection significantly affect queries run in phpMyAdmin? I am currently on 18 down and 30 up. I switched internet connections today and noticed a deep drop in query performance. The query that I am running is SELECT * FROM table. Simple. The table has one row of data. The MySQL server is on the same server as everything else. It is a VPS, hosted by GoDaddy. I don't have any other information. CentOS 6.3, MySQL 5.1, phpMyAdmin 3.4. Okay, I used Google's developer tools to inspect the XHR going out and coming in, and this is what it reported:
    {"success":true,"message":"<div class=\"success\">Your SQL query has been executed successfully ( Query took 0.0033 sec )<\/div>","sql_query":"<div id=\"result_query\" align=\"\">\n<div class=\"success\">Your SQL query has been executed successfully ( Query took 0.0033 sec ) SNIP..................."}
    So apparently my server is fine. The strange thing, though, is that the XHR response comes back as soon as I execute the query on the page, within less than a second, yet phpMyAdmin does not display the result immediately. I am going to try a re-install.
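
    A quick way to separate server time from network time, assuming shell access to the VPS: time the same query locally, outside phpMyAdmin (the database name below is a placeholder):

      time mysql -u root -p -e 'SELECT * FROM table' yourdatabase

    If that returns instantly, the delay is in the network path or in phpMyAdmin's page rendering rather than in MySQL.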


  • How do I connect via SSH without the password being requested every time? I already followed some answers here but it doesn't work

    - by MEM
    Mac OS X Lion 10.7.3.
    1) On the host, I created an authorized_keys file inside the .ssh folder: touch authorized_keys
    2) I copied my public SSH key into the host's .ssh folder: scp ~/.ssh/mykey.pub [email protected]:/home/userhost/.ssh/mykey.pub
    3) I placed its contents inside the authorized_keys file: cat mykey.pub >> authorized_keys
    4) Then I removed the mykey.pub file: rm mykey.pub
    5) On my terminal, locally, inside my ~/.ssh folder, I ran: ssh-add mykey (notice that it is without the .pub extension)
    6) I closed and reopened the terminal.
    When I first connected to this host, it was added to the known_hosts file inside ~/.ssh; I opened known_hosts with pico and the hash is there. Still, every time I connect with ssh [email protected] it requests a password! What am I missing here?
    UPDATE: I have done EVEN TWO MORE THINGS here:
    7) Set the key to be the default identity: touch ~/.ssh/config and place inside it the line IdentityFile ~/.ssh/yourkeyname (id_rsa is normally the default key; this tells outgoing SSH connections to use your key as the default identity).
    8) Add a bash process to the ssh-agent: ssh-agent bash, then ssh-add ~/.ssh/yourkeyname
    Lisinge's answer helped, but it's not definitive: if we restart the machine, the password gets prompted again!!! How can we debug this? What can we do here? How can we check where this process is failing?
    UPDATE 2: If I use ssh -v -i <keyfile> [email protected] I get, among other things:
    OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011
    Warning: Identity file yourkeyname not accessible: No such file or directory.
    What does this message refer to? Is the identity file not accessible on the localhost, or not accessible on the remote host? Please advise.
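
    That warning comes from the local ssh client: the file named with -i is resolved on the local machine, relative to the current directory, so a bare yourkeyname is only found if you happen to be inside ~/.ssh. Two hedged things to check:

      # use an absolute path to the private key
      ssh -v -i ~/.ssh/mykey [email protected]
      # on the remote host: sshd silently ignores authorized_keys with loose permissions
      chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys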


  • D-Link DIR-615 router keeps giving my wireless devices bad IP addresses

    - by mlsteeves
    I have a D-Link DIR-615 router, and wired devices have no problem getting an IP; however, wireless devices end up with a 169.254.x.x address (subsequently, they cannot access the internet through the router). I have removed all wired connections from the router, so there is no other DHCP server running. I also went back to the store and replaced the router with another, thinking that maybe it was defective. According to the router, it gave 192.168.0.101 to the wireless device. According to the wireless device, it got 169.254.67.71. I have tried both a laptop and an iPod Touch; both exhibit the same behaviour. Has anyone seen this type of behaviour, or have any ideas of things to try?
    NEW INFORMATION: I looked at the logs on the router, and when the wireless device tries to connect, this is what is logged:
    Sep 10 18:13:39 UDHCPD sending OFFER of 192.168.0.111
    Sep 10 18:13:31 UDHCPD sending OFFER of 192.168.0.111
    Sep 10 18:13:26 UDHCPD sending OFFER of 192.168.0.111
    Sep 10 18:13:23 UDHCPD sending OFFER of 192.168.0.111
    Sep 10 18:13:21 UDHCPD sending OFFER of 192.168.0.111
    I connected a computer directly to the router, and here is what it looks like:
    Sep 10 18:14:18 UDHCPD Inform: add_lease 192.168.0.110
    Sep 10 18:14:14 UDHCPD sending ACK to 192.168.0.110
    Sep 10 18:14:14 UDHCPD sending OFFER of 192.168.0.110
    Not sure if that helps or not.


  • Configuring a MySQL 5.1 Instance on Windows 7 Professional x64 Fails

    - by Thomas Owens
    I'm trying to set up my laptops to function as mobile development environments. Installing the software on my Linux machine and getting it configured was fairly straightforward; however, I'm having trouble getting MySQL 5.1 Server installed and configured on Windows 7 Professional 64-bit. I'm currently using the Windows MSI installer for the complete MySQL 5.1 system (as opposed to the Essentials installer also available). I've tried to install using both the 32-bit and 64-bit versions of MySQL 5.1; the same events occur in both. Both the Server Instance Configuration Wizard and Workbench appear to be installed just fine. When I open the Instance Configuration Wizard, I select Detailed Configuration. On the next screen, I select Development Environment, then Multifunctional Database on the screen after that. I leave the InnoDB settings unchanged. I select Manual Setting with 5 concurrent connections. I enable TCP/IP Networking on port 3306 and Enable Strict Mode. I select the Standard Character Set. I check the boxes for Install as a Windows Service (and provide the name "MySQL") and Include the Bin Directory in Windows PATH. On the next screen, I set my root user name and password. I do not enable root access from remote machines, and I also do not create an anonymous account. On the final screen of the wizard, when I click "Execute", the first two tasks (Prepare Configuration and Write Configuration File) complete. However, when it reaches Start Service, the wizard hangs and becomes unresponsive ("Not Responding" appears in the title bar and Task Manager). I would really like to be able to use both my Windows and Linux laptops as full-blown mobile development environments, but I can't do that without being able to run MySQL. Has anyone encountered this problem before? What options do I have to correct it?
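
    When the wizard hangs at Start Service, running the server by hand usually surfaces the real error. A hedged sketch, assuming the default 5.1 install path:

      cd "C:\Program Files\MySQL\MySQL Server 5.1\bin"
      mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server 5.1\my.ini" --console
      REM --console prints startup errors (bad datadir permissions, port 3306 in use, a stale service) to the window

    A leftover service from an earlier attempt can also block the wizard; net stop MySQL followed by sc delete MySQL clears it before re-running the configuration.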


  • How far should we take the N+N redundancy craziness?

    - by Brann
    The industry standard when it comes to redundancy is quite high, to say the least. To illustrate my point, here is my current setup (I'm running a financial service). Each server has a RAID array in case something goes wrong on one hard drive... and in case something goes wrong on the server, it's mirrored by another spare identical server... and both servers cannot go down at the same time, because I've got redundant power and redundant network connectivity, etc... and my hosting center itself has dual electricity connections to two different energy providers, and redundant network connectivity, and redundant toilets in case the two security guards (sorry, four) need to use them at the same time... and in case something goes wrong anyway (a nuclear strike? can't think of anything else), I've got another identical hosting facility in another country with the exact same setup.
    Cost of reputational damage if down: very high.
    Probability of a hardware failure with my setup: <<1%.
    Probability of a hardware failure with a less paranoid setup: <<1% as well.
    Probability of a software failure in our application code: 1% (if your software is never down because of bugs, then I suggest you double-check that your reporting/monitoring system is not down. Even SQL Server - which is arguably developed and tested by clever people with a strong methodology - is sometimes down.)
    In other words, I feel like I could host a cheap laptop in my mother's flat and the human/software problems would still be my higher risk. Of course, there are other things to take into consideration, such as scalability, data security, and the clients' expectation that you meet the industry standard. But still, hosting two servers in two different data centers (without extra spare servers, nor doubled network equipment apart from that provided by my hosting facility) would provide me with the scalability and the physical security I need. I feel like we're reaching a point where redundancy is just a communication tool. Honestly, what's the difference between 99.999% uptime and 99.9999% uptime when you know you'll be down 1% of the time because of software bugs? How far do you push your redundancy craziness?


  • The best way to hide data: encryption, connection, hardware

    - by Tico Raaphorst
    So to say, if I have a VPS which I own now, and I wanted to make the most secure and stable system that I can, how would I do that? Just to try, I installed Debian 7 with LVM encryption via the installer: you get two partitions, a /boot and an encrypted partition. When booting, you are prompted for the password to unlock the encrypted partition, which then holds further partitions like /home, /usr and swap space, all mounted automatically. Now, I do need to fill in that password over a VNC-over-SSL connection via the control panel website of the VPS hoster, so they can see my disk encryption password if they want to - they have the option to look at my data if they want, right? (Data encryption on a VPS: is it possible to have a 100% secure virtual private server?)
    So let's say instead I have my server sitting well locked next to me, with the following layers covered:
    BIOS (you have to replace the BIOS)
    RAID (you have to unlock the RAID config)
    disk (you have to unlock the disk encryption)
    file-level zip/tar (files are stored in encrypted archives), which are in turn inside another encrypted file mounted as a partition (archives mounted as partitions)
    all on the same system.
    It will be slow, but it would be extremely difficult to crack the encryption, say if you stole the server. Then I only need to make connections like SSH safer, with single-use passwords, and block all incoming and outgoing connections except one "exception" for myself - and maybe one more in case I somehow lose my identity for the first exception. What other overkill-but-realistic security options are available? I have heard about SELinux?
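
    For the disk layer on a machine you control, a minimal sketch of encrypting a spare partition by hand with LUKS (the device name /dev/sdb1 is a placeholder, and luksFormat destroys its contents):

      cryptsetup luksFormat /dev/sdb1           # set the passphrase
      cryptsetup luksOpen /dev/sdb1 securedata  # appears as /dev/mapper/securedata
      mkfs.ext4 /dev/mapper/securedata
      mount /dev/mapper/securedata /mnt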


  • A little guidance setting up FTP server authentication on Windows Server 2008 R2 Standard?

    - by Ropstah
    I have a (clean) server running Windows Server 2008 R2 Standard. I would just like to use it for serving a website and an FTP server through IIS. IIS is installed and serves my website properly. I have now added an FTP site, but when I try to log on using my user/pass I get the following error: 530 User cannot login. From this article (http://support.microsoft.com/kb/200475) I understand that these four causes can be pointed out:
    1. The "Allow only anonymous connections" security setting has been turned on in the Microsoft Management Console (MMC). Not the case.
    2. The username does not have the "Log on locally" permission in User Manager. The user is in the Users group; however, I'm not able to log on through RDP. I tried configuring this by following this article through GPMC; however, that only works when logged in as a domain user on a domain controller, which I'm not: I'm logged in as administrator.
    3. The username does not have the "Access this computer from the network" permission in User Manager. Not sure what this implies...?
    4. The domain name was not specified together with the username (in the form DOMAIN\username). Tried adding the server name (server\username): not working...
    I am an absolute server noob and I'd just like to be able to connect through FTP... Any guidance is highly appreciated!
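
    On a standalone (non-domain) server, those two user rights are edited in the Local Security Policy rather than through GPMC. A hedged pointer:

      secpol.msc -> Local Policies -> User Rights Assignment
      "Allow log on locally"                  : add the FTP user (or the Users group)
      "Access this computer from the network" : likewise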


  • iptables blocking access to most hosts, but some accesses are being logged

    - by epo
    What am I getting wrong? A while back I locked down my web hosting service while hardening it, or at least trying to. Apache listens on port 80 only, and I set up iptables using the following:
    IPS="list of IPs"
    iptables --new-chain webtest
    # Accept all established connections
    iptables -A INPUT --protocol tcp --dport 80 --jump webtest
    iptables -A INPUT --match state --state ESTABLISHED,RELATED --jump ACCEPT
    iptables -A webtest --match state --state ESTABLISHED,RELATED --jump ACCEPT
    for ip in $IPS; do
        iptables -A webtest --match state --state NEW --source $ip --jump ACCEPT
    done
    iptables -A webtest --jump DROP
    However, looking at my Apache logs I notice various entries in access_log, e.g.:
    221.192.199.35 - - [16/May/2010:13:04:31 +0100] "GET http://www.wantsfly.com/prx2.php?hash=926DE27C156B40E55E4CFC8F005053E2D81E6D688AF0 HTTP/1.0" 404 206 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)"
    201.228.144.124 - - [16/May/2010:11:54:16 +0100] "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 226 "-" "-"
    207.46.195.224 - - [16/May/2010:04:06:48 +0100] "GET /robots.txt HTTP/1.1" 200 311 "-" "msnbot/2.0b (+http://search.msn.com/msnbot.htm)"
    How are these slipping through? I don't mind the indexing bots (though I am a little surprised to see them get through). I suppose they must be getting through via the ESTABLISHED,RELATED rules. And no, I can't for the life of me remember why the first match state rule is there. So, two questions: is there a better way to set up iptables to restrict access to specified hosts? How exactly are these three examples slipping through?
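
    A first debugging step is to confirm that the ruleset in the script is actually the one running - the packet counters make this visible:

      iptables -vnL INPUT     # per-rule packet/byte counters
      iptables -vnL webtest   # hits on the per-IP ACCEPTs and the final DROP

    If the webtest DROP counter stays at zero while such requests keep arriving, the rules were likely flushed (e.g. on reboot) and never restored.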


  • External HDD incorrectly detected as internal - how to change this to enable hot swap/eject?

    - by Sam
    Hi all, I have Win 7 x64 Home Premium. The HDD is a Seagate Barracuda 7200.7 ST3120827AS, 3.5", serial 3ms006n6, firmware 3.42 (no further updates), in a NexStar CX external case (drivers installed). I have three drives:
    WD320 with the OS installed (internal)
    WD750 for data storage (internal)
    Seagate 120 (external) - connected via an eSATA board wired to a SATA port on the motherboard (MSI P43 Neo)
    I tried uninstalling the HDD in Device Manager to no effect. Oddly, the internal WD750 is detected as an external drive, and the Windows taskbar icon allows it to be ejected (unlike the Seagate). All drives are configured Online, Simple, Basic, NTFS, Active, Primary Partition (except the C drive). The Seagate was previously used as a primary disk with an XP operating system, so I deleted the volume and created/reformatted it (not quick). The HDD is no longer "Active", but that did not fix the problem.
    Background: originally, I installed Win 7 with the BIOS set to IDE and forgot to install the chipset drivers. Then I changed Win 7 to install the AHCI drivers, changed the BIOS to AHCI and rebooted. Win 7 loaded drivers, but the WD HDD gave problems/crashed. I installed the chipset drivers and the latest Intel storage matrix software (in safe mode). Everything worked fine after that, except for the problem of not correctly detecting the external drive.
    I have noticed that under the driver properties (and similarly in the registry) the two drives are configured differently (e.g. in the driver details, the capabilities value for the WD is set to 0000006, CM_DEVCAP_REMOVABLE & EJECTSUPPORTED, whereas the Seagate shows 0000080 & CM_DEVCAP_SURPRISEREMOVALOK). Any easy way to configure things? I tried physically swapping the SATA connections on the mainboard without success. So far I have found that a solution to my problem might be to perform some registry changes: http://superuser.com/questions/12955/how-do-i-remove-the-option-to-eject-sata-drives-from-the-windows-7-tray-icon


  • git: The remote end hung up unexpectedly - too many simultaneous users?

    - by Pritam Barhate
    I asked this first on Stack Overflow and was told I should ask it here. We have a self-hosted Git server (gitolite) on a VPS account (CPU: 2.68 GHz, RAM: 1824 MB). This same VPS is also used to publish our under-development web apps for client demos (very little traffic), so the main use of the server is as a Git server only. This Git server is accessed by a team of 30-40 people for various projects. Our problem is that during the day, when 6-7 people are trying to access the server (sometimes the same repo), we get frequent error messages:
    ssh: connect to host xxx.xxx.xx.xx port 22: Bad file number
    fatal: The remote end hung up unexpectedly
    After trying for 10-15 minutes it generally succeeds. During early mornings and late nights, when there are only 1-2 people, git commands work with a 100% success rate. I would also like to note that if I access other files hosted on the server over HTTP, it works fine. I found a couple of questions on Stack Overflow and other sites regarding this, but most people point towards SSH key setup or conflicts between msysgit and Cygwin SSH. However, I don't think this is the problem in our case, as we get this behaviour on Windows (using msysgit only) as well as on Mac machines. Also, if it were an SSH configuration issue, then it shouldn't work at all; in our case it works after 10-15 minutes. I think it might be too many simultaneous connections to the same server (or the same repo) or something like that. Does a setting or conf file exist that needs to be modified to solve this problem? Please help me solve this, or point me in the right direction. Thanks in advance. Pritam.
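
    One server-side knob consistent with these symptoms, offered as a hedged guess: sshd limits concurrent unauthenticated connections via MaxStartups (default 10, or 10:30:100 on newer OpenSSH), and connections beyond that limit are dropped until earlier ones finish authenticating. In /etc/ssh/sshd_config on the VPS:

      MaxStartups 50:30:200
      # then reload sshd, e.g.: service sshd reload

    With 30-40 users whose clients may each open several connections, the default is easy to exhaust.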


  • How to configure traffic from a specific IP hardcoded to an IP to forward to another IP:PORT using iptables

    - by cclark
    Unfortunately we have a client who has hardcoded a device to point at a specific IP and port. We'd like to redirect traffic from their IP to our load balancer, which will send the HTTP POSTs to a pool of servers able to handle the request. I would like existing traffic from all other IPs to be unaffected. I believe iptables is the best way to accomplish this, and I think this command should work:
    /sbin/iptables -t nat -A PREROUTING -p tcp -s $CUST_IP -d $CURR_SERVER_IP --dport 8080 -j DNAT --to-destination $NEW_SERVER_IP:8080
    Unfortunately it isn't working as expected. I'm not sure if I need to add another rule, potentially in the POSTROUTING chain? Below I've substituted the variables above with real IPs and tried to replicate the layout in my test environment in incremental steps.
    $CURR_SERVER_IP = 192.168.2.11
    $NEW_SERVER_IP = 192.168.2.12
    $CUST_IP = 192.168.0.50
    Port forward on the same IP:
    /sbin/iptables -t nat -A PREROUTING -p tcp -d 192.168.2.11 --dport 16000 -j DNAT --to-destination 192.168.2.11:8080
    Works exactly as expected.
    IP and port forward to a different machine:
    /sbin/iptables -t nat -A PREROUTING -p tcp -d 192.168.2.11 --dport 16000 -j DNAT --to-destination 192.168.2.12:8080
    Connections seem to time out.
    Restrict the IP and port forward to requests from a specific IP:
    /sbin/iptables -t nat -A PREROUTING -p tcp -s 192.168.0.50 -d 192.168.2.11 --dport 16000 -j DNAT --to-destination 192.168.2.12:8080
    Times out as well, probably for the same reason as the previous entry.
    Does anyone have any insights or suggestions? Thanks.
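
    The timeouts on the second and third rules are the classic asymmetric-return-path symptom: 192.168.2.12 replies straight to the client, which drops packets from an address it never contacted. A hedged sketch that forces replies back through 192.168.2.11:

      echo 1 > /proc/sys/net/ipv4/ip_forward
      /sbin/iptables -t nat -A PREROUTING -p tcp -s 192.168.0.50 -d 192.168.2.11 --dport 16000 -j DNAT --to-destination 192.168.2.12:8080
      /sbin/iptables -t nat -A POSTROUTING -p tcp -d 192.168.2.12 --dport 8080 -j SNAT --to-source 192.168.2.11
      /sbin/iptables -A FORWARD -p tcp -d 192.168.2.12 --dport 8080 -j ACCEPT

    Note the SNAT hides the real client IP from 192.168.2.12; if the pool needs it, the alternative is routing 192.168.2.12's replies back via 192.168.2.11.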


  • Issue with re-enabling the InnoDB engine after [skip-innodb]

    - by Ahn
    How do I enable InnoDB after it was previously disabled with the skip-innodb option?
    Case 1: Disabled InnoDB with the skip-innodb option; SHOW ENGINES gives:
    Engine | Support ... | InnoDB | NO ......
    Case 2: As I want to enable InnoDB, I commented out the skip-innodb option and restarted. But now SHOW ENGINES does not even list the InnoDB engine. Why?
    MySQL version: 5.1.57-community-log. OS: CentOS release 5.7 (Final).
    Log:
    120622 13:06:36 InnoDB: Initializing buffer pool, size = 8.0M
    120622 13:06:36 InnoDB: Completed initialization of buffer pool
    InnoDB: No valid checkpoint found.
    InnoDB: If this error appears when you are creating an InnoDB database,
    InnoDB: the problem may be that during an earlier attempt you managed
    InnoDB: to create the InnoDB data files, but log file creation failed.
    InnoDB: If that is the case, please refer to
    InnoDB: http://dev.mysql.com/doc/refman/5.1/en/error-creating-innodb.html
    120622 13:06:36 [ERROR] Plugin 'InnoDB' init function returned error.
    120622 13:06:36 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
    120622 13:06:36 [Note] Event Scheduler: Loaded 0 events
    120622 13:06:36 [Note] /usr/sbin/mysqld: ready for connections.
    Version: '5.1.57-community-log' socket: '/data/mysqlsnd/mysql.sock1' port: 3307 MySQL Community Server (GPL)
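
    The log points at stale or half-created InnoDB files from the earlier attempt. A hedged recovery sketch, assuming the datadir is /data/mysqlsnd (inferred from the socket path) and keeping the renamed files as backups:

      service mysqld stop
      cd /data/mysqlsnd
      mv ib_logfile0 ib_logfile0.bak
      mv ib_logfile1 ib_logfile1.bak
      mv ibdata1 ibdata1.bak        # only if InnoDB never held real data
      service mysqld start          # InnoDB recreates the files and should register

    If ibdata1 did contain data, move only the ib_logfile* files so InnoDB rebuilds its logs against the existing tablespace.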


  • How to prevent eMule from jamming up the router?

    - by the searcher
    Usually, when eMule is started, after some time I find that the router is jammed: the internet connection on that computer stops working, or it seems to be waiting for some port to be freed up before it can connect to a website. This sometimes affects even other PCs or Macs using the same router. Is there a way to prevent eMule from hogging too many resources or ports? I see that under Options -> Connection there are "Max Sources/File" and "Connection Limits - Maximum Connections". Right now I have set them to really low numbers (120 and 200 respectively), but what are good numbers to fill in there so that it works well without jamming up the router or using up the network resources of the PC or Mac? Or could it be that the number of files that are "Waiting" is too high and uses up too much resource? (If so, can eMule automatically limit the number to 10 or 20 to prevent this?) (This happened before on a Linksys router, a Netgear router, and the AT&T U-verse router.)


  • How do you avoid that server documentation gets out of sync with the actual setup?

    - by Frerich Raabe
    I'm a hobbyist maintaining a small FreeBSD server serving mail via IMAP - it's an exercise in server administration. The setup does have reasonably good documentation (in AsciiDoc format), which recently allowed another person to recreate the entire setup from scratch in less than 30 minutes. However, I noticed that after the initial setup, it easily happens that small changes to the system (say, inetd gets disabled, my IMAP server listens on an additional port for ManageSieve connections, a new router is added to the Exim configuration) don't end up in the documentation immediately (if at all). My idea was to avoid this problem by (partially?) generating the documentation out of the configuration files and the comments therein. One way to implement this may be to put /etc and /usr/local/etc into some source code management system (say, git) and then run a script which regenerates the documentation on every commit, as in the sketch below. However, I'm not sure whether that would be overkill and/or too difficult to get right (after all, I don't want complete copies of the source files in my documentation, but rather just the diffs). How do other people avoid server documentation getting outdated? Is there a good way to keep the two in sync automatically, or do you just have the discipline to update the documentation at the same time as you modify the system?
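
    A minimal sketch of the git half of that idea (rebuild-docs.sh stands in for whatever regenerates the AsciiDoc):

      cd /etc && git init && git add -A && git commit -m 'baseline'
      printf '#!/bin/sh\n/usr/local/bin/rebuild-docs.sh\n' > .git/hooks/post-commit
      chmod +x .git/hooks/post-commit

    Tools like etckeeper package this same pattern, including hooks into the package manager, and may be less work than maintaining it by hand.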


  • Do I need to create an AD site for a VPN network?

    - by ykyri
    I have a Windows domain at the 2008 R2 functional level. There are four GC DCs in four different physical locations, and a Kerio-based VPN network for replication and remote administration. Here is how the network is configured:
    dc1: local IP 192.168.0.10, VPN IP 192.168.1.10
    dc2: local IP 10.10.8.11, VPN IP 192.168.1.11
    dc3: local IP 10.10.9.12, VPN IP 192.168.1.12
    dc4: local IP 10.10.10.13, VPN IP 192.168.1.13
    That's simple; replication and everything else works fine, but when running dcdiag on dc3 I get an error:
    A warning event occurred. EventID: 0x000016AF
    During the past 4.12 hours there have been 216 connections to this Domain Controller from client machines whose IP addresses don't map to any of the existing sites in the enterprise. <...> The log(s) may contain additional unrelated debugging information. To filter out the needed information, please search for lines which contain text 'NO_CLIENT_SITE:'. The first word after this string is the client name and the second word is the client IP address.
    Here are some example netlogon.log lines:
    05/30 12:07:39 DOMAIN.NAME: NO_CLIENT_SITE: dc2 192.168.1.11
    05/31 09:52:11 DOMAIN.NAME: NO_CLIENT_SITE: dc4 192.168.1.13
    05/31 19:49:31 DOMAIN.NAME: NO_CLIENT_SITE: adm-note 192.168.1.101
    07/01 05:16:26 DOMAIN.NAME: NO_CLIENT_SITE: dc1 192.168.1.10
    All VPN-joined computers generate the same log line as above; adm-note, for example, is an administrator's notebook, also on the VPN. The question is: should I add a new AD site and bind the VPN subnet 192.168.1.0/24 to that site?
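
    For what it's worth, NO_CLIENT_SITE only means the client's IP matches no defined subnet, and the cure is defining the subnet - which does not require a brand-new site unless you want one. A hedged sketch in Active Directory Sites and Services (dssite.msc):

      Sites -> Subnets -> New Subnet...
      Prefix: 192.168.1.0/24
      Site:   whichever existing site should own the VPN clients (or a dedicated VPN site)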


  • Mac updated just now, postgres now broken

    - by user52224
    I run PostgreSQL 9.1 / Ruby 1.9.2 / Rails 3.1.0 on a MacBook Air for local dev. It's all been running smoothly for months (though this is the first time I've done development on a Mac). It's a MacBook Air from last year, and today I got the Mac OS X software update message as I have a few times before; my system downloaded approx. 450 MB of updates and restarted. It now says it's on OS X 10.7.3. Point is, Postgres has stopped working. When I start my Thin server (mirroring Heroku cedar) as normal and then browse to my Rails app, I get:
    PG::Error
    could not connect to server: Permission denied
    Is the server running locally and accepting connections on Unix domain socket "/var/pgsql_socket/.s.PGSQL.5432"?
    What happened? After browsing around a few questions I'm still confused, but here's some extra info:
    Running psql from the command line gives the same error.
    I can run pgAdmin 3, connect via it and run SQL with no problems.
    Running which psql shows the version as /usr/bin/psql.
    I created a PostgreSQL user back when I got the Mac (it's always been on Lion). I've no idea why; almost certainly I was following a tutorial which I neglected to store in my notes. Point is, I am aware there is a _postgres user as well. I know it's rubbish, but apart from a note on passwords I don't have any extra info on how I configured Postgres - though the obvious implication is that I did not use the _postgres user. Anyone have suggestions or information on what might have changed / what I can try to debug and fix? Thanks.
    Edit: Playing around based on this question and answer (http://stackoverflow.com/questions/7975414/check-status-of-postgresql-server-mac-os-x), see this string of commands:
    $ sudo su postgreSQL
    bash-3.2$ /Library/PostgreSQL/9.1/bin/pg_ctl start -D /Library/PostgreSQL/9.1/data
    pg_ctl: another server might be running; trying to start server anyway
    server starting
    bash-3.2$ 2012-04-08 19:03:39 GMT FATAL: lock file "postmaster.pid" already exists
    2012-04-08 19:03:39 GMT HINT: Is another postmaster (PID 68) running in data directory "/Library/PostgreSQL/9.1/data"?
    bash-3.2$ exit
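
    A detail worth checking here: which psql returning /usr/bin/psql means the Apple-shipped client is in use, and that build looks for its socket in /var/pgsql_socket, whereas the installation under /Library/PostgreSQL/9.1 is a separate server with its own client and socket. Two hedged workarounds:

      # put the matching client first in the PATH
      export PATH=/Library/PostgreSQL/9.1/bin:$PATH
      # or skip the Unix socket entirely
      psql -h localhost -U postgres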


  • Large, high performance object or key/value store for HTTP serving on Linux

    - by Tommy
    I have a service that serves images to end users at a very high rate using plain HTTP. The images vary between 4 and 64 kbytes, and there are about 1,300,000,000 of them in total. The dataset is about 30 TiB in size, and changes (new objects, updates, deletes) make up less than 1% of the requests. The number of requests per second varies from 240 to 9000 and is dispersed pretty much all over, with few objects being especially "hot". As of now, these images are files on an ext3 filesystem distributed read-only across a large number of mid-range servers. This poses several problems:
    Using a filesystem is very inefficient, since the metadata size is large, the inode/dentry cache is volatile on Linux, and some daemons tend to stat()/readdir() their way through the directory structure, which in my case becomes very expensive.
    Updating the dataset is very time-consuming and requires remounting between set A and set B.
    The only reasonable handling is operating on the block device for backup, copying, etc.
    What I would like is a daemon that:
    speaks HTTP (GET, PUT, DELETE and perhaps UPDATE);
    stores data in an efficient structure - the index should remain in memory, and considering the number of objects, the overhead must be small;
    can handle massive numbers of connections with slow (if any) time needed to ramp up, with the index read into memory at startup.
    Statistics would be nice, but not mandatory. I have experimented a bit with Riak, Redis, MongoDB, Kyoto and Varnish with persistent storage, but I haven't had the chance to dig in really deep yet.


  • Is a wildcard SSL the only option in this multiple-vhost/one-IP setup?

    - by solsol
    I have a web app set up that needs the following SSL encryption:
    secure.myapp.com -> SSL
    www.myapp.com/login -> SSL
    www.myapp.com/signup -> SSL
    If I'm correct, I could run one SSL certificate for all of my www.myapp.com/* pages. The problem is the subdomain, secure.myapp.com, which needs to be on a separate IP address to work with SSL. Right now I have one server, one public IP and a number of virtual hosts in Apache to make this work. I'd rather not buy an expensive wildcard SSL certificate to secure just one subdomain. What is your advice on this? If it IS the only solution, any tips on getting a reasonably priced wildcard SSL cert are appreciated. I have read about SNI, which allows the use of multiple SSL certs, but not all browsers (IE6!) support it. Since we are building a web app for the public, we cannot let IE6 users end up on unencrypted connections. Thanks for your help.
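
    One alternative that needs neither SNI nor a wildcard: a single certificate listing both hostnames in subjectAltName (sold as SAN or UCC certificates) can cover www.myapp.com and secure.myapp.com on one IP. A hedged sketch of the CSR (-addext needs a recent OpenSSL; older versions take the SAN via a config file):

      openssl req -new -newkey rsa:2048 -nodes -keyout myapp.key -out myapp.csr \
        -subj "/CN=www.myapp.com" \
        -addext "subjectAltName=DNS:www.myapp.com,DNS:secure.myapp.com"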


  • Secure data from a server to a workstation using jumper hosts

    - by apalsson
    Hello. I have a WWW server; my problem is that the content is sensitive and should not be accessible to people without proper credentials. How can I improve the ease of use but still maintain security in the following scenario?
    The server is accessed through a "jumper host": the client connects to the jumper using a VPN connection and uses Remote Desktop to access the jumper. From the jumper, they use Remote Desktop again to access the server. Finally, on the server, the user can access content using a WWW browser.
    All the way from the VPN client to the WWW browser, authentication uses a SmartCard token. This seems quite secure to me. Content only gets mirrored over the Remote Desktop session between server and jumper, so there are no cached files to worry about. The connection between jumper and client is protected using VPN (SSL), so no eavesdropping.
    But it is quite cumbersome for the clients, with many steps and connections to open. :( So, how can I improve the user experience of accessing my server without compromising security? Thanks.


  • Joomla performance problems on AWS

    - by Bobby Jack
    I'm running a site on AWS with the following setup:
    a single m1.small instance (web server)
    a single RDS m1.small DB
    Joomla 1.5
    Generally the site is performant but fairly low-traffic - say around 50-100 visits/hour. However, at peak time we see about double that traffic. During peak time, pretty much every day:
    CPU usage on the web server slowly climbs to 100%;
    CPU usage on the RDS server climbs quite quickly to about 30%, from an average of about 15;
    database connections shoot up to about 140, from a normal average of about 2 or 3.
    The site is then occasionally unreachable, certainly according to Pingdom monitoring. Does anyone recognise this behaviour? Can you point me in the right direction to begin investigating? Of course, RDS makes it difficult to do things like slow query logging, so I've started by regularly dumping the MySQL process list into a file to see if there's anything I can spot there, but it would be good to have something more concrete to investigate.
    UPDATE: At least, can someone confirm that I'm right in saying the level of traffic implies the problem must be a specific type of query taking way longer than it should to execute? This would happen if a table gets locked and many queries need to write to it, right? For this very reason, I've already changed the __session table type to InnoDB.
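
    RDS can in fact expose the slow query log without filesystem access, via a DB parameter group and the mysql.slow_log table. A hedged sketch:

      # in the instance's DB parameter group:
      #   slow_query_log = 1, long_query_time = 1, log_output = TABLE
      # then from any MySQL client:
      SELECT start_time, query_time, lock_time, rows_examined, sql_text
      FROM mysql.slow_log ORDER BY query_time DESC LIMIT 20;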


  • Persistent PuTTY sessions for multiple windows

    - by Tgr
    I'm working in various Linux environments through PuTTY connections which break from time to time. I'm looking for a solution to make the PuTTY windows persist (e.g. if I was editing a file, then after reconnecting I should be in the same editor with the same file open at the same place), with the following requirements:
    it shouldn't require any manual setup at the beginning of the session or after reconnection (I don't want to type in screen or anything like that);
    I have several windows open to the same machine with the same user, which tend to disconnect at the same time;
    the number/role of windows is not constant (it's not like I have an mc window, a mysql window and a "script runner" window; sometimes I use one window for search or for SVN commands, other times I need several at the same time);
    sometimes I need to change the properties of a window for a task (a large window for grepping/editing, small windows because I need to see two of them at the same time, a red background because I am modifying the live database in MySQL, etc.), so I need to get the same console back in the same window after a reconnect.
    Is there a way to achieve this? I suppose I should use screen or something equivalent, but how does it know which window I am reconnecting from? Is there some way to pass a unique window identifier to the shell from PuTTY?
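
    One way to pass exactly that identifier, as a hedged sketch: give each PuTTY saved session its own multiplexer session name in the Remote command field (Connection -> SSH), so a given window always re-attaches to the same console:

      tmux new-session -A -s window1   # for the saved session "window1"; -A attaches if it exists, creates otherwise
      # screen equivalent: screen -dRR window1

    Each distinctly configured PuTTY profile (size, colours) then maps one-to-one onto a named tmux/screen session, with no typing after reconnect.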


  • Windows Vista/7 dropping Mac Server share points

    - by Hooligancat
    My Windows Vista and Windows 7 clients are having problems maintaining access to SMB shares on a Mac server. The initial connection to the server appears to be OK, as the Windows clients can see all of the server's share points. However, a client will randomly drop a couple of the share points, although it can still see the server. For example, if I have the following share points on the Mac server:
    Share A, Share B, Share C, Share D, Share E
    the Windows client can see and access these shares most of the time, but randomly a couple of them will just get dropped or go missing from the client's view, so I end up with something like:
    Share B, Share D, Share E
    All the share points are established in the same way with the same permission settings. My Mac OS X Server is set up with the following for SMB:
    SMB sharing enabled
    Standalone Server, workgroup CORPORATE
    Allow Guest Access = YES
    Client connections limit = 100
    Authentication: NTLMv2 & Kerberos and NTLM
    Code page is Latin US (437)
    This is a workgroup master browser
    WINS registration is set to Enable WINS server (tried with the setting off)
    Enable virtual share points for homes: YES
    I noticed in my SMB file service log that the clients appear to connect OK, but I get the following error, which implies a reset by either the server or the client:
    /SourceCache/samba/samba-187.9/samba/source/lib/util_sock.c:read_data(534) read_data: read failure for 4 bytes to client 192.168.0.99. = Connection reset by peer
    I am a bit stumped as to which direction to turn to try and get this resolved. Continued attempts to access the server from the client will reconnect to the share points, but they inevitably get dropped again in the near future. Any and all help much appreciated.
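
    One client-side knob that often matters for Vista/7 against older Samba builds, offered as a guess rather than a known fix for this exact symptom: the LAN Manager authentication level, which Vista/7 set to NTLMv2-only by default. Relaxing it on a test client:

      reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LmCompatibilityLevel /t REG_DWORD /d 1 /f
      REM GUI path: secpol.msc -> Local Policies -> Security Options -> "Network security: LAN Manager authentication level"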


  • Datacentre Rack naming convention with flexibility for reassignment of server roles

    - by g18c
    We are just shifting across to a new rack, and until now have used the names of cartoon characters. This is not going to work anymore, and we need a better naming convention. Physically, I would like to name the servers by location and then have an alias for the actual function/customer, i.e.:
    Physical name: LONS1R1SVR1, meaning London, suite 1, rack 1, server 1.
    Customer alias: since the servers can be reassigned from time to time, for the above physical server name I would have an alias as a column in a spreadsheet, set to the customer's hostname, i.e. www.customerserver1.com.
    Patching: for patching, I am looking at labeling the physical connections, i.e.:
    LON1S1R1SVR1-PWR1
    LON1S1R1SVR1-PWR2
    LON1S1R1SVR1-ETH0
    LON1S1R1SVR1-KVM
    Ultimately, if I am labeling cables, I really want to avoid putting LON1S1R1SQLSVR on any patch cord, in case the server gets formatted and changed from a SQL server to a WWW server, which would mean relabeling all the patch cords as well. In addition, once virtual machines are thrown in, I get confused very quickly. I appreciate that it may be confusing to have both a physical hostname and a customer alias. Please let me know what you run with, and any other standards or best practices that I can follow.

