Search Results


  • Android web browser returns code 500 for web page on Nginx web server

    - by Paxxil
    Hey! I've run into very weird behavior of the web browser on an Android mobile phone (I've tried HTC Wildfire and HTC Desire phones). I have a web server running Nginx v0.8.54. When I try to open a web page on the phone it shows the error: "The requested item could not be loaded! (Status code: 500)". But it only happens when I request the page over the mobile network; on WiFi it works just fine. There is more: if I stop Nginx and start the Apache web server, it works fine on both the mobile network and WiFi. I've also tried another mobile network and the behavior is the same.

    Some server facts:

        Firewall is OFF
        SELinux is OFF
        the web page (served by Nginx) opens normally in any other browser (IE, FF, Opera, Chrome, Safari) on a laptop or PC
        nothing in the Nginx error.log
        this is the only entry in access.log when the page is requested:

        xxx.xxx.xxx.xxx - - [17/Mar/2011:11:19:49 -0500] 200 "GET / HTTP/1.1" 27405 "-" "Mozilla/5.0 (Linux; U; Android 2.2; en-gb; Desire_A8181 Build/FRF91) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1" "-"

    index.html contains only a "Hello World" string; there is no fishy JavaScript or anything else. And there is even more: if I open the same page on another server, with the same Nginx build and the same server and web server configuration, it opens just fine.

    If anyone has any idea what may be going on, I would really appreciate it if you let me know. Thanks! EDIT: I forgot to mention that the page opens OK on iPhone and Nokia.


  • Apache Proxy and Rewrite, append directory to URL

    - by ionFish
    I have a backend server running at http://10.0.2.20/ on the local network, with a directory layout similar to this:

        / (root)
        |
        |_user1/
        |  |_www/
        |  |_private/
        |
        |_user2/
           |_www/
           |_private/
        (etc.)

    http://10.0.2.20/user1/ of course contains the 'www' and 'private' folders, and it is proxied through a public server using Apache's reverse proxy. I'd like the following to happen: http://public-proxy-server/user1/ actually shows the content from http://10.0.2.20/user1/www/ without indicating it in the URL (/private/ would not be accessible via the public proxy server). The key here is for it to be dynamic, so that all requests to http://public-proxy-server/*/ show content from http://10.0.2.20/*/www/. Again, the proxy currently works fine; below is the config.

    On the public server:

        <VirtualHost *:80>
            ServerName www.domain.com
            ProxyRequests Off
            ProxyPreserveHost On
            ProxyVia full
            ProxyPass / http://10.0.2.20/
            ProxyPassReverse / http://10.0.2.20/
        </VirtualHost>

    On the backend server:

        <VirtualHost *:80>
            ...
            # this directory contains the folders 'user1' and 'user2'
            DocumentRoot /var/www/
            ...
        </VirtualHost>
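
    One approach sometimes used for this kind of dynamic path injection (a sketch only, not tested against this exact setup; it assumes mod_rewrite and mod_proxy_http are enabled on the public server) is to replace the blanket ProxyPass with a RewriteRule that adds the /www/ segment and proxies via the [P] flag:

        <VirtualHost *:80>
            ServerName www.domain.com
            ProxyRequests Off
            ProxyPreserveHost On

            RewriteEngine On
            # /user1/anything  ->  http://10.0.2.20/user1/www/anything
            RewriteRule ^/([^/]+)/(.*)$ http://10.0.2.20/$1/www/$2 [P,L]
            ProxyPassReverse / http://10.0.2.20/
        </VirtualHost>

    Requests to the bare hostname or to /user1 without a trailing slash won't match the pattern, and redirects issued by the backend will still contain the /www/ path, so treat this as a starting point rather than a finished config.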


  • Using NFS for scalable PHP/MySQL web application

    - by Jeroen Moons
    Here's the situation: I have a PHP/MySQL web application that accepts user uploads (PDF files). From these PDFs' pages a preview image is made on the fly and presented to the web app's users. Some PDFs might be on the large side; most will be under 50 MB, but some extreme cases could be as large as a few hundred MB. A little waiting for the preview image of a large PDF is acceptable, but no more than a minute, let's say. Everything is running on one server for now, but soon the app will hit the server's limit on both storage and processing power.

    My idea to solve the problem: have one or more PDF processing servers as needed, and one or more file storage servers. These two types of servers are mounted, via NFS, on the server on which the actual app runs. The app could then use Gearman to delegate PDF processing tasks to the processing servers. A processing server can mount the storage server, read the file stored there, process it and write its output back to that server. The servers I'm talking about will be Amazon EC2 instances. The web app returns a link to the resulting preview image on the storage server that was used, which the front end can then use to show the image to the user.

    My question: I have zero experience with apps that use multiple servers. Is this idea viable, or is there a better way to do it? Is an NFS setup fast and reliable enough for this situation?


  • Remote Desktop Connection issues

    - by stead1984
    I have a server at a remote site; the sites are connected to each other by a site-to-site VPN using Cisco ASA 5510 firewalls. One end is managed by me, the other by the remote location's IT, and between the two of us is another party who manages and routes the connections. Remote Desktop had been working fine with no problems, but recently I noticed it had stopped working for ONE server over the VPN, which it previously had done. All the routes seem fine and I can still ping the remote server and even download files from an FTP site on the remote server, so the VPN seems fine. Remote Desktop to that server works fine from within the remote location, but not over the VPN. I don't understand why it's stopped working. I originally thought it was a rule put in place by the other party, but they stress it's not them. The only thing that has changed on the server initiating the RDP connection is that it now runs file services, sharing a folder. The source server (remote location) may or may not have had updates applied. Any ideas?


  • Prevent SSL certificate being returned for a specific domain

    - by jezmck
    Apologies for a long question. We've taken on a new client whose web hosting was previously on their in-house server, which still has their Exchange/Outlook email. We now host their domain (and many others) on our server. They're complaining that they're getting errors in Outlook. I don't understand the AutoDiscover stuff at the root of the problem, but I believe I just need to stop the SSL certificate on our server being returned when it is requested for a particular domain:

        Yes it is, the issue lies with "{newclient}.com" being pointed to your server IP and that server
        has port 443 open with an SSL certificate associated to it. So when Outlook/ActiveSync use
        autodiscover to find the mailbox settings it finds your SSL (because 443 is open) and flags it
        as an error. The solution is to close 443 so it's not discovered; Autodiscover will then proceed
        to mail.{newclient}.com via the MX / service records and discover the correct SSL.

    I'm new here and there was no hand-over, so I don't know whether other currently hosted sites need to accept SSL connections, though I suspect some do, or may in future. This is a live server, so I can't risk trying loads of options in case I take the server offline! I feel like I should be adding something like the following to vhosts.conf:

        <VirtualHost *:443>
            ServerName {newclient}.com
            ServerAlias www.{newclient}.com
            SSLEngine Off
            SSLCertificateFile {NONE}
            SSLCertificateKeyFile {NONE}
        </VirtualHost>

    Apologies for the fact that I don't know enough about this subject to be able to ask the question more clearly!


  • Why does Exchange 2003 silently reject emails with large attachments?

    - by Cypher
    Our environment: Exchange Server 2003 Standard, single instance, running on Windows Server 2003 Standard, configured to not send/receive mail with attachments larger than 10 MB. NDRs are not enabled.

    The issue: when an external sender sends an email with an attachment larger than 10 MB, Exchange, as configured, does not receive the message. However, the sender of that message does not receive any notification from his own mail server that the message could not be delivered due to attachment size. Yet if an external user tries to send an email to a non-existent user, they do receive a message from their mail server indicating that the user does not exist. Why is that, and is there anything I can do about it? It would be nice if the sender received notification that the attachment exceeds our size limit and their message was never received.

    Update: the Exchange server has a SpamAssassin box in front of it... could that have something to do with it? Here is one of the last lines from SpamAssassin's logs when searching for my test e-mails:

        mail postfix/smtp[19133]: 2B80917758: to=, relay=10.0.0.8[10.0.0.8]:25, delay=4.3, delays=2.6/0/0/1.7, dsn=2.6.0, status=sent (250 2.6.0 Queued mail for delivery)

    My assumption is that SpamAssassin thinks the message is OK and is forwarding it on to Exchange.

    Update: I've verified that Exchange is receiving the message and generating an NDR. However, delivery of NDRs is disabled to prevent backscatter. Is there something I can do to get Exchange to send a bounce message to the sending mail server (or verify that the message is being sent), so the sending mail server can notify its sender of the bounce?


  • Can a folder on a NAS be made available as a physical drive in VMWare?

    - by asbjornu
    We are currently in the process of moving from a single web server to two load-balanced web servers, and we are facing some challenges we don't quite know how to fix. One of these is that the current single server hosts applications that write stuff to disk. The applications running on the server expect that when something is written to disk it will in fact exist later, so it's important that this premise is fulfilled with the dual-server architecture as well.

    The dual-server setup is a couple of VMware instances with Windows Server 2008 R2 as the guest operating system. Out of the box, these instances do not share any kind of file system, so just moving the applications over would break them, since one instance would write something to the file system that doesn't exist on the other. Thus we need to share a file system between the two virtual servers. Our host has proposed creating a network share on a SAN and mapping this share individually on each virtual machine. This doesn't work too well due to NTFS permissions, etc., because the share needs to be accessed by several independent web applications that won't even be in the same application pool. The only solution that kind of works is to hard-code an "identity" for each web application into its web.config file, but this means passwords in clear text, which doesn't sit well with me.

    Since the servers are virtual, I'm thinking: wouldn't it be possible to make a NAS area available as a physical disk in the guest operating system somehow? Since VMware has full control of the virtual hardware, you'd think it would be able to "fake" a local hard drive in the virtual machine that in reality is a folder on a NAS, but so far I haven't found anything that states how and if this is possible. So I have to ask the wonderful Server Fault community: can a folder on a NAS be made available as a physical drive (typically D:) in both of the virtual machines?


  • 100% CPU load on Ubuntu 10.04.3 LTS 64bit

    - by deadtired
    I have spent 2 days trying to fix this issue, with no success. The server is a MySQL database server.

    Hardware: Dell PowerEdge 1950, 2x Intel Xeon Quad Core E5345 @ 2.33GHz, 16 GB RAM, 2x 146 GB SAS (software RAID 1)
    Software: Ubuntu 10.04.3 LTS, MySQL 5.1.41

    Issue: while MySQL is not used and runs with no database, everything seems alright. As soon as I install a database, something brings all 8 cores to 100% with low memory consumption. So, as you can imagine, the load average goes high (I saw a load average of 212 for the first time). The server doesn't become unresponsive, but you can see it's slow while browsing the project installed.

    Additional info: the database used is not more than 24 MB and it was moved from a server with fewer resources and a lot of larger databases, so it's not the database/project. my.cnf is not the reason either, as I have used both the default one and the one I use on the same distribution on another server. What is interesting is that MySQL doesn't close any processes and runs up to the limit of max_connections. Logs are quiet; nothing there. I switched to this Ubuntu version after I suspected some problems in the newly installed Ubuntu 11.10 server. That one worked alright for an hour after I made a kernel upgrade to 3.0.1 (it was eating the memory as well). I tested disk speed and it seems alright. Some more output from the running server: dstat -cndymlp -N total -D total 3, and the htop command (output not shown). Any ideas? Did anyone meet the same problem? Any fix you can think of?


  • SSL Connection Error

    - by toffee.beanns
    I have purchased a Comodo SSL cert and have submitted the Certificate Signing Request (CSR) generated by my server to the SSL management site. It returned 3 files:

        AddTrustExternalCARoot.crt
        PositiveSSLCA2.crt
        www_mydomainname_com.crt

    I have uploaded them to my /etc/ssl/ssl-certs folder, updated my virtual host in sites-available and restarted accordingly:

        NameVirtualHost 107.167.120.195:80   #sample ip address
        NameVirtualHost 107.167.120.195:443  #sample ip address

        .........
        #normal http virtual host (working well)

        <VirtualHost 107.167.120.195:443>
            ServerAdmin [email protected]
            ServerName mydomainname.com
            ServerAlias www.mydomainname.com
            DocumentRoot /var/www/mydomainname
            SSLEngine on
            SSLCertificateFile /etc/ssl/ssl-certs/www_mydomainname.com.crt
            SSLCertificateKeyFile /etc/ssl/ssl-certs/server.key
            SSLCertificateChainFile /etc/ssl/ssl-certs/PositiveSSLCA2.crt
        </VirtualHost>

    I have also run 'a2enmod ssl' and the module is enabled. This is the error I get when I access the page over https in Chrome:

        SSL connection error
        Error code: ERR_SSL_PROTOCOL_ERROR
        Unable to make a secure connection to the server. This may be a problem with the server, or it
        may be requiring a client authentication certificate that you don't have.

    I have also checked my Apache log files and there seems to be an error saying that the Common Name (CN) is not the same as the server name:

        RSA server certificate CommonName (CN) `www.mydomainname.com' does NOT match server name!?

    and

        Invalid method in request \x16\x03\x01

    What should I do?
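
    For what it's worth, "\x16\x03\x01" is the start of a TLS handshake arriving at a vhost that is answering in plain HTTP, which usually points at Apache not actually listening for SSL on that address/port (or the request landing in a different vhost) rather than at the certificate chain itself; the CN warning alone is normally non-fatal. Two hedged checks, assuming a Debian/Ubuntu layout:

        # is Apache configured to listen for HTTPS at all?
        grep -R "Listen 443" /etc/apache2/

        # what certificate (and Common Name) is actually served on 443?
        openssl s_client -connect www.mydomainname.com:443 -servername www.mydomainname.com 2>/dev/null \
            | openssl x509 -noout -subject -issuer

    If the s_client call cannot complete a handshake at all, the problem is the listener/vhost selection rather than the Comodo files.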


  • How to access a port via OpenVPN only

    - by Andy M
    I've set up an OpenVPN server alongside an Apache website that can only be accessed on port 8100 on the same machine. My /etc/openvpn/server.conf file looks like this:

        port 1194
        proto tcp
        dev tun
        ca ./easy-rsa2/keys/ca.crt
        cert ./easy-rsa2/keys/server.crt
        key ./easy-rsa2/keys/server.key  # This file should be kept secret
        dh ./easy-rsa2/keys/dh1024.pem   # Diffie-Hellman parameter
        server 10.8.0.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        # make sure clients can still connect to the internet
        push "redirect-gateway def1 bypass-dhcp"
        keepalive 10 120
        comp-lzo
        persist-key
        persist-tun
        status openvpn-status.log
        verb 3

    Now I tried to let only clients connected to the VPN network access the website on Apache via port 8100. So I defined a few iptables rules:

        #!/bin/sh
        # My system IP/set ip address of server
        SERVER_IP="192.168.0.2"
        # Flushing all rules
        iptables -F
        iptables -X
        # Setting default filter policy
        iptables -P INPUT DROP
        iptables -P OUTPUT DROP
        iptables -P FORWARD DROP
        # Allow incoming access to port 8100 from OpenVPN 10.8.0.1
        iptables -A INPUT -i tun0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o tun0 -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT
        # outgoing http
        iptables -A OUTPUT -o tun0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A INPUT -i tun0 -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT

    Now when I connect to the server from my client computer and try to access the website at 192.168.0.2:8100, my browser can't open it. Will I have to forward traffic from tun0 to eth0? Or is there anything else I'm missing?
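
    As a side note, the rules above reference port 80 even though the site is served on 8100, and nothing permits the OpenVPN control channel (TCP 1194) or loopback traffic, so with a default-DROP policy the tunnel itself may be affected. A hedged sketch of what the filter rules might look like instead, assuming the goal is "port 8100 reachable only through the tunnel":

        # loopback and the OpenVPN listener itself
        iptables -A INPUT  -i lo -j ACCEPT
        iptables -A OUTPUT -o lo -j ACCEPT
        iptables -A INPUT  -p tcp --dport 1194 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -p tcp --sport 1194 -m state --state ESTABLISHED -j ACCEPT

        # Apache on 8100, only via the tunnel interface
        iptables -A INPUT  -i tun0 -p tcp --dport 8100 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o tun0 -p tcp --sport 8100 -m state --state ESTABLISHED -j ACCEPT

    Since Apache runs on the same machine, no FORWARD rules are needed; packets from VPN clients arrive on tun0 whichever server address they use, while direct LAN requests hitting eth0 fall through to the DROP policy.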


  • Network connection keeps dropping - bad hardware?

    - by Bill Sambrone
    Hello all, I've run into a bit of a wall with a client of mine. In an office of 20 people, he is the only one who experiences broken connections to his mapped network drives. I have everyone set up with about 6 mapped drives, all pointing to the same server (no DFS), and everyone else can access them lightning fast. The environment consists of a mix of Windows 7 and XP machines, all 32-bit. The server holding the data everyone is mapping to is running Server 2008 R2 and is a domain controller. We recently swapped out their old 10/100 switch for a shiny new Dell PowerConnect gigabit switch. We have also replaced an old dying SonicWall with a shiny new one. Everything is running on an ESX host except for the DC, which is where everyone is getting data from.

    In my client's office, we have done the following:

        Swapped out his computer (Win7 and XP box)
        Swapped out the desktop switch in his office
        Removed the desktop switch in his office
        Changed out the network cable going to the wall
        Ran 'net config server /autodisconnect:-1' on the server
        Disabled remote differential compression on his current Win7 box

    When we swapped out his network cable, everything seemed fine for about 4 days. Normally I would get a phone call a couple of times per day letting me know that Outlook has crashed (there is a 9 GB PST living on the server he is always connected to), or that the software he runs from his L: drive has crashed. I almost thought I had this solved, but after we rebooted the DC the other night he all of a sudden couldn't stay connected to his mapped network drives for more than 10 minutes. When I ran 'net use' from the command prompt, it listed all the network drives, which were randomly in a state of 'OK', 'Disconnected', or 'Reconnecting'.

    What else should I try? Maybe there is bad wiring in the wall, the patch panel, or a bad port in the new switch I have in the server room?


  • Samba Does Not List Files In Shared Directory

    - by Sean M
    I am running Samba on a CentOS server, and I am experiencing a problem where it allows me to connect to the server and see a share, but shows the share as an empty directory. I find this behavior strange. Here is the stanza in my smb.conf for the given share:

        [seanm]
        path = /home/seanm
        writeable = yes
        valid users = seanm, root
        read only = No

    Here's what I see on the server side:

        [seanm@server ~]$ ls -l
        -rw-r--r-- 1 seanm seanm 40 Jan  4 13:45 pangram.txt

    And yet:

        [seanm@client ~]$ smbclient //server/seanm -U seanm -W WORKGROUP
        Enter seanm's password:
        Domain=[WORKGROUP] OS=[Unix] Server=[Samba 3.0.33-3.29.el5_5.1]
        smb: \> ls
          .                                   D        0  Fri Jan  7 10:08:55 2011
          ..                                  D        0  Fri Jan  7 07:58:31 2011

                58994 blocks of size 262144. 50356 blocks available

    This behavior is present on both a Windows client and a Linux client system. The behavior is present with the firewall on and with the firewall off, so it's not that. Neither /var/log/messages nor /var/log/secure have any complaints about Samba. I doubt that SELinux is a problem; just in case, here are the relevant settings:

        [root@server ~]# getsebool -a | grep samba
        samba_domain_controller --> off
        samba_enable_home_dirs --> on
        samba_export_all_ro --> off
        samba_export_all_rw --> off
        samba_share_fusefs --> off
        samba_share_nfs --> off
        use_samba_home_dirs --> on
        virt_use_samba --> off

    What am I doing wrong here, and what can I do to fix it?


  • git : The remote end hung up unexpectedly - too many simultaneous users?

    - by Pritam Barhate
    I asked this first on Stack Overflow and it was suggested that I should ask it here. We have a self-hosted Git server (Gitolite) on a VPS account (CPU: 2.68GHz, RAM: 1824MB). The same VPS is also used to publish our under-development web apps for client demos (very little traffic), so the main use of the machine is as a Git server only. This Git server is accessed by a team of 30-40 people for various projects.

    Our problem is that during the day, when 6-7 people are trying to access the server (sometimes the same repo), we get this frequent error message:

        ssh: connect to host xxx.xxx.xx.xx port 22: Bad file number
        fatal: The remote end hung up unexpectedly

    After trying for 10-15 minutes it generally succeeds. During early mornings and late nights, when there are only 1-2 people, git commands work with a 100% success rate. I would also like to note that if I access other files hosted on the server over HTTP, that works fine.

    I found a couple of questions on Stack Overflow and on other sites regarding this, but most people point towards SSH key setup or conflicts between msysgit and Cygwin SSH. However, I don't think this is the problem in our case, as we get this behavior on Windows (using msysgit only) as well as on Mac machines. Also, if it were an SSH configuration issue then it shouldn't work at all; in our case it works after 10-15 minutes. I think in our case it might be too many simultaneous connections to the same server (or the same repo) or something like that. Does there exist a setting or a conf file that needs to be modified to solve this problem? Please help me solve this problem or point me in the right direction. Thanks in advance. Pritam.
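
    One hedged thing to check, given that the failures track the number of simultaneous users, is sshd's MaxStartups setting: it caps concurrent unauthenticated connections (the default in OpenSSH of that era is 10), and connections beyond the cap are simply dropped before authentication. A sketch of the relevant line in /etc/ssh/sshd_config (the values are illustrative, not a recommendation):

        # start:rate:full - begin refusing 30% of new unauthenticated
        # connections at 20 and refuse all of them at 60
        MaxStartups 20:30:60

    After changing it, reload sshd and watch the auth log around the failure times to see whether connections are still being turned away before authentication; if they are, the cap (or a connection-rate limit elsewhere on the VPS) is the likelier culprit than the Git clients.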


  • DRBD stacked resources: recovering from failure

    - by Marcus Downing
    We're running a stacked four-node DRBD setup like this:

        A --> B
        |     |
        v     v
        C     D

    This means three DRBD resources running across these four servers. Servers A and B are Xen hosts running VMs, while servers C and D are for backups. A is in the same datacentre as C.

        From server A to server C, in the first datacentre, using protocol B
        From server B to server D, in the second datacentre, using protocol B
        From server A to server B, different datacentres, stacked resource using protocol A

    First question: booting a stacked resource. We haven't got any vital data running on this setup yet - we're still making sure it works first. This means simulating power cuts, network outages etc. and seeing what steps we need to recover. When we pull the power out of server A, both resources go down; it attempts to bring them back up at the next boot. However, it only succeeds at bringing up the lower-level resource, A-C. The stacked resource A-B doesn't even try to connect, presumably because it can't find the device until it's a connected primary on the lower level. So if anything goes wrong we need to manually log in and bring that resource up, then start the virtual machine on top of it.

    Second question: setting the primary of a stacked resource. Our lower-level resources are configured so that the right one is considered primary:

        resource test-AC {
            on A { ... }
            on C { ... }
            startup {
                become-primary-on A;
            }
        }

    But I don't see any way to do the same with a stacked resource, as the following isn't a valid config:

        resource test-AB {
            stacked-on-top-of test-AC { ... }
            stacked-on-top-of test-BD { ... }
            startup {
                become-primary-on test-AC;
            }
        }

    This too means that recovering from a failure requires manual intervention. Is there no way to set the automatic primary for a stacked resource?


  • Extract sender activity from postfix logs for auditing user

    - by Aseques
    We have a mail user on our Postfix server who was using the company mail to send compromising information to the competition. I've been asked to make a report of that user's actions over the recent period. There are tools like pflogsumm and others that can extract statistical data, but I haven't so far found anything useful for getting all the info for one user, because the data is spread across multiple lines. I'd like to get something like this.

    For the sent mail:

        11/11/11 00:00:00 [email protected] -> [email protected]
        11/11/11 00:00:01 [email protected] -> [email protected]

    For the received mail:

        10/10/11 00:00:00 [email protected] -> [email protected]
        10/10/11 00:00:01 [email protected] -> [email protected]

    I know I can write a script myself, but matching the Postfix queue ID for every mail is not something that can be done with a simple grep, and I have a lot of mail history to recheck, distributed among different files and so on. The source log is the standard Postfix format, for example:

        Sep 13 16:15:57 server postfix/qmgr[18142]: B35CB5ED3D: from=<[email protected]>, size=10755, nrcpt=1 (queue active)
        Sep 13 16:15:57 server postfix/smtpd[32099]: disconnect from localhost[127.0.0.1]
        Sep 13 16:15:57 server postfix/smtp[32420]: 58C3E5EC9C: to=<[email protected]>, relay=127.0.0.1[127.0.0.1]:10024, delay=1.4, delays=0.01/0/0/1.4, dsn=2.0.0, status=sent (250 2.0.0 Ok, id=32697-04, from MTA([127.0.0.1]:10025): 250 2.0.0 Ok: queued as B35CB5ED3D)
        Sep 13 16:15:57 server postfix/qmgr[18142]: 58C3E5EC9C: removed
        Sep 13 16:15:57 server postfix/smtp[32379]: B35CB5ED3D: to=<[email protected]>, relay=mail.anothercompany.com[123.123.123.163]:25, delay=0.06, delays=0.03/0/0.01/0.02, dsn=2.0.0, status=sent (250 2.0.0 Ok: queued as 77D0EB6C025)
        Sep 13 16:15:57 server postfix/qmgr[18142]: B35CB5ED3D: removed
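
    Since the sender and each recipient are logged on separate lines joined only by the queue ID, a small script can stitch them together. Below is a rough sketch in Python (the script name, log path and exact field layout are assumptions based on the sample lines above, so it will likely need adjusting):

        import re
        import sys

        # queue ID -> (timestamp, sender)
        senders = {}

        from_re = re.compile(r'^(\w+ +\d+ [\d:]+) \S+ postfix/qmgr\[\d+\]: (\w+): from=<([^>]*)>')
        to_re   = re.compile(r'^(\w+ +\d+ [\d:]+) \S+ postfix/smtp\[\d+\]: (\w+): to=<([^>]*)>.*status=sent')

        for line in sys.stdin:
            m = from_re.match(line)
            if m:
                # remember which sender this queue ID belongs to
                senders[m.group(2)] = (m.group(1), m.group(3))
                continue
            m = to_re.match(line)
            if m and m.group(2) in senders:
                ts, sender = senders[m.group(2)]
                print("%s  %s -> %s" % (ts, sender, m.group(3)))

    Run as, for example, "python3 report.py < /var/log/mail.log | grep [email protected]"; feeding the rotated logs through it in order covers the whole history.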


  • Nginx fails upon proxying PUT requests

    - by PartlyCloudy
    Hi. I have an arbitrary web server that supports the full range of HTTP methods, including PUT for uploads. The server runs fine in all tests with different clients. I now wanted to put this server behind an nginx reverse proxy. However, each PUT request fails: the entity body is not forwarded to the backend web server. The header fields are sent, but not the body. I searched the nginx proxy documentation and found several hints that PUT might not be supported, but I also found people running svn/WebDAV stuff behind nginx, so it should work. Any ideas? Here is my config:

        server {
            listen 80;
            server_name my.domain.name;

            location / {
                proxy_set_header X-Forwarded-Host $host;
                proxy_set_header X-Forwarded-Server $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_pass http://127.0.0.1:8000;
            }
        }

        Client == HTTP PUT ==> Nginx == HTTP Proxy ==> Backend Server

    The error.log shows no entries concerning this behaviour. Thanks in advance!
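
    Two hedged things worth ruling out, since both make a proxied upload fail while plain GETs keep working: nginx of this vintage requires a Content-Length on request bodies (chunked uploads are answered with 411), and bodies larger than client_max_body_size (1 MB by default) are refused. A sketch of the size directive added to the existing location block:

        location / {
            client_max_body_size 64m;   # default is 1m
            proxy_set_header X-Forwarded-Host $host;
            proxy_set_header X-Forwarded-Server $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://127.0.0.1:8000;
        }

    A quick test is "curl -v -T somefile http://my.domain.name/target", since curl -T sends a plain body with an explicit Content-Length; if that succeeds where the real client fails, the client's transfer encoding or body size is the likely culprit.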


  • OpenVPN on port 53 bypasses access restrictions (finding similar ports)

    - by user181216
    Scenario of the WiFi: I'm using WiFi in a hostel which has a Cyberoam firewall that applies to all the computers using that access point. The access point has the following configuration:

        default gateway    : 192.168.100.1
        primary dns server : 192.168.100.1

    Here, when I try to open a website the Cyberoam firewall redirects the page to a login page (with correct login information we can browse the internet, otherwise not), and there are also website access and bandwidth limitations. I once heard about pd-proxy, which finds an open port and tunnels through it (usually UDP 53). Using pd-proxy with UDP port 53, I can browse the internet without logging in, and even the bandwidth limit is bypassed! And with another piece of software called OpenVPN, connecting to an OpenVPN server through UDP port 53, I can browse the internet without even logging in to the Cyberoam.

    Both pieces of software use port 53. I have a VPS on which I can install an OpenVPN server and connect through it to browse the internet. I know why this is happening: pinging some website (e.g. google.com) returns its IP address, which means the firewall allows DNS queries without login. But the problem is that a DNS service is already running on the VPS on port 53, and as far as I can tell I can only use port 53 to bypass the limitations, so I cannot run the OpenVPN service on my VPS on port 53.

    So how do I scan the WiFi network for "vulnerable" ports like 53, so that I can figure out the magic port and start an OpenVPN service on the VPS on that same port? (I want to scan for ports like 53 on the Cyberoam through which traffic can be tunneled, not scan for services running on ports.) Improvements to the question with retags and edits are always welcome.

    NOTE: all this is for educational purposes only; I'm curious about network-related knowledge.


  • use ssh tunnel with phpmyadmin

    - by JohnMerlino
    I have been using an SSH tunnel to bypass the firewall of a remote MySQL server. On my Ubuntu 12.04 installation it works via the terminal, and it works when using a program called MySQL Workbench. However, that program freezes often and I want to try phpMyAdmin as an alternative. However, I cannot connect to the remote server through the SSH tunnel with phpMyAdmin, although I can connect locally. These are the steps I've tried:

    1) Open a tunnel, listening on localhost:3307 and forwarding everything to xxx.xxx.xxx.xxx:3306 (I used 3307 because MySQL on my local machine uses the default port 3306):

        ssh -L 3307:localhost:3306 [email protected]

    So now I have the tunnel port open alongside my local MySQL installation's default port:

        $ netstat -tln
        Active Internet connections (only servers)
        Proto Recv-Q Send-Q Local Address           Foreign Address         State
        tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN
        tcp        0      0 127.0.0.1:3307          0.0.0.0:*               LISTEN
        tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN
        ...

    2) Now I can easily connect to the remote server via localhost using the terminal:

        $ mysql -u user.name -p -h 127.0.0.1 -P 3307

    Notice that I explicitly specify 3307 as the port, so traffic is forwarded to the remote server, and hence it logs me in to the remote server. Unfortunately, the localhost/phpmyadmin login interface doesn't allow you to specify a port option. So I modified the config-db.php file and changed the $dbport variable to 3307, under the impression that the phpMyAdmin interface would then work with port 3307:

        $ sudo vim /etc/phpmyadmin/config-db.php
        $dbport='3307';

    Then I restarted the MySQL server. Unfortunately, it didn't work. When I use the remote credentials to log in, it gives me the error:

        #1045 Cannot log in to the MySQL server
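
    For what it's worth, on Debian/Ubuntu the config-db.php file holds the credentials for phpMyAdmin's own configuration-storage database, so changing $dbport there does not change which MySQL server the login screen talks to; the server list lives in config.inc.php. A hedged sketch of an extra server entry pointing at the tunnel (the file location and label are assumptions):

        <?php
        // appended to /etc/phpmyadmin/config.inc.php (or a file under conf.d/ if that include exists)
        $i++;
        $cfg['Servers'][$i]['verbose']   = 'remote via SSH tunnel';
        $cfg['Servers'][$i]['host']      = '127.0.0.1';   // not "localhost", to force TCP instead of the socket
        $cfg['Servers'][$i]['port']      = '3307';
        $cfg['Servers'][$i]['auth_type'] = 'cookie';

    With an entry like this, the login page offers a server selector; picking the tunnel entry and using the remote credentials is the equivalent of the mysql -h 127.0.0.1 -P 3307 command that already works. Restarting the local MySQL server is not needed; only the tunnel has to be up.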


  • Can't access site internally, but DNS works

    - by BloodyIron
    1) I have apache2 running a vhost for a website.
    2) This apache2 instance is already successfully set up so that other websites on it are accessible internally and externally.
    3) I am using an internal bind9 server to resolve the new website's domain internally to the private IP. This bind9 server is not public facing, nor is it the master server on the internet.
    4) The DNS internally resolves to the right IP.
    5) Firefox reports "server not found".
    6) I have copied the config almost identically from other configs that are known to work (adjusting for proper paths, of course). In turn I have reloaded and restarted apache2 repeatedly.
    7) I have an entry in the vhost config for this domain to forward the .org, .info and .net alternative TLDs to .com, and my browser goes from .org to .com despite point 5.
    8) /var/log/apache2/access.log shows when someone externally tries to access the site, but no activity is observed when someone tries to access it internally. Changing the log level does not appear to improve the situation.
    9) I am out of ideas; nothing appears to be wrong. Please help?

    To be explicit: why is this new site unreachable internally? I would like to clarify something, even though I have already outlined this. YES, I know this system is in a private network. NO, it is not going through a router. YES, I am using an internal DNS server (bind9) to resolve, and YES, it does resolve to the proper internal IP. YES, other websites on the same server, set up the same way with internal resolution, work right now and have done for a while. Everything for this domain is set up the same as the other working domains as far as I can tell. The other working domains are internally AND externally accessible; this domain is currently only externally accessible. When I go to it internally, Firefox tells me "Server not found".
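
    A couple of hedged checks that separate name resolution from the vhost/network path (the names and addresses below are placeholders, not values from the question):

        # what do internal clients actually get back from the internal bind9 server?
        dig +short newsite.example.com @<internal-bind9-ip>

        # bypass DNS entirely: hit the web server by IP but present the right Host header
        curl -v -H "Host: newsite.example.com" http://<private-ip-of-apache>/

    If the curl-by-IP request reaches the vhost (it will show up in access.log) while the browser by name does not, the issue is on the client/resolver side - for example a stale hosts entry, a client pointed at a different DNS server, or a proxy setting; if neither request arrives, it is the vhost or something on the internal path.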


  • EC2 kernel decision and issues with creating a new machine with my AMI

    - by roacha
    I could really use some advice. I started a new instance on EC2 using Amazon's AMI, and during the deployment process I selected a Kernel ID of "Use Default". I then configured my server the way that I wanted, took a snapshot of it, and created my own AMI from it to create new servers with. When I try to create a new server with this AMI, the server fails to start and I get the error:

        EXT3-fs: sda1: couldn't mount because of unsupported optional features (240).

    This appears to happen because I am again selecting a kernel ID of "Use default" when building my second server. I have read that in order for this to work I need to choose the same kernel ID that was used by my original server. I have deleted my original server and don't know what it was using.

    What is the best process to follow in order to not have these issues? Should I choose "Use Default" for my original server? How do you know which kernel it selected? Should I just document this and always specify it during the deployment of my next servers using my custom AMI? Or should I choose a specific kernel ID during the initial build and always use that one going forward, hoping Amazon never retires it? Thanks for any advice!
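
    If the base Amazon AMI (or any still-registered image or instance built from it) is known, the kernel that "Use Default" resolved to can be read back from the metadata rather than guessed. A hedged example with the aws CLI (which may post-date the original setup; the older ec2-describe-images tool exposes the same kernel field, and the IDs below are placeholders):

        # default kernel (AKI) registered with an AMI
        aws ec2 describe-images --image-ids ami-xxxxxxxx --query 'Images[0].KernelId' --output text

        # kernel an existing instance was booted with
        aws ec2 describe-instances --instance-ids i-xxxxxxxx --query 'Reservations[0].Instances[0].KernelId' --output text

    Registering the custom AMI with that kernel ID set explicitly (register-image accepts a --kernel-id argument) then removes the dependence on whatever "Use default" happens to pick later.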


  • Full Apache config migration

    - by Victor Rashkov
    I searched a lot and didn't find an applicable answer. I have a working LAMP setup on an Ubuntu machine and I have to migrate to a new server in a different country. The old server is 11.10, the new server is 12.04 LTS. My problem is that I simply cannot remember the steps I followed when I configured the current server, which is not a basic LAMP install. It is Apache with FastCGI, suEXEC, a GD library and the worker MPM, all sitting on top of an mhddfs file system. There are also other configs I've changed and I cannot recall what they are. Because of the complexity of the setup, my attempts to migrate to the new server fail: I get permission errors, CGI problems, etc. Therefore my question is: is there a sane way to simply take a full backup of the current web server installation, including MySQL, PHP and Apache with all configs, and then move it to the new machine? I shall be forever thankful for any advice. So far none of the answers I found here gave me one. Thanks!
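
    As a rough starting point, the pieces that usually capture such a setup on a Debian/Ubuntu box are the package list, /etc, the web roots and a MySQL dump; the sketch below assumes stock paths, and an 11.10-to-12.04 move may still need manual fix-ups for renamed packages or modules:

        # on the old server
        dpkg --get-selections > packages.list
        mysqldump --all-databases -u root -p > all-databases.sql
        tar czpf lamp-backup.tar.gz /etc/apache2 /etc/php5 /etc/mysql /etc/fstab /var/www packages.list all-databases.sql

        # on the new server
        dpkg --set-selections < packages.list && apt-get dselect-upgrade
        mkdir -p /tmp/lamp-restore && tar xzpf lamp-backup.tar.gz -C /tmp/lamp-restore

    Restoring into a staging directory and diffing against the new server's /etc is safer than untarring straight over it; the mhddfs mount (fstab), the FastCGI/suEXEC wrapper settings and the apache2 mods-enabled/conf.d contents are where the forgotten customisations are most likely to turn up.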


  • Allowing Sharepoint to relay email through Exchange

    - by dunxd
    I have written a SharePoint 2007 web part that sends a field from a form to a specified email address. I have the form working as I require, but at present it can only send to internal email addresses. SharePoint's email functions use SMTP to send to our Exchange 2003 server, but because our Exchange server is configured to prevent relaying, if the To: address is not at a local domain, it won't deliver the mail. I don't want to open up our Exchange server as a completely open relay; what I want is to allow my SharePoint servers to send mail to addresses outside our domain. The following seem possible:

        Allow all mail sent from one of the SharePoint servers to be relayed
        Allow all mail from a web application pool account to be relayed (I am not sure that the application pool authenticates to the SMTP server, though)
        A combination of the two

    Can anyone advise on the best way of doing this? Is setting up a dedicated SMTP server on the Exchange server (not a separate physical server) the right way of going about this?

    EDIT: Note this is for Exchange 2003. There is a post on setting this up in Exchange 2007, which appears to have recognised how frequently this is required, but it doesn't give much detail on 2003. Can anyone expand?


  • MySQL stops accepting connections over 3306, still working on localhost

    - by Ben Dilts
    I have a MySQL database that stopped accepting connections from my web server altogether. So I SSH'ed into the server and started checking its vitals. The hard disks had plenty of open space, and there was plenty of available memory and swap space. Nothing was eating up the CPU (close to 100% idle). I even connected to MySQL locally and ran a few queries without any issues, but SHOW PROCESSLIST only showed my own connection, no others. Worst of all, nothing in the MySQL log even remotely coincided with the unavailability of the server.

    On the web server, I got an error saying "Lost connection to MySQL server during query" at the moment the unavailability started, followed by a bunch of "MySQL server has gone away" errors. There's only one other application on the server that accepts network connections, and I killed that one (in case it was holding too many open connections or something), but it didn't help. Finally I just restarted the MySQL process, and everything is (for now) working again.

    What else should I check in these circumstances? Any idea what the problem might be, and how might I verify that it is in fact the problem?
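
    For the next occurrence, a few standard status queries can show whether the connection limit (rather than the host) is what ran out; these are stock MySQL commands:

        SHOW VARIABLES LIKE 'max_connections';
        SHOW GLOBAL STATUS LIKE 'Max_used_connections';   -- high-water mark since the last restart
        SHOW GLOBAL STATUS LIKE 'Threads_connected';
        SHOW GLOBAL STATUS LIKE 'Aborted_connects';
        SHOW GLOBAL STATUS LIKE 'Aborted_clients';

    If Max_used_connections has reached max_connections, new clients are refused with "Too many connections"; since the web server instead saw "Lost connection during query" and "server has gone away", it is also worth comparing wait_timeout (and any stateful firewall or NAT idle timeouts between the web and database hosts) against the times the errors appeared.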


  • What are the typical methods used to scale up/out email storage servers?

    - by nareshov
    Hi. What I've tried: I have two email storage architectures, old and new.

    Old:

        courier-imapds on several (18+) 1TB-storage servers
        if one of them shows signs of running out of disk space, we migrate a few email accounts to another server
        the servers don't have replicas, and no backups either

    New:

        dovecot2 on a single huge server with 16TB (SATA) storage and a few SSDs
        we store fresh mail on the SSDs and run a doveadm purge to move mail older than a day to the SATA disks
        there is an identical server which has a max-15-minute-old rsync backup from the primary server
        higher-ups/management wanted to pack in as much storage as possible per server in order to minimise the cost of SSDs per server
        the rsync'ing is done because GlusterFS wasn't replicating well under that much small/random IO
        scaling out was expected to be done by provisioning another pair of such huge servers
        on facing disk-crunch issues like in the old architecture, accounts would be moved manually

    Concerns/doubts:

        I'm not convinced the synchronously-replicated-filesystem idea works well for heavy random/small IO. GlusterFS isn't working for us yet, and I'm not sure if there's another filesystem out there for this use case.
        The idea was to keep identical pairs and use DNS round-robin for email delivery and IMAP/POP3 access, and if one of the servers went down for whatever reason (planned/unplanned), we'd move the IP to the other server in the pair.
        In filesystems like Lustre, I get the advantage of a single namespace, whereby I do not have to worry about manually migrating accounts around and updating MAILHOME paths and other metadata/data.

    Questions:

        What are the typical methods used to scale up/out with the traditional software (courier-imapd / dovecot)?
        Does traditional software that stores on a locally mounted filesystem pose a roadblock to scaling out with minimal "problems"?
        Does one have to re-write (parts of) these to work with an object store of some sort, such as OpenStack object storage?


  • How do you update without cutting off users?

    - by Griffin
    I searched around and I was surprised that I couldn't find an answer to this question. My assumption is that you have multiple servers; normally they will each be doing their specific task (for the rest of this I will assume a simple website). Now let's say servers A and B need updates. Do you update server A while server B keeps serving the web pages, and then when server A is okay you update server B? This seems like it would work at small scale but seems horrible at large scale, because you would need twice the capacity you normally have. When dealing with a large number of servers, do you update small sections at a time? The problem I see with that is: if server A can no longer work alongside servers B, C, D, E or F, losing one server is not that bad, but as you keep updating you slowly lose that small percentage of capacity over and over. What is the proper way to deal with updates like this?

