Search Results

Search found 12497 results on 500 pages for 'linked servers'.

Page 413 of 500

  • How can adding a server to a domain cause Remote Desktop to stop working?

    - by Adrian Grigore
    I have two dedicated Windows 2008 R2 servers which I am using for web hosting. Server A is a domain controller; Server B should simply be added to the domain controlled by Server A. So I RDP'd into Server B and changed the system settings so that Server B is part of that domain. I entered my domain admin credentials, was welcomed to the domain and asked to reboot the server. So far everything seemed to work smoothly.

    After rebooting, I could not open an RDP connection to Server B anymore:

        Remote Desktop can't connect to the remote computer for one of these reasons:
        1) Remote access to the server is not enabled
        2) The remote computer is turned off
        3) The remote computer is not available on the network
        Make sure the remote computer is turned on and connected to the network, and that remote access is enabled.

    I restored an older backup of Server B and switched off the firewall before adding the server to my domain, but the problem reoccurred just the same. What could be the reason for this? The domain is brand-new and I did not change any of the default settings. Could this be some kind of domain-wide default policy that shuts down RDP on any domain clients? Or perhaps it has to do with the fact that Server B is virtual?

    Thanks for your help, Adrian


  • ntpd on Fedora Core 6 with high negative time reset values

    - by Mark White
    The basic problem: we have an FC6 server instance running on a virtual machine, and the system time seems to have been slowly drifting until it is now causing a problem. The server runs 24/7 and has been up for 155 days. It has been changed to show GMT, and reports the time as (for example) 00:15:15 GMT whereas the actual time is 00:00:00 GMT - an offset of 915 seconds. selinux has been changed to 'setenforce 0' for testing and I am running as root.

    Here is what I have tried:

    - I stop the ntpd service and change the time in System | Administration | Date & Time. The time still shows the same with 'date' in bash. There are no error logs.
    - I change the date with 'date --set' in bash. The response confirms the changed date. I run 'date' and the incorrect date is shown. There are no error logs.
    - I start the ntpd service and /var/log/messages shows success with 'time reset -915.720139s'. The date remains unchanged. 'ntpq -p' shows all three time servers have offsets of around -915 seconds.
    - I stop the ntpd service and try 'ntpd -gqx', with the same result as above - success, but a large negative time reset.
    - I've tried varying combinations of the above, and a few more settings in System | Administration | Date & Time - no change.

    I just need to reset the system time to GMT, with no offset. But I can't wait for ntpd to slew the time over the next few weeks. Any advice is welcome, cheers! Surely this shouldn't be this difficult... Mark...
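
    One thing worth ruling out, since both 'date --set' and ntpd's one-shot reset are being silently undone: a hypervisor that syncs the guest clock will overwrite corrections made from inside the VM. As a minimal sketch (assuming ntpdate is installed and the pool hostname below, which is illustrative, is reachable), stepping the clock while ntpd is stopped looks like this:

        # minimal sketch: step the clock in one jump instead of slewing
        service ntpd stop
        ntpdate -u 0.pool.ntp.org    # illustrative server; any reachable NTP source works
        hwclock --systohc            # persist the corrected time to the hardware clock
        service ntpd start

    If the 915-second offset reappears immediately after this, the correction is almost certainly being overwritten from outside the OS, and the fix belongs in the virtualization layer rather than in ntpd.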


  • OpenVPN: ifup tap0 drops all connections

    - by raspi
    I'm trying to create a star-shaped "virtual" LAN with OpenVPN which is not connected to the physical network, i.e. tap0 packets should not go to eth0; packets should only go through OpenVPN to connected clients. This setup works on my OpenVPN testing machine, which runs VirtualBox, but not on my actual server, which is running on top of Xen. Both servers are running Ubuntu Intrepid.

    /etc/network/interfaces:

        iface tap0 inet manual
            address 10.10.10.1
            netmask 255.255.255.0
            gateway 10.10.10.1

    /etc/openvpn/server.conf:

        mode server
        tls-server
        port 1194
        proto udp
        dev tap
        client-to-client
        ca /etc/openvpn/easy-rsa/keys/ca.crt
        cert /etc/openvpn/easy-rsa/keys/servername.crt
        key /etc/openvpn/easy-rsa/keys/servername.key
        dh /etc/openvpn/easy-rsa/keys/dh384.pem
        ifconfig-pool-persist ipp.txt
        server-bridge 10.10.10.1 255.255.255.0 10.10.10.128 10.10.10.250
        push "route 10.10.10.1 255.255.255.0"
        keepalive 5 60
        comp-lzo
        persist-key
        persist-tun
        status /var/log/openvpn-status.log
        log-append /var/log/openvpn.log
        verb 3
        user nobody
        group nogroup

    With ifup tap0 on VirtualBox everything is OK and SSH keeps running, but on Xen the SSH connection drops and I have to reboot the whole machine. What am I missing?
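
    One detail worth checking (an assumption, not something confirmed above): the interfaces stanza gives tap0 a gateway line pointing at its own address, and ifup may try to install that as a route, which on some setups replaces the default route and cuts off the SSH session. Bringing the interface up by hand, without letting anything touch the routing table, separates the two effects:

        # diagnostic sketch: configure tap0 manually, then verify routing is intact
        ip addr add 10.10.10.1/24 dev tap0
        ip link set tap0 up
        ip route show    # the default route via eth0 should still be listed

    If SSH survives this, the gateway line in the tap0 stanza is the likely culprit, and dropping it should be safe for a VPN-only segment.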


  • Rsyslogd not listening on port

    - by amorfis
    I installed rsyslogd on an Ubuntu server and started it, and everything looks fine, but the port the server should listen on is not opened.

        ubuntu@node7:~$ sudo service rsyslog restart
        rsyslog stop/waiting
        rsyslog start/running, process 14114

    Netstat shows it is not listening:

        ubuntu@node7:~$ netstat -tlan
        Active Internet connections (servers and established)
        Proto Recv-Q Send-Q Local Address          Foreign Address     State
        tcp        0      0 0.0.0.0:22             0.0.0.0:*           LISTEN
        tcp        0    320 172.22.0.17:22         10.8.8.38:61335     ESTABLISHED
        tcp6       0      0 :::22                  :::*                LISTEN
        tcp6       0      0 :::2776                :::*                LISTEN
        tcp6       0      0 :::2777                :::*                LISTEN
        tcp6       0      0 172.22.0.17:2777       172.22.0.11:56554   ESTABLISHED
        tcp6       0      0 172.22.0.17:2776       172.22.0.11:39780   ESTABLISHED

    This is what /etc/rsyslog.conf looks like (most comments omitted):

        #################
        #### MODULES ####
        #################

        $ModLoad imuxsock # provides support for local system logging
        $ModLoad imklog   # provides kernel logging support (previously done by rklogd)
        $ModLoad imtcp
        $InputTCPServerRun 514

        ###########################
        #### GLOBAL DIRECTIVES ####
        ###########################

        $ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
        $RepeatedMsgReduction on
        $WorkDirectory /var/spool/rsyslog
        $FileOwner syslog
        $FileGroup adm
        $FileCreateMode 0640
        $DirCreateMode 0755
        $Umask 0022
        $PrivDropToUser syslog
        $PrivDropToGroup adm
        $IncludeConfig /etc/rsyslog.d/*.conf

    In /etc/rsyslog.d/35-server-per-host.conf I have the following lines, and I suspect this could be the cause. What does it mean?

        # Stop processing of all non-local messages. You can process remote messages
        # on levels less than 35.
        :fromhost-ip,!isequal,"127.0.0.1" ~

    And if it is the cause, how could I change it so the server listens, receives and logs messages?

    UPDATE: I commented out the suspected line, but it is still not listening on port 514.
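
    For what it's worth, that filter rule only discards messages after they arrive; it cannot prevent the listener socket from being opened, so the missing LISTEN entry points at the imtcp module not loading at all. A few standard checks (rsyslogd's -N flag is its built-in config validation):

        # confirm the TCP input lines are in a file rsyslog actually loads
        grep -nE 'imtcp|InputTCPServerRun' /etc/rsyslog.conf /etc/rsyslog.d/*.conf
        sudo service rsyslog restart
        sudo netstat -tlnp | grep ':514'   # a LISTEN line owned by rsyslogd should appear
        sudo rsyslogd -N1                  # config check pass; prints any load errors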


  • MySQL encoding problem after site move

    - by Quan Zhou
    Guys, I need your help. Since my friend lost his database on Dreamhost last month, he decided to move his WordPress-based blog (written in Chinese) to my server. He's using a wp-plugin called wp-db-backup to perform regular db backups. The two servers' backgrounds:

        Dreamhost:
        Linux 2.6.31.5-modsign-aufs2-grsec-2-opt
        mysql Ver 14.12 Distrib 5.0.16, for pc-linux-gnu (i386) using readline 5.0
        apache2 unknown version

        My Server:
        Linux li159-46 2.6.32.12-x86_64-linode12
        mysql Ver 14.14 Distrib 5.1.45, for debian-linux-gnu (x86_64) using readline 6.1
        nginx 0.8.36

    His site's encoding was UTF-8 in both wp-config and the db. I imported his db backup file as UTF-8 by default, then synced the files from Dreamhost using rsync, and then I just changed the db address and nothing more. But when I took a first look at the "new" site, it was full of unreadable characters. I have met this problem before, so I changed the charset options in the browser, but none of them made it display properly. Then I converted his db to GB18030; that works, but only if the browser charset is set to GB18030 or GBK - by default browsers recognize the charset as UTF-8. I tried to edit the headers, but it doesn't work. What could I do now? Thx~~
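
    This pattern - data readable only when the browser is forced to a legacy charset - usually means the bytes were stored under one character set but declared as another during the import. A commonly cited repair is to round-trip the dump with explicit character sets; the sketch below uses the classic latin1-to-utf8 pair with a placeholder database name, though given the GB18030 observation the mangled pair here may be gbk/utf8, so the exact charset names are an assumption to verify on a copy first:

        # hedged sketch: dump without a charset header, declaring the set the bytes
        # were actually stored as, then reload declaring utf8 ('blog' is a placeholder)
        mysqldump -u root -p --default-character-set=latin1 --skip-set-charset blog > blog.sql
        mysql -u root -p --default-character-set=utf8 blog < blog.sql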


  • DNS: something is wrong?

    - by Nickolas R.
    Hello, I am configuring bind9 on a server with two network interfaces; one is connected to the LAN and the other is connected to the Internet through NAT, so bind is not directly exposed to the Internet. Everything seems to work fine - clients can do both forward and reverse lookups - but something seems strange. On the server, if I ping www.google.com once, a great amount of network activity is generated, a lot more than one would expect, so I decided to sniff the traffic with tcpdump. When loading the dump into Wireshark I can see about 250 entries with "Standard query A" and "Standard query response". Here are some of the entries from the dump:

        DNS Standard query A www.google.com
        DNS Standard query A blackhole-1.iana.org
        DNS Standard query A blackhole-2.iana.org
        DNS Standard query response
        DNS Standard query A ns2.isc-sns.com
        DNS Standard query A ns1.isc-sns.net
        DNS Standard query A ns3.isc-sns.info
        DNS Standard query response PTR b.iana-servers.net RRSIG
        DNS Standard query A auth2.dns.cogentco.com
        DNS Standard query A ns1.crsnic.net
        DNS Standard query A ns2.nsiregistry.net
        DNS Standard query A ns3.verisign-grs.net
        DNS Standard query A ns4.verisign-grs.net
        DNS Standard query PTR 79.52.19.199.in-addr.arpa

    I do not have much experience with DNS yet, but I am pretty sure that something is wrong. Anybody have an idea of what is going on?
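
    For context: most of the names in that dump (root and registry name servers, the iana.org blackhole hosts) are what a caching resolver's own iterative resolution looks like when its cache is cold, so a burst like this on a first query may be normal. Two quick ways to see whether the server answers from cache afterwards:

        # the first query walks the delegation chain; a repeat should come from cache
        dig @127.0.0.1 www.google.com | grep 'Query time'
        dig @127.0.0.1 www.google.com | grep 'Query time'   # should now be ~0 msec
        rndc dumpdb -cache   # writes the cache contents to named_dump.db for inspection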


  • Where do I define a group policy that will set a users desktop background color to green the first time they log in?

    - by Tyler
    Servers: W2k8 R2 x64. Desktops: Win7 Pro x64.

    Our current group policy uses a custom ADM file to define certain properties of the desktop (background image (centered), background color is green (00 74 00)). This policy works for us, but the downside is that policies defined in our custom ADM are only applied after a GPUpdate /Force is run. We would like these desktop theme settings to be applied the first time the user logs onto the computer.

    I've been working on a new policy that forces the computer to wait for the network when the user logs on, to handle folder redirection. The reason for writing the new policy was to resolve the issue that a user needs to run GPUpdate /Force the first time they log in, so it doesn't make sense for me to implement the new policy if there is still something else that requires GPUpdate /Force to get the user into the state that we want.

    I've moved the setting for the background image out into Admin Templates > Desktop > Desktop > "Desktop Wallpaper", so this is now being set properly when the user first logs in. Now I'm left with a black background until I force a group policy update. I have tried playing around with setting a default "Theme" and had limited success; it was not reliable enough to call a solution. I suppose I could set the background color with a script?

    Any thoughts? It feels like I'm missing something obvious, or that this should be much easier than it is.


  • debian VM refusing all traffic apart from http

    - by james lewis
    I've got a VM with a fresh install of Debian (wheezy), and I've installed node and mongo on it. The VM is using a bridged network connection, so I was expecting to be able to point my host machine's browser at the IP address of the Debian VM (port 1337 for my node example, or port 28017 for my mongo status page) and see one of the two services. My requests are refused though.

    As far as I can tell, Debian allows all traffic by default and you have to manually configure iptables to drop traffic. I've checked iptables and it says it's set up to allow anything through:

        root@devbox:/home/jlewis# iptables -L
        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination

        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination

    As a test I set up nginx, and I was able to get to the nginx landing page from my host with no problems, so obviously http traffic is allowed. I then set nginx up to forward all traffic upstream to mongo - no problems there, I was able to see the status page. I then did the same for my example node server and again, no problems. So http traffic is fine, but all other traffic is blocked. Anyone know why Debian might be refusing all other traffic, other than iptables being set up to drop it?

    EDIT - output from netstat -nltp:

        Active Internet connections (only servers)
        Proto Recv-Q Send-Q Local Address      Foreign Address  State   PID/Program name
        tcp        0      0 127.0.0.1:28017    0.0.0.0:*        LISTEN  1762/mongod
        tcp        0      0 0.0.0.0:51028      0.0.0.0:*        LISTEN  1541/rpc.statd
        tcp        0      0 0.0.0.0:22         0.0.0.0:*        LISTEN  2462/sshd
        tcp        0      0 127.0.0.1:1337     0.0.0.0:*        LISTEN  2794/node
        tcp        0      0 127.0.0.1:25       0.0.0.0:*        LISTEN  2274/exim4
        tcp        0      0 127.0.0.1:27017    0.0.0.0:*        LISTEN  1762/mongod
        tcp        0      0 0.0.0.0:111        0.0.0.0:*        LISTEN  1510/rpcbind
        tcp        0      0 0.0.0.0:80         0.0.0.0:*        LISTEN  2189/nginx
        tcp6       0      0 :::22              :::*             LISTEN  2462/sshd
        tcp6       0      0 :::45335           :::*             LISTEN  1541/rpc.statd
        tcp6       0      0 ::1:25             :::*             LISTEN  2274/exim4
        tcp6       0      0 :::111             :::*             LISTEN  1510/rpcbind
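
    Worth noting from that netstat output: node (127.0.0.1:1337) and mongod (127.0.0.1:27017/28017) are bound to the loopback address, while the services reachable from outside (sshd, nginx) are bound to 0.0.0.0. A socket bound to 127.0.0.1 is unreachable from the host regardless of iptables, which would also explain why proxying through nginx works. A quick confirmation, with <vm-ip> as a placeholder for the VM's bridged address:

        # on the VM itself - succeeds, since loopback is reachable locally
        curl -v http://127.0.0.1:1337/
        # from the host - refused while node is bound to loopback (<vm-ip> is a placeholder)
        curl -v http://<vm-ip>:1337/
        # mongod can be told to listen on all interfaces via its bind address flag
        mongod --bind_ip 0.0.0.0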


  • Authenticated User Impersonation in Classic ASP under IIS7

    - by user52663
    I've recently moved one of our servers from Server 2003 and IIS6 to Server 2008 R2 and IIS7 (technically IIS7.5, I suppose). In doing so I am transitioning a small account management tool written in classic ASP, and have run into a problem with user impersonation. Extensive searching hasn't been much help so far.

    Under IIS6, the site was configured to impersonate the logged-in user. Thus, if a domain admin logged in, he was able to run commands to create user directories, adjust permissions, etc. Using Procmon you could see the processes executing as that user. This worked fine.

    However, with the same code under IIS7, I am unable to get this behavior. I have enabled Basic Authentication, disabled Anonymous Auth, enabled impersonation, and changed the app pool to classic instead of integrated pipelining. Everything seems to be configured correctly; however, all the processes launched by the classic ASP site continue to run as the default AppPool identity and not the logged-in user.

    If it matters, programs are being launched with code such as:

        set Wsh = Server.CreateObject("WScript.Shell")
        Wsh.Run("cmd.exe /C mkdir D:\users\foo")

    Monitoring via Procmon shows cmd.exe being run as either "Classic .NET AppPool" or "DefaultAppPool", depending on the pipeline mode. Any suggestions on how to get the classic ASP site to impersonate and execute as the authenticated user would be great. Thanks!


  • Dynamic ARP Entries turning into Static ARP entries

    - by Zach
    I recently acquired a client that has a strange ARP caching issue on one of their servers. I have a server that will eventually start turning its dynamic ARP entries into static ARP entries. This causes problems, because when a machine that has a static ARP entry on this server receives a new IP via DHCP, the server is no longer able to communicate with it. Clearing the ARP cache resolves the issue and the server is fine for about a week, and then it slowly starts turning ARP entries into static ARP entries again. I haven't narrowed down when or how many it starts with, but slowly you start seeing 1 static ARP entry, then 5, then 10.

    The server in question is Windows Server 2003 SP2. It is a DC, DHCP, and DNS server. I've checked the DHCP scope options and there's nothing in there that would indicate anything to do with static ARP entries. The only thing different between this DNS server and our other DNS server is that 'Dynamically update DNS A and PTR records for DHCP clients that do not request updates' is checked on the problematic server.

    I've done a bit of research about this and it seems this may happen if any PXE-type services are running; from what I can tell, nothing is running a PXE server. I'm a bit lost, as I have never seen dynamic ARP entries turn into static ARP entries. Right now my solution is a scheduled task that runs every 24 hours to clear the ARP cache (arp -d *). I would like to not rely on this scheduled task. Has anybody seen this before, or have any suggestions on how to troubleshoot it?


  • Load Sharing Regarding Large Websites

    - by JHarley1
    Hello, I have a question regarding load sharing for large websites.

    My understanding: if you have a website that gets millions of hits a day, you will need an architecture that can support this sort of pressure. You can do one of two things:

    - Invest in a single large server that has huge amounts of processing power, memory and storage (such as Microsoft's TerraServer).
    - Spread the load of your website across a number of machines.

    Let me tackle the second approach: you have a collection of machines, all running web server software and all having access to identical copies of the website's pages. You can spread the load across these machines either using a cyclic pattern in DNS or using a load-balancing switch. The advantages of this approach are:

    - Redundancy - servers can fail and the others will "pick up the slack".
    - Incremental growth - the ability to easily add new machines to the set-up.

    My questions:

    - Is there a virtual approach to this issue of load balancing now?
    - If the website runs from a database, is there still only a single copy of the database?
    - If a user had a session running on one server (e.g. they had gone to www.example.org and been assigned to Server 2, where their session was created), would they still have their session if they refreshed the website and were allocated to Server 3?
    - What are the other disadvantages associated with load balancing?

    Many thanks, J


  • Hubs/switches taking out switches?

    - by Bart Silverstrim
    Here's the issue... we have a network with a lot of Cisco switches. Someone plugged a hub into the network, and then we started seeing "weird" behavior: errors in communication between clients and servers, network timeouts, dropped network connections, etc. Somehow that hub (or SOHO switch) was particularly freaking out our Cisco 3700 series switches. Disconnect the hub or NETGEAR-type SOHO switch and things settled down again.

    We're in the process of setting up a centralized logging server for SNMP and management, etc., to see if we can trap errors or narrow down when someone does this sort of thing without our knowledge, because things seem to work without issue for the most part - we just get freaky oddball incidents on particular switches that don't seem to have any explanation, until we find out someone decided to take matters into their own hands to expand the available ports in their room.

    Without getting into procedure changes or locking down ports or "in our organization they'd be fired" answers: can someone explain why adding a small switch or hub - not necessarily a SOHO router (even a dumb hub apparently caused the 3700s to freak out) - sending DHCP requests out will cause issues? The boss said it's because the Ciscos are getting confused by that rogue hub/switch bridging multiple MACs/IPs onto one port on the Cisco switches and they just choke on that, but I thought their MAC address tables should be able to handle multiple machines coming in on one port. Has anyone seen that behavior before and have a clearer explanation of what's happening? I'd like to know for future troubleshooting and better understanding, rather than just waving my hand and saying "you just can't".


  • domain2.com redirects to domain1.com in Apache

    - by Dmitry Mikhaylov
    I created a new virtual host, but when I try to request it, Apache redirects me to another virtual host. What could cause this problem?

        <VirtualHost XXX.XXX.XXX.XXX:80>
            ServerName domain1.com
            AddDefaultCharset utf-8
            CustomLog /var/www/httpd-logs/domain1.com.access.log combined
            DocumentRoot /home/user/www/domain1.com
            ErrorLog /var/www/httpd-logs/domain1.com.error.log
            ServerAdmin [email protected]
            ServerAlias www.domain1.com
            SuexecUserGroup user user
            AddType application/x-httpd-php .php .php3 .php4 .php5 .phtml
            AddType application/x-httpd-php-source .phps
            php_admin_value open_basedir "/home/user:."
            php_admin_value sendmail_path "/usr/sbin/sendmail -t -i -f [email protected]"
            php_admin_value upload_tmp_dir "/home/user/mod-tmp"
            php_admin_value session.save_path "/home/user/mod-tmp"
            ScriptAlias /cgi-bin/ /home/user/www/domain1.com/cgi-bin/
        </VirtualHost>

        <VirtualHost XXX.XXX.XXX.XXX:80>
            ServerName domain2.com
            CustomLog /dev/null combined
            DocumentRoot /home/user/www/domain2.com
            ErrorLog /dev/null
            ServerAdmin [email protected]
            ServerAlias www.domain2.com
            SuexecUserGroup user user
            AddType application/x-httpd-php .php .php3 .php4 .php5 .phtml
            AddType application/x-httpd-php-source .phps
            php_admin_value open_basedir "/home/user:."
            php_admin_value sendmail_path "/usr/sbin/sendmail -t -i -f [email protected]"
            php_admin_value upload_tmp_dir "/home/user/mod-tmp"
            php_admin_value session.save_path "/home/user/mod-tmp"
        </VirtualHost>

    "apache2ctl -S" output:

        VirtualHost configuration:
        XXX.XXX.XXX.XXX:80     is a NameVirtualHost
                default server domain1.com (/etc/apache2/apache2.conf:266)
                port 80 namevhost domain1.com (/etc/apache2/apache2.conf:266)
                port 80 namevhost domain2.com (/etc/apache2/apache2.conf:284)
        XXX.XXX.XXX.XXX:443    is a NameVirtualHost
                default server domain1.com (/etc/apache2/apache2.conf:246)
                port 443 namevhost domain1.com (/etc/apache2/apache2.conf:246)
        wildcard NameVirtualHosts and _default_ servers:
        *:443                  is a NameVirtualHost
                default server www.example.com (/etc/apache2/apache2.conf:239)
                port 443 namevhost www.example.com (/etc/apache2/apache2.conf:239)
        *:80                   is a NameVirtualHost
                default server domain1.com (/etc/apache2/sites-enabled/000-default:1)
                port 80 namevhost domain1.com (/etc/apache2/sites-enabled/000-default:1)
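
    Since the apache2ctl -S output does show domain2.com registered as a namevhost on XXX.XXX.XXX.XXX:80, name-based selection ought to work whenever the Host header matches. Testing with an explicit Host header takes DNS out of the picture:

        # ask Apache directly, bypassing DNS for domain2.com
        apache2ctl configtest
        curl -v -H 'Host: domain2.com' http://XXX.XXX.XXX.XXX/
        curl -v -H 'Host: www.domain2.com' http://XXX.XXX.XXX.XXX/

    If domain1's content comes back for both, Apache is never matching domain2's ServerName/ServerAlias (e.g. the request is actually arriving on a different IP or port); if domain2's content comes back, the "redirect" is happening before Apache - typically domain2.com's DNS pointing at the wrong address.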


  • Wrong Outlook Anywhere settings

    - by Ken Guru
    Hey all. I wanted to enable NTLM authentication on Outlook Anywhere, and after running the command Set-OutlookAnywhere -IISAuthenticationMethods Basic,NTLM, my settings got changed. This is a dump from before I ran the command:

        [PS] C:\Windows\system32>Get-OutlookAnywhere
        ServerName                 : EXCAS01
        SSLOffloading              : False
        ExternalHostname           :
        ClientAuthenticationMethod : Basic
        IISAuthenticationMethods   : {Basic}
        MetabasePath               : IIS:///W3SVC/1/ROOT/Rpc
        Path                       : C:\Windows\System32\RpcProxy
        Server                     : EXCAS01
        AdminDisplayName           :
        ExchangeVersion            : 0.1 (8.0.535.0)
        Name                       : Rpc (Default Web Site)
        DistinguishedName          : CN=Rpc (Default Web Site),CN=HTTP,CN=Protocols,CN=EXCAS01,CN=Servers,CN=Exchange Administrative Group (FYDIBOHF23SPDLT),CN=Administrative Groups,CN=First Organization,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=asp,DC=ssc,DC=no
        Identity                   : EXCAS01\Rpc (Default Web Site)
        Guid                       : 289b4865-caf1-4412-95ee-6fb0dff55e8b
        ObjectCategory             : asp.ssc.no/Configuration/Schema/ms-Exch-Rpc-Http-Virtual-Directory
        ObjectClass                : {top, msExchVirtualDirectory, msExchRpcHttpVirtualDirectory}
        WhenChanged                : 05.01.2011 16:59:55
        WhenCreated                : 27.11.2009 11:20:12
        OriginatingServer          :
        IsValid                    : True

    Notice the settings for "Name", "DistinguishedName", and "Identity". After I ran the command, I ended up with this:

        [PS] C:\Windows\system32>Get-OutlookAnywhere
        ServerName                 : EXCAS01
        SSLOffloading              : False
        ExternalHostname           :
        ClientAuthenticationMethod : Basic
        IISAuthenticationMethods   : {Basic, Ntlm}
        MetabasePath               : IIS:///W3SVC/1/ROOT/Rpc
        Path                       : C:\Windows\System32\RpcProxy
        Server                     : EXCAS01
        AdminDisplayName           :
        ExchangeVersion            : 0.1 (8.0.535.0)
        Name                       : EXCAS01
        DistinguishedName          : CN=EXCAS01,CN=HTTP,CN=Protocols,CN=EXCAS01,CN=Servers,CN=Exchange Administrative Group (FYDIBOHF23SPDLT),CN=Administrative Groups,CN=First Organization,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=asp,DC=ssc,DC=no
        Identity                   : EXCAS01\EXCAS01
        Guid                       : 289b4865-caf1-4412-95ee-6fb0dff55e8b
        ObjectCategory             : asp.ssc.no/Configuration/Schema/ms-Exch-Rpc-Http-Virtual-Directory
        ObjectClass                : {top, msExchVirtualDirectory, msExchRpcHttpVirtualDirectory}
        WhenChanged                : 06.01.2011 09:43:50
        WhenCreated                : 27.11.2009 11:20:12
        OriginatingServer          : ASP-DC-2.
        IsValid                    : True

    Now the "Name", "DistinguishedName" and "Identity" have changed, and when I try to change them back by running Set-OutlookAnywhere -Identity "EXCAS01\Rpc (Default Web Site)", I get the following error:

        [PS] C:\Windows\system32>Set-OutlookAnywhere -Identity "EXCAS01\Rpc (Default Web Site)"
        Set-OutlookAnywhere : The operation could not be performed because object
        'EXCAS01\Rpc (Default Web Site)' could not be found on domain controller 'ASP-DC-2.'.

    Remember, RPC over HTTP works fine with Basic authentication (even with the wrong settings), but NTLM still doesn't work. How do I change the settings back?


  • Squid SSL transparent proxy - SSL_connect:error in SSLv2/v3 read server hello A

    - by larryzhao
    I am trying to set up an SSL proxy with Squid so that one of my internal servers can reach https://www.googleapis.com - the idea is for the Rails application on that server to reach googleapis.com via the proxy. I am new to this, so my approach is to set up an SSL transparent proxy with Squid.

    I built Squid 3.3 on Ubuntu 12.04, generated an ssl key and crt pair, and configured squid like this:

        http_port 443 transparent cert=/home/larry/ssl/server.csr key=/home/larry/ssl/server.key

    leaving almost all other configuration at the defaults. The permissions of the dir that holds the key/crt are:

        drwxrwxr-x 2 proxy proxy 4096 Oct 17 15:45 ssl

    Back on my dev laptop, I put "<proxy-server-ip> www.googleapis.com" in my /etc/hosts to make the call go to my proxy server. But when I try it in my Rails application, I get:

        SSL_connect returned=1 errno=0 state=SSLv2/v3 read server hello A: unknown protocol

    And I also tried openssl on the cli:

        openssl s_client -state -nbio -connect www.googleapis.com:443 2>&1 | grep "^SSL"
        SSL_connect:before/connect initialization
        SSL_connect:SSLv2/v3 write client hello A
        SSL_connect:error in SSLv2/v3 read server hello A
        SSL_connect:error in SSLv2/v3 read server hello A

    Where did I go wrong?
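
    An "unknown protocol" error during the hello usually means the far end answered with something other than TLS - consistent with Squid's http_port directive serving plain HTTP on port 443. In Squid, the TLS-terminating listener is the separate https_port directive, and it expects a certificate rather than a .csr signing request. A hedged sketch of the check (the proxy address is a placeholder):

        # if port 443 answers plain HTTP, TLS clients will report "unknown protocol"
        curl -v http://<proxy-server-ip>:443/
        # squid.conf sketch (not bash): a TLS listener would look roughly like
        #   https_port 443 cert=/home/larry/ssl/server.crt key=/home/larry/ssl/server.key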


  • Some URLs fail to load on Windows web portal

    - by jpolache
    I'm working in a large data center and have been assigned to troubleshoot an issue with a Windows (IIS) web server that acts as a portal for a customer of the data center. This portal server is on a DMZ at the local data center. I don't have access to the portal desktop and am relying on an off-site administrator to work with me to do testing and report the condition of the portal. He tells me there are no software firewalls or other filtering configured.

    While most of the remote web pages work fine, several of the URLs the portal is supposed to serve up fail to load. I had Wireshark installed on the portal system and had a capture taken of one of the failures, using IE to access one of the remote web servers at issue. I could see the TCP SYN-ACK coming back from the remote server, but after several HTTP GETs fail to get a response, the portal server sends a reset. The webmaster of the remote web server assures me that no sites are being blocked. I had a capture taken outside the local firewall, so there should be no issue there.

    Another tech set up a laptop and used the IP address of the portal (we took the portal offline for the test). The laptop loads the URL as expected. I tried having Firefox loaded to make sure the HTTP GET was not malformed; same failure as with IE. So it seems it is not the remote web server or the network, because there was no problem with the laptop. At this point, I'm not sure what other questions to ask or tests to do.


  • Cause of slow download speed on a particular EC2 instance?

    - by James
    I have a networking issue I'm trying to solve. I have two EC2 instances, same zone, same type. On one of the two (the 'bad' instance), the download speed is really poor (200k/s), while on the other (the 'good' instance) the download speed is comfortably 30M/s+. To clarify, I'm talking about downloading files to the EC2 instance while ssh'd into the server, e.g. running wget with a large file. I've tried different files, including S3 objects and a large Linux ISO from elsewhere.

    Running ethtool eth0 only returns 'Link detected: yes' for both. When running ifconfig, both return much the same, aside from the good instance showing no error packets while the bad instance shows many:

        UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
        RX packets:168372370 errors:5075643 dropped:0 overruns:0 frame:0
        TX packets:122116480 errors:0 dropped:0 overruns:0 carrier:0

    Both servers are configured the same, or at least were supposed to be. How can I go about diagnosing the cause of the slow download speed? Is there anything particular to EC2 instances that could cause this? Having trouble knowing where to start. Thanks for any help!
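
    The RX error counter in that output is a concrete lead. Watching whether it keeps climbing during a transfer, while pulling a file straight to /dev/null so disk speed can't be a factor, narrows things down (the URL is a placeholder - any large, well-served object works):

        # watch the interface error counters while a download runs in another session
        watch -n 1 "ifconfig eth0 | grep -E 'RX packets|errors'"
        wget -O /dev/null http://example.com/large.iso

    If the errors grow in step with the slow transfers, the problem is below the OS (host hardware or the instance's virtual NIC), which on EC2 generally points toward stopping and relaunching the instance onto different hardware rather than tuning the guest.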


  • MS SQL Server Firewall Ports

    - by mmacaulay
    Hi, I've recently found myself in the position of quickly deploying a production app on SQL Server 2008 (Express), and I've been having some issues configuring firewall rules between our web server running the ASP.NET app and our database server.

    Everything I can find on the internet claims that I should only need TCP ports 1433/1434 and UDP port 1434 accessible on the database server. However, we were unable to get connectivity between the web app and the database with just those ports. With the help of one of the guys in our datacentre, we discovered that there was traffic also going to TCP port 2242 on the database server. After opening this port, everything worked, but we're not sure why.

    Later on, I had to reinstall SQL Server due to some disk space issues, and found that the problem had resurfaced - after another session with the packet sniffer, we discovered that this time the traffic was going to TCP port 4541 on the database server.

    My question is: is there some configuration option I'm missing in SQL Server that's making it choose random ports? I'd like to have our firewall rules locked down as much as possible, and of course we'd like to avoid any future mysterious connectivity issues, especially once the app is live. Both servers are running Windows 2003 R2 x64.


  • CIFS mounted drive setting "sticky bit" on all files, cannot change permissions or modify files

    - by mattmcmanus
    I have a folder mounted on an Ubuntu 8.10 server through cifs that I simply cannot change the permissions on once mounted. Here is a breakdown of what's going on:

    All files within the mounted folder automatically have their permissions set to -rwxrwSrwx, regardless of whether the file is created on the Windows server or on the Linux machine. I have the same directory mounted on two other Linux servers (both running 9.10 instead of 8.10) with no problems at all. They are all using the same fstab options and the same credentials:

        //server/folder /media/backups cifs credentials=/etc/samba/.arcadia_cred,noexec,noserverino 0 0

    I've run chmod a million different ways, all of which report successfully changing the permissions - but the permissions don't actually change. The issue began after I updated from 8.04 to 8.10.

    Any idea why this may be happening on one machine? Since it started after an upgrade, I'm not sure what is the best thing to do. Any help you could give would be great! None of my automated backup scripts are working because of this!
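
    One possible angle (an assumption about the cause, not a confirmed diagnosis): when the server side doesn't offer CIFS Unix extensions, permissions on a cifs mount are synthesized by the client at mount time and chmod is effectively a no-op, so the displayed modes are normally pinned with mount options instead. file_mode and dir_mode are standard mount.cifs options:

        # hedged sketch: pin the synthesized permissions at mount time
        sudo mount -t cifs //server/folder /media/backups \
            -o credentials=/etc/samba/.arcadia_cred,noexec,noserverino,file_mode=0644,dir_mode=0755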


  • Resources for Smartphone Security

    - by Shial
    My organization is currently working on improving our data and network security, due to increasing HIPAA requirements and a general need to get a better grasp on controlling our health-related information. We are a non-profit working with people with developmental disabilities, so we handle a lot of medical-related information.

    One area that has been identified as a risk is our use of smartphones, specifically (at this time) Windows Mobile 6.1 devices from T-Mobile. We do not use the VPNs on the phones, so there isn't any way they can access our databases or file servers (the username/password for the VPNs is not the domain logon). What would be exposed, however, is the particular user's email account, since you could extract the username/password and access the email either on the device or in our web email (Exchange 2003), which could contain HIPAA-protected confidential information about clients and services - and that would be an incident that would have to be reported.

    What resources or ideas would help us secure these devices? I'm not worried about data interception (we use SSL), but more about physical theft or loss of the device. Are there websites I just haven't found with guidelines and suggestions, or particular products that would help protect us? I also don't want to limit the discussion to Windows Mobile; I myself am looking at an Android 2.0 device, and there is always the possibility we could eventually be pushed to enable the VPNs. I know this is a subject that likely won't have a single correct answer, and it is something we should all be aware of, since these devices sit outside of our immediate control most of the time.


  • Windows 7 - Windows XP - sharing - why isn't it working?

    - by durumdara
    Hi! This seems to be a "hardware" rather than a "software"/"programming" question, but I need to use this share in my programs, so it is close to programming.

    We had an XP-based wireless network. The server is XP Professional, the clients are XP Home (notebooks). This was working well with folder sharing (with user rights, not simple sharing). Then we replaced one of the notebooks with a Win7/x64 notebook. At first it could reach the server, and the other client too. Later I went to other sites and connected to other servers and networks. When I returned to this network, I found that I could no longer connect to this server: I see none of its resources, and when I try to double-click on this computer, I get a login window where I can type anything - I can never log in...

    The interesting part:

    - The other XP Home machine can see the server, and can log in as guest or another user.
    - The server can see the XP Home notebook.
    - The Win7 machine can see the notebook's shared folders, and the XP Home machine can see the Win7 shared folders.
    - The server can see the Win7 folders, BUT the Win7 machine cannot see the server's folders or resources...

    The Win7 machine is in a "work" networking group; the group name is not MSHOME. I have tried everything on the server: I tried removing the MS client and restoring it with simple sharing, setting a guest password, etc., but I have lost the ability to access this server from Win7.

    Does anyone have any idea what I need to check or set to access these resources - to use them in my programs? Thanks for every info/link: dd


  • Gmail.com detects mail as spam, but the server is not on any blacklist

    - by Tomer W
    I have an issue with Google (Gmail, to be exact). About 1 month ago, we had a security breach, and mail was relayed through our servers. We got listed in almost ALL blacklists :( We fixed the problem and requested removal from the blacklists, which was granted easily. We have not been sending any spam for over 3 weeks now, and we are clear of all the blacklists (MxToolBox Black-List Search Result).

    But Gmail still refuses to accept anything from the server, stating '550 Spam'. Following is a telnet attempt to send to Gmail:

        220 mx.google.com ESMTP g47si45436208eep.123
        helo megatec.co.il
        250 mx.google.com at your service
        mail from: <[email protected]>
        250 2.1.0 OK g47si45436208eep.123
        rcpt to: <[email protected]>
        250 2.1.5 OK g47si45436208eep.123
        Data
        354 Go ahead g47si45436208eep.123
        Test123
        .
        550-5.7.1 [62.219.123.33 11] Our system has detected that this message is
        550-5.7.1 likely unsolicited mail. To reduce the amount of spam sent to Gmail,
        550-5.7.1 this message has been blocked. Please visit
        550-5.7.1 http://support.google.com/mail/bin/answer.py?hl=en&answer=188131 for
        550 5.7.1 more information. g47si45436208eep.123
        Connection to host lost.

    I tried filling in the form at Gmail - Report Delivery Problem. I also tried reaching Google by phone, but was directed to the link mentioned above. I checked reverse DNS and it is OK... We don't have TLS, but that shouldn't be a problem, should it?

    Note: we are not a bulk sender. Anyone have an idea what could be blocking our IP? Does anyone know who can be contacted in order to resolve this listing?
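
    Gmail weighs signals beyond the public blacklists - reverse DNS matching the HELO name, SPF, and the sending IP's accumulated reputation among them. The checks below use only standard dig invocations, with the IP and domain taken from the session above:

        # reverse DNS for the sending IP - ideally it resolves to a name matching the HELO
        dig -x 62.219.123.33 +short
        # does the domain publish an SPF record authorizing this IP?
        dig TXT megatec.co.il +short

    Reputation also decays slowly: after a relay incident it can take considerably longer than the blacklist delistings for Gmail's own scoring to recover.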


  • Webservice randomly dropping connections - possibly due to firewall nondata events?

    - by adam
    I have a hosted webapp which requests data from a REST webservice in our office. Each page calls one (or several) webservices, which go from our host, via our firewall (a WatchGuard Firebox), to a server in our office.

    All of a sudden, the app has dramatically slowed. We have determined that the webservice is timing out at random when called externally (it's fine when called from within the office network). I'm pretty certain it's our connection which is dropping the webservice call, so I've written a quick php/curl script which calls the webservice over many iterations and shows the various timings. Below is an example output, showing both a failed and a successful call (with a 5-second timeout):

             http_code  namelookup_time  connect_time  pretransfer_time  starttransfer_time  total_time
        1    0          0.000096         0.0342        0.0000            0.0000              0.0342
        2    200        0.000052         0.0332        0.1327            0.1751              0.1752

    As per iteration #1 above, failed requests seem to be failing between connect and pretransfer. I'm not sure if this shows that the connection successfully passed the firewall, or could the firewall still cause an issue? Our firewall is showing a series of "nondata event" log messages for the relevant access rule. Our IT team tells me these are routine, although I can find no mention of them in Google. I'm not sure if this fits in between connect and pretransfer.

    Having eliminated the webservice server (by testing internally) and the live webapp (by testing different code on different external servers), I am left suspecting the connection to the office. Could the Firebox nondata events be causing a problem between connect and pretransfer?
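
    For reference, the same timing breakdown can be reproduced without PHP using curl's --write-out variables (the variable names are standard curl; the URL is a placeholder):

        # one-shot timing probe with a 5-second cap, mirroring the php/curl script
        curl -o /dev/null -s -m 5 \
             -w 'code=%{http_code} dns=%{time_namelookup} connect=%{time_connect} pretransfer=%{time_pretransfer} starttransfer=%{time_starttransfer} total=%{time_total}\n' \
             http://webservice.example/endpoint

    A completed handshake (nonzero connect_time) with a zero pretransfer_time on the failed run means the connection opened but died before the request phase completed - exactly where a stateful firewall dropping or resetting an established session would show up.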


  • Network Load Balancing, intermittent port problem

    - by Jimmy Chandra
    Trying to troubleshoot an intermittent problem, which I think might be related to an NLB issue. We are using Windows Network Load Balancing to balance load for our multi-server SharePoint front ends. Say Web Front End 1's IP is 192.168.1.100 and Web Front End 2's IP is 192.168.1.101; NLB is set up to balance both WFE servers on any incoming traffic to the IP 192.168.1.200.

    Sometimes we get an intermittent issue where, when we try to access the SharePoint site using 192.168.1.200:8080 (say the site is set up to run on port 8080) from a remote client, it displays "page not found". Pinging 192.168.1.200 gets responses, but trying to telnet to 192.168.1.200:8080 just won't connect. However, browsing the SharePoint site directly on each individual WFE (192.168.1.100 and 192.168.1.101) shows no problem whatsoever. My guess (we didn't get a chance to try it yet, but I think it should work) is that connecting remotely to an individual server would respond just fine, but any attempt at connecting via the virtual IP (192.168.1.200) fails miserably. Funny thing is, after a while it returns to normal.

    Has anyone had similar experience with this type of problem while implementing NLB before? We are doing this in a virtual environment.


  • Choose identity from ssh-agent by file name

    - by leoluk
    Problem: I have some 20-30 ssh-agent identities. Most servers refuse authentication with "Too many failed authentications", as SSH usually won't let me try 20 different keys to log in. At the moment I am specifying the identity file for every host manually, using the IdentityFile and IdentitiesOnly directives, so that SSH will only try one key file. This works.

    Unfortunately, it stops working as soon as the original key files aren't available anymore. ssh-add -l shows me the correct paths for every key file, and they match the paths in .ssh/config, but it doesn't work. Apparently, SSH selects the identity by public key signature and not by file name, which means the original files have to be available so that SSH can extract the public key. There are two problems with this:

    - it stops working as soon as I unplug the flash drive holding the keys
    - it renders agent forwarding useless, as the key files aren't available on the remote host

    Of course, I could extract the public keys from my identity files and store them on my computer and on every remote computer I usually log into. That doesn't look like a desirable solution, though.

    What I need is a way to select an identity from ssh-agent by file name, so that I can easily select the right key using .ssh/config or by passing -i /path/to/original/key, even on a remote host I have SSH'd into. It would be even better if I could "nickname" the keys so that I don't even have to specify the full path.
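
    For what it's worth, later OpenSSH releases document exactly this "select by file" behavior: with IdentitiesOnly yes, IdentityFile may point at a .pub file, and the private-key operation is delegated to whichever agent key matches. A sketch, with illustrative paths and host names:

        # extract the public half once, while the private key is still reachable
        ssh-keygen -y -f /media/usb/keys/id_example > ~/.ssh/id_example.pub

        # reference only the public file; the agent supplies the private part
        cat >> ~/.ssh/config <<'EOF'
        Host example
            HostName server.example.org
            IdentitiesOnly yes
            IdentityFile ~/.ssh/id_example.pub
        EOF

    This also survives agent forwarding, since only the public file needs to exist on the remote host.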

