Search Results

  • Network speed between a VM and another machine not residing on the same host is 11MB/s at most

    - by Henno
    Problem: network speed between a VM and another machine not residing on the same host is 11MB/s at most.

    Topology facts:
    - ESXi 5 version is 5.0.0.504890
    - The VM has the latest VMware Tools installed
    - The VM is using the E1000 network driver
    - The physical box has Win Srv 2008 R2 as the OS
    - CrystalDiskMark says the drive on the physical box can read/write 100MB/s
    - vCenter is another VM on the ESX host
    - Both the VM and the physical box show 1Gbps link speed
    - Configuration > Networking shows vmnic0 as 1000 Full
    - NTttcp is a client/server tool from Microsoft for measuring pure network throughput

    Here's what I've done so far.

    Test 1: the VM runs FileZilla FTP Server (default settings, one user account made) and the physical box runs FileZilla FTP Client (default settings). The physical box uploads a big file to the FTP server: transfer speed (as observed by Windows Task Manager on both machines) is ~11MB/s (bad). The physical box downloads that file from the FTP server: still ~11MB/s (bad). Could it be a disk performance issue?

    Test 2: the physical box runs ntttcpr.exe -a 6 -m 6,0,VM_IP_ADDRESS and the VM runs ntttcps.exe -a 6 -m 6,0,PHY_BOX_IP_ADDRESS. Transfer speed (as observed by Windows Task Manager on both machines): ~11MB/s (bad). Could it be a switch performance issue?

    Test 3: the physical box runs the vSphere Client. I open Summary > Storage > datastore > Browse Datastore... from the physical box and upload a file to the datastore. Transfer speed (as observed by Windows Task Manager on the physical box): ~26-36MB/s (good). Could it be a VM-specific issue?

    Test 4: installed NTttcp on another VM on the same ESX server and measured network performance between VMs on the same ESX server. Transfer speed: ~90-120MB/s (excellent :)

    Test 5: I have another ESX server on the same site, connecting to the same datastore and the same switch. Those two ESX servers both have 2 NICs; one NIC goes to the switch while the other goes directly to the other ESX server. I vMotioned one of the test VMs off to the other ESX host and measured network performance between VMs on different ESX servers with NTttcp. Transfer speed: ~11MB/s (bad).

    While I'm aware of "ESXi 4.1 slow file transfer", "ESXi 5 network performance is slow", "Debian Etch and ESXi slow network speeds" and "VMware ESXi slow file copy to guest", they did not help (or I must have missed something).
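    One quick check worth doing from the ESXi shell (a sketch; the esxcli syntax here is from ESXi 5.x, and vmnic0 is the uplink named above - adjust to your host):

        # confirm the actual negotiated speed/duplex of the uplinks
        esxcli network nic list
        # per-NIC detail, including driver and link status
        esxcli network nic get -n vmnic0

    If the uplink really negotiates 1000/Full, a common next step is to swap the guest's E1000 vNIC for VMXNET3 (it needs VMware Tools, which are already installed here) and re-run NTttcp, since the emulated E1000 often tops out well below wire speed.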

  • How do I keep a bridge enabled on a bonded interface?

    - by jlawer
    I'm working on setting up a pair of CentOS 6.3 servers that will run a couple of KVM VMs, and have come across a problem setting up a bridge on a bond. I am using mode 4 (802.3ad) bonding on a pair of stacked Dell PowerConnect 5524 switches connecting to R320 servers. There are 2 links (1 to each switch) that form a Link Aggregation Group (802.3ad / LACP bonding). On top of the bond I have VLAN tagging. I've verified this is a problem on multiple other bonding modes, so it isn't just a mode 4 issue.

    I am testing what happens when 1 link is dropped (i.e. a switch dies, a cable breaks, etc.). If I don't have a bridge (for KVM), everything works fine and failover happens as expected. If I have the bridge enabled, it works fine until failover (unplugging a cable). When failover happens, /var/log/messages shows the slave link going down, followed within a second by:

        kernel: br1: port 1(bond0.8) entering disabled state

    The thing is, /proc/net/bonding/bond0 shows the link is up as expected (simply with only 1 slave instead of 2). If I plug the cable back in, it recovers and brings the bridge back to an enabled state. I actually tested this while a ping was occurring, and if the timing is right a packet will actually leave the system after the link is lost but before the disabled message occurs.

    I assumed this disabled state was STP, but I have disabled STP in the bridge configuration and the issue still occurs; brctl showstp br1 still shows the link as disabled when it is running without a slave. I also switched between the NICs in the server (I have 2x Broadcom & 4x Intel), and it doesn't matter which configuration I use. Does anyone know of a way to force the bridge to stay enabled, or why it's detecting the bond as disabled when it isn't?
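    For reference, a minimal sketch of the interface files I'd expect for this layout on CentOS 6 (the bond0.8/br1 names are taken from the post; DELAY=0 and STP off are the settings usually recommended for KVM bridges - treat this as an assumption, not a known fix for the failover problem):

        # /etc/sysconfig/network-scripts/ifcfg-bond0.8
        DEVICE=bond0.8
        VLAN=yes
        ONBOOT=yes
        BRIDGE=br1

        # /etc/sysconfig/network-scripts/ifcfg-br1
        DEVICE=br1
        TYPE=Bridge
        ONBOOT=yes
        STP=off
        DELAY=0

    brctl showstp br1 plus a look at dmesg immediately after pulling a cable should confirm whether the bridge port is being disabled by a momentary carrier loss on the bond rather than by STP.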

  • Why does my ldirectord check my real servers multiple times every interval?

    - by garconcn
    I have an ldirectord server and two real servers. My ldirectord used to check the request page on each real server once every interval, but now I found that it checks four times. I have monitored the log on both real servers; they have the same problem. Here is my ldirectord configuration:

        checktimeout=10
        checkinterval=5
        autoreload=yes
        logfile="/var/log/ldirectord.log"
        quiescent=no

        virtual=192.168.1.100:80
                fallback=127.0.0.1:80
                real=192.168.1.10:80 gate
                real=192.168.1.20:80 gate
                service=http
                request="lb.html"
                receive="still alive"
                scheduler=sh
                persistent=60
                protocol=tcp
                checktype=negotiate

    Ldirectord should connect to each real server once every 5 seconds (checkinterval) and request 192.168.1.10:80/lb.html (real/request). But the access log on a real server shows four checks per interval:

        192.168.1.100 - - [13/Jun/2012:10:36:44 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:44 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:44 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:44 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:49 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:49 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:49 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:49 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:54 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:54 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:54 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:54 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
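    A quick way to rule out the most common cause - several ldirectord daemons (e.g. one started by init and one by heartbeat/Pacemaker) each running their own checks - is to look for duplicate processes and watch the checks arrive (a sketch; the interface name and addresses are taken from the post):

        # on the director: is more than one ldirectord running?
        ps aux | grep [l]directord

        # on a real server: watch the health checks as they arrive
        tcpdump -ni eth0 'tcp port 80 and host 192.168.1.100'

    If only one daemon is running, the tcpdump timestamps will at least show whether the four GETs come from one connection burst or four separate check cycles.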

  • How to encourage Windows administrators to pick up scripting?

    - by icelava
    When I worked as an administrator in my first job, I was frustrated that our administration processes with Windows servers were a series of point-and-clicks; we could never match the efficiency of the Unix servers, which had a group of shell scripts to automate a lot of the work. I soon read about WSH and ADSI and wasted no time learning just how much automation I was able to achieve with scripting.

    There was a huge problem though - almost none of my Windows colleagues were really interested in learning scripting. They seemed happy with the manual mouse-clicking chores and were never excited at the prospect of using scripts to do the work on their behalf. I struggled to convince them to pick up scripting skills despite the evident increases in efficiency. I left that job in pursuit of a full-time software development career thereafter.

    Almost a decade on, working in various environments and with different customers, I still encounter Windows administrators mainly possessing this general "mood" where they avoid scripting as much as possible, despite the increasing accessibility Windows server technologies are opening up for scripting and automation. I am almost certain the majority of administrators are administrators precisely because they absolutely hate performing any kind of programming duties. What are some means to encourage and motivate administrators that scripting can really help them in the long run?

  • Is Cherokee (probably) the best static content server for beginner sysadmins?

    - by Bad Learner
    I have read the pros and cons of most of the popular web servers and have come to the conclusion that Apache would (probably) be the best web server for serving dynamic content - no wonder YouTube, Flickr and Facebook, among many others, use it. I do not know if the C10K problem applies to Apache even when serving dynamic content only, but I think any web server used to serve dynamic content needs some good tweaking for optimized performance, and since nothing beats Apache when it comes to documentation, resources and support on the web, I will go with Apache for dynamic content.

    That apart, the confusion begins when it comes to choosing web servers for static content (including streaming videos). I see that Nginx, Cherokee and Lighttpd are among the best (I am not considering non-open-source or non-Linux stuff here). So, which to choose?
    1. I know one cannot go wrong with any of the three (Nginx, Cherokee, Lighttpd).
    2. Lighttpd's development has evidently gotten slower than it was a good time ago.
    3. The documentation is pretty good for all three, and hopefully, so are the resources (knowledge of these among the users of the Stackoverflow/Serverfault sites, the web, etc.).

    Precisely, and noting points [2] and [3], if I am not wrong, I should go with either Nginx or Cherokee. I would love to see someone clarify these:
    1. Is Cherokee just as fast (MB/s), performant (connections/s), and reliable (think downtime/restarting the server) as Nginx for serving static content and load balancing, for small, medium to large (and really large) websites and applications? (Think the size of YouTube, Apache or Facebook.)
    2. If the answer to the question above is a big "hell, yes!" then I should probably prefer Cherokee, right? Because, since I am a beginner, it would be a lot easier to set up Cherokee, as it has a graphical admin user interface + really good documentation. Yes?

    I could be wrong, I could be right. I put down what I know so that you can offer the most relevant advice. Pardon me if anything I've said is offensive.
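    Whatever you pick, the static-content claims are easy to test yourself. A minimal sketch with ApacheBench (ab ships alongside Apache; the URL and request counts are placeholders):

        # 10,000 requests, 100 concurrent, against one static file
        ab -n 10000 -c 100 http://your-server/static/test.jpg

    Comparing the requests/second and transfer-rate figures across Nginx, Cherokee and Lighttpd on your own hardware will tell you more than any general benchmark.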

  • GlusterFS as elastic file storage?

    - by Christopher Vanderlinden
    Is there any way to run GlusterFS in a replicated mode, but with the ability to dynamically scale the volume up and down? Say you have 3 servers all running glusterd. Your Gluster volume would have to be set up with replica 3:

        gluster volume create test-volume replica 3 192.168.0.150:/test-volume 192.168.0.151:/test-volume 192.168.0.152:/test-volume

    You would then mount it as, say, /mnt/gfs_test. What happens when I want to add 2 more servers to the storage pool and then also use them in this volume? Is there any easy way to expand the volume AND increase that replica count to 5?

    My end goal is to run this on EC2 instances: say 3 Apache front ends, with the webroot set up on the gluster volume mount. My concern is that if I ever need to spin up a server, I would want it to not only be an additional Apache front end, but also another server in the gluster file system, adding to fault tolerance as well as possibly giving a slight boost in read speed. Maybe there are better options that would fit the bill here? Thanks.
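    For what it's worth, newer GlusterFS releases (3.3+, if I recall correctly - treat the version as an assumption) let add-brick change the replica count in the same step. With the addresses above extended to two hypothetical new peers, it would look something like:

        gluster peer probe 192.168.0.153
        gluster peer probe 192.168.0.154
        gluster volume add-brick test-volume replica 5 192.168.0.153:/test-volume 192.168.0.154:/test-volume

    Shrinking back down would be the corresponding remove-brick with a lower replica count.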

  • AFP/SSH stopped working on OS X Server

    - by churnd
    I have 3 Mac OS X servers, all bound to AD and all configured in the Golden Triangle setup. All 3 are completely separate from each other in terms of services, but all reside on the same internal network and are all bound to the same Active Directory domain. Two are 10.5.x (latest updates) and one is 10.6.3.

    Last weekend, all 3 simultaneously stopped allowing Active Directory users access to certain services, specifically AFP and SSH. SMB still works fine on all 3. I asked the AD admin if anything changed, and he said "Yes, we made a change to user accounts to toughen up security", and suggested I use username@domain instead of just username. This still didn't work. I have completely removed one of my servers from AD and re-joined, but this didn't work either.

    I can do kinit from the command line and get a Kerberos ticket. sudo klist -ke shows all services are configured to use the correct Kerberos principals. I have been scavenging the logs for any useful info. The AFP log just shows that I'm connecting and disconnecting. DirectoryService.log shows stuff about misconfigured Kerberos hashes, but my research shows that's not uncommon. /var/log/system.log isn't showing anything useful that I can see. I'm not sure where to go from here. Any help/ideas appreciated.
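    In case it helps others reproduce the checks, the binding and ticket state can all be inspected from the shell (a sketch; dsconfigad and the Kerberos tools ship with these OS X versions, and REALM is a placeholder for the AD domain):

        # confirm the machine still sees its AD binding
        dsconfigad -show

        # get and inspect a user ticket
        kinit username@REALM
        klist

        # service principals in the keytab
        sudo klist -ke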

  • Issues using gmail with google apps and external domain

    - by Jonathan Kelly
    I have recently tried to use Gmail through Google Apps as my main email client, but I'm experiencing a few different problems. I am managing the domain (conjunktiondesign.co.uk) through 123reg.co.uk but it is hosted through fasthosts.co.uk. I transferred the domain to 123reg as fasthosts did not allow me to change the MX records myself. I followed the setup instructions step by step on Google Apps and changed the MX records as they told me to. My email was now working perfectly, but my website was down and I was getting the following error:

        The dnsserver returned: No DNS records

    I have a friend that is using the same system as me (i.e. an externally hosted domain and Google Apps mail), so I changed my 123reg details to match his (as his was working perfectly - both email and website). I changed my name servers to point to fasthosts rather than 123reg, and I added an A record called '@' pointing to fasthosts' IP address. I also created another A record called 'www' pointing to fasthosts' IP address. After I did this, my website worked almost immediately, but I have since realised that my email is now down. I have not received anything since Saturday.

    I am a web designer and would consider myself fairly tech savvy, but I have no idea about A records, CNAMEs and all the things I have been messing about with! What I ultimately need is someone to help me get my email and website working at the same time, rather than one being down when the other is OK; I seem only able to get one or the other working. I have now changed the name servers back to 123reg in an attempt to get my email back, as it is more important than my website at this stage. Any help is much appreciated. Thanks.
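    For this kind of split (web at one host, mail at Google), the record set you generally want can be verified from any machine with dig, or an online DNS lookup tool (a sketch; the fasthosts IP is whatever they assigned you, and the Google MX hosts are the ones Google Apps documented at the time):

        dig A www.conjunktiondesign.co.uk +short
        dig MX conjunktiondesign.co.uk +short

    The '@' and 'www' A records should point at the fasthosts web server, while only the MX records point at Google (ASPMX.L.GOOGLE.COM and friends). Changing the name servers wholesale moves both at once, which is why fixing one side kept breaking the other.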

  • Windows 2008 R2 RDS - Double Login

    - by colo_joe
    Issue: double logins when connecting to RemoteApps or Remote Desktop.

    Environment:
    - Gateway = 1 server, 2008 R2 - roles: Gateway, Session Broker, Connection Mgr, Session Host Configuration server
    - Session hosts = 2 servers, 2008 R2 - roles: App Manager and Session Host Configuration

    Testing: I can get to the URL http://RDS.domain.com/rdweb - I get prompted for authentication (1). Pass authentication, get the list of RemoteApps. Click on a RemoteApp or Remote Desktop, get prompted for authentication again (2). Pass authentication, I get access to the app or RDP.

    Done so far, on the session hosts: signed the RDP files with a certificate and added the following to the custom RDP settings:

        authentication level:i:0                (if server authentication fails, connect to the computer without warning - "Connect and don't warn me")
        prompt for credentials on client:i:1    (RDC will prompt for credentials when connecting to a server that does not support server authentication)
        enablecredsspsupport:i:1                (RDP will use CredSSP, if the operating system supports CredSSP)

    Also edited the JavaScript file as described in http://support.microsoft.com/kb/977507, added a Connection ID, added the Web Access server to the TS Web Access Computers group on the session host servers, and signed the apps as described in http://blogs.msdn.com/b/rds/archive/2009/08/11/introducing-web-single-sign-on-for-remoteapp-and-desktop-connections.aspx

    Note: this double login happens both internally and externally.

  • What should be monitored to troubleshoot file sharing problems?

    - by RyanW
    I'm running into some problems with a file share used by an ASP.NET web application. In this configuration, there are 2 web servers (Win2k8 Web) that connect to a file server (Win2k8 Enterprise), reading and writing files using a file share. Recently, one of the web servers has begun encountering an error accessing the file share:

        IOException: The specified network name is no longer available.

    There does not appear to be much info on the web explaining what causes this and how best to fix it, so I'm looking at what I can monitor in order to get clues. I'm not sure if it's hardware, just a load issue, file size, frequency, etc. With Windows perfmon, what can I monitor on the file server side? There's the "Files Open" counter of the Server object - any other good ones? What can I monitor on the web server side?

    EDIT: I'll add that the UNC path uses the IP address of the file server, not a name to resolve. Also, the share is a single, flat directory with over 100K files.
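    On the file server side, the counters most often tied to "network name is no longer available" live under the Server object, and can be watched from the command line with typeperf (a sketch - these are the English counter names, and the 5s/30-sample window is arbitrary):

        typeperf "\Server\Files Open" "\Server\Work Item Shortages" "\Server\Sessions Errored Out" -si 5 -sc 30

    Work Item Shortages climbing above zero under load is a classic precursor to dropped SMB sessions, and usually points at the MaxWorkItems/MaxMpxCt tuning parameters on the file server.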

  • How to enable IPv6 glue records (AAAA) in PowerDNS

    - by aef
    I'm running PowerDNS 3.1 on a Debian Wheezy beta 4 system. The zone data is accessed through a PostgreSQL database, and the server answers both IPv4 and IPv6 queries.

    If the DNS server knows the A record for one of the name servers referenced by the NS records of a zone, it automatically returns those A records as additional information in the response to an NS query for that zone. But even when it knows the AAAA record for one of those name servers, it currently never returns the AAAA record as additional information. How can I enable this? Or is there anything I could be doing wrong?

    Output of dig @ns.mydomain.tld NS mydomain.tld:

        ;; QUESTION SECTION:
        ;mydomain.tld.                  IN      NS

        ;; ANSWER SECTION:
        mydomain.tld.           86400   IN      NS      ns3.nsprovider.de.
        mydomain.tld.           86400   IN      NS      ns2.nsprovider.de.
        mydomain.tld.           86400   IN      NS      ns.mydomain.tld.
        mydomain.tld.           86400   IN      NS      ns.nsprovider.de.

        ;; ADDITIONAL SECTION:
        ns2.nsprovider.de.      86400   IN      A       1.2.3.1
        ns.nsprovider.de.       86400   IN      A       1.2.3.2
        ns.mydomain.tld.        600     IN      A       192.0.2.194
        ns3.nsprovider.de.      86400   IN      A       1.2.3.3

    Output of dig @ns.mydomain.tld A ns.mydomain.tld:

        ;; QUESTION SECTION:
        ;ns.mydomain.tld.               IN      A

        ;; ANSWER SECTION:
        ns.mydomain.tld.        600     IN      A       192.0.2.194

    Output of dig @ns.mydomain.tld AAAA ns.mydomain.tld:

        ;; QUESTION SECTION:
        ;ns.mydomain.tld.               IN      AAAA

        ;; ANSWER SECTION:
        ns.mydomain.tld.        86400   IN      AAAA    2001:db8:100:3022:1::3
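    Since the zone data lives in PostgreSQL, it's worth confirming both address records really sit side by side in the backend (a sketch against the stock gpgsql schema; the table and column names assume the default PowerDNS schema):

        SELECT name, type, content, ttl
        FROM records
        WHERE name = 'ns.mydomain.tld';

    If both the A and AAAA rows come back, the data is fine and the behaviour is down to how this PowerDNS version fills the additional section, not the database.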

  • Memory Usage of SQL Server

    - by Ashish
    The SQL Server instance on my server is using almost all the memory available on the physical server: if I have 8GB of RAM, SQL Server is using 7.8GB of it. I have read articles, and many similar questions on this forum, and I understand that this memory is reserved and SQL Server is using it. But I have 2 identical servers and 2 SQL Servers - why is this happening on a single SQL instance and not on the other?

    Also, when I run DBCC MemoryStatus, it shows:

        VM Reserved     8282008
        VM Committed     537936

    So from this we know that SQL Server reserved the whole 8GB of memory, but why does VM Committed keep increasing? What I understand is that VM Committed is: "This value shows the overall amount of VAS that SQL Server has committed. VAS that is committed has been associated with physical memory." So this is the memory SQL Server has committed (from this I understand it is the physical memory the instance is actually using). I'd like to know the reason behind this ever-increasing VM Committed memory on one server and not the other. Thanks in advance.
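    The usual way to keep one instance from claiming nearly the whole box is to cap it explicitly (a sketch; the 6144MB figure is a placeholder chosen to leave headroom for the OS on an 8GB server):

        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)', 6144;
        RECONFIGURE;

    Comparing this setting between the two servers is also the first place to look for why only one instance balloons: if one has max server memory left at the default (effectively unlimited) and the other has a cap, you'd see exactly this difference.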

  • What are the problems and pitfalls of a public-facing Active Directory?

    - by Ralph Shillington
    The situation I'm faced with is this: we plan on using a number of server applications hosted on Amazon EC2 machines, mainly Microsoft Team Foundation Server. These services rely heavily on Active Directory. Since our servers are in the Amazon cloud, it should go without saying (but I will) that all our users are remote.

    It seems that we can't set up VPN on our EC2 instance - so the users will have to join the domain directly over the internet; then they'll be able to authenticate and, once authenticated, use that token for accessing resources such as TFS. On the DC instance, I can shut down all ports except those needed for joining/authenticating to the domain. I can also filter the IPs on that machine to just the addresses we expect our users to be at (it's a small group). On the web-based application servers, I imagine all we need to open is port 80 (or 8080 in the case of TFS).

    One of the problems I'm faced with is what domain name to use for this Active Directory. Should I go with "ourDomainName.com" or "ourDomainName.local"? If I choose the latter, does that not mean I'll have to get all our users to change their DNS address to point to our server, so it can resolve the domain name (I guess I could also distribute a hosts file)? Perhaps there is another alternative that I'm completely missing.
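    For scale, "the ports needed for joining/authenticating" is a longer list than it sounds. A sketch of the inbound rules on the 2008 DC, filtered to a single client address (203.0.113.10 is a placeholder; the UDP counterparts of 53/88/389 are also needed, and domain join additionally wants TCP 135 plus the RPC dynamic port range, which is the painful part over the open internet):

        netsh advfirewall firewall add rule name="AD DNS"      dir=in action=allow protocol=TCP localport=53  remoteip=203.0.113.10
        netsh advfirewall firewall add rule name="AD Kerberos" dir=in action=allow protocol=TCP localport=88  remoteip=203.0.113.10
        netsh advfirewall firewall add rule name="AD LDAP"     dir=in action=allow protocol=TCP localport=389 remoteip=203.0.113.10
        netsh advfirewall firewall add rule name="AD SMB"      dir=in action=allow protocol=TCP localport=445 remoteip=203.0.113.10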

  • Splunk is fantastically expensive: What are the alternatives?

    - by samsmith
    This has been discussed before, but it has been several months, so it may be time to revisit it: see the earlier discussion of Splunk alternatives.

    For the record, Splunk rocks. But the pricing is simply beyond what we can consider (when I spoke with Splunk today, the cost for a system to index 5GB/day of data is over $30,000). That is more than we spend on SQL Server (by a large multiple), more than we spend on a rack of servers (by a multiple), etc. The Splunk sales team is correct (that for $30K we get more value and functionality than if we spent the same building our own system), but it doesn't matter. The Splunk cost is simply too high (by a multiple).

    Soooooo, we are looking around! Is anyone out there building a Splunk-like system? Our basic needs:
    - Able to listen for syslog messages on multiple UDP ports
    - Able to index the incoming data in an async way
    - Some kind of search engine
    - Some kind of UI
    - An API to the search engine (to embed in our console)

    We currently need to index 3-5GB/day, but need to be able to scale to 10GB/day or more. We do not need a lot of history (30 days is fine). We use Windows 2008 and 2003 servers. Thanks for your thoughts!
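    The "listen on multiple UDP ports" piece, at least, is a few lines of rsyslog on a Linux collector (a sketch in the legacy rsyslog syntax; the ports and spool path are placeholders):

        $ModLoad imudp
        $UDPServerRun 514
        $UDPServerRun 10514
        # write everything to a spool an indexer can consume asynchronously
        *.* /var/spool/syslog/all.log

    The indexing/search/UI side is where the real work is - at the time, Graylog2 and Logstash (typically paired with an external search backend) were the usual open-source candidates to evaluate.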

  • Standards for documenting/designing infrastructure

    - by Paul
    We have a moderately complex solution for which we need to construct a production environment. There are around a dozen components (and here I'm using a definition of "component" which means "can fail independently of other components" - e.g. an Apache server, a WebLogic web app, an FTP server, an ejabberd server, etc.). There are a number of WebLogic web apps - and one thing we need to decide is how many WebLogic containers to run these web apps in. The system needs to be highly available, and communications in and out of the system are typically secured by SSL.

    Our datacentre team will handle things like VLAN design, racking, server specification and build. So the kinds of decisions we still need to make are:
    - How to map components to physical servers (and WebLogic containers)
    - Identify all communication paths, and ensure each is either resilient or has an "upstream" comms path that is resilient, with failover of that path covering all single points of failure "downstream"
    - Decide where to terminate SSL (on load balancers, or on Apache servers, for instance)

    My question isn't really about how to make the decisions, but whether there are any standards for documenting (especially in diagrams) the design questions and the design decisions. It seems odd, for instance, that Visio doesn't have a template for something like this - it has templates for more physical layouts, and for more logical/software architecture diagrams. So right now I'm using a basic Visio diagram to represent each component and the comms between them, with plans to augment this with hostnames, ports, whether each comms link is resilient, etc. This all feels like something that must have been done many times before. Are there standards for documenting this?

  • OpenVPN problem

    - by Jared Voronik
    I have a problem with OpenVPN. I have already set up OpenVPN successfully on some other servers in the past (basic configuration, nothing special). On this server, I used the same config file, but setting up NAT with

        iptables -t nat -A POSTROUTING -s 10.4.0.0/24 -o eth0 -j MASQUERADE

    doesn't work. It gives the error:

        iptables v1.3.5: can't initialize iptables table `nat': Table does not exist (do you need to insmod?)
        Perhaps iptables or your kernel needs to be upgraded.

    How do I fix this error?

    Also, if I can't fix this error, can I do bridging instead of routing? I have only 1 interface, and I can connect to the remote server only via SSH (and need to avoid reboots if at all possible), so if bridging means a whole ethernet card has to be devoted to OpenVPN (and no other servers) then bridging is out; otherwise I can use bridging. Do you know of a simple, step-by-step guide to configuring OpenVPN bridging (just a simple OpenVPN server and clients that can access the internet through the VPN server, nothing fancy)?
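    A sketch of the first things to check for that iptables error (the module names are the standard ones; whether you can load them at all depends on the host - on an OpenVZ/VServer-style container the provider has to enable NAT for you):

        # is the NAT module available/loaded?
        lsmod | grep -i nat
        modprobe iptable_nat

        # then retry
        iptables -t nat -L -n

    If modprobe fails and you're in a container, that's the explanation - no reboot will help, and you'd need the provider to enable the nat table.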

  • Does my Oracle DBA need root access?

    - by Dr I
    I'm currently in a discussion with my Oracle DBA colleague, who requests root access on our production servers. I'm not so hot on letting him use root access on our production servers. He argues that he needs it to perform some operations like restarting the server, plus some other obscure arguments. The point is that I don't agree with him, because I've set up an oracle user/group and a dba group to which the oracle user belongs. Everything is running smoothly, and without any root permissions, for now.

    I also think that administrative tasks like scheduled server restarts need to be operated by the proper administrator (the systems administrator in our case) to avoid any kind of issues arising from a misunderstanding of the infrastructure interactions.

    So I need the help of both sysadmins and Oracle DBAs to point me in the right direction. If my colleague really needs these rights, I'll grant them, but I'm basically quite afraid of doing that because of security and integrity concerns. I know that my colleague is really good as an Oracle DBA and knows his work very well, but I also know of very few cases where a piece of software and its admin really need root access. Once again, I'm not looking for pros/cons, but rather advice on how I should deal with this situation.
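    A common middle ground is scoped sudo rather than full root - something like this in /etc/sudoers (edit with visudo; a sketch, with the command list as a placeholder for whatever he can actually justify):

        # let the oracle account restart the box and manage only the Oracle service
        oracle ALL=(root) /sbin/shutdown, /sbin/reboot, /etc/init.d/oracle

    Every invocation is logged, the grant is auditable, and anything outside the list still has to come through the sysadmins.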

  • Dynamically blocking excessive HTTP bandwidth use?

    - by Jeff Atwood
    We were a little surprised to see this on our Cacti graphs for June 4 web traffic: we ran Log Parser on our IIS logs, and it turns out this was a perfect storm of Yahoo and Google bots indexing us. In that 3-hour period, we saw 287k hits from 3 different Google IPs, plus 104k from Yahoo. Ouch?

    While we don't want to block Google or Yahoo, this has come up before. We have access to a Cisco PIX 515E, and we're thinking about putting that in front so we can dynamically deal with bandwidth offenders without touching our web servers directly. But is that the best solution? I'm wondering if there is any software or hardware that can help us identify and block excessive bandwidth use, ideally in real time. Perhaps some bit of hardware or open-source software we can put in front of our web servers?

    We are mostly a Windows shop, but we have some Linux skills as well; we're also open to buying hardware if the PIX 515E isn't sufficient. What would you recommend?
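    For the well-behaved bots specifically, there's a zero-hardware lever worth trying first: a crawl delay in robots.txt (a sketch - Yahoo and Bing honored Crawl-delay at the time; Googlebot ignores it, but its rate can be set in Google Webmaster Tools):

        User-agent: Slurp
        Crawl-delay: 10

    That doesn't answer the general "block abusive bandwidth in real time" question, but it can take the legitimate crawlers out of the problem.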

  • Cannot access shares via full domain name on Server 2008R2

    - by Stu
    Hi, I have a strange issue. We have a 2008 R2 PDC and BDC. I can join the domain fine and everything seems "normal". However, on some of the other 2008 R2 servers, I am unable to do things like a gpupdate. When I try, I get an error that the clocks are wrong (they aren't) and that I don't have permission.

    So far, this has only affected our 2008 R2 servers - the Win 7 clients are fine. The really strange thing is, if I browse to \\mydomain.lan\sysvol I get the error, but if I browse to \\MYDOMAIN\sysvol it works fine. I can also access \\hostname.domain\sysvol remotely for each of the DCs, and it's fine.

    So in short, the permissions appear fine, since I can access all the paths individually with the same account. It also seems unlikely the problem is on the server, as most clients can access it fine. The only drama I have is when I try to use the full domain name (which of course gpupdate does) on a 2008 R2 server. Also, it's not just sysvol - netlogon has the same issue on the affected machines. Any ideas? Thanks! Drew
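    Since the error mentions clocks and only the FQDN path fails (i.e. the Kerberos path, while the NetBIOS name may be falling back to NTLM), two quick checks from an affected server (a sketch; both tools ship with 2008 R2):

        rem actual offset against the domain time source
        w32tm /monitor

        rem dump and purge cached Kerberos tickets, then retry the FQDN path
        klist
        klist purge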

  • I get "An error occurred while Windows was synchronizing with [name of time server]." when trying t

    - by ChrisF
    Prompted by the answers to this question, I decided to give the Windows built-in time synchronisation another go. However, no matter what time server I use, I get this error: "An error occurred while Windows was synchronizing with [name of time server]."

    The help suggests the following as reasons for failure:
    1. You are not connected to the Internet. Establish an Internet connection before you attempt to synchronize your clock.
    2. Your personal or network firewall prevents clock synchronization. Most corporate and organizational firewalls will block time synchronization, as do some personal firewalls. Home users should read the firewall documentation for information about unblocking network time protocol (NTP). You should be able to synchronize your clock if you switch to Windows Firewall.
    3. The Internet time server is too busy or is temporarily unavailable. If this is the case, try synchronizing your clock later, or update it manually by double-clicking the clock on the taskbar. You can also try using a different time server.
    4. The time shown on your computer is too different from the current time on the Internet time server. Internet time servers might not synchronize your clock if your computer's time is off by more than 15 hours. To synchronize the time properly, ensure that the date and time settings are set close to your current time in the Date and Time Properties in Control Panel.

    Now the first reason is clearly wrong - I am connected to the internet. I can see the second being the most likely cause. I have Sygate Personal Firewall running, but it normally asks if something is trying to connect for the first time. Does anyone know how I can unblock the NTP protocol - or at least check whether it is blocked? I don't think it's #3 or #4, as I've tried a number of different servers, including the one currently used by Atomic Clock Sync. Though if someone knows the address of a UK time server, I can double-check this.
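    Two things that are quick to try (a sketch, assuming an XP-era machine to match Sygate; NTP uses UDP port 123 outbound, which is what the firewall rule needs to allow, and uk.pool.ntp.org is a public UK pool address):

        rem point the service at a UK pool server, then force a sync
        net time /setsntp:uk.pool.ntp.org
        w32tm /resync

    If the resync succeeds only while Sygate is temporarily disabled, a missing outbound UDP 123 rule is the culprit.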

  • Backup solution to back up terabytes and lots of static files on a Linux server?

    - by user28679
    Which backup tool or solution would you use to back up terabytes and lots of files on a production Linux server? Note that the files are all different and almost never modified, and usage is mostly adding files, so the data volume is today 3TB, growing all the time at around +15GB/day.

    Please do not reply rsync. Basic Unix tools are not enough: rsync does not keep history, and rdiff-backup miserably fails from time to time and screws up the history. Moreover, these are all file-based backups, which put a lot of IO wait on the system just to browse directories and query stat(). But I guess, except for R1Soft CDP, there is no way around that.

    We tried R1Soft CDP backup, which is block-level backup, and it proved good and efficient for all our other servers, but it systematically fails on the server with 3 terabytes and gazillions of files. The engineers of R1Soft and the datacenter have been playing a hot ball game for more than 2 months now... and there is still no backup except regular rsync. We never tried the big commercial solutions, except R1Soft CDP, since it was provided as an optional service by the datacenter hosting our servers.
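    Given "block level is what works, file level is what dies", one avenue if the volume sits on LVM is a snapshot shipped as a block stream (a sketch, assuming an LVM layout with the volume and host names as placeholders - this gives crash-consistent full images, not incrementals):

        lvcreate --snapshot --name backup_snap --size 20G /dev/vg0/data
        dd if=/dev/vg0/backup_snap bs=4M | gzip | ssh backuphost 'cat > /backups/data-$(date +%F).img.gz'
        lvremove -f /dev/vg0/backup_snap

    It sidesteps the per-file stat() storm entirely, at the cost of bandwidth and restore granularity.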

  • How far should we take the N+N redundancy craziness?

    - by Brann
    The industry standard when it comes to redundancy is quite high, to say the least. To illustrate my point, here is my current setup (I'm running a financial service).

    Each server has a RAID array in case something goes wrong on one hard drive... and in case something goes wrong on the server, it's mirrored by another spare identical server... and both servers cannot go down at the same time, because I've got redundant power and redundant network connectivity, etc... and my hosting center itself has dual electricity connections to two different energy providers, and redundant network connectivity, and redundant toilets in case the two security guards (sorry, four) need to use them at the same time... and in case something goes wrong anyway (a nuclear nuke? can't think of anything else), I've got another identical hosting facility in another country with the exact same setup.

    - Cost of reputational damage if down = very high
    - Probability of a hardware failure with my setup: <<1%
    - Probability of a hardware failure with a less paranoid setup: <<1% as well
    - Probability of a software failure in our application code: 1% (if your software is never down because of bugs, then I suggest you double-check that your reporting/monitoring system is not down. Even SQL Server - which is arguably developed and tested by clever people with a strong methodology - is sometimes down.)

    In other words, I feel like I could host a cheap laptop in my mother's flat, and the human/software problems would still be my higher risk.

    Of course, there are other things to take into consideration, such as:
    - scalability
    - data security
    - the clients' expectation that you meet the industry standard

    But still, hosting two servers in two different data centers (without extra spare servers, nor doubled network equipment apart from what's provided by my hosting facility) would provide me with the scalability and the physical security I need.

    I feel like we're reaching a point where redundancy is just a communication tool. Honestly, what's the difference between 99.999% uptime and 99.9999% uptime when you know you'll be down 1% of the time because of software bugs? How far do you push your redundancy craziness?
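    For scale (a back-of-the-envelope calculation): five nines is about 5.3 minutes of downtime a year (0.00001 x 525,960 minutes), six nines about 32 seconds - while "down 1% of the time because of software" is roughly 87 hours a year. On these numbers, the hardware tail is three orders of magnitude smaller than the software risk.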

  • Troubleshoot IIS 6 and Intermittent SQL Server Connectivity Loss

    - by jpsnow
    I am not a system administrator, but I am temporarily trying to fill that role. I have 2 Windows 2003 servers: one has my Microsoft SQL database (SQL Server 2005) and the other is the web server with IIS 6. Intermittently (once every month or 2), the web server loses connectivity to the database server for a period of roughly 10 minutes. It normally regains connectivity without intervention. The website is set up to send me an email when there is a problem connecting to the database. Within the email, I get the error:

        [Microsoft][ODBC SQL Server Driver][TCP/IP Sockets]SQL Server does not exist or access denied.

    I am able to access both servers during the outage. I can go to my SQL server and open up SQL Server without any issues. I have looked in the Event Viewer and do not see anything. The last time I had this issue, I stopped IIS 6 and started it again; after doing this, the issue was resolved. This leads me to believe it is an issue with IIS and not SQL Server. How can I begin troubleshooting?
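    One low-effort way to narrow down "IIS or network" during the next outage is a probe that bypasses IIS entirely, run from the web server (a sketch; sqlcmd ships with the SQL Server 2005 client tools, the server name is a placeholder, and the telnet client must be installed):

        sqlcmd -S SQLSERVER01 -E -Q "SELECT GETDATE()"
        telnet SQLSERVER01 1433

    If sqlcmd succeeds while the site is throwing the ODBC error, the problem is in IIS's pooled connections (consistent with the restart fixing it); if both fail, it's on the network/SQL side.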

  • Proper web server setup

    - by DMin
    I just got myself a Slicehost basic slice to play around with so I can learn how to set up web servers. I have Ubuntu 10.04.2 installed on the server. I was able to successfully get the server up and running from scratch, following this tutorial. I know this is probably just a starter's tutorial, so I was wondering if you guys can tell me what you like to do while setting up production servers.

    These are the steps that were followed:
    1. Update and upgrade Ubuntu
    2. sudo apt-get install apache2 php5-mysql libapache2-mod-php5 mysql-server
    3. Back up a copy of apache2.conf and edit it: set 'ServerTokens Full' to 'ServerTokens Prod' and 'ServerSignature On' to 'ServerSignature Off'
    4. Back up php.ini and then change "expose_php = On" to "expose_php = Off"
    5. Restart Apache
    6. Install the Shorewall firewall
    7. Configure Shorewall to only accept HTTP and SSH connections (in the rules file)
    8. Enable Shorewall on startup
    9. Add the website to the server:
       sudo usermod -g www-data root
       sudo chown -R www-data:www-data /var/www
       sudo chmod -R 775 /var/www

    I want to make this Community Wiki but can't seem to find the option to do it. Please feel free to add any feedback on the processes and things I am doing right/wrong. Much appreciated, thanks! :)
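    A few additions most people make on a production Ubuntu slice (a sketch - the package names are from the 10.04 repositories; review each tool's config after installing):

        # unattended security updates
        sudo apt-get install unattended-upgrades

        # brute-force protection for SSH
        sudo apt-get install fail2ban

        # harden SSH: disable root login and password auth in /etc/ssh/sshd_config,
        # then reload
        sudo service ssh reload

    Also worth considering: create a non-root deploy user instead of putting root in www-data, and take the chmod down from 775 once you know which user actually needs write access.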

  • Suddenly getting lock timeouts with MySQL

    - by Marc Hughes
    We've got a web app hosted on Amazon Web Services. Our database is a multi-AZ RDS MySQL server running 5.1.57, and 3-4 app servers talk to it. Today, we started seeing a lot of errors along the lines of "Lock wait timeout exceeded; try restarting transaction" - almost 1% of POST requests are seeing this.

    There have been no modifications to the code running on the site. There have been no schema changes. We haven't had a big spike in traffic. I've been looking at the processes running, and none seem out of control. I tried scaling our RDS instance from a small to a large, with no effect.

    Two days ago, Amazon had some outages. As part of the recovery from that, our RDS server and our app servers ended up in different availability zones, but all within the same region. But yesterday everything was fine, so I'm not convinced that's related. The lock timeouts occur in different types of requests and in different InnoDB tables. I have noticed the number of open connections jumped when we started seeing problems, but that may be a symptom and not a cause. What are my next steps in debugging this?
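    Next debugging steps that usually pay off on MySQL/InnoDB (a sketch; both statements are standard MySQL and safe to run on a live RDS instance from the master user):

        -- who holds/waits on locks right now, plus the latest deadlock
        SHOW ENGINE INNODB STATUS\G

        -- long-running transactions that may be holding locks open
        SHOW FULL PROCESSLIST;

    On 5.1, the TRANSACTIONS section of the INNODB STATUS output is the main view into which query is blocking; if connections are piling up, a transaction the app leaves open (not committed or rolled back after the AZ reshuffle) is a prime suspect.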
