Search Results


  • Excel: how to get a column average for rows that meet multiple criteria

    - by Jess
    I would like to know the average number of days between open and close dates for items with a close date in a particular month. In the example below, items 2, 5 and 6 were closed in Jan 2013 (Closed can be RESOLVED or CANCELLED status), and each was open for 26, 9 and 6 days respectively. So the jobs with a close date in Jan 2013 (between 01/01/2013 and 31/01/2013) have an average open time (between open and close date) of 13.67 days to 2dp. I have tried a few ways to get this to work, and I think the issue I am having is with the AVERAGE function. First time using a forum, so apologies if my question is unclear. I was unable to post an image, so the data is comma separated below:
    Item_ID,Open_Date,Status,Close_Date
    1,1/06/2012,RESOLVED,16/07/2012
    2,20/12/2012,RESOLVED,16/01/2013
    3,2/01/2013,IN PROGRESS,
    4,3/01/2013,CANCELLED,7/05/2013
    5,3/01/2013,RESOLVED,12/01/2013
    6,4/01/2013,RESOLVED,10/01/2013
    7,1/02/2013,RESOLVED,15/02/2013
    8,2/02/2013,OPEN,
    9,7/02/2013,CANCELLED,26/02/2013
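
    Since AVERAGEIFS cannot average the difference of two columns, one way to express this is as a SUMPRODUCT ratio. A minimal sketch, assuming Open_Date sits in B2:B10 and Close_Date in D2:D10 (the cell ranges are assumptions, not from the post):

        =SUMPRODUCT((D2:D10>=DATE(2013,1,1))*(D2:D10<DATE(2013,2,1))*(D2:D10-B2:B10))
            /SUMPRODUCT((D2:D10>=DATE(2013,1,1))*(D2:D10<DATE(2013,2,1))*1)

    Blank close dates compare as 0 and fail the date test, so the IN PROGRESS and OPEN rows drop out automatically; the numerator sums the open durations of the matching rows and the denominator counts them. Note that straight date subtraction counts item 2 as 27 days; subtract 1 per row inside the first SUMPRODUCT if your convention excludes an endpoint.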

    Read the article

  • How difficult is it to setup Mac OS X Server?

    - by Anriëtte Combrink
    Hi there. We are a small office of about 4 people, and we would like to set up a 27-inch iMac (Core 2 Duo) as a server and workstation simultaneously, using Mac OS X Server. This might seem like overkill (and stupidity at the same time), but here is the situation:
    - we want to convert our whole office to Mac; only one full-time PC is left
    - we will not use its mail server
    - we might use its chat server
    - we want it set up to provide VPN
    - we are a small office, so I don't see how the server could be overrun with too much traffic.
    How difficult would it be to set it up in this way? I have fairly advanced knowledge of Mac OS X but have never encountered Mac OS X Server. I think I would be able to set it up, but what are the probable pitfalls? Has anyone else been in a similar situation?

    Read the article

  • Best approach to utilize RamDisk for Chrome?

    - by laggingreflex
    I use a lot of tabs, and after a while the less recently opened tabs take some time to become responsive, which I guess is because they're being paged out to the HDD while not in use. So after creating a RAM disk I have two options: use the --disk-cache-dir="G:/" switch to do what it says, or what I'm currently doing: using a directory junction to move the entire "[...]\AppData\Local\Google\Chrome\User Data\Default" folder over to the RAM disk. I thought this would be better than moving just the disk cache, but what do I know. Is it? As one can guess, it'll be a pain saving/loading the RAM disk image each time I start Chrome, but if it really is better than the former approach I'll write a script or something.
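
    For reference, a minimal sketch of both approaches on Windows (the G: drive letter comes from the post; the Chrome install path and user name are assumptions):

        :: Option 1: point only Chrome's disk cache at the RAM disk
        "C:\Program Files\Google\Chrome\Application\chrome.exe" --disk-cache-dir="G:/"

        :: Option 2: junction the whole profile onto the RAM disk (run once, with Chrome closed)
        robocopy "C:\Users\you\AppData\Local\Google\Chrome\User Data\Default" "G:\Default" /E
        rmdir /S /Q "C:\Users\you\AppData\Local\Google\Chrome\User Data\Default"
        mklink /J "C:\Users\you\AppData\Local\Google\Chrome\User Data\Default" "G:\Default"

    Option 1 only loses the cache if the RAM disk image isn't saved; option 2 risks the whole profile (bookmarks, cookies, sessions), which is why the save/load script matters much more there.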

    Read the article

  • How to stop Mac OS X from using swap when there is still "Inactive" memory?

    - by Motin
    A common phenomenon in my day-to-day usage of OS X (and for several others, according to various posts throughout the internet) is that the system becomes slow whenever there is no more "Free" memory available. Supposedly this is due to swapping, since heavy disk activity is apparent and vm_stat reports many pageouts. (Correct me if wrong.) However, the amount of "Inactive" RAM is typically around 12.5%-25% of all available memory (^1.) when swapping starts/occurs/ends. According to http://support.apple.com/kb/ht1342 : "Inactive memory: This information in memory is not actively being used, but was recently used. For example, if you've been using Mail and then quit it, the RAM that Mail was using is marked as Inactive memory. This Inactive memory is available for use by another application, just like Free memory. However, if you open Mail before its Inactive memory is used by a different application, Mail will open quicker because its Inactive memory is converted to Active memory, instead of loading Mail from the slower hard disk." And according to http://developer.apple.com/library/mac/#documentation/Performance/Conceptual/ManagingMemory/Articles/AboutMemory.html : "The inactive list contains pages that are currently resident in physical memory but have not been accessed recently. These pages contain valid data but may be released from memory at any time." So, basically: when a program has quit, its memory is marked as Inactive and should be claimable at any time. Still, whenever "Free" memory gets too low, OS X prefers to start swapping memory out to the swap file instead of just claiming this memory. Why? What is the advantage of this behavior over, say, instantly releasing Inactive memory and not even touching the swap file? Some sources (^2.) indicate that OS X pages out "Inactive" memory to swap before releasing it, but that doesn't make sense if the memory may be released at any time: swapping is expensive, releasing is cheap, right? Can this behavior be changed using some preference or known hack? (Preferably one that doesn't involve disabling swap/dynamic_pager altogether and restarting...) I do appreciate the purge command, as well as the trick of repairing disk permissions to force some Free memory, but those are ways to painfully force more Free memory, not fixes for the swap/release decision logic...
    Btw, a similar question was asked here: http://forums.macnn.com/90/mac-os-x/434650/why-does-os-x-swap-when/ and here: http://hintsforums.macworld.com/showthread.php?t=87688 but even though the OPs re-asked the core question, none of the replies addresses it...
    ^1. UPDATE 17-mar-2012: Since I first posted this question I have gone from 4GB to 8GB of installed RAM, and the problem remains. The amount of "Inactive" RAM was 0.5-1.0GB before and is now typically around 1.0-2.0GB when swapping starts/occurs/ends, i.e. it seems that around 12.5%-25% of the RAM is preserved as Inactive by OS X kernel logic.
    ^2. For instance http://apple.stackexchange.com/questions/4288/what-does-it-mean-if-i-have-lots-of-inactive-memory-at-the-end-of-a-work-day : "Once all your memory is used (free memory is 0), the OS will write out inactive memory to the swapfile to make more room in active memory."
    UPDATE 17-mar-2012: Here is a round-up of the methods that have been suggested to help so far:
    - The purge command: "Used to approximate initial boot conditions with a cold disk buffer cache for performance analysis. It does not affect anonymous memory that has been allocated through malloc, vm_allocate, etc." This is useful to prevent OS X from swapping out the disk cache (which is ridiculous that it does in the first place), but with the downside that the disk cache is released, meaning that if the disk cache was not about to be swapped out, one simply ends up with a cold disk buffer cache, probably affecting performance negatively.
    - The FreeMemory app and/or repairing disk permissions to force some Free memory: doesn't actually release any memory, it only moves some gigabytes of memory contents from RAM to the HD. In the end this causes lots of swap-ins when I attempt to use the applications that were open while freeing memory, as a lot of their vm is now on swap.
    - Speeding up swap allocation using dynamicpagerwrapper: seems a good way to speed up swap usage, but does not address the problem of OS X swapping in the first place while there is still Inactive memory.
    - Disabling swap by disabling dynamicpager and restarting: this forces OS X not to use swap, at the price of the system hanging when all memory is used. Not a viable alternative...
    - Disabling swap using a hacked dynamicpager: similar to disabling dynamicpager above; excerpts from the comments to the blog post indicate this is not a viable solution either: "The Inactive Memory is high as usual", "when your system is running out of memory, the whole os hangs...", "if you consume the whole amount of memory of the mac, the machine will likely hang".
    To sum up, I am still unaware of a way to stop Mac OS X from swapping while there is still "Inactive" memory. If it isn't possible, maybe at least there is an explanation somewhere of why OS X prefers to swap out memory that may be released at any time?
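
    For anyone reproducing the observations above, a minimal sketch of the standard measurement commands (stock OS X tools; the 10-second interval is an arbitrary choice):

        # watch free/active/inactive page counts and the pageout counter grow
        vm_stat 10

        # snapshot of swap usage
        sysctl vm.swapusage

        # flush only the disk buffer cache (does not touch anonymous memory)
        purge

    If vm.swapusage grows while vm_stat still shows a large inactive count, that is exactly the behavior this question is about.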

    Read the article

  • How to kill these two dialog boxes permanently in Ubuntu?

    - by YumYumYum
    How can I permanently remove these two dialog boxes from my setup? They are very disturbing, and there is no way to remove them nor any option to kill them. They appear from time to time without my wanting them to, like a virus, and I just don't want these dialog boxes showing up so annoyingly. Any idea how to remove them completely from my system? NOTE: none of the answers and follow-ups here solved what was asked: http://askubuntu.com/questions/186312/how-to-remove-permanently-those-error-prompts-while-using-openbox-gnome

    Read the article

  • Backup of KVM VMs running on Ubuntu 12.04.1 (Precise) from a remote machine

    - by Dr. Death
    I am creating a library API that will take backups of all the VMs running on a KVM hypervisor; the VMs can be of any type. I take the backup from a remote machine and need to store it on a remote server. I have KVM and libvirt installed on my system. Some of my VMs are LVM-based and some are normal VMs running on KVM. I researched and found an excellent Perl script for taking the backups: http://pof.eslack.org/2010/12/23/best-solution-to-fully-backup-kvm-virtual-machines/ but since I am developing this library in C++ I cannot use it; however, it has given me a good understanding of how this will work. One thing I have not been able to sort out: if my VMs were not created using virt-manager or another libvirt tool, the virsh list command does not include them among the running VMs, even though they are running perfectly on my KVM server. Is there a way to get these VMs listed anyhow? Secondly, when taking the backup from the remote machine, my SSH session ends as soon as each libvirt command finishes, so for every command I need to SSH again. Is there a way to avoid SSHing each time? I have already set up RSA keys, but once a command finishes, control returns to my machine, which then tries to find the source VM location on its own local drives and fails; this is the main problem I am facing. Also, for the LVM-based VMs I am able to take a live backup, but the non-LVM machines get suspended, so I cannot back them up live. Since my library will run on the remote machine only, I might not know the VM configuration on the KVM server, so I need the procedure to be consistent for all VMs. Please share anything related to this issue so that I can take live backups of the non-LVM VMs as well. I'll keep you all posted on my progress and any research findings. Thanks in advance for your suggestions.
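
    A minimal sketch of the two usual remedies (the hostname is a placeholder): run virsh against the remote libvirt daemon over a single qemu+ssh connection instead of opening a shell per command, and list inactive as well as running domains. Note that libvirt can only see guests whose QEMU/KVM processes it started itself; a qemu-kvm process launched by hand will never appear in virsh list.

        # list all domains known to the remote libvirtd, running or not
        virsh -c qemu+ssh://root@kvmhost/system list --all

        # run several commands against the same remote hypervisor without a login shell
        virsh -c qemu+ssh://root@kvmhost/system dominfo myguest
        virsh -c qemu+ssh://root@kvmhost/system domblklist myguest

    Since the library is in C++, the same connection URI works with virConnectOpen("qemu+ssh://root@kvmhost/system") from libvirt's C API, so no shell-level SSH round-trips are needed at all.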

    Read the article

  • Internet cafe software for linux

    - by pehrs
    I have been asked to roll out a total of 8 internet cafés on a large network. The budget is non-existent, as it is all being done for a non-profit. I was planning to use Ubuntu and live CDs to minimize the amount of management required, but I can't seem to find any suitable internet café system that is Ubuntu based. The requirements are pretty basic: it needs to keep track of logged-in time and log users out when their time is up. No billing will be done; it will just be used to ensure people can share the computers fairly. It should be possible to force a logout from a central system. Users will be unskilled, so it has to have a GUI. What (preferably free, considering the shoe-string budget) software would you suggest to manage this?

    Read the article

  • Deleting an undeletable Directory in Windows 7

    - by Kaizen
    I have run into a problem from time to time and have never been able to resolve it without formatting. I have a directory called d:\DotNet that I want to delete. I cannot, because inside this folder there is another folder called "T4 Code generation and Misc.". When I try to delete or access "T4 Code generation and Misc.", I get the following error: "Could not find this item. This is no longer located in D:\DotNet. Verify the item's location and try again." Hopefully this is a simple fix.
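
    The trailing dot in the folder name is the usual culprit: Win32 strips it, so Explorer ends up asking for a name that doesn't exist. A minimal sketch of the standard workaround, using the \\?\ prefix to bypass Win32 name normalization (run in cmd.exe; the path is the one from the question):

        :: remove the folder whose name ends in a '.' that Explorer can't address
        rd /s /q "\\?\D:\DotNet\T4 Code generation and Misc."

        :: then the parent can be deleted normally
        rd /s /q "D:\DotNet"

    The same \\?\ form works with del for individual files whose names contain otherwise-invalid trailing characters.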

    Read the article

  • How to add a Receive All only button to Outlook 2010

    - by user328157
    I somehow managed, 2-3 years ago, to add a big button to the toolbar to Send All mail only. I don't recall how I did it; it replicates a function built into Outlook, but is much bigger visually. More importantly, I now want a button to just receive all my email, but I can't find anywhere how to do that. Most of the time I don't want to send mail at the same time that I am receiving, for a multitude of reasons. And I don't want to leave outgoing messages as drafts, because it is a pain to send them later; you need to open each one and click Send again. Does anybody know how to fix my problem? Much appreciated, the Godzonekid

    Read the article

  • How do you create large, growable, shared filesystems on Linux at AWS?

    - by Reece
    What are acceptable/reasonable/best ways to provide large, growable, shared storage at AWS, exposed as a single filesystem? We're currently making 1TB EBS volumes roughly biweekly and NFS-exporting them with no_subtree_check and nohide, so that distinct exports appear under a single mount on the client. This arrangement does not scale well. The options we've considered:
    - LVM2 with ext4: resize2fs is too slow.
    - Btrfs on Linux: not obviously ready for prime time yet.
    - ZFS on Linux: not obviously ready for prime time yet (although LLNL uses it).
    - ZFS on Solaris: the future of this combo is uncertain (to me), and it puts a new OS in the mix.
    - glusterfs: heard mostly good things, but two scary (and maybe old?) stories.
    The ideal solution would provide sharing, a single fs view, easy expandability, snapshots, and replication. Thanks for sharing ideas and experience.
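
    One common middle ground, since slow online ext4 growth is the stated pain point: LVM2 with XFS, which grows online in seconds. A minimal sketch, assuming a new EBS volume appears as /dev/xvdf (the device name, volume group and LV names are placeholders):

        # add the new EBS volume to the pool and grow the filesystem while mounted
        pvcreate /dev/xvdf
        vgextend vg_data /dev/xvdf
        lvextend -l +100%FREE /dev/vg_data/lv_export
        xfs_growfs /export        # XFS grows online; no lengthy resize2fs pass

    This keeps a single fs view and easy expandability, but provides no replication by itself; snapshots come from LVM or from EBS.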

    Read the article

  • Can't SSH to remote server, how to avoid this?

    - by snow8261
    From time to time we find we cannot connect to our servers via SSH, and we have to send someone on site to restart the machine; it causes a lot of pain. These are important servers (database server, application server, etc.) that we must be able to reach remotely. The symptom is that ssh hangs: a command like ssh [email protected] gets no response. With ssh -v debug mode it says:
        debug1: Connection established.
        debug1: identity file /.ssh/identity type -1
        debug1: identity file /.ssh/id_rsa type -1
        debug1: identity file /.ssh/id_dsa type -1
        debug1: loaded 3 keys
    and hangs there. We have met this situation many times with no clue how to solve it. Is there a log that can identify this problem, or a tool for it? Help needed! Any ideas are appreciated.
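
    A hang right after the identity files are loaded means the TCP connection opened but the key exchange never completed, which usually points at a wedged sshd or resource exhaustion on the server rather than at the client. A minimal diagnostic sketch for the next occurrence (hostname is a placeholder):

        # does the server still send its banner? (should print e.g. SSH-2.0-OpenSSH_...)
        nc -w 5 server.example.com 22

        # full client-side trace, much noisier than -v
        ssh -vvv user@server.example.com

        # on the server afterwards: was sshd out of memory / file handles / processes?
        grep -i 'sshd\|oom\|out of memory' /var/log/messages /var/log/secure

    A serial console or IPMI/iLO remote console is the usual way to get in and capture state while sshd is down, instead of sending a person on site.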

    Read the article

  • "Show In Finder" won't open a new finder window

    - by Gavin Miller
    The "Show In Finder" action isn't working on Mac OS X Mountain Lion. The problem has just started to occur all the time, before it was a bit sporadic, but now it happens all the time. Things that don't work: In the chrome Downloads page clicking any of the "Show in Finder" links. Right clicking a file in XCode and choosing "Show in Finder" Things that work: open . in terminal command-n after command tabbing to Finder. Things I've tried to fix the issue: Opt - Right Click finder in the dock and relauching Restarting my computer Anybody ever experienced this issue?

    Read the article

  • Cause of flapping UNKNOWN Nagios status?

    - by jldugger
    We run some Nagios service checks via Opsview, and one of our hosts is getting a strange response for SSH: "UNKNOWN: Service results are stale". It happens regularly but seems to go away as the system retries a second and third time. It started after a patch and reboot of the server in question last week. The system itself responds to SSH from the boxes I've tested with (which doesn't include the monitoring system, which I am not given access to). /var/log/secure is full of lines like: sshd[15628]: Did not receive identification string from xxx.xxx.226.20. The timestamps are reliably every five minutes, which is pretty obviously the monitoring script disconnecting once it gets a login prompt. Does anyone know what might be causing this, or how to fix it? It's really frustrating to see this pop on and off the status page.
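
    For what it's worth, the "Did not receive identification string" lines are exactly what the stock check_ssh plugin leaves behind: it connects, reads the banner, and closes without identifying itself. A minimal sketch for reproducing the check by hand from any box that can reach the host (the plugin path varies by distro):

        # the same probe the monitor makes; should print the SSH version and exit 0
        /usr/lib/nagios/plugins/check_ssh -H host.example.com -t 10
        echo $?

    If that succeeds everywhere, the "stale results" message is about the Opsview side (a check result not arriving within its freshness window) rather than about sshd, so the master/slave scheduling on the monitoring server is the place to look.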

    Read the article

  • SuSE always logs a user into a TWM session

    - by rohan
    Hello all, I recently installed SUSE Linux Enterprise Desktop 11 on my box. I created a user and logged into a GNOME session the first time without any problems. Last time I logged in, I selected TWM as the session type, and that got me into the T window manager just fine. But now, when I log out and try to log back into a GNOME session, it still logs me into the TWM session. I have tried restarting the box, but that has not helped. However, when I log in to the machine remotely, it lets me onto the GNOME session just fine. I'm guessing this is probably a really simple fix, but I am a Linux newbie and a Google search isn't yielding what I'm looking for. Thanks in advance for your help, Rohan
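
    A minimal sketch of the usual fix, assuming the display manager remembers the last session in the per-user ~/.dmrc file (standard on GDM/KDM-based setups like SLED; the exact session name may differ on your install):

        # see what the display manager thinks the default session is
        cat ~/.dmrc

        # force it back to GNOME
        cat > ~/.dmrc <<'EOF'
        [Desktop]
        Session=gnome
        EOF

    Alternatively, most login screens have a Session menu where picking GNOME once (and confirming "make default" if asked) rewrites this file for you.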

    Read the article

  • Profiles and using the local profile for a domain user

    - by Harry
    I'm having some trouble with profiles and would like to reach out for some help. I've tried to do some research, but I'm not making much progress on my own. I've pretty much taken over the sysadmin duties for my small lab; I don't have much experience to justify it, other than being the only one with the time and dedication (the environment was in a state of disrepair). The network and domain I look after are extremely small by most standards, about 10 users at a time, but network activity is pretty intensive and we work with fairly large files. None of the network is online, which is nice at the moment because it spares me another headache. On to my profile problem: I have set up roaming profiles for the users on the network. After a little research I think I will switch to a hybrid of folder redirection and roaming profiles, as this seems to be best practice; I also don't want users waiting a long time on a bloated profile. I've finally got a build working using MDT. We have Mac Pros, and it wasn't fun getting everything to play nice. I set up a reference computer, installed all the software and tools each user would need, and adjusted the settings and preferences to how we need them; then I used MDT to sysprep and capture an image of the reference computer, which I push out to the rest of the desktops in my environment. The issue comes when we join a computer to the domain. The user can log in and work fine, but I'd like more: when logged on with a domain user name, they lose a lot of the icons I had on the reference image, as well as the desktop background and some other miscellaneous settings. I would love to have users log on with their domain user names and see the icons and desktop environment as I set them up on the reference computer. I'm not sure if this is possible, or if it's something simple I'm missing, but any help would be greatly appreciated!
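
    What's described is the expected behavior: a new domain user gets a profile built from the machine's Default profile, not from the local account used to customize the reference machine. The supported way to capture those customizations is the CopyProfile setting in the unattend file used at sysprep time; a minimal sketch of the relevant fragment (an assumption about the MDT setup, not taken from the question; architecture and other details vary):

        <!-- unattend.xml, specialize pass: copy the customized admin profile into Default -->
        <component name="Microsoft-Windows-Shell-Setup"
                   processorArchitecture="amd64"
                   publicKeyToken="31bf3856ad364e35"
                   language="neutral"
                   versionScope="nonSxS">
            <CopyProfile>true</CopyProfile>
        </component>

    With CopyProfile set to true, sysprep copies the built-in Administrator's customized profile over Default, so every new user (local or domain) starts with the reference icons, wallpaper, and settings. Shared shortcuts can also simply be placed in C:\Users\Public\Desktop, which shows up for everyone regardless of profile.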

    Read the article

  • "Couldn't resolve host" for any external content

    - by scatteredbomb
    On our site we run a few scripts that talk to various external services (uploading to Amazon S3, pulling data from Chartbeat, a script to count Twitter followers), and all of them just stop working from time to time. They work most days, but some days (like today) they all stop. This simple script to get a follower count into PHP:
        $url = "http://twitter.com/users/show/username";
        $response = file_get_contents($url);
        $t_profile = new SimpleXMLElement($response);
        $count = $t_profile->followers_count;
    just sits there for a couple of minutes, then finally spits out a "Couldn't resolve host" error. Any script that touches an external site gives us this error. I'm not really sure where to check what's blocking these connections all of a sudden, or why it works most of the time, then doesn't for a day or so, then works again. Any tips?
    Update: contents of resolv.conf:
        search 147.225.210.rdns.ubiquityservers.com
        nameserver 72.37.224.5
        nameserver 72.37.224.6
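
    Since the failures are intermittent and "Couldn't resolve host" is a DNS error, a quick way to separate DNS from everything else is to query the configured resolvers directly when the scripts break, plus a hedged defensive rewrite of the snippet using curl with explicit timeouts (a sketch; the endpoint is the one from the question):

        # do the provider's resolvers answer right now?
        dig @72.37.224.5 twitter.com +time=2 +tries=1
        dig @72.37.224.6 twitter.com +time=2 +tries=1

    and in PHP:

        $ch = curl_init("http://twitter.com/users/show/username");
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5);   // fail fast instead of hanging for minutes
        curl_setopt($ch, CURLOPT_TIMEOUT, 10);
        $response = curl_exec($ch);
        if ($response === false) {
            error_log("twitter fetch failed: " . curl_error($ch)); // records the real DNS/connect error
        }
        curl_close($ch);

    If the dig queries also fail whenever the scripts do, the provider's resolvers are flaking, and running a local caching resolver (or pointing resolv.conf elsewhere) is the fix.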

    Read the article

  • Understanding Zabbix Triggers

    - by Mediocre Gopher
    I have Zabbix set up with an item that monitors a log file on a Zabbix client: log["/var/log/program_name/client.log","ERROR:","UTF-8",100] and a trigger to fire when that log file gets more ERRORs: {Template_Linux:log["/var/log/program_name/client.log","ERROR:","UTF-8",100].change(0)}#0 The trigger trips the first time the log file gets an ERROR, but then just sits around forever in Monitoring - Triggers. My understanding was that the next time the server checks the value of the item and sees that it hasn't changed, the trigger would clear. Obviously this isn't the case. Could someone explain why the trigger isn't going away? Ultimately my goal is to receive an email whenever ERRORs are added to that log file, but I would like to understand how triggers work first.
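
    The catch with log[] items is that they only receive a new value when a new matching line arrives, and triggers are re-evaluated only on new values, so change(0)#0 stays in PROBLEM until the next ERROR appears. A common sketch for "alert on every new ERROR line" instead (old-style Zabbix expression syntax to match the question; the 10-minute window is an arbitrary choice):

        {Template_Linux:log["/var/log/program_name/client.log","ERROR:","UTF-8",100].str(ERROR:)}=1 & {Template_Linux:log["/var/log/program_name/client.log","ERROR:","UTF-8",100].nodata(600)}=0

    Ticking "Multiple PROBLEM events generation" on the trigger makes each matched line raise its own event (and thus its own email), and the nodata() clause lets the trigger recover once no ERROR line has arrived for 10 minutes.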

    Read the article

  • RHEL/CentOS vs. Ubuntu (and possibly other Debian-based systems) in handling duplicate IPs in the same subnet

    - by johnshen64
    This has bothered me for quite a while, but I never found out why it happens or how to change the behavior. Duplicate IPs can be caused by typos or DHCP errors etc., but they do occur from time to time. On RPM-based systems such as CentOS, the old server holding the duplicate IP wins, and the new server gets an error bringing up the NIC (IP address already in use). This is relatively harmless, because we can just fix the system that is coming up. Ubuntu, on the other hand, happily grabs the in-use IP for itself and leaves the old server/device without a valid IP. This is the more dangerous behavior, because it causes outages. What I want is to change the Ubuntu behavior to that of CentOS/RHEL, so I would appreciate any help.
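
    The CentOS behavior comes from its ifup scripts doing duplicate-address detection with arping before assigning the address. A hedged sketch of bolting the same check onto Ubuntu's ifupdown (the interface name and address are placeholders; arping ships in the iputils-arping package):

        # /etc/network/interfaces
        auto eth0
        iface eth0 inet static
            address 192.0.2.10
            netmask 255.255.255.0
            # abort ifup if anyone already answers for this address (-D = DAD mode)
            pre-up arping -D -c 2 -w 3 -I $IFACE 192.0.2.10

    arping -D exits non-zero when a reply is received, so the pre-up command fails and the interface stays down, mimicking the RHEL "address already in use" refusal instead of stealing the IP.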

    Read the article

  • Windows DNS Server 2008 R2 erroneously returns SERVFAIL

    - by Easter Sunshine
    I have a Windows 2008 R2 domain controller which is also a DNS server. When resolving certain TLDs, it returns SERVFAIL:
        $ dig bogus.
        ; <<>> DiG 9.8.1 <<>> bogus.
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 31919
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
        ;; QUESTION SECTION:
        ;bogus.                 IN      A
    I get the same result for a real TLD like com. when querying the DC as shown above. Compare to a BIND server that is working as expected:
        $ dig bogus. @128.59.59.70
        ; <<>> DiG 9.8.1 <<>> bogus. @128.59.59.70
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 30141
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0
        ;; QUESTION SECTION:
        ;bogus.                 IN      A
        ;; AUTHORITY SECTION:
        .                       10800   IN      SOA     a.root-servers.net. nstld.verisign-grs.com. 2012012501 1800 900 604800 86400
        ;; Query time: 18 msec
        ;; SERVER: 128.59.59.70#53(128.59.59.70)
        ;; WHEN: Wed Jan 25 14:09:14 2012
        ;; MSG SIZE  rcvd: 98
    Similarly, when I query my Windows DNS server with dig . any, I get SERVFAIL, while the BIND servers return the root zone as expected. This sounds similar to the issue described in http://support.microsoft.com/kb/968372, except that I am using two forwarders (128.59.59.70 from above as well as 128.59.62.10) and falling back to root hints, so the preconditions that expose that issue are not met. Nevertheless, I applied the MaxCacheTTL registry fix as described and restarted DNS, and then the whole server, but the problem persists. The problem occurs on all domain controllers in this domain and has been occurring for half a year, even though the servers get automatic Windows updates.
    EDIT: Here is a debug log. The client is 160.39.114.110, which is my workstation.
        1/25/2012 2:16:01 PM 0E08 PACKET 000000001EA6BFD0 UDP Rcv 160.39.114.110 2e94 Q [0001 D NOERROR] A (5)bogus(0)
        UDP question info at 000000001EA6BFD0
          Socket = 508
          Remote addr 160.39.114.110, port 49710
          Time Query=1077016, Queued=0, Expire=0
          Buf length = 0x0fa0 (4000)
          Msg length = 0x0017 (23)
          Message:
            XID       0x2e94
            Flags     0x0100
              QR        0 (QUESTION)
              OPCODE    0 (QUERY)
              AA        0
              TC        0
              RD        1
              RA        0
              Z         0
              CD        0
              AD        0
              RCODE     0 (NOERROR)
            QCOUNT    1
            ACOUNT    0
            NSCOUNT   0
            ARCOUNT   0
            QUESTION SECTION:
            Offset = 0x000c, RR count = 0
            Name      "(5)bogus(0)"
              QTYPE   A (1)
              QCLASS  1
            ANSWER SECTION:     empty
            AUTHORITY SECTION:  empty
            ADDITIONAL SECTION: empty

        1/25/2012 2:16:01 PM 0E08 PACKET 000000001EA6BFD0 UDP Snd 160.39.114.110 2e94 R Q [8281 DR SERVFAIL] A (5)bogus(0)
        UDP response info at 000000001EA6BFD0
          Socket = 508
          Remote addr 160.39.114.110, port 49710
          Time Query=1077016, Queued=0, Expire=0
          Buf length = 0x0fa0 (4000)
          Msg length = 0x0017 (23)
          Message:
            XID       0x2e94
            Flags     0x8182
              QR        1 (RESPONSE)
              OPCODE    0 (QUERY)
              AA        0
              TC        0
              RD        1
              RA        1
              Z         0
              CD        0
              AD        0
              RCODE     2 (SERVFAIL)
            QCOUNT    1
            ACOUNT    0
            NSCOUNT   0
            ARCOUNT   0
            QUESTION SECTION:
            Offset = 0x000c, RR count = 0
            Name      "(5)bogus(0)"
              QTYPE   A (1)
              QCLASS  1
            ANSWER SECTION:     empty
            AUTHORITY SECTION:  empty
            ADDITIONAL SECTION: empty
    Every option in the debug log box was checked except "filter by IP". By contrast, when I query, say, accounts.google.com, I can see the DNS server go out to its forwarder (128.59.59.70, for example). In this case I didn't see any packets going out from my DNS server, even though bogus. was not in the cache (the debug log was already running, and this was the first time I queried this server for bogus. or any TLD). It just returned SERVFAIL without consulting any other DNS server, as in the Microsoft KB article linked above.
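
    For anyone debugging the same symptom, a few standard server-side checks (dnscmd ships with the DNS role; names are generic):

        :: current forwarders, recursion and EDNS settings
        dnscmd /info

        :: rule out a poisoned negative-cache entry
        dnscmd /clearcache

        :: watch whether the server even attempts its forwarders for a TLD query
        nslookup -debug bogus. 127.0.0.1

    The behavior described (SERVFAIL returned instantly with no outbound query) is consistent with the server deciding the name is unresolvable before recursion even starts, so comparing dnscmd /info output (especially the EDNS and root-hints settings) between a working and a broken DC is a reasonable next step.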

    Read the article

  • DFS keeps constantly replicating almost all files

    - by Adrian Godong
    We have always had problems with DFS, but recently (for no apparent reason) it has gotten bad enough to be harmful. We have one master server and DFS connections to four other servers. The four servers don't modify any files, so all replication propagates from the master to the four others. The replicated directory holds about 900,000 files. In recent weeks, every time we check DFS the backlogs contain hundreds of thousands of files. For instance, right now the master is replicating about 700,000 files to three of the four servers, while the fourth is fine. Sometimes only one is behind, sometimes two, this time three, and it is never the same set of servers. It is inconceivable that something periodically touches all 900,000 files; the biggest change that happens is a scheduled update of several thousand files every six hours. Does anybody have the same problem? Is it a known issue?
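
    A hedged sketch of the usual way to quantify and attribute the backlog (the group, folder, and server names are placeholders; dfsrdiag ships with the DFSR role):

        :: count and list the oldest backlogged files for one connection
        dfsrdiag backlog /rgname:MyReplGroup /rfname:MyReplFolder ^
            /sendingmember:MASTER /receivingmember:SERVER2 /verbose

        :: recent replication-related events (journal wraps show up here)
        wevtutil qe "DFS Replication" /c:20 /f:text /rd:true

    A classic cause of "everything re-replicates" at this scale is an NTFS USN journal wrap or a staging-area quota far too small for 900,000 files, both of which are reported in the DFS Replication event log, so that is worth checking before suspecting the data itself.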

    Read the article

  • Access to NTP via IP which doesn't change often

    - by faulty
    I'm trying to sync the clock of our production server, located in a data center, with pool.ntp.org. For security reasons our servers have no internet access unless we request specific IPs/ports to be opened explicitly. I worked out a list of IPs based on 0.asia.ntp.org, 1.asia.ntp.org, 2.asia.ntp.org and 3.asia.ntp.org, not realizing that ntp.org uses round-robin DNS and that the servers, being volunteered, change from time to time. In fact the IP I got from 3.asia.ntp.org last month no longer works. I'm wondering if there is a publicly known NTP server whose address doesn't change as often, or a way around this that doesn't require requesting a firewall update on a monthly basis. I imagine many admins face the same issue.
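
    One pattern that sidesteps the firewall churn entirely: open UDP 123 to a single internal host (or one in a DMZ) and let it be the only machine that talks to the pool; the production servers then sync to it via a name/IP you control. A minimal ntp.conf sketch (hostnames are placeholders; the pool's canonical names are under pool.ntp.org):

        # on the internal relay, which is allowed out to the pool
        server 0.asia.pool.ntp.org iburst
        server 1.asia.pool.ntp.org iburst
        server 2.asia.pool.ntp.org iburst

        # on each production server
        server ntp-relay.internal.example.com iburst

    This way pool members can come and go without any firewall change, and the data-center rule set only ever references one stable address.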

    Read the article

  • Slow logins with roaming profiles

    - by tliff
    We are running an Active Directory environment with Windows 2008 as DC and Samba 3.3 as fileserver, using roaming profiles. Some of our offices are connected to HQ via slowish links (1/2 Mbit). Naturally this is not very fast, but that was expected. What I do not understand is this: if a user logs out (taking a long time to sync, as expected) and then logs in again the next day, the login also takes a long time. Shouldn't the sync recognize fairly quickly that nothing has changed? Also: is there any decent documentation on how the synchronization is implemented?

    Read the article

  • AWStats on Plesk consumes all CPU and crashes the server - how do you disable AWStats?

    - by columbo
    Hello, I have Plesk 9.0.1 running on a Red Hat server. Every week or so, at about 4:10am, the server locks up: CPU usage shoots from 4% to 90% at the same time as a mass of awstats.pl processes start (I can't see how many, as my monitoring only shows the top 30 processes, but all of those are awstats.pl). I turned off AWStats through the Plesk control panel for all but 5 domains, but I still get 90% CPU usage and at least 30 instances of awstats.pl at 4:10am as usual. Does anyone know why this might be? Does anyone know how to disable AWStats entirely (I have stats covered using Piwik)? Thanks
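
    A hedged sketch for tracking this down on a Plesk box (paths are typical for Plesk on Red Hat but should be verified locally; this is not official Plesk procedure). The nightly statistics run is normally kicked off from Plesk's daily cron, so the scheduling entry and the process tree are the things to confirm:

        # who schedules the 4:10am run?
        grep -r statistics /etc/cron.daily /etc/crontab /var/spool/cron 2>/dev/null

        # what is actually spawning awstats.pl next time it happens?
        ps -eo pid,ppid,etime,args | grep -i 'awstats\|statistics'

    If the culprit is Plesk's statistics utility (commonly /usr/local/psa/admin/sbin/statistics), per-domain stats can also be switched away from AWStats in each domain's hosting settings; if some awstats.pl instances persist anyway, they may come from stale webstat cron entries left behind for removed domains.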

    Read the article

  • SYN_RECEIVED on port 80, IIS 7.5

    - by Ashian
    Hi, I am having trouble with my Windows 2008 server, on which I host several web sites. A few days ago the sites started to stop responding on port 80 after a while; during these episodes I can't access the sites from the local machine or remotely, though I can still browse the sites on other (custom) ports that I set. I found many connections in SYN_RECEIVED status in netstat, and when the sites stop, everything on port 80 is SYN_RECEIVED. I have to restart the server, because when I try to restart IIS it takes a long time to stop W3SVC and often doesn't stop at all. Would anyone please tell me how I can deal with a SYN attack? Thanks
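
    A minimal diagnostic sketch for the next episode (standard Windows tools): a flood of half-open connections on port 80 from many distinct source addresses is the signature of a SYN flood, while a handful of repeating sources suggests a broken client or upstream device instead.

        :: count half-open connections on port 80 and see where they come from
        netstat -ano | find ":80 " | find "SYN_RECEIVED"

        :: current TCP stack settings relevant to half-open connections
        netsh int tcp show global

    Windows Server 2008 has SYN attack protection built into the TCP/IP stack (the old SynAttackProtect registry knob from Server 2003 no longer applies), so if this really is a flood, the practical mitigations are upstream: rate-limiting at the firewall or the hosting provider's DDoS filtering.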

    Read the article
