Search Results

Search found 17945 results on 718 pages for 'last fm'.


  • Exchange 2003 ActiveSync problem

    - by colemanm
    We're having problems getting iPhones to sync properly with SBS 2003 Exchange. When you add a new Exchange ActiveSync account on an iPhone and enter all the pertinent information, it shows a "Verifying Exchange account info" message for a minute or so, then says everything's verified and asks what you want to sync, Mail, Contacts, Calendars... so it looks like it's working. However, when you go to the Mail app and select the Exchange email account, it just shows an "Inbox" folder with nothing in it. When you try refreshing, it attempts for a second, then says "Last Updated" with a timestamp, as if it worked, but there's no mail and no error message/feedback at all. I think I've narrowed it down to some sort of certificate issue, but I'm having trouble finding out where to go from here... I ran MS's Exchange connectivity testing tool with these results: Our cert was purchased from Network Solutions, and I'd already added it to the IIS Default Website for OWA purposes. But this report makes it look like the cert is somehow problematic. I don't know what to do now... Here's a shot of the cert details, just in case:

    Read the article

  • Limit the amount of data that can be stored in a folder on Ubuntu Server 12.04?

    - by dougoftheabaci
    I'm in the process of building my first server. It's up, it's running, and I'm transferring copious amounts of data away from my horrid little Drobo (DO NOT BUY ONE OF THESE, EVER). However, there's one thing I have yet to do: I'd like to set it up for Time Machine backups as well. I've seen all the guides and I have some idea of how to set the whole thing up, but the issue is that Time Machine will fill up as much space as you let it. So if I let it loose in my 8 TB zpool, it'll slowly consume every last available sector. This, of course, is not acceptable. I have a folder at the root of my zpool called "ZFS Time Machine" and I would like to limit it to 1 TB (all I need for backup purposes). However, I have no idea how to do that. Is this possible? I can continue using a small external hard drive attached via FW800 if I have to, but I'd much prefer to put everything on my server.
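
    A dataset quota is the usual ZFS answer here; note that a plain folder can't carry a quota, but a dataset mounted in its place can. A minimal sketch, assuming the pool is named tank (the dataset name below is made up):

        # create a dedicated dataset for Time Machine and cap it at 1 TB
        zfs create tank/timemachine
        zfs set quota=1T tank/timemachine
        zfs get quota tank/timemachine   # verify the cap took effect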

    Read the article

  • .NET 3.5 installation comes up with Error 0x800F0906, then 0x800F081F, using DISM

    - by Austin Meadows
    I've recently tried installing .NET 3.5 for an application on Windows 8.1. I used the OS's popup thing to download/install .NET 3.5 and always get error code 0x800F0906. Upon further research, I found I would have to pop in my Windows 8 CD and install it with this command, where "E:\" is where my CD is mounted:

        Dism /online /enable-feature /featurename:NetFx3 /All /Source:E:\sources\sxs /LimitAccess

    This and any derivative of it (e.g., removing /LimitAccess) has not worked for me, and has either given me the same error code (0x800F0906) or a different one, 0x800F081F. I've even copied the sxs folder to my hard drive, just in case something was going on with the CD drive, only to have the same results. In that case, I used this command:

        Dism /online /enable-feature /featurename:NetFx3 /All /Source:C:\dotnet35 /LimitAccess

    I find this surreal because in both cases the files are indeed there, but the program thinks they're not. Here's the CBS.log file. Any ideas on how to fix this? Any help is very appreciated :) EDIT: I now have a proper dism.log file; I'm not sure what happened to the last one or why it did that. Here's the link to the new log file. It's interesting to note that it doesn't recognize some of the options in the command, such as "featurename" or "source".
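
    As a hedged first check (assuming a stock Windows 8.1 DISM), it may be worth confirming what DISM thinks the feature state is, and that the source actually carries the NetFx3 payload, before retrying:

        Dism /online /Get-FeatureInfo /FeatureName:NetFx3
        dir E:\sources\sxs\*netfx3*

    If dism.log really reports "featurename" and "source" as unrecognized, that hints the command text itself got mangled somewhere (those are standard options for /enable-feature), which would be worth ruling out by retyping the command rather than pasting it.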

    Read the article

  • Windows 7 just deleted 4 days of work wtf!?

    - by Mat
    Hey! I'm just about to freak out. I just finished a project and rebooted my computer. It didn't want to boot anymore, so I had to use the Windows 7 system repair option. It ran for a minute and then booted up. Now most of my source code from the last 4 days of work is gone! Background: sometimes (most often after installing new software) my notebook won't boot up anymore. It will just show the little Win 7 flag, but not read from the hard disk anymore. If I hard-abort and reboot, it asks me whether to start Windows normally (which won't work) or to run "Windows Startup Repair". If I run it, it does some stuff for about two or three minutes and then I can boot Windows again. Usually after this, .exe files I added to the computer during previous days are gone - but other files so far were not touched. But now, after this happened, a whole bunch of ".as" (ActionScript source) files from my project are gone!! Does anyone know where and whether there's a way to recover them?? Please help! Thank you!

    Read the article

  • Coldfusion 9 losing connection to MySQL 5 database server a couple of weeks after the server is started

    - by user1503757
    We get the following ColdFusion error message after our server has been running for a couple of weeks: "Error Executing Database Query. Could not create connection to database server. Attempted reconnect 3 times." We run ColdFusion Enterprise 9 on a one-year-old Xserve with Snow Leopard and MySQL 5. The server has about ten DSNs set up in the ColdFusion Administrator, all local, with default advanced settings, and host set to "localhost". The server is not under heavy load. The strange thing is that after a restart of the server, everything works fine. Then after a week or so, some databases will stop working, in the sense that ColdFusion cannot create a connection to them. If I then go to the ColdFusion Administrator and click "Verify all datasources", only 2 or 3 will verify and the others fail - and while the server is misbehaving it is always the same datasources that can't be verified if I try again, BUT NOT necessarily the same datasources that couldn't be verified the last time the server behaved like this. I know about the "max_connections" setting; we have included a line for it in the MySQL config file and set it to 2000, and when we read it back with a query it says "2000", so that can't be the problem. Anyone?
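
    One assumption worth ruling out (not something confirmed above): MySQL drops connections that sit idle longer than wait_timeout (28800 seconds by default), and a pooled ColdFusion connection that outlives it fails with exactly this "could not create connection" pattern. A quick look from the MySQL side:

        SHOW VARIABLES LIKE 'wait_timeout';
        SHOW GLOBAL STATUS LIKE 'Aborted_clients';

    If Aborted_clients climbs steadily, lowering the datasource timeout in the ColdFusion Administrator below wait_timeout (or disabling "Maintain connections" for the affected DSNs) is the usual fix.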

    Read the article

  • Windows Server 2003 DC hangs after network drivers update

    - by tcv
    Earlier today, we attempted to update the Broadcom BCM5716C network drivers on a Windows Server 2003. (Dell PowerEdge T310, FWIW). Since then we have not been able to boot the server in any normal mode. Safe Mode works. Safe Mode with Networking and regular bootups hang at "Applying Network Settings." I haven't tried Last Known Good Configuration nor have I tried Directory Services Restore Mode. I should also mention that the longest I've allowed "Applying Network Settings" was perhaps 30 minutes. I spoke to Dell since the server is under a basic warranty. They sent me the original Broadcom drivers. The trouble seems to be, however, that since I can only boot in Safe Mode, I can't install the application package as given. In safe mode, I receive the error: "The system administrator has set policies to prohibit this installation." I can install the drivers independently, but that doesn't allow the NICs to work. The most I've been able to get are Code 10 errors on each NIC. I plan to get back to the site tomorrow to attempt installation of a different NIC. I'm wondering what else I can try.
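
    A commonly cited workaround for the "policies prohibit this installation" message, assuming the Dell driver package is MSI-based: the Windows Installer service doesn't run in Safe Mode unless it's registered under the SafeBoot key. A hedged sketch, run from an elevated Safe Mode prompt:

        REM allow the Windows Installer service to run in plain Safe Mode
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\SafeBoot\Minimal\MSIServer" /ve /t REG_SZ /d Service /f
        net start msiserver

    That only legitimizes the installer in Safe Mode; it won't fix whatever "Applying Network Settings" is blocking on, so Last Known Good Configuration is still worth a try first.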

    Read the article

  • Explorer.exe not starting after login on Windows Server 2003 (Terminal Services and console)

    - by Pepperoni Icecream
    When users log in to a Windows Server 2003 R2 running Terminal Services they get a blank desktop. Upon inspection, explorer.exe is not running. When I log in as administrator, using either RDP or the console, I have the same issue. I can pull up Task Manager and start explorer.exe manually. I have another terminal server set up exactly the same way (same apps, settings, GPOs, etc.); the only difference is we deployed the Symantec Endpoint Protection 11.0.5 client on Friday. For some reason the working terminal server is still on 11.0.4, but the suspect server received the 11.0.5 client upgrade. I checked Event Viewer for any relevant explorer.exe entries, to no avail. It seems that if SEP were preventing explorer.exe from starting at login, it would do the same for the domain admin starting explorer.exe from Task Manager. I disabled the SEP client and services on the server, issued smc -stop, and tried logging in again. Still no explorer.exe. So I'm not sure the client upgrade is relevant, but it is worth mentioning since that was the last system change. The two servers are members of an NLB group. I took the bad terminal server out of the group until the issue is resolved (I actually stopped the host using NLB Manager). Any help is appreciated.
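
    Two registry locations worth a hedged look, since both are classic reasons explorer.exe silently fails to launch at logon (and endpoint products have been known to touch the second one):

        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v Shell
        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\explorer.exe"

    The Shell value should read explorer.exe, and the second query should come back empty - a Debugger value there would redirect every explorer.exe launch.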

    Read the article

  • How to troubleshoot Hyper-V VSS writer causing backup failure on Server 2008 R2

    - by Tim Anderson
    I have a Windows Server 2008 R2 machine running Hyper-V. Backups using Windows Server Backup fail with the error: The backup operation that started at '2011-01-02T10:37:01.230000000Z' has failed because the Volume Shadow Copy Service operation to create a shadow copy of the volumes being backed up failed with following error code '2155348129'. Please review the event details for a solution, and then rerun the backup operation once the issue is resolved. I have traced this to a problem with the Hyper-V VSS writer. vssadmin list writers reports:

        Writer name: 'Microsoft Hyper-V VSS Writer'
           Writer Id: {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}
           Writer Instance Id: {fcf0dd79-d282-4465-88ae-7b6857e055c2}
           State: [8] Failed
           Last error: Inconsistent shadow copy

    However I can't get any further. A few relevant facts: I get the error even if all the VMs are shut down. If I disable the Hyper-V VSS Writer by stopping the Hyper-V Management Service, backup completes OK. There are no errors in the Hyper-V-VMMS application log. I tried to set tracing for VSS but can't get any output for some reason. I set the correct registry entries but no trace log is generated. Tim
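
    A hedged first step, assuming the writer is merely wedged rather than broken: the Hyper-V VSS writer is hosted by the Virtual Machine Management Service (vmms), so restarting it resets the writer state without shutting down running VMs:

        net stop vmms
        net start vmms
        vssadmin list writers

    If the writer fails again on the next backup, the usual suspects for "Inconsistent shadow copy" are out-of-date Integration Services in the guests or a guest volume without a drive letter - both assumptions to verify rather than confirmed causes here.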

    Read the article

  • Virtual (ESXi4) Win 2k8 R2 server hangs when adding role(s)

    - by Holocryptic
    I'm trying to provision a 2k8r2 Enterprise server in ESXi4. The OS installation goes fine: VMware Tools, adding to domain, updates - all the basic stuff before you start adding Roles and Features. I've had this happen on two attempts already, and I'm not sure where the problem might be. I don't think it's hardware, because I have another 2k8r2 Standard server that's running fine. The only real difference is the install media. The server that's working was installed using a trial ISO and license; the one I'm having problems with is a full MAK installation. When I go to add a Role (the last case was Application Server) it gets all the way to "collecting installation results" before it hangs. CPU utilization in the vSphere client shows little spikes of activity with flatlines in between, but the whole console is locked up. The only way to release it is to power off and bring it back up. When you look at the added roles after bringing it back up, it shows as installed, but I don't trust that something didn't get wedged in all of that. The first install I did was with thin disk provisioning, the second attempt with regular disk provisioning; in both cases 4GB of RAM and 2 vCPUs. The VMware host is an HP ProLiant DL380 G6, RAID-1 OS volume, RAID-5 data volume, 12 GB RAM. Has anyone else had this problem, or know where I should start poking around?

    Read the article

  • Exchange 2003: Unrestrict send mail size for specific users / groups?

    - by Kip
    Good (insert appropriate time of day here) SF folks, I have the following situation: we have a message size limit for sending set at 20MB in Global Settings | Message Delivery, and a limit of 50MB set at an external 3rd-party spam vendor. I need to enable some users to send messages that are upwards of 40MB in size. However, when I set the sending message size maximum to 50MB within the delivery restrictions of a user's Exchange properties, it would appear that this does not win; it seems the lowest value wins in this situation. I need to allow certain users to send messages larger than the 20MB limit, but keep everyone else at the 20MB limit. How can I do this? The only way I could see was to raise the limit in Global Settings | Message Delivery to 50MB and then set everyone else's (bar the people who need the increased limit) delivery restriction max size down. But I cannot see an easy way to do that last bit, hence my post here looking for advice. There are valid reasons we need to send mail this size, and whilst we are putting together other mechanisms for delivering this data, we still need to get this in place. Thanks in advance, Kip

    Read the article

  • It seems Windows 8.1 killed my two T60 laptop batteries

    - by rstock
    I upgraded Windows 7 to Win8 earlier this year, and last week upgraded to Windows 8.1 (Lenovo T60). I had no problem with battery usage on Win7 or Win8. After about a week of Win8.1 on my system, the battery stopped working while the system was on. The orange battery indicator just keeps flashing, and the system does not charge the battery (even though I know there was life in it). I installed a known-good, fully charged battery from another T60; it worked for about 40 minutes and then instantly died in front of my eyes. The system now shows the same flashing orange battery light, but it is not charging. I know both these batteries are still good; they just appear dead. My research suggests that the Win8.1 upgrade may not have updated the battery driver, so I have since updated it. Same problem. Research is also pointing me to some "smart chip" on the batteries that needs a reset. Is this possible? Does anyone know a process to reset the "smart chip" on these batteries (FRU# 92P1139)?

    Read the article

  • How to set up multiple DNS servers on an intranet

    - by Brent
    We have an Active Directory network, with a mixture of Windows DNS, linux BIND servers, and want to use OpenDNS as our external DNS provider. I am wondering What is the best way to set up these servers (regarding forwarders, recursion, etc.)? Active Directory is our main internal DNS for our domain, and has 3 redundant servers. DHCP and all our servers use these as their DNS servers. Then we have a legacy AD server from an old network that is still authoritative for a bunch of domains. Finally, we have a couple of Linux Bind servers that are authoritative for a bunch of websites we host. Should our main AD servers point to our legacy AD server, which points to one of our BIND servers, which points to the other BIND server, which finally points out to openDNS? Or should our main AD servers point to all of these directly? - or is there a better option? What happens if a domain is listed in 2 places? Does DNS process the forwarders in order? What about root servers - if I want to use OpenDNS for "everything else", do I just list them as the last forwarders, and delete the root servers from all my DNS servers? How does recursion work - in this scenario, should I be using recursion or not?
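
    For the "use OpenDNS for everything else" part, the cleanest shape is usually for every DNS server to stay authoritative for its own zones while forwarding everything else straight out to OpenDNS, rather than daisy-chaining AD to legacy AD to BIND. A minimal BIND sketch (the Windows DNS servers would carry the equivalent forwarder setting in the DNS console; the addresses below are OpenDNS's public resolvers):

        options {
            // anything we are not authoritative for goes to OpenDNS
            forwarders { 208.67.222.222; 208.67.220.220; };
            forward only;   // skip root-server recursion entirely
        };

    Authoritative zones are always answered locally before forwarders are consulted, which also covers the "domain listed in 2 places" worry on the server that hosts the zone; and with forward only, root hints go unused, so removing the root servers list is safe.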

    Read the article

  • get ubuntu terminal to send an escape sequence (control+shift+up)

    - by user62046
    This problem starts when I use Emacs (with the -nw option). Let me first explain it. I tried to define a hotkey (for Emacs) as follows: (global-set-key [(control shift up)] 'other-window) but it doesn't work (no error, it just doesn't work), and neither does (global-set-key [(control shift down)] 'other-window). But (global-set-key [(control shift right)] 'other-window) and (global-set-key [(control shift left)] 'other-window) work! Because the last two key combinations are used by Emacs by default, though, I don't want to reassign them to other functions. So how can I make control-shift-up and control-shift-down work? I have googled "(control shift up)", and it seems control-shift-up is used by other people (though with only a few results). On Stack Overflow, Gille answered me as follows: Ctrl+Shift+Up does send a signal to your computer, but your terminal emulator is apparently not transmitting any escape sequence for it. So your problem is in two parts. First you must get your terminal emulator to send an escape sequence, which depends on your terminal emulator, and is Super User material, or Unix.SE if you're using a unix system. Then you need to declare the escape sequence in Emacs, and my answer explains that part. So I come here with this question: how do I get my terminal (I use Ubuntu 10.04 and the built-in terminal) to send an escape sequence for Control+Shift+Up and Control+Shift+Down?
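
    For the Emacs half, a sketch assuming the terminal emits the xterm-style modifier sequences (ESC [ 1 ; 6 A for Ctrl+Shift+Up - newer gnome-terminal/VTE versions do; pressing the chord at a cat -v prompt shows what yours actually sends):

        ;; declare what the terminal sends for C-S-up / C-S-down, then bind them
        (define-key input-decode-map "\e[1;6A" [(control shift up)])
        (define-key input-decode-map "\e[1;6B" [(control shift down)])
        (global-set-key [(control shift up)] 'other-window)
        (global-set-key [(control shift down)] 'other-window)

    input-decode-map exists in Emacs 23+; on older Emacsen, function-key-map plays the same role. If cat -v prints nothing for the chord, the terminal is swallowing it and no Emacs-side declaration will help.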

    Read the article

  • Windows 7 - mysteriously missing free HDD space

    - by sYnfo
    I have Windows 7 installed on a 50GB (oops, it should have been 45GB, sorry) partition, and every now and then it gets full and I have to resize the partition. I always thought that was quite normal. But it happened again today, and this time I'm sure it is not normal, because since the last resize (35GB → 45GB) I did not install any new apps or anything. Also, the sum of the sizes of all root folders and files, including hidden & system ones, is ~18GB, yet Windows indicates that all 50GB are used up... Any idea what is going on? EDIT: Great tools, everyone! (SourceForge appears to be offline at the moment; I'll check WinDirStat later.) Alas, none of them has solved my problem just yet... Screenshot from SpaceSniffer: on the right there is some kind of "Unknown Space" - any idea what that could be? EDIT2: After those two apps failed to help I didn't expect it, but WinDirStat actually did. It showed that the missing 27GB are in my Temp folder (well, that should have been my first guess anyway). There I found hundreds of ~100MB files, named like HTT????.tmp. After some googling, it appears to be a problem with ESET NOD32 antivirus and its ThreatSense feature. Thank you all for the help! :)

    Read the article

  • Regular issue with keys on temp tables

    - by Christian
    We run a large forum with lots of reads and writes, particularly to the posts and topics tables, which are both InnoDB. Last week I started doing 12-hourly backups with innobackupex because mysqldump just takes forever (7+ million rows in the posts table). It seems that something doesn't like these backups, because I have a recurring problem every other day. The symptoms: the front page of the site starts throwing errors; the logs start showing errors like "Error: 126 - Incorrect key file for table '/tmp/mysql/#sql_4e87_14.MYI'; try to repair it"; the /tmp dir fills up and we start getting "Error: 1030 - Got error 28 from storage engine" in the logs. The only way to fix it is to run OPTIMIZE TABLE on each of the posts and topics tables. I'm trying all I can to stop MySQL using disk for temp tables, but I'd have more problems than this if it used all my memory too. My my.cnf is here: https://gist.github.com/cbiggins/0aa26f6defb7a14541d7 The box has 32GB memory and I don't come near that usually; currently at 15GB in use. Thanks in advance. Update 1: Despite the conf looking like there is replication, there isn't. This is a standalone instance.
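
    A hedged reading: error 28 is ENOSPC ("no space left on device") for the temp directory, and "Incorrect key file" on a /tmp/#sql_*.MYI table is the classic symptom of an on-disk temporary table hitting that full disk mid-query. A few server variables worth checking while the problem is live:

        SHOW VARIABLES LIKE 'tmpdir';
        SHOW VARIABLES LIKE 'tmp_table_size';
        SHOW VARIABLES LIKE 'max_heap_table_size';
        SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables';

    If Created_tmp_disk_tables climbs fast during the backup window, pointing tmpdir at a larger partition (or finding the query that builds the giant temp table) is likely a better lever than OPTIMIZE TABLE after the fact.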

    Read the article

  • Getting started with web server clustering

    - by Ernie
    I work for a small ISP, and we host about 250 domains and all the stuff that goes along with that: DNS, mail, spam filtering, and backups. Currently, we have separate DNS servers (two of them) and mail servers (outgoing mail is actually on the secondary DNS server, but was previously on its own server). In the past, this was done as an insurance measure. The last thing we need is for some doofus (usually yours truly) to hose a server, taking out DNS and mail right along with it, or for spammers to jam our incoming SMTP server, preventing outgoing mail from being sent too. In the past, this was a problem, and our servers were set up the way they are now to combat it. However, clustering solutions like Sun's Cobalt RAQ (in days of olde) and Virtualmin appear to cater to an all-in-one approach, then deal with failures through redundant servers. I have avoided this thus far, but we've been using Virtualmin on our web server for a while now, and I'd like to expand into using it for a high-availability cluster. Our networking partner has recently built a datacenter that has eliminated all of our other bugaboos like network, cooling, and power issues, so now the only thing left to go wrong is me hosing a server, which happened earlier this month. One of the bigger reasons we've avoided going this route is that our hardware requirements aren't particularly high. One server easily handles all the sites we host (most of them are flat sites). Also, load-balancing routers tend to be expensive and complicated. All that I'm really expecting to do is build a two-node cluster for redundancy, so that when I hose a server (however rare that might be), we're not out for 8-12 hours while I rebuild it. What I need to know is how to get started, and whether I'm really in a position to bother with this kind of thing at all.

    Read the article

  • SSD cache to minimize HDD spin-up time?

    - by sirprize
    The short version first: I'm looking for Linux-compatible software which is able to transparently cache HDD writes using an SSD. However, I only want to spin up the HDD once or twice a day (to write the cached data to the HDD); the rest of the time, the HDD should not be spinning, due to noise concerns. Now the longer version: I have built a completely silent computer running Xubuntu. It has an A10-6700T APU, a huge fanless cooler, a fanless PSU, and an SSD. The problem is: it also has (and needs) a noisy HDD, and I want to forbid spinning it up during the night. All writes should be cached on the SSD; reads are not needed in the night. Throughout every day, this computer will automatically download about 5 GB of data which will be retained for about a year, giving a total needed disk capacity of slightly less than 2 TB. This data is currently stored on a 3 TB noisy hard disk drive which is spinning day and night. Sometimes I'll need to access some data from several months ago, but most times I'll only need data from the last 14 days, which would fit on the SSD. Ideally, I'd like a transparent solution (all data on one filesystem) which caches all writes to the SSD, writing to the HDD only once a day. Reads would be served by the cache if they were still on the SSD; otherwise the HDD would have to spin up. I have tried bcache without much success (using cache_mode=writeback, writeback_running=0, writeback_delay=86400, sequential_cutoff=0, congested_write_threshold_us=0 - anything missing?) and I read about ZFS ZIL/L2ARC, but I'm not sure I can achieve my goal with ZFS. Any pointers? If all else fails, I will simply use some scripts to automatically copy files over to the big drive while deleting the oldest files from the SSD.
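
    For reference, the bcache knobs listed live under sysfs; a sketch of the intended "hold writes all day" tuning, assuming the device is bcache0 (whether bcache actually keeps the backing disk idle for a full writeback_delay is exactly the open question here):

        echo writeback > /sys/block/bcache0/bcache/cache_mode
        echo 0 > /sys/block/bcache0/bcache/writeback_running    # stop continuous flushing
        echo 86400 > /sys/block/bcache0/bcache/writeback_delay  # target one flush per day
        echo 0 > /sys/block/bcache0/bcache/sequential_cutoff    # cache sequential writes too

    On the ZFS side, the ZIL only buffers synchronous writes for a few seconds and L2ARC is a read cache, so neither will keep the HDD spun down on its own.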

    Read the article

  • rsyslog - template - regex data for insertion into db

    - by Mike Purcell
    I've been googling around the last few days looking for a solid example of how to regex a log entry for desired data, which is then to be inserted into a database, but apparently my google-fu is lacking. What I am trying to do is track when an email is sent, and then track the remote MTA response, specifically the DSN code. At this point I have two templates set up, one for each situation:

        # /etc/rsyslog.conf
        ...
        $Template tpl_custom_header, "MPurcell: CUSTOM HEADER Template: %msg%\n"
        $Template tpl_response_dsn, "MPurcell: RESPONSE DSN Template: %msg%\n"

        # /etc/rsyslog.d/mail
        if $programname == 'mail-myapp' then /var/log/mail/myapp.log
        if ($programname == 'mail-myapp') and ($msg contains 'X-custom_header') then /var/log/mail/test.log;tpl_custom_header
        if ($programname == 'mail-myapp') and ($msg contains 'dsn=') then /var/log/mail/test.log;tpl_response_dsn
        & ~

    Example log entries:

        MPurcell: CUSTOM HEADER Template: D921940A1A: prepend: header X-custom_header: 101 from localhost[127.0.0.1]; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<localhost>: headername: message-id
        MPurcell: RESPONSE DSN Template: D921940A1A: to=<[email protected]>, relay=gmail-smtp-in.l.google.com[2607:f8b0:400e:c02::1a]:25, delay=2, delays=0.12/0.01/0.82/1.1, dsn=2.0.0, status=sent (250 2.0.0 OK 1372378600 o4si2828280pac.279 - gsmtp)

    From the CUSTOM HEADER Template I would like to extract: D921940A1A, and the X-custom_header value (101). From the RESPONSE DSN Template I would like to extract: D921940A1A, and "dsn=2.0.0".
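
    If the goal is to extract fields rather than log the whole %msg%, rsyslog's property replacer can run an ERE with a submatch directly inside a template. A hedged sketch (this uses the legacy %property:R,ERE,<submatch>,<nomatch>:regex--end% form; the regexes themselves are assumptions to adapt against the real messages):

        # queue ID plus the dsn= value from the delivery line
        $Template tpl_dsn_extract, "id=%msg:R,ERE,1,FIELD:^ ([0-9A-F]+):--end% dsn=%msg:R,ERE,1,FIELD:dsn=([0-9.]+)--end%\n"

    From there, ommysql with a matching SQL template is the usual route into a database.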

    Read the article

  • Very uneven CPU utilization with SQL Server 2012 on 2 processor computer with 16 cores / processor

    - by cooplarsh
    After installing SQL Server Enterprise 2012 with the Server+CAL license model on a computer with 2 processors, each with 16 cores (and no hyperthreading involved), and putting the server under extremely heavy load, the 16 cores on the first processor were very underutilized, the first 4 cores on the 2nd CPU were heavily utilized, and the last 12 cores were not used at all (because of the 20-core limit for this SQL Server version). Total CPU utilization was displaying as around 25%. Unfortunately, the server suffered from extremely poor performance, even though if the tasks had been evenly distributed across the 20 cores it wouldn't have been anywhere near as bad. The Windows Server was running as a VMware virtual machine under ESX Server, but all of the CPU was allocated to the Windows server. We tried changing affinity settings (e.g., allocating most cores to CPU and the others to I/O), but that didn't help solve the performance problems. Upgrading the product edition to SQL Server Enterprise Core 2012 not only allowed SQL Server to utilize the 12 previously unused cores on the 2nd processor, but also resulted in a much more even distribution of tasks across all of the processors. To get through the backlog of requests, CPU utilization jumped to around 90%, then came down to around 33% once it was caught up; performance improved dramatically once we failed over to the upgraded version, and the performance issues went away. I was wondering if anyone knows what might cause SQL Server to unevenly distribute the load, relying almost exclusively on the first 4 cores of the 2nd processor while 12 of its cores sat idle, and allocating only a few tasks to each of the 16 cores on the first processor. Also, is there any way we could have more evenly distributed the load across the 20 cores that were being used without the product edition upgrade? The flip side of that question: what did the product upgrade do that caused SQL Server to start evenly distributing the load across all of the cores it recognized? Thanks for any insight into these questions, and/or links that might help me make sense of what was happening.
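
    A hedged way to see the licensing cap directly (and to compare before/after an edition change) is the scheduler DMV - cores the engine may use run as VISIBLE ONLINE, parked ones as VISIBLE OFFLINE:

        -- schedulers the engine is actually using vs. ones parked by licensing
        SELECT scheduler_id, cpu_id, status, is_online
        FROM sys.dm_os_schedulers
        WHERE status LIKE 'VISIBLE%';

    The lopsided pattern described (one full socket plus 4 cores of the other) is consistent with the 20-core cap being taken from whole NUMA nodes in order rather than spread across sockets, which would also explain why even distribution returned once the Core edition lifted the cap.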

    Read the article

  • Maxtor 500GB external hard drive not being detected but power is going to it?

    - by ClarkeyBoy
    I have two Maxtor OneTouch 4 Lite 500GB external hard drives (part no. 9NT2A4-500). They both used to work fine on my old laptop (an Acer), but I have not used them for about a year, since my laptop was stolen and I got this one (also an Acer [Aspire 7738G]). I have one plugged into the mains with one of the leads I believe was supplied with them. It appears to be receiving power, as it is warm and the power light (on the unit itself) is on; the mains adapter is also fairly warm. I also have it plugged into my laptop with a USB lead which I have tested on my MP3 player (so I know it works). However, the hard drive is not showing in My Computer. I have tried checking for new hardware, installing the software that was supplied with it, checking drive letters in case it is registered as C: or something stupid, checking for problems, etc... I can't find any cause for it to do this. It does appear to be starting up and, possibly, shutting down and restarting constantly (that's what it sounds like, although I can't be certain). I have had both hard drives stored in different places for the last year, and they're both doing the same thing. If it was only one, then I'd guess it had got damaged or corrupted or something, but since it is both, I doubt this is it. The only things in common with both of them are the leads and the laptop; however, I know the USB lead works and guess the mains lead works, as there is power going to the unit. Has anyone come across this before, or does anyone have any idea what the cause / solution to the problem is? Any help would be greatly appreciated. Regards, Richard

    Read the article

  • What is causing Null Pointer Exception in the following code in java? [migrated]

    - by Joe
    When I run the following code I get Null Pointer Exception. I cannot figure out why that is happening. Need help.

        public class LinkedList<T> {
            private Link head = null;
            private int length = 0;

            public T get(int index) {
                return find(index).item;
            }

            public void set(int index, T item) {
                find(index).item = item;
            }

            public int length() {
                return length;
            }

            public void add(T item) {
                Link<T> ptr = head;
                if (ptr == null) {
                    // empty list so append to head
                    head = new Link<T>(item);
                } else {
                    // non-empty list, so locate last link
                    while (ptr.next != null) {
                        ptr = ptr.next;
                    }
                    ptr.next = new Link<T>(item);
                }
                length++; // update length cache
            }

            // traverse list looking for link at index
            private Link<T> find(int index) {
                Link<T> ptr = head;
                int i = 0;
                while (i++ != index) {
                    if (ptr != null) {
                        ptr = ptr.next;
                    }
                }
                return ptr;
            }

            private static class Link<S> {
                public S item;
                public Link<S> next;

                public Link(S item) {
                    this.item = item;
                }
            }

            public static void main(String[] args) {
                new LinkedList<String>().get(1);
            }
        }

    Read the article

  • Hard drive had reallocated sectors...but now it magically doesn't! Can I trust it?

    - by rob
    Last week my SMART diagnostics utility, CrystalDiskInfo, reported that the external hard drive that I was saving my backups to had suddenly reported 900+ reallocated sectors. I double-checked to confirm, then ordered a replacement drive. I spent all of this week copying data from that drive to the new drive. But toward the end of the copy, something peculiar happened. CrystalDiskInfo popped up an alert that the reallocated sector count had gone back down to 0. I know that when SMART detects a read error on a block, it adds that block to the current pending reallocation list. If it subsequently is successfully written or read later, it is removed from the list and assumed to be fine, but if a subsequent write fails, it is marked bad and added to the reallocated sector count. What concerns me most is that I've never read anywhere that a sector can be recovered as "good" after it has been marked as a bad sector and remapped. I've just finished running an extended SMART diagnostic, and it found no surface errors. Now I'm doubtful that the manufacturer will honor a warranty claim if the SMART info does not report any problems. Has anyone had this happen? If so, then is the drive, indeed, okay, or should I be concerned about an imminent failure?
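
    For what it's worth, the counters in question are easy to watch directly; a hedged check with smartmontools (the device name is a placeholder):

        # -A prints the vendor attribute table; the raw values are what matter here
        smartctl -A /dev/sdX | egrep 'Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable'

    A Current_Pending_Sector count falling back to zero after successful rewrites is normal; a Reallocated_Sector_Ct raw value dropping from 900+ to 0 is not something the attribute is supposed to do, so a misread through the external enclosure's USB bridge (a known weak spot for SMART passthrough) is at least as plausible as the sectors being "un-marked".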

    Read the article

  • Any e-mail client with additional grouping functionality on Mac OS X?

    - by harald
    Hello, I'm very unhappy with my mail experience. I'm receiving a lot of mail from various clients and projects and need a way to better organize it. I would really love to have additional grouping functionality in the e-mail client. Currently I'm using Mac OS X's Mail.app, but I am not bound to it, so I am open to any Mail.app plugin or independent mail application for Mac OS X, commercial or not - it should support IMAP, though, but I think that should not be a problem nowadays? With Mail.app I'm doing the following: group by thread; sort by datetime, descending. What I would love to have is not only additional tagging functionality for e-mails - I know that at least Thunderbird and Postbox support tags. I would love to have some additional grouping functionality for these tags, inside the main mail window. So maybe I can summarize the important points: a "native" Mac OS X mail client (no web-mailer, please); automatic tagging functionality (e.g., auto-apply tags by some kind of filter); easy access to tagged mails. On easy access to tagged mails: I would really love some additional grouping functionality in the mail folders. The mail application should put all tagged mails in a group, with the groups sorted by last received e-mail. Inside the group I would still like the possibility to group by thread. Or: it would be OK to have a list of tags (topics) in the left pane of the mail client. For example, Postbox: there is the 'accounts-section', there is the 'folders-section' - but why is there no 'topics-section'? Thanks very much,

    Read the article

  • Spots appear in a rectangle area on screen, ubuntu gnome 13.04, nvidia driver

    - by frozen-flame
    I am using Ubuntu Gnome 13.04 with the nvidia-310 driver installed. My GPU is a GeForce GTX 650. Strange spots frequently appear on screen, with the following traits: the spots are restricted to one or two rectangular areas at any instant; when typing, the pattern of spots changes - they may increase, or all disappear, when a key is pressed; mouse movement also influences them. The problem persists for the duration of a boot; the only way I can get rid of it is to reboot. When it appears, it is visible as soon as the desktop loads. Simultaneously, the "power off" option is missing from the top-right menu of GNOME 3. Occasionally, half of the screen becomes black. I never had this problem using Windows 7 on the same computer, nor Ubuntu with the Nouveau driver. I googled a lot; similar conditions are described, but with no confirmed solution. An uninstall-reinstall strategy does not work. Any clue to solving this will be appreciated.

    Read the article

  • SCCM 2012 Clients no longer detecting

    - by user3685428
    Here is the scenario: I had a fully functioning SCCM 2012 site server with the DP, MP, SUP, Application Catalog, etc. roles configured and working. There is only one server on this site. Everything was great, but I was not happy with SUP, so I decided to create a separate WSUS server and configure Windows Updates through GPOs. That setup worked great as well, so I went ahead and removed the SUP role from SCCM and removed the WSUS feature from my SCCM server (they were configured on the same SCCM server). I did not notice any problems right away. A couple of days later I noticed that the OSD deployments were giving errors, and after a couple of hours of trying suggestions from Google, I was able to uninstall PXE, make a few changes, and reinstall with WDS to get it working again. Again, I thought everything was fine and continued on. The last couple of days I have noticed that any new machine deployed or installing the client will show in the SCCM console with Client = "No". The client machines show connected to a site, but Software Center shows "IT Organization" instead of our site name like the previous clients. The existing clients all seem to be functioning normally; they still receive application distributions, configuration baselines, etc. Reinstalling, uninstalling and reinstalling, and repairing do not fix the problem, and this happens on all new clients. ClientLocation.log shows the client connecting to the correct MP. Nothing odd in any of the logs except for ClientMessaging.log, which continuously repeats this line:

        <![LOG[Raising event: instance of CCM_CcmHttp_Status { ClientID = "GUID:0450fde3-ab82-41bf-9c33-87a18113744b"; DateTime = "20140528214824.993000+000"; HostName = "SOUNDWAVE.domain.org"; HRESULT = "0x00000000"; ProcessID = 4092; StatusCode = 0; ThreadID = 3720; }; ]LOG]!><time="16:48:24.994+300" date="05-28-2014" component="CcmMessaging" context="" type="1" thread="3720" file="event.cpp:706">

    Thanks

    Read the article
