Search Results

Search found 501 results on 21 pages for 'reliability'.

Page 2/21

  • Best Practice for Software Maintenance Console

    - by Ben-G
    I am looking for a list of must-have maintenance/administration features/components/services for enterprise applications. I know the following common components: Configuration Cockpit (shows current configuration mistakes/issues), Load Analysis (shows the current load on different system components), Vitality Measures, Log File Access, System Restart Capability, and Backup/Restore Capability. Are there any widely accepted services/features which are included in any software with a focus on reliability and maintainability?
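
    To make the "Vitality Measures" item concrete, here is a minimal sketch of a self-reporting health endpoint. The port, path, and the particular measures are illustrative assumptions, not something the question specifies.

        using System;
        using System.Net;
        using System.Text;

        // Minimal "vitality measures" sketch: a self-hosted endpoint that
        // returns one cheap JSON health report per request. Real checks
        // would also probe dependencies (database, queues, disk space).
        class VitalityEndpoint
        {
            static void Main()
            {
                var listener = new HttpListener();
                listener.Prefixes.Add("http://localhost:8080/health/");
                listener.Start();
                Console.WriteLine("Serving vitality report on /health");
                while (true)
                {
                    HttpListenerContext ctx = listener.GetContext();
                    long workingSetMb = Environment.WorkingSet / (1024 * 1024);
                    long uptimeSec = Environment.TickCount / 1000;
                    string body = "{ \"status\": \"up\", \"workingSetMb\": " + workingSetMb +
                                  ", \"uptimeSec\": " + uptimeSec + " }";
                    byte[] bytes = Encoding.UTF8.GetBytes(body);
                    ctx.Response.ContentType = "application/json";
                    ctx.Response.OutputStream.Write(bytes, 0, bytes.Length);
                    ctx.Response.Close();
                }
            }
        }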

    Read the article

  • Does BitLocker reduce write reliability?

    - by Unsigned
    For the purposes of this question, BitLocker refers to the BitLocker To Go variety on a disk with write-caching disabled. NTFS supports metadata journaling, which, although not completely failsafe, does mitigate certain types of potential filesystem errors. Assuming an NTFS volume is protected with BitLocker, does this reduce the failure tolerance? Would a power failure during a write leave a BitLocker-protected NTFS volume more prone to corruption than an unencrypted NTFS volume?

    Read the article

  • Reliability of S.M.A.R.T.?

    - by Mark
    I've been using ActiveSmart to monitor my hard drives' health for a few weeks now, and it's telling me my brand new 1.5 TB hard drive is half-dead already. That puts it about on par with one of my other drives, which I know is at least half-dead because I've been getting read errors and hearing ticking noises. Now, I haven't actually noticed any problems with my 1.5 TB drive; should I be concerned that it's going to crap out on me too? Or could ActiveSmart be giving a misdiagnosis because I use it a lot or something (I've used up 795 GB in the two and a half weeks I've had it)? The events that ActiveSmart has been catching are "Hardware ECC recovered". Maybe these newfangled super-big hard drives somehow rely on ECC to squeeze out the extra space, and this isn't actually a cause for concern?
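
    For anyone who wants to look at the raw counter rather than ActiveSmart's interpretation, here is a sketch that shells out to smartctl (from the smartmontools package) and pulls the relevant attribute row. The device path and the assumption that smartctl is on the PATH are illustrative, not taken from the question.

        using System;
        using System.Diagnostics;

        // Sketch: print the "Hardware_ECC_Recovered" S.M.A.R.T. attribute
        // row via smartctl. Assumes smartmontools is installed and the
        // drive is /dev/sda; adjust both for the machine in question.
        class SmartEccCheck
        {
            static void Main()
            {
                var psi = new ProcessStartInfo("smartctl", "-A /dev/sda")
                {
                    RedirectStandardOutput = true,
                    UseShellExecute = false
                };
                using (Process p = Process.Start(psi))
                {
                    string line;
                    while ((line = p.StandardOutput.ReadLine()) != null)
                    {
                        // Raw ECC-recovered counts can be enormous on healthy
                        // drives; the normalized value versus its threshold is
                        // the comparison that actually signals trouble.
                        if (line.Contains("Hardware_ECC_Recovered"))
                            Console.WriteLine(line);
                    }
                    p.WaitForExit();
                }
            }
        }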

    Read the article

  • Reliability of VMware ESXi for backup

    - by Laurent
    Currently, I'm using a server as an online backup and to run some VMs with VMware Server. I'm interested in converting it to VMware ESXi but have some concerns about possible corruption of my VMDKs if I choose to store my data on them. I was also thinking of storing the data directly on the datastore, but I can't find any way to mount a VMFS volume with a LiveCD if ESXi is unable to start. What are my options? Is continuing to use VMware Server a good idea, knowing that I DO want to use the server for both virtualization and backup purposes? Thanks.

    Read the article

  • What is Google Docs' SLA?

    - by Walter White
    Hi all, I am evaluating online storage, and for me that means either Amazon S3 or Google Docs. Amazon very clearly posts their reliability and SLA: http://aws.amazon.com/s3/#protecting Their rates are obviously higher than Google's, but it is really hard to compare without having an SLA. Does anyone know what Google's commitment is for reliability? Is it 99.99% for data, and is there any way to make that more durable? I have to ask too: wouldn't Google Docs at least be inherently more reliable than a hard drive? Thanks, Walter

    Read the article

  • What is the fastest RAID in practice?

    - by Luke
    I'm going to be rebuilding my server, and I want much faster access to my data. I've used RAID 1 and 0 in the past, and decided upon RAID 10 (dedicated RAID card). Then someone told me to use RAID 5+0, and someone else told me to use RAID 6+0. Assuming the hardware RAID card supports each level, what is currently the FASTEST RAID available, given x number of hard drives? Reliability is now another factor, and I am willing to spend money on new drives if a drive (or several) fails. I simply want to know what the fastest RAID level is, along with some reliability for recovering from a failure.
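
    As a rough way to compare the candidates (RAID 10 versus RAID 5+0 and 6+0), here is a back-of-the-envelope sketch of idealized streaming-throughput multipliers. The drive count and sub-array count are assumptions, and real controllers, stripe sizes, and random-I/O workloads will deviate, so treat this as a comparison aid rather than a benchmark.

        using System;

        // Idealized streaming-throughput multipliers (x single-drive speed)
        // for n drives. Ignores controller overhead, caching, and random I/O.
        class RaidThroughput
        {
            static void Main()
            {
                int n = 8;       // total drives (assumed)
                int groups = 2;  // RAID 50/60 sub-array count (assumed)
                // RAID 10: striped mirror pairs; reads can hit both mirror halves.
                Console.WriteLine("RAID 10: read ~{0}x, write ~{1}x", n, n / 2);
                // RAID 50: striped RAID 5 groups; one drive per group holds parity.
                Console.WriteLine("RAID 50: read ~{0}x, write ~{1}x", n, n - groups);
                // RAID 60: striped RAID 6 groups; two drives per group hold parity.
                Console.WriteLine("RAID 60: read ~{0}x, write ~{1}x", n, n - 2 * groups);
            }
        }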

    Read the article

  • Can an SSD notify the hosting OS that its wear level is getting high?

    - by Tony_Henrich
    I read a lot about SSDs and I am interested in them for server use. My biggest concern is their reliability: a lot of writes shortens their life span. I could mitigate this problem if I could run some kind of diagnostics on a regular basis on the SSD, or if the SSD could automatically warn the OS that its reliability is reaching a critical level. Think of this as S.M.A.R.T. or software like SpinRite, but for SSDs. Does anything I mentioned exist now? Which kind/brand of SSD does this? I don't mind swapping out a tired SSD for a newer one once in a while, and I am assuming an SSD's lifespan is measured in years rather than a few months. For me, the improved performance will pay for the SSD over and over. I am planning to use plenty of RAM as well.
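
    As a rough sanity check on the "years, not months" assumption, here is a sketch of the usual endurance arithmetic. Both the rated-endurance and write-volume figures are illustrative assumptions, not specs for any particular drive.

        using System;

        // Back-of-the-envelope SSD endurance estimate. All inputs are
        // assumed, illustrative figures; substitute the drive's rated
        // endurance and the server's measured daily write volume.
        class SsdEndurance
        {
            static void Main()
            {
                double ratedTerabytesWritten = 70.0; // rated endurance (assumed)
                double writesPerDayGb = 50.0;        // daily write volume (assumed)
                double years = ratedTerabytesWritten * 1024.0 / (writesPerDayGb * 365.0);
                Console.WriteLine("Estimated wear-out horizon: {0:F1} years", years);
                // ~3.9 years under these assumptions; heavy write amplification
                // or logging workloads can shrink this considerably.
            }
        }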

    Read the article

  • SharpArch.Core.PreconditionException: For better clarity and reliability, Entities with an assigned Id must call Save or Update

    - by Quintin Par
    When I do a save of an entity with an assigned id, I get a SharpArch.Core.PreconditionException: For better clarity and reliability, Entities with an assigned Id must call Save or Update. My class is:

        public class CompanyUserRepository : RepositoryWithTypedId<CompanyUser, string>, ICompanyUserRepository
        {
            public override CompanyUser SaveOrUpdate(CompanyUser entity)
            {
                var user = base.SaveOrUpdate(entity);
                // Do some stuff
                return user;
            }
        }

    How do I go about saving this entity? RepositoryWithTypedId does not expose a Save method. The related question tells you the reason, but I haven't found the Sharp Architecture way to do a Save.
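
    One hedged workaround, sketched below, is to do what the precondition message asks for and call NHibernate's ISession.Save explicitly. This assumes the base repository exposes the current ISession (a protected Session property in SharpArch 1.x); if your version doesn't, NHibernateSession.Current may serve the same role. It is a sketch of one possibility, not the canonical Sharp Architecture answer.

        using NHibernate;
        using SharpArch.Data.NHibernate;

        public class CompanyUserRepository : RepositoryWithTypedId<CompanyUser, string>, ICompanyUserRepository
        {
            // Explicit Save for the assigned-id entity: ISession.Save issues
            // an INSERT, bypassing SaveOrUpdate's assigned-id guard.
            public CompanyUser Save(CompanyUser entity)
            {
                Session.Save(entity); // assumes the base class exposes the ISession
                return entity;
            }

            public override CompanyUser SaveOrUpdate(CompanyUser entity)
            {
                var user = base.SaveOrUpdate(entity);
                // Do some stuff
                return user;
            }
        }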

    Read the article

  • GetHashCode Method reliability in Silverlight/WP7.1

    - by abhinav
    I am attempting to hash, and keep the hash of, an object of type IEnumerable<anotherobject> which has about 1000 entries. I'll be generating another such object, but this time I'd like to check for any changes in the values of the entries using the hash codes of the two objects. Basically, I was wondering if GetHashCode() is apt for this, both from a performance perspective and a reliability perspective (different values for different object values, and the same value for the same object values, always). If I have to override it, what would be a good way to do so? Does it always depend on the type of anotherobject and on what Equals means when comparing two anotherobjects? Is there a generic way to do it? This concern is because my object can be quite big.
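
    A standard way to build such an override, sketched below for illustration: fold the per-entry hashes together in an order-dependent way, delegating to anotherobject's own GetHashCode/Equals pair (so yes, it depends on those being implemented consistently). Two caveats: equal hashes never prove equality, since collisions exist, so this works as a fast "definitely changed" filter rather than a proof of "unchanged"; and the default GetHashCode of a type is not guaranteed stable across processes or framework versions, so anything persisted is safer with a real digest over a canonical serialization.

        using System.Collections.Generic;

        static class SequenceHash
        {
            // Order-dependent combination of element hashes. Equal sequences
            // yield equal hashes, but equal hashes do NOT guarantee equal
            // sequences (collisions), so treat a match as "probably unchanged".
            public static int Combine<T>(IEnumerable<T> items)
            {
                unchecked
                {
                    int hash = 17;
                    foreach (T item in items)
                        hash = hash * 31 + (item == null ? 0 : item.GetHashCode());
                    return hash;
                }
            }
        }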

    Read the article

  • Recommendations for stable, reliable flash drives

    - by Josh Kelley
    We're looking to purchase some flash drives for use in some embedded devices. Most of our requirements aren't too different from the generic "good, fast" flash drive: reliability is very important, speed is good, and so that the drive will fit, the case shouldn't be too large (so no OCZ Throttles). Consistency is also a major priority; we'd like to be able to buy more or less the same product a year or two from now without having to worry about the manufacturer swapping drive components for less reliable or slower parts. (We've been burned already by our previous manufacturer doing this.) Any recommendations, especially regarding consistency? I can read Ars Technica to get an overview of current models, but which models are consistently good?

    Read the article

  • Reliability of UDP on localhost

    - by Bryan Ward
    I know that UDP is inherently unreliable, but when connecting to localhost I would expect the kernel to handle the connection differently, since everything can be handled internally. So in this special case, is UDP considered a reliable protocol, or will the kernel still potentially junk some packets if buffers are overrun?
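
    The buffer-overrun case is easy to probe empirically. Here is a sketch of a loopback drop test: blast datagrams at a local receiver and count what arrives; the port and packet counts are arbitrary choices, and on most systems a fast enough sender will overrun the socket receive buffer and see silent drops even on localhost.

        using System;
        using System.Net;
        using System.Net.Sockets;
        using System.Threading;

        // Loopback drop test: send many datagrams to 127.0.0.1 and count
        // arrivals. Drops indicate the kernel discarding packets when the
        // receive buffer overruns, even with no network in the path.
        class LoopbackUdpTest
        {
            const int Port = 9999;
            const int Count = 100000;

            static void Main()
            {
                var receiver = new UdpClient(Port);
                receiver.Client.ReceiveTimeout = 2000; // give up 2 s after the last packet
                int received = 0;
                var reader = new Thread(() =>
                {
                    var remote = new IPEndPoint(IPAddress.Any, 0);
                    try
                    {
                        while (true) { receiver.Receive(ref remote); received++; }
                    }
                    catch (SocketException) { /* timeout: sender is done */ }
                });
                reader.Start();

                var sender = new UdpClient();
                var payload = new byte[512];
                for (int i = 0; i < Count; i++)
                    sender.Send(payload, payload.Length, "127.0.0.1", Port);

                reader.Join();
                Console.WriteLine("sent {0}, received {1}, dropped {2}",
                                  Count, received, Count - received);
            }
        }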

    Read the article

  • How common are power supply failures in comparison to hard disk failures?

    - by Adrian Grigore
    Hi, my webhost offers two different types of high-availability options for dedicated servers: redundant hard disks (RAID 1), or redundant hard disks (RAID 1) plus a redundant power supply. How common is a power supply failure in comparison to a hard disk failure? I know it's not possible to give exact figures without knowing the exact hardware, but ballpark figures are good enough for me at the moment. Thanks, Adrian
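
    For a ballpark framing, here is a sketch of the arithmetic behind the two options. The annualized failure rates (AFRs) below are illustrative round numbers, not vendor statistics, and the RAID 1 figure ignores correlated failures and the rebuild window.

        using System;

        // Ballpark comparison using assumed annualized failure rates.
        class FailureRates
        {
            static void Main()
            {
                double diskAfr = 0.04; // assumed single-disk AFR
                double psuAfr = 0.01;  // assumed power-supply AFR
                // Chance that BOTH mirrored disks fail within the same year
                // (naive independence assumption, no rebuild-window modeling):
                double raid1Loss = diskAfr * diskAfr;
                Console.WriteLine("Single disk AFR:       {0:P2}", diskAfr);
                Console.WriteLine("RAID 1 double failure: {0:P2}", raid1Loss);
                Console.WriteLine("Single PSU AFR:        {0:P2}", psuAfr);
                // Under these assumptions the mirrored disks are far safer than
                // a lone PSU, which is what option 2's redundant supply targets.
            }
        }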

    Read the article

  • Are SANs unreliable?

    - by chaos
    So at the place where I wear one of my various hats, this one representing a development rather than admin role, there's been an initiative to move to SANs. So far, I have been spectacularly unimpressed. First it was this behavior where, when MySQL databases are on the SAN, the first few tables that anything tries to hit after the system boots come up as nonexistent and MySQL has to be restarted before it realizes they're actually there. Then today, on multiple systems (including the primary SVN repository, ever-so-wonderfully) we get SAN mounts spewing IO errors and the filesystems going into read-only, which is the kind of behavior I expect from directly mounted naked disks, not fault-tolerant managed storage. Right now, I'm at the point where if I were putting together a project and somebody said "hey we should use SANs", my response would be "GTFO". So basically I want to know whether my experience is typical or even common, or whether I'm having some kind of freakishly bad luck with SANs. The systems these SANs are attached to are all CentOS machines, if that's relevant.

    Read the article

  • Which database to use and system/db administration by layman [closed]

    - by blah
    So my friend and I got a brilliant ;) idea for a business. Since it is not predictable whether it will work out or not, we decided to keep costs as low as possible to start with, in particular not to hire anyone. If it works out as expected, it will generate enough profit to hire professionals in a few months, but for the first few months we'll be doing everything by ourselves. He's a business/finance major, and I'm a software developer, so obviously I have to take care of IT :) It will be a webapp, written in Python/Django. My questions regarding this project:
    1) What database should I choose? I'm experienced with Oracle, and have been working with SQL Server for a while, but both of them are too expensive (at least for now). And that's developer experience; I've never done any DBA stuff. I'm looking for something free (as in beer). It looks like MySQL and PostgreSQL are the most popular in this sector; I would appreciate any comments on which to choose, and I'm open to other suggestions too (it doesn't have to be MySQL or PostgreSQL). Here's what I know about the data: it will be almost all dates and numbers, with a little bit of text; searched mainly by date; almost never updated, mostly inserted and browsed; 30k to 300k new records/month.
    2) Servers. My idea is to rent two dedicated servers. During normal operation one would be the web server (Debian/Apache) and the other would be the DB server (Debian/?). My recovery plan is to install everything on both, and in case of trouble with one of the machines, just run everything on the other one. Does that even make sense? Any other tips appreciated. Thanks.

    Read the article

  • Do superusers prefer business-grade or consumer-grade PCs?

    - by joelhaus
    Having burned through a number of consumer-grade laptops in recent years, I'm wondering if the additional cost of a business-grade computer is a worthwhile investment. I'm considering getting a laptop with slightly lower specs to justify the added cost. The primary benefits I see are: (i) the notebook will be more reliable, (ii) it will have a longer life, and (iii) the warranty (parts and labor) will be 3 years instead of 1 year. Are there any other considerations one should keep in mind when shopping for a business-grade PC? Is purchasing direct from the manufacturer wise, or are there other options that should be considered too? Thanks in advance!

    Read the article

  • How harmful is a hard disk spin cycle?

    - by Gilles
    It is conventional wisdom¹ that each time you spin a hard disk down and back up, you shave some time off its life expectancy. The topic has been discussed before: Is turning off hard disks harmful? What's the effect of standby (spindown) mode on modern hard drives? Common explanations for why spindowns and spinups are harmful are that they induce more stress on the mechanical parts than ordinary running, and that they cause heat variations that are harmful to the device mechanics. Is there any data showing quantitatively how bad a spin cycle is? That is, how much life expectancy does a spin cycle cost? Or, more practically, if I know that I'm not going to need a disk for X seconds, how large should X be to warrant spinning down? ¹ But conventional wisdom has been wrong before; for example, it is commonly held that hard disks should be kept as cool as possible, but the one published study on the topic shows that cooler drives actually fail more. This study is no help here since all the disks surveyed were powered on 24/7.
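
    One way to frame the "how large should X be" question, under loudly assumed ratings: treat the drive's rated start/stop cycles and rated power-on hours as two ways of spending the same life budget, and compute the idle time at which one spin-down's wear equals the wear of simply staying spun up. The ratings in the sketch are illustrative, not from any datasheet, and the model ignores energy savings and spin-up latency.

        using System;

        // Break-even spin-down threshold under assumed ratings: spin down
        // only when the expected idle gap exceeds the idle time whose
        // running wear equals one start/stop cycle's wear.
        class SpinDownBreakEven
        {
            static void Main()
            {
                double ratedCycles = 50000;          // start/stop cycles (assumed)
                double ratedPowerOnHours = 5 * 8760; // 5 years continuous (assumed)
                double breakEvenSeconds = ratedPowerOnHours * 3600 / ratedCycles;
                Console.WriteLine("Spin down only for idle gaps > {0:F0} s (~{1:F0} min)",
                                  breakEvenSeconds, breakEvenSeconds / 60);
                // ~3150 s (about 53 minutes) with these numbers; different
                // ratings shift the threshold proportionally.
            }
        }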

    Read the article

  • How stable is zfs-fuse 0.6.9 on Linux?

    - by Mavrik
    I'm thinking of using ZFS for my home-made NAS array. I would have 4 HDDs in raidz on an Ubuntu Server 10.04 machine. I'd like to use the snapshot capability and dedup when storing data. I'm not so much concerned about speed, since the machine is accessed over an 802.11n wireless network and that is probably going to be the bottleneck. So does anyone have any practical experience with zfs-fuse 0.6.9 on such (or a similar) configuration?

    Read the article

  • When running a shell script, how can you protect it from overwriting or truncating files?

    - by Joseph Garvin
    If, while an application is running, one of the shared libraries it uses is written to or truncated, then the application will crash. Moving the file or removing it wholesale with 'rm' will not cause a crash, because the OS (Solaris in this case, but I assume this is true on Linux and other *nix as well) is smart enough not to delete the inode associated with the file while any process has it open. I have a shell script that performs installation of shared libraries. Sometimes, it may be used to reinstall versions of shared libraries that were already installed, without an uninstall first. Because applications may be using the already installed shared libraries, it's important that the script is smart enough to rm the files or move them out of the way (e.g. to a 'deleted' folder that cron could empty at a time when we know no applications will be running) before installing the new ones, so that they're not overwritten or truncated. Unfortunately, an application recently crashed just after an install. Coincidence? It's difficult to tell. The real solution here is to switch over to a more robust installation method than an old gigantic shell script, but it'd be nice to have some extra protection until the switch is made. Is there any way to wrap a shell script to protect it from overwriting or truncating files (and ideally failing loudly), while still allowing them to be moved or rm'd? Standard UNIX file permissions won't do the trick because you can't distinguish moving/removing from overwriting/truncating. Aliases could work, but I'm not sure which commands would all need to be aliased. I imagine something like truss/strace, except that before each action it checks against a filter whether to actually do it. I don't need a perfect solution that would work even against an intentionally malicious script. Ideas I have so far:
    - Alias cp to GNU cp (not the default, since I'm on Solaris) and use the --remove-destination option.
    - Alias install to GNU install and use the --backup option. It might be smart enough to move the existing file to the backup file name rather than making a copy, thus preserving the inode.
    - "set noclobber" in ~/.bashrc so that I/O redirection won't overwrite files.

    Read the article

  • Looking for a reliable web provider that supports ASP.NET? Shared LAMP account a plus.

    - by Cory Charlton
    My title is probably not very clear, but here's the deal. I'm a software engineer with experience in many languages, but my current focus is Windows/web applications using C# and .NET. I'm currently running a personal blog using WordPress and love it. I need to set up a website for my consulting company and, while I enjoy the canned benefits of a CMS like WordPress, would like to build a custom ASP.NET site. Either way, my current LAMP host is not secure, so I'm looking to switch and looking for a reliable alternative. My ultimate wish list is a cost-effective host (I'm currently spending ~$120/yr for web+domain hosting) that would allow me to deploy my own ASP.NET code and host a WordPress blog (IIS with PHP to external MySQL, or a separate LAMP site). Thanks in advance for your recommendations (Google is not good for this type of search :-D) Edit: I'm fine if I have to ditch WordPress. Really I'm just looking for a good ASP.NET host; the WordPress compatibility would be a plus.

    Read the article

  • What lasts longer: Data stored on non-volatile flash RAM, optical media, or magnetic disk?

    - by Chris W. Rea
    What lasts longer: Data stored on non-volatile flash RAM (USB stick or SD cards?), optical media (CD, DVD, or Blu-Ray?), or magnetic disk (floppies, hard drives?) My gut tells me optical media, but I'm not sure. Furthermore, which of those digital media would be most suitable for long-term data storage where environmental issues are unknown, such as low/high temperature or humidity? For example, what digital media could be stored in a basement, attic, or time capsule, and be expected to survive a reasonably long time? e.g. a lifetime, and then some. Update: Looks like optical media and magnetic tape each have one vote below. Does anybody else have an opinion or know of a study comparing the two?

    Read the article
