Search Results

Search found 31356 results on 1255 pages for 'database backups'.

Page 594/1255

  • How do you handle data archiving?

    - by 20th Century Boy
    Backups are one thing, but long-term archival is another. For example, you might be required to store emails for 7 years, or keep all project data indefinitely. I used to save archives to tape, but then I've had tapes get destroyed (drives rip the tape out). So... write to 2 tapes, I hear you say. Is that what others do? Have 2 (or more) tapes of the same data for redundancy? But then the other issue is that tapes usually cannot be read by different backup software vendors. E.g. if you go from ARCserve to Backup Exec to CommVault over 10 years, you would need to keep all 3 systems so that you could restore old data. Likewise for hardware: old tapes might not be barcoded, might not be compatible with the new library, etc. So do you keep old tape hardware AND old software just in case you might need to restore a 10-year-old file? Or, when you move to a new backup system, do you migrate all archived data to the new system and re-archive it onto new tapes? That could be a huge job. Any thoughts?

    Read the article

  • How to customize pickle for django model objects

    - by muudscope
    I need to pickle a complex object that refers to Django model objects. The standard pickling process stores a denormalized copy of the object in the pickle, so if the object changes in the database between pickling and unpickling, the unpickled model is out of date. (I know this is true of in-memory objects too, but pickling is a convenient time to address it.) So what I'd like is a way to not pickle the full Django model object, but instead store just its class and id, and re-fetch the contents from the database on load. Can I specify a custom pickle method for this class? I'm happy to write a wrapper class around the Django model to handle the lazy fetching from the database, if there's a way to customize the pickling.

    Read the article

  • Web hosting for multiple web sites providing system isolation

    - by Justin
    We have a small number of projects where we expect the client will not be maintaining the installed versions of applications we install to power the site (such as Drupal). Given that an important part of security is keeping things updated, we don't want to host these projects on our Plesk-powered dedicated servers that currently host lots of our other clients' websites. Our goal is to find a host where we can deploy isolated instances (be these slices, virtual servers, grid servers, etc.) for each individual (or groups of 2-3) web sites as we need them. These instances would be completely separate, so that if one web site were hacked it would not impact any other site. Typical hosting requirements: Linux, Apache, PHP 5, MySQL, supports Drupal, ability to set up a cron task (but we don't need SSH access), daily backups, virtualized/cloud hosting (we want to avoid shared), pricing per site around $25/month, OS patched automatically. Some options we have considered but won't work: MediaTemple: two major data center-wide security incidents and recent downtime foster doubt about this host's technical ability. Slicehost: this would require us to manage the entire server, which we don't want to do. Rackspace Cloud Sites (formerly Mosso): no backup options. Do you have any recommended hosting options given these requirements?

    Read the article

  • How to determine the maximum integer the model can handle?

    - by John Mee
    "What is the biggest integer the model field that this application instance can handle?" We have sys.maxint, but I'm looking for the database+model instance. We have the IntegerField, the SmallIntegerField, the PositiveSmallIntegerField, and a couple of others beside. They could all vary between each other and each database type. I found the "IntegerRangeField" custom field example here on stackoverflow. Might have to use that, and guess the lowest common denominator? Or rethink the design I suppose. Is there an easy way to work out the biggest integer an IntegerField, or its variants, can cope with?

    Read the article

  • How to arrange the items of a GridView?

    - by kranthi
    I have a settings page in my ASP.NET website, in which a user can select desired items and the order in which they are displayed on another page. I am displaying these items (which are obtained from the database) in a GridView with checkboxes and up/down arrows next to them. Once the user makes his selection, rearranges the items and clicks the 'Save' button, I save the data into another database table. When this particular user logs in again, I want to check the items which were already chosen by him and arrange them in the order he specified, on page load. I am able to check the checkboxes of the already chosen items, but I do not understand how to arrange the items in the user-specified order. Please help. Thanks.
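    One common pattern for the ordering part is to store a SortOrder value alongside each saved selection and, on page load, bind the grid to the saved rows ordered by that column (unselected items last). A rough sketch of that idea follows; the table, column and control names are made up for illustration, not taken from the question:

        using System;
        using System.Data;
        using System.Data.SqlClient;

        public partial class SettingsPage : System.Web.UI.Page
        {
            // Illustrative only; in a real page these would come from config / authentication.
            private const string ConnectionString =
                "Data Source=.;Initial Catalog=MyApp;Integrated Security=True";
            private int CurrentUserId { get { return 1; } }

            protected void Page_Load(object sender, EventArgs e)
            {
                if (IsPostBack) return;

                // Saved selections first, in the user's saved order, then everything else.
                const string sql = @"
                    SELECT i.ItemId, i.ItemName,
                           CASE WHEN s.ItemId IS NULL THEN 0 ELSE 1 END AS IsChecked,
                           ISNULL(s.SortOrder, 2147483647) AS SortOrder
                    FROM   Items i
                    LEFT JOIN UserItemSettings s
                           ON s.ItemId = i.ItemId AND s.UserId = @UserId
                    ORDER BY SortOrder, i.ItemName;";

                var table = new DataTable();
                using (var conn = new SqlConnection(ConnectionString))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    cmd.Parameters.AddWithValue("@UserId", CurrentUserId);
                    using (var adapter = new SqlDataAdapter(cmd))
                    {
                        adapter.Fill(table);   // Fill opens and closes the connection itself
                    }
                }

                // ItemsGrid is the GridView declared in the .aspx markup, with a
                // TemplateField checkbox bound to the IsChecked column.
                ItemsGrid.DataSource = table;
                ItemsGrid.DataBind();
            }
        }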

    Read the article

  • Limit the amount of data that can be stored in a folder on Ubuntu Server 12.04?

    - by dougoftheabaci
    I'm in the process of building my first server. It's up, it's running, and I'm transferring copious amounts of data away from my horrid little Drobo (DO NOT BUY ONE OF THESE, EVER). However, there's one thing I have yet to do: I'd like to set it up for Time Machine backups as well. I've seen all the guides and I have some idea of how to set the whole thing up, but the issue is that Time Machine will just fill up as much space as you let it. So if I let it loose in my 8 TB zpool, it'll slowly consume every last available sector. This, of course, is not acceptable. I have a folder at the root of my zpool called "ZFS Time Machine" and I would like to limit it to 1 TB (all I need for backup purposes). However, I have no idea how to do that. Is this possible? I can continue using a small external hard drive attached via FW800 if I have to, but I'd much rather put everything on my server.

    Read the article

  • Optimum configuration of McAfee for Servers

    - by Wayne Arthurton
    Our corporate standard is McAfee Enterprise; unfortunately this is non-negotiable. On the two types of servers I'm responsible for, SQL and web, we have noticed major performance issues with the corporate standard setup: max scan time 45 sec; one policy for all processes; scan ALL files on write, read and open for backup; heuristics set to find unknown programs, trojans and macros and to detect unwanted programs; exclusions limited to EVT, LDF, LOG, MDF and VMD files plus Windows File Protection. This of course still causes major slowdowns: IIS/.NET recompiles are slow (especially with SharePoint), as are SQL backups and restores, SQL Analysis Services, Integration Services and the temp data they generate. I have looked, from time to time, for best practices on setting up McAfee for SQL Server, SQL Analysis Services, SQL Integration Services, Visual Studio, SharePoint, and .NET web servers in general. How do people set up McAfee Enterprise on their corporate servers, keeping security intact but affecting performance as little as possible? Has anyone run across white papers on these setups? Obviously some of this is case by case, but there must be some best practices out there somewhere.

    Read the article

  • Are ORMs counterproductive to OO design?

    - by Jeremiah
    In OOD, the design of an object is said to be characterized by its identity and behavior. Having used ORMs in the past, their primary purpose, in my opinion, revolves around the ability to store and retrieve data. That is to say, ORM objects are not designed around behavior, but rather around data (i.e. database tables). Case in point: many ORM tools come with a point-at-a-database-table-and-click object generator. If objects are no longer characterized by behavior, this will, in my opinion, muddy the identity and responsibility of the objects. Subsequently, if objects are not defined by a responsibility, this could lead to tightly coupled classes and overall poor design. Furthermore, I would think that in an application setting you would be heading towards scalability issues. So, my question is: do you think that ORMs are counterproductive to OO design? Perhaps the underlying question is whether or not they are counterproductive to application development.

    Read the article

  • SQL Server Compact 'Data Directory' macro in Connection String - more info needed

    - by codeulike
    So, as described on this msdn page, when you define a Connection String for SQL Server Compact 3.5, you can use the "Data Directory" macro, like this: quote from this msdn page: Data Directory Support SQL Server Compact 3.5 now supports the Data Directory macro. This means that if you add the string |DataDirectory| (enclosed in pipe symbols) to a file path, it will resolve to the path of the database. For example, consider the connection string: "Data Source= c:\program files\MyApp\Mydb.sdf" When using Data Directory, you can instead use the following connection string: "Data Source = |DataDirectory|\Mydb.sdf" For more information, see How to: Deploy a SQL Server Compact 3.5 Database with an Application. However, the 'for more information' link on msdn doesn't actually give any more information. So my question is: How does the |Data Directory| macro translate at run time? For WinForm apps, it seems to just give the location of the executable. Or is it more complicated than that?
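    For what it's worth, the macro is read from the current AppDomain's "DataDirectory" property; when nothing has set it (the usual case for a plain WinForms or console exe), ADO.NET falls back to the application's base directory, i.e. the folder the executable runs from, while ClickOnce and ASP.NET point it at their own data folders. You can inspect or override it yourself; a minimal sketch, with illustrative paths (requires a reference to System.Data.SqlServerCe):

        using System;
        using System.Data.SqlServerCe;

        class DataDirectoryDemo
        {
            static void Main()
            {
                // Whatever the host set; null means "fall back to the base directory".
                object dataDir = AppDomain.CurrentDomain.GetData("DataDirectory");
                Console.WriteLine(dataDir ?? AppDomain.CurrentDomain.BaseDirectory);

                // Point the macro somewhere else explicitly before opening connections.
                AppDomain.CurrentDomain.SetData("DataDirectory", @"C:\ProgramData\MyApp");

                // |DataDirectory| now resolves to that folder (Mydb.sdf is assumed to exist there).
                using (var conn = new SqlCeConnection(@"Data Source=|DataDirectory|\Mydb.sdf"))
                {
                    conn.Open();
                    Console.WriteLine("Connection opened.");
                }
            }
        }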

    Read the article

  • Fluent NHibernate - Unable to parse integer as enum.

    - by Aaron Smith
    I have a column mapped to an enum with a convention set up to map this as an integer to the database. When I run the code to pull the data from the database I get the error "Can't Parse 4 as Status".

        public class Provider : Entity<Provider>
        {
            public virtual Enums.ProviderStatus Status { get; set; }
        }

        public class ProviderMap : ClassMap<Provider>
        {
            public ProviderMap()
            {
                Map(x => x.Status);
            }
        }

        class EnumConvention : IUserTypeConvention
        {
            public void Accept(IAcceptanceCriteria<IPropertyInspector> criteria)
            {
                criteria.Expect(x => x.Property.PropertyType.IsEnum);
            }

            public void Apply(IPropertyInstance instance)
            {
                instance.CustomType(instance.Property.PropertyType);
            }
        }

    Any idea what I'm doing wrong?

    Read the article

  • Backup hardware and strategy on distributed Windows Server 2008 network

    - by CesarGon
    This question is a follow-up to this. We have a Windows Server 2008 R2 domain over a network that spans two different buildings, linked by a 100-Mbps point-to-point line. Over 60 users work in the organisation. We are planning to use DFS folders and DFS replication for file serving across the organisation. The estimated data volume is over 2 TB, and will grow at approximately 20% annually. The idea is to set up a DFS file server in each building and use DFS replication so that all the contents stay replicated over the 100-Mbps link. We are now considering backup hardware and strategies. We are Dell customers and, after browsing the online Dell catalogue, I can see a number of backup hardware options. My main doubts are the following: Would you go for a tape library, disk backup, or are there other options worth considering? Would you perform batch backups (e.g. nightly) or would you use continuous backup (i.e. while users are working)? Would you use a dedicated backup server to which the tape library (or other backup device) is attached, or is there an alternative way of doing things? My experience with backup hardware and overall setup is limited, so I appreciate any good piece of advice that you may have. Thanks.

    Read the article

  • How long will a USB key with an OS installed on it last?

    - by Xananax
    I've heard numerous times that installing an OS on a USB key is a bad thing to do, as USB flash drives can handle only a certain number of writes before dying, and installing an OS on one will wear it out (unless it's used sporadically, for rescue purposes). Nonetheless, I am very tempted to install some flavour of Linux (Ubuntu or Arch, I haven't decided yet) on a small, transportable USB key. My problem is, although you read a lot that it's "bad", you are never told how bad. How long would it last (provided, say, a PC that is on 24/7)? A month? A year? Five years? Are there recipes to make it last longer? Is there any reason besides wear that should prevent me from attempting this? I mean, if it can be calculated, then I could theoretically shield myself by doing regular backups onto another key when the deadline gets close (for example). Notes: I am not talking about using a USB key as a live CD, but actually installing the OS on it. When I say "USB key", I refer to the little USB sticks with flash memory, not an external USB hard drive. For the curious, my reason is that I work in a lot of different places, on different PCs, and I have a very customized session, with my own WM, my own key bindings, my own scripts, a selection of plugins for Firefox and Chrome, etc., and currently I am synchronizing all this through a mix of Dropbox, git, and transporting files on USB keys, and it's becoming a chore. It would be much simpler for me to just plug in the USB key, mount the hard disk of the PC I am using, and use its processing power without actually needing to install any OS on that machine.

    Read the article

  • Debugging a Drobo that chokes Windows 7x64 When Plugged In

    - by Pridkett
    I've had a love-hate relationship with my Drobo for a long time. After two years of using it on a Linux box, I moved it over to a Windows 7 machine where it seemed to work just fine for a long time, but it was under very light usage: mainly backups that never actually happened. Recently I began using it for additional backup services (through CrashPlan, which is great). This means the Drobo gets a lot more usage. It also means that something interesting happens: the Drobo can choke my system on startup. Here's what I mean. Start computer without Drobo plugged in, CrashPlan and Drobo Dashboard services disabled: 105s. Start computer with Drobo plugged in, CrashPlan disabled, Drobo Dashboard enabled: 250s (and one CPU at 100% for a very long time, Drobo churning). Start computer with Drobo plugged in, CrashPlan and Drobo Dashboard disabled: 250s (one CPU at 100% for a very long time, Drobo churning). Start computer with Drobo plugged in, CrashPlan and Drobo Dashboard enabled: 300s (one CPU at 100% for a very long time, Drobo churning). If I yank the USB plug on the Drobo, the CPU usage goes down to nothing very quickly. The slow startup in the fourth scenario is because CrashPlan is trying desperately to load stuff up on the H: drive before it gives up, so I've disabled it for the time being. So here's my question: what the heck is going on when I plug the Drobo in? I've fired up Process Explorer and see that the System process is hogging the CPU; specifically it's an ntoskrnl.exe/KdPollBreakIn thread that's going ape. Is this something that's wrong with the Drobo? Windows? Any idea on how to find out? If it matters, here's the tech info: Athlon 64 X2 4400, 2GB RAM, Win7 Ultimate, Drobo USB (2x1TB, 2x320GB).

    Read the article

  • What is a hardware-id?

    - by Rob
    Some forums that I regularly visit sell premium programs, and to prevent them from being leaked they use hardware-id authentication. That is, first they send you a program to run to grab your HWID, you tell them your HWID, they store it in a database, then they send you the actual program. If your HWID isn't in the database, the program won't run. So what is Hardware-ID, and how is it generated? Why is it that my HWID is different depending on the programmer that sends me a HWID-grabber?
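    There is no single, standard "hardware ID": each vendor's grabber hashes together whichever hardware identifiers it decides to query (CPU ID, motherboard or BIOS serial, MAC address, volume serial, and so on), which is exactly why two different tools report two different IDs for the same machine. A rough sketch of the kind of thing such a tool might do; the WMI properties and the hashing scheme here are illustrative, not any particular vendor's recipe (requires a reference to System.Management):

        using System;
        using System.Linq;
        using System.Management;
        using System.Security.Cryptography;
        using System.Text;

        static class HwidSketch
        {
            static string Query(string wmiClass, string property)
            {
                // Read one property from the first instance of a WMI class.
                using (var searcher = new ManagementObjectSearcher(
                    "SELECT " + property + " FROM " + wmiClass))
                {
                    foreach (ManagementObject mo in searcher.Get())
                        return Convert.ToString(mo[property]);
                }
                return string.Empty;
            }

            public static string GetHardwareId()
            {
                // Which components go into the fingerprint is the vendor's choice;
                // CPU ID, motherboard serial and BIOS serial are common picks.
                string raw = Query("Win32_Processor", "ProcessorId")
                           + Query("Win32_BaseBoard", "SerialNumber")
                           + Query("Win32_BIOS", "SerialNumber");

                // Hash so the published ID doesn't expose the raw serial numbers.
                using (var sha = SHA1.Create())
                {
                    byte[] digest = sha.ComputeHash(Encoding.UTF8.GetBytes(raw));
                    return string.Concat(digest.Select(b => b.ToString("X2")));
                }
            }
        }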

    Read the article

  • Is it Possible to Query Multiple Databases with WCF Data Services?

    - by Mas
    I have data being inserted into multiple databases with the same schema. The multiple databases exist for performance reasons. I need to create a WCF service that a client can use to query the databases. However from the client's point of view, there is only 1 database. By this I mean when a client performs a query, it should query all databases and return the combined results. I also need to provide the flexibility for the client to define its own queries. Therefore I am looking into WCF Data Services, which provides the very nice functionality for client specified queries. So far, it seems that a DataService can only make a query to a single database. I found no override that would allow me to dispatch queries to multiple databases. Does anyone know if it is possible for a WCF Data Service to query against multiple databases with the same schema?
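    Out of the box a DataService<T> wraps a single data source, but the reflection provider only requires a class exposing IQueryable<T> properties, so one blunt option is a wrapper context whose properties pull from every database and merge the results; the catch is that any $filter or $orderby the client sends is then evaluated over the merged in-memory sequence rather than pushed down to each database. A rough sketch, assuming a hypothetical Order entity and a per-database loader that you would implement with your real data access code:

        using System;
        using System.Collections.Generic;
        using System.Data.Services;
        using System.Data.Services.Common;
        using System.Linq;

        [DataServiceKey("Id")]
        public class Order
        {
            public int Id { get; set; }
            public string Customer { get; set; }
        }

        // Hypothetical combined context: one IQueryable property per entity set,
        // merging rows from every physical database that shares the schema.
        public class CombinedContext
        {
            private static readonly string[] ConnectionStrings =
            {
                "name=ShardOne", "name=ShardTwo"   // illustrative connection string names
            };

            // LoadOrders would run the real per-database query (EF, LINQ to SQL, ADO.NET...).
            private static IEnumerable<Order> LoadOrders(string connectionString)
            {
                throw new NotImplementedException("query one database here");
            }

            public IQueryable<Order> Orders
            {
                get
                {
                    // Merge all databases; client-specified filtering and ordering is
                    // then applied to this in-memory sequence.
                    return ConnectionStrings.SelectMany(LoadOrders).AsQueryable();
                }
            }
        }

        public class CombinedDataService : DataService<CombinedContext>
        {
            public static void InitializeService(DataServiceConfiguration config)
            {
                config.SetEntitySetAccessRule("Orders", EntitySetRights.AllRead);
                config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
            }
        }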

    Read the article

  • Getting started with webserver clustering.

    - by Ernie
    I work for a small ISP, and we host about 250 domains and all the stuff that goes along with that: DNS, mail, spam filtering, and backups. Currently, we have separate DNS servers (two of them) and mail servers (outgoing mail is actually on the secondary DNS server, but was previously on its own server). In the past, this was done as an insurance measure. The last thing we need is for some doofus (usually yours truly) to hose a server, taking out DNS and mail right along with it, or for spammers to jam our incoming SMTP server, preventing outgoing mail from being sent too. In the past, this was a problem, and our servers were set up the way they are now to combat it. However, clustering solutions like Sun's Cobalt RAQ (in days of olde) and Virtualmin appear to cater to an all-in-one approach, then deal with failures through redundant servers. I have avoided this thus far, but we've been using Virtualmin on our web server for a while now, and I'd like to expand into using it for a high availability cluster. Our networking partner has recently built a datacenter that has eliminated all of our other bugaboos like network, cooling, and power issues, so now the only thing left to go wrong is me hosing a server, which happened earlier this month. One of the bigger reasons we've avoided going this route is because our hardware requirements aren't particularly high. One server easily handles all the sites we host (most of them are flat sites). Also, load-balancing routers tend to be expensive and complicated. All that I'm really expecting to do is building a two-node cluster for redundancy so that when I hose a server (however rare that might be), we're not out for 8-12 hours while I rebuild it. What I need to know is how to get started, and if I'm really in a position to bother with this kind of thing at all.

    Read the article

  • Hierarchical data in Linq - options and performance

    - by Anthony
    I have some hierarchical data - each entry has an id and a (nullable) parent entry id. I want to retrieve all entries in the tree under a given entry. This is in a SQL Server 2005 database. I am querying it with LINQ to SQL in C# 3.5. LINQ to SQL does not support Common Table Expressions directly. My choices are to assemble the data in code with several LINQ queries, or to make a view on the database that surfaces a CTE. Which option (or another option) do you think will perform better when data volumes get large? Is SQL Server 2008's HierarchyId type supported in Linq to SQL?
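    If the tree is small enough to pull into memory, one option that avoids CTEs entirely is to fetch the relevant rows in a single query and assemble the subtree with a lookup keyed on parent id; a rough sketch, with Entry standing in for the generated LINQ to SQL entity:

        using System.Collections.Generic;
        using System.Linq;

        public class Entry
        {
            public int Id { get; set; }
            public int? ParentId { get; set; }
        }

        public static class TreeHelper
        {
            // Returns all descendants of the given entry from a flat list of entries
            // (one query against the database, then a breadth-first walk in memory).
            public static List<Entry> Descendants(IEnumerable<Entry> allEntries, int rootId)
            {
                var byParent = allEntries
                    .Where(e => e.ParentId.HasValue)
                    .ToLookup(e => e.ParentId.Value);

                var result = new List<Entry>();
                var queue = new Queue<int>();
                queue.Enqueue(rootId);

                while (queue.Count > 0)
                {
                    int current = queue.Dequeue();
                    foreach (var child in byParent[current])
                    {
                        result.Add(child);
                        queue.Enqueue(child.Id);
                    }
                }
                return result;
            }
        }

    The other route mentioned above, a view (or table-valued function) wrapping a recursive CTE and mapped into the DataContext, keeps the recursion on SQL Server and will usually hold up better as volumes grow.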

    Read the article

  • Connecting to external MySQL DB from a web server not running MySQL

    - by jrb04c
    While I've been working with MySQL for years, this is the first time I've run across this very newbie-esque issue. Due to a client demand, I must host their website files (PHP) on an IIS server that is not running MySQL (instead, they are running MSSQL). However, I have developed the site using a MySQL database which is located on an external host (Rackspace Cloud). Obviously, my mysql_connect function is now bombing because MySQL is not running on localhost. Question: is it even possible to hit an external MySQL database if localhost is not running MySQL? Apologies for the rookie question, and many thanks in advance.

    Read the article

  • AJAX, PHP, XML, and cascading drop-down lists

    - by Dave Jarvis
    What PHP libraries would you recommend to implement the following: Three dependent drop-down lists Three XML data sources AJAX-based Essentially, I'd like to create an XML database and wire up a form that allows the user to select three different dependent parameters: User clicks Region User clicks District (filtered by Region) User clicks Station (filtered by District) Even though I would like to use PHP and XML, the general problem is: One XHTML form Three dependent, cascading drop-down lists Three flat files (no relational database) for the list data The solution must be efficient, simple, reliable, and cross-browser. What technologies would you recommend to solve the problem? Thank you!

    Read the article

  • TypeInitializationException When Getting an NHibernate Session

    - by Paul Johnson
    I've run into what appears to be an NHibernate config problem. Basically, I ran up a simple proof-of-concept persistence integration test using NUnit; the test simply queries an Oracle database and successfully returns the last record received by the underlying table. However, when the assemblies are taken out of the NUnit test environment and deployed as they would be for an actual application build, my call for an NHibernate session results in a 'TypeInitializationException' whilst executing the code line: sessionFactory = New Configuration().Configure().BuildSessionFactory(). The application is a VB.NET console app running against an Oracle 9.2 database, using a 'coding framework' published on the web by Bill McCafferty entitled 'NHibernate Best Practices with ASP.NET' (pre S#harp Architecture). I am running version 2.1.2.4000 of NHibernate. Any assistance much appreciated. Kind regards, Paul J.
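    One thing that often helps with this symptom: TypeInitializationException is just the wrapper thrown when a static initializer (such as the session manager in that coding framework) fails, and the real problem sits in InnerException, typically a missing or unreadable hibernate.cfg.xml / config section, or a driver or dialect assembly that didn't get deployed next to the executable. A sketch of the idea of probing it, written in C# here; the VB translation is mechanical:

        using System;
        using NHibernate;
        using NHibernate.Cfg;

        static class ConfigProbe
        {
            static void Main()
            {
                try
                {
                    // Same call the session manager makes, but outside any static
                    // initializer, so the original exception is not wrapped in a
                    // TypeInitializationException.
                    ISessionFactory factory = new Configuration().Configure().BuildSessionFactory();
                    Console.WriteLine("Session factory built OK.");
                }
                catch (Exception ex)
                {
                    // Walk the chain; the innermost message usually names the
                    // missing file, section or assembly.
                    for (Exception e = ex; e != null; e = e.InnerException)
                        Console.WriteLine(e.Message);
                }
            }
        }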

    Read the article

  • Hard drive had reallocated sectors...but now it magically doesn't! Can I trust it?

    - by rob
    Last week my SMART diagnostics utility, CrystalDiskInfo, reported that the external hard drive that I was saving my backups to had suddenly reported 900+ reallocated sectors. I double-checked to confirm, then ordered a replacement drive. I spent all of this week copying data from that drive to the new drive. But toward the end of the copy, something peculiar happened. CrystalDiskInfo popped up an alert that the reallocated sector count had gone back down to 0. I know that when SMART detects a read error on a block, it adds that block to the current pending reallocation list. If it subsequently is successfully written or read later, it is removed from the list and assumed to be fine, but if a subsequent write fails, it is marked bad and added to the reallocated sector count. What concerns me most is that I've never read anywhere that a sector can be recovered as "good" after it has been marked as a bad sector and remapped. I've just finished running an extended SMART diagnostic, and it found no surface errors. Now I'm doubtful that the manufacturer will honor a warranty claim if the SMART info does not report any problems. Has anyone had this happen? If so, then is the drive, indeed, okay, or should I be concerned about an imminent failure?

    Read the article

  • What is considered a long execution time?

    - by stjowa
    I am trying to figure out just how "efficient" my server-side code is. Using start and end microtime(true) values, I am able to calculate the time it took my script to run. I am getting times from 0.3 to 0.5 seconds. These scripts do a number of database queries to return different values to the user. What is considered an efficient execution time for PHP scripts that will be run online for a website? Note: I know it depends on exactly what is being done, but just consider this a standard script that reads from a database and returns values to the user. Also, I look at Google and see them search the internet in 0.15 seconds and I feel like my script is crap. Thanks.

    Read the article

  • Recording user data for heatmap with javascript

    - by Hanpan
    Hi, I was wondering how sites such as crazyegg.com store user click data during a session. Obviously there is some underlying script which is storing each click's data, but how is that data then populated into a database? It seems to me the simple solution would be to send the data via AJAX, but when you consider that it's almost impossible to get a cross-browser page unload function set up, I'm wondering if there is perhaps some other, more advanced way of getting metric data. I even saw a site which records each mouse movement, and I am guessing they are definitely not sending that data to a database on each mouse move event. So, in a nutshell, what kind of technology would I need in order to monitor user activity on my site and then store this information in order to create metric data? I am not looking to recreate GA, I'm just very interested to know how this sort of thing is done. Thanks in advance

    Read the article
