Search Results

Search found 31356 results on 1255 pages for 'database backups'.

  • Windows 7 home backup solution, with offsite provision

    - by Richard E
    I am looking for a home backup solution for my single Windows 7 (Home Premium) PC. I have about 500GB of data to back up, and I would like to spend less than GBP 300 on the solution. I don't see the need to back up the whole PC, rather specific folder branches (iTunes, photos, documents, Outlook files, user folders such as desktop, favorites, etc.). I would like a solution that enables me to maintain backups in two separate physical locations (e.g. home and work). To facilitate this I am imagining a storage unit with slots for two removable drives, along with three separate drives. At any one time, two of the drives will be in the storage unit receiving backups. The third will be located at my work. Periodically I will take one of the drives into work and leave it there, then bring the drive that was there back home and plug it into the storage unit, where it will be backed up alongside the drive that was left behind. This approach should cover scenarios such as virus attack and fire or theft at one location. Thoughts and comments on the sanity of this approach, please...

    Read the article

  • AJAX, PHP, XML, and cascading drop-down lists

    - by Dave Jarvis
    What PHP libraries would you recommend to implement the following?

      - Three dependent drop-down lists
      - Three XML data sources
      - AJAX-based

    Essentially, I'd like to create an XML database and wire up a form that allows the user to select three different dependent parameters:

      1. User clicks Region
      2. User clicks District (filtered by Region)
      3. User clicks Station (filtered by District)

    Even though I would like to use PHP and XML, the general problem is: one XHTML form; three dependent, cascading drop-down lists; three flat files (no relational database) for the list data. The solution must be efficient, simple, reliable, and cross-browser. What technologies would you recommend to solve the problem? Thank you!
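
    A minimal client-side sketch of the cascading behaviour, independent of any particular PHP library (the endpoint URL, parameter name, and JSON response shape are illustrative assumptions, not part of the question):

        // When the Region changes, fetch the matching Districts and rebuild the list.
        const regionSelect = document.getElementById('region');
        const districtSelect = document.getElementById('district');

        regionSelect.addEventListener('change', async () => {
          const response = await fetch(
            'districts.php?region=' + encodeURIComponent(regionSelect.value));
          const districts = await response.json(); // e.g. ["North", "South"]

          districtSelect.innerHTML = ''; // clear the dependent list
          for (const name of districts) {
            const option = document.createElement('option');
            option.value = name;
            option.textContent = name;
            districtSelect.appendChild(option);
          }
        });

    The District-to-Station step is the same pattern one level down; on the server, each endpoint only has to filter one flat XML file by the parent value.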

    Read the article

  • Help with Linked Server Error

    - by Randy Minder
    In SSMS 2008, I am trying to execute a stored procedure in a database on another server. The call looks something like the following:

        EXEC [RemoteServer].Database.Schema.StoredProcedureName @param1, @param2

    The linked server is set up correctly, and has both RPC and RPC OUT set to true. Security on the linked server is set to "Be made using the login's current security context". When I attempt to execute the stored procedure, I get the following error:

        Msg 18483, Level 14, State 1, Line 1
        Could not connect to server 'RemoteServer' because '' is not defined
        as a remote login at the server. Verify that you have specified the
        correct login name.

    I am connected to the local server using Windows Authentication. Anyone know why I would be getting this error?
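
    One thing worth checking: with "current security context", the remote server must be able to authenticate the local login (for Windows Authentication across two servers this typically requires Kerberos delegation). A hedged workaround sketch is to map the local login explicitly with sp_addlinkedsrvlogin; the login names and password below are placeholders:

        -- Map a specific local login to a SQL login on the remote server.
        EXEC sp_addlinkedsrvlogin
            @rmtsrvname  = N'RemoteServer',
            @useself     = N'FALSE',
            @locallogin  = N'DOMAIN\YourWindowsLogin',  -- placeholder
            @rmtuser     = N'remote_sql_user',          -- placeholder
            @rmtpassword = N'remote_password';          -- placeholder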

    Read the article

  • SQL Server 2008 Management Studio doesn't recognize new Schema

    - by Lieven Cardoen
    I have created a new schema, called Contexts, in a database. Now when I want to write a query, Management Studio doesn't recognize the tables that belong to the new schema. It says 'Invalid object name Contexts.ContextLibraries'. The Transact-SQL:

        INSERT INTO [Contexts].[ContextLibraries] (ChannelId, [IsSystem])
        VALUES (@ChannelId, 1)

    When I try the same thing on my local database, it does work... Any ideas? I did try to change the default schema for the user from dbo to Contexts, but this doesn't work. I also checked Contexts under "Schemas owned by this user", without success. Update: apparently the SQL statement itself executes fine; only the editor flags the object name as invalid.

    Read the article

  • Limit the amount of data that can be stored in a folder on Ubuntu Server 12.04?

    - by dougoftheabaci
    I'm in the process of building my first server. It's up, it's running, and I'm transferring copious amounts of data away from my horrid little Drobo (DO NOT BUY ONE OF THESE, EVER). However, there's one thing I have yet to do: I'd like to set it up for Time Machine backups as well. I've seen all the guides and I have some idea of how to set the whole thing up, but the issue is that Time Machine will fill up as much space as you let it. So if I let it loose in my 8 TB zpool it'll slowly consume every last available sector. This, of course, is not acceptable. I have a folder at the root of my zpool called "ZFS Time Machine" and I would like to limit it to 1 TB (all I need for backup purposes). However, I have no idea how to do that. Is this possible? I can continue using a small external hard drive attached via FW800 if I have to, but I'd much prefer putting everything on my server.
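
    If the folder is made a ZFS dataset of its own rather than a plain directory, ZFS can cap it directly with a quota. A minimal sketch; the pool/dataset name below is a placeholder, not the poster's actual pool:

        # Create a dedicated dataset for Time Machine and cap it at 1 TB.
        sudo zfs create tank/timemachine
        sudo zfs set quota=1T tank/timemachine

        # Confirm the limit took effect.
        zfs get quota tank/timemachine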

    Read the article

  • Sharepoint Foundation 2010 development single machine installation problems

    - by Robert Koritnik
    I'm having problems installing a development machine for SharePoint (Foundation) 2010. This is what I did so far, all on the same machine:

      1. Installed a clean Windows 7 x64 with 4GB of RAM, without being part of any domain. Just a simple standalone machine.
      2. Enabled the IIS-related features as described here, except the IIS6-related ones (two of them).
      3. Installed SQL Server 2008 R2 Development Edition (DB Engine and Writer enabled, but not SQL Agent).
      4. Installed Visual Studio 2010 Premium.
      5. Started installing SharePoint Foundation 2010: first extracting the files, changing the config to enable installation on Windows 7, and then installing it as Server Farm (then Complete) to avoid installing SQL Express.
      6. Created a separate SPF_CONFIG local user with the "Log on as a service" right.
      7. Opened the SPF Management Shell and ran New-SPConfigurationDatabase so I am able to use a non-domain username (the SPF_CONFIG user created in the previous step).

    But all I get is an error (the screenshot is not preserved in this listing). The outcome after this error is:

      - The database Sharepoint2010Config is created.
      - The SPF_CONFIG user is added to SQL Server and attached to this newly created database as dbowner; checking the SQL Server security logins, this user has the following rights: dbcreator, securityadmin, public.

    Read the article

  • Invoking active_record error - can not load file in Ruby on Rails

    - by user1623624
    When I try to run rails generate scaffold test, the following error always shows:

        C:\Lab\railapps\dbtest>rails generate scaffold test
              invoke  active_record
        C:/RailsInstaller/Ruby1.9.3/lib/ruby/gems/1.9.1/gems/activesupport-3.2.1/lib/active_support/dependencies.rb:251:in
        `require': Please install the oracle_enhanced_adapter: `gem install activerecord-oracle_enhanced-adapter`
        (cannot load such file -- active_record/connection_adapters/oracle_enhanced_adapter) (LoadError)
                from C:/RailsInstaller/Ruby1.9.3/lib/ruby/gems/1.9.1/gems/activesupport-3.2.1/lib/active_support/dependencies.rb:251:in
        `block in require'

    I did install the oci8 gem and then activerecord-oracle_enhanced-adapter. Can you help by having a look? Thanks a lot. Version information:

        C:\Lab\railapps\dbtest>gem list ruby-oci8

        *** LOCAL GEMS ***

        ruby-oci8 (2.1.2 ruby x86-mingw32, 2.0.6)

        C:\Lab\railapps\dbtest>gem list activerecord-oracle_enhanced-adapter

        *** LOCAL GEMS ***

        activerecord-oracle_enhanced-adapter (1.4.1)

    database.yml, development section:

        development:
          adapter: oracle_enhanced
          database: cvrman.cablevision.com
          username: ruby
          password: ruby
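
    Since both gems show up in gem list but Rails still cannot load the adapter, one likely culprit is the application's Gemfile: under Bundler, only gems declared there are loadable by the app, regardless of what is installed locally. A hedged sketch of the entries that would be needed (the version constraints are illustrative):

        # Gemfile -- Bundler only loads what is declared here.
        gem 'ruby-oci8', '~> 2.1'
        gem 'activerecord-oracle_enhanced-adapter', '~> 1.4.1'

    followed by a bundle install and re-running the generator.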

    Read the article

  • Eclipse plugins for Spring / Hibernate development?

    - by es11
    I have a running dynamic web project in Eclipse (Java EE + Maven + Spring). I am at the point where I need to integrate a persistence layer and want to use Hibernate with a MySQL database. I am wondering which plugins would be useful for me at this point. For Hibernate, should I install Hibernate Tools, or is that not necessary? Are there any plugins that are most widely used for connecting to and exploring databases that would be appropriate for the type of project I am working on? Thanks.

    Read the article

  • Easiest way to retrofit retry logic on LINQ to SQL migration to SQL Azure

    - by Pat James
    I have a couple of existing ASP .NET web forms and MVC applications that currently use LINQ to SQL with a SQL Server 2008 Express database on a Windows VPS: one VPS for both IIS and SQL. I am starting to outgrow the VPS's ability to effectively host both SQL and IIS and am getting ready to split them up. I am considering migrating the database to SQL Azure and keeping IIS on the VPS. After doing initial research it sounds like implementing retry logic in the data access layer is a must-do when adopting SQL Azure. I suspect this is even more critical to implement in my situation where IIS will be on a VPS outside of the Azure infrastructure. I am looking for pointers on how to do this with the least effort and impact on my existing code base. Is there a good retry pattern that can be applied once at the LINQ to SQL data access layer, as opposed to having to wrap all of my LINQ to SQL operations in try/catch/wait/retry logic?
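
    As a sketch of the kind of wrapper that can be applied once at the data access layer, hand-rolled rather than any particular library, with the transient-error check deliberately simplified (production code should inspect SqlException.Number and retry only transient failures):

        // Minimal retry-with-backoff helper for LINQ to SQL operations.
        using System;
        using System.Data.SqlClient;
        using System.Threading;

        public static class SqlRetry
        {
            public static T Execute<T>(Func<T> operation, int maxAttempts = 3)
            {
                for (int attempt = 1; ; attempt++)
                {
                    try
                    {
                        return operation();
                    }
                    catch (SqlException)
                    {
                        if (attempt >= maxAttempts) throw;
                        // Simple linear backoff before the next attempt.
                        Thread.Sleep(TimeSpan.FromSeconds(attempt));
                    }
                }
            }
        }

    Existing queries then only need their evaluation wrapped, e.g. var active = SqlRetry.Execute(() => db.Customers.Where(c => c.IsActive).ToList()); (the entity names here are illustrative).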

    Read the article

  • Strategy Design Pattern -- *dynamic* !!!

    - by alexeypro
    My application will have different strategies for my objects. What's the best way of implementing that? I would really love it if the strategy class implementations could be loaded dynamically from, say, a relational database. I'm not sure how best to do that, though. What's the best approach? The idea is that if we want to apply strategy Strategy123 to object MyObj, we just load the object from the database by ID 123, deserialize it, get the Strategy class, and use it with MyObj. While the maintenance sounds easier at first glance, it can be a pain in the long run if the Strategy interface changes, etc. What else can I do? I also want to consider a solution where I keep the Strategy classes in the codebase, the goal being that I don't need a code change and redeployment of the application if my Strategy changes, or I add a new strategy. Please advise!
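
    One minimal sketch of the lookup-by-id idea, written in Java for concreteness (the table layout, interface, and class names are all illustrative assumptions): store only a fully qualified class name per strategy id and instantiate it by reflection, so the classes themselves stay in the codebase.

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import javax.sql.DataSource;

        // The Strategy contract stays in the codebase.
        public interface Strategy {
            void apply(Object target);
        }

        // Implementations are resolved at runtime from a name stored in the database.
        public final class StrategyLoader {
            private final DataSource dataSource;

            public StrategyLoader(DataSource dataSource) {
                this.dataSource = dataSource;
            }

            public Strategy load(int strategyId) throws Exception {
                String className;
                try (Connection c = dataSource.getConnection();
                     PreparedStatement ps = c.prepareStatement(
                             "SELECT class_name FROM strategies WHERE id = ?")) {
                    ps.setInt(1, strategyId);
                    try (ResultSet rs = ps.executeQuery()) {
                        if (!rs.next()) {
                            throw new IllegalArgumentException("Unknown strategy: " + strategyId);
                        }
                        className = rs.getString(1);
                    }
                }
                // Instantiate by reflection; the named class must implement Strategy.
                return (Strategy) Class.forName(className)
                                       .getDeclaredConstructor()
                                       .newInstance();
            }
        }

    Storing a class name rather than a serialized object keeps the versioning problem smaller: the interface is the one thing that must stay stable.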

    Read the article

  • Can you update/add records in SQL using a datagridview and LINQ to SQL

    - by Jordan S
    Is it possible to bind a DataGridView to a LINQ to SQL class so that when I make changes to the records in the DataGridView it automatically updates the SQL database? I have tried binding the data like this, but if I make changes to the data in the DataGridView they do not actually affect the data in the database...

        BOMClassesDataContext DB = new BOMClassesDataContext();
        var mfrs = from m in DB.Manufacturers
                   select m;
        BindingSource bs = new BindingSource();
        bs.DataSource = mfrs;
        dataGridView1.DataSource = bs;
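
    LINQ to SQL tracks edits in the DataContext but writes nothing to the database until SubmitChanges() is called, and a raw query result is not an editable list. A sketch of the usual two adjustments, reusing the names from the question (the Save handler is an illustrative assumption):

        BOMClassesDataContext DB = new BOMClassesDataContext();

        // Table<T>.GetNewBindingList() returns an IBindingList the grid can edit.
        BindingSource bs = new BindingSource();
        bs.DataSource = DB.Manufacturers.GetNewBindingList();
        dataGridView1.DataSource = bs;

        // Elsewhere in the form: edits reach SQL Server only when submitted.
        private void saveButton_Click(object sender, EventArgs e)
        {
            DB.SubmitChanges();
        }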

    Read the article

  • AJAX call in a continuously loop?

    - by Mestika
    Hi, I want to create some kind of AJAX script or call that continuously checks a MySQL database for newly arrived messages. When there is a new message in the database, the AJAX script should invoke some kind of alert or message box. I'm not quite an AJAX expert (yet, anyway) and have Googled around for a solution, but I'm having a hard time figuring out where to begin. I imagine it is the same method an AJAX chat uses to see whether any new chat message has been sent. I've also tried searching for making an AJAX (XMLHttpRequest) call in a continuous, infinite loop, but still haven't got a solution. I hope there is someone who can help me with such an AJAX script, or maybe nudge me in the right direction. Thanks. Sincerely, Mestika
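
    The usual technique is not a literal infinite loop but a timed poll: ask the server, handle the answer, then schedule the next check with setTimeout. A minimal sketch; the endpoint URL and the JSON shape it returns are illustrative assumptions:

        var lastSeenId = 0;

        function checkForMessages() {
          var xhr = new XMLHttpRequest();
          xhr.open('GET', 'check_messages.php?since=' + lastSeenId, true);
          xhr.onload = function () {
            var data = JSON.parse(xhr.responseText); // e.g. {"count": 2, "lastId": 17}
            if (data.count > 0) {
              alert('You have ' + data.count + ' new message(s)!');
              lastSeenId = data.lastId;
            }
            // Schedule the next poll only after this one finishes,
            // so slow responses never pile up.
            setTimeout(checkForMessages, 5000);
          };
          xhr.send();
        }

        checkForMessages();

    The server-side script just queries MySQL for rows newer than the id it was given and returns the result as JSON.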

    Read the article

  • Windows Azure Platform, latest version?

    - by Vimvq1987
    I searched the internet but found nothing. The whitepapers on the Windows Azure Platform say things like this:

        In its first release, the maximum size of a single database in SQL Azure Database is 10 gigabytes.

        A few things are omitted in the technology's first release, however, such as the SQL Common Language Runtime (CLR) and support for spatial data. (Microsoft says that both will be available in a future version.)

    I want to know whether Microsoft has since updated the Windows Azure Platform and removed these limits or not. I decided to post this question here instead of Serverfault.com because it's more related to programming than administration. Thank you.

    Read the article

  • Best way to optimize queries like this in Django

    - by chris
    I am trying to lower the number of queries that my Django app is using, but I am a little confused about how to do it. I would like to get a queryset with one hit to the database and then filter items from that set. I have tried a couple of things, but I always get queries for each set. Let's say I want to get all names from my DB, but also separate out the people named Ted. Both the full name list and the Ted set will be used in the template. This gives me two sets, one with all names and one with the Teds, but it also hits the database twice:

        namelist = People.objects.all()
        tedlist = namelist.filter(name='ted')

    Is there a way to filter the first set without hitting the database again?
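
    Because querysets are lazy and .filter() builds a second SQL query, the usual trick is to evaluate the queryset once and derive the subset in Python. A minimal sketch:

        # Evaluate once: this is the only database hit.
        people = list(People.objects.all())

        # Derive the subset in memory; no further queries are issued.
        teds = [p for p in people if p.name == 'ted']

        # Pass both to the template context.
        context = {'namelist': people, 'tedlist': teds}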

    Read the article

  • What is considered a long execution time?

    - by stjowa
    I am trying to figure out just how "efficient" my server-side code is. Using start and end microtime(true) values, I am able to calculate the time it took my script to run. I am getting times of 0.3 to 0.5 seconds. These scripts do a number of database queries to return different values to the user. What is considered an efficient execution time for PHP scripts that will be run online for a website? Note: I know it depends on exactly what is being done, but just consider this a standard script that reads from a database and returns values to the user. Also, I see Google search the internet in 0.15 seconds and I feel like my script is crap. Thanks.

    Read the article

  • Hard drive had reallocated sectors...but now it magically doesn't! Can I trust it?

    - by rob
    Last week my SMART diagnostics utility, CrystalDiskInfo, warned me that the external hard drive I was saving my backups to had suddenly reported 900+ reallocated sectors. I double-checked to confirm, then ordered a replacement drive. I spent all of this week copying data from that drive to the new drive. But toward the end of the copy, something peculiar happened: CrystalDiskInfo popped up an alert that the reallocated sector count had gone back down to 0. I know that when SMART detects a read error on a block, it adds that block to the current pending reallocation list. If the block is subsequently written or read successfully, it is removed from the list and assumed to be fine; but if a subsequent write fails, it is marked bad and added to the reallocated sector count. What concerns me most is that I've never read anywhere that a sector can be recovered as "good" after it has been marked as a bad sector and remapped. I've just finished running an extended SMART diagnostic, and it found no surface errors. Now I'm doubtful that the manufacturer will honor a warranty claim if the SMART info does not report any problems. Has anyone had this happen? If so, is the drive indeed okay, or should I be concerned about an imminent failure?

    Read the article

  • Why is ContextConfiguration location different in idea and eclipse

    - by jakob
    Hello experts. In my team we work in both Eclipse and IDEA. That works pretty well, except for one minor issue that I can't figure out how to solve. When setting the ContextConfiguration location in our tests and running them inside Eclipse, everything works like a charm:

        @Test(groups = { "database" })
        @ContextConfiguration(locations = { "file:src/main/webapp/WEB-INF/applicationContext.xml" })

    But in my IDEA environment I get a "could not find applicationContext" error. There I need to set the location like this (the project name is services):

        @Test(groups = { "database" })
        @ContextConfiguration(locations = { "file:services/src/main/webapp/WEB-INF/applicationContext.xml" })

    The project structure is a parent pom with two children, services.pom and other.pom. When running the test in the terminal from the services project like this:

        mvn -Dtest=com.mytest.service.somepackage.TheTest test

    there are no issues. I guess that since my project structure is parent-with-two-children, the services/ prefix becomes necessary (the project is created by pointing at the parent pom). Is there a way to fix this? Could you please help me with a solution? Thanks.
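
    Relative file: paths resolve against the working directory, and Eclipse typically launches tests with the module as the working directory while IDEA defaults to the project root, which is why the services/ prefix is needed there. One way to sidestep the difference entirely is to keep a test copy of the context under src/test/resources so it lands on the test classpath, and reference it with classpath:. A sketch, where the test-context file name is an assumption:

        // src/test/resources/applicationContext-test.xml is found in both IDEs,
        // no matter which directory the test runner starts in.
        @Test(groups = { "database" })
        @ContextConfiguration(locations = { "classpath:applicationContext-test.xml" })
        public class RepositoryIntegrationTest extends AbstractTestNGSpringContextTests {
            // ...
        }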

    Read the article

  • DDD - Validation of unique constraint

    - by W3Max
    In DDD you should never let your entities enter an invalid state. That being said, how do you handle the validation of a unique constraint? The creation of an entity is not a real problem. But let's say you have an entity that must have a unique name, and there are a thousand instances of this entity type - they are not in memory but stored in a database. Now let's say you want to rename an instance. You can't just use a setter... the object could enter an invalid state - you have to validate against the database. How do you handle this scenario in a web environment?
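
    A common answer is to treat uniqueness as an invariant of the collection rather than of a single entity, and route the rename through a domain service that consults the repository. A minimal Java sketch (the repository method and class names are illustrative):

        // Uniqueness is a set-level invariant, so a domain service checks it.
        public class RenamingService {
            private final ThingRepository repository;

            public RenamingService(ThingRepository repository) {
                this.repository = repository;
            }

            public void rename(Thing thing, String newName) {
                if (repository.existsWithName(newName)) {
                    throw new IllegalArgumentException("Name already in use: " + newName);
                }
                thing.rename(newName); // the entity still guards its local invariants
            }
        }

    Because two concurrent renames can race between the check and the commit, a unique index in the database should remain the final authority; the service check exists to give the user a friendly error in the normal case.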

    Read the article

  • Grails/Groovy taglib handling parsing dynamically inserted tags.

    - by Dan Guy
    Is there a way to have a custom taglib operate on data loaded in a .gsp file, such that it picks up any tags embedded in the data stored in the database? For instance, let's say I'm doing:

        <g:each in="${activities}">
          <li>${it.payload}</li>
        </g:each>

    And inside the payload, which is coming from the database, is text like:

        Person a did event <company:event id="15124124">Event Description</company:event>

    Can you have a taglib that handles company:event tags on the fly?

    Read the article

  • Eclipselink read complex oject model in an ordered way

    - by Raven
    Hi, I need to read a complex model in an ordered way with EclipseLink. The order is mandatory because it is a huge database and I want to display a small portion of it in a JFace table view. Trying to reorder it in the loading/querying thread takes too long, and ordering it in the LabelProvider blocks the UI thread for too long, so I thought that if EclipseLink could be used in such a way that the database does the ordering, it might give me the performance I need. Unfortunately the object model cannot be changed :-( The model is something like:

        @SuppressWarnings("serial")
        @Entity
        public class Thing implements Serializable {
            @Id
            @GeneratedValue(strategy = GenerationType.TABLE)
            private int id;

            private String name;

            @OneToMany(cascade = CascadeType.ALL)
            @PrivateOwned
            private List<Property> properties = new ArrayList<Property>();

            // getters and setters following here
        }

        @Entity
        public class Property implements Serializable {
            @Id
            @GeneratedValue(strategy = GenerationType.TABLE)
            private int id;

            @OneToOne
            private Item item;

            private String value;

            // getters and setters following here
        }

        @Entity
        public class Item implements Serializable {
            @Id
            @GeneratedValue(strategy = GenerationType.TABLE)
            private int id;

            private String name;

            // getters and setters following here
        }

    In the table view, the y-axis is more or less created with the query

        Query q = em.createQuery("SELECT m FROM Thing m ORDER BY m.name ASC");

    using the "name" attribute of the Thing objects as labels. The x-axis is more or less created with the query

        Query q = em.createQuery("SELECT m FROM Item m ORDER BY m.name ASC");

    using the "name" attribute of the Item objects as labels. Each cell has the value

        thing.getProperties().get(x).getValue()

    Unfortunately the list "properties" is not ordered, so the combination of cell value and x-axis column number (x) is not necessarily correct. Therefore I need to order the list "properties" in the same way as I ordered the labeling of the x-axis. And exactly this is the part I don't know how to do. Querying for the Thing objects should return the "properties" list ordered by name ascending, but by the name of the related Item objects. My idea is something like a query with two JOINs, joining Thing with Property and with Item, but somehow I have been unable to get it to work yet. Thank you for your help and your ideas to solve this riddle.
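
    A sketch of the two-JOIN idea in JPQL: instead of relying on the order of the mapped list, fetch the properties of one Thing directly, ordered by the related Item's name, so the ORDER BY runs in the database (the parameter name is illustrative):

        // Properties of one Thing, ordered like the x-axis (by Item name).
        TypedQuery<Property> q = em.createQuery(
            "SELECT p FROM Thing t JOIN t.properties p JOIN p.item i " +
            "WHERE t.id = :thingId ORDER BY i.name ASC",
            Property.class);
        q.setParameter("thingId", thing.getId());
        List<Property> ordered = q.getResultList();

    Each row of the table view can then be filled from this list, whose index positions line up with the Item-ordered x-axis.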

    Read the article

  • How long will a USB key with an OS installed on it last?

    - by Xananax
    I've heard numerous times that installing an OS on a USB key is a bad thing to do, as USB keys typically survive only a certain number of writes before dying, and installing an OS on one will wear it out (unless it's used sporadically for rescue purposes). Nonetheless, I am very tempted to install some flavour of Linux (Ubuntu or Arch, I haven't decided yet) on a small, transportable USB key. My problem is, although you read a lot that it's "bad", you are never told how bad. How long would it last (provided, say, a PC that is on 24/7)? A month? A year? Five years? Are there recipes to make it last longer? Is there any reason besides wear that should prevent me from attempting this? I mean, if it can be calculated, then I could theoretically shield myself by doing regular backups on another key when the deadline gets close (for example). Notes: I am not talking about using a USB key as a live CD, but actually installing the OS on it. When I say "USB key", I refer to the little USB sticks with flash memory, not an external USB hard drive. For the curious, my reason is that I work in a lot of different places, on different PCs, and I have a very customized session, with my own WM, my own key bindings, my own scripts, a selection of plugins for Firefox and Chrome, etc., and currently I am synchronizing all this through a mix of Dropbox, git, and transporting files on USB keys, and it's becoming a chore. It would be much simpler for me to just plug in the USB key, mount the hard disk of the PC I am using, and use its processing power without actually needing to install any OS on it.

    Read the article

  • MySQL encoding problem after site move

    - by Quan Zhou
    Guys, I need your help. Last month my friend lost his database on Dreamhost, so he decided to move his WordPress-based blog site (written in Chinese) to my server. He's using a wp-plugin called wp-db-backup to perform regular db backups. The servers' backgrounds are:

    Dreamhost:

        Linux 2.6.31.5-modsign-aufs2-grsec-2-opt
        mysql Ver 14.12 Distrib 5.0.16, for pc-linux-gnu (i386) using readline 5.0
        apache2, unknown version

    My server:

        Linux li159-46 2.6.32.12-x86_64-linode12
        mysql Ver 14.14 Distrib 5.1.45, for debian-linux-gnu (x86_64) using readline 6.1
        nginx 0.8.36

    His site's encoding was UTF-8 in both wp-config and the db. I imported his db backup file as UTF-8 by default, then I sync'd the files using rsync from Dreamhost, then I just changed the db address and nothing more. But when I took a first look at the "new" site, it was full of unreadable characters. I have met this problem before, and I tried changing the charset options in the browser, but none of them made it display properly. Then I converted his db to GB18030; that works, but only if the browser charset is set to GB18030 or GBK, while by default browsers recognize the charset as UTF-8. I tried to edit the headers, but it doesn't work. What can I do now? Thx~~
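
    Since both ends were nominally UTF-8, a frequent cause of exactly this pattern is the dump being imported over a connection whose default charset differed from the one it was written with, so the bytes were reinterpreted on the way in. A hedged sketch of re-importing with the connection charset pinned (the user, database, and file names are placeholders):

        # Re-import the backup with the client/connection charset forced to UTF-8.
        mysql --default-character-set=utf8 -u user -p blog_db < backup.sql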

    Read the article

  • Repair bad character due to encoding problem

    - by remi bourgarel
    Hi all, recently we had an encoding problem in our system: if we had the string "æ" in our db, it became "æ" on our web pages. That problem is now solved, but we are left with a lot of "æ" in our database: users didn't see the mangled characters and validated pre-filled forms containing them. I found that if you read the bytes C3 A6 as UTF-8 you'll get "æ", while if you read them as a single-byte encoding such as Latin-1 (they are outside the plain ASCII range) you'll get "æ". It's strange, because if I execute

        select convert(varbinary(40), N'æ'), convert(varbinary(40), 'æ')

    I don't get the same result... Do you have any idea how I can fix my database (i.e. change all "æ" to "æ")? Thx
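
    The repair is the inverse of the corruption: encode the mojibake text back to bytes as Latin-1 (recovering the original UTF-8 byte sequence), then decode those bytes as UTF-8. A minimal sketch of the round trip in Python; the row-by-row update plumbing against the database is omitted:

        # "æ" is what the UTF-8 bytes C3 A6 (for "æ") look like when read as Latin-1.
        broken = "æ"
        fixed = broken.encode("latin-1").decode("utf-8")
        print(fixed)  # -> æ

    Values that were already clean will typically raise UnicodeDecodeError on the decode step, which is a convenient way to detect and skip them.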

    Read the article
