Search Results

Search found 31356 results on 1255 pages for 'database backups'.


  • Best way to find updates in xml feed

    - by misterjinx
    Hello all, I have an XML feed that I have to check periodically for updates. The XML consists of many elements, and I'm trying to figure out the best (and probably fastest) way to find out which elements have changed since the last time I checked. My idea is to first check the lastBuildDate for modifications and, if it differs from the previous one, to parse the XML again. This would involve keeping each element with all of its attributes in my database. But each element can have a different number of attributes, as well as other nested elements. So if I am to store each element in my database, what would be the best way to keep them? That's why I'm asking for your help :) Thank you.
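
    A minimal Python sketch of the approach described above, assuming an RSS-style feed with a lastBuildDate element and a guid per item (both assumptions; adjust to the actual feed). Hashing the serialized element avoids modelling every attribute and nested child:

        import hashlib
        import xml.etree.ElementTree as ET

        def check_feed(feed, last_build_seen, known_hashes):
            """feed is a path or file-like object holding the fetched XML.
            Re-parse only when lastBuildDate changes, then report which
            items differ from the copies stored earlier."""
            tree = ET.parse(feed)
            build = tree.findtext('channel/lastBuildDate')
            if build == last_build_seen:
                return build, []  # nothing new, skip the full diff
            changed = []
            for item in tree.iter('item'):
                guid = item.findtext('guid')
                # Hash the serialized element so nested children and
                # attributes are covered without modelling each field.
                digest = hashlib.md5(ET.tostring(item)).hexdigest()
                if known_hashes.get(guid) != digest:
                    changed.append(guid)
                    known_hashes[guid] = digest
            return build, changed

    Storing one row per item (guid, hash, raw XML) sidesteps the variable-attribute problem: you only model the fields you query on and keep the rest as a blob.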

    Read the article

  • How do I insert a unique key into a table?

    - by Ben McCormack
    I want to insert data into a table where I don't know the next unique key that I need. I'm not sure how to format my INSERT query so that the value of the Key field is 1 greater than the maximum value for the key in the table. I know this is a hack, but I'm just running a quick test against a database and need to make sure I always send over a unique key. Here's the SQL I have so far:

        INSERT INTO [CMS2000].[dbo].[aDataTypesTest]
                   ([KeyFld]
                   ,[Int1])
             VALUES
                   ((SELECT Max([KeyFld]) FROM [dbo].[aDataTypesTest]) + 1
                   ,1)

    which errors out with:

        Msg 1046, Level 15, State 1, Line 5
        Subqueries are not allowed in this context. Only scalar expressions are allowed.

    I'm not able to modify the underlying database table. What do I need to do to ensure a unique insert in my INSERT SQL code?
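
    A sketch of the usual workaround: feed the INSERT from a SELECT instead of a VALUES list, since SELECT may compute the scalar that VALUES is not allowed to. Shown here against SQLite from Python just to keep the example self-contained; the statement shape is the same in T-SQL:

        import sqlite3

        conn = sqlite3.connect(':memory:')
        conn.execute('CREATE TABLE aDataTypesTest (KeyFld INTEGER, Int1 INTEGER)')
        conn.execute('INSERT INTO aDataTypesTest VALUES (1, 0)')

        # COALESCE makes the very first insert into an empty table work too.
        conn.execute(
            "INSERT INTO aDataTypesTest (KeyFld, Int1) "
            "SELECT COALESCE(MAX(KeyFld), 0) + 1, 1 FROM aDataTypesTest")
        print(conn.execute('SELECT * FROM aDataTypesTest').fetchall())

    Under concurrent writers MAX+1 still races, so it only "ensures" uniqueness on a single-user test database, which matches the use case above.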

    Read the article

  • Data recovery on a corrupted 3TB disk

    - by Mark K Cowan
    Short version

    I probably need software to run a deep-scan recovery (ideally on Linux) to find files on an NTFS filesystem. The file data is intact, but the references are no longer present. Analogous to recovering data from a "quick-formatted" partition. Hopefully there is a smarter way available than a deep scan, one which would recover filenames and possibly paths.

    Long version

    I have a 3TB disk containing a load of backups. Windows 7 SP1 refused to detect the disk when plugged in directly via SATA, so I put it on a USB/SATA adaptor, which seemed to work at first. The SATA/USB adaptor probably does not support disks over 2.2TB though. Windows first asked me if I wanted to 'format' the disk, then later showed me most of the contents, but some folders were inaccessible. I stupidly decided to run CHKDSK on my backup disk, which made the folders accessible but also left them empty.

    I connected this disk via SATA to my main PC (Arch Linux) and tried:

        testdisk
        ntfsundelete
        ntfsfix --no-action (to look for diagnostically relevant faults; the disk was "OK" though)

    to no avail, as the file references in the tables had presumably been zeroed out by CHKDSK rather than removed by a typical journaled deletion. If it is useful at all, a majority of the files that I want to recover are JPEG, Photoshop PSD, and MPEG-3/MPEG-4/AVI/MKV files.

    If worst comes to worst, I'll just design my own sector scanner and use some simple heuristic-driven analysis to recover raw binary blocks of data from the disk which appear to match the structures of the above file types. I am unfamiliar with the exact workings of NTFS but used to be proficient at recovering FAT32 systems with just a hex editor, so I can provide any useful diagnostic information if you let me know how to find it!

    My priorities, in ascending order of importance for choosing the accepted answer:

        Restores directory structure
        Recovers many filenames in addition to the file data
        Is free / very cheap
        Runs on Linux
        Recovers a majority of file data

    The last point is the most important, but the more of the higher points you match, the more rep you'll probably get :)
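
    For the fallback plan in the last paragraph, a minimal Python sketch of signature-based carving: scan the raw device for known magic bytes at sector boundaries. The signatures shown are a hypothetical starting set, and reading 512 bytes at a time is for clarity (read multi-megabyte chunks in practice):

        import sys

        # File-type signatures at the start of a sector (JPEG, PSD, MKV;
        # the MPEG families need smarter matching than a fixed prefix).
        MAGIC = {
            b'\xff\xd8\xff': 'jpg',
            b'8BPS': 'psd',
            b'\x1a\x45\xdf\xa3': 'mkv',
        }
        SECTOR = 512

        def carve_offsets(device_path):
            """Yield (offset, type) for sectors starting with a known magic."""
            with open(device_path, 'rb') as disk:
                offset = 0
                while True:
                    block = disk.read(SECTOR)
                    if not block:
                        break
                    for magic, kind in MAGIC.items():
                        if block.startswith(magic):
                            yield offset, kind
                    offset += SECTOR

        if __name__ == '__main__':
            for off, kind in carve_offsets(sys.argv[1]):
                print('%s candidate at byte %d' % (kind, off))

    Note that photorec (from the same authors as the testdisk tool already tried above) implements this kind of carving far more thoroughly, so it is worth trying before writing your own scanner.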

    Read the article

  • VMware vSphere 4.1 and BackupExec 2010

    - by Josh
    I'm sure a common problem with most shops is backups: their size, and the window in which you have to back up the data.

    What we are working with:

        VMware vSphere 4.1 Cluster
        PS4000XV EqualLogic Storage Array (1.6TB volume dedicated for Backup to Disk)
        Physical backup server with a single LTO4 drive
        BackupExec 2010 R3 with the following agents: Exchange, SQL, Active Directory, VMware
        Dual Gigabit MPIO connections between all devices (storage array, backup server, VM hosts)

    What we would like to accomplish: I would like to implement an efficient backup-to-disk-to-tape solution where all of our VMs are backed up to the storage array first, and then, once completely backed up to the array, are replicated to tape. In the event we needed to recover, we would be able to do so directly from tape.

    Where we are currently: of the several ways I have set up the jobs in Backup Exec 2010 R3, the backup jobs all queue up at the same time; as soon as a job is finished backing up to disk, it then starts that same job to tape, but pulling from the original source instead of the designated B2D location. I understand that I could create a job that backs up the "Backup to Disk" folder to tape, but in the event of restoration, I would first need to stage the data in the B2D folder before I could restore the VM.

    I would really like to hear from individuals in similar situations. Any and all comments and critiques are appreciated.

    Read the article

  • Symantec Protection Suite Enterprise Edition

    - by rihatum
    We (our company) are planning to deploy Symantec Endpoint Protection and Symantec Desktop Recovery 2011 Desktop Edition to our 3000-4000 workstations (Windows 7, 32- and 64-bit), with a few hundred on Windows XP 32/64-bit. I have read the implementation guide for SEP and the tech notes for Desktop Recovery 2011. Our team has planned to deploy this as follows:

        1 x dedicated SQL 2008 R2 for Symantec Endpoint Protection (instead of using the embedded database)
        1 x dedicated SQL 2008 R2 for Symantec Desktop Recovery 2011 (instead of using the embedded database)
        1 x dedicated W2K8 R2 box for the SEPM (Symantec Endpoint Protection Manager - management app)
        1 x dedicated W2K8 R2 box for the Symantec Desktop Recovery 2011 management application

    Agent deployment: as per the Symantec documentation for both of the above, an agent can be pushed via the management application (provided no firewalls are blocking the required ports, etc.; we have the Windows firewall disabled already). The above is the initial plan we have for the 3000-4000 (Windows) client workstations.

    Now my questions :-)

    a) If we had these users distributed amongst two sites with an AD DC/GC in each site, how would I restrict the SEPM and desktop management solution to only check for users in their respective site?

    b) At present all users are in one building, but we are going to move some departments to a new location (with dedicated connectivity). How would we control which SEPM/management server is responsible for which site?

    c) What hardware would you recommend as a server spec for the SQL servers? 16GB RAM, dual Xeon?

    d) What hardware would you recommend as a server spec for the management servers? 16GB RAM each, with dual Xeon and SAS disks?

    e) Also, how would you recommend protecting these 4 servers (2 x SQL and 2 x management servers)?

    f) How would you recommend storing backups for these desktops? We have a SAN and a NAS in our environment, and we have one spare DAS (Dell MD3000).

    If you have anything to add or correct, that will be really helpful before diving into the actual implementation phase. Will be most grateful for your suggestions, recommendations and corrections with the above - many thanks!

    Rihatum

    Read the article

  • Can't POP3 from Exchange server after a reboot

    - by BLAKE
    Last night I shut down my Exchange 2003 virtual machine, added a new VHD (for backups), and booted it again. Now I can't POP3 email from it with Outlook 2007. In Outlook I get the error:

        Task '[email protected] - Receiving' reported error (0x800CCC0F): 'The connection to the server was interrupted. If this problem continues, contact your server administrator or Internet service provider (ISP).'

    Does anybody know what is wrong? All I did was reboot. I haven't formatted the added disk. There are no weird errors in the event log. I can still send mail with Outlook over port 25. I can send and receive mail with OWA. I can POP3 the mail to my phone (it takes about 15 minutes after sending a message, but I do get it eventually).

    EDIT: The 'Microsoft Exchange POP3' service says that it is started, but if I stop it and try to start it again, it fails, saying 'Could not start the Microsoft Exchange POP3 service on Local Computer. Error 1053: The service did not respond to the start or control request in a timely fashion.' I did some googling, and someone on exchangefreaks.com said that if I use Task Manager to 'End Task' on inetinfo.exe, I can then start the POP3 service fine. Does anyone know what causes this problem? I am fine for now since I did get the service started, but if it does this after every reboot...

    Read the article

  • MySQL to AppEngine

    - by Daniel Naito
    Hi Nick! How are you? I'm from Brazil and study at FATEC (a college located in Brazil). I'm trying to learn about AppEngine. Now I'm trying to load a large database from MySQL into AppEngine to perform some queries, but I don't know how I can do it. I did some testing with CSV files, but is there any way to perform a direct import from MySQL? This database is from Pentaho BI Server (www.pentaho.com). Thank you for your attention. Regards, Daniel Naito
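
    Since the CSV route already works, one low-friction option is to script the MySQL-to-CSV export and keep feeding App Engine's bulkloader. A minimal sketch assuming the MySQLdb driver; the connection details and table name are placeholders:

        import csv
        import MySQLdb  # any DB-API driver works the same way

        def dump_table_to_csv(table, out_path):
            conn = MySQLdb.connect(host='localhost', user='pentaho',
                                   passwd='secret', db='pentaho_bi')
            cur = conn.cursor()
            cur.execute('SELECT * FROM %s' % table)  # table name is trusted here
            with open(out_path, 'wb') as f:
                writer = csv.writer(f)
                writer.writerow([col[0] for col in cur.description])  # header row
                for row in cur.fetchall():
                    writer.writerow(row)
            conn.close()

        dump_table_to_csv('sales_fact', 'sales_fact.csv')

    From there, appcfg.py upload_data (the bulkloader) can push each CSV into the datastore; as far as I know there is no supported direct MySQL-to-datastore pipe in the SDK, so CSV or a custom bulkloader connector is the usual bridge.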

    Read the article

  • How to select a MAX value from a column in Query Builder in the Kohana framework?

    - by Victor Czechov
    I need to INSERT a row into a table, but before that query I must know the MAX value of the position column, so I can INSERT the new row with that position+1. Is it possible with the query builder? Following my first comment, I did this query:

        $p = DB::select(array(DB::expr('MAX(`position`)', 'p')))->from('supercategories')->execute();
        echo $p;

    The error:

        ErrorException [ Notice ]: Undefined offset: 1
        MODPATH\database\classes\kohana\database.php [ 505 ]

        500  */
        501  public function quote_column($column)
        502  {
        503      if (is_array($column))
        504      {
        505          list($column, $alias) = $column;
        506      }
        507
        508      if ($column instanceof Database_Query)
        509      {
        510          // Create a sub-query

    Read the article

  • Managing a difficult manager

    - by griegs
    I have a situation here at work. We are redeveloping our basic architecture across the entire company. Currently we have the following hierarchy:

        SQL Database <= stored procs not allowed
        nHibernate
        Classes to convert nHibernate objects into our own objects
        Web Service <= for all external and [internal] calls
        Class to take objects from the Web Service back into our own objects, and then...
        Normal n-tier application architecture, such as Data Transformation Layer, Business Layer, etc.

    Within the database, when we are writing a hierarchy of objects to the database, say for example:

        Order
            Person
                Details
                Address
            Product
            Other

    we need to serialise the object and save it, in its entirety, to an image field in a table. No attempt has been made to store the objects in their own tables so that we can do useful stuff like report on them. This is an architecture that was implemented [way] before I started and, as you can probably appreciate, is a complete nightmare, not to mention slow as a wet weekend. We're not even allowed to have stored procs within SQL Server, because in my boss's last job they had a hundred or so and he had a problem identifying them all; therefore all stored procs are the devil.

    Now the same person that developed the above architecture has developed the new one. It came as no surprise that he's essentially used the same framework, only now it's using .NET 3.5 with interfaces and generics. We still have to go through web services, still need to serialise (everything), still not allowed to use stored procs, etc. In fact, we're only barely able to bang two rocks together here. He says the framework is open for discussion, but when you discuss it, unless you approve of his design, you are told flatly "No". He simply won't listen to any other suggestions. Even when you show him demo applications of his proposed architecture vs. yours, and he can see the speed difference, he still won't take that on board.

    So I guess my question is, and I know others have experienced the same things out there, how do I get through to someone like this? How do you convince someone to ditch web services for internal calls and applications? How do you demonstrate, and make it stick, that stored procs are a better way to go than ad-hoc SQL statements? This is killing me. I don't want to repeat the mistakes of the past, and I certainly don't want to write code that I know is going to be slow and cumbersome. Help!

    Read the article

  • Memory-efficient import of many data files into a pandas DataFrame in Python

    - by richardh
    I import a directory of |-delimited .dat files into a pandas DataFrame. The following code works, but I eventually run out of RAM with a MemoryError:

        import pandas as pd
        import glob

        temp = []
        dataDir = 'C:/users/richard/research/data/edgar/masterfiles'
        for dataFile in glob.glob(dataDir + '/master_*.dat'):
            print dataFile
            temp.append(pd.read_table(dataFile, delimiter='|', header=0))
        masterAll = pd.concat(temp)

    Is there a more memory-efficient approach? Or should I go whole hog to a database? (I will move to a database eventually, but I am baby-stepping my move to pandas.) Thanks! FWIW, here is the head of an example .dat file:

        cik|cname|ftype|date|fileloc
        1000032|BINCH JAMES G|4|2011-03-08|edgar/data/1000032/0001181431-11-016512.txt
        1000045|NICHOLAS FINANCIAL INC|10-Q|2011-02-11|edgar/data/1000045/0001193125-11-031933.txt
        1000045|NICHOLAS FINANCIAL INC|8-K|2011-01-11|edgar/data/1000045/0001193125-11-005531.txt
        1000045|NICHOLAS FINANCIAL INC|8-K|2011-01-27|edgar/data/1000045/0001193125-11-015631.txt
        1000045|NICHOLAS FINANCIAL INC|SC 13G/A|2011-02-14|edgar/data/1000045/0000929638-11-00151.txt
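
    One way to cap memory is to stream each file into an on-disk store instead of accumulating every frame in a list. A sketch using SQLite as the staging database (a halfway step to the eventual database move), assuming a reasonably recent pandas where read_table takes chunksize and DataFrame has to_sql:

        import glob
        import sqlite3

        import pandas as pd

        conn = sqlite3.connect('masterfiles.db')
        dataDir = 'C:/users/richard/research/data/edgar/masterfiles'

        for dataFile in glob.glob(dataDir + '/master_*.dat'):
            # Read 50k rows at a time so only one chunk is in RAM at once.
            for chunk in pd.read_table(dataFile, delimiter='|', header=0,
                                       chunksize=50000):
                chunk.to_sql('master', conn, if_exists='append', index=False)

        # Later, pull back just the columns/rows a given analysis needs:
        tenQs = pd.read_sql_query(
            "SELECT cik, date FROM master WHERE ftype = '10-Q'", conn)

    pd.concat over many frames roughly doubles peak memory while copying; appending chunks to SQLite keeps the peak at one chunk.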

    Read the article

  • How do I prevent duplicate entries in MySQL?

    - by ggfan
    On my site, I have a form that users fill out to become a member. They fill out name, bday, email, etc. When they click submit, the data goes into MySQL. But sometimes when a user clicks submit many times or refreshes the page, the data gets inserted into the database more than once. How can I prevent this? Is there code I can use to only let one copy of the data get into the database? This is also a problem in my comment section. I allow users to put comments on people's profiles, but when they abuse the refresh or submit button, I get like 10 of the same comments. Thanks.
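
    The two standard guards are a unique constraint at the database level and the POST/redirect/GET pattern at the page level. A sketch of the database side, shown with Python's built-in sqlite3 for brevity (in MySQL the same idea is a UNIQUE KEY plus INSERT IGNORE):

        import sqlite3

        conn = sqlite3.connect(':memory:')
        conn.execute('CREATE TABLE members (email TEXT UNIQUE, name TEXT)')

        def register(email, name):
            # OR IGNORE makes a repeated submit a no-op instead of a new row.
            cur = conn.execute(
                'INSERT OR IGNORE INTO members (email, name) VALUES (?, ?)',
                (email, name))
            return cur.rowcount == 1  # False when the row already existed

        print(register('a@example.com', 'Ann'))  # True: first submit
        print(register('a@example.com', 'Ann'))  # False: duplicate ignored

    For the comment form, where duplicates are legitimate over time, redirecting after a successful POST (so a refresh re-issues a harmless GET) is the usual fix.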

    Read the article

  • Using Unity and interfaces, how do I create a concrete class that implements IDisposable

    - by Ryan ONeill
    I have an interface (IDbAccess) for a database access class so that I can unit test it using Unity. It all works fine in Unity, and now I want to make the concrete database class implement IDisposable so that it closes the DB connections. My problem is that Unity does not understand that my concrete class is disposable, because the interface (IDbAccess) cannot implement another interface. So how can I write code like this (pseudocode) so that Unity is aware that it needs to dispose of the class as soon as I am done?

        using (var MyDbAccessInstance = Unity.Resolve<IDbAccess>())
        {
        }

    Thanks
    Ryan

    Read the article

  • Should an event-sourced aggregate root have query access to the event sourcing repository?

    - by JD Courtoy
    I'm working on an event-sourced CQRS implementation, using DDD in the application/domain layer. I have an object model that looks like this:

        public class Person : AggregateRootBase
        {
            private Guid? _bookingId;

            public Person(Identification identification)
            {
                Apply(new PersonCreatedEvent(identification));
            }

            public Booking CreateBooking()
            {
                // Enforce Person invariants
                var booking = new Booking();
                Apply(new PersonBookedEvent(booking.Id));
                return booking;
            }

            public void Release()
            {
                // Enforce Person invariants
                // Should we load the booking here from the aggregate repository?
                // We need to ensure that booking is released as well.
                var booking = BookingRepository.Load(_bookingId);
                booking.Release();
                Apply(new PersonReleasedEvent(_bookingId));
            }

            [EventHandler]
            public void Handle(PersonBookedEvent @event) { _bookingId = @event.BookingId; }

            [EventHandler]
            public void Handle(PersonReleasedEvent @event) { _bookingId = null; }
        }

        public class Booking : AggregateRootBase
        {
            private DateTime _bookingDate;
            private DateTime? _releaseDate;

            public Booking()
            {
                // Enforce invariants
                Apply(new BookingCreatedEvent());
            }

            public void Release()
            {
                // Enforce invariants
                Apply(new BookingReleasedEvent());
            }

            [EventHandler]
            public void Handle(BookingCreatedEvent @event) { _bookingDate = SystemTime.Now(); }

            [EventHandler]
            public void Handle(BookingReleasedEvent @event) { _releaseDate = SystemTime.Now(); }

            // Some other business activities unrelated to a person
        }

    With my understanding of DDD so far, both Person and Booking are separate aggregate roots, for two reasons:

        1. There are times when business components will pull Booking objects separately from the database (i.e., a person that has been released has a previous booking modified due to incorrect information).
        2. There should not be locking contention between Person and Booking whenever a Booking needs to be updated.

    One other business requirement is that a Booking can never occur for a Person more than once at a time. Due to this, I'm concerned about querying the query database on the read side, as there could potentially be some inconsistency there (due to using CQRS and having an eventually consistent read database). Should the aggregate roots be allowed to query the event-sourced backing store for objects (lazy-loading them as needed)? Are there any other avenues of implementation that would make more sense?

    Read the article

  • Preserving text formatting for dynamic website

    - by Mohit
    Hello folks, I am facing a problem while creating a dynamic website. I am building it for a pharma company which has many products. The problem is, every product has different sections of description, and each has to be formatted differently. I want to store all the product descriptions in the database, but at the same time preserve the formatting of each description. I also plan to provide an admin interface where they can edit the product information themselves. I could use Joomla or any other CMS for that purpose, but I wanted to know, if I build such a system of my own, how I could format the text in an editor, save that into the database, and get the same formatting back when I retrieve it. I also wanted to do this in PHP. Thanks -- Mohit
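
    The usual pattern is a rich-text (WYSIWYG) editor in the admin form that submits HTML, which you sanitize and store in a TEXT column, then echo back on the product page. A sketch of the save path, in Python with the bleach library standing in for whatever sanitizer the PHP stack provides (HTML Purifier is the common choice there); the table and column names are placeholders:

        import sqlite3
        import bleach

        ALLOWED_TAGS = ['p', 'b', 'i', 'em', 'strong',
                        'ul', 'ol', 'li', 'h2', 'h3', 'a']
        ALLOWED_ATTRS = {'a': ['href']}

        def save_description(conn, product_id, submitted_html):
            # Keep the formatting tags the editor produces, drop everything
            # else (scripts, event handlers) so stored HTML is safe to echo.
            clean = bleach.clean(submitted_html, tags=ALLOWED_TAGS,
                                 attributes=ALLOWED_ATTRS, strip=True)
            conn.execute('UPDATE products SET description = ? WHERE id = ?',
                         (clean, product_id))
            conn.commit()

    On output, the stored HTML is rendered as-is, so the formatting survives the round trip; sanitizing once at save time is what makes that safe.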

    Read the article

  • Specify which row to return on SQLite Group By

    - by lozzar
    I'm faced with a bit of a difficult problem. I store all the versions of all documents in a single table. Each document has a unique id, and the version is stored as an integer which is incremented every time there is a new version. I need a query that will select only the latest version of each document from the database. While using GROUP BY works, it appears that it will break if the versions are not inserted into the database in version order (i.e., it takes the row with the maximum ROWID, which will not always be the latest version). Note that the latest version of each document will most likely be a different number (i.e., document A is at version 3, and document B is at version 6). I'm at my wits' end; does anybody know how to do this (select all the documents, but only return a single record for each document_id, where the record returned has the highest version number)?
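
    A correlated subquery sidesteps SQLite's pick-an-arbitrary-row GROUP BY behaviour entirely. A minimal sketch (the column names document_id and version are assumed from the description):

        import sqlite3

        conn = sqlite3.connect(':memory:')
        conn.execute('CREATE TABLE documents '
                     '(document_id TEXT, version INTEGER, body TEXT)')
        conn.executemany('INSERT INTO documents VALUES (?, ?, ?)',
                         [('A', 3, 'a3'), ('A', 1, 'a1'),  # out of order
                          ('B', 6, 'b6'), ('B', 5, 'b5')])

        # For each document, keep only the row whose version equals that
        # document's MAX(version); insertion order / ROWID never matters.
        rows = conn.execute(
            """SELECT d.* FROM documents AS d
               WHERE d.version = (SELECT MAX(version) FROM documents
                                  WHERE document_id = d.document_id)""").fetchall()
        print(rows)  # ('A', 3, 'a3') and ('B', 6, 'b6')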

    Read the article

  • Faster bulk inserts in sqlite3?

    - by scubabbl
    I have a file of about 30000 lines of data that I want to load into a sqlite3 database. Is there a faster way than generating insert statements for each line of data? The data is space-delimited and maps directly to the sqlite3 table. Is there any sort of bulk insert method for adding volume data to a database? Has anyone devised some deviously wonderful way of doing this if it's not built in? I should preface this by asking: is there a C++ way to do it from the API? Thanks.
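
    The big win in SQLite is wrapping all the inserts in one transaction; otherwise each INSERT is its own journaled commit. A sketch using Python's sqlite3 module, where executemany reuses one prepared statement (the three-column table is an assumption; match it to the real data):

        import sqlite3

        conn = sqlite3.connect('data.db')
        conn.execute('CREATE TABLE IF NOT EXISTS records (a TEXT, b TEXT, c TEXT)')

        with open('data.txt') as f:
            rows = (line.split() for line in f)  # space-delimited -> tuples
            with conn:  # one transaction for all 30k rows, not 30k commits
                conn.executemany('INSERT INTO records VALUES (?, ?, ?)', rows)

    In the C API the same shape is BEGIN, sqlite3_prepare_v2 once, then bind/step/reset per line, and COMMIT at the end.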

    Read the article

  • MySQL auto increment

    - by mouthpiec
    Hi, I have a table with an auto-increment field, but I need to transfer the table to another table in another database. Will the value of field 1 be 1, that of field 2 be 2, etc.? Also, in case the database gets corrupted and I need to restore the data, will the auto-increment affect it in some way? Will the value change? (E.g., if the first row has id (auto-inc) = 1, name = john, country = UK... will the id field remain 1?) I am asking because if other tables refer to this value, all data will get out of sync if this field changes.

    Read the article

  • Proper way to set class variables

    - by ensnare
    I'm writing a class to insert users into a database, and before I get too far in, I just want to make sure that my OO approach is clean:

        class User(object):
            def setName(self, name):
                # Do sanity checks on name
                self._name = name

            def setPassword(self, password):
                # Check password length > 6 characters
                # Encrypt to md5
                self._password = password

            def commit(self):
                # Commit to database
                pass

        >>> u = User()
        >>> u.setName('Jason Martinez')
        >>> u.setPassword('linebreak')
        >>> u.commit()

    Is this the right approach? Should I declare class variables up top? Should I use a _ in front of all the class variables to make them private? Thanks for helping out.
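
    For comparison, a sketch of the more idiomatic Python shape: plain attribute assignment in __init__, with properties doing the validation, so callers write u.name = ... instead of Java-style setters. (The md5 hashing is illustrative only, mirroring the question's comment; it is a poor password hash in practice.)

        import hashlib

        class User(object):
            def __init__(self, name, password):
                self.name = name          # goes through the property setters
                self.password = password

            @property
            def name(self):
                return self._name

            @name.setter
            def name(self, value):
                if not value.strip():
                    raise ValueError('name must not be blank')
                self._name = value

            @property
            def password(self):
                return self._password

            @password.setter
            def password(self, value):
                if len(value) <= 6:
                    raise ValueError('password must be longer than 6 characters')
                self._password = hashlib.md5(value.encode()).hexdigest()

    The single leading underscore is a convention for "internal", not enforcement; and note these are instance attributes rather than class variables, which is the distinction the question is circling.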

    Read the article

  • Swipe gestures on Android ListView items

    - by Bartek
    I have a ListView populated by a ResourceCursorAdapter. I use the loaders mechanism to query a ContentProvider for list items. I detect swipe gestures on the list items to perform some actions on them. New items get added by a background service, so the list can change dynamically. Everything works fine, except when I start swiping and a database change occurs (as a result of the background service adding a new row). In such a case the gesture is not detected properly. I noticed that ACTION_CANCEL is dispatched to the list item view, and also that bindView is executed for all visible items. Inside the bindView method I only set some text; I don't change any listeners there. How can I make gestures work even when new items are being added by the background service? Perhaps there's a way to prevent the motion from being cancelled, or I can pause database updates so they don't interrupt the gesture.

    Read the article

  • Updating Checked Checkboxes using CodeIgniter + MySQL

    - by Tim
    Hello, I have about 8 checkboxes that are being generated dynamically from my database. This is the code in my controller:

        //Start Get Processes Query
        $this->db->select('*');
        $this->db->from('projects_processes');
        $this->db->where('process_enabled', '1');
        $data['getprocesses'] = $this->db->get();
        //End Get Processes Query

        //Start Get Checked Processes Query
        $this->db->select('*');
        $this->db->from('projects_processes_reg');
        $this->db->where('project_id', $project_id);
        $data['getchecked'] = $this->db->get();
        //End Get Checked Processes Query

    This is the code in my view:

        <?php if($getprocesses->result_array()) { ?>
        <?php foreach($getprocesses->result_array() as $getprocessrow): ?>
        <tr>
          <td><input <?php if($getchecked->result_array()) {
                  foreach($getchecked->result_array() as $getcheckedrow):
                      if($getprocessrow['process_id'] == $getcheckedrow['process_id']) {
                          echo 'checked';
                      }
                  endforeach;
              } ?> type="checkbox" name="progresscheck[]" value="<?php echo $getprocessrow['process_id']; ?>"><?php echo $getprocessrow['process_name']; ?><br>
          </td>
        </tr>
        <?php endforeach; ?>

    This generates the checkboxes in the form and also checks the appropriate ones, as specified by the database. The problem is updating them. What I have been doing so far is simply deleting all checkbox entries for the project and then re-inserting all the values into the database. This is bad because (1) it's slow and horrible, and (2) I lose all my metadata about when the checkboxes were checked. So I guess my question is: how do I update only the checkboxes that have been changed? Thanks, Tim
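
    To touch only the rows that actually changed, compare the posted ids against the stored ids and compute the two set differences. The logic is language-agnostic; a sketch in Python (in CodeIgniter the same two sets come from the progresscheck[] POST array and the getchecked query):

        def diff_checkboxes(posted_ids, stored_ids):
            """Return (to_insert, to_delete) so unchanged rows keep
            their original timestamps/metadata."""
            posted = set(posted_ids)
            stored = set(stored_ids)
            return posted - stored, stored - posted

        # e.g. boxes 1, 2, 5 were saved earlier; the user now submits 2, 5, 7
        to_insert, to_delete = diff_checkboxes([2, 5, 7], [1, 2, 5])
        print(to_insert)  # {7} -> INSERT one row, stamped now
        print(to_delete)  # {1} -> DELETE one row; 2 and 5 are untouched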

    Read the article

  • IDataServiceMetadataProvider / ResourceType.... what for dynamic types with no CLR type?

    - by TomTom
    Hello, I'm trying to expose a database via ADO RIA, for which we have only an ODBC-based interface. The "database" is a server, and new elements are developed all the time, so I would like the server to check metadata at start-up (using the ODBC schema methods) and then expose what it finds via RIA services... clients can then regenerate when they need access to new elements. As such, I don't have any CLR types for all the tables developed.

        ResourceType tableType = new ResourceType(
            typeof(object),
            ResourceTypeKind.EntityType,
            null,
            "Martini",
            table_name,
            false
        );
        tableType.CanReflectOnInstanceType = false;

    I can somehow not pass in null as the CLR element type, and entering typeof(object) seems to result in reflection errors when trying to access the properties. Any documentation on how to do this? I don't really want to get into having types... though if I have to, I probably will dynamically generate some via bytecode emit.

    Read the article

  • How to connect Java and MySQL using mysql-connector-java 5.1.12

    - by user225269
    I'm still a beginner in Java. I don't have any idea how to import the files that I have downloaded into my Java class. They're in this path: E:\Users\user\Downloads\mysql-connector-java-5.1.12. I don't know what to do with the files I extracted from the archive I downloaded from the MySQL site in order to connect my Java application to a MySQL database. I'm using NetBeans 6.8 and have also installed WampServer. I've already checked out this: http://stackoverflow.com/questions/2118369/java-trouble-connecting-to-mysql and this: http://stackoverflow.com/questions/1640910/connecting-to-a-mysql-database But they don't seem to have answers on how to make use of the MySQL Java connector file from the MySQL site. Please help, thanks.

    Read the article

  • Copy selected rows/lines in a QTableView to the QClipboard

    - by Berschi
    Hi. First of all, sorry for my bad English. This is about C++ and Qt. I have an SQLite database and I put it into a QSqlTableModel. To show the database, I put that model into a QTableView. Now I want to create a method where the selected rows (or whole lines) are copied into the QClipboard. After that I want to paste them into my OpenOffice Calc document. But I have no idea what to do with the "selected" signal and the QModelIndex, and how to put this into the clipboard. So can you please help me? Berschi
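
    A sketch of the usual approach, shown with PyQt5 for brevity; the calls map one-to-one to the C++ API (selectionModel()->selectedIndexes(), QApplication::clipboard()->setText()). Tab-separated columns and newline-separated rows paste into OpenOffice Calc as separate cells:

        from PyQt5.QtWidgets import QApplication

        def copy_selection(view):
            """Copy the selected cells of a QTableView to the clipboard
            as tab-separated columns and newline-separated rows."""
            indexes = sorted(view.selectionModel().selectedIndexes(),
                             key=lambda ix: (ix.row(), ix.column()))
            rows = {}
            for ix in indexes:
                rows.setdefault(ix.row(), []).append(str(ix.data()))
            text = '\n'.join('\t'.join(cells) for cells in rows.values())
            QApplication.clipboard().setText(text)

    Hook this to a QAction or Ctrl+C shortcut rather than a selection-changed signal, so the copy happens on demand rather than on every click.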

    Read the article

  • MySQL: good way of programming

    - by Syom
    I have a video gallery in my database with over 200 000 videos. On the home page of the site I show only some videos, which must satisfy certain criteria. And so, the question: is it a good way to sort the videos every time the home page opens, or should I save the sorted results somewhere in the database and refresh them only if something changes? I think that could save me a lot of time. What do you think about it? Thanks in advance.

    Read the article
