Search Results

Search found 31356 results on 1255 pages for 'database backups'.


  • DPM 2007 clashing with existing SQL backup job

    - by Paul D'Ambra
    I've recently installed a DPM 2007 server on Server 2003 and have set up a protection group against a Server 2003 server running SQL 2005 SP3. The SQL Server in question has a full backup (as a SQL Agent job) once a day and transaction log backups hourly. These are zipped up and FTP'd to a server offsite by a scheduled task. Since adding the DPM job I'm receiving many error messages:

        DPM tried to do a SQL log backup, either as part of a backup job or a recovery to latest point in time job. The SQL log backup job has detected a discontinuity in the SQL log chain for database SERVER_NAME\DB_Name since the last backup. All incremental backup jobs will fail until an express full backup runs.

    My google-fu suggests that I need to change the full backup my SQL Agent job is running to a copy-only backup. But I think this means I can't use that backup with the transaction logs to restore the database if the building (including the DPM server) burns down. I'm sure I'm missing something obvious and thought I'd see what the hivemind suggests. It is an option to set up a co-located DPM server elsewhere and have DPM stream the backup, but that's obviously more expensive than the current setup. Many thanks in advance.
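    For reference, a minimal sketch of the copy-only backups being discussed, assuming SQL Server 2005 T-SQL; the database name and paths are placeholders. COPY_ONLY takes a backup without disturbing the backup chain (a copy-only full backup leaves the differential base alone, and a copy-only log backup does not truncate the log).

        -- Hedged sketch: placeholder database name and file paths.
        BACKUP DATABASE [DB_Name]
            TO DISK = N'D:\Backups\DB_Name_full_copyonly.bak'
            WITH COPY_ONLY, INIT;

        BACKUP LOG [DB_Name]
            TO DISK = N'D:\Backups\DB_Name_log_copyonly.trn'
            WITH COPY_ONLY, INIT;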


  • Mark row as deleted in dataTable on UserDeletingRow

    - by dav.evans
    I have a DataTable bound to a WinForms DataGridView via a BindingSource. I want to handle the UserDeletingRow event from the DataGridView and mark the row in my DataTable as deleted. I then need to be able to retrieve the rows marked as deleted from the DataTable so that I can delete them from my database when a Save button is clicked. Please note I don't want to delete from the database on each firing of UserDeletingRow, only mark that row as deleted in my DataSet. Can anyone point out how to do this?
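    A hedged sketch of how this is commonly handled when the grid is bound to the DataTable through a BindingSource: letting the grid delete proceed marks the underlying DataRow as Deleted (it stays in the DataTable until AcceptChanges), so a Save handler can collect those rows later. Control and table names below are assumptions.

        // Hedged sketch; inventoryTable and dataAdapter are made-up names.
        private void saveButton_Click(object sender, EventArgs e)
        {
            // Rows whose RowState became Deleted since the last AcceptChanges.
            DataTable deletedRows = inventoryTable.GetChanges(DataRowState.Deleted);
            if (deletedRows != null)
            {
                // Issue the DELETEs here, e.g. via a configured data adapter:
                // dataAdapter.Update(deletedRows);
            }
            inventoryTable.AcceptChanges();
        }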


  • Read and write struct in C

    - by Sergey
    I have a struct:

        typedef struct student {
            char fname[30];
            char sname[30];
            char tname[30];
            Faculty fac;
            int course;
            char group[10];
            int room;
            int bad;
        } Student;

    I read it from the file:

        Database * dbOpen(char *fname)
        {
            FILE *fp = fopen(fname, "rb");
            List *lst, *temp;
            Student *std;
            Database *db = malloc(sizeof(*db));
            if (!fp)
                return NULL;
            FileNameS = fname;
            std = malloc(sizeof(*std));
            if (!fread(std, sizeof(*std), 1, fp)) {
                db->head = db->tail = NULL;
                return db;
            }
            lst = malloc(sizeof(*lst));
            lst->s = std;
            lst->prev = NULL;
            db->head = lst;
            while (!feof(fp)) {
                fread(std, sizeof(*std), 1, fp);
                temp = malloc(sizeof(*temp));
                temp->s = std;
                temp->prev = lst;
                lst->next = temp;
                lst = temp;
            }
            lst->next = NULL;
            db->tail = lst;
            fclose(fp);
            return db;
        }

    And I have a problem... At the last record I get this file pointer:

        fp 0x10311448 {_ptr=0x00344b90 "??????????????????????????????..."} _iobuf *

    and I read the last record twice. Save file code:

        void * dbClose(Database *db)
        {
            FILE *fp = fopen(FileNameS, "w+b");
            List *lst, *temp;
            lst = db->head;
            while (lst != NULL) {
                fwrite(lst->s, sizeof(*(lst->s)), 1, fp);
                temp = lst;
                lst = lst->next;
                free(temp);
            }
            free(db);
            fclose(fp);
        }
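    A hedged sketch of a read loop that avoids two likely causes of the duplicated last record, assuming the Student/List/Database definitions above: the original loop tests feof before checking the result of fread (so the final, failed read is still appended), and every node points at the same single Student buffer.

        /* Hedged sketch: loop on fread's return value and allocate a fresh
         * Student per node so each list entry owns its own record. */
        Database * dbOpen(char *fname)
        {
            FILE *fp = fopen(fname, "rb");
            List *lst = NULL, *temp;
            Student *std;
            Database *db = malloc(sizeof(*db));
            if (!fp)
                return NULL;
            FileNameS = fname;
            db->head = db->tail = NULL;
            for (;;) {
                std = malloc(sizeof(*std));
                if (fread(std, sizeof(*std), 1, fp) != 1) {  /* EOF or read error */
                    free(std);
                    break;
                }
                temp = malloc(sizeof(*temp));
                temp->s = std;
                temp->prev = lst;
                temp->next = NULL;
                if (lst)
                    lst->next = temp;
                else
                    db->head = temp;
                lst = temp;
            }
            db->tail = lst;
            fclose(fp);
            return db;
        }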


  • Proper way to set object instance variables

    - by ensnare
    I'm writing a class to insert users into a database, and before I get too far in, I just want to make sure that my OO approach is clean:

        class User(object):
            def setName(self, name):
                # Do sanity checks on name
                self._name = name

            def setPassword(self, password):
                # Check password length > 6 characters
                # Encrypt to md5
                self._password = password

            def commit(self):
                # Commit to database

        >>> u = User()
        >>> u.setName('Jason Martinez')
        >>> u.setPassword('linebreak')
        >>> u.commit()

    Is this the right approach? Should I declare class variables up top? Should I use a _ in front of all the class variables to make them private? Thanks for helping out.
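    For comparison, a hedged sketch of the more idiomatic property-based style: setters that validate, and a single leading underscore for internal attributes (Python has no true private members). hashlib here only illustrates the md5 step; nothing below comes from the original post.

        # Hedged sketch, not a drop-in replacement for the class above.
        import hashlib

        class User(object):
            def __init__(self):
                self._name = None
                self._password = None

            @property
            def name(self):
                return self._name

            @name.setter
            def name(self, value):
                if not value or not value.strip():
                    raise ValueError("name must not be empty")
                self._name = value.strip()

            @property
            def password(self):
                return self._password

            @password.setter
            def password(self, value):
                if len(value) <= 6:
                    raise ValueError("password must be longer than 6 characters")
                self._password = hashlib.md5(value.encode("utf-8")).hexdigest()

            def commit(self):
                pass  # write self._name / self._password to the database here

        u = User()
        u.name = 'Jason Martinez'
        u.password = 'linebreak'
        u.commit()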


  • CakePHP Accessing Dynamically Created Tables?

    - by Dave
    As part of a web application, users can upload files of data, each of which generates a new table in a dedicated MySQL database to store that data. They can then manipulate this data in various ways. The next version of this app is being written in CakePHP, and at the moment I can't figure out how to dynamically assign these tables at runtime. I have the different database configs set up and can create the tables on data upload just fine, but once this is done I cannot access the new table from the controller as part of the record CRUD actions for the data manipulation. I hoped it would be along the lines of:

        function controllerAction() {
            $this->uses[] = 'newTable';
            $data = $this->newTable->find('all');
            // use data
        }

    But it returns the error:

        Undefined property: ReportsController::$newTable
        Fatal error: Call to a member function find() on a non-object in /app/controllers/reports_controller.php on line 60

    Can anyone help?
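    One approach that has been used with CakePHP 1.x is to build a bare Model instance against the runtime table instead of relying on $uses; treat the exact constructor arguments (and the table and datasource names below) as assumptions to verify against your CakePHP version.

        // Hedged sketch for CakePHP 1.x; table and datasource names are made up.
        function controllerAction() {
            $uploaded = new Model(array(
                'table' => 'user_upload_123',  // the dynamically created table
                'ds'    => 'uploads'           // the database config it lives in
            ));
            $data = $uploaded->find('all');
            // use $data
        }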


  • Can I see all SQL statements sent over an ODBC connection?

    - by Dave Cameron
    I'm working with a third-party application that uses ODBC to connect to, and alter, a database. During certain failure modes, the end results are not what I expect. To understand it better, I'd like some way of inspecting all the statements sent to the database. Is there a way to do this with ODBC? I know with JDBC I could use http://www.p6spy.com/ to see all statements sent, for example when debugging Hibernate. p6spy is a "proxy" driver that records commands sent and forwards them on to the real JDBC driver. Another possibility might be a protocol sniffer that would capture statements over the wire, although I'm unsure whether ODBC includes a standard wire protocol or only specifies the API. Does anyone know of existing tools that would allow me to do either of these things? Alternatively, is there another approach I could take?
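    One built-in option worth noting: the Windows ODBC Driver Manager can trace every ODBC call, including the SQL text passed to SQLPrepare/SQLExecDirect, to a log file. The supported switch is the Tracing tab of the ODBC Data Source Administrator; the equivalent registry values are sketched below (treat the exact value names and path as an assumption to verify on the target machine).

        [HKEY_CURRENT_USER\Software\ODBC\ODBC.INI\ODBC]
        "Trace"="1"
        "TraceFile"="C:\\Temp\\odbctrace.log"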


  • JSON encode MySQL results then read using jQuery

    - by silentw
    I have a database table with some rows that I want to fetch using PHP and then encode as JSON. Currently, my table structure is:

        idcomponente | quantidade

    After fetching the values in PHP, I want to know how I can encode them as JSON (multiple rows, same column names) so I can read them using jQuery.post():

        $.post('test.php', {id: id}, function(data) {
            // READ data HERE
        });

    Thanks in advance.

    Edit: So far, I have this:

        $.post('edit/producomponentes.php', {id: id}, function(data) {
            console.log(data);
        });

    which logs:

        [Object { componente="1", quantidade="2"}, Object { componente="3", quantidade="3"}]

    Now how can I go through each row and fetch its properties (data.componente, data.quantidade)?
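    A hedged sketch of the usual pattern: on the PHP side, collect the rows into an array and json_encode() it; on the client, tell $.post to parse the response as JSON and loop with $.each. The file and column names follow the question; the table name and query are assumptions.

        <?php
        // server side (old mysql_* API, matching the era of the question)
        $id = intval($_POST['id']);
        $rows = array();
        $result = mysql_query("SELECT idcomponente AS componente, quantidade FROM produto_componentes WHERE idproduto = $id");
        while ($row = mysql_fetch_assoc($result)) {
            $rows[] = $row;
        }
        header('Content-Type: application/json');
        echo json_encode($rows);

        // client side
        $.post('edit/producomponentes.php', {id: id}, function(data) {
            $.each(data, function(i, row) {
                console.log(row.componente, row.quantidade);
            });
        }, 'json');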


  • Dynamic Form Help for PHP and MySQL

    - by Tony
    The code below works as far as inserting records from a file into MySQL, but it only does so properly if the columns in the file are already ordered the same way as in the database. I would like the user to be able to select the drop-down that corresponds to each column in their file to match it up with the columns in the database (the database has email address, first name, last name). I am not sure how to accomplish this. Any ideas?

        <?php
        $lines = file('book1.csv');
        foreach ($lines as $data) {
            list($col1[], $col2[], $col3[]) = explode(',', $data);
        }
        $i = count($col1);

        if (isset($_POST['submitted'])) {
            DEFINE ('DB_USER', 'root');
            DEFINE ('DB_PASSWORD', 'password');
            DEFINE ('DB_HOST', 'localhost');
            DEFINE ('DB_NAME', 'csvimport');

            // Make the connection:
            $dbc = @mysqli_connect (DB_HOST, DB_USER, DB_PASSWORD, DB_NAME);

            for ($d = 1; $d < $i; $d++) {
                $q = "INSERT into contacts (email, first, last) VALUES ('$col3[$d]', '$col1[$d]', '$col2[$d]')";
                $r = @mysqli_query ($dbc, $q);
            }
        }

        echo "<form action =\"handle2.php\" method=\"post\">Email<br />
            <select name =\"email\">
                <option value='col1'>$col1[0]</option>
                <option value='col2'>$col2[0]</option>
                <option value='col3'>$col3[0]</option>
            </select><br /><br />
            First Name <br />
            <select name=\"field2\">
                <option value='col1'>$col1[0]</option>
                <option value='col2'>$col2[0]</option>
                <option value='col3'>$col3[0]</option>
            </select><br /><br />
            Last Name <br />
            <select name=\"field3\">
                <option value='col1'>$col1[0]</option>
                <option value='col2'>$col2[0]</option>
                <option value='col3'>$col3[0]</option>
            </select><br /><br />
            <input type=\"submit\" name=\"submit\" value=\"Submit\" />
            <input type=\"hidden\" name=\"submitted\" value=\"TRUE\" />
            </form>";
        ?>
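    A hedged sketch of how the handler could use those selections, assuming the same $col1/$col2/$col3 arrays (and $i) are rebuilt from the CSV there: look up which parsed column the user picked for each field and use that when building the INSERT, with escaping added since the values come from a file.

        // Hedged sketch; rebuild $col1/$col2/$col3 and $i from the CSV as above first.
        $columns  = array('col1' => $col1, 'col2' => $col2, 'col3' => $col3);
        $emailCol = $columns[$_POST['email']];   // e.g. the user picked 'col3' for email
        $firstCol = $columns[$_POST['field2']];
        $lastCol  = $columns[$_POST['field3']];

        for ($d = 1; $d < $i; $d++) {
            $email = mysqli_real_escape_string($dbc, $emailCol[$d]);
            $first = mysqli_real_escape_string($dbc, $firstCol[$d]);
            $last  = mysqli_real_escape_string($dbc, $lastCol[$d]);
            mysqli_query($dbc, "INSERT INTO contacts (email, first, last) VALUES ('$email', '$first', '$last')");
        }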


  • NTDS Replication Warning (Event ID 2089)

    - by Chris_K
    I have a simple little network with 3 AD servers in 2 sites. Site A has Win2k3 SP2 and Win2k SP4 servers; site B has a single Win2k3 SP2 server. All have been in place for at least 3 years now. Just last week I started getting Event 2089 "not backed up" warnings (example below) on both of the Win2k3 servers. I understand what the message means, no need to send me links to the TechNet article explaining it. I'll improve my backups. What I'm more curious about is why did I just start getting this message now? Why haven't I been getting it for the past 3 years?!? Perhaps this is related: I recently decommissioned a few other sites and AD controllers (there used to be 3 more sites, each with their own controller). Don't worry, I did proper DCpromo exercises and made sure we didn't lose anything. But would shutting those down possibly be related to why I get this error now? This won't keep me awake at night but I am curious as to what changed...

        Event Type:     Warning
        Event Source:   NTDS Replication
        Event Category: Backup
        Event ID:       2089
        Date:           3/28/2010
        Time:           9:25:27 AM
        User:           NT AUTHORITY\ANONYMOUS LOGON
        Computer:       RedactedName
        Description:
        This directory partition has not been backed up since at least the following
        number of days.
        Directory partition: DC=MyDomain,DC=com
        'Backup latency interval' (days): 30
        It is recommended that you take a backup as often as possible to recover from
        accidental loss of data. However if you haven't taken a backup since at least
        the 'backup latency interval' number of days, this message will be logged every
        day until a backup is taken. You can take a backup of any replica that holds
        this partition. By default the 'Backup latency interval' is set to half the
        'Tombstone Lifetime Interval'. If you want to change the default 'Backup
        latency interval', you could do so by adding the following registry key.
        'Backup latency interval' (days) registry key:
        System\CurrentControlSet\Services\NTDS\Parameters\Backup Latency Threshold (days)
        For more information, see Help and Support Center at
        http://go.microsoft.com/fwlink/events.asp.


  • MySQL Key-Value Pair Viability

    - by Amit
    Hi, I am new to MySQL and I am looking for answers to the following questions:
    a) Can MySQL Community Server be leveraged for a key-value pair type database?
    b) Which MySQL engine is best suited for a key-value pair type database?
    c) Is MySQL Cluster a must for horizontal scaling of a key-value datastore, or can it be achieved using MySQL replication?
    d) Are there any docs or whitepapers on best practices when implementing a key-value datastore on MySQL?
    e) Are there any known big implementations other than FriendFeed doing key-value pairs with MySQL?
    Would really appreciate some advice from all you MySQL gurus out there! Thanks in advance, Amit
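    As a point of reference for a), a hedged sketch of the kind of table such a store usually boils down to; the names and sizes are arbitrary, and the engine on the last line is exactly what questions b) and c) are about, so treat it as a placeholder.

        -- Hedged sketch; names and sizes are arbitrary.
        CREATE TABLE kv_store (
            k VARBINARY(255) NOT NULL PRIMARY KEY,
            v BLOB,
            updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
        ) ENGINE=InnoDB;

        -- upsert-style write
        INSERT INTO kv_store (k, v) VALUES ('user:42', 'some value')
            ON DUPLICATE KEY UPDATE v = VALUES(v);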


  • How to tune ASP.NET CreateUserWizard?

    - by Max
    I have created an ASP.NET WebForms site on IIS 7.5. I want to build a step-by-step user registration. I want to store the basic and detailed information about registered users in a specially created database table (not in the aspnet_Users table). I want to validate the email address first and block the next registration step for a user whose email address already exists in the database. At the last registration step I want to present a summary form, where all previous input and select fields are duplicated with the "disabled" attribute. How should I adjust the ASP.NET CreateUserWizard control and web.config to meet these needs?
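    A hedged sketch of the server-side hooks usually involved: the wizard's CreatingUser event can veto the step when the address is already taken, and CreatedUser is a common place to copy the collected values into a custom table. The control name and lookup method below are assumptions.

        // Hedged sketch; RegisterWizard and EmailAlreadyRegistered are made-up names.
        protected void RegisterWizard_CreatingUser(object sender, LoginCancelEventArgs e)
        {
            if (EmailAlreadyRegistered(RegisterWizard.Email))  // your own lookup against the custom table
            {
                e.Cancel = true;  // stay on the current step
            }
        }

        protected void RegisterWizard_CreatedUser(object sender, EventArgs e)
        {
            // Copy the collected values into the custom users table here.
        }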


  • Complex relationship between tables in NHibernate

    - by Ilya Kogan
    Hi all, I'm writing a Fluent NHibernate mapping for a legacy Oracle database. The challenge is that the tables have composite primary keys. If I had total freedom, I would redesign the relationships and auto-generate primary keys, but other applications must write to the same database and read from it, so I cannot do it. These are the two tables I'll focus on.

    Example data:

        Trips table:
        1, 10:00, 11:00 ...
        1, 12:00, 15:00 ...
        1, 16:00, 19:00 ...
        2, 12:00, 13:00 ...
        3, 9:00, 18:00 ...

        Faults table:
        1, 13:00 ...
        1, 23:00 ...
        2, 12:30 ...

    In this case, vehicle 1 made three trips and has two faults. The first fault happened during the second trip, and the second fault happened while the vehicle was resting. Vehicle 2 had one trip, during which a fault happened.

    Constraints: Trips of the same vehicle never overlap. So the tables have an optional one-to-many relationship, because every fault either happens during a trip or it doesn't. If I wanted to join them in SQL, I would write:

        select ...
        from Faults
        left outer join Trips
            on Faults.VehicleId = Trips.VehicleId
            and Faults.FaultTime between Trips.TripStartTime and Trips.TripEndTime

    and then I'd get a dataset where every fault appears exactly once (one-to-many, as I said). Note that there is no Vehicles table, and I don't need one. But I did create a view that contains all VehicleIds from both tables, so I can use it as a junction table.

    What am I actually looking for? The tables are huge because they cover years of data, and each time I only need to fetch a range of a few hours. So I need a mapping and a criteria that will run something like the following SQL underneath:

        select ...
        from Faults
        left outer join Trips
            on Faults.VehicleId = Trips.VehicleId
            and Faults.FaultTime between Trips.TripStartTime and Trips.TripEndTime
        where Faults.FaultTime between :p0 and :p1

    Do you have any ideas how to achieve it?

    Note 1: Currently the application shouldn't write to the database, so persistence is not a must, although if the mapping supports persistence, it may help at some point in the future.

    Note 2: I know it's a tough one, so if you give me a great answer, you will be properly rewarded :)

    Thank you for reading this long question, and now I only hope for the best :)


  • How can I allow users to switch data sources for an SSRS report?

    - by fatcat1111
    I have two SQL Server databases with identical schemas, but different data. I also have SSRS generating reports, in native mode, for one of the databases. All reports use the same shared data source. I would like to allow users to get reports for the other database. I created a second shared data source for the second database. Modifying the reports to use this second data source produces reports as expected. Because the RDLs are the same except for the data source, and because I don't want to maintain what are basically duplicate reports, I'm looking for a way to dynamically switch data sources depending on user input. Is there an easy means of accomplishing this? An existing solution would be best. Barring that, can the RDL's data source be parameterized? Or can the RDS's connection string be parameterized?
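    For what it's worth, one documented route is an expression-based connection string on an embedded (not shared) data source, driven by a report parameter; the server, catalog, and parameter names below are assumptions. The trade-off is that shared data sources do not accept expressions, so the data source moves into the report.

        ="Data Source=MyReportServerHost;Initial Catalog=" & Parameters!TargetDatabase.Value

    The TargetDatabase parameter can be restricted to the two database names, and the same RDL then serves both.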


  • How to schedule automatic (daily) snapshots of AWS EC2 Windows Instance?

    - by Stanley
    I have some Windows servers hosted on Amazon EC2. Some run Windows Server 2003 and others run Windows Server 2008. These are EBS-backed instances. Most of the instances also have some additional EBS volumes attached. We want to schedule a daily snapshot of the Windows machines (and also the attached EBS volumes) to S3 so that we have daily backups available. One would think that this is a very common requirement and would be made available via the AWS Management Console, but alas, it is not. What approaches are available? How do I schedule daily snapshots on our Windows servers? There are several scripting examples available online for Linux, but not so much for Windows. I have had a look at http://sehmer.blogspot.com/2011/04/amazon-ec2-daily-snapshot-script-for.html as well as https://github.com/ronmichael/aws-snapshot-scheduler. Has anyone used one of these approaches, and does it work? I have also considered a service like Skeddly, which seems inexpensive at first glance, but once you look at using it for several servers the price soon escalates to the point where it seems a better option to create your own solution, which you can then apply to new servers in the future. With Skeddly we'd pay for each server. How do we schedule daily snapshots of our Windows instances?
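    A hedged sketch of the roll-your-own route using the AWS Tools for Windows PowerShell, run daily from Task Scheduler; the volume IDs, paths, and the assumption that credentials are already configured for the module are all placeholders.

        # Hedged sketch: snapshot each EBS volume attached to an instance.
        Import-Module AWSPowerShell
        $volumes = @('vol-11111111', 'vol-22222222')   # root volume plus data volumes
        foreach ($vol in $volumes) {
            New-EC2Snapshot -VolumeId $vol -Description ("daily backup " + (Get-Date -Format 'yyyy-MM-dd'))
        }

        # Scheduled daily at 02:00 with something like:
        #   schtasks /Create /SC DAILY /ST 02:00 /TN "EBS snapshots" /TR "powershell -File C:\scripts\daily-snapshot.ps1"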


  • Entity Framework: inserting a many-to-many relationship between two existing objects while updating

    - by redbluegreen
    I'm trying to do this:

        using (var context = new SampleEntities())
        {
            User user = /* select a user from database */;

            // Update user's properties
            user.Username = ...
            user.Website = ...

            // Add a role
            Role role = /* select a role from database */;

            // trying to insert into table UserRoles, which has columns (UserID, RoleID)
            user.Roles.Add(role);

            // Apply property changes
            context.ApplyPropertyChanges("Users", user);
            context.SaveChanges();
        }

    However, I get an exception telling me that "The existing object in the ObjectContext is in the Added state" and it can't "ApplyPropertyChanges". If "ApplyPropertyChanges()" is removed, it adds a User. In what order should these methods be called? I don't need to do them separately, right? Thanks.
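    A hedged sketch of the ObjectContext-era pattern that usually avoids that exception: entities queried from the context are already attached and change-tracked, so property edits and the new link row are picked up by SaveChanges without ApplyPropertyChanges (which is intended for detached copies). Names are assumptions, and System.Linq is assumed to be in scope.

        using (var context = new SampleEntities())
        {
            // Both entities come from this context, so they are attached and tracked.
            User user = context.Users.First(u => u.UserID == userId);
            Role role = context.Roles.First(r => r.RoleID == roleId);

            user.Username = "new-name";   // tracked property change
            user.Roles.Add(role);         // becomes a row in UserRoles

            context.SaveChanges();
        }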


  • Stop propagating deletes

    - by Mark
    Is it just me or is anyone else finding EF very difficult to use in a real app :( I'm using it as the data layer and have created custom business objects. I'm having difficulty converting the business objects back to EF objects and updating/adding/deleting from the database. Does anyone know a good, simple example of doing this? Actually, the current problem that's driving me nuts is that when I delete something, EF tries to delete other related stuff as well. For example, if I delete an invoice it will also delete the associated customer! Seems odd. I can't figure out how to stop it doing this.

        // tried:
        invoiceEfData.CustomerReference = null;
        // also tried
        invoiceEfData.Customer = null;

        context.DeleteObject(invoiceEfData);
        context.SaveChanges();
        // at this point I get a database error due to it attempting to delete the customer
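    A hedged sketch of a delete that typically removes only the invoice row: re-query the entity on a clean context, so no related objects are sitting attached in the Added state, and delete just that entity. Entity set and property names are assumptions.

        using (var context = new MyEntities())
        {
            var invoice = context.Invoices.First(i => i.InvoiceID == invoiceId);
            context.DeleteObject(invoice);   // deletes the invoice only, unless a cascade
                                             // rule in the model or database says otherwise
            context.SaveChanges();
        }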


  • Run java thread at specific times

    - by rmarimon
    I have a web application that synchronizes with a central database four times per hour. The process usually takes 2 minutes. I would like to run this process as a thread at X:55, X:10, X:25, and X:40 so that users know that at X:00, X:15, X:30, and X:45 they have a clean copy of the database. It is just about managing expectations. I have gone through the executor in java.util.concurrent, but scheduling is done with scheduleAtFixedRate, which I believe provides no guarantee about the wall-clock times at which it actually runs. I could use an initial delay so that the first run lands close to the right time and then schedule every 15 minutes, but it seems this would probably drift over time. Is there an easier way to schedule the thread to run 5 minutes before every quarter hour?
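    A hedged sketch of the initial-delay approach with ScheduledExecutorService: compute the delay to the next of X:10, X:25, X:40, X:55 and then run at a fixed 15-minute rate (fixed-rate scheduling anchors the period to the start time rather than to each run's finish, so drift stays small). The class and body of the task are placeholders.

        import java.util.Calendar;
        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;

        public class SyncScheduler {
            public static void main(String[] args) {
                ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
                Runnable sync = new Runnable() {
                    public void run() {
                        // run the database synchronization here
                    }
                };

                // Minutes 10, 25, 40 and 55 are all congruent to 10 modulo 15.
                Calendar now = Calendar.getInstance();
                int minute = now.get(Calendar.MINUTE);
                int second = now.get(Calendar.SECOND);
                int minutesToNext = ((10 - minute) % 15 + 15) % 15;
                if (minutesToNext == 0 && second > 0) {
                    minutesToNext = 15;  // just missed a slot, wait for the next one
                }
                long initialDelayMs = minutesToNext * 60000L - second * 1000L;

                scheduler.scheduleAtFixedRate(sync, initialDelayMs, 15 * 60000L, TimeUnit.MILLISECONDS);
            }
        }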


  • Microsoft Sync Framework - Local DB and Remote DB have to have the same schema?

    - by Josh
    When using MSF, is it implied in the technology that the sync tables are supposed to be 1-1? The reason I'm wondering is that if I'm synching from a SQL2005 database to a SQLCE, I might want the CE one to be a little more flattened out so I can get data out with a simpler SELECT statement (as CE does not support sprocs). For example, I might have a tblCustomer, tblOrder, and tblCustomerOrder in the central database, but in the local databases one table with all the data might be preferred. Of course I'd still want the updates to reflect back and forth between the two databases. Does MSF make this possible, or does the local DB have to have the same tables as the central?


  • Custom date time picker for Windows Forms that is locked to GMT time

    - by m3ntat
    Using Visual Studio 2008, C#, .NET 2.0. I have a Windows Forms client application that contains a scheduling UI section. Currently this is used only in the London office with the standard DateTimePicker control; the selected time is saved in a UK database (GMT) and a London-based server application processes the schedules. There is a requirement to roll the client out to various global locations (Hong Kong, New York, etc.) and allow them to set up schedules that run according to GMT on the London server. I'll have a label on screen saying "note schedules are GMT"; what I need is a good way to present a date-time picker that always shows, and stays in sync with, the database server's GMT time regardless of where the client app is running. Suggestions on how to achieve this? Thanks.
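    A hedged .NET 2.0 sketch of the simplest interpretation: initialise the picker from UTC and treat whatever the user picks as GMT, so no local-time conversion ever happens on the client. The control name is an assumption, and GMT is treated as UTC here (British Summer Time is ignored).

        // Show the current GMT time when the scheduling screen loads.
        scheduleTimePicker.Value = DateTime.UtcNow;

        // When saving, tag the value as UTC instead of converting it:
        DateTime gmtValue = DateTime.SpecifyKind(scheduleTimePicker.Value, DateTimeKind.Utc);
        // persist gmtValue; it was entered as GMT, so do not call ToUniversalTime() on it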


  • NHibernate - Mapping tagging of entities

    - by ZNS
    Hi! I've been racking my brain on how to get my tagging of entities to work. I'll get right into the database structure:

        tblTag
            TagId - int32 - PK
            Name

        tblTagEntity
            TagId - PK
            EntityId - PK
            EntityType - string - PK

        tblImage
            ImageId - int32 - PK

        tblBlog
            BlogId - int32 - PK

        class Image
            Id
            EntityType { get { return "MyNamespace.Entities.Image"; } }
            IList Tags;

        class Blog
            Id
            EntityType { get { return "MyNamespace.Entities.Blog"; } }
            IList Tags;

    The obvious problem I have here is that EntityType is an identifier but doesn't exist in the database. If anyone could help with this mapping I'd be very grateful.
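    A hedged sketch of one way the collection is often mapped in this situation, shown as hbm.xml: a many-to-many over the link table with a where filter on the EntityType column. Column and class names follow the question; whether writes should also stamp EntityType is left open, so treat this as read-oriented.

        <!-- inside the Image mapping; the Blog mapping is the same with its own literal -->
        <set name="Tags" table="tblTagEntity" where="EntityType = 'MyNamespace.Entities.Image'">
          <key column="EntityId" />
          <many-to-many class="Tag" column="TagId" />
        </set>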


  • Coordinating the logging output from a web app installed in a SharePoint farm.

    - by Kelly French
    We are deploying web parts to SharePoint 2007 and would like to include logging (log4net). The ideal solution would be to use a database appender to avoid the problem of knowing which actual server is executing the web part. This question has been helpful: http://stackoverflow.com/questions/219668/sharepoint-and-log4net. I've got log4net working in a stand-alone web app under the Visual Studio dev server, with web.config holding the log4net settings and a file appender for the output. I'd like to transition to SharePoint and still use the log file output so I can make sure it's all working first, then change the config around to log to a database. Is this going to be too much trouble? How have other developers added log4net to their solutions for SharePoint?
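    For the database step, a hedged sketch of a standard log4net AdoNetAppender configuration; the connection string, table, and columns are assumptions, and only two parameters are shown (level, logger, and the rest follow the same pattern).

        <appender name="DatabaseAppender" type="log4net.Appender.AdoNetAppender">
          <bufferSize value="1" />
          <connectionType value="System.Data.SqlClient.SqlConnection, System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
          <connectionString value="Data Source=dbserver;Initial Catalog=Logging;Integrated Security=True" />
          <commandText value="INSERT INTO Log ([Date],[Message]) VALUES (@log_date, @message)" />
          <parameter>
            <parameterName value="@log_date" />
            <dbType value="DateTime" />
            <layout type="log4net.Layout.RawTimeStampLayout" />
          </parameter>
          <parameter>
            <parameterName value="@message" />
            <dbType value="String" />
            <size value="4000" />
            <layout type="log4net.Layout.PatternLayout">
              <conversionPattern value="%message" />
            </layout>
          </parameter>
        </appender>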


  • Design question: what pattern to use

    - by rahul
    Problem description: We have a requirement to store a snapshot of an entity temporarily in the database until all the processes to approve the data are completed. Once all approvals are completed, the data shall be permanently persisted into the actual table.

    Example:
    1) Consider an entity called "User". One way to create the user is through the "Create Account Process".
    2) The "Create Account Process" shall capture all the information of the User and store it in a temporary table in the database.
    3) The data shall be used by the "Account Approval Process" to run its verification process.
    4) After all the verification is completed successfully, the User data shall be persisted to the actual table.

    Question: Where should the user data entered during the "Create Account Process" be stored? Additionally, the User data should be editable until the verification process is complete.
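    A hedged sketch of one common alternative to a separate staging table, offered only as a comparison point rather than as the asker's design: keep a single table and drive the workflow with a status column, promoting the row once every approval has completed. Table, column, and status names are assumptions.

        -- Hedged sketch (SQL Server flavoured); columns are illustrative only.
        CREATE TABLE UserAccount (
            UserId    INT IDENTITY PRIMARY KEY,
            UserName  NVARCHAR(100) NOT NULL,
            Email     NVARCHAR(200) NOT NULL,
            Status    VARCHAR(20) NOT NULL DEFAULT 'PENDING'  -- PENDING -> VERIFIED -> ACTIVE
        );

        -- The row stays editable while Status = 'PENDING'; promotion happens once
        -- the Account Approval Process has signed off.
        UPDATE UserAccount SET Status = 'ACTIVE' WHERE UserId = @UserId AND Status = 'VERIFIED';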


  • How come jQuery can't remove this element?

    - by George Edison
    I have a table like this:

        <table id='inventory_table'>
            <tr id='no_items_avail'>
                <td>
                    There are no items in the database.
                </td>
            </tr>
        </table>

    And I want to get rid of it with jQuery like this:

        $('#inventory_table tbody tr#no_items_avail').remove();

    However it doesn't seem to be working at all. What am I doing wrong?

    Edit: Also, the row above was originally inserted into the DOM with another jQuery call:

        $('#inventory_table tbody').append("<tr id='no_items_avail'.......

    if that helps. And, running:

        alert($('#no_items_avail').text());

    yields "There are no items in the database." as expected.
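    A hedged note and sketch: since the row carries an id, selecting by that id alone sidesteps any assumption about whether a tbody wraps it (browsers insert one automatically around the static markup, so the DOM structure may not match the selector you expect).

        // The alert above proves '#no_items_avail' resolves, so remove it directly.
        $('#no_items_avail').remove();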

