Search Results

Search found 695 results on 28 pages for 'deletes'.

Page 3 of 28

  • Why does a small number of adds/deletes take several seconds in EF4?

    - by TomWij
    Using Entity Framework 4. I created a Code First database some time ago, and now a piece of code needs to delete and add 16 objects, which takes 6 seconds. That's 300+ ms for each query! The deletes/adds occur in a foreach loop, and there is a single SaveChanges() outside the foreach. A profiler trace (shown as an image in the original post) indicates these calls take 6 seconds, which is 34% of the total time, for 16 calls. That doesn't sound normal to me... Why is this, and how can I improve the performance? If there is no solution: are there any workarounds I can use? It would be a pain to rewrite my project...
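    One workaround often suggested for this pattern (an editorial sketch, not from the original post) is to bypass per-entity change tracking for the deletes and issue a single SQL statement. The sketch below assumes an ObjectContext-based EF4 model, and the "MyEntities", "Items", "BatchId", and "newItems" names are hypothetical; the Code First CTPs expose a similar Database.ExecuteSqlCommand:

        // Sketch: one round trip for the deletes instead of 16 tracked DELETEs.
        using (var context = new MyEntities())
        {
            context.ExecuteStoreCommand(
                "DELETE FROM Items WHERE BatchId = {0}", batchId);

            foreach (var item in newItems)      // then queue the adds
                context.Items.AddObject(item);

            context.SaveChanges();              // one save for all adds
        }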

    Read the article

  • When typing on my Mac, the cursor moves/deletes/replaces text.

    - by David
    Help! I don't know what is happening with my computer (black MacBook, OS X 10.6.6), but recently, whenever I am typing, my cursor suddenly moves in the middle of my phrase or paragraph, deleting text, replacing words, or just closing applications. I don't know what might be the cause, but it's driving me crazy. I have disabled Typinator (which had worked fine for a couple of months) and looked through the key bindings in System Preferences > Keyboard > Keyboard Shortcuts, but I have not been able to find any answers. It happens in all apps that require typing: TextMate, Chrome, Firefox, TextEdit, Mail. Does anybody know if there is a way I can review all keyboard shortcuts, to see if the issue lies there, or have any suggestions? Thanking you dearly, Dave

    Read the article

  • How to create a Rails habtm that deletes/destroys without error?

    - by Bradley
    I created a simple example as a sanity check and still cannot seem to destroy an item on either side of a has_and_belongs_to_many relationship in Rails. Whenever I try to delete an object from either table, I get the dreaded NameError / "uninitialized constant" error message. To demonstrate, I created a sample Rails app with a Boy class and a Dog class. I used the basic scaffold for each and created a linking table called boys_dogs. I then added a simple before_save routine to create a new 'dog' any time a boy was created and establish a relationship, just to get things set up easily.

    dog.rb:

        class Dog < ActiveRecord::Base
          has_and_belongs_to_many :Boys
        end

    boy.rb:

        class Boy < ActiveRecord::Base
          has_and_belongs_to_many :Dogs

          def before_save
            self.Dogs.build(:name => "Rover")
          end
        end

    schema.rb:

        ActiveRecord::Schema.define(:version => 20100118034401) do
          create_table "boys", :force => true do |t|
            t.string   "name"
            t.datetime "created_at"
            t.datetime "updated_at"
          end

          create_table "boys_dogs", :id => false, :force => true do |t|
            t.integer  "boy_id"
            t.integer  "dog_id"
            t.datetime "created_at"
            t.datetime "updated_at"
          end

          create_table "dogs", :force => true do |t|
            t.string   "name"
            t.datetime "created_at"
            t.datetime "updated_at"
          end
        end

    I've seen lots of posts here and elsewhere about similar problems, but the solutions normally involve belongs_to and plural/singular class names being confused. I don't think that is the case here, but I tried switching the habtm statement to use the singular name just to see if it helped (with no luck). I seem to be missing something simple here. The actual error message is:

        NameError in BoysController#destroy
        uninitialized constant Boy::Dogs

    The trace looks like:

        /Library/Ruby/Gems/1.8/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:105:in `const_missing'
        (eval):3:in `destroy_without_callbacks'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.4/lib/active_record/callbacks.rb:337:in `destroy_without_transactions'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.4/lib/active_record/transactions.rb:229:in `send'
        ...

    Thanks.
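    The error message itself points at the likely culprit (an editorial note, not part of the original question): Rails association names must be lowercase, underscored symbols. Writing :Dogs makes ActiveRecord look up a Boy::Dogs constant during destroy, which is exactly the uninitialized constant in the trace. A fix sketch:

        # Sketch: lowercase plural association names let Rails resolve the Dog
        # and Boy classes by convention instead of a Boy::Dogs constant.
        class Dog < ActiveRecord::Base
          has_and_belongs_to_many :boys
        end

        class Boy < ActiveRecord::Base
          has_and_belongs_to_many :dogs

          def before_save
            self.dogs.build(:name => "Rover")
          end
        end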

    Read the article

  • Should I commit or roll back a transaction that creates a temp table, reads, then deletes it?

    - by Triynko
    To select information related to a list of hundreds of IDs, rather than build a huge SELECT statement, I create a temp table, insert the IDs into it, join it with another table to select the rows matching the IDs, then delete the temp table. So this is essentially a read operation, with no permanent changes made to any persistent database tables. I do this in a transaction, to ensure the temp table is deleted when I'm finished. My question is... what happens when I commit such a transaction vs. let it roll back? Performance-wise, does the DB engine have to do more work to roll back the transaction vs. committing it? Is there even a difference, since the only modifications are done to a temp table? Related question here, but it doesn't answer my specific case involving temp tables: http://stackoverflow.com/questions/309834/should-i-commit-or-rollback-a-read-transaction
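    For reference, a minimal sketch of the pattern (T-SQL; the table and column names are assumptions, not from the original question). As a rule of thumb, COMMIT is the cheaper path here: a rollback has to undo the logged temp-table inserts, while a commit just marks the work complete.

        -- Sketch: read via a temp table, then commit.
        BEGIN TRANSACTION;
        CREATE TABLE #ids (id int PRIMARY KEY);
        INSERT INTO #ids (id) VALUES (1), (2), (3);  -- hundreds of IDs in practice
        SELECT t.*
          FROM dbo.SomeTable AS t
          JOIN #ids AS i ON i.id = t.id;
        COMMIT TRANSACTION;
        DROP TABLE #ids;  -- or let it vanish when the session ends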

    Read the article

  • How can I do more than one level of cascading deletes in LINQ?

    - by Gary McGill
    If I have a Customers table linked to an Orders table, and I want to delete a customer and its corresponding orders, then I can do:

        dataContext.Orders.DeleteAllOnSubmit(customer.Orders);
        dataContext.Customers.DeleteOnSubmit(customer);

    ...which is great. However, what if I also have an OrderItems table, and I want to delete the order items for each of the orders deleted? I can see how I could use DeleteAllOnSubmit to cause the deletion of all the order items for a single order, but how can I do it for all the orders?
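    One approach (an editorial sketch, not from the original question) is to flatten the second level with SelectMany and delete bottom-up, so child rows are queued before their parents. It assumes the Order entity exposes an OrderItems collection, as the question implies:

        // Sketch: LINQ to SQL, deleting grandchildren, children, then the parent.
        var orders = customer.Orders.ToList();
        dataContext.OrderItems.DeleteAllOnSubmit(orders.SelectMany(o => o.OrderItems));
        dataContext.Orders.DeleteAllOnSubmit(orders);
        dataContext.Customers.DeleteOnSubmit(customer);
        dataContext.SubmitChanges();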

    Read the article

  • Gtk: does deleting the builder pointer delete all the widgets created using it?

    - by PP
    I am creating a builder pointer as follows:

        GtkBuilder *builder_ptr;
        builder_ptr = gtk_builder_new();
        if (!gtk_builder_add_from_file(builder_ptr, "Test.glade", &error))
            printf("\n Error Builder, Exit!\n");

    and I am releasing this builder pointer as follows:

        g_object_unref(G_OBJECT(builder_ptr));

    This builder contains 2-3 GtkWindows and other widgets. So my question is: do I need to delete all the windows in this builder manually when I release the builder, or will all the windows get destroyed when I release the builder pointer? Thanks, PP.
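    For what it's worth (an editorial sketch based on the GtkBuilder documentation, not from the original post): dropping the builder's reference does not destroy toplevel windows; they must be destroyed explicitly. The object name "window1" below is an assumption about the .glade file:

        /* Sketch: fetch the toplevels you need, release the builder, and
         * destroy each window yourself when you are done with it. */
        GtkWidget *window =
            GTK_WIDGET(gtk_builder_get_object(builder_ptr, "window1"));
        g_object_unref(G_OBJECT(builder_ptr)); /* releases the builder only */
        /* ... use the window ... */
        gtk_widget_destroy(window);            /* actually destroys the toplevel */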

    Read the article

  • Code does not delete only the specific extra files but deletes all; also no recursion for directories

    - by OM The Eternity
    I have to compare two folder structures and, with reference to the source folder, delete all the files/folders present in the destination folder which do not exist in the reference source folder. How can I do this?

        $original = scan_dir_recursive('/var/www/html/copy2');
        $mirror = scan_dir_recursive('/var/www/html/copy1');

        function scan_dir_recursive($dir) {
            $all_paths = array();
            $new_paths = scandir($dir);
            foreach ($new_paths as $path) {
                if ($path == '.' || $path == '..') {
                    continue;
                }
                $path = $dir . DIRECTORY_SEPARATOR . $path;
                if (is_dir($path)) {
                    $all_paths = array_merge($all_paths, scan_dir_recursive($path));
                } else {
                    $all_paths[] = $path;
                }
            }
            return $all_paths;
        }

        foreach ($mirror as $mirr) {
            if ($mirr != '.' && $mirr != '..') {
                if (!in_array($mirr, $original)) {
                    unlink($mirr); // delete the file
                }
            }
        }

    The above code shows what I did. Here my copy1 folder contains extra files compared to the copy2 folder, so I need these extra files to be deleted. Below are the arrays of the original, the mirror, and the difference of both:

        Original Array   ( [0] => /var/www/html/copy2/Copy (5) of New Text Document.txt
                           [1] => /var/www/html/copy2/Copy of New Text Document.txt )
        Mirror Array     ( [0] => /var/www/html/copy1/Copy (2) of New Text Document.txt
                           [1] => /var/www/html/copy1/Copy (3) of New Text Document.txt
                           [2] => /var/www/html/copy1/Copy (5) of New Text Document.txt )
        Difference Array ( [0] => /var/www/html/copy1/Copy (2) of New Text Document.txt
                           [1] => /var/www/html/copy1/Copy (3) of New Text Document.txt
                           [2] => /var/www/html/copy1/Copy (5) of New Text Document.txt )

    When I iterate a loop to delete based on the difference array, all files get deleted, as the displayed output shows. How can I rectify this? The deletion loop is given below:

        $dirs_to_delete = array();
        foreach ($diff_path as $path) {
            if (is_dir($path)) {
                $dirs_to_delete[] = $path;
            } else {
                unlink($path);
            }
        }
        while ($dir = array_pop($dirs_to_delete)) {
            rmdir($dir);
        }
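    The difference array hints at the bug (an editorial sketch, not part of the original question): absolute paths under /copy1 can never match absolute paths under /copy2, so every mirror file looks "extra"; note that "Copy (5)" exists under both roots yet still appears in the difference. Comparing paths relative to each root fixes that; the two root strings below are taken from the question:

        // Sketch: strip each root prefix before comparing.
        $src_root = '/var/www/html/copy2';
        $dst_root = '/var/www/html/copy1';

        $original_rel = array();
        foreach ($original as $p) {
            $original_rel[] = substr($p, strlen($src_root));
        }

        foreach ($mirror as $path) {
            $rel = substr($path, strlen($dst_root));
            if (!in_array($rel, $original_rel) && is_file($path)) {
                unlink($path); // only files with no counterpart under the source root
            }
        }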

    Read the article

  • How to write unit tests to make sure a stored procedure deletes rows from the database?

    - by aspdotnetuser
    I'm new to unit testing and I need some help with the following. I have created a small project to help me learn how to write unit tests. The functionality for one of the forms in my application deletes a user from the User table (and related rows in mapping tables). Currently, the unit test I have created sets up the required objects and then calls the business rules method (passing in the user id), which calls the data access method to execute the stored procedure that deletes the rows in the tables. Is this the correct way to test whether something is being deleted successfully? Should the unit test / setup method first insert some test data which the unit test then deletes?
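    A common shape for such a test (an editorial sketch; the helper and class names are hypothetical, and strictly speaking this is an integration test since it touches the database) seeds a known row, runs the delete, and asserts the rows are gone:

        // Sketch (NUnit-style): arrange, act, assert.
        [Test]
        public void DeleteUser_RemovesUserAndMappingRows()
        {
            int userId = TestDb.InsertTestUser("unit-test-user"); // arrange: seed a known row
            new UserBusinessRules().DeleteUser(userId);           // act: runs the stored procedure
            Assert.IsFalse(TestDb.UserExists(userId));            // assert: user row gone
            Assert.IsFalse(TestDb.UserMappingsExist(userId));     // mapping rows gone too
        }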

    Read the article

  • Is it reasonable for REST resources to be singular and plural?

    - by Evan
    I have been wondering if, rather than a more traditional layout like this:

        api/Products
            GET     // gets product(s) by id
            PUT     // updates product(s) by id
            DELETE  // deletes product(s) by id
            POST    // creates product(s)

    it would be more useful to have a singular and a plural, for example:

        api/Product
            GET     // gets a product by id
            PUT     // updates a product by id
            DELETE  // deletes a product by id
            POST    // creates a product

        api/Products
            GET     // gets a collection of products by id
            PUT     // updates a collection of products by id
            DELETE  // deletes a collection of products (not the products themselves)
            POST    // creates a collection of products based on filter parameters passed

    So, to create a collection of products you might do:

        POST api/Products    {data: filters}    // returns api/Products/<id>

    And then, to reference it, you might do:

        GET api/Products/<id>    // returns array of products

    In my opinion, the main advantage of doing things this way is that it allows for easy caching of collections of products. One might, for example, put a lifetime of an hour on collections of products, thus drastically reducing the calls on a server. Of course, I currently only see the good side of doing things this way; what's the downside?

    Read the article

  • Windows system restore deletes various executables and *.js files. How does it decide which files to delete?

    - by Leftium
    I restored my system from a Windows System Restore point. It solved some issues I was having, but introduced other strange problems (like my optical drive disappearing). One thing that surprised me was that several files from my Web2Py installation were deleted: the executables and *.js files, and possibly some others (like favicon.ico). I did not expect this because Web2Py is basically a portable, standalone application: you just unzip it and run the executable inside, so nothing should be registered with Windows. My question is: what files does Windows System Restore delete, and how does it decide this? I'm just wondering what other files I'm missing and whether there's a way to restore them (without rolling back the restore point). Perhaps it scans for certain file types (like exe, js, ico, dll) with a creation date after the restore point's creation date? Some other people who experienced a similar problem: Dropbox: Lost Files; User files missing after run system restore. Update: I found some more references on how Windows System Restore works: Understanding how System Restore in Windows Vista treats executable files; Why Vista's System Restore is Dangerous and What to do About it

    Read the article

  • How do large blobs affect SQL delete performance, and how can I mitigate the impact?

    - by Max Pollack
    I'm currently experiencing a strange issue that my understanding of SQL Server doesn't quite mesh with. We use SQL as the file storage for our internal storage service, and our database has about half a million rows in it. Most of the files (86%) are 1 MB or under, but even on fresh copies of our database, where we simply populate the table with data for the purposes of a test, it appears that rows with large amounts of data stored in a BLOB frequently cause timeouts when our SQL Server is under load. My understanding of how SQL Server deletes rows is that it's a garbage collection process, i.e. the row is marked as a ghost and is later deleted by the ghost cleanup process after the changes are copied to the transaction log. This suggests to me that, regardless of the size of the data in the blob, row deletion should be close to instantaneous. However, when deleting these rows we are definitely experiencing large numbers of timeouts and astoundingly low performance. In our test data set, it's files over 30 MB that cause this issue. This is an edge case; we don't frequently encounter these, and even though we're looking into SQL FILESTREAM as a solution to some of our problems, we're trying to narrow down where these issues are originating from. We ARE performing our deletes inside of a transaction. We're also performing updates to metadata such as file size stats, but these exist in a separate table away from the file data itself. Hierarchy data is stored in the table that contains the file information. Really, in the end it's not so much what we're doing around the deletes that matters; we just can't find any references to low delete performance on rows that contain a large amount of data in a BLOB. We are trying to determine if this is even an avenue worth exploring, or if it has to be one of our processes around the delete that's causing the issue. Are there any situations in which this could occur? Is it common for a database server to come to the point of complete timeouts when many of these deletes are occurring simultaneously? Is there a way to combat this issue if it exists? (cross-posted from StackOverflow)
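    One way to take the blob cost out of the user-facing transaction (an editorial sketch; the table and column names are assumptions, not from the original post) is a soft-delete flag plus a background job that purges flagged rows in small batches, so each short transaction commits and releases locks quickly:

        -- Sketch (T-SQL): mark now, purge later in batches.
        UPDATE dbo.StoredFiles SET IsDeleted = 1 WHERE FileId = @FileId;

        -- Background job:
        WHILE 1 = 1
        BEGIN
            DELETE TOP (20) FROM dbo.StoredFiles WHERE IsDeleted = 1;
            IF @@ROWCOUNT = 0 BREAK;
        END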

    Read the article

  • Jet Database (MS Access) ExecuteNonQuery - can I make it faster?

    - by bluebill
    Hi all, I have this generic routine that I wrote that takes a list of SQL strings and executes them against the database. Is there any way I can make this work faster? Typically it'll see maybe 200 inserts or deletes or updates at a time, and sometimes a mixture of updates, inserts and deletes. Would it be a good idea to separate the queries by type (i.e. group inserts together, then updates and then deletes)? I am running this against an MS Access database and using VB.NET 2005.

        Public Function ExecuteNonQuery(ByVal sql As List(Of String), ByVal dbConnection As String) As Integer
            If sql Is Nothing OrElse sql.Count = 0 Then Return 0
            Dim recordCount As Integer = 0
            Using connection As New OleDb.OleDbConnection(dbConnection)
                connection.Open()
                Dim transaction As OleDb.OleDbTransaction = connection.BeginTransaction()
                'Using cmd As New OleDb.OleDbCommand()
                Using cmd As OleDb.OleDbCommand = connection.CreateCommand
                    cmd.Connection = connection
                    cmd.Transaction = transaction
                    For Each s As String In sql
                        If Not String.IsNullOrEmpty(s) Then
                            cmd.CommandText = s
                            recordCount += cmd.ExecuteNonQuery()
                        End If
                    Next
                    transaction.Commit()
                End Using
            End Using
            Return recordCount
        End Function
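    If many of those 200 statements share a shape, one sketch worth trying (an editorial addition; the table, column, and variable names are assumptions) is a single prepared, parameterized command reused inside the existing transaction, which saves Jet from re-parsing SQL text on every call:

        ' Sketch: one parameterized INSERT reused for every row.
        Using cmd As OleDb.OleDbCommand = connection.CreateCommand()
            cmd.Transaction = transaction
            cmd.CommandText = "INSERT INTO LogEntries (Message) VALUES (?)"
            Dim p As OleDb.OleDbParameter = cmd.Parameters.Add("@msg", OleDb.OleDbType.VarWChar)
            cmd.Prepare()
            For Each msg As String In messages
                p.Value = msg
                recordCount += cmd.ExecuteNonQuery()
            Next
        End Using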

    Read the article

  • Entity Framework v1 … Brief Synopsis and Tips – Part 2

    - by Rohit Gupta
    Using Entity Framework with ASMX Web Services and WCF Web Services: If you use an ASMX web service to expose Entity objects from Entity Framework, then the ASMX web service does not include object graphs; one workaround is to use the Facade pattern or to use a WCF service. The other important aspect of using ASMX web services along with Entity Framework is that the ASMX client is not aware of the existence of EF v1, since the client solely deals with C# objects (not EntityObjects or an ObjectContext). Since the client is not aware of the ObjectContext, the client cannot participate in change tracking, because the client only receives the current values and not the original values when the service sends the Entity objects to the client. Thus there are two drawbacks to using Entity Framework with an ASMX web service: 1. Object state is not maintained... so to overcome this limitation we need to insert/update a single entity at a time and retrieve the original values for the entity being updated on the server/service end before calling SaveChanges. 2. ASMX does not maintain object graphs... i.e. Customer.Reservations or Customer.Reservations.Trip relationships are not maintained; you need to send these relationships separately from service to client. A WCF web service overcomes the object-graph limitation of ASMX, but we need to ensure that we populate all the non-null scalar properties of all the objects in the object graph before calling Update. A WCF web service still cannot overcome the second limitation, tracking changes to entities at the client end. Also note that the "Customer" class on the client is very different from the "Customer" class in the Entity Framework model entities. They are incompatible with each other, hence we cannot cast one to the other. However, the .NET Framework translates the client "Customer" entity to the EF v1 model "Customer" entity once the entity is serialized back on the ASMX server end. If you need change tracking enabled on the client, then you need WCF Data Services, which is available with VS 2010.

    ======================================================================================================

    In WCF, when adding an object that has relationships, the framework assumes that every object in the object graph needs to be added to the store. For example, in a Customer.Reservations.Trip object graph, when a Customer entity is added to the store, EF v1 assumes that it needs to add a Reservations collection and also Trips for each Reservation. Thus, if we need to use existing Trips for reservations, we need to ensure that we null out the Trip object reference from Reservations and set the TripReference to the EntityKey of the desired Trip instead.

    ======================================================================================================

    Understanding Relationships and Associations in EF v1: The golden rule of EF is that it does not load entities/relationships unless you ask it to explicitly do so. However, there is one exception to this rule. This exception happens when you attach/detach entities from the ObjectContext. If you detach an entity in an object graph from the ObjectContext, then the ObjectContext removes the ObjectStateEntry for this entity and all the relationship objects associated with this entity. For example, in Customer.Order.OrderDetails, if the Customer entity is detached from the ObjectContext, then you cannot traverse to the Order and OrderDetails entities (which still exist in the ObjectContext) from the Customer entity (which does not exist in the ObjectContext). Conversely, if you JOIN an entity that is not in the ObjectContext with an entity that is in the ObjectContext, then the first entity will automatically be added to the ObjectContext, since relationships for the two entities need to exist in the ObjectContext.

    =========================================================

    You cannot attach an EntityCollection to an entity through its navigation property; e.g., you cannot code myContact.Addresses = myAddressEntityCollection.

    ==========================================================

    Cascade Deletes in the EDM: The designer does not support specifying cascade deletes for an entity. To enable cascade deletes on an entity in the EDM, use the Association definition in the CSDL for the entity. For example, SalesOrderDetail (SOD) has a foreign key relationship with SalesOrderHeader (SalesOrderHeader 1 : SalesOrderDetail *). If you specify a cascade delete on the SalesOrderHeader entity, then calling DeleteObject on a SalesOrderHeader (SOH) entity will send delete commands for the SOH record and all the SOD records that reference it.

    ==========================================================

    As a good design practice, if you use cascade deletes, ensure that the cascade delete facet is used both in the EDM and in the database, even though it is not absolutely mandatory to have cascade deletes in both places (the cascade delete spec on the SOH entity in the EDM alone will ensure that the SOH record and all related SOD records are deleted from the database, even if you don't have cascade delete configured on the SOD table in the database).

    ==============================================================

    Maintaining relationships in code: When setting a navigation property of an entity (for example, setting the Contact navigation property of an Address entity), the following rules apply: If both objects are detached, no relationship object will be created; you are simply setting a property the CLR way. If both objects are attached, a relationship object will be created. If only one of the objects is attached, the other will become attached and a relationship object will be created; if that detached object is new, when it is attached to the context its EntityState will be Added. One important rule to remember regarding synchronizing the EntityReference.Value and EntityReference.EntityKey properties: when attaching an entity which has an EntityReference (e.g. an Address entity with a ContactReference), the Value property takes precedence, and if the Value and EntityKey are out of sync, the EntityKey will be updated to match the Value.

    ======================================================

    If you call the .Load() method on a detached entity, the .Load() operation will throw an exception. There is one exception to this rule: if you load entities using MergeOption.NoTracking, you will be able to call .Load() on such entities, since these entities are accessible by the ObjectContext. So the bottom line is that we need the ObjectContext in order to call .Load() for deferred loading on an EntityReference or EntityCollection. Another rule to remember is that you cannot call .Load() on entities in the EntityState.Added state, since the ObjectContext uses the EntityKey of the primary (parent) entity when loading the related (child) entity, not the EntityKey of the child (even if the EntityKey of the child is present before calling .Load()).

    ======================================================

    You can use ObjectContext.Add() to add an entity to the ObjectContext and set the EntityState of the new entity to EntityState.Added; here no relationships are added or updated. You can also use the EntityCollection.Add() method to add an entity to another entity's related EntityCollection; e.g., a Contact has an Addresses EntityCollection, so to add a new address use contact.Addresses.Add(newAddress). Note that if the entity does not already exist in the ObjectContext, then calling contact.Addresses.Add(newAddress) will cause a new Address entity to be added to the ObjectContext with EntityState.Added, and it will also add a RelationshipEntry (a relationship object) with EntityState.Added which connects the contact with the new address. If the entity already exists in the ObjectContext (being part of the theOtherContact.Addresses collection), then calling contact.Addresses.Add(existingAddress) will add two RelationshipEntry objects to the ObjectStateEntry collection, one with EntityState.Deleted and the other with EntityState.Added. This implies that the existingAddress entity is removed from the theOtherContact.Addresses collection and added to the contact.Addresses collection, effectively reassigning the address entity from theOtherContact to contact. This is called moving an existing entity to a new object graph.

    ======================================================

    You usually use the ObjectContext.Attach() and EntityCollection.Attach() methods when you need to reconstruct the object graph after deserializing the objects as received from an ASMX web service client. Attach is usually used to connect existing entities in the ObjectContext. When EntityCollection.Attach() is called, the EntityState of the RelationshipEntry (the relationship object) remains EntityState.Unchanged, whereas when EntityCollection.Add() is called, the EntityState of the relationship object changes to EntityState.Added or EntityState.Deleted as the situation demands.

    =========================================================

    LINQ to Entities tips: SelectMany does an INNER JOIN by default. For example:

        from c in Contact
        from a in c.Address
        select c

    This will do an inner join between the Contacts and Addresses tables and return only those contacts that have an address.

    ========================================================

    Group joins do a LEFT JOIN by default. For example:

        from a in Address
        join c in Contact on a.Contact.ContactID equals c.ContactID into g
        where a.CountryRegion == "US"
        select g

    This query will do a left join on the Contact table and return contacts that have an address in the "US" region. The following query:

        from c in Contact
        join a in Address.Where(a1 => a1.CountryRegion == "US")
            on c.ContactID equals a.Contact.ContactID into addresses
        select new { c, addresses }

    will do a left join on the Address table and return all contacts. Of these contacts, only those with an address in the "US" region will have their Addresses EntityCollection populated; the other contacts will have 0 addresses in the collection (even if addresses for those contacts exist in the database but are in a different region).

    ========================================================

    LINQ to Entities does not support DefaultIfEmpty(); instead, use the .Include("Address") query builder method to do a LEFT JOIN, or use group joins if you need more control, such as filtering on the Addresses EntityCollection of the Contact entity.

    ===================================================================

    Use CreateSourceQuery() on the EntityReference or EntityCollection if you need to add filters during deferred loading of entities (deferred loading in EF v1 happens when you call the Load() method on the EntityReference or EntityCollection). For example:

        var cust = context.Contacts.OfType<Customer>().First();
        var sq = cust.Reservations.CreateSourceQuery()
                     .Where(r => r.ReservationDate > new DateTime(2008, 1, 1));
        cust.Reservations.Attach(sq);

    This populates only those reservations made after Jan 1, 2008. This is the only way (in EF v1) to attach a range of entities to an EntityCollection using the Attach() method.

    ==================================================================

    If you need to get the foreign key value for an entity (e.g. to get the ContactID value from an Address entity), use this:

        address.ContactReference.EntityKey.EntityKeyValues.Where(k => k.Key == "ContactID")

    Read the article

  • Is SQL Server DRI (ON DELETE CASCADE) slow?

    - by Aaronaught
    I've been analyzing a recurring "bug report" (perf issue) in one of our systems related to a particularly slow delete operation. Long story short: it seems that the CASCADE DELETE keys were largely responsible, and I'd like to know (a) if this makes sense, and (b) why it's the case. We have a schema of, let's say, widgets, those being at the root of a large graph of related tables and related-to-related tables and so on. To be perfectly clear, deleting from this table is actively discouraged; it is the "nuclear option" and users are under no illusions to the contrary. Nevertheless, it sometimes just has to be done. The schema looks something like this:

        Widgets
        |
        +--- Anvils (1:1)
        |    |
        |    +--- AnvilTestData (1:N)
        |
        +--- WidgetHistory (1:N)
             |
             +--- WidgetHistoryDetails (1:N)

    Nothing too scary, really. A Widget can be different types, an Anvil is a special type, so that relationship is 1:1 (or more accurately 1:0..1). Then there's a large amount of data - perhaps thousands of rows of AnvilTestData per Anvil collected over time, dealing with hardness, corrosion, exact weight, hammer compatibility, usability issues, and impact tests with cartoon heads. Then every Widget has a long, boring history of various types of transactions - production, inventory moves, sales, defect investigations, RMAs, repairs, customer complaints, etc. There might be 10-20k details for a single widget, or none at all, depending on its age. So, unsurprisingly, there's a CASCADE DELETE relationship at every level here. If a Widget needs to be deleted, it means something's gone terribly wrong and we need to erase any records of that widget ever existing, including its history, test data, etc. Again, nuclear option. Relations are all indexed, statistics are up to date. Normal queries are fast. The system tends to hum along pretty smoothly for everything except deletes. Getting to the point here, finally: for various reasons we only allow deleting one widget at a time, so a delete statement would look like this:

        DELETE FROM Widgets
        WHERE WidgetID = @WidgetID

    Pretty simple, innocuous-looking delete... that takes over 2 minutes to run, for a widget with no data! After slogging through execution plans I was finally able to pick out the AnvilTestData and WidgetHistoryDetails deletes as the sub-operations with the highest cost. So I experimented with turning off the CASCADE (but keeping the actual FK, just setting it to NO ACTION) and rewriting the script as something very much like the following:

        DECLARE @AnvilID int
        SELECT @AnvilID = AnvilID
        FROM Anvils
        WHERE WidgetID = @WidgetID

        DELETE FROM AnvilTestData
        WHERE AnvilID = @AnvilID

        DELETE FROM WidgetHistory
        WHERE HistoryID IN (
            SELECT HistoryID
            FROM WidgetHistory
            WHERE WidgetID = @WidgetID)

        DELETE FROM Widgets
        WHERE WidgetID = @WidgetID

    Both of these "optimizations" resulted in significant speedups, each one shaving nearly a full minute off the execution time, so that the original 2-minute deletion now takes about 5-10 seconds - at least for new widgets, without much history or test data. Just to be absolutely clear, there is still a CASCADE from WidgetHistory to WidgetHistoryDetails, where the fanout is highest; I only removed the one originating from Widgets. Further "flattening" of the cascade relationships resulted in progressively less dramatic but still noticeable speedups, to the point where deleting a new widget was almost instantaneous once all of the cascade deletes to larger tables were removed and replaced with explicit deletes. I'm using DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE before each test. I've disabled all triggers that might be causing further slowdowns (although those would show up in the execution plan anyway). And I'm testing against older widgets, too, and noticing a significant speedup there as well; deletes that used to take 5 minutes now take 20-40 seconds. Now I'm an ardent supporter of the "SELECT ain't broken" philosophy, but there just doesn't seem to be any logical explanation for this behaviour other than crushing, mind-boggling inefficiency of the CASCADE DELETE relationships. So, my questions are: Is this a known issue with DRI in SQL Server? (I couldn't seem to find any references to this sort of thing on Google or here in SO; I suspect the answer is no.) If not, is there another explanation for the behaviour I'm seeing? If it is a known issue, why is it an issue, and are there better workarounds I could be using?

    Read the article

  • How to recover an unsaved PSD file on Mac OS X

    - by cenk
    Adobe Photoshop creates temporary *.psb files for emergency recovery at this path: ~/Library/Application Support/Adobe/Adobe Photoshop CS6/AutoRecover. The files created have names like _Untitled-10FDB62ECBABBFF5C8EAD958EBC9CFAE2E.psb, with the current user:group as the designated owner. If you save the file you are working on OR you hit "Don't Save" when prompted, the temporary files are deleted. Now, the system creates and deletes these files. I am trying to recover the emergency file, but I think the "undelete" utilities were created assuming the user deletes the file - like going into the trash bin and then emptying the trash... Does anyone have experience with this? Thanks.

    Read the article

  • Data that has been deleted in P6: how is it updated in Analytics?

    - by Jeffrey McDaniel
    In P6 Reporting Database 2.0, the ETL process looked to the refrdel table in the P6 PMDB to determine which projects were deleted. The refrdel table could not be cleared out between ETL runs, or those deletes would be lost; after the ETL process is run, the refrdel can be cleared out. It is important to keep any purging of the refrdel on a consistent cycle so the ETL process can pick up these deletes and process them accordingly. In P6 Reporting Database 2.2 and higher, the Extended Schema is used as the data source. In the Extended Schema, deleted data is filtered out by the views. The Extended Schema services handle any interaction with the refrdel table, so the concern about timing refrdel cleanup around ETL runs no longer applies as of this release. In the Extended Schema tables (e.g. TaskX) there can still be deleted data present; the Extended Schema views join on the primary PMDB tables (e.g. Task) and filter out any deleted data. Any data that was deleted but remains in the Extended Schema tables can be cleaned out at a designated time by running the cleanup procedure documented in the P6 Extended Schema white paper. This can be run occasionally, but it is not necessary to run it often unless large amounts of data have been deleted.

    Read the article

  • Is rsync --delete safe in case of disk failure

    - by enedene
    I have two data hard drives in my Linux server and I use the second as a backup of the first drive. I use rsync for that purpose. An example would be:

        rsync -r -v --delete /media/disk1/ /media/disk2/

    What this does is copy every file/directory from /media/disk1/ to /media/disk2/, but also delete any difference. For example, let's say that files A and B, but not file C, are on disk1, and on disk2 there are no A and B files, but there is C. The result would be that after the command, disk2 would have files A and B, but file C would be deleted, just like on disk1. Now, a rather disastrous scenario has crossed my mind: what if disk1 dies? The system continues to work, since the system files are on my system disk, but when rsync tries to back up my data to disk2 from the broken disk1, it would delete all the files from disk2 because it can't read anything on disk1. Is this a possible scenario, or is there protection against it built into rsync?
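    A common safeguard (an editorial sketch; the marker filename is an assumption, not from the original question) is to keep a sentinel file on the source disk and refuse to sync when it is missing, so an unreadable or unmounted disk1 cannot empty disk2:

        #!/bin/sh
        # Sketch: only sync when the source still looks alive.
        if [ -e /media/disk1/.disk1_ok ]; then
            rsync -r -v --delete /media/disk1/ /media/disk2/
        else
            echo "disk1 marker missing; refusing to sync" >&2
            exit 1
        fi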

    Read the article

  • Clean logging with BASH

    - by Matt Krouse
    I have a script that deletes files 7 days or older and then logs them to a folder:

        log=$HOME/Deleted/$(date)
        find $HOME/OldLogFiles/ -type f -mtime +7 -delete -print > "$log"

    It logs and deletes everything correctly, but when I open the log file for viewing, it's very sloppy. Example file output (when opened in Notepad):

        /home/u0146121/OldLogFiles/file1.txt/home/u0146121/OldLogFiles/file2.txt/home/u0146121/OldLogFiles/file3.txt

    Is there any way to log the file nicer and cleaner? Maybe with the filename, date deleted, and how old it was? Any suggestions help!
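    The run-together output is a line-ending issue (an editorial note): find writes Unix \n endings, which classic Notepad does not render as line breaks. A sketch using GNU find's -printf (the log filename format is an assumption) writes Windows-friendly \r\n endings and adds the deletion date plus each file's last-modified date:

        log="$HOME/Deleted/$(date +%F).log"
        # Sketch: print path, today's date, and the file's own timestamp,
        # CRLF-terminated; -printf runs before -delete for each matched file.
        find "$HOME/OldLogFiles/" -type f -mtime +7 \
            -printf "%p\tdeleted: $(date +%F)\tlast modified: %TY-%Tm-%Td\r\n" \
            -delete >> "$log"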

    Read the article

  • Date Tracking in Oracle HRMS

    - by Manoj Madhusoodanan
    Update Date Track Modes

    To maintain employee data effectively, Oracle HCM uses a mechanism called date tracking. The main motive behind the date track modes is to maintain past, present and future data effectively. The various update date track modes are:

        CORRECTION           : Overwrites the data; no history is maintained.
        UPDATE               : Keeps the history; the new change takes effect as of the effective date.
        UPDATE_CHANGE_INSERT : Inserts the record and preserves the future.
        UPDATE_OVERRIDE      : Inserts the record and overrides the future.

    Action: Created Employee #24 on 01-JAN-2012. The record in PER_ALL_PEOPLE_F is as shown below.

        Effective Start Date | Effective End Date | Employee Number | Marital Status | Object Version Number
        01-JAN-2012          | 31-DEC-4712        | 24              |                | 2

    Action: Updated the record in CORRECTION mode.

        Effective Start Date | Effective End Date | Employee Number | Marital Status | Object Version Number
        01-JAN-2012          | 31-DEC-4712        | 24              | Single         | 3

    Action: Updated the record in UPDATE mode effective 01-JUN-2012 with Marital Status = Married.

        Effective Start Date | Effective End Date | Employee Number | Marital Status | Object Version Number
        01-JAN-2012          | 31-MAY-2012        | 24              | Single         | 4
        01-JUN-2012          | 31-DEC-4712        | 24              | Married        | 5

    Action: Updated the record in UPDATE mode effective 01-SEP-2012 with Marital Status = Divorced.

        Effective Start Date | Effective End Date | Employee Number | Marital Status | Object Version Number
        01-JAN-2012          | 31-MAY-2012        | 24              | Single         | 4
        01-JUN-2012          | 31-AUG-2012        | 24              | Married        | 6
        01-SEP-2012          | 31-DEC-4712        | 24              | Divorced       | 7

    Action: Updated the record in UPDATE_CHANGE_INSERT mode effective 01-MAR-2012 with Marital Status = Living Together.

        Effective Start Date | Effective End Date | Employee Number | Marital Status   | Object Version Number
        01-JAN-2012          | 29-FEB-2012        | 24              | Single           | 8
        01-MAR-2012          | 31-MAY-2012        | 24              | Living Together  | 9
        01-JUN-2012          | 31-AUG-2012        | 24              | Married          | 6
        01-SEP-2012          | 31-DEC-4712        | 24              | Divorced         | 7

    Action: Updated the record in UPDATE_OVERRIDE mode effective 01-AUG-2012 with Marital Status = Divorced.

        Effective Start Date | Effective End Date | Employee Number | Marital Status   | Object Version Number
        01-JAN-2012          | 29-FEB-2012        | 24              | Single           | 8
        01-MAR-2012          | 31-MAY-2012        | 24              | Living Together  | 9
        01-JUN-2012          | 31-JUL-2012        | 24              | Married          | 10
        01-AUG-2012          | 31-DEC-4712        | 24              | Divorced         | 11

    Delete Date Track Modes

    The various delete date track modes are:

        ZAP                : Wipes all records.
        DELETE             : Deletes the current record.
        FUTURE_CHANGE      : Deletes the current and future changes.
        DELETE_NEXT_CHANGE : Deletes the next change.

    The element entry records are shown below.

        Effective Start Date | Effective End Date | Element Entry Id | Object Version Number
        01-JAN-2012          | 12-OCT-2012        | 129831           | 3
        13-OCT-2012          | 19-OCT-2012        | 129831           | 5
        20-OCT-2012          | 31-DEC-4712        | 129831           | 6

    Action: Deleted the record in ZAP mode effective 14-JAN-2012.

        No rows.

    Action: Deleted the record in DELETE mode effective 14-OCT-2012.

        Effective Start Date | Effective End Date | Element Entry Id | Object Version Number
        01-JAN-2012          | 12-OCT-2012        | 129831           | 3
        13-OCT-2012          | 14-OCT-2012        | 129831           | 6

    Action: Deleted the record in FUTURE_CHANGE mode effective 14-JAN-2012.

        Effective Start Date | Effective End Date | Element Entry Id | Object Version Number
        01-JAN-2012          | 31-DEC-4712        | 129831           | 4

    Action: Deleted the record in NEXT_CHANGE mode effective 14-JAN-2012.

        Effective Start Date | Effective End Date | Element Entry Id | Object Version Number
        01-JAN-2012          | 19-OCT-2012        | 129831           | 4
        20-OCT-2012          | 31-DEC-4712        | 129831           | 6
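    As an aside (an editorial sketch, not from the original post): date-tracked tables like PER_ALL_PEOPLE_F are normally queried "as of" a date by bracketing it between the effective dates, which is how the histories above are read back:

        -- Sketch: fetch the row version in effect on a given date.
        SELECT *
          FROM per_all_people_f
         WHERE employee_number = '24'
           AND :as_of_date BETWEEN effective_start_date AND effective_end_date;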

    Read the article

  • Breaking 1NF to model subset constraints. Does this sound sane?

    - by Chris Travers
    My first question here. Apologies if it is in the wrong forum, but this seems pretty conceptual. I am looking at doing something that goes against conventional wisdom and want to get some feedback as to whether this is totally insane or will result in problems, so critique away! I am on PostgreSQL 9.1 but may be moving to 9.2 for this part of the project. To reiterate: does it seem sane to break 1NF in this way? I am not looking for debugging code so much as where people see problems that this might lead.

    The Problem

    In double-entry accounting, financial transactions are journal entries with an arbitrary number of lines. Each line has either a left value (debit) or a right value (credit), which can be modelled as a single value with negatives as debits and positives as credits or vice versa. The sum of all debits and credits must equal zero (so if we go with a single amount field, sum(amount) must equal zero for each financial journal entry). SQL-based databases, pretty much required for this sort of work, have no way to express this sort of constraint natively, and so any approach to enforcing it in the database seems rather complex.

    The Write Model

    The journal entries are append-only. There is a possibility we will add a delete model, but it will be subject to a different set of restrictions and so is not applicable here. If and when we allow deletes, we will probably do them using a simple ON DELETE CASCADE designation on the foreign key, and require that deletes go through a dedicated stored procedure which can enforce the other constraints. So inserts and selects have to be accommodated, but updates and deletes do not for this task.

    My Proposed Solution

    My proposed solution is to break first normal form and model constraints on arrays of tuples, with a trigger that breaks the rows out into another table.

        CREATE TABLE journal_line (
            entry_id bigserial primary key,
            account_id int not null references account(id),
            journal_entry_id bigint not null, -- adding references later
            amount numeric not null
        );

    I would then add "table methods" to extract debits and credits for reporting purposes:

        CREATE OR REPLACE FUNCTION debits(journal_line)
        RETURNS numeric LANGUAGE sql IMMUTABLE AS
        $$ SELECT CASE WHEN $1.amount < 0 THEN $1.amount * -1 ELSE NULL END; $$;

        CREATE OR REPLACE FUNCTION credits(journal_line)
        RETURNS numeric LANGUAGE sql IMMUTABLE AS
        $$ SELECT CASE WHEN $1.amount > 0 THEN $1.amount ELSE NULL END; $$;

    Then the journal entry table (simplified for this example):

        CREATE TABLE journal_entry (
            entry_id bigserial primary key, -- no natural keys :-(
            journal_id int not null references journal(id),
            date_posted date not null,
            reference text not null,
            description text not null,
            journal_lines journal_line[] not null
        );

    Then a table method and check constraint:

        CREATE OR REPLACE FUNCTION running_total(journal_entry)
        RETURNS numeric LANGUAGE sql IMMUTABLE AS
        $$ SELECT sum(amount) FROM unnest($1.journal_lines); $$;

        ALTER TABLE journal_entry
            ADD CHECK (running_total(journal_entry) = 0);

        ALTER TABLE journal_line
            ADD FOREIGN KEY (journal_entry_id) REFERENCES journal_entry(entry_id);

    And finally we'd have a breakout trigger:

        CREATE OR REPLACE FUNCTION je_breakout()
        RETURNS TRIGGER LANGUAGE plpgsql AS
        $$
        BEGIN
            IF TG_OP = 'INSERT' THEN
                INSERT INTO journal_line (journal_entry_id, account_id, amount)
                SELECT NEW.entry_id, account_id, amount
                FROM unnest(NEW.journal_lines);
                RETURN NEW;
            ELSE
                RAISE EXCEPTION 'Operation Not Allowed';
            END IF;
        END;
        $$;

    And finally:

        CREATE TRIGGER je_breakout_tg
        AFTER INSERT OR UPDATE OR DELETE ON journal_entry
        FOR EACH ROW EXECUTE PROCEDURE je_breakout();

    Of course the example above is simplified. There will be a status table that will track approval status, allowing for separation of duties, etc. However, the goal here is to prevent unbalanced transactions. Any feedback? Does this sound entirely insane?

    Standard Solutions?

    In getting to this point, I have looked at four different current ERP solutions to this problem: 1. Represent every line item as a debit and a credit against different accounts. 2. Use foreign keys against the line item table to enforce an eventual running total of 0. 3. Use constraint triggers in PostgreSQL. 4. Force all validation solely through the app logic. My concern is that #1 is pretty limiting and very hard to audit internally; it's not programmer-transparent, and so it strikes me as difficult to work with in the future. The second strikes me as very complex, requiring a series of constraints and foreign keys against self to make work, and is therefore hard to sort out, at least in my mind, and thus hard to work with. The fourth could be done, as we force all access through stored procedures anyway, and this is the most common solution (have the app total things up and throw an error otherwise). However, I think proof that a constraint is followed is superior to test cases, and so the question becomes whether this in fact generates insert anomalies rather than solving them. If this is a solved problem, it isn't the case that everyone agrees on the solution...

    Read the article

  • In Visual Studio 2008, Emacs mode - how can you enable "overwrite"?

    - by Abby Fichtner
    Using Visual Studio 2008 with the Emacs keyboard mapping scheme enabled: if I select text and try to paste over it, it INSERTS the new text rather than replacing it. Also, if I select text and hit DELETE, it deletes the first character AFTER the selected text (just as if I didn't have any text selected). Does anyone know how to fix this so that I get the standard Windows behavior? That is: if I select text and paste over it, it replaces the selected text with what I pasted in; and if I select text and hit the DELETE key, it actually deletes the text I have selected. Thanks! Abby

    Read the article
