Search Results

Search found 79588 results on 3184 pages for 'sql data storage'.


  • SQL Table stored as a Heap - the dangers within

    - by MikeD
    Nearly all of the time I create a table, I include a primary key, and often that PK is implemented as a clustered index. Those two don't always have to go together, but in my world they almost always do. On a recent project, I was working on a data warehouse and a set of SSIS packages to import data from an OLTP database into my data warehouse. The data I was importing from the business database into the warehouse was mostly new rows, sometimes updates to existing rows, and sometimes deletes. I decided to use the MERGE statement to implement the insert, update, or delete in the data warehouse. I found it quite performant to have a stored procedure that extracted all the new, updated, and deleted rows from the source database and dumped them into a working table in my data warehouse, then to run a stored proc in the warehouse that was the MERGE statement that took the rows from the working table and updated the real fact table.
    USE Warehouse
    CREATE TABLE Integration.MergePolicy (PolicyId int, PolicyTypeKey int, Premium money, Deductible money, EffectiveDate date, Operation varchar(5))
    CREATE TABLE fact.Policy (PolicyKey int identity primary key, PolicyId int, PolicyTypeKey int, Premium money, Deductible money, EffectiveDate date)
    CREATE PROC Integration.MergePolicy as
    begin
    begin tran
    Merge fact.Policy as tgt
    Using Integration.MergePolicy as Src
    On (tgt.PolicyId = Src.PolicyId)
    When not matched by Target then
        Insert (PolicyId, PolicyTypeKey, Premium, Deductible, EffectiveDate)
        values (src.PolicyId, src.PolicyTypeKey, src.Premium, src.Deductible, src.EffectiveDate)
    When matched and src.Operation = 'U' then
        Update set PolicyTypeKey = src.PolicyTypeKey, Premium = src.Premium, Deductible = src.Deductible, EffectiveDate = src.EffectiveDate
    When matched and src.Operation = 'D' then
        Delete
    ;
    delete from Integration.MergePolicy
    commit
    end
    Notice that my worktable (Integration.MergePolicy) doesn't have any primary key or clustered index. I didn't think this would be a problem, since it was a relatively small table and was empty after each time I ran the stored proc. For one of the work tables, during the initial loads of the warehouse, about 1.5 million rows were getting inserted, processed, then deleted. Also, because of a bug in the extraction process, the same 1.5 million rows (plus a few hundred more each time) were getting inserted, processed, and deleted. This was being done on a fairly hefty server that was otherwise unused, and no one was paying any attention to the time it was taking. This week I received a backup of this database and loaded it on my laptop to troubleshoot the problem, and of course it took a good ten minutes or more to run the process. However, what seemed strange to me was that after I fixed the problem and happened to run the merge sproc when the work table was completely empty, it still took almost ten minutes to complete. I immediately looked back at the MERGE statement to see if I had some sort of outer join that meant it would be scanning the target table (which had about 2 million rows in it), then turned on the execution plan output to see what was happening under the hood. Running the stored procedure again took a long time, and the plan output didn't show me much - 55% on the MERGE statement and 45% on the DELETE statement, with table scans on the work table in both places. I was surprised at the relative cost of the DELETE statement, because there were really 0 rows to delete, but I was expecting to see the table scans.
    (I was beginning now to suspect that my problem was because the work table was being stored as a heap.) Then I turned on STATISTICS IO and ran the sproc again. The output was quite interesting.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Policy'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'MergePolicy'. Scan count 1, logical reads 433276, physical reads 60, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    I've reproduced the above from memory, so the details aren't exact, but the essential bit was the very high number of logical reads on the table stored as a heap. Even just doing a SELECT Count(*) from Integration.MergePolicy incurred that sort of output, even though the result was always 0. I suppose I should research more on the allocation and deallocation of pages to tables stored as a heap, but I haven't, and my original assumption that a table stored as a heap with no rows would only need to read one page to answer any query was definitely proven wrong. It's likely that some sort of physical defragmentation of the table may have cleaned that up, but it seemed that the easiest answer was to put a clustered index on the table. After doing so, the execution plan showed a clustered index scan, and the IO stats showed only a single page read. (I aborted my first attempt at adding a clustered index on the table because it was taking too long - instead I ran TRUNCATE TABLE Integration.MergePolicy first and then added the clustered index, both of which took very little time.) I suspect I may not have noticed this if I had used TRUNCATE TABLE Integration.MergePolicy instead of DELETE FROM Integration.MergePolicy, since I'm guessing that the truncate operation does some rather quick releasing of pages allocated to the heap table. In the future, I will likely be much more careful to have a clustered index on every table I use, even the working tables. Mike
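    For reference, the fix described in the story above boils down to something like the sketch below, using the worktable from the listing. The index name and key column are illustrative choices, not taken from the original post.
    -- Release the pages the heap has accumulated (fast, minimally logged)
    TRUNCATE TABLE Integration.MergePolicy;
    -- Add a clustered index so the table stays compact after future deletes
    CREATE CLUSTERED INDEX CIX_MergePolicy_PolicyId
        ON Integration.MergePolicy (PolicyId);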

    Read the article

  • Deleted Partition Recovery

    - by ankur.trapasiya
    Recently I was installing Ubuntu 12.04 on my system. There were 4 partitions on my system; I selected one of the four for the installation and chose the option to resize that partition. Initially my partition was 100+ GB in size and I created another partition out of it of size 15 GB (EXT4). The moment I changed this partition structure, my original partition got lost along with its data, and I am left with a 50 GB partition and 50 GB of unallocated free space. The data I have lost means a lot to me and I want to recover it. So is there any way I can recover it? And I hadn't checked the "format" option while resizing the partition. Thanks in advance.

    Read the article

  • Fast set indexing data structure for superset retrieval

    - by Asterios
    I am given a set of sets: {{a,b}, {a,b,c}, {a,c}, {a,c,f}} I would like to have a data structure to index those sets such that the following "lookup" is executed fast: find all supersets of a given set. For example, given the set {a,c} the structure would return {{a,b,c}, {a,c,f}, {a,c}} but not {a,b}. Any suggestions? Could this be done with a smart trie-like data structure storing sets after a proper sorting? This data structure is going to be queried a lot. Thus, I'm searching for a structure that might be expensive to build but rather fast to query.
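    This isn't the trie the question asks about, but the same superset lookup can also be sketched as an inverted index; the SQL below (table and column names invented for illustration) shows the shape of the query: a set qualifies if it contains every element of the probe set.
    -- Each stored set is flattened into (set_id, element) rows
    CREATE TABLE set_elements (
        set_id  int     NOT NULL,
        element char(1) NOT NULL,
        PRIMARY KEY (element, set_id));
    -- Supersets of {a, c}: sets containing every element of the probe set
    SELECT set_id
    FROM   set_elements
    WHERE  element IN ('a', 'c')
    GROUP BY set_id
    HAVING COUNT(DISTINCT element) = 2;   -- 2 = size of the probe set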

    Read the article

  • Using Sql Server Change Data Capture with a frequently changing schema

    - by Pete
    We are looking into enabling SQL Server Change Data Capture for a new subsystem we are building. It's not really because we need it, but we are being pushed to have complete history traceability, and CDC would nicely solve this requirement with minimum effort on our part. We follow an agile development process, which in this case means that we frequently make changes to the database schema, e.g. adding new columns, moving data to other columns, etc. We did a small test where we created a table, enabled CDC for that table, and then added a new column to the table. Changes to the new column are not registered in the CDC table. Is there a mechanism to update the CDC table to the new schema, and are there any best practices for dealing with captured data when migrating the database schema?
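    For what it's worth, a CDC capture instance is fixed to the column list that existed when it was created, so new columns are not tracked by it. The usual pattern, sketched below with assumed schema and table names, is to create a second capture instance after the schema change, copy or archive the change data from the old instance, and then drop the original.
    -- New capture instance that includes the newly added column
    EXEC sys.sp_cdc_enable_table
        @source_schema    = N'dbo',
        @source_name      = N'MyTable',          -- assumed table name
        @role_name        = NULL,
        @capture_instance = N'dbo_MyTable_v2';
    -- ...archive or merge rows from the old instance's change table here...
    -- Retire the old capture instance once its data is no longer needed
    EXEC sys.sp_cdc_disable_table
        @source_schema    = N'dbo',
        @source_name      = N'MyTable',
        @capture_instance = N'dbo_MyTable';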

    Read the article

  • How to calculate change in ANSI SQL

    - by morpheous
    I have a table that contains sales data. The data is stored in a table that looks like this: CREATE TABLE sales_data ( sales_time timestamp, sales_amt double ) I need to write parameterized queries that will allow me to do the following: Return the change in sales_amt between times t2 and t1, where t2 and t1 are separated by a time interval (integer) of N. This query will allow for querying for weekly changes in sales (for example). Return the change in the change of sales_amt between times t2 and t1, and times t4 and t3. That is, to calculate the value (val(t2)-val(t1)) - (val(t4)-val(t3)), where t2 and t1 are separated by the same time interval (interval N) as the interval between t4 and t3. This query will allow for querying for changes in weekly changes in sales (for example).
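    A sketch of the first query in plain SQL, using a self-join; :t1 and :t2 are parameter placeholders (the marker syntax varies by driver), and the second query is just the difference of two such results.
    -- Change in sales between time :t1 and time :t2 (= :t1 + interval N)
    SELECT t2.sales_amt - t1.sales_amt AS sales_change
    FROM   sales_data AS t1
    CROSS JOIN sales_data AS t2
    WHERE  t1.sales_time = :t1
      AND  t2.sales_time = :t2;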

    Read the article

  • Converting SQL to LINQ to XML

    - by Morano88
    I'm writing the following code to convert SQL to LINQ and then to XML:
    SqlConnection thisConnection = new SqlConnection(@"Data Source=3BDALLAH-PC;Initial Catalog=XMLC;Integrated Security=True;Pooling=False;");
    thisConnection.Open();
    XElement eventsGive = new XElement("data",
        from c in ??????
        select new XElement("event",
            new XAttribute("start", c.start),
            new XAttribute("end", c.eend),
            new XAttribute("title", c.title),
            new XAttribute("Color", c.Color),
            new XAttribute("link", c.link)));
    Console.WriteLine(eventsGive);
    The name of the table is "XMLC" and I want to refer to it. How can I do that? When I put its name directly VS gives an error. Also when I say thisConnection.XMLC it doesn't work.

    Read the article

  • Recovering data from hard disk after an accidental Ubuntu reinstallation

    - by Saurabh Agarwal
    My computer got wiped accidentally due to a fresh Ubuntu installation. Since the drive contains very important data and code, it would be really great if it could be recovered. It is a 2TB hard drive which had Ubuntu 10.10 earlier. It now has Ubuntu 12.04 installed on it (which I understand occupies ~4GB). The machine has been powered off since. The installation was done from a USB stick with the option that removes the previous Ubuntu installation. Since installation doesn't take a lot of time, I'm inclined to think that the disk wasn't completely formatted and that most of the data is still there. I have no experience with recovery, so a detailed explanation would be very helpful. NOTE: I can arrange an additional 2TB hard disk for copying data. My computer has a fast internet connection and I have other computers connected to the network which I may use to access the previous one as well.

    Read the article

  • Prepare and import data into existing database

    - by Álvaro G. Vicario
    I maintain a PHP application with a SQL Server backend. The DB structure is roughly this:
    lot
    ===
    lot_id (pk, identity)
    lot_code
    building
    ========
    building_id (pk, identity)
    lot_id (fk)
    inspection
    ==========
    inspection_id (pk, identity)
    building_id (fk)
    date
    inspector
    result
    The database already has lots and buildings, and I need to import some inspections. Key points are: it's a one-time initial load; data comes in an Excel file; the Excel data is unaware of DB autogenerated IDs, so inspections must be linked to buildings through their lot_code. What are my options to do such a data load?
    date       inspector  result  lot_code
    ========== ========== ======  ========
    31/12/2009 John Smith Pass    987654X
    28/02/2010 Bill Jones Fail    123456B
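    One way to do the one-time load, sketched below, is to bring the spreadsheet rows into a staging table first and then resolve lot_code through lot to building. The staging table name and types are assumptions, and if a lot carries more than one building, every building on that lot would receive the inspection, which may or may not be what is wanted.
    -- Staging table loaded straight from the spreadsheet
    CREATE TABLE staging_inspection (
        inspection_date date,
        inspector       varchar(100),
        result          varchar(50),
        lot_code        varchar(20));
    -- Resolve lot_code -> lot_id -> building_id and insert the inspections
    INSERT INTO inspection (building_id, date, inspector, result)
    SELECT b.building_id, s.inspection_date, s.inspector, s.result
    FROM   staging_inspection AS s
    JOIN   lot      AS l ON l.lot_code = s.lot_code
    JOIN   building AS b ON b.lot_id   = l.lot_id;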

    Read the article

  • Out-Of-Memory while doing Core Data migration

    - by Kamchatka
    Hello, I'm migrating a Core Data model between two versions of an application. I was storing binary data as blobs in the previous version and I want to take it out of the blobs for performance. My issue is that during the migration Core Data seems to load everything into memory, which leads to low memory warnings and then to my app being killed. Apple documentation suggests the following: http://developer.apple.com/library/mac/documentation/Cocoa/Conceptual/CoreDataVersioning/Articles/vmCustomizingTheProcess.html#//apple_ref/doc/uid/TP40005510-SW9 However, it seems to rely on the large objects having a different mapping applied to them. In my case, all the objects are basically the same and the same mapping has to be applied to each of them. I don't see how I could apply their technique in this case. How should I handle a migration with very large objects?

    Read the article

  • Adjusting Timezone - Convert XML DateTime to SQL DateTime

    - by noob.spt
    We are using a TypedDataSet in our application. Data is passed to a procedure in the form of XML for insert/update. After populating the DataSet with data, the datetime remains the same, though timezone information is added as below. Date in DB: 2009-10-29 18:52:53.43 Date in XML: 2009-10-29T18:52:53.43-05:00 Now when I try to convert the XML below to a SQL DateTime, it adjusts by 5 hours and I get 2009-10-29 23:52:53.430 as the final output, which is wrong. I need to find a way to extract the datetime from the XML snippet below, ignoring the timezone. I have XML in the following format, with timezone offset -05:00 <Order> <EnteredDateTime>2009-10-29T18:52:53.43-05:00</EnteredDateTime> </Order>
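    A sketch of one way to drop the offset on the SQL side before converting, assuming the offset is always the trailing six characters (e.g. -05:00):
    DECLARE @xmlDate varchar(35);
    SET @xmlDate = '2009-10-29T18:52:53.43-05:00';
    -- Strip the trailing "-05:00" and convert the remainder as ISO 8601 (style 126)
    SELECT CONVERT(datetime,
                   LEFT(@xmlDate, LEN(@xmlDate) - 6),
                   126) AS LocalDateTime;   -- 2009-10-29 18:52:53.430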

    Read the article

  • Select row data as ColumnName and Value

    - by Bobcat1506
    I have a history table and I need to select the values from this table in ColumnName, ColumnValue form. I am using SQL Server 2008 and I wasn't sure if I could use the PIVOT function to accomplish this. Below is a simplified example of what I need to accomplish. This is what I have. The table's schema is:
    CREATE TABLE TABLE1 (ID INT PRIMARY KEY, NAME VARCHAR(50))
    The "history" table's schema is:
    CREATE TABLE TABLE1_HISTORY(
        ID INT,
        NAME VARCHAR(50),
        TYPE VARCHAR(50),
        TRANSACTION_ID VARCHAR(50))
    Here is the data from TABLE1_HISTORY:
    ID  NAME  TYPE    TRANSACTION_ID
    1   Joe   INSERT  a
    1   Bill  UPDATE  b
    1   Bill  DELETE  c
    I need to extract the data from TABLE1_HISTORY into this format:
    TransactionId  Type    ColumnName  ColumnValue
    a              INSERT  ID          1
    a              INSERT  NAME        Joe
    b              UPDATE  ID          1
    b              UPDATE  NAME        Bill
    c              DELETE  ID          1
    c              DELETE  NAME        Bill
    Other than upgrading to Enterprise Edition and leveraging the built-in change tracking functionality, what is your suggestion for accomplishing this task?
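    PIVOT rotates rows into columns; what this needs is the opposite operation, UNPIVOT, which is also available in SQL Server 2008. A sketch against the history table shown above (ID is cast to varchar so both unpivoted columns share a type, which UNPIVOT requires):
    SELECT u.TRANSACTION_ID AS TransactionId,
           u.TYPE           AS Type,
           u.ColumnName,
           u.ColumnValue
    FROM (
            SELECT TRANSACTION_ID,
                   TYPE,
                   CAST(ID AS varchar(50)) AS ID,   -- unify types for UNPIVOT
                   NAME
            FROM   TABLE1_HISTORY
         ) AS src
    UNPIVOT (ColumnValue FOR ColumnName IN (ID, NAME)) AS u;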

    Read the article

  • regarding like query operator

    - by stackoverflowuser
    Hi. For the data below (there are many more nodes in the Team Foundation Server table that I need to refer to; this is just a sample):
    Nodes
    ------------------------
    \node1\node2\node3\
    \node1\node2\node5\
    \node1\node2\node3\node4
    I was wondering if I can apply something like (the query below does not give the required results)
    select * from table_a where nodes like '\node1\node2\%\'
    to get the data below:
    \node1\node2\node3\
    \node1\node2\node5\
    And something like (the query below does not give the required results either)
    select * from table_a where nodes like '\node1\node2\%\%\'
    to get:
    \node1\node2\node3\
    \node1\node2\node5\
    \node1\node2\node3\node4
    Can the above be done with the LIKE operator? Please suggest. Thanks.
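    For what it's worth, % also matches backslashes, which is why patterns like the second one can pull in deeper paths. One sketch, assuming the sample data above (rows you want in the first result end with a backslash, and backslash is not a LIKE escape character unless ESCAPE is specified):
    -- Direct children of \node1\node2\ only
    SELECT *
    FROM   table_a
    WHERE  nodes LIKE '\node1\node2\%\'
      AND  nodes NOT LIKE '\node1\node2\%\%\';   -- rules out deeper levels
    -- \node1\node2\ and every descendant underneath it
    SELECT *
    FROM   table_a
    WHERE  nodes LIKE '\node1\node2\%';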

    Read the article

  • LINQ2SQL: orderby note.hasChildren(), name ascending

    - by Peter Bridger
    I have a hierarchical data structure which I'm displaying in a webpage as a treeview. I want the data ordered to first show, alphabetically, the nodes which have no children, and then below these, alphabetically, the nodes which have children. Currently I'm ordering all nodes in one group, which means nodes with children appear next to nodes with no children. I'm using a recursive method to build up the treeview, which has this LINQ code at its heart:
    var filteredCategory = from c in category
                           orderby c.Name ascending
                           where c.ParentCategoryId == parentCategoryId && c.Active == true
                           select c;
    So this is the orderby statement I want to enhance. Shown below is the database table structure:
    [dbo].[Category](
        [CategoryId] [int] IDENTITY(1,1) NOT NULL,
        [Name] [varchar](100) NOT NULL,
        [Level] [tinyint] NOT NULL,
        [ParentCategoryId] [int] NOT NULL,
        [Selectable] [bit] NOT NULL CONSTRAINT [DF_Category_Selectable] DEFAULT ((1)),
        [Active] [bit] NOT NULL CONSTRAINT [DF_Category_Active] DEFAULT ((1))
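    The intended sort is easy to express against the Category table above; the sketch below is T-SQL rather than the LINQ the question uses, with @parentCategoryId as a placeholder parameter: childless nodes first, then nodes with children, alphabetical within each group.
    SELECT c.*
    FROM   dbo.Category AS c
    WHERE  c.ParentCategoryId = @parentCategoryId
      AND  c.Active = 1
    ORDER BY
        CASE WHEN EXISTS (SELECT 1
                          FROM   dbo.Category AS child
                          WHERE  child.ParentCategoryId = c.CategoryId
                            AND  child.Active = 1)
             THEN 1 ELSE 0 END,          -- 0 = no children, sorts first
        c.Name;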

    Read the article

  • Serializing Data Structures in C

    - by src
    I've recently read three separate books on algorithms and data structures, TCP/IP socket programming, and programming with memory. The book about memory briefly discussed the topic of serializing data structures for the purposes of storing them to disk or sending them across a network. I can't help but wonder why the other two books didn't discuss serialization at all. After an unsuccessful web/book search I'm left wondering where I can find a good book/paper/tutorial on serializing data structures in C. Where or how did you learn it?

    Read the article

  • How to expose an entity via alternate keys with spring data rest

    - by dan carter
    Spring-data-rest does a great job exposing entities via their primary key for GET, PUT and DELETE etc. operations. /myentities/123 It also exposes search operations. /myentities/search/byMyOtherKey?myOtherKey=123 In my case the entities have a number of alternate keys. The systems calling us will know the objects by these IDs, rather than by our internal primary key. Is it possible to expose the objects via another URL and have the GET, PUT and DELETE handled by the built-in spring-data-rest controllers? /myentities/myotherkey/456 We'd like to avoid forcing the calling systems to make two requests for each update. I've tried playing with the @RestResource path value, but there doesn't seem to be a way to add additional paths.

    Read the article

  • Using an IN clause in Vb.net to save something to the database using SQL

    - by Rob
    I have a textbox and a button on a form. I wish to run a query (in VB.NET) that builds an UPDATE statement with the IN values. Below is an example of my code:
    myConnection = New SqlConnection("Data Source=sqldb\;Initial Catalog=Rec;Integrated Security=True")
    myConnection.Open()
    myCommand = New SqlCommand("UPDATE dbo.Recordings SET Status = 0 where RecID in ('" & txtRecID.Text & "') ", myConnection)
    ra = myCommand.ExecuteNonQuery()
    myConnection.Close()
    MsgBox("Done!", MsgBoxStyle.Information, "Done")
    When I enter a single value it works, but when I enter values separated by commas it throws an error: "Conversion failed when converting the varchar value '1234,4567' to data type int." Could someone please help me solve this, or suggest an alternative way? Many thanks.

    Read the article

  • ASP.NET/VB/SQL: trying to insert data, getting error "no value given for required parameters"

    - by Sara
    I am pretty sure this is a basic syntax error; I am new at this and basically figuring things out by trial and error. I am trying to insert data from textboxes into an Access database, where the primary key fields in tableCourse are prefix and course_number. It keeps giving me the "no value given for one or more required parameters" error. Here is my codebehind:
    Protected Sub Wizard1_FinishButtonClick(ByVal sender As Object, ByVal e As System.Web.UI.WebControls.WizardNavigationEventArgs) Handles Wizard1.FinishButtonClick
        'Collect Data
        Dim myDept = txtDept.Text
        Dim myFirst = txtFirstName.Text
        Dim myLast = txtLastName.Text
        Dim myPrefix = txtCoursePrefix.Text
        Dim myNum = txtCourseNum.Text
        'Define Connection
        Dim myConn As New OleDbConnection
        myConn.ConnectionString = AccessDataSource1.ConnectionString
        'Create commands
        Dim myIns1 As New OleDbCommand("INSERT INTO tableCourse (department, name_first, name_last, prefix, course_number) VALUES (@myDept, @myFirst, @myLast, @myPrefix, @myNum)", myConn)
        'Execute the commands
        myConn.Open()
        myIns1.ExecuteNonQuery()
    End Sub

    Read the article

  • SQL binary value to PHP variable leading zeros

    - by Agony
    Using an SQL query to pull data from an MSSQL database results in a value that still has the padding zeros. The data in the database is stored as binary(13), so it returns all 13 bytes. However, the value is text, so the padding bytes generally show up as '?' in a form on the site, and in turn the wrong data gets written back to the database later. So what I need is to select/display only the text itself, not all 13 bytes. Using:
    SELECT CONVERT(char, uilock_pw) AS uipwd FROM tbl_UserAccount
    or
    SELECT uilock_pw FROM tbl_UserAccount
    still adds the padding zeros to the char array. Example in database: 0x71776531323300000000000000 Would show up as: qwe123??????? But should be: qwe123 I'm not even sure which character those ? represent. Using echo results in a normal qwe123 - but not in a form.
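    A sketch of trimming the padding on the SQL side: binary(13) is zero-padded, and those 0x00 bytes come through as NUL characters (the '?') once the value is treated as text. String-function behaviour around CHAR(0) can depend on collation, so treat this as a starting point; if the value really is text, a varchar(13) column avoids the padding altogether.
    -- Reinterpret the bytes as characters, then strip the NUL padding
    SELECT REPLACE(CONVERT(varchar(13), uilock_pw), CHAR(0), '') AS uipwd
    FROM   tbl_UserAccount;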

    Read the article

  • Sql Server Compact Edition 3.5: Does the data persist in the database file?

    - by jerbersoft
    Hi guys, I have a question about a SQL Server Compact Edition database for a desktop application. I am completely new to SQL Server Compact Edition. The question is: does the data inserted into the database persist even if the application is shut down or restarted? I can't seem to find my data when using SQL Server Management Studio to manage the database. Am I missing something? EDIT: Is SQL Server Compact Edition used for caching local data only? Can't we use it like we normally do with SQL Server Express, for example managing data using SQL Server Management Studio?

    Read the article

  • Copy a Table's data from a Stored Procedure

    - by Niike2
    I am learning how to use SQL and stored procedures, and I know the syntax below is incorrect. I want to copy data from one table into another table in another database with a stored procedure. The problem is I don't know in advance what table or what database to copy to. I want it to use parameters and not specify the columns explicitly. I have 2 databases (Master_DB and Master_copy) and the same table structure in each DB. I want to quickly select a table in Master_DB and copy that table's data into the Master_copy table with the same name. I have come up with something like this:
    USE Master_DB
    CREATE PROCEDURE TransferData
    DEFINE @tableFrom, @tableTo, @databaseTo;
    INSERT INTO @databaseTo.dbo.@tableTo
    SELECT * FROM Master_DB.dbo.@tableFrom
    GO;
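    Table and database names can't be ordinary parameters in a static INSERT, so the usual approach is dynamic SQL. A minimal sketch, assuming the source is always Master_DB, the table has the same name in both databases, and the column lists match exactly (identity columns would also need SET IDENTITY_INSERT handling):
    USE Master_DB
    GO
    CREATE PROCEDURE dbo.TransferData
        @tableFrom  sysname,   -- table to copy; same name in both databases
        @databaseTo sysname    -- e.g. 'Master_copy'
    AS
    BEGIN
        DECLARE @sql nvarchar(max);
        SET @sql = N'INSERT INTO ' + QUOTENAME(@databaseTo) + N'.dbo.' + QUOTENAME(@tableFrom)
                 + N' SELECT * FROM Master_DB.dbo.' + QUOTENAME(@tableFrom) + N';';
        EXEC sys.sp_executesql @sql;
    END
    GO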

    Read the article

  • Data structure supporting the following operations

    - by 500865
    I'm looking for a data structure for working with a set of data that is most efficient for doing the following:
    1. Check whether an item has been categorized or not. (The categorized and uncategorized sets are disjoint.)
    2. Get the category of an item.
    3. Get all the items in a particular category.
    4. Get all the uncategorized items.
    5. Remove a particular item from the data set.
    I was thinking of having a Dictionary<String, Set<String>> to hold all the items in a given category, but that doesn't solve 2.

    Read the article

  • SSAS Reporting Services - Set specific language / translation

    - by Chris
    Hi all, in the data warehouse there's a default language for the measures, and I added a translation for German captions. In a Visual Studio Report Server project, when creating a query with my German OS, the cube and its measures are displayed in German. When dragging measures to the MDX query window, the default measure name is used. That's what I want and expect, since when writing MDX queries I would like to use the default measure names. But when executing the query, the column created for each measure is translated to German again. This results in German column names within my dataset, which I don't want; I'd like the English column names. I already tried changing the connection string to: Data Source=server;Initial Catalog=DataWarehouse;LocaleIdentifier=1033 But that doesn't help, I still see German translations. Does anyone know how to set a specific translation?

    Read the article

  • Database design: one huge table or separate tables?

    - by littlegreen
    Currently I am designing a database for use in our company. We are using SQL Server 2008. The database will hold data gathered from several customers. The goal of the database is to acquire aggregate benchmark numbers over several customers. Recently, I have become worried about the fact that one table in particular will be getting very big. Each customer has approximately 20,000,000 rows of data, and there will soon be 30 customers in the database (if not more). A lot of queries will be run against this table. I am already noticing performance issues and users being temporarily locked out. My question: will we be able to handle this table in the future, or is it better to split it up into smaller tables, one per customer?
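    For what it's worth, a few hundred million rows is generally manageable in a single SQL Server 2008 table when it is indexed (and ideally partitioned) on the customer key, whereas per-customer tables tend to complicate the cross-customer aggregate queries the warehouse exists for. A partitioning sketch with made-up names and boundary values (table partitioning requires Enterprise Edition):
    -- One partition per customer-id range; boundaries are illustrative only
    CREATE PARTITION FUNCTION pf_Customer (int)
        AS RANGE RIGHT FOR VALUES (10, 20, 30);
    CREATE PARTITION SCHEME ps_Customer
        AS PARTITION pf_Customer ALL TO ([PRIMARY]);
    CREATE TABLE dbo.CustomerMeasurement (
        CustomerId int           NOT NULL,
        MeasuredAt datetime      NOT NULL,
        Value      decimal(18,4) NOT NULL
    ) ON ps_Customer (CustomerId);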

    Read the article

  • sql server - bulk insert error

    - by user554134
    I am using BULK INSERT and getting the error below. Note: the data in the load file is not beyond the configured column length.
    Running command:
    bulk insert load_data from 'C:\temp\dataload\load_file.txt' with (firstrow = 1, fieldterminator = '0x09', rowterminator = '\n', MAXERRORS = 0, ERRORFILE = 'C:\temp\dataload\load_file')
    Contents of load file:
    user_name  file_path  asset_owner  city     import_date
    admin      C:\        admin        toronto  04/12/2012
    Error:
    Msg 4863, Level 16, State 1, Line 1
    Bulk load data conversion error (truncation) for row 1, column 6 (validated).
    Msg 7399, Level 16, State 1, Line 1
    The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.
    Msg 7330, Level 16, State 2, Line 1
    Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".
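    Two things stand out: FIRSTROW = 1 means the header line itself is parsed as data, and the error names a sixth column (validated) that has no counterpart in the five-column file, which usually calls for a format file or loading through a view over just the file's columns. A hedged sketch of the command with the header skipped and an error file kept for inspection (paths and limits are only illustrative):
    BULK INSERT load_data
    FROM 'C:\temp\dataload\load_file.txt'
    WITH (
        FIRSTROW        = 2,            -- skip the header line in the file
        FIELDTERMINATOR = '0x09',
        ROWTERMINATOR   = '\n',
        MAXERRORS       = 10,           -- let ERRORFILE capture a few offending rows
        ERRORFILE       = 'C:\temp\dataload\load_file_errors'
    );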

    Read the article

  • T-SQL Specific Syntax problem (Simple no doubt)

    - by Yoda
    Hi guys, I have an issue with a query I'm trying to run on some data, and I guess the place to start is to describe the data. I have a list of email addresses; each email address has a unique ID and an account ID. Also in my tables I have a set number which auto-increments, which will allow me to target duplicate email addresses. What I need to do is something like this:
    Insert into duplicates (EMAIL, ACCOUNTID, ID)
    SELECT Email, AccountID, ID
    FROM EmailAddresses
    Group by Email, AccountID
    Having Count(email) > 1
    Order by AccountID, Email
    So essentially I want to select all duplicate email addresses and insert them (and their related fields) into a new table, broken down by account ID, so I can run some further queries on it. I have been battling with this for way too long and could just use a fresh perspective. Cheers in advance.
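    A sketch of one way to write it: find the duplicated addresses in a derived table, then join back to the source so the individual ID and account ID of every offending row can be inserted. This treats "duplicate" as the same Email appearing more than once; add AccountID to the GROUP BY if duplicates should be counted per account.
    INSERT INTO duplicates (Email, AccountID, ID)
    SELECT e.Email, e.AccountID, e.ID
    FROM   EmailAddresses AS e
    JOIN  (SELECT Email
           FROM   EmailAddresses
           GROUP BY Email
           HAVING COUNT(*) > 1) AS dup
          ON dup.Email = e.Email
    ORDER BY e.AccountID, e.Email;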

    Read the article
