Search Results

Search found 79588 results on 3184 pages for 'sql data storage'.

Page 217 of 3184

  • Errors with large data sources

    - by The Sheek Geek
    I'm doing some benchmarking on large data sources and on binding/exporting data for reporting. I started by using a DataSet, filling it with 100,000 rows and then attempting to open a Crystal Report with the retrieved data. The DataSet filled just fine (it took about 779 milliseconds); however, when attempting to export the data to the report, or even bind it to a GridView, the application failed with an OutOfMemoryException. Has anyone experienced this before, or does anyone have an idea of how to get around it? It is very possible that clients will run reports covering years' worth of data, so 100,000 rows are not inconceivable. The application and the benchmark code are written in C# against Oracle and SQL Server databases. I still have some data sources to test, but I would like to know how to get around this in case I don't find a better solution.
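
    A speculative mitigation sketch (the table and column names below are illustrative, not from the original post): rather than materializing every row at once, page the report query on the server with ROW_NUMBER so the application only holds one chunk in memory at a time.

        -- SQL Server 2005+ sketch: fetch one page of the report data at a time
        -- (ReportSource and OrderDate are hypothetical names)
        ;WITH Numbered AS
        (
            SELECT *, ROW_NUMBER() OVER (ORDER BY OrderDate) AS rn
            FROM ReportSource
        )
        SELECT *
        FROM Numbered
        WHERE rn BETWEEN 1 AND 10000;   -- bind/export this page, then request the next range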

    Read the article

  • Azure Blobs - ArgumentNullException when calling UploadFile()

    - by Ariel
    I'm getting the following exception when trying to upload a file with this code:

        string encodedUrl = "videos/Sample.mp4";
        CloudBlockBlob encodedVideoBlob = blobClient.GetBlockBlobReference(encodedUrl);
        Log(string.Format("Got blob reference for {0}", encodedUrl), EventLogEntryType.Information);
        encodedVideoBlob.Properties.ContentType = contentType;
        encodedVideoBlob.Metadata[BlobProperty.Description] = description;
        encodedVideoBlob.UploadFile(localEncodedBlobPath);

    I see the "Got blob reference" message, so I assume the reference resolves correctly. The exception and stack trace:

        Void Run() C:\Inter\Projects\PoC\WorkerRole\WorkerRole.cs (40)
        System.ArgumentNullException: Value cannot be null.
        Parameter name: value
           at Microsoft.WindowsAzure.StorageClient.Tasks.Task`1.get_Result()
           at Microsoft.WindowsAzure.StorageClient.Tasks.Task`1.ExecuteAndWait()
           at Microsoft.WindowsAzure.StorageClient.CloudBlob.UploadFromStream(Stream source, BlobRequestOptions options)
           at Microsoft.WindowsAzure.StorageClient.CloudBlob.UploadFile(String fileName, BlobRequestOptions options)
           at EncoderWorkerRole.WorkerRole.ProcessJobOutput(IJob job, String videoBlobToEncodeUrl) in C:\Inter\Projects\PoC\WorkerRole\WorkerRole.cs:line 144
           at EncoderWorkerRole.WorkerRole.Run() in C:\Inter\Projects\PoC\WorkerRole\WorkerRole.cs:line 40

    Interestingly, when I run that same snippet from an on-premises server, i.e. outside of Azure, it works correctly. Ideas welcome, thanks!

    Read the article

  • Why does MS SQL Mgmt Studio Express keep forgetting my passwords?

    - by Ryan
    I have just about had it with this tool. I check the save-password box in the login dialog, but it just doesn't work. Sometimes it will remember for a few days, and then the password will just be gone. Nearly every time I load this thing I have to track down the password again and type it in. Is there some password rule in the database that could be causing this? It is driving me absolutely crazy.

    Read the article

  • How to Auto-Increment Non-Primary Key? - SQL Server

    - by user311509
        CREATE TABLE SupplierQuote
        (
            supplierQuoteID int identity (3504,2) CONSTRAINT supquoteid_pk PRIMARY KEY,
            PONumber int identity (9553,20) NOT NULL,
            . . .
            CONSTRAINT ponumber_uq UNIQUE(PONumber)
        );

    The above DDL produces an error:

        Msg 2744, Level 16, State 2, Line 1
        Multiple identity columns specified for table 'SupplierQuote'. Only one identity column per table is allowed.

    How can I solve it? I want PONumber to be auto-incremented.
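
    A sketch of one possible workaround (my assumption, not from the original post, and it requires SQL Server 2012 or later): since only one identity column is allowed per table, a SEQUENCE object can supply the second auto-incrementing value through a default constraint.

        -- hypothetical rewrite: PONumber is fed by a sequence instead of a second identity
        CREATE SEQUENCE PONumberSeq AS int START WITH 9553 INCREMENT BY 20;

        CREATE TABLE SupplierQuote
        (
            supplierQuoteID int identity (3504,2) CONSTRAINT supquoteid_pk PRIMARY KEY,
            PONumber int NOT NULL
                CONSTRAINT ponumber_df DEFAULT (NEXT VALUE FOR PONumberSeq),
            CONSTRAINT ponumber_uq UNIQUE(PONumber)
        );

    On SQL Server 2005/2008, where SEQUENCE is not available, the usual alternatives are a trigger or a computed column derived from the identity value.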

    Read the article

  • Unable to drop constraint in sql server 2005 "Could not drop constraint. See previous errors"

    - by DannykPowell
    I'm trying to drop a constraint on a db table, something like:

        ALTER TABLE MyTable DROP CONSTRAINT FK_MyTable_AnotherTable

    But the execution just runs and runs. If I stop it I see:

        Msg 3727, Level 16, State 0, Line 2
        Could not drop constraint. See previous errors.

    A web search throws up various pages, but note that the constraint is properly named and I am trying to remove it using the correct name.
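
    A speculative diagnostic sketch (my assumption, not part of the original post): an ALTER TABLE that runs indefinitely often points to blocking rather than a naming problem, so it may help to confirm the constraint's exact name and then check for sessions blocking the statement.

        -- list the foreign keys actually defined on the table
        SELECT name
        FROM sys.foreign_keys
        WHERE parent_object_id = OBJECT_ID('MyTable');

        -- look for whatever is blocking the ALTER TABLE while it appears to hang
        SELECT session_id, blocking_session_id, wait_type, wait_time
        FROM sys.dm_exec_requests
        WHERE blocking_session_id <> 0;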

    Read the article

  • How do I use a concatenation of 2 columns in a SQL DB in ASP.NET properly?

    - by user293357
    I'm using LINQ to SQL like this with a CheckBoxList in ASP.NET:

        var teachers = from x in dc.teachers select x;
        cbl.DataSource = teachers;
        cbl.DataTextField = "name";
        cbl.DataValueField = "teacherID";
        cbl.DataBind();

    However, I want to display both "firstname" and "name" in the DataTextField. I found this solution, but I'm using LINQ: http://stackoverflow.com/questions/839223/concatenate-two-fields-in-a-dropdown How do I do this?
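
    For reference, a sketch of the SQL-side equivalent of what the linked answer does (illustrative only; in LINQ the usual counterpart is projecting an anonymous type with the concatenated text and pointing DataTextField at that property).

        -- concatenate the two columns into a single display value
        SELECT teacherID,
               firstname + ' ' + name AS DisplayName
        FROM teachers;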

    Read the article

  • String Storage and Retrieval in Android

    - by dweebsonduty
    I have a program whose items have 3 attributes: Name (String) : Count (String) : Amount (String). Currently I am writing to the database when someone tries to access a section, and then reading back from the database. The writing is taking too long. So my question is: how can I create a nested structure where, if name = carrot, I can look up carrot['count'] and carrot['amount']? I am very new to Java, so some sample code would be great, as would any other suggestions you may have for tackling this issue.

    Read the article

  • Why are there performance differences when a SQL function is called from a .Net app vs. when the same call is run from SSMS?

    - by Dan Snell
    We are having a problem in our test and dev environments with a function that at times runs quite slowly when called from a .NET application. When we call the same function directly from Management Studio it works fine. Here are the differences when the two calls are profiled:

        From the application:  CPU: 906   Reads: 61853   Writes: 0   Duration: 926
        From SSMS:             CPU: 15    Reads: 11243   Writes: 0   Duration: 31

    We have determined that when we recompile the function, performance returns to what we expect, and the profile of the call from the application then matches what we get from SSMS. It starts slowing down again at what appear to be random intervals. We have not seen this in prod, but that may be in part because everything there is recompiled on a weekly basis. So what might cause this sort of behavior?
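
    A speculative check (my assumption, not from the original post): this pattern is commonly caused by the application and SSMS connecting with different SET options (notably ARITHABORT), so each gets its own cached plan, combined with parameter sniffing on the application's plan. Comparing the session options of both connections is a cheap first step.

        -- run from both the application's connection and from SSMS, then compare
        SELECT session_id, arithabort, ansi_nulls, quoted_identifier
        FROM sys.dm_exec_sessions
        WHERE session_id = @@SPID;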

    Read the article

  • With SQL can you use a sub-query in a WHERE LIKE clause?

    - by Jason
    I'm not even sure how to phrase this, as it sounds weird conceptually, but I'll give it a try. Basically I'm looking for a way to create a query that is essentially a WHERE ... IN ... LIKE SELECT statement. As an example, if I wanted to find all user records with a hotmail.com email address, I could do something like:

        SELECT UserEmail FROM Users WHERE (UserEmail LIKE '%hotmail.com')

    But what if I wanted to use a subquery as the matching criteria? Something like this:

        SELECT UserEmail FROM Users WHERE (UserEmail LIKE (SELECT '%' + Domain FROM Domains))

    Is that even possible? If so, what's the right syntax?
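
    A sketch of one way this is often rewritten (my assumption, not an answer from the original thread): a scalar subquery can only return one value, but the comparison can be applied to every Domains row with EXISTS (or an equivalent JOIN).

        -- return users whose email matches any domain in the Domains table
        SELECT u.UserEmail
        FROM Users u
        WHERE EXISTS (SELECT 1
                      FROM Domains d
                      WHERE u.UserEmail LIKE '%' + d.Domain);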

    Read the article

  • What do you call the concept of dynamic data definition?

    - by DJTripleThreat
    Maybe this is simpler and more straightforward than what I'm thinking, but I can't seem to find this concept on Google anywhere. The concept is this: you have a table in a database, and the table has a specified number of columns. However, previous clients have asked me for an additional set of dynamic, user-defined columns that can be added on the fly. What is this concept called, and is it considered a design pattern?
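
    A minimal illustration of one common way this is modelled (my naming and table design, not from the original post): an Entity-Attribute-Value style table, where each user-defined "column" becomes a row keyed by the record it extends.

        -- hypothetical EAV-style table for user-defined fields
        CREATE TABLE CustomFieldValue
        (
            RecordID   int           NOT NULL,   -- the row in the fixed-schema table being extended
            FieldName  nvarchar(64)  NOT NULL,   -- the user-defined "column" name
            FieldValue nvarchar(max) NULL,       -- the value, stored as text
            CONSTRAINT pk_customfieldvalue PRIMARY KEY (RecordID, FieldName)
        );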

    Read the article

  • In SQL How do I copy values from one table to another based on another field's value?

    - by Joshua1729
    Okay, I have two tables. The first is VOUCHERT, with the following fields:

        ACTIVATIONCODE, SERIALNUMBER, VOUCHERDATADBID, UNAVAILABLEAT, UNAVAILABLEOPERATORDBID,
        AVAILABLEAT, AVAILABLEOPERATORDBID, ACTIVATIONCODENEW, EXT1, EXT2, EXT3,
        DENOMINATION -- I added this column to the table

    The second table is VOUCHERDATAT, with the following fields:

        VOUCHERDATADBID, BATCHID, VALUE, CURRENCY, VOUCHERGROUP, EXPIRYDATE, AGENT, EXT1, EXT2, EXT3

    What I want to do is copy the corresponding VALUE from VOUCHERDATAT and put it into DENOMINATION in VOUCHERT. The link between the two is VOUCHERDATADBID. How do I go about it? It is not a 1:1 mapping: there may be 1000 serial numbers with the same VOUCHERDATADBID, and that VOUCHERDATADBID has only one entry in VOUCHERDATAT, hence one value. Therefore, all serial numbers belonging to a certain VOUCHERDATADBID will have the same value. Will JOINs work? What type of JOIN should I use? Or is UPDATE the way to go? Thanks for the help!
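
    A sketch of the usual SQL Server form of this update (my assumption about the dialect; the table and column names come from the question): an UPDATE with a JOIN, which writes the single VOUCHERDATAT value onto every matching VOUCHERT row.

        -- copy VALUE onto every voucher that shares the same VOUCHERDATADBID
        UPDATE v
        SET v.DENOMINATION = d.[VALUE]
        FROM VOUCHERT v
        JOIN VOUCHERDATAT d
            ON d.VOUCHERDATADBID = v.VOUCHERDATADBID;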

    Read the article

  • How can I exclude LEFT JOINed tables from TOP in SQL Server?

    - by Kalessin
    Let's say I have two tables of books and two tables of their corresponding editions. I have a query as follows:

        SELECT TOP 10 *
        FROM (SELECT hbID, hbTitle, hbPublisherID, hbPublishDate, hbedID, hbedDate
              FROM hardback
              LEFT JOIN hardbackEdition ON hbID = hbedID
              UNION
              SELECT pbID, pbTitle, pbPublisher, pbPublishDate, pbedID, pbedDate
              FROM paperback
              LEFT JOIN paperbackEdition ON pbID = pbedID) books
        WHERE hbPublisherID = 7
        ORDER BY hbPublishDate DESC

    If there are 5 editions of each of the first two hardback and/or paperback books, this query only returns two books. However, I want the TOP 10 to apply only to the number of actual book records returned. Is there a way I can select 10 actual books and still get all of their associated edition records? In case it's relevant, I do not have database permissions to CREATE and DROP temporary tables. Thanks for reading!

    Update, to clarify: the paperback table has an associated table of paperback editions, and the hardback table has an associated table of hardback editions. The hardback and paperback tables are not related to each other except to the user who will (hopefully!) see them displayed together.
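
    A sketch of one common approach (my assumption, not an answer from the original thread), shown here for the hardback half only: apply TOP 10 to the books in a derived table first, and only then LEFT JOIN the editions, so every edition of those 10 books comes back.

        -- pick 10 books first, then attach all of their editions
        SELECT b.hbID, b.hbTitle, b.hbPublisherID, b.hbPublishDate, e.hbedID, e.hbedDate
        FROM (SELECT TOP 10 hbID, hbTitle, hbPublisherID, hbPublishDate
              FROM hardback
              WHERE hbPublisherID = 7
              ORDER BY hbPublishDate DESC) b
        LEFT JOIN hardbackEdition e ON e.hbedID = b.hbID;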

    Read the article

  • Does SQL Server guarantee sequential inserting of an identity column?

    - by balpha
    In other words, is the following "cursoring" approach guaranteed to work:

        1. retrieve rows from the DB;
        2. save the largest ID from the returned records for later, e.g. in LastMax;
        3. later, run "SELECT * FROM MyTable WHERE Id > {0}" with LastMax.

    In order for that to work, I have to be sure that every row I didn't get in step 1 has an Id greater than LastMax. Is this guaranteed, or can I run into weird race conditions?

    Read the article

  • MSSQL / T-SQL : How to update equal percentages of a resultset?

    - by Kent Comeaux
    I need a way to take a resultset of KeyIDs, divide it up as equally as possible, and update records differently for each division based on the KeyIDs. In other words, there is:

        SELECT KeyID FROM TableA WHERE (some criteria exists)

    I want to update TableA 3 different ways, by 3 equal portions of KeyIDs:

        UPDATE TableA SET FieldA = Value1 WHERE KeyID IN (the first 1/3 of the SELECT resultset above)
        UPDATE TableA SET FieldA = Value2 WHERE KeyID IN (the second 1/3 of the SELECT resultset above)
        UPDATE TableA SET FieldA = Value3 WHERE KeyID IN (the third 1/3 of the SELECT resultset above)

    or something to that effect. Thanks for any and all of your responses.
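
    A sketch of one way to do this in a single statement (my assumption, not from the original post; SQL Server 2005 or later): NTILE(3) splits the matching keys into three near-equal buckets, and a CASE picks the value for each bucket. The WHERE clause and the three values below are placeholders standing in for the ones in the question.

        ;WITH Portions AS
        (
            SELECT KeyID,
                   NTILE(3) OVER (ORDER BY KeyID) AS Portion   -- 1, 2 or 3, as evenly as possible
            FROM TableA
            WHERE SomeCriteria = 1                             -- placeholder for "(some criteria exists)"
        )
        UPDATE a
        SET FieldA = CASE p.Portion WHEN 1 THEN 'Value1'
                                    WHEN 2 THEN 'Value2'
                                    ELSE 'Value3' END
        FROM TableA a
        JOIN Portions p ON p.KeyID = a.KeyID;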

    Read the article

  • Recomposing data structure in Excel

    - by Velletti
    I've got a sheet of 35k rows of the following kind of data, which I want to reshape into the table below. So, I want to reshape this data so that all the people within a specific GroupID end up in separate columns. I suppose I should add a counter for each row within a specific GroupID? Also, I suppose these kinds of issues are more comfortably handled in databases? Since I often have this kind of data, I need to be much quicker about solving it than I am now.
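
    A speculative database-side sketch (the original sample data is not shown, so the table and column names below are guesses): number the people within each GroupID, then pivot those position numbers into columns.

        ;WITH Numbered AS
        (
            SELECT GroupID, Person,
                   ROW_NUMBER() OVER (PARTITION BY GroupID ORDER BY Person) AS Pos
            FROM SheetData                      -- hypothetical table holding the imported rows
        )
        SELECT GroupID,
               [1] AS Person1, [2] AS Person2, [3] AS Person3
        FROM Numbered
        PIVOT (MAX(Person) FOR Pos IN ([1], [2], [3])) AS p;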

    Read the article

  • SQL - Please Help - How can I select values from different rows depending on the most recent entry

    - by user321185
    Hiya. Basically, I have a table which is used to hold employee workwear details. It is formed of the columns EmployeeID, CostCentre, AssociateLevel, IssueDate, TrouserSize, TrouserLength, TopSize and ShoeSize. An employee can be assigned a pair of trousers, a top and shoes at the same time, or only one or two pieces of clothing. As we all know, people's sizes and employee levels can change, which is why I need help really. I need to be able to select the most recent clothes size and associate level for each item of clothing for each employee. So, for example, if employee '54664LSS' was given a pair of 'XL' trousers and an 'L' top on 24/03/11 but then received an 'M' top on 26/05/10, the values of the 'M' sized top and the 'L' sized trousers would need to be returned. Any help would be greatly appreciated, as I'm pretty stuck :(. Thanks.
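
    A sketch of one common approach (my assumption, not an answer from the original thread; Workwear is a placeholder since the real table name is not given): take, per employee, the most recent row that actually has a value for each garment.

        -- most recent top size per employee; the same pattern applies to trousers and shoes
        SELECT EmployeeID, TopSize, AssociateLevel
        FROM (SELECT EmployeeID, TopSize, AssociateLevel,
                     ROW_NUMBER() OVER (PARTITION BY EmployeeID ORDER BY IssueDate DESC) AS rn
              FROM Workwear
              WHERE TopSize IS NOT NULL) t
        WHERE rn = 1;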

    Read the article

  • How can I use SQL Server's full text search across multiple rows at once?

    - by Morbo
    I'm trying to improve the search functionality on my web forums. I've got a table of posts, and each post has (among other, less interesting things):

        PostID, a unique ID for the individual post.
        ThreadID, an ID of the thread the post belongs to. There can be any number of posts per thread.
        Text, because a forum would be really boring without it.

    I want to write an efficient query that will search the threads in the forum for a series of words, and it should return a hit for any ThreadID whose posts, between them, include all of the search words. For example, let's say that thread 9 has post 1001 with the word "cat" in it, and also post 1027 with the word "hat" in it. I want a search for cat hat to return a hit for thread 9.

    This seems like a straightforward requirement, but I don't know of an efficient way to do it. Using the regular FREETEXT and CONTAINS capabilities for N'cat AND hat' won't return any hits in the above example, because the words exist in different posts, even though those posts are in the same thread. (As far as I can tell, when using CREATE FULLTEXT INDEX I have to give it my index on the primary key PostID, and can't tell it to index all posts with the same ThreadID together.)

    The solution that I currently have in place works, but it sucks: I maintain a separate table that contains the entire concatenated post text of every thread, and keep a full text index on THAT. I'm looking for a solution that doesn't require me to keep a duplicate copy of the entire text of every thread in my forums. Any ideas? Am I missing something obvious?
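
    A sketch of one alternative to the duplicate-table workaround (my assumption, not from the original post; Posts is a placeholder table name): run the full-text predicate once per word and intersect the results at the thread level, so a thread matches only when every word hits at least one of its posts.

        -- threads where some post matches 'cat' AND some (possibly different) post matches 'hat'
        SELECT p1.ThreadID
        FROM CONTAINSTABLE(Posts, [Text], 'cat') c1
        JOIN Posts p1 ON p1.PostID = c1.[KEY]
        INTERSECT
        SELECT p2.ThreadID
        FROM CONTAINSTABLE(Posts, [Text], 'hat') c2
        JOIN Posts p2 ON p2.PostID = c2.[KEY];

    For an arbitrary number of search words, the query would have to be built dynamically, one INTERSECT branch per word.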

    Read the article
