Search Results

Search found 79588 results on 3184 pages for 'sql data storage'.

Page 312/3184 | < Previous Page | 308 309 310 311 312 313 314 315 316 317 318 319  | Next Page >

  • What is the MySQL 5.5 equivalent of sys.dm_fts_index_keywords_by_document in SQL Server 2008?

    - by djsurge
    I'm making a web application that uses the data in sys.dm_fts_index_keywords_by_document. I'm interested in how many times a given term occurs in each string that is indexed. For example, I have a table with a column called comments; the table has various strings in the comments field. When I make that column full-text searchable, dm_fts_index_keywords_by_document is created and I can see the per-document word data. Can I do the same thing in MySQL?

    Read the article

  • Count rows in a SQL Server (2005) table?

    - by David.Chu.ca
    I have a simple question about two options for getting a row count from SQL Server (2005). I am using VS 2005. The first option is SELECT id FROM Table1 WHERE dt >= startDt AND dt < endDt; I fetch the list of ids into a cache and then get the count from List.Count. The second option is SELECT COUNT(*) FROM Table1 WHERE dt >= startDt AND dt < endDt; which returns the count directly. The issue is that I have had several timeout exceptions with the second method. What I found is that Table1 is very large, with millions of rows. With the first option it seems OK. I am confused that COUNT(*) takes more time than fetching all the rows (is that true?). Could the aggregation with COUNT(*) cause SQL Server to create a temporary table or server-side cache, resulting in slow performance when the table is very big? What is the best way to get the count?
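
    A minimal sketch of the second option backed by an index on the filter column, assuming Table1 and its dt column as described above; the index name and the sample date bounds are invented:

        -- an index on dt lets COUNT(*) read only the matching index range
        -- instead of scanning the whole table
        CREATE INDEX IX_Table1_dt ON Table1 (dt);

        DECLARE @startDt DATETIME, @endDt DATETIME;
        SET @startDt = '20100101';   -- sample bounds
        SET @endDt   = '20100201';

        SELECT COUNT(*)
        FROM   Table1
        WHERE  dt >= @startDt AND dt < @endDt;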

    Read the article

  • Great data mining quotes

    - by Andrei Savu
    I'm searching for some data-mining-related quotes. Can you tell me some of the quotes you like? On the internet I have found only this site: http://www.quotesea.com/quotes/with/data%20mining Thanks.

    Read the article

  • DataGridView not displaying a row after it is created

    - by joslinm
    Hi, I'm using Visual Studio 10 and I just created a database using SQL Server CE. Within it, I made a table CSLDataTable, and that automatically created a CSLDataSet and a CSLDataTableTableAdapter. Three variables were automatically created in my MainWindow.cs class: cSLDataSet, cSLDataTableTableAdapter, and cSLDataTableBindingSource. I have added a DataGridView to my form called dataGridView with its data source set to cSLDataTableBindingSource. In my MainWindow(), I tried adding a row as a test:

        public MainWindow()
        {
            InitializeComponent();
            CSLDataSet.CSLDataTableRow row = cSLDataSet.CSLDataTable.NewCSLDataTableRow();
            row.File_ = "file";
            row.Artist = "artist11";
            row.Album = "album";
            row.Save_Structure = "save";
            row.Sent = false;
            row.Error = true;
            row.Release_Format = "release";
            row.Bit_Rate = "bitrate..";
            row.Year = "year";
            row.Physical_Format = "format";
            row.Bit_Format = "bitformat";
            row.File_Path = "File!!path";
            row.Site_Origin = "what";
            cSLDataSet.CSLDataTable.Rows.Add(row);
            cSLDataSet.AcceptChanges();
            cSLDataTableTableAdapter.Fill(cSLDataSet.CSLDataTable);
            cSLDataTableTableAdapter.Update(cSLDataSet);
            dataGridView.Refresh();
            dataGridView.Update();
        }

    Regarding the DataSet methods I tried calling, I have been trying to find a "correct" way to interact with the adapter, dataset, and data table to successfully show the row, but to no avail. I'm rather new to using a SQL Server CE database, and I read a lot of the MSDN pages and thought I was on the right track, but I've had no luck. The DataGridView shows the headers correctly, but that new row does not show up.

    Read the article

  • Generating MySQL UPDATE statements containing BLOB image data

    - by Bob
    I'm trying to write an SQL statement that will generate an SQL script that updates a BLOB field with an image selected from the database. This is what I have:

        select concat( 'UPDATE `IMAGE` SET THUMBNAIL = ', QUOTE( THUMBNAIL ), ' WHERE ID = ', ID, ';' ) as UPDATE_STATEMENT from IMAGE;

    In the above, THUMBNAIL is a BLOB field containing raw image data. When I run the resulting script I get the following error: ERROR at line 2: Unknown command '\\'. I first tried this without the QUOTE() function, like so:

        select concat( 'UPDATE `IMAGE` SET THUMBNAIL = \'', THUMBNAIL, '\' WHERE ID = ', ID, ';' ) as UPDATE_STATEMENT from IMAGE;

    Running the resulting script produces this error: ERROR at line 2: Unknown command '\0'. What is the proper function to apply to this BLOB field in the SELECT so that the UPDATE statements will work? For context, I'm looking to migrate thumbnails generated on one server to another server for certain image IDs only. I would use mysqldump, but I don't want to clobber the entire table. Any help is greatly appreciated!
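
    One workaround often used for scripting binary columns (a sketch, not taken from the post) is to emit the BLOB as a hexadecimal literal instead of a quoted string, so the generated script contains no raw binary bytes:

        -- rows with a NULL THUMBNAIL would need separate handling,
        -- since CONCAT returns NULL if any argument is NULL
        SELECT CONCAT(
                 'UPDATE `IMAGE` SET THUMBNAIL = 0x', HEX(THUMBNAIL),
                 ' WHERE ID = ', ID, ';'
               ) AS UPDATE_STATEMENT
        FROM   IMAGE;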

    Read the article

  • How to store a list in a column of a database table?

    - by John Berryman
    Howdy! So, per Mehrdad's answer to a related question, I get it that a "proper" database table column doesn't store a list. Rather, you should create another table that effectively holds the elements of said list and then link to it directly or through a junction table. However, the type of list I want to create will be composed of unique items (unlike the linked question's fruit example). Furthermore, the items in my list are explicitly sorted - which means that if I stored the elements in another table, I'd have to sort them every time I accessed them. Finally, the list is basically atomic in that any time I wish to access the list, I will want to access the entire list rather than just a piece of it - so it seems silly to have to issue a database query to gather together pieces of the list. AKX's solution (linked above) is to serialize the list and store it in a binary column. But this also seems inconvenient because it means that I have to worry about serialization and deserialization. Is there any better solution? If there is no better solution, then why? It seems that this problem should come up from time to time. ... just a little more info to let you know where I'm coming from. As soon as I had just begun understanding SQL and databases in general, I was turned on to LINQ to SQL, and so now I'm a little spoiled because I expect to deal with my programming object model without having to think about how the objects are queried or stored in the database. Thanks All! John
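
    For comparison, a minimal sketch (all names invented) of the normalized alternative discussed above: a child table that stores the elements with an explicit position, so uniqueness and ordering are enforced by the schema and the whole list comes back with a single ordered query:

        CREATE TABLE list_items (
            list_id  INT          NOT NULL,
            position INT          NOT NULL,   -- explicit sort order
            item     VARCHAR(100) NOT NULL,
            PRIMARY KEY (list_id, position),
            UNIQUE (list_id, item)            -- items are unique within a list
        );

        -- the whole list, already sorted, in one query
        SELECT item
        FROM   list_items
        WHERE  list_id = 1
        ORDER  BY position;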

    Read the article

  • SVG data visualizations

    - by garymlewis
    I'd like to experiment with SVG as a way of displaying data-driven graphs, charts, etc. The data exists as XML, and I'll use XQuery to produce the XML. What options (e.g., graphics libraries) should I consider for creating the SVG from the XML? Many thanks.

    Read the article

  • What lasts longer: Data stored on non-volatile flash RAM, optical media, or magnetic disk?

    - by Chris W. Rea
    What lasts longer: Data stored on non-volatile flash RAM (USB stick or SD cards?), optical media (CD, DVD, or Blu-Ray?), or magnetic disk (floppies, hard drives?) My gut tells me optical media, but I'm not sure. Furthermore, which of those digital media would be most suitable for long-term data storage where environmental issues are unknown, such as low/high temperature or humidity? For example, what digital media could be stored in a basement, attic, or time capsule, and be expected to survive a reasonably long time? e.g. a lifetime, and then some. Update: Looks like optical media and magnetic tape each have one vote below. Does anybody else have an opinion or know of a study comparing the two?

    Read the article

  • SQL command to get the field of a maximum value without making two selects

    - by António Capelo
    I'm starting to learn SQL and I'm working on this exercise: I have a "books" table which holds the info on every book (including price and genre ID). I need to get the name of the genre which has the highest average price. I suppose that I first need to group the prices by genre and then retrieve the name of the highest. I know that I can get the genre-versus-cost results with the following:

        select b.genre, round(avg(b.price),2) as cost from books b group by b.genre;

    My question is: to get the genre with the highest average price from that result, do I have to write

        select aux.genre from ( select b.genre, round(avg(b.price),2) as cost from books b group by b.genre ) aux where aux.cost = (select max(aux.cost) from ( select b.genre, round(avg(b.price),2) as cost from books b group by b.genre ) aux);

    Is it bad practice, or isn't there another way? I get the correct result, but I'm not comfortable with writing the same selection twice. I'm not using PL/SQL, so I can't use variables or anything like that. Any help will be appreciated. Thanks in advance!
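
    A sketch of one way to avoid repeating the aggregate, assuming a dialect that supports common table expressions (the post does not name one):

        WITH genre_costs AS (
            SELECT b.genre, ROUND(AVG(b.price), 2) AS cost
            FROM   books b
            GROUP  BY b.genre
        )
        SELECT genre
        FROM   genre_costs
        WHERE  cost = (SELECT MAX(cost) FROM genre_costs);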

    Read the article

  • PL/SQL Sum by hour

    - by Steve
    Hi, I have some data with start and stop dates that I need to sum. I am not sure how to code for it. Here is the data I have to work with:

        STARTTIME,STOPTIME,EVENTCAPACITY
        8/12/2009 1:15:00 PM,8/12/2009 1:59:59 PM,100
        8/12/2009 2:00:00 PM,8/12/2009 2:29:59 PM,100
        8/12/2009 2:30:00 PM,8/12/2009 2:59:59 PM,80
        8/12/2009 3:00:00 PM,8/12/2009 3:59:59 PM,85

    In this example I would need the sums from 1pm to 2pm, 2pm to 3pm, and 3pm to 4pm. Any suggestions are appreciated. Steve
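
    A sketch of one approach in Oracle syntax, grouping on the hour in which each row starts; it assumes, as in the sample data, that no row spans an hour boundary, and the table name is invented since the post does not give one:

        SELECT TRUNC(starttime, 'HH24') AS hour_start,   -- start of the hour
               SUM(eventcapacity)       AS total_capacity
        FROM   events                                    -- hypothetical table name
        GROUP  BY TRUNC(starttime, 'HH24')
        ORDER  BY hour_start;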

    Read the article

  • Need advice on comparing the performance of two equivalent LINQ to SQL queries

    - by uvita
    I am working on a tool to optimize LINQ to SQL queries. Basically it intercepts the LINQ execution pipeline and makes some optimizations, for example removing a redundant join from a query. Of course, there is an overhead in the execution time before the query gets executed in the DBMS, but in return the query should be processed faster. I don't want to use a SQL profiler because I know that the generated query will perform better in the DBMS than the original one; I am looking for a correct way of measuring the global time between the creation of the query in LINQ and the end of its execution. Currently, I am using the Stopwatch class and my code looks something like this:

        var sw = new Stopwatch();
        sw.Start();
        const int amount = 100;
        for (var i = 0; i < amount; i++)
        {
            ExecuteNonOptimizedQuery();
        }
        sw.Stop();
        Console.WriteLine("Executing the query {2} times took: {0}ms. On average, each query took: {1}ms",
            sw.ElapsedMilliseconds, sw.ElapsedMilliseconds / amount, amount);

    Basically the ExecuteNonOptimizedQuery() method creates a new DataContext, creates a query and then iterates over the results. I did this for both versions of the query, the normal one and the optimized one. I took the idea from this post from Frans Bouma. Are there any other approaches or considerations I should take? Thanks in advance!

    Read the article

  • Excel and SQL, order by help

    - by perlnoob
    I'm stuck in Excel 2007, running a query; it worked until I wanted to add a second field containing "field 2".

        Select "Site Updates"."Posted By", "Site Uploaded"."Site Upload Date"
        From site_info.dbo."Site Updates"
        Where ("Site Updates"."Posted By") AND "Site Uploaded"."Site Upload Date">={ts '2010-05-01 00:00:00'}), ("Site Location"='Chicago')
        Union all
        Select "Site Updates"."Posted By", "Site Uploaded"."Site Upload Date"
        From site_info.dbo."Site Updates"
        Where ("Site Updates"."Posted By") AND "Site Uploaded"."Site Upload Date">={ts '2010-05-01 00:00:00'}), ("Site Location"='Denver')
        Order By "Site Location" ASC;

    Basically I want two different columns for the locations, for example:

        name - Chicago - Denver
        user1 - 100 - 20
        user2 - 34 - 1002

    Right now, for some odd reason, it's combining them like:

        name - Chicago
        user1 - 120
        user2 - 1036

    Please note that updating to the 2010 beta is not a viable option for me at this point. Any and all input that will help me is greatly appreciated. I have read over http://www.techonthenet.com/sql/order_by.php but it hasn't gotten me very far with this question. If you have another SQL resource you recommend for people trying to get their feet wet, I'd greatly appreciate it. If it helps, all the info is in the same table.
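
    If the goal is one column per location rather than a UNION of per-location blocks, a conditional-aggregation sketch along these lines may be closer (table and column names are copied from the query above, COUNT is assumed as the measure since the post does not say what is being totalled, and the upload-date filter is left out):

        SELECT "Posted By",
               SUM(CASE WHEN "Site Location" = 'Chicago' THEN 1 ELSE 0 END) AS Chicago,
               SUM(CASE WHEN "Site Location" = 'Denver'  THEN 1 ELSE 0 END) AS Denver
        FROM   site_info.dbo."Site Updates"
        GROUP  BY "Posted By";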

    Read the article

  • Best Practices for Exchanging data between Desktop and Web Application

    - by Amitd
    Hi, I have to pass information from a desktop application to a web application and vice versa. What are the best practices that are regularly used? Currently I'm using ASP.NET and a WinForm. To pass data to the web site I'm creating a (POST) WebRequest and posting an XML document to the site. To pass data to the application I'm using .NET Remoting from ASP.NET (the WinForm is an administration and monitoring application). Also, currently both the web app and the WinForm are on the same machine (but that can change).

    Read the article

  • Calculate differences between rows while grouping with SQL

    - by Guido
    I have a PostgreSQL table containing movements of different items (models) between warehouses. For example, the following record means that 5 units of model 1 have been sent from warehouse 1 to warehouse 2:

        source target model units
        ------ ------ ----- -----
             1      2     1     5

    I am trying to build a SQL query to obtain the difference between units sent and received, grouped by model. Again with an example:

        source target model units
        ------ ------ ----- -----
             1      2     1     5   -- 5 sent from 1 to 2
             1      2     2     1
             2      1     1     2   -- 2 sent from 2 to 1
             2      1     1     1   -- 1 more sent from 2 to 1

    The result should be:

        source target model diff
        ------ ------ ----- ----
             1      2     1    2   -- 5 sent minus 3 received
             1      2     2    1

    I wonder if this is possible with a single SQL query. Here is the table creation script and some data, just in case anyone wants to try it:

        CREATE TEMP TABLE movements (
            source INTEGER,
            target INTEGER,
            model  INTEGER,
            units  INTEGER
        );
        insert into movements values (1,2,1,5);
        insert into movements values (1,2,2,1);
        insert into movements values (2,1,1,2);
        insert into movements values (2,1,1,1);
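
    One single-query approach (a sketch against the movements table defined above) is to fold each warehouse pair into a canonical direction and let a signed sum do the netting:

        SELECT LEAST(source, target)    AS source,
               GREATEST(source, target) AS target,
               model,
               -- movements in the canonical direction count as positive,
               -- movements back count as negative
               SUM(CASE WHEN source < target THEN units ELSE -units END) AS diff
        FROM   movements
        GROUP  BY LEAST(source, target), GREATEST(source, target), model;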

    Read the article

  • MSSQL, varchar data to nvarchar data

    - by Øyvind
    I've got a database with collation Danish_Norwegian_CS_AS and lots of varchar columns. I'd like to convert all this data to Unicode, but I haven't found a way to do so yet. If I've understood correctly, the encoding used is UCS-2 little endian. For example, I've got a column containing 'PÃ¥l-Trygve', which is easily converted with C# to 'Pål-Trygve' using Encoding.UTF8.GetString(Encoding.Default.GetBytes("PÃ¥l-Trygve")); Is there a way to do this conversion in the Microsoft SQL client?

    Read the article

  • Data-Virtualization problem with SurfaceScrollViewer

    - by TWith2Sugars
    I'm in a situation where I'm using an ItemsControl with a SurfaceScrollViewer bound to an AsyncVirtualizingCollection, and all of the data is being requested. I'm aware that this is because the ItemsControl requests all of the data, but I'm not sure how to get around it. I've tried the AsyncVirtualizingCollection bound to a ListBox and it works fine; I'm now attempting to bind it to a SurfaceListBox, but the problem persists. Any ideas on how to overcome this? Thanks & regards, Tony

    Read the article

  • Should I write more SQL to be more efficient, or less SQL to be less buggy?

    - by RenderIn
    I've been writing a lot of one-off SQL queries to return exactly what a certain page needs and no more. I could reuse existing queries and issue a number of SQL requests linear in the number of records on the page. As an example, I have a query to return People and a query to return Job Details for a person. To return a list of people with their job details, I could query once for people and then once for each person to retrieve their job details. I've found that in most cases that solution returns things in a reasonable amount of time, but I don't know how well it will scale in my environment. Instead I've been writing queries to join people + job details, or people + salary history, etc. I'm looking at my models and I see how I could shave off maybe 30% of my code if I were to re-use existing queries. This is a big temptation. Is it a bad thing to go for reuse over efficiency in general, or does it all come down to the specific situation? Should I first do it the easy way and then optimize later, or is it best to get the code knocked out while everything is fresh in my mind? Thoughts, experiences?
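
    For reference, a sketch of the single-query shape described above (table and column names are invented, since the post gives no schema): one join returns every person together with their job details instead of issuing one query per person:

        SELECT p.person_id,
               p.name,
               j.job_title,
               j.start_date
        FROM   people p
        LEFT JOIN job_details j
               ON j.person_id = p.person_id;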

    Read the article

  • Reading numeric date values from a CSV file into a data.frame in R

    - by Dick Eshelman
        D <- read.csv("sample1.csv", header = FALSE, sep = ",")
        D
                V1     V2     V3     V4
        1 20100316 109825 352120 239065
        2 20100317 108625 352020 239000
        3 20100318 109125 352324 241065
        D[,1]
        [1] 20100316 20100317 20100318

    In the above example, how do I get the data in D[,1] to be read and stored as date values: 2010-03-16, 2010-03-17, 2010-03-18? I have lots of data files in this format. TIA,

    Read the article

  • Which iPhone data model to choose?

    - by Tronic
    I need to get some data from somewhere to put in a timeline. The data structure is like this:

        - Item
          - Name
          - Year
          - ShortInfo (mainly keywords and short texts)
          - LongInfo (much text, with videos/audios (URLs))

    A friend of mine told me I should use a plist and put all that stuff in there, but what about an SQLite database? Any advice? Regards

    Read the article

  • How to update tables' structures while keeping current data

    - by Leon
    I have a C# application that uses tables from a SQL Server 2008 database (it runs on a standalone PC with a local SQL Server instance). Initially I install the database on this PC with some initial data (there are some tables that the application uses and the user doesn't touch). The question is: how can I upgrade this database after the user has created some new data, without harming that data? I am continuing development and may add new tables or stored procedures, or add columns to existing tables. Thanks in advance!
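
    A sketch of the usual pattern (SQL Server syntax, with invented table and column names): ship an upgrade script that checks the catalog before each change, so it can run against the user's existing database without touching their data:

        -- add a column only if it is not already there
        IF NOT EXISTS (SELECT 1
                       FROM   sys.columns
                       WHERE  object_id = OBJECT_ID(N'dbo.Orders')
                         AND  name = N'Notes')
        BEGIN
            ALTER TABLE dbo.Orders ADD Notes NVARCHAR(500) NULL;
        END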

    Read the article

  • Cocoa Core Data filename?

    - by RW
    I followed Apple's example for creating a managed object, which btw was great: http://developer.apple.com/cocoa/coredatatutorial/index.html However, I now want to know what "name" (filename) the user saved his data as. Does anyone know how to pull the filename from the Core Data object? Something like this would be great: NSLog (@"the filename is %@", [coreData filename]); Any ideas?

    Read the article

  • Injection attack (I thought I was protected!): "<?php /**/eval(base64_decode(" everywhere

    - by Cyprus106
    I've got a fully custom PHP site with a lot of database calls. I just got injection hacked. This little chunk of code showed up in dozens of my PHP pages:

        <?php /**/ eval(base64_decode(big string of code....

    I've been pretty careful about my SQL calls and such; they're all in this format:

        $query = sprintf("UPDATE Sales SET `Shipped`='1', `Tracking_Number`='%s' WHERE ID='%s' LIMIT 1 ;",
            mysql_real_escape_string($trackNo),
            mysql_real_escape_string($id));
        $result = mysql_query($query);
        mysql_close();

    For the record, I rarely use mysql_close() at the end; that just happened to be the code I grabbed. I can't think of any places where I don't use mysql_real_escape_string() (although I'm sure there are probably a couple; I'll be grepping soon to find out). There are also no places where users can put in custom HTML or anything. In fact, most of the user-accessible pages, if they use SQL calls at all, are almost inevitably "SELECT * FROM" pages that use a GET or POST, depending. Obviously I need to beef up my security, but I've never had an attack like this and I'm not positive what I should do. I've decided to put limits on all my inputs and go through looking to see if I missed a mysql_real_escape_string somewhere. Anybody else have any suggestions? Also, what does this type of code do? Why is it there?

    Read the article

  • Core Data deleteObject: sets attributes to nil

    - by SG1
    I am implementing an undo/redo mechanism in my app. This works fine for lots of cases; however, I can't undo past deleteObject:. The object is correctly saved in the undo queue, and I get it back and reinserted into the Core Data stack just fine when calling undo. The problem is that all its attributes are getting set to nil when I delete it. I have an entity "Canvas" with a to-many relationship called "graphics" to a "Graphic" entity, which has its inverse set to "canvas". Deleting a Graphic, then inserting it back, doesn't work. Here's the code (the redo method is basically the same):

        - (void)deleteGraphic:(id)aGraphic
        {
            //NSLog(@"undo drawing");
            //Prepare the undo/redo
            [self.undoManager beginUndoGrouping];
            [self.undoManager setActionName:@"Delete Graphic"];
            [[self.detailItem valueForKey:@"graphics"] removeObject:aGraphic];
            [[self managedObjectContext] deleteObject:aGraphic];
            //End undo/redo
            [self.undoManager registerUndoWithTarget:self selector:@selector(insertGraphic:) object:aGraphic];
            [self.undoManager endUndoGrouping];
            NSLog(@"graphics are %@", [self sortedGraphics]);
            //Update drawing
            [self.quartzView setNeedsDisplay];
        }

    And here's the weirdness. Before the delete:

        graphics are ( <NSManagedObject: 0x1cc3f0> (entity: Graphic; id: 0x1c05f0 <x-coredata:///Graphic/t840FE8AD-F2E7-4214-822F-7994FF93D4754> ; data: { canvas = 0x162b70 <x-coredata://A919979E-75AD-474D-9561-E0E8F3388718/Canvas/p20>; content = <62706c69 73743030 d4010203 04050609 0a582476 65727369 6f6e5424 746f7059 24617263 68697665 7258246f 626a6563 7473>; frameRect = nil; label = nil; order = 1; path = "(...not nil..)"; traits = "(...not nil..)"; type = Path; })

    After redo:

        graphics are ( <NSManagedObject: 0x1cc3f0> (entity: Graphic; id: 0x1c05f0 <x-coredata:///Graphic/t840FE8AD-F2E7-4214-822F-7994FF93D4754> ; data: { canvas = nil; content = nil; frameRect = nil; label = nil; order = 0; path = nil; traits = nil; type = nil; }),

    You can see it's the same object, just totally bleached by Core Data. The relationship delete rules apparently have nothing to do with it, as I've set them to "No Action" in a test.

    Read the article
