Search Results

Search found 7311 results on 293 pages for 'rows'.

  • Using indexes on/through a MySQL view

    - by Peeja
    We've got a MySQL table in which rows are never updated, but instead new rows are added and the old ones marked obsolete. Think Rails' acts_as_paranoid, but for every update. To make working with Rails sane, we've got a view which selects only the rows which are "current". That makes a much better "table" for our ActiveRecord model. The snag: our indexes aren't being used anymore. Queries on the view don't use the underlying tables' indexes. You can't add an index to a view. Without indexes, the app is unbearably slow. The only solution we've come up with is to build a materialized view, but that's a pain in MySQL because they're not natively supported. Is there a better way to do this?
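    A common workaround, sketched below under assumptions (the base table, its obsolete flag, and all names here are hypothetical), is to emulate a materialized view with a real, indexable shadow table kept in sync by triggers:

        -- Hypothetical base table `items`, with an `obsolete` flag marking superseded rows.
        CREATE TABLE current_items LIKE items;
        ALTER TABLE current_items ADD INDEX idx_current_items_name (name);

        -- Mirror every newly added "current" row into the shadow table.
        CREATE TRIGGER items_ai AFTER INSERT ON items
        FOR EACH ROW
          REPLACE INTO current_items (id, name, obsolete)
          VALUES (NEW.id, NEW.name, NEW.obsolete);

        -- Drop a row from the shadow table once it is marked obsolete.
        CREATE TRIGGER items_au AFTER UPDATE ON items
        FOR EACH ROW
          DELETE FROM current_items WHERE id = NEW.id AND NEW.obsolete = 1;

    ActiveRecord can then point at current_items instead of the view, and ordinary secondary indexes apply again.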

    Read the article

  • How to extract a Sub-Matrix from a Matrix?

    - by ZaZu
    Hello, I have a matrix in a txt file, and I want to load the matrix based on my input of the number of rows and columns. For example, I have a 4 by 4 matrix in the file and I want to extract a 3 by 3 matrix; how can I do that? I created a nested loop:

        FILE *sample;
        sample=fopen("randomfile.txt","r");
        for(i=0;i<rows;i++){
            for(j=0;j<cols;j++){
                fscanf(sample,"%f",&matrix[i][j]);
            }
            fscanf(sample,"\n",&matrix[i][j]);
        }
        fclose(sample);

    Sadly, the code does not work. If I have this matrix:

        5.00  4.00    5.00   6.00
        5.00  4.00    3.00  25.00
        5.00  3.00    4.00  23.00
        5.00  2.00  352.00   6.00

    and input 3 for rows and 3 for columns, I get:

        5.00   4.00  5.00
        6.00   5.00  4.00
        3.00  25.00  5.00

    which is obviously wrong: it's reading line by line rather than skipping the unmentioned column. What am I doing wrong? Thanks!
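    A hedged fix: the file stores a full line of values per row, so the reader must consume every value and keep only the wanted prefix. A minimal sketch, assuming the file holds a 4 by 4 matrix and rows/cols select its top-left corner:

        #include <stdio.h>

        #define FILE_ROWS 4   /* assumed dimensions of the matrix stored in the file */
        #define FILE_COLS 4

        /* Read the whole stored matrix, keeping only the rows x cols sub-matrix. */
        int read_submatrix(const char *path, float matrix[][FILE_COLS], int rows, int cols)
        {
            FILE *sample = fopen(path, "r");
            float value;
            int i, j;

            if (sample == NULL)
                return -1;

            for (i = 0; i < FILE_ROWS; i++) {
                for (j = 0; j < FILE_COLS; j++) {
                    if (fscanf(sample, "%f", &value) != 1) {
                        fclose(sample);
                        return -1;              /* malformed file */
                    }
                    if (i < rows && j < cols)
                        matrix[i][j] = value;   /* keep only the wanted cells */
                }
            }
            fclose(sample);
            return 0;
        }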

    Read the article

  • Avoid the use of loops (for) with R

    - by albergali
    Hi, I'm working with R and I have code like this:

        i <- 1
        j <- 1
        for (i in 1:10)
          for (j in 1:100)
            if (data[i] == paths[j, 1]) cluster[i, 4] <- paths[j, 2]

    where:

    data is a vector with 100 rows and 1 column
    paths is a matrix with 100 rows and 5 columns
    cluster is a matrix with 100 rows and 5 columns

    My question is: how could I avoid using "for" loops to iterate through the matrix? I don't know whether apply functions (lapply, tapply...) are useful in this case. This becomes a problem when j = 10000, for example, because execution time gets very long. Thank you
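    A vectorized sketch (assuming data, paths and cluster as described above): match() returns, for each element of data, the index of the first row of paths whose first column equals it, which replaces both loops:

        # Index of the first matching row of paths for each element of data
        # (NA where there is no match).
        idx <- match(data, paths[, 1])

        # Copy the matched rows' second column into column 4 of cluster,
        # only where a match was actually found.
        hit <- !is.na(idx)
        cluster[hit, 4] <- paths[idx[hit], 2]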

    Read the article

  • Why does my report recalculate totals when I scroll in Access '07?

    - by andrewb
    Whenever I make a report in Access '07 and include some sort of total (whether counting or summing values), the totals recalculate when I scroll in the preview. This is really annoying, as Access takes a while (several tenths of a second) to do this, and while it does, the totals go blank. I've looked for a solution online, but I can't find this issue described anywhere. How can I stop the totals from recalculating when I scroll? I'm hoping for a simple solution that solves this for all reports, or perhaps a simple property tweak on each report. I don't want to have to add code for every single report! I should describe the report layouts I'm using: they contain rows of data all on one page, and at times I group the rows. The number of rows isn't large, maybe around 50 at a time or so.

    Read the article

  • Top 3 Max entries for a Combination with a condition

    - by Asharmb
    I am new to the SQL side, so if this question sounds very easy, please spare me. I have 4 columns in a SQL table, say A, B, C, D. For any BC combination I may get any number of rows. I need at most 3 rows per BC combination (which in turn give me 3 unique values of A for that BC combination), and those selected rows should have the top 3 max values of D compared to the other entries for that BC combination. There can be any number of BC combinations, and the above logic should apply to all of them.
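    A hedged sketch (assuming a database with window functions, such as SQL Server 2005+, and a hypothetical table name t): number the rows within each BC group by descending D and keep the first three:

        SELECT A, B, C, D
        FROM (
            SELECT A, B, C, D,
                   ROW_NUMBER() OVER (PARTITION BY B, C ORDER BY D DESC) AS rn
            FROM t
        ) ranked
        WHERE rn <= 3;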

    Read the article

  • Please help me in creating an update query

    - by Rajesh Rolen- DotNet Developer
    I have got a table which contains 5 columns, and this query requirement: update row no 8 (or id=8), setting its column 2 and column 3 values from the column 2 and column 3 values of id 9. That is, all values of columns 2 and 3 should be shifted to columns 2 and 3 of the row above (starting from row no 8), and the last row's columns 2 and 3 will be null. For example, with just 3 rows, the first row is untouched, the second to N-1th rows are shifted once, and the Nth row has nulls.

        id  math  science  sst  hindi  english
        1   11    12       13   14     15
        2   21    22       23   24     25
        3   31    32       33   34     35

    The result of the query for id=2 should be:

        id  math  science  sst  hindi  english
        1   11    12       13   14     15
        2   31    32       23   24     25    // value of 3rd row (cols 2, 3) shifted to row 2
        3   null  null     33   34     35

    This process should run for all rows whose id >= 2. Please help me to create this update query. I am using MS SQL Server 2005.
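    A hedged sketch in T-SQL (assuming the table is named marks and the ids are contiguous): self-join each row to the next id, so the last row pairs with nothing and receives nulls through the LEFT JOIN:

        UPDATE m
        SET m.math    = nxt.math,
            m.science = nxt.science
        FROM marks AS m
        LEFT JOIN marks AS nxt ON nxt.id = m.id + 1
        WHERE m.id >= 2;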

    Read the article

  • How to increase query speed without using full-text search?

    - by andre matos
    This is my simple query; by searching for selectnothing I'm sure I'll have no hits.

        SELECT nome_t FROM myTable WHERE nome_t ILIKE '%selectnothing%';

    This is the EXPLAIN ANALYZE VERBOSE output:

        Seq Scan on myTable (cost=0.00..15259.04 rows=37 width=29) (actual time=2153.061..2153.061 rows=0 loops=1)
          Output: nome_t
          Filter: (nome_t ~~* '%selectnothing%'::text)
        Total runtime: 2153.116 ms

    myTable has around 350k rows and the table definition is something like:

        CREATE TABLE myTable (
            nome_t text NOT NULL
        );

    I have an index on nome_t, as stated below:

        CREATE INDEX idx_m_nome_t ON myTable USING btree (nome_t);

    Although this is clearly a good candidate for full-text search, I would like to rule that option out for now. This query is meant to be run from a web application, and currently it's taking around 2 seconds, which is obviously too much. Is there anything I can do, like using other index methods, to improve the speed of this query?
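    A hedged option (assuming PostgreSQL 9.1+ with the pg_trgm contrib module available): a trigram index can serve ILIKE patterns with leading wildcards, which a btree index never can:

        CREATE EXTENSION IF NOT EXISTS pg_trgm;

        -- A trigram GIN index is usable by LIKE/ILIKE '%...%' filters.
        CREATE INDEX idx_m_nome_t_trgm ON myTable USING gin (nome_t gin_trgm_ops);

    On older versions the module is installed via the contrib SQL scripts instead of CREATE EXTENSION.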

    Read the article

  • MySQL: Is it faster to use inserts and updates instead of insert on duplicate key update?

    - by Nir
    I have a cron job that updates a large number of rows in a database. Some of the rows are new and are therefore inserted, and some are updates of existing ones and are therefore updated. I use insert on duplicate key update for the whole data set and get it done in one call. But I actually know which rows are new and which are updated, so I could also do the inserts and updates separately. Will separating the inserts and updates have an advantage in terms of performance? What are the mechanics behind this? Thanks!
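    For reference, a hedged sketch of the two approaches (table and column names are hypothetical, with a unique key on name):

        -- One statement; the server decides per row:
        INSERT INTO counters (name, hits) VALUES ('home', 1), ('about', 1)
        ON DUPLICATE KEY UPDATE hits = hits + VALUES(hits);

        -- Split by a caller who already knows which rows exist:
        INSERT INTO counters (name, hits) VALUES ('home', 1);
        UPDATE counters SET hits = hits + 1 WHERE name = 'about';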

    Read the article

  • Adding Postgres table cells based on same value

    - by russell kinch
    I have a table called expenses. There are numerous columns, but the ones involved in my PHP page are date, supplierinv and amount. I have created a page that lists all the expenses in a given month and totals them at the end. However, each row has a value, and many rows might be on the same supplier invoice. This means adding each row with the same supplierinv to get a total as per my bank statement. Is there any way I can get a total for the rows based on the supplierinv? I mean, say I have 10 rows: 5 on supplierinv 4, two on supplierinv 5 and 3 on supplierinv 12. How can I get 3 figures (inv 4, 5 and 12) and the grand total at the bottom? Many thanks
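    A hedged sketch (the date range is an assumption standing in for whatever the page already filters on): GROUP BY collapses the rows per invoice, and a second query gives the grand total:

        -- One subtotal row per supplier invoice for the month.
        SELECT supplierinv, SUM(amount) AS invoice_total
        FROM expenses
        WHERE date >= '2010-04-01' AND date < '2010-05-01'
        GROUP BY supplierinv
        ORDER BY supplierinv;

        -- Grand total for the same period.
        SELECT SUM(amount) AS grand_total
        FROM expenses
        WHERE date >= '2010-04-01' AND date < '2010-05-01';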

    Read the article

  • How can I loop over a query for a specific number of times that may be greater than the result?

    - by JS
    I need to loop over a query exactly 12 times to complete rows in a form, but rarely will the query return 12 rows. The cfloop endRow attribute doesn't force the loop to keep going if the result is < 12; if it did, it would be ideal to use something like cfloop query="myQuery" endRow="12". The 2 options that I have now are to skip the loop and write out all 12 rows, but that results in a lot of duplicate code (there are 20 columns), or do a query of queries for each row, which seems like a lot of wasted processing. Thanks for any ideas.
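    A hedged sketch in CFML: loop over a plain index and read from the query only while rows remain (the column name someColumn is hypothetical):

        <cfloop index="i" from="1" to="12">
          <cfif i lte myQuery.recordCount>
            <!--- render row i from the query --->
            <cfoutput>#myQuery.someColumn[i]#</cfoutput>
          <cfelse>
            <!--- render an empty filler row for the form --->
          </cfif>
        </cfloop>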

    Read the article

  • SQL Trigger doesn't work...

    - by Gabotron
    Is there a way in which a trigger is not fired? We have this situation: we have a table from which rows are being deleted, and we need to know who deleted them and/or when. We created this trigger:

        ALTER TRIGGER [dbo].[AUDITdel_nit]
        ON [dbo].[Client]
        FOR DELETE
        AS
        INSERT INTO AUDIT
        SELECT 'Delete', GETDATE(), 'Row Deleted', SYSTEM_USER, HOST_NAME(),
               (SELECT 'ID Client: ' + CONVERT(varchar(12), Id) FROM deleted),
               'Client', APP_NAME()

    We made some tests, deleting rows via stored procedures, and the deleted rows appear in our AUDIT table. But today we suddenly found a deleted row that doesn't appear in the AUDIT table... Any idea how that can happen?
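    For reference, a couple of hedged ways rows can disappear without a FOR DELETE trigger firing:

        -- TRUNCATE removes all rows but fires no DELETE triggers.
        TRUNCATE TABLE dbo.Client;

        -- A session can also disable the trigger around a bulk operation.
        ALTER TABLE dbo.Client DISABLE TRIGGER AUDITdel_nit;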

    Read the article

  • In SQL, what does Group By mean without Count(*), Sum(), Max(), avg(), ..., and what are some uses?

    - by Jian Lin
    In SQL, if we use Group By without Count(*) or Sum(), etc., then the result is as follows:

        mysql> select * from sentGifts;
        +--------+------------+--------+------+---------------------+--------+
        | sentID | whenSent   | fromID | toID | trytryWhen          | giftID |
        +--------+------------+--------+------+---------------------+--------+
        |      1 | 2010-04-24 |    123 |  456 | 2010-04-24 01:52:20 |    100 |
        |      2 | 2010-04-24 |    123 | 4568 | 2010-04-24 01:56:04 |    100 |
        |      3 | 2010-04-24 |    123 | NULL | NULL                |      1 |
        |      4 | 2010-04-24 |   NULL |  111 | 2010-04-24 03:10:42 |      2 |
        |      5 | 2010-03-03 |     11 |   22 | 2010-03-03 00:00:00 |      6 |
        |      6 | 2010-04-24 |     11 |  222 | 2010-04-24 03:54:49 |      6 |
        |      7 | 2010-04-24 |      1 |    2 | 2010-04-24 03:58:45 |      6 |
        +--------+------------+--------+------+---------------------+--------+
        7 rows in set (0.00 sec)

        mysql> select *, count(*) from sentGifts group by whenSent;
        +--------+------------+--------+------+---------------------+--------+----------+
        | sentID | whenSent   | fromID | toID | trytryWhen          | giftID | count(*) |
        +--------+------------+--------+------+---------------------+--------+----------+
        |      5 | 2010-03-03 |     11 |   22 | 2010-03-03 00:00:00 |      6 |        1 |
        |      1 | 2010-04-24 |    123 |  456 | 2010-04-24 01:52:20 |    100 |        6 |
        +--------+------------+--------+------+---------------------+--------+----------+
        2 rows in set (0.00 sec)

        mysql> select * from sentGifts group by whenSent;
        +--------+------------+--------+------+---------------------+--------+
        | sentID | whenSent   | fromID | toID | trytryWhen          | giftID |
        +--------+------------+--------+------+---------------------+--------+
        |      5 | 2010-03-03 |     11 |   22 | 2010-03-03 00:00:00 |      6 |
        |      1 | 2010-04-24 |    123 |  456 | 2010-04-24 01:52:20 |    100 |
        +--------+------------+--------+------+---------------------+--------+
        2 rows in set (0.00 sec)

    Only 1 row is returned per "group". What does it mean when there is no Count(*), etc., when using Group By, and what are its uses? Thanks.

    Read the article

  • MySQL replace matching but not changing

    - by alex
    I've used MySQL's update/replace function before, but even though I think I'm following the same syntax, I can't get this to work: it matches the rows but doesn't replace. Here's what I'm trying to do:

        mysql> update contained_widgets set preference_values = replace(preference_values,
            '<li><a_href="/enewsletter"><span class="not-tc">eNewsletter</span></a></li>',
            '<li><a_href="/enewsletter"><span class="not-tc">eNewsletter</span></a></li> <li> <a_href="/projects"><span class="not-tc">Projects</span></a></li>');
        Query OK, 0 rows affected (0.00 sec)
        Rows matched: 77  Changed: 0  Warnings: 0

    I don't see what I'm missing. Any help is appreciated. (I edited "a " to "a_" because the site thinks I'm posting spam links otherwise.)
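    A hedged check worth running first ("Rows matched: 77" only means 77 rows passed the WHERE-less UPDATE; "Changed: 0" means replace() found no occurrences): confirm the exact search string really occurs, byte for byte, in the column:

        SELECT COUNT(*)
        FROM contained_widgets
        WHERE INSTR(preference_values,
                    '<li><a_href="/enewsletter"><span class="not-tc">eNewsletter</span></a></li>') > 0;

    Stray whitespace, attribute ordering or entity encoding in the stored HTML would all make this count 0 while the UPDATE silently changes nothing.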

    Read the article

  • C# Insert ArrayList in DataRow

    - by Emre Kabaoglu
    I want to insert an ArrayList into a DataRow using this code:

        ArrayList array = new ArrayList();
        foreach (string s in array)
        {
            valuesdata.Rows.Add(s);
        }

    But my DataTable must have only one DataRow, and my code created eight. I tried

        valuesdata.Rows.Add(array);

    but it doesn't work. It should be the equivalent of

        valuesdata.Rows.Add(array[0], array[1], array[2], array[3], ...);

    How can I solve this problem? Thanks.
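    A hedged sketch: DataRowCollection.Add has an overload taking params object[], so converting the list to an array spreads its items across the columns of a single row (this assumes valuesdata has at least as many columns as the list has items):

        // One row; each list element lands in its own column, left to right.
        valuesdata.Rows.Add(array.ToArray());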

    Read the article

  • Facebook Like error

    - by user1150440
    I am using the following code in Page_Load:

        Dim metaTagDesc As New HtmlMeta() 'Create a new instance of the META tag object
        Dim metaTagKeywords As New HtmlMeta()
        Dim metaTagKeywords1 As New HtmlMeta()
        Dim metaTagKeywords2 As New HtmlMeta()

        metaTagDesc.Attributes.Add("property", "og:title") 'Add attributes to the META tag object for identification
        metaTagDesc.Attributes.Add("content", _table.Rows(0).Item(2))
        metaTagKeywords.Attributes.Add("property", "og:type")
        metaTagKeywords.Attributes.Add("content", "website")
        metaTagKeywords1.Attributes.Add("property", "og:url")
        metaTagKeywords1.Attributes.Add("content", "http://citizen.tricedeals.com/Reports/" & _table.Rows(0).Item(0))
        metaTagKeywords2.Attributes.Add("property", "og:image")
        metaTagKeywords2.Attributes.Add("content", "http://citizen.tricedeals.com/ProfilePictures/" & _table.Rows(0).Item(1) & ".jpg")

        Page.Header.Controls.Add(metaTagDesc)
        Page.Header.Controls.Add(metaTagKeywords)
        Page.Header.Controls.Add(metaTagKeywords1)
        Page.Header.Controls.Add(metaTagKeywords2)

    But I keep getting this error: "Your og:type object name has disallowed characters in it. It must match [a-z][a-z0-9._]*" Why?

    Read the article

  • C#, can you think of any more optimization in this code?

    - by Sha Le
    Hi All: Look at the following code:

        StringBuilder row = TemplateManager.GetRow("xyz"); // no control over this method
        StringBuilder rows = new StringBuilder();

        foreach (Record r in records)
        {
            StringBuilder thisRow = new StringBuilder(row.ToString());
            thisRow.Replace("%firstName%", r.FirstName)
                   .Replace("%lastName%", r.LastName)
                   // all other replacement goes here
                   .Replace("%LastModifiedDate%", r.LastModifiedDate);
            // finally append row to rows
            rows.Append(thisRow);
        }

    Currently 3 StringBuilders and row.ToString() is inside a loop. Is there any room for further optimization here? Thanks a bunch. :-)
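    One hedged tweak: since row never changes, its ToString() can be hoisted out of the loop so the template string is built once rather than per record; the per-iteration cost is then just the Replace calls:

        string template = row.ToString();   // computed once, reused every iteration
        StringBuilder rows = new StringBuilder();

        foreach (Record r in records)
        {
            StringBuilder thisRow = new StringBuilder(template);
            thisRow.Replace("%firstName%", r.FirstName)
                   .Replace("%lastName%", r.LastName)
                   .Replace("%LastModifiedDate%", r.LastModifiedDate);
            rows.Append(thisRow);
        }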

    Read the article

  • SQL Stored Procedure

    - by Nathan
    I am trying to run a stored procedure with a while loop in it using Aqua Data Studio 6.5, and as soon as the SP starts, Aqua Data starts consuming an increasing amount of my CPU's memory, which makes absolutely no sense to me because everything should be off on the Sybase server I am working with. I have commented out and tested every piece of the SP and narrowed the issue down to the while loop. Can anyone explain to me what is going on?

        create procedure sp_check_stuff
        as
        begin
            declare @counter numeric(9),
                    @max_id  numeric(9),
                    @exists  numeric(1),
                    @rows    numeric(1)

            select @max_id = max(id) from my_table
            set @counter = 0
            set @exists = 0
            set @rows = 0

            while @counter <= @max_id
            begin
                -- More logic which doesn't affect memory usage, based
                -- on commenting it out and running the SP
                set @counter = @counter + 1
                set @exists = 0
                set @rows = 0
            end
        end
        return

    Read the article

  • Need to find differences between 2 identically structured SQL tables

    - by balalakshmi
    I need to find differences between 2 identically structured SQL tables. Each table is uploaded from a third-party tool into a SQL Server database. The table structure is:

        IssueID - Status - Who

    IssueID will not be repeated within a table, though it is not explicitly defined as a primary key. There could be additions, deletions or updates between any 2 tables. What I need:

    Number of rows added & their details
    Number of rows deleted & their details
    Number of rows updated & their details

    How do I do this: 1) is it better to use SQL, or 2) to use DataTables?
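    A hedged SQL sketch (old_t and new_t are hypothetical names for the two uploads):

        -- Added: IssueIDs present in new_t but not in old_t.
        SELECT n.* FROM new_t n
        LEFT JOIN old_t o ON o.IssueID = n.IssueID
        WHERE o.IssueID IS NULL;

        -- Deleted: IssueIDs present in old_t but not in new_t.
        SELECT o.* FROM old_t o
        LEFT JOIN new_t n ON n.IssueID = o.IssueID
        WHERE n.IssueID IS NULL;

        -- Updated: same IssueID, but Status or Who changed.
        SELECT n.* FROM new_t n
        JOIN old_t o ON o.IssueID = n.IssueID
        WHERE o.Status <> n.Status OR o.Who <> n.Who;

    COUNT(*) over each of these gives the three numbers.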

    Read the article

  • Adding a Third Table to a Two-Table Join Query

    - by John
    Hello,
    The query below works just fine. It pulls fields from two MySQL tables, "comment" and "login". It does this for rows where "username" in the table "login" equals the variable "$profile". It also pulls fields for rows where "loginid" in the table "comment" equals the "loginid" that is also being pulled from "login". I would like to pull data from a third table called "submission", which has the following fields:

        submissionid, loginid, title, url, displayurl, datesubmitted

    I would like to pull fields from rows in "submission" where "loginid" equals the "loginid" that is already being pulled from the other two tables, "login" and "comment". How can I do this? Thanks in advance, John

    Query:

        $sqlStrc = "SELECT l.username, l.loginid, c.loginid, c.commentid, c.submissionid,
                           c.comment, c.datecommented
                    FROM comment AS c
                    INNER JOIN login AS l ON c.loginid = l.loginid
                    WHERE l.username = '$profile'
                    ORDER BY c.datecommented DESC
                    LIMIT 10";
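    A hedged extension of the same query (with a caveat: joining submission on loginid multiplies the comment rows by the number of submissions that login has, which may or may not be what is wanted):

        $sqlStrc = "SELECT l.username, l.loginid,
                           c.commentid, c.submissionid, c.comment, c.datecommented,
                           s.submissionid, s.title, s.url, s.displayurl, s.datesubmitted
                    FROM comment AS c
                    INNER JOIN login AS l ON c.loginid = l.loginid
                    INNER JOIN submission AS s ON s.loginid = l.loginid
                    WHERE l.username = '$profile'
                    ORDER BY c.datecommented DESC
                    LIMIT 10";

    If instead each comment should pair with its own submission, the join condition s.submissionid = c.submissionid would be the alternative.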

    Read the article

  • Java long task - Did it stop writing to file?

    - by rockit
    I am writing a lot of data to a file, and while keeping my eye on the file, it eventually stopped growing in size. Essentially my task is getting information from a database and printing out all non-unique values in column A. Since there are many rows in the database table, and the database table is across my network, this is taking days to complete. Thus I'm concerned that since the file isn't growing, it isn't actually writing to the file anymore. Which is odd: I have no "catch" blocks in my code, so if there was a problem writing to the file, wouldn't it have thrown an error?! Should I let the task complete (estimated 2-3 days from today), or is there something else going on here that I don't know about, making my application not write to the file?! My algorithm goes something like this:

        declare file
        create new file
        open file for writing
        get database connection
        get resultset from database
        for each row in the resultset
            - write column "A" to file
            - if row# % 100000 == 0, write to screen "completed " + row# + " rows"
        when no more rows exist, close file
        write to screen "completed"
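    One hedged explanation: if the writer is buffered, bytes sit in memory until the buffer fills or is flushed, so the file can legitimately stall between flushes. A minimal sketch (the query, table and column names are assumptions) that flushes at each progress mark, so the on-disk size tracks progress:

        import java.io.BufferedWriter;
        import java.io.FileWriter;
        import java.sql.Connection;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class DumpColumnA {
            public static void dump(Connection conn, String outPath) throws Exception {
                try (BufferedWriter out = new BufferedWriter(new FileWriter(outPath));
                     Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery("SELECT a FROM big_table")) {
                    long row = 0;
                    while (rs.next()) {
                        out.write(rs.getString("a"));
                        out.newLine();
                        if (++row % 100000 == 0) {
                            out.flush();   // push buffered output to disk
                            System.out.println("completed " + row + " rows");
                        }
                    }
                }   // try-with-resources closes (and flushes) everything
                System.out.println("completed");
            }
        }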

    Read the article

  • How to speed up dumping a DataTable into an Excel worksheet?

    - by AngryHacker
    I have the following routine that dumps a DataTable into an Excel worksheet.

        private void RenderDataTableOnXlSheet(DataTable dt, Excel.Worksheet xlWk,
                                              string[] columnNames, string[] fieldNames)
        {
            // render the column names (e.g. headers)
            for (int i = 0; i < columnNames.Length; i++)
                xlWk.Cells[1, i + 1] = columnNames[i];

            // render the data
            for (int i = 0; i < fieldNames.Length; i++)
            {
                for (int j = 0; j < dt.Rows.Count; j++)
                {
                    xlWk.Cells[j + 2, i + 1] = dt.Rows[j][fieldNames[i]].ToString();
                }
            }
        }

    For whatever reason, dumping a DataTable of 25 columns and 400 rows takes about 10-15 seconds on my relatively modern PC, and even longer on testers' machines. Is there anything I can do to speed up this code? Or is interop just inherently slow?
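    Each cell assignment is a COM round trip; a hedged rewrite that stages the values in an object[,] and assigns the whole block to a Range in one call usually cuts the time dramatically:

        private void RenderDataTableOnXlSheet(DataTable dt, Excel.Worksheet xlWk,
                                              string[] columnNames, string[] fieldNames)
        {
            // headers, as before
            for (int i = 0; i < columnNames.Length; i++)
                xlWk.Cells[1, i + 1] = columnNames[i];

            // stage all values in managed memory
            object[,] data = new object[dt.Rows.Count, fieldNames.Length];
            for (int i = 0; i < fieldNames.Length; i++)
                for (int j = 0; j < dt.Rows.Count; j++)
                    data[j, i] = dt.Rows[j][fieldNames[i]].ToString();

            // one COM call instead of rows * columns calls
            Excel.Range target = xlWk.get_Range(xlWk.Cells[2, 1],
                                                xlWk.Cells[dt.Rows.Count + 1, fieldNames.Length]);
            target.Value2 = data;
        }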

    Read the article

  • How does SqlDataAdapter work internally?

    - by tigrou
    I wonder how SqlDataAdapter works internally, especially when using UpdateCommand to update a huge DataTable (since it's usually a lot faster than just sending SQL statements from a loop). Here are some ideas I have in mind:

    1. It creates a prepared SQL statement (using SqlCommand.Prepare()) with CommandText filled in and SQL parameters initialized with the correct SQL types. Then it loops over the DataRows that need updating, and for each record it updates the parameter values and calls SqlCommand.ExecuteNonQuery().

    2. It creates a bunch of SqlCommand objects with everything filled in (CommandText and SQL parameters). Several SqlCommands at once are then batched to the server (depending on UpdateBatchSize).

    3. It uses some special, low-level or undocumented SQL driver instructions that allow performing an update on several rows in an efficient way (the rows to update would be provided in a special data format, and the same SQL query (UpdateCommand here) would be executed against each of these rows).
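    For reference, a hedged sketch of opting in to batched updates (the connection and the items table are hypothetical):

        using System.Data;
        using System.Data.SqlClient;

        // Assumes a table items(id INT PRIMARY KEY, name NVARCHAR(100)).
        var adapter = new SqlDataAdapter("SELECT id, name FROM items", connection);
        var builder = new SqlCommandBuilder(adapter);

        // Batching requires the command not to expect per-row results back.
        adapter.UpdateCommand = builder.GetUpdateCommand();
        adapter.UpdateCommand.UpdatedRowSource = UpdateRowSource.None;
        adapter.UpdateBatchSize = 100;   // up to 100 statements per round trip

        var table = new DataTable();
        adapter.Fill(table);
        foreach (DataRow row in table.Rows)
            row["name"] = ((string)row["name"]).Trim();

        adapter.Update(table);   // sends batched parameterized statements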

    Read the article

  • Intermittent/staggered loading of an object

    - by cammy
    Hi guys, I've just recently tried my hand at ActionScript 3 and have come across a roadblock. How do I go about rendering the cubes (cube1) intermittently, i.e. staggered loading? I need the cubes to load a split second apart from each other. Below is a snippet of what I have so far:

        var rows:int = 5;
        var cols:int = 3;
        var spacery:int = 100;
        var spacerx:int = 120;
        var box_count:int = 8;

        for (var i:int; i < box_count; i++)
        {
            cube1 = new Cube(ml, 100, 10, 80, 1, 1, 1);
            cube1.y = ((i % rows)) * (cube1.x + spacery);
            cube1.x = Math.floor(i / rows) * (cube1.x + spacerx);
            cube1.z = 0;
            bigBox.addChild(cube1);
        }
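    A hedged sketch: replace the for loop with a Timer that adds one cube per tick, reusing the placement math above (100 ms between cubes is an arbitrary choice):

        import flash.utils.Timer;
        import flash.events.TimerEvent;

        var added:int = 0;
        var timer:Timer = new Timer(100, box_count); // fires box_count times

        timer.addEventListener(TimerEvent.TIMER, function(e:TimerEvent):void {
            var i:int = added++;
            var cube:Cube = new Cube(ml, 100, 10, 80, 1, 1, 1);
            cube.y = (i % rows) * (cube.x + spacery);
            cube.x = Math.floor(i / rows) * (cube.x + spacerx);
            cube.z = 0;
            bigBox.addChild(cube);
        });
        timer.start();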

    Read the article

  • Pagination using Ajax in jQuery DataTables

    - by kshtjsnghl
    I am using the DataTables plugin for a table on a page I am working on. It basically fetches rows through an Ajax call, and in this Ajax call I send the search params that the user selects and the page number required. I need the Next, Previous, First and Last buttons to also fire the same Ajax call, but with different page numbers, as the back-end interceptor depends on the page number. This API call would return the total number of rows (say 1000) matching these search params, plus the rows for the page size (say 50). Is there any way I can use DataTables to do this?
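    DataTables' server-side processing mode does exactly this; a hedged sketch of the 1.9-era configuration (the endpoint URL and the status filter are hypothetical):

        $('#myTable').dataTable({
            bServerSide: true,         // paging, sorting and filtering happen server-side
            sAjaxSource: '/api/rows',  // called again for every page change
            fnServerParams: function (aoData) {
                // append the user's search params to every request
                aoData.push({ name: 'status', value: $('#status').val() });
            }
        });

    The server receives paging parameters (iDisplayStart, iDisplayLength) and replies with iTotalRecords, iTotalDisplayRecords and the page's rows in aaData.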

    Read the article
