Search Results

Search found 7311 results on 293 pages for 'rows'.


  • Firing Postgres triggers on different table columns

    - by aatifh
    CONTENT_TABLE:

        id | author | timestamp | title | description
        ---+--------+-----------+-------+-------------
        (0 rows)

    SEARCH_TABLE:

        id | content_type_id | object_id | tsvector_title | tsvector_description
        ---+-----------------+-----------+----------------+----------------------
        (0 rows)

    I have to fire a trigger whenever CONTENT_TABLE is inserted or updated. Something like this:

        CREATE TRIGGER tsvectorupdate BEFORE INSERT OR UPDATE ON course_course
        FOR EACH ROW EXECUTE PROCEDURE tsvector_update_trigger(
            SHOULD_BE_THE_COLUMN_OF_SEARCH_TABLE(tsvector_description),
            'pg_catalog.english', description);

    What I actually need is to take the title and description of CONTENT_TABLE and store their tsvectors in SEARCH_TABLE's tsvector_title and tsvector_description columns. Can I do it with just one trigger? Any sort of help will be appreciated. Thanks in advance.
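
    A hedged sketch of one possible approach: the built-in tsvector_update_trigger can only write to columns of the table that fired it, so writing into a second table takes a small custom PL/pgSQL function. Table and column names follow the question; keying SEARCH_TABLE rows on object_id is my assumption.

        CREATE OR REPLACE FUNCTION content_search_update() RETURNS trigger AS $$
        BEGIN
            -- Drop any stale search row for this content row, then re-create it.
            DELETE FROM search_table WHERE object_id = NEW.id;
            INSERT INTO search_table (object_id, tsvector_title, tsvector_description)
            VALUES (NEW.id,
                    to_tsvector('pg_catalog.english', coalesce(NEW.title, '')),
                    to_tsvector('pg_catalog.english', coalesce(NEW.description, '')));
            RETURN NULL;  -- return value is ignored for AFTER row triggers
        END;
        $$ LANGUAGE plpgsql;

        CREATE TRIGGER tsvectorupdate
            AFTER INSERT OR UPDATE ON content_table
            FOR EACH ROW EXECUTE PROCEDURE content_search_update();

    One trigger covers both columns, because the function body can compute as many tsvectors as needed.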

  • Little Employee/Shift timetable HELP!!!

    - by DAVID
    Morning guys, I have the following tables:

        operator(ope_id, ope_name)
        ope_shift(ope_id, shift_id, shift_date)
        shift(shift_id, shift_start, shift_end)

    Here is a better view of the data: http://latinunit.net/emp_shift.txt and here is a screenshot of a select statement against the tables: http://img256.imageshack.us/img256/4013/opeshift.jpg

    I'm using this code to view the current total shifts per operator, and it works:

        SELECT ope_id, COUNT(ope_id) AS Total_shifts
        FROM operator_shift
        GROUP BY ope_id;

    BUT if there were 500 more rows it would count them all as well. THE QUESTION is: does anyone have a better way of making my database work, or how can I tell the system that those rows are a whole month? I remember a friend said something about counting and then dividing by 30, but I'm not sure. What if the month isn't finished and you want to show the employee with the highest shifts to date?
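
    One hedged way to scope the count to a period instead of the whole table is to filter on shift_date; the table and column names are taken from the question, and the date literals are placeholders for whatever month is wanted:

        SELECT ope_id, COUNT(*) AS total_shifts
        FROM operator_shift
        WHERE shift_date >= '2010-04-01'
          AND shift_date <  '2010-05-01'
        GROUP BY ope_id
        ORDER BY total_shifts DESC;

    For "highest shifts to date" in an unfinished month, run the window from the first of the current month to today; no dividing by 30 is needed once the WHERE clause limits which rows get counted.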

  • JSF actionListener is called multiple times from within HtmlTable

    - by Rose
    I have a mix of columns in my HTML table: one column is an actionListener, two columns are actions, and the other columns are simple output.

        <h:dataTable styleClass="table" id="orderTable" value="#{table.dataModel}"
                     var="anOrder" binding="#{table.dataTable}" rows="#{table.rows}">
            <an:listenerColumn backingBean="${orderEntry}" entity="${anOrder}" actionListener="closeOrder"/>
            <an:column label="#{msg.hdr_orderStatus}" entity="#{anOrder}" propertyName="orderStatus"/>
            <an:actionColumn backingBean="${orderEntry}" entity="${anOrder}" action="editOrder"/>
            <an:actionColumn backingBean="${orderEntry}" entity="${anOrder}" action="viewOrder"/>
            ....

    I'm using custom tags, but the behavior is the same if I use the default column tags. I've noticed a very strange effect: when clicking the actionListener column, the ActionEvent is handled 3 times. If I remove the 2 action columns, then the ActionEvent is handled only once. The managed bean has session scope. The bean method:

        public void closeOrder(ActionEvent event) {
            OrdersDto order;
            if ((order = orderRow()) == null) {
                return;
            }
            System.out.println("closeOrder() 1");
            orderManager.closeOrder();
            System.out.println("closeOrder() 2");
        }

    The console prints the 'debug' text 3 times.

  • How to Pick a Column Value from a ListView Row - C#.NET

    - by peace
    How can I fetch the value 500 into a variable from the selected row? One solution would be to get the row position number and then the CustomerID position number. Can you please give a simple solution? SelectedItems means the selected row and SubItems means the column values, so SelectedItem 0 and SubItem 0 would represent the value 500, right? This is how I populate the ListView:

        for (int i = 0; i < tempTable.Rows.Count; i++)
        {
            DataRow row = tempTable.Rows[i];
            ListViewItem lvi = new ListViewItem(row["customerID"].ToString());
            lvi.SubItems.Add(row["companyName"].ToString());
            lvi.SubItems.Add(row["firstName"].ToString());
            lvi.SubItems.Add(row["lastName"].ToString());
            lstvRecordsCus.Items.Add(lvi);
        }
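
    A minimal sketch of one way to read the value back; the control name follows the population code above, and the empty-selection guard is an addition:

        // Read the customerID column of the first selected row.
        if (lstvRecordsCus.SelectedItems.Count > 0)
        {
            ListViewItem selected = lstvRecordsCus.SelectedItems[0];
            string customerId = selected.SubItems[0].Text; // SubItems[0] is the item's own text (customerID)
            string company    = selected.SubItems[1].Text; // first added sub-item (companyName)
        }

    So yes: SelectedItems[0] is the selected row, and SubItems[0] of that row is where the 500 lives.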

  • How Do I Prevent Rails From Treating Edit Fields_For Differently From New Fields_For

    - by James
    I am using Rails 3 beta 3 and CouchDB via CouchRest. I am not using ActiveRecord. I want to add multiple "Sections" to a "Guide" and add and remove sections dynamically via a little JavaScript. I have looked at all the screencasts by Ryan Bates and they have helped immensely. The only difference is that I want to save all the sections as an array of sections instead of as individual sections. Basically like this:

        "sections" => [{"title" => "Foo1", "content" => "Bar1"},
                       {"title" => "Foo2", "content" => "Bar2"}]

    So basically I need the params hash to look like that when the form is submitted. When I create my form I am doing the following:

        <%= form_for @guide, :url => { :action => "create" } do |f| %>
          <%= render :partial => 'section', :collection => @guide.sections %>
          <%= f.submit "Save" %>
        <% end %>

    And my section partial looks like this:

        <%= fields_for "sections[]", section do |guide_section_form| %>
          <%= guide_section_form.text_field :section_title %>
          <%= guide_section_form.text_area :content, :rows => 3 %>
        <% end %>

    When I create the guide with sections, it works perfectly: the params hash gives me a sections array just like I would want. The problem comes when I want to edit the guide/sections and save them again, because Rails inserts the id of the guide into the id and name of each form field, which screws up the params hash on form submission. Just to be clear, here is the raw form output for a new resource:

        <input type="text" size="30" name="sections[][section_title]" id="sections__section_title">
        <textarea rows="3" name="sections[][content]" id="sections__content" cols="40"></textarea>

    And here is what it looks like when editing an existing resource:

        <input type="text" value="Foo1" size="30" name="sections[cd2f2759895b5ae6cb7946def0b321f1][section_title]" id="sections_cd2f2759895b5ae6cb7946def0b321f1_section_title">
        <textarea rows="3" name="sections[cd2f2759895b5ae6cb7946def0b321f1][content]" id="sections_cd2f2759895b5ae6cb7946def0b321f1_content" cols="40">Bar1</textarea>

    How do I force Rails to always use the new-resource behavior and not automatically add the id to the name and value? Do I have to create a custom form builder? Is there some other trick to prevent Rails from putting the id in there? I have tried a bunch of stuff and nothing is working. Thanks in advance!
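
    One workaround that may be worth trying, hedged because Rails 3 beta behavior here isn't guaranteed: fields_for accepts an :index option, and passing nil is a known trick for suppressing the record id in the generated names, so the edit form would emit sections[][...] just like the new form:

        <%= fields_for "sections[]", section, :index => nil do |guide_section_form| %>
          <%= guide_section_form.text_field :section_title %>
          <%= guide_section_form.text_area :content, :rows => 3 %>
        <% end %>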

  • I DISTINCTly hate MySQL (help building a query)

    - by Alex Mcp
    This is straightforward, I believe: I have a table with 30,000 rows. When I run

        SELECT DISTINCT location FROM myTable

    it returns 21,000 rows, about what I'd expect, but it only returns that one column. What I want is to move those to a new table, but with the whole row for each match. My best guess is something like

        SELECT * FROM (SELECT DISTINCT location FROM myTable)

    or something like that, but it says I have a vague syntax error. Is there a good way to grab the rest of each DISTINCT row and move it to a new table all in one go?
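
    A hedged sketch of one way to do it in a single statement, assuming the table has an id primary key (the question doesn't say) so that one concrete row can be picked per distinct location:

        CREATE TABLE myNewTable AS
        SELECT t.*
        FROM myTable t
        JOIN (SELECT location, MIN(id) AS id
              FROM myTable
              GROUP BY location) d ON t.id = d.id;

    Which of the duplicate rows survives for each location is arbitrary; MIN(id) just makes the choice deterministic. Incidentally, the original attempt fails partly because MySQL requires an alias on a derived table.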

  • MySQL inconsistent table scan results

    - by user148207
    What's going on here?

        mysql> select count(*) from notes where date(updated_at) > date('2010-03-25');
        +----------+
        | count(*) |
        +----------+
        |        0 |
        +----------+
        1 row in set (0.59 sec)

        mysql> select count(*) from notes where message like '%***%' and date(updated_at) > date('2010-03-25');
        +----------+
        | count(*) |
        +----------+
        |       26 |
        +----------+
        1 row in set (1.30 sec)

        mysql> explain select count(*) from notes where date(updated_at) > date('2010-03-25');
        +----+-------------+-------+------+---------------+------+---------+------+--------+-------------+
        | id | select_type | table | type | possible_keys | key  | key_len | ref  | rows   | Extra       |
        +----+-------------+-------+------+---------------+------+---------+------+--------+-------------+
        |  1 | SIMPLE      | notes | ALL  | NULL          | NULL | NULL    | NULL | 588106 | Using where |
        +----+-------------+-------+------+---------------+------+---------+------+--------+-------------+
        1 row in set (0.07 sec)

        mysql> explain select updated_at from notes where message like '%***%' and date(updated_at) > date('2010-03-25');
        +----+-------------+-------+------+---------------+------+---------+------+--------+-------------+
        | id | select_type | table | type | possible_keys | key  | key_len | ref  | rows   | Extra       |
        +----+-------------+-------+------+---------------+------+---------+------+--------+-------------+
        |  1 | SIMPLE      | notes | ALL  | NULL          | NULL | NULL    | NULL | 588106 | Using where |
        +----+-------------+-------+------+---------------+------+---------+------+--------+-------------+
        1 row in set (0.09 sec)

  • Building html tables from query data... faster?

    - by Andrew Heath
    With my limited experience/knowledge I am using the following structure to generate HTML tables on the fly from MySQL queries:

        $c = 0;
        $t = count($results);
        $table = '<table>';
        while ($c < $t) {
            $table .= "<tr><td>$results[0]</td><td>$results[1]</td> (etc etc) </tr>";
            ++$c;
        }
        $table .= '</table>';

    This works, obviously. But for tables with 300+ rows there is a noticeable delay in page load while the script builds the table. Currently the maximum results list is only about 1,100 rows, and the wait isn't long, but there's clearly a wait. Are there other methods for outputting an HTML table that are faster than my WHILE loop? (PHP only, please...)
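
    The loop itself is rarely the bottleneck at 1,100 rows (fetching and buffering usually dominate), but one common pattern is to collect the row strings in an array and implode once. A hedged sketch, assuming $results is a numerically indexed array of rows:

        $rowsHtml = array();
        foreach ($results as $row) {
            // htmlspecialchars() guards against the data breaking the markup
            $rowsHtml[] = '<tr><td>' . htmlspecialchars($row[0]) . '</td><td>'
                        . htmlspecialchars($row[1]) . '</td></tr>';
        }
        echo '<table>' . implode("\n", $rowsHtml) . '</table>';

    If the delay persists with this shape, it is worth timing the query and the fetch separately from the string building before optimizing the loop further.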

  • R: How to pass a list of selection expressions (strings in this case) to the subset function?

    - by John
    Here is some example data:

        data = data.frame(series = c("1a", "1b", "1e"), reading = c(0.1, 0.4, 0.6))
        > data
          series reading
        1     1a     0.1
        2     1b     0.4
        3     1e     0.6

    I can pull out selected single rows using subset:

        > subset(data, series == "1a")
          series reading
        1     1a     0.1

    and pull out multiple rows using a logical OR:

        > subset(data, series == "1a" | series == "1e")
          series reading
        1     1a     0.1
        3     1e     0.6

    But if I have a long list of series expressions, this gets really annoying to write out, so I'd prefer to define them in a better way, something like this (although even this sucks a little):

        series_you_want = c("1a", "1e")

    and then be able to do something like:

        subset(data, series == series_you_want)

    The above obviously fails; I'm just not sure what the best way to do this is.
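
    The %in% operator does this vectorized set-membership test directly; a minimal sketch using the data above:

        series_you_want <- c("1a", "1e")
        subset(data, series %in% series_you_want)
        #   series reading
        # 1     1a     0.1
        # 3     1e     0.6

    The == version fails because == recycles the two-element vector against the column element by element rather than testing membership.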

  • jquery iterating through newly created elements

    - by jaeyun
    Hi all, I am trying to add new rows to my table and save them to the DB. First, I use .append() to append rows to the table:

        $("#tablename").append("<tr id='newRow'><td>newly added row</td></tr>");

    The appending works fine; my page displays the correct result. However, I am unable to select the new rows with:

        $("#newRow").each(function () {
            alert("it never reaches here!");
        });

    I am guessing it is because the elements are added after the DOM is loaded. Can anyone please tell me how I can iterate through all my newly added elements? Thank you.
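
    Part of the problem may be the id: id values must be unique, so several appended rows all carrying id='newRow' cannot be iterated as a group (and the selector must run after the append, not at page load). A small sketch using a class instead, with the selector names taken from the question:

        // Append rows with a shared class rather than a repeated id.
        $("#tablename").append('<tr class="newRow"><td>newly added row</td></tr>');

        // Selects every appended row, whenever it was added:
        $("#tablename tr.newRow").each(function () {
            alert($(this).text());
        });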

  • Creating an Excel Template for different data sizes

    - by dassouki
    I created an Excel template for a routine work calculation. The file takes data from a data logger, does some analysis on it, and outputs one number regardless of the input size. The problem I'm having is that I have to modify the sheet to suit the number of rows, as the data logger outputs a different number of rows every day. There are about 15 sheets in the workbook, and it's annoying to have to change every one of them every day. What I'd like is to input the data logger CSV and, boom, the result gets output. Is there a way, through VBA or otherwise, to achieve this?
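
    One hedged sketch of the usual VBA approach: at import time, find the last populated row and point the analysis formulas at a range of exactly that size. The sheet names and cell addresses below are invented for illustration:

        Dim lastRow As Long
        ' Find the last used row in column A of the imported data sheet.
        lastRow = Worksheets("Data").Cells(Worksheets("Data").Rows.Count, 1).End(xlUp).Row
        ' Rewrite the analysis formula to cover exactly the rows present today.
        Worksheets("Summary").Range("B1").Formula = "=AVERAGE(Data!A2:A" & lastRow & ")"

    A no-VBA alternative is a dynamic named range built with OFFSET/COUNTA, which all 15 sheets can reference so that none of them needs editing when the row count changes.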

  • In SQL Server merge replication, how does reinitializing work?

    - by Craig Shearer
    I have set up a pull subscription to a merge publication in SQL Server. I use parameterized row filters on some tables. This works fine with the initial synchronization: just the rows matching the filter arrive in the replicated (client) database. However, at some later point I'd like to be able to synchronize the replicated database again from the server and have new rows that match the parameterized row filters appear in the client database. The documentation seems to indicate that I can call Reinitialize() to do this. However, when I try this and Synchronize again, I get an error saying that the script 'snapshot.pre' cannot be applied to the database. I've inspected the script and can see why: it's trying to drop some functions that are used by the tables in the database. It would appear that for Reinitialize() to work, the database must be blank. Am I misunderstanding something here? Is there a way to make this work?

  • How can I create a submittable form that contains dynamically added and removed controls

    - by bill
    Hi all, I am trying to create a form that is made up of controls with values that represent an entity with multiple child entities. The form will represent a product with multiple properties, where the user will then be able to create options with multiple properties, which in turn can have multiple option-items with multiple properties. My question is: what is the best approach? Can I use AJAX to avoid postbacks and having to rewrite the controls to the page? If I dynamically add the controls in the form of table rows or grid rows, will the data/control values be available in the code-behind when I submit? This is an age-old question: the last time I had to do this was .NET 2.0, pre-AJAX (for me), and I was forced to recreate all the controls on each postback. Thanks!

  • slicing 2d numpy array

    - by MedicalMath
    I have a 2D numpy array called FilteredOutput that has 2 columns and 10001 rows, though the number of rows is a variable. I am trying to take the 2nd column of FilteredOutput and use it to populate a new 1D numpy array called timeSeriesArray using the following line of code:

        timeSeriesArray = p.array(FilteredOutput[:, 0])

    I got this syntax from the following link. But the problem is that I am getting the following error message:

        TypeError: list indices must be integers, not tuple

    Can anyone show me the proper syntax for populating the 1D array timeSeriesArray with the contents of the second column of the 2D array FilteredOutput?
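
    That TypeError is what a plain Python list raises when handed a tuple index, which suggests FilteredOutput is actually a list of rows rather than an ndarray; note also that [:, 0] is the first column, while the second column is [:, 1]. A minimal sketch, keeping the question's p alias for numpy:

        import numpy as p

        filtered = p.asarray(FilteredOutput)  # no-op if it is already an ndarray
        timeSeriesArray = filtered[:, 1]      # column index 1 selects the second column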

  • Cache for large read-only database recommendation

    - by paddydub
    I am building a site with Spring, Hibernate and MySQL. The MySQL database contains information on coordinates, locations, etc.; it is never updated, only queried. The database contains 15,000 rows of coordinates and 48,000 rows of coordinate connections. Every time a request is processed, the application needs to read all these coordinates, which is taking approx 3-4 seconds. I would like to set up a cache to allow quick access to the data. I'm researching memcached at the moment; can you please advise if this would be my best option?
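
    Since the data never changes, Hibernate's own second-level cache with a read-only strategy is worth weighing before adding a separate memcached tier. A hedged sketch of the usual mapping (the entity is invented for illustration, and the cache-provider configuration is omitted since it depends on what is on the classpath):

        import javax.persistence.Entity;
        import javax.persistence.Id;
        import org.hibernate.annotations.Cache;
        import org.hibernate.annotations.CacheConcurrencyStrategy;

        // READ_ONLY is the cheapest strategy and is valid here because the rows
        // are never updated; after the first load Hibernate serves the 63,000
        // rows from memory instead of re-reading them on every request.
        @Entity
        @Cache(usage = CacheConcurrencyStrategy.READ_ONLY)
        public class Coordinate {
            @Id
            private Long id;
            private double latitude;
            private double longitude;
            // getters and setters omitted
        }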

  • InnoDB Cascade Rule that looks at 2 columns?

    - by Travis
    I have the following MySQL InnoDB tables:

        TABLE foldersA (
            ID,
            title
        )

        TABLE foldersB (
            ID,
            title
        )

        TABLE records (
            ID,
            folderID,
            folderType,
            title
        )

    folderID in table records can point to ID in either foldersA or foldersB depending on the value of folderType (0 or 1). I am wondering: is there a way to create a CASCADE rule such that the appropriate rows in table records are automatically deleted when a row in either foldersA or foldersB is deleted? Or in this situation, am I forced to delete the rows in table records programmatically? Thanks for your help!
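
    A foreign key cannot be conditional on another column's value, so the polymorphic folderID/folderType pair cannot cascade as-is. One hedged workaround is to split it into two nullable columns, one per parent table; the new column names and the INT type are assumptions, and the types must match the parents' ID columns:

        ALTER TABLE records
            ADD COLUMN folderA_id INT NULL,
            ADD COLUMN folderB_id INT NULL,
            ADD FOREIGN KEY (folderA_id) REFERENCES foldersA (ID) ON DELETE CASCADE,
            ADD FOREIGN KEY (folderB_id) REFERENCES foldersB (ID) ON DELETE CASCADE;

    The alternative that keeps the current schema is a pair of DELETE triggers on foldersA and foldersB that clean up records by hand.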

  • Detect when a UITableView is being reordered

    - by Mike Weller
    I have a UITableView backed by an NSFetchedResultsController, which may trigger updates at any time. If the user is currently reordering rows, applying these updates will cause an exception, because the table view has temporarily taken over, and you get an error like:

        Invalid update: invalid number of rows in section [...]

    How can I detect when the user has started moving a cell, so I can delay updates caused by the fetched results controller? There don't seem to be any table view delegate methods to detect this. This delegate method:

        - (NSIndexPath *)tableView:(UITableView *)tableView
            targetIndexPathForMoveFromRowAtIndexPath:(NSIndexPath *)sourceIndexPath
                                 toProposedIndexPath:(NSIndexPath *)proposedDestinationIndexPath

    doesn't get called when the user initially detaches the first cell, only when they actually move it somewhere else.

  • DataGridView and checkboxes re-selecting automatically

    - by SuperFurryToad
    I have a strange problem with a DataGridView I'm using, which is bound to a table in VB.NET. I've added a checkbox column to allow a user to tick a bunch of rows that I can then loop through and save off to a different table. All the checkboxes are enabled by default, so it's really a case of unchecking the rows which aren't required. However, the DataGridView re-enables any checkbox that I click after I click on a checkbox in another row. So in effect, only one row can be unchecked at a time. I'm sure I'm probably missing something obvious here?
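
    This resembles the classic uncommitted-edit behavior: a checkbox change stays pending until the cell loses focus, and the binding can roll it back. A hedged sketch of the usual fix, committing the edit as soon as the cell becomes dirty (the grid name DataGridView1 is an assumption):

        Private Sub DataGridView1_CurrentCellDirtyStateChanged(sender As Object, e As EventArgs) _
                Handles DataGridView1.CurrentCellDirtyStateChanged
            ' Push the pending checkbox change into the bound data immediately,
            ' instead of waiting for focus to leave the cell.
            If DataGridView1.IsCurrentCellDirty Then
                DataGridView1.CommitEdit(DataGridViewDataErrorContexts.Commit)
            End If
        End Sub

    This may not be the cause here, but it is the first thing worth ruling out with a bound checkbox column.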

  • Evenly distribute items on the screen

    - by abolotnov
    I am trying to solve this little puzzle (the algorithm): I have N image icons and I want to distribute them evenly on the user's screen. Say I put them in a table. If there is one image, there will be one cell in the table. If two, one row with two columns; if three, one row and three columns; if four, two rows and two columns... and so on until row space is gone, at which point the table should only grow in columns without adding extra rows. I'm trying to figure out an algorithm for this; perhaps this is something that has a solution already somewhere? My attempt so far is something like this:

        obtain_max_rows()
        obtain_visible_columns()
        if (number_of_pictures > max_rows*max_columns) {
            columns = roundup(number_of_pictures/max_rows)
            for(max_rows){ generate row; for columns { generate column } }
        } else {
            **here comes the trouble...**
        }

    This logic is a bit silly, though; it somehow needs to handle cases like 12 pictures on the first screen and 2 on the other, trying to balance it, say, 8/6 or somehow like that.
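
    A hedged sketch of the balancing rule implied above: keep the grid near-square by taking rows close to the square root of N, capped by what fits on screen, and let columns absorb the remainder. Python is used here only because the attempt above is pseudocode:

        import math

        def grid_shape(n, max_rows):
            """Return (rows, cols) for n icons: near-square, never more than max_rows rows."""
            if n <= 0:
                return (0, 0)
            rows = min(int(math.sqrt(n)), max_rows)  # floor(sqrt(n)): 2 -> 1x2, 3 -> 1x3, 4 -> 2x2
            cols = -(-n // rows)                     # ceiling division
            return (rows, cols)

        # grid_shape(3, 5)  == (1, 3)
        # grid_shape(4, 5)  == (2, 2)
        # grid_shape(14, 2) == (2, 7)   # rows capped, columns grow instead

    Because the remainder spills into the last row rather than onto a second screen, the 12-and-2 imbalance goes away by construction.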

  • hibernate not throwing stale state exception nor is it overwriting data

    - by Reddy
    Our application does the following:

    1. Start the transaction.
    2. Execute a query using a prepared statement.
    3. Check a condition to see whether the number of rows updated equals the required number.
    4. Commit if the above condition holds; otherwise roll back.

    However, the problem appears when two threads enter this code simultaneously. Thread 1 updates a row in step 2, checks the condition, and commits successfully since the condition holds. Thread 2 started execution somewhere between steps 1 and 4, and it fails the condition check at step 3 (it gets 0 as the number of updated rows). I expected the second thread to throw an exception, but it doesn't. What could be the problem?

  • SQLite Transaction fills a table BEFORE the transaction is committed

    - by user1500403
    Hello. I have code that creates a DataTable (in memory) from a SELECT SQL statement. However, I've realised that this DataTable is being filled during the procedure rather than as a result of the transaction commit statement. It does the job, but it's slow. What am I doing wrong?

        Inalready.Clear() ' clears a dictionary
        Using connection As New SQLite.SQLiteConnection(conectionString)
            connection.Open()
            Dim sqliteTran As SQLite.SQLiteTransaction = connection.BeginTransaction()
            Try
                oMainQueryR = "SELECT * FROM detailstable WHERE name = :name AND Breed = :Breed"
                Dim cmdSQLite As SQLite.SQLiteCommand = connection.CreateCommand()
                Dim oAdapter As New SQLite.SQLiteDataAdapter(cmdSQLite)
                With cmdSQLite
                    .CommandType = CommandType.Text
                    .CommandText = oMainQueryR
                    .Parameters.Add(":name", SqlDbType.VarChar)
                    .Parameters.Add(":Breed", SqlDbType.VarChar)
                End With
                Dim c As Long = 0
                For Each row As DataRow In list.Rows ' the list with 500 names
                    If Inalready.ContainsKey(row.Item("name")) Then
                    Else
                        c = c + 1
                        Form1.TextBox1.Text = " Fill .... " & c
                        Application.DoEvents()
                        Inalready.Add(row.Item("name"), row.Item("Breed"))
                        cmdSQLite.Parameters(":name").Value = row.Item("name")
                        cmdSQLite.Parameters(":Breed").Value = row.Item("Breed")
                        oAdapter.Fill(newdetailstable)
                    End If
                Next
                oAdapter.FillSchema(newdetailstable, SchemaType.Source)
                Dim z = newdetailstable.Rows.Count
                ' At this point newdetailstable is already filled up and I haven't even committed the transaction
                ' sqliteTran.Commit()
            Catch ex As Exception
            End Try
        End Using

  • How to navigate to another html page?

    - by newbie
    In my application there's a usual login page sending username and password to the server script, where it needs to be authenticated; in case of an authentic user, the server should redirect to a page student.html. This is my code:

        var ports = 3000;
        var portt = 3001;
        var express = require('express');
        var student = require('express')();
        var teacher = require('express')();
        var server_s = require('http').createServer(student);
        var server_t = require('http').createServer(teacher);
        var ios = require('socket.io').listen(server_s);
        var iot = require('socket.io').listen(server_t);
        var path = require('path');

        server_s.listen(ports);
        server_t.listen(portt);

        student.use(express.static(path.join(__dirname, 'public')));
        student.get('/', function (req, res) {
            res.sendfile(__dirname + '/login.html');
        });

        teacher.use(express.static(path.join(__dirname, 'public')));
        teacher.get('/', function (req, res) {
            res.sendfile(__dirname + '/mytry.html');
        });

        ios.sockets.on('connection', function (socket) {
            var username, password;
            socket.on('check', function (data) {
                username = data[0];
                password = data[1];

                // ************* Database connection and query *************
                var mysql = require('mysql');
                var connection = mysql.createConnection({
                    host: 'localhost',
                    user: 'user',
                    password: '*******',
                    database: 'my_db'
                });
                connection.connect();
                var qstring = "SELECT s_id FROM login_student WHERE username='" + username +
                              "' AND password='" + password + "'";
                connection.query(qstring, function (err, rows, fields) {
                    if (err) {
                        console.log('ERROR: ' + err);
                        socket.emit('login_failure', 'DB error');
                        return;
                    }
                    if (rows.length > 0) {
                        console.log('The matched id is: ', rows[0].s_id);
                        // ***** Here I want redirection to another page ******
                    } else {
                        socket.emit('login_failure', 'Invalid Username or password');
                    }
                });
                connection.end();
            });
        });

        iot.sockets.on('connection', function (socket) {
            ;
        });

    Can anyone suggest what I should do?
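
    Since the check happens over a socket rather than an HTTP request, the server cannot issue an HTTP redirect at that point; one hedged approach is to emit a success event and navigate on the client. The event name and payload below are invented:

        // Server side, on successful login:
        socket.emit('login_success', { redirect: '/student.html' });

        // Client side (in login.html):
        socket.on('login_success', function (data) {
            window.location.href = data.redirect;
        });

    Also worth noting: concatenating username and password into the SQL string is open to injection; the mysql module supports ? placeholders, as in connection.query('SELECT s_id FROM login_student WHERE username = ? AND password = ?', [username, password], callback).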

  • Optimizing MySQL statement with a lot of count(row) and sum(row+row2)...

    - by Zombies
    I need to use the InnoDB storage engine on a table with about 1 million or so records in it at any given time. It has records being inserted at a very fast rate, which are then dropped within a few days, maybe a week. The ping table has about a million rows, whereas the website table has only about 10,000. My statement is this:

        select url
        from website ws, ping pi
        where ws.idproxy = pi.idproxy
          and pi.entrytime > curdate() - 3
          and contentping + tcpping is not null
        group by url
        having sum(contentping + tcpping) / (count(*) - count(errortype)) < 500
           and count(*) > 3
           and count(errortype) / count(*) < .15
        order by sum(contentping + tcpping) / (count(*) - count(errortype)) asc;

    I added an index on entrytime, yet no dice. Can anyone throw me a bone as to what I should consider looking into for basic optimization of this query? The result set is only about 200 rows, so I'm not getting killed there.
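
    A hedged first step: a single-column index on entrytime often loses to a composite index that can serve the join and the date filter together (the column order below is an assumption about selectivity, so it is worth checking with EXPLAIN):

        ALTER TABLE ping ADD INDEX idx_proxy_time (idproxy, entrytime);

        -- Separately, curdate() - 3 does bare integer arithmetic on the date
        -- and misbehaves across month boundaries; an explicit interval is safer:
        -- ... and pi.entrytime > curdate() - INTERVAL 3 DAY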

  • mysql foreign key problem.

    - by JP19
    Hi, what is wrong with the foreign key addition here?

        mysql> create table notes (
                   id int(11) NOT NULL auto_increment PRIMARY KEY,
                   note_type_id smallint(5) NOT NULL,
                   data TEXT NOT NULL,
                   created_date datetime NOT NULL,
                   modified_date timestamp NOT NULL on update now()
               ) Engine=InnoDB;
        Query OK, 0 rows affected (0.08 sec)

        mysql> create table notetypes (
                   id smallint(5) NOT NULL auto_increment PRIMARY KEY,
                   type varchar(255) NOT NULL UNIQUE
               ) Engine=InnoDB;
        Query OK, 0 rows affected (0.00 sec)

        mysql> alter table `notes` add constraint foreign key(`note_type_id`)
               references `notetypes`.`id` on update cascade on delete restrict;
        ERROR 1005 (HY000): Can't create table './admin/#sql-43e_b762.frm' (errno: 150)

    Thanks
    JP
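
    The likely culprit is the REFERENCES clause: MySQL expects REFERENCES table (column), so `notetypes`.`id` is parsed as a database.table name rather than a table.column pair, and the constraint fails with errno 150. A sketch of the corrected statement:

        alter table `notes`
            add constraint foreign key (`note_type_id`)
            references `notetypes` (`id`)
            on update cascade on delete restrict;

    The column types already line up (both smallint(5) NOT NULL), which is the other common cause of errno 150.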

  • Can I expect a performance gain from removing this JOIN?

    - by makeee
    I have a "items" table with 1 million rows and a "users" table with 20,000 rows. When I select from the "items" table I do a join on the "users" table (items.user_id = user.id), so that I can grab the "username" from the users table. I'm considering adding a username column to the items table and removing the join. Can I expect a decent performance increase from this? It's already quite fast, but it would be nice to decrease my load (which is pretty high). The downside is that if the user changes their username, items will still reflect their old username, but this is okay with me if I can expect a decent performance increase. I'm asking stackoverflow because benchmarks aren't telling me too much. Both queries finish very quickly. Regardless, I'm wondering if removing the join would lighten load on the database to any significant degree.
