Search Results

Search found 26115 results on 1045 pages for 'table alias'.

Page 290/1045 | < Previous Page | 286 287 288 289 290 291 292 293 294 295 296 297  | Next Page >

  • MySQL managing catalogue views

    - by Mark Lawrence
    A friend of mine has a catalogue that currently holds about 500 rows, or 500 items. We are looking at ways to provide reports on the catalogue, including the number of times an item was viewed and the dates on which it was viewed. His site averages around 25,000 page impressions per month; if we assume for a minute that half of these are catalogue item views, that gives roughly 12,000 catalogue item views each month. My question is the best way to manage item views in the database.

    The first option is to insert the catalogue ID into a table and then increment the number of times it is viewed: `catalogue_id`, `views`. The advantage of this is its compact nature: there will only ever be as many rows in the table as there are catalogue items. The disadvantage is that no date information is held, short of maintaining the last time an item was viewed.

    The second option is to insert a new row each time an item is viewed: `catalogue_id`, `timestamp`. If we continue with the assumed figure of 12,000 item views, that means adding 12,000 rows to the table each month, or 144,000 rows each year. The advantage of this is that we know the number of times each item is viewed, and also the dates on which it is viewed. The disadvantage is the size of the table. Is a table growing by 144,000 rows a year becoming too large for MySQL?

    Interested to hear any thoughts or suggestions on how to achieve this. Thanks.
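
    One possible shape for the second option, as a rough MySQL sketch; the table and column names here are illustrative assumptions, not anything from the question:

        -- Hypothetical per-view log table: one row per view.
        CREATE TABLE catalogue_views (
            catalogue_id INT UNSIGNED NOT NULL,
            viewed_at    TIMESTAMP    NOT NULL DEFAULT CURRENT_TIMESTAMP,
            KEY idx_catalogue_viewed (catalogue_id, viewed_at)
        );

        -- Record a view.
        INSERT INTO catalogue_views (catalogue_id) VALUES (42);

        -- Report: views per item per month.
        SELECT catalogue_id,
               DATE_FORMAT(viewed_at, '%Y-%m') AS month,
               COUNT(*) AS views
        FROM catalogue_views
        GROUP BY catalogue_id, month;

    For scale, an indexed table gaining on the order of 150,000 rows a year is well within what MySQL handles comfortably.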

    Read the article

  • habtm multiple times with the same model

    - by Ermin
    I am trying to model publications. A publication can have multiple authors and editors. Since it is possible that one person is an author of one publication and an editor of another, there are no separate models for authors and editors:

        class Publication < ActiveRecord::Base
          has_and_belongs_to_many :authors, :class_name=>'Person'
          has_and_belongs_to_many :editors, :class_name=>'Person'
        end

    The above code doesn't work, because both associations use the same join table. Now I know that I can specify the name of the join table, but the API documentation has a warning about that which I don't understand:

        :join_table: Specify the name of the join table if the default based on lexical order isn’t what you want. WARNING: If you’re overwriting the table name of either class, the table_name method MUST be declared underneath any has_and_belongs_to_many declaration in order to work.
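
    For reference, giving each association its own :join_table implies two separate join tables in the database; a sketch of those tables in SQL, where the names are illustrative assumptions rather than anything Rails prescribes:

        -- Hypothetical join table for the authors association.
        CREATE TABLE authorships (
            publication_id INT NOT NULL,
            person_id      INT NOT NULL,
            PRIMARY KEY (publication_id, person_id)
        );

        -- Hypothetical join table for the editors association.
        CREATE TABLE editorships (
            publication_id INT NOT NULL,
            person_id      INT NOT NULL,
            PRIMARY KEY (publication_id, person_id)
        );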

    Read the article

  • Check value at insert

    - by ThreeFingerMark
    Hello, I have these three tables:

        Table: Item     Columns: ItemID, Title, Content, NoChange (Date)
        Table: Tag      Columns: TagID, Title
        Table: ItemTag  Columns: ItemID, TagID

    The Item table has a NoChange field; if that field is set, no ItemTag row may be inserted for that ItemID. How can I check this in the insert? For updates I have this statement:

        UPDATE ItemTag
        SET TagID = ?
        WHERE ItemID = ? AND TagID = ?
          AND exists (select ItemID from Item where ItemID = ? AND NoChange is null)

    Thank you.
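
    The same guard can be built into the insert itself by writing it as an INSERT ... SELECT that only produces a row when the item still allows changes; a sketch using the columns above:

        -- Inserts nothing when Item.NoChange is set for that ItemID.
        INSERT INTO ItemTag (ItemID, TagID)
        SELECT i.ItemID, ?               -- TagID parameter
        FROM Item i
        WHERE i.ItemID = ?               -- ItemID parameter
          AND i.NoChange IS NULL;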

    Read the article

  • JPA: what is the proper pattern for iterating over large result sets?

    - by Caffeine Coma
    Let's say I have a table with millions of rows. Using JPA, what's the proper way to iterate over a query against that table, such that I don't end up with an in-memory List of millions of objects? I suspect that the following will blow up if the table is large:

        List<Model> models = entityManager().createQuery("from Model m", Model.class).getResultList();
        for (Model model : models) {
            // do something with model
        }

    Is pagination (looping and manually updating setFirstResult()/setMaxResults()) really the best solution?
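
    Under the hood, setFirstResult()/setMaxResults() just makes the provider page through the result set; in MySQL-style SQL the pages look roughly like this (the exact SQL depends on the JPA provider and dialect, so this is only an illustration):

        -- First page of 1000 rows; subsequent pages use OFFSET 1000, 2000, ...
        SELECT *
        FROM model
        ORDER BY id            -- a stable ordering keeps the pages consistent
        LIMIT 1000 OFFSET 0;

        -- A keyset variant avoids the growing cost of large offsets:
        SELECT *
        FROM model
        WHERE id > :last_seen_id
        ORDER BY id
        LIMIT 1000;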

    Read the article

  • Unable to save data in database manually and get latest auto increment id, cakePHP

    - by shabby
    I have checked this question as well and this one as well. I am trying to implement the model described in this question. What I want to do is, in the add function of the messages controller, create a record in the thread table (this table only has one field, which is the auto-increment primary key), then take its id and insert it in the message table along with the user id which I already have, and then save it in the message_read_state and thread_participant tables. This is what I am trying to do in the Thread model:

        function saveThreadAndGetId(){
            //$data = array('Thread' => array());
            $data = array('id' => ' ');
            //Debugger::dump(print_r($data));
            $this->save($data);
            debug('id: '.$this->id);
            $threadId = $this->getInsertID();
            debug($threadId);
            $threadId = $this->getLastInsertId();
            debug($threadId);
            die();
            return $threadId;
        }

        $data = array('id' => ' ');

    This line from the above function adds a row to the thread table, but I am unable to retrieve the id. Is there any way I can get the id, or am I saving it incorrectly? Initially I was doing the query thing in the message controller:

        $this->Thread->query('INSERT INTO threads VALUES();');

    but then I found out that the lastId function doesn't work on manual queries, so I reverted.
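
    At the SQL level, the id of a freshly inserted auto-increment row can be read back on the same connection; a minimal MySQL sketch of what the framework calls boil down to, assuming the threads table has only its auto-increment primary key:

        -- Insert an empty row; the auto-increment column fills itself in.
        INSERT INTO threads () VALUES ();

        -- Per-connection, so this returns the id of the row inserted above.
        SELECT LAST_INSERT_ID();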

    Read the article

  • Need help tuning a SQL statement

    - by jeffself
    I've got a table that has two fields (custno and custno2) that need to be searched from a query. I didn't design this table, so don't scream at me. :-) I need to find all records where either the custno or custno2 matches the value returned from a query on the same table based on a titleno. In other words, the user types in 1234 for the titleno. My query searches the table to find the custno associated with the titleno. It also looks for the custno2 for that titleno. Then it needs to do a search on the same table for all other records that have either of those values in their custno or custno2 fields. Here is what I've come up with:

        SELECT BILLYR, BILLNO, TITLENO, VINID, TAXPAID, DUEDATE, DATEPIF, PROPDESC
        FROM TRCDBA.BILLSPAID
        WHERE CUSTNO IN (select custno from trcdba.billspaid where titleno = '1234'
                         union
                         select custno2 from trcdba.billspaid where titleno = '1234' and custno2 != '')
           OR CUSTNO2 IN (select custno from trcdba.billspaid where titleno = '1234'
                          union
                          select custno2 from trcdba.billspaid where titleno = '1234' and custno2 != '')

    The query takes about 5-10 seconds to return data. Can it be rewritten to work faster?
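
    One common rewrite is to compute the customer numbers for the title once and reuse them, so the subquery pair isn't spelled out (and potentially evaluated) twice; a sketch of the idea, assuming the database supports WITH, and noting that indexes on titleno, custno, and custno2 usually matter more than the query shape:

        WITH keys AS (
            SELECT custno  AS cust FROM trcdba.billspaid WHERE titleno = '1234'
            UNION
            SELECT custno2 FROM trcdba.billspaid WHERE titleno = '1234' AND custno2 != ''
        )
        SELECT BILLYR, BILLNO, TITLENO, VINID, TAXPAID, DUEDATE, DATEPIF, PROPDESC
        FROM trcdba.billspaid
        WHERE custno  IN (SELECT cust FROM keys)
           OR custno2 IN (SELECT cust FROM keys);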

    Read the article

  • How can fill a variable of my own created data type within Oracle PL/SQL?

    - by Frankie Simon
    In Oracle I've created a data type: TABLE OF VARCHAR2(200). I want to have a variable of this type within a stored procedure (defined locally, not as an actual table in the DB) and fill it with data. Some online samples show how I'd use my type if it were filled and passed as a parameter to the stored procedure:

        SELECT column_value currVal
        FROM table(pMyPassedParameter)

    However, what I want is to fill it during the PL/SQL code itself, with INSERT statements. Does anyone know the syntax for this?
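
    A minimal PL/SQL sketch of filling such a collection locally, either element by element or in bulk from a query; the variable names and the sample query are illustrative assumptions:

        DECLARE
            TYPE t_strings IS TABLE OF VARCHAR2(200);
            v_list t_strings := t_strings('first', 'second');   -- constructor fills it directly
        BEGIN
            -- Grow and assign element by element.
            v_list.EXTEND;
            v_list(v_list.COUNT) := 'third';

            -- Or load it in one shot from a query.
            SELECT object_name BULK COLLECT INTO v_list
            FROM user_objects
            WHERE ROWNUM <= 10;

            FOR i IN 1 .. v_list.COUNT LOOP
                DBMS_OUTPUT.PUT_LINE(v_list(i));
            END LOOP;
        END;
        /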

    Read the article

  • MySQL select two tables at the same time...

    - by Jerry
    Hi all. I have two tables and want to make a query. I tried to get team AA's and team BB's images based on table A. I used:

        SELECT tableA.team1, tableA.team2, tableB.team, tableB.image
        FROM tableA
        LEFT JOIN tableB ON tableA.team1 = tableB.team

    The result only displays imageA in the column. Is there any way to select imageA and imageB without using a second query? I appreciate any help! Thanks a lot! My table structures are:

        table A
        team1   team2
        ---------------
        AA      BB

        table B
        team    image
        ---------------
        AA      imageA
        BB      imageB
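
    One way is to join tableB twice under different aliases, so each team column picks up its own image; a sketch assuming the structures above:

        SELECT a.team1,
               b1.image AS team1_image,
               a.team2,
               b2.image AS team2_image
        FROM tableA a
        LEFT JOIN tableB b1 ON a.team1 = b1.team
        LEFT JOIN tableB b2 ON a.team2 = b2.team;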

    Read the article

  • Mass update of data in sql from int to varchar

    - by Christopher Kelly
    We have a large table (5,608,782 rows and growing) that has three columns: Zip1, Zip2, and distance. All columns are currently int. We would like to convert this table to use varchars for international usage, but we need to do a mass import into the new table, converting zips of fewer than 5 digits to zero-padded varchars (123 becomes 00123, etc.). Is there a way to do this short of looping over each row and doing the translation programmatically?
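
    The padding can be done set-based in a single INSERT ... SELECT; a sketch with an assumed target table name, showing the two common dialect spellings:

        -- SQL Server style: pad with RIGHT over a '00000' prefix.
        INSERT INTO ZipDistance_v2 (Zip1, Zip2, distance)
        SELECT RIGHT('00000' + CAST(Zip1 AS VARCHAR(5)), 5),
               RIGHT('00000' + CAST(Zip2 AS VARCHAR(5)), 5),
               distance
        FROM ZipDistance;

        -- MySQL / Oracle / PostgreSQL would use LPAD(Zip1, 5, '0') instead.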

    Read the article

  • ASP.NET SqlDataSource update and create FK reference

    - by William
    The short version: I have a grid view bound to a data source whose SelectCommand has a left join in it, because the FK can be null. On Update I want to create a record in the FK table if the FK is null and then update the parent table with the new record's ID. Is this possible to do with just SqlDataSources?

    The detailed version: I have two tables: Company and Address. The column Company.AddressId can be null. On my ascx page I am using a SqlDataSource to select a left join of Company and Address, and a GridView to display the results. By having the UpdateCommand and DeleteCommand of the SqlDataSource execute two statements separated by a semi-colon, I am able to use the GridView's Edit and Delete functionality to update both tables simultaneously. The problem I have is when Company.AddressId is null. What I need to have happen is for the data source to create a record in the Address table, update the Company table with the new Address.ID, and then proceed with the update as usual. I would like to do this with just data sources if possible, for consistency's and simplicity's sake. Is it possible to have my data source do this, or perhaps add a second data source to the page to handle some of this? Once I have that working I can probably figure out how to make it work with the InsertCommand as well, but if you are on a roll and have an answer for how to make that fly too, feel free to provide it. Thanks.
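
    One possible shape for the UpdateCommand, in T-SQL, creating the Address row first when the FK is null and then updating the Company row; the column and parameter names here are assumptions based on the description, not the actual schema:

        -- If the company has no address yet, create one and point the company at it.
        IF (SELECT AddressId FROM Company WHERE CompanyId = @CompanyId) IS NULL
        BEGIN
            INSERT INTO Address (Street, City) VALUES (@Street, @City);
            UPDATE Company SET AddressId = SCOPE_IDENTITY() WHERE CompanyId = @CompanyId;
        END
        ELSE
            UPDATE Address
            SET Street = @Street, City = @City
            WHERE AddressId = (SELECT AddressId FROM Company WHERE CompanyId = @CompanyId);

        -- Then the usual update of the company's own fields.
        UPDATE Company SET Name = @CompanyName WHERE CompanyId = @CompanyId;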

    Read the article

  • How to update the column of datagridview from the text contents of textbox in c# Windows form

    - by user286546
    I have a DataGridView with contents from a table. It has a Remarks column whose contents run one or two lines. When I click on the Remarks column, I want to open another form that contains a text box. I have linked the text box with the table using a table adapter. Now, when I close the form with the text box, I want to show that text in the DataGridView column. Please help me.

    Read the article

  • jquery selecting all elements except the last per group

    - by Anthony
    I have a table that looks like:

        <table>
          <tr>
            <td>one</td><td>two</td><td>three</td><td>last</td>
          </tr>
          <tr>
            <td>blue</td><td>red</td><td>green</td><td>last</td>
          </tr>
          <tr>
            <td>Monday</td><td>Tuesday</td><td>Wednesday</td><td>last</td>
          </tr>
        </table>

    What I want is a jQuery selector that will choose all but the last td of each table row. I tried:

        $("tr td:not(:last)").css("background-color","red"); // changing color just as a test...

    But instead of all cells but the last on each row being changed, all cells but the very last one in the table are selected. Similarly, if I change it to:

        $("tr td:last").css("background-color","red");

    the only one that changes is the very last cell. How do I choose the last (or not last) of each row?

    Read the article

  • PostgreSQL: Array Data Type with PHP

    - by ArchJ
    I'm working with PostgreSQL and PHP, and I know that PostgreSQL allows columns of a table to be defined as arrays. So let's say I have a table like this:

        CREATE TABLE sal_emp (
            a text ARRAY,
            b text ARRAY,
            c text ARRAY
        );

    These are my arrays:

        $a = array('aa','bb','cc');
        $b = array('dd','dd','aa');
        $c = array('bb','ff','ee');

    and I want to insert them into their respective columns like this:

             a      |     b      |     c
        ------------+------------+------------
         {aa,bb,cc} | {dd,dd,aa} | {bb,ff,ee}

    Can I insert it this way?

        $a = implode(',', $a);
        $b = implode(',', $b);
        $c = implode(',', $c);
        $a = array('a' => $a, 'b' => $b, 'c' => $c);
        pg_insert($dbconn, 'table', $a);

    Or is there a better way to achieve the same result?
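
    At the SQL level, PostgreSQL array columns accept either the curly-brace literal form or the ARRAY[...] constructor, so the statement the PHP code ultimately needs to produce looks roughly like this; imploding with plain commas does not add the braces by itself, so treat this as the target, not a guarantee of what pg_insert sends:

        -- Text-literal form.
        INSERT INTO sal_emp (a, b, c)
        VALUES ('{aa,bb,cc}', '{dd,dd,aa}', '{bb,ff,ee}');

        -- Constructor form.
        INSERT INTO sal_emp (a, b, c)
        VALUES (ARRAY['aa','bb','cc'], ARRAY['dd','dd','aa'], ARRAY['bb','ff','ee']);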

    Read the article

  • Importing Excel spreadsheet data into existing Access DB

    - by Keeb13r
    I've designed an Access 2003 DB with 3 tables: APPLICATIONS, SERVERS, and INSTALLATIONS. Records in the APPLICATIONS and SERVERS tables are uniquely identified by a synthetic primary key (in Access, an "auto number"). The INSTALLATIONS table is essentially a mapping table between APPLICATIONS and SERVERS: it's a list of records of which applications are installed on which servers. A record in the INSTALLATIONS table is also identified by a synthetic primary key, and it consists of an APPLICATION_ID and SERVER_ID for the records in their respective tables.

    I have an Excel 2003 spreadsheet I would like to import into this database, but it's proving difficult. The spreadsheet is made up of several tabs/worksheets, each one representing a server with its own listing of installed applications. I'm not sure how to proceed with an import - the "Get External Data -- Import" feature in Access has an import "In an Existing Table" option, but it's greyed out. I'm also unsure how I build the relationships between applications and servers for importing records into the INSTALLATIONS table. I had previously fooled around with adding some security to the Access DB file. I think I removed everything, but perhaps I didn't and that's causing the problem?

    Some sample data from the Excel spreadsheet:

        SERVER101
          * Adobe Reader 9
          * BMC Remedy User 7.0
          * HostExplorer 2008
          * Microsoft Office 2003
          * Microsoft Office 2007
          * Notepad++

        SERVER102
          * Adobe Reader 9
          * DameWare Mini Remote Control
          * Microsoft Office 2003
          * Microsoft .NET Framework 3.5 SP1
          * Oracle 9.2

        SERVER103
          * AWDView
          * EXTRA! Personal Client 32-bit
          * Microsoft Office 2003
          * Microsoft .NET Framework 3.5 SP1
          * Snagit 9.1
          * WinZip 12.1

    The Access DB design is very simple:

        APPLICATION
          * APPLICATION_ID (autonumber)
          * APPLICATION_NAME (varchar)

        SERVER
          * SERVER_ID (autonumber)
          * SERVER_NAME (varchar)

        INSTALLATION
          * INSTALLATION_ID (autonumber)
          * APPLICATION_ID (number)
          * SERVER_ID (number)
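
    If each worksheet is first pulled into a flat staging table (one row per server/application pair, e.g. by importing into a new table), the normalized tables can then be filled with append queries that match on names; a sketch in Access-style SQL, where the STAGING table and its columns are assumptions:

        -- Add applications not already present.
        INSERT INTO APPLICATION (APPLICATION_NAME)
        SELECT DISTINCT s.APPLICATION_NAME
        FROM STAGING AS s
        LEFT JOIN APPLICATION AS a ON a.APPLICATION_NAME = s.APPLICATION_NAME
        WHERE a.APPLICATION_ID IS NULL;

        -- (the same pattern fills SERVER from the distinct server names)

        -- Resolve names to ids for the mapping table.
        INSERT INTO INSTALLATION (APPLICATION_ID, SERVER_ID)
        SELECT a.APPLICATION_ID, v.SERVER_ID
        FROM (STAGING AS s
              INNER JOIN APPLICATION AS a ON a.APPLICATION_NAME = s.APPLICATION_NAME)
             INNER JOIN [SERVER] AS v ON v.SERVER_NAME = s.SERVER_NAME;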

    Read the article

  • Entity Framework: Setting EntityReference EntityKey causes exception on save

    - by NYSystemsAnalyst
    I have a table with a ModifiedUserID field that is a foreign key to a User table. In Entity Framework, I'm loading the first table, but not the User table. I have the user ID of the current user, and would like to set ModifiedUserID to that value for all entities that have been modified, prior to saving. Before calling SaveChanges(), I use the ObjectStateManager to get all modified entities. Since I do not have the user object, but I do have the user ID, I set the EntityReference.EntityKey property as follows:

        entity.UserReference.EntityKey = New EntityKey("MyContainer.User", "UserID", DatabaseUserID)

    This works fine, but when I execute SaveChanges(), I receive the following error:

        A relationship is being added or deleted from an AssociationSet 'FK_Table1_User'. With cardinality constraints, a corresponding 'Table1' must also be added or deleted.

    Now, I see that setting the EntityReference.EntityKey creates a new AssociationSet entry, but how do I prevent this error?

    Read the article

  • Is it possible to keep mysql migration running without keeping connection open?

    - by taw
    ALTER TABLE can easily take a few days, and during this time there's a non-negligible chance of the connection getting terminated due to network problems. Is it possible to start ALTER TABLE (or CREATE TABLE ... SELECT ..., or some other very long-running query) and leave it running without keeping the connection open all the time? (The obvious solution of screen plus the console mysql client won't easily work, as there's no ssh running on that server, only mysqld.)
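
    One server-side option worth checking is the MySQL event scheduler (available from 5.1), which executes the statement inside mysqld itself, so a dropped client connection no longer matters; a hedged sketch with an assumed table name:

        -- The scheduler must be running for the event to fire.
        SET GLOBAL event_scheduler = ON;

        -- One-shot event that runs the long ALTER entirely server-side.
        CREATE EVENT run_big_alter
            ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL 1 MINUTE
            ON COMPLETION NOT PRESERVE
            DO
                ALTER TABLE my_big_table ADD COLUMN new_col INT;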

    Read the article

  • Best practice to pass a value from pop up control on iPad.

    - by Tattat
    It is an iPad app based on SDK 3.2. I have a MainUIView, which is a subclass of UIView; it has a UIButton and a UILabel. When the user presses the UIButton, the pop up control appears with a table view. When the user selects a cell from the table view, the UILabel changes its content based on that selection, and the pop up table view disappears. The question is: how can I pass the "selected cell" to the UILabel? I am thinking of making a "middle man" object. When the user clicks the UIButton, the "middle man" will be passed to the table. When the cell is selected, the "middle man" will store the index, and the UILabel will change its content from the value held by the "middle man". But I think this is pretty complex to implement; is there any easier way to do it? Thank you.

    Read the article

  • What's wrong in this SELECT statement

    - by user522211
    Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
        Dim SQLData As New System.Data.SqlClient.SqlConnection("Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\Database.mdf;Integrated Security=True;User Instance=True")
        Dim cmdSelect As New System.Data.SqlClient.SqlCommand("SELECT * FROM Table1 WHERE Seats ='" & TextBox1.Text & "'", SQLData)
        SQLData.Open()
        Using adapter As New SqlDataAdapter(cmdSelect)
            Using table As New Data.DataTable()
                adapter.Fill(table)
                TextBox1.Text = [String].Join(", ", table.AsEnumerable().[Select](Function(r) r.Field(Of Integer)("seat_select")))
            End Using
        End Using
        SQLData.Close()
    End Sub

    This line is highlighted with a blue underline:

        TextBox1.Text = [String].Join(", ", table.AsEnumerable().[Select](Function(r) r.Field(Of Integer)("seat_select")))

    Read the article

  • Can MySQL reasonably perform queries on billions of rows?

    - by haxney
    I am planning on storing scans from a mass spectrometer in a MySQL database and would like to know whether storing and analyzing this amount of data is remotely feasible. I know performance varies wildly depending on the environment, but I'm looking for the rough order of magnitude: will queries take 5 days or 5 milliseconds?

    Input format

    Each input file contains a single run of the spectrometer; each run is comprised of a set of scans, and each scan has an ordered array of datapoints. There is a bit of metadata, but the majority of the file is comprised of arrays of 32- or 64-bit ints or floats.

    Host system

        |----------------+-------------------------------|
        | OS             | Windows 2008 64-bit           |
        | MySQL version  | 5.5.24 (x86_64)               |
        | CPU            | 2x Xeon E5420 (8 cores total) |
        | RAM            | 8GB                           |
        | SSD filesystem | 500 GiB                       |
        | HDD RAID       | 12 TiB                        |
        |----------------+-------------------------------|

    There are some other services running on the server using negligible processor time.

    File statistics

        |------------------+--------------|
        | number of files  | ~16,000      |
        | total size       | 1.3 TiB      |
        | min size         | 0 bytes      |
        | max size         | 12 GiB       |
        | mean             | 800 MiB      |
        | median           | 500 MiB      |
        | total datapoints | ~200 billion |
        |------------------+--------------|

    The total number of datapoints is a very rough estimate.

    Proposed schema

    I'm planning on doing things "right" (i.e. normalizing the data like crazy) and so would have a runs table, a spectra table with a foreign key to runs, and a datapoints table with a foreign key to spectra.

    The 200 billion datapoint question

    I am going to be analyzing across multiple spectra and possibly even multiple runs, resulting in queries which could touch millions of rows. Assuming I index everything properly (which is a topic for another question) and am not trying to shuffle hundreds of MiB across the network, is it remotely plausible for MySQL to handle this?

    UPDATE: additional info

    The scan data will be coming from files in the XML-based mzML format. The meat of this format is in the <binaryDataArrayList> elements where the data is stored. Each scan produces >= 2 <binaryDataArray> elements which, taken together, form a 2-dimensional (or more) array of the form [[123.456, 234.567, ...], ...]. These data are write-once, so update performance and transaction safety are not concerns.

    My naïve plan for a database schema is:

        runs table
        | column name | type        |
        |-------------+-------------|
        | id          | PRIMARY KEY |
        | start_time  | TIMESTAMP   |
        | name        | VARCHAR     |

        spectra table
        | column name    | type        |
        |----------------+-------------|
        | id             | PRIMARY KEY |
        | name           | VARCHAR     |
        | index          | INT         |
        | spectrum_type  | INT         |
        | representation | INT         |
        | run_id         | FOREIGN KEY |

        datapoints table
        | column name | type        |
        |-------------+-------------|
        | id          | PRIMARY KEY |
        | spectrum_id | FOREIGN KEY |
        | mz          | DOUBLE      |
        | num_counts  | DOUBLE      |
        | index       | INT         |

    Is this reasonable?

    Read the article

  • Faster way to update 250k rows with SQL

    - by pablo
    I need to update about 250k rows in a table, and each field to update will have a different value depending on the row itself (not calculated based on the row id or the key, but externally). I tried with a parametrized query, but it turns out to be slow. (I can still try with a table-valued parameter, SqlDbType.Structured, in SQL Server 2008, but I'd like to have a general way to do it on several databases, including MySQL, Oracle and Firebird.) Making a huge concatenation of individual updates is also slow. What about creating a temp table and running an update joining my table and the temp one? Will it work faster?
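
    The temp-table approach usually takes this shape; a sketch in MySQL syntax (SQL Server and PostgreSQL phrase the joined update as UPDATE ... FROM instead), with the table and column names assumed:

        -- Stage the 250k (id, new_value) pairs once, ideally via a bulk load.
        CREATE TEMPORARY TABLE tmp_new_values (
            id        INT PRIMARY KEY,
            new_value VARCHAR(255) NOT NULL
        );
        -- ... bulk insert / LOAD DATA the pairs here ...

        -- One set-based update instead of 250k individual statements.
        UPDATE target_table t
        JOIN tmp_new_values n ON n.id = t.id
        SET t.value = n.new_value;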

    Read the article

  • Fluent Nhibernate left join

    - by Ronnie
    I want to map a class in a way that results in a left outer join and not an inner join. My composite user entity is made up of one table ("aspnet_users") and some optional properties in a second table (like FullName in "users"):

        public class UserMap : ClassMap<User>
        {
            public UserMap()
            {
                Table("aspnet_Users");
                Id(x => x.Id, "UserId").GeneratedBy.Guid();
                Map(x => x.UserName, "UserName");
                Map(x => x.LoweredUserName, "LoweredUserName");
                Join("Users", mm =>
                {
                    mm.Map(xx => xx.FullName);
                });
            }
        }

    This mapping results in an inner join select, so no rows come out when the second table has no data. I'd like to generate a left join instead. Is this possible only at query level?

    Read the article
