Search Results

Search found 10060 results on 403 pages for 'column'.


  • Storing Tables of Information on the Android Platform.

    - by Tarmon
    I have about twenty pages of information, stored in tables, that needs to be kept in my Android application. Each column is a designated stop on a bus route, and the column is filled with the times the bus will be at that stop. Certain information also needs to be associated with some times, such as whether the bus is handicap accessible on a given run. Here is an example of one of the tables: Bus Times. I have thought about using SQLite, as it seems it would store these tables quite easily; but when I think of SQL I think of dynamic data storage, and this data shouldn't change more than once a year. Is SQL appropriate for this application? Is there a better way to do this? Thanks, Rob
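
    A minimal sketch of how such a timetable could be laid out in SQLite (the table and column names are assumptions, not from the post); even for near-static data, SQLite keeps the lookups simple:

        -- One row per (stop, departure) instead of one column per stop.
        CREATE TABLE stop (
            stop_id INTEGER PRIMARY KEY,
            name    TEXT NOT NULL
        );

        CREATE TABLE stop_time (
            stop_id         INTEGER NOT NULL REFERENCES stop(stop_id),
            departure_time  TEXT NOT NULL,             -- e.g. '07:45'
            handicap_access INTEGER NOT NULL DEFAULT 0 -- 1 if the run is accessible
        );

        -- All departures for one stop, accessibility flag included:
        SELECT departure_time, handicap_access
        FROM stop_time
        WHERE stop_id = 1
        ORDER BY departure_time;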

    Read the article

  • making certain cells of an ExtJS GridPanel un-editable

    - by synchronicity
    I currently have a GridPanel with the Ext.ux.RowEditor plugin. Four fields exist in the row editor: port, ip address, subnet and DHCP. If the DHCP field (checkbox) of the selected row is checked, I need to make the other three fields un-editable. I've been trying to perform this code when the beforeedit event is triggered, but to no avail... I've only found ways to make the entire column un-editable. My code so far:

        this.rowEditor.on({ scope: this, beforeedit: this.checkIfEditable });

        checkIfEditable: function(rowEditor, rowIndex) {
            if (this.getStore().getAt(rowIndex).get('dhcp')) {
                // this function makes the entire column un-editable:
                this.getColumnModel().setEditable(2, false);
                // I want to make only the other three fields of the current row
                // uneditable.
            }
        }

    Please let me know if any clarification is needed. Any help potentially extending RowEditor to accomplish the target functionality would be greatly appreciated as well!

    Read the article

  • LINQ-to-SQL and SQL Compact - database file sharing problem

    - by Eye of Hell
    Hello. I'm learning LINQ-to-SQL right now and I have written a simple application that defines some SQL data:

        [Table( Name = "items" )]
        public class Item
        {
            [ Column( IsPrimaryKey = true, IsDbGenerated = true ) ]
            public int Id;
            [ Column ]
            public string Name;
        }

    I have launched 2 copies of the application connected to the same .sdf file and tested whether all database modifications in one application affect the other. But a strange thing arises. If I use InsertOnSubmit() and DeleteOnSubmit() in one application, added/removed items are instantly visible in the other application via a 'select' LINQ query. But if I try to modify the 'Name' field in one application, it is NOT visible in the other application until it reconnects to the database :(. The test code I use:

        var Items = from c in db.Items where Id == c.Id select c;
        foreach( var Item in Items )
        {
            Item.Name = "new name";
            break;
        }
        db.SubmitChanges();

    Can anyone suggest what I'm doing wrong, and why InsertOnSubmit()/DeleteOnSubmit() work but SubmitChanges() doesn't?

    Read the article

  • Why a better isolation level means better performance in SQL Server

    - by Oleg Zhylin
    When measuring the performance of my query I came across a dependency between isolation level and elapsed time that was surprising to me:

        READUNCOMMITTED - 409024
        READCOMMITTED   - 368021
        REPEATABLEREAD  - 358019
        SERIALIZABLE    - 348019

    The left column is the table hint, and the right column is elapsed time in microseconds (sys.dm_exec_query_stats.total_elapsed_time). Why does a stricter isolation level give better performance? This is a development machine and no concurrency whatsoever happens. I would expect READUNCOMMITTED to be the fastest due to less locking overhead. Update: I did measure this with DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE issued, and Profiler confirms there are no cache hits happening. Update2: The query in question is an OLAP one and we need to run it as fast as possible. Closing the production server off from the outside world to get the computation done is not out of the question if this gives performance benefits.

    Read the article

  • R counting the occurrence of similar rows of a data frame

    - by Matt
    I have data in the following format called DF (this is just a made-up, simplified sample):

        eval.num  eval.count  fitness  fitness.mean  green.h.0  green.v.0  offset.0  random
        1         1           1500     1500          100        120        40        232342
        2         2           1000     1250          100        120        40        11843
        3         3           1250     1250          100        120        40        981340234
        4         4           1000     1187.5        100        120        40        4363453
        5         1           2000     2000          200        100        40        345902
        6         1           3000     3000          150        90         10        943
        7         1           2000     2000          90         90         100       9304358
        8         2           1800     1900          90         90         100       284333

    However, the eval.count column is incorrect and I need to fix it. It should report the number of rows with the same values for (green.h.0, green.v.0, and offset.0), looking only at the previous rows. The example above uses the expected values, but assume they are incorrect. How can I add a new column (say "count") which will count all previous rows that have the same values for the specified variables? I have gotten help on a similar problem of selecting all rows with the same values for specified columns, so I suppose I could just write a loop around that, but it seems inefficient to me.

    Read the article

  • DB2 load partitioned data in parallel

    - by Erik Paulson
    I have a 10-node DB2 9.5 database, with raw data on each machine (i.e. node1:/scratch/data/dataset.1, node2:/scratch/data/dataset.2, ... node10:/scratch/data/dataset.10). There is no shared NFS mount - none of my machines could handle all of the datasets combined. Each line of a dataset file is a long string of text, column delimited. The first column is the key. I don't know the hash function that DB2 will use, so the dataset is not pre-partitioned. Short of renaming all of my files, is there any way to get DB2 to load them all in parallel? I'm trying to do something like

        load from '/scratch/data/dataset' of del modified by coldel| fastparse
            messages /dev/null replace into TESTDB.data_table
            part_file_location '/scratch/data';

    but I have no idea how to suggest to DB2 that it should look for dataset.1 on the first node, etc.

    Read the article

  • How to get equivalent of ResultSetMetaData without ResultSet

    - by javanix
    Hey guys - I need to resolve a bunch of column names to column indexes (so as to use some of the nice ResultSetMetaData methods). However, the only way that I know how to get a ResultSetMetaData object is by calling getMetaData() on some ResultSet. The problem I have with that is that grabbing a ResultSet takes up unnecessary resources in my mind - I don't really need to query the data in the table, I just want some information about the table. Does anyone know of any way to get a RSMD object without getting a ResultSet (from a potentially huge table) first? Thanks!
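
    One common workaround (a suggestion on my part, not from the post) is to run a query that can never return rows: the ResultSet comes back empty, so almost nothing is fetched from the table, but calling getMetaData() on it still describes every column. The query itself, with the table name as a placeholder:

        -- Returns zero rows regardless of table size, yet the ResultSet's
        -- metadata still lists all columns, their names, and their types.
        SELECT * FROM some_table WHERE 1 = 0;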

    Read the article

  • SQL - Counting sets of Field-B values for each Field-A value

    - by potrnd
    Hello. First of all, sorry that I could not think of a more descriptive title. What I want to do is the following, using only SQL: I have some lists of strings: list1, list2 and list3. I have a dataset that contains two interesting columns, A and B. Column A contains a TransactionID and column B contains an ItemID. Naturally, there can be multiple rows that share the same TransactionID. I need to catch those transactions that have at least one ItemID in each and every list (list1 AND list2 AND list3). I also need to count how many times that happens for each transaction. [EDIT] That is, count how many full sets of ItemIDs there are for each TransactionID, a "full set" being any element of list1 with any element of list2 with any element of list3. I hope that makes enough sense; perhaps I will be able to explain it better with a clear head. Thanks in advance
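
    Reading "full sets" as every combination of one matching item per list (my interpretation of the edit), the count per transaction is the product of the per-list match counts. A sketch under that assumption, with the table name and the literal ID lists as placeholders:

        SELECT TransactionID,
               SUM(CASE WHEN ItemID IN ('a1', 'a2') THEN 1 ELSE 0 END)
             * SUM(CASE WHEN ItemID IN ('b1', 'b2') THEN 1 ELSE 0 END)
             * SUM(CASE WHEN ItemID IN ('c1', 'c2') THEN 1 ELSE 0 END) AS full_sets
        FROM transactions
        GROUP BY TransactionID
        -- keep only transactions with at least one hit in every list:
        HAVING SUM(CASE WHEN ItemID IN ('a1', 'a2') THEN 1 ELSE 0 END) > 0
           AND SUM(CASE WHEN ItemID IN ('b1', 'b2') THEN 1 ELSE 0 END) > 0
           AND SUM(CASE WHEN ItemID IN ('c1', 'c2') THEN 1 ELSE 0 END) > 0;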

    Read the article

  • EF4 CTP5 Code First approach ignores Table attributes

    - by Justin
    I'm using the EF4 CTP5 code first approach but am having trouble getting it to work. I have a class called "Company" and a database table called "CompanyTable". I want to map the Company class to the CompanyTable table, so I have code like this:

        [Table(Name = "CompanyTable")]
        public class Company
        {
            [Key]
            [Column(Name = "CompanyIdNumber", DbType = "int")]
            public int CompanyNumber { get; set; }

            [Column(Name = "CompanyName", DbType = "varchar")]
            public string CompanyName { get; set; }
        }

    I then call it like so:

        var db = new Users();
        var companies = (from c in db.Companies select c).ToList();

    However it errors out: Invalid object name 'dbo.Companies'. It's obviously not respecting the Table attribute on the class, even though it says here that the Table attribute is supported. Also, it's pluralizing the name it's searching for (Companies instead of Company). How do I map the class to the table name?

    Read the article

  • Database: storing data from user registration form

    - by teggy
    Let's say I have a user registration form. In this form, I have the option for the user to upload a photo. I have a User table and a Photo table. My User table has a "PathToPhoto" column. My question is how do I fill in the "PathToPhoto" column if the photo is uploaded and inserted into the Photo table before the user is created? Another way to phrase my question is: how do I associate the newly uploaded photo with a user that may or may not be created next? I'm using python and postgresql.
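
    One possible pattern in PostgreSQL (the column names here are invented for illustration) is to insert the photo first, capture its generated values with RETURNING, hold them in the session, and only write the user row once registration actually completes:

        -- Step 1: store the photo and get back what the user row will need.
        INSERT INTO photo (path_to_photo)
        VALUES ('/uploads/abc123.jpg')
        RETURNING photo_id, path_to_photo;

        -- Step 2: once (and only if) the registration is submitted,
        -- create the user with the value captured above.
        INSERT INTO users (name, path_to_photo)
        VALUES ('alice', '/uploads/abc123.jpg');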

    Read the article

  • Does Hibernate's GenericGenerator cause update and saveOrUpdate to always insert instead of update?

    - by Derek Mahar
    When using GenericGenerator to generate unique identifiers, do the Hibernate session methods update() and saveOrUpdate() always insert instead of update table rows, even when the given object has an existing identifier (where the identifier is also the table primary key)? Is this the correct behaviour?

        public class User {
            private String id;
            private String name;

            public User(String id, String name) {
                this.id = id;
                this.name = name;
            }

            @GenericGenerator(name="generator", strategy="guid")
            @Id
            @GeneratedValue(generator="generator")
            @Column(name="USER_ID", unique=true, nullable=false)
            public String getId() {
                return this.id;
            }

            public void setId(String id) {
                this.id = id;
            }

            @Column(name="USER_NAME", nullable=false, length=20)
            public String getUserName() {
                return this.userName;
            }

            public void setUserName(String userName) {
                this.userName = userName;
            }
        }

        class UserDao extends AbstractDaoHibernate {
            public void updateUser(final User user) {
                HibernateTemplate ht = getHibernateTemplate();
                ht.saveOrUpdate(user);
            }
        }

    Read the article

  • How to get DataGridViewRow from CellFormatting event?

    - by tomaszs
    I have a DataGridView and handle the CellFormatting event. It has a parameter called DataGridViewCellFormattingEventArgs e, with e.RowIndex in it. When I do DataGridView.Rows[e.RowIndex], I get the proper row from the collection. But when I click a column header to sort by a column other than the default one and then use DataGridView.Rows[e.RowIndex], I get the wrong row. This is because the Rows collection does not reflect the display order of rows in the DataGridView. So how do I get the proper DataGridViewRow from the RowIndex in the DataGridView?

    Read the article

  • sql select statement with a group by

    - by user85116
    I have data in 2 tables, and I want to create a report.

        Table A: tableAID (primary key), name
        Table B: tableBID (primary key), grade, tableAID (foreign key, references Table A)

    There is much more to both tables, but those are the relevant columns. The query I want to run, conceptually, is this:

        select TableA.name, avg(TableB.grade)
        where TableB.tableAID = TableA.tableAID

    The problem of course is that I'm using an aggregate function (avg), and I can rewrite it like this:

        select avg(grade), tableAID
        from TableB
        group by tableAID

    but then I only get the ID from TableA, whereas I really need the name column that appears in TableA, not just the ID. Is it possible to write a query that does this in one statement, or would I first need to execute the second query I listed, get the list of IDs, then query each record in TableA for the name column... It seems to me I'm missing something obvious here, but I'm (quite obviously) not an SQL guru...
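
    A sketch of the single-statement form, using the post's own table names: join the tables first, then group by both the ID and the name so the name can appear alongside the aggregate:

        SELECT a.tableAID, a.name, AVG(b.grade) AS avg_grade
        FROM TableA a
        JOIN TableB b ON b.tableAID = a.tableAID
        GROUP BY a.tableAID, a.name;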

    Read the article

  • How to build a data flow?

    - by salvationishere
    I am running Visual Studio 2008 and the SSIS tutorial described at http://msdn.microsoft.com/en-us/library/ms167106.aspx. I finished all of the tasks but am getting the following errors:

        Error 1: Validation error. Extract Sample Currency Data: Extract Sample Currency Data: input column "CurrencyAlternateKey" (123) has lineage ID 55 that was not previously used in the Data Flow task. Lesson 1.dtsx

        Error 2: Validation error. Extract Sample Currency Data SSIS.Pipeline: input column "CurrencyAlternateKey" (123) has lineage ID 55 that was not previously used in the Data Flow task. Lesson 1.dtsx

    Can you tell me what I need to do to make this build without errors?

    Read the article

  • Auto Increment feature of SQL Server

    - by Rahul Tripathi
    I have created a table named ABC. It has three columns; the column Number_pk (int) is the primary key of my table, and I have turned the auto increment feature on for that column. Now I have deleted two rows from that table, say Number_pk = 5 and Number_pk = 6. If I then enter two new rows into this table, the two new Number_pk values start from 7 and 8. My question is: what is the logic behind this, since I have deleted those two rows from the table? I know the simple answer is that I have set auto increment on for the primary key of my table. But I want to know: is there any way that I can insert the two new entries starting from the last Number_pk without changing the design of my table? And how does SQL Server manage this record-keeping, since I have deleted the rows from the database?
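
    If the goal is just to reuse the gap left by the deleted rows, two standard T-SQL options exist (column names other than Number_pk are invented here, since the post doesn't name them):

        -- Option 1: reseed so the next automatic value follows the highest
        -- remaining key (safe here because rows 5 and 6 were the last ones).
        DBCC CHECKIDENT ('ABC', RESEED, 4);

        -- Option 2: insert explicit key values into the identity column.
        SET IDENTITY_INSERT ABC ON;
        INSERT INTO ABC (Number_pk, ColB, ColC) VALUES (5, 'x', 'y');
        SET IDENTITY_INSERT ABC OFF;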

    Read the article

  • PHP 5.2 Function needed for GENERIC sorting of a recordset array

    - by donbriggs
    Somebody must have come up with a solution for this by now. We are using PHP 5.2. (Don't ask me why.) I wrote a PHP class to display a recordset as an HTML table/datagrid, and I wish to expand it so that we can sort the datagrid by whichever column the user selects. In the below example data, we may need to sort the recordset array by the Name, Shirt, Assign, or Age fields. I will take care of the display part; I just need help with sorting the data array. As usual, I query a database to get a result, iterate through the result, and put the records into an associative array. So we end up with an array of arrays (see below). I need to be able to sort by any column in the dataset. However, I will not know the column names at design time, nor will I know if the columns will hold string or numeric values. I have seen a ton of solutions to this, but I have not seen a GOOD and GENERIC solution. Can somebody please suggest a way to sort the recordset array that is GENERIC and will work on any recordset? Again, I will not know the field names or datatypes at design time. The array presented below is ONLY an example. UPDATE: Yes, I would love to have the database do the sorting, but that is just not going to happen. The queries that we are running are very complex. (I am not really querying a table of Star Trek characters.) They include joins, limits, and complex WHERE clauses. Writing a function to pick apart the SQL statement to add an ORDER BY is really not an option. Besides, sometimes we already have the array that is a result of the query, rather than the ability to run a new query.

        Array
        (
            [0] => Array ( [name] => Kirk   [shrit] => Gold [assign] => Bridge )
            [1] => Array ( [name] => Spock  [shrit] => Blue [assign] => Bridge )
            [2] => Array ( [name] => Uhura  [shrit] => Red  [assign] => Bridge )
            [3] => Array ( [name] => Scotty [shrit] => Red  [assign] => Engineering )
            [4] => Array ( [name] => McCoy  [shrit] => Blue [assign] => Sick Bay )
        )

    Read the article

  • Rails. Putting update logic in your migrations

    - by Daniel Abrahamsson
    A couple of times I've been in the situation where I've wanted to refactor the design of some model and have ended up putting update logic in migrations. However, as far as I've understood, this is not good practice (especially since you are encouraged to use your schema file for deployment, not your migrations). How do you deal with this kind of problem? To clarify what I mean, say I have a User model. Since I thought there would only be two kinds of users, namely a "normal" user and an administrator, I chose to use a simple boolean field telling whether the user was an administrator or not. However, after a while I figured I needed some third kind of user, perhaps a moderator or something similar. In this case I add a UserType model (and the corresponding migration), and a second migration for removing the "admin" flag from the user table. And here comes the problem. In the "add_user_type_to_users" migration I have to map the admin flag value to a user type. Additionally, in order to do this, the user types have to exist, meaning I can not use the seeds file, but rather have to create the user types in the migration (also considered bad practice). Here comes some fictional code representing the situation:

        class CreateUserTypes < ActiveRecord::Migration
          def self.up
            create_table :user_types do |t|
              t.string :name, :nil => false, :unique => true
            end
            # Create basic types (can not put in seed, because of future migration dependency)
            UserType.create!(:name => "BASIC")
            UserType.create!(:name => "MODERATOR")
            UserType.create!(:name => "ADMINISTRATOR")
          end

          def self.down
            drop_table :user_types
          end
        end

        class AddTypeIdToUsers < ActiveRecord::Migration
          def self.up
            add_column :users, :type_id, :integer
            # Determine type via the admin flag
            basic = UserType.find_by_name("BASIC")
            admin = UserType.find_by_name("ADMINISTRATOR")
            User.all.each {|u| u.update_attribute(:type_id, (u.admin?) ? admin.id : basic.id)}
            # Remove the admin flag
            remove_column :users, :admin
            # Add foreign key
            execute "alter table users add constraint fk_user_type_id foreign key (type_id) references user_types (id)"
          end

          def self.down
            # Re-add the admin flag
            add_column :users, :admin, :boolean, :default => false
            # Reset the admin flag (this is the problematic update code)
            admin = UserType.find_by_name("ADMINISTRATOR")
            execute "update users set admin=true where type_id=#{admin.id}"
            # Remove foreign key constraint
            execute "alter table users drop foreign key fk_user_type_id"
            # Drop the type_id column
            remove_column :users, :type_id
          end
        end

    As you can see, there are two problematic parts. First, the row-creation part in the first migration, which is necessary if I would like to run all migrations in a row; then the "update" part in the second migration that maps the "admin" column to the "type_id" column. Any advice?

    Read the article

  • Dynamic data-entry value store

    - by simendsjo
    I'm creating a data-entry application where users are allowed to create the entry schema. My first version just created a single table per entry schema, with each entry spanning a single column or multiple columns (for complex types) of the appropriate data type. This allowed for "fast" querying (on small datasets, as I didn't index all columns) and simple synchronization where the data entry was distributed over several databases. I'm not quite happy with this solution though; the only positive thing is its simplicity. I can only store a fixed number of columns, I need to create indexes on all columns, and I need to recreate the table on schema changes. Some of my key design criteria are:

        - Very fast querying (using a simple domain-specific query language)
        - Writes don't have to be fast
        - Many concurrent users
        - Schemas will change often
        - Schemas might contain many thousands of columns
        - The data entries might be distributed and need synchronization
        - Preferably MySQL and SQLite - databases like DB2 and Oracle are out of the question
        - Using .Net/Mono

    I've been thinking of a couple of possible designs, but none of them seems like a good choice. Solution 1: a union-like table containing a Type column and one nullable column per type. This avoids joins, but will definitely use a lot of space. Solution 2: a key/value store. All values are stored as strings and converted when needed. This also uses a lot of space, and of course, I hate having to convert everything to strings. Solution 3: use an XML database or store values as XML. Without any experience I would think this is quite slow (at least for the relational model, unless there is some very good XPath support). I would also like to avoid an XML database, as other parts of the application fit better in a relational model, and being able to join the data is helpful. I cannot help thinking that someone has solved (some of) this already, but I'm unable to find anything. Not quite sure what to search for either... I know market research is doing something like this for their questionnaires, but there are few open source implementations, and the ones I've found don't quite fit the bill. PSPP has much of the logic I'm thinking of: primitive column types, many columns, many rows, fast querying and merging. Too bad it doesn't work against a database. And of course, I don't need 99% of the provided functionality, but a lot of stuff not included. I'm not sure this is the right place to ask such a design-related question, but I hope someone here has some tips, knows of any existing work, or can point me to a better place to ask. Thanks in advance!
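
    For reference, a rough sketch of what the "Solution 1" union-like layout could look like in SQL (all names invented here); one typed column per primitive keeps values queryable and indexable without string conversion:

        CREATE TABLE field_def (
            field_id  INT PRIMARY KEY,
            schema_id INT NOT NULL,
            name      VARCHAR(100) NOT NULL,
            data_type VARCHAR(20) NOT NULL    -- 'int', 'text', 'date', ...
        );

        CREATE TABLE entry_value (
            entry_id   INT NOT NULL,
            field_id   INT NOT NULL REFERENCES field_def (field_id),
            int_value  INT NULL,              -- exactly one of these is set,
            text_value TEXT NULL,             -- according to the field's
            date_value DATETIME NULL,         -- declared data_type
            PRIMARY KEY (entry_id, field_id)
        );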

    Read the article

  • Showing renames in hg status?

    - by Ryan Thompson
    I know that Mercurial can track renames of files, but how do I get it to show me renames instead of adds/removes when I do hg status? For instance, instead of:

        A bin/extract-csv-column.pl
        A bin/find-mirna-binding.pl
        A bin/xls2csv-separate-sheets.pl
        A lib/Text/CSV/Euclid.pm
        R src/extract-csv-column.pl
        R src/find-mirna-binding.pl
        R src/modules/Text/CSV/Euclid.pm
        R src/xls2csv-separate-sheets.pl

    I want some indication that four files have been moved. I think I read somewhere that the output is like this to preserve backward-compatibility with something-or-other, but I'm not worried about that.

    Read the article

  • Calculated column, stored procedure, or just a PHP function needed?

    - by mcgrailm
    I have an order table in a MySQL database, and it has a field/column which stores the datetime stamp of when the order was placed. I need to calculate when the order must be shipped. I could probably figure out how to write a function to calculate the ship date and call it whenever needed, but I think (not sure) it may make more sense to have the ship date as a column that is somehow calculated in MySQL. That being said, I have never used a stored procedure or created a calculated field. The latter would be best, I think, but again I'm not sure. I used to make calculated fields all the time in FMP, but I've gotten away from that program. If someone could point me in the right direction, or tell me why it would be better to do it one way over another, I'd appreciate it. Thanks, Mike
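
    If the ship date is always derivable from the order timestamp, one option is to not store it at all and compute it in the query. A sketch assuming a flat three-day shipping rule and invented table/column names:

        SELECT order_id,
               ordered_at,
               DATE_ADD(ordered_at, INTERVAL 3 DAY) AS ship_by
        FROM orders;

    Storing a derived value only really pays off if the rule is expensive to evaluate, or if it must be frozen at order time even when the rule later changes.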

    Read the article

  • Variable length Blob in hibernate?

    - by Seth
    I have a byte[] member in one of my persistable classes. Normally, I'd just annotate it with @Lob and @Column(name="foo", size=). In this particular case, however, the length of the byte[] can vary a lot (from ~10KB all the way up to ~100MB). If I annotate the column with a size of 128MB, I feel like I'll be wasting a lot of space for the small and mid-sized objects. Is there a variable length blob type I can use? Will hibernate take care of all of this for me behind the scenes without wasting space? What's the best way to go about this? Thanks!

    Read the article

  • can't insert xml dml expression as a string

    - by 81967
    Here is the code that explains the problem. I create a table with an xml column, declare a variable, initialize it, and insert the value into the xml column:

        create table CustomerInfo (XmlConfigInfo xml)
        declare @StrTemp nvarchar(2000)
        set @StrTemp = '<Test></Test>'
        insert into [CustomerInfo](XmlConfigInfo) values (@StrTemp)

    Then comes the part of the question. If I write this, it works fine:

        update [CustomerInfo]
        set XmlConfigInfo.modify('insert <Info></Info> into (//Test)[1]')
        -- Works Fine!!!

    but when I try this:

        set @StrTemp = 'insert <Info></Info> into (//Test)[1]'
        update [CustomerInfo]
        set XmlConfigInfo.modify(@StrTemp)
        -- Doesn't Work!!!

    it throws an error: The argument 1 of the xml data type method "modify" must be a string literal. Is there a way around this one? I tried this, but it is not working :(
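
    One common workaround (my suggestion, not from the post) is to build the entire UPDATE as a string, so that by the time modify() is parsed its argument is a literal again, and run it with dynamic SQL:

        declare @StrTemp nvarchar(2000), @Sql nvarchar(4000)
        set @StrTemp = N'insert <Info></Info> into (//Test)[1]'
        -- the XQuery expression is spliced into the statement text itself;
        -- only do this with trusted input, since it is open to injection
        set @Sql = N'update [CustomerInfo] set XmlConfigInfo.modify(''' + @StrTemp + N''')'
        exec sp_executesql @Sql

    For plain data values (as opposed to a whole DML expression), sql:variable("@var") inside the literal avoids dynamic SQL entirely.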

    Read the article

  • SQL Server ORDER BY/WHERE with nested select

    - by Echilon
    I'm trying to get SQL Server to order by a column from a nested select. I know this isn't the best way of doing this, but it needs to be done. I have two tables, Bookings and BookingItems. BookingItems contains StartDate and EndDate fields, and there can be multiple BookingItems on a Booking. I need to find the earliest start date and latest end date from BookingItems, then filter and sort by these values. I've tried with a nested select, but when I try to use one of the selected columns in a WHERE or ORDER BY, I get an "Invalid Column Name" error.

        SELECT b.*,
               (SELECT COUNT(*) FROM bookingitems i WHERE b.BookingID = i.BookingID) AS TotalRooms,
               (SELECT MIN(i.StartDate) FROM bookingitems i WHERE b.BookingID = i.BookingID) AS StartDate,
               (SELECT MAX(i.EndDate) FROM bookingitems i WHERE b.BookingID = i.BookingID) AS EndDate
        FROM bookings b
        LEFT JOIN customers c ON b.CustomerID = c.CustomerID
        WHERE StartDate >= '2010-01-01'

    Am I missing something about SQL ordering? I'm using SQL Server 2008.
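
    Because column aliases aren't visible to WHERE (which is evaluated before SELECT), one fix is to compute the aggregates in a derived table, so the outer query can filter and sort on them by name. A sketch using the post's own tables:

        SELECT b.*, x.TotalRooms, x.StartDate, x.EndDate
        FROM bookings b
        LEFT JOIN customers c ON b.CustomerID = c.CustomerID
        JOIN (SELECT BookingID,
                     COUNT(*)       AS TotalRooms,
                     MIN(StartDate) AS StartDate,
                     MAX(EndDate)   AS EndDate
              FROM bookingitems
              GROUP BY BookingID) x ON x.BookingID = b.BookingID
        WHERE x.StartDate >= '2010-01-01'
        ORDER BY x.StartDate;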

    Read the article

  • How to display all the dates between two given dates in SQL

    - by Gopal
    Using SQL Server 2000. If the start date is 06/23/2008 and the end date is 06/30/2008, then I need the output of the query to be

        06/23/2008
        06/24/2008
        06/25/2008
        ...
        06/30/2008

    I created a table named integers, which has one column whose values are 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, and then I used the query below:

        SELECT DATEADD(d, H.i * 100 + T.i * 10 + U.i, '" & dtpfrom.Value & "') AS Dates
        FROM integers H
        CROSS JOIN integers T
        CROSS JOIN integers U
        ORDER BY Dates

    The above query displays only 999 dates (365 + 365 + 269). Suppose I want to select more than 3 years (01/01/2003 to 01/01/2008); the above query is not suitable. How can I modify my query? Or is there any other query available for this condition? Please kindly provide me the query.
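
    A sketch of one way to extend the same digits-table trick on SQL Server 2000 (@StartDate and @EndDate stand in for the concatenated picker values in the original): a fourth cross join covers 0-9999 days, roughly 27 years, and the WHERE clause trims the output at the end date.

        SELECT DATEADD(d, Th.i * 1000 + H.i * 100 + T.i * 10 + U.i, @StartDate) AS Dates
        FROM integers Th
        CROSS JOIN integers H
        CROSS JOIN integers T
        CROSS JOIN integers U
        WHERE DATEADD(d, Th.i * 1000 + H.i * 100 + T.i * 10 + U.i, @StartDate) <= @EndDate
        ORDER BY Dates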

    Read the article

  • Comparing 2 mysql tables and updating only records that have changed

    - by Roland
    I have the following: two MySQL tables. I want to copy info that has changed from table A to table B. For example, if row 1, column 2 has changed in table A, I want to update only that column in table B. Table B is not the same as A, but it has some columns that also exist in A. The other solution I have is to just clear table B and replace it with the contents of table A; the problem with this is that the script could take longer to execute, since there are more than 10000 records. Any advice on which method would work best will be highly appreciated.
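
    A sketch of a single-statement sync in MySQL (table and column names are placeholders, and it assumes the two tables share a key); the NULL-safe comparison <=> keeps rows with NULLs from being missed, and only rows that actually differ are touched:

        UPDATE table_b b
        JOIN table_a a ON a.id = b.id
        SET b.col1 = a.col1,
            b.col2 = a.col2
        WHERE NOT (b.col1 <=> a.col1)
           OR NOT (b.col2 <=> a.col2);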

    Read the article
