Search Results

Search found 24784 results on 992 pages for 'table udf'.

Page 325/992 | < Previous Page | 321 322 323 324 325 326 327 328 329 330 331 332  | Next Page >

  • UITableViewCell is transparent when not supposed to be

    - by David Liu
    My UITableViewCell is being transparent when it's not supposed to be. My table view has a background color and it shows through the table cell, even though they're supposed to be opaque. I'm not sure why this is. Relevant code:

        UITableViewCell *cell = [table dequeueReusableCellWithIdentifier:emptyIdentifier];
        if (cell == nil) {
            cell = [[[UITableViewCell alloc] initWithFrame:CGRectZero
                                          reuseIdentifier:emptyIdentifier] autorelease];
        }
        cell.textLabel.text = @"Empty";
        cell.textLabel.textAlignment = UITextAlignmentCenter;
        cell.textLabel.backgroundColor = [UIColor whiteColor];
        return cell;
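
    A possible fix, sketched below and not taken from the original post: the label's white background only covers the label itself, so if the cell is transparent the table view's color still shows through around it. Giving the cell and its content view an opaque background usually addresses that.

        // Sketch: make the cell itself opaque so the table view's background
        // cannot show through (the label's white background is not enough).
        cell.backgroundColor = [UIColor whiteColor];
        cell.contentView.backgroundColor = [UIColor whiteColor];
        cell.opaque = YES;
        cell.textLabel.opaque = YES;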

    Read the article

  • Grails / GORM, Disable First-level Cache

    - by Stephen Swensen
    Suppose I have the following domain class mapping to a legacy table, utilizing a read-only second-level cache and having a transient field:

        class DomainObject {
            static def transients = ['userId']

            Long id
            Long userId

            static mapping = {
                cache usage: 'read-only'
                table 'SOME_TABLE'
            }
        }

    The problem is that references to DomainObject are shared because of first-level caching, so the transient fields write over each other. For example:

        def r1 = DomainObject.get(1)
        r1.userId = 22
        def r2 = DomainObject.get(1)
        r2.userId = 34

        assert r1.userId == 34

    That is, r1 and r2 are references to the same instance. This is undesirable; I would like to cache the table data without sharing references. Any ideas? [Edit] Understanding the situation better now, I believe my question boils down to this: is there any way to disable the first-level cache for a specific domain class while still using the second-level cache?
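
    One workaround, sketched below under the assumption that evicting instances from the Hibernate session is acceptable: discard() each instance after loading it, so a later get() builds a fresh object (typically from the second-level cache) instead of handing back the same session-cached reference.

        // Sketch: detach each loaded instance from the first-level (session) cache.
        def r1 = DomainObject.get(1)
        r1.discard()
        r1.userId = 22

        def r2 = DomainObject.get(1)   // a new instance, not the same reference as r1
        r2.userId = 34

        assert r1.userId == 22
        assert r2.userId == 34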

    Read the article

  • username and password check linq query in c#

    - by b0x0rz
    This LINQ query:

        var users = from u in context.Users
                    where u.UserEMailAdresses.Any(e1 => e1.EMailAddress == userEMail)
                       && u.UserPasswords.Any(e2 => e2.PasswordSaltedHash == passwordSaltedHash)
                    select u;
        return users.Count();

    returns 1 even when there is nothing in the password table. How come? What I am trying to do is get the email and password hash values from two separate tables (UserEMailAddresses and UserPasswords) linked via foreign keys to a third table (Users). It should be simple: checking whether the email and password from the form match the database. But it is not working for me; I get 1 (for the count) even when there are NO entries in the UserPasswords table. Is the LINQ query above completely wrong, or...?
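
    A diagnostic sketch, not from the post: counting each predicate separately shows which half of the test is actually matching. Any() on an empty collection returns false, so if the count with only the password condition also comes back non-zero, the UserPasswords navigation is not pointing at the table you think it is, or a matching row exists after all.

        // Sketch: evaluate the two conditions separately to see which one matches.
        int byEmail = context.Users
            .Count(u => u.UserEMailAdresses.Any(e => e.EMailAddress == userEMail));

        int byPassword = context.Users
            .Count(u => u.UserPasswords.Any(p => p.PasswordSaltedHash == passwordSaltedHash));

        Console.WriteLine("email: {0}, password: {1}", byEmail, byPassword);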

    Read the article

  • Force 'Replace Into' to use a certain index

    - by Bobby
    I have a MySQL (5.0) table with three columns that are supposed to act as a combined unique index:

        CREATE TABLE `test`.`table_a` (
          `Id` int(11) NOT NULL AUTO_INCREMENT,
          `field1` varchar(5) COLLATE latin1_swedish_ci NOT NULL DEFAULT '',
          `field2` varchar(5) COLLATE latin1_swedish_ci NOT NULL DEFAULT '',
          `field3` varchar(5) COLLATE latin1_swedish_ci NOT NULL DEFAULT '',
          PRIMARY KEY (`Id`),
          INDEX `IdxUnique` (`field1`(5),`field2`(5),`field3`(5))
        ) ENGINE=MyISAM;

    This table should be filled with a REPLACE INTO query:

        REPLACE INTO table_a (Field1, Field2, Field3) VALUES ("Test1", "Test2", "Test3")

    The behavior I'd like to see is that this query always overwrites the previously inserted row, because IdxUnique is...ahm, triggered. But unfortunately, the primary index still seems to kick in and a new row is inserted every time. What I get after the query is executed 3 times:

        +---Id---+---Field1---+---Field2---+---Field3---+
        |      1 | Test1      | Test2      | Test3      |
        |      2 | Test1      | Test2      | Test3      |
        |      3 | Test1      | Test2      | Test3      |
        +--------+------------+------------+------------+

    What I want after the query is executed 3 times:

        +---Id---+---Field1---+---Field2---+---Field3---+
        |      3 | Test1      | Test2      | Test3      |
        +--------+------------+------------+------------+

    So, can I tell REPLACE INTO to use just a certain index, or to consider one 'more important' than another?
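
    A sketch of the usual fix: REPLACE INTO only replaces rows that collide on a PRIMARY KEY or a UNIQUE index, so a plain INDEX over the three fields never triggers it. Declaring the three-column index as UNIQUE makes that combination the collision target (the auto-increment Id still advances, which matches the desired output above).

        -- Sketch: make the combined index UNIQUE so REPLACE INTO treats a
        -- duplicate (field1, field2, field3) combination as a collision.
        ALTER TABLE table_a
          DROP INDEX IdxUnique,
          ADD UNIQUE INDEX IdxUnique (field1(5), field2(5), field3(5));

        REPLACE INTO table_a (Field1, Field2, Field3) VALUES ("Test1", "Test2", "Test3");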

    Read the article

  • SQL Server, Remote Stored Procedure, and DTC Transactions

    - by marc
    Our organization has a lot of its essential data in a mainframe Adabas database. We have ODBC access to this data and from C# have queried/updated it successfully using ODBC/Natural "stored procedures". What we'd like to be able to do now is to query a mainframe table from within SQL Server 2005 stored procs, dump the results into a table variable, massage it, and join the result with native SQL data as a result set. The execution of the Natural proc from SQL works fine when we're just selecting it; however, when we insert the result into a table variable SQL seems to be starting a distributed transaction that in turn seems to be wreaking havoc with our connections. Given that we're not performing updates, is it possible to turn off this DTC-escalation behavior? Any tips on getting DTC set up properly to talk to DataDirect's (formerly Neon Systems) Shadow ODBC driver?

    Read the article

  • SQLAlchemy - how to map against a read-only (or calculated) property

    - by Jeff Peck
    I'm trying to figure out how to map against a simple read-only property and have that property fire when I save to the database. A contrived example should make this clearer. First, a simple table:

        meta = MetaData()
        foo_table = Table('foo', meta,
            Column('id', String(3), primary_key=True),
            Column('description', String(64), nullable=False),
            Column('calculated_value', Integer, nullable=False),
        )

    What I want to do is set up a class with a read-only property that will insert into the calculated_value column for me when I call session.commit():

        import datetime

        class Foo(object):
            def __init__(self, id, description):
                self.id = id
                self.description = description

            @property
            def calculated_value(self):
                self._calculated_value = datetime.datetime.now().second + 10
                return self._calculated_value

    According to the SQLAlchemy docs, I think I am supposed to map this like so:

        mapper(Foo, foo_table, properties={
            'calculated_value': synonym('_calculated_value', map_column=True)
        })

    The problem with this is that _calculated_value is None until you access the calculated_value property. It appears that SQLAlchemy is not calling the property on insertion into the database, so I'm getting a None value instead. What is the correct way to map this so that the result of the "calculated_value" property is inserted into the foo table's "calculated_value" column?
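
    A sketch of one alternative, assuming the value only needs to be computed at INSERT time: let the Column compute it through a default callable, so the value is generated during the flush regardless of whether the property was ever read.

        # Sketch: compute calculated_value at flush time via a column default.
        import datetime
        from sqlalchemy import MetaData, Table, Column, String, Integer

        def _compute_value():
            return datetime.datetime.now().second + 10

        meta = MetaData()
        foo_table = Table('foo', meta,
            Column('id', String(3), primary_key=True),
            Column('description', String(64), nullable=False),
            # The callable runs automatically when the row is inserted.
            Column('calculated_value', Integer, nullable=False, default=_compute_value),
        )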

    Read the article

  • Complex Join - involving date ranges and sum...

    - by calumbrodie
    I have two tables that I need to join. I want to join table_a and table_b on 'id'; however, in table_b the id is not unique. I only want one value returned from table_b, and that value should be the SUM of a column called 'total_sold' within a specified date range (say, one month):

        SELECT ta.id, tb.total_sold AS total_sold_this_week
        FROM table_a AS ta
        LEFT JOIN table_b AS tb
          ON ta.id = tb.id
         AND tb.date_sold BETWEEN ADDDATE(NOW(), INTERVAL -3 WEEK) AND NOW()

    This works but does not SUM the rows; it only returns one row for each id. How do I get the sum from table_b instead of only one row? Please criticise if the format of the question could use more work; I can rewrite it and provide sample data if required. This is a trivialised version of a much larger problem. Thanks.
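
    A sketch of one way to get the aggregate, assuming one total per id is wanted: aggregate table_b in a derived table first, then join the totals onto table_a.

        -- Sketch: sum per id in a subquery, then LEFT JOIN the totals.
        SELECT ta.id,
               COALESCE(tb.total_sold_this_period, 0) AS total_sold_this_period
        FROM table_a AS ta
        LEFT JOIN (
            SELECT id, SUM(total_sold) AS total_sold_this_period
            FROM table_b
            WHERE date_sold BETWEEN ADDDATE(NOW(), INTERVAL -3 WEEK) AND NOW()
            GROUP BY id
        ) AS tb ON ta.id = tb.id;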

    Read the article

  • How to Select Multiple Records from Multiple Tables at Once - SQL Server/C#/.NET/T-SQL

    - by peace
    I have two tables, Customer and CustomerPhone. A customer usually has multiple phone numbers, so when I run a select statement on customer 101, I get multiple records because of the multiple phone numbers. All of the "Phone" and "Fax" fields belong to the CustomerPhone table; they are two separate records in CustomerPhone, whereas the rest of the fields belong to the Customer table, which is a single record. What should I do to fill the Phone and Fax fields in this case? Should I run a select statement on CustomerPhone first and then run a select statement on Customer?

    Read the article

  • query optimization

    - by Gaurav
    I have a query of the form:

        SELECT uid1, uid2 FROM friend
        WHERE uid1 IN (SELECT uid2 FROM friend WHERE uid1='.$user_id.')
          AND uid2 IN (SELECT uid2 FROM friend WHERE uid1='.$user_id.')

    The problem is that the nested query SELECT uid2 FROM friend WHERE uid1='.$user_id.' returns a very large number of ids (approx. 5000). The structure of the friend table is uid1 (int), uid2 (int); it is used to determine whether two users are linked together as friends. Is there a workaround? Can I write the query in a different way, or is there some other way to solve this issue? I'm sure I am not the first person to face such a problem. Any help would be greatly appreciated.
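
    A sketch of one common rewrite, with :user_id standing in for the concatenated $user_id value and assuming the friend table holds at most one row per (uid1, uid2) pair: join the user's friend list twice instead of using two IN subqueries, which lets MySQL drive the query from an index on (uid1, uid2).

        -- Sketch: find friendship rows where both endpoints are friends of :user_id.
        SELECT f.uid1, f.uid2
        FROM friend AS f
        JOIN friend AS a ON a.uid1 = :user_id AND a.uid2 = f.uid1
        JOIN friend AS b ON b.uid1 = :user_id AND b.uid2 = f.uid2;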

    Read the article

  • Determining threshold for lock escalation

    - by Davin
    I have a table with around 2.5 million records and will be updating around 700k of them, and I want to perform the update while still allowing other users to see the data. My update statement looks something like this:

        UPDATE A WITH (UPDLOCK, ROWLOCK)
        SET A.field = B.field
        FROM Table_1 A
        INNER JOIN Table2 B ON A.id = B.id
        WHERE A.field IS NULL
          AND B.field IS NOT NULL

    I was wondering if there is any way to work out at what point SQL Server will escalate a lock placed by an update statement (as I don't want the whole table to be locked). I don't have permissions to run a server trace to see how the locks are being applied, so is there any other way of knowing at what point the lock will be escalated to cover the whole table? Thanks!
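
    For context, and only as a rule of thumb: SQL Server considers escalating to a table lock once a single statement holds roughly 5,000 locks on one object, or when lock memory comes under pressure. A common way to stay below that threshold is to update in batches; the sketch below uses an arbitrary batch size of 4,000 rows.

        -- Sketch: update in batches small enough to avoid lock escalation.
        DECLARE @rows INT;
        SET @rows = 1;
        WHILE @rows > 0
        BEGIN
            UPDATE TOP (4000) A
            SET A.field = B.field
            FROM Table_1 A WITH (UPDLOCK, ROWLOCK)
            INNER JOIN Table2 B ON A.id = B.id
            WHERE A.field IS NULL
              AND B.field IS NOT NULL;

            SET @rows = @@ROWCOUNT;
        END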

    Read the article

  • Using property file in hibernate mapping

    - by Zoltan Hamori
    Hi, I have a two-node environment in which both nodes use the same database. In the database there is a resource table with the columns:

        RESOURCE_ID, CODE, NODE

    The content of the NODE column can be 1 or 2, depending on which node may use the row. As I need to deploy the same ear to both nodes, I would like to map this table like this:

        <hibernate-mapping>
            <class name="ResourceVO" table="RESOURCE" dynamic-update="true"
                   optimistic-lock="dirty" where="NODE=${node.value}">

    I would like to store the node.value property on the file system, so each instance could identify which resources to use. Is this possible in Hibernate?
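
    As far as I know, ${...} placeholders are not substituted inside hbm.xml mapping attributes. One alternative, sketched here as an assumption rather than a drop-in answer, is a parameterised Hibernate filter: the condition lives in the mapping, and each node enables the filter with its own value (read from a properties file) when it opens a session.

        <!-- Sketch: filter definition in the mapping, parameterised per node. -->
        <filter-def name="nodeFilter">
            <filter-param name="nodeValue" type="integer"/>
        </filter-def>

        <class name="ResourceVO" table="RESOURCE" dynamic-update="true" optimistic-lock="dirty">
            <!-- id and property mappings ... -->
            <filter name="nodeFilter" condition="NODE = :nodeValue"/>
        </class>

    Each node would then call session.enableFilter("nodeFilter").setParameter("nodeValue", nodeFromPropertyFile) after opening a session, where nodeFromPropertyFile is a hypothetical value loaded from the file system.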

    Read the article

  • Get all related products based on their full-text search relationship

    - by MikeJ
    I have a Product table with the fields Id, Title, Description, and Keywords (just a comma-separated list of keywords). The table is full-text indexed. When I view one product, I run a query against the full-text catalog for any related products based on the Keywords field:

        select * from Products where Contains(Products.*, @keywordsFromOneProduct)

    It works like a charm. Now I would like to list all products and all their related products in one big list, and I want to avoid calling this method for each item. Any ideas on how I could do it? I was thinking about a job that would go through the products one by one and build a one-to-many mapping table (fields ProductId, RelatedProductId), but I wonder whether there is a better way.

    Read the article

  • How to find the parent element using javascript

    - by Kalpana
    I want to change the background color of the table cell when the radio button inside the cell is clicked.

        <table>
          <tr>
            <c:forEach items="#{books}" var="book">
              <td align="center">
                <h:selectOneRadio value="#{book.price}"
                                  onclick="this.parentElement.style.background-color='red';">
                  <f:selectItems value="#{books.getDetails()}" />
                </h:selectOneRadio>
              </td>
            </c:forEach>
          </tr>
        </table>

    How do I get a reference to the parent element?
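
    A sketch of one way to do it, with the caveat that h:selectOneRadio renders its own markup around the input, so the immediate parent is usually not the td. Walking up to the enclosing cell and using the camel-cased style property (style.background-color is not valid JavaScript) should work:

        // Sketch: climb from the clicked element to the enclosing <td> and color it.
        function highlightCell(el) {
            while (el && el.tagName !== 'TD') {
                el = el.parentNode;
            }
            if (el) {
                el.style.backgroundColor = 'red';  // camel case, not background-color
            }
        }
        // used from the component as: onclick="highlightCell(this);"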

    Read the article

  • Implementing tagging in JDO

    - by Julie Paltrow
    I am implementing a tagging system for a website that uses JDO. I would like to use this method. However, I am new to relationships in JDO. To keep it simple, what I have looks like this:

        @PersistenceCapable
        class Post {
            @Persistent String title;
            @Persistent String body;
        }

        @PersistenceCapable
        class Tag {
            @Persistent String name;
        }

    What kind of JDO relationships do I need, and how do I implement them? I want to be able to list all Tags that belong to a Post, and also to list all Posts that have a given Tag. So in the end I would like to have something like this:

        Table: Post     Columns: PostID, Title, Body
        Table: Tag      Columns: TagID, Name
        Table: PostTag  Columns: PostID, TagID
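
    A sketch of a many-to-many mapping that produces the three tables above, assuming a relational datastore and a JDO 2 implementation such as DataNucleus (the join-table and column names are illustrative):

        import java.util.HashSet;
        import java.util.Set;
        import javax.jdo.annotations.*;

        @PersistenceCapable
        public class Post {
            @PrimaryKey
            @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
            private Long id;

            @Persistent private String title;
            @Persistent private String body;

            // Owning side: the collection is stored in a POST_TAG join table.
            @Persistent(table = "POST_TAG")
            @Join(column = "POST_ID")
            @Element(column = "TAG_ID")
            private Set<Tag> tags = new HashSet<Tag>();
        }

        @PersistenceCapable
        public class Tag {
            @PrimaryKey
            @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
            private Long id;

            @Persistent private String name;

            // Inverse side: lists all Posts carrying this Tag via the same join table.
            @Persistent(mappedBy = "tags")
            private Set<Post> posts = new HashSet<Post>();
        }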

    Read the article

  • Is it possible to partition more than one way at a time in SQL Server?

    - by meeting_overload
    I'm considering various ways to partition my data in SQL Server. One approach I'm looking at is to partition a particular huge table into 8 partitions, then within each of these partitions to partition on a different partition column. Is this even possible in SQL Server, or am I limited to defining one partition column + function + scheme per table? I'm interested in the more general answer, but this is the strategy I'm considering for a Distributed Partitioned View, where I'd partition the data under the first scheme, using the DPV to distribute the huge amount of data over 8 machines, and then on each machine partition that portion of the full table on another partition key in order to be able to drop (for example) sub-partitions as required.

    Read the article

  • nested repeaters - how to sort

    - by Cristian Boariu
    Hi guys, I have a table, Category, with some categories: category1, category2, category3. I have another table, Subcategory, which has a foreign key to Category, so category1 can have subcategoryB, subcategoryA, subcategoryZ. Well, I've made two repeaters. The first one is bound to a LINQ data source that takes the categories from the parent table. The child repeater is bound to that foreign key like this:

        DataSource='<%# Eval("Subcategories_1s") %>'

    So the result is:

        category1
            subcategoryB
            subcategoryA
            subcategoryZ
        category2
        etc.

    Well, the strange question is: how can I order the subcategories alphabetically? If I had bound the child repeater to a LINQ query it would be easy, but in this case...?
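
    A sketch of one approach (the control IDs, entity type and Name property are assumptions): sort the child collection in the parent repeater's ItemDataBound handler before binding the nested repeater, which keeps the ordering logic in code rather than in the markup.

        // Sketch: requires using System.Linq; sorts the children before binding.
        protected void CategoryRepeater_ItemDataBound(object sender, RepeaterItemEventArgs e)
        {
            if (e.Item.ItemType != ListItemType.Item &&
                e.Item.ItemType != ListItemType.AlternatingItem)
                return;

            var category = (Category)e.Item.DataItem;                        // parent row
            var child = (Repeater)e.Item.FindControl("SubcategoryRepeater"); // nested repeater

            child.DataSource = category.Subcategories_1s.OrderBy(s => s.Name);
            child.DataBind();
        }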

    Read the article

  • Help creating a ColumnName Convention using FluentNHibernate

    - by Rafael E. Belliard
    I've been trying to specify a custom naming convention for my database table columns. So far, I have been able to set up a convention for the table's name, but not for the actual columns. I've seen a few guides on the internet, but they don't work with the latest Fluent NHibernate (1.0.0 RTM).

        public class CamelCaseSplitNamingConvention : IClassConvention, IComponentConvention
        {
            public void Apply(IClassInstance instance)
            {
                instance.Table(instance.EntityType.Name.ChangeCamelCaseToUnderscore());
            }

            public void Apply(IComponentInstance instance)
            {
                // is this the correct call for columns? If not, which one?
            }
        }

    Please help.
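
    A sketch of what usually handles columns in Fluent NHibernate 1.0 (ChangeCamelCaseToUnderscore is assumed to be your own extension method): column names come from a property-level convention rather than from IComponentConvention.

        // Sketch: a property convention renames the mapped columns.
        public class ColumnNamingConvention : IPropertyConvention
        {
            public void Apply(IPropertyInstance instance)
            {
                instance.Column(instance.Property.Name.ChangeCamelCaseToUnderscore());
            }
        }

    The convention would then be registered alongside the class convention, for example via Conventions.Add<ColumnNamingConvention>() in the fluent configuration.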

    Read the article

  • utf8 and unicode getting warning messages in mysql

    - by BufordTaylor
    I have a MySQL table. When I try to insert, I get this:

        Warning: Incorrect string value: '\xAE</...' for column 'value' at row 1

        mysql> show create table Configurations;
        | Configurations | CREATE TABLE `Configurations` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `title` varchar(255) NOT NULL,
          `ckey` varchar(255) NOT NULL,
          `value` mediumtext,
          PRIMARY KEY (`id`),
          KEY `ckey` (`ckey`)
        ) ENGINE=InnoDB AUTO_INCREMENT=29 DEFAULT CHARSET=utf8 |

        mysql> SHOW VARIABLES LIKE 'coll%';
        +----------------------+-----------------+
        | Variable_name        | Value           |
        +----------------------+-----------------+
        | collation_connection | utf8_general_ci |
        | collation_database   | utf8_general_ci |
        | collation_server     | utf8_general_ci |
        +----------------------+-----------------+

    I googled the hell out of the error, and it all seemed to boil down to utf8 being set as my default character set. It's been like that for a while. I'm not sure what else to do. Help?
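
    For context, as a hedged interpretation: 0xAE is the latin1 byte for the registered-trademark sign and is not a valid UTF-8 sequence on its own, so the warning usually means the client is sending bytes in one encoding while the connection is declared as another. The sketch below shows the usual remedy of making the two agree.

        -- Sketch: declare the character set the client actually sends,
        -- before running the INSERT.
        SET NAMES latin1;   -- if the application produces latin1 bytes
        -- or, preferably, have the application encode its strings as UTF-8:
        SET NAMES utf8;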

    Read the article

  • Loading tables dynamically with NHibernate

    - by Trevor Goertzen
    I'm working on a project that requires me to load tables based on table names stored in another table. More tables will be added to the DB (and by someone else), so creating NHibernate mapping files for each table isn't an option. Does anyone know if it is possible to load tables dynamically using NHibernate? Edit: I should add that I'm on .NET 2.0, so I can't use Fluent NHibernate. Thanks for the suggestion though guys. I will use that as evidence in convincing my associates to upgrade.

    Read the article

  • How to invoke a function written in validation.php and store all function names in a database

    - by saint
    Respected all, I'm in trouble. I created a file named validation.php and stored all my validation functionality in it as separate functions, like:

        function check_textbox($parm1, $parm2, $parm3, $parm4) {
            // definition here
        }

        function check_chkBox($parm1, $parm2) {
            // definition here
        }

    and so on. Then I created a table in MySQL named tbValidation and stored all the function names with parameters in it. The records stored in the table look like this:

        interfaceid   functionNameWithReturnValue
        1             check_textbox(parm1, parm2, parm3, parm4) = 1
        2             check_textbox(parm1, parm2, parm4) = 0
        3             (check_textbox(parm1, parm2, parm4) = 1) AND (check_chkBox(parm1, parm2) = 0)

    When I fetch a record from the database, I want to invoke the functions stored in validation.php:

        $data = mysql_fetch_array($drow);
        if ($db->row_count > 0) {
            // When I fetch row one from the database, $data[0] has the value
            // "check_textbox(parm1, parm2, parm3, parm4) = 1".
            // I tried this, but it is not working:
            if ($data[0]) {
                // do this
            }
        }

    How can I do this task? :(

    Read the article

  • Managing Foreign Keys

    - by jwzk
    So I have a database with a few tables. The first table contains the user ID, first name and last name. The second table contains the user ID, interest ID, and interest rating. There is another table that has all of the interest IDs. For every interest ID (even when new ones are added), I need to make sure that each user has an entry for that interest ID (even if it's blank, or has defaults). Will foreign keys help with this scenario, or will I need to use PHP to update each and every record when I add a new key?

    Read the article

  • What can I do about a SQL Server ghost FK constraint?

    - by rcook8601
    I'm having some trouble with a SQL Server 2005 database that seems like it's keeping a ghost constraint around. I've got a script that drops the constraint in question, does some work, and then re-adds the same constraint. Normally, it works fine. Now, however, it can't re-add the constraint because the database says that it already exists, even though the drop worked fine! Here are the queries I'm working with:

        alter table individual drop constraint INDIVIDUAL_EMP_FK

        ALTER TABLE INDIVIDUAL ADD CONSTRAINT INDIVIDUAL_EMP_FK
            FOREIGN KEY (EMPLOYEE_ID) REFERENCES EMPLOYEE

    After the constraint is dropped, I've made sure that the object really is gone by using the following queries:

        select object_id('INDIVIDUAL_EMP_FK')
        select * from sys.foreign_keys where name like 'individual%'

    Both return no results (or null), but when I try to add the constraint again, I get:

        The ALTER TABLE statement conflicted with the FOREIGN KEY constraint "INDIVIDUAL_EMP_FK".

    Trying to drop it gets me a message that it doesn't exist. Any ideas?
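
    A hedged observation plus a diagnostic sketch (assuming the referenced key column is EMPLOYEE_ID): that particular error text is what SQL Server reports when existing rows violate the constraint being added, rather than when a constraint with the same name already exists, so it may be worth checking the data as well as the metadata.

        -- Sketch: look for orphaned rows that would violate the new FK,
        -- and for any object in the database still using the constraint name.
        SELECT i.EMPLOYEE_ID
        FROM INDIVIDUAL i
        LEFT JOIN EMPLOYEE e ON e.EMPLOYEE_ID = i.EMPLOYEE_ID
        WHERE i.EMPLOYEE_ID IS NOT NULL
          AND e.EMPLOYEE_ID IS NULL;

        SELECT name, type_desc, SCHEMA_NAME(schema_id) AS schema_name
        FROM sys.objects
        WHERE name = 'INDIVIDUAL_EMP_FK';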

    Read the article

  • Plural to Singular conversion trouble in Rails Migrations?

    - by Earlz
    Hi, I'm a beginner at Ruby on Rails and am trying to get a migration to work with the name Priorities. Here is the code I use in my migration:

        class Priorities < ActiveRecord::Migration
          def self.up
            create_table :priorities do |t|
              t.column :name, :string, :null => false, :limit => 32
            end

            Priority.create :name => "Critical"
            Priority.create :name => "Major"
            Priority.create :name => "Minor"
          end

          def self.down
            drop_table :priorities
          end
        end

    This results in the following error, though:

        NOTICE: CREATE TABLE will create implicit sequence "priorities_id_seq" for serial column "priorities.id"
        NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "priorities_pkey" for table "priorities"
        rake aborted!
        An error has occurred, this and all later migrations canceled:

        uninitialized constant Priorities::Priority

    Is this a problem with converting 'ies' to 'y' when going from plural to singular?
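
    A sketch of one likely cause and fix, assuming there is no app/models/priority.rb yet: the error means Ruby cannot find a Priority class at all (it looks it up as Priorities::Priority from inside the migration), not that the inflection failed. Either create the model, or define a throwaway model inside the migration so the seed rows do not depend on application code:

        class Priorities < ActiveRecord::Migration
          # Lightweight model local to this migration; it only needs the table name.
          class Priority < ActiveRecord::Base
            set_table_name :priorities
          end

          def self.up
            create_table :priorities do |t|
              t.column :name, :string, :null => false, :limit => 32
            end

            %w(Critical Major Minor).each { |n| Priority.create :name => n }
          end

          def self.down
            drop_table :priorities
          end
        end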

    Read the article

  • NHibernate bag is always null

    - by Neville Chinan
    I have set up my mapping file and classes as suggested by many articles:

        class A {
            ...
            IList BBag { get; set; }
            ...
        }

        class B {
            ...
            A aObject { get; set; }
            ...
        }

        <class name="A">
            ...
            <bag name="BBag" table="B" inverse="true" lazy="false">
                <key column="A_ID" />
                <one-to-many class="B" />
            </bag>
            ...
        </class>

        <class name="B">
            ...
            <many-to-one name="aObject" class="A" column="A_ID" />
            ...
        </class>

    I added a set of A's to the A table and a set of B's to the B table, and all the data is stored as expected. However, if I try to access aInstance.BBag.Count I get a null reference exception. I think I'm missing some key knowledge about how a bag gets instantiated. Thanks
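
    A sketch of the usual explanation: NHibernate only swaps in its own collection when it loads A through the session, so for an A constructed with new, BBag stays null unless the class initialises it. Initialising the field in the class covers both cases; the generic list here is an illustrative choice.

        // Sketch: give the bag a default value so it is never null for new objects.
        public class A
        {
            private IList<B> _bBag = new List<B>();   // requires System.Collections.Generic

            public virtual IList<B> BBag
            {
                get { return _bBag; }
                set { _bBag = value; }   // NHibernate replaces this when loading from the DB
            }
        }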

    Read the article

  • What's a reasonable number of rows and tables to be able to join in MySQL?

    - by Philip Brocoum
    I have one table that maps locations to postal codes. For example, New York State has about 2000 postal codes. I have another table that maps mail to the postal codes it was sent to, but this table has about 5 million rows. I want to find all the mail that was sent to New York State, which seems simple enough, but the query is unbelievably slow. I haven't been able to even wait long enough for it to finish. Is the problem that there are 5 million rows? I can't help but think that 5 million shouldn't be such a large number for a computer these days... Oh, and everything is indexed. Is SQL just not designed to handle such large joins?

    Read the article
