Search Results

Search found 27248 results on 1090 pages for 'table adapter'.


  • update columns when value is numeric in tsql

    - by knittl
    I want to normalize date fields from an old, badly designed DB dump. I need to update every row where the date field contains only the year: update table set date = '01.01.' + date where date like '____' and isnumeric(date) = 1 and date >= 1950 This does not work, because SQL Server does not guarantee short-circuit evaluation of boolean expressions, so I get the error "error converting nvarchar '01.07.1989' to int". Is there a way to work around this? The column also contains strings of length 4 that are not numbers (????, 5/96, 70/8, etc.). The table only has 60000 rows.
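
    A common workaround (a sketch, using only the table and column names taken literally from the question) is to force evaluation order with a CASE expression and a digit-only LIKE pattern, so the cast can never see a non-numeric value:

        -- Hedged sketch: CASE guarantees the numeric check happens before the cast,
        -- and the [0-9] pattern restricts matches to exactly four digits.
        UPDATE [table]
        SET    [date] = '01.01.' + [date]
        WHERE  [date] LIKE '[0-9][0-9][0-9][0-9]'
          AND  CASE WHEN ISNUMERIC([date]) = 1
                    THEN CAST([date] AS int)
                    ELSE 0
               END >= 1950;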

    Read the article

  • java.util.Date.toString() is printing out wrong format

    - by pacoverflow
    The following code prints out "vmtDataOrig.creationdate=2012-11-03" VmtData vmtDataOrig = VmtDataDao.getInstance().loadVmt(1); System.out.println("vmtDataOrig.creationdate=" + vmtDataOrig.getCreationDate().toString()); Here is the definition of the creationDate field in the VmtData class: private Date creationDate = null; Here is the hibernate mapping of the creationDate field to the database table column: <property name="creationDate" column="CREATIONDATE" type="date"/> The CREATIONDATE column in the MySQL database table is of type "date", and for the record retrieved it has the value "2012-11-03". The Javadoc for the java.util.Date.toString() method says it is supposed to print the Date object in the form "dow mon dd hh:mm:ss zzz yyyy". Anyone know why it is printing it out in the form "yyyy-MM-dd"?

    Read the article

  • How to debug when CakePHP Model::save() doesn't attempt an INSERT

    - by RyOnLife
    I am having a bear of a time saving the simplest record from a model called ItemView: if($this->save($this->data)) { echo 'worked'; } else { echo 'failed'; } Where $this->data is: Array ( [ItemView] => Array ( [list_id] => 1 [user_id] => 1 ) ) And my table is: CREATE TABLE IF NOT EXISTS `item_views` ( `id` int(11) NOT NULL auto_increment, `list_id` int(11) NOT NULL, `user_id` int(11) default NULL, `user_ip` int(10) unsigned default NULL, `created` datetime NOT NULL, PRIMARY KEY (`id`) ) ENGINE=MyISAM DEFAULT CHARSET=utf8 ROW_FORMAT=FIXED AUTO_INCREMENT=1 ; Looking at the query dump in debug mode, Cake isn't even attempting an INSERT, so I have no idea how to debug. Any help would be appreciated.

    Read the article

  • Matching process, issue with query

    - by Blerta Blerta
    I have a query that matches records from two different tables. Each of these tables has an epos_id and an rbpos_id. I have another table that stores pairs of rbpos_id and epos_id, something like:

        id | epos_id | rbpos_id
        1  | a3566   | 465jd
        2  | hkiyb   | rbposi

    When I join this other table, the match should only be made if the epos_id and rbpos_id involved in the join belong to the same row of that pairs table. Here is my current query. Thanks!

        SELECT retailer.date, retailer.time, retailer.location, retailer.user_id, imovo.mobile_number
        FROM retailer
        LEFT JOIN imovo
               ON addtime(retailer.time, '0:0:50') > imovo.time
              AND retailer.time < imovo.time
              AND retailer.date = imovo.date
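
    A hedged sketch of one way to enforce that condition (the pairs table name and which base table carries which id are assumptions, since the schema is not shown): route the join through the pairs table so a match requires both ids to come from the same row.

        -- Sketch: "pairs" is an assumed name for the id-mapping table.
        SELECT r.date, r.time, r.location, r.user_id, i.mobile_number
        FROM   retailer r
        JOIN   pairs p ON p.epos_id = r.epos_id
        LEFT JOIN imovo i
               ON i.rbpos_id = p.rbpos_id
              AND addtime(r.time, '0:0:50') > i.time
              AND r.time < i.time
              AND r.date = i.date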

    Read the article

  • Generating Running Sum of Ratings in SQL

    - by Koobz
    I have a rating table. It boils down to:

        rating_value | created
        +2           | april 3rd
        -5           | april 20th

    Every time someone gets rated, I track that rating event in the database. I want to generate a rating history/time graph where the rating at each point is the sum of all ratings up to that point in time. For example, a person's rating on April 5th might be:

        select sum(rating_value) from ratings where created <= april 5th

    The only problem with this approach is that I have to run it day by day across the interval I'm interested in. Is there some trick to generating a running total from this sort of data? Otherwise, I'm thinking the best approach is to create a denormalized "rating history" table alongside the individual ratings.
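
    One option (a sketch, assuming the ratings table and columns above and MySQL without window functions) is a correlated subquery that sums everything created up to each row's date:

        -- Sketch: produces one running-total row per rating event.
        SELECT r.created,
               (SELECT SUM(r2.rating_value)
                FROM   ratings r2
                WHERE  r2.created <= r.created) AS running_total
        FROM   ratings r
        ORDER  BY r.created;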

    Read the article

  • Assigning a view controller to be the delegate of a subview which is not directly descendent?

    - by ambertch
    I am writing an iPhone app where, in numerous cases, a subview needs to talk to its superview. For example: view A has a table view that contains photos; A has a subview B which allows users to add photos, and when they do I want to automatically append them to A's table view. So far I have been creating a @protocol in B and registering A as the delegate. The problem in my case is that B has a subview C that allows users to add content, and I want actions in C to invoke changes in its grandparent, A. Currently I am working around this by passing a pointer to my base view controller down the chain (C.delegate = B.delegate), but it doesn't seem very proper to me. Any thoughts? (And/or general advice on code organization when all sorts of subviews need to talk to superviews would be greatly appreciated.) Thanks!

    Read the article

  • The use of GROUP BY in MySQL

    - by Gustav Bertram
    I'm fishing for a comprehensive and canonical answer to the typical "mysql group by?" question. Here is some sample data:

        TABLE A
        +------+------+----------+-----+
        | id   | foo  | bar      | baz |
        +------+------+----------+-----+
        | 1    | 1    | hello    | 42  |
        | 2    | 0    | apple    | 96  |
        | 3    | 20   | boot     | 11  |
        | 4    | 31   | unicorn  | 99  |
        | 5    | 19   | pumpkin  | 11  |
        | 6    | 88   | orange   | 13  |
        +------+------+----------+-----+

        TABLE B
        +------+------+
        | id   | moo  |
        +------+------+
        | 1    | 1    |
        | 2    | 99   |
        | 3    | 11   |
        +------+------+

    Demonstrate and explain the correct use of the GROUP BY clause in MySQL. Touch upon the following points: the use of MIN, MAX, SUM, AVG; the use of HAVING; grouping by date, and ranges of dates; grouping with an ORDER BY; grouping with a JOIN; grouping on multiple columns. Bonus points for references to other great answers, the MySQL online manual, and online tutorials on GROUP BY.
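
    A sketch touching several of those points against the sample tables (the aggregate functions, HAVING, a JOIN with table B, and ORDER BY; the HAVING threshold is arbitrary):

        SELECT   a.foo,
                 MIN(a.baz) AS min_baz,
                 MAX(a.baz) AS max_baz,
                 SUM(a.baz) AS sum_baz,
                 AVG(a.baz) AS avg_baz
        FROM     A AS a
        JOIN     B AS b ON b.id = a.id
        GROUP BY a.foo
        HAVING   SUM(a.baz) > 10
        ORDER BY sum_baz DESC;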

    Read the article

  • SQL Server: Cross-tabulation, please help

    - by user335160
    I want to achieve the results shown in the attached image. The table structure and data are:

    Table relationship: Facility Limit -> one to many -> Facility Sub Limit

        Facility Limit
        Id | OverallIBLimitId | Product Type
        1  | 1                | RPA
        2  | 1                | CG
        3  | 2                | RPA
        4  | 3                | CG

        Facility Sub Limit
        Id | FacilityLimitId | Sub-Limit Type    | Amount        | Tenor    | Status   | Status Date
        1  | 1               | RPA at max        | 2,000,0000.00 | 2 months | Approved | January 5, 2011
        2  | 1               | Oil               | 3,000,0000.00 | 3 yrs    | Approved | January 5, 2011
        3  | 2               | CG at minor       | 4,000,0000.00 | 1 yr     | Approved | January 5, 2011
        4  | 2               | CG at max         | 5,000,0000.00 | 6 months | Approved | January 5, 2011
        5  | 2               | Flood Component 1 | 5,000,0000.00 | 6 months | Approved | January 5, 2011
        6  | 2               | Flood Component 2 | 6,000,0000.00 | 3 yrs    | Approved | January 5, 2011
        7  | 3               | RPA at minor      | 1,000,0000.00 | 6 months | Approved | January 5, 2011
        8  | 4               | One-Off           | 1,000,0000.00 | 6 months | Approved | January 5, 2011
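
    Since the target layout exists only in the attached image, the following is only a hedged sketch of the usual SQL Server 2005 cross-tab pattern, conditional aggregation (one CASE column per Sub-Limit Type; the object and column names without spaces are assumptions about the real schema):

        SELECT   fl.Id,
                 fl.ProductType,
                 MAX(CASE WHEN fsl.SubLimitType = 'RPA at max' THEN fsl.Amount END) AS RpaAtMax,
                 MAX(CASE WHEN fsl.SubLimitType = 'Oil'        THEN fsl.Amount END) AS Oil
                 -- add one CASE line per remaining Sub-Limit Type
        FROM     FacilityLimit fl
        JOIN     FacilitySubLimit fsl ON fsl.FacilityLimitId = fl.Id
        GROUP BY fl.Id, fl.ProductType;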

    Read the article

  • How do I serialize the product using PHP?

    - by Ibrahim Azhar Armar
    Hi, I am building a real estate application that will store properties and make them searchable. Each property belongs to a category (residential, commercial, industrial or agricultural). Based upon the category I want to serialize each property listing. For example, a property with id 1 that belongs to residential will have the serial code rs_SOMERANDOMUNIQUENUMBER, for commercial it would be cm_SOMERANDOMUNIQUENUMBER, and so on. For this my database table looks like this: CREATE TABLE IF NOT EXISTS `propSerials` ( `id` bigint(20) NOT NULL auto_increment, `serial` varchar(50) NOT NULL, `property_id` int(10) UNIQUE NOT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; What would be the best format for storing the serial with the prefix according to category? Thank you.

    Read the article

  • Client-side or server-side processing?

    - by Nick
    So, I'm new to dynamic web design (my sites have been mostly static with some PHP), and I'm trying to learn the latest technologies in web development (which seems to be AJAX). If you're transferring a lot of data, is it better to construct the page on the server and "push" it to the user, or is it better to "pull" the data needed and build the HTML around it on the client side using JavaScript? More specifically, I'm using CodeIgniter as my PHP framework and jQuery for JavaScript, and if I wanted to display a table of data to the user (dynamically), would it be better to format the HTML using CodeIgniter (create the tables, add CSS classes to elements, etc.), or would it be better to just serve the raw data as JSON and then build it into a table with jQuery? My intuition says to do it client-side, as it would save bandwidth and the page would probably load quicker with the JavaScript optimizations these browsers now have; however, the site would then break for anyone not using JavaScript... Thanks for the help

    Read the article

  • Android SQLiteConstraintException: error code 19: constraint failed

    - by Tom D
    I've seen other questions about this exception, but all of them seem to be resolved with the solution that a row with the primary key specified already exists. This doesn't seem to be the case for me. I have tried replacing all single quotes in my strings with double quotes, but the same problem occurs. I'm trying to insert a row into the Settings table of the SQLite database I've created by doing the following: db.execSQL("DROP TABLE IF EXISTS "+Settings.SETTINGS_TABLE_NAME + ";"); db.execSQL(CREATE_MEDIA_TABLE); db.execSQL(CREATE_SETTINGS_TABLE); Cursor c = getAllSettings(); //If there isn't already a settings row, create a row full of defaults if(c.getCount()==0){ ContentValues cv = new ContentValues(); cv.put(Settings.SETTING_UNIQUE_ID, "'"+Settings.uniqueID+"'"); cv.put(Settings.SETTING_DEVICE_ID, Settings.SETTING_DEVICE_ID_DEFAULT); cv.put(Settings.SETTING_CONNECTION_PREFERENCE, Settings.SETTING_CONNECTION_PREFERENCE_DEFAULT); cv.put(Settings.SETTING_AD_HOC_ENABLED, Settings.SETTING_AD_HOC_ENABLED_DEFAULT); cv.put(Settings.SETTING_SERVER_ADDRESS, Settings.SETTING_SERVER_ADDRESS_DEFAULT); cv.put(Settings.SETTING_RECORDING_MODE, Settings.SETTING_RECORDING_MODE_DEFAULT); cv.put(Settings.SETTING_PREVIEW_ENABLED, Settings.SETTING_PREVIEW_ENABLED_DEFAULT); cv.put(Settings.SETTING_PICTURE_RESOLUTION_X, Settings.SETTING_PICTURE_RESOLUTION_X_DEFAULT); cv.put(Settings.SETTING_PICTURE_RESOLUTION_Y, Settings.SETTING_PICTURE_RESOLUTION_Y_DEFAULT); cv.put(Settings.SETTING_VIDEO_RESOLUTION_X, Settings.SETTING_VIDEO_RESOLUTION_X_DEFAULT); cv.put(Settings.SETTING_VIDEO_RESOLUTION_Y, Settings.SETTING_VIDEO_RESOLUTION_Y_DEFAULT); cv.put(Settings.SETTING_VIDEO_FPS, Settings.SETTING_VIDEO_FPS_DEFAULT); cv.put(Settings.SETTING_AUDIO_BITRATE_KBPS, Settings.SETTING_AUDIO_BITRATE_KBPS_DEFAULT); cv.put(Settings.SETTING_STORE_TO_SD, Settings.SETTING_STORE_TO_SD_DEFAULT); cv.put(Settings.SETTING_STORAGE_LIMIT_MB, Settings.SETTING_STORAGE_LIMIT_MB_DEFAULT); this.db.insert(Settings.SETTINGS_TABLE_NAME, null, cv); } The CREATE_SETTINGS_TABLE string is defined as the following: private static String CREATE_SETTINGS_TABLE = "CREATE TABLE IF NOT EXISTS " + Settings.SETTINGS_TABLE_NAME + "(" + Settings.SETTING_UNIQUE_ID + " TEXT NOT NULL PRIMARY KEY, " + Settings.SETTING_DEVICE_ID + " TEXT NOT NULL , " + Settings.SETTING_CONNECTION_PREFERENCE + " TEXT NOT NULL CHECK("+Settings.SETTING_CONNECTION_PREFERENCE+" IN("+Settings.SETTING_CONNECTION_PREFERENCE_ALLOWED+")), " + Settings.SETTING_AD_HOC_ENABLED + " TEXT NOT NULL CHECK("+Settings.SETTING_AD_HOC_ENABLED+" IN("+Settings.SETTING_AD_HOC_ENABLED_ALLOWED+")), " + Settings.SETTING_SERVER_ADDRESS + " TEXT NOT NULL, " + Settings.SETTING_RECORDING_MODE + " TEXT NOT NULL CHECK("+Settings.SETTING_RECORDING_MODE+" IN("+Settings.SETTING_RECORDING_MODE_ALLOWED+")), " + Settings.SETTING_PREVIEW_ENABLED + " TEXT NOT NULL CHECK("+Settings.SETTING_PREVIEW_ENABLED+" IN("+Settings.SETTING_PREVIEW_ENABLED_ALLOWED+")), " + Settings.SETTING_PICTURE_RESOLUTION_X + " TEXT NOT NULL, " + Settings.SETTING_PICTURE_RESOLUTION_Y + " TEXT NOT NULL, " + Settings.SETTING_VIDEO_RESOLUTION_X + " TEXT NOT NULL, " + Settings.SETTING_VIDEO_RESOLUTION_Y + " TEXT NOT NULL, " + Settings.SETTING_VIDEO_FPS + " TEXT NOT NULL, " + Settings.SETTING_AUDIO_BITRATE_KBPS + " TEXT NOT NULL, " + Settings.SETTING_STORE_TO_SD + " TEXT NOT NULL CHECK("+Settings.SETTING_STORE_TO_SD+" IN("+Settings.SETTING_STORE_TO_SD_ALLOWED+")), " + Settings.SETTING_STORAGE_LIMIT_MB + " TEXT NOT NULL )"; However, when I execute my 
insert, I always get: 03-19 19:37:36.974: ERROR/Database(386): Error inserting server_address='0.0.0.0' storage_limit='-1' connection='none' preview_enabled='0' sd_enabled='1' video_fps='15' audio_bitrate='96' device_id='-1' recording_mode='none' picture_resolution_x='-1' picture_resolution_y='-1' unique_id='000000000000000' adhoc_enable='0' video_resolution_x='320' video_resolution_y='240' 03-19 19:45:34.284: ERROR/Database(446): android.database.sqlite.SQLiteConstraintException: error code 19: constraint failed It seems as if all the columns in my insert are not null. The row's primary key HAS to be unique, because it's the only row in the table. Therefore, the only thing I can think of is my CHECK conditions aren't true. Here are the predefined strings I'm using: public static final String SETTING_UNIQUE_ID = "unique_id"; public static final String SETTING_DEVICE_ID = "device_id"; public static final String SETTING_DEVICE_ID_DEFAULT = "'-1'"; public static final String SETTING_CONNECTION_PREFERENCE = "connection"; public static final String SETTING_CONNECTION_PREFERENCE_3G = "'3g'"; public static final String SETTING_CONNECTION_PREFERENCE_WIFI = "'wifi'"; public static final String SETTING_CONNECTION_PREFERENCE_NONE = "'none'"; public static final String SETTING_CONNECTION_PREFERENCE_ALLOWED = SETTING_CONNECTION_PREFERENCE_3G+","+SETTING_CONNECTION_PREFERENCE_WIFI+","+SETTING_CONNECTION_PREFERENCE_NONE; public static final String SETTING_CONNECTION_PREFERENCE_DEFAULT = SETTING_CONNECTION_PREFERENCE_NONE; public static final String SETTING_AD_HOC_ENABLED = "adhoc_enable"; public static final String SETTING_AD_HOC_ENABLED_ALLOWED = TRUE+","+FALSE; public static final String SETTING_AD_HOC_ENABLED_DEFAULT = FALSE; public static final String SETTING_SERVER_ADDRESS = "server_address"; public static final String SETTING_SERVER_ADDRESS_DEFAULT = "'0.0.0.0'"; public static final String SETTING_RECORDING_MODE = "recording_mode"; public static final String SETTING_RECORDING_MODE_VIDEO = "'video'"; public static final String SETTING_RECORDING_MODE_AUDIO = "'audio'"; public static final String SETTING_RECORDING_MODE_PICTURE = "'picture'"; public static final String SETTING_RECORDING_MODE_NONE = "'none'"; public static final String SETTING_RECORDING_MODE_ALLOWED = SETTING_RECORDING_MODE_VIDEO+","+SETTING_RECORDING_MODE_AUDIO+","+SETTING_RECORDING_MODE_PICTURE+","+SETTING_RECORDING_MODE_NONE; public static final String SETTING_RECORDING_MODE_DEFAULT = SETTING_RECORDING_MODE_NONE; public static final String SETTING_PREVIEW_ENABLED = "preview_enabled"; public static final String SETTING_PREVIEW_ENABLED_ALLOWED = TRUE+","+FALSE; public static final String SETTING_PREVIEW_ENABLED_DEFAULT = FALSE; public static final String SETTING_PICTURE_RESOLUTION_X = "picture_resolution_x"; public static final String SETTING_PICTURE_RESOLUTION_X_DEFAULT = "'-1'"; public static final String SETTING_PICTURE_RESOLUTION_Y = "picture_resolution_y"; public static final String SETTING_PICTURE_RESOLUTION_Y_DEFAULT = "'-1'"; public static final String SETTING_VIDEO_RESOLUTION_X = "video_resolution_x"; public static final String SETTING_VIDEO_RESOLUTION_X_DEFAULT = "'320'"; public static final String SETTING_VIDEO_RESOLUTION_Y = "video_resolution_y"; public static final String SETTING_VIDEO_RESOLUTION_Y_DEFAULT = "'240'"; public static final String SETTING_VIDEO_FPS = "video_fps"; public static final String SETTING_VIDEO_FPS_DEFAULT = "'15'"; public static final String SETTING_AUDIO_BITRATE_KBPS = "audio_bitrate"; public static 
final String SETTING_AUDIO_BITRATE_KBPS_DEFAULT = "'96'"; public static final String SETTING_STORE_TO_SD = "sd_enabled"; public static final String SETTING_STORE_TO_SD_ALLOWED = TRUE+","+FALSE; public static final String SETTING_STORE_TO_SD_DEFAULT = TRUE; public static final String SETTING_STORAGE_LIMIT_MB = "storage_limit"; public static final String SETTING_STORAGE_LIMIT_MB_DEFAULT = "'-1'"; public static final String SETTING_CLIP_LENGTH_SECONDS = "clip_length"; public static final String SETTING_CLIP_LENGTH_SECONDS_DEFAULT = "'300'"; Does anyone see what could be going on? I'm stumped. Thanks in advance.

    Read the article

  • Is there an alternative way to write this query?

    - by Kugel
    I have tables A, B, C, where A represents items which can have zero or more sub-items stored in C. Table B only has two foreign keys, connecting A and C. I have this SQL query: select * from A where not exists (select * from B natural join C where B.id = A.id and C.value > 10); which says: "Give me every item from table A that has no sub-item with a value greater than 10." Is there a way to optimize this? And is there a way to write it without the exists operator?
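
    A hedged sketch of the same query written as an anti-join instead of NOT EXISTS (same tables and condition as above): rows from A survive only when the LEFT JOIN finds no offending sub-item.

        SELECT a.*
        FROM   A a
        LEFT JOIN (SELECT B.id
                   FROM   B NATURAL JOIN C
                   WHERE  C.value > 10) bad ON bad.id = a.id
        WHERE  bad.id IS NULL;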

    Read the article

  • SSIS - Range lookups

    - by Repieter
    When developing an ETL solution in SSIS we sometimes need to do range lookups. Several solutions for this can be found on the internet, but here is another solution we have built which I would like to share, since it is pretty easy to implement and the performance is fast. You can download the sample package to see how it works. Make sure you have the AdventureWorks2008R2 and AdventureWorksDW2008R2 databases installed. To give a little bit more information about the example, this is basically what it does: we load a fact table and do an SCD type 2 lookup operation on the Product dimension. This is done with a script component. First we query the data warehouse to create the lookup dataset. The query that is used for that is:

        SELECT [ProductKey]
             , [ProductAlternateKey]
             , [StartDate]
             , ISNULL([EndDate], '9999-01-01') AS EndDate
        FROM [DimProduct]

    The output of this query is stored in a DataTable:

        string lookupQuery = @"
            SELECT
                [ProductKey]
                ,[ProductAlternateKey]
                ,[StartDate]
                ,ISNULL([EndDate], '9999-01-01') AS EndDate
            FROM [DimProduct]";

        OleDbCommand oleDbCommand = new OleDbCommand(lookupQuery, _oleDbConnection);
        OleDbDataAdapter adapter = new OleDbDataAdapter(oleDbCommand);

        _dataTable = new DataTable();
        adapter.Fill(_dataTable);

    Now that the dimension data is stored in the DataTable, we use the following method to do the actual lookup:

        public int RangeLookup(string businessKey, DateTime lookupDate)
        {
            // set default return value (Unknown)
            int result = -1;

            DataRow[] filteredRows;
            filteredRows = _dataTable.Select(string.Format("ProductAlternateKey = '{0}'", businessKey));

            for (int i = 0; i < filteredRows.Length; i++)
            {
                // check if the lookup date falls between the start date and end date of any of the records
                if (lookupDate >= (DateTime)filteredRows[i][2] && lookupDate < (DateTime)filteredRows[i][3])
                {
                    result = (filteredRows[i][0] == null) ? -1 : (int)filteredRows[i][0];
                    break;
                }
            }

            filteredRows = null;

            return result;
        }

    This method is executed for every row that passes the script component. This is implemented in the ProcessInputRow method:

        public override void Input0_ProcessInputRow(Input0Buffer Row)
        {
            // Perform the lookup operation on the current row and put the value in the surrogate key attribute
            Row.ProductKey = RangeLookup(Row.ProductNumber, Row.OrderDate);
        }

    Now what actually happens? 1. Every record passes its business key and order date to the RangeLookup method. 2. The DataTable is then filtered on the business key of the current record, and the output is stored in a DataRow[] object. 3. We loop over the DataRow[] object to see where the order date meets the expression (lookupDate >= (DateTime)filteredRows[i][2] && lookupDate < (DateTime)filteredRows[i][3]). 4. When the expression returns true (i.e. the date lies between the StartDate and the EndDate), the surrogate key of the dimension record is returned. We have done some testing with this solution and it works great for us. Hope others can use this example to do their range lookups.

    Read the article

  • how to delete fk children in nhibernate

    - by frosty
    I would like to delete the ICollection PriceBreaks from Product. I'm using the following method, but the price breaks don't seem to be deleted. What am I missing? When I step through, I notice that "product.PriceBreaks.Clear();" doesn't actually clear the items. Do I need to flush or something?

        public void RemovePriceBreak(int productId)
        {
            using (ISession session = EStore.Domain.Helpers.NHibernateHelper.OpenSession())
            using (ITransaction transaction = session.BeginTransaction())
            {
                var product = session.Get<Product>(productId);
                product.PriceBreaks.Clear();
                session.SaveOrUpdate(product);
                transaction.Commit();
            }
        }

    Here are my hbm files:

        <class name="Product" table="Products">
          <id name="Id" type="Int32" column="Id" unsaved-value="0">
            <generator class="identity"/>
          </id>
          <property name="CompanyId" column="CompanyId" type="Int32" not-null="true" />
          <property name="Name" column="Name"/>
          <set name="PriceBreaks" table="PriceBreaks" generic="true" cascade="all-delete-orphan" inverse="true" >
            <key column="ProductId" />
            <one-to-many class="EStore.Domain.Model.PriceBreak, EStore.Domain" />
          </set>
        </class>

        <class name="PriceBreak" table="PriceBreaks">
          <id name="Id" type="Int32" column="Id" unsaved-value="0">
            <generator class="identity"/>
          </id>
          <many-to-one name="Product" column="ProductId" not-null="true" cascade="all" class="EStore.Domain.Model.Product, EStore.Domain" />
        </class>

    My entities:

        public class Product
        {
            public virtual int Id { get; set; }
            public virtual ICollection<PriceBreak> PriceBreaks { get; set; }

            public virtual void AddPriceBreak(PriceBreak priceBreak)
            {
                priceBreak.Product = this;
                PriceBreaks.Add(priceBreak);
            }
        }

        public class PriceBreak
        {
            public virtual int Id { get; set; }
            public virtual Product Product { get; set; }
        }

    Read the article

  • Automatic database generation / migration with perl

    - by pistacchio
    Hi, in RoR, Django, or web2py you can "describe" a database (as a set of classes that map to tables) and the framework (given a connection string to the desired database) generates the tables, fields, and relations; in the case of RoR and web2py it also keeps the schema up to date (e.g., removing a class drops the table, adding a property to the class triggers an "alter table add", etc.). Is there any Perl module that does the same? E.g., one that takes a YAML / XML / JSON description of a database as input and generates / modifies the database accordingly? Thanks in advance.

    Read the article

  • How to save an array of integer numbers in a column in SQL Server 2005

    - by hamed
    I have a table in SQL Server 2005 with the following properties: Users (UserID, Username, Password), where UserID is the primary key. I want to save an array of integer numbers in the Password attribute of the Users table, for example:

        0    | 1    | 2    | 3
        1543 | 6543 | 7658 | 8765

    I plan to save this into the Password column. The background is that I use pictures instead of text for the password: each picture has a 4-digit code, and a password consists of 4 pictures, which produces 16 digits. I want to save these 16 digits (an array of ints) into the Password column. Please help me, thanks.
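
    A hedged sketch of the usual alternative (the table and column names here are illustrative, not from the question): store one row per picture code in a child table keyed by user and position, instead of packing all 16 digits into a single column.

        -- Sketch: one row per picture in the user's password sequence.
        CREATE TABLE UserPasswordCodes (
            UserID   int      NOT NULL REFERENCES Users(UserID),
            Position tinyint  NOT NULL,   -- 0..3, order of the picture
            Code     smallint NOT NULL,   -- the 4-digit picture code
            PRIMARY KEY (UserID, Position)
        );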

    Read the article

  • validates_uniqueness_of...limiting scope - How do I restrict someone from creating a certain number

    - by bgadoci
    I have the following code: class Like < ActiveRecord::Base belongs_to :site validates_uniqueness_of :ip_address, :scope => [:site_id] end which stops a person from "liking" a site more than once, based on the remote IP of the request. Essentially, when someone "likes" a site, a record is created in the likes table, and I use a hidden field to request and pass their IP address to the :ip_address column. With the above code I am limiting the user to one "like" per IP address. I would like to limit this to a certain number, for instance 10. My initial thought was to do something like this: validates_uniqueness_of :ip_address, :scope => [:site_id, :limit => 10] But that doesn't seem to work. Is there a simple syntax that will allow me to do such a thing?

    Read the article

  • Implement subitem in SQLite

    - by Mohit Deshpande
    How could I implement a sub-item, where an item's parent is another item of the same type? For example, I have a database that holds tasks/todos. Each todo has a title (TEXT), completed (INTEGER [1 if true, 0 if false]), and due date (TEXT) column. I would like to add functionality for subtasks, where a task could have a parent task. How would I implement this in SQLite for Android? Not in raw SQL, so no foreign keys (triggers are allowed, though). Could I have a separate table (subtask) and a foreign-key trigger to link subtask(INTEGER parent) to task(INTEGER PRIMARY KEY AUTOINCREMENT _id)? Or should I add a column to my original task table (INTEGER parent)? How could I implement this?
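
    A hedged sketch of the second option (a parent column on the task table itself; the trigger stands in for the foreign key the question rules out, and every name except _id is an assumption):

        -- Sketch: parent_id is NULL for top-level tasks.
        CREATE TABLE task (
            _id       INTEGER PRIMARY KEY AUTOINCREMENT,
            title     TEXT,
            completed INTEGER,
            due_date  TEXT,
            parent_id INTEGER
        );

        -- Trigger-enforced "foreign key": reject inserts whose parent_id does not exist.
        CREATE TRIGGER task_parent_exists
        BEFORE INSERT ON task
        FOR EACH ROW WHEN NEW.parent_id IS NOT NULL
        BEGIN
            SELECT RAISE(ABORT, 'parent task does not exist')
            WHERE NOT EXISTS (SELECT 1 FROM task WHERE _id = NEW.parent_id);
        END;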

    Read the article

  • Cakephp with OpenID and User Authentication

    - by nolandark
    I have a table "users" and I want to enable my visitors to login with their openID Account. For this I use the OpenId Component for Cakephp and it works fine (When I login with the Google URL I receive the "successfully authenticated!" notification). But now I'm kind of stuck because I don't know how to go on from there. Do I have to create a User-Entry for every user which has a new entry in the "oid_associations" table (I save all OpenId interaction in the mysql database)? Do I have to authenticate the User after the login (or is the OpenID-component doing that automatically?). Am I completely misunderstanding the concept?

    Read the article

  • How to get #entries of last hour in MySQL, correcting for timezone?

    - by Ferdy
    I am storing activity entries in a MySQL table. The table has a date_created field of type timestamp, and through PHP I insert the activity entries based on GMT: $timestamp = gmdate("Y-m-d H:i:s", time()); This works fine. On the client I am on GMT+2. If it is 16:00 here and I insert an entry, it is stored in MySQL as 14:00, which is as expected, I guess. Now I would like to get the number of activity entries from MySQL within the last hour. I'm using the following query: SELECT COUNT(id) as cnt FROM karmalog WHERE user_id = ' 89' AND event='IMG_UPD' AND date_created > DATE_SUB(CURRENT_TIMESTAMP, INTERVAL 1 HOUR) This returns nothing, because CURRENT_TIMESTAMP uses the MySQL timezone settings, which are set to SYSTEM, which is set to GMT+2. I guess what I am looking for is a GMT_CURRENTTIMESTAMP in MySQL; is there such a thing?
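
    MySQL's closest equivalent is UTC_TIMESTAMP(), which returns the current time in UTC regardless of the session time zone. A sketch of the same query using it (keeping the table and filter values from the question):

        SELECT COUNT(id) AS cnt
        FROM   karmalog
        WHERE  user_id = ' 89'
          AND  event = 'IMG_UPD'
          AND  date_created > DATE_SUB(UTC_TIMESTAMP(), INTERVAL 1 HOUR);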

    Read the article

  • How to control the memory size of a continuously running Windows service?

    - by Snowill
    Hi, I have created a Windows service which continuously polls a database. For this purpose I have a timer in place. Every time I query a database table I open a connection and close it immediately after my work is done. Right now I am doing this every 20 seconds for testing purposes, but later this interval might increase to 5-10 minutes. What happens is that every time the database table is polled, the memory used by the service increases by 10-12 KB; I can see this in Task Manager. Is there any way to control this?

    Read the article

  • R error message about variable lengths

    - by Abraham
    I ran the following code in order to recode the variable. Unfortunately, when I move on to run a logit model (using the Zelig package), I get an error message that the variable lengths differ for this variable. ## Independent Variable - Partisanship (ANES 2004) data04$V043114 part <- data04$V043114 attributes(part) summary(part) partb < part partb[part %in% levels(part)[4]] <- NA partb[part %in% levels(part)[5]] <- NA partb[part %in% levels(part)[6]] <- NA partb[part %in% levels(part)[7]] <- NA partb <- factor(partb) attributes(partb) summary(partb) table(partb) table(part, partb) cbind(part, partb) partisan041 <- partb partisan042 <- as.numeric(partb) summary(partisan041) summary(partisan042) ## Regression Model - ANES 2004 ## anes04one <- zelig(trade041a ~ age042 + education042 + personal042 + economy042 + partisan042 + employment042 + union042 + home042 + market042 + race042 + income042 + gender042, model="logit", data=data04) summary(anes04one) #Error in model.frame.default(formula = trade041a ~ age042 + education042 + : # variable lengths differ (found for 'partisan042')

    Read the article

  • Microsoft SQL Server 2008 - 99% fragmentation on non-clustered, non-unique index

    - by user550441
    I have a table with several indexes (defined below). One of the indexes (IX_external_guid_3) shows 99% fragmentation regardless of rebuilding/reorganizing the index. Does anyone have any idea what might cause this, or the best way to fix it? We are using Entity Framework 4.0 to query this; the EF queries on the other indexed fields are about 10x faster on average than those on the external_guid_3 field, while an ADO.NET query is roughly the same speed on both (though 2x slower than the EF query against the indexed fields).

        Table
        id (PK, int, not null)
        guid (uniqueidentifier, null, rowguid)
        external_guid_1 (uniqueidentifier, not null)
        external_guid_2 (uniqueidentifier, null)
        state (varchar(32), null)
        value (varchar(max), null)
        infoset (XML(.), null) -- usually 2-4K
        created_time (datetime, null)
        updated_time (datetime, null)
        external_guid_3 (uniqueidentifier, not null)
        FK_id (FK, int, null)
        locking_guid (uniqueidentifier, null)
        locked_time (datetime, null)
        external_guid_4 (uniqueidentifier, null)
        corrected_time (datetime, null)
        is_add (bit, not null)
        score (int, null)
        row_version (timestamp, null)

        Indexes
        PK_table (Clustered)
        IX_created_time (Non-Unique, Non-Clustered)
        IX_external_guid_1 (Non-Unique, Non-Clustered)
        IX_guid (Non-Unique, Non-Clustered)
        IX_external_guid_3 (Non-Unique, Non-Clustered)
        IX_state (Non-Unique, Non-Clustered)
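
    Two things worth checking (hedged notes, not from the original post): an index that spans only a handful of pages will report high fragmentation no matter how often it is rebuilt, and a randomly generated GUID key refragments quickly because new rows land on random pages. A sketch for confirming both, with 'dbo.YourTable' standing in for the real table name:

        -- Sketch: page_count shows whether the index is big enough for the
        -- fragmentation figure to mean anything.
        SELECT i.name,
               ips.avg_fragmentation_in_percent,
               ips.page_count
        FROM   sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.YourTable'), NULL, NULL, 'LIMITED') AS ips
        JOIN   sys.indexes AS i
          ON   i.object_id = ips.object_id AND i.index_id = ips.index_id;

        ALTER INDEX IX_external_guid_3 ON dbo.YourTable REBUILD;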

    Read the article

  • When should I consider representing the primary-key ...?

    - by JMSA
    When should I consider representing the primary key as a class? Should we only represent primary keys as classes when a table uses a composite key? For example: public class PrimaryKey { ... ... ...} Then private PrimaryKey _parentID; public PrimaryKey ParentID { get { return _parentID; } set { _parentID = value; } } And public void Delete(PrimaryKey id) {...} And when should I consider storing data as comma-separated values in a column in a DB table rather than storing it in different columns?

    Read the article
