Search Results

Search found 5233 results on 210 pages for 'records'.


  • accepts_nested_attributes_for ignore blank values

    - by Mike
    I have:

        class Profile < ActiveRecord::Base
          has_many :favorite_books, :dependent => :destroy
          has_many :favorite_quotes, :dependent => :destroy
          accepts_nested_attributes_for :favorite_books, :allow_destroy => true
          accepts_nested_attributes_for :favorite_quotes, :allow_destroy => true
        end

    I have a dynamic form where you press '+' to add new textareas for creating new favorites. What I want to do is ignore the blank ones; I find this harder to sort through in the update controller than a non-nested attribute. What I have temporarily is a hack in the after_save callback that deletes the empty records. What's the most Rails way to ignore these blank objects? I don't want validation and errors, just a silent deletion/ignore.
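
    One approach that avoids callbacks entirely: the :reject_if option on accepts_nested_attributes_for silently drops any nested attribute hash for which the proc returns true, so blank rows never become records. A sketch, assuming Rails 2.3+ (the all-blank test is illustrative; newer Rails versions also accept the :reject_if => :all_blank shortcut):

        class Profile < ActiveRecord::Base
          has_many :favorite_books, :dependent => :destroy
          has_many :favorite_quotes, :dependent => :destroy

          # Nested hashes whose values are all blank are discarded before
          # validation, so no errors fire and nothing is saved.
          accepts_nested_attributes_for :favorite_books, :allow_destroy => true,
            :reject_if => proc { |attrs| attrs.all? { |key, value| value.blank? } }
          accepts_nested_attributes_for :favorite_quotes, :allow_destroy => true,
            :reject_if => proc { |attrs| attrs.all? { |key, value| value.blank? } }
        end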

    Read the article

  • How to obtain a scroll-insensitive ResultSet from a CallableStatement in Java JDBC?

    - by rafa colunga
    Hi, I have a stored procedure in an Oracle 10g database. In my Java code I call it with:

        CallableStatement cs = bdr.prepareCall(
            "BEGIN ADMBAS01.pck_basilea_reportes.cargar_reporte(?,?,?,?,?); END;",
            ResultSet.TYPE_SCROLL_INSENSITIVE,
            ResultSet.CONCUR_READ_ONLY);
        cs.setInt(1, this.reportNumber);
        cs.registerOutParameter(2, OracleTypes.CURSOR);
        cs.registerOutParameter(3, OracleTypes.INTEGER);
        cs.registerOutParameter(4, OracleTypes.VARCHAR);
        cs.setDate(5, new java.sql.Date(this.fecha1.getTime()));
        cs.execute();
        ResultSet rs = (ResultSet) cs.getObject(2);

    I do obtain a ResultSet with the correct records in it, but when I try a scroll-insensitive operation (like absolute(1)), I keep getting an SQLException stating that it doesn't work on a FORWARD_ONLY ResultSet. So how can I obtain this ResultSet with scroll-insensitive capabilities? Thanks in advance.
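
    The type arguments to prepareCall only apply to result sets the statement itself produces; a REF CURSOR returned through an OUT parameter arrives with its own (forward-only) characteristics. One workaround is to copy the cursor into an in-memory, scrollable rowset. A sketch, assuming Java 7+ (older JDKs shipped com.sun.rowset.CachedRowSetImpl instead of RowSetProvider):

        import java.sql.ResultSet;
        import java.sql.SQLException;
        import javax.sql.rowset.CachedRowSet;
        import javax.sql.rowset.RowSetProvider;

        static ResultSet toScrollable(ResultSet forwardOnly) throws SQLException {
            CachedRowSet crs = RowSetProvider.newFactory().createCachedRowSet();
            crs.populate(forwardOnly);  // reads every row into memory
            return crs;                 // scrollable; absolute(1) etc. now work
        }

    Because populate() buffers the whole cursor, this is only sensible for result sets of modest size.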

    Read the article

  • mySQL experts - need help with 'intersect'

    - by MTCreations
    I know that mySQL 5.x does not support INTERSECT, but that seems to be what I need.

        Table A: products (p_id)
        Table B: prod_cats (cat_id) - category info (name, description, etc.)
        Table C: prod_2cats (p_id, cat_id) - many to many

    prod_2cats holds the one or more categories that have been assigned to products (A). I am building an interactive query/filter lookup and need to select, across multiple categories, the products that meet ALL the criteria. For example:

        - 80 products assigned to category X
        - 50 products assigned to category Y
        - but only 10 products (the intersect) are assigned to BOTH cat X AND cat Y

    This SQL works for one category:

        SELECT * FROM products
        WHERE p_show='Y'
          AND p_id IN (SELECT p_id FROM prods_2cats AS PC
                       WHERE PC.cat_id = " . $cat_id . ")

    ($cat_id is a sanitized var passed from the query form.) I can't seem to find the means to say 'give me the intersect of cat A and cat B' and get back the subset (10 records, from my example). Help!
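
    The usual way to emulate INTERSECT in MySQL is to group on the product and demand a match count equal to the number of categories requested. A sketch, assuming two sanitized category ids:

        SELECT p.*
        FROM products AS p
        JOIN prods_2cats AS pc ON pc.p_id = p.p_id
        WHERE p.p_show = 'Y'
          AND pc.cat_id IN (3, 7)                 -- the categories to intersect
        GROUP BY p.p_id
        HAVING COUNT(DISTINCT pc.cat_id) = 2;     -- must match ALL listed categories

    The HAVING count must equal the number of ids in the IN list, so build both from the same user selection.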

    Read the article

  • Old Sony Digital Tape Handycam, imported Video is sped up?

    - by thatryan
    A friend gave me their old handycam and a bunch of tapes, asking me to put them on a DVD. The model is a Sony DCR-TVR103 and it records onto Hi8 digital tapes. It also has a FireWire port. I am trying to use the import function in iMovie on OS X 10.6, but every import comes out super fast, like it is playing in fast forward. Have you ever seen anything like this? How can I import this video? Thank you.

    Read the article

  • Optimizing encrypted column search

    - by Sung Meister
    I have a table called tblClient with an encrypted column called SSN. Due to company policy, we encrypted SSN using a symmetric key (chosen over an asymmetric key for performance reasons) protected by a password. Here is a partial LIKE search on SSN:

        declare @SSN varchar(11)
        set @SSN = '111-22-%'

        open symmetric key SSN_KEY decryption by password = 'secret'

        select Client_ID
        from tblClient (nolock)
        where convert(nvarchar(11), DECRYPTBYKEY(SSN)) like @SSN

        close symmetric key SSN_KEY

    Before encryption, searching through 150,000 records took less than 1 second, but with decryption in the mix the same search takes around 5 seconds. What strategy can I apply to optimize searching through the encrypted column?
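
    The scan is slow because every row must be decrypted before the LIKE can be tested. One common strategy is to store a deterministic hash of the searchable portion in an indexed column at insert time and seek on that instead. A sketch (the column names and the choice to hash the first seven characters are illustrative):

        -- One-time schema change:
        ALTER TABLE tblClient ADD SSN_Prefix_Hash varbinary(20);
        CREATE INDEX IX_tblClient_SSN_Prefix_Hash ON tblClient (SSN_Prefix_Hash);

        -- At insert/update time, while the plaintext is still in hand:
        -- SSN_Prefix_Hash = HASHBYTES('SHA1', LEFT(@SSN_Plain, 7))

        -- The '111-22-%' search becomes an index seek, no decryption needed:
        SELECT Client_ID
        FROM tblClient
        WHERE SSN_Prefix_Hash = HASHBYTES('SHA1', '111-22-');

    The trade-off is that a hash only supports the exact prefix shapes you decide to index, and it leaks equality of prefixes, which may matter under the same company policy.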

    Read the article

  • Shipping Java code with data baked into the .jar

    - by Andrew
    I need to ship some Java code that has an associated set of data. It's a simulator for a device, and I want to be able to include all of the data used for the simulated records in the one .jar file. In this case, each simulated record contains four fields (calling party, called party, start of call, call duration). What's the best way to do that? I've gone down the path of generating the data as Java statements, but IntelliJ doesn't seem particularly happy dealing with a 100,000-line Java source file! Is there a smarter way to do this? In the C#/.NET world I'd create the data as a separate file, embed it in the assembly as a resource, and then use reflection to pull it out at runtime. I'm unsure what the appropriate analogy is in the Java world. FWIW: Java 1.6, shipping for Solaris.
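
    The Java analogue of an embedded resource is a plain data file packaged inside the .jar and read back with Class.getResourceAsStream(); no reflection needed. A sketch, assuming Java 1.6 (so no try-with-resources) and an illustrative class name and resource path:

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;

        public class Simulator {
            // Reads /data/records.csv from the same .jar the class was loaded from.
            static void loadRecords() throws IOException {
                BufferedReader in = new BufferedReader(new InputStreamReader(
                        Simulator.class.getResourceAsStream("/data/records.csv"), "UTF-8"));
                try {
                    String line;
                    while ((line = in.readLine()) != null) {
                        String[] fields = line.split(",");
                        // fields: calling party, called party, start of call, duration
                    }
                } finally {
                    in.close();
                }
            }
        }

    The build just has to copy the CSV into the jar (any directory under the classpath root works), and the 100,000 records stay out of the compiler's way.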

    Read the article

  • Create a Python User() class that both creates new users and modifies existing users

    - by ensnare
    I'm trying to figure out the best way to create a class that can both modify and create new users. This is what I'm thinking:

        class User(object):
            def __init__(self, user_id=-1):
                if user_id == -1:
                    self.new_user = True
                else:
                    self.new_user = False
                    # fetch all records from db about user_id
                    self._populate()

            def commit(self):
                if self.new_user:
                    pass  # do INSERTs
                else:
                    pass  # do UPDATEs

            def delete(self):
                if self.new_user:
                    return False
                # delete user code here

            def _populate(self):
                # Query self.user_id from the database and set all instance
                # variables, e.g. self.name = row['name']
                pass

            def getFullName(self):
                return self.name

        # Create a new user
        >>> u = User()
        >>> u.name = 'Jason Martinez'
        >>> u.password = 'linebreak'
        >>> u.commit()
        >>> print u.getFullName()
        Jason Martinez

        # Update an existing user
        >>> u = User(43)
        >>> u.name = 'New Name Here'
        >>> u.commit()
        >>> print u.getFullName()
        New Name Here

    Is this a logical and clean way to do this? Is there a better way? Thanks.
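
    One common refactoring is to split construction into two classmethod constructors, so the class never needs a sentinel id and the new/existing distinction falls out naturally. A minimal sketch (the db helper methods are hypothetical):

        class User(object):
            def __init__(self, row=None):
                self.is_new = row is None
                if row is not None:
                    self.user_id = row['id']
                    self.name = row['name']

            @classmethod
            def create(cls):
                return cls()                   # brand-new, unsaved user

            @classmethod
            def load(cls, user_id, db):
                row = db.fetch_user(user_id)   # hypothetical query helper
                return cls(row)

            def commit(self, db):
                if self.is_new:
                    db.insert_user(self)       # hypothetical
                    self.is_new = False
                else:
                    db.update_user(self)       # hypothetical

    Callers then read as User.create() versus User.load(43, db), which makes the two code paths explicit at the call site.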

    Read the article

  • Get the ranking of highest score with earliest timestamp

    - by Billy
    I have the following records:

        name   score   Date
        Billy  32470   12/18/2010 7:26:35 PM
        Sally   1100   12/19/2010 12:00:00 AM
        Kitty   1111   12/21/2010 12:00:00 AM
        Sally    330   12/21/2010 8:23:34 PM
        Daisy  32460   12/22/2010 3:10:09 PM
        Sally  32460   12/23/2010 4:51:11 PM
        Kitty  32440   12/24/2010 12:00:27 PM
        Billy  32460   12/24/2010 12:11:36 PM

    I want to get the leaderboard of the highest score with the earliest timestamp using LINQ. In this case, the correct one is:

        rank  name
        1     Billy
        2     Daisy
        3     Sally

    I use the following query:

        var result = (from s in Submissions
                      group s by s.name into g
                      orderby g.Max(q => q.Score) descending,
                              g.Min(q => q.Date) ascending
                      select new ScoreRecord
                      {
                          name = g.Key,
                          Score = g.Max(q => q.Score)
                      }).Take(3).ToList();

    I get the following wrong result:

        rank  name
        1     Billy
        2     Sally
        3     Daisy

    What's the correct LINQ query in this case?
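
    The tie-break goes wrong because g.Min(q => q.Date) is each player's earliest submission overall (Sally's is 12/19), not the earliest time at which the best score was reached. Restricting the Min to rows that hit the group's maximum fixes the ordering. A sketch:

        var result = (from s in Submissions
                      group s by s.name into g
                      let best = g.Max(q => q.Score)
                      orderby best descending,
                              g.Where(q => q.Score == best)
                               .Min(q => q.Date) ascending
                      select new ScoreRecord { name = g.Key, Score = best })
                     .Take(3)
                     .ToList();

    With this ordering Daisy (32460 on 12/22) correctly precedes Sally (32460 on 12/23).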

    Read the article

  • how to effectively modify index

    - by daedlus
    Hey everyone. Problem: I am looking for the right way to convert an index from clustered to non-clustered. Description: I have a table in a Sybase db:

        dbo.UserLog
        Id | UserId | time | ....

    It is hash-partitioned using UserId and currently has two indexes:

        UserId: non-clustered
        time: clustered

    The table has about 20 million records. I now want to make UserId the clustered index and time a non-clustered index. Is it correct to use ALTER INDEX to change from clustered to non-clustered, or do I drop the indexes and recreate them? Does the fact that UserId is used in hash partitioning have any implications for this? To me ALTER seems the way to go, but I have not yet tried it.
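
    Neither Sybase ASE nor SQL Server can switch an existing index between clustered and non-clustered in place, so drop-and-recreate is the usual route; note that building the new clustered index physically rewrites all 20 million rows. A sketch (the index names are illustrative):

        -- Drop both existing indexes first: a table can have only one
        -- clustered index, and it must be gone before a new one is built.
        DROP INDEX UserLog.idx_time
        DROP INDEX UserLog.idx_userid

        CREATE CLUSTERED INDEX idx_userid ON UserLog (UserId)
        CREATE NONCLUSTERED INDEX idx_time ON UserLog (time)

    Since the table is hash-partitioned on UserId, clustering on the same key keeps each partition's rows locally ordered; plan for the rebuild window and free space all the same.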

    Read the article

  • MDX performance vs. T-SQL

    - by SubPortal
    I have a database containing tables with more than 600 million records and a set of stored procedures that perform complex search operations on it. The performance of the stored procedures is slow even with suitable indexes on the tables. The design of the database is a normal relational design. I want to change the design to be multidimensional and use MDX queries instead of traditional T-SQL queries, but the question is: are MDX queries better than traditional T-SQL queries with regard to performance? And if so, to what extent will that improve the performance of the queries? Thanks for any help.

    Read the article

  • LINQ Searching Only Allowing Equivalency

    - by Mad Halfling
    Hi folks, I'm trying to filter a set of records based on a sub-object's criteria. This compiles OK:

        recordList = recordList.Where(r => r.Areas.Where(a => a.Area == "Z").Count() > 0);

    but this doesn't:

        recordList = recordList.Where(r => r.Areas.Where(a => a.Area <= "Z").Count() > 0);

    giving these errors:

        Cannot convert lambda expression to type 'string' because it is not a delegate type
        Delegate 'System.Func' does not take '1' arguments
        Operator '<=' cannot be applied to operands of type 'string' and 'string'

    != works OK, but any sort of less-than or greater-than operation fails.
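
    The last error is the real one: C# strings define == and != but no relational operators, so a.Area <= "Z" cannot compile. string.Compare expresses the ordering instead (and Any() is the idiomatic form of Where(...).Count() > 0). A sketch, assuming an ordinal comparison is what's wanted:

        recordList = recordList.Where(r => r.Areas.Any(
            a => string.Compare(a.Area, "Z", StringComparison.Ordinal) <= 0));

    Compare returns a negative number, zero, or a positive number, so <= 0 means 'sorts at or before "Z"'.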

    Read the article

  • C# Finisar SQLite DateTime Comparison Problem

    - by Emanuel
    My "task" database table looks like this:

        [title]  [content]  [start_date]            [end_date]
        [...]    [...]      [01.06.2010 20:10:36]   [06.06.2010 20:10:36]
        [...]    [...]      [05.06.2010 20:10:36]   [06.06.2010 20:10:36]

    I want to find only those records that meet the condition that a given day is between start_date and end_date. I've tried the following SQL expression:

        SELECT * FROM task
        WHERE strftime('%d', start_date) <= @day
          AND @day <= strftime('%d', end_date)

    where @day is an SQLiteParameter (e.g. 5). But no result is returned. How can I solve this problem? Thanks.
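
    SQLite's date functions only understand ISO-8601 text (YYYY-MM-DD HH:MM:SS), so strftime('%d', '01.06.2010 20:10:36') returns NULL and the comparison never matches anything. One fix is to store the dates in ISO form, after which timestamps compare correctly even as plain strings. A sketch:

        -- With start_date/end_date stored as '2010-06-01 20:10:36' etc.
        -- and @day bound as '2010-06-05':
        SELECT *
        FROM task
        WHERE date(start_date) <= @day
          AND @day <= date(end_date);

    If only the day-of-month really matters, compare strftime('%d', ...) against a two-digit string like '05'; strftime returns text, so 5 and '05' are not interchangeable.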

    Read the article

  • DAO, Spring and Hibernate

    - by EugeneP
    Correct me if anything is wrong. When we use Spring DAO for ORM templates and the @Transactional attribute, we do not have control over the transaction and/or session when the method is called externally rather than from within the method. Lazy loading saves resources: fewer queries to the db, and less memory spent keeping all the fetched collections in the app. So if lazy=false, everything is fetched, including all associated collections, which is not effective if there are 10,000 records in a linked set. Now, I have a method in a DAO class that is supposed to return me a User object. It has collections that represent linked tables of the database. I need to get an object by id and then query its collections. A Hibernate "failed to lazily initialize a collection" exception occurs when I try to access the linked collection that this DAO method returns. Please explain: what is a workaround here?
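
    The exception means the session that loaded the User closed when the @Transactional method returned, so the lazy proxy has nothing to fetch through. The common workarounds are Spring's OpenSessionInView filter, or simply touching the collections the caller is known to need while the transaction is still open. A sketch of the latter (shown with a JPA EntityManager; entity and accessor names are illustrative):

        @Transactional(readOnly = true)
        public User findUserWithOrders(Long id) {
            User user = em.find(User.class, id);
            // Touching the collection inside the transaction forces
            // Hibernate to initialize it before the session closes.
            user.getOrders().size();
            return user;
        }

    This keeps lazy=true as the default while eagerly loading only where a use case demands it; a JOIN FETCH query achieves the same in one round trip.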

    Read the article

  • FileMaker - Getting field values from related table

    - by foobar
    I have the following setup in FileMaker Pro 10:

        Table1: id_table1, related_names
        Table2: id_table2, name, include
        join table: id_table1, id_table2

    Now I want to either make related_names a calculated field or write a script that sets related_names to a comma-separated list of all names that are connected through the join table and have Table2::include = True. So, for example, a data set could look like:

        Table1
        id_table1, related_names
        1, "foo,bar"
        2, "foo"
        3, ""

        join table
        id_table1, id_table2
        1,1
        1,2
        1,3
        2,1

        Table2
        id_table2, name, include
        1, foo, True
        2, bar, True
        3, baz, False

    After searching the internet for a few hours, the closest I came was a calculated field with List(join-table::id_table2), which gives me a list of all the id_table2's. But then I would need to find the appropriate records in Table2 and check the include field. I hope the problem is clear. Any help is highly appreciated.
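
    One approach that stays inside calculated fields: give Table2 a helper calculation that returns the name only when include is true, then collect it through the relationship chain. A sketch, assuming the relationship graph lets Table1 reach Table2 through the join table (List() skips empty values, which is what filters out baz):

        // In Table2, a calculated field with text result:
        name_if_included = If ( include ; name )

        // In Table1, related_names as a calculated field:
        related_names = Substitute ( List ( Table2::name_if_included ) ; ¶ ; ", " )

    List() gathers the related values separated by carriage returns, and Substitute() turns them into the comma-separated form shown in the example.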

    Read the article

  • Reshape data frame to convert factors into columns in R

    - by Alexander L. Belikoff
    I have a data frame where one particular column has a set of specific values (let's say, 1, 2, ..., 23). What I would like to do is convert from this layout to one where the frame has 23 (in this case) extra columns, each one representing one of the factor values. The data in these columns would be booleans indicating whether a particular row had a given factor value. To show a specific example, the source frame:

        ID   DATE        SECTOR
        123  2008-01-01  1
        456  2008-01-01  3
        789  2008-01-02  5
        ... <more records with SECTOR values from 1 to 5>

    Desired format:

        ID   DATE        SECTOR.1  SECTOR.2  SECTOR.3  SECTOR.4  SECTOR.5
        123  2008-01-01  T         F         F         F         F
        456  2008-01-01  F         F         T         F         F
        789  2008-01-02  F         F         F         F         T

    I have no problem doing it in a loop, but I hoped there would be a better way. So far reshape() hasn't yielded the desired result. Help would be much appreciated.
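
    model.matrix() builds exactly these indicator columns without a loop. A sketch, assuming the frame is df with columns ID, DATE and SECTOR taking values 1 to 5:

        df$SECTOR <- factor(df$SECTOR, levels = 1:5)

        # One 0/1 column per factor level; '- 1' drops the intercept so
        # every level gets its own column.
        dummies <- model.matrix(~ SECTOR - 1, data = df) == 1

        colnames(dummies) <- paste("SECTOR", 1:5, sep = ".")
        result <- cbind(df[c("ID", "DATE")], dummies)

    The == 1 turns the numeric indicators into the logical TRUE/FALSE layout shown above.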

    Read the article

  • How to save the view count of a question in memory?

    - by Freewind
    My website is like Stack Overflow; there are many questions. I want to record how many times a question has been visited, and I have a column called "view_count" in the question table to save it. If a user visits a question many times, the view_count should be increased by only 1. So I have to record which user has visited which question, and I think it is too expensive to save this information in the database, because the records will be huge. Instead, I would like to keep the information in memory and only persist the numbers to the database every 10 minutes. I have searched about "cache" in Rails, but I haven't found an example. I would like a simple sample of how to do this. Thanks for any help.
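
    Rails.cache covers the in-memory half of this. A sketch, assuming a memcached-style store is configured (the key names are illustrative, and the read/write pair is not atomic, which is usually acceptable for view counts):

        def record_view(question_id, user_id)
          seen_key  = "question:#{question_id}:seen:#{user_id}"
          count_key = "question:#{question_id}:views"

          # Count this user at most once per question.
          unless Rails.cache.exist?(seen_key)
            Rails.cache.write(seen_key, true)
            Rails.cache.write(count_key, (Rails.cache.read(count_key) || 0) + 1)
          end
        end

    A recurring job (cron, delayed_job, etc.) can then read each question's counter every 10 minutes and add it into the view_count column.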

    Read the article

  • Out of Memory When Loading Java Entities

    - by Hugh Buchanan
    I have a terrible problem that hopefully has a very simple answer. I am running out of memory when I perform a basic find loop. I have code like this:

        MyEntity myEntity;
        for (Object id : someIdList) {
            myEntity = find(id);
            // do something basic with myEntity
        }

    And the find() method is a standard EntityManager-related method:

        public MyEntity find(Object id) {
            return em.find(mycorp.ejb.entity.MyEntity.class, id);
        }

    This code worked a couple of weeks ago, and works fine if there are fewer items in the database. The resulting error I am facing is:

        java.lang.OutOfMemoryError: GC overhead limit exceeded

    The exception is coming from Oracle TopLink calling some Oracle JDBC methods. The loop exists because an EJBQL such as "select object(o) from MyEntity as o" will overload the application server when there are lots of records.
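
    Every entity returned by em.find() stays managed in the persistence context, so the loop pins all of them in memory until the context closes. Paging the query and clearing the context between pages keeps the footprint flat. A sketch:

        int pageSize = 500;
        for (int first = 0; ; first += pageSize) {
            List<?> page = em.createQuery("select o from MyEntity o")
                             .setFirstResult(first)
                             .setMaxResults(pageSize)
                             .getResultList();
            if (page.isEmpty()) {
                break;
            }
            for (Object o : page) {
                MyEntity myEntity = (MyEntity) o;
                // do something basic with myEntity
            }
            em.clear();  // detach the processed page so it can be GC'd
        }

    em.clear() discards pending changes too, so flush first if the loop also writes.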

    Read the article

  • T-SQL Query Results Not as Expected for Deduplication

    - by Yoda
    Hi guys, I am attempting to get all records where an Id field exists more than once, but my query is returning nothing and I have no idea why! This is the only method I know. Here is my code:

        select [Customer Number], [Corporate Customer Number], [Order Date],
               [Order Number], [Order No], [Order Line Status], [Payment Method],
               [ProcessOrder], [Order Platform]
        from Temp_ICOSOrder
        group by [Customer Number], [Corporate Customer Number], [Order Date],
                 [Order Number], [Order No], [Order Line Status], [Payment Method],
                 [ProcessOrder], [Order Platform]
        having COUNT([Order Number]) > 1

    Any help is much appreciated!
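
    Grouping by every column means rows only collapse together when they are identical across all nine columns, so a duplicated id with any differing detail never reaches the HAVING clause. Group by just the id and join back for the detail. A sketch, taking [Customer Number] as the id in question:

        select t.*
        from Temp_ICOSOrder as t
        join (
            select [Customer Number]
            from Temp_ICOSOrder
            group by [Customer Number]
            having count(*) > 1
        ) dups on dups.[Customer Number] = t.[Customer Number];

    Swap [Customer Number] for whichever column is really the duplicated id.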

    Read the article

  • Hiding Table Rows

    - by David Stein
    I have a table that I'm using to show details from the line items of a quote, and I want to hide a particular row depending on the value of a field in it. The expression I've tried is to set the row visibility to:

        =IIF(IsNothing(First(Fields!NEW_PRICEBREAKS.Value, "QuoteDetail")), true, false)

    When I run the query from the dataset, NULL returns for NEW_PRICEBREAKS for most of the records. Also, when I extended the row with another column using this expression:

        =IIF(IsNothing(First(Fields!NEW_PRICEBREAKS.Value, "QuoteDetail")), "is nothing", "not nothing")

    I see "not nothing" repeated over and over again. I've attempted to use TRIM inside of the IsNothing to remove spaces, and it still doesn't work. Also, the SQL data type for NEW_PRICEBREAKS is nvarchar(MAX). Any ideas how I can suppress this row correctly?
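
    First(..., "QuoteDetail") always evaluates the first record of the whole dataset, so every row shows the same answer; if that first record happens to be non-null, "not nothing" repeats all the way down. Referencing the field directly makes the expression evaluate per row. A sketch of the row's Hidden expression:

        =IIF(IsNothing(Fields!NEW_PRICEBREAKS.Value)
             OrElse Trim(CStr(Fields!NEW_PRICEBREAKS.Value)) = "", True, False)

    The CStr/Trim pair also catches rows holding only whitespace, which IsNothing alone does not (OrElse short-circuits, so the Trim never sees a null).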

    Read the article

  • Software Design & Web Service Design

    - by 001
    I'm about to design my web service API. Most of the functions of my API are basically very similar to those of my web application. Now the question is: should I create one single method and reuse it for both the web application and the web service API? (This seems to be the logical solution, but it's very complicated; it's much easier to duplicate the method used by the web application and keep both separate, i.e. one method for the web application and one method for the web service.) How do you guys do it?

        1) REUSE: one main method, reused by both the web application and the
           web service application (I like this but it's complicated)

           WebAppMethodX --uses-- COMMONFUNCTIONMETHOD_X
           APIMethodX ---uses---- COMMONFUNCTIONMETHOD_X

           i.e. the common function performs operations such as
           creating/updating/deleting records etc.

        2) DUPLICATE: two methods, one for the web application and one for the
           web service.

           WebAppMethodX
           APIMethodX

    Read the article

  • C# Lambda Expression Speed

    - by Nathan
    I have not used many lambda expressions before, and I ran into a case where I thought I could make slick use of one. I have a custom list of ~19,000 records, and I need to find out whether a record exists or not in the list, so instead of writing a bunch of loops or using LINQ to go through the list I decided to try this:

        for (int i = MinX; i <= MaxX; ++i)
        {
            tempY = MinY;
            while (tempY <= MaxY)
            {
                bool exists = myList.Exists(item => item.XCoord == i && item.YCoord == tempY);
                ++tempY;
            }
        }

    The only problem is that it takes ~9-11 seconds to execute. Am I doing something wrong, or is this just a case where I shouldn't be using an expression like this? Thanks.
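
    Exists() walks the whole list on every probe, so the grid scan does roughly (MaxX-MinX) x (MaxY-MinY) x 19,000 comparisons. Building a hash set of the coordinates once turns each probe into an O(1) lookup. A sketch, assuming the coordinates are ints (the long-packing trick keys both in one value):

        var occupied = new HashSet<long>(
            myList.Select(item => ((long)item.XCoord << 32) | (uint)item.YCoord));

        for (int i = MinX; i <= MaxX; ++i)
        {
            for (int tempY = MinY; tempY <= MaxY; ++tempY)
            {
                bool exists = occupied.Contains(((long)i << 32) | (uint)tempY);
                // ...
            }
        }

    The lambda itself was never the problem; it's the linear search hiding behind Exists().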

    Read the article

  • mysql way to make a lock in a php page

    - by Cris
    Hello, I have the following MySQL table:

        myTable:
        id int auto_increment
        voucher int not null
        id_user int null

    I've populated the voucher field with values from 1 to 100000, so I've got 100000 records. When a user clicks a button in a PHP page, I need to allocate a record for the user, so I do something like:

        update myTable set id_user=XXX
        where voucher=(SELECT * FROM (SELECT MIN(voucher) FROM myTable WHERE id_user is null) v);

    The problem is that I don't use locks, and I should, because if two users click at the same moment I risk assigning the same voucher to different persons (2 updates to the same record, so I lose 1 user)... I think there must be a correct way to do this. Can you help me please? Thanks! Cris
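
    A single UPDATE that both finds and claims the row needs no explicit lock: MySQL locks the row it updates, so two simultaneous clicks get two different vouchers. A sketch (InnoDB assumed; the user id is a placeholder):

        -- Claim the lowest unassigned voucher atomically:
        UPDATE myTable
        SET id_user = 123
        WHERE id_user IS NULL
        ORDER BY voucher
        LIMIT 1;

        -- Then read back which voucher this user received:
        SELECT voucher FROM myTable WHERE id_user = 123;

    This works because UPDATE ... ORDER BY ... LIMIT 1 picks and modifies the row in one statement; with MyISAM the table-level write lock gives the same guarantee, just with less concurrency.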

    Read the article

  • How to finish a broken data upload to the production Google App Engine server?

    - by WooYek
    I was uploading data to App Engine (not the dev server) through a loader class and remote_api, and I hit the quota in the middle of a CSV file. Based on the logs and the progress sqlite db, how can I select the remaining portion of the data to be uploaded? Going through tens of records to determine which were and which were not transferred is not an appealing task, so I'm looking for some way to limit the number of records I need to check. Here's the relevant (IMO) log portion; how do I interpret the work item numbers?

        [DEBUG 2010-03-30 03:22:51,757 bulkloader.py] [Thread-2] [1041-1050] Transferred 10 entities in 3.9 seconds
        [DEBUG 2010-03-30 03:22:51,757 adaptive_thread_pool.py] [Thread-2] Got work item [1071-1080]
        <cut>
        [DEBUG 2010-03-30 03:23:09,194 bulkloader.py] [Thread-1] [1141-1150] Transferred 10 entities in 4.6 seconds
        [DEBUG 2010-03-30 03:23:09,194 adaptive_thread_pool.py] [Thread-1] Got work item [1161-1170]
        <cut>
        [DEBUG 2010-03-30 03:23:09,226 bulkloader.py] [Thread-3] [1151-1160] Transferred 10 entities in 4.2 seconds
        [DEBUG 2010-03-30 03:23:09,226 adaptive_thread_pool.py] [Thread-3] Got work item [1171-1180]
        [ERROR 2010-03-30 03:23:10,174 bulkloader.py] Retrying on non-fatal HTTP error: 503 Service Unavailable
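
    The bracketed ranges appear to be 1-based row ranges from the CSV, handed out in batches of ten, and the progress database already records which of them finished. The bulkloader is built to resume: re-running the same command with the same --db_filename skips every completed work item. A sketch, with paths and the kind name illustrative:

        appcfg.py upload_data \
            --config_file=loader.py \
            --filename=data.csv \
            --kind=MyKind \
            --url=http://myapp.appspot.com/remote_api \
            --db_filename=bulkloader-progress-20100330.sql3

    Pointing --db_filename at the progress file from the interrupted run is the whole trick; there is no need to hand-pick the remaining rows.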

    Read the article

  • php warning mysql_fetch_assoc

    - by death the kid
    I am trying to access some information from MySQL, but I get the warning:

        mysql_fetch_assoc(): supplied argument is not a valid MySQL result resource

    for the second line of the code below. Any help would be much appreciated.

        $musicfiles = getmusicfiles($records['m_id']);
        $mus = mysql_fetch_assoc($musicfiles);
        for ($j = 0; $j < 2; $j++) {
            if (file_exists($mus['musicpath'])) {
                echo '<a href="' . $mus['musicpath'] . '">' . $mus['musicname'] . '</a>';
            } else {
                echo 'Hello world';
            }
        }

        function getmusicfiles($m_id)
        {
            $music = "select * from music WHERE itemid=" . $s_id;
            $result = getQuery($music, $l);
            return $result;
        }
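
    The warning means the query failed and getmusicfiles() returned something other than a result resource. The likely culprit is visible in the function: it interpolates $s_id (and passes $l), but neither variable exists in that scope; the parameter is $m_id, so the SQL ends with itemid= and is invalid. A sketch of a fix with an error check, using the same legacy mysql_* API as the question:

        function getmusicfiles($m_id)
        {
            $sql = "SELECT * FROM music WHERE itemid = " . intval($m_id);
            $result = mysql_query($sql);
            if (!$result) {
                die('Query failed: ' . mysql_error());
            }
            return $result;
        }

    Checking the return value before calling mysql_fetch_assoc() turns the vague warning into the actual MySQL error message.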

    Read the article
