Search Results

Search found 6431 results on 258 pages for 'pthread join'.

  • What are your favorite extension methods for C#/.NET? (codeplex.com/extensionoverflow)

    - by bovium
    Let's make a list of answers where you post your excellent and favorite extension methods. The requirement is that the full code must be posted, along with an example and an explanation of how to use it. Based on the high interest in this topic I have set up an open-source project called extensionoverflow on Codeplex. Please mark your answers with an acceptance to put the code in the Codeplex project. Please post the full source code, not a link.

    Codeplex news:
    11.11.2008 XmlSerialize / XmlDeserialize is now implemented and unit tested.
    11.11.2008 There is still room for more developers. ;-) Join NOW!
    11.11.2008 Third contributor joined ExtensionOverflow - welcome to BKristensen.
    11.11.2008 FormatWith is now implemented and unit tested.
    09.11.2008 Second contributor joined ExtensionOverflow - welcome to chakrit.
    09.11.2008 We need more developers. ;-)
    09.11.2008 ThrowIfArgumentIsNull is now implemented and unit tested on Codeplex.
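
    A minimal sketch of the kind of submission requested - it mirrors the ThrowIfArgumentIsNull method named in the news above, as one plausible implementation rather than the project's actual code:

        using System;

        public static class ArgumentExtensions
        {
            // Throws if a required reference argument is null.
            public static void ThrowIfArgumentIsNull<T>(this T obj, string parameterName) where T : class
            {
                if (obj == null)
                    throw new ArgumentNullException(parameterName);
            }
        }

        // Usage: customer.ThrowIfArgumentIsNull("customer");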

  • Polymorphic associations in CakePHP2

    - by Joseph
    I have 3 models: Page, Course and Content. Page and Course contain metadata and Content contains HTML content. Page and Course both hasMany Content; Content belongsTo Page and Course. To avoid having page_id and course_id fields in Content (because I want this to scale to more than just 2 models) I am looking at using polymorphic associations. I started with the Polymorphic Behavior in the Bakery, but it generates way too many SQL queries for my liking, and it also throws an "Illegal Offset" error which I don't know how to fix (it was written in 2008 and nobody seems to have referred to it recently, so perhaps the error is due to it not having been designed for Cake 2?). Anyway, I've found that I can almost do everything I need by hardcoding the associations in the models, as follows:

    Page model

        CREATE TABLE `pages` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `title` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
          `slug` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
          `created` datetime NOT NULL,
          `updated` datetime NOT NULL,
          PRIMARY KEY (`id`)
        )

        <?php
        class Page extends AppModel {
            var $name = 'Page';
            var $hasMany = array(
                'Content' => array(
                    'className'  => 'Content',
                    'foreignKey' => 'foreign_id',
                    'conditions' => array('Content.class' => 'Page'),
                )
            );
        }
        ?>

    Course model

        CREATE TABLE `courses` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `title` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
          `slug` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
          `created` datetime NOT NULL,
          `updated` datetime NOT NULL,
          PRIMARY KEY (`id`)
        )

        <?php
        class Course extends AppModel {
            var $name = 'Course';
            var $hasMany = array(
                'Content' => array(
                    'className'  => 'Content',
                    'foreignKey' => 'foreign_id',
                    'conditions' => array('Content.class' => 'Course'),
                )
            );
        }
        ?>

    Content model

        CREATE TABLE IF NOT EXISTS `contents` (
          `id` int(11) unsigned NOT NULL AUTO_INCREMENT,
          `class` varchar(30) COLLATE utf8_unicode_ci NOT NULL,
          `foreign_id` int(11) unsigned NOT NULL,
          `title` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
          `content` text COLLATE utf8_unicode_ci NOT NULL,
          `created` datetime DEFAULT NULL,
          `modified` datetime DEFAULT NULL,
          PRIMARY KEY (`id`)
        )

        <?php
        class Content extends AppModel {
            var $name = 'Content';
            var $belongsTo = array(
                'Page' => array(
                    'foreignKey' => 'foreign_id',
                    'conditions' => array('Content.class' => 'Page')
                ),
                'Course' => array(
                    'foreignKey' => 'foreign_id',
                    'conditions' => array('Content.class' => 'Course')
                )
            );
        }
        ?>

    The good thing is that $this->Content->find('first') only generates a single SQL query instead of three (as was the case with the Polymorphic Behavior), but the problem is that the dataset returned includes both of the belongsTo models, whereas it should only return the one that exists. Here's how the returned data looks:

        array(
            'Content' => array(
                'id' => '1',
                'class' => 'Course',
                'foreign_id' => '1',
                'title' => 'something about this course',
                'content' => 'The content here',
                'created' => null,
                'modified' => null
            ),
            'Page' => array(
                'id' => null,
                'title' => null,
                'slug' => null,
                'created' => null,
                'updated' => null
            ),
            'Course' => array(
                'id' => '1',
                'title' => 'Course name',
                'slug' => 'name-of-the-course',
                'created' => '2012-10-11 00:00:00',
                'updated' => '2012-10-11 00:00:00'
            )
        )

    I only want it to return one of either Page or Course, depending on which one is specified in Content.class.

    UPDATE: Combining the Page and Course models would seem like the obvious solution to this problem, but the schemas shown above are only for the purpose of this question. The actual schemas are very different in terms of their fields, and they each have a different number of associations with other models too.

    UPDATE 2: Here is the query that results from running $this->Content->find('first');

        SELECT `Content`.`id`, `Content`.`class`, `Content`.`foreign_id`, `Content`.`title`,
               `Content`.`slug`, `Content`.`content`, `Content`.`created`, `Content`.`modified`,
               `Page`.`id`, `Page`.`title`, `Page`.`slug`, `Page`.`created`, `Page`.`updated`,
               `Course`.`id`, `Course`.`title`, `Course`.`slug`, `Course`.`created`, `Course`.`updated`
        FROM `cakedb`.`contents` AS `Content`
        LEFT JOIN `cakedb`.`pages` AS `Page`
          ON (`Content`.`foreign_id` = `Page`.`id` AND `Content`.`class` = 'Page')
        LEFT JOIN `cakedb`.`courses` AS `Course`
          ON (`Content`.`foreign_id` = `Course`.`id` AND `Content`.`class` = 'Course')
        WHERE 1 = 1
        LIMIT 1
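
    One possible approach, sketched below on the assumption that the hardcoded associations above stay in place: strip whichever empty association came back, in an afterFind callback on the Content model. This is an illustration, not a canonical CakePHP recipe.

        <?php
        class Content extends AppModel {
            // ... associations as above ...

            // Drop the belongsTo result that does not match Content.class
            public function afterFind($results, $primary = false) {
                foreach ($results as $i => $row) {
                    if (isset($row['Content']['class'])) {
                        foreach (array('Page', 'Course') as $model) {
                            if ($model != $row['Content']['class']) {
                                unset($results[$i][$model]);
                            }
                        }
                    }
                }
                return $results;
            }
        }
        ?>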

  • How to sum up an array of integers in C#

    - by Filburt
    Is there a better, shorter way than iterating over the array?

        int[] arr = new int[] { 1, 2, 3 };
        int sum = 0;
        for (int i = 0; i < arr.Length; i++)
        {
            sum += arr[i];
        }

    Clarification: "better" primarily means cleaner code, but hints on performance improvement are also welcome (like the already-mentioned splitting of large arrays). It's not that I was looking for a killer performance improvement - I just wondered whether this very kind of syntactic sugar wasn't already available: "There's String.Join - what the heck about int[]?".
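
    For reference, LINQ (available since .NET 3.5) provides exactly this sugar via Enumerable.Sum:

        using System.Linq;

        int[] arr = new int[] { 1, 2, 3 };
        int sum = arr.Sum();   // 6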

  • Rails: render a partial from a plugin

    - by Sam
    I'm getting a missing template error after I try rendering a partial from a plugin. I have included the files with the following:

        %w{ models controllers helpers views }.each do |dir|
          path = File.join(File.dirname(__FILE__), 'app', dir)
          $LOAD_PATH << path
          ActiveSupport::Dependencies.load_paths << path
          ActiveSupport::Dependencies.load_once_paths.delete(path)
        end

    The models are getting loaded, but I'm not sure what's going on with the rest. The helpers are not getting loaded either: when I copied the contents of the partial into my view instead of using render :partial => ..., it came up with a helper error. The question is how to render :partial => ... from the views folder in my plugin.
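
    A sketch of one Rails 2.x-era approach, assuming the plugin has an init.rb: add the plugin's views directory to the controllers' template lookup path so render :partial can resolve templates there.

        # in the plugin's init.rb (hypothetical location), Rails 2.x
        ActionController::Base.append_view_path File.join(File.dirname(__FILE__), 'app', 'views')

    With that in place, render :partial => 'shared/thing' is resolved against the plugin's app/views as well as the application's.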

  • Nested Select in Rails

    - by James
    I am working on a Rails application which uses categories for items. My category model is self-joined so that categories can be nested:

        class Category < ActiveRecord::Base
          has_many :items

          # Self join (categories can have subcategories)
          has_many :subcategories, :class_name => "Category", :foreign_key => "parent_id"
          belongs_to :parent, :class_name => "Category"
          ...
        end

    I have a form which allows a user to create an item; it currently lists all categories in a select, but they are all listed together:

        <%= f.label :category_id %>
        <%= select :item, :category_id, Category.all.collect {|c| [ c.title, c.id ]} %>

    So the select looks something like this:

        Category1
        Category2
        Category3BelongsTo2
        Category4BelongsTo1

    But what I want is:

        Category1
        - Category4BelongsTo1
        Category2
        - Category3BelongsTo2

    Is there a helper for this (which would be awesome!)? If not, how could I accomplish this? Thanks!
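
    A sketch of one way to build the indented option list by hand, assuming top-level categories have a NULL parent_id; the Rails helper option_groups_from_collection_for_select is an alternative if optgroup-style grouping is acceptable:

        options = []
        Category.all(:conditions => { :parent_id => nil }).each do |parent|
          options << [parent.title, parent.id]
          parent.subcategories.each do |sub|
            options << ["- #{sub.title}", sub.id]
          end
        end

        # then in the view:
        # <%= select :item, :category_id, options %>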

  • DataMapper: using auto_migrate! with many-to-many dependencies?

    - by pschuegr
    Hi, I'm trying to migrate my app from MySQL to PostgreSQL, using Rails3-pre and the latest DataMapper. I have several models which are related through many-to-many relationships using :through => Resource, which means that DataMapper creates a join table with foreign keys for both models. I can't auto_migrate! these changes, because I keep getting this:

        ERROR: cannot drop table users because other objects depend on it
        DETAIL: constraint artist_users_owner_fk on table artist_users depends on table users
                constraint site_users_owner_fk on table site_users depends on table users
        HINT: Use DROP ... CASCADE to drop the dependent objects too.

    I have tried everything I can think of, and thought I had things working when I added :constraint => :skip to the field definition, but I keep getting that error back when I try to run auto_migrate. I thought that :skip meant it would ignore the dependents, but maybe that only applies to deleting rows and not to dropping tables? I should mention that I can run auto_migrate once after I nuke the DB, but after that, errors. Any suggestions or advice much appreciated.

  • Elements are listed in a VOB but cannot be checked out/in via CCRC

    - by sunil devan
    Hi, there are two Windows domains, named OPR and BDC. The CCRC server is hosted in the OPR domain. Users in the BDC domain are able to connect to CCRC, list the VOBs, and also join the project. But performing any checkout/checkin or loading of resources takes a very long time, and even after a day it is in the same state. Connectivity from the BDC domain to the OPR domain is fine (ping and tracert both work). Could you please let me know if you have some idea about it? Thanks, Sunil

  • MySQL easy question CURDATE()

    - by Tristan
    I want to compare two results: one is stored by the first query, and the other is exactly the same as the first, except that I only want to receive data from before today (date < today).

        SELECT s.GSP_nom AS nom, timestamp, COUNT(s.GSP_nom) AS nb_votes,
               AVG(v.vote+v.prix+v.serviceClient+v.interface+v.interface+v.services)/6 AS moy
        FROM votes_serveur AS v
        INNER JOIN serveur AS s ON v.idServ = s.idServ
        WHERE s.valide = 1
          AND v.date < CURDATE()
        GROUP BY s.GSP_nom
        HAVING nb_votes > 9
        ORDER BY moy DESC
        LIMIT 0,15

    Is that correct? Thank you.

  • Retrieving Top 10 rows and summing all others in row 11

    - by Mario
    Hello all, I have the following query that retrieves the number of users per country:

        SELECT C.CountryID AS CountryID, C.CountryName AS Country, COUNT(FirstName) AS Origin
        FROM Users AS U
        INNER JOIN Country AS C ON C.CountryID = U.CountryOfOrgin
        GROUP BY C.CountryName, C.CountryID

    What I need is a way to get the top 10 and then sum all other users in a single row. I know how to get the top 10, but I'm stuck on getting the remaining countries into a single row. Is there a simple way to do it? For example, if the above query returns 17 records, the top ten are displayed and the sum of the users from the 7 remaining countries should appear in row 11. On that row 11 the CountryID would be 0 and the CountryName "Others". Thanks for your help!
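
    A sketch of one approach, assuming a SQL Server-style dialect with window functions (table and column names as above): rank the per-country counts, keep the first ten, and UNION ALL a single "Others" row for the rest.

        WITH ranked AS (
            SELECT C.CountryID, C.CountryName, COUNT(U.FirstName) AS Origin,
                   ROW_NUMBER() OVER (ORDER BY COUNT(U.FirstName) DESC) AS rn
            FROM Users AS U
            INNER JOIN Country AS C ON C.CountryID = U.CountryOfOrgin
            GROUP BY C.CountryName, C.CountryID
        )
        SELECT CountryID, CountryName, Origin FROM ranked WHERE rn <= 10
        UNION ALL
        SELECT 0, 'Others', SUM(Origin) FROM ranked WHERE rn > 10;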

  • sql select with exact outcome

    - by Shiro
    Asking a simple question - just for fun, have a go at solving it. I've got two tables:

        Student
        +----+-------+
        | id | name  |
        +----+-------+
        |  1 | User1 |
        |  2 | User2 |
        +----+-------+

        Course
        +----+------------+-------------+
        | id | student_id | course_name |
        +----+------------+-------------+
        |  1 |          1 | English     |
        |  2 |          1 | Chinese     |
        |  3 |          2 | English     |
        |  4 |          2 | Japanese    |
        +----+------------+-------------+

    I would like to get all students who have taken both English AND Chinese, not English OR Chinese. Expected result:

        +----+------------+-------------+
        | id | student_id | course_name |
        +----+------------+-------------+
        |  1 |          1 | English     |
        |  2 |          1 | Chinese     |
        +----+------------+-------------+

    What we normally do is:

        SELECT *
        FROM student
        JOIN course ON (student.id = course.student_id)
        WHERE course_name = 'English' OR course_name = 'Chinese'

    but with this query I also get the User2 record, which is not my expected result. I want the result to contain only the users who take both English and Chinese.
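
    A common way to express this "must have both" requirement is relational division: filter to the two courses, group by student, and require both to be present. A sketch against the tables above:

        SELECT c.*
        FROM course c
        WHERE c.course_name IN ('English', 'Chinese')
          AND c.student_id IN (
              SELECT student_id
              FROM course
              WHERE course_name IN ('English', 'Chinese')
              GROUP BY student_id
              HAVING COUNT(DISTINCT course_name) = 2
          );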

  • Avoiding dog-piling or thundering herd in a memcached expiration scenario

    - by Quintin Par
    I have the result of a query that is very expensive: it is the join of several tables plus a map-reduce job. The result is cached in memcached for 15 minutes. Once the cache expires, the queries are obviously run again and the cache is warmed. But at the point of expiration the thundering herd problem can occur. One way to fix this, which I do right now, is to run a scheduled task that kicks in at the 14th minute. But somehow this looks very suboptimal to me. Another approach I like is nginx's "proxy_cache_use_stale updating;" mechanism: the webserver/machine continues to deliver stale cache while a thread kicks in the moment expiration happens and updates the cache. Has someone applied this to a memcached scenario, even though I understand this is a client-side strategy? If it's relevant, I use Django.
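
    A minimal client-side sketch of the stale-while-revalidate idea, assuming the python-memcached client: the soft expiry lives inside the payload, and add() (which fails if the key already exists) acts as a cheap lock so only one client recomputes while everyone else keeps serving the stale value.

        import time
        import memcache

        mc = memcache.Client(['127.0.0.1:11211'])

        def get_or_refresh(key, soft_ttl, compute, grace=60):
            wrapped = mc.get(key)
            if wrapped is not None and wrapped['soft_expiry'] > time.time():
                return wrapped['value']                  # still fresh
            # stale or missing: let exactly one caller recompute
            if mc.add(key + ':refresh_lock', 1, grace):
                value = compute()
                mc.set(key,
                       {'value': value, 'soft_expiry': time.time() + soft_ttl},
                       soft_ttl + grace)                 # hard TTL outlives soft TTL
                mc.delete(key + ':refresh_lock')
                return value
            if wrapped is not None:
                return wrapped['value']                  # serve stale meanwhile
            return compute()                             # cold cache, no choice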

  • mysql - filtering a list against keywords, both list and keywords > 20 million records

    - by threecheeseopera
    I have two tables, both having more than 20 million records; table1 is a list of terms, and table2 is a list of keywords that may or may not appear in those terms. I need to identify the terms that contain a keyword. My current strategy is:

        SELECT table1.term, table2.keyword
        FROM table1
        INNER JOIN table2
          ON table1.term LIKE CONCAT('%', table2.keyword, '%');

    This is not working - it takes f o r e v e r. It's not the server (see notes). How might I rewrite this so that it runs in under a day? Notes on server optimization: both tables are MyISAM and have unique indexes on the matching fields; the MyISAM key buffer is greater than the sum of both index file sizes, and it is not even being fully taxed (key_blocks_unused is ... large); the server is a dual-Xeon 2U beast with fast SAS drives and 8 GB of RAM, fine-tuned for the MySQL workload.

  • deleting and reusing a temp table in a stored procedure

    - by Sheagorath
    Hi, I need to SELECT INTO a temp table multiple times in a loop, but I just can't do it, because after the table is created by SELECT INTO you can't simply drop it at the end of the loop - you can't delete a table and create it again in the same batch. So how can I delete a table in a stored procedure and create it again? Is it possible to do this without using a temp table? Here is a snippet of where I am actually using the temp table, which is supposed to be a pivoting algorithm:

        WHILE @offset < @NumDays
        BEGIN
            SELECT bg.*, j.ID, j.time, j.Status
            INTO #TEMP1
            FROM #TEMP2 AS bg
            LEFT OUTER JOIN PersonSchedule j
              ON bg.PersonID = j.PersonID
             AND bg.TimeSlotDateTime = j.TimeSlotDateTime
             AND j.TimeSlotDateTime = @StartDate + @offset

            DROP TABLE #TEMP2;

            SELECT * INTO #TEMP2 FROM #TEMP1

            DROP TABLE #TEMP1

            SET @offset = @offset + 1
        END

  • Killing a script launched in a Process via os.system()

    - by L.J.
    I have a Python script which launches several processes. Each process basically just calls a shell script:

        from multiprocessing import Process
        import os
        import logging

        def thread_method(n = 4):
            global logger
            command = "~/Scripts/run.sh " + str(n) + " >> /var/log/mylog.log"
            if (debug):
                logger.debug(command)
            os.system(command)

    I launch several of these processes, which are meant to run in the background. I want to have a timeout on them, such that any process exceeding the timeout is killed:

        t = []
        for x in range(10):
            try:
                t.append(Process(target=thread_method, args=(x,)))
                t[-1].start()
            except Exception as e:
                logger.error("Error: unable to start thread")
                logger.error("Error message: " + str(e))
        logger.info("Waiting up to 60 seconds to allow threads to finish")
        t[0].join(60)
        for n in range(len(t)):
            if t[n].is_alive():
                logger.info(str(n) + " is still alive after 60 seconds, forcibly terminating")
                t[n].terminate()

    The problem is that calling terminate() on the Process objects isn't killing the launched run.sh script - it continues running in the background until I either force-kill it from the command line or it finishes on its own. Is there a way to have terminate() also kill the subshell created by os.system()?
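
    One POSIX-only sketch: replace os.system() with subprocess.Popen(..., preexec_fn=os.setsid) so the shell gets its own process group, then signal the whole group on timeout instead of relying on Process.terminate():

        import os
        import signal
        import subprocess
        import time

        def run_with_timeout(command, timeout):
            # start the shell in its own session/process group
            proc = subprocess.Popen(command, shell=True, preexec_fn=os.setsid)
            deadline = time.time() + timeout
            while time.time() < deadline:
                if proc.poll() is not None:          # finished on its own
                    return proc.returncode
                time.sleep(0.5)
            # kill the shell and every child it spawned
            os.killpg(os.getpgid(proc.pid), signal.SIGTERM)
            return None

        run_with_timeout("~/Scripts/run.sh 4 >> /var/log/mylog.log", 60)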

  • Oracle/c#: How do i use bind variables with select statements to return multiple records?

    - by twiga
    I have a question regarding Oracle bind variables and select statements. What I would like to achieve is a select on a number of different values for the primary key. I would like to pass these values via an array, using bind variables:

        select * from tb_customers where cust_id = :1

        int[] cust_id = { 11, 23, 31, 44, 51 };

    I then bind a DataReader to get the values into a table. The problem is that the resulting table only contains a single record (for cust_id = 51). Thus it seems that each statement is executed independently (as it should be), but I would like the results to be available as a collective (single table). A workaround is to create a temporary table, insert all the values of cust_id and then do a join against tb_customers. The problem with this approach is that I would require temporary tables for every different type of primary key, as I would like to use this against a number of tables (some even have composite primary keys). Is there anything I am missing?
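
    What's described sounds like ODP.NET array binding, which executes the statement once per element. A hedged sketch of an alternative that returns one result set: generate an IN list with one placeholder per value and bind each one individually (conn is assumed to be an open OracleConnection, and the exact Parameters.Add overload may vary by driver version):

        int[] custIds = { 11, 23, 31, 44, 51 };

        // build "select * from tb_customers where cust_id in (:p0, :p1, ...)"
        string[] names = new string[custIds.Length];
        for (int i = 0; i < custIds.Length; i++)
            names[i] = ":p" + i;
        string sql = "select * from tb_customers where cust_id in ("
                     + string.Join(", ", names) + ")";

        using (OracleCommand cmd = new OracleCommand(sql, conn))
        {
            for (int i = 0; i < custIds.Length; i++)
                cmd.Parameters.Add(names[i], custIds[i]);   // one bind per value
            using (OracleDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // all matching customers arrive in a single result set
                }
            }
        }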

  • I want to get 2 values returned by my query. How to do this using LINQ to Entities?

    - by Shantanu Gupta
    var dept_list = (from map in DtMapGuestDepartment.AsEnumerable()
                     where map.Field<Nullable<long>>("GUEST_ID") == DRowGuestPI.Field<Nullable<long>>("PK_GUEST_ID")
                     join dept in DtDepartment.AsEnumerable()
                       on map.Field<Nullable<long>>("DEPARTMENT_ID") equals dept.Field<Nullable<long>>("DEPARTMENT_ID")
                     select new
                     {
                         dept_id = dept.Field<long>("DEPARTMENT_ID"),
                         dept_name = dept.Field<long>("DEPARTMENT_NAME")
                     }).Distinct();

    DataTable dt = new DataTable();
    dt.Columns.Add("DEPARTMENT_ID");
    dt.Columns.Add("DEPARTMENT_NAME");
    foreach (long? dept_ in dept_list)
    {
        dt.Rows.Add(dept_[0], dept_[1]);
    }

    EDIT: In a previous question of mine I got an answer like this for a single value. What is the difference between the two?

        foreach (long? dept in dept_list)
        {
            dt.Rows.Add(dept);
        }
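
    The difference is the element type: the single-value query selected one long? column, so long? worked as the loop variable. This query projects an anonymous type with two properties, which can be neither cast to long? nor indexed. A sketch of the corrected loop (note, too, that DEPARTMENT_NAME is probably a string, so Field<string> would be the safer accessor in the select):

        foreach (var d in dept_list)
        {
            // d is the anonymous type { dept_id, dept_name }
            dt.Rows.Add(d.dept_id, d.dept_name);
        }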

  • Long-running Database Query

    - by JamesMLV
    I have a long-running SQL Server 2005 query that I have been hoping to optimize. When I look at the actual execution plan, it says a Clustered Index Seek has 66% of the cost.

    Execution plan snippet:

        <RelOp AvgRowSize="31" EstimateCPU="0.0113754" EstimateIO="0.0609028" EstimateRebinds="0" EstimateRewinds="0"
               EstimateRows="10198.5" LogicalOp="Clustered Index Seek" NodeId="16" Parallel="false"
               PhysicalOp="Clustered Index Seek" EstimatedTotalSubtreeCost="0.0722782">
          <OutputList>
            <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="quoteDate" />
            <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="price" />
            <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="tenure" />
          </OutputList>
          <RunTimeInformation>
            <RunTimeCountersPerThread Thread="0" ActualRows="1067" ActualEndOfScans="1" ActualExecutions="1" />
          </RunTimeInformation>
          <IndexScan Ordered="true" ScanDirection="FORWARD" ForcedIndex="false" NoExpandHint="false">
            <DefinedValues>
              <DefinedValue>
                <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="quoteDate" />
              </DefinedValue>
              <DefinedValue>
                <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="price" />
              </DefinedValue>
              <DefinedValue>
                <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="tenure" />
              </DefinedValue>
            </DefinedValues>
            <Object Database="[wf_1]" Schema="[dbo]" Table="[Indices]"
                    Index="[_dta_index_Indices_14_320720195__K5_K2_K1_3]" Alias="[I]" />
            <SeekPredicates>
              <SeekPredicate>
                <Prefix ScanType="EQ">
                  <RangeColumns>
                    <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]"
                                     Column="HedgeProduct" ComputedColumn="true" />
                  </RangeColumns>
                  <RangeExpressions>
                    <ScalarOperator ScalarString="(1)">
                      <Const ConstValue="(1)" />
                    </ScalarOperator>
                  </RangeExpressions>
                </Prefix>
                <StartRange ScanType="GE">
                  <RangeColumns>
                    <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="tenure" />
                  </RangeColumns>
                  <RangeExpressions>
                    <ScalarOperator ScalarString="[@StartMonth]">
                      <Identifier>
                        <ColumnReference Column="@StartMonth" />
                      </Identifier>
                    </ScalarOperator>
                  </RangeExpressions>
                </StartRange>
                <EndRange ScanType="LE">
                  <RangeColumns>
                    <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="tenure" />
                  </RangeColumns>
                  <RangeExpressions>
                    <ScalarOperator ScalarString="[@EndMonth]">
                      <Identifier>
                        <ColumnReference Column="@EndMonth" />
                      </Identifier>
                    </ScalarOperator>
                  </RangeExpressions>
                </EndRange>
              </SeekPredicate>
            </SeekPredicates>
          </IndexScan>
        </RelOp>

    From this, does anyone see an obvious problem that would be causing this to take so long? Here is the query:

        (SELECT quotedate, tenure, price, ActualVolume, HedgePortfolioValue,
                Price AS UnhedgedPrice,
                ((ActualVolume*Price - HedgePortfolioValue)/ActualVolume) AS HedgedPrice
         FROM (
             SELECT [quoteDate]
                   ,[price]
                   ,tenure
                   ,isnull(wf_1.[Risks].[HedgePortValueAsOfDate2](1,tenureMonth,quotedate,price),0) as HedgePortfolioValue
                   ,[TotalOperatingGasVolume] as ActualVolume
             FROM [wf_1].[dbo].[Indices] I
             INNER JOIN (
                 SELECT DISTINCT tenureMonth
                 FROM [wf_1].[Risks].[KnowRiskTrades]
                 WHERE HedgeProduct = 1 AND portfolio <> 'Natural Gas Hedge Transactions'
             ) B ON I.tenure = B.tenureMonth
             INNER JOIN (
                 SELECT [Month], [TotalOperatingGasVolume]
                 FROM [wf_1].[Risks].[ActualGasVolumes]
             ) C ON C.[Month] = B.tenureMonth
             WHERE HedgeProduct = 1
               AND quoteDate >= dateadd(day, -3*365, tenureMonth)
               AND quoteDate <= dateadd(day, -3, tenureMonth)
         ) A
        )

  • General things a developer must know with 2+ years of experience?

    - by Salil
    Hi all, I have 2 years of experience in Ruby on Rails. I have basic (very basic) knowledge of MySQL, such as data insertion, joins, and selecting from more than one table. But now I want to know more about it, because my friends are having trouble in interviews when asked questions like:

        1) What is a trigger?
        2) When is each kind of trigger called?
        3) What are views in MySQL?

    etc. Are these questions for developers? Is it basic database knowledge? Also, what other things should a developer know with two or more years of experience? I am in two minds, as I have over two years of experience in Ruby and I am still learning something new every day - in Ruby alone. If someone asked me to rate myself, I couldn't give more than 5 out of 10 for RoR alone. So my question is: what are the general things a developer must know with 2+ years of experience? Regards, Salil Gaikwad

  • How can I extract just the elements I want from a Perl array?

    - by Flamewires
    Hey, I'm wondering how I can get this code to work. Basically I want to keep the lines of $filename only when the path contains $user:

        open STDERR, ">/dev/null";
        $filename = `find -H /home | grep $file`;
        @filenames = split(/\n/, $filename);
        for $i (@filenames) {
            if ($i =~ m/$user/) {
                # keep results
            } else {
                delete $i;    # does not work
            }
        }
        $filename = join("\n", @filenames);
        close STDERR;

    I know you can delete like delete $array[index], but I don't have an index with this kind of loop, as far as I know.
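
    grep is the idiomatic Perl tool for exactly this "keep the matching elements" filter - a sketch replacing the loop above:

        # keep only the entries whose path mentions the user
        @filenames = grep { m/$user/ } @filenames;
        $filename  = join("\n", @filenames);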

  • Joining tables from 2 different connection strings

    - by krio
    Hello, I need to join two tables from different MySQL (PHP) connections and different databases:

        $conn  = mysql_connect('192.168.30.20', 'user', 'pass');
        $conn2 = mysql_connect('anotherIPHere', 'user2', 'pass2');

        $db  = mysql_select_db('1stdb', $conn);
        $db2 = mysql_select_db('2nddb', $conn2);

    If I were using the same connection I would just prefix the tables with the database names, such as database1.table1.column and database2.table2.column2, but since I'm using two completely separate connections, the MySQL query does not know which connection to use, so the resource is not usable. I've read a ton of resources that show how to use two databases over the SAME connection, and that works fine, but I can't find anything related to multiple connections and databases. Thanks
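
    MySQL itself cannot join across two separate server connections, so one hedged workaround is to fetch from each server and merge in PHP. A sketch with hypothetical table/column names, using the same legacy mysql_* API as above (the FEDERATED storage engine is the server-side alternative, exposing a remote table as if it were local):

        // fetch the left side, keyed by the join column
        $rows = array();
        $res1 = mysql_query('SELECT id, name FROM table1', $conn);
        while ($row = mysql_fetch_assoc($res1)) {
            $rows[$row['id']] = $row;
        }

        // fetch the right side from the other server and attach matches
        $res2 = mysql_query('SELECT id, extra FROM table2', $conn2);
        while ($row = mysql_fetch_assoc($res2)) {
            if (isset($rows[$row['id']])) {
                $rows[$row['id']]['extra'] = $row['extra'];
            }
        }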

  • Rails Metaprogramming: How to add instance methods at runtime?

    - by Larry K
    I'm defining my own AR class in Rails that will include dynamically created instance methods for user fields 0-9. The user fields are not stored in the DB directly; they'll be serialized together, since they'll be used infrequently. Is the following the best way to do this? Alternatives? Where should the start-up code for adding the methods be called from?

        class Info < ActiveRecord::Base
        end

        # called from an init file to add the instance methods
        parts = []
        (0..9).each do |i|
          parts.push "def user_field_#{i}"      # def user_field_0
          parts.push "get_user_fields && @user_fields[#{i}]"
          parts.push "end"
        end
        Info.class_eval parts.join("\n")
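
    An alternative sketch that avoids string eval entirely: define_method closes over i, so there is no code-generation step, and the block can live in the class body (or in an initializer) rather than a separate init file:

        class Info < ActiveRecord::Base
          (0..9).each do |i|
            define_method("user_field_#{i}") do
              get_user_fields && @user_fields[i]
            end
          end
        end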

  • Using Routing helpers in a Rake task

    - by trobrock
    I have a rake task that sends out the next 'x' invitations to join a beta. It uses this code:

        desc "This will send out the next batch of invites for the beta"
        task :send_invites => :environment do
          limit = ENV['limit']
          c = 0
          Invitation.all(:conditions => { :sent_at => nil, :sender_id => nil },
                         :limit => limit).each do |i|
            Mailer.deliver_invitation(i, register_url(i.token))
            c += 1
          end
          puts "Sent #{c} invitations."
        end

    I need to pass the register_url to the Mailer in order for the link to show up in the email, but since this is running from a rake task and not from a request, it does not have access to the helper methods. What is the best way of achieving this?
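
    A Rails 2.x-era sketch: mix the URL writer into the task so the named route helpers become available, and set the host by hand since there is no request to infer it from (example.com is a hypothetical host):

        desc "This will send out the next batch of invites for the beta"
        task :send_invites => :environment do
          include ActionController::UrlWriter          # brings in register_url and friends
          default_url_options[:host] = 'example.com'   # required outside a request cycle

          Invitation.all(:conditions => { :sent_at => nil, :sender_id => nil }).each do |i|
            Mailer.deliver_invitation(i, register_url(i.token))
          end
        end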

  • Allowing New Users to Invite Their Gmail Contacts

    - by John
    Hello, for my site I would like to give new users the option to invite all of their Gmail contacts to join. What is the basic step-by-step process to set this up? (Also, is it necessary to buy an SSL certificate for this?) Thanks in advance, John

    EDIT: My site has a basic login where users set up a username and password. I would like to give users the option to invite their Gmail contacts right after they create their new profile, and also to let them invite their Gmail contacts anytime they want.

  • How can I map "insert='false' update='false'" on a composite-id key-property which is also used in a one-to-many FK?

    - by Gweebz
    I am working on a legacy code base with an existing DB schema. The existing code uses SQL and PL/SQL to execute queries on the DB. We have been tasked with making a small part of the project database-engine agnostic (at first; eventually everything). We have chosen to use Hibernate 3.3.2.GA and "*.hbm.xml" mapping files (as opposed to annotations). Unfortunately, it is not feasible to change the existing schema because we cannot regress any legacy features. The problem I am encountering is when I try to map a uni-directional, one-to-many relationship where the FK is also part of a composite PK. Here are the classes and mapping file...

    CompanyEntity.java

        public class CompanyEntity {
            private Integer id;
            private Set<CompanyNameEntity> names;
            ...
        }

    CompanyNameEntity.java

        public class CompanyNameEntity implements Serializable {
            private Integer id;
            private String languageId;
            private String name;
            ...
        }

    CompanyNameEntity.hbm.xml

        <?xml version="1.0"?>
        <!DOCTYPE hibernate-mapping PUBLIC
            "-//Hibernate/Hibernate Mapping DTD 3.0//EN"
            "http://www.jboss.org/dtd/hibernate/hibernate-mapping-3.0.dtd">
        <hibernate-mapping package="com.example">
            <class name="com.example.CompanyEntity" table="COMPANY">
                <id name="id" column="COMPANY_ID"/>
                <set name="names" table="COMPANY_NAME" cascade="all-delete-orphan" fetch="join" batch-size="1" lazy="false">
                    <key column="COMPANY_ID"/>
                    <one-to-many entity-name="vendorName"/>
                </set>
            </class>
            <class entity-name="companyName" name="com.example.CompanyNameEntity" table="COMPANY_NAME">
                <composite-id>
                    <key-property name="id" column="COMPANY_ID"/>
                    <key-property name="languageId" column="LANGUAGE_ID"/>
                </composite-id>
                <property name="name" column="NAME" length="255"/>
            </class>
        </hibernate-mapping>

    This code works just fine for SELECT and INSERT of a Company with names. I encountered a problem when I tried to update an existing record. I received a BatchUpdateException, and after looking through the SQL logs I saw Hibernate was trying to do something stupid...

        update COMPANY_NAME set COMPANY_ID=null where COMPANY_ID=?

    Hibernate was trying to dissociate the child records before updating them. The problem is that this field is part of the PK and not nullable. I found that the quick way to make Hibernate not do this is to add "not-null='true'" to the "key" element in the parent mapping. So now my mapping looks like this...

    CompanyNameEntity.hbm.xml

        <?xml version="1.0"?>
        <!DOCTYPE hibernate-mapping PUBLIC
            "-//Hibernate/Hibernate Mapping DTD 3.0//EN"
            "http://www.jboss.org/dtd/hibernate/hibernate-mapping-3.0.dtd">
        <hibernate-mapping package="com.example">
            <class name="com.example.CompanyEntity" table="COMPANY">
                <id name="id" column="COMPANY_ID"/>
                <set name="names" table="COMPANY_NAME" cascade="all-delete-orphan" fetch="join" batch-size="1" lazy="false">
                    <key column="COMPANY_ID" not-null="true"/>
                    <one-to-many entity-name="vendorName"/>
                </set>
            </class>
            <class entity-name="companyName" name="com.example.CompanyNameEntity" table="COMPANY_NAME">
                <composite-id>
                    <key-property name="id" column="COMPANY_ID"/>
                    <key-property name="languageId" column="LANGUAGE_ID"/>
                </composite-id>
                <property name="name" column="NAME" length="255"/>
            </class>
        </hibernate-mapping>

    This mapping gives the exception...

        org.hibernate.MappingException: Repeated column in mapping for entity: companyName column: COMPANY_ID (should be mapped with insert="false" update="false")

    My problem now is that I have tried to add these attributes to the key-property element, but that is not supported by the DTD. I have also tried changing it to a key-many-to-one element, but that didn't work either. So... how can I map "insert='false' update='false'" on a composite-id key-property which is also used in a one-to-many FK?
