Search Results

Search found 6905 results on 277 pages for 'fork join'.

Page 201/277

  • Nested Select in Rails

    - by James
    I am working on a Rails application which uses categories for items. My category model is self-joined so that categories can be nested: class Category < ActiveRecord::Base has_many :items # Self Join (categories can have subcategories) has_many :subcategories, :class_name => "Category", :foreign_key => "parent_id" belongs_to :parent, :class_name => "Category" ... end I have a form which allows a user to create an item which currently lists all categories in a select, but they are all listed together: <%= f.label :category_id %> <%= select :item, :category_id, Category.all.collect {|c| [ c.title, c.id ]} %> So the select looks something like this: Category1 Category2 Category3BelongsTo2 Category4BelongsTo1 But what I want is: Category1 - Category4BelongsTo1 Category2 - Category3BelongsTo2 Is there a helper for this (which would be awesome!)? If not, how could I accomplish this? Thanks!
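
    One possible approach, sketched here with an assumed helper name and a "- " prefix for indentation (none of this comes from the original question): build the option pairs in a helper that walks the top-level categories and appends each one's subcategories right after it.

        # Hypothetical helper: returns [label, id] pairs with every top-level
        # category followed by its indented subcategories.
        def nested_category_options
          Category.all.select { |c| c.parent_id.nil? }.map do |parent|
            [[parent.title, parent.id]] +
              parent.subcategories.map { |sub| ["- #{sub.title}", sub.id] }
          end.flatten(1)
        end

    The view would then call <%= select :item, :category_id, nested_category_options %> instead of collecting over Category.all.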

    Read the article

  • Exception handling in Linq to SQL for customers without orders

    - by stackoverflowuser
    I have the following code to retrieve customer name, total (orders), and sum (order details) for each customer in the Northwind database. The problem with the code below is that it raises an exception, since a few customers don't have any entry in the orders table. I know the exception can be avoided using the query syntax (join). I want to know if the same can be handled with the extension method syntax. var customerOrders = db.Customers .Select(c => new { CompanyName = c.CompanyName, TotalOrders = c.Orders.Count(), TotalQuantity = c.Orders .SelectMany(o => o.Order_Details).Sum(o=>o.Quantity) });
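
    A common fix, shown as a hedged sketch (property names follow the question; the exact exception depends on the mapping): cast the summed column to a nullable type so the NULL that SQL produces for customers without orders can be materialized, then coalesce it back to zero.

        // Summing an empty set yields NULL in SQL; summing int? absorbs that,
        // and ?? 0 turns the missing total back into zero.
        var customerOrders = db.Customers
            .Select(c => new
            {
                CompanyName   = c.CompanyName,
                TotalOrders   = c.Orders.Count(),
                TotalQuantity = c.Orders
                    .SelectMany(o => o.Order_Details)
                    .Sum(o => (int?)o.Quantity) ?? 0
            });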

    Read the article

  • Search filenames in MySQL database table restricted by filetype?

    - by ju
    Hello, I have a MySQL database that I replicate from another server. The database contains a table with the columns ID, FileName and FileSize. The table has more than 4,000,000 records. I want to make searches on the FileName (varchar) column fast, and I found that I can use the Sphinx search engine for this. The problem is that I want to restrict searches by filetype. Do I have to extract the file extensions for all rows, and if so how (triggers?)? Maybe I have to create another table (because this one is replicated) and join them in a 1:1 relation? Can you give me some advice please :)
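
    One way to make the restriction cheap, sketched under the assumption that the extension can live in a separate 1:1 side table (the table and column names below are made up, and "files" stands in for the replicated table): extract it once with SUBSTRING_INDEX, index it, and filter on that column alongside the Sphinx search.

        -- Side table kept in a 1:1 relation with the replicated table.
        CREATE TABLE file_extensions (
            ID      INT PRIMARY KEY,
            FileExt VARCHAR(16),
            INDEX idx_file_ext (FileExt)
        );

        -- SUBSTRING_INDEX(..., '.', -1) keeps everything after the last dot.
        INSERT INTO file_extensions (ID, FileExt)
        SELECT ID, LOWER(SUBSTRING_INDEX(FileName, '.', -1))
        FROM files;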

    Read the article

  • how to get the checked ids and unchecked ids using jquery

    - by kumar
    Hello friends, using this code I am able to get only the checked ids: $('#PbtnSubmit').click(function(event) { $('#PricingEditExceptions input[name=PMchk]').each(function() { if ($("#PricingEditExceptions input:checkbox:checked").length > 0) { var checked = $('#PricingEditExceptions input[type=checkbox]:checked'); var PMstrIDs = checked.map(function() { return $(this).val(); }).get().join(','); $('#1_exceptiontypes').attr('value', exceptiontypes) $('#1_PMstrIDs').attr('value', PMstrIDs); } else { alert("Please select atleast one exception"); event.preventDefault(); } }); }); The BeginForm: <% using (Html.BeginForm("MassUpdate", "Pricing", FormMethod.Post, new { @id = "exc-"})) I am getting all my checked ids to the controller perfectly, but how can I get both the checked and the unchecked ids using the above code? thanks
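
    A hedged sketch of one way to collect both sets inside the same handler (the selector scope follows the question; the variable names and where the values end up are assumptions):

        // Same checkbox collection, split into checked and unchecked ids.
        var $boxes = $('#PricingEditExceptions input[type=checkbox]');
        var checkedIds = $boxes.filter(':checked')
                               .map(function () { return $(this).val(); }).get().join(',');
        var uncheckedIds = $boxes.filter(':not(:checked)')
                                 .map(function () { return $(this).val(); }).get().join(',');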

    Read the article

  • How to sum up an array of integers in C#

    - by Filburt
    Is there a better, shorter way than iterating over the array? int[] arr = new int[] { 1, 2, 3 }; int sum = 0; for (int i = 0; i < arr.Length; i++) { sum += arr[i]; } Clarification: Better primarily means cleaner code, but hints on performance improvement are also welcome (like already mentioned: splitting large arrays). It's not like I was looking for a killer performance improvement - I just wondered if this very kind of syntactic sugar wasn't already available: "There's String.Join - what the heck about int[]?".
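
    If the project targets .NET 3.5 or later, LINQ's Enumerable.Sum gives exactly that kind of one-liner (a small sketch):

        // Requires: using System.Linq;
        int[] arr = new int[] { 1, 2, 3 };
        int sum = arr.Sum();                // 6
        int doubled = arr.Sum(x => x * 2);  // 12, with a selector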

    Read the article

  • Rails: render a partial from a plugin

    - by Sam
    I'm getting a missing template error after I try rendering a partial from a plugin. I have included the files with the following: %w{ models controllers helpers views }.each do |dir| path = File.join(File.dirname(__FILE__), 'app', dir) $LOAD_PATH << path ActiveSupport::Dependencies.load_paths << path ActiveSupport::Dependencies.load_once_paths.delete(path) end The models are getting loaded, but as for the other things I'm not sure what's going on. The helpers are not getting loaded either: when I just copied the contents of the partial from the plugin instead of using render :partial =>, it came up with a helper error. The question is how to be able to render :partial => from the views folder in my plugin.
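
    A hedged sketch of one likely missing piece (assuming a classic Rails 2.x plugin layout; the call would normally live in the plugin's init.rb): $LOAD_PATH only affects require, while templates are resolved through the controller's view paths, so the plugin's views directory has to be registered there as well.

        # Register the plugin's app/views directory with the template lookup.
        views_path = File.join(File.dirname(__FILE__), 'app', 'views')
        ActionController::Base.append_view_path(views_path)

    After that, render :partial => 'some/partial' should be able to find templates shipped inside the plugin.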

    Read the article

  • What are your favorite extension methods for C#/.NET? (codeplex.com/extensionoverflow)

    - by bovium
    Let's make a list of answers where you post your excellent and favorite extension methods. The requirement is that the full code must be posted, along with an example and an explanation of how to use it. Based on the high interest in this topic, I have set up an open source project called extensionoverflow on Codeplex. Please mark your answers with an acceptance to put the code in the Codeplex project. Please post the full source code and not a link. Codeplex news: 11.11.2008 XmlSerialize / XmlDeserialize is now implemented and unit tested. 11.11.2008 There is still room for more developers. ;-) Join NOW! 11.11.2008 Third contributor joined ExtensionOverflow, welcome to BKristensen. 11.11.2008 FormatWith is now implemented and unit tested. 09.11.2008 Second contributor joined ExtensionOverflow, welcome to chakrit. 09.11.2008 We need more developers. ;-) 09.11.2008 ThrowIfArgumentIsNull is now implemented and unit tested on Codeplex.
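
    For readers new to the pattern, a minimal sketch of the kind of helper being collected (ThrowIfArgumentIsNull is named in the news list above, but this particular signature is an assumption, not the project's actual code):

        // Requires: using System;
        // An extension method is a static method in a static class whose first
        // parameter carries the "this" modifier.
        public static class ArgumentExtensions
        {
            public static void ThrowIfArgumentIsNull<T>(this T value, string name) where T : class
            {
                if (value == null)
                    throw new ArgumentNullException(name);
            }
        }

        // Usage: someParameter.ThrowIfArgumentIsNull("someParameter");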

    Read the article

  • Is there alternative way to write this query?

    - by Kugel
    I have tables A, B, C, where A represents items which can have zero or more sub-items stored in C. Table B only has 2 foreign keys to connect A and C. I have this SQL query: select * from A where not exists (select * from B natural join C where B.id = A.id and C.value > 10); which says: "Give me every item from table A where all sub-items have a value of at most 10." Is there a way to optimize this? And is there a way to write this without using the exists operator?
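
    One equivalent formulation without exists, sketched as an anti-join (assuming id is the shared key, as in the original query): pre-select the ids of items that do have a sub-item over 10, left join against that set, and keep the rows with no match.

        -- big = ids of items having at least one sub-item with value > 10;
        -- the outer query keeps only the items with no such sub-item.
        SELECT A.*
        FROM A
        LEFT JOIN (
            SELECT DISTINCT B.id
            FROM B NATURAL JOIN C
            WHERE C.value > 10
        ) big ON big.id = A.id
        WHERE big.id IS NULL;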

    Read the article

  • deleting and reusing a temp table in a stored procedure

    - by Sheagorath
    Hi, I need to SELECT INTO a temp table multiple times in a loop, but I just can't do it: once the table has been created by SELECT INTO, you can't simply drop it at the end of the loop, because you can't delete a table and create it again in the same batch. So how can I delete a table in a stored procedure and create it again? Is it possible to do this without using a temp table? Here is a snippet of where I am actually using the temp table, which is supposed to be a pivoting algorithm: WHILE @offset<@NumDays BEGIN SELECT bg.*, j.ID, j.time, j.Status INTO #TEMP1 FROM #TEMP2 AS bg left outer join PersonSchedule j on bg.PersonID = j.PersonID and bg.TimeSlotDateTime = j.TimeSlotDateTime and j.TimeSlotDateTime = @StartDate + @offset DROP TABLE #TEMP2; SELECT * INTO #TEMP2 FROM #TEMP1 DROP TABLE #TEMP1 SET @offset = @offset + 1 END

    Read the article

  • DataMapper: using auto_migrate! with many-to-many dependencies?

    - by pschuegr
    Hi, I'm trying to migrate my app from MySQL to PostgreSQL, using Rails3-pre and the latest DataMapper. I have several models which are related through many-to-many relationships using :through => Resource, which means that DataMapper creates a join table with foreign keys for both models. I can't auto_migrate! these changes, because I keep getting this: ERROR: cannot drop table users because other objects depend on it DETAIL: constraint artist_users_owner_fk on table artist_users depends on table users constraint site_users_owner_fk on table site_users depends on table users HINT: Use DROP ... CASCADE to drop the dependent objects too. I have tried everything I can think of, and thought I had things working when I added :constraint => :skip to the field definition, but I keep getting that error back when I try to run auto_migrate. I thought that :skip meant that it would ignore the dependents, but maybe that only applies to deleting rows and not dropping tables? I should mention that I can run auto_migrate after I nuke the db once, but after that, errors. Any suggestions or advice much appreciated.

    Read the article

  • Avoiding dog-piling or thundering herd in a memcached expiration scenario

    - by Quintin Par
    I have the result of a query that is very expensive. It is the join of several tables and a map-reduce job. This is cached in memcached for 15 minutes. Once the cache expires, the queries are obviously run and the cache warmed again, but at the point of expiration the thundering herd problem can happen. One way to fix this, which I do right now, is to run a scheduled task that kicks in at the 14th minute. But somehow this looks very suboptimal to me. Another approach I like is nginx's proxy_cache_use_stale updating; mechanism: the webserver/machine continues to deliver the stale cache while a thread kicks in the moment expiration happens and updates the cache. Has someone applied this to a memcached scenario, though I understand this is a client-side strategy? If it helps, I use Django.
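
    A hedged sketch of a client-side "serve stale while one worker rebuilds" pattern using Django's cache API (the key names, timings and run_expensive_query are all assumptions): store the value with a hard TTL much longer than the logical one, and let cache.add() act as a short-lived lock so only one request rebuilds per window.

        import time
        from django.core.cache import cache

        SOFT_TTL = 15 * 60   # logical freshness window
        HARD_TTL = 60 * 60   # how long memcached actually keeps the value
        LOCK_TTL = 2 * 60    # at most one rebuild attempt per window

        def get_report():
            entry = cache.get('report')
            if entry is not None:
                value, stored_at = entry
                if time.time() - stored_at < SOFT_TTL:
                    return value
                if not cache.add('report:rebuild-lock', 1, LOCK_TTL):
                    return value           # someone else is rebuilding; serve stale
            value = run_expensive_query()   # hypothetical: the joins + map-reduce job
            cache.set('report', (value, time.time()), HARD_TTL)
            cache.delete('report:rebuild-lock')
            return value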

    Read the article

  • Elements are listed in a VOB but cannot be checked out/checked in in CCRC

    - by sunil devan
    Hi, there are 2 Windows domains, named OPR and BDC. The CCRC server is hosted in the OPR domain; users accessing it from the BDC domain are able to connect to CCRC, list the VOBs, and also join the project. However, performing any checkout/checkin or loading of resources takes a very long time, and even after a day it is still in the same state. Connectivity from the BDC domain to the OPR domain is fine (ping & tracert work fine). Could you please let me know if you have some idea about it? Thanks, Sunil

    Read the article

  • Data schema question

    - by Matt
    I am designing a data model for a local city page, more like requirements for it. So 4 tables: Country, State, City, Neighbourhood. The real-life relationship is: a country owns multiple states, which own multiple cities, which own multiple neighbourhoods. In the data model, do we link these with foreign keys the same way, or link each with each? That is, should each table carry a CountryID, StateID, CityID and NeighbourhoodID so everything is connected to everything? Otherwise, to reach a neighbourhood from a country we need to join 2 other tables in between. There are more tables I need to maintain for the IP addresses of cities, latitude/longitude, etc.
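
    A minimal sketch of the "each level references only its immediate parent" option (column names are assumed), with the trade-off that going from country to neighbourhood is a three-step join; denormalizing a CountryID onto the lower levels is an optional shortcut on top of this.

        CREATE TABLE Country (
            CountryID INT PRIMARY KEY,
            Name      VARCHAR(100)
        );
        CREATE TABLE State (
            StateID   INT PRIMARY KEY,
            CountryID INT NOT NULL REFERENCES Country(CountryID),
            Name      VARCHAR(100)
        );
        CREATE TABLE City (
            CityID  INT PRIMARY KEY,
            StateID INT NOT NULL REFERENCES State(StateID),
            Name    VARCHAR(100)
        );
        CREATE TABLE Neighbourhood (
            NeighbourhoodID INT PRIMARY KEY,
            CityID          INT NOT NULL REFERENCES City(CityID),
            Name            VARCHAR(100)
        );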

    Read the article

  • Oracle/C#: How do I use bind variables with select statements to return multiple records?

    - by twiga
    I have a question regarding Oracle bind variables and select statements. What I would like to achieve is to do a select on a number of different values for the primary key. I would like to pass these values via an array using bind variables. select * from tb_customers where cust_id = :1 int[] cust_id = { 11, 23, 31, 44, 51 }; I then bind a DataReader to get the values into a table. The problem is that the resulting table only contains a single record (for cust_id=51). Thus it seems that each statement is executed independently (as it should), but I would like the results to be available as a collective (single table). A workaround is to create a temporary table, insert all the values of cust_id, and then do a join against tb_customers. The problem with this approach is that I would require temporary tables for every different type of primary key, as I would like to use this against a number of tables (some even have combined primary keys). Is there anything I am missing?
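
    A hedged sketch of one alternative that avoids both array binding and a temporary table (standard ADO.NET calls; the parameter naming and the surrounding connection object are assumptions): build an IN list with one bind variable per value, so a single execution returns all matching rows.

        // Requires an open OracleConnection (or any IDbConnection) named connection.
        int[] custIds = { 11, 23, 31, 44, 51 };

        var placeholders = new string[custIds.Length];
        for (int i = 0; i < custIds.Length; i++)
            placeholders[i] = ":p" + i;

        var cmd = connection.CreateCommand();
        cmd.CommandText = "SELECT * FROM tb_customers WHERE cust_id IN ("
                        + string.Join(",", placeholders) + ")";

        for (int i = 0; i < custIds.Length; i++)
        {
            var p = cmd.CreateParameter();      // one bind variable per id
            p.ParameterName = placeholders[i];
            p.Value = custIds[i];
            cmd.Parameters.Add(p);
        }

        using (var reader = cmd.ExecuteReader())
        {
            // read all rows into the table as before
        }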

    Read the article

  • sql select with exact outcome

    - by Shiro
    Asking a simple question, just want everyone to have fun solving it. I have 2 tables: 1. Student 2. Course

    Student
    +----+--------+
    | id | name   |
    +----+--------+
    | 1  | User1  |
    | 2  | User2  |
    +----+--------+

    Course
    +----+------------+------------+
    | id | student_id | course_name|
    +----+------------+------------+
    | 1  | 1          | English    |
    | 2  | 1          | Chinese    |
    | 3  | 2          | English    |
    | 4  | 2          | Japanese   |
    +----+------------+------------+

    I would like to get all students who have taken both English and Chinese, not English or Chinese. Expected result:

    +----+------------+------------+
    | id | student_id | course_name|
    +----+------------+------------+
    | 1  | 1          | English    |
    | 2  | 1          | Chinese    |
    +----+------------+------------+

    What we normally do is select * from student join course on (student.id = course.student_id) WHERE course_name = 'English' OR course_name = 'Chinese' but with this I also get the User2 record, which is not my expected result. I want the result to contain only users who take both English and Chinese.
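
    One way to express "took both courses", as a hedged sketch (table and column names follow the question; whether students who also take other courses should be excluded is left open): group the English/Chinese rows per student and require both to be present.

        SELECT c.*
        FROM course c
        JOIN (
            SELECT student_id
            FROM course
            WHERE course_name IN ('English', 'Chinese')
            GROUP BY student_id
            HAVING COUNT(DISTINCT course_name) = 2   -- must have both
        ) both_courses ON both_courses.student_id = c.student_id
        WHERE c.course_name IN ('English', 'Chinese');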

    Read the article

  • Retrieving top 10 rows and summing all others in row 11

    - by Mario
    Hello all, I have the following query that retrieves the number of users per country: SELECT C.CountryID AS CountryID, C.CountryName AS Country, Count(FirstName) AS Origin FROM Users AS U INNER JOIN Country AS C ON C.CountryID = U.CountryOfOrgin GROUP BY C.CountryName, C.CountryID What I need is a way to get the top 10 and then sum all other users into a single row. I know how to get the top 10, but I'm stuck on getting the remaining ones into a single row. Is there a simple way to do it? For example, if the above query returns 17 records, the top ten are displayed and a sum of the users from the 7 remaining countries should appear on row 11. On that row 11 the CountryID would be 0 and the CountryName 'Others'. Thanks for your help!
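
    A hedged sketch of one way to do it in a single statement (SQL Server syntax assumed; table and column names follow the question): rank the per-country counts, keep the first ten, and fold the rest into an 'Others' row.

        WITH PerCountry AS (
            SELECT C.CountryID,
                   C.CountryName AS Country,
                   COUNT(U.FirstName) AS Origin,
                   ROW_NUMBER() OVER (ORDER BY COUNT(U.FirstName) DESC) AS rn
            FROM Users AS U
            INNER JOIN Country AS C ON C.CountryID = U.CountryOfOrgin
            GROUP BY C.CountryID, C.CountryName
        )
        SELECT CountryID, Country, Origin FROM PerCountry WHERE rn <= 10
        UNION ALL
        SELECT 0, 'Others', SUM(Origin) FROM PerCountry WHERE rn > 10
        HAVING COUNT(*) > 0;   -- drop the Others row when there is nothing to fold in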

    Read the article

  • MySQL easy question CURDATE()

    - by Tristan
    I want to compare two results: one is stored in the first query, and the other is exactly the same as the first, but I only want to receive data from before today. "SELECT s.GSP_nom as nom, timestamp, COUNT(s.GSP_nom) as nb_votes, AVG(v.vote+v.prix+v.serviceClient+v.interface+v.interface+v.services)/6 as moy FROM votes_serveur AS v INNER JOIN serveur AS s ON v.idServ = s.idServ WHERE s.valide = 1 AND v.date < CURDATE() GROUP BY s.GSP_nom HAVING nb_votes > 9 ORDER BY moy DESC LIMIT 0,15"; Is that correct? Thank you.

    Read the article

  • mysql - filtering a list against keywords, both list and keywords > 20 million records

    - by threecheeseopera
    I have two tables, both having more than 20 million records; table1 is a list of terms, and table2 is a list of keywords that may or may not appear in those terms. I need to identify the terms that contain a keyword. My current strategy is: SELECT table1.term, table2.keyword FROM table1 INNER JOIN table2 ON table1.term LIKE CONCAT('%', table2.keyword, '%'); This is not working, it takes f o r e v e r. It's not the server (see notes). How might I rewrite this so that it runs in under a day? Notes: As for server optimization: both tables are myisam and have unique indexes on the matching fields; the myisam key buffer is greater than the sum of both index file sizes, and it is not even being fully taxed (key_blocks_unused is ... large); the server is a dual-xeon 2U beast with fast sas drives and 8G of ram, fine-tuned for the mysql workload.

    Read the article

  • Killing a script launched in a Process via os.system()

    - by L.J.
    I have a python script which launches several processes. Each process basically just calls a shell script: from multiprocessing import Process import os import logging def thread_method(n = 4): global logger command = "~/Scripts/run.sh " + str(n) + " >> /var/log/mylog.log" if (debug): logger.debug(command) os.system(command) I launch several of these threads, which are meant to run in the background. I want to have a timeout on these threads, such that if it exceeds the timeout, they are killed: t = [] for x in range(10): try: t.append(Process(target=thread_method, args=(x,) ) ) t[-1].start() except Exception as e: logger.error("Error: unable to start thread") logger.error("Error message: " + str(e)) logger.info("Waiting up to 60 seconds to allow threads to finish") t[0].join(60) for n in range(len(t)): if t[n].is_alive(): logger.info(str(n) + " is still alive after 60 seconds, forcibly terminating") t[n].terminate() The problem is that calling terminate() on the process threads isn't killing the launched run.sh script - it continues running in the background until I either force kill it from the command line, or it finishes internally. Is there a way to have terminate also kill the subshell created by os.system()?
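
    A hedged sketch of one workaround (names follow the question; the signal-forwarding detail is an assumption about how run.sh should be stopped): replace os.system() with subprocess.Popen() started in its own process group, and have the worker forward the SIGTERM that Process.terminate() sends on Unix to that whole group, so the shell, run.sh and any children die together.

        import os
        import signal
        import subprocess

        def thread_method(n=4):
            command = "~/Scripts/run.sh " + str(n) + " >> /var/log/mylog.log"
            # os.setsid makes the shell the leader of a new process group.
            proc = subprocess.Popen(command, shell=True, preexec_fn=os.setsid)

            def on_terminate(signum, frame):
                os.killpg(proc.pid, signal.SIGTERM)   # kill shell + run.sh + children
                raise SystemExit(1)

            # Process.terminate() delivers SIGTERM to this worker; forward it.
            signal.signal(signal.SIGTERM, on_terminate)
            proc.wait()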

    Read the article

  • I want to get 2 values returned by my query. How to do this using LINQ to Entities?

    - by Shantanu Gupta
    var dept_list = (from map in DtMapGuestDepartment.AsEnumerable() where map.Field<Nullable<long>>("GUEST_ID") == DRowGuestPI.Field<Nullable<long>>("PK_GUEST_ID") join dept in DtDepartment.AsEnumerable() on map.Field<Nullable<long>>("DEPARTMENT_ID") equals dept.Field<Nullable<long>>("DEPARTMENT_ID") select new { dept_id=dept.Field<long>("DEPARTMENT_ID") ,dept_name=dept.Field<long>("DEPARTMENT_NAME") }).Distinct(); DataTable dt = new DataTable(); dt.Columns.Add("DEPARTMENT_ID"); dt.Columns.Add("DEPARTMENT_NAME"); foreach (long? dept_ in dept_list) { dt.Rows.Add(dept_[0], dept_[1]); } EDIT: In a previous question I asked, I got an answer like this for a single value. What is the difference between the two? foreach (long? dept in dept_list) { dt.Rows.Add(dept); }
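
    A hedged sketch of how the two-value case usually looks: the query already projects an anonymous type carrying both fields, so the loop variable should be var and the members accessed by name rather than by index (note the original projection also reads DEPARTMENT_NAME as long, which would more likely be a string).

        // dept is the anonymous type { dept_id, dept_name }, not a long?.
        foreach (var dept in dept_list)
        {
            dt.Rows.Add(dept.dept_id, dept.dept_name);
        }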

    Read the article

  • Long-running Database Query

    - by JamesMLV
    I have a long-running SQL Server 2005 query that I have been hoping to optimize. When I look at the actual execution plan, it says a Clustered Index Seek has 66% of the cost. Execuation Plan Snippit: <RelOp AvgRowSize="31" EstimateCPU="0.0113754" EstimateIO="0.0609028" EstimateRebinds="0" EstimateRewinds="0" EstimateRows="10198.5" LogicalOp="Clustered Index Seek" NodeId="16" Parallel="false" PhysicalOp="Clustered Index Seek" EstimatedTotalSubtreeCost="0.0722782"> <OutputList> <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="quoteDate" /> <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="price" /> <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="tenure" /> </OutputList> <RunTimeInformation> <RunTimeCountersPerThread Thread="0" ActualRows="1067" ActualEndOfScans="1" ActualExecutions="1" /> </RunTimeInformation> <IndexScan Ordered="true" ScanDirection="FORWARD" ForcedIndex="false" NoExpandHint="false"> <DefinedValues> <DefinedValue> <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="quoteDate" /> </DefinedValue> <DefinedValue> <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="price" /> </DefinedValue> <DefinedValue> <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="tenure" /> </DefinedValue> </DefinedValues> <Object Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Index="[_dta_index_Indices_14_320720195__K5_K2_K1_3]" Alias="[I]" /> <SeekPredicates> <SeekPredicate> <Prefix ScanType="EQ"> <RangeColumns> <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="HedgeProduct" ComputedColumn="true" /> </RangeColumns> <RangeExpressions> <ScalarOperator ScalarString="(1)"> <Const ConstValue="(1)" /> </ScalarOperator> </RangeExpressions> </Prefix> <StartRange ScanType="GE"> <RangeColumns> <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="tenure" /> </RangeColumns> <RangeExpressions> <ScalarOperator ScalarString="[@StartMonth]"> <Identifier> <ColumnReference Column="@StartMonth" /> </Identifier> </ScalarOperator> </RangeExpressions> </StartRange> <EndRange ScanType="LE"> <RangeColumns> <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="tenure" /> </RangeColumns> <RangeExpressions> <ScalarOperator ScalarString="[@EndMonth]"> <Identifier> <ColumnReference Column="@EndMonth" /> </Identifier> </ScalarOperator> </RangeExpressions> </EndRange> </SeekPredicate> </SeekPredicates> </IndexScan> </RelOp> From this, does anyone see an obvious problem that would be causing this to take so long? 
Here is the query: (SELECT quotedate, tenure, price, ActualVolume, HedgePortfolioValue, Price AS UnhedgedPrice, ((ActualVolume*Price - HedgePortfolioValue)/ActualVolume) AS HedgedPrice FROM ( SELECT [quoteDate] ,[price] , tenure ,isnull(wf_1.[Risks].[HedgePortValueAsOfDate2](1,tenureMonth,quotedate,price),0) as HedgePortfolioValue ,[TotalOperatingGasVolume] as ActualVolume FROM [wf_1].[dbo].[Indices] I inner join ( SELECT DISTINCT tenureMonth FROM [wf_1].[Risks].[KnowRiskTrades] WHERE HedgeProduct = 1 AND portfolio <> 'Natural Gas Hedge Transactions' ) B ON I.tenure=B.tenureMonth inner join ( SELECT [Month],[TotalOperatingGasVolume] FROM [wf_1].[Risks].[ActualGasVolumes] ) C ON C.[Month]=B.tenureMonth WHERE HedgeProduct = 1 AND quoteDate>=dateadd(day, -3*365, tenureMonth) AND quoteDate<=dateadd(day,-3,tenureMonth) )A )

    Read the article

  • Joining tables from 2 different connection strings

    - by krio
    Hello, I need to join two tables from different MySQL (PHP) connection strings and different databases. $conn = mysql_connect('192.168.30.20', 'user', 'pass'); $conn2 = mysql_connect('anotherIPHere', 'user2', 'pass2'); $db = mysql_select_db('1stdb', $conn); $db2 = mysql_select_db('2nddb', $conn2); If I were using the same connection I would just prefix the tables with the db names, such as database1.table1.column and database2.table2.column2, but since I'm using two completely separate connection strings the MySQL query does not know which connection string to use, so the resource is not usable. I've read a ton of resources that show how to use two databases from the SAME connection string, and that is working fine, but I can't find anything related to multiple connection strings and databases. Thanks
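
    If both servers allow it, one hedged option is MySQL's FEDERATED storage engine (sketch below; the table definition and column names are placeholders and must mirror the remote table): create a local proxy table on the first server that points at the remote one, after which an ordinary join works over a single PHP connection.

        -- On the first server (the FEDERATED engine must be enabled).
        CREATE TABLE `1stdb`.`remote_table2` (
            id INT NOT NULL,
            some_column VARCHAR(255),
            PRIMARY KEY (id)
        ) ENGINE=FEDERATED
          CONNECTION='mysql://user2:pass2@anotherIPHere:3306/2nddb/table2';

        -- Now a normal join, with no second connection needed in PHP.
        SELECT t1.*, r2.some_column
        FROM `1stdb`.`table1` AS t1
        JOIN `1stdb`.`remote_table2` AS r2 ON r2.id = t1.id;

    The other common route is to query each server separately in PHP and combine the rows in application code.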

    Read the article

  • killing a separate thread having a socket

    - by user311906
    Hi all, I have a separate thread, ListenerThread, with a socket listening to info broadcast by some remote server. This is created in the constructor of a class I need to develop. Because of requirements, once the separate thread is started I need to avoid any blocking function on the main thread. When it comes to the point of calling the destructor of my class, I cannot perform a join on the listener thread, so the only thing I can do is to KILL it. My questions are: what happens to the network resources allocated by the function passed to the thread? Is the socket closed properly, or might there be something pending? (most worried about this) Is this procedure fast enough, i.e. is the thread killed and interrupted immediately? I am working with Linux; what command can I use, or what can I check, to ensure that there is no networking resource left pending or that something went wrong for the operating system? Thank you very much for your help. Regards, MNSTN NOTE: I am using boost::thread in C++
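
    A hedged sketch of the usual alternative to killing the thread (plain POSIX sockets; the names and the one-second timeout are assumptions): give the blocking recv() a receive timeout and have the loop check a stop flag, so the destructor can set the flag, join() quickly, and let the thread close its own socket, leaving nothing pending.

        #include <sys/socket.h>
        #include <sys/time.h>
        #include <unistd.h>

        volatile bool stop_requested = false;   // set by the destructor before join()

        void listener_loop(int sock)
        {
            timeval tv = {1, 0};                               // wake up at least once a second
            setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

            char buf[4096];
            while (!stop_requested) {
                ssize_t n = recv(sock, buf, sizeof(buf), 0);   // returns -1 on timeout
                if (n > 0) {
                    // handle the broadcast data ...
                }
            }
            close(sock);   // the thread owns the socket cleanup
        }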

    Read the article

  • How can I extract just the elements I want from a Perl array?

    - by Flamewires
    Hey I'm wondering how I can get this code to work. Basically I want to keep the lines of $filename as long as they contain the $user in the path: open STDERR, ">/dev/null"; $filename=`find -H /home | grep $file`; @filenames = split(/\n/, $filename); for $i (@filenames) { if ($i =~ m/$user/) { #keep results } else { delete $i; # does not work. } } $filename = join ("\n", @filenames); close STDERR; I know you can delete like delete $array[index] but I don't have an index with this kind of loop that I know of.
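
    A hedged sketch of the idiomatic fix: grep keeps just the matching elements, which replaces the delete-inside-the-loop attempt entirely (\Q...\E quotes any regex metacharacters in $user).

        @filenames = grep { /\Q$user\E/ } @filenames;
        $filename  = join("\n", @filenames);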

    Read the article
