Search Results

Search found 15803 results on 633 pages for 'self join'.

Page 529 of 633

  • Problem using Hibernate Projections

    - by Lucas
    Hello! I'm using RichFaces + Hibernate queries to create a data list. I'm trying to use Hibernate Projections to group my query result. Here is the code:

        final DetachedCriteria criteria = DetachedCriteria
            .forClass(Class.class, "c")
            .setProjection(Projections.projectionList()
                .add(Projections.groupProperty("c.id")));

    ... and in the .xhtml file I have the following code:

        <rich:dataTable width="100%" id="dataTable" value="#{myBean.dataModel}" var="row">
            <f:facet name="header">
                <rich:columnGroup>
                    ......
                </rich:columnGroup>
            </f:facet>
            <h:column>
                <h:outputText value="#{row.id}"/>
            </h:column>
            <h:column>
                <h:outputText value="#{row.name}"/>
            </h:column>

    But when I run the page it gives me the following error:

        Error: value="#{row.id}": The class 'java.lang.Long' does not have the property 'id'.

    If I take the Projection out of the code it works correctly, but it doesn't group the result. So what mistake could I be making here?

    EDIT: Here is the full criteria:

        final DetachedCriteria criteria = DetachedCriteria.forClass(Class.class, "c");
        criteria.setFetchMode("e.zzzzz", FetchMode.JOIN);
        criteria.createAlias("e.aaaaaaaa", "aa");
        criteria.add(Restrictions.ilike("aa.information", "informations...."));
        criteria.setProjection(Projections.distinct(Projections.projectionList()
            .add(Projections.groupProperty("e.id").as("e.id"))));
        getDao().findByCriteria(criteria);

    If I take out the "setProjection" line it works fine. I don't understand why adding that line gives that error.

    Read the article

  • More than 100 connections to SQL Server 2008 in "sleeping" status - Solved

    - by Allende
    I have a big problem here, well, at my server. I have an ASP.NET web application (framework 4.x) running on my server; all the transactions/selects/updates/inserts are made with ADO.NET. My problem is that after it has been in use for a while (a couple of updates/selects/inserts), sometimes I get more than 100 connections in "sleeping" status when I check the connections on SQL Server with this query:

        SELECT spid, a.status, hostname, program_name, cmd, cpu, physical_io, blocked, b.name, loginame
        FROM master.dbo.sysprocesses a
        INNER JOIN master.dbo.sysdatabases b ON a.dbid = b.dbid
        where program_name like '%TMS%'
        ORDER BY spid

    I've been checking my code and closing the connection every time I open one. I'm going to test the new class, but I'm afraid the problem won't be fixed. The connection pool is supposed to keep connections around to re-use them, but from what I can see it doesn't always re-use them. Any ideas, besides checking that all open connections are closed after use?

    SOLVED (now I have just one beautiful connection in "sleeping" status): Besides the answer from David Stratton, I would like to share this link that helps explain really well how the connection pool works: http://dinesql.blogspot.com/2010/07/sql-server-sleeping-status-and.html

    To keep it short: you need to close every connection (SqlConnection object) so that the connection pool can re-use it, and use the same connection string everywhere; to ensure this it is highly recommended to keep it in the web.config. Be careful with DataReaders: you should close their connections too (that was what drove me mad for a while).

    Read the article

  • How to create a view to manage associations between HABTM models? (Rails)

    - by Chris Hart
    Hello, I am using Ruby on Rails and need to create a view that allows the creation of records through a HABTM relationship to another model. Specifically, I have the following models: Customer and ServiceOverride, and a join table customers_serviceoverrides. Using the customer view for create/update, I need to be able to create, update and delete ServiceOverrides and manage the attributes of the associated model(s) from the same view. Visually I'd prefer to have something like a plus/minus sign to add/delete service overrides, and each serviceoverride record has two string attributes which need to be displayed and editable as well. However, if I could just get the code working (a kind of nested form, I'm assuming?), I could work out the UI aspects. The models are pretty simple:

        class ServiceOverride < ActiveRecord::Base
          has_and_belongs_to_many :customers
        end

        class Customer < ActiveRecord::Base
          has_and_belongs_to_many :serviceoverrides
        end

    The closest thing I've found explaining this online is on this blog, but it doesn't really address what I'm trying to do (both managing the linkages to the other model and editing the attributes of that model). Any help is appreciated. Thanks in advance. Chris

    Read the article

  • Problem counting item frequency in T-SQL

    - by Raúl Roa
    I'm trying to count the frequency of the numbers 1 to 100 across different fields of a table. Let's say I have the table "Results" with the following data:

        LottoId   Winner    Second    Third
        --------- --------- --------- ---------
        1         1         2         3
        2         1         2         3

    I'd like to be able to get the frequency per number. For that I'm using the following code:

        --Creating numbers temp table
        CREATE TABLE #Numbers(
            Number int)

        --Inserting the numbers into the temp table
        declare @counter int
        set @counter = 0
        while @counter < 100
        begin
            set @counter = @counter + 1
            INSERT INTO #Numbers(Number) VALUES(@counter)
        end

        --
        SELECT #Numbers.Number,
               Count(Results.Winner) as Winner,
               Count(Results.Second) as Second,
               Count(Results.Third) as Third
        FROM #Numbers
        LEFT JOIN Results
               ON #Numbers.Number = Results.Winner
               OR #Numbers.Number = Results.Second
               OR #Numbers.Number = Results.Third
        GROUP BY #Numbers.Number

    The problem is that the counts repeat the same values for each number. In this particular case I'm getting the following result:

        Number    Winner    Second    Third
        --------- --------- --------- ---------
        1         2         2         2
        2         2         2         2
        3         2         2         2
        ...

    When I should get this:

        Number    Winner    Second    Third
        --------- --------- --------- ---------
        1         2         0         0
        2         0         2         0
        3         0         0         2
        ...

    What am I missing?
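
    A possible fix (a sketch based only on the schema shown above, not tested): COUNT(Results.Winner) counts every joined row where Winner is non-NULL, regardless of which column actually matched the number, which is why all three counts come out the same. Making each count conditional on its own column keeps them separate:

        SELECT n.Number,
               SUM(CASE WHEN r.Winner = n.Number THEN 1 ELSE 0 END) AS Winner,
               SUM(CASE WHEN r.Second = n.Number THEN 1 ELSE 0 END) AS Second,
               SUM(CASE WHEN r.Third  = n.Number THEN 1 ELSE 0 END) AS Third
        FROM #Numbers n
        LEFT JOIN Results r
               ON n.Number IN (r.Winner, r.Second, r.Third)
        GROUP BY n.Number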

    Read the article

  • EJB3 Entity and Lazy Problem

    - by Stefano
    My entity bean has 2 lists:

        @Entity
        @Table(name = "TABLE_INTERNAL")
        public class Internal implements java.io.Serializable {
            ...SOME GETTERS AND SETTERS...
            private List<Match> matchs;
            private List<Regional> regionals;
        }

    mapped one FetchType.LAZY and one FetchType.EAGER:

        @OneToMany(fetch = FetchType.LAZY, mappedBy = "internal")
        public List<Match> getMatchs() {
            return matchs;
        }
        public void setMatchs(List<Match> matchs) {
            this.matchs = matchs;
        }

        @ManyToMany(targetEntity = Regional.class, mappedBy = "internals", fetch = FetchType.EAGER)
        public List<Regional> getRegionals() {
            return regionals;
        }
        public void setRegionals(List<Regional> regionals) {
            this.regionals = regionals;
        }

    I need both lists filled! But I can't use two FetchType.EAGER because that's an error. I tried some tests:

        List<Internal> out;
        out = em.createQuery("from Internal").getResultList();
        out = em.createQuery("from Internal i JOIN FETCH i.regionals").getResultList();

    I'm not able to fill both lists... Help!!! Stefano

    Read the article

  • shoulda macros with rspec2 beta 5 and rails3 beta2

    - by Millisami
    I've set up RSpec 2 beta 5 and shoulda as follows to use shoulda macros inside RSpec model tests.

    Gemfile:

        group :test do
          gem "rspec", ">= 2.0.0.beta.4"
          gem "rspec-rails", ">= 2.0.0.beta.4"
          gem 'shoulda', :git => 'git://github.com/bmaddy/shoulda.git'
          gem "faker"
          gem "machinist"
          gem "pickle", :git => 'git://github.com/codegram/pickle.git'
          gem 'capybara', :git => 'git://github.com/jnicklas/capybara.git'
          gem 'database_cleaner', :git => 'git://github.com/bmabey/database_cleaner.git'
          gem 'cucumber-rails', :git => 'git://github.com/aslakhellesoy/cucumber-rails.git'
        end

    spec_helper.rb:

        Dir["#{File.dirname(__FILE__)}/support/**/*.rb"].each {|f| require f}
        require 'shoulda'
        Rspec.configure do |config|

    spec/models/outlet_spec.rb:

        require 'spec_helper'
        describe Outlet do
          it { should validate_presence_of(:name) }
        end

    And when I run the spec, I get the following error:

        [~/rails_apps/rails3_apps/automation (master)?]
        ? spec spec/models/outlet_spec.rb
        DEPRECATION WARNING: RAILS_ROOT is deprecated! Use Rails.root instead.
        (called from join at /home/millisami/.rvm/gems/ruby-1.9.1-p378%rails3/bundler/gems/shoulda-87e75311f83548760114cd4188afa4f83fecdc22-master/lib/shoulda/autoload_macros.rb:40)
        F
        1) Outlet
           Failure/Error: it { should validate_presence_of(:name) }
           undefined method `validate_presence_of' for #<Rspec::Core::ExampleGroup::Nested_1:0xc4dc138 @__memoized={}>
           # ./spec/models/outlet_spec.rb:4:in `block (2 levels) in <top (required)>'
        Finished in 0.0399 seconds
        1 example, 1 failures

    Why the "undefined method"? Is shoulda getting loaded at all?

    Read the article

  • should this database table be normalized?

    - by oo
    I have taken over a database that stores fitness information, and we were having a debate about a certain table and whether it should stay as one table or get broken up into three tables. Today there is one table called workouts that has the following fields:

        id, exercise_id, reps, weight, date, person_id

    So if I did 2 sets of 3 different exercises on one day, I would have 6 records in that table for that day. For example:

        id, exercise_id, reps, weight, date, person_id
        1,  1, 10, 100, 1/1/2010, 10
        2,  1, 10, 100, 1/1/2010, 10
        3,  1, 10, 100, 1/1/2010, 10
        4,  2, 10, 100, 1/1/2010, 10
        5,  2, 10, 100, 1/1/2010, 10
        6,  2, 10, 100, 1/1/2010, 10

    So the question is: given that there is some redundant data (date, person_id, exercise_id) in multiple records, should this be normalized into three tables?

        WorkoutSummary:
        - id
        - date
        - person_id

        WorkoutExercise:
        - id
        - workout_id (foreign key into WorkoutSummary)
        - exercise_id

        WorkoutSets:
        - id
        - workout_exercise_id (foreign key into WorkoutExercise)
        - reps
        - weight

    I would guess the downside is that queries would be slower after this refactoring, as we would now need to join 3 tables to do the same query that needed no joins before. The benefit of the refactoring is that it allows us in the future to add new fields at the workout summary level or the exercise level without adding more duplication. Any feedback on this debate?
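
    For reference, a minimal DDL sketch of the proposed three-table layout (the column types are assumptions; adjust them to the actual data):

        CREATE TABLE WorkoutSummary (
            id        INT PRIMARY KEY,
            date      DATE NOT NULL,
            person_id INT  NOT NULL
        );

        CREATE TABLE WorkoutExercise (
            id          INT PRIMARY KEY,
            workout_id  INT NOT NULL REFERENCES WorkoutSummary(id),
            exercise_id INT NOT NULL
        );

        CREATE TABLE WorkoutSets (
            id                  INT PRIMARY KEY,
            workout_exercise_id INT NOT NULL REFERENCES WorkoutExercise(id),
            reps                INT,
            weight              DECIMAL(6,2)
        );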

    Read the article

  • Is it possible to reuse subqueries?

    - by Gothmog
    Hello, I'm having some problems trying to write a query. I have two tables, one with element information, and another with records related to the elements of the first table. The idea is to get, in the same row, the element information plus the information from several records. The structure could be explained like this:

        table [ id, name ]
        [1, '1'], [2, '2']

        table2 [ id, type, value ]
        [1, 1, '2009-12-02']
        [1, 2, '2010-01-03']
        [1, 4, '2010-01-03']
        [2, 1, '2010-01-02']
        [2, 2, '2010-01-02']
        [2, 2, '2010-01-03']
        [2, 3, '2010-01-07']
        [2, 4, '2010-01-07']

    And this is what I would like to achieve:

        result [id, name, Column1, Column2, Column3, Column4]
        [1, '1', '2009-12-02', '2010-01-03', ,             '2010-01-03']
        [2, '2', '2010-01-02', '2010-01-02', '2010-01-07', '2010-01-07']

    The following query gets the proper result, but it seems extremely inefficient to me, having to iterate table2 for each column. Would it be possible in any way to do a subquery and reuse it?

        SELECT a.id, a.name,
               (select min(value) from table2 t where t.id = subquery.id and t.type = 1 group by t.type) as Column1,
               (select min(value) from table2 t where t.id = subquery.id and t.type = 2 group by t.type) as Column2,
               (select min(value) from table2 t where t.id = subquery.id and t.type = 3 group by t.type) as Column3,
               (select min(value) from table2 t where t.id = subquery.id and t.type = 4 group by t.type) as Column4
        FROM (SELECT distinct id
              FROM table2 t
              WHERE (t.type in (1, 2, 3, 4))
                AND t.value between '2010-01-01' and '2010-01-07') as subquery
        LEFT JOIN table a ON a.id = subquery.id
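
    One possible rewrite (a sketch based only on the tables shown above, not tested): instead of one correlated subquery per column, scan table2 once and pivot with conditional aggregation. The subquery in the WHERE clause keeps the same id filter as the original derived table, and the per-type minimum is still taken over all rows for those ids:

        SELECT t.id,
               a.name,
               MIN(CASE WHEN t.type = 1 THEN t.value END) AS Column1,
               MIN(CASE WHEN t.type = 2 THEN t.value END) AS Column2,
               MIN(CASE WHEN t.type = 3 THEN t.value END) AS Column3,
               MIN(CASE WHEN t.type = 4 THEN t.value END) AS Column4
        FROM table2 t
        LEFT JOIN table a ON a.id = t.id
        WHERE t.id IN (SELECT id
                       FROM table2
                       WHERE type IN (1, 2, 3, 4)
                         AND value BETWEEN '2010-01-01' AND '2010-01-07')
        GROUP BY t.id, a.name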

    Read the article

  • SQL database problems with addressbook table design

    - by Sebastian Hoitz
    Hello! I am writing an address-book module for my software right now. I have the database set up so that it supports a very flexible address-book configuration. I can create n entries for every type I want. Type here means data like 'email', 'address', 'telephone', etc.

    I have a table named contact_profiles. This only has two columns:

        id            Primary key
        date_created  DATETIME

    And then there is a table called contact_attributes. This one is a little more complex:

        id        PK
        #profile  (foreign key to contact_profiles.id)
        type      VARCHAR describing the type of the entry (name, email, phone, fax, website, ...) - I should probably change this to a SET later.
        value     TEXT (containing the value for the attribute)

    I can now link to these profiles, for example from my users table. But from here I run into problems. At the moment I would have to create a JOIN for each value that I want to retrieve. Is there a possibility to somehow create a view that gives me a result with the types as columns? Right now I would get something like:

        #profile  type     value
        1         email    [email protected]
        1         name     Sebastian Hoitz
        1         website  domain.tld

    But it would be nice to get a result like this:

        #profile  email              name             website
        1         [email protected]  Sebastian Hoitz  domain.tld

    The reason I do not want to create the table layout like this initially is that there might always be things to add, and I want to be able to have multiple attributes of the same type. So do you know if there is any possibility to convert this dynamically? If you need a better description please let me know. Thank you!
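
    A possible starting point (a sketch only; the attribute types are hard-coded, and the foreign-key column is called profile_id here because "#profile" is not a valid unquoted column name): a view that pivots the attribute rows into columns with conditional aggregation. Note that if a profile has several attributes of the same type, MAX() keeps only one of them per column.

        CREATE VIEW contact_profile_flat AS
        SELECT profile_id,
               MAX(CASE WHEN type = 'name'    THEN value END) AS name,
               MAX(CASE WHEN type = 'email'   THEN value END) AS email,
               MAX(CASE WHEN type = 'website' THEN value END) AS website
        FROM contact_attributes
        GROUP BY profile_id;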

    Read the article

  • record output sound in python

    - by aaronstacy
    I want to programmatically record the sound coming out of my laptop in Python. I found PyAudio and came up with the following program that accomplishes the task:

        import pyaudio, wave, sys

        chunk = 1024
        FORMAT = pyaudio.paInt16
        CHANNELS = 1
        RATE = 44100
        RECORD_SECONDS = 5
        WAVE_OUTPUT_FILENAME = sys.argv[1]

        p = pyaudio.PyAudio()
        channel_map = (0, 1)
        stream_info = pyaudio.PaMacCoreStreamInfo(
            flags = pyaudio.PaMacCoreStreamInfo.paMacCorePlayNice,
            channel_map = channel_map)

        stream = p.open(format = FORMAT,
                        rate = RATE,
                        input = True,
                        input_host_api_specific_stream_info = stream_info,
                        channels = CHANNELS)

        all = []
        for i in range(0, RATE / chunk * RECORD_SECONDS):
            data = stream.read(chunk)
            all.append(data)
        stream.close()
        p.terminate()

        data = ''.join(all)
        wf = wave.open(WAVE_OUTPUT_FILENAME, 'wb')
        wf.setnchannels(CHANNELS)
        wf.setsampwidth(p.get_sample_size(FORMAT))
        wf.setframerate(RATE)
        wf.writeframes(data)
        wf.close()

    The problem is that I have to connect the headphone jack to the microphone jack. I tried replacing these lines:

        input = True,
        input_host_api_specific_stream_info = stream_info,

    with these:

        output = True,
        output_host_api_specific_stream_info = stream_info,

    but then I get this error:

        Traceback (most recent call last):
          File "./test.py", line 25, in
            data = stream.read(chunk)
          File "/Library/Python/2.5/site-packages/pyaudio.py", line 562, in read
            paCanNotReadFromAnOutputOnlyStream)
        IOError: [Errno Not input stream] -9975

    Is there a way to instantiate the PyAudio stream so that it takes its input from the computer's output and I don't have to connect the headphone jack to the microphone? Is there a better way to go about this? I'd prefer to stick with a Python app and avoid Cocoa.

    Read the article

  • what does this asp.net mvc compile time error mean?

    - by Pandiya Chendur
    I have a repository class and it has this:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.Mvc;
        using CrMVC.BusinessObjects;

        namespace CrMVC.Models
        {
            public class ConstructionRepository
            {
                private CRDataContext db = new CRDataContext();

                public IQueryable<MaterialsObj> FindAllMaterials()
                {
                    var materials = from m in db.Materials
                                    join Mt in db.MeasurementTypes on m.MeasurementTypeId equals Mt.Id
                                    select new MaterialsObj()
                                    {
                                        Id = Convert.ToInt64(m.Mat_id),
                                        Mat_Name = m.Mat_Name,
                                        Mes_Name = Mt.Name,
                                    };
                    return materials;
                }
            }
        }

    And my MaterialsObj class is under the CrMVC.BusinessObjects namespace, and I am using it in my repository class:

        namespace CrMVC.BusinessObjects
        {
            public class MaterialsObj
            {
                //My logic here
            }
        }

    But when I compile this I get this error:

        c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\19360d4c\3d21e226\App_Web_materials.aspx.7d2669f4.a8f-zsw5.0.cs(148): error CS0426: The type name 'Materials' does not exist in the type 'CrMVC.Models.ConstructionRepository'

    Am I missing something? Any suggestions?

    Edit: There is no class named Materials in my repository class, so why do I get this error?

    Read the article

  • DBD::SQLite::st execute failed: datatype mismatch

    - by Barton Chittenden
    Here's a snippet of Perl code:

        sub insert_timesheet {
            my $dbh = shift;
            my $entryref = shift;

            my $insertme = join(',', @_);
            my $values_template = '?, ' x scalar(@_);
            chop $values_template;
            chop $values_template;  # remove trailing comma

            my $insert = "INSERT INTO timesheet( $insertme ) VALUES ( $values_template );";
            my $sth = $dbh->prepare($insert);
            debug("$insert");

            my @values;
            foreach my $entry (@_){
                push @values, $$entryref{$entry}
            }
            debug("@values");

            my $rv = $sth->execute( @values ) or die $dbh->errstr;
            debug("sql return value: $rv");
            $dbh->disconnect;
        }

    The value of $insert:

        [INSERT INTO timesheet( idx,Start_Time,End_Time,Project,Ticket_Number,Site,Duration,Notes ) VALUES ( ?, ?, ?, ?, ?, ?, ?, ? );]

    Here are @values:

        [null '1270950742' '1270951642' 'asdf' 'asdf' 'adsf' 15 '']

    Here's the schema of timesheet:

        timesheet(
            idx INTEGER PRIMARY KEY AUTOINCREMENT,
            Start_Time VARCHAR,
            End_Time VARCHAR,
            Duration INTEGER,
            Project VARCHAR,
            Ticket_Number VARCHAR,
            Site VARCHAR,
            Notes VARCHAR)

    Here's how things line up:

        Insert Statement   Schema                                  @values
        ----------------   -------------------------------------  --------------------
        idx                idx INTEGER PRIMARY KEY AUTOINCREMENT  null  # this is not a mismatch; passing null allows auto-increment
        Start_Time         Start_Time VARCHAR                     '1270950742'
        End_Time           End_Time VARCHAR                       '1270951642'
        Project            Project VARCHAR                        'asdf'
        Ticket_Number      Ticket_Number VARCHAR                  'asdf'
        Site               Site VARCHAR                           'adsf'
        Duration           Duration INTEGER                       15
        Notes              Notes VARCHAR                          ''

    ... I can't see the datatype mismatch.

    Read the article

  • Product Catalog Schema design

    - by FlySwat
    I'm building a proof-of-concept schema for a product catalog to possibly replace a very aging and crufty one we use. In our business, we sell both physical materials and services (one-time and recurring charges). The current catalog schema has each distinct category broken out into individual tables; while this is nicely normalized and performs well, it is fairly difficult to extend. Adding a new attribute to a particular product involves changing the table schema and back-populating old data.

    An idea I've been toying with is something along the lines of a base set of entity tables in third normal form; these would contain the facts that are common among ALL products. Then I'd like to build an attribute-entity-value schema that allows each entity type to be extended in a flexible way using just data and no schema changes. Finally, I'd like to denormalize this data model into materialized views for each individual entity type. These views are what the application would access. We also have many tables that contain business rules and compatibility rules; these would join against the base entity tables instead of the views.

    My big concerns here are:

    Performance - Attribute-entity-value schemas are flexible, but typically perform poorly; should I be concerned?

    More performance - Denormalizing using materialized views may have some risks; I'm not positive on this yet.

    Complexity - While this schema is flexible and maintainable using just data, I worry that the complexity of the design might make future schema changes difficult.

    For those who have designed product catalogs for large-scale enterprises, am I going down the totally wrong path? Is there any good best-practice schema design reading available for product catalogs?

    Read the article

  • SQLDeveloper using over 100MB of PGA

    - by Leigh Riffel
    Perhaps this is normal, but in my Oracle 11g database I am seeing programmers using Oracle's SQL Developer regularly consume more than 100 MB of combined UGA and PGA memory. I'd like to know if this is normal and what can be done about it. Our database is on the 32-bit version of Windows 2008, so memory limitations are becoming an increasing concern. I am using the following query to show the memory usage:

        SELECT e.SID, e.username, e.status, b.PGA_MEMORY
        FROM v$session e
        LEFT JOIN (select y.SID, y.value pga,
                          TO_CHAR(ROUND(y.value/1024/1024),99999999) || ' MB' PGA_MEMORY
                   from v$sesstat y, v$statname z
                   where y.STATISTIC# = z.STATISTIC#
                     and NAME = 'session pga memory') b
               ON e.sid = b.sid
        WHERE (PGA)/1024/1024 > 20
        ORDER BY 4 DESC;

    It seems that the resource usage goes up any time a table is opened in SQL Developer, but even when it is closed the memory does not go away. The problem is worse if the table was sorted while it was open, as that seems to use even more memory. I understand how this would use memory while it is sorting, and perhaps even while it is still open, but using memory after it is closed seems wrong to me. Can anyone confirm this?

    Update: I discovered that my numbers were off because I did not understand that the UGA is stored in the PGA under dedicated server mode. This makes the numbers lower than they were, but the problem still remains that SQL Developer seems to use excessive PGA.

    Read the article

  • complex MySQL Order by not working

    - by Les Reynolds
    Here is the select statement I'm using. The problem is with the sorting. As written below, it only sorts by t2.userdb_user_first_name; it doesn't matter if I put that first or second. When I remove that, it sorts just fine by the displayorder field/value pair. So I know that part is working, but somehow the combination of the two causes first_name to override it. What I want is for the records to be sorted by displayorder first, and then by first_name within that.

        SELECT t1.userdb_id
        FROM default_en_userdbelements as t1
        INNER JOIN default_en_userdb AS t2 ON t1.userdb_id = t2.userdb_id
        WHERE t1.userdbelements_field_name = 'newproject'
          AND t1.userdbelements_field_value = 'no'
          AND t2.userdb_user_first_name != 'Default'
        ORDER BY (t1.userdbelements_field_name = 'displayorder' AND t1.userdbelements_field_value),
                 t2.userdb_user_first_name;

    Edit: here is what I want to accomplish. I want to list the users (that are not new projects) from the userdb table, along with the details about the users that are stored in userdbelements. And I want that to be sorted first by userdbelements.displayorder, then by userdb.first_name. I hope that makes sense? Thanks for the really quick help!

    Edit: Sorry for disappearing; here is some sample data.

        userdbelements (userdbelements_id, userdbelements_field_name, userdbelements_field_value, userdb_id)
        647  heat          1
        648  displayorder  1 - Sponsored  1
        645  condofees     1

        userdb
        userdb_id  userdb_user_name  userdb_emailaddress  userdb_user_first_name  userdb_user_last_name
        10         harbourlights     [email protected]      Harbourlights           1237 Northshore Blvd, Burlington
        11         harbourview       [email protected]      Harbourview             415 Locust Street, Burlington
        12         thebalmoral       [email protected]      The Balmoral            2075 & 2085 Amherst Heights Drive, Burlington
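
    A possible cause and fix (a sketch against the tables above, not tested): because the WHERE clause restricts t1 to rows where userdbelements_field_name = 'newproject', the expression (t1.userdbelements_field_name = 'displayorder' AND ...) in the ORDER BY is always false, so only first_name ever takes effect. Joining the 'displayorder' row separately gives each user its own sort key; note the value column is text, so the ordering is alphabetical unless you cast it.

        SELECT t1.userdb_id
        FROM default_en_userdbelements AS t1
        INNER JOIN default_en_userdb AS t2
                ON t1.userdb_id = t2.userdb_id
        LEFT JOIN default_en_userdbelements AS disp
               ON disp.userdb_id = t1.userdb_id
              AND disp.userdbelements_field_name = 'displayorder'
        WHERE t1.userdbelements_field_name = 'newproject'
          AND t1.userdbelements_field_value = 'no'
          AND t2.userdb_user_first_name != 'Default'
        ORDER BY disp.userdbelements_field_value, t2.userdb_user_first_name;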

    Read the article

  • Endless problems with a very simple python subprocess.Popen task

    - by Thomas
    I'd like Python to send around half a million integers, each in the range 0-255, to an executable written in C++. This executable will then respond with a few thousand integers, each on one line. This seems like it should be very simple to do with subprocess, but I've had endless trouble. Right now I'm testing with this code:

        // main()
        u32 num;
        std::cin >> num;

        u8* data = new u8[num];
        for (u32 i = 0; i < num; ++i)
            std::cin >> data[i];

        // test output / spit it back out
        for (u32 i = 0; i < num; ++i)
            std::cout << data[i] << std::endl;

        return 0;

    Building an array of strings ("data") in Python, each like "255\n", and then using:

        output = proc.communicate("".join(data))[0]

    ...doesn't work (it says stdin is closed; maybe too much at one time). Neither has using proc.stdin and proc.stdout worked. This should be so very simple, but I'm getting constant exceptions and/or no output data returned to me. My Popen is currently:

        proc = Popen('aux/test_cpp_program', stdin=PIPE, stdout=PIPE, bufsize=1)

    Advise me before I pull my hair out. ;)

    Read the article

  • Nhibernate get collection by ICriteria

    - by Andrew Kalashnikov
    Hello, colleagues. I've got a problem getting my entity.

    Mapping:

        <?xml version="1.0" encoding="utf-8" ?>
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="Clients.Core" namespace="Clients.Core.Domains">
          <class name="Sales, Clients.Core" table='sales'>
            <id name="Id" unsaved-value="0">
              <column name="id" not-null="true"/>
              <generator class="native"/>
            </id>
            <property name="Guid">
              <column name="guid"/>
            </property>
            <set name="Accounts" table="sales_users" lazy="false">
              <key column="sales_id" />
              <element column="user_id" type="Int32" />
            </set>
          </class>

    Domain:

        public class Sales : BaseDomain
        {
            ICollection<int> accounts = new List<int>();

            public virtual ICollection<int> Accounts
            {
                get { return accounts; }
                set { accounts = value; }
            }

            public Sales() { }
        }

    I want to get a query such as:

        SELECT * FROM sales s
        INNER JOIN sales_users su on su.sales_id = s.id
        WHERE su.user_id = :N

    How can I do this through an ICriterion object? Thanks a lot.

    Read the article

  • Combo-box values automatically update

    - by glinch
    Hi all, hopefully somebody can help. The table structure is as follows:

        tblCompany: compID, compName
        tblOffice: offID, compID, add1, add2, add3, etc...
        tblEmployee: empID, Name, telNo, etc..., offID

    I have a form that contains contact details for employees; all works OK using AfterUpdate. A cascading combo box, cmbComp, allows me to select a company and in turn select the appropriate office in cboOff, and this updates the corresponding tblEmployee.offID field correctly. Fields are automatically updated for the address also.

    cmbComp RowSource:

        SELECT DISTINCT tblOffice.compID, tblCompany.compID
        FROM tblCompany
        INNER JOIN AdjusterCompanyOffice ON tblCompany.compID = tblOffice.compID
        ORDER BY tblCompany.compName;

    cboOff RowSource:

        SELECT tblCompany.offID, tblCompany.Address1, tblCompany.Address2, tblCompany.Address3, tblCompany.Address4, tblCompany.Address5
        FROM tblCompany
        ORDER BY tblCompany.Address1;

    The problem I am having is how, when I load an existing record, to retrieve the data and automatically load cmbComp and the text fields. The cboOff combo box loads correctly, as its control source is offID. I imagine there must be a way of setting the value on opening the record? Not sure how, though. I don't think I can set the control source for cmbComp or the text fields, or can I? Any help or a point in the right direction is appreciated; I have been searching for a way to do this but can't get anywhere!

    Read the article

  • SQL Server: A Grouping question that's annoying me

    - by user366729
    I've been working with SQL Server for the better part of a decade, and this grouping (or partitioning, or ranking... I'm not sure what the answer is!) question has me stumped. It feels like it should be an easy one, too. I'll generalize my problem:

    Let's say I have 3 employees (don't worry about them quitting or anything... there's always 3), and I keep up with how I distribute their salaries on a monthly basis.

        Month  Employee  PercentOfTotal
        -----  --------  --------------
        1      Alice     25%
        1      Barbara   65%
        1      Claire    10%
        2      Alice     25%
        2      Barbara   50%
        2      Claire    25%
        3      Alice     25%
        3      Barbara   65%
        3      Claire    10%

    As you can see, I've paid them the same percentages in months 1 and 3, but in month 2 I gave Alice the same 25%, while Barbara got 50% and Claire got 25%. What I want to know is all the distinct distributions I've ever used. In this case there would be two: one for months 1 and 3, and one for month 2. I'd expect the results to look something like this (NOTE: the ID, or sequencer, or whatever, doesn't matter):

        ID  Employee  PercentOfTotal
        --  --------  --------------
        X   Alice     25%
        X   Barbara   65%
        X   Claire    10%
        Y   Alice     25%
        Y   Barbara   50%
        Y   Claire    25%

    Seems easy, right? I'm stumped! Anyone have an elegant solution? I just put together this solution while writing this question, which seems to work, but I'm wondering if there's a better way. Or maybe a different way from which I'll learn something.

        WITH temp_ids (Month) AS
        (
            SELECT DISTINCT MIN(Month)
            FROM employees_paid
            GROUP BY PercentOfTotal
        )
        SELECT EMP.Month, EMP.Employee, EMP.PercentOfTotal
        FROM employees_paid EMP
        JOIN temp_ids IDS ON EMP.Month = IDS.Month
        GROUP BY EMP.Month, EMP.Employee, EMP.PercentOfTotal

    Thanks y'all! -Ricky
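
    An alternative sketch (assumes SQL Server 2005+ and the table exactly as shown; not tested): build a single text "signature" of each month's whole distribution with FOR XML PATH, then keep one month per distinct signature. Keying on the whole distribution, rather than on individual percentages, is what decides whether two months count as the same distribution.

        WITH months AS (
            SELECT DISTINCT Month FROM employees_paid
        ),
        sig AS (
            SELECT m.Month,
                   (SELECT ',' + Employee + '=' + CAST(PercentOfTotal AS varchar(20))
                      FROM employees_paid e
                     WHERE e.Month = m.Month
                     ORDER BY Employee
                       FOR XML PATH('')) AS distribution
              FROM months m
        )
        SELECT e.Month AS ID, e.Employee, e.PercentOfTotal
          FROM employees_paid e
          JOIN (SELECT MIN(Month) AS Month FROM sig GROUP BY distribution) k
            ON e.Month = k.Month
         ORDER BY e.Month, e.Employee;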

    Read the article

  • Download-from-PyPI-and-install script

    - by zubin71
    Hello, I have written a script which fetches a distribution, given the URL. After downloading the distribution, it compares the md5 hashes to verify that the file has been downloaded properly. This is how I do it:

        def download(package_name, url):
            import urllib2
            downloader = urllib2.urlopen(url)
            package = downloader.read()
            package_file_path = os.path.join('/tmp', package_name)
            package_file = open(package_file_path, "w")
            package_file.write(package)
            package_file.close()

    I wonder if there is any better (more Pythonic) way to do what I have done in the above code snippet. Also, once the package is downloaded, this is what is done:

        def install_package(package_name):
            if package_name.endswith('.tar'):
                import tarfile
                tarfile.open('/tmp/' + package_name)
                tarfile.extract('/tmp')
            import shlex
            import subprocess
            installation_cmd = 'python %ssetup.py install' %('/tmp/'+package_name)
            subprocess.Popen(shlex.split(installation_cmd))

    As there are a number of imports for the install_package method, I wonder if there is a better way to do this. I'd love to have some constructive criticism and suggestions for improvement. Also, I have only implemented the install_package method for .tar files; would there be a better manner by which I could install .tar.gz and .zip files too, without having to write separate methods for each of these?

    Read the article

  • Comparing two date ranges within the same table

    - by Danny Herran
    I have a table with sales per store as follows:

        SQL> select * from sales;

        ID          ID_STORE  DATE        TOTAL
        ----------  --------  ----------  --------
        1           1         2010-01-01  500.00
        2           1         2010-01-02  185.00
        3           1         2010-01-03  135.00
        4           1         2009-01-01  165.00
        5           1         2009-01-02  175.00
        6           5         2010-01-01  130.00
        7           5         2010-01-02  135.00
        8           5         2010-01-03  130.00
        9           6         2010-01-01  100.00
        10          6         2010-01-02  12.00
        11          6         2010-01-03  85.00
        12          6         2009-01-01  135.00
        13          6         2009-01-02  400.00
        14          6         2009-01-07  21.00
        15          6         2009-01-08  45.00
        16          8         2009-01-09  123.00
        17          8         2009-01-10  581.00

        17 rows selected.

    What I need to do is compare two date ranges within that table. Let's say I need to know the differences in sales between 01 Jan 2009 to 10 Jan 2009 and 01 Jan 2010 to 10 Jan 2010. I'd like to build a query that returns something like this:

        ID_STORE_A  DATE_A      TOTAL_A   ID_STORE_B  DATE_B      TOTAL_B
        ----------  ----------  --------  ----------  ----------  --------
        1           2010-01-01  500.00    1           2009-01-01  165.00
        1           2010-01-02  185.00    1           2009-01-02  175.00
        1           2010-01-03  135.00    1           NULL        NULL
        5           2010-01-01  130.00    5           NULL        NULL
        5           2010-01-02  135.00    5           NULL        NULL
        5           2010-01-03  130.00    5           NULL        NULL
        6           2010-01-01  100.00    6           2009-01-01  135.00
        6           2010-01-02  12.00     6           2009-01-02  400.00
        6           2010-01-03  85.00     6           NULL        NULL
        6           NULL        NULL      6           2009-01-07  21.00
        6           NULL        NULL      6           2009-01-08  45.00
        6           NULL        NULL      8           2009-01-09  123.00
        6           NULL        NULL      8           2009-01-10  581.00

    So, even if there are no sales in one range or another, it should just fill the empty space with NULL. So far I've come up with this quick query, but the dates from sales to sales2 are sometimes different in each row:

        SELECT sales.*, sales2.*
        FROM sales
        LEFT JOIN sales AS sales2 ON (sales.id_store = sales2.id_store)
        WHERE sales.date >= '2010-01-01' AND sales.date <= '2010-01-10'
          AND sales2.date >= '2009-01-01' AND sales2.date <= '2009-01-10'
        ORDER BY sales.id_store ASC, sales.date ASC, sales2.date ASC

    What am I missing?
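
    One possible approach (a sketch, not tested): pair the two ranges per store on the same month and day, and use a FULL OUTER JOIN so unmatched dates on either side come back as NULL. This assumes SQL Server syntax (MONTH/DAY and FULL OUTER JOIN); other databases would use EXTRACT instead, and MySQL would need to emulate the full join with a UNION of two outer joins.

        SELECT a.id_store AS id_store_a, a.date AS date_a, a.total AS total_a,
               b.id_store AS id_store_b, b.date AS date_b, b.total AS total_b
        FROM (SELECT * FROM sales WHERE date BETWEEN '2010-01-01' AND '2010-01-10') a
        FULL OUTER JOIN
             (SELECT * FROM sales WHERE date BETWEEN '2009-01-01' AND '2009-01-10') b
          ON a.id_store = b.id_store
         AND MONTH(a.date) = MONTH(b.date)
         AND DAY(a.date) = DAY(b.date)
        ORDER BY COALESCE(a.id_store, b.id_store), COALESCE(a.date, b.date);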

    Read the article

  • symfony get data from array

    - by iggnition
    Hi, I'm trying to use an SQL query to get data from my database into the template of a symfony project.

    My query (SQL):

        SELECT l.loc_id AS l__loc_id, l.naam AS l__naam, l.straat AS l__straat, l.huisnummer AS l__huisnummer,
               l.plaats AS l__plaats, l.postcode AS l__postcode, l.telefoon AS l__telefoon, l.opmerking AS l__opmerking,
               o.org_id AS o__org_id, o.naam AS o__naam
        FROM locatie l
        LEFT JOIN organisatie o ON l.org_id = o.org_id

    This is generated by this DQL:

        $this->q = Doctrine_Query::create()
            ->select('l.naam, o.naam, l.straat, l.huisnummer, l.plaats, l.postcode, l.telefoon, l.opmerking')
            ->from('Locatie l')
            ->leftJoin('l.Organisatie o')
            ->execute();

    But now when I try to access this data in the template by doing either:

        <?php foreach ($q as $locatie): ?>
        <?php echo $locatie['o.naam'] ?>

    or

        <?php foreach ($q as $locatie): ?>
        <?php echo $locatie['o__naam'] ?>

    I get this error from symfony:

        500 | Internal Server Error | Doctrine_Record_UnknownPropertyException
        Unknown record property / related component "o__naam" on "Locatie"

    Does anyone know what is going wrong here? I don't know how to get the value from the array if the names from both queries don't work.

    Read the article

  • Python MD5 Hash Faster Calculation

    - by balgan
    Hi everyone. I will try my best to explain my problem and my line of thought on how I think I can solve it. I use this code:

        for root, dirs, files in os.walk(downloaddir):
            for infile in files:
                f = open(os.path.join(root, infile), 'rb')
                filehash = hashlib.md5()
                while True:
                    data = f.read(10240)
                    if len(data) == 0:
                        break
                    filehash.update(data)
                print "FILENAME: ", infile
                print "FILE HASH: ", filehash.hexdigest()

    and using

        start = time.time()
        elapsed = time.time() - start

    I measure how long it takes to calculate a hash. Pointing my code at a file of 653 MB, this is the result:

        root@Mars:/home/tiago# python algorithm-timer.py
        FILENAME:  freebsd.iso
        FILE HASH:  ace0afedfa7c6e0ad12c77b6652b02ab
        12.624
        root@Mars:/home/tiago# python algorithm-timer.py
        FILENAME:  freebsd.iso
        FILE HASH:  ace0afedfa7c6e0ad12c77b6652b02ab
        12.373
        root@Mars:/home/tiago# python algorithm-timer.py
        FILENAME:  freebsd.iso
        FILE HASH:  ace0afedfa7c6e0ad12c77b6652b02ab
        12.540

    OK, so about 12 seconds for a 653 MB file. My problem is that I intend to use this code in a program that will run through multiple files, some of them possibly 4/5/6 GB, and it will take way longer to calculate. What I am wondering is whether there is a faster way for me to calculate the hash of a file. Maybe by doing some multithreading? I used another script to check the use of the CPU second by second, and I see that my code is only using 1 of my 2 CPUs, and only at 25% max; is there any way I can change this?

    Thank you all in advance for the help.

    Read the article

  • Update one-to-many EntityKey using Foreign Key

    - by User.Anonymous
    To use Entity Framework in the easiest way, I use partial classes to add foreign keys to the most important entities of the model. For example, I have an entity CONTACT which has a TITLE, a FUNCTION and others. When I update a CONTACT with this code, the foreign keys are automatically updated:

        public int? TitId
        {
            get
            {
                if (this.TITLE_TIT != null)
                    return TITLE_TIT.TIT_ID;
                return new Nullable<int>();
            }
            set
            {
                this.TITLE_TITReference.EntityKey = new System.Data.EntityKey("Entities.TITLE_TIT", "TIT_ID", value);
            }
        }

    But I have a join with ACTIVITY, which can have many CONTACTs, and I don't know how to update the EntityKeys in the setter.

        public IEnumerable<EntityKeyMember> ActId
        {
            get
            {
                List<EntityKeyMember> lst = new List<EntityKeyMember>();
                if (this.ACT_CON2 != null)
                {
                    foreach (ACT_CON2 id in this.ACT_CON2.ToList())
                    {
                        EntityKeyMember key = new EntityKeyMember(id.CON_ID.ToString(), id.ACT_ID);
                        lst.Add(key);
                    }
                }
                return lst;
            }
            set
            {
                this.ACT_CON2.EntityKey = new System.Data.EntityKey("Entities.ACT_CON2", value);
            }
        }

    How do I set many EntityKeys? Thank you.

    Read the article

  • Subquery max sequence number

    - by Andy Levesque
    I'm hesitant to ask because I'm sure the answer is out there, but I just can't seem to come up with the keywords to find it. I'm stepping outside my boundaries by starting with subqueries (I'm normally an Access user). I have a query that returns TECH_ID, SEQ_NBR, and PELL_FT_AWD_AMT:

        SELECT ISRS_V_NEED_ANAL_RESULT_PARENT.TECH_ID,
               ISRS_V_NEED_ANAL_RESULT_PARENT.AWD_YR,
               ISRS_V_NEED_ANAL_RESULT_PARENT.PELL_FT_AWD_AMT,
               ISRS_V_NEED_ANAL_RESULT_PARENT.SEQ_NBR
        FROM ISRS_V_NEED_ANAL_RESULT_PARENT
        GROUP BY ISRS_V_NEED_ANAL_RESULT_PARENT.TECH_ID,
                 ISRS_V_NEED_ANAL_RESULT_PARENT.AWD_YR,
                 ISRS_V_NEED_ANAL_RESULT_PARENT.PELL_FT_AWD_AMT,
                 ISRS_V_NEED_ANAL_RESULT_PARENT.SEQ_NBR
        HAVING (((ISRS_V_NEED_ANAL_RESULT_PARENT.AWD_YR)="2013"))
        ORDER BY ISRS_V_NEED_ANAL_RESULT_PARENT.TECH_ID;

    What I want is to add a subquery that selects only the max SEQ_NBR for each TECH_ID, but I can't seem to get the syntax right. In the past I would cheat and have a separate query that first gave me the TECH_ID and max SEQ_NBR, and then a second query that used the original table and the first query in a join to get the rest. How can I do this in one query?

    Example:

        TECH_ID  SEQ_NBR  PELL
        1        1        4000
        1        2        4000
        1        3        5000

    Using just the max of the sequence number still returns both (1; 2; 4000) and (1; 3; 5000), when I'm only wanting the latter.
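
    One way to fold the old two-query approach into a single statement (a sketch, not tested, keeping the Access-style double-quoted literal from the query above; it assumes the max SEQ_NBR per TECH_ID identifies the row you want): join the table to a derived table that picks the max sequence number per TECH_ID.

        SELECT p.TECH_ID, p.AWD_YR, p.PELL_FT_AWD_AMT, p.SEQ_NBR
        FROM ISRS_V_NEED_ANAL_RESULT_PARENT AS p
        INNER JOIN (SELECT TECH_ID, MAX(SEQ_NBR) AS MAX_SEQ
                    FROM ISRS_V_NEED_ANAL_RESULT_PARENT
                    WHERE AWD_YR = "2013"
                    GROUP BY TECH_ID) AS m
                ON (p.TECH_ID = m.TECH_ID AND p.SEQ_NBR = m.MAX_SEQ)
        WHERE p.AWD_YR = "2013"
        ORDER BY p.TECH_ID;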

    Read the article
