I'm using a C# Windows Forms application. I have a database with many tables, and each table has several columns. I need to populate a combo box with the column names of a selected table.
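A minimal sketch of the query side, assuming a SQL Server back end and its INFORMATION_SCHEMA views ('YourSelectedTable' is a placeholder for whatever table the user picked); the result set is what would get bound to the combo box:
SELECT COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'YourSelectedTable'   -- assumption: substitute the selected table name
ORDER BY ORDINAL_POSITION;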
I've got a little time-tracking web app (implemented in Rails 3.2.8 & MySQL). The app has several users who add their time to specific tasks, on a given date. The system is set up so a user can only have 1 time entry (i.e. row) per task per date. I.e. if you add time twice on the same task and date, it'll add time to the existing row, rather than create a new one.
Now I'm looking to merge 2 tasks. In the simplest terms, merging task ID 2 into task ID 1 would take this
 time | user_id | task_id | date
------+---------+---------+------------
   10 |       1 |       1 | 2012-10-29
   15 |       2 |       1 | 2012-10-29
   10 |       1 |       2 | 2012-10-29
    5 |       3 |       2 | 2012-10-29
and change it into this
 time | user_id | task_id | date
------+---------+---------+------------
   20 |       1 |       1 | 2012-10-29   <-- time values merged (summed)
   15 |       2 |       1 | 2012-10-29   <-- no change
    5 |       3 |       1 | 2012-10-29   <-- task_id changed (no merging necessary)
I.e. merge by summing the time values, where the given user_id/date/task combo would conflict.
I figure I can add a unique constraint and do an INSERT ... ON DUPLICATE KEY UPDATE for every task_id = 2 row, but that seems pretty inelegant.
I've also tried to figure a way to first update all the rows in task 1 with the summed-up times, but I can't quite figure that one out.
Any ideas?
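For what it's worth, a sketch of the ON DUPLICATE KEY UPDATE idea mentioned above, assuming the table is called time_entries (the real name isn't shown) and has a unique key on (user_id, task_id, date):
INSERT INTO time_entries (time, user_id, task_id, date)
  SELECT time, user_id, 1, date
  FROM time_entries
  WHERE task_id = 2
ON DUPLICATE KEY UPDATE time = time + VALUES(time);
-- then remove the now-merged task 2 rows
DELETE FROM time_entries WHERE task_id = 2;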
I have created a dataset with fields "LastRunBuild" and "project". The LastRunBuild field contains a string of values separated by commas for each project, but some projects have no value in LastRunBuild. When I use this expression
iif(Fields!LastRunBuild.Value=nothing,
    nothing, Split(Fields!LastRunBuild.Value,",").GetValue(3))
a #Error value is returned every time. Please reply...
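One possible cause (an assumption, not verified against this report): IIf in SSRS evaluates both branches, so Split(...).GetValue(3) still runs when LastRunBuild is Nothing or has fewer than four items. A common workaround is to pad the string so index 3 always exists, for example:
=Split(Fields!LastRunBuild.Value & ",,,", ",").GetValue(3)
With an empty or short value this returns an empty string instead of #Error.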
I've set up a view which combines all the data across several tables. Is there a way to write this so that only columns which contain non-null data are displayed, and those columns which contain all NULL values are not included?
I have a table MRU that has 3 columns:
(VALUE varchar(255); TYPE varchar(20); DT_ADD datetime)
This table simply stores an entry and records the date and time it was added. What I want to do is delete the oldest entry whenever adding a new one pushes the row count past a certain limit.
Here is my query:
delete from MRU
where type = 'FILENAME'
ORDER BY DT_ADD limit 1;
The query returns an error.
Thanks
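In case it helps pin down what I mean, a sketch of the same delete written without ORDER BY/LIMIT (assuming MySQL-style syntax; the derived table works around MySQL's restriction on selecting from the table being deleted from, and note it removes every row tied for the oldest timestamp):
DELETE FROM MRU
WHERE type = 'FILENAME'
  AND DT_ADD = (SELECT MIN(DT_ADD)
                FROM (SELECT DT_ADD FROM MRU WHERE type = 'FILENAME') AS oldest);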
This is my HTML code, and I'm trying to pass my id (test1) to the fileUp function. It only works when I type 'test1' directly and stops working when I try to pass it as a variable. Please let me know if there is a way to overcome this problem.
<a id='test1' href="#" onclick="fileUp(1, 'test1')">Upload Your file</a>
function fileUp(id, nameTest){
    var test = nameTest;
    new AjaxUpload(nameTest, {
        action: 'upload-test.php',
        onComplete: function(file, response){
            alert(response);
        }
    });
}
I have the following case: a set of letter grades (A, A+, A-, B, B+, B-) stored as strings in the database. I want to order these grades logically from the smallest to the largest, but that is not what happens in practice. Because they are strings, the order comes out as A, A+, A-. What I want is:
ASC: A-, A, A+
DESC: A+, A, A-
I bind these grades to a drop-down list and I want them to appear in this logical order. Any idea how to do something like this?
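One idea I have been toying with (a sketch only; the table and column names grades(grade) are assumptions, as is the availability of MySQL/SQL Server-style LEFT/RIGHT functions):
SELECT grade
FROM grades
ORDER BY
  LEFT(grade, 1),             -- group by the letter first
  CASE RIGHT(grade, 1)        -- then rank the suffix so '-' < plain < '+'
    WHEN '-' THEN 1
    WHEN '+' THEN 3
    ELSE 2
  END;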
Hi, in my model I am selecting a field as
$query1 = $this->db->query("SELECT dPassword
FROM tbl_login
WHERE dEmailID='[email protected]'");
How do I return dPassword as a variable to my controller? I tried it this way: return dpassword;
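A sketch of what I think the model method needs to look like, assuming CodeIgniter's standard result API (row() returns the first result row as an object, or NULL if nothing matched):
$query1 = $this->db->query("SELECT dPassword
                            FROM tbl_login
                            WHERE dEmailID='[email protected]'");
$row = $query1->row();
return $row ? $row->dPassword : null;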
Coming from a C background, I may be getting too anal about this and worrying unnecessarily about bits and bytes here.
Still, I can't help thinking about how the data is actually stored, and that if I choose an N that is a power of 2, the database will be more efficient in how it packs the data.
Using this "logic", I have a string field in a table which is variable length up to 21 chars. I am tempted to use 32 instead of 21, for the reason given above - however, now I am thinking that I am wasting disk space, because space will be allocated for 11 extra chars that are guaranteed never to be used. Since I envisage storing several tens of thousands of rows a day, it all adds up.
Question:
Mindful of all of the above, should I declare varchar(21) or varchar(32), and why?
I'm going through the exercise of building a CMS that will organize a lot of the common documents my employer generates each time we get a new sales order. Each new sales order gets a 5-digit number (12222, 12223, 12224, etc.), but internally we have applied a hierarchy to these numbers:
+ 121XX
|--01
|--02
+ 122XX
|--22
|--23
|--24
In my table for sales orders, is it better to use the 5-digit number as an ID and populate upward, or would it be better to use the hierarchical structure we use when referring to jobs in regular conversation? The only benefit to not populating sequentially seems to be formatting the data later on in my view, but that doesn't sound like a good enough reason to go through the extra work.
Thanks
I have a MySQL table that looks as follows:
id   id_jugador   id_partido   team1   team2
 1            2            1       5       2
 2            2            2       1       1
 3            1            2       0       0
I need to create a query that will either INSERT a new row into the table or UPDATE an existing one. The condition is based on id_jugador and id_partido: if I wanted to insert id_jugador = 2 and id_partido = 1, it should just UPDATE the existing row with the new team1 and team2 values I am sending, and not duplicate the row.
However, if I insert id_jugador = 2 and id_partido = 3, since this combination does not exist yet, it should add a new row.
I read about REPLACE INTO, but it seems unable to check a combination of unique keys.
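For reference, a sketch of the INSERT ... ON DUPLICATE KEY UPDATE route, assuming the table is named partido_stats (the real name isn't shown) and that a composite unique key can be added on (id_jugador, id_partido):
ALTER TABLE partido_stats
  ADD UNIQUE KEY uq_jugador_partido (id_jugador, id_partido);
INSERT INTO partido_stats (id_jugador, id_partido, team1, team2)
VALUES (2, 1, 5, 2)
ON DUPLICATE KEY UPDATE
  team1 = VALUES(team1),
  team2 = VALUES(team2);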
I have a table whose 'path' column has values and I would like to update the table's 'child_count' column so that I get the following output.
path | child_count
--------+-------------
| 5
/a | 3
/a/a | 0
/a/b | 1
/a/b/c | 0
/b | 0
My present solution - which is way too inefficient - uses a stored procedure as follows:
CREATE FUNCTION child_count() RETURNS VOID AS $$
DECLARE
    parent VARCHAR;
BEGIN
    FOR parent IN
        SELECT path FROM my_table
    LOOP
        DECLARE
            tokens VARCHAR[] := REGEXP_SPLIT_TO_ARRAY(parent, '/');
            str VARCHAR := '';
        BEGIN
            FOR i IN 2..ARRAY_LENGTH(tokens, 1)
            LOOP
                UPDATE my_table
                SET child_count = child_count + 1
                WHERE path = str;
                str := str || '/' || tokens[i];
            END LOOP;
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
Anyone knows of a single UPDATE statement that does the same thing?
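For comparison, the kind of single statement I am hoping for would look something like this sketch (it assumes counting by path prefix matches the intent; it counts all descendants rather than only direct children, which is what the sample output shows):
UPDATE my_table AS p
SET child_count = (
    SELECT COUNT(*)
    FROM my_table AS c
    WHERE c.path LIKE p.path || '/%'
);
One caveat: LIKE treats % and _ in p.path as wildcards, so paths containing those characters would need escaping.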
I have this PHP page where the user can select and un-select items. The interface looks like this:
Now I'm using this code when the user hits the Save Changes button:
foreach( $value as $al_id ){ // $al_id is actually a location id
    // check if a record exists
    // if the location is already assigned, leave it as is
    $assigned_count = $this->AssignedLoc->checkIfAssigned( $tab_user_id, $al_id );
    if( $assigned_count == 0 ){
        // else if not, insert this new record
        $this->insertAssigned( $tab_user_id, $company_id, $al_id );
    }
}
Now my question is: how do I delete the unassigned locations? For example, in the screenshot above there are 4 assigned locations; if I remove (or unassign) "Mercury Morong" and "GP Hagonoy" from the assigned locations, only two must remain. What are the possible solutions using PHP?
Thanks for any help!
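One approach I have been considering (a sketch only; the table and column names assigned_locations, user_id and location_id are assumptions, as is how the query gets executed): after the insert loop, delete every row for this user whose location id was not submitted with the form.
$submitted_ids = array_map('intval', $value);      // the ids that are still checked
if (count($submitted_ids) > 0) {
    $id_list = implode(',', $submitted_ids);       // safe to inline: values cast to int above
    $sql = "DELETE FROM assigned_locations
            WHERE user_id = " . intval($tab_user_id) . "
              AND location_id NOT IN ($id_list)";
} else {
    // nothing left checked: remove all of this user's assignments
    $sql = "DELETE FROM assigned_locations
            WHERE user_id = " . intval($tab_user_id);
}
// execute $sql with whatever database layer AssignedLoc uses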
I have a stored procedure that returns about 50,000 records in 10 seconds, using at most 2 cores in SSMS. The SSRS report using that stored procedure was taking 20 minutes and would max out the processor on an 8-core server for the entire time. The report was relatively simple (i.e. no graphs or calculations). The report itself did not appear to be the issue: I wrote the 50K rows to a temp table and the report could display that data in a few seconds. I tried many different ideas, altering the stored procedure for each test but keeping the original code in a separate window to revert back to. After one ALTER of the stored procedure, reverting to the original code, the report and server utilization started running fast, comparable to the performance of the stored procedure alone. Everything is fine for now, but I would like to get to the bottom of what caused this in case it happens again. Any ideas?
I am fairly new to LINQ and can't get my head around some inconsistency in behaviour. Any knowledgeable input would be much appreciated. I see similar issues on SO and elsewhere but they don't seem to help.
I have a very simple setup: a company table and an addresses table. Each company can have 0 or more addresses, and if it has any, one must be specified as the main address. I'm trying to handle the cases where there are 0 addresses, using an outer join and altering the select statement accordingly.
Please note I'm currently binding the output straight to a GridView so I would like to keep all processing within the query.
The following DOES work
IQueryable query =
    from comp in context.Companies
    join addr in context.Addresses on comp.CompanyID equals addr.CompanyID into outer // outer join companies to addresses table to include companies with no address
    from addr in outer.DefaultIfEmpty()
    where (addr.IsMain == null ? true : addr.IsMain) == true // if a company has no address ensure it is not ruled out by the IsMain condition - default to true if null
    select new {
        comp.CompanyID,
        comp.Name,
        AddressID = (addr.AddressID == null ? -1 : addr.AddressID), // use -1 to represent a company that has no addresses
        MainAddress = String.Format("{0}, {1}, {2} {3} ({4})", addr.Address1, addr.City, addr.Region, addr.PostalCode, addr.Country)
    };
but this displays an empty address in the GridView as ", , ()"
So I updated the MainAddress field to be
MainAddress = (addr.AddressID == null ? "" : String.Format("{0}, {1}, {2} {3} ({4})", addr.Address1, addr.City, addr.Region, addr.PostalCode, addr.Country))
and now I'm getting the "Could not translate expression" error, plus a bunch of auto-generated code in the error message which means very little to me.
The condition I added to MainAddress is no different to the working condition on AddressID, so can anybody tell me what's going on here?
Any help greatly appreciated.
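One direction I have been experimenting with (a sketch only, not something I've verified; it assumes it's acceptable to materialize the rows before formatting): project the raw columns in the SQL-translated part, then build the display string in a second, in-memory projection that the GridView can still bind to.
var query =
    (from comp in context.Companies
     join addr in context.Addresses on comp.CompanyID equals addr.CompanyID into outer
     from addr in outer.DefaultIfEmpty()
     where addr == null || addr.IsMain == true
     select new { comp.CompanyID, comp.Name, Addr = addr })
    .AsEnumerable()   // switch to LINQ to Objects; String.Format runs client-side from here on
    .Select(x => new
    {
        x.CompanyID,
        x.Name,
        AddressID = x.Addr == null ? -1 : x.Addr.AddressID,
        MainAddress = x.Addr == null
            ? ""
            : String.Format("{0}, {1}, {2} {3} ({4})",
                            x.Addr.Address1, x.Addr.City, x.Addr.Region,
                            x.Addr.PostalCode, x.Addr.Country)
    });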
How do I write a query that returns aggregate sales data for California for the past x months?
----------------------- -----------------------
| order | | customer |
|-----------------------| |-----------------------|
| orderId int | | customerId int |
| customerId int | | state varchar |
| deposit decimal | -----------------------
| orderDate date |
-----------------------
-----------------------
| orderItem |
|-----------------------|
| orderId int |
| itemId int |
| qty int |
| lineTotal decimal |
| itemPrice decimal |
-----------------------
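A sketch of the shape of query I am after, assuming MySQL-style date functions and that "sales" means the sum of the order item line totals (both assumptions on my part), with x = 6 as an example:
SELECT SUM(oi.lineTotal) AS total_sales
FROM `order` o
JOIN customer c   ON c.customerId = o.customerId
JOIN orderItem oi ON oi.orderId = o.orderId
WHERE c.state = 'CA'
  AND o.orderDate >= DATE_SUB(CURDATE(), INTERVAL 6 MONTH);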
I'm working on a metaprogramming task, where I'm trying to use a single method to define a polymorphic association in the calling class, while also defining the association in the target class. I need to pass in the name of the calling class to get the association right. Here's a snippet that should get the idea across:
class SomeClass < ActiveRecord::Base
  has_many :join_models, :dependent => :destroy
end
class JoinModel < ActiveRecord::Base
  belongs_to :some_class
  belongs_to :entity, :polymorphic => true
end
module Foo
  module ClassMethods
    def acts_as_entity
      has_many :join_models, :as => :entity, :dependent => :destroy
      has_many :some_classes, :through => :join_models
      klass = self.name.tableize
      SomeClass.class_eval "has_many :#{klass}, :through => :join_models"
    end
  end
end
I'd like to eliminate the klass = line, but I don't know how else to pass a reference to self from the calling class into class_eval.
Any suggestions?
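One possibility I have been considering (a sketch, not verified): skip class_eval entirely and call has_many on SomeClass directly with a dynamic association name, which avoids both the string interpolation and the intermediate variable.
module Foo
  module ClassMethods
    def acts_as_entity
      has_many :join_models, :as => :entity, :dependent => :destroy
      has_many :some_classes, :through => :join_models
      # has_many is a public class method, so it can be invoked on SomeClass directly
      SomeClass.has_many name.tableize.to_sym, :through => :join_models
    end
  end
end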
At my job, we have a pseudo-standard of creating one table to hold the "standard" information for an entity, and a second table, named like 'TableNameDetails', which holds optional data elements. On average, each row in the main table will have about 8-10 detail rows.
My question is: What kind of performance impacts does this have over adding these details as additional nullable columns on the main table?
I have a table with 4 columns, and I need to check to see if a Column Pair exists before inserting a row into the database:
INSERT INTO dbo.tblCallReport_Detail (fkCallReport, fkProductCategory, Discussion, Action) VALUES (?, ?, ?, ?)
The pair in question is fkCallReport and fkProductCategory.
For example, if the row being inserted has fkCallReport = 3 and fkProductCategory = 5, and the database already has both of those values together, it should display an error and ask whether the user would like to combine the Discussion and Action with the existing record.
Keep in mind I'm doing this in VBA Access 2010 and am still very new.
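A sketch of the check I have in mind, using Access's DCount before running the INSERT (the variable names, and the assumption that the table is reachable by DCount, e.g. as a linked table, are mine):
Dim existing As Long
existing = DCount("*", "tblCallReport_Detail", _
                  "fkCallReport = " & lngCallReport & _
                  " AND fkProductCategory = " & lngProductCategory)
If existing > 0 Then
    If MsgBox("This pair already exists. Combine Discussion and Action with the existing record?", _
              vbYesNo + vbQuestion) = vbYes Then
        ' run an UPDATE that appends Discussion/Action to the existing row
    End If
Else
    ' run the INSERT INTO dbo.tblCallReport_Detail ... statement as planned
End If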
Hello, I have a database record that needs to store a 1 or a 0 for each day of the week. Which would be better: bit-shifting each flag into a single integer column named days, or making them separate boolean columns (sunday, monday, tuesday, ...)?
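To make the bitmask option concrete, a small sketch (the table name schedules is an assumption; bit 0 = Sunday through bit 6 = Saturday):
-- mark Sunday (1) and Tuesday (4) as active: 1 + 4 = 5
UPDATE schedules SET days = 5 WHERE id = 1;
-- find every row active on Tuesday (bit value 4)
SELECT * FROM schedules WHERE days & 4 <> 0;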
So, I have this funny requirement of creating an index on a table only on a certain set of rows.
This is what my table looks like:
USER: userid, friendid, created, blah0, blah1, ..., blahN
Now, I'd like to create an index on:
(userid, friendid, created)
but only on those rows where userid = friendid. The reason being that this index is only going to be used to satisfy queries where the WHERE clause contains "userid = friendid". There will be many rows where this is NOT the case, and I really don't want to waste all that extra space on the index.
Another option would be to create a separate query table, populated by a trigger on insert/update of this table, but again I am guessing an index on that table would mean the data gets stored twice.
How does MySQL store primary keys? I mean, is the table ordered on the primary key, or is it ordered by insert order with the PK acting like a normal unique index?
I checked up on clustered indexes (http://dev.mysql.com/doc/refman/5.0/en/innodb-index-types.html), but it seems only InnoDB supports them. I am using MyISAM (I mention this because then I could have created a clustered index on these 3 fields in the query table).
I am basically looking for something like this:
ALTER TABLE USERS ADD INDEX (userid, friendid, created) WHERE userid=friendid
If one has a number of databases (due to separate application front-ends) that provide a complete picture - for example a CRM, accounting, and product database - what methods are available to centralize/abstract this data for easy reporting?
Essentially, I'm wondering if there is a way to automatically pull data from multiple databases into a central repository that is continuously updated from the three databases and which can be used for reporting?
I'm also open to alternative best-practice suggestions.
Simplified table structure (the tables can't be merged at this time):
TableA:
dts_received (datetime)
dts_completed (datetime)
task_a (varchar)
TableB:
dts_started (datetime)
task_b (varchar)
What I would like to do is determine how long a task took to complete.
The join condition would be something like
ON task_a = task_b AND dts_completed < dts_started
The issue is that there may be multiple TableB date-times that occur after dts_completed.
How do I create a join that only returns the first tableB-datetime that occurs after the tableA-datetime?
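A sketch of the kind of query I am picturing (assuming standard SQL; the correlated subquery picks, for each TableA row, the earliest TableB start time that falls after the completion time):
SELECT a.task_a,
       a.dts_received,
       a.dts_completed,
       (SELECT MIN(b.dts_started)
        FROM TableB b
        WHERE b.task_b = a.task_a
          AND b.dts_started > a.dts_completed) AS first_dts_started_after
FROM TableA a;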