I have an ASP.NET MVC website which runs a number of queries for each page. Should I open a single connection per page, or open and close a connection for each query?
This is an extremely common situation, so I'm expecting a good solution. Basically, we need to update counters in our tables. As an example, take a web page visit:
Web_Page
--------
Id
Url
Visit_Count
So in Hibernate, we might have this code:
webPage.setVisitCount(webPage.getVisitCount()+1);
The problem there is that reads in MySQL don't, by default, pay attention to transactions, so a highly trafficked web page will end up with inaccurate counts.
The way I'm used to doing this type of thing is to simply call:
update Web_Page set Visit_Count=Visit_Count+1 where Id=12345;
I guess my question is: how do I do that in Hibernate? And secondly, how can I do a slightly more complex update like the following in Hibernate?
update Web_Page wp set wp.Visit_Count=(select stats.Visits from Statistics stats where stats.Web_Page_Id=wp.Id) + 1 where Id=12345;
I will be printing an Access report. The report will not be printed on regular white paper; it will be printed on top of a pre-printed form with checkboxes and fields on it. I need those checkboxes and fields to be filled in according to the Access data.
Are there any libraries for Access that make this easier? Is there a feature that will help me print at specific coordinates?
So, I have this funny requirement of creating an index on a table only on a certain set of rows.
This is what my table looks like:
USER: userid, friendid, created, blah0, blah1, ..., blahN
Now, I'd like to create an index on:
(userid, friendid, created)
but only on those rows where userid = friendid. The reason being that this index is only going to be used to satisfy queries where the WHERE clause contains "userid = friendid". There will be many rows where this is NOT the case, and I really don't want to waste all that extra space on the index.
Another option would be to create a separate table (a query table) populated by a trigger on insert/update of this table, roughly like the sketch below, but again I am guessing an index on that table would mean that the data is stored twice.
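To make that option concrete, this is roughly what I have in mind (just a sketch, untested; user_self is a name I made up, and a similar trigger would be needed for UPDATE):

-- Side table holding only the rows where userid = friendid
CREATE TABLE user_self (
    userid   INT      NOT NULL,
    friendid INT      NOT NULL,
    created  DATETIME NOT NULL,
    PRIMARY KEY (userid, friendid, created)
);

DELIMITER //
CREATE TRIGGER user_self_after_insert
AFTER INSERT ON `USER`
FOR EACH ROW
BEGIN
    -- Copy the row across only when it would match the WHERE clause
    IF NEW.userid = NEW.friendid THEN
        INSERT INTO user_self (userid, friendid, created)
        VALUES (NEW.userid, NEW.friendid, NEW.created);
    END IF;
END//
DELIMITER ;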
How does MySQL store primary keys? I mean, is the table ordered on the primary key, or is it ordered by insert order with the PK acting like a normal unique index?
I checked up on clustered indexes (http://dev.mysql.com/doc/refman/5.0/en/innodb-index-types.html), but it seems only InnoDB supports them. I am using MyISAM (I mention this because otherwise I could have created a clustered index on these 3 fields in the query table).
I am basically looking for something like this:
ALTER TABLE USERS ADD INDEX (userid, friendid, created) WHERE userid=friendid
I have three master tables for location information:
Country {ID, Name}
State {ID, Name, CountryID}
City {ID, Name, StateID}
Now I have one transaction table called Person which holds the person's name and location information.
My question is: should I have only CityID in the Person table, like this:
Person {ID, Name, CityID}
and have a view over a join query which gives me details like Person {ID, Name, City, State, Country},
or should I replicate the mapping:
Person {ID, Name, CityID, StateID, CountryID}
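To make the first option concrete, the view I have in mind would be roughly this (names as above, syntax not checked):

CREATE VIEW PersonLocation AS
SELECT p.ID, p.Name,
       ci.Name AS City,
       st.Name AS State,
       co.Name AS Country
FROM Person p
JOIN City    ci ON ci.ID = p.CityID
JOIN State   st ON st.ID = ci.StateID
JOIN Country co ON co.ID = st.CountryID;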
Please suggest which one you feel should be selected and why. If there is any other option available, please suggest it.
Thanks in advance.
I have three tables similar to the following:
tblInvoices: Number | Date | Customer
tblInvDetails: Invoice | Quantity | Rate | Description
tblPayments: Invoice | Date | Amount
I have created a query called exInvDetails that adds an Amount column to tblInvDetails:
SELECT tblInvDetails.*, tblInvDetails.Quantity * tblInvDetails.Rate AS Amount
FROM tblInvDetails;
I then created a query exInvoices to add Total and Balance columns to tblInvoices:
SELECT tblInvoices.*,
(SELECT Sum(exInvDetails.Amount) FROM exInvDetails WHERE exInvDetails.Invoice = tblInvoices.Number) AS Total,
(SELECT Sum(tblPayments.Amount) FROM tblPayments WHERE tblPayments.Invoice = tblInvoices.Number) AS Payments,
(Total-Payments) AS Balance
FROM tblInvoices;
If there are no corresponding payments in tblPayments, the fields are null instead of 0. Is there a way to force the resulting query to put a 0 in this column?
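I'm wondering whether wrapping the subqueries in Nz() is the right direction. This is the kind of thing I mean (untested; I'm not even sure Access accepts Nz() around a subquery):

SELECT tblInvoices.*,
       Nz((SELECT Sum(exInvDetails.Amount) FROM exInvDetails
           WHERE exInvDetails.Invoice = tblInvoices.Number), 0) AS Total,
       Nz((SELECT Sum(tblPayments.Amount) FROM tblPayments
           WHERE tblPayments.Invoice = tblInvoices.Number), 0) AS Payments,
       (Total - Payments) AS Balance
FROM tblInvoices;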
Hi,
I have a big database (~4GB), with 2 large tables (~3M records) having ~180K SELECTs/hour, ~2k UPDATEs/hour and ~1k INSERTs+DELETEs/hour.
What would be the best practice to guarantee no locks for the reading tasks while inserting/updating/deleting?
I was thinking about using a NOLOCK hint, but there is so much discussion about this (it's good, it's bad, it depends) that I'm a bit lost. I must say I've tried this in a dev environment and didn't find any problems, but I don't want to put it in production until I get some feedback...
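To be concrete, what I tried in the dev environment looks along these lines (table and column names made up):

-- Read without taking shared locks (READ UNCOMMITTED semantics)
SELECT TOP (100) OrderId, CustomerId, CreatedAt
FROM dbo.Orders WITH (NOLOCK)
WHERE CustomerId = 42
ORDER BY CreatedAt DESC;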
Thank you!
Luiggi
I have select, insert, update and delete queries.
Is it better for performance to write all the queries in the same stored procedure, or should I write each query in a separate stored procedure?
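To clarify, by "all queries in the same stored procedure" I mean something along these lines (a T-SQL-style sketch; the table, column and parameter names are made up), as opposed to four separate procedures with one statement each:

CREATE PROCEDURE dbo.ManageCustomer
    @Action NVARCHAR(10),        -- 'SELECT', 'INSERT', 'UPDATE' or 'DELETE'
    @Id     INT           = NULL,
    @Name   NVARCHAR(100) = NULL
AS
BEGIN
    IF @Action = 'SELECT'
        SELECT Id, Name FROM dbo.Customer WHERE Id = @Id;
    ELSE IF @Action = 'INSERT'
        INSERT INTO dbo.Customer (Name) VALUES (@Name);
    ELSE IF @Action = 'UPDATE'
        UPDATE dbo.Customer SET Name = @Name WHERE Id = @Id;
    ELSE IF @Action = 'DELETE'
        DELETE FROM dbo.Customer WHERE Id = @Id;
END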
I am preparing a chart which will display the number of orders placed on each day of the current month and year. I want the count of orders placed for each day.
I am showing the count of orders on the y-axis and the day on the x-axis.
In my database, there is a table called "order" in which order data is stored: order date, user_id, order_price, etc. For example, on 4 July, 10 orders were placed; on 5 July, 20 orders were placed; and so on.
How can I get the count of orders placed for each day of the current month?
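Roughly, the result I'm after would come from something like this (MySQL-style; the order_date column name is a guess):

SELECT DATE(order_date) AS order_day,
       COUNT(*)         AS orders_placed
FROM `order`
WHERE YEAR(order_date)  = YEAR(CURDATE())
  AND MONTH(order_date) = MONTH(CURDATE())
GROUP BY DATE(order_date)
ORDER BY order_day;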
Hi,
I have two tables:
Order(id, date, note)
and
Delivery(Id, Note, Date)
I want to create a trigger that updates the date in Delivery when the date is updated in Order.
I was thinking of doing something like:
CREATE OR REPLACE TRIGGER your_trigger_name
BEFORE UPDATE
ON Order
DECLARE
BEGIN
  UPDATE Delivery SET date = ??? WHERE id = ???;
END;
How do I get the date and row id?
thanks
I have a LINQ to SQL query that I now want to precompile.
var unorderedc =
from insp in sq.Inspections
where insp.TestTimeStamp > dStartTime && insp.TestTimeStamp < dEndTime
&& insp.Model == "EP" && insp.TestResults != "P"
group insp by new { insp.TestResults, insp.FailStep } into grp
select new
{
FailedCount = (grp.Key.TestResults == "F" ? grp.Count() : 0),
CancelCount = (grp.Key.TestResults == "C" ? grp.Count() : 0),
grp.Key.TestResults,
grp.Key.FailStep,
PercentFailed = Convert.ToDecimal(1.0 * grp.Count() / tcount * 100)
};
I have created this delegate:
public static readonly Func<SQLDataDataContext, int, string, string, DateTime, DateTime, IQueryable<CalcFailedTestResult>>
GetInspData = CompiledQuery.Compile((SQLDataDataContext sq, int tcount, string strModel, string strTest, DateTime dStartTime,
DateTime dEndTime, IQueryable<CalcFailedTestResult> CalcFailed) =>
from insp in sq.Inspections
where insp.TestTimeStamp > dStartTime && insp.TestTimeStamp < dEndTime
&& insp.Model == strModel && insp.TestResults != strTest
group insp by new { insp.TestResults, insp.FailStep } into grp
select new
{
FailedCount = (grp.Key.TestResults == "F" ? grp.Count() : 0),
CancelCount = (grp.Key.TestResults == "C" ? grp.Count() : 0),
grp.Key.TestResults,
grp.Key.FailStep,
PercentFailed = Convert.ToDecimal(1.0 * grp.Count() / tcount * 100)
});
The syntax error is on the CompiledQuery.Compile() statement.
It appears to be related to the use of the select new {} syntax.
In other precompiled queries I have written, I have had to use just the select projection by itself. In this case I need to perform the grp.Count() and the immediate-if logic.
I have searched SO and other references but cannot find the answer.
Hi
In my model I am selecting a field as follows:
$query1 = $this->db->query("SELECT dPassword
FROM tbl_login
WHERE dEmailID='[email protected]'");
How do I return dPassword as a variable to my controller?
I tried it this way: return dpassword;
I need to know how to add 2 hours to the 'Completed' timestamp below.
Here is the SELECT statement:
SELECT Tsk.task_id, Tsk.org_id, Tsk.completed, Tsk.assgn_acct_id, name
FROM tdstelecom.tasks AS Tsk
WHERE Tsk.task_id = '11094836'
  AND DATE(Tsk.completed) < CURDATE()
  AND DATE(Tsk.completed) >= DATE_SUB(CURDATE(), INTERVAL 180 DAY)
Here are the results: 2012-08-22 14:18:14
Desired results: 2012-08-22 16:18:14
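Is something like this the right way to do it? I'm guessing at DATE_ADD:

SELECT Tsk.task_id, Tsk.org_id,
       DATE_ADD(Tsk.completed, INTERVAL 2 HOUR) AS completed_plus_2h,
       Tsk.assgn_acct_id, name
FROM tdstelecom.tasks AS Tsk
WHERE Tsk.task_id = '11094836'
  AND DATE(Tsk.completed) < CURDATE()
  AND DATE(Tsk.completed) >= DATE_SUB(CURDATE(), INTERVAL 180 DAY);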
Hello,
I'm developing a database to store statistics for a sports league.
I'd like to show several tables:
- league table that indicates the position of the team in the current and previous fixture
- table that shows the position of a team in every fixture in the championship
I have a matches table:
Matches (IdMatch, IdTeam1, IdTeam2, GoalsTeam1, GoalsTeam2)
With this table I can calculate the total points of every team based on the matches the team has played, but every time I want to show the league table I have to recalculate the points.
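For example, this is roughly the query I run each time to build the league table (assuming 3 points for a win and 1 for a draw):

SELECT t.IdTeam, SUM(t.Points) AS TotalPoints
FROM (
    SELECT IdTeam1 AS IdTeam,
           CASE WHEN GoalsTeam1 > GoalsTeam2 THEN 3
                WHEN GoalsTeam1 = GoalsTeam2 THEN 1
                ELSE 0 END AS Points
    FROM Matches
    UNION ALL
    SELECT IdTeam2,
           CASE WHEN GoalsTeam2 > GoalsTeam1 THEN 3
                WHEN GoalsTeam2 = GoalsTeam1 THEN 1
                ELSE 0 END
    FROM Matches
) AS t
GROUP BY t.IdTeam
ORDER BY TotalPoints DESC;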
I also have a problem calculating the position a team was in at each of the last 10 fixtures, because I have to run 10 queries.
Storing the league table for every fixture in a database table is another approach, but every time I change a match that has already been played I have to recalculate every fixture from that point on...
Is there a better approach for this problem?
Thanks
I have a string of length 144,000 characters which has to be passed as a parameter to a stored procedure that runs a SELECT query on a table.
When I put this string directly into the query text (in C#), it works fine, but when I pass it as a parameter to the stored procedure it doesn't work.
Here is my stored procedure, in which I have declared this parameter as NVARCHAR(MAX):
------------------------------------------------------
set ANSI_NULLS ON
set QUOTED_IDENTIFIER ON
go
CREATE PROCEDURE [dbo].[ReadItemData](@ItemNames NVARCHAR(MAX),@TimeStamp as DATETIME)
AS
select * from ItemData
where ItemName in (@ItemNames) AND TimeStamp=@TimeStamp
---------------------------------------------------------------------
Here the parameter @ItemNames is a string concatenated from different names, such as
'Item1','Item2','Item3', etc.
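In other words, the ad-hoc version that works when I build the whole SQL string in C# is effectively this (the list is shortened and the timestamp is just an example):

SELECT * FROM ItemData
WHERE ItemName IN ('Item1', 'Item2', 'Item3')
  AND TimeStamp = '2012-01-01 00:00:00';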
Can anyone tell what went wrong here?
Thanks & Regards
Padma
I'm using the Repository pattern with some LINQ to SQL objects. My repository objects all implement IDisposable, and the Dispose() method does only one thing: it calls Dispose() on the DataContext. Whenever I use a repository, I wrap it in a using block, like this:
public IEnumerable<Person> SelectPersons()
{
using (var repository = _repositorySource.GetNew<Person>(dc => dc.Person))
{
return repository.GetAll();
}
}
This method returns an IEnumerable<Person>, so if my understanding is correct, no querying of the database actually takes place until the IEnumerable<Person> is traversed (e.g., by converting it to a list or array or by using it in a foreach loop), as in this example:
var persons = gateway.SelectPersons();
// Dispose() is fired here
var personViewModels = (
from b in persons
select new PersonViewModel
{
Id = b.Id,
Name = b.Name,
Age = b.Age,
OrdersCount = b.Order.Count()
}).ToList(); // executes queries
In this example, Dispose() gets called immediately after setting persons, which is an IEnumerable<Person>, and that's the only time it gets called.
So, a couple questions:
How does this work? How can a disposed DataContext still query the database for results when I walk the IEnumerable<Person>?
What does Dispose() actually do?
I've heard that it is not necessary (e.g., see this question) to dispose of a DataContext, but my impression was that it's not a bad idea. Is there any reason not to dispose of it?
I'm new to MySQL and struggling to find a version of the server and Workbench that works stably on my 64-bit Windows 7 machine.
I've decided to attempt to completely remove MySQL from my machine and restart the installation process from scratch.
However, after uninstalling all software linked with MySQL through the conventional Control Panel uninstall process, some MySQL Windows services still remain on my machine.
I can't see any obvious method to remove these and they have since been causing me difficulties when trying to install different versions of MySQL.
Could anyone please advise?
I am trying to add SQL_CALC_FOUND_ROWS to a query (please note this isn't for pagination).
Note that I am trying to add this to a CakePHP query; the code I currently have is below:
return $this->find('all', array(
'conditions' => $conditions,
'fields'=>array('SQL_CALC_FOUND_ROWS','Category.*','COUNT(`Entity`.`id`) as `entity_count`'),
'joins' => array('LEFT JOIN `entities` AS Entity ON `Entity`.`category_id` = `Category`.`id`'),
'group' => '`Category`.`id`',
'order' => $sort,
'limit'=>$params['limit'],
'offset'=>$params['start'],
'contain' => array('Domain' => array('fields' => array('title')))
));
Note the 'fields' => array('SQL_CALC_FOUND_ROWS', ... entry. This obviously doesn't work, as CakePHP tries to apply SQL_CALC_FOUND_ROWS to the table, e.g. SELECT `Category`.`SQL_CALC_FOUND_ROWS`,
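For reference, the raw SQL I'm effectively trying to get CakePHP to generate is something like this (table names follow Cake conventions; the limit/offset values are just examples):

SELECT SQL_CALC_FOUND_ROWS `Category`.*, COUNT(`Entity`.`id`) AS entity_count
FROM `categories` AS `Category`
LEFT JOIN `entities` AS `Entity` ON `Entity`.`category_id` = `Category`.`id`
GROUP BY `Category`.`id`
LIMIT 20 OFFSET 0;

SELECT FOUND_ROWS();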
Is there any way of doing this? Any help would be greatly appreciated, thanks.
INFORMIX-SE with ISQL 7.3:
I have separate tables for Loan, Purchase & Sales transactions. Each table's rows are
joined to their respective customer rows by:
customer.id [serial] = loan.foreign_id [integer];
= purchase.foreign_id [integer];
= sale.foreign_id [integer];
I would like to consolidate the three tables into one table called "transaction",
where a column "transaction.trx_type" [char(1)] {L=Loan, P=Purchase, S=Sale} identifies
the transaction type. Is this a good idea or is it better to keep them in separate tables?
Storage space is not a concern; I think it would be easier, programming- and user-wise, to
have all types of transactions under one table.
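To illustrate, the consolidated table I have in mind looks roughly like this (just the shared columns; I haven't checked whether "transaction" clashes with a reserved word in SE):

CREATE TABLE transaction
(
    id         SERIAL,      -- transaction id
    foreign_id INTEGER,     -- joins to customer.id
    trx_type   CHAR(1)      -- 'L' = Loan, 'P' = Purchase, 'S' = Sale
);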
I have a table with products, their amount and their price. I need to select all entries where the average price per article falls within a given range.
My query so far:
SELECT productid,AVG(SUM(price)/SUM(amount)) AS avg FROM stock WHERE avg>=$from AND avg<=$to GROUP BY productid
If I do this, it tells me avg doesn't exist.
Also, I obviously need to group by because the sum and average need to be per wine.
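Is moving the range check into a HAVING clause the right direction? Something like this, with example values in place of $from and $to:

SELECT productid,
       SUM(price) / SUM(amount) AS avg_price
FROM stock
GROUP BY productid
HAVING SUM(price) / SUM(amount) BETWEEN 10 AND 20;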
I am creating a table (table A) that will have a number of columns (of course), and there will be another table (table B) that holds metadata associated with rows in table A.
I am working with a multi-site implementation that has one database for the whole shebang. Rows in table A could belong to any number of sites, but must belong to at least one.
The problem I have is that I am not sure what the best practice is for defining which site(s) each row in table A belongs to. I want performance and scalability. There is no fixed number of sites going forward; rows in table A could belong to any number of sites in the future. Right now there are only 3.
My initial thought is to have a primary site ID in table A, and then have rows in the metadata table B define additional sites as needed.
Another thought is to have a boolean column in table A for each site, indicating whether the row belongs to that site.
Lastly, I have thought about having another table that maps rows in table A to sites, something like the sketch below.
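To make that last option concrete, I mean something like this (names made up):

CREATE TABLE table_a_site (
    table_a_id INT NOT NULL,   -- row in table A
    site_id    INT NOT NULL,   -- site it belongs to
    PRIMARY KEY (table_a_id, site_id)
);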
What is the best way to associate rows in a table with any number of sites with performance and scalability in mind?
I recently decided to crawl over the indexes on one of our most heavily used databases to see which were suboptimal. I generated the built-in Index Usage Statistics report from SSMS, and it's showing me a great deal of information that I'm unsure how to understand.
I found an article at Carpe Datum about the report, but it doesn't tell me much more than I could assume from the column titles.
In particular, the report differentiates between User activity and system activity, and I'm unsure what qualifies as each type of activity.
I assume that any query that uses a given index increases the '# of user X' columns. But what increases the system columns? Building statistics?
Is there anything that depends on the user or role(s) of a user that's running the query?
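For reference, I believe the report is built on sys.dm_db_index_usage_stats; these are the columns I'm asking about:

SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name                   AS index_name,
       s.user_seeks, s.user_scans, s.user_lookups, s.user_updates,
       s.system_seeks, s.system_scans, s.system_lookups, s.system_updates
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id
 AND i.index_id  = s.index_id
WHERE s.database_id = DB_ID();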