Search Results

Search found 48190 results on 1928 pages for 'mysql slow query log'.


  • MySQL: Is it faster to use inserts and updates instead of insert on duplicate key update?

    - by Nir
    I have a cron job that updates a large number of rows in a database. Some of the rows are new and therefore inserted, and some are updates of existing ones and therefore updated. I use INSERT ... ON DUPLICATE KEY UPDATE for the whole data set and get it done in one call. But I actually know which rows are new and which are updated, so I can also do the inserts and updates separately. Will separating the inserts and updates have an advantage in terms of performance? What are the mechanics behind this? Thanks!
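    For reference, a minimal sketch of the two approaches being compared (table and column names here are made up for illustration):

      -- one call, MySQL decides per row:
      INSERT INTO stats (id, hits)
      VALUES (1, 10)
      ON DUPLICATE KEY UPDATE hits = VALUES(hits);

      -- separate statements, when the application already knows which is which:
      INSERT INTO stats (id, hits) VALUES (2, 5);   -- known-new row
      UPDATE stats SET hits = 10 WHERE id = 1;      -- known-existing row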

    Read the article

  • How to return a value from a MySQL stored proc?

    - by karthik
    I am using the stored proc below to generate the INSERT statements for a specified table. It builds without any errors. Now I want to return the result set in "v_string" as the output of the SP.

      DELIMITER $$
      DROP PROCEDURE IF EXISTS `demo`.`InsertGenerator` $$
      CREATE DEFINER=`root`@`localhost` PROCEDURE `InsertGenerator`()
      SWL_return:
      BEGIN
      -- SQLWAYS_EVAL# to retrieve column specific information
      -- SQLWAYS_EVAL# table
      DECLARE v_string NATIONAL VARCHAR(3000);
      -- SQLWAYS_EVAL# first half
      -- SQLWAYS_EVAL# tement
      DECLARE v_stringData NATIONAL VARCHAR(3000);
      -- SQLWAYS_EVAL# data
      -- SQLWAYS_EVAL# statement
      DECLARE v_dataType NATIONAL VARCHAR(1000);
      -- SQLWAYS_EVAL#
      -- SQLWAYS_EVAL# columns
      DECLARE v_colName NATIONAL VARCHAR(50);
      DECLARE NO_DATA INT DEFAULT 0;
      DECLARE cursCol CURSOR FOR
        SELECT column_name, data_type FROM `columns`
        -- WHERE table_name = v_tableName;
        WHERE table_name = 'tbl_users';
      DECLARE CONTINUE HANDLER FOR SQLEXCEPTION BEGIN SET NO_DATA = -2; END;
      DECLARE CONTINUE HANDLER FOR NOT FOUND SET NO_DATA = -1;
      OPEN cursCol;
      SET v_string = CONCAT('INSERT ', v_tableName, '(');
      SET v_stringData = '';
      SET NO_DATA = 0;
      FETCH cursCol INTO v_colName, v_dataType;
      IF NO_DATA <> 0 then
        -- NOT SUPPORTED print CONCAT('Table ',@tableName, ' not found, processing skipped.')
        close cursCol;
        LEAVE SWL_return;
      end if;
      WHILE NO_DATA = 0 DO
        IF v_dataType in('varchar','char','nchar','nvarchar') then
          SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# ll(',v_colName,'SQLWAYS_EVAL# ''+');
        ELSE if v_dataType in('text','ntext') then
          -- SQLWAYS_EVAL#
          -- SQLWAYS_EVAL# else
          SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# ll(cast(',v_colName,'SQLWAYS_EVAL# 00)),'''')+'''''',''+');
        ELSE IF v_dataType = 'money' then
          -- SQLWAYS_EVAL# doesn't get converted
          -- SQLWAYS_EVAL# implicitly
          SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# y,''''''+ isnull(cast(',v_colName,'SQLWAYS_EVAL# 0)),''0.0000'')+''''''),''+');
        ELSE IF v_dataType = 'datetime' then
          SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# time,''''''+ isnull(cast(',v_colName, 'SQLWAYS_EVAL# 0)),''0'')+''''''),''+');
        ELSE IF v_dataType = 'image' then
          SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# ll(cast(convert(varbinary,',v_colName, 'SQLWAYS_EVAL# 6)),''0'')+'''''',''+');
        ELSE
          SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# ll(cast(',v_colName,'SQLWAYS_EVAL# 0)),''0'')+'''''',''+');
        end if; end if; end if; end if; end if;
        SET v_string = CONCAT(v_string,v_colName,',');
        SET NO_DATA = 0;
        FETCH cursCol INTO v_colName,v_dataType;
      END WHILE;
      END $$
      DELIMITER ;
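    A minimal sketch (procedure name hypothetical) of the two usual ways to get a value out of a MySQL stored procedure: hand it back through an OUT parameter, or end the procedure with a SELECT so it comes back as a result set.

      DELIMITER $$
      CREATE PROCEDURE ReturnExample(OUT p_result NATIONAL VARCHAR(3000))
      BEGIN
        DECLARE v_string NATIONAL VARCHAR(3000);
        SET v_string = 'INSERT tbl_users(...)';  -- built up as in the proc above
        SET p_result = v_string;                 -- option 1: OUT parameter
        SELECT v_string;                         -- option 2: trailing SELECT
      END $$
      DELIMITER ;

      CALL ReturnExample(@s);
      SELECT @s;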

    Read the article

  • Cannot set password for MySQL server in CentOS 6.2

    - by HarshanaD
    I have installed mysql and then mysql-server. Then I start the mysql daemon and follow the steps below:

      # chkconfig --level 2345 mysqld on
      # mysqladmin -u root password testpassword

    But I cannot set the password because it gives me the error below:

      Access denied for user root@localhost (using password: NO)

    I logged in as the root user and performed those steps. I even uninstalled the MySQL server and reinstalled it, but the same problem occurred.
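    A hedged sketch of a workaround: "using password: NO" usually means the account already has a password (or an anonymous account is matching first), so setting it from an authenticated mysql session instead of mysqladmin may work:

      -- from a mysql prompt opened with: mysql -u root -p
      SET PASSWORD FOR 'root'@'localhost' = PASSWORD('testpassword');
      FLUSH PRIVILEGES;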

    Read the article

  • Can you reuse a mysql result set in PHP?

    - by MarathonStudios
    I have a result set I pull from a large database: $result = mysql_query($sql); I loop through this recordset once to pull specific bits of data and get averages using while($row = mysql_fetch_array($result)). Later in the page, I want to loop through this same recordset again and output everything - but because I used the recordset earlier, my second loop returns nothing. I finally hacked around this by looping through a second identical recordset ($result2 = mysql_query($sql);), but I hate to make the same SQL call twice. Any way I can loop through the same dataset multiple times?

    Read the article

  • MySQL: selecting a default value if there is no result?

    - by Kenan
    I have 2 tables: members and comments. I select all members and then join comments. But from comments I'm selecting SUMs of points, and if a user has never commented, I can't get that user in the listing. So how do I select a default value for the SUMs, or is there some other solution?

      SELECT c.comment_id AS item_id, m.member_id AS member_id, m.avatar,
             SUM(c.vote_value) AS vote_value, SUM(c.best) AS best,
             SUM(c.vote_value) + SUM(c.najbolji)*10 AS total
      FROM members m
      LEFT JOIN comments c ON m.member_id = c.author_id
      GROUP BY c.author_id
      ORDER BY m.member_id DESC
      LIMIT {$sql_start}, {$sql_pokazi}
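    A sketch of the usual fix (extra columns trimmed): COALESCE turns the NULL sums into 0, and grouping on the members side keeps members with no comments as their own rows instead of collapsing them into one NULL group:

      SELECT m.member_id, m.avatar,
             COALESCE(SUM(c.vote_value), 0) AS vote_value,
             COALESCE(SUM(c.best), 0) AS best,
             COALESCE(SUM(c.vote_value), 0) + COALESCE(SUM(c.najbolji), 0) * 10 AS total
      FROM members m
      LEFT JOIN comments c ON m.member_id = c.author_id
      GROUP BY m.member_id
      ORDER BY m.member_id DESC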

    Read the article

  • mysql query not running correctly from inside the application

    - by Mala
    I am completely stumped. Here is my PHP (CodeIgniter) code:

      function mod() {
          $uid = $this->session->userdata('uid');
          $pid = $this->input->post('pid');
          if ($this->_verify($uid,$pid)) {
              $name = $this->input->post('name');
              $price = $this->input->post('price');
              $curr = $this->input->post('curr');
              $url = $this->input->post('url');
              $query = $this->db->query("UPDATE items SET name=".$this->db->escape($name).", price=".$this->db->escape($price).", currency=".$this->db->escape($curr),", url=".$this->db->escape($url)." WHERE pid=".$this->db->escape($pid)." LIMIT 1");
          }
          header('location: '.$this->session->userdata('current'));
      }

    The purpose of this code is to modify the properties (name, price, currency, url) of a row in the 'items' table (primary key is pid). However, for some reason, allowing this function to run once modifies the name, price, currency and url of ALL entries in the table, regardless of their pid and of the LIMIT 1 thing I tacked on the end of the query. It's as if the last line of the query is being completely ignored. As if this wasn't strange enough, I replaced "$query = $this->db->query(" with an "echo" to see the SQL query being run, and it outputs a query much like I would expect:

      UPDATE items SET name='newname', price='newprice', currency='newcurrency', url='newurl' WHERE pid='10' LIMIT 1

    Copy-pasting this into a MySQL window acts exactly as I want: it modifies the row with the selected pid. What is going on here???

    Read the article

  • Most efficient way to make an activity log

    - by Nathan
    I am making a "recent activity" tab to profiles on my site and I also am going to have a log for moderators to see everything that happens on the site. This would require making an activity log of some sort. I just don't know what would be better. I have 2 options: Make a table called "activity" and then every time someone does something, add a record to it with the type of action, user id, timestamp, etc. Problem: table could get very long. Join all 3 tables (questions, answers, answer_comments) and then somehow show all these on the page in the order in which the action was taken. Problem: this would be extremely hard because I have no clue how I could make it say "John commented on an answer on Question Title Here" by just joining 3 tables. Does anyone know of a better way of making an activity log in this situation? I am using PHP and MySQL. If this is either too inefficient or hard I will probably just forget the Recent Activity tab on profiles but I still need an activity log for moderators. Here's some SQL that I started making for option 2, but this would not work because there is no way of detecting whether the action is a comment, question, or answer when I echo the info in a while loop: SELECT q.*, a.*, ac.* FROM questions q JOIN answers a ON a.questionid = q.qid JOIN answer_comments ac ON c.answerid = a.ans_id WHERE q.user = $userid AND a.userid = $userid AND ac.userid = $userid ORDER BY q.created DESC, a.created DESC, ac.created DESC Thanks in advance for any help!

    Read the article

  • How do you DELETE rows in a mysql table that have a field IN the results of another query?

    - by user354825
    Here's what the statement looks like:

      DELETE FROM videoswatched vw2
      WHERE vw2.userID IN (
        SELECT vw.userID
        FROM videoswatched vw
        JOIN users u ON vw.userID = u.userID
        WHERE u.companyID = 1000
        GROUP BY userID
      )

    That looks decent to me, and the SELECT statement works on its own (producing rows with a single column, 'userID'). Basically, I want to delete entries in the 'videoswatched' table where the userID in the videoswatched entry, after joining to the users table, is found to have companyID = 1000. How can I do this without getting the error in my SQL syntax? It says the error is near:

      vw2 WHERE vw2.userID IN ( SELECT vw.userID FROM videoswatched vw JOIN users u

    and on line 1. Thanks!
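    A sketch of the usual rewrite: MySQL (at least the 5.x versions) does not accept an alias after the table name in single-table DELETE syntax (hence the error near "vw2"), and it also refuses a DELETE whose subquery reads from the same table it is deleting from. The multi-table DELETE ... JOIN form sidesteps both:

      DELETE vw
      FROM videoswatched vw
      JOIN users u ON vw.userID = u.userID
      WHERE u.companyID = 1000;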

    Read the article

  • How can I join 3 tables with mysql & php?

    - by steven
    Check out the page http://www.mujak.com/test/test3.php. It pulls the user's post, username, xbc/xlk tags etc., which is perfect... BUT since I am pulling information from a MyBB bulletin board system, it's quite different. When replying, people are allowed to change the "Thread Subject" by simply replying and changing it. I don't want it to SHOW the changed subject title, just the original title of all posts in that thread. By default it replies with "RE: thread title". They can easily edit this, and it will show up in the "Subject" cell, and people won't know which thread it was posted in because they changed it when replying to the post. So I just want to keep the original thread title when they are replying. Make sense?

      Tables: mybb_users      Fields: uid, username
      Tables: mybb_userfields Fields: ufid
      Tables: mybb_posts      Fields: pid, tid, replyto, subject, ufid, username, uid, message
      Tables: mybb_threads    Fields: tid, fid, subject, uid, username, lastpost, lastposter, lastposteruid

    I have tried multiple queries with no success:

      $result = mysql_query("
        SELECT * FROM mybb_users
        LEFT JOIN (mybb_posts, mybb_userfields, mybb_threads)
        ON ( mybb_userfields.ufid=mybb_posts.uid
             AND mybb_threads.tid=mybb_posts.tid
             AND mybb_users.uid=mybb_userfields.ufid )
        WHERE mybb_posts.fid=42");

      $result = mysql_query("
        SELECT * FROM mybb_users
        LEFT JOIN (mybb_posts, mybb_userfields, mybb_threads)
        ON ( mybb_userfields.ufid=mybb_posts.uid
             AND mybb_threads.tid=mybb_posts.tid
             AND mybb_users.uid=mybb_posts.uid )
        WHERE mybb_threads.fid=42");

      $result = mysql_query("
        SELECT * FROM mybb_posts
        LEFT JOIN (mybb_userfields, mybb_threads)
        ON ( mybb_userfields.ufid=mybb_posts.uid
             AND mybb_threads.tid=mybb_posts.tid )
        WHERE mybb_posts.fid=42");
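    A sketch of one way to keep the original title: take subject from mybb_threads (the thread's own row) rather than from mybb_posts (the per-reply subject), joining on tid, using the table and column names listed above:

      SELECT p.pid, p.uid, u.username, p.message,
             t.subject AS thread_subject
      FROM mybb_posts p
      JOIN mybb_threads t ON t.tid = p.tid
      JOIN mybb_users u ON u.uid = p.uid
      LEFT JOIN mybb_userfields f ON f.ufid = p.uid
      WHERE p.fid = 42;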

    Read the article

  • MySQL console slow on import of huge SQL files

    - by Kennethvr
    My import of SQL via the MySQL console is rather slow, and as our SQL file grows every day, I would like to know if there are any alternatives for importing a SQL file faster. Changing to Oracle or other systems is not an option; the configuration has to stay the same. Currently the SQL file is 1.5 GB. I'm on WAMP with Apache 2.2.14, PHP 5.2.11 and MySQL 5.1.41. Any suggestions?
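    One configuration-free thing worth trying (a hedged sketch; the dump path is hypothetical): switch off the per-row bookkeeping for the duration of the import and commit once at the end. This often speeds up large InnoDB imports considerably:

      SET autocommit = 0;
      SET unique_checks = 0;
      SET foreign_key_checks = 0;
      SOURCE D:/dumps/big_dump.sql;
      COMMIT;
      SET foreign_key_checks = 1;
      SET unique_checks = 1;
      SET autocommit = 1;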

    Read the article

  • MySQL: how do I make an exact working copy of a table?

    - by epo
    I am working on a restructuring of a legacy database and its associated applications. To keep old and new working in parallel I have created a development database and copied the tables of interest to it, e.g.:

      create database devdb;
      drop table if exists devdb.tab1;
      CREATE TABLE devdb.tab1 like working.tab1;
      insert into devdb.tab1 select * from working.tab1;

    Having done this I notice that triggers affecting tab1 have not been copied over. Is there any way in which I can produce a working copy of tab1, i.e. data, permissions, triggers, everything?
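    For the triggers specifically, a sketch: CREATE TABLE ... LIKE copies column definitions and indexes but not triggers or grants, so each trigger has to be listed and re-created on the copy. The trigger name and body below are hypothetical; take the real ones from SHOW TRIGGERS or SHOW CREATE TRIGGER:

      SHOW TRIGGERS FROM working LIKE 'tab1';
      CREATE TRIGGER devdb.tab1_before_insert
        BEFORE INSERT ON devdb.tab1
        FOR EACH ROW SET NEW.modified = NOW();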

    Read the article

  • MySQL - Limit a left join to the first date-time that occurs?

    - by John M
    Simplified table structure (the tables can't be merged at this time):

      TableA: dts_received (datetime), dts_completed (datetime), task_a (varchar)
      TableB: dts_started (datetime), task_b (varchar)

    What I would like to do is determine how long a task took to complete. The join parameter would be something like:

      ON task_a = task_b AND dts_completed < dts_started

    The issue is that there may be multiple date-times that occur after dts_completed. How do I create a join that only returns the first TableB datetime that occurs after the TableA datetime?
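    A sketch using MIN() in a correlated subquery to pick just the earliest start time after each completion:

      SELECT a.task_a,
             a.dts_completed,
             (SELECT MIN(b.dts_started)
              FROM TableB b
              WHERE b.task_b = a.task_a
                AND b.dts_started > a.dts_completed) AS first_dts_started
      FROM TableA a;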

    Read the article

  • Back up a MySQL database from the command line

    - by srk
    I am able to back up a MySQL database via the command line by executing the following:

      "C:\MySQL\MySQL Server 5.0\bin\mysqldump" -uroot -ppassword sample > "D:/admindb/AAR12.sql"

    But there are no DROP and CREATE database statements in my .sql file. What should I add to the syntax to get the CREATE info into my generated .sql file?
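    A hedged sketch of the same command with the two relevant mysqldump options: --databases makes the dump include CREATE DATABASE and USE statements, and --add-drop-database puts a DROP DATABASE in front of each CREATE:

      "C:\MySQL\MySQL Server 5.0\bin\mysqldump" -uroot -ppassword --databases --add-drop-database sample > "D:/admindb/AAR12.sql"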

    Read the article

  • MySQL ~ Why does this query only select a single row?

    - by Joe
    SELECT * FROM tbl_houses
    WHERE (SELECT HousesList FROM tbl_lists WHERE tbl_lists.ID = '123')
          LIKE CONCAT('% ', tbl_houses.ID, '#')

    ^ It only selects the row from tbl_houses of the last occurring tbl_houses.ID inside tbl_lists.HousesList. I need it to select ALL the rows where any ID from tbl_houses exists within tbl_lists.HousesList.
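    A sketch of why, assuming HousesList stores entries like '... 12# 34#': the pattern CONCAT('% ', tbl_houses.ID, '#') has no trailing wildcard, so it can only match an ID that sits at the very end of the string. Adding a trailing % lets every listed ID match:

      SELECT h.*
      FROM tbl_houses h
      WHERE (SELECT HousesList FROM tbl_lists WHERE tbl_lists.ID = '123')
            LIKE CONCAT('% ', h.ID, '#%');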

    Read the article

  • Problems while trying to make a query with variables in the conditions (stored procedure)

    - by pablo89
    Hi!! I'm having a problem. I'm trying to do a query... I remember that in the past I did something like this, but today this query is returning nothing: no error, no data, just nothing. The query is something like this:

      SELECT field1, @variableX:=field2
      FROM table
      WHERE (SELECT COUNT(fieldA) FROM table2
             WHERE fieldB=@variableX AND fieldC=0) > 0
        AND (SELECT COUNT(fieldA) FROM table2
             WHERE fieldB=@variableX AND fieldC=4) = 0;

    I also tried this query, but it didn't work either (it also gave no error):

      SELECT field1, @variableX:=field2,
             @variableY:=(SELECT COUNT(fieldA) FROM table2
                          WHERE fieldB=@variableX AND fieldC=0),
             @variableZ:=(SELECT COUNT(fieldA) FROM table2
                          WHERE fieldB=@variableX AND fieldC=4)
      FROM table
      WHERE @variableY > 0 AND @variableZ = 0;

    As you can see, what I'm trying to do in the 1st query is use a variable in the conditions; in the 2nd query I'm trying to create some variables and evaluate them in the conditions. At the end, in the 2nd query, @variableY=1 AND @variableZ=0, but I don't know why the query returns an empty result. What could be wrong here? Any comment or suggestion is welcome! Thanks! Bye!
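    For what it's worth, a sketch of a rewrite that avoids user variables entirely: MySQL does not guarantee the order in which @variables are assigned and read within a single statement, so a WHERE clause that reads a variable assigned in the same SELECT can legitimately see its old (or NULL) value. Correlated subqueries on the row's own column express the same conditions deterministically (placeholder names from the question):

      SELECT t.field1, t.field2
      FROM `table` t
      WHERE (SELECT COUNT(fieldA) FROM table2
             WHERE fieldB = t.field2 AND fieldC = 0) > 0
        AND (SELECT COUNT(fieldA) FROM table2
             WHERE fieldB = t.field2 AND fieldC = 4) = 0;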

    Read the article

  • MySQL Non Index Queries Analysis

    - by Markii
    I'm using the log-queries-not-using-indexes option, but it also logs queries that do use indexes and are just more advanced or use IFs. Is there a parser or a program out there that can analyze the log and give me a literal output saying "table.column should be an index"? Thanks

    Read the article

  • Is there a word or description for this type of query?

    - by Nick
    We have the requirement to find a result in a collection of records based on a prioritised set of search criteria against a relational DB (I'm talking indexed-field matching here rather than text search). The way we are thinking about designing the query is to begin with a highly refined and specific set of criteria. If there are no results for this initial query, we want to progressively relax the criteria one by one, in order of decreasing priority, querying each time with a less specific set of criteria until we find a result we can accept. Alternatively, we have considered starting with a smaller set of criteria and adding criteria until we have narrowed the number of results down to the last set. What I would like to know is whether an existing term exists to describe this type of query, so that we can model our own on existing patterns and use best practice.

    Read the article

  • Late feedback

    - by Sveta Smirnova
    The MySQL Community team asked me to write about DevConf 2013 a few months ago. The conference was in June 2013, but I remembered this promise of mine only now: a month after participating in MySQL Connect and an Expert Troubleshooting seminar (change country to United Kingdom if you see a blank page). I think it is too late for the feedback, but I still have a few thoughts which I want to record. DevConf (former PHPConf) was always a place where I tried new topics. At first because I know the audience there very well and they would be bored if I repeated a story I told last year, but also because it is much easier to get feedback in your own native language. But in recent years this habit of mine seems to have started to change, and I presented an improved version of my 2012 MySQL Connect talk about MySQL backups. Of course, I also had a seminar with a unique topic, made for this conference for the first time: Troubleshooting MySQL Performance with EXPLAIN and Using Performance Schema to Troubleshoot MySQL. And these topics, improved, were presented at the expert seminar. It is interesting how my habits change and how one public speaking activity interferes with the next one. What is good about DevConf is that it forces you to create new ideas and execute them really well, because the audience is not forgiving at all and catches everything you miss or prepared not well enough. This can be bad if you want to give a marketing-style talk for free, but it allows you to present technical features in really good detail: all these sudden discussions really help. In 2013 Oracle had a booth at the conference and was represented by a bunch of people. Dmitry Lenev presented the topic "New features of replication in MySQL 5.6" and Victoria Reznichenko worked on the booth. What was new at the conference this year is greater interest in NoSQL, scaling, and fast development solutions. This, unfortunately, means interest in MySQL is not as strong as it once was. However, at the same time, the "Common" track was really a MySQL track: not only Oracle, but people from other companies presented about it.

    Read the article

  • Many Associations Leading to Slow Query

    - by Joey Cadle
    I currently have a database that has a lot of many-to-many associations. I have services, which have many variations, which have many staff who can perform the variation, who then have details on themselves like name, role, etc. At 10 services with 3 variations each, and up to 4 out of 20 staff attached to each service, even something as simple as getting all variations and the staff associated with them takes 4s. Is there a way I can reduce these queries that take a while to process? I've cut down the queries by doing eager loading in my DBM to reduce the problems that arise from N+1 issues, but 4s is still a long query for just a testing stage. Is there a structure out there that would help make such nested many-to-many associations much quicker to select? Maybe combining everything past the service level into a single table with a 'TYPE' column? I'm just not knowledgeable enough to know the solution that turns this 4s query into a 300 ms query... Any suggestions would be helpful.
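    Before restructuring, one thing worth checking (a sketch; the link-table and column names here are assumptions, since the schema isn't shown): composite indexes on both link tables covering the foreign keys, so each hop in the service -> variation -> staff chain is an index lookup rather than a scan:

      ALTER TABLE service_variations ADD INDEX idx_service_variation (service_id, variation_id);
      ALTER TABLE variation_staff ADD INDEX idx_variation_staff (variation_id, staff_id);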

    Read the article

  • How to see the keyboard logs of 3 computers from another computer?

    - by NT
    Hello, I have a small office with 3 computers plus my own laptop, without any network between these computers. I would like to see the keyboard log of each worker's computer from my laptop without disturbing my workers. I should be able to see each keyboard log from my laptop (from a GUI or an e-mail message). Also, is it possible to limit the logging? For example, I would not want to see the log of MSN Messenger, but I should see the logs of IE, Outlook, etc.

    Read the article

  • SQL SERVER – How to Roll Back SQL Server Database Changes

    - by Pinal Dave
    In a perfect scenario, no unexpected and unplanned changes occur. There are no unpleasant surprises, no inadvertent changes. However, even with all precautions and testing, there is sometimes a need to revert a structure or data change. One of the methods that can be used in this situation is to use an older database backup that has the records or database object structure you want to revert to. For this method, you have to have an adequate full database backup, and a tool that will help you with comparison and synchronization is preferred. In this article, we will focus on another method: rolling back the changes. This can be done by using an option in SQL Server Management Studio, T-SQL, or ApexSQL Log. The first two solutions have been described in this article. The disadvantages of these methods are that you have to know exactly when the change you want to revert happened, and that all transactions executed on the database in a specific time range are rolled back – the ones you want to undo and the ones you don't.

    How to easily roll back SQL Server database changes using ApexSQL Log? The biggest challenge is to roll back just specific changes, not all changes that happened in a specific time range. While the SQL Server Management Studio option and T-SQL read and roll forward all transactions in the transaction log files, I will show you a solution that finds and scripts only the specific changes that match your criteria. Therefore, you don't need to worry about all the other database changes that you don't want to roll back. ApexSQL Log is a SQL Server disaster recovery tool that reads transaction logs and provides a wide range of filters that enable you to easily roll back only specific data changes. First, connect to the online database where you want to roll back the changes. Once you select the database, ApexSQL Log will show its recovery model. Note that changes can be rolled back even for a database in the Simple recovery model, when no database and transaction log backups are available. However, ApexSQL Log achieves the best results when the database is in the Full recovery model and you have a chain of subsequent transaction log backups, back to the moment when the change occurred. In this example, we will use only the online transaction log. In the next step, use filters to read only the transactions that happened in a specific time range. To remove noise, it's recommended to use as many filters as possible. Besides filtering by the time of the transaction, ApexSQL Log can filter by operation type and table name, as well as transaction state (committed, aborted, running, and unknown), the name of the user who committed the change, specific field values, server process IDs, and transaction description. You can select only the tables affected by the changes you want to roll back. However, if you're not certain which tables were affected, you can leave them all selected and, once the results are shown in the main grid, analyze them to find the ones you want to roll back. When you set the filters, you can select how to present the results. ApexSQL Log can automatically create undo or redo scripts, export the transactions into an XML, HTML, CSV, SQL, or SQL Bulk file, and create a batch file that you can use for unattended transaction log reading. In this example, I will open the results in the grid, as I want to analyze them before rolling back the transactions. The results contain information about the transaction, as well as who made it and when.
    For UPDATEs, ApexSQL Log shows both old and new values, so you can easily see what has happened. To create an UNDO script that rolls back the changes, select the transactions you want to roll back and click Create undo script in the menu. For the DELETE statement selected in the screenshot above, the undo script is:

      INSERT INTO [Sales].[PersonCreditCard] ([BusinessEntityID], [CreditCardID], [ModifiedDate])
      VALUES (297, 8010, '20050901 00:00:00.000')

    When it comes to rolling back database changes, ApexSQL Log has a big advantage, as it rolls back only specific transactions, while leaving all other transactions that occurred in the same time range intact. That makes ApexSQL Log a good solution for rolling back inadvertent data and schema changes on your SQL Server databases. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: ApexSQL

    Read the article

  • SQL SERVER – SQL in Sixty Seconds – 5 Videos from Joes 2 Pros Series – SQL Exam Prep Series 70-433

    - by pinaldave
    The Joes 2 Pros SQL Server Learning series is indeed fun. The Joes 2 Pros series is written for beginners who want to build expertise in SQL Server programming and development from the fundamentals. At the beginning of the series, author Rick Morelan is not shy about explaining a concept as simple as how to open SQL Server Management Studio. Honestly, the book starts that basic, but as it progresses Rick discusses various advanced concepts, from query tuning to core architecture. This five-part series is written with the SQL Server exam 70-433 in mind. Instead of just focusing on what will be in the exam, this series focuses on learning the important concepts thoroughly. The books in no way take shortcuts to explain concepts and at times will go beyond the topic at length. The best part is that all the books have many companion videos explaining the concepts. Every Wednesday I like to post a video which explains something in a quick few seconds. Today we will go over five videos which I posted in my earlier posts related to the Joes 2 Pros series.

    Introduction to XML Data Type Methods – SQL in Sixty Seconds #015
    The XML data type was first introduced with SQL Server 2005. This data type continues with SQL Server 2008, where expanded XML features are available, most notably the power of the XQuery language to analyze and query the values contained in your XML instance. There are five XML data type methods available in SQL Server 2008:
    query() – Used to extract XML fragments from an XML data type.
    value() – Used to extract a single value from an XML document.
    exist() – Used to determine if a specified node exists. Returns 1 if yes and 0 if no.
    modify() – Updates XML data in an XML data type.
    node() – Shreds XML data into multiple rows (not covered in this blog post).
    [Detailed Blog Post] | [Quiz with Answer]

    Introduction to SQL Error Actions – SQL in Sixty Seconds #014
    Most people believe that when SQL Server encounters an error of severity level 11 or higher, the remaining SQL statements will not get executed. In addition, people also believe that if any error of severity level 11 or higher is hit inside an explicit transaction, then the whole statement will fail as a unit. While both of these beliefs are true 99% of the time, they are not true in all cases. It is these outlying cases that frequently cause unexpected results in your SQL code. To understand how to achieve consistent results you need to know the four ways SQL Error Actions can react to error severity levels 11-16:
    Statement Termination – The statement with the procedure fails but the code keeps on running to the next statement. Transactions are not affected.
    Scope Abortion – The current procedure, function or batch is aborted and the next calling scope keeps running. That is, if Stored Procedure A calls B and C, and B fails, then nothing in B runs but A continues to call C. @@Error is set but the procedure does not have a return value.
    Batch Termination – The entire client call is terminated.
    XACT_ABORT – (ON = The entire client call is terminated.) or (OFF = SQL Server will choose how to handle all errors.)
    [Detailed Blog Post] | [Quiz with Answer]

    Introduction to Basics of a Query Hint – SQL in Sixty Seconds #013
    Query hints specify that the indicated hints should be used throughout the query. Query hints affect all operators in the statement and are implemented using the OPTION clause.
    Cautionary Note: Because the SQL Server Query Optimizer typically selects the best execution plan for a query, it is highly recommended that hints be used only as a last resort, and then only by experienced developers and database administrators, to achieve the desired results. [Detailed Blog Post] | [Quiz with Answer]

    Introduction to Hierarchical Query – SQL in Sixty Seconds #012
    A CTE can be thought of as a temporary result set and is similar to a derived table in that it is not stored as an object and lasts only for the duration of the query. A CTE is generally considered to be more readable than a derived table and does not require the extra effort of declaring a Temp Table while providing the same benefits to the user. However, a CTE is more powerful than a derived table as it can also be self-referencing, or even referenced multiple times in the same query. A recursive CTE requires four elements in order to work properly:
    Anchor query (runs once and the results 'seed' the Recursive query)
    Recursive query (runs multiple times and is the criteria for the remaining results)
    UNION ALL statement to bind the Anchor and Recursive queries together.
    INNER JOIN statement to bind the Recursive query to the results of the CTE.
    [Detailed Blog Post] | [Quiz with Answer]

    Introduction to SQL Server Security – SQL in Sixty Seconds #011
    Let's get some basic definitions down first. Take the workplace example where "Tom" needs "Read" access to the "Financial Folder". What are the Securable, Principal, and Permission in that last sentence? A Securable is a resource that someone might want to access (like the Financial Folder). A Principal is anything that might want to gain access to the securable (like Tom). A Permission is the level of access a principal has to a securable (like Read). [Detailed Blog Post] | [Quiz with Answer]

    Please leave a comment explaining which video was your favorite, as that will help me understand what works and what needs improvement. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology, Video

    Read the article

  • Using Query Classes With NHibernate

    - by Liam McLennan
    Even when using an ORM, such as NHibernate, the developer still has to decide how to perform queries. The simplest strategy is to get access to an ISession and directly perform a query whenever you need data. The problem is that doing so spreads query logic throughout the entire application – a clear violation of the Single Responsibility Principle. A more advanced strategy is to use Eric Evans' Repository pattern, thus isolating all query logic within the repository classes. I prefer to use Query Classes. Every query needed by the application is represented by a query class, aka a specification. To perform a query I:

    1. Instantiate a new instance of the required query class, providing any data that it needs
    2. Pass the instantiated query class to an extension method on NHibernate's ISession type.

    To query my database for all people over the age of sixteen looks like this:

      [Test]
      public void QueryBySpecification()
      {
          var canDriveSpecification = new PeopleOverAgeSpecification(16);
          var allPeopleOfDrivingAge = session.QueryBySpecification(canDriveSpecification);
      }

    To be able to query for people over a certain age I had to create a suitable query class:

      public class PeopleOverAgeSpecification : Specification<Person>
      {
          private readonly int age;

          public PeopleOverAgeSpecification(int age)
          {
              this.age = age;
          }

          public override IQueryable<Person> Reduce(IQueryable<Person> collection)
          {
              return collection.Where(person => person.Age > age);
          }

          public override IQueryable<Person> Sort(IQueryable<Person> collection)
          {
              return collection.OrderBy(person => person.Name);
          }
      }

    Finally, the extension method to add QueryBySpecification to ISession:

      public static class SessionExtensions
      {
          public static IEnumerable<T> QueryBySpecification<T>(this ISession session, Specification<T> specification)
          {
              return specification.Fetch(
                  specification.Sort(
                      specification.Reduce(session.Query<T>())
                  )
              );
          }
      }

    The inspiration for this style of data access came from Ayende's post Do You Need a Framework?. I am sick of working through multiple layers of abstraction that don't do anything. Have you ever seen code that required a service layer to call a method on a repository, that delegated to a common repository base class that wrapped an ORM's unit of work? I can achieve the same thing with NHibernate's ISession and a single extension method. If you're interested you can get the full Query Classes example source from GitHub.

    Read the article

  • Benchmarking MySQL on win7

    - by Patrick
    I've set up an nginx server running PHP 5.3.6 and MySQL 5.5.1.3. My computer is an AMD quad-core 9650, 4 GB RAM, 500 GB 7200 rpm HD. I ran the PHP MySQL Benchmark Tool v0.1 and got the following results:

      Testing a(n) MYISAM table using 100000 rows.
      Successfully created database speedtestdb
      Sucessfully created table speedtesttable
      Table Type Verified: MYISAM .. Done.
      100000 inserts in 19.73628 seconds or 5067 inserts per second. Done.
      100000 row reads in 0.2801 seconds or 357015 row reads per second. Done.
      100000 updates in 4.03876 seconds or 24760 updates per second.

    I'm wondering where this stands as far as performance goes, and what steps I can take, if any, to improve on it. I'm not trying to make anything fantastic, just getting a feel for how best to optimize a web server in this configuration.

    Read the article
