Search Results

Search found 36756 results on 1471 pages for 'mysql real query'.


  • Reduce durability in MySQL for performance

    - by Paul Prescod
    My site occasionally has fairly predictable bursts of traffic that increase the throughput by 100 times more than normal. For example, we are going to be featured on a television show, and I expect in the hour after the show, I'll get more than 100 times more traffic than normal. My understanding is that MySQL (InnoDB) generally keeps my data in a bunch of different places: RAM Buffers commitlog binary log actual tables All of the above places on my DB slave This is too much "durability" given that I'm on an EC2 node and most of the stuff goes across the same network pipe (file systems are network attached). Plus the drives are just slow. The data is not high value and I'd rather take a small chance of a few minutes of data loss rather than have a high probability of an outage when the crowd arrives. During these traffic bursts I would like to do all of that I/O only if I can afford it. I'd like to just keep as much in RAM as possible (I have a fair chunk of RAM compared to the data size that would be touched over an hour). If buffers get scarce, or the I/O channel is not too overloaded, then sure, I'd like things to go to the commitlog or binary log to be sent to the slave. If, and only if, the I/O channel is not overloaded, I'd like to write back to the actual tables. In other words, I'd like MySQL/InnoDB to use a "write back" cache algorithm rather than a "write through" cache algorithm. Can I convince it to do that? If this is not possible, I am interested in general MySQL write-performance optimization tips. Most of the docs are about optimizing read performance, but when I get a crowd of users, I am creating accounts for all of them, so that's a write-heavy workload.
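
    For reference, the two settings that control most of this durability are innodb_flush_log_at_trx_commit and sync_binlog. A minimal sketch, assuming SUPER privileges and placeholder connection details (mysqli used for illustration); this is not a true write-back cache, but it trades exactly the durability described above for write throughput:

        <?php
        // Sketch: relax durability before an expected traffic burst.
        // Changing these globals requires the SUPER privilege.
        $db = new mysqli('localhost', 'admin', 'secret', 'mydb');

        // Write the InnoDB log at commit but fsync it only about once per second
        // (risk: up to roughly one second of transactions lost in a crash).
        $db->query('SET GLOBAL innodb_flush_log_at_trx_commit = 2');

        // Never fsync the binary log explicitly; let the OS flush it
        // (risk: the last events may be missing from the binlog after a crash).
        $db->query('SET GLOBAL sync_binlog = 0');

    Setting both back to 1 after the burst restores full durability.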


  • XAMPP, MAMP, MySQL, PDO - A deadly combination?

    - by Rich
    Hey folks, Previously I've worked on a Symfony project (MySQL PDO based) with XAMPP, with no problems. Since then, I've moved to MAMP - I prefer this - but have hit a snag with my database connection. I've created a test.php like this:

        <?php
        try {
            $dbh = new PDO('mysql:host=localhost;dbname=xxx;port=8889', 'xxx', 'xxx');
            foreach($dbh->query('SELECT * from FOO') as $row) {
                print_r($row);
            }
            $dbh = null;
        } catch (PDOException $e) {
            print "Error!: " . $e->getMessage() . "<br/>";
            die();
        }
        ?>

    Obviously the *xxx*s are real db connection details. When served by MAMP, this seems to work fine. From the terminal, however, I keep getting the following error when running the file:

        Error!: SQLSTATE[28000] [1045] Access denied for user 'xxx'@'localhost' (using password: YES)

    Not sure if the terminal is aiming at a different MySQL socket or something along those lines; I've tried pointing it to the MAMP socket with a local php.ini file. Any help would be greatly appreciated.
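
    In case it helps: command-line PHP typically reads a different php.ini than MAMP's Apache module, and with host=localhost the MySQL driver connects through the default Unix socket instead of TCP, so the port in the DSN is ignored and it never reaches MAMP's server. Two possible workarounds (the socket path is MAMP's usual default and may differ on your install):

        <?php
        // Option 1: 127.0.0.1 instead of localhost forces a TCP connection,
        // so the port in the DSN is actually honoured.
        $dbh = new PDO('mysql:host=127.0.0.1;port=8889;dbname=xxx', 'xxx', 'xxx');

        // Option 2: point PDO at MAMP's socket explicitly.
        $dbh = new PDO(
            'mysql:unix_socket=/Applications/MAMP/tmp/mysql/mysql.sock;dbname=xxx',
            'xxx', 'xxx'
        );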


  • Indexing table with duplicates MySQL/MSSQL with millions of records

    - by Tesnep
    I need help with indexing in MySQL. I have a table in MySQL with the following columns: ID, Store_ID, Feature_ID, Order_ID, Viewed_Date, Deal_ID, IsTrial. The ID is auto generated. Store_ID goes from 1 - 8. Feature_ID from 1 - let's say 100. Viewed_Date is the date and time at which the data is inserted. IsTrial is either 0 or 1. You can ignore Order_ID and Deal_ID for this discussion. There are millions of rows in the table, and we have a reporting backend that needs the number of views in a certain period (or overall) where trial is 0, for a particular store id and a particular feature. The query takes the form of:

        select count(viewed_date)
        from theTable
        where viewed_date between '2009-12-01' and '2010-12-31'
          and store_id = '2'
          and feature_id = '12'
          and Istrial = 0

    In MSSQL you can have a filtered index to use for Istrial. Is there anything similar to this in MySQL? Also, Store_ID and Feature_ID have a lot of duplicate data. I created an index using Store_ID and Feature_ID. Although this seems to have decreased the search time, I need a bigger improvement than this. Right now I have more than 4 million rows. To answer a query like the one above, it looks at 3.5 million rows in order to give me the count of 500k rows. PS. I forgot to add the viewed_date filter in the query. Now I have done this.
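
    MySQL has no filtered indexes, but a composite index that covers the whole query usually comes close: with all four referenced columns in the index, the count can be answered from the index alone, without touching table rows. A sketch using the names from the question (the index name is made up):

        <?php
        $db = new mysqli('localhost', 'user', 'pass', 'reports');

        // Hypothetical covering index: equality columns first, the range
        // column (viewed_date) last, so the BETWEEN can still use the index.
        $db->query('ALTER TABLE theTable
                    ADD INDEX ix_report (store_id, feature_id, istrial, viewed_date)');

        // The original reporting query, unchanged:
        $result = $db->query("SELECT COUNT(viewed_date) FROM theTable
                              WHERE viewed_date BETWEEN '2009-12-01' AND '2010-12-31'
                                AND store_id = 2 AND feature_id = 12 AND istrial = 0");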


  • trying to backup mysql database using php

    - by user225269
    I got this code from this site: http://www.php-mysql-tutorial.com/wikis/mysql-tutorials/using-php-to-backup-mysql-databases.aspx But I'm just a beginner, so I don't know what config.php and opendb.php are supposed to contain. Do I have to create those two files in order for this code to work? If so, how do I create them? The site doesn't explain that.

        <?php
        include 'config.php';
        include 'opendb.php';

        $tableName  = 'mypet';
        $backupFile = 'backup/mypet.sql';
        $query      = "SELECT * INTO OUTFILE '$backupFile' FROM $tableName";
        $result     = mysql_query($query);

        include 'closedb.php';
        ?>

    Can I just include these lines at the top of the code, so that I won't need the include 'opendb.php' anymore?

        $con = mysql_connect("localhost","root","");
        if (!$con) {
            die('Could not connect: ' . mysql_error());
        }
        mysql_select_db("Hospital", $con);
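
    For what it's worth, config.php and opendb.php in that tutorial are just the connection boilerplate split into separate includes, so your inlined mysql_connect() block does the same job. A minimal sketch of those files, using the same (long-deprecated) mysql_* API as the tutorial and placeholder credentials:

        <?php
        // config.php -- placeholder credentials
        $dbhost = 'localhost';
        $dbuser = 'root';
        $dbpass = '';
        $dbname = 'Hospital';

        // opendb.php
        $conn = mysql_connect($dbhost, $dbuser, $dbpass)
            or die('Could not connect: ' . mysql_error());
        mysql_select_db($dbname, $conn);

        // closedb.php
        mysql_close($conn);

    One caveat on the backup itself: SELECT ... INTO OUTFILE is executed by the MySQL server, so the file lands on the database server's filesystem and the connecting MySQL account needs the FILE privilege.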


  • How to delete in MySQL

    - by Ian Moss
    I want to delete a row in MySQL, but the connection for that one function never opens successfully: I get an "unable to connect" error even though the same connection string works elsewhere in the current project. When my other code opens the connection, it works fine; only this one small function, which tries to delete a row in MySQL, fails. To summarize what confuses me: the same connection string works elsewhere in the project; only this one function gets "unable to connect"; and when I open the connection in SQLyog it works fine as well, and the same command runs there without any problem. Conclusion: why would the connection fail to open here when it works elsewhere in the project and also in SQLyog? What could cause "unable to connect" in just this one place? Since the connection never opens, the DELETE command of course never runs. Any suggestions, thoughts, or tricks you have to solve this issue would be appreciated. Thanks
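
    Since the failing function isn't shown, here is a minimal sketch (placeholder table and column names) of a mysqli DELETE that prints the real connection error instead of a generic "unable to connect"; comparing that message with the working connections usually exposes a host, port, or credentials mismatch in the one place that fails:

        <?php
        // Use exactly the connection details that work elsewhere in the project.
        $db = new mysqli('localhost', 'user', 'pass', 'mydb');
        if ($db->connect_error) {
            // The real reason: wrong host, bad password, too many connections, ...
            die('Connect failed (' . $db->connect_errno . '): ' . $db->connect_error);
        }

        $id = 42; // hypothetical row id
        $stmt = $db->prepare('DELETE FROM mytable WHERE id = ?');
        $stmt->bind_param('i', $id);
        $stmt->execute() or die('Delete failed: ' . $db->error);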


  • Error when feeding a mysql db with a python-parsed data

    - by Barnabe
    I use this bit of code to feed some data I have parsed from a web page to a mysql database:

        c=db.cursor()
        c.executemany(
            """INSERT INTO data (SID, Time, Value1, Level1, Value2, Level2, Value3, Level3, Value4, Level4, Value5, Level5, ObsDate)
            VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)""",
            clean_data )

    The parsed data looks like this (there are several hundred such lines):

        clean_data = [(161,00:00:00,8.19,1,4.46,4,7.87,4,6.54,null,4.45,6,2010-04-12),(162,00:00:00,7.55,1,9.52,1,1.90,1,4.76,null,0.14,1,2010-04-12),(164,00:00:00,8.01,1,8.09,1,0,null,8.49,null,0.20,2,2010-04-12),(166,00:00:00,8.30,1,4.77,4,10.99,5,9.11,null,0.36,2,2010-04-12)]

    If I hard-code the data as above, MySQL accepts my request (except for some quibbles about formatting), but if the variable clean_data is instead defined as the result of the parsing code, like this:

        cleaner = [(""" $!!'""", ')]'),(' $!!', ') etc etc]

        def processThis(str,lst):
            for find, replace in lst:
                str = str.replace(find, replace)
            return str

        clean_data = processThis(data,cleaner)

    then I get the dreaded "TypeError: not enough arguments for format string". After playing with formatting options for a few hours (I am very new to this) I am confused... what is the difference between the hard-coded data and the result of the processThis function as far as MySQL is concerned? Any ideas greatly appreciated...


  • Indexing table with duplicates MySQL/SQL Server with millions of records

    - by Tesnep
    I need help with indexing in MySQL. I have a table in MySQL with the following columns: ID, Store_ID, Feature_ID, Order_ID, Viewed_Date, Deal_ID, IsTrial. The ID is auto generated. Store_ID goes from 1 - 8. Feature_ID from 1 - let's say 100. Viewed_Date is the date and time at which the data is inserted. IsTrial is either 0 or 1. You can ignore Order_ID and Deal_ID for this discussion. There are millions of rows in the table, and we have a reporting backend that needs the number of views in a certain period (or overall) where trial is 0, for a particular store id and a particular feature. The query takes the form of:

        select count(viewed_date)
        from theTable
        where viewed_date between '2009-12-01' and '2010-12-31'
          and store_id = '2'
          and feature_id = '12'
          and Istrial = 0

    In SQL Server you can have a filtered index to use for Istrial. Is there anything similar to this in MySQL? Also, Store_ID and Feature_ID have a lot of duplicate data. I created an index using Store_ID and Feature_ID. Although this seems to have decreased the search time, I need a bigger improvement than this. Right now I have more than 4 million rows. To answer a query like the one above, it looks at 3.5 million rows in order to give me the count of 500k rows. PS. I forgot to add the viewed_date filter in the query. Now I have done this.


  • Stable/repeatable random sort (MySQL, Rails)

    - by Matt Rogish
    I'd like to paginate through a randomly sorted list of ActiveRecord models (rows from MySQL database). However, this randomization needs to persist on a per-session basis, so that other people that visit the website also receive a random, paginate-able list of records. Let's say there are enough entities (tens of thousands) that storing the randomly sorted ID values in either the session or a cookie is too large, so I must temporarily persist it in some other way (MySQL, file, etc.). Initially I thought I could create a function based on the session ID and the page ID (returning the object IDs for that page) however since the object ID values in MySQL are not sequential (there are gaps), that seemed to fall apart as I was poking at it. The nice thing is that it would require no/minimal storage but the downsides are that it is likely pretty complex to implement and probably CPU intensive. My feeling is I should create an intersection table, something like: random_sorts( sort_id, created_at, user_id NULL if guest) random_sort_items( sort_id, item_id, position ) And then simply store the 'sort_id' in the session. Then, I can paginate the random_sorts WHERE sort_id = n ORDER BY position LIMIT... as usual. Of course, I'd have to put some sort of a reaper in there to remove them after some period of inactivity (based on random_sorts.created_at). Unfortunately, I'd have to invalidate the sort as new objects were created (and/or old objects being removed, although deletion is very rare). And, as load increases the size/performance of this table (even properly indexed) drops. It seems like this ought to be a solved problem but I can't find any rails plugins that do this... Any ideas? Thanks!!
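
    One storage-free alternative that may be worth testing: MySQL's RAND() accepts a seed, and with a constant seed the ordering appears repeatable over the same set of rows. Keeping one integer per session then gives every visitor a stable shuffle. Sketched in PHP/mysqli with placeholder names; the ORDER BY RAND(seed) part ports directly to an ActiveRecord order() string:

        <?php
        session_start();
        if (!isset($_SESSION['sort_seed'])) {
            $_SESSION['sort_seed'] = mt_rand(1, 2147483647); // one seed per session
        }
        $seed   = (int) $_SESSION['sort_seed'];
        $page   = isset($_GET['page']) ? max(1, (int) $_GET['page']) : 1;
        $limit  = 25;
        $offset = ($page - 1) * $limit;

        $db = new mysqli('localhost', 'user', 'pass', 'mydb');
        // Same seed => same "random" order on every page of the session.
        $result = $db->query(sprintf(
            'SELECT id FROM items ORDER BY RAND(%d) LIMIT %d OFFSET %d',
            $seed, $limit, $offset
        ));

    The trade-offs from the question remain: the whole table is sorted on every request, and rows inserted or removed mid-session can shift items across page boundaries, which the intersection-table design avoids.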


  • Print table data mysql php

    - by Marcelo
    Hi people, I'm having a problem trying to print some data from a table. I'm new at this PHP/MySQL stuff but I think my code is right. Here it is:

        <html> <body> <h1>Lista de usuários</h1>
        <?php
        $host="localhost";   // Host name
        $username="root";    // Mysql username
        $password="";        // Mysql password
        $db_name="sabs";     // Database name
        $tbl_name="doador";  // Table name

        // Connect to server and select database.
        mysql_connect("$host", "$username", "$password")or die("cannot connect");
        mysql_select_db("$db_name")or die("cannot select DB");

        $sql="SELECT * FROM $tbl_name";
        $result=mysql_query($sql);

        while($rows = mysql_fetch_array($result)){
            echo $row['id'] . " " .$row['nome'] . " " . $row['sobrenome'] . " " . $row['email'] . " " . $row['login'] . " " . $row['senha'] . " " . $row['idade'] . " ". $row['peso'] . " " . $row['fuma'] . " " . $row['sexo'] . " " . $row['doencas'];
            echo "<BR/>";
        }

        mysql_close();
        ?>
        </body> </html>

    All columns in the echo command exist in my table in the database. Don't get why it's not printing those values. Thanks for the attention.
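
    The columns are fine; the loop variable is the problem. Each row is fetched into $rows, but the echo reads $row, which is never set. The corrected loop, everything else unchanged (an error_reporting level that includes E_NOTICE would have flagged the undefined $row):

        <?php
        while ($row = mysql_fetch_array($result)) {   // was: $rows = ...
            echo $row['id'] . " " . $row['nome'] . " " . $row['sobrenome'] . " "
               . $row['email'] . " " . $row['login'] . " " . $row['senha'] . " "
               . $row['idade'] . " " . $row['peso'] . " " . $row['fuma'] . " "
               . $row['sexo'] . " " . $row['doencas'];
            echo "<BR/>";
        }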


  • Require reasonably random results from an SQL SELECT query within a Joomla article (Cache enabled)

    - by Shrinivas
    Setup: Joomla website on LAMP stack I have a MySQL table containing some records, these are queried by a simple SELECT on the Joomla article, as pasted below. This specific Joomla website has Caching turned on in Joomla's Global Configuration. I need to randomize the order in which I display the resultset, each time the page is loaded. Regular php/mysql would offer me two approaches for this: 1. use 'order by RAND()' or any of a number of methods to allow a SELECT query to return reasonably random results. 2. once php gets the result from the SELECT into an array, shuffle the array to get a reasonably random order of array items. However, as this Joomla instance has Caching turned ON in its Global Configuration, either of the above approaches fails. The first time I load the page the order is randomized, however any further reloads do not cause the order to change, as the page is delivered from cache. The instant the Cache is disabled, both approaches (shuffle/order by rand) work perfectly. What am I missing? How do I override the Global Cache for this specific article? A very simple requirement, that is met by both php and mysql reasonably well, is blocked by the Joomla Cache that I cannot turn off. The php that returns results from the database:

        $db = JFactory::getDBO();
        $select = "SELECT id FROM jos_mytable;"; //order by RAND()
        $db->setQuery($select);
        echo $db->getQuery(); //Show me the Query!
        $rows = $db->loadObjectList();
        //shuffle($rows);
        foreach($rows as $row) {
            echo "$row->id";
        }


  • How to output multiple rows from an SQL query using the mysqli object

    - by Jonathan
    Assuming that the mysqli object is already instantiated (and connected) with the global variable $mysql, here is the code I am trying to work with:

        class Listing {
            private $mysql;
            function getListingInfo($l_id = "", $category = "", $subcategory = "", $username = "", $status = "active") {
                $condition = "`status` = '$status'";
                if (!empty($l_id)) $condition .= "AND `L_ID` = '$l_id'";
                if (!empty($category)) $condition .= "AND `category` = '$category'";
                if (!empty($subcategory)) $condition .= "AND `subcategory` = '$subcategory'";
                if (!empty($username)) $condition .= "AND `username` = '$username'";
                $result = $this->mysql->query("SELECT * FROM listing WHERE $condition") or die('Error fetching values');
                $this->listing = $result->fetch_array() or die('could not create object');
                foreach ($this->listing as $key => $value) :
                    $info[$key] = stripslashes(html_entity_decode($value));
                endforeach;
                return $info;
            }
        }

    There are several hundred listings in the db, and when I call $result->fetch_array() it places the first row of the db in an array. However, when I try to call the object, I can't seem to access more than the first row. For instance:

        $listing_row = new Listing;
        while ($listing = $listing_row->getListingInfo()) {
            echo $listing[0];
        }

    This outputs an infinite loop of the same row in the db. Why does it not advance to the next row? If I move this code:

        $this->listing = $result->fetch_array() or die('could not create object');
        foreach ($this->listing as $key => $value) :
            $info[$key] = stripslashes(html_entity_decode($value));
        endforeach;

    outside the class, it works exactly as expected, outputting a row at a time while looping through the while statement. Is there a way to write this so that I can keep the fetch_array() call in the class and still loop through the records?
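
    The method runs a fresh query and fetches only the first row on every call, so the outer while loop can never advance; fetch_array() moves forward only when called repeatedly on the same result object. One possible fix, sketched below: fetch all rows inside the method and return them as an array ($condition is built exactly as before):

        <?php
        class Listing {
            private $mysql;
            function getListingInfo($l_id = "", $category = "", $subcategory = "",
                                    $username = "", $status = "active") {
                $condition = "`status` = '$status'";
                // ... append the other filters exactly as in the original ...
                $result = $this->mysql->query("SELECT * FROM listing WHERE $condition")
                    or die('Error fetching values');

                $listings = array();
                while ($row = $result->fetch_array(MYSQLI_ASSOC)) { // advances the cursor
                    $info = array();
                    foreach ($row as $key => $value) {
                        $info[$key] = stripslashes(html_entity_decode($value));
                    }
                    $listings[] = $info;
                }
                return $listings; // all rows, one query
            }
        }

        // Caller iterates the returned array instead of re-calling the method:
        $listing_row = new Listing;
        foreach ($listing_row->getListingInfo() as $listing) {
            echo $listing['id']; // column name rather than [0]: rows are associative
        }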


  • MYSQL variables - SET @var

    - by Lizard
    I am attempting to create a mysql snippet that will analyse a table and remove duplicate entries (duplicates are based on two fields, not the entire record). I have the following code that works when I hard-code the variables into the queries, but when I take them out and use them as variables I get mysql errors. Below is the script:

        SET @tblname = 'mytable';
        SET @fieldname = 'myfield';
        SET @concat1 = 'checkfield1';
        SET @concat2 = 'checkfield2';

        ALTER TABLE @tblname ADD `tmpcheck` VARCHAR( 255 ) NOT NULL;

        UPDATE @tblname SET `tmpcheck` = CONCAT(@concat1,'-',@concat2);

        CREATE TEMPORARY TABLE `tmp_table` (
            `tmpfield` VARCHAR( 100 ) NOT NULL
        ) ENGINE = MYISAM ;

        INSERT INTO `tmp_table` (`tmpfield`)
        SELECT @fieldname FROM @tblname
        GROUP BY `tmpcheck` HAVING ( COUNT(`tmpcheck`) > 1 );

        DELETE FROM @tblname WHERE @fieldname IN (SELECT `tmpfield` FROM `tmp_table`);

        ALTER TABLE @tblname DROP `tmpcheck`;

    I am getting the following error:

        #1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '@tblname ADD `tmpcheck` VARCHAR( 255 ) NOT NULL' at line 1

    Is this because I can't use a variable for a table name? What else could be wrong, or how would I get around this issue? Thanks in advance
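
    That is indeed the cause: MySQL user variables can hold values but never identifiers, so @tblname cannot stand where a table name is expected. Server-side, the workaround is dynamic SQL (PREPARE/EXECUTE on a CONCAT'd string); from PHP it is usually simpler to keep the names in ordinary PHP variables and interpolate them before sending. A sketch of both, assuming hard-coded names rather than user input:

        <?php
        $db  = new mysqli('localhost', 'user', 'pass', 'mydb');
        $tbl = 'mytable'; // identifier lives in PHP, not in @tblname

        // PHP-side interpolation:
        $db->query("ALTER TABLE `$tbl` ADD `tmpcheck` VARCHAR(255) NOT NULL");

        // Server-side dynamic SQL (the same connection keeps @sql alive):
        $db->query("SET @sql = CONCAT('ALTER TABLE `', 'mytable', '` DROP `tmpcheck`')");
        $db->query('PREPARE stmt FROM @sql');
        $db->query('EXECUTE stmt');
        $db->query('DEALLOCATE PREPARE stmt');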


  • UIDs for data objects in MySQL

    - by Callash
    Hi there, I am using C++ and MySQL. I have data objects I want to persist to the database. They need to have a unique ID for identification purposes. The question is, how to get this unique ID? Here is what I came up with: 1) Use the auto_increment feature of MySQL. But how to get the ID then? I am aware that MySQL offers this "SELECT LAST_INSERT_ID()" feature, but that would be a race condition, as two objects could be inserted quite fast after each other. Also, there is nothing else that makes the objects discernable. Two objects could be created pretty much at the same time with exactly the same data. 2) Generate the UID on the C++ side. No dice, either. There are multiple programs that will write to and read from the database, who do not know of each other. 3) Insert with MAX(uid)+1 as the uid value. But then, I basically have the same problem as in 1), because we still have the race condition. Now I am stumped. I am assuming that this problem must be something other people ran into as well, but so far, I did not find any answers. Any ideas?
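
    One correction on option 1 that may settle the question: LAST_INSERT_ID() is maintained per connection, so a concurrent insert on another connection cannot overwrite your value; there is no race as long as each client reads it on the connection that performed the insert. The C API exposes the same per-connection value as mysql_insert_id(). A sketch in PHP/mysqli for brevity, with placeholder names:

        <?php
        $db = new mysqli('localhost', 'user', 'pass', 'mydb');

        // objects.id is AUTO_INCREMENT: identical payloads still get distinct ids.
        $stmt = $db->prepare('INSERT INTO objects (payload) VALUES (?)');
        $payload = 'possibly identical data';
        $stmt->bind_param('s', $payload);
        $stmt->execute();

        // Per-connection value, immune to concurrent inserts elsewhere.
        $uid = $db->insert_id;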


  • A way of doing real-world test-driven development (and some thoughts about it)

    - by Thomas Weller
    Lately, I exchanged some arguments with Derick Bailey about some details of the red-green-refactor cycle of the Test-driven development process. In short, the issue revolved around the fact that it’s not enough to have a test red or green, but it’s also important to have it red or green for the right reasons. While for me, it’s sufficient to initially have a NotImplementedException in place, Derick argues that this is not totally correct (see these two posts: Red/Green/Refactor, For The Right Reasons and Red For The Right Reason: Fail By Assertion, Not By Anything Else). And he’s right. But on the other hand, I had no idea how his insights could have any practical consequence for my own individual interpretation of the red-green-refactor cycle (which is not really red-green-refactor, at least not in its pure sense, see the rest of this article). This made me think deeply for some days now. In the end I found out that the ‘right reason’ changes in my understanding depending on what development phase I’m in. To make this clear (at least I hope it becomes clear…) I started to describe my way of working in some detail, and then something strange happened: The scope of the article slightly shifted from focusing ‘only’ on the ‘right reason’ issue to something more general, which you might describe as something like  'Doing real-world TDD in .NET , with massive use of third-party add-ins’. This is because I feel that there is a more general statement about Test-driven development to make:  It’s high time to speak about the ‘How’ of TDD, not always only the ‘Why’. Much has been said about this, and me myself also contributed to that (see here: TDD is not about testing, it's about how we develop software). But always justifying what you do is very unsatisfying in the long run, it is inherently defensive, and it costs time and effort that could be used for better and more important things. And frankly: I’m somewhat sick and tired of repeating time and again that the test-driven way of software development is highly preferable for many reasons - I don’t want to spent my time exclusively on stating the obvious… So, again, let’s say it clearly: TDD is programming, and programming is TDD. Other ways of programming (code-first, sometimes called cowboy-coding) are exceptional and need justification. – I know that there are many people out there who will disagree with this radical statement, and I also know that it’s not a description of the real world but more of a mission statement or something. But nevertheless I’m absolutely sure that in some years this statement will be nothing but a platitude. Side note: Some parts of this post read as if I were paid by Jetbrains (the manufacturer of the ReSharper add-in – R#), but I swear I’m not. Rather I think that Visual Studio is just not production-complete without it, and I wouldn’t even consider to do professional work without having this add-in installed... The three parts of a software component Before I go into some details, I first should describe my understanding of what belongs to a software component (assembly, type, or method) during the production process (i.e. the coding phase). Roughly, I come up with the three parts shown below:   First, we need to have some initial sort of requirement. This can be a multi-page formal document, a vague idea in some programmer’s brain of what might be needed, or anything in between. In either way, there has to be some sort of requirement, be it explicit or not. 
– At the C# micro-level, the best way that I found to formulate that is to define interfaces for just about everything, even for internal classes, and to provide them with exhaustive xml comments. The next step then is to re-formulate these requirements in an executable form. This is specific to the respective programming language. - For C#/.NET, the Gallio framework (which includes MbUnit) in conjunction with the ReSharper add-in for Visual Studio is my toolset of choice. The third part then finally is the production code itself. It’s development is entirely driven by the requirements and their executable formulation. This is the delivery, the two other parts are ‘only’ there to make its production possible, to give it a decent quality and reliability, and to significantly reduce related costs down the maintenance timeline. So while the first two parts are not really relevant for the customer, they are very important for the developer. The customer (or in Scrum terms: the Product Owner) is not interested at all in how  the product is developed, he is only interested in the fact that it is developed as cost-effective as possible, and that it meets his functional and non-functional requirements. The rest is solely a matter of the developer’s craftsmanship, and this is what I want to talk about during the remainder of this article… An example To demonstrate my way of doing real-world TDD, I decided to show the development of a (very) simple Calculator component. The example is deliberately trivial and silly, as examples always are. I am totally aware of the fact that real life is never that simple, but I only want to show some development principles here… The requirement As already said above, I start with writing down some words on the initial requirement, and I normally use interfaces for that, even for internal classes - the typical question “intf or not” doesn’t even come to mind. I need them for my usual workflow and using them automatically produces high componentized and testable code anyway. To think about their usage in every single situation would slow down the production process unnecessarily. So this is what I begin with: namespace Calculator {     /// <summary>     /// Defines a very simple calculator component for demo purposes.     /// </summary>     public interface ICalculator     {         /// <summary>         /// Gets the result of the last successful operation.         /// </summary>         /// <value>The last result.</value>         /// <remarks>         /// Will be <see langword="null" /> before the first successful operation.         /// </remarks>         double? LastResult { get; }       } // interface ICalculator   } // namespace Calculator So, I’m not beginning with a test, but with a sort of code declaration - and still I insist on being 100% test-driven. There are three important things here: Starting this way gives me a method signature, which allows to use IntelliSense and AutoCompletion and thus eliminates the danger of typos - one of the most regular, annoying, time-consuming, and therefore expensive sources of error in the development process. In my understanding, the interface definition as a whole is more of a readable requirement document and technical documentation than anything else. So this is at least as much about documentation than about coding. The documentation must completely describe the behavior of the documented element. I normally use an IoC container or some sort of self-written provider-like model in my architecture. 
In either case, I need my components defined via service interfaces anyway. - I will use the LinFu IoC framework here, for no other reason as that is is very simple to use. The ‘Red’ (pt. 1)   First I create a folder for the project’s third-party libraries and put the LinFu.Core dll there. Then I set up a test project (via a Gallio project template), and add references to the Calculator project and the LinFu dll. Finally I’m ready to write the first test, which will look like the following: namespace Calculator.Test {     [TestFixture]     public class CalculatorTest     {         private readonly ServiceContainer container = new ServiceContainer();           [Test]         public void CalculatorLastResultIsInitiallyNull()         {             ICalculator calculator = container.GetService<ICalculator>();               Assert.IsNull(calculator.LastResult);         }       } // class CalculatorTest   } // namespace Calculator.Test       This is basically the executable formulation of what the interface definition states (part of). Side note: There’s one principle of TDD that is just plain wrong in my eyes: I’m talking about the Red is 'does not compile' thing. How could a compiler error ever be interpreted as a valid test outcome? I never understood that, it just makes no sense to me. (Or, in Derick’s terms: this reason is as wrong as a reason ever could be…) A compiler error tells me: Your code is incorrect, but nothing more.  Instead, the ‘Red’ part of the red-green-refactor cycle has a clearly defined meaning to me: It means that the test works as intended and fails only if its assumptions are not met for some reason. Back to our Calculator. When I execute the above test with R#, the Gallio plugin will give me this output: So this tells me that the test is red for the wrong reason: There’s no implementation that the IoC-container could load, of course. So let’s fix that. With R#, this is very easy: First, create an ICalculator - derived type:        Next, implement the interface members: And finally, move the new class to its own file: So far my ‘work’ was six mouse clicks long, the only thing that’s left to do manually here, is to add the Ioc-specific wiring-declaration and also to make the respective class non-public, which I regularly do to force my components to communicate exclusively via interfaces: This is what my Calculator class looks like as of now: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult         {             get             {                 throw new NotImplementedException();             }         }     } } Back to the test fixture, we have to put our IoC container to work: [TestFixture] public class CalculatorTest {     #region Fields       private readonly ServiceContainer container = new ServiceContainer();       #endregion // Fields       #region Setup/TearDown       [FixtureSetUp]     public void FixtureSetUp()     {        container.LoadFrom(AppDomain.CurrentDomain.BaseDirectory, "Calculator.dll");     }       ... Because I have a R# live template defined for the setup/teardown method skeleton as well, the only manual coding here again is the IoC-specific stuff: two lines, not more… The ‘Red’ (pt. 2) Now, the execution of the above test gives the following result: This time, the test outcome tells me that the method under test is called. 
And this is the point, where Derick and I seem to have somewhat different views on the subject: Of course, the test still is worthless regarding the red/green outcome (or: it’s still red for the wrong reasons, in that it gives a false negative). But as far as I am concerned, I’m not really interested in the test outcome at this point of the red-green-refactor cycle. Rather, I only want to assert that my test actually calls the right method. If that’s the case, I will happily go on to the ‘Green’ part… The ‘Green’ Making the test green is quite trivial. Just make LastResult an automatic property:     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult { get; private set; }     }         One more round… Now on to something slightly more demanding (cough…). Let’s state that our Calculator exposes an Add() method:         ...   /// <summary>         /// Adds the specified operands.         /// </summary>         /// <param name="operand1">The operand1.</param>         /// <param name="operand2">The operand2.</param>         /// <returns>The result of the additon.</returns>         /// <exception cref="ArgumentException">         /// Argument <paramref name="operand1"/> is &lt; 0.<br/>         /// -- or --<br/>         /// Argument <paramref name="operand2"/> is &lt; 0.         /// </exception>         double Add(double operand1, double operand2);       } // interface ICalculator A remark: I sometimes hear the complaint that xml comment stuff like the above is hard to read. That’s certainly true, but irrelevant to me, because I read xml code comments with the CR_Documentor tool window. And using that, it looks like this:   Apart from that, I’m heavily using xml code comments (see e.g. here for a detailed guide) because there is the possibility of automating help generation with nightly CI builds (using MS Sandcastle and the Sandcastle Help File Builder), and then publishing the results to some intranet location.  This way, a team always has first class, up-to-date technical documentation at hand about the current codebase. (And, also very important for speeding up things and avoiding typos: You have IntelliSense/AutoCompletion and R# support, and the comments are subject to compiler checking…).     Back to our Calculator again: Two more R# – clicks implement the Add() skeleton:         ...           public double Add(double operand1, double operand2)         {             throw new NotImplementedException();         }       } // class Calculator As we have stated in the interface definition (which actually serves as our requirement document!), the operands are not allowed to be negative. So let’s start implementing that. Here’s the test: [Test] [Row(-0.5, 2)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); } As you can see, I’m using a data-driven unit test method here, mainly for these two reasons: Because I know that I will have to do the same test for the second operand in a few seconds, I save myself from implementing another test method for this purpose. Rather, I only will have to add another Row attribute to the existing one. From the test report below, you can see that the argument values are explicitly printed out. 
This can be a valuable documentation feature even when everything is green: One can quickly review what values were tested exactly - the complete Gallio HTML-report (as it will be produced by the Continuous Integration runs) shows these values in a quite clear format (see below for an example). Back to our Calculator development again, this is what the test result tells us at the moment: So we’re red again, because there is not yet an implementation… Next we go on and implement the necessary parameter verification to become green again, and then we do the same thing for the second operand. To make a long story short, here’s the test and the method implementation at the end of the second cycle: // in CalculatorTest:   [Test] [Row(-0.5, 2)] [Row(295, -123)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); }   // in Calculator: public double Add(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }     if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }     throw new NotImplementedException(); } So far, we have sheltered our method from unwanted input, and now we can safely operate on the parameters without further caring about their validity (this is my interpretation of the Fail Fast principle, which is regarded here in more detail). Now we can think about the method’s successful outcomes. First let’s write another test for that: [Test] [Row(1, 1, 2)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } Again, I’m regularly using row based test methods for these kinds of unit tests. The above shown pattern proved to be extremely helpful for my development work, I call it the Defined-Input/Expected-Output test idiom: You define your input arguments together with the expected method result. There are two major benefits from that way of testing: In the course of refining a method, it’s very likely to come up with additional test cases. In our case, we might add tests for some edge cases like ‘one of the operands is zero’ or ‘the sum of the two operands causes an overflow’, or maybe there’s an external test protocol that has to be fulfilled (e.g. an ISO norm for medical software), and this results in the need of testing against additional values. In all these scenarios we only have to add another Row attribute to the test. Remember that the argument values are written to the test report, so as a side-effect this produces valuable documentation. (This can become especially important if the fulfillment of some sort of external requirements has to be proven). 
So your test method might look something like that in the end: [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 2)] [Row(0, 999999999, 999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, double.MaxValue)] [Row(4, double.MaxValue - 2.5, double.MaxValue)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } And this will produce the following HTML report (with Gallio):   Not bad for the amount of work we invested in it, huh? - There might be scenarios where reports like that can be useful for demonstration purposes during a Scrum sprint review… The last requirement to fulfill is that the LastResult property is expected to store the result of the last operation. I don’t show this here, it’s trivial enough and brings nothing new… And finally: Refactor (for the right reasons) To demonstrate my way of going through the refactoring portion of the red-green-refactor cycle, I added another method to our Calculator component, namely Subtract(). Here’s the code (tests and production): // CalculatorTest.cs:   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtract(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, result); }   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtractGivesExpectedLastResult(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, calculator.LastResult); }   ...   // ICalculator.cs: /// <summary> /// Subtracts the specified operands. /// </summary> /// <param name="operand1">The operand1.</param> /// <param name="operand2">The operand2.</param> /// <returns>The result of the subtraction.</returns> /// <exception cref="ArgumentException"> /// Argument <paramref name="operand1"/> is &lt; 0.<br/> /// -- or --<br/> /// Argument <paramref name="operand2"/> is &lt; 0. /// </exception> double Subtract(double operand1, double operand2);   ...   // Calculator.cs:   public double Subtract(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }       if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }       return (this.LastResult = operand1 - operand2).Value; }   Obviously, the argument validation stuff that was produced during the red-green part of our cycle duplicates the code from the previous Add() method. So, to avoid code duplication and minimize the number of code lines of the production code, we do an Extract Method refactoring. 
One more time, this is only a matter of a few mouse clicks (and giving the new method a name) with R#: Having done that, our production code finally looks like that: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         #region ICalculator           public double? LastResult { get; private set; }           public double Add(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 + operand2).Value;         }           public double Subtract(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 - operand2).Value;         }           #endregion // ICalculator           #region Implementation (Helper)           private static void ThrowIfOneOperandIsInvalid(double operand1, double operand2)         {             if (operand1 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand1");             }               if (operand2 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand2");             }         }           #endregion // Implementation (Helper)       } // class Calculator   } // namespace Calculator But is the above worth the effort at all? It’s obviously trivial and not very impressive. All our tests were green (for the right reasons), and refactoring the code did not change anything. It’s not immediately clear how this refactoring work adds value to the project. Derick puts it like this: STOP! Hold on a second… before you go any further and before you even think about refactoring what you just wrote to make your test pass, you need to understand something: if your done with your requirements after making the test green, you are not required to refactor the code. I know… I’m speaking heresy, here. Toss me to the wolves, I’ve gone over to the dark side! Seriously, though… if your test is passing for the right reasons, and you do not need to write any test or any more code for you class at this point, what value does refactoring add? Derick immediately answers his own question: So why should you follow the refactor portion of red/green/refactor? When you have added code that makes the system less readable, less understandable, less expressive of the domain or concern’s intentions, less architecturally sound, less DRY, etc, then you should refactor it. I couldn’t state it more precise. From my personal perspective, I’d add the following: You have to keep in mind that real-world software systems are usually quite large and there are dozens or even hundreds of occasions where micro-refactorings like the above can be applied. It’s the sum of them all that counts. And to have a good overall quality of the system (e.g. in terms of the Code Duplication Percentage metric) you have to be pedantic on the individual, seemingly trivial cases. My job regularly requires the reading and understanding of ‘foreign’ code. 
So code quality/readability really makes a HUGE difference for me – sometimes it can be even the difference between project success and failure… Conclusions The above described development process emerged over the years, and there were mainly two things that guided its evolution (you might call it eternal principles, personal beliefs, or anything in between): Test-driven development is the normal, natural way of writing software, code-first is exceptional. So ‘doing TDD or not’ is not a question. And good, stable code can only reliably be produced by doing TDD (yes, I know: many will strongly disagree here again, but I’ve never seen high-quality code – and high-quality code is code that stood the test of time and causes low maintenance costs – that was produced code-first…) It’s the production code that pays our bills in the end. (Though I have seen customers these days who demand an acceptance test battery as part of the final delivery. Things seem to go into the right direction…). The test code serves ‘only’ to make the production code work. But it’s the number of delivered features which solely counts at the end of the day - no matter how much test code you wrote or how good it is. With these two things in mind, I tried to optimize my coding process for coding speed – or, in business terms: productivity - without sacrificing the principles of TDD (more than I’d do either way…).  As a result, I consider a ratio of about 3-5/1 for test code vs. production code as normal and desirable. In other words: roughly 60-80% of my code is test code (This might sound heavy, but that is mainly due to the fact that software development standards only begin to evolve. The entire software development profession is very young, historically seen; only at the very beginning, and there are no viable standards yet. If you think about software development as a kind of casting process, where the test code is the mold and the resulting production code is the final product, then the above ratio sounds no longer extraordinary…) Although the above might look like very much unnecessary work at first sight, it’s not. With the aid of the mentioned add-ins, doing all the above is a matter of minutes, sometimes seconds (while writing this post took hours and days…). The most important thing is to have the right tools at hand. Slow developer machines or the lack of a tool or something like that - for ‘saving’ a few 100 bucks -  is just not acceptable and a very bad decision in business terms (though I quite some times have seen and heard that…). Production of high-quality products needs the usage of high-quality tools. This is a platitude that every craftsman knows… The here described round-trip will take me about five to ten minutes in my real-world development practice. I guess it’s about 30% more time compared to developing the ‘traditional’ (code-first) way. But the so manufactured ‘product’ is of much higher quality and massively reduces maintenance costs, which is by far the single biggest cost factor, as I showed in this previous post: It's the maintenance, stupid! (or: Something is rotten in developerland.). In the end, this is a highly cost-effective way of software development… But on the other hand, there clearly is a trade-off here: coding speed vs. code quality/later maintenance costs. The here described development method might be a perfect fit for the overwhelming majority of software projects, but there certainly are some scenarios where it’s not - e.g. 
if time-to-market is crucial for a software project. So this is a business decision in the end. It’s just that you have to know what you’re doing and what consequences this might have… Some last words First, I’d like to thank Derick Bailey again. His two aforementioned posts (which I strongly recommend for reading) inspired me to think deeply about my own personal way of doing TDD and to clarify my thoughts about it. I wouldn’t have done that without this inspiration. I really enjoy that kind of discussions… I agree with him in all respects. But I don’t know (yet?) how to bring his insights into the described production process without slowing things down. The above described method proved to be very “good enough” in my practical experience. But of course, I’m open to suggestions here… My rationale for now is: If the test is initially red during the red-green-refactor cycle, the ‘right reason’ is: it actually calls the right method, but this method is not yet operational. Later on, when the cycle is finished and the tests become part of the regular, automated Continuous Integration process, ‘red’ certainly must occur for the ‘right reason’: in this phase, ‘red’ MUST mean nothing but an unfulfilled assertion - Fail By Assertion, Not By Anything Else!


  • URL Multiple Query Parameters Encoded with HTML Entities

    - by BRADINO
    I came across a situation where a URL with multiple query parameters was encoded using htmlentities() and PHP was not recognizing the query parameters using $_GET. A common case for encoding urls using htmlentities() is to use them inside XML documents. So a url with multiple query parameters, encoded using htmlentities() would look like this: http://www.bradino.com/?color=white&amp;size=medium&amp;quantity=3 and when that url is accessed the second and third query parameters are not recognized because instead of separating the subsequent variables with an & that character gets converted into &amp;. I could not find a good way to resolve this, so basically I just encoded the query string back to normal using html_entity_decode() and then slammed the parameters back into the $_GET array using parse_str(). $query = html_entity_decode($_SERVER['QUERY_STRING']); parse_str($query,$_GET); There must be a better way! Anyone come across this before?
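
    A tidier route when you control the URL building: keep one canonical unencoded URL, let http_build_query() assemble the query string, and entity-encode only at the moment of XML/HTML output. The live link then never contains &amp; and $_GET needs no repair. A sketch:

        <?php
        $params = array('color' => 'white', 'size' => 'medium', 'quantity' => 3);
        $url = 'http://www.bradino.com/?' . http_build_query($params);

        echo $url;
        // http://www.bradino.com/?color=white&size=medium&quantity=3

        echo htmlspecialchars($url); // encode only for the XML document
        // http://www.bradino.com/?color=white&amp;size=medium&amp;quantity=3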


  • Ordering by top commented.

    - by MILESMIBALERR
    How would I list a page of the top commented pages on the site with PHP and MySQL? The database is set up sort of like this:

        page_id | username | comment | date_submitted
        page 1  | bob      | hello   | current date
        page 1  | joe      | byebye  | current date
        page 4  | joe      | stuff   | date
        page 3  | mark     | this    | a date

    How would you query it so that it orders them by top commented pages? Here is a simple query to start with:

        $querycomments = sprintf("SELECT * FROM comments WHERE !!!!!HELLLLLP HERRE!!!!! = %s ORDER BY !!!!!!HELLLP HERE!!!!!! DESC", GetSQLValueString(????????????, "text"));
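
    No extra bookkeeping is needed: group by the page and order by the count. A sketch filled in with this table's columns (the sprintf/GetSQLValueString wrapper is dropped because nothing here comes from user input):

        <?php
        $querycomments = "SELECT page_id, COUNT(*) AS comment_count
                          FROM comments
                          GROUP BY page_id
                          ORDER BY comment_count DESC";
        $result = mysql_query($querycomments);
        while ($row = mysql_fetch_assoc($result)) {
            echo $row['page_id'] . ' (' . $row['comment_count'] . " comments)<br/>";
        }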


  • I need to calculate the date / time difference between rows of one date time column

    - by Stringz
    Details. I have a notes table with the following columns:

        ID - INT(3)
        Date - DateTime
        Note - VARCHAR(100)
        Title - Varchar(100)
        UserName - Varchar(100)

    The table holds NOTES along with their Titles, entered by UserName at the specified date / time. I need to calculate the DateTime difference between TWO ROWS in the SAME COLUMN. For example, the table has this piece of information:

        64, '2010-03-26 18:16:13', 'Action History', 'sending to Level 2.', 'Salman Khwaja'
        65, '2010-03-26 18:19:48', 'Assigned By', 'This is note one for the assignment of RF.', 'Salman Khwaja'
        66, '2010-03-27 19:19:48', 'Assigned By', 'This is note one for the assignment of CRF.', 'Salman Khwaja'

    Now I need to produce the following resultset in query reports using MySQL:

        TASK - TIME Taken
        ACTION History - 2010-03-26 18:16:13
        Assigned By - 00:03:35
        Assigned By - 25:00:00

    A smarter approach would be:

        TASK - TIME Taken
        ACTION History - 2010-03-26 18:16:13
        Assigned By - 3 minutes 35 seconds
        Assigned By - 1 day, 1 hour.

    I would appreciate it if someone could give me the PLAIN QUERY along with the PHP code to embed it.
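
    Assuming the gap wanted is between each note and the one before it (ordered by Date), the plainest route is to keep the query simple and compute the differences in PHP while looping. A sketch using the same era-appropriate mysql_* API, with the user name hard-coded as in the sample data:

        <?php
        $result = mysql_query("SELECT Date, Title FROM notes
                               WHERE UserName = 'Salman Khwaja'
                               ORDER BY Date");
        $prev = null;
        while ($row = mysql_fetch_assoc($result)) {
            if ($prev === null) {
                $taken = $row['Date'];   // first row: show the timestamp itself
            } else {
                $secs  = strtotime($row['Date']) - $prev;
                $taken = sprintf('%d days, %02d:%02d:%02d', $secs / 86400,
                                 ($secs % 86400) / 3600, ($secs % 3600) / 60, $secs % 60);
            }
            echo $row['Title'] . ' - ' . $taken . "<br/>";
            $prev = strtotime($row['Date']);
        }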


  • How would I UPDATE these table entries with SQL and PHP?

    - by CT
    I am working on an Asset Database problem. I enter assets into a database. Every object is an asset and has variables within the asset table. An object is also a type of asset. In this example the type is server. Here is the Query to retrieve all necessary data: SELECT asset.id ,asset.company ,asset.location ,asset.purchaseDate ,asset.purchaseOrder ,asset.value ,asset.type ,asset.notes ,server.manufacturer ,server.model ,server.serialNumber ,server.esc ,server.warranty ,server.user ,server.prevUser ,server.cpu ,server.memory ,server.hardDrive FROM asset LEFT JOIN server ON server.id = asset.id WHERE asset.id = '$id' I then assign all results into single php variables. How would I write a query/script to update an asset?
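
    MySQL allows a multi-table UPDATE with the same JOIN shape as the SELECT, so the asset row and its server row can be written in one statement. A sketch updating a few columns from each table; the variables are hypothetical form values and should be escaped the same way $id is:

        <?php
        // Hypothetical values posted by the edit form:
        $id = 7; $company = 'Acme'; $location = 'HQ'; $notes = 'rebuilt';
        $model = 'PowerEdge 2950'; $memory = '16GB';

        $sql = "UPDATE asset
                LEFT JOIN server ON server.id = asset.id
                SET asset.company  = '$company',
                    asset.location = '$location',
                    asset.notes    = '$notes',
                    server.model   = '$model',
                    server.memory  = '$memory'
                WHERE asset.id = '$id'";
        mysql_query($sql) or die(mysql_error());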


  • Fetch posts with attachments in a certain category?

    - by TiuTalk
    I need to retrieve a list of posts that have (at least) one attachment belonging to a category in WordPress. The relation between attachments and categories I made myself, using the WordPress default method. Here's the query that I'm running right now:

        SELECT p.* FROM `wp_posts` AS p          # The post
        INNER JOIN `wp_posts` AS a               # The attachment
                ON p.`ID` = a.`post_parent`
               AND a.`post_type` = 'attachment'
        INNER JOIN `wp_term_relationships` AS ra
                ON a.`ID` = ra.`object_id`
               AND ra.`term_taxonomy_id` IN (3)  # The category ID list
        WHERE p.`post_type` = 'post'
        ORDER BY p.`post_date` DESC
        LIMIT 15

    The problem here is that the query only uses the first attachment found, and if it doesn't belong to the category, the result isn't returned.
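
    The INNER JOIN should already match any attachment in the category, but it emits one row per matching attachment, which can make pagination and grouping hide posts. Checking for mere existence (or adding DISTINCT) yields exactly one row per post. A sketch of the EXISTS form over the same tables, ready to run through whatever DB layer you already use:

        <?php
        $sql = "SELECT p.*
                FROM `wp_posts` AS p
                WHERE p.`post_type` = 'post'
                  AND EXISTS (
                      SELECT 1
                      FROM `wp_posts` AS a
                      INNER JOIN `wp_term_relationships` AS ra
                              ON ra.`object_id` = a.`ID`
                      WHERE a.`post_parent` = p.`ID`
                        AND a.`post_type` = 'attachment'
                        AND ra.`term_taxonomy_id` IN (3)
                  )
                ORDER BY p.`post_date` DESC
                LIMIT 15";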


  • Understanding #DAX Query Plans for #powerpivot and #tabular

    - by Marco Russo (SQLBI)
    Alberto Ferrari wrote a very interesting white paper about DAX query plans. We published it on a page where we'll gather articles and tools about DAX query plans: http://www.sqlbi.com/topics/query-plans/I reviewed the paper and this is the result of many months of study - we know that we just scratched the surface of this topic, also because we still don't have enough information about internal behavior of many of the operators contained in a query plan. However, by reading the paper you will start reading a query plan and you will understand how it works the optimization found by Chris Webb one month ago to the events-in-progress scenario. The white paper also contains a more optimized query (10 time faster), even if the performance depends on data distribution and the best choice really depends on the data you have. Now you should be curious enough to read the paper until the end, because the more optimized query is the last example in the paper!


  • How would I UPDATE these table entries with SQL?

    - by CT
    I am working on an Asset Database problem. I enter assets into a database. Every object is an asset and has variables within the asset table. An object is also a type of asset. In this example the type is server. Here is the Query to retrieve all necessary data: SELECT asset.id ,asset.company ,asset.location ,asset.purchaseDate ,asset.purchaseOrder ,asset.value ,asset.type ,asset.notes ,server.manufacturer ,server.model ,server.serialNumber ,server.esc ,server.warranty ,server.user ,server.prevUser ,server.cpu ,server.memory ,server.hardDrive FROM asset LEFT JOIN server ON server.id = asset.id WHERE asset.id = '$id' How would I write a query to update an asset?


  • SQL SELECT across two tables

    - by Brett Spurrier
    Hi there, I am a little confused as to how to approach this SQL query. I have two tables (with an equal number of records), and I would like to return a column which is the division between the two. In other words, here is my not-working-correctly query:

        SELECT( (SELECT v FROM Table1) / (SELECT DotProduct FROM Table2) );

    How would I do this? All I want is a column where each row equals the same row in Table1 divided by the same row in Table2. The resulting table should have the same number of rows, but I am getting something with a lot more rows than the original two tables. I am at a complete loss. Any advice?
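
    SQL has no implicit "same row number" pairing, which is why the scalar-subquery form falls apart: without a shared key the engine has no way to line the two tables up row by row. Assuming both tables carry a common id column (an assumption; substitute whatever key actually aligns them):

        <?php
        // Run through whatever DB layer is in use; the join is the point.
        $sql = "SELECT t1.id,
                       t1.v / t2.DotProduct AS quotient
                FROM Table1 AS t1
                INNER JOIN Table2 AS t2 ON t2.id = t1.id";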


  • PHP While Loops from Arrays

    - by Michael
    I have a table that contains members' names and a field with multiple ID numbers. I want to create a query that returns results where any of the values in the ID field overlap with any of the values in the array. For example:

        lastname: Smith
        firstname: John
        id: 101, 103

    I have Array #1 with the values 101, 102, 103. I want the query to output all members who have the values 101 or 102 or 103 listed in their id field, including rows with multiple ids listed.

        Array ( [0] => 101 [1] => 102 [2] => 103 )

        $sql="SELECT firstname, lastname, id FROM members WHERE id LIKE '%".$array_num_1."%'";
        $result=mysql_query($sql);
        while ($rows=mysql_fetch_array($result)) {
            echo $rows['lastname'].', '.$rows['firstname'].'-'.$rows['id'];
        }
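
    One thing to note first: inside that string, $array_num_1 interpolates to the literal word "Array", so the LIKE can never match a field like "101, 103". One approach that should work: build one FIND_IN_SET() test per wanted id, with REPLACE() stripping the spaces after the commas, and OR them together:

        <?php
        $array_num_1 = array(101, 102, 103);

        $tests = array();
        foreach ($array_num_1 as $num) {
            // REPLACE() normalises "101, 103" to "101,103" so set matching works.
            $tests[] = "FIND_IN_SET('" . (int) $num . "', REPLACE(id, ' ', ''))";
        }
        $sql = "SELECT firstname, lastname, id FROM members
                WHERE " . implode(' OR ', $tests);

        $result = mysql_query($sql);
        while ($rows = mysql_fetch_array($result)) {
            echo $rows['lastname'] . ', ' . $rows['firstname'] . '-' . $rows['id'];
        }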


  • How to get count of another table in a left join

    - by Sinan
    I have multiple tables:

        post:           id | Name
                        1  | post-name1
                        2  | post-name2

        user:           id | username
                        1  | user1
                        2  | user2

        post_user:      post_id | user_id
                        1       | 1
                        2       | 1

        post_comments:  post_id | comment_id
                        1       | 1
                        1       | 2
                        1       | 3

    I am using a query like this:

        SELECT post.id, post.title, user.id AS uid, username
        FROM `post`
        LEFT JOIN post_user ON post.id = post_user.post_id
        LEFT JOIN user ON user.id = post_user.user_id
        ORDER BY post_date DESC

    It works as intended. However, I would like to get the number of comments for each post too. So how can I modify this query to also return the count of comments? Any ideas?
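
    One more LEFT JOIN onto post_comments plus a GROUP BY does it. COUNT(pc.comment_id), rather than COUNT(*), returns 0 for posts with no comments because it skips the NULLs the LEFT JOIN produces. A sketch against the tables shown:

        <?php
        $sql = "SELECT post.id,
                       post.title,
                       user.id AS uid,
                       user.username,
                       COUNT(pc.comment_id) AS comment_count
                FROM `post`
                LEFT JOIN post_user ON post.id = post_user.post_id
                LEFT JOIN user ON user.id = post_user.user_id
                LEFT JOIN post_comments AS pc ON pc.post_id = post.id
                GROUP BY post.id, post.title, user.id, user.username
                ORDER BY post_date DESC";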


  • Why is doing a top(1) on an indexed column in SQL Server slow?

    - by reinier
    I'm puzzled by the following. I have a DB with around 10 million rows, and (among other indices) an index on one column (campaignid_int). Now I have 700k rows where the campaignid is indeed 3835. For all these rows, the connectionid is the same. I just want to find out this connectionid.

        use messaging_db;

        SELECT TOP (1) connectionid
        FROM outgoing_messages WITH (NOLOCK)
        WHERE (campaignid_int = 3835)

    Now this query takes approx 30 seconds to run! I (with my small db knowledge) would expect it to take any one of the rows and return its connectionid. If I test this same query for a campaign which only has 1 entry, it goes really fast. So the index works. How would I tackle this, and why does this not work?

    edit: estimated execution plan: select (0%) - top (0%) - clustered index scan (100%)
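
    The plan is the tell: a clustered index scan means the index on campaignid_int is not being used at all, likely because connectionid would have to be looked up per row anyway. A covering index that INCLUDEs connectionid should turn the scan into a single seek. Sketched with PHP's sqlsrv driver for consistency with the rest of this page (connection details are placeholders; the CREATE INDEX is the part that matters):

        <?php
        $conn = sqlsrv_connect('dbserver', array('Database' => 'messaging_db'));

        // Hypothetical covering index: seek on campaignid_int; the included
        // connectionid makes per-row lookups unnecessary.
        sqlsrv_query($conn, 'CREATE NONCLUSTERED INDEX ix_campaign_connection
                             ON outgoing_messages (campaignid_int)
                             INCLUDE (connectionid)');

        $stmt = sqlsrv_query($conn,
            'SELECT TOP (1) connectionid FROM outgoing_messages
             WHERE campaignid_int = 3835');
        $row = sqlsrv_fetch_array($stmt, SQLSRV_FETCH_ASSOC);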

