Search Results

Search found 1335 results on 54 pages for 'dylan smith'.

  • Copying content on webpages in Safari to HTML

    - by Carl Smith
    Hi, is there an easier way to copy and paste website content as HTML? I want what I copy to look like this:

        Product Information:
        Length: S / M / L
        Material: Polyester and Elasthane
        Brand: Roxana Exclusive
        Style: Basque

    But when I paste it into my content box it looks like this:

        Product Information Length: S / M / L Material: Polyester and Elasthane Brand: Roxana Exclusive Style: Basque

    Then I need to edit it in the HTML editor to rearrange it. Is there some sort of app or plugin I can get that turns the text of the page into HTML, so it looks right straight away when I copy it into my content box? If that makes any sense? Thanks, Carl Smith :-)
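
    Not part of the original question, but one way to see the selected text as HTML markup (standard DOM APIs, runnable from Safari's JavaScript console) is a sketch like this:

        var sel = window.getSelection();              // the text currently highlighted
        if (sel.rangeCount > 0) {
            var container = document.createElement('div');
            container.appendChild(sel.getRangeAt(0).cloneContents());
            console.log(container.innerHTML);         // the selection as HTML markup
        }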

    Read the article

  • Gateway IP Returns to Zero

    - by Robert Smith
    When you set a static IP under Ubuntu 12.04.1, you must supply the desired machine IP and the gateway IP, using Network Manager. When I first entered them and rebooted, everything worked great. On the second boot, however, Firefox could find no Web page. Upon checking, I discovered that the gateway IP had returned to zero. Now, no matter how often I resupply it, it returns to zero immediately after NM "saves" it: that is, it appears as zero when redisplayed. The only way I can get to the Internet is to restore DHCP operation. I need to use a static IP for access to my home network. I would appreciate any suggestion. --Robert Smith
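
    Not from the post, but one possible workaround is to bypass Network Manager and define the static address in /etc/network/interfaces; this is only a sketch, and the interface name and addresses below are assumptions for a typical home network:

        # /etc/network/interfaces -- example static config (addresses are placeholders)
        auto eth0
        iface eth0 inet static
            address 192.168.1.50
            netmask 255.255.255.0
            gateway 192.168.1.1
            dns-nameservers 192.168.1.1

    Then bring the interface down and up again with sudo ifdown eth0 && sudo ifup eth0.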

    Read the article

  • Ways to ensure unique instances of a class?

    - by Peanut
    I'm looking for different ways to ensure that each instance of a given class is a uniquely identifiable instance. For example, I have a Name class with the field name. Once I have a Name object with name initialised to "John Smith", I don't want to be able to instantiate a different Name object whose name is also "John Smith"; or, if instantiation does take place, I want a reference to the original object to be passed back rather than a new object. I'm aware that one way of doing this is to have a static factory that holds a Map of all the current Name objects, where the factory checks that an object with "John Smith" as the name doesn't already exist before passing back a reference to a Name object. Another way I can think of off the top of my head is having a static Map in the Name class and throwing an exception from the constructor if the value passed in for name is already in use by another object; however, I'm aware that throwing exceptions in a constructor is generally a bad idea. Are there other ways of achieving this?
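
    Not from the post, but a minimal sketch of the static-factory idea in Java (class and method names are illustrative):

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        public final class Name {
            // One canonical instance per distinct name value.
            private static final Map<String, Name> INSTANCES = new ConcurrentHashMap<>();

            private final String name;

            private Name(String name) {   // private constructor: callers must go through of()
                this.name = name;
            }

            // Returns the existing instance for this name, creating it atomically if absent.
            public static Name of(String name) {
                return INSTANCES.computeIfAbsent(name, Name::new);
            }

            public String getName() {
                return name;
            }
        }

    Note the map holds strong references, so instances are never garbage collected; a weak-reference cache is the usual refinement if that matters.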

    Read the article

  • What is the effect of this order_by clause?

    - by bread
    I don't understand what this order_by clause is doing and whether I need it or not:

        select c.customerid, c.firstname, c.lastname, i.order_date, i.item, i.price
        from items_ordered i, customers c
        where i.customerid = c.customerid
        group by c.customerid, i.item, i.order_date
        order by i.order_date desc;

    This produces this data:

        10330 Shawn Dalton     30-Jun-1999  Pogo stick           28.00
        10101 John Gray        30-Jun-1999  Raft                 58.00
        10410 Mary Ann Howell  30-Jan-2000  Unicycle            192.50
        10101 John Gray        30-Dec-1999  Hoola Hoop           14.75
        10449 Isabela Moore    29-Feb-2000  Flashlight            4.50
        10410 Mary Ann Howell  28-Oct-1999  Sleeping Bag         89.22
        10339 Anthony Sanchez  27-Jul-1999  Umbrella              4.50
        10449 Isabela Moore    22-Dec-1999  Canoe               280.00
        10298 Leroy Brown      19-Sep-1999  Lantern              29.00
        10449 Isabela Moore    19-Mar-2000  Canoe paddle         40.00
        10413 Donald Davids    19-Jan-2000  Lawnchair            32.00
        10330 Shawn Dalton     19-Apr-2000  Shovel               16.75
        10439 Conrad Giles     18-Sep-1999  Tent                 88.00
        10298 Leroy Brown      18-Mar-2000  Pocket Knife         22.38
        10299 Elroy Keller     18-Jan-2000  Inflatable Mattress  38.00
        10438 Kevin Smith      18-Jan-2000  Tent                 79.99
        10101 John Gray        18-Aug-1999  Rain Coat            18.30
        10449 Isabela Moore    15-Dec-1999  Bicycle             380.50
        10439 Conrad Giles     14-Aug-1999  Ski Poles            25.50
        10449 Isabela Moore    13-Aug-1999  Unicycle            180.79
        10101 John Gray        08-Mar-2000  Sleeping Bag         88.70
        10299 Elroy Keller     06-Jul-1999  Parachute          1250.00
        10438 Kevin Smith      02-Nov-1999  Pillow                8.50
        10101 John Gray        02-Jan-2000  Lantern              16.00
        10315 Lisa Jones       02-Feb-2000  Compass               8.00
        10449 Isabela Moore    01-Sep-1999  Snow Shoes           45.00
        10438 Kevin Smith      01-Nov-1999  Umbrella              6.75
        10298 Leroy Brown      01-Jul-1999  Skateboard           33.00
        10101 John Gray        01-Jul-1999  Life Vest           125.00
        10330 Shawn Dalton     01-Jan-2000  Flashlight           28.00
        10298 Leroy Brown      01-Dec-1999  Helmet               22.00
        10298 Leroy Brown      01-Apr-2000  Ear Muffs            12.50

    While if I remove the order_by clause completely, as in this query:

        select c.customerid, c.firstname, c.lastname, i.order_date, i.item, i.price
        from items_ordered i, customers c
        where i.customerid = c.customerid
        group by c.customerid, i.item, i.order_date;

    I get these results:

        10101 John Gray        30-Dec-1999  Hoola Hoop           14.75
        10101 John Gray        02-Jan-2000  Lantern              16.00
        10101 John Gray        01-Jul-1999  Life Vest           125.00
        10101 John Gray        30-Jun-1999  Raft                 58.00
        10101 John Gray        18-Aug-1999  Rain Coat            18.30
        10101 John Gray        08-Mar-2000  Sleeping Bag         88.70
        10298 Leroy Brown      01-Apr-2000  Ear Muffs            12.50
        10298 Leroy Brown      01-Dec-1999  Helmet               22.00
        10298 Leroy Brown      19-Sep-1999  Lantern              29.00
        10298 Leroy Brown      18-Mar-2000  Pocket Knife         22.38
        10298 Leroy Brown      01-Jul-1999  Skateboard           33.00
        10299 Elroy Keller     18-Jan-2000  Inflatable Mattress  38.00
        10299 Elroy Keller     06-Jul-1999  Parachute          1250.00
        10315 Lisa Jones       02-Feb-2000  Compass               8.00
        10330 Shawn Dalton     01-Jan-2000  Flashlight           28.00
        10330 Shawn Dalton     30-Jun-1999  Pogo stick           28.00
        10330 Shawn Dalton     19-Apr-2000  Shovel               16.75
        10339 Anthony Sanchez  27-Jul-1999  Umbrella              4.50
        10410 Mary Ann Howell  28-Oct-1999  Sleeping Bag         89.22
        10410 Mary Ann Howell  30-Jan-2000  Unicycle            192.50
        10413 Donald Davids    19-Jan-2000  Lawnchair            32.00
        10438 Kevin Smith      02-Nov-1999  Pillow                8.50
        10438 Kevin Smith      18-Jan-2000  Tent                 79.99
        10438 Kevin Smith      01-Nov-1999  Umbrella              6.75
        10439 Conrad Giles     14-Aug-1999  Ski Poles            25.50
        10439 Conrad Giles     18-Sep-1999  Tent                 88.00
        10449 Isabela Moore    15-Dec-1999  Bicycle             380.50
        10449 Isabela Moore    22-Dec-1999  Canoe               280.00
        10449 Isabela Moore    19-Mar-2000  Canoe paddle         40.00
        10449 Isabela Moore    29-Feb-2000  Flashlight            4.50
        10449 Isabela Moore    01-Sep-1999  Snow Shoes           45.00
        10449 Isabela Moore    13-Aug-1999  Unicycle            180.79

    I'm not sure what the order_by is doing here and if it's having the intended effects.
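
    Worth noting (an observation on the data shown, not from the post): in the first result set the rows descend by the text of the date - 30-Jun, then 30-Jan, then 30-Dec, then 29-Feb - which is what you would expect if order_date were stored as a character column rather than a date type. If that is the case and the database is MySQL, one hedged fix for a true chronological sort is:

        order by str_to_date(i.order_date, '%d-%b-%Y') desc;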

    Read the article

  • Tricky SQL query involving consecutive values

    - by Gabriel
    I need to perform a relatively easy to explain but (given my somewhat limited skills) hard to write SQL query. Assume we have a table similar to this one:

        exam_no | name | surname | result | date
        --------+------+---------+--------+------------
              1 | John | Doe     | PASS   | 2012-01-01
              1 | Ryan | Smith   | FAIL   | 2012-01-02  <--
              1 | Ann  | Evans   | PASS   | 2012-01-03
              1 | Mary | Lee     | FAIL   | 2012-01-04
            ... | ...  | ...     | ...    | ...
              2 | John | Doe     | FAIL   | 2012-02-01  <--
              2 | Ryan | Smith   | FAIL   | 2012-02-02
              2 | Ann  | Evans   | FAIL   | 2012-02-03
              2 | Mary | Lee     | PASS   | 2012-02-04
            ... | ...  | ...     | ...    | ...
              3 | John | Doe     | FAIL   | 2012-03-01
              3 | Ryan | Smith   | FAIL   | 2012-03-02
              3 | Ann  | Evans   | PASS   | 2012-03-03
              3 | Mary | Lee     | FAIL   | 2012-03-04  <--

    Note that exam_no and date aren't necessarily related as one might expect from the kind of example I chose. Now, the query that I need to do is as follows: From the latest exam (exam_no = 3) find all the students that have failed (John Doe, Ryan Smith and Mary Lee). For each of these students find the date of the first of the batch of consecutively failing exams. Another way to put it would be: for each of these students find the date of the first failing exam that comes after their last passing exam. (Look at the arrows in the table). The resulting table should be something like this:

        name | surname | date_since_failing
        -----+---------+--------------------
        John | Doe     | 2012-02-01
        Ryan | Smith   | 2012-01-02
        Mary | Lee     | 2012-01-04
        Ann  | Evans   | 2012-02-03

    How can I perform such a query? Thank you for your time.
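
    Not from the post, but one hedged way to express the stated rule in standard SQL (assuming the table is called exams, that the latest exam is max(exam_no), and that the sentinel date predates all real exams; the column name date may need quoting in some dialects):

        select e.name, e.surname, min(e.date) as date_since_failing
        from exams e
        where e.result = 'FAIL'
          -- only failures after the student's last passing exam
          and e.date > coalesce(
                (select max(p.date) from exams p
                 where p.name = e.name and p.surname = e.surname
                   and p.result = 'PASS'),
                '1000-01-01')
          -- only students who failed the latest exam
          and exists
                (select 1 from exams f
                 where f.name = e.name and f.surname = e.surname
                   and f.exam_no = (select max(exam_no) from exams)
                   and f.result = 'FAIL')
        group by e.name, e.surname;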

    Read the article

  • Odd SQL Results

    - by Ryan Burnham
    So I have the following query:

        Select id, [First], [Last], [Business] as contactbusiness,
               (Case When ([Business] != '' or [Business] is not null)
                     Then [Business] Else 'No Phone Number' END)
        from contacts

    The results look like:

        id First Last contactbusiness (No column name)
        2 John Smith
        3 Sarah Jane 0411 111 222 0411 111 222
        6 John Smith 0411 111 111 0411 111 111
        8 NULL No Phone Number
        11 Ryan B 08 9999 9999 08 9999 9999
        14 David F NULL No Phone Number

    I'd expect record 2 to also show No Phone Number. If I change the "[Business] is not null" to "[Business] != null" then I get the correct results:

        id First Last contactbusiness (No column name)
        2 John Smith No Phone Number
        3 Sarah Jane 0411 111 222 0411 111 222
        6 John Smith 0411 111 111 0411 111 111
        8 NULL No Phone Number
        11 Ryan B 08 9999 9999 08 9999 9999
        14 David F NULL No Phone Number

    Normally you need to use "is not null" rather than "!= null". What's going on here?
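
    A note from the editor, not the post: with or, any non-NULL [Business] (including the empty string in record 2) satisfies [Business] is not null, so the Case never falls through to the ELSE. And assuming SQL Server with ANSI_NULLS on, [Business] != null is never true, so that version quietly reduces to just [Business] != '', which happens to give the desired output. The presumably intended predicate combines the two tests with and:

        Case When [Business] is not null and [Business] != ''
             Then [Business] Else 'No Phone Number' END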

    Read the article

  • Creating an excel macro to sum lines with duplicate values

    - by john
    I need a macro to look at the list of data below, count the number of instances each name appears, and sum the values for each of them. I know a pivot table or a series of formulas could work, but I'm doing this for a coworker and it has to be a 'one click here' kind of deal. The data is as follows:

        A       B
        Smith   200.00
        Dean    100.00
        Smith   100.00
        Smith    50.00
        Wilson   25.00
        Dean     25.00
        Barry   100.00

    The end result would look like this:

        Smith   3   350.00
        Dean    2   125.00
        Wilson  1    25.00
        Barry   1   100.00

    Thanks in advance for any help you can offer!
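
    Not from the post, but a minimal VBA sketch of the idea; it assumes names in column A and values in column B starting at row 1, and writes the summary to columns D-F of the active sheet:

        Sub SummarizeNames()
            ' Count instances and sum values per name using Dictionary objects.
            Dim counts As Object, sums As Object
            Set counts = CreateObject("Scripting.Dictionary")
            Set sums = CreateObject("Scripting.Dictionary")

            Dim lastRow As Long, r As Long, key As String
            lastRow = Cells(Rows.Count, "A").End(xlUp).Row

            For r = 1 To lastRow
                key = CStr(Cells(r, "A").Value)
                counts(key) = counts(key) + 1          ' missing keys start at Empty (= 0)
                sums(key) = sums(key) + Cells(r, "B").Value
            Next r

            ' Write one summary row per distinct name.
            Dim k As Variant, outRow As Long
            outRow = 1
            For Each k In counts.Keys
                Cells(outRow, "D").Value = k
                Cells(outRow, "E").Value = counts(k)
                Cells(outRow, "F").Value = sums(k)
                outRow = outRow + 1
            Next k
        End Sub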

    Read the article

  • Simple Select Statement on MySQL Database Hanging

    - by AlishahNovin
    I have a very simple SQL select statement on a very large table that is non-normalized. (Not my design at all; I'm just trying to optimize while simultaneously trying to convince the owners of a redesign.) Basically, the statement is like this:

        SELECT FirstName, LastName, FullName, State
        FROM Activity
        WHERE (FirstName=@name OR LastName=@name OR FullName=@name) AND State=@state;

    Now, FirstName, LastName, FullName and State are all indexed as BTrees, but without prefix - the whole column is indexed. The State column is a 2 letter state code. What I'm finding is this:

    When @name = 'John Smith' and @state = '%', the search is really fast and yields results immediately.
    When @name = 'John Smith' and @state = 'FL', the search takes 5 minutes (and usually this means the web service times out...).
    When I remove the FirstName and LastName comparisons and only use the FullName and State, both cases above work very quickly.
    When I keep the FirstName, LastName, FullName, and State searches but use LIKE for each search, it works fast for @name='John Smith%' and @state='%', but slow for @name='John Smith%' and @state='FL'.
    When I search against 'John Sm%' and @state='FL', the search finds results immediately.
    When I search against 'John Smi%' and @state='FL', the search takes 5 minutes.

    Now, just to reiterate - the table is not normalized. John Smith appears many, many times, as do many other users, because there is no reference to some form of users/people table. I'm not sure how many times a single user may appear, but the table itself has 90 million records. Again, not my design...

    What I'm wondering is - though there are many problems with this design, what is causing this specific problem? My guess is that the index trees are just too large and it takes a very long time traversing them (FirstName, LastName, FullName). Anyway, I appreciate anyone's help with this. Like I said, I'm working on convincing them of a redesign, but in the meantime, if someone could help me figure out what the exact problem is, that'd be fantastic.
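
    A hedged thought, not from the post: MySQL often struggles to combine several single-column indexes when an OR spans different columns. Two common things to try (index names below are illustrative) are composite indexes that cover each name predicate together with State, and rewriting the OR as a UNION so each branch can use its own index:

        -- composite indexes
        ALTER TABLE Activity ADD INDEX idx_full_state  (FullName,  State);
        ALTER TABLE Activity ADD INDEX idx_first_state (FirstName, State);
        ALTER TABLE Activity ADD INDEX idx_last_state  (LastName,  State);

        -- OR rewritten as UNION
        SELECT FirstName, LastName, FullName, State FROM Activity
         WHERE FirstName=@name AND State=@state
        UNION
        SELECT FirstName, LastName, FullName, State FROM Activity
         WHERE LastName=@name AND State=@state
        UNION
        SELECT FirstName, LastName, FullName, State FROM Activity
         WHERE FullName=@name AND State=@state;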

    Read the article

  • Is it possible to use ContainsTable to get results for more than one column?

    - by LockeCJ
    Consider the following table:

        People
            FirstName nvarchar(50)
            LastName  nvarchar(50)

    Let's assume for the moment that this table has a full-text index on it for both columns. Let's suppose that I wanted to find all of the people named "John Smith" in this table. The following query seems like a perfectly rational way to accomplish this:

        SELECT * from People p
        INNER JOIN CONTAINSTABLE(People,*,'"John*" AND "Smith*"')

    Unfortunately, this will return no results, assuming that there is no record in the People table that contains both "John" and "Smith" in either the FirstName or LastName columns. It will not match a record with "John" in the FirstName column, and "Smith" in the LastName column, or vice-versa. My question is this: How does one accomplish what I'm trying to do above? Please consider that the example above is simplified. The real table I'm working with has ten columns and the input I'm receiving is a single string which is split up based on standard word breakers (space, dash, etc.)
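
    Not from the post, but for the simplified two-column case one hedged workaround is per-column CONTAINS predicates covering both orderings:

        SELECT * FROM People p
        WHERE (CONTAINS(p.FirstName, '"John*"') AND CONTAINS(p.LastName, '"Smith*"'))
           OR (CONTAINS(p.FirstName, '"Smith*"') AND CONTAINS(p.LastName, '"John*"'));

    This clearly scales poorly to ten columns and an arbitrary number of input words, which is presumably why the question is being asked.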

    Read the article

  • Parsing complex string using regex

    - by wojtek_z
    My regex skills are not very good, and recently a new data element has thrown my parser into a loop. Take the following string:

        "+USER=Bob Smith-GROUP=Admin+FUNCTION=Read/FUNCTION=Write"

    Previously I had the following for my regex:

        [+\\-/]

    Which would turn the result into:

        USER=Bob Smith
        GROUP=Admin
        FUNCTION=Read
        FUNCTION=Write

    But now I have values with dashes in them, which is causing bad output. The new string looks like:

        "+USER=Bob Smith-GROUP=Admin+FUNCTION=Read/FUNCTION=Write/FUNCTION=Read-Write"

    Which gives me the following result, and breaks the key = value structure:

        USER=Bob Smith
        GROUP=Admin
        FUNCTION=Read
        FUNCTION=Write
        FUNCTION=Read
        Write

    Can someone help me formulate a valid regex for handling this, or point me to some key / value examples? Basically I need to be able to handle + - / signs in order to get combinations.
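
    Not from the post, but assuming Java (the escaped [+\\-/] suggests it), one hedged fix is to split only on delimiters that are immediately followed by a new KEY= token, using a lookahead:

        String input = "+USER=Bob Smith-GROUP=Admin+FUNCTION=Read/FUNCTION=Write/FUNCTION=Read-Write";
        // Split on + - / only when the next characters look like a new KEY= pair,
        // so the dash inside "Read-Write" is left alone.
        String[] parts = input.split("[+\\-/](?=[A-Z]+=)");
        for (String part : parts) {
            if (!part.isEmpty()) {        // the leading "+" yields one empty token
                System.out.println(part); // USER=Bob Smith, GROUP=Admin, FUNCTION=Read, ...
            }
        }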

    Read the article

  • PHP array taking up too much memory

    - by Dylan Taylor
    I have a multidimensional array. The array itself is fine. My problem is that the script takes up monster amounts of memory, and since I'm running this on my MAMP install on my iBook G4, my computer freezes up. Below is the full script:

        $query = "SELECT * FROM posts ORDER BY id DESC LIMIT 10";
        $result = mysql_query($query);
        $posts = array();
        while ($row = mysql_fetch_array($result)) {
            $posts[$row["id"]]['post_id'] = $row["id"];
            $posts[$row["id"]]['post_title'] = $row["title"];
            $posts[$row["id"]]['post_text'] = $row["text"];
            $posts[$row["id"]]['post_tags'] = $row["tags"];
            $posts[$row["id"]]['post_category'] = $row["category"];
        }

        foreach ($posts as $post) {
            echo $post["post_id"];
        }

    Is there a workaround that still achieves my goal (to export the MySQL query rows to an array)? -Dylan
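
    A hedged observation, not from the post: mysql_fetch_array() returns every column twice by default (once under a numeric index and once under the column name), roughly doubling the memory per row. mysql_fetch_assoc() keeps only the named keys:

        while ($row = mysql_fetch_assoc($result)) {
            // ... same assignments as above, but each column is stored only once per row
        }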

    Read the article

  • Are there any well known algorithms to detect the presence of names?

    - by Rhubarb
    For example, given the string "Bob went fishing with his friend Jim Smith.", Bob and Jim Smith are both names, but "bob" and "smith" are both ordinary words. Were it not for the capitalization, there would be little indication of this outside of our knowledge of the sentence. Without doing grammar analysis, are there any well known algorithms for detecting the presence of names, at least Western names?
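
    Not from the post: the standard approach here is named-entity recognition (gazetteer/dictionary lookups plus a statistical model), but as a hedged illustration, the naive capitalization heuristic the question alludes to is easy to sketch in Java:

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class NaiveNameSpotter {
            public static void main(String[] args) {
                String text = "Bob went fishing with his friend Jim Smith.";
                // Runs of capitalized words; this misses lowercase names and
                // wrongly flags any capitalized sentence-initial word.
                Pattern candidate = Pattern.compile("\\b[A-Z][a-z]+(?: [A-Z][a-z]+)*");
                Matcher m = candidate.matcher(text);
                while (m.find()) {
                    System.out.println(m.group()); // prints "Bob", then "Jim Smith"
                }
            }
        }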

    Read the article

  • Need help joining tables...

    - by yuudachi
    I am a MySQL newbie, so sorry if this is a dumb question. These are my tables:

        student table:
            SID (primary key)
            student_name
            advisor (foreign key to faculty.facultyID)
            requested_advisor (foreign key to faculty.facultyID)

        faculty table:
            facultyID (primary key)
            advisor_name

    I want to query a table that shows everything in the student table, but I want advisor and requested_advisor to show up as names, not the ID numbers. So it displays like this on the webpage:

        Student Name: Jane Smith
        SID: 860123456
        Current Advisor: John Smith
        Requested advisor: James Smith

    not like this:

        Student Name: Jane Smith
        SID: 860123456
        Current Advisor: 1
        Requested advisor: 2

    This query comes out close, but I don't know how to get the requested_advisor to show up as a name:

        SELECT student.student_name, SID, student_email, faculty.advisor_name
        FROM student
        INNER JOIN faculty ON student.advisor = faculty.facultyID;
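
    Not from the post, but the usual sketch is to join faculty twice under different aliases, one join per foreign key:

        SELECT s.student_name, s.SID,
               cur.advisor_name AS current_advisor,
               req.advisor_name AS requested_advisor
        FROM student s
        INNER JOIN faculty cur ON s.advisor = cur.facultyID
        INNER JOIN faculty req ON s.requested_advisor = req.facultyID;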

    Read the article

  • Setting variables in shell script by running commands

    - by rajya vardhan
    >cat /tmp/list1
    john
    jack
    >cat /tmp/list2
    smith
    taylor

    It is guaranteed that list1 and list2 will have an equal number of lines.

        f(){
            i=1
            while read line
            do
                var1 = `sed -n '$ip' /tmp/list1`
                var2 = `sed -n '$ip' /tmp/list2`
                echo $i,$var1,$var2
                i=`expr $i+1`
                echo $i,$var1,$var2
            done < $INFILE
        }

    So the output of f() should be:

        1,john,smith
        2,jack,taylor

    But I'm getting:

        1,p,p
        1+1,p,p

    If I replace the following:

        var1 = `sed -n '$ip' /tmp/list1`
        var2 = `sed -n '$ip' /tmp/list2`

    with this:

        var1=`head -$i /tmp/vip_list|tail -1`
        var2=`head -$i /tmp/lb_list|tail -1`

    then the output is:

        1,john,smith
        1,john,smith

    Not an expert of shell, so please excuse me if this sounds childish :)
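
    Not from the post, but a hedged bash sketch that reads both lists in lockstep, sidestepping the single-quoted '$ip' and the spaces around = (both of which break the original):

        #!/bin/bash
        i=1
        # Open each list on its own file descriptor and read them in parallel.
        while read -r var1 <&3 && read -r var2 <&4; do
            echo "$i,$var1,$var2"
            i=$((i + 1))
        done 3</tmp/list1 4</tmp/list2

    A one-liner like paste -d, /tmp/list1 /tmp/list2 | nl -s, -w1 produces the same output.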

    Read the article

  • Creating one row of information in excel using a unique value

    - by user1426513
    This is my first post. I am currently working on a project at work which requires that I work with several different worksheets in order to create one mail master worksheet, as it were, in order to do a mail merge. The worksheet contains information regarding different purchases, and each purchaser is identified with their own ID number. Below is an example of what my spreadsheet looks like now (however I do have more columns):

        ID  Salutation Name   Address | ID  Name            Donation | ID  Name            Tickets
        9   Mr. John Doe      123     | 12  Ms. Jane Smith  100.00   | 12  Ms. Jane Smith  300.00
        12  Ms. Jane Smith    456     | 22  Mr. Mike Man    500.00   | 84  Ms. Jo Smith    300.00

    What I would like to do is somehow sort my data so that everything with the same unique identifier (ID) lines up on the same row. For example, for ID 12 (Jane Smith), all the information for her will show up in one row matched by her ID number, and ID 22 will match up with 22, etc. When I merged all of my spreadsheets together, I sorted them all by ID number. My problem is that not everyone who made a donation bought a ticket, and some people just bought tickets and nothing else, so sorting doesn't work. Hopefully this makes sense. Thanks in advance.
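
    Not from the post, but the usual hedged approach: build one column of unique IDs, then pull each block in with a lookup per block; the sheet and range names below are assumptions:

        =IFERROR(VLOOKUP($A2, Donations!$A:$C, 3, FALSE), "")

    with a similar formula per block (Tickets, Address, ...), so an ID that never appears in a given block simply comes back blank instead of breaking the row alignment.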

    Read the article

  • Run script after switching user account "to the same account"

    - by Peter Sivák
    In Ubuntu, when I click on Switch User Account... and then choose the same account to log in (for example if my name is John Smith, I click on switch user account and then log into the John Smith account again), how can I run a script after that? (I know, that I can run a script after "first" login by putting it in /etc/profile file, but this script is not executed again when I choose switch user account and then immediately log in back to the same account.)

    Read the article

  • GDL Presents: Women Techmakers with JESS3

    Join Leslie, COO and Co-founder of JESS3, in conversation with Megan Smith and Betsy Masiello, as they discuss Leslie's experience growing a design business from two employees to a transnational operation.
    Hosts: Megan Smith - Vice President, Google [x] | Betsy Masiello - Policy Manager
    Guest: Leslie Bradshaw - President, COO and Co-founder, JESS3

    Read the article

  • What's a good AJAX Autocomplete Plugin for jQuery?

    - by Murat Ayfer
    I usually use jQuery as my JS library on my sites, and I would like to stick with it since I'm familiar with it. I need to implement an AJAX autocomplete, mainly for suggesting search results. Here are a few I have found:

        Dylan Verheul's version
        Jörn Zaefferer's version
        A modification of Dylan Verheul's version

    If you have tried any of these plugins, were you happy with them? Which one do you think is the most (and easily) customizable?

    Read the article

  • CQRS - Questions and Concerns

    - by Dylan Smith
    I've been doing a lot of learning on CQRS and Event Sourcing over the last little while and I have a number of questions that I haven't been able to answer.

    1. What is the benefit of CQRS when compared to a typical DDD architecture that uses Event Sourcing and properly captures intent and behavior via verb-based commands? (other than Scalability)
    2. When using CQRS what do you do with complex query-based logic?

    I'm going to elaborate on #1 in this blog post and I'll do a follow-up post on #2.

    I watched through Greg Young's video on the business benefits of CQRS + Event Sourcing and first let me say that I thought it was an excellent presentation that really drives home a lot of the benefits to this approach to architecture (I watched it twice in a row I enjoyed it so much!). But it didn't answer some of my questions fully (I wish I had been there to ask these of Greg in person!). So let me pick apart some of the points he makes and how they relate to my first question above.

    I'm completely sold on the idea of event sourcing and have a clear understanding of the benefits that it brings to the table, so I'm not going to question that. But you can use event sourcing without going to a CQRS architecture, so my main question is around the benefits of CQRS + Event Sourcing vs Event Sourcing + Typical DDD architecture.

    [Diagram: Architecture with Event Sourcing + Commands on Left, CQRS on Right]

    Greg talks about how the stereotypical architecture doesn't support DDD, but is that only because his diagram shows DTO's coming up from the client? If we use the same diagram but allow the client to send commands, doesn't that remove a lot of the arguments that Greg makes against the stereotypical architecture?

        We can now introduce verbs into the system.
        We can capture intent now (storing it still requires event sourcing, but you can implement event sourcing without doing CQRS).
        We can create a rich domain model (as opposed to an anemic domain model).

    Scalability is obviously a benefit that CQRS brings to the table, but like Greg says, very few of the systems we create truly need significant scalability.

    Greg talks about the ability to scale your development efforts. He says CQRS allows you to split the system into 3 parts (Client, Domain/Commands, Reads) and assign 3 teams of developers to work on them in parallel; letting you scale your development efforts by 3x with nearly linear gains. But in the stereotypical architecture don't you already have 2 separate modules that you can split your dev efforts between: the client that sends commands/queries and receives DTO's, and the Domain which accepts commands/queries and generates events/DTO's. If this is true it's not really a 3x scaling you achieve with CQRS but merely a 1.5x scaling, which while great doesn't sound nearly as dramatic ("I can do it with 10 devs in 12 months - let me hire 5 more and we can have it done in 8 months").

    Making the Query side "stupid simple" such that you can assign junior developers (or even outsource it) sounds like a valid benefit, but I have some concerns over what you do with complex query-based logic/behavior. I'm going to go into more detail on this in a follow-up blog post shortly. He also seemed to focus on how "stupid-simple" it is doing queries against the de-normalized data store, but I imagine there is still significant complexity in the event handlers that interpret the events and apply them to the de-normalized tables.

    It sounds like Greg suggests that because we're doing CQRS that allows us to apply Event Sourcing when we otherwise wouldn't be able to (~33:30 in the video). I don't believe this is true. I don't see why you wouldn't be able to apply Event Sourcing without separating out the Commands and Queries. The queries would just operate against the domain model instead of the database. But you'd still get the benefits of Event Sourcing. Without CQRS the queries would only be able to operate against the current state rather than the event history, but even in CQRS the domain behaviors can only operate against the current state and I don't see that being a big limiting factor. If some query needs to operate against something that is not captured by the current state you would just have to update the domain model to capture that information (no different than if that statement were made about a Command under CQRS).

    Some of the benefits I do see being applicable are that your domain model might end up being simpler/smaller since it only needs to represent the state needed to process commands and not worry about the reads (like the Deactivate Inventory Item and associated comment example that Greg provides). And also commands that can be handled in a Transaction Script style manner by the command handler simply generating events and not touching the domain model. It also makes it easier for your senior developers to focus on the command behavior and ignore the queries, which is usually going to be a better use of their time. And of course scalability.

    If anybody out there has any thoughts on this and can help educate me further, please either leave a comment or feel free to get in touch with me via email:
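
    As an editorial aside, a hedged Java sketch (all type names invented for illustration) of the "Transaction Script style" command handler mentioned above - the command is validated and translated directly into an event, with no aggregate loaded:

        // Hypothetical command/event types, loosely following Greg's
        // Deactivate Inventory Item example.
        record DeactivateInventoryItem(String itemId, String comment) {}
        record InventoryItemDeactivated(String itemId, String comment) {}

        interface EventStore {
            void append(Object event);
        }

        class DeactivateInventoryItemHandler {
            private final EventStore eventStore;

            DeactivateInventoryItemHandler(EventStore eventStore) {
                this.eventStore = eventStore;
            }

            // Pass-through handling: no domain model is touched, the handler
            // simply validates the command and emits the resulting event.
            void handle(DeactivateInventoryItem cmd) {
                if (cmd.comment() == null || cmd.comment().isBlank()) {
                    throw new IllegalArgumentException("A comment is required");
                }
                eventStore.append(new InventoryItemDeactivated(cmd.itemId(), cmd.comment()));
            }
        }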

    Read the article

  • CQRS - Benefits

    - by Dylan Smith
    Thanks to all the comments and feedback from the last post I think I have a better understanding now of the benefits of CQRS (separate from the benefits of Event Sourcing). I'm going to try and sum it up here, and point out some areas where I could still use some advice.

    CQRS Benefits

    Sounds like the primary benefit of CQRS as an architecture is it allows you to create a simpler domain model by sucking out everything related to queries. I can definitely see the benefit to this; in general the domain logic related to commands is the high-value behavior in the software, but the logic required to service the queries would add a lot of low-value "noise" to the domain model that would dilute the high-value (command) behavior - sorting, paging, filtering, pre-fetch paths, etc. Also the most appropriate domain structure for implementing commands might not be the most optimal for implementing queries. To paraphrase Greg, this usually results in a domain model that is mediocre at both, piss-poor at one, or more likely piss-poor at both commands and queries.

    Not only will you be able to simplify your domain model by pulling out all the query logic, but at least a handful of commands in most systems will probably be "pass-through" type commands with little to no logic that just generate events. If these can be implemented directly in the command-handler and never touch the domain model, this allows you to slim down the domain model even more.

    Also, if you were to do event sourcing without CQRS, you no longer have a database containing the current state (only the domain model would), which makes it difficult (or impossible) to support ad-hoc querying and/or reporting that is common in most business software.

    Of course CQRS provides some great scalability benefits; not only scalability but I have to assume that it provides extremely low latency for most operations, especially if you have an asynchronous event bus.

    I know Greg says that you get a 3x scaling (Commands, Queries, Client) of your ability to perform parallel development, but IMHO it seems like it only provides 1.5x scaling, since even without CQRS you're going to have your client loosely coupled to your domain - which is still a great benefit to be able to realize.

    Questions / Concerns

    If all the queries against an aggregate get pulled out to the Query layer, what if the only commands for that aggregate can be handled in a "pass-through" manner with the command handler directly generating events? Is it possible to have an aggregate that isn't modeled in the domain model? Are there any issues or downsides to this?

    I know in the feedback from my previous posts it was suggested that having one domain model handling both commands and queries requires implementing a lot of traversals between objects that wouldn't be necessary if it was only servicing commands. My question is, do you include traversals in your domain model based on the needs of the code, or based on the conceptual domain model? If none of my Commands require a Customer.Orders traversal, but the conceptual domain includes the concept of a set of orders belonging to a customer - should I model that in my domain model or not?

    I like the idea of using the Query side of the architecture as a place to put junior devs where the risk of them screwing something up has minimal impact. But I'm not sold on the idea that you can actually outsource it. Like I said in one of my comments on my previous post, the code to handle a query and generate DTO's is going to be dead simple, but the code to process events and apply them to the tables on the query side is going to require a significant amount of domain knowledge to know which events to listen for to update each of the de-normalized tables (and what changes need to be made when each event is processed). I don't know about everybody else, but having outsourced developers do anything that requires significant domain knowledge has never been successful in my experience. And if you need to spec out for each new query which events to listen to and what to do with each one, well that's probably going to be just as much work to document as it would be to just implement it.

    Greg made the point in a comment that doing an aggregate query like "Total Sales By Customer" is going to be inefficient if you use event sourcing but not CQRS. I don't understand why that would be the case. I imagine in that case you'd simply have a method/property on the Customer object that calculated total sales for that customer by enumerating over the Orders collection. Then the application services layer would generate DTO's off of the Customers collection that included, say, the CustomerID, CustomerName, TotalSales, or whatever the case may be. As long as you use a snapshotting implementation, I don't see why that would be any more inefficient in a DDD+Event Sourcing implementation than in a typical DDD implementation.

    Like I mentioned in my last post I still have some questions about query logic that haven't been answered yet, but before I start asking those I want to make sure I have a strong grasp on what benefits CQRS provides. My main concern with the query logic was that I know I could just toss it all into the query side, but I was concerned that I would be losing the benefits of using CQRS in the first place if I did that. I want to elaborate more on this though with some example situations in an upcoming post.
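
    To make the de-normalizer point concrete, here is a hedged Java sketch (names invented, in-memory map standing in for a read table) of the event-handling code that would maintain a "Total Sales By Customer" read model:

        import java.util.HashMap;
        import java.util.Map;

        // Hypothetical event; illustrative only.
        record OrderPlaced(String customerId, double amount) {}

        // Listens for domain events and keeps a de-normalized read model current.
        class TotalSalesByCustomerProjection {
            private final Map<String, Double> totalSales = new HashMap<>();

            // In a real system this would update a table; the domain knowledge is
            // in knowing which events affect which read-model columns.
            void on(OrderPlaced e) {
                totalSales.merge(e.customerId(), e.amount(), Double::sum);
            }

            double totalFor(String customerId) {
                return totalSales.getOrDefault(customerId, 0.0);
            }
        }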

    Read the article

  • Software Engineering Practices - Different Projects should have different maturity levels

    - by Dylan Smith
    I've had a lot of discussions at the office lately about the drastically different sets of software engineering practices used on our various projects, whether what we are doing is appropriate, and what factors you should be considering when determining what practices are most appropriate in a given context. I wanted to write up my thoughts in a little more detail on this subject, so here we go.

    If you compare any two software projects (specifically comparing their codebases) you'll often see very different levels of maturity in the software engineering practices employed. By software engineering practices, I'm specifically referring to the quality of the code and the amount of technical debt present in the project. Things such as Test Driven Development, Domain Driven Design, Behavior Driven Development, proper adherence to the SOLID principles, etc. are all practices that you would expect at the mature end of the spectrum. At the other end of the spectrum would be the quick-and-dirty solutions that are done using something like an Access Database, Excel Spreadsheet, or maybe some quick "drag-and-drop coding". For this blog post I'm going to refer to this as the Software Engineering Maturity Spectrum (SEMS).

    I believe there is a time and a place for projects at every part of that SEMS. The risks and costs associated with under-engineering solutions have been written about a million times over, so I won't bother going into them again here, but there are also (unnecessary) costs with over-engineering a solution. Sometimes putting in multiple layers, and IoC containers, and abstracting out the persistence, etc. is complete overkill if a one-time use Access database could solve the problem perfectly well.

    A lot of software developers I talk to seem to automatically jump to the very right-hand side of this SEMS in everything they do. A common rationalization I hear is that it may seem like a small trivial application today, but these things always grow and stick around for many years, then you're stuck maintaining a big ball of mud. I think this is a cop-out. Sure you can't always anticipate how an application will be used or grow over its lifetime (can you ever??), but that doesn't mean you can't manage it and evolve the underlying software architecture as necessary (even if that means having to toss the code out and re-write it at some point... maybe even multiple times).

    My thoughts are that we should be making a conscious decision around the start of each project approximately where on the SEMS we want the project to exist. I believe this decision should be based on 3 factors:

    1. Importance - How important to the business is this application? What is the impact if the application were to suddenly stop working?
    2. Complexity - How complex is the application functionality?
    3. Life-Expectancy - How long is this application expected to be in use? Is this a one-time use application, does it fill a short-term need, or is it more strategic and expected to be in use for many years to come?

    Of course this isn't an exact science. You can't say that Project X should be at the 73% mark on the SEMS and expect that to be helpful. My point is not that you need to precisely figure out what point on the SEMS the project should be at and then translate that into some prescriptive set of practices and techniques you should be using. Rather my point is that we need to be aware that there is a spectrum, and that not everything is going to be (or should be) at the edges of that spectrum; indeed a large number of projects should probably fall somewhere within the middle, and different projects should adopt a different level of software engineering practices and maturity levels based on the needs of that project.

    To give an example of this way of thinking from my day job: every couple of years my company plans and hosts a large event where ~400 of our customers all fly in to one location for a multi-day event with various activities. We have some staff whose job it is to organize the logistics of this event, which includes tracking which flights everybody is booked on, arranging for transportation to/from airports, arranging for hotel rooms, name tags, etc. The last time we arranged this event all these various pieces of data were tracked in separate spreadsheets, and reconciliation and cross-referencing of all the data was literally done by hand using printed copies of the spreadsheets and several people sitting around a table going down each list row by row.

    Obviously there is some room for improvement in how we are using software to manage the event's logistics. The next time this event occurs we plan to provide the event planning staff with a more intelligent tool (either an Excel spreadsheet or probably an Access database) that can track all the information in one location and make sure that the various pieces of data are properly linked together (so for example if a person cancels you only need to delete them from one place, and not a dozen separate lists). This solution would fall at or near the very left end of the SEMS, meaning that we will just quickly create something with very little attention paid to using mature software engineering practices. If we examine this project against the 3 criteria I listed above for determining its place within the SEMS we can see why:

    Importance - If this application were to stop working the business doesn't grind to a halt, revenue doesn't stop, and in fact our customers wouldn't even notice, since it isn't a customer facing application. The impact would simply be more work for our event planning staff as they revert back to the previous way of doing things (assuming we don't have any data loss).
    Complexity - The use cases for this project are pretty straightforward. It simply needs to manage several lists of data and link them together appropriately. Precisely the task that Access (and/or Excel) can do with minimal custom development required.
    Life-Expectancy - For this specific project we're only planning to create something to be used for the one event (we only hold these events every 2 years). If it works well this may change (see below).

    Let's assume we hack something out quickly and it works great when we plan the next event. We may decide that we want to make some tweaks to the tool and adopt it for planning all future events of this nature. In that case we should examine where the current application is on the SEMS, and make a conscious decision whether something needs to be done to move it further to the right based on the new objectives and goals for this application. This may mean scrapping the Access database and re-writing it as an actual web or windows application. In this case the life-expectancy changed, but let's assume the importance and complexity didn't change all that much. We can still probably get away with not adopting a lot of the so-called "best practices". For example, we can probably still use some of the RAD tooling available and might have an Autonomous View style design that connects directly to the database and binds to typed datasets (we might even choose to simply leave it as an Access database and continue using it; this is a decision that needs to be made on a case-by-case basis).

    At Anvil Digital we have aspirations to become a primarily product-based company. So let's say we use this tool to plan a handful of events internally, and everybody loves it. Maybe a couple years down the road we decide we want to package the tool up and sell it as a product to some of our customers. In this case the project objectives/goals change quite drastically. Now the tool becomes a source of revenue, and the impact of it suddenly stopping working is significantly less acceptable. Also, as we hold focus groups and gather feedback from customers and potential customers, there's a pretty good chance the feature-set and complexity will have to grow considerably from when we were using it only internally for planning a small handful of events for one company.

    In this fictional scenario I would expect the target on the SEMS to jump to the far right. Depending on how we implemented the previous release we may be able to refactor and evolve the existing codebase to introduce a more layered architecture, a robust set of automated tests, introduce a proper ORM and IoC container, etc. More likely in this example the jump along the SEMS would be so large we'd probably end up scrapping the current code and re-writing. Although, if it was a slow phased roll-out to only a handful of customers, where we collected feedback, made some tweaks, and then rolled out to a couple more customers, we may be able to slowly refactor and evolve the code over time rather than tossing it out and starting from scratch.

    The key point I'm trying to get across is not that you should be throwing out your code and starting from scratch all the time. But rather that you should be aware of when and how the context and objectives around a project change, and periodically re-assess where the project currently falls on the SEMS and whether that needs to be adjusted based on changing needs.

    Note: There is also the idea of "spectrum decay". Since our industry is rapidly evolving, what we currently accept as mature software engineering practices (the right end of the SEMS) probably won't be the same 3 years from now. If you have a project that you were to assess at somewhere around the 80% mark on the SEMS today, but don't touch the code for 3 years and come back and re-assess its position, it will almost certainly have changed since the right end of the SEMS will have moved farther out (maybe the project is now only around 60% due to decay).

    Developer Skills

    Another important aspect to this whole discussion is the skill sets of your architects and lead developers. When talking about the progression of a developer's skills from junior->intermediate->senior->... they generally start by only being able to write code that belongs on the left side of the SEMS, and as they gain more knowledge and skill they become capable of working at a higher and higher level along the SEMS. We all realize that the learning never stops, but eventually you'll get to the point where you can comfortably develop at the right end of the SEMS (the exact practices and techniques that translates to is constantly changing, but that's not the point here). A critical skill that I'd love to see more evidence of in our industry is the most senior guys not only being able to work at the right end of the SEMS, but more importantly being able to consciously work at any point along the SEMS as project needs dictate. An even more valuable skill is being able to make the conscious decision to move a project's code further right on the SEMS (based on changing needs) and do so in an incremental manner without having to start from scratch.

    An exercise that I'm planning to go through with all of our projects here at Anvil in the near future is to map out where I believe each project currently falls within this SEMS, where I believe the project *should* be on the SEMS based on the business needs, and for those that don't match up (i.e. most of them) come up with a plan to improve the situation.

    Read the article

  • Meta package / quick reference for string manipulation commands?

    - by Dylan McCall
    The latest version of the Scribes text editor lets us select some text, hit Alt+X, and then run an arbitrary command. For example, I can run the sort command and the selected text is replaced appropriately. This is quite useful but I am also not very well-versed in awk and the like. Is there something I can grab that will provide more of these commands like sort? Maybe a package with a whole bunch of handy, task-specific string manipulation commands?
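
    Not from the post, but a few stock filters in the same spirit as sort (all standard coreutils/POSIX tools, so Scribes can run them the same way):

        sort -u                      # sort lines and drop duplicates
        tr '[:lower:]' '[:upper:]'   # uppercase the selection
        awk '!seen[$0]++'            # drop duplicate lines, keeping original order
        fmt -w 72                    # re-wrap paragraphs to 72 columns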

    Read the article

  • Running Unity 2d - Does not work on actual system but works fine in VM

    - by Dylan
    So I'm running Ubuntu 10.10 and I cannot get Unity 2d to work with my system. This is particularly frustrating as it works just fine in all the VMs I've tested it on. I actually really like Unity and I want to get to know it (in part) before Ubuntu 11.04. I checked Synaptic and it looks like everything's there. The only thing not installed are dev libs and so on. Should I install those as well? Obviously the difference between my system and a VM is that the VM is running off a basically brand new OS. I only use VMs to test new stuff out and remake them often, so my only guess is that I have something installed on my system that is preventing Unity from running. Any thoughts?

    Read the article

  • PrairieDevCon - Slide Decks

    - by Dylan Smith
    PrairieDevCon 2010 was an awesome time. I learned a lot and had some amazing conversations. You guys even managed to convince me that NoSQL databases might actually be useful after all. For those interested, here are my slide decks from my two sessions:

        Agile In Action
        Database Change Management With Visual Studio

    Read the article
