Search Results

Search found 4647 results on 186 pages for 'customers'.


  • Append SQL table name with today's date

    - by Ricardo Deano
    Hello all. I understand that I can rename a SQL table using the following stored procedure: EXEC sp_rename 'customers', 'custs'. How would I go about appending the new name so that the renamed table has today's date as a suffix? I've attempted variations on the theme below with little success: EXEC sp_rename 'customers', 'customers +(CONVERT(VARCHAR(8),GETDATE(),3))'. Any help greatly appreciated.
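
    A hedged sketch of the usual fix: sp_rename takes its arguments as literal strings, so the suffixed name has to be built in a variable first (date style 112 is used here instead of style 3 because style 3 produces '/' characters, which make awkward table names):

        DECLARE @newname SYSNAME;
        SET @newname = 'customers_' + CONVERT(VARCHAR(8), GETDATE(), 112); -- e.g. customers_20100421
        EXEC sp_rename 'customers', @newname;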


  • SQL Query Returning Duplicate Results

    - by Jesse Bunch
    Hi, I've been working on this query for a while and I thought I had it where I wanted it, but apparently not. There are two records in the database (orders). The query should return two different rows, but instead returns two rows that have exactly the same values. I think it may be something to do with the GROUP BY or derived tables I'm using, but my eyes are tired and not seeing the problem. Can any of you help? Thanks in advance.

        SELECT orders.billerID, orders.invoiceDate, orders.txnID, orders.bName, orders.bStreet1, orders.bStreet2,
               orders.bCity, orders.bState, orders.bZip, orders.bCountry, orders.sName, orders.sStreet1,
               orders.sStreet2, orders.sCity, orders.sState, orders.sZip, orders.sCountry, orders.paymentType,
               orders.invoiceNotes, orders.pFee, orders.shipping, orders.tax, orders.reasonCode, orders.txnType,
               orders.customerID,
               customers.firstName AS firstName, customers.lastName AS lastName,
               customers.businessName AS businessName,
               orderStatus.statusName AS orderStatus,
               IFNULL(orderItems.itemTotal, 0.00) + orders.shipping + orders.tax AS orderTotal,
               IFNULL(orderItems.itemTotal, 0.00) + orders.shipping + orders.tax
                   - IFNULL(payments.totalPayments, 0.00) AS orderBalance
        FROM orders
        LEFT JOIN customers ON orders.customerID = customers.id
        LEFT JOIN orderStatus ON orders.orderStatus = orderStatus.id
        LEFT JOIN (
            SELECT orderItems.orderID, SUM(orderItems.itemPrice * orderItems.itemQuantity) AS itemTotal
            FROM orderItems
            GROUP BY orderItems.orderID
        ) orderItems ON orderItems.orderID = orders.id
        LEFT JOIN (
            SELECT payments.orderID, SUM(payments.amount) AS totalPayments
            FROM payments
            GROUP BY payments.orderID
        ) payments ON payments.orderID = orders.id
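
    For what it's worth, a hedged diagnostic rather than an answer: both derived tables GROUP BY orderID, so each can contribute at most one row per order, which means any fan-out has to come from the orders, customers or orderStatus side. Assuming orders.id and customers.id are meant to be unique, checks like these narrow it down:

        -- any order id duplicated on the base table?
        SELECT id, COUNT(*) FROM orders GROUP BY id HAVING COUNT(*) > 1;

        -- any customer id duplicated?
        SELECT id, COUNT(*) FROM customers GROUP BY id HAVING COUNT(*) > 1;

    If both come back empty, the two "identical" rows are probably two distinct orders whose selected columns happen to match, and adding orders.id to the SELECT list would make that visible.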


  • Looking for Bugtracker with specific features

    - by Thorsten Dittmar
    Hi, we're looking into bug-tracking systems at our firm. We're quite small (4 developers only). On the other hand, we have quite a large number of customers we develop individual software for. Most software is built explicitly for one customer, apart from two or three standard tools we ship. To make support easier for us (and to avoid being interrupted by phone calls all the time) we're looking for a bug tracker that must support a specific set of features. We want the customers to report bugs/feature requests/change requests themselves and be notified about these reports by email. Then we'd like to track what we've done and how much time it took, notifying the customer about that by email (private notes visible only to us must be possible). At the end of the month we'd like to bill all closed reports according to the time it took to solve/implement them. The following must be possible:

    - It must have a web-based interface where the users log in with credentials we provide. The users must not be able to create accounts themselves / we must be able to turn off such a feature.
    - We must be able to configure projects and assign customer logins to these projects. The customers must only see projects they are assigned to, not any other projects. Also, customers must not "see" other customers.
    - We would name the projects so that standard tools are listed as separate projects for each customer.
    - A monthly report must be available that we can use to get information about the requests we worked on per customer.

    I'd like to introduce a standard product like Mantis (I've played with that a little, but didn't quite figure out whether it provides all the features I listed above). The product should be open source and work in a XAMPP / Windows Server 2003 environment. Does anybody have any good suggestions?


  • display records which exist in file2 but not in file1

    - by Phoenix
    Log file1 contains records of customers (name, id, date) who visited yesterday. Log file2 contains records of customers (name, id, date) who visited today. How would you display customers who visited yesterday but not today? The constraint is: don't use an auxiliary data structure, because the files contain millions of records (so, no hashes). Is there a way to do this using Unix commands?
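
    A hedged sketch with standard Unix tools, assuming one record per line and that the date field is either absent or stripped first (sort performs an external merge sort on disk, so no in-memory structure is involved):

        sort file1 > file1.sorted             # yesterday's visitors
        sort file2 > file2.sorted             # today's visitors
        comm -23 file1.sorted file2.sorted    # lines only in file1: visited yesterday, not today

    comm -23 suppresses column 2 (lines unique to file2) and column 3 (lines common to both), leaving exactly the yesterday-only records; if the logs embed the visit date, cut or awk would need to remove it before the comparison.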


  • LINQ to SQL: making a "double IN" query crashes

    - by Alex
    I need to do the following thing:

        var a = from c in DB.Customers
                where (from t1 in DB.Table1
                       where t1.Date >= DateTime.Now
                       select t1.ID).Contains(c.ID)
                   && (from t2 in DB.Table2
                       where t2.Date >= DateTime.Now
                       select t2.ID).Contains(c.ID)
                select c;

    It doesn't want to run. I get the following error: "Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding." But when I try to run

        var a = from c in DB.Customers
                where (from t1 in DB.Table1
                       where t1.Date >= DateTime.Now
                       select t1.ID).Contains(c.ID)
                select c;

    or

        var a = from c in DB.Customers
                where (from t2 in DB.Table2
                       where t2.Date >= DateTime.Now
                       select t2.ID).Contains(c.ID)
                select c;

    it works! I'm sure that both IN queries contain some customer ids.
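
    Not an authoritative fix, but one hedged rewrite that often avoids this kind of timeout is to express each IN as an EXISTS via Any(), so the provider emits correlated EXISTS subqueries instead of two large IN lists (names taken from the question):

        var now = DateTime.Now;
        var a = from c in DB.Customers
                where DB.Table1.Any(t1 => t1.Date >= now && t1.ID == c.ID)
                   && DB.Table2.Any(t2 => t2.Date >= now && t2.ID == c.ID)
                select c;

    If it still times out, raising DB.CommandTimeout is possible, and indexes on the Date/ID columns of both tables are worth checking.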


  • How to publish internal data to the internet - as simple as possible

    - by mlarsen
    We have a .net 2-tier application where a desktop program is talking to a database. We support MS SQL Server 2000, 2005, 2008 and Oracle 9, 10 and 11. The application is sold, not as shrink-wrap, but pretty close. It is quite important for us that installation and configuration are as easy as possible, as installation instructions are usually supplied in written form to the customer's internal IT department. Our application is usually not seen as mission-critical by the IT department, so we need to keep their work down to a minimum. Now we are starting to get wishes for a web application built on top of the same data. The web application will be hosted by us and delivered as a SaaS application. Now the challenge is how to move data back and forth between the web application and the customer's internal database. As I see it, we have some requirements:

    - We must be ready to handle the situation where the customer's database is not accessible from the DMZ. I guess the easiest solution is that all communication is initiated from inside the customer's LAN.
    - As little firewall configuration as possible. The best is if we can run without any special configuration as long as outgoing traffic from the customer's LAN is not blocked. If we need something changed in the firewall, we must be able to document that the change is secure.
    - It doesn't have to be real time. Moving data in batches every ten minutes or so is OK.
    - Data moves both ways, but not the same tables, so we don't have to support merges.
    - It would be nice if we don't have to roll our own framework completely.

    Looking forward to hearing your suggestions.
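
    A hedged sketch of the pattern that usually satisfies these constraints (all names and endpoints invented): a small agent inside the customer's LAN initiates every connection outbound over HTTPS on a timer, so the database never has to be reachable from the DMZ and the firewall requirement reduces to "allow outgoing 443". Microsoft's Sync Framework is one existing framework in this space worth evaluating before rolling your own.

        // Hypothetical outline of such an agent, run as a Windows service or scheduled task.
        using System;
        using System.Net;
        using System.Threading;

        class SyncAgent
        {
            static void Main()
            {
                while (true)
                {
                    try
                    {
                        using (var web = new WebClient())
                        {
                            // push local changes, then pull pending changes; both calls are outbound
                            web.UploadString("https://saas.example.com/sync/push", ReadLocalChanges());
                            ApplyRemoteChanges(web.DownloadString("https://saas.example.com/sync/pull"));
                        }
                    }
                    catch (WebException) { /* log and retry on the next cycle */ }
                    Thread.Sleep(TimeSpan.FromMinutes(10)); // the ten-minute batch window from the question
                }
            }

            static string ReadLocalChanges() { return "<changes/>"; /* rows flagged dirty since last sync */ }
            static void ApplyRemoteChanges(string xml) { /* write into the customer's database */ }
        }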


  • Deleting a Collection with NHibernate Using the Criteria API

    - by lomaxx
    I think I know what the answer to this question is probably going to be, but I thought I'd go ahead and ask it anyway. It appears that within NHibernate, if I do something like this:

        IList<Customer> customers = Session.CreateCriteria(typeof(Customer))
            .Add(Restrictions.Eq("Name", "Steve"))
            .List<Customer>();

    and I then want to delete that list of customers, from what I can tell the only way to do it is like this:

        foreach (var customer in customers)
        {
            Session.Delete(customer);
        }

    But what I'm wondering is if there's any way I can just go:

        Session.Delete(customers);

    and delete the entire collection with a single call?
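
    For what it's worth, ISession also has a Delete overload that takes an HQL query string, which removes everything the query matches in one call:

        Session.Delete("from Customer c where c.Name = 'Steve'");

    Whether collection cascades and event listeners fire the way you need should still be verified against your mappings; in newer NHibernate versions, a DML-style "delete from Customer where Name = :name" via CreateQuery(...).ExecuteUpdate() avoids loading the entities at all.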


  • Ad distribution problem: an optimal solution?

    - by Mokuchan
    I'm asked to find a 2-approximate solution to this problem: You're consulting for an e-commerce site that receives a large number of visitors each day. For each visitor i, where i ∈ {1, 2, ..., n}, the site has assigned a value v[i], representing the expected revenue that can be obtained from this customer. Each visitor i is shown one of m possible ads A1, A2, ..., Am as they enter the site. The site wants a selection of one ad for each customer so that each ad is seen, overall, by a set of customers of reasonably large total weight. Thus, given a selection of one ad for each customer, we will define the spread of this selection to be the minimum, over j = 1, 2, ..., m, of the total weight of all customers who were shown ad Aj.

    Example: suppose there are six customers with values 3, 4, 12, 2, 4, 6, and there are m = 3 ads. Then, in this instance, one could achieve a spread of 9 by showing ad A1 to customers 1, 2, 4, ad A2 to customer 3, and ad A3 to customers 5 and 6.

    The ultimate goal is to find a selection of an ad for each customer that maximizes the spread. Unfortunately, this optimization problem is NP-hard (you don't have to prove this). So instead, give a polynomial-time algorithm that approximates the maximum spread within a factor of 2.

    The solution I found is the following:

    1. Order the visitor values in descending order.
    2. Assign the next visitor value (i.e. the visitor) to the ad with the current lowest total value.
    3. Repeat.

    This solution actually seems to always find the optimal solution, or I simply can't find a counterexample. Can you find one? Is this a non-polynomial solution and I just can't see it?
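
    For what it's worth, a counterexample does seem to exist: with m = 2 ads and values 4, 3, 3, 2, 2, the greedy rule assigns 4 to A1, then 3 to A2, then the second 3 to A2 (lowest total, now 6), then 2 to A1 (now 6), and the last 2 to either ad, giving loads 8 and 6 and a spread of 6; but the split {4, 3} and {3, 2, 2} achieves loads 7 and 7, a spread of 7. The algorithm itself is polynomial - sorting costs O(n log n) and each assignment is a minimum lookup over the m ads (O(log m) with a heap) - so it fits the exercise as an approximation; it just isn't exact.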


  • Rails - undefined method `name' for nil:NilClass

    - by sscirrus
    Hi guys, quick question. Here is my code:

        # routes
        map.resources :customers, :has_many => [:addresses, :matchings]
        map.connect ":controller/:action/:id"
        # url path: http://127.0.0.1:3000/customers/index/3

        # customers controller
        def index
          @customer = Customer.find(params[:id])
        end

        # customers view/index.html.erb
        ... <%= @customer.name %> ...

    Error: undefined method `name' for nil:NilClass. Here's my reasoning: the parameter :id is coming from my url path (i.e. we're looking for customer #3 in the above path). @customer should find that array easily, then @customer.name should produce the name, but apparently @customer is blank. Why? I assume the problem is that I'm not producing an array in my controller?
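
    A hedged note on the usual diagnosis: Customer.find(params[:id]) raises ActiveRecord::RecordNotFound rather than returning nil, so a nil @customer in the view typically means the action that rendered it never received an :id (for example, the request matched the resources collection route, whose index action gets no id). A quick way to confirm, using the nil-returning finder:

        # customers controller
        def index
          logger.debug "params[:id] = #{params[:id].inspect}"  # check what the route actually passed
          @customer = Customer.find_by_id(params[:id])         # nil instead of an exception
        end

        # customers view/index.html.erb
        <%= @customer.name if @customer %>

    If the id does arrive, find returns a single record (not an array), and the conventional RESTful route for this lookup would be GET /customers/3 handled by a show action.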


  • Why is FubuMVC new()ing up my view model in PartialForEach?

    - by Jon M
    I'm getting started with FubuMVC and I have a simple Customer - Order relationship I'm trying to display using nested partials. My domain objects are as follows:

        public class Customer
        {
            private readonly IList<Order> orders = new List<Order>();
            public string Name { get; set; }
            public IEnumerable<Order> Orders { get { return orders; } }
            public void AddOrder(Order order) { orders.Add(order); }
        }

        public class Order
        {
            public string Reference { get; set; }
        }

    I have the following controller classes:

        public class CustomersController
        {
            public IndexViewModel Index(IndexInputModel inputModel)
            {
                var customer1 = new Customer { Name = "John Smith" };
                customer1.AddOrder(new Order { Reference = "ABC123" });
                return new IndexViewModel { Customers = new[] { customer1 } };
            }
        }

        public class IndexInputModel { }

        public class IndexViewModel
        {
            public IEnumerable<Customer> Customers { get; set; }
        }

        public class IndexView : FubuPage<IndexViewModel> { }
        public class CustomerPartial : FubuControl<Customer> { }
        public class OrderPartial : FubuControl<Order> { }

    IndexView.aspx (standard html stuff trimmed):

        <div>
          <%= this.PartialForEach(x => x.Customers).Using<CustomerPartial>() %>
        </div>

    CustomerPartial.ascx:

        <%@ Control Language="C#" Inherits="FubuDemo.Controllers.Customers.CustomerPartial" %>
        <div>
          Customer Name: <%= this.DisplayFor(x => x.Name) %> <br />
          Orders: (<%= Model.Orders.Count() %>) <br />
          <%= this.PartialForEach(x => x.Orders) %>
        </div>

    OrderPartial.ascx:

        <%@ Control Language="C#" Inherits="FubuDemo.Controllers.Customers.OrderPartial" %>
        <div>
          Order <br />
          Ref: <%= this.DisplayFor(x => x.Reference) %>
        </div>

    When I view Customers/Index, I see the following:

        Customers
        Customer Name: John Smith
        Orders: (1)

    It seems that in CustomerPartial.ascx, doing Model.Orders.Count() correctly picks up that 1 order exists. However, PartialForEach(x => x.Orders) does not, as nothing is rendered for the order. If I set a breakpoint on the Order constructor, I see that it initially gets called by the Index method on CustomersController, but then it gets called by FubuMVC.Core.Models.StandardModelBinder.Bind, so it is getting re-instantiated by FubuMVC and losing the content of the Orders collection. This isn't quite what I'd expect; I would think that PartialForEach would just pass the domain object directly into the partial. Am I missing the point somewhere? What is the 'correct' way to achieve this kind of result in Fubu?


  • In MYSQL is it better to have one big table or many smaller tables

    - by user307922
    Hi All, I am making a database of my clients' customers to send email promotions to. The database will include about 12 of my clients, and each of them has an average of 2100 customers. I was wondering if it would be better to have a table in the db for each one of my clients that contains a list of their customers, or if I should just make one big table... The customers will be queried daily. I know it is a broad question but any advice would be appreciated. Cheers, Chuck
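
    For what it's worth, the usual answer here is one table with a client column: 12 clients × ~2100 customers is only ~25,000 rows, which is tiny for MySQL, and a single schema is far easier to query and maintain than 12 copies. A hedged sketch (column names invented):

        CREATE TABLE customers (
            id        INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            client_id INT UNSIGNED NOT NULL,        -- which of the ~12 clients owns this customer
            name      VARCHAR(100) NOT NULL,
            email     VARCHAR(255) NOT NULL,
            INDEX idx_client (client_id)            -- keeps the daily per-client queries fast
        );

        -- the daily query for one client's list
        SELECT name, email FROM customers WHERE client_id = 3;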


  • Template approach for a PHP application

    - by Industrial
    Hi everyone, we're in the middle of making a new e-commerce related PHP application, and we have come to the point where we have started to think about how we should solve templating for our customers' needs. What we would like to do is offer our customers the possibility of uploading/modifying templates to suit their company's profile. The initial thought is that we should not reinvent the wheel, instead letting our customers upload their templates over FTP, so basic HTML skills will be required. For those customers that want to modify/customize a template and don't have the knowledge, we offer that service as well. I know that there are a number of issues to solve before this could be considered safe, like preventing XSS and writing scripts that check each uploaded file for potential security threats, and so on. Of course, some parts will probably be too complex for the customer to modify by themselves, so maybe this approach won't apply to all template files in the frontend application. But besides that, what would be a good way to handle this?


  • Grails one-to-many mapping with joinTable

    - by intargc
    I have two domain classes. One is a "Partner", the other is a "Customer". A customer can be part of a partner, and a partner can have one or more customers:

        class Customer {
            Integer id
            String name
            static hasOne = [partner:Partner]
            static mapping = {
                partner joinTable:[name:'PartnerMap', column:'partner_id', key:'customer_id']
            }
        }

        class Partner {
            Integer id
            static hasMany = [customers:Customer]
            static mapping = {
                customers joinTable:[name:'PartnerMap', column:'customer_id', key:'partner_id']
            }
        }

    However, whenever I try to see if a customer is part of a partner, like this:

        def customers = Customer.list()
        customers.each {
            if (it.partner) {
                println "Partner!"
            }
        }

    I get the following error:

        org.springframework.dao.InvalidDataAccessResourceUsageException: could not execute query;
        SQL [select this_.customer_id as customer1_162_0_, this_.company as company162_0_,
             this_.display_name as display3_162_0_, this_.parent_customer_id as parent4_162_0_,
             this_.partner_id as partner5_162_0_, this_.server_id as server6_162_0_,
             this_.status as status162_0_, this_.vertical_market as vertical8_162_0_
             from Customer this_];
        nested exception is org.hibernate.exception.SQLGrammarException: could not execute query

    It looks as if Grails thinks partner_id is part of the Customer table, and it's not... it is in the PartnerMap table, which is supposed to map the customer_id to the corresponding partner_id. Anyone have any clue what I'm doing wrong? Edit: I forgot to mention I'm doing this with legacy database tables, so I have Partner, Customer and PartnerMap tables. PartnerMap simply has a customer_id and a partner_id field.
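
    Not a definitive fix, but one hedged workaround for legacy join tables like this: GORM's hasOne expects the foreign key to live in the other entity's table, not in a join table, so mapping PartnerMap explicitly as its own domain class sidesteps the unsupported combination:

        class PartnerMap {
            Customer customer
            Partner partner
            static mapping = {
                table 'PartnerMap'
                version false
                id composite: ['customer', 'partner']
            }
        }

        // resolving a customer's partner through the join table:
        def partner = PartnerMap.findByCustomer(customer)?.partner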


  • Writing to the DataContext

    - by user738383
    I have a function. This function takes an IEnumerable<Customer> (Customer being an entity). What the function needs to do is tell the DataContext (which has a collection of Customers as a property) that its Customers property needs to be overwritten with this passed-in IEnumerable<Customer>. I can't use assignment because DataContext.Customers cannot be assigned to, as it is read-only. I guess it's not clear what I'm asking, so I suppose I should say... how do I do that? So we have DataContext.Customers (of type System.Data.Linq.Table) which wants to be replaced with a System.Collections.Generic.IEnumerable. I can't just assign the latter to the former because DataContext's properties are read-only. But there must be a way. Edit: here's an image: Further edit: yes, this image does not feature a collection of the type 'Customer' but rather 'Connection'. It doesn't matter though; they are both created from tables within the linked SQL database. So there is a dc.Connections, a dc.Customers, a dc.Media and so on. Thanks in advance.
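
    A hedged sketch of the standard LINQ to SQL route (names invented; this assumes "overwritten" really means replacing the rows): Table<Customer> is never assigned to; instead, deletions and insertions are queued against it and then submitted:

        public void ReplaceCustomers(MyDataContext dc, IEnumerable<Customer> replacements)
        {
            dc.Customers.DeleteAllOnSubmit(dc.Customers.ToList()); // mark every existing row for deletion
            dc.Customers.InsertAllOnSubmit(replacements);          // mark the new set for insertion
            dc.SubmitChanges();                                    // one unit of work: table now "overwritten"
        }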


  • Using Entity Framework 4.0 with Code-First and POCO: How to Get Parent Object with All its Children

    - by SirEel
    I'm new to EF 4.0, so maybe this is an easy question. I've got VS2010 RC and the latest EF CTP. I'm trying to implement the "Foreign Keys" code-first example on the EF Team's Design Blog, http://blogs.msdn.com/efdesign/archive/2009/10/12/code-only-further-enhancements.aspx.

        public class Customer
        {
            public int Id { get; set; }
            public string CustomerDescription { get; set; }
            public IList<PurchaseOrder> PurchaseOrders { get; set; }
        }

        public class PurchaseOrder
        {
            public int Id { get; set; }
            public int CustomerId { get; set; }
            public Customer Customer { get; set; }
            public DateTime DateReceived { get; set; }
        }

        public class MyContext : ObjectContext
        {
            public MyContext(EntityConnection connection) : base(connection) { }
            public IObjectSet<Customer> Customers
            {
                get { return base.CreateObjectSet<Customer>(); }
            }
        }

    I use a ContextBuilder to configure MyContext:

        var builder = new ContextBuilder<MyContext>();
        var customerConfig = builder.Entity<Customer>();
        customerConfig.Property(c => c.Id).IsIdentity();
        var poConfig = builder.Entity<PurchaseOrder>();
        poConfig.Property(po => po.Id).IsIdentity();
        poConfig.Relationship(po => po.Customer)
                .FromProperty(c => c.PurchaseOrders)
                .HasConstraint((po, c) => po.CustomerId == c.Id);
        ...

    This works correctly when I'm adding new Customers, but not when I try to retrieve existing Customers. This code successfully saves a new Customer and all its child PurchaseOrders:

        using (var context = builder.Create(connection))
        {
            context.Customers.AddObject(customer);
            context.SaveChanges();
        }

    But this code only retrieves Customer objects; their PurchaseOrders lists are always empty:

        using (var context = _builder.Create(_conn))
        {
            var customers = context.Customers.ToList();
        }

    What else do I need to do to the ContextBuilder to make MyContext always retrieve all the PurchaseOrders with each Customer?
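
    A hedged workaround rather than a confirmed ContextBuilder setting: in the EF4 CTPs, POCOs only lazy-load when proxy creation is possible (virtual navigation properties), so an explicit eager load with ObjectQuery.Include is the usual way to pull children back with their parents:

        using (var context = _builder.Create(_conn))
        {
            var customers = ((ObjectSet<Customer>)context.Customers)
                .Include("PurchaseOrders")   // eager-load each customer's orders in the same query
                .ToList();
        }

    The cast is needed because Include lives on ObjectQuery/ObjectSet, not on the bare IObjectSet interface; alternatively, declaring PurchaseOrders as virtual and enabling proxy creation should allow lazy loading, at the cost of proxies leaking into the POCOs.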


  • Keeping Linq to SQL alive when using ViewModels (ASP.NET MVC)

    - by Kohan
    I have recently started using custom ViewModels (for example, CustomerViewModel):

        public class CustomerViewModel
        {
            public IList<Customer> Customers { get; set; }
            public int ProductId { get; set; }

            public CustomerViewModel(IList<Customer> customers, int productId)
            {
                this.Customers = customers;
                this.ProductId = productId;
            }

            public CustomerViewModel() { }
        }

    ...and am now passing them to my view instead of the entities themselves (for example, var custs = repository.getAllCusts(id)), as it seems good practice to do so. The problem I have encountered is that when using ViewModels, by the time it has got to the view I have lost the ability to lazy load on customers. I do not believe this was the case before using them. Is it possible to retain the ability to lazy load while still using ViewModels? Or do I have to eager load using this method? Thanks, Kohan.
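
    If the DataContext is disposed before the view renders, lazy loading has nothing to run against; one hedged fix is to eager-load while the context is still alive using LINQ to SQL's DataLoadOptions (assuming it's the customers' child collections, e.g. Orders, that no longer load):

        var options = new System.Data.Linq.DataLoadOptions();
        options.LoadWith<Customer>(c => c.Orders);   // fetch each customer's orders up front
        dc.LoadOptions = options;                    // must be set before the first query runs
        var model = new CustomerViewModel(dc.Customers.ToList(), productId);

    The alternative is keeping the DataContext alive for the whole request (per-request lifetime), which preserves lazy loading but couples the view to an open context.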


  • data ownership and performance

    - by Ami
    We're designing a new application and we ran into some architectural questions when thinking about data ownership. We broke the system down into components, for example Customer and Order. Each of these components/modules is responsible for a specific business domain; i.e., Customer deals with CRUD of customers and business processes centered around customers (registering a new customer, blocking a customer account, etc.). Each module is the owner of a set of database tables, and only that module may access them. If another module needs data that is owned by another module, it retrieves it by requesting it from that module. So far so good; the question is how to deal with scenarios such as a report that needs to show all the customers and, for each customer, all his orders. In such a case we need to get all the customers from the Customer module, iterate over them, and for each one get all the data from the Order module. Performance won't be good... Obviously it would be much better to have a stored proc join the customers table and the orders table, but that would also mean direct access to data that is owned by another module, creating coupling and dependencies that we wish to avoid. This is a simplified example; we're dealing with an enterprise application with a lot of business entities and relationships, and my goal is to keep it clean and as loosely coupled as possible. I foresee many changes to the data scheme in the future, and possibly splitting the system into several completely separate systems. I wish to have a design that would allow this to be done in a relatively easy way. Thanks!
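
    For what it's worth, one hedged pattern for exactly this tension is a separate reporting read model: each module keeps exclusive ownership of its transactional tables but publishes changes (events, or a nightly ETL) into denormalized reporting tables that are allowed to span domains (schema invented):

        -- owned by the reporting component only, populated from Customer/Order change events
        CREATE TABLE report_customer_orders (
            customer_id   INT NOT NULL,
            customer_name VARCHAR(100) NOT NULL,
            order_id      INT NOT NULL,
            order_total   DECIMAL(10,2) NOT NULL,
            PRIMARY KEY (customer_id, order_id)
        );

    Coupling then lives at the level of published events/contracts rather than shared tables, and the report becomes a plain single-table scan instead of a cross-module iteration.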


  • Group Objects by Common Key

    - by Marshmellow1328
    I have a list of Customers. Each customer has an address, and some customers may actually have the same address. My ultimate goal is to group customers based on their address. I figure I could either put the customers in some sort of list-based structure and sort on the addresses, or I could drop the objects into some sort of map that allows multiple values per key. I will now make a pretty picture:

        List: A1 - C1, A1 - C2, A2 - C3, A3 - C4, A3 - C5

        Map:  A1 -> C1, C2
              A2 -> C3
              A3 -> C4, C5

    Which option (or any others) do you see as the best solution? Are there any existing classes that will make development easier?
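
    A hedged sketch of the map option in Java (assuming a getAddress() accessor, and that whatever is used as the key has proper equals/hashCode):

        import java.util.*;

        Map<String, List<Customer>> byAddress = new HashMap<>();
        for (Customer c : customers) {
            // create the bucket on first sight of an address, then append the customer
            byAddress.computeIfAbsent(c.getAddress(), k -> new ArrayList<>()).add(c);
        }
        // byAddress.get("A1") would then hold [C1, C2] from the picture above

    As for existing classes: Guava's Multimap (or, on Java 8+, streams with Collectors.groupingBy) reduce this to a line or two.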


  • Beginner SQL question(s)

    - by unit
    I am two months into an intro SQL course, it's late at night, and I am drawing a blank. I have two tables, one customers and one orders. I have to increase the credit limit by twenty-five percent for all customers who have made two or more orders in which each order is more than 250.00. I get how to UPDATE CreditLimit * 1.25 for a customer with an order > 250, but how the hell do I get it to check that they have made two orders over 250? Second question: we are just starting on subqueries, and I am having a difficult time getting them into my skull. Another question posed by the prof of our class is to increase the credit limit of a customer who has an order that exceeds their credit limit (the credit limit is on the customers table; the order and amount are on the orders table). I then take that customer and UPDATE his CreditLimit + 1000. Thanks for any help.
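
    Since this is coursework, treat the following as a hedged sketch of the first problem rather than a drop-in answer (table and column names assumed from the question):

        UPDATE Customers
        SET    CreditLimit = CreditLimit * 1.25
        WHERE  CustomerID IN (SELECT CustomerID
                              FROM   Orders
                              WHERE  Amount > 250
                              GROUP  BY CustomerID
                              HAVING COUNT(*) >= 2);

    The subquery builds the list of qualifying customers (two or more orders, each over 250) and the outer UPDATE touches only those rows; the second exercise follows the same shape, with a subquery comparing Orders.Amount to the customer's CreditLimit.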


  • php help hiding navigation with cookie

    - by user342391
    I have these tabs in my navigation:

        <li<?php if ($thisPage=="Customers") echo " class=\"current\""; ?>><a href="/customers/">Customers</a></li>
        <li<?php if ($thisPage=="Trunks") echo " class=\"current\""; ?>><a href="/trunks/">Trunks</a></li>
        <li<?php if ($thisPage=="Settings") echo " class=\"current\""; ?>><a href="/settings/">Settings</a></li>

    and I only want to show them when the admin is logged in:

        if ($_COOKIE['custid'] == "admin") {
            echo "Customers";
            echo "Trunks";
            echo "Settings";
        }

    How can I combine these two scripts?
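
    One hedged way to combine them is to wrap the list items in the cookie check so they only render for the admin; note that a cookie is client-controlled, so the pages themselves (/customers/, /trunks/, /settings/) still need their own server-side authorization check:

        <?php if (isset($_COOKIE['custid']) && $_COOKIE['custid'] == "admin") { ?>
          <li<?php if ($thisPage == "Customers") echo ' class="current"'; ?>><a href="/customers/">Customers</a></li>
          <li<?php if ($thisPage == "Trunks") echo ' class="current"'; ?>><a href="/trunks/">Trunks</a></li>
          <li<?php if ($thisPage == "Settings") echo ' class="current"'; ?>><a href="/settings/">Settings</a></li>
        <?php } ?>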


  • MySQL query with JOINS and GROUP BY

    - by user1854049
    I'm building a MySQL query but I can't seem to get it right. I have four tables: customers, orders, sales_rates and purchase_rates. There is a 1:n relation 'customernr' between customers and orders, a 1:n relation 'ordernr' between orders and sales_rates, and a 1:n relation 'ordernr' between orders and purchase_rates. What I would like to do is produce an output of all customers with their total purchase and sales amounts. So far I have the following query:

        SELECT c.customernr, c.customer_name,
               SUM(sr.sales_price)    AS sales_price,
               SUM(pr.purchase_price) AS purchase_price
        FROM orders o, customers c, sales_rates sr, purchase_rates pr
        WHERE o.customernr = c.customernr
          AND o.ordernr = sr.ordernr
          AND o.ordernr = pr.ordernr
        GROUP BY c.customernr

    The results for sales_price and purchase_price are far too high; I seem to be getting double counts. What am I doing wrong? Is it possible to perform this in a single query? Thanks for your response!
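
    The double counting is the classic join fan-out: each order row is multiplied by (its rows in sales_rates) × (its rows in purchase_rates) before SUM runs. A hedged rewrite aggregates each rate table to one row per order first, then joins:

        SELECT c.customernr, c.customer_name,
               SUM(sr.sales_total)    AS sales_price,
               SUM(pr.purchase_total) AS purchase_price
        FROM customers c
        JOIN orders o ON o.customernr = c.customernr
        LEFT JOIN (SELECT ordernr, SUM(sales_price) AS sales_total
                   FROM sales_rates GROUP BY ordernr) sr ON sr.ordernr = o.ordernr
        LEFT JOIN (SELECT ordernr, SUM(purchase_price) AS purchase_total
                   FROM purchase_rates GROUP BY ordernr) pr ON pr.ordernr = o.ordernr
        GROUP BY c.customernr, c.customer_name;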


  • Software Engineering Practices &ndash; Different Projects should have different maturity levels

    - by Dylan Smith
    I’ve had a lot of discussions at the office lately about the drastically different sets of software engineering practices used on our various projects, if what we are doing is appropriate, and what factors should you be considering when determining what practices are most appropriate in a given context. I wanted to write up my thoughts in a little more detail on this subject, so here we go: If you compare any two software projects (specifically comparing their codebases) you’ll often see very different levels of maturity in the software engineering practices employed. By software engineering practices, I’m specifically referring to the quality of the code and the amount of technical debt present in the project. Things such as Test Driven Development, Domain Driven Design, Behavior Driven Development, proper adherence to the SOLID principles, etc. are all practices that you would expect at the mature end of the spectrum. At the other end of the spectrum would be the quick-and-dirty solutions that are done using something like an Access Database, Excel Spreadsheet, or maybe some quick “drag-and-drop coding”. For this blog post I’m going to refer to this as the Software Engineering Maturity Spectrum (SEMS). I believe there is a time and a place for projects at every part of that SEMS. The risks and costs associated with under-engineering solutions have been written about a million times over so I won’t bother going into them again here, but there are also (unnecessary) costs with over-engineering a solution. Sometimes putting multiple layers, and IoC containers, and abstracting out the persistence, etc is complete overkill if a one-time use Access database could solve the problem perfectly well. A lot of software developers I talk to seem to automatically jump to the very right-hand side of this SEMS in everything they do. A common rationalization I hear is that it may seem like a small trivial application today, but these things always grow and stick around for many years, then you’re stuck maintaining a big ball of mud. I think this is a cop-out. Sure you can’t always anticipate how an application will be used or grow over its lifetime (can you ever??), but that doesn’t mean you can’t manage it and evolve the underlying software architecture as necessary (even if that means having to toss the code out and re-write it at some point…maybe even multiple times). My thoughts are that we should be making a conscious decision around the start of each project approximately where on the SEMS we want the project to exist. I believe this decision should be based on 3 factors: 1. Importance - How important to the business is this application? What is the impact if the application were to suddenly stop working? 2. Complexity - How complex is the application functionality? 3. Life-Expectancy - How long is this application expected to be in use? Is this a one-time use application, does it fill a short-term need, or is it more strategic and is expected to be in-use for many years to come? Of course this isn’t an exact science. You can’t say that Project X should be at the 73% mark on the SEMS and expect that to be helpful. My point is not that you need to precisely figure out what point on the SEMS the project should be at then translate that into some prescriptive set of practices and techniques you should be using. 
Rather my point is that we need to be aware that there is a spectrum, and that not everything is going to be (or should be) at the edges of that spectrum, indeed a large number of projects should probably fall somewhere within the middle; and different projects should adopt a different level of software engineering practices and maturity levels based on the needs of that project. To give an example of this way of thinking from my day job: Every couple of years my company plans and hosts a large event where ~400 of our customers all fly in to one location for a multi-day event with various activities. We have some staff whose job it is to organize the logistics of this event, which includes tracking which flights everybody is booked on, arranging for transportation to/from airports, arranging for hotel rooms, name tags, etc The last time we arranged this event all these various pieces of data were tracked in separate spreadsheets and reconciliation and cross-referencing of all the data was literally done by hand using printed copies of the spreadsheets and several people sitting around a table going down each list row by row. Obviously there is some room for improvement in how we are using software to manage the event’s logistics. The next time this event occurs we plan to provide the event planning staff with a more intelligent tool (either an Excel spreadsheet or probably an Access database) that can track all the information in one location and make sure that the various pieces of data are properly linked together (so for example if a person cancels you only need to delete them from one place, and not a dozen separate lists). This solution would fall at or near the very left end of the SEMS meaning that we will just quickly create something with very little attention paid to using mature software engineering practices. If we examine this project against the 3 criteria I listed above for determining it’s place within the SEMS we can see why: Importance – If this application were to stop working the business doesn’t grind to a halt, revenue doesn’t stop, and in fact our customers wouldn’t even notice since it isn’t a customer facing application. The impact would simply be more work for our event planning staff as they revert back to the previous way of doing things (assuming we don’t have any data loss). Complexity – The use cases for this project are pretty straightforward. It simply needs to manage several lists of data, and link them together appropriately. Precisely the task that access (and/or Excel) can do with minimal custom development required. Life-Expectancy – For this specific project we’re only planning to create something to be used for the one event (we only hold these events every 2 years). If it works well this may change (see below). Let’s assume we hack something out quickly and it works great when we plan the next event. We may decide that we want to make some tweaks to the tool and adopt it for planning all future events of this nature. In that case we should examine where the current application is on the SEMS, and make a conscious decision whether something needs to be done to move it further to the right based on the new objectives and goals for this application. This may mean scrapping the access database and re-writing it as an actual web or windows application. In this case, the life-expectancy changed, but let’s assume the importance and complexity didn’t change all that much. 
We can still probably get away with not adopting a lot of the so-called “best practices”. For example, we can probably still use some of the RAD tooling available and might have an Autonomous View style design that connects directly to the database and binds to typed datasets (we might even choose to simply leave it as an access database and continue using it; this is a decision that needs to be made on a case-by-case basis). At Anvil Digital we have aspirations to become a primarily product-based company. So let’s say we use this tool to plan a handful of events internally, and everybody loves it. Maybe a couple years down the road we decide we want to package the tool up and sell it as a product to some of our customers. In this case the project objectives/goals change quite drastically. Now the tool becomes a source of revenue, and the impact of it suddenly stopping working is significantly less acceptable. Also as we hold focus groups, and gather feedback from customers and potential customers there’s a pretty good chance the feature-set and complexity will have to grow considerably from when we were using it only internally for planning a small handful of events for one company. In this fictional scenario I would expect the target on the SEMS to jump to the far right. Depending on how we implemented the previous release we may be able to refactor and evolve the existing codebase to introduce a more layered architecture, a robust set of automated tests, introduce a proper ORM and IoC container, etc. More likely in this example the jump along the SEMS would be so large we’d probably end up scrapping the current code and re-writing. Although, if it was a slow phased roll-out to only a handful of customers, where we collected feedback, made some tweaks, and then rolled out to a couple more customers, we may be able to slowly refactor and evolve the code over time rather than tossing it out and starting from scratch. The key point I’m trying to get across is not that you should be throwing out your code and starting from scratch all the time. But rather that you should be aware of when and how the context and objectives around a project changes and periodically re-assess where the project currently falls on the SEMS and whether that needs to be adjusted based on changing needs. Note: There is also the idea of “spectrum decay”. Since our industry is rapidly evolving, what we currently accept as mature software engineering practices (the right end of the SEMS) probably won’t be the same 3 years from now. If you have a project that you were to assess at somewhere around the 80% mark on the SEMS today, but don’t touch the code for 3 years and come back and re-assess its position, it will almost certainly have changed since the right end of the SEMS will have moved farther out (maybe the project is now only around 60% due to decay). Developer Skills Another important aspect to this whole discussion is around the skill sets of your architects and lead developers. When talking about the progression of a developers skills from junior->intermediate->senior->… they generally start by only being able to write code that belongs on the left side of the SEMS and as they gain more knowledge and skill they become capable of working at a higher and higher level along the SEMS. 
We all realize that the learning never stops, but eventually you’ll get to the point where you can comfortably develop at the right-end of the SEMS (the exact practices and techniques that translates to is constantly changing, but that’s not the point here). A critical skill that I’d love to see more evidence of in our industry is the most senior guys not only being able to work at the right-end of the SEMS, but more importantly be able to consciously work at any point along the SEMS as project needs dictate. An even more valuable skill would be if you could make the conscious decision to move a projects code further right on the SEMS (based on changing needs) and do so in an incremental manner without having to start from scratch. An exercise that I’m planning to go through with all of our projects here at Anvil in the near future is to map out where I believe each project currently falls within this SEMS, where I believe the project *should* be on the SEMS based on the business needs, and for those that don’t match up (i.e. most of them) come up with a plan to improve the situation.


  • How to Avoid Your Next 12-Month Science Project

    - by constant
    While most customers immediately understand how the magic of Oracle's Hybrid Columnar Compression, intelligent storage servers and flash memory make Exadata uniquely powerful against home-grown database systems, some people think that Exalogic is nothing more than a bunch of x86 servers, a storage appliance and an InfiniBand (IB) network, built into a single rack. After all, isn't this exactly what the High Performance Computing (HPC) world has been doing for decades? On the surface, this may be true. And some people tried exactly that: they tried to put together their own version of Exalogic, but then they discovered there's a lot more to building a system than buying hardware and assembling it together. IT is not Ikea. Why is that so? Could it be there's more going on behind the scenes than merely putting together a bunch of servers, a storage array and an InfiniBand network into a rack? Let's explore some of the special sauce that makes Exalogic unique and un-copyable, so you can save yourself from your next 6- to 12-month science project that distracts you from doing real work that adds value to your company.

    Engineering Systems is Hard Work!

    The backbone of Exalogic is its InfiniBand network: 4 times better bandwidth than even 10 Gigabit Ethernet, and only about a tenth of its latency. What a potential for increased scalability and throughput across the middleware and database layers! But InfiniBand is a beast that needs to be tamed: it is true that Exalogic uses a standard, open-source Open Fabrics Enterprise Distribution (OFED) InfiniBand driver stack. Unfortunately, this software has been developed by the HPC community with fastest speed in mind (which is good) but, despite the name, not many other enterprise-class requirements are included (which is less good). Here are some of the improvements that Oracle's InfiniBand development team had to add to the OFED stack to make it enterprise-ready, simply because typical HPC users didn't have the need to implement them:

    - More than 100 bug fixes in the pieces that were not related to the Message Passing Interface Protocol (MPI), which is the protocol that HPC users use most of the time, but which is less useful in the enterprise.
    - Performance optimizations and tuning across the whole IB stack: from switches, Host Channel Adapters (HCAs) and drivers to low-level protocols, middleware and applications. Yes, even the standard HPC IB stack could be improved in terms of performance.
    - Ethernet over IB (EoIB): Exalogic uses InfiniBand internally to reach high performance, but it needs to play nicely with the datacenters around it. That's why Oracle added Ethernet over InfiniBand technology, which allows for creating many virtual 10GbE adapters inside Exalogic's nodes that are aggregated and connected to Exalogic's IB gateway switches. While this is an open standard, it's up to the vendor to implement it. In this case, Oracle integrated the EoIB stack with Oracle's own IB-to-10GbE gateway switches, and made it fully virtualized from the beginning. This means that Exalogic customers can completely rewire their server infrastructure inside the rack without having to physically pull or plug a single cable - a must-have for every cloud deployment. Anybody who wants to match this level of integration would need to add an InfiniBand switch development team to their project. Or just buy Oracle's gateway switches, which are conveniently shipped with a whole server infrastructure attached!
    - IPv6 support for InfiniBand's Sockets Direct Protocol (SDP), Reliable Datagram Sockets (RDS), TCP/IP over IB (IPoIB) and EoIB protocols. Because no IPv6 = not very enterprise-class.
    - HA capability for SDP. High Availability is not a big requirement for HPC, but for enterprise-class application servers it is. Every node in Exalogic's InfiniBand network is connected twice for redundancy. If any cable or port or HCA fails, there's always a replacement link ready to take over. This requires extra magic at the protocol level to work. So in addition to Weblogic's failover capabilities, Oracle implemented IB automatic path migration at the SDP level to avoid unnecessary failover operations at the middleware level.
    - Security, for example spoof-protection. Another feature that is less important for traditional users of InfiniBand, but very important for enterprise customers.
    - InfiniBand Partitioning and Quality-of-Service (QoS): one of the first questions we get from customers about Exalogic is "How can we implement multi-tenancy?" The answer is to partition your IB network, which effectively creates many networks that work independently and that are protected at the lowest networking layer possible. In addition to that, QoS allows administrators to prioritize traffic flow in multi-tenancy environments so they can keep their service levels where it matters most.
    - Resilient IB Fabric Management: InfiniBand is a self-managing network, so a lot of the magic lies in coming up with the right topology and in teaching the subnet manager how to properly discover and manage the network. Oracle's InfiniBand switches come with pre-integrated, highly available fabric management with seamless integration into Oracle Enterprise Manager Ops Center.

    In short: Oracle elevated the OFED InfiniBand stack into an enterprise-class networking infrastructure. Many years and multiple teams of manpower went into the above improvements - this is something you can only get from Oracle, because no other InfiniBand vendor can give you these features across the whole stack!

    Exabus: Because it's not About the Size of Your Network, it's How You Use it!

    So let's assume that you somehow were able to get your hands on an enterprise-class IB driver stack. Or maybe you don't care and are just happy with the standard OFED one? Anyway, the next step is to actually leverage that InfiniBand performance. Here are the choices:

    - Use traditional TCP/IP on top of the InfiniBand stack, or
    - Develop your own integration between your middleware and the lower-level (but faster) InfiniBand protocols.

    While more bandwidth is always a good thing, it's actually the low latency that enables superior performance for your applications when running on any networking infrastructure: the lower the latency, the faster the response travels through the network and the more transactions you can close per second. The reason why InfiniBand is such a low-latency technology is that it gets rid of most, if not all, of your traditional networking protocol stack: data is literally beamed from one region of RAM in one server into another region of RAM in another server with no kernel/drivers/UDP/TCP or other networking stack overhead involved! Which makes option 1 a no-go: adding TCP/IP on top of InfiniBand is like adding training wheels to your racing bike. It may be OK in the beginning and for development, but it's not quite the performance IB was meant to deliver. Which only leaves option 2: integrating your middleware with fast, low-level InfiniBand protocols.

    And this is what Exalogic's "Exabus" technology is all about. Here are a few Exabus features that help applications leverage the performance of InfiniBand in Exalogic:

    - RDMA and SDP integration at the JDBC driver level (SDP), for Oracle Weblogic (SDP), Oracle Coherence (RDMA), Oracle Tuxedo (RDMA) and the new Oracle Traffic Director (RDMA) on Exalogic. Using these protocols, middleware components can communicate a lot faster with each other and with the Oracle database than by using standard networking protocols.
    - Seamless integration of Ethernet over InfiniBand from Exalogic's gateway switches into the OS.
    - Oracle Weblogic optimizations for handling massive amounts of parallel transactions. Because if you have an 8-lane Autobahn, you also need to improve your ramps so you can feed it with many cars in parallel.
    - Integration of Weblogic with Oracle Exadata for faster performance, optimized session management and failover.

    As you see, "Exabus" is Oracle's word for describing all the InfiniBand enhancements Oracle put into Exalogic: OFED stack enhancements, protocols for faster IB access, and InfiniBand support and optimizations at the virtualization and middleware level. All working together to deliver the full potential of InfiniBand performance. Who else has 100% control over their middleware so they can develop their own low-level protocol integration with InfiniBand? Even if you take an open source approach, you're looking at years of development work to create, test and support a whole new networking technology in your middleware!

    The Extras: Less Hassle, More Productivity, Faster Time to Market

    And then there are the other advantages of Engineered Systems that are true for Exalogic the same as they are for every other Engineered System:

    - One simple purchasing process: no headaches due to endless RFPs and no "Will X work with Y?" uncertainties.
    - Everything has been engineered together: all kinds of bugs and problems have already been fixed at the design level that would have only manifested themselves after you had built the system from scratch.
    - Everything is built, tested and integrated at the factory level. Less integration pain for you, faster time to market.
    - Every Exalogic machine world-wide is identical to Oracle's own machines in the lab: instant replication of any problems you may encounter, faster time to resolution.
    - Simplified patching, management and operations.
    - One throat to choke: imagine finger-pointing hell for systems that have been put together using several different vendors. Oracle's Engineered Systems have a single phone number that customers can call to get their problems solved.

    For more business-centric values, read The Business Value of Engineered Systems.

    Conclusion: Buy Exalogic, or Get Ready for a 6-12 Month Science Project

    And here's the reason why it's not easy to "build your own Exalogic": there's a lot of work required to make such a system fly. In fact, anybody who is starting to "just put together a bunch of servers and an InfiniBand network" is really looking at a 6-12 month science project. And the outcome is likely to not be very enterprise-class. And it won't have Exalogic's performance either. Because building an Engineered System is literally rocket science: it takes a lot of time, effort, resources and many iterations of design/test/analyze/fix to build such a system. That's why InfiniBand has been reserved for HPC scientists for such a long time.
    And only Oracle can bring the power of InfiniBand in an enterprise-class, ready-to-use, pre-integrated version to customers, without the develop/integrate/support pain. For more details, check the new Exalogic overview white paper, which was updated only recently.

    P.S.: Thanks to my colleagues Ola, Paul, Don and Andy for helping me put together this article!


  • DiscountASP.NET Launches SQL Server Profiling as a Service

    - by wisecarver
    DiscountASP.NET announces an enhancement to our SQL Server hosting with the launch of SQL Server Profiling as a service. SQL Profiler is a powerful tool that allows the application and database developer to troubleshoot general SQL locking problems and performance issues, and to perform database tuning. With our SQL Profiling as a Service, customers can schedule a database trace at a specific time of their choosing, offering a new way to help our customers troubleshoot. For more information, visit: http://www...(read more)


