Search Results

Search found 5375 results on 215 pages for 'jeremy person'.


  • Silverlight Navigation using Mvvm-light(oobe)+MEF?

    - by deliberative assembly
What is the best approach for navigating between UserControls/Pages in an out-of-browser experience? I'm fairly new to Silverlight and even newer to the MVVM pattern. How well does the Navigation Framework integrate with the MVVM Light Toolkit? A snippet for general application flow control with the two would be great. The plan was to use the Navigation Framework for general flow, or to use Jeremy Likness's approach to region management (http://csharperimage.jeremylikness.com/search/label/regions) and swap out regions as needed. I've seen a few places mention replacing the visual root, but that sounded like a hack to me. Any advice, snippets, or a nudge in the general direction would be greatly appreciated. Thank you.
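    For illustration, here is a minimal sketch of view-model-first navigation with MVVM Light's Messenger, which avoids replacing the visual root. The NavigateToMessage and MainViewModel types are illustrative, not part of the toolkit; it assumes a ContentControl in the shell bound to CurrentRegion, with one DataTemplate per view-model type:

    ```csharp
    using GalaSoft.MvvmLight;
    using GalaSoft.MvvmLight.Messaging;

    // Hypothetical message type used to request a region change.
    public class NavigateToMessage
    {
        public ViewModelBase Target { get; set; }
    }

    public class MainViewModel : ViewModelBase
    {
        private ViewModelBase _currentRegion;

        public MainViewModel()
        {
            // Swap the active region whenever any view model requests navigation.
            Messenger.Default.Register<NavigateToMessage>(this, m => CurrentRegion = m.Target);
        }

        // Bind a ContentControl in the shell to this property; a DataTemplate
        // per view-model type renders the matching UserControl.
        public ViewModelBase CurrentRegion
        {
            get { return _currentRegion; }
            set { _currentRegion = value; RaisePropertyChanged("CurrentRegion"); }
        }
    }

    // Any view model can then trigger a transition, for example:
    // Messenger.Default.Send(new NavigateToMessage { Target = new SettingsViewModel() });
    ```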

    Read the article

  • ASP.NET XML as Datasource error

    - by nekko
    Hello, I am trying to use an XML file as a data source in ASP.NET and then display it in a GridView. The XML has the following format: <?xml version="1.0" encoding="UTF-8"?> <people type="array"> <person> <id type="integer"></id> <first_name></first_name> <last_name></last_name> <title></title> <company></company> <tags> </tags> <locations> <location primary="false" label="work"> <email></email> <website></website> <phone></phone> <cell></cell> <fax></fax> <street_1/> <street_2/> <city/> <state/> <postal_code/> <country/> </location> </locations> <notes></notes> <created_at></created_at> <updated_at></updated_at> </person> </people> When I try to run the simple page I receive the following error: Server Error in '/' Application. The data source for GridView with id 'GridView1' did not have any properties or attributes from which to generate columns. Ensure that your data source has content. Here is my page code: <%@ Page Language="vb" AutoEventWireup="false" CodeBehind="Default.aspx.vb" Inherits="shout._Default" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" > <head runat="server"> <title>Untitled Page</title> </head> <body> <form id="form1" runat="server"> <div> <asp:XmlDataSource ID="XmlDataSource1" runat="server" DataFile="~/App_Data/people.xml" XPath="people/person"></asp:XmlDataSource> <asp:GridView ID="GridView1" runat="server" AllowPaging="True" DataSourceID="XmlDataSource1"> </asp:GridView> </div> </form> </body> </html> Please help. Thanks in advance.
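    For context on why this error appears: when a GridView auto-generates columns from an XmlDataSource, it only sees XML attributes of the selected nodes, and the person elements above keep all their data in child elements, so there is nothing to generate columns from. One hedged workaround, sketched in C# for brevity (the page above uses VB, but the translation is direct), is to project the file with LINQ to XML and bind the result; the selected fields mirror the markup above:

    ```csharp
    using System;
    using System.Linq;
    using System.Xml.Linq;

    public partial class _Default : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Load the same file the XmlDataSource pointed at.
            XDocument doc = XDocument.Load(Server.MapPath("~/App_Data/people.xml"));

            // Project child-element values into properties the GridView can see.
            var people = (from p in doc.Descendants("person")
                          select new
                          {
                              Id = (string)p.Element("id"),
                              FirstName = (string)p.Element("first_name"),
                              LastName = (string)p.Element("last_name"),
                              Company = (string)p.Element("company")
                          }).ToList();

            GridView1.DataSource = people;
            GridView1.DataBind();
        }
    }
    ```

    Alternatively, restructuring the XML so each person's values are attributes (e.g. <person id="1" first_name="..."/>) would let the original XmlDataSource markup work unchanged.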

    Read the article

  • ASP.NET MVC 2: "value" passed to a DataAnnotation attribute is null when an incorrect date is submitted

    - by goldenelf2
    Hello to all! This is my first question here on Stack Overflow. I need help on a problem I encountered during an ASP.NET MVC2 project I am currently working on. I should note that I'm relatively new to MVC design, so please bear with my ignorance. Here goes: I have a regular form on which various details about a person are shown. One of them is "Date of Birth". My view is like this: <div class="form-items"> <%: Html.Label("DateOfBirth", "Date of Birth:") %> <%: Html.EditorFor(m => m.DateOfBirth) %> <%: Html.ValidationMessageFor(m => m.DateOfBirth) %> </div> I'm using an editor template I found, to show only the date correctly: <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<System.DateTime?>"%> <%= Html.TextBox("", (Model.HasValue ? Model.Value.ToShortDateString() : string.Empty))%> I used the LinqToSql designer to create my model from an SQL database. In order to do some validation I made a partial class Person to extend the one created by the designer (under the same namespace): [MetadataType(typeof(IPerson))] public partial class Person : IPerson { //To create buddy class } public interface IPerson { [Required(ErrorMessage="Please enter a name")] string Name { get; set; } [Required(ErrorMessage="Please enter a surname")] string Surname { get; set; } [Birthday] DateTime? DateOfBirth { get; set; } [Email(ErrorMessage="Please enter a valid email")] string Email { get; set; } } I want to make sure that a correct date is entered, so I created a custom DataAnnotation attribute in order to validate the date: public class BirthdayAttribute : ValidationAttribute { private const string _errorMessage = "Please enter a valid date"; public BirthdayAttribute() : base(_errorMessage) { } public override bool IsValid(object value) { if (value == null) { return true; } DateTime temp; bool result = DateTime.TryParse(value.ToString(), out temp); return result; } } Well, my problem is this. Once I enter an incorrect date in the DateOfBirth field, no custom message is displayed, even if I use the attribute like [Birthday(ErrorMessage=".....")]. The message displayed is the one returned from the db, i.e. "The value '32/4/1967' is not valid for DateOfBirth.". I tried to set some breakpoints around the code, and found out that the "value" in the attribute is always null when the date is incorrect, but always gets a value if the date is in the correct format. The same (value == null) is passed also in the code generated by the designer. This thing is driving me nuts. Please, can anyone help me deal with this? Also, can someone tell me where exactly the point of entry from the view to the database is? Is it related to the model binder? I wanted to check exactly what value is passed once I press the "submit" button. Thank you.
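    A note on what is likely happening here: that message comes from the default model binder, not the database. When the binder cannot convert "32/4/1967" to a DateTime?, it adds its own conversion error to ModelState ("The value ... is not valid for DateOfBirth.") and leaves the property null, which is the null the attribute then receives. A hedged sketch of one way around it is a custom binder for DateTime? that parses the attempted value itself and supplies its own message (the class name and message text are illustrative):

    ```csharp
    using System;
    using System.Web.Mvc;

    // Register in Application_Start with:
    // ModelBinders.Binders.Add(typeof(DateTime?), new NullableDateTimeBinder());
    public class NullableDateTimeBinder : IModelBinder
    {
        public object BindModel(ControllerContext controllerContext,
                                ModelBindingContext bindingContext)
        {
            var result = bindingContext.ValueProvider.GetValue(bindingContext.ModelName);
            if (result == null || string.IsNullOrEmpty(result.AttemptedValue))
                return null;

            DateTime parsed;
            if (DateTime.TryParse(result.AttemptedValue, out parsed))
                return (DateTime?)parsed;

            // Surface a custom message instead of the default conversion error.
            bindingContext.ModelState.AddModelError(bindingContext.ModelName,
                "Please enter a valid date");
            return null;
        }
    }
    ```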

    Read the article

  • Join 2 child tables with a parent table without duplicates

    - by user1847866
    Problem: I have 3 tables: People, Phones and Emails. Each person has a UNIQUE ID, and each person can have multiple numbers or multiple emails. Simplified, it looks like this:

    +---------+----------+
    | ID      | Name     |
    +---------+----------+
    | 5000003 | Amy      |
    | 5000004 | George   |
    | 5000005 | John     |
    | 5000008 | Steven   |
    | 8000009 | Ashley   |
    +---------+----------+

    +---------+---------+
    | ID      | Number  |
    +---------+---------+
    | 5000005 | 5551234 |
    | 5000005 | 5154324 |
    | 5000008 | 2487312 |
    | 8000009 | 7134584 |
    | 5000008 | 8451384 |
    +---------+---------+

    +---------+------------------+
    | ID      | Email            |
    +---------+------------------+
    | 5000005 | [email protected] |
    | 5000005 | [email protected] |
    | 5000008 | [email protected] |
    | 5000008 | [email protected] |
    | 5000008 | [email protected] |
    | 8000009 | [email protected] |
    | 5000004 | [email protected] |
    +---------+------------------+

    I am trying to join them together without duplicates. It works great when I join only Emails with People or only Phones with People:

    SELECT People.Name, People.ID, Phones.Number
    FROM People
    LEFT OUTER JOIN Phones ON People.ID = Phones.ID
    ORDER BY Name, ID, Number;

    +--------+---------+----------+
    | Name   | ID      | Number   |
    +--------+---------+----------+
    | Steven | 5000008 | 8451384  |
    | Steven | 5000008 | 24887312 |
    | John   | 5000005 | 5551234  |
    | John   | 5000005 | 5154324  |
    | George | 5000004 | NULL     |
    | Ashley | 8000009 | 7134584  |
    | Amy    | 5000003 | NULL     |
    +--------+---------+----------+

    SELECT People.Name, People.ID, Emails.Email
    FROM People
    LEFT OUTER JOIN Emails ON People.ID = Emails.ID
    ORDER BY Name, ID, Email;

    +--------+---------+------------------+
    | Name   | ID      | Email            |
    +--------+---------+------------------+
    | Steven | 5000008 | [email protected] |
    | Steven | 5000008 | [email protected] |
    | Steven | 5000008 | [email protected] |
    | John   | 5000005 | [email protected] |
    | John   | 5000005 | [email protected] |
    | George | 5000004 | [email protected] |
    | Ashley | 8000009 | [email protected] |
    | Amy    | 5000003 | NULL             |
    +--------+---------+------------------+

    However, when I join both Emails and Phones on People, I get this:

    SELECT People.Name, People.ID, Phones.Number, Emails.Email
    FROM People
    LEFT OUTER JOIN Phones ON People.ID = Phones.ID
    LEFT OUTER JOIN Emails ON People.ID = Emails.ID
    ORDER BY Name, ID, Number, Email;

    +--------+---------+----------+------------------+
    | Name   | ID      | Number   | Email            |
    +--------+---------+----------+------------------+
    | Steven | 5000008 | 8451384  | [email protected] |
    | Steven | 5000008 | 8451384  | [email protected] |
    | Steven | 5000008 | 8451384  | [email protected] |
    | Steven | 5000008 | 24887312 | [email protected] |
    | Steven | 5000008 | 24887312 | [email protected] |
    | Steven | 5000008 | 24887312 | [email protected] |
    | John   | 5000005 | 5551234  | [email protected] |
    | John   | 5000005 | 5551234  | [email protected] |
    | John   | 5000005 | 5154324  | [email protected] |
    | John   | 5000005 | 5154324  | [email protected] |
    | George | 5000004 | NULL     | [email protected] |
    | Ashley | 8000009 | 7134584  | [email protected] |
    | Amy    | 5000003 | NULL     | NULL             |
    +--------+---------+----------+------------------+

    What happens is: if a person has 2 numbers, all of his emails are shown twice (and they cannot be sorted, which means they cannot be removed by @last). What I want: bottom line, playing with @last, I want to end up with something like the table below, but @last won't work if I don't arrange the ORDER BY columns in the right way, and that seems like a big problem: ordering the email column. As seen from the example above, Steven has 2 phone numbers and 3 emails. The join of Emails with Numbers happens for each email, producing duplicated values that cannot be sorted away (ORDER BY does not help with them).

    THIS IS WHAT I WANT:

    +--------+---------+----------+------------------+
    | Name   | ID      | Number   | Email            |
    +--------+---------+----------+------------------+
    | Steven | 5000008 | 8451384  | [email protected] |
    |        |         | 24887312 | [email protected] |
    |        |         |          | [email protected] |
    | John   | 5000005 | 5551234  | [email protected] |
    |        |         | 5154324  | [email protected] |
    | George | 5000004 | NULL     | [email protected] |
    | Ashley | 8000009 | 7134584  | [email protected] |
    | Amy    | 5000003 | NULL     | NULL             |
    +--------+---------+----------+------------------+

    Now I'm told that it's best to keep emails and numbers in separate tables, because one person can have many emails. So if it's such a common thing to do, why isn't there a simple solution? I'd be happy with a PHP solution as well. What I know how to do by now satisfies the requirement but is not as pretty: if I do it with GROUP_CONCAT I get a satisfactory result, but it doesn't look as nice, and I can't put an "Email type = work" next to each address.

    SELECT People.Name, GROUP_CONCAT(DISTINCT Phones.Number), GROUP_CONCAT(DISTINCT Emails.Email)
    FROM People
    LEFT OUTER JOIN Phones ON People.ID = Phones.ID
    LEFT OUTER JOIN Emails ON People.ID = Emails.ID
    GROUP BY Name;

    +--------+--------------------------------------+----------------------------------------------------+
    | Name   | GROUP_CONCAT(DISTINCT Phones.Number) | GROUP_CONCAT(DISTINCT Emails.Email)                |
    +--------+--------------------------------------+----------------------------------------------------+
    | Steven | 8451384,24887312                     | [email protected],[email protected],[email protected] |
    | John   | 5551234,5154324                      | [email protected],[email protected]                  |
    | George | NULL                                 | [email protected]                                   |
    | Ashley | 7134584                              | [email protected]                                   |
    | Amy    | NULL                                 | NULL                                               |
    +--------+--------------------------------------+----------------------------------------------------+

    Read the article

  • Setting up a Mercurial server on IIS 6

    - by TheCodeJunkie
    Hi, I've set up a Mercurial server on a Windows 2003 / IIS 6 machine, and when I try to pull the repository I get the following sequence:

    requesting all changes
    adding changesets
    adding manifests
    adding file changes
    transaction abort!
    rollback completed
    abort: premature EOF reading chunk (got 91303 bytes, expected 1542634)

    I've tried pretty much everything I can think of, but with no success. I followed the steps of Jeremy Skinner's guide for doing it on IIS7, but on an IIS6 server. I found a post where the author was experiencing the same issue, but he was unable to find a solution. So far it looks like the solution is to migrate to Apache or upgrade to Windows 2008 / IIS7, but if someone knows how to solve this, please let me know.

    Read the article

  • Validate a single property with the Fluent Validation Library for .Net

    - by Blegger
    Can you validate just a single property with the Fluent Validation Library, and if so, how? I thought this discussion thread from January of 2009 showed me how to do it via the following syntax: validator.Validate(new Person(), x => x.Surname); Unfortunately, it doesn't appear that this works in the current version of the library. One other thing that led me to believe that validating a single property might be possible is the following quote from Jeremy Skinner's blog post: "Finally, I added the ability to be able to execute some of FluentValidation's Property Validators without needing to validate the entire object. This means it is now possible to stop the default "A value was required" message from being added to ModelState." However, I do not know whether that means it supports validating just a single property, or merely that you can tell the validation library to stop validating after the first validation error.
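    As a stopgap (a sketch, not an official library API): run full validation and filter the failures for the one property you care about. ValidationResult.Errors and ValidationFailure.PropertyName are part of the library; the extension method itself is hypothetical:

    ```csharp
    using System.Linq;
    using FluentValidation;
    using FluentValidation.Results;

    public static class SinglePropertyValidatorExtensions
    {
        // Returns true when the given property produced no validation failures.
        public static bool IsPropertyValid<T>(this IValidator<T> validator,
                                              T instance, string propertyName)
        {
            ValidationResult result = validator.Validate(instance);
            return !result.Errors.Any(f => f.PropertyName == propertyName);
        }
    }

    // Usage (PersonValidator assumed to be an AbstractValidator<Person>):
    // bool ok = new PersonValidator().IsPropertyValid(new Person(), "Surname");
    ```

    This runs all rules under the hood, so it trades efficiency for simplicity; it only filters what is reported.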

    Read the article

  • How to overcome shortcomings in reporting from EAV database?

    - by David Archer
    The major shortcomings of Entity-Attribute-Value database designs in SQL all seem to be related to being able to query and report on the data efficiently and quickly. Most of the information I read on the subject warns against implementing EAV due to these problems and the commonality of querying/reporting in almost all applications. I am currently designing a system where almost all of the fields necessary for data storage are not known at design/compile time and are defined by the end user of the system. EAV seems like a good fit for this requirement, but due to the problems I've read about, I am hesitant to implement it, as there are also some pretty heavy reporting requirements for this system. I think I've come up with a way around this, but would like to pose the question to the SO community. Given that a typical normalized database (OLTP) still isn't always the best option for running reports, a good practice seems to be having a "reporting" database (OLAP) where the data from the normalized database is copied, indexed extensively, and possibly denormalized for easier querying. Could the same idea be used to work around the shortcomings of an EAV design? The main downside I see is the increased complexity of transferring the data from the EAV database to reporting, as you may end up having to alter the tables in the reporting database as new fields are defined in the EAV database. But that is hardly impossible, and it seems an acceptable tradeoff for the increased flexibility given by the EAV design. This downside also exists if I use a non-SQL data store (e.g. CouchDB or similar) for the main data storage, since all the standard reporting tools expect a SQL backend to query against. Do the issues with EAV systems mostly go away if you have a separate reporting database for querying? EDIT: Thanks for the comments so far. One of the important things about the system I'm working on is that I'm really only talking about using EAV for one of the entities, not everything in the system. The whole gist of the system is to be able to pull data from multiple disparate sources that are not known ahead of time and crunch the data to come up with some "best known" data about a particular entity. So every "field" I'm dealing with is multi-valued, and I'm also required to track history for each. The normalized design for this ends up being 1 table per field, which makes querying it kind of painful anyway.
    Here are the table schemas and sample data I'm looking at (obviously changed from what I'm working on, but I think it illustrates the point well):

    EAV Tables

    Person
    -------------------
    - Id  - Name      -
    -------------------
    - 123 - Joe Smith -
    -------------------

    Person_Value
    -------------------------------------------------------------------
    - PersonId - Source - Field       - Value         - EffectiveDate -
    -------------------------------------------------------------------
    - 123      - CIA    - HomeAddress - 123 Cherry Ln - 2010-03-26    -
    - 123      - DMV    - HomeAddress - 561 Stoney Rd - 2010-02-15    -
    - 123      - FBI    - HomeAddress - 676 Lancas Dr - 2010-03-01    -
    -------------------------------------------------------------------

    Reporting Table

    Person_Denormalized
    ----------------------------------------------------------------------------------------
    - Id  - Name      - HomeAddress   - HomeAddress_Confidence - HomeAddress_EffectiveDate -
    ----------------------------------------------------------------------------------------
    - 123 - Joe Smith - 123 Cherry Ln - 0.713                  - 2010-03-26                -
    ----------------------------------------------------------------------------------------

    Normalized Design

    Person
    -------------------
    - Id  - Name      -
    -------------------
    - 123 - Joe Smith -
    -------------------

    Person_HomeAddress
    ------------------------------------------------------
    - PersonId - Source - Value         - Effective Date -
    ------------------------------------------------------
    - 123      - CIA    - 123 Cherry Ln - 2010-03-26     -
    - 123      - DMV    - 561 Stoney Rd - 2010-02-15     -
    - 123      - FBI    - 676 Lancas Dr - 2010-03-01     -
    ------------------------------------------------------

    The "Confidence" field here is generated using logic that cannot be expressed easily (if at all) using SQL, so my most common operation besides inserting new values will be pulling ALL data about a person for all fields, so I can generate the record for the reporting table. This is actually easier in the EAV model, as I can do a single query. In the normalized design, I end up having to do 1 query per field to avoid a massive cartesian product from joining them all together.
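    To make the "single query, then crunch" step concrete, here is a hedged C# sketch of pivoting Person_Value rows into one flat reporting record. EavRow mirrors the table above; the most-recent-value rule is a stand-in for the real confidence logic, which the post says cannot be expressed in SQL:

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Linq;

    // One EAV row, as returned by a single query per person.
    public class EavRow
    {
        public int PersonId;
        public string Source;
        public string Field;
        public string Value;
        public DateTime EffectiveDate;
    }

    public static class ReportingPivot
    {
        // Collapse all rows for one person into field -> "best known" value.
        public static Dictionary<string, string> Pivot(IEnumerable<EavRow> rows)
        {
            return rows
                .GroupBy(r => r.Field)
                .ToDictionary(
                    g => g.Key,
                    g => g.OrderByDescending(r => r.EffectiveDate).First().Value);
        }
    }
    ```

    The resulting dictionary maps directly onto the columns of Person_Denormalized, with new fields becoming new columns during the transfer step.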

    Read the article

  • Which Dependency Injection Tool Should I Use? (2)

    - by Mendy
    The original post is: Which Dependency Injection Tool Should I Use? While the original post is good, these days I see a lot of people using StructureMap as their Dependency Injection tool, and in the original post no one even took it seriously. In addition, this quote: "If I had to choose today: I would probably go with StructureMap. It has the best support for C# 3.0 language features, and the most flexibility in initialization." Which Dependency Injection tool should I use, out of these:

    Unity Framework - Microsoft
    StructureMap - Jeremy Miller
    Castle Windsor
    NInject
    Spring Framework
    Autofac
    Managed Extensibility Framework

    Read the article

  • ASP.NET MVC 2 "value" in IsValid override in DataAnnotation attribute passed is null, when incorrect

    - by goldenelf2
    Hello to all! This is my first question here on stack overflow. i need help on a problem i encountered during an ASP.NET MVC2 project i am currently working on. I should note that I'm relatively new to MVC design, so pls bear my ignorance. Here goes : I have a regular form on which various details about a person are shown. One of them is "Date of Birth". My view is like this <div class="form-items"> <%: Html.Label("DateOfBirth", "Date of Birth:") %> <%: Html.EditorFor(m => m.DateOfBirth) %> <%: Html.ValidationMessageFor(m => m.DateOfBirth) %> </div> I'm using an editor template i found, to show only the date correctly : <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<System.DateTime?>"%> <%= Html.TextBox("", (Model.HasValue ? Model.Value.ToShortDateString() : string.Empty))%> I used LinqToSql designer to create my model from an sql database. In order to do some validation i made a partial class Person to extend the one created by the designer (under the same namespace) : [MetadataType(typeof(IPerson))] public partial class Person : IPerson { //To create buddy class } public interface IPerson { [Required(ErrorMessage="Please enter a name")] string Name { get; set; } [Required(ErrorMessage="Please enter a surname")] string Surname { get; set; } [Birthday] DateTime? DateOfBirth { get; set; } [Email(ErrorMessage="Please enter a valid email")] string Email { get; set; } } I want to make sure that a correct date is entered. So i created a custom DataAnnotation attribute in order to validate the date : public class BirthdayAttribute : ValidationAttribute { private const string _errorMessage = "Please enter a valid date"; public BirthdayAttribute() : base(_errorMessage) { } public override bool IsValid(object value) { if (value == null) { return true; } DateTime temp; bool result = DateTime.TryParse(value.ToString(), out temp); return result; } } Well, my problem is this. Once i enter an incorrect date in the DateOfBirth field then no custom message is displayed even if use the attribute like [Birthday(ErrorMessage=".....")]. The message displayed is the one returned from the db ie "The value '32/4/1967' is not valid for DateOfBirth.". I tried to enter some break points around the code, and found out that the "value" in attribute is always null when the date is incorrect, but always gets a value if the date is in correct format. The same ( value == null) is passed also in the code generated by the designer. This thing is driving me nuts. Please can anyone help me deal with this? Also if someone can tell me where exactly is the point of entry from the view to the database. Is it related to the model binder? because i wanted to check exactly what value is passed once i press the "submit" button. Thank you.

    Read the article

  • How to parse HTML with TouchXML or some other alternative.

    - by 0SX
    Hi, I'm trying to parse the HTML presented below with TouchXML but it keeps crashing when I try to extract certain attributes. I'm totally new to the parser world so I apologize for being a complete idiot. I need help to parse this HTML. What I'm trying to accomplish is to parse each attribute and value or what not and copy them to a string. I've been trying to find a good parser to parse HTML and I believe TouchXML is the best I've seen because of Tidy. Speaking of Tidy, How could I run this HTML through Tidy first then parse it? I'm not sure how to do this. Here is the code that I have so far that doesn't work due to it's not pulling everything I need from the HTML. Any help or advice would be much appreciated. Thanks My current code: NSMutableArray *res = [[NSMutableArray alloc] init]; // using local resource file NSString *XMLPath = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:@"example.html"]; NSData *XMLData = [NSData dataWithContentsOfFile:XMLPath]; CXMLDocument *doc = [[[CXMLDocument alloc] initWithData:XMLData options:0 error:nil] autorelease]; NSArray *nodes = NULL; nodes = [doc nodesForXPath:@"//div" error:nil]; for (CXMLElement *node in nodes) { NSMutableDictionary *item = [[NSMutableDictionary alloc] init]; [item setObject:[[node attributeForName:@"id"] stringValue] forKey:@"id"]; [res addObject:item]; [item release]; } NSLog(@"%@", res); [res release]; HTML file that needs to be parsed: <html> <head> <base target="_blank" /> </head> <body style="margin:2;"> <div id="group"> <div id="groupURL"><a href="http://www.example.com/groups">Group URL</a></div> <img id="grouplogo" src="http://images.example.com/groups/image.png" /> <div id="groupcomputer"><a href="http://www.example.com/groups/page" title="Group Title">Group title this would be here</a></div> <div id="groupinfos"> <div id="groupinfo-l">Person</div><div id="groupinfo-r">Ralph</div> <div id="groupinfo-l">Years</div><div id="groupinfo-r">4 years</div> <div id="groupinfo-l">Salary</div><div id="groupinfo-r">100K</div> <div id="groupinfo-l">Other</div><div id="groupoth" style="width:15px">other info</div> </body> </html> EDIT: I could use Element Parser but I need to know how to extract the Person's Name from the following example which would be Ralph in this case. <div id="groupinfo-l">Person</div><div id="groupinfo-r">Ralph</div>

    Read the article

  • How can I implement the Gale-Shapley stable marriage algorithm in Perl?

    - by srk
    Problem: We have an equal number of men and women. Each man has a preference score toward each woman, and each woman has one toward each man. Each of the men and women has certain interests, and based on the interests we calculate the preference scores. So initially we have an input file with x columns. The first column is the person's (man's/woman's) id; the ids are just the numbers 0..n (the first half are men and the second half women). The remaining x-1 columns hold the interests, which are integers too. Now, using this n by x-1 matrix, we have come up with an n by n/2 matrix. The new matrix has all men and women as its rows and scores for the opposite sex in its columns. We have to sort the scores in descending order, and we also need to know the id of the person each score belongs to after sorting. So here I wanted to use a hash table. Once we get the scores we need to make up pairs, for which we need to follow some rules. My trouble is with the second matrix of n by n/2, which needs to tell us how much preference each man/woman has for each woman/man. I need these scores sorted so that I know who is the first preferred woman/man, the 2nd preferred, and so on, for each man/woman. I hope to get good suggestions on the data structures I use; I prefer PHP or Perl. Thank you in advance. Hey guys, this is not homework. This is a slightly modified version of the stable marriage algorithm. I have a working solution; I am only working on optimizing my code. More info: it is very similar to the stable marriage problem, but here we need to calculate the scores based on the interests people share. I have implemented it the way you see on the wiki page http://en.wikipedia.org/wiki/Stable_marriage_problem. My problem is not solving the problem; I solved it and can run it. I am just trying to find a better solution, so I am asking for suggestions on the type of data structure to use. Conceptually, I tried using an array of hashes, where the array index gives the person id and the hash in it gives the ids and scores in sorted order. I initially start with an array of hashes; then I sort the hashes on values, but I could not store the sorted hashes back in an array. So I just stored the keys after sorting and used them to look up the values in my initial unsorted hashes. Can we store the hashes after sorting? Can you suggest a better structure?
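    On the specific question at the end: a Perl hash is inherently unordered, so a "sorted hash" cannot be stored as such; the usual idiom is exactly what the question converged on, namely keeping the hash for O(1) lookups and keeping the ordering in a separate array of keys. A small illustrative sketch of that structure (written in C# here, but it maps one-to-one to a Perl hash plus a sorted key array):

    ```csharp
    using System.Collections.Generic;
    using System.Linq;

    public class Preferences
    {
        // Score[candidateId] = preference score toward that candidate.
        // In Perl: %score, kept unsorted for fast lookups.
        public Dictionary<int, int> Score = new Dictionary<int, int>();

        // Candidate ids ordered from most to least preferred.
        // In Perl: my @ranked = sort { $score{$b} <=> $score{$a} } keys %score;
        public int[] Ranking()
        {
            return Score.OrderByDescending(kv => kv.Value)
                        .Select(kv => kv.Key)
                        .ToArray();
        }
    }
    ```

    Precomputing the ranking once per person keeps the proposal loop of Gale-Shapley a simple walk down an array.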

    Read the article

  • How do I use constructor dependency injection to supply Models from a collection to their ViewModels

    - by GraemeF
    I'm using constructor dependency injection in my WPF application and I keep running into the following pattern, so would like to get other people's opinion on it and hear about alternative solutions. The goal is to wire up a hierarchy of ViewModels to a similar hierarchy of Models, so that the responsibility for presenting the information in each model lies with its own ViewModel implementation. (The pattern also crops up under other circumstances but MVVM should make for a good example.) Here's a simplified example. Given that I have a model that has a collection of further models: public interface IPerson { IEnumerable<IAddress> Addresses { get; } } public interface IAddress { } I would like to mirror this hierarchy in the ViewModels so that I can bind a ListBox (or whatever) to a collection in the Person ViewModel: public interface IPersonViewModel { ObservableCollection<IAddressViewModel> Addresses { get; } void Initialize(); } public interface IAddressViewModel { } The child ViewModel needs to present the information from the child Model, so it's injected via the constructor: public class AddressViewModel : IAddressViewModel { private readonly IAddress _address; public AddressViewModel(IAddress address) { _address = address; } } The question is, what is the best way to supply the child Model to the corresponding child ViewModel? The example is trivial, but in a typical real case the ViewModels have more dependencies - each of which has its own dependencies (and so on). I'm using Unity 1.2 (although I think the question is relevant across the other IoC containers), and I am using Caliburn's view strategies to automatically find and wire up the appropriate View to a ViewModel. Here is my current solution: The parent ViewModel needs to create a child ViewModel for each child Model, so it has a factory method added to its constructor which it uses during initialization: public class PersonViewModel : IPersonViewModel { private readonly Func<IAddress, IAddressViewModel> _addressViewModelFactory; private readonly IPerson _person; public PersonViewModel(IPerson person, Func<IAddress, IAddressViewModel> addressViewModelFactory) { _addressViewModelFactory = addressViewModelFactory; _person = person; Addresses = new ObservableCollection<IAddressViewModel>(); } public ObservableCollection<IAddressViewModel> Addresses { get; private set; } public void Initialize() { foreach (IAddress address in _person.Addresses) Addresses.Add(_addressViewModelFactory(address)); } } A factory method that satisfies the Func<IAddress, IAddressViewModel> interface is registered with the main UnityContainer. The factory method uses a child container to register the IAddress dependency that is required by the ViewModel and then resolves the child ViewModel: public class Factory { private readonly IUnityContainer _container; public Factory(IUnityContainer container) { _container = container; } public void RegisterStuff() { _container.RegisterInstance<Func<IAddress, IAddressViewModel>>(CreateAddressViewModel); } private IAddressViewModel CreateAddressViewModel(IAddress model) { IUnityContainer childContainer = _container.CreateChildContainer(); childContainer.RegisterInstance(model); return childContainer.Resolve<IAddressViewModel>(); } } Now, when the PersonViewModel is initialized, it loops through each Address in the Model and calls CreateAddressViewModel() (which was injected via the Func<IAddress, IAddressViewModel> argument). 
CreateAddressViewModel() creates a temporary child container and registers the IAddress model so that when it resolves the IAddressViewModel from the child container the AddressViewModel gets the correct instance injected via its constructor. This seems to be a good solution to me as the dependencies of the ViewModels are very clear and they are easily testable and unaware of the IoC container. On the other hand, performance is OK but not great as a lot of temporary child containers can be created. Also I end up with a lot of very similar factory methods. Is this the best way to inject the child Models into the child ViewModels with Unity? Is there a better (or faster) way to do it in other IoC containers, e.g. Autofac? How would this problem be tackled with MEF, given that it is not a traditional IoC container but is still used to compose objects?
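    On the Autofac question specifically: Autofac's delegate-factory relationship can generate the Func<IAddress, IAddressViewModel> automatically, matching the IAddress argument to the constructor parameter by type, so the hand-rolled child-container factory goes away. A hedged sketch (Autofac 2.x-era API; types taken from the question):

    ```csharp
    using System;
    using Autofac;

    public static class CompositionRoot
    {
        public static IPersonViewModel CreatePersonViewModel(IPerson person)
        {
            var builder = new ContainerBuilder();
            builder.RegisterType<AddressViewModel>().As<IAddressViewModel>();
            builder.RegisterType<PersonViewModel>().As<IPersonViewModel>();
            IContainer container = builder.Build();

            // Autofac supplies PersonViewModel's Func<IAddress, IAddressViewModel>
            // dependency automatically (a "delegate factory"); no child containers
            // or hand-written factory registrations are needed.
            var factory = container.Resolve<Func<IPerson, IPersonViewModel>>();
            IPersonViewModel vm = factory(person);
            vm.Initialize();
            return vm;
        }
    }
    ```

    Because no temporary child container is created per address, this should also address the performance concern, though that is worth measuring rather than assuming.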

    Read the article

  • Cutting large XML file into smaller pieces in C#

    - by NDraskovic
    I have a problem that I've been working on for quite some time now. I have an XML file with over 50000 records (one record has 3 levels). This file is used by one of my applications to control document sending (the record holds, among other information, the type of document that has to be sent to a certain person). So in my application I load the XML file into an XmlDocument, and then, by using the SelectNodes method, I create an XmlNodeList from which I read the data I want. The process is like this: our worker takes the person's ID card (with a simple barcode) and reads it with a barcode reader. When the barcode value has been read, my application finds the person with that ID in the XML file and stores the type of the document in a string variable. Then the worker takes the document and reads its barcode, and if the value of the document's barcode matches the value in the string variable, the application makes a record that a document of type xxxxxxxx will be sent to the person with ID yyyyyyyyy. This is very simple code, and it works perfectly for now. This is how it looks. On the textBox1_TextChanged event (worker reads the person's ID): foreach(XmlNode node in NodeList){ if(String.Compare(node.Attributes.GetNamedItem("ID").Value.ToString(),textBox1.Text)==0) { ControlString = node.ChildNodes[3].FirstChild.Attributes.GetNamedItem("doctype").Value.ToString(); break; } } textBox2.Focus(); And on the textBox2_TextChanged event (worker reads the document's barcode): if(String.Compare(textBox2.Text,ControlString)==0) { //Create a record and insert it into a SQL database } My question is: how will my application perform with larger XML files (I was told that the XML file might be up to 500,000 records large)? Will this approach still be valid, or will I need to cut the file into smaller files? If I have to cut it, please give me an idea with some code samples. I've tried to do it like this, reading an entire record and storing it in a string: private void WriteXml(XmlNode record) { tempXML = record.InnerXml; temp = "<" + record.Name + " code=\"" + record.Attributes.GetNamedItem("code").Value + "\">" + Environment.NewLine; temp += tempXML + Environment.NewLine; temp += "</" + record.Name + ">"; SmallerXMLDocument += temp + Environment.NewLine; temp = ""; i++; } tempXML, temp and SmallerXMLDocument are all string variables. And then in the button_Click method I load the XML file into an XmlNodeList (again by using the XmlDocument.SelectNodes method) and I try to create one big string value that would hold all records, like this: foreach(XmlNode node in nodes) { if(String.Compare(node.ChildNodes[3].FirstChild.Attributes.GetNamedItem("doctype").Value.ToString(),doctype1)==0) { WriteXml(node); } } My idea was to create a string value (in this case called SmallerXMLDocument), and once I had passed through the entire XML file, to simply copy the value of that string into a new file. This works, but only for files that have up to 2000 records (and mine has way more than that). So, if I need to cut the file into smaller pieces, what would be the best way to do it (keeping in mind that there could be up to half a million records in an XML file)? Thanks
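    For the splitting step, one hedged alternative to building giant strings is to stream the file with XmlReader and write parts out with XmlWriter, which keeps memory use flat regardless of record count. The element names "record" and "records" below are assumptions standing in for the real schema:

    ```csharp
    using System.Xml;

    public static class XmlSplitter
    {
        // Copies every <record> element from inputPath into numbered part files,
        // starting a fresh file after recordsPerFile records.
        public static void Split(string inputPath, int recordsPerFile)
        {
            using (XmlReader reader = XmlReader.Create(inputPath))
            {
                int fileIndex = 0, count = 0;
                XmlWriter writer = null;
                while (reader.ReadToFollowing("record"))
                {
                    if (writer == null)
                    {
                        writer = XmlWriter.Create("part" + (fileIndex++) + ".xml");
                        writer.WriteStartElement("records");
                    }
                    // Copy the whole record subtree to the current output file.
                    writer.WriteNode(reader.ReadSubtree(), false);
                    if (++count == recordsPerFile)
                    {
                        writer.WriteEndElement();
                        writer.Close();
                        writer = null;
                        count = 0;
                    }
                }
                if (writer != null) { writer.WriteEndElement(); writer.Close(); }
            }
        }
    }
    ```

    The same streaming idea also answers the lookup question: scanning 500,000 records with XmlReader avoids ever holding the whole document in an XmlDocument.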

    Read the article

  • ZF: Form array field - how to display values in the view correctly

    - by Wojciech Fracz
    Let's say I have a Zend_Form form that has a few text fields, e.g: $form = new Zend_Form(); $form->addElement('text', 'name', array( 'required' => true, 'isArray' => true, 'filters' => array( /* ... */ ), 'validators' => array( /* ... */ ), )); $form->addElement('text', 'surname', array( 'required' => true, 'isArray' => true, 'filters' => array( /* ... */ ), 'validators' => array( /* ... */ ), )); After rendering it I have following HTML markup (simplified): <div id="people"> <div class="person"> <input type="text" name="name[]" /> <input type="text" name="surname[]" /> </div> </div> Now I want to have the ability to add as many people as I want. I create a "+" button that in Javascript appends next div.person to the container. Before I submit the form, I have for example 5 names and 5 surnames, posted to the server as arrays. Everything is fine unless somebody puts the value in the field that does not validate. Then the whole form validation fails and when I want to display the form again (with errors) I see the PHP Warning: htmlspecialchars() expects parameter 1 to be string, array given Which is more or less described in ticket: http://framework.zend.com/issues/browse/ZF-8112 However, I came up with a not-very-elegant solution. What I wanted to achieve: have all fields and values rendered again in the view have error messages only next to the fields that contained bad values Here is my solution (view script): <div id="people"> <?php $names = $form->name->getValue(); // will have an array here if the form were submitted $surnames= $form->surname->getValue(); // only if the form were submitted we need to validate fields' values // and display errors next to them; otherwise when user enter the page // and render the form for the first time - he would see Required validator // errors $needsValidation = is_array($names) || is_array($surnames); // print empty fields when the form is displayed the first time if(!is_array($names))$names= array(''); if(!is_array($surnames))$surnames= array(''); // display all fields! foreach($names as $index => $name): $surname = $surnames[$index]; // validate value if needed if($needsValidation){ $form->name->isValid($name); $form->surname->isValid($surname); } ?> <div class="person"> <?=$form->name->setValue($name); // display field with error if did not pass the validation ?> <?=$form->surname->setValue($surname);?> </div> <?php endforeach; ?> </div> The code work, but I want to know if there is an appropriate, more comfortable way to do this? I often hit this problem when there is a need for a more dynamic - multivalue forms and have not find better solution for a long time.

    Read the article

  • Mysqldump creates empty sql file? [php & mysql on Windows]

    - by mysqllearner
    Hi all, I tried to dump a database: <?php $dbhost = "localhost"; $dbuser = "XXXX"; $dbpass = "XXXXXXXX"; $dbname = 'testdb'; $list = shell_exec ("C:\wamp\bin\mysql\mysql5.1.33\bin\mysqldump.exe $dbname --user=$dbuser--password=$dbpass > dumpfile.sql"); ?> I tried both specifying the full path to mysqldump.exe and just using mysqldump; it still gives me a 0 KB dumpfile.sql. Details: Programming Language: PHP. Database: MySql 5.XX. OS (server): Windows Server 2003 (currently testing on a Windows Vista machine). EDIT @ Jeremy Heslop: I tried: shell_exec("C:\wamp\bin\mysql\mysql5.1.33\bin\mysqldump.exe --opt -h $dbhost -u$dbuser -p$dbpass $dbname > test.sql"); with safe_mode = off. Still no luck man.

    Read the article

  • Access a PowerPoint chart in C#

    - by babar11
    Hi, I have a problem in a C# project. I built a PowerPoint add-in and I want to generate charts on slides. I create the chart with: using PowerPoint = Microsoft.Office.Interop.PowerPoint; using Microsoft.Office.Interop.Graph; Microsoft.Office.Interop.Graph.Chart objChart; objChart = (Microsoft.Office.Interop.Graph.Chart)objShape.OLEFormat.Object; The chart is created on the slide, but I can't access the data to update or insert values. I have tried with the DataSheet as below: //DataSheet test = objChart.Application.DataSheet; //test.Cells.Clear() This deletes the data of the chart, but I can't find a way to insert values into the chart data afterwards. Best Regards, Chomel Jeremy

    Read the article

  • How can I mysqldump a huge database?

    - by meder
    SELECT count(*) from table gives me 3296869 rows. The table only contains 4 columns, storing dropped domains. I tried to dump the sql through: $backupFile = $dbname . date("Y-m-d-H-i-s") . '.gz'; $command = "mysqldump --opt -h $dbhost -u $dbuser -p $dbpass $dbname | gzip > $backupFile"; However, this just dumps an empty 20 KB gzipped file. My client is using shared hosting so the server specs and resource usage aren't top of the line. I'm not even given ssh access or access directly to the database so I have to make queries through PHP scripts I upload via FTP ( SFTP isn't an option, again ). Is there some way I can perhaps sequentially download portions of it, or pass an argument to mysqldump that will optimize it? I came across http://jeremy.zawodny.com/blog/archives/000690.html which mentions the -q flag and tried that but it didn't seem to do anything differently.

    Read the article

  • model binding of non-sequential arrays

    - by user281180
    I have a table in which I'm dynamically creating and deleting rows. How can I change the code so that rows can be added and deleted and the model's Info property is filled accordingly? Bearing in mind that the rows can be dynamically created and deleted, I may end up with Info[0], Info[3], Info[4]... My objective is to be able to bind the array even if it's not in sequence. Model: public class Person { public int[] Size { get; set; } public string[] Name { get; set; } public Info[]info { get; set; } } public class Info { public string Address { get; set; } public string Tel { get; set; } } View: <script type="text/javascript" language="javascript"> $(function () { var count = 1; $('#AddSize').live('click', function () { $("#divSize").append('</br><input type="text" id="Size" name="Size" value=""/><input type = "button" id="AddSize" value="Add"/>'); }); $('#AddName').live('click', function () { $("#divName").append('</br><input type="text" id="Name" name="Name" value=""/><input type = "button" id="AddName" value="Add"/>'); }); $('#AddRow').live('click', function () { $('#details').append('<tr><td>Address</td><td> <input type="text" name="Info[' + count + '].Address"/></td><td>Tel</td><td><input type="text" name="Info[' + count++ + '].Tel"/></td> <td><input type="button" id="AddRow" value="Add"/> </td></tr>'); }); }); </script> </head> <body> <form id="closeForm" action="<%=Url.Action("Create",new{Action="Create"}) %>" method="post" enctype="multipart/form-data"> <div id="divSize"> <input type="text" name="Size" value=""/> <input type="button" value="Add" id="AddSize" /> </div> <div id="divName"> <input type="text" name="Name" value=""/> <input type="button" value="Add" id="AddName" /> </div> <div id="Tab"> <table id="details"> <tr><td>Address</td><td> <input type="text" name="Info[0].Address"/></td><td>Tel</td><td><input type="text" name="Info[0].Tel"/></td> <td><input type="button" id="AddRow" value="Add"/> </td></tr> </table> </div> <input type="submit" value="Submit" /> </form> </body> Controller: public ActionResult Create(Person person) { return new EmptyResult(); }

    Read the article

  • Static assembly initialization

    - by ph0enix
    I'm attempting to develop an Interceptor framework (in C#) where I can simply implement some interfaces, and through the use of some static initialization, register all my Interceptors with a common Dispatcher to be invoked at a later time. The problem lies in the fact that my Interceptor implementations are never actually referenced by my application so the static constructors never get called, and as a result, the Interceptors are never registered. If possible, I would like to keep all references to my Interceptor libraries out of my application, as this is my way of (hopefully) enforcing loose coupling across different modules. Hopefully this makes some sense. Let me know if there's anything I can clarify... Does anyone have any ideas, or perhaps a better way to go about implementing my Interceptor pattern? TIA, Jeremy

    Read the article

  • CruiseControl.NET Silverlight Unit Tests Interact with Desktop Windows Server 2008

    - by user292195
    Hi, currently we have CCService running as a domain account because the build scripts deploy to a network location. However, this causes any unit tests that test the view to fail, due to not being allowed to interact with the desktop. I can change CCService to run as Local System, which works, but then I lose network connectivity. I have also tried setting up an /interactive cmd.exe, but this has been deprecated in Windows Server 2008. Any ideas on this one? Thanks, Jeremy

    Read the article

  • Nested loop traversing arrays

    - by alecco
    There are 2 very big series of elements, the second 100 times bigger than the first. For each element of the first series, there are 0 or more elements in the second series. This can be traversed and processed with 2 nested loops, but the unpredictable number of matching elements for each member of the first array makes things very, very slow. The actual processing of the 2nd series of elements involves a logical AND (&) and a population count. I couldn't find good optimizations using C, but I am considering doing inline asm, doing rep* mov* or similar for each element of the first series and then doing the batch processing of the matching bytes of the second series, perhaps in buffers of 1MB or something. But the code would get quite messy. Does anybody know of a better way? C preferred, but x86 ASM is OK too. Many thanks! Sample/demo code with a simplified problem; for clarity's sake, the first series is "people" and the second series is "events". (The original problem is actually 100m and 10,000m entries!) #include <stdio.h> #include <stdint.h> #define PEOPLE 1000000 // 1m struct Person { uint8_t age; // Filtering condition uint8_t cnt; // Number of events for this person in E } P[PEOPLE]; // Each has 0 or more bytes with bit flags #define EVENTS 100000000 // 100m uint8_t P1[EVENTS]; // Property 1 flags uint8_t P2[EVENTS]; // Property 2 flags void init_arrays() { for (int i = 0; i < PEOPLE; i++) { // just some stuff P[i].age = i & 0x07; P[i].cnt = i % 220; // assert( sum < EVENTS ); } for (int i = 0; i < EVENTS; i++) { P1[i] = i % 7; // just some stuff P2[i] = i % 9; // just some other stuff } } int main(int argc, char *argv[]) { uint64_t sum = 0, fcur = 0; int age_filter = 7; // just some init_arrays(); // Init P, P1, P2 for (int64_t p = 0; p < PEOPLE ; p++) if (P[p].age < age_filter) for (int64_t e = 0; e < P[p].cnt ; e++, fcur++) sum += __builtin_popcount( P1[fcur] & P2[fcur] ); else fcur += P[p].cnt; // skip this person's events printf("(dummy %ld %ld)\n", sum, fcur ); return 0; } gcc -O5 -march=native -std=c99 test.c -o test

    Read the article

  • SQLAuthority Book Review – DBA Survivor: Become a Rock Star DBA

    - by pinaldave
    DBA Survivor: Become a Rock Star DBA – Thomas LaRock Link to Amazon Link to Flipkart First of all, I thank all my readers who, when I wrote that I could not find this book in any local book store, offered to send me a copy of this good book. A very special mention goes to Sripada and Jayesh, for they put so much effort into finding my home address and sending me the hard copy. Before, I did not have a copy of the book, but now I already have two! It surprises me how my readers were able to find my home address, which I have not publicly shared. Quick Review: This is indeed an easy-to-read and fun book. We all work day and night with technology, yet we should not forget to show our love and care for our family at home. For our souls that starve for peace and guidance, this is the "it" book for all technology enthusiasts. Though this book was specifically written for DBAs, its reach is not limited to DBAs only, because the lessons incorporated in it actually apply to all. This is one of the most motivating technical books I have read. Detailed Review: Let us go over a few questions first: Who wants to be as famous as the rockstars in the field of Database Administration? How can one learn what it takes to become a top-notch software developer? If you are a beginner in your field, how will you get to the next level? Your boss may be very kind, or he may be like Dilbert's boss; what will you do? How do you keep growing when the ecosystem around you does not support you? You are almost at the top, but there is someone else at the TOP; what do you do, and how do you avoid office politics? As a database developer, what should be your basic responsibility? And many more… I was able to read the book completely in one sitting, and I loved it. Before I continue with my opinion, I want to echo the opinion of Kevin Kline, who wrote the Foreword of the book. He has truly suggested that "You hold in your hands a collection of insights and wisdom on the topic of database administration gained through many years of hard-won experience, long nights of study, and direct mentorship under some of the industry's most talented database professionals and information technology (IT) experts." Today, the IT field is getting bigger and better, and talking about terabytes of data becomes "more" normal every single day. The gods and demigods among database professionals are taking care of these large-scale databases and are carefully maintaining them. And in this world there are also those just taking the first step: many experts in different technology fields who are asked to address database issues, and YOU and ME, who are simply new to this work. So we ask ourselves WHERE to begin and HOW to begin. We adore and follow the religion of our rockstars, but oftentimes we really have no idea about their background and their struggles. Every rockstar has his success story, which needs to be digested before learning his tricks and tips. This book starts on the same note and teaches the two most important lessons for anybody who wants to be a DBA rockstar – to focus on a single goal of learning and to excel at the technology. The story starts with three simple guidelines – Get Prepared, Get Trained, Get Certified. Once a person learns the skills, it is about time to enrich and improve them. I am sure that the right opportunity will then come find them, and they will not have to run behind it.
    However, the real challenge for any person is the first day or first week. A new employee, no matter how experienced he is, sometimes has no clue what to do at a new job. Chapters 2 and 3 talk precisely about what one should do as soon as the new job begins. They are also written keeping in focus the fact that each job can be very different, while a few infrastructure setups and programming concepts stay the same. Learning the basics of the database was really interesting; I like to focus on the roots of any technology. It is important to understand the structure of the database before suggesting what indexes need to be created, and in the same way this book covers the most essential knowledge a database developer must learn. I think the title of the fourth chapter is my favorite sentence in this book. I can see that I will be saying this again and again in the future – "A Development Server Is a Production Server to a Developer". I have worked in the software industry for almost 8 years now, and I have seen so many developers sitting in their chairs waiting for instructions from their lead about how to improve the code or what to do next. When I talk to them, I suggest that they experiment with their server and try various techniques. I think they all should understand that for them a development server is their production server, and they need to pay proper attention to the code from the beginning. There should be NO inappropriate code from the beginning. One has to fully focus and give one's best; if they are not sure, they should ask, but they should do something and stay active. Chapters 5 and 6 talk about two essential skills for any developer and database administrator – the ethics of developers working with a production server, and how to support software running on the production server. I have met many people who know the theory by heart, but when put in front of a keyboard they do not know where to start. The first thing they do is open the browser and search online, instead of opening SQL Server Management Studio. This can very well happen to anybody, experienced people as well. Chapters 5 and 6 address that situation and also include handy scripts which can solve almost all the basic troubleshooting issues. "Where's the Buffet?" is by far the best chapter in this book. If you have ever met me, you would know that I love food. After reading this chapter, I felt Thomas had written it just keeping me in mind, and I think there will be many other people who feel the same way, too. Even my wife, who read this chapter, thought it was specifically written for me. I will not talk any more about this chapter, as it is a must-read chapter. And of course, it is about real 'FOOD'. I am an SQL Server trainer and consultant, and I totally agree with the point made in chapter 8 of this book: yes, it explains why it is necessary to train employees. Millions of dollars' worth of labor is continuously done in the world that is faulty and incorrect; once something goes wrong, a very expensive consultant comes in and fixes the problem. This whole cycle can be stopped and improved if proper training is done. There is plenty of free training available as well, if one cannot afford paid training. "Connect. Learn. Share" – I think this is a great summary and bird's-eye view of this book. Networking is the key.
    Everything discussed in this book can be taken to the next level if one properly uses these tips and continuously grows with them. Connecting with others, helping each other learn, and building a good knowledge-sharing environment should be everyone's goal. Before I end the review, I want to share a real experience. I have personally met one DBA who had worked in a single department of a company for so long that when he was moved to a different department because his own was closed, he could not adjust and quit the job, despite having the same people and company around him. Adjusting to a new environment gets much tougher as a person gets more and more experienced. This book precisely addresses that issue, along with solutions. I just cannot stop comparing the book with my personal journey; I found so many things in the book that coincide with how we developers and DBAs think. I must express special thanks to Thomas for taking time from his personal life to write this book for us. This book is indeed for everybody who wants to grow healthily in a tough and competitive environment. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Book Review, SQLAuthority News, SQLServer, T SQL, Technology

    Read the article

  • Right-Time Retail Part 3

    - by David Dorf
    This is part three of the three-part series. Read Part 1 and Part 2 first.

    Right-Time Marketing

    Real-time isn't just about executing faster; it extends to interactions with customers as well. As an industry, we've spent many years analyzing all the data that's been collected. Yes, that data has been invaluable in helping us make better decisions like where to open new stores, how to assort those stores, and how to price our products. But the recent advances in technology are now making it possible to analyze and deliver that data very quickly… fast enough to impact a potential sale in near real-time. Let me give you two examples. Salesmen in car dealerships get pretty good at sizing people up. When a potential customer walks in the door, it doesn't take long for the salesman to figure out the revenue at stake. Is this person a real buyer, or just looking for a fun test drive? Will this person buy today or three months from now? Will this person opt for the expensive packages, or go bare bones? While the salesman certainly asks some leading questions, much of the information is discerned through body language. But body language doesn't translate very well over the web. Eloqua, which was acquired by Oracle earlier this year, reads internet body language. By tracking the behavior of the people visiting your web site, Eloqua categorizes visitors based on their propensity to buy. While Eloqua's roots have been in B2B, we've been looking at leveraging the technology with ATG to target B2C. Knowing what sites were previously visited, how often the customer has been to your site recently, and how long they've spent searching can help understand where the customer is in their purchase journey. And knowing that bit of information may be enough to help close the deal with a real-time offer, follow-up email, or online customer service pop-up. This isn't so different from the days gone by when the clerk behind the counter of the corner store noticed you were lingering in a particular aisle, so he walked over to help you compare two products and close the sale. You appreciated the personalized service, and he knew the value of the long-term relationship. Move that same concept into the digital world and you have Oracle's CX Suite, a cloud-based offering of end-to-end customer experience tools, assembled primarily from acquisitions. Those tools are Oracle Marketing (Eloqua), Oracle Commerce (ATG, Endeca), Oracle Sales (Oracle CRM On Demand), Oracle Service (RightNow), Oracle Social (Collective Intellect, Vitrue, Involver), and Oracle Content (Fatwire). We are providing the glue that binds the CIO and CMO together to unleash synergies that drive the top-line higher, and by virtue of the cloud-approach, keep costs at bay.
    My second example of real-time marketing takes place in the store but leverages the concepts of web marketing.

    In 1962 the decline of personalized service in retail began. Anyone know the significance of that year? That's when Target, K-Mart, and Walmart each opened their first stores, and over the succeeding years the industry chose scale over personal service. No longer were you known as "Jane with the snotty kid so make sure we check her out fast"; you suddenly became "time-starved female age 20-30 with kids." I'm not saying that was a bad thing - it was the right thing for our industry at the time, and it enabled a huge amount of growth, cheaper prices, and more variety of products. But scale alone is no longer good enough. Today's sophisticated consumer demands scale, experience, and personal attention. To some extent we've delivered that on websites via the magic of cookies, your willingness to log in, and sophisticated data analytics.

    What store manager wouldn't love a report detailing all the visitors to his store, where they came from, and which products they examined? People trackers are getting more sophisticated, incorporating infrared, video analytics, and even face recognition. (Next time you walk in front of a mannequin, don't be surprised if it's looking back.) But the ultimate marketing conduit is the mobile phone. Since each mobile phone emits a unique number on WiFi networks, it becomes the cookie of the physical world. Assuming Congress keeps privacy safeguards reasonable, we'll have a win-win situation for both retailers and consumers. Retailers get to know more about the consumer's purchase journey, and consumers get higher levels of service from the retailer.

    When I call my bank, a couple things happen before the call is connected. A reverse look-up on my phone number identifies me so my accounts can be retrieved from Siebel CRM. Then the system anticipates why I'm calling based on recent transactions. In this example, it sees that I was just charged a foreign currency fee, so it assumes that's the reason I'm calling. It puts all the relevant information on the customer service rep's screen as it connects the call. When I complain about the fee, the rep immediately sees I'm a great customer and I travel lots, so she suggests switching me to their traveler's card that doesn't have foreign transaction fees. That technology is powered by a product called Oracle Real-Time Decisions, a rules engine built to execute very quickly - basically in the time it takes the phone to ring once.

    So let's combine the power of that product with our new-found mobile cookie and provide contextual customer interactions in real-time. Our first opportunity comes when a customer crosses a pre-defined geo-fence, typically a boundary around the store. Context is the key to our interaction: that's the customer (known or anonymous), the time of day and day of week, and location. Thomas near the downtown store on a Wednesday at noon means he's heading to lunch. If he were near the mall location on a Saturday morning, that's a completely different context. But on his way to lunch, we'll let Thomas know via a simple text message that we've got a new shipment of ASICS running shoes on display. We used the context to look up Thomas' past purchases and understood he was an avid runner. We used the fact that this was lunchtime to select the type of message - in this case an informational message instead of an offer. Thomas enters the store, phone in hand, and walks to the shoe department.
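    Before following Thomas into the store, here's a minimal sketch of the kind of context-driven decision just described. This is not Oracle RTD's actual API; the rule logic, names, and thresholds are illustrative assumptions:

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Context:
        customer_id: str   # known customer id, or "" if anonymous
        store: str         # which geo-fence was crossed
        when: datetime     # time of day and day of week

    def choose_message(ctx, past_purchases):
        # Hypothetical rules: a weekday lunchtime visit from a known runner
        # gets an informational message; weekend mall traffic gets a generic
        # one; otherwise we stay quiet.
        is_weekday_lunch = ctx.when.weekday() < 5 and 11 <= ctx.when.hour <= 13
        if is_weekday_lunch and "running shoes" in past_purchases:
            return "New shipment of ASICS running shoes is on display today!"
        if ctx.when.weekday() >= 5 and ctx.store == "mall":
            return "Stop by for this weekend's in-store specials."
        return None  # no interaction is also a valid decision

    The specifics are invented; the takeaway is the shape of the decision - context in, one message type (or silence) out.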
    He scans one of the new ASICS shoes using the convenient QR codes we provided on the shelf tags, but then he starts scanning low-end Nikes. Each scan is another opportunity to both learn from Thomas and potentially interact via another message. Since he historically buys low-end Nikes and keeps scanning them, he's likely falling back into his old ways. Our marketing rules are currently set to move loyal customers to higher-margin products. We could have set the dials to increase visit frequency, move overstocked items, increase basket size, or many other goals, but today we are trying to move Thomas to higher-margin products.

    We send Thomas another text message, this time a personalized offer for 10% off ASICS, good for 24 hours. Offering him a discount on Nikes would be throwing margin away since he buys those anyway. We are using our marketing dollars to change behavior in a way that increases the long-term value of Thomas. He decides to buy the ASICS and scans the discount code on his phone at checkout. Checkout is yet another opportunity to interact with Thomas, so the transaction is sent back to Oracle RTD for evaluation. Since Thomas didn't buy anything with the shoes, we'll print a bounce-back coupon on the receipt offering 30% off ASICS socks if he returns within seven days. We have successfully started moving Thomas from low-margin to high-margin products.

    In both of these marketing scenarios, we are able to leverage data in near real-time to decide how best to interact with the customer and increase the lifetime value of the customer. The key here is acting at the moment the customer shows interest, using the context of the situation. We aren't pushing random products at haphazard times. We are tailoring the marketing to be very specific to this customer, and it's the technology that allows this to happen in near real-time.

    Conclusion

    As we enable more right-time integrations and interactions, retailers will begin to offer increased service to their customers. Localized and personalized service at scale will drive loyalty and lead to meaningful revenue growth for the retailers that execute well. Our industry needs to support Commerce Anywhere… and commerce anytime as well.

    Read the article

  • Software development is (mostly) a trade, and what to do about it

    - by Jeff
    (This is another cross-post from my personal blog. I don't even remember when I first started to write it, but I feel like my opinion is well enough baked to share.)

    I've been sitting on this for a long time, particularly as my opinion has changed dramatically over the last few years. That I've encountered more crappy code than maintainable, quality code in my career as a software developer only reinforces what I'm about to say. Software development is just a trade for most, and not a huge academic endeavor. For those of you with computer science degrees readying your pitchforks and collecting your algorithm interview questions, let me explain. This is not an assault on your way of life, and if you've been around, you know I'm right about the quality problem. You also know the HR problem is very real, or we wouldn't be paying top dollar for mediocre developers and importing people from all over the world to fill the jobs we can't fill. I'm going to try to outline what I see as some of the problems, and hopefully offer my views on how to address them.

    The recruiting problem

    I think a lot of companies are doing it wrong. Over the years, I've had two kinds of interview experiences. The first, and right, kind of experience involves talking about real-life achievements, followed by some variation on white-boarding in pseudo-code, drafting some basic system architecture, or even sitting down at a comprooder and pecking out some basic code to tackle a real problem. I can honestly say that I've had a job offer for every interview like this, save for one, because the task was to debug something and they didn't like me asking where to look ("everyone else in the company died in a plane crash").

    The other interview experience, the wrong one, involves the classic torture test designed to make the candidate feel stupid and do things they never have done, and never will do, in their job. First they question you about obscure academic material you've never seen, or don't care to remember. Then they ask you to white-board some ridiculous algorithm involving prime numbers or some kind of string manipulation no one would ever do. In fact, if you had to do something like this, you'd Google for a solution instead of wasting time on a solved problem. Some will tell you that the academic gauntlet interview is useful for seeing how people respond to pressure, how they engage in complex logic, and so on. That might be true, unless of course you have someone who brushed up on the solutions to the silly puzzles, and they're playing you.

    But here's the real reason why the second experience is wrong: you're evaluating for things that aren't the job. These might have been useful tactics when you had to hire people to write machine language or C++, but in a world dominated by managed code in C# or Java, people aren't managing memory or trying to be smarter than the compilers. They're using well-known design patterns and techniques to deliver software. More to the point, these puzzle gauntlets don't evaluate the things that really matter. They don't get into code design, issues of loose coupling and testability, knowledge of the basics around HTTP, or anything else that relates to building supportable and maintainable software. The first situation, involving real-life problems, gives you an immediate idea of how the candidate will work out. One of my favorite experiences as an interviewee was with a guy who literally brought his work from that day and asked me how to deal with his problem.
    I had to demonstrate how I would design a class, make sure the unit-testing coverage was solid, and so on. I worked at that company for two years. So stop looking for algorithm puzzle crunchers, because a guy who can crush a Fibonacci sequence might also be a guy who writes a class with 5,000 lines of untestable code. Fashion your interview process on ways to reveal a developer who can write supportable and maintainable code. I would even go so far as to let them use the Google. If they want to cut-and-paste code, pass on them, but if they're looking for context or straight class references, hire them, because they're going to be life-long learners.

    The contractor problem

    I doubt anyone has ever worked in a place where contractors weren't used. The use of contractors seems like an obvious way to control costs. You can hire someone for just as long as you need them and then let them go. You can even give them the work that no one else wants to do. In practice, most places I've worked have retained and budgeted for the contractor year-round, meaning that the $90+ per hour they're paying (of which half goes to the person) would have been better spent on a full-time person with a $100k salary and benefits. But it's not even the cost that is an issue. It's the quality of work delivered. The accountability of a contractor is totally transient. They only need to deliver for as long as you keep them around, and chances are they'll never again touch the code. There's no incentive for them to get things right, and little incentive to understand your system or learn anything. At the risk of making an unfair generalization, craftsmanship doesn't matter to most contractors.

    The education problem

    I don't know what they teach in college CS courses. I've believed for most of my adult life that a college degree was an essential part of being successful. Of course I would hold that bias, since I did it, and have the paper to show for it in a box somewhere in the basement. My first clue that maybe this wasn't a fully qualified opinion comes from the fact that I double-majored in journalism and radio/TV, not computer science. Eventually I worked with people who skipped college entirely, many of them at Microsoft. Then I worked with people who had a master's degree who sucked at writing code, next to the high-school-diploma types who rock it every day. I still think there's a lot to be said for the social development of someone who has the on-campus experience, but for software developers, college might not matter.

    As I mentioned before, most of us are not writing compilers, and we never will. It's actually surprising to find how many people are self-taught in the art of software development, and that should reveal some interesting truths about how we learn. The first truth is that we learn largely out of necessity. There's something that we want to achieve, so we do what I call just-in-time learning to meet those goals. We acquire knowledge when we need it.

    So what about the gaps in our knowledge? That's where the most valuable education occurs, via our mentors. They're the people we work next to and the people who write blogs. They are critical to our professional development. They don't need to be an encyclopedia of jargon, but they understand the craft. Even at this stage of my career, I probably can't tell you what SOLID stands for, but you can bet that I practice the principles behind that acronym every day. That comes from experience, augmented by my peers.
    I'm hell-bent on passing that experience to others.

    Process issues

    If you're a manager type and don't do much in the way of writing code these days (shame on you for not messing around at least), then your job is to isolate your tradespeople from nonsense while bringing your business into the realm of modern software development. That doesn't mean you slap up a white board with sticky notes and start calling yourself agile; it means getting all of your stakeholders to understand that frequent delivery of quality software is the best way to deal with change and evolving expectations. It also means that you have to play technical overlord to make sure the education and quality issues are dealt with. That's why I make the crack about sticky notes: without the right techniques being practiced among your code monkeys, you're just a guy with sticky notes. You're asking your business to accept frequent and iterative delivery, so make sure that the folks writing the code can handle the same thing. This means unit testing, the right instrumentation, integration tests, automated builds and deployments... all of the stuff that makes it easy to see when a change breaks something.

    The prognosis

    I strongly believe that education is the most important part of what we do. I'm encouraged by things like The Starter League, and it's the kind of thing I'd love to see more of. I would go as far as to say I'd love to start something like this internally at an existing company. Most of all, though, I can't emphasize enough how important it is that we mentor each other and share our knowledge. If you have people on your staff who don't want to learn, fire them. Seriously, get rid of them. A few months working with someone really good, who understands the craftsmanship required to build supportable and maintainable code, will change that person forever and increase their value immeasurably.

    Read the article

  • Of transactions and Mongo

    - by Nuri Halperin
    Originally posted on: http://geekswithblogs.net/nuri/archive/2014/05/20/of-transactions-and-mongo-again.aspx

    What's the first thing you hear about NoSQL databases? That they lose your data? That there are no transactions? No joins? No hope for "real" applications? Well, you *should* be wondering whether a certain kind of database is the right one for your job. But if you do so, you should be wondering that about "traditional" databases as well!

    In the spirit of exploration, let's take a look at a common challenge: You are a bank. You have customers with accounts. Customer A wants to pay B. You want to allow that only if A can cover the amount being transferred. Let's look at the problem without the context of any database engine in mind. What would you do? How would you ensure that the transfer is done "properly"? Would you prevent a "transaction" from taking place unless A can cover the amount? There are several options:

    1. Prevent any change to A's account while the transfer is taking place. That boils down to locking.
    2. Apply the change, and allow A's balance to go below zero. Charge person A some interest on the negative balance. Not friendly, but certainly a choice.
    3. Don't do either.

    Options 1 and 2 are difficult to attain in the NoSQL world. Mongo won't save you headaches here either. Option 3 looks a bit harsh. But here's where this can go: a ledger. See, an account doesn't need to be represented by a single row in a table of all accounts with only the current balance on it. More often than not, accounting systems use ledgers. And entries in ledgers, as it turns out, don't actually get updated. Once a ledger entry is written, it is not removed or altered. A transaction is represented by an entry in the ledger stating an amount withdrawn from A's account and an entry in the ledger stating an addition of said amount to B's account. To save space, both sides of the transfer can be captured in a single entry. Think {Timestamp, FromAccountId, ToAccountId, Amount}.

    The implication of the original question - "how do you enforce the non-negative balance rule?" - then boils down to:

    1. Insert the entry in the ledger.
    2. Run validation of recent entries.
    3. Insert a reverse entry to roll back the transaction if validation failed.

    What is validation? Sum up the transactions that A's account has (all deposits and debits), and ensure the balance is positive. For the sake of efficiency, one can roll up transactions and "close the book" on older transactions with a pseudo-entry stating the balance as of midnight or something. This lets you avoid doing math on the fly over too many transactions; you simply run from the latest "approved balance" marker to date. But that's an optimization, and premature optimizations are the root of (some? most?) evil.

    Back to some nagging questions, though: "But Mongo is only eventually consistent!" Well, yes, kind of. It's not actually true that Mongo has no transactions. It would be more descriptive to say that Mongo's transaction scope is a single document in a single collection. A write to a Mongo document happens completely or not at all. So although it is true that you can't update more than one document "at the same time" under a "transaction" umbrella as an atomic update, it is NOT true that there is no isolation. A competition between two concurrent updates is completely coherent, and the writes will be serialized. They will not scribble on the same document at the same time.
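    To make the ledger mechanics concrete, here's a minimal sketch in Python with pymongo. The database, collection, and field names are illustrative assumptions; a real system would index the account fields and roll up old entries as described above:

    from datetime import datetime, timezone
    from pymongo import MongoClient

    ledger = MongoClient().bank.ledger  # hypothetical database/collection

    def balance(account_id):
        # Credits minus debits across every ledger entry touching the account.
        credits = sum(e["amount"] for e in ledger.find({"to": account_id}))
        debits = sum(e["amount"] for e in ledger.find({"from": account_id}))
        return credits - debits

    def transfer(from_id, to_id, amount):
        # 1. Insert the ledger entry; a single-document insert is atomic.
        entry = {"ts": datetime.now(timezone.utc),
                 "from": from_id, "to": to_id, "amount": amount}
        ledger.insert_one(entry)
        # 2. Validate: the source account must not go negative.
        if balance(from_id) < 0:
            # 3. Roll back with a compensating entry in the opposite
            #    direction; nothing is ever updated in place.
            ledger.insert_one({"ts": datetime.now(timezone.utc),
                               "from": to_id, "to": from_id,
                               "amount": amount, "compensates": entry["_id"]})
            return False
        return True

    Note that the validation read should go against the writable (primary) instance, a point the next part returns to.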
    In our case - in choosing a ledger approach - we're not even trying to "update" a document; we're simply adding a document to a collection. So there goes the "no transactions" issue.

    Now let's turn our attention to consistency. What you should know about Mongo is that at any given moment, only one member of a replica set is writable. This means that the writable instance in a set of replicated instances always has "the truth." There could be a replication lag such that a reader going to one of the replicas still sees an "old" state of a collection or document. But in our ledger case, things fall nicely into place: run your validation against the writable instance. It is guaranteed to have the ledger either with (after) or without (before) the ledger entry that got written. No funky states. Again, the ledger writing *adds* a document, so there's no inconsistent document state to be had either way.

    Next, we might worry about data loss. Here, Mongo offers several write concerns. A write concern in Mongo is a mode that marshals how uptight you want the db engine to be about actually persisting a document write to disk before it reports to the application that it is "done." The most volatile is to say you don't care. In that case, Mongo would just accept your write command and say back "thanks" with no guarantee of persistence. If the server loses power at the wrong moment, it may have said "ok" but not actually written the data to disk. That's kind of bad. Don't do that with data you care about. It may be good for votes on a poll regarding how cute a furry animal is, but not so good for business.

    There are several other write concerns, varying from flushing the write to the disk of the writable instance, to flushing to disk on several members of the replica set, a majority of the replica set, or all of the members of a replica set. The first choice is the quickest, as no network coordination is required beyond the main writable instance. The others impose extra network and time cost. Depending on your tolerance for latency and read lag, you will face a choice of what works for you. It's really important to understand that no data loss occurs once a document is flushed to an instance. The record is on disk at that point. From that point on, backup strategies and disaster recovery are your worry, not loss of power to the writable machine. This scenario is no different from a relational database at that point.

    Where does this leave us? Oh, yes. Eventual consistency. By now, we've ensured that the "source of truth" instance has the correct data, persisted and coherent. But because of lag, the app may have gone to the writable instance, performed the update, and then gone to a replica and looked at the ledger there before the transaction replicated. Here are two options to deal with this. Similar to write concerns, Mongo supports read preferences. An app may choose to read only from the writable instance. This is not an awesome choice to make for every read, because it just burdens the one instance and doesn't make use of the other read-only servers. But this choice can be made on a query-by-query basis. So for the app that our person A is using, we can have person A issue the transfer command to B, and then if that same app is going to immediately ask "are we there yet?" we'll query that same writable instance. But B and anyone else in the world can just chill and read from the read-only instance. They have no basis to expect that the ledger has just been written to.
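    In pymongo terms, the write-concern and read-preference choices described above look roughly like this; the connection string and collection names are placeholders:

    from pymongo import MongoClient, ReadPreference, WriteConcern

    client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")

    # Don't acknowledge the write until a majority of the set has journaled it.
    safe_ledger = client.bank.get_collection(
        "ledger", write_concern=WriteConcern(w="majority", j=True))
    safe_ledger.insert_one({"from": "A", "to": "B", "amount": 100})

    # Person A's own app reads from the primary to see its write immediately...
    my_view = client.bank.get_collection(
        "ledger", read_preference=ReadPreference.PRIMARY)

    # ...while everyone else reads from secondaries and tolerates some lag.
    lagged_view = client.bank.get_collection(
        "ledger", read_preference=ReadPreference.SECONDARY_PREFERRED)

    The majority write concern trades latency for durability - exactly the dial described above - and the per-collection read preference is what lets A's app demand freshness while B's app stays cheap.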
    So as far as they know, the transaction hasn't happened until they see it appear later. We can further relax the demand by creating an application UI that reacts to a write command with "thank you, we will post it shortly" instead of "thank you, we just did everything and here's the new balance." This is a very powerful thing. UI design for highly scalable systems can't insist that all databases be locked just to paint an "all done" on screen. People understand. They were trained by many online businesses already that placing an order does not mean the product is already outside your door waiting (yes, I know, large retailers are working on it... but we're not there yet).

    The second thing we can do is add some artificial delay to a transaction's visibility on the ledger. The way that works is simply adding some logic such that a query against the ledger never returns a transaction newer than, say, 15 minutes old whose validation flag is not set. This buys us time in two ways: replication can catch up to all instances by then, and validation rules can run and determine whether this transaction should be "negated" with a compensating transaction. In case we do need to "roll back" the transaction, the backend system can place the timestamp of the compensating transaction at the exact same time as, or 1ms after, the original one. Effectively, once A or B visits their ledger, both transactions would be visible and the overall balance "as of now" would reflect no change. The two transactions (attempted/reverted) would both be visible, since we do actually account for the attempt.

    Hold on a second. There's a hole in the story: what if several transfers from A to some accounts are registered, and two independent validators attempt to compute the balance concurrently? Is there a chance that both would conclude non-sufficient funds even though rolling back transaction 100 would free up enough for transaction 117 (some random later transaction)? Yes, there is that chance. But the integrity of the business rule is not compromised, since the prime rule is: don't dispense money you don't have. To minimize or eliminate this scenario, we can also assign a single validation process per origin account. This may seem non-scalable, but it can easily be done as a "sharded" distribution. Say we have 11 validation threads (or processing nodes, etc.). We divide the account-number space such that each validator is exclusively responsible for a certain range of account numbers. Sounds cunningly similar to Mongo's sharding strategy, doesn't it? Each validator then works in isolation. More capacity needed? Chop the account space into more chunks.

    So where are we now with the nagging questions? "No joins": Huh? What are those for? "No transactions": You mean no cross-collection and no cross-document transactions? Granted - but you don't always need them either. "No hope for real applications": well... There are more issues and edge cases to slog through, I'm sure. But hopefully this gives you some ideas of how to solve common problems without distributed locking and relational databases. But then again, you can choose relational databases if they suit your problem.
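    As a closing illustration, the sharded-validation idea might look like the sketch below. The modulo partition stands in for the range-based split described above, and the field names are assumptions:

    NUM_VALIDATORS = 11  # the 11 validation threads from the example above

    def validator_for(account_id):
        # Each account maps to exactly one validator, so all transfers out of
        # a given account are validated serially - no cross-validator race.
        return account_id % NUM_VALIDATORS

    # Validator i pulls only the pending entries for its own slice of
    # accounts, using Mongo's $mod query operator:
    #   ledger.find({"validated": {"$exists": False},
    #                "from": {"$mod": [NUM_VALIDATORS, i]}})

    Need more capacity? Raise NUM_VALIDATORS and the account space is rechunked, just as the post describes.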

    Read the article

< Previous Page | 84 85 86 87 88 89 90 91 92 93 94 95  | Next Page >