Search Results

Search found 30250 results on 1210 pages for 'good guy'.

Page 206/1210 | < Previous Page | 202 203 204 205 206 207 208 209 210 211 212 213  | Next Page >

  • The Product Owner

    - by Robert May
    In a previous post, I outlined the rules of Scrum.  This post details one of those rules. Picking the most important part of Scrum is difficult.  All of the rules are required, but if there were one rule that is “more” required than every other rule, it’s having a good Product Owner.  Simply put, the Product Owner can make or break the project. Duties of the Product Owner A Product Owner has many duties and responsibilities.  I’ll talk about each of these duties in detail below. A Product Owner: Discovers and records stories for the backlog. Prioritizes stories in the Product Backlog, Release Backlog and Iteration Backlog. Determines Release dates and Iteration Dates. Develops story details and helps the team understand those details. Helps QA to develop acceptance tests. Interacts with the Customer to make sure that the product is meeting the customer’s needs. Discovers and Records Stories for the Backlog When I do Scrum, I always use User Stories as the means for capturing functionality that’s required in the system.  Some people will use Use Cases, but the same rule applies.  The Product Owner has the ultimate responsibility for figuring out what functionality will be in the system.  Many different mechanisms for capturing this input can be used.  User interviews are great, but all sources should be considered, including talking with Customer Support types.  Often, they hear what users are struggling with the most and are a great source for stories that can make the application easier to use. Care should be taken when soliciting user stories from technical types such as programmers and the people that manage them.  They will almost always give stories that are very technical in nature and may not have a direct benefit for the end user.  Stories are about adding value to the company.  If the stories don’t have a direct benefit to the end user, the Product Owner should question whether or not the story should be implemented.  In general, technical stories should be included as tasks in User Stories.  Technical stories are often needed, but the ultimate value to the user is in user-based functionality, so technical stories should be considered nothing more than overhead in providing that user functionality. Until the iteration prior to development, stories should be nothing more than short, one-line placeholders. An exercise called Story Planning can be used to brainstorm and come up with stories.  I’ll save the description of this activity for another blog post. For more information on User Stories, please read the book User Stories Applied by Mike Cohn. Prioritizes Stories in the Product Backlog, Release Backlog and Iteration Backlog Prioritization of stories is one of the most difficult tasks that a Product Owner must do.  A key concept of Scrum done right is the need to have the team working from a single set of prioritized stories.  If the team does not have a single set of prioritized stories, Scrum will likely fail at your organization.  The Product Owner is the ONLY person who has the responsibility to prioritize that list.  The Product Owner must be very diplomatic and sincerely listen to the people around him so that he can get the priorities correct. Just listening will still not yield the proper priorities.  Care must also be taken to ensure that Return on Investment is considered.  Ultimately, determining which stories give the most value to the company for the least cost is the most important factor in determining priorities.
    Product Owners should be willing to look at cold, hard numbers to determine the order for stories.  Even when many people want a feature, if that feature is costly to develop, it may not have as high a return on investment as features that are cheaper, but not as popular. The act of prioritization often causes conflict in an environment.  Customer Service thinks that feature X is the most important, because it will stop people from calling.  Operations thinks that feature Y is the most important, because it will stop servers from crashing.  Developers think that feature Z is most important because it will make writing software much easier for them.  All of these are useful goals, but the team can have only one list of items, and each item must have a priority that is different from all other stories.  The Product Owner will determine which feature gives the best return on investment and the other features will have to wait their turn, which means that someone will not have their top priority feature implemented first. A weak Product Owner will refuse to do prioritization.  I’ve heard from multiple Product Owners the following phrase, “Well, it’s all got to be done, so what does it matter what order we do it in?”  If your product owner is using this phrase, you need a new Product Owner.  Order is VERY important.  In Scrum, every release is potentially shippable.  If the wrong priority items are developed, then the value added in each release isn’t what it should be.  Additionally, the Product Owner with this mindset doesn’t understand Agile.  A product is NEVER finished until the company has decided that it is no longer a going concern and they are no longer going to sell the product.  Therefore, prioritization isn’t an event, it’s something that continues every day.  The logical extension of the phrase “It’s all got to be done” is that you will never ship your product, since a product is never “done.”  Once stories have been prioritized, assigning them to the Release Backlog and the Iteration Backlog becomes relatively simple.  The top priority items are copied into the respective backlogs in order and the task is complete.  The team does have the right to shuffle things around a little in the iteration backlog.  For example, they may determine that working on story C with story A is appropriate because they’re related, even though story B is technically a higher priority than story C.  Or they may decide that story B is too big to complete in the time available after Story A has tasks created, so they’ll work on Story C since it’s smaller.  They can’t, however, go deep into the backlog to pick stories to implement.  The team and the Product Owner should work together to determine what’s best for the company. Prioritization is time-consuming, but it’s one of the most important things a Product Owner does. Determines Release Dates and Iteration Dates Product owners are responsible for determining release dates for a product.  A common misconception that Product Owners have is that every “release” needs to correspond with an actual release to customers.  This is not the case.  In general, releases should be no more than 3 months long.  You may decide to release the product to the customers, and many companies do release the product to customers, but it may also be an internal release. If a release date is too far away, developers will fall into the trap of not feeling a sense of urgency.  The date is far enough away that they don’t need to give the release their full attention.
    Additionally, important tasks, such as performance tuning, regression testing, user documentation, and release preparation, will not happen regularly, making them much more difficult and time-consuming to do.  The more frequently you do these tasks, the easier they are to accomplish. The Product Owner will be a key participant in determining whether or not a release should be sent out to the customers.  The determination should be made on whether or not the features contained in the release are valuable enough and complete enough that the customers will see real value in the release.  Often, some features will take more than three months to reach a state where they qualify for a release or need additional supporting features to be released.  The product owner has the right to make this determination. In addition to release dates, the Product Owner also will help determine iteration dates.  In general, an iteration length should be chosen and the team should follow that iteration length for an extended period of time.  If the iteration length is changed every iteration, you’re not doing Scrum.  Iteration lengths help the team and company get into a rhythm of developing quality software.  Iterations should be somewhere between 2 and 4 weeks in length.  Any shorter, and significant software will likely not be developed.  Any longer, and the team won’t feel urgency and planning will become very difficult. Iterations may not be extended during the iteration.  Companies where Scrum isn’t really followed will often use this as a strategy to complete all stories.  They don’t want to face the harsh reality of what their true performance is, and looking good is more important than seeking visibility and improving the process and team.  Companies like this typically don’t allow failure.  This is unhealthy.  Failure is part of life and unless we learn from it, we can’t improve.  I would much rather see a team push out stories to the next iteration and then have healthy discussions about why they failed rather than extend the iteration and not deal with the core problems. If iteration length varies, retrospectives become more difficult.  For example, evaluating the performance of the team’s estimation efforts becomes much more difficult if the iteration length varies.  Also, the team must have a velocity measurement.  If the iteration length varies, measuring velocity becomes impossible and upper management will no longer have the ability to evaluate the team’s performance.  People external to the team will no longer have the ability to determine when key features are likely to be developed.  Variable iterations cause the entire company to fail and likely cause Scrum to fail at an organization. Develops Story Details and Helps the Team Understand Those Details A key concept in Scrum is that the stories are nothing more than a placeholder for a conversation.  Stories should be nothing more than short, one-line statements about the functionality.  The team will then converse with the Product Owner about the details of that story.  The product owner needs to have a very good idea about what the details of the story are and needs to be able to help the team understand those details. Too often, we see this requirement as being translated into the need for comprehensive documentation about the story, including old-fashioned requirements documentation.
    The team should only develop the documentation that is required and should not develop documentation that is only created because there is a process to do so. In general, what we see work best is this: in the iteration before a team starts development work on a story, the Product Owner, with other appropriate business analysts, develops the details of that story.  They’ll figure out what business rules are required, potentially make paper prototypes or other lightweight mock-ups, and they seek to understand the story and what is implied.  Note that the time allowed for this task is deliberately short.  The Product Owner only has a single iteration to develop all of the stories for the next iteration. If more than one iteration is used, I’ve found that teams will end up with Big Design Up Front and traditional requirements documents.  This is a waste of time, since the team will then need to have discussions with the Product Owner to figure out what the requirements document says.  Instead of this, skip making the pretty pictures and detailing the nuances of the requirements and build only what is minimally needed by the team to do development.  If something comes up during development, you can address it at that time and figure out what you want to do.  The goal is to keep things as lightweight as possible so that everyone can move as quickly as possible. Helps QA to Develop Acceptance Tests In Scrum, no story can be counted until it is accepted by QA.  Because of this, acceptance tests are very important to the team.  In general, acceptance tests need to be developed prior to the iteration or at the very beginning of the iteration so that the team can make sure that the tasks that they develop will fulfill the acceptance criteria. The Product Owner will help the team, including QA, understand what will make the story acceptable.  Note that the Product Owner needs to be careful about specifying that the feature will work “Perfectly” at the end of the iteration.  In general, features are developed a little bit at a time, so only the bit that is being developed should be considered as necessary for acceptance. A weak Product Owner will make statements like “Do it right the first time.”  Not only are these statements damaging to the team (like they would try to do it WRONG the first time . . .), they’re also ignoring the iterative nature of Scrum.  Additionally, a weak product owner will seek to add scope in the acceptance testing.  For example, they will refuse to determine acceptance at the beginning of the iteration, and then, after the team has planned and committed to the iteration, they will expand scope by defining acceptance.  This often causes the team to miss the iteration because scope that wasn’t planned on is included.  There are ways that the team can mitigate this problem.  For example, include extra “Product Owner” time to deal with the uncertainty that you know will be introduced by the Product Owner.  This will slow the perceived velocity of the team and is not ideal, since they’ll be doing more work than they get credit for. Interacts with the Customer to Make Sure that the Product is Meeting the Customer’s Needs Once development is complete, what the team has worked on should be put in front of real live people to see if it meets the needs of the customer.  One of the great things about Agile is that if something doesn’t work, we can revisit it in a future iteration!
    This frees up the team to make the best decision now and know that if that decision proves to be incorrect, the team can revisit it and change that decision. Features are about adding value to the customer, so if the customer doesn’t find them useful, then having the team make tweaks is valuable.  In general, most software will be 80 to 90 percent “right” after the initial round and only minor tweaks are required.  If proper coding standards are followed, these tweaks are usually minor and easy to accomplish.  Product Owners that are doing a good job will encourage real users to see and use the software, since they know that they are trying to add value to the customer. Poor product owners will think that they know the answers already, that their customers are silly and do stupid things and that they don’t need customer input.  If you have a product owner that is afraid to show the team’s work to real customers, you probably need a different product owner. Up Next, “Who Makes a Good Product Owner.” Followed by, “Messing with the Team.” Technorati Tags: Scrum, Product Owner

    Read the article

  • Another Marketing Conference, part one – the best morning sessions.

    - by Roger Hart
    Yesterday I went to Another Marketing Conference. I honestly can’t tell if the title is just tipping over into smug, but in the balance of things that doesn’t matter, because it was a good conference. There was an enjoyable blend of theoretical and practical, and enough inter-disciplinary spread to keep my inner dilettante grinning from ear to ear. Sure, there was a bumpy bit in the middle, with two back-to-back sales pitches and a rather thin overview of the state of the web. But the signal:noise ratio at AMC2012 was impressively high. Here’s the first part of my write-up of the sessions. It’s a bit of a mammoth. It’s also a bit of a mash-up of what was said and what I thought about it. I’ll add links to the videos and slides from the sessions as they become available. Although it was in the morning session, I’ve not included Vanessa Northam’s session on the power of internal comms to build brand ambassadors. It’ll be in the next roundup, as this is already pushing 2.5k words. First, the important stuff. I was keeping a tally, and nobody said “synergy” or “leverage”. I did, however, hear the term “marketeers” six times. Shame on you – you know who you are. 1 – Branding in a post-digital world, Graham Hales This initially looked like being a sales presentation for Interbrand, but Graham pulled it out of the bag a few minutes in. He introduced a model for brand management that was essentially Plan >> Do >> Check >> Act, with Do and Check rolled up together, and went on to stress that this looks like an overall business management model for a reason. Brand has to be part of your overall business strategy and metrics if you’re going to care about it at all. This was the first iteration of what proved to be one of the event’s emergent themes: do it throughout the stack or don’t bother. Graham went on to remind us that brands, in so far as they are owned at all, are owned by and co-created with our customers. Advertising can offer a message to customers, but they provide the expression of a brand. This was a preface to talking about an increasingly chaotic marketplace, with increasingly hard-to-manage purchase processes. Services like Amazon reviews and TripAdvisor (four presenters would make this point) saturate customers with information, and give them a kind of vigilante power to comment on and define brands. Consequently, customers experience a number of “moments of deflection” in our sales funnels. Our control is lessened, and failure to engage increasingly damages buying decisions. The clearest example given was the failure of NatWest’s “caring bank” campaign, where staff in branches, customer support, and online presences didn’t align. A discontinuity of experience basically made the campaign worthless, and disgruntled customers talked about it loudly on social media. This in turn presented an opportunity to engage and show caring, but that wasn’t taken. What I took away was that brand (co)creation is ongoing and needs monitoring and metrics. But reciprocally, given you get what you measure, strategy and metrics must include brand if any kind of branding is to work at all. Campaigns and messages must permeate product and service design. What that doesn’t mean (and Graham didn’t say it did) is putting Marketing at the top of the pyramid, and having them bawl demands at Product Management, Support, and Development like an entitled toddler. It’s going to have to be collaborative, and session 6 on internal comms handled this really well.
    The main thing missing here was substantiating data, and the main question I found myself chewing on was: if we’re building brands collaboratively and in the open, what about the cultural politics of trolling? 2 – Challenging our core beliefs about human behaviour, Mark Earls This was definitely the best show of the day. It was also some of the best content. Mark talked us through nudging, behavioural economics, and some key misconceptions around decision making. Basically, people aren’t rational, they’re petty, reactive, emotional sacks of meat, and they’ll go where they’re led. Comforting stuff. Examples given were the spread of the London Riots and the “discovery” of the mountains of Kong, and the popularity of Susan Boyle, which, in turn, made me think about Per Mollerup’s concept of “social wayshowing”. Mark boiled his thoughts down into four key points which I completely failed to write down word for word: People do, then think – Changing minds to change behaviour doesn’t work. Post-rationalization rules the day. See also: mere exposure effects. Spock < Kirk - Emotional/intuitive comes first, then we rationalize impulses. The non-thinking, emotive, reactive processes run much faster than the deliberative ones. People are not really rational decision makers, so intervening with information may not be appropriate. Maximisers or satisficers? – Related to the last point. People do not consistently, rationally, maximise. When faced with an abundance of choice, they prefer to satisfice than evaluate, and will often follow social leads rather than think. Things tend to converge – Behaviour tends toward a consensus normal. When faced with choices people overwhelmingly just do what they see others doing. Humans are extraordinarily good at mirroring behaviours and receiving influence. People “outsource the cognitive load” of choices to the crowd. Mark’s headline quote was probably “the real influence happens at the table next to you”. Reference examples, word of mouth, and social influence are tremendously important, and so talking about product experiences may be more important than talking about products. This reminded me of Kathy Sierra’s “creating bad-ass users” concept of designing to make people more awesome rather than products they like. If we can expose user-awesome, and make sharing easy, we can normalise the behaviours we want. If we normalize the behaviours we want, people should make and post-rationalize the buying decisions we want.  Where we need to be: “A bigger boy made me do it” Where we are: “a wizard did it and ran away” However, it’s worth bearing in mind that some purchasing decisions are personal and informed rather than social and reactive. There’s a quadrant diagram, in fact. What was really interesting, though, towards the end of the talk, was some advice for working out how social your products might be. The standard technology adoption lifecycle graph is essentially about social product diffusion. So this idea isn’t really new. Geoffrey Moore’s “chasm” idea may not strictly apply. However, his concepts of beachheads and reference segments are exactly what is required to normalize and thus enable purchase decisions (behaviour change). The final thing is that in only very few categories does a better product actually affect the purchase decision. Where the choice is personal and informed, this is true. But where it’s personal and impulsive, or in any way social, “better” is trumped by popularity, endorsement, or “point of sale salience”.
    UX, UCD, and e-commerce know this to be true. A better (and easier) experience will always beat “more features”. Easy to use, and easy to observe being used will beat “what the user says they want”. This made me think about the astounding stickiness of rational fallacies, “common sense” and the pathological willful simplifications of the media. Rational fallacies seem like they’re basically the heuristics we use for post-rationalization. If I were profoundly grimy and cynical, I’d suggest deploying a boat-load in our messaging, to see if they’re really as sticky and appealing as they look. 4 – Changing behaviour through communication, Stephen Donajgrodzki This was a fantastic follow up to Mark’s session. Stephen basically talked us through some tactics used in public information/health comms that implement the kind of behavioural theory Mark introduced. The session was largely about how to get people to do (good) things they’re predisposed not to do, and how communication can (and can’t) make positive interventions. A couple of things stood out, in particular “implementation intentions” and how they can be linked to goals. For example, in order to get people to check and test their smoke alarms (a goal intention, rarely actualized), an information campaign will attempt to link this activity to the clocks going back or forward (a strong implementation intention, well-actualized). The talk reinforced the idea that making behaviour changes easy and visible normalizes them and makes them more likely to succeed. To do this, they have to be embodied throughout a product and service cycle. Experiential disconnects undermine the normalization. So campaigns, products, and customer interactions must be aligned. This is underscored by the second section of the presentation, which talked about interventions and pre-conditions for change. Taking the examples of drug addiction and stopping smoking, Stephen showed us a framework for attempting (and succeeding or failing in) behaviour change. He noted that when the change is something people fundamentally want to do, and that is easy, this gets a lot simpler. Coordinated, easily-observed environmental pressures create preconditions for change and build motivation. (price, pub smoking ban, ad campaigns, friend quitting, declining social acceptability) A triggering event leads to a change attempt. (getting a cold and panicking about how bad the cough is) Interventions can be made to enable an attempt (NHS services, public information, nicotine patches) If it succeeds – yay. If it fails, there’s strong negative reinforcement. Triggering events seem largely personal, but messaging can intervene in the creation of preconditions and in supporting decisions. Stephen talked more about systems of thinking and “bounded rationality”. The idea being that to enable change you need to break through “automatic” thinking into “reflective” thinking. Disruption and emotion are great tools for this, but that is only the start of the process. It occurs to me that a great deal of market research is focused on determining triggers rather than analysing necessary preconditions. Although they are presumably related. The final section talked about setting goals. Marketing goals are often seen as deriving directly from business goals. However, marketing may be unable to deliver on these directly where decision and behaviour-change processes are involved. In those cases, marketing and communication goals should be to create preconditions. They should also consider priming and norms.
    Content marketing and brand awareness are good first steps here, as brands can be heuristics in decision making for choice-saturated consumers, or those seeking education. 5 – The power of engaged communities and how to build them, Harriet Minter (the Guardian) The meat of this was that you need to let communities define and establish themselves, and be quick to react to their needs. Harriet had been in charge of building the Guardian’s community sites, and learned a lot about how they come together, stabilize, grow, and react. Crucially, they can’t be about sales or push messaging. A community is not just an audience. It’s essential to start with what this particular segment or tribe are interested in, then what they want to hear. Eventually you can consider – in light of this – what they might want to buy, but you can’t start with the product. A community won’t cohere around one you’re pushing. Her tips for community building were (again, sorry, not verbatim): Set goals: Have some targets. Community building sounds vague and fluffy, but you can have (and adjust) concrete goals. Think like a start-up: This is the “lean” stuff. Try things, fail quickly, respond. Don’t restrict platforms: Let the audience choose them, and be aware of their differences. For example, LinkedIn is very different to Twitter. Track your stats: Related to the first point. Keeping an eye on the numbers lets you respond. They should be qualified, however. If you want a community of enterprise decision makers, headcount alone may be a bad metric – have you got CIOs, or just people who want to get jobs by mingling with CIOs? Build brand advocates: Do things to involve people and make them awesome, and they’ll cheer-lead for you. The last part really got my attention. Little bits of drive-by kindness go a long way. But more than that, genuinely helping people turns them into powerful advocates. Harriet gave an example of the Guardian engaging with an aspiring journalist on its Q&A forums. Through a series of serendipitous encounters he became a BBC producer, and now enthusiastically speaks up for the Guardian community sites. Cultivating many small, authentic, influential voices may have a better pay-off than schmoozing the big guys. This could be particularly important in the context of Mark and Stephen’s models of social, endorsement-led, and example-led decision making. There’s a lot here I haven’t covered, and it may be worth some follow-up on community building. Thoughts I was quite sceptical of nudge theory and behavioural economics. First off it sounds too good to be true, and second it sounds too sinister to permit. But I haven’t done the background reading. So I’m going to, and if it seems to hold real water, and if it’s possible to do it ethically (Stephen’s presentation suggests it may be) then it’s probably worth exploring. The message seemed to be: change what people do, and they’ll work out why afterwards. Moreover, the people around them will do it too. Make the things you want them to do extraordinarily easy and very, very visible. Normalize and support the decisions you want them to make, and they’ll make them. In practice this means not talking about the thing, but showing the user-awesome. Glib? Perhaps. But it feels worth considering. Also, if I ever run a marketing conference, I’m going to ban speakers from using examples from Apple. Quite apart from not being consistently generalizable, it’s becoming an irritating cliché.

    Read the article

  • DevConnections Session Slides, Samples and Links

    - by Rick Strahl
    Finally coming up for air this week, after catching up with being on the road for the better part of three weeks. Here are my slides, samples and links for my four DevConnections sessions two weeks ago in Vegas. I ended up doing one extra, unprepared-for session on WebAPI and AJAX, as some of the speakers were either delayed or unable to make it at all to Vegas due to Sandy's mayhem. It was pretty hectic in the speaker room as Erik (our event coordinator extraordinaire) was scrambling to fill session slots with speakers :-). Surprisingly it didn't feel like the storm affected attendance drastically though, but I guess it's hard to tell without actual numbers. The conference was a lot of fun - it's been a while since I've been speaking at one of these larger conferences. I'd been taking a hiatus, and I forgot how much I enjoy actually giving talks. Preparing - well, not quite so much, especially since I ended up essentially preparing from scratch or completely rewriting all three of these talks, and I was stressing out a bit as I was sick the week before the conference and didn't get as much time to prepare as I wanted to. But - as always seems to be the case - it all worked out, but I guess those that attended have to be the judge of that… It was great to catch up with my speaker friends as well - man, I feel out of touch. I got to spend a bunch of time with Dan Wahlin, Ward Bell, Julie Lerman and for about 10 minutes even got to catch up with the ever so busy Michele Bustamante. Lots of great technical discussions including a fun and heated REST controversy with Ward and Howard Dierking. There were also a number of great discussions with attendees, describing how they're using the technologies touched in my talks in live applications. I got some great ideas from some of these and I wish there would have been more opportunities for these kinds of discussions. One thing I miss at these Vegas events though is some sort of coherent event where attendees and speakers get to mingle. These Vegas conferences are just like "go to sessions, then go out and PARTY on the town" - it's Vegas after all! But I think that it's always nice to have at least one evening event where everybody gets to hang out together and trade stories and geek talk. Overall there didn't seem to be much opportunity for that beyond lunch or the small and short exhibit hall events, which it seemed not many people actually went to. Anyways, a good time was had. I hope those of you that came to my sessions learned something useful. There were lots of great questions and discussions after the sessions - I always appreciate hearing the real life scenarios that people deal with in relation to the abstracted scenarios in sessions. Here are the session abstracts, a few comments and the links for downloading slides and samples. It's not quite like being there, but I hope this stuff turns out to be useful to some of you. I'll be following up a couple of these sessions with white papers in the following weeks. Enjoy. ASP.NET Architecture: How ASP.NET Works at the Low Level Abstract: Interested in how ASP.NET works at a low level? ASP.NET is an extremely powerful and flexible technology, but it's easy to forget about the core framework that underlies the higher level technologies like ASP.NET MVC, WebForms, WebPages and Web Services that we deal with on a day to day basis.
    The ASP.NET core drives all the higher level handlers and frameworks layered on top of it, and with the core power comes some complexity in the form of a very rich object model that controls the flow of a request through the ASP.NET pipeline from Windows HTTP services down to the application level. To take full advantage of it, it helps to understand the underlying architecture and model. This session discusses the architecture of ASP.NET along with a number of useful tidbits that you can use for building and debugging your ASP.NET applications more efficiently. We look at overall architecture, how requests flow from the IIS (7 and later) Web Server to the ASP.NET runtime into HTTP handlers, modules and filters and finally into high-level handlers like MVC, Web Forms or Web API. The focus of this session is on the low-level aspects of the ASP.NET runtime, with examples that demonstrate the bootstrapping of ASP.NET, threading models, how Application Domains are used, startup bootstrapping, how configuration files are applied and how all of this relates to the applications you write either using low-level tools like HTTP handlers and modules or high-level pages or services sitting at the top of the ASP.NET runtime processing chain. Comments: I was surprised to see so many people show up for this session - especially since it was the last session on the last day and a short 1 hour session to boot. The room was packed and it was great to see so many people interested in the architecture of ASP.NET beyond their immediate high-level application needs. Lots of great questions in this talk as well - I only wish this session would have been the full hour and 15 minutes, as we came up just a little short of getting through the main material (didn't make it to Filters and Error handling). I haven't done this session in a long time and I had to pretty much re-figure all the system internals having to do with the ASP.NET bootstrapping in light of the changes that came with IIS 7 and later. The last time I did this talk was with IIS6, so I guess it's been a while. I love doing this session, mainly because in my mind the core of ASP.NET overall is so cleanly designed to provide maximum flexibility without compromising performance, and it has clearly stood the test of time in the 10 years or so that .NET has been around. While there are a lot of moving parts, the technology is easy to manage once you understand the core components, and the core model hasn't changed much even while the underlying architecture that drives it has been almost completely revamped, especially with the introduction of IIS 7 and later. Download Samples and Slides   Introduction to using jQuery with ASP.NET Abstract: In this session you'll learn how to take advantage of jQuery in your ASP.NET applications. Starting with an overview of jQuery client features via many short and fun examples, you'll find out about core features like the power of selectors for document element selection, manipulating these elements with jQuery's wrapped set methods in a browser independent way, how to hook up and handle events easily and generally apply concepts of unobtrusive JavaScript principles to client scripting. The second half of the session then delves into jQuery's AJAX features and several different ways you can interact with ASP.NET on the server. You'll see examples of using ASP.NET MVC for serving HTML and JSON AJAX content, as well as using the new ASP.NET Web API to serve JSON and hypermedia content.
    You'll also see examples of client side templating/databinding with Handlebars and Knockout. Comments: This session was in a monster of a room and to my surprise it was nearly packed, given that this was a 100 level session. I can see that it's a good idea to continue to do intro sessions to jQuery as there appeared to be quite a number of folks who had not worked much with jQuery yet and who most likely could greatly benefit from using it. Seemed to me the session got more than a few people excited to get going if they hadn't yet :-).  Anyway, I just love doing this session because it's mostly live coding and highly interactive - there aren't many sessions where I can build things up from scratch and iterate on in an hour. jQuery makes that easy though. Resources: Slides and Code Samples Introduction to jQuery White Paper Introduction to ASP.NET Web API   Hosting the Razor Scripting Engine in Your Own Applications Abstract: The Razor Engine used in ASP.NET MVC and ASP.NET Web Pages is a free-standing scripting engine that can be disassociated from these Web-specific implementations and can be used in your own applications. Razor allows for a powerful mix of code and text rendering that makes it a wonderful tool for any sort of text generation, from creating HTML output in non-Web applications, to rendering mail merge-like functionality, to code generation for developer tools and even as a plug-in scripting engine. In this session, we'll look at the components that make up the Razor engine and how you can bootstrap it in your own applications to hook up templating. You'll find out how to create custom templates and manage Razor requests that can be pre-compiled, detect page changes and act in ways similar to a full runtime. We look at ways that you can pass data into the engine and retrieve both the rendered output as well as result values in a package that makes it easy to plug Razor into your own applications. Comments: That this session was picked was a bit of a surprise to me, since it's a bit of a niche topic. Even more of a surprise was that during the session quite a few people who attended had actually used Razor externally and were there to find out more about how the process works and how to extend it. In the session I talked a bit about a custom Razor hosting implementation (Westwind.RazorHosting) and drilled into the various components required to build a custom Razor hosting engine and a runtime around it. This session was a bit of a chore to prepare for, as there are lots of technical implementation details that need to be dealt with (details that aren't addressed even by some of the wrapper libraries that exist), and squeezing that into an hour and 15 minutes is a bit tight. I found out though that there's quite a bit of interest in using a templating engine outside of web applications, or often side by side with the HTML output generated by frameworks like MVC or WebForms. An extra fun part of this session was that this was my first session, and when I went to set up I realized I forgot my mini-DVI to VGA adapter cable to plug into the projector in my room - 6 minutes before the session was about to start. So I ended up running the half mile plus back to my room and back at a full sprint. I managed to be back only a couple of minutes late, but when I started I was out of breath for the first 10 minutes or so while trying to talk.
    Musta sounded a bit funny as I was trying not to gasp too much :-) Resources: Slides and Code Samples Westwind.RazorHosting GitHub Project Original RazorHosting Blog Post   Introduction to ASP.NET Web API for AJAX Applications Abstract: Web API provides a new framework for creating REST based APIs, but it can also act as a backend to typical AJAX operations. This session covers the core features of Web API as it relates to typical AJAX application development. We’ll cover content negotiation, routing and a variety of output generation options as well as managing data updates from the client in the context of a small Single Page Application style Web app. Finally we’ll look at some of the extensibility features in Web API to customize and extend Web API in a number of useful ways. Comments: This session was a fill-in for session slots left open by MIA speakers stranded by Sandy. I had samples from my previous Web API article, so I decided to go ahead and put together a session from it. Given that I spent only a couple of hours preparing and putting slides together, I was glad it turned out as it did - it kind of just ran itself by way of the examples, I guess, as well as the nice audience interactions and questions. Lots of interest - and also some confusion about when Web API makes sense. Both this session and the jQuery session ended up getting a ton of questions about when to use Web API vs. MVC, whether it would make sense to switch to Web API for all AJAX backend work, etc. In my opinion there's no need to jump to Web API for existing applications that already have a good AJAX foundation. Web API is awesome for real externally consumed APIs and clearly defined application AJAX APIs. For typical application level AJAX calls, it's still a good idea, but ASP.NET MVC can serve most if not all of that functionality just as well. There's no need to abandon MVC (or even ASP.NET AJAX or third party AJAX backends) just to move to Web API. For new projects Web API probably makes good sense for isolation of AJAX calls, but it really depends on how the application is set up. In some cases sharing business logic between the HTML and AJAX interfaces with a single MVC API can be cleaner than creating two completely separate code paths to serve essentially the same business logic. Resources: Slides and Code Samples Sample Code on GitHub Introduction to ASP.NET Web API White Paper © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Conferences, ASP.NET

    Read the article

  • Enabling SFTP Access within PLESK

    - by spelley
    Hello everyone, I have a client who wants to ensure his upload is secure, so we are trying to enable SFTP for him on our Linux Plesk server. I have enabled SSH access to /bin/bash for FTP accounts, and created a new user. When I attempt to SFTP using either the IP address or the domain name, this is the error FileZilla is giving me: Error: Authentication failed. Error: Critical error Error: Could not connect to server Here is some basic information regarding the server: Operating system: Linux 2.6.24.5-20080421a; Plesk Control Panel version: psa v8.6.0_build86080930.03 os_CentOS 5 I had read in some places that I should restart the SSH service in Server - Services; however, there is no SSH service within the list. I'm not really a server guy, so it's quite possible I'm missing something obvious. Thanks for any help that you guys can provide!
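    A few checks that usually narrow this kind of failure down (a sketch, assuming root shell access on the server; the user name is an example):

        # Run the client verbosely to see which authentication step fails
        sftp -v ftpuser@example.com

        # The account's shell must be a real shell listed in /etc/shells;
        # Plesk's "/bin/bash (chrooted)" option can break plain SFTP logins
        grep ftpuser /etc/passwd
        cat /etc/shells

        # sshd must have an sftp subsystem enabled
        grep -i '^Subsystem' /etc/ssh/sshd_config

        # On a CentOS 5 box the SSH daemon is restarted from the shell,
        # not from the Plesk service list
        service sshd restart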

    Read the article

  • Cisco VPN Connection - No internet no nothing

    - by Kevin
    Hi all, Sorry if this has been posted; I tried searching but I am not exactly sure what I am looking for - I am a developer, not a networking guy. We have a client whose servers we need to connect to using the Cisco VPN client. I have installed the software, dropped in the provided .pcf file, and I can connect. However, when I do, I lose all local and internet capabilities: no hosts resolve, and I still can't connect to their internal FTP and development sites. This leads me to believe either a setting is wrong in my Cisco software, and/or their network is not correctly configured. Does anyone who knows Cisco VPN'ing have any pointers? My colleague seems to indicate that they need to enable split tunneling on their end (or a similar setting).

    Read the article

  • SysAdmin Career Question: Internal or Client Based

    - by Malnizzle
    ServerFault Community, It seems there are two positions SysAdmins find themselves in: either you work for a single, non-IT-services client (your employer), providing in-house IT support, or you work for a company that provides outsourced IT services to multiple clients. Right now I work for a company that does the latter, and I often consider how nice it would be doing the in-house side of things - to just have one network I am focused on, and instead of feeling like I have a dozen bosses between clients and internal management, I would just have one set of management and people to appease. There is also the technical aspect of every client wanting something different, and having to manage numerous different technology platforms, or trying to force clients into using the technologies we prefer; neither situation is enjoyable. Is this just "the grass is greener on the other side" syndrome, or is there some legitimacy to the stress of client-based IT work compared to being an in-house IT guy? Thanks!

    Read the article

  • FTP ASCII file from Windows to Mainframe (iSeries) — special characters

    - by MikeM
    I have a text file created on a Windows machine; the code page used on the file is 1252. This file is then ftp'd to an iSeries machine for processing. As far as I can see, when it appears on the iSeries it has a CCSID of 037. Sometimes this file contains French characters (e.g. é). When this happens, the FTP will fail with a truncation error as the French character gets converted to some extra junk: �. The file is fixed block, so the line does get truncated due to the one character turning into 3. I can convert the French characters to characters without the accents before sending, but would prefer to keep everything intact. So is there a way to retain them and send the file over properly? I'm very green on iSeries, mainly a Windows guy.
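    For what it's worth, a one-character-becomes-several expansion is what you would expect if the file is secretly UTF-8 (where é is multi-byte) rather than true 1252, so checking the editor's save encoding is the first step. If the encoding really is 1252, the iSeries FTP server's ASCII-side CCSID can be changed to match; a hedged sketch in IBM i CL, assuming the server's default CCSID is the mismatch:

        CHGFTPA CCSID(1252)        /* declare the ASCII side of FTP conversions as Windows-1252 */
        ENDTCPSVR SERVER(*FTP)     /* restart the FTP server so the change takes effect         */
        STRTCPSVR SERVER(*FTP)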

    Read the article

  • ADSL Modem Goes Slower Than Dialup

    - by peter
    Hi All, I have two ADSL modems. The first one does not have wireless, but is configured and working fine at around 6 - 7 Mbps (ADSL) on Orcon in New Zealand. I bought a Belkin N150 wireless router to replace the first one. I configured it exactly the same as the first one, but a speed test confirms that it is running slower than dialup. One difference I noticed is that the first modem (a Linksys) came from Orcon and didn't have an ADSL username and password set up. The Belkin modem, on the other hand, wouldn't let me leave the username and password fields blank. Any ideas? I am a techy guy, so it doesn't appear to be anything obvious in the settings that I have missed. Thanks.

    Read the article

  • Server Administration

    - by Kassem
    Hi everyone, My client asked me for a job description of a system administrator, because I might be assigned this position along with the other guy I'm working with. To be honest, I do not know much about a System Administrator's job, but I'm willing to learn. Questions: What are the security requirements of a server? * What are the key responsibilities in a system admin's job description? What are some of the day to day tasks of a system admin? What is the average monthly salary of a system admin? Note: I will be working inside a Windows environment, but your replies do not necessarily need to be restricted to a Windows environment. (*) Other software I know will be required are: Windows Server 2008 IIS 7.0 MS SQL Server .NET 4.0 Runtime Let me know if there are other things I should be aware of as well. Thanks!

    Read the article

  • How to create a Linux user without a password but being able to set it?

    - by Leonid Shevtsov
    I have a username and an SSH key for a (hypothetical) guy and I need to give him admin access to a Linux (Ubuntu) server. I want him to be able to log in via SSH and then set his password by himself over a secure connection, instead of passing the password around. I know how to make the password expire and force him to reset it on first login, but this doesn't work unless he has some password already, which I then have to tell him. I thought about making the password blank - SSH wouldn't allow login, but then anyone could su into the user. My question is: is there some best practice for creating accounts in such a way, or is setting a default password unavoidable?
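    One pattern that seems to fit (a sketch, assuming Ubuntu with OpenSSH's default UsePAM yes; the user name and key file are examples): give the account a password field that can never match, so password login and su both fail, then expire it so PAM forces him to choose a password at his first key-based login.

        useradd -m -s /bin/bash newadmin
        usermod -p '*' newadmin          # '*' is not a valid hash: unmatchable, but not "locked"
        mkdir -p /home/newadmin/.ssh
        cat newadmin_key.pub >> /home/newadmin/.ssh/authorized_keys
        chown -R newadmin:newadmin /home/newadmin/.ssh
        chmod 700 /home/newadmin/.ssh
        chmod 600 /home/newadmin/.ssh/authorized_keys
        chage -d 0 newadmin              # password "last changed" at the epoch: must be set at first login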

    Read the article

  • Fixing Windows 7 hibernation

    - by 80skeys
    I've been mucking around with the partitions on my laptop (I'm an experienced Linux/grub guy) and have somehow ended up affecting the ability of Windows 7 to go into hibernate mode. All other functionality seems to be okay, but when I press Hibernate, it behaves as if it starts to (screen goes dark, a little disk activity) but never powers off, and if I move the mouse the login screen instantly comes up. I don't know if Windows uses a separate partition for hibernation? There is a 200MB partition on the drive - I seem to recall it was related to diagnostics or some other Windows boot menu stuff. In any case, I'm wondering if there are some commands I can run to restore the ability to hibernate, and also which partitions need to be marked "active" and if there's anything I need to do to the MBR of the hard drive or the MBR of the Windows partition? As I said, Windows boots fine as long as it is designated the Active partition. I just need to fix hibernation.

    Read the article

  • How do I get a hold of the Reply-To header and remunge it in postfix

    - by Mikhail
    I have a legacy application that emails via php. 5% of the emails aren't going through. The solution is to route all email through a fancy verified mail server like Amazon's SES. I am having some trouble implementing this functionality. It seems this guy had a similar problem. My question is: where in postfix can I set a filter that will take as input the message headers, so that I can manually set the From field to noreply@mydomain.com and the Reply-To field to whatever_php_wants, where whatever_php_wants is dictated by the php program and the user's registration email. I know where to set the noreply portion, but I don't know the exact place in postfix's configuration files where I can intercept complete emails and pass them to a script. Edit So I want emails to look like: FROM: noreply@mydomain.com REPLY-TO: the_users_address@their_email_service.com
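    If a full script turns out to be unnecessary, Postfix's header checks can do the simple half of this; a minimal sketch, assuming the PHP app already emits the Reply-To it wants kept, so only From needs forcing (the domain is the one from the question):

        # /etc/postfix/main.cf -- applied as mail leaves through the smtp client
        smtp_header_checks = regexp:/etc/postfix/rewrite_from

        # /etc/postfix/rewrite_from
        /^From:.*/    REPLACE From: noreply@mydomain.com

        # then:  postfix reload

    For logic that spans headers (copying the original From into Reply-To, say), the usual hook is a content_filter script rather than header checks, since a header check can only match and replace one header at a time.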

    Read the article

  • Can't access Linux machine from the network, network from the machine is fine

    - by Matt
    I'm having issues with a machine that stopped replicating with MySQL. It's managed by a guy on another continent, but recently I've had to get involved. The server is running Ubuntu Server 9.10. I can't log in with SSH; there is no response. On the server itself I can ssh to localhost fine. I thought maybe it's the firewall rules. I'm no expert on iptables, but I believe that's not the issue, as I removed all the rules. But it still won't let me in. Any ideas? From other machines it acts as though the service isn't listening, but I know that it is. It's like this for all services.
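    A few console-side checks that usually pin this down (a sketch; the host name is a placeholder):

        # From a remote machine: does anything answer on port 22 at all?
        nc -zv the-server 22

        # On the server: is sshd listening on the wire, or only on 127.0.0.1?
        netstat -tlnp | grep :22

        # Flushing rules (iptables -F) does not reset the chain policies;
        # a default DROP policy still blocks everything with zero rules listed
        iptables -L -n -v

        # TCP wrappers can reject connections before sshd ever answers
        cat /etc/hosts.allow /etc/hosts.deny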

    Read the article

  • Symmetrix gatekeepers on Solaris 10

    - by Milner
    I have some Solaris machines that are connected to EMC Symmetrix for SAN storage. Apparently the Symm has a gatekeeper device that is used with the symmetrix CLI. We don't need the CLI, but I have these gatekeeper devices that constantly fill /var/adm/messages and the like with corrupt label errors. Is there anything I can do (short of deleting the devices on machine start) to get rid of them? Or should I just try to get our SAN guy to get the installer for the CLI? These things are getting annoying, and the devfsadmd daemon keeps rediscovering them on boot.

    Read the article

  • iproute2 rules and iptables NAT... what is the difference?

    - by Jakobud
    We have 2 different ISP connections. Our previous "IT guy" set up our firewall like so: when /etc/rc.local was executed on startup, it did a bunch of ip rule add and ip route add commands in order to route certain internal hosts to use certain ISP connections. Then at the end of /etc/rc.local, he executed our iptables firewall rules that were generated by Firewall Builder. These iptables rules have both Policy and NAT rules set up in them. What I don't understand is why he used iproute2 to specify rules and routes but also specified NAT rules for iptables. Why didn't he just do it all in one or the other instead of using them both? Could he have gotten rid of the iproute2 rules and routes and just put all those same rules into the iptables NAT settings?
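    The short version, as I understand it: iproute2 policy routing decides which path (which ISP's routing table) a packet leaves by, while iptables NAT rewrites the source address once that path is chosen. They work at different layers, so neither can absorb the other, and a multi-ISP firewall generally needs both. A sketch with invented addresses (the isp2 table would need an entry in /etc/iproute2/rt_tables):

        # iproute2: *which way* traffic leaves
        ip rule add from 192.168.1.0/24 table isp2
        ip route add default via 203.0.113.1 dev eth2 table isp2

        # iptables NAT: *what source address* it leaves with, per egress interface
        iptables -t nat -A POSTROUTING -o eth1 -j SNAT --to-source 198.51.100.10
        iptables -t nat -A POSTROUTING -o eth2 -j SNAT --to-source 203.0.113.10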

    Read the article

  • Windows 7 Audio tracks messed up

    - by Crash893
    I'm not an audio guy, so this might not be the most articulate description of the problem I'm having, but it seems like ever since I went to Windows 7, in some video (Netflix and some YouTube) the background track (music, sound effects, etc.) plays appropriately but the foreground track (i.e. the actors and/or narrator) barely comes in at all. Right now I have just a simple set of PC speakers, and sometimes a pair of headphones that plug into the PC speakers (no fancy 5.1). I've looked at every setting I can think of, but I can't find anything that could be causing this. I've uninstalled the driver and reinstalled, and I still get the same results, so I think it's a software issue. Any ideas?

    Read the article

  • Have to enter google sites through second-level domain

    - by Anton Geraschenko
    I'm having the same problem as this guy. I own two domains hosted on google sites, mydomain.com and mydomain.net. When I go to mydomain.com, it redirects me to the site located at www.mydomain.com (this is the desired behavior). This used to also work on mydomain.net, but now when I go to mydomain.net, I get a Google 404. To see the content, I have to go to www.mydomain.net. As far as I can tell, the DNS settings and Google apps settings for both domains are identical. Does anybody have any idea about what could be happening?
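    A quick way to compare what the two domains actually serve up, as a sketch (at the time, Google Apps expected the www host to be a CNAME to ghs.google.com, with naked-domain redirects configured separately via Google's published A records):

        dig +short www.mydomain.com CNAME     # the working domain...
        dig +short www.mydomain.net CNAME     # ...and the broken one should return the same target
        dig +short mydomain.net A             # naked-domain handling is a separate setting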

    Read the article

  • Can't set 1280x1024 on Ubuntu with Nvidia Geforce 8400 GS

    - by torbengb
    Ubuntu doesn't offer 1280x1024, but that's what my monitor has -- I want to set that resolution but need help doing so. Ubuntu offers 1360x768, 1152x864 and 1024x768 when using the Nvidia driver version 180. This is Ubuntu 9.04 installed as "Wubi" through Windows; I believe it's the amd64 version. Google gave me this precise explanation, which is complete Latin to me; I'm a Windows guy and not familiar with Terminal. These SU questions also don't provide answers, hence this new question. I hope you can help, and I will provide additional info if required.
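    One avenue is adding the missing mode by hand with xrandr; a sketch (the output name VGA-0 is an example - check "xrandr -q" - and the proprietary Nvidia driver sometimes ignores added modes, in which case a Modeline in xorg.conf is the fallback):

        cvt 1280 1024 60     # prints the modeline used on the next line
        xrandr --newmode "1280x1024_60.00" 109.00 1280 1368 1496 1712 1024 1027 1034 1063 -hsync +vsync
        xrandr --addmode VGA-0 "1280x1024_60.00"
        xrandr --output VGA-0 --mode "1280x1024_60.00"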

    Read the article

  • Spoofing MAC address to match other device

    - by boj
    At my school, there is a WiFi network where you can register up to one computer or phone. I, however, wish to connect my phone and my computer (Windows 7). I talked to an IT guy at my school and he told me that it registers the computer based on the MAC address (For the record, he's the one who suggested I spoof). So, I got my phone's MAC and now I want to change my computer's address to the same thing. I found this link and this video, so I know how to change it. I was wondering if that would run into problems because they are normally on the same network at home.

    Read the article

  • Public IP Routing over Private GRE tunnel

    - by Paul
    I have a GRE tunnel configured between two Linux boxes. The tunnel works fine: from each host I can ping the other's private IP. Head private IP: 10.0.0.1, public IP: 8.8.8.8. Tail private IP: 10.0.0.2, public IP: 7.7.7.7. The public IP on Tail has the network block 9.9.9.0/23 statically routed over the 7.7.7.7 interface. The idea is to make the 9.9.9.0/23 IPs work on servers on the 8.8.8.8 network. I configured the tail host to route the /23 block. I mounted a 9.9 IP on the head server. I can ping the 9.9 IP on the head from the tail, but I can't ping the 9.9 IP from the public internet. I think I need to add some other routes because of gateway issues, but I can't seem to wrap my mind around it (not a router guy, just beating my way through something that I have never done before and vaguely understand). --danks
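    In case it helps, a sketch of the routing that usually makes this work (the tunnel device name and the specific 9.9.9.x address are examples): the tail has to push the block into the tunnel, and the head has to send replies for 9.9 addresses back through the tunnel, because the public internet only routes 9.9.9.0/23 toward 7.7.7.7.

        # Tail: steer the public block at the head, and forward packets
        ip route add 9.9.9.0/23 dev gre1
        sysctl -w net.ipv4.ip_forward=1

        # Head: hold a 9.9 address and policy-route its replies into the tunnel
        ip addr add 9.9.9.10/32 dev lo
        ip rule add from 9.9.9.0/23 table 100
        ip route add default via 10.0.0.2 dev gre1 table 100

    Strict reverse-path filtering (rp_filter) on either end can silently drop this asymmetric-looking traffic, so that is worth checking too.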

    Read the article

  • Locking down a box on the web

    - by glowcoder
    I'm a Java developer who is looking to put a game on the web. I'm not much of a web or server guy, though, and frankly I seem a little lost at where I should start with putting something on the web. My application works fine on my machine, and I'm sure I can make it work fine on any box I put it on. But the security of that box is pretty important. If I sign up for a standard hosting package (let's say from GoDaddy or something) can I simply tell them "make port 12345 open for communication" and let them handle the rest of the security details? If I can't, what are the things I'm going to need to know to prevent my game server from getting hacked to shreds? (Links to solid resources fine by me!) Thanks!
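    For a concrete sense of what the host would need to allow, a firewall sketch (assumes a VPS or dedicated box where you have root; shared hosting plans generally will not open arbitrary ports for you, so the hosting type matters more than the rules):

        iptables -P INPUT DROP                                          # default-deny inbound
        iptables -A INPUT -i lo -j ACCEPT                               # local traffic
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT                   # SSH for administration
        iptables -A INPUT -p tcp --dport 12345 -j ACCEPT                # the game port from the question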

    Read the article

  • DSL connection dropping when starting game

    - by bootoo
    New connection set up last week - have called tech support, signal fine, sending out tech guy tomorrow - however I can't make sense of this. Using Vista, 2Wire modem, 6Mb ATT connection, wireless: I can browse websites, stream video from my PC wirelessly to my Xbox, and watch Hulu in high res - the connection seems fine. Then I load up World of Warcraft, I select Enter World, and without fail my 2Wire modem's 'DSL' light goes red, my internet light goes out, and I'm no longer connected. Then after 10 or 15 seconds it works again, but the game has kicked me. I noticed the same thing on opening or closing uTorrent as well. No other phones/devices are plugged into the phone jacks in the apartment - why would joining a game cause my DSL to drop? I can only guess it throws a bunch of information to the server when I hit 'Enter World' and that causes an issue that shuts my connection down, or my router hates me. All help appreciated.

    Read the article

  • Horde complains that Imp is not running

    - by Eric J.
    I'm a mostly-Windows guy tasked with setting up email on an Ubuntu 12.04 instance at AWS and hit the following error: when I browse to Horde, after entering my administrative credentials, I get the error message: A fatal error has occurred imp is not activated. Details have been logged for the administrator. I am following this quite detailed guide: http://www.exratione.com/2012/05/a-mailserver-on-ubuntu-1204-postfix-dovecot-mysql/ This is happening at Step 20, at the text "Now fire up you web browser and navigate to your server at http://mail.example.com/ to verify that you can log in as the configured administrative mail user." (Of course I used my actual domain.) Questions: Where is Horde logging the "details"? Any thoughts on why this might happen? I found Google hits suggesting that php5-mcrypt might be missing, but I verified it is installed and up-to-date in my case.

    Read the article

  • Fedora vs Ubuntu to host Subversion and Bugzilla over Apache

    - by Tone
    I'm not interested in a flame war of Ubuntu vs Fedora vs whatever. What I am interested in is whether or not I should move my current Ubuntu server to Fedora. I have been able to get Subversion set up and hosted via Apache over https, and it works quite well (I'm a .NET guy, so this was all new to me). I'm having trouble, though, with installing Bugzilla - I have run into some issues getting all the Perl scripts to run successfully, so my questions are: 1) Will Bugzilla install more easily on Fedora? Can I just install a package instead of having to download the tar.gz file, untar it, run Perl scripts, etc.? 2) Is Fedora considered to be a better production server system? I have no desire for a GUI; I just need it to host Subversion, Bugzilla over Apache2, and act as a file and print server for my home network.
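    On the packaging question specifically: Bugzilla is carried in the Fedora repositories (and via EPEL on CentOS/RHEL), so the Perl dependency chase can become a package transaction; a hedged sketch:

        yum install bugzilla
        # versus the tarball route, where Bugzilla's own checker drives the Perl setup:
        ./checksetup.pl --check-modules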

    Read the article
