Search Results

Search found 44090 results on 1764 pages for 'working conditions'.


  • IPMSG problem in Windows 7?

    - by FrozenKing
    I have always used IPMSG for chatting across the LAN. It worked well under Windows XP, but after I installed Windows 7 it stopped working. While I am connected to the internet I cannot see anyone else on the local network; as soon as I disconnect from the internet I can see everyone on the local network perfectly fine. I am using the latest version of IPMSG. Does anyone have any ideas or knowledge of this problem?

    Read the article

  • PHP and JavaScript: Getting file contents with a JavaScript variable

    - by celliott1997
    Could someone help me understand why this isn't working?

        var uname = "<?php echo strtolower($_GET['un']) ?>";
        var source = "<?php echo file_get_contents('accounts/"+uname+"') ?>";
        console.log(source);

    I've been trying for a while to get this working and it just doesn't seem to work. Before I added the source variable, it worked fine and displayed the un variable on the page. Thanks.
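    The usual explanation for this failure: PHP runs on the server before any JavaScript executes in the browser, so the JavaScript variable uname can never be spliced into the file_get_contents() call; PHP just sees the literal string 'accounts/"+uname+"'. A minimal sketch of the common fix, building the path entirely in PHP and letting json_encode produce a safe JavaScript string (names mirror the question; sanitizing $_GET['un'] against path traversal is assumed, not shown):

        var uname = "<?php echo strtolower($_GET['un']) ?>";
        // Concatenate in PHP, not JavaScript; json_encode emits a valid, quoted JS value
        var source = <?php echo json_encode(file_get_contents('accounts/' . strtolower($_GET['un']))); ?>;
        console.log(source);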

    Read the article

  • Rescuing files and commits from "no branch" in git

    - by Xeoncross
    I started working on some files I had in a git submodule under another project. However, since it was a git submodule, it never checked out "master"; instead it just checked out the HEAD commit and placed all the files in the folder on "no branch". Now that I've accidentally made some changes to these files, I realize I was working on a "no branch" in a submodule of my project. How do I get those files into a branch (like master) so I can rescue them?
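    A minimal sketch of the usual rescue, assuming the edits are still in the submodule's working tree or were committed on the detached HEAD (the commit hash is a placeholder):

        cd path/to/submodule
        # Uncommitted changes: creating a branch from the detached HEAD keeps them
        git checkout -b rescue
        git add -A
        git commit -m "Rescue work done on detached HEAD"
        # Already committed on the detached HEAD? Point a branch at that commit instead:
        #   git branch rescue <detached-commit-sha>
        # Then merge the rescued work into master as usual:
        git checkout master
        git merge rescue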

    Read the article

  • Mercurial repositories hosting with different user access levels

    - by kender
    I want to set up a few Mercurial 'central' repositories on one machine. There are a few things I need to have working, though: each repository should have its own ACL, with different users allowed to push/pull, and it shouldn't be SSH-based (it shouldn't require users to have shell accounts on that machine). So, I guess that leaves me with HTTPS plus basic authentication, right? Are there any working solutions that provide this kind of functionality?
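    One setup that matches both requirements is hgweb served through Apache or nginx with HTTP basic authentication; per-repository access is then controlled in each repository's own hgrc. A minimal sketch using real Mercurial [web] options (usernames and paths are placeholders, and the web-server authentication config is assumed to exist separately):

        # hgweb.config -- publish every repository under /srv/hg
        [paths]
        / = /srv/hg/*

        # /srv/hg/project1/.hg/hgrc -- per-repository ACL
        [web]
        allow_read = alice, bob
        allow_push = alice
        push_ssl = true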

    Read the article

  • All of a Sudden, SQL Server Timeout

    - by Adinochestva
    Hey guys, we have a legacy VB.NET application that worked for years, but all of a sudden it stopped working yesterday and gives a SQL Server timeout. Most parts of the application give the timeout error; one part, for example, is the code below:

        command2 = New SqlCommand("select * from Acc order by AccDate,AccNo,AccSeq", SBSConnection2)
        reader2 = command2.ExecuteReader()
        If reader2.HasRows() Then
            While reader2.Read()
                If IndiAccNo <> reader2("AccNo") Then
                    CAccNo = CAccNo + 1
                    CAccSeq = 10001
                    IndiAccNo = reader2("AccNo")
                Else
                    CAccSeq = CAccSeq + 1
                End If
                command3 = New SqlCommand("update Acc Set AccNo=@NewAccNo,AccSeq=@NewAccSeq where AccNo=@AccNo and AccSeq=@AccSeq", SBSConnection3)
                command3.Parameters.Add("@AccNo", SqlDbType.Int).Value = reader2("AccNo")
                command3.Parameters.Add("@AccSeq", SqlDbType.Int).Value = reader2("AccSeq")
                command3.Parameters.Add("@NewAccNo", SqlDbType.Int).Value = CAccNo
                command3.Parameters.Add("@NewAccSeq", SqlDbType.Int).Value = CAccSeq
                command3.ExecuteNonQuery()
            End While
        End If

    It was working, and now it times out in command3.ExecuteNonQuery(). Any ideas?
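    Two hedged observations about this pattern: the updates run on SBSConnection3 while reader2 still holds locks on Acc through SBSConnection2, so as the table grows the UPDATE can sit blocked until SqlCommand's default 30-second timeout fires; that diagnosis is an assumption, but the timeout itself can be raised while investigating. A minimal sketch (CommandTimeout is the real SqlCommand property, in seconds):

        ' Raise the timeout while diagnosing; 0 means wait indefinitely
        command3.CommandTimeout = 120
        ' Longer term: read all rows into memory first, close reader2,
        ' then run the updates so the SELECT's locks are released.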

    Read the article

  • How does one find a '.' in a string object in Objective-C

    - by NaZer
    I am working on getting a simple calculator working as part of my adventure in learning Objective-C and iOS development. In Objective-C, using NSString, how does one look for a period in a string? Based on the comments, this is what I have so far:

        NSString *tmp = [display text];
        NSLog(@"%@", tmp); // Shows the number on the display correctly
        int x = [tmp rangeOfString:@"."].location;
        NSLog(@"%i", x); // Shows some random signed number
        if (x < 0) {
            [display setText:[[display text] stringByAppendingFormat:@"."]];
        }

    It is still not working :(
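    The usual explanation: rangeOfString: returns an NSRange whose location field is set to NSNotFound, not a negative number, when the substring is absent. NSNotFound is NSIntegerMax, so stuffing it into a signed int and testing x < 0 behaves unpredictably. A minimal sketch of the conventional check (display is assumed to be the same label control as in the question):

        NSString *tmp = [display text];
        // NSRange.location is an NSUInteger; compare against NSNotFound directly
        if ([tmp rangeOfString:@"."].location == NSNotFound) {
            // No decimal point yet, so it is safe to append one
            [display setText:[tmp stringByAppendingString:@"."]];
        }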

    Read the article

  • Small Business Setup SSO LDAP VPN [closed]

    - by outsmartin
    We are not sure how to set up an efficient network. Things we have so far: a Linux server (probably Debian), 3 desktops plus some laptops (Windows/Linux), a NAS, and ~10 people working, 50/50 devs/normal people :) Things we want to achieve: working from home should be easy (VPN and firewall); a single username/password for everybody; Windows/Linux desktops should have automatically synced home folders, preferably from the NAS; and automated hostnames for apps so others can access them like http://john.dev_app from everywhere in the VPN. We need a starting point and documentation on setting this up with open source tools like OpenVPN and OpenLDAP. Any recommendations or links to further literature are welcome.

    Read the article

  • C#.NET Windows application EXE and SQL Server 2000 database on a client machine

    - by user1397931
    Friends, I have installed a .NET Windows application EXE on a client machine, with its database in SQL Server 2000. For the EXE I installed the .NET Framework and other supporting software; for the DB I installed SQL Server 2000, created the database there, and connected it. It worked properly. But now the client has changed from XP to Windows 7. I searched for how to install SQL Server 2000 on it, but it does not work there. So what I did instead: I shipped the MDF file of the database along with the EXE application and built a connection string to it, and that also worked. Now I have changed something in the DB and am trying to update the .mdf file, but the application does not pick up the new MDF; it keeps using the old one. I am stuck. Did I do something wrong? It is difficult for me to fix. Also, what is the efficient way to use a SQL Server 2000 database on Windows 7, or is there another solution? Please help me.

    Read the article

  • Cannot access data from HBase on Amazon EC2

    - by Najeeb Thalakkatt
    I have a single-node Hadoop machine on which HBase is running in an Amazon EC2 instance. For some reason the server got restarted, so I had to start Hadoop and HBase again. After that everything works fine, except that the old data in HBase cannot be accessed through the web services; when I use the shell commands it works fine and I get the data. I recreated the scenario on my local server machine, and there it works fine. The version details are as follows: hadoop-0.20.2, hbase-0.90.5, apache-tomcat-7.0.30, on an Amazon EC2 medium instance. I use RESTful web services with Orm-hbase to access the data.

    Read the article

  • Finding a Relative Path in .NET

    - by Rick Strahl
    Here’s a nice and simple path utility that I’ve needed in a number of applications: I need to find a relative path based on a base path. So if I’m working in a folder called c:\temp\templates\ and I want to find a relative path for c:\temp\templates\subdir\test.txt, I want to receive back subdir\test.txt. Or if I pass c:\ I want to get back ..\..\ – in other words, always return a non-hardcoded path based on some other known directory. I’ve had a routine in my library that does this via some lengthy string parsing routines, but ran into some Uri processing today that made me realize that this code could be greatly simplified by using the System.Uri class instead. Here’s the simple static method:

        /// <summary>
        /// Returns a relative path string from a full path based on a base path
        /// provided.
        /// </summary>
        /// <param name="fullPath">The path to convert. Can be either a file or a directory</param>
        /// <param name="basePath">The base path on which relative processing is based. Should be a directory.</param>
        /// <returns>
        /// String of the relative path.
        ///
        /// Examples of returned values:
        /// test.txt, ..\test.txt, ..\..\..\test.txt, ., .., subdir\test.txt
        /// </returns>
        public static string GetRelativePath(string fullPath, string basePath)
        {
            // Force basePath to a directory path
            if (!basePath.EndsWith("\\"))
                basePath += "\\";

            Uri baseUri = new Uri(basePath);
            Uri fullUri = new Uri(fullPath);

            Uri relativeUri = baseUri.MakeRelativeUri(fullUri);

            // Uris use forward slashes so convert back to backward slashes
            return relativeUri.ToString().Replace("/", "\\");
        }

    You can then call it like this (note the argument order matches the signature: full path first, then base path):

        string relPath = FileUtils.GetRelativePath(@"c:\temp\templates\subdir\test.txt", @"c:\temp\templates");

    It’s not exactly rocket science, but it’s useful in many scenarios where you’re working with files based on an application base directory. Right now I’m working on a templating solution (using the Razor Engine) where templates live in a base directory and are supplied as relative paths to that base directory. Resolving these relative paths both ways is important in order to properly check for the existence of files and their change status in this case. Not the kind of thing you use every day, but useful to remember.

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in .NET, CSharp
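    One hedged caveat to the Uri-based approach above: MakeRelativeUri percent-encodes characters such as spaces, so a folder named "my docs" comes back as "my%20docs". If your paths can contain such characters, unescaping the result is the usual guard (Uri.UnescapeDataString is the real framework method; the line below is a suggested tweak to the method's return statement, not part of the original post):

        // Decode percent-encoded characters (e.g. %20) that Uri introduces
        return Uri.UnescapeDataString(relativeUri.ToString()).Replace("/", "\\");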

    Read the article

  • Failure retrieving contents of directory

    - by Bondye
    Currently I have a couple of websites. My problem is that when I log in to one specific domain with any of my programs (Notepad++, FileZilla, and NetBeans), the program stalls at the directory listing. I had it working correctly (I have been working on a project on this domain for more than a year now) and suddenly I broke it somehow. This only happens on this one domain; all other domains (from other hosts) are working. My colleague (next to me, with the same IP address) is able to log in to this domain.

    Notepad++ says: Failure retrieving contents of directory
    FileZilla says: Failed to retrieve directory listing
    NetBeans pops up: Upload files on save failed. (Because I have the upload-on-save setting enabled.)

    What I tried: first I thought it was my firewall, so I disabled the firewall, but no result. Note also that all other domains are working. Maybe a blacklist with my IP address? No, my colleague has the same IP address. Could anyone help me with this?

    Notepad++ log:

        [NppFTP] Everything initialized
        -> TYPE I
        Connecting
        -> Quit
        220 ProFTPD 1.3.3e Server ready.
        -> USER username
        331 Password required for domain
        -> PASS *HIDDEN*
        230 User username logged in
        -> TYPE A
        200 Type set to A
        -> MODE S
        200 Mode set to S
        -> STRU F
        200 Structure set to F
        -> CWD /domains/domain.nl/
        250 CWD command successful
        Connected
        -> CWD /domains/domain.nl/
        250 CWD command successful
        -> PASV
        227 Entering Passive Mode (194,247,31,xx,137,xx).
        -> LIST -al
        Failure retrieving contents of directory /domains/domain.nl/

    FileZilla log (translated from Dutch):

        Status: Connecting to 194.247.xx.xx:21...
        Status: Connection established, waiting for welcome message...
        Response: 220 ProFTPD 1.3.3e Server ready.
        Command: USER username
        Response: 331 Password required for username
        Command: PASS ********
        Response: 230 User username logged in
        Command: SYST
        Response: 215 UNIX Type: L8
        Command: FEAT
        Response: 211-Features:
        Response: MDTM
        Response: MFMT
        Response: LANG en-US;ja-JP;zh-TW;it-IT;fr-FR;zh-CN;ru-RU;bg-BG;ko-KR
        Response: TVFS
        Response: UTF8
        Response: AUTH TLS
        Response: MFF modify;UNIX.group;UNIX.mode;
        Response: MLST modify*;perm*;size*;type*;unique*;UNIX.group*;UNIX.mode*;UNIX.owner*;
        Response: PBSZ
        Response: PROT
        Response: REST STREAM
        Response: SIZE
        Response: 211 End
        Command: OPTS UTF8 ON
        Response: 200 UTF8 set to on
        Status: Connected
        Status: Retrieving directory listing...
        Command: PWD
        Response: 257 "/" is the current directory
        Command: TYPE I
        Response: 200 Type set to I
        Command: PASV
        Response: 227 Entering Passive Mode (194,247,31,xx,xxx,xx).
        Command: MLSD
        Error: Connection lost
        Error: Failed to retrieve directory listing
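    A hedged reading of both logs: login always succeeds and the failure comes only at LIST/MLSD, that is, when the separate passive-mode data connection is opened, which usually points at a firewall or NAT dropping the data ports for this one client. Two things worth trying, as a sketch: on the client, switch FileZilla from passive to active transfer mode to test the hypothesis; on the server (if you control it), pin the passive range with ProFTPD's real PassivePorts directive and open that range in the firewall (the range below is a placeholder):

        # /etc/proftpd/proftpd.conf
        PassivePorts 49152 65534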

    Read the article

  • jQuery Templates and Data Linking (and Microsoft contributing to jQuery)

    - by ScottGu
    The jQuery library has a passionate community of developers, and it is now the most widely used JavaScript library on the web today. Two years ago I announced that Microsoft would begin offering product support for jQuery, and that we’d be including it in new versions of Visual Studio going forward. By default, when you create new ASP.NET Web Forms and ASP.NET MVC projects with VS 2010 you’ll find jQuery automatically added to your project. A few weeks ago during my second keynote at the MIX 2010 conference I announced that Microsoft would also begin contributing to the jQuery project.  During the talk, John Resig -- the creator of the jQuery library and leader of the jQuery developer team – talked a little about our participation and discussed an early prototype of a new client templating API for jQuery. In this blog post, I’m going to talk a little about how my team is starting to contribute to the jQuery project, and discuss some of the specific features that we are working on such as client-side templating and data linking (data-binding). Contributing to jQuery jQuery has a fantastic developer community, and a very open way to propose suggestions and make contributions.  Microsoft is following the same process to contribute to jQuery as any other member of the community. As an example, when working with the jQuery community to improve support for templating to jQuery my team followed the following steps: We created a proposal for templating and posted the proposal to the jQuery developer forum (http://forum.jquery.com/topic/jquery-templates-proposal and http://forum.jquery.com/topic/templating-syntax ). After receiving feedback on the forums, the jQuery team created a prototype for templating and posted the prototype at the Github code repository (http://github.com/jquery/jquery-tmpl ). We iterated on the prototype, creating a new fork on Github of the templating prototype, to suggest design improvements. Several other members of the community also provided design feedback by forking the templating code. There has been an amazing amount of participation by the jQuery community in response to the original templating proposal (over 100 posts in the jQuery forum), and the design of the templating proposal has evolved significantly based on community feedback. The jQuery team is the ultimate determiner on what happens with the templating proposal – they might include it in jQuery core, or make it an official plugin, or reject it entirely.  My team is excited to be able to participate in the open source process, and make suggestions and contributions the same way as any other member of the community. jQuery Template Support Client-side templates enable jQuery developers to easily generate and render HTML UI on the client.  Templates support a simple syntax that enables either developers or designers to declaratively specify the HTML they want to generate.  Developers can then programmatically invoke the templates on the client, and pass JavaScript objects to them to make the content rendered completely data driven.  These JavaScript objects can optionally be based on data retrieved from a server. Because the jQuery templating proposal is still evolving in response to community feedback, the final version might look very different than the version below. 
This blog post gives you a sense of how you can try out and use templating as it exists today (you can download the prototype by the jQuery core team at http://github.com/jquery/jquery-tmpl or the latest submission from my team at http://github.com/nje/jquery-tmpl).  jQuery Client Templates You create client-side jQuery templates by embedding content within a <script type="text/html"> tag.  For example, the HTML below contains a <div> template container, as well as a client-side jQuery “contactTemplate” template (within the <script type="text/html"> element) that can be used to dynamically display a list of contacts: The {{= name }} and {{= phone }} expressions are used within the contact template above to display the names and phone numbers of “contact” objects passed to the template. We can use the template to display either an array of JavaScript objects or a single object. The JavaScript code below demonstrates how you can render a JavaScript array of “contact” object using the above template. The render() method renders the data into a string and appends the string to the “contactContainer” DIV element: When the page is loaded, the list of contacts is rendered by the template.  All of this template rendering is happening on the client-side within the browser:   Templating Commands and Conditional Display Logic The current templating proposal supports a small set of template commands - including if, else, and each statements. The number of template commands was deliberately kept small to encourage people to place more complicated logic outside of their templates. Even this small set of template commands is very useful though. Imagine, for example, that each contact can have zero or more phone numbers. The contacts could be represented by the JavaScript array below: The template below demonstrates how you can use the if and each template commands to conditionally display and loop the phone numbers for each contact: If a contact has one or more phone numbers then each of the phone numbers is displayed by iterating through the phone numbers with the each template command: The jQuery team designed the template commands so that they are extensible. If you have a need for a new template command then you can easily add new template commands to the default set of commands. Support for Client Data-Linking The ASP.NET team recently submitted another proposal and prototype to the jQuery forums (http://forum.jquery.com/topic/proposal-for-adding-data-linking-to-jquery). This proposal describes a new feature named data linking. Data Linking enables you to link a property of one object to a property of another object - so that when one property changes the other property changes.  Data linking enables you to easily keep your UI and data objects synchronized within a page. If you are familiar with the concept of data-binding then you will be familiar with data linking (in the proposal, we call the feature data linking because jQuery already includes a bind() method that has nothing to do with data-binding). Imagine, for example, that you have a page with the following HTML <input> elements: The following JavaScript code links the two INPUT elements above to the properties of a JavaScript “contact” object that has a “name” and “phone” property: When you execute this code, the value of the first INPUT element (#name) is set to the value of the contact name property, and the value of the second INPUT element (#phone) is set to the value of the contact phone property. 
The properties of the contact object and the properties of the INPUT elements are also linked – so that changes to one are also reflected in the other. Because the contact object is linked to the INPUT element, when you request the page, the values of the contact properties are displayed: More interesting, the values of the linked INPUT elements will change automatically whenever you update the properties of the contact object they are linked to. For example, we could programmatically modify the properties of the “contact” object using the jQuery attr() method like below: Because our two INPUT elements are linked to the “contact” object, the INPUT element values will be updated automatically (without us having to write any code to modify the UI elements): Note that we updated the contact object above using the jQuery attr() method. In order for data linking to work, you must use jQuery methods to modify the property values. Two Way Linking The linkBoth() method enables two-way data linking. The contact object and INPUT elements are linked in both directions. When you modify the value of the INPUT element, the contact object is also updated automatically. For example, the following code adds a client-side JavaScript click handler to an HTML button element. When you click the button, the property values of the contact object are displayed using an alert() dialog: The following demonstrates what happens when you change the value of the Name INPUT element and click the Save button. Notice that the name property of the “contact” object that the INPUT element was linked to was updated automatically: The above example is obviously trivially simple.  Instead of displaying the new values of the contact object with a JavaScript alert, you can imagine instead calling a web-service to save the object to a database. The benefit of data linking is that it enables you to focus on your data and frees you from the mechanics of keeping your UI and data in sync. Converters The current data linking proposal also supports a feature called converters. A converter enables you to easily convert the value of a property during data linking. For example, imagine that you want to represent phone numbers in a standard way with the “contact” object phone property. In particular, you don’t want to include special characters such as ()- in the phone number - instead you only want digits and nothing else. In that case, you can wire-up a converter to convert the value of an INPUT element into this format using the code below: Notice above how a converter function is being passed to the linkFrom() method used to link the phone property of the “contact” object with the value of the phone INPUT element. This convertor function strips any non-numeric characters from the INPUT element before updating the phone property.  Now, if you enter the phone number (206) 555-9999 into the phone input field then the value 2065559999 is assigned to the phone property of the contact object: You can also use a converter in the opposite direction also. For example, you can apply a standard phone format string when displaying a phone number from a phone property. Combining Templating and Data Linking Our goal in submitting these two proposals for templating and data linking is to make it easier to work with data when building websites and applications with jQuery. Templating makes it easier to display a list of database records retrieved from a database through an Ajax call. 
Data linking makes it easier to keep the data and user interface in sync for update scenarios. Currently, we are working on an extension of the data linking proposal to support declarative data linking. We want to make it easy to take advantage of data linking when using a template to display data. For example, imagine that you are using the following template to display an array of product objects: Notice the {{link name}} and {{link price}} expressions. These expressions enable declarative data linking between the SPAN elements and properties of the product objects. The current jQuery templating prototype supports extending its syntax with custom template commands. In this case, we are extending the default templating syntax with a custom template command named “link”. The benefit of using data linking with the above template is that the SPAN elements will be automatically updated whenever the underlying “product” data is updated. Declarative data linking also makes it easier to create edit and insert forms. For example, you could create a form for editing a product by using declarative data linking like this: Whenever you change the value of the INPUT elements in a template that uses declarative data linking, the underlying JavaScript data object is automatically updated. Instead of needing to write code to scrape the HTML form to get updated values, you can instead work with the underlying data directly – making your client-side code much cleaner and simpler. Downloading Working Code Examples of the Above Scenarios You can download this .zip file to get working code examples of the above scenarios. The .zip file includes 4 static HTML pages: Listing1_Templating.htm – Illustrates basic templating. Listing2_TemplatingConditionals.htm – Illustrates templating with the use of the if and each template commands. Listing3_DataLinking.htm – Illustrates data linking. Listing4_Converters.htm – Illustrates using a converter with data linking. You can un-zip the file to the file-system and then run each page to see the concepts in action. Summary We are excited to be able to begin participating within the open-source jQuery project. We’ve received lots of encouraging feedback in response to our first two proposals, and we will continue to actively contribute going forward. These features will hopefully make it easier for all developers (including ASP.NET developers) to build great Ajax applications. Hope this helps, Scott P.S. [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]
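    Because the inline code screenshots from the original post do not survive in this text capture, here is a hedged reconstruction of the two core patterns described above. The {{= }} delimiters, render(), linkBoth(), and linkFrom() are the names used in the post's prose; the exact signatures are assumptions, since both proposals were still evolving at the time:

        <div id="contactContainer"></div>

        <script id="contactTemplate" type="text/html">
            <p>{{= name }}: {{= phone }}</p>
        </script>

        <input id="name" type="text" />
        <input id="phone" type="text" />

        <script type="text/javascript">
            var contacts = [
                { name: "Scott", phone: "(206) 555-9999" },
                { name: "John",  phone: "(206) 555-1212" }
            ];

            // Templating: render the array through the template, append the markup
            var html = $("#contactTemplate").render(contacts);
            $("#contactContainer").append(html);

            // Data linking: two-way link between an INPUT and an object property
            var contact = { name: "Scott", phone: "(206) 555-9999" };
            $("#name").linkBoth(contact, "name");

            // Converter: strip non-digits before the phone property is updated
            $("#phone").linkFrom(contact, "phone", function (value) {
                return value.replace(/[^0-9]/g, "");
            });
        </script>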

    Read the article

  • Database Vault 11gR2 11.2.0.1 Certified with Oracle E-Business Suite

    - by Steven Chan
    Oracle Database Vault allows security administrators to protect a database from privileged account access to application data.  Database objects can be placed in protected realms, which can be accessed only if a specific set of conditions are met.  Oracle Database Vault 11gR2 11.2.0.1 is now certified with Oracle E-Business Suite Release 11i and 12.You can now enable Database Vault 11gR2 on your existing E-Business Suite 11.2.0.1 Database instance.  If you already have DB Vault 10gR2 or 11gR1 enabled in your E-Business Suite environment, you can now upgrade to the 11gR2 Database.  We also support EBS patching with Database Vault 11.2.0.1 enabled. Our DB Vault realm creation and grants-related scripts have been updated to reduce patching downtimes.

    Read the article

  • The Challenge with HTML5 – In Pictures

    - by dwahlin
    I love working with Web technologies and am looking forward to the new functionality that HTML5 will ultimately bring to the table (some of which can be used today). Having been through the div versus layer battle back in the IE4 and Netscape 4 days I think we’re headed down that road again as a result of browsers implementing features differently. I’ve been spending a lot of time researching and playing around with HTML5 samples and features (mainly because we’re already seeing demand for training on HTML5) and there’s a lot of great stuff there that will truly revolutionize web applications as we know them. However, browsers just aren’t there yet and many people outside of the development world don’t really feel a need to upgrade their browser if it’s working reasonably well (Mom and Dad come to mind) so it’s going to be awhile. There’s a nice test site at http://www.HTML5Test.com that runs through different HTML5 features and scores how well they’re supported. They don’t test for everything and are very clear about that on the site: “The HTML5 test score is only an indication of how well your browser supports the upcoming HTML5 standard and related specifications. It does not try to test all of the new features offered by HTML5, nor does it try to test the functionality of each feature it does detect. Despite these shortcomings we hope that by quantifying the level of support users and web developers will get an idea of how hard the browser manufacturers work on improving their browsers and the web as a development platform. The score is calculated by testing for the many new features of HTML5. Each feature is worth one or more points. Apart from the main HTML5 specification and other specifications created the W3C HTML Working Group, this test also awards points for supporting related drafts and specifications. Some of these specifications were initially part of HTML5, but are now further developed by other W3C working groups. WebGL is also part of this test despite not being developed by the W3C, because it extends the HTML5 canvas element with a 3d context. The test also awards bonus points for supporting audio and video codecs and supporting SVG or MathML embedding in a plain HTML document. These test do not count towards the total score because HTML5 does not specify any required audio or video codec. Also SVG and MathML are not required by HTML5, the specification only specifies rules for how such content should be embedded inside a plain HTML file. Please be aware that the specifications that are being tested are still in development and could change before receiving an official status. In the future new tests will be added for the pieces of the specification that are currently still missing. The maximum number of points that can be scored is 300 at this moment, but this is a moving goalpost.” It looks like their tests haven’t been updated since June, but the numbers are pretty scary as a developer because it means I’m going to have to do a lot of browser sniffing before assuming a particular feature is available to use. Not that much different from what we do today as far as browser sniffing you say? I’d have to disagree since HTML5 takes it to a whole new level. In today’s world we have script libraries such as jQuery (my personal favorite), Prototype, script.aculo.us, YUI Library, MooTools, etc. that handle the heavy lifting for us. Until those libraries handle all of the key HTML5 features available it’s going to be a challenge. 
Certain features such as Canvas are supported fairly well across most of the major browsers while other features such as audio and video are hit or miss depending upon what codec you want to use. Run the tests yourself to see what passes and what fails for different browsers. You can also view the HTML5 Test Suite Conformance Results at http://test.w3.org/html/tests/reporting/report.htm (a work in progress). The table below lists the scores that the HTML5Test site returned for different browsers I have installed on my desktop PC and laptop. A specific list of tests run and features supported are given when you go to the site. Note that I went ahead and tested the IE9 beta and it didn’t do nearly as good as I expected it would, but it’s not officially out yet so I expect that number will change a lot. Am I opposed to HTML5 as a result of these tests? Of course not - I’m actually really excited about what it offers. However, I’m trying to be realistic and feel it'll definitely add a new level of headache to the Web application development process having been through something like this many years ago. On the flipside, developers that are able to target a specific browser (typically Intranet apps) or master the cross-browser issues are going to release some pretty sweet applications. Check out http://html5gallery.com/ for a look at some of the more cutting-edge sites out there that use HTML5. Also check out the http://www.beautyoftheweb.com site that Microsoft put together to showcase IE9.

    [Score table from the original post: browsers tested were Chrome 8, Safari 5 for Windows, Opera 10, Firefox 3.6, Internet Explorer 9 Beta (still beta at the time), and Internet Explorer 8; the per-browser scores did not survive the text capture.]
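    Since the post's takeaway is that support varies widely by browser, a minimal sketch of the object-based feature detection this implies (the checks below are standard detection idioms of that era, not code from the post):

        // Detect features, never browser version strings
        var hasCanvas = !!(document.createElement('canvas').getContext);
        var hasVideo = !!document.createElement('video').canPlayType;
        var hasLocalStorage = (function () {
            // Accessing localStorage can throw when cookies are disabled
            try { return 'localStorage' in window && window.localStorage !== null; }
            catch (e) { return false; }
        })();

        if (hasCanvas) {
            // Safe to draw on a canvas element here
        }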

    Read the article

  • Announcing SonicAgile – An Agile Project Management Solution

    - by Stephen.Walther
    I’m happy to announce the public release of SonicAgile – an online tool for managing software projects. You can register for SonicAgile at www.SonicAgile.com and start using it with your team today. SonicAgile is an agile project management solution which is designed to help teams of developers coordinate their work on software projects. SonicAgile supports creating backlogs, scrumboards, and burndown charts. It includes support for acceptance criteria, story estimation, calculating team velocity, and email integration. In short, SonicAgile includes all of the tools that you need to coordinate work on a software project, get stuff done, and build great software. Let me discuss each of the features of SonicAgile in more detail. SonicAgile Backlog You use the backlog to create a prioritized list of user stories such as features, bugs, and change requests. Basically, all future work planned for a product should be captured in the backlog. We focused our attention on designing the user interface for the backlog. Because the main function of the backlog is to prioritize stories, we made it easy to prioritize a story by just drag and dropping the story from one location to another. We also wanted to make it easy to add stories from the product backlog to a sprint backlog. A sprint backlog contains the stories that you plan to complete during a particular sprint. To add a story to a sprint, you just drag the story from the product backlog to the sprint backlog. Finally, we made it easy to track team velocity — the average amount of work that your team completes in each sprint. Your team’s average velocity is displayed in the backlog. When you add too many stories to a sprint – in other words, you attempt to take on too much work – you are warned automatically: SonicAgile Scrumboard Every workday, your team meets to have their daily scrum. During the daily scrum, you can use the SonicAgile Scrumboard to see (at a glance) what everyone on the team is working on. For example, the following scrumboard shows that Stephen is working on the Fix Gravatar Bug story and Pete and Jane have finished working on the Product Details Page story: Every story can be broken into tasks. For example, to create the Product Details Page, you might need to create database objects, do page design, and create an MVC controller. You can use the Scrumboard to track the state of each task. A story can have acceptance criteria which clarify the requirements for the story to be done. For example, here is how you can specify the acceptance criteria for the Product Details Page story: You cannot close a story — and remove the story from the list of active stories on the scrumboard — until all tasks and acceptance criteria associated with the story are done. SonicAgile Burndown Charts You can use Burndown charts to track your team’s progress. SonicAgile supports Release Burndown, Sprint Burndown by Task Estimates, and Sprint Burndown by Story Points charts. For example, here’s a sample of a Sprint Burndown by Story Points chart: The downward slope shows the progress of the team when closing stories. The vertical axis represents story points and the horizontal axis represents time. Email Integration SonicAgile was designed to improve your team’s communication and collaboration. Most stories and tasks require discussion to nail down exactly what work needs to be done. The most natural way to discuss stories and tasks is through email. However, you don’t want these discussions to get lost. 
When you use SonicAgile, all email discussions concerning a story or a task (including all email attachments) are captured automatically. At any time in the future, you can view all of the email discussion concerning a story or a task by opening the Story Details dialog: Why We Built SonicAgile We built SonicAgile because we needed it for our team. Our consulting company, Superexpert, builds websites for financial services, startups, and large corporations. We have multiple teams working on multiple projects. Keeping on top of all of the work that needs to be done to complete a software project is challenging. You need a good sense of what needs to be done, who is doing it, and when the work will be done. We built SonicAgile because we wanted a lightweight project management tool which we could use to coordinate the work that our team performs on software projects. How We Built SonicAgile We wanted SonicAgile to be easy to use, highly scalable, and have a highly interactive client interface. SonicAgile is very close to being a pure Ajax application. We built SonicAgile using ASP.NET MVC 3, jQuery, and Knockout. We would not have been able to build such a complex Ajax application without these technologies. Almost all of our MVC controller actions return JSON results (While developing SonicAgile, I would have given my left arm to be able to use the new ASP.NET Web API). The controller actions are invoked from jQuery Ajax calls from the browser. We built SonicAgile on Windows Azure. We are taking advantage of SQL Azure, Table Storage, and Blob Storage. Windows Azure enables us to scale very quickly to handle whatever demand is thrown at us. Summary I hope that you will try SonicAgile. You can register at www.SonicAgile.com (there’s a free 30-day trial). The goal of SonicAgile is to make it easier for teams to get more stuff done, work better together, and build amazing software. Let us know what you think!

    Read the article

  • Hekaton – SQL Server’s in-memory database engine

    - by Christian
    Microsoft have just gone public at the PASS Summit in Seattle about a new SQL Server engine that they’re working on which is optimized for high-memory servers – an in-memory OLTP database engine which is built-in to SQL Server rather than a separate entity.  This means that you can move just the performance critical parts of your database to Hekaton. The new engine really pushes the performance boundaries by eliminating as many instructions as possible: Main memory optimized tables which are decoupled from on-disk structures; Everything is lock and latch free; More work is pushed to compile time so your T-SQL code is compiled natively into low-level code. We’re already working with a customer on an early adoption program so expect to hear from us on what we learn about implementing it!   Christian Bolton - MCA, MCM, MVP Technical Director http://coeo.com - SQL Server Consulting & Managed Services
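    As a hedged illustration of what "main memory optimized tables" and natively compiled code look like in practice, here is the syntax that eventually shipped in SQL Server 2014; at the time of this post the surface area was still unannounced, so treat this as a sketch rather than what was shown at PASS:

        -- A durable memory-optimized table
        CREATE TABLE dbo.ShoppingCart (
            CartId   INT NOT NULL
                PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
            Quantity INT NOT NULL
        ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

        -- A natively compiled stored procedure over that table
        CREATE PROCEDURE dbo.AddItem @CartId INT, @Quantity INT
        WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
        AS
        BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
            INSERT dbo.ShoppingCart (CartId, Quantity) VALUES (@CartId, @Quantity);
        END;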

    Read the article

  • Ubuntu 14.04 LTS: HPLIP loses USB connection to HP LaserJet

    - by Gareth
    This is my first post, so please let me know if I have inadvertently broken any rules.
> > hp-setup[18461]: debug: param=002:007 hp-setup[18461]: debug: > selected_device_name=None Fontconfig error: > "/etc/fonts/conf.d/65-khmer.conf", line 14: out of memory Fontconfig > error: "/etc/fonts/conf.d/65-khmer.conf", line 23: out of memory > Fontconfig error: "/etc/fonts/conf.d/65-khmer.conf", line 32: out of > memory hp-setup[18461]: debug: Sys.argv=['/usr/bin/hp-setup', '-g', > '002:007'] printer_name=None param=002:007 jd_port=1 device_uri=None > remove=False Searching for device... hp-setup[18461]: debug: Trying > USB with bus=002 dev=007... hp-setup[18461]: debug: Not found. > hp-setup[18461]: debug: Trying serial number 002:007 hp-setup[18461]: > debug: Probing bus: usb hp-setup[18461]: debug: Probing bus: par > error: Device not found. Please make sure your printer is properly > connected and powered-on. hp-setup[18461]: debug: Starting GUI loop. .. USB lead Works with the Windows 7 laptop Printer Works with windows 7 laptop Questions Is this a Bug with HPLIP or an issue with laptop/printer? Supplementary question if it is a bug what information is needed and where should it be sent ? Any suggestions on how to get the printer to work correctly with Ubuntu 14.04LTS/HPLIP 13.4.3 so that it stays working ?

    Read the article

  • How To Make NVIDIA’s Optimus Work on Linux

    - by Chris Hoffman
    Many new laptops come with NVIDIA’s Optimus technology – the laptop includes both a discrete NVIDIA GPU for gaming power and an onboard Intel GPU for power savings. The notebook switches between the two when necessary. However, this isn’t yet well-supported on Linux. Linus Torvalds had some choice words for NVIDIA regarding Optimus not working on Linux, and NVIDIA is now currently working on official support. However, if you have a laptop with Optimus support, you don’t have to wait for NVIDIA — you can use the Bumblebee project’s solution to enable Optimus on Linux today. Image Credit: Jemimus on Flickr
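    For reference, a hedged sketch of what the Bumblebee route looked like on Ubuntu around that time; the PPA and package names below are the ones the project published, but exact steps varied between releases:

        $ sudo add-apt-repository ppa:bumblebee/stable
        $ sudo apt-get update
        $ sudo apt-get install bumblebee bumblebee-nvidia
        $ optirun glxgears        # runs the program on the discrete NVIDIA GPU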

    Read the article

  • Install SharePoint 2013 on a two server farm

    - by sreejukg
    When SharePoint 2010 was released, I published an article on how to install SharePoint on a two server farm. You can find that article at the link below. http://weblogs.asp.net/sreejukg/archive/2010/09/28/install-sharepoint-2010-in-a-farm-environment.aspx Now it is time for SharePoint 2013. SharePoint 2013 brings lots of improvements to the topologies, but still supports a two-server architecture. Note that the two-server architecture is meant for small implementations with limited service applications. Refer to the link below to understand more about the SharePoint architecture. http://technet.microsoft.com/en-us/sharepoint/fp123594.aspx A two tier farm consists of a database server and a web/application server. In this article I am going to explain how to install SharePoint in a two server farm. I prepared 2 servers, both joined to a domain (SP2013Domain), and on one server I installed SQL Server 2012 (server name: SP2013_DB). Now I am going to install SharePoint 2013 on the second server (server name: SP2013). The following domain accounts are created for the installation:

    - SQLService (SQL Server service account): used as the service account for SQL Server. Server roles required: domain user account or local account.
    - spSetup: you will be running SharePoint setup and the SharePoint Products and Configuration Wizard using this account. Server roles required: domain user account; member of the Administrators group on each server on which Setup is run (in our case SP2013); SQL Server login on the computer running SQL Server; member of the Server admin SQL Server security role.
    - spDataaccess: used to configure and manage the server farm; also the application pool identity for the Central Administration website and the account for the Microsoft SharePoint Foundation Workflow Timer Service. Server roles required: domain user account (other permissions will be granted to this account automatically).

    The above is the minimum list of accounts needed for a SharePoint 2013 installation. You will also need additional accounts for services, application pool identities for web applications, etc. Refer to the service account requirements for SharePoint at the link below. http://technet.microsoft.com/en-us/library/cc263445.aspx In order to install SharePoint 2013, log in to the server using the setup account (spSetup). Now run the setup from the installation media. First you need to install the prerequisites. During the installation process, the server may restart several times. The installation wizard will guide you through the installation. In the next step, you need to agree to the terms and conditions as usual. Once you click next, the installation will start immediately. The installation wizard will let you know the progress of the installation. During the installation you may receive notifications to restart the server; you just need to click the finish button so that the system will be restarted. Once all the prerequisites are installed, you will get the success message as below. Click finish to close the dialog. Now, from the media, run the setup again and this time choose install SharePoint server. In the next screen, you need to enter the product key, and then click continue. Now you need to agree to the terms and conditions for SharePoint 2013, and click continue. Choose the file location as per your policies and click on the install now button. You will see the installation progress. Once completed, you will see the installation completed dialog. Make sure you select the run products and configuration wizard option and click close.
From the start screen, click next to start the configuration wizard. You will receive a warning telling you that some of the services will be stopped during the installation. Select the "create new server farm" radio button and click next. In the next step, you need to enter the configuration database settings. Enter the database server details and then specify the database access account. You need to specify the farm account (spDataaccess). The wizard will grant additional privileges to the account as needed. In the next step you need to specify the passphrase; make a note of this, as you will need the passphrase when you add additional servers to the farm. In the next step, you need to enter the Central Administration website port and security settings. You can choose a port or just keep the one suggested by the wizard. Click next and you will see a summary of what you have selected. Verify the selected settings; if you want to change anything, just click back and change it, or click continue to start the configuration. The configuration may take some time and you can view the progress. In case of any error, you will get the log file; fix the error and start the configuration wizard again. Once the configuration is successful, you will see the success message. Just click finish. Now you can browse the Central Administration website. It is good to check the health analyzer to review whether there are any errors or warnings; none indicates a good installation. The two-server architecture is the smallest configuration for production environments. Small firms with fewer employees can implement SharePoint 2013 using this topology, and as the workload increases, they can add more servers to the farm without reconstructing everything.
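    For admins who prefer scripting the step the wizard performs above, a hedged PowerShell sketch of creating the same farm (New-SPConfigurationDatabase is the real SharePoint 2013 cmdlet; the server names and account come from this article, while the database names and passphrase are placeholders):

        # Run from an elevated SharePoint 2013 Management Shell on SP2013
        New-SPConfigurationDatabase `
            -DatabaseName "SharePoint_Config" `
            -DatabaseServer "SP2013_DB" `
            -AdministrationContentDatabaseName "SharePoint_AdminContent" `
            -Passphrase (ConvertTo-SecureString "pass@word1" -AsPlainText -Force) `
            -FarmCredentials (Get-Credential "SP2013Domain\spDataaccess")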

    Read the article

  • Scrum in 5 Minutes

    - by Stephen.Walther
    The goal of this blog entry is to explain the basic concepts of Scrum in less than five minutes. You learn how Scrum can help a team of developers to successfully complete a complex software project. Product Backlog and the Product Owner Imagine that you are part of a team which needs to create a new website – for example, an e-commerce website. You have an overwhelming amount of work to do. You need to build (or possibly buy) a shopping cart, install an SSL certificate, create a product catalog, create a Facebook page, and at least a hundred other things that you have not thought of yet. According to Scrum, the first thing you should do is create a list. Place the highest priority items at the top of the list and the lower priority items lower in the list. For example, creating the shopping cart and buying the domain name might be high priority items and creating a Facebook page might be a lower priority item. In Scrum, this list is called the Product Backlog. How do you prioritize the items in the Product Backlog? Different stakeholders in the project might have different priorities. Gary, your division VP, thinks that it is crucial that the e-commerce site has a mobile app. Sally, your direct manager, thinks taking advantage of new HTML5 features is much more important. Multiple people are pulling you in different directions. According to Scrum, it is important that you always designate one person, and only one person, as the Product Owner. The Product Owner is the person who decides what items should be added to the Product Backlog and the priority of the items in the Product Backlog. The Product Owner could be the customer who is paying the bills, the project manager who is responsible for delivering the project, or a customer representative. The critical point is that the Product Owner must always be a single person and that single person has absolute authority over the Product Backlog. Sprints and the Sprint Backlog So now the developer team has a prioritized list of items and they can start work. The team starts implementing the first item in the Backlog — the shopping cart — and the team is making good progress. Unfortunately, however, half-way through the work of implementing the shopping cart, the Product Owner changes his mind. The Product Owner decides that it is much more important to create the product catalog before the shopping cart. With some frustration, the team switches their developmental efforts to focus on implementing the product catalog. However, part way through completing this work, once again the Product Owner changes his mind about the highest priority item. Getting work done when priorities are constantly shifting is frustrating for the developer team and it results in lower productivity. At the same time, however, the Product Owner needs to have absolute authority over the priority of the items which need to get done. Scrum solves this conflict with the concept of Sprints. In Scrum, a developer team works in Sprints. At the beginning of a Sprint the developers and the Product Owner agree on the items from the backlog which they will complete during the Sprint. This subset of items from the Product Backlog becomes the Sprint Backlog. During the Sprint, the Product Owner is not allowed to change the items in the Sprint Backlog. In other words, the Product Owner cannot shift priorities on the developer team during the Sprint. Different teams use Sprints of different lengths such as one month Sprints, two-week Sprints, and one week Sprints. 
For high-stress, time critical projects, teams typically choose shorter sprints such as one week sprints. For more mature projects, longer one month sprints might be more appropriate. A team can pick whatever Sprint length makes sense for them just as long as the team is consistent. You should pick a Sprint length and stick with it. Daily Scrum During a Sprint, the developer team needs to have meetings to coordinate their work on completing the items in the Sprint Backlog. For example, the team needs to discuss who is working on what and whether any blocking issues have been discovered. Developers hate meetings (well, sane developers hate meetings). Meetings take developers away from their work of actually implementing stuff as opposed to talking about implementing stuff. However, a developer team which never has meetings and never coordinates their work also has problems. For example, Fred might get stuck on a programming problem for days and never reach out for help even though Tom (who sits in the cubicle next to him) has already solved the very same problem. Or, both Ted and Fred might have started working on the same item from the Sprint Backlog at the same time. In Scrum, these conflicting needs – limiting meetings but enabling team coordination – are resolved with the idea of the Daily Scrum. The Daily Scrum is a meeting for coordinating the work of the developer team which happens once a day. To keep the meeting short, each developer answers only the following three questions: 1. What have you done since yesterday? 2. What do you plan to do today? 3. Any impediments in your way? During the Daily Scrum, developers are not allowed to talk about issues with their cat, do demos of their latest work, or tell heroic stories of programming problems overcome. The meeting must be kept short — typically about 15 minutes. Issues which come up during the Daily Scrum should be discussed in separate meetings which do not involve the whole developer team. Stories and Tasks Items in the Product or Sprint Backlog – such as building a shopping cart or creating a Facebook page – are often referred to as User Stories or Stories. The Stories are created by the Product Owner and should represent some business need. Unlike the Product Owner, the developer team needs to think about how a Story should be implemented. At the beginning of a Sprint, the developer team takes the Stories from the Sprint Backlog and breaks the stories into tasks. For example, the developer team might take the Create a Shopping Cart story and break it into the following tasks: · Enable users to add and remote items from shopping cart · Persist the shopping cart to database between visits · Redirect user to checkout page when Checkout button is clicked During the Daily Scrum, members of the developer team volunteer to complete the tasks required to implement the next Story in the Sprint Backlog. When a developer talks about what he did yesterday or plans to do tomorrow then the developer should be referring to a task. Stories are owned by the Product Owner and a story is all about business value. In contrast, the tasks are owned by the developer team and a task is all about implementation details. A story might take several days or weeks to complete. A task is something which a developer can complete in less than a day. Some teams get lazy about breaking stories into tasks. 
Neglecting to break stories into tasks can lead to “Never Ending Stories” If you don’t break a story into tasks, then you can’t know how much of a story has actually been completed because you don’t have a clear idea about the implementation steps required to complete the story. Scrumboard During the Daily Scrum, the developer team uses a Scrumboard to coordinate their work. A Scrumboard contains a list of the stories for the current Sprint, the tasks associated with each Story, and the state of each task. The developer team uses the Scrumboard so everyone on the team can see, at a glance, what everyone is working on. As a developer works on a task, the task moves from state to state and the state of the task is updated on the Scrumboard. Common task states are ToDo, In Progress, and Done. Some teams include additional task states such as Needs Review or Needs Testing. Some teams use a physical Scrumboard. In that case, you use index cards to represent the stories and the tasks and you tack the index cards onto a physical board. Using a physical Scrumboard has several disadvantages. A physical Scrumboard does not work well with a distributed team – for example, it is hard to share the same physical Scrumboard between Boston and Seattle. Also, generating reports from a physical Scrumboard is more difficult than generating reports from an online Scrumboard. Estimating Stories and Tasks Stakeholders in a project, the people investing in a project, need to have an idea of how a project is progressing and when the project will be completed. For example, if you are investing in creating an e-commerce site, you need to know when the site can be launched. It is not enough to just say that “the project will be done when it is done” because the stakeholders almost certainly have a limited budget to devote to the project. The people investing in the project cannot determine the business value of the project unless they can have an estimate of how long it will take to complete the project. Developers hate to give estimates. The reason that developers hate to give estimates is that the estimates are almost always completely made up. For example, you really don’t know how long it takes to build a shopping cart until you finish building a shopping cart, and at that point, the estimate is no longer useful. The problem is that writing code is much more like Finding a Cure for Cancer than Building a Brick Wall. Building a brick wall is very straightforward. After you learn how to add one brick to a wall, you understand everything that is involved in adding a brick to a wall. There is no additional research required and no surprises. If, on the other hand, I assembled a team of scientists and asked them to find a cure for cancer, and estimate exactly how long it will take, they would have no idea. The problem is that there are too many unknowns. I don’t know how to cure cancer, I need to do a lot of research here, so I cannot even begin to estimate how long it will take. So developers hate to provide estimates, but the Product Owner and other product stakeholders, have a legitimate need for estimates. Scrum resolves this conflict by using the idea of Story Points. Different teams use different units to represent Story Points. For example, some teams use shirt sizes such as Small, Medium, Large, and X-Large. Some teams prefer to use Coffee Cup sizes such as Tall, Short, and Grande. Finally, some teams like to use numbers from the Fibonacci series. 
These alternative units are converted into a Story Point value. Regardless of the type of unit which you use to represent Story Points, the goal is the same. Instead of attempting to estimate a Story in hours (which is doomed to failure), you use a much less fine-grained measure of work. A developer team is much more likely to be able to estimate that a Story is Small or X-Large than the exact number of hours required to complete the story. So you can think of Story Points as a compromise between the needs of the Product Owner and the developer team.

When a Sprint starts, the developer team devotes more time to thinking about the Stories in the Sprint, and the developer team breaks the Stories into Tasks. In Scrum, you estimate the work required to complete a Story by using Story Points, and you estimate the work required to complete a task by using hours. The difference between Stories and Tasks is that you don't create a task until you are just about ready to start working on it. A task is something that you should be able to complete within a day, so you have a much better chance of providing an accurate estimate of the work required to complete a task than a story.

Burndown Charts

In Scrum, you use Burndown charts to represent the remaining work on a project. You use Release Burndown charts to represent the overall remaining work for a project, and you use Sprint Burndown charts to represent the remaining work for a particular Sprint. You create a Release Burndown chart by calculating the remaining number of uncompleted Story Points for the entire Product Backlog every day. The vertical axis represents Story Points and the horizontal axis represents time.

A Sprint Burndown chart is similar to a Release Burndown chart, but it focuses on the remaining work for a particular Sprint. There are two different types of Sprint Burndown charts: you can represent the remaining work in a Sprint either with Story Points or with task hours (the chart in the original post, taken from Wikipedia, uses hours).

When each Product Backlog Story is completed, the Release Burndown chart slopes down. When each Story or task is completed, the Sprint Burndown chart slopes down. Burndown charts do not always slope down over time, however. As new work is added to the Product Backlog, the Release Burndown chart slopes up. If new tasks are discovered during a Sprint, the Sprint Burndown chart will also slope up. The purpose of a Burndown chart is to give you a way to track team progress over time. If, halfway through a Sprint, the Sprint Burndown chart is still climbing, then you know that you are in trouble.

Team Velocity

Stakeholders in a project always want more work done faster. For example, the Product Owner for the e-commerce site wants the website to launch before tomorrow. Developers tend to be overly optimistic; rarely do developers acknowledge the physical limitations of reality. So project stakeholders and the developer team often collude to delude themselves about how much work can be done and how quickly. Too many software projects begin in a state of optimism and end in frustration as deadlines zoom by.

In Scrum, this problem is overcome by calculating a number called the Team Velocity. The Team Velocity is a measure of the average number of Story Points which a team has completed in previous Sprints.
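To make the arithmetic concrete, here is a minimal sketch (not from the original post) of how a burndown data point and the Team Velocity could be computed. The method names are made up:

    using System.Collections.Generic;
    using System.Linq;

    public static class ScrumMetrics
    {
        // One data point on a Release Burndown chart: the story points
        // of all backlog stories that are not yet completed.
        public static int RemainingStoryPoints(IEnumerable<(int Points, bool Done)> backlog) =>
            backlog.Where(story => !story.Done).Sum(story => story.Points);

        // Team Velocity: the average story points completed per previous sprint.
        // Assumes at least one completed sprint (Average throws on an empty sequence).
        public static double TeamVelocity(IEnumerable<int> completedPointsPerSprint) =>
            completedPointsPerSprint.Average();
    }

For example, a team that completed 21, 18, and 24 points in its last three sprints has a velocity of 21, so committing to roughly 21 points in the next Sprint is realistic.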
Knowing the Team Velocity is important during the Sprint Planning meeting, when the Product Owner and the developer team work together to determine the number of stories which can be completed in the next Sprint. If you know the Team Velocity, then you can avoid committing to more work than the team has been able to accomplish in the past, and your team is much more likely to complete all of the work required for the next Sprint.

Scrum Master

There are three roles in Scrum: the Product Owner, the developer team, and the Scrum Master. I've already discussed the Product Owner. The Product Owner is the one and only person who maintains the Product Backlog and prioritizes the stories. I've also described the role of the developer team. The members of the developer team do the work of implementing the stories by breaking the stories into tasks.

The final role, which I have not discussed, is the role of the Scrum Master. The Scrum Master is responsible for ensuring that the team is following the Scrum process. For example, the Scrum Master is responsible for making sure that there is a Daily Scrum meeting and that everyone answers the standard three questions. The Scrum Master is also responsible for removing (non-technical) impediments which the team might encounter. For example, if the team cannot start work until everyone installs the latest version of Microsoft Visual Studio, then the Scrum Master has the responsibility of working with management to get the latest version of Visual Studio as quickly as possible.

The Scrum Master can be a member of the developer team. Furthermore, different people can take on the role of the Scrum Master over time. The Scrum Master, however, cannot be the same person as the Product Owner.

Using SonicAgile

SonicAgile (SonicAgile.com) is an online tool which you can use to manage your projects using Scrum. You can use the SonicAgile Product Backlog to create a prioritized list of stories. You can estimate the size of the Stories using different Story Point units such as Shirt Sizes and Coffee Cup sizes. You can use SonicAgile during the Sprint Planning meeting to select the Stories that you want to complete during a particular Sprint. You can configure Sprints to be any length of time. SonicAgile calculates Team Velocity automatically and displays a warning when you add too many stories to a Sprint – in other words, it warns you when it thinks you are overcommitting in a Sprint.

SonicAgile also includes a Scrumboard which displays the list of Stories selected for a Sprint and the tasks associated with each story. You can drag tasks from one task state to another. Finally, SonicAgile enables you to generate Release Burndown and Sprint Burndown charts. You can use these charts to view the progress of your team. To learn more about SonicAgile, visit SonicAgile.com.

Summary

In this post, I described many of the basic concepts of Scrum. You learned how a Product Owner uses a Product Backlog to create a prioritized list of tasks. I explained why work is completed in Sprints so the developer team can be more productive. I also explained how a developer team uses the Daily Scrum to coordinate its work. You learned how the developer team uses a Scrumboard to see, at a glance, who is working on what and the state of each task. I also discussed Burndown charts: you learned how you can use both Release and Sprint Burndown charts to track team progress in completing a project.
Finally, I described the crucial role of the Scrum Master – the person who is responsible for ensuring that the rules of Scrum are being followed. My goal was not to describe all of the concepts of Scrum. This post was intended to be an introductory overview. For a comprehensive explanation of Scrum, I recommend reading Ken Schwaber’s book Agile Project Management with Scrum: http://www.amazon.com/Agile-Project-Management-Microsoft-Professional/dp/073561993X/ref=la_B001H6ODMC_1_1?ie=UTF8&qid=1345224000&sr=1-1

    Read the article

  • URL Parts available to URL Rewrite Rules

    URL Rewrite is a powerful URL rewriting tool available for IIS 7 and newer. Your rewriting options are almost unlimited, giving you the ability to optimize URLs for search engine optimization (SEO), support multiple domain names on a single site, hide complex paths, and much more. URL Rewrite allows you to use any Server Variable as a condition, and with URL Rewrite 2.0, you can also update them on the fly. To see all variables available to your site, see this post. An understanding...
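    For illustration, here is a hedged sketch of a web.config rewrite rule that uses the {HTTP_HOST} server variable as a condition to support multiple domain names. The rule name, patterns, and domains below are made up:

        <!-- Sketch only: redirect requests for an old domain to the primary one.
             The domains and rule name are placeholders. -->
        <system.webServer>
          <rewrite>
            <rules>
              <rule name="Canonical host" stopProcessing="true">
                <match url="(.*)" />
                <conditions>
                  <add input="{HTTP_HOST}" pattern="^www\.old-example\.com$" />
                </conditions>
                <action type="Redirect" url="http://example.com/{R:1}" redirectType="Permanent" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>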

    Read the article

  • Surface development: it's just like software development

    - by Dennis Vroegop
    Surface is magic. Everyone using it seems to think that way. And I have to be honest: after working for almost 2 years with the platform I still get that special feeling the moment I turn on the unit to do some more work. The whole user experience, the rich environment of the SDK, the touch, even the look and feel of the Surface environment is so much different from the stuff I've been working on all my career that I am still bewildered by it.

But... and this is a big but... in the end we're still talking about a computer, and that needs software to become useful. Deep down under the magic of the Surface unit there is a PC somewhere, running Windows Vista and the .NET Framework 3.5. When you write that magic software that makes the platform come alive, you're still working with .NET, WPF/XNA, C#, VB.NET and all those other tools and technologies you know so well. Sure, the whole user experience is different from what you've known. And the way of thinking about users, their interaction, and the positioning of screen elements requires a whole new paradigm. And that takes time. It took me about half a year before I had the feeling I had it nailed down. But when that moment came (about 18 months ago...) I realized that everything I had learned so far about software development still holds true when it comes to Surface.

The last 6 months I have been working with some people from a different background to start a new company. The idea was that the new company would focus on Surface and Surface only. These people come from a marketing background and had some good ideas for applications. And I have to admit: their ideas were good. Very good. Where it all fell down, of course, is that these ideas need to be implemented in a piece of software. And creating great software takes skilled developers and a lot of time and money.

That's where things went wrong: the marketing guys didn't realize, and didn't want to realize, that software development is a job that takes skill. You can't just hire a bunch of developers and expect them to deliver the best sort of software, especially not when it comes to Surface. I tried to explain that yes, their user interface in Photoshop looked great, but no: I couldn't develop an application like that in a week's time. Even worse: the whole backend of the software (WCF for communications, SQL Server for the database, etc.) would take a lot more time than the frontend. They didn't understand. It took them a couple of days to draw the UI in Photoshop, so in Blend I should be able to build the software in about the same amount of time. Well, you and I know that it doesn't work that way. Software is hard to write, and even harder to write well, and it takes skill and dedication. It's not something you can do as fast as you can draw a mock-up for a Surface application in Photoshop.

The same holds true for web applications, of course. A lot of designers there fail to appreciate the hard work that goes into writing the plumbing for a good web app that can handle thousands of users. Although the UI is very important, it's not all there is to it. And in Surface development this is the same. The UI should create the feeling of magic, but the software behind it is what makes it come alive. And that takes time. A lot of time.

So brush up your skills and don't throw them away if you start developing for Surface. Because projects (and collaborations) can fail there as hard as they can in any other area of software development.
On a side note: we decided to stop the collaboration (something the other parties involved didn't appreciate and were very angry about) and decided to hire a designer for the Surface projects. The focus is back where it belongs: on the software development we know so well and have been doing very well for 13 years. UI is just a part of the whole project and not the end product. So my company Detrio is still going strong when it comes to delivering Surface solutions, but once again from a technological background, not a marketing background.

    Read the article

  • BizTalk host throttling – Singleton pattern and High database size

    - by S.E.R.
    Originally posted on: http://geekswithblogs.net/SERivas/archive/2013/06/30/biztalk-host-throttling-ndash-singleton-pattern-and-high-database-size.aspx

I have worked for some days with the singleton pattern (for those unfamiliar with it, read this post by Victor Fehlberg) and have come across a few very interesting posts, among which one dealt with performance issues (here, also by Victor Fehlberg). Simply put: if you have an orchestration which implements the singleton pattern, then performance will continuously decrease as the orchestration receives and consumes messages, and that behavior is more obvious when the orchestration never ends (i.e., it keeps looping and never terminates or completes).

As I experienced the same kind of problem (actually I was alerted by SCOM, which told me that the host was being throttled because of High database size), I thought it would be a good idea to dig a little bit and see what happens deep inside BizTalk, and thus understand the reasons for this behavior.

NOTE: in this article, I will focus on the High database size throttling condition. I will try to work on the other conditions in some not too distant future…

Test conditions

The singleton orchestration

For the purpose of this study, I have created the following orchestration, which is a very basic implementation of a singleton that piles up incoming messages, then does something else when a certain timeout has been reached without receiving another message.

Throttling settings

I have two distinct hosts:

· one that hosts the receive port (basic FILE port): Ports_ReceiveHost
· one that hosts the orchestration: ProcessingHost

In order to emphasize the throttling mechanism, I have modified the throttling settings for each of these hosts as follows (all other parameters are set to their default values):

[Throttling thresholds] Message count in database: 500 (default value: 50000)

Evolution of performance counters when submitting messages

Since we are investigating the High database size throttling condition, here are the performance counters that we should take a look at (all of them are in the BizTalk:Message Agent performance object):

· Database size
· High database size
· Message delivery throttling state
· Message publishing throttling state
· Message delivery delay (ms)
· Message publishing delay (ms)
· Message delivery throttling state duration
· Message publishing throttling state duration

(If you are not used to Perfmon, I strongly recommend that you start using it right now: it is a wonderful tool that allows you to open the hood and see what is going on inside BizTalk – and other systems.)

Database size

It is quite obvious that we will start by watching the Database size and High database size counters, just to see when the first reaches the configured threshold (500) and when the second rings the alarm.

NOTE: during this test I submitted 600 messages, one message every 10 ms, to see the evolution of the counters we previously selected.

It might not show very well on this screenshot, but here is what happened: from 15:46:50 to 15:47:50, the database size for the Ports_ReceiveHost host (blue line) kept growing until it reached a maximum of 504. At 15:47:50, the high database size alert fired.

At first I was surprised by this result: why is it the database size of the receiving host that keeps growing, since it is the processing host that piles up messages? Actually, it makes total sense: this counter measures the size of the database queue that is being filled by the host, not consumed. Therefore, the high database size alert is raised on the host that fills the queue: Ports_ReceiveHost. More information is available on the Public MPWiki page.
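As a side note, the same counters can be read programmatically. Here is a minimal C# sketch using System.Diagnostics.PerformanceCounter; the host instance name is an assumption – check the instance names Perfmon shows for the BizTalk:Message Agent category on your server:

    using System;
    using System.Diagnostics;

    class ThrottlingCounters
    {
        static void Main()
        {
            // Hypothetical instance name; BizTalk exposes one instance per host.
            const string hostInstance = "ProcessingHost";

            using (var databaseSize = new PerformanceCounter(
                       "BizTalk:Message Agent", "Database size", hostInstance, readOnly: true))
            using (var publishingState = new PerformanceCounter(
                       "BizTalk:Message Agent", "Message publishing throttling state", hostInstance, readOnly: true))
            {
                Console.WriteLine($"Database size: {databaseSize.NextValue()}");
                Console.WriteLine($"Publishing throttling state: {publishingState.NextValue()}");
            }
        }
    }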
Now, looking at the Message publishing throttling state for the receiving host (green line), we can see that a throttling condition was reached at 15:47:50. We can also see that the Message publishing delay (ms) (blue line) began growing slowly from this point. All of this explains why performance keeps decreasing when a singleton keeps processing new messages: the database size grows, and once it has exceeded the Message count in database threshold, the host is throttled and the publishing delay keeps increasing.

Digging further

So, what happens to the database queue then? Is it flushed some day, or does it keep growing indefinitely? The real question being: will the host be throttled forever because of this singleton? To answer this question, I set the Message count in database threshold to 20 (this value is very low in order not to wait for too long – otherwise I certainly would have fallen asleep in front of my screen) and I submitted 30 messages. The test was started at 18:26. At 18:56 (i.e., exactly 30 minutes later) the throttling stopped and the database size was divided by 2. Thirty minutes later again, the database size had dropped to almost zero. I guess I'll have to find some documentation and do some more testing before I sort this out! My guess is that some maintenance job is at work here, though I cannot tell which one.

Digging even further

If we take a look at the Message delivery throttling state counter for the processing host, we can see that this host was also throttled during the submission of the 600 documents. The value for the counter was 1, meaning that the Message delivery incoming rate for the host instance exceeds the Message delivery outgoing rate * the specified Rate overdrive factor (percent) value. We will see this another day… :)

A last word

Let's end this article with a warning: DO NOT CHANGE THE THROTTLING SETTINGS LIGHTLY! The temptation can be great to just bypass throttling by setting very high values for each parameter (or zero in some cases, which simply disables throttling). Nevertheless, always keep in mind that this mechanism is here for a very good reason: to prevent your BizTalk infrastructure from exploding! So whatever you do with those settings, do a lot of testing and benchmarking!

    Read the article

  • Scrum for a single programmer?

    - by Rob Perkins
    I'm billed as the "Windows Expert" in my very small company, which consists of myself, a mechanical engineer working in a sales and training role, and the company's president, working in a design, development, and support role. My role is equally general, but primarily I design and implement whatever programming on our product needs to get done in order for our stuff to run on whichever versions of Windows are current. I just finished watching a high-level overview of the Scrum paradigm, given in a webcast. My question is: is it worth my time to learn more about this approach to product development, given that my development work items are usually given at a very high level, such as "internationalize and localize the product"? If it is, how would you suggest adapting Scrum for the use of just one programmer? What tools, cloud-based or otherwise, would be useful to that end? If it is not, what approach would you suggest for a single programmer to organize his efforts from day to day? (Perhaps the question reduces to that simple question.)

    Read the article

  • Oracle Healthcare Data Warehouse Foundations RELEASED!

    - by Glen McCallum
    Since I joined Oracle I've been working on Oracle Healthcare Data Warehouse Foundations (OHDF). It was officially released earlier this month at HIMSS. But for over 2 months prior to that I had to keep it a secret. It was so tough; I didn't even tell my family when they asked me what I was working on. Anyway, OHDF is an enterprise healthcare data model. Unlike Healthcare Transaction Base, OHDF is in 3rd normal form. It is logical and reasonably easy to understand for anyone with some experience in the healthcare domain. OHDF is emerging as the core of Oracle's healthcare business intelligence applications.

    Read the article
