Search Results

Search found 4240 results on 170 pages for 'thoughts'.


  • Laissez les bon temps rouler! (Microsoft BI Conference 2010)

    - by smisner
    "Laissez les bons temps rouler" is a Cajun phrase that I heard frequently when I lived in New Orleans in the mid-1990s. It means "Let the good times roll!" and encapsulates a feeling of happy expectation. As I met with many of my peers and new acquaintances at the Microsoft BI Conference last week, this phrase kept running through my mind as people spoke about their plans in their respective businesses, the benefits and opportunities that the recent releases in the BI stack are providing, and their expectations about the future of the BI stack. Notwithstanding some jabs here and there to point out the platform is neither perfect now nor will be anytime soon (along with admissions that the competitors are also not perfect), and notwithstanding several missteps by the event organizers (which I don't care to enumerate), the overarching mood at the conference was positive. It was a refreshing change from the doom and gloom hovering over several conferences that I attended in 2009. Although many people expect economic hardships to continue over the coming year or so, everyone I know in the BI field is busier than ever and expects to stay busy for quite a while. Self-Service BI Self-service was definitely a theme of the BI conference. In the keynote, Ted Kummert opened with a look back to a fairy tale vision of self-service BI that he told in 2008. At that time, the fairy tale future was a time when "every end user was able to use BI technologies within their job in order to move forward more effectively" and transitioned to the present time in which SQL Server 2008 R2, Office 2010, and SharePoint 2010 are available to deliver managed self-service BI. This set of technologies is presumably poised to address the needs of the 80% of users that Kummert said do not use BI today. He proceeded to outline a series of activities that users ought to be able to do themselves--from simple changes to a report like formatting or an addtional data visualization to integration of an additional data source. The keynote then continued with a series of demonstrations of both current and future technology in support of self-service BI. Some highlights that interested me: PowerPivot, of course, is the flagship product for self-service BI in the Microsoft BI stack. In the TechEd keynote, which was open to the BI conference attendees, Amir Netz (twitter) impressed the audience by demonstrating interactivity with a workbook containing 100 million rows. He upped the ante at the BI keynote with his demonstration of a future-state PowerPivot workbook containing over 2 billion records. It's important to note that this volume of data is being processed by a server engine, and not in the PowerPivot client engine. (Yes, I think it's impressive, but none of my clients are typically wrangling with 2 billion records at a time. Maybe they're thinking too small. This ability to work quickly with large data sets has greater implications for BI solutions than for self-service BI, in my opinion.) Amir also demonstrated KPIs for the future PowerPivot, which appeared to be easier to implement than in any other Microsoft product that supports KPIs, apart from simple KPIs in SharePoint. (My initial reaction is that we have one more place to build KPIs. Great. It's confusing enough. I haven't seen how well those KPIs integrate with other BI tools, which will be important for adoption.) One more PowerPivot feature that Amir showed was a graphical display of the lineage for calculations. 
    (This is hugely practical, especially if you build up calculations incrementally. You can more easily follow the logic from calculation to calculation. Furthermore, if you need to make a change to one calculation, you can assess the impact on other calculations.) Another product demonstration will be available within the next 30 days--Pivot for Reporting Services. If you haven't seen this technology yet, check it out at www.getpivot.com. (It definitely has a wow factor, but I'm skeptical about its practicality. However, I'm looking forward to trying it out with data that I understand.) Michael Tejedor (twitter) demonstrated a feature that I think is really interesting and not emphasized nearly enough--overshadowed by PowerPivot, no doubt. That feature is the Microsoft Business Intelligence Indexing Connector, which enables search of the content of Excel workbooks and Reporting Services reports. (This capability existed in MOSS 2007, but was more cumbersome to implement. The search results in SharePoint 2010 are not only cooler, but more useful by describing whether the content is found in a table or a chart, for example.) This may yet be the dawning of the age of self-service BI - a phrase I've heard repeated from time to time over the last decade - but I think BI professionals are likely to stay busy for a long while, and need not start looking for a new line of work. Kummert repeatedly referenced strategic BI solutions in contrast to self-service BI to emphasize that self-service BI is not a replacement for the services that BI professionals provide. After all, self-service BI does not appear magically on user desktops (or whatever device they want to use). A supporting infrastructure is necessary, and it grows in complexity in proportion to the need to simplify BI for users. It's one thing to hear the party line touted by Microsoft employees at the BI keynote, but it's another to hear from the people who are responsible for implementing and supporting it within an organization. Rob Collie (blog | twitter), Kasper de Jonge (blog | twitter), Vidas Matelis (site | twitter), and I were invited to join Andrew Brust (blog | twitter) as he led a Birds of a Feather session at TechEd entitled "PowerPivot: Is It the BI Deal-Changer for Developers and IT Pros?" I would single out the prevailing concern in this session as the issue of control. On one side of this issue were those who were concerned that they would lose control once PowerPivot is implemented. On the other side were those who believed that data should be freely accessible to users in PowerPivot, and there was even acknowledgment that users would get the data they want even if it meant they would have to manually enter it into a workbook to have it ready for analysis. For another viewpoint on how PowerPivot played out at the conference, see Rob Collie's observations.

    Collaborative BI

    I have been intrigued by the notion of collaborative BI for a very long time. Before I discovered BI, I was a Lotus Notes developer and later a manager of developers, working in a software company that enabled collaboration in the legal industry. Not only did I help create collaborative systems for our clients, I created a complete project management system from the ground up to collaboratively manage our custom development work. In that case, collaboration involved my team, my client contacts, and me. I was also able to produce my own BI from that system, but didn't know that's what I was doing at the time. 
    Only in recent years has SharePoint begun to catch up with the capabilities that I had with Lotus Notes more than a decade ago. Eventually, I had the opportunity at that job to formally investigate BI as another product offering for our software, and the rest - as they say - is history. I built my first data warehouse with Scott Cameron (who has also ventured into the authoring world by writing Analysis Services 2008 Step by Step and was at the BI Conference last week where I got to reminisce with him for a bit) and that began a career that I never imagined at the time. Fast forward to 2010, and I'm still lauding the virtues of collaborative BI, if only the tools would catch up to my vision! Thus, I was anxious to see what Donald Farmer (blog | twitter) and Rita Sallam of Gartner had to say on the subject in their session "Collaborative Decision Making." As I suspected, the tools aren't quite there yet, but the vendors are moving in the right direction. One thing I liked about this session was a non-Microsoft perspective of the state of the industry with regard to collaborative BI. In addition, this session included a better demonstration of SharePoint collaborative BI capabilities than appeared in the BI keynote. Check out the video in the link to the session to see the demonstration. One of the use cases that was demonstrated was linking from information to a person, because, as Donald put it, "People don't trust data, they trust people."

    The Microsoft BI Stack in General

    A question I hear all the time from students when I'm teaching is how to know what tools to use when there is overlap between products in the BI stack. I've never taken the time to codify my thoughts on the subject, but saw that my friend Dan Bulos provided good insight on this topic from a variety of perspectives in his session, "So Many BI Tools, So Little Time." I thought one of his best points was that ideally you should be able to design in your tool of choice, and then deploy to your tool of choice. Unfortunately, the ideal is yet to become real across the platform. The closest we come is with RDL in Reporting Services, which can be produced from two different tools (Report Builder or Business Intelligence Development Studio's Report Designer), manually, or by a third-party or custom application. I have touted the idea for years (and publicly said so about 5 years ago) that eventually more products would be RDL producers or consumers, but we aren't there yet. Maybe in another 5 years. Another interesting session that covered the BI stack against a backdrop of competitive products was delivered by Andrew Brust. Andrew did a marvelous job of consolidating a lot of information in a way that clearly communicated how various vendors' offerings compared to the Microsoft BI stack. He also made a particularly compelling argument about how the existence of an ecosystem around the Microsoft BI stack provided innovation and opportunities lacking for other vendors. Check out his presentation, "How Does the Microsoft BI Stack...Stack Up?"

    Expo Hall

    I had planned to spend more time in the Expo Hall to see who was doing new things with the BI stack, but didn't manage to get very far. Each time I set out on an exploratory mission, I got caught up in some fascinating conversations with one or more of my peers. I find interacting with people that I meet at conferences just as important as attending sessions to learn something new. There were a couple of items that really caught my eye, however, that I'll share here. 
    Pragmatic Works. Whether you develop SSIS packages, build SSAS cubes, or author SSRS reports (or all of the above), you really must take a look at BI Documenter. Brian Knight (twitter) walked me through the key features, and I must say I was impressed. Once you've seen what this product can do, you won't want to document your BI projects any other way. You can download a free single-user database edition, or choose from more feature-rich standard or professional editions.

    Microsoft Press ebooks. I also stopped by the O'Reilly Media booth to meet some folks that one of my acquisitions editors at Microsoft Press recommended. In case you haven't heard, Microsoft Press has partnered with O'Reilly Media for distribution and publishing. Apart from my interest in learning more about O'Reilly Media as an author, an advertisement in their booth caught my eye, which I think describes a really great move. When you buy Microsoft Press ebooks through the O'Reilly web site, you can receive them in any (or all) of the following formats where possible: PDF, epub, .mobi for Kindle and .apk for Android. You also have lifetime DRM-free access to the ebooks. As someone who is an avid collector of books, I find myself running out of room for storage. In addition, I travel a lot, and it's hard to lug my reference library with me. Today's e-reader options make the move to digital books a more viable way to grow my library. Having a variety of formats means I am not limited to a single device, and lifetime access means I don't have to worry about keeping track of where I've stored my files. Because the e-books are DRM-free, I can copy and paste when I'm compiling notes, and I can print pages when necessary. That's a winning combination in my mind!

    Overall, I was pleased with the BI conference. There were many more sessions that I couldn't attend, either because the room was full when I got there or there were multiple sessions running concurrently that I wanted to see. Fortunately, many of the sessions are accessible for viewing online at http://www.msteched.com/2010/NorthAmerica along with the TechEd sessions. You can spot the BI sessions by the yellow skyline on the title slide of the presentation as shown below.

    Read the article

  • CodePlex Daily Summary for Wednesday, June 29, 2011

    CodePlex Daily Summary for Wednesday, June 29, 2011Popular ReleasesCandescent NUI: Candescent NUI (8263): This is the binary version of the source code in change set 8263.Coding4Fun Tools: Coding4Fun.Phone.Toolkit v1.4.4: Fix for http://coding4fun.codeplex.com/workitem/6869 was incomplete. Back button wouldn't return app bar. Corrected now. High impact bugSiteMap Editor for Microsoft Dynamics CRM 2011: SiteMap Editor (1.0.528.279): Added keyboard shortcuts: - Cut (CTRL+X) - Copy (CTRL+C) - Paste (CTRL+V) - Delete (CTRL+D) - Move up (CTRL+UP ARROW) - Move down (CTRL+DOWN ARROW) Added ability to save/load SiteMap from/to a Xml file on disk Bug fix: - Connect to a server through the status bar was throwing error "Object Reference not set to an instance of an object" - Rename TreeNode.Name after changing TreeNode.TextMicrosoft - Domain Oriented N-Layered .NET 4.0 App Sample: V2.01 ALPHA N-Layered SampleApp .NET 4.0 and EF4.1: V2.0.01 - ALPHARequired Software (Microsoft Base Software needed for Development environment) Visual Studio 2010 RTM & .NET 4.0 RTM (Final Versions) Expression Blend 4 SQL Server 2008 R2 Express/Standard/Enterprise Unity Application Block 2.0 - Published May 5th 2010 http://www.microsoft.com/downloads/en/details.aspx?FamilyID=2D24F179-E0A6-49D7-89C4-5B67D939F91B&displaylang=en http://unity.codeplex.com/releases/view/31277 PEX & MOLES 0.94.51023.0, 29/Oct/2010 - Visual Studio 2010 Power ...Mosaic Project: Mosaic Alpha build 261: - Fixed crash when pinning applications in x64 OS - Added Hub to video widget. It shows videos from Video library (only .wmv and .avi). Can work slow if there are too much files. - Fixed some issues with scrolling - Fixed bug with html widgets - Fixed bug in Gmail widget - Added html today widget missed in previous release - Now Mosaic saves running widgets if you restarting from optionsEnhSim: EnhSim 2.4.9 BETA: 2.4.9 BETAThis release supports WoW patch 4.2 at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Added in some of th....NET Reflector Add-Ins: Reflector V7 Add-Ins: All the add-ins compiled for Reflector V7TerrariViewer: TerrariViewer v4.1 [4.0 Bug Fixes]: Version 4.1 ChangelogChanged how users will Open Player files (This change makes it much easier) This allowed me to remove the "Current player file" labels that were present Changed file control icons Added submit bug button Various Bug Fixes Fixed crashes related to clicking on buffs before a character is loaded Fixed crashes related to selecting "No Buff" when choosing a new buff Fixed crashes related to clicking on a "Max" button on the buff tab before a character is loaded Cor...AcDown????? - Anime&Comic Downloader: AcDown????? v3.0 Beta8: ??AcDown???????????????,?????????????????????。????????????????????,??Acfun、Bilibili、???、???、?????,???????????、???????。 AcDown???????????????????????????,???,???????????????????。 AcDown???????C#??,?????"Acfun?????"。 ????32??64? Windows XP/Vista/7 ????????????? ??:????????Windows XP???,?????????.NET Framework 2.0???(x86)?.NET Framework 2.0???(x64),?????"?????????"??? ??????????????,??????????: ??"AcDown?????"????????? ??v3.0 Beta8 ?? ??????????????? ???????????????(??????????) 
???????...BlogEngine.NET: BlogEngine.NET 2.5: Get DotNetBlogEngine for 3 Months Free! Click Here for More Info 3 Months FREE – BlogEngine.NET Hosting – Click Here! If you want to set up and start using BlogEngine.NET right away, you should download the Web project. If you want to extend or modify BlogEngine.NET, you should download the source code. If you are upgrading from a previous version of BlogEngine.NET, please take a look at the Upgrading to BlogEngine.NET 2.5 instructions. To get started, be sure to check out our installatio...PHP Manager for IIS: PHP Manager 1.2 for IIS 7: This release contains all the functionality available in 62183 plus the following additions: Command Line Support via PowerShell - now it is possible to manage and script PHP installations on IIS by using Windows PowerShell. More information is available at Managing PHP installations with PHP Manager command line. Detection and alert when using local PHP handler - if a web site or a directory has a local copy of PHP handler mapping then the configuration changes made on upper configuration ...MiniTwitter: 1.71: MiniTwitter 1.71 ???? ?? OAuth ???????????? ????????、??????????????????? ???????????????????????SizeOnDisk: 1.0.10.0: Fix: issue 327: size format error when save settings Fix: some UI bindings trouble (sorting, refresh) Fix: user settings file deletion when corrupted Feature: TreeView virtualization (better speed with many folders) Feature: New file type DataGrid column Feature: In KByte view, show size of file < 1024B and > 0 with 3 decimal Feature: New language: Italian Task: Cleanup for speedRawr: Rawr 4.2.0: This is the Downloadable WPF version of Rawr!For web-based version see http://elitistjerks.com/rawr.php You can find the version notes at: http://rawr.codeplex.com/wikipage?title=VersionNotes Rawr AddonWe now have a Rawr Official Addon for in-game exporting and importing of character data hosted on Curse. The Addon does not perform calculations like Rawr, it simply shows your exported Rawr data in wow tooltips and lets you export your character to Rawr (including bag and bank items) like Char...N2 CMS: 2.2: * Web platform installer support available ** Nuget support available What's newDinamico Templates (beta) - an MVC3 & Razor based template pack using the template-first! development paradigm Boilerplate CSS & HTML5 Advanced theming with css comipilation (concrete, dark, roadwork, terracotta) Template-first! development style Content, news, listing, slider, image sizes, search, sitemap, globalization, youtube, google map Display Tokens - replaces text tokens with rendered content (usag...KinectNUI: Jun 25 Alpha Release: Initial public version. No installer needed, just run the EXE.Terraria World Viewer: Version 1.5: Update June 24th Made compatible with the new tiles found in Terraria 1.0.5Kinect Earth Move: KinectEarthMove sample code: Sample code releasedThis is a sample code for Kinect for Windows SDK beta, which was demonstrated on Channel 9 Kinect for Windows SKD beta launch event on June 17 2011. 
Using color image and skeleton data from Kinect and user in front of Kinect can manipulate the earth between his/her hands.NetOffice - The easiest way to use Office in .NET: NetOffice Release 0.9b: Changes: - fix critical issue 262334 (AccessViolationException while using events in a COMAddin) - remove x64 Assemblies (not necessary) Includes: - Runtime Binaries and Source Code for .NET Framework:......v2.0, v3.0, v3.5, v4.0 - Tutorials in C# and VB.Net:..............................................................COM Proxy Management, Events, etc. - Examples in C# and VB.Net:............................................................Excel, Word, Outlook, PowerPoint, Access - COMAddi...patterns & practices: Project Silk: Project Silk Community Drop 12 - June 22, 2011: Changes from previous drop: Minor code changes. New "Introduction" chapter. New "Modularity" chapter. Updated "Architecture" chapter. Updated "Server-Side Implementation" chapter. Updated "Client Data Management and Caching" chapter. Guidance Chapters Ready for Review The Word documents for the chapters are included with the source code in addition to the CHM to help you provide feedback. The PDF is provided as a separate download for your convenience. Installation Overview To ins...New ProjectsA web interface for data search and download to CUAHSI HIS: Hydroweb is an innovative user interface (GUI) created for web driven hydrology data search and download from CUAHSI Hydrologic Information System (HIS). The project has been developed in c#/Silverlight programming environment by leveraging the Bing map Silverlight tools.AcesUp: It's a desktop game developed using Windows Forms. The Game can currently run only if the screen resolution is 1024 x 786. It's a modern implementation of the classic cards game Aces Up. The version that is currently available is called AcesUp Ultimate. Enjoy!AES Encryptor: AES Encryptor (AES.E) is an simple, user-friendly text file encryption program using the Advanced Encryption System (AES). Encryption keys are based on the password that is registered with the program. AES.E is written in Visual Basic.AML Studio: AML Studio makes it easier for developers who are extending Aras Innovator to develop AML queries. Queries can be developed and run using a smart editor with syntax highlighting, code folding, and Intellisense. It's developed in C#.Arca4: Arca4 is a chat server for the Ares Galaxy File Sharing Network. It's developed in C# 4.0.Basic SharePoint-Google-Maps-WebPart for SharePoint-Lists: This JavaScript-Solution improves the standard-functionality of SharePoint-Lists. It displays a new Menu-Link in the standard Menu-Toolbar of a SharePoint-List (which contains addresses / coordinates). By clicking this link Google-Maps will be displayed under the SharePoint-List.BikeBouncer, bike protection for all: Bike registration and protection for free! BikeBouncer helps cyclists keeping their bikes away from thieves. Website: http://bikebouncer.com. The source is now open so people can contribute with new ideas.Cli: General purpose commandline interface for c# projects. Inherit this class and get cli for free. Plan for other languages in the future.CLIRES-3 Clinical Study/Trial Research (MVC3 - Web Application) by Tateeda.com: Clinical Study/trials research application to track subjects and their medication, visits. Dynamically create questioners/survey forms, visits, manage medications, sites, visit schedules and so on. 
Application is pre loaded with forms for Bipolar disorder study and 2500 related medications. Full administrative functionality HIPPA and CFR part 11 implementation. Easy to adopt for any other type of clinical study research. Technology: MVC3, C3 4.0, EF 4.0, jQuery 1.6, MS SQL 2008 R2CloudShot: CloudShot is a simple application to create screenshots and automatically upload it to your dropbox.CodingWheels.DataTypes: DataTypes tries to make it easier for developers to have concrete typesafe objects for working with many common forms of data. Many times these data objects are just doubles or ints floating through your code with abbreviations on them describing what they represent.CommonLibrary.NET Extensions: Highly re-usable code and components that are extensions to CommonLibrary.NET.CommonLibrary.NET Web: Highly re-usable web based code and components that are extensions to the CommonLibrary.NET.CRM 2011 Maintenance Job Editor: This utility is to be used for editing the CRM 2011 maintenance jobs which are automatically scheduled by the installation of CRM. This utility provides similar functionality to CRM 4.0's Scale Group Job Editor [url:http://archive.msdn.microsoft.com/ScaleGroupJobEditor]. Due to the changes in CRM 2011 many modifications had to be made and the functionality has been altered slightly. I look forward to your thoughts and comments on the changes. Excel add-in for BLAS routines: summaryExcel add-in for floating point numbers: Excel add-in for floating point number routines and utilities.Excel add-in for LAPACK routines: LAPACKExcel add-in for market aware date and time routines.: Date and time functions for Excel that know about market conventions such as day count and roll conventions.F# Math Visualizer: MathVisualizer, scirtto in F#, permette di visualizzare espressioni matematiche. Le espressioni devo essere scritte dall'utente seguendo una determinata sintassi. In particolare un espressione puo' essere scritta in maniera estesa, contratta o ibrida.FSharp Toolkit: Contents for build applications on F#.HiFreamWork: Hi,FreamWork C# Custom Library Used Microsoft.Practices.EnterpriseLibrary. ruiyuxing MSN/Mail:ryx1984ryx#hotmail.com QQ:120897051 http://www.cnfield.comhozoroghiab: hozoroghiab is absent o present systemHtml Parametric Web Part: A Web Part to build html with parameters taken from the context of SharePoint 2007ios-framework: ios framework projectITextSharp Sample: ????IText Sharp????,????????PDF??,????flash,????LinGoRoom: Language lab technology-based "thin client". NAudio used for audio capture and playback. The project was developed in C #.Microsoft Dynamics CRM 2011 Customization Editor: Visual Studio 2010 and stand-alone tool that will allow the Customizations.xml to be viewed in a tree structure, with custom editors for each component. Initially editors for the supported Ribbon Editor, Sitemap and ISV.config will be included, with Read-only views for Forms.Modbus for .NET: C# implementation of Modbus communication protocol.Multi-Server MVC Elmah Log Viewer: An Elmah Log viewer for multiple elmah logs.MVC Scaffolding for WCF Data Services: This project contains a set of custom templates for MVC Scaffolding that will allow you to scaffold against a WCF Data Service (oData). Using these templates you can scaffold your client side DataServiceContext, Controller and Views using MVC Scaffolding, allowing you to quickly get a milti tiered MVC application up and running. 
You can use pretty much any oData feed as long as you have the entities for your ADO .NET Data Service defined in your solution these templates should work. OABValidate: This tool is for tracking down unresolvable DNs on directory objects when you have an Exchange server where Offline Address List generation (oabgen) is failing with event 9339 and error code 8004010e.onForms: Weekly fresh this is deletedOpenNETCF Calendar Controls: OpenNETCF Calendar Controls provide a Month View, Week View and Agenda View similar to what is used in the Pocket Outlook Calendar application. Pedal Architecture: Pedal Architecture allows you to quickly build enteprise applications. It currently supports building and hosting composable Windows Communication Foundation services using MEF.powerdown: ?????。rl: rlRooBooks: RooBooks - Books management tool for students and such. (circa 2004)SciFun: simple playground..SIGPRO Desktop: FUNCERNSistema De Tallet: Vehicles tallerSiteMap Editor for Microsoft Dynamics CRM 2011: SiteMap Editor for Microsoft Dynamics CRM 2011 helps developer and customizers to configure the Site Map in a graphical way. You'll no longer have to create solution, add component, export, update Xml and reimport the solution to update the SiteMap.The Electrolytes Website: The Electrolytes Band WebsiteThe Professions: The Professions work items are generalised for professional use. The professions are for highly skilled professionals who have earned their place in society through education, dedication, and specialization in some cases. The associations for particular professions are encouragedUpdateTool: A tool used to update client This project is for personal use. Please do not download in now.WcfFront: Automatic publisher of WCF ServicesWolfpack.Contrib: Contrib project for WolfPack monitoring

    Read the article

  • MSCC: Global Windows Azure Bootcamp

    Mauritius participated and contributed to the Global Windows Azure Bootcamp 2014 (GWAB). Again! And this time stronger than ever, together with 137 other locations in 56 countries world-wide. We had 62 named registrations, 7 guest additions and approximately 10 offline participants prior to the event day. Most interestingly, the organisation of the GWAB through the MSCC helped to increase the number of craftsmen. The Mauritius Software Craftsmanship Community currently has 138 registered members - in less than one year! With those numbers alone we can proudly say that all the preparations and hard work towards this event have already paid off. Personally, I'm really grateful that we had this kind of response, and the feedback from some attendees confirmed that the MSCC is on the right track here on Cyber Island Mauritius. Inspired and motivated by the success of this event, rest assured that there will be more public events like the GWAB. This time it took some time to reflect on our meetup, following my first impression right on the spot:

    "Wow, what an experience to organise and participate in this global event. Overall, I've been very pleased with the preparations and the event itself. Surely, there have been some nicks that we have to address and to improve for future activities like this. Quite frankly, we are not professional event organisers (not yet) but we learned a lot over the past couple of days. A big Thank You to our event sponsors, namely Microsoft Indian Ocean Islands & French Pacific, Ceridian Mauritius and Emtel. Without them this event wouldn't have happened - Thank You! And to the cool team members of Microsoft Student Partners (MSPs). You geeks did a great job! Thanks!"

    So, how many attendees did we actually have? 61! - Awesome - 61 cloud computing instances to help with the research on diabetes. During Saturday afternoon there was even an online publication on L'Express: Les développeurs mauriciens se joignent au combat contre le diabète

    Reactions of other attendees

    Don't just take my word for it... Here are some impressions and feedback from our participants:

    "Awesome event, really appreciated the presentations :-)" -- Kevin on event comments
    "very interesting and enriching." -- Diana on event comments
    "#gwab #gwabmru 2014 great success. Looking forward for gwab 2015" -- Wasiim on Twitter
    "Was there till the end. Awesome Event. I'll surely join upcoming meetup sessions :)" -- Luchmun on event comments
    "#gwabmru was not that cool. left early" -- Mohammad on Twitter

    The overall feedback is positive, but we are absolutely aware that there were quite a number of problems we had to face. We are already looking into that, and into ideas and action plans for how we can improve things for future events.

    The sessions

    We started the day with welcoming speeches by Thierry Coret, Sr. Marketing Manager of Microsoft Indian Ocean Islands & French Pacific, and Vidia Mooneegan, Managing Director and Sr. Vice President of Ceridian Mauritius. The clear emphasis was on the endless possibilities of cloud computing and how it can enable any kind of sector here in the country. Then it was about time to set up the cloud computing services in order to contribute each attendee's cloud computing resources to the global research on diabetes, following a step-by-step guide presented by Arnaud Meslier, Technical Evangelist at Microsoft. Given a rendering package and a configuration file it was very interesting to follow the individual steps in Windows Azure. 
    Also, during the day we were not sure whether the set up had been done correctly, as Mauritius didn't show up on the results board - which should have been the case after approximately 20 to 30 minutes. Anyways, let the minions work... Next, Arnaud gave a brief overview of the variety of services Windows Azure has to offer: whether you need a development environment for your websites or mobile apps, a virtual machine to run your existing applications, or simply a SQL database online, Windows Azure has the right packages available, and the online management portal is really easy to handle. After this, we got a little bit more business oriented while Wasiim Hosenbocus, an employee at Ceridian, took the attendees through the internals of a real-life application and demoed a couple of the existing features. He did a great job of showing how the different services of Windows Azure can be created and pulled together.

    After the lunch break it is always tough to keep the audience awake... And it was my turn. I gave a brief overview of operating and managing a SQL database on Windows Azure. Well, there are actually two options available, and depending on your individual requirements you should be aware of both. The simpler version is called SQL Database, and while provisioning only takes a couple of seconds, you should take into consideration that SQL Database has a number of constraints, like limitations on the actual database size - up to 5 GB as the Web edition and up to 150 GB maximum as the Business edition - among others. Next, it was Chervine Bhiwoo's session on Windows Azure Mobile Services. It was absolutely amazing to see that Mobile Services directly offers you various project templates, like Windows 8 Store App, Android app, iOS app, and even Xamarin cross-platform app development. So, within a couple of minutes you can have your first mobile app active and running on Windows Azure. Furthermore, Chervine showed the attendees that adding another user interface, like Web Sites running on ASP.NET MVC 4, only takes a couple of minutes, too.

    And last but not least, we rounded off the day with Windows Azure Websites and hosting of Virtual Machines, presented by some members of the local Microsoft Student Partners programme. Surely, one of the big advantages of using Windows Azure is the availability of pre-defined installation packages of known web applications, like WordPress, Joomla!, or Ghost. Compared to running your own web site with a traditional web hoster it is surely on par, and depending on your personal level of expertise, Windows Azure provides you with more liberty in terms of configuration than a shared hosting environment might. Running a pre-defined web application is one thing, but in case you would like to have more control over your hosting environment it is highly advised to opt for a virtual machine. Provisioning of an Ubuntu 12.04 LTS system was very simple to do, but it takes a few more minutes than you might expect. So, please be patient and take your time while Windows Azure gets everything in place for you. Afterwards, you can use a Secure Shell (SSH) client like PuTTY in the case of a Linux-based machine, or Remote Desktop Services when operating a Windows Server system, to log in to your virtual machine. At the end of the day we had a great Q&A session and we finalised the event with our raffle of goodies. Participation in the raffle was tied to submission of the session survey and, most gratefully, we had a give-away for everyone. 
    What a nice coincidence to finish off the day. Note: All session slides (and demo code) will be made available on the MSCC event page. Please check the Files section over there.

    (Some) Visual impressions from the event

    Just to give you an idea about what happened during the GWAB 2014 at Ebene...

    Speakers and Microsoft Student Partners are getting ready for the Global Windows Azure Bootcamp 2014
    GWAB 2014 attendees are fully integrated into the hands-on labs and setting up their individual cloud computing services
    60 attendees at the GWAB 2014. Despite some technical difficulties we had a great time running the event
    GWAB 2014: Using the lunch break for networking and exchange of ideas - Great conversations and topics amongst attendees

    There are more pictures on the original event page.

    Questions & Answers

    Following are a couple of questions which were asked and didn't get an answer during the event:

    Q: Is it possible to upload static pages via FTP?
    A: Yes, you can. Have a look at the right-side column on the dashboard of your website. There you'll find information about the FTP and SFTP host names. You can use any FTP client, like FileZilla, to log in. FTP also gives you access to your log files.

    Q: What are the size limitations on SQL Database?
    A: 5 GB on the Web edition, and 150 GB on the Business edition. A maximum of 150 databases (including 'master') per SQL Database server. More details here: General Guidelines and Limitations (Azure SQL Database)

    Q: What's the maximum size of a SQL Server running in a Virtual Machine?
    A: The largest Windows Azure VM currently has 8 CPU cores, 14 or 56 GB of RAM and 16x 1 TB hard drives. More details here: Virtual Machine and Cloud Service Sizes for Azure

    Q: How can we register for Windows Azure?
    A: Mauritius is currently not listed for phone verification. Please get in touch with Arnaud Meslier at Microsoft IOI & FP.

    Q: Can I use my own domain name for Windows Azure websites?
    A: Yes, you can. But this might require upscaling your account to Standard.

    In case I missed a question and answer, please use the comment section at the end of the article. Thanks!

    Final results

    Every participant was instructed during the hands-on-lab session on how to set up a cloud computing service in their account. Of course, I won't keep the results from you...

    Global Azure Lab GWAB 2014: Our cloud computing contribution to the research of diabetes

    And I would say Mauritius did a good job!

    Upcoming Events

    What are the upcoming events here in Mauritius? So far, we have the following ones (incomplete list as usual) in chronological order:

    Launch of Microsoft SQL Server 2014 (15.4.2014)
    Corsair Hackers Reboot (19.4.2014)
    WebCup (TBA ~ June 2014)
    Developers Conference (TBA ~ July 2014)
    Linuxfest 2014 (TBA ~ November 2014)

    Hopefully, there will be more announcements during the next couple of weeks and months. If you know about any other event, like a bootcamp, a code challenge or a hackathon here in Mauritius, please drop me a note in the comment section below this article. Thanks!

    Networking and job/project opportunities

    Despite having technical presentations on Windows Azure, an event like this always offers a great bunch of possibilities and opportunities to get in touch with new people in IT and to exchange experience with other like-minded people. 
    As I already wrote about Communities - The importance of exchange and discussion - I had a great conversation with representatives of the University des Mascareignes, which is currently embracing cloud infrastructure and cloud computing for its various campuses in the Indian Ocean. For the MSCC it would be a great experience to stay in touch with them and to work on upcoming, common activities. Furthermore, I had a very good conversation with Thierry and Ludovic of Microsoft IOI & FP on the necessity of user groups and IT communities here on the island. It's great to see that the MSCC is currently on a roll and that local companies share our thoughts on promoting IT careers and the exchange of IT knowledge in such an open way. I'm also looking forward to being able to participate in and contribute to more events in the near future.

    My resume of the day

    We learned a lot today and there is always room for improvement! It was an awesome event and quite frankly it was a pleasure to spend the day with so many enthusiastic IT people in the same room. It was a great experience to organise such an event locally and to participate on a global scale in support of the GlyQ-IQ Technology and its research on diabetes. I was so pleased to see the involvement of new MSCC members in taking the opportunity to share and learn about the power of cloud computing. The Mauritius Software Craftsmanship Community is on the right track and this year's bootcamp on Windows Azure only marked the beginning of our journey. Thank you to our sponsors and my kudos to the MSPs!

    Update: Media coverage

    The event has been reported in local media, too. Following are some resources:

    Orange - Local - Business: Le cloud, pour des recherches approfondies sur le diabète
    Maurice Info.mu: Le cloud, pour des recherches approfondies sur le diabète
    Le Quotidien Pg 2: Global Windows Azure Bootcamp 2014 - Le cloud pour des recherches approfondies sur le diabète
    The Observer Pg 12: Le cloud, pour des recherches approfondies sur le diabète
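
    To complement the session notes above on operating a SQL database on Windows Azure, here is a minimal sketch in C# of connecting to an Azure SQL Database with ADO.NET. The server name, database name and credentials are placeholders to be replaced with your own values; the connection string follows the commonly documented pattern for SQL Database, and this example is an illustration rather than part of the original session material.

        using System;
        using System.Data.SqlClient;

        class AzureSqlDatabaseSample
        {
            static void Main()
            {
                // Placeholder values - replace with the server, database and login
                // of your own SQL Database as shown in the management portal.
                const string connectionString =
                    "Server=tcp:yourserver.database.windows.net,1433;" +
                    "Database=yourdatabase;" +
                    "User ID=yourlogin@yourserver;" +
                    "Password=yourpassword;" +
                    "Encrypt=True;Connection Timeout=30;";

                using (var connection = new SqlConnection(connectionString))
                {
                    connection.Open();

                    // A trivial query just to confirm that the connection works.
                    using (var command = new SqlCommand("SELECT @@VERSION;", connection))
                    {
                        Console.WriteLine(command.ExecuteScalar());
                    }
                }
            }
        }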

    Read the article

  • Understanding the value of Customer Experience & Loyalty for the Telecommunications Industry

    - by raul.goycoolea
    Worried by economic woes and market forces, especially in mature markets, communications service providers (CSPs) increasingly focus on improving customer experience. In fact, it seems difficult to find a major message by a C-level executive in the developed world that does not include something on "meeting and exceeding customers' needs". Frequently in customer satisfaction studies by prominent firms, CSPs fall short of the leadership demonstrated by other industries that take customer-centric approaches to their bottom-line strategies. Consider the following: Despite the continued impact of the global economic crisis, in July 2010 Apple Computer posted record revenue and net quarterly profit. Those who attribute the results primarily to the iPhone 4 launch should note that Apple also shipped around 30% more Macintosh computers than in the same period the previous year. Even sales of the iPod line increased by 8% in a highly commoditized, shrinking media player market. Finally, Apple began selling iPads during the quarter, with total sales of more than 3 million units. What does Apple have that the others lack? Well, some great products (and services) to be sure, but it also excels at customer service and support, marketing, and distribution, and has one of the strongest brands globally. Its products are useful, simple to use, easy to acquire and augment, high quality, and considered very cool. They also evoke such an emotional response from many of Apple's customers that they turn up their noses at competitive products.

    In other words, Apple appears to have mastered virtually every aspect of customer experience and the resultant loyalty of its customer base - even in difficult financial times. Through that unwavering customer focus, Apple continues to drive its revenues and profits to new heights. Other customer loyalty leaders like Wal-Mart, Google, Toyota and Honda are also doing well by focusing on customer experience as an essential driver of profitability. Service providers should note this performance and ask themselves how they might leverage the same principles to increase their own profitability. After all, that is what customer experience and loyalty are all about: profitability.

    To successfully manage all the critical touch points of customer experience, CSPs must shun the one-size-fits-all approach. They can no longer afford to view customer service fundamentally as an act of altruism - a mentality that dates back to the industry's civil service days, when CSPs were typically government organizations that were critical to economic development and public safety. As regulators and public officials have pushed, and continue to push, service providers to new heights of reliability - using incentives and punishments - most CSPs already have some of the fundamental building blocks of customer service in place. Yet despite that history and experience, service providers still lag other industries in providing what is seen as good customer service.

    As we observed in the TMF's 2009 Insights Research report, Customer Experience Management: Driving Loyalty & Profitability, there has been a resurgence of interest among CSPs. More and more of them have stated ambitions to catch up with other industries, and they are realizing that good customer service is a powerful strategy for increasing business performance and profitability, not an act of good will. CSPs are recognizing the connection between customer experience and profitability, as demonstrated in many studies. 
    For example, according to research by Bain & Company, a 5 percent improvement in customer retention rates can yield as much as a 75 percent increase in profits for companies across a range of industries. After decades of customer experience strategy formulation, Bain partner and business author Frederick Reichheld considers "would you recommend us to a friend?" the ultimate question to ask a customer. How many times have you or your friends recommended an iPod, iPhone or a Mac? What do your children recommend to their peers? Their peers to them?

    There are certain steps service providers have to take to create more personalized relationships with their customers, as well as to reduce churn and increase profitability, all while becoming leaner and more agile. First, they have to define customer experience. We define it as the sum of observations, perceptions, thoughts and feelings arising from interactions and relationships between customers and their service provider(s). Virtually every customer touch point - whether directly or indirectly linked to service providers and their partners - contributes to customer perception, satisfaction, loyalty, and ultimately profitability. Gaining leadership in customer experience and satisfaction will not be a simple task, as it is affected by virtually every customer-facing aspect of the service provider, and in turn impacts the service provider deeply - especially on the all-important bottom line. The scope of issues affecting customer experience is complex and dynamic. With new services, devices and applications extending the basis of customer experience to domains beyond the direct control of the service provider, it is likely to increase in complexity and dynamism.

    Customer loyalty = increased profits

    As stated earlier, customer experience programs are not fundamentally altruistic exercises, but a strategic means of improving competitiveness and profitability in the short and long term. Loyalty is essential to deriving long-term profits from customers. Some of the earliest loyalty programs date back to the 1930s, when packaged goods companies offered embedded coupons for rewards to buyers, and eventually retail chains began offering reward programs to frequent shoppers. These programs continued for decades but were leapfrogged in the 1980s by more aggressive programs from the airlines. This movement was led by American Airlines, which launched the first full-scale loyalty marketing program of the modern era with the AAdvantage frequent flyer scheme. It was the first to reward frequent fliers with notional air miles that could be accumulated and later redeemed for free travel.

    Figure 1: Opportunities example of customer loyalty-driven profit

    Other airlines and travel providers were quick to grasp the incredible value of providing customers with an incentive to use their company exclusively. 
    Within a few years, dozens of travel industry companies launched similar initiatives, and now loyalty programs are achieving near-ubiquity in many service industries, especially those in which it is difficult to differentiate offerings by product attributes. The belief is that increased profitability will result from customer retention efforts because:

    • The cost of acquisition occurs only at the beginning of a relationship: the longer the relationship, the lower the amortized cost;
    • Account maintenance costs decline as a percentage of total costs, or as a percentage of revenue, over the lifetime of the relationship;
    • Long-term customers tend to be less inclined to switch and less price sensitive, which can result in stable unit sales volume and increases in dollar-sales volume;
    • Long-term customers may initiate word-of-mouth promotions and referrals, which cost the company nothing and arguably are the most effective form of advertising;
    • Long-term customers are more likely to buy ancillary products and higher margin supplemental products;
    • Long-term customers tend to be satisfied with their relationship with the company and are less likely to switch to competitors, making market entry or gains in market share by competitors difficult;
    • Regular customers tend to be less expensive to service, as they are familiar with the processes involved, require less 'education', and are consistent in their order placement;
    • Increased customer retention and loyalty makes the employees' jobs easier and more satisfying. In turn, happy employees feed back into higher customer satisfaction in a virtuous circle.

    Figure 2: The virtuous circle of customer loyalty

    Figure 2 represents a high-level example of a virtuous cycle driven by customer satisfaction and loyalty, depicting how superiority in product and service offerings, as well as strong customer support by competent employees, lead to higher sales and ultimately profitability. As stated above, this is not a new concept, but succeeding with it is difficult. It has eluded many a company driven to achieve profitability goals. Of course, for this circle to be virtuous, the customer relationship(s) must be profitable. Trying to maintain the loyalty of unprofitable customers is not a viable business strategy. It is, therefore, important that marketers can assess the profitability of each customer (or customer segment), and either improve or terminate relationships that are not profitable. This means each customer's 'relationship costs' must be understood and compared to their 'relationship revenue'. Customer lifetime value (CLV) is the most commonly used metric here, as it is generally accepted as a representation of exactly how much each customer is worth in monetary terms, and therefore a determinant of exactly how much a service provider should be willing to spend to acquire or retain that customer. CLV models make several simplifying assumptions and often involve the following inputs:

    • Churn rate represents the percentage of customers who end their relationship with a company in a given period;
    • Retention rate is calculated by subtracting the churn rate percentage from 100;
    • Period/horizon equates to the units of time into which a customer relationship can be divided for analysis. A year is the most commonly used period for this purpose. Customer lifetime value is a multi-period calculation, often projecting three to seven years into the future. In practice, analysis beyond this point is viewed as too speculative to be reliable. The model horizon is the number of periods used in the calculation;
    • Periodic revenue is the amount of revenue collected from a customer in a given period (though this is often extended across multiple periods into the future to understand lifetime value), such as usage revenue, revenues anticipated from cross- and upselling, and often some weighting for referrals by a loyal customer to others;
    • Retention cost describes the amount of money the service provider must spend, in a given period, to retain an existing customer. Again, this is often forecast across multiple periods. Retention costs include customer support, billing, promotional incentives and so on;
    • Discount rate means the cost of capital used to discount future revenue from a customer. Discounting is an advanced method used in more sophisticated CLV calculations;
    • Profit margin is the projected profit as a percentage of revenue for the period. This may be reflected as a percentage of gross or net profit. Again, this is generally projected across the model horizon to understand lifetime value.

    (A simple illustration of how these inputs combine is sketched at the end of this article.)

    A strong focus on managing these inputs can help service providers realize stronger customer relationships and profits, but there are some obstacles to overcome in achieving accurate calculations of CLV, such as the complexity of allocating costs across the customer base. There are many costs that serve all customers and must be properly allocated across the base, and often a simple proportional allocation across the whole base or a segment may not accurately reflect the true cost of serving that customer. This is made worse by the fragmentation of customer information, which is likely to be spread across a variety of product or operations groups, and may be difficult to aggregate due to different representations. In addition, there is the complexity of account relationships and structures to take into consideration. Complex account structures may not be understood or properly represented. For example, a profitable customer may have a separate account for a second home or another family member, which may appear to be unprofitable. If the service provider cannot relate the two accounts, CLV is not properly represented and any resultant cancellation of the apparently unprofitable account may result in the customer churning from the profitable one.

    In summary, if service providers are to realize strong customer relationships and their attendant profits, there must be a very strong focus on data management. This needs to be coupled with analytics that help business managers and those who work in customer-facing functions offer highly personalized solutions to customers, while maintaining profitability for the service provider. It's clear that acquiring new customers is expensive. Advertising costs, campaign management expenses, promotional service pricing and discounting, and equipment subsidies make a serious dent in a new customer's profitability. That is especially true given the rising subsidies for smartphone users, which service providers hope will result in greater profits from data services in future. The situation is made worse by falling prices and greater competition in mature markets. Customer acquisition through industry consolidation isn't cheap either. A North American service provider spent about $2,000 per subscriber in its acquisition of a smaller company earlier this year. 
    While this has allowed it to leapfrog to become the largest mobile service provider in the country, it required a total investment of more than $28 billion (including assumption of the acquiree's debt). While many operating cost synergies clearly made this deal more attractive to the acquiring company, this is certainly an expensive way to acquire customers: the cost per subscriber in this case is not out of line with the prices others have paid for acquisitions. While growth by acquisition certainly increases overall revenues, it often creates tremendous challenges for profitability. Organic growth through increased customer loyalty and retention is a more effective driver of profit, as well as a stronger predictor of future profitability. Service providers, especially those in mature markets, are increasingly recognizing this and taking steps toward creating a more personalized, flexible and satisfying experience for their customers. In summary, the clearest path to profitability for companies in virtually all industries is through customer retention and maximization of lifetime value. Service providers would do well to recognize this and focus attention on profitable customer relationships.
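
    To make the CLV inputs listed above more concrete, here is a small illustrative sketch in C#. It uses one common simplified form of the calculation - the expected contribution for each period (periodic revenue times profit margin, minus retention cost), weighted by the probability that the customer is still retained and discounted back to today - and all figures are invented for illustration only; they are not drawn from the studies cited in this article.

        using System;

        class CustomerLifetimeValueSketch
        {
            // Simplified CLV: sum over the model horizon of the discounted,
            // survival-weighted contribution of one customer per period.
            static double CustomerLifetimeValue(
                double churnRate,        // e.g. 0.20 = 20% churn per period
                int horizonPeriods,      // model horizon, e.g. 5 periods (years)
                double periodicRevenue,  // revenue collected per period
                double retentionCost,    // spend per period to retain the customer
                double discountRate,     // cost of capital per period
                double profitMargin)     // profit as a fraction of revenue
            {
                double retentionRate = 1.0 - churnRate; // retention = 100% - churn
                double clv = 0.0;

                for (int t = 1; t <= horizonPeriods; t++)
                {
                    double contribution = periodicRevenue * profitMargin - retentionCost;
                    double survival = Math.Pow(retentionRate, t);      // still a customer in period t
                    double discount = Math.Pow(1.0 + discountRate, t); // discount factor for period t
                    clv += survival * contribution / discount;
                }

                return clv;
            }

            static void Main()
            {
                // Invented example figures, for illustration only.
                double clv = CustomerLifetimeValue(
                    churnRate: 0.20,
                    horizonPeriods: 5,
                    periodicRevenue: 600.0,
                    retentionCost: 50.0,
                    discountRate: 0.10,
                    profitMargin: 0.35);

                Console.WriteLine("Illustrative customer lifetime value: {0:F2}", clv);
            }
        }

    Comparing such a figure with the estimated acquisition cost per subscriber is one simple way to judge whether a customer or segment is worth acquiring or retaining at all.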

    Read the article

  • The Incremental Architect&acute;s Napkin - #2 - Balancing the forces

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/06/02/the-incremental-architectacutes-napkin---2---balancing-the-forces.aspx

    Categorizing requirements is the prerequisite for economic architectural decisions. Not all requirements are created equal. However, to truly understand and describe the requirement forces pulling on software development, I think further examination of the requirement aspects is warranted.

    Aspects of Functionality

    There are two sides to Functionality requirements. It's about what a piece of software should do. I call that the Operations it implements. Operations are defined by expressions and control structures or calls to frameworks of some sort, i.e. (business) logic statements. Operations calculate, transform, aggregate, validate, send, receive, load, store etc. Operations are about behavior; they take input and produce output by considering state. I'm not using the term "function" here, because functions - or methods or sub-programs - are not necessary to implement Operations. Functions belong to a different sub-aspect of requirements (see below). Operations alone are not enough, though, to make a customer happy with regard to his/her Functionality requirements. Only correctly implemented Operations provide full value. This should make clear why testing is so important. And not just manual tests during development of some operational feature, but automated tests. Because only automated tests scale as the number of operations increases over time. Without automated tests there is no guarantee formerly correct operations are still correct after more have been added. To retest all previous operations manually is infeasible. So whoever relies just on manual tests is not really balancing the two forces of Operations and Correctness. With manual tests more weight is put on the Operations side of the scale. That might be ok for a short period of time - but in the long run it will bite you. You need to plan for Correctness in the long run from the first day of your project on.

    Aspects of Quality

    As important as Functionality is, it's not the driver for software development. No software has ever been written just to implement some operation in code. We don't need computers just to do something. Everything computers can do with software we can do without them. Well, at least given enough time and resources. We could calculate the most complex formulas without computers. We could do auctions with millions of people without computers. The only reason we want computers to help us with this and a million other Operations is… we don't want to wait for the results very long. Or we want fewer errors. Or we want easier accessibility to complicated solutions. So the main reason for customers to buy/order software is some Quality. They want some Functionality with a higher Quality (e.g. performance, scalability, usability, security…) than without the software. But Qualities come in at least two flavors: Most important are Primary Qualities. Those are the Qualities the software truly is written for. Take an online auction website for example. Its Primary Qualities are performance, scalability, and usability, I'd say. Auctions should come within reach of millions of people; setting up an auction should be very easy; finding a suitable auction and bidding on it should be as fast as possible. Only if those Qualities have been implemented does security become relevant. A secure auction website is important - but not as important as a fast auction website. 
    Nobody would want to use the most secure auction website if it was unbearably slow. But there would be people willing to use the fastest auction website even if it was lacking security. That's why security - with regard to online auction software - is not a Primary Quality, but just a Secondary Quality. It's a supporting quality, so to speak. It does not deliver value by itself. With password manager software this might be different: there, security might be a Primary Quality.
    Please don't get me wrong: I don't want to denigrate any Quality. There's a long list of non-functional requirements at Wikipedia. They are all created equal - but that does not mean they are equally important for all software projects. When confronted with Quality requirements, check with the customer which are primary and which are secondary. That will help to make good economic decisions when in a crunch. Resources are always limited - but requirements are a bottomless ocean.
    Aspects of Security of Investment
    Functionality and Quality are traditionally the requirement aspects cared for most - by customers and developers alike. Even today, when pressure rises in a project, tunnel vision will focus on them. Any measures to create and hold up Security of Investment (SoI) will be out the window pretty quickly. Resistance to customers and/or management is futile. As long as SoI is not placed on equal footing with Functionality and Quality, it's bound to suffer under pressure. Looking closer at what SoI means will help us become more conscious about it and make customers and management aware of the risks of neglecting it. SoI to me has two facets:
    Production Efficiency (PE) is about speed of delivering value. Customers like short response times. Short response times mean less money spent. So whatever makes software development faster supports this requirement. This must not lead to duct tape programming and banging out features by the dozen, though. Because customers don't just want Operations and Quality, but also Correctness. So if Correctness gets compromised by focusing too much on Production Efficiency, it will backfire. Customers want PE not just today, but over the whole course of a software's lifecycle. That means it's not just about coding speed, but equally about code quality. If poor code quality leads to rework, PE is at an unsatisfactory level. Also, if code production leads to waste, it's unsatisfactory - because the effort which went into waste could have been used to produce value. Rework and waste cost money. Rework and waste abound, however, as long as PE is not addressed explicitly with management and customers. Thanks to the Agile and Lean movements that's increasingly the case. Nevertheless more could and should be done in many teams. Each and every developer should keep in mind that Production Efficiency is as important to the customer as Functionality and Quality - whether he/she states it or not.
    Making software development more efficient is important - but still, sooner or later even agile projects are going to hit a glass ceiling, at least as long as they neglect the second SoI facet: Evolvability. Delivering correct, high-quality functionality in short cycles today is good. But not just any software structure will allow this to happen for an indefinite amount of time.[1] The less explicitly software was designed, the sooner it's going to get stuck. 
    Big ball of mud, monolith, brownfield, legacy code, technical debt… there are many names for software structures that have lost the ability to evolve, to be easily changed to accommodate new requirements. An evolvable code base is the opposite of a brownfield. It's code which can be easily understood (by developers with sufficient domain expertise) and then easily changed to accommodate new requirements. Ideally, the cost of adding feature X to an evolvable code base is independent of when it is requested - or at least the cost should only increase linearly, not exponentially.[2]
    Clean Code, Agile Architecture, and even traditional Software Engineering are concerned with Evolvability. However, it seems no systematic way of achieving it has been laid out yet. TDD + SOLID help - but still… When I look at the reality of design ability in teams, I see much room for improvement. As stated previously, SoI - or to be more precise: Evolvability - can hardly be measured. Plus the customer rarely states an explicit expectation with regard to it. That's why I think special care must be taken not to neglect it. Postponing it to some large refactorings should not be an option. Rather, Evolvability needs to be a core concern of every single developer day. This should not mean Evolvability is more important than any of the other requirement aspects. But neither is it less important. That's why more effort needs to be invested in it, to bring it on par with the other aspects, which usually are much more in focus.
    In closing
    As you see, requirements are of quite different kinds. Not taking that into account makes it harder to understand the customer and to make economic decisions. Those sub-aspects of requirements are forces pulling in different directions. To improve performance might have an impact on Evolvability. To increase Production Efficiency might have an impact on security, etc. No requirement aspect should go unchecked when deciding how to allocate resources. Balancing should be explicit. And it should be possible to trace back each decision to a requirement. Why is there a null check on parameters at the start of the method? Why are there 5000 LOC in this method? Why are there interfaces on those classes? Why is this functionality running on the threadpool? Why is this function defined on that class? Why is this class depending on three other classes? These and a thousand more questions do not mean anything has to be different in a code base. But it's important to know the reason behind all of these decisions. Because not knowing the reason possibly means waste and suboptimal decisions.
    And how do we ensure that we balance all requirement aspects? That needs practices and transparency. Practices mean doing things a certain way and not another, even though another way might be possible. We're dealing with dangerous tools here, like a knife is a dangerous tool. Harm can be done if we use our tools in just any way, at the whim of the moment. Over the centuries, rules and practices have been established for how to use knives. You don't put them in people's legs just because you feel like it. You hand over a knife with the handle towards the receiver. You might not even be allowed to cut round food like potatoes or eggs with it. The same should be the case for dangerous tools like object-orientation, remote communication, threads etc. We need practices to use them in a way so requirements are balanced almost automatically. 
    In addition, to be able to work on software as a team we need transparency. We need means to share our thoughts, to work jointly on mental models. So far our tools are focused on working with code. Testing frameworks, build servers, DI containers, IntelliSense, refactoring support… That's all nice and well. I don't want to miss any of that. But I think it's not enough. We're missing mental tools, tools for making thinking and talking about software (independently of code) easier.
    You might think enough such tools already exist, like all those UML diagram types or Flow Charts. But then, isn't it strange that hardly any team is using them to design software? Or is that just due to a lack of education? I don't think so. It's a matter of value/weight ratio: the current mental tools are too heavyweight compared to the value they deliver. So my conclusion is: we need lightweight tools to really be able to balance requirements. Software development is complex. We need guidance not to forget important aspects. That's like with flying an airplane. Pilots don't just jump in and take off for their destination. Yes, there are times when they are "flying by the seats of their pants", when they are just experts doing things intuitively. But most of the time they are going through honed practices called checklists. See "The Checklist Manifesto" for very enlightening details on this. Maybe then I should say it like this: We need more checklists for the complex business of software development.[3]
    [1] But that's what software development mostly is about: changing software over an unknown period of time. It needs to be corrected in order to finally provide promised operations. It needs to be enhanced to provide ever more operations and qualities. All this without knowing when it's going to stop. Probably never - until "maintainability" hits a wall when the technical debt is too large, the brownfield too deep. Software development is not a sprint, is not a marathon, not even an ultra marathon, because to all of those there is a foreseeable end. Software development is like continuously and forever running…
    [2] And sometimes I dare to think that costs could even decrease over time. Think of it: With each feature a software becomes richer in functionality. So with each additional feature the chance increases that there is already functionality helping its implementation. That should lead to lower costs for feature X if it's requested later rather than sooner. X requested later could stand on the shoulders of previous features. Alas, reality seems to be far from this despite 20+ years of admonishing developers to think in terms of reusability.[1]
    [3] Please don't get me wrong: I don't want to bog down the "art" of software development with heavyweight practices and heaps of rules to follow. The framework we need should be lightweight. It should not stand in the way of delivering value to the customer. Its purpose is even to make that easier by helping us to focus and decreasing waste and rework.
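    To make the earlier point about automated tests a bit more concrete, here is a minimal, hypothetical sketch of an automated regression check for a single Operation. The operation, its rules, and the bare-assert testing style are illustrative assumptions of mine, not taken from the post; the point is only that such checks can be re-run mechanically every time another Operation is added.

        #include <cassert>

        // A tiny "Operation": validate that an auction bid is acceptable.
        // The rules here are purely illustrative.
        bool isValidBid(double currentPrice, double newBid) {
            return newBid > 0.0 && newBid > currentPrice;
        }

        // Automated regression checks for the Operation. Because they run without
        // human effort, they can be re-executed on every build, which is what lets
        // Correctness keep pace with a growing number of Operations.
        int main() {
            assert(isValidBid(10.0, 11.0) == true);   // higher bid is accepted
            assert(isValidBid(10.0, 10.0) == false);  // equal bid is rejected
            assert(isValidBid(10.0, -5.0) == false);  // negative bid is rejected
            return 0;
        }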

    Read the article

  • Laissez les bon temps rouler! (Microsoft BI Conference 2010)

    - by smisner
    Laissez les bons temps rouler" is a Cajun phrase that I heard frequently when I lived in New Orleans in the mid-1990s. It means "Let the good times roll!" and encapsulates a feeling of happy expectation. As I met with many of my peers and new acquaintances at the Microsoft BI Conference last week, this phrase kept running through my mind as people spoke about their plans in their respective businesses, the benefits and opportunities that the recent releases in the BI stack are providing, and their expectations about the future of the BI stack.Notwithstanding some jabs here and there to point out the platform is neither perfect now nor will be anytime soon (along with admissions that the competitors are also not perfect), and notwithstanding several missteps by the event organizers (which I don't care to enumerate), the overarching mood at the conference was positive. It was a refreshing change from the doom and gloom hovering over several conferences that I attended in 2009. Although many people expect economic hardships to continue over the coming year or so, everyone I know in the BI field is busier than ever and expects to stay busy for quite a while.Self-Service BISelf-service was definitely a theme of the BI conference. In the keynote, Ted Kummert opened with a look back to a fairy tale vision of self-service BI that he told in 2008. At that time, the fairy tale future was a time when "every end user was able to use BI technologies within their job in order to move forward more effectively" and transitioned to the present time in which SQL Server 2008 R2, Office 2010, and SharePoint 2010 are available to deliver managed self-service BI.This set of technologies is presumably poised to address the needs of the 80% of users that Kummert said do not use BI today. He proceeded to outline a series of activities that users ought to be able to do themselves--from simple changes to a report like formatting or an addtional data visualization to integration of an additional data source. The keynote then continued with a series of demonstrations of both current and future technology in support of self-service BI. Some highlights that interested me:PowerPivot, of course, is the flagship product for self-service BI in the Microsoft BI stack. In the TechEd keynote, which was open to the BI conference attendees, Amir Netz (twitter) impressed the audience by demonstrating interactivity with a workbook containing 100 million rows. He upped the ante at the BI keynote with his demonstration of a future-state PowerPivot workbook containing over 2 billion records. It's important to note that this volume of data is being processed by a server engine, and not in the PowerPivot client engine. (Yes, I think it's impressive, but none of my clients are typically wrangling with 2 billion records at a time. Maybe they're thinking too small. This ability to work quickly with large data sets has greater implications for BI solutions than for self-service BI, in my opinion.)Amir also demonstrated KPIs for the future PowerPivot, which appeared to be easier to implement than in any other Microsoft product that supports KPIs, apart from simple KPIs in SharePoint. (My initial reaction is that we have one more place to build KPIs. Great. It's confusing enough. I haven't seen how well those KPIs integrate with other BI tools, which will be important for adoption.)One more PowerPivot feature that Amir showed was a graphical display of the lineage for calculations. 
    (This is hugely practical, especially if you build up calculations incrementally. You can more easily follow the logic from calculation to calculation. Furthermore, if you need to make a change to one calculation, you can assess the impact on other calculations.)
    Another product demonstration will be available within the next 30 days--Pivot for Reporting Services. If you haven't seen this technology yet, check it out at www.getpivot.com. (It definitely has a wow factor, but I'm skeptical about its practicality. However, I'm looking forward to trying it out with data that I understand.)
    Michael Tejedor (twitter) demonstrated a feature that I think is really interesting and not emphasized nearly enough--overshadowed by PowerPivot, no doubt. That feature is the Microsoft Business Intelligence Indexing Connector, which enables search of the content of Excel workbooks and Reporting Services reports. (This capability existed in MOSS 2007, but was more cumbersome to implement. The search results in SharePoint 2010 are not only cooler, but more useful by describing whether the content is found in a table or a chart, for example.)
    This may yet be the dawning of the age of self-service BI - a phrase I've heard repeated from time to time over the last decade - but I think BI professionals are likely to stay busy for a long while, and need not start looking for a new line of work. Kummert repeatedly referenced strategic BI solutions in contrast to self-service BI to emphasize that self-service BI is not a replacement for the services that BI professionals provide. After all, self-service BI does not appear magically on user desktops (or whatever device they want to use). A supporting infrastructure is necessary, and grows in complexity in proportion to the need to simplify BI for users.
    It's one thing to hear the party line touted by Microsoft employees at the BI keynote, but it's another to hear from the people who are responsible for implementing and supporting it within an organization. Rob Collie (blog | twitter), Kasper de Jonge (blog | twitter), Vidas Matelis (site | twitter), and I were invited to join Andrew Brust (blog | twitter) as he led a Birds of a Feather session at TechEd entitled "PowerPivot: Is It the BI Deal-Changer for Developers and IT Pros?" I would single out the prevailing concern in this session as the issue of control. On one side of this issue were those who were concerned that they would lose control once PowerPivot is implemented. On the other side were those who believed that data should be freely accessible to users in PowerPivot, and even acknowledgment that users would get the data they want even if it meant they would have to enter it into a workbook manually to have it ready for analysis. For another viewpoint on how PowerPivot played out at the conference, see Rob Collie's observations.
    Collaborative BI
    I have been intrigued by the notion of collaborative BI for a very long time. Before I discovered BI, I was a Lotus Notes developer and later a manager of developers, working in a software company that enabled collaboration in the legal industry. Not only did I help create collaborative systems for our clients, I created a complete project management system from the ground up to collaboratively manage our custom development work. In that case, collaboration involved my team, my client contacts, and me. I was also able to produce my own BI from that system as well, but didn't know that's what I was doing at the time. 
    Only in recent years has SharePoint begun to catch up with the capabilities that I had with Lotus Notes more than a decade ago. Eventually, I had the opportunity at that job to formally investigate BI as another product offering for our software, and the rest - as they say - is history. I built my first data warehouse with Scott Cameron (who has also ventured into the authoring world by writing Analysis Services 2008 Step by Step and was at the BI Conference last week where I got to reminisce with him for a bit) and that began a career that I never imagined at the time.
    Fast forward to 2010, and I'm still lauding the virtues of collaborative BI, if only the tools would catch up to my vision! Thus, I was anxious to see what Donald Farmer (blog | twitter) and Rita Sallam of Gartner had to say on the subject in their session "Collaborative Decision Making." As I suspected, the tools aren't quite there yet, but the vendors are moving in the right direction. One thing I liked about this session was a non-Microsoft perspective of the state of the industry with regard to collaborative BI. In addition, this session included a better demonstration of SharePoint collaborative BI capabilities than appeared in the BI keynote. Check out the video in the link to the session to see the demonstration. One of the use cases that was demonstrated was linking from information to a person, because, as Donald put it, "People don't trust data, they trust people."
    The Microsoft BI Stack in General
    A question I hear all the time from students when I'm teaching is how to know what tools to use when there is overlap between products in the BI stack. I've never taken the time to codify my thoughts on the subject, but saw that my friend Dan Bulos provided good insight on this topic from a variety of perspectives in his session, "So Many BI Tools, So Little Time." I thought one of his best points was that ideally you should be able to design in your tool of choice, and then deploy to your tool of choice. Unfortunately, the ideal is yet to become real across the platform. The closest we come is with the RDL in Reporting Services, which can be produced from two different tools (Report Builder or Business Intelligence Development Studio's Report Designer), manually, or by a third-party or custom application. I have touted the idea for years (and publicly said so about 5 years ago) that eventually more products would be RDL producers or consumers, but we aren't there yet. Maybe in another 5 years.
    Another interesting session that covered the BI stack against a backdrop of competitive products was delivered by Andrew Brust. Andrew did a marvelous job of consolidating a lot of information in a way that clearly communicated how various vendors' offerings compared to the Microsoft BI stack. He also made a particularly compelling argument about how the existence of an ecosystem around the Microsoft BI stack provided innovation and opportunities lacking for other vendors. Check out his presentation, "How Does the Microsoft BI Stack...Stack Up?"
    Expo Hall
    I had planned to spend more time in the Expo Hall to see who was doing new things with the BI stack, but didn't manage to get very far. Each time I set out on an exploratory mission, I got caught up in some fascinating conversations with one or more of my peers. I find interacting with people that I meet at conferences just as important as attending sessions to learn something new. There were a couple of items that really caught my eye, however, that I'll share here.
    Pragmatic Works. 
    Whether you develop SSIS packages, build SSAS cubes, or author SSRS reports (or all of the above), you really must take a look at BI Documenter. Brian Knight (twitter) walked me through the key features, and I must say I was impressed. Once you've seen what this product can do, you won't want to document your BI projects any other way. You can download a free single-user database edition, or choose from more feature-rich standard or professional editions.
    Microsoft Press ebooks. I also stopped by the O'Reilly Media booth to meet some folks that one of my acquisitions editors at Microsoft Press recommended. In case you haven't heard, Microsoft Press has partnered with O'Reilly Media for distribution and publishing. Apart from my interest in learning more about O'Reilly Media as an author, an advertisement in their booth caught my eye, which I think is a really great move. When you buy Microsoft Press ebooks through the O'Reilly web site, you can receive them in any (or all) of the following formats where possible: PDF, epub, .mobi for Kindle and .apk for Android. You also have lifetime DRM-free access to the ebooks. As someone who is an avid collector of books, I find myself running out of room for storage. In addition, I travel a lot, and it's hard to lug my reference library with me. Today's e-reader options make the move to digital books a more viable way to grow my library. Having a variety of formats means I am not limited to a single device, and lifetime access means I don't have to worry about keeping track of where I've stored my files. Because the e-books are DRM-free, I can copy and paste when I'm compiling notes, and I can print pages when necessary. That's a winning combination in my mind!
    Overall, I was pleased with the BI conference. There were many more sessions that I couldn't attend, either because the room was full when I got there or there were multiple sessions running concurrently that I wanted to see. Fortunately, many of the sessions are accessible for viewing online at http://www.msteched.com/2010/NorthAmerica along with the TechEd sessions. You can spot the BI sessions by the yellow skyline on the title slide of the presentation.

    Read the article

  • Office 2010: It’s not just DOC(X) and XLS(X)

    - by andrewbrust
    Office 2010 has released to manufacturing. The bits have left the (product team’s) building. Will you upgrade?
    This version of Office is officially numbered 14, a designation that correlates with the various releases, through the years, of Microsoft Word. There were six major versions of Word for DOS, during whose release cycles came three 16-bit Windows versions. Then, starting with Word 95 and counting through Word 2007, there have been six more versions – all for the 32-bit Windows platform. Skip version 13 to ward off folksy bad luck (and, perhaps, the bugs that could come with it) and that brings us to version 14, which includes implementations for both 32- and 64-bit Windows platforms. We’ve come a long way, baby. Or have we?
    As it does every three years or so, debate will now start to rage over whether we need a “14th” version of the PC platform’s standard word processor, or a “13th” version of the spreadsheet. If you accept the premise of that question, then you may be on a slippery slope toward answering it in the negative. Thing is, that premise is valid for certain customers and not others.
    The Microsoft Office product has morphed from one that offered core word processing, spreadsheet, presentation and email functionality to a suite of applications that provides unique, new value-added features, and even whole applications, in the context of those core services. The core apps thus grow in mission: Excel is a BI tool. Word is a collaborative editorial system for the production of publications. PowerPoint is a media production platform for live presentations and, increasingly, for delivering more effective presentations online. Outlook is a time and task management system. Access is a rich client front-end for data-driven self-service SharePoint applications. OneNote helps you capture ideas, corral random thoughts in a semi-structured way, and then tie them back to other, more rigidly structured, Office documents.
    Google Docs and other cloud productivity platforms like Zoho don’t really do these things. And there is a growing chorus of voices who say that they shouldn’t, because those ancillary capabilities are over-engineered, over-produced and “under-necessary.” They might say Microsoft is layering on superfluous capabilities to avoid admitting that Office’s core capabilities, the ones people really need, have become commoditized.
    It’s hard to take sides in that argument, because different people, and the different companies that employ them, have different needs. For my own needs, it all comes down to three basic questions: will the new version of Office save me time, will it make the mundane parts of my job easier, and will it augment my services to customers? I need my time back. I need to spend more of it with my family, and more of it focusing on my own core capabilities rather than the administrative tasks around them. And I also need my customers to be able to get more value out of the services I provide. Help me triage my inbox, help me get proposals done more quickly and make them easier to read. Let me get my presentations done faster, make them more effective and make it easier for me to reuse materials from other presentations. And, since I’m in the BI and data business, help me and my customers manage data and analytics more easily, both on the desktop and online.
    Those are my criteria. And, with those in mind, Office 2010 is looking like a worthwhile upgrade.  
    Perhaps it’s not earth-shattering, but it offers a combination of incremental improvements and a few new major capabilities that I think are quite compelling. I provide a brief roundup of them here. It’s admittedly arbitrary and not comprehensive, but I think it tells the Office 2010 story effectively.
    Across the Suite
    More than any other, this release of Office aims to give collaboration a real workout. In certain apps, for the first time, documents can be opened simultaneously by multiple users, with colleagues’ changes appearing in near real-time. Web-browser-based versions of Word, Excel, PowerPoint and OneNote will be available to extend collaboration to contributors who are off the corporate network.
    The ribbon user interface is now more pervasive (for example, it appears in OneNote and in Outlook’s main window). It’s also customizable, allowing users to easily add buttons and options of their choosing into new tabs, or into new groups within existing tabs. Microsoft has also taken the File menu (which was the “Office Button” menu in the 2007 release) and made it into a full-screen “Backstage” view where document-wide operations, like saving, printing and online publishing, are performed.
    And because, more and more, heavily formatted content is cut and pasted between documents and applications, Office 2010 makes it easier to manage the retention or jettisoning of that formatting right as the paste operation is performed. That’s much nicer than stripping it off, or adding it back, afterwards. And, speaking of pasting, a number of Office apps now make it especially easy to insert screenshots within their documents. I know that’s useful to me, because I often document or critique applications and need to show them in action. For the vast majority of users, I expect that this feature will be more useful for capturing snapshots of Web pages, but we’ll have to see whether this feature becomes popular.
    Excel
    At first glance, Excel 2010 looks and acts nearly identically to the 2007 version. But additional glances are necessary. It’s important to understand that lots of people in the working world use Excel as more of a database, analytics and mathematical modeling tool than merely as a spreadsheet. And it’s also important to understand that Excel wasn’t designed to handle such workloads past a certain scale. That all changes with this release.
    The first reason things change is that Excel has been tuned for performance. It’s been optimized for multi-threaded operation; previously lengthy processes have been shortened, especially for large data sets; more rows and columns are allowed and, for the first time, Excel (and the rest of Office) is available in a 64-bit version. For Excel, this means users can take advantage of more than the 2GB of memory that the 32-bit version is limited to.
    On the analysis side, Excel 2010 adds Sparklines (tiny charts that fit into a single cell and can therefore be presented down an entire column or across a row) and Slicers (a more user-friendly filter mechanism for PivotTables and charts, which visually indicates what the filtered state of a given data member is). But most important, Excel 2010 supports the new PowerPivot add-in which brings true self-service BI to Office. PowerPivot allows users to import data from almost anywhere, model it, and then analyze it.  
    Rather than forcing users to build “spreadmarts” or use corporate-built data warehouses, PowerPivot models function as true columnar, in-memory OLAP cubes that can accommodate millions of rows of data and deliver fast drill-down performance. And speaking of OLAP, Excel 2010 now supports an important Analysis Services OLAP feature called write-back. Write-back is especially useful in financial forecasting scenarios for which Excel is the natural home. Support for write-back is long overdue, but I’m still glad it’s there, because I had almost given up on it.
    PowerPoint
    This version of PowerPoint marks its progression from a presentation tool to a video and photo editing and production tool. Whether or not it’s successful in this pursuit, and if offering this is even a sensible goal, is another question. Regardless, the new capabilities are kind of interesting. A greatly enhanced set of slide transitions with 3D effects; in-product photo and video editing; accommodation of embedded videos from services such as YouTube; and the ability to save a presentation as a video each lays testimony to PowerPoint’s transformation into a media tool and away from a pure presentation tool. These capabilities also recognize the importance of the Web as both a source for materials and a channel for disseminating PowerPoint output. Congruent with that is PowerPoint’s new ability to broadcast a slide presentation, using a quickly generated public URL, without involving the hassle or expense of a Web meeting service like GoToMeeting or Microsoft’s own LiveMeeting. Slides presented through this broadcast feature retain full color fidelity, and transitions and animations are preserved as well.
    Outlook
    Microsoft’s ubiquitous email/calendar/contact/task management tool gains long overdue speed improvements, especially against POP3 email accounts. Outlook 2010 also supports multiple Exchange accounts, rather than just one; tighter integration with OneNote; and a new Social Connector providing integration with, and presence information from, online social network services like LinkedIn and Facebook (not to mention Windows Live). A revamped conversation view now includes messages that are part of a given thread regardless of which folder they may be stored in. I don’t know yet how well the Social Connector will work or whether it will keep Outlook relevant to those who live on Facebook and LinkedIn. But among the other features, there’s very little not to like.
    OneNote
    To me, OneNote is the part of Office that just keeps getting better. There is one major caveat to this, which I’ll cover in a moment, but let’s first catalog what new stuff OneNote 2010 brings. The best part of OneNote is the way each of its versions has managed hierarchy: Notebooks have sections, sections have pages, pages have sub pages, multiple notes can be contained in either, and each note supports infinite levels of indentation. None of that is new to 2010, but the new version does make creation of pages and subpages easier and also makes simple work out of promoting and demoting pages from sub page to full page status. And relationships between pages are quite easy to create now: much like a Wiki, simply typing a page’s name in double square brackets (“[[…]]”) creates a link to it. OneNote is also great at integrating content outside of its notebooks.  
    With a new Dock to Desktop feature, OneNote becomes aware of what window is displayed in the rest of the screen and, if it’s an Office document or a Web page, links the notes you’re typing at the time to it. A single click from your notes later on will bring that same document or Web page back on-screen. Embedding content from Web pages and elsewhere is also easier. Using OneNote’s Windows Key+S combination to grab part of the screen now allows you to specify the destination of that bitmap instead of automatically creating a new note in the Unfiled Notes area. Using the Send to OneNote buttons in Internet Explorer and Outlook results in the same choice.
    Collaboration gets better too. Real-time multi-author editing is better accommodated and determining author lineage of particular changes is easily carried out.
    My one pet peeve with OneNote is the difficulty of using it when I’m not on a Windows PC. OneNote’s main competitor, Evernote, while I believe inferior in terms of features, has client versions for PC, Mac, Windows Mobile, Android, iPhone, iPad and Web browsers. Since I have an Android phone and an iPad, I am practically forced to use it. However, the OneNote Web app should help here, as should a forthcoming version of OneNote for Windows Phone 7. In the meantime, it turns out that using OneNote’s Email Page ribbon button lets you move a OneNote page easily into Evernote (since every Evernote account gets a unique email address for adding notes) and that Evernote’s Email function combined with Outlook’s Send to OneNote button (in the Move group of the ribbon’s Home tab) can achieve the reverse.
    Access
    To me, the big change in Access 2007 was its tight integration with SharePoint lists. Access 2010 and SharePoint 2010 continue this integration with the introduction of SharePoint’s Access Services. Much as Excel Services provides a SharePoint-hosted experience for viewing (and now editing) Excel spreadsheet, PivotTable and chart content, Access Services allows for SharePoint browser-hosted editing of Access data within the forms that are built in the Access client itself. To me this makes all kinds of sense, although it does beg the question of where to draw the line between Access, InfoPath, SharePoint list maintenance and SharePoint 2010’s new Business Connectivity Services. Each of these tools provides overlapping data entry and data maintenance functionality.
    But if you do prefer Access, then you’ll like things like templates and application parts that make it easier to get off the blank page. These features help you quickly get tables, forms and reports built out. To make things look nice, Access even gets its own version of Excel’s Conditional Formatting feature, letting you add data bars and data-driven text formatting.
    Word
    As I said at the beginning of this post, upgrades to Office are about much more than enhancing the suite’s flagship word processing application. So are there any enhancements in Word worth mentioning? I think so. The most important one has to be the collaboration features. Essentially, when a user opens a Word document that is in a SharePoint document library (or Windows Live SkyDrive folder), rather than the whole document being locked, Word has the ability to observe more granular locks on the individual paragraphs being edited. Word also shows you who’s editing what, and its Save function morphs into a sync feature that both saves your changes and loads those made by anyone editing the document concurrently. 
    There’s also a new navigation pane that lets you manage sections in your document in much the same way as you manage slides in a PowerPoint deck. Using the navigation pane, you can reorder sections, insert new ones, or promote and demote sections in the outline hierarchy. Not earth shattering, but nice.
    Other Apps and Summarized Findings
    What about InfoPath, Publisher, Visio and Project? I haven’t looked at them yet. And for this post, I think that’s fine. While those apps (and, arguably, Access) cater to specific tasks, I think the apps we’ve looked at in this post service the general purpose needs of most users. And the theme in those 2010 apps is clear: collaboration is key, the Web and productivity are indivisible, and making data and analytics into a self-service amenity is the way to go. But perhaps most of all, features are still important, as long as they get you through your day faster, rather than adding complexity for its own sake. I would argue that this is true for just about every product Microsoft makes: users want utility, not complexity.

    Read the article

  • DotNetOpenAuth OpenID Provider "Sequence contains more than one element"

    - by Matthew Johnson
    Hello, all, I'm having trouble implementing my OpenID provider with DNOA 3.4.3. Everything was going absolutely peachy until I needed AX support as well. I set AXFetchAsSregTransform in the web config, as recommended by Andrew at http://groups.google.com/group/dotnetopenid/browse_thread/thread/5629a24c0a7e8d99. Doing this caused me to get the exception "Sequence Contains More Than One Element" on my decide.aspx page, however, and I haven't been able to get past it. The following line is throwing the exception: Edit: Strangely enough, this is not the line throwing the error anymore. The SendResponse() is now triggering the exception ClaimsRequest requestedFields = ProviderEndpoint.PendingRequest.GetExtension(); ProviderEndpoint.SendResponse() Any thoughts on why this may be? Any help would be greatly appreciated! The logs leading up to the error are as follows: 2010-04-28 12:38:20,247 (GMT-7) [5] INFO DotNetOpenAuth.Messaging.Channel - Scanning incoming request for messages: https://myprovider/provider.ashx?openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.claimed_id=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0%2Fidentifier_select&openid.identity=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0%2Fidentifier_select&openid.mode=checkid_setup&openid.ns.ext1=http%3A%2F%2Fopenid.net%2Fsrv%2Fax%2F1.0&openid.ext1.mode=fetch_request&openid.ext1.type.email=http%3A%2F%2Faxschema.org%2Fcontact%2Femail&openid.ext1.type.fullname=http%3A%2F%2Faxschema.org%2FnamePerson&openid.ext1.type.language=http%3A%2F%2Faxschema.org%2Fpref%2Flanguage&openid.ext1.required=email&openid.return_to=http%3A%2F%2Fmyrelyingparty%2Flogin.jsp%3Foidreturn%3D%252Fhome&openid.assoc_handle=%7B634080802953194640%7D%7BHxjFNw==%7D%7B20%7D&openid.realm=http%3A%2F%2Fmyrelyingparty 2010-04-28 12:38:20,285 (GMT-7) [5] INFO DotNetOpenAuth.Messaging.Channel - Processing incoming CheckIdRequest (2.0) message: openid.claimed_id: http://specs.openid.net/auth/2.0/identifier_select openid.identity: http://specs.openid.net/auth/2.0/identifier_select openid.assoc_handle: {634080802953194640}{HxjFNw==}{20} openid.return_to: http://myrelyingparty/login.jsp?oidreturn=%2Fhome openid.realm: http://myrelyingparty/ openid.mode: checkid_setup openid.ns: http://specs.openid.net/auth/2.0 openid.ns.ext1: http://openid.net/srv/ax/1.0 openid.ext1.mode: fetch_request openid.ext1.type.email: http://axschema.org/contact/email openid.ext1.type.fullname: http://axschema.org/namePerson openid.ext1.type.language: http://axschema.org/pref/language openid.ext1.required: email 2010-04-28 12:38:22,773 (GMT-7) [14] INFO DotNetOpenAuth.Messaging.Channel - Scanning incoming request for messages: https://myprovider/login.aspx?ReturnUrl=%2fdecide.aspx 2010-04-28 12:38:36,167 (GMT-7) [5] INFO DotNetOpenAuth.Messaging.Channel - Scanning incoming request for messages: https://myprovider/login.aspx?ReturnUrl=%2fdecide.aspx 2010-04-28 12:38:38,147 (GMT-7) [14] ERROR DotNetOpenAuth.Messaging - Protocol error: An HTTP request to the realm URL (http://myrelyingparty/) resulted in a redirect, which is not allowed during relying party discovery. 
at DotNetOpenAuth.Messaging.ErrorUtilities.VerifyProtocol(Boolean condition, String message, Object[] args) at DotNetOpenAuth.OpenId.Realm.Discover(IDirectWebRequestHandler requestHandler, Boolean allowRedirects) at DotNetOpenAuth.OpenId.Realm.DiscoverReturnToEndpoints(IDirectWebRequestHandler requestHandler, Boolean allowRedirects) at DotNetOpenAuth.OpenId.Provider.HostProcessedRequest.IsReturnUrlDiscoverableCore(OpenIdProvider provider) at DotNetOpenAuth.OpenId.Provider.HostProcessedRequest.IsReturnUrlDiscoverable(OpenIdProvider provider) at OpenIdProviderWebForms.decide.Page_Load(Object src, EventArgs e) at System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e) at System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e) at System.Web.UI.Control.OnLoad(EventArgs e) at System.Web.UI.Control.LoadRecursive() at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) at System.Web.UI.Page.ProcessRequest(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) at System.Web.UI.Page.ProcessRequest() at System.Web.UI.Page.ProcessRequest(HttpContext context) at ASP.decide_aspx.ProcessRequest(HttpContext context) at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) at System.Web.HttpApplication.PipelineStepManager.ResumeSteps(Exception error) at System.Web.HttpApplication.BeginProcessRequestNotification(HttpContext context, AsyncCallback cb) at System.Web.HttpRuntime.ProcessRequestNotificationPrivate(IIS7WorkerRequest wr, HttpContext context) at System.Web.Hosting.PipelineRuntime.ProcessRequestNotificationHelper(IntPtr managedHttpContext, IntPtr nativeRequestContext, IntPtr moduleData, Int32 flags) at System.Web.Hosting.PipelineRuntime.ProcessRequestNotification(IntPtr managedHttpContext, IntPtr nativeRequestContext, IntPtr moduleData, Int32 flags) at System.Web.Hosting.PipelineRuntime.ProcessRequestNotificationHelper(IntPtr managedHttpContext, IntPtr nativeRequestContext, IntPtr moduleData, Int32 flags) at System.Web.Hosting.PipelineRuntime.ProcessRequestNotification(IntPtr managedHttpContext, IntPtr nativeRequestContext, IntPtr moduleData, Int32 flags) 2010-04-28 12:38:38,149 (GMT-7) [14] INFO DotNetOpenAuth.Yadis - Relying party discovery at URL http://myrelyingparty/ failed. DotNetOpenAuth.Messaging.ProtocolException: An HTTP request to the realm URL (http://myrelyingparty/) resulted in a redirect, which is not allowed during relying party discovery. 
at DotNetOpenAuth.Messaging.ErrorUtilities.VerifyProtocol(Boolean condition, String message, Object[] args) in c:\TeamCity\buildAgent\work\bf9e2ca68b75a334\src\DotNetOpenAuth\Messaging\ErrorUtilities.cs:line 235 at DotNetOpenAuth.OpenId.Realm.Discover(IDirectWebRequestHandler requestHandler, Boolean allowRedirects) in c:\TeamCity\buildAgent\work\bf9e2ca68b75a334\src\DotNetOpenAuth\OpenId\Realm.cs:line 446 at DotNetOpenAuth.OpenId.Realm.DiscoverReturnToEndpoints(IDirectWebRequestHandler requestHandler, Boolean allowRedirects) in c:\TeamCity\buildAgent\work\bf9e2ca68b75a334\src\DotNetOpenAuth\OpenId\Realm.cs:line 424 at DotNetOpenAuth.OpenId.Provider.HostProcessedRequest.IsReturnUrlDiscoverableCore(OpenIdProvider provider) in c:\TeamCity\buildAgent\work\bf9e2ca68b75a334\src\DotNetOpenAuth\OpenId\Provider\HostProcessedRequest.cs:line 142 2010-04-28 12:38:42,076 (GMT-7) [8] ERROR OpenIdProviderWebForms.Global - An unhandled exception was raised. Details follow: System.Web.HttpUnhandledException: Exception of type 'System.Web.HttpUnhandledException' was thrown. --- System.InvalidOperationException: Sequence contains more than one element at System.Linq.Enumerable.SingleOrDefault[TSource](IEnumerable`1 source) at DotNetOpenAuth.OpenId.Provider.Request.GetExtension[T]() in c:\TeamCity\buildAgent\work\bf9e2ca68b75a334\src\DotNetOpenAuth\OpenId\Provider\Request.cs:line 176 at DotNetOpenAuth.OpenId.Extensions.ExtensionsInteropHelper.ConvertSregToMatchRequest(IHostProcessedRequest request) in c:\TeamCity\buildAgent\work\bf9e2ca68b75a334\src\DotNetOpenAuth\OpenId\Extensions\ExtensionsInteropHelper.cs:line 180 at DotNetOpenAuth.OpenId.Behaviors.AXFetchAsSregTransform.DotNetOpenAuth.OpenId.Provider.IProviderBehavior.OnOutgoingResponse(IAuthenticationRequest request) in c:\TeamCity\buildAgent\work\bf9e2ca68b75a334\src\DotNetOpenAuth\OpenId\Behaviors\AXFetchAsSregTransform.cs:line 139 at DotNetOpenAuth.OpenId.Provider.OpenIdProvider.ApplyBehaviorsToResponse(IRequest request) in c:\TeamCity\buildAgent\work\bf9e2ca68b75a334\src\DotNetOpenAuth\OpenId\Provider\OpenIdProvider.cs:line 482 at DotNetOpenAuth.OpenId.Provider.OpenIdProvider.SendResponse(IRequest request) in c:\TeamCity\buildAgent\work\bf9e2ca68b75a334\src\DotNetOpenAuth\OpenId\Provider\OpenIdProvider.cs:line 325 at OpenIdProviderWebForms.decide.Yes_Click(Object sender, EventArgs e) in C:\Projects\OpenIdProviderWebForms\decide.aspx.cs:line 130 at System.Web.UI.WebControls.Button.OnClick(EventArgs e) at System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument) at System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument) at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) --- End of inner exception stack trace --- at System.Web.UI.Page.HandleError(Exception e) at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) at System.Web.UI.Page.ProcessRequest(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) at System.Web.UI.Page.ProcessRequest() at System.Web.UI.Page.ProcessRequest(HttpContext context) at ASP.decide_aspx.ProcessRequest(HttpContext context) in c:\Windows\Microsoft.NET\Framework64\v2.0.50727\Temporary ASP.NET Files\root\7f580b93\b3e4d917\App_Web_tulh9ymv.1.cs:line 0 at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, 
Boolean& completedSynchronously)

    Read the article

  • JQuery > XSLT Plugin > Component returned failure code: 0x80600011 [nsIXSLTProcessorObsolete.transfo

    - by Sean Ochoa
    So, I'm using the XSLT plugin for JQuery, and here's my code: function AddPlotcardEventHandlers(){ // some code } function reportError(exception){ alert(exception.constructor.name + " Exception: " + ((exception.name) ? exception.name : "[unknown name]") + " - " + exception.message); } function GetPlotcards(){ $("#content").xslt("../xml/plotcards.xml","../xslt/plotcards.xsl", AddPlotcardEventHandlers,reportError); } Here's the modified jquery plugin. I say that its modified because I've added callbacks for success and error handling. /* * jquery.xslt.js * * Copyright (c) 2005-2008 Johann Burkard (<mailto:[email protected]>) * <http://eaio.com> * * Permission is hereby granted, free of charge, to any person obtaining a * copy of this software and associated documentation files (the "Software"), * to deal in the Software without restriction, including without limitation * the rights to use, copy, modify, merge, publish, distribute, sublicense, * and/or sell copies of the Software, and to permit persons to whom the * Software is furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included * in all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN * NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE * USE OR OTHER DEALINGS IN THE SOFTWARE. * */ /** * jQuery client-side XSLT plugins. * * @author <a href="mailto:[email protected]">Johann Burkard</a> * @version $Id: jquery.xslt.js,v 1.10 2008/08/29 21:34:24 Johann Exp $ */ (function($) { $.fn.xslt = function() { return this; } var str = /^\s*</; if (document.recalc) { // IE 5+ $.fn.xslt = function(xml, xslt, onSuccess, onError) { try{ var target = $(this); var change = function() { try{ var c = 'complete'; if (xm.readyState == c && xs.readyState == c) { window.setTimeout(function() { target.html(xm.transformNode(xs.XMLDocument)); if (onSuccess) onSuccess(); }, 50); } }catch(exception){ if (onError) onError(exception); } }; var xm = document.createElement('xml'); xm.onreadystatechange = change; xm[str.test(xml) ? "innerHTML" : "src"] = xml; var xs = document.createElement('xml'); xs.onreadystatechange = change; xs[str.test(xslt) ? 
"innerHTML" : "src"] = xslt; $('body').append(xm).append(xs); return this; }catch(exception){ if (onError) onError(exception); } }; } else if (window.DOMParser != undefined && window.XMLHttpRequest != undefined && window.XSLTProcessor != undefined) { // Mozilla 0.9.4+, Opera 9+ var processor = new XSLTProcessor(); var support = false; if ($.isFunction(processor.transformDocument)) { support = window.XMLSerializer != undefined; } else { support = true; } if (support) { $.fn.xslt = function(xml, xslt, onSuccess, onError) { try{ var target = $(this); var transformed = false; var xm = { readyState: 4 }; var xs = { readyState: 4 }; var change = function() { try{ if (xm.readyState == 4 && xs.readyState == 4 && !transformed) { var processor = new XSLTProcessor(); if ($.isFunction(processor.transformDocument)) { // obsolete Mozilla interface resultDoc = document.implementation.createDocument("", "", null); processor.transformDocument(xm.responseXML, xs.responseXML, resultDoc, null); target.html(new XMLSerializer().serializeToString(resultDoc)); } else { processor.importStylesheet(xs.responseXML); resultDoc = processor.transformToFragment(xm.responseXML, document); target.empty().append(resultDoc); } transformed = true; if (onSuccess) onSuccess(); } }catch(exception){ if (onError) onError(exception); } }; if (str.test(xml)) { xm.responseXML = new DOMParser().parseFromString(xml, "text/xml"); } else { xm = $.ajax({ dataType: "xml", url: xml}); xm.onreadystatechange = change; } if (str.test(xslt)) { xs.responseXML = new DOMParser().parseFromString(xslt, "text/xml"); change(); } else { xs = $.ajax({ dataType: "xml", url: xslt}); xs.onreadystatechange = change; } }catch(exception){ if (onError) onError(exception); }finally{ return this; } }; } } })(jQuery); And, here's my error msg: Object Exception: [unknown name] - Component returned failure code: 0x80600011 [nsIXSLTProcessorObsolete.transformDocument] Here's the info on the browser that I'm using for testing (with firebug v1.5.4 add-on installed): Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 I'm really not sure what to do about this.... any thoughts?

    Read the article

  • Can you help me get my head around openssl public key encryption with rsa.h in c++?

    - by Ben
    Hi there, I am trying to get my head around public key encryption using the openssl implementation of rsa in C++. Can you help? So far these are my thoughts (please do correct if necessary) Alice is connected to Bob over a network Alice and Bob want secure communications Alice generates a public / private key pair and sends public key to Bob Bob receives public key and encrypts a randomly generated symmetric cypher key (e.g. blowfish) with the public key and sends the result to Alice Alice decrypts the ciphertext with the originally generated private key and obtains the symmetric blowfish key Alice and Bob now both have knowledge of symmetric blowfish key and can establish a secure communication channel Now, I have looked at the openssl/rsa.h rsa implementation (since I already have practical experience with openssl/blowfish.h), and I see these two functions: int RSA_public_encrypt(int flen, unsigned char *from, unsigned char *to, RSA *rsa, int padding); int RSA_private_decrypt(int flen, unsigned char *from, unsigned char *to, RSA *rsa, int padding); If Alice is to generate *rsa, how does this yield the rsa key pair? Is there something like rsa_public and rsa_private which are derived from rsa? Does *rsa contain both public and private key and the above function automatically strips out the necessary key depending on whether it requires the public or private part? Should two unique *rsa pointers be generated so that actually, we have the following: int RSA_public_encrypt(int flen, unsigned char *from, unsigned char *to, RSA *rsa_public, int padding); int RSA_private_decrypt(int flen, unsigned char *from, unsigned char *to, RSA *rsa_private, int padding); Secondly, in what format should the *rsa public key be sent to Bob? Must it be reinterpreted in to a character array and then sent the standard way? I've heard something about certificates -- are they anything to do with it? Sorry for all the questions, Best Wishes, Ben. EDIT: Coe I am currently employing: /* * theEncryptor.cpp * * * Created by ben on 14/01/2010. * Copyright 2010 __MyCompanyName__. All rights reserved. * */ #include "theEncryptor.h" #include <iostream> #include <sys/socket.h> #include <sstream> theEncryptor::theEncryptor() { } void theEncryptor::blowfish(unsigned char *data, int data_len, unsigned char* key, int enc) { // hash the key first! 
unsigned char obuf[20]; bzero(obuf,20); SHA1((const unsigned char*)key, 64, obuf); BF_KEY bfkey; int keySize = 16;//strlen((char*)key); BF_set_key(&bfkey, keySize, obuf); unsigned char ivec[16]; memset(ivec, 0, 16); unsigned char* out=(unsigned char*) malloc(data_len); bzero(out,data_len); int num = 0; BF_cfb64_encrypt(data, out, data_len, &bfkey, ivec, &num, enc); //for(int i = 0;i<data_len;i++)data[i]=out[i]; memcpy(data, out, data_len); free(out); } void theEncryptor::generateRSAKeyPair(int bits) { rsa = RSA_generate_key(bits, 65537, NULL, NULL); } int theEncryptor::publicEncrypt(unsigned char* data, unsigned char* dataEncrypted,int dataLen) { return RSA_public_encrypt(dataLen, data, dataEncrypted, rsa, RSA_PKCS1_OAEP_PADDING); } int theEncryptor::privateDecrypt(unsigned char* dataEncrypted, unsigned char* dataDecrypted) { return RSA_private_decrypt(RSA_size(rsa), dataEncrypted, dataDecrypted, rsa, RSA_PKCS1_OAEP_PADDING); } void theEncryptor::receivePublicKeyAndSetRSA(int sock, int bits) { int max_hex_size = (bits / 4) + 1; char keybufA[max_hex_size]; bzero(keybufA,max_hex_size); char keybufB[max_hex_size]; bzero(keybufB,max_hex_size); int n = recv(sock,keybufA,max_hex_size,0); n = send(sock,"OK",2,0); n = recv(sock,keybufB,max_hex_size,0); n = send(sock,"OK",2,0); rsa = RSA_new(); BN_hex2bn(&rsa->n, keybufA); BN_hex2bn(&rsa->e, keybufB); } void theEncryptor::transmitPublicKey(int sock, int bits) { const int max_hex_size = (bits / 4) + 1; long size = max_hex_size; char keyBufferA[size]; char keyBufferB[size]; bzero(keyBufferA,size); bzero(keyBufferB,size); sprintf(keyBufferA,"%s\r\n",BN_bn2hex(rsa->n)); sprintf(keyBufferB,"%s\r\n",BN_bn2hex(rsa->e)); int n = send(sock,keyBufferA,size,0); char recBuf[2]; n = recv(sock,recBuf,2,0); n = send(sock,keyBufferB,size,0); n = recv(sock,recBuf,2,0); } void theEncryptor::generateRandomBlowfishKey(unsigned char* key, int bytes) { /* srand( (unsigned)time( NULL ) ); std::ostringstream stm; for(int i = 0;i<bytes;i++){ int randomValue = 65 + rand()% 26; stm << (char)((int)randomValue); } std::string str(stm.str()); const char* strs = str.c_str(); for(int i = 0;bytes;i++)key[i]=strs[i]; */ int n = RAND_bytes(key, bytes); if(n==0)std::cout<<"Warning key was generated with bad entropy. You should not consider communication to be secure"<<std::endl; } theEncryptor::~theEncryptor(){}
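    As a reference point for the questions above, here is a minimal sketch of the Alice/Bob flow using only the OpenSSL calls already mentioned in the question (the pre-1.1.0 RSA API). The key assumption to note: the single RSA* returned by RSA_generate_key carries both the public components (n, e) and the private ones (d, p, q, ...), so the same pointer can be passed to RSA_public_encrypt and RSA_private_decrypt, and each function reads only the parts it needs. Key size, buffer sizes and the lack of error handling are simplifications.

        #include <openssl/rsa.h>
        #include <openssl/rand.h>
        #include <cstring>
        #include <cstdio>

        int main() {
            // One RSA* holds the whole key pair: n and e (public) plus d, p, q (private).
            // Alice keeps this object; only n and e need to be sent to Bob.
            RSA* keypair = RSA_generate_key(2048, 65537, NULL, NULL);

            // Bob's side: a random symmetric key (e.g. for Blowfish) to be wrapped.
            unsigned char symKey[16];
            RAND_bytes(symKey, sizeof(symKey));

            // Encrypt with the public part, decrypt with the private part.
            // 256 bytes = RSA_size(keypair) for a 2048-bit key.
            unsigned char encrypted[256];
            unsigned char decrypted[256];
            int encLen = RSA_public_encrypt(sizeof(symKey), symKey, encrypted,
                                            keypair, RSA_PKCS1_OAEP_PADDING);
            int decLen = RSA_private_decrypt(encLen, encrypted, decrypted,
                                             keypair, RSA_PKCS1_OAEP_PADDING);

            printf("round trip ok: %d\n", decLen == (int)sizeof(symKey) &&
                                          memcmp(symKey, decrypted, decLen) == 0);
            RSA_free(keypair);
            return 0;
        }

    To actually ship the public half to Bob, only n and e have to be serialized in some agreed format; sending them as hex strings via BN_bn2hex, as the question's own transmitPublicKey/receivePublicKeyAndSetRSA pair does, is one workable approach.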

    Read the article

  • Silverlight Chat WrapPanel Crash / Bug

    - by Matt
    I've been given the task to create a simple Silverlight chat box for two people. My control must adhere to the following requirements Scrollable Text must wrap if it's too long When a new item / message is added it must scroll that item into view Now I've successfully made a usercontrol to meet these requirements, but I've run into a possible bug / crash that I can't for the life of me fix. I'm looking for either a fix to the bug, or a different approach to creating a scrollable chat control. Here's the code I've been using. We'll start with my XAML for the chat window <ListBox x:Name="lbChatHistory" Grid.Row="0" Grid.Column="0" Grid.ColumnSpan="2" ScrollViewer.VerticalScrollBarVisibility="Visible" ScrollViewer.HorizontalScrollBarVisibility="Disabled" > <ListBox.ItemTemplate> <DataTemplate> <Grid Background="Beige"> <Grid.ColumnDefinitions> <ColumnDefinition Width="70"></ColumnDefinition> <ColumnDefinition Width="Auto"></ColumnDefinition> </Grid.ColumnDefinitions> <TextBlock x:Name="lblPlayer" Foreground="{Binding ForeColor}" Text="{Binding Player}" Grid.Column="0"></TextBlock> <ContentPresenter Grid.Column="1" Width="200" Content="{Binding Message}" /> </Grid> </DataTemplate> </ListBox.ItemTemplate> </ListBox> The idea is to add a new Item to the listbox. The Item (as layed out in the XAML) is a simple 2 column grid. One column for the username, and one column for the message. Now the "items" that I add to the ListBox is a custom class. It has three properties (Player, ForeColor, and Message) that I using binding on within my XAML Player is a string of the current user to display. ForeColor is just a foreground color preference. It helps distinguish the difference between messages. Message is a WrapPanel. I programmatically break the supplied string on the white space for each word. Then for each word, I add a new TextBlock element to the WrapPanel Here is the custom class. 
public class ChatMessage :DependencyObject, INotifyPropertyChanged { public event PropertyChangedEventHandler PropertyChanged; public static DependencyProperty PlayerProperty = DependencyProperty.Register( "Player", typeof( string ), typeof( ChatMessage ), new PropertyMetadata( new PropertyChangedCallback( OnPlayerPropertyChanged ) ) ); public static DependencyProperty MessageProperty = DependencyProperty.Register( "Message", typeof( WrapPanel ), typeof( ChatMessage ), new PropertyMetadata( new PropertyChangedCallback( OnMessagePropertyChanged ) ) ); public static DependencyProperty ForeColorProperty = DependencyProperty.Register( "ForeColor", typeof( SolidColorBrush ), typeof( ChatMessage ), new PropertyMetadata( new PropertyChangedCallback( OnForeColorPropertyChanged ) ) ); private static void OnForeColorPropertyChanged( DependencyObject d, DependencyPropertyChangedEventArgs e ) { ChatMessage c = d as ChatMessage; c.ForeColor = ( SolidColorBrush ) e.NewValue; } public ChatMessage() { Message = new WrapPanel(); ForeColor = new SolidColorBrush( Colors.White ); } private static void OnMessagePropertyChanged( DependencyObject d, DependencyPropertyChangedEventArgs e ) { ChatMessage c = d as ChatMessage; c.Message = ( WrapPanel ) e.NewValue; } private static void OnPlayerPropertyChanged( DependencyObject d, DependencyPropertyChangedEventArgs e ) { ChatMessage c = d as ChatMessage; c.Player = e.NewValue.ToString(); } public SolidColorBrush ForeColor { get { return ( SolidColorBrush ) GetValue( ForeColorProperty ); } set { SetValue( ForeColorProperty, value ); if(PropertyChanged != null) PropertyChanged(this, new PropertyChangedEventArgs( "ForeColor" )); } } public string Player { get { return ( string ) GetValue( PlayerProperty ); } set { SetValue( PlayerProperty, value ); if ( PropertyChanged != null ) PropertyChanged( this, new PropertyChangedEventArgs( "Player" ) ); } } public WrapPanel Message { get { return ( WrapPanel ) GetValue( MessageProperty ); } set { SetValue( MessageProperty, value ); if ( PropertyChanged != null ) PropertyChanged( this, new PropertyChangedEventArgs( "Message" ) ); } } } Lastly I add my items to the ListBox. Here's the simple method. It takes the above ChatMessage class as a parameter public void AddChatItem( ChatMessage msg ) { lbChatHistory.Items.Add( msg ); lbChatHistory.ScrollIntoView( msg ); } Now I've tested this and it all works. The problem I'm getting is when I use the scroll bar. You can scroll down using the side scroll bar or arrow keys, but when you scroll up Silverlight crashes. FireBug returns a ManagedRuntimeError #4004 with a XamlParseException. I'm soo close to having this control work, I can taste it! Any thoughts on what I should do or change? Is there a better approach than the one I've taken? Thanks in advance. UPDATE I've found an alternative solution using a ScrollViewer and an ItemsControl instead of a ListBox control. For the most part it's stable.
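    A plausible contributor to the crash (an assumption on my part, not something the post confirms) is that the bound Message is itself a WrapPanel, i.e. a live UIElement stored on the data item, and scrolling back up makes the ListBox try to attach it to a second visual parent. A minimal data-only alternative keeps the message as a string and lets a wrapping TextBlock handle layout; all names below are illustrative.

    using System.ComponentModel;
    using System.Windows.Media;

    // Plain data item: no UIElement is stored on it, so virtualization and
    // re-templating never need to re-parent a visual element.
    public class ChatLine : INotifyPropertyChanged
    {
        private string player;
        private string message;
        private SolidColorBrush foreColor;

        public event PropertyChangedEventHandler PropertyChanged;

        public string Player
        {
            get { return player; }
            set { player = value; Raise("Player"); }
        }

        public string Message
        {
            get { return message; }
            set { message = value; Raise("Message"); }
        }

        public SolidColorBrush ForeColor
        {
            get { return foreColor; }
            set { foreColor = value; Raise("ForeColor"); }
        }

        private void Raise(string name)
        {
            PropertyChangedEventHandler handler = PropertyChanged;
            if (handler != null) handler(this, new PropertyChangedEventArgs(name));
        }
    }

    // In the DataTemplate the second column would then be, for example,
    // <TextBlock Text="{Binding Message}" TextWrapping="Wrap" Width="200" />
    // instead of a ContentPresenter hosting a WrapPanel.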

    Read the article

  • Issue with WIC image resizing on ASP.NET MVC 2

    - by Dave
    I am attempting to implement image resizing on user uploads in ASP.NET MVC 2 using a version of the method found: here on asp.net. This works great on my dev machine, but as soon as I put it on my production machine, I start getting the error 'Exception from HRESULT: 0x88982F60' which is supposed to mean that there is an issue decoding the image. However, when I use WICExplorer to open the image, it looks ok. I've also tried this with dozens of images of various sources and still get the error (though possible, I doubt all of them are corrupted). Here is the relevant code (with my debugging statements in there): MVC Controller [Authorize, HttpPost] public ActionResult Upload(string file) { //Check file extension string fx = file.Substring(file.LastIndexOf('.')).ToLowerInvariant(); string key; if (ConfigurationManager.AppSettings["ImageExtensions"].Contains(fx)) { key = Guid.NewGuid().ToString() + fx; } else { return Json("extension not found"); } //Check file size if (Request.ContentLength <= Convert.ToInt32(ConfigurationManager.AppSettings["MinImageSize"]) || Request.ContentLength >= Convert.ToInt32(ConfigurationManager.AppSettings["MaxImageSize"])) { return Json("content length out of bounds: " + Request.ContentLength); } ImageResizerResult irr, irr2; //Check if this image is coming from FF, Chrome or Safari (XHR) HttpPostedFileBase hpf = null; if (Request.Files.Count <= 0) { //Scale and encode image and thumbnail irr = ImageResizer.CreateMaxSizeImage(Request.InputStream); irr2 = ImageResizer.CreateThumbnail(Request.InputStream); } //Or IE else { hpf = Request.Files[0] as HttpPostedFileBase; if (hpf.ContentLength == 0) return Json("hpf.length = 0"); //Scale and encode image and thumbnail irr = ImageResizer.CreateMaxSizeImage(hpf.InputStream); irr2 = ImageResizer.CreateThumbnail(hpf.InputStream); } //Check if image and thumbnail encoded and scaled correctly if (irr == null || irr.output == null || irr2 == null || irr2.output == null) { if (irr != null && irr.output != null) irr.output.Dispose(); if (irr2 != null && irr2.output != null) irr2.output.Dispose(); if(irr == null) return Json("irr null"); if (irr2 == null) return Json("irr2 null"); if (irr.output == null) return Json("irr.output null. irr.error = " + irr.error); if (irr2.output == null) return Json("irr2.output null. irr2.error = " + irr2.error); } if (irr.output.Length > Convert.ToInt32(ConfigurationManager.AppSettings["MaxImageSize"]) || irr2.output.Length > Convert.ToInt32(ConfigurationManager.AppSettings["MaxImageSize"])) { if(irr.output.Length > Convert.ToInt32(ConfigurationManager.AppSettings["MaxImageSize"])) return Json("irr.output.Length > maximage size. irr.output.Length = " + irr.output.Length + ", irr.error = " + irr.error); return Json("irr2.output.Length > maximage size. irr2.output.Length = " + irr2.output.Length + ", irr2.error = " + irr2.error); } //Store scaled and encoded image and thumbnail .... return Json("success"); } The code is always failing when checking if the output stream is null (i.e. irr.output == null is true). 
ImageResizerResult and ImageResizer public class ImageResizerResult : IDisposable { public MemoryIStream output; public int width; public int height; public string error; public void Dispose() { output.Dispose(); } } public static class ImageResizer { private static Object thislock = new Object(); public static ImageResizerResult CreateMaxSizeImage(Stream input) { uint maxSize = Convert.ToUInt32(ConfigurationManager.AppSettings["MaxImageDimension"]); try { lock (thislock) { // Read the source image var photo = ByteArrayFromStream(input); var factory = (IWICComponentFactory)new WICImagingFactory(); var inputStream = factory.CreateStream(); inputStream.InitializeFromMemory(photo, (uint)photo.Length); var decoder = factory.CreateDecoderFromStream(inputStream, null, WICDecodeOptions.WICDecodeMetadataCacheOnDemand); var frame = decoder.GetFrame(0); // Compute target size uint width, height, outWidth, outHeight; frame.GetSize(out width, out height); if (width > height) { //Check if width is greater than maxSize if (width > maxSize) { outWidth = maxSize; outHeight = height * maxSize / width; } //Width is less than maxSize, so use existing dimensions else { outWidth = width; outHeight = height; } } else { //Check if height is greater than maxSize if (height > maxSize) { outWidth = width * maxSize / height; outHeight = maxSize; } //Height is less than maxSize, so use existing dimensions else { outWidth = width; outHeight = height; } } // Prepare output stream to cache file var outputStream = new MemoryIStream(); // Prepare JPG encoder var encoder = factory.CreateEncoder(Consts.GUID_ContainerFormatJpeg, null); encoder.Initialize(outputStream, WICBitmapEncoderCacheOption.WICBitmapEncoderNoCache); // Prepare output frame IWICBitmapFrameEncode outputFrame; var arg = new IPropertyBag2[1]; encoder.CreateNewFrame(out outputFrame, arg); var propBag = arg[0]; var propertyBagOption = new PROPBAG2[1]; propertyBagOption[0].pstrName = "ImageQuality"; propBag.Write(1, propertyBagOption, new object[] { 0.85F }); outputFrame.Initialize(propBag); outputFrame.SetResolution(96, 96); outputFrame.SetSize(outWidth, outHeight); // Prepare scaler var scaler = factory.CreateBitmapScaler(); scaler.Initialize(frame, outWidth, outHeight, WICBitmapInterpolationMode.WICBitmapInterpolationModeFant); // Write the scaled source to the output frame outputFrame.WriteSource(scaler, new WICRect { X = 0, Y = 0, Width = (int)outWidth, Height = (int)outHeight }); outputFrame.Commit(); encoder.Commit(); return new ImageResizerResult { output = outputStream, height = (int)outHeight, width = (int)outWidth }; } } catch (Exception e) { return new ImageResizerResult { error = "Create maxsizeimage = " + e.Message }; } } } Thoughts on where this is going wrong? Thanks in advance for the effort.
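    One thing worth ruling out (an assumption, not something the post confirms): Request.InputStream is handed first to CreateMaxSizeImage and then again to CreateThumbnail, so the second decode may be working on an already-consumed stream. A small sketch that buffers the upload once and gives each resize pass its own rewindable stream; the helper and variable names are hypothetical.

    using System.IO;

    internal static class UploadBuffer
    {
        // Copy whatever remains of the request stream into memory once.
        public static byte[] ReadAll(Stream input)
        {
            using (var buffer = new MemoryStream())
            {
                input.CopyTo(buffer);
                return buffer.ToArray();
            }
        }
    }

    // Hypothetical usage inside the action, replacing the two direct reads:
    //   byte[] upload = UploadBuffer.ReadAll(Request.InputStream);
    //   irr  = ImageResizer.CreateMaxSizeImage(new MemoryStream(upload));
    //   irr2 = ImageResizer.CreateThumbnail(new MemoryStream(upload));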

    Read the article

  • Generic Http Module

    - by MartinF
    The problem I am trying to make a generic http module in asp.net C# for handling roles defined by an enum which i want to be able to change by a generic parameter. This will make it possible to use the generic module with any kind of enum defined for each project. The module hooks into the Authenticate event of the FormsAuthenticationModule, and is called on each request to the website. The module exposes public events which could be defined in the global.asax. But i cant seem to figure out how to make the generic http module work like a non generic module. There is 3 main problems. I cant register the generic http module in the web.config like any other module as i cant specify the generic parameter, or is possible somehow ? The way to solve that as far as i can figure out is to create a non-generic http module that intializes the generic HttpModule (the generic parameter is defined in a custom section for the module in the web.config). But that introduces the next problem. I cant find out how to make the public events exposed by the generic module available to hook into through the global.asax as you would normally do with a non-generic module by just making a public method with the name like ModuleClassName_PublicEventName. The init() method on the http module gets an reference to the HttpApplication object created in the global.asax. I dont know if it somehow could be possible with reflection to search for the methods and if they are defined in the global.asax (HttpApplication super class) hook them up with the correct event handler ? or if any methods on the HttpApplication object can be used? How would i store and later get a reference to the generic module created in the non-generic module ? I can get the non-generic module with HttpContext.Current.ApplicationInstance.Modules.Get("TheModule"); but is there any way i can store a reference to the generic module in the non-generic module (cant figure out how it should be possible), or store it somewhere else so i can always get it? If I can get a reference to the generic module from the global.asax etc. the events mentioned in nr. 2 can be manually wired to the methods. Thoughts and other possible solutions Instead of registering the module in the web.config it can be manually initialized by overridding the Init method of the HttpApplication and calling the Init method on the module. But that will introduce some new problems. The module will no longer be added to the the ModulesCollection. So I will need to store a reference somewhere else. This could be done with a property in the global.asax, and by implementing an interface, or by creating an generic abstract base type inheriting from HttpApplication, that the global.asax could inherit from. In the generic abstract base type i could also override the init method. It will still not automatically hook up methods in the global.asax with events in the generic module. If it is possible with reflection to search for defined methods in the super type of the HttpApplication it could be automatically done that way. But i can wire the methods in the global.asax with the events in the generic module manually either in the Init method or anywhere else by getting reference to the generic module. It doesnst really need to implement the IHttpModule interface if i choose to manually initalize the generic module. I could just aswell move all the code to the abstract base type inheriting from the HttpApplication. 
I would prefer to register the module simply by defining it in the web.config, as that would be the easiest and most natural / logical solution. Also, it would be great if it could be kept as an HttpModule instead of having to define an abstract base type inheriting from HttpApplication; otherwise it will be more tied up and not as loose and pluggable as I wanted it to be (but maybe that is not possible). Another alternative would be to make it all static. As far as I can figure out, I would have to somehow make sure that only one method can be added to the public static events, so it won't add a reference each time a new instance of the global.asax is created. I simply can't work out what the best solution is. I have been messing around with this and thinking about it for days now. Maybe there is an option that I haven't thought of? Hope anyone out there can help me.
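    For problem 1, a minimal sketch of the non-generic wrapper idea described above: the wrapper is what gets registered in web.config, it reads the enum type name from configuration, builds the generic module through reflection, and keeps a reference that global.asax can use to wire up events by hand. Every type, member and setting name here is illustrative, not an existing API.

    using System;
    using System.Configuration;
    using System.Web;

    public class RoleModuleWrapper : IHttpModule
    {
        // Reference the rest of the application can use to reach the generic module.
        public static IHttpModule Inner { get; private set; }

        public void Init(HttpApplication context)
        {
            // A custom configuration section would work the same way as this appSetting.
            string enumTypeName = ConfigurationManager.AppSettings["roleEnumType"];
            Type enumType = Type.GetType(enumTypeName, throwOnError: true);

            Type moduleType = typeof(RoleModule<>).MakeGenericType(enumType);
            IHttpModule inner = (IHttpModule)Activator.CreateInstance(moduleType);
            inner.Init(context);
            Inner = inner;
        }

        public void Dispose() { }
    }

    // The generic module itself, reduced to a stub for the sketch.
    public class RoleModule<TRole> : IHttpModule where TRole : struct
    {
        public event EventHandler<EventArgs> RoleResolved;

        public void Init(HttpApplication context)
        {
            // Hook the FormsAuthenticationModule / PostAuthenticateRequest here and
            // raise RoleResolved so that whatever holds RoleModuleWrapper.Inner
            // (for example global.asax) can subscribe to it manually.
        }

        public void Dispose() { }
    }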

    Read the article

  • Ruby (Rack) application could not be started - Passenger (3.0.9) error for rails 3.1.0 app on ubuntu and nginx (1.0.6) after deploying

    - by user938363
    Here is the error saying bcrypt was not loaded. The rails app is not using the Devise for authentication and gem bcrypt is not in Gemfile. Sometime, the webserver throws out the error saying spawn server can not start. gem list shows that both bcrypt-ruby 3.0.1 and 3.0.0 were installed. Ruby (Rack) application could not be started A source file that the application requires, is missing. * It is possible that you didn't upload your application files correctly. Please check whether all your application files are uploaded. * A required library may not installed. Please install all libraries that this application requires. Further information about the error may have been written to the application's log file. Please check it in order to analyse the problem. Error message: no such file to load -- bcrypt Exception class: LoadError Application root: /vol/www/emclab/current Backtrace: # File Line Location 0 /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/activesupport-3.1.0/lib/active_support/dependencies.rb 240 in `require' 1 /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/activesupport-3.1.0/lib/active_support/dependencies.rb 240 in `block in require' 2 /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/activesupport-3.1.0/lib/active_support/dependencies.rb 225 in `load_dependency' 3 /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/activesupport-3.1.0/lib/active_support/dependencies.rb 240 in `require' 4 /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/activemodel-3.1.0/lib/active_model/secure_password.rb 1 in `' 5 /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/activerecord-3.1.0/lib/active_record/base.rb 2160 in `block in ' 6 /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/activerecord-3.1.0/lib/active_record/base.rb 2140 in `class_eval' 7 /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/activerecord-3.1.0/lib/active_record/base.rb 2140 in `' 8 /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/activerecord-3.1.0/lib/active_record/base.rb 31 in `' 9 /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/activerecord-3.1.0/lib/active_record/session_store.rb 77 in `' 10 /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/activerecord-3.1.0/lib/active_record/session_store.rb 51 in `' 11 /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/activerecord-3.1.0/lib/active_record/session_store.rb 1 in `' 12 /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/railties-3.1.0/lib/rails/application/configuration.rb 123 in `session_store' 13 /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/railties-3.1.0/lib/rails/application.rb 168 in `block in default_middleware_stack' 14 /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/railties-3.1.0/lib/rails/application.rb 142 in `tap' 15 /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/railties-3.1.0/lib/rails/application.rb 142 in `default_middleware_stack' 16 /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/railties-3.1.0/lib/rails/engine.rb 445 in `app' 17 /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/railties-3.1.0/lib/rails/application/finisher.rb 37 in `block in ' 18 /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/railties-3.1.0/lib/rails/initializable.rb 25 in `instance_exec' 19 /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/railties-3.1.0/lib/rails/initializable.rb 25 in `run' 20 /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/railties-3.1.0/lib/rails/initializable.rb 50 in `block in run_initializers' 21 /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/railties-3.1.0/lib/rails/initializable.rb 49 in `each' 22 /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/railties-3.1.0/lib/rails/initializable.rb 49 in `run_initializers' 23 
/vol/www/emclab/shared/bundle/ruby/1.9.1/gems/railties-3.1.0/lib/rails/application.rb 92 in `initialize!' 24 /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/railties-3.1.0/lib/rails/railtie/configurable.rb 30 in `method_missing' 25 /vol/www/emclab/releases/20111115184804/config/environment.rb 5 in `' 26 config.ru 3 in `require' 27 config.ru 3 in `block in ' 28 /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/rack-1.3.2/lib/rack/builder.rb 51 in `instance_eval' 29 /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/rack-1.3.2/lib/rack/builder.rb 51 in `initialize' 30 config.ru 1 in `new' 31 config.ru 1 in `' 32 /home/dtt/.rvm/gems/ruby-1.9.2-p290/gems/passenger-3.0.9/lib/phusion_passenger/rack/application_spawner.rb 222 in `eval' 33 /home/dtt/.rvm/gems/ruby-1.9.2-p290/gems/passenger-3.0.9/lib/phusion_passenger/rack/application_spawner.rb 222 in `load_rack_app' 34 /home/dtt/.rvm/gems/ruby-1.9.2-p290/gems/passenger-3.0.9/lib/phusion_passenger/rack/application_spawner.rb 156 in `block in initialize_server' 35 /home/dtt/.rvm/gems/ruby-1.9.2-p290/gems/passenger-3.0.9/lib/phusion_passenger/utils.rb 572 in `report_app_init_status' 36 /home/dtt/.rvm/gems/ruby-1.9.2-p290/gems/passenger-3.0.9/lib/phusion_passenger/rack/application_spawner.rb 153 in `initialize_server' 37 /home/dtt/.rvm/gems/ruby-1.9.2-p290/gems/passenger-3.0.9/lib/phusion_passenger/abstract_server.rb 204 in `start_synchronously' 38 /home/dtt/.rvm/gems/ruby-1.9.2-p290/gems/passenger-3.0.9/lib/phusion_passenger/abstract_server.rb 180 in `start' 39 /home/dtt/.rvm/gems/ruby-1.9.2-p290/gems/passenger-3.0.9/lib/phusion_passenger/rack/application_spawner.rb 128 in `start' 40 /home/dtt/.rvm/gems/ruby-1.9.2-p290/gems/passenger-3.0.9/lib/phusion_passenger/spawn_manager.rb 253 in `block (2 levels) in spawn_rack_application' 41 /home/dtt/.rvm/gems/ruby-1.9.2-p290/gems/passenger-3.0.9/lib/phusion_passenger/abstract_server_collection.rb 132 in `lookup_or_add' 42 /home/dtt/.rvm/gems/ruby-1.9.2-p290/gems/passenger-3.0.9/lib/phusion_passenger/spawn_manager.rb 246 in `block in spawn_rack_application' 43 /home/dtt/.rvm/gems/ruby-1.9.2-p290/gems/passenger-3.0.9/lib/phusion_passenger/abstract_server_collection.rb 82 in `block in synchronize' 44 prelude> 10:in `synchronize' 45 /home/dtt/.rvm/gems/ruby-1.9.2-p290/gems/passenger-3.0.9/lib/phusion_passenger/abstract_server_collection.rb 79 in `synchronize' 46 /home/dtt/.rvm/gems/ruby-1.9.2-p290/gems/passenger-3.0.9/lib/phusion_passenger/spawn_manager.rb 244 in `spawn_rack_application' 47 /home/dtt/.rvm/gems/ruby-1.9.2-p290/gems/passenger-3.0.9/lib/phusion_passenger/spawn_manager.rb 137 in `spawn_application' 48 /home/dtt/.rvm/gems/ruby-1.9.2-p290/gems/passenger-3.0.9/lib/phusion_passenger/spawn_manager.rb 275 in `handle_spawn_application' 49 /home/dtt/.rvm/gems/ruby-1.9.2-p290/gems/passenger-3.0.9/lib/phusion_passenger/abstract_server.rb 357 in `server_main_loop' 50 /home/dtt/.rvm/gems/ruby-1.9.2-p290/gems/passenger-3.0.9/lib/phusion_passenger/abstract_server.rb 206 in `start_synchronously' 51 /home/dtt/.rvm/gems/ruby-1.9.2-p290/gems/passenger-3.0.9/helper-scripts/passenger-spawn-server 99 in `' cap deploy:check returns: You appear to have all necessary dependencies installed Any thoughts about the problem? thanks!

    Read the article

  • Why is android:FLAG_BLUR_BEHIND creating a gradient background in my new activity instead of bluring

    - by nderraugh
    Hi, I've got two activities. One is supposed to be a blur in front of the other. The background activity has several ImageViews which are set up as thin gradients extending across most of the screen and 10dip high. When I start the second activity it sets the background as a gradient occupying the entire window space, that is it appears to be fill_parent'd for both height and width. If I comment out the ImageViews then it blurs and looks as expected. Any thoughts? Here's the code doing the blur. import android.app.Activity; import android.os.Bundle; import android.view.View; import android.view.WindowManager; import android.view.View.OnClickListener; public class TransluscentBlurSummaryB extends Activity { @Override protected void onCreate(Bundle icicle) { super.onCreate(icicle); getWindow().setFlags(WindowManager.LayoutParams.FLAG_BLUR_BEHIND, WindowManager.LayoutParams.FLAG_BLUR_BEHIND); getWindow().getAttributes().dimAmount = 0.5f; getWindow().setFlags(WindowManager.LayoutParams.FLAG_DIM_BEHIND, WindowManager.LayoutParams.FLAG_DIM_BEHIND); setContentView(R.layout.sheetbdetails); OnClickListener clickListener = new OnClickListener() { public void onClick(View v) { TransluscentBlurSummaryB.this.finish(); } }; findViewById(R.id.sheetbdetailstable).setOnClickListener(clickListener); } } And here's the layout with the ImageView gradients. <?xml version="1.0" encoding="utf-8"?> <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:layout_height="fill_parent" android:id="@+id/summarysparent" > <!-- view1 goes on top --> <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" android:id="@+id/view2" android:layout_width="fill_parent" android:layout_height="wrap_content" android:layout_alignParentBottom="true"> <Button android:layout_height="wrap_content" android:id="@+id/ButtonBack" android:layout_width="wrap_content" android:text="Back" android:width="100dp"></Button> <Button android:layout_height="wrap_content" android:id="@+id/ButtonNext" android:layout_width="wrap_content" android:layout_alignParentRight="true" android:text="Start Over" android:width="100dp"></Button> </RelativeLayout> <TextView android:id="@+id/view1" android:layout_height="wrap_content" android:layout_alignParentTop="true" android:layout_width="wrap_content" android:layout_centerHorizontal="true" android:textSize="10pt" android:text="Summary"/> <ScrollView xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:layout_height="wrap_content" android:id="@+id/summaryscrollview" android:layout_below="@+id/view1" android:layout_above="@+id/view2"> <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:layout_height="wrap_content" android:id="@+id/summarydetails" > <!-- view2 goes on the bottom --> <TextView android:id="@+id/textview2" android:layout_height="wrap_content" android:layout_width="wrap_content" android:layout_below="@+id/view1" android:layout_centerHorizontal="true" android:text="Recommended Child Support Order" android:layout_marginTop="10dip" /> <ImageView android:id="@+id/horizontalLine1" android:layout_width="fill_parent" android:layout_marginLeft="5dip" android:layout_marginRight="5dip" android:layout_height="10dip" android:src="@drawable/black_white_gradient" android:layout_below="@+id/textview2" android:layout_marginTop="10dip" /> <TextView android:id="@+id/textview3" android:layout_height="wrap_content" 
android:layout_width="wrap_content" android:layout_below="@+id/horizontalLine1" android:layout_centerHorizontal="true" android:text="You" android:layout_marginTop="10dip" /> <TextView android:id="@+id/textview10" android:layout_height="wrap_content" android:layout_width="150dp" android:layout_below="@+id/textview3" android:layout_centerHorizontal="true" android:layout_marginTop="10dip" android:gravity="center_horizontal" /> <ImageView android:id="@+id/horizontalLine2" android:layout_width="fill_parent" android:layout_marginLeft="5dip" android:layout_marginRight="5dip" android:layout_height="10dip" android:src="@drawable/black_white_gradient" android:layout_below="@+id/textview10" android:layout_marginTop="10dip" /> <TextView android:id="@+id/textview4" android:layout_height="wrap_content" android:layout_width="wrap_content" android:layout_below="@+id/horizontalLine2" android:layout_centerHorizontal="true" android:text="Other Parent" android:layout_marginTop="10dip" /> <TextView android:id="@+id/textview11" android:layout_height="wrap_content" android:layout_width="150dp" android:layout_below="@+id/textview4" android:layout_centerHorizontal="true" android:layout_marginTop="10dip" android:text="$536.18" android:gravity="center_horizontal" /> <ImageView android:id="@+id/horizontalLine3" android:layout_width="fill_parent" android:layout_marginLeft="5dip" android:layout_marginRight="5dip" android:layout_height="10dip" android:src="@drawable/black_white_gradient" android:layout_below="@+id/textview11" android:layout_marginTop="10dip" /> <TextView android:id="@+id/textview5" android:layout_height="wrap_content" android:layout_width="wrap_content" android:layout_below="@+id/horizontalLine3" android:layout_centerHorizontal="true" android:text="Calculation Details" android:layout_marginTop="15dip" /> <ImageView android:id="@+id/infoButton" android:src="@drawable/ic_menu_info_details" android:layout_height="wrap_content" android:layout_width="wrap_content" android:layout_below="@+id/horizontalLine3" android:layout_toRightOf="@+id/textview5" android:clickable="true" /> <ImageView android:id="@+id/horizontalLine4" android:layout_width="fill_parent" android:layout_marginLeft="5dip" android:layout_marginRight="5dip" android:layout_height="10dip" android:src="@drawable/black_white_gradient" android:layout_below="@+id/textview5" android:layout_marginTop="18dip" /> </RelativeLayout> </ScrollView> </RelativeLayout> The gradient drawable is this. <?xml version="1.0" encoding="utf-8"?> <shape xmlns:android="http://schemas.android.com/apk/res/android" android:shape="rectangle"> <gradient android:startColor="#FFFFFF" android:centerColor="#000000" android:endColor="#FFFFFF" android:angle="270"/> <padding android:left="7dp" android:top="7dp" android:right="7dp" android:bottom="7dp" /> <corners android:radius="8dp" /> </shape> And here's the layout from the activity doing the blurring on top. 
<?xml version="1.0" encoding="utf-8"?> <ScrollView xmlns:android="http://schemas.android.com/apk/res/android" android:id="@+id/sheetbdetails" android:layout_width="fill_parent" android:layout_height="fill_parent" android:clickable="true" > <TableLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:layout_height="fill_parent" android:scrollbars="vertical" android:shrinkColumns="0" android:id="@+id/sheetbdetailstable" > <TableRow> <TextView android:padding="3dip" /> <TextView android:text="You" android:padding="3dip" /> <TextView android:text="@string/otherparent" android:padding="3dip" /> <TextView android:text="Combined" android:padding="3dip" /> </TableRow> </TableLayout> </ScrollView> The transparent windows are themed from styles.xml in the apidemos using @style/Theme.Transparent.

    Read the article

  • Fast block placement algorithm, advice needed?

    - by James Morris
    I need to emulate the window placement strategy of the Fluxbox window manager. As a rough guide, visualize randomly sized windows filling up the screen one at a time, where the rough size of each results in an average of 80 windows on screen without any window overlapping another. It is important to note that windows will close and the space that closed windows previously occupied becomes available once more for the placement of new windows. The window placement strategy has three binary options: Windows build horizontal rows or vertical columns (potentially) Windows are placed from left to right or right to left Windows are placed from top to bottom or bottom to top Why is the algorithm a problem? It needs to operate to the deadlines of a real time thread in an audio application. At this moment I am only concerned with getting a fast algorithm, don't concern yourself over the implications of real time threads and all the hurdles in programming that that brings. So far I have two choices which I have built loose prototypes for: 1) A port of the Fluxbox placement algorithm into my code. The problem with this is, the client (my program) gets kicked out of the audio server (JACK) when I try placing the worst case scenario of 256 blocks using the algorithm. This algorithm performs over 14000 full (linear) scans of the list of blocks already placed when placing the 256th window. 2) My alternative approach. Only partially implemented, this approach uses a data structure for each area of rectangular free unused space (the list of windows can be entirely separate, and is not required for testing of this algorithm). The data structure acts as a node in a doubly linked list (with sorted insertion), as well as containing the coordinates of the top-left corner, and the width and height. Furthermore, each block data structure also contains four links which connect to each immediately adjacent (touching) block on each of the four sides. IMPORTANT RULE: Each block may only touch with one block per side. The problem with this approach is, it's very complex. I have implemented the straightforward cases where 1) space is removed from one corner of a block, 2) splitting neighbouring blocks so that the IMPORTANT RULE is adhered to. The less straightforward case, where the space to be removed can only be found within a column or row of boxes, is only partially implemented - if one of the blocks to be removed is an exact fit for width (ie column) or height (ie row) then problems occur. And don't even mention the fact this only checks columns one box wide, and rows one box tall. I've implemented this algorithm in C - the language I am using for this project (I've not used C++ for a few years and am uncomfortable using it after having focused all my attention to C development, it's a hobby). The implementation is 700+ lines of code (including plenty of blank lines, brace lines, comments etc). The implementation only works for the horizontal-rows + left-right + top-bottom placement strategy. So I've either got to add some way of making this +700 lines of code work for the other 7 placement strategy options, or I'm going to have to duplicate those +700 lines of code for the other seven options. Neither of these is attractive, the first, because the existing code is complex enough, the second, because of bloat. 
The algorithm is not even at a stage where I can use it in the real-time worst case scenario, because of missing functionality, so I still don't know if it actually performs better or worse than the first approach. What else is there? I've skimmed over and discounted:
Bin packing algorithms: their emphasis on optimal fit does not match the requirements of this algorithm.
Recursive bisection placement algorithms: these sound promising, but they are for circuit design and their emphasis is optimal wire length.
In both of these, and especially the latter, all elements to be placed/packed are known before the algorithm begins. I need an algorithm which works accumulatively with what it is given, when it is told to do it. What are your thoughts on this? How would you approach it? What other algorithms should I look at? Or even what concepts should I research, seeing as I've not studied computer science/software engineering? Please ask questions in comments if further information is needed. [edit] If it makes any difference, the units for the coordinates will not be pixels. The units are unimportant, but the grid where windows/blocks/whatever can be placed will be 127 x 127 units.
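    Purely to restate the free-space data structure described above in code (sketched in C# for brevity even though the project itself is C; all field names are invented):

    // A node in the sorted doubly linked list of free rectangles, with at most one
    // touching neighbour per side, as per the IMPORTANT RULE above.
    public sealed class FreeBlock
    {
        public int X, Y, Width, Height;            // top-left corner and extent

        public FreeBlock Prev, Next;               // sorted doubly linked list links

        public FreeBlock LeftNeighbour,            // one adjacent free block per side
                         RightNeighbour,
                         TopNeighbour,
                         BottomNeighbour;
    }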

    Read the article

  • T-SQL Dynamic SQL and Temp Tables

    - by George
    It looks like #temptables created using dynamic SQL via the EXECUTE string method have a different scope and can't be referenced by "fixed" SQLs in the same stored procedure. However, I can reference a temp table created by a dynamic SQL statement in a subsequence dynamic SQL but it seems that a stored procedure does not return a query result to a calling client unless the SQL is fixed. A simple 2 table scenario: I have 2 tables. Let's call them Orders and Items. Order has a Primary key of OrderId and Items has a Primary Key of ItemId. Items.OrderId is the foreign key to identify the parent Order. An Order can have 1 to n Items. I want to be able to provide a very flexible "query builder" type interface to the user to allow the user to select what Items he want to see. The filter criteria can be based on fields from the Items table and/or from the parent Order table. If an Item meets the filter condition including and condition on the parent Order if one exists, the Item should be return in the query as well as the parent Order. Usually, I suppose, most people would construct a join between the Item table and the parent Order tables. I would like to perform 2 separate queries instead. One to return all of the qualifying Items and the other to return all of the distinct parent Orders. The reason is two fold and you may or may not agree. The first reason is that I need to query all of the columns in the parent Order table and if I did a single query to join the Orders table to the Items table, I would be repoeating the Order information multiple times. Since there are typically a large number of items per Order, I'd like to avoid this because it would result in much more data being transfered to a fat client. Instead, as mentioned, I would like to return the two tables individually in a dataset and use the two tables within to populate a custom Order and child Items client objects. (I don't know enough about LINQ or Entity Framework yet. I build my objects by hand). The second reason I would like to return two tables instead of one is because I already have another procedure that returns all of the Items for a given OrderId along with the parent Order and I would like to use the same 2-table approach so that I could reuse the client code to populate my custom Order and Client objects from the 2 datatables returned. What I was hoping to do was this: Construct a dynamic SQL string on the Client which joins the orders table to the Items table and filters appropriate on each table as specified by the custom filter created on the Winform fat-client app. The SQL build on the client would have looked something like this: TempSQL = " INSERT INTO #ItemsToQuery OrderId, ItemsId FROM Orders, Items WHERE Orders.OrderID = Items.OrderId AND /* Some unpredictable Order filters go here */ AND /* Some unpredictable Items filters go here */ " Then, I would call a stored procedure, CREATE PROCEDURE GetItemsAndOrders(@tempSql as text) Execute (@tempSQL) --to create the #ItemsToQuery table SELECT * FROM Items WHERE Items.ItemId IN (SELECT ItemId FROM #ItemsToQuery) SELECT * FROM Orders WHERE Orders.OrderId IN (SELECT DISTINCT OrderId FROM #ItemsToQuery) The problem with this approach is that #ItemsToQuery table, since it was created by dynamic SQL, is inaccessible from the following 2 static SQLs and if I change the static SQLs to dynamic, no results are passed back to the fat client. 
A few workarounds come to mind, but I'm looking for a better one:
1) The first SQL could be performed by executing the dynamically constructed SQL from the client. The results could then be passed as a table to a modified version of the above stored procedure. I am familiar with passing table data as XML. If I did this, the stored proc could then insert the data into a temporary table using static SQL; because that temp table was not created by dynamic SQL, it could then be queried without issue. (I could also investigate passing the new Table type param instead of XML.) However, I would like to avoid passing up potentially large lists to a stored procedure.
2) I could perform all the queries from the client. The first would be something like this: SELECT Items.* FROM Orders, Items WHERE Order.OrderId = Items.OrderId AND (dynamic filter) SELECT Orders.* FROM Orders, Items WHERE Order.OrderId = Items.OrderId AND (dynamic filter) This still provides me with the ability to reuse my client-side object-population code because the Orders and Items continue to be returned in two different tables. I have a feeling, too, that I might have some options using a Table data type within my stored proc, but that is also new to me and I would appreciate a little bit of spoon feeding on that one. If you even scanned this far in what I wrote, I am surprised, but if so, I would appreciate any of your thoughts on how to accomplish this best.
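    On the table type option mentioned in workaround 1: a hedged sketch of how the client call could look with a table-valued parameter instead of XML, assuming a user-defined table type such as dbo.ItemsToQueryList (OrderId int, ItemId int) and a version of GetItemsAndOrders that takes it as a READONLY parameter. All names are illustrative.

    using System.Data;
    using System.Data.SqlClient;

    internal static class ItemsAndOrdersClient
    {
        // idsFromClientQuery is a DataTable with the same columns as the table type,
        // filled by running the dynamically built filter query on the client first.
        public static DataSet Call(SqlConnection conn, DataTable idsFromClientQuery)
        {
            using (var cmd = new SqlCommand("dbo.GetItemsAndOrders", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;

                SqlParameter p = cmd.Parameters.AddWithValue("@ids", idsFromClientQuery);
                p.SqlDbType = SqlDbType.Structured;
                p.TypeName = "dbo.ItemsToQueryList";

                var result = new DataSet();
                using (var da = new SqlDataAdapter(cmd))
                {
                    da.Fill(result);   // Tables[0] = Items, Tables[1] = Orders
                }
                return result;
            }
        }
    }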

    Read the article

  • Prefer extension methods for encapsulation and reusability?

    - by tzaman
    edit4: wikified, since this seems to have morphed more into a discussion than a specific question. In C++ programming, it's generally considered good practice to "prefer non-member non-friend functions" instead of instance methods. This has been recommended by Scott Meyers in this classic Dr. Dobbs article, and repeated by Herb Sutter and Andrei Alexandrescu in C++ Coding Standards (item 44); the general argument being that if a function can do its job solely by relying on the public interface exposed by the class, it actually increases encapsulation to have it be external. While this confuses the "packaging" of the class to some extent, the benefits are generally considered worth it. Now, ever since I've started programming in C#, I've had a feeling that here is the ultimate expression of the concept that they're trying to achieve with "non-member, non-friend functions that are part of a class interface". C# adds two crucial components to the mix - the first being interfaces, and the second extension methods: Interfaces allow a class to formally specify their public contract, the methods and properties that they're exposing to the world. Any other class can choose to implement the same interface and fulfill that same contract. Extension methods can be defined on an interface, providing any functionality that can be implemented via the interface to all implementers automatically. And best of all, because of the "instance syntax" sugar and IDE support, they can be called the same way as any other instance method, eliminating the cognitive overhead! So you get the encapsulation benefits of "non-member, non-friend" functions with the convenience of members. Seems like the best of both worlds to me; the .NET library itself providing a shining example in LINQ. However, everywhere I look I see people warning against extension method overuse; even the MSDN page itself states: In general, we recommend that you implement extension methods sparingly and only when you have to. (edit: Even in the current .NET library, I can see places where it would've been useful to have extensions instead of instance methods - for example, all of the utility functions of List<T> (Sort, BinarySearch, FindIndex, etc.) would be incredibly useful if they were lifted up to IList<T> - getting free bonus functionality like that adds a lot more benefit to implementing the interface.) So what's the verdict? Are extension methods the acme of encapsulation and code reuse, or am I just deluding myself? (edit2: In response to Tomas - while C# did start out with Java's (overly, imo) OO mentality, it seems to be embracing more multi-paradigm programming with every new release; the main thrust of this question is whether using extension methods to drive a style change (towards more generic / functional C#) is useful or worthwhile..) edit3: overridable extension methods The only real problem identified so far with this approach, is that you can't specialize extension methods if you need to. I've been thinking about the issue, and I think I've come up with a solution. Suppose I have an interface MyInterface, which I want to extend - I define my extension methods in a MyExtension static class, and pair it with another interface, call it MyExtensionOverrider. 
MyExtension methods are defined according to this pattern: public static int MyMethod(this MyInterface obj, int arg, bool attemptCast=true) { if (attemptCast && obj is MyExtensionOverrider) { return ((MyExtensionOverrider)obj).MyMethod(arg); } // regular implementation here } The override interface mirrors all of the methods defined in MyExtension, except without the this or attemptCast parameters: public interface MyExtensionOverrider { int MyMethod(int arg); string MyOtherMethod(); } Now, any class can implement the interface and get the default extension functionality: public class MyClass : MyInterface { ... } Anyone that wants to override it with specific implementations can additionally implement the override interface: public class MySpecializedClass : MyInterface, MyExtensionOverrider { public int MyMethod(int arg) { //specialized implementation for one method } public string MyOtherMethod() { // fallback to default for others MyExtension.MyOtherMethod(this, attemptCast: false); } } And there we go: extension methods provided on an interface, with the option of complete extensibility if needed. Fully general too, the interface itself doesn't need to know about the extension / override, and multiple extension / override pairs can be implemented without interfering with each other. I can see three problems with this approach - It's a little bit fragile - the extension methods and override interface have to be kept synchronized manually. It's a little bit ugly - implementing the override interface involves boilerplate for every function you don't want to specialize. It's a little bit slow - there's an extra bool comparison and cast attempt added to the mainline of every method. Still, all those notwithstanding, I think this is the best we can get until there's language support for interface functions. Thoughts?
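    A self-contained toy version of the extension/override pairing described above, with invented shape names, just to show the dispatch in action:

    using System;

    public interface IShape { double Size { get; } }

    // Override hook mirroring the extension method, as in the pattern above.
    public interface IShapeExtensionOverrider { string Describe(); }

    public static class ShapeExtensions
    {
        public static string Describe(this IShape shape, bool attemptCast = true)
        {
            if (attemptCast && shape is IShapeExtensionOverrider)
                return ((IShapeExtensionOverrider)shape).Describe();
            return "shape of size " + shape.Size;      // shared default behaviour
        }
    }

    public class Circle : IShape
    {
        public double Size { get { return 3.0; } }
    }

    public class LoudCircle : IShape, IShapeExtensionOverrider
    {
        public double Size { get { return 3.0; } }
        public string Describe() { return "LOUD circle!"; }   // specialized behaviour
    }

    public static class Demo
    {
        public static void Main()
        {
            IShape plain = new Circle();
            IShape special = new LoudCircle();
            Console.WriteLine(plain.Describe());    // falls through to the extension body
            Console.WriteLine(special.Describe());  // routed to the override interface
        }
    }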

    Read the article

  • jQuery CSS Custom Flyout Menu Styling Issue

    - by aherrick
    I'm close to nailing this flyout menu I have been working on, just have a couple of current pain points. I'm trying to get left/right padding on my submenu items, as you can see I am not quite there. Also when the first submenu is displayed, I want to create a bit of a gap between the first row of list items and the child. Below is my current code and a screen shot displaying what I want. Based on my current CSS, any thoughts on how to get this done in a clean way? <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /> <title></title> <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script> <script type="text/javascript"> function mainmenu() { $("#nav ul").css({ display: "none" }); // Opera Fix $("#nav li").hover(function() { $(this).find('ul:first').css({ visibility: "visible", display: "none" }).show(400); }, function() { $(this).find('ul:first').css({ visibility: "hidden" }); }); } $(document).ready(function() { mainmenu(); }); </script> <style type="text/css"> * { padding: 0px; margin: 0px; } body { font-size: 0.85em; font-family: Verdana, Arial, Helvetica, sans-serif; } #nav, #nav ul { margin: 0; padding: 0; list-style-type: none; list-style-position: outside; position: relative; } #nav a { display: block; padding: 4px 0px 4px 0px; color: #dfca90; text-decoration: none; background-color: #ECE9D8; font-size: 9px; font-weight: bold; font: bold 15px Palatino, 'Palatino Linotype' , Georgia, serif; } #nav > li > a { font-size: 16px; font-variant: small-caps; border-right: 1px solid #dfca90; padding-right: 10px; padding-left: 10px; padding-bottom: 6px; padding-top: 6px; background-color: #fff; color: #dfca90; } #nav li ul li a:hover { color: #999; } #nav li { float: left; position: relative; } #nav ul { position: absolute; display: none; width: 170px; border: 2px solid #dfca90; } #nav ul li { } #nav li ul a { width: 170px; height: auto; float: left; } #nav ul ul { top: -2px; } #nav li ul ul { left: 170px; background-color: #ECE9D8; } #nav li:hover ul ul, #nav li:hover ul ul ul, #nav li:hover ul ul ul ul { display: none; } #nav li:hover ul, #nav li li:hover ul, #nav li li li:hover ul, #nav li li li li:hover ul { display: block; } </style> </head> <body> <ul id="nav"> <li><a href="#">1 HTML</a></li> <li><a href="#">2 CSS</a></li> <li><a href="#">3 Javascript </a> <ul> <li><a href="#">3.1 jQuery</a> <ul> <li><a href="#">3.1.1 Download</a> </li> <li><a href="#">3.1.2 Tutorial</a> </li> </ul> </li> <li><a href="#">3.2 Mootools</a></li> <li><a href="#">3.3 Prototype</a></li> </ul> </li> </ul> </body> </html>

    Read the article

  • Am I crazy? (How) should I create a jQuery content editor?

    - by Brendon Muir
    Ok, so I created a CMS mainly aimed at Primary Schools. It's getting fairly popular in New Zealand but the one thing I hate with a passion is the largely bad quality of in browser WYSIWYG editors. I've been using KTML (made by InterAKT which was purchased by Adobe a few years ago). In my opinion this editor does a lot of great things (image editing/management, thumbnailing and pretty good content editing). Unfortunately time has had its nasty way with this product and new browsers are beginning to break features and generally degrade the performance of this tool. It's also quite scary basing my livelihood on a defunct product! I've been hunting, in fact I regularly hunt around to see if anything has changed in the WYSIWYG arena. The closest thing I've seen that excites me is the WYSIHAT framework, but they've decided to ignore a pretty relevant editing paradigm which I'm going to outline below. This is the idea for my proposed editor, and I don't know of any existing products that can do this properly: Right, so the traditional model for editing let's say a Page in a CMS is to log into a 'back end' and click edit on the page. This will then load another screen with the editor in it and perhaps a few other fields. More advanced CMS's will maybe have several editing boxes that are for different portions of the page. Anyway, the big problem with this way of doing things is that the user is editing a document outside of the final context it will appear in. In the simplest terms, this means the page template. Many things can be wrong, e.g. the with of the editing area might be different to the width of the actual template area. The height is nearly always fixed because existing editors always seem to use IFRAMES for backward compatibility. And there are plenty of other beefs which I'm sure you're quite aware of if you're in this development area. Here's my editor utopia: You click 'Edit Page': The actual page (with its actual template) displays. Portions of the page have been marked as editable via a class name. You click on one of these areas (in my case it'd just be the big 'body' area in the middle of the template) and a editing bar drops down from the top of the screen with all your standard controls (bold, italic, insert image etc...). Iframes are never used, instead we rely on setting contentEditable to true on the DIV's in question. Firefox 2 and IE6 can go away, let's move on. You can edit the page knowing exactly how it will look when you save it. Because all the styles for this template are loaded, your headings will look correct, everything will be just dandy. Is this such a radical concept? Why are we still content with TinyMCE and that other editor that is too embarrassing to use because it sounds like a swear word!? Let's face the facts: I'm a JavaScript novice. I did once play around in this area using the Javascript Anthology from SitePoint as a guide. It was quite a cool learning experience, but they of course used the IFRAME to make their lives easier. I tried to go a different route and just use contentEditable and even tried to sidestep the native content editing routines (execCommand) and instead wrote my own. They kind of worked but there were always issues. Now we have jQuery, and a few libraries that abstract things like IE's lack of Range support. I'm wondering, am I crazy, or is it actually a good idea to try and build an editor around this editing paradigm using jQuery and relevant plugins to make the job easier? My actual questions: Where would you start? 
What plugins do you know of that would help the most? Is it worth it, or is there a magical project that already exists that I should join in on? What are the biggest hurdles to overcome in a project like this? Am I crazy? I hope this question has been posted on the right board. I figured it is a technical question, as I want to know the specific hurdles and pitfalls to watch out for and whether it is technically feasible with today's technology. Looking forward to hearing people's thoughts and opinions.

    Read the article

  • Representing robot's elbow angle in 3-D

    - by Onkar Deshpande
    I am given coordinates of two points in 3-D viz. shoulder point and object point(to which I am supposed to reach). I am also given the length from my shoulder-to-elbow arm and the length of my forearm. I am trying to solve for the unknown position(the position of the joint elbow). I am using cosine rule to find out the elbow angle. Here is my code - #include <stdio.h> #include <math.h> #include <stdlib.h> struct point { double x, y, z; }; struct angles { double clock_wise; double counter_clock_wise; }; double max(double a, double b) { return (a > b) ? a : b; } /* * Check if the combination can make a triangle by considering the fact that sum * of any two sides of a triangle is greater than the remaining side. The * overlapping condition of links is handled separately in main(). */ int valid_triangle(struct point p0, double l0, struct point p1, double l1) { double dist = sqrt(pow((fabs(p1.z - p0.z)), 2) + pow((fabs(p1.y - p0.y)), 2) + pow((fabs(p1.x - p0.x)), 2)); if((max(dist, l0) == dist) && max(dist, l1) == dist) { return (dist < (l0 + l1)); } else if((max(dist, l0) == l0) && (max(l0, l1) == l0)) { return (l0 < (dist + l1)); } else { return (l1 < (dist + l0)); } } /* * Cosine rule is used to find the elbow angle. Positive value indicates a * counter clockwise angle while negative value indicates a clockwise angle. * Since this problem has at max 2 solutions for any given position of P0 and * P1, I am returning a structure of angles which can be used to consider angles * from both direction viz. clockwise-negative and counter-clockwise-positive */ void return_config(struct point p0, double l0, struct point p1, double l1, struct angles *a) { double dist = sqrt(pow((fabs(p1.z - p0.z)), 2) + pow((fabs(p1.y - p0.y)), 2) + pow((fabs(p1.x - p0.x)), 2)); double degrees = (double) acos((l0 * l0 + l1 * l1 - dist * dist) / (2 * l0 * l1)) * (180.0f / 3.1415f); a->clock_wise = -degrees; a->counter_clock_wise = degrees; } int main() { struct point p0, p1; struct angles a; p0.x = 15, p0.y = 4, p0.z = 0; p1.x = 20, p1.y = 4, p1.z = 0; double l0 = 5, l1 = 8; if(valid_triangle(p0, l0, p1, l1)) { printf("Three lengths can make a valid configuration \n"); return_config(p0, l0, p1, l1, &a); printf("Angle of the elbow point (clockwise) = %lf, (counter clockwise) = %lf \n", a.clock_wise, a.counter_clock_wise); } else { double dist = sqrt(pow((fabs(p1.z - p0.z)), 2) + pow((fabs(p1.y - p0.y)), 2) + pow((fabs(p1.x - p0.x)), 2)); if((dist <= (l0 + l1)) && (dist > l0)) { a.clock_wise = -180.0f; a.counter_clock_wise = 180.0f; printf("Angle of the elbow point (clockwise) = %lf, (counter clockwise) = %lf \n", a.clock_wise, a.counter_clock_wise); } else if((dist <= fabs(l0 - l1)) && (dist < l0)){ a.clock_wise = -0.0f; a.counter_clock_wise = 0.0f; printf("Angle of the elbow point (clockwise) = %lf, (counter clockwise) = %lf \n", a.clock_wise, a.counter_clock_wise); } else printf("Given combination cannot make a valid configuration\n"); } return 0; } However, this solution makes sense only in 2-D. Because clockwise and counter-clockwise are meaningless without an axis and direction of rotation. Returning only an angle is technically correct but it leaves a lot of work for the client of this function to use the result in meaningful way. How can I make the changes to get the axis and direction of rotation ? Also, I want to know how many possible solution could be there for this problem. Please let me know your thoughts ! Any help is highly appreciated ...
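    On the two closing questions: in 3-D the elbow is not unique. For a reachable target it can sit anywhere on a circle around the shoulder-target line, so there are infinitely many solutions unless an extra constraint (a preferred "elbow up" direction, or a swivel angle) is supplied; confined to a plane there are at most two mirror-image solutions. Once an elbow position is chosen, the bend axis can be reported as the unit normal of the shoulder-elbow-target plane, and the sign of the angle follows from which way that normal points. A sketch of that construction, written in C# with System.Numerics rather than the original C, purely for illustration:

    using System;
    using System.Numerics;

    static class ElbowSolver
    {
        // Returns false when the target is unreachable or the hint is unusable.
        public static bool Solve(Vector3 shoulder, Vector3 target, float l0, float l1,
                                 Vector3 elbowHint, out Vector3 elbow, out Vector3 bendAxis)
        {
            elbow = shoulder;
            bendAxis = Vector3.Zero;

            Vector3 toTarget = target - shoulder;
            float d = toTarget.Length();
            if (d <= 0 || d > l0 + l1 || d < Math.Abs(l0 - l1)) return false;

            // Sphere-sphere intersection: 'a' is the distance from the shoulder, along the
            // shoulder->target line, to the plane of the elbow circle; 'h' is that circle's radius.
            float a = (l0 * l0 - l1 * l1 + d * d) / (2 * d);
            float h = (float)Math.Sqrt(Math.Max(0f, l0 * l0 - a * a));

            Vector3 dir = toTarget / d;
            // The part of the hint perpendicular to the shoulder->target line picks one
            // point on the circle of possible elbow positions.
            Vector3 side = elbowHint - Vector3.Dot(elbowHint, dir) * dir;
            if (side.LengthSquared() < 1e-9f) return false;
            side = Vector3.Normalize(side);

            elbow = shoulder + a * dir + h * side;

            // Bend axis = normal of the shoulder-elbow-target plane; when the arm is fully
            // extended that plane is degenerate, so fall back to any perpendicular of dir.
            Vector3 n = Vector3.Cross(elbow - shoulder, target - elbow);
            bendAxis = n.LengthSquared() > 1e-9f ? Vector3.Normalize(n) : Vector3.Cross(dir, side);
            return true;
        }
    }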

    Read the article

  • Stupid problem with Javascript calculations and postbacks

    - by rockinthesixstring
    I'm working on an ASP.NET web app where I'm using a Wizard to take the client through a large series of steps. One of the steps involves calculating a bunch of numbers on the fly. The numbers calculate properly, but when I click "Next" and then go back again, some of them are not retained. Here is the calculation function:

    function CalculateFields() {
        txtSellingPrice = document.getElementById('<%=txtSellingPrice.ClientID %>');
        txtBalanceSheet = document.getElementById('<%=txtBalanceSheet.ClientID %>');
        txtDownPayment = document.getElementById('<%=txtDownPayment.ClientID %>');
        txtSusEarn = document.getElementById('<%=txtSusEarn.ClientID %>');
        txtSusRev = document.getElementById('<%=txtSusRev.ClientID %>');
        txtBalanceMult = document.getElementById('<%=txtBalanceMult.ClientID %>');
        txtGoodwillMult = document.getElementById('<%=txtGoodwillMult.ClientID %>');
        txtSellingPriceMult = document.getElementById('<%=txtSellingPriceMult.ClientID %>');
        txtGoodWill = document.getElementById('<%=txtGoodWill.ClientID %>');
        txtBalance = document.getElementById('<%=txtBalance.ClientID %>');
        chkTakeBack = document.getElementById('<%=chkTakeBack.ClientID %>');
        txtVendorTakeBackPercentage = document.getElementById('<%=txtVendorTakeBackPercentage.ClientID %>');
        txtSusEarnPercentage = document.getElementById('<%=txtSusEarnPercentage.ClientID %>');
        txtBalanceMultPercentage = document.getElementById('<%=txtBalanceMultPercentage.ClientID %>');
        txtGoodwillMultPercentage = document.getElementById('<%=txtGoodwillMultPercentage.ClientID %>');
        txtSellingPriceMultPercentage = document.getElementById('<%=txtSellingPriceMultPercentage.ClientID %>');

        var regexp = /[$,]/g;

        // Empty value checks
        SellingPrice = (SellingPrice == "" ? "$0" : SellingPrice);
        BalanceSheet = (BalanceSheet == "" ? "$0" : BalanceSheet);
        DownPayment = (DownPayment == "" ? "$0" : DownPayment);
        susearn = (susearn == "" ? "$0" : susearn);
        susrev = (susrev == "" ? "$0" : susrev);
        balmult = (balmult == "" ? "$0" : balmult);
        goodmult = (goodmult == "" ? "$0" : goodmult);
        sellmult = (sellmult == "" ? "$0" : sellmult);

        // Replace $ with String.Empty
        SellingPrice = txtSellingPrice.value.replace(regexp, "");
        BalanceSheet = txtBalanceSheet.value.replace(regexp, "");
        DownPayment = txtDownPayment.value.replace(regexp, "");
        susearn = txtSusEarn.value.replace(regexp, "");
        susrev = txtSusRev.value.replace(regexp, "");
        balmult = txtBalanceMult.value.replace(regexp, "");
        goodmult = txtGoodwillMult.value.replace(regexp, "");
        sellmult = txtSellingPriceMult.value.replace(regexp, "");

        // Set the new values
        txtGoodWill.value = "$" + (SellingPrice - BalanceSheet);
        txtBalance.value = "$" + (SellingPrice - DownPayment);
        txtSellingPriceMult.value = "$" + SellingPrice;
        txtGoodwillMult.value = "$" + (SellingPrice - BalanceSheet);
        txtBalanceMult.value = "$" + BalanceSheet;

        if (chkTakeBack.checked == 1) {
            txtVendorTakeBackPercentage.value = Math.round((SellingPrice - DownPayment) / SellingPrice * 100);
        } else {
            txtVendorTakeBackPercentage.value = "0";
        }

        if (!(susearn == "") && !(susearn == "0") && !(susearn == "$0")) {
            txtSusEarnPercentage.value = Math.round(susearn / susrev * 100);
            txtBalanceMultPercentage.value = Math.round(balmult / susearn);
            txtGoodwillMultPercentage.value = Math.round(goodmult / susearn);
            txtSellingPriceMultPercentage.value = Math.round(sellmult / susearn);
        } else {
            txtSusEarnPercentage.value = "0";
            txtBalanceMultPercentage.value = "0";
            txtGoodwillMultPercentage.value = "0";
            txtSellingPriceMultPercentage.value = "0";
        }
    }

All of these calculate properly and retain their values across postbacks:

    txtGoodWill.value = "$" + (SellingPrice - BalanceSheet);
    txtBalance.value = "$" + (SellingPrice - DownPayment);
    txtSellingPriceMult.value = "$" + SellingPrice;
    txtGoodwillMult.value = "$" + (SellingPrice - BalanceSheet);
    txtBalanceMult.value = "$" + BalanceSheet;

These ones, however, do not retain their values across postbacks:

    if (chkTakeBack.checked == 1) {
        txtVendorTakeBackPercentage.value = Math.round((SellingPrice - DownPayment) / SellingPrice * 100);
    } else {
        txtVendorTakeBackPercentage.value = "0";
    }

    if (!(susearn == "") && !(susearn == "0") && !(susearn == "$0")) {
        txtSusEarnPercentage.value = Math.round(susearn / susrev * 100);
        txtBalanceMultPercentage.value = Math.round(balmult / susearn);
        txtGoodwillMultPercentage.value = Math.round(goodmult / susearn);
        txtSellingPriceMultPercentage.value = Math.round(sellmult / susearn);
    } else {
        txtSusEarnPercentage.value = "0";
        txtBalanceMultPercentage.value = "0";
        txtGoodwillMultPercentage.value = "0";
        txtSellingPriceMultPercentage.value = "0";
    }

txtVendorTakeBackPercentage always comes back BLANK, and the other three always come back as 0. I'm firing these functions from the onkeyup event of the form fields:

    If Not Page.IsPostBack Then
        txtSellingPrice.Attributes.Add("onkeyup", "CalculateFields()")
        txtBalanceSheet.Attributes.Add("onkeyup", "CalculateFields()")
        txtDownPayment.Attributes.Add("onkeyup", "CalculateFields()")
        txtSusRev.Attributes.Add("onkeyup", "CalculateFields()")
        txtSusEarn.Attributes.Add("onkeyup", "CalculateFields()")
    End If

Any thoughts/help/direction would be greatly appreciated.

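One direction worth checking (an editorial sketch, not a confirmed diagnosis for this page): values that JavaScript writes into ordinary enabled textboxes are posted back and kept, but browsers do not submit disabled inputs at all, and ASP.NET repopulates ReadOnly textboxes from ViewState rather than from the posted form. If the four computed boxes are ReadOnly or disabled (the question does not say), that would match the blank/0 symptom. A simple workaround is to recompute the derived fields each time the step is rendered, since the user-entered fields they depend on are round-tripped correctly:

    // Sketch only: assumes this script block lives in the same .aspx page as
    // CalculateFields(), so the <%= %> expressions resolve, and that recomputing
    // the derived boxes after every load is acceptable.
    window.addEventListener('load', function () {
        // Only recalculate when this wizard step's inputs are actually on the page.
        var selling = document.getElementById('<%=txtSellingPrice.ClientID %>');
        if (selling) {
            CalculateFields();
        }
    });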
    Read the article

  • Js/Jquery bind Datatable or Datalist

    - by Robin Rieger
    Scenario: I have an input box for post codes. When you put in a postcode, it calls a web service and gets all the suburb names back, then binds them to a dropdown list. This works when I return a List<string> from the web service with just the names. No problem. The problem arises when I go to save the values to the db: I cannot store the suburb name in the suburb column; I have to save the id of that suburb for the foreign key constraint. So I changed the List<> thing slightly and cheated by returning the following:

    <ArrayOfString>
      <string>8213</string>
      <string>BROOKFIELD</string>
      <string>8214</string>
      <string>CHAPEL HILL</string>
      <string>8215</string>
      <string>FIG TREE POCKET</string>
      etc....

The reason for this is so I can have the value of each dropdown item as the id, and hence save that in the code-behind on save. To do this, I did the following:

    $(result).each(function (index, value) {
        var suburbname;
        var pattern = new RegExp('[0-9*]');
        var m = pattern.exec(value);
        if (m == null) {
            suburbname = value
            var o = new Option(suburbname, suburbid);
            // jquerify the DOM object 'o' so we can use the html method
            $(o).html(suburbname);
            $(document.getElementById('<%= suburb.ClientID %>')).append($("<option></option>")
                .attr("value", suburbid)
                .text(suburbname));
        } else {
            suburbid = value
        }
    });

That works as well... the dropdown shows only the names, and the value of each option is the id number. The problem? I am making assumptions I don't like, i.e. that a suburb name never contains a number, and that the web service will always return the id before the name, so the id is set before the option is added to the dropdown. It just feels wrong.... :S (thoughts?) So, if I change the web service to return a DataTable, it outputs the following (which is what the whole query returns):

    <Suburbs diffgr:id="Suburbs1" msdata:rowOrder="0">
      <SuburbID>8213</SuburbID>
      <SuburbName>BROOKFIELD</SuburbName>
      <StateID>4</StateID>
      <StateCode>07</StateCode>
      <CountryCode>61</CountryCode>
      <TimeZones>10.00</TimeZones>
      <Postcode>4069</Postcode>
    </Suburbs>

Is there a way of achieving the same as above in js/jquery, so that when the user inputs the postcode, the web service returns the DataTable and it is bound to the select, with the SuburbID going into the value and the name going into the text? Any other ways I could return the data from the web service to solve this? Note: essentially I need the option to look like this: <option value="8213">BROOKFIELD</option>. I also thought I could make a second call to get just the id when binding the text... but I kind of want to make only one call.... Thanks in advance, Cheers Robin (.net, C#, js, jquery, web service).

Solution, with guidance from Billy: adding to the select using the other method did not work, but my original way did once I had the correct variables, so I just used that.... The service returns:

    <ArrayOfMyClass>
      <MyClass>
        <ID>8213</ID>
        <Name>BROOKFIELD</Name>
      </MyClass>
      ... etc

The js is (note: it fires on change of the postcode input box; it calls the web service and, on success, runs the following):

    function OnCompleted(result) {
        var _suburbs = result;
        var i = 0;
        $(_suburbs).each(function () {
            var SuburbName = _suburbs[i].Name;
            var SuburbID = _suburbs[i].ID;
            $(document.getElementById('<%= suburb.ClientID %>')).append($("<option></option>")
                .attr("value", SuburbID)
                .text(SuburbName));
            i = i + 1;
        });
    }
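For the record, the same binding can be written with the arguments that $.each already passes to its callback, which avoids the manual counter. A sketch, assuming the service returns an array of objects shaped like { ID: 8213, Name: "BROOKFIELD" }, as in the ArrayOfMyClass result above:

    function OnCompleted(result) {
        var $suburb = $('#<%= suburb.ClientID %>');
        $suburb.empty();                        // clear options from the previous postcode
        $.each(result, function (index, item) {
            $('<option></option>')
                .val(item.ID)                   // becomes <option value="8213">
                .text(item.Name)                // becomes BROOKFIELD
                .appendTo($suburb);
        });
    }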

    Read the article

  • Python - Converting CSV to Objects - Code Design

    - by victorhooi
    Hi, I have a small script we're using to read in a CSV file containing employees, and perform some basic manipulations on that data. We read in the data (import_gd_dump) and create an Employees object containing a list of Employee objects (maybe I should think of a better naming convention...lol). We then call clean_all_phone_numbers() on Employees, which calls clean_phone_number() on each Employee, as well as lookup_all_supervisors() on Employees.

    import csv
    import re
    import sys

    #class CSVLoader:
    #    """Virtual class to assist with loading in CSV files."""
    #    def import_gd_dump(self, input_file='Gp Directory 20100331 original.csv'):
    #        gd_extract = csv.DictReader(open(input_file), dialect='excel')
    #        employees = []
    #        for row in gd_extract:
    #            curr_employee = Employee(row)
    #            employees.append(curr_employee)
    #        return employees
    #        #self.employees = {row['dbdirid']:row for row in gd_extract}

    # Previously, this was inside a (virtual) class called "CSVLoader". However, according to
    # http://tomayko.com/writings/the-static-method-thing - the idiomatic way of doing this in
    # Python is not with a class function but with a module-level function.
    def import_gd_dump(input_file='Gp Directory 20100331 original.csv'):
        """Return a list ('employee') of dict objects, taken from a Group Directory CSV file."""
        gd_extract = csv.DictReader(open(input_file), dialect='excel')
        employees = []
        for row in gd_extract:
            employees.append(row)
        return employees

    def write_gd_formatted(employees_dict, output_file="gd_formatted.csv"):
        """Read in an Employees() object, and write out each Employee() inside this to a CSV file"""
        gd_output_fieldnames = ('hrid', 'mail', 'givenName', 'sn', 'dbcostcenter', 'dbdirid',
                                'hrreportsto', 'PHFull', 'PHFull_message', 'SupervisorEmail',
                                'SupervisorFirstName', 'SupervisorSurname')
        try:
            gd_formatted = csv.DictWriter(open(output_file, 'w', newline=''),
                                          fieldnames=gd_output_fieldnames,
                                          extrasaction='ignore', dialect='excel')
        except IOError:
            print('Unable to open file, IO error (Is it locked?)')
            sys.exit(1)
        headers = {n:n for n in gd_output_fieldnames}
        gd_formatted.writerow(headers)
        for employee in employees_dict.employee_list:
            # We're using the employee object's inbuilt __dict__ attribute - hmm, is this good practice?
            gd_formatted.writerow(employee.__dict__)

    class Employee:
        """An Employee in the system, with employee attributes (name, email, cost-centre etc.)"""
        def __init__(self, employee_attributes):
            """We use the Employee constructor to convert a dictionary into instance attributes."""
            for k, v in employee_attributes.items():
                setattr(self, k, v)

        def clean_phone_number(self):
            """Perform some rudimentary checks and corrections, to make sure numbers are in the
            right format. Numbers should be in the form 0XYYYYYYYY, where X is the area code,
            and Y is the local number."""
            if self.telephoneNumber is None or self.telephoneNumber == '':
                return '', 'Missing phone number.'
            else:
                standard_format = re.compile(r'^\+(?P<intl_prefix>\d{2})\((?P<area_code>\d)\)(?P<local_first_half>\d{4})-(?P<local_second_half>\d{4})')
                extra_zero = re.compile(r'^\+(?P<intl_prefix>\d{2})\(0(?P<area_code>\d)\)(?P<local_first_half>\d{4})-(?P<local_second_half>\d{4})')
                missing_hyphen = re.compile(r'^\+(?P<intl_prefix>\d{2})\(0(?P<area_code>\d)\)(?P<local_first_half>\d{4})(?P<local_second_half>\d{4})')
                if standard_format.search(self.telephoneNumber):
                    result = standard_format.search(self.telephoneNumber)
                    return '0' + result.group('area_code') + result.group('local_first_half') + result.group('local_second_half'), ''
                elif extra_zero.search(self.telephoneNumber):
                    result = extra_zero.search(self.telephoneNumber)
                    return '0' + result.group('area_code') + result.group('local_first_half') + result.group('local_second_half'), 'Extra zero in area code - ask user to remediate. '
                elif missing_hyphen.search(self.telephoneNumber):
                    result = missing_hyphen.search(self.telephoneNumber)
                    return '0' + result.group('area_code') + result.group('local_first_half') + result.group('local_second_half'), 'Missing hyphen in local component - ask user to remediate. '
                else:
                    return '', "Number didn't match recognised format. Original text is: " + self.telephoneNumber

    class Employees:
        def __init__(self, import_list):
            self.employee_list = []
            for employee in import_list:
                self.employee_list.append(Employee(employee))

        def clean_all_phone_numbers(self):
            for employee in self.employee_list:
                # Should we just set this directly in Employee.clean_phone_number() instead?
                employee.PHFull, employee.PHFull_message = employee.clean_phone_number()

        # Hmm, the search is O(n^2) - there's probably a better way of doing this search?
        def lookup_all_supervisors(self):
            for employee in self.employee_list:
                if employee.hrreportsto is not None and employee.hrreportsto != '':
                    for supervisor in self.employee_list:
                        if supervisor.hrid == employee.hrreportsto:
                            (employee.SupervisorEmail, employee.SupervisorFirstName, employee.SupervisorSurname) = supervisor.mail, supervisor.givenName, supervisor.sn
                            break
                    else:
                        (employee.SupervisorEmail, employee.SupervisorFirstName, employee.SupervisorSurname) = ('Supervisor not found.', 'Supervisor not found.', 'Supervisor not found.')
                else:
                    (employee.SupervisorEmail, employee.SupervisorFirstName, employee.SupervisorSurname) = ('Supervisor not set.', 'Supervisor not set.', 'Supervisor not set.')

        # Is there a more pythonic way of doing this?
        def print_employees(self):
            for employee in self.employee_list:
                print(employee.__dict__)

    if __name__ == '__main__':
        db_employees = Employees(import_gd_dump())
        db_employees.clean_all_phone_numbers()
        db_employees.lookup_all_supervisors()
        #db_employees.print_employees()
        write_gd_formatted(db_employees)

Firstly, my preamble question is: can you see anything inherently wrong with the above, from either a class-design or Python point of view? Is the logic/design sound? Anyhow, to the specifics: The Employees object has a method, clean_all_phone_numbers(), which calls clean_phone_number() on each Employee object inside it. Is this bad design? If so, why? Also, is the way I'm calling lookup_all_supervisors() bad? Originally, I wrapped the clean_phone_number() and lookup_supervisor() methods in a single function, with a single for-loop inside it. clean_phone_number() is O(n), I believe, and lookup_supervisor() is O(n^2) - is it OK splitting them into two loops like this?
In clean_all_phone_numbers(), I'm looping over the Employee objects and setting their values using return/assignment - should I be setting this inside clean_phone_number() itself? There are also a few things I've sort of hacked out, and I'm not sure if they're bad practice - e.g. print_employees() and write_gd_formatted() both use __dict__, and the constructor for Employee uses setattr() to convert a dictionary into instance attributes. I'd value any thoughts at all. If you think the questions are too broad, let me know and I can repost them as several split-up questions (I just didn't want to pollute the boards with multiple similar questions, and the three questions are more or less fairly tightly related). Cheers, Victor
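On the O(n^2) concern in lookup_all_supervisors(): one common approach is to build a dictionary keyed on hrid once, which turns each supervisor lookup into an average O(1) dictionary hit and the whole pass into O(n). A sketch reusing the question's attribute names, and assuming hrid values are unique:

    # Sketch of an alternative lookup_all_supervisors() for the Employees class above.
    def lookup_all_supervisors(self):
        # Build the index once: hrid -> Employee.
        by_hrid = {employee.hrid: employee for employee in self.employee_list}
        for employee in self.employee_list:
            if not employee.hrreportsto:
                values = ('Supervisor not set.',) * 3
            else:
                supervisor = by_hrid.get(employee.hrreportsto)
                if supervisor is None:
                    values = ('Supervisor not found.',) * 3
                else:
                    values = (supervisor.mail, supervisor.givenName, supervisor.sn)
            (employee.SupervisorEmail,
             employee.SupervisorFirstName,
             employee.SupervisorSurname) = values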

    Read the article

  • How should I change my Graph structure (very slow insertion)?

    - by Nazgulled
    Hi, this program I'm writing is about a social network, which means there are users and their profiles. The profiles structure is UserProfile. Now, there are various possible Graph implementations, and I don't think I'm using the best one. I have a Graph structure and, inside, a pointer to a linked list of type Vertex. Each Vertex element has a value, a pointer to the next Vertex and a pointer to a linked list of type Edge. Each Edge element has a value (so I can define weights and whatever is needed), a pointer to the next Edge and a pointer to the Vertex owner.

I have 2 sample files with data to process (in CSV style) and insert into the Graph. The first one is the user data (one user per line); the second one is the user relations (for the graph). The first file is inserted into the graph quickly because I always insert at the head and there are only about ~18000 users. The second file takes ages, even though I still insert the edges at the head: it has about ~520000 lines of user relations and takes 13-15 minutes to insert into the Graph. I made a quick test, and reading the data is quick, practically instantaneous. The problem is in the insertion. It exists because my Graph is implemented with linked lists for the vertices: every time I need to insert a relation, I need to look up 2 vertices so I can link them together. Doing this for ~520000 relations takes a while. How should I solve this?

Solution 1) Some people recommended implementing the Graph (the vertices part) as an array instead of a linked list. This way I have direct access to every vertex, and the insertion time will probably drop considerably. But I don't like the idea of allocating an array with [18000] elements. How practical is this? My sample data has ~18000 users, but what if I need much less or much more? The linked list has that flexibility: I can have whatever size I want as long as there's memory for it. The array doesn't, so how am I going to handle that situation? What are your suggestions? Using linked lists is good for space complexity but bad for time complexity, and using an array is good for time complexity but bad for space complexity. Any thoughts about this solution?

Solution 2) This project also demands some sort of data structure that allows quick lookup based on a name index and an ID index. For this I decided to use hash tables. My tables are implemented with separate chaining as collision resolution, and when a load factor of 0.70 is reached I recreate the table, basing the next table size on http://planetmath.org/encyclopedia/GoodHashTablePrimes.html. Currently, both hash tables hold a pointer to the UserProfile instead of duplicating the profile itself (duplication would be silly: changing data would require 3 changes), and the same UserProfile pointer is also saved as the value in each Graph Vertex. So I have 3 data structures, one Graph and two hash tables, and every single one of them points to the same UserProfile. The Graph serves the purpose of finding shortest paths and the like, while the hash tables serve as quick indexes by name and ID. What I'm thinking of doing, to solve my Graph problem, is to have the hash table values point to the corresponding Vertex instead of the UserProfile. It's still a pointer, no more and no less space is used; I just change what I point to.
Like this, I can easily and quickly look up each Vertex I need and link them together, which should insert the ~520000 relations pretty quickly. I thought of this solution because I already have the hash tables and I need to keep them, so why not take advantage of them to index the Graph vertices instead of the user profiles? It's basically the same thing: I can still reach the UserProfile quickly, by going to the Vertex and then to the UserProfile. But do you see any cons to this second solution compared with the first one? Or only pros that outweigh the pros and cons of the first solution?

Other Solution) If you have any other solution, I'm all ears, but please explain its pros and cons over the previous 2. I really don't have much time to waste on this right now; I need to move on with this project, so if I'm going to make such a change I need to understand exactly what to change and whether that's really the way to go. Hopefully no one fell asleep reading this and closed the browser; sorry for the big testament. But I really need to decide what to do about this, and I really need to make a change. P.S.: When answering my proposed solutions, please enumerate them as I did, so I know exactly which one you are talking about and don't confuse myself more than I already am.
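A sketch of Solution 2's insertion path in C (the type layout and the names hash_lookup(), insert_relation(), push_edge(), etc. are hypothetical illustrations, not the project's real API). The point is only that, once the ID hash table stores Vertex pointers, each of the ~520000 relations costs two expected O(1) lookups plus two O(1) head insertions, instead of two scans of an ~18000-node vertex list:

    #include <stdlib.h>

    struct HashTable;              /* opaque: the existing separate-chaining table       */

    typedef struct Edge {
        double value;              /* weight, or whatever is needed                      */
        struct Vertex *owner;      /* the vertex this edge connects to                   */
        struct Edge *next;
    } Edge;

    typedef struct Vertex {
        void *profile;             /* the shared UserProfile pointer                     */
        Edge *edges;               /* head of this vertex's edge list                    */
        struct Vertex *next;
    } Vertex;

    /* Assumed to exist: average O(1) lookup in the ID hash table, now storing Vertex*. */
    Vertex *hash_lookup(struct HashTable *ids, const char *user_id);

    static void push_edge(Vertex *from, Vertex *to, double weight)
    {
        Edge *e = malloc(sizeof *e);   /* head insertion keeps this O(1) */
        e->value = weight;
        e->owner = to;
        e->next  = from->edges;
        from->edges = e;
    }

    /* One line of the relations CSV: the ids of the two related users. */
    void insert_relation(struct HashTable *ids, const char *id_a, const char *id_b)
    {
        Vertex *a = hash_lookup(ids, id_a);
        Vertex *b = hash_lookup(ids, id_b);
        if (a == NULL || b == NULL)
            return;                    /* unknown user: skip (or log) the relation */
        push_edge(a, b, 1.0);
        push_edge(b, a, 1.0);          /* relation is mutual, so add both directions */
    }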

    Read the article
