Search Results

Search found 12537 results on 502 pages for 'solid state'.


  • More Animation - Self Dismissing Dialogs

    - by Duncan Mills
    In my earlier articles on animation, I discussed various slide, grow and flip transitions for items and containers. In this article I want to discuss a fade animation, and specifically the use of fades and auto-dismissal for informational dialogs. If you use a Mac, you may be familiar with Growl as a notification system, and the nice way that purely informational messages just fade out after a few seconds. So in this blog entry I wanted to discuss how we could make an ADF popup behave in the same way. This can be an effective way of communicating information to the user without "getting in the way" with modal alerts. This has of course been done before, but everything I've seen previously requires something like jQuery to be in the mix when we don't really need it to be. The solution I've put together is nice and generic and will work with either <af:panelWindow> or <af:dialog> as the child of the popup. In terms of usage it's pretty simple: we just need to ensure that the popup itself has clientComponent set to true and includes the animation JavaScript (animateFadingPopup) on a popupOpened event:

    <af:popup id="pop1" clientComponent="true">
      <af:panelWindow title="A Fading Message...">
        ...
      </af:panelWindow>
      <af:clientListener method="animateFadingPopup" type="popupOpened"/>
    </af:popup>

    The popup can be invoked in the normal way using showPopupBehavior or JavaScript; no special code is required there. As a further twist, you can include an additional clientAttribute called preFadeDelay to define a delay before the fade itself starts (the default is 5 seconds). To set the delay to just 2 seconds, for example:

    <af:popup ...>
      ...
      <af:clientAttribute name="preFadeDelay" value="2"/>
      <af:clientListener method="animateFadingPopup" type="popupOpened"/>
    </af:popup>

    The Animation Styles

    As before, we have a couple of CSS styles which define the animation. I've put these into the skin in my case, and, as in the other articles, I've only defined the transitions for WebKit browsers (Chrome, Safari) at the moment. In this case, the fade is timed at 5 seconds in duration.

    .popupFadeReset {
      opacity: 1;
    }

    .popupFadeAnimate {
      opacity: 0;
      -webkit-transition: opacity 5s ease-in-out;
    }

    As you can see here, we are achieving the fade by simply setting the CSS opacity property.

    The JavaScript

    The final part of the puzzle is, of course, the JavaScript. There are four functions, all generic (apart from the style names which, if you've changed them above, you'll need to reflect here):

    • The initial function invoked from the popupOpened event, animateFadingPopup, which starts a timer and provides the initial delay before we start to fade the popup.
    • The function that applies the fade animation to the popup - initiatePopupFade.
    • The callback function - closeFadedPopup - used to reset the style class and correctly hide the popup so that it can be invoked again and again.
    • A utility function - findFadeContainer - which is responsible for locating the correct child component of the popup to actually apply the style to.

    Function - animateFadingPopup

    This function, as stated, is the one hooked up to the popupOpened event via a clientListener. Because of when the code is called, it does not actually matter how you launch the popup, or whether the popup is re-used from multiple places. All usages will get the fade behavior.
    /**
     * Client listener which will kick off the animation to fade the dialog and register
     * a callback to correctly reset the popup once the animation is complete
     * @param event
     */
    function animateFadingPopup(event) {
      var fadePopup = event.getSource();
      var fadeCandidate = false;
      //Ensure that the popup is initially opaque.
      //This handles the situation where the user has dismissed
      //the popup whilst it was in the process of fading
      var fadeContainer = findFadeContainer(fadePopup);
      if (fadeContainer != null) {
        fadeCandidate = true;
        fadeContainer.setStyleClass("popupFadeReset");
      }
      //Only continue if we can actually fade this popup
      if (fadeCandidate) {
        //See if a delay has been specified
        var waitTimeSeconds = event.getSource().getProperty('preFadeDelay');
        //Default to 5 seconds if not supplied
        if (waitTimeSeconds == undefined) {
          waitTimeSeconds = 5;
        }
        //Now call the fade after the specified time
        var fadeFunction = function () {
          initiatePopupFade(fadePopup);
        };
        var fadeDelayTimer = setTimeout(fadeFunction, (waitTimeSeconds * 1000));
      }
    }

    The thing to note about this function is the initial check that we have to do to ensure that the container is currently visible, resetting its style if necessary. This handles the situation where the popup has begun the fade, yet the user has explicitly dismissed it before the fade is complete, thereby preventing the callback function (described later) from executing. In this particular situation the next display of the dialog will (apparently) be missing its normal animation, but at least it becomes visible to the user (and most users will probably not notice the difference in any case). You'll notice that the style that we apply to reset the opacity - popupFadeReset - is not applied to the popup component itself but rather to the dialog or panelWindow within it. More about that in the description of the next function, findFadeContainer(). Finally, assuming that we have a suitable candidate for fading, a JavaScript timer is started using the specified preFadeDelay wait time (or 5 seconds if that was not supplied). When this timer expires, the main animation style class is applied by the initiatePopupFade() function.

    Function - findFadeContainer

    As a component, the <af:popup> does not support a styleClass attribute, so we can't apply the animation style directly. Instead we have to look for the container within the popup which defines the window object that can have a style attached. This is achieved by the following code:

    /**
     * The thing we actually fade will be the only child
     * of the popup, assuming that this is a dialog or window
     * @param popup
     * @return the component, or null if this is not valid for fading
     */
    function findFadeContainer(popup) {
      var children = popup.getDescendantComponents();
      var fadeContainer = children[0];
      if (fadeContainer != undefined) {
        var compType = fadeContainer.getComponentType();
        if (compType == "oracle.adf.RichPanelWindow" ||
            compType == "oracle.adf.RichDialog") {
          return fadeContainer;
        }
      }
      return null;
    }

    So what we do here is grab the first child component of the popup and check its type. Here I decided to limit the fade behaviour to only <af:dialog> and <af:panelWindow>. This was deliberate. If we applied the fade to, say, an <af:noteWindow>, you would see the text inside the balloon fade, but the balloon itself would hang around until the fade animation was over and then hide.
    It would of course be possible to make the code smarter and walk up the DOM tree to find the correct <div> to apply the style to in order to hide the whole balloon. However, that would mean this JavaScript would need knowledge of the generated DOM structure, something which may change from release to release, and certainly something to avoid. So, all in all, I think this is an OK restriction, and frankly it's windows and dialogs that I wanted to fade anyway, not balloons and menus. You could of course extend this technique and handle the other types should you really want to. One thing to note here is the selection of the first (children[0]) child of the popup. It does not matter if there are non-visible children such as a clientListener before the <af:dialog> or <af:panelWindow> within the popup; they are not included in this array, so picking the first element in this way seems to be fine, no matter what the underlying ordering is within the JSF source. If you wanted a super-robust version of the code you might want to iterate through the children array of the popup to check for the right type - again, it's up to you.

    Function - initiatePopupFade

    On to the actual fading. This is actually very simple and, at its heart, just the application of the popupFadeAnimate style to the correct component, followed by the registration of a callback to execute once the fade is done.

    /**
     * Function which will kick off the animation to fade the dialog and register
     * a callback to correctly reset the popup once the animation is complete
     * @param popup the popup we are animating
     */
    function initiatePopupFade(popup) {
      //Only continue if the popup has not already been dismissed
      if (popup.isPopupVisible()) {
        //The skin styles that define the animation
        var fadeoutAnimationStyle = "popupFadeAnimate";
        var fadeAnimationResetStyle = "popupFadeReset";
        var fadeContainer = findFadeContainer(popup);
        if (fadeContainer != null) {
          var fadeContainerReal = AdfAgent.AGENT.getElementById(fadeContainer.getClientId());
          //Define the callback; this will correctly reset the popup once it's disappeared
          var fadeCallbackFunction = function (event) {
            closeFadedPopup(popup, fadeContainer, fadeAnimationResetStyle);
            event.target.removeEventListener("webkitTransitionEnd", fadeCallbackFunction);
          };
          //Initiate the fade
          fadeContainer.setStyleClass(fadeoutAnimationStyle);
          //Register the callback to execute once the fade is done
          fadeContainerReal.addEventListener("webkitTransitionEnd", fadeCallbackFunction, false);
        }
      }
    }

    I've added some extra checks here, though. First of all, we only start the whole process if the popup is still visible. It may be that the user has closed the popup before the delay timer has finished, so there is no need to start animating in that case. Again we use the findFadeContainer() function to locate the correct component to apply the style to, and additionally we grab the DOM id that represents that container. This physical ID is required for the registration of the callback function. The closeFadedPopup() call is then registered on the callback so as to correctly close the now transparent (but still present) popup.
    Function - closeFadedPopup

    The final function just cleans things up:

    /**
     * Callback function to correctly cancel and reset the style in the popup
     * @param popup id of the popup so we can close it properly
     * @param container the window / dialog within the popup to actually style
     * @param resetStyle the style that sets the opacity back to solid
     */
    function closeFadedPopup(popup, container, resetStyle) {
      container.setStyleClass(resetStyle);
      popup.cancel();
    }

    First of all we reset the style to make the popup contents opaque again, and then we cancel the popup. This will ensure that any of your user code that is waiting for a popup cancelled event will actually get the event; additionally, if you have used this as a modal window / dialog, it will ensure that the glasspane is dismissed and you can interact with the UI again.

    What's Next?

    There are several ways in which this technique could be used. I've been working on a popup here, but you could apply the same approach to in-line messages. As this code (in the popup case) is generic, it would make a pretty nice declarative component, and maybe, if I get time, I'll look at constructing a formal Growl component using a combination of this technique and active data push. Also, I'm sure the above code can be improved a little too - specifically things like registering a popup cancelled listener to handle the style reset, so that we don't lose the subtle animation that takes place when the popup is opened in the situation where the user has closed the in-fade dialog.
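    As a rough illustration of that last idea, a popup cancelled listener could re-apply the reset style itself. The sketch below is untested: the handler name is made up, and it assumes the popupCanceled client event type is available; it reuses the findFadeContainer() helper and the style names from above.

    function handlePopupCanceled(event) {
      //Sketch only: restore full opacity when the user explicitly
      //dismisses the popup mid-fade, so the next launch keeps its
      //normal opening animation
      var fadeContainer = findFadeContainer(event.getSource());
      if (fadeContainer != null) {
        fadeContainer.setStyleClass("popupFadeReset");
      }
    }

    This would be hooked up in the same way as the fade listener, with an <af:clientListener method="handlePopupCanceled" type="popupCanceled"/> inside the popup.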

    Read the article

  • The Incremental Architect's Napkin - #2 - Balancing the forces

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/06/02/the-incremental-architectacutes-napkin---2---balancing-the-forces.aspx

    Categorizing requirements is the prerequisite for economic architectural decisions. Not all requirements are created equal. However, to truly understand and describe the requirement forces pulling on software development, I think further examination of the requirements aspects is warranted.

    Aspects of Functionality

    There are two sides to Functionality requirements. It's about what a software should do. I call that the Operations it implements. Operations are defined by expressions and control structures or calls to frameworks of some sort, i.e. (business) logic statements. Operations calculate, transform, aggregate, validate, send, receive, load, store etc. Operations are about behavior; they take input and produce output by considering state. I'm not using the term "function" here, because functions - or methods or sub-programs - are not necessary to implement Operations. Functions belong to a different sub-aspect of requirements (see below). Operations alone are not enough, though, to make a customer happy with regard to his/her Functionality requirements. Only correctly implemented Operations provide full value. This should make clear why testing is so important. And not just manual tests during development of some operational feature, but automated tests. Because only automated tests scale when the number of operations increases over time. Without automated tests there is no guarantee formerly correct operations are still correct after more got added. To retest all previous operations manually is infeasible. So whoever relies just on manual tests is not really balancing the two forces Operations and Correctness. With manual tests more weight is put on the side of the scale of Operations. That might be OK for a short period of time - but in the long run it will bite you. You need to plan for Correctness in the long run from the first day of your project on.

    Aspects of Quality

    As important as Functionality is, it's not the driver for software development. No software has ever been written to just implement some operation in code. We don't need computers just to do something. All computers can do with software we can do without them. Well, at least given enough time and resources. We could calculate the most complex formulas without computers. We could do auctions with millions of people without computers. The only reason we want computers to help us with this and a million other Operations is… We don't want to wait for the results very long. Or we want fewer errors. Or we want easier accessibility to complicated solutions. So the main reason for customers to buy/order software is some Quality. They want some Functionality with a higher Quality (e.g. performance, scalability, usability, security…) than without the software. But Qualities come in at least two flavors: Most important are Primary Qualities. That's the Qualities software truly is written for. Take an online auction website for example. Its Primary Qualities are performance, scalability, and usability, I'd say. Auctions should come within reach of millions of people; setting up an auction should be very easy; finding a suitable auction and bidding on it should be as fast as possible. Only if those Qualities have been implemented does security become relevant. A secure auction website is important - but not as important as a fast auction website.
    Nobody would want to use the most secure auction website if it was unbearably slow. But there would be people willing to use the fastest auction website even if it was lacking security. That's why security - with regard to online auction software - is not a Primary Quality, but just a Secondary Quality. It's a supporting quality, so to speak. It does not deliver value by itself. With password manager software this might be different. There, security might be a Primary Quality. Please get me right: I don't want to denigrate any Quality. There's a long list of non-functional requirements at Wikipedia. They are all created equal - but that does not mean they are equally important for all software projects. When confronted with Quality requirements, check with the customer which are primary and which are secondary. That will help to make good economic decisions when in a crunch. Resources are always limited - but requirements are a bottomless ocean.

    Aspects of Security of Investment

    Functionality and Quality are traditionally the requirement aspects cared for most - by customers and developers alike. Even today, when pressure rises in a project, tunnel vision will focus on them. Any measures to create and hold up Security of Investment (SoI) will be out of the window pretty quickly. Resistance to customers and/or management is futile. As long as SoI is not placed on equal footing with Functionality and Quality, it's bound to suffer under pressure. Looking closer at what SoI means will help us become more conscious about it and make customers and management aware of the risks of neglecting it. SoI to me has two facets: Production Efficiency (PE) is about speed of delivering value. Customers like short response times. Short response times mean less money spent. So whatever makes software development faster supports this requirement. This must not lead to duct-tape programming and banging out features by the dozen, though. Because customers don't just want Operations and Quality, but also Correctness. So if Correctness gets compromised by focusing too much on Production Efficiency, it will fire back. Customers want PE not just today, but over the whole course of a software's lifecycle. That means it's not just about coding speed, but equally about code quality. If code quality leads to rework, the PE is on an unsatisfactory level. Also if code production leads to waste it's unsatisfactory. Because the effort which went into waste could have been used to produce value. Rework and waste cost money. Rework and waste abound, however, as long as PE is not addressed explicitly with management and customers. Thanks to the Agile and Lean movements, that's increasingly the case. Nevertheless more could and should be done in many teams. Each and every developer should keep in mind that Production Efficiency is as important to the customer as Functionality and Quality - whether he/she states it or not. Making software development more efficient is important - but sooner or later even agile projects are going to hit a glass ceiling. At least as long as they neglect the second SoI facet: Evolvability. Delivering correct, high-quality functionality in short cycles today is good. But not just any software structure will allow this to happen for an indefinite amount of time.[1] The less explicitly software was designed, the sooner it's going to get stuck.
    Big ball of mud, monolith, brownfield, legacy code, technical debt… there are many names for software structures that have lost the ability to evolve, to be easily changed to accommodate new requirements. An evolvable code base is the opposite of a brownfield. It's code which can be easily understood (by developers with sufficient domain expertise) and then easily changed to accommodate new requirements. Ideally the cost of adding feature X to an evolvable code base is independent of when it is requested - or at least the cost should only increase linearly, not exponentially.[2] Clean Code, Agile Architecture, and even traditional Software Engineering are concerned with Evolvability. However, it seems no systematic way of achieving it has been laid out yet. TDD + SOLID help - but still… When I look at the design ability reality in teams I see much room for improvement. As stated previously, SoI - or to be more precise: Evolvability - can hardly be measured. Plus the customer rarely states an explicit expectation with regard to it. That's why I think special care must be taken not to neglect it. Postponing it to some large refactorings should not be an option. Rather, Evolvability needs to be a core concern for every single developer day. This should not mean Evolvability is more important than any of the other requirement aspects. But neither is it less important. That's why more effort needs to be invested into it, to bring it on par with the other aspects, which usually are much more in focus.

    In closing

    As you see, requirements are of quite different kinds. To not take that into account will make it harder to understand the customer, and to make economic decisions. Those sub-aspects of requirements are forces pulling in different directions. To improve performance might have an impact on Evolvability. To increase Production Efficiency might have an impact on security etc. No requirement aspect should go unchecked when deciding how to allocate resources. Balancing should be explicit. And it should be possible to trace back each decision to a requirement. Why is there a null-check on parameters at the start of the method? Why are there 5000 LOC in this method? Why are there interfaces on those classes? Why is this functionality running on the threadpool? Why is this function defined on that class? Why is this class depending on three other classes? These and a thousand more questions are not to mean anything should be different in a code base. But it's important to know the reason behind all of these decisions. Because not knowing the reason possibly means waste and having decided suboptimally. And how do we ensure to balance all requirement aspects? That needs practices and transparency. Practices means doing things a certain way and not another, even though that might be possible. We're dealing with dangerous tools here. Like a knife is a dangerous tool. Harm can be done if we use our tools in just any way at the whim of the moment. Over the centuries rules and practices have been established for how to use knives. You don't put them in people's legs just because you're feeling like it. You hand over a knife with the handle towards the receiver. You might not even be allowed to cut round food like potatoes or eggs with it. The same should be the case for dangerous tools like object-orientation, remote communication, threads etc. We need practices to use them in a way so requirements are balanced almost automatically.
    In addition, to be able to work on software as a team we need transparency. We need means to share our thoughts, to work jointly on mental models. So far our tools are focused on working with code. Testing frameworks, build servers, DI containers, intellisense, refactoring support… That's all nice and well. I don't want to miss any of that. But I think it's not enough. We're missing mental tools, tools for making thinking and talking about software (independently of code) easier. You might think enough of such tools already exist, like all those UML diagram types or Flow Charts. But then, isn't it strange that hardly any team is using them to design software? Or is that just due to a lack of education? I don't think so. It's a matter of value/weight ratio: the current mental tools are too heavyweight compared to the value they deliver. So my conclusion is, we need lightweight tools to really be able to balance requirements. Software development is complex. We need guidance not to forget important aspects. That's like with flying an airplane. Pilots don't just jump in and take off for their destination. Yes, there are times when they are "flying by the seats of their pants", when they are just experts doing things intuitively. But most of the time they are going through honed practices called checklists. See "The Checklist Manifesto" for very enlightening details on this. Maybe then I should say it like this: We need more checklists for the complex business of software development.[3]

    [1] But that's what software development mostly is about: changing software over an unknown period of time. It needs to be corrected in order to finally provide promised operations. It needs to be enhanced to provide ever more operations and qualities. All this without knowing when it's going to stop. Probably never - until "maintainability" hits a wall when the technical debt is too large, the brownfield too deep. Software development is not a sprint, is not a marathon, not even an ultra marathon. Because to all this there is a foreseeable end. Software development is like continuously and forever running…

    [2] And sometimes I dare to think that costs could even decrease over time. Think of it: With each feature a software becomes richer in functionality. So with each additional feature the chance of there being already functionality helping its implementation increases. That should lead to lower costs for feature X if it's requested later rather than sooner. X requested later could stand on the shoulders of previous features. Alas, reality seems to be far from this despite 20+ years of admonishing developers to think in terms of reusability.[1]

    [3] Please don't get me wrong: I don't want to bog down the "art" of software development with heavyweight practices and heaps of rules to follow. The framework we need should be lightweight. It should not stand in the way of delivering value to the customer. Its purpose is even to make that easier by helping us to focus and decreasing waste and rework.

    Read the article

  • CodePlex Daily Summary for Wednesday, January 05, 2011

    Popular Releases

    • BloodSim: BloodSim - 1.3.2.0: - Simulation Log is now automatically disabled and hidden when running 10 or more iterations - Hit and Expertise are now entered by Rating, and include option for a Racial Expertise bonus - Added option for boss to use a periodic magic ability (Dragon Breath) - Added option for boss to periodically Enrage, gaining a Damage/Attack Speed buff

    • Temporary Data Storage Folder: Temporary Data Storage Folder 0.3 Stable: With lots of features and bug fixes we are releasing the stable 0.3 version. Just download it and check it out. Added features: 1. Rename TDS Folder with a name of your choice 2. Control what to do on program termination 3. Move TDS Folder to a new location

    • ASP.NET MVC CMS ( Using CommonLibrary.NET ): CommonLibrary.NET CMS 0.9.5 Alpha: CommonLibrary CMS: A simple yet powerful CMS system in ASP.NET MVC 2 using C# 4.0. ActiveRecord based components for Blogs, Widgets, Pages, Parts, Events, Feedback, BlogRolls, Links. Includes several widgets ( tag cloud, archives, recent, user cloud, links twitter, blog roll and more ). Built using the http://commonlibrarynet.codeplex.com framework. ( Uses TDD, DDD, Models/Entities, Code Generation ). Can run w/ In-Memory Repositories or Sql Server Database. See Documentation tab for Ins...

    • AllNewsManager.NET: AllNewsManager.NET 1.2.1: It is a minor update from version 1.2

    • Mini Memory Dump Diagnosis using Asp.Net: MiniDump HealthMonitor: Enable developers to generate mini memory dumps in case of unhandled exceptions or for specific exception scenarios without using any external tools, only using Asp.Net 2.0 and above. Many times in production and QA servers the issues require post-mortem or low-level system debugging efforts. This memory dump generator will help in those scenarios.

    • EnhSim: EnhSim 2.2.9 BETA: This release supports WoW patch 4.03a at level 85. To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Added in the Gobl...

    • xUnit.net - Unit Testing for .NET: xUnit.net 1.7 Beta: xUnit.net release 1.7 beta, build #1533. Important notes for Resharper users: Resharper support has been moved to the xUnit.net Contrib project. Important note for TestDriven.net users: If you are having issues running xUnit.net tests in TestDriven.net, especially on 64-bit Windows, we strongly recommend you upgrade to TD.NET version 3.0 or later. This release adds the following new features: Added support for ASP.NET MVC 3. Added Assert.Equal(double expected, double actual, int precision)...

    • Json.NET: Json.NET 4.0 Release 1: New feature - Added Windows Phone 7 project. New feature - Added dynamic support to LINQ to JSON. New feature - Added dynamic support to serializer. New feature - Added INotifyCollectionChanged to JContainer in .NET 4 build. New feature - Added ReadAsDateTimeOffset to JsonReader. New feature - Added ReadAsDecimal to JsonReader. New feature - Added covariance to IJEnumerable type parameter. New feature - Added XmlSerializer style Specified property support. New feature - Added ...

    • StyleCop for ReSharper: StyleCop for ReSharper 5.1.14977.000: Prerequisites: Visual Studio 2008 / Visual Studio 2010, ReSharper 5.1.1753.4, StyleCop 4.4.1.2 Preview. This release adds no new features; it has bug fixes around performance and unhandled errors reported on YouTrack.

    • DbDocument: DbDoc Initial Version: DbDoc initial version

    • ASP .NET MVC CMS (Content Management System): Atomic CMS 2.1.2: Atomic CMS 2.1.2 release notes. Atomic CMS installation guide

    • Kind Of Magic MSBuild Task: Beta 4: Update to keep up with latest bug fixes. To those who don't like Magic/NoMagic attributes, you may change these names in the KindOfMagic.targets file: change this line: <MagicTask Assembly="@(IntermediateAssembly)" References="@(ReferencePath)"/> to something like this: <MagicTask Assembly="@(IntermediateAssembly)" References="@(ReferencePath)" MagicAttribute="MyMagicAttribute" NoMagicAttribute="MyNoMagicAttribute"/>

    • N2 CMS: 2.1: N2 is a lightweight CMS framework for ASP.NET. It helps you build great web sites that anyone can update. Major changes: support for auto-implemented properties ({get;set;}, based on contribution by And Poulsen), all-round improvements and bugfixes, file manager improvements (multiple file upload, resize images to fit), new image gallery, infinite scroll paging on news, content templates. First time with N2? Try the demo site. Download one of the template packs (above) and open the proj...

    • Wii Backup Fusion: Wii Backup Fusion 1.0: - Norwegian translation - French translation - German translation - WBFS dump for analysis - Scalable full HQ cover - Support for log file - Load game images improved - Support for image splitting - Diff for images after transfer - Support for scrubbing modes - Search functionality for log - Recurse depth for Files/Load - Show progress while downloading game cover - Supports more databases for cover download - Game cover loading routines improved

    • AutoLoL: AutoLoL v1.5.1: Fix: Fixed a bug where pressing Save As would not select the Mastery Directory by default. Unexpected errors are now always reported to the user before closing AutoLoL down.* Extracted champion data to Data directory.** Added disclaimer to notify users this application has nothing to do with Riot Games Inc. Updated Codeplex image. * An error report will be shown to the user which can help the developers to find out what caused the error; this should improve support. ** We are working on ...

    • TortoiseHg: TortoiseHg 1.1.8: TortoiseHg 1.1.8 is a minor bug fix release, with minor improvements

    • BlogEngine.NET: BlogEngine.NET 2.0: Get DotNetBlogEngine for 3 Months Free! Click Here for More Info. 3 Months FREE – BlogEngine.NET Hosting – Click Here! If you want to set up and start using BlogEngine.NET right away, you should download the Web project. If you want to extend or modify BlogEngine.NET, you should download the source code. If you are upgrading from a previous version of BlogEngine.NET, please take a look at the Upgrading to BlogEngine.NET 2.0 instructions. To get started, be sure to check out our installatio...

    • Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.6.6 Released: Today we are releasing the final version of Visifire, v3.6.6, with the following new features: TextDecorations property is implemented in Title for Chart. TitleTextDecorations property is implemented in Axis. MinPointHeight property is now applicable for Column and Bar Charts. This release also includes a few bug fixes: ToolTipText property of DataSeries was not getting applied from Style. Chart threw exception if IndicatorEnabled property was set to true and Too...

    • StyleCop Compliant Visual Studio Code Snippets: Visual Studio Code Snippets - January 2011: Visual Studio 2010 provides C# developers with 38 code snippets, enhancing developer productivity and increasing the consistency of the code. Within this project the original code snippets have been refactored to provide StyleCop compliant versions of the original code snippets while also adding many new code snippets. Within the January 2011 release you'll find 82 code snippets to make you more productive and the code you write more consistent!...

    • WPF Application Framework (WAF): WPF Application Framework (WAF) 2.0.0.2: Version 2.0.0.2 (Milestone 2): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requirements: .NET Framework 4.0 (the package contains a solution file for Visual Studio 2010); the unit test projects require Visual Studio 2010 Professional. Remark: The sample applications are using Microsoft's IoC container MEF. However, the WPF Application Framework (WAF) doesn't force you to use the same IoC container in your application. You can use ...

    New Projects

    • bsp4airplay: BSP files support for the Airplay SDK. This project's target is to support: - Quake 1 - Quake 2 - Quake 3 - HL 1 - HL 2

    • ClassyBlog - Personal Blogging Engine in ASP.NET MVC: A modern blogging engine for classy people.

    • Cybernux Linux® Bulid Script: Cybernux Linux® Build Script is a set of bash scripts designed for usage within the Cybernux Linux® Project, doing things like setting up a repository and building and rebuilding packages for Cybernux Linux®, which uses Debian and portions of Debian repos combined with the Cybernux ones

    • Cybernux Linux® Multimedia Bulid Script: Cybernux Linux® Multimedia Build Script is a set of bash scripts designed for usage within the Cybernux Linux® Project, doing things like setting up a repository and building and rebuilding multimedia packages for Cybernux Linux® Multimedia

    • ExceptionMessageBox: ExceptionMessageBox is a development of a message box to show exception details in your app, like the .NET unhandled exception dialog. More and better features and views, similar to Microsoft.ExceptionMessageBox from SQL Server, shall be added by contributors. This is developed in C# WPF.

    • FractalZoom: FractalZoom is a fun little application that allows people to explore the fractal landscape of Julia sets: http://en.wikipedia.org/wiki/Julia_set

    • FSharpJump: Visual Studio 2010 extension for F# source files. Pops up a window in the F# editor which outlines the namespaces, modules, types and "let bindings" in the current document. Pressing the enter key on the selected list item will jump the caret to that line.

    • GoJan: A web application that introduces in-depth information about travelling to Japan, using the DNN framework.

    • List2Code: List2Code is a very simple application that takes a comma-delimited list and runs it through a template to create code that can be copied and pasted into your application. It's developed in C# using WPF for the GUI.

    • MicroNET Framework UAV: The UAV project aims to control a radio-controlled airplane through the .NET Micro Framework and enable autonomous flight.

    • Mini Memory Dump Diagnosis using Asp.Net: Enable developers to generate mini memory dumps in case of unhandled exceptions or for specific exception scenarios without using any external tools, only using Asp.Net 2.0 and above. Many times in production and QA servers the issues require detailed system debugging dumps.

    • MultiConvert: Console-based utility primarily for converting Solid Edge files into data exchange formats like *.stp and *.dxf. It uses the API of the original software and can't be run alone. The goal of this project is creating routines with desired pre-instructions for batch conversions

    • PDI 2009: A tool for digital image processing.

    • RoC: RoC is an MVC 3 Razor based CMS system. RoC also uses Composition, Policy Injection, EF 4 Model First, and Windows Identity Foundation technologies. RoC is currently a draft project and will become stronger in the future

    • Shot In The Dark: Shot In The Dark is a multiplayer top-down 2D shooter framework developed in Flash/AS3.0, and uses the Nonoba Multiplayer API.

    • SilveR - Online Statistics Application: SilveR is a Silverlight based online statistical analysis application written in C# with a WCF backend exposing an R service

    • Simple Scheduler: Simple task scheduling application for Windows built on top of Quartz.NET

    • Sj's Personal Arena: This project hosting space is for all my personal projects I have been working on and will be working on. For any further details you can contact me at mukhs18[at]gmail[dot]com

    • skymet: JAlquerque

    • The Electric Clam: A web console for PowerShell. Features include: - near real-time web based console - multiple consoles and multiple users - shared or exclusive consoles - latest technology (jQuery, WCF Web Sockets, ASP.Net MVC 3 with Razor) - more...

    • UCI Protocol Sniffer [UCIPlug]: UCIPlug allows dumping the UCI messages exchanged between a UCI compliant GUI (for ex., Chessbase) and a UCI engine (for ex., Rybka 2.2). UCIPlug is written in C# and is targeted at .NET 3.5 (though it can be recompiled for .NET 2.0).

    • Umbraco for Windows Phone: Umbraco for Windows Phone

    • YetAnotherErp: This is 2 years of work building yet another ERP system that integrates CRM, SCM, complex inventory management, and EDI. This software is powered by the XAF Application Framework from Developer Express and the Expand Framework. YetAnotherErp is a fully integrated solution.

    • Your Store: "Your Store" is a beta e-commerce web application created using ASP.NET MVC 2

    Read the article

  • C# XNA Handle mouse events?

    - by user406470
    I'm making a 2D game engine called Clixel over on GitHub. The problem I have relates to two classes, ClxMouse and ClxButton. First, the mouse class - the code for that can be viewed here.

    ClxMouse

    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;
    using Microsoft.Xna.Framework.Input;

    namespace org.clixel
    {
        public class ClxMouse : ClxSprite
        {
            private MouseState _curmouse, _lastmouse;
            private int _scrollwheel;

            public int Sensitivity = 3;
            public bool Lock = true;

            public Vector2 Change { get { return new Vector2(_curmouse.X - _lastmouse.X, _curmouse.Y - _lastmouse.Y); } }
            public int ScrollWheel { get { return _scrollwheel; } }

            public bool LeftDown { get { return _curmouse.LeftButton == ButtonState.Pressed; } }
            public bool RightDown { get { return _curmouse.RightButton == ButtonState.Pressed; } }
            public bool MiddleDown { get { return _curmouse.MiddleButton == ButtonState.Pressed; } }

            public bool LeftPressed { get { return _curmouse.LeftButton == ButtonState.Pressed && _lastmouse.LeftButton == ButtonState.Released; } }
            public bool RightPressed { get { return _curmouse.RightButton == ButtonState.Pressed && _lastmouse.RightButton == ButtonState.Released; } }
            public bool MiddlePressed { get { return _curmouse.MiddleButton == ButtonState.Pressed && _lastmouse.MiddleButton == ButtonState.Released; } }

            public bool LeftReleased { get { return _curmouse.LeftButton == ButtonState.Released && _lastmouse.LeftButton == ButtonState.Pressed; } }
            public bool RightReleased { get { return _curmouse.RightButton == ButtonState.Released && _lastmouse.RightButton == ButtonState.Pressed; } }
            public bool MiddleReleased { get { return _curmouse.MiddleButton == ButtonState.Released && _lastmouse.MiddleButton == ButtonState.Pressed; } }

            public MouseState CurMouse { get { return _curmouse; } }
            public MouseState LastMouse { get { return _lastmouse; } }

            public ClxMouse()
                : base(ClxG.Textures.Default.Cursor)
            {
                _curmouse = Mouse.GetState();
                _lastmouse = _curmouse;
                CollisionBox = new Rectangle(ClxG.Screen.Center.X, ClxG.Screen.Center.Y, Texture.Width, Texture.Height);
                this.Solid = false;
                DefaultPosition = new Vector2(CollisionBox.X, CollisionBox.Y);
                Mouse.SetPosition(CollisionBox.X, CollisionBox.Y);
            }

            public ClxMouse(Texture2D _texture)
                : base(_texture)
            {
                _curmouse = Mouse.GetState();
                _lastmouse = _curmouse;
                CollisionBox = new Rectangle(ClxG.Screen.Center.X, ClxG.Screen.Center.Y, Texture.Width, Texture.Height);
                DefaultPosition = new Vector2(CollisionBox.X, CollisionBox.Y);
            }

            public override void Update()
            {
                _lastmouse = _curmouse;
                _curmouse = Mouse.GetState();
                if (_curmouse != _lastmouse)
                {
                    if (ClxG.Game.IsActive)
                    {
                        _scrollwheel = _curmouse.ScrollWheelValue;
                        Velocity = new Vector2(Change.X / Sensitivity, Change.Y / Sensitivity);
                        if (Lock)
                            Mouse.SetPosition(ClxG.Screen.Center.X, ClxG.Screen.Center.Y);
                        _curmouse = Mouse.GetState();
                    }
                    base.Update();
                }
            }

            public override void Draw(SpriteBatch _sb)
            {
                base.Draw(_sb);
            }
        }
    }

    ClxButton

    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    namespace org.clixel
    {
        public class ClxButton : ClxSprite
        {
            /// <summary>The color when the mouse is over the button</summary>
            public Color HoverColor;
            /// <summary>The color when the button is being clicked</summary>
            public Color ClickColor;
            /// <summary>The color when the button is inactive</summary>
            public Color InactiveColor;
            /// <summary>The color when the button is active</summary>
            public Color ActiveColor;
            /// <summary>The color after the button has been clicked.</summary>
            public Color ClickedColor;
            /// <summary>The text to be displayed on the button, set to "" if no text is needed.</summary>
            public string Text;
            /// <summary>The ClxText object to be displayed.</summary>
            public ClxText TextRender;
            /// <summary>The ClxState that should be ResetAndShow() when the button is clicked.</summary>
            public ClxState ClickState;
            /// <summary>
            /// Collision check to make sure onCollide() only runs once per frame,
            /// since only the mouse needs to be collision checked.
            /// </summary>
            private bool _runonce = false;

            /// <summary>Gets a value indicating whether this instance is colliding.</summary>
            public bool IsColliding { get { return _runonce; } }

            /// <summary>Initializes a new instance of the <see cref="ClxButton"/> class.</summary>
            public ClxButton()
                : base(ClxG.Textures.Default.Button)
            {
                HoverColor = Color.Red;
                ClickColor = Color.Blue;
                InactiveColor = Color.Gray;
                ActiveColor = Color.White;
                ClickedColor = Color.Yellow;
                Text = Name + ID + " Unset!";
                TextRender = new ClxText();
                TextRender.Text = Text;
                TextRender.TextPadding = new Vector2(5, 5);
                ClickState = null;
                CollideObjects(ClxG.Mouse);
            }

            /// <summary>Initializes a new instance of the <see cref="ClxButton"/> class.</summary>
            /// <param name="_texture">The button texture.</param>
            public ClxButton(Texture2D _texture)
                : base(_texture)
            {
                HoverColor = Color.Red;
                ClickColor = Color.Blue;
                InactiveColor = Color.Gray;
                ActiveColor = Color.White;
                ClickedColor = Color.Yellow;
                Texture = _texture;
                Text = Name + ID;
                TextRender = new ClxText();
                TextRender.Name = this.Name + ".TextRender";
                TextRender.Text = Text;
                TextRender.TextPadding = new Vector2(5, 5);
                TextRender.Reset();
                ClickState = null;
                CollideObjects(ClxG.Mouse);
            }

            /// <summary>Draws the debug information, run from ClxG.DrawDebug unless manual control is assumed.</summary>
            /// <param name="_sb">SpriteBatch used for drawing.</param>
            public override void DrawDebug(SpriteBatch _sb)
            {
                _runonce = false;
                TextRender.DrawDebug(_sb);
                _sb.Draw(Texture, ActualRectangle, new Rectangle(0, 0, Texture.Width, Texture.Height), DebugColor, Rotation, Origin, Flip, Layer);
                _sb.Draw(ClxG.Textures.Default.DebugBG, new Rectangle(ActualRectangle.X - DebugLineWidth, ActualRectangle.Y - DebugLineWidth, ActualRectangle.Width + DebugLineWidth * 2, ActualRectangle.Height + DebugLineWidth * 2), new Rectangle(0, 0, ClxG.Textures.Default.DebugBG.Width, ClxG.Textures.Default.DebugBG.Height), DebugOutline, Rotation, Origin, Flip, Layer - 0.1f);
                _sb.Draw(ClxG.Textures.Default.DebugBG, ActualRectangle, new Rectangle(0, 0, ClxG.Textures.Default.DebugBG.Width, ClxG.Textures.Default.DebugBG.Height), DebugBGColor, Rotation, Origin, Flip, Layer - 0.01f);
            }

            /// <summary>Draws using the SpriteBatch, run from ClxG.Draw unless manual control is assumed.</summary>
            /// <param name="_sb">SpriteBatch used for drawing.</param>
            public override void Draw(SpriteBatch _sb)
            {
                _runonce = false;
                TextRender.Draw(_sb);
                if (Visible)
                    if (Debug)
                    {
                        DrawDebug(_sb);
                    }
                    else
                        _sb.Draw(Texture, ActualRectangle, new Rectangle(0, 0, Texture.Width, Texture.Height), Color, Rotation, Origin, Flip, Layer);
            }

            /// <summary>Updates this instance.</summary>
            public override void Update()
            {
                if (this.Color != ActiveColor)
                    this.Color = ActiveColor;
                TextRender.Layer = this.Layer + 0.03f;
                TextRender.Text = Text;
                TextRender.Scale = .5f;
                TextRender.Name = this.Name + ".TextRender";
                TextRender.Origin = new Vector2(TextRender.CollisionBox.Center.X, TextRender.CollisionBox.Center.Y);
                TextRender.Center(this);
                TextRender.Update();
                this.CollisionBox.Width = (int)(TextRender.CollisionBox.Width * TextRender.Scale) + (int)(TextRender.TextPadding.X * 2);
                this.CollisionBox.Height = (int)(TextRender.CollisionBox.Height * TextRender.Scale) + (int)(TextRender.TextPadding.Y * 2);
                base.Update();
            }

            /// <summary>
            /// Collide event, takes the colliding object to call its proper collision code.
            /// You'd want to use something like if(typeof(collider) == typeof(ClxObject))
            /// </summary>
            /// <param name="collider">The colliding object.</param>
            public override void onCollide(ClxObject collider)
            {
                if (!_runonce)
                {
                    _runonce = true;
                    UpdateEvents();
                    base.onCollide(collider);
                }
            }

            /// <summary>Updates the mouse based events.</summary>
            public void UpdateEvents()
            {
                onHover();
                if (ClxG.Mouse.LeftReleased) { onLeftReleased(); return; }
                if (ClxG.Mouse.RightReleased) { onRightReleased(); return; }
                if (ClxG.Mouse.MiddleReleased) { onMiddleReleased(); return; }
                if (ClxG.Mouse.LeftPressed) { onLeftClicked(); return; }
                if (ClxG.Mouse.RightPressed) { onRightClicked(); return; }
                if (ClxG.Mouse.MiddlePressed) { onMiddleClicked(); return; }
                if (ClxG.Mouse.LeftDown) { onLeftClick(); return; }
                if (ClxG.Mouse.RightDown) { onRightClick(); return; }
                if (ClxG.Mouse.MiddleDown) { onMiddleClick(); return; }
            }

            /// <summary>Shows the state of the click.</summary>
            public void ShowClickState()
            {
                if (ClickState != null)
                {
                    ClickState.ResetAndShow();
                }
            }

            /// <summary>Hover event</summary>
            virtual public void onHover() { this.Color = HoverColor; }
            /// <summary>Left click event</summary>
            virtual public void onLeftClick() { this.Color = ClickColor; }
            /// <summary>Right click event</summary>
            virtual public void onRightClick() { }
            /// <summary>Middle click event</summary>
            virtual public void onMiddleClick() { }
            /// <summary>Left click event, called once per click</summary>
            virtual public void onLeftClicked() { ShowClickState(); }
            /// <summary>Right click event, called once per click</summary>
            virtual public void onRightClicked() { this.Reset(); }
            /// <summary>Middle click event, called once per click</summary>
            virtual public void onMiddleClicked() { }
            /// <summary>Left button released event</summary>
            virtual public void onLeftReleased() { this.Color = ClickedColor; }
            /// <summary>Right button released event</summary>
            virtual public void onRightReleased() { }
            /// <summary>Middle button released event</summary>
            virtual public void onMiddleReleased() { }
        }
    }

    The issue I have is that all of these are event-styled methods, especially in ClxButton with all the onLeftClick, onRightClick, etc. Is there a better way for me to handle these events that would be a lot easier for a programmer to use? I was looking at normal events on some other sites (I'd post them but I need more rep.)
    and didn't really see a good way to implement delegate events into my framework. I'm not really sure how these events work; could someone possibly lay out how they are processed for me?

    TL;DR:
    • Is there a better way to handle events like this?
    • Are events a viable solution to this problem?

    Thanks in advance for any help.
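    For what it's worth, the standard .NET approach the poster is asking about looks roughly like the sketch below. The EventButton class and its members are hypothetical, not part of the actual Clixel code; it only illustrates the declare / raise / subscribe pattern that could replace the virtual onLeftClicked()-style methods.

    using System;

    namespace org.clixel
    {
        // Hypothetical sketch of a button using .NET events instead of
        // requiring subclasses to override virtual methods.
        public class EventButton
        {
            // Subscribers attach delegates here; null means no listeners.
            public event EventHandler LeftClicked;

            // Conventional protected raiser method.
            protected virtual void OnLeftClicked()
            {
                EventHandler handler = LeftClicked;
                if (handler != null)
                    handler(this, EventArgs.Empty);
            }

            // Would be called where UpdateEvents() currently calls
            // onLeftClicked(), i.e. when a left click is detected.
            public void HandleLeftClick()
            {
                OnLeftClicked();
            }
        }

        public static class EventButtonExample
        {
            public static void Run()
            {
                var button = new EventButton();
                // Consumers subscribe with a lambda instead of subclassing:
                button.LeftClicked += (sender, e) => Console.WriteLine("Clicked!");
                button.HandleLeftClick(); // prints "Clicked!"
            }
        }
    }

    The practical upside is that client code can react to clicks without deriving a new class per button; the trade-off is a small amount of event-raising boilerplate per event type.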

    Read the article

  • Testing Entity Framework applications, pt. 3: NDbUnit

    - by Thomas Weller
    This is the third of a three-part series that deals with the issue of faking test data in the context of a legacy app that was built with Microsoft's Entity Framework (EF) on top of an MS SQL Server database – a scenario that can be found very often. Please read the first part for a description of the sample application, a discussion of some general aspects of unit testing in a database context, and some more specific aspects of the EF/MSSQL combination discussed here. Lately, I wondered how you would 'mock' the data layer of a legacy application, when this data layer is made up of an MS Entity Framework (EF) model in combination with an MS SQL Server database. Originally, this question came up in the context of how you could enable higher-level integration tests (automated UI tests, to be exact) for a legacy application that uses this EF/MSSQL combo as its data store mechanism – a not so uncommon scenario. The question sparked my interest, and I decided to dive into it somewhat deeper. What I've found out is, in short, that it's not very easy and straightforward to do – but it can be done. The two strategies that are best suited to fit the bill involve using either the (commercial) Typemock Isolator tool or the (free) NDbUnit framework. The use of Typemock was discussed in the previous post; this post will present the NDbUnit approach...

    NDbUnit is an Apache 2.0-licensed open-source project, and like so many other Nxxx tools and frameworks, it is basically a C#/.NET port of the corresponding Java version (DbUnit, namely). In short, it helps you flexibly manage the state of a database in that it lets you easily perform basic operations (e.g. Insert, Delete, Refresh, DeleteAll) against your database and, most notably, lets you feed it with data from external xml files. Let's have a look at how things can be done with the help of this framework.

    Preparing the test data

    Compared to Typemock, using NDbUnit implies a totally different approach to meeting our testing needs. The testing scenario described here requires an instance of an SQL Server database in operation, and it also means that the Entity Framework model that sits on top of this database is completely unaffected. First things first: For its interactions with the database, NDbUnit relies on a .NET DataSet xsd file. See Step 1 of their Quick Start Guide for a description of how to create one.
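    As an aside, if you'd rather not use the designer-based approach described there, such an xsd can also be produced programmatically with plain ADO.NET. The sketch below is only an assumption-laden illustration: the connection string and the table list are placeholders for the School sample database, and in practice you'd verify that the result matches what NDbUnit expects.

    using System.Data;
    using System.Data.SqlClient;

    class SchemaExporter
    {
        static void Main()
        {
            // Placeholder connection string - adapt to your environment.
            const string connectionString =
                @"Data Source=.\SQLEXPRESS;Initial Catalog=School;Integrated Security=True";
            // Placeholder table names from the School sample database.
            var tables = new[] { "Person", "Course", "Department", "StudentGrade" };

            var dataSet = new DataSet("School");
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();
                foreach (string table in tables)
                {
                    // FillSchema copies the table's columns, keys and constraints
                    // into the DataSet without transferring any rows.
                    using (var adapter = new SqlDataAdapter("SELECT * FROM " + table, connection))
                    {
                        adapter.FillSchema(dataSet, SchemaType.Source, table);
                    }
                }
            }

            // Writes the xsd that the fixture below feeds to ReadXmlSchema().
            dataSet.WriteXmlSchema(@"..\..\TestData\School.xsd");
        }
    }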
    With this prerequisite in place, the test fixture's setup code could look something like this:

    [TestFixture, TestsOn(typeof(PersonRepository))]
    [Metadata("NDbUnit Quickstart URL",
              "http://code.google.com/p/ndbunit/wiki/QuickStartGuide")]
    [Description("Uses the NDbUnit library to provide test data to a local database.")]
    public class PersonRepositoryFixture
    {
        #region Constants

        private const string XmlSchema = @"..\..\TestData\School.xsd";

        #endregion // Constants

        #region Fields

        private SchoolEntities _schoolContext;
        private PersonRepository _personRepository;
        private INDbUnitTest _database;

        #endregion // Fields

        #region Setup/TearDown

        [FixtureSetUp]
        public void FixtureSetUp()
        {
            var connectionString = ConfigurationManager.ConnectionStrings["School_Test"].ConnectionString;
            _database = new SqlDbUnitTest(connectionString);
            _database.ReadXmlSchema(XmlSchema);

            var entityConnectionStringBuilder = new EntityConnectionStringBuilder
            {
                Metadata = "res://*/School.csdl|res://*/School.ssdl|res://*/School.msl",
                Provider = "System.Data.SqlClient",
                ProviderConnectionString = connectionString
            };

            _schoolContext = new SchoolEntities(entityConnectionStringBuilder.ConnectionString);
            _personRepository = new PersonRepository(this._schoolContext);
        }

        [FixtureTearDown]
        public void FixtureTearDown()
        {
            _database.PerformDbOperation(DbOperationFlag.DeleteAll);
            _schoolContext.Dispose();
        }

        ...

    As you can see, there is slightly more fixture setup code involved if your tests are using NDbUnit to provide the test data: because we're dealing with a physical database instance here, we first need to pick up the test-specific connection string from the test assembly's App.config, then initialize an NDbUnit helper object with this connection along with the provided xsd file, and also set up the SchoolEntities and the PersonRepository instances accordingly. The _database field (an instance of the INDbUnitTest interface) will be our single access point to the underlying database: we use it to perform all the required operations against the data store. To have a flexible mechanism to easily insert data into the database, we can write a helper method like this:

    private void InsertTestData(params string[] dataFileNames)
    {
        _database.PerformDbOperation(DbOperationFlag.DeleteAll);
        if (dataFileNames == null)
        {
            return;
        }
        try
        {
            foreach (string fileName in dataFileNames)
            {
                if (!File.Exists(fileName))
                {
                    throw new FileNotFoundException(Path.GetFullPath(fileName));
                }
                _database.ReadXml(fileName);
                _database.PerformDbOperation(DbOperationFlag.InsertIdentity);
            }
        }
        catch
        {
            _database.PerformDbOperation(DbOperationFlag.DeleteAll);
            throw;
        }
    }

    This lets us easily insert test data from xml files, in any number and in a controlled order (which is important because we eventually must fulfill referential constraints, or must account for some other stuff that imposes a specific ordering on data insertion). Again, as with Typemock, I won't go into API details here. Unfortunately, there isn't too much documentation for NDbUnit anyway, other than the already mentioned Quick Start Guide (and the source code itself, of course) - a not so uncommon problem with smaller open source projects.
    Last but not least, we need to provide the required test data in xml form. A snippet for data from the People table might look like this, for example:

    <?xml version="1.0" encoding="utf-8" ?>
    <School xmlns="http://tempuri.org/School.xsd">
      <Person>
        <PersonID>1</PersonID>
        <LastName>Abercrombie</LastName>
        <FirstName>Kim</FirstName>
        <HireDate>1995-03-11T00:00:00</HireDate>
      </Person>
      <Person>
        <PersonID>2</PersonID>
        <LastName>Barzdukas</LastName>
        <FirstName>Gytis</FirstName>
        <EnrollmentDate>2005-09-01T00:00:00</EnrollmentDate>
      </Person>
      <Person>
        ...

    You can also have data from various tables in one single xml file, if that's appropriate for you (but beware of the already mentioned ordering issues). It's true that your test assembly may end up with dozens of such xml files, each containing quite a big amount of text data. But because the files are of very low complexity, and with the help of a little bit of Copy/Paste and Excel magic, this appears to be well manageable.

    Executing some basic tests

    Here are some of the possible tests that can be written with the above preparations in place:

    private const string People = @"..\..\TestData\School.People.xml";
    ...

    [Test, MultipleAsserts, TestsOn("PersonRepository.GetNameList")]
    public void GetNameList_ListOrdering_ReturnsTheExpectedFullNames()
    {
        InsertTestData(People);
        List<string> names =
            _personRepository.GetNameList(NameOrdering.List);
        Assert.Count(34, names);
        Assert.AreEqual("Abercrombie, Kim", names.First());
        Assert.AreEqual("Zheng, Roger", names.Last());
    }

    [Test, MultipleAsserts, TestsOn("PersonRepository.GetNameList")]
    [DependsOn("RemovePerson_CalledOnce_DecreasesCountByOne")]
    public void GetNameList_NormalOrdering_ReturnsTheExpectedFullNames()
    {
        InsertTestData(People);
        List<string> names =
            _personRepository.GetNameList(NameOrdering.Normal);
        Assert.Count(34, names);
        Assert.AreEqual("Alexandra Walker", names.First());
        Assert.AreEqual("Yan Li", names.Last());
    }

    [Test, TestsOn("PersonRepository.AddPerson")]
    public void AddPerson_CalledOnce_IncreasesCountByOne()
    {
        InsertTestData(People);
        int count = _personRepository.Count;
        _personRepository.AddPerson(new Person { FirstName = "Thomas", LastName = "Weller" });
        Assert.AreEqual(count + 1, _personRepository.Count);
    }

    [Test, TestsOn("PersonRepository.RemovePerson")]
    public void RemovePerson_CalledOnce_DecreasesCountByOne()
    {
        InsertTestData(People);
        int count = _personRepository.Count;
        _personRepository.RemovePerson(new Person { PersonID = 33 });
        Assert.AreEqual(count - 1, _personRepository.Count);
    }

    Not much difference here compared to the corresponding Typemock versions, except that we had to do a bit more preparatory work (and also it was harder to get the required knowledge). But this picture changes quite dramatically if we look at some more demanding test cases.

    OK, and what if things are becoming somewhat more complex?

    Tests like the above ones represent the 'easy' scenarios. They may account for the biggest portion of real-world use cases of the application, and they are important to make sure that it is generally sound. But usually, all these nasty little bugs originate from the more complex parts of our code, or they occur when something goes wrong. So, for a testing strategy to be of real practical use, it is especially important to see how easy or difficult it is to mimic a scenario which represents a more complex or exceptional case.
The following test, for example, deals with the case where there is some sort of invalid input from the caller:

[Test, MultipleAsserts, TestsOn("PersonRepository.GetCourseMembers")]
[Row(null, typeof(ArgumentNullException))]
[Row("", typeof(ArgumentException))]
[Row("NotExistingCourse", typeof(ArgumentException))]
public void GetCourseMembers_WithGivenVariousInvalidValues_Throws(string courseTitle, Type expectedInnerExceptionType)
{
    var exception = Assert.Throws<RepositoryException>(() =>
                            _personRepository.GetCourseMembers(courseTitle));

    Assert.IsInstanceOfType(expectedInnerExceptionType, exception.InnerException);
}

Apparently, this test doesn't need an 'Arrange' part at all (see here for the same test with the Typemock tool). It acts just like any other client code, and all the required business logic comes from the database itself. This doesn't necessarily mean that there is less complexity, but only that the complexity lives in a different part of your test resources (namely in the xml files, where you sometimes have to spend a lot of effort carefully preparing the required test data). Another example, which relies on an underlying 1-n relationship, might be this:

[Test, MultipleAsserts, TestsOn("PersonRepository.GetCourseMembers")]
public void GetCourseMembers_WhenGivenAnExistingCourse_ReturnsListOfStudents()
{
    InsertTestData(People, Course, Department, StudentGrade);

    List<Person> persons = _personRepository.GetCourseMembers("Macroeconomics");

    Assert.Count(4, persons);
    Assert.ForAll(
        persons,
        @p => new[] { 10, 11, 12, 14 }.Contains(@p.PersonID),
        "Person has none of the expected IDs.");
}

If you compare this test to its corresponding Typemock version, you immediately see that the test itself is much simpler, easier to read, and thus much more intention-revealing. The complexity here lies hidden behind the call to the InsertTestData() helper method and the content of the xml files used for the test data. Also note that you might have to provide additional data which is not even directly relevant to your test, but is required only to satisfy some integrity constraints of the underlying database.

Conclusion

The first thing to notice when comparing the NDbUnit approach to its Typemock counterpart obviously concerns performance: Of course, NDbUnit is much slower than Typemock. Technically, it doesn't even make sense to compare the two tools. But practically, it may well play a role, and it may or may not be an issue, depending on how many tests of this kind you have, how often you run them, and what role they play in your development cycle. Also, because the dataset from the required xsd file must fully match the database schema (even in parts that otherwise wouldn't be relevant to you), it can be quite cumbersome to be in a team where different people are working with the database in parallel.
My personal experience is, as already said in the first part, that Typemock gives you a better development experience in a 'dynamic' scenario (when you're working in some kind of TDD style, you're often executing the tests from your dev box, and your database schema changes frequently), whereas the NDbUnit approach is a good and solid solution in more 'static' development scenarios (when you need to execute the tests less frequently or only on a separate build server, and/or the underlying database schema can be kept relatively stable), for example some variations of higher-level integration or user acceptance tests. But in any case, opening Entity Framework based applications up for testing requires a fair amount of resources, planning, and preparatory work; it's definitely not the kind of stuff that you would call 'easy to test'. Hopefully, future versions of EF will take testing concerns into account. Otherwise, I don't see much of a future for the framework in the long run, even though it's quite popular at the moment...

The sample solution

A sample solution (VS 2010) with the code from this article series is available via my Bitbucket account from here (Bitbucket is a hosting site for Mercurial repositories; the repositories can also be accessed with the Git and Subversion SCMs - consult the documentation for details. In addition, it is possible to download the solution simply as a zipped archive, via the 'get source' button on the far right). The solution contains some more tests against the PersonRepository class, which are not shown here. It also contains database scripts to create and fill the School sample database. To compile and run, the solution expects the Gallio/MbUnit framework to be installed (which is free and can be downloaded from here), the NDbUnit framework (which is also free and can be downloaded from here), and the Typemock Isolator tool (a fully functional 30-day trial is available here). Moreover, you will need an instance of the Microsoft SQL Server DBMS, and you will have to adapt the connection strings in the test projects' App.config files accordingly.
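For reference, here is a minimal sketch of what the relevant connection string entry in such an App.config could look like. The "School_Test" name is the key read in FixtureSetUp above; the server and database names are mere placeholders, not values from the sample solution:

<configuration>
  <connectionStrings>
    <!-- Placeholder values: point Data Source and Initial Catalog
         to your own SQL Server instance and test database. -->
    <add name="School_Test"
         connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=School_Test;Integrated Security=True"
         providerName="System.Data.SqlClient" />
  </connectionStrings>
</configuration>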

    Read the article

  • Detecting HTML5/CSS3 Features using Modernizr

    - by dwahlin
HTML5, CSS3, and related technologies such as canvas and web sockets bring a lot of useful new features to the table that can take Web applications to the next level. These new technologies allow applications to be built using only HTML, CSS, and JavaScript, allowing them to be viewed on a variety of form factors including tablets and phones. Although HTML5 features offer a lot of promise, it's not realistic to develop applications using the latest technologies without worrying about supporting older browsers in the process. If history has taught us anything, it's that old browsers stick around for years and years, which means developers have to deal with backward compatibility issues. This is especially true when deploying applications to the Internet that target the general public. This begs the question, "How do you move forward with HTML5 and CSS3 technologies while gracefully handling unsupported features in older browsers?"

Although you can write code by hand to detect different HTML5 and CSS3 features, it's not always straightforward. For example, to check for canvas support you need to write code similar to the following:

<script>
    window.onload = function () {
        if (canvasSupported()) {
            alert('canvas supported');
        }
    };

    function canvasSupported() {
        var canvas = document.createElement('canvas');
        return (canvas.getContext && canvas.getContext('2d'));
    }
</script>

If you want to check for local storage support, the following check can be made. It's more involved than it should be due to a bug in older versions of Firefox.

<script>
    window.onload = function () {
        if (localStorageSupported()) {
            alert('local storage supported');
        }
    };

    function localStorageSupported() {
        try {
            return ('localStorage' in window && window['localStorage'] != null);
        }
        catch (e) { }
        return false;
    }
</script>

Looking through the previous examples you can see that there's more than meets the eye when it comes to checking browsers for HTML5 and CSS3 features. It takes a lot of work to test every possible scenario and every version of a given browser. Fortunately, you don't have to resort to writing custom code to test what HTML5/CSS3 features a given browser supports. By using a script library called Modernizr you can add checks for different HTML5/CSS3 features into your pages with a minimal amount of code on your part. Let's take a look at some of the key features Modernizr offers.

Getting Started with Modernizr

The first time I heard the name "Modernizr" I thought it "modernized" older browsers by adding missing functionality. In reality, Modernizr doesn't actually handle adding missing features or "modernizing" older browsers. The Modernizr website states, "The name Modernizr actually stems from the goal of modernizing our development practices (and ourselves)". Because it relies on feature detection rather than browser sniffing (a common technique used in the past that never worked that well), Modernizr definitely provides a more modern way to test features that a browser supports, and it can even handle loading additional scripts called shims or polyfills that fill in holes that older browsers may have. It's a great tool to have in your arsenal if you're a web developer. Modernizr is available at http://modernizr.com. Two different types of scripts are available: a development script and a custom production script. To generate a production script, the site provides a custom script generation tool rather than providing a single script that has everything under the sun for HTML5/CSS3 feature detection.
Using the script generation tool you can pick the specific test functionality that you need and ignore everything that you don't need. That way the script is kept as small as possible. An example of the custom script download screen is shown next. Notice that specific CSS3, HTML5, and related feature tests can be selected. Once you've downloaded your custom script you can add it into your web page using the standard <script> element, and you're ready to start using Modernizr:

<script src="Scripts/Modernizr.js" type="text/javascript"></script>

Modernizr and the HTML Element

Once you've added a script reference to Modernizr in a page, it'll go to work for you immediately. In fact, simply by adding the script, several different CSS classes will be added to the page's <html> element at runtime. These classes define what features the browser supports and what features it doesn't support. Features that aren't supported get a class name of "no-FeatureName", for example "no-flexbox". Features that are supported get a CSS class name based on the feature, such as "canvas" or "websockets". An example of the classes added when running a page in Chrome is shown next:

<html class=" js flexbox canvas canvastext webgl no-touch geolocation postmessage websqldatabase indexeddb hashchange history draganddrop websockets rgba hsla multiplebgs backgroundsize borderimage borderradius boxshadow textshadow opacity cssanimations csscolumns cssgradients cssreflections csstransforms csstransforms3d csstransitions fontface generatedcontent video audio localstorage sessionstorage webworkers applicationcache svg inlinesvg smil svgclippaths">

Here's an example of what the <html> element looks like at runtime with Internet Explorer 9:

<html class=" js no-flexbox canvas canvastext no-webgl no-touch geolocation postmessage no-websqldatabase no-indexeddb hashchange no-history draganddrop no-websockets rgba hsla multiplebgs backgroundsize no-borderimage borderradius boxshadow no-textshadow opacity no-cssanimations no-csscolumns no-cssgradients no-cssreflections csstransforms no-csstransforms3d no-csstransitions fontface generatedcontent video audio localstorage sessionstorage no-webworkers no-applicationcache svg inlinesvg smil svgclippaths">

When using Modernizr it's a common practice to define an <html> element in your page with a no-js class added, as shown next:

<html class="no-js">

You'll see starter projects such as HTML5 Boilerplate (http://html5boilerplate.com) or Initializr (http://initializr.com) follow this approach (see my previous post for more information on HTML5 Boilerplate). By adding the no-js class it's easy to tell if a browser has JavaScript enabled or not. If JavaScript is disabled, no-js will stay on the <html> element. If JavaScript is enabled, no-js will be removed by Modernizr, and a js class will be added along with other classes that define supported/unsupported features.

Working with HTML5 and CSS3 Features

You can use the CSS classes added to the <html> element directly in your CSS files to determine what style properties to use, based upon the features supported by a given browser.
For example, the following CSS can be used to render a box shadow in browsers that support that feature and a simple border in browsers that don't:

.boxshadow #MyContainer {
    border: none;
    -webkit-box-shadow: #666 1px 1px 1px;
    -moz-box-shadow: #666 1px 1px 1px;
}

.no-boxshadow #MyContainer {
    border: 2px solid black;
}

If a browser supports box shadows, the boxshadow CSS class will be added to the <html> element by Modernizr. It can then be associated with a given element. This example associates the boxshadow class with a div with an id of MyContainer. If the browser doesn't support box shadows, the no-boxshadow class will be added to the <html> element instead, and it can be used to render a standard border around the div. This provides a great way to leverage new CSS3 features in supported browsers while providing a graceful fallback for older browsers.

In addition to using the CSS classes that Modernizr adds to the <html> element, you can also use the global Modernizr object that it creates. This object exposes different properties that can be used to detect the availability of specific HTML5 or CSS3 features. For example, the following code can be used to detect canvas and local storage support. You can see that the code is much simpler than the code shown at the beginning of this post. It also has the added benefit of being tested by a large community of web developers around the world running a variety of browsers.

$(document).ready(function () {
    if (Modernizr.canvas) {
        //Add canvas code
    }

    if (Modernizr.localstorage) {
        //Add local storage code
    }
});

The global Modernizr object can also be used to test for the presence of CSS3 features. The following code shows how to test support for border-radius and CSS transforms:

$(document).ready(function () {
    if (Modernizr.borderradius) {
        $('#MyDiv').addClass('borderRadiusStyle');
    }

    if (Modernizr.csstransforms) {
        $('#MyDiv').addClass('transformsStyle');
    }
});

Several other CSS3 feature tests can be performed, such as support for opacity, rgba, text-shadow, CSS animations, CSS transitions, multiple backgrounds, and more. A complete list of the HTML5 and CSS3 tests that Modernizr supports can be found at http://www.modernizr.com/docs.

Loading Scripts using Modernizr

In cases where a browser doesn't support a specific feature you can either provide a graceful fallback or load a shim/polyfill script to fill in the missing functionality where appropriate (more information about shims/polyfills can be found at https://github.com/Modernizr/Modernizr/wiki/HTML5-Cross-Browser-Polyfills). Modernizr has a built-in script loader that can be used to test for a feature and then load a script if the feature isn't available. The script loader is built into Modernizr and is also available as a standalone yepnope script (http://yepnopejs.com). It's extremely easy to get started using the script loader, and it can really simplify the process of loading scripts based on the availability of a particular browser feature. To load scripts dynamically you can use Modernizr's load() function, which accepts properties defining the feature to test (test property), the script to load if the test succeeds (yep property), the script to load if the test fails (nope property), and a script to load regardless of whether the test succeeds or fails (both property).
An example of using load() with these properties is shown next:

Modernizr.load({
    test: Modernizr.canvas,
    yep: 'html5CanvasAvailable.js',
    nope: 'excanvas.js',
    both: 'myCustomScript.js'
});

In this example Modernizr is used not only to load scripts but also to test for the presence of the canvas feature. If the target browser supports the HTML5 canvas, the html5CanvasAvailable.js script will be loaded along with the myCustomScript.js script (use of the yep property in this example is a bit contrived; it was added simply to demonstrate how the property can be used in the load() function). Otherwise, a polyfill script named excanvas.js will be loaded to add the missing canvas functionality for Internet Explorer versions prior to 9. Once excanvas.js is loaded, the myCustomScript.js script will be loaded.

Because Modernizr handles loading scripts, you can also use it in creative ways. For example, you can use it to load local scripts when a 3rd party Content Delivery Network (CDN) such as one provided by Google or Microsoft is unavailable for whatever reason. The Modernizr documentation provides the following example that demonstrates the process of providing a local fallback for jQuery when a CDN is down:

Modernizr.load([
    {
        load: '//ajax.googleapis.com/ajax/libs/jquery/1.6.4/jquery.js',
        complete: function () {
            if (!window.jQuery) {
                Modernizr.load('js/libs/jquery-1.6.4.min.js');
            }
        }
    },
    {
        // This will wait for the fallback to load and
        // execute if it needs to.
        load: 'needs-jQuery.js'
    }
]);

This code attempts to load jQuery from the Google CDN first. Once the script is downloaded (or if the download fails), the function associated with complete will be called. The function checks to make sure that the jQuery object is available, and if it's not, Modernizr is used to load a local jQuery script. After all of that occurs, a script named needs-jQuery.js will be loaded.

Conclusion

If you're building applications that use some of the latest and greatest features available in HTML5 and CSS3 then Modernizr is an essential tool. By using it you can reduce the amount of custom code required to test for browser features, and provide graceful fallbacks or even load shim/polyfill scripts for older browsers to help fill in missing functionality.

    Read the article

  • CodePlex Daily Summary for Wednesday, November 30, 2011

    CodePlex Daily Summary for Wednesday, November 30, 2011Popular ReleasesLINQ to Twitter: LINQ to Twitter Beta v2.0.22: New support for Windows Phone 7.1, 100% Twitter API coverage.Anno 2070 Assistant: Anno 2070 Assistant v1.1: Anno 2070 Assistant v1.1 Released! Features Included: Building Layouts for Ecos, Tycoons & Techs Production Chains for Ecos, Tycoons & Techs Supply Calculator New Features: The new supply calculator will tell you how many of each production chain you need to sustain the current population. Simply plot in your total inhabitants of each type and calculate! Coming in v2.0: Population Calculator Port from hard coded data to XML data Calculate by Residence Feature and some more secret ...Rawr: Rawr 4.3.0: This is the Downloadable WPF version of Rawr!For web-based version see http://elitistjerks.com/rawr.php You can find the version notes at: http://rawr.codeplex.com/wikipage?title=VersionNotes Rawr AddonWe now have a Rawr Official Addon for in-game exporting and importing of character data hosted on Curse. The Addon does not perform calculations like Rawr, it simply shows your exported Rawr data in wow tooltips and lets you export your character to Rawr (including bag and bank items) like Char...BugNET Issue Tracker: BugNET 0.9.131: This is a bug fix release for 0.9 that fixes a number of pressing issues. FixesBGN-1989 - User verification is not working on the latest release BGN-1988 - Send password notification does not include the actual password and throws an error BGN-1987 - Issues revisions stop being updated BGN-1985 - Search by issue id no longer works Fixed issue when user is not authenticated and clicks on the Vote link the user is redirected to page not found (wrong location for the Login.aspx page) N...QuickStart Engine (3D Game Engine for XNA): QuickStart Engine v0.23: Main FeaturesXNA 4.0 compatible Clean engine architecture Makes it easy to make your own game using the engine. Messaging system allows you to communicate between systems and other entities without coupling your code in ways you wouldn't want to. Entity/Component system allows you to make your own objects and customize them very easily. Terrain engine Quad-tree culling Multi-texture splatting Multi-texture normal mapping Smoothing and scaling Physics engine integration Uses Ji...Windows Azure Toolkit for Windows Phone: Windows Azure Toolkit for Windows Phone v1.3.2: Upgraded Windows Azure projects to Windows Azure SDK for .NET – November 2011. Updated BabelCam, CRUDSqlAzure, WPCloud.ACS, and WPCloud.SQL.ACS samples to include default ACS configuration to allow developers running the samples without owning an ACS namespace.VideoLan DotNet for WinForm, WPF & Silverlight 5: VideoLan DotNet for WinForm, WPF, SL5 - 2011.11.29: The new version add and correct many features : Add Medias list property to VlcControl Correction to implement Medias list property Add PlaybackMode property to VlcControl (Loop/Repeat/Default) Remove "--play-and-pause" option in WPF sample Add MediaList Interops Add MediaListPlayer Interops Correction on Interops (error) Add MediaSubItemAdded event on MediaBaseMFCMAPI: November 2011 Release: Build: 15.0.0.1029 Full release notes at SGriffin's blog. If you just want to run the MFCMAPI or MrMAPI, get the executables. If you want to debug them, get the symbol files and the source. The 64 bit builds will only work on a machine with Outlook 2010 64 bit installed. All other machines should use the 32 bit builds, regardless of the operating system. 
Facebook BadgeWCF Community Site: WCF Web API Preview 6: Welcome to the sixth preview release of WCF Web API on Codeplex! Install WCF Web API Preview 6 using NuGet New Features/EnhancementsURL form encoding - Http request bodies sent as application/x-www-form-urlencoded can now deserialize into objects and participate in content negotiation. Custom OData entity keys - The ODataFormatter now supports custom conventions for determining which properties identify an entity key. [ServiceContract] is no longer required on the Web API class definiti...Devpad: 4.11: Whats new for Devpad 4.11: New Import Archive Improved Save Dialog Improved Replace Dialog Improved Go To Dialog Minor Bug Fix's, improvements and speed upsHome Access Plus+: v7.7: v7.7.1128.2200Added: ShowTo Option on Resources Added: Basic Authentication Logon Method Added: Button on the Live Tracker page to clear logons in the database but not do a remote logoff Updated: jquery from v1.6.2 to v1.7.1 Fixed: Another attempt at fixing the SIMS import in the booking system Fixed: JSON Issues with the Setup Added: More DEBUG Events to the Event Log (note DEBUG not RELEASE) Added: Support for Week A Week B instead of Week 1 and Week 2 Updated: The My Files ...CommonLibrary.NET: CommonLibrary.NET 0.9.8 - Alpha: A collection of very reusable code and components in C# 4.0 ranging from ActiveRecord, Csv, Command Line Parsing, Configuration, Holiday Calendars, Logging, Authentication, and much more. Samples in <root>\src\Lib\CommonLibrary.NET\Samples CommonLibrary.NET 0.9.8 AlphaNew Dynamic Scripting Language : workitem : 7493 Fixes 1622 6803Widget Suite for DotNetNuke: 01.04.00: The following features/enhancements are associated with this release: Bug: Removed the empty box/white space created by some widgets New Widget: FlexSlider New Widget: Google+ Button New Widget: Klout Badge Sample Widget Script FileFxCop Integrator for Visual Studio 2010: FxCop Integrator 2.0.0 RC: Replaced the MSBuild Tasks installer to fix the bug of the targets file. FxCop Integrator is not affected by this bug. (Nov 28 2011) New FeatureSupported calculating code metrics with Code Metrics PowerTool. (Work Item #6568: 6568). Provided MSBuild tasks. #7454: 7454 Supported to filter out auto-generated code from code analysis result. #7485: 7485 Supported exporting report of code analysis result. Supported multi-project analysis. Supported file level analysis. Added the featu...Terminals: Version 2 - Beta 4 Release: Beta 4 Refresh Build Dont forget to backup your config files BEFORE upgrading! As usual, please take time to use and abuse this release. We left logging in place, and this is a debug build so be sure to submit your logs on each bug reported, and please do report all bugs! Updated the About form to include the date and time of the build. Useful for CI builds to ensure we have the correct version "Favourites" and "History" save their expanded states after app restarts Code cleanup, secu...MiniTwitter: 1.76: MiniTwitter 1.76 ???? ?? ?????????? User Streams ???????????? User Streams ???????????、??????????????? REST ?????????? ?????????????????????????????? ??????????????????????????????Media Companion: MC 3.424b Weekly: Ensure .NET 4.0 Full Framework is installed. (Available from http://www.microsoft.com/download/en/details.aspx?id=17718) Ensure the NFO ID fix is applied when transitioning from versions prior to 3.416b. (Details here) Movie Show Resolutions... 
Resolved issue when reverting multiselection of movies to "-none-" Added movie rename support for subtitle files '.srt' & '.sub' Finalised code for '-1' fix - radiobutton to choose either filename or title Fixed issue with Movie Batch Wizard Fanart - ...Advanced Windows Phone Enginering Tool: WPE Downloads: This version of WPE gives you basic updating, restoring, and, erasing for your Windows Phone device.Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.37: Fix for issue #16936 - CSS strings containing ASP.NET replacement blocks that contain the same delimiter character(s) as the CSS string cause the CSS parser to barf. Need to use the -aspnet:1 flag to treat the ASP.NET blocks as single entities. Added support for CSS3 @keyframes syntax. Added NuGet package support to the DLL project. Expand the -line switch to also optionally specify the multiline/single line mode, and the spaces-per-tab for multiline mode. Tweak the -pretty switch to be more ...Get simpler, simpler, and simpler: UI 0.1 Binary: Inklude: you can use it with lib, header and dlls. Uzing: you can use it by adding it to your project.New ProjectsArchiveBytes: ArchiveBytes is an archiving solution with rolling archive capabilities. Arduino Controller: Arduino Controller is a GUI to control and arduino wirelessly using: - XBEES - WIFI - Bluetooth All that is required is: - Arduino Controller - Your wireless technology - And an Arduino If you would like info I'm trying to put together sets that include: - Software - Arduino - And a wireless technology (your choice)Artesis - Studiegids MVC: Artesis StudiegidsAzureHost: AzureHost is a synchronization framework that makes it easy to deploy web application files to Windows Azure and keep them synchronized between Web Roles and Blob Storage. AzureHost is a fork of the Windows Azure Accelerator for Umbraco made more simple and generic.Codes Of My ACE Study: Activity-based Center ExpansionCoreGallery.NET: An extensible .NET-based photo gallery. Project has been inactive for some time and currently in very early planning/prototyping stages. Help is definitely welcome!Custom SharePoint Designer Activities for SharePoint 2010: This project adds to the power of SharePoint Designer workflows by adding additional activities that previously were not available including sending emails with attachments, custom from addresses, and much more. Created with visual studio 2010 for SharePoint 2010.Declarative command line parser: A powerful, declarative command-line parsing system in a single .cs file. This file is available as a nuget package (package id: declcmdparser) and can be easily included or excluded in your project.DevCow: This is a location for all of the community projects that help support DevCow.comejg: ejgFibo.Authority: a authority system based on nhibernate,autofac,asp.net MVC,jquery easyui and so on.Georgia Southern House Control: This is a semester project for a group of students at Georgia Southern university.JSAnalyse - Javascript Analyser and Dependency Checker: JSAnalyse - Javascript Analyser and Dependency CheckerLogger - logging for your .NET project using file, mail, debug output: A light weight yet competent logger for .NET with configurable output modules, such as log file, e-mail and debug log. Easy to add your own output module if needed.MetaEdit+ extension for Visual Studio: Visual Studio extension for integrating MetaEdit+ and Visual Studio. This extension allows you to browse MetaEdit+ models and use the main MetaEdit+ functions from Visual Studio. 
It can also automatically import into Visual Studio the source code generated from MetaEdit+. MonopolyPatrones: Proyecto de patrones de software universidad Javeriana ColombiaMulti-threaded simple Reversi game: XNA Reversi game supporting multi-core processors. NDuplicate: Finds duplicate code (methods) across a C# solution.Netduino-compatible embedded webserver: Embedded Webserver is a small footprint, multi-platform webserver implementation exposing IIS-like API to web enable your hobby project in four easy steps.Orchard UserReviewedItems: An Orchard module extension of the Contrib.Reviews module. Includes a orchard content part for showing the reviews that are written by the user of the current context.PivotViewer Extensions: PivotViewer Extensions is a collection of add-ons to the Silverlight 5 PivotViewer. PowerShell TFS Build Server CmdLets: A set of CmdLets for managing a Team Foundation Server (TFS) Build server through PowerShell. SABON: Simple Access Business Object Network. This is a JavaScript Library that provides quickest way on accessing HTML elements, and DOM this includes Object/Class Creator, Table Effects, Message Box, Object Pop-up, Email Notification, and caseed: Seed creates complete ASP.NET website from object oriented business model definitions. Programers serves programs? No... Programs serves programers, Yep! "Firstly, I think I will build a system with 90% solid functions, and 10% dynamiic functions, the dynamic functions is the flower blossoming from top of it."Sharpshooter: ????OOP??????,??????。Silverlight Front end calling WCF Service in Azure Web Role: I recently written a sample which has Windows Azure Web Role with Silverlight front end, calling to WCF service which is hosted in the same Windows Azure Web Role. SPWikiDialogFix: For SharePoint 2010: Fix to prevent users from receiving ‘The Ribbon Tab with id: "Ribbon.Read" has not been made available for this page or does not exist. Use Ribbon.MakeTabAvailable()’ when viewing wiki pages in dialog mode. For explanation and details, read the blog post at http://blog.furuknap.net/solving-the-ribbon-tab-with-id-ribbon-read-has-not-been-made-available-for-this-page-or-does-not-exist-use-ribbon-maketabavailable Written in C#, because you deserve it.Trinity Christian School Lunches: This is a semester project for a group of students at Georgia Southern University creating a software packages that allows a school to keep track of the lunch orders of students.wp7project: We`re just training...Zavida: C# learning curve.??: fff

    Read the article

  • CodePlex Daily Summary for Tuesday, December 04, 2012

    CodePlex Daily Summary for Tuesday, December 04, 2012Popular ReleasesAcDown?????: AcDown????? v4.3.2: ??●AcDown??????????、??、??、???????。????,????,?????????????????????????。???????????Acfun、????(Bilibili)、??、??、YouTube、??、???、??????、SF????、????????????。 ●??????AcPlay?????,??????、????????????????。 ● AcDown??????????????????,????????????????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ?? v4.3.2?? ?????????????????? ??Acfun??????? ??Bilibili?????? ??Bilibili???????????? ??Bilibili????????? ??????????????? ???? ??Bilibili??????? ????32??64? Windows XP/...ExtJS based ASP.NET 2.0 Controls: FineUI v3.2.2: ??FineUI ?? ExtJS ??? ASP.NET 2.0 ???。 FineUI??? ?? No JavaScript,No CSS,No UpdatePanel,No ViewState,No WebServices ???????。 ?????? IE 7.0、Firefox 3.6、Chrome 3.0、Opera 10.5、Safari 3.0+ ???? Apache License 2.0 (Apache) ???? ??:http://fineui.com/bbs/ ??:http://fineui.com/demo/ ??:http://fineui.com/doc/ ??:http://fineui.codeplex.com/ ???? +2012-12-03 v3.2.2 -?????????????,?????button/button_menu.aspx(????)。 +?Window????Plain??;?ToolbarPosition??Footer??;?????FooterBarAlign??。 -????win...Player Framework by Microsoft: Player Framework for Windows Phone 8: This is a brand new version of the Player Framework for Windows Phone, available exclusively for Windows Phone 8, and now based upon the Player Framework for Windows 8. While this new version is not backward compatible with Windows Phone 7 (get that http://smf.codeplex.com/releases/view/88970), it does offer the same great feature set plus dozens of new features such as advertising, localization support, and improved skinning. Click here for more information about what's new in the Windows P...SSH.NET Library: 2012.12.3: New feature(s): + SynchronizeDirectoriesmenu4web: menu4web 1.1 - free javascript menu: menu4web 1.1 has been tested with all major browsers: Firefox, Chrome, IE, Opera and Safari. Minified m4w.js library is less than 9K. Includes 22 menu examples of different styles. Can be freely distributed under The MIT License (MIT).Quest: Quest 5.3 Beta: New features in Quest 5.3 include: Grid-based map (sponsored by Phillip Zolla) Changable POV (sponsored by Phillip Zolla) Game log (sponsored by Phillip Zolla) Customisable object link colour (sponsored by Phillip Zolla) More room description options (by James Gregory) More mathematical functions now available to expressions Desktop Player uses the same UI as WebPlayer - this will make it much easier to implement customisation options New sorting functions: ObjectListSort(list,...Chinook Database: Chinook Database 1.4: Chinook Database 1.4 This is a sample database available in multiple formats: SQL scripts for multiple database vendors, embeded database files, and XML format. The Chinook data model is available here. ChinookDatabase1.4_CompleteVersion.zip is a complete package for all supported databases/data sources. There are also packages for each specific data source. Supported Database ServersDB2 EffiProz MySQL Oracle PostgreSQL SQL Server SQL Server Compact SQLite Issues Resolved293...RiP-Ripper & PG-Ripper: RiP-Ripper 2.9.34: changes FIXED: Thanks Function when "Download each post in it's own folder" is disabled FIXED: "PixHub.eu" linksD3 Loot Tracker: 1.5.6: Updated to work with D3 version 1.0.6.13300????????API for .Net SDK: SDK for .Net ??? Release 5: 2012?11?30??? 
?OAuth?????????????????????SDK OAuth oauth = new OAuth("<AppKey>", "<AppSecret>", "<????>"); WebProxy proxy = new WebProxy(); proxy.Address = new Uri("http://proxy.domain.com:3128");//??????????? proxy.Credentials = new NetworkCredential("<??>", "<??>");//???????,??? oauth.Proxy = proxy; //??????,?~ DirectQ: DirectQ II 2012-11-29: A (slightly) modernized port of Quake II to D3D9. You need SM3 or better hardware to run this - if you don't have it, then don't even bother. It should work on Windows Vista, 7 or 8; it may also work on XP but I haven't tested. Known bugs include: Some mods may not work. This is unfortunately due to the nature of Quake II's game DLLs; sometimes a recompile of the game DLL is all that's needed. In any event, ensure that the game DLL is compatible with the last release of Quake II first (...Magelia WebStore Open-source Ecommerce software: Magelia WebStore 2.2: new UI for the Administration console Bugs fixes and improvement version 2.2.215.3JayData - The cross-platform HTML5 data-management library for JavaScript: JayData 1.2.5: What's new in JayData 1.2.5For detailed release notes check the release notes. Handlebars template engine supportImplement data manager applications with JayData using Handlebars.js for templating. Include JayDataModules/handlebars.js and begin typing the mustaches :) Blogpost: Handlebars templates in JayData Handlebars helpers and model driven commanding in JayData Easy JayStorm cloud data managementManage cloud data using the same syntax and data management concept just like any other data ...nopCommerce. Open source shopping cart (ASP.NET MVC): nopcommerce 2.70: Highlight features & improvements: • Performance optimization. • Search engine optimization. ID-less URLs for products, categories, and manufacturers. • Added ACL support (access control list) on products and categories. • Minify and bundle JavaScript files. • Allow a store owner to decide which billing/shipping address fields are enabled/disabled/required (like it's already done for the registration page). • Moved to MVC 4 (.NET 4.5 is required). • Now Visual Studio 2012 is required to work ...SQL Server Partition Management: Partition Management Release 3.0: Release 3.0 adds support for SQL Server 2012 and is backward compatible with SQL Server 2008 and 2005. The release consists of: • A Readme file • The Executable • The source code (Visual Studio project) Enhancements include: -- Support for Columnstore indexes in SQL Server 2012 -- Ability to create TSQL scripts for staging table and index creation operations -- Full support for global date and time formats, locale independent -- Support for binary partitioning column types -- Fixes to is...NHook - A debugger API: NHook 1.0: x86 debugger Resolve symbol from MS Public server Resolve RVA from executable's image Add breakpoints Assemble / Disassemble target process assembly More information here, you can also check unit tests that are real sample code.PDF Library: PDFLib v2.0: Release notes This new version include many bug fixes and include support for stream objects and cross-reference object streams. New FeatureExtract images from the PDFMCEBuddy 2.x: MCEBuddy 2.3.10: Critical Update to 2.3.9: Changelog for 2.3.10 (32bit and 64bit) 1. AsfBin executable missing from build 2. Removed extra references from build to avoid conflict 3. Showanalyzer installation now checked on remote engine machine Changelog for 2.3.9 (32bit and 64bit) 1. Added support for WTV output profile 2. 
Added support for minimizing MCEBuddy to the system tray 3. Added support for custom archive folder 4. Added support to disable subdirectory monitoring 5. Added support for better TS fil...DotNetNuke® Community Edition CMS: 07.00.00: Major Highlights Fixed issue that caused profiles of deleted users to be available Removed the postback after checkboxes are selected in Page Settings > Taxonomy Implemented the functionality required to edit security role names and social group names Fixed JavaScript error when using a ";" semicolon as a profile property Fixed issue when using DateTime properties in profiles Fixed viewstate error when using Facebook authentication in conjunction with "require valid profile fo...CODE Framework: 4.0.21128.0: See change notes in the documentation section for details on what's new.New Projects.Net Assembly Checker: This is the tool which can help you solve the issues like "exceptions on assembly loading."Abide - Halo Map Editor: Abide is used for modification of Halo 2 maps, and other Blam engine games.AX-OData Queries: The focus of this project, is to provide a utility that will allow someone to understand all Queries within an instance of AX 2012, that can be OData Feeds.Channel 9 Apps: The Channel 9 Apps project aims to bring Channel 9 content natively to all smart devices in the form of native applications.Edutainment: Eine App für Windows mit dem Ziel, eine Schülern ein bestimmtes Thema beizubringen.Generic Windows Service Host: *Release Coming Soon* - This is a plug-in based Windows Service that will automatically execute assemblies found in a specified directory on start-up.Good Diary: Good Diary Applicationhedefgrup: Source code for hedef-grupInIReader: It's a small and simple C# based InIReader. Read, Load and write easily with InIFiles.jsGotoDefinition: jsGotoDefinition is a VS2010 plugin that, amongst other things, allows developers to right-click and navigate to a Javascript function's definition, giving them same sort of experience available with C#.JSON-RPC RT: Json-RPC implements bidirectional JSON-RPC protocol using WinRT asynchronous async / await methods.LearnByteartRetail: The reason that I host the project in codeplext is let me to work same source code at home and company~MIX++: A C++ implementation of an emulator for the MIX processor and a MIXAL assembler as defined in 'The Art of Computer Programming'Nant SVN Extension: This library extends NAnt with SVN client tasks.POM Tools: Résumé du projetQuotesDaily: A Website to Display QuotesRSS Feed Reader for Windows 8: La aplicación RSS Feed Viewer for Windows 8 implementa un lector de RSS para una única fuente de datos que se integra con los servicios de Windows 8.SharePoint 2010 App Model: This Project has the purpose of create a set of component that emulate the new functionality of SharePoint 2013 that is the new App Model for SharePoint 2010.SiLL: SiLL is a framework for rapid development of custom CMS solutions. It was designed to offer unprecedented flexibility with no overhead present in other frameworSolid Edge SDK: Solid Edge SDKTestForCodePlex: this is a personal project for the codeplex test, public user could *not* jion in the project please.verse: Verse is a simple dynamic language and runtime with a focus on parallelism and pattern matching.VivendoByte Toolkit: A toolkit useful to build Windows Store apps. It contains helpers to bind commands in MVVM, user-controls and converters. 
It's built using Galasoft MVVM Light.Windows 8 Store Video App Framework: Dieses Framework zeigt die Erstellung einer eigenen Video App für Windows 8. Lokale Videos, gestreamt aus dem Netz und sogar von YouTube - alles kein Problem!WJ's Windows Phone User Controls: This project contains list of commonly used Windows Phone user controls in my daily projects. Those user controls are built on Windows Phone OS 7.1.WorkingWU: Project for photo designer web layoutxGui: a Direct3D GUI, support Direct X 9/10/11.YDNoteOpenAPI4N: a .net library for ydNote open API! ????OPENAPI?.NET??! ??????: ??????!

    Read the article

  • CodePlex Daily Summary for Monday, May 12, 2014

    CodePlex Daily Summary for Monday, May 12, 2014

    Popular Releases:
    - MySqlBackup.NET - MySQL Backup Solution for C#, VB.NET, ASP.NET: MySqlBackup.NET 2.0.4 - Removed "SET GLOBAL max_allowed_packet" (it takes effect on new connections, not the current one). Cleaned up old and inappropriate IntelliSense. Minor updates.
    - SSIS SFTP Task Control Flow Component: SSIS SFTP v2 for SQL Server 2012 - Please refer to the Documentation page for installation instructions.
    - ExtJS based ASP.NET Controls: FineUI v4.0.6 - An ExtJS-based ASP.NET control library: no JavaScript, no CSS, no UpdatePanel, no ViewState, no WebServices required. Supports IE 8.0+, Chrome, Firefox, Opera, and Safari. Apache License v2.0 (note: ExtJS itself is GPL v3, http://www.sencha.com/license). Links: http://fineui.com/, http://fineui.com/bbs/, http://fineui.com/demo/, http://fineui.com/doc/, http://fineui.codeplex.com/
    - TerraMap (Terraria World Map Viewer): TerraMap 1.0.3.33908 - Added support for the new Terraria v1.2.4 update (new items, walls, and tiles). Fixed Issue 35206: Highlight/Find doesn't work for Demon Altars. The setup file makes sure .NET 4 is installed, installs TerraMap, creates desktop and start menu shortcuts, adds a .wld file association, and launches TerraMap. If you prefer the zip file, make sure .NET Framework v4.5 is installed, then just download and extract the ZIP file and run TerraMap.exe.
    - Office App Model Samples: Office App Model Samples v2.0
    - Readable Passphrase Generator: KeePass Plugin 0.13.0 - Added "mutators" which add uppercase and numbers to passphrases (to help comply with upper/lower/number complexity rules). Additional API methods to help consume the generator from third-party C# projects. 13,160 words in the default dictionary (~600 more than the previous release).
    - CS-Script for Notepad++ (C# intellisense and code execution): Release v1.0.25.0 - The MemberInfo/MethodInfo popup is now positioned properly to fit the screen. In the MethodInfo popup, method signatures are word-wrapped. Implemented a Debug text value visualizer. Pinning sub-values from the Watch Panel.
    - xFunc: xFunc 2.15.3 - Added #53.
    - R.NET: R.NET 1.5.12 - A beta release towards R.NET 1.6; you are encouraged to use 1.5.12 now and give feedback (see the documentation for setup and usage instructions). Main change: the C stack limit was not disabled on Windows, which for reasons possibly peculiar to R meant that non-concurrent access to R from multiple threads was not stable. This is now fixed, with the fix validated by a unit test. Thanks to Odugen, skyguy94, and previously others (evolvedmicrobe, tomasp).
    - CTI Text Encryption: CTI Text Encryption 5.2 - 5.2: removed the Cut button; fixed the Reset All button not resetting the encrypted text column; swapped the locations of the Copy and Paste buttons; users can now use local fonts to display characters of their language correctly (a font settings file is saved in the program's folder). 5.1: improved the encryption process and minor UI updates; 5.1 is not compatible with older versions. 5.0: improved the encryption algorithm; simplified inner non-encryption related mec...
    - SEToolbox: SEToolbox 01.029.006 Release 1 - Fix to allow keyboard search on the load dialog (type the first few letters of your save). Fixed the check for new releases. Changed the way ship details are loaded to reduce load time for worlds with very large ships (100,000+ blocks). Fixed the image importer, which was incorrectly listing 'Asteroid' as an import option. Minor changes to menus (text and appearance) for clarity and OS consistency. Added reading of the world palette for the color dialog editor. WIP on subsystem editor. Can now multiselec...
    - Media Companion: Media Companion MC3.597b - A number of fixes and some new features; most are self-explanatory, so check out the options in Preferences. New features: Movie - allow saving Title and Sort Title in Title Case format; Movie - allow saving fanart.jpg if the movie is in its own folder; TV - display episode source, taken from the episode filename. Fixed: Movie - added Fill Tags from plot keywords to the Batch Rescraper; Movie - fixed TMDB s...
    - SimCityPak: SimCityPak 0.3.0.0 - Contains several bugfixes, newly identified properties, and some UI improvements. Main new features: UI overhaul for the main index list (icons for each different index, including icons for different property files; tooltips for all relevant fields; removed clutter). Identified hundreds of additional properties (thanks to MaxisGuillaume) - this should make gameplay modding easier.
    - Seal Report: Seal Report 1.4 - New features: Report Designer - new option to convert a Report Source into a Repository Source; Report Designer - new contextual helper menus to select, remove, copy, and prompt elements in a model; Web Server - new option to expand sub-folders displayed in the tree view; Web Server - the Web Folder Description File can be a .cshtml file to display an MVC View; Views - additional CSS parameters for some DIVs; NVD3 Chart - some default configuration values have been changed. Issues addressed: 16 ...
    - Magick.NET: Magick.NET 6.8.9.002 - Magick.NET linked with ImageMagick 6.8.9.0.
    - VidCoder: 1.5.22 Beta - Added the ability to burn SRT subtitles. Updated to HandBrake SVN 6169. Added checks to prevent VidCoder from running with a database version newer than it expects. Tooltips in the Advanced Video panel now trigger on the field labels as well as the fields themselves. Fixed updating preset/profile/tune/level settings on changing video encoder, which should resolve some problems with QSV encoding. Fixed tunes and profiles getting set to blank when switching between x264 and x265. Fixed co...
    - NuGet: NuGet 2.8.2 - A 2.8.2 version of the NuGet packages and the NuGet.exe command-line tool will be released; the 2.8.2 release will not include updated VS or WebMatrix extensions. NuGet.Server.Extensions.dll needs to be used alongside NuGet-Signed.exe to provide the NuGet.exe mirror functionality.
    - MVC - Filtered TextBox: SourceCode - The complete source code archived as a zip file.
    - SmartStore.NET - Free ASP.NET MVC Ecommerce Shopping Cart Solution: SmartStore.NET 2.0.2 - Primarily a maintenance release for version 2.0.0 (released April 04 2014), containing several improvements and important fixes. Bugfixes: IMPORTANT FIX - a memory leak leading to an OutOfMemoryException in the application after a while; installation fix - some varchar(MAX) columns were created as varchar(4000), with a migration added to fix the column specs; installation fix - setup fails with the exception "Value cannot be null. Parameter name: stream". Bugfix for stock iss...
    - Channel9's Absolute Beginner Series: Windows Phone 8.1 - Entire source code for the Windows Phone 8.1 Absolute Beginner Series.

    New Projects:
    - 2112110080: VÕ TUẤN ANH - object-oriented programming (OOP)
    - Arquitectura de Proyecto: A reference architecture created for the developer community, layer-based and inspired by SOLID.
    - EControl: Capstone project for the 4th semester of the BSI (Information Systems) program at SENAC.
    - Factory Pattern: Sample code that implements factory patterns.
    - Microsoft Dynamics NAV 2013 R2 - OpenXml Samples: Samples for extending the OpenXml functionality used in the Microsoft Dynamics NAV 2013 R2 Excel Buffer.
    - OCBiz: ocbiz
    - Office 365 Drive Mapping for Enterprise Desktops: Office 365 drive mapping for use with enterprise desktops. Allows users to utilize SharePoint Online for folder redirection.
    - Repo Pattern: This project will help you understand the Repository Pattern.
    - Roll20 Custom Power Card Macro Generator: Create HoneyBadger's Custom Power Cards for the Roll20 Virtual Tabletop more easily with this GUI tool!
    - SharpLog: A simple, high-performance, portable logging framework for .NET which follows convention over configuration and works with PCL.
    - Star Trek Online Fan Page: School project.
    - Subtitle RT: A Windows Runtime app that loads and displays subtitles next to a video playback.
    - TomLibraries: A collection of my C# and XAML helpers used across my projects.
    - TP Cuatrimestral Laboratorio V: University project for the Laboratorio V course of the TSSI program at UTN FRGP.
    - Unfolnt Copyright
    - Viser Informacioni sistem: An information guide for the Viser school of electrical engineering and computing, aimed at students, visitors, and teaching staff.
    - Visual Studio Online App for Zendesk: The Visual Studio Online App for Zendesk allows users to integrate Visual Studio Online with Zendesk.
    - Visual Studio Screensaver: A nice screensaver with the Visual Studio logo in the middle and a color-changing background.
    - Z SqlBulkCopy Extensions: SqlBulkCopy Extensions provide must-have methods with outstanding performance missing from the SqlBulkCopy class, like Delete, Update, Merge, and Upsert.

    Read the article

  • Parsing Concerns

    - by Jesse
    If you've ever written an application that accepts date and/or time inputs from an external source (a person, an uploaded file, posted XML, etc.), then you've no doubt had to parse some text representing a date into a data structure that a computer can understand. Similarly, you've probably also had to take values from those same data structures and turn them back into their original formats. Most (all?) suitably modern development platforms expose some kind of parsing and formatting functionality for turning text into dates and vice versa. In .NET, the DateTime data structure exposes 'Parse' and 'ToString' methods for this purpose. This post will focus mostly on parsing, though most of the examples and suggestions below also apply to the ToString method. The DateTime.Parse method is pretty permissive in the values it will accept (though apparently not as permissive as some other languages), which makes it pretty easy to take some text provided by a user and turn it into a proper DateTime instance. Here are some examples (note that the resulting DateTime values are shown using the RFC1123 format):

    DateTime.Parse("3/12/2010"); //Fri, 12 Mar 2010 00:00:00 GMT
    DateTime.Parse("2:00 AM"); //Sat, 01 Jan 2011 02:00:00 GMT (took today's date as the date portion)
    DateTime.Parse("5-15/2010"); //Sat, 15 May 2010 00:00:00 GMT
    DateTime.Parse("7/8"); //Fri, 08 Jul 2011 00:00:00 GMT
    DateTime.Parse("Thursday, July 1, 2010"); //Thu, 01 Jul 2010 00:00:00 GMT

    Dealing With Inaccuracy

    While the DateTime struct can store a date and time value accurate down to the millisecond, most date strings provided by a user are not going to specify values with that much precision. In each of the above examples, the Parse method was given a partial value from which to construct a proper DateTime, so it had to assume what you meant and fill in the missing parts of the date and time for you. This is a good thing, especially when we're talking about input from a user: we can't expect every person using our software to provide a year, day, month, hour, minute, second, and millisecond every time they need to express a date. That said, it's important for developers to understand what assumptions the software might be making and plan accordingly. I think the assumptions made in each of the above examples were pretty reasonable, though if we dig into this method a little deeper we'll find a lot more assumptions being made under the covers than you might have previously known. One of the biggest assumptions the DateTime.Parse method has to make relates to the format of the date represented by the provided string. Consider the input string '10-02-15'. To some people, that might look like '15-Feb-2010'. To others, it might be '02-Oct-2015'. Like many things, it depends on where you're from.

    This Is America!

    Most cultures around the world have adopted either a "little-endian" or a "big-endian" date format (source: Date And Time Notation By Country). In this context, a "little-endian" date format lists the date parts with the least significant first ("day-month-year"), while a "big-endian" format lists them with the most significant first ("year-month-day"). It's worth noting here that ISO 8601 defines a "big-endian" format as the international standard.
    While I personally prefer "big-endian" style date formats, I think both styles make sense in that they follow some logical standard with respect to ordering the date parts by their significance. Here in the United States, however, we buck that trend by using what is, in comparison, a completely nonsensical format of "month/day/year". Almost no other country in the world uses this format. I've been fortunate in my life to have done some international travel, so I've been aware of this difference for many years, but never really thought much about it. Until recently, I had been developing software exclusively for US-based audiences and remained blissfully ignorant of the different date formats employed by other countries around the world. The web application I work on is being rolled out to users in different countries, so I was recently tasked with updating it to support different date formats. As it turns out, .NET has a great mechanism for dealing with different date formats right out of the box, and supporting date formats for different cultures is actually pretty easy once you understand this mechanism.

    Pulling the Curtain Back On the Parse Method

    Have you ever taken a look at the different flavors (read: overloads) that the DateTime.Parse method comes in? In its simplest form, it takes a single string parameter and returns the corresponding DateTime value (if it can divine what the date value should be). You can optionally provide two additional parameters to this method: a 'System.IFormatProvider' and a 'System.Globalization.DateTimeStyles'. Both of these optional parameters have some bearing on the assumptions made while parsing a date, but for the purposes of this article I'm going to focus on the 'System.IFormatProvider' parameter. IFormatProvider exposes a single method called 'GetFormat' that returns an object used to determine the proper format for displaying and parsing things like numbers and dates. This interface plays a big role in the globalization capabilities built into the .NET Framework, and the cornerstone of those capabilities is the 'System.Globalization.CultureInfo' class. To put it simply, the CultureInfo class encapsulates information related to things like language, writing system, and date formats for a certain culture. Support for many cultures is "baked in" to the .NET Framework, and there is capacity for defining custom cultures if needed (though I've never delved into that). While the details of the CultureInfo class are beyond the scope of this post, for now let me just point out that CultureInfo implements the IFormatProvider interface. This means that a CultureInfo instance created for a given culture can be provided to the DateTime.Parse method in order to tell it what date formats it should expect. So what happens when you don't provide this value? Let's crack this method open in Reflector: when no IFormatProvider parameter is provided (i.e. we use the simple DateTime.Parse(string) overload), 'DateTimeFormatInfo.CurrentInfo' is used instead. Drilling down a bit further into the implementation of the DateTimeFormatInfo.CurrentInfo property, we can determine that, in the absence of an IFormatProvider, the DateTime.Parse method will assume that the provided date should be treated as if it were in the format defined by the CultureInfo object attached to the current thread.
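    To make that concrete, here's a minimal sketch (the cultures and the input string are chosen purely for illustration) showing the same text parsed under two different cultures:

        using System;
        using System.Globalization;

        class CultureParseDemo
        {
            static void Main()
            {
                string input = "12/03/2011";

                // en-US reads month/day/year...
                DateTime us = DateTime.Parse(input, new CultureInfo("en-US"));
                Console.WriteLine(us.ToString("R")); // Sat, 03 Dec 2011 00:00:00 GMT

                // ...while en-GB reads day/month/year.
                DateTime gb = DateTime.Parse(input, new CultureInfo("en-GB"));
                Console.WriteLine(gb.ToString("R")); // Sat, 12 Mar 2011 00:00:00 GMT
            }
        }

    Same twelve characters, two different dates - which is exactly the thread-culture dependence described above.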
    The culture specified by the CultureInfo instance on the current thread can vary depending on several factors, but if you're writing an application where a single instance might be used by people from different cultures (i.e. a web application with an international user base), it's important to know what this value is. Having a solid strategy for setting the current thread's culture for each incoming request in an internationally used ASP.NET application is obviously important, and might make a good topic for a future post. For now, let's think about the implications of not having the correct culture set on the current thread. Let's say you're running an ASP.NET application on a server in the United States. The server was set up by English speakers in the United States, so it's configured for US English. It exposes a web page where users can enter order data, one piece of which is an anticipated order delivery date. Most users are in the US, and therefore enter dates in a 'month/day/year' format. The application uses the DateTime.Parse(string) method to turn the values provided by the user into actual DateTime instances that can be stored in the database. This all works fine, because your users and your server both think of dates in the same way. Now you need to support some users in South America, where a 'day/month/year' format is used. The best case scenario at this point is a user entering March 13, 2011 as '13/03/2011'. This would cause the call to DateTime.Parse to blow up, since that value doesn't look like a valid date in the US English culture - there is no thirteenth month. (Note: in all likelihood you might be using the DateTime.TryParse(string) method here instead, but that method behaves the same way with regard to date formats.) "But wait a minute", you might be saying to yourself, "I thought you said that this was the best case scenario?" This scenario would prevent users from entering orders in the system, which is bad, but it could be worse! What if the order needs to be delivered a day earlier, on March 12, 2011? Now the user enters '12/03/2011', and the call to DateTime.Parse sees what it thinks is a valid date - but it's not the right date. Now this order won't get delivered until December 3, 2011. In my opinion, that kind of silent data corruption is a much bigger problem than having the Parse call fail.

    What To Do?

    My order entry example is a bit contrived, but I think it serves to illustrate the potential issues with accepting date input from users. There are some approaches you can take to make this easier on you and your users:

    - Eliminate ambiguity by using a graphical date input control. I'm personally a fan of the jQuery UI Datepicker widget. It's pretty easy to set up, can be themed to match the look and feel of your site, and has support for multiple languages and cultures.
    - Be sure you have a way to track the culture preference of each user in your system. For a web application this could be done using something like a cookie or a session state variable.
    - Ensure that the current user's culture is applied correctly to DateTime formatting and parsing code. This can be accomplished by ensuring that each request has the handling thread's CultureInfo set properly, or by using the Format and Parse method overloads that accept an IFormatProvider instance, passing a CultureInfo object constructed from the current user's culture preference.
    - When in doubt, favor formats that are internationally recognizable.
      Using the string '2010-03-05' is likely to be recognized as March 5, 2010 by users from most (if not all) cultures.
    - Favor standard date format strings over custom ones.

    So far we've only talked about turning a string into a DateTime, but most of the same "gotchas" apply when doing the opposite. Consider this code:

    someDateValue.ToString("MM/dd/yyyy");

    This will output the same string regardless of what the current thread's culture is set to (with the exception of some cultures that don't use the Gregorian calendar system, but that's another issue altogether). For displaying dates to users, it would be better to do this:

    someDateValue.ToString("d");

    This standard format string of "d" will use the "short date format" as defined by the culture attached to the current thread (or provided in an IFormatProvider instance to the proper method overload). This means it will honor the proper month/day/year, year/month/day, or day/month/year format for the culture.

    Knowing Your Audience

    The examples and suggestions shown above can go a long way toward getting an application in shape for dealing with date inputs from users in multiple cultures. There are some instances, however, where taking approaches like these would not be appropriate. In some cases, the provider or consumer of date values that pass through your application is not a person, but another application (or another portion of your own application). For example, if your site has a page that accepts a date as a query string parameter, you'll probably want to format that date using an invariant date format; otherwise, the same URL could end up evaluating to a different page depending on the user viewing it. In addition, if your application exports data for consumption by other systems, it's best to have an agreed-upon format that all systems can use, one that does not vary depending upon whether the users of the systems on either side prefer a month/day/year or day/month/year format. I'll look more at some approaches for dealing with these situations in a future post. If you take away one thing from this post, make it an understanding of the importance of knowing where the dates that pass through your system come from and are going to. You will likely want to vary your parsing and formatting approach depending on your audience.
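    As a companion sketch to the formatting advice above (the cultures are again chosen purely for illustration), here is the custom format string side by side with the culture-aware "d" standard format string:

        using System;
        using System.Globalization;

        class CultureFormatDemo
        {
            static void Main()
            {
                DateTime date = new DateTime(2011, 3, 12);

                // Custom format string: same output no matter the culture.
                Console.WriteLine(date.ToString("MM/dd/yyyy"));                  // 03/12/2011

                // Standard "short date" format: honors each culture's conventions.
                Console.WriteLine(date.ToString("d", new CultureInfo("en-US"))); // 3/12/2011
                Console.WriteLine(date.ToString("d", new CultureInfo("de-DE"))); // 12.03.2011
            }
        }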

    Read the article

  • Error starting JBoss 5.1.0

    - by Alexandre
    I've installed JBoss 5.1.0 on Xubuntu (running as a guest on VMware - Windows 7 host). It did work fine for some days, but now I'm completely unable to start it anymore. Every time I try to start it, I get a "Port 8x83 already in use" error. I've tried running it with different port configurations, and none of them work. I looked for the services using the problematic ports with netstat and lsof, but they never show up. Since this error occurs in all port configurations, I think this is a JBoss problem. Below is the error stack trace:

    2010-06-15 06:21:47,992 INFO [org.jboss.web.WebService] (main) Using RMI server codebase: http://192.168.0.104:8083/
    2010-06-15 06:21:48,085 ERROR [org.jboss.kernel.plugins.dependency.AbstractKernelController] (main) Error installing to Start: name=jboss:service=WebService state=Create mode=Manual requiredState=Installed
    java.lang.Exception: Port 8083 already in use.
        at org.jboss.web.WebServer.start(WebServer.java:233)
        at org.jboss.web.WebService.startService(WebService.java:322)
        at org.jboss.system.ServiceMBeanSupport.jbossInternalStart(ServiceMBeanSupport.java:376)
        at org.jboss.system.ServiceMBeanSupport.jbossInternalLifecycle(ServiceMBeanSupport.java:322)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.jboss.mx.interceptor.ReflectedDispatcher.invoke(ReflectedDispatcher.java:157)
        at org.jboss.mx.server.Invocation.dispatch(Invocation.java:96)
        at org.jboss.mx.server.Invocation.invoke(Invocation.java:88)
        at org.jboss.mx.server.AbstractMBeanInvoker.invoke(AbstractMBeanInvoker.java:264)
        at org.jboss.mx.server.MBeanServerImpl.invoke(MBeanServerImpl.java:668)
        at org.jboss.system.microcontainer.ServiceProxy.invoke(ServiceProxy.java:189)
        at $Proxy38.start(Unknown Source)
        at org.jboss.system.microcontainer.StartStopLifecycleAction.installAction(StartStopLifecycleAction.java:42)
        at org.jboss.system.microcontainer.StartStopLifecycleAction.installAction(StartStopLifecycleAction.java:37)
        at org.jboss.dependency.plugins.action.SimpleControllerContextAction.simpleInstallAction(SimpleControllerContextAction.java:62)
        at org.jboss.dependency.plugins.action.AccessControllerContextAction.install(AccessControllerContextAction.java:71)
        at org.jboss.dependency.plugins.AbstractControllerContextActions.install(AbstractControllerContextActions.java:51)
        at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:348)
        at org.jboss.system.microcontainer.ServiceControllerContext.install(ServiceControllerContext.java:286)
        at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:1631)
        at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:934)
        at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1082)
        at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:984)
        at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:822)
        at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:553)
        at org.jboss.system.ServiceController.doChange(ServiceController.java:688)
        at org.jboss.system.ServiceController.start(ServiceController.java:460)
        at org.jboss.system.deployers.ServiceDeployer.start(ServiceDeployer.java:163)
        at org.jboss.system.deployers.ServiceDeployer.deploy(ServiceDeployer.java:99)
        at org.jboss.system.deployers.ServiceDeployer.deploy(ServiceDeployer.java:46)
        at org.jboss.deployers.spi.deployer.helpers.AbstractSimpleRealDeployer.internalDeploy(AbstractSimpleRealDeployer.java:62)
        at org.jboss.deployers.spi.deployer.helpers.AbstractRealDeployer.deploy(AbstractRealDeployer.java:50)
        at org.jboss.deployers.spi.deployer.helpers.AbstractRealDeployer.deploy(AbstractRealDeployer.java:50)
        at org.jboss.deployers.plugins.deployers.DeployerWrapper.deploy(DeployerWrapper.java:171)
        at org.jboss.deployers.plugins.deployers.DeployersImpl.doDeploy(DeployersImpl.java:1439)
        at org.jboss.deployers.plugins.deployers.DeployersImpl.doInstallParentFirst(DeployersImpl.java:1157)
        at org.jboss.deployers.plugins.deployers.DeployersImpl.doInstallParentFirst(DeployersImpl.java:1178)
        at org.jboss.deployers.plugins.deployers.DeployersImpl.install(DeployersImpl.java:1098)
        at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:348)
        at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:1631)
        at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:934)
        at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1082)
        at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:984)
        at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:822)
        at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:553)
        at org.jboss.deployers.plugins.deployers.DeployersImpl.process(DeployersImpl.java:781)
        at org.jboss.deployers.plugins.main.MainDeployerImpl.process(MainDeployerImpl.java:702)
        at org.jboss.system.server.profileservice.repository.MainDeployerAdapter.process(MainDeployerAdapter.java:117)
        at org.jboss.system.server.profileservice.repository.ProfileDeployAction.install(ProfileDeployAction.java:70)
        at org.jboss.system.server.profileservice.repository.AbstractProfileAction.install(AbstractProfileAction.java:53)
        at org.jboss.system.server.profileservice.repository.AbstractProfileService.install(AbstractProfileService.java:361)
        at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:348)
        at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:1631)
        at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:934)
        at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1082)
        at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:984)
        at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:822)
        at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:553)
        at org.jboss.system.server.profileservice.repository.AbstractProfileService.activateProfile(AbstractProfileService.java:306)
        at org.jboss.system.server.profileservice.ProfileServiceBootstrap.start(ProfileServiceBootstrap.java:271)
        at org.jboss.bootstrap.AbstractServerImpl.start(AbstractServerImpl.java:461)
        at org.jboss.Main.boot(Main.java:221)
        at org.jboss.Main$1.run(Main.java:556)
        at java.lang.Thread.run(Thread.java:619)
    Caused by: java.net.BindException: Cannot assign requested address
        at java.net.PlainSocketImpl.socketBind(Native Method)
        at java.net.PlainSocketImpl.bind(PlainSocketImpl.java:365)
        at java.net.ServerSocket.bind(ServerSocket.java:319)
        at java.net.ServerSocket.<init>(ServerSocket.java:185)
        at org.jboss.web.WebServer.start(WebServer.java:226)

    Any hint on this? Thanks
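    Worth noting: the root cause at the bottom is not actually a port conflict but "Cannot assign requested address", which suggests JBoss is trying to bind to an IP (192.168.0.104, per the RMI codebase line) that the guest no longer holds - a common situation after a DHCP lease change in a VM. A hedged diagnostic sketch, using JBoss 5's standard -b bind-address switch:

        # Does any interface still hold the address JBoss is binding to?
        ip addr show
        # If not, start JBoss bound to the current address, or to all interfaces:
        ./run.sh -b 0.0.0.0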

    Read the article

  • Can't get MySQL to install

    - by James Marthenal
    I'd like to think I know what I'm doing in a Unix shell, but maybe not. I made a mistake in a configuration file for MySQL, so I decided to just uninstall it and then reinstall it, so I did:

    sudo apt-get --purge remove mysql-server mysql-server-5.0 mysql-client

    The files were deleted, so I then tried to install it, but it didn't ask me for a root password or anything else, so I uninstalled it using the above command again and then did

    sudo rm -rf /etc/mysql
    sudo rm /etc/init.d/mysql
    sudo rm -rf /var/lib/mysql*

    I then restarted the computer and installed it again:

    sudo apt-get install mysql-server mysql-client

    It asked for a root password, and everything looked like it would work, until I saw this:

    $ sudo apt-get install mysql-server mysql-client
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following extra packages will be installed: mysql-server-5.0
    Suggested packages: tinyca
    The following NEW packages will be installed: mysql-client mysql-server mysql-server-5.0
    0 upgraded, 3 newly installed, 0 to remove and 1 not upgraded.
    Need to get 0B/27.4MB of archives.
    After this operation, 86.7MB of additional disk space will be used.
    Do you want to continue [Y/n]? y
    WARNING: The following packages cannot be authenticated! mysql-server-5.0 mysql-client mysql-server
    Authentication warning overridden.
    Preconfiguring packages ...
    Can't exec "/tmp/mysql-server-5.0.config.28101": Permission denied at /usr/share/perl/5.10/IPC/Open3.pm line 168.
    open2: exec of /tmp/mysql-server-5.0.config.28101 configure failed at /usr/share/perl5/Debconf/ConfModule.pm line 59
    mysql-server-5.0 failed to preconfigure, with exit status 255
    Selecting previously deselected package mysql-server-5.0.
    (Reading database ... 160284 files and directories currently installed.)
    Unpacking mysql-server-5.0 (from .../mysql-server-5.0_5.0.51a-24+lenny5_amd64.deb) ...
    Selecting previously deselected package mysql-client.
    Unpacking mysql-client (from .../mysql-client_5.0.51a-24+lenny5_all.deb) ...
    Selecting previously deselected package mysql-server.
    Unpacking mysql-server (from .../mysql-server_5.0.51a-24+lenny5_all.deb) ...
    Processing triggers for man-db ...
    Setting up mysql-server-5.0 (5.0.51a-24+lenny5) ...
    Stopping MySQL database server: mysqld.
    /var/lib/dpkg/info/mysql-server-5.0.postinst: line 144: /etc/mysql/conf.d/old_passwords.cnf: No such file or directory
    dpkg: error processing mysql-server-5.0 (--configure): subprocess post-installation script returned error exit status 1
    Setting up mysql-client (5.0.51a-24+lenny5) ...
    dpkg: dependency problems prevent configuration of mysql-server: mysql-server depends on mysql-server-5.0; however: Package mysql-server-5.0 is not configured yet.
    dpkg: error processing mysql-server (--configure): dependency problems - leaving unconfigured
    Errors were encountered while processing: mysql-server-5.0 mysql-server
    E: Sub-process /usr/bin/dpkg returned an error code (1)

    Now I can't seem to figure out what to do. I just want a clean MySQL installation at this point. I'm running the latest stable release of Debian. All help is appreciated - thanks!
Edit: I looked at this similar question, which suggests that I uninstall mysql-common, but when I try to do so I see: The following packages will be REMOVED: apache2 apache2-mpm-prefork apache2-utils apache2.2-common git-svn libapache2-mod-php5 libapache2-mod-python libapache2-svn libaprutil1 libdbd-mysql-perl libdbd-mysql-rubygem libmysql-ruby libmysql-ruby1.8 libmysql-rubygem libmysqlclient15-dev libmysqlclient15off librdf-perl librdf0 libserf-0-0 libsvn-perl libsvn1 mysql-client-5.0 mysql-common mytop ndn-apache22-php5 ndn-apache22-svn ndn-interpreters ndn-lighttpd ndn-netsaint-plugins ndn-perl-modules ndn-php5-cgi ndn-php5-xcache ndn-php53 ndn-php53-suhosin ndn-rubygems php5 php5-mcrypt php5-mysql proftpd proftpd-mod-mysql python-django python-mysqldb python-subversion python-svn subversion subversion-tools trac zendoptimizer 0 upgraded, 0 newly installed, 48 to remove and 1 not upgraded. Eeek! Any suggestions?
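    For what it's worth, the postinst failure ("/etc/mysql/conf.d/old_passwords.cnf: No such file or directory") is consistent with the earlier rm -rf /etc/mysql: the post-installation script expects that directory to exist. A hedged recovery sketch using standard apt/dpkg commands (untested against this exact box):

        # Recreate the config directory the postinst script writes into,
        # then let dpkg finish configuring the half-installed packages.
        sudo mkdir -p /etc/mysql/conf.d
        sudo dpkg --configure -a
        # If that still fails, purge and reinstall from a clean slate:
        sudo apt-get purge mysql-server mysql-server-5.0 mysql-client
        sudo apt-get install mysql-server mysql-client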

    Read the article

  • HP SmartArray P400: How to repair failed logical drive?

    - by TegtmeierDE
    I have an HP server with a SmartArray P400 controller (incl. 256 MB cache/battery backup) with a logical drive whose failed physical drive was replaced but does not rebuild. This is how it looked when I detected the error:

    ~# /usr/sbin/hpacucli ctrl slot=0 show config
    Smart Array P400 in Slot 0 (Embedded) (sn: XXXX)
    array A (SATA, Unused Space: 0 MB)
      logicaldrive 1 (698.6 GB, RAID 1, OK)
      physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SATA, 750 GB, OK)
      physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SATA, 750 GB, OK)
    array B (SATA, Unused Space: 0 MB)
      logicaldrive 2 (2.7 TB, RAID 5, Failed)
      physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SATA, 750 GB, OK)
      physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SATA, 750 GB, OK)
      physicaldrive 2I:1:5 (port 2I:box 1:bay 5, SATA, 750 GB, OK)
      physicaldrive 2I:1:6 (port 2I:box 1:bay 6, SATA, 750 GB, Failed)
      physicaldrive 2I:1:7 (port 2I:box 1:bay 7, SATA, 750 GB, OK)
    unassigned
      physicaldrive 2I:1:8 (port 2I:box 1:bay 8, SATA, 750 GB, OK)

    I thought I had drive 2I:1:8 configured as a spare for array A and array B, but it seems this was not the case :-(. I noticed the problem due to I/O errors on the host, even though only one physical drive of the RAID 5 had failed. Does someone know why this could happen? The logical drive should go into "Degraded" mode but still be fully accessible from the host OS!? I first tried to add the unassigned drive 2I:1:8 as a spare to logicaldrive 2, but this was not possible:

    ~# /usr/sbin/hpacucli ctrl slot=0 array B add spares=2I:1:8
    Error: This operation is not supported with the current configuration. Use the "show" command on devices to show additional details about the configuration.

    Interestingly, it is possible to add the unassigned drive to the first array without problems. I thought maybe the controller put the array into "Failed" state due to the missing spare and protects failed arrays from modification, so I tried to re-enable the logical drive (in order to add the spare afterwards):

    ~# /usr/sbin/hpacucli ctrl slot=0 ld 2 modify reenable
    Warning: Any previously existing data on the logical drive may not be valid or recoverable. Continue? (y/n) y
    Error: This operation is not supported with the current configuration. Use the "show" command on devices to show additional details about the configuration.

    But as you can see, re-enabling the logical drive was not possible. I then replaced the failed drive by hot-swapping it with the unassigned drive. The status now looks like this:

    ~# /usr/sbin/hpacucli ctrl slot=0 show config
    Smart Array P400 in Slot 0 (Embedded) (sn: XXXX)
    array A (SATA, Unused Space: 0 MB)
      logicaldrive 1 (698.6 GB, RAID 1, OK)
      physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SATA, 750 GB, OK)
      physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SATA, 750 GB, OK)
    array B (SATA, Unused Space: 0 MB)
      logicaldrive 2 (2.7 TB, RAID 5, Failed)
      physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SATA, 750 GB, OK)
      physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SATA, 750 GB, OK)
      physicaldrive 2I:1:5 (port 2I:box 1:bay 5, SATA, 750 GB, OK)
      physicaldrive 2I:1:6 (port 2I:box 1:bay 6, SATA, 750 GB, OK)
      physicaldrive 2I:1:7 (port 2I:box 1:bay 7, SATA, 750 GB, OK)

    The logical drive is still not accessible. Why is it not rebuilding? What can I do?
    FYI, this is the configuration of my controller:

    ~# /usr/sbin/hpacucli ctrl slot=0 show
    Smart Array P400 in Slot 0 (Embedded)
      Bus Interface: PCI
      Slot: 0
      Serial Number: XXXX
      Cache Serial Number: XXXX
      RAID 6 (ADG) Status: Enabled
      Controller Status: OK
      Chassis Slot:
      Hardware Revision: Rev E
      Firmware Version: 5.22
      Rebuild Priority: Medium
      Expand Priority: Medium
      Surface Scan Delay: 15 secs
      Surface Analysis Inconsistency Notification: Disabled
      Raid1 Write Buffering: Disabled
      Post Prompt Timeout: 0 secs
      Cache Board Present: True
      Cache Status: OK
      Accelerator Ratio: 25% Read / 75% Write
      Drive Write Cache: Disabled
      Total Cache Size: 256 MB
      No-Battery Write Cache: Disabled
      Cache Backup Power Source: Batteries
      Battery/Capacitor Count: 1
      Battery/Capacitor Status: OK
      SATA NCQ Supported: True

    Thanks for your help in advance.
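    For completeness, the controller's detailed view of the failed logical drive and the status of each disk can be pulled with the standard "show" forms (the slot and logical drive numbers here are taken from the question):

        ~# /usr/sbin/hpacucli ctrl slot=0 ld 2 show detail
        ~# /usr/sbin/hpacucli ctrl slot=0 pd all show status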

    Read the article

  • How to bridge Debian guest VM to VPN via Cisco AnyConnect Client running on Windows Vista host

    - by bgoodr
    I am running Cisco AnyConnect VPN Client version 2.5.3054 on a laptop running Windows Vista Home Premium (version 6.0.6002) Service Pack 2, with VMware Player version 4.0.2 build-591240. The guest operating system running under VMware Player is Debian 6.0.2.1 i386. The laptop is connected to a wireless connection, and I can browse the web from Windows Vista using Firefox just fine. I am able to boot into the Debian VM, open a browser, and access websites on the WAN from within the VM just fine. I can ping real Linux hosts on my LAN via:

    ping <lan_system>.local

    where <lan_system> is the hostname returned by uname -a on that system on my LAN. From a DOS CMD shell, I am able to ping hosts that exist on the remote network served by the Cisco AnyConnect Client's VPN (and without the .local suffix applied as above):

    ping <remote_system>

    However, from within the Debian VM, I expect to also be able to ping those same remote hosts (<remote_system>) that are tunnelled over the VPN set up by the Cisco AnyConnect Client. Let's say that I can ping a <remote_system> called flubber from a DOS CMD prompt just fine. When I execute the Linux ping command from inside the Debian VM via:

    ping flubber

    it returns immediately with this output:

    ping: unknown host flubber

    For reference, since I suspect it will be useful, here is the output of the route print command from the DOS CMD prompt:

    ===========================================================================
    Interface List
    30 ...00 05 9a 3c 7a 00 ...... Cisco AnyConnect VPN Virtual Miniport Adapter for Windows
    11 ...00 1b 9e c4 de e5 ...... Atheros AR5007EG Wireless Network Adapter
    26 ...00 50 56 c0 00 01 ...... VMware Virtual Ethernet Adapter for VMnet1
    28 ...00 50 56 c0 00 08 ...... VMware Virtual Ethernet Adapter for VMnet8
     1 ........................... Software Loopback Interface 1
    12 ...02 00 54 55 4e 01 ...... Teredo Tunneling Pseudo-Interface
    13 ...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #3
    32 ...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #4
    27 ...00 00 00 00 00 00 00 e0 isatap.{E5292CF6-4FBB-4320-806D-A6B366769255}
    17 ...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #2
    20 ...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #8
    22 ...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #10
    24 ...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #11
    25 ...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #12
    29 ...00 00 00 00 00 00 00 e0 isatap.{C3852986-5053-4E2E-BE60-52EA2FCF5899}
    41 ...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #14
    ===========================================================================

    At the top window border of the VM, clicking on Virtual Machine, then Virtual Machine Settings, then Network Adapter, I have these two options checked:

    [X] Bridged: Connected directly to the physical network
        [X] Replicate physical network connection state
    [ ] NAT: used to share the host's IP address
    [ ] Host-only: A private network shared with the host
    [ ] LAN segment: [ ] <LAN Segments...> <Advanced>

    I've toyed with the other options such as NAT and Host-only, but that had no effect. Is there some way to allow the VM to access those <remote_system>'s?
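    One quick way to split the problem - whether it's routing or just name resolution inside the guest - is to ping a VPN-side host by IP address from the Debian VM. The address below is a hypothetical placeholder for a real <remote_system> IP:

        # From the Debian guest; 10.0.0.5 stands in for flubber's actual address.
        ping -c 3 10.0.0.5      # reachable by IP => only DNS is missing in the guest
        cat /etc/resolv.conf    # compare with the DNS servers AnyConnect set on the host

    If the IP ping works, the fix is likely just pointing the guest's resolver at the VPN's DNS servers; if it fails, the bridged adapter probably isn't carrying VPN traffic at all (AnyConnect typically binds the tunnel to the host's stack, which favors the NAT configuration).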

    Read the article

  • Persistent static routes fail on MacOS 10.6.5 startup!

    - by verbalicious
    I'm unable to get static routes to persist across a reboot on Mac OS 10.6.5. I've tried all of the methods prescribed in Google search results and in previous posts on this site. I've tried manually creating a launchd daemon, and used RouteSplit's launchd daemon, to no avail. It's clear that the interface is not ready when these methods attempt to apply the route. The workstation in question is getting its IP from DHCP and probably hasn't gotten its DHCP lease when the command runs. We're able to apply the route by hand when logged in, but not through startup methods. Is there another way to apply this route by sneaking the command into something later, but before the login window appears to the user? Here is some relevant log info from system.log. You can see the "route: writing to routing socket: Network is unreachable" errors where my launchd script fires off. I've tried adding extra "sleep" and "ipconfig waitall" statements later in the script, but this doesn't fly.

    Dec 15 19:30:41 localhost com.apple.launchd[1]: *** launchd[1] has started up. ***
    Dec 15 19:30:45 localhost mDNSResponder[18]: mDNSResponder mDNSResponder-258.13 (Oct 8 2010 17:10:30) starting
    Dec 15 19:30:47 localhost configd[15]: bootp_session_transmit: bpf_write(en1) failed: Network is down (50)
    Dec 15 19:30:47 localhost configd[15]: DHCP en1: INIT transmit failed
    Dec 15 19:30:47 localhost configd[15]: network configuration changed.
    Dec 15 19:30:47 Administrators-MacBook-Pro configd[15]: setting hostname to "Administrators-MacBook-Pro.local"
    Dec 15 19:30:47 Administrators-MacBook-Pro blued[16]: Apple Bluetooth daemon started
    Dec 15 19:30:52 Administrators-MacBook-Pro syslog[67]: routes.sh: Starting RouteSplit
    Dec 15 19:30:53 Administrators-MacBook-Pro com.apple.usbmuxd[41]: usbmuxd-207 built for iTunesTenOne on Oct 19 2010 at 13:50:35, running 64 bit
    Dec 15 19:30:54 Administrators-MacBook-Pro /System/Library/CoreServices/loginwindow.app/Contents/MacOS/loginwindow[50]: Login Window Application Started
    Dec 15 19:30:55 Administrators-MacBook-Pro bootlog[61]: BOOT_TIME: 1292459441 0
    Dec 15 19:30:55 Administrators-MacBook-Pro syslog[86]: routes.sh: static route 192.168.0.0/23 192.168.2.2
    Dec 15 19:30:55 Administrators-MacBook-Pro net.routes.static[65]: route: writing to routing socket: Network is unreachable
    Dec 15 19:30:55 Administrators-MacBook-Pro net.routes.static[65]: add net 192.168.0.0: gateway 192.168.2.2: Network is unreachable
    Dec 15 19:30:57 Administrators-MacBook-Pro org.apache.httpd[38]: httpd: Could not reliably determine the server's fully qualified domain name, using Administrators-MacBook-Pro.local for ServerName
    Dec 15 19:30:58 Administrators-MacBook-Pro loginwindow[50]: Login Window Started Security Agent
    Dec 15 19:30:58 Administrators-MacBook-Pro WindowServer[89]: kCGErrorFailure: Set a breakpoint @ CGErrorBreakpoint() to catch errors as they are logged.
    Dec 15 19:30:58 Administrators-MacBook-Pro com.apple.WindowServer[89]: Wed Dec 15 19:30:58 Administrators-MacBook-Pro.local WindowServer[89] <Error>: kCGErrorFailure: Set a breakpoint @ CGErrorBreakpoint() to catch errors as they are logged.
    Dec 15 19:31:18 Administrators-MacBook-Pro configd[15]: network configuration changed.
    Dec 15 19:31:19 administrators-macbook-pro configd[15]: setting hostname to "administrators-macbook-pro.local"
    Dec 15 19:31:25 administrators-macbook-pro _mdnsresponder[121]: /usr/libexec/ntpd-wrapper: scutil key State:/Network/Global/DNS not present after 30 seconds
    Dec 15 19:31:25 administrators-macbook-pro _mdnsresponder[124]: sntp options: a=2 v=1 e=0.100 E=5.000 P=2147483647.000
    Dec 15 19:31:25 administrators-macbook-pro _mdnsresponder[124]: d=15 c=5 x=0 op=1 l=/var/run/sntp.pid f= time.apple.com
    Dec 15 19:31:25 administrators-macbook-pro _mdnsresponder[124]: sntp: getaddrinfo(hostname, ntp) failed with nodename nor servname provided, or not known
    Dec 15 19:31:27 administrators-macbook-pro configd[15]: network configuration changed.
    Dec 15 19:31:27 Administrators-MacBook-Pro configd[15]: setting hostname to "Administrators-MacBook-Pro.local"
    Dec 15 19:31:27 Administrators-MacBook-Pro ntpd[37]: Cannot find existing interface for address 17.151.16.20
    Dec 15 19:31:27 Administrators-MacBook-Pro ntpd_initres[125]: ntpd indicates no data available!
    Dec 15 19:31:31 Administrators-MacBook-Pro sshd[128]: USER_PROCESS: 133 ttys000
    Dec 15 19:31:37 Administrators-MacBook-Pro sudo[138]: administrator : TTY=ttys000 ; PWD=/Users/administrator ; USER=root ; COMMAND=/usr/bin/less /var/log/system.log

    You can see the following line in /var/log/kernel.log that shows the en0 interface coming up:

    Dec 15 19:30:51 Administrators-MacBook-Pro kernel[0]: Ethernet [AppleBCM5701Ethernet]: Link up on en0, 1-Gigabit, Full-duplex, No flow-control, Debug [796d,0f01,0de1,0300,c1e1,3800]
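    One approach consistent with these symptoms is to make the launchd-invoked script block until DHCP has produced a default route before adding the static route. A sketch only - the destination and gateway are taken from the log above, and the wait loop itself is an assumption, not a verified fix:

        #!/bin/sh
        # Wait until a default route exists (i.e. DHCP has completed),
        # then add the static route that was failing at boot.
        while ! route -n get default >/dev/null 2>&1; do
            sleep 2
        done
        route -n add -net 192.168.0.0/23 192.168.2.2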

    Read the article

  • IIS7 web farm - local or shared content?

    - by rbeier
    We're setting up an IIS7 web farm with two servers. Should each server have its own local copy of the content, or should they pull content directly from a UNC share? What are the pros and cons of each approach?

    We currently have a single live server, WEB1, with content stored locally on a separate partition. A job periodically syncs WEB1 to a standby server, WEB2, using robocopy for content and msdeploy for config. If WEB1 goes down, Nagios notifies us, and we manually run a script to move the IP addresses to WEB2's network interface. Both servers are actually VMs running on separate VMware ESX 4 hosts. The servers are domain-joined. We have around 50-60 live sites on WEB1 - mostly ASP.NET, with a few that are just static HTML. Most are low-traffic "microsites". A few have moderate traffic, but none are massive.

    We'd like to change this so both WEB1 and WEB2 are actively serving content. This is mainly for reliability - if WEB1 goes down, we don't want to have to manually intervene to fail things over. Spreading the load is also nice, but the load is not high enough right now for us to need this. We're planning to configure our firewall to balance traffic across the two servers; it will detect when a server goes down and will send all the traffic to the remaining live server. We're planning to use sticky sessions for now... eventually we may move to SQL Server session state and stateless load balancing.

    But we need a way for the servers to share content. We were originally planning to move all the content to a UNC share. Our storage provider says they can set up a highly available SMB share for us, so if we go the UNC route, the storage shouldn't be a single point of failure. But we're wondering about the downsides to this approach:

    - We'll need to change the physical paths for each site and virtual directory. There are also some projects that have absolute paths in their web.config files - we'll have to update those as well.
    - We'll need to create a domain user for the web servers to access the share, and grant that user appropriate permissions. I haven't looked into this yet - I'm not sure if the application pool identity needs to be changed to this user, or if there's another way to tell IIS to use this account when connecting to the share.
    - Sites will no longer be able to access their content if there's ever an Active Directory problem.
    - In general, it just seems a lot more complicated, with more moving parts that could break.

    Our storage provider would create a volume for us on their redundant SAN. If I understand correctly, this SAN volume would be mounted on a VM running in their redundant VMware environment; this VM would then expose the SMB share to our web servers. On the other hand, a benefit of the shared content approach is that we'd only need to deploy code to one place, and there would never be a temporary inconsistency between multiple copies of the content. This thread is pretty interesting, though some of these people are working at a much larger scale.

    I've just been discussing content so far, but we also need to think about configuration. I don't know if we can just use DFS replication for the applicationHost.config and other files, or if it's best to use the shared configuration feature with the config on a UNC share. What do you think? Thanks for your help, Richard
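    For reference, the kind of robocopy/msdeploy sync job described above can stay quite small if local content is kept on each node. A hedged sketch - the paths and hostnames are illustrative, not the poster's real ones:

        rem Mirror the content partition to the second node.
        robocopy D:\Content \\WEB2\D$\Content /MIR /R:3 /W:5

        rem Replicate the IIS server configuration (sites, app pools, bindings).
        msdeploy -verb:sync -source:webServer,computerName=WEB1 -dest:webServer,computerName=WEB2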

    Read the article

  • can't run phpunit tests on bash ubuntu 11.10

    - by Mohamad Elbialy
    I'm working with Ubuntu 11.10 as root on my local machine, I've installed XAMPP 1.7.7, and I'm a newbie to Ubuntu. While following a tutorial on SitePoint (http://www.sitepoint.com/getting-started-with-pear/) on how to install PEAR to use PHPUnit, I didn't notice it then, but it seems that I installed (or used) an existing PHP version 5.3.6 on the command line to do that, and the PEAR installation was built on this version. With XAMPP installed, I now have two versions of PHP: XAMPP's 5.3.8 and the 5.3.6. Anyway, what I want to do is to use the existing XAMPP PHP version and build PEAR on that, to do all my work through XAMPP. So my questions are:

    1. How do I uninstall PHP 5.3.6 and its PEAR installation?
    2. How do I link the command line with XAMPP's PHP version?
    3. How do I build the next PEAR installation on XAMPP's PHP version?
    4. I want all my web dev work to go through XAMPP - is there anything else I need to uninstall to avoid this confusion?

    Here is what I did in an attempt to solve the problem. I opened ~/.bashrc (gedit ~/.bashrc) and added this at the end to change the environment path:

    export PATH=/opt/lampp/bin:$PATH
    export PATH=/opt/lampp/lib/php:$PATH
    export PATH=/opt/lampp/lib/php/PHPUnit/pearcmd.php:$PATH

    I checked the PHP and PEAR versions using 'php -v' and 'pear list' and got this output:

    PHP 5.3.8 (cli) (built: Sep 19 2011 13:29:27)
    Copyright (c) 1997-2011 The PHP Group
    Zend Engine v2.3.0, Copyright (c) 1998-2011 Zend Technologies

    and for PEAR:

    Installed packages, channel pear.php.net:
    =========================================
    Package          Version State
    Archive_Tar      1.3.9   stable
    Console_Getopt   1.3.1   stable
    PEAR             1.9.4   stable
    PHPUnit          1.3.2   stable
    Structures_Graph 1.0.4   stable
    XML_Util         1.2.1   stable

    When I run 'phpunit MessageTest.php', I get:

    PHP Warning: require_once(PHP/CodeCoverage/Filter.php): failed to open stream: No such file or directory in /usr/bin/phpunit on line 38
    Warning: require_once(PHP/CodeCoverage/Filter.php): failed to open stream: No such file or directory in /usr/bin/phpunit on line 38
    PHP Fatal error: require_once(): Failed opening required 'PHP/CodeCoverage/Filter.php' (include_path='.:/php/includes:/opt/lampp/lib/php:/opt/lampp/bin:/opt/lampp/lib/php/PEAR') in /usr/bin/phpunit on line 38

    I then ran the following commands, as reported in other questions as a solution to that error:

    sudo apt-get remove phpunit
    sudo pear channel-discover pear.phpunit.de
    sudo pear channel-discover pear.symfony-project.com
    sudo pear channel-discover components.ez.no
    sudo pear update-channels
    sudo pear upgrade-all
    sudo pear install --alldeps phpunit/PHPUnit
    sudo apt-get install phpunit

    and updated the include path in php.ini to:

    include_path = ".:/php/includes:/opt/lampp/lib/php:/opt/lampp/bin:/opt/lampp/lib/php/PEAR"

    The PHP file MessageTest.php:

    <?php
    require 'PHPUnit/Autoload.php';

    $path = '/opt/lampp/lib/php/PEAR';
    set_include_path(get_include_path() . PATH_SEPARATOR . $path);

    require_once 'PHPUnit/Framework/TestCase.php';
    require_once 'Message/Controller/MessageController.php';

    class MessageTest extends PHPUnit_Framework_TestCase
    {
        private $message;

        public function setUp()
        {
            $this->message = new MessageController();
        }

        public function tearDown()
        {
        }

        public function testRepeat()
        {
            $yell = "Hello, Any One Out There?";
            $this->message->repeat($yell);                    // sending a request
            $returnedMessage = $this->message->repeat($yell); // get a response
            $this->assertEquals($returnedMessage, $yell);
        }
    }
    ?>

    The MessageController class from MessageController.php that I'm trying to test:

    <?php
    class MessageController
    {
        public function actionHelloWorld()
        {
            echo 'helloWorld';
        }

        public function repeat($inputString)
        {
            return $inputString;
        }
    }

    $msg = new MessageController;
    ?>

    I'm not using any PHP framework; I just made the files and classes, that's all. And still I get the same error:

    PHP Warning: require_once(PHP/CodeCoverage/Filter.php): failed to open stream: No such file or directory in /usr/bin/phpunit on line 38
    Warning: require_once(PHP/CodeCoverage/Filter.php): failed to open stream: No such file or directory in /usr/bin/phpunit on line 38
    PHP Fatal error: require_once(): Failed opening required 'PHP/CodeCoverage/Filter.php' (include_path='.:/php/includes:/opt/lampp/lib/php:/opt/lampp/bin:/opt/lampp/lib/php/PEAR') in /usr/bin/phpunit on line 38

    Sure, I'm getting demanding here - I've wasted a lot of time and got really frustrated over this. Hope you guys don't get bored reading through my questions; I appreciate your help. Thanks in advance, Mohamad Elbialy
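    One line of investigation consistent with the error above: the /usr/bin/phpunit wrapper may be running under the distro's PHP and its PEAR tree rather than XAMPP's, so PHP_CodeCoverage is simply not on that interpreter's include path. A few hedged checks using standard pear/php commands (the paths come from the question):

        which phpunit
        head -n 1 "$(which phpunit)"    # which PHP binary does the wrapper invoke?
        pear config-get php_dir         # where did PEAR actually install PHPUnit?
        /opt/lampp/bin/php -r 'echo get_include_path(), PHP_EOL;'

    If php_dir does not appear in the include path of the PHP binary that /usr/bin/phpunit invokes, that mismatch would produce exactly the missing PHP/CodeCoverage/Filter.php failure.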

    Read the article

  • Ubuntu: Failure to login with multiple video adapters

    - by tsilb
    Forgive my ignorance, for I am a complete Linux noob. I have a computer with three video cards and six monitors. It works great on Windows. I'm trying to get it to run Ubuntu as well. It loads fine when configured to run on one adapter: it detects both screens and runs OK. But I want to turn the other four monitors on and run the whole thing as one extended desktop (one session, etc.). So I downloaded and installed the newest ATI driver for Linux, which seems to work, kinda. I ran this to set up the screens:

    aticonfig --adapter=all --initial -f

    Now when I boot, Ubuntu seems to turn on all the screens (3 viewports, each with two cloned displays from what I can tell). When I enter my login info OR move the mouse off the main screen, the screens freeze and the keyboard/mouse become unresponsive. The aticonfig-generated xorg.conf is included below. I have tried the following:

    - aticonfig -initial -f - works, but only detects the primary adapter and 2 screens
    - aticccle - tells me I have to reboot after enabling the other cards, then goes into the above-described freezing state
    - aticonfig --adapter=all --initial -f - see above
    - Manually editing xorg.conf with my limited knowledge - was able to get two adapters running, but only the second adapter initialized while the primary stopped at the Ubuntu boot screen; I was unable to see the login prompt, and it froze after I logged in blindly (I was able to hear the login sound)
    - Using the generic "radeon" driver instead of the ATI proprietary driver with the above init attempts
    - Toggling Xinerama
    - Various combinations of the above

    Hardware:
    - Intel Core 2 Quad Q6600
    - 8GB DDR2
    - (3x) ATI Radeon HD 4680
    - 5 monitors (21"W, 21"W, 22"W portrait, 22"W portrait, 19") and an HDTV (26"W, HDMI) in a horizontal arrangement

    I know next to nothing about Linux/Ubuntu aside from basic filesystem navigation, editing text files, and accessing my local and networked Windows stores and shares. Basically this is the most advanced thing I've had to do. I installed today. Please advise how to make this configuration work.
    My xorg.conf:

    Section "ServerLayout"
        Identifier "Layout0"
        Screen 0 "aticonfig-Screen[0]-0" 0 0
        Screen "aticonfig-Screen[1]-0" RightOf "aticonfig-Screen[0]-0"
        Screen "aticonfig-Screen[2]-0" RightOf "aticonfig-Screen[1]-0"
        Option "RenderAccel" "true"
        Option "AllowGLXWithComposite" "true"
    EndSection

    Section "Files"
    EndSection

    Section "Module"
    EndSection

    Section "ServerFlags"
        Option "Xinerama" "0"
    EndSection

    Section "Monitor"
        Identifier "aticonfig-Monitor[0]-0"
        Option "VendorName" "ATI Proprietary Driver"
        Option "ModelName" "Generic Autodetecting Monitor"
        Option "DPMS" "true"
    EndSection

    Section "Monitor"
        Identifier "aticonfig-Monitor[1]-0"
        Option "VendorName" "ATI Proprietary Driver"
        Option "ModelName" "Generic Autodetecting Monitor"
        Option "DPMS" "true"
    EndSection

    Section "Monitor"
        Identifier "aticonfig-Monitor[2]-0"
        Option "VendorName" "ATI Proprietary Driver"
        Option "ModelName" "Generic Autodetecting Monitor"
        Option "DPMS" "true"
    EndSection

    Section "Device"
        Identifier "aticonfig-Device[0]-0"
        Driver "fglrx"
        BusID "PCI:1:0:0"
    EndSection

    Section "Device"
        Identifier "aticonfig-Device[1]-0"
        Driver "fglrx"
        BusID "PCI:3:0:0"
    EndSection

    Section "Device"
        Identifier "aticonfig-Device[2]-0"
        Driver "fglrx"
        BusID "PCI:4:0:0"
    EndSection

    Section "Screen"
        Identifier "aticonfig-Screen[0]-0"
        Device "aticonfig-Device[0]-0"
        Monitor "aticonfig-Monitor[0]-0"
        DefaultDepth 24
        SubSection "Display"
            Viewport 0 0
            Depth 24
        EndSubSection
    EndSection

    Section "Screen"
        Identifier "aticonfig-Screen[1]-0"
        Device "aticonfig-Device[1]-0"
        Monitor "aticonfig-Monitor[1]-0"
        DefaultDepth 24
        SubSection "Display"
            Viewport 0 0
            Depth 24
        EndSubSection
    EndSection

    Section "Screen"
        Identifier "aticonfig-Screen[2]-0"
        Device "aticonfig-Device[2]-0"
        Monitor "aticonfig-Monitor[2]-0"
        DefaultDepth 24
        SubSection "Display"
            Viewport 0 0
            Depth 24
        EndSubSection
    EndSection
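    Worth noting for the single-extended-desktop goal: with three separate Device/Screen sections as above, each card drives its own independent X screen, and spanning one desktop across them classically requires Xinerama, which the generated config currently disables. A hedged pointer rather than a guaranteed fix (enabling Xinerama also typically disables GPU compositing):

        Section "ServerFlags"
            Option "Xinerama" "1"
        EndSection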

    Read the article

  • ubuntu wifi disconnection & frustratingly connects to unavailable wifi

    - by ashishsony
    Hi, I have already posted this here: here. This has happened before with the Ubuntu 9.1 Beta2 build too: my wifi disconnects if I'm idle for 5 minutes, so I can't leave my laptop to download anything - I have to keep using it continuously. As soon as I leave it idle for about 5 minutes, the wifi disconnects and a popup appears asking for the wifi password, with the password already filled in. I just click on connect and it connects again. So what's the use of asking for the password if the pre-filled password works correctly? And this is happening on Ubuntu 10.04 Beta2 too. The workaround is to open any menu, like the Applications menu in the taskbar, and keep it open; in this state Ubuntu's idle detection never activates, so the wifi never gets disconnected. I have confirmed this many times. This seems to be repeating again and again, and I don't know why.

    The second thing I want to report is that there is no way to report this bug to Ubuntu. launchpad.net talks of going through a bug reporting process that is done against a definite package - but how does a user know which package is causing this error? There should be a clearer process for reporting such bugs to the Ubuntu team.

    Thirdly, the apport utility that reports crashing apps is totally useless on 10.04 Beta2: it collects information and then reports that I can't submit the report because I don't have 100 other packages, without updating which I can't submit the report. Surely on a beta build packages would be continuously updated, so no system would be reported as fully updated - and so no practical apport reporting is possible?

    Please address these issues - really frustrating, all this. I'm a big fan of Ubuntu, but these things really bug me. And just to add, fourthly: the suspend/hibernate feature has never ever worked on my Toshiba M70-113 laptop, on any Ubuntu version; I always have to hard reboot after putting it into suspend/hibernate mode. On Windows this has never been the case - why can't Ubuntu beat Windows in such cases too? I would really like to see this soon.

    Most importantly: when the router switches off and the wifi signal goes away, why does Ubuntu keep trying to connect to that very wifi like hell, and, when it can't connect, show the prompt to manually connect - with the wifi key already filled in? What's the use of saving the key if it has to ask me whether to connect or not? And if the network isn't available, just wait until it is available. I only have the option to cancel, and if I cancel it won't auto-connect! What the heck? One can see in the image that it says "authentication required by wireless network" when there isn't any, as the router has gone down!

    Read the article

  • Persistent static routes fail on MacOS 10.6.5 startup!

    - by verbalicious
    I'm unable to get static routes to persist across a reboot on Mac OS 10.6.5. I've tried all of the methods prescribed in Google search results and in previous posts on this site: I've tried manually creating a launchd daemon, and used RouteSplit's launchd daemon, to no avail. It's clear that the interface is not ready when these methods attempt to apply the route; the workstation in question gets its IP from DHCP and probably hasn't received its DHCP lease when the command runs. We're able to apply the route by hand when logged in, but not through startup methods. Is there another way to apply this route by sneaking the command into something later, but before the login window appears to the user?

    Here is some relevant info from system.log. You can see the "route: writing to routing socket: Network is unreachable" errors where my launchd script fires off. I've tried adding extra "sleep" and "ipconfig waitall" statements later in the script, but this doesn't fly.

        Dec 15 19:30:41 localhost com.apple.launchd[1]: *** launchd[1] has started up. ***
        Dec 15 19:30:45 localhost mDNSResponder[18]: mDNSResponder mDNSResponder-258.13 (Oct 8 2010 17:10:30) starting
        Dec 15 19:30:47 localhost configd[15]: bootp_session_transmit: bpf_write(en1) failed: Network is down (50)
        Dec 15 19:30:47 localhost configd[15]: DHCP en1: INIT transmit failed
        Dec 15 19:30:47 localhost configd[15]: network configuration changed.
        Dec 15 19:30:47 Administrators-MacBook-Pro configd[15]: setting hostname to "Administrators-MacBook-Pro.local"
        Dec 15 19:30:47 Administrators-MacBook-Pro blued[16]: Apple Bluetooth daemon started
        Dec 15 19:30:52 Administrators-MacBook-Pro syslog[67]: routes.sh: Starting RouteSplit
        Dec 15 19:30:53 Administrators-MacBook-Pro com.apple.usbmuxd[41]: usbmuxd-207 built for iTunesTenOne on Oct 19 2010 at 13:50:35, running 64 bit
        Dec 15 19:30:54 Administrators-MacBook-Pro /System/Library/CoreServices/loginwindow.app/Contents/MacOS/loginwindow[50]: Login Window Application Started
        Dec 15 19:30:55 Administrators-MacBook-Pro bootlog[61]: BOOT_TIME: 1292459441 0
        Dec 15 19:30:55 Administrators-MacBook-Pro syslog[86]: routes.sh: static route 192.168.0.0/23 192.168.2.2
        Dec 15 19:30:55 Administrators-MacBook-Pro net.routes.static[65]: route: writing to routing socket: Network is unreachable
        Dec 15 19:30:55 Administrators-MacBook-Pro net.routes.static[65]: add net 192.168.0.0: gateway 192.168.2.2: Network is unreachable
        Dec 15 19:30:57 Administrators-MacBook-Pro org.apache.httpd[38]: httpd: Could not reliably determine the server's fully qualified domain name, using Administrators-MacBook-Pro.local for ServerName
        Dec 15 19:30:58 Administrators-MacBook-Pro loginwindow[50]: Login Window Started Security Agent
        Dec 15 19:30:58 Administrators-MacBook-Pro WindowServer[89]: kCGErrorFailure: Set a breakpoint @ CGErrorBreakpoint() to catch errors as they are logged.
        Dec 15 19:30:58 Administrators-MacBook-Pro com.apple.WindowServer[89]: Wed Dec 15 19:30:58 Administrators-MacBook-Pro.local WindowServer[89] <Error>: kCGErrorFailure: Set a breakpoint @ CGErrorBreakpoint() to catch errors as they are logged.
        Dec 15 19:31:18 Administrators-MacBook-Pro configd[15]: network configuration changed.
        Dec 15 19:31:19 administrators-macbook-pro configd[15]: setting hostname to "administrators-macbook-pro.local"
        Dec 15 19:31:25 administrators-macbook-pro _mdnsresponder[121]: /usr/libexec/ntpd-wrapper: scutil key State:/Network/Global/DNS not present after 30 seconds
        Dec 15 19:31:25 administrators-macbook-pro _mdnsresponder[124]: sntp options: a=2 v=1 e=0.100 E=5.000 P=2147483647.000
        Dec 15 19:31:25 administrators-macbook-pro _mdnsresponder[124]: d=15 c=5 x=0 op=1 l=/var/run/sntp.pid f= time.apple.com
        Dec 15 19:31:25 administrators-macbook-pro _mdnsresponder[124]: sntp: getaddrinfo(hostname, ntp) failed with nodename nor servname provided, or not known
        Dec 15 19:31:27 administrators-macbook-pro configd[15]: network configuration changed.
        Dec 15 19:31:27 Administrators-MacBook-Pro configd[15]: setting hostname to "Administrators-MacBook-Pro.local"
        Dec 15 19:31:27 Administrators-MacBook-Pro ntpd[37]: Cannot find existing interface for address 17.151.16.20
        Dec 15 19:31:27 Administrators-MacBook-Pro ntpd_initres[125]: ntpd indicates no data available!
        Dec 15 19:31:31 Administrators-MacBook-Pro sshd[128]: USER_PROCESS: 133 ttys000
        Dec 15 19:31:37 Administrators-MacBook-Pro sudo[138]: administrator : TTY=ttys000 ; PWD=/Users/administrator ; USER=root ; COMMAND=/usr/bin/less /var/log/system.log

    You can see the following line in /var/log/kernel.log that shows the en0 interface coming up:

        Dec 15 19:30:51 Administrators-MacBook-Pro kernel[0]: Ethernet [AppleBCM5701Ethernet]: Link up on en0, 1-Gigabit, Full-duplex, No flow-control, Debug [796d,0f01,0de1,0300,c1e1,3800]
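    One commonly suggested workaround is to have the launchd job run a script that waits for the network before touching the routing table. A minimal sketch, assuming a shell script invoked by launchd; the retry timings are arbitrary, and the route values are taken from the log above:

        #!/bin/sh
        # Block until all interfaces configured via DHCP have finished:
        ipconfig waitall
        # The lease may still not be in place; poll for a default route
        # for up to roughly 60 seconds before giving up.
        n=0
        until /sbin/route -n get default >/dev/null 2>&1; do
            [ $n -ge 12 ] && exit 1
            sleep 5
            n=$((n+1))
        done
        # A default route exists now, so the gateway should be reachable:
        /sbin/route -n add -net 192.168.0.0/23 192.168.2.2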

    Read the article

  • vim-powerline colors are out of whack in urxvt

    - by komidore64
    I have attached two images showing what my vim-powerline looks like. As you can see, something has happened to the colors and I cannot figure out how to fix it. I'm running Fedora 17 on a clean install with i3 (default config) and urxvt. Here is my bashrc:

        # .bashrc

        if [[ "$(uname)" != "Darwin" ]]; then # non mac os x
            # source global bashrc
            if [[ -f "/etc/bashrc" ]]; then
                . /etc/bashrc
            fi
            export TERM='xterm-256color' # probably shouldn't do this
        fi

        # bash prompt with colors
        # [ <user>@<hostname> <working directory> {current git branch (if you're in a repo)} ]
        # ==>
        PS1="\[\e[1;33m\][ \u\[\e[1;37m\]@\[\e[1;32m\]\h\[\e[1;33m\] \W\$(git branch 2> /dev/null | grep -e '\* ' | sed 's/^..\(.*\)/ {\[\e[1;36m\]\1\[\e[1;33m\]}/') ]\[\e[0m\]\n==> "

        # execute only in Mac OS X
        if [[ "$(uname)" == 'Darwin' ]]; then
            # if OS X has a $HOME/bin folder, then add it to PATH
            if [[ -d "$HOME/bin" ]]; then
                export PATH="$PATH:$HOME/bin"
            fi
            alias ls='ls -G' # ls with colors
        fi

        alias ll='ls -lah'   # long listing of all files with human readable file sizes
        alias tree='tree -C' # turns on coloring for tree command
        alias mkdir='mkdir -p' # create parent directories as needed
        alias vim='vim -p'   # if more than one file, open files in tabs

        export EDITOR='vim'

        # super-secret work stuff
        if [[ -f "$HOME/.workbashrc" ]]; then
            . $HOME/.workbashrc
        fi

        # Add RVM to PATH for scripting
        if [[ -d "$HOME/.rvm/bin" ]]; then # if installed
            PATH=$PATH:$HOME/.rvm/bin
        fi

    and my Xdefaults:

        ! URxvt config

        ! colors
        URxvt.background: #101010
        URxvt.foreground: #ededed
        URxvt.cursorColor: #666666
        URxvt.color0: #2E3436
        URxvt.color8: #555753
        URxvt.color1: #993C3C
        URxvt.color9: #BF4141
        URxvt.color2: #3C993C
        URxvt.color10: #41BF41
        URxvt.color3: #99993C
        URxvt.color11: #BFBF41
        URxvt.color4: #3C6199
        URxvt.color12: #4174FB
        URxvt.color5: #993C99
        URxvt.color13: #BF41BF
        URxvt.color6: #3C9999
        URxvt.color14: #41BFBF
        URxvt.color7: #D3D7CF
        URxvt.color15: #E3E3E3

        ! options
        URxvt*loginShell: true
        URxvt*font: xft:DejaVu Sans Mono for Powerline:antialias=true:size=12
        URxvt*saveLines: 8192
        URxvt*scrollstyle: plain
        URxvt*scrollBar_right: true
        URxvt*scrollTtyOutput: true
        URxvt*scrollTtyKeypress: true
        URxvt*urlLauncher: google-chrome

    and finally my vimrc:

        set nocompatible
        set dir=~/.vim/ " set one place for vim swap files

        " vundler for vim plugins ----
        filetype off
        set rtp+=~/.vim/bundle/vundle
        call vundle#rc()
        Bundle 'gmarik/vundle'
        Bundle 'tpope/vim-surround'
        Bundle 'greyblake/vim-preview'
        Bundle 'Lokaltog/vim-powerline'
        Bundle 'tpope/vim-endwise'
        Bundle 'kien/ctrlp.vim'
        " ----------------------------

        syntax enable
        filetype plugin indent on

        " Powerline ------------------
        set noshowmode
        set laststatus=2
        let g:Powerline_symbols = 'fancy' " show fancy symbols (requires patched font)
        set encoding=utf-8
        " ----------------------------

        " ctrlp ----------------------
        let g:ctrlp_open_multiple_files = 'tj' " open multiple files in additional tabs
        let g:ctrlp_show_hidden = 1 " include dotfiles and dotdirs in ctrlp indexing
        let g:ctrlp_prompt_mappings = {
            \ 'AcceptSelection("e")': ['<c-t>'],
            \ 'AcceptSelection("t")': ['<cr>', '<2-LeftMouse>'],
            \ } " remap <cr> to open file in a new tab
        " ----------------------------

        set showcmd
        set tabpagemax=100
        set hlsearch
        set incsearch
        set nowrapscan
        set ignorecase
        set smartcase
        set ruler
        set tabstop=4
        set shiftwidth=4
        set expandtab
        set wildmode=list:longest

        autocmd BufWritePre * :%s/\s\+$//e "remove trailing whitespace

        " :REV to "revert" file to state of the most recent save
        command REV earlier 1f

        " disable netrw --------------
        let g:loaded_netrw = 1
        let g:loaded_netrwPlugin = 1
        " ----------------------------

    Any guidance as to fixing the statusline would be fantastic. I've found a github issue outlining almost the exact same problem, but the solution was never posted. Thank you.
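    One hedged observation on the config above: the .bashrc unconditionally exports TERM='xterm-256color' on non-Mac systems (and even flags it as "probably shouldn't do this"). Overriding the terminfo that urxvt sets for itself is a common cause of mangled statusline colours. A minimal sketch of a more conservative TERM block, assuming the rxvt-unicode-256color terminfo entry is installed on the system:

        # Only upgrade TERM to the 256-colour variant of what the terminal
        # actually advertised, rather than forcing xterm-256color everywhere:
        case "$TERM" in
            xterm)        export TERM='xterm-256color' ;;
            rxvt-unicode) export TERM='rxvt-unicode-256color' ;;
        esac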

    Read the article

  • How do I determine if my controller is in IDE or AHCI mode in Linux?

    - by philcolbourn
    I have an old MacBook Pro 4,1 (early 2008), but I suspect an answer would apply to many MacBook Pros. It has an Intel IDE/SATA controller (ICH8M/ICH8M-E). I have installed a patched MBR that is supposed to put my controller into AHCI mode. It does this by setting some controller port value that I don't understand. This seems to work, as I get this from lspci:

        00:1f.1 IDE interface: Intel Corporation 82801HM/HEM (ICH8M/ICH8M-E) IDE Controller (rev 03)
        00:1f.2 IDE interface: Intel Corporation 82801HM/HEM (ICH8M/ICH8M-E) SATA Controller [AHCI mode] (rev 03)

    Now most, perhaps all, sites that provide a solution (enabling AHCI) suggest that after a sleep/wake cycle the controller will revert to IDE mode, due to how Apple supports Windows, and they recommend disabling sleep. See, from the author of patchedcode.bin, Enabling AHCI for Windows on MacBooks. NB: I do not have Boot Camp installed and I do not have Windows installed.

    Is there a way to prove that my controller is in IDE or AHCI mode?

    Background data: using the patchedcode.bin MBR I get this in syslog:

        Jun 12 22:33:22 max kernel: [    1.860955] ahci 0000:00:1f.2: version 3.0
        Jun 12 22:33:22 max kernel: [    1.861052] ahci 0000:00:1f.2: irq 45 for MSI/MSI-X
        Jun 12 22:33:22 max kernel: [    1.861117] ahci 0000:00:1f.2: AHCI 0001.0100 32 slots 3 ports 1.5 Gbps 0x1 impl SATA mode
        Jun 12 22:33:22 max kernel: [    1.861120] ahci 0000:00:1f.2: flags: 64bit ncq sntf pm led clo pio slum part ccc ems
        Jun 12 22:33:22 max kernel: [    1.861130] ahci 0000:00:1f.2: setting latency timer to 64
        Jun 12 22:33:22 max kernel: [    1.880880] ACPI: Video Device [GFX0] (multi-head: yes  rom: no  post: no)
        Jun 12 22:33:22 max kernel: [    1.880983] scsi2 : ahci
        Jun 12 22:33:22 max kernel: [    1.884552] scsi3 : ahci
        Jun 12 22:33:22 max kernel: [    1.886932] scsi4 : ahci
        Jun 12 22:33:22 max kernel: [    1.886998] ata3: SATA max UDMA/133 abar m2048@0xdb504000 port 0xdb504100 irq 45
        Jun 12 22:33:22 max kernel: [    1.887000] ata4: DUMMY
        Jun 12 22:33:22 max kernel: [    1.887002] ata5: DUMMY
        Jun 12 22:33:22 max kernel: [    2.204103] ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
        Jun 12 22:33:22 max kernel: [    2.204656] ata3.00: ATA-8: FUJITSU MHY2200BH, 0081000D, max UDMA/100
        Jun 12 22:33:22 max kernel: [    2.204662] ata3.00: 390721968 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
        Jun 12 22:33:22 max kernel: [    2.205324] ata3.00: configured for UDMA/100
        Jun 12 22:33:22 max kernel: [    2.205554] scsi 2:0:0:0: Direct-Access     ATA      FUJITSU MHY2200B 0081 PQ: 0 ANSI: 5

    Using my original MBR I get this from syslog:

        Jun 13 18:07:13 max kernel: [    0.622861] ata_piix 0000:00:1f.1: version 2.13
        Jun 13 18:07:13 max kernel: [    0.622869] ata_piix 0000:00:1f.1: power state changed by ACPI to D0
        Jun 13 18:07:13 max kernel: [    0.622924] ata_piix 0000:00:1f.1: setting latency timer to 64
        Jun 13 18:07:13 max kernel: [    0.623339] scsi0 : ata_piix
        Jun 13 18:07:13 max kernel: [    0.623730] scsi1 : ata_piix
        Jun 13 18:07:13 max kernel: [    0.623765] ata1: PATA max UDMA/100 cmd 0x8108 ctl 0x811c bmdma 0x80e0 irq 21
        Jun 13 18:07:13 max kernel: [    0.623767] ata2: PATA max UDMA/100 cmd 0x8100 ctl 0x8118 bmdma 0x80e8 irq 21
        Jun 13 18:07:13 max kernel: [    0.623810] ata_piix 0000:00:1f.2: MAP [
        Jun 13 18:07:13 max kernel: [    0.623811] P0 -- -- -- ]
        Jun 13 18:07:13 max kernel: [    0.623866] ata_piix 0000:00:1f.2: setting latency timer to 64
        Jun 13 18:07:13 max kernel: [    0.624241] scsi2 : ata_piix
        Jun 13 18:07:13 max kernel: [    0.624558] scsi3 : ata_piix
        Jun 13 18:07:13 max kernel: [    0.624862] ata3: SATA max UDMA/133 cmd 0x80f8 ctl 0x8114 bmdma 0x8020 irq 18
        Jun 13 18:07:13 max kernel: [    0.624865] ata4: SATA max UDMA/133 cmd 0x80f0 ctl 0x8110 bmdma 0x8028 irq 18
        Jun 13 18:07:13 max kernel: [    1.208879] ata3.00: ATA-8: FUJITSU MHY2200BH, 0081000D, max UDMA/100
        Jun 13 18:07:13 max kernel: [    1.208882] ata3.00: 390721968 sectors, multi 16: LBA48 NCQ (depth 0/32)
        Jun 13 18:07:13 max kernel: [    1.208961] ata1.01: ATAPI: MATSHITA DVD+/-RW UJ-867S, 1.00, max UDMA/33
        Jun 13 18:07:13 max kernel: [    1.216186] ata3.00: configured for UDMA/100
        Jun 13 18:07:13 max kernel: [    1.224396] ata1.01: configured for UDMA/33
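    As a quick check, the kernel driver bound to the controller is itself the indicator, and it can be read directly. Both commands below are standard; the PCI address is the one from the lspci output above:

        # 'Kernel driver in use:' reports ahci in AHCI mode and ata_piix
        # in legacy IDE mode, matching the two syslog excerpts:
        lspci -k -s 00:1f.2

        # Or search the boot log for whichever driver initialised the ports:
        dmesg | grep -iE 'ahci|ata_piix'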

    Read the article

  • How do you handle authentication across domains?

    - by William Ratcliff
    I'm trying to save users of our services from having to have multiple accounts/passwords. I'm in a large organization, and there's one group that handles part of user authentication for users who come from outside the facility (primarily for administrative functions). They store a secure cookie to establish a session and communicate only via HTTPS through the browser. Sessions expire through: 1) explicit logout by the user, 2) inactivity, or 3) the browser closing.

    My team is trying to write a web application to help users analyze data that they've taken (or are currently taking) while at our facility. We need to determine 1) whether a user is authenticated, and 2) some identifier for that user so we can store state for them (what analysis they are working on, etc.).

    So, the problem is how to authenticate across domains (the authentication server for the other application lives in a border region between public and private; we will live in the public region). We have come up with some scenarios, and I'd like advice about which is best practice, or whether there is one we haven't considered. Let's start with the case where the user is authenticated with the authentication server.

    1) The authentication server leaves a public cookie in the browser containing their primary key for the user. If this is deemed sensitive, they encrypt it on their server, and we have the key to decrypt it on our server. When the user visits our site, we check for this public cookie, extract the user id, and use a public API on the authentication server to ask whether the user is logged in. If they are, the server sends us a response with:

        response = {
            userid:  we can then map this to our own user ids; if necessary, we can request additional information, such as email address/display name, once (to notify users when long-running jobs are done, or to share results with other people, as with Google Docs)
            account_is_active:  make sure that the account is still valid
            session_is_active:  is their session still active? Querying this for a valid user has the side effect of resetting last_time_session_activated, thus prolonging their session with the authentication server
            last_time_session_activated:  lets us know how much time they have left
            ip_address_session_started_from:  make sure the person at our site is coming from the same IP the session was started from
        }

    Given this response, we either accept them as authenticated and move on with our app, or redirect them to the authentication server's login page. (Question: if we hand over an encrypted portion of the response, signed by us, along with the page to redirect them to, do we open any gaping security holes in the authentication server?) The flaw we've found with this is that if the user visits evilsite.com, it can look at the session cookie and send a query to the public API of the authentication server to keep the session alive; if our original user leaves the machine without logging out, the next user will be able to access their session. (This was possible before, but keeping the session alive eternally makes it worse.)

    2) The authentication server redirects all requests made to our domain to us, and we send responses back through them to the user; essentially, they act as a proxy. The advantage is that we can handshake with the authentication server, so it can be trusted with the email address/name of the user, and the user doesn't have to re-enter them. So, if the user tries to go to authentication_site/mysite_page1, they are redirected to mysite.
Which would you choose, or is there a better way? The goal is to minimize the "Yet Another Password/Yet another username" problem... Thanks!!!!
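    To make scenario 1 concrete, here is a hypothetical sketch of the session check; the endpoint URL, response field names, and decrypt_cookie helper are all invented stand-ins for whatever the authentication server's public API actually exposes:

        # Hypothetical: decrypt the public cookie with the shared key, then
        # ask the auth server's public API whether the session is still live.
        user_id=$(decrypt_cookie "$AUTH_COOKIE")
        curl -s "https://auth.example.org/api/session_status?userid=${user_id}" |
            grep -q '"session_is_active": *true' && echo "user ${user_id} is authenticated"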

    Read the article

  • How do I achieve lossless JPEG joining without truncation of partial MCUs?

    - by Karan
    I am working on a project for which I need to join thousands of JPEG images losslessly (I'm not talking about the Lossless JPEG/JPEG 2000/JPEG-LS formats here). The aforementioned images have varying levels of chroma subsampling (1x1, 1x2, 2x1, 2x2), resulting in varying MCU sizes (8x8, 8x16, 16x8, 16x16 px); however, in any given set of images to be joined together, each image has identical characteristics.

    For now, let's assume I only have 2 images. Image #1 (I1) is 256x256px in size and #2 (I2) is 239x256px. 2x2 subsampling is used, so the MCU size is 16x16px. I2 thus obviously has partial MCUs at the right edge, since its width is not evenly divisible by 16. (I've read that so-called 'partial' MCUs actually contain the data for a complete MCU, but the image dimensions instruct the renderer to only display the relevant pixels and ignore/hide the extra ones.)

    Looking around for tools that could help me accomplish this, I came across a modified version of JpegTran that contains an experimental lossless crop 'n' drop (cut & paste) feature. All the other apps I encountered that support lossless JPEG editing seem to utilise IJG's (JpegTran) code, so this seemed to be the logical choice. Also, given the sheer number of images, I wanted something that could preferably be run from the command line, so that I could automate the process with a script.

    Unfortunately, while everything else worked fine, it seems JpegTran truncates the partial MCUs instead of retaining them. Thus in the example above, the final joined image contains all of I1, but only 224x256px of I2. Why 224? Because 239 = 14x16 + 15, which means there are 14 full MCUs along the width and 1 partial MCU (just 1px short of the complete 16px). The last 15px is what is getting blanked, leading to a 495x256px image with 15px of blank (grey) pixels at the right edge. See the images below (shame that imgur re-compresses them): [I1, left] + [I2, right] = [joined result]. As you can clearly see, the red portion (15px) of I2 has been truncated by JpegTran. If the MCUs were 8px in width, the lost portion would have been the right-most 7px of I2. Similarly, joining I3 (256x239px) below I1 would cause the loss of 7 or 15px, depending on the MCU height of course: [I1, top] + [I3, bottom] = [joined result].

    If this is better suited to some other StackExchange (or even non-SE) site/forum where JPEG/image encoding experts hang out, do let me know. Can what I am attempting even be done, or is the so-called 'lossless' JPEG crop 'n' drop only valid for images with no partial MCUs? (Maybe that is why the feature is still in an "experimental state" more than a decade after being introduced...) Until I know for sure that it is impossible, I am not interested in suggestions for lossy joining. Avoiding any generational loss whatsoever is the sole reason why I'm breaking my head over this, else I'd have had this done and dusted ages ago. Also, I am not interested in suggestions related to switching image formats; I do not control the source of the images. If it can be done, how? Please keep in mind that any alternate apps suggested must ideally be capable of automation, given the requirements stated above. (But given how unlikely it is that I'll even receive a useful answer within these constraints, I would be happy with any app suggestion as long as it actually works; I can always look into an AutoIt/AHK script or something later to automate it.)
I understand that an odd-sized final image might cause issues, so I am fully prepared to accept any solution, even if it results in blank (preferably black) padding pixels to the right/bottom. What I mean is, I don't care if I1 + I2 is 496x256px (1px padding) or even 512x256px (17px padding) in size, as long as the final image contains all the actual image data from both source images, and the entire process is lossless. Obviously the lesser the padding (if any), the better, but at this point any solution will do. A Windows-based solution would be perfect, but a Linux-based one would be entirely acceptable.
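    For reference, this is the kind of invocation the experimental crop 'n' drop build of JpegTran is meant to support. A sketch only: -drop is not in stock jpegtran, the exact syntax differs between patch versions, and whether -crop may enlarge the canvas beyond the source dimensions depends on the build:

        # Enlarge I1's canvas to the next MCU multiple (512x256), then
        # drop I2 losslessly at x=256:
        jpegtran -copy all -crop 512x256+0+0 -drop +256+0 I2.jpg I1.jpg > joined.jpg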

    Read the article

< Previous Page | 469 470 471 472 473 474 475 476 477 478 479 480  | Next Page >