Search Results

Search found 25440 results on 1018 pages for 'agent based modeling'.


  • HTG Explains: Do Non-Windows Platforms Like Mac, Android, iOS, and Linux Get Viruses?

    - by Chris Hoffman
    Viruses and other types of malware seem largely confined to Windows in the real world. Even on a Windows 8 PC, you can still get infected with malware. But how vulnerable are other operating systems to malware? When we say "viruses," we're actually talking about malware in general. There's more to malware than just viruses, although the word virus is often used to talk about malware in general.

    Why Are All the Viruses For Windows?

    Not all of the malware out there is for Windows, but most of it is. We've tried to cover why Windows has the most viruses in the past. Windows' popularity is definitely a big factor, but there are other reasons, too. Historically, Windows was never designed for security in the way that UNIX-like platforms were — and every popular operating system that's not Windows is based on UNIX. Windows also has a culture of installing software by searching the web and downloading it from websites, whereas other platforms have app stores and Linux has centralized software installation from a secure source in the form of its package managers.

    Do Macs Get Viruses?

    The vast majority of malware is designed for Windows systems, and Macs don't get Windows malware. While Mac malware is much rarer, Macs are definitely not immune to malware. They can be infected by malware written specifically for Macs, and such malware does exist. At one point, over 650,000 Macs were infected with the Flashback Trojan. [Source] It infected Macs through the Java browser plugin, which is a security nightmare on every platform. Macs no longer include Java by default. Apple has also locked down Macs in other ways. Three things in particular help:

    - Mac App Store: Rather than getting desktop programs from the web and possibly downloading malware, as inexperienced users might on Windows, they can get their applications from a secure place. It's similar to a smartphone app store or even a Linux package manager.
    - Gatekeeper: Current releases of Mac OS X use Gatekeeper, which only allows programs to run if they're signed by an approved developer or if they're from the Mac App Store. This can be disabled by geeks who need to run unsigned software, but it acts as additional protection for typical users.
    - XProtect: Macs also have a built-in technology known as XProtect, or File Quarantine. This feature acts as a blacklist, preventing known-malicious programs from running. It functions similarly to Windows antivirus programs, but works in the background and checks applications you download.

    Mac malware isn't coming out nearly as quickly as Windows malware, so it's easier for Apple to keep up. Macs are certainly not immune to all malware, and someone going out of their way to download pirated applications and disable security features may find themselves infected. But Macs are much less at risk of malware in the real world.

    Android is Vulnerable to Malware, Right?

    Android malware does exist, and companies that produce Android security software would love to sell you their Android antivirus apps. But that isn't the full picture. By default, Android devices are configured to only install apps from Google Play. They also benefit from antimalware scanning — Google Play itself scans apps for malware. You could disable this protection and go outside Google Play, getting apps from elsewhere ("sideloading"). Google will still help you if you do this, asking if you want to scan your sideloaded apps for malware when you try to install them.

    In China, where many, many Android devices are in use, there is no Google Play Store. Chinese Android users don't benefit from Google's antimalware scanning and have to get their apps from third-party app stores, which may contain infected copies of apps. The majority of Android malware comes from outside Google Play. The scary malware statistics you see primarily include users who get apps from outside Google Play, whether it's pirating infected apps or acquiring them from untrustworthy app stores. As long as you get your apps from Google Play — or even another secure source, like the Amazon App Store — your Android phone or tablet should be secure.

    What About iPads and iPhones?

    Apple's iOS operating system, used on its iPads, iPhones, and iPod Touches, is more locked down than even Macs and Android devices. iPad and iPhone users are forced to get their apps from Apple's App Store. Apple is more demanding of developers than Google is — while anyone can upload an app to Google Play and have it available instantly while Google does some automated scanning, getting an app onto Apple's App Store involves a manual review of that app by an Apple employee. The locked-down environment makes it much more difficult for malware to exist. Even if a malicious application could be installed, it wouldn't be able to monitor what you typed into your browser and capture your online-banking information without exploiting a deeper system vulnerability.

    Of course, iOS devices aren't perfect either. Researchers have proven it's possible to create malicious apps and sneak them past the app store review process. [Source] However, if a malicious app were discovered, Apple could pull it from the store and immediately uninstall it from all devices. Google and Microsoft have this same ability with Android's Google Play and the Windows Store for new Windows 8-style apps.

    Does Linux Get Viruses?

    Malware authors don't tend to target Linux desktops, as so few average users use them. Linux desktop users are more likely to be geeks who won't fall for obvious tricks. As with Macs, Linux users get most of their programs from a single place — the package manager — rather than downloading them from websites. Linux also can't run Windows software natively, so Windows viruses just can't run. Linux desktop malware is extremely rare, but it does exist. The recent "Hand of Thief" Trojan supports a variety of Linux distributions and desktop environments, running in the background and stealing online banking information. It doesn't have a good way of infecting Linux systems, though — you'd have to download it from a website or receive it as an email attachment and run the Trojan. [Source] This just confirms how important it is to only run trusted software on any platform, even supposedly secure ones.

    What About Chromebooks?

    Chromebooks are locked-down laptops that only run the Chrome web browser and some bits around it. We're not really aware of any form of Chrome OS malware. A Chromebook's sandbox helps protect it against malware, but it also helps that Chromebooks aren't very common yet. It would still be possible to infect a Chromebook, if only by tricking a user into installing a malicious browser extension from outside the Chrome web store. The malicious browser extension could run in the background, steal your passwords and online banking credentials, and send them over the web. Such malware could even run on the Windows, Mac, and Linux versions of Chrome, but it would appear in the Extensions list, would require the appropriate permissions, and you'd have to agree to install it manually.

    And Windows RT?

    Microsoft's Windows RT only runs desktop programs written by Microsoft. Users can only install "Windows 8-style apps" from the Windows Store. This means that Windows RT devices are as locked down as an iPad — an attacker would have to get a malicious app into the store and trick users into installing it, or possibly find a security vulnerability that allowed them to bypass the protection.

    Malware is definitely at its worst on Windows. This would probably be true even if Windows had a shining security record and a history of being as secure as other operating systems, but you can definitely avoid a lot of malware just by not using Windows. Of course, no platform is a perfect malware-free environment. You should exercise some basic precautions everywhere. Even if malware were eliminated, we'd have to deal with social-engineering attacks like phishing emails asking for credit card numbers.

    Image Credit: stuartpilbrow on Flickr, Kansir on Flickr

    Read the article

  • CodePlex Daily Summary for Monday, December 27, 2010

    CodePlex Daily Summary for Monday, December 27, 2010

    Popular Releases
    - Rocket Framework (.Net 4.0): Rocket Framework for Windows V 1.0.0: Architecture is reviewed and adjusted so that I can introduce the Web version and WPF version of this framework next. Rocket.Core is introduced; controller button functions revisited and updated; DB is renewed to suit the implemented features; Create New button functionality is changed; Add Question Handling features
    - VCC: Latest build, v2.1.31226.0: Automatic drop of latest build
    - FxCop Integrator for Visual Studio 2010: FxCop Integrator 1.1.0: New feature: search violation information. FxCop Integrator provides the violation search feature; you can find specific violation information with a simple search expression. Analyze with FxCop project file: FxCop Integrator supports code analysis with an FxCop project file, so you can customize code analysis behavior (e.g. analyze specified types only, use specific rules only, and so on). Improvement: improved the code analysis result list to show more information (added Project and File columns). Change...
    - Flickr Wallpaper Rotator (for Windows desktop): Wallpaper Flickr 1.1: Some minor bugfixes (mostly covering when the network connection is flaky; I discovered them all while at my parents' house for Christmas).
    - Mcopy API: McopyAPI v.1.0.0.2: Changed display help ("mcopyapi /?") and a few minor changes
    - OPSM: OPSM v2.0: Updated version, 30%-40% faster than v1.0. Requires .NET Framework 2.0
    - People's Note: People's Note 0.19: Added touch scrolling to the note view screen. To install: copy the appropriate CAB file onto your WM device and run it.
    - HSS Core Framework: HSS Core v4.0.700.10: Release v4.0.700.10, December 25th, 2010. Upgrade instructions from v4.0.700 to v4.0.700.10 (patch release): uninstall v4.0.700, then install the new v4.0.700.10. Upgrade instructions (full upgrade from a version prior to v4.0.700): uninstall your old version; update the log configuration by changing the logging from Database to Machine; backup and then truncate the hss logging tables; uninstall the HSSLOG Database using the HSSLOG Database install wizard; uninstall the HSS Core Framework; ins...
    - NoSimplerAccounting: NoSimplerAccounting 6.0: Fixed a bug in expense category report.
    - NHibernate Mapping Generator: NHibernate Mapping Generator 2.0: Added support for Postgres (thanks to Angelo)
    - NewLife XCode: XCode v6.5.2010.1223 ????(????v3.5??): XCode v6.5.2010.1223 ????,??: NewLife.Core ??? NewLife.Net ??? XControl ??? XTemplate ????,??C#?????? XAgent ???? NewLife.CommonEnitty ??????(???,XCode??????) XCode?? ?????????,??????????????????,?????95% XCode v3.5.2009.0714 ??,?v3.5?v6.0???????????????,?????????。v3.5???????????,??????????????。 XCoder ??XTemplate?????????,????????XCode??? XCoder_Src ???????(????XTemplate????),??????????????????
    - MiniTwitter: 1.64: MiniTwitter 1.64 ???? ?? 1.63 ??? URL ??????????????
    - VivoSocial: VivoSocial 7.4.0: Please see changes: http://support.vivoware.com/project/ChangeLog.aspx?PROJID=48
    - Umbraco CMS: Umbraco 4.6 Beta - codename JUNO: The Umbraco 4.6 beta (codename JUNO) release contains many new features focusing on an improved installation experience, a number of robust developer features, and more than 89 bug fixes since the 4.5.2 release. Improved installer experience; updated Starter Kits (Simple, Blog, Personal, Business); beautiful, free, customizable skins included; skinning engine and skin customization (see Skinning Documentation Kit); default dashboards on install with hide option; updated Login t...
    - SSH.NET Library: 2010.12.23: This release includes some bug fixes and a few new features. Fixes: allow retrieving big directory structures; ssh-dss algorithm is fixed; populate sftp file attributes. New features: support for passphrase when a private key is used; support added for diffie-hellman-group14-sha1, diffie-hellman-group-exchange-sha256 and diffie-hellman-group-exchange-sha1 key exchange algorithms; allow providing multiple key files for authentication; added support for "keyboard-interactive" authentication method...
    - ASP.NET MVC SiteMap provider: MvcSiteMapProvider 2.3.0: Using NuGet? MvcSiteMapProvider is also listed in the NuGet feed. Learn more... Like the project? Consider a donation via PayPal. Release notes: This will be the last release targeting ASP.NET MVC 2 and .NET 3.5; MvcSiteMapProvider 3.0.0 will be targeting ASP.NET MVC 3 and .NET 4. Web.config setting skipAssemblyScanOn has been deprecated in favor of excludeAssembliesForScan and includeAssembliesForScan. ISiteMapNodeUrlResolver is now completely responsible for generating th...
    - Media Companion: Media Companion 3.400: Extract the entire archive to a folder which has user access rights, e.g. desktop, documents, etc. A manual is included to get you started
    - Multicore Task Framework: MTF 1.0.1: Release 1.0.1 of Multicore Task Framework.
    - SQL Monitor - tracking sql server activities: SQL Monitor 3.0 alpha 7: 1. Added script save/load in user query window. 2. Fixed problem with connection dialog when choosing Windows auth but still asking for user name. 3. Auto-open user table when double-clicking a table node. 4. Improved alert message; added log-only method
    - EnhSim: EnhSim 2.2.6 ALPHA: This release supports WoW patch 4.03a at level 85. To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed; this can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed; this can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Fixing up some r...

    New Projects
    - ActiveDirectory Object Model Library: The ActiveDirectory Object Model Library, which can operate on AD easily.
    - Allspark: Allspark is a BI focused sharing space.
    - Calendar Control for WP7: Calendar control for WP7 (Windows Phone 7). I needed a calendar control for my WP7 app, and since WP7 did not have any and I couldn't find anything on the internet, I decided to write my own. Please feel free to use it in your project and share if you have improved on it.
    - CryptoPad: CryptoPad is a simple notepad-like application that works with encrypted files. Users can open, edit and save encrypted files using CryptoPad without any other application; everything is easy and transparent.
    - Entity Visualizer: Entity Framework debugger visualizer
    - HTML-IDEx: HTML-IDEx is meant to be an open-source lightweight WYSIWYG HTML editor in which the user can see what they are doing in realtime. Comparable to my other project, Brandons HTML IDE; I'm hoping this to be a huge success.
    - IL Inject: Aspect-oriented programming based on the injection of IL instructions into the methods of an assembly, marked by attributes
    - Integração no Trabalho: This project will be developed to make integration among a company's employees possible. The program will work over the network; through it, projects and activities can be registered, and your colleagues will be able to comment, critique and give tips on the other projects.
    - Knexsys Project: Knexsys, also known as KKD (Knexsys Knowledge Discovery), is a research program that aims to study the capabilities of SQL Server and the .NET framework to implement a rule production engine to mine real-time data. It's developed in C#.
    - mysvn: This project hosts some of my own projects. I am interested in WPF and RIA. Furthermore, I want to learn more about C, C++, Java, PHP, Python.
    - OpenTwitter: Twitter clone made with C# and LINQ. This is only a webservice; there is no client for it. The main idea behind OpenTwitter is to provide a framework that you can use with your own clients (mobile devices, web pages, etc.) and data providers (xml, sql server, oracle, etc.).
    - OPSM: OPSM Miner & information project
    - Outlook UI Tools: Changes to the Outlook UI to make it more usable: allow reply to address a recipient instead of the sender; keep meeting reminders from popping up when overdue by many days
    - SimpleUploadTo: Simple UploadTo
    - TFS 4 FPSE: Allows FrontPage Server Extensions to use TFS as a source control repository.
    - TFS CheckInNotifier: Team Foundation Server 2010 check-in event notifier desktop application.
    - Universe.WCF.Behaviors: Universe.WCF.Behaviors provides behaviors: easy migration from Remoting; transparent delivery; traffic statistics; WCF streaming adaptor for BinaryWriter and TextWriter; etc.
    - vebshop2: vebshop2
    - Windows Weibo all in one for Sina Sohu and QQ: Windows Weibo all in one for Sina, Sohu and QQ.

    Read the article

  • NHibernate Conventions

    - by Ricardo Peres
    Introduction

    It seems that nowadays everyone loves conventions! Not the ones that you go to, but the ones that you use, that is! It just happens that NHibernate also supports conventions, and we'll see exactly how. Conventions in NHibernate are supported in two ways:
    - Naming of tables and columns when not explicitly indicated in the mappings;
    - Full domain mapping.

    Naming of Tables and Columns

    NHibernate has always supported the concept of a naming strategy. A naming strategy in NHibernate converts class and property names to table and column names and vice-versa, when a name is not explicitly supplied. Concretely, it must be an implementation of the NHibernate.Cfg.INamingStrategy interface, of which NHibernate includes two:
    - DefaultNamingStrategy: the default implementation, where each column and table is named identically to its property or class; for example, "MyEntity" will translate to "MyEntity";
    - ImprovedNamingStrategy: underscores (_) are used to separate Pascal-cased fragments; for example, entity "MyEntity" will be mapped to a "my_entity" table.

    The naming strategy can be defined at configuration level (the Configuration instance) by calling the SetNamingStrategy method:

        cfg.SetNamingStrategy(ImprovedNamingStrategy.Instance);

    Both the DefaultNamingStrategy and the ImprovedNamingStrategy classes offer singleton instances in the form of Instance static fields. DefaultNamingStrategy is the one NHibernate uses if you don't specify one.

    Domain Mapping

    In mapping by code, we have the choice of relying on conventions to do the mapping automatically. This means a class will inspect our classes and decide how they will relate to the database objects. The class that handles conventions is NHibernate.Mapping.ByCode.ConventionModelMapper, a specialization of the base by-code mapper, NHibernate.Mapping.ByCode.ModelMapper. The ModelMapper relies on an internal SimpleModelInspector to help it decide what and how to map, but the mapper lets you override its decisions. You apply code conventions like this:

        //pick the types that you want to map
        IEnumerable<Type> types = Assembly.GetExecutingAssembly().GetExportedTypes();

        //conventions based mapper
        ConventionModelMapper mapper = new ConventionModelMapper();

        HbmMapping mapping = mapper.CompileMappingFor(types);

        //the one and only configuration instance
        Configuration cfg = ...;
        cfg.AddMapping(mapping);

    This is a very simple example. It lacks, at least, the id generation strategy, which you can add through an event handler like this:

        mapper.BeforeMapClass += (IModelInspector modelInspector, Type type, IClassAttributesMapper classCustomizer) =>
        {
            classCustomizer.Id(x =>
            {
                //set the hilo generator
                x.Generator(Generators.HighLow);
            });
        };

    The mapper will fire events like this whenever it needs information about what to do. And basically this is all it takes to automatically map your domain! It will correctly configure many-to-one and one-to-many relations, choosing bags or sets depending on your collections, will get the table and column names from the naming strategy we saw earlier, and will apply the usual defaults to all properties, such as laziness and fetch mode. However, there is at least one thing missing: many-to-many relations. The conventional mapper doesn't know how to find and configure them, which is a pity, but, alas, not difficult to overcome.

    To start, for my projects, I have this rule: each entity exposes a public property of type ISet<T>, where T is, of course, the type of the other endpoint entity. Extensible as it is, NHibernate lets me implement this very easily:

        mapper.IsOneToMany((MemberInfo member, Boolean isLikely) =>
        {
            Type sourceType = member.DeclaringType;
            Type destinationType = member.GetMemberFromDeclaringType().GetPropertyOrFieldType();

            //check if the property is of a generic collection type
            if ((destinationType.IsGenericCollection() == true) && (destinationType.GetGenericArguments().Length == 1))
            {
                Type destinationEntityType = destinationType.GetGenericArguments().Single();

                //check if the type of the generic collection property is an entity
                if (mapper.ModelInspector.IsEntity(destinationEntityType) == true)
                {
                    //check if there is an equivalent property on the target type that is also a generic collection and points to this entity
                    PropertyInfo collectionInDestinationType = destinationEntityType.GetProperties().Where(x => (x.PropertyType.IsGenericCollection() == true) && (x.PropertyType.GetGenericArguments().Length == 1) && (x.PropertyType.GetGenericArguments().Single() == sourceType)).SingleOrDefault();

                    if (collectionInDestinationType != null)
                    {
                        return (false);
                    }
                }
            }

            return (true);
        });

        mapper.IsManyToMany((MemberInfo member, Boolean isLikely) =>
        {
            //a relation is many-to-many if it isn't one-to-many
            Boolean isOneToMany = mapper.ModelInspector.IsOneToMany(member);
            return (!isOneToMany);
        });

        mapper.BeforeMapManyToMany += (IModelInspector modelInspector, PropertyPath member, IManyToManyMapper collectionRelationManyToManyCustomizer) =>
        {
            Type destinationEntityType = member.LocalMember.GetPropertyOrFieldType().GetGenericArguments().First();
            //set the mapping table column names from each source entity name plus the _Id suffix
            collectionRelationManyToManyCustomizer.Column(destinationEntityType.Name + "_Id");
        };

        mapper.BeforeMapSet += (IModelInspector modelInspector, PropertyPath member, ISetPropertiesMapper propertyCustomizer) =>
        {
            if (modelInspector.IsManyToMany(member.LocalMember) == true)
            {
                propertyCustomizer.Key(x => x.Column(member.LocalMember.DeclaringType.Name + "_Id"));

                Type sourceType = member.LocalMember.DeclaringType;
                Type destinationType = member.LocalMember.GetPropertyOrFieldType().GetGenericArguments().First();
                IEnumerable<String> names = new Type[] { sourceType, destinationType }.Select(x => x.Name).OrderBy(x => x);

                //set inverse on the relation of the alphabetically first entity name
                propertyCustomizer.Inverse(sourceType.Name == names.First());
                //set mapping table name from the entity names in alphabetical order
                propertyCustomizer.Table(String.Join("_", names));
            }
        };

    We have to understand how the conventions mapper thinks:
    - For each collection of entities found, it will ask the mapper if it is a one-to-many; in our case, if the collection is a generic one that has an entity as its generic parameter, and the generic parameter type has a similar collection, then it is not a one-to-many;
    - Next, the mapper will ask if the collection that it now knows is not a one-to-many is a many-to-many;
    - Before a set is mapped, if it corresponds to a many-to-many, we set its mapping table. Now, this is tricky: because we have no way to maintain state, we sort the names of the two endpoint entities and combine them with a "_"; for the first alphabetical entity, we set its relation to inverse – remember, on a many-to-many relation, only one endpoint must be marked as inverse; finally, we set the column name as the name of the entity with an "_Id" suffix;
    - Before the many-to-many relation is processed, we set the column name as the name of the other endpoint entity with the "_Id" suffix, as we did for the set.

    And that's it. With these rules, NHibernate will now happily find and configure many-to-many relations, as well as all the others. You can wrap this in a new conventions mapper class, so that it is more easily reusable:

        public class ManyToManyConventionModelMapper : ConventionModelMapper
        {
            public ManyToManyConventionModelMapper()
            {
                base.IsOneToMany((MemberInfo member, Boolean isLikely) =>
                {
                    return (this.IsOneToMany(member, isLikely));
                });

                base.IsManyToMany((MemberInfo member, Boolean isLikely) =>
                {
                    return (this.IsManyToMany(member, isLikely));
                });

                base.BeforeMapManyToMany += this.BeforeMapManyToMany;
                base.BeforeMapSet += this.BeforeMapSet;
            }

            protected virtual Boolean IsManyToMany(MemberInfo member, Boolean isLikely)
            {
                //a relation is many-to-many if it isn't one-to-many
                Boolean isOneToMany = this.ModelInspector.IsOneToMany(member);
                return (!isOneToMany);
            }

            protected virtual Boolean IsOneToMany(MemberInfo member, Boolean isLikely)
            {
                Type sourceType = member.DeclaringType;
                Type destinationType = member.GetMemberFromDeclaringType().GetPropertyOrFieldType();

                //check if the property is of a generic collection type
                if ((destinationType.IsGenericCollection() == true) && (destinationType.GetGenericArguments().Length == 1))
                {
                    Type destinationEntityType = destinationType.GetGenericArguments().Single();

                    //check if the type of the generic collection property is an entity
                    if (this.ModelInspector.IsEntity(destinationEntityType) == true)
                    {
                        //check if there is an equivalent property on the target type that is also a generic collection and points to this entity
                        PropertyInfo collectionInDestinationType = destinationEntityType.GetProperties().Where(x => (x.PropertyType.IsGenericCollection() == true) && (x.PropertyType.GetGenericArguments().Length == 1) && (x.PropertyType.GetGenericArguments().Single() == sourceType)).SingleOrDefault();

                        if (collectionInDestinationType != null)
                        {
                            return (false);
                        }
                    }
                }

                return (true);
            }

            protected virtual new void BeforeMapManyToMany(IModelInspector modelInspector, PropertyPath member, IManyToManyMapper collectionRelationManyToManyCustomizer)
            {
                Type destinationEntityType = member.LocalMember.GetPropertyOrFieldType().GetGenericArguments().First();
                //set the mapping table column names from each source entity name plus the _Id suffix
                collectionRelationManyToManyCustomizer.Column(destinationEntityType.Name + "_Id");
            }

            protected virtual new void BeforeMapSet(IModelInspector modelInspector, PropertyPath member, ISetPropertiesMapper propertyCustomizer)
            {
                if (modelInspector.IsManyToMany(member.LocalMember) == true)
                {
                    propertyCustomizer.Key(x => x.Column(member.LocalMember.DeclaringType.Name + "_Id"));

                    Type sourceType = member.LocalMember.DeclaringType;
                    Type destinationType = member.LocalMember.GetPropertyOrFieldType().GetGenericArguments().First();
                    IEnumerable<String> names = new Type[] { sourceType, destinationType }.Select(x => x.Name).OrderBy(x => x);

                    //set inverse on the relation of the alphabetically first entity name
                    propertyCustomizer.Inverse(sourceType.Name == names.First());
                    //set mapping table name from the entity names in alphabetical order
                    propertyCustomizer.Table(String.Join("_", names));
                }
            }
        }

    Conclusion

    Of course, there is much more to mapping than this. I suggest you look at all the events and functions offered by the ModelMapper to see where you can hook in to make it behave the way you want. If you need any help, just let me know!
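    As a closing illustration, this is roughly how the reusable mapper could be plugged in, combining it with the naming strategy from the beginning of the post. This is just a minimal sketch built from the calls shown above; the Configure() call assumes the usual hibernate.cfg.xml or app.config setup:

        //compile a conventions-based mapping for all public types in this assembly
        ManyToManyConventionModelMapper mapper = new ManyToManyConventionModelMapper();
        IEnumerable<Type> types = Assembly.GetExecutingAssembly().GetExportedTypes();
        HbmMapping mapping = mapper.CompileMappingFor(types);

        //register the naming strategy and the compiled mapping on the configuration
        Configuration cfg = new Configuration().Configure();
        cfg.SetNamingStrategy(ImprovedNamingStrategy.Instance);
        cfg.AddMapping(mapping);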

    Read the article

  • CodePlex Daily Summary for Saturday, May 08, 2010

    CodePlex Daily Summary for Saturday, May 08, 2010

    New Projects
    - BizSpark Camp Sample Code: This sample project demonstrates best practices when developing Windows Phone 7 applications with Azure. **This code is meant only as a sample f...
    - CLB Article Module: The CLB Article Module is a simple DotNetNuke module for article writers. This module is designed to provide a simple location for article/screenc...
    - cvplus: a computer vision library
    - Demina: Demina is a simple keyframe based 2D skeletal animation system. It's developed in C# using XNA.
    - Expo Live: Expo Live
    - FeedMonster: FeedMonster is a cross-platform Really Simple Syndication reader made to allow you to easily follow updates to your favorite sites. It's made using ...
    - GamePad to KeyBoard: GamePad to KeyBoard (GP2KB) is a small app that lets you define an associated keyboard key for each button in your XBox 360 controller. This way, y...
    - huanhuan's project: moajfiodsafas
    - LazyNet: LazyNet allows users to save the proxy settings and IP addresses of wireless networks. This makes it easier for users that move a lot between diffe...
    - mfcfetionapi: mfcfetionapi
    - Mocking BizTalk: Mocking BizTalk is a set of tools aimed at bringing Mock Object concepts into BizTalk solution testing. Actually it consists in a set of MockPipelin...
    - nkSWFControl: nkSWFControl is an ASP.NET control that provides numerous ways for publishing SWF movies in your pages. The project goal is to become de facto ...
    - Property Pack for EPiServer CMS: With EPiServer CMS 6 it's possible to create a wide variety of custom property types that have customized settings in each usage. This Property Pac...
    - SPRoleAssignment: SPRoleAssignment makes it easier for programmers to assign custom item level permissions. You can now easily grant or remove permissions to certain...
    - Thats-Me Dot Net API: Thats-Me Dot Net API is a library which implements the Thats-Me.ch API for the Dot Net Framework.
    - The Information Literacy Education Learning Environment (ILE): Written in C# utilizing the .NET framework, the Information Literacy Education (ILE) learning environment is designed to deliver information litera...
    - tinytian: Tools, shared.
    - Unidecode Sharp: UnidecodeSharp is a C# port of the Python and Perl Unidecode modules. Intended to transliterate a Unicode object into an ASCII string, no matter what l...
    - Wave 4: Wave 4 is a platform for professional bloggers, integrated with many social networking services.

    New Releases
    - µManager for MaNGOS: 0.8.6: Improvements: Supports up to MaNGOS revision 9692 (may support higher revisions; this is dependent on any changes made to the database structure b...
    - BFBC2 PRoCon: PRoCon 0.3.5.0: Release notes coming
    - BizSpark Camp Sample Code: Initial Check-in: There are two downloads available for this project. There is a Windows Phone 7 application that calls webservices hosted in Azure. And there is a...
    - Bojinx: Bojinx Core V4.5.11: Minor release that fixes several minor bugs and adds more error prevention logic. Fixed: You will no longer get a null pointer error when loading ...
    - CuttingEdge.Logging: CuttingEdge.Logging v1.2.2: CuttingEdge.Logging is a library that helps the programmer output log statements to a variety of output targets in .NET applications. The design is...
    - DotNetNuke® Events: 05.01.00 Beta 1: Please take note: this is a Beta release. This is not a formal release of DNN Events 05.01.00; it is a beta version, which is not extensively te...
    - dylan.NET: dylan.NET v.9.8: This is the latest version of dylan.NET; it features the new locimport statement, improved error handling and other new stuff and bug fixes.
    - Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.0.9 beta 2 Released: Hi, this release contains the following enhancements: a new property named ClosestPlotDistance has been implemented in Axis. It defines the d...
    - Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.5.2 beta 2 Released: Hi, this release contains the following enhancements: a new property named ClosestPlotDistance has been implemented in Axis. It defines the d...
    - GamePad to KeyBoard: GP2KB v0.1: First public version of GamePad to KeyBoard.
    - Global: Version 0.1: This is the first beta stable release of this assembly. Check out the Test console app to see how some of the stuff works. Enjoy
    - Html Agility Pack: 1.4.0 Stable: 1.4.0 adds some serious new features to Html Agility Pack to make it work nicer in a LINQ driven .NET world. The HtmlNodeCollection and HtmlAttribu...
    - LazyNet: Beta Release: This is the Beta version. All functionality has been implemented but needs further testing for bugs.
    - Mercury Particle Engine: Mercury Particle Engine 3.1.0.0: Long overdue release of the latest version of Mercury Particle Engine; this release is changeset #66009
    - NazTek.Extension.Clr4: NazTek.Extension.Clr4 Binary Cab: Includes bin, config, and chm files
    - netDumbster: netDumbster 1.0: netDumbster release 1.0
    - Opalis Community Releases: Workflow Examples (May 7, 2010): The Opalis team is providing some sample Workflows (policies) for customers and partners to help you get started in creating your own workflows in ...
    - patterns & practices SharePoint Guidance: SPG2010 Drop10: SharePoint Guidance Drop Notes, Microsoft patterns and practices ...
    - Shake - C# Make: Shake v0.1.9: First public release including samples. Basic tasks: MSBuild task (msbuild command line with parameters); SVN command line client with checkout...
    - SPRoleAssignment: Role Assignment Project [Eng]: SPRoleAssignment makes it easier for programmers to assign custom item level permissions. You can now easily grant or remove permissions to certain...
    - SQL Server Metadata Toolkit 2008: Alpha 7 SQL Server Metadata Toolkit: The changes in this are to the Parser, and a change to handle WITH CTEs within the Analyzers. The Parser now handles CAST better, EXEC, EXECUTE, ...
    - StackOverflow Desktop Client in C# and WPF: StackOverflow Client 0.5: Made the popup thinner, removed the extra icon and title. The questions on the popup automatically change every 5 seconds.
    - TimeSpanExt: TimeSpanExt 1.0: This had been stuck in beta for a long time, so I'm glad to finally make the full release.
    - Transcriber: Transcriber V0.2.0: First alpha release. See Issue Tracker for known bugs and incomplete features.
    - Unidecode Sharp: UnidecodeSharp 0.04: First release. Tested and stable. Versions duplicate the Perl and Python ones.
    - Visual Studio 2010 AutoScroller Extension: AutoScroller v0.3: A Visual Studio 2010 auto-scroller extension. Simply hold down your middle mouse button and drag the mouse in the direction you wish to scroll, fu...
    - Visual Studio 2010 Test Case Import Utilities: V 1.0 (RC2): Work Item Migrator: With the prior releases of Test Case Migrator (Beta and RC1), it was possible to migrate test cases (along with test steps) ...
    - Word Add-in For Ontology Recognition: Technology Preview, Beta 2 - May 2010: This is a technology preview release of the Word Add-in for Ontology Recognition for Word. These are some of the updates included in this version:...

    Most Popular Projects: WBFS Manager, Rawr, AJAX Control Toolkit, Microsoft SQL Server Product Samples: Database, Silverlight Toolkit, patterns & practices – Enterprise Library, Windows Presentation Foundation (WPF), Microsoft SQL Server Community & Samples, ASP.NET, PHPExcel

    Most Active Projects: patterns & practices – Enterprise Library, AJAX Control Framework, Rawr, The Information Literacy Education Learning Environment (ILE), Caliburn: An Application Framework for WPF and Silverlight, BlogEngine.NET, patterns & practices - Unity, jQuery Library for SharePoint Web Services, NB_Store - Free DotNetNuke Ecommerce Catalog Module, TweetSharp

    Read the article

  • ASP.NET MVC 3 Hosting :: Rolling with Razor in MVC v3 Preview

    - by mbridge
    Razor is an alternate view engine for ASP.NET MVC. It was introduced in the "WebMatrix" tool and has now been released as part of the ASP.NET MVC 3 Preview 1. Basically, Razor allows us to replace the clunky <% %> syntax with a much cleaner coding model, which integrates very nicely with HTML. Additionally, it provides some really nice features for master page type scenarios, and you don't lose access to any of the features you are currently familiar with, such as HTML helper methods.

    First, download and install the ASP.NET MVC 3 Preview 1. You can find it at http://www.microsoft.com/downloads/details.aspx?FamilyID=cb42f741-8fb1-4f43-a5fa-812096f8d1e8&displaylang=en.

    Now, follow these steps to create your first ASP.NET MVC project using Razor:
    1. Open Visual Studio 2010.
    2. Create a new project. Select File->New->Project (Shift Control N).
    3. You will see the list of project types, which should look similar to what's shown.
    4. Select "ASP.NET MVC 3 Web Application (Razor)." Set the application name to RazorTest and the path to c:\projects\RazorTest for this tutorial. If you accidentally select ASPX, you will end up with the standard ASP.NET view engine and template, which isn't what you want.
    5. For this tutorial, and ONLY for this tutorial, select "No, do not create a unit test project." In general, you should create and use a unit test project. Code without unit tests is kind of like diet ice cream. It just isn't very good.

    Now, once we have this done, our brand new project will be created. In all likelihood, Visual Studio will leave you looking at the "HomeController.cs" class. Immediately, you should notice one difference. The Index action used to look like:

        public ActionResult Index()
        {
            ViewData["Message"] = "Welcome to ASP.NET MVC!";
            return View();
        }

    While this will still compile and run just fine, ASP.NET MVC 3 has a much nicer way of doing this:

        public ActionResult Index()
        {
            ViewModel.Message = "Welcome to ASP.NET MVC!";
            return View();
        }

    Instead of using ViewData we are using the new ViewModel object, which uses the new dynamic typing of .NET 4.0 to allow us to express ourselves much more cleanly. This isn't a tutorial on ALL of MVC 3, but the ViewModel concept is one we will need as we dig into Razor.

    What comes in the box?

    When we create a project using the ASP.NET MVC 3 template with Razor, we get a standard project setup, just like we did in ASP.NET MVC 2.0, but with some differences. Instead of seeing ".aspx" view files and ".ascx" files, we see files with the ".cshtml" extension, which is the default Razor extension. Before we discuss the details of a Razor file, one thing to keep in mind is that since this is an extremely early preview, IntelliSense is not currently enabled with the Razor view engine. This is promised as an update before the final release. Just like with the ASPX view engine, the convention of the folder name for a set of views matching the controller name without the word "Controller" still stands. Similarly, each action in the controller will usually have a corresponding view file in the appropriate view directory. Remember, in ASP.NET MVC, convention over configuration is key to successful development!

    The initial template organizes views in the following folders, located in the project under Views:
    - Account – The default account management views used by the Account controller. Each file represents a distinct view.
    - Home – Views corresponding to the appropriate actions within the home controller.
    - Shared – This contains common view objects used by multiple views. Within here, master pages are stored, as well as partial page views (user controls). By convention, these partial views are named "_XXXPartial.cshtml", where XXX is the appropriate name, such as _LogonPartial.cshtml. Additionally, display templates are stored under here.

    With this in mind, let us take a look at the index.cshtml file under the Home view directory. When you open up index.cshtml you should see:

        1:  @inherits System.Web.Mvc.WebViewPage
        2:  @{
        3:      View.Title = "Home Page";
        4:      LayoutPage = "~/Views/Shared/_Layout.cshtml";
        5:  }
        6:  <h2>@View.Message</h2>
        7:  <p>
        8:      To learn more about ASP.NET MVC visit <a href="http://asp.net/mvc" title="ASP.NET MVC
        9:      Website">http://asp.net/mvc</a>.
       10:  </p>

    So looking through this, we observe the following facts:
    - Line 1 imports the base page that all views (using Razor) are based on, which is System.Web.Mvc.WebViewPage. Note that this is different from System.Web.Mvc.ViewPage, which is used by ASP.NET MVC 2.0. Also note that instead of the <% %> syntax, we use the very simple '@' sign. The view engine contains enough context-sensitive logic that it can even distinguish between @ in code and @ in an email address. It's a very clean markup.
    - Line 2 introduces the idea of a code block in Razor. A code block is a scoping mechanism just like it is in a normal C# class. It is designated by @{ ... } and any C# code can be placed in between. Note that this is all server-side code just like it is when using the ASPX engine and <% %>.
    - Line 3 allows us to set the page title in the client page's file. This is a new feature which I'll talk more about when we get to master pages, but it is another of the nice things Razor brings to ASP.NET MVC development.
    - Line 4 is where we specify our "master" page, but as you can see, you can place it almost anywhere you want, because you tell it where it is located. A layout page is similar to a master page, but it gains a bit when it comes to flexibility. Again, we'll come back to this in a later installment.
    - Line 6 and beyond is where we display the contents of our view. No more using <%: %> intermixed with code. Instead, we get to use very clean syntax such as @View.Message. This is a lot easier to read than <%: View.Message %>, especially when intermixed with HTML. For example:

        <p>
            My name is @View.Name and I live at @View.Address
        </p>

    Compare this to the equivalent using the ASPX view engine:

        <p>
            My name is <%: View.Name %> and I live at <%: View.Address %>
        </p>

    While not an earth-shaking simplification, it is easier on the eyes. As we explore other features, this clean markup will become more and more valuable.
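    To connect this with the ViewModel object we saw earlier, the controller side of that last snippet could look something like the sketch below. The Details action and the sample values are hypothetical, just to feed @View.Name and @View.Address in the view:

        public class HomeController : Controller
        {
            public ActionResult Details()
            {
                //ViewModel is the dynamic property bag introduced in MVC 3 Preview 1;
                //the values here are made-up sample data
                ViewModel.Name = "John Doe";
                ViewModel.Address = "1 Example Street";
                return View();
            }
        }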

    Read the article

  • CodePlex Daily Summary for Friday, April 16, 2010

    CodePlex Daily Summary for Friday, April 16, 2010

    New Projects
    - [C#] UML Touch: This project is my master thesis project for MSc Software Engineering at Oxford Brookes University. The goal of this project is to develop a UML Edi...
    - 3D Chart Reports, using ASP.NET 2.0: Project focuses on the visibility of the internal supply chain management process. The objective is materialized using MSChart and hence provides asse...
    - Active Directory Health: ActiveDirectoryHealth (ADHealth) Aim: To create a suite of tools to allow an Active Directory administrator to monitor and identify problems, particu...
    - Art4Desktop: Simple desktop application
    - Auto Increment field for any entity in MS CRM: Auto Increment field for any entity in MS CRM will add an attribute in any entity which will be of auto increment type.
    - Convection Game Engine (Basic Edition): A basic 2D game engine written in C# using XNA 3.1
    - CSharp Intellisense: VS 2010 IntelliSense plug-in. Provides custom IntelliSense for the CSharp editor that is capable of filtering out events, properties or methods. ...
    - EP: Generic platform for enterprise applications developed on .NET
    - File Change Checker: A simple Windows Forms application that checks and copies all modified files given a specified date. It aims to help developers that use Visual Stu...
    - Folder: Folder is a puzzle platformer game which provides you a whole new experience. You can fold objects in gameplay; a wall becomes a road, a high sti...
    - Industrial Dashboard: WCF service that allows executing SQL Server stored procedures straight from javascript code, enabling sending and receiving structured data withou...
    - kafrilion: Open source platform game
    - LogikBug.Injection: LogikBug.Injection is an IOC container and is very lightweight. It is very simple to use and yet very robust; there are many options and ways to ...
    - MongoMvc: NoSql with MongoDB, NoRM and ASP.NET MVC. For more details check out the blog post http://weblogs.asp.net/shijuvarghese/archive/2010/04/16/nosql-wi...
    - MOSSDAL: MOSS Data Access Layer for data from the SharePoint Lists Service: MOSSDAL is a lightweight framework for working with SharePoint MOSS List data using the List web service. It can be used with Silverlight or regul...
    - Should: This is a set of test framework agnostic assertion extensions. This project was born because test runners Should be independent of the...
    - Software Localization Tool: summary
    - TIMETABLEASY: planning manager
    - TR9: Implementations of T9 technologies from mobile devices to personal computers and laptops
    - Trp net tools: net tool
    - Twep: Twep is a JavaScript lib.
    - Using PowerPivot to analyze MS Dynamics NAV: Project shows how to prepare MS Dynamics NAV data for analysis in PowerPivot for Excel. Project includes Data Warehouse demo database, sql procedure...
    - vmware virtual mashines management system for education: System for managing many classes with PCs and VMware virtual machines (bad english, sorry)
    - WebSite: Mr. John's projects
    - Webtrends DX and DC Services Libraries: This project is set up to provide assemblies to simplify access to the REST based DX and DC Web Services that Webtrends provides. For access you ne...
    - YetAnotherFileRenamer: Searches a directory and/or sub-directories for files matching a certain extension and copies and renames them to the same folder. The intent is a...

    New Releases
    - 3D Chart Reports, using ASP.NET 2.0: Beta iscm1.0.0.1: Its first release; might be tolerant to bugs.
    - A Guide to Parallel Programming: Drop 2 - Guide Chapters 2 and 5, accompanying code: This is Drop 2 with just the Guide Chapters 2 and 5 and the accompanying code samples. This drop requires Visual Studio 2010 Beta 2 or later in ord...
    - AnimeJ: AnimeJ 1.1: This new release features reversible animations. It is fully backward compatible, and by simply specifying an additional parameter to Run() you can...
    - AutoPoco: AutoPoco 0.4: A load of additions to the convention system; inheritance precedence added; and a pile of extra data sources
    - CSharp Intellisense: V1.5: CSharpIntellisensePresenter (Free). Created by: Bnaya Eshet (credit to Karl Shifflett). Last updated: Tuesday, April 13, 2010. Version...
    - DevTreks - social budgeting that improves lives and livelihoods: Social Budgeting Web Software, DevTreks alpha 4: Alpha 4 upgrades all story-telling with one very basic, simple 'story' data pattern, schema, and stylesheet. Basic, simple stories and eBook pa...
    - ESPEHA: Espeha 4.3: Deletion of categories and tasks via F8; saving on Ctrl+S, Ctrl+Shift+S; navigating to search on Ctrl+F; hiding on Ctrl+Q, Ctrl+X, Ctrl+H, Q, X, H + Drag ...
    - Folder Bookmarks: Folder Bookmarks 1.4.4: This is the latest version of Folder Bookmarks (1.4.4), with general improvements. It has an installer - it will create a directory 'CPascoe' in My...
    - HouseFly controls: HouseFly controls alpha 0.9.1: Version 0.9.1 alpha release of HouseFly controls
    - Industrial Dashboard: 2.0 Beta: 2.0 Beta includes: IndustrialDashboard Framework; sample widgets: TabularReport, DropDownPicker, DatePicker, ScopePicker; sample html pages
    - kdar: KDAR 0.0.20: KDAR - Kernel Debugger Anti Rootkit - signature bases updated - many bugs fixed
    - LogikBug.Injection: Initial Release: This project is dependent upon Microsoft.Practices.ServiceLocation.IServiceLocator and must be referenced when referencing LogikBug.Injection.
    - Mesopotamia Experiment: Mesopotamia 1.2.72: New features - select organisms to load in sim or in hw mode. Fixes - fixed robotics engine not shutting down properly - selects new robotics port o...
    - MobySharp: MobySharp 1.1: Fixed GetComments. Added the new Likes feature
    - MongoMvc: NoSQL with MongoDB, NoRM and ASP.NET MVC: A demo on NoSQL with MongoDB, NoRM and ASP.NET MVC. To run the demo application do the following actions: 1. Create a directory called C:\data\...
    - NetMassDownloader: Release 1.6.0.0: Mass downloader for .NET Framework which allows you to download .NET Framework source code all at once. Offline debugging for VS2005, VS2008, Code...
    - Nito.KitchenSink: Version 4: Features added in this release: PDBs are source-indexed to CodePlex, so it should be possible to step through the code (with on-demand downloads) w...
    - Nito.KitchenSink: Version 5: The only feature for this release is support of the .NET 4.0 platform. Dependencies: Nito.Linq 0.4 Beta (released 2010-04-15), Rx 1.0.2441.0 (rele...
    - Nito.LINQ: Beta (v0.4): Rx version: The "with Rx" versions of Nito.LINQ are built against Rx 1.0.2441.0, released 2010-04-15. Breaking changes: IEnumerable<T>.Min and IEnum...
    - N-Twill Twitter Client for VB.NET: NTWILL PROJECT 15-April-2010: This file is a Zip containing the project as edited up to this point... it has a few functions here and there, but I'm posting it to keep a referen...
    - PanBrowser: 1.2.0: Added screensaver support (hit a key to exit); up/down keys cycle through images.
    - PersianDateTimePicker, PersianMonthCalendar: PersianDatetimePicker, PersianMonthCalendar: PersianDatetimePicker, PersianMonthCalendar 1.1 releases
    - PokeIn Comet Ajax Library: PokeIn Library v03: New features! Client MessageLog management added. CrossBrowser Script Injector added, to work with browsers without Ajax support or with ActiveX disabled...
    - PokeIn Comet Ajax Library: PokeIn Sample VS09: Sample project of PokeIn for VS09. Contains version 0.3 of the library
    - PokeIn Comet Ajax Library: PokeIn Sample VS10: Sample Visual Studio 10 project for PokeIn. Contains the v03 library of PokeIn
    - Silverlight Toolkit: April 2010: Suggestions? Features? Questions? Ask questions in our Silverlight Toolkit forum on Silverlight.net. The forum is the best resource for Silverlig...
    - Software Localization Tool: SharpSLT 0.9: This is the first release of SharpSLT. Features: Framework: .NET 3.5; UI language: English; works as standalone and as External Tool for Visual C#; Ex...
    - Star Trooper for XNA 2D Tutorial: Lesson two content: Here is Lesson two original content for the StarTrooper 2D XNA tutorial. The blog tutorial has now started over on http://xna-uk.net/blogs/darkgen...
    - StyleCop for ReSharper: StyleCop for ReSharper 5.0.14714.1: StyleCop 4.3.3.0, ReSharper 5.0.1659.36, Visual Studio 2008 / Visual Studio 2010. Installation instructi...
    - TaskUnZip for SSIS: TaskUnZip for SSIS 1.1.1.0 (beta 2): Ver. 1.1.1.0 (beta2). Bug: corrected installation bug. Add: support for SQL SERVER 2008 / 2005. Minor corrections. Ver. 1.1.0.0 (Tnx to: Kevin Wendler)...
    - TaskUnZip for SSIS: TaskUnZip for SSIS 1.2.0.0: Ver. 1.2.0.0. Add: support for SQL SERVER 2008 / 2005. Add: recursive compress.* Add: filter option for extract and compress file.* Add: test archiv...
    - Test Project (ignore): abcde: aaaaada
    - TR9: TR9 Beta: This setup is the first release of the program.
    - TR9: TR9 Source Code: TR9 Source Code
    - Using PowerPivot to analyze MS Dynamics NAV: GL sql procedures and demo DB: For using the sql procedures and creating the DW database, please see Documentation.
    - VivoSocial: VivoSocial 7.1.1: Version 7.1.1 of VivoSocial has been released. If you experienced any issues with the previous version, please update your modules to the 7.1.1 rel...
    - WPF Inspirational Quote Management System: Release 1.1.1: Changed to only allow one running instance at a time. Custom icon in system tray popup removed for now as this breaks when text size is set to gr...
    - WPF ShaderEffect Generator: WPF ShaderEffect Generator 1.6.1: Just a minor release to fix the problem with resource Uri generation in the C# ShaderEffect files. Changes: The Uri should be correctly generated no...

    Most Popular Projects: Rawr, WBFS Manager, AJAX Control Toolkit, Microsoft SQL Server Product Samples: Database, Silverlight Toolkit, Windows Presentation Foundation (WPF), ASP.NET, Microsoft SQL Server Community & Samples, PHPExcel, patterns & practices – Enterprise Library

    Most Active Projects: Rawr, patterns & practices – Enterprise Library, GMap.NET - Great Maps for Windows Forms & Presentation, Industrial Dashboard, Farseer Physics Engine, Ionics Isapi Rewrite Filter, NB_Store - Free DotNetNuke Ecommerce Catalog Module, BlogEngine.NET, jQuery Library for SharePoint Web Services, DotRas

    Read the article

  • Handling HumanTask attachments in Oracle BPM 11g PS4FP+ (I)

    - by ccasares
    Adding attachments to a HumanTask is a feature that exists in Oracle HWF (Human Workflow) since 10g. However, in 11g there have been many improvements on this feature and this entry will try to summarize them. Oracle BPM 11g 11.1.1.5.1 (aka PS4 Feature Pack or PS4FP) introduced two great features: Ability to link attachments at a Task scope or at a Process scope: "Task" attachments are only visible within the scope (lifetime) of a task. This means that, initially, any member of the assignment pattern of the Human Task will be able to handle (add, review or remove) attachments. However, once the task is completed, subsequent human tasks will not have access to them. This does not mean those attachments got lost. Once the human task is completed, attachments can be retrieved in order to, i.e., check them in to a Content Server or to inject them to a new and different human task. Aside note: a "re-initiated" human task will inherit comments and attachments, along with history and -optionally- payload. See here for more info. "Process" attachments are visible within the scope of the process. This means that subsequent human tasks in the same process instance will have access to them. Ability to use Oracle WebCenter Content (previously known as "Oracle UCM") as the backend for the attachments instead of using HWF database backend. This feature adds all content server document lifecycle capabilities to HWF attachments (versioning, RBAC, metadata management, etc). As of today, only Oracle WCC is supported. However, Oracle BPM Suite does include a license of Oracle WCC for the solely usage of document management within BPM scope. Here are some code samples that leverage the above features. Retrieving uploaded attachments -Non UCM- Non UCM attachments (default ones or those that have existed from 10g, and are stored "as-is" in HWK database backend) can be retrieved after the completion of the Human Task. Firstly, we need to know whether any attachment has been effectively uploaded to the human task. There are two ways to find it out: Through an XPath function: Checking the execData/attachment[] structure. For example: Once we are sure one ore more attachments were uploaded to the Human Task, we want to get them. In this example, by "get" I mean to get the attachment name and the payload of the file. Aside note: Oracle HWF lets you to upload two kind of [non-UCM] attachments: a desktop document and a Web URL. This example focuses just on the desktop document one. In order to "retrieve" an uploaded Web URL, you can get it directly from the execData/attachment[] structure. Attachment content (payload) is retrieved through the getTaskAttachmentContents() XPath function: This example shows how to retrieve as many attachments as those had been uploaded to the Human Task and write them to the server using the File Adapter service. The sample process excerpt is as follows:  A dummy UserTask using "HumanTask1" Human Task followed by a Embedded Subprocess that will retrieve the attachments (we're assuming at least one attachment is uploaded): and once retrieved, we will write each of them back to a file in the server using a File Adapter service: In detail: We've defined an XSD structure that will hold the attachments (both name and payload): Then, we can create a BusinessObject based on such element (attachmentCollection) and create a variable (named attachmentBPM) of such BusinessObject type. We will also need to keep a copy of the HumanTask output's execData structure. 
Therefore we need to create a variable of type TaskExecutionData... ...and copy the HumanTask output execData to it: Now we get into the embedded subprocess that will retrieve the attachments' payload. First, and using an XSLT transformation, we feed the attachmentBPM variable with the name of each attachment and setting an empty value to the payload: Please note that we're using the XSLT for-each node to create as many target structures as necessary. Also note that we're setting an Empty text to the payload variable. The reason for this is to make sure the <payload></payload> tag gets created. This is needed when we map the payload to the XML variable later. Aside note: We are assuming that we're retrieving non-UCM attachments. However in real life you might want to check the type of attachment you're handling. The execData/attachment[]/storageType contains the values "UCM" for UCM type attachments, "TASK" for non-UCM ones or "URL" for Web URL ones. Those values are part of the "Ext.Com.Oracle.Xmlns.Bpel.Workflow.Task.StorageTypeEnum" enumeration. Once we have fed the attachmentsBPM structure and so it now contains the name of each of the attachments, it is time to iterate through it and get the payload. Therefore we will use a new embedded subprocess of type MultiInstance, that will iterate over the attachmentsBPM/attachment[] element: In every iteration we will use a Script activity to map the corresponding payload element with the result of the XPath function getTaskAttachmentContents(). Please, note how the target array element is indexed with the loopCounter predefined variable, so that we make sure we're feeding the right element during the array iteration:  The XPath function used looks as follows: hwf:getTaskAttachmentContents(bpmn:getDataObject('UserTask1LocalExecData')/ns1:systemAttributes/ns1:taskId, bpmn:getDataObject('attachmentsBPM')/ns:attachment[bpmn:getActivityInstanceAttribute('SUBPROCESS3067107484296', 'loopCounter')]/ns:fileName)  where the input parameters are: taskId of the just completed Human Task attachment name we're retrieving the payload from array index (loopCounter predefined variable)  Aside note: The reason whereby we're iterating the execData/attachment[] structure through embedded subprocess and not, i.e., using XSLT and for-each nodes, is mostly because the getTaskAttachmentContents() XPath function is currently not available in XSLT mappings. So all this example might be considered as a workaround until this gets fixed/enhanced in future releases. Once this embedded subprocess ends, we will have all attachments (name + payload) in the attachmentsBPM variable, which is the main goal of this sample. But in order to test everything runs fine, we finish the sample writing each attachment to a file. To that end we include a final embedded subprocess to concurrently iterate through each attachmentsBPM/attachment[] element: On each iteration we will use a Service activity that invokes a File Adapter write service. In here we have two important parameters to set. First, the payload itself. The file adapter awaits binary data in base64 format (string). We have to map it using XPath (Simple mapping doesn't recognize a String as a base64-binary valid target):  Second, we must set the target filename using the Service Properties dialog box:  Again, note how we're making use of the loopCounter index variable to get the right element within the embedded subprocess iteration. Handling UCM attachments will be part of a different and upcoming blog entry. 
Once I finish with all posts on this matter, I will upload the whole sample project to java.net.

    Read the article

  • CodePlex Daily Summary for Tuesday, April 13, 2010

    CodePlex Daily Summary for Tuesday, April 13, 2010New ProjectsChat Neo: Video chatDev-wow HappyFuwa: Silverlight + Asp.net + Ajax 实现的以北京奥运为题材,福娃在线聊天互动系统 登录系统后,你可以和线上的朋友即时互动,走动 聊天 动作等都会呈现给其他的在线用户Dynamic Configuration Manager: Dynamic Configuration Manager GameHelper: the project of myselfGeotron: Geotron is a C# geolocation library to resolve postcodes and addresses to co-ordinates, to assist developers in creating location-aware applications. InfoPath Forms Services 2010 Web Testing Toolkit: This project has the tools and information needed to write Visual Studio web tests for InfoPath Forms Services 2010.IronBrainFuck, SimpleBrainFuck: IronBrainFuck and SimpleBrainFuck makes it easier for BrainFuck programmers to develop BrainFuck-compatible programs. It's developed in C#.Runtime Intelligence API: The Runtime Intelligence API library and samples provided by PreEmptive Solutions.SilverVNC 1.0: This project is a Silverlight VNC Viewer. It requires Silverlight 4.0 and works in Out of Browser with full-trust.Snippet Creator: Yet another Visual Studio plugin for creating code snippets.Software Codex: Software Codex is a collection of projects developed in .net to provide a set of libraries and functionalities for developers. It is divided into m...TestCrm: Let go!Make our CrmVidCoder: VidCoder is a DVD ripping and video transcoding application. It uses HandBrake for the encoding engine, but has a revamped and easy to use UI writt...WPF Data Virtualization: Component for displaying and interacting a large data set in WPF application.WPF Gantt chart: Gantt chart control for WPFNew ReleasesAJAX Control Toolkit: 40412: AJAX Control Toolkit Release Notes - April 12th 2010 Release Version 40412April 12, 2010 release of the AJAX Control Toolkit. AJAX Control Toolkit...ASP.NET MVC | SCAFFOLD: ASP.NET MVC SCAFFOLD 1.0 PREVIEW: Primeiro release do ASP.NET MVC SCAFFOLD.Autenticar no OpenLDAP utilizando pGIna: LDAPAuth plugin: Release: DLL LDAPAuth Brief: pGina pluginBluetooth Radar: Version 1.8: Add position helper class to test whether a given point is on the interior of a circle. Random set of Devices on the radar + Zindex changes on Mous...Database Searcher: DB-Searcher Binaries v0.1: First beta version containing following features: Search exact database values via .NET DB-Provider Microsoft SQL MySQL .NET Connector (no .NET t...DBSourceTools: DBSourceTools_1.2.0.7: Release 1.2.0.7 Extended search engine from (pegas)'s patch. Fixed Script Data bug with reserved word (eripsni). Write Targets can now create targe...ESB Toolkit Extensions: Tellago SOA ESB Extenstions v0.4: Windows Installer file that installs Library on a BizTalk ESB 2.0 system. This Install automatically configures the esb.config to use the new compo...Fluent Assertions: Release 1.2: See this blog post for more details on this release: http://www.dennisdoomen.net/2010/04/fluent-assertions-12-has-been-released.htmlFNA Fractal Numerical Algorithm for a new encryption technology: FNA: This is a latest distribution ( 0.04 at the moment). Is a Perl package (.pm). 
More information on: http://search.cpan.org/~anak/Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts 3.0.6 Released!: Hi, Today we have released the final version of Visifire v3.0.6 which contains the following major features: * Zoom using interactive ZoomRec...HD-Trailers.NET Downloader: HD-Trailers.NET_Downloader_v.91_BETA: - added configuration option 'FeedAddress' to specify the URL of the RSS feed to consume - implemented fix for workitem4260: AddDate = false; will ...HobbyBrew Mobile: Beta 1: First public BetaHome Access Plus+: v3.2.6.1: v3.2.6.1 Fixed the wrong date in the iCal Generator Fixed the admin booking posting logging it as being booked by the admin Fixed the problem o...HTML Ruby: 6.21.1: Added back the space ruby text option More consistent ruby text positioning regardless to the page's stylesInfoPath Forms Services 2010 Web Testing Toolkit: IPFS 2010 Web Test Toolkit 20100412 for VS2008: The ExtractAndSubstituteDynamicInfoPathData web test plugin. To use it, simply add the plugin to your web test. It automatically recognizes the inf...IronPython: 2.6.1: Hello Python Community, We’re pleased to announce the final release of IronPython 2.6.1. This version of IronPython makes great strides in stabili...IronRuby: 1.0: IronRuby 1.0 is the first stable version of IronRuby, targeting Ruby 1.8.6 compatibility. For a high-level compatibility report solely based on Rub...METAR.NET Decoder: 0.3.x beta: First public release. Main of the application is working. Metar can be downloaded, decoded, updated and encoded back to metar string. Release incl...MiniTwitter: 1.11: MiniTwitter 1.11 更新内容 修正 設定ファイルを自動でバックアップして、破損したときは出来るだけ修正するように。 初回起動時にタイムラインを更新しようとすると落ちるバグを修正。MSBuild Mercurial Tasks: 1.0.1 Stable: Ready for Production release. This version integrates all the basic functionalities of Mercurial as defined in the Use Case 1.Multiplayer Quiz: Release 1_7_0_0: Latest Version Strongly recommended to use .NET 4.0 now that it is in RC It can be downloaded from hereMVC Foolproof Validation: Beta 0.9.3754: First Beta release. Addressed several bugs from alpha along with some considerable class refactoring. ModelAwareValidationAttribute will make creat...NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Excel 2007 Template, version 1.0.1.121: The NodeXL Excel 2007 template displays a network graph using edge and vertex lists stored in an Excel 2007 workbook. What's NewThis version allow...Proxi [Proxy Interface]: Proxi Release 1.0.0.412: Proxi Release 1.0.0.412QueryUnit: QueryUnitPOC v. 0.0.0.8: This version add support for AreNotEqual, Greater, Less and fix some problems with "format" attribute used in conjunction with the string data type...Rainier: Trabalhos de orçamento empresarial: Estão disponíveis os arquivos de exemplo sobre o planejamento para orçamento empresarial. A resolução é referente aos exercícios explicados em sal...Runtime Intelligence API: Initial release: The initial release of the WCF contract & proxy assembly. AuthentiCode signed library.SharePoint Accelerators: Central Admin - Command Search: This web part allows you to search for a SharePoint 2010 Central Admin commands. 
This web part can come handy when you are demostrating SharePoint ...SharePoint Labs: SPLab5013A-FRA-Level100: SPLab5013A-FRA-Level100* This SharePoint Lab will teach you how to provision a computed site column that shows a customized view of an existing hid...SilverVNC 1.0: SilverVNC 1.0.3755.0: This download is the first release of the project published on www.silverlightplayground.org. For a detailed explanation please refer to http://www...Snippet Creator: SnippetCreator.Setup: This is the first and (I hope) final release.SQL Server Health & History (SQLH2): SQLH2 v.2.2.001: New Features Updated to use .Net 3.5 Job and Job history information implemented Last dif and log backup columns added Logical Disk implemented Dis...SQL Server Health & History (SQLH2): SQLH2PerfCollector v.2.1.003: Updated to run on .Net 3.5 Now installs to correct registry path on x64Star Trooper for XNA 2D Tutorial: Lesson one content: Here is Lesson one original content for the StarTrooper 2D XNA tutorial. The blog tutorial has now started over on http://xna-uk.net/blogs/darkgen...SysPad: SysPad 4.10.7.1: A folder management and scratchpad utility; especially useful in a business network setting that utilizes numerous, commonly used folders. The pro...TaskUnZip for SSIS: TaskUnZip for SSIS 1.1.0.0: Add: recursive compress. Add: filter option for exstract e compress file. (Tnx to: Kevin Wendler)TCP Wrapper: TCP Wrapper 1.0.0.3: Adding Client Accessor to CommingDataAvailableEventArgs ...UCD: Architecture: UCDArch 1.0: Production release of UCDArch 1.0 (for ASP.NET MVC 1.0). New Features including the ability to modify the NHibernate FlushMode, URL convention help...VCC: Latest build, v2.1.30412.0: Automatic drop of latest buildVidCoder: 0.1.0: First VidCoder beta release. It's missing a few features that will be added before release: Advanced x264 options In-GUI encode log Additiona...WatchersNET CKEditor™ Provider for DotNetNuke: CKEditor Provider 1.10.01: Whats New changesBrowser: Removed extra "\" sign from Current folder name selecting the root folder Browser: Fixed Folder Rendering Browser Fix...WPF Gantt chart: gantt: first, alpha version, of gantt chart for wpfxvanneste: Coverflow et thumbnail sharepoint: Code du coverflow silverlight du webcast sur les thumbnails sharepointMost Popular ProjectsWBFS ManagerRawrMicrosoft SQL Server Product Samples: DatabaseAJAX Control ToolkitSilverlight ToolkitWindows Presentation Foundation (WPF)ASP.NETMicrosoft SQL Server Community & SamplesFacebook Developer ToolkitPHPExcelMost Active ProjectsRawrnopCommerce. Open Source online shop e-commerce solution.AutoPocopatterns & practices – Enterprise LibraryShweet: SharePoint 2010 Team Messaging built with PexFarseer Physics EngineNB_Store - Free DotNetNuke Ecommerce Catalog ModuleIonics Isapi Rewrite FilterBlogEngine.NETBeanProxy

    Read the article

  • Credentials can not be delegated - Alfresco Share

    - by leftcase
    I've hit a brick wall configuring Alfresco 4.0.d on Redhat 6. I'm using Kerberos authentication, it seems to be working normally, and single sign on is working on the main alfresco app itself. I've been through the configuration steps to get the share app working, but try as I may, I keep getting this error in catalina.out each time a browser accesses http://server:8080/share along with a 'Windows Security' password box. WARN [site.servlet.KerberosSessionSetupPrivilegedAction] credentials can not be delegated! Here's what I've done so far: Using AD users and computers, selected the alfrescohttp account, and selected 'trust this user for delegation to any service (Kerberos only). Copied /opt/alfresco-4.0.d/tomcat/shared/classes/alfresco/web-extension/share-config-custom.xml.sample to share-config-custom.xml and edited like this: <config evaluator="string-compare" condition="Kerberos" replace="true"> <kerberos> <password>*****</password> <realm>MYDOMAIN.CO.UK</realm> <endpoint-spn>HTTP/[email protected]</endpoint-spn> <config-entry>ShareHTTP</config-entry> </kerberos> </config> <config evaluator="string-compare" condition="Remote"> <remote> <keystore> <path>alfresco/web-extension/alfresco-system.p12</path> <type>pkcs12</type> <password>alfresco-system</password> </keystore> <connector> <id>alfrescoCookie</id> <name>Alfresco Connector</name> <description>Connects to an Alfresco instance using cookie-based authentication</description> <class>org.springframework.extensions.webscripts.connector.AlfrescoConnector</class> </connector> <endpoint> <id>alfresco</id> <name>Alfresco - user access</name> <description>Access to Alfresco Repository WebScripts that require user authentication</description> <connector-id>alfrescoCookie</connector-id> <endpoint-url>http://localhost:8080/alfresco/wcs</endpoint-url> <identity>user</identity> <external-auth>true</external-auth> </endpoint> </remote> </config> Setup the /etc/krb5.conf file like this: [logging] default = FILE:/var/log/krb5libs.log kdc = FILE:/var/log/krb5kdc.log admin_server = FILE:/var/log/kadmind.log [libdefaults] default_realm = MYDOMAIN.CO.UK default_tkt_enctypes = rc4-hmac default_tgs_enctypes = rc4-hmac forwardable = true proxiable = true [realms] MYDOMAIN.CO.UK = { kdc = mydc.mydomain.co.uk admin_server = mydc.mydomain.co.uk } [domain_realm] .mydc.mydomain.co.uk = MYDOMAIN.CO.UK mydc.mydomain.co.uk = MYDOMAIN.CO.UK /opt/alfresco-4.0.d/java/jre/lib/security/java.login.config is configured like this: Alfresco { com.sun.security.auth.module.Krb5LoginModule sufficient; }; AlfrescoCIFS { com.sun.security.auth.module.Krb5LoginModule required storeKey=true useKeyTab=true keyTab="/etc/alfrescocifs.keytab" principal="cifs/server.mydomain.co.uk"; }; AlfrescoHTTP { com.sun.security.auth.module.Krb5LoginModule required storeKey=true useKeyTab=true keyTab="/etc/alfrescohttp.keytab" principal="HTTP/server.mydomain.co.uk"; }; com.sun.net.ssl.client { com.sun.security.auth.module.Krb5LoginModule sufficient; }; other { com.sun.security.auth.module.Krb5LoginModule sufficient; }; ShareHTTP { com.sun.security.auth.module.Krb5LoginModule required storeKey=true useKeyTab=true keyTab="/etc/alfrescohttp.keytab" principal="HTTP/server.mydomain.co.uk"; }; And finally, the following settings in alfresco-global.conf authentication.chain=kerberos1:kerberos,alfrescoNtlm1:alfrescoNtlm kerberos.authentication.real=MYDOMAIN.CO.UK kerberos.authentication.user.configEntryName=Alfresco kerberos.authentication.cifs.configEntryName=AlfrescoCIFS 
kerberos.authentication.http.configEntryName=AlfrescoHTTP kerberos.authentication.cifs.password=****** kerberos.authentication.http.password=***** kerberos.authentication.defaultAdministratorUserNames=administrator ntlm.authentication.sso.enabled=true As I say, I've hit a brick wall with this and I'd really appreciate any help you can give me! This question is also posted on the Alfresco forum, but I wondered if any folk here on serverfault have come across similar implementation challenges?

    Read the article

  • SQL SERVER – Import CSV into Database – Transferring File Content into a Database Table using CSVexpress

    - by pinaldave
    One of the most common data integration tasks I run into is a desire to move data from a file into a database table.  Generally the user is familiar with his data, the structure of the file, and the database table, but is unfamiliar with data integration tools and therefore views this task as something that is difficult.  What these users really need is a point and click approach that minimizes the learning curve for the data integration tool.  This is what CSVexpress (www.CSVexpress.com) is all about!  It is based on expressor Studio, a data integration tool I’ve been reviewing over the last several months. With CSVexpress, moving data between data sources can be as simple as providing the database connection details, describing the structure of the incoming and outgoing data and then connecting two pre-programmed operators.   There’s no need to learn the intricacies of the data integration tool or to write code.  Let’s look at an example. Suppose I have a comma separated value data file with data similar to the following, which is a listing of terminated employees that includes their hiring and termination date, department, job description, and final salary. EMP_ID,STRT_DATE,END_DATE,JOB_ID,DEPT_ID,SALARY 102,13-JAN-93,24-JUL-98 17:00,Programmer,60,"$85,000" 101,21-SEP-89,27-OCT-93 17:00,Account Representative,110,"$65,000" 103,28-OCT-93,15-MAR-97 17:00,Account Manager,110,"$75,000" 304,17-FEB-96,19-DEC-99 17:00,Marketing,20,"$45,000" 333,24-MAR-98,31-DEC-99 17:00,Data Entry Clerk,50,"$35,000" 100,17-SEP-87,17-JUN-93 17:00,Administrative Assistant,90,"$40,000" 334,24-MAR-98,31-DEC-98 17:00,Sales Representative,80,"$40,000" 400,01-JAN-99,31-DEC-99 17:00,Sales Manager,80,"$55,000" Notice the concise format used for the date values, the fact that the termination date includes both date and time information, and that the salary is clearly identified as money by the dollar sign and digit grouping.  In moving this data to a database table I want to express the dates using a format that includes the century since it’s obvious that this listing could include employees who left the company in both the 20th and 21st centuries, and I want the salary to be stored as a decimal value without the currency symbol and grouping character.  Most data integration tools would require coding within a transformation operation to effect these changes, but not expressor Studio.  Directives for these modifications are included in the description of the incoming data. Besides starting the expressor Studio tool and opening a project, the first step is to create connection artifacts, which describe to expressor where data is stored.  For this example, two connection artifacts are required: a file connection, which encapsulates the file system location of my file; and a database connection, which encapsulates the database connection information.  With expressor Studio, I use wizards to create these artifacts. First click New Connection > File Connection in the Home tab of expressor Studio’s ribbon bar, which starts the File Connection wizard.  In the first window, I enter the path to the directory that contains the input file.  Note that the file connection artifact only specifies the file system location, not the name of the file. Then I click Next and enter a meaningful name for this connection artifact; clicking Finish closes the wizard and saves the artifact. 
To create the Database Connection artifact, I must know the location, or instance name, of the target database and have the credentials of an account with sufficient privileges to write to the target table.  To use expressor Studio’s features to the fullest, this account should also have the authority to create a table. I click New Connection > Database Connection in the Home tab of expressor Studio’s ribbon bar, which starts the Database Connection wizard.  expressor Studio includes high-performance drivers for many relational database management systems, so I can simply make a selection from the “Supplied database drivers” drop down control.  If my desired RDBMS isn’t listed, I can optionally use an existing ODBC DSN by selecting the “Existing DSN” radio button. In the following window, I enter the connection details.  With Microsoft SQL Server, I may choose to use Windows Authentication rather than account credentials.  After clicking Next, I enter a meaningful name for this connection artifact and clicking Finish closes the wizard and saves the artifact. Now I create a schema artifact, which describes the structure of the file data.  When expressor reads a file, all data fields are typed as strings.  In some use cases this may be exactly what is needed and there is no need to edit the schema artifact.  But in this example, editing the schema artifact will be used to specify how the data should be transformed; that is, reformat the dates to include century designations, change the employee and job IDs to integers, and convert the salary to a decimal value. Again a wizard is used to create the schema artifact.  I click New Schema > Delimited Schema in the Home tab of expressor Studio’s ribbon bar, which starts the Delimited Schema wizard.  In the first window, I click Get Data from File, which then displays a listing of the file connections in the project.  When I click on the file connection I previously created, a browse window opens to this file system location; I then select the file and click Open, which imports 10 lines from the file into the wizard. I now view the file’s content and confirm that the appropriate delimiter characters are selected in the “Field Delimiter” and “Record Delimiter” drop down controls; then I click Next. Since the input file includes a header row, I can easily indicate that fields in the file should be identified through the corresponding header value by clicking “Set All Names from Selected Row.”  Alternatively, I could enter a different identifier into the Field Details > Name text box.  I click Next and enter a meaningful name for this schema artifact; clicking Finish closes the wizard and saves the artifact. Now I open the schema artifact in the schema editor.  When I first view the schema’s content, I note that the types of all attributes in the Semantic Type (the right-hand panel) are strings and that the attribute names are the same as the field names in the data file.  To change an attribute’s name and type, I highlight the attribute and click Edit in the Attributes grouping on the Schema > Edit tab of the editor’s ribbon bar.  This opens the Edit Attribute window; I can change the attribute name and select the desired type from the “Data type” drop down control.  In this example, I change the name of each attribute to the name of the corresponding database table column (EmployeeID, StartingDate, TerminationDate, JobDescription, DepartmentID, and FinalSalary).  
Then for the EmployeeID and DepartmentID attributes, I select Integer as the data type, for the StartingDate and TerminationDate attributes, I select Datetime as the data type, and for the FinalSalary attribute, I select the Decimal type. But I can do much more in the schema editor.  For the datetime attributes, I can set a constraint that ensures that the data adheres to some predetermined specifications; a starting date must be later than January 1, 1980 (the date on which the company began operations) and a termination date must be earlier than 11:59 PM on December 31, 1999.  I simply select the appropriate constraint and enter the value (1980-01-01 00:00 as the starting date and 1999-12-31 11:59 as the termination date). As a last step in setting up these datetime conversions, I edit the mapping, describing the format of each datetime type in the source file. I highlight the mapping line for the StartingDate attribute and click Edit Mapping in the Mappings grouping on the Schema > Edit tab of the editor’s ribbon bar.  This opens the Edit Mapping window in which I either enter, or select, a format that describes how the datetime values are represented in the file.  Note the use of Y01 as the syntax for the year.  This syntax is the indicator to expressor Studio to derive the century by setting any year later than 01 to the 20th century and any year before 01 to the 21st century.  As each datetime value is read from the file, the year values are transformed into century and year values. For the TerminationDate attribute, my format also indicates that the datetime value includes hours and minutes. And now to the Salary attribute. I open its mapping and in the Edit Mapping window select the Currency tab and the “Use currency” check box.  This indicates that the file data will include the dollar sign (or in Europe the Pound or Euro sign), which should be removed. And on the Grouping tab, I select the “Use grouping” checkbox and enter 3 into the “Group size” text box, a comma into the “Grouping character” text box, and a decimal point into the “Decimal separator” character text box. These entries allow the string to be properly converted into a decimal value. By making these entries into the schema that describes my input file, I’ve specified how I want the data transformed prior to writing to the database table and completely removed the requirement for coding within the data integration application itself. Assembling the data integration application is simple.  Onto the canvas I drag the Read File and Write Table operators, connecting the output of the Read File operator to the input of the Write Table operator. Next, I select the Read File operator and its Properties panel opens on the right-hand side of expressor Studio.  For each property, I can select an appropriate entry from the corresponding drop down control.  Clicking on the button to the right of the “File name” text box opens the file system location specified in the file connection artifact, allowing me to select the appropriate input file.  I indicate also that the first row in the file, the header row, should be skipped, and that any record that fails one of the datetime constraints should be skipped. I then select the Write Table operator and in its Properties panel specify the database connection, normal for the “Mode,” and the “Truncate” and “Create Missing Table” options.  
If my target table does not yet exist, expressor will create the table using the information encapsulated in the schema artifact assigned to the operator. The last task needed to complete the application is to create the schema artifact used by the Write Table operator.  This is extremely easy, as another wizard is capable of using the schema artifact assigned to the Read File operator to create a schema artifact for the Write Table operator.  In the Write Table Properties panel, I click the drop down control to the right of the “Schema” property and select “New Table Schema from Upstream Output…” from the drop down menu. The wizard first displays the table description and in its second screen asks me to select the database connection artifact that specifies the RDBMS in which the target table will exist.  The wizard then connects to the RDBMS and retrieves a list of database schemas from which I make a selection.  The fourth screen gives me the opportunity to fine tune the table’s description.  In this example, I set the width of the JobDescription column to a maximum of 40 characters and select money as the type of the FinalSalary column.  I also provide the name for the table. This completes development of the application.  The entire application was created through the use of wizards and the required data transformations specified through simple constraints and specifications rather than through coding.  To develop this application, I only needed a basic understanding of expressor Studio, a level of expertise that can be gained by working through a few introductory tutorials.  expressor Studio is as close to a point and click data integration tool as one could want and I urge you to try this product if you have a need to move data between files or from files to database tables. Check out CSVexpress in more detail.  It offers a few basic video tutorials and a preview of expressor Studio 3.5, which will support the reading and writing of data into Salesforce.com. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
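    A quick postscript for the curious: the two trickiest conversions above (the two-digit-year pivot and the currency parsing) are easy to reproduce in code. The following C# sketch is my own illustration, not expressor code, applied to values from the sample file:

    using System;
    using System.Globalization;

    class SampleConversions
    {
        static void Main()
        {
            var culture = new CultureInfo("en-US");

            // Pivot two-digit years around 01, the same effect as the Y01
            // directive: 93 -> 1993 (20th century), 00 -> 2000 (21st century)
            culture.DateTimeFormat.Calendar.TwoDigitYearMax = 2001;

            DateTime start = DateTime.ParseExact("13-JAN-93", "dd-MMM-yy", culture);
            DateTime end = DateTime.ParseExact("24-JUL-98 17:00", "dd-MMM-yy HH:mm", culture);

            // NumberStyles.Currency strips the dollar sign and the grouping
            // commas, like the Currency and Grouping tabs in the mapping editor
            decimal salary = decimal.Parse("$85,000", NumberStyles.Currency, culture);

            // Prints: 1993-01-13 | 1998-07-24 17:00 | 85000
            Console.WriteLine("{0:yyyy-MM-dd} | {1:yyyy-MM-dd HH:mm} | {2}", start, end, salary);
        }
    }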

    Read the article

  • Generate Strongly Typed Observable Events for the Reactive Extensions for .NET (Rx)

    - by Bobby Diaz
    I must have read through the various explanations of and introductions to the new Reactive Extensions for .NET several times before the concepts finally started sinking in.  The article that gave me the ah-ha moment was over on SilverlightShow.net and titled Using Reactive Extensions in Silverlight.  The author did a good job comparing the "normal" way of handling events vs. the new "reactive" methods. Admittedly, I still have more to learn about the Rx Framework, but I wanted to put together a sample project so I could start playing with the new Observable and IObservable<T> constructs.  I decided to throw together a whiteboard application in Silverlight based on the Drawing with Rx example in the aforementioned article.  At the very least, I figured I would learn a thing or two about a new technology, but my real goal is to create a fun application that I can share with the kids since they love drawing and coloring so much! Here is the code sample that I borrowed from the article: var mouseMoveEvent = Observable.FromEvent<MouseEventArgs>(this, "MouseMove"); var mouseLeftButtonDown = Observable.FromEvent<MouseButtonEventArgs>(this, "MouseLeftButtonDown"); var mouseLeftButtonUp = Observable.FromEvent<MouseButtonEventArgs>(this, "MouseLeftButtonUp");       var draggingEvents = from pos in mouseMoveEvent                              .SkipUntil(mouseLeftButtonDown)                              .TakeUntil(mouseLeftButtonUp)                              .Let(mm => mm.Zip(mm.Skip(1), (prev, cur) =>                                  new                                  {                                      X2 = cur.EventArgs.GetPosition(this).X,                                      X1 = prev.EventArgs.GetPosition(this).X,                                      Y2 = cur.EventArgs.GetPosition(this).Y,                                      Y1 = prev.EventArgs.GetPosition(this).Y                                  })).Repeat()                          select pos;       draggingEvents.Subscribe(p =>     {         Line line = new Line();         line.Stroke = new SolidColorBrush(Colors.Black);         line.StrokeEndLineCap = PenLineCap.Round;         line.StrokeLineJoin = PenLineJoin.Round;         line.StrokeThickness = 5;         line.X1 = p.X1;         line.Y1 = p.Y1;         line.X2 = p.X2;         line.Y2 = p.Y2;         this.LayoutRoot.Children.Add(line);     }); One thing that was nagging at the back of my mind was having to deal with the event names as strings, as well as the verbose syntax of the Observable.FromEvent<TEventArgs>() method.  I came up with a couple of static/helper classes to resolve both issues and also created a T4 template to auto-generate these helpers for any .NET type.  
Take the following code from the above example: var mouseMoveEvent = Observable.FromEvent<MouseEventArgs>(this, "MouseMove"); var mouseLeftButtonDown = Observable.FromEvent<MouseButtonEventArgs>(this, "MouseLeftButtonDown"); var mouseLeftButtonUp = Observable.FromEvent<MouseButtonEventArgs>(this, "MouseLeftButtonUp"); Turns into this with the new static Events class: var mouseMoveEvent = Events.Mouse.Move.On(this); var mouseLeftButtonDown = Events.Mouse.LeftButtonDown.On(this); var mouseLeftButtonUp = Events.Mouse.LeftButtonUp.On(this); Or better yet, just remove the variable declarations altogether:     var draggingEvents = from pos in Events.Mouse.Move.On(this)                              .SkipUntil(Events.Mouse.LeftButtonDown.On(this))                              .TakeUntil(Events.Mouse.LeftButtonUp.On(this))                              .Let(mm => mm.Zip(mm.Skip(1), (prev, cur) =>                                  new                                  {                                      X2 = cur.EventArgs.GetPosition(this).X,                                      X1 = prev.EventArgs.GetPosition(this).X,                                      Y2 = cur.EventArgs.GetPosition(this).Y,                                      Y1 = prev.EventArgs.GetPosition(this).Y                                  })).Repeat()                          select pos; The Move, LeftButtonDown and LeftButtonUp members of the Events.Mouse class are readonly instances of the ObservableEvent<TTarget, TEventArgs> class that provide type-safe access to the events via the On() method.  Here is the code for the class: using System; using System.Collections.Generic; using System.Linq;   namespace System.Linq {     /// <summary>     /// Represents an event that can be managed via the <see cref="Observable"/> API.     /// </summary>     /// <typeparam name="TTarget">The type of the target.</typeparam>     /// <typeparam name="TEventArgs">The type of the event args.</typeparam>     public class ObservableEvent<TTarget, TEventArgs> where TEventArgs : EventArgs     {         /// <summary>         /// Initializes a new instance of the <see cref="ObservableEvent"/> class.         /// </summary>         /// <param name="eventName">Name of the event.</param>         protected ObservableEvent(String eventName)         {             EventName = eventName;         }           /// <summary>         /// Registers the specified event name.         /// </summary>         /// <param name="eventName">Name of the event.</param>         /// <returns></returns>         public static ObservableEvent<TTarget, TEventArgs> Register(String eventName)         {             return new ObservableEvent<TTarget, TEventArgs>(eventName);         }           /// <summary>         /// Creates an enumerable sequence of event values for the specified target.         /// </summary>         /// <param name="target">The target.</param>         /// <returns></returns>         public IObservable<IEvent<TEventArgs>> On(TTarget target)         {             return Observable.FromEvent<TEventArgs>(target, EventName);         }           /// <summary>         /// Gets or sets the name of the event.         /// </summary>         /// <value>The name of the event.</value>         public string EventName { get; private set; }     } } And this is how it's used:     /// <summary>     /// Categorizes <see cref="ObservableEvents"/> by class and/or functionality.     
/// </summary>     public static partial class Events     {         /// <summary>         /// Implements a set of predefined <see cref="ObservableEvent"/>s         /// for the <see cref="System.Windows.UIElement"/> class         /// that represent mouse related events.         /// </summary>         public static partial class Mouse         {             /// <summary>Represents the MouseMove event.</summary>             public static readonly ObservableEvent<UIElement, MouseEventArgs> Move =                 ObservableEvent<UIElement, MouseEventArgs>.Register("MouseMove");               // additional members omitted...         }     } The source code contains a static Events class with predefined members for various categories (Key, Mouse, etc.).  There is also an Events.tt template that you can customize to generate additional event categories for any .NET type.  All you should have to do is add the name of your class to the types collection near the top of the template:     types = new Dictionary<String, Type>()     {         //{ "Microsoft.Maps.MapControl.Map, Microsoft.Maps.MapControl", null }         { "System.Windows.FrameworkElement, System.Windows", null },         { "Whiteboard.MainPage, Whiteboard", null }     }; The template is also a bit rough at this point, but at least it generates code that *should* compile.  Please let me know if you run into any issues with it.  Some people have reported errors when trying to use T4 templates within a Silverlight project, but I was able to get it to work with a little black magic...  You can download the source code for this project or play around with the live demo.  Just be warned that it is at a very early stage so don't expect to find much today.  I plan on adding a lot more options like pen colors and sizes, saving, printing, etc. as time permits.  HINT: hold down the ESC key to erase! Enjoy! Additional Resources Using Reactive Extensions in Silverlight DevLabs: Reactive Extensions for .NET (Rx) Rx Framework Part III - LINQ to Events - Generating GetEventName() Wrapper Methods using T4
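    As a quick addendum: if you want to hand-roll a category instead of regenerating from the template, the pattern is the same one shown above for Events.Mouse. As a rough sketch (member names are my own; the T4 output may differ slightly), a keyboard category could look like this:

    using System.Windows;
    using System.Windows.Input;

    public static partial class Events
    {
        /// <summary>Predefined ObservableEvents for keyboard related events.</summary>
        public static partial class Key
        {
            /// <summary>Represents the KeyDown event.</summary>
            public static readonly ObservableEvent<UIElement, KeyEventArgs> Down =
                ObservableEvent<UIElement, KeyEventArgs>.Register("KeyDown");

            /// <summary>Represents the KeyUp event.</summary>
            public static readonly ObservableEvent<UIElement, KeyEventArgs> Up =
                ObservableEvent<UIElement, KeyEventArgs>.Register("KeyUp");
        }
    }

    With that in place, Events.Key.Down.On(this) plugs into the same Observable pipelines as the mouse events used by the whiteboard.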

    Read the article

  • Authenticating clients in the new WCF Http stack

    - by cibrax
    About this time last year, I wrote a couple of posts about how to use the “Interceptors” from the REST Starter Kit for implementing several authentication mechanisms like “SAML”, “Basic Authentication” or “OAuth” in the WCF Web programming model. Things have changed a lot since then, and Glenn finally put in our hands a new version of the Web programming model that deserves some attention and that I believe will help us a lot to build more Http oriented services in the .NET stack. What you can get today from wcf.codeplex.com is a preview with some cool features like Http Processors (which I already discussed here), a new and improved version of the HttpClient library, Dependency injection and better TDD support among others. However, the framework still does not support a standard way of doing client authentication on the services (this is something planned for the upcoming releases, I believe). For that reason, moving the existing authentication interceptors to this new programming model was one of the things I did in the last few days. In order to make authentication simple and easy to extend,  I first came up with a model based on what I called “Authentication Interceptors”. An authentication interceptor maps to an existing Http authentication mechanism and implements the following interface, public interface IAuthenticationInterceptor{ string Scheme { get; } bool DoAuthentication(HttpRequestMessage request, HttpResponseMessage response, out IPrincipal principal);} An authentication interceptor basically needs to return the Http authentication scheme that it implements in the property “Scheme”, and implement the authentication mechanism in the method “DoAuthentication”. As you can see, this last method “DoAuthentication” only relies on the HttpRequestMessage and HttpResponseMessage classes, making the testing of this interceptor very simple (there is no need to do some black magic with the WCF context or messages). After this, I implemented a couple of interceptors for supporting basic authentication and brokered authentication with SAML (using WIF) in my services. The following code illustrates what the basic authentication interceptor looks like. 
public class BasicAuthenticationInterceptor : IAuthenticationInterceptor{ Func<UsernameAndPassword, bool> userValidation; string realm;  public BasicAuthenticationInterceptor(Func<UsernameAndPassword, bool> userValidation, string realm) { if (userValidation == null) throw new ArgumentNullException("userValidation");  if (string.IsNullOrEmpty(realm)) throw new ArgumentNullException("realm");  this.userValidation = userValidation; this.realm = realm; }  public string Scheme { get { return "Basic"; } }  public bool DoAuthentication(HttpRequestMessage request, HttpResponseMessage response, out IPrincipal principal) { string[] credentials = ExtractCredentials(request); if (credentials.Length == 0 || !AuthenticateUser(credentials[0], credentials[1])) { response.StatusCode = HttpStatusCode.Unauthorized; response.Content = new StringContent("Access denied"); response.Headers.WwwAuthenticate.Add(new AuthenticationHeaderValue("Basic", "realm=" + this.realm));  principal = null;  return false; } else { principal = new GenericPrincipal(new GenericIdentity(credentials[0]), new string[] {});  return true; } }  private string[] ExtractCredentials(HttpRequestMessage request) { if (request.Headers.Authorization != null && request.Headers.Authorization.Scheme.StartsWith("Basic")) { string encodedUserPass = request.Headers.Authorization.Parameter.Trim();  Encoding encoding = Encoding.GetEncoding("iso-8859-1"); string userPass = encoding.GetString(Convert.FromBase64String(encodedUserPass)); int separator = userPass.IndexOf(':');  string[] credentials = new string[2]; credentials[0] = userPass.Substring(0, separator); credentials[1] = userPass.Substring(separator + 1);  return credentials; }  return new string[] { }; }  private bool AuthenticateUser(string username, string password) { var usernameAndPassword = new UsernameAndPassword { Username = username, Password = password };  if (this.userValidation(usernameAndPassword)) { return true; }  return false; }} This interceptor receives in the constructor a callback in the form of a Func delegate for authenticating the user and the “realm”, which is required as part of the implementation. The rest is a general implementation of the basic authentication mechanism using standard http request and response messages. I also implemented another interceptor for authenticating a SAML token with WIF. 
public class SamlAuthenticationInterceptor : IAuthenticationInterceptor{ SecurityTokenHandlerCollection handlers = null;  public SamlAuthenticationInterceptor(SecurityTokenHandlerCollection handlers) { if (handlers == null) throw new ArgumentNullException("handlers");  this.handlers = handlers; }  public string Scheme { get { return "saml"; } }  public bool DoAuthentication(HttpRequestMessage request, HttpResponseMessage response, out IPrincipal principal) { SecurityToken token = ExtractCredentials(request);  if (token != null) { ClaimsIdentityCollection claims = handlers.ValidateToken(token);  principal = new ClaimsPrincipal(claims);  return true; } else { response.StatusCode = HttpStatusCode.Unauthorized; response.Content = new StringContent("Access denied");  principal = null;  return false; } }  private SecurityToken ExtractCredentials(HttpRequestMessage request) { if (request.Headers.Authorization != null && request.Headers.Authorization.Scheme == "saml") { XmlTextReader xmlReader = new XmlTextReader(new StringReader(request.Headers.Authorization.Parameter));  var col = SecurityTokenHandlerCollection.CreateDefaultSecurityTokenHandlerCollection(); SecurityToken token = col.ReadToken(xmlReader);  return token; }  return null; }} This implementation receives a “SecurityTokenHandlerCollection” instance as part of the constructor. This class is part of WIF and basically represents a collection of token handlers that know how to handle specific XML authentication tokens (SAML is one of them). I also created a set of extension methods for injecting these interceptors as part of a service route when the service is initialized. var basicAuthentication = new BasicAuthenticationInterceptor((u) => true, "ContactManager");var samlAuthentication = new SamlAuthenticationInterceptor(serviceConfiguration.SecurityTokenHandlers); // use MEF for providing instancesvar catalog = new AssemblyCatalog(typeof(Global).Assembly);var container = new CompositionContainer(catalog);var configuration = new ContactManagerConfiguration(container); RouteTable.Routes.AddServiceRoute<ContactResource>("contact", configuration, basicAuthentication, samlAuthentication);RouteTable.Routes.AddServiceRoute<ContactsResource>("contacts", configuration, basicAuthentication, samlAuthentication); In the code above, I am injecting the basic authentication and saml authentication interceptors in the “contact” and “contacts” resource implementations that come as samples in the code preview. I will use another post to discuss in more detail how the brokered authentication with SAML model works with these new WCF Http bits. The code is available to download in this location.
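    To round out the picture from the client side, a request against the "contact" route above only needs a standard Authorization header. This sketch uses today's System.Net.Http.HttpClient rather than the preview HttpClient library from wcf.codeplex.com, whose API surface differed, so treat it as illustrative; the user name, password and URL are made up:

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;
    using System.Threading.Tasks;

    class BasicAuthClient
    {
        static async Task Main()
        {
            using var client = new HttpClient();

            // Encode "user:password" the way ExtractCredentials() expects:
            // base64 over an iso-8859-1 byte sequence
            string credentials = Convert.ToBase64String(
                Encoding.GetEncoding("iso-8859-1").GetBytes("john:secret"));

            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Basic", credentials);

            HttpResponseMessage response =
                await client.GetAsync("http://localhost/contact/1");

            Console.WriteLine(response.StatusCode);
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }

    A request without the header (or with bad credentials) should come back as 401 Unauthorized with the WWW-Authenticate challenge set by the interceptor.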

    Read the article

  • Java EE 6 and NoSQL/MongoDB on GlassFish using JPA and EclipseLink 2.4 (TOTD #175)

    - by arungupta
    TOTD #166 explained how to use MongoDB in your Java EE 6 applications. The code in that tip used the APIs exposed by the MongoDB Java driver and so required you to learn a new API. However, if you are building Java EE 6 applications then you are already familiar with the Java Persistence API (JPA). EclipseLink 2.4, scheduled to release as part of Eclipse Juno, provides support for NoSQL databases by mapping a JPA entity to a document. Their wiki provides a complete explanation of how the mapping is done. This Tip Of The Day (TOTD) will show how you can leverage that support in your Java EE 6 applications deployed on GlassFish 3.1.2. Before we dig into the code, here are the key concepts ... A POJO is mapped to a NoSQL data source using @NoSql or the <no-sql> element in "persistence.xml". A subset of JPQL and Criteria queries is supported, based upon the underlying data store. Connection properties are defined in "persistence.xml". Now, let's take a look at the code ... Download the latest EclipseLink 2.4 Nightly Bundle. There is an Installer, Source, and Bundle - make sure to download the Bundle link (20120410) and unzip. Download the GlassFish 3.1.2 zip and unzip. Install the EclipseLink 2.4 JARs in GlassFish. Remove the following JARs from "glassfish/modules": org.eclipse.persistence.antlr.jar org.eclipse.persistence.asm.jar org.eclipse.persistence.core.jar org.eclipse.persistence.jpa.jar org.eclipse.persistence.jpa.modelgen.jar org.eclipse.persistence.moxy.jar org.eclipse.persistence.oracle.jar Add the following JARs from the EclipseLink 2.4 nightly build to "glassfish/modules": org.eclipse.persistence.antlr_3.2.0.v201107111232.jar org.eclipse.persistence.asm_3.3.1.v201107111215.jar org.eclipse.persistence.core.jpql_2.4.0.v20120407-r11132.jar org.eclipse.persistence.core_2.4.0.v20120407-r11132.jar org.eclipse.persistence.jpa.jpql_2.0.0.v20120407-r11132.jar org.eclipse.persistence.jpa.modelgen_2.4.0.v20120407-r11132.jar org.eclipse.persistence.jpa_2.4.0.v20120407-r11132.jar org.eclipse.persistence.moxy_2.4.0.v20120407-r11132.jar org.eclipse.persistence.nosql_2.4.0.v20120407-r11132.jar org.eclipse.persistence.oracle_2.4.0.v20120407-r11132.jar Start MongoDB. Download the latest MongoDB from here (2.0.4 as of this writing). Create the default data directory for MongoDB as: sudo mkdir -p /data/db/ sudo chown `id -u` /data/db Refer to Quickstart for more details. Start MongoDB as: arungup-mac:mongodb-osx-x86_64-2.0.4 <arungup> ->./bin/mongod./bin/mongod --help for help and startup optionsMon Apr  9 12:56:02 [initandlisten] MongoDB starting : pid=3124 port=27017 dbpath=/data/db/ 64-bit host=arungup-mac.localMon Apr  9 12:56:02 [initandlisten] db version v2.0.4, pdfile version 4.5Mon Apr  9 12:56:02 [initandlisten] git version: 329f3c47fe8136c03392c8f0e548506cb21f8ebfMon Apr  9 12:56:02 [initandlisten] build info: Darwin erh2.10gen.cc 9.8.0 Darwin Kernel Version 9.8.0: Wed Jul 15 16:55:01 PDT 2009; root:xnu-1228.15.4~1/RELEASE_I386 i386 BOOST_LIB_VERSION=1_40Mon Apr  9 12:56:02 [initandlisten] options: {}Mon Apr  9 12:56:02 [initandlisten] journal dir=/data/db/journalMon Apr  9 12:56:02 [initandlisten] recover : no journal files present, no recovery neededMon Apr  9 12:56:02 [websvr] admin web console waiting for connections on port 28017Mon Apr  9 12:56:02 [initandlisten] waiting for connections on port 27017 Check out the JPA/NoSQL sample from the SVN repository. The complete source code built in this TOTD can be downloaded here. 
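    As a preview of what the checked-out sample wires together, the Mongo-flavored persistence unit in "persistence.xml" typically looks something like the sketch below. This is my own illustration based on the EclipseLink NoSQL examples (the MongoPlatform and MongoConnectionSpec class names also show up in the server.log output later in this tip), so verify the exact property names against the EclipseLink wiki:

    <persistence-unit name="mongo-sample" transaction-type="RESOURCE_LOCAL">
      <class>model.Customer</class>
      <class>model.Order</class>
      <properties>
        <property name="eclipselink.target-database"
                  value="org.eclipse.persistence.nosql.adapters.mongo.MongoPlatform"/>
        <property name="eclipselink.nosql.connection-spec"
                  value="org.eclipse.persistence.nosql.adapters.mongo.MongoConnectionSpec"/>
        <property name="eclipselink.nosql.property.mongo.host" value="localhost"/>
        <property name="eclipselink.nosql.property.mongo.port" value="27017"/>
        <property name="eclipselink.nosql.property.mongo.db" value="mydb"/>
      </properties>
    </persistence-unit>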
Create Java EE 6 web app Create a Java EE 6 Maven web app as: mvn archetype:generate -DarchetypeGroupId=org.codehaus.mojo.archetypes -DarchetypeArtifactId=webapp-javaee6 -DgroupId=model -DartifactId=javaee-nosql -DarchetypeVersion=1.5 -DinteractiveMode=false Copy the model files from the checked out workspace to the generated project as: cd javaee-nosqlcp -r ~/code/workspaces/org.eclipse.persistence.example.jpa.nosql.mongo/src/model src/main/java Copy "persistence.xml" mkdir src/main/resources cp -r ~/code/workspaces/org.eclipse.persistence.example.jpa.nosql.mongo/src/META-INF ./src/main/resources Add the following dependencies: <dependency> <groupId>org.eclipse.persistence</groupId> <artifactId>org.eclipse.persistence.jpa</artifactId> <version>2.4.0-SNAPSHOT</version> <scope>provided</scope></dependency><dependency> <groupId>org.eclipse.persistence</groupId> <artifactId>org.eclipse.persistence.nosql</artifactId> <version>2.4.0-SNAPSHOT</version></dependency><dependency> <groupId>org.mongodb</groupId> <artifactId>mongo-java-driver</artifactId> <version>2.7.3</version></dependency> The first one is for the EclipseLink latest APIs, the second one is for EclipseLink/NoSQL support, and the last one is the MongoDB Java driver. And the following repository: <repositories> <repository> <id>EclipseLink Repo</id> <url>http://www.eclipse.org/downloads/download.php?r=1&amp;nf=1&amp;file=/rt/eclipselink/maven.repo</url> <snapshots> <enabled>true</enabled> </snapshots> </repository>  </repositories> Copy the "Test.java" to the generated project: mkdir src/main/java/examplecp -r ~/code/workspaces/org.eclipse.persistence.example.jpa.nosql.mongo/src/example/Test.java ./src/main/java/example/ This file contains the source code to CRUD the JPA entity to MongoDB. This sample is explained in detail on EclipseLink wiki. Create a new Servlet in "example" directory as: package example;import java.io.IOException;import java.io.PrintWriter;import javax.servlet.ServletException;import javax.servlet.annotation.WebServlet;import javax.servlet.http.HttpServlet;import javax.servlet.http.HttpServletRequest;import javax.servlet.http.HttpServletResponse;/** * @author Arun Gupta */@WebServlet(name = "TestServlet", urlPatterns = {"/TestServlet"})public class TestServlet extends HttpServlet { protected void processRequest(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { response.setContentType("text/html;charset=UTF-8"); PrintWriter out = response.getWriter(); try { out.println("<html>"); out.println("<head>"); out.println("<title>Servlet TestServlet</title>"); out.println("</head>"); out.println("<body>"); out.println("<h1>Servlet TestServlet at " + request.getContextPath() + "</h1>"); try { Test.main(null); } catch (Exception ex) { ex.printStackTrace(); } out.println("</body>"); out.println("</html>"); } finally { out.close(); } } @Override protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { processRequest(request, response); } @Override protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { processRequest(request, response); }} Build the project and deploy it as: mvn clean packageglassfish3/bin/asadmin deploy --force=true target/javaee-nosql-1.0-SNAPSHOT.war Accessing http://localhost:8080/javaee-nosql/TestServlet shows the following messages in the server.log: connecting(EISLogin( platform=> MongoPlatform user name=> "" MongoConnectionSpec())) . 
. .Connected: User: Database: 2.7  Version: 2.7 . . .Executing MappedInteraction() spec => null properties => {mongo.collection=CUSTOMER, mongo.operation=INSERT} input => [DatabaseRecord( CUSTOMER._id => 4F848E2BDA0670307E2A8FA4 CUSTOMER.NAME => AMCE)]. . .Data access result: [{TOTALCOST=757.0, ORDERLINES=[{DESCRIPTION=table, LINENUMBER=1, COST=300.0}, {DESCRIPTION=balls, LINENUMBER=2, COST=5.0}, {DESCRIPTION=rackets, LINENUMBER=3, COST=15.0}, {DESCRIPTION=net, LINENUMBER=4, COST=2.0}, {DESCRIPTION=shipping, LINENUMBER=5, COST=80.0}, {DESCRIPTION=handling, LINENUMBER=6, COST=55.0},{DESCRIPTION=tax, LINENUMBER=7, COST=300.0}], SHIPPINGADDRESS=[{POSTALCODE=L5J1H7, PROVINCE=ON, COUNTRY=Canada, CITY=Ottawa,STREET=17 Jane St.}], VERSION=2, _id=4F848E2BDA0670307E2A8FA8,DESCRIPTION=Pingpong table, CUSTOMER__id=4F848E2BDA0670307E2A8FA7, BILLINGADDRESS=[{POSTALCODE=L5J1H8, PROVINCE=ON, COUNTRY=Canada, CITY=Ottawa, STREET=7 Bank St.}]}] You'll not see any output in the browser, just the output in the console. But the code can be easily modified to do so. Once again, the complete Maven project can be downloaded here. Do you want to try accessing relational and non-relational (aka NoSQL) databases in the same PU ?

    Read the article

  • CodePlex Daily Summary for Sunday, May 09, 2010

    CodePlex Daily Summary for Sunday, May 09, 2010New ProjectsArtificial Spy: ASPX, C#, XML, This is big project for creation application for: People Search. People connection. Background check. Crime Prevention. Socia...Chef Framework: CHEF: CSS, HTML, Events, & Functions. Is a collection of libraries to help build concerns separated websites.Crabit Full File Manger: File manager SystemEPiMVC - EPiServer CMS with ASP.NET MVC: A framework for using EPiServer CMS with ASP.NET MVC.Fimyid IX: all My projects In ONE!Hosting Folder Sizes: When you run a hosting environment with the ability to upload files, and you're charging per gigabyte per month, you need quick statistics about di...Let's Up: A tool that breaks you every 50 minutes to protect your health.MediaXenter: MediaXenterMSClub BY: Microsoft community web-site projectOrbisArca: Windows Mobile gameSpatial Gateway: A common and standardized way of accessing spatial data stored in different datastores. This also includes functionality for replication across dif...Squiggle - A Free open source Lan Messenger: Squiggle is a free lan messenger that does not require a server. Just download and run it and you're ready to talk to everyone on your lan. Squi...Video Downloader: Video Downloader makes it easier for developers to generate download links for videos from You-Tube. You'll no longer have to search through source...ViewModelSupport: This is not an MVVM framework. It is just the base class I use to reduce the friction when writing ViewModels. Making it public to share my ideas.WPF CCTV Surveillance Control with IP Cameras.: Integracion de Video IP en WPF. IP Camera. WPF dinamic Control and Events in Video System. Surveillance system. CCTV system. .NetNew Releases.NET Extensions - Extension Methods Library: Release 2010.07: Added some extension methods for ICollection<T> and IList<T> for demonstrating the differences between both interfaces: - ICollection<T>.AddRangeU...bitly.net: bitly.dll: This is a .net DLL that works with the on-line URL shortening service bit.ly Compiled using .net 4.0 but the source code should run with version 2 ...Crabit Full File Manger: Crabit V1.0: Firts file manager system previewCSharp Intellisense: V1.9: this is a major release that was focus on bug fix, tooltip support and styling.Fimyid IX: fimyid ix 1.0: New Liscence!Gherkin editor: Beta: Added support for i18n (all languages supported by Gherkin/Cucumber are suported). Removed auto-completion of statements like As a user and followi...Grunty OS: GruntyOSAlphaSC: Grunty os sourceHKGolden Express: HKGoldenExpress (Build 201005081830): New features: Users can post new message or reply to a message. Special thanks for help from members 劉佳偉 (ID: 179892) and Maize. (ID: 142974). Bu...iLove SharePoint: Lookup Field with Picker 2010: Just forget the fuc**** dropdowns! Requirements: SharePoint Foundation 2010 Features * Single- and multi-Selection Mode * Search in pick...LazyNet: LazyNet Beta 2: Refresh Network Bug fixed.LazyNet: LazyNEt_Beta3: Beta 3 Release, Better Network RefreshingLazyNet: LazyNetBeta3_SRC: Refresh NetworkLet's Up: 1.0 (Build 100509): This is the first versionLive Distributed Objects: Windows Installer r48444 (2010-05-08): current development snapshotMDownloader: MDownloader-0.15.12.58576: Fixed presenting Hotfile's captcha. Fixed FilesTube searching. Fixed determining Rapidshare postpone period. 
Fixed minor bugs.NSIS Autorun: NSIS Autorun 0.1.7: This release includes source code, executable binaries and example materials.SharpDevelop: SharpDevelop 3.2: Release notes: http://community.sharpdevelop.net/forums/t/11165.aspxSilverlight SDK for Bing: Silverlight SDK for Bing 1.4: Build for Visual Studio 2010 and Silverlight 4 Issues Addressed10337 10342 10367 10804 10805 10806 10807 DownloadsSilverlight SDK For B...sqwarea: Sqwarea 0.0.252.0 (alpha): This release corrects a critical bug in Persistence.GameProvider.GetNextKingId. We strongly recommend you to upgrade to this version.Stratosphere: Stratosphere 1.0.5.1: Added many features to Amazon Web Services Shell (AwsSh) Improved scalable table reader for SimpleDB multi-valued attributes Added more functio...TechEdOneNoter: TechEdOneNoter verison 2010.5.9.2010: TechEdOneNoter is a utility to create OneNote Pages based on sessions selected in the TechEd North America 2010 Session Builder. This is the first...Video Downloader: Version 1.0: Version 1.0 See Home Page for usage and more information. Please remember changes at You-Tube can prevent this software from working.Visual Studio - Lua Language Support: May 8th Update: Release NotesThis release adds collapsible functions and tables. What's new:Collapsible functions and tables Using latest version of Irony Maj...WPF CCTV Surveillance Control with IP Cameras.: Hungry Foxx CCTV Preview: Este release, corresponde a una muestra del programa completo. La idea es poder contar con una solucion para los integradores de sistemas CCTV o d...XsltDb - DotNetNuke XSLT module: 01.01.08: Bugs fixed: 17204 17203 Many new features, but undocumented yet. I'm going to update docs in a week or two, but...Most Popular ProjectsWBFS ManagerRawrAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)patterns & practices – Enterprise LibraryMicrosoft SQL Server Community & SamplesASP.NETPHPExcelMost Active Projectspatterns & practices – Enterprise LibraryRawrThe Information Literacy Education Learning Environment (ILE)AJAX Control FrameworkCaliburn: An Application Framework for WPF and SilverlightMirror Testing SystemjQuery Library for SharePoint Web Servicespatterns & practices - UnityBlogEngine.NETTweetSharp

    Read the article

  • Investigation: Can different combinations of components affect Dataflow performance?

    - by jamiet
    Introduction The Dataflow task is one of the core components (if not the core component) of SQL Server Integration Services (SSIS) and often the most misunderstood. This is not surprising: it's an incredibly complicated beast and we’re abstracted away from that complexity via some boxes that go yellow, red or green and that have some lines drawn between them. Example dataflow In this blog post I intend to look under that facade and get into some of the nuts and bolts of the Dataflow Task by investigating how the decisions we make when building our packages can affect performance. I will do this by comparing the performance of three dataflows that all have the same input, all produce the same output, but which all operate slightly differently by way of having different transformation components. I also want to use this blog post to challenge a commonly held opinion that I see perpetuated over and over again on the SSIS forum. That is, that people assume adding components to a dataflow will be detrimental to overall performance. It's not surprising that people think this – it is intuitive to think that more components means more work – however this is not a view that I share. I have always been of the opinion that there are many factors affecting dataflow duration and the number of components is actually one of the less important ones; having said that, I have never proven that assertion and that is one reason for this investigation. I have actually seen evidence that some people think dataflow duration is simply a function of number of rows and number of components. I’ll happily call that one out as a myth even without any investigation!  The Setup I have a 2GB datafile which is a list of 4731904 (~4.7 million) customer records with various attributes against them and it contains 2 columns that I am going to use for categorisation: [YearlyIncome] [BirthDate] The data file is an SSIS raw format file which I chose to use because it is the quickest way of getting data into a dataflow and, given that I am testing the transformations, not the source or destination adapters, I want to minimise external influences as much as possible. In the test I will split the customers according to month of birth (12 of those) and whether or not their yearly income is above or below 50000 (2 of those); in other words I will be splitting them into 24 discrete categories and in order to do it I shall be using different combinations of SSIS’ Conditional Split and Derived Column transformation components. The 24 datapaths that occur will each input to a rowcount component, again because this is the least resource intensive means of terminating a datapath. The test is being carried out on a Dell XPS Studio laptop with a quad core (8 logical Procs) Intel Core i7 at 1.73GHz and Samsung SSD hard drive. It's running SQL Server 2008 R2 on Windows 7. The Variables Here are the three combinations of components that I am going to test:     One Conditional Split - A single Conditional Split component CSPL Split by Month of Birth and income category that will use expressions on [YearlyIncome] & [BirthDate] to send each row to one of 24 outputs. This next screenshot displays the expression logic in use: Derived Column & Conditional Split - A Derived Column component DER Income Category that adds a new column [IncomeCategory] which will contain one of two possible text values {“LessThan50000”,”GreaterThan50000”} and uses [YearlyIncome] to determine which value each row should get. 
    A Conditional Split component, CSPL Split by Month of Birth and Income Category, then uses that new column in conjunction with [BirthDate] to determine which of the same 24 outputs to send each row to. Put more simply, I am separating the Conditional Split of #1 into a Derived Column and a Conditional Split. The next screenshots display the expression logic in use: DER Income Category, CSPL Split by Month of Birth and Income Category. (3) Three Conditional Splits - a Conditional Split component that produces two outputs based on [YearlyIncome], one for each income category. Each of those outputs will go to a further Conditional Split that splits the input into 12 outputs, one for each month of birth (identical logic in each). In this case, then, I am separating the single Conditional Split of #1 into three Conditional Split components. The next screenshots display the expression logic in use: CSPL Split by Income Category, CSPL Split by Month of Birth 1 & 2. Each of these combinations will provide an input to one of the 24 rowcount components, just the same as before. For illustration, here is a screenshot of the dataflow containing three Conditional Split components. As you can see, these dataflows have a fair bit of work to do, and remember that they're doing that work for 4.7 million rows. I will execute each dataflow 10 times and use the average for comparison. I foresee three possible outcomes: (a) the dataflow containing just one Conditional Split (i.e. #1) will be quicker; (b) there is no significant difference between any of them; (c) one of the two dataflows containing multiple transformation components will be quicker. Regardless of which of those outcomes comes to pass, we will have learnt something, and that makes this an interesting test to carry out. Note that I will be executing the dataflows using dtexec.exe rather than hitting F5 within BIDS. The Results and Analysis: The table below shows all of the executions, 10 for each dataflow, along with the average for each and a standard deviation. All durations are in seconds. I'm pasting a screenshot because I frankly can't be bothered with the faffing about needed to make a presentable HTML table. It is plain to see from the averages that the dataflow containing three Conditional Splits is significantly faster, the other two taking 43% and 52% longer respectively. This seems strange though, right? Why does the dataflow containing the most components outperform the other two by such a big margin? The answer is actually quite logical when you put some thought into it, and I'll explain that below. Before progressing, a side note: the standard deviation for the "Three Conditional Splits" dataflow is orders of magnitude smaller, indicating that performance for this dataflow can be predicted with much greater confidence too. The Explanation: I refer you to the screenshot above that shows how CSPL Split by Month of Birth and Income Category in the first dataflow is set up. Observe that there is a case for each combination of Month of Birth and Income Category, 24 in total. These expressions get evaluated in the order that they appear, hence if we assume that Month of Birth and Income Category are uniformly distributed in the dataset, we can deduce that the expected number of expression evaluations for each row is (1 + 24) / 2 = 12.5, i.e. the average of the minimum (1) and the maximum (24). Now take a look at the screenshots for the second dataflow.
    We are doing one expression evaluation in DER Income Category and we have the same 24 cases in CSPL Split by Month of Birth and Income Category as we had before; only the expression differs slightly. In this case, then, we have 1 + 12.5 = 13.5 expected evaluations for each row; that would account for the slightly longer average execution time for this dataflow. Now onto the third dataflow, the quick one. CSPL Split by Income Category does a maximum of 2 expression evaluations, thus the expected number of evaluations per row is 1.5. CSPL Split by Month of Birth 1 & CSPL Split by Month of Birth 2 both have less work to do than the previous Conditional Split components because they only have 12 cases to test for, thus the expected number of expression evaluations is 6.5. There are two of them, so the total expected number of expression evaluations for this dataflow is 6.5 + 6.5 + 1.5 = 14.5. 14.5 is still more than 12.5 & 13.5 though, so why is the third dataflow so much quicker? Simple: the conditional expressions in the first two dataflows have two boolean predicates to evaluate, one for Income Category and one for Month of Birth; the expressions in the Conditional Split in the third dataflow only have one predicate, so they are doing a lot less work. To sum up, the difference in execution times can be attributed to the difference between: MONTH(BirthDate) == 1 && YearlyIncome <= 50000 and MONTH(BirthDate) == 1. In the first two dataflows YearlyIncome <= 50000 gets evaluated an average of 12.5 times for every row, whereas in the third dataflow it is evaluated once and once only. Multiply those 11.5 extra operations by 4.7 million rows and you get a significant number of extra CPU cycles; that's where our duration difference comes from. The Wrap-up: The obvious point here is that adding new components to a dataflow isn't necessarily going to make it go any slower; moreover, you may be able to achieve significant improvements by splitting logic over multiple components rather than one. Performance tuning is all about reducing the amount of work that needs to be done, and that doesn't necessarily mean using fewer components; indeed, sometimes you may be able to reduce workload in ways that aren't immediately obvious, as I think I have proven here. Of course there are many variables in play here and your mileage will most definitely vary. I encourage you to download the package and see if you get similar results; let me know in the comments. The package contains all three dataflows plus a fourth dataflow that will create the 2GB raw file for you (you will also need the [AdventureWorksDW2008] sample database from which to source the data); simply disable all dataflows except the one you want to test before executing the package and remember: execute using dtexec, not within BIDS. If you want to explore dataflow performance tuning in more detail then here are some links you might want to check out: Inequality joins, Asynchronous transformations and Lookups; Destination Adapter Comparison; Don't turn the dataflow into a cursor; SSIS Dataflow - Designing for performance (webinar). Any comments? Let me know! @Jamiet
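    To make the arithmetic above concrete, here is a minimal C# sketch (an editorial illustration, not from the original post) that reproduces the per-row estimates. One caveat on the counting: each row passes through only one of the two month splits in the third design, so on a strictly per-row basis the figure is 1.5 + 6.5 = 8 single-predicate tests, versus 12.5 two-predicate cases for the first design; counted either way, the third design evaluates far fewer predicates per row.

        using System;

        class ExpectedEvaluations
        {
            // Expected number of cases tested by a conditional split whose input is
            // uniformly distributed across `cases` outputs: the average of the best
            // case (1 test) and the worst case (`cases` tests).
            static double ExpectedCaseTests(int cases) => (1 + cases) / 2.0;

            static void Main()
            {
                // Design 1: one 24-case split, each case testing 2 predicates.
                double oneSplit = ExpectedCaseTests(24) * 2;

                // Design 2: a derived column (1 evaluation) plus the same 24-case split.
                double derivedPlusSplit = 1 + ExpectedCaseTests(24) * 2;

                // Design 3: a 2-case split, then one of two 12-case splits,
                // each case testing a single predicate.
                double threeSplits = ExpectedCaseTests(2) + ExpectedCaseTests(12);

                Console.WriteLine("One split:              {0} predicate tests/row", oneSplit);        // 25
                Console.WriteLine("Derived column + split: {0} predicate tests/row", derivedPlusSplit); // 26
                Console.WriteLine("Three splits:           {0} predicate tests/row", threeSplits);      // 8
            }
        }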

    Read the article

  • The DOS DEBUG Environment

    - by MarkPearl
    Today I thought I would go back in time and have a look at the DEBUG command that has been available since the early days of DOS, MS-DOS and Microsoft Windows. Up to today I always knew it was there but had no clue how to use it, so for those that are interested this might be a great geek party trick to pull out when you want to awe the younger generation and show them what "real" programming is about. But wait, you will have to do it relatively quickly, as it seems like DEBUG was finally dropped from Windows in Windows 7. Not to worry: pull out that Windows XP box, which will get you even more geek points, and you can still poke DEBUG a bit. So, for those that are interested and want to find out a bit about the history of DEBUG, read the wiki link here. That all put aside, let's get our hands dirty… How to Start DEBUG in Windows: Make sure your version of Windows supports DEBUG. Open up a console window. Make a directory where you want to play with DEBUG; in my instance I called it C221. Enter the directory and type debug. You will get a response with a "-" prompt, as illustrated in the image below… The commands available in DEBUG: There are several commands available in DEBUG. The most common ones are A (Assemble), R (Register), T (Trace), G (Go), D (Dump or Display), U (Unassemble), E (Enter), P (Proceed), N (Name), L (Load), W (Write), H (Hexadecimal), I (Input), O (Output) and Q (Quit). I am not going to cover all these commands, but what I will do is go through a few of them briefly. A is for Assemble (to write code): The A command translates assembly language statements into machine code. It is quite useful for writing small assembly programs. Below I have written a very basic assembly program. The code typed out is as follows:

        mov ax,0015
        mov cx,0023
        sub cx,ax
        mov [120],al
        mov cl,[120]
        nop

    R is for Register (to view and change registers): The r command turns out to be one of the most frequent commands you will use in DEBUG. It allows you to view the contents of registers and to change their values. It can be used with the following combinations:

        r               - displays the contents of all the registers
        r f             - displays the flags register
        r register_name - displays the contents of a specific register

    All three methods are illustrated in the image above. T is for Trace (to execute a program step by step): The t command allows us to execute the program step by step. Before we can trace the program we need to point back to the beginning of the program. We do this with the r ip command, setting IP back to offset 0100. We then type t (trace), which executes the first line of code, at offset 0100 (as shown in the image below, starting from the red arrow). You can see from the above image that the register AX now contains 0015, as per our instruction mov ax,0015. You can also see that IP points to line 0103, which holds the MOV CX,0023 command. If we type t again it will now execute the second line of the program, which moves 0023 into the CX register. Again, we can see that the line of code was executed and that the CX register now holds the value of 0023. What I would like to highlight now is the section underlined in red. These are the status flags.
    The ones we are going to look at now are the 1st (NV), 4th (PL), 5th (NZ) & 8th (NC). NV means no overflow; the alternative would be OV. PL means that the sign of the previous arithmetic operation was plus; the alternative would be NG (negative). NZ means that the result of the previous arithmetic operation was not zero; the alternative would be ZR. NC means that no final carry resulted from the previous arithmetic operation; CY means that there was a final carry. We could now follow this process of entering the t command until the entire program is executed line by line. G is for Go (to execute a program up to a certain point): So we have looked at executing a program line by line, which is fine if your program is minuscule but totally impractical for any decent-sized program. A quicker way to run some lines of code is to use the G command. The g command executes a program up to a certain specified point, and can be used in connection with the reset-IP trick above: you set your starting point and then run the g command with the address you want to stop at. P is for Proceed (similar to trace but slightly more streamlined): Another command similar to trace is the proceed command. The difference is that when the p command encounters a CALL, INT or LOOP instruction, it executes that whole routine as a single step rather than tracing into it. In the example below I modified our example program to include an int 20 at the end of it, as illustrated in the image below… Then, when execution reached the int 20 command, I typed the p command and the program terminated normally (illustrated below). D is for Dump (or, for those more polite, Display): So, we have all these assembly lines of code, but if you have ever opened up an exe or com file in a text/hex editor, it looks nothing like assembly code. The D command is a way that we can see what our code looks like in memory (or in a hex editor). If we examine the image above, we can see that DEBUG is storing our assembly code with each instruction following immediately after the previous one. For instance, at memory offset 0110 we have int and at 0111 we have 20. If we examine the dump of memory we can see that CD is stored at offset 0110 and 20 at offset 0111 (CD 20 is the machine code for int 20). U is for Unassemble (or convert machine code to assembly code): So up to now we have gone through a bunch of commands, but probably one of the most useful is the U command. Let's say we don't understand machine code so well and instead want to see its equivalent assembly code. We can type the U command followed by the start memory offset and the end memory offset, and it will show us the assembly code equivalent of the machine code. E is for a bunch of things… The E command can be used for several things. One example is to enter data or machine code instructions directly into memory. It can also be used to display the contents of memory locations. I am not going to worry too much about it in this post. N / L / W is for Name, Load & Write: So we have written our assembly code in DEBUG, and now we want to save it to disk, write it as a com file, or load it. This is where the N, L & W commands come in handy. The n command is used to give a name to the executable program file and is pretty simple to use. The w command is a bit trickier: it writes to disk the number of bytes held in the BX:CX register pair (BX holds the high-order word of the byte count, CX the low-order word), starting by default at offset 0100, so you need to set BX and CX to the length of your program before writing. Let's look at an example illustrated below.
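    That illustration isn't reproduced in this excerpt, so here is a hypothetical DEBUG session showing the same steps. The file name and the length value 12 (hex, i.e. the 18 bytes our sample program occupies from offset 0100 up to and including the int 20 at 0110) are invented for the example:

        -r bx
        BX 0000
        :0
        -r cx
        CX 0000
        :12
        -n myprog.com
        -w
        Writing 00012 bytes
        -q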
    You do this by calling the r command followed by either bx or cx. We can then go to the directory where we were working and will see the new file with the name we specified. The L command is relatively simple: you would first specify the name of the file you would like to load using the N command, and then call the L command. Q is for Quit: The last command that I am going to write about in this post is the Q command. Simply put, calling the Q command exits DEBUG. Commands we did not cover: Out of the standard DEBUG commands we covered A, T, G, D, U, E, P, R, N, L & W. The ones we did not cover were H, I & O; I might mention these in a later post, but for the basics they are not really needed. Some useful resources: Please note this post is based on the COS2213 handouts for UNISA. A Guide to DEBUG: http://mirror.href.com/thestarman/asm/debug/debug.htm#NT

    Read the article

  • DocumentDB - Another Azure NoSQL Storage Service

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2014/08/25/documentdb---another-azure-nosql-storage-service.aspx
    Microsoft just released a bunch of new features for Azure on the 22nd, and the one I was most interested in is DocumentDB, a document NoSQL database service on the cloud. Quick Look at DocumentDB: We can try DocumentDB from the new Azure preview portal. Just click the NEW button and select the item named DocumentDB to create a new account. Specify the name of the DocumentDB, which will be the endpoint we are going to use to connect later. Select the capacity unit, resource group and subscription. In the resource group section we can select the region where our DocumentDB will be located. As with other Azure services, select the same location as the consumers of the DocumentDB, for example the website, web services, etc. After several minutes the DocumentDB will be ready. Click the KEYS button and we can find the URI and primary key, which will be used when connecting. Now let's open Visual Studio and try to use the DocumentDB we have just created. Create a new console application and install the DocumentDB .NET client library from NuGet with the keyword "DocumentDB". You need to select "Include Prerelease" in the NuGet Package Manager window since this library has not yet been released. Next we will create a new database and document collection under our DocumentDB account. The code below creates an instance of DocumentClient with the URI and primary key we just copied from the Azure portal, and creates a database and collection. It also prints the database and collection link strings, which will be used later to insert and query documents.

        static void Main(string[] args)
        {
            var endpoint = new Uri("https://shx.documents.azure.com:443/");
            var key = "LU2NoyS2fH0131TGxtBE4DW/CjHQBzAaUx/mbuJ1X77C4FWUG129wWk2oyS2odgkFO2Xdif9/ZddintQicF+lA==";

            var client = new DocumentClient(endpoint, key);
            Run(client).Wait();

            Console.WriteLine("done");
            Console.ReadKey();
        }

        static async Task Run(DocumentClient client)
        {
            var database = new Database() { Id = "testdb" };
            database = await client.CreateDatabaseAsync(database);
            Console.WriteLine("database link = {0}", database.SelfLink);

            var collection = new DocumentCollection() { Id = "testcol" };
            collection = await client.CreateDocumentCollectionAsync(database.SelfLink, collection);
            Console.WriteLine("collection link = {0}", collection.SelfLink);
        }

    Below is the result from the console window. We need to copy the collection link string for future use. Now if we go back to the portal we will find a database listed with the name we specified in the code. Next we will insert a document into the database and collection we have just created. In the code below we paste the collection link copied in the previous step and create a dynamic object with several properties defined. As you can see, we can add normal properties containing strings and integers, and we can also add complex properties, for example an array, a dictionary or an object reference, as long as they can be serialized to JSON.
        static void Main(string[] args)
        {
            var endpoint = new Uri("https://shx.documents.azure.com:443/");
            var key = "LU2NoyS2fH0131TGxtBE4DW/CjHQBzAaUx/mbuJ1X77C4FWUG129wWk2oyS2odgkFO2Xdif9/ZddintQicF+lA==";

            var client = new DocumentClient(endpoint, key);

            // collection link pasted from the result in the previous demo
            var collectionLink = "dbs/AAk3AA==/colls/AAk3AP6oFgA=/";

            // document we are going to insert into the database
            dynamic doc = new ExpandoObject();
            doc.firstName = "Shaun";
            doc.lastName = "Xu";
            doc.roles = new string[] { "developer", "trainer", "presenter", "father" };

            // insert the document
            InsertADoc(client, collectionLink, doc).Wait();

            Console.WriteLine("done");
            Console.ReadKey();
        }

    The insert code is very simple, as below; just provide the collection link and the object we are going to insert.

        static async Task InsertADoc(DocumentClient client, string collectionLink, dynamic doc)
        {
            var document = await client.CreateDocumentAsync(collectionLink, doc);
            Console.WriteLine(await JsonConvert.SerializeObjectAsync(document, Formatting.Indented));
        }

    Below is the result after the object has been inserted. Finally we will query the document from the database and collection. Similar to the insert code, we just need to specify the collection link so that the .NET SDK will retrieve all documents in it.

        static void Main(string[] args)
        {
            var endpoint = new Uri("https://shx.documents.azure.com:443/");
            var key = "LU2NoyS2fH0131TGxtBE4DW/CjHQBzAaUx/mbuJ1X77C4FWUG129wWk2oyS2odgkFO2Xdif9/ZddintQicF+lA==";

            var client = new DocumentClient(endpoint, key);

            var collectionLink = "dbs/AAk3AA==/colls/AAk3AP6oFgA=/";

            SelectDocs(client, collectionLink);

            Console.WriteLine("done");
            Console.ReadKey();
        }

        static void SelectDocs(DocumentClient client, string collectionLink)
        {
            var docs = client.CreateDocumentQuery(collectionLink + "docs/").ToList();
            foreach (var doc in docs)
            {
                Console.WriteLine(doc);
            }
        }

    Since there's only one document in my collection, below is the result when I executed the code. As you can see, all properties, including the array, were retrieved at the same time. DocumentDB also attaches some properties we didn't specify, such as "_rid", "_ts", "_self" etc., which are controlled by the service. DocumentDB Benefits: DocumentDB is a document NoSQL database service. Unlike a traditional relational database, a document database is truly schema-free; in a nutshell, you can save anything in the same database and collection as long as it can be serialized to JSON. When you query the document database, all sub-documents are retrieved at the same time. This means you don't need the joins against other tables that you would use in a traditional database. A document database is very useful when building a high-performance system with hierarchical data structures. For example, assume we need to build a blog system; there will be many blog posts, and each of them contains content and comments. A comment can be commented on as well. If we were using a traditional database, let's say SQL Server, the database schema might be defined as below. When we need to display a post we need to load the post content from the Posts table, as well as the comments from the Comments table. We also need to build the comment tree based on the CommentID field. But if we were using DocumentDB, all we need to do is save the post as one document with a list containing all its comments.
    Each comment in turn holds its own sub-comments as a nested list. When we display this post we just need to query the post document; the content and all comments will be loaded in the proper structure.

        {
          "id": "xxxxx-xxxxx-xxxxx-xxxxx",
          "title": "xxxxx",
          "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
          "postedOn": "08/25/2014 13:55",
          "comments":
          [
            {
              "id": "xxxxx-xxxxx-xxxxx-xxxxx",
              "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
              "commentedOn": "08/25/2014 14:00",
              "commentedBy": "xxx"
            },
            {
              "id": "xxxxx-xxxxx-xxxxx-xxxxx",
              "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
              "commentedOn": "08/25/2014 14:10",
              "commentedBy": "xxx",
              "comments":
              [
                {
                  "id": "xxxxx-xxxxx-xxxxx-xxxxx",
                  "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
                  "commentedOn": "08/25/2014 14:18",
                  "commentedBy": "xxx",
                  "comments":
                  [
                    {
                      "id": "xxxxx-xxxxx-xxxxx-xxxxx",
                      "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
                      "commentedOn": "08/25/2014 18:22",
                      "commentedBy": "xxx"
                    }
                  ]
                },
                {
                  "id": "xxxxx-xxxxx-xxxxx-xxxxx",
                  "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
                  "commentedOn": "08/25/2014 15:02",
                  "commentedBy": "xxx"
                }
              ]
            },
            {
              "id": "xxxxx-xxxxx-xxxxx-xxxxx",
              "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
              "commentedOn": "08/25/2014 14:30",
              "commentedBy": "xxx"
            }
          ]
        }

    DocumentDB vs. Table Storage: DocumentDB and Table Storage are both NoSQL services in Microsoft Azure. One common question is "when should we use DocumentDB rather than Table Storage?". Here are some ideas from me and some MVPs. First of all, they are different kinds of NoSQL database: DocumentDB is a document database while Table Storage is a key-value store. Second, Table Storage is cheaper. DocumentDB supports scaling out from one capacity unit to five during the preview period, and each capacity unit provides 10GB of local SSD storage; the price is $0.73/day, which includes a 50% discount. For the storage service the highest price is $0.061/GB, which is almost 10% of the DocumentDB price. Third, Table Storage provides local replication, geo-replication and read-access geo-replication, which DocumentDB doesn't support. Fourth, there is a local emulator for Table Storage but none for DocumentDB; we have to connect to the DocumentDB in the cloud when developing locally. But DocumentDB supports some cool features that Table Storage doesn't have: it supports stored procedures, triggers and user-defined functions; it supports rich indexing, while Table Storage only supports indexing against partition key and row key; and it supports transactions (Table Storage does as well, but restricted to the Entity Group Transaction scope). And lastly, Table Storage is GA while DocumentDB is still in preview. Summary: In this post I gave a quick demonstration and introduction of the new DocumentDB service in Azure. It's very easy to interact with through .NET, and it also supports a REST API, a Node.js SDK and a Python SDK. I then explained the concept and benefits of using a document database, and compared it with Table Storage. Hope this helps, Shaun. All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
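    One small addendum to the excerpt above: besides the LINQ-style CreateDocumentQuery call shown earlier, the preview .NET SDK also accepts SQL-like query text, which is one way the "rich indexing" mentioned in the comparison surfaces to developers. A minimal sketch under that assumption, reusing the client, collection link and firstName property from the post's examples:

        // Query with DocumentDB's SQL-like grammar instead of LINQ
        // (assumes the preview SDK's string-query overload of CreateDocumentQuery).
        var docs = client.CreateDocumentQuery(
            collectionLink,
            "SELECT * FROM c WHERE c.firstName = 'Shaun'").ToList();

        foreach (var doc in docs)
        {
            Console.WriteLine(doc);
        }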

    Read the article

  • Nginx + PHP-FPM executes script, but returns 404

    - by MorfiusX
    I am using Nginx + PHP-FPM to run a WordPress-based site. I have a URL that should return dynamically generated JSON data for use with the DataTables jQuery plugin. The data is returned properly, but with a return code of 404. I think this is an Nginx config issue, but I haven't been able to figure out why. The script 'getTable.php' works properly on the production version of the site, which is currently using Apache. Anyone know how I can get this to work on Nginx? URL: http://dev.iloveskydiving.org/wp-content/plugins/ils-workflow/lib/getTable.php SERVER: CentOS 6 + Varnish (caching disabled for development) + Nginx + PHP-FPM + WordPress + W3 Total Cache. Nginx config:

        server {
            # Server Parameters
            listen 127.0.0.1:8082;
            server_name dev.iloveskydiving.org;
            root /var/www/dev.iloveskydiving.org/html;
            access_log /var/www/dev.iloveskydiving.org/logs/access.log main;
            error_log /var/www/dev.iloveskydiving.org/logs/error.log error;
            index index.php;

            # Rewrite minified CSS and JS files
            location ~* \.(css|js) {
                if (!-f $request_filename) {
                    rewrite ^/wp-content/w3tc/min/(.+\.(css|js))$ /wp-content/w3tc/min/index.php?file=$1 last;
                    expires max;
                }
            }

            # Set a variable to work around the lack of nested conditionals
            set $cache_uri $request_uri;

            # Don't cache URIs containing the following segments
            if ($request_uri ~* "(\/wp-admin\/|\/xmlrpc.php|\/wp-(app|cron|login|register|mail)\.php|wp-.*\.php|index\.php|wp\-comments\-popup\.php|wp\-links\-opml\.php|wp\-locations\.php)") {
                set $cache_uri "no cache";
            }

            # Don't use the cache for logged-in users or recent commenters
            if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp\-postpass|wordpress_logged_in") {
                set $cache_uri 'no cache';
            }

            # Use cached or actual file if they exist, otherwise pass request to WordPress
            location / {
                try_files /wp-content/w3tc/pgcache/$cache_uri/_index.html $uri $uri/ /index.php?q=$uri&$args;
            }

            # Cache static files for as long as possible
            location ~* \.(xml|ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|css|rss|atom|js|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
                try_files $uri =404;
                expires max;
                access_log off;
            }

            # Deny access to hidden files
            location ~* /\.ht {
                deny all;
                access_log off;
                log_not_found off;
            }

            location ~ \.php$ {
                try_files $uri =404;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                include /etc/nginx/fastcgi_params;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_script_name;
                fastcgi_intercept_errors on;
                fastcgi_pass unix:/var/lib/php-fpm/php-fpm.sock; # socket where FastCGI processes were spawned
            }
        }

    FastCGI params:

        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_param REQUEST_URI $request_uri;
        fastcgi_param DOCUMENT_URI $document_uri;
        fastcgi_param DOCUMENT_ROOT $document_root;
        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param HTTPS $https if_not_empty;
        fastcgi_param GATEWAY_INTERFACE CGI/1.1;
        fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
        fastcgi_param REMOTE_ADDR $remote_addr;
        fastcgi_param REMOTE_PORT $remote_port;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;
        # PHP only, required if PHP was built with --enable-force-cgi-redirect
        fastcgi_param REDIRECT_STATUS 200;

    UPDATE: Upon further digging, it looks like Nginx is generating the 404 and
    PHP-FPM is executing the script properly and returning a 200. UPDATE: Here are the contents of the script:

        <?php
        /**
         * Connect to WordPress
         */
        require(dirname(__FILE__) . '/../../../../wp-blog-header.php');

        /**
         * Define temporary array
         */
        $aaData = array();
        $aaData['aaData'] = array();

        /**
         * Execute query
         */
        $query = new WP_Query( array( 'post_type' => 'post', 'posts_per_page' => '-1' ) );
        foreach ($query->posts as $post) {
            array_push( $aaData['aaData'], array( $post->post_title ) );
        }

        /**
         * Echo JSON encoded array
         */
        echo json_encode($aaData);
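    A hedged editorial note, not part of the original question: because the script bootstraps WordPress through wp-blog-header.php, WordPress's main query runs against a URL it has no rewrite rule for, and WordPress will normally send a 404 status itself in that situation, which would match the symptom of correct output plus a 404 code. A commonly suggested workaround, assuming that diagnosis, is to force the status right after the require line:

        require(dirname(__FILE__) . '/../../../../wp-blog-header.php');
        // wp-blog-header.php runs WordPress's main query; for a URL WordPress
        // does not recognise, it sends a 404 status before our output.
        // Force a 200 instead (assumes the 404 really originates in WordPress).
        status_header(200);

    Separately, the config above sets fastcgi_intercept_errors on, which makes Nginx replace any upstream status of 300 or above with its own error page; temporarily switching it off, or logging $upstream_status alongside $status in the access log, would show definitively whether PHP-FPM or Nginx set the 404.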

    Read the article

  • Optimizing Solaris 11 SHA-1 on Intel Processors

    - by danx
    SHA-1 is a "hash" or "digest" operation that produces a 160 bit (20 byte) checksum value on arbitrary data, such as a file. It is intended to uniquely identify text and to verify it hasn't been modified. Max Locktyukhin and others at Intel have improved the performance of the SHA-1 digest algorithm using multiple techniques. This code has been incorporated into Solaris 11 and is available in the Solaris Crypto Framework via libmd(3LIB), the industry-standard libpkcs11(3LIB) library, and the Solaris kernel module sha1. The optimized code is used automatically on systems with an x86 CPU supporting SSSE3 (Supplemental SSE3). Intel microprocessor architectures that support SSSE3 include the Nehalem, Westmere and Sandy Bridge families. Further optimizations are available for microprocessors that support AVX (such as Sandy Bridge). Although SHA-1 is considered obsolete because of weaknesses found in the algorithm (NIST recommends using at least SHA-256), SHA-1 is still widely used and will be with us for a while yet. Collisions (the same SHA-1 result for two different inputs) can be found with moderate effort. SHA-1 is still used heavily in SSL/TLS, for example, and SHA-1 is stronger than the older MD5 digest algorithm, another digest option defined in SSL/TLS. Optimizations Review: SHA-1 operates by reading an arbitrary amount of data. The data is read in 512 bit (64 byte) blocks (the last block is padded in a specific way to ensure it's a full 64 bytes). Each 64 byte block has 80 "rounds" of calculations (consisting of a mixture of "ROTATE-LEFT", "AND", and "XOR") applied to the block. Each round produces a 32-bit intermediate result, called W[i]. Here's how each round operates: the first 16 rounds, rounds 0 to 15, read the 512 bit block 32 bits at-a-time; these 32 bits are used as input to the round. The remaining rounds, rounds 16 to 79, use the results from the previous rounds as input; specifically, round i XORs the results of rounds i-3, i-8, i-14, and i-16 and rotates the result left 1 bit. The remaining calculations for the round are a series of AND, XOR, and ROTATE-LEFT operations on the 32-bit input and some constants, and the 32-bit result is saved as W[i] for round i. After the final round the chaining values are combined to form the 160-bit SHA-1 checksum. Optimization: Vectorization. The first 16 rounds can be vectorized (computed in parallel) because they don't depend on the output of a previous round. As for the remaining rounds: because computing round i depends on the result of round i-3, W[i-3], one can vectorize only 3 rounds at-a-time. Max Locktyukhin found through simple factoring, explained in detail in his article referenced below, that the dependencies of round i on the results of rounds i-3, i-8, i-14, and i-16 can be replaced instead with dependencies on the results of rounds i-6, i-16, i-28, and i-32. That is, instead of initializing intermediate result W[i] with:

        W[i] = (W[i-3] XOR W[i-8] XOR W[i-14] XOR W[i-16]) ROTATE-LEFT 1

    initialize W[i] as follows:

        W[i] = (W[i-6] XOR W[i-16] XOR W[i-28] XOR W[i-32]) ROTATE-LEFT 2

    That means that 6 rounds can be vectorized at once, with no additional calculations, instead of just 3! This optimization is independent of Intel or any other microprocessor architecture (although the microprocessor has to support vectorization to use it) and exploits one of the weaknesses of SHA-1. Optimization: SSSE3. Intel SSSE3 makes use of 16 %xmm registers, each 128 bits wide.
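    Before digging into the %xmm code, here is a small scalar sketch in C# (an editorial illustration, not the Solaris C implementation) that verifies the factored recurrence produces exactly the same message schedule as the standard one:

        using System;

        class Sha1Schedule
        {
            static uint Rol(uint x, int n) => (x << n) | (x >> (32 - n));

            static void Main()
            {
                var rnd = new Random(1);
                // A fake 16-word message block, standing in for real input.
                uint[] w1 = new uint[80];
                for (int i = 0; i < 16; i++) w1[i] = (uint)rnd.Next();
                uint[] w2 = (uint[])w1.Clone();

                // Standard schedule: the nearest dependency is W[i-3],
                // so only 3 rounds can be computed in parallel.
                for (int i = 16; i < 80; i++)
                    w1[i] = Rol(w1[i - 3] ^ w1[i - 8] ^ w1[i - 14] ^ w1[i - 16], 1);

                // Factored schedule: the nearest dependency is W[i-6],
                // so 6 rounds can be computed in parallel (valid once i >= 32).
                for (int i = 16; i < 80; i++)
                {
                    if (i < 32)
                        w2[i] = Rol(w2[i - 3] ^ w2[i - 8] ^ w2[i - 14] ^ w2[i - 16], 1);
                    else
                        w2[i] = Rol(w2[i - 6] ^ w2[i - 16] ^ w2[i - 28] ^ w2[i - 32], 2);
                }

                bool same = true;
                for (int i = 0; i < 80; i++) same &= w1[i] == w2[i];
                Console.WriteLine(same ? "schedules match" : "schedules differ");
            }
        }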
    The 4 32-bit inputs to a round, W[i-6], W[i-16], W[i-28], and W[i-32], all fit in one %xmm register. The following code snippet, from Max Locktyukhin's article, converted to AT&T assembly syntax, computes 4 rounds in parallel with just a dozen or so SSSE3 instructions:

        movdqa  W_minus_04, W_TMP
        pxor    W_minus_28, W          // W equals W[i-32:i-29] before XOR;
                                       // W = W[i-32:i-29] ^ W[i-28:i-25]
        palignr $8, W_minus_08, W_TMP  // W_TMP = W[i-6:i-3], combined from
                                       // the W[i-4:i-1] and W[i-8:i-5] vectors
        pxor    W_minus_16, W          // W = (W[i-32:i-29] ^ W[i-28:i-25]) ^ W[i-16:i-13]
        pxor    W_TMP, W               // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3]
        movdqa  W, W_TMP               // the 4 dwords in W are rotated left by 2:
        psrld   $30, W                 //   W = (W >> 30) | (W << 2)
        pslld   $2, W_TMP
        por     W, W_TMP
        movdqa  W_TMP, W               // four new W values W[i:i+3] are now calculated
        paddd   (K_XMM), W_TMP         // add the 4 current rounds' values of K
        movdqa  W_TMP, (WK(i))         // store for downstream GPR instructions to read

    A window of the 32 previous results, W[i-1] to W[i-32], is saved in memory on the stack. This is best illustrated with a chart. Without vectorization, the rounds are computed one after another (each "R" represents 1 round of SHA-1 computation):

        RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR

    With vectorization, 4 rounds can be computed in parallel:

        RRRRRRRRRRRRRRRRRRRR
        RRRRRRRRRRRRRRRRRRRR
        RRRRRRRRRRRRRRRRRRRR
        RRRRRRRRRRRRRRRRRRRR

    Optimization: AVX. The new "Sandy Bridge" microprocessor architecture, which supports AVX, allows another interesting optimization. SSSE3 instructions have two operands, an input and an output. AVX allows three operands: two inputs and an output. In many cases two SSSE3 instructions can be combined into one AVX instruction. The difference is best illustrated with an example. Consider these two instructions from the snippet above:

        pxor W_minus_16, W    // W = (W[i-32:i-29] ^ W[i-28:i-25]) ^ W[i-16:i-13]
        pxor W_TMP, W         // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3]

    With AVX they can be combined into one instruction:

        vpxor W_minus_16, W, W_TMP // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3]

    This optimization is also in Solaris, although Sandy Bridge-based systems aren't widely available yet. As an exercise for the reader: AVX also has 256-bit media registers, %ymm0 - %ymm15 (a superset of the 128-bit %xmm0 - %xmm15). Can %ymm registers be used to parallelize the code even more? Optimization: Solaris-specific. In addition to using the Intel code described above, I performed other minor optimizations to the Solaris SHA-1 code: increased the digest(1) and mac(1) commands' buffer size from 4K to 64K, as previously done for decrypt(1) and encrypt(1) (this size is well suited for ZFS file systems, but helps for other file systems as well); optimized the encode functions, which byte-swap the input and output data, to copy/byte-swap 4 or 8 bytes at-a-time instead of 1 byte at-a-time; enhanced the Solaris mdb(1) and kmdb(1) debuggers to display all 16 %xmm and %ymm registers (mdb "$x" command), where previously they only displayed the first 8 that are available in 32-bit mode (can't optimize if you can't debug :-)); and changed the SHA-1 code to allow processing in "chunks" greater than 2 Gigabytes (64-bit lengths). Performance: I measured performance on a Sun Ultra 27 (which has a Nehalem-class Xeon 5500 Intel W3570 microprocessor @ 3.2GHz). Turbo mode is disabled for consistent performance measurement.
    Graphs are better than words and numbers, so here they are. The first graph shows the Solaris digest(1) command before and after the optimizations discussed here, contained in libmd(3LIB): I ran the digest command on a half-GByte file in swapfs (/tmp) and execution time decreased from 1.35 seconds to 0.98 seconds. The second graph shows the results of an internal microbenchmark that uses the Solaris libpkcs11(3LIB) library; the operations are on a 128 byte buffer with 10,000 iterations, and the results show an increase from 320,000 to 416,000 operations per second. Finally, the third graph shows the results of an internal kernel microbenchmark that uses the Solaris /kernel/crypto/amd64/sha1 module; the operations are on a 64Kbyte buffer with 100 iterations, and the results show that for 1 kernel thread, throughput increased from 410 to 600 MBytes/second, while for 8 kernel threads it increased from 1540 to 1940 MBytes/second. Availability: This code is in Solaris 11 FCS. It is available in the 64-bit libmd(3LIB) library for 64-bit programs and is in the Solaris kernel. You must be running hardware that supports Intel's SSSE3 instructions (for example, the Intel Nehalem, Westmere, or Sandy Bridge microprocessor architectures). The easiest way to determine if SSSE3 is available is with the isainfo(1) command. For example:

        nehalem $ isainfo -v
        64-bit amd64 applications
                sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov amd_sysc cx8 tsc fpu
        32-bit i386 applications
                sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov sep cx8 tsc fpu

    If the output also shows "avx", Solaris executes the even-more-optimized 3-operand AVX instructions for SHA-1 mentioned above:

        sandybridge $ isainfo -v
        64-bit amd64 applications
                avx xsave pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov amd_sysc cx8 tsc fpu
        32-bit i386 applications
                avx xsave pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov sep cx8 tsc fpu

    No special configuration or setup is needed to take advantage of this code. The Solaris libraries and kernel automatically determine whether they are running on SSSE3- or AVX-capable machines and execute the correctly tuned code for that microprocessor. Summary: The Solaris 11 Crypto Framework, via the sha1 kernel module and the libmd(3LIB) and libpkcs11(3LIB) libraries, incorporates a useful SHA-1 optimization from Intel for SSSE3-capable microprocessors. As with other Solaris optimizations, it comes automatically "under the hood" with the current Solaris release. References: "Improving the Performance of the Secure Hash Algorithm (SHA-1)" by Max Locktyukhin (Intel, March 2010), the source for the SHA-1 optimizations used in Solaris; "SHA-1", Wikipedia, a good overview of SHA-1; FIPS 180-1, the SHA-1 standard (FIPS, 1995); NIST Comments on Cryptanalytic Attacks on SHA-1 (2005, revised 2006).

    Read the article

  • [GEEK SCHOOL] Network Security 2: Preventing Disaster with User Account Control

    - by Ciprian Rusen
    In this second lesson in our How-To Geek School about securing the Windows devices in your network, we will talk about User Account Control (UAC). Users encounter this feature each time they need to install desktop applications in Windows, when some applications need administrator permissions in order to work, and when they have to change different system settings and files. UAC was introduced in Windows Vista as part of Microsoft's "Trustworthy Computing" initiative. Basically, UAC is meant to act as a wedge between you and installing applications or making system changes. When you attempt to do either of these actions, UAC will pop up and interrupt you. You may either have to confirm you know what you're doing, or even enter an administrator password if you don't have those rights. Some users find UAC annoying and choose to disable it, but UAC is a very important security feature of Windows, and we strongly caution against doing that. That's why in this lesson we will carefully explain what UAC is and everything it does. As you will see, this feature has an important role in keeping Windows safe from all kinds of security problems. In this lesson you will learn which activities may trigger a UAC prompt asking for permissions and how UAC can be set so that it strikes the best balance between usability and security. You will also learn what kind of information you can find in each UAC prompt. Last but not least, you will learn why you should never turn off this feature of Windows. By the time we're done today, we think you will have a newfound appreciation for UAC, and will be able to find a happy medium between turning it off completely and letting it annoy you to distraction. What is UAC and How Does it Work? UAC, or User Account Control, is a security feature that helps prevent unauthorized system changes to your Windows computer or device. These changes can be made by users, applications, and, sadly, malware (which is the biggest reason why UAC exists in the first place). When an important system change is initiated, Windows displays a UAC prompt asking for your permission to make the change. If you don't give your approval, the change is not made. In Windows, you will encounter UAC prompts mostly when working with desktop applications that require administrative permissions. For example, in order to install an application, the installer (generally a setup.exe file) asks Windows for administrative permissions. UAC initiates an elevation prompt like the one shown earlier, asking you whether it is okay to elevate permissions or not. If you say "Yes", the installer starts as administrator and is able to make the necessary system changes in order to install the application correctly. When the installer is closed, its administrator privileges are gone. If you run it again, the UAC prompt is shown again because your previous approval is not remembered. If a system change is initiated from a user account that is not an administrator, e.g. the Guest account, the UAC prompt will also ask for the administrator password in order to grant the necessary permissions. Without this password, the change won't be made. Which Activities Trigger a UAC Prompt?
    There are many types of activities that may trigger a UAC prompt: running a desktop application as an administrator; making changes to settings and files in the Windows and Program Files folders; installing or removing drivers and desktop applications; installing ActiveX controls; changing settings for Windows features like the Windows Firewall, UAC, Windows Update, Windows Defender, and others; adding, modifying, or removing user accounts; configuring Parental Controls in Windows 7 or Family Safety in Windows 8.x; running the Task Scheduler; restoring backed-up system files; viewing or changing the folders and files of another user account; and changing the system date and time. You will encounter UAC prompts during some or all of these activities, depending on how UAC is set on your Windows device. If this security feature is turned off, any user account or desktop application can make any of these changes without a prompt asking for permissions. In this scenario, the different forms of malware existing on the Internet will also have a higher chance of infecting and taking control of your system. In Windows 8.x operating systems you will never see a UAC prompt when working with apps from the Windows Store. That's because these apps, by design, are not allowed to modify any system settings or files. You will encounter UAC prompts only when working with desktop programs. What Can You Learn from a UAC Prompt? When you see a UAC prompt on the screen, take time to read the information displayed so that you get a better understanding of what is going on. Each prompt first tells you the name of the program that wants to make system changes to your device; then you can see the verified publisher of that program. Dodgy software tends not to display this information, and instead of a real company name you will see an entry that says "Unknown". If you have downloaded that program from a less than trustworthy source, then it might be better to select "No" in the UAC prompt. The prompt also shares the origin of the file that's trying to make these changes. In most cases the file origin is "Hard drive on this computer". You can learn more by pressing "Show details". You will see an additional entry named "Program location" where you can see the physical location on your hard drive of the file that's trying to perform system changes. Make your choice based on the trust you have in the program you are trying to run and its publisher. If a less-known file from a suspicious location is requesting a UAC prompt, then you should seriously consider pressing "No". What's Different About Each UAC Level? Windows 7 and Windows 8.x have four UAC levels. Always notify: when this level is used, you are notified before desktop applications make changes that require administrator permissions or before you or another user account changes Windows settings like the ones mentioned earlier. When the UAC prompt is shown, the desktop is dimmed and you must choose "Yes" or "No" before you can do anything else. This is the most secure and also the most annoying way to set UAC, because it triggers the most UAC prompts. Notify me only when programs/apps try to make changes to my computer (default): Windows uses this as the default for UAC. When this level is used, you are notified before desktop applications make changes that require administrator permissions. If you are making system changes yourself, UAC doesn't show any prompts and automatically gives you the necessary permissions for making the changes you desire.
    When a UAC prompt is shown, the desktop is dimmed and you must choose "Yes" or "No" before you can do anything else. This level is slightly less secure than the previous one because malicious programs can be created to simulate the keystrokes or mouse movements of a user and change system settings for you. If you have a good security solution in place, this scenario should never occur. Notify me only when programs/apps try to make changes to my computer (do not dim my desktop): this level is different from the previous one in that, when the UAC prompt is shown, the desktop is not dimmed. This decreases the security of your system because different kinds of desktop applications (including malware) might be able to interfere with the UAC prompt and approve changes that you might not want to be performed. Never notify: this level is the equivalent of turning off UAC. When using it, you have no protection against unauthorized system changes. Any desktop application and any user account can make system changes without your permission. How to Configure UAC: If you would like to change the UAC level used by Windows, open the Control Panel, then go to "System and Security" and select "Action Center". In the column on the left you will see an entry that says "Change User Account Control settings". The "User Account Control Settings" window is now open. Change the position of the UAC slider to the level you want applied, then press "OK". Depending on how UAC was initially set, you may receive a UAC prompt requiring you to confirm this change. Why You Should Never Turn Off UAC: If you want to keep the security of your system at decent levels, you should never turn off UAC. When you disable it, everything and everyone can make system changes without your consent. This makes it easier for all kinds of malware to infect and take control of your system. It doesn't matter whether you have a security suite, an antivirus, or a third-party antivirus installed; basic common-sense measures like having UAC turned on make a big difference in keeping your devices safe from harm. We have noticed that some users disable UAC prior to setting up their Windows devices and installing third-party software on them. They keep it disabled while installing all the software they will use and enable it when done installing everything, so that they don't have to deal with so many UAC prompts. Unfortunately this causes problems with some desktop applications, which may fail to work after you enable UAC. This happens because, when UAC is disabled, the virtualization techniques UAC uses for your applications are inactive. This means that certain user settings and files are installed in a different place, and when you turn on UAC, applications stop working because the files they rely on are now expected elsewhere. Therefore, whatever you do, do not turn off UAC completely! Coming up next: In the next lesson you will learn about Windows Defender, what this tool can do in Windows 7 and Windows 8.x, what's different about it in these operating systems, and how it can be used to increase the security of your system.
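    A small practical supplement, not part of the original lesson: the UAC slider maps to registry values under HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System, so you can inspect the current configuration from a command prompt. A sketch, assuming a stock Windows 7/8.x machine; the exact numeric meanings vary slightly by Windows version:

        rem Is UAC enabled at all? (EnableLUA: 1 = on, 0 = off)
        reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v EnableLUA

        rem How are administrators prompted? (ConsentPromptBehaviorAdmin)
        reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v ConsentPromptBehaviorAdmin

        rem Is the dimmed "secure desktop" used for prompts? (PromptOnSecureDesktop)
        reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v PromptOnSecureDesktop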

    Read the article

  • CodePlex Daily Summary for Thursday, March 25, 2010

    CodePlex Daily Summary for Thursday, March 25, 2010

    New Projects
    AccessibilityChecker: Accessibility Checker is a custom feature developed to check accessibility requirements in a SharePoint Portal
    Anne Epstein - Personal Repository: This project contains multiple samples with various snippets and projects from blog posts, user group talks, and conference se...
    BatterySaver: BatterySaver is a simple application, in C#, that allows laptop users to perform actions based on battery notification events (switching from batte...
    dtxJson: C# coded JSON (JavaScript Object Notation) parser.
    eCamp: eCamp is a modular and extensible electronic camp management application. Written in C# and WPF, it follows many of the latest technology trends su...
    epdevplatform: epdevplatform
    ERP: Environment Colaborative Resources Project
    FaceLight - Simple Silverlight Face Detection: FaceLight is a simple facial recognition method that can be used with Silverlight's webcam. It searches for a certain sized skin color region in a...
    Forum PAF - The Open Source .Net Forum - From Viet Nam - By Thomas John (jntpaf): The Open Source .Net Forum - From Viet Nam. The software required to run Forum PAF: 1. .Net Framework 2.0 (or later) 2....
    Gawam Savel - Sistema de Avaliação Eletrônica: Final-year (TCC) project ...
    Html5 Helpers and tools for Asp.Net MVC: Html5 Helper aims to provide a generic helper context to produce HTML5 content in ASP.NET MVC
    Ifeanyi Echeruo's WPF Recipes: WPF Recipes - C# code samples showing how to solve some non-trivial problems in WPF
    ITM 495 - iPhone App: school project iPhone app
    Knowledge Exchange: Stack Overflow inspired Knowledge Exchange
    MailCheck: Mail checking program.
    NetBoard: NetBoard is a lightweight system designed to act as the Blackboard in a micro-blackboard architecture for use within an OO system - even when withi...
    RodBass.com: RodBass.com
    semanticrest: This is a vision of semantic mashups for REST web services.
    StatSpaceUI: StatSpaceUI
    TFS Merge Tool: A small tool for merging changesets between TFS branches.
    The Interface To End All Interfaces: We interfaced everything, so that you can implement anything...
    Tim - Open Source Projects And Samples: Open source projects / samples for http://tim.bellette.net
    Windows XNA: A place for those who enjoy their XNA Game Studio programming on Windows, and a place to share XNA Game Studio games for Windows in English. I'm loo...
    XAML Code Snippets addin for Visual Studio 2010: Provides support for adding XAML code snippets in the Visual Studio 2010 code editor for XAML in WPF and Silverlight projects.

    New Releases
    AnyWorks: AnyWorks1.2Bin: AnyWorks1.2
    AnyWorks: AnyWorks1.2Src: AnyWorks1.2
    AppFabric Caching Admin Tool: AppFabric Caching Admin Tool 1.0: System requirements: .NET 4.0 RC, AppFabric Caching Beta2. Tested on: Win 7 (64x). Note: must run as Administrator!
    ASP.NET Wiki Control: Release 1.1: Modified text and varchar columns to nvarchar for Unicode support. Modified path info logic to disable its use if the page's raw URL currently...
    B&W Port Scanner: Black`n`White Port Scanner 2.0: Fast cross-platform port scanner with vulnerability detection tools. 3 vulnerability detection tools are included in this version: detection of ...
    BatterySaver: 0.1: Initial release. This is the initial release of the application. The application is very much beta with lots of changes upcoming. Known issues: the...
    BatterySaver: 0.2: Changes: add support for enabling and disabling devices (6)
    Compare .NET Objects: Version 1.2.0.0: New features: compare generic classes that implement IList, indexers, compare DataSets, compare DataTables, compare DataRows, consider IList and...
    Controlled Vocabulary: 1.0.0.3: System requirements: Outlook 2007 / 2010, .Net Framework 3.5. Installation: 1. Close Outlook (use Task Manager to ensure no running instances in the b...
    crudwork is a library of reuseable classes for developing .NET applications: crudwork 2.2.0.2: Minor changes; new GUID for the MSI and a new strongly-named GUID
    DigitallyCreated Utilities: DigitallyCreated Utilities v1.0.0: This release is the v1.0.0 version of DigitallyCreated Utilities. The binary distribution contains the following: compiled bin...
    DirectQ: Release 1.8.2: Adds several bugfixes and improved functionality. This release supersedes 1.8.1, which will shortly be removed. A very big THANK YOU to everyone w...
    DotNetNuke® Community Edition: 05.03.01: Major highlights: fixed an issue with the email notifications where the From and To addresses were swapped; fixed an issue with signature ch...
    Encrypted Notes: Encrypted Notes 1.5: This is the latest version of Encrypted Notes (1.5). It has an installer - it will create a directory 'CPascoe' in My Documents. Once you have ext...
    EnhSim: Release v1.9.8.1: Adds the Glyph of Flame Shock changes in 3.3.3
    FlickrNet API Library: 3.0 Beta: A brand new version of the FlickrNet library, exposing 100% of the Flickr API's methods, along with streamlined class and method names. All classe...
    Forum PAF - The Open Source .Net Forum - From Viet Nam - By Thomas John (jntpaf): Forum PAF - The Open Source .Net Forum: The software required to run Forum PAF: 1. .Net Framework 2.0 (or later) 2. Ajax Extension 1.0 (or later) 3. Sql Server 2005 (Sql Server Expr...
    HydroDesktop - CUAHSI Hydrologic Information System Desktop Application: HydroDesktop 0.7.3735 Alpha Installer: This is the testing release of the HydroDesktop 0.7 alpha version. Features supported in this version include: search for data and download of Hydr...
    MDownloader: MDownloader-0.15.9.56953: Fixed Uploading.com links detection.
    MiniTwitter: 1.10: MiniTwitter 1.10 changes. Added: unread counts are now shown on tabs when unread management is enabled; implemented silent mode (toggled from the notification area icon's right-click menu). Fixed: the "show only items containing favorite words" option was not working.
    NoteExpress User Tools (NEUT) - Do it by ourselves!: NoteExpress User Tools 1.9.1: Tested version: NoteExpress 2.5.0.1147. Fixes a bug introduced by a recent change.
    OneCMS: OneCMS 2.6: OneCMS 2.6 is finally here! Along with various bug fixes, 2.6 also brings with it many new features such as the videos module, plugins system, and m...
    Quantity System Framework: Quantity System Calculator 1.1.9.93: Experience the new edition of the quantity system with text support and functions treated as values; now you can multiply functions and divide funct...
    Selection Maker: Selection Maker 1.4: Some minor bugs fixed. Icon added for running and uninstalling the application.
    sPATCH: sPatcher v0.8a: Disabled the patcher's proxy settings to increase connection speed. sPatch - Server Example: contains a sample patch that "downgrades" PWI 1.4.2 clien...
    VSTT 2008 Quick Reference Guide: VS Performance Testing Quick Reference V2.0: Visual Studio Performance Testing Quick Reference Guide (Version 2.0)
    WeatherBar: WeatherBar 2.0: WeatherBar 2.0 changelog: introduced application settings; modified UI; ability to switch between Fahrenheit and Celsius (application-wide). ...
    WillStrohl.LightboxGallery Module for DotNetNuke: WillStrohl.LightboxGallery v1.02.01: This version of the Lightbox Gallery Module adds the following features: upgraded the Autocomplete jQuery plugin; fixed an IE8 error that was occu...
    Windows XNA: Base Defense Alpha 0.339: Alpha 0.338 had a really bad bug that made the game crash; that is what I get for coding after 3am... I also made some AI for the Raptor. So now it...
    WPF Dynamic Data Display: Silverlight DynamicDataDisplay v0.2 - Spring 2010: Silverlight version of the WPF DynamicDataDisplay charting library. Version 0.2 shows greater performance compared with version 0.1 while having...

    Most Popular Projects
    MetaSharp, Rawr, WBFS Manager, ASP.NET Ajax Library, Silverlight Toolkit, Microsoft SQL Server Product Samples: Database, AJAX Control Toolkit, LiveUpload to Facebook, Windows Presentation Foundation (WPF), ASP.NET

    Most Active Projects
    Rawr, jQuery Library for SharePoint Web Services, Farseer Physics Engine, BlogEngine.NET, Facebook Developer Toolkit, NB_Store - Free DotNetNuke Ecommerce Catalog Module, PHPExcel, Table2Class, Fluent Ribbon Control Suite, LINQ to Twitter

    Read the article

  • CodePlex Daily Summary for Saturday, May 29, 2010

CodePlex Daily Summary for Saturday, May 29, 2010

New Projects
ASP.NET MVC Time Planner: ASP.NET MVC based time planner is an example solution that introduces ASP.NET MVC, MSSQL, AJAX and jQuery development.
Blit Scripting Engine: Blit Scripting Engine provides developers using Microsoft's XNA Framework the ability to implement a scripting solution in their games and other pro...
Expression Evaluator: This is an article on how to build a basic expression evaluator. It can evaluate any numerical expression combined with trigonometric functions for...
Log Analyzer: This project has the aim of helping developers see live log/trace output from their applications, applying visual styles to the grabbed text.
LParse: LParse is a monadic parser combinator library, similar to Haskell's Parsec. It allows you to create parsers in C#. All parsers are first-clas...
NeatHtml: NeatHtml™ is a highly-portable open source website component that displays untrusted content securely, efficiently, and accessibly. Untrusted conte...
NeatUpload: The NeatUpload™ ASP.NET component allows developers to stream uploaded files to storage (filesystem or database) and allows users to monitor uplo...
NSoup: NSoup is a .NET port of the jsoup (http://jsoup.org) HTML parser and sanitizer originally written in Java. jsoup was originally written by Jonathan He...
Ordering: C# farm software
phone7: Project for Windows Phone 7
RestCall: A very simple library to make a simple REST call and deserialize to an object. It uses WCF REST Starter Kit and the .NET serializer in System.Runt...
SCSM CSV Connector: CSV Connector allows you to specify a data file and mapping location and a scheduled interval in minutes. At each scheduled interval Service Manage...
Silverlight Adorner Control: An Adorner is a custom FrameworkElement that is bound to a FrameworkElement and displays information about that element 'above' the element without...
Simple Stupid Tools: Simple Stupid Tools
SQScriptRunner: Simple Quick Script Runner allows an administrator to run T-SQL scripts against one or more servers with common characteristics. For example, an m...
ssisassembly: ssisassembly
SSRS Report RoboCopy: a tool used to pass a report from one server to another
Team Foundation Server Explorer: A standalone Team Foundation Server explorer that can be used to view and manage source files.

New Releases
(SocketCoder) Full Silverlight Web Video/Voice Conferencing: SocketCoderWebConferencingSystem_Compiled: Installing the server: 1. Before you start, you should allow the SocketCoderWCService.MainService.exe service to use the TCP ports from 4528 to 4532...
ASP.NET MVC Time Planner: MVC Time Planner - v0.0.1.0: First public alpha of MVC Time Planner is now available. I got a lot of letters from my ASP.NET blog readers who are interested in this example sol...
AvalonDock: AvalonDock 1.3.3384: Welcome to AvalonDock 1.3. This is the new version of AvalonDock targeting .NET 4. These are the main features that are included: target Microso...
Blit Scripting Engine: Blit Scripting Engine 1.0: This marks the initial release of the Blit Scripting Engine. It provides the ability to compile scripts to an assembly, load pre-compiled assemblie...
Community Forums NNTP bridge: Community Forums NNTP Bridge V12: Release of the Community Forums NNTP Bridge to access the social and answers MS forums with a single, open source NNTP bridge. This release has add...
Community Forums NNTP bridge: Community Forums NNTP Bridge V13: Release of the Community Forums NNTP Bridge to access the social and answers MS forums with a single, open source NNTP bridge. This release has add...
CSharp Intellisense: V2.4: Bug fix: Pascal casing, single selection and other selection errors
Expression Evaluator: Expression Evaluator - Visual Studio 2010: Visual Studio 2010 version
Facebook Graph Toolkit: Preview 2: Preview 2 updates the source to be much more like the Facebook PHP-SDK. Additionally, the code has been updated to follow StyleCop framework rules....
Facebook Graph Toolkit: Preview 3: REST API now working, although not fully tested. Removed JsonObject and JsonArray custom dynamic objects in favor of standard ExpandoObject and List...
Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.1.1 beta Released: Hi, today we are releasing the two most awaited features, i.e. logarithmic axis and auto update of the y-axis while scrolling and zooming.
Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.5.4 beta Released: Hi, today we are releasing the two most awaited features, i.e. logarithmic axis and auto update of the y-axis while scrolling and zooming.
Fulcrum: Fulcrum 1.0: Initial release.
Git Source Control Provider: V 0.3: V 0.3 adds automatic status refresh when files in the solution folder change
IBCSharp: IBCSharp 1.04: What IBCSharp 1.04.zip unzips to: http://i50.tinypic.com/205qofl.png IBCSharp change log 1.04 - 5/28/2010: updated IBClient.dll to IB API version...
MapWindow6: MapWindow 6.0 May 28 2010: This shifts the projection library to System.Spatial.Projections instead of MWProj4. This also fixes a meter/feet conversion error.
Microsoft Health Common User Interface: Release 8.2.51.000: This is version 8.2 of the Microsoft® Health Common User Interface Control Toolkit. This release includes code updates to controls as listed below....
NeatHtml: NeatHtml-trunk.221: Adds support for Internet Explorer Mobile 6.
NeatUpload: NeatUpload-1.3.25: Fixes the following bugs: SWFUpload.swf could not be served by a CDN because it was embedded without setting allowScriptAccess="always". NeatUpl...
NSoup: NSoup 0.1: Initial port release. Corresponds to jsoup version 0.3.1.
Numina Application/Security Framework: Numina.Framework Core 53265: Visit http://framework.numina.net to help get you started.
Nuntio Content: Nuntio Content 4.2.0: This upgrades MagicContent instances to the latest version, which is now called NuntioContent. While this release is quite stable, it is still marked ...
patterns & practices: Composite WPF and Silverlight: ProjectLinker Source for VS2010 - May 2010: The ProjectLinker helps keep the source for two projects in sync by automatically creating a linked file in one project as files are added in anoth...
phone7: Prism for WP7: This is the first version of Prism for WP7
SCSM CSV Connector: SCSM CSV Connector Version 0.1: Release notes: This is the first release of the SCSM CSV Connector solution. It is an 'alpha' release and has only been tested by the developers on ...
Silverlight Adorner Control: 1.0: Initial release
Silverlight Web Comic: Comic 1.1.1: Comic beta with functionality for the New button
Silverlight Web Comic: Web Comic 1.1: This version has a little implementation, not yet visible, of future versions' options to new, save, and load. The next version has a better review...
Simple.NET: Simple.Mocking 1.0.0.6: Initial version of a new mocking framework for .NET. Revision 1: Expect.AnyInocationOn<T>(T target) changed to Expect.AnyInocationOn(object target...
Sonic.Net: Sonic.Net v1.0.1 For Unity 2.0: This version is a port of the codebase to VS2010 with support for Unity 2.0. Note: currently follows the XSD schema of the previous Unity configur...
Squiggle - A Free open source Lan Messenger: Squiggle 1.0.2: v1.0 release.
Team Foundation Server Explorer: Beta 1: The first public beta release of the TFS Explorer.
thinktecture WSCF.blue: WSCF.blue V1 Update (1.0.8): Bug fix release with the following fix: when an XmlArrayAttribute-decorated member has IsNullable=false, and the List<T> or Collection option is s...
VCC: Latest build, v2.1.30528.0: Automatic drop of latest build
Visual Studio 2010 AutoScroller Extension: AutoScroller v0.4: A Visual Studio 2010 auto-scroller extension. Simply hold down your middle mouse button and drag the mouse in the direction you wish to scroll, fu...
WatchersNET CKEditor™ Provider for DotNetNuke: CKEditor Provider 1.10.03: What's new: added CKEditor 3.3 revision 5542 changes. Options: default toolbar set to Full for administrators. Browser window: increased size of ...

    Read the article

  • C#: Optional Parameters - Pros and Pitfalls

    - by James Michael Hare
When Microsoft rolled out Visual Studio 2010 with C# 4, I was very excited to learn how I could apply all the new features and enhancements to help make me and my team more productive developers. Default parameters have been around forever in C++, and were intentionally omitted in Java in favor of using overloading to satisfy that need, as it was thought that having too many default parameters could introduce code safety issues. To some extent I can understand that move, as I've been bitten by default parameter pitfalls before, but at the same time I feel like Java threw out the baby with the bathwater in that move, and I'm glad to see C# now has them. This post briefly discusses the pros and pitfalls of using default parameters. I'm avoiding saying cons, because I really don't believe using default parameters is a negative thing; I just think there are things you must watch for and guard against to avoid abuses that can cause code safety issues.

Pro: Default Parameters Can Simplify Code

Let's start out with the positives. Consider how much cleaner it is to reduce all the overloads in methods or constructors that simply exist to give the semblance of optional parameters. For example, we could have a Message class defined which allows for all possible initializations of a Message:

    public class Message
    {
        // can either cascade these like this or duplicate the defaults (which can introduce risk)
        public Message()
            : this(string.Empty)
        {
        }

        public Message(string text)
            : this(text, null)
        {
        }

        public Message(string text, IDictionary<string, string> properties)
            : this(text, properties, -1)
        {
        }

        public Message(string text, IDictionary<string, string> properties, long timeToLive)
        {
            // ...
        }
    }

Now consider the same code with default parameters:

    public class Message
    {
        public Message(string text = "", IDictionary<string, string> properties = null, long timeToLive = -1)
        {
            // ...
        }
    }

Much cleaner and more concise, with no repetitive coding! In addition, in the past if you wanted to cleanly supply timeToLive and accept the defaults on text and properties above, you would need to either create another overload or pass in the defaults explicitly. With named parameters, though, we can do this easily:

    var msg = new Message(timeToLive: 100);

Pro: Named Parameters Can Improve Readability

I must say one of my favorite things with the default parameters addition in C# is the named parameters. It lets code be a lot easier to understand visually with no comments. Think how many times you've run across a TimeSpan declaration with 4 arguments and wondered if they were passing in days/hours/minutes/seconds or hours/minutes/seconds/milliseconds. A novice running through your code may wonder what it is. Named arguments can help resolve the visual ambiguity:

    // is this days/hours/minutes/seconds (no) or hours/minutes/seconds/milliseconds (yes)?
    var ts = new TimeSpan(1, 2, 3, 4);

    // this, however, is visually very explicit
    var ts = new TimeSpan(days: 1, hours: 2, minutes: 3, seconds: 4);

Or think of the times you've run across something passing a Boolean literal and wondered what it was:

    // what is false here?
    var sub = CreateSubscriber(hostname, port, false);

    // aha! Much more visibly clear
    var sub = CreateSubscriber(hostname, port, isBuffered: false);

Pitfall: Don't Insert New Default Parameters In Between Existing Defaults

Now let's consider two potential pitfalls. The first is really an abuse. It's not really a fault of the default parameters themselves, but a fault in the use of them. Let's consider that Message constructor again with defaults. Say you want to add a messagePriority to the message, and you think it is more important than the timeToLive value, so you decide to put it before timeToLive in the parameter list. This gives you:

    public class Message
    {
        public Message(string text = "", IDictionary<string, string> properties = null, int priority = 5, long timeToLive = -1)
        {
            // ...
        }
    }

Oh boy, have we set ourselves up for failure! Why? Think of all the code out there that could already be using the library and already specified a timeToLive, such as this possible call:

    var msg = new Message("An error occurred", myProperties, 1000);

Before, this specified a message with a TTL of 1000; now it specifies a message with a priority of 1000 and a time to live of -1 (infinite). All of this with NO compiler errors or warnings. So the rule to take away is: if you are adding new default parameters to a method that's currently in use, make sure you add them to the end of the list or create a brand new method or overload.
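To make that rule concrete, here is a minimal sketch of the safe way to grow the constructor; the priority parameter is the same hypothetical addition as above, just appended after the existing defaults:

    public class Message
    {
        // Safe: the new parameter is appended, so the existing positional call
        // new Message("An error occurred", myProperties, 1000) still binds
        // 1000 to timeToLive rather than silently becoming a priority.
        public Message(string text = "",
                       IDictionary<string, string> properties = null,
                       long timeToLive = -1,
                       int priority = 5)
        {
            // ...
        }
    }

    // New call sites can reach the new parameter by name without
    // supplying the ones before it:
    var msg = new Message("An error occurred", timeToLive: 1000, priority: 1);

Existing calls keep their meaning, and new callers opt in explicitly, which is exactly what named arguments are for.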
Pitfall: Beware of Default Parameters in Inheritance and Interface Implementation

Now, the second potential pitfall has to do with inheritance and interface implementation. I'll illustrate with a puzzle:

    public interface ITag
    {
        void WriteTag(string tagName = "ITag");
    }

    public class BaseTag : ITag
    {
        public virtual void WriteTag(string tagName = "BaseTag") { Console.WriteLine(tagName); }
    }

    public class SubTag : BaseTag
    {
        public override void WriteTag(string tagName = "SubTag") { Console.WriteLine(tagName); }
    }

    public static class Program
    {
        public static void Main()
        {
            SubTag subTag = new SubTag();
            BaseTag subByBaseTag = subTag;
            ITag subByInterfaceTag = subTag;

            // what happens here?
            subTag.WriteTag();
            subByBaseTag.WriteTag();
            subByInterfaceTag.WriteTag();
        }
    }

What happens? Well, even though the object in each case is a SubTag whose tag is "SubTag", you will get:

    SubTag
    BaseTag
    ITag

Why? Because default parameters are resolved at compile time, not runtime! This means that the default does not belong to the object being called, but to the reference type it's being called through. Since the SubTag instance is being called through an ITag reference, it will use the default specified in ITag. So the moral of the story here is to be very careful how you specify defaults in interfaces or inheritance hierarchies. I would suggest avoiding repeating them, and instead concentrating on the layer of classes or interfaces from which you most likely expect your callers to call. For example, if you have a messaging factory that returns an IMessage which can be either an MsmqMessage or a JmsMessage, it only makes sense to put the defaults at the IMessage level, since chances are your user will be using the interface only.
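As a minimal sketch of that suggestion, reusing the article's IMessage/MsmqMessage scenario (the Send() method and its priority parameter are hypothetical, added only to show where the default goes):

    // The default is declared exactly once, at the interface level -
    // the level callers are actually expected to code against.
    public interface IMessage
    {
        void Send(int priority = 5);
    }

    public class MsmqMessage : IMessage
    {
        // No default repeated here, so there is no second value that can
        // drift out of sync; callers working through IMessage get the
        // interface's default, and direct callers must be explicit.
        public void Send(int priority) { Console.WriteLine(priority); }
    }
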
So let's sum up. In general, I really love default and named parameters in C# 4.0. I think they're a great tool to help make your code easier to read and maintain when used correctly. On the plus side, default parameters:

- Reduce redundant overloading for the sake of providing optional calling structures.
- Improve readability by letting you name an otherwise ambiguous argument.

But remember to make sure you:

- Do not insert new default parameters in the middle of an existing set of default parameters; this may cause unpredictable behavior that will not necessarily throw a syntax error - add to the end of the list or create a new method.
- Are extremely careful how you use default parameters in inheritance hierarchies and interfaces - choose the most appropriate level to add the defaults based on expected usage.

Technorati Tags: C#,.NET,Software,Default Parameters

    Read the article

  • Caching NHibernate Named Queries

    - by TStewartDev
I recently started a new job and one of my first tasks was to implement a "popular products" design. The parameters were that it be done with NHibernate and be cached for 24 hours at a time, because the query will be pretty taxing and the results do not need to be constantly up to date. This ended up being tougher than it sounds. The database schema meant a minimum of four joins with filtering and ordering criteria. I decided to use a stored procedure rather than letting NHibernate create the SQL for me. Here is a summary of what I learned (even if I didn't ultimately use all of it):

- You can't, at the time of this writing, use Fluent NHibernate to configure SQL named queries or imports.
- You can return persistent entities from a stored procedure, and there are a couple of ways to do that.
- You can populate POCOs using the results of a stored procedure, but it isn't quite as obvious.
- You can reuse your named query result mapping in other places (avoid duplication).
- Caching your query results is not at all obvious.
- Testing to see if your cache is working is a pain.

NHibernate does a lot of things right. Having unified, up-to-date, comprehensive, and easy-to-find documentation is not one of them. By the way, if you're new to this, I'll use the terms "named query" and "stored procedure" (from NHibernate's perspective) fairly interchangeably. Technically, a named query can execute any SQL, not just a stored procedure, and a stored procedure doesn't have to be executed from a named query, but for reusability, it seems to me like the best practice. If you're here, chances are good you're looking for answers to a similar problem. You don't want to read about the path, you just want the result. So, here's how to get this thing going.

The Stored Procedure

NHibernate has some guidelines when using stored procedures. For Microsoft SQL Server, you have to return a result set. The scalar value that the stored procedure returns is ignored, as are any result sets after the first. Other than that, it's nothing special.

    CREATE PROCEDURE GetPopularProducts
        @StartDate DATETIME,
        @MaxResults INT
    AS
    BEGIN
        SELECT [ProductId], [ProductName], [ImageUrl]
        FROM SomeTableWithJoinsEtc
    END

The Result Class - PopularProduct

You have two options to transport your query results to your view (or wherever their final destination is): you can populate an existing mapped entity class in your model, or you can create a new entity class. If you go with the existing model, the advantage is that the query will act as a loader and you'll get full proxied access to the domain model. However, this can be a disadvantage if you require access to related entities that aren't loaded by your results. For example, my PopularProduct has image references. Unless I tie them into the query (thus making it even more complicated and expensive to run), they'll have to be loaded on access, requiring more trips to the database. Since we're trying to avoid trips to the database by using a second-level cache, we should use the second option, which is to create a separate entity for results. This approach is (I believe) in the spirit of the Command-Query Separation principle, and it allows us to flatten our data and optimize our report-generation process from data source to view.

    public class PopularProduct
    {
        public virtual int ProductId { get; set; }
        public virtual string ProductName { get; set; }
        public virtual string ImageUrl { get; set; }
    }

The NHibernate Mappings (hbm)

Next up, we need to let NHibernate know about the query and where the results will go.
Below is the markup for the PopularProduct class. Notice that I'm using the <resultset> element and that it has a name attribute. The name allows us to drop this into our query map and any others, giving us reusability. Also notice the <import> element, which lets NHibernate know about our entity class.

    <?xml version="1.0" encoding="utf-8" ?>
    <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
      <import class="PopularProduct, Infrastructure.NHibernate, Version=1.0.0.0"/>
      <resultset name="PopularProductResultSet">
        <return-scalar column="ProductId" type="System.Int32"/>
        <return-scalar column="ProductName" type="System.String"/>
        <return-scalar column="ImageUrl" type="System.String"/>
      </resultset>
    </hibernate-mapping>

And now the PopularProductsMap:

    <?xml version="1.0" encoding="utf-8" ?>
    <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
      <sql-query name="GetPopularProducts"
                 resultset-ref="PopularProductResultSet"
                 cacheable="true"
                 cache-mode="normal">
        <query-param name="StartDate" type="System.DateTime" />
        <query-param name="MaxResults" type="System.Int32" />
        exec GetPopularProducts @StartDate = :StartDate, @MaxResults = :MaxResults
      </sql-query>
    </hibernate-mapping>

The two most important things to notice here are the resultset-ref attribute, which links in our resultset mapping, and the cacheable attribute.

The Query Class - PopularProductsQuery

So far, this has been fairly obvious if you're familiar with NHibernate. This next part, maybe not so much. You can implement your query however you want to; for me, I wanted a self-encapsulated query class, so here's what it looks like:

    public class PopularProductsQuery : IPopularProductsQuery
    {
        private static readonly IResultTransformer ResultTransformer;
        private readonly ISessionBuilder _sessionBuilder;

        static PopularProductsQuery()
        {
            ResultTransformer = Transformers.AliasToBean<PopularProduct>();
        }

        public PopularProductsQuery(ISessionBuilder sessionBuilder)
        {
            _sessionBuilder = sessionBuilder;
        }

        public IList<PopularProduct> GetPopularProducts(DateTime startDate, int maxResults)
        {
            var session = _sessionBuilder.GetSession();
            var popularProducts = session
                .GetNamedQuery("GetPopularProducts")
                .SetCacheable(true)
                .SetCacheRegion("PopularProductsCacheRegion")
                .SetCacheMode(CacheMode.Normal)
                .SetReadOnly(true)
                .SetResultTransformer(ResultTransformer)
                .SetParameter("StartDate", startDate.Date)
                .SetParameter("MaxResults", maxResults)
                .List<PopularProduct>();

            return popularProducts;
        }
    }

Okay, so let's look at each line of the query execution. The first, GetNamedQuery, matches up with our NHibernate mapping for the sql-query. Next, we set it as cacheable (this is probably redundant since our mapping also specified it, but it can't hurt, right?). Then we set the cache region, which we'll get to in the next section. Set the cache mode (optional, I believe), and my cache is read-only, so I set that as well. The result transformer is very important. This tells NHibernate how to transform your query results into a non-persistent entity. You can see I've defined ResultTransformer in the static constructor using the AliasToBean transformer. The name is obviously left over from Java/Hibernate. Finally, set your parameters and then call a result method, which will execute the query. Because the query is cacheable, you execute this statement every time you run it, and NHibernate will know, based on your parameters, whether to use its cached version or a fresh version.
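For completeness, consuming the query is then just constructor injection and a call. A minimal sketch, assuming some concrete ISessionBuilder is wired up elsewhere (the controller class below is hypothetical):

    using System;
    using System.Collections.Generic;

    // Hypothetical consumer; IPopularProductsQuery and PopularProduct
    // are the types defined above.
    public class PopularProductsController
    {
        private readonly IPopularProductsQuery _query;

        public PopularProductsController(IPopularProductsQuery query)
        {
            _query = query;
        }

        public IList<PopularProduct> LastThirtyDays()
        {
            // GetPopularProducts() truncates startDate to .Date internally,
            // so every call made on the same day produces the same cache key.
            return _query.GetPopularProducts(DateTime.Now.AddDays(-30), maxResults: 10);
        }
    }
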
The Configuration - hibernate.cfg.xml and Web.config

You need to explicitly enable second-level caching in your hibernate configuration:

    <hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
      <session-factory>
        [...]
        <property name="dialect">NHibernate.Dialect.MsSql2005Dialect</property>
        <property name="cache.provider_class">NHibernate.Caches.SysCache.SysCacheProvider,NHibernate.Caches.SysCache</property>
        <property name="cache.use_query_cache">true</property>
        <property name="cache.use_second_level_cache">true</property>
        [...]
      </session-factory>
    </hibernate-configuration>

Both properties "use_query_cache" and "use_second_level_cache" are necessary. As this is for a web deployment, we're using SysCache, which relies on ASP.NET's caching. Be aware of this if you're not deploying to the web! You'll have to use a different cache provider. We also need to tell our cache provider (in this case, SysCache) about our caching region:

    <syscache>
      <cache region="PopularProductsCacheRegion" expiration="86400" priority="5" />
    </syscache>

Here I've set the cache to be valid for 24 hours (86,400 seconds). This XML snippet goes in your Web.config (or in a separate file referenced by Web.config, which helps keep things tidy).

The Payoff

That should be it! At this point, your queries should run once against the database for a given set of parameters and then use the cache thereafter until it expires. You can, of course, adjust settings to work in your particular environment.

Testing

Testing your application to ensure it is using the cache is a pain, but if you're like me, you want to know that it's actually working. It's a bit involved, though, so I'll create a separate post for it if comments indicate there is interest.
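That said, a quick smoke test is possible if you enable NHibernate's statistics (via the generate_statistics configuration property, or by setting factory.Statistics.IsStatisticsEnabled at runtime). The sketch below assumes NUnit and a hypothetical BuildSessionFactory() helper in the test project; it runs the query twice with the same parameters and asserts that the second run registered a query-cache hit:

    using System;
    using NHibernate;
    using NUnit.Framework;

    [TestFixture]
    public class PopularProductsCacheTests
    {
        [Test]
        public void SecondRunShouldBeServedFromTheQueryCache()
        {
            // BuildSessionFactory() and SessionBuilder are hypothetical
            // stand-ins for however your project constructs its
            // ISessionFactory and ISessionBuilder.
            ISessionFactory factory = BuildSessionFactory();
            var query = new PopularProductsQuery(new SessionBuilder(factory));

            var startDate = DateTime.Today.AddDays(-30);
            query.GetPopularProducts(startDate, 10);  // first run: hits the database, populates the cache
            query.GetPopularProducts(startDate, 10);  // second run: should be answered from the cache

            Assert.That(factory.Statistics.QueryCacheHitCount, Is.GreaterThan(0));
        }
    }
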

    Read the article

  • Using Lightbox with _Screen

Although I have to admit that I discovered Bernard Bout's ideas and concepts about implementing a lightbox in Visual FoxPro quite a while ago, there was no "spare" time in active projects that allowed me to have a closer look into his solutions. Luckily, these days I received a demand to focus a little bit more on this. This article describes the steps for integrating and making use of Bernard's lightbox class in combination with _Screen in Visual FoxPro. The requirement in this project was to be able to visually lock the whole application (the _Screen area) and guide the user to a notification that should not be easily ignored. Depending on its importance, any current user activity should be interrupted and focus put onto the notification.

Getting the "meat", eh, source code

Please check out Bernard's blog on Foxite directly in order to get the latest and greatest version. As of the time of writing this article I use version 6.0 as described in this blog entry: The Fastest Lightbox Ever. The Lightbox class is sub-classed from the imgCanvas class from the GdiPlusX project on VFPx, and therefore you need to have the source code of GdiPlusX as well and integrate it into your development environment. The version I use is available here: Release GDIPlusX 1.20. As soon as you open the bbGdiLightbox class for the first time, VFP might ask you to update the reference to gdiplusx.vcx. As we have the sources, no problem, and you have access to Bernard's code. The class itself is pretty easy to understand: some properties that you do not need to change and three methods: Setup(), ShowLightbox() and BeforeDraw().

The challenge - _Screen or not?

Reading Bernard's article about the fastest lightbox ever, he states the following: "The class will only work on a form. It will not support any other containers." Really? And what about _Screen? Isn't that a form class, too? Yes, of course it is, but nonetheless trying to use _Screen directly will fail. Well, let's have a look at the code to see why:

    WITH This
        .Left = 0
        .Top = 0
        .Height = ThisForm.Height
        .Width = ThisForm.Width
        .ZOrder(0)
        .Visible = .F.
    ENDWITH

During the setup of the lightbox, as well as while capturing the image as a replacement for your forms and controls, the object reference Thisform is used, which is a little bit restrictive in my opinion, but let's continue. The second issue lies in the method ShowLightbox() and is introduced by the call of .Bitmap.FromScreen():

    Lparameters tlVisiblilty
    * tlVisiblilty - show or hide (T/F)
    * grab a screen dump with controls
    IF tlVisiblilty
        Local loCaptureBmp As xfcBitmap
        Local lnTitleHeight, lnLeftBorder, lnTopBorder, lcImage, loImage
        lnTitleHeight = IIF(ThisForm.TitleBar = 1,Sysmetric(9),0)
        lnLeftBorder = IIF(ThisForm.BorderStyle < 2,0,Sysmetric(3))
        lnTopBorder = IIF(ThisForm.BorderStyle < 2,0,Sysmetric(4))
        With _Screen.System.Drawing
            loCaptureBmp = .Bitmap.FromScreen(ThisForm.HWnd,;
                lnLeftBorder,;
                lnTopBorder+lnTitleHeight,;
                ThisForm.Width ,;
                ThisForm.Height)
        ENDWITH
        * save it to a property
        This.capturebmp = loCaptureBmp
        ThisForm.SetAll("Visible",.F.)
        This.DraW()
        This.Visible = .T.
    ELSE
        ThisForm.SetAll("Visible",.T.)
        This.Visible = .F.
    ENDIF

My first trials in using the class ended in an exception - GdiPlusError:OutOfMemory - thrown by the Bitmap object. Frankly speaking, this happened mainly because of my lack of knowledge about GdiPlusX. After reading some documentation, especially about the FromScreen() method, I experimented a little bit. Capturing the visible area of _Screen was actually not the real problem; the dimensions I specified for the bitmap were.

The modifications - step by step

First of all, get rid of the restrictive object references to Thisform and change them into either This.Parent or, more generically, This.oForm (even better: This.oControl). The Lightbox.Setup() method now sets the necessary object reference like so:

    *====================================================================
    * Initial setup
    * Default value: This.oControl = "This.Parent"
    * Alternative:   This.oControl = "_Screen"
    *====================================================================
    With This
        .oControl = Evaluate(.oControl)
        If Vartype(.oControl) == T_OBJECT
            .Anchor = 0
            .Left = 0
            .Top = 0
            .Width = .oControl.Width
            .Height = .oControl.Height
            .Anchor = 15
            .ZOrder(0)
            .Visible = .F.
        EndIf
    Endwith

Also, based on other developers' comments on Bernard's articles about the lightbox concept and its evolution, I found the source code to handle the differences between a form and _Screen, which goes into Lightbox.ShowLightbox() like this:

    *====================================================================
    * tlVisibility - show or hide (T/F)
    * grab a screen dump with controls
    *====================================================================
    Lparameters tlVisibility
    Local loControl
    m.loControl = This.oControl
    If m.tlVisibility
        Local loCaptureBmp As xfcBitmap
        Local lnTitleHeight, lnLeftBorder, lnTopBorder, lcImage, loImage
        lnTitleHeight = Iif(m.loControl.TitleBar = 1,Sysmetric(9),0)
        lnLeftBorder = Iif(m.loControl.BorderStyle < 2,0,Sysmetric(3))
        lnTopBorder = Iif(m.loControl.BorderStyle < 2,0,Sysmetric(4))
        With _Screen.System.Drawing
            If Upper(m.loControl.Name) == Upper("Screen")
                loCaptureBmp = .Bitmap.FromScreen(m.loControl.HWnd)
            Else
                loCaptureBmp = .Bitmap.FromScreen(m.loControl.HWnd,;
                    lnLeftBorder,;
                    lnTopBorder+lnTitleHeight,;
                    m.loControl.Width ,;
                    m.loControl.Height)
            EndIf
        Endwith
        * save it to a property
        This.CaptureBmp = loCaptureBmp
        m.loControl.SetAll("Visible",.F.)
        This.Draw()
        This.Visible = .T.
    Else
        This.CaptureBmp = .Null.
        m.loControl.SetAll("Visible",.T.)
        This.Visible = .F.
    Endif

Are we done? Almost... Although Bernard says it clearly in his article: "Just drop the class on a form and call it as shown." It did not come clear to my mind in the first place with _Screen, but, yeah, he is right. Dropping the class on a form provides a permanent link between those two classes; it creates a valid This.Parent object reference. Bearing in mind that the lightbox class cannot be "dropped" onto _Screen, we have to create the same type of binding during runtime execution, like so:

    *====================================================================
    * Create global lightbox component
    *====================================================================
    Local llOk, loException As Exception
    m.llOk = .F.
    m.loException = .Null.
    If Not Vartype(_Screen.Lightbox) == "O"
        Try
            _Screen.AddObject("Lightbox", "bbGdiLightbox")
        Catch To m.loException
            Assert .F. Message m.loException.Message
        EndTry
    EndIf
    m.llOk = (Vartype(_Screen.Lightbox) == "O")
    Return m.llOk

Through runtime instantiation we create a valid binding to This.Parent in the lightbox object, and the code works as expected with _Screen.

Ease your life: use properties instead of constants

Having a closer look at the BeforeDraw() method might whet your appetite to simplify the code a little bit. Looking at the sample screenshots in Bernard's article, you see several forms in different colors. This got me to modify the code like so:

    *====================================================================
    * Apply the actual lightbox effect on the captured bitmap.
    *====================================================================
    If Vartype(This.CaptureBmp) == T_OBJECT
        Local loGfx As xfcGraphics
        loGfx = This.oGfx
        With _Screen.System.Drawing
            loGfx.DrawImage(This.CaptureBmp,This.Rectangle,This.Rectangle,.GraphicsUnit.Pixel)
            * change the colours as needed here
            * possible colours are (220,128,0,0),(220,0,0,128) etc.
            loBrush = .SolidBrush.New(.Color.FromArgb( ;
                This.Opacity, .Color.FromRGB(This.BorderColor)))
            loGfx.FillRectangle(loBrush,This.Rectangle)
        Endwith
    Endif

Create an additional property, Opacity, to specify the degree of translucency you would like to have, without the need to change the code in each instance of the class. This way you only need to change the values of Opacity and BorderColor to tweak the appearance of your lightbox. This could be quite helpful to signal different levels of importance (i.e. green, yellow, orange, red, etc.) of notifications to the users of the application.

Final thoughts

Using the lightbox concept in combination with _Screen instead of forms is possible. Jim Wiggins already comments on Bernard's article, suggesting to loop through the _Screen.Forms collection in order to cascade the lightbox visibility to all active forms. Good idea. But honestly, I believe that instead of looping all forms one could use _Screen.SetAll("ShowLightbox", .T./.F., "Form") with a Form.ShowLightbox_Access method to gain more speed. The modifications described above might provide even more features to your applications while consuming fewer resources. Additionally, the restriction of capturing only forms does not exist anymore: using _Screen you are able to capture and cover anything. The captured area of _Screen does not include any toolbars, docked windows, or menus. Therefore, it is advisable to take this concept to a higher level and combine it with additional classes that handle the state of toolbars, docked windows and menus. Which I did for the customer's project.

    Read the article
