Search Results

Search found 110535 results on 4422 pages for 'source code management'.


  • It's not just “Single Sign-on” by Steve Knott (aurionPro SENA)

    - by Greg Jensen
    It is true that Oracle Enterprise Single Sign-on (Oracle ESSO) started out as purely an application single sign-on tool, but as we have seen in the previous articles in this series, the product has matured into a suite of tools that can do more than just automated single sign-on and can also provide a rapidly deployed, cost-effective solution to many demanding password management problems. In this last article of the series I would like to discuss three cases where customers faced password scenarios that required more than just single sign-on, and how some of the less well known tools in the Oracle ESSO suite “kitbag” helped solve these challenges.

    Case #1: One of the issues often faced by our customers is how to keep their applications compliant. I had a client who liked the idea of automated single sign-on for most of his applications but had a key requirement to actually increase the security for one specific SOX application. For the SOX application he wanted to secure access by using two-factor authentication with a smartcard. The problem was that the application did not support two-factor authentication. The solution was to use a feature from the Oracle ESSO suite called Authentication Manager. This feature enables you to have multiple authentication methods for the same user, which in this case were a smartcard and the Windows password. Within Authentication Manager each authenticator can be configured with a security grade, so we gave the smartcard a high grade and the Windows password a normal grade. Security grading in Oracle ESSO can be configured on a per-application basis, so we set the SOX application to require the higher-grade smartcard authenticator. The end result was that the user enjoyed automated single sign-on for most applications; when the SOX application was launched, however, ESSO required them to present their smartcard before being given access to the application.

    Case #2: Another example of solving compliance issues was the case of a large energy company with a number of core billing applications. New regulations required that users change their password regularly and use a complex password. The problem facing the customer was that the core billing applications did not have any native password change functionality, and the customer could not replace the core applications because of the cost and time required to re-develop them. With a reputation for innovation, aurionPro SENA was approached to provide a solution to this problem using Oracle ESSO. Oracle ESSO has a password expiry feature that can be triggered periodically based on the timestamp of the user's last password creation, so our strategy was to leverage this feature to provide the password change experience. The trigger can launch an application change-password event; in this scenario, however, there was no native change-password screen that could be launched, so a "dummy" change-password screen was created to imitate the missing function and connect to the application database on behalf of the user. Oracle ESSO was configured to trigger a change-password event every 60 days. After this period, if the user launched the application, Oracle ESSO would detect the logon screen and invoke the password expiry feature: it would trigger the "dummy" screen, detect it automatically as the application change-password screen, and insert a complex password on behalf of the user. After the password event had completed, the user was logged on to the application with their new password. All this was provided at a fraction of the cost of re-developing the core applications.

    Case #3: Recent popular initiatives such as BYOD and working-from-home schemes bring with them many challenges in administering "unmanaged machines" and sometimes "unmanageable users." In a recent case, a client had a dispersed community of casual contractors who worked for the business using their own laptops to access applications. To improve security around password management, the goal was to provision the passwords directly to these contractors. In a previous article we saw how Oracle ESSO can provision passwords through Provisioning Gateway, but the challenge in this scenario was how to get the Oracle ESSO agent to the casual contractor on an unmanaged machine. The answer was to use another tool in the suite, Oracle ESSO Anywhere. This component can compile the normal Oracle ESSO functionality into a deployment package that can be made available from a website, in a similar way to a streamed application. The ESSO Anywhere agent does not actually install into the registry or Program Files but runs in a folder within the user's profile, so no local administrator rights are required for installation. The ESSO Anywhere package can also be configured to stay persistent or disable itself at the end of the user's session. In this case the user just needed to be told where the website package was located and download it. Once the download was complete, the agent started automatically and the user was provided with single sign-on to their applications without ever knowing the application passwords.

    Finally, as we have seen in this series, Oracle ESSO not only has great utilities in its own toolbox but also integrates directly with Oracle Privileged Account Manager, Oracle Identity Manager and Oracle Access Manager. Integrated together, these tools provide a complete and complementary platform to address even the most complex identity and access management requirements. So what next for Oracle ESSO? “Agentless ESSO available in the cloud” – but that will be a subject for a future Oracle ESSO series!

    Read the article

  • Updated slide decks from SSMS presentation at SNESSUG

    - by AaronBertrand
    Tonight I spoke at the SNESSUG user group meeting in Warwick, RI. You can download the slide deck here (this is a 3.5 MB PDF with presenter notes): http://sqlblog.com/files/folders/23423/download.aspx If you attended the talk, please feel free to provide feedback at speakerrate.com: http://speakerrate.com/talks/2849-management-studio-tips-tricks Today also happened to be a birthday celebration for Grant Fritchey ( blog | twitter ). He blogged about the meeting and also took a picture of the cake...(read more)

    Read the article

  • SQL Server 2005 Reporting Services - Configuration Manager & Management Studio Problems

    - by Nick
    When I open Reporting Services Configuration Manager on my server, an error message appears that says:

        Reporting Services Configuration Manager
        An unknown error has occurred in the WMI Provider. Error Code 800706B3

    This error appears before I can even attempt to connect to an SSRS instance. In addition to this problem, I'm not able to connect to my SSRS instance via SSMS on my desktop. When I attempt to connect, I get the following error message:

        Microsoft SQL Server Management Studio
        Exception has been thrown by the target of an invocation. (mscorlib)
        Additional information: The operation could not be completed. (WinMgmt)

    Information about my environment:
    - Server: Win Server 2K3 x64, SQL 2005 x64 SP3 Build 9.0.4053
    - Desktop: Windows 7 Enterprise x64

    Steps I have already taken: I have installed the latest service pack on my server and workstation. I do not see any errors in my event logs.
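    Both the Configuration Manager and the remote SSMS connection go through the report server's WMI provider, so one way to narrow this down is to query that WMI namespace directly, outside of the SSRS tools. Below is a minimal C# sketch of that check; the server name is a placeholder and the namespace path is the usual SSRS WMI root, which may vary per instance and version:

    using System;
    using System.Management;   // add a reference to System.Management.dll

    class ListReportServerNamespaces
    {
        static void Main()
        {
            // "MYSERVER" is a placeholder; the path below is the assumed SSRS WMI root namespace.
            var scope = new ManagementScope(@"\\MYSERVER\root\Microsoft\SqlServer\ReportServer");
            scope.Connect();   // fails here if the WMI provider itself is unreachable

            // Enumerate child namespaces (one per SSRS instance/version).
            var searcher = new ManagementObjectSearcher(scope, new ObjectQuery("SELECT * FROM __NAMESPACE"));
            foreach (ManagementObject ns in searcher.Get())
                Console.WriteLine(ns["Name"]);
        }
    }

    If even this simple enumeration fails with the same 800706B3 error, the problem lies with the WMI provider registration on the server rather than with the client tools.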

    Read the article

  • Dynamic Code for type casting Generic Types 'generically' in C#

    - by Rick Strahl
    C# is a strongly typed language, and while that's a fundamental feature of the language, there are more and more situations where dynamic types make a lot of sense. I've written quite a bit about how I use dynamic for creating new type extensions: Dynamic Types and DynamicObject References in C#; Creating a dynamic, extensible C# Expando Object; Creating a dynamic DataReader for dynamic Property Access. Today I want to point out an example of a much simpler usage for dynamic that I use occasionally to get around potential static typing issues in C# code, especially those concerning generic types.

    TypeCasting Generics

    Generic types have been around since .NET 2.0. I've run into a number of situations in the past - especially with generic types that don't implement specific interfaces that can be cast to - where I've been unable to properly cast an object when it's passed to a method or assigned to a property. Granted, often this can be a sign of bad design, but in at least some situations the code that needs to be integrated is not under my control, so I have to make do with what's available, or the parent object is too complex or intermingled to be easily refactored to a new usage scenario. Here's an example that I ran into in my own RazorHosting library - so I have really no excuse, but I also don't see another clean way around it in this case.

    A Generic Example

    Imagine I've implemented a generic type like this:

    public class RazorEngine<TBaseTemplateType>
        where TBaseTemplateType : RazorTemplateBase, new()

    You can now happily instantiate new generic versions of this type with custom template bases, or even a non-generic version which is implemented like this:

    public class RazorEngine : RazorEngine<RazorTemplateBase>
    {
        public RazorEngine() : base() { }
    }

    To instantiate one:

    var engine = new RazorEngine<MyCustomRazorTemplate>();

    Now imagine that the template class receives a reference to the engine when it's instantiated. This code is fired as part of the engine pipeline when it gets ready to execute the template. It instantiates the template and assigns itself to the template:

    var template = new TBaseTemplateType() { Engine = this };

    The problem here is that possibly many variations of RazorEngine<T> can be passed. I can have RazorTemplateBase, RazorFolderHostTemplateBase, CustomRazorTemplateBase etc. as generic parameters, and the Engine property has to reflect that somehow. So, how would I cast that? My first inclination was to use an interface on the engine class and then cast to the interface. Generally that works, but unfortunately here the engine class is generic and has a few members that require the template type in the member signatures. So while I certainly can implement an interface:

    public interface IRazorEngine<TBaseTemplateType>

    it doesn't really help for passing this generically templated object to the template class - I still can't cast it if multiple differently typed versions of the generic type could be passed. I have the exact same issue in that I can't specify a 'generic' generic parameter, since there's no underlying base type that's common. In light of this I decided on using object and the following syntax for the property (and the same would be true for a method parameter):

    public class RazorTemplateBase : MarshalByRefObject, IDisposable
    {
        public object Engine { get; set; }
    }

    Now because the Engine property is a non-typed object, when I need to do something with this value, I still have no way to cast it explicitly. What I really would need is:

    public RazorEngine<> Engine { get; set; }

    but that's not possible.

    Dynamic to the Rescue

    Luckily with the dynamic type this sort of thing can be mitigated fairly easily. For example, here's a method that uses the Engine property and uses the well-known class interface by simply casting the plain object reference to dynamic and then firing away on the properties and methods of the base template class that are common to all templates:

    /// <summary>
    /// Allows rendering a dynamic template from a string template,
    /// passing in a model. This is like rendering a partial
    /// but providing the input as a string.
    /// </summary>
    public virtual string RenderTemplate(string template, object model)
    {
        if (template == null)
            return string.Empty;

        // if there's no template markup
        if (!template.Contains("@"))
            return template;

        // use dynamic to get around generic type casting
        dynamic engine = Engine;
        string result = engine.RenderTemplate(template, model);
        if (result == null)
            throw new ApplicationException("RenderTemplate failed: " + engine.ErrorMessage);

        return result;
    }

    Prior to .NET 4.0 I would have had to use Reflection for this sort of thing, which would have been a heck of a lot more verbose, but dynamic makes this so much easier and cleaner, and in this case at least the overhead is negligible since it's a single dynamic operation on an otherwise very complex operation call.

    Dynamic as a Bailout

    Sometimes this sort of thing reeks of a design flaw, and I agree that in hindsight this could have been designed differently. But as is often the case, this particular scenario wasn't planned for originally, and removing the generic signatures from the base type would break a ton of other code in the framework. Given the existing fairly complex engine design, refactoring an interface to remove generic types just to make this particular code work would have been overkill. Instead, dynamic provides a nice, simple and relatively clean solution. Now if there were many other places where this occurs I would probably consider reworking the code to make this cleaner, but given this isolated instance and relatively low-profile operation, use of dynamic seems a valid choice for me. This solution really works anywhere you might end up with an inheritance structure that doesn't have a common base or interface that is sufficient. In the example above I know what I'm getting, but there's no common base type that I can cast to.

    All that said, it's a good idea to think about use of dynamic before you rush in. In many situations there are alternatives that can still work with static typing. Dynamic definitely has some overhead compared to direct static access of objects, so if possible we should stick to static typing. In the example above the application already uses dynamics extensively for dynamic page templating and passing models around, so introducing dynamics here has very little additional overhead. The operation itself also fires off a fairly resource-heavy operation where the overhead of a couple of dynamic member accesses is not a performance issue. So, what's your experience with dynamic as a bailout mechanism?
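    The pattern in the article boils down to a small, self-contained example. The sketch below is not from the RazorHosting code; the Printer<T> and Host names are made up purely to illustrate holding differently-closed generic types in an object property and invoking a common member through dynamic:

    using System;

    // A generic type with no common non-generic base or interface to cast to.
    class Printer<T>
    {
        public string Print(T value)
        {
            return "[" + typeof(T).Name + "] " + value;
        }
    }

    class Host
    {
        // Typed as object because Printer<int>, Printer<string>, ... share no usable base type.
        public object Engine { get; set; }

        public string Render(object value)
        {
            // dynamic defers member lookup to runtime, so any Printer<T>
            // works as long as it exposes a compatible Print method.
            dynamic engine = Engine;
            return engine.Print(value);
        }
    }

    class Program
    {
        static void Main()
        {
            var host = new Host { Engine = new Printer<int>() };
            Console.WriteLine(host.Render(42));        // [Int32] 42

            host.Engine = new Printer<string>();
            Console.WriteLine(host.Render("hello"));   // [String] hello
        }
    }

    As in the article, the trade-off is that the Print call is only checked at runtime, so a typo or an incompatible argument surfaces as a RuntimeBinderException instead of a compile-time error.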
    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in CSharp.

    Read the article

  • Visitor pattern and compiler code generation, how to get children attributes?

    - by LeleDumbo
    I'd like to modify my compiler's code generator to use the visitor pattern, since the current approach must use multiple conditional statements to check the real type of a child before generating the corresponding code. However, I have problems getting children's attributes after they're visited. For instance, in a binary expression I use this:

    LHSCode := GenerateExpressionCode(LHSNode);
    RHSCode := GenerateExpressionCode(RHSNode);
    CreateBinaryExpression(Self, LHS, RHS);

    In the visitor pattern the visit method is usually void, so I can't get the expression code from LHS and RHS. Keeping shared global variables isn't an option since expression code generation is recursive and thus could erase previous values kept in the variables. I'll just show the binary expression as this is the most complicated part (for now):

    function TLLVMCodeGenerator.GenerateExpressionCode(
      Expr: TASTExpression): TLLVMValue;
    var
      BinExpr: TASTBinaryExpression;
      UnExpr: TASTUnaryExpression;
      LHSCode, RHSCode, ExprCode: TLLVMValue;
      VarExpr: TASTVariableExpression;
    begin
      if Expr is TASTBinaryExpression then begin
        BinExpr := Expr as TASTBinaryExpression;
        LHSCode := GenerateExpressionCode(BinExpr.LHS);
        RHSCode := GenerateExpressionCode(BinExpr.RHS);
        case BinExpr.Op of
          '<':  Result := FBuilder.CreateICmp(ccSLT, LHSCode, RHSCode);
          '<=': Result := FBuilder.CreateICmp(ccSLE, LHSCode, RHSCode);
          '>':  Result := FBuilder.CreateICmp(ccSGT, LHSCode, RHSCode);
          '>=': Result := FBuilder.CreateICmp(ccSGE, LHSCode, RHSCode);
          '==': Result := FBuilder.CreateICmp(ccEQ, LHSCode, RHSCode);
          '<>': Result := FBuilder.CreateICmp(ccNE, LHSCode, RHSCode);
          '/\': Result := FBuilder.CreateAnd(LHSCode, RHSCode);
          '\/': Result := FBuilder.CreateOr(LHSCode, RHSCode);
          '+':  Result := FBuilder.CreateAdd(LHSCode, RHSCode);
          '-':  Result := FBuilder.CreateSub(LHSCode, RHSCode);
          '*':  Result := FBuilder.CreateMul(LHSCode, RHSCode);
          '/':  Result := FBuilder.CreateSDiv(LHSCode, RHSCode);
        end;
      end else if Expr is TASTPrimaryExpression then
        if Expr is TASTBooleanConstant then
          with Expr as TASTBooleanConstant do
            Result := FBuilder.CreateConstant(Ord(Value), ltI1)
        else if Expr is TASTIntegerConstant then
          with Expr as TASTIntegerConstant do
            Result := FBuilder.CreateConstant(Value, ltI32)
        else if Expr is TASTUnaryExpression then begin
          UnExpr := Expr as TASTUnaryExpression;
          ExprCode := GenerateExpressionCode(UnExpr.Expr);
          case UnExpr.Op of
            '~': Result := FBuilder.CreateXor(
                   FBuilder.CreateConstant(1, ltI1), ExprCode);
            '-': Result := FBuilder.CreateSub(
                   FBuilder.CreateConstant(0, ltI32), ExprCode);
          end;
        end else if Expr is TASTVariableExpression then begin
          VarExpr := Expr as TASTVariableExpression;
          with VarExpr.VarDecl do
            Result := FBuilder.CreateVar(Ident, BaseTypeLLVMTypeMap[BaseType]);
        end;
    end;

    Hope you understand it :)
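    One common way around the void visit method is to let the visitor itself carry the results: either make the visitor generic in its return type, or have it keep a result stack that each Visit call pushes onto (which also copes with the recursion that makes shared globals unsafe). Here is a rough C# sketch of the value-returning flavour; the node and visitor names are made up for illustration and are not the poster's TAST*/TLLVM* classes:

    // Visitor whose Visit methods return a value instead of being void.
    interface IExprVisitor<TResult>
    {
        TResult VisitIntConst(IntConst node);
        TResult VisitBinary(BinaryExpr node);
    }

    abstract class Expr
    {
        public abstract TResult Accept<TResult>(IExprVisitor<TResult> visitor);
    }

    class IntConst : Expr
    {
        public int Value;
        public override TResult Accept<TResult>(IExprVisitor<TResult> visitor)
        {
            return visitor.VisitIntConst(this);
        }
    }

    class BinaryExpr : Expr
    {
        public string Op;
        public Expr LHS, RHS;
        public override TResult Accept<TResult>(IExprVisitor<TResult> visitor)
        {
            return visitor.VisitBinary(this);
        }
    }

    // Code generator: each child's result comes back as a return value,
    // so the recursion needs no shared globals.
    class CodeGenVisitor : IExprVisitor<string>
    {
        public string VisitIntConst(IntConst node)
        {
            return node.Value.ToString();
        }

        public string VisitBinary(BinaryExpr node)
        {
            string lhsCode = node.LHS.Accept(this);   // recurse; result stays local
            string rhsCode = node.RHS.Accept(this);
            return "(" + lhsCode + " " + node.Op + " " + rhsCode + ")";
        }
    }

    In Object Pascal, where generic interfaces can be awkward, the equivalent trick is to give the visitor a result field or stack: each Visit method writes its result there, and the parent reads it immediately after calling Accept on a child, before visiting the next one.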

    Read the article

  • SQL SERVER – Tricks to Replace SELECT * with Column Names – SQL in Sixty Seconds #017 – Video

    - by pinaldave
    You might have heard many times that one should not use SELECT *, as there are many disadvantages to its usage. I also believe that there are only rare occasions when we need every single column of the query. In most cases we only need a few columns, and we should retrieve only those columns. SELECT * has many disadvantages. Let me list a few, and you can add the remaining ones as a comment:

    - Retrieves unnecessary columns and increases network traffic
    - When new columns are added, views need to be refreshed manually
    - Leads to usage of a sub-optimal execution plan
    - Uses the clustered index in most cases instead of a more optimal index
    - It is difficult to debug

    There are two quick tricks I have discussed in the video which explain how users can avoid SELECT * and instead list the column names:

    1) Drag the Columns folder from SQL Server Management Studio to the Query Editor
    2) Right-click on the table name >> Script Table As >> SELECT To... >> select an option

    It is extremely easy to list the column names in the table. In today's sixty seconds video, you will notice that I was able to demonstrate both methods very quickly. From now onwards there should be no excuse for not listing column names. Let me ask a question back – is there ever a reason to SELECT *? If yes, would you please share that as a comment.

    More on SELECT *:
    SQL SERVER – Solution – Puzzle – SELECT * vs SELECT COUNT(*)
    SQL SERVER – Puzzle – SELECT * vs SELECT COUNT(*)
    SQL SERVER – SELECT vs. SET Performance Comparison

    I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can. If we like your idea we promise to share with you educational material. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video
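    As a footnote to the two SSMS tricks above, the same explicit column list can also be generated from application code by reading INFORMATION_SCHEMA.COLUMNS. A small C# sketch of the idea; the connection string, schema and table name are placeholders:

    using System;
    using System.Collections.Generic;
    using System.Data.SqlClient;

    class ColumnLister
    {
        static void Main()
        {
            // Placeholder connection string and table; adjust for your environment.
            using (var conn = new SqlConnection("Server=.;Database=AdventureWorks;Integrated Security=true"))
            {
                conn.Open();
                var cmd = new SqlCommand(
                    "SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS " +
                    "WHERE TABLE_SCHEMA = @schema AND TABLE_NAME = @table " +
                    "ORDER BY ORDINAL_POSITION", conn);
                cmd.Parameters.AddWithValue("@schema", "Person");
                cmd.Parameters.AddWithValue("@table", "Contact");

                var columns = new List<string>();
                using (var reader = cmd.ExecuteReader())
                    while (reader.Read())
                        columns.Add("[" + reader.GetString(0) + "]");

                // Emit an explicit column list instead of SELECT *
                Console.WriteLine("SELECT " + string.Join(", ", columns.ToArray()) + " FROM [Person].[Contact];");
            }
        }
    }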

    Read the article

  • Web based library management (FOSS) application

    - by pgb
    I'm looking for a FOSS web-based library management application. I want to manage the few books we have at our company, and I'm looking for applications that provide a simple web interface to:

    - Add new books.
    - Search for books.
    - (Not required) Place holds on books.

    My ideal would be something like Delicious Library, but web-based and open source. I've looked at Koha (which I could not install), Emilda (which seems to be a dead project - the last emails are from 2006 and I could not install it), and VuFind, which looks awesome but doesn't provide (or I could not find) an interface to add new records to the library. Any suggestions?

    Read the article

  • Slides and code for MPI Cluster Debugger

    I've blogged before about the MPI Cluster Debugger in VS2010 that facilitates launching the application on the cluster and attaching the debugger (btw, a shorter version of the screencast I link to there is here). There have been requests for the code I use in the screencast, so please find a ZIP with that code. There have also been requests for a PowerPoint deck to use when showing this feature to others. Feel free to download some slides I threw together the other day. Comments about this post welcome at the original blog.

    Read the article

  • Twin Cities Code Camp 8 Retrospective

    - by Lee Brandt
    I just got back (a few hours ago) from Minneapolis, where I was speaking at the Twin Cities Code Camp 8. I'd never been to a Twin Cities Code Camp, and I have always heard such great things, so I submitted and got accepted to speak. The conference (what I got to see) was great. My talk was pretty short on people, but there are many reasons for that. First, I spoke opposite Donn Felker (speaking about developing for Android) and Keith Dahlby (speaking about Dynamic .NET). So of course, my talk is going to be empty. How could I compete with that? Plus, my talk was about software process improvement, specifically about how our process has evolved. Maybe not the smartest idea to submit a talk about software process at a developers' conference. The people who DID attend, however, seemed to really enjoy the talk. There was good interaction and good, thoughtful questions. So the attendees seemed engaged. I actually did get a chance to go to one session. I went and saw Javier Lozano talk about open source tools for ASP.NET MVC. I am hip-deep in MVC stuff right now and getting up to speed on MVC 2 as well. I learned about MVC Turbine, Javier's open source project. I will definitely be adding it to my MVC arsenal. Thanks Javier! I did forget my AC adapter for my laptop and got a little lost in Minneapolis on my way to get one from MicroCenter Saturday morning, but other than that, it was a great trip. It's a long drive, but seeing all the guys and getting two Nut & Honey rolls from Roly Poly in Eden Prairie for lunch on Saturday made the trip totally worth it. I look forward to seeing what Jason & Chris come up with for next year! Thanks for having me guys!

    Read the article

  • The Work Order Printing Challenge

    - by celine.beck
    One of the biggest concerns we've heard from maintenance practitioners is the ability to print and batch-print work order details along with the accompanying attachments. Indeed, maintenance workers traditionally rely on work order packets to complete their job. A standard work order packet can include a variety of information like equipment documentation, operating instructions, checklists, end-of-task feedback forms and the like.

    Now, the problem is that most Asset Lifecycle Management applications do not provide a simple and efficient solution for process printing with document attachments. Work order forms can be easily printed, but attachments are usually left out of the printing process. This sounds like a minor problem, but when you are processing a high volume of work orders on a regular basis, this inconvenience can result in significant inefficiencies. In order to print a work order and its related attachments, maintenance personnel need to print the work order details, then go back to the work order and open each individual attachment using the proper authoring application to view and print each document. The printed output is collated into a work order packet.

    The AutoVue Document Print Service products that were just released in April 2010 aim at helping organizations address the work order printing challenge. Customers and partners can leverage the AutoVue Document Print Services to build a complete printing solution that complements their existing print server solution with AutoVue's document- and platform-agnostic document print services. The idea is to leverage AutoVue's printing services to invoke printing either programmatically or manually, directly from within the work order management application, and efficiently process the printing of complete work order packets, including all types of attachments, from office files to more advanced engineering documents like 2D CAD drawings.

    Oracle partners like MIPRO Consulting, specialists in PeopleSoft implementations, have already expressed interest in the AutoVue Document Print Service products for their ability to offer print services to the PeopleSoft ALM suite, so that customers are able to print packages of documents for maintenance personnel. For more information on the subject, please consult MIPRO Consulting's article entitled Unsung Value: Primavera and AutoVue Integration into PeopleSoft posted on their blog. The blog post entitled Introducing AutoVue Document Print Service provides additional information on how the solution works. We would also love to hear what your thoughts are on the topic, so please do not hesitate to post your comments/feedback on our blog. Related Articles: Introducing AutoVue Document Print Service; Print Any Document Type with AutoVue Document Print Services

    Read the article

  • PeopleSoft HCM @ OHUG 11: Enter the Matrix

    - by Jay Zuckert
    The PeopleSoft HCM team is back from a very busy and exciting OHUG conference in Orlando. The packed, standing-room-only PeopleSoft HCM Roadmap keynote was the highlight of the conference for many attendees and the reviews are in:

    - PeopleSoft rocked the house!
    - Great demonstration of products in the keynote.
    - Best keynote in a long time, and fun.
    - Engaging and entertaining, great demonstration of capabilities.
    - Message received loud and clear, PeopleSoft applications are here to stay.
    - PeopleSoft has a real vision moving forward.
    - Real-time polls using mobile texting were cutting edge.

    Tracy Martin (as Trinity) and other members of the PeopleSoft HCM team presented a 'must-see' Matrix-themed session while dressed as movie characters. The keynote highlighted planned HCM capabilities for Matrix administration and future organization visualization enhancements. The team also previewed the planned Manager Dashboard and Talent Summary.

    Following the keynote, some of the cast posed for photo opportunities at the OHUG booth in the exhibition hall. As you can imagine, they received some interesting looks walking by the other vendor booths. The PeopleSoft HCM team also presented numerous other OHUG sessions covering PeopleSoft Talent Management, Compensation, HR HelpDesk, Payroll, Global HCM Practices, Time & Labor, Absence Management, and Benefits. All of those presentations are available from the OHUG site at www.ohug.org. When not in one of the well-attended PeopleSoft HCM sessions, conference attendees filled the Oracle booth in the exhibition hall to see live product demonstrations. True to their PeopleSoft roots, some of the PeopleSoft HCM team played as hard as they worked in Orlando and enjoyed the OHUG Appreciation event along with customers at the Hard Rock.

    We are already busy planning for Oracle OpenWorld 2011 and prepping sessions our PeopleSoft HCM customers are sure to like. We hope to see you there in San Francisco from Oct. 2-6. To learn more about OpenWorld or to register, click here.

    Read the article

  • Out-of-the-Box Spatial Dashboards Improve Utility Outage Decisions

    - by stephen.garth
    Oracle Utilities Advanced Spatial Outage Analytics leverages the capabilities of Oracle Business Intelligence with map visualization and geospatial analysis of outage data from utility network management systems, providing BI dashboards to support utility executives and other decision makers throughout the enterprise. This excellent article by Oracle's Guerry Waters, published by Directions Media, gives details. Read the article here.

    Get more information:
    - Oracle Spatial
    - Oracle Utilities
    - Oracle Business Intelligence

    Read the article

  • "Fast link detected" warning in GP management console

    - by ???????? ??????
    There is a message shown in every report I make in the Group Policy Results section of the Group Policy Management Console, saying "A fast link is detected". I followed the link in the warning, but after I read the page several times I concluded that I can ignore the warning. However, I noticed that the group policies are not applied when security filtering is used until "gpupdate /sync" is executed... Is this related to the fast link detection? In general, can somebody briefly explain the consequences of fast links?

    Read the article

  • Open source alternative for Canonical Landscape?

    - by netvope
    From Canonical: Landscape is an easy-to-use systems management and monitoring service that enables you to manage multiple Ubuntu machines as easily as one through a simple Web-based interface. However, Landscape is not free. The Red Hat counterpart Satellite has a free version called Spacewalk, but it doesn't work on Ubuntu. (There is an attempt to port Spacewalk to Debian, but it doesn't look like it's stable yet.) Are there any open source alternatives to Landscape? Better yet, is there any Spacewalk-like software that works for both RedHat-based and Debian-based systems?

    Read the article

  • Philly.NET Code Camp

    - by Steve Michelotti
    This Saturday I will be at the Philly.NET Code Camp presenting C# 4.0. The code camp is currently registered to capacity (800 attendees), but you will be able to view certain presentations on a Live Meeting simulcast (and later on Channel 9). You can tune in at 3:30 PM Eastern time to view my presentation. The attendee URL is here.

    Read the article

  • 4th Annual Hartford Code Camp - The Code Camp Manifesto lives on!

    - by SB Chatterjee
    It is amazing that Thom Robbins' blog posting back in December 2004 laid the foundation of the Code Camps that have grown worldwide - there is at least one every weekend in some country (unscientific tweet-stats sampling). This weekend, we at the Connecticut .NET Developers Group had the 4th Annual Hartford Code Camp and it was well attended, with 120+ attendees and ~30 sessions. Our thanks to the speakers from near and far who made our event a success.

    Read the article

  • My Code Kata–A Solution Kata

    - by Glav
    There are many developers and coders out there who like to do code katas to keep their coding ability up to scratch and to practice their skills. I think it is a good idea. While I like the concept, I find them dead boring and of minimal purpose. Yes, they serve to hone your skills but that's about it. They are often quite abstract, in that they usually focus on a small problem set requiring specific solutions. It is fair enough as that is how they are designed but again, I find them quite boring. What I personally like to do is go for something a little larger and a little more fun. It takes a little more time and is not as easily executed as a kata, but it serves the same purposes from a practice perspective and allows me to continue to solve some problems that are not directly part of the initial goal. This means I can cover a broader learning range and have a bit more fun. If I am lucky, sometimes they even end up being useful tools. With that in mind, I thought I'd share my current 'kata'. It is not really a code kata as it is too big. I prefer to think of it as a 'solution kata'. The code is on bitbucket here.

    What I wanted to do was create a kind of simplistic virtual world where I can create a player, or a class, stuff it into the world, and see if it survives and can navigate its way to the exit. Requirements were pretty simple:

    - Must be able to define a map to describe the world using simple X,Y co-ordinates. Z co-ordinates as well if you feel like getting clever.
    - Should have the concept of entrances, exits, solid blocks, and potentially other materials (again if you want to get clever).
    - A coder should be able to easily write a class which will act as an inhabitant of the world.
    - An inhabitant will receive stimulus from the world in the form of the surrounding environment and be able to make a decision on an action, which it passes back to the 'world' for processing.
    - At a minimum, an inhabitant will have sight and speed characteristics which determine how far they can 'see' in the world, and how fast they can move.
    - Coders who write a really bad 'inhabitant' should not adversely affect the rest of the world.
    - Should allow multiple inhabitants in the world.

    So that was the solution I set out to act as a practice solution and a little bit of fun. It had some interesting problems to solve and I figured, if it turned out ok, I could potentially use this as a 'developer test' for interviews: ask a potential coder to write a class for an inhabitant, show the coder the map they will navigate, but also mention that we will use their code to navigate a map they have not yet seen and a little more complex.

    I have been playing with the solution for a short time now and have it working in basic concepts. Below is a screen shot using a very basic console visualiser that shows the map, boundaries, blocks, entrance, exit and players/inhabitants. The yellow asterisks '*' are the players, green 'O' the entrance, purple '^' the exit, maroon/browny '#' are solid blocks. The players can move around at different speeds, knock into each other, and make directional movement decisions based on what they see and who is around them. It has been quite fun to write and it is also quite fun to develop different players to inject into the world.

    The code below shows a really simple implementation of an inhabitant that can work out what to do based on stimulus from the world. It is pretty simple and just tries to move in some direction if there is nothing blocking the path.

    public class TestPlayer : LivingEntity
    {
        public TestPlayer()
        {
            Name = "Beta Boy";
            LifeKey = Guid.NewGuid();
        }

        public override ActionResult DecideActionToPerform(EcoDev.Core.Common.Actions.ActionContext actionContext)
        {
            try
            {
                var action = new MovementAction();

                // move forward if we can
                if (actionContext.Position.ForwardFacingPositions.Length > 0)
                {
                    if (CheckAccessibilityOfMapBlock(actionContext.Position.ForwardFacingPositions[0]))
                    {
                        action.DirectionToMove = MovementDirection.Forward;
                        return action;
                    }
                }
                if (actionContext.Position.LeftFacingPositions.Length > 0)
                {
                    if (CheckAccessibilityOfMapBlock(actionContext.Position.LeftFacingPositions[0]))
                    {
                        action.DirectionToMove = MovementDirection.Left;
                        return action;
                    }
                }
                if (actionContext.Position.RearFacingPositions.Length > 0)
                {
                    if (CheckAccessibilityOfMapBlock(actionContext.Position.RearFacingPositions[0]))
                    {
                        action.DirectionToMove = MovementDirection.Back;
                        return action;
                    }
                }
                if (actionContext.Position.RightFacingPositions.Length > 0)
                {
                    if (CheckAccessibilityOfMapBlock(actionContext.Position.RightFacingPositions[0]))
                    {
                        action.DirectionToMove = MovementDirection.Right;
                        return action;
                    }
                }
                return action;
            }
            catch (Exception ex)
            {
                World.WriteDebugInformation("Player: " + Name, string.Format("Player Generated exception: {0}", ex.Message));
                throw ex;
            }
        }

        private bool CheckAccessibilityOfMapBlock(MapBlock block)
        {
            if (block == null || block.Accessibility == MapBlockAccessibility.AllowEntry
                || block.Accessibility == MapBlockAccessibility.AllowExit
                || block.Accessibility == MapBlockAccessibility.AllowPotentialEntry)
            {
                return true;
            }
            return false;
        }
    }

    It is simple and it seems to work well. The world implementation itself decides the stimulus context that is passed to the inhabitant to make an action decision. All movement is carried out on separate threads and timed appropriately to be as fair as possible and to cater for additional skills such as speed, and eventually maybe stamina and strength, with actions like fighting. It is pretty fun to make up random maps and see how your inhabitant does. You can download the code from here. Along the way I have played with parallel extensions to make the compute-intensive stuff spread across all cores, had to heavily factor in visibility of methods and properties so design of classes was paramount, and worked out movement algorithms that play fairly in the world and properly favour the players with higher abilities, as well as a host of other issues. So that is my 'solution kata'. If I keep going with it, I may develop a web interface for it where people can upload assemblies and watch their player within a web browser visualiser, and maybe even a map designer. What do you do to keep the fires burning?
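    As a usage example of the same API that TestPlayer relies on above (ForwardFacingPositions, MovementAction, MapBlockAccessibility and friends), here is a sketch of a second inhabitant that tries the four directions in a shuffled order instead of always preferring Forward. It only compiles against the framework from the bitbucket repository; the member names are taken from the listing above, and it assumes the *FacingPositions members are MapBlock arrays, as the indexing in TestPlayer suggests.

    // Requires System, System.Collections.Generic and System.Linq, plus the EcoDev framework namespaces.
    // A variation on TestPlayer: same world API, but directions are tried in random order.
    public class WandererPlayer : LivingEntity
    {
        private readonly Random _random = new Random();

        public WandererPlayer()
        {
            Name = "Wanderer";
            LifeKey = Guid.NewGuid();
        }

        public override ActionResult DecideActionToPerform(EcoDev.Core.Common.Actions.ActionContext actionContext)
        {
            var action = new MovementAction();

            // Pair each direction with the nearest block the world reports in that direction.
            var candidates = new List<KeyValuePair<MovementDirection, MapBlock>>();
            AddCandidate(candidates, MovementDirection.Forward, actionContext.Position.ForwardFacingPositions);
            AddCandidate(candidates, MovementDirection.Left, actionContext.Position.LeftFacingPositions);
            AddCandidate(candidates, MovementDirection.Right, actionContext.Position.RightFacingPositions);
            AddCandidate(candidates, MovementDirection.Back, actionContext.Position.RearFacingPositions);

            // Walk the candidates in a shuffled order and take the first accessible one.
            foreach (var candidate in candidates.OrderBy(c => _random.Next()))
            {
                var block = candidate.Value;
                if (block == null || block.Accessibility == MapBlockAccessibility.AllowEntry
                    || block.Accessibility == MapBlockAccessibility.AllowExit
                    || block.Accessibility == MapBlockAccessibility.AllowPotentialEntry)
                {
                    action.DirectionToMove = candidate.Key;
                    return action;
                }
            }
            return action; // nowhere to go; stay put
        }

        private static void AddCandidate(List<KeyValuePair<MovementDirection, MapBlock>> candidates,
            MovementDirection direction, MapBlock[] blocks)
        {
            if (blocks != null && blocks.Length > 0)
                candidates.Add(new KeyValuePair<MovementDirection, MapBlock>(direction, blocks[0]));
        }
    }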

    Read the article

  • Separating physics and game logic from UI code

    - by futlib
    I'm working on a simple block-based puzzle game. The game play consists pretty much of moving blocks around in the game area, so it's a trivial physics simulation. My implementation, however, is in my opinion far from ideal and I'm wondering if you can give me any pointers on how to do it better.

    I've split the code up into two areas, game logic and UI, as I did with a lot of puzzle games:

    - The game logic is responsible for the general rules of the game (e.g. the formal rule system in chess)
    - The UI displays the game area and pieces (e.g. chess board and pieces) and is responsible for animations (e.g. animated movement of chess pieces)

    The game logic represents the game state as a logical grid, where each unit is one cell's width/height on the grid. So for a grid of width 6, you can move a block of width 2 four times until it collides with the boundary. The UI takes this grid and draws it by converting logical sizes into pixel sizes (that is, multiplying by a constant). However, since the game has hardly any game logic, my game logic layer [1] doesn't have much to do except collision detection. Here's how it works:

    - Player starts to drag a piece
    - UI asks game logic for the legal movement area of that piece and lets the player drag it within that area
    - Player lets go of a piece
    - UI snaps the piece to the grid (so that it is at a valid logical position)
    - UI tells game logic the new logical position (via mutator methods, which I'd rather avoid)

    I'm not quite happy with that: I'm writing unit tests for my game logic layer, but not the UI, and it turned out all the tricky code is in the UI: stopping the piece from colliding with others or the boundary and snapping it to the grid. I don't like the fact that the UI tells the game logic about the new state; I would rather have it call a movePieceLeft() method or something like that, as in my other games, but I didn't get far with that approach, because the game logic knows nothing about the dragging and snapping that's possible in the UI.

    I think the best thing to do would be to get rid of my game logic layer and implement a physics layer instead. I've got a few questions regarding that:

    - Is such a physics layer common, or is it more typical to have the game logic layer do this?
    - Would the snapping-to-grid and piece-dragging code belong to the UI or the physics layer?
    - Would such a physics layer typically work with pixel sizes or with some kind of logical unit, like my game logic layer?
    - I've seen event-based collision detection in a game's code base once: the player would just drag the piece, the UI would render that obediently and notify the physics system, and the physics system would call an onCollision() method on the piece once a collision is detected. What is more common? This approach or asking for the legal movement area first?

    [1] Layer is probably not the right word for what I mean, but subsystem sounds overblown and class is misguiding, because each layer can consist of several classes.
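    For what it's worth, the split being described can be sketched in a few lines: the logic layer answers questions in grid units and exposes a single move command, while the UI only converts between grid cells and pixels and never mutates state directly. A minimal C# sketch (all names hypothetical, not from the poster's code; System.Drawing.Point is used only as a convenient two-int value type):

    using System.Collections.Generic;
    using System.Drawing; // Point, used purely as a grid coordinate pair

    // Logic layer: works purely in grid units and knows nothing about pixels.
    class Board
    {
        private readonly HashSet<Point> occupied = new HashSet<Point>();
        public int Width = 6;
        public int Height = 6;

        public void AddPiece(Point cell) { occupied.Add(cell); }

        public bool IsFree(Point cell)
        {
            return cell.X >= 0 && cell.Y >= 0 && cell.X < Width && cell.Y < Height
                   && !occupied.Contains(cell);
        }

        // The UI asks for legal targets instead of pushing arbitrary state in.
        public List<Point> LegalMoves(Point from)
        {
            var moves = new List<Point>();
            foreach (var d in new[] { new Point(1, 0), new Point(-1, 0), new Point(0, 1), new Point(0, -1) })
            {
                var to = new Point(from.X + d.X, from.Y + d.Y);
                if (IsFree(to)) moves.Add(to);
            }
            return moves;
        }

        // Single mutating entry point; the logic layer enforces the rules.
        public bool TryMove(Point from, Point to)
        {
            if (!occupied.Contains(from) || !IsFree(to)) return false;
            occupied.Remove(from);
            occupied.Add(to);
            return true;
        }
    }

    // UI layer: converts between pixels and grid cells for drawing and drag snapping.
    static class View
    {
        const int CellSize = 32; // pixels per grid cell

        public static Point ToPixels(Point cell)
        {
            return new Point(cell.X * CellSize, cell.Y * CellSize);
        }

        public static Point SnapToGrid(int px, int py)
        {
            return new Point((px + CellSize / 2) / CellSize, (py + CellSize / 2) / CellSize);
        }
    }

    With this shape, the snapping and dragging math stays in the UI, but the rules (what is free, what is a legal move) live in the logic layer where they can be unit tested, which is also roughly where a thin 'physics' layer would sit if the game grew real movement and collisions.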

    Read the article

  • Source Control - XCode - Visual Studio 2005/2008 / 2010

    - by Mick Walker
    My apologies if this has been asked before; I wasn't quite sure if this question should be asked on a programming forum, as it relates more to the programming environment than to a particular technology, so please accept my (double) apologies if I am posting this in the wrong place. My logic in this case was: if it affects the code I write, then this is the place for it.

    At home, I do a lot of my development on a Mac Pro. I do development for the Mac, iPhone and Windows on this machine (Xcode & Visual Studio - multiple versions installed via Boot Camp, but generally I run them via Parallels). When visiting a client, I have a similar setup, but on my MacBook Pro.

    What I want is a source control solution to install on the Mac Pro that will support both Xcode and multiple versions of Visual Studio, so that when I visit a client, I can simply grab the latest copy from source control via the MacBook Pro. Whilst visiting the client, he/she may suggest changes, and minor ones I would tend to make on site, so I need the ability to merge any modified code back into the trunk of the project/solution when I return home.

    At the moment, I am using no source control at all, and rely on simply copying folders and overwriting them when I return from a client - that's my 'merge'!!! I was wondering if anyone had any ideas of a source provider I could use which would support both Windows and Mac development environments, and is cheap (free would be better).

    Read the article

  • More Chicago Code Camp Information

    - by Tim Murphy
    It seems the guys have posted the venue. The Chicago Code Camp will be held at the Illinois Institute of Technology on May 1, 2010. Sign up and join in.

    IIT - Stuart Building
    10 West 31st
    Chicago, IL 60616

    del.icio.us Tags: Chicago Code Camp

    Read the article

  • SSMS Tools Pack now supports Denali CTP1

    - by AaronBertrand
    Earlier today, Mladen Prajdic ( blog | twitter ) released an updated version of his SSMS Tools Pack (v.1.9.4), a free add-in for Management Studio that provides a ton of helpful functionality that isn't available with the native tools. I'm really glad this happened, because I've installed Denali on all of my VMs and have been using it for most of my work, and I've been missing some of the little things the tool adds. In addition to adding Denali support, Mladen also fixed a handful of minor bugs...(read more)

    Read the article

  • How to view special characters in SQL Management Studio

    - by B Z
    SQL Server 2005. I have a text column that has special characters stored in it (e.g. CR, LF), but I don't know exactly what they are. I would like to view these characters in Management Studio, something like Notepad++'s Show Symbol > Show All Characters.

    My goal: I am working on a data conversion from one database to another. When the data is converted and viewed in the native application, it displays some funky characters, like a pipe character. I would like to eliminate these characters during the conversion process.
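    Short of a built-in SSMS equivalent of the Notepad++ view, one way to see exactly which characters are in a value is to dump each character's code point. A small C# sketch of the idea (fetching the value from the database is assumed; only the escaping logic is shown):

    using System;
    using System.Text;

    static class SpecialChars
    {
        // Returns the input with control characters made visible, e.g. "<13>" for CR, "<10>" for LF.
        public static string Reveal(string value)
        {
            var sb = new StringBuilder();
            foreach (char c in value)
            {
                if (char.IsControl(c))
                    sb.AppendFormat("<{0}>", (int)c);
                else
                    sb.Append(c);
            }
            return sb.ToString();
        }

        static void Main()
        {
            Console.WriteLine(Reveal("line one\r\nline two\tend"));
            // prints: line one<13><10>line two<9>end
        }
    }

    Once the offending code points are known (13, 10, 9, ...), the same idea can be applied on the SQL side with REPLACE(col, CHAR(13), '') and friends during the conversion step.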

    Read the article
