Search Results

Search found 5183 results on 208 pages for 'programmer skills'.


  • The Changing Face of PASS

    - by Bill Graziano
    I’m starting my sixth year on the PASS Board. I served two years as the Program Director, two years as the Vice-President of Marketing and I’m starting my second year as the Executive Vice-President of Finance. There’s a pretty good chance that if PASS has done something you don’t like, or is doing something you don’t like, I’m involved in one way or another.

    Andy Leonard asked in a comment on his blog if the Board had ever reversed itself based on community input. He asserted that it hadn’t. I disagree. I’m not going to try to list all the changes we make inside portfolios based on feedback from and meetings with the community. I’m going to focus on major governance issues since I was elected to the Board.

    Management Company

    The first big change was our management company. Our old management company had a standard approach to running a non-profit. It worked well when PASS was launched. Having a ready-made structure and process to run the organization enabled the organization to grow quickly. As time went on we were limited in some of the things we wanted to do. The more involved you were with PASS, the more you saw these limitations. Key volunteers were regularly providing feedback that they wanted certain changes that were difficult for us to accomplish. The Board at that time wanted changes that were difficult or impossible to accomplish under that structure.

    This was not a simple change. Imagine a $2.5 million company letting all its employees go on a Friday and starting with a new staff on Monday. We also had a very narrow window to accomplish that so that we wouldn’t affect the Summit – our only source of revenue. We spent the year after the change rebuilding processes and putting on the Summit in Denver. That’s a concrete example of a huge change that PASS made to better serve its members. And it was a change that many in the community were telling us we needed to make.

    Financials

    We heard regularly from our members that they wanted our financials posted. Today on our web site you can find audited financials going back to 2004. We publish our budget at the start of each year. If you ask a question about the financials on the PASS site I do my best to answer it. I’m also trying to do a better job answering financial questions posted in other locations. (And yes, I know I owe a few of you some blog posts.) That’s another concrete example of a change that our members asked for and that the Board agreed was a good decision.

    Minutes

    When I started on the Board the meeting minutes were very limited. The minutes from a two-day Board meeting might fit on one page. I think we did the bare minimum we were legally required to do. Today Board meeting minutes run from 5 to 12 pages and go into incredible detail on what we talk about. There are certain topics that are NDA, but where possible we list the topic we discussed and note that the actual discussion was under NDA. We also publish the agenda of Board meetings ahead of time. This is another specific example where input from the community influenced the decision. It was certainly easier to have limited minutes but I think the extra effort helps our members understand what’s going on.

    Board Q&A

    At the 2009 Summit the Board held its first public Q&A with our members. We’d always been available individually to answer questions. There’s a benefit to getting us all in one room and asking the really hard questions to watch us squirm. We learn what questions we don’t have good answers for.
    We get to see how many people in the crowd look interested in the various questions and answers. I don’t recall the genesis of how this came about. I’m fairly certain there was some community pressure though.

    Board Votes

    Until last November, the Board only reported the vote totals and not how individual Board members voted. That was one of the topics at a great lunch I had with Tim Mitchell and Kendal van Dyke at the Summit. That was also the topic of the first question asked at the Board Q&A by Kendal. Kendal expressed his opposition to anonymous votes clearly and passionately and without trying to paint anyone into a corner. Less than 24 hours later the PASS Board voted to make individual votes public unless the topic was under NDA. That’s another area where the Board decided to change based on feedback from our members.

    Summit Location

    While this isn’t actually a governance issue, it is one of the more public decisions we make and it has taken some public criticism. There is a significant portion of our members that want the Summit near them. There is a significant portion of our members that like the Summit in Seattle. There is a significant portion of our members that think it should move around the country. I was one that felt strongly that there were significant, tangible benefits to our attendees to being in Seattle every year. I’m also one that has been swayed by some very compelling arguments that we need to have at least one Summit outside Seattle and then revisit the decision. I can’t tell you how the Board will vote but I know the opinion of our members weighs heavily on the decision.

    Elections

    And that brings us to the grand-daddy of all governance issues. My thesis for this blog post is that the PASS Board has implemented policy changes in response to member feedback. It isn’t to defend or criticize our election process. It’s just to say that it has been undergoing continuous change since I’ve been on the Board.

    I ran for the Board in the fall of 2005. I don’t know much about what happened before then. I was actively volunteering for PASS for four years prior to that as a chapter leader and on the program committee. I don’t recall any complaints about elections but that doesn’t mean they didn’t occur. The questions from the Nominating Committee (NomCom) were trivial and the selection process rudimentary (for example, “Tell us about your accomplishments”). I don’t even remember who I ran against or how many other people ran.

    I ran for the VP of Marketing in the fall of 2007. I don’t recall any significant changes the Board made in the election process for that election. I think a lot of the changes in 2007 came from us asking the management company to work on the election process. I was expecting a similar set of puff-ball questions from my previous election. Boy, was I in for a shock. The NomCom had found a much better set of questions and really made the interview portion difficult. The questions were much more behavioral in nature. I’d already written about my vision for PASS and my goals. They wanted to know how I handled adversity, how I handled criticism, how I handled conflict, how I handled troublesome volunteers, how I motivated people and how I responded to motivation. And many, many other things. They grilled me for over an hour. I’ve done a fair bit of technical sales in my time. I feel I speak well under pressure addressing pointed questions. This interview intentionally put me under pressure.
    In addition to wanting to know about my interpersonal skills, my work experience, my volunteer experience and my supervisory experience, they wanted to see how I’d do under pressure. They wanted to see who would respond under pressure and who wouldn’t. It was a bit of a shock. That was the first big change I remember in the election process. I know there were other improvements around the process but none of them stick in my mind quite like the unexpected hour-long grilling.

    The next big change I remember was after the 2009 elections. Andy Warren was unhappy with the election process and wanted to make some changes. He worked with Hannes at HQ and they came up with a better set of processes. I think Andy moved PASS in the right direction. Nonetheless, after the 2010 election even more people were very publicly clamoring for changes to our election process. In August of 2010 we had a choice to make. There were numerous bloggers criticizing the Board and our upcoming election. The easy change would be to announce that we were changing the process in a way that would satisfy our critics. I believe that a knee-jerk response to criticism is seldom correct.

    Instead the Board spent August and September and October and November listening to the community. I visited two SQLSaturdays and asked questions of everyone I could. I attended chapter meetings and asked questions of as many people as they’d let me. At Summit I made it a point to introduce myself to strangers and ask them about the election. At every breakfast I’d sit down at a table full of strangers and ask about the election. I’m happy to say that I left most tables arguing about the election. Most days I managed to get 2 or 3 breakfasts in.

    I spent less time talking to people that had already written about the election. They were already expressing their opinion. I wanted to talk to people that hadn’t spoken up. I wanted to know what the silent majority thought. The Board all attended the Q&A session where our members expressed their concerns about a variety of issues including the election.

    The PASS Board also chose to create the Election Review Committee. We wanted people from the community that had been involved with PASS to look at our election process with fresh eyes while listening to what the community had to say and give us some advice on how we could improve the process. I’m a part of this as is Andy Warren. None of the other members are on the Board. I’ve sat in numerous calls and interviews with this group and attended an open meeting at the Summit. We asked anyone that wanted to discuss the election to come speak with us. The ERC held an open meeting at the Summit and invited anyone to attend. There are forums on the ERC web site where we’ve invited people to participate. The ERC has reached out to key people involved in recent elections.

    The years that I haven’t mentioned also saw minor improvements in the election process. Off the top of my head I don’t recall what exact changes were made each year. Specifically since the 2010 election we’ve gone out of our way to seek input from the community about the process. I’m not sure what more we could have done to invite feedback from the community. I think to say that we haven’t “fixed” the election process isn’t a fair criticism at this time. We haven’t rushed any changes through the process.
    If you don’t see any changes in our election process in July or August then I think it’s fair to criticize us for ignoring the community, or to ask for an explanation of what we’ve done.

    In Summary

    Andy’s main point was that the PASS Board hasn’t changed in response to our members’ wishes. I think I’ve shown that time and time again the PASS Board has changed in response to what our members want. There are only two outstanding issues: Summit location and elections. The 2013 Summit location hasn’t been decided yet. Our work on the elections is also in progress. And at every step in the election review we’ve gone out of our way to listen to the community and incorporate their feedback on the process.

    I also hope I’m not encouraging everyone that wants some change in the organization to organize a “blog rush” against the Board. We take public suggestions very seriously, but we also take the time to evaluate those suggestions, learn what the rest of our members think and make a measured decision.

    Read the article

  • CodePlex Daily Summary for Saturday, November 17, 2012

    CodePlex Daily Summary for Saturday, November 17, 2012Popular ReleasesPaint.NET PSD Plugin: 2.2.0: Changes: Layer group visibility is now applied to all layers within the group. This greatly improves the visual fidelity of complex PSD files that have hidden layer groups. Layer group names are prefixed so that users can get an indication of the layer group hierarchy. (Paint.NET has a flat list of layers, so the hierarchy is flattened out on load.) The progress bar now reports status when saving PSD files, instead of showing an indeterminate rolling bar. Performance improvement of 1...Water Entity for SunBurn: Sunburn Water Entity For 2.0.1.8 (Deffered Only): Sunburn water entity for Sunburn 2.0.1.8 for deffered rendering only, forward water is not working yet. You need to download water normal maps, from Sunburn Reflection/Refraction example from Here.CRM 2011 Visual Ribbon Editor: Visual Ribbon Editor (1.3.1116.7): [IMPROVED] Detailed error message descriptions for FaultException [FIX] Fixed bug in rule CrmOfflineAccessStateRule which had incorrect State attribute name [FIX] Fixed bug in rule EntityPropertyRule which was missing PropertyValue attribute [FIX] Current connection information was not displayed in status bar while refreshing list of entitiesSuper Metroid Randomizer: Super Metroid Randomizer v5: v5 -Added command line functionality for automation purposes. -Implented Krankdud's change to randomize the Etecoon's item. NOTE: this version will not accept seeds from a previous version. The seed format has changed by necessity. v4 -Started putting version numbers at the top of the form. -Added a warning when suitless Maridia is required in a parsed seed. v3 -Changed seed to only generate filename-legal characters. Using old seeds will still work exactly the same. -Files can now be saved...Caliburn Micro: WPF, Silverlight, WP7 and WinRT/Metro made easy.: Caliburn.Micro v1.4: Changes This version includes many bug fixes across all platforms, improvements to nuget support and...the biggest news of all...full support for both WinRT and WP8. Download Contents Debug and Release Assemblies Samples Readme.txt License.txt Packages Available on Nuget Caliburn.Micro – The full framework compiled into an assembly. Caliburn.Micro.Start - Includes Caliburn.Micro plus a starting bootstrapper, view model and view. Caliburn.Micro.Container – The Caliburn.Micro invers...DirectX Tool Kit: November 15, 2012: November 15, 2012 Added support for WIC2 when available on Windows 8 and Windows 7 with KB 2670838 Cleaned up warning level 4 warningsDotNetNuke® Community Edition CMS: 06.02.05: Major Highlights Updated the system so that it supports nested folders in the App_Code folder Updated the Global Error Handling so that when errors within the global.asax handler happen, they are caught and shown in a page displaying the original HTTP error code Fixed issue that stopped users from specifying Link URLs that open on a new window Security FixesFixed issue in the Member Directory module that could show members to non authenticated users Fixed issue in the Lists modul...xUnit.net Contrib: xunitcontrib-resharper 0.7 (RS 7.1, 6.1.1): xunitcontrib release 0.6.1 (ReSharper runner) This release provides a test runner plugin for Resharper 7.1 RTM and 6.1.1, targetting all versions of xUnit.net. (See the xUnit.net project to download xUnit.net itself.) This release drops 7.0 support and targets the latest revisions of the last two major versions of ReSharper (namely 7.0 and 6.1.1). 
Copies of the plugin that support previous verions of ReSharper can be downloaded from this release. Also note that all builds work against ALL ...MVC Bootstrap: MVC Boostrap 0.5.6: A small demo site, based on the default ASP.NET MVC 3 project template, showing off some of the features of MVC Bootstrap. Added features to the Membership provider, a "lock out" feature to help fight brute force hacking of accounts. After a set number of log in attempts, the account is locked for a set time. If you download and use this project, please give some feedback, good or bad!Home Access Plus+: v8.4: This release only contains fixes for the 97576 release, you can download the v8.3 release files which aren't in this release from 97576 Changes: Fixed: Setup.aspx wrong jquery reference Fixed: Issue with loading the user's photo Changed: The JSON Urls to use a number of a date rather than a string Added: Code to hopefully, finally, fix the AD Browser not working some times File Changes: ~/bin/hap.ad.dll ~/bin/hap.web.dll ~/bin/hap.web.configuration.dll ~/bin/hap.web.livetiles.dl...OnTopReplica: Release 3.4: Update to the 3 version with major fixes and improvements. Compatible with Windows 8. Now runs (and requires) .NET Framework v.4.0. Added relative mode for region selection (allows the user to select regions as margins from the borders of the thumbnail, useful for windows which have a variable size but fixed size controls, like video players). Improved window seeking when restoring cloned thumbnail or cloning a window by title or by class. Improved settings persistence. Improved co...DotSpatial: DotSpatial 1.4: This is a Minor Release. See the changes in the issue tracker. Minimal -- includes DotSpatial core and essential extensions Extended -- includes debugging symbols and additional extensions Tutorials are available. Just want to run the software? End user (non-programmer) version available branded as MapWindow Want to add your own feature? Develop a plugin, using the template and contribute to the extension feed (you can also write extensions that you distribute in other ways). Components ...WinRT XAML Toolkit: WinRT XAML Toolkit - 1.3.5: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For compiled version use NuGet. You can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit Features Attachable Behaviors AwaitableUI extensions Controls Converters Debugging helpers Extension methods Imaging helpers IO helpers VisualTree helpers Samples Recent changes Docum...AcDown?????: AcDown????? v4.3: ??●AcDown??????????、??、??、???????。????,????,?????????????????????????。???????????Acfun、????(Bilibili)、??、??、YouTube、??、???、??????、SF????、????????????。 ●??????AcPlay?????,??????、????????????????。 ● AcDown??????????????????,????????????????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ????32??64? Windows XP/Vista/7/8 ???? 32??64? ???Linux ????(1)????????Windows XP???,????????.NET Framework 2.0???(x86),?????"?????????"??? (2)???????????Linux???,????????Mono?? ??2...????: ???? 1.0: ????Unicode IVS Add-in for Microsoft Office: Unicode IVS Add-in for Microsoft Office: Unicode IVS Add-in for Microsoft Office ??? 
?????、Unicode IVS?????????????????Unicode IVS???????????????。??、??????????????、?????????????????????????????。Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.74: fix for issue #18836 - sometimes throws null-reference errors in ActivationObject.AnalyzeScope method. add back the Context object's 8-parameter constructor, since someone has code that's using it. throw a low-pri warning if an expression statement is == or ===; warn that the developer may have meant an assignment (=). if window.XXXX or window"XXXX" is encountered, add XXXX (as long as it's a valid JavaScript identifier) to the known globals so subsequent references to XXXX won't throw ...???????: Monitor 2012-11-11: This is the first releasehttpclient?????????: httpclient??????? 1.0: httpclient??????? (1)?????????? (2)????????? (3)??2012-11-06??,???????。VidCoder: 1.4.5 Beta: Removed the old Advanced user interface and moved x264 preset/profile/tune there instead. The functionality is still available through editing the options string. Added ability to specify the H.264 level. Added ability to choose VidCoder's interface language. If you are interested in translating, we can get VidCoder in your language! Updated WPF text rendering to use the better Display mode. Updated HandBrake core to SVN 5045. Removed logic that forced the .m4v extension in certain ...New Projects40Fingers Page Language module for DotNetNuke: DotNetNuke module that allows you to overrule the language a specific page, without using any other localization solutions. Brought to you by 40FINGERSAnonymous Interfaces (by Stephen Cleary): A fluent API for dynamically creating an anonymous type implementing a specific interface.Asynk: Asynk is a framework/application that allows existing applications to easily be extended with an offloaded asynchronous worker layer. Asynk is developed using C#.BizTalkWCFOperationPromoteEncoder: If you seen the BizTalk error: " An action mapping was defined but BTS.Operation was not found in the Message context ", When you use the BizTalk WCF-Customer, bLaugh Plugin for Windows Live Writer: The bLaugh Plugin for Windows Live Writer provides a quick and easy way to find and insert bLaugh (www.blaugh.com) comics into a blog post.CaptureWebPage: Code Between The Lines @ http://www.codebetweenthelines.com Capture an Entire Web Page in a C# Console Application Sample application that fixes most problems.CF-Soft: this is my workspace to study C# programming.Cloud Backup Scheduler: Desktop utility to auto schedule online backups with popular cloud services. Cloud Giraffe: Cloud Giraffe is a library which provides easy collection and analysis of Windows Azure Diagnostics data.dbscript: dbscript is a simple scripting language for DBAs, devs, and others who have need to routinely deliver the results of SQL queries in Excel spreadsheet form.Drill: Drill is the Dependency Resolution and Instance Location Layer, an abstraction to DI containers, service locators and the like facilitating loose coupling.Drzewa decyzyjne WPF: Praca inzE-Commerce Gado: E commerce voltado para o mercado bovinoELearningTutorial: A list of tutorialsElmas Kitap: Kitapliginizi düzenleyebileceginiz bir program.Epi Info iOS Companion: iPad and iPhone companion for Epi Info 7.FreeBlokus: A simple version of Blokus Duo.g1: test GGCStatus: Global Gift Card by We Solve IT. 
Server Status ApplicationGiveGraph: Social Managed Resources Distribution SystemGlobal String Formatter: The Global String Formatter library allows developers to deal with conditional string formatting in an elegant fashion. Developers specify a predicate and a corresponding string output function for each case of the formatting. The library plays well with DI frameworks.HTWKAidStation: The HTWKAidStation shows schedules and the menu of local mensae in Leipzig, Germany.JsViewEngine - a ASP.NET View Engine using JsRender on both client and server: An ASP.NET View Engine that specializes in sharing partial views between client and server.Kexp.ShoutcastRunner: ShoutcastRunner is a quick and simple Windows service that runs the SHOUTcast DNAS 1.x mp3 server and provides some extra features. MDrive - Controlling Precision Electric Motors: This project demonstrates how to control MDrive(r) precision electric motors from Schneider Electric.MyNet Project: MyNet is an undergraduate project developed in the context of the Cloud Data Management cours at Grenoble INP.Passwords Thief: Sometimes you need to see what is behind asteriks in password edit box. It will help you to resolve this problem. Pet Shop Web: Projeto para gerenciamento de Pet Shop Desenvolvido em ASP.NET com Framework 4.0primeiros passos: aRCVersion: A smart tool to modify version information in RC file.recycling20: Recycling 2.0replaceSID: replaceSID replaces an SID in a SDDL fileshmapcha: Cool toolSMC (Social Media Connector): Social Media APIs are developed to provide Join-In Game providers to access Social Media Portal. Textline Processor: Textline processor is a utility to execute C# codes dynamically on each line of texts, and get output to replace lines in the text. treadmill project: Treadmill ProjectTriathlon Checklist: Triathlon Checklist is a Windows Phone application for triathletes. Download it for free: http://bit.ly/TriathlonChecklistwarhamer40kListBuilder: projet personnel de geston de liste d'armée pour le jeux warhammer 40.000.Wonder: WonderWP7GBAEmulator: A C# GBA Emulator For Windows Phone 7WPF Message Box: WPF Message Box is a simple and free message box for WPF using MVVM pattern. Image, buttons, message, and caption can be set.????: ??????·???????????????

    Read the article

  • PASS: Bylaw Change 2013

    - by Bill Graziano
    PASS launched a Global Growth Initiative in the Summer of 2011 with the appointment of three international Board advisors. Since then we’ve thought and talked extensively about how we make PASS more relevant to our members outside the US and Canada. We’ve collected much of that discussion in our Global Growth site. You can find vision documents, plans, governance proposals, feedback sites, and transcripts of Twitter chats and town hall meetings. We also addressed these plans at the Board Q&A during the 2012 Summit.

    One of the biggest changes coming out of this process is around how we elect Board members. And that requires a change to the bylaws. We published the proposed bylaw changes as a red-lined document so you can clearly see the changes. Our goal in these bylaw changes was to address the changes required by the global growth initiatives, conduct a legal review of the document and address other minor issues in the document. There are numerous small wording changes throughout the document. For example, we replaced every reference to “The Corporation” with the word “PASS” so it now reads “PASS is organized…”.

    Board Composition

    The biggest change in these bylaw changes is how the Board is composed and elected. This discussion starts in section VI.2. This section now says that some elected directors will come from geographic regions. I think this is the best way to make sure we give all of our members a voice in the leadership of the organization. The key parts of this section are:

    The remaining Directors (i.e. the non-Officer Directors and non-Vendor Appointed Directors) shall be elected by the voting membership (“Elected Directors”). Elected Directors shall include representatives of defined PASS regions (“Regions”) as set forth below (“Regional Directors”) and at minimum one (1) additional Director-at-Large whose selection is not limited by region. Regional Directors shall include, but are not limited to, two (2) seats for the Region covering Canada and the United States of America. Additional Regions for the purpose of electing additional Regional Directors and additional Director-at-Large seats for the purpose of expanding the Board shall be defined by a majority vote of the current Board of Directors and must be established prior to the public call for nominations in the general election. Previously defined Regions and seats approved by the Board of Directors shall remain in effect and can only be modified by a 2/3 majority vote by the then current Board of Directors.

    Currently PASS has six At-Large Directors elected by the members. These changes allow for a Regional Director position that is elected by the members but must come from a particular region. It also stipulates that there must always be at least one Director-at-Large who can come from any region.

    We also understand that PASS is currently a very US-centric organization. Our Summit is held in America, roughly half our chapters are in the US and Canada and most of the Board members over the last ten years have come from America. We wanted to reflect that by making sure that our US and Canadian volunteers would continue to play a significant role by ensuring that two Regional seats are reserved specifically for Canada and the US.

    Other than that, the bylaws don’t create any specific regional seats. These rules allow us to create Regional Director seats but don’t require it.
    We haven’t fully discussed what the criteria will be in order for a region to have a seat designated for it or how many regions there will be. In our discussions we’ve broadly discussed regions for:

    - United States and Canada
    - Europe, Middle East, and Africa (EMEA)
    - Australia, New Zealand and Asia (also known as Asia Pacific or APAC)
    - Mexico, South America, and Central America (LATAM)

    As you can see, our thinking is that there will be a few large regions. I’ve also considered a non-North America region that we can gradually split into the regions above as our membership grows in those areas. The regions will be defined by a policy document that will be published prior to the elections. I’m hoping that over the next year we can begin to publish more of what we do as Board-approved policy documents.

    While the bylaws only require a single non-region specific At-Large Director, I would expect we would always have two. That way we can have one in each election. I think it’s important that we always have one seat open that anyone who is eligible to run for the Board can contest. The Board is required to have any regions defined prior to the start of the election process.

    Board Elections – Regional Seats

    We spent a lot of time discussing how the elections would work for these Regional Director seats. Ultimately we decided that the simplest solution is that every PASS member should vote for every open seat. Section VIII.3 reads:

    Candidates who are eligible (i.e. eligible to serve in such capacity subject to the criteria set forth herein or adopted by the Board of Directors) shall be designated to fill open Board seats in the following order of priority on the basis of total votes received: (i) full term Regional Director seats, (ii) full term Director-at-Large seats, (iii) not full term (vacated) Regional Director seats, (iv) not full term (vacated) Director-at-Large seats. For the purposes of clarity, because of eligibility requirements, it is contemplated that the candidates designated to the open Board seats may not receive more votes than certain other candidates who are not selected to the Board.

    We debated whether to have multiple ballots or one single ballot. Multiple ballot elections get complicated quickly. Let’s say we have a ballot for US/Canada and one for Region 2. After that we’d need a mechanism to merge those two together and come up with the winner of the at-large seat, or have another election for the at-large position. We think the best way to do this is a single ballot, putting the highest vote getters into the most restrictive seats. Let’s look at an example. There are seats open for Region 1, Region 2 and at-large. The election results are as follows:

    - Candidate A (eligible for Region 1) – 550 votes
    - Candidate B (eligible for Region 1) – 525 votes
    - Candidate C (eligible for Region 1) – 475 votes
    - Candidate D (eligible for Region 2) – 125 votes
    - Candidate E (eligible for Region 2) – 75 votes

    In this case, Candidate A is the winner for Region 1 and is assigned that seat. Candidate D is the winner for Region 2 and is assigned that seat. The at-large seat is filled by the highest remaining vote getter, which is Candidate B. The key point to understand is that we may have a situation where a person with a lower vote total is elected to a regional seat and a person with a higher vote total is excluded. This will be true whether we had multiple ballots or a single ballot.

    Board Elections – Vacant Seats

    The other change to the election process is for vacant Board seats.
    The actual changes are sprinkled throughout the document. Previously we didn’t have a mechanism that allowed for an election of a Board seat that we knew would be vacant in the future. The most common case is when a Board member moves to an Officer role in the middle of their term. One of the key changes is to allow the number of votes members have to match the number of open seats. This allows each voter to express their preference on all open seats. This only applies when we know about the opening prior to the call for nominations. This all means that if a seat will be open at the start of the next Board term, and we know about it prior to the call for nominations, we can include that seat in the elections. Ultimately, the aim is to have PASS members decide who sits on the Board in as many situations as possible.

    We discussed the option of changing the bylaws to just take the next highest vote-getter in all other cases. I think that’s wrong for the following reasons:

    - All voters aren’t able to express an opinion on all candidates. If there are five people running for three seats, you can only vote for three. You have no way to express your preference between #4 and #5.
    - Different candidates may have different information about the number of seats available. A person may learn that a Board member plans to resign at the end of the year prior to that information being made public. They may understand that the top four vote getters will end up on the Board while the rest of the members believe there are only three openings. This may affect someone’s decision to run. I don’t think this creates a transparent, fair election.
    - Board members may use their knowledge of the election results to decide whether to remain on the Board or not. Admittedly this one is unlikely but I don’t want to create a situation where this accusation can be leveled.

    I think the majority of vacancies in the future will be handled through elections. The bylaw section quoted above also indicates that partial term vacancies will be filled after the full term seats are filled.

    Removing Directors

    Section VI.7 on removing directors has always had a clause that allowed members to remove an elected director. We also had a clause that allowed appointed directors to be removed. We added a clause that allows the Board to remove for cause any director with a 2/3 majority vote. The updated text reads:

    Any Director may be removed for cause by a 2/3 majority vote of the Board of Directors whenever in its judgment the best interests of PASS would be served thereby. Notwithstanding the foregoing, the authority of any Director to act as in an official capacity as a Director or Officer of PASS may be suspended by the Board of Directors for cause. Cause for suspension or removal of a Director shall include but not be limited to failure to meet any Board-approved performance expectations or the presence of a reason for suspension or dismissal as listed in Addendum B of these Bylaws.

    The first paragraph is updated and the second and third are unchanged (except cleaning up language). If you scroll down and look at Addendum B of these bylaws you find the following:

    Cause for suspension or dismissal of a member of the Board of Directors may include:

    - Inability to attend Board meetings on a regular basis.
    - Inability or unwillingness to act in a capacity designated by the Board of Directors.
    - Failure to fulfill the responsibilities of the office.
    - Inability to represent the Region elected to represent.
    - Failure to act in a manner consistent with PASS's Bylaws and/or policies.
    - Misrepresentation of responsibility and/or authority.
    - Misrepresentation of PASS.
    - Unresolved conflict of interests with Board responsibilities.
    - Breach of confidentiality.

    The item about the inability to represent the Region you were elected to represent is what we added to the bylaws in this revision. We also added a clause to section VII.3 allowing the Board to remove an officer. That clause is much less restrictive. It doesn’t require cause and only requires a simple majority.

    The Board of Directors may remove any Officer whenever in their judgment the best interests of PASS shall be served by such removal.

    Other

    There are numerous other small changes throughout the document.

    - Proxy voting. The rules around how members and Board members can vote by proxy are spelled out specifically in Illinois law. PASS is an Illinois corporation and is subject to Illinois laws. We changed section IV.5 to come into compliance with those laws. Specifically, this says you can only vote through a proxy if you have a written proxy through your authorized attorney.
    - English language proficiency. As we increase our global footprint we come across more members that aren’t native English speakers. The business of PASS is conducted in English and it’s important that our Board members speak English. If we get big enough to afford translators, we may be able to relax this, but right now we need English language skills for effective Board members.
    - Committees. The language around committees in section IX is old and dated. Our lawyers advised us to clean it up. This section specifically applies to any committees that the Board may form outside of portfolios. We removed the term limits, quorum and vacancies clause. We don’t currently have any committees that this would apply to. The Nominating Committee is covered elsewhere in the bylaws.
    - Electronic Votes. The change allows the Board to vote via email, but the results must be unanimous. This is to conform with Illinois state law.
    - Immediate Past President. There was no mechanism to fill the IPP role if an outgoing President chose not to participate. We changed section VII.8 to allow the Board to invite any previous President to fill the role by majority vote.
    - Nominations Committee. We’ve opened the language to allow for the transparent election of the Nominations Committee as outlined by the 2011 Election Review Committee.
    - Revocation of Charters. The language surrounding the revocation of charters for local groups was flagged by the lawyers. We have allowed for the local user group to make all necessary payments before considering the return of items to PASS, if required.
    - Bylaw notification. We’ve spent countless meetings working on these bylaws with the intent to not open them again any time in the near future. Should the bylaws be opened again, we have included a clause ensuring that the PASS membership is involved. I’m proud that the Board has remained committed to transparency and accountability to members. This clause will require that same level of commitment in the future even when all the current Board members have rolled off.

    I think that covers everything. I’d encourage you to look through the red-line document and see the changes. It’s helpful to look at the language that’s being removed and the language that’s being added. I’m happy to answer any questions here or you can email them to [email protected].

    Read the article

  • CodePlex Daily Summary for Sunday, May 20, 2012

    CodePlex Daily Summary for Sunday, May 20, 2012Popular ReleasesExtAspNet: ExtAspNet v3.1.6: ExtAspNet - ?? ExtJS ??? ASP.NET 2.0 ???,????? AJAX ?????????? ExtAspNet ????? ExtJS ??? ASP.NET 2.0 ???,????? AJAX ??????????。 ExtAspNet ??????? JavaScript,?? CSS,?? UpdatePanel,?? ViewState,?? WebServices ???????。 ??????: IE 7.0, Firefox 3.6, Chrome 3.0, Opera 10.5, Safari 3.0+ ????:Apache License 2.0 (Apache) ??:http://bbs.extasp.net/ ??:http://demo.extasp.net/ ??:http://doc.extasp.net/ ??:http://extaspnet.codeplex.com/ ??:http://sanshi.cnblogs.com/ ????: +2012-05-20 v3.1.6 -??RowD...totalem: version 2012.05.20.1: Beta version added function to create new empty file added function to create new file from clipboard content (save content of clipboard to file) added feature to direct view file from FTP server added feature to direct edit file from FTP server added feature to direct encrypt and copy files to FTP server added feature to direct copy and decrypt files from FTP servergGrid - Editable jQuery Grid: Initial Version release: The js file is the initial version of gGrid. Below are some of the limitations of the gGrid plugin The grid requires to return a MVC partial view to render the updated grid on screen.WatchersNET CKEditor™ Provider for DotNetNuke®: CKEditor Provider 1.14.05: Whats New Added New Editor Skin "BootstrapCK-Skin" Added New Editor Skin "Slick" Added Dnn Pages Drop Down to the Link Dialog (to quickly link to a portal tab) changes Fixed Issue #6956 Localization issue with some languages Fixed Issue #6930 Folder Tree view was not working in some cases Changed the user folder from User name to User id User Folder is now used when using Upload Function and User Folder is enabled File-Browser Fixed Resizer Preview Image Optimized the oEmbed Pl...PHPExcel: PHPExcel 1.7.7: See Change Log for details of the new features and bugfixes included in this release. BREAKING CHANGE! From PHPExcel 1.7.8 onwards, the 3rd-party tcPDF library will no longer be bundled with PHPExcel for rendering PDF files through the PDF Writer. The PDF Writer is being rewritten to allow a choice of 3rd party PDF libraries (tcPDF, mPDF, and domPDF initially), none of which will be bundled with PHPExcel, but which can be downloaded seperately from the appropriate sites.GhostBuster: GhostBuster Setup (91520): Added WMI based RestorePoint support Removed test code from program.cs Improved counting. Changed color of ghosted but unfiltered devices. Changed HwEntries into an ObservableCollection. Added Properties Form. Added Properties MenuItem to Context Menu. Added Hide Unfiltered Devices to Context Menu. If you like this tool, leave me a note, rate this project or write a review or Donate to Ghostbuster. Donate to GhostbusterC#??????EXCEL??、??、????????:DataPie(??MSSQL 2008、ORACLE、ACCESS 2007): DataPie_V3.2: V3.2, 2012?5?19? ????ORACLE??????。AvalonDock: AvalonDock 2.0.0795: Welcome to the Beta release of AvalonDock 2.0 After 4 months of hard work I'm ready to upload the beta version of AvalonDock 2.0. This new version boosts a lot of new features and now is stable enough to be deployed in production scenarios. For this reason I encourage everyone is using AD 1.3 or earlier to upgrade soon to this new version. The final version is scheduled for the end of June. What is included in Beta: 1) Stability! 
thanks to all users contribution I’ve corrected a lot of issues...myCollections: Version 2.1.0.0: New in this version : Improved UI New Metro Skin Improved Performance Added Proxy Settings New Music and Books Artist detail Lot of Bug FixingAspxCommerce: AspxCommerce1.1: AspxCommerce - 'Flexible and easy eCommerce platform' offers a complete e-Commerce solution that allows you to build and run your fully functional online store in minutes. You can create your storefront; manage the products through categories and subcategories, accept payments through credit cards and ship the ordered products to the customers. We have everything set up for you, so that you can only focus on building your own online store. Note: To login as a superuser, the username and pass...SiteMap Editor for Microsoft Dynamics CRM 2011: SiteMap Editor (1.1.1616.403): BUG FIX Hide save button when Titles or Descriptions element is selectedMapWindow 6 Desktop GIS: MapWindow 6.1.2: Looking for a .Net GIS Map Application?MapWindow 6 Desktop GIS is an open source desktop GIS for Microsoft Windows that is built upon the DotSpatial Library. This release requires .Net 4 (Client Profile). Are you a software developer?Instead of downloading MapWindow for development purposes, get started with with the DotSpatial template. The extensions you create from the template can be loaded in MapWindow.DotSpatial: DotSpatial 1.2: This is a Minor Release. See the changes in the issue tracker. Minimal -- includes DotSpatial core and essential extensions Extended -- includes debugging symbols and additional extensions Tutorials are available. Just want to run the software? End user (non-programmer) version available branded as MapWindow Want to add your own feature? Develop a plugin, using the template and contribute to the extension feed (you can also write extensions that you distribute in other ways). Components ...Mugen Injection: Mugen Injection 2.2.1 (WinRT supported): Added ManagedScopeLifecycle. Increase performance. Added support for resolve 'params'.Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.52: Make preprocessor comment-statements nestable; add the ///#IFNDEF statement. (Discussion #355785) Don't throw an error for old-school JScript event handlers, and don't rename them if they aren't global functions.DotNetNuke® Events: 06.00.00: This is a serious release of Events. DNN 6 form pattern - We have take the full route towards DNN6: most notably the incorporation of the DNN6 form pattern with streamlined UX/UI. We have also tried to change all formatting to a div based structure. A daunting task, since the Events module contains a lot of forms. Roger has done a splendid job by going through all the forms in great detail, replacing all table style layouts into the new DNN6 div class="dnnForm XXX" type of layout with chang...LogicCircuit: LogicCircuit 2.12.5.15: Logic Circuit - is educational software for designing and simulating logic circuits. Intuitive graphical user interface, allows you to create unrestricted circuit hierarchy with multi bit buses, debug circuits behavior with oscilloscope, and navigate running circuits hierarchy. Changes of this versionThis release is fixing one but nasty bug. Two functions XOR and XNOR when used with 3 or more inputs were incorrectly evaluating their results. If you have a circuit that is using these functions...LINQ to Twitter: LINQ to Twitter Beta v2.0.25: Supports .NET 3.5, .NET 4.0, Silverlight 4.0, Windows Phone 7.1, Client Profile, and Windows 8. 100% Twitter API coverage. 
Also available via NuGet! Follow @JoeMayo.BlogEngine.NET: BlogEngine.NET 2.6: Get DotNetBlogEngine for 3 Months Free! Click Here for More Info BlogEngine.NET Hosting - 3 months free! Cheap ASP.NET Hosting - $4.95/Month - Click Here!! Click Here for More Info Cheap ASP.NET Hosting - $4.95/Month - Click Here! If you want to set up and start using BlogEngine.NET right away, you should download the Web project. If you want to extend or modify BlogEngine.NET, you should download the source code. If you are upgrading from a previous version of BlogEngine.NET, please take...BlackJumboDog: Ver5.6.2: 2012.05.07 Ver5.6.2 (1) Web???????、????????·????????? (2) Web???????、?????????? COMSPEC PATHEXT WINDIR SERVERADDR SERVERPORT DOCUMENTROOT SERVERADMIN REMOTE_PORT HTTPACCEPTCHRSET HTTPACCEPTLANGUAGE HTTPACCEPTEXCODINGNew ProjectsAkumu Island: Akumu Island is a game being developed by Jared Thomson. At this time, things are still fairly under wraps. The source code is still available though.BasicSocialNetworkingSite: Basic/Simple Social Networking Web Site using C# - ASP.NETCasse Brique: Projet Casse-brique.Cdts.iOS: Cdts iOSCluster2: To be Published...CrowdMOS: CrowdMOS is a set of scripts and tools for performing evaluations of the subjective quality of media such as audio or images using crowdsourcing via Amazon Mechanical Turk. This project is designed to enable low cost, efficient assessments of signal processing algorithms, e.g., compression, denoising, or enhancement, using standard tests such as MOS (Mean Opinion Score) or MUSHRA.Data Frame Loader: A simple C# API for loading tabular dataframes into Microsoft SQL Server database using only a small number of tables to represent any kind of dataframe.Dynamic Segmentation Utility SOE Rest: Dynamic Segmentation Utility SOE Rest for ArcGIS Server 10 (msd)field2012: This is private projectFix soft HyperAero Form: Fix soft Aero Form (HyperAero Edition) works on both xp and Win7 and supports Customizable Animation Effects (On showing and closing form),Gradient Support (Multicolor Gradient Background,Gradient Editor Control),Power Functions (Shutdown,etc with Timer support) ,Aero Glass Support (Extend Margins,Aero Blur,Aero Glow text,Basic Theme Support),Aero Properties (IsWindowsAeroEnabled,AeroColor and opacity),Aero Events,Unlock Hidden Properties (EnableCloseButton, CaptionRenderMode, ActivateOn...Gamer: A program intended to be a PC gamers' companion app by providing features such as: * Customizable system tray menu that lists (favourite/frequent/all) games (as shown in the Windows Games Explorer) for quick launching (and clean desktops). * Ability to edit listings in the Windows Game Explorer * Once these goals are met other handy features can be implemented to increase the value of this app. This program should compliment existing programs used by gamers rather than compete w...gGrid - Editable jQuery Grid: This a jQuery plugin. The plugin will add three buttons Add/Edit/Delete which will need a popup control to add/edit data. Developer using this plugin need to define an HTML table and an HTML DIV which will be used for popup. Also MVC action method to handle the CRUD operation. The plug requires an MVC partial view to be returned from the add edit delete methods to update the table data.HoiChoMuaBan: h?i ch? mua bánmyfirstgit: ???????Nova Code: Nova code is a language to implement processor instructions, states, and other features planned soon for the NEmulation framework. 
Right now this project will be worked on separately, then integrated into NEmulation.Pocket Book App: Just try it!State Machine .netmf: StateMachineExample for .netmf C# uVersionClientCache: uVersionClientCache is a custom macro to always automatically version (URL querstring parameter) your files based on a MD5 hash of the file contents or file last modified date to prevent issues with client browsers caching an old file after you have changed it.XNA Shader Composer: XNA Shader Composer is a solution for Visual Studio 2010 and XNA 4.0. The goal is to create an environment for learn and create differents HLSL programs.???Disable????: ??????????????errdisable??,????????。

    Read the article

  • CodePlex Daily Summary for Sunday, July 08, 2012

    CodePlex Daily Summary for Sunday, July 08, 2012Popular ReleasesBlackJumboDog: Ver5.6.7: 2012.07.08 Ver5.6.7 (1) ????????????????「????? Request.Receve()」?????????? (2) Web???????????FlMML customized: FlMML customized ??: FlMML customized ????。 ??、PCM??????????、??????。ecBlog: ecBlog 0.2: ecBlog alpha realaseTaskScheduler ASP.NET: Release 3 - 1.2.0.0: Release 3 - Version 1.2.0.0 That version was altered only the library: In TaskScheduler was added new properties: UseBackgroundThreads Enables the use of separate threads for each task. StoreThreadsInPool Manager enables to store in the Pool threads that are performing the tasks. OnStopSchedulerAutoCancelThreads Scheduler allows aborting threads when it is stopped. false if the scheduler is not aborted the threads that are running. AutoDeletedExecutedTasks Allows Manager Delete Task afte...DotNetNuke Persian Packages: ??? ?? ???? ????? ???? 6.2.0: *????? ???? ??? ?? ???? 6.2.0 ? ??????? ???? ????? ???? ??? ????? *????? ????? ????? ??? ??? ???? ??? ??????? ??????? - ???? *?????? ???? ??? ?????? ?? ???? ???? ????? ? ?? ??? ?? ???? ???? ?? *????? ????? ????? ????? ????? / ??????? ???? ?? ???? ??? ??? - ???? *???? ???? ???? ????? ? ??????? ??? ??? ??? ?? ???? *????? ????? ???????? ??? ? ??????? ?? ?? ?????? ????? ????????? ????? ?????? - ???? *????? ????? ?????? ????? ?? ???? ?? ?? ?? ???????? ????? ????? ????????? ????? ?????? *???? ?...testtom07052012git02: r1: r1Cypher Bot: Cypher Bot 4.1: Cypher Bot is the most advanced encryption tool on the planet.... and now it actually works. That's right we fixed the bugs! For a full program summary go to the Home Page or visit www.iRareMedia.com So what's new? We've pretty much fixed all the bugs, but here's a run down if you wanna know exactly what's different: Fixed Installation / Setup Error, where an error message would display: "No Internet Connection, Try Again Later" Fixed File Encryption / Decryption error where the file exten...Coding4Fun Kinect Service: Coding4Fun Kinect Service v1.5: Requires Kinect for Windows SDK v1.5 Minor bug fixes + Kinect for Windows SDK v1.5 Aligning version with the Kinect for Windows SDK requiredtedplay: tedplay 1.1: tedplay 1.1 source and Win32 binary is out now. Changes are: SID card support Commodore 64 PSID music format support optimized FIR filter global hotkeys for skipping tracks (Windows only) module properties window (Windows only) mutable noise channel via GUI button (Windows only) disable SID card from the menu (Windows only) bugfixes PSID tunes are played on the C64 clock frequency but in a Commodore plus/4 virtual machine. The purpose is not to have yet another SID player, but t...xUnit.net Contrib: xunitcontrib-resharper 0.6 (RS 7.0, 6.1.1): xunitcontrib release 0.6 (ReSharper runner) This release provides a test runner plugin for Resharper 7.0 (EAP build 82) and 6.1, targetting all versions of xUnit.net. (See the xUnit.net project to download xUnit.net itself.) Copies of the plugin that support previous verions of ReSharper can be downloaded from this release. 
The plan is to support the latest revisions of the last two paid-for major versions of ReSharper (namely 7.0 and 6.1). Also note that all builds work against ALL VERSIONS...

Umbraco CMS: Umbraco 4.8.0 Beta: What's new: uComponents in the core (Multi-Node Tree Picker, Multiple Textstring, Slider and XPath Lists), easier Lucene searching built in, IFile providers for easier file handling, updated 3rd-party libraries, Applications/Trees moved out of the database, SQL Azure support added, various bug fixes. Getting started: a great place to start is the Getting Started Guide: http://umbraco.codeplex.com/Project/Download/FileDownload.aspx?DownloadId=197051 Make sure to...

CODE Framework: 4.0.20704.0: See the CODE Framework (.NET) change log for changes in this version.

xUnit.net - Unit testing framework for C# and .NET (a successor to NUnit): xUnit.net 1.9.1: xUnit.net release 1.9.1, build #1600. Important note for ReSharper users: ReSharper support has been moved to the xUnit.net Contrib project. Important note for TestDriven.net users: if you are having issues running xUnit.net tests in TestDriven.net, especially on 64-bit Windows, we strongly recommend you upgrade to TD.NET version 3.0 or later. Important note for VS2012 users: the VS2012 runner is in the Visual Studio Gallery now, and should be installed via Tools | Extension Manager from insi...

MVC Controls Toolkit: Mvc Controls Toolkit 2.2.0: Added: modified all Mvc4-related features to conform with the Mvc4 RC; all items controls now accept any IEnumerable<T> (before, just List<T> was accepted by most controls); retrievalManager class that automatically retrieves data from a data source whenever it catches events triggered by filtering, sorting and paging controls; move method added to the updatesManager to move one child object from a father to another (the move operation can be undone like the insert, update and delete operatio...)

IronPython: 2.7.3: On behalf of the IronPython team, I'm happy to announce the final release of IronPython 2.7.3. This release includes everything from IronPython 54498, 62475, and 74478 as well. Like all IronPython 2.7-series releases, .NET 4 is required to install it. Installing this release will replace any existing IronPython 2.7-series installation. The incompatibility with IronRuby has been resolved, and they can once again be installed side-by-side. The biggest improvements in IronPython 2.7.3 are: the...

Mini SQL Query: Mini SQL Query (v1.0.68.441): Just a bug fix release for when the connections try to refresh after an edit. Make sure you read the Quickstart for an introduction.

Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.58: Fix for issue #18296: provide an "ALL" value to the -ignore switch to ignore all error and warning messages. Fix for issue #18293: if encountering EOF before a function declaration or expression is properly closed, throw an appropriate error and don't crash. Adjust the variable-renaming algorithm so it is very specific when renaming variables with the same number of references, so a single source file ends up with the same minified names on different platforms. Add the ability to specify kno...

LogExpert: 1.4 build 4566: This release for the 1.4 version line contains various fixes which were made some time ago. Until now these fixes were only available in the 1.5 alpha versions. It also contains a fix for issue 710. Column finder (press F8 to show). Terminal server issues: multiple sessions with the same user should work now. Settings export/import available via the Settings dialog (still incomplete, e.g. tab colors are not saved; maybe I will change the file format one day). No command line support yet (for importin...

CommonLibrary.NET: CommonLibrary.NET 0.9.8.5 - Final Release: A collection of very reusable code and components in C# 4.0 ranging from ActiveRecord, Csv, Command Line Parsing, Configuration, Holiday Calendars, Logging, Authentication, and much more. FluentScript: CommonLibrary.NET 0.9.8 contains a scripting language called FluentScript. Release notes for FluentScript are located at http://fluentscript.codeplex.com/wikipage?action=Edit&title=Release%20Notes&referringTitle=Documentation Fluentscript - 0.9.8.5 - Final Release. Application: FluentScript Versio...

SharePoint 2010 Metro UI: SharePoint 2010 Metro UI8: Please review the documentation link for how to install. Installation takes some basic knowledge of how to upload and edit SharePoint artifact files. Please view the Discussions tab for ongoing FAQs.

New Projects

Adventures of Adventure Land: Adventures of Adventure Land is a new text based adventure. You will be able to level up, fight challenging enemies, use magical spells, and simply adventure.
AdventureWorks Silverlight samples: AdventureWorks Silverlight samples
AFS.Collab.Duplex: my duplex
armmychan: (description garbled in the source)
C# - WPF - .Net - MSSQL - Open Source Restaurant POS System: A C# .NET / WPF / MSSQL restaurant point of sale (POS) software that is PCI-DSS 2.0 compliant, running stable in many restaurants, integrated with MercuryPay.
dcview: (Korean description garbled in the source; related to dcinside.com)
DotNetNuke Contact Form: simple yet effective DotNetNuke contact form
ISBNdb.com Helper Library: A .NET helper library that encapsulates all the functionality of the API at www.isbndb.com in .NET CLR objects.
JPO Class Register: Simple class register.
NACHA C# Class with WPF Test App: Do you need to generate a NACHA PPD file? This is a great starting point for you! Actually, it's a great, almost finished, point for you! More info below.
NotificationsWidget In ASP MVC: Summary
PayPal Manager: I decided to make a basic (for now) desktop client for PayPal to get better at WebRequests in VB.NET. I will be adding much more, such as sending payments, etc.
Pomodoro Timer Count Down App: Pomodoro count down timer application. Features: Pomodoro mode, count down timer mode, start/stop/pause, notification.
Powershell HTML Highlight: PowerShell HTML syntax highlighting
Project F10_P1: F10 p1
Razor-sharp your skills: This project will have details about C# 2.0, C# 3.0, C# 4.0 and C# 5.0.
Salaria: Welcome to our project "Salaria". (translated from French)
SharePoint 2010 - Unlock SP Files: Unlock any file in SharePoint or get lock information of a SharePoint file (compatible with Office 365): "The file "" is locked for exclusive use by"
SharePoint 2010 Metro Masterpage: This project will give you a full Metro masterpage for SharePoint 2010.
SharePoint Cache Refresh Framework: This project's aim is to build a small and easy to use framework for SharePoint developers to be able to control cached objects across servers in a SharePoint farm.
TFSProjectTest: A test project.
uhimania test project: test
Upgrade SPSolution: This is a SharePoint 2010 Management Shell cmdlet which upgrades a SharePoint solution and installs/activates any new features added in the package.
Video Frame Explorer: Video Frame Explorer allows you to make thumbnails (caps, previews) of video files. It supports practically any video format (even MP4, MKV, MOV if you hav...
WML: WML

    Read the article

  • EM12c: Using the LIST verb in emcli

    - by SubinDaniVarughese
    Many of us who use EM CLI to write scripts and automate our daily tasks should not miss out on the new list verb released with Oracle Enterprise Manager 12.1.0.3.0. The combination of list and the Jython-based scripting support in EM CLI makes it easier to automate complex tasks with just a few lines of code. Before I jump into a script, let me highlight the key attributes of the list verb and why it's simply excellent!

1. Multiple resources under a single verb

A resource can be a set of users or targets, etc. Using the list verb, you can retrieve information about a resource from the repository database. Here is an example which retrieves the list of administrators within EM.

Standard mode:
$ emcli list -resource="Administrators"

Interactive mode:
emcli> list(resource="Administrators")
The output will be the same as standard mode.

Script mode:
$ emcli @myAdmin.py
Enter password :  ******
The output will be the same as standard mode.

Contents of the myAdmin.py script:
login()
print list(resource="Administrators", jsonout=False).out()

To get a list of all available resources use:
$ emcli list -help

With every release of EM, more resources are being added to the list verb. If you have a resource which you feel would be valuable, go ahead and contact Oracle Support to log an enhancement request with product development. Be sure to say how the resource is going to help improve your daily tasks.

2. Consistent formatting

It is possible to format the output of any resource consistently using these options:

-columns: specifies which columns should be shown in the output. Here is an example which shows the list of administrators and their account status:
$ emcli list -resource="Administrators" -columns="USER_NAME,REPOS_ACCOUNT_STATUS"

To get a list of columns in a resource use:
$ emcli list -resource="Administrators" -help

You can also specify the width of each column. For example, here the column width of USER_TYPE is set to 20 and DEPARTMENT to 30:
$ emcli list -resource=Administrators -columns="USER_NAME,USER_TYPE:20,COST_CENTER,CONTACT,DEPARTMENT:30"
This is useful if your terminal is too small or you need to fine-tune a list of specific columns for quick use or improved readability.

-colsize: resizes column widths. Here is the same example as above, but using -colsize to define the width of USER_TYPE as 20 and DEPARTMENT as 30:
$ emcli list -resource=Administrators -columns="USER_NAME,USER_TYPE,COST_CENTER,CONTACT,DEPARTMENT" -colsize="USER_TYPE:20,DEPARTMENT:30"

The existing standard EM CLI formatting options are also available in the list verb: -format="name:pretty" | -format="name:script" | -format="name:csv" | -noheader | -script
There are many uses depending on your needs. Have a look at the resources and the columns in each resource, and refer to the EM CLI book in the EM documentation for more information.

3. Search

Using the -search option in the list verb makes it possible to search for a specific row in a specific column within a resource. This is similar to the SQL*Plus WHERE clause. The following operators are supported: =, !=, >, <, >=, <=, like, is (must be followed by null or not null).

Here is an example which searches for all EM administrators in the marketing department located in the USA:
$ emcli list -resource="Administrators" -search="DEPARTMENT ='Marketing'" -search="LOCATION='USA'"

Here is another example which shows all the named credentials created since a specific date:
$ emcli list -resource=NamedCredentials -search="CredCreatedDate > '11-Nov-2013 12:37:20 PM'"
Note that the timestamp has to be in the format DD-MON-YYYY HH:MI:SS AM/PM.

Some resources need a bind variable to be passed to get output. A bind variable is created in the resource and then referenced in the command. For example, this command will list all the default preferred credentials for target type oracle_database:
$ emcli list -resource="PreferredCredentialsDefault" -bind="TargetType='oracle_database'" -colsize="SetName:15,TargetType:15"

You can provide multiple bind variables. To verify whether a column is searchable or requires a bind variable, use the -help option. Here is an example:
$ emcli list -resource="PreferredCredentialsDefault" -help

4. Secure access

When the list verb collects data, it only displays content to which the administrator currently logged into EM CLI has access. For example, consider this use case: AdminA has access only to TargetA. AdminA logs into EM CLI. Executing the list verb to get the list of all targets will only show TargetA.

5. User defined SQL

Using the -sql option, user defined SQL can be executed. The SQL provided in the -sql option is executed as the EM user MGMT_VIEW, which has read-only access to the EM published MGMT$ database views in the SYSMAN schema. To get the list of EM published MGMT$ database views, go to the Extensibility Programmer's Reference book in the EM documentation; there is a chapter about Using Management Repository Views. It is always recommended to reference the documentation for the supported MGMT$ database views. Suppose you are using the MGMT$ABC view, which is not in that chapter. Since the view is not in the book and not supported, it is possible that during an upgrade the view will undergo a change in its structure or data. Using a supported view ensures that your scripts using -sql will continue working after upgrade. Here's an example:
$ emcli list -sql='select * from mgmt$target'

6. JSON output support

JSON (JavaScript Object Notation) enables data to be displayed in a collection of name/value pairs. There is a lot of reading material about JSON online for more information.

As an example, we had a requirement where an EM administrator had many 11.2 databases in their test environment and the developers had asked the administrator to change the lifecycle status from Test to Production. That meant the admin had to go to the EM "All targets" page, identify the set of 11.2 databases, and then go into each target database page and manually change the property to Production. Sounds easy to say, but this administrator had numerous targets and the task was repeated for every release cycle. We told him there is an easier way to do this with a script, and that he can reuse the script whenever anyone wants to change a set of targets to a different lifecycle status.
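Before diving into that script, here is a small warm-up that puts points 1, 2 and 6 together in interactive (Jython) mode. This is my own illustrative sketch rather than part of the original post; it reuses the USER_NAME and REPOS_ACCOUNT_STATUS columns shown earlier and assumes the JSON keys returned by list() match the column names, which is the same pattern the full script below relies on:

login()

# Plain text output, same as the standard-mode command shown above
print list(resource="Administrators", jsonout=False).out()

# Default JSON output: walk the name/value pairs under 'data'
admins = list(resource="Administrators", columns="USER_NAME,REPOS_ACCOUNT_STATUS")
for row in admins.out()['data']:
    # key names assumed to mirror the requested columns
    print row['USER_NAME'] + " : " + row['REPOS_ACCOUNT_STATUS']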
Here is a Jython script which uses list and JSON to change the LifeCycle Status property value of every 11.2 database target. If you are new to scripting and Jython, I would suggest visiting the basic chapters in any Jython tutorial; understanding Jython is important for writing the logic your use case needs. If you are already writing scripts in Perl or shell, or know a programming language like Java, then you can easily follow the logic.

Disclaimer: The scripts in this post are subject to the Oracle Terms of Use located here.

 1  from emcli import *
 2  search_list = ['PROPERTY_NAME=\'DBVersion\'','TARGET_TYPE=\'oracle_database\'','PROPERTY_VALUE LIKE \'11.2%\'']
 3  if len(sys.argv) == 2:
 4     print login(username=sys.argv[0])
 5     l_prop_val_to_set = sys.argv[1]
 6     l_targets = list(resource="TargetProperties", search=search_list, columns="TARGET_NAME,TARGET_TYPE,PROPERTY_NAME")
 7     for target in l_targets.out()['data']:
 8        t_pn = 'LifeCycle Status'
 9        print "INFO: Setting Property name " + t_pn + " to value " + l_prop_val_to_set + " for " + target['TARGET_NAME']
10        print set_target_property_value(property_records=target['TARGET_NAME']+":"+target['TARGET_TYPE']+":"+t_pn+":"+l_prop_val_to_set)
11  else:
12     print "\n ERROR: Property value argument is missing"
13     print "\n INFO: Format to run this file is filename.py <username> <Database Target LifeCycle Status Property Value>"

You can download the script from here. I could not upload the file with a .py extension, so you need to rename the file to myScript.py before executing it using emcli.

A line by line explanation for beginners:

Line 1: Imports the emcli verbs as functions.
Line 2: search_list is a variable to pass to the search option in the list verb. I am using escape characters for the single quotes. To pass more than one value for the same option in the list verb, define comma separated values surrounded by square brackets, as above.
Line 3: This is an "if" condition to ensure the user provides two arguments with the script; otherwise line 12 prints an error message.
Line 4: Logging into EM. You can remove this if you have set up emcli with autologin. For more details about setup and autologin, please go to the EM CLI book in the EM documentation.
Line 5: l_prop_val_to_set is another variable. This is the property value to be set. Remember we are changing the value from Test to Production. The benefit of this variable is that you can reuse the script to change the property value from and to any other values.
Line 6: Here the output of the list verb is stored in l_targets. In the list verb I am passing the resource as TargetProperties, search as the search_list variable, and only the three columns I need: TARGET_NAME, TARGET_TYPE and PROPERTY_NAME. I don't need the other columns for my task.
Line 7: This is a for loop. The data in l_targets is available in JSON format. Using the for loop, each pair will now be available in the 'target' variable.
Line 8: t_pn is the "LifeCycle Status" variable. If required, I could take this as an input as well and then use the script to change any target property. In this example, I just wanted to change the "LifeCycle Status".
Line 9: This is a message informing the user that the script is setting the property value for dbxyz.
Line 10: This line shows the set_target_property_value verb, which sets the value using the property_records option. Once it is set for one target pair, the loop moves on to the next one. In my example I am just showing three dbs, but the real use is when you have 20 or 50 targets.
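Once the script has run (the exact invocation follows below), the same list verb could be used to confirm the change. The following is my own sketch rather than part of the original post; it assumes the LifeCycle Status property is exposed through the same TargetProperties resource and PROPERTY_VALUE column used above (check emcli list -resource=TargetProperties -help if in doubt):

# Sketch only: re-query TargetProperties to confirm the new LifeCycle Status values
verify = list(resource="TargetProperties",
              search=['PROPERTY_NAME=\'LifeCycle Status\'','TARGET_TYPE=\'oracle_database\''],
              columns="TARGET_NAME,PROPERTY_VALUE")
for row in verify.out()['data']:
    print row['TARGET_NAME'] + " is now " + row['PROPERTY_VALUE']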
The script is executed as:
$ emcli @myScript.py subin Production

The recommendation is to test scripts first before running them on a production system. We tested on a small set of targets and optimized the script for fewer lines of code and better messaging. For your quick reference, the resources available with the list verb in Enterprise Manager 12.1.0.4.0 can be shown with:
$ emcli list -help

Watch this space for more blog posts using the list verb and EM CLI scripting use cases. I hope you enjoyed reading this blog post and that it has helped you gain more information about the list verb. Happy Scripting!!

Disclaimer: The scripts in this post are subject to the Oracle Terms of Use located here.

Stay Connected: Twitter | Facebook | YouTube | Linkedin | Newsletter | Download the Oracle Enterprise Manager 12c Mobile app

    Read the article

  • CodePlex Daily Summary for Thursday, October 18, 2012

    CodePlex Daily Summary for Thursday, October 18, 2012

Popular Releases

Gac Library -- C++ Utilities for GPU Accelerated GUI and Script: Gaclib 0.4.0.0: Gaclib.zip contains the following content: GacUIDemo (demo solution and projects), Public Source (GacUI library), Document (HTML document; please start at reference_gacui.html), Content (necessary CSS/JPG files for the document). Improvements to the previous release: added Windows 8 theme (except tab control, which will be provided in the next release); GacUI will choose the Windows 7 or Windows 8 theme as the default theme according to OS version; added 1 new demo: Template.Window.CustomizedBorder...

Yahoo! UI Library: YUI Compressor for .Net: Version 2.1.1.0 - Sartha (BugFix): Reverted the embedding of the 2x assemblies.

Visual Studio Team Foundation Server Branching and Merging Guide: v2.1 - Visual Studio 2012: Welcome to the Branching and Merging Guide. What is new? The version-control-specific discussions have been moved from the Branching and Merging Guide to the new Advanced Version Control Guide. The Branching and Merging Guide and the Advanced Version Control Guide have been ported to the new document style. See http://blogs.msdn.com/b/willy-peter_schaub/archive/2012/10/17/alm-rangers-raising-the-quality-bar-for-documentation-part-2.aspx for more information. Quality-Bar Details: Documentatio...

D3 Loot Tracker: 1.5.5: Compatible with 1.05.

Write Once, Play Everywhere: MonoGame 3.0 (BETA): This is a beta release of the upcoming MonoGame 3.0. It contains an installer which will install a binary release of MonoGame on Windows boxes for the following platforms: Windows, Linux, Android and Windows 8. If you need to build for iOS or Mac you will need to get the source code at this time, as the installers for those platforms are not available yet. The installer will also install a bunch of project templates for Visual Studio 2010, 2012 and MonoDevelop. For those of you wish...

Windawesome: Windawesome v1.4.1 x64: Fixed switching of applications across monitors. Changed the window flashing API (fix your config files). Added NetworkMonitorWidget (thanks to weiwen). Any issues/recommendations/requests for future versions? This is the 64-bit version of the release; be sure to use it if you are on 64-bit Windows. Works with "Required DLLs v3".

CODE Framework: 4.0.21017.0: See the change log in the Documentation section for details.

Global Stock Exchange (Hobby Project): Global Stock Exchange - Invst Banking (Hobby Proj): Initial version.

Magelia WebStore Open-source Ecommerce software: Magelia WebStore 2.1: Added support for .NET 4.0 to Magelia.Webstore.Client and StarterSite version 2.1.254.3; Scheduler Import & Export feature (for Professional and Enterprise editions); UTC datetime and timezone support; .NET 4.5 and Visual Studio 2012 migration; Magelia client global refactoring; release of a NuGet package to help developers speed up development: http://nuget.org/packages/Magelia.Webstore.Client; optimization of the data update mechanism (a.k.a. "burst"); performance improvement of the d...

JayData - The cross-platform HTML5 data-management library for JavaScript: JayData 1.2.2: JayData is a unified data access library for JavaScript to CRUD + query data from different sources like OData, MongoDB, WebSQL, SqLite, HTML5 localStorage, Facebook or YQL. The library can be integrated with Knockout.js or Sencha Touch 2 and can be used on Node.js as well. See it in action in this 6-minute video. Sencha Touch 2 example app using JayData: Netflix browser. What's new in JayData 1.2.2: for detailed notes check the release notes. Revitalized IndexedDB provider. Now you c...

JaySvcUtil - generate JavaScript context from OData metadata: JaySvcUtil 1.2.2: You will need the JayData library to use contexts generated with JaySvcUtil! Get it from here: http://jaydata.codeplex.com

VFPX: FoxcodePlus: FoxcodePlus - Visual Studio like extensions to Visual FoxPro IntelliSense.

Droid Explorer: Droid Explorer 0.8.8.8 Beta: Fixed the icon for packages on the desktop. Fixed the install dialog closing right when it starts. Removed the link to "set up the sdk for me" as this is no longer supported. Fixed a bug where the device selection dialog would show even if there was only one device connected. Fixed the toolbar having a "gap" between other toolbars. Removed main menu items that do not have any menus.

Fiskalizacija za developere: FiskalizacijaDev 1.0: (translated from Croatian) The first version of this project, still marked as BETA - which means our testing passed successfully :) However, since we do not produce any cash-register software, none of this has been tried in "real" conditions - every suggestion, remark or bug report is welcome. For all of that, please use Discussions or the Issue Tracker. At the moment the runtime binary is available as Any CPU for .NET version 2.0. Let us know if you also need builds for 32-bit/64-bit as well as for .N...

Squiggle - A free open source LAN Messenger: Squiggle 3.2 (Development): NOTE: This is a development release and not recommended for production use. This release is mainly for enabling extensibility and interoperability with other platforms. Support for plugins. Support for extensions. Communication layer and protocol are platform independent (ZeroMQ, Protocol Buffers). Bug fixes. New /invite command. Edit the sent message. Disable update check. NOTE: This is a development release and not recommended for production use.

AcDown Downloader Framework: AcDown v4.2: (Chinese description, largely garbled in the source; AcDown is a C# downloader framework for sites such as Acfun, Bilibili and YouTube, with the AcPlay player, requiring .NET Framework 2.0 on 32/64-bit Windows XP/Vista/7/8, or Mono on Linux.)

PHPExcel: PHPExcel 1.7.8: See the change log for details of the new features and bug fixes included in this release, and the methods that are now deprecated. Note the changes to the PDF writer: tcPDF is no longer bundled with PHPExcel, but should be installed separately if you wish to use that 3rd-party library with PHPExcel. Alternatively, you can choose to use mPDF or DomPDF as PDF rendering libraries instead: PHPExcel now provides a configurable wrapper allowing you a choice of PDF renderer. See the documentation, or the PDF s...

DirectX Tool Kit: October 12, 2012: Added PrimitiveBatch for drawing user primitives. Debug object names for all D3D resources (for PIX and debug layer leak reporting).

Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.70: Fixed the issue described in discussion #399087: variable references within case values weren't getting resolved.

GoogleMap Control: GoogleMap Control 6.1: Some important bug fixes and a couple of new features were added. There are no major changes to the sample website. Source code can be downloaded from the Source Code section by selecting branch release-6.1; only builds of GoogleMap Control are issued here in this release. Update 14.Oct.2012 - client-side access fixed. NuGet package: GoogleMap Control 6.1 NuGet Package. Features: Bounds property to provide the ability to create a map by center and bounds as well; setting in markup <artem:Goog...

New Projects

Anonymous InfoPath Browser Forms Web Part: Web part for SharePoint 2007 and 2010 that enables the anonymous submission of InfoPath forms.
ath sem5 gr3 proj1: just a simple app to learn how to work in groups, just a start in our programming career
Atif M DNNTaskManager: This is a simple task manager
AWS for .NET Sample (Amazon EC2, S3, SQS, DynamoDB): Amazon Web Services (AWS) sample for .NET (C#) with ASP.NET MVC and Web API, including S3, DynamoDB, Elastic Beanstalk and SQS.
CodingWheels.DataTypes: DataTypes tries to make it easier for developers to have concrete typesafe objects for working with many common forms of data. Many times these data objects are just doubles or ints floating through your code with abbreviations on them describing what they represent.
Display attachments (list view) SP 2010: Display attachments (SP 2010) is a field for all list types: document library, image library, links, surveys.
Donation Tracker: This is a school project
EveHQ : The Internet Spaceship Toolkit: Multi-faceted character application for Eve-Online. Includes pilot monitoring, skill queue planning, ship fitting, industry and more.
EventConni: EventConni
FileCloud.Net: FileCloud.NET is a .NET wrapper for the http://filecloud.io/ service API.
Foodies: Foodies
Force PowerPivot Refresh: This project contains a C# utility class (CustomDataRefreshUtils) to refresh a PowerPivot file in SharePoint 2010.
Geometric Modeling: Render a mesh with Visual Studio - C#
Heavysoft Mince: Heavysoft Mince is an opportunity to enhance my skills and to show how free software can be used in enterprise-level systems.
HP iLO Management Utility: A small utility to provide a nice interface for managing multiple servers over the iLO interface.
iMeeting: iMeeting is a simple application built in Java.
iTask: iTask for Windows 8
LamWebOS: this is a test project.
MackProjects_Musicallis: (translated from Portuguese) Project for the Applied Programming Techniques II course at Universidade Presbiteriana Mackenzie - SP.
Merge PDF: MergePDF is an easy and simple tool which can help you batch merge any number of PDF files into one PDF file quickly. It is completely free. With Mer...
NetGen: (translated from Portuguese) NetGen is a project for automated generation of ASP.NET web interfaces from a LINQ-TO-SQL class model, available as an add-in for Visual Studio 2008.
Other Ways: (description garbled in the source)
Physical Therapy Timer: A simple Windows application that makes it easy to set a countdown timer for a required time period, and keep track of how many times the timer has been run.
PicoMax: This version of PicoMax has everything you need to use the Seeed OLED without the MS Graphics class. It can be used in Gadgeteer or a regular NETMF project.
post tracker: ASP, Web API, Windows Phone
RandomEngine: Open source game engine.
Resident Raver: This is the Windows 8 version of Resident Raver which I wrote for my Intro To HTML5 Game Dev book. It requires the Impact JavaScript game framework.
RogueTS: RogueTS is a Roguelike engine written in TypeScript. It features randomly generated maps, basic lighting effects, collision detection and additional stub code.
RonYeeStores: (Chinese description garbled in the source; a C2C/B2C store project)
ShangHaiTunnel monitor: shang hai tunnel project
SharePoint List Security: Free, open source set of features for SharePoint 2010 to enhance the OOTB list permissions by adding column level permissions for users and groups.
Skywave Code Lines Counter: A tiny program which counts real written lines of code. Real lines means autogenerated lines, empty lines and single '{' or '}' lines (in C#) are removed. Currently supports .cs and .vb files (for C# and VB). It will say how much you worked ;-)
switchcontroles: HAT switch control and tracking (translated from Spanish)
Syscore: Steps: 1 - Definitions, 2 - Requirements, 3 - Projecting, 4 - Delegation, 5 - Prototyping, 6 - Testing/Analyzing, 7 - Finalizing, 8 - Distribution, 9 - Congratulations
tarikscodeplexproject: tarikscodeplexproject
TaskManager04: This project is an effort to implement the coding examples given in the TASK MANAGER video series into a working DotNetNuke program.
testc: this is a test, heihei, yongsheng
testtom10172012git01: fds fs fs fs
tsUnit - TypeScript Unit Testing Framework: tsUnit is a unit testing framework for TypeScript, written in TypeScript. It allows you to encapsulate your test functions in classes and modules.
WebGrease: WebGrease is a suite of tools for optimizing JavaScript, CSS files and images.
(unnamed project): (Chinese project name and description garbled in the source)

    Read the article

  • Quick guide to Oracle IRM 11g: Classification design

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g indexThis is the final article in the quick guide to Oracle IRM. If you've followed everything prior you will now have a fully functional and tested Information Rights Management service. It doesn't matter if you've been following the 10g or 11g guide as this next article is common to both. ContentsWhy this is the most important part... Understanding the classification and standard rights model Identifying business use cases Creating an effective IRM classification modelOne single classification across the entire businessA context for each and every possible granular use caseWhat makes a good context? Deciding on the use of roles in the context Reviewing the features and security for context roles Summary Why this is the most important part...Now the real work begins, installing and getting an IRM system running is as simple as following instructions. However to actually have an IRM technology easily protecting your most sensitive information without interfering with your users existing daily work flows and be able to scale IRM across the entire business, requires thought into how confidential documents are created, used and distributed. This article is going to give you the information you need to ask the business the right questions so that you can deploy your IRM service successfully. The IRM team here at Oracle have over 10 years of experience in helping customers and it is important you understand the following to be successful in securing access to your most confidential information. Whatever you are trying to secure, be it mergers and acquisitions information, engineering intellectual property, health care documentation or financial reports. No matter what type of user is going to access the information, be they employees, contractors or customers, there are common goals you are always trying to achieve.Securing the content at the earliest point possible and do it automatically. Removing the dependency on the user to decide to secure the content reduces the risk of mistakes significantly and therefore results a more secure deployment. K.I.S.S. (Keep It Simple Stupid) Reduce complexity in the rights/classification model. Oracle IRM lets you make changes to access to documents even after they are secured which allows you to start with a simple model and then introduce complexity once you've understood how the technology is going to be used in the business. After an initial learning period you can review your implementation and start to make informed decisions based on user feedback and administration experience. Clearly communicate to the user, when appropriate, any changes to their existing work practice. You must make every effort to make the transition to sealed content as simple as possible. For external users you must help them understand why you are securing the documents and inform them the value of the technology to both your business and them. Before getting into the detail, I must pay homage to Martin White, Vice President of client services in SealedMedia, the company Oracle acquired and who created Oracle IRM. In the SealedMedia years Martin was involved with every single customer and was key to the design of certain aspects of the IRM technology, specifically the context model we will be discussing here. Listening carefully to customers and understanding the flexibility of the IRM technology, Martin taught me all the skills of helping customers build scalable, effective and simple to use IRM deployments. 
No matter how well the engineering department designed the software, badly designed and poorly executed projects can result in difficult to use and manage, and ultimately insecure solutions. The advice and information that follows was born with Martin and he's still delivering IRM consulting with customers and can be found at www.thinkers.co.uk. It is from Martin and others that Oracle not only has the most advanced, scalable and usable document security solution on the market, but Oracle and their partners have the most experience in delivering successful document security solutions. Understanding the classification and standard rights model The goal of any successful IRM deployment is to balance the increase in security the technology brings without over complicating the way people use secured content and avoid a significant increase in administration and maintenance. With Oracle it is possible to automate the protection of content, deploy the desktop software transparently and use authentication methods such that users can open newly secured content initially unaware the document is any different to an insecure one. That is until of course they attempt to do something for which they don't have any rights, such as copy and paste to an insecure application or try and print. Central to achieving this objective is creating a classification model that is simple to understand and use but also provides the right level of complexity to meet the business needs. In Oracle IRM the term used for each classification is a "context". A context defines the relationship between.A group of related documents The people that use the documents The roles that these people perform The rights that these people need to perform their role The context is the key to the success of Oracle IRM. It provides the separation of the role and rights of a user from the content itself. Documents are sealed to contexts but none of the rights, user or group information is stored within the content itself. Sealing only places information about the location of the IRM server that sealed it, the context applied to the document and a few other pieces of metadata that pertain only to the document. This important separation of rights from content means that millions of documents can be secured against a single classification and a user needs only one right assigned to be able to access all documents. If you have followed all the previous articles in this guide, you will be ready to start defining contexts to which your sensitive information will be protected. But before you even start with IRM, you need to understand how your own business uses and creates sensitive documents and emails. Identifying business use cases Oracle is able to support multiple classification systems, but usually there is one single initial need for the technology which drives a deployment. This need might be to protect sensitive mergers and acquisitions information, engineering intellectual property, financial documents. For this and every subsequent use case you must understand how users create and work with documents, to who they are distributed and how the recipients should interact with them. 
A successful IRM deployment should start with one well identified use case (we go through some examples towards the end of this article) and then after letting this use case play out in the business, you learn how your users work with content, how well your communication to the business worked and if the classification system you deployed delivered the right balance. It is at this point you can start rolling the technology out further. Creating an effective IRM classification model Once you have selected the initial use case you will address with IRM, you need to design a classification model that defines the access to secured documents within the use case. In Oracle IRM there is an inbuilt classification system called the "context" model. In Oracle IRM 11g it is possible to extend the server to support any rights classification model, but the majority of users who are not using an application integration (such as Oracle IRM within Oracle Beehive) are likely to be starting out with the built in context model. Before looking at creating a classification system with IRM, it is worth reviewing some recognized standards and methods for creating and implementing security policy. A very useful set of documents are the ISO 17799 guidelines and the SANS security policy templates. First task is to create a context against which documents are to be secured. A context consists of a group of related documents (all top secret engineering research), a list of roles (contributors and readers) which define how users can access documents and a list of users (research engineers) who have been given a role allowing them to interact with sealed content. Before even creating the first context it is wise to decide on a philosophy which will dictate the level of granularity, the question is, where do you start? At a department level? By project? By technology? First consider the two ends of the spectrum... One single classification across the entire business Imagine that instead of having separate contexts, one for engineering intellectual property, one for your financial data, one for human resources personally identifiable information, you create one context for all documents across the entire business. Whilst you may have immediate objections, there are some significant benefits in thinking about considering this. Document security classification decisions are simple. You only have one context to chose from! User provisioning is simple, just make sure everyone has a role in the only context in the business. Administration is very low, if you assign rights to groups from the business user repository you probably never have to touch IRM administration again. There are however some obvious downsides to this model.All users in have access to all IRM secured content. So potentially a sales person could access sensitive mergers and acquisition documents, if they can get their hands on a copy that is. You cannot delegate control of different documents to different parts of the business, this may not satisfy your regulatory requirements for the separation and delegation of duties. Changing a users role affects every single document ever secured. Even though it is very unlikely a business would ever use one single context to secure all their sensitive information, thinking about this scenario raises one very important point. Just having one single context and securing all confidential documents to it, whilst incurring some of the problems detailed above, has one huge value. 
Once secured, IRM protected content can ONLY be accessed by authorized users. Just think of all the sensitive documents in your business today, imagine if you could ensure that only everyone you trust could open them. Even if an employee lost a laptop or someone accidentally sent an email to the wrong recipient, only the right people could open that file. A context for each and every possible granular use case Now let's think about the total opposite of a single context design. What if you created a context for each and every single defined business need and created multiple contexts within this for each level of granularity? Let's take a use case where we need to protect engineering intellectual property. Imagine we have 6 different engineering groups, and in each we have a research department, a design department and manufacturing. The company information security policy defines 3 levels of information sensitivity... restricted, confidential and top secret. Then let's say that each group and department needs to define access to information from both internal and external users. Finally add into the mix that they want to review the rights model for each context every financial quarter. This would result in a huge amount of contexts. For example, lets just look at the resulting contexts for one engineering group. Q1FY2010 Restricted Internal - Engineering Group 1 - Research Q1FY2010 Restricted Internal - Engineering Group 1 - Design Q1FY2010 Restricted Internal - Engineering Group 1 - Manufacturing Q1FY2010 Restricted External- Engineering Group 1 - Research Q1FY2010 Restricted External - Engineering Group 1 - Design Q1FY2010 Restricted External - Engineering Group 1 - Manufacturing Q1FY2010 Confidential Internal - Engineering Group 1 - Research Q1FY2010 Confidential Internal - Engineering Group 1 - Design Q1FY2010 Confidential Internal - Engineering Group 1 - Manufacturing Q1FY2010 Confidential External - Engineering Group 1 - Research Q1FY2010 Confidential External - Engineering Group 1 - Design Q1FY2010 Confidential External - Engineering Group 1 - Manufacturing Q1FY2010 Top Secret Internal - Engineering Group 1 - Research Q1FY2010 Top Secret Internal - Engineering Group 1 - Design Q1FY2010 Top Secret Internal - Engineering Group 1 - Manufacturing Q1FY2010 Top Secret External - Engineering Group 1 - Research Q1FY2010 Top Secret External - Engineering Group 1 - Design Q1FY2010 Top Secret External - Engineering Group 1 - Manufacturing Now multiply the above by 6 for each engineering group, 18 contexts. You are then creating/reviewing another 18 every 3 months. After a year you've got 72 contexts. What would be the advantages of such a complex classification model? You can satisfy very granular rights requirements, for example only an authorized engineering group 1 researcher can create a top secret report for access internally, and his role will be reviewed on a very frequent basis. Your business may have very complex rights requirements and mapping this directly to IRM may be an obvious exercise. The disadvantages of such a classification model are significant...Huge administrative overhead. Someone in the business must manage, review and administrate each of these contexts. If the engineering group had a single administrator, they would have 72 classifications to reside over each year. From an end users perspective life will be very confusing. Imagine if a user has rights in just 6 of these contexts. 
They may be able to print content from one but not another, be able to edit content in 2 contexts but not the other 4. Such confusion at the end user level causes frustration and resistance to the use of the technology. Increased synchronization complexity. Imagine a user who after 3 years in the company ends up with over 300 rights in many different contexts across the business. This would result in long synchronization times as the client software updates all your offline rights. Hard to understand who can do what with what. Imagine being the VP of engineering and as part of an internal security audit you are asked the question, "What rights to researchers have to our top secret information?". In this complex model the answer is not simple, it would depend on many roles in many contexts. Of course this example is extreme, but it highlights that trying to build many barriers in your business can result in a nightmare of administration and confusion amongst users. In the real world what we need is a balance of the two. We need to seek an optimum number of contexts. Too many contexts are unmanageable and too few contexts does not give fine enough granularity. What makes a good context? Good context design derives mainly from how well you understand your business requirements to secure access to confidential information. Some customers I have worked with can tell me exactly the documents they wish to secure and know exactly who should be opening them. However there are some customers who know only of the government regulation that requires them to control access to certain types of information, they don't actually know where the documents are, how they are created or understand exactly who should have access. Therefore you need to know how to ask the business the right questions that lead to information which help you define a context. First ask these questions about a set of documentsWhat is the topic? Who are legitimate contributors on this topic? Who are the authorized readership? If the answer to any one of these is significantly different, then it probably merits a separate context. Remember that sealed documents are inherently secure and as such they cannot leak to your competitors, therefore it is better sealed to a broad context than not sealed at all. Simplicity is key here. Always revert to the first extreme example of a single classification, then work towards essential complexity. If there is any doubt, always prefer fewer contexts. Remember, Oracle IRM allows you to change your mind later on. You can implement a design now and continue to change and refine as you learn how the technology is used. It is easy to go from a simple model to a more complex one, it is much harder to take a complex model that is already embedded in the work practice of users and try to simplify it. It is also wise to take a single use case and address this first with the business. Don't try and tackle many different problems from the outset. Do one, learn from the process, refine it and then take what you have learned into the next use case, refine and continue. Once you have a good grasp of the technology and understand how your business will use it, you can then start rolling out the technology wider across the business. Deciding on the use of roles in the context Once you have decided on that first initial use case and a context to create let's look at the details you need to decide upon. 
For each context, identify; Administrative rolesBusiness owner, the person who makes decisions about who may or may not see content in this context. This is often the person who wanted to use IRM and drove the business purchase. They are the usually the person with the most at risk when sensitive information is lost. Point of contact, the person who will handle requests for access to content. Sometimes the same as the business owner, sometimes a trusted secretary or administrator. Context administrator, the person who will enact the decisions of the Business Owner. Sometimes the point of contact, sometimes a trusted IT person. Document related rolesContributors, the people who create and edit documents in this context. Reviewers, the people who are involved in reviewing documents but are not trusted to secure information to this classification. This role is not always necessary. (See later discussion on Published-work and Work-in-Progress) Readers, the people who read documents from this context. Some people may have several of the roles above, which is fine. What you are trying to do is understand and define how the business interacts with your sensitive information. These roles obviously map directly to roles available in Oracle IRM. Reviewing the features and security for context roles At this point we have decided on a classification of information, understand what roles people in the business will play when administrating this classification and how they will interact with content. The final piece of the puzzle in getting the information for our first context is to look at the permissions people will have to sealed documents. First think why are you protecting the documents in the first place? It is to prevent the loss of leaking of information to the wrong people. To control the information, making sure that people only access the latest versions of documents. You are not using Oracle IRM to prevent unauthorized people from doing legitimate work. This is an important point, with IRM you can erect many barriers to prevent access to content yet too many restrictions and authorized users will often find ways to circumvent using the technology and end up distributing unprotected originals. Because IRM is a security technology, it is easy to get carried away restricting different groups. However I would highly recommend starting with a simple solution with few restrictions. Ensure that everyone who reasonably needs to read documents can do so from the outset. Remember that with Oracle IRM you can change rights to content whenever you wish and tighten security. Always return to the fact that the greatest value IRM brings is that ONLY authorized users can access secured content, remember that simple "one context for the entire business" model. At the start of the deployment you really need to aim for user acceptance and therefore a simple model is more likely to succeed. As time passes and users understand how IRM works you can start to introduce more restrictions and complexity. Another key aspect to focus on is handling exceptions. If you decide on a context model where engineering can only access engineering information, and sales can only access sales data. Act quickly when a sales manager needs legitimate access to a set of engineering documents. Having a quick and effective process for permitting other people with legitimate needs to obtain appropriate access will be rewarded with acceptance from the user community. 
These use cases can often be satisfied by integrating IRM with a good Identity & Access Management technology which simplifies the process of assigning users the correct business roles. The big print issue... Printing is often an issue of contention, users love to print but the business wants to ensure sensitive information remains in the controlled digital world. There are many cases of physical document loss causing a business pain, it is often overlooked that IRM can help with this issue by limiting the ability to generate physical copies of digital content. However it can be hard to maintain a balance between security and usability when it comes to printing. Consider the following points when deciding about whether to give print rights. Oracle IRM sealed documents can contain watermarks that expose information about the user, time and location of access and the classification of the document. This information would reside in the printed copy making it easier to trace who printed it. Printed documents are slower to distribute in comparison to their digital counterparts, so time sensitive information in printed format may present a lower risk. Print activity is audited, therefore you can monitor and react to users abusing print rights. Summary In summary it is important to think carefully about the way you create your context model. As you ask the business these questions you may get a variety of different requirements. There may be special projects that require a context just for sensitive information created during the lifetime of the project. There may be a department that requires all information in the group is secured and you might have a few senior executives who wish to use IRM to exchange a small number of highly sensitive documents with a very small number of people. Oracle IRM, with its very flexible context classification system, can support all of these use cases. The trick is to introducing the complexity to deliver them at the right level. In another article i'm working on I will go through some examples of how Oracle IRM might map to existing business use cases. But for now, this article covers all the important questions you need to get your IRM service deployed and successfully protecting your most sensitive information.

    Read the article

  • ANTS CLR and Memory Profiler In Depth Review (Part 1 of 2 – CLR Profiler)

    - by ToStringTheory
    One of the things that people might not know about me is my obsession with making my code as efficient as possible. Many people might not realize how much of an undertaking that can be, but it is surely a task as monumental as climbing Mount Everest, except this time it is a challenge for the mind… In trying to make code efficient, there are many different factors that play a part – size of the project or solution, tiers, language used, experience and training of the programmer, technologies used, maintainability of the code – the list can go on for quite some time. I spend quite a bit of time when developing trying to determine the best way to implement a feature to achieve the efficiency I am looking for. One program that I have recently come to learn about, Red Gate's ANTS Performance (CLR) and Memory Profiler, gives me tools to accomplish that job more efficiently as well. In this review, I am going to cover some of the features of the ANTS profiler set by compiling some hideous example code to test against.

Notice

As a member of the Geeks With Blogs Influencers program, one of the perks is the ability to review products in exchange for a free license to the program. I have not let this affect my opinions of the product in any way, and neither Red Gate nor Geeks With Blogs has tried to influence my opinion regarding this product in any way.

Introduction

The ANTS Profiler pack provided by Red Gate was something that I had not heard of before receiving an email regarding an offer to review it for a license. Since I look to make my code efficient, it was a no-brainer for me to try it out! One thing that took me by surprise is that upon downloading and installing the program you fill out a form with your usual contact information. Sure enough, within 2 hours I received an email from a sales representative at Red Gate asking if she could help me get the most out of my trial time so it wouldn't go to waste. After replying to her and explaining that I was looking to review its feature set, she put me in contact with someone who set up a demo session to give me a quick rundown of its features via an online meeting. After having dealt with a massive ordeal with one of my utility companies and their complete lack of customer service, Red Gate's friendly and helpful representatives were a breath of fresh air, and something I was thankful for.

ANTS CLR Profiler

The ANTS CLR profiler is the thing I want to focus on the most in this post, so I am going to dive right in now. Install was simple and took no time at all. It installed both the CLR and Memory profilers, as well as Visual Studio extensions to facilitate the usage of the profilers (click any images for full size images): The Visual Studio menu options (under the ANTS menu). Starting the CLR Performance Profiler from the start menu yields this window. If you follow the instructions after launching the program from the start menu (click File > New Profiling Session to start a new project), you are given a dialog with plenty of options for profiling: The New Session dialog. Lots of options. One thing I noticed is that the buttons in the lower right were half-covered by the panel of the application. If I had to guess, I would imagine that this is caused by my DPI setting of 125%. This is a problem I have seen in other applications as well that don't scale well at different DPI settings.
The profiler options give you the ability to profile:

.NET executable
ASP.NET web application (hosted in IIS)
ASP.NET web application (hosted in IIS Express)
ASP.NET web application (hosted in the Cassini Web Development Server)
SharePoint web application (hosted in IIS)
Silverlight 4+ application
Windows Service
COM+ server
XBAP (local XAML browser application)
Attach to an already running .NET 4 process

Choosing each option provides a varying set of other variables/options that one can set, including application arguments, operating path, whether to record I/O performance, which performance counters to record (43 counters in all!), etc. All in all, they give you the ability to profile many different .NET project types, and make it simple to do so. In most cases I would be using the built-in Visual Studio extensions, as they automatically start a new profiling project in ANTS with the options set up and then start your program; however, Red Gate has made it easy enough to profile outside of Visual Studio as well. On the flip side, as someone who lives most of their work life in Visual Studio, I do wish that instead of opening an entirely separate application/GUI after launching, they would provide a Visual Studio panel with the information and integrate more of the profiling project information into Visual Studio. So, now that we have an idea of the options the profiler gives us, it's time to test its abilities and features.

Horrendous Example Code – Prime Number Generator

One of my interests besides development is physics and math – what I went to college for. I have especially always been interested in prime numbers, as they are something of a mystery… So, to test the abilities of the profiler, I decided to write a small program, website, and library to generate prime numbers in the quantity that you ask for. I am going to start off with some terrible code, and show how I would see the profiler being used as a development tool.

First off, the IPrimes interface (all code is downloadable at the end of the post):

using System.Collections.Generic;
using System.Linq;

interface IPrimes
{
    IEnumerable<int> GetPrimes(int retrieve);
}

Simple enough, right? Anything that implements the interface will (hopefully) provide an IEnumerable of int, with the quantity specified in the parameter argument. Next, I am going to implement this interface in the most basic way:

public class DumbPrimes : IPrimes
{
    public IEnumerable<int> GetPrimes(int retrieve)
    {
        //store a list of primes already found
        var _foundPrimes = new List<int>() { 2, 3 };

        //if I ask for one or two primes, return what was asked for
        if (retrieve <= _foundPrimes.Count())
            return _foundPrimes.Take(retrieve);

        //the next number to look at
        int _analyzing = 4;

        //since I already determined I don't have enough,
        //execute at least once, and until the quantity is sufficed
        do
        {
            //assume prime until otherwise determined
            bool isPrime = true;

            //start dividing at 2
            //divide until the number is reached, or determined not prime
            for (int i = 2; i < _analyzing && isPrime; i++)
            {
                //if (i) goes into _analyzing without a remainder,
                //_analyzing is NOT prime
                if (_analyzing % i == 0)
                    isPrime = false;
            }

            //if it is prime, add to found list
            if (isPrime)
                _foundPrimes.Add(_analyzing);

            //increment number to analyze next
            _analyzing++;
        } while (_foundPrimes.Count() < retrieve);

        return _foundPrimes;
    }
}

This is the simplest way to get primes, in my opinion.
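A quick aside: the WPF front end is not reproduced inline, so if you want to exercise DumbPrimes and reproduce the uninstrumented baseline timing quoted a little further down, a tiny console harness like the following will do. This is my own sketch, not part of the article's downloadable sample; it only assumes the interface and class above compile in the same project.

using System;
using System.Diagnostics;
using System.Linq;

class Program
{
    static void Main()
    {
        // Assumes IPrimes and DumbPrimes from the article are in this project
        // (or a referenced class library, as in the article's solution).
        IPrimes generator = new DumbPrimes();

        // Time the same workload used in the article: the first 15000 primes.
        var timer = Stopwatch.StartNew();
        var primes = generator.GetPrimes(15000).ToList();
        timer.Stop();

        Console.WriteLine("Generated {0} primes in {1:F2} s (largest: {2})",
            primes.Count, timer.Elapsed.TotalSeconds, primes.Last());
    }
}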
Checking each number by the straight definition of a prime – is it divisible by anything besides 1 and itself. I have included this code in a base class library for my solution, as I am going to use it to demonstrate a couple of features of ANTS.  This class library is consumed by a simple non-MVVM WPF application, and a simple MVC4 website.  I will not post the WPF code here inline, as it is simply an ObservableCollection<int>, a label, two textbox’s, and a button. Starting a new Profiling Session So, in Visual Studio, I have just completed my first stint developing the GUI and DumbPrimes IPrimes class, so now I want to check my codes efficiency by profiling it.  All I have to do is build the solution (surprised initiating a profiling session doesn’t do this, but I suppose I can understand it), and then click the ANTS menu, followed by Profile Performance.  I am then greeted by the profiler starting up and already monitoring my program live: You are provided with a realtime graph at the top, and a pane at the bottom giving you information on how to proceed.  I am going to start by asking my program to show me the first 15000 primes: After the program finally began responding again (I did all the work on the main UI thread – how bad!), I stopped the profiler, which did kill the process of my program too.  One important thing to note, is that the profiler by default wants to give you a lot of detail about the operation – line hit counts, time per line, percent time per line, etc…  The important thing to remember is that this itself takes a lot of time.  When running my program without the profiler attached, it can generate the 15000 primes in 5.18 seconds, compared to 74.5 seconds – almost a 1500 percent increase.  While this may seem like a lot, remember that there is a trade off.  It may be WAY more inefficient, however, I am able to drill down and make improvements to specific problem areas, and then decrease execution time all around. Analyzing the Profiling Session After clicking ‘Stop Profiling’, the process running my application stopped, and the entire execution time was automatically selected by ANTS, and the results shown below: Now there are a number of interesting things going on here, I am going to cover each in a section of its own: Real Time Performance Counter Bar (top of screen) At the top of the screen, is the real time performance bar.  As your application is running, this will constantly update with the currently selected performance counters status.  A couple of cool things to note are the fact that you can drag a selection around specific time periods to drill down the detail views in the lower 2 panels to information pertaining to only that period. After selecting a time period, you can bookmark a section and name it, so that it is easy to find later, or after reloaded at a later time.  You can also zoom in, out, or fit the graph to the space provided – useful for drilling down. It may be hard to see, but at the top of the processor time graph below the time ticks, but above the red usage graph, there is a green bar. This bar shows at what times a method that is selected in the ‘Call tree’ panel is called. Very cool to be able to click on a method and see at what times it made an impact. As I said before, ANTS provides 43 different performance counters you can hook into.  
Click the arrow next to the Performance tab at the top will allow you to change between different counters if you have them selected: Method Call Tree, ADO.Net Database Calls, File IO – Detail Panel Red Gate really hit the mark here I think. When you select a section of the run with the graph, the call tree populates to fill a hierarchical tree of method calls, with information regarding each of the methods.   By default, methods are hidden where the source is not provided (framework type code), however, Red Gate has integrated Reflector into ANTS, so even if you don’t have source for something, you can select a method and get the source if you want.  Methods are also hidden where the impact is seen as insignificant – methods that are only executed for 1% of the time of the overall calling methods time; in other words, working on making them better is not where your efforts should be focused. – Smart! Source Panel – Detail Panel The source panel is where you can see line level information on your code, showing the code for the currently selected method from the Method Call Tree.  If the code is not available, Reflector takes care of it and shows the code anyways! As you can notice, there does seem to be a problem with how ANTS determines what line is the actual line that a call is completed on.  I have suspicions that this may be due to some of the inline code optimizations that the CLR applies upon compilation of the assembly.  In a method with comments, the problem is much more severe: As you can see here, apparently the most offending code in my base library was a comment – *gasp*!  Removing the comments does help quite a bit, however I hope that Red Gate works on their counter algorithm soon to improve the logic on positioning for statistics: I did a small test just to demonstrate the lines are correct without comments. For me, it isn’t a deal breaker, as I can usually determine the correct placements by looking at the application code in the region and determining what makes sense, but it is something that would probably build up some irritation with time. Feature – Suggest Method for Optimization A neat feature to really help those in need of a pointer, is the menu option under tools to automatically suggest methods to optimize/improve: Nice feature – clicking it filters the call tree and stars methods that it thinks are good candidates for optimization.  I do wish that they would have made it more visible for those of use who aren’t great on sight: Process Integration I do think that this could have a place in my process.  After experimenting with the profiler, I do think it would be a great benefit to do some development, testing, and then after all the bugs are worked out, use the profiler to check on things to make sure nothing seems like it is hogging more than its fair share.  For example, with this program, I would have developed it, ran it, tested it – it works, but slowly. After looking at the profiler, and seeing the massive amount of time spent in 1 method, I might go ahead and try to re-implement IPrimes (I actually would probably rewrite the offending code, but so that I can distribute both sets of code easily, I’m just going to make another implementation of IPrimes).  
Using two pieces of knowledge about prime numbers can make this method MUCH more efficient – every prime greater than 3 falls into one of two buckets, 6k - 1 or 6k + 1, and a number is prime if it is not divisible by any of the primes found before it:

public class SmartPrimes : IPrimes
{
    public IEnumerable<int> GetPrimes(int retrieve)
    {
        //store a list of primes already found
        var _foundPrimes = new List<int>() { 2, 3 };

        //if I ask for one or two primes, return what was asked for
        if (retrieve <= _foundPrimes.Count())
            return _foundPrimes.Take(retrieve);

        //the next number to look at
        int _k = 1;

        //since I already determined I don't have enough,
        //execute at least once, and until the quantity is satisfied
        do
        {
            //assume prime until otherwise determined
            bool isPrime = true;
            int potentialPrime;

            //analyze 6k-1
            //assign the value to potential
            potentialPrime = 6 * _k - 1;
            //if there are any primes that divide this, it is NOT a prime number
            //using PLINQ for a quick boost
            isPrime = !_foundPrimes.AsParallel()
                                   .Any(prime => potentialPrime % prime == 0);
            //if it is prime, add to found list
            if (isPrime)
                _foundPrimes.Add(potentialPrime);

            if (_foundPrimes.Count() == retrieve)
                break;

            //analyze 6k+1
            //assign the value to potential
            potentialPrime = 6 * _k + 1;
            //if there are any primes that divide this, it is NOT a prime number
            //using PLINQ for a quick boost
            isPrime = !_foundPrimes.AsParallel()
                                   .Any(prime => potentialPrime % prime == 0);
            //if it is prime, add to found list
            if (isPrime)
                _foundPrimes.Add(potentialPrime);

            //increment k to analyze the next pair
            _k++;
        } while (_foundPrimes.Count() < retrieve);

        return _foundPrimes;
    }
}

Now there are definitely more things I can do to help make this more efficient, but for the scope of this example, I think this is fine (but still hideous)! Profiling this now yields a happy surprise: 27 seconds to generate the 15000 primes with the profiler attached, and only 1.43 seconds without.  One important thing I wanted to call out though was the performance graph now: Notice anything odd?  The %Processor time is above 100%.  This is because there is now more than one core in the operation.  A better label for the chart in my mind would have been %Core time, but to each their own. Another odd thing I noticed was that the profiler seemed to be spot on this time in my DumbPrimes class with line details in source, even with comments.  Odd.

Profiling Web Applications
The last thing that I wanted to cover, and one that means a lot to me as a web developer, is the great amount of work that Red Gate put into the profiler when profiling web applications.  In my solution, I have a simple MVC4 application set up with one page and a single input form that will output prime values as my WPF app did.  Launching the profiler from Visual Studio as before, nothing is really different in the profiler window, however I did receive a UAC prompt, without warning, for a Red Gate helper app that integrates with the web server. After requesting 500, 1000, 2000, and 5000 primes, and looking at the profiler session, things are slightly different from before: As you can see, there are four spikes of activity in the processor time graph, but there is also something new in the call tree: That's right – ANTS will actually group method calls by get/post operations, so it is easier to find out which action/page is giving the largest problems…  Pretty cool in my mind!

Overview
Overall, I think that the Red Gate ANTS CLR Profiler has a lot to offer, however I think it also has a long way to go.
3 Biggest Pros:
- Ability to easily drill down from the time graph, to method calls, to source code
- Wide variety of counters to choose from when profiling your application
- Excellent integration/grouping of methods being called from web applications by request – BRILLIANT!

3 Biggest Cons:
- Issue regarding line details in source view
- Nit pick – Processor time vs. Core time
- Nit pick – Lack of full integration with Visual Studio

Ratings
Ease of Use (7/10) – I marked down here because of the problems with the line-level details and the extra work that entails, and the lack of better integration with Visual Studio.
Effectiveness (10/10) – I believe that the profiler does EXACTLY what it purports to do.  Especially with its large variety of performance counters, a definite plus!
Features (9/10) – Besides the real-time performance monitoring and the drill-downs that I've shown here, ANTS also has great integration with ADO.Net, with the ability to show database queries run by your application in the profiler.  This, together with the line-level details, the web request grouping, Reflector integration, and the various options to customize your profiling session, makes for a great set of features!
Customer Service (10/10) – My entire experience with Red Gate personnel has been nothing but good.  Their people are friendly, helpful, and happy!
UI / UX (8/10) – The interface is very easy to get around, and all of the options are easy to find.  With a little bit of poking around, you'll be optimizing Hello World in no time flat!
Overall (8/10) – Overall, I am happy with the Performance Profiler and its features, as well as with the service I received when working with the Red Gate personnel.  I WOULD recommend trying the application and seeing if it would fit into your process, BUT remember there are still some kinks in it to hopefully be worked out.

My next post will definitely be shorter (hopefully), but thank you for reading up to here, or skipping ahead!  Please, if you do try the product, drop me a message and let me know what you think!  I would love to hear any opinions you may have on the product.

Code
Feel free to download the code I used above – download via DropBox

    Read the article

  • Microsoft Introduces WebMatrix

    - by Rick Strahl
    originally published in CoDe Magazine Editorial Microsoft recently released the first CTP of a new development environment called WebMatrix, which along with some of its supporting technologies are squarely aimed at making the Microsoft Web Platform more approachable for first-time developers and hobbyists. But in the process, it also provides some updated technologies that can make life easier for existing .NET developers. Let’s face it: ASP.NET development isn’t exactly trivial unless you already have a fair bit of familiarity with sophisticated development practices. Stick a non-developer in front of Visual Studio .NET or even the Visual Web Developer Express edition and it’s not likely that the person in front of the screen will be very productive or feel inspired. Yet other technologies like PHP and even classic ASP did provide the ability for non-developers and hobbyists to become reasonably proficient in creating basic web content quickly and efficiently. WebMatrix appears to be Microsoft’s attempt to bring back some of that simplicity with a number of technologies and tools. The key is to provide a friendly and fully self-contained development environment that provides all the tools needed to build an application in one place, as well as tools that allow publishing of content and databases easily to the web server. WebMatrix is made up of several components and technologies: IIS Developer Express IIS Developer Express is a new, self-contained development web server that is fully compatible with IIS 7.5 and based on the same codebase that IIS 7.5 uses. This new development server replaces the much less compatible Cassini web server that’s been used in Visual Studio and the Express editions. IIS Express addresses a few shortcomings of the Cassini server such as the inability to serve custom ISAPI extensions (i.e., things like PHP or ASP classic for example), as well as not supporting advanced authentication. IIS Developer Express provides most of the IIS 7.5 feature set providing much better compatibility between development and live deployment scenarios. SQL Server Compact 4.0 Database access is a key component for most web-driven applications, but on the Microsoft stack this has mostly meant you have to use SQL Server or SQL Server Express. SQL Server Compact is not new-it’s been around for a few years, but it’s been severely hobbled in the past by terrible tool support and the inability to support more than a single connection in Microsoft’s attempt to avoid losing SQL Server licensing. The new release of SQL Server Compact 4.0 supports multiple connections and you can run it in ASP.NET web applications simply by installing an assembly into the bin folder of the web application. In effect, you don’t have to install a special system configuration to run SQL Compact as it is a drop-in database engine: Copy the small assembly into your BIN folder (or from the GAC if installed fully), create a connection string against a local file-based database file, and then start firing SQL requests. Additionally WebMatrix includes nice tools to edit the database tables and files, along with tools to easily upsize (and hopefully downsize in the future) to full SQL Server. This is a big win, pending compatibility and performance limits. In my simple testing the data engine performed well enough for small data sets. This is not only useful for web applications, but also for desktop applications for which a fully installed SQL engine like SQL Server would be overkill. 
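To make the drop-in model concrete, the code involved is roughly the following – a minimal sketch assuming the standard System.Data.SqlServerCe provider classes; the database file, table, and column names are placeholders:

using System;
using System.Data.SqlServerCe;   // the SQL Server Compact 4.0 provider assembly

class SqlCompactDemo
{
    static void Main()
    {
        // Connection string points at a local file-based database
        // ("AppData.sdf" and the Customers/CompanyName names are made up here)
        string connectionString = "Data Source=AppData.sdf";

        using (var connection = new SqlCeConnection(connectionString))
        {
            connection.Open();

            using (var command = new SqlCeCommand("SELECT CompanyName FROM Customers", connection))
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine(reader.GetString(0));
                }
            }
        }
    }
}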
Having a local data store in those applications that can potentially be accessed by multiple users is a welcome feature.

ASP.NET Razor View Engine
What? Yet another native ASP.NET view engine? We already have Web Forms and various different flavors of using that view engine with Web Forms and MVC. Do we really need another? Microsoft thinks so, and Razor is an implementation of a lightweight, script-only view engine. Unlike the Web Forms view engine, Razor works only with inline code, snippets, and markup; therefore, it is more in line with current thinking of what a view engine should represent. There's no support for a "page model" or any of the other Web Forms features of the full-page framework, but just a lightweight scripting engine that works with plain markup plus embedded expressions and code. The markup syntax for Razor is geared for minimal typing, plus some progressive detection of where a script block/expression starts and ends. This results in a much leaner syntax than the typical ASP.NET Web Forms alligator (<% %>) tags. Razor uses the @ sign plus standard C# (or Visual Basic) block syntax to delineate code snippets and expressions. Here's a very simple example of what Razor markup looks like along with some comment annotations:

<!DOCTYPE html>
<html>
    <head>
        <title></title>
    </head>
    <body>
    <h1>Razor Test</h1>

    <!-- simple expressions -->
    @DateTime.Now
    <hr />
    <!-- method expressions -->
    @DateTime.Now.ToString("T")

    <!-- code blocks -->
    @{
        List<string> names = new List<string>();
        names.Add("Rick");
        names.Add("Markus");
        names.Add("Claudio");
        names.Add("Kevin");
    }

    <!-- structured block statements -->
    <ul>
    @foreach(string name in names){
            <li>@name</li>
    }
    </ul>

    <!-- Conditional code -->
    @if(true) {
        <!-- Literal Text embedding in code -->
        <text>
        true
        </text>;
    }
    else
    {
        <!-- Literal Text embedding in code -->
        <text>
        false
        </text>;
    }
    </body>
</html>

Like the Web Forms view engine, Razor parses pages into code, and then executes that run-time compiled code. Effectively a "page" becomes a code file with markup becoming literal text written into the Response stream, code snippets becoming raw code, and expressions being written out with Response.Write(). The code generated from Razor doesn't look much different from similar Web Forms code that only uses script tags; so although the syntax may look different, the operational model is fairly similar to the Web Forms engine minus the overhead of the large Page object model. However, there are differences: Razor pages are based on a new base class, Microsoft.WebPages.WebPage, which is hosted in the Microsoft.WebPages assembly that houses all the Razor engine parsing and processing logic. Browsing through the assembly (in the generated ASP.NET Temporary Files folder or GAC) will give you a good idea of the functionality that Razor provides. If you look closely, a lot of the feature set matches ASP.NET MVC's view implementation as well as many of the helper classes found in MVC. It's not hard to guess the motivation for this sort of view engine: For beginning developers the simple markup syntax is easier to work with, although you obviously still need to have some understanding of the .NET Framework in order to create dynamic content.
The syntax is easier to read and grok and much shorter to type than ASP.NET alligator tags (<% %>) and also easier to understand aesthetically what’s happening in the markup code. Razor also is a better fit for Microsoft’s vision of ASP.NET MVC: It’s a new view engine without the baggage of Web Forms attached to it. The engine is more lightweight since it doesn’t carry all the features and object model of Web Forms with it and it can be instantiated directly outside of the HTTP environment, which has been rather tricky to do for the Web Forms view engine. Having a standalone script parser is a huge win for other applications as well – it makes it much easier to create script or meta driven output generators for many types of applications from code/screen generators, to simple form letters to data merging applications with user customizability. For me personally this is very useful side effect and who knows maybe Microsoft will actually standardize they’re scripting engines (die T4 die!) on this engine. Razor also better fits the “view-based” approach where the view is supposed to be mostly a visual representation that doesn’t hold much, if any, code. While you can still use code, the code you do write has to be self-contained. Overall I wouldn’t be surprised if Razor will become the new standard view engine for MVC in the future – and in fact there have been announcements recently that Razor will become the default script engine in ASP.NET MVC 3.0. Razor can also be used in existing Web Forms and MVC applications, although that’s not working currently unless you manually configure the script mappings and add the appropriate assemblies. It’s possible to do it, but it’s probably better to wait until Microsoft releases official support for Razor scripts in Visual Studio. Once that happens, you can simply drop .cshtml and .vbhtml pages into an existing ASP.NET project and they will work side by side with classic ASP.NET pages. WebMatrix Development Environment To tie all of these three technologies together, Microsoft is shipping WebMatrix with an integrated development environment. An integrated gallery manager makes it easy to download and load existing projects, and then extend them with custom functionality. It seems to be a prominent goal to provide community-oriented content that can act as a starting point, be it via a custom templates or a complete standard application. The IDE includes a project manager that works with a single project and provides an integrated IDE/editor for editing the .cshtml and .vbhtml pages. A run button allows you to quickly run pages in the project manager in a variety of browsers. There’s no debugging support for code at this time. Note that Razor pages don’t require explicit compilation, so making a change, saving, and then refreshing your page in the browser is all that’s needed to see changes while testing an application locally. It’s essentially using the auto-compiling Web Project that was introduced with .NET 2.0. All code is compiled during run time into dynamically created assemblies in the ASP.NET temp folder. WebMatrix also has PHP Editing support with syntax highlighting. You can load various PHP-based applications from the WebMatrix Web Gallery directly into the IDE. Most of the Web Gallery applications are ready to install and run without further configuration, with Wizards taking you through installation of tools, dependencies, and configuration of the database as needed. 
WebMatrix leverages the Web Platform installer to pull the pieces down from websites in a tight integration of tools that worked nicely for the four or five applications I tried this out on. Click a couple of check boxes and fill in a few simple configuration options and you end up with a running application that’s ready to be customized. Nice! You can easily deploy completed applications via WebDeploy (to an IIS server) or FTP directly from within the development environment. The deploy tool also can handle automatically uploading and installing the database and all related assemblies required, making deployment a simple one-click install step. Simplified Database Access The IDE contains a database editor that can edit SQL Compact and SQL Server databases. There is also a Database helper class that facilitates database access by providing easy-to-use, high-level query execution and iteration methods: @{       var db = Database.OpenFile("FirstApp.sdf");     string sql = "select * from customers where Id > @0"; } <ul> @foreach(var row in db.Query(sql,1)){         <li>@row.FirstName @row.LastName</li> } </ul> The query function takes a SQL statement plus any number of positional (@0,@1 etc.) SQL parameters by simple values. The result is returned as a collection of rows which in turn have a row object with dynamic properties for each of the columns giving easy (though untyped) access to each of the fields. Likewise Execute and ExecuteNonQuery allow execution of more complex queries using similar parameter passing schemes. Note these queries use string-based queries rather than LINQ or Entity Framework’s strongly typed LINQ queries. While this may seem like a step back, it’s also in line with the expectations of non .NET script developers who are quite used to writing and using SQL strings in code rather than using OR/M frameworks. The only question is why was something not included from the beginning in .NET and Microsoft made developers build custom implementations of these basic building blocks. The implementation looks a lot like a DataTable-style data access mechanism, but to be fair, this is a common approach in scripting languages. This type of syntax that uses simple, static, data object methods to perform simple data tasks with one line of code are common in scripting languages and are a good match for folks working in PHP/Python, etc. Seems like Microsoft has taken great advantage of .NET 4.0’s dynamic typing to provide this sort of interface for row iteration where each row has properties for each field. FWIW, all the examples demonstrate using local SQL Compact files - I was unable to get a SQL Server connection string to work with the Database class (the connection string wasn’t accepted). However, since the code in the page is still plain old .NET, you can easily use standard ADO.NET code or even LINQ or Entity Framework models that are created outside of WebMatrix in separate assemblies as required. The good the bad the obnoxious - It’s still .NET The beauty (or curse depending on how you look at it :)) of Razor and the compilation model is that, behind it all, it’s still .NET. Although the syntax may look foreign, it’s still all .NET behind the scenes. You can easily access existing tools, helpers, and utilities simply by adding them to the project as references or to the bin folder. Razor automatically recognizes any assembly reference from assemblies in the bin folder. 
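Rounding out the Database helper example from earlier, a write operation using the Execute method mentioned above looks roughly like the following – a hedged sketch that assumes Execute takes the same positional-parameter scheme as Query and returns the number of affected records; the table and column names are again just placeholders:

@{
    var db = Database.OpenFile("FirstApp.sdf");

    // Same @0, @1 positional parameters as Query
    var affected = db.Execute(
        "insert into customers (FirstName, LastName) values (@0, @1)",
        "Jane", "Doe");
}
<p>@affected record(s) inserted.</p>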
In the default configuration, Microsoft provides a host of helper functions in a Microsoft.WebPages assembly (check it out in the ASP.NET temp folder for your application), which includes a host of HTML Helpers. If you’ve used ASP.NET MVC before, a lot of the helpers should look familiar. Documentation at the moment is sketchy-there’s a very rough API reference you can check out here: http://www.asp.net/webmatrix/tutorials/asp-net-web-pages-api-reference Who needs WebMatrix? Uhm… good Question Clearly Microsoft is trying hard to create an environment with WebMatrix that is easy to use for newbie developers. The goal seems to be simplicity in providing a minimal development environment and an easy-to-use script engine/language that makes it easy to get started with. There’s also some focus on community features that can be used as starting points, such as Web Gallery applications and templates. The community features in particular are very nice and something that would be nice to eventually see in Visual Studio as well. The question is whether this is too little too late. Developers who have been clamoring for a simpler development environment on the .NET stack have mostly left for other simpler platforms like PHP or Python which are catering to the down and dirty developer. Microsoft will be hard pressed to win those folks-and other hardcore PHP developers-back. Regardless of how much you dress up a script engine fronted by the .NET Framework, it’s still the .NET Framework and all the complexity that drives it. While .NET is a fine solution in its breadth and features once you get a basic handle on the core features, the bar of entry to being productive with the .NET Framework is still pretty high. The MVC style helpers Microsoft provides are a good step in the right direction, but I suspect it’s not enough to shield new developers from having to delve much deeper into the Framework to get even basic applications built. Razor and its helpers is trying to make .NET more accessible but the reality is that in order to do useful stuff that goes beyond the handful of simple helpers you still are going to have to write some C# or VB or other .NET code. If the target is a hobby/amateur/non-programmer the learning curve isn’t made any easier by WebMatrix it’s just been shifted a tad bit further along in your development endeavor when you run out of canned components that are supplied either by Microsoft or the community. The database helpers are interesting and actually I’ve heard a lot of discussion from various developers who’ve been resisting .NET for a really long time perking up at the prospect of easier data access in .NET than the ridiculous amount of code it takes to do even simple data access with raw ADO.NET. It seems sad that such a simple concept and implementation should trigger this sort of response (especially since it’s practically trivial to create helpers like these or pick them up from countless libraries available), but there it is. It also shows that there are plenty of developers out there who are more interested in ‘getting stuff done’ easily than necessarily following the latest and greatest practices which are overkill for many development scenarios. Sometimes it seems that all of .NET is focused on the big life changing issues of development, rather than the bread and butter scenarios that many developers are interested in to get their work accomplished. 
And that in the end may be WebMatrix’s main raison d'être: To bring some focus back at Microsoft that simpler and more high level solutions are actually needed to appeal to the non-high end developers as well as providing the necessary tools for the high end developers who want to follow the latest and greatest trends. The current version of WebMatrix hits many sweet spots, but it also feels like it has a long way to go before it really can be a tool that a beginning developer or an accomplished developer can feel comfortable with. Although there are some really good ideas in the environment (like the gallery for downloading apps and components) which would be a great addition for Visual Studio as well, the rest of the development environment just feels like crippleware with required functionality missing especially debugging and Intellisense, but also general editor support. It’s not clear whether these are because the product is still in an early alpha release or whether it’s simply designed that way to be a really limited development environment. While simple can be good, nobody wants to feel left out when it comes to necessary tool support and WebMatrix just has that left out feeling to it. If anything WebMatrix’s technology pieces (which are really independent of the WebMatrix product) are what are interesting to developers in general. The compact IIS implementation is a nice improvement for development scenarios and SQL Compact 4.0 seems to address a lot of concerns that people have had and have complained about for some time with previous SQL Compact implementations. By far the most interesting and useful technology though seems to be the Razor view engine for its light weight implementation and it’s decoupling from the ASP.NET/HTTP pipeline to provide a standalone scripting/view engine that is pluggable. The first winner of this is going to be ASP.NET MVC which can now have a cleaner view model that isn’t inconsistent due to the baggage of non-implemented WebForms features that don’t work in MVC. But I expect that Razor will end up in many other applications as a scripting and code generation engine eventually. Visual Studio integration for Razor is currently missing, but is promised for a later release. The ASP.NET MVC team has already mentioned that Razor will eventually become the default MVC view engine, which will guarantee continued growth and development of this tool along those lines. And the Razor engine and support tools actually inherit many of the features that MVC pioneered, so there’s some synergy flowing both ways between Razor and MVC. As an existing ASP.NET developer who’s already familiar with Visual Studio and ASP.NET development, the WebMatrix IDE doesn’t give you anything that you want. The tools provided are minimal and provide nothing that you can’t get in Visual Studio today, except the minimal Razor syntax highlighting, so there’s little need to take a step back. With Visual Studio integration coming later there’s little reason to look at WebMatrix for tooling. It’s good to see that Microsoft is giving some thought about the ease of use of .NET as a platform For so many years, we’ve been piling on more and more new features without trying to take a step back and see how complicated the development/configuration/deployment process has become. Sometimes it’s good to take a step - or several steps - back and take another look and realize just how far we’ve come. 
WebMatrix is one of those reminders and one that likely will result in some positive changes on the platform as a whole. © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET, IIS7

    Read the article

  • Issue 15: Oracle Exadata Marketing Campaigns

    - by rituchhibber
PARTNER FOCUS
Oracle Exadata Marketing Campaign
Steve McNickle, VP Europe, cVidya
Steve McNickle is VP Europe for cVidya, an innovative provider of revenue intelligence solutions for telecom, media and entertainment service providers including AT&T, BT, Deutsche Telekom and Vodafone. The company's product portfolio helps operators and service providers maximise margins, improve customer experience and optimise ecosystem relationships through revenue assurance, fraud and security management, sales performance management, pricing analytics, and inter-carrier services. cVidya has partnered with Oracle for more than a decade. Please could you tell us a little about cVidya's partnering history with Oracle, and expand on your Oracle Exastack accreditations? "cVidya was established just over ten years ago and we've had a strong relationship with Oracle almost since the very beginning. Through our Revenue Intelligence work with some of the world's largest service providers we collect tremendous amounts of information, amounting to billions of records per day. We help our clients to collect, store and analyse that data to ensure that their end customers are getting the best levels of service, are billed correctly, and are happy that they are on the correct price plan. We have been an Oracle Gold level partner for seven years, and crucially just two months ago we were also accredited as Oracle Exastack Optimized for MoneyMap, our core Revenue Assurance solution. Very soon we also expect to be Oracle Exastack Optimized for DRMap, our Data Retention solution." What unique capabilities and customer benefits does Oracle Exastack add to your applications? "Oracle Exastack enables us to deliver radical benefits to our customers. A typical mobile operator in the UK might handle between 500 million and two billion call data record details daily. Each transaction needs to be validated, billed correctly and fraud checked. Because of the enormous volumes involved, our clients demand scalable infrastructure that allows them to efficiently acquire, store and process all that data within controlled cost, space and environmental constraints. We have proved that the Oracle Exadata system can process data up to seven times faster and load it as much as 20 times faster than other standard best-of-breed server approaches. With the Oracle Exadata Database Machine they can reduce their datacentre equipment from, say, the six or seven cabinets that they needed in the past, down to just one. This dramatic simplification delivers incredible value to the customer by cutting down enormously on all of their significant cost, space, energy, cooling and maintenance overheads." "The Oracle Exastack Program has given our clients the ability to switch their focus from reactive to proactive. Traditionally they may have spent 80 percent of their day processing, and just 20 percent enabling end customers to see advanced analytics, and avoiding issues before they occur. With our solutions and Oracle Exadata they can now switch that balance around entirely, resulting not only in reduced revenue leakage, but a far higher focus on proactive leakage prevention." How has the Oracle Exastack Program transformed your customer business?
"We can already see the impact. Oracle solutions allow our delivery teams to achieve successful deployments, happy customers and self-satisfaction, and the power of Oracle's Exa solutions is easy to measure in terms of their transformational ability. We gained our first sale into a major European telco by demonstrating the major performance gains that would transform their business. Clients can measure the ease of organisational change, the early prevention of business issues, the reduction in manpower required to provide protection and coverage across all their products and services, plus of course end customer satisfaction. If customers know that that service is provided accurately and that their bills are calculated correctly, then over time this satisfaction can be attributed to revenue intelligence and the underlying systems which provide it. Combine this with the further integration we have with the other layers of the Oracle stack, including the telecommunications offerings such as NCC, OCDM and BRM, and the result is even greater customer value—not to mention the increased speed to market and the reduced project risk." What does the Oracle Exastack community bring to cVidya, both in terms of general benefits, and also tangible new opportunities and partnerships? "A great deal. We have participated in the Oracle Exastack community heavily over the past year, and have had lots of meetings with Oracle and our peers around the globe. It brings us into contact with like-minded, innovative partners, who like us are not happy to just stand still and want to take fresh technology to their customer base in order to gain enhanced value. We identified three new partnerships in each of two recent meetings, and hope these will open up new opportunities, not only in areas that exactly match where we operate today, but also in some new associative areas that will expand our reach into new business sectors. Notably, thanks to the Exastack community we were invited on stage at last year's Oracle OpenWorld conference. Appearing so publically with Oracle senior VP Judson Althoff elevated awareness and visibility of cVidya and has enabled us to participate in a number of other events with Oracle over the past eight months. We've been involved in speaking opportunities, forums and exhibitions, providing us with invaluable opportunities that we wouldn't otherwise have got close to." How has Exastack differentiated cVidya as an ISV, and helped you to evolve your business to the next level? "When we are selling to our core customer base of Tier 1 telecommunications providers, we know that they want more than just software. They want an enduring partnership that will last many years, they want innovation, and a forward thinking partner who knows how to guide them on where they need to be to meet market demand three, five or seven years down the line. Membership of respected global bodies, such as the Telemanagement Forum enables us to lead standard adherence in our area of business, giving us a lot of credibility, but Oracle is also involved in this forum with its own telecommunications portfolio, strengthening our position still further. When we approach CEOs, CTOs and CIOs at the very largest Tier 1 operators, not only can we easily show them that our technology is fantastic, we can also talk about our strong partnership with Oracle, and our joint embracing of today's standards and tomorrow's innovation." Where would you like cVidya to be in one year's time? 
"We want to get all of our relevant products Oracle Exastack Optimized. Our MoneyMap Revenue Assurance solution is already Exastack Optimised, our DRMAP Data Retention Solution should be Exastack Optimised within the next month, and our FraudView Fraud Management solution within the next two to three months. We'd then like to extend our Oracle accreditation out to include other members of the Oracle Engineered Systems family. We are moving into the 'Big Data' space, and so we're obviously very keen to work closely with Oracle to conduct pilots, map new technologies onto Oracle Big Data platforms, and embrace and measure the benefits of other Oracle systems, namely Oracle Exalogic Elastic Cloud, the Oracle Exalytics In-Memory Machine and the Oracle SPARC SuperCluster. We would also like to examine how the Oracle Database Appliance might benefit our Tier 2 service provider customers. Finally, we'd also like to continue working with the Oracle Communications Global Business Unit (CGBU), furthering our integration with Oracle billing products so that we are able to quickly deploy fraud solutions into Oracle's Engineered System stack, give operational benefits to our clients that are pre-integrated, more cost-effective, and can be rapidly deployed rapidly and producing benefits in three months, not nine months." Chris Baker ,Senior Vice President, Oracle Worldwide ISV-OEM-Java Sales Chris Baker is the Global Head of ISV/OEM Sales responsible for working with ISV/OEM partners to maximise Oracle's business through those partners, whilst maximising those partners' business to their end users. Chris works with partners, customers, innovators, investors and employees to develop innovative business solutions using Oracle products, services and skills. Firstly, could you please explain Oracle's current strategy for ISV partners, globally and in EMEA? "Oracle customers use independent software vendor (ISV) applications to run their businesses. They use them to generate revenue and to fulfil obligations to their own customers. Our strategy is very straight-forward. We want all of our ISV partners and OEMs to concentrate on the things that they do the best – building applications to meet the unique industry and functional requirements of their customer. We want to ensure that we deliver a best in class application platform so the ISV is free to concentrate their effort on their application functionality and user experience We invest over four billion dollars in research and development every year, and we want our ISVs to benefit from all of that investment in operating systems, virtualisation, databases, middleware, engineered systems, and other hardware. By doing this, we help them to reduce their costs, gain more consistency and agility for quicker implementations, and also rapidly differentiate themselves from other application vendors. It's all about simplification because we believe that around 25 to 30 percent of the development costs incurred by many ISVs are caused by customising infrastructure and have nothing to do with their applications. Our strategy is to enable our ISV partners to standardise their application platform using engineered architecture, so they can write once to the Oracle stack and deploy seamlessly in the cloud, on-premise, or in hybrid deployments. 
It's really important that architecture is the same in order to keep cost and time overheads at a minimum, so we provide standardisation and an environment that enables our ISVs to concentrate on the core business that makes them the most money and brings them success." How do you believe this strategy is helping the ISVs to work hand-in-hand with Oracle to ensure that end customers get the industry-leading solutions that they need? "We work with our ISVs not just to help them be successful, but also to help them market themselves. We have something called the 'Oracle Exastack Ready Program', which enables ISVs to publicise themselves as 'Ready' to run the core software platforms that run on Oracle's engineered systems including Exadata and Exalogic. So, for example, they can become 'Database Ready' which means that they use the latest version of Oracle Database and therefore can run their application without modification on Exadata or the Oracle Database Appliance. Alternatively, they can become WebLogic Ready, Oracle Linux Ready and Oracle Solaris Ready which means they run on the latest release and therefore can run their application, with no new porting work, on Oracle Exalogic. Those 'Ready' logos are important in helping ISVs advertise to their customers that they are using the latest technologies which have been fully tested. We now also have Exadata Ready and Exalogic Ready programmes which allow ISVs to promote the certification of their applications on these platforms. This highlights these partners to Oracle customers as having solutions that run fluently on the Oracle Exadata Database Machine, the Oracle Exalogic Elastic Cloud or one of our other engineered systems. This makes it easy for customers to identify solutions and provides ISVs with an avenue to connect with Oracle customers who are rapidly adopting engineered systems. We have also taken this programme to the next level in the shape of 'Oracle Exastack Optimized' for partners whose applications run best on the Oracle stack and have invested the time to fully optimise application performance. We ensure that Exastack Optimized partner status is promoted and supported by press releases, and we help our ISVs go to market and differentiate themselves through the use our technology and the standardisation it delivers. To date we have had several hundred organisations successfully work through our Exastack Optimized programme." How does Oracle's strategy of offering pre-integrated open platform software and hardware allow ISVs to bring their products to market more quickly? "One of the problems for many ISVs is that they have to think very carefully about the technology on which their solutions will be deployed, particularly in the cloud or hosted environments. They have to think hard about how they secure these environments, whether the concern is, for example, middleware, identity management, or securing personal data. If they don't use the technology that we build-in to our products to help them to fulfil these roles, they then have to build it themselves. This takes time, requires testing, and must be maintained. By taking advantage of our technology, partners will now know that they have a standard platform. They will know that they can confidently talk about implementation being the same every time they do it. Very large ISV applications could once take a year or two to be implemented at an on-premise environment. 
But it wasn't just the configuration of the application that took the time, it was actually the infrastructure - the different hardware configurations, operating systems and configurations of databases and middleware. Now we strongly believe that it's all about standardisation and repeatability. It's about making sure that our partners can do it once and are then able to roll it out many different times using standard componentry." What actions would you recommend for existing ISV partners that are looking to do more business with Oracle and its customer base, not only to maximise benefits, but also to maximise partner relationships? "My team, around the world and in the EMEA region, is available and ready to talk to any of our ISVs and to explore the possibilities together. We run programmes like 'Excite' and 'Insight' to help us to understand how we can help ISVs with architecture and widen their environments. But we also want to work with, and look at, new opportunities - for example, the Machine-to-Machine (M2M) market or 'The Internet of Things'. Over the next few years, many millions, indeed billions of devices will be collecting massive amounts of data and communicating it back to the central systems where ISVs will be running their applications. The only way that our partners will be able to provide a single vendor 'end-to-end' solution is to use Oracle integrated systems at the back end and Java on the 'smart' devices collecting the data – a complete solution from device to data centre. So there are huge opportunities to work closely with our ISVs, using Oracle's complete M2M platform, to provide the infrastructure that enables them to extract maximum value from the data collected. If any partners don't know where to start or who to contact, then they can contact me directly at [email protected] or indeed any of our teams across the EMEA region. We want to work with ISVs to help them to be as successful as they possibly can through simplification and speed to market, and we also want all of the top ISVs in the world based on Oracle." What opportunities are immediately opened to new ISV partners joining the OPN? "As you know OPN is very, very important. New members will discover a huge amount of content that instantly becomes accessible to them. They can access a wealth of no-cost training and enablement materials to build their expertise in Oracle technology. They can download Oracle software and use it for development projects. They can help themselves become more competent by becoming part of a true community and uncovering new opportunities by working with Oracle and their peers in the Oracle Partner Network. As well as publishing massive amounts of information on OPN, we also hold our global Oracle OpenWorld event, at which partners play a huge role. This takes place at the end of September and the beginning of October in San Francisco. Attending ISV partners have an unrivalled opportunity to contribute to elements such as the OpenWorld / OPN Exchange, at which they can talk to other partners and really begin thinking about how they can move their businesses on and play key roles in a very large ecosystem which revolves around technology and standardisation." Finally, are there any other messages that you would like to share with the Oracle ISV community? "The crucial message that I always like to reinforce is architecture, architecture and architecture! 
The key opportunities that ISVs have today revolve around standardising their architectures so that they can confidently think: "I will be able to do exactly the same thing whenever a customer is looking to deploy on-premise, hosted or in the cloud". The right architecture is critical to being competitive and to really start changing the game. We want to help our ISV partners to do just that; to establish standard architecture and to seize the opportunities it opens up for them. New market opportunities like M2M are enormous - just look at how many devices are all around you right now. We can help our partners to interface with these devices more effectively while thinking about their entire ecosystem, rather than just the piece that they have traditionally focused upon. With standardised architecture, we can help people dramatically improve their speed, reach, agility and delivery of enhanced customer satisfaction and value all the way from the Java side to their centralised systems. All Oracle ISV partners must take advantage of these opportunities, which is why Oracle will continue to invest in and support them." -- Gergely Strbik is Oracle Hardware and Software Product Manager for Avnet in Hungary. Avnet Technology Solutions is an Oracle Value Added Distributor focused on the development of the existing Oracle channel. This includes the recruitment and enablement of Oracle partners as well as driving deeper adoption of Oracle's technology and application products within the IT channel. "The main business benefits of ODA for our customers and partners are scalability, flexibility, a great price point for the high performance delivered, and the easily configurable embedded Linux operating system. People welcome a lower point of entry and the ability to grow capacity on demand as their business expands." "Marketing and selling the ODA requires another way of thinking because it is an appliance. We have to transform the ways in which our partners and customers think from buying hardware and software independently to buying complete solutions. Successful early adopters and satisfied customer reactions will certainly help us to sell the ODA. We will have more experience with the product after the first deliveries and installations – end users need to see the power and benefits for themselves." "Our typical ODA customers will be those looking for complete solutions from a single reseller partner who is also able to manage the appliance. They will have enjoyed using Oracle Database but now want a new product that is able to unlock new levels of performance. A higher proportion of potential customers will come from our existing Oracle base, with around 30% from new business, but we intend to evangelise the ODA on the market to see how we can change this balance as all our customers adjust to the concept of 'Hardware and Software, Engineered to Work Together'."

    Read the article

  • Partner Blog Series: PwC Perspectives - The Gotchas, The Do's and Don'ts for IDM Implementations

    - by Tanu Sood
It is generally accepted
among business communities that technology by itself is not a silver bullet to all problems, but when it is combined with leading practices, strategy, careful planning and execution, it can create a recipe for success. This post attempts to highlight some of the best practices along with dos & don’ts that our practice has accumulated over the years in the identity & access management space in general, and also in the context of R2, in particular. Best Practices The following section illustrates the leading practices in “How” to plan, implement and sustain a successful OIM deployment, based on our collective experience. Planning is critical, but often overlooked A common approach to planning an IAM program that we identify with our clients is the three step process involving a current state assessment, a future state roadmap and an executable strategy to get there. It is extremely beneficial for clients to assess their current IAM state, perform gap analysis, document the recommended controls to address the gaps, align future state roadmap to business initiatives and get buy in from all stakeholders involved to improve the chances of success. When designing an enterprise-wide solution, the scalability of the technology must accommodate the future growth of the enterprise and the projected identity transactions over several years. Aligning the implementation schedule of OIM to related information technology projects increases the chances of success. As a baseline, it is recommended to match hardware specifications to the sizing guide for R2 published by Oracle. Adherence to this will help ensure that the hardware used to support OIM will not become a bottleneck as the adoption of new services increases. If your Organization has numerous connected applications that rely on reconciliation to synchronize the access data into OIM, consider hosting dedicated instances to handle reconciliation. Finally, ensure the use of clustered environment for development and have at least three total environments to help facilitate a controlled migration to production. If your Organization is planning to implement role based access control, we recommend performing a role mining exercise and consolidate your enterprise roles to keep them manageable. In addition, many Organizations have multiple approval flows to control access to critical roles, applications and entitlements. If your Organization falls into this category, we highly recommend that you limit the number of approval workflows to a small set. Most Organizations have operations managed across data centers with backend database synchronization, if your Organization falls into this category, ensure that the overall latency between the datacenters when replicating the databases is less than ten milliseconds to ensure that there are no front office performance impacts. Ingredients for a successful implementation During the development phase of your project, there are a number of guidelines that can be followed to help increase the chances for success. Most implementations cannot be completed without the use of customizations. If your implementation requires this, it’s a good practice to perform code reviews to help ensure quality and reduce code bottlenecks related to performance. We have observed at our clients that the development process works best when team members adhere to coding leading practices. Plan for time to correct coding defects and ensure developers are empowered to report their own bugs for maximum transparency. 
Many organizations struggle with defining a consistent approach to managing logs. This is particularly important given the amount of information that can be logged by OIM. We recommend Oracle Diagnostic Logging (ODL) as an alternative logging framework: ODL allows log files to be formatted in XML for easy parsing and does not require a server restart when log levels are changed during troubleshooting.

Testing is a vital part of any large project, and an OIM R2 implementation is no exception. We suggest that at least one lower environment use production-like data and connectors, with configurations matching production as closely as possible. For example, use secure channels between OIM and target platforms in pre-production environments to test the configurations, the migration processes of certificates, and the additional overhead that encryption could impose. Finally, we ask our clients to perform database backups regularly and before any major change event, such as a patch or a migration between environments. Even in the lowest environments, we recommend taking at least a weekly backup in order to prevent significant loss of time and effort. Similarly, if your organization is using virtual machines for one or more of the environments, take frequent snapshots so that rollbacks can occur in the event of improper configuration.

Operate and sustain the solution to derive maximum benefits

When migrating OIM R2 to production, it is important to perform certain activities that will help achieve a smoother transition. At our clients, we have seen that splitting the OIM tables into their own tablespaces by category (physical tables, indexes, etc.) can help manage database growth effectively. If we notice that a client hasn't enabled the Oracle-recommended indexing in the applicable database, we strongly suggest doing so to improve performance. Additionally, we work with our clients to make sure that the audit level is set to fit the organization's auditing needs, and sometimes even allocate the UPA tables and indexes into their own tablespace for easier maintenance. Finally, many of our clients have set up schedules for reconciliation tables to be archived at regular intervals in order to keep the size of the database(s) reasonable and maintain optimal database performance.

For clients that anticipate availability issues with target applications, we strongly encourage the use of the offline provisioning capabilities of OIM R2. This reduces the provisioning process's dependency on target availability for a given application and helps avoid broken workflows. To account for this and other abnormalities, we also advocate that OIM's monitoring controls be configured to alert administrators of any abnormal situations.

Within OIM R2, we have begun advising our clients to utilize the 'profile' feature to encapsulate multiple commonly requested accounts, roles, and/or entitlements into a single item. By setting up a number of profiles that can be searched for and used, users spend less time performing the exact same steps for common tasks.

We advise our clients to follow the Oracle-recommended guides for database and application server tuning, which provide a good baseline configuration. They offer guidance on database connection pools, connection timeouts, user interface threads and proper handling of adapters/plug-ins. All of these are important configurations that allow faster provisioning and better web page response times.
Many of our clients have begun to recognize the value of data mining and a remediation process during the initial phases of an implementation (to help ensure high-quality data gets loaded) and beyond (to support ongoing maintenance and business-as-usual processes). A successful program always begins with identifying the data elements and assigning a classification level based on criticality, risk, and availability, and it should follow through with a remediation process.

Dos and Don'ts

Here are the most common dos and don'ts that we socialize with our clients, derived from our experience implementing the solution.

Dos:
- Scope the project into phases with realistic goals. Look for quick wins to show success and value to the stakeholders.
- Establish an enterprise ID (a universal unique ID across the enterprise) early in the program.
- Have a plan in place to patch during the project, which helps alleviate any major issues or roadblocks (product and database).
- Assess your current state and prepare a roadmap to address your operational, tactical and strategic goals; align it with your business priorities.
- Defer complex integrations to later phases and take advantage of lessons learned from previous phases.
- Have an identity and access data quality initiative built into your plan to identify and remediate data-related issues early on.
- Identify an owner of the identity systems with fair IdM knowledge and empower them with the authority to make product-related decisions. This will help overcome any design hurdles.
- Shadow your internal or external consulting resources during the implementation to build the product skills needed to operate and sustain the solution.

Don'ts:
- Avoid "boiling the ocean" and trying to integrate all enterprise applications in the first phase.
- Avoid major UI customizations that require code changes.
- Avoid publishing all the target entitlements if you don't anticipate their usage during access requests.
- Avoid integrating non-production environments with your production target systems.
- Avoid creating multiple accounts for the same user on the same system wherever possible.
- Avoid creating complex approval workflows that would negatively impact productivity and SLAs.
- Avoid creating complex designs that are not sustainable long term and would need a major overhaul during upgrades.
- Avoid treating IAM as a point solution; have an appropriate level of communication and a training plan for IT and business users alike.

Conclusion

In our experience, identity programs will struggle with scope, proper resourcing, and more. We suggest that companies consider the suggestions discussed in this post and leverage them to help enable their identity and access programs. This concludes the PwC blog series on R2 for the month, and we sincerely hope that the information we have shared thus far has been beneficial. For more information or if you have questions, you can reach out to Rex Thexton, Senior Managing Director, PwC, and/or Dharma Padala, Director, PwC. We look forward to hearing from you.
Meet the Writers:

Dharma Padala is a Director in the Advisory Security practice within PwC. He has been implementing medium to large scale Identity Management solutions across multiple industries including the utility, health care, entertainment, retail and financial sectors. Dharma has 14 years of experience in delivering IT solutions, of which he has spent the past 8 years implementing Identity Management solutions.

Praveen Krishna is a Manager in the Advisory Security practice within PwC. Over the last decade Praveen has helped clients plan, architect and implement Oracle identity solutions across diverse industries. His experience includes delivering security across diverse topics like network, infrastructure, application and data, where he brings a holistic point of view to problem solving.

Scott MacDonald is a Director in the Advisory Security practice within PwC. He has consulted for several clients across multiple industries including financial services, health care, automotive and retail. Scott has 10 years of experience in delivering Identity Management solutions.

John Misczak is a member of the Advisory Security practice within PwC. He has experience implementing multiple Identity and Access Management solutions, specializing in Oracle Identity Manager and Business Process Execution Language (BPEL).

    Read the article

  • makefile pathing issues on OSX

    - by Justin808
    OK, I thought I would try one last update and see if it gets me anywhere. I've created a very small test case. This should not build anything, it just tests the path settings. Also I've setup the path so there are no spaces. The is the smallest, simplest test case I could come up with. This makefile will set the path, echo the path, run avr-gcc -v with the full path specified and then try to run it without the full path specified. It should find avr-gcc in the path on the second try, but does not. makefile TOOLCHAIN := /Users/justinzaun/Desktop/AVRBuilder.app/Contents/Resources/avrchain PATH := ${TOOLCHAIN}/bin:${PATH} export PATH all: @echo ${PATH} @echo -------- "${TOOLCHAIN}/bin/avr-gcc" -v @echo -------- avr-gcc -v output JUSTINs-MacBook-Air:Untitled justinzaun$ make /Users/justinzaun/Desktop/AVRBuilder.app/Contents/Resources/avrchain/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin -------- "/Users/justinzaun/Desktop/AVRBuilder.app/Contents/Resources/avrchain/bin/avr-gcc" -v Using built-in specs. COLLECT_GCC=/Users/justinzaun/Desktop/AVRBuilder.app/Contents/Resources/avrchain/bin/avr-gcc COLLECT_LTO_WRAPPER=/Users/justinzaun/Desktop/AVRBuilder.app/Contents/Resources/avrchain/bin/../libexec/gcc/avr/4.6.3/lto-wrapper Target: avr Configured with: /Users/justinzaun/Development/AVRBuilder/Packages/gccobj/../gcc/configure --prefix=/Users/justinzaun/Development/AVRBuilder/Packages/gccobj/../build/ --exec-prefix=/Users/justinzaun/Development/AVRBuilder/Packages/gccobj/../build/ --datadir=/Users/justinzaun/Development/AVRBuilder/Packages/gccobj/../build/ --target=avr --enable-languages=c,objc,c++ --disable-libssp --disable-lto --disable-nls --disable-libgomp --disable-gdbtk --disable-threads --enable-poison-system-directories Thread model: single gcc version 4.6.3 (GCC) -------- avr-gcc -v make: avr-gcc: No such file or directory make: *** [all] Error 1 JUSTINs-MacBook-Air:Untitled justinzaun$ Original Question I'm trying to set the path from within the makefile. I can't seem to do this on OSX. Setting the path with PATH := /new/bin/:$(PATH) does not work. See my makefile below. makefile PROJECTNAME = Untitled # Name of target controller # (e.g. 'at90s8515', see the available avr-gcc mmcu # options for possible values) MCU = atmega640 # id to use with programmer # default: PROGRAMMER_MCU=$(MCU) # In case the programer used, e.g avrdude, doesn't # accept the same MCU name as avr-gcc (for example # for ATmega8s, avr-gcc expects 'atmega8' and # avrdude requires 'm8') PROGRAMMER_MCU = $(MCU) # Source files # List C/C++/Assembly source files: # (list all files to compile, e.g. 'a.c b.cpp as.S'): # Use .cc, .cpp or .C suffix for C++ files, use .S # (NOT .s !!!) for assembly source code files. PRJSRC = main.c \ utils.c # additional includes (e.g. -I/path/to/mydir) INC = # libraries to link in (e.g. -lmylib) LIBS = # Optimization level, # use s (size opt), 1, 2, 3 or 0 (off) OPTLEVEL = s ### You should not have to touch anything below this line ### PATH := /Users/justinzaun/Library/Developer/Xcode/DerivedData/AVR_Builder-gxiykwiwjywvoagykxvmotvncbyd/Build/Products/Debug/AVR\ Builder.app/Contents/Resources/avrchain/bin:/usr/bin:/bin:$(PATH) CPATH := /Users/justinzaun/Library/Developer/Xcode/DerivedData/AVR_Builder-gxiykwiwjywvoagykxvmotvncbyd/Build/Products/Debug/AVR\ Builder.app/Contents/Resources/avrchain/include # HEXFORMAT -- format for .hex file output HEXFORMAT = ihex # compiler CFLAGS = -I. 
$(INC) -g -mmcu=$(MCU) -O$(OPTLEVEL) \ -fpack-struct -fshort-enums \ -funsigned-bitfields -funsigned-char \ -Wall -Wstrict-prototypes \ -Wa,-ahlms=$(firstword \ $(filter %.lst, $(<:.c=.lst))) # c++ specific flags CPPFLAGS = -fno-exceptions \ -Wa,-ahlms=$(firstword \ $(filter %.lst, $(<:.cpp=.lst)) \ $(filter %.lst, $(<:.cc=.lst)) \ $(filter %.lst, $(<:.C=.lst))) # assembler ASMFLAGS = -I. $(INC) -mmcu=$(MCU) \ -x assembler-with-cpp \ -Wa,-gstabs,-ahlms=$(firstword \ $(<:.S=.lst) $(<.s=.lst)) # linker LDFLAGS = -Wl,-Map,$(TRG).map -mmcu=$(MCU) \ -lm $(LIBS) ##### executables #### CC=avr-gcc OBJCOPY=avr-objcopy OBJDUMP=avr-objdump SIZE=avr-size AVRDUDE=avrdude REMOVE=rm -f ##### automatic target names #### TRG=$(PROJECTNAME).out DUMPTRG=$(PROJECTNAME).s HEXROMTRG=$(PROJECTNAME).hex HEXTRG=$(HEXROMTRG) $(PROJECTNAME).ee.hex # Start by splitting source files by type # C++ CPPFILES=$(filter %.cpp, $(PRJSRC)) CCFILES=$(filter %.cc, $(PRJSRC)) BIGCFILES=$(filter %.C, $(PRJSRC)) # C CFILES=$(filter %.c, $(PRJSRC)) # Assembly ASMFILES=$(filter %.S, $(PRJSRC)) # List all object files we need to create OBJDEPS=$(CFILES:.c=.o) \ $(CPPFILES:.cpp=.o) \ $(BIGCFILES:.C=.o) \ $(CCFILES:.cc=.o) \ $(ASMFILES:.S=.o) # Define all lst files. LST=$(filter %.lst, $(OBJDEPS:.o=.lst)) # All the possible generated assembly # files (.s files) GENASMFILES=$(filter %.s, $(OBJDEPS:.o=.s)) .SUFFIXES : .c .cc .cpp .C .o .out .s .S \ .hex .ee.hex .h .hh .hpp # Make targets: # all, disasm, stats, hex, writeflash/install, clean all: $(TRG) $(TRG): $(OBJDEPS) $(CC) $(LDFLAGS) -o $(TRG) $(OBJDEPS) #### Generating assembly #### # asm from C %.s: %.c $(CC) -S $(CFLAGS) $< -o $@ # asm from (hand coded) asm %.s: %.S $(CC) -S $(ASMFLAGS) $< > $@ # asm from C++ .cpp.s .cc.s .C.s : $(CC) -S $(CFLAGS) $(CPPFLAGS) $< -o $@ #### Generating object files #### # object from C .c.o: $(CC) $(CFLAGS) -c $< -o $@ # object from C++ (.cc, .cpp, .C files) .cc.o .cpp.o .C.o : $(CC) $(CFLAGS) $(CPPFLAGS) -c $< -o $@ # object from asm .S.o : $(CC) $(ASMFLAGS) -c $< -o $@ #### Generating hex files #### # hex files from elf .out.hex: $(OBJCOPY) -j .text \ -j .data \ -O $(HEXFORMAT) $< $@ .out.ee.hex: $(OBJCOPY) -j .eeprom \ --change-section-lma .eeprom=0 \ -O $(HEXFORMAT) $< $@ #### Information #### info: @echo PATH: @echo "$(PATH)" $(CC) -v which $(CC) #### Cleanup #### clean: $(REMOVE) $(TRG) $(TRG).map $(DUMPTRG) $(REMOVE) $(OBJDEPS) $(REMOVE) $(LST) $(REMOVE) $(GENASMFILES) $(REMOVE) $(HEXTRG) error JUSTINs-MacBook-Air:Untitled justinzaun$ make avr-gcc -I. -g -mmcu=atmega640 -Os -fpack-struct -fshort-enums -funsigned-bitfields -funsigned-char -Wall -Wstrict-prototypes -Wa,-ahlms=main.lst -c main.c -o main.o make: avr-gcc: No such file or directory make: *** [main.o] Error 1 JUSTINs-MacBook-Air:Untitled justinzaun$ If I change my CC= to include the full path: CC=/Users/justinzaun/Library/Developer/Xcode/DerivedData/AVR_Builder-gxiykwiwjywvoagykxvmotvncbyd/Build/Products/Debug/AVR\ Builder.app/Contents/Resources/avrchain/bin/avr-gcc then it finds it, but this doesn't seem the correct way to do things. For instance its trying to use the system as not the one in the correct path. update - Just to be sure, I'm adding the output of my ls command too so everyone knows the file exist. Also I've added a make info target to the makefile and showing that output as well. 
JUSTINs-MacBook-Air:Untitled justinzaun$ ls /Users/justinzaun/Library/Developer/Xcode/DerivedData/AVR_Builder-gxiykwiwjywvoagykxvmotvncbyd/Build/Products/Debug/AVR\ Builder.app/Contents/Resources/avrchain/bin ar avr-elfedit avr-man avr-strip objcopy as avr-g++ avr-nm avrdude objdump avr-addr2line avr-gcc avr-objcopy c++ ranlib avr-ar avr-gcc-4.6.3 avr-objdump g++ strip avr-as avr-gcov avr-ranlib gcc avr-c++ avr-gprof avr-readelf ld avr-c++filt avr-ld avr-size ld.bfd avr-cpp avr-ld.bfd avr-strings nm JUSTINs-MacBook-Air:Untitled justinzaun$ Output of make info with the \ in my path JUSTINs-MacBook-Air:Untitled justinzaun$ make info PATH: /Users/justinzaun/Library/Developer/Xcode/DerivedData/AVR_Builder-gxiykwiwjywvoagykxvmotvncbyd/Build/Products/Debug/AVR\ Builder.app/Contents/Resources/avrchain/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin avr-gcc -v make: avr-gcc: No such file or directory make: *** [info] Error 1 JUSTINs-MacBook-Air:Untitled justinzaun$ Output of make info with the \ not in my path JUSTINs-MacBook-Air:Untitled justinzaun$ make info PATH: /Users/justinzaun/Library/Developer/Xcode/DerivedData/AVR_Builder-gxiykwiwjywvoagykxvmotvncbyd/Build/Products/Debug/AVR Builder.app/Contents/Resources/avrchain/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin avr-gcc -v make: avr-gcc: No such file or directory make: *** [info] Error 1 JUSTINs-MacBook-Air:Untitled justinzaun$ update - When I have my CC set to include the full path as described above, this is the result of make info. JUSTINs-MacBook-Air:Untitled justinzaun$ make info PATH: /Users/justinzaun/Library/Developer/Xcode/DerivedData/AVR_Builder-gxiykwiwjywvoagykxvmotvncbyd/Build/Products/Debug/AVR Builder.app/Contents/Resources/avrchain/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin /Users/justinzaun/Library/Developer/Xcode/DerivedData/AVR_Builder-gxiykwiwjywvoagykxvmotvncbyd/Build/Products/Debug/AVR\ Builder.app/Contents/Resources/avrchain/bin/avr-gcc -v Using built-in specs. COLLECT_GCC=/Users/justinzaun/Library/Developer/Xcode/DerivedData/AVR_Builder-gxiykwiwjywvoagykxvmotvncbyd/Build/Products/Debug/AVR Builder.app/Contents/Resources/avrchain/bin/avr-gcc COLLECT_LTO_WRAPPER=/Users/justinzaun/Library/Developer/Xcode/DerivedData/AVR_Builder-gxiykwiwjywvoagykxvmotvncbyd/Build/Products/Debug/AVR Builder.app/Contents/Resources/avrchain/bin/../libexec/gcc/avr/4.6.3/lto-wrapper Target: avr Configured with: /Users/justinzaun/Development/AVRBuilder/Packages/gccobj/../gcc/configure --prefix=/Users/justinzaun/Development/AVRBuilder/Packages/gccobj/../build/ --exec-prefix=/Users/justinzaun/Development/AVRBuilder/Packages/gccobj/../build/ --datadir=/Users/justinzaun/Development/AVRBuilder/Packages/gccobj/../build/ --target=avr --enable-languages=c,objc,c++ --disable-libssp --disable-lto --disable-nls --disable-libgomp --disable-gdbtk --disable-threads --enable-poison-system-directories Thread model: single gcc version 4.6.3 (GCC) which /Users/justinzaun/Library/Developer/Xcode/DerivedData/AVR_Builder-gxiykwiwjywvoagykxvmotvncbyd/Build/Products/Debug/AVR\ Builder.app/Contents/Resources/avrchain/bin/avr-gcc /Users/justinzaun/Library/Developer/Xcode/DerivedData/AVR_Builder-gxiykwiwjywvoagykxvmotvncbyd/Build/Products/Debug/AVR Builder.app/Contents/Resources/avrchain/bin/avr-gcc JUSTINs-MacBook-Air:Untitled justinzaun$
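One way to sidestep the PATH lookup problem entirely is to invoke the toolchain binaries through variables that hold their absolute paths, and to keep backslash escapes out of PATH (PATH components are matched literally, so an escaped space never matches the real directory). The sketch below is a minimal illustration, not a confirmed root-cause fix; TOOLCHAIN and CC reuse names from the question, and the directory is the one from the poster's setup.

    # Hypothetical workaround sketch: avoid relying on PATH lookup for the cross tools.
    # Note: recipe lines must begin with a tab character.
    TOOLCHAIN := /Users/justinzaun/Desktop/AVRBuilder.app/Contents/Resources/avrchain
    CC        := $(TOOLCHAIN)/bin/avr-gcc

    # Export PATH without backslash escapes; quote at the point of use instead.
    export PATH := $(TOOLCHAIN)/bin:$(PATH)

    all:
    	@echo "PATH is: $$PATH"
    	"$(CC)" -v      # absolute path, quoted so spaces in the path survive
    	avr-gcc -v      # PATH lookup; behavior depends on the shell make invokes

Invoking "$(CC)" directly works regardless of how make or /bin/sh handles the exported PATH, which makes it a reasonable fallback while the PATH behavior is being diagnosed.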

    Read the article

  • C++ assignment operators dynamic arrays

    - by user2905445
    First off i know the multiplying part is wrong but i have some questions about the code. 1. When i am overloading my operator+ i print out the matrix using cout << *this then right after i return *this and when i do a+b on matix a and matix b it doesnt give me the same thing this is very confusing. 2. When i make matrix c down in my main i cant use my default constructor for some reason because when i go to set it = using my assignment operator overloaded function it gives me an error saying "expression must be a modifiable value. although using my constructor that sets the row and column numbers is the same as my default constructor using (0,0). 3. My assignment operator= function uses a copy constructor to make a new matrix using the values on the right hand side of the equal sign and when i print out c it doesn't give me anything Any help would be great this is my hw for a algorithm class which i still need to do the algorithm for the multiplying matrices but i need to solve these issues first and im having a lot of trouble please help. //Programmer: Eric Oudin //Date: 10/21/2013 //Description: Working with matricies #include <iostream> using namespace std; class matrixType { public: friend ostream& operator<<(ostream&, const matrixType&); const matrixType& operator*(const matrixType&); matrixType& operator+(const matrixType&); matrixType& operator-(const matrixType&); const matrixType& operator=(const matrixType&); void fillMatrix(); matrixType(); matrixType(int, int); matrixType(const matrixType&); ~matrixType(); private: int **matrix; int rowSize; int columnSize; }; ostream& operator<< (ostream& osObject, const matrixType& matrix) { osObject << endl; for (int i=0;i<matrix.rowSize;i++) { for (int j=0;j<matrix.columnSize;j++) { osObject << matrix.matrix[i][j] <<", "; } osObject << endl; } return osObject; } const matrixType& matrixType::operator=(const matrixType& matrixRight) { matrixType temp(matrixRight); cout << temp; return temp; } const matrixType& matrixType::operator*(const matrixType& matrixRight) { matrixType temp(rowSize*matrixRight.columnSize, columnSize*matrixRight.rowSize); if(rowSize == matrixRight.columnSize) { for (int i=0;i<rowSize;i++) { for (int j=0;j<columnSize;j++) { temp.matrix[i][j] = matrix[i][j] * matrixRight.matrix[i][j]; } } } else { cout << "Cannot multiply matricies that have different size rows from the others columns." << endl; } return temp; } matrixType& matrixType::operator+(const matrixType& matrixRight) { if(rowSize == matrixRight.rowSize && columnSize == matrixRight.columnSize) { for (int i=0;i<rowSize;i++) { for (int j=0;j<columnSize;j++) { matrix[i][j] += matrixRight.matrix[i][j]; } } } else { cout << "Cannot add matricies that are different sizes." << endl; } cout << *this; return *this; } matrixType& matrixType::operator-(const matrixType& matrixRight) { matrixType temp(rowSize, columnSize); if(rowSize == matrixRight.rowSize && columnSize == matrixRight.columnSize) { for (int i=0;i<rowSize;i++) { for (int j=0;j<columnSize;j++) { matrix[i][j] -= matrixRight.matrix[i][j]; } } } else { cout << "Cannot subtract matricies that are different sizes." 
<< endl; } return *this; } void matrixType::fillMatrix() { for (int i=0;i<rowSize;i++) { for (int j=0;j<columnSize;j++) { cout << "Enter the matix number at (" << i << "," << j << "):"; cin >> matrix[i][j]; } } } matrixType::matrixType() { rowSize=0; columnSize=0; matrix = new int*[rowSize]; for (int i=0; i < rowSize; i++) { matrix[i] = new int[columnSize]; } } matrixType::matrixType(int setRows, int setColumns) { rowSize=setRows; columnSize=setColumns; matrix = new int*[rowSize]; for (int i=0; i < rowSize; i++) { matrix[i] = new int[columnSize]; } } matrixType::matrixType(const matrixType& otherMatrix) { rowSize=otherMatrix.rowSize; columnSize=otherMatrix.columnSize; matrix = new int*[rowSize]; for (int i = 0; i < rowSize; i++) { for (int j = 0; j < columnSize; j++) { matrix[i]=new int[columnSize]; matrix[i][j]=otherMatrix.matrix[i][j]; } } } matrixType::~matrixType() { delete [] matrix; } int main() { matrixType a(2,2); matrixType b(2,2); matrixType c(0,0); cout << "fill matrix a:"<< endl;; a.fillMatrix(); cout << "fill matrix b:"<< endl;; b.fillMatrix(); cout << a; cout << b; c = a+b; cout <<"matrix a + matrix b =" << c; system("PAUSE"); return 0; }
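For reference, a minimal rule-of-three sketch of how the copy constructor, copy assignment operator, and destructor usually work together for a dynamically allocated 2-D array is shown below. It is not a drop-in replacement for the poster's class; the names are simplified and invented for illustration. The key differences from the posted code: the copy constructor allocates each row exactly once before copying, operator= returns *this rather than a reference to a local temporary, and the destructor releases every row, not just the outer pointer array.

    #include <algorithm>
    #include <cstddef>

    // Minimal rule-of-three sketch for a dynamically allocated matrix.
    class Matrix {
    public:
        Matrix(std::size_t rows, std::size_t cols)
            : rows_(rows), cols_(cols), data_(allocate(rows, cols)) {}

        // Copy constructor: allocate each row once, then copy the values.
        Matrix(const Matrix& other)
            : rows_(other.rows_), cols_(other.cols_), data_(allocate(other.rows_, other.cols_)) {
            for (std::size_t i = 0; i < rows_; ++i)
                for (std::size_t j = 0; j < cols_; ++j)
                    data_[i][j] = other.data_[i][j];
        }

        // Copy assignment via copy-and-swap; returns *this, never a local temporary.
        Matrix& operator=(Matrix other) {   // 'other' is already a copy
            swap(*this, other);
            return *this;
        }

        // Destructor: delete every row, then the row-pointer array itself.
        ~Matrix() {
            for (std::size_t i = 0; i < rows_; ++i) delete[] data_[i];
            delete[] data_;
        }

        friend void swap(Matrix& a, Matrix& b) {
            std::swap(a.rows_, b.rows_);
            std::swap(a.cols_, b.cols_);
            std::swap(a.data_, b.data_);
        }

    private:
        static int** allocate(std::size_t rows, std::size_t cols) {
            int** p = new int*[rows];
            for (std::size_t i = 0; i < rows; ++i) p[i] = new int[cols]();
            return p;
        }

        std::size_t rows_, cols_;
        int** data_;
    };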

    Read the article

  • Authoritative sources about Database vs. Flatfile decision

    - by FastAl
    <tldr>looking for a reference to a book or other undeniably authoritative source that gives reasons when you should choose a database vs. when you should choose other storage methods. I have provided an un-authoritative list of reasons about 2/3 of the way down this post.</tldr> I have a situation at my company where a database is being used where it would be better to use another solution (in this case, an auto-generated piece of source code that contains a static lookup table, searched by binary sort). Normally, a database would be an OK solution even though the problem does not require a database, e.g, none of the elements of ACID are needed, as it is read-only data, updated about every 3-5 years (also requiring other sourcecode changes), and fits in memory, and can be keyed into via binary search (a tad faster than db, but speed is not an issue). The problem is that this code runs on our enterprise server, but is shared with several PC platforms (some disconnected, some use a central DB, etc.), and parts of it are managed by multiple programming units, parts by the DBAs, parts even by mathematicians in another department, etc. These hit their own platform’s version of their databases (containing their own copy of the static data). What happens is that every implementation, every little change, something different goes wrong. There are many other issues as well. I can’t even use a flatfile, because one mode of running on our enterprise server does not have permission to read files (only databases, and of course, its own literal storage, e.g., in-source table). Of course, other parts of the system use databases in proper, less obscure manners; there is no problem with those parts. So why don’t we just change it? I don’t have administrative ability to force a change. But I’m affected because sometimes I have to help fix the problems, but mostly because it causes outages and tons of extra IT time by other programmers and d*mmit that makes me mad! The reason neither management, nor the designers of the system, can see the problem is that they propose a solution that won’t work: increase communication; implement more safeguards and standards; etc. But every time, in a different part of the already-pared-down but still multi-step processes, a few different diligent, hard-working, top performing IT personnel make a unique subtle error that causes it to fail, sometimes after the last round of testing! And in general these are not single-person failures, but understandable miscommunications. And communication at our company is actually better than most. People just don't think that's the case because they haven't dug into the matter. However, I have it on very good word from somebody with extensive formal study of sociology and psychology that the relatively small amount of less-than-proper database usage in this gigantic cross-platform multi-source, multi-language project is bureaucratically un-maintainable. Impossible. No chance. At least with Human Beings in the loop, and it can’t be automated. In addition, the management and developers who could change this, though intelligent and capable, don’t understand the rigidity of this ‘how humans are’ issue, and are not convincible on the matter. 
The reason putting the static data in sourcecode will solve the problem is, although the solution is less sexy than a database, it would function with no technical drawbacks; and since the sharing of sourcecode already works very well, you basically erase any database-related effort from this section of the project, along with all the drawbacks of it that are causing problems. OK, that’s the background, for the curious. I won’t be able to convince management that this is an unfixable sociological problem, and that the real solution is coding around these limits of human nature, just as you would code around a bug in a 3rd party component that you can’t change. So what I have to do is exploit the unsuitableness of the database solution, and not do it using logic, but rather authority. I am aware of many reasons, and posts on this site giving reasons for one over the other; I’m not looking for lists of reasons like these (although you can add a comment if I've miss a doozy): WHY USE A DATABASE? instead of flatfile/other DB vs. file: if you need... Random Read / Transparent search optimization Advanced / varied / customizable Searching and sorting capabilities Transaction/rollback Locks, semaphores Concurrency control / Shared users Security 1-many/m-m is easier Easy modification Scalability Load Balancing Random updates / inserts / deletes Advanced query Administrative control of design, etc. SQL / learning curve Debugging / Logging Centralized / Live Backup capabilities Cached queries / dvlp & cache execution plans Interleaved update/read Referential integrity, avoid redundant/missing/corrupt/out-of-sync data Reporting (from on olap or oltp db) / turnkey generation tools [Disadvantages:] Important to get right the first time - professional design - but only b/c it's meant to last s/w & h/w cost Usu. over a network, speed issue (best vs. best design vs. local=even then a separate process req's marshalling/netwk layers/inter-p comm) indicies and query processing can stand in the way of simple processing (vs. flatfile) WHY USE FLATFILE: If you only need... Sequential Row processing only Limited usage append only (no reading, no master key/update) Only Update the record you're reading (fixed length recs only) Too big to fit into memory If Local disk / read-ahead network connection Portability / small system Email / cut & Paste / store as document by novice - simple format Low design learning curve but high cost later WHY USE IN-MEMORY/TABLE (tables, arrays, etc.): if you need... Processing a single db/ff record that was imported Known size of data Static data if hardcoding the table Narrow, unchanging use (e.g., one program or proc) -includes a class that will be shared, but encapsulates its data manipulation Extreme speed needed / high transaction frequency Random access - but search is dependent on implementation Following are some other posts about the topic: http://stackoverflow.com/questions/1499239/database-vs-flat-text-file-what-are-some-technical-reasons-for-choosing-one-over http://stackoverflow.com/questions/332825/are-flat-file-databases-any-good http://stackoverflow.com/questions/2356851/database-vs-flat-files http://stackoverflow.com/questions/514455/databases-vs-plain-text/514530 What I’d like to know is if anybody could recommend a hard, authoritative source containing these reasons. I’m looking for a paper book I can buy, or a reputable website with whitepapers about the issue (e.g., Microsoft, IBM), not counting the user-generated content on those sites. 
This will have a greater chance of eliciting the change I'm looking for: less wasted programmer time and more reliable programs. Thanks very much for your help. You win a prize for reading such a large post!

    Read the article

  • Choose variable based on users choice from picklist.

    - by Kelbizzle
    I have this form. Basically what I want is to send a auto-response with a different URL based on what the user picks in the "attn" picklist. I've been thinking I could have a different variable for each drop down value. It will then pass this variable on to the mail script that will choose which URL to insert inside the auto response that is sent. It gives me a headache thinking about it sometimes. What's the easiest way to accomplish this? Am I making more work for myself? I really don't know because I'm no programmer. Thanks in advance! Here is the form: <form name="contact_form" method="post" action="sendemail_reports.php" onsubmit="return validate_form ( );"> <div id='zohoWebToLead' align=center> <table width="100%" border="0" align="center" cellpadding="0" cellspacing="0" class="txt_body"> <table border=0 cellspacing=0 cellpadding=5 width=480 style='border-bottom-color: #999999; border-top-color: #999999; border-bottom-style: none; border-top-style: none; border-bottom-width: 1px; border-top-width: 2px; background-color:transparent;'> <tr> <td width='75%'><table width=480 border=0 cellpadding=5 cellspacing=0 style='border-bottom-color: #999999; border-top-color: #999999; border-bottom-style: none; border-top-style: none; border-bottom-width: 1px; border-top-width: 2px; background-color:transparent;'> <input type="hidden" name="ip" value="'.$ipi.'" /> <input type="hidden" name="httpref" value="'.$httprefi.'" /> <input type="hidden" name="httpagent" value="'.$httpagenti.'" /> <tr></tr> <tr> <td colspan='2' align='left' style='border-bottom-color: #dadada; border-bottom-style: none; border-bottom-width: 2px; color:#000000; font-family:sans-serif; font-size:14px;'><strong>Send us an Email</strong></td> </tr> <tr> <td nowrap style='font-family:sans-serif;font-size:12px;font-weight:bold' align='right' width='25%'> First Name   : </td> <td width='75%'><input name='visitorf' type='text' size="48" maxlength='40' /></td> </tr> <tr> <td nowrap style='font-family:sans-serif;font-size:12px;font-weight:bold' align='right' width='25%'>Last Name   :</td> <td width='75%'><input name='visitorfl' type='text' size="48" maxlength='80' /></td> </tr> <tr> <td nowrap style= 'font-family:sans-serif;font-size:12px;font-weight:bold' align='right' width='25%'> Email Adress  : </td> <td width='75%'><input name='visitormail' type='text' size="48" maxlength='100' /></td> </tr> <tr> <td nowrap style='font-family:sans-serif;font-size:12px;font-weight:bold' align='right' width='25%'> Phone   : </td> <td width='75%'><input name='visitorphone' type='text' size="48" maxlength='30' /></td> </tr> <td nowrap style='font-family:sans-serif;font-size:12px;font-weight:bold' align='right' width='25%'> Subject   : </td> <td width='75%'><select name="attn" size="1"> <option value=" Investment Opportunities ">Investment Opportunities </option> <option value=" Vacation Rentals ">Vacation Rentals </option> <option value=" Real Estate Offerings ">Real Estate Offerings </option> <option value=" Gatherings ">Gatherings </option> <option value=" General ">General </option> </select></td> <tr> <td nowrap style= 'font-family:sans-serif;font-size:12px;font-weight:bold' align='right' width='25%'> Message   :<br /> <em>(max 5000 char)</em></td> <td width='75%'><textarea name='notes' maxlength='5000' cols="48" rows="3"></textarea></td> </tr> <tr> <td colspan=2 align=center style=''><input name='save' type='submit' class="boton" value=Send mail /> &nbsp; &nbsp; <input type='reset' name='reset' value=Reset class="boton" /></td> </tr> 
</table></td> </tr> </table> </div> </form> Here is the mail script: <?php //the 3 variables below were changed to use the SERVER variable $ip = $_SERVER['REMOTE_ADDR']; $httpref = $_SERVER['HTTP_REFERER']; $httpagent = $_SERVER['HTTP_USER_AGENT']; $visitorf = $_POST['visitorf']; $visitorl = $_POST['visitorl']; $visitormail = $_POST['visitormail']; $visitorphone = $_POST['visitorphone']; $notes = $_POST['notes']; $attn = $_POST['attn']; //additional headers $headers = 'From: Me <[email protected]>' . "\n" ; $headers = 'Bcc: [email protected]' . "\n"; if (eregi('http:', $notes)) { die ("Do NOT try that! ! "); } if(!$visitormail == "" && (!strstr($visitormail,"@") || !strstr($visitormail,"."))) { echo "<h2>Use Back - Enter valid e-mail</h2>\n"; $badinput = "<h2>Feedback was NOT submitted</h2>\n"; echo $badinput; die ("Go back! ! "); } if(empty($visitorf) || empty($visitormail) || empty($notes )) { echo "<h2>Use Back - fill in all fields</h2>\n"; die ("Use back! ! "); } $todayis = date("l, F j, Y, g:i a") ; $subject = "I want to download the report about $attn"; $notes = stripcslashes($notes); $message = "$todayis [EST] \nAttention: $attn \nMessage: $notes \nFrom: $visitorf $visitorl ($visitormail) \nTelephone Number: $visitorphone \nAdditional Info : IP = $ip \nBrowser Info: $httpagent \nReferral : $httpref\n"; //check if the function even exists if(function_exists("mail")) { //send the email mail($_SESSION['email'], $subject, $message, $headers) or die("could not send email"); } else { die("mail function not enabled"); } header( "Location: http://www.domain.com/thanks.php" ); ?>
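One straightforward way to choose the URL from the picklist is a lookup table keyed by the submitted "attn" value, rather than a separate variable per option. The sketch below assumes that approach; the URLs, the From address, and the fallback key are placeholders, not real values, and real code should validate the email address before sending.

    <?php
    // Hedged sketch: map the submitted "attn" value to a report URL for the auto-response.
    $reportUrls = array(
        'Investment Opportunities' => 'http://www.example.com/reports/investments.pdf',
        'Vacation Rentals'         => 'http://www.example.com/reports/rentals.pdf',
        'Real Estate Offerings'    => 'http://www.example.com/reports/real-estate.pdf',
        'Gatherings'               => 'http://www.example.com/reports/gatherings.pdf',
        'General'                  => 'http://www.example.com/reports/general.pdf',
    );

    $attn = trim($_POST['attn']);            // the form's option values are padded with spaces
    $reportUrl = isset($reportUrls[$attn])
        ? $reportUrls[$attn]
        : $reportUrls['General'];            // fall back to a default report

    // Auto-response to the visitor (separate from the notification mail to the site owner).
    $autoSubject = "Your requested report: $attn";
    $autoBody    = "Thank you for your interest.\nYou can download the report here:\n$reportUrl\n";
    mail($_POST['visitormail'], $autoSubject, $autoBody, 'From: me@example.com');
    ?>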

    Read the article

  • Dealing with external processes

    - by Jesse Aldridge
    I've been working on a gui app that needs to manage external processes. Working with external processes leads to a lot of issues that can make a programmer's life difficult. I feel like maintenence on this app is taking an unacceptably long time. I've been trying to list the things that make working with external processes difficult so that I can come up with ways of mitigating the pain. This kind of turned into a rant which I thought I'd post here in order to get some feedback and to provide some guidance to anybody thinking about sailing into these very murky waters. Here's what I've got so far: Output from the child can get mixed up with output from the parent. This can make both outputs misleading and hard to read. It can be hard to tell what came from where. It becomes harder to figure out what's going on when things are asynchronous. Here's a contrived example: import textwrap, os, time from subprocess import Popen test_path = 'test_file.py' with open(test_path, 'w') as file: file.write(textwrap.dedent(''' import time for i in range(3): print 'Hello %i' % i time.sleep(1)''')) proc = Popen('python -B "%s"' % test_path) for i in range(3): print 'Hello %i' % i time.sleep(1) os.remove(test_path) I guess I could have the child process write its output to a file. But it can be annoying to have to open up a file every time I want to see the result of a print statement. If I have code for the child process I could add a label, something like print 'child: Hello %i', but it can be annoying to do that for every print. And it adds some noise to the output. And of course I can't do it if I don't have access to the code. I could manually manage the process output. But then you open up a huge can of worms with threads and polling and stuff like that. A simple solution is to treat processes like synchronous functions, that is, no further code executes until the process completes. In other words, make the process block. But that doesn't work if you're building a gui app. Which brings me to the next problem... Blocking processes cause the gui to become unresponsive. import textwrap, sys, os from subprocess import Popen from PyQt4.QtGui import * from PyQt4.QtCore import * test_path = 'test_file.py' with open(test_path, 'w') as file: file.write(textwrap.dedent(''' import time for i in range(3): print 'Hello %i' % i time.sleep(1)''')) app = QApplication(sys.argv) button = QPushButton('Launch process') def launch_proc(): # Can't move the window until process completes proc = Popen('python -B "%s"' % test_path) proc.communicate() button.connect(button, SIGNAL('clicked()'), launch_proc) button.show() app.exec_() os.remove(test_path) Qt provides a process wrapper of its own called QProcess which can help with this. You can connect functions to signals to capture output relatively easily. This is what I'm currently using. But I'm finding that all these signals behave suspiciously like goto statements and can lead to spaghetti code. I think I want to get sort-of blocking behavior by having the 'finished' signal from QProcess call a function containing all the code that comes after the process call. I think that should work but I'm still a bit fuzzy on the details... Stack traces get interrupted when you go from the child process back to the parent process. If a normal function screws up, you get a nice complete stack trace with filenames and line numbers. If a subprocess screws up, you'll be lucky if you get any output at all. 
You end up having to do a lot more detective work everytime something goes wrong. Speaking of which, output has a way of disappearing when dealing external processes. Like if you run something via the windows 'cmd' command, the console will pop up, execute the code, and then disappear before you have a chance to see the output. You have to pass the /k flag to make it stick around. Similar issues seem to crop up all the time. I suppose both problems 3 and 4 have the same root cause: no exception handling. Exception handling is meant to be used with functions, it doesn't work with processes. Maybe there's some way to get something like exception handling for processes? I guess that's what stderr is for? But dealing with two different streams can be annoying in itself. Maybe I should look into this more... Processes can hang and stick around in the background without you realizing it. So you end up yelling at your computer cuz it's going so slow until you finally bring up your task manager and see 30 instances of the same process hanging out in the background. Also, hanging background processes can interefere with other instances of the process in various fun ways, such as causing permissions errors by holding a handle to a file or someting like that. It seems like an easy solution to this would be to have the parent process kill the child process on exit if the child process didn't close itself. But if the parent process crashes, cleanup code might not get called and the child can be left hanging. Also, if the parent waits for the child to complete, and the child is in an infinite loop or something, you can end up with two hanging processes. This problem can tie in to problem 2 for extra fun, causing your gui to stop responding entirely and force you to kill everything with the task manager. F***ing quotes Parameters often need to be passed to processes. This is a headache in itself. Especially if you're dealing with file paths. Say... 'C:/My Documents/whatever/'. If you don't have quotes, the string will often be split at the space and interpreted as two arguments. If you need nested quotes you can use ' and ". But if you need to use more than two layers of quotes, you have to do some nasty escaping, for example: "cmd /k 'python \'path 1\' \'path 2\''". A good solution to this problem is passing parameters as a list rather than as a single string. Subprocess allows you to do this. Can't easily return data from a subprocess. You can use stdout of course. But what if you want to throw a print in there for debugging purposes? That's gonna screw up the parent if it's expecting output formatted a certain way. In functions you can print one string and return another and everything works just fine. Obscure command-line flags and a crappy terminal based help system. These are problems I often run into when using os level apps. Like the /k flag I mentioned, for holding a cmd window open, who's idea was that? Unix apps don't tend to be much friendlier in this regard. Hopefully you can use google or StackOverflow to find the answer you need. But if not, you've got a lot of boring reading and frusterating trial and error to do. External factors. This one's kind of fuzzy. But when you leave the relatively sheltered harbor of your own scripts to deal with external processes you find yourself having to deal with the "outside world" to a much greater extent. And that's a scary place. All sorts of things can go wrong. Just to give a random example: the cwd in which a process is run can modify it's behavior. 
There are probably other issues, but those are the ones I've written down so far. Any other snags you'd like to add? Any suggestions for dealing with these problems?
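A few of the mitigations mentioned above (arguments as a list instead of a quoted string, capturing child output separately and labeling it, a timeout so a hung child cannot hang the parent) can be sketched with the standard subprocess module. This is a hedged illustration only: it uses subprocess.run, which post-dates the original post's Python 2 / PyQt4 context, and the command and paths are placeholders.

    # Hedged sketch of a few of the mitigations discussed above.
    import subprocess

    cmd = ["python", "-B", "test_file.py"]   # args as a list: no quote-escaping games

    try:
        # Capture stdout/stderr separately so child output cannot get mixed into ours,
        # and use a timeout so a hung child cannot hang us too.
        proc = subprocess.run(cmd, capture_output=True, text=True, timeout=30)
    except subprocess.TimeoutExpired:
        print("child: timed out (run() kills the child before raising)")
    else:
        for line in proc.stdout.splitlines():
            print("child:", line)            # label child output to keep logs readable
        if proc.returncode != 0:
            print("child failed with code", proc.returncode)
            print(proc.stderr)               # the child's traceback, if any, lives here

In a Qt GUI the same ideas apply, but the blocking run() call would be replaced by QProcess signals so the event loop stays responsive.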

    Read the article

  • Array sorting efficiency... Beginner need advice

    - by SoleSoft
    I'll start by saying I am very much a beginner programmer, this is essentially my first real project outside of using learning material. I've been making a 'Simon Says' style game (the game where you repeat the pattern generated by the computer) using C# and XNA, the actual game is complete and working fine but while creating it, I wanted to also create a 'top 10' scoreboard. The scoreboard would record player name, level (how many 'rounds' they've completed) and combo (how many buttons presses they got correct), the scoreboard would then be sorted by combo score. This led me to XML, the first time using it, and I eventually got to the point of having an XML file that recorded the top 10 scores. The XML file is managed within a scoreboard class, which is also responsible for adding new scores and sorting scores. Which gets me to the point... I'd like some feedback on the way I've gone about sorting the score list and how I could have done it better, I have no other way to gain feedback =(. I know .NET features Array.Sort() but I wasn't too sure of how to use it as it's not just a single array that needs to be sorted. When a new score needs to be entered into the scoreboard, the player name and level also have to be added. These are stored within an 'array of arrays' (10 = for 'top 10' scores) scoreboardComboData = new int[10]; // Combo scoreboardTextData = new string[2][]; scoreboardTextData[0] = new string[10]; // Name scoreboardTextData[1] = new string[10]; // Level as string The scoreboard class works as follows: - Checks to see if 'scoreboard.xml' exists, if not it creates it - Initialises above arrays and adds any player data from scoreboard.xml, from previous run - when AddScore(name, level, combo) is called the sort begins - Another method can also be called that populates the XML file with above array data The sort checks to see if the new score (combo) is less than or equal to any recorded scores within the scoreboardComboData array (if it's greater than a score, it moves onto the next element). If so, it moves all scores below the score it is less than or equal to down one element, essentially removing the last score and then places the new score within the element below the score it is less than or equal to. If the score is greater than all recorded scores, it moves all scores down one and inserts the new score within the first element. If it's the only score, it simply adds it to the first element. When a new score is added, the Name and Level data is also added to their relevant arrays, in the same way. What a tongue twister. Below is the AddScore method, I've added comments in the hope that it makes things clearer O_o. You can get the actual source file HERE. Below the method is an example of the quickest way to add a score to follow through with a debugger. public static void AddScore(string name, string level, int combo) { // If the scoreboard has not yet been filled, this adds another 'active' // array element each time a new score is added. The actual array size is // defined within PopulateScoreBoard() (set to 10 - for 'top 10' if (totalScores < scoreboardComboData.Length) totalScores++; // Does the scoreboard even need sorting? 
if (totalScores > 1) { for (int i = totalScores - 1; i > - 1; i--) { // Check to see if score (combo) is greater than score stored in // array if (combo > scoreboardComboData[i] && i != 0) { // If so continue to next element continue; } // Check to see if score (combo) is less or equal to element 'i' // score && that the element is not the last in the // array, if so the score does not need to be added to the scoreboard else if (combo <= scoreboardComboData[i] && i != scoreboardComboData.Length - 1) { // If the score is lower than element 'i' and greater than the last // element within the array, it needs to be added to the scoreboard. This is achieved // by moving each element under element 'i' down an element. The new score is then inserted // into the array under element 'i' for (int j = totalScores - 1; j > i; j--) { // Name and level data are moved down in their relevant arrays scoreboardTextData[0][j] = scoreboardTextData[0][j - 1]; scoreboardTextData[1][j] = scoreboardTextData[1][j - 1]; // Score (combo) data is moved down in relevant array scoreboardComboData[j] = scoreboardComboData[j - 1]; } // The new Name, level and score (combo) data is inserted into the relevant array under element 'i' scoreboardTextData[0][i + 1] = name; scoreboardTextData[1][i + 1] = level; scoreboardComboData[i + 1] = combo; break; } // If the method gets the this point, it means that the score is greater than all scores within // the array and therefore cannot be added in the above way. As it is not less than any score within // the array. else if (i == 0) { // All Names, levels and scores are moved down within their relevant arrays for (int j = totalScores - 1; j != 0; j--) { scoreboardTextData[0][j] = scoreboardTextData[0][j - 1]; scoreboardTextData[1][j] = scoreboardTextData[1][j - 1]; scoreboardComboData[j] = scoreboardComboData[j - 1]; } // The new number 1 top name, level and score, are added into the first element // within each of their relevant arrays. scoreboardTextData[0][0] = name; scoreboardTextData[1][0] = level; scoreboardComboData[0] = combo; break; } // If the methods get to this point, the combo score is not high enough // to be on the top10 score list and therefore needs to break break; } } // As totalScores < 1, the current score is the first to be added. Therefore no checks need to be made // and the Name, Level and combo data can be entered directly into the first element of their relevant // array. else { scoreboardTextData[0][0] = name; scoreboardTextData[1][0] = level; scoreboardComboData[0] = combo; } } } Example for adding score: private static void Initialize() { scoreboardDoc = new XmlDocument(); if (!File.Exists("Scoreboard.xml")) GenerateXML("Scoreboard.xml"); PopulateScoreBoard("Scoreboard.xml"); // ADD TEST SCORES HERE! AddScore("EXAMPLE", "10", 100); AddScore("EXAMPLE2", "24", 999); PopulateXML("Scoreboard.xml"); } In it's current state the source file is just used for testing, initialize is called within main and PopulateScoreBoard handles the majority of other initialising, so nothing else is needed, except to add a test score. I thank you for your time!
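One common alternative to hand-rolled insertion over parallel arrays is to keep one record per score and let the framework sort the list. The sketch below is only an illustration of that idea, not a rewrite of the poster's scoreboard class; the type and member names are invented, and XML persistence is left out.

    using System;
    using System.Collections.Generic;

    // Hedged sketch: one record type per score instead of parallel arrays.
    public class ScoreEntry
    {
        public string Name  { get; set; }
        public string Level { get; set; }
        public int    Combo { get; set; }
    }

    public class Scoreboard
    {
        private const int MaxEntries = 10;
        private readonly List<ScoreEntry> entries = new List<ScoreEntry>();

        public void AddScore(string name, string level, int combo)
        {
            entries.Add(new ScoreEntry { Name = name, Level = level, Combo = combo });

            // Highest combo first; trim anything beyond the top 10.
            entries.Sort((a, b) => b.Combo.CompareTo(a.Combo));
            if (entries.Count > MaxEntries)
                entries.RemoveRange(MaxEntries, entries.Count - MaxEntries);
        }

        public IList<ScoreEntry> Entries { get { return entries; } }
    }

Keeping name, level and combo in a single object also maps naturally onto XML serialization, so the load/save code tends to get simpler as well.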

    Read the article

  • How do I prove I should put a table of values in source code instead of a database table?

    - by FastAl
    <tldr>looking for a reference to a book or other undeniably authoritative source that gives reasons when you should choose a database vs. when you should choose other storage methods. I have provided an un-authoritative list of reasons about 2/3 of the way down this post.</tldr> I have a situation at my company where a database is being used where it would be better to use another solution (in this case, an auto-generated piece of source code that contains a static lookup table, searched by binary sort). Normally, a database would be an OK solution even though the problem does not require a database, e.g, none of the elements of ACID are needed, as it is read-only data, updated about every 3-5 years (also requiring other sourcecode changes), and fits in memory, and can be keyed into via binary search (a tad faster than db, but speed is not an issue). The problem is that this code runs on our enterprise server, but is shared with several PC platforms (some disconnected, some use a central DB, etc.), and parts of it are managed by multiple programming units, parts by the DBAs, parts even by mathematicians in another department, etc. These hit their own platform’s version of their databases (containing their own copy of the static data). What happens is that every implementation, every little change, something different goes wrong. There are many other issues as well. I can’t even use a flatfile, because one mode of running on our enterprise server does not have permission to read files (only databases, and of course, its own literal storage, e.g., in-source table). Of course, other parts of the system use databases in proper, less obscure manners; there is no problem with those parts. So why don’t we just change it? I don’t have administrative ability to force a change. But I’m affected because sometimes I have to help fix the problems, but mostly because it causes outages and tons of extra IT time by other programmers and d*mmit that makes me mad! The reason neither management, nor the designers of the system, can see the problem is that they propose a solution that won’t work: increase communication; implement more safeguards and standards; etc. But every time, in a different part of the already-pared-down but still multi-step processes, a few different diligent, hard-working, top performing IT personnel make a unique subtle error that causes it to fail, sometimes after the last round of testing! And in general these are not single-person failures, but understandable miscommunications. And communication at our company is actually better than most. People just don't think that's the case because they haven't dug into the matter. However, I have it on very good word from somebody with extensive formal study of sociology and psychology that the relatively small amount of less-than-proper database usage in this gigantic cross-platform multi-source, multi-language project is bureaucratically un-maintainable. Impossible. No chance. At least with Human Beings in the loop, and it can’t be automated. In addition, the management and developers who could change this, though intelligent and capable, don’t understand the rigidity of this ‘how humans are’ issue, and are not convincible on the matter. 
The reason putting the static data in sourcecode will solve the problem is, although the solution is less sexy than a database, it would function with no technical drawbacks; and since the sharing of sourcecode already works very well, you basically erase any database-related effort from this section of the project, along with all the drawbacks of it that are causing problems. OK, that’s the background, for the curious. I won’t be able to convince management that this is an unfixable sociological problem, and that the real solution is coding around these limits of human nature, just as you would code around a bug in a 3rd party component that you can’t change. So what I have to do is exploit the unsuitableness of the database solution, and not do it using logic, but rather authority. I am aware of many reasons, and posts on this site giving reasons for one over the other; I’m not looking for lists of reasons like these (although you can add a comment if I've miss a doozy): WHY USE A DATABASE? instead of flatfile/other DB vs. file: if you need... Random Read / Transparent search optimization Advanced / varied / customizable Searching and sorting capabilities Transaction/rollback Locks, semaphores Concurrency control / Shared users Security 1-many/m-m is easier Easy modification Scalability Load Balancing Random updates / inserts / deletes Advanced query Administrative control of design, etc. SQL / learning curve Debugging / Logging Centralized / Live Backup capabilities Cached queries / dvlp & cache execution plans Interleaved update/read Referential integrity, avoid redundant/missing/corrupt/out-of-sync data Reporting (from on olap or oltp db) / turnkey generation tools [Disadvantages:] Important to get right the first time - professional design - but only b/c it's meant to last s/w & h/w cost Usu. over a network, speed issue (best vs. best design vs. local=even then a separate process req's marshalling/netwk layers/inter-p comm) indicies and query processing can stand in the way of simple processing (vs. flatfile) WHY USE FLATFILE: If you only need... Sequential Row processing only Limited usage append only (no reading, no master key/update) Only Update the record you're reading (fixed length recs only) Too big to fit into memory If Local disk / read-ahead network connection Portability / small system Email / cut & Paste / store as document by novice - simple format Low design learning curve but high cost later WHY USE IN-MEMORY/TABLE (tables, arrays, etc.): if you need... Processing a single db/ff record that was imported Known size of data Static data if hardcoding the table Narrow, unchanging use (e.g., one program or proc) -includes a class that will be shared, but encapsulates its data manipulation Extreme speed needed / high transaction frequency Random access - but search is dependent on implementation Following are some other posts about the topic: http://stackoverflow.com/questions/1499239/database-vs-flat-text-file-what-are-some-technical-reasons-for-choosing-one-over http://stackoverflow.com/questions/332825/are-flat-file-databases-any-good http://stackoverflow.com/questions/2356851/database-vs-flat-files http://stackoverflow.com/questions/514455/databases-vs-plain-text/514530 What I’d like to know is if anybody could recommend a hard, authoritative source containing these reasons. I’m looking for a paper book I can buy, or a reputable website with whitepapers about the issue (e.g., Microsoft, IBM), not counting the user-generated content on those sites. 
This will have a greater chance of eliciting the change I'm looking for: less wasted programmer time and more reliable programs. Thanks very much for your help. You win a prize for reading such a large post!

    Read the article

  • How to find an entry-level job after you already have a graduate degree?

    - by Uri
    Note: I asked this question in early 2009. A couple of months later, I found a great job. I've previously updated this question with some tips for whoever ends up in a similar situation, and now cleaned it up a little for the benefit of the fresh batch of graduates. Original post: In my early 20s I abandoned a great C++ development career path in a major company to go to graduate school and get a research masters (3 years). I did another year in industrial research, and then moved to the US to attend graduate school again, getting another masters and a Ph.D in software engineering from a top school (another 6 years down the drain). I was coding the whole way throughout my degrees (core Java and Eclipse plug-ins) and working on research related to software engineering (usability of APIs). I ended up graduating the year of the recession, with a son on the way and the prospects of no healthcare. Academic jobs and industrial research jobs are quite scarce. Initially, I was naive, thinking that with my background, I could easily find a coding job. Big mistake. It turns out that I'm in a complicated position. Entry level positions are usually offered to college undergraduates. I attended my school's career fairs, but you could immediately see signs of Ph.D. aversion and overqualification issues. Some of the recruiters I spoke with explicitly told me that they wanted 20 year olds with clean slates, and some were looking for interns since they are in various forms of hiring freezes. I managed to get a couple of interviews from these career fairs and through recruiters. However, since I've been out of school for a long time and programming primarily in Java, I am also no longer proficient in C/C++ and the usual range of college-level interview questions that everyone uses. I had no problems with this when I was 19 and interviewing for my first job since a lot of what you do in C is manipulate pointers and I was coding C++ for fun and for school. Later I was routinely doing pointer manipulation on the job, and during my first masters taught college courses with data structures and C++. But even though I remember many properties of C++ well, it's been close to ten years since I regularly used C++ and pointers. As a Java developer I rarely had to work at this level, but experience in OOD and in writing good maintainable code is meaningless for C++ interviews. Reading books as a refresh and looking at sample code did not do the trick. I also looked at mid-to-senior level Java positions, but most of them focused on J2EE APIs rather than on core Java and required a certain number of years in industrial positions. Coding research tools and prior C++ experience doesn't count. So that sends me back to entry-level jobs that are posted through job-boards, and these are not common (mostly they are Monster junk), and small companies are even less likely to answer a Ph.D. compared to the giants who participate in top-10 career fairs. Even worse, in many companies initial screening is done by HR folks who really don't want to deal with anything anomalous like a Ph.D. Any tips on how I should approach this intractable position? For example, what should I write in cover letters? Note that while immigration is not an issue for me, I cannot go freelance as I need the benefits (and in particular group health insurance). During my studies I had no time to contribute to open-source projects or maintain a popular blog, so even if I invested in that now there would be no immediate benefit. 
Updates: In the two months after posting this I received several offers to work as a core Java developer in the financial industry, and accepted one from a firm where I am working to this day. For those who find themselves in similar situations, here are my tips:

- Give up on trying to find an entry-level position. You can't undo time. Accept the fact that there is Ph.D. discrimination in the job market (some might say rightfully so). It is legal to discriminate based on education, and there is no point fighting it.
- The most important tip is to focus on the language you are comfortable with. The sad truth about programming in a particular language is that it is not like riding a bike. If you haven't used a language in the last few years, and can't actually apply it routinely (not just as a refresher) before you start your search, it is going to be very difficult to do well in an interview. Now that I'm interviewing others, I routinely see this in folks with a mixed C++/Java background: we maintain "a shadow" of the old language but end up with a weird mix that makes it hard to interview in either. Entry-level folks are at an advantage here since they usually have one language. Memory can help you do great in a screening interview, but without recent day-to-day experience, code tests will be difficult.
- Despite the supposed relation, core Java programming and J2EE programming are two different things with different skill sets. If you come from academia, you likely have very little J2EE experience and may find it hard to get accepted for a J2EE job. J2EE jobs seem to have a larger list of acronyms in their requirements. In addition, from interviewing J2EE developers it seems that for many there is a focus on mastering specific APIs and architectures, whereas core Java development tends to be secondary. In the same way that I can no longer manipulate pointers well, a J2EE developer may have difficulty doing low-level Java manipulation. This puts you at a relative advantage in competing for core Java jobs!
- If you are able to work for startups (in terms of family life and stability) or migrate to startup-rich areas such as the west coast, you can find many exciting opportunities where advanced degrees are a benefit. I've since been approached by several startups, although I had to decline.
- Work through a recruiter if possible. They have direct contacts with the hiring parties, allowing you to "stand out". It is better to get a clear yes/no answer from a recruiter on whether a company might be interested in interviewing you than to send your resume and hope that someone will ever see it. Recruiters are also a great way of bypassing HR. However, also beware of recruiters: they have a vested interest and will resort to various shady practices and pressure tactics. To find a good recruiter, talk to a friend who declined a job offer he got through a recruiter. A good recruiter, to me, is measured by how they handle that.
- Interview for the jobs that require your core strength. If you're rusty or entirely unfamiliar with a technology around which the job revolves, you're probably not a good match. Yes, you probably have the talent to master it, but most companies want "instant gratification". I got my offers from companies that wanted a core Java developer. I didn't do well at places that wanted advanced C++ because I am too rusty and not up to date on recent libraries. I also didn't hear from companies that wanted lots of J2EE experience, and that's OK.
- Finding companies that want core Java without the web is harder, but they exist in specific industries (e.g., finance, defense). This requires a lot more legwork in terms of search, but these jobs do exist.
- There are different interview styles. Some companies focus on puzzles, some on algorithms, and some on design and coding skills. I had the most success at places where the questions were most related to the function I would have been performing. Pick companies accordingly as well.

    Read the article

  • Error echoing id when I want to echo id for edit

    - by Prasanta Baidya
    I have a entry and edit page of a branch, I want echo id, when I mouse over into edit link in edit button, its show error,: branchedit.php?id=Note:Undefined index:id in line 101, but it work properly in localhost. error picture page link : https://www.dropbox.com/s/i1vu62lz3pezia0/id%20error.JPG My code: <?php include 'include/config.php'; include 'include/opendb.php'; include 'loginheader.php'; ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>Branch</title> <!--Requered Validation --> <link rel="stylesheet" type="text/css" media="screen" href="jqueryvalidation/demo/css/screen.css" /> <script src="jqueryvalidation/jquery.js" type="text/javascript"></script> <script src="jqueryvalidation/jquery.validate.js" type="text/javascript"></script> <script type="text/javascript"> $(document).ready(function() { $("#commentForm").validate(); }); </script> <!--End Requered Validation --> <style type="text/css"> <!-- body { background-color: #cccccc; } --> </style> <style type="text/css"> <!-- --> </style> <link href="css/usercss.css" rel="stylesheet" type="text/css" /> <style type="text/css"> <!-- .style7 { color: #000000; font-weight: bold; } .style8 {color: #FFFFFF} --> </style> </head> <body> <div id="container"> <table width="453" border="0" align="left" cellpadding="0" cellspacing="1"> <tr> <td width="451"><form name="cmxform" id="commentForm" method="post" action="insert_ac.php"> <table width="100%" border="0" cellspacing="1" cellpadding="3"> <tr> <td colspan="3" class="style2">Insert Branch into Database </td> </tr> <tr> <td width="100" height="46">Branch Code</td> <td width="18">:</td> <td width="309"><input name="branch_code" type="text" id="branch_code" minlength="3" class="required"></td> </tr> <tr> <td height="51">Branch Name</td> <td>:</td> <td><input name="branch_name" type="text" id="branch_name" class="required" ></td> </tr> <tr> <td height="47" colspan="3" align="center"> <div align="right"> <input name="Submit" type="submit" class="submit_button" value="Submit" /> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; </div></td> </tr> </table> </form></td> </tr> </table> <!--Branch List --> <?php $sql="SELECT * FROM dc_master"; $result=mysql_query($sql); ?> <table width="436" border="1" cellpadding="2" cellspacing="0" class="table" id="list"> <tr> <td colspan="4"><div align="center" class="style7">List of Branches </div></td> </tr> <tr class="style4" > <td width="87" align="center"><span class="style8">Branch Code</span></td> <td width="176" align="center" ><span class="style8">Branch Name</span></td> <td width="70" align="center" ><span class="style8">Edit</span></td> <td width="77" align="center" ><span class="style8">Delete</span></td> </tr> <?php while($rows=mysql_fetch_array($result)){ ?> <tr> <td height="28"><div align="center" class="style3"><?php echo $rows['branch_code']; ?></div></td> <td class="style3">&nbsp;&nbsp;&nbsp;<?php echo $rows['dc_name']; ?></td> <!--link to update.php and send value of id --> <td align="center"><a href="branchedit.php?id=<?php echo $rows['id']; ?>" class="style3 style5 style5">Edit</a></td> <td align="center"><a href="delete.php?id=<?php echo $rows['id']; ?>" class="style3 style5 style5" onclick="return confirm('Are you sure, you want to delete? 
(After delete you can not undo or get it again) <?php ?>')">Delete</a></td> </tr> <?php } ?> </table> <span class="footer">Programmer : Prasanta Baidya / Mobile : 09830980840 / Email id : [email protected]</span></div> <?php mysql_close(); ?> </body> </html>
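A note on the notice itself: PHP raises "Undefined index: id" when the $rows array returned by mysql_fetch_array() has no key called id, so either the dc_master result set has no column with that exact name, or the live server simply has notices enabled while the localhost install suppresses them. A minimal defensive sketch, assuming the table does expose a numeric key column (the column list below is illustrative, not taken from the post):

    <?php
    // Sketch only: name the key column explicitly instead of SELECT *,
    // and guard the echo so a missing key yields a harmless link
    // instead of a PHP notice embedded in the generated href.
    $sql    = "SELECT id, branch_code, dc_name FROM dc_master";
    $result = mysql_query($sql);

    while ($rows = mysql_fetch_array($result)) {
        $id = isset($rows['id']) ? (int) $rows['id'] : 0;
        echo '<a href="branchedit.php?id=' . $id . '">Edit</a>';
    }
    ?>

If the real key column turns out to have another name (say dc_id), aliasing it in the query (SELECT dc_id AS id ...) keeps the rest of the page unchanged.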

    Read the article

  • How to insert a sub-root node in an XML file

    - by pravakar
    Hi guys hope all are doing good. I want to create one sub root node in my xml file like, <CapitalJobsList xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <JobAds> -- element to create <JobAd> <AdvertiserDetails> <AdvertiserId>718508549</AdvertiserId> <AdvertiserName>ABC</AdvertiserName> </AdvertiserDetails> <ConsultantDetails> <ContactName>Naga Divakar</ContactName> <ContactPhone>6239 7755</ContactPhone> <ContactEmail>[email protected]</ContactEmail> <ContactFax>12345678912</ContactFax> </ConsultantDetails> <JobAdDetails> <DateEntered>2009-10-03T21:09:35.500</DateEntered> <AdvertiserJobRef>83754865</AdvertiserJobRef> <Title>IT Operations Manager</Title> <DescriptionShort>Large scale/exciting projects Mentor and manage o...</DescriptionShort> <Description>Large scale/exciting projects Mentor and manage others Management/technical mix This is a fantastic opportunity to join a high profile client who is active across both the commercial and Government domain. As the IT Operations Manager you will be responsible for leading and mentoring a small team of Infrastructure Engineers to ensure the availability and performance of the IT infrastructure. You w</Description> <SalaryMin>0.00</SalaryMin> <SalaryMax>0.00</SalaryMax> <WorkType xsi:nil="true" /> <Location>) as [JobAd/JobAdDetails/Bullets], isnull(Job</Location> <PostCode>2600</PostCode> <ClosingDate>2009-11-01T00:00:00</ClosingDate> <Keywords xsi:nil="true" /> <ApplyEmail xsi:nil="true" /> <ApplyURL>http://jobview.careerone.com.au/GetJob.aspx?JobID=83754865</ApplyURL> </JobAdDetails> <JobAdOptions> <BlindPost xsi:nil="true" /> <AdFormatType xsi:nil="true" /> <AdTemplateName xsi:nil="true" /> <ShowContactDetails xsi:nil="true" /> <ShowSalary xsi:nil="true" /> <HasVideo xsi:nil="true" /> <ResumeRequired>1</ResumeRequired> <ResidentsOnly>0</ResidentsOnly> </JobAdOptions> <CategoryList> <Category xsi:nil="true" /> </CategoryList> <RegionsList> <Region>ACT</Region> </RegionsList> <LevelsList> <Level xsi:nil="true" /> </LevelsList> </JobAd> <JobAd> <AdvertiserDetails> <AdvertiserId>718508549</AdvertiserId> <AdvertiserName>ABC</AdvertiserName> </AdvertiserDetails> <ConsultantDetails> <ContactName>Naga Divakar</ContactName> <ContactPhone>6239 7755</ContactPhone> <ContactEmail>[email protected]</ContactEmail> <ContactFax>12345678912</ContactFax> </ConsultantDetails> <JobAdDetails> <DateEntered>2009-10-03T21:09:35.530</DateEntered> <AdvertiserJobRef>83731488</AdvertiserJobRef> <Title>SAP Developers Required in Canberra - 12 month contract</Title> <DescriptionShort>My client, a large government department in Canbe...</DescriptionShort> <Description>My client, a large government department in Canberra, seeks two SAP Developers for 12 month ongoing contracts. Two SAP Developers Required Expert level ABAP programming skills Large SAP landscape - SAP R/3, SAP Web, SAP BI, SAP ITS My client, a large government department in Canberra, seeks two SAP Developers for 12 month ongoing contracts. 
My client is a large government department in Canberra, a</Description> <SalaryMin>0.00</SalaryMin> <SalaryMax>0.00</SalaryMax> <WorkType xsi:nil="true" /> <Location>) as [JobAd/JobAdDetails/Bullets], isnull(Job</Location> <PostCode>2600</PostCode> <ClosingDate>2009-11-01T00:00:00</ClosingDate> <Keywords xsi:nil="true" /> <ApplyEmail xsi:nil="true" /> <ApplyURL>http://jobview.careerone.com.au/GetJob.aspx?JobID=83731488</ApplyURL> </JobAdDetails> <JobAdOptions> <BlindPost xsi:nil="true" /> <AdFormatType xsi:nil="true" /> <AdTemplateName xsi:nil="true" /> <ShowContactDetails xsi:nil="true" /> <ShowSalary xsi:nil="true" /> <HasVideo xsi:nil="true" /> <ResumeRequired>1</ResumeRequired> <ResidentsOnly>0</ResidentsOnly> </JobAdOptions> <CategoryList> <Category xsi:nil="true" /> </CategoryList> <RegionsList> <Region>ACT</Region> </RegionsList> <LevelsList> <Level xsi:nil="true" /> </LevelsList> </JobAd> </JobAds> </CapitalJobsList> I have used the sql query for xml path like: select r.advid as [JobAd/AdvertiserDetails/AdvertiserId], CompanyName as [JobAd/AdvertiserDetails/AdvertiserName], firstname +'' ''+ lastname as [JobAd/ConsultantDetails/ContactName], WorkPhone as [JobAd/ConsultantDetails/ContactPhone], AdvEmail as [JobAd/ConsultantDetails/ContactEmail], FaxNo as [JobAd/ConsultantDetails/ContactFax], Job_CreatedDate as [JobAd/JobAdDetails/DateEntered], Job_Id as [JobAd/JobAdDetails/AdvertiserJobRef], Job_Title as [JobAd/JobAdDetails/Title], substring(Job_Description,0,50)+''...'' as [JobAd/JobAdDetails/DescriptionShort], Job_Description as [JobAd/JobAdDetails/Description], CONVERT(DECIMAL(10,2),MinSalary) as [JobAd/JobAdDetails/SalaryMin], CONVERT(DECIMAL(10,2),MaxSalary) as [JobAd/JobAdDetails/SalaryMax], Job_Type as [JobAd/JobAdDetails/WorkType], isnull(Job_Bullets,'') as [JobAd/JobAdDetails/Bullets], isnull(Job_Location,'') as [JobAd/JobAdDetails/Location], Job_PostCode as [JobAd/JobAdDetails/PostCode], Job_ExpireDate as [JobAd/JobAdDetails/ClosingDate], Job_Keywords as [JobAd/JobAdDetails/Keywords], ApplyEmail as [JobAd/JobAdDetails/ApplyEmail], Job_BrandURL+Job_Id as [JobAd/JobAdDetails/ApplyURL], BlindPost as [JobAd/JobAdOptions/BlindPost], AdFormatType as [JobAd/JobAdOptions/AdFormatType], AdTemplateName as [JobAd/JobAdOptions/AdTemplateName], ShowContactDetails as [JobAd/JobAdOptions/ShowContactDetails], ShowSalary as [JobAd/JobAdOptions/ShowSalary], HasVideo as [JobAd/JobAdOptions/HasVideo], ResumeRequired as [JobAd/JobAdOptions/ResumeRequired], ResidentsOnly as [JobAd/JobAdOptions/ResidentsOnly], Job_Category as [JobAd/CategoryList/Category], Job_Location_State as [JobAd/RegionsList/Region], [Level] as [JobAd/LevelsList/Level] from DR_Adv_Registration r, DR_CareerOne_ACTJobs j where r.Advid = j.Advid and job_location_city like(''%'+''+ @City +''+'%'') and job_location_state in('''+ @State +''') and job_status=1 for xml path(''''), Root(''CapitalJobsList''),ELEMENTS XSINIL So, suggest me how to get the sub root node. Thanks in advance
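For what it's worth, the usual FOR XML PATH technique for adding a wrapper element such as <JobAds> is to nest the per-row query as a subquery with the TYPE directive, and let the outer SELECT supply the wrapper and the root. A trimmed sketch of that shape, showing only a few of the columns from the query above (the remaining columns follow the same pattern; the leading JobAd/ is dropped from the aliases because the inner PATH('JobAd') now supplies that element):

    SELECT (
        SELECT r.advid       AS [AdvertiserDetails/AdvertiserId],
               r.CompanyName AS [AdvertiserDetails/AdvertiserName],
               j.Job_Title   AS [JobAdDetails/Title]
        FROM DR_Adv_Registration r
             JOIN DR_CareerOne_ACTJobs j ON r.Advid = j.Advid
        WHERE j.job_status = 1
        FOR XML PATH('JobAd'), TYPE
    )
    FOR XML PATH('JobAds'), ROOT('CapitalJobsList'), ELEMENTS XSINIL

The TYPE directive keeps the inner result as typed xml rather than an escaped string, so the JobAd elements end up nested inside <JobAds> instead of being entity-encoded.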

    Read the article

  • Am I just not understanding TDD unit testing (Asp.Net MVC project)?

    - by KallDrexx
    I am trying to figure out how to correctly and efficiently unit test my ASP.NET MVC project. When I started on this project I bought the Pro ASP.NET MVC book, and with that book I learned about TDD and unit testing. After seeing the examples, and given that I work as a software engineer in QA in my current company, I was amazed at how awesome TDD seemed to be. So I started working on my project and went gung-ho writing unit tests for my database layer, business layer, and controllers. Everything got a unit test prior to implementation. At first I thought it was awesome, but then things started to go downhill. Here are the issues I started encountering:

I ended up writing application code in order to make it possible for unit tests to be performed. I don't mean this in a good way, as in my code was broken and I had to fix it so the unit tests pass. I mean that abstracting out the database to a mock database is impossible due to the use of LINQ for data retrieval (using the generic repository pattern). The reason is that with LINQ to SQL or LINQ to Entities you can do joins just by doing:

    var objs = from p in _container.Projects select p.Objects;

However, if you mock the database layer out, in order to have that LINQ pass the unit test you must change it to:

    var objs = from p in _container.Projects
               join o in _container.Objects on p.Id equals o.ProjectId
               select o;

Not only does this mean you are changing your application logic just so you can unit test it, but you are making your code less efficient for the sole purpose of testability, and getting rid of a lot of the advantages of using an ORM in the first place. Furthermore, since a lot of the IDs for my models are database generated, I had to write additional code to handle the non-database tests, since IDs were never generated and I still had to handle those cases for the unit tests to pass, yet they would never occur in real scenarios. Thus I ended up throwing out my database unit testing.

Writing unit tests for controllers was easy as long as I was returning views. However, the major part of my application (and the one that would benefit most from unit testing) is a complicated AJAX web application. For various reasons I decided to change the app from returning views to returning JSON with the data I needed. After this my unit tests became extremely painful to write, as I have not found any good way to write unit tests for non-trivial JSON. After pounding my head and wasting a ton of time trying to find a good way to unit test the JSON, I gave up and deleted all of my controller unit tests (all controller actions are focused on this part of the app so far).

So finally I was left with testing the service layer (BLL). Right now I am using EF4; however, I had this issue with LINQ to SQL as well. I chose the EF4 model-first approach because, to me, it makes sense to do it that way (define my business objects and let the framework figure out how to translate them into the SQL backend). This was fine at the beginning, but now it is becoming cumbersome due to relationships. For example, say I have Project, User, and Object entities. One Object must be associated with a project, and a project must be associated with a user. This is not only a database-specific rule; these are my business rules as well. However, say I want to write a unit test that verifies I am able to save an object (for a simple example).
I now have to do the following code just to make sure the save worked:

    User usr = new User { Name = "Me" };
    _userService.SaveUser(usr);
    Project prj = new Project { Name = "Test Project", Owner = usr };
    _projectService.SaveProject(prj);
    Object obj = new Object { Name = "Test Object" };
    _objectService.SaveObject(obj);
    // Perform verifications

There are several issues with having to do all this just to perform one unit test. For starters, if I add a new dependency, such as "all projects must belong to a category", I must go into EVERY single unit test that references a project, add code to save the category, and then add code to add the category to the project. This can be a HUGE effort down the road for a very simple business logic change, and yet almost none of the unit tests I will be modifying for this requirement are actually meant to test that feature/requirement. If I then add verifications to my SaveProject method, so that projects cannot be saved unless they have a name with at least 5 characters, I have to go through every Object and Project unit test to make sure that the new requirement doesn't make any unrelated unit tests fail. If there is an issue in the UserService.SaveUser() method, it will cause all project and object unit tests to fail, and the cause won't be immediately noticeable without digging through the exceptions. Thus I have removed all service layer unit tests from my project.

I could go on and on, but so far I have not seen any way for unit testing to actually help me rather than get in my way. I can see specific cases where I can, and probably will, implement unit tests, such as making sure my data verification methods work correctly, but those cases are few and far between. Some of my issues can probably be mitigated, but not without adding extra layers to my application, and thus making more points of failure just so I can unit test. So I have no unit tests left in my code. Luckily I use source control heavily, so I can get them back if I need them, but I just don't see the point. Everywhere on the internet I see people talking about how great TDD and unit tests are, and I'm not just talking about the fanatical people. The few people who dismiss TDD/unit tests give bad arguments, claiming they are more efficient debugging by hand through the IDE, or that their coding skills are so amazing that they don't need tests. I recognize that both of those arguments are utter bollocks, especially for a project that needs to be maintainable by multiple developers, but any valid rebuttals to TDD seem to be few and far between. So the point of this post is to ask: am I just not understanding how to use TDD and automated unit tests?
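One common way to blunt the setup problem described above is to push the "save a valid user, then a valid project" chain into a single test-data helper, so a new prerequisite (such as the category rule) is absorbed in one place instead of in every test. A rough sketch that reuses the entity names from the post; the TestData helper and the service interface names are assumptions, not existing project code:

    // Sketch only: a hypothetical helper that returns already-saved,
    // valid entities so individual tests state only what they care about.
    public static class TestData
    {
        public static User SavedUser(IUserService users)
        {
            var user = new User { Name = "Test User" };
            users.SaveUser(user);
            return user;
        }

        public static Project SavedProject(IUserService users, IProjectService projects)
        {
            // If a rule like "every project needs a category" arrives later,
            // only this method changes, not every test that touches a project.
            var project = new Project { Name = "Test Project", Owner = SavedUser(users) };
            projects.SaveProject(project);
            return project;
        }
    }

With a helper like this, the object test above shrinks to creating the Object against TestData.SavedProject(...) and then performing the verifications, and a change to the project rules no longer ripples through unrelated tests.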

    Read the article

  • [C++] A minimalistic smart array (container) class template

    - by legends2k
    I've written a (array) container class template (lets call it smart array) for using it in the BREW platform (which doesn't allow many C++ constructs like STD library, exceptions, etc. It has a very minimal C++ runtime support); while writing this my friend said that something like this already exists in Boost called MultiArray, I tried it but the ARM compiler (RVCT) cries with 100s of errors. I've not seen Boost.MultiArray's source, I've just started learning template only lately; template meta programming interests me a lot, although am not sure if this is strictly one, which can be categorised thus. So I want all my fellow C++ aficionados to review it ~ point out flaws, potential bugs, suggestions, optimisations, etc.; somthing like "you've not written your own Big Three which might lead to...". Possibly any criticism that'll help me improve this class and thereby my C++ skills. smart_array.h #include <vector> using std::vector; template <typename T, size_t N> class smart_array { vector < smart_array<T, N - 1> > vec; public: explicit smart_array(vector <size_t> &dimensions) { assert(N == dimensions.size()); vector <size_t>::iterator it = ++dimensions.begin(); vector <size_t> dimensions_remaining(it, dimensions.end()); smart_array <T, N - 1> temp_smart_array(dimensions_remaining); vec.assign(dimensions[0], temp_smart_array); } explicit smart_array(size_t dimension_1 = 1, ...) { static_assert(N > 0, "Error: smart_array expects 1 or more dimension(s)"); assert(dimension_1 > 1); va_list dim_list; vector <size_t> dimensions_remaining(N - 1); va_start(dim_list, dimension_1); for(size_t i = 0; i < N - 1; ++i) { size_t dimension_n = va_arg(dim_list, size_t); assert(dimension_n > 0); dimensions_remaining[i] = dimension_n; } va_end(dim_list); smart_array <T, N - 1> temp_smart_array(dimensions_remaining); vec.assign(dimension_1, temp_smart_array); } smart_array<T, N - 1>& operator[](size_t index) { assert(index < vec.size() && index >= 0); return vec[index]; } size_t length() const { return vec.size(); } }; template<typename T> class smart_array<T, 1> { vector <T> vec; public: explicit smart_array(vector <size_t> &dimension) : vec(dimension[0]) { assert(dimension[0] > 0); } explicit smart_array(size_t dimension_1 = 1) : vec(dimension_1) { assert(dimension_1 > 0); } T& operator[](size_t index) { assert(index < vec.size() && index >= 0); return vec[index]; } size_t length() { return vec.size(); } }; Sample Usage: #include <iostream> using std::cout; using std::endl; int main() { // testing 1 dimension smart_array <int, 1> x(3); x[0] = 0, x[1] = 1, x[2] = 2; cout << "x.length(): " << x.length() << endl; // testing 2 dimensions smart_array <float, 2> y(2, 3); y[0][0] = y[0][1] = y[0][2] = 0; y[1][0] = y[1][1] = y[1][2] = 1; cout << "y.length(): " << y.length() << endl; cout << "y[0].length(): " << y[0].length() << endl; // testing 3 dimensions smart_array <char, 3> z(2, 4, 5); cout << "z.length(): " << z.length() << endl; cout << "z[0].length(): " << z[0].length() << endl; cout << "z[0][0].length(): " << z[0][0].length() << endl; z[0][0][4] = 'c'; cout << z[0][0][4] << endl; // testing 4 dimensions smart_array <bool, 4> r(2, 3, 4, 5); cout << "z.length(): " << r.length() << endl; cout << "z[0].length(): " << r[0].length() << endl; cout << "z[0][0].length(): " << r[0][0].length() << endl; cout << "z[0][0][0].length(): " << r[0][0][0].length() << endl; // testing copy constructor smart_array <float, 2> copy_y(y); cout << "copy_y.length(): " << copy_y.length() << endl; cout << "copy_x[0].length(): " 
<< copy_y[0].length() << endl; cout << copy_y[0][0] << "\t" << copy_y[1][0] << "\t" << copy_y[0][1] << "\t" << copy_y[1][1] << "\t" << copy_y[0][2] << "\t" << copy_y[1][2] << endl; return 0; }

    Read the article

  • Overlapping and Stacking in CSS

    - by ApacheCode
    Hello, I'm a programmer and learning new techniques in web development. I've ran into a problem if you could look at the link below. http://bailesslaw.com/Bailess_003/bailesHeader/header.html This example I made isnt fixed and it needs to be, which is becoming difficult. This looks fine on here, but when I put those layers on main website, index.html, place this code as the header, the banner moves in the documents position 0,0 . I need these boxes fixed, center page and I cannot get them to do that without messing up the layers order and content. Layer1-rotating images, js causes the rotation Layer2-blue triangle with backdrop effect overlapping layer 1, Layer 3-is a static image with a high z-index Below I including some code, the important part that needs 3 overlapped layers exactly matching in width and height, except it has to be fixed in center 780px wide. Code: <style rel="stylesheet" type="text/css"> div#layer1 { border: 1px solid #000; height: 200px; left: 0px; position: fixed; top: 0px; width: 780px; z-index: 1; } div#layer2 { border: 1px solid #000; height: 200px; left: 0px; position: absolute; top: 0px; width: 780px; z-index: 2; } div#layer3 { border: 1px solid #000000; height: 200px; left: 0px; position: absolute; top: 0px; width: 780px; z-index: 3; } </style> </head> <body class="oneColFixCtr"> <div id="container"> <div id="mainContent"> <div id="layer1"> </div> <div id="layer2"> <div class="slideshow"> <span id="rotating1"> <p class="rotating"> </p> </span> <span id="rotating2"> <p class="rotating"> </p> </span> <span id="rotating3"> <p class="rotating"> </p> </span> <span id="rotating4"> <p class="rotating"> </p> </span> </div> </div> <div id="layer3"> <table width="385" border="0"> <tr> <th width="81" scope="col"> &nbsp; </th> <th width="278" scope="col"> &nbsp; </th> <th width="12" scope="col"> &nbsp; </th> </tr> <tr> <td> &nbsp; </td> <td> </td> <td> &nbsp; </td> </tr> <tr> <td> &nbsp; </td> <td> &nbsp; </td> <td> &nbsp; </td> </tr> </table> </div> </div> <!-- end #container --> </div> </body> </body> </html> CSS: @charset "utf-8"; CSS code: #rotating1 { height: 200px; width: 780px; } #rotating2 { height: 200px; width: 780px; } #rotating3 { height: 200px; width: 780px; } #main { background-repeat: no-repeat; height: 200px; width: 780px; z-index: 100; } #test { width: 780px; z-index: 2; } #indexContent { background-color: #12204d; background-repeat: no-repeat; height: 200px; width: 780px; z-index: 1; } #indexContent p { padding: .5em 2em .5em 2em; text-align: justify; text-indent: 2em; } .rotating { float: right; margin-top: 227px; text-indent: 0px !important; } .clearfix:after { clear: both; content: " "; display: block; font-size: 0; height: 0; visibility: hidden; } .clearfix { display: inline-block; } * html .clearfix { height: 1%; } .clearfix { display: block; }
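On the centering question itself: a position: fixed box can be centered horizontally by anchoring its left edge at 50% of the viewport and pulling it back by half its own width. A minimal sketch along those lines, reusing the layer ids from the code above (the -390px value is simply half of the 780px width, and this is an illustration rather than a drop-in fix for the full page):

    /* Sketch: keep each 780px layer fixed and centered in the viewport.
       left: 50% puts the layer's left edge at the middle of the page;
       margin-left: -390px shifts the box back by half its width. */
    div#layer1,
    div#layer2,
    div#layer3 {
        position: fixed;
        top: 0;
        left: 50%;
        width: 780px;
        height: 200px;
        margin-left: -390px;
    }

    /* The stacking order stays as before. */
    div#layer1 { z-index: 1; }
    div#layer2 { z-index: 2; }
    div#layer3 { z-index: 3; }

Because all three layers share the same offsets and size, they stay exactly on top of one another, and z-index alone decides which layer shows through.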

    Read the article

< Previous Page | 203 204 205 206 207 208  | Next Page >