Search Results

Search found 15353 results on 615 pages for 'native methods'.

  • What constitutes a "substantial, good-faith effort to remove the links"

    - by Luke McCallum
    We engaged the services of a third-party SEO consultant to assist us in managing our meta data and to write regular blogs on our site http://cyberdesignworks.com.au. Without our authorisation, the SEO also ran a link-building campaign which has seen us Penguin-slapped, and we no longer appear in Google for a number of our core keywords. Since notification by Google back in March that we have "unnatural links", we have undertaken a significant campaign to rid ourselves of these dodgy backlinks by a number of methods. I have just received feedback on my 4th or 5th resubmission, which still advises that we need to make a "substantial, good-faith effort to remove the links" before Google will reconsider us for inclusion. After the effort that I have gone through to get links removed, I am now at a loss as to what else I can do to demonstrate a "substantial, good-faith effort to remove the links". Below is a summary of the actions that we have taken to date.

    According to http://removem.com we had about 5584 back-linking domains. Of those:

      • We successfully contacted 344 domains and had their links removed.
      • We ignored links from 625 domains, as they were either legitimate press releases, natural backlinks, or client websites containing an attribution link in the footer that points back to us.
      • Due to our efforts, or the sites simply becoming defunct, removem.com reports that links from 3262 domains have been removed.
      • We have contacted but are yet to receive feedback from 1666 domains, so we can assume those backlinks remain.

    We have configured an automatic 301 redirect for each of the links from these 1666 domains to point to http://redirects.sanscode.com/, which we are calling our Bad Link Catcher (a stroke of genius, I thought), e.g. http://www.mysimplewebdesign.com/create-a-perfect-webpage-with-four-important-tips-from-sydney-web-development-service-companies.php

    As we are a web design agency, we have a large number of client websites which contain an attribution link in their footer pointing back to us. We have gone through the vast majority of these and updated the links, replacing the anchor text with an image and a rel="nofollow" link, i.e.

        <a rel="nofollow" target="_blank" href="http://www.cyberdesignworks.com.au/"><img src="https://sessions.sanscode.com/site/assets/media/badges/Badge_CDW_SANSCODE.png"></a>

    See http://www.milkatwork.com.au/ for an example. An export from http://removem.com detailing the number of times we contacted each link, and whether it is still found or not, was also supplied with each resubmission. The total backlinks reported in Google Webmaster Tools has dropped from over 100K to 87K, and I expect it to drop significantly lower once Google re-crawls each back-linking page.

    Based on all of the above, I am not sure what else I can do to demonstrate a "substantial, good-faith effort to remove the links". I would sincerely appreciate any feedback or suggestions that you may have, as I am out of ideas.
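    For illustration only: a blanket 301 of the kind described is often done with Apache mod_rewrite, along these lines. The URL pattern below is hypothetical; the real rules would list the actual landing pages the dodgy links point at.

        # Hedged sketch of a "bad link catcher" rule in .htaccess.
        # The pattern is invented; substitute the real target pages.
        RewriteEngine On
        RewriteRule ^create-a-perfect-webpage-.*$ http://redirects.sanscode.com/ [R=301,L]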

  • How to shoot a triangle out of an asteroid which floats all of the way up to the screen?

    - by Holland
    I currently have an asteroid texture loaded as my "test player" for the game I'm writing. What I'm trying to figure out is how to get a triangle to shoot from the center of the asteroid and keep going until it hits the top of the screen. What happens in my case (as you'll see from the code I've posted) is that the triangle will show, however it will either be a long line, or it will just be a single triangle which stays in the same location as the asteroid moving around (and disappears when I stop pressing the space bar), or it simply won't appear at all. I've tried many different methods, but I could use a formula here. All I'm trying to do is write a Space Invaders clone for my final in C#. I know how to code fairly well; my formulas just need work is all. So far, this is what I have:

    Main logic code:

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(ClearOptions.Target, Color.Black, 1, 1);
            mAsteroid.Draw(mSpriteBatch);
            if (mIsFired)
            {
                mPositions.Add(mAsteroid.LastPosition);
                mRay.Fire(mPositions);
                mIsFired = false;
                mRay.Bullets.Clear();
                mPositions.Clear();
            }
            base.Draw(gameTime);
        }

    Draw code:

        public void Draw()
        {
            VertexPositionColor[] vertices = new VertexPositionColor[3];
            int stopDrawing = mGraphicsDevice.Viewport.Width / mGraphicsDevice.Viewport.Height;
            for (int i = 0; i < mRayPos.Length(); ++i)
            {
                vertices[0].Position = new Vector3(mRayPos.X, mRayPos.Y + 5f, 10);
                vertices[0].Color = Color.Blue;
                vertices[1].Position = new Vector3(mRayPos.X - 5f, mRayPos.Y - 5f, 10);
                vertices[1].Color = Color.White;
                vertices[2].Position = new Vector3(mRayPos.X + 5f, mRayPos.Y - 5f, 10);
                vertices[2].Color = Color.Red;
                mShader.CurrentTechnique.Passes[0].Apply();
                mGraphicsDevice.DrawUserPrimitives<VertexPositionColor>(PrimitiveType.TriangleStrip, vertices, 0, 1);
                mRayPos += new Vector2(0, 1f);
                mGraphicsDevice.ReferenceStencil = 1;
            }
        }
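    As an editorial aside, the usual structure that avoids all three symptoms is to advance bullets in Update and only render them in Draw; a minimal sketch follows, where Bullet, mBullets, and mWasSpaceDown are illustrative names that are not in the original post:

        // Sketch only: field and type names are invented for illustration.
        class Bullet { public Vector2 Position; }

        private List<Bullet> mBullets = new List<Bullet>();
        private bool mWasSpaceDown;

        protected override void Update(GameTime gameTime)
        {
            KeyboardState kb = Keyboard.GetState();
            if (kb.IsKeyDown(Keys.Space) && !mWasSpaceDown)
                mBullets.Add(new Bullet { Position = mAsteroid.LastPosition }); // one bullet per press
            mWasSpaceDown = kb.IsKeyDown(Keys.Space);

            float dt = (float)gameTime.ElapsedGameTime.TotalSeconds;
            for (int i = mBullets.Count - 1; i >= 0; --i)
            {
                mBullets[i].Position.Y -= 300f * dt;   // travel toward the top of the screen
                if (mBullets[i].Position.Y < 0)
                    mBullets.RemoveAt(i);              // discard once off-screen
            }
            base.Update(gameTime);
        }

    Draw then just loops over mBullets and issues one DrawUserPrimitives call per bullet, without changing any positions.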

  • Running Multiple Queries in Oracle SQL Developer

    - by thatjeffsmith
    There are two methods for running queries in SQL Developer:

      • Run Statement: Shift+Enter, F9, or the Run Statement toolbar button
      • Run Script: no grids, just script (SQL*Plus-like) output is fine, thank you very much!

    What's the Difference? There are some obvious differences between the two features, the most obvious being the format of the output delivered. But there are other, more subtle differences, primarily around fetching.

    What is Fetch? After you send your query to Oracle, it has to do three things: parse, execute, and fetch. Technically it has to do at least two of those, and sometimes only one. But to get the data back to the user, the fetch must occur. Whether you have a 10-row query or a 1,000,000-row query, this can mean one or many fetches in groups of records.

    OK, before I went off on the fetch tangent, I said there were two ways to run statements in SQL Developer:

    Run Statement brings your query results to a grid with a single fetch. The user sees 50, 100, 500, etc. rows come back, but SQL Developer and the database know that there are more rows waiting to be retrieved. The process on the server that was used to execute the query is still hanging around, too. To alleviate this, increase your fetch size to 500. Every query you run will come back with the first 500 rows, and rows will continue to be fetched in 500-row increments. You'll then see most of your ad hoc queries complete with a single fetch. Scroll down, or hit Ctrl+End, to force a full fetch and get all your rows back.

    Run Script runs the contents of the worksheet (or whatever is highlighted) as a 'script'. What does that mean exactly? Think of it as being equivalent to running this in SQL*Plus:

        @my_script.sql;

    Each statement is executed, and ALL rows are fetched, so once it has finished executing there are no open cursors left around. The more obvious difference is that the output comes back formatted as plain old text. You can run one or more SQL commands plus SQL*Plus commands like SET and SPOOL.

    The Trick: Run Statement Works With Multiple Statements! It says 'run statement', but if you select more than one statement with your mouse and hit the button, it will run each one and send the results to one grid per statement. If you hover over a Query Result panel tab, SQL Developer will tell you the query used to populate that grid. This works regardless of what you have this preference set to: Database - Worksheet - Show query results in new tabs. Mind the fetch, though! Close those cursors by bringing back all the records or closing the grids when you're done with them.
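    To make the distinction concrete, here is a hedged sketch of the kind of worksheet content that only behaves sensibly under Run Script; the file and table names are invented:

        -- Illustrative only: SET and SPOOL are SQL*Plus commands, so Run Script
        -- honors them while Run Statement would not.
        SET LINESIZE 200
        SPOOL employee_report.txt
        SELECT * FROM employees;
        SPOOL OFF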

  • Questions re: Eclipse Jobs API

    - by BenCole
    Similar to http://stackoverflow.com/questions/8738160/eclipse-jobs-api-for-a-stand-alone-swing-project, this question concerns the Jobs API from the Eclipse IDE:

    ...The disadvantage of the pre-3.0 approach was that the user had to wait until an operation completed before the UI became responsive again. The UI still provided the user the ability to cancel the currently running operation, but no other work could be done until the operation completed. Some operations were performed in the background (resource decoration and JDT file indexing are two such examples), but these operations were restricted in the sense that they could not modify the workspace. If a background operation did try to modify the workspace, the UI thread would be blocked if the user explicitly performed an operation that modified the workspace and, even worse, the user would not be able to cancel the operation. A further complication with concurrency was that the interaction between the independent locking mechanisms of different plug-ins often resulted in deadlock situations. Because of the independent nature of the locks, there was no way for Eclipse to recover from the deadlock, which forced users to kill the application...

    ...The functionality provided by the workspace locking mechanism can be broken down into the following three aspects:

      • Resource locking, to ensure multiple operations did not concurrently modify the same resource
      • Resource change batching, to ensure UI stability during an operation
      • Identification of an appropriate time to perform incremental building

    With the introduction of the Jobs API, these areas have been divided into separate mechanisms and a few additional facilities have been added. The following list summarizes the facilities added:

      • Job class: support for performing operations or other work in the background.
      • ISchedulingRule interface: support for determining which jobs can run concurrently.
      • WorkspaceJob and two IWorkspace#run() methods: support for batching of delta change notifications.
      • Background auto-build: running of incremental build at a time when no other running operations are affecting resources.
      • ILock interface: support for deadlock detection and recovery.
      • Job properties for configuring user feedback for jobs run in the background.

    The rest of this article provides examples of how to use the above-mentioned facilities...

    With regard to the above API: is this an implementation of a particular design pattern? Which one?
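    For readers unfamiliar with the API under discussion, a minimal, unofficial sketch of scheduling a background job looks roughly like this (the job name and class are invented):

        // Minimal sketch of the Eclipse Jobs API; error handling omitted.
        import org.eclipse.core.runtime.IProgressMonitor;
        import org.eclipse.core.runtime.IStatus;
        import org.eclipse.core.runtime.Status;
        import org.eclipse.core.runtime.jobs.Job;

        public class IndexingJobExample {
            public static void start() {
                Job job = new Job("Indexing files") {
                    @Override
                    protected IStatus run(IProgressMonitor monitor) {
                        // ...do the background work, checking monitor for cancellation...
                        return monitor.isCanceled() ? Status.CANCEL_STATUS : Status.OK_STATUS;
                    }
                };
                job.setPriority(Job.LONG); // hint: long-running, lower priority
                job.schedule();            // queue for background execution
            }
        }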

  • How to refactor a Python “god class”?

    - by Zearin
    Problem

    I’m working on a Python project whose main class is a bit of a “God Object”. There are so friggin’ many attributes and methods! I want to refactor the class.

    So Far…

    For the first step, I want to do something relatively simple; but when I tried the most straightforward approach, it broke some tests and existing examples. Basically, the class has a loooong list of attributes, but I can clearly look over them and think, “These 5 attributes are related… These 8 are also related… and then there’s the rest.”

    getattr

    I basically just wanted to group the related attributes into a dict-like helper class. I had a feeling __getattr__ would be ideal for the job. So I moved the attributes to a separate class, and, sure enough, __getattr__ worked its magic perfectly well… at first. But then I tried running one of the examples. The example subclass tries to set one of these attributes directly (at the class level). But since the attribute was no longer “physically located” in the parent class, I got an error saying that the attribute did not exist.

    @property

    I then read up about the @property decorator. But then I also read that it creates problems for subclasses that want to do self.x = blah when x is a property of the parent class.

    Desired

      • Have all client code continue to work using self.whatever, even if the parent’s whatever property is not “physically located” in the class (or instance) itself.
      • Group related attributes into dict-like containers.
      • Reduce the extreme noisiness of the code in the main class.

    For example, I don’t simply want to change this:

        larry = 2
        curly = 'abcd'
        moe = self.doh()

    into this:

        larry = something_else('larry')
        curly = something_else('curly')
        moe = yet_another_thing.moe()

    …because that’s still noisy. Although that successfully turns a simple attribute into something that can manage the data, the original had 3 variables and the tweaked version still has 3 variables. However, I would be fine with something like this:

        stooges = Stooges()

    And if a lookup for self.larry fails, something would check stooges and see if larry is there. (But it must also work if a subclass tries to do larry = 'blah' at the class level.)

    Summary

      • Want to replace related groups of attributes in a parent class with a single attribute that stores all the data elsewhere.
      • Want to work with existing client code that uses (e.g.) larry = 'blah' at the class level.
      • Want to continue to allow subclasses to extend, override, and modify these refactored attributes without knowing anything has changed.

    Is this possible? Or am I barking up the wrong tree?
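    A rough sketch of the delegation idea from the getattr section above, using the post's own names; the fallback logic is illustrative, not a complete answer:

        # Sketch only: groups related attributes in a helper object and falls
        # back to it when normal attribute lookup fails.
        class Stooges:
            def __init__(self):
                self.larry = 2
                self.curly = 'abcd'
                self.moe = None

        class Big:
            def __init__(self):
                self.stooges = Stooges()

            def __getattr__(self, name):
                # Called only when normal lookup on the instance/class fails.
                try:
                    return getattr(self.stooges, name)
                except AttributeError:
                    raise AttributeError(name)

        b = Big()
        print(b.larry)  # 2, found via the Stooges group

    As the question itself notes, instance-level __getattr__ alone does not cover a subclass assigning larry = 'blah' at the class level; handling that would take a metaclass or descriptors on the class itself.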

  • What do you do when you encounter an idiotic interview question?

    - by Senthil
    I was interviewing with a "too proud of my java skills"-looking person. He asked me, "What is your knowledge on Java IO classes.. say.. hash maps?" He asked me to write a piece of Java code on paper: instantiate a class and call one of the instance's methods. When I was done, he said my program wouldn't run. After 5 minutes of serious thinking, I gave up and asked why. He said I didn't write a main function, so it wouldn't run. ON PAPER. [I am too furious to continue with the stupidity...] Believe me, these weren't trick questions or some psychological or anger-management evaluation. I could tell from his face that he was proud of these questions. That "developer" was supposed to "judge" the candidates. I can think of several things to do:

      1. Hit him with a chair (which I so desperately wanted to) and walk out.
      2. Simply walk out.
      3. Ridicule him, saying he didn't make sense.
      4. Politely let him know that he didn't make sense, and go on to try to answer the questions.
      5. Don't tell him anything, but simply go on to try to answer the questions.

    So far, I have tried just 4 and 5. It hasn't helped. Unfortunately, many candidates seem to do the same and remain polite, but this lets these kinds of "developers" keep ascending the corporate ladder, gradually gaining the capacity to pi** off more and more people. How do you handle these interviewers without bursting your veins? What is the proper way to handle this, yet maintain your reputation if other potential employers were ever to get to know what happened here? Is there anything you can do, or should you even try to fix this?

    P.S. Let me admit that my anger has been amplified many times over by the facts:
      • He was smiling like you wouldn't believe.
      • I got so many (20 or so) calls from that company the day before, asking me to come to the interview, that I couldn't do any work that day.
      • I wasted a paid day off.

  • ArchBeat Link-o-Rama Top 10 for October 28 - November 3, 2012

    - by Bob Rhubart
    The Top 10 most popular items shared on the OTN ArchBeat Facebook Page for the week of Oct 28 - Nov 3, 2012:

      • Eventually, 90% of tech budgets will be outside IT departments | ZDNet. Another interesting post from ZDNet blogger Joe McKendrick about changing roles in IT.
      • ADF Mobile - Login Functionality | Andrejus Baranovskis. "The new ADF Mobile approach with native deployment is cool when you want to access phone functionality (camera, email, sms and etc.), also when you want to build mobile applications with advanced UI," reports Oracle ACE Director Andrejus Baranovskis.
      • Mobile Development Platform Strategy Chart: ADF Mobile, WebCenter Sites, Portal, Content and Social. "Unlike desktop web focused efforts, the world of mobile has undergone change at a feverish pace," says social enterprise expert John Brunswick. His extensive post charts various resources that will help you keep up.
      • ADF Essentials - The Bare Necessities | Floyd Teter. The experiment is over… And now Oracle ACE Director Floyd Teter shares his impressions after spending some time with Oracle ADF Essentials, the free version of Oracle ADF.
      • A review of Oracle SOA Suite 11g Administrator’s Handbook | RedStack. "More so than any other single piece of content that I have seen on the topic, it provides the information that a SOA administrator needs to know in order to successfully configure, manage, monitor, troubleshoot and backup an Oracle SOA environment." So says Oracle Fusion Middleware A-Team solution architect Mark Nelson of Oracle SOA Suite 11g Administrator’s Handbook, by Ahmed Aboulnaga and Arun Pareek.
      • Expanding the Oracle Enterprise Repository with functional documentation. Capgemini middleware specialist Marc Kuijpers shares information on how Oracle Enterprise Repository can be configured "to contain functional assets, i.e. functional designs, use cases and a logical data model" to aid in SOA governance efforts.
      • Podcast: Are You Future Proof? - Part 2. In Part 2, practicing architects and Oracle ACE Directors Ron Batra (AT&T), Basheer Khan (Innowave Technology), and Ronald van Luttikhuizen discuss re-tooling one’s skill set to reflect changes in enterprise IT, including the knowledge to steer stakeholders around the hype to what’s truly valuable.
      • Easy way to access JPA with REST (JSON / XML) | Edwin Biemond. Oracle ACE Edwin Biemond shows you "what is possible with JPA-RS, how easy it is and howto setup your own EclipseLink REST service."
      • Clustering ODI11g for High-Availability Part 1: Introduction and Architecture | Richard Yeardley. "JEE agents can be deployed alongside, or instead of, standalone agents," says Rittman Mead’s Richard Yeardley. "But there is one key advantage in using JEE agents and WebLogic: when you deploy JEE agents as part of a WebLogic cluster they can be configured together to form a high availability cluster." Learn more in Yeardley’s extensive post.
      • 2012 IOUG Virtualization SIG – Online Symposium on Nov 7 and Nov 8 | Kai Yu. Oracle ACE Director Kai Yu shares information on this week’s IOUG Virtualization SIG online event. Does that make it a virtual virtualization event?

    Thought for the Day: "If McDonalds were run like a software company, one out of every hundred Big Macs would give you food poisoning — and the response would be, 'We’re sorry, here’s a coupon for two more.'" — Mark Minasi. Source: SoftwareQuotes.com

  • The Template Method Design Pattern using C# .Net

    - by nijhawan.saurabh
    First of all, I'll just put this pattern in context and describe its intent as in the GoF book:

    Template Method: Define the skeleton of an algorithm in an operation, deferring some steps to subclasses. Template Method lets subclasses redefine certain steps of an algorithm without changing the algorithm's structure.

    Usage: When you are certain about the high-level steps involved in an algorithm or workflow, you can use the Template Method pattern, which allows the base class to define the sequence of the steps but permits the subclasses to alter the implementation of any or all of the steps.

    Example in the .NET Framework: The most common example is the ASP.NET page life cycle. The page life cycle has a number of methods which are called in a fixed sequence, but we have the liberty to modify the functionality of any of those methods by overriding them.

    Sample implementation of the Template Method pattern (class diagram image omitted):

    EmailBase.cs

        using System;

        namespace TemplateMethod
        {
            public abstract class EmailBase
            {
                // The template method: the base class fixes the sequence of steps.
                public bool SendEmail()
                {
                    if (!CheckEmailAddress())   // Method 1 in the sequence
                        return false;
                    if (!ValidateMessage())     // Method 2 in the sequence
                        return false;
                    return SendMail();          // Method 3 in the sequence
                }

                // Subclasses supply the step implementations.
                protected abstract bool CheckEmailAddress();
                protected abstract bool ValidateMessage();
                protected abstract bool SendMail();
            }
        }

    EmailYahoo.cs

        using System;

        namespace TemplateMethod
        {
            public class EmailYahoo : EmailBase
            {
                protected override bool CheckEmailAddress()
                {
                    Console.WriteLine("Checking Email Address : YahooEmail");
                    return true;
                }

                protected override bool ValidateMessage()
                {
                    Console.WriteLine("Validating Email Message : YahooEmail");
                    return true;
                }

                protected override bool SendMail()
                {
                    Console.WriteLine("Sending Email : YahooEmail");
                    return true;
                }
            }
        }

    EmailGoogle.cs

        using System;

        namespace TemplateMethod
        {
            public class EmailGoogle : EmailBase
            {
                protected override bool CheckEmailAddress()
                {
                    Console.WriteLine("Checking Email Address : GoogleEmail");
                    return true;
                }

                protected override bool ValidateMessage()
                {
                    Console.WriteLine("Validating Email Message : GoogleEmail");
                    return true;
                }

                protected override bool SendMail()
                {
                    Console.WriteLine("Sending Email : GoogleEmail");
                    return true;
                }
            }
        }

    Program.cs

        using System;

        namespace TemplateMethod
        {
            class Program
            {
                static void Main(string[] args)
                {
                    Console.WriteLine("Please choose an Email Account to send an Email:");
                    Console.WriteLine("Choose 1 for Google");
                    Console.WriteLine("Choose 2 for Yahoo");
                    string choice = Console.ReadLine();

                    if (choice == "1")
                    {
                        EmailBase email = new EmailGoogle(); // Rather than newing it up here, you may use a factory.
                        email.SendEmail();
                    }
                    if (choice == "2")
                    {
                        EmailBase email = new EmailYahoo(); // Rather than newing it up here, you may use a factory.
                        email.SendEmail();
                    }
                }
            }
        }

    Final words: It's obvious why the Template Method pattern is a popular pattern. Everything, in the end, revolves around algorithms, and if you are clear about the steps involved, it makes real sense to delegate the duty of implementing each step's functionality to the subclasses.

  • Do MORE with WebCenter

    - by Michael Snow
    WEBCAST THURSDAY!! 03/22/12. Do you need to lower costs? Raise productivity? Foster innovation? Improve online engagement? But you're still stuck with Documentum? Step away from the ledge: there is hope. Let us help you.

    Top 4 Content Imperatives:
      • Lower Costs: reduce labor, maintenance fees, storage and electrical consumption
      • Raise Productivity: automation and integration, communication, findability
      • Foster Innovation: enable collaboration, expertise location
      • Improve Online Engagement: enable user-driven, dynamic marketing initiatives

    With the coming technology wave we see four content imperatives. Every organization has had to reduce costs; cost cutting has become a way of life. Everyone is working three jobs as positions are eliminated. And so we have to reduce labor, reduce maintenance, and reduce the money we are wasting on things like storing content that is redundant or no longer useful.

    We also, to fill that gap, need to raise productivity. Knowledge workers represent the fastest growing segment of the workforce, accounting for 40%-75% of the employees at organizations in sectors like financial services, life sciences, healthcare and retail. What's more, their wages total 18 percent of the United States GDP. And so we can't afford information systems that don't let our top performers be the best they can be. We look to automate the content processes, provide ways to integrate that content into our processes, provide communication to make decisions, and make content more findable so people can make the right decision and move the process forward.

    And really, to get ourselves out of the current financial situation, we can only cut costs so far. We have to innovate out of economic tough times, to find new products and new markets. And to enable the innovation process, we have to enable collaboration and expertise location. So much of innovation is about building on innovations that have come before. To solve problems, we have to be able to find what our organization has already created. We find that the problems we need to solve have often already been solved, if we can find the right document or the right person. So we have to provide systems that enable us to stand on the shoulders of our organization's accomplishments.

    Good content drives great marketing. Online engagement is growing as an absolute necessity for modern, growing marketing organizations that require business users to be enabled for dynamic marketing content creation, updates, and targeted content creation and management. Unfortunately, if you are currently stuck with Documentum, you are really lacking in web experience management capabilities. Documentum previously used FatWire for web publishing; now FatWire is part of Oracle.

    Oracle provides powerful web engagement capabilities:
      • Increase sales and loyalty by optimizing online engagement
      • Create, manage and moderate contextually relevant, targeted and interactive online experiences
      • Optimize customer engagement across web, mobile and social channels
      • Manage a large-scale, multichannel, global online presence with integration to enterprise applications
      • Enable business users to control their content and make their own updates
      • Publish content from native files; enable navigation of project documents, procedures, policy information
      • Enable content display and updates from existing web applications; one click to drag-and-drop content management functionality

    So you get the ability to self-publish information and make it navigable, to move the process of publishing from IT to business users, and the ability to address a whole new area of user engagement with web experience management.

    So, if you are still stuck with Documentum and don't know what to do, contact us. Not only will Oracle help you step away from the ledge, but with the MoveOff Documentum program we are offering you a way out: trade in your Documentum licenses for a 100% credit on Oracle WebCenter. How's that for a nice bonus? It's time to stop maintaining Documentum, and to start innovating with Oracle WebCenter. Learn More Here! To learn more about what Oracle WebCenter can offer you today, join us for a webcast; your eyes will be opened to all that's possible. Do More with WebCenter: Extend Beyond Content Management.

  • Announcement: Employee Info Starter Kit (v6.0–ASP.NET MVC Edition) is Released

    - by Mohammad Ashraful Alam
    Originally posted on: http://geekswithblogs.net/joycsharp/archive/2013/06/16/announcement-employee-info-starter-kit-v6.0asp.net-mvc-edition-is-released.aspx

    After a long wait, the next version of Employee Info Starter Kit is released! This starter kit is basically a project template that contains code samples targeting a specific technology, such as ASP.NET Web Forms, ASP.NET MVC, etc. Since its first release, this open source project has gained huge popularity in the developer community, with 250K+ combined downloads. The starter kit is honored to be hosted on the official ASP.NET site, along with the other ASP.NET starter kits, which are all considered examples of the "best" ASP.NET coding standards recommended by Microsoft. EISK has been showcased on Microsoft's Channel 9 Weekly Show as well.

    The ASP.NET MVC edition of the new version 6.0 bundles many of the most successful platforms, frameworks and technologies together, to enable web developers to learn and build manageable, high-performance web applications with a rich user experience, effectively and quickly.

    User End Specifications:
      • Creating a new employee record
      • Reading existing employee records
      • Updating an existing employee record
      • Deleting existing employee records
      • Role-based security model

    Key Technology Areas:
      • ASP.NET MVC 4
      • Entity Framework 4.3.1
      • SQL Server Compact Edition 4
      • Visual Studio 2012

    QuickStart Guide: Getting started with EISK 6.0 ASP.NET MVC is pretty easy. Once you have Visual Studio 2012 installed, just follow the steps below:
      1. Download the EISK 6.0 MVC version.
      2. Extract the file.
      3. From the extracted folder, open the solution file "Eisk.MVC-VS2012.sln".
      4. Right-click the "Eisk.MVC" project node and select "Set as StartUp Project".
      5. Hit Ctrl+F5 and explore!

    Architectural Overview:
      • Overall architecture is based on the Model-View-Controller pattern
      • Support for desktop and mobile browsers
      • Usage of the Domain Model, Repository and Unit of Work patterns from the Domain-Driven Design approach
      • Usage of Data Annotations in model (entity) classes to centralize the basic validation mechanism, which facilitates the DRY principle
      • Usage of the IValidatableObject interface in model (entity) classes, which isolates custom business logic from the application layer
      • Usage of OOP inheritance and the Value Object pattern in model (entity) classes, which provides reusability in the application architecture
      • Usage of the View Model and Editor Model patterns, which provides a mechanism for testable view-rendering logic
      • Several helper classes and extension methods to enable developers to build the application with less code

    If you want to learn more about it in detail, just check the following links: Getting Started - Hands-on Coding Walkthrough - Technology Stack - Design & Architecture. Enjoy!

  • Overload Avoidance

    - by mikef
    A little under a year ago, Matt Simmons wrote a rather reflective article about his terrifying brush with stress-induced ill health. SysAdmins and DBAs have always been prime victims of work-related stress, but I wonder if that predilection is perhaps getting worse, despite the best efforts of Matt and his trusty side-kick, HR. The constant pressure from share-holders and CFOs to 'streamline' the workforce is partially to blame, but the more recent culprit is technology itself.

    I can't deny that the rise of technologies like virtualization, PowerCLI, PowerShell, and a host of others has been a tremendous boon. As a result, individual IT professionals are now able to handle more and more tasks and manage increasingly large and complex environments. But, without a doubt, this is a two-edged sword; the reward for competence is invariably more work.

    Unfortunately, SysAdmins play such a pivotal role in modern business that it's easy to see how they can very quickly become swamped in conflicting demands coming from different directions. However, that doesn't justify the ridiculous hours many are asked (or volunteer) to devote to their work. Admirable though their commitment is, it isn't healthy for them, it sets a dangerous expectation, and eventually something will snap. There are times when everyone needs to step up to the plate outside of 'normal' work hours, but that time isn't all the time.

    Naturally, with all that lovely technology, you can automate more and more of those tricky tasks to keep on top of the workload, but you are still only human. Clever though you may be, there is a very real limit to how far technology can take you. I'm not suggesting that you avoid these technologies, or deliberately aim for mediocrity; I'm just saying that you need to be more than just technically skilled (and Wesley Nonapeptide riffs on and around this topic in his excellent 'Telepathic Robot Drones' blog post). You need to be able to manage expectations, not just Exchange. Specifically, that means your own expectations of what you are capable of, because those come before everyone else's. After all, how can you keep your work-life balance under control if you're the one setting the bar way too high?

    Talking to your manager, or discussing issues with your users, is only going to be productive if you have some facts to work with. "Know Thyself" is the first law of managing work overload, and this is obviously a skill which people develop over time; the fact that veteran SysAdmins exist at all is testament to this. I'd just love to know how you get to that point. Personally, I'm using RescueTime to keep myself honest, but I'm open to recommendations for better methods. Do you track your own time, do you have an intuitive sense of what is possible, or do you just rely on someone else to handle that all for you?

    Cheers, Michael

  • Internal Mutation of Persistent Data Structures

    - by Greg Ros
    To clarify, when I use the terms persistent and immutable about a data structure, I mean that:

      • The state of the data structure remains unchanged for its lifetime. It always holds the same data, and the same operations always produce the same results.
      • The data structure allows Add, Remove, and similar methods that return new objects of its kind, modified as instructed, that may or may not share some of the data of the original object.

    However, while a data structure may seem persistent to the user, it may do other things under the hood. To be sure, all data structures are, internally, at least somewhere, based on mutable storage. If I were to base a persistent vector on an array, and copy it whenever Add is invoked, it would still be persistent, as long as I modify only locally created arrays.

    However, sometimes you can greatly increase performance by mutating a data structure under the hood, in more, say, insidious, dangerous, and destructive ways. Ways that might leave the abstraction untouched, not letting the user know anything has changed about the data structure, but that are critical at the implementation level.

    For example, let's say that we have a class called ArrayVector implemented using an array. Whenever you invoke Add, you get an ArrayVector built on top of a newly allocated array that has an additional item. A sequence of such updates will involve n array copies and allocations (illustration omitted).

    However, let's say we implement a lazy mechanism that stores all sorts of updates, such as Add, Set, and others, in a queue. In this case, each update requires constant time (adding an item to a queue), and no array allocation is involved. When a user tries to get an item in the array, all the queued modifications are applied under the hood, requiring a single array allocation and copy (since we know exactly what data the final array will hold, and how big it will be). Future get operations will be performed against an empty cache, so they will take a single operation. But in order to implement this, we need to 'switch' or mutate the internal array to the new one, and empty the cache: a very dangerous action. However, considering that in many circumstances (most updates are going to occur in sequence, after all) this can save a lot of time and memory, it might be worth it. You will need to ensure exclusive access to the internal state, of course.

    This isn't a question about the efficacy of such a data structure. It's a more general question: is it ever acceptable to mutate the internal state of a supposedly persistent or immutable object in destructive and dangerous ways? Does performance justify it? Would you still be able to call it immutable? Oh, and could you implement this sort of laziness without mutating the data structure in the specified fashion?
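    To make the queued-update idea concrete, here is a rough C# sketch. All names are invented; a real version would need synchronization and would share the pending log between versions rather than copying it:

        // Illustrative sketch only: a "persistent" vector that queues updates
        // and applies them in one hidden, internal mutation on first read.
        using System.Collections.Generic;

        public sealed class LazyArrayVector
        {
            private int[] _items;  // internally mutable storage
            private readonly Queue<(int Index, int Value)> _pending = new Queue<(int, int)>();

            public LazyArrayVector(int[] items) { _items = items; }

            // Persistent-looking update: O(1) enqueue, no array copy yet.
            // (Simplified: copies the pending log instead of sharing it.)
            public LazyArrayVector Set(int index, int value)
            {
                var next = new LazyArrayVector(_items);
                foreach (var p in _pending) next._pending.Enqueue(p);
                next._pending.Enqueue((index, value));
                return next;
            }

            public int Get(int index)
            {
                if (_pending.Count > 0)
                {
                    var fresh = (int[])_items.Clone();  // the single allocation and copy
                    while (_pending.Count > 0)
                    {
                        var (i, v) = _pending.Dequeue();
                        fresh[i] = v;
                    }
                    _items = fresh;  // the hidden internal mutation in question
                }
                return _items[index];
            }
        }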

  • Oracle Linux Training Across Five Continents

    - by Antoinette O'Sullivan
    The Oracle Linux System Administration course, a top-selling course, provides you with a broad selection of the key competencies you need to be a great Linux system administrator. And you can now take this course from your desk or in classrooms across all five continents. You can take this 5-day instructor-led course through the following delivery methods:

      • Training-on-Demand: Start training within 24 hours of registering. You follow the lecture material at your own pace via streaming video and book time on a lab environment to suit your schedule.
      • Live-Virtual Event: Follow a live event from your own desk, no travel required. You can choose from a selection of events on the schedule to suit different time zones.
      • In-Class Event: Travel to an education center to take this course.

    Below is a selection of the in-class events already on the schedule.

    AFRICA
      • Nairobi, Kenya: 13 October 2014, English
      • Johannesburg, South Africa: 24 November 2014, English

    AMERICA
      • Mississauga, Canada: 27 October 2014, English
      • Chicago, IL, United States: 13 October 2014, English
      • Roseville, MN, United States: 13 October 2014, English

    ASIA
      • Jakarta, Indonesia: 20 October 2014, English
      • Petaling Jaya, Malaysia: 25 August 2014, English
      • Kuala Lumpur, Malaysia: 8 December 2014, English
      • Istanbul, Turkey: 10 November 2014, Turkish
      • Dubai, United Arab Emirates: 4 January 2015, English

    AUSTRALIA
      • Canberra, Australia: 20 October 2014, English
      • Melbourne, Australia: 20 October 2014, English

    EUROPE
      • Paris, France: 6 October 2014, French
      • Milan, Italy: 20 October 2014, Italian
      • Rome, Italy: 8 September 2014, Italian
      • Bucharest, Romania: 27 October 2014, Romanian
      • Madrid, Spain: 1 September 2014, Spanish

    The Oracle Linux System Administration course is the recommended training course to prepare you for the Oracle Linux 5 & 6 System Administrator OCA certification exam. Those who have acquired the skills provided in the Oracle Linux System Administration course can advance their learning by taking the Oracle Linux Advanced Administration course. You can take this 5-day instructor-led course as a live-virtual event or an in-class event. Below is a selection of the in-class events on the schedule:

      • Jakarta, Indonesia: 27 October 2014, English
      • Kuala Lumpur, Malaysia: 6 October 2014, English
      • Bangkok, Thailand: 20 October 2014, English
      • Belmont, CA, United States: 15 September 2014, English

    For information on the Oracle Linux curriculum, go to http://oracle.com/education/linux.

  • Using runtime checking of code contracts in Visual Studio 2010

    - by DigiMortal
    In my last posting about code contracts I introduced how to check the input parameters of a randomizer using static contract checking. But you can also compile code contracts into your assemblies and use them at runtime. In this posting I will show you a simple example of runtime checking of code contracts.

    NB! If you want to play with the code and try out the things described here, feel free to download the example solution. If you are a speaker and want to use this solution as part of your sessions, feel free to do so, but don't forget to refer to me and this blog as the source of the solution. And please let me know about your session; as a speaker I am very interested in it. :)

    To see how code contracts are checked at runtime, we have to enable runtime checking in the project properties. Make sure you have checked the box "Perform Runtime Contract Checking" and make sure you select "Full" from the dropdown. (Screenshot omitted: Visual Studio 2010 settings for code contracts, with runtime checking turned on and checks made only on the public surface.)

    Save the project settings, then compile the code and run it. As soon as code execution hits the call to GetRandomFromRangeContracted(), an exception is thrown. (Screenshot omitted: an exception of type ContractException is thrown when the contract is violated.)

    The exact type of the exception is ContractException, and it is defined in the System.Diagnostics.Contracts.__ContractsRuntime namespace. In our example the message of the exception is the following:

        "Precondition failed: min < max  Min must be less than max"

    Besides the description we inserted for the contract violation, the message also contains the violated contract type. In this case the type of contract is Precondition.

    Conclusion: Runtime checking of code contracts enables you to take code contracts with your code and have them checked every time your methods are called. This way you can assure that all conditions are met to run a method, or an exception is thrown and the calling system has to handle the situation.
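    The post does not reproduce the method body, but a contracted randomizer consistent with the exception message above would look roughly like this sketch (class and field names are invented):

        using System;
        using System.Diagnostics.Contracts;

        public static class Randomizer
        {
            private static readonly Random Rng = new Random();

            // Sketch: precondition matching "min < max  Min must be less than max".
            public static int GetRandomFromRangeContracted(int min, int max)
            {
                Contract.Requires(min < max, "Min must be less than max");
                return Rng.Next(min, max);
            }
        }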

  • Drawing multiple Textures as tilemap

    - by DocJones
    I am trying to draw a 2D game map and the objects on the map in a single pass. Here is my OpenGL initialization code:

        // Turn off unnecessary operations
        glDisable(GL_DEPTH_TEST);
        glDisable(GL_LIGHTING);
        glDisable(GL_CULL_FACE);
        glDisable(GL_STENCIL_TEST);
        glDisable(GL_DITHER);
        glEnable(GL_BLEND);
        glEnable(GL_TEXTURE_2D);
        // activate pointers to vertex & texture arrays
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    My drawing code is being called by an NSTimer every 1/60 s. Here is the drawing code of my world object:

        - (void) draw:(NSRect)rect withTimedDelta:(double)d {
            GLint *t;
            glActiveTexture(GL_TEXTURE0);
            glBindTexture(GL_TEXTURE_2D, [_textureManager textureByName:@"blocks"]);
            glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
            for (int x=0; x<[_map getWidth]; x++) {
                for (int y=0; y<[_map getHeight]; y++) {
                    GLint v[] = {
                        16*x,    16*y,
                        16*x+16, 16*y,
                        16*x+16, 16*y+16,
                        16*x,    16*y+16
                    };
                    t = [_textureManager getBlockWithNumber:[_map getBlockAtX:x andY:y]];
                    glVertexPointer(2, GL_INT, 0, v);
                    glTexCoordPointer(2, GL_INT, 0, t);
                    glDrawArrays(GL_QUADS, 0, 4);
                }
            }
        }

    (_textureManager is a singleton, only loading a texture once!) The object drawing code is identical (except for the nested loops) in terms of OpenGL calls:

        - (void) drawWithTimedDelta:(double)d {
            GLint *t;
            GLint v[] = {
                16*xpos,    16*ypos,
                16*xpos+16, 16*ypos,
                16*xpos+16, 16*ypos+16,
                16*xpos,    16*ypos+16
            };
            glBindTexture(GL_TEXTURE_2D, [_textureManager textureByName:_textureName]);
            t = [_textureManager getBlockWithNumber:12];
            glVertexPointer(2, GL_INT, 0, v);
            glTexCoordPointer(2, GL_INT, 0, t);
            glDrawArrays(GL_QUADS, 0, 4);
        }

    As soon as my central drawing routine calls the two drawing methods, the second call overlays the first one. I would expect the call to world.draw to draw the map and "stamp" the objects upon it. Debugging shows me that the first call is performed correctly (the world is being drawn), but the following call to all objects ONLY draws the objects; the rest of the scene goes black. I think I need to blend the drawn textures, but I can't seem to figure out how. Any help is appreciated. Thanks.

    PS: Here is the github link to the project. It may not be in sync with my post here, but for some more in-depth analysis it may help.
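    One editorial note on the blending symptom: the initialization above enables GL_BLEND but never selects a blend function, and no per-frame clear is shown. A typical sketch for compositing textures with an alpha channel, assuming the scene is redrawn on every timer tick, looks like this:

        // Sketch: standard alpha blending plus a per-frame clear, so the second
        // draw call composites over the first instead of replacing it.
        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);                      // once per frame, before drawing
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // blend by source alpha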

  • Extending Currying: Partial Functions in Javascript

    - by kerry
    Last week I posted about function currying in JavaScript. This week I am taking it a step further by adding the ability to call partial functions.

    Suppose we have a graphing application that will pull data via Ajax and perform some calculation to update a graph, using a method with the signature updateGraph(id, value). To do this, we have to do something like this:

        for (var i = 0; i < objects.length; i++) {
            Ajax.request('/some/data', {id: objects[i].id}, function(json) {
                updateGraph(json.id, json.value);
            });
        }

    This works fine. But using this method we need to return the id in the JSON response from the server. That is not very elegant, and it increases network traffic.

    Using partial function currying we can bind the id parameter and add the second parameter later (when returning from the asynchronous call). To do this, we will need the updated curry method; I have added support for sending additional parameters at runtime for curried methods:

        Function.prototype.curry = function(scope) {
            scope = scope || window;
            var args = [];
            for (var i = 1, len = arguments.length; i < len; ++i) {
                args.push(arguments[i]);
            }
            var m = this;
            return function() {
                // Build a fresh argument list on each call, so runtime arguments
                // do not accumulate in the bound args array across invocations.
                var callArgs = args.slice();
                for (var i = 0, len = arguments.length; i < len; ++i) {
                    callArgs.push(arguments[i]);
                }
                return m.apply(scope, callArgs);
            };
        };

    To partially curry this method, we call curry with the id parameter; the request will then call back on it with just the value. Any additional parameters are appended to the method call:

        for (var i = 0; i < objects.length; i++) {
            var id = objects[i].id;
            Ajax.request('/some/data', {id: id}, updateGraph.curry(id));
        }

    As you can see, partial currying gives us a very useful tool, and this simple method should be a part of every developer's toolbox.

  • Patterns for Handling Changing Property Sets in C++

    - by Bhargav Bhat
    I have a bunch "Property Sets" (which are simple structs containing POD members). I'd like to modify these property sets (eg: add a new member) at run time so that the definition of the property sets can be externalized and the code itself can be re-used with multiple versions/types of property sets with minimal/no changes. For example, a property set could look like this: struct PropSetA { bool activeFlag; int processingCount; /* snip few other such fields*/ }; But instead of setting its definition in stone at compile time, I'd like to create it dynamically at run time. Something like: class PropSet propSetA; propSetA("activeFlag",true); //overloading the function call operator propSetA("processingCount",0); And the code dependent on the property sets (possibly in some other library) will use the data like so: bool actvFlag = propSet["activeFlag"]; if(actvFlag == true) { //Do Stuff } The current implementation behind all of this is as follows: class PropValue { public: // Variant like class for holding multiple data-types // overloaded Conversion operator. Eg: operator bool() { return (baseType == BOOLEAN) ? this->ToBoolean() : false; } // And a method to create PropValues various base datatypes static FromBool(bool baseValue); }; class PropSet { public: // overloaded[] operator for adding properties void operator()(std::string propName, bool propVal) { propMap.insert(std::make_pair(propName, PropVal::FromBool(propVal))); } protected: // the property map std::map<std::string, PropValue> propMap; }; This problem at hand is similar to this question on SO and the current approach (described above) is based on this answer. But as noted over at SO this is more of a hack than a proper solution. The fundamental issues that I have with this approach are as follows: Extending this for supporting new types will require significant code change. At the bare minimum overloaded operators need to be extended to support the new type. Supporting complex properties (eg: struct containing struct) is tricky. Supporting a reference mechanism (needed for an optimization of not duplicating identical property sets) is tricky. This also applies to supporting pointers and multi-dimensional arrays in general. Are there any known patterns for dealing with this scenario? Essentially, I'm looking for the equivalent of the visitor pattern, but for extending class properties rather than methods. Edit: Modified problem statement for clarity and added some more code from current implementation.

  • SQL SERVER – Script to Update a Specific Column in Entire Database

    - by Pinal Dave
    Last week, I received a very interesting question in email, and I really liked it, as I had to play around with SQL scripts for a while to come up with the answer the sender was looking for. Please read the question; I believe all of us face this kind of situation.

    "Pinal, in our database we have recently introduced a ModifiedDate column in all of the tables. From now on, whenever an update happens to a row, we update that field with the current date and time. Now here is the issue: when we added that field we did not give it a default value, because we were not sure when we would go live with the system, so we let it be NULL. The modification to the application went live yesterday, and we are now updating this field. Here is where I need your help. We need to update all the tables in our database where we have the ModifiedDate column, setting it to the current datetime. As our system has been live since yesterday, there are several thousand rows which are already updated with real-world values, so we do not want to update those. Essentially, in our entire database, wherever there is a ModifiedDate column, if it is NULL we want to update it with the current date and time. Do you have a script for it?"

    Honestly, I did not have such a script. This is a very specific requirement, but I was able to come up with two different methods to achieve it.

    Method 1: Using INFORMATION_SCHEMA

        SELECT 'UPDATE ' + T.TABLE_SCHEMA + '.' + T.TABLE_NAME +
               ' SET ModifiedDate = GETDATE() WHERE ModifiedDate IS NULL;'
        FROM INFORMATION_SCHEMA.TABLES T
        INNER JOIN INFORMATION_SCHEMA.COLUMNS C
            ON T.TABLE_NAME = C.TABLE_NAME
            AND C.COLUMN_NAME = 'ModifiedDate'
        WHERE T.TABLE_TYPE = 'BASE TABLE'
        ORDER BY T.TABLE_SCHEMA, T.TABLE_NAME;

    Method 2: Using DMVs

        SELECT 'UPDATE ' + SCHEMA_NAME(t.schema_id) + '.' + t.name +
               ' SET ModifiedDate = GETDATE() WHERE ModifiedDate IS NULL;'
        FROM sys.tables AS t
        INNER JOIN sys.columns c
            ON t.object_id = c.object_id
        WHERE c.name = 'ModifiedDate'
        ORDER BY SCHEMA_NAME(t.schema_id), t.name;

    The above scripts generate the UPDATE script which performs the requested task. We can adapt the same technique to any other SELECT statement and retrieve any other data as well.

    Click to Download Scripts

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Joins, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
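    For illustration, the result set produced by either script is one UPDATE statement per matching table, which you then copy into a new worksheet and execute. The table names below are hypothetical:

        -- Hypothetical rows generated by either script:
        UPDATE dbo.Customers SET ModifiedDate = GETDATE() WHERE ModifiedDate IS NULL;
        UPDATE dbo.Orders SET ModifiedDate = GETDATE() WHERE ModifiedDate IS NULL;
        UPDATE Sales.Invoice SET ModifiedDate = GETDATE() WHERE ModifiedDate IS NULL;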

  • New Information Center - Reviewing Security For FMW 11g

    - by Daniel Mortimer
    Announcing: Information Center: Reviewing Security For Oracle Fusion Middleware 11g [ID 1458051.2] has been published. (Screenshot of ID 1458051.2 omitted.)

    What is an Information Center? Information Centers use widgets to aggregate knowledge content, such as support documents, product documentation, and support community threads, pertinent to a given task or intent. Widgets either contain static lists or, better still, are dynamic. A dynamic widget uses query criteria to present a list of support documents relevant to the title/subject matter of the widget. The content of a dynamic widget is refreshed automatically every 24 hours. Once you are in an Information Center, you can use the left-hand menu to navigate to the other task/intent Information Centers (e.g. "Install and Configure", "Patch", "Troubleshoot", "Upgrade") available for the chosen product.

    Are Information Centers easy to find? You can go straight to the new "Reviewing Security" Information Center by using the hyperlink given above. There are, however, two other methods which make Information Centers easier to find: Browse Knowledge, and Refine Your Search.

    Browse Knowledge: "Browse Knowledge" is currently found in the "Knowledge" tab page in My Oracle Support. As illustrated by the (omitted) screenshots, you can find Information Centers by choosing a product (e.g. "Oracle Fusion Middleware"), a version, and an action/intent. If an Information Center exists for your selection, the "Advisor Found" button is enabled; clicking this button takes you straight to the desired Information Center.

    Refine Your Search: "Refine Your Search" is a dialogue triggered by certain keywords that you may enter into the global search field in the top right-hand corner of My Oracle Support. It works in a similar manner to "Browse Knowledge": choose your product and version (the appropriate task/intent should already be selected for you), and then click the Go button.

  • Allocating Entities within an Entity System

    - by miguel.martin
    I'm quite unsure how I should allocate/represent my entities within my entity system. I have various options, but most of them seem to have cons associated with them. In all cases entities are represented by an ID (an integer), and possibly have a wrapper class associated with them. This wrapper class has methods to add/remove components to/from the entity. Before I mention the options, here is the basic structure of my entity system:

      • Entity: an object that describes an object within the game
      • Component: used to store data for the entity
      • System: contains entities with specific components; used to update entities with specific components
      • World: contains entities and systems for the entity system; can create/destroy entities and have systems added to/removed from it

    Here are the options I have thought of.

    Option 1: Do not store the Entity wrapper classes; just store the next ID and the deleted IDs. In other words, entities are returned by value, like so:

        Entity entity = world.createEntity();

    This is much like entityx, except I see some flaws in this design. Cons:
      • There can be duplicate entity wrapper objects (as the copy constructor has to be implemented, and systems need to contain entities).
      • If an entity is destroyed, the duplicate entity wrapper objects will not have an updated value.

    Option 2: Store the entity wrapper classes within an object pool, i.e. entities are returned by pointer/reference, like so:

        Entity& e = world.createEntity();

    Cons:
      • If there are duplicate entities, then when an entity is destroyed, the same entity object may be re-used to allocate another entity.

    Option 3: Use raw IDs, and forget about the wrapper entity classes. The downfall to this, I think, is the syntax that will be required for it. I'm thinking about doing this as it seems the simplest and easiest to implement. I'm quite unsure about it, because of the syntax. That is, to add a component with this design, it would look like:

        Entity e = world.createEntity();
        world.addComponent<Position>(e, 0, 3);

    As opposed to this:

        Entity e = world.createEntity();
        e.addComponent<Position>(0, 3);

    Cons:
      • Syntax
      • Duplicate IDs

    Read the article

  • Informed TDD – Kata “To Roman Numerals”

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/28/informed-tdd-ndash-kata-ldquoto-roman-numeralsrdquo.aspx

    In a comment on my article on what I call Informed TDD (ITDD), reader gustav asked how this approach would apply to the kata “To Roman Numerals”, and whether ITDD wasn't a violation of TDD's principle of leaving out “advanced topics like mocks”. I like to respond with this article to his questions. There's more to say than fits into a commentary.

    Mocks and TDD
    I don't see in how far TDD is avoiding or opposed to mocks. TDD and mocks are orthogonal. TDD is about process, mocks are about structure and costs. Maybe by moving forward in tiny red+green+refactor steps less need arises for mocks. But then… if the functionality you need to implement requires “expensive” resource access, you can't avoid using mocks, because you don't want to constantly run all your tests against the real resource. True, in ITDD mocks seem to be in almost inflationary use. That's not what you usually see in TDD demonstrations. However, there's a reason for that, as I tried to explain. I don't use mocks as proxies for “expensive” resources. Rather they are stand-ins for functionality not yet implemented. They allow me to get a test green on a high level of abstraction. That way I can move forward in a top-down fashion. But if you think of mocks as “advanced”, or if you don't want to use a tool like JustMock, then you don't need to use mocks. You just need to stand the sight of red tests for a little longer ;-) Let me show you what I mean by that by doing a kata.

    ITDD for “To Roman Numerals”
    gustav asked for the kata “To Roman Numerals”. I won't explain the requirements again; you can find descriptions and TDD demonstrations all over the internet, like this one from Corey Haines. Now here is how I would do this kata differently.

    1. Analyse
    A demonstration of TDD should never skip the analysis phase. It should be made explicit. The requirements should be formalized and acceptance test cases should be compiled. “Formalization” in this case to me means describing the API of the required functionality. “[D]esign a program to work with Roman numerals”, like written in this “requirement document”, is not enough to start software development. Coding should only begin if the interface between the “system under development” and its context is clear. If this interface is not readily recognizable from the requirements, it has to be developed first. Exploration of interface alternatives might be in order. It might be necessary to show several interface mock-ups to the customer, even if that's your fellow developer. Designing the interface is a task of its own. It should not be mixed with implementing the required functionality behind the interface. Unfortunately, though, this happens quite often in TDD demonstrations. TDD is used to explore the API and implement it at the same time. To me that's a violation of the Single Responsibility Principle (SRP), which not only should hold for software functional units but also for tasks or activities. In the case of this kata the API fortunately is obvious. Just one function is needed: string ToRoman(int arabic). And it lives in a class ArabicRomanConversions. Now what about acceptance test cases? There are hardly any stated in the kata descriptions. Roman numerals are explained, but no specific test cases from the point of view of a customer. So I just “invent” some acceptance test cases by picking roman numerals from a wikipedia article.
    They are supposed to be just “typical examples” without special meaning. Given the acceptance test cases, I then try to develop an understanding of the problem domain. I'll spare you that. The domain is trivial and is explained in almost all kata descriptions. How roman numerals are built is not difficult to understand. What's more difficult, though, might be to find an efficient solution to convert into them automatically.

    2. Solve
    The usual TDD demonstration skips a solution finding phase. Like the interface exploration, it's mixed in with the implementation. But I don't think this is how it should be done. I even think this is not how it really works for the people demonstrating TDD. They're simplifying their true software development process because they want to show a streamlined TDD process. I doubt this is helping anybody. Before you code you better have a plan what to code. This does not mean you have to do “Big Design Up-Front”. It just means: have a clear picture of the logical solution in your head before you start to build a physical solution (code). Evidently such a solution can only be as good as your understanding of the problem. If that's limited, your solution will be limited, too. Fortunately, in the case of this kata your understanding does not need to be limited. Thus the logical solution does not need to be limited or preliminary or tentative. That does not mean you need to know every line of code in advance. It just means you know the rough structure of your implementation beforehand. Because it should mirror the process described by the logical or conceptual solution.

    Here's my solution approach: The arabic “encoding” of numbers represents them as an ordered set of powers of 10. Each digit is a factor to multiply a power of ten with. The “encoding” 123 is the short form for a set like this: {1*10^2, 2*10^1, 3*10^0}. And the number is the sum of the set members. The roman “encoding” is different. There is no base (like 10 for arabic numbers); there are just digits of different value, and they have to be written in descending order. The “encoding” XVI is short for [10, 5, 1]. And the number is still the sum of the members of this list. The roman “encoding” thus is simpler than the arabic. Each “digit” can be taken at face value. No multiplication with a base required. But what about IV, which looks like a contradiction to the above rule? It is not, if you accept roman “digits” not to be limited to single characters only. Usually I, V, X, L, C, D, M are viewed as “digits”, and IV, IX etc. are viewed as nuisances preventing a simple solution. All looks different, though, once IV, IX etc. are taken as “digits”. Then MCMLIV is just a sum: M+CM+L+IV, which is 1000+900+50+4. Whereas before it would have been understood as M-C+M+L-I+V, which is more difficult because here some “digits” get subtracted. Here's the list of roman “digits” with their values:

    {1, I}, {4, IV}, {5, V}, {9, IX}, {10, X}, {40, XL}, {50, L}, {90, XC}, {100, C}, {400, CD}, {500, D}, {900, CM}, {1000, M}

    Since I take IV, IX etc. as “digits”, translating an arabic number becomes trivial. I just need to find the values of the roman “digits” making up the number; e.g. 1954 is made up of 1000, 900, 50, and 4. I call those “digits” factors. If I move from the highest factor (M=1000) to the lowest (I=1), then translation is a three-phase process:

    1. Find all the factors
    2. Translate the factors found
    3. Compile the roman representation

    Translation is just a look-up.
    Finding, though, needs some calculation:

    1. Find the highest remaining factor fitting in the value
    2. Remember and subtract it from the value
    3. Repeat with the remaining value and remaining factors

    Please note: this is just an algorithm. It's not code, even though it might be close. Being so close to code in my solution approach is due to the triviality of the problem. In more realistic examples the conceptual solution would be on a higher level of abstraction. With this solution in hand I finally can do what TDD advocates: find and prioritize test cases. As I can see from the small process description above, there are three aspects to test:

    - Test the translation
    - Test the compilation
    - Test finding the factors

    Testing the translation primarily means to check if the map of factors and digits is comprehensive. That's simple, even though it might be tedious. Testing the compilation is trivial. Testing factor finding, though, is a tad more complicated. I can think of several steps: First check if an arabic number equal to a factor is processed correctly (e.g. 1000=M). Then check if an arabic number consisting of two consecutive factors (e.g. 1900=[M,CM]) is processed correctly. Then check if a number consisting of the same factor twice is processed correctly (e.g. 2000=[M,M]). Finally check if an arabic number consisting of non-consecutive factors (e.g. 1400=[M,CD]) is processed correctly. I feel I can start an implementation now. If something becomes more complicated than expected, I can slow down and repeat this process.

    3. Implement
    First I write a test for the acceptance test cases. It's red because there's no implementation, not even of the API. That's in conformance with “TDD lore”, I'd say. Next I implement the API. The acceptance test now is formally correct, but still red of course. This will not change even now that I zoom in, because my goal is not to most quickly satisfy these tests, but to implement my solution in a stepwise manner. That I do by “faking” it: I just “assume” three functions to represent the transformation process of my solution. My hypothesis is that those three functions in conjunction produce correct results on the API level. I just have to implement them correctly. That's what I'm trying now, one by one. I start with a simple “detail function”: Translate(). And I start with all the test cases in the obvious equivalence partition. As you can see, I dare to test a private method. Yes, that's a white box test. But as you'll see, it won't make my tests brittle. It serves a purpose right here and now: it lets me focus on getting one aspect of my solution right. The implementation to satisfy the test is as simple as possible, right how TDD wants me to do it: KISS. Now for the second equivalence partition: translating multiple factors. (It's a pattern: if you need to do something repeatedly, separate the tests for doing it once and doing it multiple times.) In this partition I just need a single test case, I guess. Stepping up from a single translation to multiple translations is no rocket science. Usually I would have implemented the final code right away. Splitting it in two steps is just for “educational purposes” here. How small your implementation steps are is a matter of your programming competency: some “see” the final code right away before their mental eye, others need to work their way towards it. Having two tests I find more important. Now for the next low-hanging fruit: compilation. It's even simpler than translation. A single test is enough, I guess.
    And normally I would not even have bothered to write that one, because the implementation is so simple. I don't need to test .NET framework functionality. But again: if it serves the educational purpose… Finally the most complicated part of the solution: finding the factors. There are several equivalence partitions, but still I decide to write just a single test, since the structure of the test data is the same for all partitions. Again, I'm faking the implementation first: I focus on just the first test case. No looping yet. Faking lets me stay on a high level of abstraction. I can write down the implementation of the solution without bothering myself with details of how to actually accomplish the feat. That's left for a drill-down with a test of the fake function. There are two main equivalence partitions, I guess: either the first factor is appropriate, or some later one. The implementation seems easy. Both test cases are green. (Of course this only works on the premise that there's always a matching factor. Which is the case, since the smallest factor is 1.) And the first of the equivalence partitions on the higher level also is satisfied. Great, I can move on. Now for more than a single factor: interestingly not just one test becomes green now, but all of them. Great! You might say: then I must not have done the simplest thing possible. And I would reply: I don't care. I did the most obvious thing. But I also find this loop very simple, even simpler than a recursion of which I had thought briefly during the problem-solving phase. And by the way: the acceptance tests also went green. Mission accomplished, at least functionality-wise. Now I have to tidy up things a bit. TDD calls for refactoring. Not much refactoring is needed, because I wrote the code in top-down fashion. I faked it until I made it. I endured red tests on higher levels while lower levels weren't perfected yet. But this way I saved myself from refactoring tediousness. At the end, though, some refactoring is required. But maybe in a different way than you would expect. That's why I rather call it “cleanup”. First I remove duplication: there are two places where factors are defined, in Translate() and in Find_factors(). So I factor the map out into a class constant, which leads to a small conversion in Find_factors(). And now for the big cleanup: I remove all tests of private methods. They are scaffolding tests to me. They only have temporary value. They are brittle. Only acceptance tests need to remain. However, I carry over the single “digit” tests from Translate() to the acceptance test. I find them valuable to keep, since the other acceptance tests only exercise a subset of all roman “digits”. This then is my final test class, and this is the final production code (both appear as screenshots in the original post). Test coverage as reported by NCrunch is 100%.

    Reflexion
    Is this the smallest possible code base for this kata? Surely not. You'll find more concise solutions on the internet. But LOC are of relatively little concern, as long as I can understand the code quickly. So-called “elegant” code, however, often is not easy to understand. The same goes for KISS code, especially if left unrefactored, as is often the case. That's why I progressed from requirements to final code the way I did. I first understood and solved the problem on a conceptual level. Then I implemented it top-down according to my design. I also could have implemented it bottom-up, since I knew some bottom of the solution: the leaves of the functional decomposition tree.
    Where things became fuzzy, since the design did not cover any more details, as with Find_factors(), I repeated the process in the small, so to speak: fake some top level, endure red high-level tests, while first solving a simpler problem. Using scaffolding tests (to be thrown away at the end) brought two advantages:

    - Encapsulation of the implementation details was not compromised. Naturally private methods could stay private. I did not need to make them internal or public just to be able to test them.
    - I was able to write focused tests for small aspects of the solution. No need to test everything through the solution root, the API.

    The bottom line thus for me is: Informed TDD produces cleaner code in a systematic way. It conforms to core principles of programming: the Single Responsibility Principle and/or Separation of Concerns. Distinct roles in development (being a researcher, being an engineer, being a craftsman) are represented as different phases. First find out what there is. Then devise a solution. Then code the solution, manifest the solution in code. Writing tests first is a good practice, but it should not be taken as dogma. And above all it should not be overloaded with purposes. And finally: moving from top to bottom through a design produces refactored code right away. Clean code thus almost is inevitable, and not left to a refactoring step at the end, which is often skipped for different reasons.

    PS: Yes, I have done this kata several times. But that has only an impact on the time needed for phases 1 and 2. I won't skip them because of that. And there are no shortcuts during implementation because of that.
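    Since the post presents its tests and production code as screenshots, which do not survive in this text digest, here is a hedged C# sketch of what the final ArabicRomanConversions class could look like, assembled purely from the prose above (the factor/digit map, Find_factors(), Translate() and a compile step). It follows the described design but is a reconstruction, not the author's verbatim code; the tuple syntax assumes a C# 7 or later compiler.

        using System.Collections.Generic;
        using System.Linq;

        public class ArabicRomanConversions
        {
            // The map of factors and roman "digits", highest first.
            // IV, IX etc. count as "digits" in their own right.
            private static readonly (int Factor, string Digit)[] Map =
            {
                (1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
                (100, "C"),  (90, "XC"),  (50, "L"),  (40, "XL"),
                (10, "X"),   (9, "IX"),   (5, "V"),   (4, "IV"), (1, "I")
            };

            public static string ToRoman(int arabic)
            {
                // The three "assumed" functions of the top-down implementation.
                var factors = Find_factors(arabic);
                var digits = Translate(factors);
                return Compile(digits);
            }

            // Find the highest remaining factor fitting in the value,
            // remember and subtract it, repeat with the remaining value.
            private static IEnumerable<int> Find_factors(int value)
            {
                foreach (var entry in Map)
                    while (value >= entry.Factor)
                    {
                        yield return entry.Factor;
                        value -= entry.Factor;
                    }
            }

            // Translation is just a look-up in the map.
            private static IEnumerable<string> Translate(IEnumerable<int> factors)
            {
                return factors.Select(f => Map.First(m => m.Factor == f).Digit);
            }

            // Compilation concatenates the roman "digits".
            private static string Compile(IEnumerable<string> digits)
            {
                return string.Concat(digits);
            }
        }

    As a check against the article's own example, ToRoman(1954) finds the factors 1000, 900, 50 and 4 and yields "MCMLIV".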

    Read the article

  • Stuck with Documentum Still? Do MORE with Oracle WebCenter!

    - by Michael Snow
    WEBCAST TODAY!! 03/22/12 Do you need to lower costs? Raise productivity? Foster innovation? Improve online engagement? But you're still stuck with Documentum? Step away from the ledge; there is hope. Let us help you.

    Top 4 Content Imperatives:
    · Lower Costs - Reduce labor, maintenance fees, storage and electrical consumption
    · Raise Productivity - Automation and integration, communication, findability
    · Foster Innovation - Enable collaboration, expertise location
    · Improve Online Engagement - Enable user-driven, dynamic marketing initiatives

    With the coming technology wave we see four content imperatives. Every organization has had to reduce costs; cost cutting has become a way of life. Everyone is working three jobs as positions are eliminated. And so we have to reduce labor, reduce maintenance, and reduce money we are wasting on things like storing content that is redundant or no longer useful.

    We also, to fill that gap, need to raise productivity. Knowledge workers represent the fastest growing segment of the workforce, accounting for 40%-75% of the employees at organizations in sectors like financial services, life sciences, healthcare and retail. What's more, their wages total 18 percent of the United States GDP. And so we can't afford information systems that don't let our top performers be the best they can be. We look to automate the content processes, provide ways to integrate that content into our processes, provide communication to make decisions, and make content more findable so people can make the right decision and move the process forward.

    And really, to get ourselves out of the current financial status, we can only cut costs so far. We have to innovate out of economic tough times, to find new products and new markets. And to enable the innovation process, we have to enable collaboration and expertise location. So much of innovation is about building on innovations that have come before. To solve problems, we have to be able to find what our organization has already created. We find that problems we need to solve have already been solved if we can find the right document, the right person. So we have to provide systems that enable us to stand on the shoulders of our organization's accomplishments.

    Good content drives great marketing. Online engagement is growing as an absolute necessity for modern, growing marketing organizations that require that business users be enabled for dynamic marketing content creation, updates and targeted content creation and management.

    Unfortunately, if you are currently stuck with Documentum, you are really lacking in your Web Experience Management capabilities. Documentum previously used FatWire for web publishing; now FatWire is part of Oracle. Oracle provides powerful web engagement capabilities:
    · Increase sales and loyalty by optimizing online engagement
    · Create, manage and moderate contextually relevant, targeted and interactive online experiences
    · Optimize customer engagement across web, mobile and social channels
    · Manage large-scale multichannel global online presence with integration to enterprise applications
    · Enable business users to control their content and make their own updates
    · Publish content from native files - enable navigation of project documents, procedures, policy information
    · Enable content display and updates from existing web applications - one click to drag-and-drop content management functionality

    So you get the ability to self-publish information and make it navigable, to move the process of publishing from IT to business users, and the ability to address a whole new area of user engagement with web experience management.

    So… if you are still stuck with Documentum and don't know what to do, contact us. Not only will Oracle help you step away from the ledge, but with the MoveOff Documentum program we are offering you a way: trade in your Documentum licenses for a 100% credit on Oracle WebCenter. How's that for a nice bonus? It's time to stop maintaining Documentum, and to start innovating with Oracle WebCenter. Learn More Here! To learn more about what Oracle WebCenter can offer you today, join us for a webcast; your eyes will be opened to all that's possible. Do More with WebCenter: Extend Beyond Content Management

    Read the article

  • How to get bearable 2D and 3D performance on AMD Radeon HD 6950?

    - by l0b0
    I have had an AMD Radeon HD 6950 (i.e., Cayman series) for a couple of years now, and I have tried a lot of combinations of drivers and settings with terrible results. I'm completely at a loss as to how to proceed. The open source driver has much better 2D performance, but it offloads all OpenGL rendering to the CPU.

    What I've tried so far:
    - All the latest stable Ubuntu releases in the period, plus one Linux Mint release.
    - All the latest stable AMD Catalyst Proprietary Display Drivers, and currently 13.1.
    - The unofficial wiki installation instructions for every Ubuntu version and the semi-official Ubuntu instructions.
    - All the tips and tweaks I could find for Minecraft (Optifine, reducing settings to minimum), VLC (postprocessing at minimum, rendering at native video size), Catalyst Control Center (flipped every lever in there) and X11 (some binary toggles I can no longer remember).

    Results:
    - Typically 13-15 FPS in Minecraft, 30 max (100+ in Windows with the same driver version).
    - Around 10 FPS in Team Fortress 2 using the official Steam client.
    - Choppy video playback, in Flash and with VLC. CPU use goes through the roof when rendering video (150% for 1080p on YouTube in Chromium, 100% for 1080p H264 in VLC).
    - glxgears shows 12.5 FPS when maximized.
    - fgl_glxgears shows 10 FPS when maximized.

    Hardware details from lshw:
    - Motherboard: ASUS P6X58D-E
    - CPU: Intel Core i7 950 @ 3.07GHz (never overclocked; 64 bit)
    - 6 GB RAM
    - Video card: product "Cayman PRO [Radeon HD 6950]", vendor "Hynix Semiconductor (Hyundai Electronics)"
    - 2 x 1920x1200 monitors, both connected with HDMI

    I feel I must be missing something absolutely fundamental here. Is there no accelerated support for anything on 64-bit architectures? Does a dual monitor setup completely mess up the driver?

    $ fglrxinfo
    display: :0  screen: 0
    OpenGL vendor string: Advanced Micro Devices, Inc.
    OpenGL renderer string: AMD Radeon HD 6900 Series
    OpenGL version string: 4.2.11995 Compatibility Profile Context

    $ glxinfo | grep 'direct rendering'
    direct rendering: Yes

    I am currently using the open source driver, with the following results:
    - Full frame rate and low CPU load when playing 1080p video.
    - Black screen (but music in the background) in Team Fortress 2.
    - Similar performance in Minecraft as the Catalyst driver. In hindsight obvious, since both end up offloading the rendering to the CPU.

    My /var/log/Xorg.0.log after upgrading to AMD Catalyst 13.1. Some possibly important lines:

    (WW) Falling back to old probe method for fglrx
    (WW) fglrx: No matching Device section for instance (BusID PCI:0@3:0:1) found

    The generated xorg.conf. The disabled "monitor" 0-DFP9 is actually an A/V receiver, which sometimes confuses the monitor drivers when turned on/off (but not in Windows). All three "monitor" devices are connected with HDMI.

    Edit: Chris Carter's suggestion to use the xorg-edgers PPA (Catalyst 13.1) resulted in some improvement, but still pretty bad performance overall:
    - Minecraft stabilizes at 13-17 FPS, but at least the CPU load is "only" at 45-60%.
    - Still 150% CPU use for 1080p video rendering on YouTube in Chromium.
    - Massive improvement for 1080p H264 in VLC: 40-50% CPU use and no visible jitter.
    - glxgears performance about doubled to 25-30 FPS when maximized.
    - fgl_glxgears still at ~10 FPS when maximized.

    Read the article

  • Prefilling an SMS on Mobile Devices with the sms: Uri Scheme

    - by Rick Strahl
    Popping up the native SMS app from a mobile HTML web page is a nice feature that allows you to pre-fill info into a text for sending by a user of your mobile site. The syntax is a bit tricky due to some device inconsistencies (and quite a bit of wrong/incomplete info on the Web), but it's definitely something that's fairly easy to do.

    In one of my mobile HTML web apps I need to share a current location via SMS. While browsing around a page I want to select a geo location, then share it by texting it to somebody. Basically I want to pre-fill an SMS message with some text, but no name or number, which instead will be filled in by the user.

    What works
    The syntax that seems to work fairly consistently, except for iOS, is this:

    sms:phonenumber?body=message

    For iOS, instead of the ? use a ';' (because Apple is always right, standards be damned, right?):

    sms:phonenumber;body=message

    and that works to pop up a new SMS message on the mobile device. I've only marginally tested this with a few devices: an iPhone running iOS 6, an iPad running iOS 7, Windows Phone 8 and a Nexus S in the Android emulator. All four devices managed to pop up the SMS with the data prefilled. You can use this in a link:

    <a href="sms:1-111-1111;body=I made it!">Send location via SMS</a>

    or you can set it on window.location in JavaScript:

    window.location = "sms:1-111-1111;body=" + encodeURIComponent("I made it!");

    to make the window pop up directly from code. Notice that the content should be URL encoded; HTML links automatically encode, but when you assign the URL directly in code the text value should be encoded.

    Body only
    I suspect in most applications you won't know who to text, so you only want to fill the text body, not the number. That works as you'd expect by just leaving out the number. Here's what the URLs look like in that case:

    sms:?body=message

    For iOS, same thing except with the ;:

    sms:;body=message

    Here's an example of the code I use to set up the SMS:

    var ua = navigator.userAgent.toLowerCase();
    var url;
    if (ua.indexOf("iphone") > -1 || ua.indexOf("ipad") > -1)
        url = "sms:;body=" + encodeURIComponent("I'm at " + mapUrl + " @ " + pos.Address);
    else
        url = "sms:?body=" + encodeURIComponent("I'm at " + mapUrl + " @ " + pos.Address);
    location.href = url;

    and that also works for all the devices mentioned above. It's pretty cool that URI schemes exist to access device functionality, and the SMS one will come in pretty handy for a number of things. Now if only all of the URI schemes were a bit more consistent (damn you, Apple!) across devices...

    © Rick Strahl, West Wind Technologies, 2005-2013. Posted in iOS, JavaScript, HTML5

    Read the article

  • Where would my different development rhythm be suitable for the work?

    - by DarenW
    Over the years I have worked on many projects, some successful and a great benefit to the company, and some total failures that ended with me getting fired or otherwise leaving. What is the difference? Naturally I prefer the former and wish to avoid the latter, so I'm pondering this issue.

    The key seems to be that my personal approach differs from the norm. I write code first, letting it be all spaghetti and chaos, using whatever tools "fit my hand" that I'm fluent in. I try to organize it, then give up and start over with a better design. I go through cycles, from thinking-design to coding-testing. This may seem the same as any other development process, Agile or whatever, cycling between design and coding, but there does seem to be a subtle difference: the methods (ideally) followed by most teams go design, code; design, code; ... while I'm going code, design; code, design; (if that makes any sense). Music analogy: some types of music have a strong downbeat while others have prominent syncopation.

    In practice, I just can't think in terms of UML, specifications and so on, but grok things only by attempting to code, debug and refactor ad hoc. I need the grounding provided by coding in order to think constructively, then to offer any opinions, advice or solutions to the team and get real work done. In positions where I can initially hack up cowboy code without constraints of tool or language choices, I easily gain a "feel" for the data, requirements etc. and eventually do good work. In formalized positions where paperwork and pure "design" come first and coding comes only later (even for small proof-of-concept projects), I am lost at sea and drown.

    Therefore, I'd like to know how to either 1) change my rhythm to match the more formalized, methodology-oriented team ways of doing things, or 2) find positions at organizations where my sense of development rhythm is perfect for the work. It's probably unrealistic for a person to change their fundamental approach to things, so option 2) is preferred. So where can I find such positions? How common is my approach, and where is it seen as viable but different, rather than dismissed as undisciplined cowboy-coder ways?

    Read the article
