Search Results

Search found 39207 results on 1569 pages for 'foreign function interfac'.


  • Tracking click conversions with Google Analytics

    - by Joel
    Is there any way I can use Google Analytics to track click conversions on a link? For example, if I have a link to www.a.com, is it possible for Google to track the number of times that particular link was shown on my page and then track how many times it was actually clicked? The problem is that I do not show the link to www.a.com every time the page loads. I am using a random function (server side) to generate a different link every time. I would like Google Analytics to provide me with the click conversion for each of the links I choose to show the user. Thanks, Joel
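
    (One hedged approach, using the classic asynchronous Google Analytics event API: fire an event when the link is rendered, e.g. _gaq.push(['_trackEvent', 'rotating-links', 'shown', 'a.com']), and another from the link's onclick with an action of 'clicked'; the clicked/shown ratio per label then gives a per-link conversion rate. The category, action and label names here are hypothetical.)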

    Read the article

  • Python C API return more than one value / object without building a tuple [migrated]

    - by Grisu
    I have the following problem. I have written a C extension for Python (2.7/3.2) to interface a self-written software library. Unfortunately I need to return two values from the function, where the last one is optional. In Python I tried

        def func(x, y):
            return x + y, x - y

        test = func(13, 4)

    but test is a tuple. If I write

        test1, test2 = func(13, 4)

    I get both values separately. Is there a possibility to return only one value without unpacking the tuple, i.e. so that the second (...third, ...fourth) value gets neglected? And if such a solution exists, how does it look for the C API? Because

        return Py_BuildValue("ii", x+y, x-y);

    results in a tuple as well.
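
    For what it's worth, a Python call always yields exactly one object, so two values inevitably arrive as a tuple; the caller can always index it (test = func(13, 4)[0]) to keep just the first. On the C side, returning a single value means building a single object. A minimal sketch (function and argument names hypothetical):

        #include <Python.h>

        /* Sketch: same computation, but only one value is built, so the
         * caller receives a plain int rather than a tuple. */
        static PyObject *func_sum_only(PyObject *self, PyObject *args)
        {
            int x, y;
            if (!PyArg_ParseTuple(args, "ii", &x, &y))
                return NULL;
            return Py_BuildValue("i", x + y);  /* single object, no tuple */
        }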

    Read the article

  • Is there a constant for "end of time"?

    - by Nick Rosencrantz
    For some systems, the date 9999-12-31 is used as the "end of time", the latest point in time the system can represent. But what if that changes? Wouldn't it be better to define this time as a built-in constant? In C and other programming languages there is usually a constant such as INT_MAX or similar to get the largest value an integer can hold. Why is there no similar constant MAX_TIME, i.e. a constant set to the "end of time", which for many systems is 9999-12-31? To avoid the problem of hardcoding a wrong year (9999), couldn't these systems introduce a constant for the "end of time"?
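
    (Some environments do ship exactly this: Python's datetime.max, for instance, is fixed at 9999-12-31 23:59:59.999999, and SQL Server's datetime type tops out on the same date, which is presumably where many systems inherit the 9999 convention from.)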

    Read the article

  • Use a template to get alternate behaviour?

    - by Serge
    Is this a bad practice?

        const int sId(int const id); // true/false it doesn't matter

        template<bool i>
        const int sId(int const id) {
            return this->id = id;
        }

        const int MCard::sId(int const id) {
            MCard card = *this;
            this->id = id;
            this->onChange.fire(EventArgs<MCard&, MCard&>(*this, card));
            return this->id;
        }

        myCard.sId(9);
        myCard.sId<true>(8);

    As you can see, my goal is to be able to have an alternative behaviour for sId. I know I could use a second parameter to the function and use an if, but this feels more fun (imo) and might prevent branch prediction (I'm no expert in that field). So, is it a valid practice, and/or is there a better approach?
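
    Not offered as the one right answer, but a conventional alternative is to spell the two behaviours out as two named methods and let one reuse the other. A self-contained sketch, with the event machinery replaced by a std::function stand-in (names hypothetical):

        #include <functional>

        class MCard {
        public:
            int setId(int id) { id_ = id; return id_; }   // silent variant
            int setIdAndNotify(int id) {
                int old = id_;
                setId(id);
                if (onChange) onChange(old, id_);          // fire only here
                return id_;
            }
            std::function<void(int, int)> onChange;        // stand-in for the event field
        private:
            int id_ = 0;
        };

    The call sites then read as myCard.setId(9) versus myCard.setIdAndNotify(8), which documents the intent better than a bare <true>.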

    Read the article

  • Guidance on building an au pair-to-family networking site.

    - by Philip Kidd
    I'm building a website for an au pair agency business that will connect au pairs to families around Europe. I know nothing about website building, HTML etc., so I'm using a WYSIWYG editor (Weebly). How I would like the site to function:

    - Families upload their information into profiles
    - Au pairs do the same
    - Families can view a limited part of an au pair's profile until they pay a deposit
    - After the deposit is paid, all au pairs' profile information becomes open to families
    - Families can order au pairs and confirm their order with another payment; payment must be made before the 'order' is confirmed

    By 'order' I mean full communications become open between the family and the au pair they have 'ordered', as well as travel information being sent to another agency. The site needs to be linked with a bank account (e.g. PayPal) and another agency, who will look after the flight bookings etc. A website already exists for this business, however it just contains information on the business and application forms; if the site becomes fully automated it will relieve a lot of strain on administration in the office (dealing with applications, travel information etc.).

    Read the article

  • Scripts won't affect clones - Unity3d

    - by user3666251
    I made a script which swaps two game objects on click, but the script won't work because the objects are actually clones of the original prefab. This is the script (UnityScript):

        #pragma strict
        var object1 : GameObject;
        var object2 : GameObject;

        function OnMouseDown () {
            Instantiate(object2, object1.transform.position, object1.transform.rotation);
            Destroy(object1);
        }

    I use this script to create the other game objects (clones) [C#]:

        using UnityEngine;
        using System.Collections;

        public class Spawner : MonoBehaviour {
            public GameObject[] obj;
            public float spawnMin = 1f;
            public float spawnMax = 2f;

            // Use this for initialization
            void Start () {
                Spawn ();
            }

            void Spawn() {
                Instantiate(obj[Random.Range(0, obj.GetLength(0))], transform.position, Quaternion.identity);
                Invoke ("Spawn", Random.Range (spawnMin, spawnMax));
            }
        }

    The objects get renamed to NAME (Clone). What I want to do is make the script affect clones too, so they will swap when I click on them.
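
    (A hedged observation rather than a definitive diagnosis: scripts attached to a prefab do run on its clones, but a hard reference like object1 still points at whatever was assigned in the prefab, not at the clone that was clicked. Using the script's own gameObject and transform inside OnMouseDown is the usual way to make each clone swap itself.)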

    Read the article

  • Pointless Code In Your Source

    - by Ali
    I've heard stories of this from senior coders and I've seen some of it myself. It seems that there are more than a few instances of programmers writing pointless code. I will see things like:

    - Method or function calls that do nothing of value.
    - Redundant checks done in a separate class file, object or method.
    - if statements that always evaluate to true.
    - Threads that spin off and do nothing of note.

    Just to name a few. I've been told that this is because programmers want to intentionally make the code confusing, to raise their own worth to the organization or to make sure of repeat business in the case of contractual or outsourced work. My question is: has anyone else seen code like this? What was your conclusion as to why that code was there? If anyone has written code like this, can you share why?

    Read the article

  • Master Data Management Implementation Styles

    - by david.butler(at)oracle.com
    In any Master Data Management solution deployment, one of the key decisions to be made is the choice of the MDM architecture. Gartner and other analysts describe several different Hub deployment styles, which must be supported by a best-of-breed MDM solution in order to guarantee the success of the deployment project.

    Registry Style: In a Registry Style MDM Hub, the various source systems publish their data and a subscribing Hub stores only the source system IDs, the foreign keys (record IDs on source systems) and the key data values needed for matching. The Hub runs the cleansing and matching algorithms and assigns unique global identifiers to the matched records, but does not send any data back to the source systems. The Registry Style MDM Hub uses data federation capabilities to build the "virtual" golden view of the master entity from the connected systems.

    Consolidation Style: The Consolidation Style MDM Hub has a physically instantiated, "golden" record stored in the central Hub. The authoring of the data remains distributed across the spoke systems, and the master data can be updated based on events, but is not guaranteed to be up to date. The master data in this case is usually not used for transactions, but rather supports reporting; however, it can also be used for reference operationally.

    Coexistence Style: The Coexistence Style MDM Hub involves master data that's authored and stored in numerous spoke systems, but includes a physically instantiated golden record in the central Hub and harmonized master data across the application portfolio. The golden record is constructed in the same manner as in the Consolidation Style, and, in the operational world, Consolidation Style MDM Hubs often evolve into the Coexistence Style. The key difference is that in this architectural style the master data stored in the central MDM system is selectively published out to the subscribing spoke systems.

    Transaction Style: In this architecture, the Hub stores, enhances and maintains all the relevant (master) data attributes. It becomes the authoritative source of truth and publishes this valuable information back to the respective source systems. The Hub publishes and writes back the various data elements to the source systems after the linking, cleansing, matching and enriching algorithms have done their work. Upstream, transactional applications can read master data from the MDM Hub, and, potentially, all spoke systems subscribe to updates published from the central system in a form of harmonization. The Hub needs to support merging of master records. Security and visibility policies at the data attribute level need to be supported by the Transaction Style Hub as well.

    Adaptive Transaction Style: This is similar to the Transaction Style, but additionally provides the capability to respond to diverse information and process requests across the enterprise. This style emerged most recently to address the limitations of the above approaches. With the Adaptive Transaction Style, the Hub is built as a platform for consolidating data from disparate third-party and internal sources and for serving unified master entity views to operational applications, analytical systems or both. This approach delivers a real-time Hub that has a reliable, persistent foundation of master reference and relationship data, along with all the history and lineage of data changes needed for audit and compliance tracking. On top of this persistent master data foundation, the Hub can dynamically aggregate transaction data on demand from different source systems to deliver the unified golden view to downstream systems. Data can also be accessed through batch interfaces, published to a message bus or served through a real-time services layer. New data sources can be readily added in this approach by extending the data model and by configuring the new source mappings and the survivorship rules, meaning that all legacy data hubs can be leveraged to contribute their records/rules into the new transaction hub. Finally, through rich user interfaces for data stewardship, it allows exception handling by business analysts to keep it current with business rules/practices while maintaining the reliability of best-of-breed master records.

    Confederation Style: In this architectural style, several Hubs are maintained at departmental and/or agency and/or territorial level, and each of them is connected to the other Hubs either directly or via a central Super-Hub. Each domain-level Hub can be implemented using any of the previously described styles, but normally the central Super-Hub is a Registry Style one. This is particularly important for public sector organizations, where most of the time it is practically or legally impossible to store all the relevant constituent information from all departments in a single central hub.

    Oracle MDM Solutions can be deployed according to any of the above MDM architectural styles, and have been specifically designed to fully support the Transaction and Adaptive Transaction styles. Oracle MDM Solutions provide strong data federation and integration capabilities, which are key to enabling the use of the Confederated Hub as a possible architectural style approach. Don't lock yourself into a solution that cannot evolve with your needs. With Oracle's support for any type of deployment architecture, its ability to leverage the outstanding capabilities of the Oracle technology stack, and its open interfaces for non-Oracle technology stacks, Oracle MDM Solutions provide a low TCO and a quick ROI by enabling a phased implementation strategy.

    Read the article

  • Circular Bullet Spread not Even

    - by SoulBeaver
    I'm creating a bullet shooter much in the style of Touhou. Right now I want to have a very simple circular shot being fired from the enemy. See this picture: As you can see, the spacing is very uneven, which isn't very good if you want to survive. The code I'm using is this:

        private function shoot() : void {
            const BULLETS_PER_WAVE : int = 72;
            var interval : Number = BULLETS_PER_WAVE / 360;

            for (var i : int = 0; i < BULLETS_PER_WAVE; ++i) {
                var xSpeed : Number = GameConstants.BULLET_NORMAL_SPEED_X * Math.sin(i * interval);
                var ySpeed : Number = GameConstants.BULLET_NORMAL_SPEED_Y * Math.cos(i * interval);
                BulletFactory.createNormalBullet(bulletColor_, alice_.center, xSpeed, ySpeed);
            }

            canShoot_ = false;
            cooldownTimer_.start();
        }

    I imagine my mistake is in the sin/cos functions, but I'm not entirely sure what's wrong.
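
    A likely culprit, offered as a guess: Math.sin/Math.cos take radians, and the step should be the full circle divided by the bullet count, not the other way around. As written, interval is 72/360 = 0.2, so 72 bullets span about 14.4 radians, roughly 2.3 turns of the circle, and the wrapped directions overlap unevenly. A small C++ sketch of the usual math:

        #include <cmath>
        #include <cstdio>

        int main() {
            const int bulletsPerWave = 72;
            const double pi = 3.141592653589793;
            // Full circle (2*pi radians) divided by the bullet count.
            const double step = 2.0 * pi / bulletsPerWave;
            for (int i = 0; i < bulletsPerWave; ++i) {
                // Unit direction; multiply by the bullet speed when spawning.
                double xDir = std::sin(i * step);
                double yDir = std::cos(i * step);
                std::printf("%2d: (%+.3f, %+.3f)\n", i, xDir, yDir);
            }
            return 0;
        }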

    Read the article

  • REPLACENULL in SSIS 2012

    - by Davide Mauri
    While preparing my slides and demos for the forthcoming SQL Server Conference 2012 in Italy, I've come across a nice addition to the DTS expression language which I had never noticed before and which seems unknown to the blogosphere as well: REPLACENULL. REPLACENULL is the same as ISNULL in T-SQL. It's *very* useful, especially when loading a fact table in your BI solution and you need to replace missing dimension references with dummy values. Here's an example of how it can be used (please notice that in this example I'm NOT loading a fact table). I've noticed that the feature was requested by fellow MVP John Welch: http://connect.microsoft.com/SQLServer/feedback/details/636057/ssis-add-a-replacenull-function-to-the-expression-language So: thanks John, and thanks SSIS Team! Ah, btw, the help online is here: http://msdn.microsoft.com/en-us/library/hh479601(v=sql.110).aspx Enjoy!
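
    (A typical use, with a hypothetical column name: the expression REPLACENULL(CustomerKey, -1) evaluates to -1 wherever CustomerKey is NULL, ready to point at a dummy dimension member.)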

    Read the article

  • Do programmers at non-software companies need the same things as at software companies?

    - by Michael
    There is a lot of evidence that things like offices, multiple screens, administration rights to your own computer, and being allowed whatever software you want are great for productivity while developing. However, the studies I've seen tend toward companies that sell software, where keeping the programmers productive is paramount to the company's profitability. At companies that produce software simply to support their primary function, programming is merely a support role. Do the same rules apply at a company that only uses the software it produces to support its business, where a lot of a programmer's work is maintenance?

    Read the article

  • cannot open ubuntu software center

    - by success
    I deleted some unnecessary icon themes and now my application icons have changed. I cannot open Ubuntu Software Center either; the following message is displayed:

        success@user-pc:~$ software-center
        2012-09-12 22:24:52,048 - softwarecenter.ui.gtk3.app - INFO - setting up proxy 'None'
        2012-09-12 22:24:52,055 - softwarecenter.db.database - INFO - open() database: path=None use_axi=True use_agent=True
        Traceback (most recent call last):
          File "/usr/bin/software-center", line 142, in <module>
            app = SoftwareCenterAppGtk3(datadir, xapian_base_path, options, args)
          File "/usr/share/software-center/softwarecenter/ui/gtk3/app.py", line 387, in __init__
            self.datadir)
          File "/usr/share/software-center/softwarecenter/ui/gtk3/panes/historypane.py", line 78, in __init__
            self._get_emblems(self.icons)
          File "/usr/share/software-center/softwarecenter/ui/gtk3/panes/historypane.py", line 192, in _get_emblems
            pb = icons.load_icon(emblem, self.ICON_SIZE, 0)
          File "/usr/lib/python2.7/dist-packages/gi/types.py", line 43, in function
            return info.invoke(*args, **kwargs)
        gi._glib.GError: Icon 'package-install' not present in theme

    I also tried the following command to change the icon, but it didn't work:

        gksu gedit /usr/share/applications/ubuntu-software-center.desktop

    Read the article

  • Frame Interpolation issues for skeletal animation

    - by sebby_man
    I'm trying to animate in-between keyframes for skeletal animation but I'm having some issues. Each joint is represented by a quaternion and there is no translation component. When I try to slerp between the orientations at two keyframes, I get a very wacky animation. I know my skinning equation is right, because the animation is perfectly fine when the pose lands directly on a keyframe rather than in between two. I'm using glm's built-in mix function to do the slerp, so I don't think there are any problems with the actual slerp implementation. There's really only one thing left that could be wrong here: I must not be in the correct space to do the slerp. Right now the orientations are in joint-local space. Do I have to be in world space? In some other space along the way? I have the bind pose matrix and the world-space transformation matrix at my disposal if those are needed.
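
    One frequent culprit, offered as a guess rather than a diagnosis: interpolating each joint's local orientation is otherwise the standard approach, but slerping between quaternions on opposite hemispheres (negative dot product) takes the long way around and produces wild in-between poses, since q and -q encode the same rotation. A sketch of the usual fix, assuming glm:

        #include <glm/glm.hpp>
        #include <glm/gtc/quaternion.hpp>

        // Interpolate two joint-local keyframe orientations along the shorter arc.
        glm::quat interpolateKeys(const glm::quat &a, glm::quat b, float t)
        {
            if (glm::dot(a, b) < 0.0f)
                b = -b;                               // flip to the same hemisphere
            return glm::normalize(glm::mix(a, b, t)); // mix slerps unit quaternions
        }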

    Read the article

  • JQuery Mobile: Fire Mobileinit Event

    - by Yousef_Jadallah
    Many people have asked why the mobileinit event didn't fire for them. It's simple: you just need to follow this load order:

        <link rel="stylesheet" href="http://code.jquery.com/mobile/1.0b1/jquery.mobile-1.0b1.min.css" />
        <script src="http://code.jquery.com/jquery-1.6.1.min.js"></script>
        <script>
            $(document).bind("mobileinit", function () {
                alert('mobileinit is fired');
            });
        </script>
        <script src="http://code.jquery.com/mobile/1.0b1/jquery.mobile-1.0b1.min.js"></script>

    Hope that helps.

    Read the article

  • Box2D Platform body not moving player body along with it

    - by onedayitwillmake
    I am creating a game using Box2D (the JavaScript implementation), and I added the ability to have a static platform that is moved along an axis as a function of a sine. My problem is that when the player lands on the platform, as the platform moves along the X axis, the player is not moved along with it, as you would visually expect. The player can land on the object, and if it hits the side of the object it does collide with it and is pushed. This image might explain it better than I did: after jumping onto the red platform, the player character will fall off as the platform moves to the right. UPDATE: Here is a live demo showing the problem: http://onedayitwillmake.com/ChuClone/slideexample.php
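
    A hedged note on the common cause: repositioning a static body directly teleports it, so contacts never see any platform velocity and friction has nothing to drag the player with. The usual remedy is a kinematic body (bodyDef.type = b2_kinematicBody) driven by velocity, letting contact friction carry whoever stands on top. A sketch against the C++ Box2D API, which the JavaScript ports mirror; the parameters are hypothetical:

        #include <Box2D/Box2D.h>
        #include <cmath>

        // Drive x-position as amplitude * sin(omega * t) by setting the matching
        // velocity each step: the time derivative, amplitude * omega * cos(omega * t).
        void drivePlatform(b2Body *platform, float amplitude, float omega, float elapsed)
        {
            float vx = amplitude * omega * std::cos(omega * elapsed);
            platform->SetLinearVelocity(b2Vec2(vx, 0.0f));
        }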

    Read the article

  • Understanding hand written lexers

    - by Cole Johnson
    I am going to make a compiler for C (C99; I own the standards PDF), written in C (go figure), and looking up how compilers work on Wikipedia has told me a lot. However, reading up on lexers has confused me. The Wikipedia page states that the GNU Compiler Collection (gcc) uses hand-written lexers. I have tried googling what a hand-written lexer is and have come up with nothing except "making a flowchart that describes how it should function"; however, isn't that how all software development should be done? So my question is: what is a hand-written lexer?
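
    By way of illustration: in contrast to a generated lexer (where a tool like lex/flex emits the code from a grammar), a hand-written lexer is nothing more exotic than a loop you write yourself that walks the input character by character and groups characters into tokens. A toy sketch in C, the language the compiler is being written in:

        #include <ctype.h>
        #include <stdio.h>

        /* Toy hand-written lexer: integers, identifiers, single-char operators. */
        void lex(const char *p)
        {
            while (*p) {
                if (isspace((unsigned char)*p)) {
                    p++;
                } else if (isdigit((unsigned char)*p)) {
                    const char *start = p;
                    while (isdigit((unsigned char)*p)) p++;
                    printf("INT(%.*s) ", (int)(p - start), start);
                } else if (isalpha((unsigned char)*p) || *p == '_') {
                    const char *start = p;
                    while (isalnum((unsigned char)*p) || *p == '_') p++;
                    printf("IDENT(%.*s) ", (int)(p - start), start);
                } else {
                    printf("OP(%c) ", *p++);
                }
            }
            printf("\n");
        }

        int main(void) { lex("x1 = 42 + y"); return 0; }

    A real one adds keyword recognition, multi-character operators and source locations, but the shape stays the same: a state machine written out by hand instead of generated tables.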

    Read the article

  • MySql for Visual Studio 1.0.2 GA has been released

    - by fernando
    MySQL for Visual Studio is a new product including all of the Visual Studio integration previously available as part of Connector/Net. The product is now released as GA and is appropriate for use in production environments. It is compatible with MySQL Server versions 5.0-5.7 and Visual Studio versions 2008-2012. It is now available as part of MySQL Installer for Windows (http://dev.mysql.com/tech-resources/articles/mysql-installer-for-windows.html).

    The 1.0 version of MySQL for Visual Studio brings the following new features:

    - Workbench launching.
    - MySQL Utilities launching.
    - Table script generation.
    - The functionality of the core libraries (ADO.NET, EF, ASP.NET providers) is available as the separate download of Connector/NET 6.7.

    Features available from previous versions:

    - Server Explorer connections
    - Design-time support
    - Entity Framework designer (Database First & Model First)
    - Stored routines debugger
    - IntelliSense
    - ASP.NET Website Configuration Tool

    Workbench Launching
    -------------------------------------------
    A context menu for connections in Server Explorer allows launching Workbench (if Workbench is installed).

    MySQL Utilities Launching
    -------------------------------------------
    A context menu for connections in Server Explorer allows launching a prompt for MySQL Utilities (if MySQL Utilities is installed).

    Table Script Generation
    -------------------------------------------
    A context menu for tables is available in Server Explorer to generate the script for a table.

    The full list of bug fixes for MySQL for Visual Studio 1.0 follows:

    1.0.2
    - Fix for documentation not found (Oracle bug #6915712).
    - Fix for IntelliSense completion: Views are now displayed together with Tables when invoking IntelliSense (Oracle bug #16881451).
    - Fix for parser syntax: the parser now supports the clause ALTER TABLE table_name RENAME {INDEX|KEY} old_index_name TO new_index_name introduced in MySQL 5.7 (Oracle bug #16881481).
    - Fix for debugging a routine producing an error when the binary log is enabled (Oracle bug #16941181).
    - Fix for WorkItem 552: MySQL for Visual Studio installer fails when installing against VS2008.
    - Fix for bug: VS plugin installer is not working (Oracle bug #16973339).
    - Fix for bug: release notes file has no notes (Oracle bug #16973326).

    1.0.1
    - Fix for "README" file and "Release Notes" file referring to Connector 6.6.
    - Fix for parser failing to recognize a complex view (Oracle bug #16815427).
    - Fix for altering a table's primary key in the designer not working (Oracle bug #16866053).
    - Fix for the web configuration tool not being shown in MySQL for Visual Studio (Oracle bug #16902696).
    - Fix for Model First not being supported in MySQL for Visual Studio (Oracle bug #16902743).
    - Fix so MySQL for VS is not installed with Connector/Net version < 6.7 (Oracle bug #16902774).
    - Fix to resolve assembly dependencies between MySql.Data (Connector/Net version) and MySql.Data (WI #460).
    - Fix for an exception related to resources being shown (Oracle bug #16903039).

    1.0.0
    - Added a new option on the Connection node of the Server Explorer window in Visual Studio to open the MySQL Utilities console window when Workbench is installed.
    - Added a new option on the Connection node of the Server Explorer window in Visual Studio to open the SQL Editor window using the same connection when Workbench is installed.
    - Implemented a menu option to generate a table script from the Server Explorer context menu (Tracker task 433).
    - Fix for bug: if using the repair option, VS2010 doesn't allow connecting to the DB (Oracle bug #16238242).
    - Fix for bug "Can't change the name for a view in view editor" (Oracle bug #13805346).
    - Fix for debugger being unable to debug stored procedures with a labeled main begin and declare statements included (Oracle bug #16002371).
    - Fix for bug: if using the repair option in the installer, VS2010 doesn't allow connecting to the DB (Oracle bug #16238242).
    - Fix for "Cannot change the name for a Foreign Key in table designer" (Oracle bug #16238068).
    - Fix for an error when trying to set a primary key for a column with the same name as a MySQL keyword (like INT) in the table designer (Oracle bug #16238102).
    - Fix for databases not displayed in the connect dialog for a MySQL script when correcting credentials after entering a bad password (Oracle bug #13805337).
    - Fix for debugger failing to debug a stored routine on a MySQL server hosted on Linux without the lower_case_table_names option enabled (MySQL bug #69065, Oracle bug #16770384).
    - Fix for debugger issue: values in the watch tab should not be modifiable (Oracle bug #14545448).
    - Fix for Visual Studio MySQL editor colors not being customizable (Oracle bug #16453324, MySQL bug #67994).

    The documentation is available as part of Connector/NET at http://dev.mysql.com/doc/refman/5.7/en/connector-net.html

    Enjoy, and thanks for the support!

    --
    Fernando Gonzalez Sanchez | Software Engineer | Oracle
    MySQL Windows Experience Team, Connector/NET
    Guadalajara | Jalisco | Mexico

    Read the article

  • Rotate sphere in Javascript / three.js while moving on x/z axes

    - by kaipr
    I have a sphere/ball in three.js which I want to "roll" around on the x/z axes. For the z axis I could simply do this, no matter what the current x and y rotation is:

        sphere.roll_z = function(distance) {
            sphere.position.z += distance;
            sphere.rotation.x += distance > 0 ? 0.05 : -0.05;
        }

    But how can I roll it along the x axis? And how could I do roll_z properly? I've found a lot about quaternions and matrices, but I can't figure out how to use them properly to achieve my (rather simple) goal. I'm aware that I have to update multiple rotations and that I have to calculate how far to rotate the sphere to match the distance, but the "how" is the question. It's probably just a lack of mathematical skills which I should train, but a working example/short explanation would help a lot to start with.
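
    Offered as a sketch of the underlying math rather than a drop-in answer: for rolling without slipping, the rotation axis is the up vector crossed with the direction of travel, and the angle is distance divided by the sphere's radius; the resulting world-space quaternion is premultiplied onto the sphere's current orientation each frame (in three.js terms, built with setFromAxisAngle and multiplied onto sphere.quaternion). The math in C++:

        #include <cmath>
        #include <cstdio>

        struct Quat { float w, x, y, z; };

        // World-space rotation for rolling `distance` units along (dx, dz) on the
        // ground plane. Axis = up x direction = (0,1,0) x (dx,0,dz) = (dz, 0, -dx).
        Quat rollDelta(float dx, float dz, float distance, float radius)
        {
            float len = std::sqrt(dx * dx + dz * dz);
            dx /= len; dz /= len;                 // unit direction of travel
            float angle = distance / radius;      // rolling without slipping
            float s = std::sin(angle / 2.0f);
            return {std::cos(angle / 2.0f), dz * s, 0.0f, -dx * s};
        }

        int main()
        {
            // Roll pi units along +z with radius 1: a half-turn about the +x axis,
            // matching the rotation.x += ... intuition in the snippet above.
            Quat q = rollDelta(0.0f, 1.0f, 3.14159f, 1.0f);
            std::printf("w=%.3f x=%.3f y=%.3f z=%.3f\n", q.w, q.x, q.y, q.z);
        }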

    Read the article

  • How can I use the Windows key?

    - by torbengb
    The Windows key seems not to have any use in Ubuntu, but since I'm just coming from Windows I'm used to this key having some function. How can I make good use of the Windows key in Ubuntu? I've seen that I can remap keys in System → Preferences → Keyboard → Layout → Options → Alt/Win key behavior, but I have no idea what the choices meta, super and hyper mean. The help button in this dialog doesn't give any specifics about them. I've experimented a little and found that meta seems to have some use, like Win+M = Me menu, or Win+S for the shutdown menu, but for some keys (B, I) it's more like Ctrl (bold, italic). I haven't found any others. What would a useful setting be for a Linux newbie?

    Read the article

  • C++ difference between "char *" and "char * = new char[]"

    - by nashmaniac
    So, if I want to declare an array of characters I can go one of these ways:

        char a[2];
        char * a;
        char * a = new char[2];

    Ignoring the first declaration, the other two use pointers. As far as I know, the third declaration is stored on the heap and is freed using the delete[] operator. Does the second declaration also hold the array on the heap? Does it mean that if something is stored on the heap and not freed, it can be used anywhere in a file, like a variable with file linkage? I tried both the third and second declarations in one function and then used the variable in another, but it didn't work; why? Are there any other differences between the second and third declarations?
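
    A small sketch of the distinction, since the two pointer forms are easy to conflate: the second declares only a pointer and no array at all, while the third allocates an array on the heap; and heap memory you never free isn't "usable anywhere", it is simply leaked, because the pointer variable naming it still obeys ordinary scope rules:

        int main()
        {
            char a[2] = {'h', 'i'};   // automatic ("stack") array, freed at scope exit
            char *p;                  // just a pointer; it names no array yet
            char *q = new char[2];    // heap array: lives until delete[] q
            p = a;                    // a pointer can refer to existing storage
            q[0] = p[1];              // both are used the same way once they point somewhere
            delete[] q;               // new[] pairs with delete[], not plain delete
            return 0;
        }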

    Read the article

  • What is really happening when we change encoding in a string?

    - by Jim Thio
    http://php.net/manual/en/function.mb-convert-encoding.php

    Say I do:

        $encoded = mb_convert_encoding($original);

    That looks simple enough. What I am imagining is the following: $original has a pointer to the way the string is actually encoded, something like a char * kind of thing. And then there are things like what the characters actually encode. It's probably something along the lines of "UTF-64", where each glyph is indeed a character. Now when we do

        $encoded = mb_convert_encoding($original);

    several things can happen:

    - the original internal representation doesn't change; however, it is REINTERPRETED so that the characters that show up differ
    - the original string that it represents doesn't change; however, the ENCODING changes

    Which one is right?
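
    (Two hedged notes: mb_convert_encoding actually requires a target encoding argument, e.g. mb_convert_encoding($original, 'UTF-8', 'ISO-8859-1'); and a PHP string carries no encoding metadata at all; it is a plain byte sequence, so the function returns a new byte sequence spelling the same characters in the target encoding, which is closest to the second interpretation.)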

    Read the article

  • mysql_real_escape_string is giving me errors when I try to add security to my website

    - by Mike
    I tried doing this:

        @ $db = new myConnectDB();
        $beerName = mysql_real_escape_string($beerName);
        $beerID = mysql_real_escape_string($beerID);
        $brewery = mysql_real_escape_string($brewery);
        $style = mysql_real_escape_string($style);
        $userID = mysql_real_escape_string($userID);
        $abv = mysql_real_escape_string($abv);
        $ibu = mysql_real_escape_string($ibu);
        $breweryID = mysql_real_escape_string($breweryID);
        $icon = mysql_real_escape_string($icon);

    I get this error:

        Warning: mysql_real_escape_string() [function.mysql-real-escape-string]: Access denied for user
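
    (A hedged pointer rather than a definitive diagnosis: mysql_real_escape_string needs an open connection made by the mysql_* family; when none exists, PHP tries to open one with default credentials, which is exactly what an "Access denied for user" warning from this function usually means. A connection made through a custom wrapper like myConnectDB does not count unless it calls mysql_connect under the hood.)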

    Read the article

  • A little primer on using TFS with a small team

    - by johndoucette
    The scenario: a small team of 3 developers, mostly in maintenance mode, with traditional ASP.NET, classic ASP, .NET integration services and utilities built on the company's third-party packages, and a bunch of Java-based ColdFusion web applications, all under Visual SourceSafe (VSS). They are about to embark on a huge SharePoint 2010 new-construction project and wanted to use Subversion instead of VSS. TFS was a foreign word and smelled of "high cost" and an "overcomplicated process". Since they had no preconceptions about the old TFS versions ('05 & '08), it was fun explaining how simple it is to install a TFS server and get the ball rolling, with or without all the heavy stuff one sometimes associates with such a huge and powerful application lifecycle management product. So, how does a small team begin using TFS?

    1. Start by using source control and migrate the current VSS source trees into TFS. You can take the latest version or migrate the entire version history. It's up to you whether you want a clean start or need quick access to all the version notes and history of the bits.

    2. Since most shops are mainly in maintenance mode with existing applications, begin using bug workitems for everything. When you receive an issue/bug from your current tracking system, manually enter the workitem in TFS right through Visual Studio. You can automate the integration with the current tracking system later, or replace it entirely. Believe me, this thing is powerful and can handle even the largest of help desks.

    3. With new construction, begin work with requirement and task workitems and follow the traditional sprint-based development lifecycle. Obviously, some minor training will be needed, but don't fear: this is very intuitive, and MSDN has a ton of lesson-based labs and videos.

    4. For the Java developers, use the new Team Explorer Everywhere 2010 plugin (previously known as Teamprise). There is a seamless interface in Eclipse, but also a good command-line utility for other environments such as Dreamweaver.

    5. Wait to fully integrate the whole workitem/project management/testing process until your team is familiar with the integrated workitems for bugs and code. After a while, you will see the team wanting more transparency into the work they are all doing, and naturally everyone will want workitems to help them organize the chaos!

    6. Management will get limited value from the reports until you have a fully blown implementation of project planning, construction, build, deployment and testing. However, there are some basic "bug rate" reports and current backlog listings that can provide good information.

    Some notable explanations of TFS:

    Work Item Tracking and Project Management - A workitem represents the unit of work within the system, which enables tracking of all activities produced by a user, whether a developer, business user, project manager or tester. The properties of a workitem, such as linked changesets (checked-in code), who updated the data and when, and the states and reasons for change, are all transitioned to a data warehouse within TFS for reporting purposes. A workitem can be defined as a "bug", "requirement", "test case" or "change request". Workitems drive the work effort of the individual assigned to them and also play a key role in defining what needs to be done. Workitems are the things the team needs to do to accomplish a goal.
    Test Case Management - Starting with a workitem known as a "test case", a tester (or developer) can now author and manage test cases within a formal test-plan subsystem. Although TFS supports the test case workitem type, there is a new product known as VS Test Professional 2010 which allows a tester to facilitate manual tests, including fast-forwarding steps in the process to arrive at the assertion point quickly. This repeatable process provides quick regression tests and can be conducted by the business user to ensure completeness during UAT. In addition, developers can no longer respond to a bug with the line "cannot reproduce": with every test run, attachments are available, including the recorded session, captured environment configurations and settings, screen shots, IntelliTrace (debugging history) and, in some cases, if the lab manager is being used, a snapshot of the tested environment.

    Version Control - A modern system allowing shared check-in/check-out, excellent merge conflict resolution, shelvesets (personal check-ins), branching/merging visualization, public workspaces, gated check-ins, security hierarchy capabilities, and changeset/workitem tracking. Knowing what was done to the code by any developer has become much easier to picture, making issues easier to resolve.

    Team Build - Automate the compilation process, whether you need it to run whenever a developer checks in code, periodically (such as nightly builds for testers in the morning), or as manual builds to be deployed into production. Each build can run through predetermined tests, perform code analysis to see if the developer conforms to the team standards, and reject the build if either fails.

    Project Portal & Reporting - Provide management with a dashboard offering insight into the project(s): "where are we" at each step of the way, including past iterations and the current burndown rate. Enabling this feature is easy, as it seamlessly interfaces with existing SharePoint implementations.

    Read the article

  • Informed TDD – Kata “To Roman Numerals”

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/28/informed-tdd-ndash-kata-ldquoto-roman-numeralsrdquo.aspx

    In a comment on my article on what I call Informed TDD (ITDD), reader gustav asked how this approach would apply to the kata "To Roman Numerals", and whether ITDD wasn't a violation of TDD's principle of leaving out "advanced topics like mocks". I'd like to respond to his questions with this article; there's more to say than fits into a comment.

    Mocks and TDD

    I don't see how TDD avoids or is opposed to mocks. TDD and mocks are orthogonal. TDD is about process; mocks are about structure and costs. Maybe by moving forward in tiny red+green+refactor steps less need arises for mocks. But then... if the functionality you need to implement requires "expensive" resource access, you can't avoid using mocks, because you don't want to constantly run all your tests against the real resource.

    True, in ITDD mocks seem to be in almost inflationary use. That's not what you usually see in TDD demonstrations. However, there's a reason for that, as I tried to explain. I don't use mocks as proxies for "expensive" resources. Rather, they are stand-ins for functionality not yet implemented. They allow me to get a test green on a high level of abstraction. That way I can move forward in a top-down fashion. But if you think of mocks as "advanced", or if you don't want to use a tool like JustMock, then you don't need to use mocks. You just need to stand the sight of red tests for a little longer ;-) Let me show you what I mean by that by doing a kata.

    ITDD for "To Roman Numerals"

    gustav asked for the kata "To Roman Numerals". I won't explain the requirements again; you can find descriptions and TDD demonstrations all over the internet, like this one from Corey Haines. Now here is how I would do this kata differently.

    1. Analyse

    A demonstration of TDD should never skip the analysis phase. It should be made explicit. The requirements should be formalized and acceptance test cases should be compiled. "Formalization" in this case to me means describing the API of the required functionality. "[D]esign a program to work with Roman numerals", as written in this "requirement document", is not enough to start software development. Coding should only begin if the interface between the "system under development" and its context is clear.

    If this interface is not readily recognizable from the requirements, it has to be developed first. Exploration of interface alternatives might be in order. It might be necessary to show several interface mock-ups to the customer, even if that's your fellow developer. Designing the interface is a task of its own. It should not be mixed with implementing the required functionality behind the interface. Unfortunately, though, this happens quite often in TDD demonstrations: TDD is used to explore the API and implement it at the same time. To me that's a violation of the Single Responsibility Principle (SRP), which should hold not only for software functional units but also for tasks or activities.

    In the case of this kata the API fortunately is obvious. Just one function is needed: string ToRoman(int arabic). And it lives in a class ArabicRomanConversions.

    Now what about acceptance test cases? There are hardly any stated in the kata descriptions. Roman numerals are explained, but no specific test cases from the point of view of a customer. So I just "invent" some acceptance test cases by picking Roman numerals from a Wikipedia article. They are supposed to be just "typical examples" without special meaning. Given the acceptance test cases, I then try to develop an understanding of the problem domain. I'll spare you that. The domain is trivial and is explained in almost all kata descriptions. How Roman numerals are built is not difficult to understand. What's more difficult, though, might be to find an efficient solution to convert into them automatically.

    2. Solve

    The usual TDD demonstration skips a solution-finding phase. Like the interface exploration, it's mixed in with the implementation. But I don't think this is how it should be done. I even think this is not how it really works for the people demonstrating TDD. They're simplifying their true software development process because they want to show a streamlined TDD process. I doubt this is helping anybody.

    Before you code, you'd better have a plan for what to code. This does not mean you have to do "Big Design Up-Front". It just means: have a clear picture of the logical solution in your head before you start to build a physical solution (code). Evidently such a solution can only be as good as your understanding of the problem. If that's limited, your solution will be limited, too. Fortunately, in the case of this kata your understanding does not need to be limited, so the logical solution does not need to be limited or preliminary or tentative. That does not mean you need to know every line of code in advance. It just means you know the rough structure of your implementation beforehand, because it should mirror the process described by the logical or conceptual solution.

    Here's my solution approach: The arabic "encoding" of numbers represents them as an ordered set of powers of 10. Each digit is a factor to multiply a power of ten with. The "encoding" 123 is the short form for a set like this: {1*10^2, 2*10^1, 3*10^0}. And the number is the sum of the set members.

    The roman "encoding" is different. There is no base (like 10 for arabic numbers); there are just digits of different value, and they have to be written in descending order. The "encoding" XVI is short for [10, 5, 1]. And the number is still the sum of the members of this list. The roman "encoding" thus is simpler than the arabic: each "digit" can be taken at face value; no multiplication with a base is required.

    But what about IV, which looks like a contradiction to the above rule? It is not, if you accept roman "digits" not being limited to single characters. Usually I, V, X, L, C, D, M are viewed as "digits", and IV, IX etc. are viewed as nuisances preventing a simple solution. All looks different, though, once IV, IX etc. are taken as "digits". Then MCMLIV is just a sum: M+CM+L+IV, which is 1000+900+50+4. Whereas before it would have been understood as M-C+M+L-I+V, which is more difficult because here some "digits" get subtracted. Here's the list of roman "digits" with their values:

    {1, I}, {4, IV}, {5, V}, {9, IX}, {10, X}, {40, XL}, {50, L}, {90, XC}, {100, C}, {400, CD}, {500, D}, {900, CM}, {1000, M}

    Since I take IV, IX etc. as "digits", translating an arabic number becomes trivial. I just need to find the values of the roman "digits" making up the number; e.g. 1954 is made up of 1000, 900, 50, and 4. I call those "digits" factors. If I move from the highest factor (M=1000) to the lowest (I=1), then translation is a two-phase process:

    1. Find all the factors
    2. Translate the factors found
    3. Compile the roman representation

    Translation is just a look-up. Finding, though, needs some calculation:

    1. Find the highest remaining factor fitting in the value
    2. Remember it and subtract it from the value
    3. Repeat with the remaining value and remaining factors

    Please note: this is just an algorithm. It's not code, even though it might be close. Being so close to code in my solution approach is due to the triviality of the problem. In more realistic examples the conceptual solution would be on a higher level of abstraction.

    With this solution in hand I finally can do what TDD advocates: find and prioritize test cases. As I can see from the small process description above, there are these aspects to test:

    - Test the translation
    - Test the compilation
    - Test finding the factors

    Testing the translation primarily means checking whether the map of factors and digits is comprehensive. That's simple, even though it might be tedious. Testing the compilation is trivial. Testing factor finding, though, is a tad more complicated. I can think of several steps:

    1. First check whether an arabic number equal to a factor is processed correctly (e.g. 1000=M).
    2. Then check whether an arabic number consisting of two consecutive factors (e.g. 1900=[M,CM]) is processed correctly.
    3. Then check whether a number consisting of the same factor twice is processed correctly (e.g. 2000=[M,M]).
    4. Finally check whether an arabic number consisting of non-consecutive factors (e.g. 1400=[M,CD]) is processed correctly.

    I feel I can start an implementation now. If something becomes more complicated than expected, I can slow down and repeat this process.

    3. Implement

    First I write a test for the acceptance test cases. It's red because there's no implementation, not even of the API. That's in conformance with "TDD lore", I'd say. Next I implement the API. The acceptance test now is formally correct, but still red, of course. This will not change even now that I zoom in, because my goal is not to satisfy these tests most quickly, but to implement my solution in a stepwise manner. That I do by "faking" it: I just "assume" three functions to represent the transformation process of my solution. My hypothesis is that those three functions in conjunction produce correct results on the API level. I just have to implement them correctly. That's what I'm trying now, one by one.

    I start with a simple "detail function": Translate(). And I start with all the test cases in the obvious equivalence partition. As you can see, I dare to test a private method. Yes, that's a white-box test. But as you'll see, it won't make my tests brittle. It serves a purpose right here and now: it lets me focus on getting one aspect of my solution right. The implementation to satisfy the test is as simple as possible, right how TDD wants me to do it: KISS.

    Now for the second equivalence partition: translating multiple factors. (It's a pattern: if you need to do something repeatedly, separate the tests for doing it once and doing it multiple times.) In this partition I just need a single test case, I guess. Stepping up from a single translation to multiple translations is no rocket science. Usually I would have implemented the final code right away; splitting it in two steps is just for "educational purposes" here. How small your implementation steps are is a matter of your programming competency. Some "see" the final code right away before their mental eye; others need to work their way towards it. Having two tests I find more important.

    Now for the next low-hanging fruit: compilation. It's even simpler than translation. A single test is enough, I guess. And normally I would not even have bothered to write that one, because the implementation is so simple: I don't need to test .NET framework functionality. But again: if it serves the educational purpose...

    Finally the most complicated part of the solution: finding the factors. There are several equivalence partitions, but still I decide to write just a single test, since the structure of the test data is the same for all partitions. Again, I'm faking the implementation first: I focus on just the first test case. No looping yet. Faking lets me stay on a high level of abstraction. I can write down the implementation of the solution without bothering myself with details of how to actually accomplish the feat. That's left for a drill-down with a test of the fake function. There are two main equivalence partitions, I guess: either the first factor is appropriate, or some later one is. The implementation seems easy; both test cases are green. (Of course this only works on the premise that there's always a matching factor, which is the case since the smallest factor is 1.) And the first of the equivalence partitions on the higher level also is satisfied. Great, I can move on.

    Now for more than a single factor: interestingly, not just one test becomes green now, but all of them. Great! You might say, then I must not have done the simplest thing possible. And I would reply: I don't care. I did the most obvious thing. But I also find this loop very simple; even simpler than a recursion, of which I had thought briefly during the problem-solving phase. And by the way: the acceptance tests also went green. Mission accomplished, at least functionality-wise.

    Now I have to tidy things up a bit. TDD calls for refactoring. Not much refactoring is needed, because I wrote the code in top-down fashion. I faked it until I made it. I endured red tests on higher levels while lower levels weren't perfected yet. But this way I saved myself from refactoring tediousness. At the end, though, some refactoring is required; but maybe in a different way than you would expect. That's why I rather call it "cleanup".

    First I remove duplication. There are two places where factors are defined: in Translate() and in Find_factors(). So I factor the map out into a class constant, which leads to a small conversion in Find_factors(). And now for the big cleanup: I remove all tests of private methods. They are scaffolding tests to me. They only have temporary value. They are brittle. Only acceptance tests need to remain. However, I carry over the single-"digit" tests from Translate() to the acceptance test. I find them valuable to keep, since the other acceptance tests only exercise a subset of all roman "digits". Test coverage as reported by NCrunch is 100%.

    Reflexion

    Is this the smallest possible code base for this kata? Surely not. You'll find more concise solutions on the internet. But LOC are of relatively little concern, as long as I can understand the code quickly. So-called "elegant" code, however, often is not easy to understand. The same goes for KISS code, especially if left unrefactored, as is often the case.

    That's why I progressed from requirements to final code the way I did. I first understood and solved the problem on a conceptual level. Then I implemented it top-down according to my design. I also could have implemented it bottom-up, since I knew some of the bottom of the solution: the leaves of the functional decomposition tree. Where things became fuzzy, since the design did not cover any more details, as with Find_factors(), I repeated the process in the small, so to speak: fake some top level and endure red high-level tests while first solving a simpler problem.

    Using scaffolding tests (to be thrown away at the end) brought two advantages:

    - Encapsulation of the implementation details was not compromised. Naturally, private methods could stay private. I did not need to make them internal or public just to be able to test them.
    - I was able to write focused tests for small aspects of the solution. No need to test everything through the solution root, the API.

    The bottom line thus for me is: Informed TDD produces cleaner code in a systematic way. It conforms to core principles of programming: the Single Responsibility Principle and/or Separation of Concerns. Distinct roles in development – being a researcher, being an engineer, being a craftsman – are represented as different phases. First find out what there is. Then devise a solution. Then code the solution; manifest the solution in code. Writing tests first is a good practice, but it should not be taken as dogma, and above all it should not be overloaded with purposes. And finally: moving from top to bottom through a design produces refactored code right away. Clean code thus is almost inevitable, and not left to a refactoring step at the end, which is often skipped for various reasons.

    PS: Yes, I have done this kata several times. But that has only an impact on the time needed for phases 1 and 2. I won't skip them because of that. And there are no shortcuts during implementation because of that.
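
    By way of summary, a compact C++ sketch of the two-phase process derived above (the article's own implementation is C#): greedily find the factors from largest to smallest, translating each through the digit map as you go:

        #include <iostream>
        #include <string>
        #include <utility>
        #include <vector>

        // Scan the factor table from largest to smallest, subtracting each
        // factor that still fits and appending its roman "digit".
        std::string ToRoman(int arabic) {
            static const std::vector<std::pair<int, std::string>> factors = {
                {1000, "M"}, {900, "CM"}, {500, "D"}, {400, "CD"},
                {100, "C"},  {90, "XC"},  {50, "L"},  {40, "XL"},
                {10, "X"},   {9, "IX"},   {5, "V"},   {4, "IV"}, {1, "I"}};
            std::string roman;
            for (const auto& [value, digit] : factors)
                while (arabic >= value) { arabic -= value; roman += digit; }
            return roman;
        }

        int main() { std::cout << ToRoman(1954) << "\n"; }  // MCMLIV, as in the text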

    Read the article

  • Obtaining a HBITMAP/HICON from D2D Bitmap

    - by Tom
    Is there any way to obtain an HBITMAP or HICON from an ID2D1Bitmap* using Direct2D? I am using the following function to load a bitmap: http://msdn.microsoft.com/en-us/library/windows/desktop/dd756686%28v=vs.85%29.aspx The reason I ask is that I am creating my level editor tool and would like to draw a PNG image on a standard button control. I know that you can do this using GDI+:

        HBITMAP hBitmap;
        Gdiplus::Bitmap b(L"a.png");
        b.GetHBITMAP(NULL, &hBitmap);
        SendMessage(GetDlgItem(hDlg, IDC_BUTTON1), BM_SETIMAGE, IMAGE_BITMAP, (LPARAM)hBitmap);

    Is there any equivalent, simple solution using Direct2D? If possible, I would like to render multiple PNG files (some with transparency) on a single button.
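
    (A hedged pointer: Direct2D itself exposes no GetHBITMAP equivalent. The usual routes are rendering into a WIC bitmap target and copying its pixels into a DIB, or querying the render target for ID2D1GdiInteropRenderTarget and using its GetDC/ReleaseDC pair to blit into a GDI memory DC from which an HBITMAP can be made.)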

    Read the article
