Search Results

Search found 44734 results on 1790 pages for 'model based design'.


  • PDC and Tech-Ed Europe Slides and Code

    - by Stephen Walther
    I spent close to three weeks on the road giving talks at Tech-Ed Europe (Berlin), PDC (Los Angeles), and the Los Angeles Code Camp (Los Angeles). I got to talk about two topics that I am very passionate about: ASP.NET MVC and Ajax. Thanks everyone for coming to all my talks! At PDC, I announced all of the new features of our ASP.NET Ajax Library. In particular, I made five big announcements: ASP.NET Ajax Library Beta Released – You can download the beta from Ajax.CodePlex.com ASP.NET Ajax Library includes the AJAX Control Toolkit – You can use the Ajax Control Toolkit with ASP.NET MVC. ASP.NET Ajax Library being contributed to the CodePlex Foundation – ASP.NET Ajax is the founding project for the CodePlex Foundation (see CodePlex.org) ASP.NET Ajax Library is receiving full product support – Complain to Microsoft Customer Service at midnight on Christmas ASP.NET Ajax Library supports jQuery integration – Use (almost) all of the Ajax Control Toolkit controls in jQuery For more details on the Ajax announcements, see James Senior’s blog entry on the Ajax announcements at: http://jamessenior.com/post/News-on-the-ASPNET-Ajax-Library.aspx In my MVC talks, I discussed the new features being introduced with ASP.NET MVC 2. Here are three of my favorite new features: Client Validation – Client validation done the right way. Do your validation in your model and let the validation bubble up to JavaScript code automatically. Areas – Divide your ASP.NET MVC application into sub-applications. Great for managing both medium and large projects. RenderAction() – Finally, a way to add content to master pages and multiple pages without doing anything strange or twisted. There are demos of all of these features in the MVC downloads below. Here are the power point and code from all of the talks: PDC – Introducing the New ASP.NET Ajax Library PDC – ASP.NET MVC: The New Stuff Tech-Ed Europe - What's New in Microsoft ASP.NET Model-View-Controller Tech-Ed Europe - Microsoft ASP.NET AJAX: Taking AJAX to the Next Level

    Read the article

  • Database version control resources

    - by Wes McClure
    In the process of creating my own DB VCS tool tsqlmigrations.codeplex.com I ran into several good resources to help guide me along the way in reviewing existing offerings and in concepts that would be needed in a good DB VCS.  This is my list of helpful links that others can use to understand some of the concepts and some of the tools in existence.  In the next few posts I will try to explain how I used these to create TSqlMigrations.   Blogs entries Three rules for database work - K. Scott Allen http://odetocode.com/blogs/scott/archive/2008/01/30/three-rules-for-database-work.aspx Versioning databases - the baseline http://odetocode.com/blogs/scott/archive/2008/01/31/versioning-databases-the-baseline.aspx Versioning databases - change scripts http://odetocode.com/blogs/scott/archive/2008/02/02/versioning-databases-change-scripts.aspx Versioning databases - views, stored procedures and the like http://odetocode.com/blogs/scott/archive/2008/02/02/versioning-databases-views-stored-procedures-and-the-like.aspx Versioning databases - branching and merging http://odetocode.com/blogs/scott/archive/2008/02/03/versioning-databases-branching-and-merging.aspx Evolutionary Database Design - Martin Fowler http://martinfowler.com/articles/evodb.html Are database migration frameworks worth the effort? - Good challenges http://www.ridgway.co.za/archive/2009/01/03/are-database-migration-frameworks-worth-the-effort.aspx Continuous Integration (in general) http://martinfowler.com/articles/continuousIntegration.html http://martinfowler.com/articles/originalContinuousIntegration.html Is Your Database Under Version Control? http://www.codinghorror.com/blog/archives/000743.html 11 Tools for Database Versioning http://secretgeek.net/dbcontrol.asp How to do database source control and builds http://mikehadlow.blogspot.com/2006/09/how-to-do-database-source-control-and.html .Net Database Migration Tool Roundup http://flux88.com/blog/net-database-migration-tool-roundup/ Books Book Description Refactoring Databases: Evolutionary Database Design Martin Fowler signature series on refactoring databases. Book site: http://databaserefactoring.com/ Recipes for Continuous Database Integration: Evolutionary Database Development (Digital Short Cut) A good question/answer layout of common problems and solutions with database version control. http://www.informit.com/store/product.aspx?isbn=032150206X

    Read the article

  • What would the optimal Emacs keyboard look like?

    - by Thorsten
    Emacs is a historic piece of software. It promises outstanding productivity for keyboard wizards who really want to explore its power. The effective use of the keyboard is key to Emacs productivity, but keyboard hardware has changed a lot since the old days, so many modern Emacs users are struggling with weird 'Emacs chords' on their Windows/IBM keyboards. If one were to design a keyboard entirely focused on the needs of Emacs users - what would it look like? I assume the following: the standard keybindings of Emacs are accepted, and redefinitions are rare exceptions; we are only talking about QWERTY keyboards (including regional variations like QWERTZ); we are only considering users applying the (10-finger) touch typing system. The question is not only about remapping the keys of existing keyboards (perfectly possible on Linux with .xmodmap and on Windows with KeyTweak, for example) - think about the perfect keyboard hardware you would like to see on your desk while hacking in Emacs all day long. Please tag your answer with your locale, i.e. [en] or [de], so that everybody knows what regional layout you are using. I will answer my own question below, to show you the results of some investigation and experimentation, but I really would like to read about different approaches and their pros and cons. The emacswiki has a somewhat related page with a lot of links (http://www.emacswiki.org/emacs/RepeatedStrainInjury), but here it's about optimal keyboard design for maximal productivity, with avoidance of RSI as a byproduct.

    Read the article

  • SQLAuthority News – Whitepaper – SQL Azure vs. SQL Server

    - by pinaldave
    SQL Server and SQL Azure are two Microsoft products which go almost hand in hand. There are plenty of misconceptions about SQL Azure. I have seen enough developers not planning for SQL Azure because they are not sure what exactly they are getting into. Some are confused, thinking Azure is not powerful enough. I disagree, and strongly urge all of you to read the following white paper written and published by Microsoft. SQL Azure vs. SQL Server by Dinakar Nethi, Niraj Nagrani SQL Azure Database is a cloud-based relational database service from Microsoft. SQL Azure provides relational database functionality as a utility service. Cloud-based database solutions such as SQL Azure can provide many benefits, including rapid provisioning, cost-effective scalability, high availability, and reduced management overhead. This paper compares SQL Azure Database with SQL Server in terms of logical administration vs. physical administration, provisioning, Transact-SQL support, data storage, SSIS, along with other features and capabilities. The contents of the white paper are as follows: Similarities and Differences; Logical Administration vs. Physical Administration; Provisioning; Transact-SQL Support; Features and Types; Key Benefits of the Service; Self-Managing; High Availability; Scalability; Familiar Development Model; Relational Data Model. The above summary text is taken from the white paper itself. Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • How do I fix "bzr: ERROR: Unable to determine your name. "?

    - by Daniel
    I am trying to quickly create my first app and am getting gtk errors when I try to run or create an application. Here is a copy of what I executed and what results I got: daniel@laptop:~/PyDevelopment$ quickly create ubuntu-application app001 Creating project directory app001 Creating bzr repository and committing Launching your newly created project! /usr/lib/python2.7/dist-packages/gi/overrides/Gtk.py:391: Warning: g_object_set_property: construct property "type" for object `Window' can't be set after construction Gtk.Window.__init__(self, type=type, **kwds) /usr/lib/python2.7/dist-packages/gi/overrides/Gtk.py:391: Warning: g_object_set_property: construct property "type" for object `App001Window' can't be set after construction Gtk.Window.__init__(self, type=type, **kwds) Congrats, your new project is setup! cd /home/daniel/PyDevelopment/app001/ to start hacking. daniel@laptop:~/PyDevelopment$ cd app001 daniel@laptop:~/PyDevelopment/app001$ quickly design daniel@laptop:~/PyDevelopment/app001$ quickly rub ERROR: No rub command found in template ubuntu-application. Candidate commands are: add, commands, configure, create, debug, design, edit, getstarted, help, license, package, quickly, release, run, save, share, submitubuntu, test, tutorial, upgrade daniel@laptop:~/PyDevelopment/app001$ quickly run /usr/lib/python2.7/dist-packages/gi/overrides/Gtk.py:391: Warning: g_object_set_property: construct property "type" for object `Window' can't be set after construction Gtk.Window.__init__(self, type=type, **kwds) /usr/lib/python2.7/dist-packages/gi/overrides/Gtk.py:391: Warning: g_object_set_property: construct property "type" for object `App001Window' can't be set after construction Gtk.Window.__init__(self, type=type, **kwds) daniel@laptop:~/PyDevelopment/app001$ quickly package .......Ubuntu packaging created in debian/ ....... ---------------------------------- Command returned some ERRORS: ---------------------------------- bzr: ERROR: Unable to determine your name. ---------------------------------- ERROR: can't create or update ubuntu package ERROR: package command failed Aborting
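    A likely fix - assuming the error simply means bzr has no committer identity configured, which quickly needs in order to commit the generated packaging - is to set your identity once with bzr whoami and then re-run the package command (the name and e-mail below are placeholders):

        daniel@laptop:~/PyDevelopment/app001$ bzr whoami "Daniel <daniel@example.com>"
        daniel@laptop:~/PyDevelopment/app001$ quickly package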

    Read the article

  • Rush…iPad Pre-order announced officially

    - by samsudeen
    Apple’s latest product, the iPad, is now available for pre-order online. You can place your pre-order through the Apple online store or reserve one at any of the Apple retail stores. The iPad may have received mixed reactions when it was announced last month, but Apple knows how to sell; it is believed that more than 50,000 pre-orders have already been placed. People will have to wait another 3 weeks to get the actual device, as the launch date is 3rd of April in the US. The initial model released will be available only with Wi-Fi, and the planned 3G model is expected to be released by the end of April. So how much does it cost to get this little marvel? The basic iPad (16 GB Wi-Fi) will cost you $499. If you are a serious Apple fan and plan to buy an iPad, you'd better place your order now. There are already rumours that the initial demand may outstrip supply. The pre-order is limited to the US.

    Read the article

  • What code smell best describes this code?

    - by Paul Stovell
    Suppose you have this code in a class:

        private DataContext _context;

        public Customer[] GetCustomers()
        {
            GetContext();
            return _context.Customers.ToArray();
        }

        public Order[] GetOrders()
        {
            GetContext();
            return _context.Orders.ToArray();
        }

        // For the sake of this example, a new DataContext is *required*
        // for every public method call
        private void GetContext()
        {
            if (_context != null)
            {
                _context.Dispose();
            }
            _context = new DataContext();
        }

    This code isn't thread-safe - if two calls to GetOrders/GetCustomers are made at the same time from different threads, they may end up using the same context, or the context could be disposed while being used. Even if this bug didn't exist, however, it still "smells" like bad code. A much better design would be for GetContext to always return a new instance of DataContext and to get rid of the private field, and to dispose of the instance when done. Changing from an inappropriate private field to a local variable feels like a better solution. I've looked over the code smell lists and can't find one that describes this. In the past I've thought of it as temporal coupling, but the Wikipedia description suggests that's not the term: "Temporal coupling - when two actions are bundled together into one module just because they happen to occur at the same time." That page discusses temporal coupling, but the example is the public API of a class, while my question is about the internal design. Does this smell have a name? Or is it simply "buggy code"?
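    For comparison, here is a rough sketch of the alternative design described above (assuming the same DataContext type with Customers and Orders tables): each public method creates, uses, and disposes its own context, so there is no shared field left to race on.

        public Customer[] GetCustomers()
        {
            // Each call owns its own context; 'using' guarantees disposal even on exceptions.
            using (var context = new DataContext())
            {
                return context.Customers.ToArray();
            }
        }

        public Order[] GetOrders()
        {
            using (var context = new DataContext())
            {
                return context.Orders.ToArray();
            }
        }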

    Read the article

  • Using a "white list" for extracting terms for Text Mining, Part 2

    - by [email protected]
    In my last post, we set the groundwork for extracting specific tokens from a white list using a CTXRULE index. In this post, we will populate a table with the extracted tokens and produce a case table suitable for clustering with Oracle Data Mining. Our corpus of documents will be stored in a database table that is defined as

        create table documents(id NUMBER, text VARCHAR2(4000));

    However, any suitable Oracle Text-accepted data type can be used for the text. We then create a table to contain the extracted tokens. The id column contains the unique identifier (or case id) of the document. The token column contains the extracted token. Note that a given document may have many tokens, so there will be one row per token for a given document.

        create table extracted_tokens (id NUMBER, token VARCHAR2(4000));

    The next step is to iterate over the documents, extract the matching tokens using the index, and insert them into our token table. We use the MATCHES function for matching the query_string from my_thesaurus_rules with the text.

        DECLARE
            cursor c2 is
              select id, text
              from documents;
        BEGIN
            for r_c2 in c2 loop
               insert into extracted_tokens
                 select r_c2.id id, main_term token
                 from my_thesaurus_rules
                 where matches(query_string, r_c2.text)>0;
            end loop;
        END;

    Now that we have the tokens, we can compute the term frequency - inverse document frequency (TF-IDF) for each token of each document.

        create table extracted_tokens_tfidf as
          with num_docs as (select count(distinct id) doc_cnt
                            from extracted_tokens),
               tf       as (select a.id, a.token,
                                   a.token_cnt/b.num_tokens token_freq
                            from
                              (select id, token, count(*) token_cnt
                               from extracted_tokens
                               group by id, token) a,
                              (select id, count(*) num_tokens
                               from extracted_tokens
                               group by id) b
                            where a.id=b.id),
               doc_freq as (select token, count(*) overall_token_cnt
                            from extracted_tokens
                            group by token)
          select tf.id, tf.token,
                 token_freq * ln(doc_cnt/df.overall_token_cnt) tf_idf
          from num_docs,
               tf,
               doc_freq df
          where df.token=tf.token;

    From the WITH clause, the num_docs query simply counts the number of documents in the corpus. The tf query computes the term (token) frequency by counting the number of times each token appears in a document and dividing that by the number of tokens found in the document. The doc_freq query counts the number of times the token appears overall in the corpus. In the SELECT clause, we compute the tf_idf. Next, we create the nested table required to produce one record per case, where a case corresponds to an individual document. Here, we COLLECT all the tokens for a given document into the nested column extracted_tokens_tfidf_1.

        CREATE TABLE extracted_tokens_tfidf_nt
                     NESTED TABLE extracted_tokens_tfidf_1
                         STORE AS extracted_tokens_tfidf_tab AS
                     select id,
                            cast(collect(DM_NESTED_NUMERICAL(token,tf_idf)) as DM_NESTED_NUMERICALS) extracted_tokens_tfidf_1
                     from extracted_tokens_tfidf
                     group by id;

    To build the clustering model, we create a settings table and then insert the various settings. Most notable are the number of clusters (20), using cosine distance which is better for text, turning off auto data preparation since the values are ready for mining, the number of iterations (20) to get a better model, and the split criterion of size for clusters that are roughly balanced in number of cases assigned.

        CREATE TABLE km_settings (setting_name VARCHAR2(30), setting_value VARCHAR2(30));

        BEGIN
          INSERT INTO km_settings (setting_name, setting_value)
            VALUES (dbms_data_mining.clus_num_clusters, 20);
          INSERT INTO km_settings (setting_name, setting_value)
            VALUES (dbms_data_mining.kmns_distance, dbms_data_mining.kmns_cosine);
          INSERT INTO km_settings (setting_name, setting_value)
            VALUES (dbms_data_mining.prep_auto, dbms_data_mining.prep_auto_off);
          INSERT INTO km_settings (setting_name, setting_value)
            VALUES (dbms_data_mining.kmns_iterations, 20);
          INSERT INTO km_settings (setting_name, setting_value)
            VALUES (dbms_data_mining.kmns_split_criterion, dbms_data_mining.kmns_size);
          COMMIT;
        END;

    With this in place, we can now build the clustering model.

        BEGIN
            DBMS_DATA_MINING.CREATE_MODEL(
              model_name          => 'TEXT_CLUSTERING_MODEL',
              mining_function     => dbms_data_mining.clustering,
              data_table_name     => 'extracted_tokens_tfidf_nt',
              case_id_column_name => 'id',
              settings_table_name => 'km_settings');
        END;

    To generate cluster names from this model, check out my earlier post on that topic.

    Read the article

  • exporting bind and keyframe bone poses from blender to use in OpenGL

    - by SaldaVonSchwartz
    I'm having a hard time trying to understand how exactly Blender's concept of bone transforms maps to the usual math of skinning (which I'm implementing in an OpenGL-based engine of sorts). Or I'm missing out something in the math.. It's gonna be long, but here's as much background as I can think of. First, a few notes and assumptions: I'm using column-major order and multiply from right to left. So for instance, vertex v transformed by matrix A and then further transformed by matrix B would be: v' = BAv. This also means whenever I export a matrix from blender through python, I export it (in text format) in 4 lines, each representing a column. This is so I can then I can read them back into my engine like this: if (fscanf(fileHandle, "%f %f %f %f", &skeleton.joints[currentJointIndex].inverseBindTransform.m[0], &skeleton.joints[currentJointIndex].inverseBindTransform.m[1], &skeleton.joints[currentJointIndex].inverseBindTransform.m[2], &skeleton.joints[currentJointIndex].inverseBindTransform.m[3])) { if (fscanf(fileHandle, "%f %f %f %f", &skeleton.joints[currentJointIndex].inverseBindTransform.m[4], &skeleton.joints[currentJointIndex].inverseBindTransform.m[5], &skeleton.joints[currentJointIndex].inverseBindTransform.m[6], &skeleton.joints[currentJointIndex].inverseBindTransform.m[7])) { if (fscanf(fileHandle, "%f %f %f %f", &skeleton.joints[currentJointIndex].inverseBindTransform.m[8], &skeleton.joints[currentJointIndex].inverseBindTransform.m[9], &skeleton.joints[currentJointIndex].inverseBindTransform.m[10], &skeleton.joints[currentJointIndex].inverseBindTransform.m[11])) { if (fscanf(fileHandle, "%f %f %f %f", &skeleton.joints[currentJointIndex].inverseBindTransform.m[12], &skeleton.joints[currentJointIndex].inverseBindTransform.m[13], &skeleton.joints[currentJointIndex].inverseBindTransform.m[14], &skeleton.joints[currentJointIndex].inverseBindTransform.m[15])) { I'm simplifying the code I show because otherwise it would make things unnecessarily harder (in the context of my question) to explain / follow. Please refrain from making remarks related to optimizations. This is not final code. Having said that, if I understand correctly, the basic idea of skinning/animation is: I have a a mesh made up of vertices I have the mesh model-world transform W I have my joints, which are really just transforms from each joint's space to its parent's space. I'll call these transforms Bj meaning matrix which takes from joint j's bind pose to joint j-1's bind pose. For each of these, I actually import their inverse to the engine, Bj^-1. I have keyframes each containing a set of current poses Cj for each joint J. These are initially imported to my engine in TQS format but after (S)LERPING them I compose them into Cj matrices which are equivalent to the Bjs (not the Bj^-1 ones) only that for the current spacial configurations of each joint at that frame. Given the above, the "skeletal animation algorithm is" On each frame: check how much time has elpased and compute the resulting current time in the animation, from 0 meaning frame 0 to 1, meaning the end of the animation. (Oh and I'm looping forever so the time is mod(total duration)) for each joint: 1 -calculate its world inverse bind pose, that is Bj_w^-1 = Bj^-1 Bj-1^-1 ... B0^-1 2 -use the current animation time to LERP the componets of the TQS and come up with an interpolated current pose matrix Cj which should transform from the joints current configuration space to world space. 
Similar to what I did to get the world version of the inverse bind poses, I come up with the joint's world current pose, Cj_w = C0 C1 ... Cj 3 -now that I have world versions of Bj and Cj, I store this joint's world- skinning matrix K_wj = Cj_w Bj_w^-1. The above is roughly implemented like so: - (void)update:(NSTimeInterval)elapsedTime { static double time = 0; time = fmod((time + elapsedTime),1.); uint16_t LERPKeyframeNumber = 60 * time; uint16_t lkeyframeNumber = 0; uint16_t lkeyframeIndex = 0; uint16_t rkeyframeNumber = 0; uint16_t rkeyframeIndex = 0; for (int i = 0; i < aClip.keyframesCount; i++) { uint16_t keyframeNumber = aClip.keyframes[i].number; if (keyframeNumber <= LERPKeyframeNumber) { lkeyframeIndex = i; lkeyframeNumber = keyframeNumber; } else { rkeyframeIndex = i; rkeyframeNumber = keyframeNumber; break; } } double lTime = lkeyframeNumber / 60.; double rTime = rkeyframeNumber / 60.; double blendFactor = (time - lTime) / (rTime - lTime); GLKMatrix4 bindPosePalette[aSkeleton.jointsCount]; GLKMatrix4 currentPosePalette[aSkeleton.jointsCount]; for (int i = 0; i < aSkeleton.jointsCount; i++) { F3DETQSType& lPose = aClip.keyframes[lkeyframeIndex].skeletonPose.jointPoses[i]; F3DETQSType& rPose = aClip.keyframes[rkeyframeIndex].skeletonPose.jointPoses[i]; GLKVector3 LERPTranslation = GLKVector3Lerp(lPose.t, rPose.t, blendFactor); GLKQuaternion SLERPRotation = GLKQuaternionSlerp(lPose.q, rPose.q, blendFactor); GLKVector3 LERPScaling = GLKVector3Lerp(lPose.s, rPose.s, blendFactor); GLKMatrix4 currentTransform = GLKMatrix4MakeWithQuaternion(SLERPRotation); currentTransform = GLKMatrix4Multiply(currentTransform, GLKMatrix4MakeTranslation(LERPTranslation.x, LERPTranslation.y, LERPTranslation.z)); currentTransform = GLKMatrix4Multiply(currentTransform, GLKMatrix4MakeScale(LERPScaling.x, LERPScaling.y, LERPScaling.z)); if (aSkeleton.joints[i].parentIndex == -1) { bindPosePalette[i] = aSkeleton.joints[i].inverseBindTransform; currentPosePalette[i] = currentTransform; } else { bindPosePalette[i] = GLKMatrix4Multiply(aSkeleton.joints[i].inverseBindTransform, bindPosePalette[aSkeleton.joints[i].parentIndex]); currentPosePalette[i] = GLKMatrix4Multiply(currentPosePalette[aSkeleton.joints[i].parentIndex], currentTransform); } aSkeleton.skinningPalette[i] = GLKMatrix4Multiply(currentPosePalette[i], bindPosePalette[i]); } } At this point, I should have my skinning palette. So on each frame in my vertex shader, I do: uniform mat4 modelMatrix; uniform mat4 projectionMatrix; uniform mat3 normalMatrix; uniform mat4 skinningPalette[6]; attribute vec4 position; attribute vec3 normal; attribute vec2 tCoordinates; attribute vec4 jointsWeights; attribute vec4 jointsIndices; varying highp vec2 tCoordinatesVarying; varying highp float lIntensity; void main() { vec3 eyeNormal = normalize(normalMatrix * normal); vec3 lightPosition = vec3(0., 0., 2.); lIntensity = max(0.0, dot(eyeNormal, normalize(lightPosition))); tCoordinatesVarying = tCoordinates; vec4 skinnedVertexPosition = vec4(0.); for (int i = 0; i < 4; i++) { skinnedVertexPosition += jointsWeights[i] * skinningPalette[int(jointsIndices[i])] * position; } gl_Position = projectionMatrix * modelMatrix * skinnedVertexPosition; } The result: The mesh parts that are supposed to animate do animate and follow the expected motion, however, the rotations are messed up in terms of orientations. That is, the mesh is not translated somewhere else or scaled in any way, but the orientations of rotations seem to be off. 
So a few observations: In the above shader notice I actually did not multiply the vertices by the mesh modelMatrix (the one which would take them to model or world or global space, whichever you prefer, since there is no parent to the mesh itself other than "the world") until after skinning. This is contrary to what I implied in the theory: if my skinning matrix takes vertices from model to joint and back to model space, I'd think the vertices should already be premultiplied by the mesh transform. But if I do so, I just get a black screen. As far as exporting the joints from Blender, my python script exports for each armature bone in bind pose, it's matrix in this way: def DFSJointTraversal(file, skeleton, jointList): for joint in jointList: poseJoint = skeleton.pose.bones[joint.name] jointTransform = poseJoint.matrix.inverted() file.write('Joint ' + joint.name + ' Transform {\n') for col in jointTransform.col: file.write('{:9f} {:9f} {:9f} {:9f}\n'.format(col[0], col[1], col[2], col[3])) DFSJointTraversal(file, skeleton, joint.children) file.write('}\n') And for current / keyframe poses (assuming I'm in the right keyframe): def exportAnimations(filepath): # Only one skeleton per scene objList = [object for object in bpy.context.scene.objects if object.type == 'ARMATURE'] if len(objList) == 0: return elif len(objList) > 1: return #raise exception? dialog box? skeleton = objList[0] jointNames = [bone.name for bone in skeleton.data.bones] for action in bpy.data.actions: # One animation clip per action in Blender, named as the action animationClipFilePath = filepath[0 : filepath.rindex('/') + 1] + action.name + ".aClip" file = open(animationClipFilePath, 'w') file.write('target skeleton: ' + skeleton.name + '\n') file.write('joints count: {:d}'.format(len(jointNames)) + '\n') skeleton.animation_data.action = action keyframeNum = max([len(fcurve.keyframe_points) for fcurve in action.fcurves]) keyframes = [] for fcurve in action.fcurves: for keyframe in fcurve.keyframe_points: keyframes.append(keyframe.co[0]) keyframes = set(keyframes) keyframes = [kf for kf in keyframes] keyframes.sort() file.write('keyframes count: {:d}'.format(len(keyframes)) + '\n') for kfIndex in keyframes: bpy.context.scene.frame_set(kfIndex) file.write('keyframe: {:d}\n'.format(int(kfIndex))) for i in range(0, len(skeleton.data.bones)): file.write('joint: {:d}\n'.format(i)) joint = skeleton.pose.bones[i] jointCurrentPoseTransform = joint.matrix translationV = jointCurrentPoseTransform.to_translation() rotationQ = jointCurrentPoseTransform.to_3x3().to_quaternion() scaleV = jointCurrentPoseTransform.to_scale() file.write('T {:9f} {:9f} {:9f}\n'.format(translationV[0], translationV[1], translationV[2])) file.write('Q {:9f} {:9f} {:9f} {:9f}\n'.format(rotationQ[1], rotationQ[2], rotationQ[3], rotationQ[0])) file.write('S {:9f} {:9f} {:9f}\n'.format(scaleV[0], scaleV[1], scaleV[2])) file.write('\n') file.close() Which I believe follow the theory explained at the beginning of my question. But then I checked out Blender's directX .x exporter for reference.. 
and what threw me off was that in the .x script they are exporting bind poses like so (transcribed using the same variable names I used so you can compare): if joint.parent: jointTransform = poseJoint.parent.matrix.inverted() else: jointTransform = Matrix() jointTransform *= poseJoint.matrix and exporting current keyframe poses like this: if joint.parent: jointCurrentPoseTransform = joint.parent.matrix.inverted() else: jointCurrentPoseTransform = Matrix() jointCurrentPoseTransform *= joint.matrix why are they using the parent's transform instead of the joint in question's? isn't the join transform assumed to exist in the context of a parent transform since after all it transforms from this joint's space to its parent's? Why are they concatenating in the same order for both bind poses and keyframe poses? If these two are then supposed to be concatenated with each other to cancel out the change of basis? Anyway, any ideas are appreciated.

    Read the article

  • Minty Bug: Build an FM Bug Inside a Mint Container

    - by ETC
    Electronics projects that have real-world (and showing off to your friends) potential are the most fun; today we take a look at a clever FM bug design hidden in a mint container. At PyroElectro Projects they wanted to try something new with the whole electronics-in-a-mint-container genre. They opted to turn a container of Ice Breakers Frost mints (the Ice Breakers response to Altoids mints, presumably) into a small FM bug. The most clever part of the design is that the container still holds mints. Aside from a small black dot on the back of the case, you’d have little reason to believe it was anything but a box of mints. Check out the video below to see the mint container unpacked and the hidden electronics payload revealed: If you’re interested in the project hit up the link below for additional information. FM Bug Transmitter Mint Box [Pyro Electro Projects via Hack A Day]

    Read the article

  • Connect ViewModel and View using Unity

    - by brainbox
    In this post I want to describe the approach for connecting View and ViewModel which I'm using in my latest project. The main idea is to do it during resolve inside the Unity container. It can be achieved using InjectionFactory, introduced in Unity 2.0:

        public static class MVVMUnityExtensions
        {
            public static void RegisterView<TView, TViewModel>(this IUnityContainer container) where TView : FrameworkElement
            {
                container.RegisterView<TView, TView, TViewModel>();
            }

            public static void RegisterView<TViewFrom, TViewTo, TViewModel>(this IUnityContainer container)
                where TViewTo : FrameworkElement, TViewFrom
            {
                container.RegisterType<TViewFrom>(new InjectionFactory(
                    c =>
                    {
                        var model = c.Resolve<TViewModel>();
                        var view = Activator.CreateInstance<TViewTo>();
                        view.DataContext = model;
                        return view;
                    }
                ));
            }
        }

    And here is a sample of how it could be used:

        var unityContainer = new UnityContainer();
        unityContainer.RegisterView<IFooView, FooView, FooViewModel>();
        IFooView view = unityContainer.Resolve<IFooView>(); // view with injected viewmodel in its datacontext

    Please tell me your preferred way to connect viewmodel and view.

    Read the article

  • The Making of Arduino [Geek History]

    - by Jason Fitzpatrick
    The open-source Arduino board is the heart of thousands of different DIY projects–it would be easy to think that the Arduino has always been around. The ubiquitous little hobby board, however, is but a scant six years old. At technology blog IEEE Spectrum they delve into the history of the Arduino board and its quiet origins in a small Italian town. Here’s an excerpt from their lengthy write-up about the origin and history of the beloved Arduino: Arduino is a low-cost microcontroller board that lets even a novice do really amazing things. You can connect an Arduino to all kinds of sensors, lights, motors, and other devices and use easy-to-learn software to program how your creation will behave. You can build an interactive display or a mobile robot and then share your design with the world by posting it on the Net. Released in 2005 as a modest tool for Banzi’s students at the Interaction Design Institute Ivrea (IDII), Arduino has spawned an international do-it-yourself revolution in electronics. You can buy an Arduino board for just about US $30 or build your own from scratch: All hardware schematics and source code are available for free under public licenses. As a result, Arduino has become the most influential open-source hardware movement of its time. The little board is now the go-to gear for artists, hobbyists, students, and anyone with a gadgetry dream. More than 250 000 Arduino boards have been sold around the world—and that doesn’t include the reams of clones. “It made it possible for people to do things they wouldn’t have done otherwise,” says David A. Mellis, who was a student at IDII before pursuing graduate work at the MIT Media Lab and is the lead software developer of Arduino.

    Read the article

  • Overriding GetHashCode in a mutable struct - What NOT to do?

    - by Kyle Baran
    I am using the XNA Framework to make a learning project. It has a Point struct which exposes an X and Y value; for the purpose of optimization, it breaks the rules for proper struct design, since it's a mutable struct. As Marc Gravell, Jon Skeet, and Eric Lippert point out in their respective posts about GetHashCode() (which Point overrides), this is a rather bad thing, since if an object's values change while it's contained in a hashmap (i.e., in LINQ queries), it can become "lost". However, I am making my own Point3D struct, following the design of Point as a guideline. Thus, it too is a mutable struct which overrides GetHashCode(). The only difference is that mine exposes an int for X, Y, and Z values, but it is fundamentally the same. The signatures are below:

        public struct Point3D : IEquatable<Point3D>
        {
            public int X;
            public int Y;
            public int Z;

            public static bool operator !=(Point3D a, Point3D b) { }
            public static bool operator ==(Point3D a, Point3D b) { }

            public Point3D Zero { get; }

            public override int GetHashCode() { }
            public override bool Equals(object obj) { }
            public bool Equals(Point3D other) { }
            public override string ToString() { }
        }

    I have tried to break my struct in the way they describe, namely by storing it in a List<Point3D>, as well as changing the value via a method using ref, but I did not encounter the behavior they warn about (maybe a pointer might allow me to break it?). Am I being too cautious in my approach, or should I be okay to use it as is?
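    To illustrate the hazard being described - this is only a sketch, and it assumes a Point3D whose GetHashCode mixes X, Y, and Z - a copy of the struct that is mutated after being used as a dictionary key no longer hashes to the stored entry:

        var lookup = new Dictionary<Point3D, string>();
        var key = new Point3D { X = 1, Y = 2, Z = 3 };
        lookup[key] = "some value";

        key.X = 42;                              // mutate the copy we kept after inserting it
        bool found = lookup.ContainsKey(key);    // false: the mutated key hashes differently,
                                                 // so it no longer reaches the stored entry

    The entry itself is still in the dictionary (enumerating it will show the pair), but it can no longer be looked up with the struct you hold - which is the "lost" behavior the linked posts warn about.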

    Read the article

  • Silverlight Cream for April 25, 2010 -- #847

    - by Dave Campbell
    In this Issue: Michael Washington, David Poll, Andrea Boschin, Kunal Chowdhury(-2-), Lee, and Chad Campbell. Shoutout: Not at all Silverlight, but Kirupa has a great article up about Elastic Collisions From SilverlightCream.com: Easily decouple your MVVM ViewModel from your Model using RX Extensions Michael Washington continues with his Simplified MVVM and uses Rx to allow him to put web service methods in the model and call them from the ViewModel. A “refreshing” Authentication/Authorization experience with Silverlight 4 David Poll expands on his previous post and demonstrates returning the user to the page prior to the login, just the way you'd like it. Using a PollingDuplex service to handle long running activities Andrea Boschin is back discussing PollingDuplex again, this time discussing getting updates from a long-running process. Introduction to Silverlight - Silverlight tutorials Chapter 1 Kunal Chowdhury has an intro to Silverlight Tutorial that looks good... if you're just getting started, here's a place to look. Introduction to Silverlight Application Development - Silverlight tutorial Chapter 2 In his 2nd introductory tutorial, Kunal Chowdhury gets into code and discusses User Controls. Drag and Drop grouping in Datagrid Lee has a post up where he's taken a DataGrid and produced some of the features of the 3rd-party controls, specifically dragging column headers to a place above the grid to sort the grid. Silverlight – HTTP request to [Url] was aborted. …local channel being closed while the request was still in progress Chad Campbell is addressing timeout problems you may hit with connecting up to your WCF service.

    Read the article

  • SRs @ Oracle: How do I License Thee?

    - by [email protected]
    With the release of the new Sun Ray product last week comes the advent of a different software licensing model. Where Sun had initially taken the approach of '1 desktop device = one license', we later changed things to be '1 concurrent connection to the server software = one license', and while there were ways to tell how many connections there were at a time, it wasn't the easiest thing to do.  And, when should you measure concurrency?  At your busiest time, of course... but when might that be?  9:00 Monday morning this week might yield a different result than 9:00 Monday morning last week.In the acquisition of this desktop virtualization product suite Oracle has changed things to be, in typical Oracle fashion, simpler.  There are now two choices for customers around licensing: Named User licenses and Per Device licenses.Here's how they work, and some examples:The Rules1) A Sun Ray device, and PC running the Desktop Access Client (DAC), are both considered unique devices.OR, 2) Any user running a session on either a Sun Ray or an DAC is still just one user.So, you have a choice of path to go down.Some Examples:Here are 6 use cases I can think of right now that will help you choose the Oracle server software licensing model that is right for your business:Case 1If I have 100 Sun Rays for 100 users, and 20 of them use DAC at home that is 100 user licenses.If I have 100 Sun Rays for 100 users, and 20 of them use DAC at home that is 120 device licenses.Two cases using the same metrics - different licensing models and therefore different results.Case 2If I have 100 Sun Rays for 200 users, and 20 of them use DAC at home that is 200 user licenses.If I have 100 Sun Rays for 200 users, and 20 of them use DAC at home that is 120 device licenses.Same metrics - very different results.Case 3If I have 100 Sun Rays for 50 users, and 20 of them use DAC at home that is 50 user licenses.If I have 100 Sun Rays for 50 users, and 20 of them use DAC at home that is 120 device licenses.Same metrics - but again - very different results.Based on the way your business operates you should be able to see which of the two licensing models is most advantageous to you.Got questions?  I'll try to help.(Thanks to Brad Lackey for the clarifications!)

    Read the article

  • Documentation in Oracle Retail Analytics, Release 13.3

    - by Oracle Retail Documentation Team
    The 13.3 Release of Oracle Retail Analytics is now available on the Oracle Software Delivery Cloud and from My Oracle Support. The Oracle Retail Analytics 13.3 release introduced significant new functionality with its new Customer Analytics module. The Customer Analytics module enables you to perform retail analysis of customers and customer segments. Market basket analysis (part of the Customer Analytics module) provides insight into which products have strong affinity with one another. Customer behavior information is obtained from mining sales transaction history, and it is correlated with customer segment attributes to inform promotion strategies. The ability to understand market basket affinities allows marketers to calculate, monitor, and build promotion strategies based on critical metrics such as customer profitability. Highlighted End User Documentation Updates With the addition of Oracle Retail Customer Analytics, the documentation set addresses both modules under the single umbrella name of Oracle Retail Analytics. Note, however, that the modules, Oracle Retail Merchandising Analytics and Oracle Retail Customer Analytics, are licensed separately. To accommodate new functionality, the Retail Analytics suite of documentation has been updated in the following areas, among others: The User Guide has been updated with an overview of Customer Analytics. It also contains a list of metrics associated with Customer Analytics. The Operations Guide provides details on Market Basket Analysis as well as an updated list of APIs. The program reference list now also details the module (Merchandising Analytics or Customer Analytics) to which each program applies. The Data Model was updated to include new information related to Customer Analytics, and a new section, Market Basket Analysis Module, was added to the document with its own entity relationship diagrams and data definitions. List of Documents The following documents are included in Oracle Retail Analytics 13.3: Oracle Retail Analytics Release Notes Oracle Retail Analytics Installation Guide Oracle Retail Analytics User Guide Oracle Retail Analytics Implementation Guide Oracle Retail Analytics Operations Guide Oracle Retail Analytics Data Model

    Read the article

  • Frederick .NET User Group June 2010 Meeting

    - by John Blumenauer
    FredNUG is pleased to announce our June speaker will be Pete Brown.  Pete was one of FredNUG’s first speakers when the group started and we’re very happy to have him visiting us again to present on Silverlight!  On June 15th @ 6:30 PM, we’ll start with a Visual Studio 2010 Launch with pizza, swag and a presentation about what makes Visual Studio 2010 great.  Then, starting at 7 PM, Pete Brown will present “What’s New in Silverlight 4.”  It looks like an evening filled with newness!   The scheduled agenda is:   6:30 PM - 7:15 PM – Visual Studio 2010 Launch Event plus Pizza/Social Networking/Announcements 7:15 PM - 8:30 PM - Main Topic: What’s New in Silverlight 4 with Pete Brown  Main Topic:  What’s New in Silverlight 4? Speaker Bio: Pete Brown is a Senior Program Manager with Microsoft on the developer community team led by Scott Hanselman, as well as a former Microsoft Silverlight MVP, INETA speaker, and RIA Architect for Applied Information Sciences, where he worked for over 13 years. Pete's focus at Microsoft is the community around client application development (WPF, Silverlight, Windows Phone, Surface, Windows Forms, C++, Native Windows API and more). From his first sprite graphics and custom character sets on the Commodore 64 to 3d modeling and design through to Silverlight, Surface, XNA, and WPF, Pete has always had a deep interest in programming, design, and user experience. His involvement in Silverlight goes back to the Silverlight 1.1 alpha application that he co-wrote and put into production in July 2007. Pete has been programming for fun since 1984, and professionally since 1992. In his spare time, Pete enjoys programming, blogging, designing and building his own woodworking projects and raising his two children with his wife in the suburbs of Maryland. Pete's site and blog is at 10rem.net, and you can follow him on Twitter at http://twitter.com/pete_brown Twitter: http://twitter.com/pete_brown Facebook: http://www.facebook.com/pmbrown Pete is a founding member of the CapArea .NET Silverlight SIG. (Visit the CapArea .NET Silverlight SIG here)    8:30 PM - 8:45 PM – RAFFLE! Please join us and get involved in our .NET developers community!

    Read the article

  • Professional Scrum Developer (.NET) Training in London

    - by Martin Hinshelwood
    On the 26th - 30th July in Microsoft’s offices in London Adam Cogan from SSW will be presenting the first Professional Scrum Developer course in the UK. I will be teaching this course along side Adam and it is a fantastic experience. You are split into teams and go head-to-head to deliver units of potentially shippable work in four two hour sprints. The Professional Scrum Developer course is the only course endorsed by both Microsoft and Ken Schwaber and they have worked together very effectively in brining this course to fruition. This course is the brain child of Richard Hundhausen, a Microsoft Regional Director, and both Adam and I attending the Trainer Prep in Sydney when he was there earlier this year. He is a fantastic trainer and no matter where you do this course you can be safe in the knowledge that he has trained and vetted all of the teachers. A tools version of Ken if you will Find a course and register Download this syllabus Download the Scrum Guide What is the Professional Scrum Developer course all about? Professional Scrum Developer course is a unique and intensive five-day experience for software developers. The course guides teams on how to turn product requirements into potentially shippable increments of software using the Scrum framework, Visual Studio 2010, and modern software engineering practices. Attendees will work in self-organizing, self-managing teams using a common instance of Team Foundation Server 2010. Who should attend this course? This course is suitable for any member of a software development team – architect, programmer, database developer, tester, etc. Entire teams are encouraged to attend and experience the course together, but individuals are welcome too. Attendees will self-organize to form cross-functional Scrum teams. These teams require an aggregate of skills specific to the selected case study. Please see the last page of this document for specific details. Product Owners, ScrumMasters, and other stakeholders are welcome too, but keep in mind that everyone who attends will be expected to commit to work and pull their weight on a Scrum team. What should you know by the end of the course? Scrum will be experienced through a combination of lecture, demonstration, discussion, and hands-on exercises. Attendees will learn how to do Scrum correctly while being coached and critiqued by the instructor, in the following topic areas: Form effective teams Explore and understand legacy “Brownfield” architecture Define quality attributes, acceptance criteria, and “done” Create automated builds How to handle software hotfixes Verify that bugs are identified and eliminated Plan releases and sprints Estimate product backlog items Create and manage a sprint backlog Hold an effective sprint review Improve your process by using retrospectives Use emergent architecture to avoid technical debt Use Test Driven Development as a design tool Setup and leverage continuous integration Use Test Impact Analysis to decrease testing times Manage SQL Server development in an Agile way Use .NET and T-SQL refactoring effectively Build, deploy, and test SQL Server databases Create and manage test plans and cases Create, run, record, and play back manual tests Setup a branching strategy and branch code Write more maintainable code Identify and eliminate people and process dysfunctions Inspect and improve your team’s software development process What does the week look like? 
This course is a mix of lecture, demonstration, group discussion, simulation, and hands-on software development. The bulk of the course will be spent working as a team on a case study application delivering increments of new functionality in mini-sprints. Here is the week at a glance: Monday morning and most of the day Friday will be spent with the computers powered off, so you can focus on sharpening your game of Scrum and avoiding the common pitfalls when implementing it. The Sprints Timeboxing is a critical concept in Scrum as well as in this course. We expect each team and student to understand and obey all of the timeboxes. The timebox duration will always be clearly displayed during each activity. Expect the instructor to enforce it. Each of the ½ day sprints will roughly follow this schedule: Component Description Minutes Instruction Presentation and demonstration of new and relevant tools & practices 60 Sprint planning meeting Product owner presents backlog; each team commits to delivering functionality 10 Sprint planning meeting Each team determines how to build the functionality 10 The Sprint The team self-organizes and self-manages to complete their tasks 120 Sprint Review meeting Each team will present their increment of functionality to the other teams = 30 Sprint Retrospective A group retrospective meeting will be held to inspect and adapt 10 Each team is expected to self-organize and manage their own work during the sprint. Pairing is highly encouraged. The instructor/product owner will be available if there are questions or impediments, but will be hands-off by default. You should be prepared to communicate and work with your team members in order to achieve your sprint goal. If you have development-related questions or get stuck, your partner or team should be your first level of support. Module 1: INTRODUCTION This module provides a chance for the attendees to get to know the instructors as well as each other. The Professional Scrum Developer program, as well as the day by day agenda, will be explained. Finally, the Scrum team will be selected and assembled so that the forming, storming, norming, and performing can begin. Trainer and student introductions Professional Scrum Developer program Agenda Logistics Team formation Retrospective Module 2: SCRUMDAMENTALS This module provides a level-setting understanding of the Scrum framework including the roles, timeboxes, and artifacts. The team will then experience Scrum firsthand by simulating a multi-day sprint of product development, including planning, review, and retrospective meetings. Scrum overview Scrum roles Scrum timeboxes (ceremonies) Scrum artifacts Simulation Retrospective It’s required that you read Ken Schwaber’s Scrum Guide in preparation for this module and course. MODULE 3: IMPLEMENTING SCRUM IN VISUAL STUDIO 2010 This module demonstrates how to implement Scrum in Visual Studio 2010 using a Scrum process template*. The team will learn the mapping between the Scrum concepts and how they are implemented in the tool. After connecting to the shared Team Foundation Server, the team members will then return to the simulation – this time using Visual Studio to manage their product development. Mapping Scrum to Visual Studio 2010 User Story work items Task work items Bug work items Demonstration Simulation Retrospective Module 4: THE CASE STUDY In this module the team is introduced to their problem domain for the week. 
A kickoff meeting by the Product Owner (the instructor) will set the stage for the why and what that will take during the upcoming sprints. The team will then define the quality attributes of the project and their definition of “done.” The legacy application code will be downloaded, built, and explored, so that any bugs can be discovered and reported. Introduction to the case study Download the source code, build, and explore the application Define the quality attributes for the project Define “done” How to file effective bugs in Visual Studio 2010 Retrospective Module 5: HOTFIX This module drops the team directly into a Brownfield (legacy) experience by forcing them to analyze the existing application’s architecture and code in order to locate and fix the Product Owner’s high-priority bug(s). The team will learn best practices around finding, testing, fixing, validating, and closing a bug. How to use Architecture Explorer to visualize and explore Create a unit test to validate the existence of a bug Find and fix the bug Validate and close the bug Retrospective Module 6: PLANNING This short module introduces the team to release and sprint planning within Visual Studio 2010. The team will define and capture their goals as well as other important planning information. Release vs. Sprint planning Release planning and the Product Backlog Product Backlog prioritization Acceptance criteria and tests Sprint planning and the Sprint Backlog Creating and linking Sprint tasks Retrospective At this point the team will have the knowledge of Scrum, Visual Studio 2010, and the case study application to begin developing increments of potentially shippable functionality that meet their definition of done. Module 7: EMERGENT ARCHITECTURE This module introduces the architectural practices and tools a team can use to develop a valid design on which to develop new functionality. The teams will learn how Scrum supports good architecture and design practices. After the discussion, the teams will be presented with the product owner’s prioritized backlog so that they may select and commit to the functionality they can deliver in this sprint. Architecture and Scrum Emergent architecture Principles, patterns, and practices Visual Studio 2010 modeling tools UML and layer diagrams SPRINT 1 Retrospective Module 8: TEST DRIVEN DEVELOPMENT This module introduces Test Driven Development as a design tool and how to implement it using Visual Studio 2010. To maximize productivity and quality, a Scrum team should setup Continuous Integration to regularly build every team member’s code changes and run regression tests. Refactoring will also be defined and demonstrated in combination with Visual Studio’s Test Impact Analysis to efficiently re-run just those tests which were impacted by refactoring. Continuous integration Team Foundation Build Test Driven Development (TDD) Refactoring Test Impact Analysis SPRINT 2 Retrospective Module 9: AGILE DATABASE DEVELOPMENT This module lets the SQL Server database developers in on a little secret – they can be agile too. By using the database projects in Visual Studio 2010, the database developers can join the rest of the team. The students will see how to apply Agile database techniques within Visual Studio to support the SQL Server 2005/2008/2008R2 development lifecycle. 
Agile database development Visual Studio database projects Importing schema and scripts Building and deploying Generating data Unit testing SPRINT 3 Retrospective Module 10: SHIP IT Teams need to know that just because they like the functionality doesn’t mean the Product Owner will. This module revisits acceptance criteria as it pertains to acceptance testing. By refining acceptance criteria into manual test steps, team members can execute the tests, recording the results and reporting bugs in a number of ways. Manual tests will be defined and executed using the Microsoft Test Manager tool. As the Sprint completes and an increment of functionality is delivered, the team will also learn why and when they should create a branch of the codeline. Acceptance criteria Testing in Visual Studio 2010 Microsoft Test Manager Writing and running manual tests Branching SPRINT 4 Retrospective Module 11: OVERCOMING DYSFUNCTION This module introduces the many types of people, process, and tool dysfunctions that teams face in the real world. Many dysfunctions and scenarios will be identified, along with ideas and discussion for how a team might mitigate them. This module will enable you and your team to move toward independence and improve your game of Scrum when you depart class. Scrum-butts and flaccid Scrum Best practices working as a team Team challenges ScrumMaster challenges Product Owner challenges Stakeholder challenges Course Retrospective What will be expected of you and you team? This is a unique course in that it’s technically-focused, team-based, and employs timeboxes. It demands that the members of the teams self-organize and self-manage their own work to collaboratively develop increments of software. All attendees must commit to: Pay attention to all lectures and demonstrations Participate in team and group discussions Work collaboratively with other team members Obey the timebox for each activity Commit to work and do your best to deliver All teams should have these skills: Understanding of Scrum Familiarity with Visual Studio 201 C#, .NET 4.0 & ASP.NET 4.0 experience*  SQL Server 2008 development experience Software testing experience * Check with the instructor ahead of time for the exact technologies Self-organising teams Another unique attribute of this course is that it’s a technical training class being delivered to teams of developers, not pairs, and not individuals. Ideally, your actual software development team will attend the training to ensure that all necessary skills are covered. However, if you wish to attend an open enrolment course alone or with just a couple of colleagues, realize that you may be placed on a team with other attendees. The instructor will do his or her best to ensure that each team is cross-functional to tackle the case study, but there are no guarantees. You may be required to try a new role, learn a new skill, or pair with somebody unfamiliar to you. This is just good Scrum! Who should NOT take this course? 
Because of the nature of this course, as explained above, certain types of people should probably not attend this course: Students requiring command and control style instruction – there are no prescriptive/step-by-step (think traditional Microsoft Learning) labs in this course Students who are unwilling to work within a timebox Students who are unwilling to work collaboratively on a team Students who don’t have any skill in any of the software development disciplines Students who are unable to commit fully to their team – not only will this diminish the student’s learning experience, but it will also impact their team’s learning experience Find a course and register Download this syllabus Download the Scrum Guide Technorati Tags: Scrum,SSW,Pro Scrum Dev

    Read the article

  • Programming concepts taken from the arts and humanities

    - by Joey Adams
    After reading Paul Graham's essay Hackers and Painters and Joel Spolsky's Advice for Computer Science College Students, I think I've finally gotten it through my thick skull that I should not be loath to work hard in academic courses that aren't "programming" or "computer science" courses. To quote the former: I've found that the best sources of ideas are not the other fields that have the word "computer" in their names, but the other fields inhabited by makers. Painting has been a much richer source of ideas than the theory of computation. — Paul Graham, "Hackers and Painters" There are certainly other, much stronger reasons to work hard in the "boring" classes. However, it'd also be neat to know that these classes may someday inspire me in programming. My question is: what are some specific examples where ideas from literature, art, humanities, philosophy, and other fields made their way into programming? In particular, ideas that weren't obviously applied the way they were meant to (like most math and domain-specific knowledge), but instead gave utterance or inspiration to a program's design and choice of names. Good examples: The term endian comes from Gulliver's Travels by Jonathan Swift (see here), where it refers to the trivial matter of which side people crack open their eggs. The terms journal and transaction refer to nearly identical concepts in both filesystem design and double-entry bookkeeping (financial accounting). mkfs.ext2 even says: Writing superblocks and filesystem accounting information: done Off-topic: Learning to write English well is important, as it enables a programmer to document and evangelize his/her software, as well as appear competent to other programmers online. Trigonometry is used in 2D and 3D games to implement rotation and direction aspects. Knowing finance will come in handy if you want to write an accounting package. Knowing XYZ will come in handy if you want to write an XYZ package. Arguably on-topic: The Monad class in Haskell is based on a concept by the same name from category theory. Actually, Monads in Haskell are monads in the category of Haskell types and functions. Whatever that means...

    Read the article

  • 10 Reasons Why Java is the Top Embedded Platform

    - by Roger Brinkley
    With the release of Oracle ME Embedded 3.2 and Oracle Java Embedded Suite, Java is now ready to fully move into the embedded developer space, what many have called the "Internet of Things". Here are 10 reasons why Java is the top embedded platform. 1. Decouples software development from hardware development cycle In a traditional design flow, development is typically split between hardware and software. This leads to complicated co-design and requires prototype hardware to be built. This parallel and interdependent hardware/software design process typically leads to two or more re-development phases. With Embedded Java, all of the specific work is carried out in software, with the (processor) hardware implementation fully decoupled. This eliminates, or at least reduces, the need for re-spins of software or hardware, and the original development efforts can be carried forward directly into product development and validation. 2. Development and testing can be done (mostly) using standard desktop systems through emulation Because the software and hardware are decoupled, it now becomes easier to test the software long before it reaches the hardware, through hardware emulation. Emulation is the ability of a program in an electronic device to imitate another program or device. In the past, Java tools like the Java ME SDK and the SunSPOT Solarium provided developers with emulation for a complete set of mobile telephones and SunSPOTs. This often included network interaction or, in the case of SunSPOTs, radio communication. What emulation does is speed up the development cycle by refining the software development process without the need for hardware. The software is fixed, refined, and refactored without the time-consuming expense of hardware testing. With tools like the Java ME 3.2 SDK, Embedded Java applications can be quickly developed on Windows-based platforms. In the end, of course, developers should do a full set of testing on the hardware, as incompatibilities between emulators and hardware will exist, but the amount of time to do this should be significantly reduced. 3. Highly productive language, APIs, runtime, and tools mean quick time to market Charles Nutter probably said it best when he tweeted, "Every time I see a piece of C code I need to port, my heart dies a little. Then I port it to 1/4 as much Java, and feel better." The Java environment is a very complex combination of a Java Virtual Machine, the Java language, and its robust APIs. Combine that with the Java ME SDK for small devices, or just NetBeans for the larger devices, and you have a development environment where development time is reduced significantly, meaning the product can be shipped sooner. Of course this is assuming that the engineers don't get slap-happy adding new features given the extra time they'll have. 4. Create high-performance, portable, secure, robust, cross-platform applications easily The latest JIT compilers for the Oracle JVM approach the speed of C/C++ code, and in some memory-allocation-intensive circumstances, exceed it. And specifically for embedded devices, both ME Embedded and SE Embedded have been optimized for smaller footprints. For portability, Java uses bytecode to make the language platform independent. This creates a write-once-run-anywhere environment that allows you to develop on one platform and execute on others, and avoids platform vendor lock-in. 
For security, Java achieves protection by confining a Java program to a Java execution environment and not allowing it to access other parts of the computer. Across a variety of systems the program must execute reliably to be robust. Finally, Oracle Java ME Embedded is a cross-industry and cross-platform product optimized in release version 3.2 for chipsets based on the ARM architectures. Similarly, Oracle Java SE Embedded works on a variety of ARM V5, V6, and V7, X86, and Power Architecture Linux platforms. 5. Java isolates your apps from language and platform variations (e.g. C/C++, kernel, libc differences) This has been a key factor in Java from day one. Developers write to Java and don't have to worry about underlying differences in the platform variations. Those platform variations are managed by the JVM. Gone are the C/C++ problems like memory corruptions, stack overflows, and other such bugs which are extremely difficult to isolate. Of course this doesn't mean that you can get away from native code completely. There could be some situations where you have to write native code in either assembler or C/C++. But those instances should be limited. 6. Most popular embedded processors supported allowing design flexibility Java SE Embedded is now available on ARM V5, V6, and V7 along with Linux on X86 and Power Architecture platforms. Java ME Embedded is available on systems based on ARM architecture SOCs with low memory footprints and a device emulation environment for x86/Windows desktop computers, integrated with the Java ME SDK 3.2. A standard binary of Oracle Java ME Embedded 3.2 for ARM KEIL development boards based on ARM Cortex M-3/4 (KEIL MCBSTM32F200 using ST Micro SOC STM32F207IG) will soon be available for download from the Oracle Technology Network (OTN). 7. Support for key embedded features (low footprint, power mgmt., low latency, etc) All embedded devices by their very nature are constrained in some way. Economics may dictate a device with less RAM and ROM. The CPU needs can dictate a less powerful device. Power consumption is another major constraint in some embedded devices, as connecting to a consistent power source is not always desirable or possible; others have to be constantly on. Often many of these systems are headless (in the embedded space it's almost always Halloween). For memory resources, Java ME Embedded can run in environments as low as 130KB RAM/350KB ROM for a minimal, customized configuration, up to 700KB RAM/1500KB ROM for the full, standard configuration. Java SE Embedded is designed for environments starting at 32MB RAM/39MB ROM. Key embedded device functionality such as auto-start and recovery and flexible networking is fully supported. And while Java SE Embedded has been optimized for mid-range to high-end embedded systems, Java ME Embedded is a Java runtime stack optimized for small embedded systems. It provides a robust and flexible application platform with dedicated embedded functionality for always-on, headless (no graphics/UI), and connected devices. 8. Leverage huge Java developer ecosystem (expertise, existing code) There are over 9 million developers in the world that work on Java, and while not all of them work on embedded systems, their wealth of expertise in developing applications is immense. In short, getting a Java developer to work on an embedded system is pretty easy; you probably have a Java developer living in your subdivision. Then of course there is the wealth of existing code. 
The Java Embedded Community on Java.net is the central gathering place for embedded Java developers. Conferences like Embedded Java @ JavaOne and a variety of hardware vendor conferences like the Freescale Technology Forums offer an excellent opportunity for those interested in embedded systems. 9. Easily create end-to-end solutions integrated with Java back-end services In the "Internet of Things" things aren't on an island doing a single task. For instance, an embedded drink dispenser doesn't just dispense a beverage, but could collect money from a credit card and also send information about current sales. Similarly, an embedded house power monitoring system doesn't just manage the power usage in a house, but can also send that data back to the power company. In both cases it isn't about the individual thing, but about monitoring a collection of things. How much power did your block, subdivision, area of town, town, county, state, nation, or world use? How many Dr Peppers were purchased from thing1, thing2, thingN? The point is that all this information can be collected and transferred securely (and believe me, that is a key issue that Java fully supports) to back-end services for further analysis. And what better back-end service exists than a Java back-end service? It's interesting to note that on larger embedded platforms that support the Java Embedded Suite, some of the analysis might be done on the embedded device itself, as JES includes a GlassFish server and Java DB as part of the installation. The result is an end-to-end Java solution. 10. Solutions from constrained devices to server-class systems Just take a look at some of the embedded Java systems that have already been developed and you'll see a vast range of solutions. The Livescribe pen, the Kindle, each and every Blu-ray player, Cisco's advanced VoIP phones, the Kronos InTouch smart time clock, EnergyICT smart metering, EDF's automated meter management, Ricoh printers, and Stanford's automated car are just a few entries in a list of embedded Java implementations that continues to grow. Conclusion Now if you're a Java developer you probably look at some of the 10 reasons and say "duh", but for embedded developers this should be an eye-opening list. And with the release of ME Embedded 3.2 and the Java Embedded Suite, the embedded developer's life is now a whole lot easier. For the Java developer, your employment opportunities are about to increase. For both it's a great time to start developing Java for the "Internet of Things".

    Read the article

  • How do I get information about the level to the player object?

    - by pangaea
    I have a design problem with my Player and Level classes in my game. So below is a picture of the game. The problem is that I only want the player to move on the white space, not the black space. I know how to do this, as all I need to do is check for sf::Color::Black, and I have methods to do this in the Level class. The problem is this piece of code: void Game::input() { player.input(); } void Game::update() { (*level).update(); player.update(); } void Game::render() { (*level).render(); player.render(); } So as you can see, the problem is how to get the map information from the Level class to the Player class. Now I was thinking that if I made the Player position static and passed it into the Level as a parameter in update, I could do it. The problem is interaction. I don't know what to do. I could maybe make the player go into the Level class. However, what if I want multiple levels? So I have big design problems that I'm trying to solve.
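    One common way to untangle this, sketched below, is to invert the dependency: rather than the Level knowing about the Player, the Player is handed a reference to the Level on each update and asks it whether a position is walkable. This is only a minimal sketch assuming an SFML 2.x-style setup; the Level, Player, and isWalkable names are illustrative and not taken from the original post.

        // Minimal sketch (SFML 2.x): the Player queries the Level for walkability,
        // so neither class owns the other and multiple levels are easy to support.
        // Class and method names are illustrative, not from the original post.
        #include <SFML/Graphics.hpp>

        class Level {
        public:
            explicit Level(const sf::Image& collisionMap) : map_(collisionMap) {}

            // A position is walkable if the corresponding map pixel is not black.
            bool isWalkable(const sf::Vector2f& pos) const {
                const unsigned x = static_cast<unsigned>(pos.x);
                const unsigned y = static_cast<unsigned>(pos.y);
                if (x >= map_.getSize().x || y >= map_.getSize().y)
                    return false;
                return map_.getPixel(x, y) != sf::Color::Black;
            }

        private:
            sf::Image map_;
        };

        class Player {
        public:
            // The Level is passed in per call, so the Player never stores it and
            // the same Player can be reused against any number of levels.
            void update(const Level& level) {
                const sf::Vector2f next = position_ + velocity_;
                if (level.isWalkable(next))
                    position_ = next;
            }

        private:
            sf::Vector2f position_;
            sf::Vector2f velocity_;
        };

    With this shape, Game::update() can call level->update() followed by player.update(*level); because the Level is passed per call rather than stored inside the Player, supporting multiple levels is just a matter of handing the Player a different Level object.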

    Read the article

  • SQLAuthority News – 2 Whitepapers Announced – AlwaysOn Architecture Guide: Building a High Availability and Disaster Recovery Solution

    - by pinaldave
    Understanding the AlwaysOn architecture is extremely important when building a solution with failover clusters and availability groups. Microsoft has just released two very important white papers related to this subject. Both white papers are written by top experts in the industry and have been reviewed by an excellent panel of experts. Every time I talk with organizations that are adopting SQL Server 2012, they are always excited about the concept of the new AlwaysOn feature. One of the requests I often hear is for detailed documentation which can help enterprises build a robust high availability and disaster recovery solution. I believe the following two white papers now satisfy that request. AlwaysOn Architecture Guide: Building a High Availability and Disaster Recovery Solution by Using AlwaysOn Availability Groups SQL Server 2012 AlwaysOn Availability Groups provide a unified high availability and disaster recovery (HADR) solution. This paper details the key topology requirements of this specific design pattern along with important concepts like quorum configuration considerations, steps required to build the environment, and a workflow that shows how to handle a disaster recovery. AlwaysOn Architecture Guide: Building a High Availability and Disaster Recovery Solution by Using Failover Cluster Instances and Availability Groups SQL Server 2012 AlwaysOn Failover Cluster Instances (FCI) and AlwaysOn Availability Groups provide a comprehensive high availability and disaster recovery solution. This paper details the key topology requirements of this specific design pattern along with important concepts like asymmetric storage considerations, quorum model selection, quorum votes, steps required to build the environment, and a workflow. Even if you are not going to implement the AlwaysOn feature, these two white papers are still great reference material to review, as they will give you a complete idea of what it takes to implement an AlwaysOn architecture and what kind of effort is needed. One should at least bookmark the above two white papers for future reference. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, T SQL, Technology Tagged: AlwaysOn

    Read the article

  • DRM Tallyrand - The New User Interface

    - by russ.bishop
    I received word recently that the Tallyrand (11.1.2.0) build is out of our hands. I'm not sure when it will hit eDelivery, but if it hasn't already it should happen soon. For this post, I want to really quickly show the new user interface. The login screen: When you log in, you are browsing versions and hierarchies. Note that Unicode is fully supported: The UI attempts to provide context-sensitive links where possible; notice here that an unloaded version is selected, so the UI shows a link. Clicking the link automatically brings up this Load Version dialog. This same thing applies elsewhere in the UI when you attempt to perform an action with an unloaded version: Here is browsing a hierarchy, with the property grid and context menu displayed (though you can hide the property grid anytime you like to provide more room): Worried about drag and drop? Don't! We support it even though this is a browser app. Also notice the Relationships feature on the right displaying a node's ancestors: Where possible, we try to present the available options, rather than just throwing up an "OK/Cancel" dialog (which most users never read anyway): Context-sensitive shortcuts automatically fill in the context based on the currently selected node. For example, if you want to run a query using the selected node as the root, you can just click that query in the Shortcuts tab. In this screenshot, clicking Model After would model the selected node: This is just for starters. There is much more to cover, on both the client and server. For example, all communication channels are now configurable (no more DCOM). You can pick the ports, the encoding (binary or XML), and the transport mechanism (TCP, TCP over SSL, or SOAP over HTTP). All the relevant WS-* standards are also supported, e.g. WS-Security, etc. Plus new features (besides the web client and Unicode support). I hope to cover as many of these things as I can in the coming months. If you have specific requests, comment on this post and I'll try to cover them.

    Read the article

  • ODI 11g – How to override SQL at runtime?

    - by David Allan
    Following on from the posting some time back entitled ‘ODI 11g – Simple, Powerful, Flexible’, here we push the envelope even further. Rather than just having the SQL we override defined statically in the interface design, we will have it configurable via a variable… at runtime. Imagine you have a well-defined interface shape that you want fulfilled, and that shape can be satisfied from a number of different sources; that is what this allows: the ability for one interface to consume data from many different places using variables. The cool thing about ODI’s reference API, and this, is that it can be fantastically flexible and useful. When I use the variable as the option value and I execute the top-level scenario that uses this temporary interface, I get prompted (or can get prompted, to be correct) for the value of the variable. Note I am using the <@=odiRef.getObjectName("L","EMP", "SCOTT","D")@> notation for the table reference; since this is done at runtime, the context will resolve to the correct table name, etc. Each time I execute, I could use a different source provider (obviously there are some dependencies on KMs/technologies here). For example, in the following Groovy snippet the first execution's query uses EMP from the SCOTT model, and the next execution uses the OTHERS datastore from the BOB model. m=new Properties(); m.put("DEMO.SQLSTR", "select empno, deptno from <@=odiRef.getObjectName("L","EMP", "SCOTT","D")@>"); s=new StartupParams(m); runtimeAgent.startScenario("TOP", null, s, null, "GLOBAL", 5, null, true); m2=new Properties(); m2.put("DEMO.SQLSTR", "select empno, deptno from <@=odiRef.getObjectName("L","OTHERS", "BOB","D")@>"); s2=new StartupParams(m2); runtimeAgent.startScenario("TOP", null, s2, null, "GLOBAL", 5, null, true); You’ll need a patch to 11.1.1.6 for this type of capability; thanks to my ole buddy Ron Gonzalez from the Enterprise Management group for help pushing the envelope!

    Read the article

  • Best Practices for MVC Architecture

    - by Mystere Man
    There are a number of questions on StackOverflow regarding MVC best practices, but most of those seem to revolve around things like using Dependency Injection, or creating helper functions, or do's and don'ts of what to do in views and controllers. My question is more about how to architect an MVC application. For example, we are encouraged to use DI with the Repository pattern to decouple data access from the controller, but very little is said on HOW to do that specifically for MVC. Where would we place the Repository classes, for instance? They don't seem to be model-related specifically, since the model should likewise be relatively decoupled from the actual data access technologies. A second question involves how to structure the layers or tiers. Most example applications (Nerd Dinner, Music Store, etc.) all seem to use a single-tier, two-layer approach (not counting tests) that typically has controllers directly calling L2S or EF code. If I want to create a multi-tier/layer application, what are some of the best practices there in regards to MVC? This question is one part standard Q&A, but another part best practices, so it could go either here or on programmers.se; I am marking it CW. If you feel it would be better suited to programmers.se then it can be migrated. EDIT: What happened to the Community Wiki option? It seems to be gone.

    Read the article

< Previous Page | 614 615 616 617 618 619 620 621 622 623 624 625  | Next Page >