Search Results

Search found 5254 results on 211 pages for 'concept analysis'.


  • Google Programmers

    - by seth
    As a soon-to-be software engineer, I have studied some languages during my time in college: C, C++, Java, Scheme, Ruby, and PHP, for example. However, one of the main principles at my college (recognized by many as the best in my country) is to teach us how to learn for ourselves and how to search the web when we have a doubt. This leads to a proactive attitude: when I need something, I go get it, and this has worked so far for me. Recently, though, I started wondering how much development I would be able to do without internet access. The answer bugged me quite a bit. I know the concepts of the languages and mostly I know what to do, but I was amazed by how "slow" things were without Google to help in the development. The problem was mostly related to specific syntax, and it was not without some effort that I solved some of the SPOJ problems in C++. Is this normal? Should I be worried and try to change something in my programming behaviour?

    UPDATE: I'll give a concrete example: reading and writing to a file in Java. I have done this about a dozen times in my life, yet every time I need to do it, I end up googling "read file java" and refreshing my memory. I completely understand the code; I fully understand what it does. But I am sure that, without Google, it would take me a few tries to read and write correctly (if I had to sit in front of the screen with a blank page and write it without consulting any source whatsoever).
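
    For the record, the idiom the poster keeps re-googling is only a few lines; a minimal sketch (file names here are placeholders) might look like this:

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.FileWriter;
        import java.io.IOException;
        import java.io.PrintWriter;

        public class FileRoundTrip {
            public static void main(String[] args) throws IOException {
                // Write a couple of lines to a file
                PrintWriter out = new PrintWriter(new FileWriter("output.txt"));
                out.println("first line");
                out.println("second line");
                out.close();

                // Read the file back, line by line
                BufferedReader in = new BufferedReader(new FileReader("output.txt"));
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line);
                }
                in.close();
            }
        }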

    Read the article

  • How to achieve a loosely coupled REST API but with a defined and well understood contract?

    - by BestPractices
    I am new to REST and am struggling to understand how one would properly design a REST system to allow for loose coupling while at the same time allowing a consumer of a REST API to understand the API. If, in my client code, I issue a GET request for a resource and get back XML, how do I know what to do with that XML? e.g. if it contains <fname>John</fname><lname>Smith</lname>, how do I know that these refer to the concepts of "first name" and "last name"? Is it up to the person writing the REST API to define in documentation some place what each of the XML fields means? What if the producer of the API wants to change the implementation to use <firstname> instead of <fname>? How do they do this and notify their consumers that the change occurred? Or do the consumers just encounter the error and then look at the payload and figure out on their own that it changed? I've read in REST in Practice that using a WADL tool to create a client implementation based on the WADL (and hide the fact that you're doing a distributed call) is an "anti-pattern". But I was planning to do this - at least then I would have a statically typed API call that, if it changed, I would know about at compile time and not at run time. Why is it a bad thing to generate client code based on a WADL? And how do I know what to do with the links returned in the response of a POST to a REST API? What defines this contract and gives true meaning to what each link will do? Please help! I don't understand how to go from statically-typed or even SOAP/RPC to REST!
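
    To make the question about links concrete, a purely illustrative response (hypothetical, not from any real API) in the hypermedia style the book advocates might look like this; the idea is that the link rel values, documented once in a media-type description, carry the contract rather than a WADL:

        <customer xmlns="http://example.com/schemas/customer">
            <fname>John</fname>
            <lname>Smith</lname>
            <link rel="self"    href="http://example.com/customers/42" />
            <link rel="orders"  href="http://example.com/customers/42/orders" />
            <link rel="payment" href="http://example.com/customers/42/payments" />
        </customer>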

    Read the article

  • Why is testing MVC Views frowned upon?

    - by Peter Bernier
    I'm currently laying the groundwork for an ASP.NET MVC application and I'm looking into what sort of unit tests I should be prepared to write. I've seen in multiple places people essentially saying "don't bother testing your views; there's no logic, it's trivial, and it will be covered by an integration test." I don't understand how this has become the accepted wisdom. Integration tests serve an entirely different purpose than unit tests. If I break something, I don't want to know a half-hour later when my integration tests break; I want to know immediately.

    Sample scenario: let's say we're dealing with a standard CRUD app with a Customer entity. The customer has a name and an address. At each level of testing, I want to verify that the Customer retrieval logic gets both the name and the address properly. To test the repository, I write an integration test to hit the database. To unit-test the business rules, I mock out the repository, feed the business rules appropriate data, and verify my expected results are returned.

    What I'd like to do: to unit-test the UI, I mock out the business rules, set up my expected customer instance, render the view, and verify that the view contains the appropriate values for the instance I specified.

    What I'm stuck doing: to test the UI, I write an integration test, set up an appropriate login, create the required data in the database, open a browser, navigate to the customer, and verify the resulting page contains the appropriate values for the instance I specified.

    I realize that there is overlap between the two scenarios discussed above, but the key difference is the time and effort required to set up and execute the tests. If I (or another dev) remove the address field from the view, I don't want to wait for the integration test to discover this. I want it discovered and flagged in a unit test that gets run multiple times daily. I get the feeling that I'm just not grasping some key concept. Can someone explain why wanting immediate test feedback on the validity of an MVC view is a bad thing? (Or if not bad, then not the expected way to get said feedback.)
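
    For reference, the test the poster wants might look roughly like this NUnit-style sketch, where ViewTestHarness.RenderToString is a hypothetical helper that would run the view engine against a view and a model - precisely the plumbing ASP.NET MVC does not provide out of the box, and a large part of why view unit tests are frowned upon:

        [Test]
        public void DetailsView_ShowsCustomerNameAndAddress()
        {
            // Arrange: a known customer instance; business rules are mocked out
            var customer = new Customer { Name = "Jane Doe", Address = "1 Main St" };

            // Act: render the view to markup (RenderToString is hypothetical)
            string html = ViewTestHarness.RenderToString("~/Views/Customer/Details.cshtml", customer);

            // Assert: the rendered markup contains the values supplied above
            StringAssert.Contains("Jane Doe", html);
            StringAssert.Contains("1 Main St", html);
        }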

    Read the article

  • What's a nice explanation for pointers?

    - by Macneil
    In your own studies (on your own, or for a class) did you have an "aha" moment when you finally, really understood pointers? Do you have an explanation you use for beginner programmers that seems particularly effective? For example, when beginners first encounter pointers in C, they might just add &s and *s until it compiles (as I myself once did). Maybe it was a picture, or a really well-motivated example, that made pointers "click" for you or your student. What was it, and what did you try before that didn't seem to work? Were any topics prerequisites (e.g. structs, or arrays)? In other words, what was necessary to understand the meaning of & and *, to the point where you could use them with confidence? Learning the syntax and terminology or the use cases isn't enough; at some point the idea needs to be internalized.

    Update: I really like the answers so far; please keep them coming. There are a lot of great perspectives here, but I think many are good explanations/slogans for ourselves after we've internalized the concept. I'm looking for the detailed contexts and circumstances in which it dawned on you. For example: I only somewhat understood pointers syntactically in C. I heard two of my friends explaining pointers to another friend, who asked why a struct was passed with a pointer. The first friend talked about how it needed to be referenced and modified, but it was a short comment from the other friend where it hit me: "It's also more efficient." Passing 4 bytes instead of 16 bytes was the final conceptual shift I needed.
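
    The 4-bytes-versus-16-bytes observation at the end can be made concrete in a few lines of C; a minimal sketch (the sizes printed are typical, not guaranteed by the standard):

        #include <stdio.h>

        struct point { double x, y; };   /* 16 bytes on most platforms */

        /* Pass-by-value: the whole 16-byte struct is copied */
        static double sum_by_value(struct point p) { return p.x + p.y; }

        /* Pass-by-pointer: only an address (4 or 8 bytes) is copied */
        static double sum_by_pointer(const struct point *p) { return p->x + p->y; }

        int main(void)
        {
            struct point pt = { 3.0, 4.0 };
            printf("struct: %zu bytes, pointer: %zu bytes\n",
                   sizeof pt, sizeof &pt);
            printf("%f %f\n", sum_by_value(pt), sum_by_pointer(&pt));
            return 0;
        }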

    Read the article

  • Architectural Composition Languages

    - by C. Lawrence Wenham
    Recently stumbled upon this paper (PDF) talking about ACLs, or Architectural Composition Languages. They're a fusion of two earlier lines of research: Architectural Definition Languages (such as UML) and Object Composition Languages (such as XAML, WWF, or scripting languages). The goal of an ACL is to have a high-level description of a program's architecture which can also be compiled into a runnable program. The high-level description assists automated analysis, while the 'executability' means changes can be tested immediately. You would still author the components of the program in a conventional programming language (C, Java, Python, etc), but they would be composed into a complete program by the ACL. One of the expected benefits is that a program can be ported to a different platform by swapping in "similar but different" components. I've been hankering for something like this for a long time (see this answer I gave on a StackOverflow question a few years ago). The paper mentions that the researchers were working on a language called ACL/1 that initially targeted Java, but would be ported to support .Net as well. However, I can't find any more mention of ACL/1 anywhere. Has there been any more work done on this? Are there any other implementations of the ACL concept that are available for use or experimentation?

    Read the article

  • Is there any simple game that involves psychological factors?

    - by Roman
    I need to find a simple game in which several people interact with each other. The game should be simple to analyze (it should be easy to describe what happens in the game and what the players did). For that reason, video games are not appropriate for my purposes. I am thinking of a simple, schematic, strategic game where people can make a limited set of simple moves. Moreover, the moves of the game should not be conditioned by pure logic alone (as in chess or Go). The behavior in the game should depend on psychological factors, on relations between people. In more detail, I think it should be a cooperation game where people make their decisions based on mutual trust. It would be nice if players could express punishment and forgiveness in the game. Does anybody know a game that is close to what I have described above?

    ADDED: I should add that I need a game where the actions of players are simple and easy to formalize. Because of that, I cannot use verbal games (where communication between players is important). By simple actions I mean, for example, moves on a board from one position to another, or passing chips from one player to another, and so on.

    Read the article

  • Full Portfolio of x86 Systems On Display at Oracle OpenWorld

    - by kgee
    This OpenWorld, Oracle’s x86 hardware team will have two hardware demos, showcasing the new X3 systems as well as several other x86 solutions such as the ZFS Storage Appliance, Oracle Database Appliance and the Carrier Grade NETRA systems. These two demos are located in the South Hall, in Oracle’s booth 1133 and Intel’s booth 1101. The Intel booth will feature additional demos, including 3D demos of each server, a static architectural demo, the Oracle x86 Grand Prix video game and the Intel Theatre featuring several presentations by Intel’s partners.

    Oracle’s Intel Theatre schedule and topics include:

    Monday
    1. 10:30 a.m. - Engineered to Work Together: Oracle x86 Systems in the Data Center
    2. 12:30 p.m. - The Oracle NoSQL Database on the Intel Platform
    3. 1:30 p.m. - Accelerate Your Path to Cloud with Oracle VM
    4. 3:30 p.m. - Why Oracle Linux is the Best Linux for Your Intel Based Systems
    5. 4:30 p.m. - Accelerate Your Path to Cloud with Oracle VM

    Tuesday
    1. 10:00 a.m. - "Speed of Thought" Analytics using In-Memory Analytics
    2. 1:30 p.m. - A Storage Architecture for Big Data: "It’s Not JUST Hadoop"
    3. 2:00 p.m. - Oracle Optimized Solution for Enterprise Cloud Infrastructure
    4. 2:30 p.m. - Configuring Storage to Optimize Database Performance and Efficiency
    5. 3:30 p.m. - Total Cloud Control for Oracle's x86 Systems

    Wednesday
    1. 10:00 a.m. - Big Data Analysis Using the R Programming Language
    2. 11:30 a.m. - Extreme Performance Overview: The Oracle Exadata Database Machine
    3. 1:30 p.m. - Oracle TimesTen In-Memory Database Overview

    Read the article

  • Message Driven Bean JMS integration

    - by Anthony Shorten
    In Oracle Utilities Application Framework V4.1 and above, the product introduced the concept of real-time JMS integration within the Framework for interfacing. Customers familiar with older versions of the Framework will recall that we used a component called the Multi-Purpose Listener (MPL), which was a very light service bus for calling interface channels (including JMS). The MPL is not supplied with all products, and customers prefer to use Oracle SOA Suite and native methods rather than the MPL. In Oracle Utilities Application Framework V4.1 (and for Oracle Utilities Application Framework V2.2 via Patches 9454971, 9256359, 9672027 and 9838219) we introduced real-time JMS integration natively: outbound JMS integration, and Message Driven Beans (MDBs) for incoming integration. The outbound integration has not changed much between releases: you create an Outbound Message Type to indicate the record types to send out, create a JMS Sender (though now you use the Real Time Sender), and then create an External System definition to complete the configuration. When an outbound message of the configured type and external system appears in the table (via a business event such as an algorithm or plug-in script), the Oracle Utilities Application Framework will place the message on the configured queue linked to the JMS Sender. The inbound integration has changed. In the past you created XAI Receivers and specified configuration about which types of transactions to process. This is now all configuration-file driven. The configuration files for the Business Application Server (ejb-jar.xml and weblogic-ejb-jar.xml) define Message Driven Beans and the queues to monitor. When a message appears on the queue, the MDB processes it through our web services interface. Configuration of the MDB can be done natively (by editing the configuration files) or through the new user exit capabilities (which are aimed at maintaining custom configuration across upgrades). The latter is better, as you build fragments of configuration, which makes the configuration easier to maintain. In the next few weeks a number of new whitepapers will be released to illustrate the features of the Oracle WebLogic JMS and Oracle SOA Suite integration capabilities.
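
    As a rough illustration only (the bean name and class below are hypothetical placeholders, not the product's actual descriptor contents), an MDB declaration in ejb-jar.xml follows the standard Java EE shape, with the matching weblogic-ejb-jar.xml entry binding the bean to the JNDI name of the queue to monitor:

        <enterprise-beans>
            <message-driven>
                <ejb-name>InboundMessageMDB</ejb-name>
                <ejb-class>com.example.InboundMessageBean</ejb-class>
                <messaging-type>javax.jms.MessageListener</messaging-type>
                <message-destination-type>javax.jms.Queue</message-destination-type>
            </message-driven>
        </enterprise-beans>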

    Read the article

  • Solving Big Problems with Oracle R Enterprise, Part I

    - by dbayard
    Abstract: This blog post shows how we used Oracle R Enterprise to tackle a customer’s big calculation problem across a big data set.

    Overview: Databases are great for managing large amounts of data in a central place with rigorous enterprise-level controls. R is great for doing advanced computations. Sometimes you need to do advanced computations on large amounts of data, subject to rigorous enterprise-level concerns. This blog post shows how Oracle R Enterprise, which brings R together with the Oracle Database, enabled us to do some pretty sophisticated calculations across 1 million accounts (each with many detailed records) in minutes.

    The problem: A financial services customer of mine needs to calculate the historical internal rate of return (IRR) for its customers’ portfolios. This information is needed for customer statements and the online web application. In the past, they had solved this with a home-grown application that pulled trade and account data out of their data warehouse and ran the calculations. But this home-grown application was not able to do this fast enough, plus it was a challenge for them to write and maintain the code that did the IRR calculation.

    IRR - a problem that R is good at solving: Internal rate of return is an interesting calculation in that in most real-world scenarios it is impractical to calculate exactly; rather, approximation techniques need to be used. In this blog post we will discuss calculating the "money weighted rate of return", but in the actual customer proof of concept we used R to calculate both money weighted and time weighted rates of return. You can learn more about money weighted rates of return here: http://www.wikinvest.com/wiki/Money-weighted_return

    First steps - calculating IRR in R: We will start by calculating the IRR in standalone/desktop R. In our second post, we will show how to take this desktop R function, deploy it to an Oracle Database, and make it work at real-world scale. The first step was to get some sample data. For a historical IRR calculation, you need balances and cash flows. In our case, the customer provided us with several accounts’ worth of sample data in Microsoft Excel. (The figure at this point in the original post shows part of the sample-data spreadsheet, with balances and cash flows for a sample account: BMV = beginning market value, FLOW = cash flow in/out of the account, EMV = ending market value.)

    Once we had the sample spreadsheet, the next step was to read the Excel data into R. This is something that R does well, and R offers multiple ways to work with spreadsheet data. For instance, one could save the spreadsheet as a .csv file. In our case, the customer provided a spreadsheet file containing multiple sheets, where each sheet provided data for a different sample account. To handle this easily, we took advantage of the RODBC package, which allowed us to read the Excel data sheet-by-sheet without having to create individual .csv files. We wrote ourselves a little helper function called getsheet() around the RODBC package, then loaded all of the sample accounts into a data.frame called SimpleMWRRData.

    Writing the IRR function: At this point, it was time to write the money weighted rate of return (MWRR) function itself. The definition of MWRR is easily found on the internet or, if you are old school, in an investment performance textbook. In the customer proof, we based our calculations on the ones defined in The Handbook of Investment Performance: A User’s Guide by David Spaulding, since this is the reference book used by the customer. (One of the nice things we found during the course of this proof of concept is that by using R to write our IRR functions we could easily incorporate the customer's specific variations and business rules into the calculation.)

    The key thing in calculating IRR is the need to solve a complex equation with a numerical approximation technique. For IRR, you need to find the value of the rate of return (r) that sets the net present value of all the flows in and out of the account to zero. With R, we solve this by defining an NPV function of the rate r, in which bmv is the beginning market value, cf is a vector of cash flows, t is a vector of times (relative to the beginning), emv is the ending market value, and tend is the ending time. Since solving for r is a one-dimensional optimization problem, we decided to take advantage of R’s optimize method (http://stat.ethz.ch/R-manual/R-patched/library/stats/html/optimize.html). The optimize method can be used to find a minimum or maximum; to find the value of r where our NPV function is closest to zero, we wrapped our NPV function inside the abs function and asked optimize to find the minimum. In the call to optimize, low and high are scalars that indicate the range in which to search for an answer. To test this out, we set values for bmv, cf, t, emv and tend, with low and high set to some reasonable defaults. For one example account, this yielded a negative 2.2% money weighted rate of return.

    Enhancing and packaging the IRR function: With numerical approximation methods like optimize, sometimes you will not be able to find an answer with your initial set of inputs. To account for this, our approach was to first try to find an answer for r within a narrow range, then, if we did not find an answer, to call optimize() again with a broader range. See the R help page on optimize() for more details about the search range and its algorithm. At this point, we can write a simplified version of our MWRR function. (Our real-world version is more sophisticated in that it calculates rates of return for 5 different time periods [since inception, last quarter, year-to-date, last year, the year before last] in a single invocation. In the actual customer proof, we also defined time-weighted rate of return calculations. The beauty of R is that it was very easy to add these enhancements and additional calculations to our IRR package.)

    To simplify code deployment, we then created a new package containing our IRR functions and sample data; for this blog post, we only need to include our SimpleMWRR function and our SimpleMWRRData sample data. To turn the generated package skeleton into something usable, at a minimum you need to edit the SimpleMWRR.Rd and SimpleMWRRData.Rd files in the \man subdirectory and provide at least a value for the "title" section in each. Once that is done, you can change to the IRR directory and build and install the package from the command line. The myIRR package for this blog post (which has both the SimpleMWRR source and the SimpleMWRRData sample data) is downloadable from here: myIRR package

    Testing the myIRR package: Once the function was converted to an installable package, we tested it by loading the package and re-running the sample calculation.

    Calculating IRR for all the accounts: So far, we have shown how to calculate the IRR for a single account. The real-world issue is how to calculate the IRR for all of the accounts. This is the kind of situation where we can leverage the "Split-Apply-Combine" approach (see http://www.cscs.umich.edu/~crshalizi/weblog/815.html). Given that our sample data can fit in memory, one easy approach is to use R’s "by" function to calculate the money weighted rate of return for each account in our sample data set. (Other approaches to Split-Apply-Combine, such as plyr, can also be used; see http://4dpiecharts.com/2011/12/16/a-quick-primer-on-split-apply-combine-problems/.)

    Recap and next steps: At this point, you’ve seen the power of R being used to calculate IRR. There were several good things:
    - R could easily work with the spreadsheets of sample data we were given.
    - R’s optimize() function provided a nice way to solve for IRR: it was fast, and it allowed us to avoid having to code our own iterative approximation algorithm.
    - R was a convenient language in which to express the customer-specific variations, business rules, and exceptions that often occur in real-world calculations; these could easily be added to our IRR functions.
    - The Split-Apply-Combine technique can be used to perform the IRR calculation for multiple accounts at once.

    However, there are several challenges yet to be conquered at this point in our story:
    - The actual data that needs to be used lives in a database, not in a spreadsheet.
    - The actual data is much, much bigger: too big to fit into the normal R memory space and too big to want to move across the network.
    - The overall process needs to run fast, much faster than a single processor would allow.
    - The actual data needs to be kept secure, which is another reason not to move it from the database and across the network.
    - The process of calculating the IRR needs to be integrated with other database ETL activities, so that IRRs can be calculated as part of the data warehouse refresh processes.

    In our next blog post in this series, we will show how Oracle R Enterprise solved these challenges.
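
    The post's inline code did not survive the reposting here, so the following R sketch is a reconstruction from the prose alone (the formula follows the stated definitions; the column names in the commented "by" call are guesses), not the author's exact code:

        # NPV of an account's flows at rate r (definitions from the post:
        # bmv = beginning market value, cf = vector of cash flows,
        # t = vector of times relative to the start, emv = ending market
        # value, tend = ending time)
        npv <- function(r, bmv, cf, t, emv, tend) {
          bmv * (1 + r)^tend + sum(cf * (1 + r)^(tend - t)) - emv
        }

        # Money weighted rate of return: the r that drives NPV to zero,
        # found by minimizing |NPV| over a search range [low, high]
        SimpleMWRR <- function(bmv, cf, t, emv, tend, low = -0.99, high = 10) {
          optimize(function(r) abs(npv(r, bmv, cf, t, emv, tend)),
                   lower = low, upper = high)$minimum
        }

        # Split-Apply-Combine across accounts with "by" (assumes the
        # data.frame layout described in the post; column names are guesses)
        # by(SimpleMWRRData, SimpleMWRRData$account,
        #    function(acct) SimpleMWRR(acct$BMV[1], acct$FLOW, acct$t,
        #                              tail(acct$EMV, 1), max(acct$t)))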

    Read the article

  • still about perl vs python but (to me) slightly different from what has been asked [closed]

    - by B Chen
    Being a newbie to coding, I read on this site that Perl is still as viable as it has ever been, while Python, quoted from someone else's post, is good but just "snake oil" (not sure what this refers to exactly, though). So from the responses in that post, I got the gist that Perl is good and worth learning. My question is - pardon me for phrasing it in this "non-programmer's" way - which one should I learn FIRST? (I am actually currently learning R.) Here is the background info: (a) I will be using it mostly for data mining and statistical analysis. (b) Will there be a "first" and "later" issue with learning either Perl or Python? That is, after I become competent with one language, will there be a need to learn the second one (for a similar task)? (c) If there are circumstances where I must learn the second one, would learning Perl FIRST be better than learning Python first? I hope to learn as much as I can from exchanging info here, so please help provide more than just "it depends" type info. Many thanks to all who choose to respond to my query.

    Read the article

  • Similar domains using my business' content, and stealing SEO results

    - by Murciano
    I've been hired to create a website for a restaurant in my city, let's call it "Flying Dragon" Chinese restaurant. The restaurant has never had a website, though the business itself is about ten years old. However, if you Google the restaurant's name, the first site that comes up seems to be affiliated with the restaurant itself, even though it is not. This site - let's say, flyingdragonchinese.com - is also the one that Google has apparently selected, in its results, to be the official website of the restaurant - in essence, the first Google result is flyingdragonchinese.com, and directly beneath it, within the same entry, are the Google reviews and contact information. Upon visiting flyingdragonchinese.com (again, not the actual name), I see that the website has taken the menu content from the restaurant, in the same manner that Yelp does, but it also seems (to the untrained eye) to be the restaurant's official site. Basically, someone has created a fake website for the business (I am not sure why) using its actual menu and contact information, and is hogging the search results. The concept is similar to a "scraping site" except that the information seems to have been stolen manually. The main problem is that visitors to this site will have an inaccurate impression of the restaurant. I feel like the obvious solution is to register a new domain for my site, and simply beat out this competitor (or whatever it is) with smarter SEO and business verification with Google. However, the Conan-the-Barbarian-web-designer part of me wants to somehow bash this other site (deservedly?) into oblivion. But I don't know what I can really do, besides maybe issuing a cease-and-desist letter, or trying to contact the web host for the site, although there is no contact information available on this "fake" site for the site owner. Has anyone ever experienced something like this? Is there any solution?

    Read the article

  • Leveraging ERP Investments with EPM and BI Solutions

    - by john.orourke(at)oracle.com
    Now that many organizations have implemented ERP systems to automate and integrate their operational processes, IT investments are beginning to shift to the management systems, i.e. the EPM and BI tools and applications that integrate data from multiple transactional systems. These solutions automate and integrate the management processes and enable organizations to achieve "management excellence", becoming smarter, more agile and more aligned than their competitors. In fact, the results of a recent IDC survey indicate that "organizations that have implemented performance management more broadly are nearly four times more likely to be among the most competitive organizations in their industry." One example of an organization that is leveraging its ERP investments with Oracle EPM and BI solutions is General Dynamics. The Business Intelligence Collaborative (BIC) group within General Dynamics' IT organization assists various business units with the implementation, application support, and application hosting for their Business Intelligence and Enterprise Performance Management applications. Attend the Oracle Virtual Trade Show "Spotlight on Customer Success" on February 3rd to hear the details of how General Dynamics is using Oracle Essbase, Hyperion Planning, and Oracle BI to improve their planning, reporting and analysis processes and leverage their investments in Oracle E-Business Suite and other operational systems. During the event, you can also hear about the latest developments and plans for Oracle Applications products, as well as what's coming with Oracle Fusion Applications. Here's a link to the Virtual Trade Show event overview and registration page. The event runs from 8AM - 1PM PST/11AM - 4PM EST, and the EPM session is 10:30 - 11AM PST/1:30 - 2PM EST. http://event.on24.com/event/26/79/15/rt/opFb.html?partnerref=internal I hope you'll join us on February 3rd!

    Read the article

  • Happy New Year! Back to school :)

    - by Jim Wang
    A brand new year is upon us and it’s time to get cracking with WebMatrix again…and go back to school :).  Last year we ran a successful product walkthrough for WebMatrix Beta 2 with our students from around the world, gathering awesome feedback for the final version of WebMatrix which is coming soon!  I’d like to take this chance to thank all the students who participated in this effort…you have really helped make the final product much better than it would have been otherwise. In 2011, we’re looking, as always, at bigger and better things.  One of the ideas that has been floating around is the concept of a WebMatrix college course that you could take for actual credit.  Of course, this is going to require coordination with college educators, but we think we’re up to the challenge :) If your school is still using an antiquated language to teach their web development 101 course, and you’d like to switch to WebMatrix, we’d like to hear your voice – better yet if you have contacts from your school and would like to be one of the first to give the program a try!  Comment on this post or email wptsdrext at microsoft.com.  We look forward to partnering with you guys ^^.

    Read the article

  • Suggested Resources Visual Studio Plug-In

    Today's post is a quick plug for a new tool developed by my friend Olaf Conijn, who (amongst other things) has been a developer on several versions of Enterprise Library. His new tool is called Suggested Resources for .NET Developers, and the current 0.8 release works with both Visual Studio 2008 and Visual Studio 2010. So what does it do? Well, here's what Olaf has to say: This Visual Studio Integration Package is a proof of concept in: aggregation of online content within the Visual Studio IDE, and analysis of development activities within the Visual Studio IDE. This combination of features allows Suggested Resources for .NET Developers to proactively suggest online content that applies to the task being performed by a developer... a bit like having a programming pair that searches for online resources while you focus on getting the job done. For more info, screenshots and downloads, head to the CodePlex project site or the Visual Studio Gallery page.

    Read the article

  • Were the first assemblers written in machine code?

    - by The111
    I am reading the book The Elements of Computing Systems: Building a Modern Computer from First Principles, which contains projects encompassing the building of a computer from boolean gates all the way up to high-level applications (in that order). The current project I'm working on is writing an assembler, in a high-level language of my choice, to translate Hack assembly code into Hack machine code (Hack is the name of the hardware platform built in the previous chapters). Although the hardware has all been built in a simulator, I have tried to pretend that I am really constructing each level using only the tools available to me at that point in the real process. That said, it got me thinking. Using a high-level language to write my assembler is certainly convenient, but for the very first assembler ever written (i.e. in history), wouldn't it need to be written in machine code, since that's all that existed at the time? And a related question... how about today? If a brand new CPU architecture comes out, with a brand new instruction set and a brand new assembly syntax, how would the assembler be constructed? I'm assuming you could still use an existing high-level language to generate binaries for the assembler program, since if you know the syntax of both the assembly and machine languages for your new platform, then the task of writing the assembler is really just a text-analysis task and is not inherently tied to that platform (i.e. needing to be written in that platform's machine language)... which is the very reason I am able to "cheat" while writing my Hack assembler in 2012, and use a preexisting high-level language to help me out.
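
    As a feel for how mechanical the translation is, here is a toy sketch (mine, not the book's) of the A-instruction half of a Hack assembler; Hack encodes "@value" as a 0 bit followed by the value in 15 binary digits:

        def assemble_a_instruction(line):
            """Translate a Hack A-instruction, e.g. '@2' -> '0000000000000010'."""
            value = int(line[1:])          # strip the leading '@'
            return format(value, "016b")   # 16-bit binary string with leading zeros

        # C-instructions ('dest=comp;jump') are handled the same way: look up
        # each mnemonic in a table and concatenate the resulting bit fields.
        assert assemble_a_instruction("@2") == "0000000000000010"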

    Read the article

  • Dealing with institutionalized programmers.

    - by Singleton
    Sometimes programmers who work on a project for a long time tend to become institutionalized. It is difficult to convince them with reasoning, and even if we manage to convince them, they are reluctant to take the suggestion on board. How do we handle the situation without creating friction in the team? I mean institutionalized in terms of practices. I recently joined a project where the build & release process was made complicated by unnecessary roadblocks. My suggestion was that we could get rid of some of the development overheads (like filling in a few spreadsheets) just by integrating the defect management and version control tools (both are IBM Rational tools, so integration can be very easy and a one-off effort). Also, by using tools like Maven and Ant (the project involves Java and some COTS products), the build & release process could be simplified, reducing manual errors and intervention. I managed to convince the team and am ready to put in the effort to develop a proof of concept. But the 'senior' developer is not willing to take it on board. One reason could be that the current process makes him valuable to the team.

    Read the article

  • CodePlex Daily Summary for Sunday, June 16, 2013

    CodePlex Daily Summary for Sunday, June 16, 2013

    Popular Releases

    - Employee Info Starter Kit: v6.0 - ASP.NET MVC Edition: Release Home - Getting Started - Hands on Coding Walkthrough – Technology Stack - Design & Architecture. EISK v6.0 – ASP.NET MVC edition bundles most of the greatest and most successful platforms, frameworks and technologies together, to enable web developers to learn and build manageable and high-performance web applications with rich user experience effectively and quickly. User end specifications: creating a new employee record; reading existing employee records; updating an existing employee reco...
    - OLAP PivotTable Extensions: Release 0.8.1: Use the 32-bit download for Excel 2007, Excel 2010 32-bit (even Excel 2010 32-bit on a 64-bit operating system) and Excel 2013 32-bit (even Excel 2013 32-bit on a 64-bit operating system). Use the 64-bit download for Excel 2010 64-bit and Excel 2013 64-bit. Just download and run the EXE; there is no need to uninstall the previous release. If you have problems getting the add-in to work, see the Troubleshooting Installation wiki page. The new features in this release are: View #VALUE! Err...
    - VidCoder: 1.4.22: New in 1.4.22: added Xbox 360 preset (thanks to Relhak); added Spanish translation (thanks to fantasmanegro); added Basque translation (thanks to azpidatziak); fixed behavior of custom anamorphic auto display width and max width/height; fixed double-logging on local encodes; fixed remote encoder not using libdvdnav even when enabled, which had caused some problems with multi-angle DVDs. New in 1.4: updated HandBrake core to 0.9.9; Blu-ray subtitle (PGS) support; additional framerates: 30...
    - WPF Application Framework (WAF): WPF Application Framework (WAF) 3.0.0.440: Version 3.0.0.440 (Release Candidate). This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Please build the whole solution before you start one of the sample applications. Requirements: .NET Framework 4.5 (the package contains a solution file for Visual Studio 2012). Changelog legend: [B] breaking change; [O] marked member as obsolete. Samples: use ValueConverters via StaticResource instead of x:Static. Other downloads: Downloads Overview.
    - SFDL.NET: SFDL.NET v1.1.0.5: Changelog: implemented SFDL Container v4 (AES encryption, set character set); added stopwatch (download time); many bugfixes and improvements.
    - BlackJumboDog: Ver5.9.1: 2013.06.13 Ver5.9.1 (release notes garbled in the original encoding).
    - Lakana - WPF Framework: Lakana V2.1 RTM: dynamic text localization; a new application-wide message bus.
    - Free language translator and file converter: Free Language Translator 3.3: some bug fixes and a new link to video tutorials on YouTube.
    - Pokemon Battle Online: ETV: (release notes garbled in the original encoding).
    - Modern UI for WPF: Modern UI 1.0.4: The ModernUI assembly, including a demo app demonstrating the various features of Modern UI for WPF. Related downloads: NuGet - ModernUI for WPF is also available as a NuGet package in the NuGet gallery (id: ModernUI.WPF); Templates - a Visual Studio 2012 extension containing a collection of project and item templates for Modern UI for WPF (the extension includes the ModernUI.WPF NuGet package).
    - Toolbox for Dynamics CRM 2011: XrmToolBox (v1.2013.6.11): XrmToolbox improvements: added exception handling when loading plugins; updated information panel to display two lines of text. Tools improvement: Metadata Document Generator (v1.2013.6.10). New tool: Web Resources Manager (v1.2013.6.11) - retrieve the list of unused web resources; retrieve web resources from a solution. All tools list: Access Checker (v1.2013.2.5), Attribute Bulk Updater (v1.2013.1.17), FetchXml Tester (v1.2013.3.4), Iconator (v1.2013.1.17), Metadata Document Generator (v1.2013.6.10), Privilege...
    - Document.Editor: 2013.23: What's new for Document.Editor 2013.23: new Insert Emoticon support; improved Format support; minor bug fixes, improvements and speed-ups.
    - Christoc's DotNetNuke Module Development Template: DotNetNuke 7 Project Templates V2.4 for VS2012: V2.4, release date 6/10/2013. Items addressed in this 2.4 release: updated MSBuild Community Tasks reference to 1.4.0.61. See also: Setting up your DotNetNuke Module Development Environment; Installing Christoc's DotNetNuke Module Development Templates; Customizing the latest DotNetNuke Module Development Project Templates.
    - Layered Architecture Sample for .NET: Leave Sample - June 2013 (for .NET 4.5): Thank you for downloading Layered Architecture Sample. Please read the accompanying README.txt file for setup and installation instructions. This is the first of a series of revised samples that will be released to illustrate the layered architecture design pattern. This version is only supported on Visual Studio 2012. This set contains 2 samples that illustrate the use of ASP.NET Web Forms, ASP.NET Model Binding, Windows Communication Foundation (WCF), Windows Workflow Foundation (W...
    - Papercut: Papercut 2013-6-10: Features: shows From, To, Date and Subject of email; async UI and loading spinner. Enhancements: improved speed when loading large attachments; decoupled SMTP server into a secondary assembly; upgraded to .NET v4. Fixes: messages lost when received very fast; email encoding issues on display / automatically detect message encoding. Installation note: installation is copy and paste; incoming messages are written to the start-up directory of Papercut. If you do n...
    - Supporting Guidance and Whitepapers: v1.BETA Unit Test Generator Documentation: Welcome to the Unit Test Generator. Once you've moved to Visual Studio 2012, what's a dev to do without the Create Unit Tests feature? Based on the high demand on User Voice for this feature to be restored, the Visual Studio ALM Rangers have introduced the Unit Test Generator Visual Studio extension. The extension adds the "create unit test" feature back, with a focus on automating project creation, adding references and generating stubs, extensibility, and targeting of multiple test framewor...
    - MapWindow 4: MapWindow GIS v4.8.8 - Release Candidate - 32Bit: Download the release notes here: http://svn.mapwindow.org/svnroot/MapWindow4Dev/Bin/MapWindowNotes.rtf
    - LINQ to Twitter: LINQ to Twitter v2.1.06: Supports .NET 3.5, .NET 4.0, .NET 4.5, Silverlight 4.0, Windows Phone 7.1, Windows Phone 8, Client Profile, Windows 8, and Windows Azure. 100% Twitter API coverage. Also supports Twitter API v1.1! Also on NuGet.
    - VR Player: VR Player 0.3.1 ALPHA: new plugin system with individual folders; TrackIR support; Maya and 3ds Max format support; dual-screen support; mono layouts (left and right); cylinder height parameter; barrel effect factor parameter; Razer Hydra filter parameter; VRPN bug fixes; UI improvements; performance improvements; stabilization and logging with Log4Net; new default values based on user feedback; CTRL key to open menu.
    - SimCityPak: SimCityPak 0.1.0.8: New features: import BMP color palettes for vehicles; import RASTER files (uncompressed 8.8.8.8 DDS files); view different channels of RASTER files or a preview of all layers combined; find text in javascripts; TGA viewer; ground textures added to the lot editor; many additional identified instances and properties.

    New Projects

    - ADJD-S311-CR99 Colour Sensor Arduino: Integrating the Avago ADJD-S311-CR999 with Arduino.
    - Berkeley Algorithm: Student project for a DS course; Berkeley algorithm in C#.
    - Code Kata - HarryPotter Win8: HarryPotter series discount programming exercise.
    - Had CMS: Had CMS.
    - jean0615mercurialmm: dd.
    - jet.version.Incrementor: A console tool that parses AssemblyInfo.cs files in a given directory and sets the AssemblyVersion and AssemblyFileVersion attributes to a given version. It can be easily...
    - Logger - logging for your .NET project using file, mail, debug output: A lightweight yet competent logger for .NET with configurable output modules, such as log file, e-mail and debug log. Easy to add your own output module if needed.
    - MatUtils: A project that includes some mathematical tools.
    - MoGo Mobile: The first makings of an open-source mobile game!
    - myFirstHTML5: The project was to create a mobile-responsive website.
    - NeTools: This tool allows you to know and monitor your network better. You'll be able to hijack the network's traffic, poison the network, kick devices from it and many more.
    - Prism Photo Browser: A functioning photo browser written in C# for the WPF platform, implementing the Model View View-Model (MVVM) design pattern and Prism.
    - QuesTime: This is a quiz project.
    - SM130 Arduino Integration: A project to integrate an SM130 RFID module with Arduino.
    - SQL Server Analysis Services Cube Status Web Part for SharePoint: The Cube Status project provides a SharePoint Web Part that connects to a SQL Server Analysis Services instance and shows the status of a specific cube.
    - SQLite Sync for Windows Phone 8: This project is the Windows Phone 8 implementation of the Sync Framework Toolkit to enable synchronization with Windows Phone 8 and SQLite.
    - Test Project: test.
    - Tool To Recover or Restore Windows 7 Files: Impossible Is Nothing! Get easy-to-use Windows 7 recovery software, a fully helpful application that can easily recover or restore Windows 7 files with full quality.
    - Vitus Localization: An Orchard module containing a collection of features useful for localized or multilingual Orchard websites.
    - Wedn.Net: Blog (description garbled in the original encoding).

    Read the article

  • OLL Live webcast - Using SQL for Pattern Matching in Oracle Database

    - by KLaker
    If you are interested in learning about our exciting new 12c SQL pattern matching feature, then mark your diaries. On Wednesday, October 30th at 8:00 am (US/Pacific time zone), Supriya Ananth, one of our top curriculum developers at Oracle, will host an OLL webcast on our new SQL pattern matching feature. The ability to recognize patterns in a sequence of rows is a capability that was widely desired, but not possible with SQL until now. Row pattern matching in native SQL improves application and development productivity and query efficiency for row-sequence analysis. With Oracle Database 12c you can use the new MATCH_RECOGNIZE clause to perform pattern matching in SQL as follows:
    - Logically partition and order the data using the PARTITION BY and ORDER BY clauses.
    - Use regular-expression syntax in the PATTERN clause to define the patterns of rows to seek. These patterns are a powerful and expressive feature, applied to the pattern variables you define.
    - Specify, in the DEFINE clause, the logical conditions required to map a row to a row pattern variable.
    - Define measures, which are expressions usable in the MEASURES clause of the SQL query.
    For more information and to register for this exciting webcast, please visit the OLL Live website here: https://apex.oracle.com/pls/apex/f?p=44785:145:116820049307135::::P145_EVENT_ID,P145_PREV_PAGE:461,143. Please note: if the above link does not work, go to OLL (https://apex.oracle.com/pls/apex/f?p=44785:1:) and click the OLL Live icon (upper right, beneath the Login link, or the Logout link if you are already logged in). The pattern matching webcast is listed on the calendar of events on 30 October.
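
    As a taste of how the clauses above fit together, here is a sketch along the lines of the classic "V-shape" stock-price example from the Oracle documentation (a ticker table with symbol, tstamp and price columns is assumed):

        SELECT *
        FROM ticker MATCH_RECOGNIZE (
            PARTITION BY symbol           -- logically partition the rows
            ORDER BY tstamp               -- and order them within each partition
            MEASURES STRT.tstamp AS start_tstamp,
                     LAST(DOWN.tstamp) AS bottom_tstamp,
                     LAST(UP.tstamp)   AS end_tstamp
            ONE ROW PER MATCH
            AFTER MATCH SKIP TO LAST UP
            PATTERN (STRT DOWN+ UP+)      -- a fall followed by a rise: a V shape
            DEFINE
                DOWN AS DOWN.price < PREV(DOWN.price),
                UP   AS UP.price   > PREV(UP.price)
        ) MR
        ORDER BY MR.symbol, MR.start_tstamp;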

    Read the article

  • How to pick a great working team?

    - by Javierfdr
    I've just finished my master's and I'm starting to dig into the working world, i.e. learning how programming teams and technology companies work in the real world. I'm starting to sketch out the idea of my own service or product based on free software, and I will require a well-coupled, enthusiastic and fluid team to build it. My problem is that I'm not sure which skills to ask for in a programming team of 4-5 members. I have many friends and acquaintances with whom I've worked during my studies. Most of those I have in mind are very capable and smart people, with a good logic and programming base, although some of them have characteristics that I believe could influence the group negatively: lack of communication, fear of debating ideas, reluctance to give ground when debating, lack of structured programming (testing, good commenting, up-front design and analysis). Some of them have these negative characteristics, but most of them have a lot of enthusiasm, good working skills (from an individual point of view), and the ability to see the whole picture. The question is: how do I pick the best team for a large-scale project with a lot of programming? Which of these negative traits do you think are just too damaging? Which can be softened with good leadership? Which good skills should I expect? And any other opinions about the social and programming skills of a programming team are welcome.

    Read the article

  • Coherence Management with EM Cloud Control 12c –demo for partners

    - by JuergenKress
    For access to the Oracle demo systems, please visit OPN and talk to your Partner Expert. We are pleased to announce the availability of the Coherence Management demo, which showcases some of the key capabilities of Management Pack for Oracle Coherence and JVM Diagnostics (licensed under WLS Management Pack EE and Management Pack for Non-Oracle MW). This demo focuses on some of the performance management and configuration management solutions for Oracle Coherence. The demo flow showcases the key enhancements made in the Enterprise Manager 12c release, which include a new customizable performance summary, cache data management and configuration management.

    Demo highlights - the demo showcases the following capabilities:
    - Centralized monitoring for enterprise-wide Coherence deployments
    - Drill-down diagnostics
    - Customizable performance views
    - Monitoring performance trends
    - Monitoring caches, nodes, services, etc.
    - Performance and log alerts
    - Real-time Java diagnostics and memory leak analysis
    - Cache data management
    - Lifecycle management
    - Provisioning Coherence on a new machine
    - Starting nodes on a machine where Coherence is already running
    - Killing a node process

    Demo instructions: Go to the DSS website for Oracle Partners. On the Standard Demo Launchpad page, under the “Middleware Management” section, click on the link “EM Cloud Control 12c Coherence Management” (tagged as “NEW”). The specific demo launchpad page contains a link to the detailed demo script, with instructions on how to show the demo. Read more on Community Events and post your comment here.

    WebLogic Partner Community: For regular information, become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: Coherence,Coherence demo,DSS,CAF,WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article


  • Is what someone publishes on the Internet fair game when considering them for employment as a programmer?

    - by Jon Hopkins
    (Originally posted on Stack Overflow, but closed there and more relevant here.) So we first interviewed a guy for a technical role and he was pretty good. Before the second interview we googled him and found his MySpace page, which could, to put it mildly, be regarded as inappropriate. Just to be clear, there was no doubt that it was his page (name, photos, matching biographical information and so on). The content was entirely personal and in no way related to his professional abilities or attitude. Is it fair to consider this when thinking about whether to offer him a job? In most situations my response would be that what goes on in someone's private life is their own business. However, for anyone technical who professes (implicitly or explicitly) to understand the Internet and the possibilities it offers, is posting things in a way which can so obviously be discovered a significant error of judgement? EDIT: Clarification - essentially it was a fairly graphic commentary on porn (but of, shall we say, a non-academic nature). I'm actually more interested in the general concept than the specific incident, as it's something we're likely to see more of in the future as people put more and more of themselves online. My concerns are not primarily about him and how he feels about such things (he's white, straight, male and about the last possible victim of discrimination on the planet in that sense), but more about how it reflects on the company that a very simple search (basically his name) returns these things, and that clients may also do it. We work in a relatively conservative industry.

    Read the article

  • When do domain concepts become application constructs?

    - by Noren
    I recently posted a question regarding recovering a DDD architecture that had become an anemic domain model into a multitier architecture, and this question is a follow-on of sorts. My question is: when do domain concepts become application constructs? My application is a local C# 4/WPF client with the following architecture:

    Presentation Layer
    - Views
    - ViewModels

    Business Layer
    - ???

    Domain Layer
    - Classes that take the POCOs with primitive types and create domain concepts (e.g. image, layer, etc.)
    - Sanity-checks values (e.g. image width < 0)
    - Interfaces for DTOs
    - Interface for a repository that abstracts the filesystem

    Data Access Layer
    - Classes that parse the proprietary binary files into POCOs with primitive types by explicit knowledge of the file format
    - Implementation of the domain DTOs
    - Implementation of the domain repository class

    Local Filesystem
    - Proprietary binary files

    When does the MyImageType domain class with Int32 width, height, and Int32[] pixels become a System.Windows.Media.ImageDrawing? If I put that in the domain layer, it seems like implementation details are being leaked (what if I didn't want to use WPF?). If I put it in the presentation layer, it seems like it's doing too much. If I create a business layer, it seems like it would be doing too little, since there are few "rules" given the CRUD nature of the application. I think all of my reading has led to analysis paralysis, so I thought fresh eyes might lend some perspective.
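
    One common answer is to keep the WPF type out of the domain entirely and map at the presentation boundary. A minimal sketch of that idea (assuming MyImageType's Int32[] holds one 32-bit BGRA pixel per element, which is an assumption, not something stated above):

        using System.Windows.Media;
        using System.Windows.Media.Imaging;

        // Presentation-layer mapper: the domain never references WPF types.
        public static class ImagePresentationMapper
        {
            public static ImageSource ToImageSource(MyImageType image)
            {
                // Assumes one Int32 per pixel in BGRA order; stride is bytes per row.
                return BitmapSource.Create(
                    image.Width, image.Height,
                    96, 96,                    // DPI
                    PixelFormats.Bgra32,
                    null,                      // no palette
                    image.Pixels,
                    image.Width * 4);
            }
        }

    A ViewModel can then expose the resulting ImageSource while the domain keeps its plain width/height/pixels representation, so nothing below the presentation layer references WPF.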

    Read the article

  • MAAS/JuJu Clarifications

    - by ectoskeleton
    I really love the concept of MAAS underlying an OpenStack implementation, but there are a few questions about MAAS that I am not entirely clear on. Should all hosts be set to network boot at all times, or, after they have been registered and allocated as a service, should they boot from disk? After juju bootstrap is executed, I turn on the machine that has been allocated (note: WoL isn't working; I suspect it's being blocked on the network), the machine boots up, and then juju status executes correctly, with the agent running and all that good stuff. If I 'reboot' the machine (testing power failure or whatever), juju status comes back but the agent-state is no longer "running", and so far I have had to destroy the environment and restart. In all cases I have never been able to deploy any services to any of the other nodes. I deploy the service with juju, note which node it was assigned to, and then start the system. The system just boots up into a basic node. If I SSH to it I have to enter a password, so it's not setting up the ssh key or anything. This is on Ubuntu 12.04.1 LTS systems and HP GL360G7 hosts. The MAAS management server is running as a VM, but all on the same network. At this point I am not sure if I am doing something wrong or if there is a problem somewhere else. Is the idea that any time a host is rebooted it should be rebuilt from the ground up, or is something else going on behind the scenes to tell it to boot the local image? If the latter, why doesn't the agent start on a system that has been successfully set up before (the juju-bootstrapped system)?

    Read the article

  • The Exceptional EXCEPT clause

    - by steveh99999
    Ok, I exaggerate, but it can be useful... I came across some poorly-written stored procedures on a SQL Server recently that were using sp_xml_preparedocument. Unfortunately these procs were not properly releasing the memory allocated to XML structures - i.e. they were not subsequently calling sp_xml_removedocument. I needed a quick way of identifying how many stored procedures on the server were affected. Here's what I used:

        EXEC sp_msforeachdb 'USE ?
        SELECT DB_NAME(), OBJECT_NAME(s1.id)
        FROM syscomments s1
        WHERE [text] LIKE ''%sp_xml_preparedocument%''
        EXCEPT
        SELECT DB_NAME(), OBJECT_NAME(s2.id)
        FROM syscomments s2
        WHERE [text] LIKE ''%sp_xml_removedocument%'''

    There are three nice features in the code above:
    1. It uses sp_msforeachdb. There's a nice blog post on this statement here.
    2. It uses the EXCEPT clause. So in the above query I get all the procedures that include the sp_xml_preparedocument string, but by using the EXCEPT clause I remove all the procedures that also contain sp_xml_removedocument. Read more about EXCEPT here.
    3. It can be used to quickly identify incorrect usage of sp_xml_preparedocument. Read more about this here.

    The above query isn't perfect - I'm not properly parsing the SQL text to ignore comments, for example - but for the quick analysis I needed to perform, it was just the job...

    Read the article
