Search Results

Search found 10963 results on 439 pages for 'documentation generation'.


  • In Social Relationship Management, the Spirit is Willing, but Execution is Weak

    - by Mike Stiles
    In our final talk in this series with Aberdeen’s Trip Kucera, we wanted to find out if enterprise organizations are actually doing anything about what they’re learning around the importance of communicating via social and using social listening for a deeper understanding of customers and prospects. We found out that if your brand is lagging behind, you’re not alone.

    Spotlight: How was Aberdeen able to find out if companies are putting their money where their mouth is when it comes to implementing social across the enterprise?

    Trip: One way to think about the relative challenges a business has in a given area is to look at the gap between “say” and “do.” The first of those words reveals the brand’s priorities, while the second reveals their ability to execute on those priorities. In Aberdeen’s research, we capture this by asking firms to rank the value of a set of activities from one on the low end to five on the high end. We then ask them to rank their ability to execute those same activities, again on a one-to-five, not-effective to highly-effective scale.

    Spotlight: And once you get their self-assessments, what is it you’re looking for?

    Trip: There are two things we’re looking for in this analysis. The first is we want to be able to identify the widest gaps between perception of value and execution. This suggests impediments to adoption or simply a high level of challenge, be it technical or otherwise. It may also suggest areas where we can expect future investment and innovation.

    Spotlight: So the biggest potential pain points surface, places where they know something is critical but also know they aren’t doing much about it. What’s the second thing you look for?

    Trip: The second thing we want to do is look at specific areas in which high-performing companies, the Leaders, are out-executing the Followers. This points to the business impact of these activities, since Leaders are defined by a set of business performance metrics. Put another way, we’re correlating adoption of specific business competencies with performance, looking for what high-performers do differently.

    Spotlight: Aha, that tells us what steps the winners are taking that are making them winners. So what did you find out?

    Trip: Generally speaking, we see something of a glass curtain when it comes to the social relationship management execution gap. There isn’t a single social media activity in which more than 50% of respondents indicated effectiveness, which would be a 4 or 5 on that 1-5 scale. This is despite the fact that 70% of firms indicate that generating positive social media mentions is valuable or very valuable, a 4 or 5 on our 1-5 scale.

    Spotlight: Well, at least they get points for being honest. The verdict they’re giving themselves is that they just aren’t cutting it in these highly critical social development areas.

    Trip: And the widest gap is around directly engaging with customers and/or prospects on social networks, which 69% of firms rated as valuable but only 34% of companies say they are executing well. Perhaps even more interesting is that these two are interdependent, since you’re most likely to generate goodwill on social through happy, engaged customers. This data also suggests that social is largely being used as a broadcast channel rather than for one-to-one engagement. As we’ve discussed previously, social is an inherently personal medium.

    Spotlight: And if they’re still using it as a broadcast channel, that shows they still fail to understand the root of social and see it as just another outlet for their ads and push-messaging. That’s depressing.

    Trip: A second way to evaluate this data is by using Aberdeen’s performance benchmarking. The story is a bit different, but consistent in its own way. The first thing we notice is that Leaders are more effective in their execution of several key social relationship management capabilities, namely generating positive mentions and engaging with “influencers” and customers. Because Aberdeen uses a broad set of performance metrics, from website conversion to annual revenue growth, to rank the respondents as either “Leaders” (top 35% in weighted performance) or “Followers” (bottom 65% in weighted performance), we can then correlate high social effectiveness with company performance. We can also connect the specific social capabilities used by Leaders with effectiveness. We spoke about a few of those key capabilities last time and also discuss them in a new report: Social Powers Activate: Engineering Social Engagement to Win the Hidden Sales Cycle.

    Spotlight: What all that tells me is there are rewards for making the effort and getting it right. That’s how you become a Leader.

    Trip: But there’s another part of the story, which is that overall effectiveness, even among Leaders, is muted. There’s just one activity in which a majority of Leaders cite high effectiveness: the generation of positive buzz. While 80% of Leaders indicate “directly engaging with customers” through social media channels is valuable, the highest-rated activity among Leaders, only 42% say they’re effective. This gap even among Leaders shows the challenges still involved in effective social relationship management.

    @mikestiles  Photo: stock.xchng

    Read the article

  • JavaOne 2012 Sunday Strategy Keynote

    - by Janice J. Heiss
    At the Sunday Strategy Keynote, held at the Masonic Auditorium, Hasan Rizvi, EVP, Middleware and Java Development, stated that the theme for this year's JavaOne is “Make the future Java” -- meaning that Java continues in its role as the most popular, complete, productive, secure, and innovative development platform. But it also means, he qualified, the process by which we make the future Java -- an open, transparent, collaborative, and community-driven evolution. "Many of you have bet your businesses and your careers on Java, and we have bet our business on Java," he said.

    Rizvi detailed the three factors they consider critical to the success of Java -- technology innovation, community participation, and Oracle's leadership/stewardship. He offered a scorecard in these three realms over the past year -- with OS X and Linux ARM support on Java SE, open sourcing of JavaFX by the end of the year, the release of the Java Embedded Suite 7.0 middleware platform, and multiple releases on the Java EE side. The JCP process continues, with new JSR activity, and JUGs show a 25% increase in participation since last year. Oracle, meanwhile, continues its commitment to both technology and community development/outreach -- with four regional JavaOne conferences last year in various parts of the world, as well as the release of Java Magazine, with over 120,000 current subscribers.

    Georges Saab, VP Development, Java SE, next reviewed features of Java SE 7 -- the first major revision to the platform under Oracle's stewardship, which has included near-monthly update releases offering hundreds of fixes, performance enhancements, and new features. Saab indicated that developers, ISVs, and hosting providers have all been rapid adopters of the platform. He also noted that Oracle's entire Fusion Middleware stack is supported on SE 7. The supported platforms for SE 7 have also increased -- from Windows, Linux, and Solaris, to OS X, Linux ARM, and the emerging ARM micro-server market. "In the last year, we've added as many new platforms for Java as were added in the previous decade," said Saab. Saab also explored the upcoming JDK 8 release -- including Project Lambda, Project Nashorn (a modern implementation of JavaScript running on the JVM), and others. He noted that Nashorn functionality had already been used internally in NetBeans 7.3, and announced that they were planning to contribute the implementation to OpenJDK.

    Nandini Ramani, VP Development, Java Client, ME and Card, discussed the latest news pertaining to JavaFX 2.0 -- releases on Windows, OS X, and Linux; release of the FX Scene Builder tool; the JavaFX WebView component in NetBeans 7.3; and an OpenJFX project in OpenJDK. Ramani announced, as of Sunday, the availability for download of JavaFX on Linux ARM (developer preview), as well as Scene Builder on Linux. She noted that for next year's JDK 8 release, JavaFX will offer 3D, as well as third-party component integration. Avinder Brar, Senior Software Engineer, Navis, and Dierk König, Canoo Fellow, next took the stage and demonstrated all that JavaFX offers, with a feature-rich, animation-rich, real-time cargo management application that employs Canoo's just open-sourced Dolphin technology.

    Saab also explored Java SE 9 and beyond -- Jigsaw modularity, the Penrose Project for interoperability with OSGi, improved multi-tenancy for Java in the cloud, and Project Sumatra. Phil Rogers, HSA Foundation President and AMD Corporate Fellow, explored heterogeneous computing platforms that combine the CPU and the parallel processor of the GPU into a single piece of silicon and shared memory -- a hardware technology driven by such advanced functionalities as HD video, face recognition, and cloud workloads. Project Sumatra is an OpenJDK project targeted at bringing Java to such heterogeneous platforms -- with hardware and software experts working together to modify the JVM for these advanced applications and platforms.

    Ramani next discussed the latest with Java in the embedded space -- "the Internet of things" and M2M -- declaring this to be "the next IT revolution," with Java as the ideal technology for the ecosystem. Last week, Oracle released Java ME Embedded 3.2 (for micro-controllers and low-power devices) and Java Embedded Suite 7.0 (a middleware stack based on Java SE 7). Axel Hansmann, VP Strategy and Marketing, Cinterion, explored his company's use of Java in M2M and their new release of EHS5, the world's smallest 3G-capable M2M module, running Java ME Embedded. Hansmann explained that Java offers them the ability to create a "simple to use, scalable, coherent, end-to-end layer" for such diverse edge devices. Marc Brule, Chief Financial Officer, Royal Canadian Mint, also explored the fascinating use case of JavaCard in his country's MintChip e-cash technology -- deployable on smartphones, USB devices, computers, tablets, or the cloud. In parting, Ramani encouraged developers to download the latest releases of Java Embedded and try them out.

    Cameron Purdy, VP, Fusion Middleware Development and Java EE, summarized the latest developments and announcements in the enterprise space -- greater developer productivity in Java EE 6 (with more on the way in EE 7), and portability between platforms, vendors, and even cloud-to-cloud portability. The earliest version of the Java EE 7 SDK is now available for download -- in GlassFish 4 -- with WebSocket support, better JSON support, and more. The final release is scheduled for April of 2013. Nicole Otto, Senior Director, Consumer Digital Technology, Nike, explored her company's Java-technology-driven enterprise ecosystem for all things sports, including the NikeFuel accelerometer wristband. Looking beyond Java EE 7, Purdy mentioned NoSQL database functionality for EE 8, the concurrency utilities (possibly in EE 7), some of the Avatar projects in EE 7, some in EE 8, multi-tenancy for the cloud, supporting SaaS applications, and more.

    Rizvi ended by introducing Dr. Robert Ballard, oceanographer and National Geographic Explorer-in-Residence -- part of Oracle's philanthropic relationship with the National Geographic Society to fund K-12 education around ocean science and conservation. Ballard is best known for having discovered the wreckage of the Titanic. He offered a fascinating video and overview of the cutting-edge technology used in such deep-sea explorations, noting that in his early days, high-bandwidth exploration meant that you’d go down in a submarine and "stick your face up against the window." Now, it's remotely operated technological telepresence -- "I think of my Hercules vehicle as my equivalent of a Na'vi. When I go beneath the sea, I actually send my spirit." Using high-bandwidth satellite links, such amazing explorations can now occur via smartphone, laptop, or whatever platform. Ballard’s team regularly offers live feeds and programming out to schools and the world, spanning 188 countries -- with educators embedded as part of the expeditions.

    It's technology at its finest, inspiring the next generation of scientists and explorers!

    Read the article

  • Do we need to adopt a black-box asset our project is inheriting from its predecessor?

    - by Tom Anderson
    Our client has an eCommerce site which was developed by an in-house team, and is now showing its age. I work for a firm brought in as external contractors to build a replacement. Part of the current site is a Flash viewer applet which displays media about the product - zoom-able images, 360-degree views, movies, and so on. We need to show the same media the current site does, so we are simply reusing the viewer.

    The viewer is embedded on a page in the usual way, and told what media to show by means of an XML file it loads from our server, which is pretty simple for us to generate. We've got this working; it was pretty straightforward. But what else do we need to do?

    The thing is, as far as we're concerned, the viewer is a binary blob which is served from the client's content-distribution network. We embed it, feed it some XML, and it does its job, but we have no power over its internals. It's completely opaque to us - a black box. We can use it to do what it does, but we can't change it, so if we ever need to do something different, we're stuffed.

    We're building this site for the client, and when we're done, we'll hand it over for them to maintain. We won't be doing the maintenance ourselves. There's a small team within the client who are working as part of our team, and who will be the ones doing the maintenance. That team only includes one person from the team that built the old site, and it's not someone who knows the image viewer. The people who do know the image viewer are not slated to join our team when our system replaces theirs - they'll be moved to other projects. The documentation on the viewer is extremely thin, and as far as I know doesn't cover the internals at all.

    My worry is that if someone doesn't take some positive action, all knowledge of the internal workings of the viewer - even down to where the source code for it is - will be lost. It's possible it already has been. Is this something to worry about? If so, whose job is it to worry about it? What should they do about it once they've got worried?

    Read the article

  • Code Analysis Rule Sets in Visual Studio 2010

    - by Anthony Trudeau
    Microsoft Visual Studio 2010 introduces the concept of rule sets when configuring code analysis.  This is a valuable change from Visual Studio 2008 that I didn't even realize I wanted.  Visual Studio 2008 by default selected all rules, and then you had to remove rules on an item-by-item basis.

    The rule sets fall into logical groups including "Microsoft All Rules", "Microsoft Basic Correctness Rules", "Microsoft Security Rules", et al.  And within the project properties you can select one rule set or multiple rule sets, or you can define your own rule set based upon another.  Selecting a single rule set is obviously the easiest option.  The default rule set when you create a new project is the "Microsoft Minimum Recommended Rules".  However, in my opinion the recommended rules are just too permissive.  For that reason you might want to change your rule set to "Microsoft All Rules" until you get around to creating your own rule set; or alternately you can select multiple rule sets, which is an option from the rule set combo box.  The Visual Studio documentation has comprehensive help on what is contained within the rule sets.

    Creating your own rule set is easy, if not obvious.  You need to start a rule set from an existing rule set.  To get started, select a rule set in the combo box within the Code Analysis tab of the project properties.  I selected "Microsoft All Rules" for my rule set, but you may find it easier to start with the "Microsoft Minimum Recommended Rules" if your rules are on the more permissive side.

    Once your rule set is selected, click the Open button.  This will display a dialog that is similar in composition to the rules selection from Visual Studio 2008.  Browsing through the tree view you can select or deselect individual rules within their categories, and you can indicate that the rules are flagged as errors instead of the default, which is a warning.  A nice touch to the form is that you get a help pane when you select an individual rule.  That helped me considerably when I first configured my rule set.

    Once you have finished selecting your rules, click the Save tool button, specify a location and name, and click the Save button on the Save As dialog.  Once you're back on the Code Analysis tab you'll choose the Browse option within the combo box and open the file you just created.
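    For illustration, below is a minimal sketch of the kind of .ruleset file the Save step produces (the Name, Description, and the two rule selections here are hypothetical; the Include element is what lets a custom set start from a built-in one):

        <?xml version="1.0" encoding="utf-8"?>
        <RuleSet Name="My Team Rules" Description="Example custom rule set." ToolsVersion="10.0">
          <!-- Start from a built-in rule set, then override individual rules -->
          <Include Path="minimumrecommendedrules.ruleset" Action="Default" />
          <Rules AnalyzerId="Microsoft.Analyzers.ManagedCodeAnalysis" RuleNamespace="Microsoft.Rules.Managed">
            <!-- Escalate "validate arguments of public methods" to an error -->
            <Rule Id="CA1062" Action="Error" />
            <!-- Turn a rule off entirely -->
            <Rule Id="CA1709" Action="None" />
          </Rules>
        </RuleSet>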

    Read the article

  • PMI South Florida Job Fair 2010

    - by Sam Abraham
    The South Florida Chapter of the Project Management Institute is planning a Job Fair slated for September 2010. This year has seen a significant improvement in the job market, with many surveyed companies indicating their intention to add temporary or permanent staff to their workforce in the near future.

    The Job Fair initiative fits well within the chapter's message and goal for this year: "Exercising Social Responsibility" - our responsibility as PMI volunteers at all levels towards our members and surrounding community.

    Our free-to-members annual Job Fair will play an important role in connecting recruiters, exhibitors and job seekers, thereby helping hiring companies gain access to a large talent pool at an affordable cost (totally free in certain cases; details to be revealed once finalized) while giving job seekers centralized access to many reputable hiring companies in the South Florida area.

    My involvement in the 2010 Job Fair started with a good conversation I had with Bernie Saenz, President and CEO of the South Florida PMI Chapter, at a networking event a few months ago. I had approached him with a few ideas in line with his goal to serve the community and our members given today's difficult economic climate. Bernie indicated that the Project Manager for the 2010 Job Fair had just been appointed, and invited me to participate in this important initiative as a member of her team. I simply couldn't resist and gladly accepted the invitation.

    I chose an initial role as Recruiter Relations Lead, which entails developing documentation and timelines for our project plan with regard to recruiter engagement, as well as reaching out to recruiting companies to meet target representation at the Job Fair. Being heavily involved in the local technical community has afforded me the privilege of coming in contact with many reputable technology recruiting companies. (As a matter of fact, I already have two interested, very reputable IT recruiting firms willing to join us at the fair.) The excitement for me, however, will be finding and reaching out to recruiters in areas of project management and leadership that I might not have been exposed to before, including finance, healthcare and marketing, to name a few.

    Keep an eye out in the upcoming few weeks for official announcements on the PMI South Florida Job Fair 2010.

    Environment.Exit(0);

    -Sam Abraham
    Site Director - West Palm Beach .Net User Group
    Recruiter Relations Lead - PMI South Florida Job Fair 2010
    Project Lead - Mentoring Programs - PMI South Florida

    Read the article

  • Help needed with pyparsing [closed]

    - by Zearin
    Overview

    So, I’m in the middle of refactoring a project, and I’m separating out a bunch of parsing code. The code I’m concerned with is pyparsing. I have a very poor understanding of pyparsing, even after spending a lot of time reading through the official documentation. I’m having trouble because (1) pyparsing takes a (deliberately) unorthodox approach to parsing, and (2) I’m working on code I didn’t write, with poor comments, and a non-elementary set of existing grammars. (I can’t get in touch with the original author, either.)

    Failing Test

    I’m using PyVows to test my code. One of my tests is as follows (I think this is clear even if you’re unfamiliar with PyVows; let me know if it isn’t):

        def test_multiline_command_ends(self, topic):
            output = parsed_input('multiline command ends\n\n', topic)
            expect(output).to_equal(
                r'''['multiline', 'command ends', '\n', '\n']
        - args: command ends
        - multiline_command: multiline
        - statement: ['multiline', 'command ends', '\n', '\n']
         - args: command ends
         - multiline_command: multiline
         - terminator: ['\n', '\n']
        - terminator: ['\n', '\n']''')

    But when I run the test, I get the following in the terminal:

    Failed Test Results

        Expected topic("['multiline', 'command ends']\n- args: command ends\n- command: multiline\n- statement: ['multiline', 'command ends']\n - args: command ends\n - command: multiline")
        to equal "['multiline', 'command ends', '\\n', '\\n']\n- args: command ends\n- multiline_command: multiline\n- statement: ['multiline', 'command ends', '\\n', '\\n']\n - args: command ends\n - multiline_command: multiline\n - terminator: ['\\n', '\\n']\n- terminator: ['\\n', '\\n']"

    Note: Since the output is to a Terminal, the expected output (the second one) has extra backslashes. This is normal. The test ran without issue before this piece of refactoring began.

    Expected Behavior

    The first line of output should match the second, but it doesn’t. Specifically, it’s not including the two newline characters in that first list object. So I’m getting this:

        "['multiline', 'command ends']\n- args: command ends\n- command: multiline\n- statement: ['multiline', 'command ends']\n - args: command ends\n - command: multiline"

    When I should be getting this:

        "['multiline', 'command ends', '\\n', '\\n']\n- args: command ends\n- multiline_command: multiline\n- statement: ['multiline', 'command ends', '\\n', '\\n']\n - args: command ends\n - multiline_command: multiline\n - terminator: ['\\n', '\\n']\n- terminator: ['\\n', '\\n']"

    Earlier in the code, there is also this statement:

        pyparsing.ParserElement.setDefaultWhitespaceChars(' \t')

    …which I think should prevent exactly this kind of error. But I’m not sure. Even if the problem can’t be identified with certainty, simply narrowing down where the problem is would be a HUGE help. Please let me know how I might take a step or two towards fixing this.
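    One pyparsing behavior worth checking here: setDefaultWhitespaceChars only affects grammar elements constructed after it is called; elements built beforehand keep the default whitespace set (which includes '\n') and silently swallow newlines. The following minimal sketch (a toy grammar, not the project's actual one) shows newline terminators surviving as tokens when the call comes first:

        import pyparsing as pp

        # Must run BEFORE any grammar elements are constructed;
        # elements created earlier keep skipping '\n' as whitespace.
        pp.ParserElement.setDefaultWhitespaceChars(' \t')

        terminator = pp.LineEnd()                  # matches and returns '\n'
        words = pp.OneOrMore(pp.Word(pp.alphas))   # whitespace-delimited words
        statement = words + pp.OneOrMore(terminator)

        print(statement.parseString('multiline command ends\n\n'))
        # -> ['multiline', 'command', 'ends', '\n', '\n']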

    Read the article

  • SSIS Reporting Pack v0.4 – Execution Report updated

    - by jamiet
    SSIS Reporting Pack is a suite of reports that I maintain at http://ssisreportingpack.codeplex.com/ that provide visualisation over the SSIS Catalog in SQL Server 2012 and attempt to add value over the reports that ship in the box. Work on the reports has stalled (my last SSIS Reporting Pack blog post was on 4th September 2011) as I’ve had rather more important things going on in my life of late; however, I have recently checked in a fix that couldn’t really be delayed. I discovered a problem with the Execution report that was causing the report to effectively hang. It was caused by this bit of SQL hidden away in the report definition:

        [generated_executables] AS (
            SELECT  [new_executable].[execution_path], [new_executable].[parent_execution_path]
            FROM    (
                    SELECT  [execution_path] = SUBSTRING([loop_iteration].[execution_path], 1, [loop_iteration].[length_exec_path] - [loop_iteration].[char_index_close_square] + 1)
                    ,       [parent_execution_path] = SUBSTRING([loop_iteration].[execution_path], 1, [loop_iteration].[length_exec_path] - [loop_iteration].[char_index_open_square])
                    FROM    (
                            SELECT  [execution_path]
                            ,       [char_index_open_square] = CHARINDEX('[', REVERSE([execution_path]), 1)
                            ,       [char_index_close_square] = CHARINDEX(']', REVERSE([execution_path]), 1)
                            ,       [length_exec_path] = LEN([execution_path])
                            FROM    [exec_stats] es
                            WHERE   execution_path LIKE '%\[%]%' ESCAPE '\'
                            ) AS [loop_iteration]
                    ) AS [new_executable]
            GROUP   BY [new_executable].[execution_path], [new_executable].[parent_execution_path]
        )

    It was there because SSIS does not currently treat a loop iteration as an executable, yet I figured there was still value in being able to view it as such; this SQL essentially “invents” new executables for those loop iterations. It's what enabled the following visualisation, where each of the three iterations of a For Each Loop called “FEL Loop over top performing regions” appears in the report.

    Unfortunately, as I alluded, this could under certain circumstances (most likely when there were many loop iterations) cause the report to hang as it waited for the results to be constructed and returned. The change that I have made eradicates this generation of “fake” executables and thus produces a different visualisation instead. Notice that the three “children” of the For Each Loop are no longer the three iterations but actually the task (“EPT Call Data Export Package”) contained within that For Each Loop. The problem here is of course that there is no longer a visual distinction between those three iterations; I have instead made the full execution path viewable via a tooltip.

    If you preferred the “old” way of presenting this information and are happy to put up with the performance degradation, then I have kept the old version of the report hanging around in the reporting pack as “execution loop with iterations”; however, none of the other reports link to it, so you will have to browse to it manually if you want to use it. Please let me know if you ARE using it – I would be very interested to hear about your experiences.

    The last change to make you aware of in the execution report is that by default I no longer show OnPreValidate or OnPostValidate messages, as I consider them to be superfluous and they only serve to clutter up the results. If you want to put them back, well, it's open source, so go right ahead!
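    To make the string surgery in that CTE concrete, here is a minimal standalone illustration (the execution path is hypothetical): reversing the string to locate the final '[n]' pair lets the query keep the iteration as its own "executable" and strip it to derive the parent.

        -- execution_path keeps the trailing '[2]'; parent_execution_path strips it
        SELECT  [execution_path]        = SUBSTRING(p, 1, LEN(p) - CHARINDEX(']', REVERSE(p), 1) + 1)
        ,       [parent_execution_path] = SUBSTRING(p, 1, LEN(p) - CHARINDEX('[', REVERSE(p), 1))
        FROM    (VALUES ('\Package\FEL Loop over top performing regions[2]')) AS v(p);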
The latest release of SSIS Reporting Pack that contains all of these changes is v0.4 and can be downloaded from http://ssisreportingpack.codeplex.com/releases/view/88178   Feedback on all of the above changes would be very much appreciated. @Jamiet

    Read the article

  • Five hours of Task Flow Overview Recordings Available

    - by Frank Nimphius
    In addition to the ADF Controller task flow documentation in Oracle Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework 11g Release 1 (http://download.oracle.com/docs/cd/E21764_01/web.1111/b31974/partpage3.htm#BABHIIAI), the ADF Insider website (http://www.oracle.com/technetwork/developer-tools/adf/learnmore/adfinsider-093342.html) hosts five online videos that explain how to build and work with ADF Controller task flows in Oracle ADF.

    ADF Task Flow - Overview (Part 1)
    This 90-minute recording introduces the concept of ADF unbounded and bounded task flows, as well as other ADF Controller features. The session starts with an overview of unbounded task flows, bounded task flows, and the different activities that exist for developers to build complex application flows. Exception handling and the train navigation model are also covered in this first part of a two-part series. Using the development of a sample application as its example, the recording guides viewers through building unbounded and bounded task flows. This session is continued in a second part.
    http://download.oracle.com/otn_hosted_doc/jdeveloper/11gdemos/taskflow-overview-p1/taskflow-overview-p1.html

    ADF Task Flow - Overview (Part 2)
    This 75-minute session continues where part 1 ended and completes the sample application that guides viewers through different aspects of unbounded and bounded task flow development. In this recording, memory scopes, save-for-later, task flows opening in dialogs, and remote task flow calls are explained and demonstrated. If you are new to ADF task flows, it is recommended to first watch part 1 of this series to be able to follow the explanation guided by the sample application.
    http://download.oracle.com/otn_hosted_doc/jdeveloper/11gdemos/taskflow-overview-p2/taskflow-overview-p2.html

    ADF Region Interaction - An Overview
    This session covers most of the options that exist for communicating between regions. It briefly discusses what it takes to build regions from bounded task flows before going into details using slides and samples. The following interactions are explained: contextual events, queue action in region, input parameters and PPR, drag and drop, shared Data Controls, parent action, and region navigation listener.
    http://download.oracle.com/otn_hosted_doc/jdeveloper/11gdemos/adf-region-interaction/adf-region-interaction.html

    ADF Region Interaction - Contextual Events
    Contextual events are used as a communication channel between a parent view and its contained regions, as well as between regions. By example, this session explains how to set up contextual events, how to define producers and event listeners, and how to define the payload message.
    http://download.oracle.com/otn_hosted_doc/jdeveloper/11gdemos/AdfInsiderContextualEvents/AdfInsiderContextualEvents.html

    Read the article

  • Oracle GoldenGate 11gR2 New Feature: Integrated Capture

    - by Doug Reid
    With the release of Oracle GoldenGate 11gR2, the Product Management team is very excited about the addition of Integrated Capture for the Oracle platform. Integrated Capture is unique in the industry and unique to the Oracle database; it is not available on any other database platform. This new feature moves GoldenGate’s capture capabilities closer to the Oracle Database engine and is the foundation for Oracle GoldenGate on the Oracle Database platform over the long term. It is important to note that Integrated Capture does not replace our classic capture process; both are available on the Oracle Database platform.

    The Integrated Capture mechanism relies on Oracle’s internal log parsing and processing to capture DML transactions. By moving closer to the Oracle Database engine, Oracle GoldenGate can take advantage of new Oracle Database features and functionality more quickly. For example, this new mechanism allows GoldenGate to support advanced features such as compression. Integrated Capture provides support for all flavors of Oracle compression, including hybrid columnar compression (EHCC) on Exadata, whereas our classic capture does not.

    Integrated Capture supports two different deployment configurations: on-source and downstream. The on-source deployment model is what most customers are familiar with: Oracle GoldenGate executes on the database server, capturing changes in real time. This is the default deployment method. The other option is downstream, where the source database and the Oracle GoldenGate capture process are on different machines. This method effectively off-loads the processing requirements to a second machine. Customers may choose whichever option they prefer based on their requirements.

    Additional information on Integrated Capture can be found in our documentation and the white paper “Oracle GoldenGate for Oracle”.
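    As a rough sketch of what enabling on-source integrated capture looks like in GGSCI (the extract name and credentials here are hypothetical; verify the exact procedure against the documentation), the extract is registered with the database and then added in integrated mode:

        GGSCI> DBLOGIN USERID ggadmin, PASSWORD ggadmin
        GGSCI> REGISTER EXTRACT exti DATABASE
        GGSCI> ADD EXTRACT exti, INTEGRATED TRANLOG, BEGIN NOW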

    Read the article

  • Full disk encryption with separate boot and encrypted keyfile storage: Two-Form Authentication

    - by Cain
    I am trying to set up true full disk encryption with two-form authentication on 12.04 and cannot find out how to call a keyfile for the encrypted root out of another encrypted partition. All documentation or questions I am finding for whole or full disk encryption only encrypt separate partitions on the same disk. This is not what most are calling full disk encryption: /boot is not on a partition on the root drive; rather, it is on a USB stick as sdx1. Instead, root is on a logical partition on top of a LUKS container. LUKS is run on the whole disk, encrypting the partition table as well. All drives in the machine are completely encrypted, and opening the machine requires a USB drive (what I have) as well as a passphrase (what I know), resulting in two-form authentication to boot the machine:

        Device sdx -> cryptroot -> vg00 -> lvroot -> /

    There is no passphrase to open the encrypted root device, only a keyfile. That keyfile is kept on the USB drive with /boot, in its own encrypted partition (I'll call this cryptkey). In order for the root file system (cryptroot) to be opened, initramfs must ask for the passphrase to cryptkey on the USB drive, then use the keyfile inside that to open cryptroot.

    I did manage to find what I think is the how-to I used to do this once before: http://wiki.ubuntu.org.cn/UbuntuHelp:FeistyLUKSTwoFormFactor

    I already have the system installed and can chroot into it; however, I cannot get it to call for the keys on the USB during boot. I did find a how-to saying I needed to make a cryptroot conf for initramfs, but I believe that is for a passphrase: https://help.ubuntu.com/community/EncryptedFilesystemLVMHowto#Notes_for_making_it_work_in_Ubuntu_12.04_.22Precise_Pangolin.22_amd64

    I also tried to set up crypttab. However, crypttab only works for drives mounted after the root drive, as calling for a keyfile on a device not yet mounted to the system doesn't work. The Feisty how-to included scripts that would be run during boot instructing initramfs to mount the USB drive temporarily and call the keyfile for root, which worked quite well, except those scripts are outdated now; many of the things they relied on have been merged into something else, changed, or simply don't exist anymore. If I have missed a clear how-to for this, that would be wonderful; I just don't think I have.
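    For reference, here is a hedged sketch of the unlock sequence the initramfs would need to perform, expressed as the equivalent manual commands (device names and the keyfile path are hypothetical):

        # Passphrase unlocks the small LUKS partition on the USB stick (what you know)
        cryptsetup luksOpen /dev/sdb2 cryptkey
        mount -r /dev/mapper/cryptkey /mnt/key        # the USB stick itself: what you have

        # The keyfile inside then unlocks the whole-disk LUKS container
        cryptsetup luksOpen --key-file /mnt/key/root.key /dev/sda cryptroot

        # Clean up; LVM then activates vg00/lvroot on top of cryptroot
        umount /mnt/key
        cryptsetup luksClose cryptkey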

    Read the article

  • Understanding how OpenGL blending works

    - by yuumei
    I am attempting to understand how OpenGL (ES) blending works. I am finding it difficult to understand the documentation and how the results of glBlendFunc and glBlendEquation affect the final pixel that is written. Do the source and destination out of glBlendFunc get added together with GL_FUNC_ADD by default? This seems wrong, because "basic" blending of GL_ONE, GL_ONE would then output 2,2,2,2 (source giving 1,1,1,1 and dest giving 1,1,1,1). I have written the following pseudo-code; what have I got wrong?

        struct colour { float r, g, b, a; };

        colour blend_factor( GLenum factor, colour source, colour destination, colour blend_colour )
        {
            colour colour_factor;
            float i = min( source.a, 1 - destination.a );
            // From http://www.khronos.org/opengles/sdk/docs/man/xhtml/glBlendFunc.xml
            switch( factor )
            {
                case GL_ZERO:                colour_factor = { 0, 0, 0, 0 }; break;
                case GL_ONE:                 colour_factor = { 1, 1, 1, 1 }; break;
                case GL_SRC_COLOR:           colour_factor = source; break;
                case GL_ONE_MINUS_SRC_COLOR: colour_factor = { 1 - source.r, 1 - source.g, 1 - source.b, 1 - source.a }; break;
                // ...
            }
            return colour_factor;
        }

        colour blend( colour & source, colour destination,
                      GLenum source_factor,      // from glBlendFunc
                      GLenum destination_factor, // from glBlendFunc
                      colour blend_colour,       // from glBlendColor
                      GLenum blend_equation )    // from glBlendEquation
        {
            colour source_colour = blend_factor( source_factor, source, destination, blend_colour );
            colour destination_colour = blend_factor( destination_factor, source, destination, blend_colour );
            colour output;
            // From http://www.khronos.org/opengles/sdk/docs/man/xhtml/glBlendEquation.xml
            switch( blend_equation )
            {
                case GL_FUNC_ADD:              output = add( source_colour, destination_colour );
                case GL_FUNC_SUBTRACT:         output = sub( source_colour, destination_colour );
                case GL_FUNC_REVERSE_SUBTRACT: output = sub( destination_colour, source_colour );
            }
            return output;
        }

        void do_pixel()
        {
            colour final_colour;
            // Blending
            if( enable_blending )
            {
                final_colour = blend( current_colour_output, framebuffer[ pixel ], ... );
            }
            else
            {
                final_colour = current_colour_output;
            }
        }

    Thanks!
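    For comparison, a sketch (not the code above) of how the default path behaves: with GL_FUNC_ADD the factored source and destination really are added, and for fixed-point framebuffers the result is then clamped to [0,1], which is why GL_ONE, GL_ONE does not actually store 2,2,2,2:

        /* Default blending for one component, assuming a fixed-point colour buffer */
        float blend_component(float src, float src_factor, float dst, float dst_factor)
        {
            float out = src * src_factor + dst * dst_factor;   /* GL_FUNC_ADD */
            if (out < 0.0f) out = 0.0f;                        /* clamp to [0,1] */
            if (out > 1.0f) out = 1.0f;
            return out;   /* GL_ONE, GL_ONE: 1 + 1 -> 2, clamped to 1 */
        }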

    Read the article

  • Pace Layering Comes Alive

    - by Tanu Sood
    Rick Beers is Senior Director of Product Management for Oracle Fusion Middleware. Prior to joining Oracle, Rick held a variety of executive operational positions at Corning, Inc. and Bausch & Lomb. With a professional background that includes senior management positions in manufacturing, supply chain and information technology, Rick brings a unique set of experiences to cover the impact that technology can have on business models, processes and organizations. Rick hosts the IT Leaders Editorial on a monthly basis.

    By now, readers of this column are quite familiar with Oracle AppAdvantage, a unified framework of middleware technologies, infrastructure and applications utilizing a pace layered approach to enterprise systems platforms:

    1. Standardize and Consolidate core Enterprise Applications by removing invasive customizations, costly workarounds and the complexity that multiple instances creates.
    2. Move business specific processes and applications to the Differentiate Layer, thus creating greater business agility with process extensions and best-of-breed applications managed by cross-application process orchestration.
    3. The Innovate Layer contains all the business capabilities required for engagement, collaboration and intuitive decision making. This is the layer where innovation will occur, as people engage one another in a secure yet open and informed way.
    4. Simplify IT by minimizing complexity, improving performance and lowering cost with secure, reliable and managed systems across the entire Enterprise.

    But what hasn’t been discussed is the pace layered architecture that Oracle AppAdvantage adopts. What is it, what are its origins and why is it relevant to enterprise scale applications and technologies? It’s actually a fascinating tale that spans the past 20 years, and a basic understanding of it provides a wonderful context to what is evolving as the future of enterprise systems platforms.

    It all begins in 1994 with a book by noted architect Stewart Brand, of ’Whole Earth Catalog’ fame. In his 1994 book How Buildings Learn, Brand popularized the term ‘Shearing Layers’, arguing that any building is actually a hierarchy of pieces, each of which inherently changes at different rates. In 1997 he produced a six-part BBC series adapted from the book, in which Part 6 focuses on Shearing Layers. In this segment Brand begins to introduce the concept of ‘pace’. Brand further refined this idea in his subsequent book, The Clock of the Long Now, which began to link the concept of Shearing Layers to computing and introduced the term ‘pace layering’, where he proposes that:

    “An imperative emerges: an adaptive [system] has to allow slippage between the differently-paced systems … otherwise the slow systems block the flow of the quick ones and the quick ones tear up the slow ones with their constant change.
    Embedding the systems together may look efficient at first but over time it is the opposite and destructive as well.”

    In 2000, IBM architects Ian Simmonds and David Ing published a paper entitled A Shearing Layers Approach to Information Systems Development, which applied the concept of Shearing Layers to systems design and development. It argued that systems at the time were still too rigid; that they constrained organizations by their inability to adapt to changes. The findings in the Conclusions section are particularly striking:

    “Our starting motivation was that enterprises need to become more adaptive, and that an aspect of doing that is having adaptable computer systems. The challenge is then to optimize information systems development for change (high maintenance) rather than stability (low maintenance). Our response is to make it explicit within software engineering the notion of shearing layers, and explore it as the principle that systems should be built to be adaptable in response to the qualitatively different rates of change to which they will be subjected. This allows us to separate functions that should legitimately change relatively slowly and at significant cost from that which should be changeable often, quickly and cheaply.”

    The problem at the time, of course, was that this vision of adaptable systems was simply not possible within the confines of first-generation ERP, which was conceived, designed and developed for standardization and compliance. It wasn’t until the maturity of open, standards-based integration, and the middleware innovation that followed, that pace layering became an achievable goal. And Oracle is leading the way.

    Oracle’s AppAdvantage framework makes pace layering come alive by taking a strategic vision 20 years in the making and transforming it into reality. It allows enterprises to retain and even optimize their existing ERP systems, while wrapping around those ERP systems three layers of capabilities that inherently adapt as needed, at a pace that’s optimal for the enterprise.

    Read the article

  • XmlWriter and lower ASCII characters

    - by Rick Strahl
    Ran into an interesting problem today on my CodePaste.net site: the main RSS and ATOM feeds on the site were broken because one code snippet on the site contained a lower ASCII character (CHR(3)). I don't think this was done on purpose, but it was enough to make the feeds fail. After quite a bit of debugging, and throwing a custom error handler into my actual feed generation code that just spit out the raw error instead of running it through ASP.NET MVC and my own error pipeline, I found the actual error. The lovely base exception and error trace I got looked like this:

        Error: '', hexadecimal value 0x03, is an invalid character.
        at System.Xml.XmlUtf8RawTextWriter.InvalidXmlChar(Int32 ch, Byte* pDst, Boolean entitize)
        at System.Xml.XmlUtf8RawTextWriter.WriteElementTextBlock(Char* pSrc, Char* pSrcEnd)
        at System.Xml.XmlUtf8RawTextWriter.WriteString(String text)
        at System.Xml.XmlWellFormedWriter.WriteString(String text)
        at System.Xml.XmlWriter.WriteElementString(String localName, String ns, String value)
        at System.ServiceModel.Syndication.Rss20FeedFormatter.WriteItemContents(XmlWriter writer, SyndicationItem item, Uri feedBaseUri)
        at System.ServiceModel.Syndication.Rss20FeedFormatter.WriteItem(XmlWriter writer, SyndicationItem item, Uri feedBaseUri)
        at System.ServiceModel.Syndication.Rss20FeedFormatter.WriteItems(XmlWriter writer, IEnumerable`1 items, Uri feedBaseUri)
        at System.ServiceModel.Syndication.Rss20FeedFormatter.WriteFeed(XmlWriter writer)
        at System.ServiceModel.Syndication.Rss20FeedFormatter.WriteTo(XmlWriter writer)
        at CodePasteMvc.Controllers.ApiControllerBase.GetFeed(Object instance) in C:\Projects2010\CodePaste\CodePasteMvc\Controllers\ApiControllerBase.cs:line 131

    XML doesn't like extended ASCII Characters

    It turns out the issue is that XML in general does not deal well with lower ASCII characters. According to the XML spec, it looks like any characters below 0x09 are invalid. If you generate an XML document in .NET with an embedded &#x3; entity (as mine did to create the error above), you tend to get an XML document error when displaying it in a viewer. For example, viewing my feed output with the invalid character embedded in Chrome, which displays RSS feeds as raw XML by default, produced such an error; other browsers show similar error messages. The nice thing about Chrome is that you can actually view source and jump down to see the line that causes the error, which allowed me to track down the actual message that failed.

    If you create an XML document that contains a 0x03 character, the XML writer fails outright with the error:

        '', hexadecimal value 0x03, is an invalid character.

    The good news is that this behavior is overridable, so XML output can at least be created by using the XmlWriterSettings object when configuring the XmlWriter instance. In my RSS configuration code this looks something like this:

        MemoryStream ms = new MemoryStream();
        var settings = new XmlWriterSettings()
        {
            CheckCharacters = false
        };
        XmlWriter writer = XmlWriter.Create(ms, settings);

    …and voila, the feed now generates. Now, generally this is probably NOT a good idea, because as mentioned above these characters are illegal, and if you view a raw XML document you'll get validation errors. Luckily, though, most RSS feed readers don't care and happily accept and display the feed correctly, which is good because it got me over an embarrassing hump until I figured out a better solution.

    How to handle extended Characters?

    I was glad to get the feed fixed for the time being, but now I was still stuck with an interesting dilemma. CodePaste.net accepts user input for code snippets, and those code snippets can contain just about anything. This means that ASP.NET's standard request filtering cannot be applied to this content. The code content displayed is encoded before display, so for the HTML end the CHR(3) input is not really an issue. While invisible characters are hardly useful in user input, it's not uncommon that odd characters show up in code snippets. You know the old fat-fingering that happens when you're in the middle of a coding session: those invisible characters do sometimes end up in code editors and then get pasted into the HTML textbox as a CodePaste.net snippet.

    The question is how to filter this text? Looking back at the XML charset spec, it looks like all characters below 0x20 (space) except for 0x09 (tab), 0x0A (LF) and 0x0D (CR) are illegal. So applying the following filter with a RegEx should work to remove invalid characters (note that a comma inside the character class would be treated as a literal comma and stripped too, so the ranges are listed without separators):

        string code = Regex.Replace(item.Code, @"[\u0000-\u0008\u000B\u000C\u000E-\u001F]", "");

    Applying this RegEx to the code snippet (and title) eliminates the problems and the feed renders cleanly.

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in .NET, XML
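    A small helper makes the same filter reusable for every field that ends up in the feed (a sketch; the class and method names here are hypothetical, not from the CodePaste.net codebase):

        using System.Text.RegularExpressions;

        public static class XmlInput
        {
            // Strip characters that are illegal in XML 1.0 documents
            static readonly Regex InvalidXmlChars =
                new Regex(@"[\u0000-\u0008\u000B\u000C\u000E-\u001F]");

            public static string Sanitize(string text)
            {
                return text == null ? null : InvalidXmlChars.Replace(text, "");
            }
        }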

    Read the article

  • Some Oracle VM 3 updates

    - by wcoekaer
    Today we did another patch set update for Oracle VM 3 (3.0.3-build 227). This can be downloaded from My Oracle Support as patch ID 14736185. There are quite a few updates in here and I highly recommend any Oracle VM 3 customer or user to install this update. This patch can be installed on top of Oracle VM 3.0 versions 3.0.2 and 3.0.3. The patch is cumulative for 3.0.3, so if you already installed patch update 1 (3.0.3-150) then this will just be incremental on top of that and brings you to 3.0.3-build 227. There is a readme file which contains the patch list in the patch info.

    The following packages are released on ULN for Oracle VM Server 3.0:

    initscripts-8.45.30-2.100.18.el5.x86_64 - the inittab file and the /etc/init.d scripts
    kernel-ovs-2.6.32.21-45.6.x86_64 - the Linux kernel
    kernel-ovs-firmware-2.6.32.21-45.6.x86_64 - firmware files used by the Linux kernel
    osc-oracle-ocfs2-0.1.0-35.el5.noarch - Oracle Storage Connect ocfs2 plugin
    osc-plugin-manager-1.2.8-9.el5.3.noarch - Oracle Storage Connect plugin infrastructure
    osc-plugin-manager-devel-1.2.8-9.el5.3.noarch - Oracle Storage Connect plugin development
    ovs-agent-3.0.3-41.6.x86_64 - agent for Oracle VM
    xen-4.0.0-81.el5.1.x86_64 - Xen is a virtual machine monitor
    xen-devel-4.0.0-81.el5.1.x86_64 - development libraries for Xen tools
    xen-tools-4.0.0-81.el5.1.x86_64 - various tooling for the manipulation of Xen instances

    Errata emails will be sent in the next few days with details on the above updates, or you will find them here.

    I also did an update of my Oracle VM utilities to 0.4.0. They are also available from My Oracle Support, patch ID 14736239. These utils can be unzipped and installed on the server running Oracle VM Manager, typically in /u01/app/oracle/ovm-manager-3/ovm_utils. There is a set of man pages in /u01/app/oracle/ovm-manager-3/ovm_utils/man/man8. There are now six commands:

    ovm_vmcontrol - VM level operations
    ovm_servercontrol - server level operations
    ovm_vmdisks - virtual disk/physical location mapping for VM disks
    ovm_vmmessage - message passing utility between the manager and the VM tools (in the Oracle VM templates)
    ovm_repocontrol - repository level operations
    ovm_poolcontrol - pool level operations

    Some of the new changes:

    - at a pool level, acknowledge events and cascade to servers and virtual machines with outstanding events
    - at a pool level, do a rescan of the storage for fibrechannel/iscsi disks if you add new devices (it does this operation then on every running server)
    - at a repository level, fixup a device if it had a failed create repository
    - at a repository level, refresh the repository; this will update the free space in the UI for ocfs2 repositories
    - at a server level, acknowledge server events and cascade to virtual machines if needed
    - at a VM level, acknowledge VM events
    - at a VM level, bind vcpus to cores with vcpuset/vcpuget

    Please see the man pages and remember that these tools are just written As Is - no SRs... (per the documentation). Hopefully they are useful.

    Read the article

  • Extra Life 2012 - The Final Plea ... Until the Next One

    - by Chris Gardner
    I thought I'd share the email stream that my friends and family get about the event.

    So, here we are again. We scream closer to the event, and the goal is not met.

    I was approached by the ghost of feral platypii past last night. Well, approached is putting it lightly. I was mugged by the ghost of platypii past last night. He reminded me, in no uncertain terms, that I have only reached the midway point of my fundraising goal. He then reminded me, in even less uncertain terms, that we are one week away from the event. There were other reminders past that, but this is a family broadcast. *shudder*

    Now, let us be serious for a moment. The event organizers claim a personal story helps to tug heart strings, whatever those are...

    I've been to Children's Hospital of Birmingham. I had to take Spawn, the Latter, there to verify she was not going to die. Instead, she's just a ticking time bomb for the next generation, but I digress.

    While I was there, I saw things. I saw child after child after child waiting for their appointment. I saw the most sublime displays of children's art juxtaposed with hospital sterilization that I could ever possibly imagine. I saw and heard things that only occur in the nightmares of parents, and I was only in the waiting rooms.

    But I will never forget the 10-ish year old girl that came in for her regularly scheduled dialysis appointment ... as if it was just another Friday afternoon. She had her school books, a little snack, a book to read for pleasure, and a DVD, in case she finished her homework a little early. You know, everything you'd need for an afternoon hooked up to a huge medical machine that's going to clean out all the toxins in your blood. As she entered the secured area, she warmly greeted all the doctors and nurses with the same familiarity that I would greet the staff of my favorite coffee shop as I stopped in for my morning cup of coffee.

    I don't know the status of that little girl. I don't know if she's healthy or, quite frankly, alive. I don't even know her name, as I only heard it in passing for the 37 seconds our paths crossed. However, I do remember being incredibly moved and touched by her upbeat attitude about the situation, and I hope that my efforts the last two Octobers got her, in some way, a little comfort.

    And, if she is still with us, I hope we can get her a little more.

    === PREVIOUS MESSAGE FOLLOWS ===

    Greetings (Again),

    If you are receiving this updated message, then you didn't feel generous the first time. Now, I tried to be nice the first time. I tried to send a simple, unobtrusive email message to get you into the spirit. Well, much like the bell ringers that I ignore in front of the Wal-Mart, you ignored me.

    I probably should have seen that coming...

    However, unlike those poor souls, I know how to contact you. And I can find out where you live. So, so, so, you better feel lucky that I'm too lazy to terrorize you people, because I could do it.

    Remember, it's not for me, it's for those poor kids... and the feral platypii. Because we can make more children, but platypii are hard to come by.

    === ORIGINAL MESSAGE FOLLOWS ===

    It's that time of year again. The time when I beg you for money for charity. See, unlike those bell ringers outside Wal-Mart, I don't do it when you have ten bazillion holiday obligations...

    Once again, I will be enduring a 24-hour marathon of gaming to raise money for Children's Hospital in Birmingham. All the money goes straight to them, and you get to tell Uncie Samuel that you're good for that money.

    I'd REALLY like to break $1000 this year, as I have come REALLY close to doing so for the past two years.

    This year, the event will take place on October 20th, beginning at 8 A.M. Once again, I will try to provide some web streams, etc., if you want to point and laugh (especially if I have to resort to playing Dance Central at 4 AM to stay awake for the last part).

    Look at it this way: I'm going to badger you about this for the next month. You might as well donate some money so you can righteously tell me to shut the Smurf up.

    You can place your bid at the link below. Feel free to spread the word to anyone and everyone.

    I thank you. The children thank you. Several breeds of feral platypus thank you. Maybe, just maybe, doing so will help you feel the love felt by re-fried beans when lovingly hugged in a warm tortilla.

    Enjoy your burrito.

    http://www.extra-life.org/participant/cgardner

    Read the article

  • GWT: Generate more complete crawl error report

    - by Mike
    I'm a developer in charge of managing Webmaster Tools accounts and related issues (including correcting crawl errors) for dozens (hundreds, maybe?) of active sites, and as part of my duties I create a report of every discrepancy, including all pages generating a 404 and all pages that link to those pages. Currently within Webmaster Tools I'm able to download a CSV file of all pages with a 404 response, but I'm then having to manually click on every single one of those links and copy the "linked from" field to paste into my spreadsheet. This is extremely tedious and seems unnecessary; I would expect the ability to download all that data at once. I'm ultimately looking for the end result of one CSV file that has every URL with a 404, but also has every URL that links to each one of them. Am I overlooking this functionality somewhere, or does anyone have a good solution?

    Edit 1 (2/11/2013): Example of what the CSV output looks like now:

        URL,Response Code,News Error,Detected,Category
        http://www.abcdef.com/123.php,404,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,,11/12/13,Not found

    Which is great, but let's say 123.php has 5 pages that link to it. Now I have to duplicate that row in my spreadsheet 4 more times, then go into Webmaster Tools, get all the URLs that link to the page, and add that data to my spreadsheet. The output I would prefer:

        URL,Response Code,Linked From,News Error,Detected,Category
        http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage1.php,,11/12/13,Not found
        http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage2.php,,11/12/13,Not found
        http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage3.php,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage1.php,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage2.php,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage3.php,,11/12/13,Not found

    Note the (hypothetical) addition of a "Linked From" column, as well as the fact there are only 2 unique URLs now (like before) but all of the "Linked To" pages are shown in one report.

    Edit 2 (2/12/2013): To clarify, my question is less about detecting and correcting 404s, and more about generating a report of what Google has listed as errors. Oftentimes, these errors aren't even valid anymore, but I still need documentation to show that Google detected a problem and that problem is now fixed. Many of the "linked from" URLs I find are actually outdated, cached resources. For example, I'll frequently see that the linked-from URL is the sitemap, which is actually an old sitemap cached by Google that points to an old page. Neither the sitemap nor the old page exists, but they still appear in my crawl error reports because they are cached resources.
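    Until such an export exists, one workable stopgap is to merge the two data sets yourself. A minimal sketch (file names and column headings are hypothetical, matching the examples above) that joins a "linked from" listing against the 404 export, one output row per referrer:

        import csv
        from collections import defaultdict

        # URL -> list of referring pages, from a hypothetical second export
        links = defaultdict(list)
        with open('linked_from.csv', newline='') as f:      # columns: URL, Linked From
            for row in csv.DictReader(f):
                links[row['URL']].append(row['Linked From'])

        with open('crawl_errors.csv', newline='') as f, \
             open('report.csv', 'w', newline='') as out:
            reader = csv.DictReader(f)                      # the 404 export
            writer = csv.writer(out)
            writer.writerow(['URL', 'Response Code', 'Linked From', 'Detected', 'Category'])
            for row in reader:
                # emit one row per referrer; blank if none known
                for src in links.get(row['URL'], ['']):
                    writer.writerow([row['URL'], row['Response Code'], src,
                                     row['Detected'], row['Category']])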

    Read the article

  • Cheating on Technical Debt

    - by Tony Davis
    One bad practice guaranteed to cause dismay amongst your colleagues is passing on technical debt without full disclosure. There can only be two reasons for this: either the developer or DBA didn’t know the difference between good and bad practices, or they concealed the debt. Neither reflects well on their professional competence.

    Technical debt, or code debt, is a convenient term to cover all the compromises between the ideal solution and the actual solution, reflecting the reality of the pressures of commercial coding. The one time you’re guaranteed to hear one developer, or DBA, pass judgment on another is when he or she inherits their project and is surprised by the amount of technical debt left lying around in the form of inelegant architecture, incomplete tests, confusing interface design, no documentation, and so on.

    It is often expedient for a Project Manager to ignore the build-up of technical debt, the cut corners, not-quite-finished features and rushed designs that mean progress is satisfyingly rapid in the short term. It’s far less satisfying for the poor person who inherits the code. Nothing sends a colder chill down the spine than the dawning realization that you’ve inherited a system crippled with performance and functional issues that will take months of pain to fix before you can even begin to make progress on any of the planned new features. It’s often hard to justify this ‘debt paying’ time to the project owners and managers. It just looks as if you are making no progress, in marked contrast to your predecessor.

    There can be many good reasons for allowing technical debt to build up, at least in the short term. Often, rapid prototyping is essential, there is a temporary shortfall in test resources, or the domain knowledge is incomplete. It may be necessary to hit a specific deadline with a prototype, or proof-of-concept, to explore a possible market opportunity, with planned iterations and refactoring to follow later. However, it is a crime for a developer to build up technical debt without making this clear to the project participants. He or she needs to record it explicitly. A design compromise made in order to hit a deadline, be it an outright hack, or a decision made without time for rigorous investigation and testing, needs to be documented with the same rigor with which one tracks a bug.

    What’s the best way to do this? Ideally, we’d have some kind of objective assessment of the level of technical debt in a software project, although that smacks of Science Fiction even as I write it. I’d be interested to hear of any methods you’ve used, but I’m sure most teams have to rely simply on the integrity of their colleagues and the clear perceptions of the project manager… Cheers, Tony.
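    As a purely illustrative aside: short of an objective measure, one low-tech way to record a compromise with bug-level rigor is to annotate it at the point it was made. Here is a hedged C# sketch; the attribute and its fields are hypothetical, not an established convention.

        using System;

        // Hypothetical marker for documenting deliberate compromises in the
        // code itself, so the debt travels with the thing that carries it.
        [AttributeUsage(AttributeTargets.All, AllowMultiple = true)]
        public sealed class TechnicalDebtAttribute : Attribute
        {
            public string Reason { get; private set; }
            public string TrackingId { get; private set; }

            public TechnicalDebtAttribute(string reason, string trackingId)
            {
                Reason = reason;          // why the corner was cut
                TrackingId = trackingId;  // work item tracking the repayment
            }
        }

        // Usage: the compromise is recorded where it lives, and a tool or a
        // simple search can inventory the outstanding debt later.
        [TechnicalDebt("Retry count hard-coded to hit the demo deadline", "DEBT-142")]
        public class QueueProcessor
        {
        }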

    Read the article

  • Simple collision detection in Unity 2D

    - by N1ghtshade3
    I realise other posts exist with this topic, yet none have gone into enough detail for me. I am attempting to create a 2D game in Unity using C# as my scripting language. Basically I have two objects, player and bomb. Both were created simply by dragging the respective PNG to the stage. I have set up touch controls to move player left and right; gravity of any kind is not needed as I only require it to move x units when I tap either the left or right side of the screen. This movement is stored in a script called playerController.cs and works just fine. I also have a variable health = 3 for player, which is stored in healthScript.cs.

    I am now at a point where I am stuck. I would like it so that when player collides with bomb, health decreases by one and the bomb object is destroyed. So here is what I tried: in a new script called playerPhysics.cs, I added the following:

        void OnCollisionEnter2D(Collision2D coll){
            if(coll.gameObject.name=="bomb")
                GameObject.Destroy("bomb");
            healthScript.health -= 1;
        }

    While I'm fairly sure I don't know the proper way to reference a variable in another script, and that's why the health didn't decrease when I collided, bomb never disappeared from the stage, so I'm thinking there's also a problem with my collision. Initially, I had simply attached playerPhysics.cs to player. After searching around though, it appeared as though player also needed a rigidBody attached to it, so I did that. Still no luck. I tried using a circleCollider (player is a circle), using a rigidBody2D, and using all manner of colliders on one and/or both of the objects.

    If you could please explain what colliders (if any) should be attached to which objects and whether I need to change my script(s), that would be much more helpful than pointing me to one of the generic documentation examples I've already read. Also, if it would be simple to fix the health thing not working, that would be an added bonus but not exactly the focus of this question. Bear in mind that this game is 2D; I'm not sure if that changes anything. Thanks!
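    For reference, here is a minimal sketch of how this is usually wired up, assuming a HealthScript component on the player exposing a public int health field, and a scene object named "bomb" (the names are taken from the question and are illustrative). Both objects need a Collider2D component, and at least the moving object needs a Rigidbody2D; otherwise the 2D collision callbacks never fire.

        using UnityEngine;

        // Illustrative only: in Unity the class name must match the script
        // file name, so in playerPhysics.cs it would be called playerPhysics.
        public class PlayerPhysics : MonoBehaviour
        {
            void OnCollisionEnter2D(Collision2D coll)
            {
                if (coll.gameObject.name == "bomb")
                {
                    // Destroy takes an Object reference, not a string.
                    Destroy(coll.gameObject);

                    // Reach the health field through the component instance
                    // rather than through the class name.
                    HealthScript healthScript = GetComponent<HealthScript>();
                    healthScript.health -= 1;
                }
            }
        }

    Note the braces: without them the health decrement runs on every collision, not just bomb hits. Also, if either collider is marked Is Trigger, OnTriggerEnter2D(Collider2D other) fires instead of the collision callback.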

    Read the article

  • Parallel MSBuild FTW - Build faster in parallel

    - by deadlydog
    Hey everyone, I just discovered this great post yesterday that shows how to have MSBuild build projects in parallel. Basically all you need to do is pass the switches “/m:[NumOfCPUsToUse] /p:BuildInParallel=true” into MSBuild. Example to use 4 cores/processes (if you just pass in “/m” it will use all CPU cores):

        MSBuild /m:4 /p:BuildInParallel=true "C:\dev\Client.sln"

    Obviously this trick will only be useful on PCs with multi-core CPUs (which we should all have by now) and solutions with multiple projects, so there’s no point using it for solutions that only contain one project. Also, testing shows that using multiple processes does not speed up Team Foundation Database deployments either, in case you’re curious.

    Also, I found that if I didn’t explicitly use “/p:BuildInParallel=true” I would get many build errors (even though the MSDN documentation says that it is true by default).

    The poster boasts compile time improvements up to 59%, but the performance boost you see will vary depending on the solution and its project dependencies. I tested with building a solution at my office, and here are my results (runs are in seconds):

        # of Processes    1st Run    2nd Run    3rd Run    Avg       Performance
        1                 192        195        200        195.67    100%
        2                 155        156        156        155.67    79.56%
        4                 146        149        146        147.00    75.13%
        8                 136        136        138        136.67    69.85%

    So I updated all of our build scripts to build using 2 cores (~20% speed boost), since that gives us the biggest bang for our buck on our solution without bogging down a machine, and developers may sometimes compile more than 1 solution at a time. I’ve put the any-PC-safe batch script code at the bottom of this post.

    The poster also has a follow-up post showing how to add a button and keyboard shortcut to the Visual Studio IDE to have VS build in parallel as well (so you don’t have to use a build script); if you do this, make sure you use the .NET 4.0 MSBuild, not the 3.5 one that he shows in the screenshot. While this did work for me, I found it left an MSBuild.exe process always hanging around afterwards for some reason, so watch out (the batch file doesn’t have this problem though). Also, you do get build output, but it may not be the same that you’re used to, and it doesn’t say “Build succeeded” in the status bar when completed, so I chose not to make this my default Visual Studio build option, but you may still want to.

    Happy building!

    -------------------------------------------------------------------------------------
        :: Calculate how many Processes to use to do the build.
        SET NumberOfProcessesToUseForBuild=1
        SET BuildInParallel=false
        if %NUMBER_OF_PROCESSORS% GTR 2 (
            SET NumberOfProcessesToUseForBuild=2
            SET BuildInParallel=true
        )
        MSBuild /maxcpucount:%NumberOfProcessesToUseForBuild% /p:BuildInParallel=%BuildInParallel% "C:\dev\Client.sln"

    Read the article

  • Blender to 3ds max to cal3d format

    - by Kaliber64
    There are quite a few questions on cal3d, but they are old and don't apply anymore.

    In Blender (must be 2.49a for the python script to work!!!): I have a scene with 7 meshes, 1 armature, 10 bones. I tried going down to one mesh to simplify it, but that doesn't change anything. I found a small blend file that was used for cal3d and it exported just fine, so I tried to copy its setup with no success.

    EDIT 8/13/2012: In the last week, here is what I have found so far. I made the mesh in the newest Blender (2.62?) and exported it to import it in the old one (2.49a). I did an animation in the old one (when importing new blend files into old Blenders, it just said it would lose keyframe data, and all was good). Then you hit the last problem: it not exporting meshes. BUT I found that meshes made in the old one export regardless; I can't find any that won't export. So if I used the old Blender to remake my model, I could get it to export :)

    At this point I found a modified release of cal3d (because the most core model variable would not initiate, as I made a really small test subject in the old Blender instead of remaking my big one, which took 4 hours) which fixes the morph objects and adds what cal3d left off with. Under their license they have to release the modification, but it has no documentation, so I have to figure it out on my own. It's mostly the same. But with this lib came a 3ds max exporter.

    My question now is: how do I transfer armature and mesh information from Blender to 3ds max in order to export into cal3d format? Every time I try, the models are see-through and small and there are no bones. The formats I have tried to import are .3ds, .obj (mesh only) and COLLADA. In all of them the mesh is invisible and there are no bones. It says the default texture is on, so I should be able to see it. All the vertices are present; I found a vertex highlighter so I can see those.

    If any of this is confusing, let me know so I can clear it up. It's late.

    Read the article

  • Cloning a dual boot system from HDD to SSD

    - by Alex
    I'm planning on replacing my laptop's HDD with a 256GB SSD, but I have a dual-boot (12.04 and Windows 7) setup and I'd like to be able to directly migrate Ubuntu over without having to reinstall and lose all of my settings.

    GParted reports the following partition setup on my HDD. I am, of course, able to modify it if necessary.

    /dev/sda1 (NTFS), 66.92 MB used out of 200.00 MB: I'm honestly not sure what this partition is for. Maybe for Windows 7 system files? I'm hesitant to mess with it. (Edit: it turns out it is a partition for Windows recovery files in the event of OS corruption, so I don't want to remove it. Plus it also appears to be a major pain to remove anyway.)

    /dev/sda2 (NTFS), 116.35 GB used out of 339.06 GB (boot): This partition is the C:/ drive on my Windows installation. I don't use it on my Ubuntu installation, except that it is the boot partition and thus has grub on it.

    /dev/sda4 (extended), containing /dev/sda5 (ext4), 14.49 GB used out of 91.34 GB, and /dev/sda6 (linux-swap), 5.92 GB: These are my Ubuntu partitions. sda5 contains my documents and all of the files I use on Ubuntu, and (as far as I know) the system files for Ubuntu itself (it's the partition I created when prompted by the Live-DVD installer). sda6 is, of course, the swap partition, which I only need for hibernation (6GB of RAM).

    /dev/sda3 (NTFS), 9.89 GB used out of 14.75 GB: This is an annoying partition that Lenovo created to store some drivers and files that I might need later on. For example, it allows me to use OneKeyRecovery for a quick factory recovery if absolutely necessary; not sure if that'll work on an SSD. It also contains not-so-important files for bloatware installation.

    In total, my HDD only has about 150GB of files on it, so it should fit comfortably on the SSD. The problem is, I want to exactly migrate my files, partitions, OSes, MBR, etc. from my HDD to my SSD, and I'm not quite sure how to do this. I've seen CloneZilla referenced before, but I'm not all too experienced and the documentation for it quite frankly seems a bit like a foreign language to me.

    So, put simply, is there any way I can exactly clone this HDD to an SSD without a massive headache? Also, if it matters, I'll probably be using an external hard drive case (as recommended in online tutorials) to externally attach the SSD to my laptop during the cloning process, due to the lack of two hard drive slots in the machine.

    Read the article

  • An alternative to WebForms for developing in ASP.NET - Any problems with this?

    - by John
    So I have been developing in ASP.NET WebForms for some time now, but I often get annoyed with all the overhead (like ViewState and all the JavaScript it generates) and the way WebForms takes over a lot of the HTML generation. Sometimes I just want full control over the markup and produce efficient HTML of my own, so I have been experimenting with what I like to call HtmlForms: essentially, using ASP.NET WebForms but without the form runat="server" tag. Without this tag, ASP.NET does not seem to add anything to the page at all. From some basic tests it seems that it runs well, and you still have the ability to use code-behind pages and many ASP.NET controls such as repeaters.

    Of course, without the form runat="server" many controls won't work. A post at Enterprise Software Development lists the controls that do require the tag. From that list you will see that all of the form elements like TextBoxes, DropDownLists, RadioButtons, etc. cannot be used. Instead you use normal HTML form controls.

    But how do you access these HTML controls from the code-behind? Retrieving values on post back is easy: you just use Request.QueryString or Request.Form. But passing data to the control could be a little messy. Do you use an ASP.NET Literal control in the value field, or do you use <%= value %> in the markup page? I found it best to add runat="server" to my HTML controls; then you can access the control in your code-behind like this:

        ((HtmlInputText)txtName).Value = "blah";

    Here's an example that shows what you can do with a textbox and drop down list:

    Default.aspx

        <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="NoForm.Default" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title></title>
        </head>
        <body>
            <form action="" method="post">
                <label for="txtName">Name:</label>
                <input id="txtName" name="txtName" runat="server" /><br />
                <label for="ddlState">State:</label>
                <select id="ddlState" name="ddlState" runat="server">
                    <option value=""></option>
                </select><br />
                <input type="submit" value="Submit" />
            </form>
        </body>
        </html>

    Default.aspx.cs

        using System;
        using System.Web.UI.HtmlControls;
        using System.Web.UI.WebControls;

        namespace NoForm
        {
            public partial class Default : System.Web.UI.Page
            {
                protected void Page_Load(object sender, EventArgs e)
                {
                    //Default values
                    string name = string.Empty;
                    string state = string.Empty;

                    if (Request.RequestType == "POST")
                    {
                        //If form submitted (post back)
                        name = Request.Form["txtName"];
                        state = Request.Form["ddlState"];
                        //Server side form validation would go here
                        //and actions to process form and redirect
                    }

                    ((HtmlInputText)txtName).Value = name;
                    ((HtmlSelect)ddlState).Items.Add(new ListItem("ACT"));
                    ((HtmlSelect)ddlState).Items.Add(new ListItem("NSW"));
                    ((HtmlSelect)ddlState).Items.Add(new ListItem("NT"));
                    ((HtmlSelect)ddlState).Items.Add(new ListItem("QLD"));
                    ((HtmlSelect)ddlState).Items.Add(new ListItem("SA"));
                    ((HtmlSelect)ddlState).Items.Add(new ListItem("TAS"));
                    ((HtmlSelect)ddlState).Items.Add(new ListItem("VIC"));
                    ((HtmlSelect)ddlState).Items.Add(new ListItem("WA"));
                    if (((HtmlSelect)ddlState).Items.FindByValue(state) != null)
                        ((HtmlSelect)ddlState).Value = state;
                }
            }
        }

    As you can see, you have similar functionality to ASP.NET server controls but more control over the final markup, and less overhead like ViewState and all the JavaScript ASP.NET adds. Interestingly, you can also use HttpPostedFile to handle file uploads using your own input type="file" control (and the necessary form enctype="multipart/form-data").

    So my question is: can you see any problems with this method, and do you have any thoughts on its usefulness? I have further details and tests on my blog.

    Read the article

  • Writing or extending existing Emacs packages: is it worth it, or should I move to NetBeans/Eclipse?

    - by Andrea
    I'm finishing my master's degree course in CS and I've almost become addicted to Emacs. I've used it to write in C, LaTeX, Java, JSP, XML, Common Lisp, Ada and other languages no other editor supported, like AMPL. I'd like to improve the packages I've been using the most or create new ones but, in practice, I find that the implementation of Emacs leaves a lot to be desired. There are a lot of poorly-featured/poorly-maintained packages with either overlapping functionalities or obscure incompatibilities, and Elisp just seems to foster the situation by lacking the common features modern Lisps have. In contrast, Eclipse and NetBeans are actively improved, and it does seem they can be effective for non-mainstream languages. I tried Hibachi for Ada in Eclipse and it worked well, there's CUPS for Lisp in Eclipse, and LambdaBeans was built using NetBeans components. On the other hand, those plugins seem to be less active than their Emacs counterparts; for example, Hibachi was archived last year.

    What's your opinion on this? Which editor should I write extensions for?

    EDIT: To answer Larry Coleman (see comment below): I like Emacs as a user because it is efficient both for me and the computer I'm using. It's fast, and the textual interface (i.e. minibuffer) allows for quick interaction. It's solid, and packages are usually small and easy to manage. If I need to correct or remove something, I usually just have to change a row in my .emacs or an elisp file, or delete a directory. Eclipse plugins rely on a more complicated process that screwed my Eclipse configuration a couple of times, forcing me to do a clean reinstall. Emacs works as long as I use the basic packages. If I need something more complicated, the situation gets pretty hairy. As a "power user" I think that the best I can hope for is to write a severely crippled version of the extensions I'd actually like to have; in other words, that it's not worth the trouble. I'd like to write extensions for the things I'd like to have automated in Emacs, for example project support with automated tag-table update on file writing. There are a few projects on this that lack integration, documentation, extensibility and so forth. The best one is probably CEDET, to which I believe Greenspun's tenth rule can be applied.

    EDIT: To comment on Larry Coleman's answer: I'm pretty sure I can pick up elisp programming, but the extensions I have in mind don't exist yet despite their relative simplicity and the effort more knowledgeable people have poured into related projects. This makes me wonder whether that is because of the way Emacs is developed, i.e. people tend to write their own little extensions without coordination, or because of its implementation, its extension language not being able to keep up with the growing complexity.

    Read the article

  • Have you used the ExecutionValue and ExecValueVariable properties?

    The ExecutionValue property and its friend ExecValueVariable are a much undervalued feature of SSIS, and many people I talk to are not even aware of their existence, so I thought I’d try and raise their profile a bit.

    The ExecutionValue property is defined on the base object Task, so all tasks have it available, but it is up to the task developer to do something useful with it. The basic idea behind it is that it allows the task to return something useful and interesting about what it has performed, in addition to the standard success or failure result. The best example perhaps is the Execute SQL Task, which uses the ExecutionValue property to return the number of rows affected by the SQL statement(s). This is a very useful feature, something people often want to capture into a variable, and start using the result set options to do. Unfortunately we cannot read the value of a task property at runtime from within a SSIS package, so the ExecutionValue property on its own is a bit of a let-down, but enter the ExecValueVariable and we have the perfect marriage.

    The ExecValueVariable is another property exposed through the task (TaskHost), which lets us select a SSIS package variable. What happens now is that when the task sets the ExecutionValue, the interesting value is copied into the variable we set on the ExecValueVariable property, and a variable is something we can access and do something with. So, put simply, if the ExecutionValue property value is of interest, make sure you create yourself a package variable and set its name as the ExecValueVariable. Have a look at the 3-step guide below:

    1. Configure your task as normal; for example, the Execute SQL Task, which here calls a stored procedure to do some updates.
    2. Create a variable of a suitable type to match the ExecutionValue; an integer is used to match the result we want to capture, the number of rows.
    3. Set the ExecValueVariable for the task; just select the variable we created in step 2. You need to do this in the Properties grid for the task (short-cut key: select the task and press F4).

    Now when we execute the sample task above, our variable UpdateQueueRowCount will get the number of rows we updated in our Execute SQL Task.

    I’ve tried to collate a list of tasks that return something useful via the ExecutionValue and ExecValueVariable mechanism, but the documentation isn’t always great.

        Task                                     ExecutionValue Description
        Execute SQL Task                         Returns the number of rows affected by the SQL statement or statements.
        File System Task                         Returns the number of successful operations performed by the task.
        File Watcher Task                        Returns the full path of the file found.
        Transfer Error Messages Task             Returns the number of error messages that have been transferred.
        Transfer Jobs Task                       Returns the number of jobs that are transferred.
        Transfer Logins Task                     Returns the number of logins transferred.
        Transfer Master Stored Procedures Task   Returns the number of stored procedures transferred.
        Transfer SQL Server Objects Task         Returns the number of objects transferred.
        WMI Data Reader Task                     Returns an object that contains the results of the task. (Not exactly clear, but I assume it depends on the WMI query used.)
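    To round this off, here is a hedged sketch of consuming the captured value from a Script Task placed after the Execute SQL Task. The variable name UpdateQueueRowCount comes from the walkthrough above, and it must be listed in the Script Task's ReadOnlyVariables for the code to see it.

        // Script Task body (C#). Assumes User::UpdateQueueRowCount was set as
        // the Execute SQL Task's ExecValueVariable, as in steps 1-3 above.
        public void Main()
        {
            int rowsUpdated = (int)Dts.Variables["User::UpdateQueueRowCount"].Value;

            // Surface the count in the execution log as an information event.
            bool fireAgain = true;
            Dts.Events.FireInformation(0, "RowCountCheck",
                string.Format("Execute SQL Task updated {0} rows.", rowsUpdated),
                string.Empty, 0, ref fireAgain);

            Dts.TaskResult = (int)ScriptResults.Success;
        }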

    Read the article

  • Dealing with coworkers when developing, need advice [closed]

    - by Yippie-Kai-Yay
    I developed our current project architecture and started developing it on my own (reaching something like revision 40). We're developing a simple subway routing framework, and my design seemed to be done extremely well: several main models, corresponding views, main logic and data structures were modeled "as they should be" and fully separated from rendering; the algorithmic part was also implemented apart from the main models and had a minor number of intersection points. I would call that design scalable, customizable, easy to implement, interacting mostly based on "black box interaction" and, well, very nice.

    Now, what was done:

    - I started some implementations of the corresponding interfaces, ported some convenient libraries and wrote implementation stubs for some application parts.
    - I had the document describing coding style and examples of that coding style usage (my own written code).
    - I forced the usage of more or less modern C++ development techniques, including no-delete code (wrapped via smart pointers), etc.
    - I documented the purpose of concrete interface implementations and how they should be used.
    - Unit tests (mostly integration tests, because there wasn't a lot of "actual" code) and a set of mocks for all the core abstractions.

    I was absent for 12 days. What do we have now (the project was developed by 4 other members of the team)?

    - 3 different coding styles all over the project (I guess two of them agreed to use the same style :). The same applies to the naming of our abstractions (e.g. CommonPathData.h, SubwaySchemeStructures.h), which are basically headers declaring some data structures.
    - Absolute lack of documentation for the recently implemented parts.
    - What I could recently call a single-purpose abstraction now handles at least 2 different types of events, has tight coupling with other parts, and so on.
    - Half of the used interfaces now contain member variables (sic!).
    - Raw pointer usage almost everywhere.
    - Unit tests disabled, because "(Rev.57) They are unnecessary for this project".
    - ... (that's probably not everything).

    Commit history shows that my design was interpreted as overkill, and people started combining it with personal bicycles and reimplemented wheels, and then had problems integrating code chunks. Now the project still does only a small amount of what it has to do, we have severe integration problems, and I assume some memory leaks.

    Is there anything possible to do in this case? I do realize that all my efforts didn't have any benefit, but the deadline is pretty soon and we have to do something. Did someone have a similar situation? Basically I thought that a good (well, I did everything that I could) start for the project would probably lead to something nice; however, I understand that I'm wrong. Any advice would be appreciated. Sorry for my bad English.

    Read the article

< Previous Page | 331 332 333 334 335 336 337 338 339 340 341 342  | Next Page >