Search Results

Search found 2515 results on 101 pages for 'distributed filesystems'.

Page 71/101

  • CodePlex Daily Summary for Thursday, November 29, 2012

    CodePlex Daily Summary for Thursday, November 29, 2012

    Popular Releases

      • JayData - The cross-platform HTML5 data-management library for JavaScript: JayData 1.2.5. What's new in JayData 1.2.5 (for details, check the release notes): Handlebars template engine support - implement data manager applications with JayData using Handlebars.js for templating; include JayDataModules/handlebars.js and begin typing the mustaches :) (blog posts: "Handlebars templates in JayData" and "Handlebars helpers and model driven commanding in JayData"). Easy JayStorm cloud data management - manage cloud data using the same syntax and data management concept just like any other data ...
      • nopCommerce. Open source shopping cart (ASP.NET MVC): nopCommerce 2.70. Highlighted features and improvements: performance optimization; search engine optimization (ID-less URLs for products, categories, and manufacturers); added ACL support (access control list) on products and categories; minified and bundled JavaScript files; a store owner can now decide which billing/shipping address fields are enabled/disabled/required (as already done for the registration page); moved to MVC 4 (.NET 4.5 is required); Visual Studio 2012 is now required to work ...
      • SQL Server Partition Management: Partition Management Release 3.0. Adds support for SQL Server 2012 and is backward compatible with SQL Server 2008 and 2005. The release consists of a readme file, the executable, and the source code (Visual Studio project). Enhancements include: support for columnstore indexes in SQL Server 2012; the ability to create T-SQL scripts for staging table and index creation operations; full support for global date and time formats, locale independent; support for binary partitioning column types; fixes to is...
      • PDF Library: PDFLib v2.0. This new version includes many bug fixes and adds support for stream objects and cross-reference object streams. New feature: extract images from the PDF.
      • MCEBuddy 2.x: MCEBuddy 2.3.10, a critical update to 2.3.9. Changelog for 2.3.10 (32-bit and 64-bit): 1. AsfBin executable was missing from the build. 2. Removed extra references from the build to avoid conflicts. 3. Showanalyzer installation is now checked on the remote engine machine. Changelog for 2.3.9 (32-bit and 64-bit): 1. Added support for a WTV output profile. 2. Added support for minimizing MCEBuddy to the system tray. 3. Added support for a custom archive folder. 4. Added support for disabling subdirectory monitoring. 5. Added support for better TS fil...
      • DotNetNuke® Community Edition CMS: 07.00.00. Major highlights: fixed an issue that caused profiles of deleted users to be available; removed the postback after checkboxes are selected in Page Settings > Taxonomy; implemented the functionality required to edit security role names and social group names; fixed a JavaScript error when using a ";" semicolon as a profile property; fixed an issue when using DateTime properties in profiles; fixed a viewstate error when using Facebook authentication in conjunction with "require valid profile fo...
      • CODE Framework: 4.0.21128.0. See the change notes in the documentation section for details on what's new.
      • Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.76. Fixed a typo in ObjectLiteralProperty.IsConstant that caused all object literals to be treated like they were constants, and possibly moved around in the code when they shouldn't be.
      • Kooboo CMS: Kooboo CMS 3.3.0. New features: dropdown/radio/checkbox lists no longer reference the userkey and instead refer to the UUID field for the input value; you can now delete, export, and import content from the database in the site settings; labels can now be imported and exported; you can now set the required password strength and the maximum number of incorrect login attempts; child sites can inherit plugins from their parent sites; the view parameter can be changed through the page_context.current value; addition of c...
      • Distributed Publish/Subscribe (Pub/Sub) Event System: Distributed Pub Sub Event System Version 3.0. Important: Wsp 3.0 is NOT backward compatible with Wsp 2.1. Prerequisites: the Microsoft Visual C++ 2010 Redistributable Package (x64: http://www.microsoft.com/download/en/details.aspx?id=14632, x86: http://www.microsoft.com/download/en/details.aspx?id=5555). Wsp now uses Rx (Reactive Extensions) and .NET 4.0. 3.0 enhancements: the topology changed from a hierarchy to peer-to-peer groups, which should provide much greater scalability and more fault-resi...
      • datajs - JavaScript Library for data-centric web applications: datajs version 1.1.0. datajs is a cross-browser and UI-agnostic JavaScript library that enables data-centric web applications with the following features: an OData client that enables CRUD operations, including batching and metadata support, using both ATOM and JSON payloads; a single store abstraction that provides a common API on top of HTML5 local storage technologies; a data cache component that allows reading data ranges from a collection and storing them locally to reduce the number of network requests. Changes...
      • Team Foundation Server Administration Tool: 2.2. TFS Administration Tool 2.2 supports the Team Foundation Server 2012 Object Model. Visual Studio 2012 or Team Explorer 2012 must be installed before you can install this tool; you can download and install Team Explorer 2012 from http://aka.ms/TeamExplorer2012. There are no functional changes between the previous release (2.1) and this release.
      • Coding Guidelines for C# 3.0, C# 4.0 and C# 5.0: Coding Guidelines for CSharp 3.0, 4.0 and 5.0. See the change history for a detailed list of modifications.
      • Math.NET Numerics: Math.NET Numerics v2.3.0. Portable library build: adds support for WP8 (.NET 4.0 and higher, SL5, WP8 and .NET for Windows Store apps); new portable build also for the F# extensions (.NET 4.5, SL5 and .NET for Windows Store apps); NuGet: portable builds are now included in the main packages, with no more need for special portable packages. Linear algebra: continued major storage rework, in this release focusing on vectors (the previous release focused on matrices); thin QR decomposition (in addition to the existing full QR); static Cr...
      • ExtJS based ASP.NET 2.0 Controls: FineUI v3.2.1 (2012-11-25). The release notes were garbled in extraction (original in Chinese); the recoverable fragments mention a MenuCheckBox CheckedChanged fix, window.IDS, TabCollection/ControlBaseCollection changes, Grid SelectAllRows and PageItems improvements, new sample pages (grid/gridpageitems.aspx, grid/gridpageitemsrowexpander.aspx, grid/gridpageitems_pagesize.aspx, grid/gridrowexpanderexpandall2.aspx), and an ExpandAllRowExpanders feature.
      • VidCoder: 1.4.9 Beta. Updated the HandBrake core to SVN 5079. Fixed crashes when encoding DVDs with title gaps.
      • ZXing.Net: ZXing.Net 0.10.0.0. On the way to release 1.0, the API should be stable now with this version: sync with rev. 2521 of the Java version; Windows Phone 8 assemblies; improvements and fixes.
      • BlackJumboDog: Ver5.7.3 (2012.11.24). The six release-note items were garbled in extraction (original in Japanese); the recoverable fragments mention SMTP changes, DNS CNAME handling, and DNS TTL settings.
      • Liberty: v3.4.3.0, released 23rd November 2012. Change log, additions: (H4) a dialog which gives further instructions when attempting to open a "Halo 4 Data" file; (H4) a short note in the weapon editor stating that dropping your weapons will cap their ammo; (Reach) edit the world's gravity; (Reach) fine invincibility controls in the object editor; (Reach) edit object velocity; (Reach) change the teams of AI bipeds and vehicles; (Reach) enable/disable fall damage on the biped editor screen; (Reach) make AIs deaf and/or blind in the objec...
      • Umbraco CMS: Umbraco 4.11.1. NuGet; read the release blog posts for 4.11.0 and 4.11.1. What's new: 50 bugfixes (see the issue tracker for a complete list); read the documentation for the MVC bits. Breaking change: GetPropertyValue now returns an object, not a string (only affects upgrades from 4.10.x to 4.11.0). Note: if you need Courier, use the release candidate (as of build 26). The code editor has been greatly improved, but is sometimes problematic in Internet Explorer 9 and lower. Pr...

    New Projects

      • Convection Game API: a basic game API written in C# using XNA 4.0.
      • CS^2 (Casaba's Simple Code Scanner): a tool for managing greps to be run over a source tree and correlating those greps to issues for bug generation.
      • CSparse.NET: a concise sparse matrix package for .NET.
      • Document Generation Utility: extracts different XML entities and wraps them up with jumbled legal English alphabets, with output in the file format defined in settings.
      • DuinoExplorer: file manager, HTTP, remote, copy & paste.
      • GenomeOS: an experimental x86 object-oriented operating system programmed in C/C++ and x86 Intel assembly.
      • Hush.Project: database.
      • Invi: NA.
      • LibreTimeTracker Client: a free alternative to the original TimeTracker Client.
      • MO Virtual Router: a virtual router for Windows 8.
      • My Word Game Publisher: a full web application developed in .NET 2.0 using C#, XML, MS SQL, and ASPX.
      • MyWGP XmlDataValidator: designed and developed (in Silverlight 2, C#) to validate the requirements of data in XML files: the type and size of data, and the requirement status. It allows a user to choose the type of data, enter the min and max size, and check the requirement status for data elements.
      • OMX_AL_test_environment: an OMX application layer simulation/testing environment.
      • Orchard Coverflow: an Orchard CMS module that provides a way to create iTunes-like coverflow displays out of your Media items.
      • SharePoint standard list form javascript utility (SPListFormUtility): a small JavaScript library that helps control the appearance and behavior of standard SharePoint list forms; SharePoint 2010 and 2013 support.
      • Shuttle Core: a project that contains cross-cutting libraries for use in .NET software development.
      • The Prism architecture lack for the Interactions!: (Silverlight) the defect opens popups more than once and creates memory leaks; an example of the defect is included.
      • YouCast: (from YouTube and Podcast) allows you to subscribe to video feeds on YouTube as podcasts in any standard podcatcher like Zune PC, iTunes and so forth.
      • ZEFIT: a time-tracking tool built as a fourth-year IT apprenticeship project at GIBB in Bern.
      • One further project has a Chinese name and description that were garbled in extraction; it appears to be a Windows Phone library.

    Read the article

  • Team Software Development using Ruby on Rails

    - by Panoy
    I used to work alone on small to medium-sized programming projects and have no experience working in a team environment. Currently there will be three of us in an in-house software development team tasked with developing a number of applications for an academic institution. We have decided to target the web for the majority of the projects and are planning to choose Ruby on Rails, and I would like to ask for your input, advice, and approaches to developing software as a team using the RoR web framework. One thing that has really confounded me is how you divide the programming tasks of a project when there are three of you actually doing the coding. It's obvious that we as developers approach a problem in a modular way and finish it one piece after another. If the project consists of three modules, should each of us focus on one of those modules? Would it be faster that way? What if the three of us focused on one module first (that's what I would really prefer)? Is using a distributed version control system such as Git the answer to this type of problem? Please don't forget to share your tips and experiences with team software development. Cheers!

    Read the article

  • Submitting software to a competition, it becomes their property?

    - by myrkos
    So I'm about to submit a game to a competition, but as I looked through the rules a chunk grabbed my attention: All Entries become the sole and exclusive property of Sponsor and will not be acknowledged or returned. Sponsor shall own all right, title and interest in and to each Entry, including without limitation all results and proceeds thereof and all elements or constituent parts of Entry (including without limitation the Mobile App, the Design Documents, the Video Trailer, the Playable and all illustrations, logos, mechanicals, renderings, characters, graphics, designs, layouts or other material therein) and all copyrights and renewals and extensions of copyrights therein and thereto. Without limitation of the foregoing, each Eligible Entrant shall and hereby does absolutely and irrevocably assign and transfer all of his or her right, title and interest in his or her Entry to Sponsor, and Sponsor shall have the right and may authorize others to use, copy, sublicense, transmit, modify, manipulate, publish, delete, reproduce, perform, distribute, display and otherwise exploit the Entry (and to create and exploit derivative works thereof) in any manner, including without limitation to embody the Entry, in whole or in part, in apps and other works of any kind or nature created, developed, published or distributed by Sponsor and to and register as a trademark in any country in Sponsor’s name any component of the Entry, without such Eligible Entrant reserving any rights or claims with respect thereto. Sponsor shall have the exclusive right, in perpetuity, throughout the Territory to change, adapt, modify, use, combine with other material and otherwise exploit the Entry in all media now known or hereafter devised and in any manner, in its sole and absolute discretion, without the need for any payment or credit to Entrant. So the game will become the sponsor's property; however, they don't ask for source code. So will I still own the rights to the source code, whatever that means? And if it doesn't win said competition, will I be able to publish it myself without their trademarks? I am very new to software legality stuff, so I would appreciate any clarification. Since there's a possibility I won't even own the source, is it possible to make the game core engine open source software with a not-very-restrictive license and include that in the project, so I at least still own the game engine? Or does it not work that way?

    Read the article

  • PerlRegEx vs RegularExpressionsCore Delphi Units

    - by Jan Goyvaerts
    The RegularExpressionsCore unit that is part of Delphi XE is based on the latest class-based PerlRegEx unit that I developed. Embarcadero only made a few changes to the unit. These changes are insignificant enough that code written for earlier versions of Delphi using the class-based PerlRegEx unit will work just the same with Delphi XE. The unit was renamed from PerlRegEx to RegularExpressionsCore. When migrating your code to Delphi XE, you can choose whether you want to use the new RegularExpressionsCore unit or continue using the PerlRegEx unit in your application. All you need to change is which unit you add to the uses clause in your own units. Indentation and line breaks in the code were changed to match the style used in the Delphi RTL and VCL code. This does not change the code, but makes it harder to diff the two units. Literal strings in the unit were separated into their own unit called RegularExpressionsConsts. These strings are only used for error messages that indicate bugs in your code. If your code uses TPerlRegEx correctly then the user should not see any of these strings. My code uses assertions to check for out of bounds parameters, while Embarcadero uses exceptions. Again, if you use TPerlRegEx correctly, you should never get any assertions or exceptions. The Compile method raises an exception if the regular expression is invalid in both my original TPerlRegEx component and Embarcadero’s version. If your code allows the user to provide the regular expression, you should explicitly call Compile and catch any exceptions it raises so you can tell the user there is a problem with the regular expression. Even with user-provided regular expressions, you shouldn’t get any other assertions or exceptions if your code is correct. Note that Embarcadero owns all the rights to their RegularExpressionsCore unit. Like all the other RTL and VCL units, this unit cannot be distributed by myself or anyone other than Embarcadero. I do retain the rights to my original PerlRegEx unit which I will continue to make available for those using older versions of Delphi.

    Read the article

  • How safe is it to rely on third-party Python libs in a production product?

    - by skyler
    I'm new to Python and come from the write-everything-yourself world of PHP (at least that is how I always approached it). I'm using Flask, WTForms, and Jinja2, and I've just discovered Flask-Login, which I want to use. My question is about the reliability of relying on third-party libraries for core functionality in a project that is planned to be around for several years. I've installed these libraries (via pip) into a virtualenv environment. What happens if these libraries stop being distributed? Should I back up these libraries (are they eggs)? Can I store these libraries in my project itself, instead of relying on pip to install them in a virtualenv, and should I store them separately? I'm worried that I'll rely on a library for core functionality, and then one day I'll download an incompatible version through pip, or the author or maintainer will stop distributing it and it will no longer be available. How can I protect against this and ensure that any third-party libraries I use in my projects will always be available as they are now?
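    One way to hedge against silently picking up an incompatible version is to pin exact versions (pip freeze can produce the list) and assert them at startup. A minimal sketch; the version numbers below are hypothetical, so substitute whatever your known-good virtualenv reports:

        # check_deps.py -- fail fast if installed packages drift from the pinned set.
        # The version numbers are hypothetical placeholders; pin what "pip freeze"
        # reports for your known-good virtualenv. Requires Python 3.8+.
        from importlib.metadata import PackageNotFoundError, version

        PINNED = {
            "Flask": "0.9",
            "WTForms": "1.0.2",
            "Jinja2": "2.6",
            "Flask-Login": "0.1.3",
        }

        def check_pinned_versions(pinned=PINNED):
            problems = []
            for name, expected in pinned.items():
                try:
                    installed = version(name)
                except PackageNotFoundError:
                    problems.append(f"{name}: not installed (expected {expected})")
                    continue
                if installed != expected:
                    problems.append(f"{name}: installed {installed}, expected {expected}")
            return problems

        if __name__ == "__main__":
            for problem in check_pinned_versions():
                print(problem)

    Keeping a local archive of the pinned distributions (for example, a directory checked in next to the project, or a private package index) also covers the case where a package stops being distributed.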

    Read the article

  • Hidden Gems: Accelerating Oracle Data Integrator with SOA, Groovy, SDK, and XML

    - by Alex Kotopoulis
    On the last day of Oracle OpenWorld, we had a final advanced session on getting the most out of Oracle Data Integrator through various advanced techniques. The primary way to improve your ODI processes is to choose the optimal knowledge modules for your load and take advantage of the optimized tools of your database, such as Oracle Data Pump and similar mechanisms in other databases. Knowledge modules also allow you to customize tasks, letting you codify best practices that are consistently applied by all integration developers. The ODI SDK is another very powerful means to automate and speed up your integration development process. It allows you to automate life cycle management, code comparison, repetitive code generation, and changes to your integration projects. The SDK is easily accessible through Java or scripting languages such as Groovy and Jython. Finally, all Oracle Data Integration products provide services that can be integrated into a larger Service Oriented Architecture. This moves data integration from an isolated environment into an agile part of a larger business process environment. All Oracle data integration products can play a part in this: Oracle GoldenGate can integrate into business event streams by processing JMS queues or publishing new events based on database transactions. Oracle Data Integrator allows full control of its runtime sessions through web services, so that integration jobs can become part of business processes. Oracle Data Service Integrator provides a data virtualization layer over your distributed sources, allowing unified reading and updating of heterogeneous data without replicating and moving it. Oracle Enterprise Data Quality provides data quality services to cleanse and deduplicate your records through web services.

    Read the article

  • How to sell logistical procedures that require less time to perform but more finesse?

    - by foampile
    I am working with a group where part of the responsibilities is managing a certain set of configuration files which, of course, have the same skeleton/structure across different environments but different values (like server, user, this setting, that setting, etc.). Pretty classic scenario... The problem is that everyone just goes and modifies the final, environment-specific files and basically repeats the work for every environment. Personally, I am offended to have to perform repetitive, mundane tasks in this day and age when we have technologies to automate it all. So I devised a very simple procedure of abstracting the files into templates, stubbing env-specific values with parameters, and then wrote a simple Perl script that, given a template and an environment matrix with env-specific values for each param, produces the final file. This is nothing special, cutting-edge or revolutionary -- I am pretty sure that 20 years ago efficient places did their CM like that. However, it requires that changes be made at the template level and then distributed across different environments using the script, not made in the final environment-specific files. This is where I am encountering resentment: they feel "comfortable" doing it the old way of manual, repeated labor. Personally, I don't have a problem with them working hard rather than smart, but the problem is that when I have to build on top of someone else's changes, I have to merge their changes into my template from a specific file, which takes time and is grueling. So my question is how to go about selling my method, which makes things so much faster, in an environment that is resistant to change and where most things have to be done at the level of the least competent team member?
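    The poster's script is in Perl; here is a minimal Python sketch of the same template-plus-environment-matrix idea. The file names, parameter names, and matrix values are illustrative:

        # gen_config.py -- render an environment-specific config file from a
        # shared template. Placeholders use string.Template syntax ($server, ...).
        import sys
        from string import Template

        # Hypothetical environment matrix: one row of values per environment.
        ENVIRONMENTS = {
            "dev":  {"server": "dev-db01",  "user": "app_dev",  "log_level": "DEBUG"},
            "prod": {"server": "prod-db01", "user": "app_prod", "log_level": "WARN"},
        }

        def render(template_path, env):
            with open(template_path) as f:
                template = Template(f.read())
            # substitute() raises KeyError if the template references a
            # parameter missing from the matrix, so drift is caught immediately.
            return template.substitute(ENVIRONMENTS[env])

        if __name__ == "__main__":
            print(render(sys.argv[1], sys.argv[2]))

    Using the strict substitute() rather than safe_substitute() means a new template parameter missing from the matrix fails loudly instead of silently producing a half-filled file.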

    Read the article

  • Is ZeroMQ a good choice to make a Python app and a C# managed assembly work together?

    - by Alex Bausk
    I have a task that involves talking to a .NET-based API (namely AutoCAD) to retrieve data, send commands, and react to events. I want to separate the API operations and the program logic proper (largely already implemented in Python) by using natural tools for both: a C# DLL for the former and a Python app for the latter. To connect these two pieces, I began exchanging JSON in ZeroMQ messages. I'm at an early development stage, but having recently discovered that ZeroMQ does not guarantee message delivery or ordering, I have reservations about whether this is a feasible way to go. Right now my app is a very basic REQ/REP pair, and I plan to handle reacting to events and executing different commands by adding some sort of 'recipient-function' field to my message format. The reason I want to use ZMQ is that I might be able to scale the software into a larger, multi-user, distributed solution sometime. I am a lay programmer, so I would like to ask for your advice about this architecture. Should I just go ahead with it and plan to deal with message reliability/ordering when problems appear? Should I consider developing some kind of REST wrapper around ZMQ?
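    Whether ZeroMQ's delivery guarantees are enough is a judgment call, but for reference, here is a minimal pyzmq sketch of the REP side of the pair described above, including the 'recipient-function' dispatch field (the port number and field names are illustrative):

        # server.py -- the Python side of the REQ/REP pair, replying to JSON
        # requests from the C# DLL. Assumes pyzmq is installed; the port and
        # message fields are illustrative.
        import zmq

        def serve(port=5555):
            ctx = zmq.Context()
            sock = ctx.socket(zmq.REP)
            sock.bind("tcp://*:%d" % port)
            while True:
                msg = sock.recv_json()   # blocks until the peer sends a request
                handler = msg.get("recipient_function", "unknown")
                # Dispatch on the 'recipient-function' field, then reply;
                # a REP socket must answer every request before the next recv.
                sock.send_json({"status": "ok", "handled_by": handler})

        if __name__ == "__main__":
            serve()

    On the reliability concern: plain REQ/REP locks up if a message is lost, so the usual advice from the ZeroMQ guide is to add timeouts and retries on the requesting side (the "Lazy Pirate" pattern) rather than to assume delivery.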

    Read the article

  • The balance between client and server functionality

    - by Eugen Martynov
    I want to bring up a discussion that started in our team and get your opinion on it. Assume we have a user account which can have different credentials for authentication and an associated email for recovery. A user can sign up with an email address or use a social profile to complete the signup process. The REST API from the backend to the client looks like:
    - Create account
    - Authorise
    - Update user data
    - Link social account
    - Register email
    - Verify email
    In addition, our BE is distributed and divided between several services/servers/clusters, so different calls go to different endpoints. In the case of social signup, some of the steps can be skipped or simplified. For example, with Facebook signup we can already skip the email registration and verification steps (we ask the user for email permission), link the social account, and pre-fill the user's display name. So we proposed to have another endpoint which hides/combines the different calls on the BE and returns the whole process result to the client; a sketch follows below. The pros of this approach:
    - No more duplication of functionality between clients
    - Faster networking and better user experience
    The cons of this approach:
    - Additional work for the backend
    - Probably more complex scenarios in future updates
    I would like to get your opinion or experience with this situation, especially if you have already run into the second point against.
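    A minimal sketch of such a facade endpoint, here in Flask with entirely hypothetical internal service URLs and payload fields, just to make the shape concrete:

        # facade.py -- one client-facing endpoint that fans out to the internal
        # services. All URLs and payload fields are hypothetical.
        import requests
        from flask import Flask, jsonify, request

        app = Flask(__name__)

        ACCOUNTS = "http://accounts.internal/api"
        SOCIAL = "http://social.internal/api"

        @app.route("/signup/facebook", methods=["POST"])
        def facebook_signup():
            token = request.get_json()["access_token"]
            # One call from the client; the facade sequences the internal steps.
            account = requests.post(ACCOUNTS + "/accounts",
                                    json={"source": "facebook"}).json()
            requests.post(SOCIAL + "/links",
                          json={"account_id": account["id"], "token": token})
            # Email registration/verification is skipped here, since the email
            # comes from the Facebook permission the post mentions.
            return jsonify({"account_id": account["id"], "status": "complete"})

        if __name__ == "__main__":
            app.run()

    The design trade-off is visible even in the sketch: the facade owns the sequencing (and must handle partial failure between the two internal calls), which is precisely the "additional work for the backend" listed among the cons.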

    Read the article

  • How to Set Up a Hadoop Cluster Using Oracle Solaris (Hands-On Lab)

    - by Orgad Kimchi
    Oracle Technology Network (OTN) published the "How to Set Up a Hadoop Cluster Using Oracle Solaris" OOW 2013 hands-on lab. This hands-on lab presents exercises that demonstrate how to set up an Apache Hadoop cluster using Oracle Solaris 11 technologies such as Oracle Solaris Zones, ZFS, and network virtualization. Key topics include the Hadoop Distributed File System (HDFS) and the Hadoop MapReduce programming model. We will also cover the Hadoop installation process and the cluster building blocks: the NameNode, a secondary NameNode, and DataNodes. In addition, you will see how you can combine the Oracle Solaris 11 technologies for better scalability and data security, and you will learn how to load data into the Hadoop cluster and run a MapReduce job. Summary of lab exercises -- this hands-on lab consists of 13 exercises covering various Oracle Solaris and Apache Hadoop technologies:
    1. Install Hadoop.
    2. Edit the Hadoop configuration files.
    3. Configure the Network Time Protocol.
    4. Create the virtual network interfaces (VNICs).
    5. Create the NameNode and the secondary NameNode zones.
    6. Set up the DataNode zones.
    7. Configure the NameNode.
    8. Set up SSH.
    9. Format HDFS from the NameNode.
    10. Start the Hadoop cluster.
    11. Run a MapReduce job.
    12. Secure data at rest using ZFS encryption.
    13. Use Oracle Solaris DTrace for performance monitoring.
    Read it now

    Read the article

  • Develop in trunk and then branch off, or in release branch and then merge back?

    - by Torben Gundtofte-Bruun
    Say we've decided to follow a "release-based" branching strategy, so we'll have a branch for each release, and we can add maintenance updates as sub-branches of those. Does it matter whether we: develop and stabilize a new release in the trunk and then "save" that state in a new release branch; or first create that release branch and only merge into the trunk when the branch is stable? I find the former easier to deal with (less merging necessary), especially when we don't develop multiple upcoming releases at the same time. Under normal circumstances we would all be working on the trunk, and only work on release branches if there are bugs to fix. What is the trunk actually used for in the latter approach? It seems to be almost obsolete, because I could create a future release branch based on the most recent release branch rather than on the trunk. Details based on a comment below: Our product consists of a base platform and a number of modules on top; each is developed and even distributed separately from the others. Most team members work on several of these areas, so there's partial overlap between people. We generally work only on one future release and not at all on existing releases; one or two people might work on a bugfix for an existing release for short periods of time. Our work isn't compiled, and it's a mix of Unix shell scripts, XML configuration files, SQL packages, and more -- so there's no way to have push-button builds that can be tested. That's done manually, which is a bit laborious. A release cycle is typically half a year or more for the base platform, and often one month for the modules.

    Read the article

  • Enablement 2.0 Get Specialized!

    - by mseika
    Enablement 2.0 Get Specialized! The Oracle PartnerNetwork Specialized program is releasing new certifications on our latest products, and partners are invited to be the first candidates to get certified. Oracle's certified exams go through a rigorous review process called a "beta period". Here are a few advantages of taking a beta exam: certification exams taken during the beta period count towards company Specializations; most new Certified Specialist exams have no training requirement; and beta exam vouchers are available in limited quantities, so request a voucher today by contacting the Partner Enablement Team and act fast to reserve your test from the list below. FREE Certification Testing: are you attending OPN Exchange @ OpenWorld? Then join us at the OPN Specialist Test Fest, October 1st to 4th 2012, Marriott Marquis Hotel. Pre-register now! The beta testing period will end on October 6th, 2012 for Oracle E-Business Suite R12 Project Essentials (1Z1-511); on October 13th, 2012 for Oracle Hyperion Data Relationship Management Essentials (1Z1-588); and on November 17th, 2012 for Oracle Global Trade Management 6 Essentials (1Z1-589). Coming soon in beta: Oracle Fusion Distributed Order Orchestration Essentials Exam (1Z1-469). Take the exam(s) now at a nearby Pearson VUE testing center! Contact Us: please direct any inquiries to the Oracle Partner Enablement team at [email protected] For More Information: Oracle Certification Program Beta Exams, OPN Certified Specialist Exam Study Guides, OPN Certified Specialist FAQ

    Read the article

  • Reformatting and version control

    - by l0b0
    Code formatting matters. Even indentation matters. And consistency is more important than minor improvements. But projects usually don't have a clear, complete, verifiable and enforced style guide from day one, and major improvements may arrive any day. Maybe you find, while working on adding more columns to the query, that

        SELECT id, name, address FROM persons JOIN addresses ON persons.id = addresses.person_id;

    would be better written as

        SELECT persons.id, persons.name, addresses.address
        FROM persons
        JOIN addresses ON persons.id = addresses.person_id;

    Maybe this is the most complex of all four queries in your code, or a trivial query among thousands. No matter how difficult the transition, you decide it's worth it. But how do you track code changes across major formatting changes? You could just give up and say "this is the point where we start again", or you could reformat all queries in the entire repository history. If you're using a distributed version control system like Git, you can revert to the first commit ever and reformat your way from there to the current state. But it's a lot of work, and everyone else would have to pause work (or be prepared for the mother of all merges) while it's going on. Is there a better way to change history that gives the best of both worlds: the same style in all commits, and minimal merge work? To clarify, this is not about best practices when starting the project, but rather about what should be done when a large refactoring has been deemed a Good Thing™ but you still want a traceable history. Never rewriting history is great if it's the only way to ensure that your versions always work the same, but what about the developer benefits of a clean rewrite? Especially if you have ways (tests, syntax definitions or an identical binary after compilation) to ensure that the rewritten version works exactly the same way as the original?
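    For the "reformat all queries in the entire repository history" option, Git's history-rewriting machinery can apply a formatter to every revision mechanically. A hedged sketch, assuming a formatter script of your own (the script name is hypothetical, commit hashes all change, and everyone must re-clone afterwards):

        # reformat_history.py -- rewrite every commit so each tree has been run
        # through a formatter. Uses git filter-branch; the formatter script is
        # hypothetical. WARNING: rewrites history; coordinate with the team first.
        import subprocess

        def reformat_all_history(formatter_cmd="./format-sql-files.sh"):
            subprocess.run(
                ["git", "filter-branch", "--tree-filter", formatter_cmd,
                 "--", "--all"],
                check=True,
            )

        if __name__ == "__main__":
            reformat_all_history()

    If the formatter is deterministic, the diff between adjacent rewritten commits shows only the real changes, which preserves the traceable history the question asks for.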

    Read the article

  • Does F# kill C++?

    - by MarkPearl
    Okay, so the title may be a little misleading... but I am currently travelling and so have had very little time and access to resources to do much fsharping -- this has meant that I am right now missing my favourite new language. I was interested to see this post on Stack Overflow this evening concerning the performance of the F# language. The person posing the question asked 8 key questions about the F# language, namely:
    1. How well does it do floating-point?
    2. Does it allow vector instructions?
    3. How friendly is it towards optimizing compilers?
    4. How big a memory footprint does it have?
    5. Does it allow fine-grained control over memory locality?
    6. Does it have capacity for distributed memory processors, for example Cray?
    7. What features does it have that may be of interest to computational science where heavy number processing is involved?
    8. Are there actual scientific computing implementations that use it?
    Now, I don't have much time to look into a decent response, and to be honest I don't know half of the answers to what he is asking, but it was interesting to see what was put up as an answer so far, and it would be interesting to get other people's feedback on these questions if they know of anything beyond what has already been covered in the answer section.

    Read the article

  • How to keep the trunk stable when tests take a long time?

    - by Oak
    We have three sets of test suites:
    - a "small" suite, taking only a couple of hours to run;
    - a "medium" suite that takes multiple hours, usually run every night (nightly);
    - a "large" suite that takes a week or more to run.
    We also have a bunch of shorter test suites, but I'm not focusing on them here. The current methodology is to run the small suite before each commit to the trunk. Then the medium suite runs every night, and if it turns out in the morning that it failed, we try to isolate which of yesterday's commits was to blame, roll back that commit, and retry the tests. A similar process, only at a weekly instead of nightly frequency, is done with the large suite. Unfortunately, the medium suite fails pretty frequently. That means the trunk is often unstable, which is extremely annoying when you want to make modifications and test them: when I check out from the trunk, I cannot know for certain that it's stable, and if a test fails I cannot know for certain whether it's my fault or not. My question is, is there some known methodology for handling these kinds of situations in a way that keeps the trunk always in top shape? For example, "commit into a special pre-commit branch which will then periodically update the trunk every time the nightly passes" (sketched below). And does it matter whether it's a centralized source control system like SVN or a distributed one like Git? By the way, I am a junior developer with limited ability to change things; I'm just trying to understand if there's a way to handle this pain I am experiencing.
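    A minimal sketch of the pre-commit-branch idea mentioned above, with hypothetical branch names and test command, driving Git from Python:

        # promote.py -- promote the pending branch to trunk only when the medium
        # suite passes. Branch names and the test command are hypothetical.
        import subprocess

        def run(*args):
            subprocess.run(args, check=True)

        def nightly_promote():
            run("git", "checkout", "pending")
            run("git", "pull", "origin", "pending")
            # Gate: pending reaches trunk only if the whole medium suite passes.
            result = subprocess.run(["./run-medium-suite.sh"])
            if result.returncode != 0:
                print("Medium suite failed; trunk left untouched.")
                return
            run("git", "checkout", "trunk")
            run("git", "merge", "--no-ff", "pending")
            run("git", "push", "origin", "trunk")

        if __name__ == "__main__":
            nightly_promote()

    Under this scheme, developers commit to the pending branch all day, and the trunk only ever moves to states that have survived the nightly, which is exactly the stability property the question asks for.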

    Read the article

  • Best way: restructure an existing Team Foundation Server (TFS) solution

    - by dhh
    In my department we are developing several smaller add-ons for a unified communication server. For versioning and distributed development we use a Team Foundation Server 2012. But: there is only one large TFS solution for all of our applications and libraries:

        Main Solution
          Applications
            App 1
            App 2
            App 3
          Externals
          Libraries
            Lib 1
            Lib 2
          Tools

    The "Applications" path contains all main applications. Those do not depend on each other, but they depend on the Libraries and Externals projects. The "Externals" path contains some external DLLs referenced in our applications and libraries. The "Libraries" path contains commonly used libs (UI templates, helper classes, etc.). They do not depend on each other, and they are referenced in the Applications and the Tools projects. The "Tools" path contains some helper programs like setup helpers, update web services, etc. Now, there are some major reasons why I'd like to change this structure:
    - We can't use server builds.
    - It's awkward to manage TFS scrum artifacts (sprints, impediments, etc.) with a solution structure like that.
    - Every developer always has access to all projects in the solution.
    - A complete build lasts too long if one accidentally hits [F6] in Visual Studio...
    What would you change in this solution? How would you break those projects into smaller solutions, and how should those solutions be structured? My first approach would be to create one TFS project for each application, library, and tool. But how can I ensure that, e.g., App 2 always contains the newest version of Lib 1? Do I have to monitor changes on Lib 1 and update App 2 manually as soon as the lib changes? Or can I somehow force Visual Studio to always use the newest version of an external project?

    Read the article

  • Advice and resources on collaborative environments

    - by Tjaart
    I need some advice on collaborative software environments. More specifically, I am looking for books and reference materials that can aid me in understanding team and code structures and the interactions thereof; in other words, books, blogs, or white papers explaining different strategies for structuring teams that share common code but have distinct individual functions. To summarise: what would be a good source of knowledge if I were to set up teams in an organisation that shared code while each unit still remained autonomous? I have done some research on this subject and explored code review tools, distributed VCSs, continuous integration tools, and unit testing automation. The tough part about implementing these tools is determining where a good place to start would be, which tools are low-hanging fruit, and which tools or methods provide higher success rates. If someone asks me for a code quality reference I point them to Code Complete; I am looking for an equivalent guide on software team structures and the tools to make this equation work better. I realise that this question is quite vague, but it arose as "we need to share code between teams without breaking each other's stuff and causing management headaches and reams of red tape". The answer is definitely not simple and requires changes on many levels, hence the question. If the question is too vague, please vote to close or delete; I would accept any good starting point as an answer.

    Read the article

  • Microsoft Lowers Cloud Barrier To Entry

    - by Herve Roggero
    Once in a while, the technology stack changes enough to create a disturbance in the IT industry. Microsoft did just that today and has officially closed the gap with its #1 competitor, Amazon. What is remarkable is that Microsoft is no longer merely an alternative to Amazon; it is becoming a clear leader in that space. Some of the new features include official support for durable virtual machines with high availability (cross-geographic replication), free websites to try Azure, a MySQL database at no charge, a new distributed low-latency cache feature, Linux support, support for existing VPN hardware for seamless on-premises integration, a new partner ecosystem, and much, much more. Amazon had an edge over Windows Azure in the IaaS (Infrastructure as a Service) space -- until now. With the latest release from Microsoft Azure, the gap has been filled. In fact, it seems Amazon may now have a gap to fill... This is great news for everyone: cloud offerings are becoming more standardized among the more mature cloud providers, and the management stack and quality of service of each cloud provider is increasingly becoming the differentiator. With today's announcements, it is becoming clear that cloud providers are pushing hard to increase their service footprint and lower typical barriers to entry, such as support for open-source operating systems, free trial offers, higher availability, faster deployment times and simpler enterprise integration.

    Read the article

  • I'm a SubVersion geek; why should I consider (or not consider) Mercurial, Git, or any other DRCS?

    - by Pierre 303
    I have tried to understand the benefits of DRCSs, and I must admit I still don't get it. Here are my current beliefs; I'm ready to have them destroyed thanks to your expertise. I know I'm probably resisting change; I just want to evaluate how much that change will cost me. Merging hell can be solved by just applying good practices such as continuous integration. Keeping a private branch for a few days is not a good practice when you are in a self-managing team with real collaboration; I use branching only in very rare cases, and I keep a branch for every major version, in which I fix bugs merged from the trunk. I see the value of committing offline and then pushing online, but continuous integration can help with that too. I work on very large projects, and I have never noticed Subversion being slow, even when the server is 5000 km away on the internet and my connection is small (less than 1024D/128U). Hard disk space is cheap, so having a copy of the source code locally doesn't look like a problem to me; I already have a full copy of the last version on my disk. I don't understand the distributed thing there (maybe THIS IS the key to my understanding?). I'm not new to the industry, and judging by my difficulty understanding, I don't think DRCSs are easier to understand than Subversion-like systems. In fact, I don't understand... Doctor, give me your diagnosis.

    Read the article

  • Need advice concerning Feature Based Development when knowledge DB is involved

    - by voroninp
    We develop a BackOffice application which is used to edit our knowledge DB. Now our main product's development team is shifting to feature-based development, and we need to support several DBs with non-identical data schemes (the scheme changes slightly from DB to DB). The information from the knowledge DB is extracted by a script and then distributed to the clients. We also need to support merging these DBs. We are now analyzing the pros and cons of different approaches, and are discussing this one: one working DB (WDB) with one DB for each feature branch (FDB). The approved data is moved from the WDB to the FDBs, so we need to support only one script for each branch; this script will extract data from the corresponding FDB. Nevertheless, we would have to code the differences between the FDBs and the WDB manually. Maybe automatic mapping tools exist? I also wish to know whether classic solutions to similar problems already exist. Can anyone share best practices for this case?

    Read the article

  • Which version management design methodology to be used in a Dependent System nodes?

    - by actiononmail
    This is my first question, so please tell me if it is too vague to understand. My question is about high-level design. We have a system (specifically an ATCA chassis) configured in a star topology, with a master node (MN) and subordinate nodes (SNs). All nodes are connected via Ethernet and run Linux along with other proprietary applications. I have to build a recovery framework so that any software entity, whether Linux itself, a ramdisk, or an application, can be rolled back to a previous good version if something bad happens. Thus I am thinking of maintaining a state version matrix on the MN, where each state (1, 2, ..., n) represents good kernel, ramdisk, and application versions for each SN. It may happen that one SN's version depends on another SN's version. Please see the following diagram. So I am in a dilemma whether to use the package management methodology of Debian distributions (like Ubuntu) or a Git repository methodology, in order to roll back either one SN or all the dependent SNs. The method should also make it easy to upgrade the SNs along with the MN. Some of the features I am trying to achieve:
    1) An upgrade of even a single software entity is achievable without hindering others.
    2) Dependency checks must be done before applying a rollback or upgrade on each SN (a sketch of this check follows below).
    3) A user prompt should be given if a dependency check fails. If the user still goes ahead with the rollback, all the SNs should get a notification to roll back their own releases (if required).
    4) The binaries should be distributed to the SNs accordingly, so that the recovery process is faster, rather than fetching everything from the MN every time.
    5) Release patches from developers for bug fixes and feature enhancements can be applied to a running system.
    6) Each version can be easily tracked and distinguished.
    Thanks
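    A minimal sketch of the dependency check in point 2, with a hypothetical compatibility table mapping each (node, version) pair to the versions it requires on other nodes:

        # depcheck.py -- verify that a proposed rollback leaves all inter-node
        # version dependencies satisfied. The compatibility table is hypothetical.
        # Maps (node, version) -> {other_node: required_version}.
        COMPAT = {
            ("SN1", "1.2"): {"SN2": "2.0"},
            ("SN2", "2.0"): {},
            ("SN2", "1.9"): {},
        }

        def check_rollback(proposed):
            """Return the violated dependencies for a proposed {node: version} set."""
            violations = []
            for node, ver in proposed.items():
                for other, required in COMPAT.get((node, ver), {}).items():
                    if proposed.get(other) != required:
                        violations.append("%s %s requires %s %s, but %s would be %s"
                                          % (node, ver, other, required,
                                             other, proposed.get(other)))
            return violations

        if __name__ == "__main__":
            # Example: roll SN2 back to 1.9 while SN1 stays on 1.2 -> violation.
            print(check_rollback({"SN1": "1.2", "SN2": "1.9"}))

    The MN would run this check before any rollback or upgrade and, per point 3, prompt the user with the violation list instead of refusing outright.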

    Read the article

  • Trac/SVN to DVCS Migration

    - by quanticle
    The project I'm currently working on uses Trac with SVN integration. It's worked great until now. Now, however, we've taken on some additional developers, and we're running into issues with branching and merging. Because of this, I think a move to a distributed version control system is in order. The problem is that Trac is very closely integrated with the SVN repository: we have tight integration between the tickets and the revision numbers of the code changes corresponding to those tickets. In addition, we have a support wiki with a lot of data that helps the tech support team. Is there a way we can migrate to Git or Mercurial without losing the benefits of Trac? I've looked at the Git plugin for Trac, and I'm unsure of how well it works. Has anyone here used it with a project that's been migrated from SVN? EDIT: I should note that the most important priority for us is maintaining the links between Trac tickets and the corresponding changesets in SVN. That's a tool we use every day, and it provides an easy way to jump to code changes when reviewing tickets. Wiki migration would be nice to have, but if it's not possible, we can continue to run the old system whilst we write some kind of one-off script to migrate the content.
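    For what it's worth, conversions done with git-svn leave a "git-svn-id" line in every commit message, so the SVN-revision-to-changeset links that Trac stores can be remapped to Git commits. A hedged sketch of recovering that mapping (it assumes the repository was converted with git-svn):

        # svn_rev_map.py -- build an SVN revision -> git commit map from a
        # repository converted with git-svn (each commit message carries a
        # "git-svn-id: <url>@<rev> <uuid>" line).
        import re
        import subprocess

        GIT_SVN_ID = re.compile(r"^git-svn-id: \S+@(\d+) ", re.MULTILINE)

        def build_rev_map(repo="."):
            log = subprocess.run(
                ["git", "-C", repo, "log", "--format=%H%n%B%x00"],
                capture_output=True, text=True, check=True,
            ).stdout
            rev_map = {}
            for entry in log.split("\x00"):
                entry = entry.strip()
                if not entry:
                    continue
                sha, _, body = entry.partition("\n")
                match = GIT_SVN_ID.search(body)
                if match:
                    rev_map[int(match.group(1))] = sha
            return rev_map

        if __name__ == "__main__":
            for rev, sha in sorted(build_rev_map().items()):
                print("r%d -> %s" % (rev, sha))

    With that map, ticket references to rNNNN could be rewritten (or resolved at display time) to point at the corresponding Git commit; whether the Trac Git plugin does this for you is exactly the kind of thing worth testing on a copy of the repository first.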

    Read the article
