Search Results

Search found 2170 results on 87 pages for 'workflow versioning'.


  • MonoDevelop SVN 1.7 support

    - by Gustavo Rubio
    I'm trying to check out an SVN repo, and it does get checked out; however, the solution is not opened by default. I then tried simply opening an already-checked-out folder, and I do not get versioning support. I tried this on both Windows and Linux with MonoDevelop 3.0.4 and 3.0.5, with no success. My guess is that because the .sln file is at /trunk/code/project/project.sln, and the new 1.7 format keeps only one ".svn" folder at the top of the checked-out folder (e.g. /home/gustavo/ProjectSrc/), MonoDevelop is not "finding" the versioned code. Has anybody had success using SVN 1.7 with MonoDevelop?

    Read the article

  • Eclipse + Git: Can't add files to my repository

    - by Stegeman
    I use Eclipse (Helios) with PDT and egit. I have a project without versioning, so I created a git repository for it by doing Team -> Share Project. When I try to add the files of my project to the repository (Team -> Add), I get an exception:

        Failed to add resource to index
        Failed to add resource to index
        Exception caught during execution of add command

    When I add the files manually on the command line, everything works fine. Any ideas?

    EDIT: The error Eclipse gives is:

        Caused by: org.eclipse.jgit.errors.ObjectWritingException: Unable to create new object: Z:\eage_layout\.git\objects\60\f30dd232bd6ddaeb198fb11400c2613a072189
        at org.eclipse.jgit.storage.file.ObjectDirectoryInserter.insert(ObjectDirectoryInserter.java:100)
        at org.eclipse.jgit.api.AddCommand.call(AddCommand.java:177)

    The code I'm running is located on a virtual machine running CentOS. I'm working on a Windows machine and using a Samba share to get access to the code on the virtual machine. I've set the filesystem permissions on my .git directory to 777, but it still does not work.

    Read the article

  • Creating a future proof .NET 3.5 SP1 installer prerequisite for setup.exe AND the .MSI

    - by Ruben Bartelink
    I've demanded .NET 3.5 SP1 a la http://stackoverflow.com/questions/88136/will-a-vs2008-setup-project-update-net-3-5-sp1. This makes the setup.exe check correctly. I've also added an "SP1" launch condition to my MSI so it doesn't let the user install my .NET 3.5 SP1 app by launching the MSI directly (and replaced the [VSDNETMSG] in the framework condition message with one that actually mentions SP1). From a future-proofing point of view, this feels wrong. I want the condition to be:

        (NETVer = 3.5 AND Net35SPLevel = 1) OR (NETVer > 3.5)

    not

        (NETVer = 3.5 AND Net35SPLevel = 1)

    Is there any way to do that? The framework check doesn't have a condition property that would let me add a sub-condition... Yes, I could also just not worry my pretty little head about it :P If one of the MS versioning experts out there reads this: if you're going to put stuff that code depends on into SPs, can you please make the installer able to check for it OOTB. (I really wish they had come up with a better numbering scheme - the world and its dog could see that this was going to get confusing.)
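
    Not something the VS setup project exposes, but as a hedged sketch of the compound check the poster wants: WiX (swapped in here for illustration) can read the documented .NET 3.5 SP-level registry value and gate on it directly. The property name is invented; raw DWORD reads come back prefixed with '#':

        <!-- Sketch, assuming WiX rather than a VS setup project. The registry key is
             the documented location of the .NET 3.5 service pack level. -->
        <Property Id="NET35SPLEVEL">
          <RegistrySearch Id="Net35SpLevelSearch" Root="HKLM"
                          Key="SOFTWARE\Microsoft\NET Framework Setup\NDP\v3.5"
                          Name="SP" Type="raw" />
        </Property>
        <Condition Message="This application requires .NET Framework 3.5 SP1 or later.">
          Installed OR NET35SPLEVEL >= "#1"
        </Condition>

    A truly future-proof version would also probe the v4 keys once they exist, which is exactly the poster's complaint: the stock framework check can't express the "or anything newer" half of the condition.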

    Read the article

  • Are there any real benefits to including javascript dynamically rather than as script tags at the bottom of the page?

    - by SB
    I've read that including the scripts dynamically may provide somewhat better performance; however, I'm not really seeing that in the small local tests I'm doing. I created a jQuery plugin to dynamically load other plugins as necessary and am curious whether this is actually a good idea. The following would be called onready or at the bottom of the page (I can provide the source for the plugin if anyone is interested):

        $.fn.executePlugin(
            'qtip',  // looks in default folder
            {
                required: '/javascript/plugin/easing.js', // not really required for qtip, just testing it
                version: 1,                 // used for versioning and caching
                checkelement: '#thumbnail', // will not include plugin if $(element).length == 0
                css: 'page.css',            // include this css file as well with the plugin
                cache: true,                // $.ajax will use cache:true
                success: function() {
                    // success function called after the plugin loads - apply qtip to an element
                    $('#thumbnail').qtip({
                        content: 'Some basic content for the tooltip', // give it some content, in this case a simple string
                        style: { name: 'cream' }
                    });
                }
            }
        );

    Read the article

  • Database Version Control SQL Server 2008 Drop SP's and Functions

    - by Lieven Cardoen
    I'm working on versioning our database and am now searching for a way to drop all stored procedures and functions from a C# console application. I'd rather not create a stored procedure that drops all stored procedures and functions; it has to be SQL executed from C#. I tried to drop each stored procedure before creating it, but I get this message:

        System.Data.SqlClient.SqlException: 'CREATE/ALTER PROCEDURE' must be the first statement in a query batch.

    Script for one SP, for example:

        DROP PROCEDURE [dbo].[sp_Economatic_LoadJournalEntryFeedbackByData]
        SET ANSI_NULLS ON
        SET QUOTED_IDENTIFIER ON
        CREATE PROCEDURE [dbo].[sp_Economatic_LoadJournalEntryFeedbackByData]
            @Data VARCHAR(MAX)
        AS
        BEGIN
            ...
        END

    So I guess that before creating all SPs and functions, I'll need to drop all SPs and functions first with one SQL script.
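
    The error happens because ADO.NET sends the whole script as a single batch, and CREATE PROCEDURE must be the first statement in its batch; GO is a separator that SSMS understands, not SQL itself. A minimal sketch of the usual workaround - splitting the script on GO lines client-side (connection string and file name are placeholders):

        // Sketch: run an SSMS-style script from C# by splitting on GO lines and
        // executing each batch with its own SqlCommand.
        using System.Data.SqlClient;
        using System.IO;
        using System.Text.RegularExpressions;

        class ScriptRunner
        {
            static void Main()
            {
                string script = File.ReadAllText("drop_and_create.sql"); // placeholder path
                // GO must stand alone on its line; split the script there.
                string[] batches = Regex.Split(script, @"^\s*GO\s*$",
                                               RegexOptions.Multiline | RegexOptions.IgnoreCase);
                using (var conn = new SqlConnection("Data Source=.;Initial Catalog=MyDb;Integrated Security=True"))
                {
                    conn.Open();
                    foreach (string batch in batches)
                    {
                        if (batch.Trim().Length == 0) continue; // skip empty fragments
                        using (var cmd = new SqlCommand(batch, conn))
                            cmd.ExecuteNonQuery();
                    }
                }
            }
        }

    Alternatively, the "drop everything first" script can be generated by selecting names from sys.procedures and sys.objects and issuing one DROP per row, which avoids maintaining drop statements by hand.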

    Read the article

  • Server database -> client database update based on version

    - by user296191
    Hi, what is the recommended method of collecting items in a server database, versioning the database, and then deploying only the version differences to a client? Should it be a version field in the table (e.g. Version: 3.3.9876) against each record? Should it be a server-based DateTime in each record? And what's the best way to deploy just the changes to a client with an older version of the database? Is it a dump to a file with a bulk import of some description? Open to comments and suggestions. The database can be anything (Firebird, MySQL, SQL Server, SQLite)... Any info greatly appreciated.
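
    Not an answer from the thread, but one common pattern is a monotonically increasing revision number stamped on every changed row, so a client pulls only rows newer than the revision it last saw. A sketch (table and column names are invented; any of the databases listed can express this in plain SQL):

        // Sketch: pull only the rows changed since the client's last known revision.
        using System.Data.SqlClient;

        class SyncClient
        {
            public static void PullChanges(SqlConnection conn, long clientRevision)
            {
                const string sql =
                    "SELECT Id, Payload, Revision, IsDeleted " +
                    "FROM Items WHERE Revision > @rev ORDER BY Revision";
                using (var cmd = new SqlCommand(sql, conn))
                {
                    cmd.Parameters.AddWithValue("@rev", clientRevision);
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // Upsert into the local store (or delete when IsDeleted is set),
                            // then remember the highest Revision seen as the new client version.
                        }
                    }
                }
            }
        }

    Deletes are the catch: an absent row can't be shipped as a difference, so rows are usually tombstoned (the IsDeleted flag above) rather than physically removed.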

    Read the article

  • What add-in/workbench framework is the best .NET alternative to Eclipse RCP?

    - by Winston Fassett
    I'm looking for a plugin-based application framework that is comparable to the Eclipse Plugin Framework, which to my simple mind consists of:

    1. A core plugin management framework (Equinox/OSGi), which provides the ability to declare extension endpoints and then discover and load plugins that service those endpoints. (This is different from Dependency Injection, but admittedly the difference is subtle - configuration is highly decentralized, there are versioning concerns, it might involve an online plugin repository, and most importantly to me, it should be easy for the user to add plugins without needing to know anything about the underlying architecture / config files.)

    2. Many layers of plugins that provide a basic workbench shell with concurrency support, commands, preference sheets, menus, toolbars, key bindings, etc. That is just scratching the surface of the RCP, which itself is meant to serve as the foundation of your application, which you build by writing / assembling even more plugins.

    Here's what I've gleaned from the internet in the past couple of days... As far as I can tell, there is nothing in the .NET world that remotely approaches the robustness and maturity of the Eclipse RCP for Java, but there are several contenders that do either #1 or #2 pretty well. (I should also mention that I have not made a final decision on WinForms vs WPF, so I'm also trying to understand the level of UI coupling in any candidate framework. I'm also wondering about platform coupling and source code licensing.) I must say that the open-source stuff is generally less documented but easier to understand, while the MS stuff typically has more documentation but is less accessible, so that with many of the MS technologies, I'm left wondering what they actually do, in a practical sense. These are the libraries I have found:

    SharpDevelop
    The first thing I looked at was SharpDevelop, which does both #1 and also #2 in a basic way (no insult to SharpDevelop, which is admirable - I just mean more basic than Eclipse RCP). However, SharpDevelop is an application more than a framework, and there are basic assumptions and limitations there (i.e. being somewhat coupled to WinForms). Still, there are some articles on CodeProject explaining how to use it as the foundation for an application.

    System.AddIns
    It appears that System.AddIns is meant to provide a robust add-in loading framework, with some sophisticated options for loading assemblies with varying levels of trust and even running them out of process. It appears to be primarily code-based, and pretty code-heavy, with lots of assemblies that serve to insulate against versioning issues, using Guidance Automation to generate a good deal of code. So far I haven't found many System.AddIns articles that illustrate how it could be used to build something like an Eclipse RCP, and many people seem to be wringing their hands about its complexity.

    Mono.Addins
    It appears that Mono.Addins was influenced by System.AddIns, SharpDevelop, and MonoDevelop. It seems to provide the basics from System.AddIns, with less sophisticated options for plugin loading but more simplicity, with attribute-based registration, XML manifests, and the infrastructure for online plugin repositories. It has a pretty good FAQ and documentation, as well as a fairly robust set of examples that really help paint a picture of how to develop an architecture like that of SharpDevelop or Eclipse. The examples use GTK for UI, but the framework itself is not coupled to GTK. So it appears to do #1 (add-in loading) pretty well and points the way to #2 (workbench framework). It appears that Mono.Addins was derived from MonoDevelop, but I haven't actually looked at whether MonoDevelop provides a good core workbench framework.

    Managed Extensibility Framework
    This is what everyone's talking about at the moment, and it's slowly getting clearer what it does, but I'm still pretty fuzzy, even after reading several posts on SO. The official word is that it "can live side-by-side" with System.AddIns. However, it doesn't reference it and it appears to reproduce some of its functionality. It seems to me, then, that it is a simpler, more accessible alternative to System.AddIns. It appears to be more like Mono.Addins in that it provides attribute-based wiring. It provides "catalogs" that can be attribute-based or directory-based. It does not seem to provide any XML or manifest-based wiring. So far I haven't found much documentation, and the examples seem kind of "magical" and more reminiscent of attribute-based DI, despite the clarifications that MEF is not a DI container. Its license just got opened up, but it does reference WindowsBase - not sure if that means it's coupled to Windows.

    Acropolis
    I'm not sure what this is. Is it MEF, or something that is still coming?

    Composite Application Blocks
    There are WPF and WinForms Composite Application Blocks that seem to provide much more of a workbench framework. I have very little experience with these, but they appear to rely on Guidance Automation quite a bit and are obviously coupled with the UI layers. There are a few examples of combining MEF with these application blocks.

    I've done the best I could to answer my own question here, but I'm really only scratching the surface, and I don't have experience with any of these frameworks. Hopefully some of you can add more detail about the frameworks you have experience with. It would be great if we could end up with some sort of comparison matrix.
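
    To give a flavor of the attribute-based wiring described above, here is a minimal MEF sketch (interface and part names are invented; based on the preview-era API, so details may shift): a directory catalog discovers exported parts and satisfies the host's imports, so users add plugins by dropping assemblies in a folder, with no config files.

        // Minimal MEF sketch. IWorkbenchPlugin and the folder name are invented.
        using System;
        using System.Collections.Generic;
        using System.ComponentModel.Composition;
        using System.ComponentModel.Composition.Hosting;

        public interface IWorkbenchPlugin { string Name { get; } }

        [Export(typeof(IWorkbenchPlugin))]
        public class SamplePlugin : IWorkbenchPlugin
        {
            public string Name { get { return "Sample"; } }
        }

        public class Host
        {
            [ImportMany]
            public IEnumerable<IWorkbenchPlugin> Plugins { get; set; }

            public void Compose()
            {
                // Scan a plugins folder and wire every [Export] into the [ImportMany].
                var catalog = new DirectoryCatalog("plugins");
                var container = new CompositionContainer(catalog);
                container.ComposeParts(this);
            }
        }

    This covers #1 (discovery and loading); the workbench layers of #2 remain the part with no obvious .NET equivalent.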

    Read the article

  • Typical practice for redistributing third party source code with your source code

    - by bglenn
    I'm releasing an application I wrote as an open-source project by creating a public source-code repository. I use a third-party library which is also open-source and freely redistributable. I'm not versioning the third-party library, but should I include it in my repository for the convenience of those cloning the repository or should I expect them to download the third-party library on their own? To be clear, I'm not asking if I should version the third-party code or if I can redistribute it, but whether it is standard practice to include third-party source code as a convenience.

    Read the article

  • Wiki Database, is there one?

    - by Faiz
    I was searching the net for something like a wiki database - just like Wikipedia, but one that stores structured content, editable by users. What I was looking for was an online database, accessible by everyone, where people can design the schema and the data, with proper versioning of both schema and data. I couldn't find any such site. I am not sure if it is my search skills or if there really is no wiki database as of now. Does anyone out there know anything like this? I think there is great potential for something like this. A possible example would be a website with a GUI for querying a MySQL DB where any website visitor can create DB objects and populate data.

    Read the article

  • Getting consecutive version numbers from Hibernate's @Version usage once per transaction

    - by Cheradenine
    We use Hibernate with the following version definition for optimistic locking et al.:

        <version name="version" access="field" column="VERSION" type="long" unsaved-value="negative"/>

    This is fine and dandy; however, there is one small problem, which is that the first version for some entities is '0', and for others it is '1'. This happens because, for some object graphs, an entity will be subject to both onSave and flushDirty - which is reasonable, for example when two objects have circular dependencies. However, the version number gets incremented on both occasions, leading to the '0' / '1' discrepancy above. I'd really like the version number to only ever increment once per transaction. However, I can't see a simple way to do this in the Hibernate versioning implementation without hacking about with an Interceptor (which is how I generated a column value for version before, but I wanted Hibernate to do it itself).

    Read the article

  • Plugin Framework - can there be too many addin assemblies?

    - by spooner
    Hi, the product I'm working on needs to be built in such a way that we have a quote engine driven by a pluggable framework. We are currently thinking of using MAF, so we can leverage the separation of host and add-in interfaces for versioning. However, I'm concerned that we'd have lots of assemblies; it's likely that we'd have one for each quote engine add-in - of which there could be 100 going forward - and we also need to support multiple versions, so there could be a lot of assemblies in total. The quote engine also uses WF to drive it, which means the AppDomain for each add-in will need a workflow runtime associated with it. This seems quite heavyweight; however, we can unload infrequently used add-ins. Does this seem like a good design? We've also looked at a single-AppDomain solution using an IoC container to load add-in types, but I'm concerned that we won't be able to unload any of the assemblies, given their quantity.
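
    For reference, a sketch of the per-add-in AppDomain load/unload being weighed here (assembly and type names are illustrative; MAF wraps a similar mechanism behind its pipeline):

        // Sketch: host an add-in in its own AppDomain so its assemblies can be
        // unloaded when idle - something a single-AppDomain IoC approach cannot do.
        using System;

        public class AddInHost
        {
            public static void RunIsolated()
            {
                AppDomain domain = AppDomain.CreateDomain("QuoteEngineAddIn");
                try
                {
                    // The add-in type should derive from MarshalByRefObject and implement
                    // an interface from a shared contract assembly, so calls cross the
                    // boundary by proxy instead of pulling the add-in into the host domain.
                    object engine = domain.CreateInstanceAndUnwrap(
                        "MyCompany.QuoteEngines.Fixed",          // assembly name (illustrative)
                        "MyCompany.QuoteEngines.Fixed.Engine");  // type name (illustrative)
                    // ... cast to the shared interface and drive the engine here ...
                }
                finally
                {
                    AppDomain.Unload(domain); // releases the add-in's assemblies
                }
            }
        }

    The trade-off is exactly the one raised in the question: each domain carries its own loader (and, here, workflow runtime) overhead, so unloading idle add-ins is what keeps the design viable.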

    Read the article

  • Any good Open Source or Cheap ASP.NET Catalog Applications

    - by Zhaph - Ben Duguid
    We're looking for a cheap-to-free "off the shelf" ASP.NET catalogue application that will meet the following requirements:

    - Support two kinds of listings: suppliers of services; suppliers of products, and their products
    - Suppliers can be categorised by: area of specialisation (including sub-categories), location, and other data (e.g. where the listing came from)
    - Versioning of supplier/product details
    - Easy-to-use management interface
    - Uses master pages so we can drop it into our existing site layout
    - Runs on a Windows 2003 server with .NET 3.5 installed

    In an ideal world, the following additional requirement might be met:

    - Suppliers can manage their own listings

    Other products that are available to us (that will obviously need some additional development to meet these requirements) are:

    - Content Management System (MS)
    - Commerce Server - bear in mind we're not selling the products/suppliers, just listing them
    - Simple DB application

    I'm happy to knock something up in MCMS/a simple DB; I'm just looking to see if anyone's had any experience with off-the-shelf apps that could save us some time. I'm also happy to receive "don't use this because" type answers. Thanks.

    Read the article

  • Use a Fake Http Channel to Unit Test with HttpClient

    - by Steve Michelotti
    Applications get data from lots of different sources. The most common is to get data from a database or a web service. Typically, we encapsulate calls to a database in a Repository object and we create some sort of IRepository interface as an abstraction to decouple between layers and enable easier unit testing by leveraging faking and mocking. This works great for database interaction. However, when consuming a RESTful web service, this is not always the best approach. The WCF Web APIs that are available on CodePlex (current drop is Preview 3) provide a variety of features to make building HTTP REST services more robust. When you download the latest bits, you'll also find a new HttpClient which has been updated for .NET 4.0 as compared to the one that shipped for 3.5 in the original REST Starter Kit. The HttpClient currently provides the best API for consuming REST services on the .NET platform, and the WCF Web APIs provide a number of extension methods which extend HttpClient and make it even easier to use. Let's say you have a client application that is consuming an HTTP service – this could be Silverlight, WPF, or any UI technology, but for my example I'll use an MVC application:

        1: using System;
        2: using System.Net.Http;
        3: using System.Web.Mvc;
        4: using FakeChannelExample.Models;
        5: using Microsoft.Runtime.Serialization;
        6:
        7: namespace FakeChannelExample.Controllers
        8: {
        9:     public class HomeController : Controller
        10:     {
        11:         private readonly HttpClient httpClient;
        12:
        13:         public HomeController(HttpClient httpClient)
        14:         {
        15:             this.httpClient = httpClient;
        16:         }
        17:
        18:         public ActionResult Index()
        19:         {
        20:             var response = httpClient.Get("Person(1)");
        21:             var person = response.Content.ReadAsDataContract<Person>();
        22:
        23:             this.ViewBag.Message = person.FirstName + " " + person.LastName;
        24:
        25:             return View();
        26:         }
        27:     }
        28: }

    On line #20 of the code above you can see I'm performing an HTTP GET request to a Person resource exposed by an HTTP service. On line #21, I use the ReadAsDataContract() extension method provided by the WCF Web APIs to deserialize to a Person object. In this example, the HttpClient is being passed into the constructor by MVC's dependency resolver – in this case, I'm using StructureMap as an IoC container, and my StructureMap initialization code looks like this:

        1: using StructureMap;
        2: using System.Net.Http;
        3:
        4: namespace FakeChannelExample
        5: {
        6:     public static class IoC
        7:     {
        8:         public static IContainer Initialize()
        9:         {
        10:             ObjectFactory.Initialize(x =>
        11:             {
        12:                 x.For<HttpClient>().Use(() => new HttpClient("http://localhost:31614/"));
        13:             });
        14:             return ObjectFactory.Container;
        15:         }
        16:     }
        17: }

    My controller code currently depends on a concrete instance of the HttpClient. Now I *could* create some sort of interface, wrap the HttpClient in it, and use that object inside my controller instead – however, there are a few reasons why that is not desirable:

    For one thing, the API provided by the HttpClient provides nice features for dealing with HTTP services. I don't really *want* these to look like C# RPC method calls – when HTTP services have REST features, I may want to inspect HTTP response headers and hypermedia contained within the message so that I can make intelligent decisions as to what to do next in my workflow (although I don't happen to be doing these things in my example above) – this type of workflow is common in hypermedia REST scenarios. If I just encapsulate HttpClient behind some IRepository interface and make it look like a C# RPC method call, it will become difficult to take advantage of these types of things.

    Second, it could get pretty mind-numbing to have to create interfaces all over the place just to wrap the HttpClient. Then you're probably going to have to hard-code HTTP knowledge into your code to formulate requests rather than just "following the links" that the hypermedia in a message might provide.

    Third, at first glance it might appear that we need to create an interface to facilitate unit testing, but actually it's unnecessary. Even though the code above is dependent on a concrete type, it's actually very easy to fake the data in a unit test. The HttpClient provides a Channel property (of type HttpMessageChannel) which allows you to create a fake message channel which can be leveraged in unit testing. In this case, what I want is to be able to write a unit test that just returns fake data. I also want this to be as re-usable as possible for my unit testing. I want to be able to write a unit test that looks like this:

        1: [TestClass]
        2: public class HomeControllerTest
        3: {
        4:     [TestMethod]
        5:     public void Index()
        6:     {
        7:         // Arrange
        8:         var httpClient = new HttpClient("http://foo.com");
        9:         httpClient.Channel = new FakeHttpChannel<Person>(new Person { FirstName = "Joe", LastName = "Blow" });
        10:
        11:         HomeController controller = new HomeController(httpClient);
        12:
        13:         // Act
        14:         ViewResult result = controller.Index() as ViewResult;
        15:
        16:         // Assert
        17:         Assert.AreEqual("Joe Blow", result.ViewBag.Message);
        18:     }
        19: }

    Notice on line #9, I'm setting the Channel property of the HttpClient to be a fake channel. I'm also specifying the fake object that I want to be in the response of my "fake" HTTP request. I don't need to rely on any mocking frameworks to do this. All I need is my FakeHttpChannel. The code to do this is not complex:

        1: using System;
        2: using System.IO;
        3: using System.Net.Http;
        4: using System.Runtime.Serialization;
        5: using System.Threading;
        6: using FakeChannelExample.Models;
        7:
        8: namespace FakeChannelExample.Tests
        9: {
        10:     public class FakeHttpChannel<T> : HttpClientChannel
        11:     {
        12:         private T responseObject;
        13:
        14:         public FakeHttpChannel(T responseObject)
        15:         {
        16:             this.responseObject = responseObject;
        17:         }
        18:
        19:         protected override HttpResponseMessage Send(HttpRequestMessage request, CancellationToken cancellationToken)
        20:         {
        21:             return new HttpResponseMessage()
        22:             {
        23:                 RequestMessage = request,
        24:                 Content = new StreamContent(this.GetContentStream())
        25:             };
        26:         }
        27:
        28:         private Stream GetContentStream()
        29:         {
        30:             var serializer = new DataContractSerializer(typeof(T));
        31:             Stream stream = new MemoryStream();
        32:             serializer.WriteObject(stream, this.responseObject);
        33:             stream.Position = 0;
        34:             return stream;
        35:         }
        36:     }
        37: }

    The HttpClientChannel provides a Send() method which you can override to return any HttpResponseMessage that you want. You can see I'm using the DataContractSerializer to serialize the object and write it to a stream. That's all you need to do. In the example above, the only thing I've chosen to do is to provide a way to return different response objects. But there are many more features you could add to your own re-usable FakeHttpChannel. For example, you might want to provide the ability to add HTTP headers to the message. You might want to use a different serializer than the DataContractSerializer. You might want to provide custom hypermedia in the response as well as just an object, or set HTTP response codes. The list goes on.

    This is just one example of the really cool features being added to the next version of WCF to enable various HTTP scenarios. The code sample for this post can be downloaded here.

    Read the article

  • Concurrency problem with Isolation - read-committed

    - by Ratn Deo--Dev
    I have to write a simple demo of an amount withdrawal from a joint bank account. Andy and Jen hold a joint bank account, number 123, and suppose they have $100 in it. Jen and Andy operate the account at the same time, and both try to withdraw $90 at the same moment. My transaction isolation is set to read-committed, and both are able to withdraw money, leaving the balance at minus $80, although I have a constraint that the balance should never be less than 0. I am using Hibernate. Is versioning the only way to solve this problem, or should I go for another isolation level?
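
    Versioning (optimistic locking) is one answer; another common one, sketched below in plain ADO.NET-style SQL since the mapping details aren't shown (table and column names are illustrative), is to make the withdrawal itself atomic and conditional, so the second writer simply matches zero rows:

        // Sketch: push the balance check into the UPDATE itself. Whichever
        // transaction runs second finds no qualifying row and is refused.
        using System.Data.SqlClient;

        class Withdrawal
        {
            public static bool TryWithdraw(SqlConnection conn, int accountId, decimal amount)
            {
                const string sql =
                    "UPDATE Accounts SET Balance = Balance - @amount " +
                    "WHERE AccountId = @id AND Balance >= @amount";
                using (var cmd = new SqlCommand(sql, conn))
                {
                    cmd.Parameters.AddWithValue("@amount", amount);
                    cmd.Parameters.AddWithValue("@id", accountId);
                    return cmd.ExecuteNonQuery() == 1; // 0 rows means insufficient funds
                }
            }
        }

    With Hibernate specifically, a <version> column achieves the same effect: the loser of the race gets a StaleObjectStateException and must re-read and retry rather than silently overdrawing.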

    Read the article

  • Any experience on Sitecore CMS using Windows Sharepoint Services?

    - by Marcin B
    Hi guys, I plan to develop a solution based on Sitecore CMS which would allow the client company to manage their documents in a SharePoint fashion (versioning, diffs?). Of course, we're after limiting the number of licences required for the project, and the client doesn't yet have licences for MOSS, so we couldn't simply use the Sitecore SharePoint Connector, which from everything I read requires a MOSS installation and won't work with bare WSS. From what I know, Sitecore CMS doesn't version documents as such, but is there a possibility of a workaround? Ideally we'd work with documents and files in the same fashion as with normal pages, which would enable diffs and the like. Any ideas? Thanks, Marcin B

    Read the article

  • Webcast Q&A: ING on How to Scale Role Management and Compliance

    - by Tanu Sood
    Thanks to all who attended the live webcast we hosted on ING: Scaling Role Management and Access Certifications to Thousands of Applications on Wed, April 11th. For those of you who couldn't join us, the webcast replay is now available. Many thanks to our guest speaker, Mark Robison, Enterprise Architect at ING, for walking us through ING's drivers and rationale for the platform approach, the phased implementation strategy, results & metrics, roadmap and recommendations. We greatly appreciate the insight he shared with us all on the deployment synergies between Oracle Identity Manager (OIM) and Oracle Identity Analytics (OIA) to enforce streamlined user and role management and scalable compliance. Mark was also kind enough to walk us through specific solution features that helped ING manage the problem of role explosion and implement closed-loop remediation. Our host speaker, Neil Gandhi, Principal Product Manager, Oracle, rounded off the presentation by discussing common use cases and deployment scenarios we see organizations implement to automate user/identity administration and enforce closed-loop scalable compliance. Neil also called out the specific features in Oracle Identity Analytics 11gR1 that cater to expediting and streamlining compliance processes such as access certifications.

    While we tackled a few questions during the webcast, we have captured the responses to those that we weren't able to get to here; our sincere thanks to Mark Robison for taking the time to respond to questions specific to ING's implementation and strategy.

    Q. Did you include business-friendly entitlement descriptions, or is the business seeing application descriptors?
    A. We include very business-friendly descriptions. The OIA tool has the facility to allow this.

    Q. When doing attestation on job change, who is in the workflow to review and confirm that the employee should continue to have access? Is that a best practice?
    A. The new and old manager are in the workflow. The tool can check for any Separation of Duties (SOD) violations with both having similar accesses. It may not be a best practice, but it is a reality of doing your old and new job for a transition period on a transfer.

    Q. What versions of OIM and OIA are being used at ING?
    A. OIM 11gR1 and OIA 11gR1; the very latest versions available.

    Q. Are you using an entitlements/role catalog?
    A. Yes. We use both roles and entitlements.

    Q. What specific unexpected benefits did the Identity Warehouse provide ING?
    A. The most unanticipated was to help Legal Hold identify user IDs in the various applications. Other benefits included providing a one-stop shop for all aggregated ID information.

    Q. How fine-grained are your applications and entitlements? Did OIA and OIM support that level of granularity?
    A. We have some very fine-grained entitlements, but we roll these up into approved roles to allow for easier management. For managing very fine-grained entitlements, Oracle offers the Oracle Entitlement Server. We currently do not own this software but are considering it.

    Q. Do you allow any individual access, or is everything truly role-based?
    A. We are a hybrid environment with roles and individual positive and negative entitlements.

    Q. Did you use an Agile methodology like Scrum to deliver functionality during your project?
    A. We started with waterfall, but used an agile approach to provide benefits after the initial implementation.

    Q. How did you handle rolling out the standard ID format to existing users?
    A. We just used the standard IDs for new users. We have not taken on a project to address the existing nonstandard IDs.

    Q. To avoid role explosion, how do you deal with apps that require more than a couple of entitlement types? For example, an app may have different levels of access, and it may need to know the user's country/state to associate them with particular customers.
    A. We focus on the functional user and craft the role around their daily job requirements. The role captures the required application entitlements. To keep role explosion down, we use role mining in OIA and also meet with and interview the business. It is an iterative process to reach role consensus.

    Q. Great presentation! How many rounds of certifications has ING performed so far?
    A. Around 7 quarters, plus constant certifications on transfer.

    Q. Did you have executive support from the top down?
    A. Yes. The executive support was key to our success.

    Q. For your cloud instance, are you using OIA or OIM as SaaS?
    A. No. We are just provisioning and deprovisioning to various cloud providers. (Service Now is an example.)

    Q. How do you ensure a role owner does not get more privileges than intended and thus violate another role? E.g. a DBA role should not get the right to run something as root, as this would affect the root role.
    A. We have SOD checks. Also, all roles are initially approved by external audit, and the role owners have to certify the roles and any changes.

    Q. What is your ratio of employees to roles?
    A. We are still in the process of going through our various lines of business, so I do not have a final ratio. From what we have seen, the ratio varies greatly depending on the line of business and the diversity of job functions. For standardized lines of business such as call centers, the ratio is very good, where we can have a single role that covers many employees. For specialized lines of business like treasury, it can be one or two people per role.

    Q. Is ING using the Oracle On Demand service?
    A. No.

    Q. Do you have to implement or migrate to OIM in order to get the Identity Warehouse, or can OIA provide the identity warehouse as well if you haven't reached OIM yet?
    A. No, OIM deployment is not required to implement OIA's Identity Warehouse, but as you heard during the webcast, there are tremendous deployment synergies in deploying both OIA and OIM together.

    Q. When is the Security Governor product coming out?
    A. Oracle Security Governor for Healthcare is available today.

    Hope you enjoyed the webcast, and we look forward to having you join us for the next webcast in the Customers Talk: Identity as a Platform webcast series: Toyota: Putting Customers First – Identity Platform as a Business Enabler, Wednesday, May 16th at 10 am PST / 1 pm EST. Register Today. You can also register for a live event at a city near you, where Aberdeen's Derek Brink will discuss the survey results from the recently published report "Analyzing Platform vs. Point Solution Approach in Identity". And you can do a quick (& free) online assessment of your identity programs by benchmarking them against the 160 organizations surveyed in the Aberdeen report, compliments of Oracle. Here's the slide deck from our ING webcast: ING webcast platform.

    Read the article

  • OpenGL: Textured Primitives + High Framerate

    - by James D
    Short version: What's the best practice going forward for efficiently rendering large numbers of independent texture-mapped, lighted 2D/3D primitives (circles, rects, etc.) in OpenGL? For example: a typical particle system using billboarded quads/triangles, point sprites, or whatever other technique, with blending. Because after reading this thread on the messiness of OpenGL versioning/deprecation I'm starting to have my doubts. My specific question is not the ABCs of displaying primitives in OpenGL, but rather how to do so efficiently in post-deprecation (or pre-deprecation) OpenGL, in a way that's going to be compatible with a wide range of commodity hardware and in a way that's not going to break or itself get deprecated, five years down the line. Thanks!

    Read the article

  • Testing Workflows – Test-First

    - by Timothy Klenke
    Originally posted on: http://geekswithblogs.net/TimothyK/archive/2014/05/30/testing-workflows-ndash-test-first.aspx

    This is the second of two posts on some common strategies for approaching the job of writing tests. The previous post covered test-after workflows, whereas this one focuses on test-first. Each workflow presented is a method of attack for adding tests to a project. The more tools in your tool belt the better. So here is a partial list of some test-first methodologies.

    Ping Pong
    Ping Pong is a methodology commonly used in pair programming. One developer will write a new failing test. Then they hand the keyboard to their partner. The partner writes the production code to get the test passing. The partner then writes the next test before passing the keyboard back to the original developer. The reasoning behind this testing methodology is to facilitate pair programming. That is to say that this testing methodology shares all the benefits of pair programming, including ensuring multiple team members are familiar with the code base (i.e. a low bus number).

    Test Blazer
    Test Blazing, in some respects, is also a pairing strategy. The developers don't work side by side on the same task at the same time. Instead, one developer is dedicated to writing tests at their own desk. They write failing test after failing test, never touching the production code. With these tests they are defining the specification for the system. The developer most familiar with the specifications would be assigned this task. The next day, or later in the same day, another developer fetches the latest test suite. Their job is to write the production code to get those tests passing. Once all the tests pass, they fetch from source control the latest version of the test project to get the newer tests. This methodology has some of the benefits of pair programming, namely lowering the bus number. It can be a good way of adding an extra developer to a project without slowing it down too much. The production coder isn't slowed down writing tests. The tests are in a separate project from the production code, so there shouldn't be any merge conflicts despite two developers working on the same solution. This methodology is also a good test of the tests. Can another developer figure out what the system should do just by reading the tests? This question will be answered as the production coder works their way through the test blazer's tests.

    Test Driven Development (TDD)
    TDD is a highly disciplined practice that calls for a new test and new production code to be written every few minutes. There are strict rules for when you should be writing tests or production code. You start by writing a failing (red) test, then write the simplest production code possible to get the code working (green), then you clean up the code (refactor). This is known as the red-green-refactor cycle (a minimal red-green pair is sketched at the end of this post). The goal of TDD isn't the creation of a suite of tests; however, that is an advantageous side effect. The real goal of TDD is to follow a practice that yields a better design. The practice is meant to push the design toward small, decoupled, modularized components. This is generally considered a better design than a large, highly coupled ball of mud. TDD accomplishes this through the refactoring cycle. Refactoring is only possible to do safely when tests are in place. In order to use TDD, developers must be trained in how to look for and repair code smells in the system. Through repairing these sections of smelly code (i.e. refactoring), the design of the system emerges. For further information on TDD, I highly recommend the series "Is TDD Dead?". It discusses its pros and cons and when it is best used.

    Acceptance Test Driven Development (ATDD)
    Whereas TDD focuses on small unit tests that concentrate on a small piece of the system, Acceptance Tests focus on the larger integrated environment. Acceptance Tests usually correspond to user stories, which come directly from the customer. The unit tests focus on the inputs and outputs of smaller parts of the system, which are too low-level to be of interest to the customer. ATDD generally uses the same tools as TDD. However, ATDD uses fewer mocks and test doubles than TDD. ATDD often complements TDD; they aren't competing methods. A full test suite will usually consist of a large number of unit tests (created via TDD) and a smaller number of acceptance tests.

    Behaviour Driven Development (BDD)
    BDD is more about audience than workflow. BDD pushes the testing realm out towards the client. Developers, managers and the client all work together to define the tests. Typically, different tooling is used for BDD than for acceptance and unit testing. This is done because the audience is not just developers. Tools using the Gherkin family of languages allow test scenarios to be described in an English format. Other tools such as MSpec or FitNesse also strive for highly readable behaviour-driven test suites. Because these tests are public facing (viewable by people outside the development team), the terminology usually changes. You can't get away with the same technobabble you can with unit tests written in a programming language that only developers understand. For starters, they usually aren't called tests. Usually they're called "examples", "behaviours", "scenarios", or "specifications". This may seem like a very subtle difference, but I've seen this small terminology change have a huge impact on the acceptance of the process. Many people have a bias that testing is something that comes at the end of a project. When you say we need to define the tests at the start of the project, many people will immediately give that a lower priority on the project schedule. But if you say we need to define the specification or behaviour of the system before we can start, you'll get more cooperation.

    Keep these test-first and test-after workflows in your tool belt. With them you'll be able to find new opportunities to apply them.
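
    As promised above, a minimal red-green pair to make the TDD cycle concrete (an invented NUnit-style example, not from the original post):

        // Red: this test is written first and fails because Counter doesn't exist yet.
        using NUnit.Framework;

        [TestFixture]
        public class CounterTests
        {
            [Test]
            public void Increment_FromZero_ReturnsOne()
            {
                var counter = new Counter();
                Assert.AreEqual(1, counter.Increment());
            }
        }

        // Green: the simplest production code that makes the test pass.
        // Refactoring happens only once the bar is green.
        public class Counter
        {
            private int value;
            public int Increment() { return ++value; }
        }

    The point is the rhythm, not the example: each pass adds one failing test and just enough code to make it pass.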

    Read the article

  • Xcode 4: deleting items in Build Settings

    - by occulus
    In Xcode 4.0 there's a newly designed Build Settings page. My problem is that I can't see how to remove a setting once I've specified it. Example: I've changed "Versioning System" to "Apple Generic" at the target level. Afterwards I realised I should set it at the project level, and so I want to delete the target-level setting. However, there's no way I can see to remove the setting - you click on it and there are two options, neither of which is the default "not specified" empty setting. Hitting the delete key on the keyboard does nothing. The same goes for fields that accept text - if I try to delete a setting by just deleting all the text in it, it still shows that field in green, but with no text in it, and regards it as the presence of a setting.

    Read the article

  • What tool/framework to use for technical documentation?

    - by Pangea
    We develop products and frameworks to be used within our organization. I am looking for programmer-friendly documentation tools. I researched a few options some time back but couldn't decide which one to use. I am looking for suggestions from people who have already used these tools.

    DocBook: Spring Framework and Hibernate use this format, and it looks good, but I believe they have customized the default XSLT/stylesheet. Can I copy and use their XSLT and CSS (of course with colors and images changed)? Can I integrate the doc generation using Maven?

    Wiki: this is not friendly to technical document writers, and the documentation doesn't look professional. Versioning is also not possible, I believe.

    Word docs: this is what we use currently, but it is hard to link and reuse common documents.

    DITA?

    Read the article

  • Managing changes in memory-based data format

    - by kamziro
    So I've been using a compact data type in C++, and saving it to or loading it from a file involves just copying the bits of memory in and out. However, the obvious drawback of this is that if you need to add/remove elements in the data, it becomes kind of messy. There are also problems with versioning: suppose you distribute a program which uses version A of the data, and then the next day you make version B of it, and then later on version C. I suppose this can be solved by using something like XML or JSON. But suppose you can't do that for technical reasons. What is the best way to do this, apart from having to write different if cases etc. (which would be pretty ugly, I'd imagine)?
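
    Absent XML/JSON, the usual answer is a small self-describing header: a magic number plus a format version in front of the raw payload, with the loader branching on the version. A sketch (written in C# for brevity; the layout translates directly to C++, and the field names are invented):

        // Sketch: version-tagged binary format. Older readers can reject newer
        // files cleanly, and newer readers can default fields older files lack.
        using System.IO;

        class SaveFile
        {
            const uint Magic = 0xBEEF0001;

            public static void Save(BinaryWriter w, int width, int height, float scale)
            {
                w.Write(Magic);
                w.Write(2);       // format version 2 added 'scale'
                w.Write(width);
                w.Write(height);
                w.Write(scale);
            }

            public static void Load(BinaryReader r, out int width, out int height, out float scale)
            {
                if (r.ReadUInt32() != Magic) throw new InvalidDataException("not a save file");
                int version = r.ReadInt32();
                width = r.ReadInt32();
                height = r.ReadInt32();
                // Version-A files predate 'scale'; default it instead of failing.
                scale = version >= 2 ? r.ReadSingle() : 1.0f;
            }
        }

    The if-per-version cases don't disappear, but they collapse into one branch point per added field rather than being scattered through the code.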

    Read the article

  • "conveyor belt" cache architecture

    - by Andrew Matthews
    I'm producing an application with a few peculiar internal communication characteristics that make the usual suspects for data storage and transport (queues and RDBMSs) ill-fitted. I'm wondering whether there is a product out there that matches the following characteristics:

    - all data put into it is persistent
    - all reads are delivered out of memory
    - data is universally available
    - data lives where it is most needed
    - data is versioned (nice to have)
    - updates are transactional (I'd like ACID characteristics)
    - data is potentially replicated, but always in sync
    - works on Windows
    - is based on, or has bindings for, .NET
    - is really fast
    - is really robust
    - is redundant
    - is scalable

    I'm looking at things like Microsoft codename "Velocity", but I am not sure whether it fits all of the above characteristics. Likewise, Memcached is not a perfect fit either. The current version of this app opts for an RDBMS with a signaling system for inter-system sync, but latency is too high and versioning of the DB is a pain. I need all the robustness, but with none of the trade-offs.

    Read the article

  • Mandatory Parameters in Request object (WCF)

    - by Martin Moser
    Hi, I'm currently writing a WCF service. One of its methods gets a request object and returns a response object. In the request there are a couple of value-type members. Is there a way to declare members as mandatory, declaratively? I'm at an early stage of development and I don't want to start with versioning now. In addition, I don't want a method signature with 25 parameters; that's why I created the request object. The problem I have is that, due to the value types, I can never be sure whether the consumer of the service intended to send the default value or it was just laziness. On the consumer side you don't easily detect that you probably missed that property. So I would like to have something that forces the caller of the service to provide a value, and if not, he ideally gets a compile-time error. Any ideas? tia, Martin
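
    Not a compile-time error, but WCF's data contracts can enforce this at deserialization time. A sketch (contract and member names invented) combining IsRequired with nullable value types, so an omitted value can't masquerade as a default:

        // Sketch: IsRequired makes WCF reject messages that omit the element
        // entirely, and the nullable type distinguishes "sent default" from
        // "never sent" on the service side.
        using System;
        using System.Runtime.Serialization;

        [DataContract]
        public class CreateOrderRequest
        {
            // Deserialization faults if this element is missing from the message.
            [DataMember(IsRequired = true)]
            public int? Quantity { get; set; }

            [DataMember(IsRequired = true)]
            public DateTime? DeliveryDate { get; set; }
        }

    A generated client proxy will surface the nullable members, which at least makes a forgotten property visible to the consumer, even if the compiler can't force it.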

    Read the article

  • NHibernate 'IdentifierGenerationException' on an Update trigger

    - by Jan Jongboom
    In my database I have an id column defined as:

        [autonumber] [int] IDENTITY(1,1) NOT NULL

    which is mapped in my .hbm.xml like:

        <id name="Id" column="autonumber" type="int">
          <generator class="identity" />
        </id>

    When calling session.Save(), updates are successfully committed to the database. When I add a versioning trigger, however, I get an IdentifierGenerationException: "this id generator generates Int64, Int32, Int16". The trigger is defined as:

        ALTER TRIGGER [dbo].[CatchUpdates_NVM_FDK_Kenmerken]
        ON [dbo].[NVM_FDK_Kenmerken]
        INSTEAD OF UPDATE
        AS
        BEGIN
            SET NOCOUNT ON

            UPDATE NVM_FDK_Kenmerken
            SET idIsActive = 0
            WHERE internalId IN (SELECT internalId FROM INSERTED)

            INSERT INTO dbo.NVM_FDK_Kenmerken
            (
                vestigingNummer,
                internalId,
                someOtherColumns,
                dateInserted,
                idIsActive
            )
            SELECT vestigingNummer, internalId, someOtherColumns, GETDATE(), 1
            FROM INSERTED
        END

    What am I doing wrong here? When doing manual updates, everything works just fine and as expected.

    Read the article

  • Document Management System - Where to Store Files?

    - by Diego AC
    Hey, Stack! I'm in charge of building an ASP.NET MVC document management system. It has to be able to do basic document management tasks like adding, editing and searching entries, and also perform versioning. Anyway, I'm targeting PDF, Office and many image formats as the file attached to each document entry in the database. My question is: what design guidelines do the pros follow when building the storage mechanism? Do they store the document files in the file system? In the database? How is file uploading handled? I used to upload the files to a temporary location while the user was editing the data and move them to permanent storage when the user confirmed the entry creation. Is this good? Any suggestions on improvement?
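
    One widely used layout, offered as a sketch rather than an answer from the thread: keep the binary on disk under an opaque name (so versions never collide or overwrite), and keep title, version number, content type and relative path in a database row. The repository root and names below are invented:

        // Sketch: disk for bytes, database for metadata. Each saved version gets
        // its own GUID-named file, which makes versioning append-only.
        using System;
        using System.IO;

        public class DocumentStore
        {
            private readonly string root;

            public DocumentStore(string root) { this.root = root; }

            // Returns the relative path to record in the documents table alongside
            // the original file name, content type, and a version number.
            public string SaveNewVersion(Stream content)
            {
                string name = Guid.NewGuid().ToString("N");
                // Fan out into subfolders so one directory never holds millions of files.
                string relative = Path.Combine(name.Substring(0, 2), name);
                string full = Path.Combine(root, relative);
                Directory.CreateDirectory(Path.GetDirectoryName(full));
                using (var file = File.Create(full))
                    content.CopyTo(file); // Stream.CopyTo needs .NET 4; loop a buffer on 3.5
                return relative;
            }
        }

    This also answers the staging question: an upload written under a fresh GUID is inert until its metadata row is committed, so "confirm entry" becomes a database insert rather than a file move.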

    Read the article
