Search Results

Search found 10836 results on 434 pages for 'external dependencies'.


  • #OOW 2012 @PARIS...talking Oracle and Clouds, and Optimized Datacenter

    - by Eric Bezille
    For those of you who want to get the most out of Oracle technologies to evolve your IT to the next wave, I encourage you to register for the upcoming Oracle Optimized Datacenter event that will take place in Paris on November 28th. You will get the opportunity to exchange with Oracle experts and with customers who have successfully evolved their IT by leveraging Oracle technologies. You will also get the latest news on some of the Oracle systems announcements made during OOW 2012. During this event we will give an update on Oracle and clouds, from private to public and hybrid models. In preparing this session, I thought a good starting point would be a status of cloud computing in France, and of CIO requirements in particular. Starting in 2009 with the first Cloud Camp in Paris, the market has evolved, but the basics are still the same: think hybrid.

    From Traditional IT to Clouds

    One size doesn't fit all. For big companies that already have an IT estate in place, some parts will be eligible for an external (public) cloud, and some parts will be required to stay inside the firewall, so the ability to integrate both sides is key. Nonetheless, one of the major impacts of the cloud computing trend on IT, reported by Forrester, is the pressure it puts on CIOs to evolve toward the same model that end users are now accustomed to in their day-to-day lives, where self-service and flexibility are paramount. This is what is driving IT to transform itself into "a global service provider", or for some into "IT 'is' the business" (see: Gartner Identifies Four Futures for IT and CIO), and in both models toward becoming a private cloud service provider. In this journey, there is still a big difference between most existing external clouds and a firm's IT: the number of applications a CIO has to manage. Most cloud providers today are highly specialized, but at the end of the day very few business processes rely on only one application. So CIOs have to combine everything, external and internal. And for the internal parts that they will have to evolve into a private cloud, the scope can be very large. This will often require CIOs to move from their traditional approach to more disruptive ones; the time has come to introduce new standards and processes if they want to succeed. So let's look at the different cloud models: what type of users they address, what value they bring, and most importantly what needs to be done by the cloud provider and what is left to the user.

    IaaS, PaaS, SaaS: what's provided and what needs to be done

    First of all, the cloud provider has to supply all the infrastructure needed to deliver the service, and the more value IT wants to provide, the more IT has to deliver and integrate: from disks to applications. As the picture above shows, providing pure IaaS leaves a lot for the end user to cover, which is why the end users targeted by this cloud service are IT people. If you want to bring more value to developers, you need to provide them with a ready-to-use development platform, which is what PaaS stands for: not only processing power, storage, and an OS, but also the database and middleware platform. SaaS is the last mile of the cloud, providing an application ready to use by business users; what remains for the end users is configuring and specifying the application for their specific usage.
    In addition, there are common challenges that encompass all types of cloud services:

    Security: covering all aspects, not only user management but also data flows and data privacy
    Chargeback: measuring what is used, and by whom
    Application management: providing capabilities not only to deploy but also to upgrade, from the OS for IaaS, through the database and middleware for PaaS, to a full business application for SaaS
    Scalability: the ability to evolve ALL the components of the cloud provider stack as needed
    Availability: the ability to cover "always on" requirements
    Efficiency: providing an infrastructure that leverages shared resources efficiently while still complying with SLAs (performance, availability, scalability, and ability to evolve)
    Automation: orchestrating ALL the components across the whole service life cycle (deployment, growth and shrink (elasticity), upgrades, ...)
    Management: providing monitoring, configuration, and self-service all the way up to the end users

    Oracle Strategy and Clouds

    For CIOs, succeeding in a private cloud implementation means covering all those aspects across the life cycle of every component they select to build their cloud. That's where a multi-vendor, layered approach comes up short in terms of efficiency, and that's why Oracle takes care of all those aspects directly at the engineering level, to truly provide efficient cloud service solutions for IaaS, PaaS, and SaaS. We go as far as embedding software functions in hardware (at the storage and processor levels, ...) to ensure the best SLA with the highest efficiency. The beauty of it, since we rely on standards, is that the Oracle components you run in-house today are exactly the same ones we use to build clouds, bringing you flexibility, reversibility, and a fast path to adoption.

    With Oracle Engineered Systems (Exadata, Exalogic, and SPARC SuperCluster, more specifically, when talking about cloud), we deliver all those components, hardware and software, already engineered together at the Oracle factory, with a single pane of glass for managing ALL the components through Oracle Enterprise Manager, and with high availability, scalability, and the ability to evolve by design. To give you a feel for what that means just for the implementation project timeline: with Oracle SPARC SuperCluster, for example, we have a consistent track record of plugging the system into an existing datacenter and having it ready in a week. This includes the Oracle Database, OS, virtualization, database storage (Exadata Storage Cells in this case), application storage, and all network configuration. This strategy enables CIOs to build cloud services very quickly, removing not only the complexity of integrating everything together but also the complexity and cost of automation and evolution. I invite you to discuss all those aspects, in light of your particular context, face to face on November 28th.

    Read the article

  • LUKOIL Overseas Holding Optimizes Oil Field Development Projects with Integrated Project Management

    - by Melissa Centurio Lopes
    LUKOIL Overseas Group is a growing oil and gas company and an integral part of the vertically integrated oil company OAO LUKOIL. It is engaged in the exploration, acquisition, integration, and efficient development of oil and gas fields outside the Russian Federation, helping to transform LUKOIL into a transnational energy company. In 2010, the company signed a 20-year development project for the giant West Qurna-2 oil field in Iraq. Executing 10,000 to 15,000 project activities simultaneously across 14 major construction and drilling projects for West Qurna-2 meant the company needed a clear picture, in real time, of the dependencies between the capital construction, geologic exploration, and sinking projects required to build infrastructure for its oil field development projects in Iraq.

    LUKOIL Overseas Holding deployed Oracle's Primavera P6 Enterprise Project Portfolio Management to generate structured project management information and optimize planning, monitoring, and analysis of all engineering and commercial activities, such as tenders and bulk procurement of materials and equipment, related to oil field development projects.

    A word from LUKOIL Overseas Holding Ltd.

    "Previously, we created project schedules on desktop computers and uploaded them to the project server to be merged into one big file for each project participant to access. This was not scalable, as we've grown and now run up to 15,000 activities in numerous projects and subprojects at any time. With Oracle's Primavera P6 Enterprise Project Portfolio Management, we can now work concurrently on projects with many team members, enjoy absolute security, and issue new baselines for all projects and project participants once a week, with ease." – Sergey Kotov, Head of IT and the Communication Office, LUKOIL Mid-East Ltd.

    Oracle Primavera Solutions:

    · Facilitated managing dependencies between projects by enabling the general scheduler to reschedule all projects and subprojects once a week, realigning the 10,000 to 15,000 project activities that the company runs at any time
    · Replaced Microsoft Project and a paper-based system with a complete solution that provides structured project data
    · Enhanced data security by establishing project management security policies that allow only authorized project members to edit their project tasks, while enabling each project participant to view all project data relevant to that individual's task
    · Enabled the company to monitor project progress against the projected plan, based on physical project assets, to determine whether each project is on track to conclude within its time and budget limitations

    To view the full list of solutions, view here.
    "Oracle Gold Partner Parma Telecom was key to our successful Primavera deployment, implementing the software's basic functionalities, such as project content, timeframes management, and cost management, in addition to performing its integration with our enterprise resource planning system and intranet portal, within ten months and in accordance with budgets," said Rafik Baynazarov, head of the master planning and control office, LUKOIL Mid-East Ltd. To read the full version of the customer success story, please view here.

    Read the article

  • Caching factory design

    - by max
    I have a factory class XFactory that creates objects of class X. Instances of X are very large, so the main purpose of the factory is to cache them, as transparently to the client code as possible. Objects of class X are immutable, so the following code seems reasonable:

    # module xfactory.py
    import x

    class XFactory:
        _registry = {}  # class-level cache, shared by all factory instances

        def get_x(self, arg1, arg2, use_cache=True):
            if use_cache:
                hash_id = hash((arg1, arg2))
                if hash_id in self._registry:  # self. added: the bare name would raise NameError
                    return self._registry[hash_id]
                obj = x.X(arg1, arg2)
                self._registry[hash_id] = obj
                return obj
            return x.X(arg1, arg2)  # added: the original snippet did not handle use_cache=False

    # module x.py
    class X:
        # ...

    Is it a good pattern? (I know it's not the actual Factory Pattern.) Is there anything I should change?

    Now, I find that sometimes I want to cache X objects to disk. I'll use pickle for that purpose, and store as values in the _registry the filenames of the pickled objects instead of references to the objects. Of course, _registry itself would have to be stored persistently (perhaps in a pickle file of its own, in a text file, in a database, or simply by giving pickle files the filenames that contain hash_id).

    Except now the validity of the cached object depends not only on the parameters passed to get_x(), but also on the version of the code that created these objects. Strictly speaking, even a memory-cached object could become invalid if someone modifies x.py or any of its dependencies and reloads it while the program is running. So far I have ignored this danger, since it seems unlikely for my application. But I certainly cannot ignore it when my objects are cached to persistent storage.

    What can I do? I suppose I could make the hash_id more robust by calculating the hash of a tuple that contains arguments arg1 and arg2, as well as the filename and last-modified date of x.py and of every module and data file that it (recursively) depends on. To help delete cache files that won't ever be useful again, I'd add to the _registry the unhashed representation of the modified dates for each record. But even this solution isn't 100% safe, since theoretically someone might load a module dynamically, and I wouldn't know about it from statically analyzing the source code. And if I go all out and assume every file in the project is a dependency, the mechanism will still break if some module grabs data from an external website, etc. In addition, the frequency of changes in x.py and its dependencies is quite high, leading to heavy cache invalidation.

    Thus, I figured I might as well give up some safety and invalidate the cache only when there is an obvious mismatch. This means that class X would have a class-level cache validation identifier that should be changed whenever the developer believes a change happened that should invalidate the cache. (With multiple developers, a separate invalidation identifier is required for each.) This identifier is hashed along with arg1 and arg2 and becomes part of the hash keys stored in _registry. Since developers may forget to update the validation identifier, or may not realize that they invalidated the existing cache, it would seem better to add another validation mechanism: class X can have a method that returns all the known "traits" of X. For instance, if X is a table, I might add the names of all the columns. The hash calculation would include the traits as well.

    I can write this code, but I am afraid that I'm missing something important; and I'm also wondering if perhaps there's a framework or package that can do all of this stuff already. Ideally, I'd like to combine in-memory and disk-based caching.
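    For what it's worth, here is a rough sketch of the key scheme described above (CACHE_VERSION and traits() are hypothetical names for illustration, not part of my real code):

    # sketch: a cache key that mixes the arguments with a developer-maintained
    # version token and the class's self-reported "traits", so that stale
    # entries simply miss instead of being served
    import hashlib
    import pickle

    class X:
        CACHE_VERSION = 3  # hypothetical: bump whenever cached instances should be discarded

        def __init__(self, arg1, arg2):
            self.arg1, self.arg2 = arg1, arg2

        @classmethod
        def traits(cls):
            # hypothetical example: structural facts about X, e.g. column names
            return ("col_a", "col_b")

    def cache_key(arg1, arg2):
        payload = pickle.dumps((arg1, arg2, X.CACHE_VERSION, X.traits()))
        return hashlib.sha1(payload).hexdigest()  # stable across runs, unlike hash()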

    Read the article

  • Business Case for investing time developing Stubs and BizUnit Tests

    - by charlie.mott
    I was recently in a position where I had to justify why effort should be spent developing stubbed integration tests for BizTalk solutions. These tests are usually developed using the BizUnit framework. I assumed that most seasoned BizTalk developers would consider this best practice. Even though Microsoft suggests the use of BizUnit on MSDN, I've not found a single site listing the justifications for investing time writing stubs and BizUnit tests.

    Stubs

    Stubs should be developed to isolate your development team from external dependencies. This is described by Michael Stephenson here. Failing to do this can result in the following problems:

    · In contract-first scenarios, the external system interface will have been defined, but the interface may not have been set up, or even developed yet, for the BizTalk developers to work with.
    · By the time you open the target location to see the data BizTalk has sent, it may have been swept away.
    · If you are relying on the UI of the target system to see the data BizTalk has sent, what do you do if it fails to arrive? It may take time for the data to be processed, or it may be scheduled to be processed later.
    · Learning how to use the source\target systems, and investigating where things go wrong in those systems, will slow down the BizTalk development effort.
    · By the time the data is visible in a UI it may have undergone further transformations.
    · In larger development teams working together, do you all use the same source and target instances? How do you know which data was created by whose tests? How do you know which event log error messages are whose? Another developer may have "cleaned up" your data.
    · It is harder to write BizUnit tests that clean up the data\logs after each test run.
    · What if your B2B partners' source or target system cannot support the sort of testing you want to do? They may not even have a development or test instance that you can work with. Their single test instance may be used by the SIT\UAT teams.
    · There may be licensing costs for setting up an instance of the external system.

    The stubs I like to use are generic stubs that can accept\return any message type; usually I need to create one per protocol. They should be driven by BizUnit steps to validate the data received and to select a response message (or an error response). Once built, they can be re-used for many integration tests and from project to project.

    I'm not saying that developers should never test against a real instance. Every so often, you still need to connect to real developer or test instances of the source and target endpoints\services. The interface developers may ask you to send them some data to see if everything still works, or you might want some messages sent to BizTalk to gain confidence that everything still works beyond BizTalk.

    Tests

    Automated "stubbed integration tests" are usually built using the BizUnit framework. These facilitate testing of the entire integration process, from source stub to target stub, and will ensure that all of the BizTalk components are configured together correctly to meet the requirements. More fine-grained unit testing of individual BizTalk components is still encouraged, but BizUnit provides by far the easiest way to test some component types (e.g. orchestrations). Using BizUnit with the Behaviour Driven Development approach described by Mike Stephenson delivers the following benefits (source: http://biztalkbddsample.codeplex.com – Video 1):
    · Requirements can be easily defined using Given/When/Then (see the sketch at the end of this post)
    · Requirements are close to the code, so they are easier to manage as features and scenarios
    · Requirements are defined in the domain language
    · The feature files can be used as part of the documentation
    · The documentation is accurate to the build of the code and can be published with a release
    · The scenarios document the behaviour effectively without being excessive
    · The scenarios are maintained with the code
    · There's an abstraction between the intention and the implementation of tests, making them easier to understand
    · The requirements drive the testing

    These same tests can also be used to drive load testing, as described here.

    If you don't do this ...

    If you don't follow the "stubbed integration tests" approach above, the developer will need to trigger the tests manually. This has the following risks:

    · Developers are unlikely to check all the scenarios, and all the expected conditions, each time.
    · After the developer leaves, these manual test steps may be lost. What test scenarios are there? What test messages did they use for each scenario?
    · There is no mechanism to prove adequate test coverage.

    A test team may attempt to automate integration test scenarios in a test environment by triggering tests from a source system UI. If this is a replacement for BizUnit tests, then it carries the following risks:

    · It moves the tests downstream, so problems will be found later in the process.
    · Testers may not check all the expected conditions within the BizTalk infrastructure, such as event logs, suspended messages, etc.
    · These automated tests may also get in the way of manual tests run on those environments.
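    To make the Given/When/Then point concrete, here is a hypothetical feature file (the scenario, message names, and systems are invented for illustration, not taken from a real project):

    Feature: Order routing through BizTalk
      As an operations team we want orders routed to the fulfilment system
      so that customers receive their goods

      Scenario: A valid order is routed to the fulfilment stub
        Given the fulfilment stub is configured to return an acknowledgement
        When a valid order message is dropped on the receive location
        Then the fulfilment stub receives one order matching the expected schema
        And the acknowledgement is written to the send port output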

    Read the article

  • Dependency injection: How to sell it

    - by Mel
    Let it be known that I am a big fan of dependency injection (DI) and automated testing. I could talk all day about it.

    Background

    Recently, our team got a big project that is to be built from scratch. It is a strategic application with complex business requirements. Of course, I wanted it to be nice and clean, which for me means: maintainable and testable. So I wanted to use DI.

    Resistance

    The problem was that in our team, DI is taboo. It has been brought up a few times, but the gods do not approve. But that did not discourage me.

    My Move

    This may sound weird, but third-party libraries are usually not approved by our architect team (think: "thou shalt not speak of Unity, Ninject, NHibernate, Moq or NUnit, lest I cut your finger"). So instead of using an established DI container, I wrote an extremely simple container (a sketch of the idea appears at the end of this post). It basically wired up all your dependencies on startup, injected any dependencies (constructor/property), and disposed of any disposable objects at the end of the web request. It was extremely lightweight and just did what we needed. And then I asked them to review it.

    The Response

    Well, to make it short, I was met with heavy resistance. The main argument was, "We don't need to add this layer of complexity to an already complex project". Also, "It's not like we will be plugging in different implementations of components". And, "We want to keep it simple; if possible just stuff everything into one assembly. DI is an unneeded complexity with no benefit".

    Finally, My Question

    How would you handle my situation? I am not good at presenting my ideas, and I would like to know how people would present their argument. Of course, I am assuming that, like me, you prefer to use DI. If you don't agree, please do say why, so I can see the other side of the coin. It would be really interesting to see the point of view of someone who disagrees.

    Update

    Thank you for everyone's answers. It really puts things into perspective. It's nice to have another set of eyes give you feedback, and fifteen is really awesome! These are really great answers that helped me see the issue from different sides, but I can only choose one answer, so I will just pick the top-voted one. Thanks, everyone, for taking the time to answer.

    I have decided that it is probably not the best time to implement DI, and that we are not ready for it. Instead, I will concentrate my efforts on making the design testable and attempt to present automated unit testing. I am aware that writing tests is additional overhead, and if it is ever decided that the additional overhead is not worth it, I would personally still see it as a win, since the design is still testable. And if testing or DI is ever a choice in the future, the design can easily handle it.
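    For the curious, here is a rough sketch of the kind of container I wrote (an illustration from memory, not the reviewed code; type and member names are made up):

    // Minimal constructor-injection container: register mappings at startup,
    // resolve recursively, and track disposables so they can be cleaned up
    // at the end of the web request.
    using System;
    using System.Collections.Generic;
    using System.Linq;

    public class SimpleContainer : IDisposable
    {
        private readonly Dictionary<Type, Type> _map = new Dictionary<Type, Type>();
        private readonly List<IDisposable> _disposables = new List<IDisposable>();

        public void Register<TService, TImpl>() where TImpl : TService
        {
            _map[typeof(TService)] = typeof(TImpl);
        }

        public T Resolve<T>()
        {
            return (T)Resolve(typeof(T));
        }

        private object Resolve(Type service)
        {
            Type impl = _map.ContainsKey(service) ? _map[service] : service;

            // pick the greediest public constructor and satisfy it recursively
            var ctor = impl.GetConstructors()
                           .OrderByDescending(c => c.GetParameters().Length)
                           .First();
            var args = ctor.GetParameters()
                           .Select(p => Resolve(p.ParameterType))
                           .ToArray();

            object instance = ctor.Invoke(args);

            var disposable = instance as IDisposable;
            if (disposable != null)
                _disposables.Add(disposable); // disposed when the request ends

            return instance;
        }

        public void Dispose()
        {
            foreach (var d in _disposables)
                d.Dispose();
        }
    }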

    Read the article

  • How should I define the pom.xml in each module so that the web module can communicate with the two EJB modules?

    - by Kayser
    Maven, maven, maven. It must be very nice, and it is nice, for a small application. Now I want to build an EAR project: two EJB modules, a web module, and an EAR module to build the EAR file. The web module depends on the other EJB modules. How should I define the pom.xml in each module so that the web module can communicate with the two EJB modules in the EAR, and the EAR module builds the right EAR file? What I have done before:

    Module 1 -- Basic module. All other modules depend on this module. Basic functionality like login etc.

    <packaging>ejb</packaging>

    Module 2 -- Data module. All entities are here. Type EJB.

    <dependency>
        <groupId>com.myCompnay</groupId>
        <artifactId>Modul_Basic</artifactId>
        <version>0.0.1-SNAPSHOT</version>
        <type>ejb</type>
    </dependency>

    Module 3 -- Business module. Business facades are here. Type EJB.

    <dependency>
        <groupId>com.myCompnay</groupId>
        <artifactId>Modul_Basic</artifactId>
        <version>0.0.1-SNAPSHOT</version>
        <type>ejb</type>
    </dependency>

    Web module -- Type is WAR.

    <dependency>
        <groupId>com.myCompnay</groupId>
        <artifactId>Modul_Basic</artifactId>
        <version>0.0.1-SNAPSHOT</version>
        <type>ejb</type>
    </dependency>

    EAR module -- In this project I try to build the EAR file.

    <packaging>ear</packaging>

    <dependencies>
        <dependency>
            <groupId>com.myCompnay</groupId>
            <artifactId>Modul_Basic</artifactId>
            <version>0.0.1-SNAPSHOT</version>
            <type>ejb</type>
        </dependency>
        <dependency>
            <groupId>com.myCompnay</groupId>
            <artifactId>Modul_Business</artifactId>
            <version>0.0.1-SNAPSHOT</version>
            <type>ejb</type>
        </dependency>
        <dependency>
            <groupId>com.myCompnay</groupId>
            <artifactId>Modul_WEB</artifactId>
            <version>0.0.1-SNAPSHOT</version>
            <type>war</type>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-ear-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
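    For illustration, the maven-ear-plugin can be told explicitly which artifact becomes which module type inside the EAR. I've seen configurations sketched like this (the context root is a placeholder, and I'm not sure this alone is the complete fix for my setup):

    <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-ear-plugin</artifactId>
        <configuration>
            <!-- bundle shared jars under lib/ inside the EAR -->
            <defaultLibBundleDir>lib</defaultLibBundleDir>
            <modules>
                <ejbModule>
                    <groupId>com.myCompnay</groupId>
                    <artifactId>Modul_Basic</artifactId>
                </ejbModule>
                <webModule>
                    <groupId>com.myCompnay</groupId>
                    <artifactId>Modul_WEB</artifactId>
                    <contextRoot>/app</contextRoot>
                </webModule>
            </modules>
        </configuration>
    </plugin>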

    Read the article

  • Changes in the Maven Embedded GlassFish plugin

    - by Romain Grecourt
    The plugin changed its Maven coordinates (a.k.a. GAV) over time:

    · version <= 3.1.1: available under org.glassfish:maven-embedded-glassfish-plugin
    · version >= 3.1.2: available under org.glassfish.embedded:maven-embedded-glassfish-plugin

    The goal “glassfish-embedded:run” has changed its way of reading the deployment configuration in the latest version, 4.0. Projects using previous versions of the plugin will stop working with this goal. Here is an example of the “old behavior”:

    <plugin>
        <groupId>org.glassfish.embedded</groupId>
        <artifactId>maven-embedded-glassfish-plugin</artifactId>
        <version>3.1.2.2</version>
        <configuration>
            <app>target/${project.build.finalName}.war</app>
            <contextRoot>/</contextRoot>
            <goalPrefix>embedded-glassfish</goalPrefix>
            <autoDelete>true</autoDelete>
            <port>8080</port>
        </configuration>
    </plugin>

    The new behavior is as follows:

    <plugin>
        <groupId>org.glassfish.embedded</groupId>
        <artifactId>maven-embedded-glassfish-plugin</artifactId>
        <version>4.0</version>
        <configuration>
            <goalPrefix>embedded-glassfish</goalPrefix>
            <autoDelete>true</autoDelete>
            <port>8080</port>
        </configuration>
        <executions>
            <execution>
                <goals>
                    <goal>deploy</goal>
                </goals>
                <configuration>
                    <app>target/${project.build.finalName}.war</app>
                    <contextRoot>/</contextRoot>
                </configuration>
            </execution>
        </executions>
    </plugin>

    The new version looks for an execution of the deploy goal, and its associated configuration, when running the goal ‘run’. Both versions allow you to run the latest glassfish-embedded jar; you only need to add it as a plugin dependency:

    <plugin>
        [...]
        <dependencies>
            <dependency>
                <groupId>org.glassfish.main.extras</groupId>
                <artifactId>glassfish-embedded-all</artifactId>
                <version>4.0</version>
            </dependency>
        </dependencies>
    </plugin>

    Read the article

  • Access Control Service: Handling Errors

    - by Your DisplayName here!
    Another common problem with external authentication is how to deal with sign-in errors. In active federation like WS-Trust, there are well-defined SOAP faults to communicate problems to a client. But with web applications, the error information is typically generated and displayed on the external sign-in page; the relying party does not know about the error, nor can it help the user in any way. The Access Control Service allows you to post sign-in errors to a specified page, which you set up in the relying party registration. That means that whenever an error occurs in ACS, the error information gets packaged up as a JSON string and posted to the page specified. This way you get structured error information back into your application, so you can display a friendlier error message or log the error. I added error page support to my ACS2 sample, which can be downloaded here.

    How to turn the JSON error into CLR types

    The JSON schema is reasonably simple; the following classes turn the JSON into an object:

    [DataContract]
    public class AcsErrorResponse
    {
        [DataMember(Name = "context", Order = 1)]
        public string Context { get; set; }

        [DataMember(Name = "httpReturnCode", Order = 2)]
        public string HttpReturnCode { get; set; }

        [DataMember(Name = "identityProvider", Order = 3)]
        public string IdentityProvider { get; set; }

        [DataMember(Name = "timeStamp", Order = 4)]
        public string TimeStamp { get; set; }

        [DataMember(Name = "traceId", Order = 5)]
        public string TraceId { get; set; }

        [DataMember(Name = "errors", Order = 6)]
        public List<AcsError> Errors { get; set; }

        public static AcsErrorResponse Read(string json)
        {
            var serializer = new DataContractJsonSerializer(typeof(AcsErrorResponse));
            var response = serializer.ReadObject(
                new MemoryStream(Encoding.Default.GetBytes(json))) as AcsErrorResponse;

            if (response != null)
            {
                return response;
            }
            else
            {
                throw new ArgumentException("json");
            }
        }
    }

    [DataContract]
    public class AcsError
    {
        [DataMember(Name = "errorCode", Order = 1)]
        public string Code { get; set; }

        [DataMember(Name = "errorMessage", Order = 2)]
        public string Message { get; set; }
    }

    Retrieving the error information

    You then need to provide a page that takes the POST and deserializes the information. My sample simply fills a view that shows all the information, but that's for diagnostic/sample purposes only; you shouldn't show the real errors to your end users.

    public class SignInErrorController : Controller
    {
        [HttpPost]
        public ActionResult Index()
        {
            var errorDetails = Request.Form["ErrorDetails"];
            var response = AcsErrorResponse.Read(errorDetails);

            return View("SignInError", response);
        }
    }

    Also keep in mind that the error page is an anonymous page and that you are taking external input, so all the usual input validation applies.

    Read the article

  • ASP.NET MVC tries to load older version of Owin assembly

    - by d_mcg
    As a bit of context, I'm developing an ASP.NET MVC 5 application that uses OAuth-based authentication via Microsoft's OWIN implementation, for Facebook and Google only at this stage. Currently (as of v3.0.0, git-commit 4932c2f), FacebookAuthenticationOptions and GoogleOAuth2AuthenticationOptions don't provide any property to force Facebook or Google, respectively, to reauthenticate users (via appending the appropriate query string parameters) when signing in. Initially, I set out to override the following classes:

    · FacebookAuthenticationOptions
    · GoogleOAuth2AuthenticationOptions
    · FacebookAuthenticationHandler (specifically AuthenticateCoreAsync())
    · GoogleOAuth2AuthenticationHandler (specifically AuthenticateCoreAsync())

    yet I discovered that the ~AuthenticationHandler classes are marked as internal. So I pulled a copy of the source for the Katana project (http://katanaproject.codeplex.com/) and modified it accordingly. After compiling, I found that several dependencies needed updating in order to use these updated assemblies (Microsoft.Owin.Security.Facebook and Microsoft.Owin.Security.Google) in the MVC project:

    · Microsoft.Owin
    · Microsoft.Owin.Security
    · Microsoft.Owin.Security.Cookies
    · Microsoft.Owin.Security.OAuth
    · Microsoft.Owin.Host.SystemWeb

    This was done by replacing the existing project references with the 3.0.0 versions and updating those in web.config. Good news: the project compiles successfully. In debugging, I received an exception on startup:

    An exception of type 'System.IO.FileLoadException' occurred in [MVC web assembly].dll but was not handled in user code

    Additional information: Could not load file or assembly 'Microsoft.Owin.Security, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)

    The underlying exception indicated that Microsoft.AspNet.Identity.Owin was trying to load v2.1.0 of Microsoft.Owin.Security when calling app.UseExternalSignInCookie() from Startup.ConfigureAuth(IAppBuilder app) in Startup.Auth.cs. Unfortunately, that assembly (and its other dependency, Microsoft.AspNet.Identity.Owin) isn't part of the Project Katana solution, and I can't find any accessible repository for these assemblies online.

    Are the Microsoft.AspNet.Identity assemblies open source, like the Katana project? Is there a way to fool those assemblies into using the referenced v3.0.0 assemblies instead of v2.1.0? The /bin folder contains the 3.0.0 versions of the Owin assemblies. I've upgraded the NuGet packages for Microsoft.AspNet.Identity.Owin, and this is still an issue. Any ideas on how to resolve this?
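    For what it's worth, the standard mechanism for steering callers onto a newer assembly version is a binding redirect in web.config; something like the following sketch, on the assumption that v3.0.0 is actually API-compatible with what the Identity assemblies expect (which may well not hold, given the major version bump):

    <runtime>
      <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
        <dependentAssembly>
          <!-- redirect requests for older Microsoft.Owin.Security builds to 3.0.0 -->
          <assemblyIdentity name="Microsoft.Owin.Security"
                            publicKeyToken="31bf3856ad364e35" culture="neutral" />
          <bindingRedirect oldVersion="0.0.0.0-3.0.0.0" newVersion="3.0.0.0" />
        </dependentAssembly>
      </assemblyBinding>
    </runtime>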

    Read the article

  • Building a VS2010 solution from TFS2008

    - by slugster
    I have a TFS 2008 build agent that has been used to build .NET 3.5 applications. I now have a .NET 4.0 app which I want to compile on the same build agent. I have ensured that MSBuild 4.0 is installed on there and all the required componentry is also installed, but I am getting the following MSB4062 error when building:

    [Any CPU/Release] C:\Program Files\MSBuild\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.targets(244,5): error MSB4062: The "Microsoft.WebApplication.Build.Tasks.GetSilverlightItemsFromProperty" task could not be loaded from the assembly C:\Program Files\MSBuild\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.Build.Tasks.dll. Could not load file or assembly 'file:///C:\Program Files\MSBuild\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.Build.Tasks.dll' or one of its dependencies. This assembly is built by a runtime newer than the currently loaded runtime and cannot be loaded. Confirm that the <UsingTask> declaration is correct, and that the assembly and all its dependencies are available.

    I presume I get this because TFSBuild.proj gets executed by MSBuild 3.5, which in turn means my solution is compiled with MSBuild 3.5. Am I correct in my diagnosis? Is there any way to ensure that TFS 2008 uses MSBuild 4.0 for my solution? Can it be done on a single team project so that it doesn't affect any other team projects being built on the same build agent?

    Note that I have checked the question "Build failing - VS2010 solution on TFS2008" and this is not a duplicate. Thanks :)

    Read the article

  • Grails Warnings/Errors during run-app

    - by Taylor L
    I'm currently seeing the warnings below when trying to run my Google App Engine/Grails test app in Eclipse:

    Warning, target causing name overwriting of name startLogging
    Warning: C:\Users\Some Person\.grails\1.2.0\projects\test-grails\plugins\app-engine-0.8.8\grails-app\conf\spring not found.
    Warning: C:\Users\Some Person\.grails\1.2.0\projects\test-grails\plugins\app-engine-0.8.8\grails-app\conf not found.
    Warning: C:\Users\Some Person\.grails\1.2.0\projects\test-grails\plugins\app-engine-0.8.8\grails-app\conf\hibernate not found.

    Here is the output from the console:

    Base Directory: C:\Users\Some Person\workspace\test-grails
    Resolving dependencies...
    Dependencies resolved in 1160ms.
    Running script C:\grails-1.2.0\scripts\RunApp.groovy
    Environment set to development
    Warning, target causing name overwriting of name startLogging
    [groovyc] Compiling 1 source file to C:\Users\Some Person\workspace\test-grails\web-app\WEB-INF\classes
    [copy] Copying 1 file to C:\Users\Some Person\.grails\1.2.0\projects\test-grails
    [copy] Copying 1 file to C:\Users\Some Person\workspace\test-grails\web-app\WEB-INF
    Configuring persistence for AppEngine
    [copy] Warning: C:\Users\Some Person\.grails\1.2.0\projects\test-grails\plugins\app-engine-0.8.8\grails-app\conf\spring not found.
    [copy] Warning: C:\Users\Some Person\.grails\1.2.0\projects\test-grails\plugins\app-engine-0.8.8\grails-app\conf not found.
    [copy] Warning: C:\Users\Some Person\.grails\1.2.0\projects\test-grails\plugins\app-engine-0.8.8\grails-app\conf\hibernate not found.

    I get this error after creating a Grails project with Spring Tool Suite (STS) and then installing the app-engine plugin with "grails install-plugin app-engine". Before I install the app-engine plugin, the Grails project runs correctly. Any ideas how to resolve these warnings?

    Read the article

  • Integration test failing through NUnit Gui/Console, but passes through TestDriven in IDE

    - by Cliff
    I am using NHibernate against an Oracle database with the NHibernate.Driver.OracleDataClientDriver driver class. I have an integration test that pulls back the expected data properly when executed through the IDE using TestDriven.Net. However, when I run the unit test through the NUnit GUI or console, NHibernate throws an exception saying it cannot find the Oracle.DataAccess assembly. Obviously, this prevents me from running my integration tests as part of my CI process.

    NHibernate.HibernateException: The IDbCommand and IDbConnection implementation in the assembly Oracle.DataAccess could not be found. Ensure that the assembly Oracle.DataAccess is located in the application directory or in the Global Assembly Cache. If the assembly is in the GAC, use the <qualifyAssembly> element in the application configuration file to specify the full name of the assembly.

    I have tried making the assembly available in two ways: by copying it into the bin\debug folder, and by adding the <qualifyAssembly> element to the config file. Again, both methods work when executing through TestDriven in the IDE; neither works when executing through the NUnit GUI/console. The NUnit GUI log displays the following message:

    21:42:26,377 ERROR [TestRunnerThread] ReflectHelper [(null)]- Could not load type Oracle.DataAccess.Client.OracleConnection, Oracle.DataAccess.
    System.BadImageFormatException: Could not load file or assembly 'Oracle.DataAccess, Version=2.111.7.20, Culture=neutral, PublicKeyToken=89b483f429c47342' or one of its dependencies. An attempt was made to load a program with an incorrect format.
    File name: 'Oracle.DataAccess, Version=2.111.7.20, Culture=neutral, PublicKeyToken=89b483f429c47342'
    --- System.BadImageFormatException: Could not load file or assembly 'Oracle.DataAccess' or one of its dependencies. An attempt was made to load a program with an incorrect format.
    File name: 'Oracle.DataAccess'

    I am running NUnit 2.4.8, TestDriven.Net 2.24, and VS2008 SP1 on Windows 7 64-bit, with Oracle Data Provider v2.111.7.20 and NHibernate v2.1.0.4. Has anyone run into this issue, or better yet, fixed it?

    Read the article

  • T4 vs CodeDom vs Oslo

    - by Ryan Riley
    In an application scaffolding project on which I'm working, I'm trying to decide whether to use Oslo, T4, or CodeDom for generating code. Our goals are to keep dependencies to a minimum and to drive code generation for a domain-driven design from user stories. The first step will be to create the tests from the user stories, but we want the domain experts to be able to write their stories in a variety of different media (e.g. a custom app, Word, etc.) and still generate the tests from the stories.

    What I know so far:

    · CodeDom requires .NET but can only output .NET class files (e.g. .cs, .vb). The level of difficulty is fairly high (see the sketch below).
    · T4 requires CodeDom and VS Standard or above. The level of difficulty is fairly reasonable, especially with the T4 Toolbox.
    · Oslo is very new. I have no idea of the dependencies, but I imagine you must be on at least .NET 3.5. I'm also not certain about its code generation abilities or the complexity of adding new grammars. However, domain experts could probably write user stories in Intellipad quite easily. I'm also not sure about the ease of converting stories in Word to an MGrammar.

    What are your thoughts, experiences, etc. with any of the above tools? We want to stick with Microsoft or open source tools.
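    To illustrate the CodeDom point, here is roughly what it takes just to emit an empty class (a minimal sketch, not code from our project):

    // CodeDom builds a language-independent object graph, then a provider
    // renders it to C#, VB, etc. Even an empty class takes some ceremony.
    using System;
    using System.CodeDom;
    using System.CodeDom.Compiler;
    using System.IO;
    using Microsoft.CSharp;

    class CodeDomDemo
    {
        static void Main()
        {
            var unit = new CodeCompileUnit();
            var ns = new CodeNamespace("Generated");          // namespace Generated
            ns.Types.Add(new CodeTypeDeclaration("Story"));   // class Story { }
            unit.Namespaces.Add(ns);

            using (var writer = new StringWriter())
            {
                new CSharpCodeProvider().GenerateCodeFromCompileUnit(
                    unit, writer, new CodeGeneratorOptions());
                Console.WriteLine(writer.ToString());
            }
        }
    }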

    Read the article

  • Grails beginner problem "Failed to invoke Servlet 2.5"

    - by A Lion
    I'm trying to get Grails set up on a Snow Leopard machine. I followed all the grails.org install directions and am trying to start the application from Grails: A Quick-Start Guide by Dave Klein. I ran grails create-app TekDays with no apparent problems and was able to cd to the TekDays folder, but when I try grails run-app I get the following:

    Welcome to Grails 1.3.1 - http://grails.org/
    Licensed under Apache Standard License 2.0
    Grails home is set to: /grails

    Base Directory: /apps/TekDays
    Resolving dependencies...
    Dependencies resolved in 4469ms.
    Running script /grails/scripts/RunApp.groovy
    Environment set to development
    [delete] Deleting directory /Users/name/.grails/1.3.1/projects/TekDays/tomcat
    Running Grails application..
    2010-05-24 21:42:39,559 [main] ERROR context.GrailsContextLoader - Error executing bootstraps: Failed to invoke Servlet 2.5 getContextPath method
    java.lang.IllegalStateException: Failed to invoke Servlet 2.5 getContextPath method
        at org.grails.tomcat.TomcatServer.start(TomcatServer.groovy:164)
        at grails.web.container.EmbeddableServer$start.call(Unknown Source)
        at _GrailsRun_groovy$_run_closure5_closure12.doCall(_GrailsRun_groovy:159)
        at _GrailsRun_groovy$_run_closure5_closure12.doCall(_GrailsRun_groovy)
        at _GrailsSettings_groovy$_run_closure10.doCall(_GrailsSettings_groovy:282)
        at _GrailsSettings_groovy$_run_closure10.call(_GrailsSettings_groovy)
        at _GrailsRun_groovy$_run_closure5.doCall(_GrailsRun_groovy:150)
        at _GrailsRun_groovy$_run_closure5.call(_GrailsRun_groovy)
        at _GrailsRun_groovy.runInline(_GrailsRun_groovy:116)
        at _GrailsRun_groovy.this$4$runInline(_GrailsRun_groovy)
        at _GrailsRun_groovy$_run_closure1.doCall(_GrailsRun_groovy:59)
        at RunApp$_run_closure1.doCall(RunApp.groovy:33)
        at gant.Gant$_dispatch_closure5.doCall(Gant.groovy:381)
        at gant.Gant$_dispatch_closure7.doCall(Gant.groovy:415)
        at gant.Gant$_dispatch_closure7.doCall(Gant.groovy)
        at gant.Gant.withBuildListeners(Gant.groovy:427)
        at gant.Gant.this$2$withBuildListeners(Gant.groovy)
        at gant.Gant$this$2$withBuildListeners.callCurrent(Unknown Source)
        at gant.Gant.dispatch(Gant.groovy:415)
        at gant.Gant.this$2$dispatch(Gant.groovy)
        at gant.Gant.invokeMethod(Gant.groovy)
        at gant.Gant.executeTargets(Gant.groovy:590)
        at gant.Gant.executeTargets(Gant.groovy:589)
    Caused by: java.lang.NoSuchMethodException: javax.servlet.ServletContext.getContextPath()
        at java.lang.Class.getMethod(Class.java:1605)
        ... 23 more

    I've googled every derivative of "Failed to invoke Servlet 2.5" that I can think of, but have thus far been unable to find anything that helps me understand, let alone resolve, the error. Any advice to help me resolve this would be very much appreciated!

    Read the article

  • Using a custom MvcHttpHandler: breaking change from MVC 1.0 to 2.0?

    - by Myster
    Hi, I have a site where part is WebForms (Umbraco CMS) and part is MVC. This is the HttpHandler that deals with the MVC functionality:

    public class Mvc : MvcHttpHandler
    {
        protected override void ProcessRequest(HttpContext httpContext)
        {
            httpContext.Trace.Write("Mvc.ashx", "Begin ProcessRequest");
            string originalPath = httpContext.Request.Path;
            string newPath = httpContext.Request.QueryString["mvcRoute"];
            if (string.IsNullOrEmpty(newPath))
                newPath = "/";
            httpContext.Trace.Write("Mvc.ashx", "newPath = " + newPath);
            HttpContext.Current.RewritePath(newPath, false);
            base.ProcessRequest(HttpContext.Current);
            HttpContext.Current.RewritePath(originalPath, false);
        }
    }

    Full details of how this is implemented are here. This method works well in an MVC 1.0 website. However, when I upgrade the site to MVC 2.0 following the steps in Microsoft's upgrade documentation, everything compiles, except at runtime I get this exception:

    Server Error in '/' Application.
    The resource cannot be found.
    Description: HTTP 404. The resource you are looking for (or one of its dependencies) could have been removed, had its name changed, or is temporarily unavailable. Please review the following URL and make sure that it is spelled correctly.
    Requested URL: /mvc.ashx
    Version Information: Microsoft .NET Framework Version:2.0.50727.4927; ASP.NET Version:2.0.50727.4927

    This resource and its dependencies are found fine in MVC 1.0 but not in MVC 2.0. Is there an extra dependency I'd need to add? Is there something I'm missing? Is there a change in the way MVC 2.0 works?

    Read the article

  • Android Studio: Could not call IncrementalTask.taskAction() on task ':project:dexDebug'

    - by akenawell85x
    I recently decided to switch from Eclipse to Android Studio. I imported a project I was working on and am now getting this error when I try to run the project:

    Gradle: Execution failed for task ':project:dexDebug'.
    > Could not call IncrementalTask.taskAction() on task ':project:dexDebug'

    I've been cruising this site for two days now and trying different suggestions to no avail. I did run gradlew compileDebug --stacktrace and this is what I got:

    C:\Users\adam\AndroidStudioProjects\projectProject>gradlew compileDebug --stacktrace
    Relying on packaging to define the extension of the main artifact has been deprecated and is scheduled to be removed in Gradle 2.0
    :project:preBuild UP-TO-DATE
    :project:preDebugBuild UP-TO-DATE
    :project:preReleaseBuild UP-TO-DATE
    :project:prepareComAndroidSupportAppcompatV71800Library UP-TO-DATE
    :project:prepareComGoogleAndroidGmsPlayServices3225Library UP-TO-DATE
    :project:prepareDebugDependencies
    :project:compileDebugAidl UP-TO-DATE
    :project:compileDebugRenderscript UP-TO-DATE
    :project:generateDebugBuildConfig UP-TO-DATE
    :project:mergeDebugAssets UP-TO-DATE
    :project:mergeDebugResources UP-TO-DATE
    :project:processDebugManifest UP-TO-DATE
    :project:processDebugResources UP-TO-DATE
    :project:generateDebugSources UP-TO-DATE
    :project:compileDebug UP-TO-DATE

    BUILD SUCCESSFUL

    Total time: 10.459 secs

    However, I am still getting that error when I try to actually run the project. Here is my build.gradle (I do have a 'libs' folder in my project with all the jars for a Google Maps/Places app):

    buildscript {
        repositories {
            mavenCentral()
        }
        dependencies {
            classpath 'com.android.tools.build:gradle:0.6.+'
        }
    }
    apply plugin: 'android'

    repositories {
        mavenCentral()
    }

    android {
        compileSdkVersion 18
        buildToolsVersion "18.1.1"

        defaultConfig {
            minSdkVersion 8
            targetSdkVersion 18
        }
    }

    dependencies {
        compile fileTree(dir: 'libs')
        compile 'com.google.android.gms:play-services:3.2.25'
        compile 'com.android.support:support-v4:18.0.0'
        compile 'com.android.support:appcompat-v7:+'
    }

    and my settings.gradle:

    include ':project',
            ':project:libs:android-support-v4',
            ':project:libs:google-api-client-1.10.3-beta',
            ':project:libs:google-api-client-android2-1.10.3-beta',
            ':project:libs:google-http-client-1.10.3-beta',
            ':project:libs:google-http-client-android2-1.10.3-beta',
            ':project:libs:google-oauth-client-1.10.1-beta',
            ':project:libs:gson-2.1',
            ':project:libs:guava-11.0.1',
            ':project:libs:jackson-core-asl-1.9.4',
            ':project:libs:jsr305-1.3.9',
            ':project:libs:protobuf-java-2.2.0',
            ':project:libs:GoogleAdMobAdsSdk-6.4.1'

    As I said, I've tried pretty much everything I have read on here about this error and am having no luck. Any help would be greatly appreciated.

    Read the article

  • Execution permission cannot be acquired for Outlook 2003 addin

    - by khushnuma
    I am developing a simple Outlook 2003 add-in using VSTO 2008. Everything works fine in the development environment, but when I try to install the add-in it gives the following load error. I think there is some security-related issue. Please help me resolve it.

    Could not load file or assembly 'OutlookAddIn, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' or one of its dependencies. Failed to grant permission to execute. (Exception from HRESULT: 0x80131418)

    ************** Exception Text **************
    System.IO.FileLoadException: Could not load file or assembly 'OutlookAddIn, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' or one of its dependencies. Failed to grant permission to execute. (Exception from HRESULT: 0x80131418)
    File name: 'OutlookAddIn, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null'
    --- System.Security.Policy.PolicyException: Execution permission cannot be acquired.
        at System.Security.SecurityManager.ResolvePolicy(Evidence evidence, PermissionSet reqdPset, PermissionSet optPset, PermissionSet denyPset, PermissionSet& denied, Boolean checkExecutionPermission)
        at System.Security.SecurityManager.ResolvePolicy(Evidence evidence, PermissionSet reqdPset, PermissionSet optPset, PermissionSet denyPset, PermissionSet& denied, Int32& securitySpecialFlags, Boolean checkExecutionPermission)
        at Microsoft.VisualStudio.Tools.Applications.Runtime.AppDomainManagerInternal.HandleOnlineOffline(Exception e, String basePath, String filePath)
        at Microsoft.VisualStudio.Tools.Applications.Runtime.AppDomainManagerInternal.LoadStartupAssembly(EntryPoint entryPoint, Dependency dependency, Dictionary`2 assembliesHash)
        at Microsoft.VisualStudio.Tools.Applications.Runtime.AppDomainManagerInternal.ConfigureAppDomain()
        at Microsoft.VisualStudio.Tools.Applications.Runtime.AppDomainManagerInternal.LoadAssembliesAndConfigureAppDomain(IHostServiceProvider serviceProvider)
        at Microsoft.VisualStudio.Tools.Applications.Runtime.AppDomainManagerInternal.LoadEntryPointsHelper(IHostServiceProvider serviceProvider)

    ************** Loaded Assemblies **************
    mscorlib
        Assembly Version: 2.0.0.0
        Win32 Version: 2.0.50727.3603 (GDR.050727-3600)
        CodeBase: file:///C:/WINDOWS/Microsoft.NET/Framework/v2.0.50727/mscorlib.dll
    ----------------------------------------
    Microsoft.VisualStudio.Tools.Applications.Runtime
        Assembly Version: 8.0.0.0
        Win32 Version: 8.0.50727.940
        CodeBase: file:///C:/WINDOWS/assembly/GAC_MSIL/Microsoft.VisualStudio.Tools.Applications.Runtime/8.0.0.0__b03f5f7f11d50a3a/Microsoft.VisualStudio.Tools.Applications.Runtime.dll
    ----------------------------------------
    Microsoft.Office.Tools.Common
        Assembly Version: 8.0.0.0
        Win32 Version: 8.0.50727.940
        CodeBase: file:///C:/WINDOWS/assembly/GAC_MSIL/Microsoft.Office.Tools.Common/8.0.0.0__b03f5f7f11d50a3a/Microsoft.Office.Tools.Common.dll
    ----------------------------------------
    System
        Assembly Version: 2.0.0.0
        Win32 Version: 2.0.50727.3053 (netfxsp.050727-3000)
        CodeBase: file:///C:/WINDOWS/assembly/GAC_MSIL/System/2.0.0.0__b77a5c561934e089/System.dll
    ----------------------------------------
    System.Windows.Forms
        Assembly Version: 2.0.0.0
        Win32 Version: 2.0.50727.3053 (netfxsp.050727-3000)
        CodeBase: file:///C:/WINDOWS/assembly/GAC_MSIL/System.Windows.Forms/2.0.0.0__b77a5c561934e089/System.Windows.Forms.dll
    ----------------------------------------

    Read the article

  • NoSuchProviderException: smtp with log4j SMTP appender

    - by user1016403
    I am using log4j to send an email when there is an exception. Below is my log4j properties file configuration:

    log4j.rootLogger=WARN, R, email
    log4j.appender.R=org.apache.log4j.ConsoleAppender
    log4j.appender.R.layout=org.apache.log4j.PatternLayout
    log4j.appender.R.layout.ConversionPattern=%d{HH:mm:ss} %-5p [%c{1}]: %m%n
    log4j.appender.email=org.apache.log4j.net.SMTPAppender
    log4j.appender.email.BufferSize=10
    log4j.appender.email.SMTPHost=myhost.com
    [email protected]
    [email protected]
    log4j.appender.email.Subject=Error
    log4j.appender.email.layout=org.apache.log4j.PatternLayout

    Mine is a Maven project, and I have added dependencies for mail.jar, activation.jar, and smtp.jar. But on application server startup I get the error below:

    [ERROR] log4j:ERROR Error occured while sending e-mail notification.
    [ERROR] javax.mail.NoSuchProviderException: smtp
    [ERROR] at javax.mail.Session.getService(Session.java:782)
    [ERROR] at javax.mail.Session.getTransport(Session.java:708)
    [ERROR] at javax.mail.Session.getTransport(Session.java:651)
    [ERROR] at javax.mail.Session.getTransport(Session.java:631)
    [ERROR] at javax.mail.Session.getTransport(Session.java:686)
    [ERROR] at javax.mail.Transport.send0(Transport.java:166)

    Am I missing anything here? What is the root cause of the error? Is it because of an incorrect SMTP host name, or because of missing/conflicting dependencies?

    Read the article

  • Javassist failure in hibernate: invalid constant type: 60

    - by Kaleb Pederson
    I'm creating a CLI tool to manage an existing application. Both the application and the tests build fine and run fine, but despite that I receive a Javassist failure when running my CLI tool from within the jar:

    INFO: Bytecode provider name : javassist
    ...
    INFO: Hibernate EntityManager 3.5.1-Final
    Exception in thread "main" javax.persistence.PersistenceException: Unable to configure EntityManagerFactory
        at org.hibernate.ejb.Ejb3Configuration.configure(Ejb3Configuration.java:371)
        at org.hibernate.ejb.HibernatePersistence.createEntityManagerFactory(HibernatePersistence.java:55)
        at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:48)
        at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:32)
        ...
        at com.sophware.flexipol.admin.AdminTool.<init>(AdminTool.java:40)
        at com.sophware.flexipol.admin.AdminTool.main(AdminTool.java:69)
    Caused by: java.lang.RuntimeException: Error while reading file:flexipol-jar-with-dependencies.jar
        at org.hibernate.ejb.packaging.NativeScanner.getClassesInJar(NativeScanner.java:131)
        at org.hibernate.ejb.Ejb3Configuration.addScannedEntries(Ejb3Configuration.java:467)
        at org.hibernate.ejb.Ejb3Configuration.addMetadataFromScan(Ejb3Configuration.java:457)
        at org.hibernate.ejb.Ejb3Configuration.configure(Ejb3Configuration.java:347)
        ... 11 more
    Caused by: java.io.IOException: invalid constant type: 60
        at javassist.bytecode.ConstPool.readOne(ConstPool.java:1027)
        at javassist.bytecode.ConstPool.read(ConstPool.java:970)
        at javassist.bytecode.ConstPool.<init>(ConstPool.java:127)
        at javassist.bytecode.ClassFile.read(ClassFile.java:693)
        at javassist.bytecode.ClassFile.<init>(ClassFile.java:85)
        at org.hibernate.ejb.packaging.AbstractJarVisitor.checkAnnotationMatching(AbstractJarVisitor.java:243)
        at org.hibernate.ejb.packaging.AbstractJarVisitor.executeJavaElementFilter(AbstractJarVisitor.java:209)
        at org.hibernate.ejb.packaging.AbstractJarVisitor.addElement(AbstractJarVisitor.java:170)
        at org.hibernate.ejb.packaging.FileZippedJarVisitor.doProcessElements(FileZippedJarVisitor.java:119)
        at org.hibernate.ejb.packaging.AbstractJarVisitor.getMatchingEntries(AbstractJarVisitor.java:146)
        at org.hibernate.ejb.packaging.NativeScanner.getClassesInJar(NativeScanner.java:128)
        ... 14 more

    Since I know the jar is fine, as the unit and integration tests run against it, I thought it might be a problem with Javassist, so I tried cglib. The bytecode provider then shows as cglib, but I still get the exact same stack trace, with Javassist present in it. cglib is definitely in the classpath:

    $ unzip -l flexipol-jar-with-dependencies.jar | grep cglib | wc -l
    383

    I've tried with both Hibernate 3.4 and 3.5 and get the exact same error. Is this a problem with Javassist?

    Read the article

  • Jackson object mapping - map incoming JSON field to protected property in base class

    - by Pete
    We use Jersey/Jackson for our REST application. Incoming JSON strings get mapped to @Entity objects in the backend by Jackson, to be persisted. The problem arises from the base class that we use for all entities. It has a protected id property, which we want to exchange via REST as well, so that when we send an object that has dependencies, Hibernate will automatically fetch those dependencies by their ids. However, Jackson does not access the setter, even if we override it in the subclass to be public. We also tried using @JsonSetter, but to no avail. Probably Jackson just looks at the base class, sees the ID is not accessible, and skips setting it...

    @MappedSuperclass
    public abstract class AbstractPersistable<PK extends Serializable> implements Persistable<PK> {

        @Id
        @GeneratedValue(strategy = GenerationType.AUTO)
        private PK id;

        public PK getId() {
            return id;
        }

        protected void setId(final PK id) {
            this.id = id;
        }
    }

    Subclasses:

    public class A extends AbstractPersistable<Long> {
        private String name;
    }

    public class B extends AbstractPersistable<Long> {
        private A a;
        private int value;

        // getter, setter

        // make base class setter accessible
        @Override
        @JsonSetter("id")
        public void setId(Long id) {
            super.setId(id);
        }
    }

    Now, if there are some As in our database and we want to create a new B via the REST resource:

    @POST
    @Consumes(MediaType.APPLICATION_JSON)
    @Produces(MediaType.APPLICATION_JSON)
    @Transactional
    public Response create(B b) {
        if (b.getA().getId() == null) cry();
    }

    with a JSON string like this: {"a":{"id":"1","name":"foo"},"value":"123"}. The incoming B will have the A reference, but without an ID.

    Is there any way to tell Jackson to either ignore the base class setter or to use the subclass setter instead? I've just found out about @JsonTypeInfo, but I'm not sure whether this is what I need or how to use it. Thanks for any help!

    Read the article

  • Unable to import Eclipse project to Android Studio

    - by Binoy Babu
    Whenever I try to import my Eclipse project into Android Studio, I get the following error:

    You are using an old, unsupported version of Gradle. Please use version 1.8 or greater.
    Please point to a supported Gradle version in the project's Gradle settings or in the project's Gradle wrapper (if applicable.)
    Consult IDE log for more details (Help | Show Log)

    I'm using Android Studio 0.3 and Ubuntu. I also tried it on a Windows 8 box with a fresh install, but I get the same error. I'm using the default Gradle wrapper, and I tried checking and unchecking the auto-import option. Is this a bug? How can I get around it? How do I update Gradle to 1.8, or check the current Gradle version? My build.gradle is given below.

    buildscript {
        repositories {
            mavenCentral()
        }
        dependencies {
            classpath 'com.android.tools.build:gradle:0.6.3' // I also tried using 0.6.1 and 0.5.+
        }
    }

    apply plugin: 'android'

    dependencies {
        compile fileTree(dir: 'libs', include: '*.jar')
    }

    android {
        compileSdkVersion 18
        buildToolsVersion "18.0.1"

        sourceSets {
            main {
                manifest.srcFile 'AndroidManifest.xml'
                java.srcDirs = ['src']
                resources.srcDirs = ['src']
                aidl.srcDirs = ['src']
                renderscript.srcDirs = ['src']
                res.srcDirs = ['res']
                assets.srcDirs = ['assets']
            }

            // Move the tests to tests/java, tests/res, etc...
            instrumentTest.setRoot('tests')

            // Move the build types to build-types/<type>
            // For instance, build-types/debug/java, build-types/debug/AndroidManifest.xml, ...
            // This moves them out of the default location under src/<type>/... which would
            // conflict with src/ being used by the main source set.
            // Adding new build types or product flavors should be accompanied
            // by a similar customization.
            debug.setRoot('build-types/debug')
            release.setRoot('build-types/release')
        }
    }

    Read the article

  • MonoDevelop: Addin generation issues

    - by calmcajun
    I am having problems creating the root.mrep file for an updated MonoDevelop addin. The addin was originally built for MonoDevelop 2.2. I am able to build the updated project, but when I use mdrun.exe setup rep-build to generate the .mrep files (main and root), it only generates the main.mrep file, not the root.mrep file. Is there a new way of building the addin that I am missing? I used both setup pack and then setup rep-build. A snippet of the addin.xml file is listed below:

        <Addin id          = "Addin"
               namespace   = "MonoDevelop"
               name        = "Monobjc development"
               author      = "Rob L"
               copyright   = ""
               description = "Addin"
               category    = "Mac Development"
               version     = "1.0">

            <Runtime>
                <Import assembly="MonoDevelop.Sample.dll" />
                <Import assembly="MonoDevelop.MacDev.dll" />
                <Import assembly="Sample.Tools.dll" />
            </Runtime>

            <Dependencies>
                <Addin id="Core" version="2.4" />
                <Addin id="Core.Gui" version="2.4" />
                <Addin id="Projects" version="2.4" />
                <Addin id="Projects.Gui" version="2.4" />
                <Addin id="Ide" version="2.4" />
            </Dependencies>

            <MoreStuffHere />
        </Addin>
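    For reference, the sequence described above would look roughly like the following (a sketch: only the subcommand names come from the post itself, the paths are illustrative, and the exact syntax varies by MonoDevelop version, so check mdrun setup help):

        # pack the built addin into a package file (run from the build output directory)
        mdrun setup pack MonoDevelop.Sample.dll

        # build the repository index files (main.mrep and, normally, root.mrep)
        # for the directory the packages were copied into
        mdrun setup rep-build /path/to/repository

    If rep-build still emits only main.mrep, a common culprit is the packages sitting directly in the repository root rather than in a subdirectory, since the root index is built from the directory structure.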

    Read the article

  • Java/Spring: Why won't Spring use the validator object I have configured?

    - by GMK
    I'm writing a web app with Java and Spring 2.5.6, using annotations for bean validation. I can get the basic annotation validation working fine, and Spring will even call a custom Validator declared with @Validator on the target bean, but it always instantiates a brand-new Validator object to do it. This is bad because the new validator has none of the injected dependencies it needs to run, so it throws a NullPointerException on validate. I need one of two things, and I don't know how to do either:

    1. Convince Spring to use the validator I have already configured.
    2. Convince Spring to honor the @Autowired annotations when it creates the new validator.

    The validator has the @Component annotation, like this:

        @Component
        public class AccessCodeBeanValidator implements Validator {

            @Autowired
            private MessageSource messageSource;
            ...
        }

    Spring finds the validator in the component scan and injects the autowired dependencies, but then ignores it and creates a new one at validation time. The only thing I can do at the moment is add a validator reference into the controller for each validator object and use that reference directly, instead of relying on the bean validation framework to call the validator for me. It looks like this:

        // first validate via the annotations on the bean
        beanValidator.validate(accessCodeBean, result);
        // then validate using the specific validator class
        acbValidator.validate(accessCodeBean, result);
        if (result.hasErrors()) { ...

    If anyone knows how to convince Spring to use the existing validator instead of creating a new one, or how to make it do the autowiring when it creates a new one, I'd love to know.
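    If an upgrade is an option, one common fix (a sketch, not from the original post; note that WebDataBinder.setValidator only exists from Spring 3.0 onward, so it will not work on 2.5.6 as-is) is to inject the Spring-managed validator into the controller and register it in an @InitBinder method, so binding uses the bean that already has its dependencies wired in:

        import javax.validation.Valid;
        import org.springframework.beans.factory.annotation.Autowired;
        import org.springframework.stereotype.Controller;
        import org.springframework.validation.BindingResult;
        import org.springframework.web.bind.WebDataBinder;
        import org.springframework.web.bind.annotation.InitBinder;
        import org.springframework.web.bind.annotation.RequestMapping;
        import org.springframework.web.bind.annotation.RequestMethod;

        @Controller
        public class AccessCodeController {

            // the component-scanned validator, with MessageSource already injected
            @Autowired
            private AccessCodeBeanValidator acbValidator;

            // hand the existing bean to the binder so @Valid uses it
            // instead of a freshly instantiated (un-autowired) copy
            @InitBinder("accessCodeBean")
            protected void initBinder(WebDataBinder binder) {
                binder.setValidator(acbValidator);
            }

            @RequestMapping(method = RequestMethod.POST)
            public String submit(@Valid AccessCodeBean accessCodeBean, BindingResult result) {
                if (result.hasErrors()) {
                    return "accessCodeForm"; // illustrative view name
                }
                return "redirect:/done";
            }
        }

    On 2.5.6 itself there is no such hook, which is why the manual acbValidator.validate(...) call in the controller, as the post already does, is the usual pattern there.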

    Read the article

  • Perl OO frameworks and program design - Moose and Conway's inside-out objects (Class::Std)

    - by Emmel
    This is more of a use-case type of question, but also generic enough to be broadly applicable. In short, I'm working on a module that's more or less a command-line wrapper; OO, naturally. Without going into too many details (unless someone wants them), there isn't a crazy amount of complexity to the system, but it did feel natural to have three or four objects in this framework. Finally, it's an open-source thing I'll put out there, rather than a module worked on by a few developers in the same firm.

    First I implemented the OO using Class::Std, because Perl Best Practices (Conway, 2005) made a good argument for using inside-out objects: full control over which attributes get accessed, proper encapsulation, and so on. His design is also surprisingly simple and clever. I liked it, but then noticed that nobody really uses it; in fact, it seems Conway himself doesn't really recommend it anymore.

    So I moved to everyone's favorite, Moose. It's easy to use, although far more feature-rich than what I need. The big, major downside: it has a slew of module dependencies that force users of my module to download them all.

    What are the recommendations? Inconvenience fellow developers by forcing them to use a possibly obsolete module, or force every user of the module to download Moose and all its dependencies? Is there a third option for a proper Perl OO framework that's popular but neither of these two?
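    On the third-option question, the usual suggestion is Moo, a much lighter Moose-compatible framework with a minimal dependency footprint. The sketch below is illustrative (the class and attribute names are invented for the example); written this way, without type constraints, the same code runs under either framework:

        package CommandWrapper;
        use Moose;   # or: use Moo; -- identical syntax for this subset

        # read-only attribute, must be supplied to the constructor
        has command => (is => 'ro', required => 1);

        # read-write attribute with a default (coderef so each instance gets its own arrayref)
        has args => (is => 'rw', default => sub { [] });

        sub run {
            my ($self) = @_;
            return system($self->command, @{ $self->args });
        }

        1;

    The trade-off: Moose pulls in a large dependency tree but offers the full meta-object protocol; Moo keeps installs light and transparently upgrades to Moose if something else in the process loads it.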

    Read the article

  • Heroku and Refinerycms: Application failed to start ~ attachment_fu problem

    - by John Deely
    Ok, so I'm trying to get Refinerycms working with Heroku, and I'm new at all of this. I've set up an Amazon S3 account and added the keys and IDs to the amazon_s3.yml file. When the app launches on Heroku at gart.heroku.com I get the following error:

        App failed to start
        /disk1/home/slugs/141557_e8490b3_d5eb/mnt/vendor/plugins/attachment_fu/lib/technoweenie/attachment_fu/backends/s3_backend.rb:187:in `read': No such file or directory - /disk1/home/slugs/141557_e8490b3_d5eb/mnt/config/amazon_s3.yml (Errno::ENOENT)
            from /disk1/home/slugs/141557_e8490b3_d5eb/mnt/vendor/plugins/attachment_fu/lib/technoweenie/attachment_fu/backends/s3_backend.rb:187:in `included'
            from /disk1/home/slugs/141557_e8490b3_d5eb/mnt/vendor/plugins/attachment_fu/lib/technoweenie/attachment_fu.rb:123:in `include'
            from /disk1/home/slugs/141557_e8490b3_d5eb/mnt/vendor/plugins/attachment_fu/lib/technoweenie/attachment_fu.rb:123:in `has_attachment'
            from /disk1/home/slugs/141557_e8490b3_d5eb/mnt/app/models/image.rb:13
            from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
            from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
            from /usr/local/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:158:in `require'
            from /usr/local/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:265:in `require_or_load'
            ... 42 levels...
            from /usr/local/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/builder.rb:29:in `instance_eval'
            from /usr/local/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/builder.rb:29:in `initialize'
            from /home/heroku_rack/heroku.ru:1:in `new'
            from /home/heroku_rack/heroku.ru:1

    Line 187 of s3_backend.rb contains:

        @@s3_config = @@s3_config = YAML.load(ERB.new(File.read(@@s3_config_path)).result)[RAILS_ENV].symbolize_keys

    Any help would be great!
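    The Errno::ENOENT says the slug simply does not contain config/amazon_s3.yml, which most often means the file is listed in .gitignore and never got pushed. Since attachment_fu runs the file through ERB before parsing it (visible in the quoted line 187), one approach is to commit a version that pulls the credentials from the environment instead of hard-coding them. A sketch, with illustrative variable names to be set via heroku config:add:

        # config/amazon_s3.yml -- safe to commit; real secrets live in Heroku config vars
        production:
          bucket_name: <%= ENV['S3_BUCKET'] %>
          access_key_id: <%= ENV['S3_KEY'] %>
          secret_access_key: <%= ENV['S3_SECRET'] %>

    Remove the file from .gitignore, commit, set the three config vars, and redeploy; the slug will then contain the file the backend is trying to read.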

    Read the article
