Search Results

Search found 2170 results on 87 pages for 'workflow versioning'.


  • Is Akka a good solution for a concurrent pipeline/workflow problem?

    - by herpylderp
    Disclaimer: I am brand new to Akka and the concept of Actors/Event-Driven Architectures in general. I have to implement a fairly complex problem where users can configure a "concurrent pipeline":
      Pipeline: consists of 1+ Stages; all Stages execute sequentially
      Stage: consists of 1+ Tasks; all Tasks execute in parallel
      Task: essentially a Java Runnable
    As you can see above, a Task is a Runnable that does some unit of work. Tasks are organized into Stages, which execute their Tasks in parallel. Stages are organized into the Pipeline, which executes its Stages sequentially. Hence if a user specifies the following Pipeline:
      CrossTheRoadSafelyPipeline
        Stage 1: Look Left
          Task 1: Turn your head to the left and look for cars
          Task 2: Listen for cars
        Stage 2: Look Right
          Task 1: Turn your head to the right and look for cars
          Task 2: Listen for cars
    then Stage 1 will execute, and then Stage 2 will execute. However, while each Stage is executing, its individual Tasks execute in parallel/at the same time. In reality, Pipelines will become very complicated, with hundreds of Stages and dozens of Tasks per Stage (again, executing at the same time). To implement this Pipeline I can only think of several solutions: ESB/Apache Camel, Guava Event Bus, Java 5 Concurrency, or Actors/Akka. Camel doesn't seem right because its core competency is integration, not synchronization and orchestration across worker threads. Guava is great, but this doesn't really feel like a publisher/subscriber type of problem. And Java 5 Concurrency (ExecutorService, etc.) just feels too low-level and painful. So I ask: is Akka a strong candidate for this type of problem? If so, how? If not, why not, and what is a good candidate?
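    (For reference, here is a minimal sketch of the Stage/Task structure above in the plain "Java 5 Concurrency" style the question mentions; all class names are illustrative only and not from the question. An Akka version would typically replace the blocking wait with a per-Stage actor that aggregates Task completions instead.)

    import java.util.Arrays;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    // Each Stage runs its Tasks in parallel and waits for all of them;
    // the Pipeline then walks its Stages one after another.
    class Stage {
        private final List<Runnable> tasks;
        Stage(List<Runnable> tasks) { this.tasks = tasks; }

        void run() throws InterruptedException {
            ExecutorService pool = Executors.newFixedThreadPool(tasks.size());
            for (Runnable task : tasks) {
                pool.submit(task);                    // Tasks within a Stage run concurrently
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.HOURS); // block until every Task has finished
        }
    }

    class Pipeline {
        private final List<Stage> stages;
        Pipeline(List<Stage> stages) { this.stages = stages; }

        void run() throws InterruptedException {
            for (Stage stage : stages) {              // Stages execute sequentially
                stage.run();
            }
        }
    }

    class CrossTheRoadSafelyDemo {
        public static void main(String[] args) throws InterruptedException {
            Runnable look = () -> System.out.println("looking for cars");
            Runnable listen = () -> System.out.println("listening for cars");
            new Pipeline(Arrays.asList(
                    new Stage(Arrays.asList(look, listen)),  // Stage 1: Look Left
                    new Stage(Arrays.asList(look, listen))   // Stage 2: Look Right
            )).run();
        }
    }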

    Read the article

  • Do you find it a challenge diagnosing issues with creating Requisitions to Purchase Orders Automatically?

    - by LindaJ-Oracle
    Do you find it a challenge diagnosing issues when there are problems with creating Requisitions to Purchase Orders automatically? Well, it has become much easier with the newly enhanced 'CREATEPO Workflow - Data Collection Script' available in Doc ID 1415918.1. Run the diagnostic and the output will include all the necessary information for problem solving, including:
      1) Profile Option Values
      2) Default values for CREATEPO attributes
      3) Requisition header and line information
      4) Document Setup for requisitions
      5) Requisition approval workflow activity, attributes, errors and validation checks
      6) CREATEPO overall workflow activity, attributes, errors and validation checks
      7) CREATEPO requisition processing workflow activity, attributes, errors and validation checks
      8) CREATEPO approval workflow activity, attributes, errors and validation checks
      9) PO_WF_DEBUG messages
      10) Purchase order / Blanket release information
    More importantly, valuable errors and warnings are now provided with links to the solutions! So you can potentially resolve the issue on your own, or if you still need Support's help, proactively run the diagnostic before logging a Service Request and the data collection will be available for the analyst immediately. Add Doc ID 1415918.1 to your favorites today.

    Read the article

  • Which Git-based MIS to track workflow like Trac/Redmine but on console minimalistically?

    - by hhh
    Definitions: MIS = management information system. There are some lists of console-based solutions here and some GUI hacks here. I'm fed up with installing all those dependencies and with GUI things that have no make files, so which console-based MIS would you suggest for a game-development team with a graphics repo, animation repo, code repo, stories repo, etc.? P.S. I do use Git submodules, and the reason for the repo fragmentation is roles and size; certain repos, such as graphics repos, tend to be quite large, so it's better to keep them separate. Perhaps useful to readers interested in this: http://stackoverflow.com/questions/5881578/trac-vs-redmine https://github.com/jchris/sofa

    Read the article

  • Best setup/workflow for a distributed team to integrate DVCS with a huge, fragmented .NET site?

    - by lazfish
    So we have a team with 2 developers and one manager. The dev server sits in a home office and the live server sits in a rack somewhere, handled by the larger part of my company. We have freedom to do as we please, but I want to incorporate Kiln (DVCS) and FogBugz for us, with some standard procedures to make sense of our decisions/designs/goals. Our main product is web-based training through our .NET site with many videos etc., and we also do mobile apps for multiple platforms. Our code base is a 15-year-old fragmented mess. The approach has been rogue .asp/.aspx pages, with some class management implemented in the last 6 years. We still mix our HTML/VB/JS all in the same file when we add a feature/page to our site. We do not separate the business logic from the rest of the code. Wiring anything up in VS for IntelliSense or testing or any other benefit is more frustrating than it is worth, because of having to manually rejigger everything back into one file. How do other teams approach this? I noticed that when I did wire everything up for VS, it wants to make a class for all functions. Do people normally compile DLLs for page-specific functions that won't be reusable? What approaches make sense for getting our practices under control while still being able to fix old anti-patterns and outdated code, and still moving towards a logical structure for future devs to build on?

    Read the article

  • iPhone Internationalization. What is the simplest workflow for me?

    - by dugla
    Hello, I am wising up and getting my internationalization act together. Right off the bat I am a bit swamped by all the docs Apple provides, so I was wondering if someone could sketch a workflow for my situation. Before I begin: I browsed some Apple example code and noticed this NIB file - MainWindow.xib - in the Resources folder. This clearly has something to do with internationalization/localization. Could someone please explain how this is created and where in the workflow it happens? My app is fundamentally an imaging app with a few labels that I currently internationalize programmatically using NSLocalizedString(...). If I set all my labels programmatically and wrap all my strings with NSLocalizedString(...), can I completely ignore the NIB issues? Thanks in advance, Doug

    Read the article

  • Using SVN alone or in small workgroups - workflow approach?

    - by Industrial
    Hi everybody, I have spent some months working on a web application and we're getting close to the production stage. It will soon be time to expand the development group by 1-3 people on this project. I don't have much experience working with SVN, but it's obviously the choice for a big part of the larger companies out there, so I am guessing that the pros of SVN without a doubt outweigh the time spent on commits / check-ins / check-outs etc. The workflow seems to become a bit more complicated with SVN, and even though I have read Version Control with Subversion by O'Reilly Media, I am not sure yet whether it's overkill to use SVN for any reason besides backup when developing alone or in a small (1-3 people) workgroup. How do you do it? What's your workflow with version control while working alone or in small workgroups? Thanks!

    Read the article

  • The Minimalist Approach to Content Governance - Create Phase

    - by Kellsey Ruppel
    Originally posted by John Brunswick. In this installment of our Minimalist Approach to Content Governance we finally get to the fun part of the content creation process! Once the content requester has addressed the items outlined in the Request Phase, it is time to set up and begin the production of content. For this to be done correctly, it is important that the content be assigned appropriate workflow and security information. As in our prior phase, let's take a look at what can be done to streamline this process, as contributors are focused on getting information to their end users as quickly as possible. This often means that details around how to ensure that the materials are properly managed can be overlooked, but fortunately there are some techniques that leverage our content management system's native capabilities to automatically take care of some of the details.
    1. Determine Access
    Why - Even if content is not something that needs to be restricted for security reasons, it is helpful to apply access rights so that the content ends up being visible only to the users it relates to. This will greatly improve the user experience. For instance, if your team is working on a group project, many of your fellow company employees do not need to see the content that is being worked on for that project.
    How - Make use of native content features that allow propagation of security and metadata from parent folders within your content system that have been set up for your particular effort. This makes it painless to enforce security, as well as metadata policies, for even the most unorganized users. The default settings at a parent level can be set once the content creation request has been accepted and a location in the content management system is assigned for your specific project.
    Impact - Users can find information with less effort, as they will only be exposed to what they need for their work, and they can leverage advanced search features to take advantage of metadata assigned to content. The combination of default security and metadata will also help in running reports against the content in the Manage and Retire stages that we will discuss in the next 2 posts.
    2. Assign Workflow (optional depending on nature of content)
    Why - Every case for workflow is going to be a bit different, but it generally involves ensuring that content conforms to management, legal and/or editorial requirements.
    How - Oracle's Universal Content Management offers two ways of helping to workflow content without much effort. Workflow can be applied to content based on Criteria acting on metadata, or explicitly assigned to content with a Basic workflow.
    Impact - Any content that needs additional attention before release is addressed, allowing users to comment and version until a suitable result is reached.
    By using inheritance from parent folders within the content management system, content can automatically be given the right security, metadata and workflow information for a particular project's content. This relieves management teams and content contributors of the burden of doing this for every piece of content. We will cover more about the management phase within the content lifecycle in our next installment.

    Read the article

  • Method to integrate Powershell scripts with non-Windows workflow?

    - by Matt Simmons
    I love the smell of new machines in the morning. I'm automating a machine creation workflow that involves several separate systems across my infrastructure, some of which involve 15-year-old Perl scripts on Solaris hosts, PXE-booting Linux systems, and PowerShell on Windows Server 2008. I can script each of the individual parts, and integrating the Linux and Unix automation is fairly straightforward, but I'm at a loss as to how to reliably tie the PowerShell scripts to the rest of the processes. I would prefer if the process began on a Linux host, since I imagine that it will end up as a web application living on an Apache server, but if it needs to begin on Windows, I am hesitantly okay with that. I would ideally like something along the lines of psexec for Linux to run against Windows, but the answer in that direction appears to be Cygwin, and as much as I appreciate all of the hard work that they put in, it has never felt right, if you know what I mean. It's great for a desktop and gives a lot of functionality, but I feel like Windows servers should be treated like Windows servers and not bastardized Unix machines (which, incidentally, is my argument against OSX servers, too, and they're actually Unix). Anyway, I don't want to go with Cygwin unless that's the last and only option. So I guess what I'm asking is if there is a way to execute jobs on Windows machines from Linux. Without Cygwin. I'm open to ideas and suggestions, including "Look idiot, everyone uses Cygwin, so suck it up and deal with it". Thanks in advance!

    Read the article

  • Replacement for public folder workflow; I'm confused as to how SharePoint does it.

    - by RodH257
    For years Microsoft has been slowly phasing out public folders; perhaps Exchange 2010 really is the LAST TIME they'll be shipped... I've heard SharePoint is the replacement, but I don't fully understand it. Can someone give me an idea of how to replace this workflow? In our office we have projects, and they have a project number, i.e. 10353. Each job has a public folder, organized in a hierarchy like Projects > Year > Folder > Subfolders. The main subfolder we use is for general correspondence. When an email is received that relates to a project, it is dragged and dropped (or moved via right-click) to a public folder. Adding public folder favourites for each user helps with this. When an email is sent, we have a custom email form, which is the default email form but with a project number field next to the subject line. When you enter the job number in there, it carbon-copies our filing system, which reads the job number and puts the email in the public folder for you. If you need to refer to emails, you go to the public folder and find them there. This isn't the best with large jobs, but it works OK. Now, I have limited experience with SharePoint (well, WSS); we've used it to do some neat discussion boards/polls etc. as an intranet site, but I haven't seen much of its integration with Outlook. The great thing about our solution is how tightly it integrates with Outlook, which is exactly where the emails are. If you want to forward an old email, you go to the public folder and forward it, simple. Any solution that replaces it should be at least as easy as this. Improvements we would like would be better searching of emails, better support in Exchange (i.e. future versions), and also, custom forms in Outlook are being phased out (the VBA kind), so avoiding these would be good. Does SharePoint do this? Or what solutions do this kind of thing?

    Read the article

  • What is your workflow when designing HTML/CSS layouts?

    - by DMin
    I have been working with PHP/MySQL as a hobby for close to a couple of years now, I have been working with Photoshop for a very long time, and I know CSS & HTML well enough to write them without any reference, so I would not consider myself someone who's very new at this. I have recently started developing websites professionally (I'm the only person working on the project). I have seen the power of Joomla and how you can make a website ready for your customer in a matter of hours (if not minutes). I find it very hard to make layouts that remotely look like the themes on Joomla. I find making even simple layouts a very cumbersome process that takes a lot of time to get a good enough output. I have a feeling I may not be using the right tools or workflow for the job. What I wanted to find out was, as part of the industry: How do you make your website when you do it from scratch? What are the tools that you use? What is your workflow? Just noting a few things I know already, for your reference (you can skip this if you like): I have seen the export-for-web feature in Photoshop that exports CSS, but (as far as I know) it exports only absolutely positioned webpages, so they need to be beaten and fixed if you want to use them, for example, for Joomla. I have used the SiteGrinder plugin for Photoshop that exports HTML/CSS. It looks promising but I haven't tried it extensively. One of the tools that saves loads of time is Firebug. This makes it easy to edit HTML and CSS on the fly and get the page looking exactly as you want it. Recently I stumbled upon Fireworks, but haven't explored it much. Thanks! :)

    Read the article

  • WF4 – Guess the number game!

    - by MarkPearl
    I posted yesterday about how really good WF4 was looking. Today I thought I would show some real basics that I was able to figure out. This will be a simple example: I am going to make a flowchart workflow which will prompt the user to guess a number until they guess the right number. Let's begin…
    Make a new project and make it a Workflow Console Application. Then select the Workflow file and drag a FlowChart (2) to point 3. This will now show a green start circle in the designer form. We are going to work with primitives to start with. We are now going to drag a few objects onto the workflow: we drag the WriteLine, Assign & Decision items onto the designer. Once they are dragged onto the designer we will want to link them up. The order that they are linked in is critical, since this will determine the order of the solution. In this case, we want the system to first ask "Guess a number", then to wait for the user to input some code, and then to display "You got it" if they got it right, and "Try again" if they got it wrong. So we now link the arrows to the objects. This is done by moving the mouse pointer over the start object and clicking on one of the toggles, then dragging it to the next object and releasing the button over one of its toggles. This will place an arrow from the source object to the target object.
    Okay… pretty simple stuff – now we just need these primitive objects to do stuff. Let's start with the WriteLine primitive. We place the text in inverted commas in the Text field. Because this field accepts any valid VB expression, we could have put variables etc. in there if we wanted to. The next thing we want to do is allow the user to input a number. This brings up an interesting problem: if a user were to type in a number, there would need to be some way to declare a variable to hold that value for the life of the workflow. We can achieve this by declaring a variable. To declare a variable, move your cursor over the Variables tab at the bottom of the workflow, then type the name of the new variable in the "Create Variable" field and set it as shown in the image above. Now that we have a variable, we want to call the Console.ReadLine method and assign the inputted value from the console to that variable. The code that cannot be seen is actually this – Convert.ToInt32(Console.ReadLine())
    We now have a workflow that first prompts the user for a number, then allows the user to type in a number. We are almost done; we just need to make the system react to the value inputted. There are a few ways we could do this; I am going to use the Decision item. So select the Decision object on the designer and then view its properties (F4 for me), and in the Condition field place a condition. For simplicity's sake I have decided that if the user guesses 10, they will have guessed the number. This is now the completed workflow. It's really easy to understand and shows some really powerful principles for business applications. You can run the application and see what it does. Imagine writing business solutions that do not worry about the exact flow of objects, but simply allow a business analyst or someone to configure the solution to work exactly as the business rules would dictate. And if the rules changed six months later, all they would need to do is re-drag some of the flows. Now I do not know if WF4 will allow for this, but it feels like it is a step in the right direction.

    Read the article

  • General guidelines / workflow to convert or transfer video "professionally"?

    - by cloneman
    I'm an IT "professional" who sometimes has to deal with small video conversion / video cutting projects, and I'd like to learn "the right way" to do this. Every time I search Google, there's always a disaster for weird, low-maturity trialware, or random forums threads from 3-4 years ago indicating various antiquated method to do it. The big question is the following: What are the "general" guidelines and tools to transcode video into some efficient (lossless?) intermediary, for editing purposes, for the purpose of eventually re-encoding it after? It seems to me like even the simplest of formats and tasks are a disaster of endless trial & error, or expertise only known by hardened experts who have a swiss army kife of weird conversion tools that they use, almost as if mounting an attack against the project. Here are a few cases in point: Simple VOB files extracted from DVD footage can't be imported into Adobe Premiere directly. Virtualdub is an old software people keep recommending but doesn't seem to support newer formats. I don't even know how to tell with certainty which codecs a video has, and weather the image is interlaced or not, and what resolution and codecs I'm dealing with. Problems: Choosing a wrong interlace option which diminishes quality Choosing a wrong pixel aspect ratio (stretches the image) Choosing a wrong "project type" in Premiere causing footage to require scaling Being forced to use some weird program that will have any number of negative effects What I'm looking for: Books or "Real knowledge" on format conversions, recognized tools, etc. that aren't some random forum guides on how to deal with video formats. Workflow guidelines on identifying a format going from one format to another without problems as mentioned above. Documentation on what programs like Adobe Premiere can and can't do with regards to formats, so that I don't use a wrench as a hammer. TL;DR How should you convert or "prepare" a video file to ensure it will be supported by Premiere for editing? Is premiere a suitable program to handle cropping, encoding, or should other tools be used for this, when making a video montage from a variety of source formats? What are some good books to read that specifically deal with converting videos that use any number of codecs?

    Read the article

  • Where do you start your design - code, UI or workflow?

    - by Mmarquee
    Hi, I was discussing this at work and was wondering where people start their designs. We tend to start with designing code to solve the problem presented to us, but that is probably because all of us are (or were) programmers. I was wondering where other people and organisations start their design. Do they start with solving the problem as a coding problem, sit down and design what UI to use, or map out the data or workflow? Thanks

    Read the article

  • How to catch 'exceptions' for out of order execution in Workflow Foundation 4?

    - by Alex Key
    Hi, I am attempting to model a workflow using a "WCF Workflow Service" in .NET / VS 2010 that needs to handle out-of-order execution gracefully (but not allow it - if that makes sense!?). For example, I have two receive activities inside a FlowChart, one called Initialize and the other called GetValue. In most cases Initialize should be called first and GetValue after (as modeled in the flow chart). However, if GetValue is executed before Initialize, I do not want to return a generic "out of order" exception (although when I look at the WCF test client, I can't actually see an exception), but instead a custom exception saying something like "you must initialize first". In theory I could model this with lots of parallel activities and conditions to check whether the service is Initialized / Running / Terminated etc. But the business process I am modelling is very, very similar to a state machine... except it must handle people executing things in the wrong order. Ideally I would like to catch the "out of order" exception (though I don't think it's really an exception as such), check the 'exception' to see which function was attempted, and then handle it. I have done some research around enabling AllowBufferedReceive. However, I don't want to be able to execute out of order (I don't think), but instead give a detailed response if it does happen. I've looked at the new beta state machine template for WF 4, but I'm not sure if it does what I'm after. I'm not sure if I have the wrong end of the stick, so any help would be greatly appreciated. [EDIT] To help clarify... Sorry, it's a tricky one to explain. The standard I am trying to implement (the e-learning standard SCORM RTE) is structured like a state machine, i.e. certain functions can only be executed in certain states. However, the standard specifies that if the calling client tries to execute a function that it is not meant to, then a warning should be issued... for example "you cannot use GetValue(), because you have not yet Initialized". Ideally I'd like to structure the workflow as the theoretical state machine and not have to use multiple if/elses to handle all the scenarios where something could be executed out of order. I'd like to catch an out-of-order exception (but I don't think there is such an exception, as it's not in the debugger) and rethrow it.
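    (Stripped of the WF4 machinery, the behaviour described above amounts to a state guard in front of each operation that reports the standard's descriptive warning rather than a generic out-of-order fault. A minimal illustration of that pattern, sketched in plain Java purely to show the concept; this is not WF4 or SCORM API code, and all names are made up:)

    // Guard each operation with a state check and reply with a descriptive
    // message instead of letting a generic out-of-order fault escape.
    class ScormLikeSession {
        private enum State { CREATED, INITIALIZED, TERMINATED }
        private State state = State.CREATED;

        void initialize() {
            state = State.INITIALIZED;
        }

        String getValue(String key) {
            if (state != State.INITIALIZED) {
                // the warning the caller should see, not a transport-level fault
                throw new IllegalStateException("you cannot use GetValue(), because you have not yet Initialized");
            }
            return "";   // placeholder for the real lookup
        }
    }

    However the workflow itself is modelled, the key point of the pattern is that the guard replies with the standard's warning text instead of letting the runtime raise its own out-of-order error.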

    Read the article

  • What Java/Scala or .NET web frameworks support modify source code and instantly run workflow i.e. wi

    - by Alexey
    As far as I can see, the key advantage of dynamic languages like Ruby or Python over Java/Scala/C# etc. is "hot" applying of source-code changes to the running application. What are the frameworks for the JVM or .NET that support the same workflow - applying changes to configuration and source code on the fly? Can they also watch changes to custom configurations and notify the application? Note: frameworks for dynamic languages on the JVM/.NET, like Grails or Compojure, are out of scope here.

    Read the article

  • UppercuT v1.0 and 1.1 – Linux (Mono), Multi-targeting, SemVer, Nitriq and Obfuscation, oh my!

    - by Robz / Fervent Coder
    Recently UppercuT (UC) quietly released version 1 (in August). I'm pretty happy with where we are, although I think it's a few months later than I originally planned. I'm glad I held it back; it gave me some more time to think about some things a little more, and also the opportunity to receive a patch for running builds with UC on Linux. We also released v1.1 very recently (December).
    UppercuT v1
    Builds On Linux
    Perhaps the most significant change to UC in going v1 is that it now supports builds on Linux using Mono! This is thanks mostly to Svein Ackenhausen for the patches and for working with me on getting it all working while not breaking the Windows builds! This means you can use Mono on Windows or Linux. Notice the shell files to execute with Linux that come as part of UC now.
    Multi-Targeting
    Perhaps one of the hardest things to do that requires an automated build is multi-targeting. At v1 this is early, and possibly prone to some issues, but available. We believe in making everything stupid simple, so it's as simple as adding a comma to the microsoft.framework property, i.e. "net-3.5, net-4.0", to suddenly produce both framework builds. When you build, this is what you get (if you meet each framework's requirements). At this time you have to let UC override the build location (as it does by default) or this will not work.
    Semantic Versioning
    By now many of you have been using UppercuT for a while and have watched how we have done versioning. Many of you who use Git already know we put the revision hash in the informational/product version as the last octet. At v1, UppercuT has adopted the semantic versioning scheme. What does that mean? This is a short read, but a good one: http://SemVer.org SemVer (Semantic Versioning) is really using versioning for what it was meant for. You have three octets: Major.Minor.Patch, as in 1.1.0. UC will use three different versioning concepts, one for the assembly version, one for the file version, and one for the product version.
    All versions - The first three octets of the version are owned by SemVer. Major.Minor.Patch i.e.: 1.1.0
    Assembly Version - The assembly version follows SemVer much more closely. The last digit is always 0. Major.Minor.Patch.0 i.e.: 1.1.0.0
    File Version - The file version occupies the build number as the last digit. Major.Minor.Patch.Build i.e.: 1.1.0.2650
    Product/Informational Version - The last octet of your product/informational version is the source control revision/hash. Major.Minor.Patch.RevisionOrHash i.e. (TFS/SVN): 1.1.0.235 i.e. (Git/HG): 1.1.0.a45ace4346adef0
    SemVer is not on by default; the passive versioning scheme is still in effect. Notice that version.use_semanticversioning has been added to the UppercuT.config file (and version.patch in support of the third octet).
    Gems Support
    Gems support was added at v1. This will probably be deprecated at some point once there is an announced sunset for Nu v1. Application gems may keep it around, since there is no alternative for that yet (CoApp would be a possible replacement).
    Nitriq Support
    Nitriq is a code analysis tool like NDepend. It's built by Mr. Jon von Gillern. It uses the LINQ query language, so you can use a familiar idiom when analyzing your code base. It's a pretty awesome tool that has a free version for those looking to do code analysis! To use Nitriq with UC, you are going to need the console edition. To take advantage of Nitriq, you just need to update the location of Nitriq in the config, then add the Nitriq project files at the root of your source. Please refer to the Nitriq documentation on how these are created.
    UppercuT v1.1
    Obfuscation
    One thing I started looking into was an easy way to obfuscate my code. I came across EazFuscator, which is both free and awesome. Plus, the GUI for it is super simple to use. How do you make obfuscation even easier? Make it a convention and a configurable property in the UC config file! And the code gets obfuscated!
    Closing
    Definitely get out and look at the new release. It contains lots of chocolaty (sp?) goodness. And remember, the upgrade path is almost as simple as drag and drop!

    Read the article

  • Brainstorming: Weird JPA problem, possibly classpath or jar versioning problem???

    - by Vinnie
    I'm seeing a weird error message and am looking for some ideas as to what the problem could be. I'm sort of new to using JPA. I have an application where I'm using Spring's Entity Manager Factory (LocalContainerEntityManagerFactoryBean), EclipseLink as my ORM provider, connected to a MySQL DB and built with Maven. I'm not sure if any of this matters..... When I deploy this application to Glassfish, the application works as expected. The problem is, I've created a set of stand-alone unit tests to run outside of Glassfish that aren't working correctly. I get the following error (I've edited the class names a little): com.xyz.abc.services.persistence.entity.MyEntity cannot be cast to com.xyz.abc.services.persistence.entity.MyEntity The object cannot be cast to a class of the same type? How can that be? Here's a snippet of the code that is in error:
    Query q = entityManager.createNamedQuery("MyEntity.findAll");
    List entityObjects = q.getResultList();
    for (Object entityObject: entityObjects) {
        com.xyz.abc.services.persistence.entity.MyEntity entity = (com.xyz.abc.services.persistence.entity.MyEntity) entityObject;
    Previously, I had this code that produced the same error:
    CriteriaQuery cq = entityManager.getCriteriaBuilder().createQuery();
    cq.select(cq.from(com.xyz.abc.services.persistence.entity.MyEntity.class));
    List entityObjects = entityManager.createQuery(cq).getResultList();
    for (Object entityObject: entityObjects) {
        com.xyz.abc.services.persistence.entity.MyEntity entity = (com.xyz.abc.services.persistence.entity.MyEntity) entityObject;
    The code in question is the same as what I have deployed to the server. Here's the innermost exception, if it helps:
    Caused by: java.lang.ClassCastException: com.xyz.abc.services.persistence.entity.MyEntity cannot be cast to com.xyz.abc.services.persistence.entity.MyEntity
        at com.xyz.abc.services.persistence.entity.factory.MyEntityFactory.createBeans(MyEntityFactory.java:47)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:115)
        ... 37 more
    I'm guessing that there's some jar I'm using in Glassfish that is different from the ones I'm using in test. I've looked at all the jars I have listed as "provided" and am pretty sure they are all the same ones from Glassfish. Let me know if you've seen this weird issue before, or have any ideas for correcting it.
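    (A ClassCastException between two classes with the same name usually means the class has been loaded twice, by different classloaders or from duplicate jars. A hedged diagnostic sketch, reusing the entity name from the question, is to print where each copy comes from; note that getCodeSource() can return null for classes loaded by the bootstrap loader.)

    // Call explain() with one of the objects returned by q.getResultList() when the cast fails.
    class ClassMismatchDebug {
        static void explain(Object entityObject) {
            Class<?> runtimeType = entityObject.getClass();   // class as loaded by the persistence provider
            Class<?> expectedType = com.xyz.abc.services.persistence.entity.MyEntity.class; // class compiled against
            System.out.println("runtime classloader:  " + runtimeType.getClassLoader());
            System.out.println("expected classloader: " + expectedType.getClassLoader());
            System.out.println("runtime code source:  " + runtimeType.getProtectionDomain().getCodeSource().getLocation());
            System.out.println("expected code source: " + expectedType.getProtectionDomain().getCodeSource().getLocation());
            // Identical class names but different loaders or jar locations confirm the entity
            // class is on the classpath twice (e.g. once "provided", once pulled in by the test setup).
        }
    }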

    Read the article

  • Libtool versioning of a library that depends on other libraries.

    - by Artyom
    Hello, I have a framework that uses Boost and CgiCC in the core application and in its interface. How should I version the library binary interface (a.k.a. libtool -version-info)? I have no problem tracking the changes in the library itself when I make various changes, as it is clear to me how I should version those. But... both the Boost and CgiCC libraries do not provide any backward-compatible API/ABI, and my library may be linked with quite arbitrary versions of Boost and CgiCC, so I can't make any promises about the interfaces. So I can't really specify -version-info, because even the same library compiled against different versions of Boost and CgiCC would not be compatible. So... what should I do? How should I version the library? I know that I should not depend on the Boost and CgiCC interfaces in the first place, but this is what I have got so far for the existing stable version. This issue is addressed in the next major release, but I still have to, and want to, maintain the current release as it is very valuable.

    Read the article

  • g++ symbol versioning. Set it to GCC_3.0 using version 4 of g++

    - by Ismael
    Hi all, I need to implement a Java class which uses JNI to control a fiscal printer on Xubuntu 8.10 with sun-java6-jdk installed. The structure is the following: EpsonDriver.java loads libEpson.so, and libEpson is linked dynamically with EpsonFiscalProtocol.so (provided by Epson, no source available) and pthread. I use javah to generate the header file, and the code compiles. Then I put libEpson.so in $JAVA_HOME/jre/lib/i386, and EpsonDriver.java uses a static initializer: System.loadLibrary("libEpson"). That part works; however, when I try to use any of the methods I get an UnsatisfiedLinkError exception. Some time ago, a coworker did a version that works, and using objdump -Dslx I got the following:
    Program Header:
        LOAD off 0x00000000 vaddr 0x00000000 paddr 0x00000000 align 2**12 filesz 0x0000ccc4 memsz 0x0000ccc4 flags r-x
        LOAD off 0x0000d000 vaddr 0x0000d000 paddr 0x0000d000 align 2**12 filesz 0x00000250 memsz 0x00044a5c flags rw-
        DYNAMIC off 0x0000d014 vaddr 0x0000d014 paddr 0x0000d014 align 2**2 filesz 0x000000f0 memsz 0x000000f0 flags rw-
        NOTE off 0x000000d4 vaddr 0x000000d4 paddr 0x000000d4 align 2**2 filesz 0x00000024 memsz 0x00000024 flags r--
        STACK off 0x00000000 vaddr 0x00000000 paddr 0x00000000 align 2**2 filesz 0x00000000 memsz 0x00000000 flags rw-
    Dynamic Section:
        NEEDED EpsonFiscalProtocol.so  NEEDED libpthread.so.0  NEEDED libstdc++.so.6  NEEDED libm.so.6  NEEDED libc.so.6
        SONAME libcom_tichile_jpos_EpsonSerialDriver.so
        INIT 0x00007254  FINI 0x0000ba08  GNU_HASH 0x000000f8  STRTAB 0x00001f50  SYMTAB 0x00000ae0  STRSZ 0x00002384  SYMENT 0x00000010
        PLTGOT 0x0000d108  PLTRELSZ 0x00000008  PLTREL 0x00000011  JMPREL 0x0000724c  REL 0x000045c4  RELSZ 0x00002c88  RELENT 0x00000008
        TEXTREL 0x00000000  VERNEED 0x00004564  VERNEEDNUM 0x00000002  VERSYM 0x000042d4  RELCOUNT 0x000000ac
    Version References:
        required from libstdc++.so.6: 0x056bafd3 0x00 05 CXXABI_1.3  0x08922974 0x00 04 GLIBCXX_3.4
        required from libc.so.6: 0x0b792650 0x00 03 GCC_3.0  0x0d696910 0x00 02 GLIBC_2.0
    In the recently compiled file I get:
    Program Header:
        LOAD off 0x00000000 vaddr 0x00000000 paddr 0x00000000 align 2**12 filesz 0x00005300 memsz 0x00005300 flags r-x
        LOAD off 0x00005300 vaddr 0x00006300 paddr 0x00006300 align 2**12 filesz 0x00000274 memsz 0x00010314 flags rw-
        DYNAMIC off 0x00005314 vaddr 0x00006314 paddr 0x00006314 align 2**2 filesz 0x000000e0 memsz 0x000000e0 flags rw-
        EH_FRAME off 0x00004a00 vaddr 0x00004a00 paddr 0x00004a00 align 2**2 filesz 0x00000154 memsz 0x00000154 flags r--
    Dynamic Section:
        NEEDED libstdc++.so.5  NEEDED libm.so.6  NEEDED libgcc_s.so.1  NEEDED libc.so.6
        SONAME EpsonFiscalProtocol.so
        INIT 0x00001cb4  FINI 0x00004994  HASH 0x000000b4  STRTAB 0x00000da4  SYMTAB 0x000004f4  STRSZ 0x00000acf  SYMENT 0x00000010
        PLTGOT 0x0000640c  PLTRELSZ 0x00000270  PLTREL 0x00000011  JMPREL 0x00001a44  REL 0x000019dc  RELSZ 0x00000068  RELENT 0x00000008
        VERNEED 0x0000198c  VERNEEDNUM 0x00000002  VERSYM 0x00001874  RELCOUNT 0x00000004
    Version References:
        required from libstdc++.so.5: 0x056bafd2 0x00 04 CXXABI_1.2
        required from libc.so.6: 0x09691f73 0x00 03 GLIBC_2.1.3  0x0d696910 0x00 02 GLIBC_2.0
    So I suspect the main difference is the GCC_3.0 symbol. I compile libcom_tichile_EpsonSerialDriver.so with the following command (from memory, as I'm not at work right now): g++ -Wl,-soname=.... -shared -I/*jni libraries*/ -o libcom_tichile_jpos_EpsonSerialDriver -lEpsonFiscalProtocol -lpthread Is there any way to tell g++ to use that symbol version? Or any idea how to make it work?
    EDIT: I have another non-working version with the following dump:
    Program Header:
        LOAD off 0x00000000 vaddr 0x00000000 paddr 0x00000000 align 2**12 filesz 0x0000bf68 memsz 0x0000bf68 flags r-x
        LOAD off 0x0000cc0c vaddr 0x0000cc0c paddr 0x0000cc0c align 2**12 filesz 0x000005e8 memsz 0x00044df0 flags rw-
        DYNAMIC off 0x0000cc20 vaddr 0x0000cc20 paddr 0x0000cc20 align 2**2 filesz 0x000000f8 memsz 0x000000f8 flags rw-
        EH_FRAME off 0x0000b310 vaddr 0x0000b310 paddr 0x0000b310 align 2**2 filesz 0x000002bc memsz 0x000002bc flags r--
        STACK off 0x00000000 vaddr 0x00000000 paddr 0x00000000 align 2**2 filesz 0x00000000 memsz 0x00000000 flags rw-
        RELRO off 0x0000cc0c vaddr 0x0000cc0c paddr 0x0000cc0c align 2**0 filesz 0x000003f4 memsz 0x000003f4 flags r--
    Dynamic Section:
        NEEDED EpsonFiscalProtocol.so  NEEDED libpthread.so.0  NEEDED libstdc++.so.6  NEEDED libm.so.6  NEEDED libgcc_s.so.1  NEEDED libc.so.6
        SONAME libcom_tichile_jpos_EpsonSerialDriver.so
        INIT 0x000055d8  FINI 0x0000a968  HASH 0x000000f4  GNU_HASH 0x00000a30  STRTAB 0x00002870  SYMTAB 0x00001410  STRSZ 0x00002339  SYMENT 0x00000010
        PLTGOT 0x0000cff4  PLTRELSZ 0x00000168  PLTREL 0x00000011  JMPREL 0x00005470  REL 0x00004ea8  RELSZ 0x000005c8  RELENT 0x00000008
        VERNEED 0x00004e38  VERNEEDNUM 0x00000002  VERSYM 0x00004baa  RELCOUNT 0x00000001
    Version References:
        required from libstdc++.so.6: 0x056bafd3 0x00 05 CXXABI_1.3  0x08922974 0x00 03 GLIBCXX_3.4
        required from libc.so.6: 0x09691f73 0x00 06 GLIBC_2.1.3  0x0d696914 0x00 04 GLIBC_2.4  0x0d696910 0x00 02 GLIBC_2.0
    Now I think the main difference is in the GCC_3.0 symbol/ABI.
    EDIT: Luckily, a coworker found a way to talk to the printer using Java.

    Read the article

  • Joining different models in Django

    - by Andrew Roberts
    Let's say I have this data model:
    class Workflow(models.Model):
        ...
    class Command(models.Model):
        workflow = models.ForeignKey(Workflow)
        ...
    class Job(models.Model):
        command = models.ForeignKey(Command)
        ...
    Suppose somewhere I want to loop through all the Workflow objects, and for each Workflow I want to loop through its Commands, and for each Command I want to loop through each Job. Is there a way to structure this as a single query? That is, I'd like Workflow.objects.all() to join in its dependent models, so I get a collection that has the dependent objects already cached, so that workflows[0].command_set.get() doesn't produce an additional query. Is this possible?

    Read the article

  • Incremental Compilation in Eclipse: ASTNodes and SVN versioning

    - by Alex
    Hi there, I am building up some statistics after analyzing the source code in Eclipse, but the overall process is too slow because I rebuild my model from scratch after each compilation. I am looking for a way to get only the changed parts of the code (as ASTNodes) and to rebuild just that part of my model. I suppose that even the changed compilation units, rather than the exact code elements, would be enough after the user compiles, and that would still be a nice optimization. I am sure Eclipse is capable of knowing which code elements have changed (and even of knowing their semantics), because when I use the Subclipse plugin my changes are ordered by code element (an import, a method, a variable declaration, etc.). Well... at least that plugin is capable of knowing that info. Thanks in advance
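    (One way to get at "only the changed parts" is the JDT Java model delta mechanism. A rough sketch follows; it has to run inside an Eclipse plug-in with org.eclipse.jdt.core on the classpath, and the class name is made up. It collects just the compilation units affected by a change, which is the granularity the question says would already be enough.)

    import org.eclipse.jdt.core.ElementChangedEvent;
    import org.eclipse.jdt.core.IElementChangedListener;
    import org.eclipse.jdt.core.IJavaElement;
    import org.eclipse.jdt.core.IJavaElementDelta;
    import org.eclipse.jdt.core.JavaCore;

    // Listen for Java model changes and report only the compilation units that
    // actually changed, so the statistics model can be updated incrementally.
    public class ChangedUnitCollector implements IElementChangedListener {

        public void register() {
            // POST_CHANGE fires after changes are committed to the Java model;
            // POST_RECONCILE would fire on unsaved editor changes instead.
            JavaCore.addElementChangedListener(this, ElementChangedEvent.POST_CHANGE);
        }

        @Override
        public void elementChanged(ElementChangedEvent event) {
            visit(event.getDelta());
        }

        private void visit(IJavaElementDelta delta) {
            IJavaElement element = delta.getElement();
            if (element.getElementType() == IJavaElement.COMPILATION_UNIT) {
                // rebuild only this unit's part of the model instead of starting from scratch
                System.out.println("changed unit: " + element.getElementName());
                return;
            }
            for (IJavaElementDelta child : delta.getAffectedChildren()) {
                visit(child);   // walk down from project/package deltas to the units
            }
        }
    }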

    Read the article

  • Correct way to protect a private API key when versioning a python application on a public git repo

    - by systempuntoout
    I would like to open-source a Python project on GitHub, but it contains an API key that should not be distributed. I guess there's something better than removing the key each time a push is made to the repo. Imagine a simplified foomodule.py:
    import urllib2
    API_KEY = 'XXXXXXXXX'
    urllib2.urlopen("http://example.com/foo?id=123%s" % API_KEY).read()
    What I'm thinking is:
      1. Move API_KEY into a second key.py module and import it in foomodule.py; I would then add key.py to the .gitignore file.
      2. Same as 1, but using ConfigParser.
    Do you know a good programmatic way to handle this scenario?

    Read the article
