Search Results

Search found 27207 results on 1089 pages for 'preferred solution'.


  • 2d shapes in XNA 4.0?

    - by Lautaro
    Having some experience with XNA but none with 3D programming, I have an idea I want to realize but have not decided whether to do it in 3D or 2D, and I'm not sure which one will work best in XNA. I want to have a shape like a blob that can reshape itself depending on input. The morphing does not need to be very advanced: it could be a circle (2D) or a globe (3D) that just has one point moving slightly in a random direction. In ASP.NET I have done this with the 2D drawing classes, which let me make lines, circles, squares, etc. and then modify the points that make them up. But it seems to me that XNA does not have classes for making 2D shapes (can I get this confirmed?). If it did, that would be the quickest solution for me.

    Read the article

  • I need help with algorithms, how do I improve?

    - by David Burr
    I usually do well at figuring out solutions to programming assignments, but for some reason I'm really struggling in my algorithms class. I'm not failing, but I know I can do better. When I'm confronted with problems like "Divide the array into 2 subarrays so that the sum of each subarray is equal to the sum of the other," I feel like my brain won't cooperate and I end up not being able to solve it. Some of the things I'm doing right now to help myself: reading CLR (1st ed.), though it takes a lot of time for the material to sink in and I can't understand most of it; solving practice problems, though no matter how much I try, most of the time I end up googling for the solution before I understand how to solve it. I know that good algorithmic skills are very important because lots of good companies ask these sorts of questions in their interview process, so I'm a bit worried right now. What else can I do to improve my algorithmic/problem-solving skills? Any advice on how to deal with this?
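
    As a concrete illustration of the kind of problem quoted above, here is a minimal dynamic-programming sketch (not from the original post), assuming "2 subarrays" means two subsets of the elements rather than contiguous slices, and that the values are non-negative integers:

        // Decide whether the numbers can be split into two groups with equal sums.
        function canSplitEqually(nums: number[]): boolean {
          const total = nums.reduce((a, b) => a + b, 0);
          if (total % 2 !== 0) return false;        // an odd total can never split evenly
          const target = total / 2;
          // reachable[s] is true when some subset of the numbers seen so far sums to s
          const reachable = new Array<boolean>(target + 1).fill(false);
          reachable[0] = true;
          for (const n of nums) {
            for (let s = target; s >= n; s--) {
              if (reachable[s - n]) reachable[s] = true;
            }
          }
          return reachable[target];
        }

        // Example: [1, 5, 11, 5] -> true, since [1, 5, 5] and [11] both sum to 11.
        console.log(canSplitEqually([1, 5, 11, 5]));

    Working through why the inner loop runs downwards (so each number is used at most once) is exactly the sort of reasoning step these interview-style questions tend to probe.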

    Read the article

  • Can the overuse of custom taglibs disrupt the outsourcing of html designers?

    - by Renato Gama
    Yesterday a friend and I were talking about the overuse of custom taglibs. We create taglibs for everything! We create taglibs to wrap jQuery UI elements (tabs, buttons, etc.) and other plugins' elements as well. We often wrap them together in a single component. We use taglibs to the point that we have almost no pure HTML within the body tag. Our question is: is this a healthy habit? Imagine two situations: 1) We hire an HTML designer and bear the cost of a month for him to learn all this stuff. 2) We want to outsource the HTML development, but no company would take on learning our taglib library, OR it becomes more expensive. We love taglibs, as they've been a lovely shortcut for JavaScript development since we write it only once. What would be the best practices in this sense, and what would you suggest? We are looking for a future-proof solution (or an argument that agrees with ours).

    Read the article

  • Radiosity using a hemisphere

    - by P. Avery
    I'm working on a radiosity processor. During a visibility pass I project scene geometry, at a high order of tessellation, onto a hemisphere rendered to a 1024x1024 render target. The problem is that the edges of certain triangles are not being rendered to the item buffer (the render target), so when I test certain edges (or pixels, during the pixel shader) for visibility during a reconstruction pass, visible edges are not identified and as a result the pixel for that edge is discarded. One solution was to increase the resolution of the item buffer (up to 4096x4096); this helped and more edges became visible, but it was not foolproof. How do I increase visibility? Here is a screenshot of a scene after radiosity is applied: the seams are edges along a triangle face that were not visible due to the resolution of the item buffer. Update: I fixed the problem by sampling the item buffer with 8 points.
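
    One plausible reading of "sampling the item buffer with 8 points" (the post does not show the code) is to test a small ring of neighbouring texels around each projected pixel rather than a single texel, and treat the edge as visible if any of them hit. A rough sketch of that idea, with all names invented for illustration:

        // ItemBuffer looks up which patch/face id was rendered at a given texel.
        type ItemBuffer = (x: number, y: number) => number;

        // The 8 texels surrounding a centre texel.
        const OFFSETS: Array<[number, number]> = [
          [-1, -1], [0, -1], [1, -1],
          [-1,  0],          [1,  0],
          [-1,  1], [0,  1], [1,  1],
        ];

        function isPatchVisible(buffer: ItemBuffer, x: number, y: number, patchId: number): boolean {
          if (buffer(x, y) === patchId) return true;  // direct hit on the centre texel
          // fall back to the neighbourhood, so thin edges that missed the centre still count
          return OFFSETS.some(([dx, dy]) => buffer(x + dx, y + dy) === patchId);
        }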

    Read the article

  • Unity Desktop Displays strange lines

    - by Alex Holsgrove
    I didn't quite know what title to give this problem, but hopefully the screenshot will explain more. I am running a Samsung R60+ laptop on Ubuntu 13.10 with a Radeon X1250 GPU. After I log in and the Unity desktop loads, I can see these strange lines at the top of the screen. I presumed it was perhaps a driver issue and found this article to see if I could resolve it: https://help.ubuntu.com/community/RadeonDriver I cannot get on with Unity at all (where have all the menus gone?), so perhaps reverting to GNOME may be a solution in my case? I'd welcome any ideas, please.

    Read the article

  • Video conference/chat tool that can be embedded in own website needed

    - by Olaf
    We are looking for a means (a tool, a commercial service) to let a closed user group start a live video conference in a browser, as part of the company website. Something like Skype, but embedded and available to everybody who has access to the page into which the tool is embedded. Most services require registration and the creation of a chat room on their website, or, as with Skype and similar solutions, the installation of extra software. What we need is a solution with some kind of "hidden login", performed by the site's client script (which knows who the user is and forwards the credentials to the service). Any suggestions?

    Read the article

  • How to create a shared folder using command line on a server

    - by sadmicrowave
    After following the tutorial here I ran into a problem. Here is what I did. On my server I installed nfs-kernel-server and edited the /etc/exports file to include the folder I want to share:
        /var *(rw,sync)
    On my client machine I edited my fstab file to include the share:
        //128.251.xxx.xxx/var/ ~/uslonsweb003 nfs #username=[username],password=[password], 0 0
    I then ran the command sudo mount -a, which gives this error:
        mount.nfs: remote share not in 'host:dir' format
    Where did I go wrong with this setup? Also, if there is a better way (using the command line) to set up a folder share on an Ubuntu 10.10 server that will be accessed by other Linux and Windows machines, please let me know.
    UPDATE: The mapped drive is now not letting me create, edit, or delete files or folders (read-only access). My configuration is as follows:
    client fstab file:
        128.251.xxx.xxx:/var /home/coreyf/uslonsweb003 nfs rw,hard,intr, 0 0
    server exports file:
        /var *(rw,no_root_squash,sync,no_subtree_check)
    UPDATE 2: Using Allan's solution my drive mounted correctly; however, after putting rw,intr as my additional parameters I still cannot create, edit, or delete folders/files.

    Read the article

  • How to Eliminate Black Bars from PowerPoint videos?

    - by appu
    Hi, I am running a digital signage system for my client. The basic installation is a vertically oriented 42" LCD TV with a 1920x1080 panel mounted in portrait (so effectively 1080x1920, the reverse of the normal landscape setup). Please check out the following link for the basic screen-divisions layout I want to set up: http://flickr.com/photos/55097319@N03/5410208856 In the division labeled "ppt" I plan to run a PowerPoint presentation. That screen division is 360x1476 pixels. As there isn't an option in PowerPoint to specify slide size in terms of resolution, then, following this article on Indezine, http://indezine.com/products/powerpoint/books/perfectmedicalpres02.html, I divided 360 and 1476 each by 72, which gives me 5" x 20.5" as the slide size for my ppt. After setting up the slide size with those dimensions, I used Sizer (http://brianapps.net) to resize my PowerPoint window to 360x1476 so that the recording would not include any black bars. But after starting the recording there are black side bars visible, which Camtasia records and brings into Camtasia Studio with the black bars included: http://www.flickr.com/photos/55097319@N03/5409597049 My question is: after doing the above, and as explained in the following video link, why do I still get black bars? http://feedback.techsmith.com/techsmith/topics/eliminate_black_bars_in_your_powerpoint_recordings Is there an option in Camtasia to stretch the recording to cover up the black bars, or any other way I can get rid of them while recording the ppt at my preferred dimensions? Notes: The TechSmith video above asks me to adjust my desktop screen resolution, which my display chipset does not allow me to set. I set up the show in PowerPoint to be "browsed by an individual (window)". Also, the signage software only supports SWF and video file formats natively, not ppt, pptx, etc. Thanks, bhavani.

    Read the article

  • How do I get Poulsbo (GMA500) drivers to work?

    - by slayerman
    I'm trying to get the Poulsbo driver working under Ubuntu 9.10. I've installed the poulsbo-driver-2d, poulsbo-driver-3d and poulsbo-config packages from the PPA added with sudo add-apt-repository ppa:gma500/ppa. The package installation works. After that I have to set driver "psb" in xorg.conf. Then I reboot and I have no more display; I have to switch back to vesa in order to get a display again. Can someone give me some kind of solution?

    Read the article

  • gnome-control-center can't set display resolution under openbox

    - by Andy
    I'm running Ubuntu 11.10 with Openbox on my laptop. Since I need to plug different external displays into it and the Openbox environment doesn't automatically pick them up, I thought the best solution I could come up with was to use gnome-control-center and its display settings tool from within Openbox. But although this tool does detect monitors correctly, it can't make any changes: clicking the Apply button just doesn't seem to do anything. So my questions are: 1) how do I get this tool working? 2) how do I run the "Displays" tool directly from the command line, skipping the control center? 3) is there a better way to automatically detect and set resolutions on internal/external monitors under Openbox? Please note I tried arandr too and it doesn't even work for my environment (it doesn't detect an external display being plugged in at all). For what it's worth, my laptop is a Lenovo G560, and Ubuntu is the x64 version with all updates rolled over. Thanks for your consideration.

    Read the article

  • How to monetize and/or protect framework rights?

    - by Arthur Wulf White
    I made a game engine/framework for ActionScript 3 that allows very efficient and flexible level design for platformers, tower defense games, RPGs, RTS and racing games. The algorithms I used are new and are not available in any other level editor I've seen. What are the best ways to benefit myself and others with my new framework? It is written for ActionScript 3, so unless I translate it to C# I'm guessing it will be decompiled and used by others. I want to have some license allowing me to share the framework and still benefit from it. Any advice would be appreciated. This issue has been on my mind a lot this year. I am hoping to find a solution that will bring me some relief.

    Read the article

  • TELERIK LAUNCHES NEW AUTOMATED TESTING TOOLS PRODUCT LINE

    Merger with ArtOfTest repositions Telerik as a major player in the automated testing market. Waltham, MA, April 13, 2010: Telerik, a leading provider of development tools and solutions for the Microsoft .NET platform, today announced the launch of WebUI Test Studio 2010, an innovative and easy-to-use automated web-testing solution. Encompassing essential web technologies such as ASP.NET AJAX, Silverlight, and MVC, Telerik's WebUI Test Studio...

    Read the article

  • Printer State: Idle - /usr/lib/cups/backend/dnssd failed

    - by bilbo88
    A printer was installed and used to work, but it does not now. A job is submitted successfully but just sits there waiting to print (as seen from the lpq command). Pinging the printer works fine. (The printer is on and works, as it prints from Windows.) But in the printer properties it shows Printer State: Idle - /usr/lib/cups/backend/dnssd failed. The following command does not help either: sudo service avahi-daemon restart Does anyone have a solution? Thank you for the help.

    Read the article

  • How to fix an annoying ReSharper – NuGet error

    - by terje
    Originally posted on: http://geekswithblogs.net/terje/archive/2013/10/30/how-to-fix-an-annoying-resharper-ndash-nuget-error.aspx Using NuGet in Visual Studio together with ReSharper may sometimes lead you into an annoying situation where ReSharper indicates that your code has an error, even though the solution builds just fine. This may happen if you have a set of NuGet packages and you either just restore them, or delete them on disk and then restore them again. Your code ends up looking like this; note the red "missing" functions, which come from the Moq library, downloaded from NuGet: while the build is still fine and compiles without any errors: This StackOverflow question gives some different approaches to solving this, but my experience has been that the ReSharper suspend/resume trick most often solves the issue. In Visual Studio, go to Tools/Options/ReSharper and press Suspend: when this is done the error markers disappear, since ReSharper is now inactive. Then just press Resume again. This has been submitted to JetBrains support (ticket here: http://resharper-support.jetbrains.com/requests/3882) if you want to follow it.

    Read the article

  • Drive Innovation from Data with Oracle Business Analytics

    - by Mike.Hallett(at)Oracle-BI&EPM
    Oracle is doing a big marketing push on the transformational value of Business Analytics to our customers, and we hope you as partners can get excited, get involved and win more business from this campaign. Work with your local in-country BI business development manager and your partner channel manager; if you want to contribute and are struggling to make contact, then let me know ([email protected]) and I will facilitate introductions. Oracle Day Business Analytics Track: Invite your customers to register for their local Oracle Day to get the latest news from OpenWorld and learn about Oracle's Big Data strategy and solution. There is a dedicated Business Analytics track. Business Analytics Facebook Hub: Encourage your customers to "Like" the Business Analytics Facebook page at www.facebook.com/OracleBusinessAnalytics so they can receive useful and interesting information on their Facebook wall.

    Read the article

  • Render full-screen gradient or texture

    - by Filip Skakun
    What's the simplest way to fill the background of the screen with a gradient or a texture in Direct3D 10/11? I'm building a Windows 8 Metro app in which the camera never moves, and I render some content in D3D, but I need to fill the background with something other than a solid color. Do I need to figure out the size and position of a rectangle and position it in 3D space, or is there a simpler solution? I don't care about depth at all and don't use a depth buffer, since all my content is sorted back to front, so I could just start by drawing the background.

    Read the article

  • KnockoutJS 2.3.0 - Uncaught Error: You cannot apply bindings multiple times to the same element.

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2013/07/25/knockoutjs-2.3.0---uncaught-error-you-cannot-apply-bindings-multiple.aspx I upgraded KnockoutJS through NuGet and started getting the error 'Uncaught Error: You cannot apply bindings multiple times to the same element.' when I called applyBindings after the main page load. I had some dynamically added DOM elements, and re-applying bindings had worked before. It always seemed like a workaround/hack, but now Knockout is telling me that I shouldn't do it. The quick way to fix this is to use ko.cleanNode($('#id')) first, and this works. A different and possibly better way, as suggested by x0n, might be to use templates and Knockout's template binding (<script type='text/html'>…</script>). Thanks again to the StackOverflow community for quickly providing me with the solution. Check out my question for all the details.
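
    For reference, a minimal sketch of the clean-then-rebind sequence described above (the element id and view model are made up; only ko.cleanNode and ko.applyBindings come from the post):

        // Assume Knockout is loaded globally, as on the original page.
        declare const ko: any;

        // Re-bind a dynamically replaced element without triggering
        // "You cannot apply bindings multiple times to the same element".
        function rebind(elementId: string, viewModel: object): void {
          const el = document.getElementById(elementId);
          if (!el) return;
          ko.cleanNode(el);                 // drop any bindings Knockout already attached here
          ko.applyBindings(viewModel, el);  // apply bindings fresh, scoped to this element only
        }

        rebind('dynamic-panel', { message: 'hello' });

    Note that ko.cleanNode expects a DOM node, so when using jQuery selectors the underlying element (e.g. $('#id')[0]) is what should be passed in.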

    Read the article

  • Should you charge clients hours spent on the wrong track?

    - by Lea Verou
    I took on a small CSS challenge to solve for a client, and I'm going to be paid at an hourly rate. I eventually solved it; it took 4 hours, but I spent roughly 30% of the time on the wrong track, trying a CSS3 solution that only worked in recent browsers and finally discovering that no fallback is possible via JS (like I originally thought). Should I charge the client for that 30%? More details: I didn't provide an estimate; I liked the challenge per se, so I started working on it before giving an estimate (but I have worked with him before, so I know he's not one of those people with unrealistic expectations). At the very worst I will have spent 4 unpaid hours on an intriguing CSS challenge. And I will give the fairest possible estimate for both of us, since I will have already done the work. :)

    Read the article

  • New Oracle Information Rights Management release (11.1.1.3)

    - by Simon Thorpe
    Just released is the latest version of the market-leading document security technology from Oracle. Oracle IRM 11g is the result of over 12 years of development and innovation to allow customers to provide persistent security for their most confidential documents and emails. This latest release continues our refinement of the technology and features the following:
    - Continued improvements to the web-based Oracle IRM Management Website
    - New features in the out-of-the-box classification model
    - New Java APIs improving application integration support
    - Support for DB2 as the IRM database
    Over the coming months we will see more releases of this technology as we improve format support and platform support, and continue the strategy to position Oracle IRM as the most secure, scalable and usable document security solution on the market. Want to learn more about Oracle IRM? View our video presentation and demonstration, or try it for yourself via our simple online self-service demo. Keep up to date on Oracle via this blog or on our Twitter, YouTube and Facebook pages.

    Read the article

  • What to do with database in dev/production phases of a website?

    - by TheLQ
    For a while now I've been keeping a website I'm developing in the standard dev/production phases. It's been pretty simple: a Mercurial repo for dev, a repo for production. Do work in dev, get it approved, push to production. But now I'm trying to apply this process to a new website that has a database, and I'm struggling to figure out a development strategy. What I didn't mention above is that I do all my work in my own repo, push it to dev, then later push it to production, so there are 3 different servers. So how do I manage my database? The obvious solution of running mysqldump on every commit isn't going to happen, and a dump at the end of the day isn't all that helpful when you want to later undo one change that happened in the middle of the day. What is the best way to accomplish this?

    Read the article

  • How do I migrate Exchange 2007 to new hardware?

    - by Graeme Donaldson
    As per my previous question, I have an Exchange 2007 box which is also a DC. Since I can't demote it while Exchange is installed, I want to move Exchange to a different server. Does anyone have any articles, tips or experiences to share on this? The last time I did this it was with Exchange 2003, and even that is a little rusty in my head. The setup is a single Exchange 2007 Hub/Edge/Mailbox/CAS server. It's currently on Windows Server 2008; I can migrate it to the same OS, or I can go to 2008 R2, I'm not really picky on that. We're running OWA/ActiveSync/POP3(S)/IMAP(S) for client access. I already have another fully functional DC/GC/DNS box in the same site, and clients in the site are already using it for DNS. It's also the preferred site bridgehead for AD replication. Update: After reading Evan's answer I realised that my original question wasn't worded correctly. I'm not looking to do a swing migration; I actually need to move Exchange completely over to a new box. I have done swing migrations in the past, i.e. moving over to a temporary box and back to the original hardware afterwards, and I'm not really sure why I used that term in the original question, since it's not what I intended. Any tips?

    Read the article

  • How to handle updated configuration when it's already been cloned for editing

    - by alexrussell
    Really sorry about the title, which probably doesn't make much sense; hopefully I can explain myself better here, as it's something that has bugged me for ages and is now becoming a pressing concern as I write a piece of software with configuration. Most software comes with default configuration options stored in the app itself, and then there's a configuration file (let's say) that a user can edit. Once that file has been created or edited for the first time, subsequent updates to the application cannot (easily) modify it for fear of clobbering the user's own changes to the default configuration. So my question is: if my application adds a new configurable parameter, what's the best way to aid discoverability of the setting and allow the user (a developer) to override it as nicely as possible, given the following constraints:
    - I don't actually have a canonical default config in the application per se; it's more of a 'cascading filesystem'-like affair: the config template is stored in default/config.json, and when the user wishes to edit the configuration it's copied to user/config.json. If a user config is found it is used; there is no automatic overriding of a subset of keys, the whole new file is used and that's that. If there's no user config, the default config is used.
    - When a user wishes to edit the config they run a command to 'generate' it for them (which simply copies the config.json file from the default to the user directory).
    - There is no UI for the configuration options, as it's not appropriate to the userbase (think of my software as a library or something; the users are developers, and the config is done in the user/config.json file).
    - Due to my software being library-like, there's no simple way to run tasks automatically when the software is updated (so ideas like looking at the current config, comparing it to the template config, and adding missing keys aren't appropriate).
    The only solution I can think of right now is to say "there's a new config setting X" in release notes, but this doesn't seem ideal to me. If you want any more information let me know. The above specifics are not actually 100% true to my situation, but they represent the problem equally well with lower complexity. If you do want specifics, however, I can explain the exact setup. Further clarification of the type of configuration I mean: think of the Atom code editor. There appears to be a default 'template' config file somewhere, but as soon as a configuration option is edited, ~/.atom/config.cson is generated and the setting goes in there. From then on, if Atom is updated and gets a new configuration key, this file cannot be overwritten by Atom without a lot of effort to ensure that the addition or modification of the key does not clobber anything. In Atom's case, because there is a GUI for editing settings, they can get away with just adding the UI for the new setting to aid its 'discoverability'. I don't have that luxury. Clarification of my constraints and what I'm actually looking for: the software I'm writing is actually a package for a larger system. This larger system is what provides the configuration, and the way it works is pretty much fixed: I just make a config('some.key') kind of call and it knows to look to see if the user has a config clone and, if so, use it; otherwise it uses the default config which is part of my package.
    Now, while I could make my application edit the user's configuration files (there is a convention about where they're stored), that's generally not done, so I'd like to live within the constraints of the system I'm using if possible. And it's not just about discoverability either; one large concern is that the addition of a configuration key won't actually work as soon as the user has their own copy of the original template. Adding the key to the template won't make a difference, as that file is never read. As such, I think this is actually quite a big flaw in the design of the configuration cascading system and thus needs to be taken up with my upstream. So, thinking about it, based on my constraints, I don't think there's going to be a good solution save for either editing the user's configuration or using a new config file every time there are updates to the default configuration. Even the release-notes idea from above isn't doable, since if the user does not follow the advice I suddenly have a config key with no value (user-defined or default). So the new question is this: what is the general way to solve the problem of having a default configuration in template config files while allowing a user to make a user-specific version of these in order to override the defaults? A per-key cascade (rather than a per-file cascade) where the user only specifies their overrides? In that case, what happens if a configuration value is an array: do we replace or append to the default (or, more realistically, how does the user specify whether they wish to replace or append)? It seems like configuration is kinda hard, so how is it solved in the wild?
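
    For what it's worth, here is a minimal sketch of the per-key cascade mentioned in the question: the user file holds only overrides and is deep-merged over the template at load time, so a key added to the template in a later release still gets its default value. The replace-vs-append choice for arrays is surfaced as an explicit option, since the post points out that ambiguity. All names are illustrative, not from the original system.

        type Config = { [key: string]: any };

        // Deep-merge user overrides over the default template.
        // Arrays are replaced by default; pass { appendArrays: true } to append instead.
        function mergeConfig(defaults: Config, overrides: Config,
                             opts: { appendArrays?: boolean } = {}): Config {
          const out: Config = { ...defaults };
          for (const key of Object.keys(overrides)) {
            const d = defaults[key];
            const o = overrides[key];
            if (Array.isArray(d) && Array.isArray(o)) {
              out[key] = opts.appendArrays ? [...d, ...o] : [...o];
            } else if (d && o && typeof d === 'object' && typeof o === 'object'
                       && !Array.isArray(d) && !Array.isArray(o)) {
              out[key] = mergeConfig(d, o, opts);  // recurse into nested sections
            } else {
              out[key] = o;                        // user value wins for scalars and new keys
            }
          }
          return out;
        }

        // A key the user never touched (cache.backend) keeps its template default.
        const defaults = { cache: { ttl: 60, backend: 'memory' }, plugins: ['a'] };
        const user = { cache: { ttl: 300 }, plugins: ['b'] };
        console.log(mergeConfig(defaults, user));
        // -> { cache: { ttl: 300, backend: 'memory' }, plugins: ['b'] }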

    Read the article

  • Does google contribute ranking from cdn.example.com to example.com?

    - by DesignerGuy
    Background: From my understanding, http://mywebsite.com/image.jpg can help the ranking of http://mywebsite.com in a search engine such as Google (obviously the search engine of primary concern). So, SEO-wise, moving an image to http://whatever-cdn.com/my-account/image.jpg is bad. A popular solution is to use a CNAME record, such as http://cdn.mywebsite.com, so that image.jpg can be accessed at http://cdn.mywebsite.com/image.jpg. The question: Does http://cdn.mywebsite.com/image.jpg rank as effectively as http://mywebsite.com/image.jpg? Does it help boost the main http://mywebsite.com? Or does it rank independently because it is a subdomain? Is there another option (a way to use a CDN without sacrificing ranking)?

    Read the article

  • Best Practices Generating WebService Proxies for Oracle Sales Cloud (Fusion CRM)

    - by asantaga
    I've recently been building a REST service wrapper for Oracle Sales Cloud, and initially all was going well; however, as soon as I added all of my web service proxies I started to get weird errors. My project structure looks like this:
    What I found was that if I only had the InteractionsService and OpportunityService web service proxies then all worked OK, but as soon as I added the LocationsService proxy I would start to see strange JAXB errors.
    Example of the error message:
        Exception in thread "main" javax.xml.ws.WebServiceException: Unable to create JAXBContext
            at com.sun.xml.ws.model.AbstractSEIModelImpl.createJAXBContext(AbstractSEIModelImpl.java:164)
            at com.sun.xml.ws.model.AbstractSEIModelImpl.postProcess(AbstractSEIModelImpl.java:94)
            at com.sun.xml.ws.model.RuntimeModeler.buildRuntimeModel(RuntimeModeler.java:281)
            at com.sun.xml.ws.client.WSServiceDelegate.buildRuntimeModel(WSServiceDelegate.java:762)
            at weblogic.wsee.jaxws.spi.WLSProvider$ServiceDelegate.buildRuntimeModel(WLSProvider.java:982)
            at com.sun.xml.ws.client.WSServiceDelegate.createSEIPortInfo(WSServiceDelegate.java:746)
            at com.sun.xml.ws.client.WSServiceDelegate.addSEI(WSServiceDelegate.java:737)
            at com.sun.xml.ws.client.WSServiceDelegate.getPort(WSServiceDelegate.java:361)
            at weblogic.wsee.jaxws.spi.WLSProvider$ServiceDelegate.internalGetPort(WLSProvider.java:934)
            at weblogic.wsee.jaxws.spi.WLSProvider$ServiceDelegate$PortClientInstanceFactory.createClientInstance(WLSProvider.java:1039)
            ......
    Looking further down I see that the error is related to JAXB not being able to find an ObjectFactory for one of its types:
        Caused by: java.security.PrivilegedActionException: com.sun.xml.bind.v2.runtime.IllegalAnnotationsException: 6 counts of IllegalAnnotationExceptions
        There's no ObjectFactory with an @XmlElementDecl for the element {http://xmlns.oracle.com/apps/crmCommon/activities/activitiesService/}AssigneeRsrcOrgId
        this problem is related to the following location:
            at protected javax.xml.bind.JAXBElement com.oracle.xmlns.apps.crmcommon.activities.activitiesservice.ActivityAssignee.assigneeRsrcOrgId
            at com.oracle.xmlns.apps.crmcommon.activities.activitiesservice.ActivityAssignee
    This is very strange. My first thought was that when I generated the web service proxy I entered the package name as "oracle.demo.pts.fusionproxy.servicename" and left the generated-types package blank. That way all the generated types get put into the same package hierarchy and, when deployed, they get merged. Sounds reasonable and appears to work, but not in this case.
    To resolve this I regenerated the proxy, but this time setting:
    - Package name: the name of my package, e.g. oracle.demo.pts.fusionproxy.interactions
    - Root Package for Generated Types: the package where the types will be generated, e.g. oracle.demo.pts.fusionproxy.SalesParty.types
    When I ran the application now, it all worked. Awesome, eh? Alas no, there is a serious side effect. The problem now is that, to help with coding, I've created a collection of helper classes; these helper classes take parameters which use some of the "generic" datatypes, like FindCriteria. For example, this won't work any more:
        public static FindCriteria createCustomFindCriteria(FindCriteria pFc, String pAttributes)
    Here lies a gremlin of a problem: I can't use this method any more, because the FindCriteria datatype is now defined two or more times in the generated code for my project.
    If you leave the Root Package for Generated Types blank the types get generated into com.oracle.xmlns, and if you populate it they get generated into your custom package. The two datatypes look the same and sound the same (and if this were a duck it would sound the same), but THEY ARE NOT THE SAME. Speaking to development, they recommend you should not enter anything in the Root Package section, so the mystery thickens: why does it work at all? Well, after spending some time with some colleagues of mine in development, we've identified the issue. Alas, different parts of Oracle Fusion development have multiple schemas with the same namespace; when the web service generator generates its classes it does not see the other schemas properly and does not generate the ObjectFactories correctly. Thankfully I've found a workaround.
    Solution Overview
    - When generating the proxies, leave the Root Package for Generated Types BLANK.
    - When you have finished generating your proxies, use the JAXB tool XJC to generate Java classes for all datatypes.
    - Create a project within your JDeveloper 11g workspace and import the Java classes into this project.
    - Final bit: within the project dependencies, ensure that the JAXB/XJC-generated classes are FIRST in the classpath.
    Solution Details
    Generate the web service SOAP proxies. When generating the proxies your generation dialog should look like this. Ensure the "unwrap" parameters option is selected; if it isn't, that's OK, it simply means that when issuing a "get" you need to extract the Element yourself.
    Generate the JAXB classes using XJC. XJC provides a command-line switch called -wsdl which (although experimental/beta) accepts an HTTP WSDL and will generate the relevant classes. You can put these into a single batch/shell script:
        xjc -wsdl https://fusionservername:443/appCmmnCompInteractions/InteractionService?wsdl
        xjc -wsdl https://fusionservername:443/opptyMgmtOpportunities/OpportunityService?wsdl
    Create a project in JDeveloper to store the XJC-generated JAXB classes. Within the project folder create a filesystem folder called "src" and copy the generated files into it. JDeveloper 11g should then see the classes and display them; if it doesn't, try clicking the "refresh" button.
    In your main project, ensure that the JDeveloper XJC project is selected as a dependency and, importantly, make sure it is at the top of the list. This ensures that the classes are at the front of the classpath.
    And voilà. Hopefully you won't see any JAXB generation errors, and you can use common datatypes interchangeably in your project (e.g. FindCriteria etc.).

    Read the article

  • Strategies for Indexing Custom Fields in RavenDB

    - by Adrian Thompson Phillips
    In the relational database world, if I were developing a CRM system and wanted to let the user add their own custom fields that are searchable, I could have tables that store the name of the new column, the data type, the value, etc. (which would be less efficient to index), or I could use the less elegant (but more searchable) solution that software like Dynamics and SharePoint uses, where I create a load of columns on my aggregate root called CustomInt1, CustomInt2, etc. (which looks dirty and puts a limit on how many custom fields a user can have, but has indexing advantages). But my question is this: in NoSQL databases, what would be the best way of achieving the same thing? My priority would be searchability. So what would be the best way to store this data? If I used a predefined set of properties (i.e. CustomData1, CustomData2, etc.), because these are all stored as JSON (i.e. strings) in the database, does this make it simpler because I don't have to worry about data types?
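
    Purely as an illustration of the two document shapes being weighed above (the type and field names are invented, and this is not RavenDB-specific API), the "predefined slots" and "open key-value bag" options might serialize like this:

        // Option 1: a fixed set of pre-declared custom slots.
        // Easy to index per slot, but hard-caps how many custom fields a user can have.
        interface ContactWithSlots {
          name: string;
          customInt1?: number;
          customInt2?: number;
          customText1?: string;
        }

        // Option 2: an open bag of user-defined fields.
        // No cap, but the index has to treat the values generically since they all
        // end up serialized as JSON strings/numbers.
        interface ContactWithBag {
          name: string;
          customFields: { [fieldName: string]: string | number };
        }

        const contact: ContactWithBag = {
          name: 'Acme Ltd',
          customFields: { region: 'EMEA', employeeCount: 250 },
        };
        console.log(JSON.stringify(contact));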

    Read the article
