Search Results

Search found 162 results on 7 pages for 'hierarchies'.


  • Where to put external archives to configure running in Eclipse?

    - by Buggieboy
    As a Java/Eclipse noob coming from a Visual Studio .NET background, I'm trying to understand how to set up my run/debug environment in the Eclipse IDE so I can start stepping through code. I have built my application with separate src and bin hierarchies for each library project, and each project has its own jar. For example, in a Windows environment, I have something like: c:\myapp\myapp_main\src\com\mycorp\myapp\main ...and a parallel "bin" tree like this: c:\myapp\myapp_main\bin\com\mycorp\myapp\main Other supporting projects are, similarly: c:\myapp\myapp_util\src\com\mycorp\myapp\util (and a parallel "bin" tree), etc. So I end up with, e.g., myapp_util.jar in the ...\myapp_util\bin... path, and then add that as an external archive to my myapp_main project. I also use utilities like gluegen-rt.jar, which I add as external dependencies to the projects requiring them. I have been able to run outside of the Eclipse environment by copying all my project jars, the gluegen-rt DLL, etc., into a "lib" subfolder of some directory and executing something like: java -Djava.library.path=lib -DfullScreen=false -cp lib/gluegen-rt.jar;lib/myapp_main.jar;lib/myapp_util.jar com.mycorp.myapp.Main When I first pressed F11 to debug, however, I got a message about something like /com/sun/../gluegen... not being found by the class loader. So, to debug, or even just run, in Eclipse, I tried setting the VM arguments in the Galileo Debug (Run/Debug) Configurations to the command line above, beginning at "-Djava.library.path...". I've put a lib subdirectory - just like the above, with all jars and the native gluegen DLL - in various places, such as beneath the folder that my main jar is built in and as a subfolder of my Eclipse starting workspace folder, but now Eclipse can't find the main class: java.lang.NoClassDefFoundError: com.mycorp.myapp.Main - although the Classpath says that it is using the "default classpath", whatever that is. Bottom line: how do I assemble the constituent files of a multi-project application so that I can run or debug in Eclipse?
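
    A minimal sketch of how the working command line above would map onto an Eclipse launch configuration - the split below is an assumption based on how launch configurations are organized, since Eclipse builds the launch classpath from the Classpath tab rather than from a -cp switch buried in the VM arguments:

        REM works from a prompt whose working directory contains lib\:
        java -Djava.library.path=lib -DfullScreen=false -cp lib/gluegen-rt.jar;lib/myapp_main.jar;lib/myapp_util.jar com.mycorp.myapp.Main

        REM the same pieces, spread over the Run/Debug configuration:
        REM   Main tab, Main class:             com.mycorp.myapp.Main
        REM   Arguments tab, VM arguments:      -Djava.library.path=lib -DfullScreen=false
        REM   Classpath tab (User Entries):     gluegen-rt.jar, myapp_main.jar, myapp_util.jar
        REM   Arguments tab, Working directory: the folder that contains lib\
        REM (-Djava.library.path=lib resolves against that working directory)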

    Read the article

  • MySQL & PHP - Creating Multiple Parent Child Relations

    - by Ashok
    Hi, I'm trying to build a navigation system using a categories table with hierarchies. Normally, the table would be defined as follows: id (int) - primary key; name (varchar) - name of the category; parentid (int) - parent ID of this category, referencing the same table (self join). But the catch is that I require a category to be a child of multiple parent categories - just like a Has And Belongs To Many (HABTM) relation. I know that if there are two tables, categories & items, we use a join table categories_items to list the HABTM relations. But here I don't have two tables, only one table, which should somehow express a HABTM relation to itself. Is this possible using a single table? If yes, how? If it isn't, what rules (table naming, fields) should I follow while creating the additional join table? I'm trying to achieve this using CakePHP; if someone can provide a CakePHP solution for this problem, that would be awesome. Even if that's not possible, any solution about creating the join table is appreciated. Thanks for your time.
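
    A minimal sketch of the usual answer - keep categories, drop parentid, and add a self-referential join table with one row per parent/child edge (the table and column names here are illustrative; CakePHP's exact conventions for a self-HABTM join table should be checked against its documentation):

        CREATE TABLE categories (
            id   INT AUTO_INCREMENT PRIMARY KEY,
            name VARCHAR(255) NOT NULL
        );

        CREATE TABLE categories_categories (
            parent_category_id INT NOT NULL,
            child_category_id  INT NOT NULL,
            PRIMARY KEY (parent_category_id, child_category_id),
            FOREIGN KEY (parent_category_id) REFERENCES categories (id),
            FOREIGN KEY (child_category_id)  REFERENCES categories (id)
        );

        -- all parents of category 42
        SELECT c.*
        FROM categories c
        JOIN categories_categories cc ON cc.parent_category_id = c.id
        WHERE cc.child_category_id = 42;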

    Read the article

  • How do I send an XML document to an ASP.NET MVC page for manipulation

    - by Decker
    I have some hierarchical data stored as multiple XML files on the server according to a vendor's schema. In my ASP.NET MVC (2!) application, I'd like the user to choose one of these hierarchies (i.e. files - I provide a list in my controller's Index action). When the user selects one to "edit", my edit action should return a page that presents the XML hierarchy (it's a representation of a folder tree). So my thought is that the view would return HTML containing a jQuery on-load Ajax call back to the server for the XML data - at which point I would present the tree using one of the many jQuery tree controls. On the client side I'd like the user to manipulate the tree, and when done, I'd like to post back the new hierarchy, where I would replace the original XML file that represents that hierarchy. So my questions are: What form should I use to send the data down - XML or JSON? If I send down XML, then I would have to not only read the XML - which jQuery can do - but also be able to modify that XML and send it back. Can I use jQuery to modify this XML DOM? And will all the namespace declarations be preserved? And what form should I send the data back in? If I originally sent the client the hierarchy as JSON (using JsonResult), then presumably I would have a hierarchy of JavaScript objects. What options would I have to post that back? Would I have to recreate the XML representation on the client and post that back? Or should I serialize back to JSON, post that to the server, and then have the server do the work of recreating the XML according to the schema? Thanks for any advice.
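
    A minimal sketch of the JSON round trip, which keeps all XML handling (and namespace preservation) on the server - the action names, the PathFor helper, and the TreeNode DTO are hypothetical, and binding the posted JSON may require a JSON value provider depending on the MVC version:

        // GET: serve the hierarchy as JSON for the jQuery tree control
        public ActionResult Tree(string id)
        {
            XDocument doc = XDocument.Load(PathFor(id)); // PathFor: hypothetical file lookup
            return Json(ToNode(doc.Root), JsonRequestBehavior.AllowGet);
        }

        // POST: accept the edited hierarchy and rebuild the XML server-side
        [HttpPost]
        public ActionResult Tree(string id, TreeNode root)
        {
            XDocument doc = ToXml(root);                 // rebuild against the vendor schema
            doc.Save(PathFor(id));
            return Json(new { ok = true });
        }

    The client then only ever shuffles plain JavaScript objects around; the vendor schema and its namespaces never leave the server.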

    Read the article

  • WPF tree data binding

    - by Am
    Hi, I have a well defined tree repository, where I can rename items, move them up and down, add new ones and delete them. The data is stored in a table as follows:

        Index  Parent  Label    Left  Right
        1      0       root     1     14
        2      1       food     2     7
        3      2       cake     3     4
        4      2       pie      5     6
        5      1       flowers  8     13
        6      5       roses    9     10
        7      5       violets  11    12

    representing the following tree (left/right values in parentheses):

        (1) root (14)
            (2) food (7)
                (3) cake (4)
                (5) pie (6)
            (8) flowers (13)
                (9) roses (10)
                (11) violets (12)

    Now, my problem is how to represent this in a bindable way, so that a TreeView can handle all the possible data changes. Renaming is easy: all I need is to make the label an updatable field. But what if a user moves flowers above food? I can make the relevant data changes, but they cause a complete data change to all other items in the tree. And all the examples I found of bindable hierarchies are no good for non-static trees. So my current (and bad) solution is to reload the displayed tree after a relocation. Any direction will be good. Thanks
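
    A minimal sketch of the standard fix, assuming the nested-set rows are loaded once into an in-memory node graph: bind the TreeView to nodes whose children live in an ObservableCollection, so a move is a Remove from one collection plus an Insert into another, and only the two affected branches re-render - the Left/Right values can be recomputed against the table separately:

        public class TreeNode : INotifyPropertyChanged
        {
            private string _label;
            public string Label
            {
                get { return _label; }
                set { _label = value; OnPropertyChanged("Label"); } // rename updates in place
            }

            private readonly ObservableCollection<TreeNode> _children =
                new ObservableCollection<TreeNode>();
            public ObservableCollection<TreeNode> Children { get { return _children; } }

            public event PropertyChangedEventHandler PropertyChanged;
            private void OnPropertyChanged(string name)
            {
                var handler = PropertyChanged;
                if (handler != null) handler(this, new PropertyChangedEventArgs(name));
            }
        }

        // moving "flowers" above "food": the TreeView follows without a reload
        root.Children.Remove(flowers);
        root.Children.Insert(0, flowers);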

    Read the article

  • getting proxies of the correct type in NHibernate

    - by Nir
    I have a problem with uninitialized proxies in NHibernate.

    The Domain Model: Let's say I have two parallel class hierarchies: Animal, Dog, Cat and AnimalOwner, DogOwner, CatOwner, where Dog and Cat both inherit from Animal, and DogOwner and CatOwner both inherit from AnimalOwner. AnimalOwner has a reference of type Animal called OwnedAnimal. Here are the classes in the example:

        public abstract class Animal
        {
            // some properties
        }

        public class Dog : Animal
        {
            // some more properties
        }

        public class Cat : Animal
        {
            // some more properties
        }

        public class AnimalOwner
        {
            public virtual Animal OwnedAnimal { get; set; }
            // more properties...
        }

        public class DogOwner : AnimalOwner
        {
            // even more properties
        }

        public class CatOwner : AnimalOwner
        {
            // even more properties
        }

    The classes have proper NHibernate mappings, all properties are persistent, and everything that can be lazy loaded is lazy loaded. The application business logic only lets you set a Dog in a DogOwner and a Cat in a CatOwner.

    The Problem: I have code like this:

        public void ProcessDogOwner(DogOwner owner)
        {
            Dog dog = (Dog)owner.OwnedAnimal;
            // ....
        }

    This method can be called by many different methods. In most cases the dog is already in memory and everything is ok, but rarely the dog isn't already in memory - in this case I get an NHibernate "uninitialized proxy", and the cast throws an exception because NHibernate generates a proxy for Animal and not for Dog. I understand that this is how NHibernate works, but I need to know the type without loading the object - or, more correctly, I need the uninitialized proxy to be a proxy of Cat or Dog and not a proxy of Animal.

    Constraints: I can't change the domain model - the model is handed to me by another department; I tried to get them to change the model and failed. The actual model is much more complicated than the example and the classes have many references between them, so using eager loading or adding joins to the queries is out of the question for performance reasons. I have full control of the source code, the hbm mapping and the database schema, and I can change them any way I want (as long as I don't change the relationships between the model classes). I have many methods like the one in the example and I don't want to modify all of them. Thanks, Nir

    Read the article

  • I need to implement C# deep copy constructors with inheritance. What patterns are there to choose from?

    - by Tony Lambert
    I wish to implement a deep copy of my class hierarchy in C#:

        public class ParentObj : ICloneable
        {
            protected int myA;
            public virtual object Clone()
            {
                ParentObj newObj = new ParentObj();
                newObj.myA = this.myA;
                return newObj;
            }
        }

        public class ChildObj : ParentObj
        {
            protected int myB;
            public override object Clone()
            {
                ParentObj newObj = (ParentObj)base.Clone();
                newObj.myB = this.myB; // will not compile: newObj is a ParentObj
                return newObj;
            }
        }

    This will not work, as when cloning the Child only a Parent is new-ed. In my code some classes have large hierarchies. What is the recommended way of doing this? Cloning everything at each level without calling the base class seems wrong. There must be some neat solutions to this problem - what are they? Can I thank everyone for their answers: it was really interesting to see some of the approaches. I think it would be good if someone gave an example of a reflection answer for completeness. +1 awaiting!
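
    A minimal sketch of the protected-copy-constructor pattern that usually gets recommended here, using the same myA/myB fields: each level copies only its own state and delegates the rest to its base, and only the Clone override of the actual runtime type does the new-ing:

        public class ParentObj : ICloneable
        {
            protected int myA;

            public ParentObj() { }
            protected ParentObj(ParentObj other)      // copies this level's state
            {
                myA = other.myA;
            }

            public virtual object Clone() { return new ParentObj(this); }
        }

        public class ChildObj : ParentObj
        {
            protected int myB;

            public ChildObj() { }
            protected ChildObj(ChildObj other) : base(other) // base copies myA
            {
                myB = other.myB;
            }

            public override object Clone() { return new ChildObj(this); }
        }

    Adding a level to the hierarchy then costs one copy constructor plus a one-line Clone override, and the correct runtime type always comes back.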

    Read the article

  • Java method keyword "final" and its use

    - by Lukas Eder
    When I create complex type hierarchies (several levels, several types per level), I like to use the final keyword on methods implementing some interface declaration. An example:

        interface Garble {
            int zork();
        }

        interface Gnarf extends Garble {
            /**
             * Calling this is the same as calling {@link #zblah(int)}
             * with an argument of 0
             */
            int zblah();
            int zblah(int defaultZblah);
        }

    And then:

        abstract class AbstractGarble implements Garble {
            @Override
            public final int zork() { ... }
        }

        abstract class AbstractGnarf extends AbstractGarble implements Gnarf {
            // Here I absolutely want to fix the default behaviour of zblah.
            // No Gnarf should be allowed to set 1 as the default, for instance.
            @Override
            public final int zblah() {
                return zblah(0);
            }

            // This method is not implemented here, but in a subclass
            @Override
            public abstract int zblah(int defaultZblah);
        }

    I do this for several reasons:

    • It helps me develop the type hierarchy. When I add a class to the hierarchy, it is very clear what methods I have to implement, and what methods I may not override (in case I forgot the details about the hierarchy).
    • I think overriding concrete stuff is bad according to design principles and patterns, such as the template method pattern.
    • I don't want other developers or my users to do it.

    So the final keyword works perfectly for me. My question is: why is it used so rarely in the wild? Can you show me some examples / reasons where final (in a similar case to mine) would be very bad?

    Read the article

  • Doxygen: grouping documentation by folder in a multi-project codebase

    - by John
    In one project, some pages were added - it was a new project in which doxygen was being tested properly, adding comments & pages, rather than simply auto-generating docs from our existing code base. The problem is that when doxygen is run on the main code base, that project's pages show up at the top level, e.g. they have a main page and some sub-pages, but what we'd want is all those pages pushed down one level, so you have main project -> project pages. One question is what happens if multiple projects have a main page - do they get combined, or throw errors? Another question is whether you can tell doxygen to use paths and containing folders to auto-generate groups or sections or page hierarchies in some way. Going through all our projects to properly assign classes to groups is a mammoth task, so ideally everything in a directory would get put in a group of that name, as a way to make the documentation of non-doxygen code bases better. Sorry my question is a bit vague; the problem is that even after reading the docs the terminology isn't totally clear yet. Hopefully the kind of question I'm asking is clear - if not, I'll try to cobble an example file structure together.
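
    For what it's worth, a minimal sketch of the grouping mechanism involved - the group names are hypothetical, and as far as I know doxygen has no built-in switch that derives groups from directory names, so a header per directory (or a small script that generates one) has to carry the \defgroup/\ingroup tags:

        /** \defgroup proj_a Project A
         *  Everything that lives under src/proj_a.
         */

        /** \file
         *  \ingroup proj_a
         *  This file's documented entities land inside the Project A group.
         */

        /** \defgroup proj_a_pages Project A pages
         *  \ingroup proj_a
         *  A nested group, which shows up one level below Project A.
         */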

    Read the article

  • Testing subpackage modules in Python 3

    - by Mitchell Model
    I have been experimenting with various uses of hierarchies like this and the differences between absolute and relative imports, and can't figure out how to do routine things with the package, subpackages, and modules without simply putting everything on sys.path. I have a two-level package hierarchy:

        MyApp
            __init__.py
            Application
                __init__.py
                Module1
                Module2
                ...
            Domain
                __init__.py
                Module1
                Module2
                ...
            UI
                __init__.py
                Module1
                Module2
                ...

    I want to be able to do the following: Run test code in a module's "if main" when the module imports from other modules in the same directory. Have one or more test code modules in each subpackage that run unit tests on the modules in the subpackage. Have a set of unit tests that reside someplace reasonable, but outside the subpackages - either in a sibling package, at the top-level package, or outside the top-level package (though all these might end up doing is running the tests in each subpackage). "Enter" the structure from any of the three subpackage levels: e.g. run code that just uses Domain modules; run code that just uses Application modules, where Application uses code from both Application and Domain modules; and run code from the UI that uses code from both UI and Application - for instance, Application test code would import Application modules but not Domain modules. After developing the bulk of the code without subpackages, continue developing and testing after organizing the modules into this hierarchy. I know how to use relative imports so that external code that puts MyApp on its sys.path can import MyApp, import any subpackages it wants, and import things from their modules, while the modules in each subpackage can import other modules from the same subpackage or from sibling packages. However, the development needs listed above seem incompatible with subpackage structuring - in other words, I can't have it both ways: a well-structured multi-level package hierarchy used from the outside and also used from within, in particular for testing, but also because modules from one design level (in particular the UI) should not import modules from a design level below the next one down. Sorry for the long essay, but I think it fairly represents the struggles a lot of people have been having adapting to the new relative import mechanisms.
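
    A minimal sketch of the piece that usually unlocks most of these cases: run everything as a module with -m from the directory that contains MyApp, so __package__ is set and relative imports keep working even in "main" scripts (module names below are from the layout above):

        # MyApp/Domain/test_domain.py -- run from the directory that contains MyApp:
        #     python -m MyApp.Domain.test_domain
        import unittest

        from . import Module1, Module2   # relative imports work because -m sets
                                         # __package__ to "MyApp.Domain"

        class DomainTests(unittest.TestCase):
            def test_module1_loads(self):
                self.assertTrue(hasattr(Module1, "__name__"))

        if __name__ == "__main__":
            unittest.main()

    The same -m idiom covers a module's own "if main" block (python -m MyApp.Application.Module1) and a runner that sits above the subpackages, so sys.path never needs patching as long as the working directory is the package's parent; plain python MyApp/Domain/Module1.py, by contrast, breaks the relative imports.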

    Read the article

  • Database for managing large volumes of (system) metrics

    - by symcbean
    Hi, I'm looking at building a system for managing and reporting stats on web page performance. I'll be collecting a lot more stats than are available in the standard log formats (approx 20 metrics), but compared to most types of database applications, the base data structure will be very simple. My problem is that I'll be accumulating a lot of data - in the region of 100,000 records (i.e. sets of metrics) per hour. Of course, resources are very limited! So that it's possible to sensibly interact with the data, I'd need to consolidate each metric into one-minute bins, broken down by URL; then for anything more than 1 day old, consolidate into 10-minute bins; then at 1 week, hourly bins. At the front end, I want to provide a view (preferably as plots) of the last hour of data, with the facility for users to drill up/down through defined hierarchies of URLs (which do not always map directly to the hierarchy expressed in the path of the URL) and to view different time frames. Rather than coding all this myself and using a relational database, I was wondering if there were tools available which would facilitate both the management of the data and the reporting. I had a look at Mondrian; however, I can't see from the documentation I've looked at whether it's possible to drop the more granular information while maintaining the consolidated views of the data. RRDTool looks promising in terms of managing the data consolidation, but seems to be rather limited in terms of querying the dataset as a multi-dimensional/relational database. What else should I be looking at?
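
    For what it's worth, the consolidation scheme described maps almost one-to-one onto RRDTool's round-robin archives - a sketch for a single metric on a single URL (the file layout, one .rrd per URL, is an assumption, and the drill-up/down across URL hierarchies would still have to live outside RRDTool):

        # 60s base step; 1-min averages kept 1 day, 10-min kept 1 week, hourly kept 1 year
        rrdtool create pageperf.rrd --step 60 \
            DS:resptime:GAUGE:120:0:U \
            RRA:AVERAGE:0.5:1:1440 \
            RRA:AVERAGE:0.5:10:1008 \
            RRA:AVERAGE:0.5:60:8760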

    Read the article

  • Which options do I have for Java process communication?

    - by Dmitriy Matveev
    We have a place in the code of the following form:

        void processParam(Object param)
        {
            wrapperForComplexNativeObject result = jniCallWhichMayCrash(param);
            processResult(result);
        }

    processParam - a method which is called with many different arguments. jniCallWhichMayCrash - a native method which is intended to do some complex processing of its parameter and to create some complex object; it can crash in some cases. wrapperForComplexNativeObject - a wrapper type generated by SWIG. processResult - a method written in pure Java which processes its parameter by creating several kinds of objects (by "kinds" I don't mean classes - something more like hierarchies): 1 - some non-unique objects which reference each other (from the same hierarchy); these objects can have duplicates created from invocations of the processParam() method with different parameter values, and since it's costly to keep all the duplicates it's necessary to cache them. 2 - some unique objects which reference each other (from the same hierarchy) and some of the objects of the 1st kind. After processParam is executed for each of the arguments from some set, the data created in processResult will be processed together. The problem is the fact that the jniCallWhichMayCrash method may crash the entire JVM, and this will be very bad. The crash may happen for one argument value and not for another. We've decided that it's better to ignore crashes inside of the JVM and just skip some chunks of data when such crashes occur. In order to do this we should run the processParam function inside of a separate process and pass the result somehow (HOW? HOW?! This is a question) to the main process, and in case of any crashes we will only lose some part of the data (that's ok) without losing everything else. So for now the main problem is the implementation of transport between different processes. Which options do I have? I can think about serialization and transmitting of binary data by the streams, but serialization may be not very fast due to object complexity. Maybe I have some other options of implementing this?
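
    A minimal sketch of the plainest transport: spawn a worker JVM per chunk with ProcessBuilder and read a serialized result object from its stdout (the Worker class and toSerializableForm are hypothetical, and whatever crosses the boundary must be Serializable - the SWIG wrapper itself can't be, since it holds native pointers):

        // parent side: launch the worker, deserialize its result, survive its crash
        ProcessBuilder pb = new ProcessBuilder("java", "-cp", "app.jar", "Worker");
        Process worker = pb.start();
        try {
            ObjectInputStream in = new ObjectInputStream(worker.getInputStream());
            Object result = in.readObject();       // plain Java objects built in the worker
            processDeserializedResult(result);     // hypothetical adapter around processResult
        } catch (Exception crashed) {
            // the native call took the worker JVM down - skip this chunk, carry on
        }

        // worker side (Worker.main): do the risky JNI call, stream the result back
        wrapperForComplexNativeObject raw = jniCallWhichMayCrash(param);
        ObjectOutputStream out = new ObjectOutputStream(System.out);
        out.writeObject(toSerializableForm(raw));  // copy the data out of the native wrapper
        out.flush();

    Sockets or RMI would do the same job with more ceremony; the cost either way is serializing the complex object graph, which can be trimmed by sending a flattened form rather than the full wrapper.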

    Read the article

  • How to make 2 incompatible types, but with the same members, interchangeable?

    - by Quigrim
    Yesterday 2 of the guys on our team came to me with an uncommon problem. We are using a third-party component in one of our WinForms applications; all the code has already been written against it. They then wanted to incorporate another third-party component, by the same vendor, into our application. To their delight they found that the second component had the exact same public members as the first. But to their dismay, the 2 components have completely separate inheritance hierarchies and implement no common interfaces. Makes you wonder... well, makes me wonder. An example of the problem:

        public class ThirdPartyClass1
        {
            public string Name
            {
                get { return "ThirdPartyClass1"; }
            }

            public void DoThirdPartyStuff()
            {
                Console.WriteLine("ThirdPartyClass1 is doing its thing.");
            }
        }

        public class ThirdPartyClass2
        {
            public string Name
            {
                get { return "ThirdPartyClass2"; }
            }

            public void DoThirdPartyStuff()
            {
                Console.WriteLine("ThirdPartyClass2 is doing its thing.");
            }
        }

    Gladly, they felt copying and pasting the code they wrote for the first component was not the correct answer. So they were thinking of assigning the component instance to an object reference and then modifying the code to do conditional casts after checking what type it was. But that is arguably even uglier than the copy-and-paste approach. So they then asked me if I could write some reflection code to access the properties and call the methods off the two different object types, since we know what they are and they are exactly the same. But my first thought was that there goes the elegance. I figure there has to be a better, more graceful solution to this problem.
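
    A minimal sketch of the adapter route that usually wins here: declare one interface shaped like the shared members, write a thin wrapper per vendor class, and let all existing code target the interface (the names below are made up for the example):

        public interface IThirdParty
        {
            string Name { get; }
            void DoThirdPartyStuff();
        }

        public class ThirdPartyClass1Adapter : IThirdParty
        {
            private readonly ThirdPartyClass1 inner;
            public ThirdPartyClass1Adapter(ThirdPartyClass1 inner) { this.inner = inner; }
            public string Name { get { return inner.Name; } }
            public void DoThirdPartyStuff() { inner.DoThirdPartyStuff(); }
        }

        // ThirdPartyClass2Adapter is the same few lines around ThirdPartyClass2

    On C# 4 and later, the dynamic keyword gives the same effect without the wrappers (duck typing at the call site), at the cost of compile-time checking.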

    Read the article

  • Benefits of PerformancePoint Services Using SharePoint Server 2010

    - by Wayne
    What is PerformancePoint Services? Most of the time it happens that the metrics that make up your key performance indicators are not simple values from a data source. In SharePoint Server 2007 PerformancePoint Services, you could create two kinds of KPI metrics: simple single-value metrics from any supported data source, or complex multiple-value metrics from a single Analysis Services data source using MDX. Now things are even easier with PerformancePoint Services in SharePoint 2010. Let us check what it is. PerformancePoint Services in SharePoint Server 2010 is a performance management service that you can use to monitor and analyze your business. By providing flexible, easy-to-use tools for building dashboards, scorecards, reports, and key performance indicators (KPIs), PerformancePoint Services can help everyone across an organization make informed business decisions that align with companywide objectives and strategy. Scorecards, dashboards, and KPIs help drive accountability. Integrated analytics help employees move quickly from monitoring information to analyzing it and, when appropriate, sharing it throughout the organization. Prior to the addition of PerformancePoint Services to SharePoint Server, Microsoft Office PerformancePoint Server 2007 functioned as a standalone server. Now PerformancePoint functionality is available as an integrated part of the SharePoint Server Enterprise license, as is the case with Excel Services in Microsoft SharePoint Server 2010. The popular features of earlier versions of PerformancePoint Services are preserved, along with numerous enhancements and additional functionality.

    New PerformancePoint Services features: PerformancePoint Services can now utilize SharePoint Server scalability, collaboration, backup and recovery, and disaster recovery capabilities. Dashboards and dashboard items are stored and secured within SharePoint lists and libraries, providing you with a single security and repository framework.

    New features and enhancements of SharePoint 2010 PerformancePoint Services:

    • With PerformancePoint Services functioning as a service in SharePoint Server, dashboards and dashboard items are stored and secured within SharePoint lists and libraries, providing you with a single security and repository framework. The new architecture also takes advantage of SharePoint Server scalability, collaboration, backup and recovery, and disaster recovery capabilities. You can also include and link PerformancePoint Services Web Parts with other SharePoint Server Web Parts on the same page. The new architecture also streamlines security models that simplify access to report data.

    • The Decomposition Tree is a new visualization report type available in PerformancePoint Services. You can use it to quickly and visually break down higher-level data values from a multi-dimensional data set to understand the driving forces behind those values. The Decomposition Tree is available in scorecards and analytic reports, and ultimately in dashboards.

    • You can access more detailed business information with improved scorecards. Scorecards have been enhanced to make it easy for you to drill down and quickly access more detailed information. PerformancePoint scorecards also offer more flexible layout options, dynamic hierarchies, and calculated KPI features. Using this enhanced functionality, you can now create custom metrics that use multiple data sources. You can also sort, filter, and view variances between actual and target values to help you identify concerns or risks.

    • Better Time Intelligence filtering capabilities that you can use to create and use dynamic time filters that are always up to date. Other improved filters improve the ability for dashboard users to quickly focus in on the information that is most relevant.

    • Ability to include and link PerformancePoint Services Web Parts together with other PerformancePoint Services Web Parts on the same page.

    • Easier authoring and publishing of dashboard items by using Dashboard Designer.

    • SQL Server Analysis Services 2008 support.

    • Increased support for accessibility compliance in individual reports and scorecards.

    • The KPI Details report is a new report type that displays contextually relevant information about KPIs, metrics, rows, columns, and cells within a scorecard. The KPI Details report works as a Web Part that links to a scorecard or individual KPI to show relevant metadata to the end user in SharePoint Server. This Web Part can be added to PerformancePoint dashboards or any SharePoint Server page.

    • Create analytics reports to better understand the underlying business forces behind the results. Analytic reports have been enhanced to support value filtering, new chart types, and server-based conditional formatting.

    To conclude: PerformancePoint Services, by becoming tightly integrated with SharePoint Server 2010, takes advantage of many enterprise-level SharePoint Server 2010 features. Unfortunately, SharePoint Foundation 2010 doesn't include this feature. There are still many choices in the SharePoint family of products, including SharePoint Server 2010, SharePoint Foundation, SharePoint Server 2007 and associated free SharePoint web parts and templates.

    Read the article

  • Oracle Business Intelligence Advanced - Hands-on Workshop for Partners - January 18-21

    - by Claudia Costa
    Workshop Description

    This FREE hands-on workshop highlights the strengths of OBIEE 11g by providing attendees hands-on experience with the BI 11g product. OBIEE 11g has adopted the standardized infrastructure of Fusion Middleware to provide robust server capability, along with highly anticipated advanced visualization components like maps, Flash-based charts, scorecards and KPIs. This workshop focuses on new features and infrastructure components for BI practitioners who are familiar with either OBIEE 10g or previous BI releases. After taking this course, Oracle Business Intelligence 11g Advanced, you will gain insight into OBIEE 11g technology, reporting solutions and new features. The workshop provides opportunities to practice with the OBIEE 11g environment through hands-on activities. Participants will gain an in-depth understanding of the new architecture of OBIEE 11g, the security model, installation/configuration, as well as reporting aspects like the new ROLAP/MOLAP-style hierarchical browsing, new chart types, the Action Framework and Advanced Visualization. If you are a Business Intelligence practitioner familiar with BI 10g, you cannot afford to miss this 3-day workshop. Register Now!

    Presentations - Business Intelligence EE (OBIEE) 11g: Advanced Workshop
    · OBIEE 11g Overview
    · OBIEE 11g Architecture and Infrastructure
    · OBIEE 11g Installation, Configuration and Monitoring
    · OBIEE 11g Security Model and BI Components
    · OBIEE 11g Homepage Overview
    · New Visualizations: Master-Detail Events, Charts, Hierarchies
    · Reports Building with OBIEE 11g and Catalog Management
    · Spatial Integration, Action Framework, Scorecards
    · OBIEE 11g Dashboards
    · OBIEE Integration Options

    Lab Outline - Oracle Business Intelligence (OBIEE) 11g: Advanced Workshop

    The labs exercise OBIEE core functionality through hands-on activities and are based on an Oracle VirtualBox image with software and training samples pre-installed. This Advanced course leaves a few labs optional during the workshop, to allow students to practice them on their own. The primary purpose of the workshop is to provide expertise in 11g features and infrastructure changes from 10g. Labs will allow you to explore concepts to:
    · Have a clear understanding of the OBIEE 11g architecture
    · Have a clear understanding of the OBIEE differentiators
    · OBIEE 11g Security Model
    · OBIEE 11g Environment Management
    · Report Building with OBIEE 11g
    · OBIEE 11g Dashboard and Homepage Environment
    · New Visualization features
    · Management of Reports, Dashboards and BI Catalog Objects

    Audience
    · Business Intelligence Evangelists
    · Business Intelligence Application Developers or Consultants
    · Data Warehouse Developers
    · Enterprise Architects
    · Industry Solutions Architects

    Prerequisites
    · Experience with and understanding of OBIEE 10g is required
    · Good understanding of data modeling for reporting purposes
    · Strong experience with database technologies preferred

    Equipment Requirements

    This workshop requires attendees to provide their own laptops, which must meet the following minimum hardware/software requirements. The OBIEE 11g environment requires at least 3 GB of RAM (4 GB preferred), without which students will not be able to complete the labs. The workshop environment includes a VM image plus software components that students will install on their laptops for the labs.
    · Minimum 3 GB RAM, 25 GB free disk space
    · Internet Explorer 7
    · VirtualBox (the latest version), downloadable from http://www.virtualbox.org
    · WinRAR or 7-Zip, downloadable from http://www.win-rar.com/download.html and http://www.7zip.com/
    Attendees will be given a VirtualBox image for the Oracle BI 11g Workshop containing the software, along with the required toolset, database and data sets for the labs.

    Agenda (this class runs 3 days)
    9:00am: Sign-in and technical setup
    9:30am: Workshop starts
    5:00pm: Workshop ends

    Location
    Hotel Holiday Inn Express - Porto Salvo - Lisboa
    This class is free. Register early to confirm a seat!

    Oracle BI Advanced 11g Hands-on Workshop - Schedule. Register Now!
    · January 11-13, 2011: Kista, Sweden
    · January 18-20, 2011: Lisbon, Portugal
    · March 1-3, 2011: Reading, Berkshire, UK
    · March 15-17, 2011: Colombes, Paris, France
    · March 29-31, 2011: Amsterdam, Netherlands

    Questions? For registration questions please send an email to [email protected]. For further information, please contact Claudia Costa, phone: 214235027, or by email.

    Read the article

  • My Right-to-Left Foot (T-SQL Tuesday #13)

    - by smisner
    As a business intelligence consultant, I often encounter the situation described in this month's T-SQL Tuesday, hosted by Steve Jones (Blog | Twitter) - "What the Business Says Is Not What the Business Wants." Steve posed the question, "What issues have you had in interacting with the business to get your job done?" My profession requires me to have one foot firmly planted in the technology world and the other foot planted in the business world. I learned long ago that the business never says exactly what the business wants, because the business doesn't have the words to describe what the business wants accurately enough for IT. Not only do gaps in technical savvy exist, but there are also linguistic barriers between the two worlds. So how do I cope? The adage "a picture is worth a thousand words" is particularly helpful when I'm called in to help design a new business intelligence solution. Many of my students in BI classes have heard me explain ("rant") about left-to-right versus right-to-left design. To understand what I mean by these two design options, let's start with a picture: When we design a business intelligence solution that includes some sort of traditional data warehouse or data mart design, we typically place the data sources on the left, the new solution in the middle, and the users on the right. When I've been called in to help course-correct a failing BI project, I often find that IT has taken a left-to-right approach. They look at the data sources, decide how to model the BI solution as a _______ (fill in the blank with data warehouse, data mart, cube, etc.), and then build the new data structures and supporting infrastructure. (Sometimes, they actually do this without ever having talked to the business first.) Then, when they show what they've built to the business, the business says that is not what we want. Uh-oh. I prefer to take a right-to-left approach - preferably at the beginning of a project. But even if the project starts left-to-right, I'll do my best to swing it around so that we're back to a right-to-left approach. (When circumstances are beyond my control, I carry on, but it's a painful project for everyone - not because of me, but because the approach just doesn't get to what the business wants in the most effective way.) By using a right-to-left approach, I try to understand what it is the business is trying to accomplish. I do this by having them explain reports to me, and explaining the decision-making process that relates to these reports. Sometimes I have them explain to me their business processes, or better yet show me their business processes in action, because I need pictures, too. I (unofficially) call this part of the project "getting inside the business's head." This is starting at the right side of the diagram above. My next step is to start moving leftward. I do this by preparing some type of prototype. Depending on the nature of the project, this might mean that I simply mock up some data in a relational database and build a prototype report in Reporting Services. If I'm lucky, I might be able to use real data in a relational database. I'll either use a subset of the data in the prototype report by creating a prototype database to hold the sample data, or select data directly from the source. It all depends on how much data there is, how complex the queries are, and how fast I need to get the prototype completed. If the solution will include Analysis Services, then I'll build a prototype cube.

    Analysis Services makes it incredibly easy to prototype. You can sit down with the business, show them the prototype, and have a meaningful conversation about what the BI solution should look like. I know I've done a good job on the prototype when I get knocked out of my chair so that the business user can explore the solution further independently. (That's really happened to me!) We can talk about dimensions, hierarchies, levels, members, measures, and so on, with something tangible to look at and without using those terms. It's not helpful to use sample data like Adventure Works or to use BI terms that they don't really understand. But when I show them their data using the BI technology and talk to them in their language, then they truly have a picture worth a thousand words. From that, we can fine-tune the prototype to move it closer to what they want. They have a better idea of what they're getting, and I have a better idea of what to build. So right-to-left design is not truly moving from the right to the left; it starts from the right and moves towards the middle, and once I know what the middle needs to look like, I can then build from the left to meet in the middle. And that's how I get past what the business says to what the business wants.

    Read the article

  • Scripting Part 1

    - by rbishop
    Dynamic Scripting is a large topic, so let me get a couple of things out of the way first. If you aren't familiar with JavaScript, I can suggest Codecademy's JavaScript series. There are also many other websites and books that cover JavaScript from every possible angle. The second thing we need to deal with is JavaScript as a programming language versus a JavaScript environment running in a web browser. Many books, tutorials, and websites completely blur these two together, but they are in fact completely separate. What does this really mean in relation to DRM? Since DRM isn't a web browser, there are no document, window, history, screen, or location objects. There are no events like mousedown or click. Trying to call alert('hello!') in DRM will just cause an error. Those concepts are all related to an HTML document (web page) and are part of the Browser Object Model or Document Object Model. DRM has its own object model that exposes DRM-related objects. In practice, feel free to use those sorts of tutorials or practice within your browser; many of the concepts are directly translatable to writing scripts in DRM. Just don't try to call document.getElementById in your property definition! I think learning by example tends to work the best, so let's try getting a list of all the unique property values for a given node and its children:

        var uniqueValues = {};
        var childEnumerator = node.GetChildEnumerator();
        while(childEnumerator.MoveNext()) {
            var propValue = childEnumerator.GetCurrent().PropValue("Custom.testpropstr1");
            print(propValue);
            if(propValue != null && propValue != '' && !uniqueValues[propValue])
                uniqueValues[propValue] = true;
        }
        var result = '';
        for(var value in uniqueValues){
            result += "Found value " + value + ",";
        }
        return result;

    Now let's break this down piece by piece.

        var uniqueValues = {};

    This declares a variable and initializes it as a new empty Object. You could also have written var uniqueValues = new Object(); Why use an object here? JavaScript objects can also function as a list of keys, and we'll use that later to store each property value as a key on the object.

        var childEnumerator = node.GetChildEnumerator();
        while(childEnumerator.MoveNext()) {

    This gets an enumerator for the node's children. The enumerator allows us to loop through the children one by one. If we wanted to get a filtered list of children, we would instead use ChildrenWith(). When we reach the end of the child list, the enumerator will return false for MoveNext() and that will stop the loop.

        var propValue = childEnumerator.GetCurrent().PropValue("Custom.testpropstr1");
        print(propValue);
        if(propValue != null && propValue != '' && !uniqueValues[propValue])
            uniqueValues[propValue] = true;

    This gets the node the enumerator is currently pointing at, then calls PropValue() on it to get the value of a property. We then make sure the prop value isn't null or the empty string, then we make sure the value doesn't already exist as a key. Assuming it doesn't, we add it as a key with a value (true in this case, because it makes checking for an existing value faster when the value exists). A quick word on the print() function: when viewing the prop grid, running an export, or performing normal DRM operations, it does nothing. If you have a lot of print() calls with complicated arguments it can slow your script down slightly, but otherwise it has no effect. But when using the script editor, all the output of print() will be shown in the Warnings area. This gives you an extremely useful debugging tool to see what exactly a script is doing.

        var result = '';
        for(var value in uniqueValues){
            result += "Found value " + value + ",";
        }
        return result;

    Now we build a string by looping through all the keys in uniqueValues and adding each value to our string. The last step is to simply return the result. Hopefully this small example demonstrates some of the core Dynamic Scripting concepts. Next time, we can try checking for node references in other hierarchies to see if they are using duplicate property values.

    Read the article

  • ADF Business Components

    - by Arda Eralp
    ADF Business Components and JDeveloper simplify the development, delivery, and customization of business applications for the Java EE platform. With ADF Business Components, developers aren't required to write the application infrastructure code required by the typical Java EE application to:

    • Connect to the database
    • Retrieve data
    • Lock database records
    • Manage transactions

    ADF Business Components addresses these tasks through its library of reusable software components and through the supporting design-time facilities in JDeveloper. Most importantly, developers save time using ADF Business Components, since the JDeveloper design time makes typical development tasks entirely declarative. In particular, JDeveloper supports declarative development with ADF Business Components to:

    • Author and test business logic in components which automatically integrate with databases
    • Reuse business logic through multiple SQL-based views of data, supporting different application tasks
    • Access and update the views from browser, desktop, mobile, and web service clients
    • Customize application functionality in layers without requiring modification of the delivered application

    The goal of ADF Business Components is to make the business services developer more productive. ADF Business Components provides a foundation of Java classes that allow your business-tier application components to leverage the functionality provided in the following areas:

    Simplifying Data Access:
    • Design a data model for client displays, including only necessary data
    • Include master-detail hierarchies of any complexity as part of the data model
    • Implement end-user Query-by-Example data filtering without code
    • Automatically coordinate data model changes with the business services layer
    • Automatically validate and save any changes to the database

    Enforcing Business Domain Validation and Business Logic:
    • Declaratively enforce required fields, primary key uniqueness, data precision-scale, and foreign key references
    • Easily capture and enforce both simple and complex business rules, programmatically or declaratively, with multilevel validation support
    • Navigate relationships between business domain objects and enforce constraints related to compound components

    Supporting Sophisticated UIs with Multipage Units of Work:
    • Automatically reflect changes made by business service application logic in the user interface
    • Retrieve reference information from related tables, and automatically maintain the information when the user changes foreign-key values
    • Simplify multistep web-based business transactions with automatic web-tier state management
    • Handle images, video, sound, and documents without having to use code
    • Synchronize pending data changes across multiple views of data
    • Consistently apply prompts, tooltips, format masks, and error messages in any application
    • Define custom metadata for any business components to support metadata-driven user interface or application functionality
    • Add dynamic attributes at runtime to simplify per-row state management

    Implementing High-Performance Service-Oriented Architecture:
    • Support highly functional web service interfaces for business integration without writing code
    • Enforce best-practice interface-based programming style
    • Simplify application security with automatic JAAS integration and audit maintenance
    • "Write once, run anywhere": use the same business service as a plain Java class, EJB session bean, or web service

    Streamlining Application Customization:
    • Extend component functionality after delivery without modifying source code
    • Globally substitute delivered components with extended ones without modifying the application

    ADF Business Components implements the business service through the following set of cooperating components:

    Entity object: An entity object represents a row in a database table and simplifies modifying its data by handling all data manipulation language (DML) operations for you. These are basically your 1-to-1 representations of database tables: each table in the database will have one and only one EO. The EO contains the mapping between columns and attributes. EOs also contain the business logic and validation. These are your core data services, responsible for updating, inserting and deleting records. The Attributes tab displays the actual mapping between attributes and columns; the mapping has the following fields: Name - the name of the attribute we expose in our data model; Type - the data type of the attribute in our application; Column - the column to which we want to map the attribute; Column Type - the type of the column in the database.

    View object: A view object represents a SQL query. You use the full power of the familiar SQL language to join, filter, sort, and aggregate data into exactly the shape required by the end-user task. The attributes in the view objects actually come from the entity objects. In the end the VO will generate a query, but you basically build a VO by selecting which EOs need to participate in it and which attributes of those EOs you want to use. That's why you have the Entity Usage column, so you can see the relation between VO and EO. In the Query tab you can clearly see the query that will be generated for the VO. At this stage we don't need it and just use it for information purposes; in later stages we might use it.

    Application module: An application module is the controller of your data layer. It is responsible for keeping hold of the transaction. It exposes the data model to the view layer: you expose the VOs through the application module. This is the abstraction of your data layer which you want to show to the outside world. It defines an updatable data model and top-level procedures and functions (called service methods) related to a logical unit of work tied to an end-user task. While the base components handle all the common cases through built-in behavior, customization is always possible, and the default behavior provided by the base components can be easily overridden or augmented. When you create EOs, a foreign key will be translated into an association in our model. It defines the type of relation, who is the master and who is the child, and what the visibility of the association looks like. A similar concept exists to identify relations between view objects; these are called view links. They are almost identical to associations, except that a view link is based upon attributes defined in the view object. It can also be based upon an association. Here's a short summary:

    • Entity objects: representations of tables
    • Associations: relations between EOs; representations of foreign keys
    • View objects: logical model
    • View links: relationships between view objects
    • Application module: interface to your application

    Read the article

  • DynamicQuery: How to select a column with a LINQ query that takes parameters

    - by Richard77
    Hello, we want to set up a directory of all the organizations working with us. They are incredibly diverse (government, embassies, private companies, and organizations depending on them), so I've resolved to create 2 tables. Table 1 will treat all the organizations equally, i.e. it'll collect all the basic information (name, address, phone number, etc.). Table 2 will establish the hierarchy among all the organizations. For instance, the Program for Illiterate Adults depends on the National Institute for Social Security, which depends on the Labor Ministry. In the hierarchy table, each column represents a level. So, for the example above: (i) Labor Ministry - Level1 (column1), (ii) National Institute for Social Security - Level2 (column2), (iii) Program for Illiterate Adults - Level3 (column3). To attach an organization to a hierarchy, the user needs to go level by level (i.e. column by column). So, there will be at least 3 situations: If an adequate hierarchy exists for an organization (for instance, level1: US Embassy), that organization can be added (for instance, level2: USAID) - US Embassy/USAID, and so on. What if one or more levels are missing? Then they need to be added. What if the hierarchy needs to be modified? Not everything needs to be modified. I do not have any choice but to work by level (i.e. column by column). It does not make sense to have all the levels in one form, as the user needs to navigate hierarchies to find the right one to attach an organization to. Let's say I have these queries in my repository (just so you get the idea):

        // Query 1
        var orgHierarchy = (from orgH in db.Hierarchy
                            select orgH.Level1).FirstOrDefault();

        // Query 2
        var orgHierarchy = (from orgH in db.Hierarchy
                            select orgH.Level2).FirstOrDefault();

        // Query 3, Query 4, etc.

    The above queries are the same except for the property queried (Level1, Level2, Level3, etc.). Question: Is there a general way of writing the above queries as one, so that the user can track a hierarchy level by level to attach an organization? In other words, not knowing in advance which column to query, I still need to be able to do so depending on some conditions. For instance, an organization X depends on Y; knowing that Y is somewhere on the 3rd level, I'll go to the 4th level, linking X to Y. I need to select (not manually) a column with only one query that takes parameters. Thanks for helping
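
    A minimal sketch of one way to collapse those into a single parameterized query, assuming an entity with the Level1..LevelN string properties used above - a dictionary of selector expressions keeps the query translatable by LINQ to SQL / Entity Framework:

        static readonly Dictionary<int, Expression<Func<Hierarchy, string>>> levelSelectors =
            new Dictionary<int, Expression<Func<Hierarchy, string>>>
            {
                { 1, h => h.Level1 },
                { 2, h => h.Level2 },
                { 3, h => h.Level3 },
                // ... one entry per level column
            };

        public string GetLevelValue(int level)
        {
            return db.Hierarchy.Select(levelSelectors[level]).FirstOrDefault();
        }

    The same idea extends to filtering (a Where built from a selector plus a value), and libraries like Dynamic LINQ achieve the same thing with strings instead of expressions.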

    Read the article

  • Oracle sample data problems

    - by Jay
    So, I have this Java-based data transformation / masking tool, which I wanted to test out on Oracle 10g. The good part with Oracle 10g is that you get a load of sample schemas with half a million records in some. The schemas are: SH, OE, HR, IX, etc. So, I installed 10g and found out that the installation scripts are under ORACLE_HOME/demo/scripts. I customized these scripts a bit to run in batch mode. That solves one half of my requirement - creating source data for testing my data transformation software. The second half of the requirement is that I create the same schemas under different names (TR_HR, TR_OE and so on...) without any data. These schemas would represent my target schemas. So, in short, my software would pick up data from a table in a schema and load it up into the same table in a different schema. Now, I have two issues in creating my target schemas and emptying them. I would like this in a batch job. But with the Oracle scripts you get, the sample schema names are not configurable. So, I tried creating a script, replacing OE with TR_OE, HR with TR_HR and so on. However, this approach is kind of irritating because the sample schemas are kind of complicated in the way they are created; Oracle creates synonyms, views, materialized views, data types and a lot of weird stuff. I would like the target schemas (TR_HR, TR_OE, ...) to be empty. But some of the schemas have circular references, which would not allow me to delete data. The only workaround seems to be removing certain foreign keys, deleting the data and then adding the constraints back. Is there any easy way to do all this, without all this fuss? I would need a complicated data set for my testing (complicated as in tables with triggers, multiple hierarchies... for instance, a child table that has children up to 5 levels, a parent table that refers to an IOT table, and an IOT table that refers to a non-IOT table, etc.). The sample schemas are just about perfect from a data set perspective. The only challenge I see is in automating this whole process of loading up the source schemas, and then creating the target schemas and emptying them. Appreciate your help and suggestions.

    Read the article

  • How to build, sort and print a tree of a sort?

    - by Tuplanolla
    This is more of an algorithmic dilemma than a language-specific problem, but since I'm currently using Ruby I'll tag this as such. I've already spent over 20 hours on this and I would've never believed it if someone told me writing a LaTeX parser was a walk in the park in comparison. I have a loop to read hierarchies (that are prefixed with \m) from different files:

        art.tex:       \m{Art}
        graphical.tex: \m{Art}{Graphical}
        me.tex:        \m{About}{Me}
        music.tex:     \m{Art}{Music}
        notes.tex:     \m{Art}{Music}{Sheet Music}
        site.tex:      \m{About}{Site}
        something.tex: \m{Something}
        whatever.tex:  \m{Something}{That}{Does Not}{Matter}

    and I need to sort them alphabetically and print them out as a tree:

        About
            Me (me.tex)
            Site (site.tex)
        Art (art.tex)
            Graphical (graphical.tex)
            Music (music.tex)
                Sheet Music (notes.tex)
        Something (something.tex)
            That
                Does Not
                    Matter (whatever.tex)

    in (X)HTML:

        <ul>
          <li>About</li>
          <ul>
            <li><a href="me.tex">Me</a></li>
            <li><a href="site.tex">Site</a></li>
          </ul>
          <li><a href="art.tex">Art</a></li>
          <ul>
            <li><a href="graphical.tex">Graphical</a></li>
            <li><a href="music.tex">Music</a></li>
            <ul>
              <li><a href="notes.tex">Sheet Music</a></li>
            </ul>
          </ul>
          <li><a href="something.tex">Something</a></li>
          <ul>
            <li>That</li>
            <ul>
              <li>Doesn't</li>
              <ul>
                <li><a href="whatever.tex">Matter</a></li>
              </ul>
            </ul>
          </ul>
        </ul>

    using Ruby without Rails, which means that at least Array.sort and Dir.glob are available. All of my attempts were formed like this (as this part should work just fine):

        def fss_brace_array(ss_input)
          # a concise version of another function; converts {1}{2}...{n}
          # into an array [1, 2, ..., n] or returns an empty array
          ss_output = ss_input[1].scan(%r{\{(.*?)\}})
        rescue
          ss_output = []
        ensure
          return ss_output
        end

        # define tree
        s_handle = File.join(:content.to_s, "*")
        Dir.glob("#{s_handle}.tex").each do |s_handle|
          File.open(s_handle, "r") do |f_handle|
            while s_line = f_handle.gets
              if s_all = s_line.match(%r{\\m\{(\{.*?\})+\}})
                s_all = s_all.to_a
                # do something with tree, fss_brace_array(s_all) and s_handle
                break
              end
            end
          end
        end
        # do something else with tree
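
    A minimal sketch of one workable shape for the missing middle, under the assumption that each file yields an array of labels (what fss_brace_array returns) plus its filename: fold the label arrays into a nested Hash, then walk it with sorted keys, emitting the <ul>/<li> markup as you go:

        # entries: [[labels, filename], ...], e.g. [["Art"], "art.tex"]
        def build_tree(entries)
          root = {}
          entries.each do |labels, file|
            node = root
            labels.each { |label| node = (node[label] ||= {}) }
            node[:file] = file                 # mark nodes that have a document
          end
          root
        end

        def print_tree(node, out = "")
          out << "<ul>\n"
          node.keys.reject { |k| k == :file }.sort.each do |label|
            child = node[label]
            text  = child[:file] ? "<a href=\"#{child[:file]}\">#{label}</a>" : label
            out << "<li>#{text}</li>\n"
            # recurse only when the child has children of its own
            print_tree(child, out) unless child.keys.reject { |k| k == :file }.empty?
          end
          out << "</ul>\n"
        end

        entries = [[["Art"], "art.tex"], [["Art", "Music"], "music.tex"],
                   [["About", "Me"], "me.tex"]]
        puts print_tree(build_tree(entries))

    Sorting happens at print time on each node's keys, so the order in which the files are read never matters.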

    Read the article

  • Django Multi-Table Inheritance VS Specifying Explicit OneToOne Relationship in Models

    - by chefsmart
    Hope all this makes sense :) I'll clarify via comments if necessary. Also, I am experimenting with using bold text in this question, and will edit it out if I (or you) find it distracting. With that out of the way... Using django.contrib.auth gives us User and Group, among other useful things that I can't do without (like basic messaging). In my app I have several different types of users. A user can be of only one type. That would easily be handled by groups, with a little extra care. However, these different users are related to each other in hierarchies / relationships. Let's take a look at these users:

        Principals - "top level" users
        Administrators - each administrator reports to a Principal
        Coordinators - each coordinator reports to an Administrator

    Apart from these there are other user types that are not directly related, but may get related later on. For example, "Company" is another type of user, and can have various "Products", and products may be supervised by a "Coordinator". "Buyer" is another kind of user that may buy products. Now all these users have various other attributes, some of which are common to all types of users and some of which are distinct to only one user type. For example, all types of users have to have an address. On the other hand, only the Principal user belongs to a "BranchOffice". Another point, which was stated above, is that a user can only ever be of one type. The app also needs to keep track of who created and/or modified Principals, Administrators, Coordinators, Companies, Products etc. (So that's two more links to the User model.) In this scenario, is it a good idea to use Django's multi-table inheritance as follows:

        from django.contrib.auth.models import User

        class Principal(User):
            # ...
            branchoffice = models.ForeignKey(BranchOffice)
            landline = models.CharField(blank=True, max_length=20)
            mobile = models.CharField(blank=True, max_length=20)
            created_by = models.ForeignKey(User, editable=False, blank=True,
                                           related_name="principalcreator")
            modified_by = models.ForeignKey(User, editable=False, blank=True,
                                            related_name="principalmodifier")
            # ...

    Or should I go about doing it like this:

        class Principal(models.Model):
            # ...
            user = models.OneToOneField(User, blank=True)
            branchoffice = models.ForeignKey(BranchOffice)
            landline = models.CharField(blank=True, max_length=20)
            mobile = models.CharField(blank=True, max_length=20)
            created_by = models.ForeignKey(User, editable=False, blank=True,
                                           related_name="principalcreator")
            modified_by = models.ForeignKey(User, editable=False, blank=True,
                                            related_name="principalmodifier")
            # ...

    Please keep in mind that there are other user types that are related via foreign keys, for example:

        class Administrator(models.Model):
            # ...
            principal = models.ForeignKey(Principal,
                help_text="The supervising principal for this Administrator")
            user = models.OneToOneField(User, blank=True)
            province = models.ForeignKey(Province)
            landline = models.CharField(blank=True, max_length=20)
            mobile = models.CharField(blank=True, max_length=20)
            created_by = models.ForeignKey(User, editable=False, blank=True,
                                           related_name="administratorcreator")
            modified_by = models.ForeignKey(User, editable=False, blank=True,
                                            related_name="administratormodifier")

    I am aware that Django does use a one-to-one relationship for multi-table inheritance behind the scenes. I am just not qualified enough to decide which is the sounder approach.

    Read the article

  • When is ¦ not equal to ¦?

    - by Trey Jackson
    Background: I'm working with netlists, and in general people specify different hierarchy levels by using /. However, it's not illegal to use a / as part of an instance name. For example, X1/X2/X3/X4 might refer to an instance X4 inside another instance named X1/X2/X3, or it might refer to an instance named X3/X4 inside an instance named X2 inside an instance named X1. Got it? There's really no "regular" character that cannot be used as part of an instance name, so you resort to a non-printable one, or perhaps one outside the standard 0..127 ASCII range. I thought I'd try (decimal) 166, because for me it shows up as the pipe-like broken bar: ¦.

    So I've got some C++ code which constructs the path name using ¦ as the hierarchy separator, so the path above looks like X1¦X2/X3¦X4. The GUI is written in Tcl/Tk, and to translate this into something human-readable I need to do something like the following:

        set path [getPathFromC++] ;# returns X1¦X2/X3¦X4
        set humanreadable [join [split $path ¦] /]

    Basically, replace the ¦ with / (I could also accomplish this with [string map]). The problem is that the ¦ in the string I get from C++ doesn't match the ¦ I can create in Tcl, i.e. this fails:

        set path [getPathFromC++] ;# returns X1¦X2/X3¦X4
        string match $path [format X1%cX2/X3%cX4 166 166]

    Visually, the two strings look identical, but string match fails. I even used scan to see if I'd mixed up the bit values:

        set path [getPathFromC++] ;# returns X1¦X2/X3¦X4
        set path2 [format X1%cX2/X3%cX4 166 166]
        for {set i 0} {$i < [string length $path]} {incr i} {
            set p [string range $path $i $i]
            set p2 [string range $path2 $i $i]
            scan $p %c c
            scan $p2 %c c2
            puts [list $p $c :::: $p2 $c2 equal? [string equal $c $c2]]
        }

    The output looks like everything should match, yet [string equal] fails for the ¦ characters with a line like:

        ¦ 166 :::: ¦ 166 equal? 0

    For what it's worth, the separator in C++ is defined as:

        const char SEPARATOR = 166;

    Any ideas why a character outside the regular ASCII range would fail like this? When I changed the separator to (decimal) 28 (^\), things worked fine. I just don't want to get bitten by a similar problem on a different platform. (I'm currently using Red Hat Linux.)
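    A plausible culprit (an assumption here, not something the question confirms) is an encoding mismatch: the C++ side hands over the single byte 166, which is ¦ only under ISO-8859-1 (Latin-1), while a Tcl reading the data as UTF-8 expects that character as a two-byte sequence. A minimal Python sketch of why two visually identical ¦ characters can still compare unequal:

        # What the C++ side emits: one raw byte with value 166.
        raw = bytes([166])

        # Under Latin-1 that byte *is* U+00A6 (BROKEN BAR, the '¦' glyph)...
        assert raw.decode("latin-1") == "\u00a6"

        # ...but under UTF-8 the same code point is a two-byte sequence,
        # so the raw byte never matches the properly encoded character.
        assert "\u00a6".encode("utf-8") == b"\xc2\xa6"
        assert raw != "\u00a6".encode("utf-8")

        print("byte 166 is", raw.decode("latin-1"), "in Latin-1, but invalid on its own in UTF-8")

    If that is the cause, forcing the interface encoding on the Tcl side - e.g. encoding convertfrom iso8859-1 $data on the received string, or fconfigure $chan -encoding iso8859-1 on the channel - should make the two representations agree.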

    Read the article

  • Writing a managed wrapper for unmanaged (C++) code - custom types/structs

    - by Bobby
    faacEncConfigurationPtr FAACAPI faacEncGetCurrentConfiguration(faacEncHandle hEncoder);

    I'm trying to come up with a simple wrapper for this C++ library; I've never done more than very simple P/Invoke interop before - like one function call with primitive arguments. So, given the above C++ function, what should I do to deal with the return type and the parameter? FAACAPI is defined as:

        #define FAACAPI __stdcall

    faacEncConfigurationPtr is defined as:

        typedef struct faacEncConfiguration
        {
            int version;
            char *name;
            char *copyright;
            unsigned int mpegVersion;
            unsigned long bitRate;
            unsigned int inputFormat;
            int shortctl;
            psymodellist_t *psymodellist;
            int channel_map[64];
        } faacEncConfiguration, *faacEncConfigurationPtr;

    AFAIK this means the function returns a pointer to this struct? And faacEncHandle is:

        typedef struct {
            unsigned int numChannels;
            unsigned long sampleRate;
            ...
            SR_INFO *srInfo;
            double *sampleBuff[MAX_CHANNELS];
            ...
            double *freqBuff[MAX_CHANNELS];
            double *overlapBuff[MAX_CHANNELS];
            double *msSpectrum[MAX_CHANNELS];
            CoderInfo coderInfo[MAX_CHANNELS];
            ChannelInfo channelInfo[MAX_CHANNELS];
            PsyInfo psyInfo[MAX_CHANNELS];
            GlobalPsyInfo gpsyInfo;
            faacEncConfiguration config;
            psymodel_t *psymodel;
            /* quantizer specific config */
            AACQuantCfg aacquantCfg;
            /* FFT Tables */
            FFT_Tables fft_tables;
            int bitDiff;
        } faacEncStruct, *faacEncHandle;

    So within that struct we see a lot of other types... hmm. Essentially, I'm trying to figure out how to deal with these types in my managed wrapper. Do I need to create versions of these types/structs in C#? Something like this:

        [StructLayout(LayoutKind.Sequential)]
        struct faacEncConfiguration
        {
            uint useTns;
            ulong bitRate;
            ...
        }

    If so, can the runtime automatically "map" these objects onto each other? And would I have to create these "mapped" types for all the types in these return-type/parameter-type hierarchies, all the way down until I get to primitives? I know this is a broad topic; any advice on getting up to speed quickly on what I need to learn to make this happen would be very much appreciated. Thanks!
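    The question targets C#, but the usual answer - mirror only the struct you actually read, and leave nested pointer types opaque rather than mapping the whole hierarchy down to primitives - can be sketched in Python's ctypes, using the faacEncConfiguration fields quoted above (taken from the question, not re-checked against the real faac headers):

        import ctypes

        # Mirrors the native layout field-for-field, in declaration order
        # (the moral equivalent of [StructLayout(LayoutKind.Sequential)]).
        class FaacEncConfiguration(ctypes.Structure):
            _fields_ = [
                ("version",      ctypes.c_int),
                ("name",         ctypes.c_char_p),
                ("copyright",    ctypes.c_char_p),
                ("mpegVersion",  ctypes.c_uint),
                ("bitRate",      ctypes.c_ulong),
                ("inputFormat",  ctypes.c_uint),
                ("shortctl",     ctypes.c_int),
                # Opaque: we never dereference it, so there is no need to
                # spell out psymodellist_t (IntPtr plays this role in C#).
                ("psymodellist", ctypes.c_void_p),
                ("channel_map",  ctypes.c_int * 64),  # fixed-size inline array
            ]

        # The encoder handle can stay completely opaque as well:
        FaacEncHandle = ctypes.c_void_p

    The same move should carry over to the C# wrapper: declare the function as returning IntPtr, treat faacEncHandle as an opaque IntPtr, and copy the configuration into a managed mirror only when you need to read it (e.g. with Marshal.PtrToStructure).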

    Read the article

  • LLBLGen Pro v3.1 released!

    - by FransBouma
    Yesterday we released LLBLGen Pro v3.1! Version 3.1 comes with new features and enhancements, which I'll describe briefly below. v3.1 is a free upgrade for v3.x licensees.

    What's new / changed in the Designer:

        - Extensible import system. An extensible import system has been added to the designer to import project data from external sources. Importers are plug-ins which import project meta-data (like entity definitions, mappings and relational model data) from an external source into the loaded project. v3.1 includes an importer plug-in for importing project elements from existing LLBLGen Pro v3.x project files. You can use this importer to create source projects from which you import parts of models to build your actual project with.
        - Model-only relationships. In v3.1, relationships of the type 1:1, m:1 and 1:n can be marked as model-only. A model-only relationship isn't required to have a backing foreign key constraint in the relational model data. They're ideal for projects which have to work with relational databases where changes can't always be made, or where some relationships can't be added to the relational model even though they're important for the entity model.
        - Custom field ordering. Although fields in an entity definition don't really have an ordering, in some situations it's important to have the entity fields in a given order, e.g. when you use compound primary keys. Field ordering can be defined using a pop-up dialog which can be opened in various ways, e.g. from the project explorer, model view and entity editor. It can also be set automatically during refreshes, based on new settings.
        - Command line relational model data refresher tool, CliRefresher.exe. The command line refresh tool shipped with v2.6 is now available for v3.1 as well.
        - Navigation enhancements in various designer elements. It's now easier to find elements like entities and typed views in the project explorer from editors, to navigate to related entities by right-clicking a relationship, to the super-type by right-clicking an entity, and to the sub-type by right-clicking a sub-type node in the project explorer.
        - Minor visual enhancements / tweaks.

    What's new / changed in the LLBLGen Pro Runtime Framework:

        - Entity creation is now up to 30% faster and takes 5% less memory. Tweaks inside the framework make instantiating an entity object up to 30% faster and up to 5% lighter on memory than in v3.0.
        - Prefetch Path node merging is now up to 20-25% faster. Setting entity references used to require creating a new relationship object. As this relationship object is only used internally (for syncing), it can be cached, which speeds up the merging functionality by 20-25%.
        - Entity fetches are now up to 20% faster. A large number of tweaks have been applied to make entity fetches up to 20% faster than in v3.0.
        - Full WCF RIA support. It's now possible to use your LLBLGen Pro runtime framework powered domain layer in a WCF RIA application using the VS.NET tools for WCF RIA Services. WCF RIA Services is a Microsoft technology for .NET 4, typically used in Silverlight applications.
        - SQL Server DQE compatibility level is now per instance (usable in Adapter). It's now possible to set the compatibility level of the SQL Server Dynamic Query Engine (DQE) per DQE instance instead of only through the global setting. The global setting is still available and serves as the default for the per-instance level. You can use this to switch between CE Desktop and normal SQL Server compatibility per DataAccessAdapter instance.
        - Support for the COUNT_BIG aggregate function (SQL Server specific). COUNT_BIG has been added to the list of aggregate functions available in the framework.
        - Minor changes / tweaks.

    I'm especially pleased with the import system, as it makes working with entity models a lot easier. The import system lets you import from another LLBLGen Pro v3 project any entity definition, mapping and/or meta-data like table definitions. This way you can build repository projects where you store model fragments, e.g. the building blocks for a customer-order system, a user-credential model - any model you can think of. In most projects you'll recognize that some parts of your new model look familiar, and in those cases it's easier to import those parts from projects you created earlier. With LLBLGen Pro v3.1 you can.

    For example, say you have an Oracle schema called CRM which contains the bread 'n' butter customer-order-product kind of model. You create an entity model from that schema and save it in a project file. Now you start working on another project for another customer, and this time you have to use SQL Server. You also adopt model-first development, building the entity model from scratch as there's no existing database. Since this customer also requires a CRM-like entity model, you import the entities from your saved Oracle project into the new SQL Server-targeting project. Because you don't work with Oracle this time, you don't import the relational meta-data, just the entities, their relationships and possibly their inheritance hierarchies, if any. As they're now entities in your project, you can change them a bit to match the new customer's requirements. This can save you a lot of time, because you can re-use pre-fab model fragments in new projects. In the example above there are no tables yet (as you work model-first), so the forward-mapping capabilities of LLBLGen Pro v3 create the tables, PK constraints, unique constraints and FK constraints for you. This way you can build a nice repository of model fragments which you can re-use in new projects.

    Read the article

< Previous Page | 2 3 4 5 6 7  | Next Page >