Search Results

Search found 32254 results on 1291 pages for 'model view presenter'.


  • Online Introduction to Relational Databases (and not only) with Stanford University!

    - by Luca Zavarella
    How many of you know the exact definition of "relational database"? What exactly does the adjective "relational" refer to? Many people let themselves be deceived into thinking it refers to foreign key constraints between tables. In fact, the adjective comes from a world based on set theory and relational algebra, where a relation is a set of tuples, in other words a table. For those who want to dig deeper into the fundamentals of the relational model, relational algebra, XML, OLAP and the emerging "NoSQL" systems, Stanford University School of Engineering offers a free public online introductory course on databases. This is the related web page: http://www.db-class.com/ The course lasts two months and ends with a final exam; passing the final exam entitles participants to receive a statement of accomplishment. A syllabus and more information are available here. Happy eLearning to you!

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #039

    - by Pinal Dave
    Here is a list of selected articles from SQLAuthority.com across all these years. Instead of listing every article, I have picked a few of my favorites and listed them here with additional notes. Let me know which one of the following is your favorite article from memory lane.

    2007

    FQL – Facebook Query Language. Facebook lists the following advantages of FQL: condensed XML reduces bandwidth and parsing costs; more complex requests can reduce the number of requests necessary; it provides a single, consistent, unified interface for all of your data; and it's fun!

    UDF – Get the Day of the Week Function. The day of the week can be retrieved in SQL Server by using the DATEPART function. The value returned by the function is between 1 (Sunday) and 7 (Saturday). To convert this to a string representing the day of the week, use a CASE statement (a quick sketch follows at the end of this post).

    UDF – Function to Get Previous And Next Work Day – Exclude Saturday and Sunday. While reading Ben Nadel's ColdFusion blog post Getting the Previous Day In ColdFusion, Excluding Saturday And Sunday, I realized that I use a similar function on my SQL Server database. This function excludes the weekends (Saturday and Sunday) and gets the previous as well as the next work day.

    Complete Series of SQL Server Interview Questions and Answers: Data Warehousing Interview Questions and Answers – Introduction, Part 1, Part 2, Part 3, and the Complete List Download.

    2008

    Introduction to Log Viewer. In SQL Server, all the Windows event logs can be seen along with the SQL Server logs. The interface for all the logs is the same, and they can be launched from the same place. The logs can be exported and filtered as well.

    DBCC SHRINKFILE Takes Long Time to Run. If you are a DBA involved with database maintenance and file group maintenance, you have probably noticed that DBCC SHRINKFILE operations often take a long time while other database operations are relatively quick.

    mssqlsystemresource – Resource Database. The purpose of the resource database is to make upgrading to a new version of SQL Server hassle-free. In previous versions, whenever SQL Server was upgraded, all of the previous version's system objects had to be dropped and the new version's system objects created.

    2009

    Puzzle – Write Script to Generate Primary Key and Foreign Key. In SQL Server Management Studio (SSMS) there is no option to script all the keys; if you need to script keys you have to script each key manually, one at a time. If a database has many tables, generating one key at a time can be a very intricate task. I want to throw a question out to all of you: do any of you have scripts for this purpose?

    Maximizing View of SQL Server Management Studio – Full Screen – New Screen. I explained two different methods: 1) Open Results in Separate Tab – an interesting method where the result pane shows up in a different tab instead of splitting the screen horizontally; 2) Open SSMS in Full Screen – this always works, and works well. Not many people are aware of this method; hence, very few people use it.

    2010

    Find Queries using Parallelism from Cached Plan. A T-SQL script that gets all the queries, and their execution plans, where parallelism operations are kicked off. Note that TOP 10 is used; if you have lots of transactional operations, I suggest you change TOP 10 to TOP 50.

    This is the list of all the articles in the series on computed columns: SQL SERVER – Computed Column – PERSISTED and Storage (how computed columns are created and why they take more storage space than before); SQL SERVER – Computed Column – PERSISTED and Performance (how PERSISTED columns give better performance than non-persisted columns); SQL SERVER – Computed Column – PERSISTED and Performance – Part 2 (how non-persisted columns give better performance than PERSISTED columns); SQL SERVER – Computed Column and Performance – Part 3 (how an index improves the performance of computed columns); SQL SERVER – Computed Column – PERSISTED and Storage – Part 2 (how creating an index on a computed column does not grow the row length of the table); SQL SERVER – Computed Columns – Index and Performance (a summary of all the articles on computed columns).

    2011

    SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Data Warehousing Concepts – Day 21 of 31. What is Data Warehousing? What is Business Intelligence (BI)? What is a Dimension Table? What is Dimensional Modeling? What is a Fact Table? What are the Fundamental Stages of Data Warehousing? What are the Different Methods of Loading Dimension Tables? Describe the Foreign Key Columns in a Fact Table and a Dimension Table. What is Data Mining? What is the Difference between a View and a Materialized View?

    Day 22 of 31. What is OLTP? What is OLAP? What is the Difference between OLTP and OLAP? What is ODS? What is an ER Diagram?

    Day 23 of 31. What is ETL? What is VLDB? Is an OLTP Database Design Optimal for a Data Warehouse? If denormalizing improves data warehouse processes, why is the fact table in normal form? What are Lookup Tables? What are Aggregate Tables? What is Real-Time Data Warehousing? What are Conformed Dimensions? What is a Conformed Fact? How do you Load the Time Dimension? What is the Level of Granularity of a Fact Table? What are Non-Additive Facts? What is a Factless Fact Table? What are Slowly Changing Dimensions (SCD)?

    Day 24 of 31. What is a Hybrid Slowly Changing Dimension? What is a BUS Schema? What is a Star Schema? What is a Snowflake Schema? What are the Differences between the Star and Snowflake Schemas? What is the Difference between ER Modeling and Dimensional Modeling? What is a Degenerate Dimension Table? Why is Data Modeling Important? What is a Surrogate Key? What is a Junk Dimension? What is a Data Mart? What is the Difference between OLAP and a Data Warehouse? What are a Cube and a Linked Cube with Reference to a Data Warehouse? What is a Snapshot with Reference to a Data Warehouse? What is Active Data Warehousing? What is the Difference between Data Warehousing and Business Intelligence? What is MDS? Explain the Paradigms of Bill Inmon and Ralph Kimball.

    SQL SERVER – Azure Interview Questions and Answers – Guest Post by Paras Doshi – Day 25 of 31. Paras Doshi submitted 21 interesting questions and answers for SQL Azure: 1. What is SQL Azure? 2. What is cloud computing? 3. How is SQL Azure different from SQL Server? 4. How many replicas are maintained for each SQL Azure database? 5. How can we migrate from SQL Server to SQL Azure? 6. Which tools are available to manage SQL Azure databases and servers? 7. Tell me something about security and SQL Azure. 8. What is the SQL Azure Firewall? 9. What is the difference between the web edition and the business edition? 10. How do we synchronize on-premise SQL Server with SQL Azure? 11. How do we back up SQL Azure data? 12. What is the current pricing model of SQL Azure? 13. What is the current limit on the size of a SQL Azure DB? 14. How do you handle datasets larger than 50 GB? 15. What happens when a SQL Azure database reaches its maximum size? 16. How many databases can we create in a single server? 17. How many servers can we create in a single subscription? 18. How do you improve the performance of a SQL Azure database? 19. What is code-near application topology? 20. What were the latest updates to the SQL Azure service? 21. When does a workload on SQL Azure get throttled?

    SQL SERVER – Interview Questions and Answers – Guest Post by Malathi Mahadevan – Day 26 of 31. Malathi asked a simple question which has several answers. Each answer makes you think and ponder the realities of the IT world. Look at the simple question – 'What is the toughest challenge you have faced in your present job and how did you handle it?' – and its various answers. Each answer has its own story.

    SQL SERVER – Interview Questions and Answers – Guest Post by Rick Morelan – Day 27 of 31. Rick Morelan of Joes2Pros has written an excellent blog post on how to find the top N values. Most people are fully aware of how the TOP keyword works with a SELECT statement. After years of preparing so many students to pass the SQL certification, I noticed they were pretty well prepared for job interviews too. Yes, they would do well in the interview, but not great; there seemed to be a few questions that would come up repeatedly for almost everyone. Rick addresses similar questions in his lucid writing style.

    2012

    Observation of Top with Index and Order of Resultset. SQL Server has lots of things to learn and share. It is amazing to see how differently people evaluate and understand techniques and styles when implementing them. The real reason may be something else entirely, yet we may blame something totally different for the incorrect results. Read the blog post to learn more.

    How do I Record Video and Webcast.

    How to Convert Hex to Decimal or INT. Earlier I asked a question about how to convert hex to decimal. I promised that I would post an answer with due credit to the author, but never got around to writing that blog post. Read the original post here: SQL SERVER – Question – How to Convert Hex to Decimal.

    Query to Get Unique Distinct Data Based on Condition – Eliminate Duplicate Data from Resultset. The natural reaction is to suggest DISTINCT or GROUP BY. However, not every question can be solved by DISTINCT or GROUP BY. Let us look at an example where a user wanted only the latest records to be displayed.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
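    As a quick illustration of the 2007 day-of-the-week tip above, a minimal sketch of such a function might look like this (assuming the default DATEFIRST setting, where DATEPART(dw, ...) returns 1 for Sunday; the function name is made up, and the original article's UDF may differ in its details):

    CREATE FUNCTION dbo.GetDayOfWeekName (@Date DATETIME)
    RETURNS VARCHAR(9)
    AS
    BEGIN
        -- DATEPART(dw, ...) returns 1 (Sunday) through 7 (Saturday) by default
        RETURN CASE DATEPART(dw, @Date)
            WHEN 1 THEN 'Sunday'
            WHEN 2 THEN 'Monday'
            WHEN 3 THEN 'Tuesday'
            WHEN 4 THEN 'Wednesday'
            WHEN 5 THEN 'Thursday'
            WHEN 6 THEN 'Friday'
            WHEN 7 THEN 'Saturday'
        END
    END

    -- usage: SELECT dbo.GetDayOfWeekName(GETDATE())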

    Read the article

  • Oracle OpenWorld Healthcare Integration Session Highlights Challenges & Solutions

    - by Nitesh Jain
    Today’s session, co-presented by Steve Schenks, Integration Architect from Ascension Health, and Oracle’s Sundar Shenbagam and Suresh Sharma (apparently your initials must be SS to present during this session), offered interesting insights into many different areas, including Steve’s description of the challenges with their previous environment: disparate hardware and software is an issue common across healthcare and most other industries…Larry Ellison spoke on this topic during Sunday’s keynote address. In the last part of the session, Suresh plans to go over some of the best practices and lessons learned in implementing successful healthcare applications, and will discuss the different options for modeling sequencing (FIFO) use cases, one of the most common use cases in the provider market. The session was “Implementing Successful Healthcare Applications with Oracle SOA Suite” – Session # CON8546. For more information about this session, please contact Senior Principal Product Manager Suresh Sharma. Ref: https://blogs.oracle.com/SOA/entry/oracle_openworld_healthcare_integration_session

    Read the article

  • Oracle on Oracle: Is that all?

    - by Darin Pendergraft
    On October 17th, I posted a short blog and a podcast interview with Chirag Andani, talking about how Oracle IT uses its own IDM products. Blog link here.

    In response, I received a comment from reader Jaime Cardoso ([email protected]) who posted:

    “- You could have talked about how by deploying Oracle's Open standards base technology you were able to integrate any new system in your infrastructure in days.
    - You could have talked about how by deploying federation you were enabling the business side to keep all their options open in terms of companies to buy and sell while maintaining perfect employee and customer's single view.
    - You could have talked about how you are now able to cut response times to your audit and security teams into 1/10th of your former times
    Instead you spent 6 minutes talking about single sign on and self provisioning? If I didn't knew your IDM offer so well I would now be wondering what its differences from Microsoft's offer was. Sorry for not giving a positive comment here but, please your IDM suite is very good and, you simply aren't promoting it well enough”

    So I decided to send Jaime a note asking him about his experience, and to get his perspective on what makes the Oracle products great. What I found out is that Jaime is a very experienced IDM architect with several major projects under his belt.

    Darin Pendergraft: Can you tell me a bit about your experience? How long have you worked in IT, and what is your IDM experience?

    Jaime Cardoso: I started working in "serious" IT in 1998 when I became Netscape's technical specialist in Portugal. Netscape Portugal didn't exist, so I was working for their VAR here. Most of my work at the time was with Netscape's mail server and LDAP server. Since that time I've been bouncing between the systems side (Sun resellers, Solaris work, and even a stint with Sun Engineering on the making of a hierarchical storage product, Sun CIS if you know it) and the applications side, mostly in LDAP and IDM. Over the years I've done support, service delivery and pre-sales / architecture design of IDM solutions for most of the big customers in Portugal. To name a few projects: the first European deployment of Sun Access Manager (SAPO – Portugal Telecom); the identity repository of 5 out of 5 of the biggest Portuguese banks; the Portuguese government's federation of services project.

    DP: OK, in your blog response, you mentioned 3 topics: 1. Using Oracle's standards-based architecture, you were able to integrate any new system in days: can you give an example? What systems, how long did it take, number of apps/users/accounts/roles etc.?

    JC: It's relatively easy to design a user management strategy for a static environment, or if you simply assume that you're an <insert vendor here> shop and all your systems will bow to that vendor's will. We've all seen that path, the use of proprietary technologies in interoperability solutions, but then reality kicks in. At an ISP, I recall that I made the technical decision to use Active Directory as the central authentication system for the entire IT infrastructure. Clients, systems, apps: everything was there. As a good part of the systems and apps were running on UNIX, a connector became needed in order to have the UNIX boxes authenticate against AD. And that strategy worked, but each new machine required the component to be installed, monitoring had to be set up for that component, and each new app had to be independently certified. A self-care user portal was an ongoing project; AD access assumes the client is inside the domain, something the ISP's customers (and UNIX boxes) weren't, nor had any intention of ever being. When the Windows 2008 rollout was done, Microsoft changed the Active Directory interface. The Windows administrators didn't have enough know-how about directories and the way systems outside the MS world behaved, so on go-live things weren't properly tested and a general outage followed. Several hours and one rollback later, everything was back working, but the ISP still had to change all of its applications to work with the new access methods and redo the effort spent on the self-service user portal. To keep to the same strategy, they would also have to trust Microsoft not to change interfaces again. Simply by putting an Oracle LDAP server in the middle and replicating the user info from AD into LDAP, most of the problems went away. Even systems for which no AD connector existed had PAM in them, so integration was made at the OS level, fully supported by the OS supplier. Sun Identity Manager already had a self-care portal, combined with a user workflow, so all the clearances had to be given before an account was created or updated. Adding a new system as a client for these authentication services became simply a new checkbox in the OS installer, and even Tru64 systems were, for the first time, integrated with 5 minutes' work by a junior system admin. True, all the Windows clients and MS apps still went to AD for their authentication needs, so from the start everybody knew that they weren't 100% free of migration pains, but now they had a single point of problems to look at. If you're looking for numbers: 500K directory entries (users), 200-300 systems. After the initial setup, I personally integrated about 20 systems/apps against LDAP in 1 day while being watched by the different IT teams. The internal IT staff did the rest.

    DP: 2. Using federation allows the business to keep options open for buying and selling companies, and yet maintain a single view of both employees and customers. What do you mean by this? Can you give an example?

    JC: The market is dynamic. The company that's being bought today will tomorrow be sold again. Companies that spread across different markets may see the regulator force a sale of part of the business for monopoly reasons, and companies that operate in multiple countries have to comply with different legislation. Our job as IT architects, when addressing customer and employee authentication services, is quite hard, and quite contradictory. On the one hand, we need to give all of our employees access to the relevant systems, apps and resources, and we already have marketing talking with us, trying to find out who is a customer of the bought company but not yet one of ours. On the other hand, we have to do that keeping in mind that we may have to break up all that effort, and that the legislation of different countries may become a problem for a full integration plan. That's a job for user federation. You don't want to be the one telling your president that he will sell that business unit without its customer database (making the deal worth a lot less), or that the buyer will take with him a copy of your entire customer database. Federation enables you to start controlling permissions for users outside of your traditional authentication realm. So what if the people of that company you just bought are keeping their old logins? Do you want, because of that, to have a dedicated system for their expense reports? And do you want to keep their sales (and pre-sales) people out of the loop in terms of your group's direction? Control the information flow, establish a federation trust circle, and give access to your apps to users who haven't (yet?) been brought into your internal login systems. You can still see your users in a unified view, and you obviously control whether a user has access to any particular application, whether that user is in your local database or stored in a directory on the other side of the world.

    DP: 3. Cut response times of audit and security teams to 1/10th. Is this a real number? Can you give an example?

    JC: No, I don't have any backing for this number. One of the companies I did system administration for has a SOX compliance policy in place (I remind you that I live in Portugal, so this definition of SOX may be somewhat different from what you're used to) and, every time the audit team says they'll do another audit, we have to negotiate the size of the sample with them, and we spend about 15 man-days gathering all the required info they ask for. I did some work with Sun's Identity Auditor and, from what I've been seeing, Oracle's product is even better; most of the information they ask for would have been provided in a few hours with the help of this tool. I do stand by what I said here but, to be honest, someone from the Identity Auditor team would do a much better job than me of explaining these time savings.

    Jaime is right: the Oracle IDM products have a lot of business value, and Oracle IT is using them for a lot more than I was able to cover in the short podcast that I posted. I want to thank Jaime for his comments and perspective. We want these blog posts to be informative and honest, so if you have feedback for the Oracle IDM team on any topic discussed here, please post your comments below.

    Read the article

  • Bluetooth drivers for Dell Latitude D830

    - by blomqvist
    I haven’t been using Bluetooth on my D830 before, but now I got myself a Bluetooth mouse in order to get rid of the receiver for my wireless mouse. And then I discovered that Bluetooth no longer worked after I installed Win7. Browsing the Dell support pages (by computer model etc.) found me a driver, but that driver does not work: the installation looks fine, but no devices are found. I kept looking and found this driver, also on Dell's site: http://support.us.dell.com/support/downloads/download.aspx?c=us&l=en&s=gen&releaseid=R231570&formatcnt=1&libid=0&typeid=-1&dateid=-1&formatid=-1&fileid=333170 And that one worked! Problem solved, and I am now happily using my Bluetooth mouse instead.

    Read the article

  • Browser-based MMOs (WebGL, WebSocket)

    - by Alon Gubkin
    Do you think it is technically possible to write a fully-fledged 3D MMO client in browser JavaScript, with WebGL for graphics and WebSocket for networking? Do you think future MMOs (and games generally) will be written with WebGL? Does today's JavaScript performance allow this? Let's say your development team was you as a developer, and one model creator (artist). Would you use a library like SceneJS for the game, or write straight WebGL? If you would use a library other than SceneJS, please specify which. UPDATE (September 2012): RuneScape, a very popular 3D browser-based MMORPG that has used Java applets so far, has announced that it will use HTML5 for its client (source). Java (left) and HTML5 (right)
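    For context, the networking half of the question is the plain browser WebSocket API; the client side of such a game would be built around something like the following sketch (the server URL, message shapes and updateEntities hook are made up for illustration):

    // minimal sketch of a browser MMO client's networking loop (hypothetical protocol)
    var socket = new WebSocket("ws://example.com/game");

    socket.onopen = function () {
        socket.send(JSON.stringify({ type: "join", name: "player1" }));
    };

    socket.onmessage = function (event) {
        var msg = JSON.parse(event.data);
        if (msg.type === "state") {
            // hand the entity positions over to the WebGL renderer
            updateEntities(msg.entities); // hypothetical renderer hook
        }
    };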

    Read the article

  • Token based Authentication for WCF HTTP/REST Services: Authentication

    - by Your DisplayName here!
    This post shows some of the implementation techniques for adding token- and claims-based security to HTTP/REST services written with WCF. For the theoretical background, see my previous post.

    Disclaimer
    The framework I am using/building here is not the only possible approach to tackle the problem. Based on customer feedback and requirements, the code has gone through several iterations to a point where we think it is ready to handle most situations.

    Goals and requirements
    The framework should be able to handle typical scenarios like username/password based authentication as well as token based authentication; it should allow adding new supported token types; it should work with the WCF web programming model, either self-hosted or IIS hosted; and service code should be able to rely on an IClaimsPrincipal on Thread.CurrentPrincipal that describes the client using claims-based identity.

    Implementation overview
    In WCF the main extensibility point for this kind of security work is the ServiceAuthorizationManager. It gets invoked early enough in the pipeline, has access to the HTTP protocol details of the incoming request, and can set Thread.CurrentPrincipal. The job of the SAM is simple: check the Authorization header of the incoming HTTP request; check if a “registered” token (more on that later) is present; if yes, validate the token using a security token handler, create the claims principal (including claims transformation) and set Thread.CurrentPrincipal; if no, set an anonymous principal on Thread.CurrentPrincipal. By default, anonymous principals are denied access, so the request ends here with a 401 (more on that later; a minimal sketch of such an authorization manager appears at the end of this post). To wire up the custom authorization manager you need a custom service host, which in turn needs a custom service host factory. The full object model looks like this:

    Token handling
    A nice piece of existing WIF infrastructure is the security token handlers. Their job is to turn a received security token into a CLR representation, validate the token and turn the token into claims. The way this works with WS-Security based services is that WIF passes the name/namespace of the incoming token to WIF’s security token handler collection. This in turn finds out which token handler can deal with the token and returns the right instance. For HTTP based services we can do something very similar. The scheme on the Authorization header gives the service a hint about how to deal with an incoming token. So the only missing link is a way to associate a token handler (or multiple token handlers) with a scheme, and we are (almost) done. WIF already includes token handlers for a variety of tokens, like username/password or SAML 1.1/2.0. The accompanying sample has an implementation of a Simple Web Token (SWT) token handler, and as soon as JSON Web Tokens are ready, simply adding a corresponding token handler will add support for that token type, too. All supported schemes/token types are organized in a WebSecurityTokenHandlerCollectionManager and passed into the host factory/host/authorization manager. Adding support for basic authentication against a membership provider would e.g.
    look like this (in global.asax):

    var manager = new WebSecurityTokenHandlerCollectionManager();
    manager.AddBasicAuthenticationHandler((username, password) => Membership.ValidateUser(username, password));

    Adding support for Simple Web Tokens with a scheme of Bearer (the current OAuth2 scheme) requires passing in an issuer, audience and signature verification key:

    manager.AddSimpleWebTokenHandler(
        "Bearer",
        "http://identityserver.thinktecture.com/trust/initial",
        "https://roadie/webservicesecurity/rest/",
        "WFD7i8XRHsrUPEdwSisdHoHy08W3lM16Bk6SCT8ht6A=");

    In some situations, SAML tokens may be used as well. The following configures SAML support for a token coming from ADFS2:

    var registry = new ConfigurationBasedIssuerNameRegistry();
    registry.AddTrustedIssuer(
        "d1 c5 b1 25 97 d0 36 94 65 1c e2 64 fe 48 06 01 35 f7 bd db",
        "ADFS");

    var adfsConfig = new SecurityTokenHandlerConfiguration();
    adfsConfig.AudienceRestriction.AllowedAudienceUris.Add(
        new Uri("https://roadie/webservicesecurity/rest/"));
    adfsConfig.IssuerNameRegistry = registry;
    adfsConfig.CertificateValidator = X509CertificateValidator.None;

    // token decryption (read from config)
    adfsConfig.ServiceTokenResolver = IdentityModelConfiguration.ServiceConfiguration.CreateAggregateTokenResolver();

    manager.AddSaml11SecurityTokenHandler("SAML", adfsConfig);

    Transformation
    The custom authorization manager will also try to invoke a configured claims authentication manager. This means that the standard WIF claims transformation logic can be used here as well, and, even better, it can also be shared with e.g. a “surrounding” web application.

    Error handling
    A WCF error handler takes care of turning “access denied” faults into 401 status codes, and a message inspector adds the registered authentication schemes to the outgoing WWW-Authenticate header when a 401 occurs. The next post will conclude with authorization as well as the source code download.

    (Wanna learn more about federation, WIF, claims, tokens etc.? Click here.)
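    To make the authorization manager's job list above concrete, here is a minimal sketch of what such a custom ServiceAuthorizationManager could look like. This is not the sample's actual code: the scheme-to-handler dictionary and the use of the .NET 4.5 ClaimsPrincipal/ClaimsIdentity types are simplifications standing in for the WebSecurityTokenHandlerCollectionManager and the WIF token handler machinery described above:

    using System;
    using System.Collections.Generic;
    using System.Net;
    using System.Security.Claims;
    using System.ServiceModel;
    using System.ServiceModel.Channels;
    using System.Threading;

    public class TokenAuthorizationManager : ServiceAuthorizationManager
    {
        // hypothetical map: Authorization scheme (e.g. "Bearer") -> validator returning a principal
        private readonly IDictionary<string, Func<string, ClaimsPrincipal>> _schemeHandlers;

        public TokenAuthorizationManager(IDictionary<string, Func<string, ClaimsPrincipal>> schemeHandlers)
        {
            _schemeHandlers = schemeHandlers;
        }

        protected override bool CheckAccessCore(OperationContext operationContext)
        {
            // 1. check the Authorization header of the incoming HTTP request
            var request = (HttpRequestMessageProperty)operationContext
                .IncomingMessageProperties[HttpRequestMessageProperty.Name];
            var header = request.Headers[HttpRequestHeader.Authorization];

            if (!string.IsNullOrEmpty(header))
            {
                // 2. the header has the form "scheme credential", e.g. "Bearer <token>"
                var parts = header.Split(new[] { ' ' }, 2);
                Func<string, ClaimsPrincipal> validate;

                // 3. if a registered scheme is present, validate the token and set the principal
                if (parts.Length == 2 && _schemeHandlers.TryGetValue(parts[0], out validate))
                {
                    Thread.CurrentPrincipal = validate(parts[1]); // includes claims transformation
                    return true;
                }
            }

            // 4. otherwise set an anonymous principal; anonymous is denied by default -> 401
            Thread.CurrentPrincipal = new ClaimsPrincipal(new ClaimsIdentity());
            return false;
        }
    }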

    Read the article

  • Difference between $ and # in ADF/JSF/JSP

    - by pavan.pvj
    Found this one interesting, so I picked it up from one of the books and am posting it here. JSP 2.1 and JSF 1.2 both use the unified Expression Language. One major, and the most obvious, difference is between $ and #: JSP 2.1 uses $ and JSF 1.2 uses # in an EL expression. $ means immediate evaluation; # means deferred evaluation. The ${...} syntax evaluates expressions eagerly/immediately, which means that the result is returned immediately when the page renders, while the #{...} syntax defers the expression evaluation to a point defined by the implementing technology. For example, ${user.name} is resolved once as the page is rendered, whereas #{user.name} can be re-evaluated at the appropriate points of the JSF lifecycle. In general, JSF uses deferred EL evaluation because of its multiple lifecycle phases in which events are handled. To ensure the model is prepared before the values are accessed by EL, it must defer EL evaluation until the appropriate point in the life cycle. Note: this is taken from the Oracle Fusion Developer Guide (ISBN: 9780071622547). There is also a very good article here: http://java.sun.com/products/jsp/reference/techart/unifiedEL.html

    Read the article

  • Enum driving a Visual State change via the ViewModel

    - by Chris Skardon
    Exciting title, eh? So, here's the problem: I want to use my ViewModel to drive my visual state. I've used the 'DataStateBehavior' before, but the trouble with it is that it only works for bool values, and the minute you jump to more than 2 visual states, you're kind of screwed. A quick search has shown up a couple of points of interest: first, the DataStateSwitchBehavior, which is part of the Expression Samples (on Codeplex), and also available via Pete Blois' blog. The second is to use a DataTrigger with GoToStateAction (from the Silverlight forums). So, onwards… first let's create a basic switch visual state, so, a DataObj with one property: IsAce…

    public class DataObj : NotifyPropertyChanger
    {
        private bool _isAce;
        public bool IsAce
        {
            get { return _isAce; }
            set
            {
                _isAce = value;
                RaisePropertyChanged("IsAce");
            }
        }
    }

    The 'NotifyPropertyChanger' is literally a base class with RaisePropertyChanged, implementing INotifyPropertyChanged. OK, so we then create a ViewModel:

    public class MainPageViewModel : NotifyPropertyChanger
    {
        private DataObj _dataObj;

        public MainPageViewModel()
        {
            DataObj = new DataObj { IsAce = true };
            ChangeAcenessCommand = new RelayCommand(() => DataObj.IsAce = !DataObj.IsAce);
        }

        public ICommand ChangeAcenessCommand { get; private set; }

        public DataObj DataObj
        {
            get { return _dataObj; }
            set
            {
                _dataObj = value;
                RaisePropertyChanged("DataObj");
            }
        }
    }

    Aaaand finally – hook it all up to the XAML, which is a very simple UI: a Rectangle, a TextBlock and a Button. The Button is hooked up to ChangeAcenessCommand, the TextBlock is bound to the 'DataObj.IsAce' property, and the Rectangle has 2 visual states: IsAce and NotAce. To make the Rectangle change its visual state I've used a DataStateBehavior inside the LayoutRoot Grid:

    <i:Interaction.Behaviors>
        <ei:DataStateBehavior Binding="{Binding DataObj.IsAce}" Value="true" TrueState="IsAce" FalseState="NotAce"/>
    </i:Interaction.Behaviors>

    So now we have the button changing the 'IsAce' property and giving us the other visual state. Great! The next stage is to get that to work inside a DataTemplate… which (thankfully) is easy money. All we do is add a ListBox to the View and an ObservableCollection to the ViewModel. Well, ok, a little bit more than that. Once we've got the ListBox with its ItemsSource property set, it's time to add the DataTemplate itself. Again, this isn't exactly taxing, and is purely going to be a Grid with a TextBlock and a Rectangle (again, I'm nothing if not consistent). Though, to be a little jazzy, I've swapped the rectangle to the other side (living the dream). So, all that's left is to add some states to the template.. (yes – you can do that). These can be the same names as the others, or indeed something else; I have chosen to stick with the same names and take the extra confusion hit right on the nose. Once again, I add the DataStateBehavior to the root Grid element:

    <i:Interaction.Behaviors>
        <ei:DataStateBehavior Binding="{Binding IsAce}" Value="true" TrueState="IsAce" FalseState="NotAce"/>
    </i:Interaction.Behaviors>

    The key difference here is the 'Binding' attribute, where I'm now binding to the IsAce property directly, and boom! It's all gravy! So far, so good. We can use boolean values to change the visual states, and (crucially) it works in a DataTemplate, bingo! Now, onwards to the Enum part of this (finally!). Obviously we can't use the DataStateBehavior: it only gives us true/false options. So, let's give the GoToStateAction a go.
    Now, I warn you, things get a bit complex from here. Instead of a bool with 2 values, I'm gonna max it out and bring in an Enum with 3 (count 'em) 3 values: Red, Amber and Green (those of you with exceptionally sharp minds will be reminded of traffic lights). We're gonna have a rectangle which also has 3 visual states, cunningly called 'Red', 'Amber' and 'Green'. A new class called DataObj2:

    public class DataObj2 : NotifyPropertyChanger
    {
        private Status _statusValue;

        public DataObj2(Status status)
        {
            StatusValue = status;
        }

        public Status StatusValue
        {
            get { return _statusValue; }
            set
            {
                _statusValue = value;
                RaisePropertyChanged("StatusValue");
            }
        }
    }

    Where 'Status' is my enum. Good times are here! Ok, so let's get to the beefy stuff. We'll start off in the same manner as last time: we will have a single DataObj2 instance available to the Page and bind to that. Let's add some Triggers (these are in the LayoutRoot again).

    <i:Interaction.Triggers>
        <ei:DataTrigger Binding="{Binding DataObject2.StatusValue}" Value="Amber">
            <ei:GoToStateAction StateName="Amber" UseTransitions="False" />
        </ei:DataTrigger>
        <ei:DataTrigger Binding="{Binding DataObject2.StatusValue}" Value="Green">
            <ei:GoToStateAction StateName="Green" UseTransitions="False" />
        </ei:DataTrigger>
        <ei:DataTrigger Binding="{Binding DataObject2.StatusValue}" Value="Red">
            <ei:GoToStateAction StateName="Red" UseTransitions="False" />
        </ei:DataTrigger>
    </i:Interaction.Triggers>

    So what we're saying here is that when DataObject2.StatusValue is equal to 'Red', we'll go to the 'Red' state. Same deal for Green and Amber (but you knew that already). Hook it all up and start the project. Hmm. Just grey. Not what I wanted. Ok, let's add a 'ChangeStatusCommand', hook that up to a button and give it a whirl: Right, so the DataTrigger isn't picking up the data on load. On the plus side, changing the status is making the visual states change. So, we'll cross the 'grey' hurdle in a bit; what about doing the same in the DataTemplate? <Codey Codey/> Grey again, but if we press the button: (I should mention, pressing the button sets the StatusValue property on the DataObj2 being represented to the next colour). Right. Let's look at this 'grey' issue. First 'fix' (and I use the term 'fix' in a very loose way): The Dispatcher Fix. This involves using the Dispatcher on the View to call something like 'RefreshProperties' on the ViewModel, which will in turn raise all the appropriate 'PropertyChanged' events on the data objects being represented. So, here goes, into turdcode-ville, population: me. First, add the 'RefreshProperties' method to DataObj2:

    internal void RefreshProperties()
    {
        RaisePropertyChanged("StatusValue");
    }

    (shudder) Now, add it to the hosting ViewModel:

    public void RefreshProperties()
    {
        DataObject2.RefreshProperties();
        if (DataObjects != null && DataObjects.Count > 0)
        {
            foreach (DataObj2 dataObject in DataObjects)
                dataObject.RefreshProperties();
        }
    }

    (double shudder) And now for the cream on the cake, adding the following line to the code-behind of the View:

    Dispatcher.BeginInvoke(() => ((MoreVisualStatesViewModel)DataContext).RefreshProperties());

    So, what does this *ahem* code give us: Awesome, it makes the single bound data object show the colour, but frankly ignores the DataTemplate items. This (by the way) is the same output you get from:

    Dispatcher.BeginInvoke(() => ((MoreVisualStatesViewModel)DataContext).ChangeStatusCommand.Execute(null));

    So… where does that leave me?
    What about adding a button to the Page to refresh the properties, maybe it's a timer thing? Yes, that works. Right, what about using the Loaded event then, eh?

    Loaded += (s, e) => ((MoreVisualStatesViewModel)DataContext).RefreshProperties();

    Ahhh. No. What about converting the DataTemplate into a UserControl? Anything is worth a shot.. though I still suspect I'm going to have to 'RefreshProperties' if I want the rectangles to update. Still no. This DataTemplate DataTrigger binding is becoming a bit of a pain… I can't add a 'refresh' button to the actual code base; it's not exactly user friendly. I'm going to end this one now and put some investigating into the use of the DataStateSwitchBehavior (all the ones I've found, well, all 2 of them, work in SL3 but not in 4…)
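    For reference, the post never shows the Status enum or the ChangeStatusCommand itself; a minimal sketch of what they might look like (names assumed from the bindings above, reusing the RelayCommand from the earlier example) is:

    public enum Status
    {
        Red,
        Amber,
        Green
    }

    // in the ViewModel's constructor: cycle Red -> Amber -> Green -> Red
    ChangeStatusCommand = new RelayCommand(() =>
    {
        var next = ((int)DataObject2.StatusValue + 1) % 3;
        DataObject2.StatusValue = (Status)next;
    });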

    Read the article

  • Oracle Business Intelligence Advanced - Hands-on Workshop for Partners - January 18 to 21

    - by Claudia Costa
    Workshop Description

    This FREE hands-on workshop highlights the strengths of OBIEE 11g by giving attendees hands-on experience with the BI 11g product. OBIEE 11g has adopted the standardized infrastructure of Fusion Middleware to provide robust server capability, along with highly anticipated advanced visualization components like maps, Flash-based charts, scorecards and KPIs. This workshop focuses on new features and infrastructure components for BI practitioners who are familiar with either OBIEE 10g or previous BI releases. After taking this course, Oracle Business Intelligence 11g Advanced, you will gain insight into OBIEE 11g technology, reporting solutions and new features. The workshop provides opportunities to practice with the OBIEE 11g environment in hands-on activities. Participants will gain an in-depth understanding of the new architecture of OBIEE 11g, its security model and its installation/configuration, as well as reporting aspects like the new ROLAP/MOLAP-style hierarchical browsing, new chart types, the Action Framework and advanced visualization. If you are a Business Intelligence practitioner familiar with BI 10g, you cannot afford to miss this 3-day workshop. Register Now!

    Presentations: Business Intelligence EE (OBIEE) 11g: Advanced Workshop
    · OBIEE 11g Overview
    · OBIEE 11g Architecture and Infrastructure
    · OBIEE 11g Installation, Configuration and Monitoring
    · OBIEE 11g Security Model and BI Components
    · OBIEE 11g Homepage Overview
    · New Visualizations: Master-Detail Events, Charts, Hierarchies
    · Report Building with OBIEE 11g and Catalog Management
    · Spatial Integration, Action Framework, Scorecards
    · OBIEE 11g Dashboards
    · OBIEE Integration Options

    Lab Outline: Oracle Business Intelligence (OBIEE) 11g: Advanced Workshop

    The labs exercise OBIEE core functionality through hands-on activities and are based on an Oracle VirtualBox image with software and training samples pre-installed. This advanced course leaves a few labs optional during the workshop, allowing students to practice them on their own. The primary purpose of the workshop is to build expertise in the 11g features and the infrastructure changes since 10g. The labs will allow you to explore:
    · The OBIEE 11g architecture
    · The OBIEE differentiators
    · The OBIEE 11g security model
    · OBIEE 11g environment management
    · Report building with OBIEE 11g
    · The OBIEE 11g dashboard and homepage environment
    · New visualization features
    · Management of reports, dashboards and BI Catalog objects

    Audience
    · Business Intelligence Evangelists
    · Business Intelligence Application Developers or Consultants
    · Data Warehouse Developers
    · Enterprise Architects
    · Industry Solutions Architects

    Prerequisites
    · Experience with and understanding of OBIEE 10g is required
    · Good understanding of data modeling for reporting purposes
    · Strong experience with database technologies preferred

    Equipment Requirements

    This workshop requires attendees to provide their own laptops, which must meet the following minimum hardware/software requirements. The OBIEE 11g environment requires at least 3 GB of RAM (4 GB preferred), without which students will not be able to complete the labs. The workshop environment includes a VM image and software components that students will install on their laptops for the labs.
    · Minimum 3 GB RAM, 25 GB free disk space
    · Internet Explorer 7
    · VirtualBox (the latest version), downloadable from http://www.virtualbox.org
    · WinRAR or 7zip, downloadable from http://www.win-rar.com/download.html or http://www.7zip.com/

    Attendees will be given a VirtualBox image for the Oracle BI 11g Workshop containing the software along with the required toolset, database and data sets for the labs.

    Agenda

    This class runs for 3 days.
    9:00am: Sign-in and technical set-up
    9:30am: Workshop starts
    5:00pm: Workshop ends

    Location: Hotel Holiday Inn Express - Porto Salvo - Lisboa

    This class is free. Register early to confirm a seat!

    Oracle BI Advanced 11g Hands-on Workshop - Schedule. Register Now!
    January 11-13, 2011: Kista, Sweden
    January 18-20, 2011: Lisbon, Portugal
    March 1-3, 2011: Reading, Berkshire, UK
    March 15-17, 2011: Colombes, Paris, France
    March 29-31, 2011: Amsterdam, Netherlands

    Questions? For registration questions please send an email to [email protected]. For other information, please contact Claudia Costa, phone: 214235027, or by email.

    Read the article

  • New e-learning course on Business Intelligence

    - by simonsabin
    I just got this from fellow SQL MVP Chris Testa O'Neil: "I am pleased to announce the release of the Author Model eCourse Collection 6233 AE: Implementing and Maintaining Business Intelligence in Microsoft® SQL Server® 2008: Integration Services, Reporting Services and Analysis Services. This 24-hour collection provides you with the skills and knowledge required for implementing and maintaining business intelligence solutions on SQL Server 2008. You will learn about SQL Server technologies such as Integration Services, Analysis Services, and Reporting Services. This collection also helps students prepare for Exam 70-448 and can be accessed from: http://www.microsoft.com/learning/elearning/course/6233.mspx"

    Read the article

  • problem with loading in .FBX meshes in DirectX 10

    - by N0xus
    I'm trying to load meshes into DirectX 10. I've created a bunch of classes that handle it and allow me to call in a mesh with only a single line of code in my main game class. However, when I run the program this is what renders: In the debug output window the following errors keep appearing:

    D3D10: ERROR: ID3D10Device::DrawIndexed: Input Assembler - Vertex Shader linkage error: Signatures between stages are incompatible. The reason is that Semantic 'TEXCOORD' is defined for mismatched hardware registers between the output stage and input stage. [ EXECUTION ERROR #343: DEVICE_SHADER_LINKAGE_REGISTERINDEX ]

    D3D10: ERROR: ID3D10Device::DrawIndexed: Input Assembler - Vertex Shader linkage error: Signatures between stages are incompatible. The reason is that the input stage requires Semantic/Index (POSITION,0) as input, but it is not provided by the output stage. [ EXECUTION ERROR #342: DEVICE_SHADER_LINKAGE_SEMANTICNAME_NOT_FOUND ]

    The thing is, I've no idea how to fix this. The code I'm using does work, and I've simply brought all of it into a new project of mine. There are no build errors, and this only appears when the game is running. The .fx file is as follows:

    float4x4 matWorld;
    float4x4 matView;
    float4x4 matProjection;

    struct VS_INPUT
    {
        float4 Pos:POSITION;
        float2 TexCoord:TEXCOORD;
    };

    struct PS_INPUT
    {
        float4 Pos:SV_POSITION;
        float2 TexCoord:TEXCOORD;
    };

    Texture2D diffuseTexture;
    SamplerState diffuseSampler
    {
        Filter = MIN_MAG_MIP_POINT;
        AddressU = WRAP;
        AddressV = WRAP;
    };

    // Vertex Shader
    PS_INPUT VS( VS_INPUT input )
    {
        PS_INPUT output=(PS_INPUT)0;
        float4x4 viewProjection=mul(matView,matProjection);
        float4x4 worldViewProjection=mul(matWorld,viewProjection);
        output.Pos=mul(input.Pos,worldViewProjection);
        output.TexCoord=input.TexCoord;
        return output;
    }

    // Pixel Shader
    float4 PS(PS_INPUT input ) : SV_Target
    {
        return diffuseTexture.Sample(diffuseSampler,input.TexCoord);
        //return float4(1.0f,1.0f,1.0f,1.0f);
    }

    RasterizerState NoCulling
    {
        FILLMODE=SOLID;
        CULLMODE=NONE;
    };

    technique10 Render
    {
        pass P0
        {
            SetVertexShader( CompileShader( vs_4_0, VS() ) );
            SetGeometryShader( NULL );
            SetPixelShader( CompileShader( ps_4_0, PS() ) );
            SetRasterizerState(NoCulling);
        }
    }

    In my game, the .fx file and model are loaded and set as follows. Loading the shader file:

    //Set the shader flags - BMD
    DWORD dwShaderFlags = D3D10_SHADER_ENABLE_STRICTNESS;
    #if defined( DEBUG ) || defined( _DEBUG )
        dwShaderFlags |= D3D10_SHADER_DEBUG;
    #endif
    ID3D10Blob * pErrorBuffer=NULL;
    if( FAILED( D3DX10CreateEffectFromFile( TEXT("TransformedTexture.fx" ), NULL, NULL, "fx_4_0", dwShaderFlags, 0, md3dDevice, NULL, NULL, &m_pEffect, &pErrorBuffer, NULL ) ) )
    {
        char * pErrorStr = ( char* )pErrorBuffer->GetBufferPointer();
        //If the creation of the Effect fails then a message box will be shown
        MessageBoxA( NULL, pErrorStr, "Error", MB_OK );
        return false;
    }
    //Get the technique called Render from the effect, we need this for rendering later on
    m_pTechnique=m_pEffect->GetTechniqueByName("Render");
    //Number of elements in the layout
    UINT numElements = TexturedLitVertex::layoutSize;
    //Get the Pass description, we need this to bind the vertex to the pipeline
    D3D10_PASS_DESC PassDesc;
    m_pTechnique->GetPassByIndex( 0 )->GetDesc( &PassDesc );
    //Create Input layout to describe the incoming buffer to the input assembler
    if (FAILED(md3dDevice->CreateInputLayout( TexturedLitVertex::layout, numElements, PassDesc.pIAInputSignature, PassDesc.IAInputSignatureSize, &m_pVertexLayout ) ) )
    {
        return false;
    }

    Model loading:
    m_pTestRenderable=new CRenderable();
    //m_pTestRenderable->create<TexturedVertex>(md3dDevice,8,6,vertices,indices);
    m_pModelLoader = new CModelLoader();
    m_pTestRenderable = m_pModelLoader->loadModelFromFile( md3dDevice,"armoredrecon.fbx" );
    m_pGameObjectTest = new CGameObject();
    m_pGameObjectTest->setRenderable( m_pTestRenderable );
    // Set primitive topology, how are we going to interpret the vertices in the vertex buffer
    md3dDevice->IASetPrimitiveTopology( D3D10_PRIMITIVE_TOPOLOGY_TRIANGLELIST );
    if ( FAILED( D3DX10CreateShaderResourceViewFromFile( md3dDevice, TEXT( "armoredrecon_diff.png" ), NULL, NULL, &m_pTextureShaderResource, NULL ) ) )
    {
        MessageBox( NULL, TEXT( "Can't load Texture" ), TEXT( "Error" ), MB_OK );
        return false;
    }
    m_pDiffuseTextureVariable = m_pEffect->GetVariableByName( "diffuseTexture" )->AsShaderResource();
    m_pDiffuseTextureVariable->SetResource( m_pTextureShaderResource );

    Finally, the draw function code:

    //All drawing will occur between the clear and present
    m_pViewMatrixVariable->SetMatrix( ( float* )m_matView );
    m_pWorldMatrixVariable->SetMatrix( ( float* )m_pGameObjectTest->getWorld() );
    //Get the stride (size) of a vertex, we need this to tell the pipeline the size of one vertex
    UINT stride = m_pTestRenderable->getStride();
    //The offset from start of the buffer to where our vertices are located
    UINT offset = m_pTestRenderable->getOffset();
    ID3D10Buffer * pVB=m_pTestRenderable->getVB();
    //Bind the vertex buffer to the input assembler stage
    md3dDevice->IASetVertexBuffers( 0, 1, &pVB, &stride, &offset );
    md3dDevice->IASetIndexBuffer( m_pTestRenderable->getIB(), DXGI_FORMAT_R32_UINT, 0 );
    //Get the Description of the technique, we need this in order to loop through each pass in the technique
    D3D10_TECHNIQUE_DESC techDesc;
    m_pTechnique->GetDesc( &techDesc );
    //Loop through the passes in the technique
    for( UINT p = 0; p < techDesc.Passes; ++p )
    {
        //Get a pass at current index and apply it
        m_pTechnique->GetPassByIndex( p )->Apply( 0 );
        //Draw call
        md3dDevice->DrawIndexed(m_pTestRenderable->getNumOfIndices(),0,0);
        //m_pD3D10Device->Draw(m_pTestRenderable->getNumOfVerts(),0);
    }

    Is there anything I've clearly done wrong, or am I missing something? I've spent two weeks trying to work out what on earth I've done wrong, to no avail. Any insight a fresh pair of eyes could give on this would be great.
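    For reference, the second error means the input layout handed to CreateInputLayout does not declare the semantics the vertex shader expects. An input layout matching the VS_INPUT struct above would look roughly like this (a sketch only; the actual TexturedLitVertex::layout in the project may differ, which is exactly what is worth checking first):

    // semantics and formats here must line up with VS_INPUT in the .fx file;
    // a mismatch produces the #342/#343 linkage errors quoted above
    D3D10_INPUT_ELEMENT_DESC layout[] =
    {
        { "POSITION", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 0,  D3D10_INPUT_PER_VERTEX_DATA, 0 }, // float4 Pos
        { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,       0, 16, D3D10_INPUT_PER_VERTEX_DATA, 0 }, // float2 TexCoord
    };
    UINT numElements = sizeof(layout) / sizeof(layout[0]);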

    Read the article

  • Towards an F# .NET Reflector add-in

    - by CliveT
    When I had the opportunity to spend some time during Red Gate's recent "down tools" week on a project of my choice, the obvious project was an F# add-in for Reflector. To be honest, this was a bit of a misnomer, as the amount of time in the designated week for coding was really less than three days, so it was always unlikely that very much progress would be made in such a small amount of time (and that certainly proved to be the case), but I did learn some things from the experiment.

    Like lots of problems, one useful technique is to take examples, get them to work, and then generalise to get something that works across the board. Unfortunately, I didn't have enough time to do the last stage. The obvious first step is to take a few function definitions, starting with the obvious hello world, moving on to a non-recursive function and finishing with the ubiquitous recursive Fibonacci function.

    let rec printMessage message =
        printfn message

    let foo x =
        (x + 1)

    let rec fib x =
        if (x >= 2) then (fib (x - 1) + fib (x - 2)) else 1

    The major problem in decompiling these simple functions is that Reflector has an in-memory object model that is designed to support object-oriented languages. In particular, it has a return statement that allows function bodies to finish early. I used some of the in-built functionality to take the IL and produce an in-memory object model for the language, but then needed to write a transformer to push the return statements to the top of the tree to make it easy to render the code into a functional language. This tree transform works in some scenarios, but not in others, where we simply regenerate code that looks more like CPS style. The next thing to get working was library-level bindings of values, where these values are calculated at runtime.

    let x = [1 ; 2 ; 3 ; 4]
    let y = List.map (fun x -> foo x) x

    The way that this is translated into a set of classes for the underlying platform means that the code needs to follow references around, from the property exposing the calculated value to the class in which the code for generating the value is embedded. One of the strongest selling points of functional languages is algebraic datatypes, which allow definitions via standard mathematical-style inductive definitions across the union cases.

    type Foo =
        | Something of int
        | Nothing

    type 'a Foo2 =
        | Something2 of 'a
        | Nothing2

    Such a definition is compiled into a number of classes for the cases of the union, which all inherit from a class representing the type itself. It wasn't too hard to get such a decompilation happening in the cases I tried.

    What did I learn from this? Firstly, that there are various bits of functionality inside Reflector that it would be useful to expose to add-in writers. In particular, there are various implementations of the Visitor pattern which implement algorithms such as calculating the number of references for particular variables, and which perform various substitutions that could be more generally useful to add-in writers. I hope to do something about this at some point in the future. Secondly, when you transform a functional language into something that runs on top of an object-based platform, you lose some fidelity in the representation. The F# compiler leaves attributes in place so that tools can tell which classes represent classes from the source program and which are there for purposes of the implementation, allowing the decompiler to regenerate these constructs again.
    However, decompilation technology is a long way from being able to take unannotated IL and transform it into a program in a different language. For a simple function definition, like Fibonacci, I could write a simple static function and have it come out in F# as the same function, but it would be practically impossible to take a mass of class definitions and have a decompiler translate them automatically into an F# algebraic data type.

    What have we got out of this? Some data on the feasibility of implementing an F# decompiler inside Reflector, though it's hard at the moment to say how long this would take to do. The work we did is included in the 6.5 EAP for Reflector, which you can get from the EAP forum. All things considered though, it was a useful way to gain more familiarity with the process of writing an add-in and to understand the difficulties other add-in authors might experience. If you'd like to check out a video of Down Tools Week, click here.
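    As a rough illustration of why that last step is so hard, the Foo union above compiles down to a class shape along these lines (a heavily simplified sketch; the real generated code carries tag fields, attributes and helper members):

    // simplified sketch of the classes the F# compiler generates for:
    //   type Foo = Something of int | Nothing
    public abstract class Foo
    {
        // one nested subclass per union case
        public sealed class Something : Foo
        {
            public int Item { get; private set; }
            public Something(int item) { Item = item; }
        }

        public sealed class Nothing : Foo
        {
        }
    }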

    Read the article

  • OpenGL Performance Questions

    - by Daniel
    This subject, as with any optimisation problem, gets hit on a lot, but I just couldn't find what I (think) I want. A lot of tutorials, and even SO questions, have similar tips, generally covering:

    - Use GL face culling (the OpenGL function, not the scene logic)
    - Only send one matrix to the GPU (the projection-model-view combination), decreasing the MVP calculations from once per vertex to once per model, as it should be (see the sketch below)
    - Use interleaved vertices
    - Make as few GL calls as possible, batching where appropriate

    And possibly a few/many others. I am (for curiosity reasons) rendering 28 million triangles in my application, using several vertex buffers. I have tried all the above techniques (to the best of my knowledge) and received almost no performance change. While I am getting around 40 FPS in my implementation, which is by no means problematic, I am still curious as to where these optimisation 'tips' actually come into use. My CPU is idling at around 20-50% during rendering, therefore I assume I am GPU-bound when it comes to increasing performance. Note: I am looking into gDEBugger at the moment. Cross-posted at StackOverflow.
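    To illustrate the single-matrix tip from the list above: instead of uploading the model, view and projection matrices separately and multiplying them per vertex in the shader, the combined matrix can be computed once per model on the CPU and sent as one uniform. A minimal sketch (assuming GLM and a shader with a single 'mvp' uniform, neither of which comes from the question itself):

    #include <GL/glew.h>              // or whichever GL loader the project uses
    #include <glm/glm.hpp>
    #include <glm/gtc/type_ptr.hpp>

    void uploadMvp(GLuint program, const glm::mat4& projection,
                   const glm::mat4& view, const glm::mat4& model)
    {
        // one matrix multiply per model instead of two per vertex in the shader
        glm::mat4 mvp = projection * view * model;
        GLint loc = glGetUniformLocation(program, "mvp"); // assumed uniform name
        glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(mvp));
    }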

    Read the article

  • Differences between software testing processes and techniques?

    - by Aptos
    I get confused between these terms. For example, should unit testing be listed as a software testing process or a technique? I think unit testing is a software testing technique. And what about test-driven development? Can you give me some examples of software testing processes and techniques? In my opinion, the software testing process is part of the software development life cycle: for example, if we use the V-Model, the software testing process will include the system test, the acceptance test, the integration test... Thank you.

    Read the article

  • texture mapping with lib3ds and SOIL help

    - by Adam West
    I'm having trouble with my project for loading a texture map onto a model. Any insight into what is going wrong with my code would be fantastic. Right now the code only renders a teapot, to which I assigned the texture after creating it in 3DS Max.

    3dsloader.cpp

        #include "3dsloader.h"
        #include <stdexcept>

        Object::Object(std::string filename)
        {
            m_TotalFaces = 0;
            m_model = lib3ds_file_load(filename.c_str());
            // If loading the model failed, we throw an exception
            if (!m_model)
            {
                throw std::runtime_error("Unable to load " + filename);
            }

            // Set properties of texture coordinate generation for both x and y coordinates
            glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
            glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
            // If not already enabled, enable texture generation
            if (!glIsEnabled(GL_TEXTURE_GEN_S)) glEnable(GL_TEXTURE_GEN_S);
            if (!glIsEnabled(GL_TEXTURE_GEN_T)) glEnable(GL_TEXTURE_GEN_T);
        }

        Object::~Object()
        {
            if (m_model)                   // if the file isn't freed yet
                lib3ds_file_free(m_model); // free up memory
            glDisable(GL_TEXTURE_GEN_S);
            glDisable(GL_TEXTURE_GEN_T);
        }

        void Object::GetFaces()
        {
            m_TotalFaces = 0;
            Lib3dsMesh* mesh;
            // Loop through every mesh.
            for (mesh = m_model->meshes; mesh != NULL; mesh = mesh->next)
            {
                // Add the number of faces this mesh has to the total number of faces.
                m_TotalFaces += mesh->faces;
            }
        }

        void Object::CreateVBO()
        {
            assert(m_model != NULL);

            // Calculate the number of faces we have in total
            GetFaces();

            // Allocate memory for our vertices, normals and texture coordinates
            Lib3dsVector* vertices  = new Lib3dsVector[m_TotalFaces * 3];
            Lib3dsVector* normals   = new Lib3dsVector[m_TotalFaces * 3];
            Lib3dsTexel*  texCoords = new Lib3dsTexel[m_TotalFaces * 3];

            Lib3dsMesh* mesh;
            unsigned int FinishedFaces = 0;
            // Loop through all the meshes
            for (mesh = m_model->meshes; mesh != NULL; mesh = mesh->next)
            {
                lib3ds_mesh_calculate_normals(mesh, &normals[FinishedFaces * 3]);
                // Loop through every face
                for (unsigned int cur_face = 0; cur_face < mesh->faces; cur_face++)
                {
                    Lib3dsFace* face = &mesh->faceL[cur_face];
                    for (unsigned int i = 0; i < 3; i++)
                    {
                        memcpy(&texCoords[FinishedFaces * 3 + i], mesh->texelL[face->points[i]], sizeof(Lib3dsTexel));
                        memcpy(&vertices[FinishedFaces * 3 + i], mesh->pointL[face->points[i]].pos, sizeof(Lib3dsVector));
                    }
                    FinishedFaces++;
                }
            }

            // Generate a Vertex Buffer Object and store it with our vertices
            glGenBuffers(1, &m_VertexVBO);
            glBindBuffer(GL_ARRAY_BUFFER, m_VertexVBO);
            glBufferData(GL_ARRAY_BUFFER, sizeof(Lib3dsVector) * 3 * m_TotalFaces, vertices, GL_STATIC_DRAW);

            // Generate another Vertex Buffer Object and store the normals in it
            glGenBuffers(1, &m_NormalVBO);
            glBindBuffer(GL_ARRAY_BUFFER, m_NormalVBO);
            glBufferData(GL_ARRAY_BUFFER, sizeof(Lib3dsVector) * 3 * m_TotalFaces, normals, GL_STATIC_DRAW);

            // Generate a third VBO and store the texture coordinates in it.
            glGenBuffers(1, &m_TexCoordVBO);
            glBindBuffer(GL_ARRAY_BUFFER, m_TexCoordVBO);
            glBufferData(GL_ARRAY_BUFFER, sizeof(Lib3dsTexel) * 3 * m_TotalFaces, texCoords, GL_STATIC_DRAW);

            // Clean up our allocated memory
            delete[] vertices;
            delete[] normals;
            delete[] texCoords;

            // We no longer need lib3ds
            lib3ds_file_free(m_model);
            m_model = NULL;
        }

        void Object::applyTexture(const char* texfilename)
        {
            float imageWidth;
            float imageHeight;
            glGenTextures(1, &textureObject); // allocate memory for one texture
            textureObject = SOIL_load_OGL_texture(texfilename, SOIL_LOAD_AUTO, SOIL_CREATE_NEW_ID, SOIL_FLAG_MIPMAPS);
            glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
            glBindTexture(GL_TEXTURE_2D, textureObject); // use our newest texture
            glGetTexLevelParameterfv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &imageWidth);
            glGetTexLevelParameterfv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &imageHeight);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); // give the best result for texture magnification
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // give the best result for texture minification
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP); // don't repeat texture
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP); // don't repeat texture
            glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, imageWidth, imageHeight, 0, GL_RGB, GL_UNSIGNED_BYTE, &textureObject);
        }

        void Object::Draw() const
        {
            // Enable vertex, normal and texture-coordinate arrays.
            glEnableClientState(GL_VERTEX_ARRAY);
            glEnableClientState(GL_NORMAL_ARRAY);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);

            // Bind the VBO with the normals.
            glBindBuffer(GL_ARRAY_BUFFER, m_NormalVBO);
            // The pointer for the normals is NULL, which means that OpenGL will use the currently bound VBO.
            glNormalPointer(GL_FLOAT, 0, NULL);

            glBindBuffer(GL_ARRAY_BUFFER, m_TexCoordVBO);
            glTexCoordPointer(2, GL_FLOAT, 0, NULL);

            glBindBuffer(GL_ARRAY_BUFFER, m_VertexVBO);
            glVertexPointer(3, GL_FLOAT, 0, NULL);

            // Render the triangles.
            glDrawArrays(GL_TRIANGLES, 0, m_TotalFaces * 3);

            glDisableClientState(GL_VERTEX_ARRAY);
            glDisableClientState(GL_NORMAL_ARRAY);
            glDisableClientState(GL_TEXTURE_COORD_ARRAY);
        }

    3dsloader.h

        #include "main.h"
        #include "lib3ds/file.h"
        #include "lib3ds/mesh.h"
        #include "lib3ds/material.h"

        class Object
        {
        public:
            Object(std::string filename);
            virtual ~Object();
            virtual void Draw() const;
            virtual void CreateVBO();
            void applyTexture(const char* texfilename);
        protected:
            void GetFaces();
            unsigned int m_TotalFaces;
            Lib3dsFile* m_model;
            Lib3dsMesh* Mesh;
            GLuint textureObject;
            GLuint m_VertexVBO, m_NormalVBO, m_TexCoordVBO;
        };

    It's called in the main cpp file with CreateVBO, applyTexture and Draw (pretty simple, how ironic), and that's it. Please help me, forum :)
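    One diagnostic worth adding before suspecting the VBO path (a hypothetical sketch, not part of the original post): SOIL_load_OGL_texture returns 0 on failure, and SOIL_last_result() explains why, so it is cheap to confirm the texture actually loaded:

        #include "SOIL.h"   // header path varies by install (sometimes <SOIL/SOIL.h>)
        #include <cstdio>

        // Returns the SOIL-created texture id, or 0 (with a logged reason) on failure.
        unsigned int loadTextureChecked(const char* texfilename)
        {
            unsigned int tex = SOIL_load_OGL_texture(texfilename, SOIL_LOAD_AUTO,
                                                     SOIL_CREATE_NEW_ID, SOIL_FLAG_MIPMAPS);
            if (tex == 0)
                std::fprintf(stderr, "SOIL failed to load '%s': %s\n",
                             texfilename, SOIL_last_result());
            return tex;
        }

    Note also that with SOIL_CREATE_NEW_ID, SOIL creates its own texture object, so the id produced by the preceding glGenTextures call is immediately overwritten and leaked.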

    Read the article

  • Java EE/GlassFish Adoption Story by Kerry Wilson/Vanderbilt University

    - by reza_rahman
    Kerry Wilson is a Software Engineer at the Vanderbilt University Medical Center. He served in a consultant role to design a lightweight systems integration solution for the next generation Foundations Recovery Network using GlassFish, Java EE 6, JPA, @Scheduled EJBs, CDI, JAX-RS and JSF. He lives in Nashville, TN, where he helps organize the Nashville Java User Group. Kerry shared his Java EE/GlassFish adoption story at the JavaOne 2013 Sunday GlassFish community event; check out the video below, along with the slide deck for his talk, "GlassFish Story by Kerry Wilson/Vanderbilt University Medical Center". Kerry outlined some of the details of the implementation and emphasized the fact that Java EE can be a great solution for applications that are considered small/lightweight. He mentioned the productivity gains through the modern Java EE programming model centered on annotations, POJOs and zero-configuration, comparing it with competing frameworks that aim towards similar productivity for lightweight applications. Kerry also stressed the quality of the excellent NetBeans integration with GlassFish and the need for community self-support in free, non-commercial open source projects like GlassFish. You can check out the details of his story on the GlassFish stories blog. Do you have a Java EE/GlassFish adoption story to share? Let us know and we will highlight it for the community.

    Read the article

  • Price comparison sites and its effect on Google ranking

    - by Jivago
    I am the webmaster of a website that contains roughly 10,000 products. I would possibly be interested in indexing those products in a price comparison site like PriceGrabber, Nextag, Shopbot, etc. The principle of price comparison sites is great for an actual user who wants to compare prices, but my main concern is the effect it could have on my actual ranking on Google... Since a site like Shopbot uses a CPC (cost-per-click) model, all the links on the website are built to track clicks (e.g. http://www.shopbot.ca/r.html?i=3&catc=2&refshop=5706&refshopcodeid=42587349); it uses redirection, not direct links (so no direct backlinking). In your opinion and/or experience, is this a smart move, business-wise and SEO-wise, or not? THANKS!

    Read the article

  • The UIManager Pattern

    - by Duncan Mills
    One of the most common mistakes that I see when reviewing ADF application code is the sin of storing UI component references, most commonly things like table or tree components, in session or pageFlow scope. The reasons why this is bad are simple: firstly, these UI object references are not serializable, so they would not survive a session migration between servers; and secondly, there is no guarantee that the framework will re-use the same component tree from request to request, although in practice it generally does. So the danger here is that, at best, you end up with an NPE after your session has migrated, and at worst, you end up pinning old generations of the component tree, happily eating up your precious memory. So that's clear: we should never, ever store references to components anywhere other than request scope (or maybe backing bean scope). So double-check the scope of those binding attributes that map component references into a managed bean in your applications.

    Why is it Such a Common Mistake?

    At this point I want to examine why there is this urge to hold onto these references anyway. After all, JSF will obligingly populate your backing beans with the fresh and correct reference when needed. In most cases, it seems that the rationale is down to a lack of distinction within the application between what is data and what is presentation. I think perhaps a cause of this is the logical separation between business data behind the ADF data binding (#{bindings}) façade and the UI components themselves. Developers tend to think: OK, this is my data layer behind the bindings object, and everything else is just UI. Of course, that's not the case. The UI layer itself will have state which is intrinsically linked to the UI presentation rather than the business model, but at the same time should not be tightly bound to a specific instance of any single UI component.

    So here's the problem. I think developers try to use the UI components as state-holders for this kind of data, rather than using them to represent that state. An example of this might be something like the selection state of a tabset (panelTabbed): you might be interested in knowing what the currently disclosed tab is. The temptation that leads to the component reference sin is to go and ask the tabset what the selection is. That, of course, is fine in context, e.g. in a handler within the same request-scoped bean that has the binding to the tabset. However, it leads to problems when you subsequently want the same information outside of the immediate scope. The simple solution seems to be to chuck that component reference into session scope, so that you can simply re-check it in the same way, leading, of course, to this mistake.

    Turn it on its Head

    So the correct solution is to turn the problem on its head. If you are going to be interested in the value or state of some component outside of the immediate request context, then it becomes persistent state (persistent in the sense that it extends beyond the lifespan of a single request). So you need to externalize that state outside of the component, and have the component reference and manipulate that state as needed, rather than owning it. This is what I call the UIManager pattern.

    Defining the Pattern

    The UIManager pattern really is very simple. The premise is that every application should define a session-scoped managed bean, appropriately named UIManager, which is specifically responsible for holding this persistent UI-component-related state. The actual makeup of the UIManager class varies depending on the needs of the application and the amount of state that needs to be stored. Generally I'll start off with a Map in which individual flags can be created as required, although you could opt for a more formal set of typed member variables with getters and setters, or indeed a mix. This UIManager class is defined as a session-scoped managed bean (#{uiManager}) in the faces-config.xml. The pattern is then to inject this instance of the class into any other managed bean (usually request-scoped) that needs it, using a managed property. So typically you'll have something like this:

        <managed-bean>
            <managed-bean-name>uiManager</managed-bean-name>
            <managed-bean-class>oracle.demo.view.state.UIManager</managed-bean-class>
            <managed-bean-scope>session</managed-bean-scope>
        </managed-bean>

    This is then injected into any backing bean that needs it:

        <managed-bean>
            <managed-bean-name>mainPageBB</managed-bean-name>
            <managed-bean-class>oracle.demo.view.MainBacking</managed-bean-class>
            <managed-bean-scope>request</managed-bean-scope>
            <managed-property>
                <property-name>uiManager</property-name>
                <property-class>oracle.demo.view.state.UIManager</property-class>
                <value>#{uiManager}</value>
            </managed-property>
        </managed-bean>

    In this case the backing bean in question needs a member variable to hold and reference the UIManager:

        private UIManager _uiManager;

    This should be exposed via a getter and setter pair with names that match the managed property name (e.g. setUiManager(UIManager _uiManager), getUiManager()). This will then give your code within the backing bean full access to the UI state. UI components in the page can, of course, directly reference the uiManager bean in their properties. For example, going back to the tabset example, you might have something like this:

        <af:panelTabbed>
            <af:showDetailItem text="First"
                               disclosed="#{uiManager.settings['MAIN_TABSET_STATE']['FIRST']}">
                ...
            </af:showDetailItem>
            <af:showDetailItem text="Second"
                               disclosed="#{uiManager.settings['MAIN_TABSET_STATE']['SECOND']}">
                ...
            </af:showDetailItem>
            ...
        </af:panelTabbed>

    Here the settings member within the UIManager is a Map which contains a Map of Booleans for each tab under the MAIN_TABSET_STATE key. (Just an example; you could choose to store just an identifier for the selected tab, or whatever. How you choose to store the state within the UIManager is up to you.)

    Get into the Habit

    So we can see that the UIManager pattern is not a great strain to implement for an application, and can even be retrofitted to an existing application with ease. The point is, however, that you should always take this approach rather than committing the sin of persistent component references, which will bite you in the future, or shotgun-scattered UI flags on the session, which are hard to maintain. If you take the approach of always accessing all UI state via the uiManager, or perhaps a pageScope-focused variant of it, you'll find your applications much easier to understand and maintain. Do it today!

    Read the article

  • Very good book for learning ADF

    - by kishore.kondepudi(at)oracle.com
    I'm back!!! It's been a long time since I've penned anything in here. This past month I got a bit Androided ;) with my new Captivate and experiments with Android. I promise to give looots of things in the coming weeks. Before that: I have been getting many comments and mails from people interested in learning ADF, asking me to suggest a good book. While there aren't many out in the market now, the one by Frank Nimphius is very, very good. I have gone through the book and it's very apt for learning and getting to know the horizon of ADF. It has almost everything: Model, UI, Skinning, Internationalization, Security, Reusing, lots and lots of ADF stuff. I recommend the book for all beginners and learners of ADF. In case you are in India, you can order it to your home from Flipkart directly; here is the listing. There are two versions of the same book: one is an international edition and the other is an Indian print from TMH. The cost is 585 rupees for the Indian one. The book is titled Oracle Fusion Developer Guide: Building Rich Internet Applications With Oracle ADF Business Components & ADF Faces. An economical price and an excellent book. Grab yours now and plough through ADF ;)

    Read the article

  • Blender to Collada to Assimp - Rigid (Non-skinned) Animation

    - by gareththegeek
    I am trying to get simple animations to work, exporting from Blender and importing into my application. My first attempt was as follows:

    1) Open Blender at factory settings.
    2) Select the default cube and insert a location keyframe.
    3) Select another frame and move the cube.
    4) Insert a second location keyframe.
    5) Export to Collada.

    When I open the Collada file using Assimp, it contains zero animations, even though in Blender the cube animates correctly. On my next attempt, I inserted a bone armature with a single bone, made it the parent of the cube, and animated the bone instead. Again, the animation worked correctly in Blender. Assimp now lists one animation, but both keyframes have the position [0, 0, 0]. Does anyone have any suggestions as to how I can get animated (non-skinned) meshes from Blender into Assimp? My ultimate goal here is to export animated meshes from Blender, process them offline into my own model format, and load them into my SharpDX-based graphics engine.
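    As a first debugging step, a small dump of what Assimp actually imported can show whether the keyframes survived the export at all. A hedged sketch using the standard Assimp C++ API (the file name and post-process flags are assumptions):

        #include <assimp/Importer.hpp>
        #include <assimp/scene.h>
        #include <assimp/postprocess.h>
        #include <cstdio>

        int main() {
            Assimp::Importer importer;
            const aiScene* scene = importer.ReadFile("cube.dae", aiProcess_Triangulate);
            if (!scene) {
                std::printf("load failed: %s\n", importer.GetErrorString());
                return 1;
            }

            // Walk every animation channel and print its position keys.
            std::printf("animations: %u\n", scene->mNumAnimations);
            for (unsigned a = 0; a < scene->mNumAnimations; ++a) {
                const aiAnimation* anim = scene->mAnimations[a];
                for (unsigned c = 0; c < anim->mNumChannels; ++c) {
                    const aiNodeAnim* ch = anim->mChannels[c];
                    std::printf("  node '%s': %u position keys\n",
                                ch->mNodeName.C_Str(), ch->mNumPositionKeys);
                    for (unsigned k = 0; k < ch->mNumPositionKeys; ++k) {
                        const aiVector3D& v = ch->mPositionKeys[k].mValue;
                        std::printf("    t=%.2f  (%.2f, %.2f, %.2f)\n",
                                    ch->mPositionKeys[k].mTime, v.x, v.y, v.z);
                    }
                }
            }
            return 0;
        }

    If the channel's node name doesn't match the cube's node in the scene hierarchy, or the keys are all zero here too, the problem is in the Collada export rather than in your own loading code.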

    Read the article

  • Level Design V.S. Modeler

    - by Ecurbed
    From what I understand, being a level designer and being a character/environment/object/etc. modeler are two different jobs, yet sometimes it feels like a modeler can also do the job of the level designer. I know this also depends on the scale of the game: for small games maybe they are one and the same, but for bigger games they become two different jobs. I understand that a background in modeling couldn't hurt when it comes to level design, but the question I have is: do employers prefer level designers who can also model? That way they can kill two birds with one stone and have someone who can both create the assets and design the level. What is your opinion on the training involved? Does level design require a skill set that makes it completely different from what a modeler does, or is it an easy transition for a modeler to become a level designer? Can you be a bad level designer but a good modeler, and vice versa?

    Read the article

  • MVC Pattern, ViewModels, Location of conversion.

    - by Pino
    I've been working with ASP.Net MVC for around a year now and have structured my applications in the following way:

    X.Web - the MVC application; contains controllers and views
    X.Lib - contains data access, repositories and services; this allows us to drop the .Lib into any application that requires it

    At the moment we are using Entity Framework, and the conversion from EntityO to a more specific model is done in the controller. This set-up means that if a service method returns an EntityO, the controller will do a conversion before the data is passed to a view. I'm interested to know whether I should move the conversion into the service, so that the app doesn't have entity objects being passed around. A sketch of that alternative is shown below.
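    For what it's worth, the arrangement being asked about (the service returns view models, so entities never leave the service layer) looks roughly like the following. This is a hypothetical sketch written in C++ for neutrality; the original question concerns ASP.Net, and every type name here is invented for illustration.

        #include <string>

        // Hypothetical persistence-side entity (what the ORM returns).
        struct ProductEntity {
            int         id;
            std::string name;
            double      listPrice;
        };

        // Hypothetical presentation-side model (what the view needs).
        struct ProductViewModel {
            std::string name;
            std::string displayPrice;
        };

        // Minimal stand-in for a repository, so the sketch is self-contained.
        struct ProductRepository {
            ProductEntity find(int id) const { return {id, "Widget", 9.99}; }
        };

        class ProductService {
        public:
            // The conversion lives here, so controllers only ever see view models.
            ProductViewModel getProduct(int id) const {
                ProductEntity e = repository.find(id);
                return {e.name, "$" + std::to_string(e.listPrice)};
            }
        private:
            ProductRepository repository;
        };

    The trade-off is that the service layer becomes coupled to presentation concerns; a common middle ground is a separate mapping layer (or DTOs) between the two.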

    Read the article

< Previous Page | 590 591 592 593 594 595 596 597 598 599 600 601  | Next Page >