Search Results

Search found 73044 results on 2922 pages for 'custom data attribute'.


  • Oracle's Global Single Schema

    - by david.butler(at)oracle.com
    Maximizing business process efficiencies in a heterogeneous environment is very difficult. The difficulty stems from the fact that the various applications across the Information Technology (IT) landscape employ different integration standards, different message passing strategies, and different workflow engines. Vendors such as Oracle and others are delivering tools to help IT organizations manage the complexities introduced by these differences. But the one remaining intractable problem impacting efficient operations is the fact that these applications have different definitions for the same business data.

    Business data is your business information codified for computer programs to use. A good data model will represent the way your organization does business. The computer applications your organization deploys to improve operational efficiency are built to operate on the business data organized into this schema. If the schema does not represent how you do business, the applications on that schema cannot provide the features you need to achieve the desired efficiencies. Business processes span these applications. Data problems break these processes, rendering them far less efficient than they need to be to achieve organization goals. Thus, the expected return on the investment in these applications is never realized. The success of all business processes depends on the availability of accurate master data.

    Clearly, the solution to this problem is to consolidate all the master data an organization uses to run its business. Then clean it up, augment it, govern it, and connect it back to the applications that need it. Until now, this obvious solution has been difficult to achieve because no one had defined a data model sufficiently broad, deep and flexible enough to support transaction processing on all key business entities and serve as a master superset to all other operational data models deployed in heterogeneous IT environments. Today, the situation has changed. Oracle has created an operational data model (aka schema) that can support accurate and consistent master data across heterogeneous IT systems. This is foundational for providing a way to consolidate and integrate master data without having to replace investments in existing applications. This Global Single Schema (GSS) represents a revolutionary breakthrough that allows for true master data consolidation.

    Oracle has deep knowledge of applications dating back to the early 1990s. It developed applications in the areas of Supply Chain Management (SCM), Product Lifecycle Management (PLM), Enterprise Resource Planning (ERP), Customer Relationship Management (CRM), Human Capital Management (HCM), Financials and Manufacturing. In addition, Oracle applications were delivered for key industries such as Communications, Financial Services, Retail, Public Sector, High Tech Manufacturing (HTM) and more. Expertise in all these areas drove requirements for GSS. The following figure illustrates Oracle's unique position that enabled the creation of the Global Single Schema.

    [Figure: GSS Requirements Gathering]

    GSS defines all the key business entities and attributes including Customers, Contacts, Suppliers, Accounts, Products, Services, Materials, Employees, Installed Base, Sites, Assets, and Inventory, to name just a few. In addition, Oracle delivers GSS pre-integrated with a wide variety of operational applications.

    Business Process Automation

    E-Business is about maximizing operational efficiency.
    At the highest level, these 'operations' span all that you do as an organization. The following figure illustrates some of these high-level business processes.

    [Figure: Enterprise Business Processes]

    Supplies are procured. Assets are maintained. Materials are stored. Inventory is accumulated. Products and Services are engineered, produced and sold. Customers are serviced. And across this entire spectrum, Employees do the procuring, supporting, engineering, producing, selling and servicing. Not shown, but not to be overlooked, are the accounting and the financial processes associated with all this procuring, manufacturing, and selling activity. Supporting all these applications is the master data. When this data is fragmented and inconsistent, the business processes fail and inefficiencies multiply. But imagine having all the data under these operational business processes in one place.

    · The same accurate and timely customer data will be provided to all your operational applications, from the call center to the point of sale.
    · The same accurate and timely supplier data will be provided to all your operational applications, from supply chain planning to procurement.
    · The same accurate and timely product information will be available to all your operational applications, from demand chain planning to marketing.

    You would have a single version of the truth about your assets, financial information, customers, suppliers, employees, products and services to support your business automation processes as they flow across your business applications. All company and partner personnel will access the same exact data entity across all your channels and across all your lines of business. Oracle's Global Single Schema enables this vision of a single version of the truth across the heterogeneous operational applications supporting the entire enterprise.

    Global Single Schema

    Oracle's Global Single Schema organizes hundreds of thousands of attributes into 165 major schema objects supporting over 180 business application modules. It is designed for international operations and extensibility. The schema is delivered with a full set of public Application Programming Interfaces (APIs) and an Integration Repository with modern Service Oriented Architecture interfaces to make data available as a service (DaaS) to business processes and enable operations in heterogeneous IT environments.

    · Key tables can be extended with unlimited numbers of additional attributes and attribute groups for maximum flexibility.
        o This enables model extensions that reflect business entities unique to your organization's operations.
    · The schema is multi-organization enabled so data manipulation can be controlled along organizational boundaries.
    · It uses variable-byte Unicode to support over 31 languages.
    · The schema encodes flexible date and flexible address formats for easy localizations.

    No matter how complex your business is, Oracle's Global Single Schema can hold your business objects and support your global operations. Oracle's Global Single Schema identifies and defines the business objects an enterprise needs within the context of its business operations. The interrelationships between the business objects are also contained within the GSS data model. Their presence expresses fundamental business rules for the interaction between business entities. The following figure illustrates some of these connections.
    [Figure: Interconnected Business Entities]

    Interconnected business processes require interconnected business data. No other MDM vendor has this capability. Everyone else has either one entity they can master or separate, disconnected models for various business entities. Higher-level integrations are made available, but that is a weak architectural alternative to data-level integration in this critically important aspect of Master Data Management.

    Read the article

  • To SYNC or not to SYNC – Part 3

    - by AshishRay
    I can't believe it has been almost a year since my last blog post. I know, that's an absolute no-no in the blogosphere. And I know that "I have been busy" is not a good excuse. So - without trying to come up with an excuse - let me state this - my apologies for taking such a long time to write the next Part. Without further ado, here goes. This is Part 3 of a multi-part blog article where we are discussing various aspects of setting up Data Guard synchronous redo transport (SYNC). In Part 1 of this article, I debunked the myth that Data Guard SYNC is similar to a two-phase commit operation. In Part 2, I discussed the various ways that network latency may or may not impact a Data Guard SYNC configuration. In this article, I will talk in details regarding why Data Guard SYNC is a good thing. I will also talk about distance implications for setting up such a configuration. So, Why Good? Why is Data Guard SYNC a good thing? Because, at the end of the day, this gives you the assurance of zero data loss - it doesn’t matter what outage may befall your primary system. Befall! Boy, that sounds theatrical. But seriously - think about this - it minimizes your data risks. That’s a big deal. Whether you have an outage due to bad disks, faulty hardware components, hardware / software bugs, physical data corruptions, power failures, lightning that takes out significant part of your data center, fire that melts your assets, water leakage from the cooling system, human errors such as accidental deletion of online redo log files - it doesn’t matter - you can have that “Om - peace” look on your face and then you can failover to the standby system, without losing a single bit of data in your Oracle database. You will be a hero, as shown in this not so imaginary conversation: IT Manager: Well, what’s the status? You: John is doing the trace analysis on the storage array. IT Manager: So? How long is that gonna take? You: Well, he is stuck, waiting for a response from <insert your not-so-favorite storage vendor here>. IT Manager: So, no root cause yet? You: I told you, he is stuck. We have escalated with their Support, but you know how long these things take. IT Manager: Darn it - the site is down! You: Not really … IT Manager: What do you mean? You: John is stuck, but Sreeni has already done a failover to the Data Guard standby. IT Manager: Whoa, whoa - wait! Failover means we lost some data, why did you do this without letting the Business group know? You: We didn’t lose any data. Remember, we had set up Data Guard with SYNC? So now, any problems on the production – we just failover. No data loss, and we are up and running in minutes. The Business guys don’t need to know. IT Manager: Wow! Are we great or what!! You: I guess … Ok, so you get it - SYNC is good. But as my dear friend Larry Carpenter says, “TANSTAAFL”, or "There ain't no such thing as a free lunch". Yes, of course - investing in Data Guard SYNC means that you have to invest in a low-latency network, you have to monitor your applications and database especially in peak load conditions, and you cannot under-provision your standby systems. But all these are good and necessary things, if you are supporting mission-critical apps that are supposed to be running 24x7. The peace of mind that this investment will give you is priceless, especially if you are serious about HA. How Far Can We Go? Someone may say at this point - well, I can’t use Data Guard SYNC over my coast-to-coast deployment. Most likely - true. So how far can you go? 
Well, we have customers who have deployed Data Guard SYNC over 300+ miles! Does this mean that you can also deploy over similar distances? Duh - no! I am going to say something here that most IT managers don’t like to hear - “It depends!” It depends on your application design, application response time / throughput requirements, network topology, etc. However, because of the optimal way we do SYNC, customers have been able to stretch Data Guard SYNC deployments over longer distances compared to traditional, storage-centric ways of doing this. The MAA Database 10.2 best practices paper Data Guard Redo Transport & Network Configuration, and Oracle Database 11.2 High Availability Best Practices Manual talk about some of these SYNC-related metrics. For example, a test deployment of Data Guard SYNC over 330 miles with 10ms latency showed an impact less than 5% for a busy OLTP application. Even if you can’t deploy Data Guard SYNC over your WAN distance, or if you already have an ASYNC standby located 1000-s of miles away, here’s another nifty way to boost your HA. Have a local standby, configured SYNC. How local is “local”? Again - it depends. One customer runs a local SYNC standby across the campus. Another customer runs it across 15 miles in another data center. Both of these customers are running Data Guard SYNC as their HA standard. If a localized outage affects their primary system, no problem! They have all the data available on the standby, to which they can failover. Very fast. In seconds. Wait - did I say “seconds”? Yes, Virginia, there is a Santa Claus. But you have to wait till the next blog article to find out more. I assure you tho’ that this time you won’t have to wait for another year for this.
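    For reference, the primary-side settings that the SYNC discussion above assumes look roughly like this (a minimal sketch; the service name, DB_UNIQUE_NAME and timeout are hypothetical values, not taken from this post):

        -- point redo transport at the standby in SYNC AFFIRM mode
        ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=stby SYNC AFFIRM NET_TIMEOUT=30 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stby' SCOPE=BOTH;

        -- raise the protection mode so a failover can be guaranteed zero data loss
        ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;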

    Read the article

  • Podcast Show Notes – Oracle Coherence and Data Grid Technology - Part 1

    - by Bob Rhubart
    This week's ArchBeat Podcast program kicks off a three-part series featuring a discussion of Oracle Coherence and data grid technology. Listen to Part 1. The panelists for this discussion are:

    · Cameron Purdy, VP of Development, Oracle – Blog | Twitter | LinkedIn | Oracle Mix
    · Aleksandar Seovic, founder and managing director at S4HC Inc. – Blog | Twitter | LinkedIn | Oracle Mix | Oracle ACE Profile (Aleks is also the author of Oracle Coherence 3.5 from Packt Publishing.)
    · John Stouffer, independent consultant, Oracle Applications DBA/Architect – Blog | LinkedIn | Oracle Mix | Oracle ACE Profile

    Part 2 will be available on June 23, Part 3 on June 30. Coming soon: on July 7 the ArchBeat Podcast kicks off a series featuring an open discussion of Architecture and Agility. Stay tuned: RSS. Tags: oracle, otn, arch2arch, archbeat, coherence, data grid, cameron purdy, aleks seovic

    Read the article

  • Accessing Server-Side Data from Client Script: Using WCF Services with jQuery and the ASP.NET Ajax Library

    Today's websites commonly exchange information between the browser and the web server using Ajax techniques - the browser executes JavaScript code, typically in response to the page loading or some user action, and this JavaScript makes an asynchronous HTTP request to the server, which then processes the request and, perhaps, returns data that the browser can seamlessly integrate into the web page. Two earlier articles - Accessing JSON Data From an ASP.NET Page Using jQuery and Using Ajax Web Services, Script References, and jQuery - looked at using both jQuery and the ASP.NET Ajax Library on the browser to initiate an Ajax request, and both ASP.NET pages and Ajax Web Services as the entities on the web server responsible for servicing such Ajax requests. This article continues our examination of techniques for implementing lightweight Ajax scenarios in an ASP.NET website. Specifically, it examines how to use the Windows Communication Foundation, or WCF, to serve data from the web server and how to use both the ASP.NET Ajax Library and jQuery to consume such services from the client side. Read on to learn more!
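    As a rough sketch of the client-side half covered in the article (the service URL and operation name here are placeholders, and the exact shape depends on how the WCF endpoint is configured), a jQuery call against a JSON-enabled WCF service typically looks like this:

        // call a WCF operation exposed over webHttpBinding / enableWebScript (names are placeholders)
        $.ajax({
            type: 'POST',
            url: 'Services/ProductService.svc/GetProducts',
            contentType: 'application/json; charset=utf-8',
            dataType: 'json',
            success: function (result) {
                // depending on the service configuration, ASP.NET-style JSON wraps the payload in "d"
                var products = result.d || result;
                // ...merge the returned data into the page
            }
        });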

    Read the article

  • Can’t use Sub Reports and Shared Datasets in BIDS - Data Retrieval failed for the subreport '###', located at ### Please check the log files

    - by simonsabin
    SQL Server 2008 R2 brings us a great new feature: shared datasets. These allow datasets used by parameters and by multiple reports to be defined in one place and reused. I've been developing a report for SQLBits that required the use of subreports. They all use a shared dataset for the agenda items. Initially I was developing in Report Builder but have now gone back to BIDS. In doing this, however, all the reports broke, reporting the following error: Data Retrieval failed for the subreport...(read more)

    Read the article

  • Wondering What to Expect from Master Data Management at OpenWorld 2012? Hold On to Your Seats…

    - by Mala Narasimharajan
    The Countdown begins – just 23 days till OpenWorld hits San Francisco. Oracle OpenWorld 2012 for MDM promises to be chock full of interesting sessions, specifically focused on our customers. We've made sure that our sessions are balanced between product information, strategy, real-world stories and, last but certainly not least, lessons learned – straight from our customers.

    Attendee / Presenter Toolkit:

    · Oracle Master Data Management Focus On document – where and when to find all MDM sessions at OOW
    · Oracle Schedule Builder – use search terms such as: MDM, master data, customer hub, product hub and master data management
    · Oracle Music Festival – amazing line-up!
    · Oracle Customer Appreciation Night – not to be missed!
    · Oracle OpenWorld LIVE On-Demand

    Stay on top of all that's OpenWorld when it comes to MDM. We'll be posting not-to-miss sessions and blogs on what our customer lineup will be like at the big show. Look forward to seeing you at OOW – and in case you didn't get approval to attend, take advantage of our virtual on-demand conference. See you at OpenWorld 2012!

    Read the article

  • Is there any reason to use "plain old data" classes?

    - by Michael
    In legacy code I occasionally see classes that are nothing but wrappers for data, something like:

        class Bottle {
            int height;
            int diameter;
            Cap capType;
            // getters/setters, maybe a constructor
        }

    My understanding of OO is that classes are structures for data together with the methods that operate on that data. This seems to preclude objects of this type. To me they are nothing more than structs and kind of defeat the purpose of OO. I don't think it's necessarily evil, though it may be a code smell. Is there a case where such objects would be necessary? If this is used often, does it make the design suspect?

    Read the article

  • Creating the Business Card Request InfoPath Form

    - by JKenderdine
    Business Card Request Demo Files Back in January I spoke at SharePoint Saturday Virginia Beach about InfoPath forms and Web Part deployment.  Below is some of the information and details regarding the form I created for the session.  There are many blogs and Microsoft articles on how to create a basic form so I won’t repeat that information here.   This blog will just explain a few of the options I chose when creating the solutions for SPS Virginia Beach.  The above link contains the zipped package files of the two InfoPath forms(no code solution and coded solution), the list template for the Location list I used, and the PowerPoint deck.  If you plan to use these templates, you will need to update the forms to work within your own environments (change data connections, code links, etc.).  Also, you must have the SharePoint Enterprise version, with InfoPath Services configured in order to use the Web Browser enabled forms. So what are the requirements for this template? Business Card Request Form Template Design Plan: Gather user information and requirements for card Pull in as much user information as possible. Use data from the user profile web services as a data source Show and hide fields as necessary for requirements Create multiple views – one for those submitting the form and Another view for the executive assistants placing the orders. Browser based form integrated into SharePoint team site Submitted directly to form library The base form was created using the blank template.  The table and rows were added using Insert tab and selecting Custom Table.  The use of tables is a great way to make sure everything lines up.  You do have to split the tables from time to time.  If you’ve ever split cells and then tried to re-align one to find that you impacted the others, you know why.  Here is what the base form looks like in InfoPath.   Show and hide fields as necessary for requirements You will notice I also used Sections within the form.  These show or hide depending on options selected or whether or not fields are blank.  This is a great way to prevent your users from feeling overwhelmed with a large form (this one wouldn’t apply).  Although not used in this one, you can also use various views with a tab interface.  I’ll show that in another post. Gather user information and requirements for card Pull in as much user information as possible. Use data from the user profile web services as a data source Utilizing rules you can load data when the form initiates (Data tab, Form Load).  Anything you can automate is always appreciated by the user as that is data they don’t have to enter.  For example, loading their user id or other user information on load: Always keep in mind though how much data you load and the method for loading that data (through rules, code, etc.).  They have an impact on form performance.  The form will take longer to load if you bring in a ton of data from external sources.  Laura Rogers has a great blog post on using the User Information List to load user information.   If the user has logged into SharePoint, then this can be used quite effectively and without a huge performance hit.   What I have found is that using the User Profile service via code behind or the Web Service “GetUserProfileByName” (as above) can take more time to load the user data.  Just food for thought. You must add the data connection in order for the above rules to work.  
You can connect to the data connection through the Data tab, Data Connections or select Manage Data Connections link which appears under the main data source.  The data connections can be SharePoint lists or libraries, SQL data tables, XML files, etc.  Create multiple views – one for those submitting the form and Another view for the executive assistants placing the orders. You can also create multiple views for the users to enhance their experience.  Once they’ve entered the information and submitted their request for business cards, they don’t really need to see the main data input screen any more.  They just need to view what they entered. From the Page Design tab, select New View and give the view a name.  To review the existing views, click the down arrow under View: The ReviewView shows just what the user needs and nothing more: Once you have everything configured, the form should be tested within a Test SharePoint environment before final deployment to production.  This validates you don’t have any rules or code that could impact the server negatively. Submitted directly to form library   You will need to know the form library that you will be submitting to when publishing the template.  Configure the Submit data connection to connect to this library.  There is already one configured in the sample,  but it will need to be updated to your environment prior to publishing. The Design template is different from the Published template.  While both have the .XSN extension, the published template contains all the “package” information for the form.  The published form is what is loaded into Central Admin, not the design template. Browser based form integrated into SharePoint team site In Central Admin, under General Settings, select Manage Form Templates.  Upload the published form template and Activate it to a site collection. Now it is available as a content type to select in the form library.  Some documentation on publishing form templates:  Technet – Manage administrator approved form templates And that’s all our base requirements.  Hope this helps to give a good start.

    Read the article

  • Publish/Subscribe/Request for exchange of big, complex, and confidential data?

    - by Morten
    I am working on a project where a website needs to exchange complex and confidential (and thus encrypted) data with other systems. The data includes personal information, technical drawings, public documents, etc. We would prefer to avoid the Request-Reply pattern with the dependent systems (and there are a LOT of them), as that would create an awful lot of empty traffic. On the other hand, I am not sure that a pure Publisher/Subscriber pattern would be appropriate -- mainly because of the complex and bulky nature of the data to be exchanged. For that reason we have discussed the possibility of a "publish/subscribe/request" solution. The Publish/Subscribe part would be to publish a message to the dependent systems that something is ready for pickup. The actual content is then picked up by an old-school Request-Reply action (a sketch of such a notification message follows below). How does this sound to you? Regards, Morten
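    To make the idea concrete (the field names below are invented for the sketch), the published notification can stay small and carry just enough for a later pickup:

        {
          "event": "document.ready",
          "resourceId": "doc-48213",
          "href": "https://integration.example.org/documents/doc-48213",
          "contentType": "application/pdf",
          "sizeBytes": 1048576
        }

    Subscribers that care about the item then issue a normal authenticated Request-Reply against the href, so the bulky, encrypted payload only moves when someone actually asks for it (essentially the claim-check pattern).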

    Read the article

  • How can I deal with actor translations and other "noise" in third-party motion capture data?

    - by Charles
    I'm working on a game, and I've run into a problem with motion capture data. My team is using 3DS Max 2011 and trying to put free motion capture files on our models. The problem we're having is it has become extremely hard to find motion capture data that stays in place. We've found some great motion captures of things like walking and jumping but the actors themselves move within the data, so when we attach these animations to our models and bring them into XNA, the models walk forward even when they should technically be standing still (and then there's also the problem of them resetting at the end of the animation). How can we clean up, at runtime or asset-processing time, the animation in these motion capture files?
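    A sketch of the usual asset-processing-time fix (using XNA's Matrix type; the keyframe layout is assumed here, since it depends on the skinned-model pipeline in use) is to pin the root bone's horizontal translation to its first-frame value so the actor animates in place:

        using Microsoft.Xna.Framework;

        static class MocapCleanup
        {
            // rootFrames: the root bone's transform for each frame of the animation
            public static void RemoveRootDrift(Matrix[] rootFrames)
            {
                if (rootFrames == null || rootFrames.Length == 0) return;

                Vector3 origin = rootFrames[0].Translation;
                for (int i = 0; i < rootFrames.Length; i++)
                {
                    Vector3 t = rootFrames[i].Translation;
                    // keep vertical motion (jumps, crouches), drop forward/sideways drift
                    rootFrames[i].Translation = new Vector3(origin.X, t.Y, origin.Z);
                }
            }
        }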

    Read the article

  • Should concrete classes avoid calling other concrete classes, except for data objects?

    - by Kazark
    In Appendix A to The Art of Unit Testing, Roy Osherove, speaking about ways to write testable code from the start, says: "An abstract class shouldn't call concrete classes, and concrete classes shouldn't call concrete classes either, unless they're data objects (objects holding data, with no behavior)." (259) The first half of the sentence is simply Dependency Inversion from SOLID. The second half seems rather extreme to me. That means that every time I'm going to write a class that isn't a simple data structure, which is most classes, I should write an interface or abstract class first, right? Is it really worthwhile to go that far in defining abstract classes and interfaces? Can anyone explain why in more detail, or refute it in spite of its benefit for testability?
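    For concreteness, a minimal sketch of what the guideline looks like in practice (type names invented here): the concrete class talks only to an abstraction plus data objects, and the concrete collaborator is supplied at composition time.

        public interface IMessageStore { void Save(string text); }     // abstraction

        public class SqlMessageStore : IMessageStore                   // concrete, replaceable
        {
            public void Save(string text) { /* write to the database */ }
        }

        public class AuditLog   // concrete, but it never instantiates SqlMessageStore itself
        {
            private readonly IMessageStore store;
            public AuditLog(IMessageStore store) { this.store = store; }
            public void Record(string text) { store.Save(text); }
        }

    In a unit test, AuditLog can then be handed a fake IMessageStore instead of the real database-backed class, which is the testability benefit the book is after.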

    Read the article

  • The design of a generic data synchronizer, or, an [object] that does [actions] with the aid of [helpers]

    - by acheong87
    I'd like to create a generic data-source "synchronizer," where data-source "types" may include MySQL databases, Google Spreadsheets documents, CSV files, among others. I've been trying to figure out how to structure this in terms of classes and interfaces, keeping in mind (what I've read about) composition vs. inheritance and is-a vs. has-a, but each route I go down seems to violate some principle. For simplicity, assume that all data-sources have a header-row-plus-data-rows format. For example, assume that the first rows of Google Spreadsheets documents and CSV files will have column headers, a.k.a. "fields" (to parallel database fields). Also, eventually, I would like to implement this in PHP, but avoiding language-specific discussion would probably be more productive. Here's an overview of what I've tried. Part 1/4: ISyncable class CMySQL implements ISyncable GetFields() // sql query, pdo statement, whatever AddFields() RemFields() ... _dbh class CGoogleSpreadsheets implements ISyncable GetFields() // zend gdata api AddFields() RemFields() ... _spreadsheetKey _worksheetId class CCsvFile implements ISyncable GetFields() // read from buffer AddFields() RemFields() ... _buffer interface ISyncable GetFields() AddFields($field1, $field2, ...) RemFields($field1, $field2, ...) ... CanAddFields() // maybe the spreadsheet is locked for write, or CanRemFields() // maybe no permission to alter a database table ... AddRow() ModRow() RemRow() ... Open() Close() ... First Question: Does it make sense to use an interface, as above? Part 2/4: CSyncer Next, the thing that does the syncing. class CSyncer __construct(ISyncable $A, ISyncable $B) Push() // sync A to B Pull() // sync B to A Sync() // Push() and Pull() only differ in direction; factor. // Sync()'s job is to make sure that the fields on each side // match, to add fields where appropriate and possible, to // account for different column-orderings, etc., and of // course, to add and remove rows as necessary to sync. ... _A _B Second Question: Does it make sense to define such a class, or am I treading dangerously close to the "Kingdom of Nouns"? Part 3/4: CTranslator? ITranslator? Now, here's where I actually get lost, assuming the above is passable. Sometimes, two ISyncables speak different "dialects." For example, believe it or not, Google Spreadsheets (accessed through the Google Data API "list feed") returns column headers lower-cased and stripped of all spaces and symbols! That is, sys_TIMESTAMP is systimestamp, as far as my code can tell. (Yes, I am aware that the "cell feed" does not strip the name so; however cell-by-cell manipulation is too slow for what I'm doing.) One can imagine other hypothetical examples. Perhaps even the data itself can be in different "dialects." But let's take it as given for now, and not argue this if possible. Third Question: How would you implement "translation"? Note: Taking all this as an exercise, I'm more interested in the "idealized" design, rather than the practical one. (God knows that shipped sailed when I began this project.) Part 4/4: Further Thought Here's my train of thought to demonstrate I've thunk, albeit unfruitfully: First, I thought, primitively, "I'll just modify CMySQL::GetFields() to lower-case and strip field names so they're compatible with Google Spreadsheets." But of course, then my class should really be called, CMySQLForGoogleSpreadsheets, and that can't be right. So, the thing which translates must exist outside of an ISyncable implementor. 
And surely it can't be right to make each translation a method in CSyncer. If it exists outside of both ISyncable and CSyncer, then what is it? (Is it even an "it"?) Is it an abstract class, i.e. abstract CTranslator? Is it an interface, since a translator only does, not has, i.e. interface ITranslator? Does it even require instantiation? e.g. If it's an ITranslator, then should its translation methods be static? (I learned what "late static binding" meant, today.) And, dear God, whatever it is, how should a CSyncer use it? Does it "have" it? Is it, "it"? Who am I? ...am I, "I"? I've attempted to break up the question into sub-questions, but essentially my question is singular: How does one implement an object A that conceptually "links" (has) two objects b1 and b2 that share a common interface B, where certain pairs of b1 and b2 require a helper, e.g. a translator, to be handled by A? Something tells me that I've overcomplicated this design, or violated a principle much higher up. Thank you all very much for your time and any advice you can provide.

    Read the article

  • Is ignoring clients' data during an automatic upgrade acceptable?

    - by A competent translator
    I recently faced a deletion of my calendar and notes data inside a third-party application (Horde) on my online space. When I told the web hosting provider, they replied that they were doing an upgrade to system software (!!) and that client third-party applications are not within their reach or backup policy. They don't even have a backup of my data. Is this practice acceptable from a web hosting provider, even for individual clients? I know that backing up my data is my responsibility, but I anticipated that backup copies made by the host would be available when needed.

    Read the article

  • What is the correct way to reset and load new data into GL_ARRAY_BUFFER?

    - by Geto
    I am using an array buffer for colors data. If I want to load different colors for the current mesh in real time, what is the correct way to do it? At the moment I am doing:

        glBindVertexArray(vao);
        glBindBuffer(GL_ARRAY_BUFFER, colorBuffer);
        glBufferData(GL_ARRAY_BUFFER, SIZE, colorsData, GL_STATIC_DRAW);
        glEnableVertexAttribArray(shader->attrib("color"));
        glVertexAttribPointer(shader->attrib("color"), 3, GL_FLOAT, GL_TRUE, 0, NULL);
        glBindBuffer(GL_ARRAY_BUFFER, 0);

    It works, but I am not sure if this is a good and efficient way to do it. What happens to the previous data? Does it write on top of it? Do I need to call:

        glDeleteBuffers(1, &colorBuffer);
        glGenBuffers(1, &colorBuffer);

    before transferring the new data into the buffer?
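    A common pattern when the buffer size stays the same is to update the existing data store rather than re-specify it, roughly:

        /* update in place: no reallocation, no delete/regenerate pair needed */
        glBindBuffer(GL_ARRAY_BUFFER, colorBuffer);
        glBufferSubData(GL_ARRAY_BUFFER, 0, SIZE, newColorsData);
        glBindBuffer(GL_ARRAY_BUFFER, 0);

    Calling glBufferData again, as in the question, is also legal: it replaces (orphans) the old data store and the driver reclaims that memory, so no glDeleteBuffers/glGenBuffers is required. For data that changes often, GL_DYNAMIC_DRAW is the more appropriate usage hint than GL_STATIC_DRAW.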

    Read the article

  • Low-level 10-finger multi-touch data on the Nexus 7?

    - by Croad Langshan
    I'm considering getting a Nexus 7 to do some multi-touch development on Ubuntu in the run-up to 13.04 (i.e., now :-). What APIs, /dev files, or protocols are available, or could be made available with not too much work on my part? What data is available from the device? The data I want to get my hands on is -- if I can -- the same as I get from /dev/uinput/event* from an Apple Magic Trackpad, viz: positions of all touches (could be as many as 10 simultaneous touches, but much more typically 6 or fewer) their size/pressure (in both x and y directions) their angle their identity -- i.e. an integer that is somewhat reliably preserved across touch events, for as long as a finger doesn't lift off the surface Not all of this data is essential -- but the more of it there is, the merrier.

    Read the article

  • Should interface only be used for behavior and not to show logical data grouped together?

    - by jags
    Should an interface only be used to specify certain behavior? Would it be wrong to use an interface to group logically related data? To me it looks like we should not use an interface to group logically related data, as a structure seems a better fit. A class may be used, but the class name should indicate something like DTO so that the user gets the impression that the class does not have any behavior. Please let me know if my assumption is correct. Also, are there any exceptions where an interface can be used to group logically related data?

    Read the article

  • How can I pull data from PeopleSoft on demand?

    - by trpt4him
    I work in IT at a university and I'm working with about 5 different departments to develop a new process for students to apply to a specific school within the university (not the university as a whole). We're using a web-based college application vendor and adding the applicant questions for the school itself to the main university application. Currently the main application feeds into PeopleSoft. The IT staff here is building a new table to hold just our school's applicant data. I want to be able to access that data from PeopleSoft for use in external applications, but our IT staff doesn't really seem to understand what I'm requesting, as they simply tell me I can have access to the PS query tools. The problem is, I don't want to run just ad hoc queries, I want to be able to connect from outside PeopleSoft and show current data within the external app. I am unable to find documentation or get a clear answer to my question. Does PeopleSoft support access via a web services API or anything similar, and does that sound like the right direction for me to take?

    Read the article

  • Google enables partial response and partial update in its Data Protocol to make updating easier

    Google enables partial response and partial update in its Data Protocol to make working with its APIs easier. In order to speed up its APIs, the Google Data Protocol (which lets developers write applications that work with the data contained in Google products) gained two new features this week, at the experimental stage: partial response and partial update. Together, these two features "can significantly reduce the network, memory and CPU resources" needed to work with Google's APIs. To explain the role of partial response, the Google Data Protocol team gives ...
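    In rough terms (illustrative request shapes, not taken from the announcement): partial response is requested by adding a fields query parameter naming only the elements the client wants, and partial update sends back only the changed elements with an HTTP PATCH:

        GET /feeds/api/videos?v=2&fields=entry(title,link)
            (partial response: only the title and link elements are returned)

        PATCH /feeds/api/videos/VIDEO_ID
        If-Match: W/"etag-of-the-entry"
            (partial update: the request body carries only the elements being changed)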

    Read the article

  • how can I add a custom non-DataTable column to my DataView, in a winforms ADO.net application?

    - by Greg
    Hi, how could I (/is it possible to) add a custom column to my DataView, and in this column display the result of a specific calculation? That is, I currently have a dataGridView which is bound to a DataView based on the DataTable from my database. I'd like to add an additional column to the dataGridView to display a number which is calculated by looking at the current row plus its children rows. In other words, the info for the column isn't derivable from the row data alone. Specific questions might be: a) where to add the column itself? To the DataView, I assume? b) which method/event to trigger the recalculation of the value of this custom column from (/how do I control this)? Thanks. PS. I've also noted that if I use the following code/approach I get an infinite loop...

        // Custom Items
        DataColumn dc = new DataColumn("OverallSize", typeof(long));
        DT_Webfiles.Columns.Add(dc);
        DT_Webfiles.RowChanged += new DataRowChangeEventHandler(DT_Row_Changed);

        private static void DT_Row_Changed(object sender, DataRowChangeEventArgs e)
        {
            e.Row["OverallSize"] = e.Row["OverallSize"] ?? 0;
            e.Row["OverallSize"] = (long)e.Row["OverallSize"] + 1;
        }

    What other approach could avoid this looping? i.e., currently I'm saying "update the value of the custom column when the row changes"; however, changing the row then triggers another 'row has changed' event...
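    One RowChanged-free alternative worth sketching (the relation and column names below are hypothetical): if the children rows live in a related table, an expression column can aggregate over them, and ADO.NET keeps it up to date without any event handler to re-trigger:

        // assumes a DataRelation named "ParentChild" from DT_Webfiles to the child table
        DataColumn overallSize = new DataColumn("OverallSize", typeof(long));
        overallSize.Expression = "Count(Child(ParentChild).Id) + 1";
        DT_Webfiles.Columns.Add(overallSize);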

    Read the article

  • How do I change an attribute in an HTML table's cell if I know the row and column index of the cell?

    - by Mark
    I know nothing about jQuery but am an experienced C++ programmer (not sure if that helps or hurts). I found jQuery code that gives me the row and column index of a cell in an HTML table when a user clicks on that cell. Using such row-column index numbers, I need to change an attribute's value in the previously selected cell and in the cell just clicked. The index numbers are produced and saved with this code:

        var $trCurrent = 0; // Index of cell selected when page opens
        var $tdCurrent = 0; // i.e., previously selected cell

        $(document).ready(function () {
            $("td").click(function () {
                // How to clear previously selected cell's attribute here? ('class', 'recent')
                var oTr = $(this).parents("tr");
                $tdCurrent = oTr.children("td").index(this);
            });
            $("tr").click(function () {
                $trCurrent = $(this)[0].rowIndex;
                // How to set new attributes here? ('class', 'current');
                // and continue work using information from currently selected cell
            });
        });

    Any help or hints would be appreciated. I do not even know if this is the way I should get the index of the row and column. Thanks.
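    For what the follow-up code might look like (a sketch only; it assumes a single table on the page), the saved index numbers can be turned back into a cell selection with .eq() and the class swapped with .attr():

        // rowIdx/colIdx are the saved index numbers; cls is e.g. 'recent' or 'current'
        function setCellClass(rowIdx, colIdx, cls) {
            $('table tr').eq(rowIdx)         // the row by index
                .children('td').eq(colIdx)   // the cell by index
                .attr('class', cls);
        }

        setCellClass(prevRow, prevCol, 'recent');   // mark the previously selected cell
        setCellClass(newRow, newCol, 'current');    // highlight the newly clicked cell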

    Read the article

  • Can't open Websphere Portal 7.0 Login Page after its integration with a custom user registry?

    - by jack_sparrow
    I am currently working on a project related to Websphere Portal 7.0 on Windows Server 2008 R2,64 bit. I am trying to integrate websphere portal with my custom user registry.I have completed all the steps required to implement custom user registry in portal as given in IBM documentation.I am adding my custom repository to the default federated repositories of Portal 7.0.I have made the required changes under VMM Federated CUR Properties section in wkplc.properties.I am using configengine.bat file to configure Portal with user registry. But even completing all the steps,when I am trying to open the Portal Login Page through http://ip-address:port_of_portal/wps/portal ,it is throwing an exception on the console: "Error 500: com.ibm.wps.resolver.data.exceptions.URIProcessingIOException: EJCBD0021E: The URI dav:fs-type1/themes/PageBuilder2/theme.html and parameters [['themeURI'=, 'mime-type'= could not be processed: [EJCBD0021E: The URI dav:fs-type1/themes/PageBuilder2/theme.html and parameters [['themeURI'=, 'mime-type'= could not be processed: EJPSG0002E: Requested Member does not exist.uid=portal,o=defaultWIMFileBasedRealm/null] " and in logs Systemout.log "EJPSB0005E: Exception occurred during creation of the principal with Name uid=portal,o=defaultWIMFileBasedRealm and Principal Type USER caused by com.ibm.portal.puma.MemberNotFoundException: EJPSG0002E: Requested Member does not exist.uid=portal,o=defaultWIMFileBasedRealm/null". Here,"portal" is administrative user in my custom user registry.I am able to access WAS using /ibm/console through user "portal".Please suggest some way to handle this issue.

    Read the article

  • How to set ExportMetaData with multiple values as well as single w/ custom attribute ?

    - by Matthew M.
    I have the following ExportMetaData attributes set on my class: [Export(typeof(IDocumentViewer))] [ExportMetadata("Name", "MyViewer")] [ExportMetadata("SupportsEditing", true)] [ExportMetadata("Formats", DocFormat.DOC, IsMultiple = true)] [ExportMetadata("Formats", DocFormat.DOCX, IsMultiple = true)] [ExportMetadata("Formats", DocFormat.RTF, IsMultiple = true)] I also have a supporting interface: public interface IDocumentViewerMetaData { /// <summary> /// Gets the format. /// </summary> /// <value>The format.</value> IEnumerable<DocFormat> Formats { get; } /// <summary> /// Gets the name of the viewer /// </summary> /// <value>The name.</value> string Name { get; } /// <summary> /// Gets a value indicating whether this viewer supports editing /// </summary> /// <value><c>true</c> if [supports editing]; otherwise, <c>false</c>.</value> bool SupportsEditing { get; } } And of course my ImportMany: [ImportMany(typeof(IDocumentViewer))] public IEnumerable<Lazy<IDocumentViewer, IDocumentViewerMetaData>> _viewers { get; set; } What I would like to do is use a strongly-typed attribute class instead of using the ExportMetaData attribute. I have not figured out a way to do this while also supporting single values (Name, SupportsEditing, in the example above). I envision doing something similiar the following (or whatever is suggested as best): [Export(typeof(IDocumentViewer))] [DocumentViewerMetadata(Name = "MyViewer")] [DocumentViewerMetadata(SupportsEditing = true)] [DocumentViewerMetadata(Format = DocFormat.DOC)] [DocumentViewerMetadata(Format = DocFormat.DOCX)] I am fairly certain that there IS a way to do this, I just haven't found the right way to connect the dots. :) Thank you, Matthew
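    For reference, the usual MEF shape for this (sketched with the same type names as above; the multi-valued part is handled here with an array-typed property instead of repeating the attribute, which is a design choice rather than the only option) is a custom export attribute marked [MetadataAttribute], whose public properties become the export metadata:

        [MetadataAttribute]
        [AttributeUsage(AttributeTargets.Class, AllowMultiple = false)]
        public class DocumentViewerExportAttribute : ExportAttribute
        {
            public DocumentViewerExportAttribute() : base(typeof(IDocumentViewer)) { }

            public string Name { get; set; }
            public bool SupportsEditing { get; set; }
            public DocFormat[] Formats { get; set; }   // surfaces as the Formats metadata value
        }

        [DocumentViewerExport(Name = "MyViewer", SupportsEditing = true,
            Formats = new[] { DocFormat.DOC, DocFormat.DOCX, DocFormat.RTF })]
        public class MyViewer : IDocumentViewer { /* ... */ }

    The IDocumentViewerMetaData view can stay as written, since the array value satisfies the IEnumerable<DocFormat> Formats property.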

    Read the article

  • Having vCam on custom classes instead of the root class.

    - by Hwang
    Maybe some of you guys know about vCam from http://bryanheisey.com/blog/?page_id=22. I'm trying to have the script run from a custom class instead of a MovieClip in the library. After some trying I failed, so I went back to keeping the MC in the library and loading it from the project's root ActionScript file. It works fine when I run the MC from the root file, but to keep my ActionScript files better organized I was thinking of calling it from a custom class (where I can control the vCam), and then calling that custom class from the root ActionScript file. It seems it only works from the root file, though. I'm not sure whether I'm missing some code between the two custom classes, or whether it's simply not written to run that way. If it's not, that's fine too; I'd just like things better organized. Or if you have any idea how to work around this, please do tell me. In case you need my code for the two classes, here it is:

        package {
            import flash.display.MovieClip;
            import classes.vCamera;

            public class main extends MovieClip {
                private var vC2:vCamera = new vCamera();

                public function main():void {
                    addChild(vC2);
                }
            }
        }

        package classes {
            import flash.display.MovieClip;
            import flash.display.Stage;
            import flash.events.Event;

            public class vCamera extends MovieClip {
                private var vC:vCam = new vCam();

                public function vCamera():void {
                    addEventListener(Event.ADDED_TO_STAGE, add2Stage);
                }

                private function add2Stage(event:Event):void {
                    vC.x = stage.stageWidth / 2;
                    vC.y = stage.stageHeight / 2;
                    vC.rotation = 15;
                    addChild(vC);
                }
            }
        }

    Read the article

  • jQuery/Javascript: Click event on a checkbox and the 'checked' attribute.

    - by b. e. hollenbeck
    The code: $('input.media-checkbox').live('click', function(e){ e.preventDefault(); var that = $(this); if (that.attr('checked') == 'checked'){ var m = that.attr('media'); var mid = 'verify_' + m; that.parents('div.state-container').find('ul.' + mid).remove(); that.attr('checked', false); } else { var url = AJAX_URL; $.ajax({ type: 'GET', url: url, dataType: 'html', success: function(data){ that.parents('li').siblings('li.verification').children('div.media-verification').append(data).fadeIn(500); that.attr('checked', 'checked'); } }); } return false; }); I am ajaxing in a form, then firing the click event on relevant checkboxes to ajax in another partial if necessary. The form is inserted nicely, and the click events are fired, checking the boxes that need to be checked and firing the second ajax, since the checked attribute of the checkbox was initially false. What's curdling my cheese is if I UNCHECK one of those boxes. Despite e.preventDefault(), the checked attribute is set to false BEFORE the test, so the if statement always executes the else statement. I've also tried this with $.is(':checked'), so I'm completely baffled. It appears that unchecked - checked state reads the original state, but checked - unchecked doesn't. Any help?

    Read the article

  • How to implement custom JSF component for drawing chart?

    - by Roman
    I want to create a component which can be used like: <mc:chart data="#{bean.data}" width="200" height="300" /> where #{bean.data} returns a collection of objects, a chart model object, or something else that can be represented as a chart (to keep it simple, let's assume it returns a collection of integers). I want this component to generate HTML like this: <img src="someimg123.png" width="200" height="300"/> The problem is that I have a method which receives data and returns an image, like: public RenderedImage getChartImage(Collection<Integer> data) { ... } and I also have a component for drawing a dynamic image: <o:dynamicImage width="200" height="300" data="#{bean.readyChartImage}"/> This component generates exactly the HTML I need, but its parameter is an array of bytes or a RenderedImage, i.e. it needs a method in the bean like this: public RenderedImage getReadyChartImage() { ... } So, one approach is to use a propertyChangedListener on submit to set the data (Collection<Integer>) for drawing the chart and then use the <o:dynamicImage /> component. But I'd like to create my own component which receives the data and draws the chart. I'm using Facelets, but that's not really important. Any ideas how to create the desired component? P.S. One solution I was thinking about is not to use <o:dynamicImage/> but to use a servlet to stream the image. However, I don't know how to implement that correctly, how to tie the JSF component to the servlet, or how to cache already-built chart images (regenerating the same image on every request could cause performance problems, IMHO), and so on.
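    One hedged outline of the servlet route (JSF 2.x API; the class, component family and servlet path are invented for this sketch) is a component that only emits the <img> tag and defers the pixels to a small chart servlet keyed by the component's client id:

        import java.io.IOException;
        import javax.faces.component.FacesComponent;
        import javax.faces.component.UIComponentBase;
        import javax.faces.context.FacesContext;
        import javax.faces.context.ResponseWriter;

        @FacesComponent("mc.Chart")
        public class ChartComponent extends UIComponentBase {

            @Override
            public String getFamily() { return "mc.Chart"; }

            @Override
            public void encodeBegin(FacesContext context) throws IOException {
                ResponseWriter writer = context.getResponseWriter();
                // the servlet renders (or serves a cached) PNG for this component instance
                String src = context.getExternalContext().getRequestContextPath()
                        + "/chartServlet?id=" + getClientId(context);

                writer.startElement("img", this);
                writer.writeAttribute("src", src, null);
                writer.writeAttribute("width", getAttributes().get("width"), null);
                writer.writeAttribute("height", getAttributes().get("height"), null);
                writer.endElement("img");
            }
        }

    The chart servlet would then look up the data (for example stashed in the session under the client id during encode), call something like getChartImage(), and stream the PNG; caching the rendered image by client id avoids regenerating the same chart on every request.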

    Read the article
