Search Results

Search found 10463 results on 419 pages for 'required'.


  • Windows Server Certified as Secure Global Desktop Clients with EBS 12

    - by Steven Chan (Oracle Development)
    Oracle Secure Global Desktop provides secure access to centralized applications—Microsoft Windows, UNIX, mainframe, and midrange—from a wide variety of popular client devices, including Windows PCs, Oracle Solaris workstations, Linux PCs, and thin clients. Secure Global Desktop is certified for use with Microsoft Windows Server 2003 and 2008 virtualized environments acting as desktop clients connecting to Oracle E-Business Suite Release 12 environments.  32-bit and 64-bit versions of Microsoft Windows Server are certified. These combinations may also be used in conjunction with Oracle VM, if required. How does this work? For example, a Secure Desktop Client can connect to a Secure Global Desktop environment.  That environment can be running Microsoft Server 2008.  That environment can be used, in turn, as a "desktop client" to access Oracle E-Business Suite Release 12.1.3. Requirements EBS 12.1.3 + Windows Server 2008 R2 (64-bit) Secure Global Desktop version 4.6 or higher Internet Explorer 8 (32-bit and 64-bit) or Internet Explorer 9 (32-bit and 64-bit) JRE Plug-in 1.6.0_32 (32-bit and 64-bit) or later 1.6 releases EBS 12.1.3 + Windows Server 2008 (32-bit) Secure Global Desktop version 4.6 or higher Internet Explorer 8 (32-bit) or Internet Explorer 9 (32-bit) JRE Plug-in 1.6.0_32 (32-bit) or later 1.6 releases EBS 12.1.3 + Windows Server 2003 R2 (64-bit) Secure Global Desktop version 4.6 or higher Internet Explorer 8 (32-bit and 64-bit) JRE Plug-in 1.6.0_32 (32-bit and 64-bit) or later 1.6 releases EBS 12.1.3 + Windows Server 2003 R2 (32-bit) Secure Global Desktop version 4.6 or higher Internet Explorer 8 (32-bit) JRE Plug-in 1.6.0_32 (32-bit) or later 1.6 releases References Oracle Secure Global Desktop with E-Business Suite Release 12.1.3 (Note 1491211.1) Related Articles Oracle VM Templates Available for E-Business Suite 12.1.3 Support Policies for Virtualization Technologies and Oracle E-Business Suite Webcast Replay Available: Virtualization and Cloud Deployments of Oracle E-Business Suite

    Read the article

  • Failure to troubleshoot a juju charm deployment

    - by Bruno Pereira
    My environments.yaml looks like this: environments: test: type: local control-bucket: juju-a14dfae3830142d9ac23c499395c2785999 admin-secret: 6608267bbd6b447b8c90934167b2a294999 default-series: oneiric juju-origin: distro data-dir: /home/bruno/projects/juju juju bootstrap runs perfectly: 2011-11-22 19:19:31,999 INFO Bootstrapping environment 'test' (type: local)... 2011-11-22 19:19:32,004 INFO Checking for required packages... 2011-11-22 19:19:33,584 INFO Starting networking... 2011-11-22 19:19:34,058 INFO Starting zookeeper... 2011-11-22 19:19:34,283 INFO Starting storage server... 2011-11-22 19:19:40,051 INFO Initializing zookeeper hierarchy 2011-11-22 19:19:40,247 INFO Starting machine agent (origin: distro)... [sudo] password for bruno: 2011-11-22 19:23:16,054 INFO Environment bootstrapped 2011-11-22 19:23:16,079 INFO 'bootstrap' command finished successfully Deploying a known good charm is accepted (I tried it with one that I am trying to create): juju deploy --repository=/home/bruno/projects/charms_repo/ local:teamspeak 2011-11-22 19:28:49,929 INFO Charm deployed as service: 'teamspeak' 2011-11-22 19:28:49,962 INFO 'deploy' command finished successfully After this I can see that juju debug-log shows activity and I can see the network indicator going on and off and activity on my hard-disk. Wait... Looking at juju status I get: services: teamspeak: charm: local:oneiric/teamspeak-1 relations: {} units: teamspeak/0: machine: 0 public-address: 192.168.122.226 relations: {} state: start_error juju debug-log does not help and I have no files under /var/log/juju or /var/lib/juju. Last juju debug-log only shows this: 2011-11-22 19:45:20,790 Machine:0: juju.agents.machine DEBUG: Units changed old:set(['wordpress/0']) new:set(['wordpress/0', 'teamspeak/0']) 2011-11-22 19:45:20,823 Machine:0: juju.agents.machine DEBUG: Starting service unit: teamspeak/0 ... 2011-11-22 19:45:21,137 Machine:0: juju.agents.machine DEBUG: Downloading charm local:oneiric/teamspeak-1 to /home/bruno/projects/juju/bruno-test/charms 2011-11-22 19:45:22,115 Machine:0: juju.agents.machine DEBUG: Starting service unit teamspeak/0 2011-11-22 19:45:22,133 Machine:0: unit.deploy INFO: Creating container teamspeak-0... 2011-11-22 19:47:04,586 Machine:0: unit.deploy INFO: Container created for teamspeak/0 2011-11-22 19:47:04,781 Machine:0: unit.deploy DEBUG: Charm extracted into container 2011-11-22 19:47:04,801 Machine:0: unit.deploy DEBUG: Starting container... 2011-11-22 19:47:07,086 Machine:0: unit.deploy INFO: Started container for teamspeak/0 2011-11-22 19:47:07,107 Machine:0: juju.agents.machine INFO: Started service unit teamspeak/0 How can I troubleshoot what is happening here?
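
    Since the unit reaches start_error inside an LXC container created by the local provider, the charm's hook output usually has to be read from inside that container rather than from the host's /var/log/juju. The commands below are only a hedged sketch for the pyjuju local provider; the container name and log path are assumptions (use whatever lxc-ls reports and the path your juju release actually writes to):

        # List the containers the local provider created for this environment
        sudo lxc-ls
        # Open a console on the unit's container (name is a guess; substitute the one lxc-ls shows)
        sudo lxc-console -n bruno-test-teamspeak-0
        # Or, if SSH into the unit works, read the unit agent / hook log from inside the container
        juju ssh teamspeak/0
        tail -n 100 /var/log/juju/unit-teamspeak-0.log   # path inside the container; may differ by release

    Typically the install or start hook's traceback in that log shows why the unit ended up in start_error.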

    Read the article

  • Visual Studio Shortcut: Surround With

    - by Jeff Widmer
    I learned a new Visual Studio keyboard shortcut today that is really awesome; the “Surround With” shortcut.  You can trigger the Surround With context menu by pressing the Ctrl-K, Ctrl-S key combination when on a line of code. Ctrl-K, Ctrl-S means to hold down the Control key and then press K and then while still holding down the Control key press S. Here is where this comes in handy: You type a line of code and then realize you need to put it within an if statement block. So you type “if” and hit tab twice to insert the if statement code snippet.  Then you highlight the previous line of code that you typed, and then either drag and drop it into the if-then block or cut and paste it.  That is not too bad but it is a lot of extra key clicks and mouse moves. Now try the same with the Surround With keyboard shortcut.  Just highlight that line of code that you just typed and press Ctrl-K, Ctrl-S and choose the if statement code snippet, hit tab, and POW!... you are done!  No more code moving/indenting required. Here is what the Surround With context menu looks like: Just up or down arrow inside the drop down list to the code snippet that you want to surround your currently selected text with.  Did I mention this is AWESOME! Now it is so simple to surround lines of code with an if-then block or a try-catch-finally block... things that usually took several key clicks and maybe one or two mouse moves. And this works in both Visual Studio 2008 and Visual Studio 2010 which means it has been around for a long time and I never knew about it.   Technorati Tags: Visual Studio Keyboard Shortcut

    Read the article

  • /users/tags should contain scores

    - by Sean Patrick Floyd
    I am implementing some simple JavaScript/bookmarklet based apps that show some reputation info, including the score in the User's top tags (roughly based on this previous bookmarklet of mine). Now I can get a user's top tags (using the API), and I can also get the per-tag score if the user is logged in, by dynamically parsing the tag's top users page. But it costs me one AJAX request per tag and I have to download 10+k to extract a single numeric value. It would save a lot of traffic if the tags in <api>/users/<userid>/tags had a score field. The data seems to be there, after all the top users pages use it, so it would just be a question of exposing the data. Suggested structure: "tags": [ { "name": { "description": "name of the tag", "values": "string", "optional": false, "suggested_buffer_size": 25 }, "score": { "description": "tag score, sum of up votes for answers on non-wiki questions", "values": "32-bit signed integer", "optional": false }, "count": { "description": "tag count, exact meaning depends on context", "values": "32-bit signed integer", "optional": false }, "restricted_to": { "description": "user types that can make use of this tag, lack of this field indicates it is useable by all", "values": "one of anonymous, unregistered, registered, or moderator", "optional": true }, "fulfills_required": { "description": "indicates whether this tag is one of those that is required to be on a post", "values": "boolean", "optional": false }, "user_id": { "description": "user associated with this tag, depends on context", "values": "32-bit signed integer", "optional": true } } ]

    Read the article

  • Partnering with your Applications – The Oracle AppAdvantage Story

    - by JuergenKress
    So, what is Oracle AppAdvantage? A practical approach to adopting cloud, mobile, social and other trends A guided path to aligning IT more closely with business objectives Maximizing the value of existing investments in applications A layered approach to simplifying IT, building differentiation and bringing innovation All of the above? Enhance the value of your existing applications investment with #Oracle #AppAdvantage Aligning biz and IT expectations on Simplifying IT, building Differentiation and Innovation #AppAdvantage Adopt a pace layered approach to extracting biz value from your apps with #AppAdvantage Bringing #cloud, #social, #mobile to your apps with #Oracle #AppAdvantage Embracing Situational IT In the next IT Leaders Editorial, Rick Beers discusses the necessity of IT disruption and #AppAdvantage. Rick Beers sheds light on the Situational Leadership and the path to success #AppAdvantage. Rick Beers draws parallels with CIO’s strategic thinking and #Oracle #AppAdvantage approach. Do you have this paper in your summer reading list? Aligning biz and IT #AppAdvantage What does Situational leadership have to do with Oracle AppAdvantage? Catch the next piece in Rick Beers’ monthly series of IT Leaders Editorial and find out. #AppAdvantage Middleware Minutes with Howard Beader – August edition In the quarterly column, @hbeader discusses impact of #cloud, #mobile, #fastdata on #middleware Making #cloud, #mobile, #fastdata a part of your IT strategy with #middleware What keeps the #oracle #middleware team busy? Find out in the inaugural post in quarterly update on #middleware Recent #middleware news update along with a preview of things to come from #Oracle, in @hbeader ‘s quarterly column In his inaugural post, Howard Beader, senior director for Oracle Fusion Middleware, discusses the recent industry trends including mobile, cloud, fast data, integration and how these are shaping the IT and business requirements. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: AppAdvantage,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • unmet dependencies and broken count>0 problem

    - by Simon
    I tried installing fbreader, following all the steps, but ended up with unmet dependencies. I also think a file is referenced in two locations at once, hence killing it. Any ideas how I can fix it? I've done a lot of research and tried: simon@simon-Studio-1558:~$ sudo apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following packages were automatically installed and are no longer required: dkms patch Use 'apt-get autoremove' to remove them. The following extra packages will be installed: libzlcore0.12 The following NEW packages will be installed: libzlcore0.12 0 upgraded, 1 newly installed, 0 to remove and 61 not upgraded. 6 not fully installed or removed. Need to get 0 B/270 kB of archives. After this operation, 811 kB of additional disk space will be used. Do you want to continue [Y/n]? y (Reading database ... 179860 files and directories currently installed.) Unpacking libzlcore0.12 (from .../libzlcore0.12_0.12.10dfsg-4_i386.deb) ... dpkg: error processing /var/cache/apt/archives/libzlcore0.12_0.12.10dfsg-4_i386.deb (--unpack): trying to overwrite '/usr/lib/libzlcore.so.0.12.10', which is also in package libzlcore 0.12.10-1 No apport report written because MaxReports is reached already dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) Errors were encountered while processing: /var/cache/apt/archives/libzlcore0.12_0.12.10dfsg-4_i386.deb E: Sub-process /usr/bin/dpkg returned an error code (1) Sorry for the formatting, but it basically isn't liking: dpkg: error processing /var/cache/apt/archives/libzlcore0.12_0.12.10dfsg-4_i386.deb (--unpack): trying to overwrite '/usr/lib/libzlcore.so.0.12.10', which is also in package libzlcore 0.12.10-1 Any ideas? Also I don't care about keeping the program, but the error is stopping sudo apt-get remove fbreader from working too.
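
    For reference, a dpkg file conflict like this one is often cleared by forcing the overwrite of the disputed file and then letting apt finish the dependency repair. This is a generic sketch based on the error above, not an answer from the original thread, so treat the package and file names as illustrative:

        # Let dpkg overwrite the file that both libzlcore packages claim to own
        sudo dpkg -i --force-overwrite /var/cache/apt/archives/libzlcore0.12_0.12.10dfsg-4_i386.deb
        sudo apt-get -f install
        # Alternatively, remove the older libzlcore package that owns the same file first
        sudo dpkg --remove --force-depends libzlcore
        sudo apt-get -f install

    Once the conflict is resolved, sudo apt-get remove fbreader should work again if the program is no longer wanted.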

    Read the article

  • SQL SERVER – 2012 RC0 Various Resources and Downloads

    - by pinaldave
    Microsoft SQL Server 2012 Release Candidate 0 (RC0) Microsoft SQL Server 2012 RC0 enables a cloud-ready information platform that will help organizations unlock breakthrough insights across the organization. Microsoft SQL Server 2012 Express RC Microsoft SQL Server 2012 Express RC0 is a powerful and reliable free data management system that delivers a rich set of features, data protection, and performance for embedded applications, lightweight Web Sites, applications, and local data stores. Microsoft SQL Server 2012 Semantic Language Statistics RC0 The Semantic Language Statistics Database is a required component for the Statistical Semantic Search feature in Microsoft SQL Server 2012 Semantic Language Statistics RC0. Microsoft SQL Server 2012 Release Candidate 0 (RC0) Manageability Tool Kit The Microsoft SQL Server 2012 Release Candidate 0 (RC0) Manageability Tool Kit is a collection of stand-alone packages which provide additional value for Microsoft SQL Server 2012 Release Candidate 0 (RC0). Microsoft SQL Server 2012 PowerPivot for Microsoft Excel 2010 Release Candidate 0 (RC0) Microsoft PowerPivot for Microsoft Excel 2010 provides ground-breaking technology; fast manipulation of large data sets, streamlined integration of data, and the ability to effortlessly share your analysis through Microsoft SharePoint Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Database, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Ameristar Wins with Oracle GoldenGate’s Heterogeneous Real-Time Data Integration

    - by Irem Radzik
    Today we issued a press release about another successful project with Oracle GoldenGate, this time at Ameristar. Ameristar is a casino gaming company and needed a single data integration solution to connect multiple heterogeneous systems to its Teradata data warehouse. The project involves integration of Ameristar’s promotional and gaming data from 14 data sources across its 7 casino hotel properties in real time into a central Teradata data warehouse. The source systems include the Aristocrat gaming and MGT promotional management platforms running on Microsoft SQL Server 2000 databases. As you may have noticed, there was no Oracle Database involved in this project, but Ameristar’s IT leadership knew that GoldenGate’s strong heterogeneous and real-time data integration capabilities are the right technology for their data warehousing project. With GoldenGate, Ameristar was able to reduce data latency to the enterprise data warehouse, and its marketing teams use this real-time customer information to improve the overall customer experience. Ameristar customers receive more targeted and timely campaign offers, and the company has more up-to-date visibility into its financial metrics. One other key benefit the company experienced with GoldenGate is lower operational costs. The previous data capture solution Ameristar used was trigger-based and required a lot of effort to manage. They needed dedicated IT staff to maintain it. With GoldenGate, the solution runs seamlessly without needing a fully-dedicated staff, giving the IT team at Ameristar more resources for their other IT projects. If you want to learn more about GoldenGate and the latest features for Oracle Database and non-Oracle databases, please watch our on demand webcast about Oracle GoldenGate 11g Release 2.

    Read the article

  • again again again…. it is Oracle Open World 2012

    - by JuergenKress
    Again… again I crashed my knee during kite surfing. Again the right knee, again the outside meniscus, again the same doctor, again the same operation, again they could sew my meniscus, again the same physiotherapy… again I will miss OOW. OOW session you should not miss Oracle PartnerNetwork Exchange Middleware stream Focus on SOA and BPM Focus on BPM For OFM Partner Advisory Councils please contact [email protected] Keynotes and General sessions to attend: Thomas Kurian: Tuesday, October 2 8:45 a.m. 9:45 a.m., Moscone North, Hall D Hasan Rizvi: General session middleware: Tuesday, October 3 10:15 am 11:15 am, Moscone North, Hall D If you can’t make it to San Francisco watch the keynotes live on-demand Tips and tricks for OOW Plan your visit well in advance! Which keynotes & session do you want to attend? Demo Grounds are highly recommended and the best of OOW! Which 1:1 meetings do you want to arrange? Attend a Partner or Customer Advisory Council? Attend a Country or Community Reception? Attire during OOW: casual clothing, comfortable shoes and light luggage! Do not forget to drink water. Sign an international travel and health insurance before you leave home! What we want from you! Send your tweets: twitter.com/soacommunity @soacommunity and share your pictures at http://www.facebook.com/soacommunity SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit  www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: OOW,Oracle Open World,SOA Community,Oracle SOA,Oracle BPM,BPM,Community,OPN,Jürgen Kress

    Read the article

  • BPMN is dead, long live BPEL!

    - by JuergenKress
    “BPMN is dead, long live BPEL” was the title of our panel discussion during the SOA & BPM Integration Days 2011. My summary of the discussion was just published at JAXenter (in German). If you want to learn more about SOA & BPM, make sure you register for our upcoming conference October 12th & 13th 2011 in Düsseldorf. The speakers include the top SOA and BPM experts in Germany: Thilo Frotscher & Kornelius Fuhrer & Björn Hardegen & Nicolai Josuttis & Michael Kopp & Dr. Dirk Krafzig & Jürgen Kress & Frank Leymann & Berthold Maier & Hajo Normann & Max J. Pucher & Bernd Rücker & Dr. Gregor Scheithauer & Danilo Schmiedel & Guido Schmutz & Dirk Slama & Heiko Spindler & Volker Stiehl & Bernd Trops & Clemens Utschig-Utschig & Tammo van Lessen & Dr. Hendrik Voigt & Torsten Winterberg For details, please become a member of the SOA Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required) Blog Twitter LinkedIn Mix Forum Wiki Website

    Read the article

  • SQL SERVER – Import CSV into Database – Transferring File Content into a Database Table using CSVexpress

    - by pinaldave
    One of the most common data integration tasks I run into is a desire to move data from a file into a database table.  Generally the user is familiar with his data, the structure of the file, and the database table, but is unfamiliar with data integration tools and therefore views this task as something that is difficult.  What these users really need is a point and click approach that minimizes the learning curve for the data integration tool.  This is what CSVexpress (www.CSVexpress.com) is all about!  It is based on expressor Studio, a data integration tool I’ve been reviewing over the last several months. With CSVexpress, moving data between data sources can be as simple as providing the database connection details, describing the structure of the incoming and outgoing data and then connecting two pre-programmed operators.   There’s no need to learn the intricacies of the data integration tool or to write code.  Let’s look at an example. Suppose I have a comma separated value data file with data similar to the following, which is a listing of terminated employees that includes their hiring and termination date, department, job description, and final salary. EMP_ID,STRT_DATE,END_DATE,JOB_ID,DEPT_ID,SALARY 102,13-JAN-93,24-JUL-98 17:00,Programmer,60,"$85,000" 101,21-SEP-89,27-OCT-93 17:00,Account Representative,110,"$65,000" 103,28-OCT-93,15-MAR-97 17:00,Account Manager,110,"$75,000" 304,17-FEB-96,19-DEC-99 17:00,Marketing,20,"$45,000" 333,24-MAR-98,31-DEC-99 17:00,Data Entry Clerk,50,"$35,000" 100,17-SEP-87,17-JUN-93 17:00,Administrative Assistant,90,"$40,000" 334,24-MAR-98,31-DEC-98 17:00,Sales Representative,80,"$40,000" 400,01-JAN-99,31-DEC-99 17:00,Sales Manager,80,"$55,000" Notice the concise format used for the date values, the fact that the termination date includes both date and time information, and that the salary is clearly identified as money by the dollar sign and digit grouping.  In moving this data to a database table I want to express the dates using a format that includes the century since it’s obvious that this listing could include employees who left the company in both the 20th and 21st centuries, and I want the salary to be stored as a decimal value without the currency symbol and grouping character.  Most data integration tools would require coding within a transformation operation to effect these changes, but not expressor Studio.  Directives for these modifications are included in the description of the incoming data. Besides starting the expressor Studio tool and opening a project, the first step is to create connection artifacts, which describe to expressor where data is stored.  For this example, two connection artifacts are required: a file connection, which encapsulates the file system location of my file; and a database connection, which encapsulates the database connection information.  With expressor Studio, I use wizards to create these artifacts. First click New Connection > File Connection in the Home tab of expressor Studio’s ribbon bar, which starts the File Connection wizard.  In the first window, I enter the path to the directory that contains the input file.  Note that the file connection artifact only specifies the file system location, not the name of the file. Then I click Next and enter a meaningful name for this connection artifact; clicking Finish closes the wizard and saves the artifact. 
To create the Database Connection artifact, I must know the location, or instance name, of the target database and have the credentials of an account with sufficient privileges to write to the target table.  To use expressor Studio’s features to the fullest, this account should also have the authority to create a table. I click the New Connection > Database Connection in the Home tab of expressor Studio’s ribbon bar, which starts the Database Connection wizard.  expressor Studio includes high-performance drivers for many relational database management systems, so I can simply make a selection from the “Supplied database drivers” drop down control.  If my desired RDBMS isn’t listed, I can optionally use an existing ODBC DSN by selecting the “Existing DSN” radio button. In the following window, I enter the connection details.  With Microsoft SQL Server, I may choose to use Windows Authentication rather than account credentials.  After clicking Next, I enter a meaningful name for this connection artifact and clicking Finish closes the wizard and saves the artifact. Now I create a schema artifact, which describes the structure of the file data.  When expressor reads a file, all data fields are typed as strings.  In some use cases this may be exactly what is needed and there is no need to edit the schema artifact.  But in this example, editing the schema artifact will be used to specify how the data should be transformed; that is, reformat the dates to include century designations, change the employee and job IDs to integers, and convert the salary to a decimal value. Again a wizard is used to create the schema artifact.  I click New Schema > Delimited Schema in the Home tab of expressor Studio’s ribbon bar, which starts the Delimited Schema wizard.  In the first window, I click Get Data from File, which then displays a listing of the file connections in the project.  When I click on the file connection I previously created, a browse window opens to this file system location; I then select the file and click Open, which imports 10 lines from the file into the wizard. I now view the file’s content and confirm that the appropriate delimiter characters are selected in the “Field Delimiter” and “Record Delimiter” drop down controls; then I click Next. Since the input file includes a header row, I can easily indicate that fields in the file should be identified through the corresponding header value by clicking “Set All Names from Selected Row.” Alternatively, I could enter a different identifier into the Field Details > Name text box.  I click Next and enter a meaningful name for this schema artifact; clicking Finish closes the wizard and saves the artifact. Now I open the schema artifact in the schema editor.  When I first view the schema’s content, I note that the types of all attributes in the Semantic Type (the right-hand panel) are strings and that the attribute names are the same as the field names in the data file.  To change an attribute’s name and type, I highlight the attribute and click Edit in the Attributes grouping on the Schema > Edit tab of the editor’s ribbon bar.  This opens the Edit Attribute window; I can change the attribute name and select the desired type from the “Data type” drop down control.  In this example, I change the name of each attribute to the name of the corresponding database table column (EmployeeID, StartingDate, TerminationDate, JobDescription, DepartmentID, and FinalSalary).
Then for the EmployeeID and DepartmentID attributes, I select Integer as the data type, for the StartingDate and TerminationDate attributes, I select Datetime as the data type, and for the FinalSalary attribute, I select the Decimal type. But I can do much more in the schema editor.  For the datetime attributes, I can set a constraint that ensures that the data adheres to some predetermined specifications; a starting date must be later than January 1, 1980 (the date on which the company began operations) and a termination date must be earlier than 11:59 PM on December 31, 1999.  I simply select the appropriate constraint and enter the value (1980-01-01 00:00 as the starting date and 1999-12-31 11:59 as the termination date). As a last step in setting up these datetime conversions, I edit the mapping, describing the format of each datetime type in the source file. I highlight the mapping line for the StartingDate attribute and click Edit Mapping in the Mappings grouping on the Schema > Edit tab of the editor’s ribbon bar.  This opens the Edit Mapping window in which I either enter, or select, a format that describes how the datetime values are represented in the file.  Note the use of Y01 as the syntax for the year.  This syntax is the indicator to expressor Studio to derive the century by setting any year later than 01 to the 20th century and any year before 01 to the 21st century.  As each datetime value is read from the file, the year values are transformed into century and year values. For the TerminationDate attribute, my format also indicates that the datetime value includes hours and minutes. And now to the Salary attribute. I open its mapping and in the Edit Mapping window select the Currency tab and the “Use currency” check box.  This indicates that the file data will include the dollar sign (or in Europe the Pound or Euro sign), which should be removed. And on the Grouping tab, I select the “Use grouping” checkbox and enter 3 into the “Group size” text box, a comma into the “Grouping character” text box, and a decimal point into the “Decimal separator” character text box. These entries allow the string to be properly converted into a decimal value. By making these entries into the schema that describes my input file, I’ve specified how I want the data transformed prior to writing to the database table and completely removed the requirement for coding within the data integration application itself. Assembling the data integration application is simple.  Onto the canvas I drag the Read File and Write Table operators, connecting the output of the Read File operator to the input of the Write Table operator. Next, I select the Read File operator and its Properties panel opens on the right-hand side of expressor Studio.  For each property, I can select an appropriate entry from the corresponding drop down control.  Clicking on the button to the right of the “File name” text box opens the file system location specified in the file connection artifact, allowing me to select the appropriate input file.  I indicate also that the first row in the file, the header row, should be skipped, and that any record that fails one of the datetime constraints should be skipped. I then select the Write Table operator and in its Properties panel specify the database connection, normal for the “Mode,” and the “Truncate” and “Create Missing Table” options.  
If my target table does not yet exist, expressor will create the table using the information encapsulated in the schema artifact assigned to the operator. The last task needed to complete the application is to create the schema artifact used by the Write Table operator.  This is extremely easy as another wizard is capable of using the schema artifact assigned to the Read Table operator to create a schema artifact for the Write Table operator.  In the Write Table Properties panel, I click the drop down control to the right of the “Schema” property and select “New Table Schema from Upstream Output…” from the drop down menu. The wizard first displays the table description and in its second screen asks me to select the database connection artifact that specifies the RDBMS in which the target table will exist.  The wizard then connects to the RDBMS and retrieves a list of database schemas from which I make a selection.  The fourth screen gives me the opportunity to fine tune the table’s description.  In this example, I set the width of the JobDescription column to a maximum of 40 characters and select money as the type of the LastSalary column.  I also provide the name for the table. This completes development of the application.  The entire application was created through the use of wizards and the required data transformations specified through simple constraints and specifications rather than through coding.  To develop this application, I only needed a basic understanding of expressor Studio, a level of expertise that can be gained by working through a few introductory tutorials.  expressor Studio is as close to a point and click data integration tool as one could want and I urge you to try this product if you have a need to move data between files or from files to database tables. Check out CSVexpress in more detail.  It offers a few basic video tutorials and a preview of expressor Studio 3.5, which will support the reading and writing of data into Salesforce.com. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • New Whitepaper: Best Practices for Gathering EBS Database Statistics

    - by Elke Phelps (Oracle Development)
    Most Oracle Applications DBAs and E-Business Suite users understand the importance of accurate database statistics.  Missing, stale or skewed statistics can adversely affect performance.  Oracle E-Business Suite statistics should only be gathered using FND_STATS or the Gather Statistics concurrent request. Gathering statistics with DBMS_STATS or the desupported ANALYZE command may result in suboptimal execution plans for E-Business Suite. Our E-Business Suite Performance Team has been busy implementing and testing new features for gathering statistics using FND_STATS in Oracle E-Business Suite databases.  The new features and guidelines for when and how to gather statistics are published in the following whitepaper: Best Practices for Gathering Statistics with Oracle E-Business Suite (Note 1586374.1) The new white paper details the following options for gathering statistics using FND_STATS and the Gather Statistics concurrent request: History Mode - back up existing statistics prior to gathering new statistics GATHER_AUTO Option - gather statistics for tables based upon % change Histograms - collect statistics for histograms AUTO Sampling - use the new FND_STATS feature that supports the AUTO option for using AUTO sample size Extended Statistics - use the new FND_STATS feature that supports the creation of column groups and automatic statistics collection on the column groups when table statistics are gathered Incremental Statistics - gather incremental statistics for partitioned tables The new white paper also includes examples and performance test cases for the following: Extended Optimizer Statistics Incremental Statistics Gathering Concurrent Statistics Gathering This white paper includes details about the standalone Oracle E-Business Suite Release 11i and 12 patches that are required to take advantage of this new functionality. Your feedback is welcome We would be very interested in hearing about your experiences with these new options for gathering statistics.  Please feel free to post your comments here or drop us a line privately. Related Oracle OpenWorld 2013 Session Getting Optimal Performance from Oracle E-Business Suite (CON8485) Related My Oracle Support Notes Collecting Statistics with Oracle EBS 11i and R12 (Note 368252.1) Non-EBS Related Blogs, White Papers and My Oracle Support Notes  Oracle Optimizer Blog Understanding Optimizer Statistics (white paper) Fixed Objects Statistics (GATHER_FIXED_OBJECTS_STATS) Considerations (Note 798257.1)
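
    As a quick illustration of the supported approach the white paper describes (the schema, table and account values below are placeholders, and the exact FND_STATS parameters to use are documented in Note 1586374.1), statistics can be gathered from SQL*Plus with FND_STATS rather than DBMS_STATS:

        # Connect as the APPS user and gather statistics with FND_STATS (values are illustrative)
        sqlplus apps/<apps_password> <<'EOF'
        exec fnd_stats.gather_schema_stats('AP');
        exec fnd_stats.gather_table_stats('AP', 'AP_INVOICES_ALL');
        EOF

    In most environments the same work is scheduled through the Gather Statistics concurrent request rather than run interactively.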

    Read the article

  • Iterative Conversion

    - by stuart ramage
    Question Received: I am toying with the idea of migrating the current information first and the remainder of the history at a later date. I have heard that the conversion tool copes with this, but haven't found any information on how it does. Answer: The Toolkit will support iterative conversions as long as the original master data key tables (the CK_* tables) are not cleared down from Staging (the already converted Transactional Data would need to be cleared down) and the Production instance being migrated into is actually Production (we have migrated into a pre-prod instance in the past and then unloaded this and loaded it into the real PROD instance, but this will not work for your situation. You need to be migrating directly into your intended environment). In this case the migration tool will still know all about the original keys and the generated keys for the primary objects (Account, SA, etc.) and as such it will be able to link the data converted as part of a second pass onto these entities. It should be noted that this may result in the original opening balances potentially being displayed with an incorrect value (if we are talking about Financial Transactions) and also that care will have to be taken to ensure that all related objects are aligned (e.g., a Bill must have a set of bill segments, meter reads and financial transactions, and these entities cannot exist independently). It should also be noted that subsequent runs of the conversion tool would need to be 'trimmed' to ensure that they are only doing work on the objects affected. You would not want to revalidate and migrate all Person, Account, SA, SA/SP, SP and Premise details since this information has already been processed, but you would definitely want to run the affected transactional record validation and keygen processes. There is no real "hard-and-fast" rule around this processing since it is specific to each implementation's needs, but the majority of the effort required should be detailed in the Conversion Tool section of the online help (under Administration / The Conversion Tool). The major rule is to ensure that you only run the steps and validation/keygen steps that you need and do not do a complete rerun for your subsequent conversion.

    Read the article

  • Advanced Oracle SOA Suite Oracle Open World 2012 SOA Presentations

    - by JuergenKress
    The list below only includes SOA presentations delivered or moderated by Oracle SOA Product Management. For a complete list of Oracle Open World 2012 presentations, please go here. Oracle SOA Suite, the Most Capable Tool for Every Possible Integration Challenge Using the Right Tools, Techniques, and Technologies for Integration Projects Administration and Management Essentials for Oracle SOA Suite 11g Extreme Performance and Scale Delivered by SOA on Oracle Exalogic Successful Application Integration and SOA Projects: Customer Panel How to Integrate Cloud Applications with Oracle SOA Suite Transforming the Utilities Industry with Oracle Fusion Middleware Cloud and On-Premises Applications Integration, Using Oracle Integration Adapters Delivering High Value B2B Gateways with Oracle SOA Suite 11g Implementing Successful Healthcare Applications with Oracle SOA Suite Migrating to Oracle SOA Suite: A Sun Java CAPS Customer Experience If Mobile Enablement Is on Your Mind, Oracle SOA Suite and Oracle Service Bus Can Help Building Shared Services Infrastructure with Oracle Service Bus: Customer Panel SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit  www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: OOW,OOW presentations,OOW soa ppt,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • Cannot locate packages for Citrix install

    - by Noel Evans
    I'm trying to get a Citrix receiver installed on to Ubuntu 14.04 (64 bit) following Ubuntu's docs. The first line of instructions say to get these required packages: sudo apt-get install libmotif4:i386 nspluginwrapper lib32z1 libc6-i386 libxp6:i386 libxpm4:i386 libasound2:i386 But if I paste in that line, I get this error: Reading state information... Done E: Unable to locate package libmotif4 E: Unable to locate package libxp6 E: Unable to locate package libxpm4 E: Unable to locate package libasound2 My repository settings are below. Is there anything I'm missing in there? Otherwise what do I need to do to install these? $ cat /etc/apt/sources.list deb cdrom:[Ubuntu 14.04 LTS _Trusty Tahr_ - Release amd64 (20140417)]/ precise main restricted deb cdrom:[Ubuntu 14.04 LTS _Trusty Tahr_ - Release amd64 (20140417)]/ trusty main restricted deb-src http://archive.ubuntu.com/ubuntu trusty main restricted #Added by software-properties deb http://archive.ubuntu.com/ubuntu/ trusty main restricted universe multiverse deb-src http://archive.ubuntu.com/ubuntu/ trusty main restricted universe multiverse #Added by software-properties deb http://security.ubuntu.com/ubuntu/ trusty-security main restricted universe multiverse deb-src http://security.ubuntu.com/ubuntu/ trusty-security main restricted universe multiverse #Added by software-properties deb http://archive.ubuntu.com/ubuntu/ trusty-updates main restricted universe multiverse deb-src http://archive.ubuntu.com/ubuntu/ trusty-updates main restricted universe multiverse #Added by software-properties deb http://archive.ubuntu.com/ubuntu/ trusty-proposed main universe restricted multiverse deb http://archive.ubuntu.com/ubuntu/ trusty-backports main universe restricted multiverse
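
    A hedged note rather than a confirmed fix: on 64-bit 14.04 the usual reasons apt cannot find :i386 packages are that the i386 foreign architecture is not enabled or the package lists have not been refreshed after editing sources. The commands below only illustrate how to check and correct that; they may not cover libmotif4 if it additionally needs a component not yet downloaded from the multiverse repository shown above:

        # Check whether i386 is already enabled as a foreign architecture
        dpkg --print-foreign-architectures
        # Enable it if it is missing, then refresh the package lists
        sudo dpkg --add-architecture i386
        sudo apt-get update
        # Retry the Citrix prerequisites
        sudo apt-get install libmotif4:i386 nspluginwrapper lib32z1 libc6-i386 libxp6:i386 libxpm4:i386 libasound2:i386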

    Read the article

  • Questions, Knowledge Checks and Assessments

    - by ted.henson
    Questions should be used to reinforce concepts throughout the title. You have the option to include questions in the course, in assessments, in Knowledge Checks, or in any combination. Questions are required for creating knowledge checks and assessments. It is important to remember that questions that are not in assessments are not tracked. Be sure to structure your outline so that questions are added to the appropriate assignable unit. I usually recommend that questions appear directly below their related section. This serves two purposes. First, it helps ensure that the related content and question stay adjacent to one another. Secondly, it ensures that when the "link to subject" option is used it will relate back to the related content. Knowledge checks are created using the questions that have been added to the related assignable unit. Use Knowledge Checks to give users an additional opportunity to review what they have learned. Knowledge Check allows users to check their own knowledge without being tracked or scored. Many users like having this self-check option, especially if they know they are going to be tested later. Each assignable unit can have its own Knowledge Check. Assessments provide a way to measure knowledge or understanding of the course material. The results of each assessment are scored and tracked. Assessments are created using the questions that have been added to the relevant assignable unit(s). Each assignable unit, including the Title AU, can have multiple assessments. Consider how your knowledge paths will be structured when planning your assessments. For instance, you can create a multiple-activity knowledge path, with multiple assessments from the same title or assignable unit. Also remember, in Manager an assessment can be either a pre- or post-assessment. Pre-assessments allow the student to discover what is already known in a specific topic or subject and are important if the personal course feature is being used. Post-assessments allow you to test the student's knowledge or understanding after completing the material.

    Read the article

  • Understanding node.js: some real-life examples

    - by steweb
    Hi all! As a curious web developer I've been hearing about node.js for several months and (just) now I'd like to learn it and, most of all, understand its "engine". So, as a real newbie about node.js I'm going to follow some tutorials. And as with every new technology on the internet, finding a very good and exhaustive tutorial is like looking for a needle in a haystack :) My "big question" can be split into these 3 sub-questions: I know node.js can be very useful to build web chats. But, apart from this example (and from the helloworld one :D), how could I use it? What are the real-life examples that make me think, e.g. "oh, it's fantastic, I could really integrate it into my daily projects"? I also know it implements some JS specifications. Is it required to deeply know other programming languages apart from JS? Where can I find a good reference (basically, I don't want to search "node.js reference" on google hoping to be lucky enough to get some good websites)? Thanks everyone!
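
    As a concrete (if minimal) illustration of the kind of real-life use the question asks about, node.js is commonly used to serve HTTP with non-blocking, event-driven I/O; the sketch below simply writes and runs a tiny server, with the file name and port being arbitrary choices:

        # Create a minimal HTTP server and run it with node
        cat > hello.js <<'EOF'
        // Respond to every request with a plain-text greeting
        var http = require('http');
        http.createServer(function (req, res) {
          res.writeHead(200, {'Content-Type': 'text/plain'});
          res.end('Hello from node.js\n');
        }).listen(8080);
        console.log('Listening on http://localhost:8080/');
        EOF
        node hello.js

    The same event-driven model is why chat servers, streaming APIs and other long-lived-connection workloads are a natural fit for node.js.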

    Read the article

  • Glassfish alive or dead? WebLogic SE cost is less than Glassfish!

    - by JuergenKress
    This has been a hot discussion in the community over the last few days! Send us your opinion on twitter @wlscommunity #Glassfish #WebLogicCommunity We posted the GlassFishStrategy.pptx at our WebLogic Community Workspace (WebLogic Community membership required). Please read also the Java EE and GlassFish Server Roadmap Update Bruno Borges Another great article covering story about #GlassFish. Comments starting to be reasonable ;-) 6 facts helped a lot http://adtmag.com/articles/2013/11/08/oracle-drops-glassfish.aspx … Adam Bien What Oracle Could Do For GlassFish Now: Move the sources to GitHub (GitHub is the most popular collaboration p... http://bit.ly/1d1uo24 JAXenter.com Oracle evangelist: “GlassFish Open Source Edition is not dead” http://jaxenter.com/oracle-evangelist-glassfish-open-source-edition-is-not-dead-48830.html … GlassFish 6 Facts About #GlassFish Announcement and the Future of #JavaEE http://bit.ly/1bbSVPf via @brunoborges David Blevins In support of our #GlassFish friends and open source in general: Feed the Fish http://www.tomitribe.com/blog/2013/11/feed-the-fish/ … #JavaEE #opensource #manifesto GlassFish GlassFish Server Open Source Edition 4.1 is scheduled for 2014. Version 5.0 as impl for #JavaEE8 https://blogs.oracle.com/theaquarium/entry/java_ee_and_glassfish_server … #Community focused C2B2 Consulting C2B2 continues to offer support for your operational #JEE applications running on #GlassFish http://blog.c2b2.co.uk/2013/11/oracle-dropping-commercial-support-of.html … #Java Markus Eisele RT @InfoQ: #GlassFish Commercial Edition is Dead http://bit.ly/17eFB0Z < at least they agree to my points... Adam Bien suggests: Move the sources to GitHub (GitHub is the most popular collaboration platform). It is more likely for an individual to contribute via GitHub, than the current infrastructure. Introduce a business-friendlier license, e.g. the Apache license. Companies interested in providing added value (and commercial support) on top of existing sources would appreciate it. Implement GitHub-based, open source, CI system with nightly builds. Introduce a transparent voting process / pull-request acceptance process. Release more frequently. Keep https://glassfish.java.net as the main hub. C2B2 offers Glassfish support by Steve Millidge Oracle have just announced that commercial support for GlassFish 4 will not be available from Oracle. In light of this announcement I thought I would put together some thoughts about how I see this development. I think the key word in this announcement is "commercial"; nowhere does Oracle announce the "death of GlassFish". On the contrary, Oracle reaffirms: GlassFish Server Open Source Edition continues to be the strategic foundation for Java EE reference implementation going forward. And for developers, updates will be delivered as needed to continue to deliver a great developer experience for GlassFish Server Open Source Edition so GlassFish is not about to go away soon. In a similar fashion RedHat do not provide commercial support for WildFly and only provide commercial support for JBoss EAP. Admittedly JBoss EAP and WildFly are much closer together than GlassFish and WebLogic but WildFly and JBoss EAP are absolutely NOT the same thing. The key going forward to the viability of GlassFish as a production platform is how the GlassFish community develops: How often does the community release binary builds? How open is the community to bug fixes? How much engineering resource does Oracle commit to GlassFish? 
At this stage we just don't know the answers to these questions. If the GlassFish open source project continues on its current trajectory without a commercial support offering then I don't see much of a problem. Oracle just have to work harder to sell migration paths to WebLogic in the same way as RedHat have to sell migration paths from WildFly to JBoss EAP. In the meantime C2B2 continues to offer support for your operational JEE applications running on GlassFish and we will endeavour to work with the community to get any bugs fixed. The key difference is we can no longer back our Expert Support with a support contract from Oracle for patches and fixes for any release greater than 3.x. Read the complete article here. 6 Facts About GlassFish Announcement By Bruno.Borges Fact #1 - GlassFish Open Source Edition is not dead GlassFish Server Open Source Edition will remain the reference implementation of Java EE. The current trunk is where an implementation for Java EE 8 will flourish, and this will become the future GlassFish 5.0. Calling "GlassFish is dead" does no good to the Java EE ecosystem. The GlassFish Community will remain strong towards the future of Java EE. Without a revenue-focused mindset, this might actually help the GlassFish community to shape the next version, and set it free from any ties with commercial decisions. Fact #2 - OGS support is not over As I said before, GlassFish Server Open Source Edition will continue. The main change is that there will be no more future commercial releases of Oracle GlassFish Server. New and existing OGS 2.1.x and 3.1.x commercial customers will continue to be supported according to the Oracle Lifetime Support Policy. In parallel, I believe there's no other company in the Java EE business that offers commercial support to more than one build of a Java EE application server. This new direction can actually help customers and partners, simplifying decisions in commercial negotiations. Fact #3 - WebLogic is not always more expensive than OGS Oracle GlassFish Server ("OGS") is a build of GlassFish Server Open Source Edition bundled with a set of commercial features called GlassFish Server Control and license bundles such as Java SE Support. OGS has, at the moment of this writing, a list price of US$ 5,000 / processor. One point that some bloggers are mentioning is that WebLogic is more expensive than this. Fact 3.1: it is not necessarily the case. The initial edition of WebLogic is called "Standard Edition" and falls into a policy where some “Standard Edition” products are licensed on a per socket basis. As of the current price list, US$ 10,000 / socket. If you do the math, you will realize that WebLogic SE can actually be significantly more cost effective than OGS, and a customer can save money if running on a CPU with 4 cores or more for example. Quote from the price list: “When licensing Oracle programs with Standard Edition One or Standard Edition in the product name (with the exception of Java SE Support, Java SE Advanced, and Java SE Suite), a processor is counted equivalent to an occupied socket; however, in the case of multi-chip modules, each chip in the multi-chip module is counted as one occupied socket.” For more details speak to your Oracle sales representative - this is clearly at list price and every customer typically has a relationship with Oracle (like they do with other vendors) and different contractual details may apply. 
And although OGS has always been production-ready for Java EE applications, it is no secret that WebLogic has always been a more enterprise-grade, mission-critical application server than OGS since BEA. Different editions of WLS provide features and upgrade irons like the WebLogic Diagnostic Framework, Work Managers, Side by Side Deployment, ADF and TopLink bundled license, Web Tier (Oracle HTTP Server) bundled license, Fusion Middleware stack support, Oracle DB integration features, Oracle RAC features (such as GridLink), Coherence Management capabilities, Advanced HA (Whole Service Migration and Server Migration), Java Mission Control, Flight Recorder, Oracle JDK support, etc. Fact #4 - There’s no major vendor supporting community builds of Java EE app servers There are no major vendors providing support for community builds of any Open Source application server. For example, IBM used to provide community support for builds of Apache Geronimo, but not anymore. Red Hat does not commercially support builds of WildFly and, if I remember correctly, never supported community builds of the former JBoss AS. Oracle has never commercially supported GlassFish Server Open Source Edition builds. Tomitribe appears to be the exception to the rule, offering commercial support for Apache TomEE. Fact #5 - WebLogic and GlassFish share several Java EE implementations It has been no secret that although GlassFish and WebLogic share some JSR implementations (as stated in The Aquarium announcement: JPA, JSF, WebSockets, CDI, Bean Validation, JAX-WS, JAXB, and WS-AT) and WebLogic understands GlassFish deployment descriptors, they are not from the same codebase. Fact #6 - WebLogic is not for GlassFish what JBoss EAP is for WildFly WebLogic is a closed-source offering. It is commercialized through a license-based plus support fee model. OGS, although built from Open Source code, has had the same commercial model as WebLogic. Still, one cannot compare GlassFish/WebLogic to WildFly/JBoss EAP. It is simply not the same case, since Oracle has had two different products from different codebases. The comparison should be limited to GlassFish Open Source / Oracle GlassFish Server versus WildFly / JBoss EAP. Read the complete article here WebLogic Partner Community For regular information become a member in the WebLogic Partner Community please visit: http://www.oracle.com/partners/goto/wls-emea ( OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: Glassfish,training,WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • Quickly Add Watermark To Multiple PDF Files Using “Batch PDF Watermark”

    - by Kavitha
    Want to add a watermark to your PDF files with a single click? You can use the freeware Batch PDF Watermark. Batch PDF Watermark is a super cool application that lets you add image or text watermarks to multiple files at a time. The Office 2010 style ribbon user interface of the application is very easy to use and provides many options to configure watermark properties like font styles, positioning, transparency levels, rotation of the watermark image, scaling of the watermark image, etc. Before running the watermark process, you can even preview it. To select multiple PDF files to watermark you can use the “Add Files” option to hand-pick the required files or the “Add Folder” option to choose all the PDF files available in the folder. Download Batch PDF Watermark [via liferocks] This article, titled Quickly Add Watermark To Multiple PDF Files Using “Batch PDF Watermark”, was originally published at Tech Dreams. Grab our rss feed or fan us on Facebook to get updates from us.

    Read the article

  • SDLC/Deployment/Documentation ERP/framework that minimizes developer misery

    - by foampile
    I was wondering if there are favorite SDLC/Deployment/Documentation/Versioning ERP/frameworks that work with popular SDLC methodologies, such as Agile, that minimize developer exposure to what most programmers hate to do most -- PAPERWORK? Often, release management is extremely inefficient and there is a lot of data duplication across documents that are required to accompany changes -- e.g. when submitting a deployment request, I must list all files and their revisions from source control -- but why is that necessary if every file revision I check in is pinned to a work order and a deployment request is just a list of work orders -- such info should be able to be pulled from the system automatically without me needing to extract it and report it. And then there is a backout plan -- well just do everything in reverse from what you did to deploy -- why do you need specific instructions? The same applies for documentation... So I am curious if there is an overall, all-encompassing ERP that includes source control and minimizes paperwork by sharing centralized data across different documents (such as documentation being pulled from javadoc without needing to write it separately) associated with SDLC yet does not compromise structure and control over the code base and release management.

    Read the article

  • again again again…. it is Oracle Open World 2012

    - by JuergenKress
    Again… again I crashed my knee during kite surfing. Again the right knee, again the outside meniscus, again the same doctor, again the same operation, again they could sew my meniscus, again the same physiotherapy… again I will miss OOW. OOW session you should not miss Oracle PartnerNetwork Exchange Middleware stream CAF Overall (WebLogic Server, Tuxedo, Coherence, Java Cloud Service, GlassFish) Oracle WebLogic Server Oracle Coherence Java Cloud Service GlassFish Traffic Director Tuxedo For OFM Partner Advisory Councils please contact [email protected] Keynotes and General sessions to attend: Thomas Kurian: Tuesday, October 2 8:45 a.m. 9:45 a.m., Moscone North, Hall D Hasan Rizvi: General session middleware: Tuesday, October 3 10:15 am 11:15 am, Moscone North, Hall D If you can’t make it to San Francisco watch the keynotes live on-demand Tips and tricks for OOW Plan your visit well in advance! Which keynotes & session do you want to attend? Demo Grounds are highly recommended and the best of OOW! Which 1:1 meetings do you want to arrange? Attend a Partner or Customer Advisory Council? Attend a Country or Community Reception? Attire during OOW: casual clothing, comfortable shoes and light luggage! Do not forget to drink water. Sign an international travel and health insurance before you leave home! What we want from you! Send your tweets: http://twitter.com/wlscommunity @wlscommunity and share your pictures at http://www.facebook.com/WebLogicCommunity WebLogic Partner Community For regular information become a member in the WebLogic Partner Community please visit: http://www.oracle.com/partners/goto/wls-emea ( OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: Oracle Open World,OOW,WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • Where's my MD.070?

    - by Dave Burke
    In a previous Blog entry titled “Where’s My MD.050” I discussed how the OUM Analysis Specification is the “new-and-improved” version of the more traditional Functional Design Document (or MD.050 for Oracle AIM stalwarts). In a similar way, the OUM Design Specification is an evolution of what we used to call the Technical Design Document (or MD.070). Let’s dig a little deeper… In a traditional software development process, the “Design Task” would include all the time and resources required to design the software component(s), AND to create the final Technical Design Document. In OUM, however, we have created distinct Tasks for pure design work, along with an optional Task for pulling all of that work together into a Design Specification. Some of the Design Tasks will result in their own Work Products (e.g. an Architecture Description), whilst other Tasks act as “placeholders” for a specific work effort. In any event, the DS.140 Design Specification can include a combination of unique content, along with links to other Work Products, which together enable a complete technical description of the component, or solution, being designed. So next time someone asks “where’s my MD.070”, the short answer would be to tell them to read the OUM Task description for DS.140 – Design Specification!

    Read the article

  • JavaCV IplImage to LWJGL Texture

    - by rendrag
    As a side project I've been attempting to make a dynamic display (for example a screen within a game) that shows images from my webcam. I've been messing around with JavaCV and LWJGL for the past few months and have a basic understanding of how they both work. I found the following after scouring Google, but I get an error that the ByteBuffer isn't big enough.

        IplImage img = cam.getFrame();
        ByteBuffer buffer = img.asByteBuffer();
        int textureID = glGenTextures(); //Generate texture ID
        glBindTexture(GL_TEXTURE_2D, textureID); //Bind texture ID
        //I don't know how much of the following is necessary
        //Setup wrap mode
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL12.GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL12.GL_CLAMP_TO_EDGE);
        //Setup texture scaling filtering
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        //Send texture data to OpenGL - this is the line that actually does stuff and that OpenGL has a problem with
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL12.GL_BGR, GL_UNSIGNED_BYTE, buffer);

    That last line throws this:

        Exception in thread "Thread-0" java.lang.IllegalArgumentException: Number of remaining buffer elements is 144, must be at least 921600. Because at most 921600 elements can be returned, a buffer with at least 921600 elements is required, regardless of actual returned element count
            at org.lwjgl.BufferChecks.throwBufferSizeException(BufferChecks.java:162)
            at org.lwjgl.BufferChecks.checkBufferSize(BufferChecks.java:189)
            at org.lwjgl.BufferChecks.checkBuffer(BufferChecks.java:230)
            at org.lwjgl.opengl.GL11.glTexImage2D(GL11.java:2845)
            at tests.TextureTest.getTexture(TextureTest.java:78)
            at tests.TextureTest.update(TextureTest.java:43)
            at lib.game.AbstractGame$1.run(AbstractGame.java:52)
            at java.lang.Thread.run(Thread.java:679)
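
    The exception is a size mismatch: the driver expects 921,600 bytes (which matches a 640x480 frame at 3 bytes per pixel), but the buffer returned by asByteBuffer() only exposes 144 remaining bytes, so it evidently is not wrapping the pixel data. Below is a minimal sketch of one way to upload the frame; it is written against LWJGL 2 and the older JavaCV API, and the JavaCV calls it relies on (width(), height(), getByteBuffer(), and the 2012-era package name), as well as the cam.getFrame() helper from the question, are assumptions to verify against the versions actually in use rather than a definitive fix.

        import java.nio.ByteBuffer;

        import org.lwjgl.opengl.GL12;
        import static org.lwjgl.opengl.GL11.*;

        // Assumed package name for older JavaCV releases; newer releases differ.
        import com.googlecode.javacv.cpp.opencv_core.IplImage;

        public final class WebcamTexture {

            /**
             * Uploads one webcam frame as a 2D texture and returns the texture id.
             * Assumes a 3-channel BGR frame, which is what OpenCV/JavaCV normally delivers.
             */
            public static int upload(IplImage img) {
                int width  = img.width();
                int height = img.height();

                // getByteBuffer() (assumed available) wraps the frame's native pixel
                // data in a direct ByteBuffer; rewinding makes the remaining bytes
                // cover the whole image (width * height * 3 for BGR), which is the
                // quantity LWJGL's buffer-size check is complaining about.
                ByteBuffer pixels = img.getByteBuffer();
                pixels.rewind();

                int textureID = glGenTextures();
                glBindTexture(GL_TEXTURE_2D, textureID);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL12.GL_CLAMP_TO_EDGE);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL12.GL_CLAMP_TO_EDGE);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

                // Width and height are taken from the frame itself, so the size the
                // driver expects (width * height * 3 bytes for GL_BGR) matches what
                // the buffer actually holds.
                glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
                        GL12.GL_BGR, GL_UNSIGNED_BYTE, pixels);
                return textureID;
            }
        }

    If the image still comes out skewed after this, a common culprit is row padding: when widthStep() is larger than width * 3, each row carries extra bytes, and copying the frame row by row into a ByteBuffer of exactly width * height * 3 bytes would be the next thing to try.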

    Read the article

  • SPARC64 VII+ Processor Core License Factor Reduced by 33%

    - by john.shell
    The Oracle processor core license factor has been a popular topic over the last few months.  For those partners new to Oracle software licensing, the processor core license factor determines the number of licensed CPUs that are required when running Oracle software (for products charged on a per-CPU basis) on multi-core processors. My last entry talked about the core factor reduction for our T3 processor.  The core license factor for our newly announced SPARC64 VII+ processor is 0.5, which is a 33% reduction from the 0.75 rate used with our SPARC64 VI and VII processors. What does this mean for our partners?  Increased opportunity.  This change, similar to our T3-based systems, means that our hardware is the preferred platform for Oracle software. Still a little dizzy on the breadth of Oracle's software offering?  Do a simple scan of Oracle's software price lists. Consider this your target market. This change allows you to focus on total solution price or price/performance, not server prices or per-core performance (a standard IBM sales tactic). That's the offensive side of the game.  Don't forget your defense.  One of the biggest customer benefits around the M-Series is investment protection.  The combination of a simple processor/board upgrade, along with a reduction in the processor core license factor, makes upgrading one of the best financial moves for our customers. One reminder: the update to the processor core license factor only applies to the new VII+ processor - NOT the SPARC64 VI or VII processors.  You can find the official table here.
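
    As a purely hypothetical illustration of the arithmetic (the server configuration below is made up for the example; per-CPU licensing multiplies total cores by the core factor and rounds any fraction up): a SPARC64 VII+ server with 4 processors of 4 cores each has 16 cores in total. At the old 0.75 factor that is 16 x 0.75 = 12 processor licenses; at the new 0.5 factor it is 16 x 0.5 = 8 licenses for the same software, which is the one-third reduction described above.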

    Read the article

  • Growing Into Enterprise Architecture

    - by pat.shepherd
    I am writing this post as I sit in an Enterprise Architecture class, specifically on the Oracle Enterprise Architecture Framework (OEAF).  I have long believed that SOA’s key strength is that it is the first IT approach that blends or unifies business and technology.  That is a common view and is certainly valid, but it is not completely true (or at least not completely accurate).  As my personal view of EA grows, I realize more than ever that doing EA is FAR MORE than creating a reference architecture, creating a physical architecture or picking a technology to standardize on.  Those are parts of the puzzle, but not the whole puzzle by any stretch. I am now a firm believer that the various EA frameworks out there provide the rigor and structure required to bridge business strategy / vision to IT strategy / vision. The flow goes something like this: Business Strategy –> Business / Application / Information / Technology Architecture –> SOA Reference Architecture –> SOA Functional Architecture.  Governance is imbued throughout to help map, measure and verify the business-to-IT coherence. With those in place, then (and only then) can SOA fulfill its potential to be more than an integration strategy, more than a reuse strategy, and also a foundation for tying the results of IT to the business vision. Fortunately, EA is an ongoing process that it is never too late to start, with an understanding of frameworks such as TOGAF, FEA, or OEAF.  EA is also never-ending: it always needs to be applied, and even once a full-blown Enterprise Architecture is established it needs to be constantly evolved.  For those who are getting deeper into EA as a discipline, there is plenty of runway to grow as your company/customer begins to look more seriously at EA. I will close with a pointer to a great book I have recently read on this subject: Enterprise Architecture as Strategy (http://www.amazon.com/Enterprise-Architecture-Strategy-Foundation-Execution/dp/1591398398/ref=sr_1_1?ie=UTF8&s=books&qid=1268842865&sr=1-1)

    Read the article

< Previous Page | 182 183 184 185 186 187 188 189 190 191 192 193  | Next Page >