Search Results

Search found 132748 results on 5310 pages for 'oracle application server'.


  • SQLAuthority News – 5 days of SQL Server Reporting Service (SSRS) Summary

    - by Pinal Dave
    Earlier this week, I wrote a five-day series on SQL Server Reporting Services. The series is based on the book Beginning SSRS by Kathi Kellenberger. Supporting files are available as a free download from the www.Joes2Pros.com web site. I just completed reading the book – it is a fantastic book and I am loving every bit of it. I knew SSRS and I also knew how it works; however, I did not know the fine details of how to get the maximum out of the subject. This book has personally filled the gaps that were in my knowledge. Here is a question back to you – how many of you are working with SSRS and, when you have a question, are left with no help online? There are not enough blogs or books available on this subject. Kathi has written this book in a way that attempts to solve your day-to-day problems and makes you think about how to take those daily problems to the next level. Here is the article series I have written on this subject, available to read: SQL SERVER – What is SSRS and Why SSRS is asked for in many Job Opening? Determine if SSRS 2012 is Installed on your SQL Server Installing SQL Server Data Tools and SSRS Create a Very First Report with the Report Wizard How to Add an Identity Column to a Table in SQL Server Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Reporting Service, SSRS

    Read the article

  • SQLAuthority News – SQL Server Interview Questions And Answers Book Summary

    - by pinaldave
    Today we use computers for various activities, motor vehicles for traveling to places, and mobile phones for conversation. How many of us can claim the invention of the microprocessor, the basic wheel, or the telegraph? Similarly, this book was not written overnight. The journey of this book goes back many years, with many individuals to thank. To begin with, we want to thank all those interviewers who reject interviewees by saying they need to know ‘the key things’ regardless of having high grades in class. The whole concept of interview questions and answers revolves around knowing those ‘key things’. The core concept of this book will continue to evolve over time. I am sure many of you will come along with us on this journey and submit your suggestions to make this book a key reference for anybody who wants to start with SQL Server. Today we want to acknowledge the fact that you will help us keep this book alive with the latest updates. We want to thank everyone who participates in this journey with us. Though each of these chapters is geared towards convenience, we highly recommend reading every section irrespective of the role you are in, since each section has some interesting trivia about working with SQL Server. In the industry the role of the accidental DBA (especially with SQL Server) is very common. Hence, if you have performed the role of DBA for a short stint and want to brush up your fundamentals, the upcoming sections will be a great review. Table Of Contents Database Concepts With Sql Server Common Generic Questions & Answers Common Developer Questions Common Tricky Questions Miscellaneous Questions On Sql Server 2008 Dba Skills Related Questions Data Warehousing Interview Questions & Answers General Best Practices [Amazon] | [Flipkart] Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL Interview Questions and Answers, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • Ethernet card not detected on Ubuntu Server 12.04

    - by Dana
    My onboard ethernet isn't detected after a re-install of Server 12.04. For reasons I won't get into here, I had to put the server's drive into another machine to install Ubuntu, then swap it back into the server. So the server starts up fine, except for the "Waiting for network configuration" message. I read in another article that Server, by default, doesn't handle new MAC addresses for hardware changes dynamically, unlike Ubuntu Desktop, but a look at /etc/udev/rules.d/70-persistent-net.rules shows only one ethernet interface. Shouldn't it show both the old and the new? lspci -vv shows an ethernet interface, so what the heck is going on? I should mention that the onboard LAN is enabled in the BIOS. And I know this isn't important, but all this started when I changed some network configuration settings in webmin before the re-install. It couldn't download any updates, so I tinkered a little. Broke it, installed FreeNAS, which worked, but I didn't like it, then went back to Ubuntu Server, and now I'm in this pickle. Thanks for any advice!

    Read the article

  • Oracle Tutor: Installing Is Not Implementing or Why CIO's should care about End User Adoption

    - by emily.chorba(at)oracle.com
    Eighteen months ago I showed the Tutor and UPK Productive Day One overview to a CIO friend of mine. He works in a manufacturing business which had recently been purchased by a global conglomerate. He had a major implementation coming up, but said that the corporate team would be coming in to handle the project. I asked about their end user training approach, but it was unclear to him at the time. We were in touch over the course of the implementation project. The major activities were data conversion, how-to workshops, General Ledger realignment, and report definition. The message was "Here's how we do it at corporate, and here's how you are going to do it." In short, it was an application software installation. The corporate team had experience and confidence, and the effort through go-live was smooth. Some weeks after cutover, problems with customer orders began to surface. Orders could not be fulfilled in a timely fashion. The problem got worse, and the corporate emergency team was called in. After many days of analysis, the issue was tracked down and resolved, but by then there were weeks of backorders, and their customer base was impacted in a significant way. It took three months of constant handholding of customers by the sales force for good will to be reestablished, and this itself diminished a new product sales push. I learned of these results in a recent conversation with the CIO. I asked him what the solution to the problem was, and he replied that it was twofold. The first component was a lack of understanding by customer service reps about how a particular data item in order entry was to be filled in, resulting in discrepant order data. The second component was that product planners were using this data, along with data from other sources, to fill in a spreadsheet based on the abandoned system. This spreadsheet was the primary input for planning data. The result of these two inaccuracies was that key parts were not being ordered to effectively meet demand and the lead time for finished goods was pushed out by weeks. I reminded him about the Productive Day One approach, and its focus on methodology and tools for end user training. A more collaborative solution workshop would have identified proper application use in the new environment. Using UPK to document correct transaction entry would have provided effective guidelines to the CSRs for data entry. Using Oracle Tutor to document the manual tasks would have eliminated the use of an out-of-date spreadsheet. As we talked this over, he said, "I wish I knew when I started what I know now." Effective end user adoption is the most critical and most overlooked success factor in applications implementations. When the switch is thrown at go-live, employees need to know how to use the new systems to do their jobs. Their jobs are made up of manual steps and systems steps which must be performed in the right order for the implementing organization to operate smoothly. Use Tutor to document the manual policies and procedures, use UPK to document the systems tasks, and develop this documentation in conjunction with a solution workshop. This is the path to developing effective end user training material for a smooth implementation. Learn More For more information about Tutor, visit Oracle.com or the Tutor Blog. Post your questions at the Tutor Forum. Chuck Jones, Product Manager, Oracle Tutor and BPM

    Read the article

  • Oracle Hyperion Planning: New version 11.1.2 now available.

    - by Oracle Aplicaciones
    Oracle Hyperion Planning is a centralized, Excel- and web-based solution for planning, budgeting and forecasting that integrates financial and operational planning processes. The application provides deep insight into business operations and their impact on finances, through tight integration of the financial and operational planning models. The new version, Oracle Hyperion Planning 11.1.2, is now available and adds functionality focused on improving the budgeting process in companies. This release concentrates its improvements on giving the system greater usability, a shorter budget cycle, more sophisticated workflows, greater control over approvals, Microsoft Office and Excel-based budgeting, new modules, broader market reach, budget books, and faster information. Among the main improvements in this version we could highlight:
    1. Improvements in form definition, such as tabs and sections within the forms themselves, validations that control the budgeted data, and ad-hoc analysis directly on the web forms, all aimed at making budgeting simpler for the user and providing the desired view of the budget.
    2. Improvements in Office integration: tasks are integrated in both Excel and Outlook, so users can track the steps and tasks to be carried out in the budgeting process.
    3. A complete budgeting process in Excel: from accessing the task list to submitting and approving the budget.
    4. Improved process management (workflow) functionality that allows more sophisticated validations and approvals, supporting matrix organizations with multiple reviewers and approvals that can change depending on the information the user enters; for example, if a user enters an investment of more than €500,000, the approval is handled by the Capex manager rather than the regional manager.
    These are just some of the new features in release 11.1.2. For more information about Oracle Hyperion Planning, click here.

    Read the article

  • How to manage our own bots on the server?

    - by Nikolay Kuznetsov
    There is a game server, and people play in game rooms of 2, 3 or 4. When a client connects to the server, he can send a request specifying the number of people, or a range, he wants to play with. One of these values is valid: {2-4, 2-3, 3-4, 2, 3, 4}. So the server maintains 3 separate queues for game rooms with 2, 3 and 4 people, which we can denote #2, #3 and #4. It works the following way. If a client sends the request 3-4, then two separate requests are added to queues #3 and #4. If queue #3 now has 3 requests from different people, a game room with 3 players is created, and all other requests from those players are removed from all queues. Right now not many people are online simultaneously, so they apply for a game, wait for some time, and quit because the game does not start in a reasonable time. So, to begin with, a simple bot has been developed, and the server code needs to be patched to run a bot if someone requests a game but no humans are online. Input: a request from a human {2-4, 2-3, 3-4, 2, 3, 4}. Output: the number of bots to run, and the time to wait before connecting each one, depending on the queue state. The problem is that I don't know how to manage the bots properly on the server. Example: #3 has 1 request and #4 has 1 request. The request from the user is {3,4}; then the server can add one bot to play a game with 3 people, or two bots to play a game of 4. Example: #3 has 1 request and #4 has 2 requests. The request from the user is {3,4}; then in each case just one bot is needed, so a game with 4 players is preferable.
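    To make the queue bookkeeping above concrete, here is a minimal Java sketch of one way the three queues and the bot count could be modeled. Everything here is hypothetical (the class MatchmakingSketch and the methods enqueue and pickRoomSize are invented for illustration, not taken from the real game server); it simply picks the room size that needs the fewest bots and prefers the larger room on a tie, which matches the two examples given above.

        import java.util.ArrayDeque;
        import java.util.Deque;
        import java.util.HashMap;
        import java.util.Map;

        // Hypothetical sketch of the queue/bot logic described above.
        public class MatchmakingSketch {

            // One FIFO queue of waiting player ids per room size (#2, #3, #4).
            private final Map<Integer, Deque<String>> queues = new HashMap<>();

            public MatchmakingSketch() {
                for (int size = 2; size <= 4; size++) {
                    queues.put(size, new ArrayDeque<String>());
                }
            }

            // A request like "3-4" adds the player to queue #3 and queue #4.
            public void enqueue(String playerId, int minSize, int maxSize) {
                for (int size = minSize; size <= maxSize; size++) {
                    queues.get(size).addLast(playerId);
                }
            }

            // Pick the room size that needs the fewest bots to start right now,
            // preferring the larger room on a tie (second example above).
            public int pickRoomSize(int minSize, int maxSize) {
                int bestSize = minSize;
                int bestBots = Integer.MAX_VALUE;
                for (int size = minSize; size <= maxSize; size++) {
                    int bots = Math.max(0, size - queues.get(size).size());
                    if (bots < bestBots || (bots == bestBots && size > bestSize)) {
                        bestBots = bots;
                        bestSize = size;
                    }
                }
                return bestSize;
            }

            public static void main(String[] args) {
                MatchmakingSketch m = new MatchmakingSketch();
                m.enqueue("waitingA", 3, 3);  // #3 already holds one request
                m.enqueue("waitingB", 4, 4);  // #4 already holds two requests
                m.enqueue("waitingC", 4, 4);
                m.enqueue("newPlayer", 3, 4); // the new {3,4} request
                // One bot is needed either way, so the 4-player room wins: prints 4.
                System.out.println(m.pickRoomSize(3, 4));
            }
        }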

    Read the article

  • Turn-based Client-Server Card Game - Unicast (TCP) or Multicast (UDP)

    - by LDM91
    I am currently planning to make a card game project where the clients will communicate with the server in a turn-based and synchronous manner using messages sent over sockets. The problem I have is how to handle the following scenario: (Client takes its turn and sends its action to the server.) The client sends a message telling the server its move for the turn (e.g. it plays the card 5 from its hand, which needs to be placed onto the table). The server receives the message and updates the game state (the server holds all game state). The server then iterates through a list of connected clients and sends each a message to tell them of the change in state. The clients all refresh to display the new state. This is all based on using TCP, and looking at it now it seems a bit like the Observer pattern. The reason this seems to be an issue to me is that this message doesn't seem to be point-to-point like the others, as I want to send it to all the clients, and sending the same message that way doesn't seem very efficient. I was thinking about using multicasting with UDP, as then I could send the message to all the clients at once; however, wouldn't this mean that the clients would in theory be able to message each other? There is of course the synchronous aspect as well, though this could be put on top of UDP I guess. Basically, I would like to know what would be good practice, as this project is really all about learning, and even though it won't be big enough to encounter performance issues from this, I would like to consider them anyway. However, please note I am not interested in using message-oriented middleware as a solution (I have experience with MOM and I'm interested in considering other options excluding MOM, if TCP sockets are a bad idea!).
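    For what it is worth, the "iterate through the connected clients" step in the TCP version is usually just a loop over per-client connections, and for a turn-based card game the state message is small and sent rarely, so sending it once per client over reliable TCP is normally efficient enough. A rough, hypothetical Java sketch of that broadcast step (the class name, registration method and plain-text message format are invented for illustration, not taken from any particular framework):

        import java.io.IOException;
        import java.io.PrintWriter;
        import java.net.Socket;
        import java.util.List;
        import java.util.concurrent.CopyOnWriteArrayList;

        // Illustrative sketch: the server keeps one TCP socket per client and
        // pushes the same serialized game state to each of them after a move.
        public class GameStateBroadcaster {

            private final List<Socket> clients = new CopyOnWriteArrayList<Socket>();

            // Called once per client when its connection is accepted.
            public void register(Socket clientSocket) {
                clients.add(clientSocket);
            }

            // Called by the server after it has applied a move and updated state.
            public void broadcast(String serializedGameState) {
                for (Socket client : clients) {
                    try {
                        PrintWriter out = new PrintWriter(client.getOutputStream(), true);
                        out.println(serializedGameState); // autoflush pushes it immediately
                    } catch (IOException e) {
                        clients.remove(client);           // drop a dead connection
                    }
                }
            }
        }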

    Read the article

  • How to access Ubuntu Server from local PC?

    - by Roland
    Today I installed my first web server, which is Ubuntu 12.04 LTS. I got Apache, PHP and MySQL working, and there is even phpMyAdmin! Everything is working fine on that PC, but the problem is that I have no idea how to connect to this server from my own PC. Just to clarify: I have one PC that I work on and another one, which runs Ubuntu Server. I even managed to connect them through the router, which I made work as a switch. I can see the Ubuntu Server on my Windows PC in "Network", but it's empty and I can't see any files. I tried to share the folder etc/www on the server, but it shows an error about rights, saying that I'm not the folder's owner. I guess I'm not doing the right thing at all, am I? Even if I could see the shared folder on my Windows PC, I would still not be able to type "somedomain.com" on the Windows PC and access, for example, index.php or the MySQL database. So, the question is: how do I configure Ubuntu Server to be accessible from the Windows PC?

    Read the article

  • Mitsubishi UFJ's mission-critical systems built on WebLogic Server – session report from a 2011 Oracle event | WebLogic Channel

    - by ???02
    Mitsubishi UFJ Information Technology (MUIT), an IT company of the Mitsubishi UFJ group established on July 1, 2009, presented a session at an Oracle event held on November 30, 2011 on how it builds and runs mission-critical systems on Oracle WebLogic Server. The session covered MUIT's Java EE application platform, which runs WebLogic Server against Oracle Database with Oracle Real Application Clusters (RAC); the alternatives it considered, including GlassFish, DB2, MySQL and Oracle BPEL Process Manager; and its use of the WebLogic Scripting Tool (WLST) to automate WebLogic Server installation, configuration and operations. Related WebLogic Channel articles cover WLST-based installation and configuration of WebLogic Server in more detail.

    Read the article

  • Directory service unavailable, new hardware, same settings

    - by Alex
    I'm working on a project with 2 sites connected by a VPN. Site 1 has the main server and there is a secondary server at site 2 which I am trying to replace. The current setup works perfectly; however, I can't for the life of me get the replacement server at site 2 up and running. I'm trying to replace like for like, just with upgraded hardware. I have installed the OS (all Server 2003 Standard SP2) and used exactly the same settings as the old server. I have Active Directory, DNS Server, DHCP Server and WINS Server set up and configured, using all the same settings as the old server (except IP address and name). I can access Active Directory but I can't do anything; add, edit and delete all return "the directory service is unavailable". No one can log in on any of the computers at site 2 and the internet is down. Plugging the old server back in and connecting it to the network rectifies the issue (so both new and old are connected at site 2): everyone can log in and the internet is back (curious, since the modem connects directly to the switch, and even with the new server online I can connect to the router via IP but not the net). I really don't have much experience but I've been roped into doing this because my company is too cheap to hire a real network admin. Any suggestions of where I can start to troubleshoot this? It's driving me crazy and I only have a day before all the users are back on site.

    Read the article

  • How to fix Solr - Server is shutting down issue?

    - by Krunal
    I was having a running Solr 4.1 on Windows Server 2008 R2. The Solr is deployed on Tomcat. However, today it stops suddenly, and while accessing Solr it gives following error. HTTP Status 503 - Server is shutting down type Status report message Server is shutting down description The requested service is not currently available. On further looking into Logs, we got following: Log File: tomcat7-stderr.2013-05-09.txt May 09, 2013 8:00:40 PM org.apache.solr.core.CoreContainer finalize SEVERE: CoreContainer was not shutdown prior to finalize(), indicates a bug -- POSSIBLE RESOURCE LEAK!!! instance=2221663 Log File: catalina.2013-05-09.txt May 09, 2013 7:59:25 PM org.apache.solr.core.SolrResourceLoader <init> INFO: new SolrResourceLoader for directory: 'c:\solrdir\' May 09, 2013 7:59:29 PM org.apache.solr.common.SolrException log SEVERE: Exception during parsing file: null:org.xml.sax.SAXParseException; systemId: file:/c:/solr/solr.xml; lineNumber: 2; columnNumber: 6; The processing instruction target matching "[xX][mM][lL]" is not allowed. at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(Unknown Source) at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.fatalError(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLScanner.reportFatalError(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLScanner.scanPIData(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanPIData(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLScanner.scanPI(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$PrologDriver.next(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown Source) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(Unknown Source) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(Unknown Source) at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(Unknown Source) at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(Unknown Source) at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(Unknown Source) at org.apache.solr.core.Config.<init>(Config.java:121) at org.apache.solr.core.CoreContainer.load(CoreContainer.java:428) at org.apache.solr.core.CoreContainer.load(CoreContainer.java:404) at org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:336) at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:98) at org.apache.catalina.core.ApplicationFilterConfig.initFilter(ApplicationFilterConfig.java:281) at org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:262) at org.apache.catalina.core.ApplicationFilterConfig.<init>(ApplicationFilterConfig.java:107) at org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:4656) at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5309) at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:901) at 
org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:633) at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:977) at org.apache.catalina.startup.HostConfig$DeployWar.run(HostConfig.java:1655) at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source) at java.util.concurrent.FutureTask.run(Unknown Source) at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) at java.lang.Thread.run(Unknown Source) May 09, 2013 7:59:29 PM org.apache.solr.servlet.SolrDispatchFilter init SEVERE: Could not start Solr. Check solr/home property and the logs May 09, 2013 7:59:29 PM org.apache.solr.common.SolrException log SEVERE: null:org.apache.solr.common.SolrException: at org.apache.solr.core.CoreContainer.load(CoreContainer.java:431) at org.apache.solr.core.CoreContainer.load(CoreContainer.java:404) at org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:336) at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:98) at org.apache.catalina.core.ApplicationFilterConfig.initFilter(ApplicationFilterConfig.java:281) at org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:262) at org.apache.catalina.core.ApplicationFilterConfig.<init>(ApplicationFilterConfig.java:107) at org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:4656) at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5309) at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:901) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:633) at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:977) at org.apache.catalina.startup.HostConfig$DeployWar.run(HostConfig.java:1655) at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source) at java.util.concurrent.FutureTask.run(Unknown Source) at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) at java.lang.Thread.run(Unknown Source) Caused by: org.xml.sax.SAXParseException; systemId: file:/c:/solrdir/solr.xml; lineNumber: 2; columnNumber: 6; The processing instruction target matching "[xX][mM][lL]" is not allowed. 
at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(Unknown Source) at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.fatalError(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLScanner.reportFatalError(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLScanner.scanPIData(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanPIData(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLScanner.scanPI(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$PrologDriver.next(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown Source) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(Unknown Source) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(Unknown Source) at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(Unknown Source) at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(Unknown Source) at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(Unknown Source) at org.apache.solr.core.Config.<init>(Config.java:121) at org.apache.solr.core.CoreContainer.load(CoreContainer.java:428) ... 20 more May 09, 2013 7:59:29 PM org.apache.solr.servlet.SolrDispatchFilter init INFO: SolrDispatchFilter.init() done May 09, 2013 7:59:29 PM org.apache.catalina.startup.HostConfig deployDirectory INFO: Deploying web application directory C:\Program Files (x86)\Apache Software Foundation\Tomcat 7.0\webapps\docs May 09, 2013 7:59:30 PM org.apache.catalina.startup.HostConfig deployDirectory INFO: Deploying web application directory C:\Program Files (x86)\Apache Software Foundation\Tomcat 7.0\webapps\manager May 09, 2013 7:59:30 PM org.apache.catalina.startup.HostConfig deployDirectory INFO: Deploying web application directory C:\Program Files (x86)\Apache Software Foundation\Tomcat 7.0\webapps\ROOT May 09, 2013 7:59:30 PM org.apache.coyote.AbstractProtocol start INFO: Starting ProtocolHandler ["http-bio-8983"] May 09, 2013 7:59:30 PM org.apache.coyote.AbstractProtocol start INFO: Starting ProtocolHandler ["ajp-bio-8009"] May 09, 2013 7:59:30 PM org.apache.catalina.startup.Catalina start INFO: Server startup in 9578 ms May 09, 2013 8:00:40 PM org.apache.solr.core.CoreContainer finalize SEVERE: CoreContainer was not shutdown prior to finalize(), indicates a bug -- POSSIBLE RESOURCE LEAK!!! instance=2221663 Any idea what could be wrong and how to fix?
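    One thing worth checking: the Xerces message "The processing instruction target matching "[xX][mM][lL]" is not allowed" is generally raised when the <?xml ...?> declaration is not the very first thing in the document, so make sure nothing (for example a UTF-8 byte-order mark, blank lines, or a duplicate declaration) precedes it in solr.xml. Below is a small, hypothetical Java check (the class name is made up; the path is the one from the log above) that dumps the first few bytes of the file:

        import java.io.FileInputStream;
        import java.io.IOException;

        // Hypothetical helper, not part of Solr or Tomcat: prints the first bytes
        // of solr.xml. A clean file starts with 3C 3F 78 6D 6C ("<?xml");
        // EF BB BF in front of that is a UTF-8 byte-order mark to remove.
        public class SolrXmlPreambleCheck {
            public static void main(String[] args) throws IOException {
                String path = args.length > 0 ? args[0] : "c:/solr/solr.xml";
                try (FileInputStream in = new FileInputStream(path)) {
                    byte[] head = new byte[8];
                    int n = in.read(head);
                    StringBuilder hex = new StringBuilder();
                    for (int i = 0; i < n; i++) {
                        hex.append(String.format("%02X ", head[i] & 0xFF));
                    }
                    System.out.println("First bytes of " + path + ": " + hex);
                }
            }
        }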

    Read the article

  • IIS, multiple CPU cores, application pools and worker processes - best configuration for a single site

    - by Egghead Design
    Hi We use Kentico CMS and I've exchanged emails with them about a web garden deployment. We have a single site running on a server with 8 cpu cores. In line with Kentico's advice, we have not altered the application pool web garden setting from the default i.e. it is set to a maximum number of worker processes of 1. Our experience is that the site only uses one of the cpu cores - the others are idling. When I emailed them about this, their response was that the OS/IIS would handle this and use other cores as necessary even though the application pool only has a single worker process. Now, I've a lot of respect for the guys at Kentico, but this doesn't seem right to me? Surely, if we want to use all cores, we need to permit eight worker processes (and implement session state storage in SQL server)? Many thanks Tony

    Read the article

  • Stark Expo Needs You

    - by [email protected]
    Train to Become a Master Cloud Operative Can't wait until September to get your Oracle fix? Then come visit us at the Stark Expo now. Marvel Entertainment has turned itself into one of the hottest media companies of the digital age, and at the heart of Marvel's growth and transformation is Oracle technology. Now, this successful collaboration finds its way to the big screen, as Oracle joins forces with Marvel to launch a special showcase Website and movie trailer for the upcoming Iron Man 2. In Iron Man 2, Oracle is a proud sponsor of Stark Expo, a world-class tradeshow that depends on a cloud computing architecture to ensure that systems are free from overload. Starting today, visitors to the showcase Website are invited to become Master Cloud Operatives and keep Stark Expo up and running. Complete your training, test your troubleshooting skills in the Oracle Pavilion, and qualify to receive a free movie poster.

    Read the article

  • Import Data from Excel sheet to DB Table through OAF page

    - by PRajkumar
    1. Create a New Workspace and Project
    File > New > General > Workspace Configured for Oracle Applications
    File Name – PrajkumarImportxlsDemo
    Automatically a new OA Project will also be created
    Project Name -- ImportxlsDemo
    Default Package -- prajkumar.oracle.apps.fnd.importxlsdemo

    2. Add JAR file jxl-2.6.3.jar to Apache Library
    Download jxl-2.6.3.jar from the following link – http://www.findjar.com/jar/net.sourceforge.jexcelapi/jars/jxl-2.6.jar.html
    Steps to add the jxl.jar file on the local machine: right click on ImportxlsDemo > Project Properties > Libraries > Add jar/Directory, browse to the directory where jxl-2.6.3.jar has been downloaded and select the JAR file.
    Steps to add the jxl.jar file at the EBS middle tier: on your EBS middle tier, copy jxl.jar to $FND_TOP/java/3rdparty/standalone, then add $FND_TOP/java/3rdparty/standalone/jxl.jar to the custom classpath in the jserv.properties file, which is at $IAS_ORACLE_HOME/Apache/Jserv/etc:
        wrapper.classpath=/U01/oracle/dev/devappl/fnd/11.5.0/java/3rdparty/stdalone/jxl.jar
    Bounce the Apache Server.

    3. Create a New Application Module (AM)
    Right click on ImportxlsDemo > New > ADF Business Components > Application Module
    Name -- ImportxlsAM
    Package -- prajkumar.oracle.apps.fnd.importxlsdemo.server
    Check Application Module Class: ImportxlsAMImpl Generate JavaFile(s)

    4. Create the test table into which we will insert data from Excel

        CREATE TABLE xx_import_excel_data_demo
        (
           -- --------------------
           -- Data Columns
           -- --------------------
           column1             VARCHAR2(100),
           column2             VARCHAR2(100),
           column3             VARCHAR2(100),
           column4             VARCHAR2(100),
           column5             VARCHAR2(100),
           -- --------------------
           -- Who Columns
           -- --------------------
           last_update_date    DATE      NOT NULL,
           last_updated_by     NUMBER    NOT NULL,
           creation_date       DATE      NOT NULL,
           created_by          NUMBER    NOT NULL,
           last_update_login   NUMBER
        );

    5. Create a New Entity Object (EO)
    Right click on ImportxlsDemo > New > ADF Business Components > Entity Object
    Name – ImportxlsEO
    Package -- prajkumar.oracle.apps.fnd.importxlsdemo.schema.server
    Database Objects -- XX_IMPORT_EXCEL_DATA_DEMO
    Note – By default ROWID will be the primary key if we do not make any column the primary key.
    Check the Accessors, Create Method, Validation Method and Remove Method.

    6. Create a New View Object (VO)
    Right click on ImportxlsDemo > New > ADF Business Components > View Object
    Name -- ImportxlsVO
    Package -- prajkumar.oracle.apps.fnd.importxlsdemo.server
    In Step 2, on the Entity page, select ImportxlsEO and shuttle it to the selected list.
    In Step 3, in the Attributes window, select all columns and shuttle them to the selected list.
    On the Java page, uncheck Generate Java file for View Object Class: ImportxlsVOImpl. Select Generate Java File for View Row Class: ImportxlsVORowImpl -> Generate Java File -> Accessors.

    7. Add Your View Object to the Root UI Application Module
    Right click on ImportxlsAM > Edit ImportxlsAM > Data Model > select ImportxlsVO and shuttle it to the Data Model list.

    8. Create a New Page
    Right click on ImportxlsDemo > New > Web Tier > OA Components > Page
    Name -- ImportxlsPG
    Package -- prajkumar.oracle.apps.fnd.importxlsdemo.webui

    9. Select ImportxlsPG and go to the structure pane, where a default region has been created.

    10. Select region1 and set the following properties:
    Attribute        Property
    ID               PageLayoutRN
    AM Definition    prajkumar.oracle.apps.fnd.importxlsdemo.server.ImportxlsAM
    Window Title     Import Data From Excel through OAF Page Demo

    11. Create a messageComponentLayout Region under the Page Layout Region
    Right click PageLayoutRN > New > Region
    Attribute        Property
    ID               MainRN
    Item Style       messageComponentLayout

    12. Create a new messageFileUpload bean under MainRN
    Right click on MainRN > New > messageFileUpload and set the following properties for the new item:
    Attribute        Property
    ID               MessageFileUpload
    Item Style       messageFileUpload

    13. Create a new Submit Button bean under MainRN
    Right click on MainRN > New > messageLayout and set the following properties for the messageLayout:
    Attribute        Property
    ID               ButtonLayout
    Right click on ButtonLayout > New > Item
    Attribute        Property
    ID               Go
    Item Style       submitButton
    Attribute Set    /oracle/apps/fnd/attributesets/Buttons/Go

    14. Create a Controller for page ImportxlsPG
    Right click on PageLayoutRN > Set New Controller
    Package Name: prajkumar.oracle.apps.fnd.importxlsdemo.webui
    Class Name: ImportxlsCO

    Write the following code in ImportxlsCO in processFormRequest:

        import oracle.apps.fnd.framework.OAApplicationModule;
        import oracle.apps.fnd.framework.OAException;
        import java.io.Serializable;
        import oracle.apps.fnd.framework.webui.OAControllerImpl;
        import oracle.apps.fnd.framework.webui.OAPageContext;
        import oracle.apps.fnd.framework.webui.beans.OAWebBean;
        import oracle.cabo.ui.data.DataObject;
        import oracle.jbo.domain.BlobDomain;

        public void processFormRequest(OAPageContext pageContext, OAWebBean webBean)
        {
          super.processFormRequest(pageContext, webBean);
          if (pageContext.getParameter("Go") != null)
          {
            DataObject fileUploadData =
              (DataObject)pageContext.getNamedDataObject("MessageFileUpload");
            String fileName = null;
            try
            {
              fileName = (String)fileUploadData.selectValue(null, "UPLOAD_FILE_NAME");
            }
            catch (NullPointerException ex)
            {
              throw new OAException("Please Select a File to Upload", OAException.ERROR);
            }
            BlobDomain uploadedByteStream = (BlobDomain)fileUploadData.selectValue(null, fileName);
            try
            {
              OAApplicationModule oaapplicationmodule = pageContext.getRootApplicationModule();
              Serializable aserializable2[] = {uploadedByteStream};
              Class aclass2[] = {BlobDomain.class};
              oaapplicationmodule.invokeMethod("ReadExcel", aserializable2, aclass2);
            }
            catch (Exception ex)
            {
              throw new OAException(ex.toString(), OAException.ERROR);
            }
          }
        }

    Write the following code in ImportxlsAMImpl.java:

        import java.io.IOException;
        import java.io.InputStream;
        import jxl.Cell;
        import jxl.CellType;
        import jxl.Sheet;
        import jxl.Workbook;
        import jxl.read.biff.BiffException;
        import oracle.apps.fnd.framework.server.OAApplicationModuleImpl;
        import oracle.jbo.Row;
        import oracle.apps.fnd.framework.OAViewObject;
        import oracle.apps.fnd.framework.server.OAViewObjectImpl;
        import oracle.jbo.domain.BlobDomain;

        public void createRecord(String[] excel_data)
        {
          OAViewObject vo = (OAViewObject)getImportxlsVO1();
          if (!vo.isPreparedForExecution())
          {
            vo.executeQuery();
          }
          Row row = vo.createRow();
          try
          {
            for (int i = 0; i < excel_data.length; i++)
            {
              row.setAttribute("Column" + (i + 1), excel_data[i]);
            }
          }
          catch (Exception e)
          {
            System.out.println(e.getMessage());
          }
          vo.insertRow(row);
          getTransaction().commit();
        }

        public void ReadExcel(BlobDomain fileData) throws IOException
        {
          String[] excel_data = new String[5];
          InputStream inputWorkbook = fileData.getInputStream();
          Workbook w;
          try
          {
            w = Workbook.getWorkbook(inputWorkbook);
            // Get the first sheet
            Sheet sheet = w.getSheet(0);
            for (int i = 0; i < sheet.getRows(); i++)
            {
              for (int j = 0; j < sheet.getColumns(); j++)
              {
                Cell cell = sheet.getCell(j, i);
                CellType type = cell.getType();
                if (cell.getType() == CellType.LABEL)
                {
                  System.out.println("I got a label " + cell.getContents());
                  excel_data[j] = cell.getContents();
                }
                if (cell.getType() == CellType.NUMBER)
                {
                  System.out.println("I got a number " + cell.getContents());
                  excel_data[j] = cell.getContents();
                }
              }
              createRecord(excel_data);
            }
          }
          catch (BiffException e)
          {
            e.printStackTrace();
          }
        }

    15. Congratulations, you have successfully finished. Run your page and test your work. Consider an Excel file PRAJ_TEST.xls with the following data, and try to import that data into the DB table.

    Read the article

  • .NET Framework version in Application Pools of IIS 7 on windows 2008

    - by Rodnower
    Hello, I have a web service on IIS 7 on Windows 2008. This web service needs DLLs from .NET Framework 3.5 (I get an error about the System.Linq using directive when I try to browse the web site). The only place I found where it is possible to change the .NET Framework version is application pools management, but the only two options I have are: No Managed Code and .NET Framework 2. In Add/Remove Programs I have .NET Framework 3.5 installed, and I even did a repair on it and an iisreset, but I still have only two options in application pools management. Any ideas? Thank you in advance.

    Read the article

  • Partner Webcast – More out of Database Appliance with DB Options - 13 September 2012

    - by Thanos
    The Oracle Database Appliance is a new way to take advantage of the world's most popular database—Oracle Database 11g—in a single, easy-to-deploy and manage system. It's a complete package of software, server, storage, and networking that's engineered for simplicity; saving time and money by simplifying deployment, maintenance, and support of database workloads. But that is not all: with support for all Oracle Database Options, Oracle Database Appliance can be the ideal solution for many use cases.
    Feature: Simplifies deployment, maintenance, and support of high-availability database workloads. Benefit: Saves significant time and effort throughout the database administration lifecycle.
    Feature: An engineered system of software, server, storage, and networking. Benefit: High availability for a wide range of custom and packaged OLTP and data warehousing application databases.
    Feature: Simple one-button installation, full-stack integrated patching and diagnostics. Benefit: Reduces planned and unplanned downtime by automatically monitoring and logging service requests with Oracle Support.
    Feature: Built using the world's #1 database. Benefit: Protects databases from server and storage failures with Oracle Real Application Clusters and Automatic Storage Management.
    Feature: Unique Pay-As-You-Grow software licensing. Benefit: Reduces cost with flexibility to adjust your software spend as your business grows without the need for any hardware upgrades.
    Discover the Oracle Database Appliance Value Proposition and learn how to position and combine it with database options to capture new business and easily roll out solutions safely and with maximum cost efficiency. This webcast is repeated once again for your benefit.
    Agenda: Oracle Database & Engineered Systems Innovation. What's the Oracle Database Appliance? Oracle Database Appliance Value Proposition. Oracle Database Appliance with Database Options. Oracle Database Appliance Partners Business.
    Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. Duration: 1 hour. Register Now!
    Oracle Database Appliance is available for purchase at the Oracle Store under Engineered Systems. For any questions please contact us at partner.imc-AT-beehiveonline.oracle-DOT-com. Visit regularly our ISV Migration Center blog or follow us @oracleimc to learn more on Oracle Technologies as well as upcoming partner webcasts and events.

    Read the article

  • Java installation problem

    - by Zxy
    I cannot install java on my ubuntu 12.04: zero@ghostrider:~$ sudo apt-get purge openjdk* [sudo] password for zero: Reading package lists... Done Building dependency tree Reading state information... Done Note, selecting 'openjdk-6-demo' for regex 'openjdk*' Note, selecting 'openjdk-7-jre-headless' for regex 'openjdk*' Note, selecting 'uwsgi-plugin-jwsgi-openjdk-6' for regex 'openjdk*' Note, selecting 'openjdk-jre' for regex 'openjdk*' Note, selecting 'openjdk-7-source' for regex 'openjdk*' Note, selecting 'openjdk-6-dbg' for regex 'openjdk*' Note, selecting 'openjdk7-jdk' for regex 'openjdk*' Note, selecting 'openjdk-6-doc' for regex 'openjdk*' Note, selecting 'openjdk-7-jre-zero' for regex 'openjdk*' Note, selecting 'openjdk-7-demo' for regex 'openjdk*' Note, selecting 'openjdk-6-jre-headless' for regex 'openjdk*' Note, selecting 'openjdk-6-jdk' for regex 'openjdk*' Note, selecting 'openjdk-6-jre' for regex 'openjdk*' Note, selecting 'openjdk-6-jre-lib' for regex 'openjdk*' Note, selecting 'openjdk-6-jre-zero' for regex 'openjdk*' Note, selecting 'openjdk-7-dbg' for regex 'openjdk*' Note, selecting 'openjdk-7-doc' for regex 'openjdk*' Note, selecting 'openjdk-7-jdk' for regex 'openjdk*' Note, selecting 'openjdk-7-jre' for regex 'openjdk*' Note, selecting 'openjdk-6-source' for regex 'openjdk*' Note, selecting 'openjdk-7-jre-lib' for regex 'openjdk*' Note, selecting 'uwsgi-plugin-jvm-openjdk-6' for regex 'openjdk*' Package uwsgi-plugin-jvm-openjdk-6 is not installed, so not removed Package uwsgi-plugin-jwsgi-openjdk-6 is not installed, so not removed Package openjdk-6-dbg is not installed, so not removed Package openjdk-6-demo is not installed, so not removed Package openjdk-6-doc is not installed, so not removed Package openjdk-6-jdk is not installed, so not removed Package openjdk-6-jre is not installed, so not removed Package openjdk-6-jre-headless is not installed, so not removed Package openjdk-6-jre-lib is not installed, so not removed Package openjdk-6-source is not installed, so not removed Package openjdk-6-jre-zero is not installed, so not removed Package openjdk-7-dbg is not installed, so not removed Package openjdk-7-demo is not installed, so not removed Package openjdk-7-doc is not installed, so not removed Package openjdk-7-jdk is not installed, so not removed Package openjdk-7-jre is not installed, so not removed Package openjdk-7-jre-headless is not installed, so not removed Package openjdk-7-jre-lib is not installed, so not removed Package openjdk-7-jre-zero is not installed, so not removed Package openjdk-7-source is not installed, so not removed 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 1 not fully installed or removed. After this operation, 0 B of additional disk space will be used. Setting up oracle-java7-installer (7u3-0~eugenesan~precise4) ... Downloading... --2012-06-11 23:56:42-- http://download.oracle.com/otn-pub/java/jdk/7u3-b04/jdk- 7u3-linux-i586.tar.gz Resolving download.oracle.com (download.oracle.com)... 64.209.77.18 Connecting to download.oracle.com (download.oracle.com)|64.209.77.18|:80... connected. HTTP request sent, awaiting response... 302 Moved Temporarily Location: https://edelivery.oracle.com/otn-pub/java/jdk/7u3-b04/jdk-7u3-linux-i586.tar.gz [following] --2012-06-11 23:56:42-- https://edelivery.oracle.com/otn-pub/java/jdk/7u3-b04/jdk-7u3-linux-i586.tar.gz Resolving edelivery.oracle.com (edelivery.oracle.com)... 95.101.122.174 Connecting to edelivery.oracle.com (edelivery.oracle.com)|95.101.122.174|:443... connected. 
HTTP request sent, awaiting response... 302 Moved Temporarily Location: http://download.oracle.com/errors/download-fail-1505220.html [following] --2012-06-11 23:56:44-- http://download.oracle.com/errors/download-fail-1505220.html Connecting to download.oracle.com (download.oracle.com)|64.209.77.18|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 5307 (5.2K) [text/html] Saving to: `./jdk-7u3-linux-i586.tar.gz' 0K ..... 100% 1007K=0.005s 2012-06-11 23:56:44 (1007 KB/s) - `./jdk-7u3-linux-i586.tar.gz' saved [5307/5307] Download done. sha256sum mismatch jdk-7u3-linux-i586.tar.gz Oracle JDK 7 is NOT installed. dpkg: error processing oracle-java7-installer (--configure): subprocess installed post-installation script returned error exit status 1 No apport report written because MaxReports is reached already Errors were encountered while processing: oracle-java7-installer E: Sub-process /usr/bin/dpkg returned an error code (1) zero@ghostrider:~$ sudo add-apt-repository ppa:eugenesan/java You are about to add the following PPA to your system: More info: https://launchpad.net/~eugenesan/+archive/java Press [ENTER] to continue or ctrl-c to cancel adding it Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret- keyring /tmp/tmp.uGcZHfsoNF --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver hkp://keyserver.ubuntu.com:80/ --recv 4346FBB158F4022C896164EEE61380B28313A596 gpg: requesting key 8313A596 from hkp server keyserver.ubuntu.com gpg: key 8313A596: "Launchpad synergy+" not changed gpg: Total number processed: 1 gpg: unchanged: 1 zero@ghostrider:~$ sudo apt-get update Ign http://tr.archive.ubuntu.com precise InRelease Ign http://tr.archive.ubuntu.com precise-updates InRelease Ign http://tr.archive.ubuntu.com precise-backports InRelease Hit http://tr.archive.ubuntu.com precise Release.gpg Hit http://tr.archive.ubuntu.com precise-updates Release.gpg Hit http://tr.archive.ubuntu.com precise-backports Release.gpg Hit http://tr.archive.ubuntu.com precise Release Ign http://extras.ubuntu.com precise InRelease Ign http://security.ubuntu.com precise-security InRelease Hit http://tr.archive.ubuntu.com precise-updates Release Ign http://ppa.launchpad.net precise InRelease Hit http://tr.archive.ubuntu.com precise-backports Release Hit http://tr.archive.ubuntu.com precise/main Sources Hit http://tr.archive.ubuntu.com precise/restricted Sources Hit http://tr.archive.ubuntu.com precise/universe Sources Hit http://tr.archive.ubuntu.com precise/multiverse Sources Hit http://tr.archive.ubuntu.com precise/main i386 Packages Hit http://tr.archive.ubuntu.com precise/restricted i386 Packages Hit http://tr.archive.ubuntu.com precise/universe i386 Packages Hit http://extras.ubuntu.com precise Release.gpg Hit http://ppa.launchpad.net precise Release.gpg Hit http://security.ubuntu.com precise-security Release.gpg Hit http://tr.archive.ubuntu.com precise/multiverse i386 Packages Hit http://tr.archive.ubuntu.com precise/main TranslationIndex Hit http://tr.archive.ubuntu.com precise/multiverse TranslationIndex Hit http://tr.archive.ubuntu.com precise/restricted TranslationIndex Hit http://tr.archive.ubuntu.com precise/universe TranslationIndex Hit http://tr.archive.ubuntu.com precise-updates/main Sources Hit http://tr.archive.ubuntu.com precise-updates/restricted Sources Hit http://tr.archive.ubuntu.com precise-updates/universe Sources Hit http://tr.archive.ubuntu.com precise-updates/multiverse Sources Hit 
http://tr.archive.ubuntu.com precise-updates/main i386 Packages
Hit http://extras.ubuntu.com precise Release
Hit http://ppa.launchpad.net precise Release
Hit http://security.ubuntu.com precise-security Release
Hit http://tr.archive.ubuntu.com precise-updates/restricted i386 Packages
Hit http://tr.archive.ubuntu.com precise-updates/universe i386 Packages
Hit http://tr.archive.ubuntu.com precise-updates/multiverse i386 Packages
Hit http://tr.archive.ubuntu.com precise-updates/main TranslationIndex
Hit http://tr.archive.ubuntu.com precise-updates/multiverse TranslationIndex
Hit http://tr.archive.ubuntu.com precise-updates/restricted TranslationIndex
Hit http://tr.archive.ubuntu.com precise-updates/universe TranslationIndex
Hit http://tr.archive.ubuntu.com precise-backports/main Sources
Hit http://tr.archive.ubuntu.com precise-backports/restricted Sources
Hit http://tr.archive.ubuntu.com precise-backports/universe Sources
Hit http://tr.archive.ubuntu.com precise-backports/multiverse Sources
Hit http://tr.archive.ubuntu.com precise-backports/main i386 Packages
Hit http://tr.archive.ubuntu.com precise-backports/restricted i386 Packages
Hit http://tr.archive.ubuntu.com precise-backports/universe i386 Packages
Hit http://tr.archive.ubuntu.com precise-backports/multiverse i386 Packages
Hit http://tr.archive.ubuntu.com precise-backports/main TranslationIndex
Hit http://extras.ubuntu.com precise/main Sources
Hit http://ppa.launchpad.net precise/main Sources
Hit http://security.ubuntu.com precise-security/main Sources
Hit http://tr.archive.ubuntu.com precise-backports/multiverse TranslationIndex
Hit http://tr.archive.ubuntu.com precise-backports/restricted TranslationIndex
Hit http://tr.archive.ubuntu.com precise-backports/universe TranslationIndex
Hit http://tr.archive.ubuntu.com precise/main Translation-en
Hit http://tr.archive.ubuntu.com precise/multiverse Translation-en
Hit http://extras.ubuntu.com precise/main i386 Packages
Ign http://extras.ubuntu.com precise/main TranslationIndex
Hit http://tr.archive.ubuntu.com precise/restricted Translation-en
Hit http://tr.archive.ubuntu.com precise/universe Translation-en
Hit http://tr.archive.ubuntu.com precise-updates/main Translation-en
Hit http://tr.archive.ubuntu.com precise-updates/multiverse Translation-en
Hit http://tr.archive.ubuntu.com precise-updates/restricted Translation-en
Hit http://ppa.launchpad.net precise/main i386 Packages
Ign http://ppa.launchpad.net precise/main TranslationIndex
Hit http://security.ubuntu.com precise-security/restricted Sources
Hit http://security.ubuntu.com precise-security/universe Sources
Hit http://security.ubuntu.com precise-security/multiverse Sources
Hit http://security.ubuntu.com precise-security/main i386 Packages
Hit http://security.ubuntu.com precise-security/restricted i386 Packages
Hit http://tr.archive.ubuntu.com precise-updates/universe Translation-en
Hit http://tr.archive.ubuntu.com precise-backports/main Translation-en
Hit http://tr.archive.ubuntu.com precise-backports/multiverse Translation-en
Hit http://tr.archive.ubuntu.com precise-backports/restricted Translation-en
Hit http://tr.archive.ubuntu.com precise-backports/universe Translation-en
Hit http://security.ubuntu.com precise-security/universe i386 Packages
Hit http://security.ubuntu.com precise-security/multiverse i386 Packages
Hit http://security.ubuntu.com precise-security/main TranslationIndex
Hit http://security.ubuntu.com precise-security/multiverse TranslationIndex
Hit http://security.ubuntu.com precise-security/restricted TranslationIndex
Hit http://security.ubuntu.com precise-security/universe TranslationIndex
Hit http://security.ubuntu.com precise-security/main Translation-en
Hit http://security.ubuntu.com precise-security/multiverse Translation-en
Hit http://security.ubuntu.com precise-security/restricted Translation-en
Hit http://security.ubuntu.com precise-security/universe Translation-en
Ign http://ppa.launchpad.net precise/main Translation-en_US
Ign http://extras.ubuntu.com precise/main Translation-en_US
Ign http://ppa.launchpad.net precise/main Translation-en
Ign http://extras.ubuntu.com precise/main Translation-en
Reading package lists... Done
zero@ghostrider:~$ sudo apt-get install oracle-java7-installer
Reading package lists... Done
Building dependency tree
Reading state information... Done
oracle-java7-installer is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
1 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
Do you want to continue [Y/n]? Y
Setting up oracle-java7-installer (7u3-0~eugenesan~precise4) ...
Downloading...
--2012-06-11 23:57:11-- http://download.oracle.com/otn-pub/java/jdk/7u3-b04/jdk-7u3-linux-i586.tar.gz
Resolving download.oracle.com (download.oracle.com)... 64.209.77.18
Connecting to download.oracle.com (download.oracle.com)|64.209.77.18|:80... connected.
HTTP request sent, awaiting response... 302 Moved Temporarily
Location: https://edelivery.oracle.com/otn-pub/java/jdk/7u3-b04/jdk-7u3-linux-i586.tar.gz [following]
--2012-06-11 23:57:11-- https://edelivery.oracle.com/otn-pub/java/jdk/7u3-b04/jdk-7u3-linux-i586.tar.gz
Resolving edelivery.oracle.com (edelivery.oracle.com)... 95.101.122.174
Connecting to edelivery.oracle.com (edelivery.oracle.com)|95.101.122.174|:443... connected.
HTTP request sent, awaiting response... 302 Moved Temporarily
Location: http://download.oracle.com/errors/download-fail-1505220.html [following]
--2012-06-11 23:57:12-- http://download.oracle.com/errors/download-fail-1505220.html
Connecting to download.oracle.com (download.oracle.com)|64.209.77.18|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 5307 (5.2K) [text/html]
Saving to: `./jdk-7u3-linux-i586.tar.gz'
0K ..... 100% 976K=0.005s
2012-06-11 23:57:12 (976 KB/s) - `./jdk-7u3-linux-i586.tar.gz' saved [5307/5307]
Download done.
sha256sum mismatch jdk-7u3-linux-i586.tar.gz
Oracle JDK 7 is NOT installed.
dpkg: error processing oracle-java7-installer (--configure):
subprocess installed post-installation script returned error exit status 1
No apport report written because MaxReports is reached already
Errors were encountered while processing:
oracle-java7-installer
E: Sub-process /usr/bin/dpkg returned an error code (1)
zero@ghostrider:~$

    Read the article

  • SQL SERVER – What the Business Says Is Not What the Business Wants

    - by pinaldave
    This blog post is written in response to T-SQL Tuesday, hosted by Steve Jones. Steve raised a very interesting question; every DBA and Database Developer has faced this situation at some point. When I read the topic, I felt I could write several different examples here. Today, I will cover one scenario which seems quite amusing.

    Shrinking Database
    Earlier this year, I was working on a SQL Server performance tuning consultancy when I faced a very interesting situation. No matter how much I attempted to reduce the fragmentation, I always ended up with heavy fragmentation on the server. After careful research, I figured out that one of the jobs was continuously shrinking the database – which is a very bad practice. I have blogged about my experience here: SQL SERVER – SHRINKDATABASE For Every Database in the SQL Server. I removed the incorrect shrinking process right away; once it was removed, everything continued working as it should. After a couple of days, I learned that one of their DBAs had put back the same DBCC process. I asked the Senior DBA to find out what was going on, and he came up with the following reason: “Business Requirement.” I could not believe it! Now it was time for me to go deep into the subject and understand the need. After talking to the people concerned, I understood what they needed. Please read the exact business need in their own words.

    The Shrinking “Business Need”
    “We shrink the database because if we take a backup after shrinking the database, its size is smaller. Once we take the backup, we have to send it to our remote location site. Our business requirement is that we always need to make sure that the file is as small as possible when we transfer it to the remote server.”

    The backup is not affected in any way by whether you shrink the database or not; the size of the backup will be the same. After a couple of tests, they agreed with me. Shrinking will, however, create performance issues, as it introduces heavy fragmentation in the database.

    The Real Solution
    The real business need was the smallest possible backup file. We finally implemented a quick solution which they are still using to date: compressed backup. I have written about this subject in detail a few years back in SQL SERVER – 2008 – Introduction to New Feature of Backup Compression. A compressed backup not only creates a smaller file but also speeds up the backup operation. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
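
    To illustrate the fix, here is a minimal T-SQL sketch of replacing a shrink-then-backup job with a plain compressed backup. The database name, backup path, and the msdb size check are hypothetical details added for illustration; they are not from the original post.

    -- Old job step (removed): shrinking before every backup fragments indexes heavily.
    -- DBCC SHRINKDATABASE (SalesDB);

    -- A compressed backup produces a smaller backup file without shrinking the data files.
    BACKUP DATABASE SalesDB
    TO DISK = N'D:\Backups\SalesDB_Compressed.bak'
    WITH COMPRESSION, INIT, STATS = 10;

    -- Optional check: compare raw vs. compressed backup sizes recorded in msdb.
    SELECT TOP (5) database_name, backup_size, compressed_backup_size, backup_finish_date
    FROM msdb.dbo.backupset
    WHERE database_name = N'SalesDB'
    ORDER BY backup_finish_date DESC;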

    Read the article

  • What would be the optimal disk config for SQL Server 2008 R2?

    - by Kev
    We have a new Dell R710 server that came with the following storage configuration:
    - 8 x 146GB SAS 10k 6Gbps disks
    - 1 x PERC H700 Integrated Controller (2 x 4 disks - 2 ports, each supporting 4 disks)
    1. What would be the optimal configuration if we were just after performance?
    2. What would be the optimal configuration if we were after performance but also wanted data resilience?
    3. As per 2 above, but with a hot standby disk?
    We plan to run Windows 2008 R2 and SQL Server 2008 R2. Maximising storage capacity isn't a prime concern.

    Read the article

  • Reach for the Stars…Even if you Miss you’ll Land in the Cloud

    - by Kristin Rose
    “You make investment in the next generation of technology, while continuing to invest in your existing.” – Larry Ellison

    Last week’s Oracle Cloud and Oracle Platinum Services announcement highlighted some of the exciting ways in which Oracle made the switch from being an On-Premise Application provider to both an On-Premise and Cloud Application provider. The announcement was led by Oracle CEO Larry Ellison and Oracle President Mark Hurd. Together they announced the industry’s broadest and most advanced Cloud strategy and introduced Oracle Cloud Social Services, a broad Enterprise Social Platform offering. Attendees also anxiously awaited Larry’s first tweet. Be sure to watch the webcast replay to learn more about the new developments in Oracle's Cloud strategy and the game-changing advances in Oracle Support.

    Sending you Cloud Dreams and Twitter Wishes,
    The OPN Communications Team

    Read the article

  • Embedded Web Server Vs External Web Server

    - by Jetti
    So I've thought of creating a web application in either Lisp or another functional language, and I was thinking of embedding the web server into the application (having my application handle the HTTP requests directly). I don't see any issues with that; however, I'm new to creating web applications (and, in the grand scheme of things, to programming as well). Are there any drawbacks to handling HTTP requests within your program instead of using an external web server? Are there any benefits?

    Read the article

  • Linked SQL Server to Oracle

    - by Jerid
    I get this error when executing a SQL Server query against a linked server to Oracle 9i, but only when connected via Remote Desktop. When running the query from my desktop, there is no problem. Is this a tnsnames issue?

    Linked server to Oracle 9i:
    Server: Msg 7399, Level 16, State 1, Line 1
    OLE DB provider 'MSDAORA' reported an error.
    [OLE/DB provider returned message: Unspecified error]
    [OLE/DB provider returned message: ORA-01041: internal error. hostdef extension doesn't exist]
    OLE DB error trace [OLE/DB Provider 'MSDAORA' ICommandText::Execute returned 0x80004005: ].
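
    For context, the failing statement is typically a linked-server query along the lines of the sketch below; the linked server name (ORA9I), schema, and table are made up for illustration and are not from the original question. ORA-01041 often points at the Oracle client environment (ORACLE_HOME, tnsnames.ora) visible to the session rather than at the query itself.

    -- Hypothetical query against an Oracle 9i linked server (all names illustrative only).
    SELECT *
    FROM OPENQUERY(ORA9I, 'SELECT order_id, order_date FROM scott.orders');

    -- The same query using four-part naming:
    SELECT order_id, order_date
    FROM ORA9I..SCOTT.ORDERS;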

    Read the article

  • April 2010 Critical Patch Update Released

    - by eric.maurice
    Hi, this is Eric Maurice. Today Oracle released the April 2010 Critical Patch Update (CPUApr2010), the first one to include security fixes for Oracle Solaris. Today's Critical Patch Update (CPU) provides 47 new security fixes across the following product families: Oracle Database Server, Oracle Fusion Middleware, Oracle Collaboration Suite, Oracle E-Business Suite, Oracle PeopleSoft Enterprise, Oracle Life Sciences, Retail, and Communications Industry Suites, and Oracle Solaris. 28 of these 47 new vulnerabilities are remotely exploitable without authentication, but the criticality of the affected components and the severity of these vulnerabilities vary greatly. Customers should, as usual, refer to the Risk Matrices in the CPU Advisory to assess the relevance of these fixes for their environment (and the urgency with which to apply the fixes).

    7 of the 47 new vulnerabilities affect various versions of Oracle Database Server. None of these 7 vulnerabilities are remotely exploitable without authentication. Furthermore, none of these fixes are applicable to client-only deployments. The most severe CVSS Base Score for the Database Server vulnerabilities is 7.1. As a reminder, information about Oracle's use of the CVSS 2.0 standard can be found in Note 394487.1 (My Oracle Support subscription required). Note that this Critical Patch Update includes fixes for vulnerabilities that were publicly disclosed by David Litchfield at the BlackHat DC Conference in early February (CVE-2010-0866 and CVE-2010-0867).

    5 of the 47 new vulnerabilities affect various components of the Oracle Fusion Middleware product family. The highest CVSS Base Score for these vulnerabilities is 7.5. Note that the patches for Oracle WebLogic Server are cumulative, and this Critical Patch Update therefore also includes a fix for a vulnerability (CVE-2010-0073) that was the subject of a Security Alert issued by Oracle on February 4, 2010. Customers who have not applied the previously released patch should apply today's Critical Patch Update as soon as possible.

    As stated at the beginning of this blog, it is also noteworthy that this Critical Patch Update provides 16 new fixes for the Sun product line. With the recent close of the Sun acquisition, both security organizations have worked diligently to align Sun's previous security practices with Oracle's. Java users know that Oracle released a Critical Patch Update for Java SE and Java for Business earlier this month (in accordance with the Java patching schedule previously published by Sun Microsystems). Please note that, for the first time, the Java advisories included CVSS scores to help assess the severity of the new vulnerabilities fixed with the advisory. The rapid inclusion of the Solaris product lines in the Critical Patch Update and the extension of Oracle Software Security Assurance to Sun technologies are evidence of the flexibility of Oracle's security assurance programs. These should also result in tangible security benefits for the users of the Oracle hardware and software stack (such as a predictable patching schedule for all Oracle products).

    Read the article

  • New Feature in ODI 11.1.1.6: ODI for Big Data

    - by Julien Testut
    By Ananth Tirupattur

    Starting with Oracle Data Integrator 11.1.1.6.0, ODI offers a solution for processing Big Data. This post provides an overview of this feature. With all the buzz around Big Data, and before getting into the details of ODI for Big Data, I will provide a brief introduction to Big Data and the Oracle solution for Big Data.

    So, what is Big Data? Big Data includes:
    - structured data (data from relational data stores and XML data stores)
    - semi-structured data (data from weblogs)
    - unstructured data (data from text blobs and images)

    Traditionally, business decisions are based on information gathered from transactional data. For example, transactional data from CRM applications is fed to a decision system for analysis and decision making. Products such as ODI play a key role in enabling decision systems. However, with the emergence of massive amounts of semi-structured and unstructured data, it is important for decision systems to include them in the analysis to achieve better decision-making capability. While there is an abundance of opportunities for businesses to gain competitive advantages, processing Big Data has its challenges:
    - Volume of data
    - Velocity of data (the high rate at which data is generated)
    - Variety of data

    In order to address these challenges and convert them into opportunities, we need an appropriate framework, platform and the right set of tools. Hadoop is an open source framework providing a highly scalable, fault-tolerant system for storing and processing large amounts of data. Hadoop provides two key services: distributed, reliable storage called the Hadoop Distributed File System (HDFS), and a framework for parallel data processing called Map-Reduce. Innovations in Hadoop and its related technology continue to evolve rapidly, so it is highly recommended to follow information on the web to keep up with the latest developments.

    Oracle's vision is to provide a comprehensive solution to address the challenges faced by Big Data. Oracle provides the necessary hardware, software and tools for processing Big Data. The Oracle solution includes:
    - Big Data Appliance
    - Oracle NoSQL Database
    - Cloudera distribution for Hadoop
    - Oracle R Enterprise (R is a statistical package which is very popular among data scientists)
    - ODI solution for Big Data
    - Oracle Loader for Hadoop, for loading data from Hadoop into Oracle
    Further details can be found here: http://www.oracle.com/us/products/database/big-data-appliance/overview/index.html

    ODI Solution for Big Data:
    ODI's goal is to minimize the need to understand the complexity of the Hadoop framework and to simplify the adoption of Big Data processing in an enterprise. ODI provides the capabilities for an integrated architecture for processing Big Data. This includes the capability to load data into Hadoop, process data in Hadoop and load data from Hadoop into Oracle. ODI is expanding its support for Big Data by providing the following out-of-the-box Knowledge Modules (KMs):
    - IKM File to Hive (LOAD DATA): Load unstructured data from a file (local file system or HDFS) into Hive
    - IKM Hive Control Append: Transform and validate structured data on Hive
    - IKM Hive Transform: Transform unstructured data on Hive
    - IKM File/Hive to Oracle (OLH): Load processed data in Hive to Oracle
    - RKM Hive: Reverse engineer Hive tables to generate models

    Using the loading KM you can map files (local and HDFS files) to the corresponding Hive tables. For example, you can map weblog files categorized by date into a corresponding partitioned Hive table schema. Using the Hive Control Append KM you can validate and transform data in Hive; for example, two source Hive tables can be joined and mapped to a target Hive table. The Hive Transform KM facilitates processing of semi-structured data in Hive; for example, data from a weblog can be processed using a Perl script and mapped to a target Hive table. Using the Oracle Loader for Hadoop (OLH) KM you can load data from a Hive table or HDFS to a corresponding table in Oracle. OLH is available as a standalone product; ODI greatly enhances OLH by generating the configuration and mapping files for OLH based on the configuration provided in the interface and KM options, and it seamlessly invokes OLH when executing the scenario. For example, an HDFS file can be mapped to a table in Oracle.

    Development and Deployment:
    Using ODI Studio on your development machine, create and develop the ODI solution for processing Big Data by connecting to a MySQL DB or Oracle database on a BDA machine or Hadoop cluster. Schedule the ODI scenarios to be executed on the ODI agent deployed on the BDA machine or Hadoop cluster. The ODI solution for Big Data provides several exciting new capabilities to facilitate the adoption of Big Data in an enterprise. You can find more information about the Oracle Big Data connectors on OTN.
You can find an overview of all the new features introduced in ODI 11.1.1.6 in the following document: ODI 11.1.1.6 New Features Overview
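
    As a rough sketch of what the File to Hive (LOAD DATA) path boils down to underneath, the HiveQL below loads a day's weblog file from HDFS into a partitioned Hive table. The table name, columns and paths are hypothetical, added purely for illustration; in practice the KM generates the equivalent statements from the interface configuration.

    -- Hypothetical partitioned Hive table for daily weblog files (names and columns are illustrative).
    CREATE TABLE IF NOT EXISTS weblogs (
      host STRING,
      request STRING,
      status INT,
      bytes_sent BIGINT
    )
    PARTITIONED BY (log_date STRING)
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';

    -- Load one day's file from HDFS into the matching partition.
    LOAD DATA INPATH '/user/odi/weblogs/2012-06-11.tsv'
    INTO TABLE weblogs
    PARTITION (log_date = '2012-06-11');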

    Read the article

  • Aggregating cache data from OCEP in CQL

    - by Manju James
    There are several use cases where OCEP applications need to join stream data with external data, such as data available in a Coherence cache. OCEP’s streaming language, CQL, supports simple cache-key based joins of stream data with data in Coherence (more complex queries will be supported in a future release). However, there are instances where you may need to aggregate the data in Coherence based on input data from a stream. This blog describes a sample that does just that.

    For our sample, we will use a simplified credit card fraud detection use case. The input to this sample application is a stream of credit card transaction data. The input stream contains information like the credit card ID, transaction time and transaction amount. The purpose of this application is to detect suspicious transactions and send out a warning event. For the sake of simplicity, we will assume that all transactions with amounts greater than $1000 are suspicious. The transaction history is available in a Coherence distributed cache. For every suspicious transaction detected, a warning event must be sent with the maximum amount, total amount and total number of transactions over the past 30 days.

    Application Input
    Stream input to the EPN contains events of type CCTransactionEvent. This input has to be joined with the cache holding all credit card transactions. The cache is configured in the EPN as shown below:

    <wlevs:caching-system id="CohCacheSystem" provider="coherence"/>
    <wlevs:cache id="CCTransactionsCache" value-type="CCTransactionEvent"
                 key-properties="cardID, transactionTime"
                 caching-system="CohCacheSystem">
    </wlevs:cache>

    Application Output
    The output that must be produced by the application is a fraud warning event. This event is configured in the Spring file as shown below. The source for the cardHistory property can be seen here.

    <wlevs:event-type type-name="FraudWarningEvent">
      <wlevs:properties type="tuple">
        <wlevs:property name="cardID" type="CHAR"/>
        <wlevs:property name="transactionTime" type="BIGINT"/>
        <wlevs:property name="transactionAmount" type="DOUBLE"/>
        <wlevs:property name="cardHistory" type="OBJECT"/>
      </wlevs:properties>
    </wlevs:event-type>

    Cache Data Aggregation using a Java Cartridge
    In the output warning event, the cardHistory property contains data from the cache aggregated over the past 30 days. To get this information, we use a Java cartridge method. This method uses Coherence’s query API on the credit card transactions cache to get the required information. Therefore, the Java cartridge method requires a reference to the cache. This may be set up by configuring it in the Spring context file as shown below:

    <bean class="com.oracle.cep.ccfraud.CCTransactionsAggregator">
      <property name="cache" ref="CCTransactionsCache"/>
    </bean>

    This is used by the Java class to set a static property:

    public void setCache(Map cache) {
        s_cache = (NamedCache) cache;
    }

    The code snippet below shows how the total of all the transaction amounts in the past 30 days is computed. The rest of the information required by the CardHistoryData object is calculated in a similar manner. The complete source of this class can be found here. To find out more about using Coherence's API to query a cache, please refer to the Coherence Developer’s Guide.
    public static CardHistoryData execute(String cardID) {
        …
        Filter filter = QueryHelper.createFilter(
            "cardID = :cardID and transactionTime > :transactionTime", map);
        CardHistoryData history = new CardHistoryData();
        Double sum = (Double) s_cache.aggregate(filter, new DoubleSum("getTransactionAmount"));
        history.setTotalAmount(sum);
        …
        return history;
    }

    The Java cartridge method is used from CQL as seen below:

    select cardID, transactionTime, transactionAmount,
           CCTransactionsAggregator.execute(cardID) as cardHistory
    from inputChannel
    where transactionAmount > 1000

    This produces a warning event, with history data, for every credit card transaction over $1000. That is all there is to it. The complete source for the sample application, along with the configuration files, is available here. In the sample, I use a simple Java bean to load the cache with initial transaction history data. An input adapter is used to create and send transaction events for the input stream.

    Read the article
