Search Results

Search found 5410 results on 217 pages for 'n tier architecture'.


  • links for 2010-05-15

    - by Bob Rhubart
    Live Virtual SOA Training from Oracle University Enroll in "SOA: Architectural Concepts and Design Principles," a four-day Live Virtual Class that teaches you the key concepts associated with a SOA architecture, including principles, service design, and infrastructure. (tags: otn oracle soa architect training education)

    Read the article

  • ARM releases a free IDE for native Android development: the Community Edition of ARM Development Studio 5

    ARM releases a free IDE for native Android development: the Community Edition of ARM Development Studio 5. ARM Ltd, developer of the eponymous architecture, has just announced the availability of Development Studio 5 (DS-5) in a Community Edition (CE). This edition allows native Android applications to be developed in C/C++, with no license fees, running up to four times faster than Java code. The toolkit is based on Eclipse. It complements the SDK and NDK (Native Develope...

    Read the article

  • Building Search Into Your Organization

    Most corporations understand that a great search strategy depends on applying proven best practices throughout the organization. Here are some tips for building great content architecture across your organization.

    Read the article

  • Come See Us Next Week at VMworld 2014

    - by Larry Wake
    If you're at VMworld 2014 next week in San Francisco, drop by booth 205. We'll have folks from both the Oracle Solaris and Oracle ZFS Storage teams, so you can learn more about what's new in Oracle Solaris 11.2, plus what the storage team has been up to as they unleash their "it's perfect for virtualization" architecture, with a series of new VMware API integrations that crushes both the other big-name storage vendors and the all-flash start-ups.

    Read the article

  • Fusion Middleware MAA (Maximum Availability Architecture)

    - by katsumii
    This Japanese-language post by INOUE Katsumi (Tokyo) discusses MAA (Maximum Availability Architecture) best practices for SOA 11g and Oracle Database 11g, including the use of SecureFiles for Large Object (LOB) storage, and points to the Oracle Blogs entry "[FMW] MAA Best Practices - Oracle Fusion Middleware".

    Read the article

  • Oracle WebLogic Server | WebLogic Channel

    - by ???02
    This Japanese-language WebLogic Channel article introduces Oracle WebLogic Server and the architecture of its Java application infrastructure, with sections on Oracle WebLogic Server, WebLogic JDBC, and Oracle RAC integration. The full material is available at http://www.oracle.com/technetwork/jp/ondemand/application-grid/wls11g-architecture-201107-otn-sc-439536-ja.pdf

    Read the article

  • How to use public IPs from two ISPs when their subnets differ from each other

    - by user1471995
    Please bear with my long explanation; it is important for describing the actual problem. Please also pardon my limited knowledge of pfSense, as I am new to it. I have a single pfSense box with three Ethernet adapters. Before moving to their configuration, I should mention that I have two Ethernet-based Internet leased-line connections, which I'll call ISP A and ISP B; the last interface is the LAN, connected to a network switch. Typical network diagram: ISP A and ISP B ----- pfSense ----> Switch ----> Servers.
    ISP A (initially purchased): WAN IP 113.193.X.X /29, gateway IP 113.193.X.A, plus 4 other usable public IPs in the same subnet (so the gateway for those IPs is also the same). ISP B (recently purchased): WAN IP 115.115.X.X /30, gateway IP 115.115.X.B, plus 5 other usable public IPs in a different subnet (so the gateway for those IPs is different); for example, if 115.119.X.X2 is one of the IPs from that list, then the gateway for it is 115.119.X.X1.
    Configuration of the three interfaces: WAN on network port nfe0, static, IP address 113.193.X.X /29, gateway 113.193.X.A. LAN on network port vr0, static, IP address 192.168.1.1 /24, no gateway. RELWAN on network port rl0, static, IP address 115.115.X.X /30 (I am not sure of the subnet), gateway 115.115.X.B.
    To use the public IPs from ISP A I did the following: a) created Virtual IPs using either ARP or IP Alias; b) under Firewall: NAT: Port Forward, created specific NAT mappings from a public IP to an internal LAN private IP, for example: WAN TCP/UDP * * 113.193.X.X1 53 (DNS) 192.168.1.5 53 (DNS); WAN TCP/UDP * * 113.193.X.X1 80 (HTTP) 192.168.1.5 80 (HTTP); WAN TCP * * 113.193.X.X2 80 (HTTP) 192.168.1.7 80 (HTTP); etc.; c) Firewall: NAT: Outbound is currently Manual, and only the default rules defined for WAN are present; d) if it is relevant, the default rules generated on the Firewall: Rules WAN tab are "Block private networks (RFC 1918)" and "Block reserved/not assigned by IANA".
    To use the public IPs from ISP B I did the following: a) created Virtual IPs using either ARP or IP Alias; b) under Firewall: NAT: Port Forward, created a specific NAT mapping from a public IP to an internal LAN private IP, for example: RELWAN TCP/UDP * * 115.119.116.X.X1 80 (HTTP) 192.168.1.11 80 (HTTP); c) Firewall: NAT: Outbound is Manual, and only the default rules defined for RELWAN are present; d) if it is relevant, the default rules generated on the Firewall: Rules RELWAN tab are "Block private networks (RFC 1918)" and "Block reserved/not assigned by IANA".
    The last thing before my actual query: to set up multi-WAN I did the following: a) under System: Gateways, on the Groups tab, I created a new group named MultipleGateway containing WANGW and RELWAN with tiers "Tier 2, Tier 1" and the description "Multiple Gateway Test"; b) under Firewall: Rules, on the LAN tab, I created a rule for internal traffic as follows: * LAN net * * * MultipleGateway none; c) this setup works: if I unplug the first ISP, traffic starts routing via ISP B, and vice versa.
    Now my main query and problem: I am not able to use the public IP addresses allocated by ISP B. I have tried many small tweaks, none of them successful. The notable difference between the two ISPs is: a) with ISP A, the usable public IP addresses are in the same subnet, so they use the same gateway as the WAN IP;
    b) with ISP B, the usable public IP addresses are in a different subnet, so their gateway IP is obviously different from the WAN gateway's IP. Please let me know how to use the ISP B public IP addresses; in the future I will also be relying on ISP B for more IPs.
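    One way to narrow this down, regardless of which rule is at fault, is to probe one of the ISP B public IPs from an outside host while watching the RELWAN interface on the pfSense box. This is only a sketch: the interface name rl0 and the addresses are taken from the question above, and the tools are the standard ones shipped with pfSense.
      # From a host outside both ISPs, request the forwarded service on an ISP B public IP:
      curl -v http://115.119.X.X2/
      # On pfSense (Diagnostics shell), check whether the request reaches rl0 and
      # whether the reply leaves through rl0 rather than through the ISP A interface:
      tcpdump -ni rl0 host 115.119.X.X2 and port 80
    If the replies turn out to leave on the wrong interface, the problem is return routing for the second gateway rather than the port-forward rules themselves.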

    Read the article

  • What characteristic of networking/TCP causes linear relation between TCP activity and latency?

    - by DeLongey
    The core of this problem is that our application uses websockets for real-time interfaces. We are testing our app in a new environment, but strangely we're noticing an increasing delay in TCP websocket packets associated with an increase in websocket activity. For example, if one websocket event occurs without any other activity in a 1-minute period, the response from the server is instantaneous. However, if we slowly increase client activity, the latency of the server responses increases in a linear relationship (each packet takes more time to reach the client as activity grows). For those wondering, this is NOT app-related, since our logs show that our server is running and responding to requests in under 100 ms, as desired. The delay appears after the server has processed the request, created the TCP packet and sent it to the client (and not the other way around). Architecture: This new environment runs with a Virtual IP address and uses keepalived on a load balancer to balance the traffic between instances. Two boxes sit behind the balancer and all traffic runs through it. Our host provider manages the balancer and we do not have control over that part of the architecture. Theory: Could this somehow be related to something buffering the packets in the new environment? Thanks for your help.
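    One way to test the buffering theory is to capture the same websocket frames on both sides of the balancer and compare timestamps. This is only a sketch: the interface names and port are assumptions, and since the balancer itself is managed by the host provider, the capture points are the application server and the client.
      # On an application box behind the balancer:
      tcpdump -ni eth0 -w app-side.pcap 'tcp port 80'
      # On the client machine, capture the same session:
      tcpdump -ni en0 -w client-side.pcap 'tcp port 80'
    Matching a frame's send time in app-side.pcap against its arrival in client-side.pcap shows whether the growing latency accumulates in transit (pointing at the VIP/keepalived layer) or on the end hosts.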

    Read the article

  • Mac 10.5 Python libsvm 64 bit vs 32 bit

    - by shadowsoul
    I have a Mac running OS X 10.5. When I type "python" in a terminal, it says:
    Enthought Python Distribution -- www.enthought.com
    Version: 7.3-2 (64-bit)
    Python 2.7.3 |EPD 7.3-2 (64-bit)| (default, Apr 12 2012, 11:14:05)
    [GCC 4.0.1 (Apple Inc. build 5493)] on darwin
    Type "credits", "demo" or "enthought" for more information.
    Then I go to my libsvm/python folder and type "make", which results in:
    make -C .. lib
    if [ "Darwin" = "Darwin" ]; then \
        SHARED_LIB_FLAG="-dynamiclib -W1,-install_name,libsvm.so.2"; \
    else \
        SHARED_LIB_FLAG="-shared -W1,-soname,libsvm.so.2"; \
    fi; \
    g++ ${SHARED_LIB_FLAG} svm.o -o libsvm.so.2
    When I try to do "from svmutil import *" I get the error:
    OSError: dlopen(.../libsvm-3.12/python/../libsvm.so.2, 6): no suitable image found. Did find: .../libsvm-3.12/python/../libsvm.so.2: mach-o, but wrong architecture
    When I do "lipo -info libsvm.so.2", I get:
    Non-fat file: libsvm.so.2 is architecture: i386
    So it looks like I'm running 64-bit Python but libsvm ends up built as a 32-bit library. Is there any way I can get it to compile as 64-bit?
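    A minimal sketch of forcing a 64-bit build, assuming the stock libsvm Makefile honours compiler variables passed on the make command line (the -arch switch is Apple's standard gcc flag, not anything libsvm-specific):
      cd libsvm-3.12
      make clean
      # Build the shared library with x86_64 objects instead of the default i386:
      make lib CXX="g++ -arch x86_64" CC="gcc -arch x86_64"
      # Verify the result; it should now report x86_64:
      lipo -info libsvm.so.2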

    Read the article

  • How to install QEMU on Damn Small Linux?

    - by user2934303
    I'm trying to install QEMU on a Damn Small Linux installation in order to emulate Pentium features on a 486 computer. Though DSL was discontinued, it's the only Linux that runs reasonably on a 486 processor; most recent kernels don't even boot on the 486 architecture. I tried Tiny Core Linux, but it doesn't work on a 486, so I seem to have no escape here. The most recent image of DSL is from 2008, it uses a 2.4.x kernel, and I couldn't find a way to compile QEMU on it. Firstly, it lacks several of the tools needed to compile it, and it has several dependency problems. I tried some pre-compiled packages, but the only one that worked was a QEMU 5.2 RPM package (it didn't have dependency problems), and it was way too old: it wasn't capable of running Windows yet, it just gave me the option of emulating code, not a full OS like Windows, and it also didn't give me the option to choose which architecture I wanted it to emulate (the -cpu option). Can anyone help me with this? Also, if someone can think of an alternative, I'd be grateful. Thanks.
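    For reference, on any reasonably recent QEMU build the emulation described above would be started roughly like this (the disk image name and memory size are placeholders); whether such a build can be produced for a kernel-2.4-era DSL host is exactly the open question:
      # Emulate a Pentium-class CPU with 32 MB of RAM, booting from a guest disk image:
      qemu-system-i386 -cpu pentium -m 32 -hda guest.img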

    Read the article

  • How to auto-cc a system email account any time a user creates an appointment

    - by Ferdy
    I will not bother explaining my full architecture or my reasons for wanting this, in order to keep this question short: Is it possible to auto-cc a certain email account any time an Exchange user creates an appointment or meeting in his own calendar? Is it possible using rules? Our Exchange 2007 server is outsourced; I cannot change the configuration or install plugins server-side. Preferably, it should still work server-side, because users may use the Outlook client but also Outlook Web Access. Is there any other way, perhaps using group policies? My conclusion so far is that the only viable way to accomplish this is to build an Outlook add-on. The problem there is that it would need to be managed for thousands of desktop users, and the add-on will not work when another client is used (OWA, mobile). An alternative architecture could be to pull the information from the users' calendars on a scheduled basis. Given that we are talking about a lot of users, scalability is a major issue; this has also been confirmed by Microsoft. Can you confirm that my thinking is correct, or do you have any other solutions?

    Read the article

  • Installing Tcl and Tix in OSX

    - by Nate
    Hello, I'm having trouble installing Tix on OSX the version of Tix I am using is 8.4.3. I try to install it by following the instructions in the README % ./configure % make % make install And iat the very start of make it gives me: xXpm.o tixUnixWm.o -L/Library/Frameworks/Tcl.framework -ltclstub8.5 -L/Library/Frameworks/Tk.framework -ltkstub8.5 ld: warning: in /Library/Frameworks/Tcl.framework/libtclstub8.5.a, missing required architecture x86_64 in file ld: warning: in /Library/Frameworks/Tk.framework/libtkstub8.5.a, missing required architecture x86_64 in file Undefined symbols: (A whole long list of things) at the very end ld: symbol(s) not found collect2: ld returned 1 exit status make: *** [libTix8.4.3.dylib] Error 1 Edit: Here's all the errors in the middle.. ld: warning: in /Library/Frameworks/Tcl.framework/libtclstub8.5.a, missing required architecture x86_64 in file ld: warning: in /Library/Frameworks/Tk.framework/libtkstub8.5.a, missing required architecture x86_64 in file Undefined symbols: "_Tk_InitStubs", referenced from: _Tix_Init in tixInit.o "_Tcl_InitStubs", referenced from: _Tix_Init in tixInit.o "_tclStubsPtr", referenced from: _FreeParseOptions in tixClass.o _FreeParseOptions in tixClass.o _Tix_UninitializedClassCmd in tixClass.o _Tix_UninitializedClassCmd in tixClass.o _Tix_InstanceCmd in tixClass.o _Tix_InstanceCmd in tixClass.o _Tix_InstanceCmd in tixClass.o _Tix_InstanceCmd in tixClass.o _Tix_InstanceCmd in tixClass.o _Tix_InstanceCmd in tixClass.o _Tix_InstanceCmd in tixClass.o _Tix_InstanceCmd in tixClass.o _Tix_InstanceCmd in tixClass.o _Tix_InstanceCmd in tixClass.o _Tix_InstanceCmd in tixClass.o _Tix_CreateInstanceCmd in tixClass.o _SetupAttribute in tixClass.o _SetupAttribute in tixClass.o _SetupAttribute in tixClass.o _ClassTableDeleteProc in tixClass.o _CreateClassRecord in tixClass.o _InitClass in tixClass.o _InitClass in tixClass.o _InitClass in tixClass.o _InitClass in tixClass.o _InitClass in tixClass.o _InitClass in tixClass.o _InitClass in tixClass.o _InitClass in tixClass.o _Tix_ClassCmd in tixClass.o _EventProc in tixCmds.o _IdleHandler in tixCmds.o _MapEventProc in tixCmds.o _MapEventProc in tixCmds.o _Tix_GetDefaultCmd in tixCmds.o _Tix_GetDefaultCmd in tixCmds.o _Tix_TmpLineCmd in tixCmds.o _Tix_ParentWindow in tixCmds.o _Tix_ParentWindow in tixCmds.o _Tix_DoWhenMappedCmd in tixCmds.o _Tix_DoWhenMappedCmd in tixCmds.o _Tix_DoWhenMappedCmd in tixCmds.o _Tix_DoWhenIdleCmd in tixCmds.o _Tix_DoWhenIdleCmd in tixCmds.o _Tix_DoWhenIdleCmd in tixCmds.o _Tix_DoWhenIdleCmd in tixCmds.o _Tix_DoWhenIdleCmd in tixCmds.o _Tix_HandleOptionsCmd in tixCmds.o _Tix_HandleOptionsCmd in tixCmds.o _Tix_HandleOptionsCmd in tixCmds.o _Tix_HandleOptionsCmd in tixCmds.o _Tix_HandleOptionsCmd in tixCmds.o _Tix_HandleOptionsCmd in tixCmds.o _Tix_HandleOptionsCmd in tixCmds.o _Tix_HandleOptionsCmd in tixCmds.o _Tix_Get3DBorderCmd in tixCmds.o _Tix_Get3DBorderCmd in tixCmds.o _Tix_Get3DBorderCmd in tixCmds.o _tixStrDup in tixCompat.o _Tix_ArgcError in tixError.o _Tix_ValueMissingError in tixError.o _Tix_UnknownPublicMethodError in tixError.o _FreeClientStruct in tixGeometry.o _StructureProc in tixGeometry.o _StructureProc in tixGeometry.o _Tix_ManageGeometryCmd in tixGeometry.o _Tix_ManageGeometryCmd in tixGeometry.o _Tix_ManageGeometryCmd in tixGeometry.o _GeoLostSlaveProc in tixGeometry.o _GeoLostSlaveProc in tixGeometry.o _GeoReqProc in tixGeometry.o _Tix_SafeInit in tixInit.o _Tix_Init in tixInit.o _Tix_GetContext in tixMethod.o _Tix_SuperClass in tixMethod.o 
_Tix_FindConfigSpecByName in tixOption.o _Tix_ChangeOptions in tixOption.o _Tix_QueryOneOption in tixOption.o _Tix_GetVar in tixOption.o _Tix_SetScrollBarView in tixScroll.o _Tix_SetScrollBarView in tixScroll.o _Tix_UpdateScrollBar in tixScroll.o _Tix_CreateCommands in tixUtils.o _Tix_CreateCommands in tixUtils.o _DeleteHashTableProc in tixUtils.o _TixGetHashTable in tixUtils.o _Tix_SetRcFileName in tixUtils.o _Tix_CreateSubWindow in tixUtils.o _ReliefParseProc in tixUtils.o _Tix_HandleSubCmds in tixUtils.o _Tix_HandleSubCmds in tixUtils.o _Tix_HandleSubCmds in tixUtils.o _Tix_ZAlloc in tixUtils.o _Tix_GlobalVarEval in tixUtils.o _Tix_Exit in tixUtils.o _Tix_Exit in tixUtils.o _Tix_CreateWidgetCmd in tixWidget.o _Tix_CreateWidgetCmd in tixWidget.o _Tix_GrSelModify in tixGrSel.o _Tix_GrFreeSortItems in tixGrSort.o _SortCompareProc in tixGrSort.o _SortCompareProc in tixGrSort.o _SortCompareProc in tixGrSort.o _Tix_GrGetSortItems in tixGrSort.o _Tix_GrSort in tixGrSort.o _Tix_GrSort in tixGrSort.o _Tix_GrSort in tixGrSort.o _Tix_GrSort in tixGrSort.o _Tix_GrSort in tixGrSort.o _Tix_GrSort in tixGrSort.o _Tix_GrSort in tixGrSort.o _Tix_GrSort in tixGrSort.o _Tix_GrSort in tixGrSort.o _Tix_GrSort in tixGrSort.o _Tix_GetChars in tixGrUtl.o _Tix_GrConfigSize in tixGrUtl.o _Tix_GrConfigSize in tixGrUtl.o _Tix_GrConfigSize in tixGrUtl.o _Tix_GrConfigSize in tixGrUtl.o _Tix_HLCancelResizeWhenIdle in tixHList.o _Tix_HLFindElement in tixHList.o _CurSelection in tixHList.o _Tix_HLGeometryInfo in tixHList.o _Tix_HLGeometryInfo in tixHList.o _Tix_HLGeometryInfo in tixHList.o _UpdateOneScrollBar in tixHList.o _AllocElement in tixHList.o _WidgetCommand in tixHList.o _Tix_HLEntryCget in tixHList.o _Tix_HLResizeWhenIdle in tixHList.o _Tix_HLResizeWhenIdle in tixHList.o _NewElement in tixHList.o _NewElement in tixHList.o _NewElement in tixHList.o _NewElement in tixHList.o _NewElement in tixHList.o _NewElement in tixHList.o _NewElement in tixHList.o _NewElement in tixHList.o _NewElement in tixHList.o _NewElement in tixHList.o _NewElement in tixHList.o _NewElement in tixHList.o _NewElement in tixHList.o _NewElement in tixHList.o _WidgetConfigure in tixHList.o _WidgetConfigure in tixHList.o _Tix_HListCmd in tixHList.o _Tix_HListCmd in tixHList.o _Tix_HListCmd in tixHList.o _Tix_HListCmd in tixHList.o _Tix_HListCmd in tixHList.o _UpdateScrollBars in tixHList.o _FreeElement in tixHList.o _FreeElement in tixHList.o _Tix_HLDelete in tixHList.o _Tix_HLDelete in tixHList.o _WidgetDestroy in tixHList.o _WidgetDestroy in tixHList.o _Tix_HLXView in tixHList.o _Tix_HLXView in tixHList.o _Tix_HLXView in tixHList.o _Tix_HLXView in tixHList.o _Tix_HLXView in tixHList.o _Tix_HLSetSite in tixHList.o _Tix_HLSetSite in tixHList.o _Tix_HLSetSite in tixHList.o _ConfigElement in tixHList.o _Tix_HLAddChild in tixHList.o _Tix_HLAdd in tixHList.o _Tix_HLComputeGeometry in tixHList.o _Tix_HLResizeNow in tixHList.o _Tix_HLNearest in tixHList.o _SubWindowEventProc in tixHList.o _WidgetEventProc in tixHList.o _WidgetEventProc in tixHList.o _WidgetEventProc in tixHList.o _WidgetEventProc in tixHList.o _Tix_HLItemInfo in tixHList.o _Tix_HLItemInfo in tixHList.o _Tix_HLItemInfo in tixHList.o _Tix_HLItemInfo in tixHList.o _Tix_HLItemInfo in tixHList.o _Tix_HLItemInfo in tixHList.o _Tix_HLItemInfo in tixHList.o _Tix_HLItemInfo in tixHList.o _Tix_HLItemInfo in tixHList.o _Tix_HLItemInfo in tixHList.o _Tix_HLSelection in tixHList.o _Tix_HLSelection in tixHList.o _Tix_HLSelection in tixHList.o _Tix_HLSelection in tixHList.o _Tix_HLYView in 
tixHList.o _Tix_HLYView in tixHList.o _Tix_HLYView in tixHList.o _Tix_HLSeeElement in tixHList.o _WidgetDisplay in tixHList.o _WidgetDisplay in tixHList.o _WidgetDisplay in tixHList.o _Tix_HLSee in tixHList.o _Tix_HLInfo in tixHList.o _Tix_HLInfo in tixHList.o _Tix_HLInfo in tixHList.o _Tix_HLInfo in tixHList.o _Tix_HLInfo in tixHList.o _Tix_HLInfo in tixHList.o _Tix_HLAllocColumn in tixHLCol.o _Tix_HLColWidth in tixHLCol.o _Tix_HLColWidth in tixHLCol.o _Tix_HLColWidth in tixHLCol.o _Tix_HLColWidth in tixHLCol.o _Tix_HLGetColumn in tixHLCol.o _Tix_HLGetColumn in tixHLCol.o _Tix_HLGetColumn in tixHLCol.o _Tix_HLItemExists in tixHLCol.o _Tix_HLItemExists in tixHLCol.o _Tix_HLItemDelete in tixHLCol.o _Tix_HLItemCreate in tixHLCol.o _Tix_HLIndExists in tixHLInd.o _Tix_HLIndExists in tixHLInd.o _Tix_HLIndCGet in tixHLInd.o _Tix_HLIndSize in tixHLInd.o _Tix_HLIndSize in tixHLInd.o _Tix_HLIndDelete in tixHLInd.o _Tix_HLIndCreate in tixHLInd.o _Tix_HLIndConfig in tixHLInd.o _Tix_HLGetHeader in tixHLHdr.o _Tix_HLCreateHeaders in tixHLHdr.o _Tix_HLCreateHeaders in tixHLHdr.o _Tix_HLHdrExist in tixHLHdr.o _Tix_HLHdrExist in tixHLHdr.o _Tix_HLHdrSize in tixHLHdr.o _Tix_HLHdrSize in tixHLHdr.o _Tix_HLFreeHeaders in tixHLHdr.o _Tix_HLHdrCreate in tixHLHdr.o _DeleteTab in tixNBFrame.o _DeleteTab in tixNBFrame.o _WidgetDestroy in tixNBFrame.o _FindTab in tixNBFrame.o _ImageProc in tixNBFrame.o _TabConfigure in tixNBFrame.o _WidgetEventProc in tixNBFrame.o _WidgetEventProc in tixNBFrame.o _WidgetEventProc in tixNBFrame.o _WidgetConfigure in tixNBFrame.o _Tix_NoteBookFrameCmd in tixNBFrame.o _Tix_NoteBookFrameCmd in tixNBFrame.o _Tix_NoteBookFrameCmd in tixNBFrame.o _Tix_NoteBookFrameCmd in tixNBFrame.o _WidgetCommand in tixNBFrame.o _WidgetCommand in tixNBFrame.o _WidgetCommand in tixNBFrame.o _WidgetCommand in tixNBFrame.o _WidgetCommand in tixNBFrame.o _WidgetCommand in tixNBFrame.o _WidgetCommand in tixNBFrame.o _WidgetCommand in tixNBFrame.o _WidgetCommand in tixNBFrame.o _WidgetCommand in tixNBFrame.o _WidgetCommand in tixNBFrame.o _WidgetCommand in tixNBFrame.o _WidgetCommand in tixNBFrame.o _WidgetCommand in tixNBFrame.o _WidgetCommand in tixNBFrame.o _WidgetCommand in tixNBFrame.o _WidgetCommand in tixNBFrame.o _WidgetCommand in tixNBFrame.o _WidgetCommand in tixNBFrame.o _WidgetCommand in tixNBFrame.o _WidgetCommand in tixNBFrame.o _WidgetCommand in tixNBFrame.o _WidgetCommand in tixNBFrame.o _ResizeWhenIdle in tixTList.o _ResizeWhenIdle in tixTList.o _WidgetConfigure in tixTList.o _WidgetConfigure in tixTList.o _Tix_TListCmd in tixTList.o _Tix_TListCmd in tixTList.o _UpdateScrollBars in tixTList.o _WidgetCommand in tixTList.o _Tix_TLGeometryInfo in tixTList.o _Tix_TLGeometryInfo in tixTList.o _Tix_TLGeometryInfo in tixTList.o _Tix_TLSpecialEntryInfo in tixTList.o _Tix_TLSpecialEntryInfo in tixTList.o _Tix_TLSpecialEntryInfo in tixTList.o _FreeEntry in tixTList.o _WidgetComputeGeometry in tixTList.o _WidgetComputeGeometry in tixTList.o _WidgetComputeGeometry in tixTList.o _Tix_TLGetNearest in tixTList.o _Tix_TranslateIndex in tixTList.o _Tix_TLEntryCget in tixTList.o _WidgetDestroy in tixTList.o _WidgetDestroy in tixTList.o _Tix_TLGetNeighbor in tixTList.o _Tix_TLGetNeighbor in tixTList.o _Tix_TLInfo in tixTList.o _Tix_TLInfo in tixTList.o _Tix_TLInfo in tixTList.o _Tix_TLInfo in tixTList.o _Tix_TLIndex in tixTList.o _Tix_TLNearest in tixTList.o _WidgetEventProc in tixTList.o _WidgetEventProc in tixTList.o _WidgetEventProc in tixTList.o _ConfigElement in tixTList.o _Tix_TLEntryConfig in tixTList.o 
_Tix_TLInsert in tixTList.o _Tix_TLInsert in tixTList.o _Tix_TLInsert in tixTList.o _Tix_TLView in tixTList.o _Tix_TLView in tixTList.o _Tix_TLSetSite in tixTList.o _Tix_TLSetSite in tixTList.o _Tix_TLSetSite in tixTList.o _Tix_TLSee in tixTList.o _Tix_TLSee in tixTList.o _Tix_TLSelection in tixTList.o _Tix_TLSelection in tixTList.o _Tix_TLSelection in tixTList.o _Tix_TLSelection in tixTList.o _ImgCmpGet in tixImgCmp.o _FreeLine in tixImgCmp.o _AddNewLine in tixImgCmp.o _FreeItem in tixImgCmp.o _AddNewText in tixImgCmp.o _AddNewSpace in tixImgCmp.o _AddNewImage in tixImgCmp.o _AddNewBitmap in tixImgCmp.o _ImgCmpFreeResources in tixImgCmp.o _ImgCmpDelete in tixImgCmp.o _ImgCmpConfigureMaster in tixImgCmp.o _ImgCmpConfigureMaster in tixImgCmp.o _ImgCmpConfigureMaster in tixImgCmp.o _ImgCmpCmd in tixImgCmp.o _ImgCmpCmd in tixImgCmp.o _ImgCmpCmd in tixImgCmp.o _ImgCmpCmd in tixImgCmp.o _ImgCmpCmd in tixImgCmp.o _ImgCmpCmd in tixImgCmp.o _ImgCmpCmd in tixImgCmp.o _ImgCmpCreate in tixImgCmp.o _ImgCmpCreate in tixImgCmp.o _ImageProc in tixImgCmp.o _ImgXpmDelete in tixImgXpm.o _ImgXpmDelete in tixImgXpm.o _Tix_DefinePixmap in tixImgXpm.o _Tix_DefinePixmap in tixImgXpm.o _ImgXpmFree in tixImgXpm.o _ImgXpmFree in tixImgXpm.o _ImgXpmGetDataFromString in tixImgXpm.o _ImgXpmGetDataFromString in tixImgXpm.o _ImgXpmConfigureInstance in tixImgXpm.o _ImgXpmConfigureInstance in tixImgXpm.o _ImgXpmConfigureInstance in tixImgXpm.o _ImgXpmConfigureInstance in tixImgXpm.o _ImgXpmConfigureInstance in tixImgXpm.o _ImgXpmConfigureInstance in tixImgXpm.o _ImgXpmConfigureInstance in tixImgXpm.o _ImgXpmConfigureInstance in tixImgXpm.o _ImgXpmGet in tixImgXpm.o _ImgXpmConfigureMaster in tixImgXpm.o _ImgXpmConfigureMaster in tixImgXpm.o _ImgXpmConfigureMaster in tixImgXpm.o _ImgXpmConfigureMaster in tixImgXpm.o _ImgXpmConfigureMaster in tixImgXpm.o _ImgXpmConfigureMaster in tixImgXpm.o _ImgXpmConfigureMaster in tixImgXpm.o _ImgXpmCmd in tixImgXpm.o _ImgXpmCmd in tixImgXpm.o _ImgXpmCmd in tixImgXpm.o _ImgXpmCmd in tixImgXpm.o _ImgXpmCreate in tixImgXpm.o _ImgXpmCreate in tixImgXpm.o _TixpInitPixmapInstance in tixUnixXpm.o _TixpXpmAllocTmpBuffer in tixUnixXpm.o _TixpXpmAllocTmpBuffer in tixUnixXpm.o _TixpXpmFreeTmpBuffer in tixUnixXpm.o _TixpXpmFreeTmpBuffer in tixUnixXpm.o _TixpXpmFreeInstanceData in tixUnixXpm.o "_tclIntStubsPtr", referenced from: _Tix_CreateWidgetCmd in tixWidget.o "_tkIntStubsPtr", referenced from: _XLowerWindow in tixUnixWm.o "_tkIntXlibStubsPtr", referenced from: _IdleHandler in tixGrid.o _IdleHandler in tixGrid.o _IdleHandler in tixGrid.o _Tix_GrFormatGrid in tixGrFmt.o _Tix_GrFormatGrid in tixGrFmt.o _Tix_GrFormatGrid in tixGrFmt.o _Tix_GrFormatGrid in tixGrFmt.o _DrawElements in tixHList.o _DrawElements in tixHList.o _DrawElements in tixHList.o _WidgetDisplay in tixHList.o _WidgetDisplay in tixHList.o _WidgetDisplay in tixHList.o _WidgetDisplay in tixHList.o _Tix_HLDrawHeader in tixHLHdr.o _Tix_HLDrawHeader in tixHLHdr.o _WidgetDisplay in tixNBFrame.o _Tix_TextStyleSetTemplate in tixDiText.o _Tix_TextStyleSetTemplate in tixDiText.o _Tix_TextItemFree in tixDiText.o _Tix_TextItemConfigure in tixDiText.o _Tix_WindowItemUnmap in tixDiWin.o _Tix_WindowItemUnmap in tixDiWin.o _Tix_WindowStyleFree in tixDiWin.o _Tix_WindowStyleConfigure in tixDiWin.o _Tix_WindowStyleSetTemplate in tixDiWin.o _Tix_WindowStyleSetTemplate in tixDiWin.o _Tix_WindowStyleSetTemplate in tixDiWin.o _Tix_WindowStyleSetTemplate in tixDiWin.o _Tix_WindowItemFree in tixDiWin.o _Tix_WindowItemFree in tixDiWin.o 
_Tix_WindowItemDisplay in tixDiWin.o _Tix_WindowItemDisplay in tixDiWin.o _Tix_WindowItemDisplay in tixDiWin.o _Tix_WindowItemDisplay in tixDiWin.o _Tix_WindowItemConfigure in tixDiWin.o _SubWindowLostSlaveProc in tixDiWin.o _UnmapClient in tixForm.o _UnmapClient in tixForm.o _TixFm_AddToMaster in tixForm.o _TixFm_GetFormInfo in tixForm.o _TixFm_FindClientPtrByName in tixForm.o _GetMasterInfo in tixForm.o _TixFm_Check in tixForm.o _TixFm_Slaves in tixForm.o _ArrangeGeometry in tixForm.o _ArrangeGeometry in tixForm.o _ArrangeGeometry in tixForm.o _TixFm_SetClient in tixForm.o _TixFm_SetClient in tixForm.o _TixFm_SetClient in tixForm.o _TixFm_SetClient in tixForm.o _TixFm_Spring in tixForm.o _TixFm_SetGrid in tixForm.o _TixFm_LostSlaveProc in tixForm.o _TixFm_ForgetOneClient in tixForm.o _TixFm_DeleteMaster in tixForm.o _ConfigureAttachment in tixFormMisc.o _ConfigureAttachment in tixFormMisc.o _ConfigureAttachment in tixFormMisc.o _ConfigureAttachment in tixFormMisc.o _TixFm_Configure in tixFormMisc.o _TixFm_Configure in tixFormMisc.o _TixFm_Configure in tixFormMisc.o _TixFm_Configure in tixFormMisc.o _TixFm_Configure in tixFormMisc.o _TixFm_Configure in tixFormMisc.o _WidgetCmdDeletedProc in tixGrid.o _Tix_GrCGet in tixGrid.o _WidgetDestroy in tixGrid.o _WidgetDestroy in tixGrid.o _WidgetConfigure in tixGrid.o _Tix_GrConfig in tixGrid.o _Tix_GrConfig in tixGrid.o _Tix_GridCmd in tixGrid.o _Tix_GrView in tixGrid.o _IdleHandler in tixGrid.o _IdleHandler in tixGrid.o _IdleHandler in tixGrid.o _IdleHandler in tixGrid.o _IdleHandler in tixGrid.o _IdleHandler in tixGrid.o _Tix_GrFillCells in tixGrFmt.o _Tix_GrFillCells in tixGrFmt.o _Tix_GrFreeUnusedColors in tixGrFmt.o _Tix_GrFreeUnusedColors in tixGrFmt.o _GetInfo in tixGrFmt.o _Tix_GrSaveColor in tixGrFmt.o _Tix_GrFormatGrid in tixGrFmt.o _Tix_GrFormatGrid in tixGrFmt.o _Tix_GrFormatBorder in tixGrFmt.o _Tix_GrConfigSize in tixGrUtl.o _Tix_GrConfigSize in tixGrUtl.o _Tix_GrConfigSize in tixGrUtl.o _Tix_HLCGet in tixHList.o _WidgetCmdDeletedProc in tixHList.o _DrawElements in tixHList.o _DrawElements in tixHList.o _DrawElements in tixHList.o _WidgetConfigure in tixHList.o _Tix_HLConfig in tixHList.o _Tix_HLConfig in tixHList.o _Tix_HListCmd in tixHList.o _WidgetDestroy in tixHList.o _WidgetDestroy in tixHList.o _Tix_HLXView in tixHList.o _Tix_HLComputeGeometry in tixHList.o _Tix_HLYView in tixHList.o _WidgetDisplay in tixHList.o _WidgetDisplay in tixHList.o _WidgetDisplay in tixHList.o _WidgetDisplay in tixHList.o _WidgetDisplay in tixHList.o _WidgetDisplay in tixHList.o _WidgetDisplay in tixHList.o _WidgetDisplay in tixHList.o _Tix_HLColWidth in tixHLCol.o _Tix_HLItemCGet in tixHLCol.o _Tix_HLItemConfig in tixHLCol.o _Tix_HLItemConfig in tixHLCol.o _Tix_HLIndCGet in tixHLInd.o _Tix_HLIndConfig in tixHLInd.o _Tix_HLIndConfig in tixHLInd.o _Tix_HLCreateHeaders in tixHLHdr.o _Tix_HLFreeHeaders in tixHLHdr.o _Tix_HLDrawHeader in tixHLHdr.o _Tix_HLDrawHeader in tixHLHdr.o _WidgetCmdDeletedProc in tixNBFrame.o _DeleteTab in tixNBFrame.o _DeleteTab in tixNBFrame.o _WidgetDestroy in tixNBFrame.o _WidgetDestroy in tixNBFrame.o _WidgetComputeGeometry in tixNBFrame.o _WidgetDisplay in tixNBFrame.o _WidgetDisplay in tixNBFrame.o _WidgetDisplay in tixNBFrame.o _WidgetDisplay in tixNBFrame.o _WidgetDisplay in tixNBFrame.o _WidgetDisplay in tixNBFrame.o _WidgetDisplay in tixNBFrame.o _TabConfigure in tixNBFrame.o _WidgetConfigure in tixNBFrame.o _Tix_NoteBookFrameCmd in tixNBFrame.o _WidgetCommand in tixNBFrame.o _WidgetCommand in tixNBFrame.o 
_WidgetCommand in tixNBFrame.o _WidgetCommand in tixNBFrame.o _WidgetCommand in tixNBFrame.o _WidgetCommand in tixNBFrame.o _WidgetCmdDeletedProc in tixTList.o _Tix_TLCGet in tixTList.o _WidgetConfigure in tixTList.o _Tix_TLConfig in tixTList.o _Tix_TLConfig in tixTList.o _Tix_TListCmd in tixTList.o _Tix_TListCmd in tixTList.o _Tix_TListCmd in tixTList.o _Tix_TListCmd in tixTList.o _WidgetDisplay in tixTList.o _WidgetDisplay in tixTList.o _WidgetDisplay in tixTList.o _WidgetDisplay in tixTList.o _WidgetDisplay in tixTList.o _FreeEntry in tixTList.o _WidgetDestroy in tixTList.o _WidgetDestroy in tixTList.o _ImgCmpGet in tixImgCmp.o _FreeLine in tixImgCmp.o _AddNewLine in tixImgCmp.o _FreeItem in tixImgCmp.o _FreeItem in tixImgCmp.o _FreeItem in tixImgCmp.o _FreeItem in tixImgCmp.o _FreeItem in tixImgCmp.o _FreeItem in tixImgCmp.o _FreeItem in tixImgCmp.o _AddNewText in tixImgCmp.o _AddNewSpace in tixImgCmp.o _AddNewImage in tixImgCmp.o _AddNewBitmap in tixImgCmp.o _ImgCmpFreeResources in tixImgCmp.o _ImgCmpFreeResources in tixImgCmp.o _ImgCmpFreeResources in tixImgCmp.o _ImgCmpCmdDeletedProc in tixImgCmp.o _CalculateMasterSize in tixImgCmp.o _ImgCmpDisplay in tixImgCmp.o _ImgCmpDisplay in tixImgCmp.o _ImgCmpConfigureMaster in tixImgCmp.o _ImgCmpConfigureMaster in tixImgCmp.o _ImgCmpCmd in tixImgCmp.o _ImgCmpCmd in tixImgCmp.o _ImgCmpCmd in tixImgCmp.o _ImgXpmDelete in tixImgXpm.o _ImgXpmCmdDeletedProc in tixImgXpm.o _ImgXpmFree in tixImgXpm.o _ImgXpmFree in tixImgXpm.o _ImgXpmConfigureInstance in tixImgXpm.o _ImgXpmConfigureInstance in tixImgXpm.o _ImgXpmConfigureInstance in tixImgXpm.o _ImgXpmConfigureInstance in tixImgXpm.o _ImgXpmConfigureInstance in tixImgXpm.o _ImgXpmGet in tixImgXpm.o _ImgXpmGet in tixImgXpm.o _ImgXpmConfigureMaster in tixImgXpm.o _ImgXpmConfigureMaster in tixImgXpm.o _ImgXpmConfigureMaster in tixImgXpm.o _ImgXpmCmd in tixImgXpm.o _ImgXpmCmd in tixImgXpm.o _ImgXpmCmd in tixImgXpm.o _TixpDrawTmpLine in tixUnixDraw.o _TixpStartSubRegionDraw in tixUnixDraw.o _TixpEndSubRegionDraw in tixUnixDraw.o _TixpSubRegDrawImage in tixUnixDraw.o _TixpSubRegDrawImage in tixUnixDraw.o _TixpXpmRealizePixmap in tixUnixXpm.o _TixpXpmRealizePixmap in tixUnixXpm.o _TixpXpmRealizePixmap in tixUnixXpm.o _TixpXpmRealizePixmap in tixUnixXpm.o _TixpXpmRealizePixmap in tixUnixXpm.o _TixpXpmFreeInstanceData in tixUnixXpm.o _TixpXpmFreeInstanceData in tixUnixXpm.o ld: symbol(s) not found collect2: ld returned 1 exit status make: *** [libTix8.4.3.dylib] Error 1 Thanks -N
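    The linker output above indicates that the installed Tcl/Tk stub libraries contain no x86_64 slice, so a 64-bit Tix build has nothing to link against. One hedged workaround, assuming the Tix 8.4.3 tree uses a standard autoconf configure script, is to build Tix as 32-bit so it matches those frameworks:
      cd Tix8.4.3
      # Build everything as i386 to match the 32-bit-only Tcl/Tk stub libraries:
      ./configure CFLAGS="-arch i386" LDFLAGS="-arch i386"
      make
      sudo make install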

    Read the article

  • How to create RPM for 32-bit arch from a 64-bit arch server?

    - by Gnanam
    Our production server is running CentOS 5, 64-bit arch. Because there are currently no RPMs available for the latest SQLite version (v3.7.3), I created an RPM using rpmbuild for the very first time by following the instructions given here. I was able to successfully create an RPM for the 64-bit (x86_64) architecture, but I am not able to create an RPM for the 32-bit (i386) architecture. It failed with the following errors:
    ... ... ...
    + ./configure --build=x86_64-redhat-linux-gnu --host=x86_64-redhat-linux-gnu --target=i386-redhat-linux-gnu --program-prefix= --prefix=/usr --exec-prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin --sysconfdir=/etc --datadir=/usr/share --includedir=/usr/include --libdir=/usr/lib64 --libexecdir=/usr/libexec --localstatedir=/var --sharedstatedir=/usr/com --mandir=/usr/share/man --infodir=/usr/share/info --enable-threadsafe
    checking for a BSD-compatible install... /usr/bin/install -c
    checking whether build environment is sane... yes
    checking for gawk... gawk
    checking whether make sets $(MAKE)... yes
    checking for style of include used by make... GNU
    checking for x86_64-redhat-linux-gnu-gcc... no
    checking for gcc... gcc
    checking for C compiler default output file name...
    configure: error: C compiler cannot create executables
    See `config.log' for more details.
    error: Bad exit status from /var/tmp/rpm-tmp.73141 (%build)
    RPM build errors:
        Bad exit status from /var/tmp/rpm-tmp.73141 (%build)
    This is the command I called:
    rpmbuild --target i386 -ba sqlite.spec
    My question is: how do I create an RPM for the 32-bit arch from a 64-bit arch server?
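    A minimal sketch of one common fix, assuming the failure is simply that the 32-bit C runtime and headers are missing on the x86_64 host (the package names are the usual CentOS 5 ones and should be treated as assumptions):
      # Install 32-bit toolchain support libraries and headers:
      yum install glibc-devel.i386 libgcc.i386
      # Run the build under a 32-bit personality so configure detects an i386 environment:
      setarch i386 rpmbuild --target i386 -ba sqlite.spec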

    Read the article

  • Cannot Install Phusion Passenger 3.0.13 with Nginx 1.2.1

    - by LightBe Corp
    I installed the Passenger gem, which installed version 3.0.13. Then I executed passenger-install-nginx-module, which is what the Nginx instructions on http://www.modrails.com said to do. It installs the latest stable Nginx, which is 1.2.1 according to the official Nginx wiki page. I told it to install Nginx to /usr/local/nginx (the default suggested on the nginx wiki website). I get the following errors:
    Undefined symbols for architecture x86_64: "_pcre_free_study", referenced from: _ngx_pcre_free_studies in ngx_regex.o ld: symbol(s) not found for architecture x86_64 collect2: ld returned 1 exit status make[1]: *** [objs/nginx] Error 1 make: *** [build] Error 2
    --------------------------------------------
    It looks like something went wrong
    Please read our Users guide for troubleshooting tips: /Users/server1/.rvm/gems/[email protected]/gems/passenger-3.0.13/doc/Users guide Nginx.html
    If that doesn't help, please use our support facilities at: http://www.modrails.com/ We'll do our best to help you.
    I have spent several hours searching for a resolution. I tried the Google Group for Phusion Passenger but did not find anything. I do not know whether there is a mismatch in version numbers. The documentation says nothing about this error.
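    The missing _pcre_free_study symbol usually means the PCRE library being linked is older than 8.20, where that function first appeared. A hedged workaround, assuming passenger-install-nginx-module 3.0.13 accepts the --extra-configure-flags option and that a newer PCRE source tree has been unpacked locally, is to have the Nginx build compile its own PCRE:
      # Unpack a recent PCRE source tree (version and path are placeholders):
      tar xzf pcre-8.30.tar.gz -C /usr/local/src
      # Point the Nginx build at it so it no longer links against the system PCRE:
      passenger-install-nginx-module --extra-configure-flags="--with-pcre=/usr/local/src/pcre-8.30"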

    Read the article

  • 503 Error After Microsoft Request Routing Is Installed - 32 bit 64 bit madness

    - by KenB
    I have a requirement to install the Microsoft Application Request Routing component for IIS 7.5 running on a Windows 2008 R2 SP1 64-bit machine. After installing Application Request Routing via the Web Platform Installer, our ASP.NET 4.0 application gets "HTTP Error 503. The service is unavailable." The error details in the Windows event log say:
    The Module DLL 'C:\Program Files\IIS\Application Request Routing\requestRouter.dll' could not be loaded due to a configuration problem. The current configuration only supports loading images built for a AMD64 processor architecture. The data field contains the error number. To learn more about this issue, including how to troubleshooting this kind of processor architecture mismatch error, see http://go.microsoft.com/fwlink/?LinkId=29349.
    I can make this error go away by setting "Enable 32-Bit Applications" to true so the application pool runs in 32-bit mode, but I would prefer not to do that. My questions are: Why is the Application Request Routing feature trying to load a 32-bit version; isn't there a 64-bit version of it? And how do I resolve this issue without having to switch my application pool to 32-bit mode?
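    For reference, the workaround described above can also be applied from the command line; the pool name is a placeholder for whichever application pool the site actually uses. The cleaner fix, if the installed ARR version ships an x64 build of requestRouter.dll, would be to install that build so the module's bitness matches the pool's.
      %windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /enable32BitAppOnWin64:true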

    Read the article

  • SQL Server uninstallation issue

    - by angel
    I'm unable to remove SQL Server 2008 sp1 completely from my system. I'm using windows 7 ultimate. Everytime I try uninstalling it i get the following error. How can I remove it? here is the log: Overall summary: Final result: Failed: see details below Exit code (Decimal): -2068643839 Exit facility code: 1203 Exit error code: 1 Exit message: Failed: see details below Start time: 2013-06-24 21:10:38 End time: 2013-06-24 21:21:17 Requested action: Uninstall Log with failure: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20130624_210908\sql_rs_Cpu64_1.log Exception help link: http://go.microsoft.com/fwlink?LinkId=20476&ProdName=Microsoft+SQL+Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=10.0.1600.22 Machine Properties: Machine name: ABHI-PC Machine processor count: 4 OS version: Windows Vista OS service pack: Service Pack 1 OS region: United States OS language: English (United States) OS architecture: x64 Process architecture: 64 Bit OS clustered: No Product features discovered: Product Instance Instance ID Feature Language Edition Version Clustered Sql Server 2008 MSSQLSERVER MSRS10.MSSQLSERVER Reporting Services 1033 Enterprise Edition 10.0.1600.22 No Sql Server 2008 Management Tools - Basic 10.0.1600.22 No Package properties: Description: SQL Server Database Services 2008 SQLProductFamilyCode: {628F8F38-600E-493D-9946-F4178F20A8A9} ProductName: SQL2008 Type: RTM Version: 10 SPLevel: 0 Installation edition: ENTERPRISE User Input Settings: ACTION: Uninstall CONFIGURATIONFILE: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20130624_210908\ConfigurationFile.ini FEATURES: RS,SSMS,SNAC_SDK,CE_RUNTIME,CE_TOOLS,SNAC HELP: False INDICATEPROGRESS: False INSTANCEID: INSTANCENAME: MSSQLSERVER MEDIASOURCE: QUIET: False QUIETSIMPLE: False X86: False Configuration file: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20130624_210908\ConfigurationFile.ini Detailed results: Feature: SQL Client Connectivity Status: Skipped MSI status: Passed Configuration status: Passed Feature: SQL Client Connectivity SDK Status: Skipped MSI status: Passed Configuration status: Passed Feature: Reporting Services Status: Failed: see logs for details MSI status: Passed Configuration status: Failed: see details below Configuration error code: 0xFFD65603 Configuration error description: Input string was not in a correct format. Configuration log: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20130624_210908\Detail.txt Feature: SQL Compact Edition Tools Status: Passed MSI status: Passed Configuration status: Passed Feature: SQL Compact Edition Runtime Status: Skipped MSI status: Passed Configuration status: Passed Feature: Management Tools - Basic Status: Failed: see logs for details MSI status: Passed Configuration status: Passed Rules with failures: Global rules: There are no scenario-specific rules. Rules report file: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20130624_210908\SystemConfigurationCheck_Report.htm
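    If the Control Panel uninstall keeps failing, the same removal can be attempted from the SQL Server 2008 setup command line, run from the installation media or the Setup Bootstrap folder; the feature list below mirrors the features named in the log, and the switches follow the documented SQL Server 2008 setup syntax:
      setup.exe /ACTION=Uninstall /FEATURES=RS,SSMS /INSTANCENAME=MSSQLSERVER /Q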

    Read the article

  • Understanding where an Amazon EC2 instance runs

    - by kenzo450D
    I am currently using the AWS API tools from my local desktop. I can successfully take backups of my Amazon volumes and even create an AMI from them. Now, when I want to run an instance built from this AMI, where does the instance run? In Amazon's Elastic Compute Cloud, or on the computer from which the command was issued? Suppose I want to create the new instance in a different region (locations as defined by ec2-describe-regions); how would I do that? It seems I have a poor understanding of the relationship between Amazon volumes and instances; please explain it. I am only allowed to use the CLI tools for all of my work. I made a new snapshot of the existing instance, made an AMI using ec2-register, made a keypair, and then followed these steps: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/launching-an-instance.html#launching-an-instance-cli but I got this error: Client.InvalidParameterValue: The requested instance type's architecture (i386) does not match the architecture in the manifest for aki-fc37bacc (x86_64). My local computer is 32-bit, but I do not want to run the instance on my local computer; I want it to run on Amazon's servers.
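    Instances always run in Amazon's cloud (EC2), no matter where the CLI command is issued; the error above only means the chosen instance type's architecture does not match the AMI's. A hedged sketch using the classic EC2 API tools referenced in the question, with the AMI ID, key pair and region as placeholders:
      # Launch the 64-bit AMI on a 64-bit instance type in an explicitly chosen region:
      ec2-run-instances ami-xxxxxxxx --instance-type m1.large --region eu-west-1 --key my-keypair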

    Read the article

  • Win a place at a SQL Server Masterclass with Kimberly Tripp and Paul Randal

    - by Testas
    The top things YOU need to know about managing SQL Server - in one place, on one day - presented by two of the best SQL Server industry trainers!And you could be there courtesy of UK SQL Server User Group and SQL Server Magazine! This week the UK SQL Server User Group will provide you with details of how to win a place at this must see seminar   You can also register for the seminar yourself at:www.regonline.co.uk/kimtrippsql More information about the seminar   Where: Radisson Edwardian Heathrow Hotel, London When: Thursday 17th June 2010 This one-day MasterClass will focus on many of the top issues companies face when implementing and maintaining a SQL Server-based solution. In the case where a company has no dedicated DBA, IT managers sometimes struggle to keep the data tier performing well and the data available. This can be especially troublesome when the development team is unfamiliar with the affect application design choices have on database performance. The Microsoft SQL Server MasterClass 2010 is presented by Paul S. Randal and Kimberly L. Tripp, two of the most experienced and respected people in the SQL Server world. Together they have over 30 years combined experience working with SQL Server in the field, and on the SQL Server product team itself. This is a unique opportunity to hear them present at a UK event which will:·         Debunk many of the ingrained misconceptions around SQL Server's behaviour   ·         Show you disaster recovery techniques critical to preserving your company's life-blood - the data   ·         Explain how a common application design pattern can wreak havoc in the database ·         Walk through the top-10 points to follow around operations and maintenance for a well-performing and available data tier! Please Note: Agenda may be subject to changeSessions AbstractsKEYNOTE: Bridging the Gap Between Development and Production  Applications are commonly developed with little regard for how design choices will affect performance in production. This is often because developers don't realize the implications of their design on how SQL Server will be able to handle a high workload (e.g. blocking, fragmentation) and/or because there's no full-time trained DBA that can recognize production problems and help educate developers. The keynote sets the stage for the rest of the day. Discussing some of the issues that can arise, explaining how some can be avoided and highlighting some of the features in SQL 2008 that can help developers and DBAs make better use of SQL Server, and troubleshoot when things go wrong.  SESSION ONE: SQL Server MythbustersIt's amazing how many myths and misconceptions have sprung up and persisted over the years about SQL Server - after many years helping people out on forums, newsgroups, and customer engagements, Paul and Kimberly have heard it all. Are there really non-logged operations? Can interrupting shrinks or rebuilds cause corruption? Can you override the server's MAXDOP setting? Will the server always do a table-scan to get a row count? Many myths lead to poor design choices and inappropriate maintenance practices so these are just a few of many, many myths that Paul and Kimberly will debunk in this fast-paced session on how SQL Server operates and should be managed and maintained. SESSION TWO: Database Recovery Techniques Demo-Fest Even if a company has a disaster recovery strategy in place, they need to practice to make sure that the plan will work when a disaster does strike. 
In this fast-paced demo session Paul and Kimberly will repeatedly do nasty things to databases and then show how they are recovered - demonstrating many techniques that can be used in production for disaster recovery. Not for the faint-hearted! SESSION THREE: GUIDs: Use, Abuse, and How To Move Forward Since the addition of the GUID (Microsoft’s implementation of the UUID), my life as a consultant and "tuner" has been busy. I’ve seen databases designed with GUID keys run fairly well with small workloads but completely fall over and fail because they just cannot scale. And, I know why GUIDs are chosen - it simplifies the handling of parent/child rows in your batches so you can reduce round-trips or avoid dealing with identity values. And, yes, sometimes it's even for distributed databases and/or security that GUIDs are chosen. I'm not entirely against ever using a GUID but overusing and abusing GUIDs just has to be stopped! Please, please, please let me give you better solutions and explanations on how to deal with your parent/child rows, round-trips and clustering keys! SESSION 4: Essential Database MaintenanceIn this session, Paul and Kimberly will run you through their top-ten database maintenance recommendations, with a lot of tips and tricks along the way. These are distilled from almost 30 years combined experience working with SQL Server customers and are geared towards making your databases more performant, more available, and more easily managed (to save you time!). Everything in this session will be practical and applicable to a wide variety of databases. Topics covered include: backups, shrinks, fragmentation, statistics, and much more! Focus will be on 2005 but we'll explain some of the key differences for 2000 and 2008 as well.    Speaker Biographies     Paul S.Randal  Kimberley L. Tripp Paul and Kimberly are a husband-and-wife team who own and run SQLskills.com, a world-renowned SQL Server consulting and training company. They are both SQL Server MVPs and Microsoft Regional Directors, with over 30 years of combined experience on SQL Server. Paul worked on the SQL Server team for nine years in development and management roles, writing many of the DBCC commands, and ultimately with responsibility for core Storage Engine for SQL Server 2008. Paul writes extensively on his blog (SQLskills.com/blogs/Paul) and for TechNet Magazine, for which he is also a Contributing Editor. Kimberly worked on the SQL Server team in the early 1990s as a tester and writer before leaving to found SQLskills and embrace her passion for teaching and consulting. Kimberly has been a staple at worldwide conferences since she first presented at TechEd in 1996, and she blogs at SQLskills.com/blogs/Kimberly. They have written Microsoft whitepapers and books for SQL Server 2000, 2005 and 2008, and are regular, top-rated presenters worldwide on database maintenance, high availability, disaster recovery, performance tuning, and SQL Server internals. Together they teach the SQL MCM certification and throughout Microsoft.In their spare time, they like to find frogfish in remote corners of the world.  

    Read the article

  • SQL Server MasterClass winner

    - by Testas
     The winner of the SQL Server MasterClass competition courtesy of the UK SQL Server User Group and SQL Server Magazine!    Steve Hindmarsh     There is still time to register for the seminar yourself at:  www.regonline.co.uk/kimtrippsql     More information about the seminar     Where: Radisson Edwardian Heathrow Hotel, London  When: Thursday 17th June 2010  This one-day MasterClass will focus on many of the top issues companies face when implementing and maintaining a SQL Server-based solution. In the case where a company has no dedicated DBA, IT managers sometimes struggle to keep the data tier performing well and the data available. This can be especially troublesome when the development team is unfamiliar with the affect application design choices have on database performance. The Microsoft SQL Server MasterClass 2010 is presented by Paul S. Randal and Kimberly L. Tripp, two of the most experienced and respected people in the SQL Server world. Together they have over 30 years combined experience working with SQL Server in the field, and on the SQL Server product team itself. This is a unique opportunity to hear them present at a UK event which will: Debunk many of the ingrained misconceptions around SQL Server's behaviour    Show you disaster recovery techniques critical to preserving your company's life-blood - the data    Explain how a common application design pattern can wreak havoc in the database Walk through the top-10 points to follow around operations and maintenance for a well-performing and available data tier! Please Note: Agenda may be subject to change  Sessions Abstracts  KEYNOTE: Bridging the Gap Between Development and Production    Applications are commonly developed with little regard for how design choices will affect performance in production. This is often because developers don't realize the implications of their design on how SQL Server will be able to handle a high workload (e.g. blocking, fragmentation) and/or because there's no full-time trained DBA that can recognize production problems and help educate developers. The keynote sets the stage for the rest of the day. Discussing some of the issues that can arise, explaining how some can be avoided and highlighting some of the features in SQL 2008 that can help developers and DBAs make better use of SQL Server, and troubleshoot when things go wrong.   SESSION ONE: SQL Server Mythbusters  It's amazing how many myths and misconceptions have sprung up and persisted over the years about SQL Server - after many years helping people out on forums, newsgroups, and customer engagements, Paul and Kimberly have heard it all. Are there really non-logged operations? Can interrupting shrinks or rebuilds cause corruption? Can you override the server's MAXDOP setting? Will the server always do a table-scan to get a row count? Many myths lead to poor design choices and inappropriate maintenance practices so these are just a few of many, many myths that Paul and Kimberly will debunk in this fast-paced session on how SQL Server operates and should be managed and maintained.   SESSION TWO: Database Recovery Techniques Demo-Fest  Even if a company has a disaster recovery strategy in place, they need to practice to make sure that the plan will work when a disaster does strike. In this fast-paced demo session Paul and Kimberly will repeatedly do nasty things to databases and then show how they are recovered - demonstrating many techniques that can be used in production for disaster recovery. Not for the faint-hearted!   
SESSION THREE: GUIDs: Use, Abuse, and How To Move Forward   Since the addition of the GUID (Microsoft’s implementation of the UUID), my life as a consultant and "tuner" has been busy. I’ve seen databases designed with GUID keys run fairly well with small workloads but completely fall over and fail because they just cannot scale. And, I know why GUIDs are chosen - it simplifies the handling of parent/child rows in your batches so you can reduce round-trips or avoid dealing with identity values. And, yes, sometimes it's even for distributed databases and/or security that GUIDs are chosen. I'm not entirely against ever using a GUID but overusing and abusing GUIDs just has to be stopped! Please, please, please let me give you better solutions and explanations on how to deal with your parent/child rows, round-trips and clustering keys!   SESSION 4: Essential Database Maintenance  In this session, Paul and Kimberly will run you through their top-ten database maintenance recommendations, with a lot of tips and tricks along the way. These are distilled from almost 30 years combined experience working with SQL Server customers and are geared towards making your databases more performant, more available, and more easily managed (to save you time!). Everything in this session will be practical and applicable to a wide variety of databases. Topics covered include: backups, shrinks, fragmentation, statistics, and much more! Focus will be on 2005 but we'll explain some of the key differences for 2000 and 2008 as well. Speaker Biographies     Kimberley L. Tripp Paul and Kimberly are a husband-and-wife team who own and run SQLskills.com, a world-renowned SQL Server consulting and training company. They are both SQL Server MVPs and Microsoft Regional Directors, with over 30 years of combined experience on SQL Server. Paul worked on the SQL Server team for nine years in development and management roles, writing many of the DBCC commands, and ultimately with responsibility for core Storage Engine for SQL Server 2008. Paul writes extensively on his blog (SQLskills.com/blogs/Paul) and for TechNet Magazine, for which he is also a Contributing Editor. Kimberly worked on the SQL Server team in the early 1990s as a tester and writer before leaving to found SQLskills and embrace her passion for teaching and consulting. Kimberly has been a staple at worldwide conferences since she first presented at TechEd in 1996, and she blogs at SQLskills.com/blogs/Kimberly. They have written Microsoft whitepapers and books for SQL Server 2000, 2005 and 2008, and are regular, top-rated presenters worldwide on database maintenance, high availability, disaster recovery, performance tuning, and SQL Server internals. Together they teach the SQL MCM certification and throughout Microsoft.In their spare time, they like to find frogfish in remote corners of the world.   Speaker Testimonials  "To call them good trainers is an epic understatement. They know how to deliver technical material in ways that illustrate it well. I had to stop Paul at one point and ask him how long it took to build a particular slide because the animations were so good at conveying a hard-to-describe process." "These are not beginner presenters, and they put an extreme amount of preparation and attention to detail into everything that they do. Completely, utterly professional." "When it comes to the instructors themselves, Kimberly and Paul simply have no equal. 
Not only are they both ultimate authorities, but they have endless enthusiasm about the material, and spot on delivery. If either ever got tired they never showed it, even after going all day and all week. We witnessed countless demos over the course of the week, some extremely involved, multi-step processes, and I can’t recall one that didn’t go the way it was supposed to." "You might think that with this extreme level of skill comes extreme levels of egotism and lack of patience. Nothing could be further from the truth. ... They simply know how to teach, and are approachable, humble, and patient." "The experience Paul and Kimberly have had with real live customers yields a lot more information and things to watch out for than you'd ever get from documentation alone." “Kimberly, I just wanted to send you an email to let you know how awesome you are! I have applied some of your indexing strategies to our website’s homegrown CMS and we are experiencing a significant performance increase. WOW....amazing tips delivered in an exciting way!  Thanks again” 

    Read the article

  • Recap: Oracle Fusion Middleware Strategies Driving Business Innovation

    - by Harish Gaur
    Hasan Rizvi, Executive Vice President of Oracle Fusion Middleware & Java took the stage on Tuesday to discuss how Oracle Fusion Middleware helps enable business innovation. Through a series of product demos and customer showcases, Hassan demonstrated how Oracle Fusion Middleware is a complete platform to harness the latest technological innovations (cloud, mobile, social and Fast Data) throughout the application lifecycle. Fig 1: Oracle Fusion Middleware is the foundation of business innovation This Session included 4 demonstrations to illustrate these strategies: 1. Build and deploy native mobile applications using Oracle ADF Mobile 2. Empower business user to model processes, design user interface and have rich mobile experience for process interaction using Oracle BPM Suite PS6. 3. Create collaborative user experience and integrate social sign-on using Oracle WebCenter Portal, Oracle WebCenter Content, Oracle Social Network & Oracle Identity Management 11g R2 4. Deploy and manage business applications on Oracle Exalogic Nike, LA Department of Water & Power and Nintendo joined Hasan on stage to share how their organizations are leveraging Oracle Fusion Middleware to enable business innovation. Managing Performance in the Wrld of Social and Mobile How do you provide predictable scalability and performance for an application that monitors active lifestyle of 8 million users on a daily basis? Nike’s answer is Oracle Coherence, a component of Oracle Fusion Middleware and Oracle Exadata. Fig 2: Oracle Coherence enabled data grid improves performance of Nike+ Digital Sports Platform Nicole Otto, Sr. Director of Consumer Digital Technology discussed the vision of the Nike+ platform, a platform which represents a shift for NIKE from a  "product"  to  a "product +" experience.  There are currently nearly 8 million users in the Nike+ system who are using digitally-enabled Nike+ devices.  Once data from the Nike+ device is transmitted to Nike+ application, users access the Nike+ website or via the Nike mobile applicatoin, seeing metrics around their daily active lifestyle and even engage in socially compelling experiences to compare, compete or collaborate their data with their friends. Nike expects the number of users to grow significantly this year which will drive an explosion of data and potential new experiences. To deal with this challenge, Nike envisioned building a shared platform that would drive a consumer-centric model for the company. Nike built this new platform using Oracle Coherence and Oracle Exadata. Using Coherence, Nike built a data grid tier as a distributed cache, thereby provide low-latency access to most recent and relevant data to consumers. Nicole discussed how Nike+ Digital Sports Platform is unique in the way that it utilizes the Coherence Grid.  Nike takes advantage of Coherence as a traditional cache using both cache-aside and cache-through patterns.  This new tier has enabled Nike to create a horizontally scalable distributed event-driven processing architecture. Current data grid volume is approximately 150,000 request per minute with about 40 million objects at any given time on the grid. Improving Customer Experience Across Multiple Channels Customer experience is on top of every CIO's mind. Customer Experience needs to be consistent and secure across multiple devices consumers may use.  This is the challenge Matt Lampe, CIO of Los Angeles Department of Water & Power (LADWP) was faced with. 
    Improving Customer Experience Across Multiple Channels
    Customer experience is at the top of every CIO's mind, and it needs to be consistent and secure across the multiple devices consumers may use. This is the challenge Matt Lampe, CIO of Los Angeles Department of Water & Power (LADWP), was faced with. Despite being the largest utilities company in the country, LADWP had been relying on a 38-year-old customer information system to serve its customers, and that system had been unable to keep up with growing customer demands. Last year, LADWP embarked on a journey to improve the customer experience for 1.6 million LADWP customers using the Oracle WebCenter platform.
    Figure 3: Multi-channel & multi-lingual LADWP.com built using Oracle WebCenter & Oracle Identity Management platform
    Matt shed light on his efforts to drive customer self-service across 3 dimensions – a new website, a new IVR platform, and a new bill payment service. LADWP has built a new portal to increase customer self-service while reducing transactions via IVR. LADWP's website is powered by Oracle WebCenter Portal and is accessible from desktop and mobile devices. By leveraging Oracle WebCenter, LADWP eliminated the need to build, format, and maintain individual mobile applications or websites for different devices. Their entire content is managed using Oracle WebCenter Content and secured using Oracle Identity Management. The new portal turned paper-based processes into web-based workflows for customers, including the automation of self-service implemented through My Account – such as Bill Pay, Payment History, Bill History, and Usage Analysis. LADWP's solution went live in April 2012. Matt indicated that LADWP's Self-Service Portal has greatly improved customer satisfaction: in a JD Power Associates website satisfaction survey, rankings have climbed by 25+ points, marking a remarkable increase in user experience.
    Bolstering Performance and Simplifying Manageability of Business Applications
    Ingvar Petursson, Senior Vice President of IT at Nintendo of America, joined Hasan on stage to discuss their choice of Exalogic. Nintendo had significant new requirements coming their way for business systems, both internal and external, in the years to come, especially with new products like the WiiU on the horizon this holiday season. Nintendo needed a platform that could give them performance, availability, and ease of management as they deploy business systems. Ingvar selected Engineered Systems for two reasons: 1. high performance, and 2. ease of management.
    Figure 4: Nintendo relies on Oracle Exalogic to run ATG eCommerce, Oracle E-Business Suite and several business applications
    Nintendo made a decision to run their business applications (ATG eCommerce, E-Business Suite) and several Fusion Middleware components on the Exalogic platform. What impressed Ingvar were the "stress" testing results during evaluation: Oracle Exalogic could handle their 3-year load estimates for many functions, which was better than Nintendo expected, without any hardware expansion.
    Faster Processing of Big Data
    Middleware plays an increasingly important role in Big Data. Last year at OpenWorld, we announced the introduction of Oracle Data Integrator for Hadoop and Oracle Loader for Hadoop, which help move, transform, and load data between the Big Data Appliance and Exadata. This year, we've added new capabilities to find, filter, and focus data using Oracle Event Processing. This product can natively integrate with the Big Data Appliance or run standalone. Hasan briefly discussed how NTT Docomo, the largest mobile operator in Japan, leverages Oracle Event Processing and Oracle Coherence to process mobile data (from 13 million smartphone users) at a speed of 700K events per second before feeding it to Hadoop for distributed processing of big data.
Figure 5: Mobile traffic data processing at NTT Docomo with Oracle Event Processing & Oracle Coherence    

    Read the article

  • Cloud Computing = Elasticity * Availability

    - by Herve Roggero
    What is cloud computing? Is hosting the same thing as cloud computing? Are you running a cloud if you already use virtual machines? What is the difference between Infrastructure as a Service (IaaS) and a cloud provider? And the list goes on… these questions keep coming up, and all try to fundamentally explain what “cloud” means relative to other concepts. At the risk of oversimplification, answering these questions becomes simpler once you understand the two primary foundations of cloud computing: elasticity and availability.
    Elasticity
    The basic value proposition of cloud computing is to pay as you go, and to pay for what you use. This implies that an application can expand and contract on demand, across all its tiers (presentation layer, services, database, security…). It also implies that application components can grow independently from each other. So if you need more storage for your database, you should be able to grow that tier without affecting, reconfiguring, or changing the other tiers. Basically, cloud applications behave like a sponge: when you add water to a sponge, it grows in size; in the application world, the more customers you add, the more it grows. Pure IaaS providers will provide certain benefits, specifically in terms of operating costs, but an IaaS provider will not help you make your applications elastic; neither will virtual machines. The smallest elasticity unit of an IaaS provider or a virtual machine environment is a server (physical or virtual). While adding servers in a datacenter helps in achieving scale, it is hardly enough; the application has yet to use this hardware. If the process of adding computing resources is not transparent to the application, the application is not elastic. As you can see from the above description, designing for the cloud is not about more servers; it is about designing an application for elasticity regardless of the underlying server farm.
    Availability
    The fact of the matter is that making applications highly available is hard. It requires highly specialized tools and trained staff. On top of that, it's expensive. Many companies are required to run multiple data centers due to high availability requirements. In some organizations, some data centers are simply on standby, waiting to be used in case of a failover. Other organizations are able to achieve a certain level of success with active/active data centers, in which all available data centers serve incoming user requests. While achieving high availability for services is relatively simple, establishing a highly available database farm is far more complex. In fact, it is so complex that many companies run yearly tests to validate failover procedures. To a certain degree, certain IaaS providers can assist with complex disaster recovery planning and with setting up data centers that can achieve successful failover. However, the burden is still on the corporation to manage and maintain such an environment, including regular hardware and software upgrades. Cloud computing, on the other hand, removes most of the disaster recovery requirements by hiding many of the underlying complexities.
    Cloud Providers
    A cloud provider is an infrastructure provider offering additional tools to achieve application elasticity and availability that are not usually available on-premise.
    For example, Microsoft Azure provides a simple configuration screen that makes it possible to run 1 or 100 web sites by clicking a button or two on a screen (simplifying provisioning), and soon SQL Azure will offer Data Federation to allow database sharding (which lets you scale the database tier seamlessly and automatically). Other cloud providers offer certain features that are not available on-premise as well, such as Amazon S3 (Simple Storage Service), which gives you virtually unlimited storage capabilities for simple data stores and is somewhat equivalent to the Microsoft Azure Table offering (a server-independent data storage model). Unlike IaaS providers, cloud providers give you the necessary tools to adopt elasticity as part of your application architecture. Some cloud providers offer built-in high availability that gets you out of the business of configuring clustered solutions or running multiple data centers. Some cloud providers will give you more control (which puts some of that burden back on the customer's shoulders), and others will tend to make high availability totally transparent. For example, SQL Azure provides high availability automatically, which would be very difficult (and very costly) to achieve on premise. Keep in mind that each cloud provider has its strengths and weaknesses; some are better at achieving transparent scalability and server independence than others.
    Not for Everyone
    Note, however, that it is up to you to leverage the elasticity capabilities of a cloud provider, as discussed previously; if you build a website that does not need to scale, for which elasticity is not important, then you can use a traditional hosting provider unless you also need high availability. Leveraging the technologies of cloud providers can be difficult and can become a journey for companies that build their solutions in a scale-up fashion. Cloud computing promises to address cost containment and scalability of applications with built-in high availability. If your application does not need to scale or you do not need high availability, then cloud computing may not be for you. In fact, you may pay a premium to run your applications with cloud providers due to the underlying technologies built specifically for scalability and availability requirements. And as such, the cloud is not for everyone.
    Consistent Customer Experience, Predictable Cost
    With all its complexities, buzz, and foggy definition, cloud computing boils down to a simple objective: a consistent customer experience at a predictable cost. The objective of a cloud solution is to provide the same user experience to your last customer as to your first, while keeping your operating costs directly proportional to the number of customers you have. Making your applications elastic and highly available across all their tiers, with as much automation as possible, achieves the first objective of a consistent customer experience. And the ability to expand and contract the infrastructure footprint of your application dynamically achieves the cost containment objective.
    Herve Roggero is a SQL Azure MVP and co-author of Pro SQL Azure (Apress). He is the co-founder of Blue Syntax Consulting (www.bluesyntax.net), a company focusing on cloud computing technologies and helping customers understand and adopt them. For more information contact herve at hroggero @ bluesyntax.net .

    Read the article

  • Configuring Oracle HTTP Server 12c for WebLogic Server Domain

    - by Emin Askerov
    Oracle HTTP Server (OHS) 12c 12.1.2, released in July 2013 as part of Oracle Web Tier 12c, is the web server component of Oracle Fusion Middleware. In essence it is Apache HTTP Server 2.2.22 (with critical bug fixes from higher versions) plus modules developed specifically by Oracle. It provides listener functionality for Oracle WebLogic Server and the framework for hosting static pages, dynamic pages, and applications over the Web. OHS can be easily managed by the WebLogic Management Framework, a set of tools that provides administrative capabilities (start, stop, lifecycle operations, etc.) for Oracle Fusion Middleware products. In other words, all the tools familiar to us (Node Manager, WLST, Administration Console, Fusion Middleware Control, etc.) are presented as part of the WebLogic Management Framework and are used for managing Java and System Components for both WebLogic Server and Standalone domain types. You can familiarize yourself with these terms using the related documentation:
    1. Introduction to Oracle HTTP Server: http://docs.oracle.com/middleware/1212/webtier/index.html
    2. Weblogic Management Framework: http://docs.oracle.com/middleware/1212/core/ASCON/terminology.htm#ASCON11260
    In this post I would like to cover a rather simple use case: how to configure OHS as a web proxy in a WebLogic Cluster environment. For example, we have an existing WebLogic Domain where some managed servers have been joined to a cluster and host business applications. We need to configure a web proxy component which will act as the entry point and load balancer for user requests to our cluster. Of course, we could install the good old Apache HTTP Server and configure the mod_wl plugin. However, this solution is not optimal from a manageability perspective: we would need to install Apache, install the additional plugin, and then configure it by editing a configuration file, which is not really convenient for FMW administrators and often increases the time needed to perform a simple administrative task. Alternatively, we could use OHS as a System Component within the WebLogic Domain and use the full power of the WebLogic Management Framework to configure, manage and monitor it! I like this idea! What about you? I hope after reading this post you will agree with me.
    First of all it is necessary to download the OHS binaries. You can use this link for downloading: http://www.oracle.com/technetwork/java/webtier/downloads/index2-303202.html
    As we will use Fusion Middleware Control for managing OHS instances, it is necessary to extend your domain with the Enterprise Manager and Oracle ADF and JRF templates. This is not a topic to focus on in this post, but you can get more information from the documentation or one of my previous posts:
    http://docs.oracle.com/middleware/1212/wls/WLDTR/fmw_templates.htm#sthref64
    https://blogs.oracle.com/imc/entry/the_specifics_of_adf_12c
    Note: you should have a properly configured Node Manager utility for managing OHS instances.
    Let's consider the configuration process step by step:
    1. Shut down all WebLogic instances of the existing domain, including the Admin Server;
    2. Install Oracle HTTP Server. You should use your Fusion Middleware Home path (e.g. /u01/Oracle/FMW12) as the Installation Location and select the Colocated HTTP Server option as the Installation Type. I will not focus on this topic in this post; all information related to OHS installation can be found here: http://docs.oracle.com/middleware/1212/webtier/WTINS/install_gui.htm#i1082009
    3. Next we need to extend our existing domain with the OHS component.
    In order to do this you should do the following:
    a. Run the Fusion Middleware Configuration Wizard (ORACLE_HOME/oracle_common/common/bin/config.sh);
    b. On step 1, select the Update an existing domain option and point to your Fusion Middleware Home path;
    c. On step 2, check the Oracle HTTP Server and Oracle Enterprise Manager Plugin for WEBTIER templates;
    d. Go through the other steps without any changes and finish the configuration process.
    4. Start the Admin Server and all managed servers related to your cluster.
    5. Log in to Enterprise Manager FMW Control using the http://<hostname>:<port>/em URL.
    6. Now we will create an OHS instance within our WebLogic Domain infrastructure. Navigate to the Weblogic Domain -> Administration -> Create/Delete OHS menu item;
    7. Enter edit mode by clicking the Changes -> Lock&Edit menu item;
    8. Create a new OHS instance by clicking the Create button;
    9. Define the Instance Name (e.g. DevOSH) and Machine parameters;
    10. Now we need to define the listen port. By default OHS will use port 7777 for incoming HTTP requests. We can change it to any free port number we would like to use. In order to do so, right click on our created OHS instance (left hand panel) and navigate to Administration -> Port Configuration;
    11. Click on the record with port number 7777 and then click the Edit button;
    12. Change the port number value (in our case this will be 8080) and then click the OK button;
    13. Now we need to edit the mod_wl_ohs configuration in order to enable OHS to act as a proxy for WebLogic Server instances/cluster;
    14. In order to do so, right click on our created OHS instance (left panel) and navigate to Administration -> mod_wl_ohs Configuration (a sketch of the kind of configuration this produces follows after this list);
    a. In Weblogic Cluster you should enter the cluster address (define <host:port> for all managed servers which participate in the cluster), e.g.: 192.168.56.2:7004,192.168.56.2:7005
    b. Define the Weblogic Port parameter, at which the Oracle WebLogic Server host is listening for connection requests from the module (or from other servers);
    c. Check the Dynamic Server List option. This will dynamically update the cluster list for every request;
    d. In the Location table define the list of endpoint locations which you would like to process. In order to do this click the Add Row button and define the Location, Weblogic Cluster, Path Trim and Path Prefix parameters (if required);
    e. Click the Apply button in order to save the changes.
    15. Activate the changes by clicking the Changes -> Activate Changes menu item;
    16. Finally we will start the configured OHS instance. Right click on the OHS instance tree item under the Web Tier folder and select the Control -> Start Up menu item;
    17. Ensure that the OHS instance is up and running and then test your environment. Run an application deployed to your WebLogic Cluster, accessing it via the OHS web proxy.
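    For reference, the values entered in step 14 are written by Fusion Middleware Control into the instance's mod_wl_ohs.conf file. The snippet below is only an illustrative sketch of what such a generated configuration might look like for the example cluster above; the /myapp location is an assumption, not a value from a real installation.

        <IfModule weblogic_module>
            # Cluster members serving the proxied requests (Dynamic Server List keeps this current)
            WebLogicCluster 192.168.56.2:7004,192.168.56.2:7005
            DynamicServerList ON

            # Endpoint location defined in the Location table
            <Location /myapp>
                SetHandler weblogic-handler
                WebLogicCluster 192.168.56.2:7004,192.168.56.2:7005
                PathTrim /myapp
            </Location>
        </IfModule>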

    Read the article

  • Orchestrating the Virtual Enterprise

    - by John Murphy
    During the American Industrial Revolution, the Ford Motor Company did it all. It turned raw materials into a showroom full of Model Ts. It owned a steel mill, a glass factory, and an automobile assembly line. The company was both self-sufficient and innovative and went on to become one of the largest and most profitable companies in the world. Nowadays, it's unusual for any business to follow this vertical integration model because it's much harder to be best in class across such a wide range of capabilities and services. Instead, businesses focus on their core competencies and outsource other business functions to specialized suppliers. They exchange vertical integration for collaboration. When done well, all parties benefit from this arrangement and the collaboration leads to the creation of an agile, lean and successful "virtual enterprise."
    Case in point: For Sun hardware, Oracle outsources most of its manufacturing and all of its logistics to third parties. These are vital activities, but ones where Oracle doesn't have a core competency, so we shift them to business partners who do. Within our enterprise, we always retain the core functions of product development, support, and most of the sales function, because that's what constitutes our core value to our customers. This is a perfect example of a virtual enterprise. What are the implications of this? It means that we must exchange direct internal control for indirect external collaboration. This fundamentally changes the relative importance of different business processes, the boundaries of security and information sharing, and the relationship of the supply chain systems to the ERP. The challenge is that the systems required to support this virtual paradigm are still mired in "island enterprise" thinking. But help is at hand. Developments such as the Web, social networks, collaboration, and rules-based orchestration offer great potential to fundamentally re-architect supply chain systems to better support the virtual enterprise.
    Supply Chain Management Systems in a Virtual Enterprise
    Historically, enterprise software was constructed to automate the ERP - and then the supply chain systems extended the ERP. They were joined at the hip. In virtual enterprises, the supply chain system needs to be ERP agnostic, sitting above each of the ERPs that are distributed across the virtual enterprise - most of which are operating in other businesses. This is vital so that the supply chain system can manage the flow of material and the related information through the multiple enterprises. It has to have strong collaboration tools. It needs to be highly flexible. Users need to be able to see information that's coming from multiple sources and be able to react and respond to events across those sources.
    Oracle Fusion Distributed Order Orchestration (DOO) is a perfect example of a supply chain system designed to operate in this virtual way. DOO embraces the idea that a company's fulfillment challenge is a distributed, multi-enterprise problem. It enables users to manage the process and the trading partners in a uniform way and deliver a consistent user experience while operating over a heterogeneous, virtual enterprise. This is a fundamental shift at the core of managing supply chains. It forces virtual enterprises to think architecturally about how best to construct their supply chain systems.
    Case in point, almost everyone has ordered from Amazon.com at one time or another.
    Our orders are as likely to be fulfilled by third parties as they are by Amazon itself. To deliver the order promptly and efficiently, Amazon has to send it to the right fulfillment location and know the availability in that location. It needs to be able to track the status of the fulfillment and deal with exceptions. As a virtual enterprise, Amazon's operations, using thousands of trading partners, require a very different approach to fulfillment than the traditional 'take an order and ship it from your own warehouse' model. Amazon had no choice but to develop a complex, expensive and custom solution to tackle this problem, as there used to be no product solution available. Now, other companies who want to follow similar models have a better off-the-shelf choice -- Oracle Distributed Order Orchestration (DOO).
    Consider how another of our customers is using our distributed orchestration solution. This major airplane manufacturer has a highly complex business and interacts regularly with the U.S. Government and major airlines. It sits in the middle of an intricate supply chain and needed to improve visibility across its many different entities. Oracle Fusion DOO gives the company an orchestration mechanism so it could improve quality, speed, flexibility, and consistency without requiring an organ transplant of these highly complex legacy systems. Many retailers face the challenge of dealing with brick and mortar, Web, and reseller channels. They all need to be knitted together into a virtual enterprise experience that is consistent for their customers. When a large U.K. grocer with a strong brick and mortar retail operation added an online business, they turned to Oracle Fusion DOO to bring these entities together.
    Disturbing the Peace with Acquisitions
    Quite often a company's ERP system is disrupted when it acquires a new company. An acquisition can inject a new set of processes and systems -- or even introduce an entirely new business, like Sun's hardware did at Oracle. This challenge has been a driver for some of our DOO customers. A large power management company is using Oracle Fusion DOO to provide the flexibility to rapidly integrate additional products and services into its central fulfillment operation.
    The Flip Side of Fulfillment
    Meanwhile, we haven't ignored similar challenges on the supply side of the equation. Specifically, how to manage complex supply in a flexible way when there are multiple trading parties involved? How to manage the supply to suppliers? How to manage critical components that need to merge in a tier two or tier three supply chain? By investing in supply orchestration solutions for the virtual enterprise, we plan to give users better visibility into their network of suppliers to help them drive down costs. We also think this technology and the full orchestration process can be applied to the financial side of organizations. An example is transactions that flow through complex internal structures to minimize tax exposure. We can help companies manage those transactions effectively by thinking about the internal organization as a virtual enterprise and bringing the same solution set to this internal challenge.
    The Clear Front Runner
    No other company is investing in solving the virtual enterprise supply chain issues like Oracle is. Oracle is in a unique position to become the gold standard in this market space. We have the infrastructure of Oracle technology. We already have an Oracle Fusion DOO application which embraces the best of what's required in this area.
And we're absolutely committed to extending our Fusion solution to other use cases and delivering even more business value.

    Read the article

  • D2K to OA Framework Transition

    - by PRajkumar
    What is the difference between a D2K form and OA Framework? It is a very innocent but important question for someone who wants to make the transition from D2K to OA Framework. I hope you have already read and implemented OA Framework Getting Started. I will revisit my own experience of implementing the HelloWorld program in OA Framework. When I implemented HelloWorld a year ago, I had no clue as to what I was doing and why I was doing those steps. I merely copied the steps from the Oracle tutorial without understanding them. Hence in this blog, I will try to explain in a simple manner the meaning of the OA Framework HelloWorld program and compare the steps to a D2K form [where possible]. To keep things simple, only the basics will be discussed. The following key steps were needed for HelloWorld.
    Step 1
    Create a new Workspace and a new Project as dictated by Oracle's tutorial. When defining the project, you will specify a default package, which in this case was oracle.apps.ak.hello. This means the following:
    - ak is the short name of the Application in Oracle [means fnd_applications.short_name]
    - hello is the name of your project
    Step 2
    Next, you will create an OA Page within the hello project. Think of the OA Page as the fmx file itself in D2K. I am saying so because this page gets attached to the form function. This page will be created within the hello project, hence the package name oracle.apps.ak.hello.webui. Note the webui; it is a convention to have the page in webui, meaning this page represents the Web User Interface. You will assign the default AM [OAApplicationModule]. Think of the AM as the "Connection Manager" and "Transaction State Manager" for your page. I can't co-relate this to anything in D2K, as there is no concept of connection pooling and D2K is not stateless: as soon as you kick off a D2K form, it connects to a single Oracle session and sticks to that single Oracle database session. That is not the case in OAF, hence the AM is needed.
    Step 3
    You create a Region within the Page. The Region is what will store your fields. Text input fields will be of type messageTextInput. Think of a Canvas in D2K. You can have nested regions; a Stacked Canvas in D2K comes the closest to this component of OA Framework.
    Step 4
    Add a button to one of the nested regions. The itemStyle should be submitButton, in case you want the page to be submitted when this button is clicked. There is no WHEN-BUTTON-PRESSED trigger in OAF. In Framework, you will add controller Java code to handle events like form submit button clicks. JDeveloper generates the default code for you. Primarily two functions [should I call them methods?] will be created: processRequest [for UI rendering handling] and processFormRequest. Think of processRequest as WHEN-NEW-FORM-INSTANCE, though processRequest is very restrictive.
    Note: What is the difference between processRequest and processFormRequest? These two methods are available in the default Controller class that gets created.
    processFormRequest: This method is commonly used to react/respond to an event that has taken place, for example the click of a button. Some examples are:
    if(oapagecontext.getParameter("Cancel") != null)  (Do your processing for Cancellation/Rollback)
    if(oapagecontext.getParameter("Submit") != null)  (Do your validations and commit here)
    if(oapagecontext.getParameter("Update") != null)  (Do your validations and commit here)
    In the above three examples, you could be calling oapagecontext.forwardImmediately to re-direct the page navigation to some other page if needed.
    processRequest: In this method, page rendering related code is usually written. Effectively, each GUI component is a bean that gets initialised during processRequest. For those who are familiar with D2K forms, something like pre-query may be written in this method.
    Step 5
    In the controller, to access the value of the field "HelloName" the command is
    String userContent = pageContext.getParameter("HelloName");
    In D2K, we used :block.field. In OA Framework, on submission of the page, all the field values get passed into the OAPageContext object; use getParameter to access a field value. To set the value of a field, use OAMessageTextInputBean:
    OAMessageTextInputBean fieldHelloName = (OAMessageTextInputBean)webBean.findChildRecursive("HelloName");
    fieldHelloName.setText(pageContext, "Setting the default value");
    Note when setting a field value in the controller:
    Note 1. Do not set the value in processFormRequest.
    Note 2. If the field comes from a View Object, then do not use setText in the controller.
    Note 3. For control fields [that are not based on View Objects], you can use setText to assign values in the processRequest method.
    (A minimal controller sketch combining Steps 4 and 5 is shown below.)
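    To tie Steps 4 and 5 together, here is a minimal sketch of what such a HelloWorld controller could look like. It is only an illustration assembled from the calls discussed above; the class name, import paths and the "Submit" button ID are assumptions rather than code taken from Oracle's tutorial.

        package oracle.apps.ak.hello.webui;

        import oracle.apps.fnd.framework.webui.OAControllerImpl;
        import oracle.apps.fnd.framework.webui.OAPageContext;
        import oracle.apps.fnd.framework.webui.beans.OAWebBean;
        import oracle.apps.fnd.framework.webui.beans.message.OAMessageTextInputBean;

        public class HelloWorldMainCO extends OAControllerImpl
        {
          // Runs when the page is rendered -- roughly the WHEN-NEW-FORM-INSTANCE of OAF
          public void processRequest(OAPageContext pageContext, OAWebBean webBean)
          {
            super.processRequest(pageContext, webBean);
            // HelloName is a control field (not backed by a View Object), so setText is allowed here
            OAMessageTextInputBean fieldHelloName =
                (OAMessageTextInputBean)webBean.findChildRecursive("HelloName");
            if (fieldHelloName != null)
              fieldHelloName.setText(pageContext, "Setting the default value");
          }

          // Runs when the page is submitted -- the closest analogue to WHEN-BUTTON-PRESSED
          public void processFormRequest(OAPageContext pageContext, OAWebBean webBean)
          {
            super.processFormRequest(pageContext, webBean);
            if (pageContext.getParameter("Submit") != null)  // "Submit" is an assumed button ID
            {
              String userContent = pageContext.getParameter("HelloName");
              // validate, commit or forward to another page here
            }
          }
        }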
    Let's take some notes to expand beyond the HelloWorld project.
    Note 1: In D2K forms we sort of created a Window, attached it to a Canvas, and then placed fields within that Canvas. In OA Framework, think of the Page as the fmx/Window, the Region as a Canvas, and the fields as living within Regions. This is not a formal/accurate analogy between D2K and Framework, but it is close to being logical.
    Note 2: In D2K, your Forms fmb file was compiled to fmx, and it was the fmx file that was deployed on the mid-tier. In the case of OAF, your OA Page is nothing but an XML file. We call this MDS [metadata]. Whatever name you give to a "Page" in OAF, an XML file of the same name gets created. This XML file must then be loaded into the database by using the XML Importer command.
    Note 3: Apart from the MDS XML file, almost everything else is merely deployed to your mid-tier. Usually this is underneath $JAVA_TOP/oracle/apps/../.. All Java files will go underneath java top/oracle/apps/../.. etc.
    Note 4: When building the tutorial, ignore the steps for setting "Attribute Sets". These are not mandatory. Oracle might just have developed their tutorials without including these. Think of these like Visual Attributes of D2K forms.
    Note 5: The Controller is where you will write any Java code in OA Framework. You can create a Controller per Page or have a different Controller for each of the Regions within the same Page.
    Note 6: In the processFormRequest method of the Controller, you can access the values of the page by using the notation pageContext.getParameter("<fieldname here>"). The processFormRequest method is executed when the OAF screen/page is submitted by the click of a button.
    Note 7: Inside the controller, all the database-related interactions, for example interaction with View Objects, happen via the Application Module. But why so? Because the Application Module manages the transaction state of the application.
    OAApplicationModuleImpl oaapplicationmoduleimpl = (OAApplicationModuleImpl)oapagecontext.getApplicationModule(oawebbean);
    OADBTransaction oadbtransaction = (OADBTransaction)oaapplicationmoduleimpl.getDBTransaction();
    Note 8: In D2K, we have a control block or a block based on a database view. Similarly, in OA Framework, if the field does not have a View Object attached, then it is like a control field. Hence in the HelloWorld example, the field HelloName is a control field [in D2K terminology]. A View Object can be based on a view/table, a synonym, or a SQL statement.
    Note 9: I wish to access the fields in a multi-record block that is based on a View Object. Can I do this in the Controller? Sure you can. To traverse through those records, do the following (a worked loop is sketched after this list):
    - Get a reference to the View Object using (OAViewObject)oapagecontext.getApplicationModule(oawebbean).findViewObject("VO Name Here")
    - Loop through the records in the View Object using the count returned from oaviewobject.getFetchedRowCount()
    - For each record, fetch the value of the fields within the loop as oracle.jbo.Row row = oaviewobject.getRowAtRangeIndex(loop index here); (String)row.getAttribute("Column name of VO here");
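    As a minimal sketch of Note 9, the loop below simply strings those three calls together inside a controller method like the one shown earlier. The View Object name "EmployeesVO" and the attribute name "EmployeeName" are hypothetical, used only for illustration; the individual calls are the ones listed above.

        public void processFormRequest(OAPageContext pageContext, OAWebBean webBean)
        {
          super.processFormRequest(pageContext, webBean);
          // Get a reference to the View Object registered with the page's Application Module
          OAViewObject oaviewobject =
              (OAViewObject)pageContext.getApplicationModule(webBean).findViewObject("EmployeesVO");
          // Loop over the rows that have been fetched into the View Object
          int fetchedRows = oaviewobject.getFetchedRowCount();
          for (int i = 0; i < fetchedRows; i++)
          {
            oracle.jbo.Row row = oaviewobject.getRowAtRangeIndex(i);
            String name = (String)row.getAttribute("EmployeeName");
            // act on each record's value here (validate, log, accumulate, ...)
          }
        }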

    Read the article
