Search Results


  • MEB Support to NetBackup MMS

    - by Hema Sridharan
    In MySQL Enterprise Backup (MEB) 3.6, a new option was introduced to support backup to tape via the SBT interface. SBT stands for System Backup to Tape, an Oracle API that helps to perform backup and restore jobs via media management software such as Oracle Secure Backup (OSB). Other storage managers, such as IBM's Tivoli Storage Manager (TSM) and Symantec's NetBackup (NB), are also supported by MEB, but we don't guarantee that they will function as expected for every release. MEB supports SBT API version 2.0. In this blog, I am primarily going to focus on the interface between MEB and Symantec's NetBackup. If you are using tapes for backup, ensure that the tape library and tape drives are compatible.
    Test Setup
    1. Install the NB 7.5 master and media servers on Linux. (NB 7.1 can also be used, but for testing purposes I used NB 7.5.)
    2. Install MEB 3.8, also on Linux.
    3. Install the NB admin console on your Windows desktop and configure the NB master server from there.
    Note: Ensure that you have root user permission to install NetBackup.
    Configuration Steps for MEB and NB
    Once MEB and NB are installed, ensure that NB is linked to MEB by specifying the library /usr/openv/netbackup/bin/libobk.so64 on the mysqlbackup command line using --sbt-lib-path. Configure the NB master server from the Windows console; that is, configure the storage units by specifying the storage unit name, disk type, media server name, and so on. Create NetBackup policies that are user selectable, but make sure that the policy type is "Oracle". Define the clients where MEB will be executed. Sometimes this will be a different host from where MEB is run, and sometimes it will be the same media server where NB and the tapes are attached.
    Once the installation and configuration steps are performed for MEB and NB, the next part is the actual execution. MEB should be run as a single-file backup using the --backup-image option with the prefix sbt: (a tag which tells MEB that it should stream the backup image through the SBT interface), which is sent to the NB client via the SBT interface. The resulting backup image is stored where NB stores the images that it backs up. The following diagram shows how MEB interacts with the MMS through the SBT interface.
    Backup
    The following parameters should also be ready for the execution:
    --sbt-lib-path: Path to the SBT library specific to the NetBackup MMS. The SBT library for NetBackup is /usr/openv/netbackup/bin/libobk.so64
    --sbt-environment: Environment variables that must be defined specific to NetBackup. In our example below, we use NB_ORA_SERV=myserver.com, NB_ORA_CLIENT=myserver.com, NB_ORA_POLICY=NBU-MEB, ORACLE_HOME=/export/home2/tmp/hema/mysql-server/
    ./mysqlbackup --port=13000 --protocol=tcp --user=root --backup-image=sbt:bkpsbtNB --sbt-lib-path=/usr/openv/netbackup/bin/libobk.so64 --sbt-environment="NB_ORA_SERV=myserver.com, NB_ORA_CLIENT=myserver.com, NB_ORA_POLICY=NBU-MEB, ORACLE_HOME=/export/home2/tmp/hema/mysql-server/" --backup-dir=/export/home2/tmp/hema/MEB_bkdir/ backup-to-image
    Once the backup is completed successfully, it should appear in the Activity Monitor in the NetBackup console.
    Restore
    For restore, the image contents have to be extracted using the image-to-backup-dir command, and then the apply-log and copy-back steps are applied.
    ./mysqlbackup --sbt-lib-path=/usr/openv/netbackup/bin/libobk.so64 --backup-dir=/export/home2/tmp/hema/NBMEB/ --backup-image=sbt:bkpsbtNB image-to-backup-dir
    Now apply the logs as usual, shut down the server, perform the restore, then restart the server and check the data contents.
    ./mysqlbackup --backup-dir=/export/home2/tmp/hema/NBMEB/ apply-log
    ./mysqlbackup --datadir=/export/home2/tmp/hema/mysql-server/mysql-5.5-meb-repo/mysql-test/var/mysqld.1/data/ --backup-dir=/export/home2/tmp/hema/MEB_bkpdir/ --innodb_log_files_in_group=2 --innodb_log_file_size=5M --user=root --port=13000 --protocol=tcp copy-back
    The NB console should show the "Restore" job as done. If you don't see that, there is something wrong with MEB or NetBackup. You can also refer to more detailed steps on MEB and NB integration in the whitepaper here.
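    To tie the steps together, here is the same backup-and-restore cycle collected into one annotated shell sketch. It only restates the commands from this example; the hostnames, policy name, port, and library path are taken from above, the data directory is shortened to a placeholder, and the restore steps all reference the same extraction directory for consistency. Adjust everything for your own environment before use.
    #!/bin/sh
    # Assumes MEB 3.8 and the NetBackup client are installed on this host and
    # that the "NBU-MEB" policy (type "Oracle") already exists on the master server.
    SBT_LIB=/usr/openv/netbackup/bin/libobk.so64
    SBT_ENV="NB_ORA_SERV=myserver.com, NB_ORA_CLIENT=myserver.com, NB_ORA_POLICY=NBU-MEB, ORACLE_HOME=/export/home2/tmp/hema/mysql-server/"

    # 1. Stream a single-file backup image to NetBackup through the SBT interface.
    ./mysqlbackup --port=13000 --protocol=tcp --user=root \
        --backup-image=sbt:bkpsbtNB --sbt-lib-path=$SBT_LIB \
        --sbt-environment="$SBT_ENV" \
        --backup-dir=/export/home2/tmp/hema/MEB_bkdir/ backup-to-image

    # 2. At restore time, pull the image back from NetBackup and unpack it.
    ./mysqlbackup --sbt-lib-path=$SBT_LIB \
        --backup-dir=/export/home2/tmp/hema/NBMEB/ \
        --backup-image=sbt:bkpsbtNB image-to-backup-dir

    # 3. Make the extracted backup consistent, then (with the server shut down)
    #    copy it back into the data directory and restart the server.
    ./mysqlbackup --backup-dir=/export/home2/tmp/hema/NBMEB/ apply-log
    ./mysqlbackup --datadir=/path/to/datadir --backup-dir=/export/home2/tmp/hema/NBMEB/ \
        --innodb_log_files_in_group=2 --innodb_log_file_size=5M \
        --user=root --port=13000 --protocol=tcp copy-back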

    Read the article

  • Oracle's PeopleSoft Customer Advisory Boards Convene to Discuss Roadmap at Pleasanton Campus

    - by john.webb(at)oracle.com
    Last week we hosted all of the PeopleSoft CABs (Customer Advisory Boards) at our Pleasanton Development Center to review our detailed designs for future Feature Packs, PeopleSoft 9.2, and beyond. Over 150 customers from 79 companies attended, representing a variety of industries, geographies, and company sizes. The PeopleSoft team relies heavily on this group to provide key input on our roadmap for applications as well as technology direction. A good product strategy is one part well-thought-out idea with many handfuls of customer validation, and very often our best ideas originate from these customer discussions. While the individual CABs have frequent interactions with our teams, it's always great to have all of them in one place and in person. Our attendance was up from last year, which I attribute to two things: (1) more interest as a result of the PeopleSoft 9.1 upgrade; (2) an improving economy allowing for more travel. Maybe we should index the second item meeting-to-meeting and use it as a market indicator - we'll see! We kicked off the day one session with an overview of the PeopleSoft Roadmap, and I outlined our strategy around Feature Packs and PeopleSoft 9.2. Given the high adoption rate of PeopleSoft 9.1 (over 4x that of 9.0 given the same time lapse since the release date), there was a lot of interest around the 9.1 Feature Packs as a vehicle for continuous value. We provided examples of our three central design themes: Simplicity, Productivity, and lower TCO, including those already delivered via Feature Packs in 2010. A great example of this is the Company Directory feature in PeopleSoft HCM. The configuration capabilities and the new actionable links our CAB advised us on last spring were made available to all customers late last year. We reviewed many more future Navigation changes that will fundamentally change the way users interact with PeopleSoft. Our old friend, the menu tree, is being relegated from center stage to a bit part, with new concepts like Activity Guides, Train Stops, Related Actions, Work Centers, Collaborative Workspaces, and Secure Enterprise Search bringing users what they need in a contextual, role-based manner with fewer clicks. Paco Aubrejuan, our PeopleSoft GM, and Steve Miranda, the SVP for Fusion Applications, then discussed our plans around Oracle's Application Investment Strategy. This included our continued investment in developing both PeopleSoft and Fusion, as well as the co-existence strategy with new Fusion Apps integrating to PeopleSoft Apps. Should you want to view this presentation, a recording is available. Jeff Robbins, our lead PeopleTools Strategist, provided the roadmap for PeopleTools and discussed our continuing plan to deliver annual releases to further evolve the user experience. Numerous examples were highlighted with the Navigation techniques I mentioned previously. Jeff also provided a lot of food for thought around Lifecycle Management topics and how to remain current on releases with a lower cost of ownership. Dennis Mesler, from Boise, was the guest speaker in this slot; he spoke about the new PeopleSoft Test Framework (PTF). Regression testing is a key cost component when product updates are applied. This new tool (which is free to all PeopleSoft customers as part of PeopleTools 8.51) provides a metadata-driven approach to recording and executing test scripts.
    Coupled with what our Usage Monitor enables, PTF provides our customers a powerful tool to lower costs and manage product updates more efficiently and at the time of their choosing. Beyond the general session, we broke out into the individual CABs: HCM, Financials, ESA/ALM, SRM, SCM, CRM, and PeopleTools/Technology. A day and a half of very engaging discussions around our plans took place for each product pillar. More about that to follow in future posts. We capped the first day with a reception sponsored by our partners: InfoSys, SmartERP (represented by Doris Wong), and Grey Sparling Solutions (represented by Chris Heller and Larry Grey). Great to see these old friends actively engaged in the very busy PeopleSoft ecosystem!
    Jeff Robbins previews the roadmap for PeopleTools with the PeopleSoft CAB

    Read the article

  • DELL Inspiron 9400 SD Card Reader not working on Ubuntu 11.10

    - by Mario Martz
    I'm new to Linux. I just installed Ubuntu 11.10 on a Dell Inspiron 9400, and everything works fine with the exception of the SD card reader: every time I insert a card the computer doesn't do anything, it's like the SD card reader is not there. I ran lspci and it shows the following devices:
    03:01.0 FireWire (IEEE 1394): Ricoh Co Ltd R5C832 IEEE 1394 Controller
    03:01.1 SD Host controller: Ricoh Co Ltd R5C822 SD/SDIO/MMC/MS/MSPro Host Adapter (rev 19)
    03:01.2 System peripheral: Ricoh Co Ltd R5C592 Memory Stick Bus Host Adapter (rev 0a)
    03:01.3 System peripheral: Ricoh Co Ltd xD-Picture Card Controller (rev 05)
    Every time I insert a memory card, dmesg shows the following (the first line is cut off):
    ...d status 0x600b00
    [ 2687.227351] end_request: I/O error, dev mmcblk0, sector 64
    [ 2687.229436] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00
    [ 2687.229440] end_request: I/O error, dev mmcblk0, sector 65
    [ 2687.230512] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00
    [ 2687.230515] end_request: I/O error, dev mmcblk0, sector 66
    [ 2687.231588] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00
    [ 2687.231592] end_request: I/O error, dev mmcblk0, sector 67
    [ 2687.232674] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00
    [ 2687.232678] end_request: I/O error, dev mmcblk0, sector 68
    [ 2687.234763] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00
    [ 2687.234766] end_request: I/O error, dev mmcblk0, sector 69
    [ 2687.236864] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00
    [ 2687.236868] end_request: I/O error, dev mmcblk0, sector 70
    [ 2687.238942] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00
    [ 2687.238946] end_request: I/O error, dev mmcblk0, sector 71
    [ 2687.238949] Buffer I/O error on device mmcblk0, logical block 8
    [ 2687.241028] mmcblk0: retrying using single block read
    [ 2687.243104] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00
    [ 2687.243108] end_request: I/O error, dev mmcblk0, sector 64
    [ 2687.245212] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00
    [ 2687.245215] end_request: I/O error, dev mmcblk0, sector 65
    [ 2687.247298] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00
    [ 2687.247302] end_request: I/O error, dev mmcblk0, sector 66
    [ 2687.248389] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00
    [ 2687.248393] end_request: I/O error, dev mmcblk0, sector 67
    [ 2687.250476] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00
    [ 2687.250480] end_request: I/O error, dev mmcblk0, sector 68
    [ 2687.252617] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00
    [ 2687.252621] end_request: I/O error, dev mmcblk0, sector 69
    [ 2687.254737] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00
    ...and more of the same, but with different sector numbers.
    I'm using kernel 3.0.0-12-generic. By the way, when I was installing it and Ubuntu asked about the installation (whether I wanted to install it alongside Windows, delete something, or change the partitions of the HDD), if I went to the window to change the partitioning of the disk, Linux detected the SD card (if there was one inserted, of course). Any help with this would be appreciated - sorry for my English. Thank you.

    Read the article

  • Working with EO composition associations via ADF BC SDO web services

    - by Chris Muir
    ADF Business Components support the ability to publish the underlying Application Modules (AMs) and View Objects (VOs) as web services through Service Data Objects (SDOs). This blog post looks at a minor challenge to overcome when using SDOs and Entity Objects (EOs) that use a composition association.
    Using the default ADF BC EO association behaviour
    ADF BC components allow you to work with VOs that are based on EOs that are part of a parent-child composition association. A composition association enforces that you cannot create records for the child outside the context of the parent. As an example, when creating invoice-lines you want to enforce that the individual lines have a related parent invoice record; it simply doesn't make sense to save invoice-lines without their parent invoice record. The following screenshot, using the ADF BC Tester, demonstrates the correct way to create a child Employees record as part of a composition association with Departments: And the following screenshot shows you the wrong way to create an Employee record: Note the error, which is enforced by the composition association: (oracle.jbo.InvalidOwnerException) JBO-25030: Detail entity Employees with row key null cannot find or invalidate its owning entity.
    Working with composition associations via the SDO web services
    Shay Shmeltzer recently recorded a good video which demonstrates how to expose your ADF Business Components through the SDO interface. On exposing the VOs you get a choice of operations to publish, including create, update, delete and more: For example, through the SDO test interface we can see that the create operation will request the attributes for the VO exposed, in this case EmployeesView1: In this specific case though, just like the ADF BC Tester, an attempt to create this record will fail with JBO-25030; the composition association is still enforced: The correct way to do this is through the create operation on DepartmentsView1, which also lets you create Employees records in the context of the parent, thus satisfying the composition association rule: Yet the issue here is that the create operation will always create both the parent Departments and Employees records. What do we do if we've already created the parent Departments record, and we just want to create additional Employees records for that Department? The create operation of EmployeesView1, as we saw previously, doesn't allow us to do that; the JBO-25030 error will be raised. The solution is the "merge" operation on the parent Departments record: In this case, for the Departments record you just need to supply the DepartmentId of the Department you want the Employees record to be associated with, as well as the new Employees record. When invoked, only the Employees record is created, and the supply of the DepartmentId of the Departments record satisfies the composition association without actually creating or updating the associated Departments record that already exists in the database. Be warned, however: if you supply any more attributes for the Department record, it will result in a merge (update) of the associated Departments record too.

    Read the article

  • Mounting a Microsoft Azure CloudDrive in a VMRole

    - by SeanBarlow
    Mounting a drive in a VMRole is a little more complicated than in a Web or Worker role. The Web and Worker roles offer OnStart and OnStop events, which you can use to mount or unmount your drives. The VMRole does not have these same events, so you have to provide another way for the drives to be mounted or unmounted. The problem I have run into is: what if you have multiple drives and you only want to mount certain drives? How do you let your user mount the drive? I am not going to go into details on what kind of GUI to present to the user. I have done this in a simple WPF application as well as a console application.
    We are going to need to get the storage account details. One thing to note when you are mounting cloud drives: you cannot use https and have to use http. We force the use of http by passing false when we create the CloudStorageAccount.
    StorageCredentialsAccountAndKey credentials = new StorageCredentialsAccountAndKey("AccountName", "AccountKey");
    CloudStorageAccount storageAccount = new CloudStorageAccount(credentials, false);
    Next we need to get a reference to the container.
    var blobClient = storageAccount.CreateCloudBlobClient();
    var container = blobClient.GetContainerReference("ContainerName");
    Now we need to get a list of the drives in the container.
    var drives = container.ListBlobs();
    Now that we have a list of the drives in the container, we can let the user choose which drive they want to mount. I am just selecting the first drive in the list for the example and getting the Uri of the drive.
    var driveUri = drives.First().Uri;
    Now that we have the Uri, we need to get the reference to the drive.
    var drive = new CloudDrive(driveUri, storageAccount.Credentials);
    Now all that is left is to mount the drive.
    var driveLetter = drive.Mount(0, DriveMountOptions.None);
    To unmount the drive, all you have to do is call Unmount on the drive.
    drive.Unmount();
    You do need to make sure you unmount the drives when you are done with them. I have run into issues with the drives being locked until the VMRole is rebooted. I have also managed to have a drive be permanently locked, and I was forced to delete it and upload it again. I have been unable to reproduce the permanent lock but I am still trying. The CloudDrive class provides a handy method to retrieve all the mounted drives in the role.
    foreach (var drive in CloudDrive.GetMountedDrives())
    {
        var mountedDrive = storageAccount.CreateCloudDrive(drive.Value.PathAndQuery);
        mountedDrive.Unmount();
    }

    Read the article

  • New security configuration flag in UCM PS3

    - by kyle.hatlestad
    While the recent Patch Set 3 (PS3) release was mostly focused on bug fixes and such, a new configuration flag was added for security. In 10gR3 and prior versions, UCM had a component called Collaboration Manager which allowed project folders to be created and groups of users assigned as members to collaborate on documents. With this component came access control lists (ACLs) for content and folders. Users could assign specific security rights on each and every document and folder within a project. And it was possible to enable these ACLs without having the Collaboration Manager component enabled, but it took some special instructions (see technote 603148.1) and added some extraneous pieces still related to Collaboration Manager. When 11g came out, Collaboration Manager was no longer available, but the configuration settings to turn on ACLs were still there. Well, in PS3 they've been cleaned up a bit and a new configuration flag has been added to simply turn on the ACL fields and none of the other collaboration bits.
    To enable ACLs:
    UseEntitySecurity=true
    Along with this configuration flag to turn ACLs on, you also need to define which Security Groups will honor the ACL fields. If an ACL is applied to a content item with a Security Group outside this list, it will be ignored.
    SpecialAuthGroups=HumanResources,Legal,Marketing
    Save the settings and restart the instance. Upon restart, two new metadata fields will be created: xClbraUserList and xClbraAliasList. If you are using OracleTextSearch as the search indexer, be sure to run a Fast Rebuild on the collection.
    On the Check In, Search, and Update pages, values are added by simply typing in the value and getting a type-ahead list of possible values. Select the value, click Add, and then set the level of access (Read, Write, Delete, or Admin). If all of the fields are blank, then it simply falls back to just Security Group and Account access.
    As for how they are stored in the metadata fields, each entry starts with its identifier: an ampersand (&) for users, an "at" sign (@) for groups, and a colon (:) for roles. Following that is the entity name, and at the end is the level of access in parentheses, e.g. (RWDA). Each entry is separated by a comma. So if you were populating values through the Batch Loader or an external source, the values would be defined this way. Detailed information on Access Control Lists can be found in the Oracle Fusion Middleware System Administrator's Guide for Oracle Content Server.
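    To make this concrete, here is a minimal sketch of the configuration entries and of how a stored ACL value looks. Only the flag names, the field names, and the entry syntax (& for users, @ for groups, : for roles, access level in parentheses, entries separated by commas) come from the description above; the security groups and the user, group, and role names are invented for illustration, and the entries typically go in the instance's config.cfg (or are set through the admin server's General Configuration page) before the restart.
    # Enable the ACL fields and list the Security Groups that honor them
    UseEntitySecurity=true
    SpecialAuthGroups=HumanResources,Legal,Marketing

    # Example of what a stored ACL value looks like, e.g. in xClbraUserList:
    #   &jdoe(RWDA),&asmith(RW)
    # Group and role entries follow the same pattern:
    #   @HR_Managers(RW),:contributor(R)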

    Read the article

  • Tips On Using The Service Contracts Import Program

    - by LuciaC
    Prior to release 12.1 there was no supported way to import contracts into the EBS Service Contracts application - there were neither public APIs nor contract load programs provided. From release 12.1 onwards the 'Service Contracts Import Program' is provided to load service contracts into the application. The Service Contracts Import functionality is explained in How to Use the Service Contracts Import Program - Scope and Limitations (Doc ID 1057242.1). This note includes an attached document which explains the program architecture, shows the Entity Relationship Diagram, and details the interface table definitions.
    The Import program takes data from the interface tables listed below and populates the contracts schema tables:
    OKS_USAGE_COUNTERS_INTERFACE
    OKS_SALES_CREDITS_INTERFACE
    OKS_NOTES_INTERFACE
    OKS_LINES_INTERFACE
    OKS_HEADERS_INTERFACE
    OKS_COVERED_LEVELS_INTERFACE
    These interface tables must be loaded via a custom load program. The Service Contracts Import concurrent request is then submitted to create contracts from this legacy data. The parameters to run the Import program are:
    Mode - Validate only or Import
    Batch Number - Batch_Id (unique id populated into the OKS_HEADERS_INTERFACE table)
    Number of Workers - Number of workers required (these are spawned as separate sub-requests)
    Commit size - Number of successfully processed contracts committed to the database
    The program spawns sub-requests for the import worker(s) and the 'Service Contracts Import Report'. The data is validated prior to import into the Contracts tables, and errors are reported in the Service Contracts Import Report program output file (Import Execution Report). Troubleshooting tips are provided in R12.1 - Common Service Contract Import Errors (Doc ID 762545.1); this document lists some, but not all, import errors. The document will be updated over time. Additional help is given in Debugging Tip for Service Contracts Import Errors (Doc ID 971426.1).
    After you successfully import contracts, you can purge the records from the interface tables by running the Service Contracts Import Purge concurrent program. Note that there is no supported way to mass delete data from the Contracts schema tables once they are populated, so data loaded by the Import program must be fully tested and verified before the program is run to load data into a Production system.
    A Service Contracts Import Test program has been provided which will take an existing contract in the application and load the interface tables using the data from that contract. This can be used as an example for guidance on how to load the interface tables. The Test program functionality is explained in How to Use the Service Contracts Test Import Program Provided in Release 12.1 (Doc ID 761209.1). Note that the Test program has some limitations which do not apply to the full Import program and is not a supported program; it is simply a testing tool.

    Read the article

  • Recover Lost Form Data in Firefox

    - by Asian Angel
    Have you ever filled in a text area or form on a webpage only to have something happen before you can finish it? If you like the idea of recovering that lost data, then you will want to have a look at the Lazarus: Form Recovery extension for Firefox.
    Lazarus: Form Recovery in Action
    For our first example we chose the comment text box area for one of the articles here at the website. As you can see, we were not finished typing in the whole comment yet… Notice the “Lazarus Icon” in the lower right corner. Note: We simulated accidental tab closures for our two examples. After getting our webpage opened up again, all of our text was gone. Right-clicking within the text area showed two options available…“Recover Text” & “Recover Form”. Notice that our lost text was listed as a “sub menu”…this could be extremely useful in matching up the appropriate text to the correct webpage if you had multiple tabs open before something happened. Click on the correct text listing to insert it. So easy to finish writing our comment without having to start from zero again.
    In our second example we chose the sign-up form page for the website. As before, we were not finished filling in the form… Getting the webpage opened back up showed the same problem as before…all the entered text was lost. This time we right-clicked in the browser window area and there was that wonderful “Recover Form” command waiting to be used. One click and… All of our lost form data was back and we were able to finish filling in the form.
    For those who may be interested, you can disable Lazarus: Form Recovery on individual websites using the “Context Menu” for the “Status Bar Icon”.
    Options
    There are three sections in the options, and you should take a quick look through them to make any desired modifications in how Lazarus: Form Recovery functions. The first “Options Area” focuses on display/access for the extension. The second “Options Area” allows you to expand the type of data retained, enable removal of data within a given time frame, set up a password, disable search indexing, and enable form data retention while in “Private Browsing Mode”. The third “Options Area” focuses on the Lazarus database itself.
    Conclusion
    If you have ever lost text area or form data before, then you know how much time could be lost in starting over. Lazarus: Form Recovery helps provide a nice backup solution to get you up and running once again with a minimum of effort.
    Links
    Download the Lazarus: Form Recovery extension (Mozilla Add-ons)
    Download the Lazarus: Form Recovery extension (Extension Homepage)

    Read the article

  • permanently load module

    - by Radu
    I have a Compaq Presario CQ-61 320SQ. I am using Ubuntu 10.04 because after the update to 10.10 my mouse and touchpad wouldn't work, the network wouldn't work, sound wouldn't work... (I managed to fix most of them after almost a month of googling, but not all; my two desktops have no problem with 10.10), so I decided to switch back to 10.04, where I have a problem: my broadband speed is very low because of the kernel module r8169. I downloaded the good module, r8101, and every time the computer boots I have an rc.local entry to fix this.
    Question: Can I load the module permanently from a specific location? I heard about /etc/modules, but there I need the module name, whereas I have to load it from a specific path (where is the default path for that)? Thank you.
    So I studied the script: it creates the file r8101.ko in /lib/modules/`uname -r`/kernel/drivers/net, so I think as long as nobody deletes that file, and I don't update the kernel, maybe adding r8101 to /etc/modules will work, and adding r8169 to the blacklist... I will give it a try.
    EDIT2: So I added r8101 to /etc/modules and blacklisted r8169 in /etc/modprobe.d/blacklist.conf. It still uses the old module; lsmod prints:
    radu@adu:~$ lsmod | grep r8
    r8101                  67626  0
    r8169                  34108  0
    mii                     4381  1 r8169
    EDIT: the module is loaded using this script that came with it:
    #!/bin/sh
    # invoke insmod with all arguments we got
    # and use a pathname, as insmod doesn't look in . by default
    TARGET_PATH=/lib/modules/`uname -r`/kernel/drivers/net
    echo
    echo "Check old driver and unload it."
    check=`lsmod | grep r8169`
    if [ "$check" != "" ]; then
        echo "rmmod r8169"
        /sbin/rmmod r8169
    fi
    check=`lsmod | grep r8101`
    if [ "$check" != "" ]; then
        echo "rmmod r8101"
        /sbin/rmmod r8101
    fi
    echo "Build the module and install"
    echo "-------------------------------" >> log.txt
    date 1>>log.txt
    make all 1>>log.txt || exit 1
    module=`ls src/*.ko`
    module=${module#src/}
    module=${module%.ko}
    if [ "$module" == "" ]; then
        echo "No driver exists!!!"
        exit 1
    elif [ "$module" != "r8169" ]; then
        if test -e $TARGET_PATH/r8169.ko ; then
            echo "Backup r8169.ko"
            if test -e $TARGET_PATH/r8169.bak ; then
                i=0
                while test -e $TARGET_PATH/r8169.bak$i
                do
                    i=$(($i+1))
                done
                echo "rename r8169.ko to r8169.bak$i"
                mv $TARGET_PATH/r8169.ko $TARGET_PATH/r8169.bak$i
            else
                echo "rename r8169.ko to r8169.bak"
                mv $TARGET_PATH/r8169.ko $TARGET_PATH/r8169.bak
            fi
        fi
    fi
    echo "Depending module. Please wait."
    depmod -a
    echo "load module $module"
    modprobe $module
    echo "Completed."
    exit 0
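    To address the original question concretely, here is one way to make the r8101 setup survive reboots. This is only a sketch of the standard Ubuntu approach, not something from the original post, and it assumes the vendor script above has already copied r8101.ko into /lib/modules/$(uname -r)/kernel/drivers/net: /etc/modules takes module names (no path is needed once the .ko is inside the /lib/modules tree), the blacklist keeps r8169 from loading first, and regenerating the initramfs makes the blacklist apply early in boot. Note that a kernel update installs a fresh /lib/modules tree, so the build script has to be rerun afterwards.
    # 1. Stop r8169 from claiming the device at boot
    echo "blacklist r8169" | sudo tee -a /etc/modprobe.d/blacklist.conf

    # 2. Load r8101 at boot; the module name is enough because depmod
    #    indexes the .ko that the vendor script installed
    echo "r8101" | sudo tee -a /etc/modules
    sudo depmod -a

    # 3. Rebuild the initramfs so the blacklist takes effect before the
    #    initial ramdisk probes the network hardware
    sudo update-initramfs -u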

    Read the article

  • ASP.NET MVC 3 Hosting :: How to Upgrade ASP.NET MVC 2 Project to ASP.NET MVC 3

    - by mbridge
    ASP.NET MVC 3 can be installed side by side with ASP.NET MVC 2 on the same computer, which gives you flexibility in choosing when to upgrade an ASP.NET MVC 2 application to ASP.NET MVC 3. The simplest way to upgrade is to create a new ASP.NET MVC 3 project and copy all the views, controllers, code, and content files from the existing MVC 2 project to the new project, and then update the assembly references in the new project to match the old project. If you have made changes to the Web.config file in the MVC 2 project, you must also merge those changes with the Web.config file in the MVC 3 project.
    To manually upgrade an existing ASP.NET MVC 2 application to version 3, do the following:
    1. In both Web.config files in the MVC 3 project, globally search and replace the MVC version. Find System.Web.Mvc, Version=2.0.0.0 and replace it with System.Web.Mvc, Version=3.0.0.0. There are three changes in the root Web.config and four in the Views\Web.config file.
    2. In Solution Explorer, delete the reference to System.Web.Mvc (which points to the version 2 DLL). Then add a reference to System.Web.Mvc (v3.0.0.0).
    3. In Solution Explorer, right-click the project name and then select Unload Project. Then right-click again and select Edit ProjectName.csproj.
    4. Locate the ProjectTypeGuids element and replace {F85E285D-A4E0-4152-9332-AB1D724D3325} with {E53F8FEA-EAE0-44A6-8774-FFD645390401}.
    5. Save the changes, and then right-click the project and select Reload Project.
    6. If the project references any third-party libraries that are compiled against ASP.NET MVC 2, add the following bindingRedirect element to the Web.config file in the application root under the configuration section:
    <runtime>
      <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
        <dependentAssembly>
          <assemblyIdentity name="System.Web.Mvc" publicKeyToken="31bf3856ad364e35"/>
          <bindingRedirect oldVersion="2.0.0.0" newVersion="3.0.0.0"/>
        </dependentAssembly>
      </assemblyBinding>
    </runtime>
    Other ASP.NET MVC 3 articles:
    - Rolling with Razor in MVC v3 Preview
    - Deploying ASP.NET MVC 3 web application to server where ASP.NET MVC 3 is not installed
    - RenderAction with ASP.NET MVC 3 Sessionless Controllers

    Read the article

  • ATG Live Webcast June 28: Scrambling Sensitive Data in EBS 12 Cloned Environments

    - by BillSawyer
    Securing the Oracle E-Business Suite includes protecting the underlying E-Business data in production and non-production databases. While steps can be taken to provide a secure configuration to limit EBS access, a better approach to protecting non-production data is simply to scramble (mask) the data in the non-production copy. The Oracle E-Business Suite Template for Data Masking Pack can be used in situations where confidential or regulated data needs to be shared with other non-production users who need access to some of the original data, but not necessarily every table. Examples of non-production users include internal application developers or external business partners such as offshore testing companies, suppliers, or customers.
    The Oracle E-Business Suite Template for Data Masking Pack is applied to a non-production environment with the Enterprise Manager Grid Control Data Masking Pack. When applied, the Oracle E-Business Suite Template for Data Masking Pack will create an irreversibly scrambled version of your production database for development and testing.
    This ATG Live Webcast is your chance to come learn about the Oracle E-Business Suite Release 12.1.3 Template for Data Masking Pack from the experts.
    Oracle E-Business Suite Release 12.1.3 Template for Data Masking
    The agenda for the Oracle E-Business Suite Template for Data Masking Pack webcast includes the following topics:
    - What does data masking do in E-Business Suite environments? (de-identify the data, mask sensitive data, maintain data validity)
    - How can EBS customers use data masking?
    - References
    Join Eric Bing, Senior Director, and Elke Phelps, Senior Principal Product Manager, as they discuss the Oracle E-Business Suite Template for Data Masking Pack.
    Date: Thursday, June 28, 2012
    Time: 8:00 AM Pacific Standard Time
    Presenters: Eric Bing, Senior Director; Elke Phelps, Senior Principal Product Manager
    Webcast Registration Link (Preregistration is optional but encouraged)
    To hear the audio feed:
    Domestic Participant Dial-In Number: 877-697-8128
    International Participant Dial-In Number: 706-634-9568
    Additional International Dial-In Numbers Link
    Dial-In Passcode: 100865
    To see the presentation, the Direct Access Web Conference details are:
    Website URL: https://ouweb.webex.com
    Meeting Number: 599097152
    If you miss the webcast, or you have missed any webcast, don't worry -- we'll post links to the recording as soon as it's available from Oracle University. You can monitor this blog for pointers to the replay, and you can find our archive of our past webcasts and training here. If you have any questions or comments, feel free to email Bill Sawyer (Senior Manager, Applications Technology Curriculum) at BilldotSawyer-AT-Oracle-DOT-com.

    Read the article

  • How to Create a Task From an Email Message in Outlook 2013

    - by Lori Kaufman
    If you need to do something related to an email message you received, you can easily create a task from the message in Outlook. A task can be created that contains all the content of the message without requiring you to re-enter the information. Creating a task in Outlook from an email message is different from flagging the message. As it says on Microsoft’s site: “When you flag an email message, the message appears in the To-Do List in Tasks and on the Tasks peek. However, if you delete the message, it also disappears from the To-Do List in Tasks and on the Tasks peek. Flagging a message doesn’t create a separate task.” Using the method described below to create a task from an email message, the task is separate from the message. The original message can be deleted or changed and the related task will not be affected. In Outlook, make sure the Mail section is active. If not, click Mail on the Navigation Bar at the bottom of the Outlook window. Then, click on the message you want to add to a task and drag it to Tasks on the Navigation Bar. A new Task window displays containing the email message and allowing you to enter the subject of the task, the Start and Due dates, Status, Priority, among other settings. When you have specified the settings for the task, click Save & Close in the Actions section of the Task tab. When the Task window closes, the Mail section is still active. If you move your mouse over Tasks on the Navigation Bar, a snippet from the new task displays in a popup window (the Task peek). Click Tasks to go to the Tasks section of Outlook. The To-Do List displays with your newly-added task listed in the middle pane. The right pane displays the details of the task and the contents of the message included in the task (as pictured at the beginning of this article). Click on Tasks to see a complete listing of all your tasks, including the one you just added from your email message. Note that attachments in an email message added to a new task are not copied to the task. You can also create new tasks by dragging contacts, calendar items, and notes to Tasks on the Navigation Bar.     

    Read the article

  • links for 2011-03-08

    - by Bob Rhubart
    The Empowered Business "Someone needs to be the enterprise parent that asks the question, “do you really need that?” It may be a shiny new thing, but does it make a difference in the ability to accomplish the strategy and goals?" - Enterprise Architect Todd Biske (tags: enterprisearchitecture) Knowledge Workers in the British Raj "While we’ve used technology to change business, business has also evolved to the point that it’s changing how we think about and use technology." - Peter Evans Greenwood (tags: enterprisearchitecture enterprise2.0) Arun Gupta, Miles to go ...: OTN Developer Day Boston 2011 - Slides & Trip Report Arun Gupta shares slides from his Developer Day presentations. (tags: oracle otn java) Use WLST to Delete All JMS Messages From a Destination (James Bayer's Blog) James Bayer responds to a question. (tags: oracle otn weblogic jms) Triangle Circle Square: Apex in the Amazon Cloud Scott Wesley shares several links to resources covering Oracle Apex on an Amazon EC2 instance. (tags: oracle apex ec2 amazon cloud) William Vambenepe: Reading IBM's proposed standard for Cloud Architecture The always entertaining William Vambenepe gives IBM's proposed Cloud standards the full Ebert. (tags: oracle cloud ibm standards) Government Information Group Cloud Computing Research Study "The twin pressures of reduced budgets and the need for greater efficiency have led the federal government to strongly promote cloud computing as a solution whenever possible." (tags: cloudcomputing cloud) The Ron Batra Blog: Technology Whispers: Top 10 Reasons to go ExaData "Continuing my exploration of ExaData, I thought I'd take a minute to consolidate my thoughts into key reasons for which Oracle ExaData could be a good fit for your needs." - Oracle ACE Director Ron Batra (tags: oracle oracleace exadata) Oracle WebCenter: Composite Applications & Mash-Ups (Oracle Enterprise 2.0 Blog) "The new Business Mash-up editor allows business users to take any Oracle Application or 3rd party application and wire the backend data sources or APIs to a rich set of visualizations and reuse them in mashups." (tags: oracle webcenter enterprise2.0) Antonio Romero: Great Discussion of ETL and ELT Tooling in TDWI Linkedin Group Antonio says: "There’s a great discussion of ETL and ELT tooling going on in the official TDWI Linkedin group, under the heading 'How Sustainable is SQL for ETL?' It delves into a wide range of topics." (tags: oracle linkedin etl elt) YouTube - Bunny Inc. - Episode 1. Mr. CIO meets Mr. Executive Manager Yes, it's a commercial. But it's well done and it's funny. (tags: e20 enterprise2.0 webcenter) Markus Eisele: Both Weblogic and Glassfish are strategic products for Oracle Oracle ACE Director Markus Eisele shares selected quotes pulled from the recent TechCast Live interview with Oracle's Anil Gaur and Adam Leftik (tags: oracle java weblogic glassfish) How to become an Oracle SOA expert? (SOA Partner Community Blog) Jurgan Kress shares info and links for those interested in capitalizing on SOA. (tags: oracle soa)

    Read the article

  • Archbeat Link-O-Rama Top 10 Facebook Faves for October 20-26, 2013

    - by OTN ArchBeat
    Here's this week's list of the Top 10 items shared on the OTN ArchBeat Facebook Page from October 27 - November 2, 2013.
    Visualizing and Process (Twitter) Events in Real Time with Oracle Coherence | Noah Arliss This OTN Virtual Developer Day session explores in detail how to create a dynamic HTML5 Web application that interacts with Oracle Coherence as it’s processing events in real time, using the Avatar project and Oracle Coherence’s Live Events feature. Part of OTN Virtual Developer Day: Harnessing the Power of Oracle WebLogic and Oracle Coherence, November 5, 2013. 9am to 1pm PT / 12pm to 4pm ET / 1pm to 5pm BRT. Register now!
    HTML5 Application Development with Oracle WebLogic Server | Doug Clarke This free OTN Virtual Developer Day session covers the support for WebSockets, RESTful data services, and JSON infrastructure available in Oracle WebLogic Server. Part of OTN Virtual Developer Day: Harnessing the Power of Oracle WebLogic and Oracle Coherence, November 5, 2013. 9am to 1pm PT / 12pm to 4pm ET / 1pm to 5pm BRT. Register now!
    Video: ADF BC and REST services | Frederic Desbiens Spend a few minutes with Oracle ADF principal product manager Frederic Desbiens and learn how to publish ADF Business Components as RESTful web services.
    One Client Two Clusters | David Felcey "Sometimes its desirable to have a client connect to multiple clusters, either because the data is dispersed or for instance the clusters are in different locations for high availability," says David Felcey. David shows you how in this post, which includes a simple example.
    Exceptions Handling and Notifications in ODI | Christophe Dupupet Oracle Fusion Middleware A-Team director Christophe Dupupet reviews the techniques that are available in Oracle Data Integrator to guarantee that the appropriate individuals are notified in the event that ODI processes are impacted by network outages or other mishaps.
    Securing WebSocket applications on Glassfish | Pavel Bucek WebSocket is a key capability standardized into Java EE 7. Many developers wonder how WebSockets can be secured. One very nice characteristic for WebSocket is that it in fact completely piggybacks on HTTP. In this post Pavel Bucek demonstrates how to secure WebSocket endpoints in GlassFish using TLS/SSL.
    Oracle Coherence, Split-Brain and Recovery Protocols In Detail | Ricardo Ferreira Ricardo Ferreira's article "provides a high level conceptual overview of Split-Brain scenarios in distributed systems," focusing on a "specific example of cluster communication failure and recovery in Oracle Coherence."
    Non-programmatic Authentication Using Login Form in JSF (For WebCenter & ADF) | JayJay Zheng Oracle ACE JayJay Zheng shares an approach that "avoids the programmatic authentication and works great for having a custom login page developed in WebCenter Portal integrated with OAM authentication."
    Tech Article: SOA in Real Life: Mobile Solutions The latest article in the Industrial SOA series looks at mobile computing and how companies are developing SOA to go. http://pub.vitrue.com/PUxT
    The ACE Director Thing | Dr. Frank Munz Frank Munz finally gets around to blogging about achieving Oracle ACE Director status and shares some interesting insight into what will change—and what won't—thanks to that new status. A good, short read for those interested in learning more about the Oracle ACE program.
    Thought for the Day
    "Even if you're on the right track, you'll get run over if you just sit there."
— Will Rogers, American humorist (November 4, 1879 – August 15, 1935) Source: brainyquote.com

    Read the article

  • Scan Your Thumb Drive for Viruses from the AutoPlay Dialog

    - by Mysticgeek
    It’s always a good idea to scan someone’s flash drive for viruses when you use it on your PC. Today we look at how to use Microsoft Security Essentials to scan thumb drives via the AutoPlay dialog.
    Editor Note: This technique was created by our friend Ramesh Srinivasan from the winhelponline tech blog.
    If you haven’t done so already, download and install Microsoft Security Essentials (link below), which has earned the How-To Geek official endorsement. Next, download mseautoplay.zip (link below) and unzip the file to view its contents. Then move the msescan.vbs script file into the Windows directory. Next, double-click on the mseautoplay.reg file… Click Yes to the warning dialog window asking if you’re sure you want to add to the registry. After it’s added you’ll get a confirmation message…click OK.
    Now when you pop in a thumb drive and AutoPlay comes up, you will have the option to scan it with MSE first. MSE starts the scan of the thumb drive… You can use this to scan any removable media. Here is an example of the ability to scan a DVD with MSE before opening any files.
    You can also go into Control Panel and set it as a default AutoPlay option. Open Control Panel, View by Large icons, and click on AutoPlay. Notice that when you go to change the default options for different types of media, scanning with MSE is now included in the dropdown lists.
    Remove Settings
    If you want to remove the MSE AutoPlay handler, Ramesh was kind enough to create an undo registry file. Double-click on undo.reg from the original MSE AutoPlay folder and click Yes to the message to remove the setting. Then you will need to go into the Windows directory and manually delete the msescan.vbs script file.
    This is an awesome trick which will allow you to scan your thumb drives and other removable media from the AutoPlay dialog. We tested it out on XP, Vista, and Windows 7 and it works perfectly on each one.
    Download mseautoplay.zip
    Download Microsoft Security Essentials
    Read Our Review of MSE

    Read the article

  • PENGUIN IS GETTING READY FOR ORACLE OPENWORLD 2012

    - by Zeynep Koch
    Are you looking for reasons to attend Oracle OpenWorld? How about the Oracle Linux sessions and hands-on labs below.
    1. General Session: Oracle Linux Strategy and Roadmap - In this session, Oracle executives will discuss the Linux strategy; the roadmap; contributions to the Linux mainline kernel; and what's in store for upcoming releases of Oracle Linux and the Unbreakable Enterprise Kernel. Don't miss this session.
    2. New Features in Oracle Linux - A Technical Deep Dive - Collaborating with the Linux community, Oracle engineers contribute to advancing Linux for mission-critical deployments. In this technical session, attendees will learn about the recent developments in Oracle Linux and the Unbreakable Enterprise Kernel.
    3. Why Switch to Oracle Linux? - Oracle is the only company that provides a complete Linux solution from applications to disk, fully optimized for Oracle hardware and software, with one-stop support. In this session you will hear from two customers that have successfully implemented Oracle Linux and saved 50 to 90 percent on Linux support costs, as well as the reasons to switch to Oracle Linux.
    4. Debugging and Configuration Best Practices for Oracle Linux - This is one of our best-attended and most informative sessions. In this best practices session, learn how to save time and money while preventing headaches and hassles. Discover expert secrets to get your Linux systems up and running (and keep them running), avoid common pitfalls, prevent problems, and circumvent known issues.
    5. Top Technical Tips for Automatic and Secure Oracle Linux Deployments - In this session, attendees will learn how to easily deploy and install Oracle Linux systems using various technologies like Kickstart, Oracle Enterprise Manager Ops Center, and Oracle VM Templates for applications on Linux. Additionally, the session will share useful Linux security tips and introduce utilities to help with hardening and securely operating an Oracle Linux system.
    We also have a great session in the Oracle Develop track:
    6. DTrace for Oracle Linux - Initially announced at last year's Oracle OpenWorld, DTrace for Oracle Linux is now available for the Unbreakable Enterprise Kernel R2. In this session, held by one of the engineers working on the DTrace for Linux port, you will learn how you can use this powerful and flexible framework in your development environment.
    If you prefer hands-on experience, don't miss our two hands-on labs, where we will cover:
    HOL-1: Oracle Linux Package Management: Configuring and Enabling Services - In this session you will install and configure Oracle VM VirtualBox and import the Oracle Linux virtual appliance. You will then use package management on Oracle Linux with RPM and yum. You will also be able to review Ksplice, zero-downtime kernel updates that enable you to apply security updates, patches, and critical bug fixes without rebooting.
    HOL-2: Oracle Linux Storage Management with LVM and Device Mapper - In this session you will learn about storage management with LVM2 (the Linux Logical Volume Manager) and Btrfs: preparing block devices, creating physical and logical volumes, creating file systems on top of logical volumes, and resizing file systems dynamically. You will also practice setting up software RAID devices and configuring encrypted block devices.
    You will also see Oracle Linux and Ksplice in the three demo pods we will feature at the exhibition demogrounds: one in MySQL Connect and two in Oracle OpenWorld. What more do you need to come to San Francisco?
    Oh, I forgot to mention we also have great weather in the fall. Check out the Content Catalog and register to attend the Oracle Linux sessions.

    Read the article

  • New Management Console in Java SE Advanced 8u20

    - by Erik Costlow-Oracle
    Java SE 8 update 20 is a new feature release designed to provide desktop administrators with better control of their managed systems. The release notes for 8u20 are available from the public JDK release notes page. This release is not a Critical Patch Update (CPU).
    I would like to call attention to two noteworthy features of Oracle Java SE Advanced, the commercially supported version of Java SE for enterprises that require both support and specialized tools. The new Advanced Management Console provides a way to monitor and understand client systems at scale. It allows organizations to track usage and more easily create and manage client configurations like Deployment Rule Sets (DRS). DRS can control execution of tracked applications as well as specify compatibility of which application should use which Java SE installation. The new MSI Installer integrates into various desktop management tools, making it easier to customize and roll out different Java SE versions.
    Advanced Management Console
    The Advanced Management Console is the part of Java SE Advanced designed for desktop administrators whose users need to run many different Java applications. It provides usage tracking for those Applet & Web Start applications to help identify them for guided DRS creation. DRS can then be verified against the tracked data to ensure that end users can run their applications against the appropriate Java version with no prompts.
    Usage tracking also has a different definition for Java SE than it does for most software applications. Unlike most applications, where usage can be determined by a simple run-count, Java is a platform used for launching other applications. This means that usage tracking must answer both "how often is this Java SE version used" and "what applications are launched by it."
    Usage Tracking
    One piece of Java SE Advanced is a centralized usage tracker. Simply placing a properties file on the client informs systems to report information to this usage tracker, so that the desktop administrator can better understand usage. Information is sent via UDP to prevent any delay on the client. The usage tracking server resides at a central location on the intranet to collect information from those clients. The information is stored in a normalized database for performance, meaning that a single usage tracker can handle a large number of clients.
    Guided Deployment Rule Sets
    Deployment Rule Sets were introduced in Java 7 update 40 (September 2013) in order to help administrators control security prompts and guide compatibility. A previous post, Deployment Rule Sets by Example, explains how to configure a rule set so that most applications run against the most secure version but a specific applet may run against the Java version that was current several years ago.
    There is a different set of questions that can be asked by a desktop administrator in a large or distributed firm: Where are the Java RIAs that our users need? Which RIA needs which Java version? Which users need which Java versions? How do I verify these answers once I have them?
    The guided deployment rule set creation uses usage tracker data to identify applications both by certificate hash and location. After creating the rules, a comparison tool exists to verify them against the tracked data: If you intend to run an RIA, is it green? If something specific should be blocked, is it red? This makes user testing easier.
    MSI Installer
    The Windows Installer format (MSI) provides a number of benefits for desktop administrators that customize or manage software at scale. Unlike the basic installer that most users obtain from Java.com or OTN, this installer is built around customization and integration with various desktop management products like SCCM. Desktop administrators using the MSI installer can use every feature provided by the format, such as silent installs/upgrades, low-privileged installations, or self-repair capabilities. Customers looking for Java SE Advanced can download the MSI installer through their My Oracle Support (MOS) account.
    Java SE Advanced
    The new features in Java SE Advanced make it easier for desktop administrators to identify and control client installations at scale. Administrators at organizations that want either the tools or associated commercial support should consider Java SE Advanced.

    Read the article

  • The Evolution of Television and Home Entertainment

    - by Bill Evjen
    This is a group that is focused on entertainment in the aviation industry. I am attending their conference for the first time as it relates to my job at Swank Motion Pictures and what we do for our various markets. I will post my notes here.
    The Evolution of Television and Home Entertainment - Patrick Cosson, Veebeam
    TV has been the center of living rooms for some time. Conversations and culture evolve around the TV. The way we consume this content has been changing dramatically. After TV, we had the MTV revolution of TV. It created shorter attention spans and made us more materialistic, narcissistic, and not easily impressed. Then we came to the Internet. The amount of content has expanded. It contains a ton of user-generated content and provides filtering, organization, and distribution. We now have a problem. We are in the age of digital excess. We can access whatever we want. In conjunction with this, we are moving. The challenge we have now is curation.
    The trends we see:
    - a rapid shift from scheduled to on-demand consumption
    - a move to Internet protocols from cable
    - rapid fragmentation of media
    - a transition from the TV set to a variety of screens
    - social connections bringing mediators and amplifiers
    TiVo - the shift to on demand
    It is driven by a time crunch and provides personal experiences. Once old consumption habits are changed, there is no way back! Experience shows that people are loading up content and then bringing it with them on planes, to hotels, etc.
    Rapid fragmentation of media sources
    Many new professional content sources and channels, the rise of digital distribution, and the rise of user-generated content contribute to the wealth of content sources and abundant choice. Netflix, BBC iPlayer, hulu, Pandora, iTunes, Amazon Video, Vudu, Voddler, Spotify (these companies didn't exist 5 years ago). People now expect this kind of consumption, and people are now thinking about how to deliver all these tools.
    Transition from the TV set to multi-screens
    The TV screen has traditionally been the dominant consumption screen for TV and video. Now the PC, game consoles, and various mobile devices are rapidly becoming common video devices. Multi-screens are now the norm.
    Social connections becoming key mediators
    Social networking enablers, which increasingly funnel traffic on the web, will become an integral part of the discovery, consumption and sharing model for television. The revolution will be broadcast on Facebook and Twitter.
    There is business disruption:
    - a lot of new entrants
    - rapid internationalization
    - increasing competition from existing media players
    - a fragmenting audience base
    The web browser:
    - freedom to access any site
    - the fight over the walled garden
    - most devices are not powerful enough to support a full browser
    - the PC will always be present in the living room
    - wireless link between PC and TV
    - output 1080p, plays anything, secure
    Key players and their challenges
    Services: Internet media is increasingly interconnected to social media and publicly shared UGC. Content delivery is moving to IPTV. Rights management issues are creating silos and hindering a great user experience and growth.
    Devices: Devices are becoming people's windows into all kinds of media from all kinds of sources. There won't be a consolidation of the device landscape, rather the opposite. Finding the right niche makes the most sense.
    We are moving to an on-demand, streaming world. People want full access to anything.

    Read the article

  • Reduce ERP Consolidation Risks with Oracle Master Data Management

    - by Dain C. Hansen
    Reducing the risk of ERP consolidation starts first and foremost with your data. This is nothing new; companies with multiple misaligned ERP systems are often putting inordinate risk on their business. It can translate to too much inventory, long lead times, and shipping issues from poorly organized and specified goods. And don't forget the finance side! When goods are shipped and promises are kept or not kept, there's the issue of accounts. No single chart of accounts translates to no accountability. So – I've decided. I need to consolidate! Well, you can't consolidate ERP applications [for that matter, any of your applications] without first considering your data. This means looking at how your data is being integrated by these ERP systems, how it is being synchronized, and what information is being shared, or not being shared. Most importantly, it means making sure that the data is mastered. What is the best way to do this? In the recent webcast, Reduce ERP Consolidation Risks with Oracle Master Data Management, we outlined 3 key guidelines: #1: Consolidate your Product Data; #2: Consolidate your Customer and Supplier (Party) Data; #3: Consolidate your Financial Data. Together these help customers achieve reduced risk, better customer intimacy, reduced inventory levels, elimination of product variations, and finally a single master chart of accounts. In the case of Oracle's customer Zebra Technologies, they were able to consolidate over 140 applications by mastering their data. Ultimately this gave them 60% cost savings for the year on IT spend.
    Oracle's Solution for ERP Consolidation: Master Data Management. Oracle's enterprise master data management (MDM) can play a big role in ERP consolidation. It includes a set of products that consolidates and maintains complete, accurate, and authoritative master data across the enterprise and distributes this master information to all operational and analytical applications as a shared service. It's optimized to work with any application source (not just Oracle's) and can integrate using technology from Oracle Fusion Middleware (i.e. GoldenGate for data synchronization and real-time replication, or ODI with its E-LT optimized bulk data and transformation capability). In addition, especially for ERP consolidation use cases, it's important to leverage the AIA and SOA capabilities that are part of Fusion Middleware to connect these multiple applications together and relay the data into the correct hub. Oracle's MDM strategy is a unique offering in the industry, one that has common elements across the top and bottom in Middleware, BI/DW, and Engineered Systems, combined with Enterprise Data Quality to enable comprehensive Data Governance at all levels. In addition, Oracle MDM provides best-in-class capabilities to master all variations of data, including customer, supplier, product, and financial data.
But ultimately at the center of Oracle MDM is your data: making it more trusted, making it secure and accessible as part of a role-based approach, and getting it to make sense to you in any situation, whether it's a specific ERP process like the one we talked about or something that is custom to your organization. To learn more about these techniques in ERP consolidation, watch our webcast or go to our Oracle MDM website at www.oracle.com/goto/mdm

    Read the article

  • Safely deploying changes to production servers

    - by oazabir
    When you deploy incremental changes on a production server, which is running and live all the time, you sometimes see error messages like “Compiler Error Message: The Type ‘XXX’ exists in both…”. Sometimes you find the Application_Start event not firing although you shipped a new class, dll or web.config. Sometimes you find static variables not getting initialized, and so on. There are so many weird things that happen on web servers when you incrementally deploy changes and the server has been up and running for several weeks. So, I came up with a foolproof set of housekeeping steps that we always perform whenever we deploy an incremental change to our websites. These steps ensure that the websites are properly recycled, caches are cleared, and all the data stored at Application level is initialized.
    First of all, you should have multiple web servers behind a load balancer. This way you can take one server out of production traffic, do your deployment and housekeeping tasks like restarting IIS, and then put it back. Then you can do it for the second server and so on. This ensures there's no outage for customers. If you can do it reasonably fast, hopefully customers won't notice the discrepancy between servers, some having new code and some having old code. You should only do this when your changes aren't drastic. For example, you aren't delivering a completely revamped UI. In that case, some users hitting server1 with the latest UI will suddenly get a completely different experience and then, on the next page refresh, they might hit server2 with old code and get a totally different experience. This works for incremental, non-dramatic changes only.
    During deployment you should follow these steps:
    1. Take server X out of the load balancer so that it does not get any traffic.
    2. Stop all Windows services on the server.
    3. Stop IIS.
    4. Delete the Temporary ASP.NET Files folders of all .NET versions, in case you have multiple .NET versions running. You can follow this link.
    5. Deploy the changes.
    6. Flush any distributed cache you have, for example Velocity or Memcached.
    7. Start IIS.
    8. Start the Windows services on the server.
    9. Warm up all websites by hitting major URLs on the websites. You should have some automated script to do this. You can use tinyget to hit some major URLs, especially pages that take a lot of time to compile. Read my post on keeping websites warm with zero coding.
    10. Put server X back into the load balancer so that it starts receiving traffic.
    That's it. It should give you a clean deployment and prevent unexpected errors. You should print these steps and hang them on the desk of your deployment guys so that they never forget them under deployment pressure.
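    As a minimal sketch of how steps 2 through 8 might be scripted on one server (the service name, .NET folder, and deployment step are placeholders you would replace with your own), a batch file along these lines can be run after the server is pulled from the load balancer:

    REM housekeeping.bat - hypothetical per-server deployment housekeeping
    net stop "MyAppWindowsService"
    iisreset /stop
    REM clear the Temporary ASP.NET Files contents for each installed .NET version
    for /d %%D in ("%SystemRoot%\Microsoft.NET\Framework64\v4.0.30319\Temporary ASP.NET Files\*") do rmdir /s /q "%%D"
    REM deploy the new build and flush the distributed cache (Velocity/Memcached) here
    iisreset /start
    net start "MyAppWindowsService"
    REM warm up key URLs (for example with tinyget) before returning the server to the load balancer

    Repeat the cleanup line for the 32-bit Framework folder and any other .NET versions installed on the box.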

    Read the article

  • How to Format a USB Drive in Ubuntu Using GParted

    - by Trevor Bekolay
    If a USB hard drive or flash drive is not properly formatted, then it will not show up in the Ubuntu Places menu, making it hard to interact with. We'll show you how to format a USB drive using the tool GParted. Note: Formatting a USB drive will destroy any data currently stored on it. If you think that your USB drive is already properly formatted, but Ubuntu just isn't picking it up, try unplugging it and plugging it back in to a different USB slot, or restarting your machine with the device plugged in on start-up. Open a terminal by clicking on Applications in the top-left of the screen, then Accessories > Terminal. GParted should be installed by default, but we'll make sure it's installed by entering the following command in the terminal: sudo apt-get install gparted To open GParted, enter the following command in the terminal: sudo gparted Find your USB drive in the drop-down box at the top right of the GParted window. The drive should be unallocated – if it has a valid partition on it, then you may be looking at the wrong drive. Note: Make sure you're on the correct drive, as making changes on the wrong hard drive with GParted can delete all data on a hard drive! Assuming you're on the right drive, right-click on the unallocated grey block and click New. In the window that pops up, change the File System to fat32 for USB Flash Drives, NTFS for USB Hard Drives that will be used in Windows, or ext3/ext4 for USB Hard Drives that will be used exclusively in Linux. Add a label if you'd like, and then click Add. Click the green checkmark and then the Apply button to apply the changes. GParted will now format your drive. If you're formatting a large USB Hard Drive, this can take some time. Once the process is done, you can close GParted, and the drive will now show up in the Places menu. Clicking on the drive will mount it and open it in a File Browser window. It will also add a shortcut to the drive on the Desktop by default. Your USB drive is now ready to store your files!
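    If you prefer to stay in the terminal, the same result can be had from the command line with dosfstools instead of the GParted GUI. This is only a sketch: /dev/sdX1 is a placeholder for your drive's actual partition (double-check the device name with sudo fdisk -l, since pointing these commands at the wrong disk will wipe it), and it assumes a partition already exists on the stick.

    sudo umount /dev/sdX1                        # unmount the partition if Ubuntu auto-mounted it
    sudo mkfs.vfat -F 32 -n USBDRIVE /dev/sdX1   # create a FAT32 filesystem labelled USBDRIVE

    For an NTFS or Linux-only drive, mkfs.ntfs or mkfs.ext3 / mkfs.ext4 can be used in the same way.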

    Read the article

  • Remote synchronization

    - by Tomas Mysik
    Hi all, today we would like to show you another improvement we have prepared for NetBeans 7.2. Today, let's talk a little bit about remote synchronization. If you already use our simple (S)FTP client, this enhancement could be useful for you. Simply right-click Source Files and select Synchronize. Please note that remote synchronization works properly only on the whole project (meaning that the Source Files node must be selected). The Synchronize action is also available on individual files (more files can be selected at once), but in that case the suggested operation (download, upload etc.) is not as precise. Also please note that the suggested operations are not 100% reliable, since the timestamps provided by FTP servers are not exact. Once the remote files (their names and paths only, of course) are fetched, the main dialog appears. As you can see, NetBeans tries to suggest the operations (upload, download etc.) which should be done for each individual file of your project. If you are interested only in some particular changes, you can simply filter the list. Since we have a file conflict, we need to resolve it first. Fortunately this is very easy because we just select the desired file and click the Diff button. The remote version of our file is downloaded and compared with the local version. The result is displayed in a dialog where you can easily apply and/or refuse the remote changes, or even simply type changes manually into the local version of the selected file. Once we are done with our changes, the operation for the selected file changes to Upload and the file is marked with * (since we made some changes). Please note that if you now click the Cancel button, in fact no changes are made to our local file. As you can see, if we have one or more files selected, we can change their operation to: no operation (the file won't be synchronized), download, upload, delete (both the local and remote file), or reset (the operation is reset to the one originally suggested by NetBeans and all changes done via the Diff action are discarded). Now we are ready to synchronize our project. NetBeans will show us the synchronization summary (this dialog can be omitted, see the Show Summary checkbox on the previous image). The synchronization itself starts and we can see its progress and of course its result. As always, all the operations can be reviewed in the Output window. That's all for today; as always, please test it and report all the issues or enhancements you find in NetBeans BugZilla (component php, subcomponent FTP support).

    Read the article

  • Announcement: Employee Info Starter Kit (v6.0–ASP.NET MVC Edition) is Released

    - by Mohammad Ashraful Alam
    Originally posted on: http://geekswithblogs.net/joycsharp/archive/2013/06/16/announcement-employee-info-starter-kit-v6.0asp.net-mvc-edition-is-released.aspx After a long wait, the next version of Employee Info Starter Kit is released! This starter kit is basically a project template that contains code samples targeting a specific technology, such as ASP.NET Web Forms, ASP.NET MVC etc. Since its first release, this open source project has gained huge popularity in the developer community and has had 250K+ combined downloads. This starter kit is honored to be placed on the official ASP.NET site, along with other ASP.NET starter kits, which are all considered to represent the “best” ASP.NET coding standards recommended by Microsoft. EISK is showcased in Microsoft's Channel 9 Weekly Show as well. The ASP.NET MVC Edition of the new version 6.0 bundles most of the greatest and most successful platforms, frameworks and technologies together, to enable web developers to learn and build manageable, high-performance web applications with a rich user experience, effectively and quickly.
    User End Specifications: creating a new employee record; reading existing employee records; updating an existing employee record; deleting existing employee records; role-based security model.
    Key Technology Areas: ASP.NET MVC 4, Entity Framework 4.3.1, SQL Server Compact Edition 4, Visual Studio 2012.
    QuickStart Guide: Getting started with EISK 6.0 ASP.NET MVC is pretty easy. Once you have Visual Studio 2012 installed, just follow the steps below:
    1. Download the EISK 6.0 MVC version.
    2. Extract the file.
    3. From the extracted folder, click the solution file "Eisk.MVC-VS2012.sln".
    4. Right-click the "Eisk.MVC" project node and select "Set as StartUp Project".
    5. Hit Ctrl+F5 and explore!
    Architectural Overview: the overall architecture is based on the Model-View-Controller pattern, with support for desktop and mobile browsers; usage of the Domain Model, Repository and Unit of Work patterns from the Domain-Driven Development approach; usage of Data Annotations in model (entity) classes to centralize the basic validation mechanism, which facilitates the DRY principle; usage of the IValidatableObject interface in model (entity) classes to isolate custom business logic from the application layer; usage of OOP inheritance and the Value Object pattern in model (entity) classes to provide reusability in the application architecture; usage of the View Model and Editor Model patterns to provide testable view rendering logic; and several helper classes and extension methods to help developers build applications with less code. If you want to learn more about the details, just check the following links: Getting Started - Hands on Coding Walkthrough – Technology Stack - Design & Architecture. Enjoy!

    Read the article

  • Deploying an SSL Application to Windows Azure – The Dark Secret

    - by ToStringTheory
    When working on an application that had been in production for some time, but was about to have a shopping cart added to it, the necessity for SSL certificates came up. When ordering the certificates through the vendor, the certificate signing request (CSR) was generated through the provider's (http://register.com) web interface, and within a day, we had our certificate. At first, I thought that the certification process would be the hard part… Little did I know that my fun was just beginning…
    The Problem I'll be honest, I had never really secured a site with SSL before. This was a learning experience for me in the first place, but little did I know that I would be learning more than the simple procedure. I understood a bit about SSL already, the mechanisms in how it works – the secure handshake, CAs, chains, etc… What I didn't realize was the importance of the CSR in the whole process. Apparently, when the CSR is created, a public key is created at the same time, as well as a private key that is stored locally on the PC that generated the request. When the certificate comes back and you import it back into IIS (assuming you used IIS to generate the CSR), all of the information is combined together and the SSL certificate is added into your store. Since, at the time the certificate was ordered for our site, the option to use the online interface to generate the CSR was chosen, the certificate came back to us in 5 separate files: a root certificate (*.crt file), an intermediate certificate (*.crt file), another intermediate certificate (*.crt file), the SSL certificate for our site (*.crt file), and the private key for our certificate (*.key file). Well, in case you don't know much about Windows Azure and SSL certificates, the first thing you should learn is that certificates can only be uploaded to Azure if they are in a PFX package – securable by a password. Also, in the case of our SSL certificate, you need to include the private key with the file. As you can see, we didn't have a PFX file to upload. If you don't get the simple PFX from your hosting provider, but rather the multiple files, you will soon find out that the process has turned from something that should be simple – to one that borders on a circle of hell… Probably between the fifth and seventh somewhere…
    The Solution The solution is to take the files that make up the certificate's chain and key, and combine them into a file that can be imported into your local computer's store, as well as uploaded to Windows Azure. I cannot take the credit for this information, as I simply researched a while before finding out how to do this.
    1. Download the OpenSSL for Windows toolkit (Win32 OpenSSL v1.0.1c)
    2. Install the OpenSSL for Windows toolkit
    3. Download and move all of your certificate files to an easily accessible location (you'll be pointing to them in the command prompt, so I put them in a subdirectory of the OpenSSL installation)
    4. Open a command prompt
    5. Navigate to the folder where you installed OpenSSL
    6. Run the following command: openssl pkcs12 -export -out {outcert.pfx} -inkey {keyfile.key} -in {sslcert.crt} -certfile {ca1.crt} -certfile {ca2.crt}
    From this command, you will get a file, outcert.pfx, with the sum total of your SSL certificate (sslcert.crt), private key {keyfile.key}, and as many CA/chain files as you need {ca1.crt, ca2.crt}. Taking this file, you can then import it into your own IIS in one operation, instead of importing each certificate individually.
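    Before importing or uploading the bundle, it can be worth a quick sanity check; as a small sketch (using the same placeholder file name as above), OpenSSL can list what actually made it into the PFX:

    openssl pkcs12 -info -in outcert.pfx -noout

    After you enter the export password, the output summarizes the certificate and key bags in the file; if the private key or part of the CA chain is missing, the IIS import or the Azure upload won't behave as expected.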
You can also upload the PFX to Azure, and once you add the SSL certificate references to the cloud project in Visual Studio, you're good to go! Conclusion When I first looked around for a solution to this problem, there were not many places online that had the information I was looking for. While what I ended up having to do may seem obvious, it isn't for everyone, and I hope that this can at least help one developer out there solve the problem without hours of work!

    Read the article

  • trying to use mod_proxy with httpd and tomcat

    - by techsjs2012
    I been trying to use mod_proxy with httpd and tomcat... I have on VirtualBox running Scientific Linux which has httpd and tomcat 6 on it.. I made two nodes of tomcat6. I followed this guide like 10 times and still cant get the 2nd node of tomcat working.. http://www.richardnichols.net/2010/08/5-minute-guide-clustering-apache-tomcat/ Here is the lines from my http.conf file <Proxy balancer://testcluster stickysession=JSESSIONID> BalancerMember ajp://127.0.0.1:8009 min=10 max=100 route=node1 loadfactor=1 BalancerMember ajp://127.0.0.1:8109 min=10 max=100 route=node2 loadfactor=1 </Proxy> ProxyPass /examples balancer://testcluster/examples <Location /balancer-manager> SetHandler balancer-manager AuthType Basic AuthName "Balancer Manager" AuthUserFile "/etc/httpd/conf/.htpasswd" Require valid-user </Location> Now here is my server.xml from node1 <?xml version='1.0' encoding='utf-8'?> <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <!-- Note: A "Server" is not itself a "Container", so you may not define subcomponents such as "Valves" at this level. Documentation at /docs/config/server.html --> <Server port="8005" shutdown="SHUTDOWN"> <!--APR library loader. Documentation at /docs/apr.html --> <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" /> <!--Initialize Jasper prior to webapps are loaded. Documentation at /docs/jasper-howto.html --> <Listener className="org.apache.catalina.core.JasperListener" /> <!-- Prevent memory leaks due to use of particular java/javax APIs--> <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" /> <!-- JMX Support for the Tomcat server. Documentation at /docs/non-existent.html --> <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" /> <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" /> <!-- Global JNDI resources Documentation at /docs/jndi-resources-howto.html --> <GlobalNamingResources> <!-- Editable user database that can also be used by UserDatabaseRealm to authenticate users --> <Resource name="UserDatabase" auth="Container" type="org.apache.catalina.UserDatabase" description="User database that can be updated and saved" factory="org.apache.catalina.users.MemoryUserDatabaseFactory" pathname="conf/tomcat-users.xml" /> </GlobalNamingResources> <!-- A "Service" is a collection of one or more "Connectors" that share a single "Container" Note: A "Service" is not itself a "Container", so you may not define subcomponents such as "Valves" at this level. 
Documentation at /docs/config/service.html --> <Service name="Catalina"> <!--The connectors can use a shared executor, you can define one or more named thread pools--> <!-- <Executor name="tomcatThreadPool" namePrefix="catalina-exec-" maxThreads="150" minSpareThreads="4"/> --> <!-- A "Connector" represents an endpoint by which requests are received and responses are returned. Documentation at : Java HTTP Connector: /docs/config/http.html (blocking & non-blocking) Java AJP Connector: /docs/config/ajp.html APR (HTTP/AJP) Connector: /docs/apr.html Define a non-SSL HTTP/1.1 Connector on port 8080 <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> --> <!-- A "Connector" using the shared thread pool--> <!-- <Connector executor="tomcatThreadPool" port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> --> <!-- Define a SSL HTTP/1.1 Connector on port 8443 This connector uses the JSSE configuration, when using APR, the connector should be using the OpenSSL style configuration described in the APR documentation --> <!-- <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" /> --> <!-- Define an AJP 1.3 Connector on port 8009 --> <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" /> <!-- An Engine represents the entry point (within Catalina) that processes every request. The Engine implementation for Tomcat stand alone analyzes the HTTP headers included with the request, and passes them on to the appropriate Host (virtual host). Documentation at /docs/config/engine.html --> <!-- You should set jvmRoute to support load-balancing via AJP ie : <Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1"> --> <Engine name="Catalina" defaultHost="localhost" jvmRoute="node1"> <!--For clustering, please take a look at documentation at: /docs/cluster-howto.html (simple how to) /docs/config/cluster.html (reference documentation) --> <!-- <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/> --> <!-- The request dumper valve dumps useful debugging information about the request and response data received and sent by Tomcat. Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.valves.RequestDumperValve"/> --> <!-- This Realm uses the UserDatabase configured in the global JNDI resources under the key "UserDatabase". Any edits that are performed against this UserDatabase are immediately available for use by the Realm. --> <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/> <!-- Define the default virtual host Note: XML Schema validation will not work with Xerces 2.2. --> <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true" xmlValidation="false" xmlNamespaceAware="false"> <!-- SingleSignOn valve, share authentication between web applications Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.authenticator.SingleSignOn" /> --> <!-- Access log processes all example. Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs" prefix="localhost_access_log." suffix=".txt" pattern="common" resolveHosts="false"/> --> </Host> </Engine> </Service> </Server> now here is the server.xml file from node2 <?xml version='1.0' encoding='utf-8'?> <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. 
See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <!-- Note: A "Server" is not itself a "Container", so you may not define subcomponents such as "Valves" at this level. Documentation at /docs/config/server.html --> <Server port="8105" shutdown="SHUTDOWN"> <!--APR library loader. Documentation at /docs/apr.html --> <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" /> <!--Initialize Jasper prior to webapps are loaded. Documentation at /docs/jasper-howto.html --> <Listener className="org.apache.catalina.core.JasperListener" /> <!-- Prevent memory leaks due to use of particular java/javax APIs--> <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" /> <!-- JMX Support for the Tomcat server. Documentation at /docs/non-existent.html --> <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" /> <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" /> <!-- Global JNDI resources Documentation at /docs/jndi-resources-howto.html --> <GlobalNamingResources> <!-- Editable user database that can also be used by UserDatabaseRealm to authenticate users --> <Resource name="UserDatabase" auth="Container" type="org.apache.catalina.UserDatabase" description="User database that can be updated and saved" factory="org.apache.catalina.users.MemoryUserDatabaseFactory" pathname="conf/tomcat-users.xml" /> </GlobalNamingResources> <!-- A "Service" is a collection of one or more "Connectors" that share a single "Container" Note: A "Service" is not itself a "Container", so you may not define subcomponents such as "Valves" at this level. Documentation at /docs/config/service.html --> <Service name="Catalina"> <!--The connectors can use a shared executor, you can define one or more named thread pools--> <!-- <Executor name="tomcatThreadPool" namePrefix="catalina-exec-" maxThreads="150" minSpareThreads="4"/> --> <!-- A "Connector" represents an endpoint by which requests are received and responses are returned. 
Documentation at : Java HTTP Connector: /docs/config/http.html (blocking & non-blocking) Java AJP Connector: /docs/config/ajp.html APR (HTTP/AJP) Connector: /docs/apr.html Define a non-SSL HTTP/1.1 Connector on port 8080 <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> --> <!-- A "Connector" using the shared thread pool--> <!-- <Connector executor="tomcatThreadPool" port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> --> <!-- Define a SSL HTTP/1.1 Connector on port 8443 This connector uses the JSSE configuration, when using APR, the connector should be using the OpenSSL style configuration described in the APR documentation --> <!-- <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" /> --> <!-- Define an AJP 1.3 Connector on port 8009 --> <Connector port="8109" protocol="AJP/1.3" redirectPort="8443" /> <!-- An Engine represents the entry point (within Catalina) that processes every request. The Engine implementation for Tomcat stand alone analyzes the HTTP headers included with the request, and passes them on to the appropriate Host (virtual host). Documentation at /docs/config/engine.html --> <!-- You should set jvmRoute to support load-balancing via AJP ie : <Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1"> --> <Engine name="Catalina" defaultHost="localhost" jvmRoute="node2"> <!--For clustering, please take a look at documentation at: /docs/cluster-howto.html (simple how to) /docs/config/cluster.html (reference documentation) --> <!-- <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/> --> <!-- The request dumper valve dumps useful debugging information about the request and response data received and sent by Tomcat. Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.valves.RequestDumperValve"/> --> <!-- This Realm uses the UserDatabase configured in the global JNDI resources under the key "UserDatabase". Any edits that are performed against this UserDatabase are immediately available for use by the Realm. --> <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/> <!-- Define the default virtual host Note: XML Schema validation will not work with Xerces 2.2. --> <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true" xmlValidation="false" xmlNamespaceAware="false"> <!-- SingleSignOn valve, share authentication between web applications Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.authenticator.SingleSignOn" /> --> <!-- Access log processes all example. Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs" prefix="localhost_access_log." suffix=".txt" pattern="common" resolveHosts="false"/> --> </Host> </Engine> </Service> </Server> I dont know what it is. but I been trying for days

    Read the article

< Previous Page | 388 389 390 391 392 393 394 395 396 397 398 399  | Next Page >