Search Results

Search found 10388 results on 416 pages for 'hardware selection'.

  • Antenna Aligner Part 10: Updates and emails…

    - by Chris George
    Since my last post back in July, I've not done huge amounts of work on my app, for two reasons. Firstly, no time! Secondly, I wanted to leave it out in the wild for a while and see what happened. Well, what happened? Over 1,300 users, that's what happened! This uptake is beyond my wildest expectations, and apart from a couple of issues that I'll mention in a minute, most of the feedback has been very positive indeed! I've had several emails giving me feedback and reporting issues, all of which I have made a point of replying to immediately. This act alone has met with favourable replies! One of the main issues was with the iPad. It turns out that my app is only accurate in portrait mode: turning it into landscape offsets the direction by ±90 degrees! Whoops! I think I've fixed this by disabling the orientation switching, but I have not yet had an iPad to test this on. I also had several emails from iPod Touch users claiming the app did not work for them; specifically, the compass view did not work. On investigation, it turns out that the iPod Touch does not have the compass hardware required to do this. Unfortunately there is no way to exclude iPod Touches from the list of supported devices, so I've just had to make it very clear in the iTunes description that the device is not fully supported. You can still get the list of transmitters, but you then have to use a real compass to get the bearing. But that's not the end of the world. Several customers have requested that the aerial polarisation be displayed in the app. I was already working on this, and the data was already there; it was just a case of displaying it in the UI. I have a solution now, and this will be in the next release. Of course, with the digital switchover in full swing across the UK, there has been one set of data updates (in 1.0.3), and another is due shortly. This reflects the transmitters as they switch over to digital fully and their power output is increased. So all in all I'm very pleased with the feedback I've had, and I'm looking to get the next release out there by early December (allowing for the 2-3 week Apple approval lag!)
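
    As an aside for anyone hitting the same ±90° problem: the heading a compass API reports is relative to the device's top edge, so a rotated UI has to compensate before drawing the bearing. Here is a minimal, platform-agnostic sketch in Python (purely illustrative and not taken from the app; the exact sign of each offset depends on the platform's orientation convention):

        # Illustrative only: compass headings are reported relative to the
        # device's top edge, so a rotated UI must adjust before displaying.
        ORIENTATION_OFFSETS = {
            "portrait": 0,
            "landscape_left": 90,    # sign depends on the platform convention
            "landscape_right": -90,
            "portrait_upside_down": 180,
        }

        def corrected_heading(raw_heading_degrees, orientation):
            """Return the heading adjusted for the UI orientation, in 0-359."""
            offset = ORIENTATION_OFFSETS.get(orientation, 0)
            return (raw_heading_degrees + offset) % 360

        # Example: a raw reading of 10 degrees shown in landscape becomes 100.
        print(corrected_heading(10, "landscape_left"))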

    Read the article

  • Today's Links (6/20/2011)

    - by Bob Rhubart
    Why your security sucks | Eric Knorr A conversation with InfoWorld security expert Roger Grimes reveals why the latest burst of attacks is just business as usual. JDev 11g R2 - ADF BC Dependency Diagram Feature | Andrejus Baranovskis Oracle ACE Director Andrejus Baranovskis continues his exploration of JDeveloper 11g R2. Mobile Apps Put the Web in Their Rear-view Mirror | Charles Newark-French "Our analysis shows that, for the first time ever, daily time spent in mobile apps surpasses desktop and mobile web consumption," says Newark-French. "This stat is even more remarkable if you consider that it took less than three years for native mobile apps to achieve this level of usage, driven primarily by the popularity of iOS and Android platforms." Vivek Kundra, a public servant who gets stuff done | Craig Newmark Craigslist founder Craig Newmark bids farewell to the nation's first CIO. Weblogic, QBrowser and topics | Eric Elzinga Elzinga says: "Besides using the Weblogic Console to add subscribers to our topics we can also use QBrowser to browse queues and topics on your Weblogic Server." Java EE talks at JAX Conf | Arun Gupta Arun Gupta shares links to several Java EE presentations taking place at this week's JAX Conference in San Jose, CA. Development gotchas and silver bullets | Andy Mulholland Mulholland explains why "Software development has to change to fit with new business practices!" Oracle is Proud Sponsor of Gartner Security and Risk Management Summit 2011 | Troy Kitch Oracle will have a very strong presence at this year's Gartner Security and Risk Management Summit 2011 in Washington D.C., June 20-23. Database Web Service using Toplink DB Provider | Vishal Jain "With JDeveloper 11gR2 you can now create database based web services using JAX-WS Provider," says Jain. Sample Chapter: A Fusion Applications Technical Overview An excerpt from "Managing Oracle Fusion Applications" by Richard Bingham, published by Oracle Press, May 2011. White Paper: Oracle Optimized Solution for Enterprise Cloud Infrastructure This paper provides recommendations and best practices for optimizing virtualization infrastructures when deploying the Oracle Enterprise Cloud Infrastructure. White paper: Oracle Optimized Solution for Lifecycle Content Management Authors Donna Harland and Nick Klosk illustrate how Oracle Enterprise Content Management Suite and Oracle's Sun Storage Archive Manager work with Oracle's Sun hardware. Bay Area Coherence Special Interest Group Date: Thursday, July 21, 2011 Time: 4:30pm - 8:15pm ET - Note that parking at 475 Sansome closes at 8:30pm Location: Oracle Office, 475 Sansome Street, San Francisco, CA Google Map Speakers: Chris Akker, Solutions Engineer, F5 Paul Cleary, Application Architect, Oracle Alexey Ragozin, Independent Consultant Brian Oliver, Oracle

    Read the article

  • BizTalk 2009 - Installing BizTalk Server 2009 on XP for Development

    - by StuartBrierley
    At my previous employer, when developing for BizTalk Server 2004 using Visual Studio 2003, we made use of separate development and deployment environments; developing in Visual Studio on our client PCs and then deploying to a separate shared BizTalk 2004 Server from there. This server was part of a multi-server Standard BizTalk environment comprising separate BizTalk Server 2004 and SQL Server 2000 servers. This environment was implemented a number of years ago by an outside consulting company, and while it worked it did occasionally cause contention issues with three developers deploying to the same server to carry out unit testing! Now that I am making the design and implementation decisions about the environment that BizTalk will be developed in and deployed to, I have chosen to create a single "server" installation on my development PC, installing SQL Server 2008, Visual Studio 2008 and BizTalk Server 2009 on a single system. The client PC in use is actually a MacBook Pro running Windows XP; not the most powerful of systems for high volume processing, but it should be powerful enough to allow development and initial unit testing to take place. I did not need to, and so chose not to, install all of the components detailed in the Microsoft guide for installing BizTalk 2009 on Windows XP, but I did follow the basics of the procedures detailed within. Outlined below are the highlights of this process and any details of the choices I made.   Install IIS I had previously installed Windows XP, including all current service packs and critical updates. At the time of installation this included Service Pack 3, the .Net Framework 3.5 and MS Windows Installer 3.1. Having a running XP system, my first step was to install IIS - this is quite straightforward and posed no difficulties. Install Visual Studio 2008 The next step for me was to install Visual Studio 2008. Making sure to select a custom installation is crucial at this point, as you need to make sure that you deselect SQL Server 2005 Express Edition, as it can cause the BizTalk installation to fail. The installation guide suggests that you only select Visual C# when selecting features to install, but I decided that due to some legacy systems I have code for, I would also select the VB and ASP options. Visual Studio 2008 Service Pack 1 Following the completion of the installation of Visual Studio itself you should then install Visual Studio 2008 Service Pack 1. SQL Server 2008 Standard Edition The next step before installing BizTalk Server 2009 itself is to install SQL Server 2008 Standard Edition. On the feature selection screen make sure that you select the following options: Database Engine Services SQL Server Replication Full-Text Search Analysis Services Reporting Services Business Intelligence Development Studio Client Tools Connectivity Integration Services Management Tools Basic and Complete Use the default instance and the same accounts for all SQL Server instances - in my case I used the Network Service and Local Service accounts for the two sets of accounts. On the database engine configuration screen I selected Windows authentication and added the current user, adding the same user again on the Analysis Services configuration screen. All other screens were left on the default settings. The SQL Server 2008 installation also included the installation of the hotfix for XP KB942288-v3, the Windows Installer 4.5 Redistributable.
System Configuration At this stage I took a moment to disable the SQL Server shared memory protocol and enable the Named Pipes and TCP/IP protocols. These can be found in the SQL Server Configuration Manager > SQL Server Network Configuration > Protocols for MSSQLServer. I also made sure that the DTC settings were configured correctly.   BizTalk Server 2009 The penultimate step is to install BizTalk Server 2009 Standard Edition. I had previously downloaded the redistributable prerequisites as a CAB file, so I was able to make use of this when carrying out the installation. When selecting which components to install I selected: Server Runtime BizTalk EDI/AS2 Runtime WCF Adapter Runtime Portal Components Administrative Tools WCF Administration Tools Developer Tools and SDK, Enterprise SSO Administration Module Enterprise SSO Master Secret Server Business Rules Components BAM Alert Provider BAM Client BAM Eventing Once installation has completed, clear the Launch BizTalk Server Configuration check box and select Finish. Verify the Installation Before configuring BizTalk Server it is a good idea to check that BizTalk Server 2009 is installed and that SQL Server 2008 has started correctly. The easiest way to verify the BizTalk installation is to check Programs and Features in the Control Panel. Check that SQL is started by looking in the SQL Server Configuration Manager. Configure BizTalk Server 2009 Finally we are ready to configure BizTalk Server 2009. To start this I opted for a custom configuration that allowed me to choose the settings to be used in more detail. For all databases I selected the local server and default database names. For all accounts I used a local account that had been created specifically for the BizTalk services. For all Windows groups I allowed the configuration wizard to create the default local groups. The configuration wizard then ran. Upon completion you will be presented with a screen detailing the success or failure of the configuration. If your configuration failed you will need to sort out the issues and try again (it is possible to save the configuration settings for later use if you want to - except passwords, of course!). If you see lots of nice green ticks - congratulations, BizTalk Server 2009 on XP is now installed and configured ready for development.
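
    As a quick sanity check after enabling TCP/IP and before running the BizTalk configuration wizard, something along the following lines can confirm that the local default SQL Server instance accepts connections. This is only a hedged sketch: it assumes the pyodbc package and the "SQL Server" ODBC driver are available on the machine, neither of which is part of the installation steps above.

        # Minimal connectivity check against the local default SQL Server instance.
        # Assumes pyodbc is installed and the "SQL Server" ODBC driver is present.
        import pyodbc

        conn_str = (
            "DRIVER={SQL Server};"
            "SERVER=localhost;"        # default instance, as used above
            "DATABASE=master;"
            "Trusted_Connection=yes;"  # Windows authentication, as configured above
        )

        try:
            with pyodbc.connect(conn_str, timeout=5) as conn:
                row = conn.cursor().execute("SELECT @@VERSION").fetchone()
                print("SQL Server is reachable:", row[0].splitlines()[0])
        except pyodbc.Error as err:
            print("SQL Server is not reachable - check protocols and services:", err)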

    Read the article

  • VLC will sometimes have issues displaying video in fullscreen. What could cause this? How would I troubleshoot the issue?

    - by George Marian
    Recently VLC has been having issues displaying video in fullscreen mode. AFAIK, nothing has changed with the video card drivers and it's certainly the same version of VLC. (/me shakes a fist at the repository maintainers) This has worked without issue in the past. In fact, I've had as many as 6 instances of VLC running, each playing a video. One was always fullscreen on my second monitor, while the others were tiled on my primary monitor. I was able to toggle any of the other 5 into fullscreen mode and the video displayed without issue. Lately, I've been having trouble running 2 instances in fullscreen mode. (Sometimes, even a single instance will not display the video in fullscreen.) VLC will continue to play the video, but in fullscreen mode I see nothing but a black screen. Sometimes, the video will display if I maximize the VLC window. Other times, I have to settle for a smaller sized window. I don't know if this is pertinent, but sometimes changing the min/max state of a Firefox window (Minefield, specifically) seemed to allow the troublesome instance to display the video in fullscreen mode. However, that did not prove to be a consistent workaround. Sometimes, it seemed that closing a Firefox window did the trick, though that isn't consistently successful either. (I futzed with Firefox, because with the crazy number of windows and tabs that I normally have open, it regularly hogs about 1 GB of RAM.) Another bit of funkiness that comes to mind is the fact that my secondary monitor is considered the primary on boot-up. I use xrandr to designate the real 1st monitor as primary after boot-up, as suggested by someone in a question I asked on the Unix & Linux SE site. Specs: Ubuntu 10.10 w/ Gnome and Compiz 8GB RAM AMD Phenom II 965 Black Edition Asus M4A79 Deluxe mobo XFX ATI Radeon HD 5750 w/ 1GB RAM VLC is configured to use the hardware overlay for video (as per the default setting) Does anyone have an idea what may cause this issue or how I may go about troubleshooting it? Update: Right now I have 2 instances of VLC playing, each in fullscreen mode on a separate monitor. This is what I see:
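
    One way to narrow this down (a suggestion, not part of the original question) is to force VLC to use different video output modules and see which of them render correctly in fullscreen; if the overlay path is the culprit, the x11 or gl outputs should behave differently. A small sketch, assuming a vlc binary on the PATH and that the listed module names exist in this VLC build (check vlc --list):

        # Try the same clip with different video output modules to isolate
        # whether the overlay/output path causes the black fullscreen.
        import subprocess

        VIDEO = "/path/to/test-clip.avi"  # placeholder path

        for vout in ("x11", "xv", "gl"):  # module names vary between builds
            print("Trying --vout", vout)
            subprocess.run(["vlc", "--fullscreen", "--no-overlay",
                            "--vout", vout, "--play-and-exit", VIDEO])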

    Read the article

  • CISDI Cloud - Industrial Cloud Computing Platform based on Oracle Products

    - by Wenyu Duan
    In today's era, Cloud Computing is becoming integral to the vision and corporate strategy of leading organizations and is often seen as a key business driver to achieve growth and innovation. Headquartered in Chongqing, China, CISDI Engineering Co., Ltd. is a large state-owned engineering company, offering consulting, engineering design, EPC contracting, and equipment integration services to steel producers all over the world. With over 50 years of experience, CISDI offers quality services for every aspect of production for projects in the metal industry, and the company has evolved into a leading international engineering service group with 18 subsidiaries providing complete lifecycle services for E&C projects. The CISDI group delegation, led by Mr. Zhaohui Yu, CEO of CISDI Group, Mr. Zhiyou Li, CEO of CISDI Info, Mr. Qing Peng, CTO of CISDI Info, and Mr. Xin Xiao, Head of CISDI Info's R&D, joined Oracle OpenWorld 2012 and presented a very impressive cloud initiative case in their session titled "E&C Industry Solution in CISDI Cloud - An Industrial Cloud Computing Platform Based on Oracle Products". The CISDI group plans to expand through three phases in the construction of its cloud computing platform: first, it will relocate its existing technologies to Oracle systems, along with establishing a private cloud for CISDI; secondly, it will gradually provide mixed cloud services for its subsidiaries and partners; and finally it plans to launch an industrial cloud with a highly mature, secure and scalable environment providing cloud services for customers in the engineering construction and steel industries, among others. "CISDI Cloud" will become the growth engine for the organization to expand its global reach through online services and achieve the strategic objective of being the preferred choice of E&C companies worldwide. The new cloud computing platform is designed to provide access to the shared computing resources pool in a self-service, dynamic, elastic and measurable way. Its flexible and scalable grid structure can support elastic expansion and sustainable growth, and can bring significant benefits in speed, agility and efficiency. Further, the platform can greatly cut down deployment and maintenance costs. The CISDI delegation highlighted these points as the key reasons why the group decided on a strategic collaboration with Oracle for building this world-class industrial cloud: - Oracle's strategy: Open, Complete and Integrated - Oracle as the only company that can provide engineered systems, with a complete product chain of hardware and software - Exadata, Exalogic and EM 12c to provide a solid foundation for "CISDI Cloud" The cloud blueprint and advanced architecture for the industrial cloud computing platform presented in the session show how Oracle products and technologies, together with industrial applications from CISDI, can provide an end-to-end portfolio of E&C industry services in the cloud. The CISDI group was recognized for business leadership and innovative solutions and was presented with the Engineering and Construction Industry Excellence Award during Oracle OpenWorld.

    Read the article

  • Oracle’s Sun Server X4-8 with Built-in Elastic Computing

    - by kgee
    We are excited to announce the release of Oracle's new 8-socket server, Sun Server X4-8. It's the most flexible 8-socket x86 server Oracle has ever designed, and also the most powerful. Not only does it use the fastest Intel® Xeon® E7 v2 processors, but its memory, I/O and storage subsystems are also all designed for maximum performance and throughput. Like its predecessor, the Sun Server X4-8 uses a "glueless" design that allows for maximum performance for Oracle Database, while also reducing power consumption and improving reliability. The specs are pretty impressive. Sun Server X4-8 supports 120 cores (or 240 threads), 6 TB of memory, 9.6 TB HDD capacity or 3.2 TB SSD capacity, contains 16 PCIe Gen 3 I/O expansion slots, and allows for up to 6.4 TB of Sun Flash Accelerator F80 PCIe cards. The Sun Server X4-8 is also the densest x86 server with its 5U chassis, allowing 60% higher rack-level core and DIMM slot density than the competition. There has been a lot of innovation in Oracle's x86 product line, but the latest and most significant is a capability called elastic computing. This new capability is built into each Sun Server X4-8. Elastic computing starts with the Intel processor. While Intel provides a wide range of processors, each with a fixed combination of core count, operational frequency, and power consumption, customers have been forced to make tradeoffs when they select a particular processor. They have had to make educated guesses on which particular processor (core count/frequency/cache size) will be best suited for the workload they intend to execute on the server. Oracle and Intel worked jointly to define a new processor, the Intel Xeon E7-8895 v2 for the Sun Server X4-8, that has unique characteristics and effectively combines the capabilities of three different Xeon processors into a single processor. Oracle system design engineers worked closely with Oracle's operating system development teams to achieve the ability to vary the core count and operating frequency of the Xeon E7-8895 v2 processor over time, without the need for a system-level reboot. Along with the new processor, enhancements have been made to the system BIOS, Oracle Solaris, and Oracle Linux, which allow the processors in the system to dynamically clock up to faster speeds as cores are disabled and to reach higher maximum turbo frequencies for the remaining active cores. One customer, a stock market trading company, will take advantage of the elastic computing capability of Sun Server X4-8 by repurposing servers between daytime stock trading activity and nighttime stock portfolio processing each day, to achieve maximum performance for each workload. To learn more about Sun Server X4-8, you can find more details, including the data sheet and white papers, here. Josh Rosen is a Principal Product Manager for Oracle's x86 servers, focusing on Oracle's operating systems and software. He previously spent more than a decade as a developer and architect of system management software. Josh has worked on system management for many of Oracle's hardware products, ranging from the earliest blade systems to the latest Oracle x86 servers.

    Read the article

  • Live CD / Live USB much faster than full install

    - by user29347
    I've observed it on both laptops I own! HP Compaq nx6125 and Ubuntu 11.04 x64 - somewhat solved. Lenovo Thinkpad T500 and Ubuntu 11.10 x64 - help needed! I'm still struggling with the Thinkpad to get a performance level similar to that of 10-year-old laptops... All in all, a really serious issue with multiple versions of Ubuntu that renders computers with perfectly compatible hardware unusable, as far as the out-of-the-box experience is concerned. Troubleshooting the resulting issues seems to be a hard case even for users with some experience installing graphics drivers. EDIT: I can't really post additional details. Two different Ubuntu versions, two laptops, two different sets of graphics drivers (open source vs. ATI proprietary) - all with the same symptoms. Also, I can't stress enough how massive the performance degradation is compared to a healthy system. For that reason I ask for input from people who may know roughly what we are dealing with here. I can post more details if we were to focus on my current Thinkpad T500. In that case my current system details: Lenovo Thinkpad T500, Ubuntu 11.10 x64, ATI Mobility Radeon HD 3650 (also see the "What I have already tried" section about the Intel graphics tested), ATI Catalyst 11.10 drivers, OCZ Agility 3 SSD - but! The same happens with the default driver for the ATI card and with the proprietary driver for the ATI card from Jockey (Additional Drivers applet). What I have already tried: 0. Switching to the Intel integrated card (Intel GMA 4500M HD) with the default driver - same effects, which may indicate a problem that is not driver-related but something of global influence, e.g. nomodeset or other things I don't even know about. (What you can read above) ATI Catalyst 11.10 and the radeon.modeset=0 boot parameter + disabled Wait for VBlank. Unity 2D. Ubuntu 10.04 LTS tested (ubuntu-10.04.3-desktop-i386.iso): both the live USB and the installed version blazing fast! (on the default drivers - without even installing the proprietary fglrx drivers). re 2 a) Seems to give me the only significant results (still poor) - perfect performance of the Unity elements, with the same crawling stuttering/lagging when dragging windows around. re 2 b) This happens often: http://i17.photobucket.com/albums/b68/Bucic/ubuntuforumsorg/Screenshotat2011-10-28083140.png re 2 c) Sometimes I am able to witness normal performance when dragging a window around, but only for a second or two. When I try to shake it longer it starts to lag, and it will keep lagging like that with an increased probability of what you see in the screenshot in point re 2 b). re 2 d) I can't establish the influence of radeon.modeset=0 though. One time it seems to be smooth with it, another time without it. Really can't tell.
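
    Not part of the original post, but a common first check for this kind of installed-vs-live performance gap is whether direct rendering is actually active on the installed system; if it silently falls back to software rendering, a compositing desktop crawls exactly as described. A small sketch, assuming glxinfo (mesa-utils) is installed:

        # Quick check: is direct rendering (hardware 3D) enabled?
        import subprocess

        def direct_rendering_enabled():
            out = subprocess.run(["glxinfo"], capture_output=True, text=True).stdout
            for line in out.splitlines():
                if line.startswith("direct rendering:"):
                    print(line)
                    return "Yes" in line
            return False

        if not direct_rendering_enabled():
            print("Direct rendering is off - the graphics driver is probably not loaded correctly.")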

    Read the article

  • Download Internet Explorer 9 RTM

    - by Harish Ranganathan
    The much anticipated RTM release of Internet Explorer 9 (IE9) happened today. The IE9 preview release was first showcased at MIX 2010, and after that there were 7-8 Platform Preview releases. IE9 Beta came out in September 2010 with close to 10 million downloads within a month. More recently, the RC version came out with much improved performance. Today marks the launch of IE9 RTM. What this means is that, within a year, the IE team has shipped the stable product, much faster than the earlier cycles for IE8 and IE7. I wanted to clarify a few things (myths) that commonly arise: 1. I am already using Chrome and it's faster for me, why would I need IE9? IE9 uses 100% hardware acceleration, which means you are going to get the best performance compared to any other browser that has shipped or will ship in the future. With native Windows support, IE9 will outperform all other browsers in terms of performance. 2. What about standards and security? Agreed, IE6 wasn't the best for standards, but why would someone compare against IE6, which was released almost 10 years back? Later we shipped IE7 and IE8, which had the best standards support during their timeframes, but one would agree that standards and specifications keep getting updated and it's hard to keep pace with them for older browsers. For example, HTML5 support is not there in IE8, but it is very much there in IE9. IE9 supports most of the stable HTML5 standards and is going to provide preview releases for the work-in-progress standards. 3. IE doesn't keep pace with other browsers. Agreed! We don't force/release updates on major versions in very short time periods. What we do is provide Windows Update, which delivers security updates/patches and other critical updates not just for IE but for the whole Windows operating system. 4. I am running Windows XP, what do I do? This is the trickiest part. Windows XP isn't a supported operating system for IE9, and there are various reasons for it. The recommended operating systems are Windows Vista and Windows 7. In the interest of technology and its pace, we had to discontinue Windows XP, both from a retail selling perspective and for IE9 support. But the last two years have seen PCs/laptops shipped only with Windows Vista or Windows 7, so it shouldn't affect most users. 5. Where do I verify IE9's performance, standards support and other information? http://samples.msdn.microsoft.com/ietestcenter/  Below is a snapshot of one of the tests. Clearly IE9 outperforms all other browsers and will continue to outperform them in the future. You can download IE9 from www.beautyoftheweb.com Cheers!!!

    Read the article

  • Customer Support Spotlight: Clemson University

    - by cwarticki
    I've begun a Customer Support Spotlight series that highlights our wonderful customers and Oracle loyalists. A week ago I visited Clemson University. As I travel to visit and educate our customers, I provide many useful tips/tricks and support best practices (as found on my blog and twitter). Most of all, I always discover an Oracle gem who deserves recognition for their hard work and advocacy. Meet George Manley. George is a Storage Engineer who has worked in Clemson's Data Center all through college, partially in the Hardware Architecture group and partially in the Storage group. George and the rest of the Storage Team work with most of the storage technologies that they have here at Clemson. This includes a wide array of different vendors' disk arrays, with most of them being Oracle/Sun 2540s. He also works with SAM/QFS, ACSLS, and our SL8500 Tape Libraries (all three Oracle/Sun products). (pictured L to R, Matt Schoger (Oracle), Mark Flores (Oracle) and George Manley) George was kind enough to take us for a data center tour. It was amazing. I rarely get to see the inside of data centers, and this one was massive. Clemson Computing and Information Technology's physical resources include the main data center located in the Information Technology Center at the Innovation Campus and Technology Park. The core of Clemson's computing infrastructure, the data center has 21,000 sq ft of raised floor and is powered by a 14MW substation. The ITC power capacity is 4.5MW. The data center is the home of both enterprise and HPC systems, and is staffed by CCIT staff on a 24-hour basis from a state-of-the-art network operations center within the ITC. A smaller business continuance data center is located on the main campus. The data center serves a wide variety of purposes, including HPC (supercomputing) resources which are shared with other universities throughout the state, the state's Medicaid processing system, and nearly all other needs for Clemson University. Yes, that's no typo (14,256 cores and 37 TB of memory)! Thanks for the tour, George, and thank you very much for your time. The tour was fantastic. I enjoyed getting to know your team and I look forward to many successes from Clemson using Oracle products. -Chris Warticki, Global Customer Management

    Read the article

  • The winning combination: Oracle VM Server for x86 + Oracle Sun Fire HW

    - by Karim Berrah
    You might be wondering why OVM Server for x86 (OVM/x86 here and below) should be seriously considered as a compelling (from a business point of view) alternative to standard hypervisors if you are virtualizing Oracle software, especially if you are planning to move to Oracle x86 hardware (rackmount or blades). Well, let's look at some "not well known" facts that might interest you and help you save more money for your entire company (and not only the virtualization team). Fact 1: OVM/x86 is considered a hard partitioning technology (check page 2 of the Oracle Server Partitioning Licensing Policies), so if you are buying new servers based on the latest Intel Xeon E7 CPUs (10 cores per socket) and have licensing issues in deploying further Oracle software because you are using a hypervisor not recognized as a hard partitioning technology (like VMware), then you need to check here how to do it with OVM. This might help you to continue to deploy your Oracle DB instances on new x86 hardware (12-core, 40-core or 64-core servers) in a reasonable way, without having to pay licences for 12, 40 or 64 CPUs. You might also consider migrating your legacy Oracle DBs to a virtualized environment like OVM/x86 and recover some CPU licences that can be reused elsewhere in production. Fact 2: OVM/x86 is free to use, without any extra licence for any specific feature (Live Migration, High Availability, Embedded Management Console). If you want to use it on non-Oracle hardware, there is a support fee per system per year that is much lower than VMware support (Oracle VM Premier Limited Support for systems up to 2 CPUs, and Oracle VM Premier Support for any bigger system, independent of the number of populated sockets). Fact 3: support is included with your Oracle x86 hardware support (OPS for Systems), and you can re-install Oracle Linux, Oracle Solaris or Oracle VM Server for x86 on your system without being charged, while keeping the same support level. Fact 4: it is less expensive to virtualize Oracle Linux or Oracle Solaris on OVM/x86 with Oracle hardware than any other similar solution with VMware, because all the VMs are then supported and licensed when you buy Oracle hardware with OPS. Fact 5: Oracle VM Templates bring you many virtual machines already installed, patched and optimized for various Oracle applications. And to be more specific, those templates are fully supported by Oracle, which is not really true when it comes to other hypervisors. By optimized VM kernel, I mean PV drivers, OVM-ready kernels in the VM, a single source clock for all the VMs, better memory management of the VM, and so on. Fact 6: there are no extra costs for a management console. OVM comes with a free OVM Manager package for Linux. More info: Latest announcement of OVM/x86 update 2.2.2 A short flash demo of OVM Server for x86 A short flash demo on OVM Templates and Virtual Assembly Builder Oracle Linux Support and Oracle VM Support Global Price List ISVs: Benefits for Independent Software Vendors (ISVs) in using OVM/x86 Consultant Services: Advanced Customer Services for OVM/x86 Technical Features: Best Practices and Guidelines for OVM with Oracle Blades, Reduce TCO and Get More Value from Your x86 Infrastructure

    Read the article

  • no volume in kubuntu 10.04

    - by neha
    Hello, I have both Gnome and KDE on my system. Gnome is working perfectly, but in KDE there is no sound being generated. The output of the aplay -l and lspci commands is as follows:
neha@neha-laptop:~$ aplay -l
**** List of PLAYBACK Hardware Devices ****
card 0: Intel [HDA Intel], device 0: STAC92xx Analog [STAC92xx Analog]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 0: Intel [HDA Intel], device 3: INTEL HDMI [INTEL HDMI]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
And the output of the lspci command:
neha@neha-laptop:~$ lspci
00:00.0 Host bridge: Intel Corporation Mobile PM965/GM965/GL960 Memory Controller Hub (rev 0c)
00:02.0 VGA compatible controller: Intel Corporation Mobile GM965/GL960 Integrated Graphics Controller (rev 0c)
00:02.1 Display controller: Intel Corporation Mobile GM965/GL960 Integrated Graphics Controller (rev 0c)
00:1a.0 USB Controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #4 (rev 02)
00:1a.1 USB Controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #5 (rev 02)
00:1a.7 USB Controller: Intel Corporation 82801H (ICH8 Family) USB2 EHCI Controller #2 (rev 02)
00:1b.0 Audio device: Intel Corporation 82801H (ICH8 Family) HD Audio Controller (rev 02)
00:1c.0 PCI bridge: Intel Corporation 82801H (ICH8 Family) PCI Express Port 1 (rev 02)
00:1c.1 PCI bridge: Intel Corporation 82801H (ICH8 Family) PCI Express Port 2 (rev 02)
00:1c.4 PCI bridge: Intel Corporation 82801H (ICH8 Family) PCI Express Port 5 (rev 02)
00:1d.0 USB Controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #1 (rev 02)
00:1d.1 USB Controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #2 (rev 02)
00:1d.2 USB Controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #3 (rev 02)
00:1d.7 USB Controller: Intel Corporation 82801H (ICH8 Family) USB2 EHCI Controller #1 (rev 02)
00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev f2)
00:1f.0 ISA bridge: Intel Corporation 82801HEM (ICH8M) LPC Interface Controller (rev 02)
00:1f.1 IDE interface: Intel Corporation 82801HBM/HEM (ICH8M/ICH8M-E) IDE Controller (rev 02)
00:1f.2 SATA controller: Intel Corporation 82801HBM/HEM (ICH8M/ICH8M-E) SATA AHCI Controller (rev 02)
00:1f.3 SMBus: Intel Corporation 82801H (ICH8 Family) SMBus Controller (rev 02)
02:09.0 FireWire (IEEE 1394): Ricoh Co Ltd R5C832 IEEE 1394 Controller (rev 05)
02:09.1 SD Host controller: Ricoh Co Ltd R5C822 SD/SDIO/MMC/MS/MSPro Host Adapter (rev 22)
02:09.2 System peripheral: Ricoh Co Ltd R5C843 MMC Host Controller (rev 12)
02:09.3 System peripheral: Ricoh Co Ltd R5C592 Memory Stick Bus Host Adapter (rev 12)
02:09.4 System peripheral: Ricoh Co Ltd xD-Picture Card Controller (rev ff)
09:00.0 Ethernet controller: Marvell Technology Group Ltd. 88E8040 PCI-E Fast Ethernet Controller (rev 12)
0b:00.0 Network controller: Broadcom Corporation BCM4312 802.11a/b/g (rev 01)
Can anyone help me?
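
    The hardware itself looks fine in that output (an Intel HDA codec on card 0), so a reasonable next step - offered only as a suggestion, not part of the original post - is to test ALSA playback directly, bypassing KDE's Phonon layer. If the test tone plays, the problem lies in KDE's sound settings or KMix rather than in the driver. A small sketch, assuming the standard test file shipped with alsa-utils:

        # Play a test tone straight through ALSA (card 0, device 0), bypassing
        # KDE/Phonon. If this is audible, the ALSA driver and mixer are fine.
        import subprocess

        TEST_WAV = "/usr/share/sounds/alsa/Front_Center.wav"  # from alsa-utils

        result = subprocess.run(["aplay", "-D", "plughw:0,0", TEST_WAV])
        if result.returncode == 0:
            print("ALSA playback works - check KDE System Settings > Multimedia and the KMix levels.")
        else:
            print("ALSA playback failed - check alsamixer for muted channels.")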

    Read the article

  • A Glimpse of the Future at Oracle OpenWorld 2013

    - by Alliances & Channels Redaktion
    "The future begins at Oracle OpenWorld", das Motto weckt große Erwartungen! Wie die Zukunft aussehen könnte, davon konnten sich 60.000 Besucherinnen und Besucher aus 145 Ländern vor Ort in San Francisco selbst überzeugen: In sage und schreibe 2.555 Sessions – verteilt über Downtown San Francisco – ging es dort um Zukunftstechnologien und neue Entwicklungen. Wie soll man zusammenfassen, was insgesamt 3.599 Speaker, fast die Hälfte übrigens Kunden und Partner, in vier Tagen an technologischen Visionen entwickelt und präsentiert haben? Nehmen wir ein konkretes Beispiel, das in diversen Sessions immer wieder auftauchte: Das „Internet of Things“, sprich „intelligente“ Alltagsgegenstände, deren eingebaute Minicomputer ohne den Umweg über einen PC miteinander kommunizieren und auf äußere Einflüsse reagieren. Für viele ist das heute noch Neuland, doch die Weiterentwicklung des Internet of Things eröffnet für Oracle, wie auch für die Partner, ein spannendes Arbeitsfeld und natürlich auch einen neuen Markt. Die omnipräsenten Fokus-Themen der viertägigen größten Hauskonferenz von Oracle hießen in diesem Jahr Customer Experience und Human Capital Management. Spannend für Partner waren auch die Strategien und die Roadmap von Oracle sowie die Neuigkeiten aus den Bereichen Engineered Systems, Cloud Computing, Business Analytics, Big Data und Customer Experience. Neue Rekorde stellte die Oracle OpenWorld auch im Netz auf: Mehr als 2,1 Millionen Menschen besuchten diese Veranstaltung online und nutzten dabei über 224 Social-Media Kanäle – fast doppelt so viele wie noch vor einem Jahr. Die gute Nachricht: Die Oracle OpenWorld bleibt online, denn es besteht nach wie vor die Möglichkeit, OnDemand-Videos der Keynote- und Session-Highlights anzusehen: Gehen Sie einfach auf Conference Video Highlights  und wählen Sie aus acht Bereichen entweder eine Zusammenfassung oder die vollständige Keynote beziehungsweise Session. Dort finden Sie auch Videos der eigenen Fach-Konferenzen, die im Umfeld der Oracle OpenWorld stattfanden: die JavaOne, die MySQL Connect und der Oracle PartnerNetwork Exchange. Beim Oracle PartnerNetwork Exchange wurden, ganz auf die Fragen und Bedürfnisse der Oracle Partner zugeschnitten, Themen wie Cloud für Partner, Applications, Engineered Systems und Hardware, Big Data, oder Industry Solutions behandelt, und es gab, ganz wichtig, viel Gelegenheit zu Austausch und Vernetzung. Konkret befassten sich dort beispielsweise Sessions mit Cloudanwendungen im Gesundheitsbereich, mit der Erstellung überzeugender Business Cases für Kundengespräche oder mit Mobile und Social Networking. Die aus Deutschland angereisten über 40 Partner trafen sich beim OPN Exchange zu einem anregenden gemeinsamen Abend mit den anderen Teilnehmern. Dass die Oracle OpenWorld auch noch zum sportlichen Highlight werden würde, kam denkbar unerwartet: Zeitgleich mit der Konferenz wurde nämlich in der Bucht von San Francisco die entscheidende 19. Etappe des Americas Cup ausgetragen. Im traditionsreichen Segelwettbewerb lag Team Oracle USA zunächst mit 1:8 zurück, schaffte es aber dennoch, den Sieg vor dem lange Zeit überlegenen Team Neuseeland zu holen und somit den Titel zu verteidigen. Selbstverständlich fand die Oracle OpenWorld auch ein großes Medienecho. Wir haben eine Auswahl für Sie zusammengestellt: - ChannelPartner- Computerwoche - Heise - Silicon über Big Data - Silicon über 12c

    Read the article

  • Oracle Optimized Solutions at Oracle OpenWorld 2012

    - by ferhatSF
    Have you registered for Oracle OpenWorld 2012 in San Francisco from September 30 to October 4? Visit the Oracle OpenWorld 2012 site today for registration and more information. Come join us to hear how Oracle Optimized Solutions can help you save money, reduce integration risks, and improve user productivity. Oracle Optimized Solutions are designed, pre-tested, tuned and fully documented architectures for optimal performance and availability. They provide written guidelines to help size, configure, purchase and deploy enterprise solutions that address common IT problems. Built with flexibility in mind, Oracle Optimized Solutions can be deployed as complete solutions or easily tailored to meet your specific needs - they are proven to save money, reduce integration risks and improve user productivity. Here is a preview of the planned Oracle OpenWorld sessions(*) on Oracle Optimized Solutions.
Monday, October 1, 2012:
12:15 PM | CON7916 | Accelerate Oracle E-Business Suite Deployment with SPARC SuperCluster | Moscone West - 2001
03:15 PM | GEN9691 | General Session: Accelerate Your Business with the Oracle Hardware Advantage | Moscone North - Hall D
04:45 PM | CON4821 | Building a Flexible Enterprise Cloud Infrastructure on Oracle SPARC Systems | Moscone West - 2001
Tuesday, October 2, 2012:
10:15 AM | CON4561 | Backup-and-Recovery Best Practices with Oracle Engineered Systems Products | Moscone South - 252
11:45 AM | CON3851 | Optimizing JD Edwards EnterpriseOne on SPARC T4 Servers for Best Performance | Moscone West - 2000
01:15 PM | GEN11472 | General Session: Breakthrough Efficiency in Private Cloud Infrastructure | Moscone West - 3014
01:15 PM | CON4600 | Extreme Storage Scale and Efficiency: Lessons from a 100,000-Person Organization | Moscone South - 252
05:00 PM | CON9465 | Next-Generation Directory: Oracle Unified Directory | Moscone West - 3008
05:00 PM | CON4088 | Accelerate Your SAP Landscape with the Oracle SPARC SuperCluster | Moscone West - 2001
05:00 PM | CON7743 | High-Performance Security for Oracle Applications Using SPARC T4 Systems | Moscone West - 2000
05:00 PM | CON3857 | Archive Strategies for 100 Percent Data Availability | Moscone South - 270
Wednesday, October 3, 2012:
10:15 AM | CON6528 | Configure Oracle Hybrid Columnar Compression to Optimize Query Database Performance up to 10x | Moscone South - 252
11:45 AM | CON2590 | Breakthrough in Private Cloud Management on SPARC T-Series Servers | Moscone South - 270
01:15 PM | CON4289 | Oracle Optimized Solution for Siebel CRM at ACCOR | Moscone West - 2000
05:00 PM | CON7570 | Improve PeopleSoft HCM Performance and Reliability with SPARC SuperCluster | Moscone South - 252
* Schedule subject to change
In addition, there will be Oracle Optimized Solutions Hands-On-Labs sessions planned. Please enroll ahead of time as space is limited.
Oracle Optimized Solutions: Hands on Labs in Oracle OpenWorld (Place: Marriott Marquis - Salon 14/15):
Monday, October 1, 2012, 01:45 PM | HOL9868 | Enterprise Cloud Infrastructure for SPARC with Oracle Enterprise Manager Ops Center 12c
Monday, October 1, 2012, 03:15 PM | HOL9907 | Oracle Virtual Desktop Infrastructure Performance and Tablet Mobility
Wednesday, October 3, 2012, 05:00 PM | HOL9870 | x86 Enterprise Cloud Infrastructure with Oracle VM 3.x and Sun ZFS Storage Appliance
Thursday, October 4, 2012, 11:15 AM | HOL9869 | 0 to Database Backup and Recovery in 60 Minutes
Oracle Optimized Solutions executives and experts will also be at hand for discussions and follow ups.
And don’t forget to catch live demonstrations of our complete Oracle Optimized Solutions while at Oracle OpenWorld 2012 in San Francisco. We recommend the use of the Schedule Builder tool to plan your visit to the conference and for pre-enrollment in sessions of your interest. We hope to see you there!

    Read the article

  • MySQL Connect 9 Days Away – Optimizer Sessions

    - by Bertrand Matthelié
    Following my previous blog post focusing on InnoDB talks at MySQL Connect, let us review today the sessions focusing on the MySQL Optimizer:
Saturday, 11.30 am, Room Golden Gate 6: MySQL Optimizer Overview—Olav Sanstå, Oracle. The goal of MySQL optimizer is to take a SQL query as input and produce an optimal execution plan for the query. This session presents an overview of the main phases of the MySQL optimizer and the primary optimizations done to the query. These optimizations are based on a combination of logical transformations and cost-based decisions. Examples of optimization strategies the presentation covers are the main query transformations, the join optimizer, the data access selection strategies, and the range optimizer. For the cost-based optimizations, an overview of the cost model and the data used for doing the cost estimations is included.
Saturday, 1.00 pm, Room Golden Gate 6: Overview of New Optimizer Features in MySQL 5.6—Manyi Lu, Oracle. Many optimizer features have been added into MySQL 5.6. This session provides an introduction to these great features. Multirange read, index condition pushdown, and batched key access will yield huge performance improvements on large data volumes. Structured explain, explain for update/delete/insert, and optimizer tracing will help users analyze and speed up queries. And last but not least, the session covers subquery optimizations in Release 5.6.
Saturday, 7.00 pm, Room Golden Gate 4: BoF: Query Optimizations: What Is New and What Is Coming? This BoF presents common techniques for query optimization, covers what is new in MySQL 5.6, and provides a discussion forum in which attendees can tell the MySQL optimizer team which optimizations they would like to see in the future.
Sunday, 1.15 pm, Room Golden Gate 8: Query Performance Comparison of MySQL 5.5 and MySQL 5.6—Øystein Grøvlen, Oracle. MySQL Release 5.6 contains several improvements in the query optimizer that create improved performance for complex queries. This presentation looks at how MySQL 5.6 improves the performance of many of the queries in the DBT-3 benchmark. Based on the observed improvements, the presentation discusses what makes the specific queries perform better in Release 5.6. It describes the relevant new optimization techniques and gives examples of the types of queries that will benefit from these techniques.
Sunday, 4.15 pm, Room Golden Gate 4: Powerful EXPLAIN in MySQL 5.6—Evgeny Potemkin, Oracle. The EXPLAIN command of MySQL has long been a very useful tool for understanding how MySQL will execute a query. Release 5.6 of the MySQL database offers several new additions that give more-detailed information about the query plan and make it easier to understand at the same time.
This presentation gives an overview of new EXPLAIN features: structured EXPLAIN in JSON format, EXPLAIN for INSERT/UPDATE/DELETE, and optimizer tracing. Examples in the session give insights into how you can take advantage of the new features. They show how these features supplement and relate to each other and to classical EXPLAIN and how and why the MySQL server chooses a particular query plan. You can check out the full program here as well as in the September edition of the MySQL newsletter. Not registered yet? You can still save US$ 300 over the on-site fee – Register Now!
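
    To make those EXPLAIN additions a little more concrete, here is a small sketch (not from the session material) that exercises the structured JSON EXPLAIN and the optimizer trace against a MySQL 5.6 server. It assumes the mysql-connector-python package, local credentials, and an existing orders table - all placeholders.

        # Demonstrates two MySQL 5.6 features mentioned above: structured
        # EXPLAIN in JSON format and the optimizer trace.
        import mysql.connector

        cnx = mysql.connector.connect(user="root", password="secret",
                                      host="127.0.0.1", database="test")
        cur = cnx.cursor()

        # 1. Structured EXPLAIN: the plan comes back as one JSON document.
        cur.execute("EXPLAIN FORMAT=JSON SELECT * FROM orders WHERE customer_id = 42")
        print(cur.fetchone()[0])

        # 2. Optimizer trace: record how the optimizer arrived at that plan.
        cur.execute("SET optimizer_trace='enabled=on'")
        cur.execute("SELECT * FROM orders WHERE customer_id = 42")
        cur.fetchall()  # consume the result before reading the trace
        cur.execute("SELECT TRACE FROM INFORMATION_SCHEMA.OPTIMIZER_TRACE")
        print(cur.fetchone()[0])
        cur.execute("SET optimizer_trace='enabled=off'")

        cur.close()
        cnx.close()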

    Read the article

  • ACORD LOMA Session Highlights Policy Administration Trends

    - by [email protected]
    Helen Pitts, senior product marketing manager for Oracle Insurance, attended and is blogging from the ACORD LOMA Insurance Forum this week. Above: Paul Vancheri, Chief Information Officer, Fidelity Investments Life Insurance Company. Vancheri gave a presentation during the ACORD LOMA Insurance Systems Forum about the key elements of modern policy administration systems and how insurers can mitigate risk during legacy system migrations to safely introduce new technologies. When I had a few particularly challenging honors courses in college my father, a long-time technology industry veteran, used to say, "If you don't know how to do something go ask the experts. Find someone who has been there and done that, don't be afraid to ask the tough questions, and apply and build upon what you learn." (Actually he still offers this same advice today.) That's probably why my favorite sessions at industry events, like the ACORD LOMA Insurance Forum this week, are those that include insight on industry trends and case studies from carriers who share their experiences and offer best practices based upon their own lessons learned. I had the opportunity to attend a particularly insightful session Wednesday as Craig Weber, senior vice president of Celent's Insurance practice, and Paul Vancheri, CIO of Fidelity Life Investments, presented, "Managing the Dynamic Insurance Landscape: Enabling Growth and Profitability with a Modern Policy Administration System." Policy Administration Trends Growing the business is the top issue when it comes to IT among both life and annuity and property and casualty carriers according to Weber. To drive growth and capture market share from competitors, carriers are looking to modernize their core insurance systems, with 65 percent of those CIOs participating in recent Celent research citing plans to replace their policy administration systems. Weber noted that there has been continued focus and investment, particularly in the last three years, by software and technology vendors to offer modern, rules-based, configurable policy administration solutions. He added that these solutions are continuing to evolve with the ongoing aim of helping carriers rapidly meet shifting business needs--whether it is to launch new products to market faster than the competition, adapt existing products to meet shifting consumer and /or regulatory demands, or to exit unprofitable markets. He closed by noting the top four trends for policy administration either in the process of being adopted today or on the not-so-distant horizon for the future: Underwriting and service desktops New business automation Convergence of ultra-configurable and domain content-rich systems Better usability and screen design Mitigating the Risk When Making the Decision to Modernize Third-party analyst research from advisory firms like Celent was a key part of the due diligence process for Fidelity as it sought a replacement for its legacy policy administration system back in 2005, according to Vancheri. The company's business opportunities were outrunning system capability. Its legacy system had not been upgraded in several years and was deficient from a functionality and currency standpoint. This was constraining the carrier's ability to rapidly configure and bring new and complex products to market. The company sought a new, modern policy administration system, one that would enable it to keep pace with rapid and often unexpected industry changes and ahead of the competition. 
A cross-functional team that included representatives from finance, actuarial, operations, client services and IT conducted an extensive selection process. This process included deep documentation review, pilot evaluations, demonstrations of required functionality and complex problem-solving, infrastructure integration capability, and the ability to meet the company's desired cost model. The company ultimately selected an adaptive policy administration system that met its requirements to:
Deliver ease of use - eliminating paper and rework, while easing the burden on representatives to sell and service annuities
Provide customer parity - offering Web-based capabilities in alignment with the company's focus on delivering a consistent customer experience across its business
Deliver scalability, efficiency - enabling automation, while simplifying and standardizing systems across its technology stack
Offer desired functionality - supporting Fidelity's product configuration / rules management philosophy, focus on customer service and technology upgrade requirements
Meet cost requirements - including implementation, professional services and license fees, and ongoing maintenance
Deliver upon business requirements - enabling the ability to drive time to market for new products and flexibility to make changes
Best Practices for Addressing Implementation Challenges Based upon lessons learned during the company's implementation, Vancheri advised carriers to evaluate staffing capabilities and cultural impacts, review business requirements to avoid rebuilding legacy processes, factor in dependent systems, and review policies and practices to secure customer data. His formula for success: upfront planning + clear requirements = precision execution. Achieving a Return on Investment Vancheri said the decision to replace their legacy policy administration system and deploy a modern, rules-based system--before the economic downturn occurred--has been integral in helping the company adapt to shifting market conditions, while enabling growth in its direct channel sales of variable annuities. Since deploying its new policy admin system, the company has reduced its average time to market for new products from 12-15 months to 4.5 months. The company has since migrated its other products to the new system and retired its legacy system, significantly decreasing its overall product development cycle. From a processing standpoint Vancheri noted the company has achieved gains in automation, information, and ease of use, resulting in improved real-time data edits, controls for better quality, and tax handling capability. Plus, by having only one platform to manage, the company has simplified its IT environment and is well positioned to deliver system enhancements for greater efficiencies. Commitment to Continuing the Investment In the short and longer term, Vancheri said the company plans to enhance business functionality to support money movement, wire automation, divorce processing on payout contracts and cost-based tracking improvements. It also plans to continue system upgrades to remain current as well as focus on further reducing cycle time, driving down maintenance costs, and integrating with other products. Helen Pitts is senior product marketing manager for Oracle Insurance focused on life/annuities and enterprise document automation.

    Read the article

  • News, Perspectives, and Plenty to Talk About - Oracle Partner Day 2012

    - by A&C Redaktion
    What a day! Under the motto "Maximize your Potential", over 470 participants came together at the Oracle Partner Day 2012. Everything revolved around our partners, who, as Silvia Kaske, Senior Director Alliances & Channel Europe North, stressed in her welcome address, are "a very important building block in Oracle's growth strategy". How unique this partnership is was also emphasized by David Callaghan, Senior Vice President EMEA Alliances & Channel, in his keynote. No one else, according to Callaghan, has actually integrated hardware and software to an extent comparable to Oracle. Some vendors may bundle the two together, but by far not as optimally tuned and interwoven as in Oracle's "Red Stack". In addition to keynotes by Jürgen Kunz, SVP Technology Northern Europe & Country Leader Germany, and Christian Werner, Senior Director Alliances & Channels Germany, on new products and development potential in the Oracle universe, and presentations from the various specialization areas, there was even a look into the future of IT: computer scientist Professor Hermann Maurer presented not only existing and planned innovations, such as the much-talked-about computer glasses that are soon meant to replace the smartphone - a good helping of science fiction was included as well. During the day, several partners took the opportunity to sit the OPN Implementation Specialist test on site at the Pearson Vue test center. Many attendees were impressed by the many good conversations with one another and made full use of the opportunity for networking and exchanging experiences. With such a dense program it is of course difficult to really take in everything. We have therefore compiled the presentations held at the Oracle Partner Day once again for you here in the agenda. Things got exciting at the Oracle Partner Award Ceremony: for the second time, German partners who have specialized with particular commitment and success were honored there. Who the lucky winners are and what distinguishes their companies can also be read here in the blog. Once again, we warmly congratulate all the winners! After there had already been wild speculation in advance about what might be hiding behind the "Oracle Sports Challenge", we want to resolve that question here as well: anyone who felt like moving after all the sitting could take on various more or less athletic challenges. There were several games of skill to master, including an almost man-high "Oracle Stack" that had to be kept upright Jenga-style, shots on a goal guarded by a fully automatic robo-keeper, and a video wall with a playful reaction test all around the "Red Stack". Throughout the day, attendees could collect letters hidden behind QR codes and, with a bit of luck and skill, win one of three iPod Supernanos. The program was rounded off by performances from the entertainment saxophonists of the "Hot Sax Club", the impressive football freestylers with their ball acrobatics, the close-up magician Marc Gassert, and our DJ, who kept the mood going. You can see impressions and highlights from the Oracle Partner Day in Frankfurt here, in the best-of video and in our photo gallery. Relive a successful day once again - or see what you missed. But don't be sad: the next Oracle Partner Day is sure to come!

    Read the article

  • News, Perspectives and Plenty to Talk About - Oracle Partner Day 2012

    - by A&C Redaktion
    What a day! Under the motto "Maximize your Potential", more than 470 participants came together for Oracle Partner Day 2012. Everything revolved around our partners, who, as Silvia Kaske, Senior Director Alliances & Channel Europe North, emphasized in her welcome, are "a very important building block in Oracle's growth strategy". David Callaghan, Senior Vice President EMEA Alliances & Channel, also stressed in his keynote how unique this partnership is. According to Callaghan, no one else has truly integrated hardware and software to the extent Oracle has. Some vendors may bundle the two together, but by no means as optimally tuned and interwoven as in Oracle's "Red Stack". Alongside keynotes by Jürgen Kunz, SVP Technology Northern Europe & Country Leader Germany, and Christian Werner, Senior Director Alliances & Channels Germany, on new products and growth opportunities in the Oracle universe, and presentations from the various specialization areas, there was even a glimpse into the future of IT: computer scientist Professor Hermann Maurer presented not only existing and planned innovations, such as the much-discussed computer glasses that are expected to replace the smartphone before long, but also a healthy dose of science fiction. Over the course of the day, a number of partners took the opportunity to sit the OPN Implementation Specialist exam on site at the Pearson Vue test center. Many attendees were impressed by the many good conversations and made full use of the opportunity for networking and exchanging experiences. With such a packed program it is of course difficult to take everything in, so we have compiled the presentations given at the Oracle Partner Day for you here in the agenda. Things got exciting at the Oracle Partner Award Ceremony: for the second time, German partners were honored there for specializing with exceptional commitment and success. You can also read here in the blog who the lucky winners are and what sets their companies apart. Once again, our warmest congratulations to all the winners! Since there had been wild speculation beforehand about what might be behind the "Oracle Sports Challenge", we will clear that question up here as well: anyone who felt like moving after all that sitting could take on various more or less athletic challenges. These included several games of skill, among them an almost man-high "Oracle Stack" that had to be kept upright Jenga-style, shots on a goal guarded by a fully automatic robo-keeper, and a video wall with a playful reaction test built around the "Red Stack". Throughout the day, participants could collect letters hidden behind QR codes and, with a little luck and skill, win one of three iPod Supernanos. The program was rounded off by performances from the entertainment saxophonists of the "Hot Sax Club", the impressive football freestylers with their ball acrobatics, close-up magician Marc Gassert, and our DJ, who kept the mood up. You can see impressions and highlights from the Oracle Partner Day in Frankfurt here, in the best-of video and in our photo gallery. Relive a successful day, or see everything you missed. But don't be sad: the next Oracle Partner Day is sure to come!

    Read the article

  • Ubuntu turns off instead of suspend / sleeping

    - by Marcos
    I'm using Ubuntu 11.10 on my notebook as the main OS. However, I've been facing a serious problem: every time I try to suspend it, or it just stays idle for a while, the computer shuts down instead of suspending. It actually appears to be suspended, but when I press the power button to wake it, it boots up from scratch and all my open work is lost. How can I fix this? P.S.: On Windows 7, suspend/sleep resumes just fine. Here is a complete list of hardware:
    00:00.0 Host bridge [0600]: Intel Corporation 2nd Generation Core Processor Family DRAM Controller [8086:0104] (rev 09)
    00:02.0 VGA compatible controller [0300]: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller [8086:0116] (rev 09)
    00:16.0 Communication controller [0780]: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 [8086:1c3a] (rev 04)
    00:1a.0 USB Controller [0c03]: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 [8086:1c2d] (rev 04)
    00:1b.0 Audio device [0403]: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller [8086:1c20] (rev 04)
    00:1c.0 PCI bridge [0604]: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 [8086:1c10] (rev b4)
    00:1c.1 PCI bridge [0604]: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 2 [8086:1c12] (rev b4)
    00:1c.2 PCI bridge [0604]: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 3 [8086:1c14] (rev b4)
    00:1d.0 USB Controller [0c03]: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 [8086:1c26] (rev 04)
    00:1f.0 ISA bridge [0601]: Intel Corporation HM65 Express Chipset Family LPC Controller [8086:1c49] (rev 04)
    00:1f.2 SATA controller [0106]: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller [8086:1c03] (rev 04)
    00:1f.3 SMBus [0c05]: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller [8086:1c22] (rev 04)
    01:00.0 Network controller [0280]: Realtek Semiconductor Co., Ltd. RTL8188CE 802.11b/g/n WiFi Adapter [10ec:8176] (rev 01)
    02:00.0 System peripheral [0880]: JMicron Technology Corp. SD/MMC Host Controller [197b:2382] (rev 80)
    02:00.2 SD Host controller [0805]: JMicron Technology Corp. Standard SD Host Controller [197b:2381] (rev 80)
    02:00.3 System peripheral [0880]: JMicron Technology Corp. MS Host Controller [197b:2383] (rev 80)
    02:00.5 Ethernet controller [0200]: JMicron Technology Corp. JMC250 PCI Express Gigabit Ethernet Controller [197b:0250] (rev 03)
    I've updated the kernel to 3.2, and hibernate/suspend is still not working. SWAP is 1/4 of my RAM.

    Read the article

  • Fun with Sun Ray, 3D, Oracle VM x86 and SRIOV

    - by wim.coekaerts
    One of the things I like about my job is that I get to play around with stuff and make use of the technologies my teams work on. Sort of my own little playground. It allows me to study the products in great detail and put them to use in ways that individual product teams don't always intend them to be used for :) but that makes it fun. I have a lot of this set up at home because... work is sort of a hobby and I just like to tinker with it. Anyway, a few weeks ago I was looking at my Sun Ray rig at home and how well 3D works. Google Earth and some basic OpenGL tests like glxspheres combined with VirtualGL. It resulted in some very cool demos recorded with my little camera (sorry for the crappy quality of the video :-) : OVDC (soft client) on my mac Sun Ray 2FS Never mind the hiccups during zoom, that's because I was using the scroll wheel on my mouse and I can't scroll uninterrupted :) Anyway, this is quite cool! The setup for this was the following: Sun Ray on the LAN, Sun Ray Server 5 (latest) installed on OL5.5 inside a VM running on Oracle VM 2.2 (hardware virt, with a virtual network (vif)), and the VirtualGL rendering happened on another box (wopr5) that runs Linux on a little Atom D520 with an ION2 GPU. So the network path goes from the Sun Ray to the Sun Ray Server to wopr5 and back. Given that this is full-screen 3D, it puts a good amount of load on the network, and it's pretty cool that SRS was just a VM :) So, separately, I had written a little blog entry about using SRIOV and Oracle VM a while back. link to sriov blog entry Last night when I came home I wanted to do some more playing around with SRIOV and live migrate. To do this, I wanted to set up a VM with 2 network interfaces, one virtual network (vif) and one that's one of the SRIOV virtual functions from my network card. Inside the guest they show up as eth0 and eth1, which I then bond using a standard Linux bonding device (bond0 here) with active-active links. The goal here is that on live migrate we detach the VF (eth1 in the guest in this case), the bond just hums along on eth0 (the vif), we live migrate the VM, and then on the other server, after the migration completes, we re-attach a VF to the VM so eth1 pops up again and the bond uses both eth0/eth1 to do its work. So, to set this up, I figured, why not use my Sun Ray Server VM, because the 3D work generates a nice network load and is very latency/timing sensitive. In the end, I ran glxspheres on my Sun Ray Server (VM) displaying on my Sun Ray 2FS, and while that was running, I did my live migrate test of this VM (unplug PCI VF, migrate, reconnect VF) and guess what, it just kept running :) veryyyyyy cool. Now, it was supposed to, but it's always nice to see it actually work, for real. Here's a diagram of it. No gimmicks - just real technology at work! enjoy :)

    Read the article

  • 7 Reasons for Abandonment in eCommerce and the need for Contextual Support by JP Saunders

    - by Tuula Fai
    Shopper confidence, or more accurately the lack thereof, is the bane of the online retailer. There are a number of questions that influence whether a shopper completes a transaction, and all of those attributes revolve around knowledge. What products are available? What products are on offer? What would be the cost of the transaction? What are my options for delivery? In general, most online businesses do a good job of answering basic questions about the products as the shopper engages in the online journey, navigating the product catalog and working through the checkout process. The needs that are harder to address are those that are less concerned with product specifics and more concerned with deciding whether the transaction met the shopper's needs and delivered value. A recent study by the Baymard Institute [1] finds that more than 60% of eCommerce site visitors will abandon their shopping cart. The study also identifies seven reasons for abandonment of the commerce process [2]. Most of those reasons come down to poor usability within the commerce experience.
    1. Distractions. Distractions in the shopper's environment (TV, children, pets, etc.) or distractions on the eCommerce page itself can drive shopper abandonment. Ideally, the selection and check-out process should be straightforward. One common distraction is to drive the shopper away from the task at hand through pop-ups or re-directs. The shopper engaging with support information in the checkout process should not be directed away from the page to consume support. Though confidence may improve, the distraction also means abandonment may increase.
    2. Poor Usability. When the experience gets more complicated, buyer's remorse can set in. While knowledge drives confidence, a lack of understanding erodes it. Therefore it is important that the commerce process is streamlined. In some cases, the number of clicks to complete a purchase is lengthy and unavoidable. In these situations, it is vital to ensure that the complexity of your experience can be explained with contextual support to avoid abandonment. If you can illustrate the solution to a complex action while the user is engaged in that action and address customer frustrations with your checkout process before they arise, you can decrease abandonment.
    3. Fraud. The perception of potential fraud can be enough to deter a buyer. Does your site look credible? Can shoppers trust your brand? Providing answers on the security of your experience and the levels of protection applied to profile information may play as big a role in ensuring the sale as does the support you provide on the product offerings and purchasing process.
    4. Does it fit? For a clothing item or an oversized piece of furniture, another common form of abandonment is the shopper questioning whether the item will fit the intended wearer or space. Providing information on the sizing applied to clothing, physical dimensions, and limitations on delivery/returns of oversized items will also assist the sale. A photo alone of the item will help, as it answers some of those questions, but won't assuage all customer concerns about sizing and fit.
    5. Sometimes the customer doesn't want to buy. Prospective buyers might be browsing through your catalog to kill time, or just might not have the money to purchase the item! No amount of contextual support will increase the likelihood to buy if the shopper already has no intention of doing so, and the customer will still likely abandon. Ensuring that any questions are proactively answered as they browse through your site can only increase their likelihood to return and buy at a future date.
    6. Can't Buy. Errors or complexity at checkout can be another major cause of abandonment. Good contextual support is unlikely to help with severe errors caused by technical issues on your site, but it will have a big impact on customers struggling with complexity in the checkout process and needing a question answered prior to completing the sale. Embedded support within the checkout process that patiently explains how to complete a task will help increase conversion rates.
    7. Additional Costs. Tax, shipping and other costs or duties can dramatically increase the cost of the purchase and, when unexpected, can increase abandonment, particularly if they can't be adequately explained. Again, a lack of knowledge erodes confidence in the purchase, and cost concerns in particular erode the perception of your brand's trustworthiness. Providing information on which costs are additive and why they are being levied can decrease the likelihood that the customer will abandon the experience.
    Knowledge drives confidence and confidence drives conversion. If you'd like to understand best practices in providing contextual customer support in eCommerce to give your shoppers confidence, download the Oracle Cloud Service and Oracle Commerce - Contextual Support in Commerce White Paper. This white paper discusses the process of adding customer support, including a suggested process for finding where knowledge has the most influence on your shoppers and practical step-by-step illustrations on how contextual self-service can be added to your online commerce experience. Resources: [1] http://baymard.com/checkout-usability [2] http://baymard.com/blog/cart-abandonment

    Read the article

  • How to Secure a Data Role by Multiple Business Units

    - by Elie Wazen
    In this post we will see how a Role can be data secured by multiple Business Units (BUs). Separate Data Roles are generally created for each BU if a corresponding data template generates roles on the basis of the BU dimension. The advantage of creating a policy with a rule that includes multiple BUs is that, when mapping these roles in HCM Role Provisioning Rules, fewer entries need to be made. This can facilitate maintenance for enterprises with a large number of Business Units. Note: The example below applies as well if the securing entity is Inventory Organization. Let us take for example the case of a user provisioned with the "Accounts Payable Manager - Vision Operations" Data Role in Fusion Applications. This user will be able to access Invoices in Vision Operations but will not be able to see Invoices in Vision Germany. Figure 1. A User with a Data Role restricting them to Data from BU: Vision Operations With the role granted above, this is what the user will see when they attempt to select Business Units while searching for AP Invoices. Figure 2. The List Of Values of Business Units is limited to a single one. This is the effect of the Data Role granted to that user, as can be seen in Figure 1. In order to create a data role that secures by multiple BUs, we need to start by creating a condition that groups the Business Units we want to include in that data role. This is accomplished by creating a new condition against the BU View. That condition will later be used to create a data policy for our newly created Role. The BU View is a database resource and is accessed from APM, as seen in the search below. Figure 3. Viewing a Database Resource in APM The next step is to create a new condition, in which we define a SQL predicate that includes the two BUs (the ids below refer to Vision Operations and Vision Germany). At this point we have simply created a standalone condition. We have not used this condition yet, and security is therefore not affected. Figure 4. Custom Role that inherits the Purchase Order Overview Duty We are now ready to create our Data Policy. In APM, we search for our newly created Role and navigate to "Find Global Policies". We query the Role we want to secure and navigate to view its global policies. Figure 5. The Job Role we plan on securing We can see that the role was not defined with a Data Policy, so we will create one that uses the condition we created earlier. Figure 6. Creating a New Data Policy In the General Information tab, we have to specify the DB Resource that the Security Policy applies to: in our case this is the BU View. Figure 7. Data Policy Definition - Selection of the DB Resource we will secure by In the Rules tab, we make the rule applicable to multiple values of the DB Resource we selected in the previous tab. This is where we associate the condition we created against the BU View with this data policy, by entering the Condition name in the Condition field. Figure 8. Data Policy Rule The last step of defining the Data Policy consists of explicitly selecting the Actions that are governed by this Data Policy. In this case, for example, we select the Actions displayed below in the right pane. Once the record is saved, we are ready to use our newly secured Data Role. Figure 9. Data Policy Actions We can now see a new Data Policy associated with our Role. Figure 10. Role is now secured by a Data Policy We now assign that new Role to the User.
Of course this does not have to be done in OIM and can be done using a Provisioning Rule in HCM. Figure 11. Role assigned to the User who previously was granted the Vision Ops secured role. Once that user accesses the Invoices Workarea this is what they see: In the image below the LOV of Business Unit returns the two values defined in our data policy namely: Vision Operations and Vision Germany Figure 12. The List Of Values of Business Units now includes the two we included in our data policy. This is the effect of the data role granted to that user as can be seen in Figure 11

    Read the article

  • SSAS Maestro Training in July 2012 #ssasmaestro #ssas

    - by Marco Russo (SQLBI)
    A few hours ago Chris Webb blogged about SSAS Maestro, and I'd like to pass the news along and add some background info. SSAS Maestro is the premier certification on Analysis Services, selecting the best Analysis Services experts around the world. In 2011 Microsoft organized two rounds of training/exams for SSAS Maestros, and up to now only 11 people from the first wave have been announced, around 10% of the attendees of the course! In the next few days the new Maestros from the second round should be announced; the process is long for several reasons, which I'm going to explain. First, the course is just one step in the process. Before the course you receive a list of topics to study, including the slides of the course. During the course, students receive a lot of information that might not have been included in the slides, and the best part of the course is the class interaction. Students are expected to bring their experience to the table; comparing case studies and experiences and having long debates is an important part of the learning process. It is also part of the evaluation: good questions might be even more important than good answers! Finally, after the course, students have their homework, which may require one or two months to complete. After that, a long (very long) evaluation process begins, taking into account homework, labs, participation… For this reason the final evaluation may arrive months after the course. We are going to improve and shorten this process with the next courses. The first wave of SSAS Maestro was by invitation only; now the program is opening up, requiring a fee to participate in order to cover the cost of preparation, training and the exam. The number of attendees will be limited, and candidates will have to send their CV in order to be admitted to the course. Only experienced Analysis Services developers will be able to participate in this challenging program. So why should you do it? Well, only 10% of students have passed the exam so far. So if you want any guarantee of passing the exam, you need to study a lot: before, during and after the course. But the course by itself is a precious opportunity to share experience, build your network and learn mission-critical, enterprise-level best practices that are hard to find written in books. Oh, and many existing white papers are required reading *before* the course! The course is now 5 days long, and every day can be *very* long. We'll have lectures and discussions in the morning and labs in the afternoon/evening, plus some more lectures in one or two afternoons. A large part of the course is about performance optimization, capacity planning and monitoring. This edition will also introduce Tabular models, but don't expect the content of the SSAS Tabular Workshop: only performance, scalability, monitoring and optimization will be covered, and knowing Analysis Services is a requirement just to be accepted! Chris Webb and I will be the teachers for this edition. The course is expensive. Applying for SSAS Maestro will cost around 7000€ plus taxes (reduced to 5000€ for students of a previous SSAS Maestro edition). And you will be locked in a training room for the better part of the week. So why should you do it? Well, as I said, this is a challenging course. You will not find the time to check your email: the content is just too interesting for you to be distracted by anything else. Another good reason is that this course will take place in Italy.
    Well, the course will take place in the brand new Microsoft Innovation Campus, and in general we'll be able to give you hints on where to get great food; if you are willing to add a weekend to your trip, there are plenty of places to visit (and I'm not talking about the classic Rome-Florence-Venice). You might really need to relax after such a week! Finally, the marking process after the course will be faster: we'd like to complete the evaluation within three months after the course, considering that 1-2 months might be required to complete the homework. If at this point you are not scared off: registration will open in mid-April, but you can already write to [email protected], sending your CV/resume and a short description of your level of SSAS knowledge and experience. The selection process will start early, and you may want to put your admission form at the top of the FIFO queue!

    Read the article

  • MySQL for Excel 1.1.0 GA has been released

    - by Javier Treviño
    The MySQL Windows Experience Team is proud to announce the release of MySQL for Excel version 1.1.0 GA, one of the newest products contained in the MySQL Installer suite. You can download it from our official Downloads page at http://dev.mysql.com/downloads/installer/. The 1.1.0 release of MySQL for Excel introduces the Edit MySQL Data feature. This may be the coolest feature so far: users are able to edit the data in a MySQL table using MS Excel in a very friendly and intuitive way. Edit Data supports inserting new rows, deleting existing rows and updating existing data, as easily as playing with data in an Excel spreadsheet, and pushing the changes back to the server. This version also contains the following bug fixes:
    - Enabled the following checkboxes in the Append Data's Advanced Options dialog and added code in the Append Data dialog to use them as follows: "Automatically store the column mapping for the given table": if checked, the current mapping is stored automatically after clicking the Append button, provided the append operation is successful and no mapping exists yet for the current connection.schema.table; the new mapping is stored with a proposed name of Mapping. "Reload stored column mapping for the selected table automatically": if checked, the first stored mapping in which all column names in the source grid match all column names in the target grid is automatically selected and applied when the Append Data dialog is loaded.
    - Fixed code in Append Data that applies a stored column mapping, so it skips target columns whose associated mapping is empty (saved as a -1).
    - Enclosed the Add-In's startup code in a try-catch block in order to log any possible error thrown during startup, and added information messages to the log at the beginning of the Add-In's startup code and at the end of the shutdown code. Also changed the wrapper method that calls the MySQLUtility to write messages to the log, to make logging easier, and updated the log call throughout all code that contains a try-catch block. (A rough sketch of this startup pattern appears at the end of this post.)
    - Added code to the main WiX configuration file to check whether a newer version is already installed and, if so, abort the installation.
    - Fixed code to refresh the Import Procedure Form's preview grid's data source so its contents are repainted every time the Call button is pressed.
    - Added code to re-pull connections after connections are migrated from Excel to Workbench.
    - Fixed code so that when the Append Data's Automatic Mapping is performed, any subsequent change to a mapping resets it to a Manual Mapping.
    - Added code to the InfoDialog class to set the button text to "Show Details" or "Hide Details" depending on the status of the Details text container.
    - Fixed a GUID in the main WiX configuration file so previous versions are now uninstalled during a new installation.
    - Added an option to the Export Data's Advanced Options dialog to remove columns with no data; by default the Export dialog will only flag those columns as Excluded.
    - Added code to display a warning and paint a column red if the column name in the Export Data dialog is not set, and to display a warning if the table name is not set; warnings are stacked but not displayed while a column is Excluded, and are displayed normally once the column is no longer Excluded.
    - Added code to prevent the Append and Export of Data if more than one selection is made (selecting more than one area while holding the Ctrl key while selecting Excel cells).
    - Fixed a problem that prevented MySQL for Excel from loading when the Display settings in Windows 7 are set to Adjust for Best Performance (Oracle bug 14521405 - UNHANDLED EXCEPTION IS THROWN WHEN LOADING MYSQL FOR EXCEL).
    - Fixed code that renames the auto-generated Primary Key column when the Table name changes, since it was not detecting whether a column with the same name already existed in the table. The column duplication was not actually happening; it only looked that way because the automatically generated PK column was not detecting that a column already had that name.
    - Fixed code in the Export Data dialog to always set an empty string instead of null on the MySQLDataColumn properties that store MySQL data types (MySQLDataType, RowsFrom1stDataType and RowsFrom2ndDataType).
    - Added code to display a warning and color a column red when its Data Type has not been set by the user or has been manually cleared.
    - Added code to output exception messages to the application log consistently in all places where exceptions are caught.
    A series of blog posts explaining the new Edit MySQL Data feature and the other existing features is coming to this blog. You can access the MySQL for Excel documentation at http://dev.mysql.com/doc/refman/5.5/en/mysql-for-excel.html. You can also post questions on our MySQL for Excel forum found at http://forums.mysql.com/. Enjoy and thanks for the support!
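    For readers curious about the startup change mentioned in the bug-fix list above, the underlying pattern is simply to catch and log anything thrown while the add-in initializes, instead of letting the exception escape into Excel's add-in loader. The sketch below is only an illustration of that idea under assumed names: the class, methods and use of System.Diagnostics tracing are hypothetical, and the real add-in logs through its own MySQLUtility wrapper.

    using System;
    using System.Diagnostics;

    // Hypothetical names, for illustration only; the real add-in's startup hooks differ.
    public static class AddInStartupSketch
    {
        public static void OnStartup()
        {
            Trace.TraceInformation("Add-in starting up.");
            try
            {
                // ... initialize connections, ribbon handlers, settings, etc. ...
            }
            catch (Exception ex)
            {
                // Log the failure instead of letting the exception bubble up into Excel.
                Trace.TraceError("Add-in startup failed: {0}", ex);
            }
        }

        public static void OnShutdown()
        {
            Trace.TraceInformation("Add-in shutting down.");
        }
    }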

    Read the article

  • Textures do not render on ATI graphics cards?

    - by Mathias Lykkegaard Lorenzen
    I'm rendering textured quads to an orthographic view in XNA through hardware instancing. On Nvidia graphics cards this all works (tested on 3 machines). On ATI cards it doesn't work at all (tested on 2 machines). How come? Culling perhaps? My orthographic view is set up like this:

    Matrix projection = Matrix.CreateOrthographicOffCenter(0, graphicsDevice.Viewport.Width, -graphicsDevice.Viewport.Height, 0, 0, 1);

    And my elements are rendered with the Z-coordinate 0. Edit: I just figured out something weird. If I do not call this spritebatch code before doing my textured quad rendering code, then it won't work on Nvidia cards either. Could that be due to culling information or something like that?

    Batch.Instance.SpriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, SamplerState.LinearClamp, DepthStencilState.Default, RasterizerState.CullNone);
    ...
    spriteBatch.End();

    Edit 2: Here's the full code for my instancing call.

    public void DrawTextures()
    {
        Batch.Instance.SpriteBatch.Begin(SpriteSortMode.Texture, BlendState.AlphaBlend, SamplerState.LinearClamp, DepthStencilState.Default, RasterizerState.CullNone, textureEffect);
        while (texturesToDraw.Count > 0)
        {
            TextureJob texture = texturesToDraw.Dequeue();
            spriteBatch.Draw(texture.Texture, texture.DestinationRectangle, texture.TintingColor);
        }
        spriteBatch.End();

    #if !NOTEXTUREINSTANCING
        // no work to do
        if (positionInBufferTextured > 0)
        {
            device.BlendState = BlendState.Opaque;

            textureEffect.CurrentTechnique = textureEffect.Techniques["Technique1"];
            textureEffect.Parameters["Texture"].SetValue(darkTexture);
            textureEffect.CurrentTechnique.Passes[0].Apply();

            if ((textureInstanceBuffer == null) || (positionInBufferTextured > textureInstanceBuffer.VertexCount))
            {
                if (textureInstanceBuffer != null)
                    textureInstanceBuffer.Dispose();

                textureInstanceBuffer = new DynamicVertexBuffer(device, texturedInstanceVertexDeclaration, positionInBufferTextured, BufferUsage.WriteOnly);
            }

            if (positionInBufferTextured > 0)
            {
                textureInstanceBuffer.SetData(texturedInstances, 0, positionInBufferTextured, SetDataOptions.Discard);
            }

            device.Indices = textureIndexBuffer;
            device.SetVertexBuffers(textureGeometryBuffer, new VertexBufferBinding(textureInstanceBuffer, 0, 1));
            device.DrawInstancedPrimitives(PrimitiveType.TriangleStrip, 0, 0, textureGeometryBuffer.VertexCount, 0, 2, positionInBufferTextured);

            // now that we've drawn, it's ok to reset positionInBuffer back to zero,
            // and write over any vertices that may have been set previously.
            positionInBufferTextured = 0;
        }
    #endif
    }
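    One thing worth checking here (an assumption on my part, not a confirmed fix): in XNA 4.0 a SpriteBatch.Begin/End pair leaves the device's render states set to whatever the batch used, so an instanced draw that runs afterwards can silently depend on those leftover states, for example RasterizerState.CullNone. Explicitly resetting every state the instanced draw relies on just before DrawInstancedPrimitives removes that dependency. A minimal sketch, with a hypothetical helper name chosen only for illustration:

    using Microsoft.Xna.Framework.Graphics;

    public static class InstancedDrawStates
    {
        // Sets every device state the instanced draw relies on, instead of
        // inheriting whatever SpriteBatch (or the driver default) left behind.
        public static void Apply(GraphicsDevice device)
        {
            device.BlendState = BlendState.Opaque;
            device.DepthStencilState = DepthStencilState.Default;
            device.RasterizerState = RasterizerState.CullNone;   // quads stay visible regardless of winding order
            device.SamplerStates[0] = SamplerState.LinearClamp;
        }
    }

    Calling InstancedDrawStates.Apply(device) at the start of the instancing branch in DrawTextures would make the draw independent of whether the SpriteBatch block ran first; whether it also explains the ATI-specific failure is speculation, since driver differences around DrawInstancedPrimitives on DirectX 9-class ATI hardware could equally be the cause.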

    Read the article

< Previous Page | 334 335 336 337 338 339 340 341 342 343 344 345  | Next Page >