Search Results

Search found 61615 results on 2465 pages for 'execution time'.

Page 340/2465 | < Previous Page | 336 337 338 339 340 341 342 343 344 345 346 347  | Next Page >

  • Friday Fun: Christmas Tree Light Up

    - by Asian Angel
    Another week has thankfully passed by, so it is time to take a break and have some fun. This week's game tests your ability to light up the whole Christmas tree…can you figure out the correct wiring configuration? Christmas Tree Light Up The object of the game is simple…light up all of the bulbs on the Christmas tree. While the game may look quick and easy at first, you will need to do some thinking and experimenting to come up with the correct wiring configuration. The instructions are very simple…just click on any of the wiring sections or bulbs to rotate them. Keep in mind that you may have to click a few times to line the wiring sections or bulbs up as desired since the rotation is always clockwise. Note: You will need to use all of the wiring sections available to completely light the tree up. Each time you will be presented with a different starting setup coming from your power source. Time to hook up the lights! Note: It is recommended that you disable the sound for the game since the "rotation" sounds can be slightly irritating. A nice start, but there are still a lot of bulbs to light up. Getting closer… Almost there…only two more bulbs to light up. Success! Have fun playing! Play Christmas Tree Light Up

    Read the article

  • Coders For Charities

    - by Robz / Fervent Coder
    Last weekend I had the opportunity to give back to the community doing what I love. As geeks we don't usually have this opportunity. The event is called Coders 4 Charities (C4C) and it's a grueling event of nearly 30 hours of coding over the weekend. When you finish you get to present to the charity and all of the other groups what you have completed. From the site: Coders For Charities is a 3-day charity event that pairs charities and local software developers. Charities often do not have the funds to implement a new website or intranet or database solution. Software developers often do not volunteer for charities because their skills do not apply. This event is the perfect marriage of these two needs; software developers volunteering their time to help charities better serve their community through the latest technology! The actual event was lined with multiple charities and about 50 developers, designers, business analysts, etc., each working with a different charity to come up with a solution that they could implement in less than 3 days. C4C provided a place and food for us so that we wouldn't have to leave much during the time we had to implement our solution. They also provided games like Rock Band so we could get away and clear our minds for a few moments if necessary. I don't think we made it down there to play, but the food and drinks were a huge help for us. The charity we picked was Harvest Home. They had a need for an online intranet site where they could track membership and gardening. Over the next few days we worked on a site we could give them. Below is a screen shot with private data marked out. It was an awesome and humbling experience to be able to give back to a charity and I'm happy I was a part of it. I would definitely do it again. How often do we get to use our abilities to volunteer our time to a charity?

    Read the article

  • Getting started with Oracle Database In-Memory Part III - Querying The IM Column Store

    - by Maria Colgan
    In my previous blog posts, I described how to install, enable, and populate the In-Memory column store (IM column store). This week's post focuses on how data is accessed within the IM column store. Let's take a simple query: "What is the most expensive air-mail order we have received to date?" SELECT Max(lo_ordtotalprice) most_expensive_order FROM lineorder WHERE lo_shipmode = 5; The LINEORDER table has been populated into the IM column store and since we have no alternative access paths (indexes or views) the execution plan for this query is a full table scan of the LINEORDER table. You will notice that the execution plan has a new set of keywords "IN MEMORY" in the access method description in the Operation column. These keywords indicate that the LINEORDER table has been marked for INMEMORY and we may use the IM column store in this query. What do I mean by "may use"? There are a small number of cases where we won't use the IM column store even though the object has been marked INMEMORY. This is similar to how the keyword STORAGE is used on Exadata environments. You can confirm that the IM column store was actually used by examining the session level statistics, but more on that later. For now let's focus on how the data is accessed in the IM column store and why it's faster to access the data in the new column format, for analytical queries, rather than the buffer cache. There are four main reasons why accessing the data in the IM column store is more efficient. 1. Access only the column data needed The IM column store only has to scan two columns – lo_shipmode and lo_ordtotalprice – to execute this query, while the traditional row store or buffer cache has to scan all of the columns in each row of the LINEORDER table until it reaches both the lo_shipmode and the lo_ordtotalprice columns. 2. Scan and filter data in its compressed format When data is populated into the IM column store it is automatically compressed using a new set of compression algorithms that allow WHERE clause predicates to be applied against the compressed formats. This means the volume of data scanned in the IM column store for our query will be far less than for the same query in the buffer cache, where it will scan the data in its uncompressed form, which could be 20X larger. 3. Prune out any unnecessary data within each column The fastest read you can execute is the read you don't do. In the IM column store a further reduction in the amount of data accessed is possible due to the In-Memory Storage Indexes (IM storage indexes) that are automatically created and maintained on each of the columns in the IM column store. IM storage indexes allow data pruning to occur based on the filter predicates supplied in a SQL statement. An IM storage index keeps track of minimum and maximum values for each column in each In-Memory Compression Unit (IMCU). In our query the WHERE clause predicate is on the lo_shipmode column. The IM storage index on the lo_shipmode column is examined to determine if our specified column value 5 exists in any IMCU by comparing the value 5 to the minimum and maximum values maintained in the storage index. If the value 5 is outside the minimum and maximum range for an IMCU, the scan of that IMCU is avoided. For the IMCUs where the value 5 does fall within the min/max range, an additional level of data pruning is possible via the metadata dictionary created when dictionary-based compression is used on the IMCU. The dictionary contains a list of the unique column values within the IMCU. Since we have an equality predicate we can easily determine if 5 is one of the distinct column values or not. The combination of the IM storage index and dictionary-based pruning enables us to scan only the necessary IMCUs. 4. Use SIMD to apply filter predicates For the IMCUs that need to be scanned, Oracle takes advantage of SIMD vector processing (Single Instruction processing Multiple Data values). Instead of evaluating each entry in the column one at a time, SIMD vector processing allows a set of column values to be evaluated together in a single CPU instruction. The column format used in the IM column store has been specifically designed to maximize the number of column entries that can be loaded into the vector registers on the CPU and evaluated in a single CPU instruction. SIMD vector processing enables Oracle Database In-Memory to scan billions of rows per second per core versus the millions of rows per second per core scan rate that can be achieved in the buffer cache. I mentioned earlier in this post that in order to confirm the IM column store was used, we need to examine the session level statistics. You can monitor the session level statistics by querying the performance views v$mystat and v$statname. All of the statistics related to the In-Memory Column Store begin with IM. You can see the full list of these statistics by typing: column display_name format a30 SELECT display_name FROM v$statname WHERE display_name LIKE 'IM%'; If we check the session statistics after we execute our query the results would be as follows: SELECT Max(lo_ordtotalprice) most_expensive_order FROM lineorder WHERE lo_shipmode = 5; SELECT display_name FROM v$statname WHERE display_name IN ('IM scan CUs columns accessed', 'IM scan segments minmax eligible', 'IM scan CUs pruned'); As you can see, only 2 IMCUs were accessed during the scan, as the majority of the IMCUs (44) in the LINEORDER table were pruned out thanks to the storage index on the lo_shipmode column. In next week's post I will describe how you can control which queries use the IM column store and which don't. +Maria Colgan
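    As a concrete illustration of the statistics check described above, here is a minimal sketch of how one might join v$mystat and v$statname for the current session; the statistic names are the ones mentioned in the post, and the join on statistic# is the standard dictionary-view approach:

        -- Run in the same session, immediately after the query of interest.
        SELECT sn.display_name, ms.value
        FROM   v$mystat ms
               JOIN v$statname sn ON sn.statistic# = ms.statistic#
        WHERE  sn.display_name IN ('IM scan CUs columns accessed',
                                   'IM scan segments minmax eligible',
                                   'IM scan CUs pruned')
        ORDER  BY sn.display_name;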

    Read the article

  • The Cloud is STILL too slow!

    - by harry.foxwell(at)oracle.com
    If you've been in the computing industry long enough to remember dialup modems and other "ancient" technologies, you might be tempted to marvel at today's wonderfully powerful multicore PCs, ginormous disks, and blazingly fast networks.  Wow, you're in Internet Nirvana, right?  Well, no, not by a long shot. Considering the exponentially growing expectations of what the Web, that is, "the Cloud", is supposed to provide, today's Web/Cloud services are still way too slow. Already we are seeing cloud-enabled consumer devices that are stressing even the most advanced public network services.  Like the iPad and its competitors, ever more powerful smart-phones, and an imminent horde of special purpose gadgets such as the proposed "cloud camera" (see http://gdgt.com/discuss/it-time-cloud-camera-found-out-cnr/ ). And at the same time that the number and type of cloud services are growing, user tolerance for even the slightest of download delays is rapidly decreasing.  Ten years ago Web developers followed the "8-Second Rule" (the average time a typical Web user would tolerate for a page to download and render).  Not anymore; now it's less than 3 seconds, and only a bit longer for mobile devices (see http://www.technologyreview.com/files/54902/GoogleSpeed_charts.pdf).  How spoiled we've become! Google, among others, recognizes this problem and is working to encourage the development of a faster Web (see http://www.technologyreview.com/web/32338/). They, along with their competitors and ISPs, will have to encourage and support significantly better Web performance in order to provide the types of services envisioned for the Cloud.  How will they do this? Through the development of faster components, better use of caching technologies, and the really tough one - exploiting parallelism. Not that parallel technologies like multicore processors are hard to build...we already have them.  It's just that we're not that good yet at using them effectively.  And if we don't get better, users will abandon cloud-based services...in less than 3 seconds.

    Read the article

  • How to deal with the "programming blowhard"?

    - by Peter G.
    (Repost - I posted this in the wrong section before, sorry.) I'm sure everyone has run into this person at one point or another: someone catches wind of your project or idea and initially shows some interest. You get to talking about some of your methods, and usually around this time they interject, stating how you should use method X instead, or just use library Y - not as a friendly suggestion, but bordering on a commandment, often repeating the same advice over and over like an overzealous parrot. Personally, I like to reinvent the wheel when I'm learning, or even just for fun, even if it turns out worse than what's been done before. But this person apparently cannot fathom recreating ANY utility for such purposes, or possibly trying something that doesn't strictly follow traditional OOP practices, and will settle for nothing except their sense of perfection, and thus naturally heave their criticism sludge down my ears full force. To top it off, they eventually start justifying their advice by listing all the incredibly complex things they've coded single-handedly (usually along the lines of "trust me, I've made/used program X for a long time, blah blah blah"). Now, I'm far from being a programming master, I'm probably not even that good, and as such I value advice and critique, but I think advice/critique has a time and place. There is also a big difference between being helpful and being narcissistic. In the past I probably would have used a somewhat stronger George Carlin-style dismissal, but I don't think burning bridges is the best approach anymore. Maybe I'm just an asshole, but do you have any advice on how to deal with this kind of verbal flogging?

    Read the article

  • The Next RAC, ASM and Linux Forum. May 4, 2010 Beit HP Raanana

    - by alejandro.vargas
    The next RAC, ASM and Linux forum will take place next week; there is still time to register: Israel Oracle Users Group RAC, ASM and Linux Forum. This time we will have a panel formed by Principal Oracle Advanced Customer Services Engineers and RAC experts Galit Elad and Nickita Chernovski and Senior Oracle Advanced Customer Services Engineers and RAC experts Roy Burstein and Dorit Noga. They will address the subject: 5 years of experience with RAC at Israeli customers, lessons learned. It is a wonderful opportunity to meet the people who are present at most major implementations and have helped to solve all the major issues over the last years. In addition we will have 2 most interesting customer presentations: Visa Cal DBA Team Leader Harel Safra will talk about their experience with scalability using standard Linux servers for their mission-critical data warehouse. Bank Discount Infrastructure DBA Uril Levin, who is in charge of the Bank's Backup and Recovery Project, will speak about their corporate backup solution using RMAN, which includes an end-to-end solution for VLDBs and mission-critical databases - one of the most interesting RMAN implementations in Israel. This time I will not be able to attend myself as I'm abroad on business; Galit Elad will greet you and lead the meeting. I'm sure you will enjoy a very, very interesting meeting. Best Regards Alejandro

    Read the article

  • Exalogic – The One Day Installation Challenge

    - by james.bayer
    It's a really exciting time for the extended WebLogic community as we enjoy seeing the impressive results of Exalogic deployments.  At Oracle Open World, a lot of people I spoke with came away impressed with the raw performance.  However, Exalogic offers a lot more than just raw performance.  I had the pleasure of working with Ram Sivaram during one of the Exalogic training sessions in Santa Clara.  In this video diary, he shows the Exalogic machine arrive on the shipping dock, get unpacked, wired up, powered on, configured, and installed with a WebLogic Server cluster in just about 10 hours.  I've worked with customers in the past that have taken several weeks or longer to get an environment ready after the hardware arrives.  This typically involves many different specialized teams in their organization.  Mohamad Afshar just wrote a great explanation of the benefits of Engineered Systems, contrasting them with the status quo.  Being able to streamline deployment of middleware capacity will have a lot of value for customers, shortening time to deployment.  Thanks for the video, Ram - you've set a high bar; we'll see if anyone can top your time!

    Read the article

  • Instructor Insight: Using the Container Database in Oracle Database 12c

    - by Breanne Cooley
    The first time I examined the Oracle Database 12c architecture, I wasn’t quite sure what I thought about the Container Database (CDB). In the current release of the Oracle RDBMS, the administrator now has a choice of whether or not to employ a CDB. Bundling Databases Inside One Container In today’s IT industry, consolidation is a common challenge. With potentially hundreds of databases to manage and maintain, an administrator will require a great deal of time and resources to upgrade and patch software. Why not consider deploying a container database to streamline this activity? By “bundling” several databases together inside one container, in the form of a pluggable database, we can save on overhead process resources and CPU time. Furthermore, we can reduce the human effort required for periodically patching and maintaining the software. Minimizing Storage Most IT professionals understand the concept of storage, as in solid state or non-rotating. Let’s take one-to-many databases and “plug” them into ONE designated container database. We can minimize many redundant pieces that would otherwise require separate storage and architecture, as was the case in previous releases of the Oracle RDBMS. The data dictionary can be housed and shared in one CDB, with individual metadata content for each pluggable database. We also won’t need as many background processes either, thus reducing the overhead cost of the CPU resource. Improve Security Levels within Each Pluggable Database  We can now segregate the CDB-administrator role from that of the pluggable-database administrator as well, achieving improved security levels within each pluggable database and within the CDB. And if the administrator chooses to use the non-CDB architecture, everything is backwards compatible, too.  The bottom line: it's a good idea to at least consider using a CDB. -Christopher Andrews, Senior Principal Instructor, Oracle University
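    To make the "plugging in" idea above concrete, here is a minimal sketch of creating and opening a pluggable database inside an existing 12c CDB; the PDB name, admin credentials and file paths are hypothetical, not taken from the article:

        -- Create a new PDB by cloning the seed (illustrative names and paths).
        CREATE PLUGGABLE DATABASE salespdb
          ADMIN USER pdbadmin IDENTIFIED BY "ChangeMe123"
          FILE_NAME_CONVERT = ('/oradata/cdb1/pdbseed/', '/oradata/cdb1/salespdb/');

        -- A newly created PDB starts in MOUNTED mode; open it for use.
        ALTER PLUGGABLE DATABASE salespdb OPEN;

        -- Point the current session at the new PDB.
        ALTER SESSION SET CONTAINER = salespdb;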

    Read the article

  • Issue 15: SVP Focus

    - by rituchhibber
         SVP FOCUS -- Chris Baker SVP Oracle Worldwide ISV-OEM-Java Sales Chris Baker is the Global Head of ISV/OEM Sales responsible for working with ISV/OEM partners to maximise Oracle's business through those partners, whilst maximising those partners' business to their end users. Chris works with partners, customers, innovators, investors and employees to develop innovative business solutions using Oracle products, services and skills. RESOURCES -- Oracle PartnerNetwork (OPN) OPN Solutions Catalog Oracle Exastack Program Oracle Exastack Optimized Oracle Cloud Computing Oracle Engineered Systems Oracle and Java "By taking part in marketing activities, our partners accelerate their sales cycles." -- Firstly, could you please explain Oracle's current strategy for ISV partners, globally and in EMEA? Oracle customers use independent software vendor (ISV) applications to run their businesses. They use them to generate revenue and to fulfil obligations to their own customers. Our strategy is very straightforward. We want all of our ISV partners and OEMs to concentrate on the things that they do best—building applications to meet the unique industry and functional requirements of their customers. We want to ensure that we deliver a best-in-class application platform so ISVs are free to concentrate their effort on their application functionality and user experience. We invest over four billion dollars in research and development every year, and we want our ISVs to benefit from all of that investment in operating systems, virtualisation, databases, middleware, engineered systems, and other hardware. By doing this, we help them to reduce their costs, gain more consistency and agility for quicker implementations, and also rapidly differentiate themselves from other application vendors. It's all about simplification because we believe that around 25 to 30 percent of the development costs incurred by many ISVs are caused by customising infrastructure and have nothing to do with their applications. Our strategy is to enable our ISV partners to standardise their application platform using engineered architecture, so they can write once to the Oracle stack and deploy seamlessly in the cloud, on-premise, or in hybrid deployments. It's really important that architecture is the same in order to keep cost and time overheads at a minimum, so we provide standardisation and an environment that enables our ISVs to concentrate on the core business that makes them the most money and brings them success. How do you believe this strategy is helping the ISVs to work hand-in-hand with Oracle to ensure that end customers get the industry-leading solutions that they need? We work with our ISVs not just to help them be successful, but also to help them market themselves. We have something called the 'Oracle Exastack Ready Program', which enables ISVs to publicise themselves as 'Ready' to run the core software platforms that run on Oracle's engineered systems including Exadata and Exalogic. So, for example, they can become 'Database Ready' which means that they use the latest version of Oracle Database and therefore can run their application without modification on Exadata or the Oracle Database Appliance. Alternatively, they can become WebLogic Ready, Oracle Linux Ready and Oracle Solaris Ready which means they run on the latest release and therefore can run their application, with no new porting work, on Oracle Exalogic. 
Those 'Ready' logos are important in helping ISVs advertise to their customers that they are using the latest technologies which have been fully tested. We now also have Exadata Ready and Exalogic Ready programmes which allow ISVs to promote the certification of their applications on these platforms. This highlights these partners to Oracle customers as having solutions that run fluently on the Oracle Exadata Database Machine, the Oracle Exalogic Elastic Cloud or one of our other engineered systems. This makes it easy for customers to identify solutions and provides ISVs with an avenue to connect with Oracle customers who are rapidly adopting engineered systems. We have also taken this programme to the next level in the shape of 'Oracle Exastack Optimized' for partners whose applications run best on the Oracle stack and have invested the time to fully optimise application performance. We ensure that Exastack Optimized partner status is promoted and supported by press releases, and we help our ISVs go to market and differentiate themselves through the use of our technology and the standardisation it delivers. To date we have had several hundred organisations successfully work through our Exastack Optimized programme. How does Oracle's strategy of offering pre-integrated open platform software and hardware allow ISVs to bring their products to market more quickly? One of the problems for many ISVs is that they have to think very carefully about the technology on which their solutions will be deployed, particularly in the cloud or hosted environments. They have to think hard about how they secure these environments, whether the concern is, for example, middleware, identity management, or securing personal data. If they don't use the technology that we build-in to our products to help them to fulfil these roles, they then have to build it themselves. This takes time, requires testing, and must be maintained. By taking advantage of our technology, partners will now know that they have a standard platform. They will know that they can confidently talk about implementation being the same every time they do it. Very large ISV applications could once take a year or two to be implemented at an on-premise environment. But it wasn't just the configuration of the application that took the time, it was actually the infrastructure - the different hardware configurations, operating systems and configurations of databases and middleware. Now we strongly believe that it's all about standardisation and repeatability. It's about making sure that our partners can do it once and are then able to roll it out many different times using standard componentry. What actions would you recommend for existing ISV partners that are looking to do more business with Oracle and its customer base, not only to maximise benefits, but also to maximise partner relationships? My team, around the world and in the EMEA region, is available and ready to talk to any of our ISVs and to explore the possibilities together. We run programmes like 'Excite' and 'Insight' to help us to understand how we can help ISVs with architecture and widen their environments. But we also want to work with, and look at, new opportunities - for example, the Machine-to-Machine (M2M) market or 'The Internet of Things'. Over the next few years, many millions, indeed billions of devices will be collecting massive amounts of data and communicating it back to the central systems where ISVs will be running their applications. 
The only way that our partners will be able to provide a single vendor 'end-to-end' solution is to use Oracle integrated systems at the back end and Java on the 'smart' devices collecting the data—a complete solution from device to data centre. So there are huge opportunities to work closely with our ISVs, using Oracle's complete M2M platform, to provide the infrastructure that enables them to extract maximum value from the data collected. If any partners don't know where to start or who to contact, then they can contact me directly at [email protected] or indeed any of our teams across the EMEA region. We want to work with ISVs to help them to be as successful as they possibly can through simplification and speed to market, and we also want all of the top ISVs in the world to be based on Oracle. What opportunities are immediately opened to new ISV partners joining the OPN? As you know, OPN is very, very important. New members will discover a huge amount of content that instantly becomes accessible to them. They can access a wealth of no-cost training and enablement materials to build their expertise in Oracle technology. They can download Oracle software and use it for development projects. They can help themselves become more competent by becoming part of a true community and uncovering new opportunities by working with Oracle and their peers in the Oracle Partner Network. As well as publishing massive amounts of information on OPN, we also hold our global Oracle OpenWorld event, at which partners play a huge role. This takes place at the end of September and the beginning of October in San Francisco. Attending ISV partners have an unrivalled opportunity to contribute to elements such as the OpenWorld / OPN Exchange, at which they can talk to other partners and really begin thinking about how they can move their businesses on and play key roles in a very large ecosystem which revolves around technology and standardisation. Finally, are there any other messages that you would like to share with the Oracle ISV community? The crucial message that I always like to reinforce is architecture, architecture and architecture! The key opportunities that ISVs have today revolve around standardising their architectures so that they can confidently think: "Will I be able to do exactly the same thing whenever a customer is looking to deploy on-premise, hosted or in the cloud?" The right architecture is critical to being competitive and to really start changing the game. We want to help our ISV partners to do just that; to establish standard architecture and to seize the opportunities it opens up for them. New market opportunities like M2M are enormous - just look at how many devices are all around you right now. We can help our partners to interface with these devices more effectively while thinking about their entire ecosystem, rather than just the piece that they have traditionally focused upon. With standardised architecture, we can help people dramatically improve their speed, reach, agility and delivery of enhanced customer satisfaction and value all the way from the Java side to their centralised systems. All Oracle ISV partners must take advantage of these opportunities, which is why Oracle will continue to invest in and support them. Oracle OpenWorld 2010 Whether you attended Oracle OpenWorld 2009 or not, don't forget to save the date now for Oracle OpenWorld 2010. The event will be held a little earlier next year, from 19th-23rd September, so please don't miss out. 
With thousands of sessions and hundreds of exhibits and demos already lined up, there's no better place to learn how to optimise your existing systems, get an inside line on upcoming technology breakthroughs, and meet with your partner peers, Oracle strategists and even the developers responsible for the products and services that help you get better results for your end customers. Register Now for Oracle OpenWorld 2010! Perhaps you are interested in learning more about Oracle OpenWorld 2010, but don't wish to register at this time? Great! Please just enter your contact information here and we will contact you at a later date. How to Exhibit at Oracle OpenWorld 2010 Sponsorship Opportunities at Oracle OpenWorld 2010 Advertising Opportunities at Oracle OpenWorld 2010

    Read the article

  • Troubleshoot broken ZFS

    - by BBK
    I have one zpool called tank in RAIDZ1 with 5x1TB SATA HDDs. I'm using Ubuntu Server 11.10 Oneiric, kernel 3.0.0-15-server. I installed ZFS from the PPA and I'm also using zfs-auto-snapshot. The ZFS file system hangs my computer when the zfs module is loaded into the kernel. Before that I had created a few new file systems: zfs create -V 10G tank/iscsi1 zfs create -V 10G tank/iscsi2 zfs create -V 10G tank/iscsi3 I shared them through iSCSI via the /dev/tank/iscsiX path. And my computer sometimes started hanging when I used tank/iscsiX over iSCSI; I do not know why exactly. I switched off iSCSI and started to remove these file systems: zfs destroy tank/iscsi3 Since I'm also using zfs-auto-snapshot I had snapshots, and without the -r key my command did not destroy the FS. So I issued the next command: zfs destroy tank/iscsi3 -r The tank/iscsi3 FS was clean and contained nothing - it was destroyed without an issue. But tank/iscsi2 and tank/iscsi1 contained a lot of information. I tried zfs destroy tank/iscsi2 -r After some time my computer hung. I rebooted the computer. It didn't boot very fast; the HDDs started working like crazy, making a lot of noise, and after 15 minutes the HDDs stopped going crazy and the OS booted at last. All seemed to be OK - tank/iscsi2 was destroyed. After that the file systems in the tank were accessible and zpool status showed no corruption. I issued the next command: zfs destroy tank/iscsi1 -r The situation repeated - after some time my computer hung. But this time ZFS seems not to have healed itself. After the computer switched on it started to work: loading scripts and kernel modules, but once ZFS started to work it hung my computer. I need to recover the other ZFS file systems lying in the same zpool. A few months ago I backed up the OS to a flash drive. Booting from the backed-up OS and importing the pool has the same result - the OS starts hanging. How can I recover my data from the ZFS tank?

    Read the article

  • Partner Webcast - Oracle SOA Suite 12c: Connect 4 Cloud, Mobile, IoT with On-premise

    - by Thanos Terentes Printzios
    The pace of new business projects continues to grow, from increasing customer self-service to seamlessly connecting all your back office and in-the-field applications. At the same time, increased integration complexity may seem inevitable as organizations are suddenly faced with the requirement to support three new integration challenges:  » Cloud Integration - integrate with the cloud, rapidly integrating a growing list of cloud applications with existing applications » Mobile Integration - the urgency to mobile-enable existing applications » IoT Integration - begin development on the latest trend of connecting Internet of Things (IoT) devices to your existing infrastructure. Oracle SOA Suite 12c, the latest version of the industry's most complete and unified application integration and SOA solution, aims to simplify, accelerate and optimize integrations. Oracle SOA Suite 12c and its associated products, Oracle Managed File Transfer, Oracle Cloud and Application Adapters, B2B and healthcare integration, offer the industry's most highly integrated platform for solving the increased integration challenges. Oracle SOA Suite 12c is a complete, integrated and best-of-breed platform. It enables next generation integration capabilities through: · A unified toolset for the development of services and composite applications. · A standards-based platform that is service enabled and easily consumable by modern web applications, allowing enterprises to quickly and easily adapt to changes in their business and IT environments. · Greater visibility, controls and analytics to govern how services and processes are deployed, reused and changed across their entire lifecycle. Join us to find out more about the new features of Oracle SOA Suite 12c and how it enables you to reduce time to market for new project integration and to reduce integration cost and complexity. Oracle SOA Suite 12c gives you the ability to simplify by integrating the disparate requirements of cloud, mobile, and IoT devices with existing on-premise applications. Agenda: Oracle SOA Suite 12c New Features Cloud Integration Mobile Enablement Internet of Things (IoT) Summary - Q&A Delivery Format This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. Presenter: Heba Fouad – FMW Specialist, Technology Adoption, ECEMEA Partner Business Development Date: Thursday, August 28th, 10pm CEST (8am UTC/11am EEST) Duration: 1 hour Register Here For any questions please contact us at partner.imc-AT-beehiveonline.oracle-DOT-com

    Read the article

  • Best practice for setting Effect parameters in XNA

    - by hichaeretaqua
    I want to ask if there is a best practice for setting Effect parameters in XNA. Or in other words, what exactly happens when I call pass.Apply()? I can imagine multiple scenarios: Each time Apply is called, all effect parameters are transferred to the GPU, and therefore it has no real influence how often I set a parameter. Each time Apply is called, only the parameters that were re-set are transferred, so Set operations that don't actually set a new value should be avoided by caching. Each time Apply is called, only the parameters that actually changed are transferred, so caching Set operations is useless. This whole question is moot because none of the mentioned ways has any noteworthy impact on game performance. So the final question: is it useful to implement some caching of set operations like the following?

        private Matrix _world;
        public Matrix World
        {
            get { return _world; }
            set
            {
                if (value == _world) return;   // skip the redundant SetValue call
                _effect.Parameters["xWorld"].SetValue(value);
                _world = value;
            }
        }

    Thanking you in anticipation.

    Read the article

  • Tuning Red Gate: #4 of Some

    - by Grant Fritchey
    First time connecting to these servers directly (keys to the kingdom, bwa-ha-ha-ha. oh, excuse me), so I'm going to take a look at the server properties, just to see if there are any issues there. Max memory is set, cool, first possible silly mistake clear. In fact, these look to be nicely set up. Oh, I'd like to see the ANSI Standards set by default, but it's not a big deal. The default location for database data is the F:\ drive, where I saw all the activity last time. Cool, the people maintaining the servers in our company listen, parallelism threshold is set to 35 and optimize for ad hoc is enabled. No shocks, no surprises. The basic setup is appropriate. On to the problem database. Nothing wrong in the properties. The database is in SIMPLE recovery, but I think it's a reporting system, so no worries there. Again, I'd prefer to see the ANSI settings for connections, but that's the worst thing I can see. Time to look at the queries, tables, indexes and statistics because all the information I've collected over the last several days suggests that we're not looking at a systemic problem (except possibly not enough memory), but at the traditional tuning issues. I just want to note that, I started looking at the system, not the queries. So should you when tuning your environment. I know, from the data collected through SQL Monitor, what my top poor performing queries are, and the most frequently called, etc. I'm starting with the most frequently called. I'm going to get the execution plan for this thing out of the cache (although, with the cache dumping constantly, I might not get it). And it's not there. Called 1.3 million times over the last 3 days, but it's not in cache. Wow. OK. I'll see what's in cache for this database: SELECT  deqs.creation_time,         deqs.execution_count,         deqs.max_logical_reads,         deqs.max_elapsed_time,         deqs.total_logical_reads,         deqs.total_elapsed_time,         deqp.query_plan,         SUBSTRING(dest.text, (deqs.statement_start_offset / 2) + 1,                   (deqs.statement_end_offset - deqs.statement_start_offset) / 2                   + 1) AS QueryStatement FROM    sys.dm_exec_query_stats AS deqs         CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest         CROSS APPLY sys.dm_exec_query_plan(deqs.plan_handle) AS deqp WHERE   dest.dbid = DB_ID('Warehouse') AND deqs.statement_end_offset > 0 AND deqs.statement_start_offset > 0 ORDER BY deqs.max_logical_reads DESC ; And looking at the most expensive operation, we have our first bad boy: Multiple table scans against very large sets of data and a sort operation. a sort operation? It's an insert. Oh, I see, the table is a heap, so it's doing an insert, then sorting the data and then inserting into the primary key. First question, why isn't this a clustered index? Let's look at some more of the queries. The next one is deceiving. Here's the query plan: You're thinking to yourself, what's the big deal? Well, what if I told you that this thing had 8036318 reads? I know, you're looking at skinny little pipes. Know why? Table variable. Estimated number of rows = 1. Actual number of rows. well, I'm betting several more than one considering it's read 8 MILLION pages off the disk in a single execution. We have a serious and real tuning candidate. Oh, and I missed this, it's loading the table variable from a user defined function. Let me check, let me check. YES! A multi-statement table valued user defined function. And another tuning opportunity. 
This one's a beauty, seriously. Did I also mention that they're doing a hash against all the columns in the physical table? I'm sure that won't lead to scans of a 500,000 row table, no, not at all. OK. I lied. Of course it does. At least it's on the top part of the Loop, which means the scan is only executed once. I just did a cursory check on the next several poor performers - all calling the UDF. I think I found a big tuning opportunity. At this point, I'm typing up internal emails for the company. Someone just had their baby called ugly. In addition to a series of suggested changes that we need to implement, I'm also apologizing for being such an unkind monster as to question whether that third eye & those flippers belong on such an otherwise lovely child.
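    As a rough illustration of the two fixes hinted at above (the heap that sorts before inserting into its primary key, and the multi-statement table valued UDF with its row estimate of 1), here is a minimal T-SQL sketch. All object names are hypothetical; this shows the general shape of the change, not the actual schema from the post.

        -- 1. Turn the heap's primary key into a clustered index so inserts
        --    no longer need a separate sort step before hitting the key.
        ALTER TABLE dbo.FactTable DROP CONSTRAINT PK_FactTable;
        ALTER TABLE dbo.FactTable
            ADD CONSTRAINT PK_FactTable PRIMARY KEY CLUSTERED (FactTableID);

        -- 2. Replace the multi-statement table valued function (fixed row
        --    estimate) with an inline table valued function, which lets the
        --    optimizer use real statistics on the underlying table.
        CREATE FUNCTION dbo.GetRecentOrders (@CustomerID int)
        RETURNS TABLE
        AS
        RETURN
        (
            SELECT OrderID, OrderDate, TotalDue
            FROM   dbo.Orders
            WHERE  CustomerID = @CustomerID
        );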

    Read the article

  • Questioning pythonic type checking

    - by Pace
    I've seen countless times the following approach suggested for "taking in a collection of objects and doing X if it is a Y, ignoring the object otherwise":

        def quackAllDucks(ducks):
            for duck in ducks:
                try:
                    duck.quack("QUACK")
                except AttributeError:
                    # Not a duck, can't quack, don't worry about it
                    pass

    The alternative implementation below always gets flak for the performance hit caused by type checking:

        def quackAllDucks(ducks):
            for duck in ducks:
                if hasattr(duck, "quack"):
                    duck.quack("QUACK")

    However, it seems to me that in 99% of scenarios you would want to use the second solution because of the following: If the user gets the parameters wrong then they will not be treated like a duck and there will be no indication. A lot of time will be wasted debugging why there is no quacking going on until the user finally realizes his silly mistake. The second solution would throw a stack trace as soon as the user tried to quack. If the user has any bugs in their quack() method which cause an AttributeError then those bugs will be silently swallowed. Once again time will be wasted digging for the bug when the second solution would simply give a stack trace showing the immediate issue. In fact, it seems to me that the only time you would ever want to use the first method is when: The block of code in question is in an extremely performance critical section of your application. Following the principle of "avoid premature optimization" you would only realize this, of course, after you had implemented the safer approach and found it to be a bottleneck. There are many types of quacking objects out there and you are only interested in quacking objects that quack with a very specific set of arguments (this seems to be a very rare case to me). Given this, why is it that so many people prefer the first approach over the second approach? What is it that I am missing? Also, I realize there are other solutions (such as using ABCs) but these are the two solutions I seem to see most often for the basic case.

    Read the article

  • Integrating different branches from external sources into a single Mercurial repository

    - by dukeofgaming
    I'm currently working in a company using Perforce and am making way for distributed version control with Mercurial. I've had success importing Perforce history using the perfarce extension (quite a suitable name, I laugh every time I see/say it); however, this only works with a single branch at a time. Here's how my P4 integration setup works: In Perforce, create a "client", which is kind of a description of what you will be constantly updating/checking-out. This can only address one branch at a time (trunk or other). Once you do this, run hg clone p4://<server>/<client_name> Go to .hg/hgrc and put the perforce path line: perforce = p4://<server>/<client_name> Work normally with the code under Mercurial, do hg pull perforce to sync up, hg push to export a changelist What I'd like to be able to do is have a perforce path per branch and have everything work in the same repository. Now, pushing is not a problem; however, if I pull the history from another branch it would end up in the default branch. I'd like to be able to do something like hg pull perforce-R5 and have it land in Mercurial's R5 branch. Even if I have no merging history, it would be sweet enough to be able to preserve it. There are also other plugins for CVCSs that let you integrate Mercurial, but AFAIK the Subversion one has the same problem. I don't think there is a straightforward way of doing this, but as long as I could automate the process with some hooks and scripts on a single Mercurial machine, that would be good enough.

    Read the article

  • MOSS 2010 Deploy Farm Solution using STSADM

    - by H(at)Ni
    Today I've been working on deploying farm solutions to another farm and I was surprised that it can only be done using STSADM.exe. Below are the steps that I took to get it to work: 1. Use the addsolution command and give it the path of the wsp file, which was something like this: stsadm -o addsolution -filename C:\MySolution.wsp 2. Use the deploysolution command and give the solution name as a parameter like this: stsadm -o deploysolution -name MySolution.wsp -immediate -allowgacdeployment If you then encounter an error saying: "The timer job for this operation has been created, but it will fail because the administrative service for this server is not enabled. If the timer job is scheduled to run at a later time, you can run the jobs all at once using stsadm.exe -o execadmsvcjobs. To avoid this problem in the future, enable the Microsoft SharePoint Foundation administrative service, or run your operation through the STSADM.exe command line utility." then use the following command to force the execution of your deployment: Start-SPAdminJob And that's it, you'll have it working as expected :)

    Read the article

  • Code Generation and IDEs vs. Writing by Hand

    - by sytycs
    I have been programming for about a year now. Pretty soon I realized that I needed a great tool for writing code and learned Vim. I was happy with C and Ruby and never liked the idea of an IDE, which was encouraged by a lot of reading about programming.[1] However, I started my first Java project. In a CS course we were using Visual Paradigm and were encouraged to let the program generate our code from a class diagram. I did not like that idea because: Our class diagram was buggy. Students more experienced in Java said they would write the code by hand. I had never written any Java before and would not understand a lot of the generated code. So I took a different approach and wrote all methods by hand (getters and setters included). My team members wrote their parts (partly generated by VP) in an IDE and I was "forced" to use it too. I realized they had generated equal amounts of code in a shorter amount of time and did not spend a lot of time setting their CLASSPATH and writing scripts for compiling that son of a b***. Additionally, we had to implement a GUI and I don't see how we could have done that in a sane manner in Vim. So here is my problem: I fell in love with Vim and the Unix way, but it looks like for getting this job done (on time) the IDE/code generation approach is superior. Do you have similar experiences? Is Java, by the nature of the language, just more suitable for an IDE/code-generated approach? Or am I lacking the knowledge to produce equal amounts of code by hand? [1] http://heather.cs.ucdavis.edu/~matloff/eclipse.html

    Read the article

  • High performance with Xen, VMware or VirtualBox

    - by Marchosius
    I was wondering which is the best method to go with if I want to play Windows-based games. I do not want to go with the dual-boot method, as this will cost me time to restart, log in and run an OS to do my work or pass the time, and some of my apps rely on Windows and my graphics card to run, for example Daz3D, Photoshop, Flash, etc. Now, I have read about HVM (hardware virtual machines) and I know about the 3D virtualisation of VMware and VirtualBox. However, the latter two virtualise 3D without using the full power of the GPU, so this option won't perform perfectly for the latest games like D3. I was wondering if anyone has experience with HVM (like Xen, if I am not mistaken) and has tried something similar to access the full power of the GPU and successfully run newer games and other products relying on the GPU? This will be the first time setting up an HVM; I have no experience with this, so I don't know what to expect. This will help a lot, as I do not want to revert back to Windows or, as mentioned, do a dual boot.

    Read the article

  • JavaScript compilers: Mozilla does not intend to be outpaced by Google, but acknowledges the work of its developers

    JavaScript compiler: Firefox does not intend to be outpaced by Chrome, but Mozilla acknowledges the work of Google's developers. The execution speed of JavaScript in browsers seems to occupy a key place, and engine improvements are always highlighted with each new browser release. This week, for the release of Chrome OS, Chrome (the browser) was no exception to the rule, with very heavy communication around Crankshaft. And the first comparisons based on the first benchmark were not long in coming. David Mandeling, a member of the JavaScript team at Mozilla, has just published on his blog ...

    Read the article

  • problem with loading in .FBX meshes in DirectX 10

    - by N0xus
    I'm trying to load in meshes into DirectX 10. I've created a bunch of classes that handle it and allow me to call in a mesh with only a single line of code in my main game class. How ever, when I run the program this is what renders: In the debug output window the following errors keep appearing: D3D10: ERROR: ID3D10Device::DrawIndexed: Input Assembler - Vertex Shader linkage error: Signatures between stages are incompatible. The reason is that Semantic 'TEXCOORD' is defined for mismatched hardware registers between the output stage and input stage. [ EXECUTION ERROR #343: DEVICE_SHADER_LINKAGE_REGISTERINDEX ] D3D10: ERROR: ID3D10Device::DrawIndexed: Input Assembler - Vertex Shader linkage error: Signatures between stages are incompatible. The reason is that the input stage requires Semantic/Index (POSITION,0) as input, but it is not provided by the output stage. [ EXECUTION ERROR #342: DEVICE_SHADER_LINKAGE_SEMANTICNAME_NOT_FOUND ] The thing is, I've no idea how to fix this. The code I'm using does work and I've simply brought all of that code into a new project of mine. There are no build errors and this only appears when the game is running The .fx file is as follows: float4x4 matWorld; float4x4 matView; float4x4 matProjection; struct VS_INPUT { float4 Pos:POSITION; float2 TexCoord:TEXCOORD; }; struct PS_INPUT { float4 Pos:SV_POSITION; float2 TexCoord:TEXCOORD; }; Texture2D diffuseTexture; SamplerState diffuseSampler { Filter = MIN_MAG_MIP_POINT; AddressU = WRAP; AddressV = WRAP; }; // // Vertex Shader // PS_INPUT VS( VS_INPUT input ) { PS_INPUT output=(PS_INPUT)0; float4x4 viewProjection=mul(matView,matProjection); float4x4 worldViewProjection=mul(matWorld,viewProjection); output.Pos=mul(input.Pos,worldViewProjection); output.TexCoord=input.TexCoord; return output; } // // Pixel Shader // float4 PS(PS_INPUT input ) : SV_Target { return diffuseTexture.Sample(diffuseSampler,input.TexCoord); //return float4(1.0f,1.0f,1.0f,1.0f); } RasterizerState NoCulling { FILLMODE=SOLID; CULLMODE=NONE; }; technique10 Render { pass P0 { SetVertexShader( CompileShader( vs_4_0, VS() ) ); SetGeometryShader( NULL ); SetPixelShader( CompileShader( ps_4_0, PS() ) ); SetRasterizerState(NoCulling); } } In my game, the .fx file and model are called and set as follows: Loading in shader file //Set the shader flags - BMD DWORD dwShaderFlags = D3D10_SHADER_ENABLE_STRICTNESS; #if defined( DEBUG ) || defined( _DEBUG ) dwShaderFlags |= D3D10_SHADER_DEBUG; #endif ID3D10Blob * pErrorBuffer=NULL; if( FAILED( D3DX10CreateEffectFromFile( TEXT("TransformedTexture.fx" ), NULL, NULL, "fx_4_0", dwShaderFlags, 0, md3dDevice, NULL, NULL, &m_pEffect, &pErrorBuffer, NULL ) ) ) { char * pErrorStr = ( char* )pErrorBuffer->GetBufferPointer(); //If the creation of the Effect fails then a message box will be shown MessageBoxA( NULL, pErrorStr, "Error", MB_OK ); return false; } //Get the technique called Render from the effect, we need this for rendering later on m_pTechnique=m_pEffect->GetTechniqueByName("Render"); //Number of elements in the layout UINT numElements = TexturedLitVertex::layoutSize; //Get the Pass description, we need this to bind the vertex to the pipeline D3D10_PASS_DESC PassDesc; m_pTechnique->GetPassByIndex( 0 )->GetDesc( &PassDesc ); //Create Input layout to describe the incoming buffer to the input assembler if (FAILED(md3dDevice->CreateInputLayout( TexturedLitVertex::layout, numElements,PassDesc.pIAInputSignature, PassDesc.IAInputSignatureSize, &m_pVertexLayout ) ) ) { return false; } model loading: 
m_pTestRenderable=new CRenderable(); //m_pTestRenderable->create<TexturedVertex>(md3dDevice,8,6,vertices,indices); m_pModelLoader = new CModelLoader(); m_pTestRenderable = m_pModelLoader->loadModelFromFile( md3dDevice,"armoredrecon.fbx" ); m_pGameObjectTest = new CGameObject(); m_pGameObjectTest->setRenderable( m_pTestRenderable ); // Set primitive topology, how are we going to interpet the vertices in the vertex buffer md3dDevice->IASetPrimitiveTopology( D3D10_PRIMITIVE_TOPOLOGY_TRIANGLELIST ); if ( FAILED( D3DX10CreateShaderResourceViewFromFile( md3dDevice, TEXT( "armoredrecon_diff.png" ), NULL, NULL, &m_pTextureShaderResource, NULL ) ) ) { MessageBox( NULL, TEXT( "Can't load Texture" ), TEXT( "Error" ), MB_OK ); return false; } m_pDiffuseTextureVariable = m_pEffect->GetVariableByName( "diffuseTexture" )->AsShaderResource(); m_pDiffuseTextureVariable->SetResource( m_pTextureShaderResource ); Finally, the draw function code: //All drawing will occur between the clear and present m_pViewMatrixVariable->SetMatrix( ( float* )m_matView ); m_pWorldMatrixVariable->SetMatrix( ( float* )m_pGameObjectTest->getWorld() ); //Get the stride(size) of the a vertex, we need this to tell the pipeline the size of one vertex UINT stride = m_pTestRenderable->getStride(); //The offset from start of the buffer to where our vertices are located UINT offset = m_pTestRenderable->getOffset(); ID3D10Buffer * pVB=m_pTestRenderable->getVB(); //Bind the vertex buffer to input assembler stage - md3dDevice->IASetVertexBuffers( 0, 1, &pVB, &stride, &offset ); md3dDevice->IASetIndexBuffer( m_pTestRenderable->getIB(), DXGI_FORMAT_R32_UINT, 0 ); //Get the Description of the technique, we need this in order to loop through each pass in the technique D3D10_TECHNIQUE_DESC techDesc; m_pTechnique->GetDesc( &techDesc ); //Loop through the passes in the technique for( UINT p = 0; p < techDesc.Passes; ++p ) { //Get a pass at current index and apply it m_pTechnique->GetPassByIndex( p )->Apply( 0 ); //Draw call md3dDevice->DrawIndexed(m_pTestRenderable->getNumOfIndices(),0,0); //m_pD3D10Device->Draw(m_pTestRenderable->getNumOfVerts(),0); } Is there anything I've clearly done wrong or are missing? Spent 2 weeks trying to workout what on earth I've done wrong to no avail. Any insight a fresh pair eyes could give on this would be great.

    Read the article

  • DIY LEGO Settlers of Catan Board Mixes Two Geeky Hobbies in One

    - by Jason Fitzpatrick
    While Settlers of Catan (a modular board game) and LEGO (a modular building system) seem destined to fit perfectly together, the execution of a functional Catan board out of LEGO bricks is tricky. Check out this polished build to see it done right. Courtesy of LEGO enthusiast Michael Thomas, this Settlers of Catan build overcomes the problem of fitting the numerous modular Catan board pieces together by using an underlying framework to provide a preset pocket for each tile. The framework also doubles as a perfect place to lay down the roads and settlements pieces in the game. Currently the project is listed in LEGO Cuusoo–a sort of LEGOland version of Kickstarter–so pay it a visit and log a vote in support of the project. You can also check out Michael's Flickr stream to see multiple photos of the build in order to get ideas for your own Settlers of Catan set. LEGO Settlers of Catan [via Mashable]

    Read the article

  • 'Table' cannot have children of type 'ListViewDataItem'

    - by kazim sardar mehdi
    I was using System.Web.UI.WebControls.Table, System.Web.UI.WebControls.TableRow and System.Web.UI.WebControls.TableCell objects inside the LayoutTemplate's InstantiateIn method that's inherited from ITemplate, as stated below: Table = new Table(); When I ran the page, I got the following error on the YPOD (Yellow Page of Death): 'Table' cannot have children of type 'ListViewDataItem'. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.ArgumentException: 'Table' cannot have children of type 'ListViewDataItem'. Solution: I replaced System.Web.UI.WebControls.Table, System.Web.UI.WebControls.TableRow and System.Web.UI.WebControls.TableCell with System.Web.UI.HtmlControls.HtmlTable, System.Web.UI.HtmlControls.HtmlTableRow and System.Web.UI.HtmlControls.HtmlTableCell, and the problem was solved.

    Read the article

  • debugging JBoss 100% CPU usage

    - by NateS
    Originally posted on Server Fault, where it was suggested this question might be better asked here. We are using JBoss to run two of our WARs. One is our web app, the other is our web service. The web app accesses a database on another machine and makes requests to the web service. The web service makes JMS requests to other machines, aggregates the data, and returns it. At our biggest client, about once a month the JBoss Java process takes 100% of all CPUs. The machine running JBoss has 8 CPUs. Our web app is still accessible during this time; however, pages take about 3 minutes to load. Restarting JBoss restores everything to normal. The database machine and all the other machines are fine, only the machine running JBoss is affected. Memory usage is normal. Network utilization is normal. There are no suspect error messages in the JBoss logs. I have set up a test environment as close as possible to the client's production environment and I've done load testing with as much as 2x the number of concurrent users. I have not gotten my test environment to replicate the problem. Where do we go from here? How can we narrow down the problem? Currently the only plan we have is to wait until the problem occurs in production on its own, then do some debugging to determine the cause. So far people have just restarted JBoss when the problem occurred to minimize downtime. Next time it happens they will get a developer to take a look. The question is, next time it happens, what can be done to determine the cause? We could set up a separate JBoss instance on the same box and install the web app separately from the web service. This way when the problem next occurs we will know which WAR has the problem (assuming it is our code). This doesn't narrow it down much though. Should I enable JMX remote? This way the next time the problem occurs I can connect with VisualVM and see which threads are taking the CPU and what the hell they are doing. However, is there a significant downside to enabling JMX remote in a production environment? Is there another way to see what threads are eating the CPU and to get a stacktrace to see what they are doing? Any other ideas? Thanks!

    Read the article

  • Devoxx UK JCP & Adopt-a-JSR activities

    - by Heather VanCura
    Devoxx UK starts this week!  The JCP Program is organizing many activities throughout the conference, including some tables in the Hackergarten area on 12-13 June.  Topics include Java EE, Data Grids, Java SE 8 (Lambdas and Date & Time API), Money & Currency API and OpenJDK.  We will have two book signings by Richard Warburton and Peter Pilgrim during the Hackergarten - a free signed copy of their books at these times - first come, first served (limited quantities available).  Thursday night is the party and the Birds of a Feather (BoF) sessions - come with your favorite questions and topics related to the JCP, Adopt-a-JSR and Adopt OpenJDK Programs!  See below for the schedule of activities; I will fill in details for each session tomorrow.    Thursday 12 June 10:20-12:50 Java EE -- Arun Gupta 13:30-17:00 Lambdas/Date & Time API -- Richard Warburton & Raoul-Gabriel Urma (also a book signing with Richard Warburton during the afternoon break) 14:30-17:30 Data Grids -- Peter Lawrey 14:30-18:00 Money & Currency -- Anatole Tresch 18:45 Adopt OpenJDK BoF session (Java EE BoF runs concurrently) 19:45 JCP & Adopt-a-JSR BoF session Friday 13 June 10:20-13:00 OpenJDK -- Mani Sarkar 10:20-14:30 Money & Currency -- Anatole Tresch 10:20-13:00 Java EE -- Peter Pilgrim 13:00-13:30 Peter Pilgrim Java EE 7 book signing sponsored by JCP at lunch time 13:30-15:30 JCP.Next/JSR 364 -- Heather VanCura

    Read the article

  • JCP Party Tonight...10th Annual JCP Award Unveiling

    - by heathervc
    Tonight is the night - attend the presentation of the 10th Annual JCP Awards! This year's JCP Award nominee list has been finalized, and the winners will be announced tonight during the JCP party at the Infusion Lounge.  We will open the doors at 6:30 PM; awards presentation at 7:00 PM.  This year's three award categories are Member of the Year, Outstanding Spec Lead, and Most Significant JSR. The JCP Member/Participant of the Year award shines a light on the member who has shown the leadership and commitment that led to the most positive impact on the community. The Outstanding Spec Lead award highlights the individual who led a specific JSR with exceptional efficiency and execution. The Most Significant JSR award recognizes the most significant JSR for the Java community in the past year. Read the final list of the nominees and their profiles now.  Hope to see you there!

    Read the article

< Previous Page | 336 337 338 339 340 341 342 343 344 345 346 347  | Next Page >