Search Results

Search found 9282 results on 372 pages for 'complete'.


  • Need help identifying what resources (e.g., in MIT OpenCourseWare) can help me prepare for a test [closed]

    - by jiewmeng
    I am entering uni soon. I can sit for a placement test to see if I am eligible for exemptions; the details are at http://www.comp.nus.edu.sg/undergraduates/TestScope11_12.html. The two modules in question are:
    CS2100 Computer Organisation. The objective of this module is to familiarise students with the fundamentals of computing devices. Through this module students will understand the basics of data representation, and how the various parts of a computer work, separately and with each other. This allows students to understand the issues in computing devices, and how these issues affect the implementation of solutions. Topics covered include data representation systems, combinational and sequential circuit design techniques, assembly language, processor execution cycles, pipelining, memory hierarchy and input/output systems. Recommended textbooks: Digital Design: Principles and Practices [DDPP] by John F. Wakerly, Prentice-Hall, ISBN 0-13-324500-4; Computer Organization and Design (The Hardware/Software Interface) by David A. Patterson and John L. Hennessy.
    CS2105 Introduction to Computer Networks. This course aims to provide a broad introduction to computer networks and some appreciation of network application programming. It covers a range of topics including basic data communication and computer network concepts, protocols, networked computing concepts and principles, network applications development and network security. The emphasis of teaching is on the working principles and application of computer networks. As an integral part of the course, tutorials and practical assignments reinforcing learning will also be given. These assignments provide an early exposure to network application programming and should be able to be completed using personal computers and the school's network facilities. Topics include: an overview of computer networks and the Internet, basic data communications, application layer, transport layer, network layer and routing, link layer and local area networks. Recommended textbook: James F. Kurose & Keith W. Ross, Computer Networking: A Top-Down Approach Featuring the Internet, Addison Wesley, 2001.
    I am wondering what resources (e.g., MIT OpenCourseWare or other universities' resources) are available to help me prepare for these particular modules. Does the networking one look like CCNA? And the computer organization one: is it like electronics, assembly, etc.? I learnt some electronics in Poly, but looking at the sample papers, uni looks very different... I have about 1 month to prepare if I want any chance of being exempted from these modules :) any help?

    Read the article

  • Impatient Customers Make Flawless Service Mission Critical for Midsize Companies

    - by Richard Lefebvre
    At times, I can be an impatient customer. But I’m not alone. Research by The Social Habit shows that among customers who contact a brand, product, or company through social media for support, 32% expect a response within 30 minutes and 42% expect a response within 60 minutes! 70% of respondents to another study expected their complaints to be addressed within 24 hours, irrespective of how they contacted the company. I was intrigued when I read a recent blog post by David Vap, Group Vice President of Product Development for Oracle Service Cloud. It’s about “Three Secrets to Innovation” in customer service. In David’s words: 1) Focus on making what’s hard simple 2) Solve real problems for real people 3) Don’t just spin a good vision. Do something about it. I believe midsize companies have a leg up in delivering on these three points, mainly because they have no other choice. How can you grow a business without listening to your customers and providing flawless service? Big companies are often weighed down by customer service practices that have been churning in bureaucracy for years or even decades. When the all-in-one printer/fax/scanner I bought my wife for Christmas (call me a romantic) failed after sixty days, I wasted hours of my time navigating the big brand manufacturer’s complex support and contact policies only to be offered a refurbished replacement after I shipped mine back to them. There was not a happy ending. Let's just say my wife still doesn't have a printer. Young midsize companies need to innovate to grow. Established midsize company brands need to innovate to survive and reach the next level. Midsize Customer Case Study: The Boston Globe. The Boston Globe, established in 1872 and the winner of 22 Pulitzer Prizes, is fighting the prevailing decline in the newspaper industry. Businessman John Henry invested in the Globe in 2013 because he “…believes deeply in the future of this great community, and the Globe should play a vital role in determining that future”. How well the paper executes on its bold new strategy is truly mission critical—a matter of life or death for an industry icon. This customer case study tells how Oracle’s Service Cloud is helping The Boston Globe “do something about” and not just “spin” its strategy and vision via improved customer service. For example, Oracle RightNow Chat Cloud Service is now the preferred support channel for its online environments. The average e-mail or phone call can take three to four minutes to complete while the average chat is only 30 to 40 seconds. It’s a great example of one company leveraging technology to make things simpler to solve real problems for real people. Related: Oracle Cloud Service a leader in The Forrester Wave™: Customer Service Solutions For Small And Midsize Teams, Q2 2014

    Read the article

  • Motivating yourself to actually write the code after you've designed something

    - by dpb
    Does it happen only to me or is this familiar to you too? It's like this: You have to create something; a module, a feature, an entire application... whatever. It is something interesting that you have never done before, it is challenging. So you start to think how you are going to do it. You draw some sketches. You write some prototypes to test your ideas. You are putting different pieces together to get the complete view. You finally end up with a design that you like, something that is simple, clear to everybody, easily maintainable... you name it. You covered every base, you thought of everything. You know that you are going to have this class and that file and that database schema. Configure this here, adapt this other thingy there, etc. But now, after everything is settled, you have to sit down and actually write the code for it. And it's not challenging anymore... Been there, done that! Writing the code now is just a "formality" and feels like reiterating what you've just finished. At my previous job I sometimes got away with it because someone else did the coding based on my specifications, but at my new gig I'm in charge of the entire process so I have to do this too ('cause I get paid to do it). But I have a pet project I'm working on at home, after work, and there it's just me and no one is paying me to do it. I do the creative work and then when the time comes to write it down I just don't feel like it (let's browse the web a little, see what's new on P.SE, on SO, etc.). I just want to move to the next challenging thing, and then to the next, and the next... Does this happen to you too? How do you deal with it? How do you convince yourself to go in and write the freaking code? I'll take any answer.

    Read the article

  • Driving Growth through Smarter Selling

    - by Samantha.Y. Ma
    With the proliferation of social media and mobile technologies, the world of selling and buying has drastically changed, as buyers now have access to more information than they did in the past. In fact, studies have shown that buyers complete 60 percent of the buying process before they even engage with a salesperson. The old models of selling no longer work effectively; and the new way of selling is driven by customer insights. To succeed, sales need to be proactive, not reactive. They need to engage with the customer early, sometimes even before the customer’s needs are fully understood. In fact, the best sales reps prescribe a solution that the customer doesn't even know they need, often by leveraging social media to listen, engage and collaborate with peers. And they fully tap into the power of analytics and data to drive results.  Let’s look at some stats regarding challenges facing sales today. According to recent studies, sales reps spend 78 percent of their time doing administrative things -- such as planning, searching for information, data entry -- and only 22 percent of the time actually selling. Furthermore, 40 percent of B2B sales reps miss their quota, and only 3 percent of companies can say with confidence that their forecasts are “always accurate.” How do you drive growth in this modern day and age? It's not just getting your sales teams to work harder; it's helping them work smarter and providing them with a solution they want to use, on the device(s) they already know, giving them critical insights and tools to be more productive, increase win rates, and close deals faster. Oracle Sales Cloud was designed to do exactly that. It enables smarter selling that allows reps to sell more, managers to know more, and companies to grow more.  Let’s face it—if all CRM solutions worked well, sales executives wouldn’t be having the same headaches as they had in the past. Join Oracle’s Thomas Kurian and Doug Clemmans on Tuesday, October 22 as they explain: • How today’s sales processes have rendered many CRM systems obsolete • The secrets to smarter selling, leveraging mobile, social, and big data • How Oracle Sales Cloud enables smarter selling—as proven by Oracle and its customers Take the first step down the path toward smarter selling. With Oracle Sales Cloud, reps sell more, managers know more, and companies grow more.

    Read the article

  • Getting Started Plugging into the "Find in Projects" Dialog

    - by Geertjan
    In case you missed it amidst all the code in yesterday's blog entry, the "Find in Projects" dialog is now pluggable. I think that's really cool. The code yesterday gives you a complete example, but let's break it down a bit and deconstruct it into a very simple hello world scenario. We'll end up with as many extra tabs in the "Find in Projects" dialog as we need (three, in this example), and clicking on any of those extra tabs will, in this simple example, simply show "Hello World". Once we have that, we'll be able to continue adding small bits of code over the next few blog entries until we have something more useful. So, in this blog entry, you'll literally be able to display "Hello World" within a new tab in the "Find in Projects" dialog:

        import javax.swing.JComponent;
        import javax.swing.JLabel;
        import org.netbeans.spi.search.provider.SearchComposition;
        import org.netbeans.spi.search.provider.SearchProvider;
        import org.netbeans.spi.search.provider.SearchProvider.Presenter;
        import org.openide.NotificationLineSupport;
        import org.openide.util.lookup.ServiceProvider;

        @ServiceProvider(service = SearchProvider.class)
        public class ExampleSearchProvider1 extends SearchProvider {

            @Override
            public Presenter createPresenter(boolean replaceMode) {
                return new ExampleSearchPresenter(this);
            }

            @Override
            public boolean isReplaceSupported() {
                return false;
            }

            @Override
            public boolean isEnabled() {
                return true;
            }

            @Override
            public String getTitle() {
                return "Demo Extension 1";
            }

            public class ExampleSearchPresenter extends SearchProvider.Presenter {

                private ExampleSearchPresenter(ExampleSearchProvider1 sp) {
                    super(sp, true);
                }

                @Override
                public JComponent getForm() {
                    return new JLabel("Hello World");
                }

                @Override
                public SearchComposition composeSearch() {
                    return null;
                }

                @Override
                public boolean isUsable(NotificationLineSupport nls) {
                    return true;
                }
            }
        }

    That's it, not much code, works fine in NetBeans IDE 7.2 Beta, and is easier to digest than the big chunk from yesterday. If you make three classes like the above in a NetBeans module and install it, you'll have three new tabs in the "Find in Projects" dialog. The only required dependencies are the Dialogs API, Lookup API, and Search in Projects API. Read the javadoc linked above, and in the next blog entries we'll continue to build out something like the sample you saw in yesterday's blog entry.

    Read the article

  • Issue 56 - Super Stylesheets Skinning in DotNetNuke 5

    May 2010. Welcome to Issue 56 of DNN Creative Magazine. In this issue we show you how to use the powerful new Super Stylesheets skinning feature in DotNetNuke 5. Super Stylesheets are ideal for both beginner and experienced skin designers; they provide skin layouts using CSS. The advantage of Super Stylesheets is that you can easily create a skin layout which works in all browsers without the need to learn complex CSS techniques. They are also very quick to build, and you can change a skin layout in a matter of minutes rather than hours. We show you how to build a skin from the very beginning using Super Stylesheets, how to create various skin layouts as well as multi-layouts, how to style the skin, how to add tokens such as the logo, menu, login links, etc., and walk you through how to create a fully working skin from scratch. Following this we continue the Open Web Studio tutorials; this month we demonstrate how to create an installable DotNetNuke PA module using OWS. This is an essential technique which allows you to package up the OWS applications that you have created and build them into an installable zip package. The zip file is then installable as a standard DotNetNuke module, which means you can easily install your OWS applications on other DotNetNuke installations. To finish, we have part six of the "How to Build a News Application with DotNetMushroom Rapid Application Developer (RAD)" article, where we demonstrate how to create a News Carousel using RAD, jQuery and the JCarousel plugin. This issue comes complete with 15 videos:
    Skinning: Super Stylesheets Skinning in DotNetNuke 5 - DNN Layouts (12 videos - 98 mins)
    Module Development Series: How to Create an Installable DotNetNuke PA Module Using OWS (3 videos - 23 mins)
    How to Implement a News Carousel Using DotNetMushroom RAD and jQuery
    View issue 56 to download all of the videos in one zip file.
    DNN Creative Magazine for DotNetNuke Web Designers: covering DotNetNuke module video reviews, video tutorials, mp3 interviews, resources and web design tips for working with DotNetNuke. In 56 issues we have created 578 videos!

    Read the article

  • 4.8M wasn't enough so we went for 5.055M tpmc with Unbreakable Enterprise Kernel r2 :-)

    - by wcoekaer
    We released a new set of benchmarks today. One is an updated TPC-C result from a few months ago, where we had just over 4.8M tpmC at $0.98, and we just updated it to 5.05M tpmC at $0.89. The other one is related to Java middleware performance. You can find the press release here. Now, I don't want to talk about the actual relevance of the benchmark numbers, as I am not in the benchmark team. I want to talk about why these numbers and these efforts, unrelated to what they mean to your workload, matter to customers. The actual benchmark effort is a very big, long, expensive undertaking where many groups work together as a big virtual team. Having the virtual team be within a single company of course helps tremendously... We already start with a very big server setup with tons of storage, many disks, lots of RAM, lots of CPUs, cores, threads, and large database setups. Getting the whole setup going to start tuning is, by itself, no easy task, but then the real fun starts with tuning the system for optimal performance -and- stability. A benchmark is not just revving an engine at high rpm, it's actually hitting the circuit. The tests require long runs, require surviving availability tests, such as surviving crashes -and- recovery under load. In the TPC-C example, the x4800 system had 4TB RAM, 160 threads (8 sockets, hyperthreaded, 10 cores/socket), tons of storage attached, tons of LUNs visible to the OS, flash storage, non-flash storage... many things at high scale that all have to be perfectly synchronized. During this process, we find bugs, we fix bugs, we find performance issues, we fix performance issues, we find interesting potential features to investigate for the future, we start new development projects for future releases, and all of this goes back into the products. As more and more Oracle Linux customers are running larger and larger, faster and faster, more mission-critical, higher-availability databases..., these things are just absolutely critical. Unrelated to what anyone's specific opinion is about TPC-C or TPC-H or SPECjEnterprise etc., there is a ton of effort that the customer benefits from. All this work makes Oracle Linux and/or Oracle Solaris better platforms. Whether it's faster, more stable, more scalable, more resilient. It helps. Another point that I always like to reiterate around UEK and UEK2: we have our kernel source git repository online, with the complete changelog of the mainline kernel and our changes, easy to pull, easy to dissect, easy to know what went in when, why and where. No need to log into a website and manually click through pages to hopefully discover changes or patches. No need to untar two tarballs and run a diff.

    Read the article

  • One Year Oracle SocialChat - The Movie

    - by mprove
    You’ve just watched – hopefully – my first short movie. Thank you! Here is a bit of the backstage story. About 6 weeks ago colleagues from SNBC (Social Network and Business Collaboration) announced a Social Use Case Competition. The task was to submit a video of 2 to 5 minutes' duration on the Social Enterprise (our internal phrase for Enterprise 2.0). Hmm – I had a few vague ideas, but no script – no actors – no experience in filmmaking. Really the best conditions to try something! I chose our weekly SocialChats as my main topic. But if you don’t do Danish Dogma cinema, you still need a script. Hence I played around with the SocialChat’s archive, and all of a sudden a script and even the actors appeared in front of me. The words that you have just seen are weekly topics, slightly abridged and rearranged to form a story. Exciting, next phase. How to get it on digital celluloid? I have to confess I am still impressed by epic. (Keep in mind, epic was done in 2004.) And my actors – words – call for a typographic style already. The main part was done over a weekend with Apple Keynote. And I even found a wonderful matching soundtrack among my albums: Didge Goes World by Delago. I picked parts of Second Day and Seventh Day. Literally, the rhythm was set, and I "just" had to complete the movie. Tools used – apart from trial and error: Keynote, Pixelmator, GarageBand, iMovie. Finally I want to mention that I am extremely thankful to BSC Music for granting permission to use the tracks for this short film! Without this sound it would have been just an ordinary slide show. – Internal note: The next SocialChat is on Death by PowerPoint vs. Presentation Zen. CU this Friday 3pm Greenwich / 7am Pacific.

    Read the article

  • Ubuntu 12.04 x64 LTS VPN Server not changing IP

    - by user288778
    I used this guide http://silverlinux.blogspot.co.uk/2012/05/how-to-pptp-vpn-on-ubuntu-1204-pptpd.html and it worked fine. I'm able to connect, but the problem is that my IP is being changed to the "localip", not the "remoteip". This is what I get from tail -f /var/log/syslog:

        June 6 00:09:19 instant5860 NetworkManager[1456]: Unmanaged Device found; state CONNECTED forced (see http://bugs.launchpad.net/bugs/191889)
        June 6 00:09:19 instant5860 NetworkManager[1456]: Marking connection 'Wired connection 1' invalid.
        June 6 00:09:19 instant5860 NetworkManager[1456]: Activation (eth1) failed.
        June 6 00:09:19 instant5860 NetworkManager[1456]: Activation (eth1) Stage 4 of 5 (IPv4 Configure Timeout) complete.
        June 6 00:09:19 instant5860 NetworkManager[1456]: (eth1): device state change: failed - disconnected (reason 'none') [120 30 0]
        June 6 00:09:19 instant5860 NetworkManager[1456]: (eth1): deactivating device (reason 'none') [0]
        June 6 00:09:19 instant5860 NetworkManager[1456]: Unmanaged Device found; state CONNECTED forced.
        June----- avahi-daemon[440]: Withdrawing address record for fe80......... on eth1
        Jun------avahi-daemon[440]: Leaving mDNS multicast group on interface eth1.IPv6 with address fe80.....
        Jun------avahi-daemon[440]: Interface eth1.IPv6 no longer relevant for mDNS.
        Jun------avahi-daemon[440]: Joining mDNS multicast group on interface eth1.IPv6 with address fe80....
        Jun------avahi-daemon[440]: New relevant interface eth1.IPv6 for mDNS
        Jun------avahi-daemon[440]: Registering new address record for fe80..... on eth1.*.
        Jun - snmpd[1172]: error on subcontainer 'ia_addr' insert (-1)
        dbus[382]: [system] Activating service name='org.freedesktop.PackageKit' (using servicehelper)
        AptDaemon: INFO: Initializing daemon
        AptDaemon.PackageKit: INFO: Initializing PackageKit compat layer
        dbus[382]: [system] Successfully activated service 'org.freedesktop.PackageKit'
        AptDaemon.PackageKit: INFO: Initializing PackageKit transaction
        AptDaemon.Worker: INFO: Simulating trans: /org/debian/apt/transaction/233beca013a0473ea34d9dea805af5df
        AptDaemon.Worker: INFO: Processing transaction /org/debian/apt...
        AptDaemon.PackageKit: INFO: Get updates()
        AptDaemon.Worker: INFO: Finished
        snmpd[1172]: error on subcontainer
        pptpd[23611]: CTRL: Client 82.33.... control connection started
        pptpd[23611]: CTRL: Starting call (launching pppd, opening GRE)
        pptpd[23611]: pppd 2.4.5 started by root uid 0
        pptpd[23611]: Using interface ppp0
        pptpd[23611]: Connect ppp0 <-- /dev/pts/1
        NetworkManager[1456]: SCPlugin - Ifupdown: device added (path: /sys/devices/virtual/net/ppp0, iface: ppp0)
        NetworkManager[1456]: SCPlugin - Ifupdown: device added (path: /sys/devices/virtual/net/ppp0, iface: ppp0): no ifupdown configuration found.
        pptpd[23612]: peer from calling number 82... authorized.
        kernel: [2918261.416923] init: ufw pre-start process (23613) terminated with status 1
        dhclient: DHCPDISCOVER on eth1 to 255.255.255.255 port 67 interval 7
        CTRL: Ignored a SET LINK info packet with real ACCMs!
        local IP address: 109.0.121.197
        remote IP address: 109.0.84.56
        dhclient: DHCPDISCOVER on eth1 to 255.255.255.255 port 67 interval 13
        NetworkManager[1456]: (eth1): DHCPv4 request timed out.
        NetworkManager[1456]: (eth1): canceled DHCP transaction, DHCP client pid 23280
        NetworkManager[1456]: Activation (eth1) Stage 4 of 5 (IPv4 Configure Timeout) scheduled...
        NetworkManager[1456]: Activation (eth1) Stage 4 of 5 (IPv4 Configure Timeout) started...
        NetworkManager[1456]: (eth1): device state change: ip-config - failed (reason 'ip-config-unavailable') [70 120 5]
        NetworkManager[1456]: Unmanaged 'ia_addr' insert (-1)

    Read the article

  • Managing Regulated Content in WebCenter: USDM and Oracle Offer a New Part 11 Compliant Solution for Life Sciences

    - by Michael Snow
    Guest post today provided by Oracle partner, USDM. Regulated Content in WebCenter: USDM and Oracle offer a new Part 11 compliant solution for Life Sciences (White Paper). Life science customers now have the ability to take advantage of all of the benefits of Oracle’s WebCenter Content, a global leader in Enterprise Content Management. For the past year, USDM has been developing best practice compliance solutions to meet regulated content management requirements for 21 CFR Part 11 in WebCenter Content. USDM has been an expert in ECM for life sciences since 1999, and in 2011 certified that WebCenter was a 21 CFR Part 11 compliant content management platform (White Paper). In addition, USDM has built Validation Accelerator Packs for WebCenter to enable life science organizations to quickly and cost effectively validate this world class solution. With the Part 11 certification, Oracle’s WebCenter now provides regulated life science organizations the ability to manage REGULATORY content in WebCenter, as well as the ability to take advantage of ALL of the additional functionality of WebCenter, including a complete, open, and integrated portfolio of portal, web experience management, content management and social networking technology. Here are a few screen shot examples of Part 11 functionality included in the product: E-Sign, E-Sign Rendor, Meta Data History, Audit Trail Report, and Access Reporting. Gone are the days that life science companies have to spend millions of dollars a year to implement, maintain, and validate ECM systems that no longer meet the ever-changing business and regulatory requirements. Life science companies now have the ability to use WebCenter Content, an ECM system with a substantially lower cost of ownership and unsurpassed functionality. Oracle has been #1 in life sciences because of its ability to develop cost effective, easy-to-use, scalable solutions which help increase insight and efficiency to drive growth for its customers. Adding a world class ECM solution to this product portfolio allows life science organizations the chance to get rid of costly ECM systems that no longer meet their needs and use WebCenter, part of the Oracle Fusion Technology stack, with their other leading enterprise applications. USDM provides:
    •    Expertise in Life Science ECM Business Processes
    •    Prebuilt Life Science Configuration in WebCenter
    •    Validation Accelerator Packs for WebCenter
    USDM is very proud to support Oracle’s expanding commitment to Life Sciences. For more information please contact: [email protected]. Oracle will be exhibiting at DIA 2012 in Philadelphia on June 25-27. Stop by our booth (#2825) to learn more about the advantages of a centralized ECM strategy and see the Oracle WebCenter Content solution, our 21 CFR Part 11 compliant content management platform.

    Read the article

  • Focusing on Mobile @ Oracle OpenWorld

    - by Carlos Chang
    Plenty of exciting trends in the industry today: Cloud, Big Data, Mobile, etc. The first two are amazing of course, but for me, it's mobile, mobile and... MOBILE. Why? Think back to the mozilla browser (Marc Andreessen's mozilla, not today's mozilla.org), Netscape and the nascent beginnings of the World Wide Web. Amazing times. Companies were just starting to set up their home pages, basic HTML, hyperlinks, images, ooooh, aaaah. Yahoo! was *the* search engine back then. :-\ Anywhoo, I would posit that with mobile today, we are at a similar juncture. Sure, there are millions of apps on Apple's App Store and Google Play, but within the enterprise, it's just getting started. I'm talking about going beyond the simple, tactical apps such as calendaring, contacts or directory service lookup. And while mobile first is a common mantra, I'm referring to mobile plus, which looks upon the whole enterprise holistically and adds new parameters, such as your GPS location, perhaps even your vital signs. (Apple's HealthKit?) Everything is going mobile. Everything connected. But with the enterprise come scalability, security, integration, app management, user management, etc. Amazing times ahead. Ok, got that off my mind. Oracle OpenWorld 2014 - Going Mobile! If you're coming to the big dance, I've highlighted some key mobile sessions below. And if you see me around, and there's a bar within reach, high five me for a beer. I mean, if you read this far, and didn't already jump to the list below, I think you deserve one. Cheers!
    Monday, 9/29/14 at 10:15 AM - General Session: Time for You to Rethink Mobile? Oracle Mobile Strategy and Roadmap
    Tuesday, 9/30/14 @ 12:00 PM; MW3020 - Develop and Deploy Mobile Applications with Oracle’s Mobile
    Wednesday, 10/1/2014 @ 10:15 AM; MW 3022 - Introduction to Oracle Mobile Application Framework
    Wednesday, 10/1/2014 @ 11:30 AM - Accelerate Enterprise Mobility with Oracle Mobile Cloud Service
    Click here to view the complete Focus on Mobile sessions at this year's Oracle OpenWorld 2014, and don't forget to follow @OracleMobile on Twitter.

    Read the article

  • C# 5: At last, async without the pain

    - by Alex.Davies
    For me, the best feature in Visual Studio 11 is the async and await keywords that come with C# 5. I am a big fan of asynchronous programming: it frees up resources, in particular the thread that a piece of code needs to run in. That lets that thread run something else while waiting for your long-running operation to complete. That's really important if that thread is the UI thread, or if it's holding a lock because it accesses some data structure. Before C# 5, I think I was about the only person in the world who really cared about asynchronous programming. The trouble was that you had to go to extreme lengths to make code asynchronous. I would forever be writing methods that, instead of returning a value, accepted an extra argument that is a "continuation". Then, when calling the method, I'd have to pass a lambda in to it, which contained all the stuff that needed to happen after the method finished. Here is a real snippet of code that is in .NET Demon:

        m_BuildControl.FilterEnabledForBuilding(
            projects,
            enabledProjects => m_OutOfDateProjectFinder.FilterNeedsBuilding(
                enabledProjects,
                newDirtyProjects =>
                {
                    // Mark any currently broken projects as dirty
                    newDirtyProjects.UnionWith(m_BrokenProjects);
                    // Copy what we found into the set of dirty things
                    m_DirtyProjects = newDirtyProjects;
                    RunSomeBuilds();
                }));

    It's just obtuse. Who puts a lambda inside a lambda like that? Well, me obviously. But surely enabledProjects should just be the return value of FilterEnabledForBuilding? And newDirtyProjects should just be the return value of FilterNeedsBuilding? C# 5 async/await lets you write asynchronous code without it looking so stupid. Here's what I plan to change that code to, once we upgrade to VS 11:

        var enabledProjects = await m_BuildControl.FilterEnabledForBuilding(projects);
        var newDirtyProjects = await m_OutOfDateProjectFinder.FilterNeedsBuilding(enabledProjects);

        // Mark any currently broken projects as dirty
        newDirtyProjects.UnionWith(m_BrokenProjects);

        // Copy what we found into the set of dirty things
        m_DirtyProjects = newDirtyProjects;
        RunSomeBuilds();

    Much easier to read! But how is this the same code? If we were on the UI thread, doesn't the UI thread have to block while FilterEnabledForBuilding runs? No, it doesn't, and that's the magic of the await keyword! It cuts your method up into its constituent pieces, much like I did manually with lambdas before. When you run it, only the piece up to the first await actually runs. The rest is passed to FilterEnabledForBuilding as a continuation, which will get called back whenever that method is finished. In the meantime, our thread returns, and can go back to making the UI responsive, or whatever else threads do in their spare time. This is actually a massive simplification, and if you're interested in all the gory details, and speed hacks that the await keyword actually does for you, I recommend Jon Skeet's blog posts about it.

    Read the article

  • Running Built-In Test Simulator with SOA Suite Healthcare 11g in PS4 and PS5

    - by Shub Lahiri, A-Team
    Background
    The SOA Suite for Healthcare Integration pack comes with a pre-installed simulator that can be used as an external endpoint to generate inbound and outbound HL7 traffic on specified MLLP ports. This is a command-line utility that can be very handy when trying to build a complete end-to-end demo within a standalone, closed environment. The ant-based utility accepts the name of a configuration file as the command-line input argument. The format of this configuration file has changed between PS4 and PS5. In PS4, the configuration file was XML based and in PS5, it is name-value property based. The rest of this note highlights these differences and provides samples that can be used to run the first scenario from the product samples set.
    PS4 - Configuration File
    The sample configuration file for PS4 is shown below. The configuration file contains information about the following items: the directory for incoming and outgoing files for the host running SOA Suite Healthcare; the polling interval for the directory; external endpoint logical names; external endpoint server host name and ports; the message throughput to be simulated for generating outbound messages; and the documents to be handled by different endpoints. A copy of this file can be downloaded from here.
    PS5 - Configuration File
    The corresponding sample configuration file for PS5 is shown below. The configuration file contains similar information about the sample scenario but is not in XML format. It has name-value pairs specified in the form of a properties file. This sample file can be downloaded from here.
    Simulator Configuration
    Before running the simulator, the environment has to be set by defining the proper ANT_HOME and JAVA_HOME, as shown in a working sample shell script. Also, as a part of setting the environment, template jndi.properties and logging.properties can be generated by using the following ant command:

        ant -f ant-b2bsimulator-util.xml b2bsimulator-prop

    Sample jndi.properties and logging.properties are shown below and can be modified as needed. The jndi.properties file contains information about connectivity to the local WebLogic Managed Server instance, and the logging.properties file controls the amount of logging that can be generated from the running simulator process.
    Simulator Usage - Start and Stop
    The command syntax to launch the simulator via ant is the same in PS4 and PS5. Only the appropriate configuration file has to be supplied as the command-line argument, for example:

        ant -f ant-b2bsimulator-util.xml b2bsimulatorstart -Dargs="simulator1.hl7-config.xml"

    This will start the simulator, which will keep running to provide an active external endpoint for the SOA Healthcare Integration engine. To stop the simulator, a similar ant command can be used, for example:

        ant -f ant-b2bsimulator-util.xml b2bsimulatorstop

    Read the article

  • ATG Live Webcast Nov. 29th: Endeca "Evolutionizes" E-Business Suite

    - by Bill Sawyer
    If you have ever wanted any of the following within Oracle E-Business Suite:
    Complete Data View
    Advanced Searching Across Organizations and Flexfields
    Advanced Visualization including Charts, Metrics, and Cross Tabs
    Guided Navigation
    then you might want to attend this webcast to learn more about Oracle Endeca's integration with Oracle E-Business Suite. Oracle Endeca includes an unstructured data correlation and analytics engine, together with catalog search and guided navigation capabilities. This webcast focuses on the details behind Oracle Endeca's integration with Oracle E-Business Suite. It demonstrates how you can extend the use of Oracle Endeca into other areas of Oracle E-Business Suite.
    Date: Thursday, November 29, 2012
    Time: 8:00 AM - 9:00 AM Pacific Standard Time
    Presenter: Osama Elkady, Senior Director
    Webcast Registration Link (preregistration is optional but encouraged)
    To hear the audio feed:
    Domestic Participant Dial-In Number: 877-697-8128
    International Participant Dial-In Number: 706-634-9568
    Additional International Dial-In Numbers Link
    Dial-In Passcode: 103192
    To see the presentation, the Direct Access Web Conference details are:
    Website URL: https://ouweb.webex.com
    Meeting Number: 595335921
    If you miss the webcast, or you have missed any webcast, don't worry -- we'll post links to the recording as soon as it's available from Oracle University. You can monitor this blog for pointers to the replay. And you can find our archive of our past webcasts and training here. If you have any questions or comments, feel free to email Bill Sawyer (Senior Manager, Applications Technology Curriculum) at BilldotSawyer-AT-Oracle-DOT-com.

    Read the article

  • Suggestions for Summer Intern Application Assignments

    - by orangepips
    As part of our application process we want prospective college interns to complete an assignment on their own - either programming or analytical - to give us something tangible to evaluate, such as code or a flowchart. I have two ideas for these assignments, one programming and one analytical, and I am interested in gathering feedback about them.
    Programming Assignment
    Generate a month's calendar for a given date. The first row should indicate the days of the week (e.g. Sunday - Saturday). Each subsequent row should contain a week's days. The date supplied should be highlighted (e.g. bolded). I am thinking we'll probably prescribe the output format even more strictly - probably down to what the HTML source should look like, including CSS classes. The thinking is that this forces answerers to actually do some work if they merely copy a solution from the internet.
    Analytical Assignment
    Diagram or describe in prose a system for managing a set of traffic lights for traffic at a four-way intersection. Each direction (i.e. North, South, East and West) has two lanes (i.e. right and left). The left lane is turn-only and has a green arrow light to indicate right of way. The system is able to detect if lanes have cars in them and change the lights accordingly. I would expect a flowchart or some prose describing a finite state machine that deals with each contingency. This would hopefully provide some indication of the applicant's ability to reason through a logic problem of sorts and articulate an approach for solving it.
    Areas Seeking Feedback
    Is it unreasonable to ask this of applicants? If not, is it better to request before or after a phone screen? Are these questions too hard or easy for a collegiate audience? Any suggestions for alternate questions? Do these seem like good tools for analyzing people who would be part of a software development life cycle? Programming language suggestions - I'm thinking Java, Python and/or C# (we're actually a ColdFusion shop).
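    For what it's worth, here is a minimal sketch of the kind of answer the programming assignment might produce, written in Python with the standard calendar module. It prints a plain-text month grid and marks the supplied date with an asterisk; the real assignment would emit HTML with prescribed CSS classes instead, and the function and variable names here are just illustrative choices.

        import calendar
        from datetime import date

        def render_month(target):
            """Return a text calendar for target's month, marking target's day with '*'."""
            cal = calendar.Calendar(firstweekday=calendar.SUNDAY)  # weeks run Sunday..Saturday
            lines = ['Su Mo Tu We Th Fr Sa']
            for week in cal.monthdayscalendar(target.year, target.month):
                cells = []
                for day in week:
                    if day == 0:                 # padding for days outside the month
                        cells.append('  ')
                    elif day == target.day:      # highlight the supplied date
                        cells.append('%2d*' % day)
                    else:
                        cells.append('%2d' % day)
                lines.append(' '.join(cells))
            return '\n'.join(lines)

        print(render_month(date(2012, 6, 15)))

    The interesting part to evaluate is how a candidate structures the date arithmetic and the rendering, rather than whether they can find a library call.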

    Read the article

  • ArchBeat Link-o-Rama for 2012-06-05

    - by Bob Rhubart
    Why is enterprise software often so complicated? | Rajesh Raheja (rraheja.wordpress.com)
    Rajesh Raheja shares "a few examples of requirements that lead to creation of complex platform infrastructures that up the complex enterprise software."
    Educause Top-Ten IT Issues - the most change in a decade or more | Cole Clark (blogs.oracle.com)
    Cole Clark discusses why "higher education IT must change in order to fully realize the potential for transforming the institution, and therefore it's people must learn new skills, understand and accept new ways of solving problems, and not be tied down by past practices or institutional inertia."
    Oracle VM RAC template - what it took | Wim Coekaerts (blogs.oracle.com)
    Wim Coekaerts shares an example that shows how easy it is to deploy a complete Oracle RAC cluster with Oracle VM.
    Oracle Cloud and Oracle Platinum Services Announcements (oracle.com)
    Featuring Larry Ellison and Mark Hurd. Wednesday, June 06, 2012. 1:00 p.m. PT – 2:30 p.m. PT
    Creating an Oracle Endeca Information Discovery 2.3 Application Part 1 : Scoping and Design | Mark Rittman (www.rittmanmead.com)
    Oracle ACE Director Mark Rittman launches a new series that dives into "the various stages in building a simple Oracle Endeca Information Discovery application, using the recent Endeca Information Discovery 2.3 release."
    Introducing Decision Tables in the SOA Suite 11g | Lucas Jellema (technology.amis.nl)
    Oracle ACE Director Lucas Jellema demonstrates how "the decision table can be put to good use to implement the business logic behind the classical game of Rock, Paper and Scissors."
    Application integration: reorganise, recycle, repurpose | Andrew Clarke (radiofreetooting.blogspot.com)
    "Integration is a topic which is in everybody's bailiwick," says Oracle ACE Andrew Clarke. "The business people want to get the best value from their existing IT investments. The architects need to understand the interfaces between the silos and across the layers. The developers have to implement it."
    Using XA Transactions in Coherence-based Applications | Jonathan Purdy (blogs.oracle.com)
    Purdy shares "a few common approaches when integrating Coherence into applications via the use of an application server's transaction manager."
    Thought for the Day
    "The difficulty lies, not in the new ideas, but in escaping from the old ones..." — John Maynard Keynes (June 5, 1883 - April 4, 1946)
    Source: Quotations Page

    Read the article

  • External hard drive issue

    - by Blind Fish
    I am running Ubuntu 12.04 and I have a 500GB external hard drive, formatted in ext4, which I have used for about a year to store batches of data that I am sifting through. About once a week I move a chunk of data from the external drive to my internal drive for processing. As I do that I delete the data from the external drive and empty the trash, thereby clearing up space on the external and also preventing myself from grabbing stuff that I've already sorted through. The vast majority of this is text files, but there are a few .jpegs and .mp4s thrown in, if that matters. Anyway, this has been working without a hitch for nearly a year. So today I plug in the drive and I have an odd issue. I was able to move folders from the external over to my internal drive with no problem, but when I tried to delete those files I was unable to do so. I kept getting the "unable to send file to trash, do you wish to delete permanently?" message. I clicked yes / delete all, but no files were actually deleted. It just sat there. Even worse, while the system was trying in vain to delete those files I was unable to move more files over. In short, I was stuck. I canceled the operation, unmounted and remounted the drive, and then I had an even bigger problem. I have a spreadsheet of all the different folders on there ( exactly 818 ) and yet when I opened the drive in Nautilus, it was only finding 512 folders. So I had 306 missing folders, but my free space was unchanged. Immediately I think that the drive may be corrupted. I went into Disk Utility and ran the check disk option. It said that the drive was NOT clean, but offered no remedy or option to repair. I went back into the drive in Nautilus and once again attempted to delete the files I had already moved. Same issue. I clicked on Details and it said that it was unable to create a trashing file I/O error. I've looked, and it's not a permissions issue. The drive and all folders that are in it all belong to me, and they are all read / write. I've started running badblocks on the drive, but it looks like that is going to take several hours to complete. Any ideas?

    Read the article

  • How to show pending messages using WLST?

    - by lmestre
    Here are the steps:

        1. . ./setDomainEnv.sh
        2. java weblogic.WLST
        3. connect('weblogic','welcome1','t3://localhost:7001')
        4. domainRuntime()
        5. cd('ServerRuntimes/MS1/JMSRuntime/MS1.jms/JMSServers/JMSServer1/Destinations/JMSModule1!Queue1')
        6. cursor1=cmo.getMessages('true',9999999,10)    ** arguments are String(selector), Integer(timeout), Integer(state)
        7. msgs = cmo.getNext(cursor1, 10)               ** this gets 10 messages; call cmo.getNext(cursor1, 10) again to get the next 10
        8. print(msgs)

    My assumption is that you had created:
    a. A Managed Server MS1.
    b. A JMS Server JMSServer1.
    c. A module called JMSModule1.
    d. Inside JMSModule1, a Queue called Queue1.
    If you read my previous post, How to get Messages Pending Count from a Queue using WLST? (https://blogs.oracle.com/LuzMestre/entry/how_to_get_messages_pending), you can see that both are very similar. Sometimes it is difficult to get a WLST script sample, but you can use the ls() function to learn about other functionality you don't have sample code for. Until step 5, nothing new compared to my previous post:
        5. cd('ServerRuntimes/MS1/JMSRuntime/MS1.jms/JMSServers/JMSServer1/Destinations/JMSModule1!Queue1')
        6. ls()
    You will see MessagesPendingCount, getMessages, and a lot of other functionality available on this Queue. For example, you can see:
        -r-x   getMessages    String : String(selector),Integer(timeout),Integer(state)
    Here you can check the complete MBean Reference: http://docs.oracle.com/cd/E23943_01/apirefs.1111/e13951/core/index.html (see JMSDestinationRuntimeMBean). Enjoy!
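    Putting the steps together, here is a small end-to-end WLST sketch (WLST scripts are Jython, hence the Python 2 print syntax). The URL, credentials, and the MS1/JMSServer1/JMSModule1!Queue1 names are just the example values from above; whether getNext() signals the end of the cursor with None or with an empty array is not spelled out here, so the loop guards against both.

        # pending_messages.py -- run with: java weblogic.WLST pending_messages.py
        connect('weblogic', 'welcome1', 't3://localhost:7001')
        domainRuntime()
        cd('ServerRuntimes/MS1/JMSRuntime/MS1.jms/JMSServers/JMSServer1/Destinations/JMSModule1!Queue1')

        # Attribute shown by ls() above: how many messages are pending on the queue.
        print 'Pending messages:', cmo.getMessagesPendingCount()

        # Same selector/timeout/state arguments as step 6, then page through the cursor 10 at a time.
        cursor = cmo.getMessages('true', 9999999, 10)
        batch = cmo.getNext(cursor, 10)
        while batch is not None and len(batch) > 0:
            for msg in batch:
                print msg
            batch = cmo.getNext(cursor, 10)

        disconnect()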

    Read the article

  • Designing a completely new database/GUI solution for my company

    - by user1277304
    I'm no expert when it comes to everything Visual Studio 2010 and utilizing SQL Server 2008. I'm sure some of the personal projects I've built for my own use would get laughed off the face of the planet, but SQL CE has been the solution I was looking for for those home-type projects. And they work, flawlessly. Now I feel it's time to step up to the big league. I want to develop a complete, unified and module-based solution for the company that I'm working for. We're still using stuff from the 80s, for goodness' sake! I use Excel and query the ancient database on my own because I can't stand the GUI. Nothing against people of age, but the IDE our programmers are using is from the stone age, and they use APL of all things with it. I've yet to see a radio button control anywhere in the GUI where it would make sense. Anyway, I want to do this right from the ground up. I'm by no means a newbie when it comes to programming in .NET 2010; however, I want the entire solution to be professionally done. I want version control, test projects, project flow, SQL 2008 integration and all the bells and whistles that come with that. I know for a fact that if we had something like that running, not only would development costs and time be slashed fourfold, but the possibilities for expansion and performance would skyrocket. (Between the GUI and our DB engine, it can only use ONE CORE! ONE! It's 2012, for goodness' sake!) Our business is growing and our current ancient solution just can't keep up, and I'd hate to see our business go down in flames because our programmer is stuck in the 80s and refuses to use anything current. So I ask you guys, the experts and know-it-alls, where do I start? Are there any gems of good books out there in the haystack of all the "for Dummies" type of deals? I already have several people backing me in this endeavour, and while it may seem brash to just usurp the current programmers, I'm doing this for the company as a whole. Thank you guys for your time.

    Read the article

  • Best Practices for High Volume CPA Import Operations with ebXML in B2B 11g

    - by Shub Lahiri, A-Team
    Background
    B2B 11g supports the ebXML messaging protocol, where multiple CPAs can be imported via command-line utilities. This note highlights one aspect of the best practices for import of CPAs when a large number of CPAs, in excess of several hundred, has to be maintained within the B2B repository.
    Symptoms
    The import of a CPA is usually a 2-step process: first creating a soa.zip file using the b2bcpaimport utility based on a CPA properties file, and then using b2bimport to import it into the B2B repository. The commands are provided below:

        ant -f ant-b2b-util.xml b2bcpaimport -Dpropfile="<Path to cpp_cpa.properties>" -Dstandard=true
        ant -f ant-b2b-util.xml b2bimport -Dlocalfile=true -Dexportfile="<Path to soa.zip>" -Doverwrite=true

    Usually the first command completes fairly quickly regardless of the number of CPAs in the repository. However, as the number of trading partners within the repository goes up, the time to complete the second command could go up to ~30 secs per operation. This could add up to a significant amount if there is a need to import hundreds of CPAs into a production system within a limited downtime/maintenance window.
    Remedy
    In situations where there is a large number of entries to be imported, it is best to set up a staging environment and go through the import operation of each individual CPA in an empty repository. Since this will be done in an empty repository, the time taken for completion should be reasonable. After all the partner profiles have been imported, a full repository export can be taken to capture the metadata for all the entries in one file. If this single file with all the partner entries is imported into a loaded repository, the total time taken for import of all the CPAs should see a dramatic reduction.
    Results
    Let us take a look at the numbers to see the benefit of this approach. With a pre-loaded repository of ~400 partners, the individual import time for each entry takes ~30 secs. So, if we had to import another 100 partners, the individual entries would take ~50 minutes (100 times ~30 secs). On the other hand, if we prepare the repository export file of the same 100 partners from a staging environment earlier, the import takes about ~5 mins. The total processing time for the loading of metadata, especially in a production environment, can thus be shortened by almost a factor of 10.
    Summary
    The full article includes a diagram that summarizes the entire approach and process.
    Acknowledgements
    The material posted here has been compiled with the help from the B2B Engineering and Product Management teams.
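    When the staging repository has to be populated with several hundred CPAs in the first place, it is worth scripting the per-CPA import loop. Below is a minimal Python sketch that simply shells out to the two documented ant targets for every CPA properties file in a directory; the directory layout and the assumption that b2bcpaimport leaves soa.zip in the working directory are placeholders to adapt to your environment.

        import glob
        import subprocess

        CPA_PROP_DIR = '/staging/cpa-properties'   # hypothetical: one cpp_cpa.properties file per partner
        SOA_ZIP = 'soa.zip'                        # assumed output location of the b2bcpaimport step

        for propfile in sorted(glob.glob(CPA_PROP_DIR + '/*.properties')):
            # Step 1: package the CPA properties file into soa.zip.
            subprocess.check_call([
                'ant', '-f', 'ant-b2b-util.xml', 'b2bcpaimport',
                '-Dpropfile=%s' % propfile, '-Dstandard=true'])
            # Step 2: import the package into the (empty) staging repository.
            subprocess.check_call([
                'ant', '-f', 'ant-b2b-util.xml', 'b2bimport',
                '-Dlocalfile=true', '-Dexportfile=%s' % SOA_ZIP, '-Doverwrite=true'])

        # Afterwards, take one full repository export from staging and import that
        # single file into production, as described in the Remedy section above.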

    Read the article

  • Profiling Startup Of VS2012 – Ants Profiler

    - by Alois Kraus
    I just downloaded ANTS Profiler 7.4 to check how fast it is and how deeply I can analyze the startup of Visual Studio 2012. The Pro version, which is the useful one, costs 445€, which is OK. To measure a complex system I decided to simply profile VS2012 (Update 1) on my older Intel 6600 2.4GHz with 3 GB RAM and a 32-bit Windows 7. ANTS Profiler is really easy to use, so let's try it out. ANTS Profiler wants to start the profiled application on its own, which seems to be rather common. I chose method-level timing of all managed methods. In the configuration menu I asked for all call stacks to get full details. Once this is configured, you are ready to go. After that you can select the Method Grid to view wall clock time in ms. I hate percentages, which are on by default, because I want to look at where absolute time is spent and not something else. From the Method Grid I can drill down to see where time is spent, and I can look at the decompiled methods where the time is spent. This does really look nice. But did you see the size of the scroll bar in the method grid? Although I wanted all call stacks, I get only about 4 pages of methods to drill down into. From the scroll bar count I would guess that the profiler shows me about 150 methods for the complete VS startup. This is nonsense. I will never find a bottleneck in VS when I am presented only a fraction of the methods that were actually executed. I have also tried, in the configuration window, to profile the extremely trivial functions as well, but there was no noticeable difference. It seems that ANTS Profiler filters away too many details to be useful for bigger systems. If you want to optimize a CPU-bound operation inside NUnit, then ANTS Profiler, with its line-level timings, is a very nice tool to work with. But for bigger stuff it is certainly not usable. I also do not like that I must start the profiled application from the profiler UI. This makes it hard to profile processes which are started by some other process. Next: JetBrains dotTrace

    Read the article

  • DTLoggedExec 1.1.2008.4 Released!

    - by Davide Mauri
    Today I've released the latest version of my DTExec replacement tool, DTLoggedExec. The main changes are the following:
    - Used a new strategy for version numbers. Now it follows the pattern Major.Minor.TargetSQLServerVersion.Revision.
    - Added support for Auto Configurations.
    - Fixed a bug that reported an incorrect number of errors and warnings to Log Providers.
    - Fixed a bug that prevented correct casting of values when using the /Set and /Param options.
    - Errors and Warnings are now counted more precisely.
    - Updated database and log import scripts to categorize logs by projects and sections. E.g.: Project: MyBIProject; Sections: Staging, Datawarehouse.
    - Removed unused report stored procedures from the database.
    - Updated Samples: 12 samples are now available to show ALL DTLoggedExec features.
    - From this version only SSIS 2008 will be supported.
    http://dtloggedexec.codeplex.com/releases/view/62218
    It is useful to say something more on a couple of specific points:
    From this version only SSIS 2008 will be supported. Yes, Integration Services 2005 is not supported anymore. The latest version capable of running SSIS 2005 packages is 1.0.0.2.
    Updated database and log import scripts to categorize logs by projects and sections. When you import a log file, you can now assign it to a Project and to a Section of that project. In this way it's easier to gather statistical information for an entire project or a subsection of it. This also allows you to store logged data of packages belonging to different projects in the same database.
    Updated Samples. A complete set of samples that shows how to use all DTLoggedExec features is now shipped with the product. Enjoy!
    Added support for Auto Configurations. This point will have a post of its own, since it's quite important and is by far the biggest new feature introduced in this release. To explain it in a few words, I can just say that you don't need to waste time with complex DTS configuration files or options, since a package will configure itself automatically. You just need to write a single statement as a parameter for DTLoggedExec. This feature can simplify deployment *a lot* :)
    In the next days I'll write the mentioned post on Auto Configurations and I'll update the documentation available on the DTLoggedExec website: http://dtloggedexec.davidemauri.it/MainPage.ashx

    Read the article

  • Why do "Joke" programming languages exist? [closed]

    - by ThePlan
    First of all, please be aware this post contains some abusive language, but I hope it will not bother anyone. I apologize for the bad language, but that's what the name is. As I've been doing documentation on existing programming languages, attempting to make a complete list of them, I stumbled across terrible programming languages which were clearly not made for actual use and implementation due to their insane difficulty. Languages such as Brainfu*k, LOLCODE or Whitespace are joke languages because they have no real use. For example, here is a "Hello world" program written in Brainfu*k, taken from Wikipedia. The following program prints "Hello World!" and a newline to the screen:

        +++++ +++++             initialize counter (cell #0) to 10
        [                       use loop to set the next four cells to 70/100/30/10
            > +++++ ++              add  7 to cell #1
            > +++++ +++++           add 10 to cell #2
            > +++                   add  3 to cell #3
            > +                     add  1 to cell #4
            <<<< -                  decrement counter (cell #0)
        ]
        > ++ .                  print 'H'
        > + .                   print 'e'
        +++++ ++ .              print 'l'
        .                       print 'l'
        +++ .                   print 'o'
        > ++ .                  print ' '
        << +++++ +++++ +++++ .  print 'W'
        > .                     print 'o'
        +++ .                   print 'r'
        ----- - .               print 'l'
        ----- --- .             print 'd'
        > + .                   print '!'
        > .                     print '\n'

    Or another example, taken from the LOLCODE language:

        HAI
        CAN HAS STDIO?
        PLZ OPEN FILE "LOLCATS.TXT"?
            AWSUM THX
                VISIBLE FILE
            O NOES
                INVISIBLE "ERROR!"
        KTHXBYE

    These languages are very difficult to learn/read/work with. My question is: why do they exist? What is the purpose of them? Also, is there an official "name" for these types of languages?
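    As an aside, one concrete way to see how such a language works is to run it: below is a minimal Brainfu*k interpreter sketched in Python (the 30,000-cell tape size and the behaviour on missing input are arbitrary choices of this sketch, not part of any spec). Feeding it the commented "Hello World!" program above prints exactly that, since any character that is not one of the eight commands is simply ignored.

        def run_brainfuck(code, input_bytes=b''):
            """Interpret Brainfuck source and return its output as a string."""
            tape, ptr, pc, out = [0] * 30000, 0, 0, []
            inp = list(input_bytes)
            # Pre-compute matching bracket positions for the loop commands.
            stack, jumps = [], {}
            for i, c in enumerate(code):
                if c == '[':
                    stack.append(i)
                elif c == ']':
                    j = stack.pop()
                    jumps[i], jumps[j] = j, i
            while pc < len(code):
                c = code[pc]
                if c == '>':
                    ptr += 1
                elif c == '<':
                    ptr -= 1
                elif c == '+':
                    tape[ptr] = (tape[ptr] + 1) % 256
                elif c == '-':
                    tape[ptr] = (tape[ptr] - 1) % 256
                elif c == '.':
                    out.append(chr(tape[ptr]))
                elif c == ',':
                    tape[ptr] = inp.pop(0) if inp else 0
                elif c == '[' and tape[ptr] == 0:
                    pc = jumps[pc]     # jump forward past the matching ']'
                elif c == ']' and tape[ptr] != 0:
                    pc = jumps[pc]     # jump back to the matching '['
                pc += 1                # any other character acts as a comment
            return ''.join(out)

        # Usage: print(run_brainfuck(open('hello.bf').read()))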

    Read the article

  • Seven Accounting Changes for 2010

    - by Theresa Hickman
    I read a very interesting article called Seven Accounting Changes That Will Affect Your 2010 Annual Report from SmartPros that nicely summarized how 2010 annual financial statements will be impacted.  Here’s a Reader’s Digest version of the changes: 1.  Changes to revenue recognition if you sell bundled products with multiple deliverables: Old Rule: You needed to objectively establish the “fair value” of each bundled item. So if you sold a dishwasher plus installation and could not establish the fair value of the installation, you might have to delay recognizing revenue of the dishwasher days or weeks later until it was installed. New Rule (ASU 2009-13): “Objective” proof of each service or good is no longer required; you can simply estimate the selling price of the installation and warranty. So the dishwasher vendor can recognize the dishwasher revenue immediately at the point of sale without waiting a few weeks for the installation. Then they can recognize the estimated value of the installation after it is complete. 2.  Changes to revenue recognition for devices with embedded software: Old Rule: Hardware devices with embedded software, such as the iPhone, had to follow stringent software revrec rules. This forced Apple to recognize iPhone revenues over two years, the period of time that software updates were provided. New Rule (ASU 2009-14): Software revrec rules no longer apply to these devices with embedded software; these devices can now follow ASU 2009-13. This allows vendors, such as Apple, to recognize revenue sooner. 3.  Fair value disclosures: Companies (both public and private) now need to spend extra time gathering, summarizing, and disclosing information about items measured at fair value, such as significant transfers in and out of Level 1(quoted market price), Level 2 (valuation based on observable markets), and Level 3 (valuations based on internal information). 4.  Consolidation of variable interest entities (a.k.a special purpose entities): Consolidation rules for variable interest entities now require a qualitative, not quantitative, analysis to determine the primary beneficiary. Instead of simply looking at the percentage of voting interests, the primary beneficiary could have less than the majority interests as long as it has the power to direct the activities and absorb any losses.  5.  XBRL: Starting in June 2011, all U.S. public companies are required to file financial statements to the SEC using XBRL. Note: Oracle supports XBRL reporting. 6.  Non-GAAP financial disclosures: Companies that report non-GAAP measures of performance, such as EBITDA in SEC filings, have more flexibility.  The new interpretations can be found here: http://www.sec.gov/divisions/corpfin/guidance/nongaapinterp.htm.  7.  Loss contingencies disclosures: Companies should expect additional scrutiny of their loss disclosures, such as those from litigation losses, in their annual financial statements. The SEC wants more disclosures about loss contingencies sooner instead of after the cases are settled.

    Read the article

  • Announcing the Next AppAdvantage IT Leader Documentary

    - by Tanu Sood
    Close on the heels of the very successful launch of Oracle BPM 12c, we are so excited to be announcing the next video in the Oracle AppAdvantage IT Leaders Series. As you know, the Oracle AppAdvantage IT Leaders Series profiles a successful executive in an industry leading organization that has embarked on a business transformation or platform modernization journey by taking full advantage of the Oracle AppAdvantage pace layered architecture. Tune in on Tuesday, September 9th at 10 am Pacific/1 pm Eastern to catch our IT executive, Regis Louis, in an in-depth conversation with IT executives from Siram S.p.a., a major energy services management company in Italy. The documentary will explore how the company is doing process orchestration and business process management to build inter-department collaboration, maintain data integrity and offer complete transparency throughout their Request for Proposal (RFP) process. Oracle’s technology expert will then do a comprehensive walk-through of Siram’s IT model, the various components and how Oracle Fusion Middleware technologies are enabling their Oracle E-Business Suite processes. Experts will be at hand to answer your questions live. Check out this live documentary webcast and find out how organizations like yours are building business agility leveraging existing investments in Oracle Applications. Register today. And if you are on twitter, send your questions to @OracleMiddle with #ITLeader, #AppAdvantage. We look forward to connecting with you on Tue, Sep 9. Siram Achieves Commercial Efficiency with Improved Business Process Agility. Date: Tuesday, September 9, 2014. Time: 10:00 AM PT / 1:00 PM ET

    Read the article
