Search Results

Search found 27181 results on 1088 pages for 'oracle desktop virtualization'.


  • Changes to File Store Provider in UCM PS3

    - by Kevin Smith
    In the recent PS3 release of UCM (11.1.1.4.0) there are some significant changes to the File Store Provider (FSP) configuration. For new PS3 installs (not upgrades from PS2), the FSP default storage rule includes a dispersion rule that changes the web-layout and vault paths by adding dispersion directories to the paths, to limit the number of files in the vault and web-layout directories. What that means is that if you install a new PS3 UCM instance and migrate content in from a previous version of UCM, the web URL will change. That is a critical problem for web sites and for general document management. See below for some details on the FSP configuration in PS3 and how you can change the default behavior. Use the link below to read the rest of this post, where I describe the issue in detail and provide instructions for how to modify a PS3 instance to use the old format for the web-layout path.

    Read the article

  • A Virtual Dilemma

    - by antony.reynolds
    Solving a Gotcha with VirtualBox Guest Additions I was just building a new virtual machine based off an existing image that didn't have the VirtualBox Guest Additions enabled.  The guest additions allow tight integration between the guest OS and the host environment, providing seamless mouse transfer and the ability to take advantage of the full video screen size.  The guest additions need to be linked with the kernel, which requires the kernel-devel package to be installed.  After installing this package and then trying to add the guest additions, the install failed, suggesting that I might not have the kernel-devel package, even though I had just installed it.  After a little thought I finally realized what had happened.  When I grabbed the kernel-devel package I hadn't checked the version of my kernel.  The kernel-devel I downloaded didn't match the revision of the kernel I was running!  Hence my problems.  I upgraded the kernel to the same revision as my kernel-devel package and rebooted.  I had installed dkms, so I was pleased to see that my VBox Additions successfully built and the mouse and screen now worked as expected. So now you know my embarrassing story for the day :-)

    Read the article

  • Critical Threads Optimization

    - by Rafael Vanoni
    Background One of the more common issues we've been seeing in the field is the growing difficulty in optimizing performance of multi-threaded applications. A good portion of this difficulty is due to the increasing complexity of modern processors that present various degrees of sharing relationships between hardware components. Take any current CMT processor and you'll find any number of CPUs sharing execution pipelines, floating point units, caches, etc. Consequently, applying the traditional recipe of one software thread for each CPU will have varying degrees of success, according to the layout of the underlying hardware. On top of this increasing complexity we've also seen processors with features that aim at dynamically resourcing software threads according to their utilization. Intel's Turbo Boost allows processors to increase their operating frequency if there is enough thermal headroom available and the processor isn't fully utilized. More recently, the SPARC T4 processor introduced dynamic threading, allowing each core to dynamically allocate more resources to its active CPUs. Both cases are in essence recognizing that current processors will be running a wide mix of workloads: some will be designed for throughput, others for low latency. The hardware is providing mechanisms to dynamically resource threads according to their runtime behavior. We're very aware of these challenges in Solaris, and have been working to provide the best out-of-the-box performance while providing mechanisms to further optimize applications when necessary. The Critical Threads Optimization was introduced in Solaris 10 8/11 and Solaris 11 as one such mechanism, allowing customers both to address issues caused by contention over shared hardware resources and to explicitly take advantage of features such as T4's dynamic threading.

    What it is The basic idea is to allow performance-critical threads to execute with more exclusive access to hardware resources. For example, when deploying an application that implements a producer/consumer model, it will likely be advantageous to give the producer more exclusive access to the hardware instead of having it compete for resources with all the consumers. In the case of a T4-based system, we may want to have the producer running by itself on a single core and create one consumer for each of the remaining CPUs. With the Critical Threads Optimization we're extending the semantics of scheduling priorities (which thread should run first) to include priority over shared resources (which thread should have more "space"). Now the scheduler will not only run higher priority threads first: it will also provide them with more exclusive access to hardware resources if they are available.

    How does it work? Using the previous example in Solaris 11, all you'd have to do is place the producer in the Fixed Priority (FX) scheduling class at priority 60, or in the Real Time (RT) class at any priority, and Solaris will try to give it more "hardware space". On both Solaris 10 8/11 and Solaris 11 this can be achieved through the existing priocntl(1,2) and priocntlset(2) interfaces, as in the sketch below. If your application already assigns these priorities to performance-critical threads, there's no additional step you need to take. One important aspect of this optimization is that it requires some level of idleness in the system, either as a result of sizing the application beforehand or through periods of transient idleness during runtime. If the system is fully committed, the scheduler will put all the available CPUs to work.

    Best practices If you're an application developer, we encourage you to look into assigning the right priorities for the different threads in your application. Solaris provides different scheduling classes (Time Share, Interactive, Fair Share, Fixed Priority and Real Time) that offer different policies and behaviors. It is not always simple to figure out which set of threads is critical to the performance of a workload, and it may not always be feasible to take advantage of this optimization, but we believe that this can be correctly (and safely) done during development. Overall, the out-of-the-box performance in Solaris should meet your workload's requirements. If you are looking for that extra bit of performance, then the Critical Threads Optimization may be what you're looking for.
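
    As a hedged illustration (not from the original post, which is language-agnostic), here is a minimal Java sketch that shells out to the priocntl(1) command to move the current JVM into the FX class at priority 60. Acting on the whole process, the class name, and the use of ProcessBuilder are all assumptions for the example; a real deployment might instead target a single LWP with priocntl's lwp idtype or call the priocntl(2)/priocntlset(2) system interfaces from native code.

      import java.lang.management.ManagementFactory;

      public class CriticalProducer {
          public static void main(String[] args) throws Exception {
              // PID of the current JVM, parsed from "pid@hostname".
              String pid = ManagementFactory.getRuntimeMXBean().getName().split("@")[0];

              // Equivalent to running: priocntl -s -c FX -m 60 -p 60 -i pid <pid>
              // -s set parameters, -c scheduling class, -m user priority limit,
              // -p user priority, -i idtype followed by the id list.
              Process p = new ProcessBuilder(
                      "priocntl", "-s", "-c", "FX", "-m", "60", "-p", "60", "-i", "pid", pid)
                      .inheritIO()
                      .start();

              if (p.waitFor() != 0) {
                  System.err.println("priocntl failed; insufficient privileges?");
              }

              // ... start the performance-critical producer work here ...
          }
      }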

    Read the article

  • org.openide.awt.ColorComboBox

    - by Geertjan
    It's the time of year when a lot of NetBeans Platform tutorials are being reviewed, revised, and rewritten. Today I'm looking at the NetBeans Platform Paint Application Tutorial. Suddenly I remembered seeing something in a recent API Changes document about a new class, ColorComboBox. That means I can make the tutorial a lot simpler, since Tim Boudreau's external ColorChooser.jar is now superfluous. Here's what the ColorComboBox looks like: It works perfectly. Of course, the nice thing about using that JAR was that it showed the user how to incorporate external JARs, but I'll make sure to make a note of that in the tutorial, along the lines of "If you don't like the NetBeans Platform color combobox, and would like to replace it with your own, such as Tim's ColorChooser.jar or a JavaFX color chooser, take the following steps." In short, if you're using NetBeans APIs, write this on your ceiling above your bed: http://bits.netbeans.org/dev/javadoc/apichanges.html, check that page regularly (mark it in your calendar to do first thing every Monday morning) and you'll be aware of the latest changes as they happen.
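
    As a rough idea of what the tutorial change amounts to, here is a hedged sketch of using the new class in place of the external color chooser. The getSelectedColor/setSelectedColor accessors and the assumption that ColorComboBox extends JComboBox (so addActionListener is available) are based on my reading of the API changes document, not on the tutorial itself; check the current Javadoc before copying this.

      import java.awt.Color;
      import javax.swing.JToolBar;
      import org.openide.awt.ColorComboBox;

      public class PaintToolbar {

          static JToolBar createToolbar() {
              JToolBar bar = new JToolBar();
              ColorComboBox colorPicker = new ColorComboBox();   // replaces the external ColorChooser.jar
              colorPicker.setSelectedColor(Color.BLUE);          // assumed accessor, see lead-in
              colorPicker.addActionListener(e -> {
                  Color chosen = colorPicker.getSelectedColor(); // assumed accessor; may be null
                  // ... hand the chosen color off to the paint canvas ...
              });
              bar.add(colorPicker);
              return bar;
          }
      }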

    Read the article

  • Eight Geektacular Christmas Projects for Your Day Off

    - by Jason Fitzpatrick
    It’s Christmas Eve and if you’re lucky you’ve got some time off ahead of you. Let’s put that time to good use with some holiday-centered geeking out. Come on in for LEGO ornaments, Darth Vader snowflakes, and Christmas light hacks galore.

    Read the article

  • Learn more about SPARC by listening to our newly recorded podcasts

    - by Cinzia Mascanzoni
    Please listen to our newly recorded series of four podcasts focused on SPARC. The topics are: How SPARC T4 Servers Open New Opportunities; SPARC Roadmap and SPARC T4 Architecture Highlights; SPARC T4 for Installed Base Refresh and Consolidation; and SPARC T4 – How Does It Stack Up Against the Competition? Rob Ludeman, from SPARC Product Management, and Thomas Ressler, WWA&C Alliances Consultant, are your hosts. The intent is to continue to help you understand how to position and sell SPARC/T4 into your customer architecture. Details on how to access these podcasts can be found here.

    Read the article

  • CON6714 - Mixed-Language Development: Leveraging Native Code from Java

    - by Darryl Gove
    Here's the abstract from my JavaOne talk: There are some situations in which it is necessary to call native code (C/C++ compiled code) from Java applications. This session describes how to do this efficiently and how to performance-tune the resulting applications. The objectives for the session are to explain the reasons for using native code in Java applications, describe the pitfalls of calling native code from Java, and discuss performance tuning of Java apps that use native code. I'll cover how to call native code from Java and how to debug native code, and then I'll dig into performance tuning the code. The talk does not go too deep into performance tuning, focusing instead on the JNI-specific topics; I'll do a bit more about performance tuning in my OpenWorld talk later in the day.
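
    For readers new to the topic, here is a minimal, hedged sketch of the Java side of a JNI call; the package, class, and library names are invented for illustration and are not from the talk. The matching C implementation would export the symbol Java_demo_NativeSum_sum.

      package demo;

      public class NativeSum {

          static {
              // Loads libnativesum.so on Solaris/Linux (nativesum.dll on Windows)
              // from java.library.path; a common pitfall is a mismatch between
              // the JVM and the library architecture (32- vs 64-bit).
              System.loadLibrary("nativesum");
          }

          // Implemented in C and compiled into the native library above.
          public static native long sum(int[] values);

          public static void main(String[] args) {
              System.out.println(sum(new int[] {1, 2, 3, 4}));
          }
      }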

    Read the article

  • OSB 11g & SAP – Single Channel/Program ID for Multiple IDOCs

    - by Shub Lahiri, A-Team
    Background This note is a supplement to the blog entry, SOA 11g & SAP – Single Channel/Program ID for Multiple IDOCs, by Greg Mally. Greg has shown how a single SOA Suite composite can be used with iWay Adapters to receive multiple IDOC types via a single channel in the adapter, corresponding to a single programID on the SAP system. We will try to address the same requirements within the OSB framework here. Project Build - Design Time The basic build of an OSB project with the iWay SAP Adapter, as seen in another entry in this blog, consists of working in the OSB Design console and Application Explorer. OSB Design Time - Part 1 We will create a placeholder project first in OSB with a proper directory structure, so that we can export the WSDL, XSD and the JCA binding information from Application Explorer directly into this project. Application Explorer - iWay Design Time Tool Receiving IDOCs is classified as an inbound event within Application Explorer. For setting up events, a channel is first defined (e.g. iDoc_Channel) using the same PROGRAMID (RFC destination) as defined within SAP for the OSB server. Next, the same channel is used to export the JCA Inbound Event artifacts for the candidate IDOC, e.g. DEBMAS06, directly to the pre-created OSB project. Note that the validation for the schema has been turned off. As a result, this allows the adapter, at runtime, to use a single channel to receive multiple IDOC types from SAP and pass them on to the OSB runtime engine without any validation. In other words, we do not have to repeat the above step for each IDOC type. OSB Design Time - Part 2 Create two simple XML-based Business Services to write to a file, e.g. SAP_DEBMAS_File and SAP_MATMAS_File. Next, generate a Proxy Service using the JCA binding file exported from Application Explorer in the previous section. In the generated proxy service, edit the message flow and add a route node. Add a routing table in the route node with the following routing function: fn:local-name-from-QName(fn:node-name($body/*[1])) This function takes advantage of the fact that the XML payload at runtime, after translation by the adapter, has the IDOC type as its top element. With the routing function in place, build the routing table to add two branches that route the IDOCs to the appropriate Business Service for writing the XML payload to files in separate directories. This completes the build of the OSB project. Testing - Run-Time After deployment and activation, the SAP adapter will wait to receive multiple types of IDOCs sent from the SAP system using a single channel. Upon receipt of the IDOCs, the OSB project will route them appropriately to save the corresponding XML payloads for different IDOC types in different directories.

    Read the article

  • TomEE Integration in NetBeans Next

    - by Geertjan
    At JavaOne 2013, there was a lot of buzz around the TomEE server, e.g., many Tweets, a nice party, and a new TomEE consulting company. For those tracking TomEE developments, it is interesting to note that recently the NetBeans IDE development builds have had added to them... TomEE support. Note: The TomEE support described here is not in NetBeans IDE 7.4, but in development builds for the next release of NetBeans IDE. For example, with NetBeans IDE development builds you're able to:
    - register TomEE as a server in the Services window (TomEE has several distributions; one can use the "with JAX-RS" one, for example)
    - create a Java EE 6 web project (e.g., Maven based) against this server
    - create JPA entities from a database
    - create JAX-RS classes from JPA entities
    - create JSF pages from JPA entities
    - create a new data source for TomEE and deploy it to the server
    - let the IDE figure out the components that are already packaged in TomEE, and the fact that (unlike with regular Tomcat) it does not need to package any components such as a JSF implementation, persistence provider, or JAX-RS runtime, so that the resulting WAR file is very small
    - use "deploy on save" with TomEE, so that your development cycle is very fast
    Adam Bien blogged about how he set up TomEE some time ago, here. The official support in NetBeans IDE will be much more tightly integrated, simplifying the steps Adam describes. For example, the IDE does step 2 from Adam's blog for you, i.e., it sets up TomEE deployment roles. Moreover, it knows about all the technologies included in TomEE so that it can optimize the packaging; it knows about TomEE's persistence setup; it can work with TomEE data sources, etc. Below you see a Maven-based Java EE 6 PrimeFaces application (all entities and JSF pages generated from a database) deployed to TomEE in NetBeans IDE. And here's the management console for configuring and fine-tuning TomEE in NetBeans IDE. When I tried out the NetBeans IDE development build and TomEE, to see how everything fits together, I was surprised at how fast TomEE started up. Not sure what they did to it, but it seems like a server on steroids. And setting it up in NetBeans IDE was trivial. Add the simple setup of TomEE in NetBeans IDE to the many benefits that the widely praised out-of-the-box NetBeans Maven tools make possible, together with the fact that not one single plugin had to be installed to get everything you see described here up and running... and you have a really powerful combination of dev tools, all for free.
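
    To make concrete what "create JAX-RS classes from JPA entities" produces, here is a hedged sketch of the kind of resource class such wizards typically generate. The Customer entity, the path, and the persistence unit name are invented for illustration and are not taken from the post, and this is not the wizard's literal output.

      import java.util.List;
      import javax.ejb.Stateless;
      import javax.persistence.Entity;
      import javax.persistence.EntityManager;
      import javax.persistence.Id;
      import javax.persistence.PersistenceContext;
      import javax.ws.rs.Consumes;
      import javax.ws.rs.GET;
      import javax.ws.rs.POST;
      import javax.ws.rs.Path;
      import javax.ws.rs.Produces;
      import javax.ws.rs.core.MediaType;
      import javax.xml.bind.annotation.XmlRootElement;

      @Entity
      @XmlRootElement
      class Customer {                  // hypothetical JPA entity (normally in its own file)
          @Id
          Long id;
          String name;
      }

      @Stateless
      @Path("customers")
      public class CustomerResource {

          @PersistenceContext(unitName = "demoPU")   // persistence unit name is an assumption
          private EntityManager em;

          @GET
          @Produces(MediaType.APPLICATION_XML)
          public List<Customer> findAll() {
              return em.createQuery("SELECT c FROM Customer c", Customer.class)
                       .getResultList();
          }

          @POST
          @Consumes(MediaType.APPLICATION_XML)
          public void create(Customer customer) {
              em.persist(customer);
          }
      }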

    Read the article

  • Netcat I/O enhancements

    - by user13277689
    When Netcat was integrated into OpenSolaris it was already clear that a couple of enhancements would be needed. The biggest set of changes made after Solaris 11 Express was released brings various I/O enhancements to the netcat shipped with Solaris 11. Also, since Solaris 11, the netcat package is installed by default in all distribution forms (live CD, text install, ...). Now, let's take a look at the new functionality:

        /usr/bin/netcat    alternative program name (symlink)
        -b bufsize         I/O buffer size
        -E                 use exclusive bind for the listening socket
        -e program         program to execute
        -F                 no network close upon EOF on stdin
        -i timeout         extension of timeout specification
        -L timeout         linger on close timeout
        -l -p port addr    previously not allowed usage
        -m byte_count      quit after receiving byte_count bytes
        -N file            pattern for UDP scanning
        -I bufsize         size of input socket buffer
        -O bufsize         size of output socket buffer
        -R redir_spec      port redirection; addr/port[/{tcp,udp}] syntax of redir_spec
        -Z                 bypass zone boundaries
        -q timeout         timeout after EOF on stdin

    Obviously, the Swiss army knife of networking tools just got a bit thicker. While by themselves the options are pretty self explanatory, their combination with other options, the context of use, or boundary values of option arguments make it possible to construct small but powerful tools. For example: the port redirector allows converting a TCP stream to UDP datagrams; the buffer size specification makes it possible to send one-byte TCP segments or to produce IP fragments easily; the socket linger option can be used to produce TCP RST segments by setting the timeout to 0; the execute option makes it possible to simulate TCP/UDP servers or clients with a shell/python/Perl/whatever script; etc. If you find some other helpful ways to use it, please share via comments. The manual page nc(1) contains more details, along with examples on how to use some of these new options.

    Read the article

  • Smarty: Configurable Comments and Code Templates

    - by Martin Fousek
    Hello, today we would like to show you a few improvements we have prepared for the PHP Smarty framework support in NetBeans 7.3. So let's talk about the adjustable toggle comment action and code templates. Configurable Comments As some of you requested, we implemented a toggle comment action with adjustable behavior. In NetBeans 7.3 you can choose in Options between commenting as "Smarty comments everywhere" or "Language sensitive comments" in Smarty templates. Toggle comment, language sensitive: Toggle comment as Smarty comment everywhere: Code Templates In NetBeans 7.3 we will provide by default many code templates inside Smarty templates or directly inside Smarty tags. Code templates should be available for all built-in or custom functions and modifiers of Smarty 3.x. Besides that, you should be able to define additional custom templates easily in Options -> Editor -> Code Templates for "Smarty Templates" or directly for "Smarty Markup" (which means code templates inside a Smarty tag). You can also take advantage of selection templates, which are able to wrap your code with a chosen Smarty tag. That's all for today. As always, please test it and report all the issues or enhancements you find in NetBeans BugZilla (component php, subcomponent Smarty).

    Read the article

  • Developing for Windows CE platform?

    - by grmbl
    I'm looking into creating some applications for workers to use on the work floor. They'll be using Psion NEO devices running Windows CE 5.0. My skill set covers C#, PHP, and ASP.NET (+ web services). Application requirements: it should connect to our ERP system running on IBM iSeries (AS400); it should run in fullscreen (effectively hiding the OS); and it needs touch-friendly usability. I have tried the following:
    Full winform application run through an RDP session: [+] easy deployment using an .rdp file. [+] the application can be run in a desktop environment too. [+] the RDP host can easily access DB2 using IBM drivers. [+] the GUI works OK on a small screen. [-] environment = terminal server (which is already under heavy use).
    Full winform application running on the device OS: [+] environment = local. [+] responsive. [-] must use a web service to access DB2. [-] deployment... [-] fixed platform (no desktop).
    Console application running on the device OS: [+] environment = local. [+] very responsive. [-] must use a web service to access DB2. [-] no fullscreen or other window options? [-] deployment... [-] fixed platform (no desktop).
    I'm considering creating a web application, but it seems the OS comes with IE 5? I don't want to alter the OS in any way (install other browsers, etc.). I would like an application that's responsive, easy to deploy, fullscreen, and optionally multiplatform. I have seen handheld devices using a terminal (emulation?) with a console-like interface. This seems to be native to the device, but I'm afraid it requires modest knowledge of C++? It seems that using RDP is the way to go, but I came here for advice and am looking for people that have been in the same situation and are willing to share their experience. There do not seem to be many "best practices" on the web that could help me decide the best way of working. Greetings

    Read the article

  • Crawling a Content Folio

    - by Kyle Hatlestad
    Content Folios in WebCenter Content allow you to assemble, track, and access a logical group of documents and/or links.  They let you manage the items as just a list (simple folio) or organized as a hierarchy (advanced folio).  The built-in UI in the content server allows you to work with these folios, but publishing them or consuming them externally can be a bit of a challenge.   The folios themselves are actually XML files that contain the structure, attributes, and pointers to the content items.  So to publish this somewhere, such as a Site Studio page, you could perhaps use an XML parser to traverse the structure and create your output.  But XML parsers are not always the easiest or most efficient to use.  In order to more easily crawl and consume a Content Folio, Ed Bryant, Principal Sales Consultant, wrote a component to do just that.  His component adds a service which does all the work for you and returns the folio structure as a simple resultset.  So consuming and publishing that folio on a Site Studio page or in your portal using RIDC is a breeze!  For example, take an advanced Content Folio like this: If we look at the native file, the XML looks like this: But if we access the folio using the new service - http://server/cs/idcplg?IdcService=FOLIO_CRAWL&dDocName=ecm008003&IsPageDebug=1 - this is what the result set looks like (using the IsPageDebug parameter). Given this as the result set, it makes it very easy to consume and repurpose that folio. You can download a copy of the sample component here. Special thanks to Ed for letting me share this component!
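
    As a hedged illustration of the "breeze" part, here is a rough RIDC sketch that calls the FOLIO_CRAWL service added by the component and prints whatever result sets come back. The connection URL, port, credentials, and result-set handling are assumptions for the example, so adjust them for your own content server.

      import oracle.stellent.ridc.IdcClient;
      import oracle.stellent.ridc.IdcClientManager;
      import oracle.stellent.ridc.IdcContext;
      import oracle.stellent.ridc.model.DataBinder;
      import oracle.stellent.ridc.model.DataObject;
      import oracle.stellent.ridc.model.DataResultSet;
      import oracle.stellent.ridc.protocol.ServiceResponse;

      public class FolioCrawlClient {
          public static void main(String[] args) throws Exception {
              IdcClientManager manager = new IdcClientManager();
              // Intradoc socket connection; host and port are assumptions.
              IdcClient client = manager.createClient("idc://contentserver:4444");
              IdcContext user = new IdcContext("sysadmin");

              DataBinder request = client.createBinder();
              request.putLocal("IdcService", "FOLIO_CRAWL");  // service added by the component
              request.putLocal("dDocName", "ecm008003");

              ServiceResponse response = client.sendRequest(user, request);
              try {
                  DataBinder result = response.getResponseAsBinder();
                  // Dump every returned result set row; the folio structure comes back flattened.
                  for (String name : result.getResultSetNames()) {
                      DataResultSet rs = result.getResultSet(name);
                      for (DataObject row : rs.getRows()) {
                          System.out.println(name + ": " + row);
                      }
                  }
              } finally {
                  response.close();
              }
          }
      }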

    Read the article

  • JFall 2012

    - by Geertjan
    JFall 2012 was over far too soon! Seven tracks going on simultaneously in a great location, with many artifacts reminding me of JavaOne, and nice snacks and drinks afterwards. The day started, as such things always do, with a keynote. Thanks to @royvanrijn for the photo below; I didn't take any myself, and without a picture this report might have been too dry. What you see above is Steve Chin riding into the keynote hall on his NightHacking bike. The keynote was interesting; I can't be too complimentary about it, since I was part of it myself. Bert Ertman introduced the day and then Steve Chin took over, together with Sharat Chander, Tom Eugelink, Timon Veenstra, and myself. We had a strict choreography for the keynote, one that would ensure a lot of variation and some unexpected surprises, such as Steve being thrown off the stage a few times by Bert for mentioning JavaOne too many times, rather than the clearly much cooler JFall. Steve talked about JavaOne and the direction Java is headed in, Sharat talked about JavaME and embedded devices, Steve and Tom did a demo involving JavaFX, I did a Project Easel demo, and Timon from Ordina talked about his Duke's Choice Award winning AgroSense project. I think the Project Easel demo (which I repeated later in a screencast for Parleys arranged by Eugene Boogaart) came across well, and several people I spoke to especially liked the roundtrip/bi-directional work that can be done, from browser to IDE and back again, very simply and intuitively. (In a long conversation on the drive back home afterwards, we discussed the scenario of a designer laying out the UI in HTML and then handing the HTML to a developer for back-end work, a developer who would then find it convenient to open the HTML in a browser and quickly navigate from the browser to the resources within the IDE; that alone was considered extremely interesting and worth adopting NetBeans for.) Later I attended a session by David Delabassee on Java EE 7, Hans Dockter on Gradle, and Sander Mak on cross-build injection attacks. I was sorry to have missed the sessions by Martijn Verburg, which sounded really fantastic, and by others such as Gerrit Grunwald. I did a session too, entitled "Unlocking the Java EE 6 Platform", which was very well attended, pretty much a full room, and the demo went very smoothly. I talked to many people, e.g., a long time with Hans Dockter about how cool Gradle is and how great the Gradle/NetBeans plugin is turning out to be. I also had a long conversation (and did a demo) with Chris Chedgey, from Structure101, after his session, which was incredibly well attended; very interesting how popular modularity is. I met several people for the first time, as well as some colleagues from past places I've worked at. All in all, it was a great conference, unfortunately too short, which was very well attended (clearly over 1000 people), with several international speakers, as well as international attendees such as Mattias Karlsson, Sweden JUG leader. And, unsurprisingly, I came across NetBeans Platform applications again, none of which I had ever heard of before. In each case, "our fat client application" was mentioned in passing, never as a main application, and never in a context where there are plans for the application to be migrated to the web or mobile, simply because doing so makes no business sense at all. Great times at JFall; looking forward to meeting some of the people I met again soon.

    Read the article

  • Building a custom Xsession with VNC access

    - by Disco
    I have a small project where I'll need to build a very minimal X11 environment for a cybercafé kind of shop. My idea is to have a simple server which will run a dozen VNC daemons, each listening on a different port (each port = one client). The server is working; I can connect using VNC to the different ports. Now I'm looking for a solution to create a customized desktop for each client, with a bare minimum of apps which I want to be able to add per user. For example, user1 will have app1 and app2, user2 will have app1 only, etc. I plan to use Openbox as the WM but have no clue how to add custom icons to its desktop. Any clue or starting point would be appreciated.

    Read the article

  • JavaOne 2012 session slides: "Dev Berkeley DB & DB Mobile Server for Java Embedded Tech"

    - by hinkmond
    The latest JavaOne 2012 slides are available on the Web. Here's the presentation that Eric Jensen and I did on "Developing Berkeley DB & DB Mobile Server for Java Embedded Technology". Enjoy! See: Click here for the slides in a new window It was fun to present this talk at JavaOne 2012 with Eric. We had some good questions from the audience. Let me know in the Comments if you have any further questions. I'll pass all the good questions to Eric and keep the bad questions for myself. Hinkmond

    Read the article

  • Isis Finally Rolls Out

    - by David Dorf
    Google has rolled their wallet out for several chains; I see the NFC readers in Walgreen's when I'm sent there for milk.  But Isis has been relatively quiet until now.  As of last week they have finally launched in their two test cities: Austin and Salt Lake City.  Below are the supported carriers and phones as of now, but more phones will be added later. AT&T supports: HTC One™ X, LG Escape™, Samsung Galaxy Exhilarate™, Samsung Galaxy S® III, Samsung Galaxy Rugby Pro™. T-Mobile supports: Samsung Galaxy S® II, Samsung Galaxy S® III, Samsung Galaxy S® Relay 4G. Verizon supports: Droid Incredible 4G LTE. Of course iPhone owners have no wallet since Apple didn't include an NFC chip. To start using Isis, you have to take your NFC-capable phone to your carrier's store to get the SIM replaced with a more sophisticated one that has a secure element configured for Isis.  The "secure element" is the cryptographic logic that secures mobile payments.  Carriers like the secure element in the SIM while non-carriers (like Google) prefer the secure element in the phone's electronics. (I'm not entirely sure if you could support both Isis and Google Wallet on the same phone.  Anybody know?) Then you can download the Isis app from Google Play and load your cards.  Most credit cards are supported, and there's a process to verify the credit cards are valid.  Then you can select from the list of participating retailers to "follow."  Selecting a retailer allows that retailer to give you offers via the app. The app is well done and easy to use.  You can select a default payment type and also switch between them easily.  When the phone is tapped on the reader, there are two exchanges of information.  The payment information is transferred, and then the Isis "SmartTap" information, which includes an optional loyalty number and digital coupons.  Of course the value of mobile wallets comes from the ease of handling all three data types (i.e. payment, loyalty, offers). There are several advertisements for Isis running now, and my favorite is below.

    Read the article

  • Polite busy-waiting with WRPAUSE on SPARC

    - by Dave
    Unbounded busy-waiting is a poor idea for user-space code, so we typically use spin-then-block strategies when, say, waiting for a lock to be released or some other event. If we're going to spin, even briefly, then we'd prefer to do so in a manner that minimizes performance degradation for other sibling logical processors ("strands") that share compute resources. We want to spin politely and refrain from impeding the progress and performance of other threads — ostensibly doing useful work and making progress — that run on the same core. On a SPARC T4, for instance, 8 strands will share a core, and that core has its own L1 cache and 2 pipelines. On x86 we have the PAUSE instruction, which, naively, can be thought of as a hardware "yield" operator that temporarily surrenders compute resources to threads on sibling strands. Of course this helps avoid intra-core performance interference. On the SPARC T2 our preferred busy-waiting idiom was "RD %CCR,%G0", which is a high-latency no-op. The T4 provides a dedicated and extremely useful WRPAUSE instruction. The processor architecture manuals are the authoritative source, but briefly, WRPAUSE writes a cycle count into the PAUSE register, which is ASR27. Barring interrupts, the processor then delays for the requested period. There's no need for the operating system to save the PAUSE register over context switches as it always resets to 0 on traps. Digressing briefly, if you use unbounded spinning then ultimately the kernel will preempt and deschedule your thread if there are other ready threads that are starving. But by using a spin-then-block strategy we can allow other ready threads to run without resorting to involuntary time-slicing, which operates on a long-ish time scale. Generally, that makes your application more responsive. In addition, by blocking voluntarily we give the operating system far more latitude regarding power management. Finally, I should note that while we have OS-level facilities like sched_yield() at our disposal, yielding almost never does what you'd want or naively expect. Returning to WRPAUSE, it's natural to ask how well it works. To help answer that question I wrote a very simple C/pthreads benchmark that launches 8 concurrent threads and binds those threads to processors 0..7. The processors are numbered geographically on the T4, so those threads will all be running on just one core. Unlike the SPARC T2, where logical CPUs 0,1,2 and 3 were assigned to the first pipeline, and CPUs 4,5,6 and 7 were assigned to the 2nd, there's no fixed mapping between CPUs and pipelines in the T4. And in some circumstances when the other 7 logical processors are idling quietly, it's possible for the remaining logical processor to leverage both pipelines. Some number T of the threads will iterate in a tight loop advancing a simple Marsaglia xor-shift pseudo-random number generator. T is a command-line argument. The main thread loops, reporting the aggregate number of PRNG steps performed collectively by those T threads in the last 10 second measurement interval. The other threads (there are 8-T of these) run in a loop, busy-waiting concurrently with the T threads. We vary T between 1 and 8 threads, and report on various busy-waiting idioms. The values in the table are the aggregate number of PRNG steps completed by the set of T threads. The unit is millions of iterations per 10 seconds. For the "PRNG step" busy-waiting mode, the busy-waiting threads execute exactly the same code as the T worker threads.
    We can easily compute the average rate of progress for individual worker threads by dividing the aggregate score by the number of worker threads T. I should note that the PRNG steps are extremely cycle-heavy and access almost no memory, so arguably this microbenchmark is not as representative of "normal" code as it could be. And for the purposes of comparison I included a row in the table that reflects a waiting policy where the waiting threads call poll(NULL,0,1000) and block in the kernel. Obviously this isn't busy-waiting, but the data is interesting for reference.

    Aggregate progress (millions of PRNG steps per 10 seconds) for T worker threads:

        Wait mechanism for 8-T threads    T=1   T=2   T=3   T=4   T=5   T=6   T=7   T=8
        Park thread in poll()            3265  3347  3348  3348  3348  3348  3348  3348
        no-op                             415   831  1243  1648  2060  2497  2930  3349
        RD %ccr,%g0 "pause"              1426  2429  2692  2862  3013  3162  3255  3349
        PRNG step                         412   829  1246  1670  2092  2510  2930  3348
        WRPause(8000)                    3244  3361  3331  3348  3349  3348  3348  3348
        WRPause(4000)                    3215  3308  3315  3322  3347  3348  3347  3348
        WRPause(1000)                    3085  3199  3224  3251  3310  3348  3348  3348
        WRPause(500)                     2917  3070  3150  3222  3270  3309  3348  3348
        WRPause(250)                     2694  2864  2949  3077  3205  3388  3348  3348
        WRPause(100)                     2155  2469  2622  2790  2911  3214  3330  3348
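
    The post is about SPARC assembly and C, but the same spin-then-block idea carries over to managed runtimes. As a hedged aside (not Dave's code), here is a small Java sketch using Thread.onSpinWait(), which was added later in Java 9 as the portable "polite pause" hint that JITs can map to PAUSE-style instructions, followed by parking; the spin budget and the flag-based event are invented for illustration.

      import java.util.concurrent.atomic.AtomicBoolean;
      import java.util.concurrent.locks.LockSupport;

      public class SpinThenBlock {
          private final AtomicBoolean ready = new AtomicBoolean(false);
          private volatile Thread waiter;

          // Called by the waiting thread: spin politely for a bounded number of
          // iterations, then fall back to blocking so sibling strands (and the
          // power manager) get the hardware back.
          public void awaitReady() {
              for (int i = 0; i < 10_000 && !ready.get(); i++) {
                  Thread.onSpinWait();           // hint: we are busy-waiting, be polite
              }
              waiter = Thread.currentThread();
              while (!ready.get()) {
                  LockSupport.park(this);        // block until unpark() or spurious wakeup
              }
          }

          // Called by the producing thread when the event occurs.
          public void signalReady() {
              ready.set(true);
              Thread t = waiter;
              if (t != null) {
                  LockSupport.unpark(t);
              }
          }
      }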

    Read the article

  • Gradle Support in NetBeans IDE 7.2

    - by Geertjan
    During Steve's NightHacking tour, Russel Winder and Steve Chin spent half an hour setting up NetBeans IDE to use Gradle, and then gave up because they couldn't find the NetBeans Gradle plugin. That need happen no more, because Attila Kelemen's NetBeans Gradle plugin is now available in the Plugin Manager in NetBeans IDE 7.2. Aside from opening Gradle-based applications, you can now also create new ones. Details and documentation: https://github.com/kelemen/netbeans-gradle-project

    Read the article

  • Beyond S&OP: Integrated Business Planning

    - by Paul Homchick
    In most corporations, planning is done at the department level — leaving disconnects and gaps across different departments. Finance sets revenue and profit goals with minimum validation from Manufacturing that the company has the resources, material, capacity, or demand to reach these goals. On the operations side, Manufacturing is developing plans to balance demand and supply but seldom knows if the resulting "plan" will meet the budgets on which the company's revenue and profit goals are based. The Sales department agrees to quotas that meet Finance's revenue goals without a complete understanding of what manufacturing can deliver. Integrated Business Planning (IBP) bridges these gaps in corporate planning systems. Integrated Business Planning integrates the financial planning provided by EPM systems with operations planning provided by Sales and Operations Planning solutions. This means that revenue goals and budgets are validated against a bottom-up operating plan, and that the operating plan is reconciled against financial goals. When detailed changes are made to the operations plan, planners can immediately see the big picture impact of the changes. IBP also addresses one of the CFO's big concerns—the reliability of the revenue forecast. Operating plans are updated daily or weekly from a precise forecast based on current market conditions. These updated plans are then made available so that financial analysts are working with data that best represents what is going to happen - not what they projected would happen based on last quarter's data. For a discussion in more depth, see my article, Improve Reliability of Financial Forecasts with Integrated Business Planning, in Supply & Demand Chain Executive magazine.

    Read the article

  • Limitations of User-Defined Customer Events (FA Type Profile)

    - by Rajesh Sharma
    CC&B automatically creates field activities when a specific Customer Event takes place. This depends on the way you have set up your Field Activity Type Profiles, the templates within, and the associated SP Condition(s) on the template. CC&B uses the service point type, its state, and the referenced customer event to determine which field activity type to generate.   Customer events available in the base product include: Cut for Non-payment (CNP), Disconnect Warning (DIWA), Reconnect for Payment (REPY), Reread (RERD), Stop Service (STOP), Start Service (STRT), Start/Stop (STSP).   Note the Field values/codes defined for each event.   CC&B comes with the flexibility to define a new set of customer events. These can be defined in the Look Up CUST_EVT_FLG. Values from the Look Up are used on the Field Activity Type Profile Template page.     So what's the use of having user-defined Customer Events? And how will the system detect such events in order to create field activity(s)?   Well, the system can only detect such events when you reference a user-defined customer event on a Severance Event Type for an event type Create Field Activities.     This way you can create additional field activities of a specific field activity type for user-defined customer events.   One of our customers adopted this feature and created a user-defined customer event CNPW - Cut for Non-payment for Water Services. This event was then linked on a Field Activity Type Profile and referenced on a Severance Event - CUT FOR NON PAY-W. The associated Severance Process was configured to trigger a reconnection process if it was cancelled (done by defining a Post Cancel Algorithm). Whenever this Severance Event was executed, a specific type of Field Activity was generated for disconnection purposes. The Field Activity type was determined by the system from the Field Activity Type Profile referenced for the SP Type, the SP's state, and the referenced user-defined customer event. All was working well until they realized that, in spite of the Severance Process getting cancelled (when a payment was made), the Post Cancel Algorithm was not executed to start a Reconnection Severance Process for the purpose of generating a reconnection field activity and reconnecting the service.   Basically, the Post Cancel algorithm (if specified on a Severance Process Template) is triggered when a Severance Process gets cancelled because a credit transaction has affected/relieved a Service Agreement's debt.   So what exactly was happening? Now we come to the actual question: what are the limitations of having user-defined customer events?   System-defined/base customer events are hard-coded across the entire system. There is an impact even if you remove any customer event entry from the Look Up. User-defined customer events are not recognized by the system anywhere else except in the severance process, as described above.   There are a few programs which have routines to first validate the completion of disconnection field activities which were raised as a result of the customer event CNP - Cut for Non-payment, in order to perform other associated actions. One such program is the Post Cancel Algorithm, referenced on a Severance Process Template, generally used to reconnect services which were disconnected by another Severance Event, specifically CNP - Cut for Non-Payment. 
    The post cancel algorithm provided by the product, SEV POST CAN, does the following (below is the algorithm's description):   This algorithm is called after a severance process has been cancelled (typically because the debt was paid and the SA is no longer eligible to be on the severance process). It checks to see if the process has a completed 'disconnect' event and, if so, starts a reconnect process using the Reconnect Severance Process Template defined in the parameter.    Note the wording about a completed 'disconnect' event. This algorithm implicitly checks for Field Activities having completed status which were generated from Severance Events as a result of the CNP - Cut for Non-payment customer event.   Now if we look back at the customer's issue, we can see that the Post Cancel algorithm was triggered but was not able to find any 'Completed' CNP - Cut for Non-payment related field activity, and hence was not able to start a reconnection severance process. This was because a field activity was generated and completed for the customer event CNPW - Cut for Non-payment of Water Services instead.   To conclude, if you introduce new customer events, you should be aware that you must not extend or simulate base customer events, the ones that are included in the base product, as they are further used to provide/validate additional business functions.

    Read the article
