Search Results

Search found 18126 results on 726 pages for 'oracle policy automation'.


  • PCI Compliance Book Suggestion

    - by Joel Weise
    I am always looking for good books on security, compliance and, of course, PCI. Here is one I think you will find very useful: "PCI Compliance, Third Edition: Understand and Implement Effective PCI Data Security Standard Compliance" by Branden Williams and Anton Chuvakin. [Fair disclosure - Branden and I work together on the Information Systems Security Association Journal's editorial board.] The primary reason I like this book is that the authors take a holistic, architectural approach to PCI compliance, and that to me is the safest and sanest way to approach PCI. Using such an architectural approach to PCI is, in my humble opinion, the underlying intent of PCI. Don't create a checklist of the PCI DSS and then map a solution to each item - that is a recipe for disaster. Instead, look at how the different components and their configurations work together in a synergistic fashion. In short, create a security architecture and governance framework (the ISO 27000 series is a good place to start) that begins with an evaluation of the requirements laid down in the PCI DSS, as well as your other applicable compliance, business and technical requirements. By developing an integrated security architecture you should be able not only to address current requirements, but also to be in a position to quickly address future ones.

    Read the article

  • Hack Fest Going Strong!

    - by Yolande Poirier
    Today was the first day of the Hack Fest at Devoxx, the Java developer conference in Belgium. The Hack Fest started with the Raspberry Pi & Leap Motion hands-on lab. Vinicius Senger introduced Java Embedded, Arduino and the Raspberry Pi. Java Champion Geert Bevin presented the Leap Motion, a controller that senses your hands and fingers; as an example, it can be used to play games by controlling the mouse. "Programmers are cooler than musicians because they can create an entire universe using all senses," explained Geert. In teams, participants started building applications using the Raspberry Pi, sensors and relays. One team tested the performance of Tomcat, Java EE and Java Embedded Suite on the Raspberry Pi. Another built a text animation using an LCD screen. Some teams are using the Leap Motion to open and close programs on the desktop, while other teams are using it as a game controller.

    Read the article

  • C++11 Tidbits: Decltype (Part 2, trailing return type)

    - by Paolo Carlini
    Following on from the last tidbit showing how the decltype operator essentially queries the type of an expression, the second part of this overview discusses how decltype can be syntactically combined with auto (itself the subject of the March 2010 tidbit). This combination can be used to specify trailing return types, also known informally as "late specified return types". Leaving aside the technical jargon, a simple example from section 8.3.5 of the C++11 standard usefully introduces this month's topic. Let's consider a template function like:

        template <class T, class U>
        ??? foo(T t, U u) { return t + u; }

    The question is: what should replace the question marks? The problem is that we are dealing with a template, thus we don't know at the outset the types of T and U. Even if they were restricted to be arithmetic builtin types, non-trivial rules in C++ relate the type of the sum to the types of T and U. In the past - in the GNU C++ runtime library too - programmers used to address these situations by way of rather ugly tricks involving __typeof__ which now, with decltype, could be rewritten as:

        template <class T, class U>
        decltype((*(T*)0) + (*(U*)0)) foo(T t, U u) { return t + u; }

    Of course the latter is guaranteed to work only for builtin arithmetic types, e.g., '0' must make sense. In short: it's a hack. On the other hand, in C++11 you can use auto:

        template <class T, class U>
        auto foo(T t, U u) -> decltype(t + u) { return t + u; }

    This is much better. It's generic and a construct fully supported by the language. Finally, let's see a real-life example directly taken from the C++11 runtime library as implemented in GCC:

        template<typename _IteratorL, typename _IteratorR>
          inline auto
          operator-(const reverse_iterator<_IteratorL>& __x,
                    const reverse_iterator<_IteratorR>& __y)
          -> decltype(__y.base() - __x.base())
          { return __y.base() - __x.base(); }

    By now it should appear completely straightforward. The availability of trailing return types in C++11 allowed fixing a real bug in the C++98 implementation of this operator (and many similar ones). In GCC, in C++98 mode, this operator is:

        template<typename _IteratorL, typename _IteratorR>
          inline typename reverse_iterator<_IteratorL>::difference_type
          operator-(const reverse_iterator<_IteratorL>& __x,
                    const reverse_iterator<_IteratorR>& __y)
          { return __y.base() - __x.base(); }

    This was guaranteed to work well with heterogeneous reverse_iterator types only if difference_type was the same for both types.

    Read the article

  • Cross-Channel Survey Report

    - by David Dorf
    The folks at Retail Touchpoints surveyed 84 retailers on the topic of cross-channel and have published the results in Completing the Cross-Channel Challenge.  Below is an overview video that summarizes the findings and cites retailer examples. One thing is clear: customers demand Commerce Anywhere, the ability to shop when, where, and the way they want.  So retailers are doing what it takes to revamp their business to meet their customers' demands.

    Read the article

  • COLLABORATE 13 Call for Papers

    - by Marc Weintraub
    Attention PeopleSoft customers! Speak at the largest user-led PeopleSoft conference of the year and attend for free! Interested in submitting a presentation for COLLABORATE 13? October 12 is the deadline to submit your abstract. The COLLABORATE 13 - Quest forum is your home for high-level education sessions around PeopleSoft. Presenting doesn't just mean giving a solo lecture: you can present with a vendor, give a demonstration (internet will be provided), facilitate a hot topic discussion or even offer best practices from an experience your company has been through. Remember, to submit an abstract now, all you need is a short description of your presentation. Think you don't have a story to tell? Think again! Check out the COLLABORATE 13 - Quest forum now to better understand what we are looking for. A selection committee of other PeopleSoft users will review all sessions and select the most relevant, customer-focused sessions possible to make COLLABORATE a great learning experience for everyone. Don't forget, one speaker from each session selected will be eligible to receive a complimentary registration to the entire event (some rules apply). Also, don't forget to include your functional counterpart. The selection committee is looking to increase the number of functional users attending and wants to help them glean the most out of the event. Thank you for your time and please let the selection committee know if you have any questions about submitting a presentation. We look forward to seeing you at COLLABORATE 13 in Denver! Quest's COLLABORATE 13 website - http://www.questdirect.org/collaborate

    Read the article

  • Four Easy Ways to Save a Rocky CRM Relationship

    - by Divya Malik
    Today, I am pleased to introduce our guest blogger Luke Christianson. Luke is an Application Sales rep based out of Minneapolis, MN. You can find him on LinkedIn and follow him on Twitter.

    In any relationship, sooner or later, the excitement fades away. The honeymoon period gives way to the old routines you had before you committed to each other, and you eventually begin doing things apart from one another. I'm not talking about a marriage... Well, I guess I am. Commitment to a CRM tool and building a deep and lasting relationship is not much different than the basics of a traditional love story. After your controlled CRM pilot program, and maybe the National Sales Meeting where you couldn't escape those three wonderful letters, CRM, you will soon find that if you haven't designed an environment that enables your reps to make more money, the relationship is doomed. If you're currently in a dysfunctional CRM relationship, here are four simple tips for re-engaging users and getting that spark back.

    1. Shadow a sales rep: Chances are you can find out exactly what is preventing your sales reps from using the application by simply watching how they go about their day. Sales reps are driven by money, not by additional administrative duties. Your system needs to be set up so that they can get the information they need quickly, make key updates easily and run their business out of one easy-to-use application.

    2. Increase your sales team's productivity by 5% automatically: Cancel the weekly forecast calls with your reps and require them to update their opportunities in CRM. Something else that I've seen work extremely well: when you do monthly or quarterly reviews, do not let your sales reps bring anything into the room with them; no spreadsheets, notebooks, or computers. Everything they need to tell you should be in CRM and fully accessible by the sales manager at any time.

    3. Tool time: Make sure the tools that you have selected meet both your short-term goals and your long-term goals. You need tools that can adapt like your business does. You probably can't wait two months for an update to a picklist value or for the addition of a simple workflow rule. Do you feel the tools that are in place can create the experience you want for your users?

    And finally, if all else fails...

    4. Keep It Simple, Stupid: Do you really need to require 15 fields to create an opportunity? Do you need to clutter the interface with different reports that don't add daily value? Most CRM systems on the market today are flexible enough that your admin could clean up most of the unnecessary interface 'noise' in a few hours. If they're not, see #3.

    Every strong relationship can be tedious at times; you'll fight and eventually make amends, you may even threaten to upgrade to a newer model... But be patient and think about what you want to achieve and you'll find a partner for life.

    Read the article

  • Revisiting the Generations

    - by Row Henson
    I was asked earlier this year to contribute an article to the IHRIM publication - Workforce Solutions Review. My topic focused on the reality of the Gen Y population 10 years after their entry into the workforce. Below is an excerpt from that article:

    It seems like yesterday that we were all talking about the entry of the Gen Y'ers into the workforce and what a radical change that would have on how we attract, retain, motivate, reward, and engage this new, younger segment of the workforce. We all heard and read that these youngsters would be more entrepreneurial than their predecessors - the Gen X'ers - who were said to be more loyal to their profession than their employer. And, we heard that these "youngsters" would certainly be far less loyal to their employers than the Baby Boomers or even earlier Traditionalists. It was also predicted that - at least for the developed parts of the world - they would be more interested in work/life balance than financial reward; they would need constant and immediate reinforcement and recognition, and we would be lucky to have them in our employment for two to three years. And, to keep them longer than that we would need to promote them often so they would be continuously learning, since their long-term (10-year) goal would be to own their own business or be an independent consultant.

    Well, it occurred to me recently that the first of the Gen Y'ers are now in their early 30s and it is time to look back on some of these predictions. Many really believed the Gen Y'ers would enter the workforce with an attitude - expect everything to be easy for them - have their employers meet their demands or move to the next employer, and I believe that we can now say that, generally, this has not been the case. Speaking from personal experience, I have mentored a number of Gen Y'ers and initially felt that with a 40-year career in Human Resources and Human Resources Technology I could share a lot with them. I found out very quickly that I was learning at least as much from them! Some of the amazing attributes I found in these under-30s were their fearlessness, the ease with which they were able to multi-task, amazing energy and great technical savvy. They were very comfortable collaborating with colleagues from both inside the company and peers outside their organization to problem-solve quickly. Most were eager to learn and willing to work hard.

    This brings me to the generation that will follow the Gen Y'ers - the Generation Z'ers - those born after 1998. We have come full circle. If we look at the Silent Generation or Traditionalists, we find a workforce that preceded the television and even very early telephones. We Baby Boomers (as I fall right squarely in this category) remember the invention of the television and telephone - but laptop computers and personal digital assistants (PDAs) were a thing of "Star Trek" and other science fiction movies and publications. Certainly, the Gen X'ers and Gen Y'ers grew up with the comfort of these devices just as we did with calculators. But, what of those under the age of 10 - how will the workplace look in 15 more years, and what type of workforce will be required to operate in the mobile, global, virtual world? I spoke to a friend recently who had her four-year-old granddaughter for a visit. She said she found her in the den in front of the TV trying to use her hand to get the screen to move! So, you see - we have come full circle.

    The under-70 Traditionalist grew up in a world without TV, and the Generation Z'er may never remember the TV we knew just a few years ago. As with every generation, we spend much time generalizing on their characteristics. The most important thing to remember is every generation - just like every individual - is different. The important thing for those of us in Human Resources to remember is that one size doesn't fit all. What motivates one employee to come to work for you, stay there and be productive is very different from what the next employee is looking for, and the organization that can provide this fluidity and flexibility will be the survivor for generations to come. And, finally, just when we think we have it figured out, a multitude of external factors such as the economy, world politics, industries, and technologies we haven't even thought about will come along and change those predictions. As I reach retirement age, I do so believing that our organizations are in good hands with the generations to follow - energetic, collaborative and capable of working hard while still understanding the need for balance at work, at home and in the community!

    Read the article

  • Chargeback and billing across public and private clouds

    - by llaszews
    Had a great conversation today regarding the need for metering, chargeback, and billing of cloud computing resources. The person I spoke with at a Fortune 1000 company increased the scope and magnitude of the issue of billing for cloud computing resources beyond what I had previously considered. I believed that doing any type of chargeback and billing for one public, private or hybrid installation was difficult. This person pointed out that the problem is even bigger in scope. The reality is many companies are using multiple public cloud vendors and have many different private cloud data centers. A customer may use AWS for some smaller public cloud applications, Salesforce.com (SaaS), Rackspace for IaaS, Savvis for colocation, and a variety of IaaS and PaaS implementations for the private cloud. How does a company get a consolidated bill for all these different cloud environments? I am not sure there is an answer right now.

    Read the article

  • The JavaFX Community Site on Java.net

    - by Tori Wieldt
    Community activity surrounding JavaFX has been steadily growing, with tweets, blog posts, and projects increasing in number. We are pleased to announce that there is now a JavaFX community site on Java.net at the following URL: javafxcommunity.com. This site is an aggregator of JavaFX information, where you can find links to JavaFX blog posts, tweets, and other resources. Gerrit Grunwald and Jim Weaver are the community leaders for this site, and they welcome your feedback on how to make the JavaFX Community site more useful to you! Learn more on Jim Weaver's Rich-Client Java Blog.

    Read the article

  • How to deal with MySQL Connector/ODBC error "Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock'"

    - by user12653020
    I am sure many users have run into a mysterious problem when perfectly working ODBC configurations started failing with errors like:

        Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock'

    The above error message might be preceded by something like [nxDc[yQ]. At the same time, odbc.ini specifies in its DSN a different SOCKET=/tmp/mysql.sock or a TCP connection SERVER=<remote_host_or_ip>. The question is: what happened that made the ODBC driver start ignoring the DSN options?

    The clue lies in the corrupted string [nxDc[yQ], which actually was [UnixODBC][MySQL] with every second symbol removed. This is a case of bad conversion from SQLCHAR to SQLWCHAR. The UnixODBC driver manager took a single-byte character string from the client application and tried to convert it into wide (multi-byte) characters for the Unicode version of the MyODBC driver. Initially the piece of the connection string was represented by 1-byte chars like:

        [S][E][R][V][E][R][=][m][y][h][o][s][t][;]

    After the bad conversion to wide chars (commonly 2-byte UTF-16) it became:

        [SE][RV][ER][=m][yh][os][t;]

    instead of:

        [S\0][E\0][R\0][V\0][E\0][R\0][=\0][m\0][y\0][h\0][o\0][s\0][t\0][;\0]

    Naturally, the MyODBC driver could not parse the bad string and tried to use the default connection type (SOCKET) with the default value (/var/lib/mysql/mysql.sock).

    Now we know what happened, but why did it happen? In most cases it happened because of using the ODBCManageDataSourcesQ4 utility or its older analog ODBCConfig. When registering ODBC drivers they put in lots of additional options, and one of these options badly affects the UnixODBC driver manager itself. The solution is simple - remove or comment out the option in the odbcinst.ini file (it is empty by default) set for the driver:

        [MySQL ODBC 5.2.6 Driver]
        Description    =
        Driver         = /home/dbs/myodbc526/lib/libmyodbc5w.so
        Driver64       = /home/dbs/myodbc526/lib/libmyodbc5w.so
        Setup          = /home/dbs/myodbc526/lib/libmyodbc5S.so
        Setup64        = /home/dbs/myodbc526/lib/libmyodbc5S.so
        UsageCount     = 1
        CPTimeout      = 0
        CPTimeToLive   = 0
        IconvEncoding  =  # <--------- remove this line
        Trace          =
        TraceFile      =
        TraceLibrary   =

    After applying this simple solution (removing the line with IconvEncoding =) everything came back to normal. Prior to removing that line I tried putting different encoding names there, but the result was not good, so I really don't know how to properly use it. Unfortunately, the UnixODBC manuals say nothing about it. Therefore, removing this option was the only way to get things done.
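
    For reference, a DSN of the kind described above might look like the following minimal odbc.ini sketch; the DSN name, host, and database here are illustrative placeholders, not taken from the post:

        [myapp_dsn]
        Driver   = MySQL ODBC 5.2.6 Driver
        # TCP connection:
        SERVER   = dbhost.example.com
        PORT     = 3306
        DATABASE = appdb
        USER     = appuser
        # or, for a local connection, use a socket instead of SERVER/PORT:
        # SOCKET = /tmp/mysql.sock

    When the [UnixODBC][MySQL] string conversion is broken as described, options like these are effectively ignored and the driver silently falls back to /var/lib/mysql/mysql.sock.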

    Read the article

  • Siebel 2012-IP Release is now GA

    - by Richard Lefebvre
    The Siebel development team is pleased to announce the general availability of the highly anticipated 2012 Siebel Innovation Pack on 12/12/2012. The journey began last year as a concept at Open World that invigorated the Siebel customer base and partners across the globe, culminating in this 2012-IP release that delivers much-valued usability enhancements on an existing release. Open UI and Siebel Mobile are the key innovations released as part of the 2012-IP on both the 8.1.1.9 and 8.2.2.2 releases. These innovations are a giant leap forward in facilitating Siebel usability while supporting multiple browsers and devices. Siebel Mobile, released as part of the IP, provides connected mobile solutions that support key horizontal Sales, Field Service, Life Sciences and Consumer Goods flows. See the Siebel Open UI Data Sheet here.

    Read the article

  • Basic is Best

    - by Eric A. Stephens
    Fellow foodies will recognize the recent movement towards "farm-to-table" restaurants. These venues attempt to simplify their menus and source ingredients as close to the source as possible. I had the opportunity to dine at such a restaurant the other evening. I was gushing about the appetizer to my server when she described the preparation for the item and then punctuated her comments with "basic is best". I reminded my fellow enterprise architect diners that there was an architecture lesson in that statement. They rolled their eyes and chuckled. But they also knew I was right. I'm reminded of Frederick Brooks' book The Mythical Man-Month and his latest, The Design of Design. The former, a must-read book, talks about complexity. But he refrains from damning all complexity. The world we live in and the enterprises we strive to transform with enterprise architecture are complicated organisms, much like the human body. But sometimes a simple solution is the best approach. Fewer applications (think: portfolio rationalization). Fewer components. Fewer lines of code. Whatever level of abstraction you are working at, less is more. I'm reminded of the enterprise architecture principle "Control Technical Diversity". At one firm I created pithy catch phrases for each principle. I named this one "Less is More". But perhaps another variation is what my server said the other night: "Basic is Best".

    Read the article

  • Data Source Connection Pool Sizing

    - by Steve Felts
    One of the most time-consuming procedures of a database application is establishing a connection. The connection pooling of the data source can be used to minimize this overhead. That argues for using the data source instead of accessing the database driver directly.

    Configuring the size of the pool in the data source is somewhere between an art and a science - this article will try to move it closer to science. From the beginning, the WLS data source has had initial capacity and maximum capacity configuration values. When the system starts up and when it shrinks, initial capacity is used. The pool can grow to maximum capacity. Customers found that they might want to set the initial capacity to 0 (more on that later) but didn't want the pool to shrink to 0. In WLS 10.3.6, we added minimum capacity to specify the lower limit to which a pool will shrink. If minimum capacity is not set, it defaults to the initial capacity for upward compatibility. We also did some work on the shrinking in release 10.3.4 to reduce thrashing; the algorithm that used to shrink to the maximum of the currently used connections or the initial capacity (basically the unused connections were all released) was changed to shrink by half of the unused connections.

    The simple approach to sizing the pool is to set the initial/minimum capacity to the maximum capacity. Doing this creates all connections at startup, avoiding creating connections on demand, and the pool is stable. However, there are a number of reasons not to take this simple approach.

    When WLS is booted, the deployment of the data source includes synchronously creating the connections. The more connections that are configured in initial capacity, the longer the boot time for WLS (there have been several projects for parallel boot in WLS but none that are available). Related to creating a lot of connections at boot time is the problem of logon storms (the database gets too much work at one time). WLS has a solution for that by setting the login delay seconds on the pool, but that also increases the boot time.

    There are a number of cases where it is desirable to set the initial capacity to 0. By doing that, the overhead of creating connections is deferred out of the boot and the database doesn't need to be available. An application may not want WLS to automatically connect to the database until it is actually needed, such as for some cold/warm failover configurations.

    There are a number of cases where minimum capacity should be less than maximum capacity. Connections are generally expensive to keep around. They cause state to be kept on both the client and the server, and the state on the backend may be heavy (for example, a process). Depending on the vendor, connection usage may cost money. If the workload is not constant, then database connections can be freed up by shrinking the pool when connections are not in use.

    When using Active GridLink, connections can be created as needed according to runtime load balancing (RLB) percentages instead of by connection load balancing (CLB) during data source deployment.

    Shrinking is an effective technique for clearing the pool when connections are not in use. In addition to the obvious reason that there are times when the workload is lighter, there are some configurations where the database and/or firewall conspire to make long-unused or too-old connections no longer viable. There are also some data source features where the connection has state and cannot be used again unless the state matches the request. Examples of this are identity-based pooling, where the connection has a particular owner, and XA affinity, where the connection is associated with a particular RAC node. At this point, WLS does not re-purpose (discard/replace) connections, and shrinking is a way to get rid of the unused existing connection and get a new one with the correct state when needed.

    So far, the discussion has focused on the relationship of initial, minimum, and maximum capacity. Computing the maximum size requires some knowledge about the application and the current number of simultaneously active users, web sessions, batch programs, or whatever access patterns are common. Applications should be written to reserve and close connections only as needed, but multiple statements, if needed, should be done in one reservation (don't get/close more often than necessary). This means that the size of the pool is likely to be significantly smaller than the number of users. If possible, you can pick a size and see how it performs under simulated or real load. There is a high-water mark statistic (ActiveConnectionsHighCount) that tracks the maximum connections concurrently used. In general, you want the size to be big enough that you never run out of connections, but no bigger. It will need to deal with spikes in usage, which is where shrinking after the spike is important. Of course, the database capacity also has a big influence on the decision, since it's important not to overload the database machine. Planning also needs to happen if you are running in a Multi Data Source or Active GridLink configuration and expect that the remaining nodes will take over the connections when one of the nodes in the cluster goes down. For XA affinity, additional headroom is also recommended.

    In summary, setting initial and maximum capacity to be the same may be simple, but there are many other factors that may be important in making the decision about sizing.
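
    To illustrate the reservation pattern described above ("don't get/close more often than necessary"), here is a minimal sketch of application code using a pooled data source looked up via JNDI; the JNDI name and SQL are illustrative placeholders, not from the article:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import javax.naming.InitialContext;
        import javax.sql.DataSource;

        public class OrderDao {
            public void processOrder(long orderId) throws Exception {
                InitialContext ctx = new InitialContext();
                // Look up the pooled data source (name is illustrative)
                DataSource ds = (DataSource) ctx.lookup("jdbc/myDataSource");
                // One reservation covers all statements in this unit of work;
                // try-with-resources returns the connection to the pool promptly
                try (Connection conn = ds.getConnection();
                     PreparedStatement query = conn.prepareStatement(
                             "SELECT status FROM orders WHERE id = ?");
                     PreparedStatement update = conn.prepareStatement(
                             "UPDATE orders SET status = 'SHIPPED' WHERE id = ?")) {
                    query.setLong(1, orderId);
                    try (ResultSet rs = query.executeQuery()) {
                        if (rs.next() && "READY".equals(rs.getString(1))) {
                            update.setLong(1, orderId);
                            update.executeUpdate();
                        }
                    }
                } // connection released back to the pool here, not physically closed
            }
        }

    Because the reservation is short and all statements share it, a pool sized well below the number of concurrent users can still serve the workload, which is exactly the sizing behavior ActiveConnectionsHighCount lets you verify.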

    Read the article

  • Multiple Java EE Agents on Single Managed Server

    - by tina.wang
    A default JEE agent is created when you create the domain; it is named OracleDIAgent.

    1. In Studio, duplicate the agent, change its name to genAgent, and change the web application context to genagent.
    2. Go to the datasources of genAgent and drop all datasources.
    3. Generate the server template. Put the jar file under odi\common\templates\wls.
    4. Deploy this template by updating the existing domain: bring up config.cmd and choose to update the existing domain.
    5. Update the domain using the template that was just generated, going through the Configuration wizard. (I did not modify or configure anything here.)
    6. The wizard will report that the deployment was successful.
    7. Bring up the admin server and ODI_server1.

    Read the article

  • Solving the context menu problem with drag and drop in trees

    - by Frank Nimphius
    The following drag-and-drop problem has been reported on OTN: an ADF Faces tree component is configured with an af:collectionDropTarget tag to handle drop events. The same tree component also has a context menu defined that is shown when users select the tree with the right mouse button. The problem - and I could reproduce this - was that the context menu stopped working after the first time the tree handled a drop event.

    The drag-and-drop use case is to associate employees from a table with a department in the tree using drag and drop. The drop handler code in the managed bean looked up the tree node that received the drop event to determine the department ID to assign to the employee. For this, code similar to the one shown below was used:

        List dropRowKey = (List) dropEvent.getDropSite();
        // if no dropsite then drop area was not a data area
        if (dropRowKey == null) {
            return DnDAction.NONE;
        }
        tree.setRowKey(dropRowKey);
        JUCtrlHierNodeBinding dropNode = (JUCtrlHierNodeBinding) tree.getRowData();

    So what happens in this code? The drop event contains the dropSite reference, which is the row key of the tree node that received the drop event. The code sets that key on the tree, and the call to getRowData() then returns the node information for the drop target (the department).

    This however causes the tree state to go out of synch with its model (the ADF tree binding), which is known to cause issues. In this use case the issue it causes is that the context menu no longer shows up. To fix the problem, the code needs to be changed to read the current row key from the tree first, then perform the drop operation, and at the end set the original (model) row key back:

        // memorize current row key
        Object currentRowKey = tree.getRowKey();
        List dropRowKey = (List) dropEvent.getDropSite();
        // if no dropsite then drop area was not a data area
        if (dropRowKey == null) {
            return DnDAction.NONE;
        }
        tree.setRowKey(dropRowKey);
        JUCtrlHierNodeBinding dropNode = (JUCtrlHierNodeBinding) tree.getRowData();
        ... do your stuff here ...
        // set current row key back
        tree.setRowKey(currentRowKey);
        AdfFacesContext.getCurrentInstance().addPartialTarget(tree);

    Note the code line that sets the row key back to its original value.

    Read the article

  • Hack Fest at Devoxx

    - by Yolande Poirier
    On November 11th and 12th, Devoxx attendees will get the chance to build a Java embedded application onsite. During the Raspberry Pi & Leap Motion hands-on labs on Monday and Tuesday mornings, you will learn about Raspberry Pi development with Java embedded using the Leap Motion and other sensors. The afternoons are hacking time on a project of your choice. You can get your inspiration from existing projects. You can also use their project source code and improve on already developed applications. The goal is for you to create something fun and innovative in only a couple of days, no matter your experience in embedded systems. We provide you with equipment like the Raspberry Pi, sensors, and the Leap Motion. Thanks to Stephan Janssen for lending us 10 Leap Motions for the Hack Fest. The Raspberry Pi and sensors are pre-configured. You will access the sensors via a web address. You can build a project alone if you want. We also give you the opportunity to brainstorm ideas with other attendees and maybe build something more complex. You will get one-on-one help from top-notch coaches. Vinicius Senger has tons of experience with Java and the Raspberry Pi. He runs Java embedded challenges and gives training year-round. Geert Bevin has contributed to many open source projects and his latest venture is with the Leap Motion. Bruno Borges's expertise is in connecting backend logic with great interfaces. Yara Senger is a Java Champion and a great Java embedded mentor. Don't miss this opportunity! This is your chance to transform your idea into a Raspberry Pi or a Leap Motion application.

    Read the article

  • Context is Everything

    - by Angus Graham
    How many times have you asked a question only to hear an answer like "Well, it depends. What exactly are you trying to do?". There are times when raw information can't tell us what we need to know without putting it in a larger context.

    Let's take a real world example. If I'm a maintenance planner trying to figure out which assets should be replaced during my next maintenance window, I'm going to go to my Asset Management System. I can get it to spit out a list of assets that have failed several times over the last year. But what are these assets connected to? Are there any safety consequences to shutting off this pipeline to do the work? Is some other work that's planned going to conflict with replacing this asset? Several of these questions can't be answered by simply spitting out a list of asset IDs. The maintenance planner will have to reference a diagram of the plant to answer several of these questions.

    This is precisely the idea behind Augmented Business Visualization. An Augmented Business Visualization (ABV) solution is one where your structured data (enterprise application data) and your unstructured data (documents, contracts, floor plans, designs, etc.) come together to allow you to make better decisions. Essentially, we're putting your business data into its context.

    AutoVue allows you to create ABV solutions by integrating your enterprise application with AutoVue's hotspot framework. Hotspots can be defined for your document. Users can click these hotspots to trigger actions in your enterprise app. Similarly, the enterprise app can highlight the hotspots in your document based on its business data, creating a visual dashboard of your business data in the context of your document.

    ABV is not new. We introduced the hotspot framework in AutoVue 20.1 with text hotspots. Any text in a PDF or 2D CAD drawing could be turned into a hotspot. In 20.2 we have enhanced this to include two new types of hotspots: 3D and regional hotspots. 3D hotspots allow you to turn 3D parts into hotspots. Hotspots can be defined based on the attributes of the part, so you can create hotspots based on part numbers, material, date of delivery, etc. Regional hotspots allow an administrator to define rectangular regions on any PDF, image, or 2D CAD drawing. This is perfect for cases where the document you're using either doesn't have text in it (a JPG or TIFF, for example) or where you want to define hotspots that don't correspond to the text in the document. There are lots of possible uses for AutoVue hotspots.
A great demonstration of how our hotspot capabilities can help add context to enterprise data in the Energy sector can be found in the following AutoVue movies: Maintenance Planning in the Energy Sector - Watch it Now Capital Construction Project Management in the Energy Sector  -  Watch it Now Commissioning and Handover Process for the Energy Sector  -  Watch it Now

    Read the article

  • P-Commerce – What The Heck Is That?

    - by Michael Hylton
    We've heard of e-commerce, m-commerce (mobile commerce), and f-commerce (Facebook commerce), but what is p-commerce? It's not truly a customer touchpoint or channel but the emphasis on personalization of the buying experience. Ask yourself: how well do you know your customer? Are you able to take what you know about them and apply it to their commerce activity with you and personalize the shopping experience? Much of this is dictated by having a complete 360-degree view of your customer: collecting data from your website, sales interactions, historical commerce purchases, call center activity, how they got to your website, etc., and applying it to their current commerce interaction. Customers expect to have a similar interaction on your website as they would in your brick-and-mortar store, displaying the products and services that they might be interested in purchasing.

    Read the article

  • Trust

    - by mprove
    I sense traffic on this blog without a present reason. Hmm. What about this, then - brief musings about trust: Each piece of software, each website, each social platform, each community-building effort is a matter of trust building. You make a social promise to continue the effort, and to care for the commitment of the users or community members. It is easy to offer more to your community. On the other hand, it is quite difficult or impossible to take something away, or to close down or end the product or community, without disappointing someone. cheers, Matthias

    Read the article

  • Facial Recognition for Retail

    - by David Dorf
    My son decided to do his science project on how the brain recognizes faces. Faces are so complicated and important that the brain has a dedicated area for just that purpose. During our research, we came across some emerging uses for facial recognition in the retail industry.

    If you believe the movies, recognizing faces as they walk by a camera is easy for computers, but that's not the reality. Huge investments are being made by the U.S. government in this area, with a focus on airport security. Now, companies like Eye See are leveraging that research for marketing purposes. They do things like track eyes while viewing newspaper ads to see which ads get more "eye time." This can help marketers make better placement and color decisions.

    But what caught my eye (that was too easy) was their new mannequins that watch shoppers. These mannequins, being tested at European retailers like Benetton, watch shoppers that walk by and identify their gender, race, and age. This helps the retailer better understand the types of customers being attracted to the outfit on the mannequin. Of course, to be most accurate, the software has pictures of the employees so they can be filtered out. Since the mannequins are closer to the shoppers and at eye level, they are more accurate than traditional in-ceiling LP cameras.

    Marketing agency RedPepper is offering retailers the ability to recognize loyalty shoppers at their doors using Facedeal. For customers that have opted into the program, when they enter the store their face is recognized and they are checked in. Then, as a reward, they are sent an offer on their smartphone.

    It won't be long before retailers begin to listen to shoppers as they walk the aisles; then keywords can be collected and aggregated to give the retailer an idea of what people are saying about their stores and products. Sentiment analysis based on what's said, or even on facial expressions, can't be far off.

    Clearly retailers need to be cautious and respect customer privacy. That's why these technologies are emerging slowly. But since the next generation of shoppers is less concerned about privacy, I expect these technologies to appear sporadically in the next five years, then go mainstream. Time will tell.

    Read the article

  • Constituent Experience Counts In Public Sector

    - by Michael Seback
    Businesses and government organizations are operating in an era of the empowered customer where service and communication channels are challenged every day. Consumers in the private sector have high expectations, from purchasing gifts online, to reading reviews on social sites, to expecting the companies they do business with to know and reward them. In the public sector, constituents also expect government organizations to provide consistent and timely service across agencies and touchpoints. Examples include requesting critical city services, applying for social assistance or reviewing insurance plans for a health insurance exchange. If an individual does not receive the services they need at the right time and place, it can create a dire situation involving housing, food or healthcare assistance. Government organizations need to deliver a fast, reliable and personalized experience to constituents.

    Look at a few recent statistics from a government-focused survey:

    How do you define good customer service? 70% improved services, 48% shortest time to provide information, 44% shortest time to resolve complaints.

    What are ways/opportunities to improve customer service? 69% increased collaboration across agencies and 41% increased customer service channels.

    Are you using data collected to make informed decisions to improve customer service efforts? 39% data collection is limited, not used to improve decision making.

    Source: Re-Imagining Customer Service in Government, 2012. Click here to see the highlights.

    Would you like to get started? Read Eight Steps to Great Constituent Experiences for Government.

    Read the article

  • Proving What You are Worth

    - by Ted Henson
    Here is a challenge for everyone. Just about everyone has been asked to provide or calculate the Return on Investment (ROI), so I will assume everyone has a method they use. The problem with stopping once you have an ROI is that those in the C-suite probably do not care about the ROI as much as Return on Equity (ROE). Shareholders are mostly concerned with their return on the money they invested. Warren Buffett looks at ROE when deciding whether to make a deal or not. This article will outline how you can add more meaning to your ROI and show how you can potentially enhance the ROE of the company.

    First I want to start with the base definitions I am using for ROI and ROE. Return on investment (ROI) and return on equity (ROE) are ways to measure management effectiveness, parts of a system of measures that also includes profit margins for profitability, price-to-earnings ratio for valuation, and various debt-to-equity ratios for financial strength. Without a set of evaluation metrics, a company's financial performance cannot be fully examined by investors. ROI and ROE calculate the rate of return on a specific investment and on the equity capital respectively, assessing how efficiently financial resources have been used. Typically, the best way to improve financial efficiency is to reduce production cost, so that will be the focus.

    Now that the challenge has been made and the terms have been defined, let's go deeper. Most research about implementation stops short at system start-up and seldom addresses post-implementation issues. However, we know implementation is a continuous improvement effort, and continued efforts after system start-up will influence the ultimate success of a system.

    Most UPK ROIs I have seen only include the cost savings in developing the training material. Some will also include savings based on reduced help desk calls. Using just those values you get a good ROI. To get an ROE you need to go a little deeper. Typically, the best way to improve financial efficiency is to reduce production cost, which is the purpose of implementing/upgrading an enterprise application. Let's assume the new system is up and running and all users have been properly trained and are comfortable using the system. You provide senior management with your ROI that justifies the original cost. What you want to do now is develop a good base value to measure the current efficiency against. Using usage tracking you can look for various patterns. For example, you may find that users that are accessing UPK assistance are processing a procedure, such as entering an order, 5 minutes faster than those that don't. You do some research and discover each minute saved in processing a transaction saves the company one dollar. That translates to the company saving five dollars on every transaction. Assuming 100,000 transactions are performed a year, and all users improve their performance, the company will be saving $500,000 a year. That $500,000 can be re-invested, used to reduce debt or paid to the shareholders.

    With continued refinement during the life cycle, you should be able to find ways to reduce cost. These are the types of numbers and productivity gains that senior management and shareholders want to see. Being able to quantify savings and increase productivity may also help when seeking a raise or promotion.

    Read the article

  • Tech Article: Tired of Null Pointer Exceptions? Use Java SE 8's Optional!

    - by Tori Wieldt
    A wise man once said you are not a real Java programmer until you've dealt with a null pointer exception. The null reference is the source of many problems because it is often used to denote the absence of a value. Java SE 8 introduces a new class called java.util.Optional that can alleviate some of these problems. In the tech article "Tired of Null Pointer Exceptions? Use Java SE 8's Optional!" Java expert Raoul-Gabriel Urma shows you how to make your code more readable and protect it against null pointer exceptions. Urma explains "The purpose of Optional is not to replace every single null reference in your codebase but rather to help design better APIs in which—just by reading the signature of a method—users can tell whether to expect an optional value. In addition, Optional forces you to actively unwrap an Optional to deal with the absence of a value; as a result, you protect your code against unintended null pointer exceptions." Learn how to go from writing painful nested null checks to writing declarative code that is composable, readable, and better protected from null pointer exceptions. Read "Tired of Null Pointer Exceptions? Use Java SE 8's Optional!"
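
    As a flavor of what the article covers, here is a minimal sketch contrasting nested null checks with an Optional chain; the domain classes are illustrative stand-ins, not necessarily the ones Urma uses:

        import java.util.Optional;

        class Usb { String getVersion() { return "3.0"; } }

        class SoundCard {
            private final Usb usb;
            SoundCard(Usb usb) { this.usb = usb; }
            // The signature itself tells callers the USB port may be absent
            Optional<Usb> getUsb() { return Optional.ofNullable(usb); }
        }

        class Computer {
            private final SoundCard soundCard;
            Computer(SoundCard soundCard) { this.soundCard = soundCard; }
            Optional<SoundCard> getSoundCard() { return Optional.ofNullable(soundCard); }
        }

        public class OptionalDemo {
            public static void main(String[] args) {
                Computer computer = new Computer(null); // no sound card installed

                // Instead of nested if (x != null) checks, chain flatMap/map
                // and supply a default with orElse
                String version = Optional.ofNullable(computer)
                        .flatMap(Computer::getSoundCard)
                        .flatMap(SoundCard::getUsb)
                        .map(Usb::getVersion)
                        .orElse("UNKNOWN");

                System.out.println(version); // prints UNKNOWN
            }
        }

    The chain unwraps each level only if a value is present, so the absence of a sound card flows through to the orElse default instead of throwing a NullPointerException.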

    Read the article

  • The Softer Side of Customer Experience

    - by Christina McKeon
    It’s election season in the U.S., and you know what that means. It means I stop by the recycling bin in my garage before entering the house with the contents of my mailbox. A couple of weeks ago, I was doing my usual direct mail purge when I came across a piece from The Container Store®. This piece would have gone straight to the recycling bin, but the title stopped me: Learn what WE STAND FOR! Under full disclaimer, I’m probably a “frequent flier” at The Container Store. One can never be too organized! Now, back to the direct mail piece. I opened it to discover that The Container Store has taken their customer experience beyond “a shopping experience that makes you smile” to giving customers more insight and transparency into how they feel about their employees, the vendors they partner with, and the communities they live in. The direct mail piece included several employees showcasing a skill, hobby or talent with their photo and a personal note that used one word to describe what these employees believe The Container Store stands for. I do not recall the last time I read through an entire piece of direct mail. But this time, I pored over all the comments and photos.  Summer, a salesperson, believes that one word is PASSION. Thomas in distribution center inventory systems chooses the word ACTION. The list goes on to include MATCHLESS, FUN, FAMILY, LOVE, and EMPOWERMENT. The Container Store is running a contest asking you to tell them what nonprofit organization you stand for. Anyone can submit their favorite nonprofit to win cash, products and services from The Container Store. Don’t forget about the softer side of customer experience. With many organizations working feverishly to transform their business into being more customer-centric, it’s easy to get caught up in processes and technology. Focusing on people and social responsibility often falls behind and becomes a lower priority. Keeping people and social responsibility at the forefront is crucial. Your customers will use your processes and technology, but they will see or hear your people and feel their passion. The latter is what they will remember most about your brand. I’m sure there are many other great examples of the softer side of customer experience. Please share your examples in the comments section.

    Read the article

  • Who keeps removing that file?

    - by mgerdts
    Over the years, I've had many times when some file gets removed and there's no obvious culprit. With DTrace, it is somewhat easy to figure out:

        #! /usr/sbin/dtrace -wqs

        syscall::unlinkat:entry
        /cleanpath(copyinstr(arg1)) == "/dev/null"/
        {
                stop();
                printf("%s[%d] stopped before removing /dev/null\n", execname, pid);
                system("ptree %d; pstack %d", pid, pid);
        }

    That script will stop the process trying to remove /dev/null before it does it. You can allow it to continue by restarting (unstopping?) the command with prun(1) or killing it with kill -9. If you want the command to continue automatically after getting the ptree and pstack output, you can add "; prun %d" and another pid argument to the system() call.
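
    Following the author's note, the auto-continue variant would change just the system() action to something like this sketch:

        system("ptree %d; pstack %d; prun %d", pid, pid, pid);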

    Read the article
