Search Results

Search found 69138 results on 2766 pages for 'oracle data mining'.


  • Brasero Burns Data, Not Time - or Piles of Discs

    Linux Insider: "There are a lot of CD/DVD burners for Linux out there, but Brasero stands out as a straightforward, easy-to-use burner that has some nice extra features but won't make you relearn a lot of complex commands if you only use it occasionally. One nicety is the option to start a burn project and finish it much later, even if you're not using a rewritable disc."

    Read the article

  • Be aware of the difference between CURRENT_DATE and SYSDATE

    - by Kevin Smith
    I was running some queries in SQL Developer against the WebCenter Content (WCC) schema that included date fields such as dInDate. I was comparing the dates against CURRENT_DATE, but I was not getting the expected results. I did some googling and didn't find a solution, but I did run across a reference to SYSDATE. I tried SYSDATE in my queries and got the expected results. I did a TO_CHAR on the two date fields and found they returned different times: CURRENT_DATE returned the time from my laptop, which was in the EDT time zone, while SYSDATE returned the time from the database server, which happened to be in the PDT time zone. I guess if both the database server and my laptop had been in the same time zone I would not have seen any problem. Here is the query I ran to display the two fields:

        select to_char(current_date,'DD-MON-YY HH:MI:SS'), to_char(sysdate,'DD-MON-YY HH:MI:SS') from dual;

    As you can see from the screenshot from SQL Developer, they definitely returned different times. I'm sure there is some command or setting you can use to prevent this problem, but for me the takeaway is to use SYSDATE in your queries when you want to do any date comparison.
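
    The underlying difference is that CURRENT_DATE follows the session time zone while SYSDATE follows the database server's operating system clock. As a minimal sketch of checking this from a client program (the connection details below are hypothetical), you can select both values together with SESSIONTIMEZONE:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class DateCompare {
            public static void main(String[] args) throws Exception {
                // Hypothetical connection details; adjust for your environment.
                try (Connection conn = DriverManager.getConnection(
                        "jdbc:oracle:thin:@//dbhost:1521/orcl", "user", "password");
                     Statement stmt = conn.createStatement();
                     ResultSet rs = stmt.executeQuery(
                         "select to_char(current_date,'DD-MON-YY HH:MI:SS'), "
                         + "to_char(sysdate,'DD-MON-YY HH:MI:SS'), "
                         + "sessiontimezone from dual")) {
                    if (rs.next()) {
                        // CURRENT_DATE follows the session time zone printed below;
                        // SYSDATE follows the database server's OS clock.
                        System.out.println("CURRENT_DATE: " + rs.getString(1));
                        System.out.println("SYSDATE:      " + rs.getString(2));
                        System.out.println("Session TZ:   " + rs.getString(3));
                    }
                }
            }
        }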

    Read the article

  • Displaying Exceptions Thrown or Caught in Managed Beans

    - by Frank Nimphius
    I just came across a sample written by Steve Muench which, somewhere deep in its implementation details, uses the following code to route exceptions to the ADF binding layer so they are handled by the ADF model error handler (which can be customized by overriding the DCErrorHandlerImpl class and configuring the custom class in the DataBindings.cpx file). To route an exception to the ADFm error handler, Steve used the following code:

        ((DCBindingContainer)BindingContext.getCurrent().getCurrentBindingsEntry()).reportException(ex);

    The same code, however, can be used in managed beans as well to enforce consistent error handling in ADF. As an example, let's assume a managed bean method hits an exception. To simulate this, let's use the following code:

        public void onToolBarButtonAction(ActionEvent actionEvent) {
            throw new JboException("Just to tease you !!!!!");
        }

    The exception shows at runtime as displayed in the following image. Assuming a try-catch block is used to intercept the exception caused by a managed bean action, you can route the error message display to the ADF model error handler. Again, let's simulate the code that would need to go into a try-catch block:

        public void onToolBarButtonAction(ActionEvent actionEvent) {
            JboException ex = new JboException("Just to tease you !!!!!");
            BindingContext bctx = BindingContext.getCurrent();
            ((DCBindingContainer)bctx.getCurrentBindingsEntry()).reportException(ex);
        }

    The error now displays as shown in the image below. As you can see, the error is now handled by the ADFm error handler, which, as mentioned before, could be a custom error handler. Using the ADF model error handling for displaying exceptions thrown in managed beans requires the current ADF Faces page to have an associated PageDef file (which is the case if the page or view contains ADF bound components). Note that to invoke methods exposed on the business service it is recommended to always work through the binding layer (method binding) so that in case of an error the ADF model error handler is automatically used.
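
    If you go one step further and customize the handler itself, the customization is a subclass registered in DataBindings.cpx. Below is a minimal sketch, assuming the oracle.adf.model.binding.DCErrorHandlerImpl base class and its reportException(DCBindingContainer, Exception) hook; the class name and the comments about what you might do in the hook are illustrative assumptions, not code from the sample:

        import oracle.adf.model.binding.DCBindingContainer;
        import oracle.adf.model.binding.DCErrorHandlerImpl;

        public class MyErrorHandler extends DCErrorHandlerImpl {

            public MyErrorHandler() {
                // 'true' keeps the default behavior of reporting exceptions
                // as they occur rather than collecting them.
                super(true);
            }

            @Override
            public void reportException(DCBindingContainer bc, Exception ex) {
                // Hook point: exceptions could be logged, translated or
                // filtered here before display. This sketch simply delegates
                // to the default handling.
                super.reportException(bc, ex);
            }
        }

    The custom class would then be referenced from the ErrorHandlerClass attribute in DataBindings.cpx (again an assumption, based on the standard ADF customization pattern).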

    Read the article

  • OM: Effective Troubleshooting Techniques to Debug Order Import

    - by ChristineS
    There is a new document available to help you debug Order Import in Order Management: Effective Troubleshooting Techniques to Debug Order Import Issues (Doc ID 1558196.1). The white paper addresses debugging from a technical perspective. This approach is meant to assist users in understanding the actual issue, as well as help them reach an early resolution of any order import issue. It will walk you through several cases, with supporting debug logs / trace files taken for each case, educating you along the way about which debug logs / trace files should be gathered, as trace files are not always needed. It will also walk you through the supporting documents so you will know what to look for in your case. Please refer to this note the next time you have an Order Import error. Or you could step through it now, so you are informed the next time you encounter an Order Import error. Happy debugging!

    Read the article

  • Partner Training on Endeca 2-Days Hands-on Fundamentals

    - by Mike.Hallett(at)Oracle-BI&EPM
    Utrecht, NL - Monday, January 28 until Tuesday, January 29: To Register Click here. Cost: €475 per person.
    Utrecht, NL - Thursday, January 31 until Friday, February 1: To Register Click here. Cost: €475 per person.
    Oracle Belgium - Wednesday, February 6 until Thursday, February 7: To Register Click here. Cost: €535 per person.
    Oracle Belgium - Thursday, February 28 until Friday, March 1: To Register Click here. Cost: €535 per person.

    The Oracle Endeca Information Discovery (OEID) fundamentals training is designed to give partners an understanding of OEID's features and how it complements the existing Oracle Business Intelligence suite. Participants will learn how to develop and implement solutions using a Data Discovery method. Training is in Dutch. This is a two-day class which starts with an introduction to Endeca within the Oracle Business Analytics proposition; the underlying architecture and technology will also be covered. The majority of this fundamentals training is a hands-on workshop in which all participants will build several Endeca dashboards based on worked-out examples. During this workshop we will also spend time on how to extract social media and other unstructured data, combined with text enrichment. This training was developed and will be given by Aotrta Business Intelligence, an Oracle Approved Education Center for OBIEE and OEID in EMEA.

    Prerequisites: You must bring a 64-bit laptop with you for the hands-on labs. Attendees should have experience and familiarity with the basic concepts of business intelligence, and be OPN Partners with Gold or above membership.

    Read the article

  • CMT Blog: Virtual IO gets better for LDoms

    - by uwes
    As we all know, virtual IO is of great use in today's IT environments, but when it comes to performance we often have to pay the price. In his blog entry Improved vDisk Performance for LDoms, Stefan Hinker explains how the new implementation of the vdisk/vds software stack in Solaris 11.1 SRU 19 (and a Solaris 10 patch shortly afterwards) significantly improves the latency and throughput of virtual disk IO.

    Read the article

  • C++11 Tidbits: access control under SFINAE conditions

    - by Paolo Carlini
    Lately I have been spending quite a bit of time on the SFINAE ("Substitution failure is not an error") features of C++, fixing and tweaking various bits of the GCC implementation. An important missing piece was the implementation of the resolution of DR 1170 which, in a nutshell, mandates that access checking is done as part of the substitution process. Consider:

        class C { typedef int type; };
        template <class T, class = typename T::type> auto f(int) -> char;
        template <class> auto f(...) -> char (&)[2];
        static_assert (sizeof(f<C>(0)) == 2, "Ouch");

    According to the resolution, the static_assert should not fire, and the snippet should compile successfully. The reason is that the first f overload must be removed from the candidate set because C::type is private to C. On the other hand, before the resolution of DR 1170, the expected behavior was for the first overload to remain in the candidate set, win over the second one, and eventually lead to an access control error (*). GCC mainline (which will become 4.8) finally implements the DR, thus benefiting the many modern programming techniques heavily exploiting SFINAE, among which is certainly the GNU C++ runtime library itself, which relies on it for the internals of <type_traits> and in several other places. Note that the resolution of the DR is active even in C++98 mode, not just in C++11 mode, because it turned out that the traditional behavior, as implemented in GCC, wasn't fully consistent in all the possible circumstances.

    (*) In practice, GCC didn't really implement this; the static_assert triggered instead.

    Read the article

  • How can I clear *old* browsing data from Google Chrome Linux, while keeping more recent data?

    - by Norman Ramsey
    I can find plenty of information on how to clear Google Chrome's recent browsing data, in various periods, as well as clearing all browsing data. But I want to clear old browsing data: say, for a start, anything over two months old. (I'm trying to save space on a crowded laptop.) Does anyone know a principled way to do this, or shall I just dive into ~/.config/google, start removing likely-looking files, and hope for the best? I run Google Chrome on Debian Linux.

    Read the article

  • New study shows supply chain cost management increased from 6.0% to 6.9%

    - by John Murphy
    A global survey of supply chain managers indicates that aggressively managing costs and creating a flexible supply chain are major factors for businesses in successfully growing market share as the economy rebounds. Results also show supply chain managers are investing in systems and developing partnerships that enable greater visibility with their supply chain partners. http://www.mhia.org/news/industry/11429/flexible-supply-chains-drive-growth-in-revenue-and-profit

    Read the article

  • Iterative and Incremental Principle Series 4: Iteration Planning (a.k.a. What should I do today?)

    - by llowitz
    Welcome back to the fourth of a five-part series on applying the Iterative and Incremental principle. During the last segment, we discussed how the Implementation Plan includes the number of iterations for a project, but not the specifics of what will occur during each iteration. Today, we will explore Iteration Planning and discuss how and when to plan your iterations.

    As mentioned yesterday, OUM prescribes initially planning your project approach at a high level by creating an Implementation Plan. As the project moves through the lifecycle, the plan is progressively refined. Specifically, the details of each iteration are planned prior to the iteration start.

    The Iteration Plan starts by identifying the iteration goal. An example of an iteration goal during the OUM Elaboration Phase may be to complete the RD.140.2 Create Requirements Specification for a specific set of requirements. Another project may determine that their iteration goal is to focus on a smaller set of requirements, but to complete both the RD.140.2 Create Requirements Specification and the AN.100.1 Prepare Analysis Specification. In an OUM project, the Iteration Plan needs to identify both the iteration goal – how far along the implementation lifecycle you plan to be – and the scope of work for the iteration. Since each iteration typically ranges from 2 weeks to 6 weeks, it is important to identify a scope of work that is achievable, yet challenging, given the iteration goal and timeframe.

    OUM provides specific guidelines and techniques to help prioritize the scope of work based on criteria such as risk, complexity, customer priority and dependency. In OUM, this prioritization helps focus early iterations on the high-risk, architecturally significant items, helping to mitigate overall project risk. Central to the prioritization is the MoSCoW (Must Have, Should Have, Could Have, and Won't Have) list. The result of the MoSCoW prioritization is an Iteration Group: a scope of work to be worked on as a group during one or more iterations.

    As I mentioned during yesterday's blog, it is pointless to plan my daily exercise in advance since several factors, including the weather, influence what exercise I perform each day. Therefore, every morning I perform Iteration Planning. My "Iteration Plan" includes the type of exercise for the day (run, bike, elliptical), whether I will exercise outside or at the gym, and how many interval sets I plan to complete. I use several factors to prioritize the type of exercise that I perform each day. Since running outside is my highest priority, I try to complete it early in the week to minimize the risk of not meeting my overall goal of doing it twice each week. Regardless of the specific exercise I select, I follow the guidelines in my Implementation Plan by applying the 6-minute interval sets. Just as in OUM, the iteration goal should be in the context of the overall Implementation Plan, and the iteration goal should move the project closer to achieving the phase milestone goals.

    Having an Implementation Plan details the strategy of what I plan to do and keeps me on track, while the Iteration Plan affords me the flexibility to juggle what I do each day based on external influences, thus maximizing my overall success. Tomorrow I'll conclude the series on applying the Iterative and Incremental approach by discussing how to manage the iteration duration and highlighting some benefits of applying this principle.

    Read the article

  • Managing Operational Risk of Financial Services Processes – part 1/2

    - by Sanjeevio
    Financial institutions view compliance as a regulatory burden that incurs a high initial capital outlay and recurring costs. By its very nature, regulation takes a prescriptive, common-for-all approach to managing financial and non-financial risk. Needless to say, mere compliance with regulation will no longer lead to sustainable differentiation. Genuine competitive advantage will stem from being able to cope with the innovation demands of the present economic environment while meeting compliance goals with regulatory mandates in a faster and more cost-efficient manner.

    Let's first take a look at the key factors that are limiting the pursuit of the above goal. Regulatory requirements are growing, driven in part by revisions to existing mandates in line with the cross-border, pan-geographic nature of financial value chains today, and more so by frequent systemic failures that have destabilized the financial markets and the global economy over the last decade. In addition to the increase in regulation, financial institutions are faced with pressures of regulatory overlap and regulatory conflict. Regulatory overlap arises primarily from two things: firstly, the blurring of boundaries between lines-of-business within complex organizational structures, and secondly, the varying requirements of jurisdictional directives across geographic boundaries; e.g. a securities firm with operations in the US and EU would be subject to different "Know-Your-Customer" (KYC) requirements under the PATRIOT Act in the US and MiFID in the EU. Another consequence and concomitant of regulatory change is regulatory conflict, which again arises primarily from two things: firstly, the diametrically opposite priorities of lines-of-business, and secondly, the tension that regulatory requirements create between shareholder interests in tighter due-diligence and customer concerns about privacy. For instance, Customer Due Diligence (CDD) under KYC requires eliciting detailed information from customers to prevent illegal activities such as money-laundering, terrorist financing or identity theft. While new customers are still likely to comply with such stringent background checks at the time of account opening, existing customers baulk at such practices as a breach of trust and privacy.

    As mentioned earlier, regulatory compliance addresses both financial and non-financial risks. Operational risk is a non-financial risk that stems from business execution and spans people, processes, systems and information. Operational risk arising from financial processes in particular transcends other sources of such risk. Let's look at the factors underpinning the operational risk of financial processes. The rapid pace of innovation and geographic expansion of financial institutions has resulted in the proliferation and ad-hoc evolution of back-office, mid-office and front-office processes. This has had two serious implications for the operational risk of financial processes:

    - Inconsistency of processes across lines-of-business, customer channels and product/service offerings. This makes it harder for the risk function to enforce a standardized risk methodology, and in turn makes breaches harder to detect.
    - The proliferation of processes, coupled with increasingly frequent change-cycles, has resulted in accidental breaches and increased vulnerability to regulatory inadequacies.

    In summary, regulatory growth (including overlap and conflict) coupled with process proliferation and inconsistency is driving process compliance complexity. In my next post I will address the implications of this process complexity for financial institutions and outline the role of BPM in lowering specific aspects of the operational risk of financial processes.

    Read the article

  • Excel: making line charts so the line goes through all data points

    - by Mike
    Hi, I've got data covering 50+ years for various products. Unfortunately not all products have data for each year. I've created a line chart to show the movement (quantity sold) of these products over the years. It works well, except where the data points are too far apart, e.g. 1965 and then 1975: for some reason there is no line between them. It's not perfect data because of the missing years, but I can live with that; I just want to see the trend, and not just sporadic dots, squares or crosses. Any help or links greatly appreciated. Mike

    Read the article

  • Virtualized data centre&ndash;Part four: The design

    - by marc dekeyser
    Welcome back to the fourth post in this series! Today we will have a look at what Microsoft recommends as a "private cloud design" and what I will make of it. Whilst my own solution is based on the reference architecture, it is quite different indeed! An important thing to know is that, whilst I am using the private cloud as a reference, I am skipping most of the steps in designing a private cloud. If that is why you are here, please read the links at the end of the article and skim through my own content. A private cloud is much more process-driven than just building a virtual infrastructure…

    The architecture of it all…
    So imagine for a minute that you have unlimited funds to build this lab of yours… You'd want redundancy on all levels and separation of each network where possible! Unfortunately we don't have that luxury and, as you saw me hinting at in the previous article, our own design will be more limited but still quite capable!

    Networking
    From the networking perspective I will not have a fully redundant network; after all, this is but a lab environment! Thanks to Server 2012 I will be able to use bonding on my NICs and use LACP to improve the performance on that part.

    Storage
    As I mentioned in the previous article, a Synology DS1218+ will be used for iSCSI provisioning. This device has 2 NICs on board which can be bonded into one 2 Gbps interface, giving me a decent throughput and making the disks the most limiting factor in the storage design.

    Domain controllers and extra infrastructure
    Server 2012 completely supports running domain controllers virtualized and has no need to actually have a reachable DC when booting… That being said, I need a remote access machine to power on the hosts (I have no need for them running 24/7) and possibly a System Center VMM 2012 box (although Server 2012 is not supported until SP1 :( ). I am undecided on whether to install those boxes separately or as virtual machines… Which amounts to… something like this pretty picture! [architecture diagram]

    Sources
    Microsoft Private Cloud Solutions Repository (en-US): http://social.technet.microsoft.com/wiki/contents/articles/12131.microsoft-private-cloud-solutions-repository-en-us.aspx
    Reference Architecture: http://social.technet.microsoft.com/wiki/contents/articles/3819.reference-architecture-for-private-cloud.aspx
    Private Cloud Reference Model: http://social.technet.microsoft.com/wiki/contents/articles/4399.private-cloud-reference-model.aspx

    Read the article

  • Project Gantt chart using ADF BC

    - by shantala.sankeshwar
    This article describes a simple example of using a project Gantt chart with ADF Business Components.

    Use Case Description
    Let us create a simple project Gantt chart using ADF Business Components and try to get the details of the selected tasks.

    Implementation steps
    A project Gantt chart is used for project management. The chart lists tasks vertically and shows the duration of each task as a bar on a horizontal time line. To create a basic project Gantt chart, we first need to define 2 tables as below:
    1) task_table with taskid, task_type, start_date & end_date
    2) subtask_table with subtaskid, subtask_type, start_date, end_date & taskid
    Now we can create Business Components for the above 2 tables. Then we will create a new jspx page - projectGantt.jspx - and drop TaskView1 as Gantt -> Project. Select all required columns under the tasks & subtasks tabs of the 'Create Project Gantt Chart' dialog. We have now created a project Gantt chart that lists tasks & their subtasks. If we need to get the details of all tasks selected by the user, we define a taskSelectionListener for the dvt:projectGantt in the jspx source page:

        taskSelectionListener="#{test.taskListener}"

        public void taskListener(TaskSelectionEvent taskSelectionEvent) {
            // This code gives all the tasks selected by the user
            System.out.println("Selected task details: " + taskSelectionEvent.getTask());
        }

    Run the above page and note that it shows all details of the task nodes; expanding these task nodes shows the corresponding subtask details. Now if the user selects 2 tasks, we can see that it prints the complete task details for the selected tasks.

    Read the article

  • What is the usage of Splay Trees in the real world?

    - by Meena
    I decided to learn about balanced search trees, so I picked 2-3-4 trees and splay trees. I'm wondering what are some examples of splay tree usage in the real world? In this Cornell recitation: http://www.cs.cornell.edu/courses/cs3110/2009fa/recitations/rec-splay.html I read that 'a good example is a network router'. But from the rest of the explanation it seems like network routers use hash tables and not splay trees, since the lookup time is then constant instead of O(log n). Thanks!

    Read the article

  • BPM in Retail Industry

    - by Sanjeev Sharma
    The following series of blog posts discusses common BPM use-cases in the Retail industry. Retail 2.0 represents the transformation in the retail industry triggered by the accelerated shift towards online and mobile technologies and social shopping paradigms. Never before has the consumer been of more importance, or should I say in greater control, especially due to the shrinking information asymmetry between merchants and consumers that has tilted the balance of power in the latter's favor. For details, click Customer Experience Management for Retail 2.0 - part 1 / 2. Below is a concept architecture for streamlining front-end, mid-office and back-end interfaces through shared processes, to achieve consistency and efficiency in managing the customer experience from order capture to order provisioning. For details, click Customer Experience Management for Retail 2.0 - part 2 / 2. ARTS Retail Reference Model (Coming Soon!)

    Read the article

  • A Generic RIDC Test Program

    - by Kevin Smith
    Many times I have found it useful to use a Java program that communicates with WebCenter Content (WCC) using RIDC for testing. I might not have access to the web GUI, or I may need to test a service running as a specific user. In the past I had created a number of "one off" programs that submitted specific services, e.g. GET_SEARCH_RESULTS, DOCINFO, etc. Recently I decided to create a generic RIDC test program that could submit any service with the desired parameters based on a configuration file. The program gets the following information from the configuration file:

    - WCC connection information (host, port)
    - User to use to run the service
    - Service to run
    - Any parameters for the service

    The program will make a connection to the WCC server, send the service request, and print the results of the service call using the getResponseAsString() method. Here is a sample configuration file:

        ridc.host=localhost
        ridc.port=4444
        ridc.user=sysadmin
        ridc.idcservice=GET_SEARCH_RESULTS
        idcservice.QueryText=dDocType <matches> `Document`
        idcservice.SortField=dDocName
        idcservice.SortDesc=ASC

    There is a readme file included in the zip with instructions for how to configure and run the program. The program takes one command line argument, the configuration file name. The configuration file name is optional and defaults to config.properties. If you have any suggestions for improvements let me know. Right now it only submits a single service call each time you run it. One enhancement I have already thought about would be to allow you to specify multiple services to run in the configuration file. You can do that with the current program by having multiple configuration files and running the program multiple times, each with a different configuration file. You can download the program here.
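
    For readers who want to see the shape of such a program, here is a minimal sketch of the core service call, assuming the standard RIDC client API (oracle.stellent.ridc). The connection details and parameters are hard-coded below where the program described above reads them from its configuration file:

        import oracle.stellent.ridc.IdcClient;
        import oracle.stellent.ridc.IdcClientManager;
        import oracle.stellent.ridc.IdcContext;
        import oracle.stellent.ridc.model.DataBinder;
        import oracle.stellent.ridc.protocol.ServiceResponse;

        public class RidcTest {
            public static void main(String[] args) throws Exception {
                IdcClientManager manager = new IdcClientManager();
                // idc:// connects over the intradoc socket port
                // (ridc.host / ridc.port in the configuration file).
                IdcClient client = manager.createClient("idc://localhost:4444");
                IdcContext context = new IdcContext("sysadmin");

                // Build the service request from the configured service
                // name and its parameters.
                DataBinder binder = client.createBinder();
                binder.putLocal("IdcService", "GET_SEARCH_RESULTS");
                binder.putLocal("QueryText", "dDocType <matches> `Document`");
                binder.putLocal("SortField", "dDocName");
                binder.putLocal("SortDesc", "ASC");

                ServiceResponse response = client.sendRequest(context, binder);
                try {
                    // Print the raw response, as the described program does.
                    System.out.println(response.getResponseAsString());
                } finally {
                    response.close();
                }
            }
        }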

    Read the article

  • Pre-Loading of Tables in 11g

    - by Ulrike Schwinn (DBA Community)
    Loading tables and indexes into the cache, so that as little I/O as possible is performed, is a frequently encountered requirement. This technique is also called pre-loading or pre-caching of database objects, and it is very simple to carry out: right at the start, special SQL statements such as SELECT statements with a full table scan or an index scan are executed, so that the corresponding objects are loaded completely into the cache. This aspect is particularly interesting in connection with setting up test environments. If, for example, no warm-up is possible, you can pre-load specific tables and indexes into the buffer cache with this technique before the actual test is run. The following article shows how to load a table into the buffer cache in 11g and gives tips for doing so.
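
    As a small illustration of the technique described above (a sketch only; the connection details and the table name big_table are hypothetical), the warm-up amounts to forcing one full scan before the actual test run. Note that 11g may choose direct path reads for large serial full scans, which bypass the buffer cache, so it is worth verifying afterwards (for example via V$BH) that the blocks really were cached:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class PreLoad {
            public static void main(String[] args) throws Exception {
                // Hypothetical connection details; adjust for your environment.
                try (Connection conn = DriverManager.getConnection(
                        "jdbc:oracle:thin:@//dbhost:1521/orcl", "user", "password");
                     Statement stmt = conn.createStatement();
                     // The FULL hint asks the optimizer for a full table scan,
                     // reading the table end to end so its blocks land in the cache.
                     ResultSet rs = stmt.executeQuery(
                         "select /*+ FULL(t) */ count(*) from big_table t")) {
                    rs.next();
                    System.out.println("Pre-loaded, row count: " + rs.getLong(1));
                }
            }
        }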

    Read the article

  • Relationships in a Chen ERD

    - by Nibroc A Rehpotsirhc
    I am working on a Chen ERD to model our organization's merchandise. Our central entity is a Style. We have supplemental entities of Color and Season. I am defining our assortment as the relationship between these three entities; this relationship itself will have attributes and is defined by the three entities which participate in the mandatory relationship. The rules are: many Styles can be offered in a Season, and a Style can be offered in many Seasons; within a Season, a Style can be offered in many Colors. I then have 2 other entities, one of which I believe is a weak entity, Climate, and the other may be weak, but I am not sure, this being Transaction Channel. I am thinking of these as relationships off of a relationship? Meaning, for a given Style/Color combination offered in a Season, it can be available through 1 or more Transaction Channels. Additionally, within a Season, a given Style/Color combination can be intended for 1 or more Climates. Is it valid to have relationships off of relationships? Or does this requirement dictate that I should think of this Style/Color/Season relationship as an entity itself, and define the relationships to Climate and Transaction Channel off of this entity?

    Read the article

  • Gamification = -10#/3mo

    - by erikanollwebb
    One of the purposes of gamification of anything is to see if you can modify the behavior of the user. In the enterprise, that might mean getting sales people to enter more information into a CRM system, encouraging employees to update their HR records, motivating people to participate in forums and discussions, or getting invoices processed more quickly. Wikipedia defines behavior modification as "the traditional term for the use of empirically demonstrated behavior change techniques to increase or decrease the frequency of behaviors, such as altering an individual's behaviors and reactions to stimuli through positive and negative reinforcement of adaptive behavior and/or the reduction of behavior through its extinction, punishment and/or satiation." Gamification is just a way to modify someone's behavior using game mechanics. And the magic question is always whether it works. So I thought I would present my own little experiment from the last few months.

    This spring, I upgraded to a Samsung Galaxy 4. It's a pretty sweet phone in many ways, but one of the little extras I discovered was a built-in app called S Health. S Health is an app that you can use to track calories, weight and exercise, and it has a built-in pedometer. I looked at it when I got the phone, but assumed you had to turn it on to use it, so I didn't look at it much. But sometime in July, I realized that in fact it just ran in the background and was quietly tracking my steps, with a goal of 10,000 per day. 10,000 steps per day is the magic number recommended by the Surgeon General and the American Heart Association. Dr. Oz pushes it as the goal for daily exercise. It's about 5 miles of walking.

    I'm generally not the kind of person who always has my phone with me. I leave it in my purse and pull it out when I need it. But then I realized that meant I wasn't getting a good measure of my steps. I decided to do a little experiment and carry it with me as much as possible for a week. That's when I discovered the gamification that changed my life over the last 3 months. When I hit 10,000 steps, the app jingled out a little "success!" tune and I got a badge. I was hooked. I started carrying my phone. I started making sure I had shoes I could walk in with me. I started walking at lunch time, because I realized how often I sat at my desk for 8-10 hours every day without moving. I started pestering my husband to walk with me after work because I hadn't hit my 10,000 yet, leading him at one point to say "I'm not as much a slave to that badge as you are!" I started looking at parking lots differently. Can't get a space up close? No worries, just that many more steps toward my 10,000. I even tried to see if there was a second power-user level at 15,000 or 20,000 (sadly, no). If I was close at the end of the day, I have done laps around my house until I got my badge. I have walked around the block one more time to get my badge. I have mentally chastised myself when I forgot to put my phone in my pocket, because I don't know how many steps I got. The badge below I got when my boss and I were in New York City and we walked around the block of our hotel just to watch the badge pop up.

    There are a bunch of tools out on the market now that have similar ideas for helping you track your exercise and make it social. There are apps (my favorite is still Zombies, Run!). You could buy a FitBit or UP by Jawbone. Interactive Fitness makes the Expresso stationary bike with built-in video games. All are designed to help you be more aware of your activity and keep you engaged and motivated, and the idea is to help you change your behavior. I know someone who would spend extra time and work hard on the Expresso because he had built up strategies for how to kill the most dragons while he was riding to get more points. When the machine broke down, he didn't ride a different bike because it just wasn't that interesting.

    But for me, just the simple jingle and badge have been all I needed. I admit, I still giggle gleefully when I hear the tune sing out from my pocket. After a few weeks, I noticed I had dropped a few pounds. Not a lot, just 2-3. But then I was really hooked. I started making a point both to eat a little less and to hit 10,000 steps as much as I could. I bemoaned that during the floods in Boulder I wasn't hitting my 10,000 steps. And now, a few months later, I'm almost 10 lbs lighter. All for 1 badge a day.

    So yes, simple gamification can increase motivation and engagement. And that can lead to changes in behavior. Now the job is to apply that to the enterprise space in a meaningful and engaging way.

    Read the article

  • Calculate minimum ext3 partition size for a certain amount of data

    - by Daniel Beck
    The following ext3 partitions contain identical data. As we can see, the larger the partition, the more space is required for the same files:

        Filesystem   1K-blocks  Used    Available  Use%  Mounted on
        /dev/loop11  3965777    561064  3199964    15%   [...]
        /dev/loop19  573029     543843  29186      95%   [...]

        Filesystem   Size  Used  Avail  Use%  Mounted on
        /dev/loop11  3.8G  548M  3.1G   15%   [...]
        /dev/loop19  560M  532M  29M    95%   [...]

        Filesystem   Inodes   IUsed  IFree    IUse%  Mounted on
        /dev/loop11  1024000  1656   1022344  1%     [...]
        /dev/loop19  1024000  1656   1022344  1%     [...]

    I start with a partition of fixed size that possibly wastes a lot of space, and I want to create a partition that is able to hold that data but with (almost) minimal size. How can I reliably calculate the minimal partition size needed for storing a certain amount of data? The amount of data changes over time, and I need to automate these calculations.
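
    One way to approach this (a rough heuristic sketch, not a definitive answer: every overhead figure below is an assumption based on common ext3 defaults and should be checked against the actual mke2fs options in use) is to start from the payload size and add the main fixed costs: the inode table, the journal, and the reserved-blocks percentage.

        public class Ext3Estimate {
            // Estimate the minimum ext3 partition size in KB for a payload.
            // All constants are assumed ext3 defaults -- verify locally.
            public static long estimateKb(long dataKb, long inodeCount) {
                long inodeSizeBytes = 128;       // default ext3 inode size
                long inodeTableKb = inodeCount * inodeSizeBytes / 1024;
                long journalKb = 32 * 1024;      // typical default journal size
                double metadataFactor = 0.02;    // bitmaps + group descriptors (rough)
                double reservedFraction = 0.05;  // default 5% root reserve
                double raw = (dataKb + inodeTableKb + journalKb) * (1 + metadataFactor);
                // The reserve comes off the top, so scale up to compensate.
                return (long) Math.ceil(raw / (1 - reservedFraction));
            }

            public static void main(String[] args) {
                // Figures from the question: ~531 MB of payload, 1,024,000 inodes.
                System.out.println(estimateKb(543843, 1024000) + " KB");
            }
        }

    Shrinking the inode count (mke2fs -N) and the journal size would reduce the dominant overhead terms in this estimate, which is worth considering when the payload is made of few, large files.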

    Read the article

  • DOAG Conference 2011: Seven Flavors of Database Upgrades

    - by Mike Dietrich
    Thanks to everybody who attended my DOAG Conference session "Seven Flavors of Database Upgrades" (or in German: "7 Wege zum Datenbank-Upgrade - Geschichten, die das Leben schrieb") in Nürnberg this year. And thanks for your patience in staying with me into overtime as well. In case you'd like to download the slides I presented at the session, please download them via this link or from the download section to your right.

    Read the article

  • One Does Like To Code: DevoxxUK

    - by Tori Wieldt
    What's happening at Devoxx UK? I'll be talking to rock star speakers, community leaders, authors, JSR leads and more. This video is a short introduction.

    Check out these great sessions on Thursday, June 12:
    - Perchance to Stream with Java 8, by Paul Sandoz. 13:40 - 14:30 | Room 1
    - Making the Internet-of-Things a Reality with Embedded Java, by Simon Ritter. 11:50 - 12:40 | Room 4
    - Java SE 8 Lambdas and Streams Lab, by Simon Ritter. 17:00 - 20:00 | Room Mezzanine
    - Safety Not Guaranteed: sun.misc.Unsafe and the Quest for Safe Alternatives, by Paul Sandoz. 18:45 - 19:45 | Room 3
    - Join the Java Evolution, with Heather VanCura and Patrick Curran. 19:45 - 20:45 | Room 2
    - Glassfish is Here to Stay, with David Delabassee and Antonio Goncalves. 19:45 - 20:45 | Room Expo

    Here is the full line-up of sessions. Devoxx UK includes a Hackergarten, where devs can work on an Open Source project of their choice. The Adopt OpenJDK and Adopt a JSR program folks will be there to help attendees contribute back to Java SE and Java EE itself!

    Saturday includes a special Devoxx4Kids event in conjunction with the London Java Community. It's designed to teach 10-16 year-olds simple programming concepts, robotics, electronics, and games making. Workshops include LEGO Mindstorms (robotic engineering), Greenfoot (programming), Arduino (electronics), Scratch (games making), Minecraft Modding (game hacking) and NAO (robotic programming). There is a small fee, and you must register.

    If you can't attend Devoxx UK in person, stay tuned to the YouTube/Java channel. I'll be doing plenty of interviews so you can join the fun from around the world.

    Read the article
