Search Results

Search found 5642 results on 226 pages for 'coding efficiency'.

Page 85 of 226

  • Test Driven Development with VXML

    - by Malcolm Anderson
    It's been 3 years since I did any coding, and I'm starting back up with Java using NetBeans and GlassFish. Right off the bat I noticed two things about Java's ease of use: the Java IDE (NetBeans) has finally caught up with Visual Studio, and JUnit has finally caught up with NUnit. NetBeans' IntelliSense-style completion exists, and I don't have to subclass everything in JUnit. Now on to the point of this very short post (a request): I'm trying to figure out how to do test-driven development with VXML and have not found anything yet. I've done my Google search, but unfortunately "TDD" in IVR land mostly turns up material on telecommunications devices for the hearing impaired. I've found a VXML simulator or two, but none of their marketing is getting my hopes up. My request: if you have done any agile engineering work with VXML, contact me; I need to pick your brain and bring some ideas back to my team. Thanks in advance.

    Read the article

  • Can a loosely typed language be considered true object oriented?

    - by user61852
    Can a loosely typed programming language like PHP really be considered object oriented? I mean, the methods don't have return types and method parameters have no declared types either. Doesn't class design require methods to have a return type? Don't method signatures have specifically typed parameters? How can OOP techniques help you code in PHP if you always have to check the types of the parameters received, because the language doesn't enforce types? Please, if I'm wrong, explain it to me. When you design things using UML and then code the classes in PHP with no return types on methods and no types on parameters, is the code really compliant with the UML design? You spend time designing the architecture of your software, and then the compiler doesn't force the programmer to follow your design while coding, letting him or her assign any object variable to any other variable with no "type mismatch" warning.

    Read the article

  • JFXtras Project: More cool features for your JavaFX app

    - by terrencebarr
    JFXtras is an open source project that provides a bunch of interesting components and pieces to make your JavaFX application even more productive, engaging, and, yes, sexy. And it saves you coding time along the way. Check out the new JFXtras Ensemble demo, which showcases in one fell swoop all the features and bits you can take advantage of. Also, bookmark Jim Weaver’s excellent blog to keep up with all things JavaFX and rich client. Cheers, – Terrence Filed under: Mobile & Embedded Tagged: JavaFX, JFXtras, Open Source

    Read the article

  • Oracle MDM at the MDM Summit in San Francisco

    - by David Butler
    Oracle is sponsoring the Product MDM track at this year’s MDM & Data Governance San Francisco Summit. Sachin Patel, Director of Product Strategy for Product Hub Applications at Oracle, will present the keynote: Product Master Data Management for Today’s Enterprise. Here’s the abstract: Today businesses struggle to boost operational efficiency and meet new product launch deadlines due to poor and cumbersome administrative processes. One of the primary reasons enterprises are unable to achieve cohesion is the existence of domain silos and fragmented product data. This adversely affects business performance, leading to, among other things, excess inventories, under-leveraged procurement spend, downstream invoicing or order errors, and lost sales opportunities. In this session, you will learn the key elements and business processes that are required for you to master an enterprise product record. Additionally, you will gain insights into how to improve the accuracy of your data and deliver reliable and consistent product information across your enterprise. This provides a high level of confidence that business managers can achieve their goals. In this session, you will understand how adopting a Master Data Management strategy for product information can help your enterprise change course towards a more profitable, competitive and successful business. Cisco Systems will join Sachin and cover their experiences, lessons learned and best practices. If you are in the Bay Area and interested in mastering your product data for the benefit of multiple applications, business processes and analytical systems, please join us at the Hyatt, Fisherman’s Wharf this Thursday, June 30th.

    Read the article

  • The illusion of Competence

    - by tony_lombardo
    Working as a contractor opened my eyes to the developer food chain. Even though I had similar experiences earlier in my career, the challenges seemed much more vivid this time through. I thought I’d share a couple of experiences with you, and the lessons that can be taken from them.

    Lesson 1: Beware of the “funnel” guy. The funnel guy is the one who wants you to funnel all thoughts, ideas and code changes through him. He may say it’s because he wants to avoid conflicts in source control, but the real reason is likely that he wants to hide your contributions. Here’s an example. When I finally got access to the code on one of my projects, I was told by the developer that I had to funnel all of my changes through him. There were 4 of us coding on the project, but only 2 of us working on the UI. The other 2 were working on a separate application that was part of the overall project. So I figured, I’ll check it into SVN, he reviews and accepts, then merges in. Not even close. I didn’t even have check-in rights to SVN; I had to email my changes to the developer so he could check those changes in.

    Lesson 2: If you point out flaws in code to someone supposedly ‘higher’ than you in the developer chain, they’re going to get defensive. My first task on this project was to review the code and familiarize myself with it. So of course, that’s what I did. And in familiarizing myself with it, I saw so many bad practices and code smells that I immediately started coming up with solutions to fix them. Of course, when I reviewed these changes with the developer (the guy who originally wrote the code), he smiled and nodded and said, we can’t make those changes now, it’s too destabilizing. I recommended we create a new branch and start working on refactoring, but branching was a new concept for this guy and he was worried we would somehow break SVN.

    How about some concrete examples? I started out by recommending we remove the NUnit dependency and tests from the application project, and create a separate unit testing project. This was met with a little bit of resistance: “How do I access the private methods?” As it turned out there weren’t really any private methods that weren’t exposed by public methods, so I quickly calmed this fear. Win 1, Loss 0.

    Next, I recommended that all of the file IO access be wrapped in using clauses, or at least properly wrapped in try/catch/finally. This recommendation was accepted... but never implemented. Win 2, Loss 1.

    The next recommendation was to refactor the command pattern implementation. The command pattern was implemented, but it wasn’t really necessary for the application. Moreover, the fact that we had 100 different command classes, each with its own specific command parameters class, made maintenance a huge hassle. The same code repeated over and over and over. This recommendation was declined; the code was too fragile and this change would destabilize it. I couldn’t disagree, though it was the commands themselves in many cases that were fragile. Win 2, Loss 2.

    The next recommendation was to aid performance (and responsiveness) of the application by using asynchronous service calls. This one was accepted. Win 2, Loss 3.

    If you’re paying any attention, you’re wondering why the async service calls were scored as a loss. Let me explain. The service call was made using the async pattern. Followed by a thread.sleep. <facepalm> Now it’s easy to be harsh on this kind of code, especially if you’re an experienced developer. But I understood how most of this happened.

    One junior guy, working as hard as he can to build his first real-world application, with little or no guidance from anyone else. He had his pattern book and theory of programming to help him, but no real-world experience. He didn’t know how difficult it would be to trace the crashes to the coding issues above, but he will one day. The part that amazed me was the management position that “this guy should be a team lead, because he’s worked so hard”. I’m all for rewarding hard work, but when you reward someone by promoting them past the point of their competence, you’re setting yourself and them up for failure. And that’s lesson 3: just because you’ve got a hard worker doesn’t mean he should be leading a development project.

    If you’re a junior guy busting your ass, keep at it. I encourage you to try new things, but most importantly to learn from your mistakes. And correct your mistakes. And if someone else looks at your code and shows you a laundry list of things that should be done differently, don’t take it personally – they’re really trying to help you. And if you’re a senior guy working with a junior guy, it’s your duty to point out the flaws in the code. Even if it does make you the bad guy. And while I’ve used “guy” above, I mean both men and women. And in some cases mutant dinosaurs.

    Read the article

  • "Are You There?".. India Tops Logistics List of Emerging Nations

    - by [email protected]
    It's just amazing how far, wide and deep modern supply chains are extending. AMR reported on 15 Apr (M. Burkett, A. Reese) in an SCM webcast that "Penetrating Emerging Markets" was the top priority for organizations, based on a recent survey. I took this as both adding new consumers to their prospect list and leveraging lower-cost labor arbitrage. (Read "3 Billion Capitalists".) Supply Chain Quarterly reports that India and Brazil received the highest rankings of the logistics markets in developing nations. India tops the list of emerging nations in an index that scores the attractiveness of logistics markets to foreign investors. Developed by the UK-based research firm Transport Intelligence, the new Emerging Market Logistics Index rated 38 developing countries on 3 factors:
    1. "Market size and growth attractiveness," which considered a country's economic output, projected growth rate, and population size.
    2. "Market compatibility," which examined how well-matched a nation was with the services offered by global logistics providers. This includes a country's security levels, market accessibility, foreign direct investment, distribution of wealth and population, and development of its service sector.
    3. "Connectedness," which rated the efficiency of customs and border controls, liner shipping connections, and transportation infrastructure.
    India claimed the top spot due to its market size and growth prospects. Brazil is second because of its economic performance, good levels of market accessibility, and improving domestic and international transport connections. Are you there? For more information see www.transportintelligence.com/articles_papers. The top 10 emerging countries: India, Brazil, Indonesia, Mexico, Russia, Turkey, United Arab Emirates, Egypt, Saudi Arabia, Malaysia. Source: Transport Intelligence, The Emerging Markets Logistics Index, March 2010.

    Read the article

  • Modified Strategy Design Pattern

    - by Samuel Walker
    I've started looking into design patterns recently, and one thing I'm coding would suit the Strategy pattern perfectly, except for one small difference. Essentially, some (but not all) of my algorithms need an extra parameter or two passed to them. So I'll either need to pass them an extra parameter when I invoke their calculate method, or store them as variables inside the ConcreteAlgorithm class and be able to update them before I call the algorithm. Is there a design pattern for this need, or how could I implement this while sticking to the Strategy pattern? I've considered passing the client object to all the algorithms, and storing the variables in there, then using that only when the particular algorithm needs it. However, I think this is both unwieldy and defeats the point of the Strategy pattern. Just to be clear, I'm implementing in Java, and so don't have the luxury of optional parameters (which would solve this nicely).
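
    (The question is about Java, but as a language-neutral sketch, here is one common way to keep the Strategy interface uniform: give each concrete strategy its extra parameters at construction time, or via setters, so the calculate call stays the same for every algorithm. This is only an illustration under that assumption; the class names are made up.)

        from abc import ABC, abstractmethod

        class Algorithm(ABC):
            """Common strategy interface: every algorithm exposes the same calculate()."""
            @abstractmethod
            def calculate(self, value):
                ...

        class FlatAlgorithm(Algorithm):
            """Needs no extra parameters."""
            def calculate(self, value):
                return value

        class TaxedAlgorithm(Algorithm):
            """Needs an extra parameter; it is supplied up front (constructor or setter),
            so the calculate() signature stays uniform across all strategies."""
            def __init__(self, tax_rate):
                self.tax_rate = tax_rate

            def calculate(self, value):
                return value * (1 + self.tax_rate)

        def price(value, algorithm):
            # The client only ever sees the uniform Strategy interface.
            return algorithm.calculate(value)

        print(price(100, FlatAlgorithm()))      # 100
        print(price(100, TaxedAlgorithm(0.2)))  # 120.0

    In Java the same idea would use constructor arguments or setters on the concrete strategy class, which also covers the "update them before I call the algorithm" variant.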

    Read the article

  • How should I log time spent on multiple tasks?

    - by xenoterracide
    In Joel's blog on evidence-based scheduling he suggests making estimates based on the smallest unit of work and logging extra work back to the original task. The problem I'm now experiencing is that I'll have to create object A, with a subtask for method A, which creates object B, and test all of the above. I create tasks for each of these, which seems to be resulting in OK-ish estimates (I need practice), but when I go to log work I find that I worked on 4 tasks at once, because I tweak method A, find a bug in the test, and refactor object B all while coding it. How should I go about logging this work? Should I say I spent, for example, 2 hours on each of the 4 tasks I worked on in the 8-hour day?

    Read the article

  • How to code a 4x shader/filter which emulates arcade CRT display behavior?

    - by Arthur Wulf White
    I want to write a shader/filter, probably in Adobe Pixel Bender, that will do the best job possible of emulating the feel of an old-school monochromatic arcade CRT screen. Much like this here: http://filthypants.blogspot.com/2012/07/customizing-cgwgs-crt-pixel-shader.html Here are some attributes I know will exist in this filter: It will take in a low-res image (160 x 120) and return a medium-res image (640 x 480). It will add scanlines. It will blur the color channels to create that color-bleeding effect. It will distort the shape of the image from a perfect rectangle into a rounder shape. The question is, could you please provide any other attributes that are beneficial to emulating an arcade CRT feel, and links and resources on coding these effects? Thanks
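
    (Not Pixel Bender, but as a rough, CPU-side illustration of the attributes listed above, here is a minimal Python/NumPy sketch: nearest-neighbour 4x upscale, horizontal colour bleed, and scanlines. The scale factor, scanline darkness and bleed radius are made-up values, and the barrel-shape distortion is left out.)

        import numpy as np

        def crt_filter(img, scale=4, scanline_dark=0.35, bleed=2):
            """Very rough CRT-style post-process for an (H, W, 3) float image in [0, 1]."""
            # 1. Nearest-neighbour upscale, e.g. 160x120 -> 640x480.
            big = np.repeat(np.repeat(img, scale, axis=0), scale, axis=1)

            # 2. Colour bleed: average each pixel with its horizontal neighbours.
            bled = big.copy()
            for k in range(1, bleed + 1):
                bled += np.roll(big, k, axis=1) + np.roll(big, -k, axis=1)
            bled /= (2 * bleed + 1)

            # 3. Scanlines: darken every other output row.
            bled[::2, :, :] *= (1.0 - scanline_dark)

            return np.clip(bled, 0.0, 1.0)

        frame = np.random.rand(120, 160, 3)     # stand-in for a low-res frame
        print(crt_filter(frame).shape)          # (480, 640, 3)

    A real shader would do the same work per pixel on the GPU and add the curvature/vignette step.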

    Read the article

  • Python RPG advice? [closed]

    - by nikita.utiu
    I have started coding a text RPG engine in Python. I have the basic concepts laid down, like game state saving, input, output, etc. I was wondering how certain scripted game mechanics (e.g. debuffs that increase damage received from a certain player or multiply damage by the number of hits received, overriding a mob's default path for certain events, etc.) are usually implemented. Some code bases or other source code would be useful (not necessarily Python). Thanks in advance.
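
    (I can't say how it is "usually" done, but one common approach is an effect/hook system: each buff or debuff is a small object with callbacks that the engine consults at fixed points, such as taking a hit or choosing a path. A minimal, made-up Python sketch of the two debuffs mentioned above:)

        class Effect:
            """Base class for buffs/debuffs: override only the hooks you need."""
            def modify_incoming_damage(self, target, source, damage):
                return damage
            def on_hit_taken(self, target, source):
                pass

        class Vulnerability(Effect):
            """Debuff: extra damage received from one specific attacker."""
            def __init__(self, attacker_name, multiplier):
                self.attacker_name, self.multiplier = attacker_name, multiplier
            def modify_incoming_damage(self, target, source, damage):
                return damage * self.multiplier if source.name == self.attacker_name else damage

        class Frenzy(Effect):
            """Debuff: damage scales with the number of hits already received."""
            def __init__(self):
                self.hits = 0
            def on_hit_taken(self, target, source):
                self.hits += 1
            def modify_incoming_damage(self, target, source, damage):
                return damage * (1 + self.hits)

        class Actor:
            def __init__(self, name, hp):
                self.name, self.hp, self.effects = name, hp, []
            def take_hit(self, source, damage):
                for e in self.effects:            # let every effect adjust the damage
                    damage = e.modify_incoming_damage(self, source, damage)
                self.hp -= damage
                for e in self.effects:            # then notify them that a hit landed
                    e.on_hit_taken(self, source)

        player, rat = Actor("player", 100), Actor("rat", 30)
        rat.effects.append(Vulnerability("player", 2.0))
        rat.take_hit(player, 5)
        print(rat.hp)                             # 20.0 (5 damage doubled by the debuff)

    Overriding a mob's default path for certain events can use the same idea: a hook that, when it fires, returns a replacement path to the movement code.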

    Read the article

  • Closed-loop Recommendation Engines: Analyst Insight report on Oracle Real-Time Decisions (RTD)

    - by Mike.Hallett(at)Oracle-BI&EPM
    In November 2011, Helena Schwenk of MWD Advisors published her analysis of Oracle Real-Time Decisions. She summarizes as follows: "In contrast to other popular approaches to implementing predictive analytics, RTD focuses on learning from each interaction and using these insights to adjust what is presented, offered or displayed to a customer. Likewise its capabilities for optimising decisions within the context of specific business goals and a report-driven framework for assessing the performance of models and decisions make it a strong contender for organisations that want to continuously improve decision making as part of a customer experience marketing, e-commerce optimisation and operational process efficiency initiative." This is an outstanding report to share with a prospect or client, as it goes into great detail about the product and its capabilities. It also highlights the differences between Oracle's Real-Time Decisions product and other closed-loop recommendation engines. I encourage you to share this report with your clients and prospects. It can be downloaded directly from here - MWD Advisors Vendor Profile: Oracle Real-Time Decisions (expires in November 2012). Highlights: "At the core of RTD lies a learning engine that combines business rules and adaptive predictive models to deliver recommendations to operational systems while simultaneously learning from experiences." "While closed-loop recommendation engines are becoming more prevalent... there are a number of features that distinguish RTD: It makes its decisions in the context of the business objectives, such as maximising customer revenue or reducing service costs. Its support for operational integration offers organisations some flexibility in how they implement the offering."

    Read the article

  • How can a large company foster excellence in its engineers?

    - by Joshiatto
    I am tasked with improving the skills (quality and speed) of the engineers in my company. Here are some ideas:
    - Pair programming
    - TDD
    - Automated check-in policies
    - Talks given by experts
    - Awards for coding excellence
    - Encourage competition among engineers to contribute to GitHub
    - Publish standards and practices docs on the intranet site
    - "Gamification" of engineering: somehow make becoming badasses into a game they will enjoy playing
    - Training
    - Showcase GitHub check-ins on screens around the office
    - Add an "engineer of the month" to the intranet home page
    How can I drive traffic to the intranet home page? What crazy futuristic idea would drive engineers to go to the page every day to see which of their peers are making more money than them (inferred via recognition), and then go off and improve their skills and productivity to see their standings improve on the home page? Or any ideas just to foster collaboration and love for their jobs so they start taking more pride in their work? Don't take my ideas as symptomatic of our org. I take full responsibility for not knowing the right way to do this.

    Read the article

  • How can I make a permanently updated copy of a file in a different place to the original file?

    - by Mark
    I use two computers: a Linux one for coding and building, and a Windows one which has the programming application to load the built program onto the hardware. Both computers have access to a network drive which I use to pass the files from Linux to Windows. My problem is that every time I build, I have to copy the files from where they are created to the network drive. How can I make some sort of file on the network drive in Ubuntu that always mirrors the file which is built in a different location, like a pointer? Thanks!
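
    (One way to picture the "always-updated copy": if the network share cannot hold a symlink that the Windows side can follow, a small polling script on the Linux box can copy the build output whenever it changes. A hedged sketch in Python; the paths are made up.)

        import os, shutil, time

        SRC = "/home/me/project/build/firmware.bin"   # made-up build output path
        DST = "/mnt/netdrive/firmware.bin"            # made-up network-drive path

        last = 0.0
        while True:
            try:
                mtime = os.path.getmtime(SRC)
                if mtime != last:                     # a new build appeared
                    shutil.copy2(SRC, DST)            # copy the file and its timestamps
                    last = mtime
            except FileNotFoundError:
                pass                                  # nothing built yet
            time.sleep(2)                             # poll every couple of seconds

    Adding a copy step to the build itself (for example a final target in the makefile) achieves the same effect without polling.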

    Read the article

  • What resources will help me understand the data model for QC 10.0 in order to write my SQL queries?

    - by srihari
    I am a fresher working with Quality Center 10.0, the HP software testing tool. As per my understanding, in order to generate reports from QC and to troubleshoot scenarios, we need to write SQL queries against the QC back-end database. In my case it is an SQL database. I downloaded the database reference help file, but I could not understand where to start; it just gives each table name and its information. For a starter like me, are there any online tutorials, helpful websites, hands-on exercises, or scenarios where I can better understand how to write queries for the QC data model? I am very confident about the SQL coding itself; what I want to know is how to query the QC database tables based on the scenarios that occur in the QC tool. Please suggest. Thanks, Srihari

    Read the article

  • PeopleSoft Mobile Expenses and Mobile Approvals now available in FSCM 9.1

    - by Howard Shaw
    Oracle is pleased to announce the release of two new applications, PeopleSoft Mobile Expenses and PeopleSoft Mobile Approvals, which are now generally available in PeopleSoft FSCM 9.1. These are the first two of many upcoming applications designed and built to cater directly to the mobile workforce by providing user-friendly access to key business functions on a smartphone or tablet. Enter and Submit Expenses Anytime, Anywhere: PeopleSoft Mobile Expenses provides the ability to enter employee expense reports quickly and easily, for busy travelers on the go. The contemporary, streamlined user interface is optimized for mobile devices (that support HTML 5), such as tablets or smartphones, and provides a simple-to-use tool for capturing expenses as they are being incurred, submitting expense reports while waiting at the airport, approving your employees’ expense reports, and more. And since it is part of the PeopleSoft Mobile Applications suite, you don’t have to wait until you return home or to the office, which can lead to improved efficiencies. The user interface and gesture actions (for example, swipe, touch, and so on) will be immediately familiar to mobile device users, and are specifically targeted at keeping the experience as streamlined as possible for just the tasks you need to get to while on the go. In addition, PeopleSoft Mobile Expenses leverages all of the powerful expense policy compliance tools delivered by PeopleSoft Expenses, contributing to reduced spend and increased efficiency throughout your organization. PeopleSoft Mobile Expenses is integrated directly with PeopleSoft Mobile Approvals, so managers can quickly approve submitted expense reports in addition to entering or reviewing their own expenses. Manage Approvals Anytime, Anywhere: PeopleSoft Mobile Approvals improves productivity and keeps business moving forward when your users are on the go, without compromising business imperatives and operational policies. This innovative solution is delivered using the latest HTML 5 technology to allow customers to manage their critical tasks anytime through any device. PeopleSoft Approvals enables your users to approve transactions through the desktop, smartphones or tablet devices. This will speed up the approval process, thus avoiding potential late-payment penalties and supporting early-payment discounts for invoices. For more information, please watch the Video Feature Overviews (VFO) available on YouTube (links below) or contact your application sales representative. PeopleSoft Mobile Expenses PeopleSoft Mobile Approvals The PeopleSoft Mobile Applications 9.1 documentation update for Bundle 23 is available under MOS Document ID 1495035.1.

    Read the article

  • Should we have a database independent SQL like query language in Django?

    - by Yugal Jindle
    Note: I know we already have the Django ORM, which keeps things database-independent and converts to the database-specific SQL queries. Once things start getting complicated, it is preferred to write raw SQL queries for better efficiency. When you write raw SQL queries, your code gets tied to the database you are using. I also understand it's important to use the full power of your database, which cannot be achieved with the Django ORM alone. My question: until I use any database-specific feature, why should one be tied to the database? For instance: we have a query with multiple joins and we decided to write a raw SQL query. Now, that makes my website Postgres-specific, even though I have not used any Postgres-specific feature. I feel there should be some generic, database-independent SQL-like language which can translate to any database's SQL query. Even Django's ORM could be built over it. That way, if you go outside the ORM but stay away from database-specific features, you can still remain database-independent. I asked the same question of Jacob Kaplan-Moss (in person): he advised me to stay with the database that I like and use its whole power, to which I agree. But my point was not that we should always be database-independent. My point is we should be database-independent until we use a database-specific feature. Please explain: should there be a "fake SQL" layer over the actual SQL?
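
    (To make the idea concrete, here is a tiny, made-up sketch of what such a layer could look like: a backend-neutral query description that is rendered into a dialect only at the last moment. This is not Django code, just an illustration of the argument; all names are invented.)

        from dataclasses import dataclass, field
        from typing import List, Optional, Tuple

        @dataclass
        class Query:
            """Backend-neutral description of a SELECT with joins."""
            table: str
            columns: List[str] = field(default_factory=lambda: ["*"])
            joins: List[Tuple[str, str]] = field(default_factory=list)  # (table, ON clause)
            limit: Optional[int] = None

        def to_sql(query, dialect):
            sql = "SELECT {} FROM {}".format(", ".join(query.columns), query.table)
            for table, on in query.joins:
                sql += " JOIN {} ON {}".format(table, on)
            if query.limit is not None:
                if dialect == "mssql":               # dialect difference hidden from the caller
                    sql = sql.replace("SELECT", "SELECT TOP {}".format(query.limit), 1)
                else:                                # postgresql, mysql, sqlite, ...
                    sql += " LIMIT {}".format(query.limit)
            return sql

        q = Query("orders o", ["o.id", "c.name"],
                  joins=[("customers c", "c.id = o.customer_id")], limit=10)
        print(to_sql(q, "postgresql"))
        print(to_sql(q, "mssql"))

    Whether maintaining such a layer is worth it compared to committing to one backend is exactly the trade-off the question is about.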

    Read the article

  • Becoming A Great Developer

    - by Lee Brandt
    I’ve been doing the whole programming thing for a while and reading and watching some of the best in the business. I have come to notice that the really great developers do a few things that (I think) make them great. Now don’t get me wrong, I am not saying that I am one of these few. I still struggle with doing some of the things that make one great at development. Coincidentally, many of these things also make you a better person, period.

    Believe That Guidance Is Better Than Answers. This is one I have no problem with. I prefer guidance any time I am learning from another developer. Answers may get you going, but on their own they will leave you stranded. At some point, you will come across a problem that can only be solved by thinking for yourself, and this is where that guidance will really come in handy. You can take that guidance and extrapolate it to whatever technology will solve that problem (if it’s the right tool for solving that problem). The problem is, lots of developers simply want someone to tell them, “Do this, then this, then set that, and write this.” Favor thinking, and seek the guidance behind doing X rather than asking someone to show you how to do X, if that makes sense.

    Read, Read and Read. If you don’t like reading, you’re probably NOT going to make it into the Great Developer group. Great developers read books, they read magazines and they read code. Open source playgrounds like SourceForge, CodePlex and GitHub have made it extremely easy to download code from developers you admire and see how they do stuff. Chances are, if you read their blog too, they’ll even explain WHY they did what they did (see “Guidance” above). MSDN and Code Magazine have not only code samples, but explanations of how to use certain technologies and sometimes even when NOT to use that same technology. Books are also out on just about every topic. I still favor the less technology-centric books. For instance, I generally don’t buy books like “Getting Started with Jiminy Jappets”. I look for titles like “How To Write More Effective Code” (again, see guidance). The Addison-Wesley Signature Series is a great example of these types of books. They teach technology-agnostic concepts. Head First Design Patterns is another great guidance book. It teaches the “Gang of Four” design patterns in a very easy-to-understand, picture-heavy way (I LIKE pictures).

    Hang Your Balls Out There. Even though the advice came from a 3rd-shift Kinko’s attendant, that doesn’t mean it’s not sound advice. Write some code and put it out for others to read, criticize and castigate you for. Understand that there are some real jerks out there who are absolute geniuses. Don’t be afraid to get some great advice wrapped in some really nasty language. Try to take what’s good about it and leave what’s not. I have a tough time with this myself. I don’t really have any code out there that is available for review (other than my demo code). It takes some guts to do, but in the end, there is no substitute for getting a community of developers to critique your code and give you ways to improve.

    Get Involved. Speaking of community, the local and online user groups and discussion forums are a great place to hear about technologies and techniques you might never come across otherwise. Mostly because you might not know to look. But once you sit down with a bunch of other developers and start discussing what you’re interested in, you may open up a whole new perspective on it.

    Don’t just go to the UG meetings and watch the presentations either; get out there and talk, socialize. I realize geeks weren’t meant to necessarily be social creatures, but if you’re amongst other geeks, it’s much easier. I’ve learned more in the last 3-4 years that I have been involved in the community than I did in my previous 8 years of coding without it. Socializing works, even if socialism doesn’t.

    Continuous Improvement. Lean proponents might call this “Kaizen”, but I call it progress. We all know, especially in the technology realm, if you’re not moving ahead, you’re falling behind. It may seem like drinking from a fire hose, but step back and pick out the technologies that speak to you. The ones that make your little heart go pitter-patter. Concentrate on those. If you’re still overloaded, pick the best of the best. Just know that if you’re not looking at the code you wrote last week, or at least last year, with some embarrassment, you’re probably stagnating. That’s about all I can say about that, ’cause I am all out of clichés to throw at it. :0)

    Write Code. Great painters paint, great writers write, and great developers write code. The most sure-fire way to improve your coding ability is to continue writing code. Don’t just write the code that your work throws at you; pick that technology you love or are curious to know more about and walk through some blog demo examples. Take the language you use every day and try to get it to do something crazy. Who knows, you might create the next Google search algorithm! All in all, being a great developer is about finding yourself in all this code. If it is just a job to you, you will probably never be one of the “Great Developers”, but you’re probably okay with that. If, on the other hand, you do aspire to greatness, get out there and GET it. No one’s going to hand it to you.

    Read the article

  • Open XML SDK 2 Released

    - by Tim Murphy
    Note: Cross-posted from Coding The Document. This post is a little late since the SDK was released about a week ago. At PSC we have been using the Open XML SDK 2 since its earliest beta. It is a very powerful tool for generating documents without using the Office DLLs. It is also the main technology that I have been working with for the last six months. I would suggest giving it a try. Stay tuned here; in the near future I will be presenting at different locations on this and other document generation technologies. Download the Open XML SDK here. del.icio.us tags: Office Open XML, Open XML SDK 2

    Read the article

  • Exchange 2013 goes RTM!

    - by marc dekeyser
    Exchange 2013 has been signed off and is now RTM! Hoozaaa!! From the Exchange team blog: Today we reached an important milestone in the development of the new Exchange. Moments ago, the Exchange engineering team signed off on the Release to Manufacturing (RTM) build. This milestone means the coding and testing phase of the project is complete and we are now focused on releasing the new Exchange via multiple distribution channels to our business customers. General availability is planned for the first quarter of 2013. We have a number of programs that provide business customers with early access so they can begin testing, piloting and adopting Exchange within their organizations:
    - We will begin rolling out new capabilities to Office 365 Enterprise customers in our next service updates, starting in November through general availability.
    - Volume Licensing customers with Software Assurance will be able to download Exchange Server 2013 through the Volume Licensing Service Center by mid-November. These products will be available on the Volume Licensing price list on December 1.
    Read more…

    Read the article

  • How to manage PHP projects?

    - by Shakti Singh
    I am a PHP developer and I want to know how to manage a project with more than one developer working on it at the same time. Are there tools to manage this? If there are, please let me know; I don't know about them. I'm looking for a tool that can show which developer is working on what task, when he will be available for another task, and who is free right now; something for managing the project as well as the utilization of your team members, and which can take care of all the phases of a project, from coding to delivery.

    Read the article

  • Fraud Detection with the SQL Server Suite Part 1

    - by Dejan Sarka
    While working on different fraud detection projects, I developed my own approach to the solution for this problem. In my PASS Summit 2013 session I am introducing this approach. I also wrote a whitepaper on the same topic, which was generously reviewed by my friend Matija Lah. In order to spread this knowledge faster, I am starting a series of blog posts which will, in the end, add up to the whole whitepaper.

    Abstract: With the massive usage of credit cards and web applications for banking and payment processing, the number of fraudulent transactions is growing rapidly and on a global scale. Several fraud detection algorithms are available within a variety of different products. In this paper, we focus on using the Microsoft SQL Server suite for this purpose. In addition, we will explain our original approach to solving the problem by introducing a continuous learning procedure. Our preferred type of service is mentoring; it allows us to perform the work and consulting together with transferring the knowledge onto the customer, thus making it possible for a customer to continue to learn independently. This paper is based on practical experience with different projects covering online banking and credit card usage.

    Introduction: Fraud is a criminal or deceptive activity with the intention of achieving financial or some other gain. Fraud can appear in multiple business areas. You can find a detailed overview of the business domains where fraud can take place in Sahin Y., & Duman E. (2011), Detecting Credit Card Fraud by Decision Trees and Support Vector Machines, Proceedings of the International MultiConference of Engineers and Computer Scientists 2011 Vol 1. Hong Kong: IMECS. Dealing with fraud includes fraud prevention and fraud detection. Fraud prevention is a proactive mechanism, which tries to disable frauds by using previous knowledge. Fraud detection is a reactive mechanism with the goal of detecting suspicious behavior when a fraudster surpasses the fraud prevention mechanism. A fraud detection mechanism checks every transaction and assigns it a weight, in terms of a probability between 0 and 1, that represents a score for evaluating whether the transaction is fraudulent or not. A fraud detection mechanism cannot detect frauds with a probability of 100%; therefore, manual transaction checking must also be available. With fraud detection, this manual part can focus on the most suspicious transactions. This way, an unchanged number of supervisors can detect significantly more frauds than could be achieved with traditional methods of selecting which transactions to check, for example with random sampling.

    There are two principal data mining techniques available, both in general data mining as well as in specific fraud detection techniques: supervised or directed, and unsupervised or undirected. Supervised techniques or data mining models use previous knowledge. Typically, existing transactions are marked with a flag denoting whether a particular transaction is fraudulent or not. Customers at some point in time do report frauds, and the transactional system should be capable of accepting such a flag. Supervised data mining algorithms try to explain the value of this flag by using different input variables. When the patterns and rules that lead to frauds are learned through the model training process, they can be used for prediction of the fraud flag on new incoming transactions.

    Unsupervised techniques analyze data without prior knowledge, without the fraud flag; they try to find transactions which do not resemble other transactions, i.e. outliers. In both cases, there should be more frauds in the data set selected for checking by using the data mining knowledge compared to selecting the data set with simpler methods; this is known as the lift of a model. Typically, we compare the lift with random sampling. The supervised methods typically give a much better lift than the unsupervised ones. However, we must use the unsupervised ones when we do not have any previous knowledge. Furthermore, unsupervised methods are useful for controlling whether the supervised models are still efficient. Accuracy of the predictions drops over time. Patterns of credit card usage, for example, change over time. In addition, fraudsters continuously learn as well. Therefore, it is important to check the efficiency of the predictive models with the undirected ones. When the difference between the lift of the supervised models and the lift of the unsupervised models drops, it is time to refine the supervised models. However, the unsupervised models can become obsolete as well. It is also important to measure the overall efficiency of both supervised and unsupervised models over time. We can compare the number of predicted frauds with the total number of frauds, which includes predicted and reported occurrences. For measuring behavior across time, specific analytical databases called data warehouses (DW) and on-line analytical processing (OLAP) systems can be employed. By controlling the supervised models with unsupervised ones, and by using an OLAP system or DW reports to control both, a continuous learning infrastructure can be established.

    There are many difficulties in developing a fraud detection system. As has already been mentioned, fraudsters continuously learn, and the patterns change. The exchange of experiences and ideas can be very limited due to privacy concerns. In addition, both data sets and results might be censored, as the companies generally do not want to publicly expose actual fraudulent behaviors. Therefore it can be quite difficult, if not impossible, to cross-evaluate the models using data from different companies and different business areas. This fact stresses the importance of continuous learning even more. Finally, the number of frauds in the total number of transactions is small; typically much less than 1% of transactions are fraudulent. Some predictive data mining algorithms do not give good results when the target state is represented with a very low frequency. Data preparation techniques like oversampling and undersampling can help overcome the shortcomings of many algorithms.

    The SQL Server suite includes all of the software required to create, deploy, and maintain a fraud detection infrastructure. The Database Engine is the relational database management system (RDBMS), which supports all activity needed for data preparation and for data warehouses. SQL Server Analysis Services (SSAS) supports OLAP and data mining (in version 2012, you need to install SSAS in multidimensional and data mining mode; this was the only mode in previous versions of SSAS, while SSAS 2012 also supports the tabular mode, which does not include data mining). Additional products from the suite can be useful as well. SQL Server Integration Services (SSIS) is a tool for developing extract–transform–load (ETL) applications. SSIS is typically used for loading a DW, and in addition, it can use SSAS data mining models for building intelligent data flows. SQL Server Reporting Services (SSRS) is useful for presenting the results in a variety of reports. Data Quality Services (DQS) eases the occasional data cleansing process by maintaining a knowledge base. Master Data Services is an application that helps companies maintain a central, authoritative source of their master data, i.e. the most important data to any organization. For an overview of the SQL Server business intelligence (BI) part of the suite that includes the Database Engine, SSAS and SSRS, please refer to Veerman E., Lachev T., & Sarka D. (2009). MCTS Self-Paced Training Kit (Exam 70-448): Microsoft® SQL Server® 2008 Business Intelligence Development and Maintenance. MS Press. For an overview of the enterprise information management (EIM) part that includes SSIS, DQS and MDS, please refer to Sarka D., Lah M., & Jerkic G. (2012). Training Kit (Exam 70-463): Implementing a Data Warehouse with Microsoft® SQL Server® 2012. O'Reilly. For details about SSAS data mining, please refer to MacLennan J., Tang Z., & Crivat B. (2009). Data Mining with Microsoft SQL Server 2008. Wiley. SQL Server Data Mining Add-ins for Office, a free download for Office versions 2007, 2010 and 2013, bring the power of data mining to Excel, enabling advanced analytics in Excel. A companion to the add-ins is PowerPivot for Excel, which is freely downloadable for Excel 2010 and already included in Excel 2013. It brings OLAP functionalities directly into Excel, making it possible for an advanced analyst to build a complete learning infrastructure using a familiar tool. This way, many more people, including employees in subsidiaries, can contribute to the learning process by examining local transactions and quickly identifying new patterns.
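
    (As a footnote to the oversampling/undersampling remark above: here is a minimal sketch, in plain Python with made-up data, of random oversampling, where the rare fraud rows are duplicated until the training set is roughly balanced. It only illustrates the data preparation idea, not any SQL Server feature.)

        import random

        def oversample(rows, label_key="is_fraud", ratio=1.0, seed=42):
            """Duplicate minority-class rows until len(minority) ~ ratio * len(majority)."""
            rng = random.Random(seed)
            fraud = [r for r in rows if r[label_key] == 1]
            legit = [r for r in rows if r[label_key] == 0]
            target = int(ratio * len(legit))
            extra = [rng.choice(fraud) for _ in range(max(0, target - len(fraud)))]
            return legit + fraud + extra

        # Made-up transactions: 2 frauds out of 1000 rows.
        rows = [{"amount": i, "is_fraud": 1 if i < 2 else 0} for i in range(1000)]
        balanced = oversample(rows)
        print(sum(r["is_fraud"] for r in balanced), "fraud rows of", len(balanced))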

    Read the article

  • eSTEP TechCast - November 2013

    - by uwes
    Dear partner, we are pleased to announce our next eSTEP TechCast on Thursday, 7th of November, and would be happy if you could join. Please see below the details for the next TechCast.
    Date and time: Thursday, 07 November 2013, 11:00 - 12:00 GMT (12:00 - 13:00 CET; 15:00 - 16:00 GST)
    Title: The Operational Management benefits of Engineered Systems
    Abstract: Oracle Engineered Systems require significantly less administration effort than traditional platforms. This presentation will explain why this is the case, how much can be saved, and discuss the best practices recommended to maximise Engineered Systems operational efficiency.
    Target audience: Tech Presales
    Speaker: Julian Lane
    Call info: Call-in toll-free number: 08006948154 (United Kingdom); Call-in toll-free number: +44-2081181001 (United Kingdom); Conference Code: 803 594 3; Security Passcode: 9876
    WebEx info (Oracle Web Conference): Meeting Number: 599 156 244; Meeting Password: tech2011
    Playback / Recording / Archive: The webcasts will be recorded and will be available shortly after the event in the eSTEP portal under the Events tab, where you can also find material from already delivered eSTEP TechCasts. Use your email address and the PIN eSTEP_2011 to get access. Feel free to have a look. We are happy to get your comments and feedback. Thanks and best regards, Partner HW Enablement EMEA

    Read the article

  • First Person Shooter game agent development

    - by LangerHansIslands
    I would like to apply (program) artificial intelligence methods to create intelligent game bots for a first-person shooter game. Do you have any pointers on where I can start developing as a Linux user? Do you have a suggestion for an easy-to-start game for which I can develop bots easily, caring more about the result of my algorithms than about spending a lot of time dealing with the game code? I've read some publications about the methods applied to Quake 3 and OpenArena, but I couldn't find the source code and manuals describing how to start coding (compiling, developing the AI, etc.). I appreciate your help.

    Read the article

  • Join us for Live Oracle VM and Oracle Linux Cloud Events in Europe

    - by Monica Kumar
    Join us for a series of live events and discover how Oracle VM and Oracle Linux offer an integrated and optimized infrastructure for quickly deploying a private cloud environment at lower cost. As one of the most widely deployed operating systems today, Oracle Linux delivers higher performance, better reliability, and stability, at a lower cost for your cloud environments. Oracle VM is an application-driven server virtualization solution fully integrated and certified with Oracle applications to deliver rapid application deployment and simplified management. With Oracle VM, you have peace of mind that the entire Oracle stack deployed is fully certified by Oracle. Register now for any of the upcoming events, and meet with Oracle experts to discuss how we can help in enabling your private cloud.
    Nov 20: Foundation for the Cloud: Oracle Linux and Oracle VM (Belgium)
    Nov 21: Oracle Linux & Oracle VM Enabling Private Cloud (Germany)
    Nov 28: Realize Substantial Savings and Increased Efficiency with Oracle Linux and Oracle VM (Luxembourg)
    Nov 29: Foundation for the Cloud: Oracle Linux and Oracle VM (Netherlands)
    Dec 5: MySQL Tech Tour, including Oracle Linux and Oracle VM (France)
    Hope to see you at one of these events!

    Read the article

  • Free Oracle Special Edition eBooks - Cloud Architecture & Enterprise Cloud

    - by Thanos
    Cloud computing can improve your business agility, lower operating costs, and speed innovation. The key to making it work is the architecture. Learn how to define your architectural requirements and get started on your path to cloud computing with the free Oracle special-edition e-book, Cloud Architecture for Dummies. Topics covered in this quick reference guide include:
    - Cloud architecture principles and guidelines
    - Scoping your project and choosing your deployment model
    - Moving toward implementation with vertically integrated engineered systems
    Learn how to architect and model your cloud implementation to drive efficiency and leverage economies of scale. For more information, visit oracle.com/cloud and our cloud services at cloud.oracle.com. Specifically, Infrastructure as a Service (IaaS) is critical to the success of many enterprises. Want to build a private cloud infrastructure and cut down IT costs? Learn more about Oracle's highly integrated infrastructure software and hardware to help you architect and deploy a cloud infrastructure that is optimized for the needs of your enterprise from day one. Download the free e-book Enterprise Cloud Infrastructure for Dummies to:
    - Realize the benefits of consolidation with the added cloud capabilities
    - Simplify deployments and reduce risks with tested and proven guidelines
    - Achieve up to 50% lower TCO than comparable multi-vendor alternatives
    Choosing the right infrastructure technologies is essential to capitalizing on the benefits of cloud computing. Oracle Optimized Solution for Enterprise Cloud Infrastructure helps identify the right hardware and software stack and provides configuration guidelines for your cloud. With this book, you come to understand Enterprise Cloud Infrastructure and find out how to jumpstart your IaaS cloud plans. You also discover Oracle Optimized Solutions and learn how integration testing and proven best practices maximize your IT investments. In addition, you see how to architect and deploy your IaaS cloud to drive down costs and improve performance, how to understand and select the right private cloud strategy for you, what the key cloud infrastructure elements are and how to use them to achieve your business goals, and more. For more information, visit oracle.com/oos.

    Read the article
