Search Results

Search found 7077 results on 284 pages for 'concurrent processing'.

Page 169/284 | < Previous Page | 165 166 167 168 169 170 171 172 173 174 175 176  | Next Page >

  • Fraud Detection with the SQL Server Suite Part 1

    - by Dejan Sarka
    While working on different fraud detection projects, I developed my own approach to the solution for this problem. In my PASS Summit 2013 session I am introducing this approach. I also wrote a whitepaper on the same topic, which was generously reviewed by my friend Matija Lah. In order to spread this knowledge faster, I am starting a series of blog posts which will, in the end, make up the whole whitepaper. Abstract With the massive usage of credit cards and web applications for banking and payment processing, the number of fraudulent transactions is growing rapidly and on a global scale. Several fraud detection algorithms are available within a variety of different products. In this paper, we focus on using the Microsoft SQL Server suite for this purpose. In addition, we will explain our original approach to solving the problem by introducing a continuous learning procedure. Our preferred type of service is mentoring; it allows us to perform the work and consulting while transferring the knowledge to the customer, thus making it possible for the customer to continue to learn independently. This paper is based on practical experience with different projects covering online banking and credit card usage. Introduction Fraud is a criminal or deceptive activity with the intention of achieving financial or some other gain. Fraud can appear in multiple business areas. You can find a detailed overview of the business domains where fraud can take place in Sahin Y., & Duman E. (2011), Detecting Credit Card Fraud by Decision Trees and Support Vector Machines, Proceedings of the International MultiConference of Engineers and Computer Scientists 2011 Vol 1. Hong Kong: IMECS. Dealing with fraud includes fraud prevention and fraud detection. Fraud prevention is a proactive mechanism, which tries to prevent fraud by using previous knowledge. Fraud detection is a reactive mechanism with the goal of detecting suspicious behavior when a fraudster surpasses the fraud prevention mechanism. A fraud detection mechanism checks every transaction and assigns it a probability between 0 and 1 that serves as a score for evaluating whether the transaction is fraudulent or not. A fraud detection mechanism cannot detect fraud with 100% certainty; therefore, manual transaction checking must also be available. With fraud detection, this manual part can focus on the most suspicious transactions. This way, an unchanged number of supervisors can detect significantly more frauds than could be achieved with traditional methods of selecting which transactions to check, for example with random sampling. There are two principal data mining techniques, available both in general data mining and in specific fraud detection techniques: supervised (or directed) and unsupervised (or undirected). Supervised techniques, or data mining models, use previous knowledge. Typically, existing transactions are marked with a flag denoting whether a particular transaction is fraudulent or not. Customers at some point in time do report frauds, and the transactional system should be capable of accepting such a flag. Supervised data mining algorithms try to explain the value of this flag by using different input variables. When the patterns and rules that lead to frauds are learned through the model training process, they can be used for predicting the fraud flag on new incoming transactions. 
Unsupervised techniques analyze data without prior knowledge, without the fraud flag; they try to find transactions which do not resemble other transactions, i.e. outliers. In both cases, there should be more frauds in the data set selected for checking by using the data mining knowledge than in a data set selected with simpler methods; this is known as the lift of a model. Typically, we compare the lift with random sampling. The supervised methods typically give a much better lift than the unsupervised ones. However, we must use the unsupervised ones when we do not have any previous knowledge. Furthermore, unsupervised methods are useful for controlling whether the supervised models are still efficient. Accuracy of the predictions drops over time. Patterns of credit card usage, for example, change over time. In addition, fraudsters continuously learn as well. Therefore, it is important to check the efficiency of the predictive models with the undirected ones. When the difference between the lift of the supervised models and the lift of the unsupervised models drops, it is time to refine the supervised models. However, the unsupervised models can become obsolete as well. It is also important to measure the overall efficiency of both supervised and unsupervised models over time. We can compare the number of predicted frauds with the total number of frauds, which includes both predicted and reported occurrences. For measuring behavior across time, specific analytical databases called data warehouses (DW) and on-line analytical processing (OLAP) systems can be employed. By controlling the supervised models with unsupervised ones and by using an OLAP system or DW reports to control both, a continuous learning infrastructure can be established. There are many difficulties in developing a fraud detection system. As has already been mentioned, fraudsters continuously learn, and the patterns change. The exchange of experiences and ideas can be very limited due to privacy concerns. In addition, both data sets and results might be censored, as the companies generally do not want to publicly expose actual fraudulent behaviors. Therefore, it can be quite difficult, if not impossible, to cross-evaluate the models using data from different companies and different business areas. This fact stresses the importance of continuous learning even more. Finally, the number of frauds among the total number of transactions is small; typically, much less than 1% of transactions are fraudulent. Some predictive data mining algorithms do not give good results when the target state is represented with a very low frequency. Data preparation techniques like oversampling and undersampling can help overcome the shortcomings of many algorithms. The SQL Server suite includes all of the software required to create, deploy, and maintain a fraud detection infrastructure. The Database Engine is the relational database management system (RDBMS), which supports all activity needed for data preparation and for data warehouses. SQL Server Analysis Services (SSAS) supports OLAP and data mining (in version 2012, you need to install SSAS in multidimensional and data mining mode; this was the only mode in previous versions of SSAS, while SSAS 2012 also supports the tabular mode, which does not include data mining). Additional products from the suite can be useful as well. SQL Server Integration Services (SSIS) is a tool for developing extract-transform-load (ETL) applications. 
SSIS is typically used for loading a DW, and in addition, it can use SSAS data mining models for building intelligent data flows. SQL Server Reporting Services (SSRS) is useful for presenting the results in a variety of reports. Data Quality Services (DQS) eases the occasional data cleansing process by maintaining a knowledge base. Master Data Services is an application that helps companies maintain a central, authoritative source of their master data, i.e. the data most important to any organization. For an overview of the SQL Server business intelligence (BI) part of the suite that includes Database Engine, SSAS and SSRS, please refer to Veerman E., Lachev T., & Sarka D. (2009). MCTS Self-Paced Training Kit (Exam 70-448): Microsoft® SQL Server® 2008 Business Intelligence Development and Maintenance. MS Press. For an overview of the enterprise information management (EIM) part that includes SSIS, DQS and MDS, please refer to Sarka D., Lah M., & Jerkic G. (2012). Training Kit (Exam 70-463): Implementing a Data Warehouse with Microsoft® SQL Server® 2012. O'Reilly. For details about SSAS data mining, please refer to MacLennan J., Tang Z., & Crivat B. (2009). Data Mining with Microsoft SQL Server 2008. Wiley. SQL Server Data Mining Add-ins for Office, a free download for Office versions 2007, 2010 and 2013, bring the power of data mining to Excel, enabling advanced analytics in Excel. Together with PowerPivot for Excel, which is also a free download for Excel 2010 and is already included in Excel 2013, they bring OLAP functionality directly into Excel, making it possible for an advanced analyst to build a complete learning infrastructure using a familiar tool. This way, many more people, including employees in subsidiaries, can contribute to the learning process by examining local transactions and quickly identifying new patterns.
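
    The lift described above can be made concrete with a small calculation. The following Python sketch is purely illustrative (the transactions, the flag column and the 25% cut-off are assumptions, not figures from the whitepaper): it compares the fraud rate among the highest-scored transactions with the overall fraud rate, which is exactly the lift over random sampling.

        # Hedged sketch: estimating the lift of a fraud-scoring model over random sampling.
        # The data and the thresholds are illustrative assumptions, not from the whitepaper.
        transactions = [
            # (model_score, reported_fraud_flag)
            (0.95, True), (0.90, False), (0.85, True), (0.40, False),
            (0.30, False), (0.20, False), (0.10, True), (0.05, False),
        ]

        base_rate = sum(flag for _, flag in transactions) / len(transactions)

        # Manual checking focuses on the top quarter of transactions by score.
        top = sorted(transactions, key=lambda t: t[0], reverse=True)[: len(transactions) // 4]
        top_rate = sum(flag for _, flag in top) / len(top)

        lift = top_rate / base_rate  # > 1 means the model beats random sampling
        print(f"base rate {base_rate:.2f}, top-slice rate {top_rate:.2f}, lift {lift:.2f}")

    In a real project the flags would come from the reported-fraud column and the scores from the trained model, but the comparison itself stays this simple.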

    Read the article

  • Distinction between API and frontend-backend

    - by Jason
    I'm trying to write a "standard" business web site. By "standard", I mean this site runs the usual HTML5, CSS and Javascript for the front-end, a back-end (to process stuff), and runs MySQL for the database. It's a basic CRUD site: the front-end just makes pretty whatever the database has in store; the backend writes to the database whatever the user enters and does some processing. Just like most sites out there. In creating my Github repositories to begin coding, I've realized I don't understand the distinction between the front-end, the back-end, and the API. Another way of phrasing my question is: where does the API come into this picture? I'm going to list some more details and then questions I have - hopefully this gives you guys a better idea of what my actual question is, because I'm so confused that I don't know the specific question to ask. Some more details: I'd like to try the Model-View-Controller pattern. I don't know if this changes the question/answer. The API will be RESTful. I'd like my back-end to use my own API instead of allowing the back-end to cheat and call special queries. I think this style is more consistent. My questions: Does the front-end call the back-end, which calls the API? Or does the front-end just call the API instead of calling the back-end? Does the back-end just execute an API call and the API returns control to the back-end (where the back-end acts as the ultimate controller, delegating tasks)? Long and detailed answers explaining the role of the API alongside the front-end and back-end are encouraged. If the answer depends on the model of programming (models other than the Model-View-Controller pattern), please describe these other ways of thinking of the API. Thanks. I'm very confused.

    Read the article

  • How to implement early exit / return in Haskell?

    - by Giorgio
    I am porting a Java application to Haskell. The main method of the Java application follows the pattern: public static void main(String [] args) { if (args.length == 0) { System.out.println("Invalid number of arguments."); System.exit(1); } SomeDataType d = getData(args[0]); if (!dataOk(d)) { System.out.println("Could not read input data."); System.exit(1); } SomeDataType r = processData(d); if (!resultOk(r)) { System.out.println("Processing failed."); System.exit(1); } ... } So I have different steps and after each step I can either exit with an error code, or continue to the following step. My attempt at porting this to Haskell goes as follows: main :: IO () main = do args <- getArgs if ((length args) == 0) then do putStrLn "Invalid number of arguments." exitWith (ExitFailure 1) else do -- The rest of the main function goes here. With this solution, I will have lots of nested if-then-else (one for each exit point of the original Java code). Is there a more elegant / idiomatic way of implementing this pattern in Haskell? In general, what is a Haskell idiomatic way to implement an early exit / return as used in an imperative language like Java?
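
    One common direction, sketched below on the assumption that calling System.Exit is acceptable for a command-line tool, is to factor each check into a small guard: exitWith throws an ExitCode exception, so every guard genuinely stops the program early and the happy path stays flat instead of nested. The data type and helper functions here are stand-ins for the question's getData, dataOk, processData and resultOk, invented only so the sketch compiles.

        -- Minimal, self-contained sketch (not from the original post).
        import Control.Monad (when)
        import System.Environment (getArgs)
        import System.Exit (ExitCode (ExitFailure), exitWith)

        newtype SomeData = SomeData String

        getData :: String -> SomeData            -- stand-in for the question's getData
        getData = SomeData

        dataOk :: SomeData -> Bool               -- stand-in validation
        dataOk (SomeData s) = not (null s)

        processData :: SomeData -> SomeData      -- stand-in processing step
        processData (SomeData s) = SomeData (reverse s)

        -- Print a message and abort with exit code 1 when the condition holds.
        exitWhen :: Bool -> String -> IO ()
        exitWhen cond msg = when cond $ putStrLn msg >> exitWith (ExitFailure 1)

        main :: IO ()
        main = do
          args <- getArgs
          exitWhen (null args) "Invalid number of arguments."
          let d = getData (head args)
          exitWhen (not (dataOk d)) "Could not read input data."
          let r = processData d
          exitWhen (not (dataOk r)) "Processing failed."
          putStrLn "All steps succeeded."

    A fuller treatment would push the checks into Either or ExceptT and keep main as a thin wrapper, but even the guard style above already removes the nesting the question complains about.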

    Read the article

  • Partner Webcast: Implementing on SOA - A Hands-On Technology Demonstration

    - by Thanos
    Service Oriented Architecture enables organizations to operate more efficiently and react faster to opportunities. How? By helping you create a flexible application architecture that supports greater business agility. You decide how quickly you want to move. You can start by implementing an application integration platform. Then, you can evolve your environment gradually by introducing business process management, business rules, governance and event processing. This unified but flexible approach also allows you to maximize the long-term cost reduction benefits of SOA and cloud-based applications. In this session, you dive into SOA Suite and you will see the usage of some advanced features. The topics covered range from adapters, automatic and custom business process correlation through service routing, rule based and manual decisions and to error handling, compensations and extending SOA Suite with your own Java code. Agenda: Service Oriented Architecture The Auctions Scenario Live Demo of the Oracle SOA Suite Features Connecting to non service enabled technologies with adapters (Database and File adapter) Orchestrating services with BPEL processes Correlating processes with correlation sets Mediating services Service Component Architecture Event Handling User Notification Human Workflow Business Rules Fault Handling patterns Developing custom components with Spring and using them in SOA Suite composites Delivery Format This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24hours prior to start time may not receive confirmation to attend. Duration: 1 hour Register Now For all your questions and support requests to adopt and implement the latest Oracle technologies please contact us at [email protected]

    Read the article

  • WebLogic Application Server: free for developers! by Bruno Borges

    - by JuergenKress
    Great news! Oracle WebLogic Server is now free for developers! What does this mean for you? That you as a developer are permitted to: "[...] deploy the programs only on your single developer desktop computer (of any type, including physical, virtual or remote virtual), to be used and accessed by only (1) named developer." But the most interesting part of the license change is this one: "You may continue to develop, test, prototype and demonstrate your application with the programs under this license after you have deployed the application for any internal data processing, commercial or production purposes" (Read the full license agreement here). If you want to take advantage of this licensing change and start developing Java EE applications with the #1 Application Server in the world, read the previous post now: How To Install WebLogic Zip on Linux! WebLogic Partner Community For regular information, become a member of the WebLogic Partner Community; please visit: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Technorati Tags: WebLogic free,WebLogic for developers,WebLogic license,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • Why Oracle Data Integrator for Big Data?

    - by Mala Narasimharajan
    Big Data is everywhere these days - but what exactly is it? It’s data that comes from a multitude of sources – not only structured data, but unstructured data as well. The sheer volume of data is mind-boggling – here are a few examples of big data: climate information collected from sensors, social media information, digital pictures, log files, online video files, medical records or online transaction records. These are just a few examples of what constitutes big data. Embedded in big data is tremendous value, and being able to manipulate, load, transform and analyze big data is key to enhancing productivity and competitiveness. The value of big data lies in its propensity for greater in-depth analysis and data segmentation -- in turn giving companies detailed information on product performance, customer preferences and inventory. Furthermore, by being able to store and create more data in digital form, “big data can unlock significant value by making information transparent and usable at much higher frequency." (McKinsey Global Institute, May 2011) Oracle's flagship product for bulk data movement and transformation, Oracle Data Integrator, is a critical component of Oracle’s Big Data strategy. ODI provides automation, bulk loading, and validation and transformation capabilities for Big Data while minimizing the complexities of using Hadoop. Specifically, the advantages of ODI in a Big Data scenario are due to pre-built Knowledge Modules that drive processing in Hadoop, leveraging the graphical UI to load and unload data from Hadoop, perform data validations and create mapping expressions for transformations. The Knowledge Modules provide a key jump-start and eliminate a significant amount of Hadoop development. Using Oracle Data Integrator together with Oracle Big Data Connectors, you can not only simplify the complexities of mapping, accessing, and loading big data (via NoSQL or HDFS) but also correlate it with your enterprise data – this correlation may require integrating across heterogeneous and standards-based environments, connecting to Oracle Exadata, or sourcing via a big data platform such as Oracle Big Data Appliance. To learn more about Oracle Data Integration and Big Data, download our resource kit to see the latest in whitepapers, webinars, downloads, and more… or go to our website at www.oracle.com/bigdata

    Read the article

  • Functional Methods on Collections

    - by GlenPeterson
    I'm learning Scala and am a little bewildered by all the methods (higher-order functions) available on the collections. Which ones produce more results than the original collection, which ones produce less, and which are most appropriate for a given problem? Though I'm studying Scala, I think this would pertain to most modern functional languages (Clojure, Haskell) and also to Java 8 which introduces these methods on Java collections. Specifically, right now I'm wondering about map with filter vs. fold/reduce. I was delighted that using foldRight() can yield the same result as a map(...).filter(...) with only one traversal of the underlying collection. But a friend pointed out that foldRight() may force sequential processing while map() is friendlier to being processed by multiple processors in parallel. Maybe this is why mapReduce() is so popular? More generally, I'm still sometimes surprised when I chain several of these methods together to get back a List(List()) or to pass a List(List()) and get back just a List(). For instance, when would I use: collection.map(a => a.map(b => ...)) vs. collection.map(a => ...).map(b => ...) The for/yield command does nothing to help this confusion. Am I asking about the difference between a "fold" and "unfold" operation? Am I trying to jam too many questions into one? I think there may be an underlying concept that, if I understood it, might answer all these questions, or at least tie the answers together.
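
    For what it's worth, a tiny Scala sketch of the comparisons raised above (purely illustrative, not taken from the post) shows the shapes involved: map followed by filter traverses the list twice, a single foldRight can build the same result in one pass, and the choice between map and flatMap is what decides whether you get back a List(List(...)) or a flat List(...).

        // Illustrative only: assumed sample data, not from the original question.
        val xs = List(1, 2, 3, 4, 5)

        // map then filter: two traversals
        val viaMapFilter = xs.map(_ * 2).filter(_ > 4)             // List(6, 8, 10)

        // foldRight: one traversal building the same list
        val viaFold = xs.foldRight(List.empty[Int]) { (x, acc) =>
          val y = x * 2
          if (y > 4) y :: acc else acc
        }                                                          // List(6, 8, 10)

        // Nested collections: map inside map keeps the nesting...
        val nested = List(List(1, 2), List(3))
        val stillNested = nested.map(inner => inner.map(_ + 1))    // List(List(2, 3), List(4))
        // ...while flatMap removes exactly one level of nesting.
        val flattened = nested.flatMap(inner => inner.map(_ + 1))  // List(2, 3, 4)

    Whether foldRight parallelises as well as map is a separate question; the sketch only shows which result shapes come back from each combination.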

    Read the article

  • Splitting up a Rails/Ruby app onto multiple servers

    - by craig.kaminsky
    We recently moved a large application to two machines, both running the same codebase. I. Machine A Web server for public facing application Receives webhook callbacks from our ESP Handles a few large, list-processing jobs (uploaded spreadsheets with data) II. Machine B Manages a massive set of (background) jobs but, primarily, focuses on building and assembling newsletters Runs all integration with our NetSuite platform Runs all system maintenance (read: DB) jobs To me, having these two apps running the same codebase (a large, monolithic Rails application) seems 'wrong'. I am wondering if anyone has advice on how to better break up the code for these two apps. While they both need the same DB and, ultimately, the same model code, Machine B has no need for Controllers and Views and it feels wasteful running a full-stack Rails app for its tasks. A couple things came to mind but I'm not sure if I'm trying to solve a problem that doesn't exist: Break the models out into a sub-module on git and include it in both apps Build out the Machine B app in plain Ruby or a lighter framework like Sinatra (where I could use ActiveRecord with Sinatra in combo with a sub-module for the model folder). I'm new to this scenario and appreciate any and all feedback or direction! Thank you.

    Read the article

  • Is there any reason not to go directly from client-side Javascript to a database?

    - by Chris Smith
    So, let's say I'm going to build a Stack Exchange clone and I decide to use something like CouchDB as my backend store. If I use their built-in authentication and database-level authorization, is there any reason not to allow the client-side Javascript to write directly to the publicly available CouchDB server? Since this is basically a CRUD application and the business logic consists of "Only the author can edit their post" I don't see much of a need to have a layer between the client-side stuff and the database. I would simply use validation on the CouchDB side to make sure someone isn't putting in garbage data and make sure that permissions are set properly so that users can only read their own _user data. The rendering would be done client-side by something like AngularJS. In essence you could just have a CouchDB server and a bunch of "static" pages and you're good to go. You wouldn't need any kind of server-side processing, just something that could serve up the HTML pages. Opening my database up to the world seems wrong, but in this scenario I can't think of why as long as permissions are set properly. It goes against my instinct as a web developer, but I can't think of a good reason. So, why is this a bad idea? EDIT: Looks like there is a similar discussion here: Writing Web "server less" applications EDIT: Awesome discussion so far, and I appreciate everyone's feedback! I feel like I should add a few generic assumptions instead of calling out CouchDB and AngularJS specifically. So let's assume that: The database can authenticate users directly from its hidden store All database communication would happen over SSL Data validation can (but maybe shouldn't?) be handled by the database The only authorization we care about other than admin functions is someone only being allowed to edit their own post We're perfectly fine with everyone being able to read all data (EXCEPT user records which may contain password hashes) Administrative functions would be restricted by database authorization No one can add themselves to an administrator role The database is relatively easy to scale There is little to no true business logic; this is a basic CRUD app
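
    For what it's worth, the database-side rule the question leans on ("only the author can edit their post") can be expressed in CouchDB as a validate_doc_update function inside a design document. The sketch below is illustrative only; document fields such as author are assumptions, not something taken from the question.

        // Hedged sketch of a CouchDB validate_doc_update function.
        // The document fields (author) are illustrative assumptions.
        function (newDoc, oldDoc, userCtx, secObj) {
          if (!userCtx.name) {
            throw({ unauthorized: "You must be logged in." });
          }
          // Updates: only the original author (or an admin) may change the document.
          if (oldDoc && oldDoc.author !== userCtx.name &&
              userCtx.roles.indexOf("_admin") === -1) {
            throw({ forbidden: "Only the author can edit their post." });
          }
          // New documents must name the current user as the author.
          if (!oldDoc && newDoc.author !== userCtx.name) {
            throw({ forbidden: "New posts must list you as the author." });
          }
        }

    Whether a rule like this is enough to drop the middle tier entirely is exactly what the question is asking; the function only shows that the authorization rule itself fits inside the database.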

    Read the article

  • How can I retrieve the details of the file from an outbound operation in BPEL 11g

    - by [email protected]
    Several times, we come across requirements where we need to capture the details of the file that got written out as a part of a BPEL process invoking a File/Ftp Adapter. Consider a case where we're using FileNamingConvention as "PurchaseOrder_%SEQ%.txt" and we need to do some post processing based on the filename (please remember that we wouldn't know the filename until the adapter invocation completes). In order to achieve this, we need to manually tweak the WSDL so that the File/Ftp Adapter can return the metadata of the file that was written out. In general, the File/Ftp Write/Put WSDL operations are one way, as shown below. The File/Ftp Adapters are designed to return the metadata if this WSDL is tweaked into a two-way WSDL. In addition, the <wsdl:output/> must import the fileread.xsd schema (see below). You will need to copy fileread.xsd from here into the xsd folder of your composite. Finally, we will need to tweak the WSDL (highlighted below). The BPEL <invoke> would then look as shown below. Please note that the file metadata would be returned as a part of the BPEL output variable.

    Read the article

  • Oracle CEP on OTN

    - by seth.white
    Here are the latest links to Oracle CEP on the Oracle website (OTN). I've heard from a few folks that these are hard to find. As of this writing, the latest release of Oracle CEP is 11.1.3. You may also see this release referred to as 11.1.1.3 and 11.1.1.3.0. An overview of the new features in 11.1.3 can be found here. The product download page: http://www.oracle.com/technology/software/products/middleware/htdocs/fmw_11_download.html The product documentation: http://download.oracle.com/docs/cd/E14571_01/soa.htm#ocep Don't be alarmed that the release number in the documentation is 11.1.1. This is the documentation for the 11.1.3 release. The user forum: http://forums.oracle.com/forums/forum.jspa?forumID=820 The Oracle CEP samples page (contains useful code samples): http://www.oracle.com/technology/sample_code/products/event-driven-architecture/index.html   The Oracle CEP product page (maintained by our product manager): http://www.oracle.com/technology/products/event-driven-architecture/complex-event-processing.html   The event driven architecture page (Oracle CEP is bundled in the EDA Suite product): http://www.oracle.com/technology/products/event-driven-architecture/index.html   Technorati Tags: Oracle,CEP,OTN

    Read the article

  • Scheduled Deprecation of Legacy Obligation Features

    - by Wes Curtis
    The Obligation object in ETPM includes some functionality and tables that, to our knowledge, are not being used by customers and implementers at this time. Removing this logic and the related tables should improve performance and simplify the logic executed during Obligation maintenance processing. The Release Notes included with ETPM v2.3.1 announced that the product plans to deprecate the functionality on Obligation for Contract Terms, Contract Quantities, Tax Exemptions, Terms & Conditions and Obligation Type Start Options. Our plan is to remove this functionality in the next release of ETPM. We have already confirmed with most project teams that these features are not being used, so the deprecation should have no impact on existing designs or processes. If you think your project may be impacted by this deprecation, please review any Business Object that has been created for the Obligation maintenance object to make sure that no elements are being defined for any of the following child tables: CI_SA_CONTERM, CI_SA_CONT_QTY, CI_TOU_CONT_VAL and CI_SA_TC. As part of this deprecation, the following administrative tables are being removed along with their related metadata: Contract Quantity Type, Tax Exempt Type, and Terms and Conditions. Please contact me or the Oracle Tax Product Management team if your implementation has actually used these objects in its designs. We can discuss options to mitigate the impacts of this planned deprecation. We will continue to announce planned deprecations in the Release Notes for each release and will contact project teams ahead of time to confirm that these deprecations will have little to no impact on our customers.

    Read the article

  • Oracle SOA Suite, the Most Capable Tool for Every Possible Integration Challenge

    - by Demed L'Her
    session ID: CON8601 - when: Monday, Oct. 1, 10:45am-11:45am - where: Moscone South 102 "Oracle SOA Suite, the Most Capable Tool for Every Possible Integration Challenge" is the name of the session I will be delivering at Oracle OpenWorld this year. I'm usually going for more subdued titles but decided to remove the gloves this year, at the risk of sounding arrogant! While we have a number of worthy competitors in various areas of integration no one can really compete with the breadth and reliability of Oracle SOA Suite. This session is primarily intended for people who are not yet familiar with Oracle SOA Suite (i.e. if you are an existing customer your time might be better spent at some of the other sessions we have on the topic). I will provide an overview of Oracle SOA Suite, the customers using it and the types of challenges they are solving with it: from integrating Oracle Applications (E-Business Suite, Siebel, PeopleSoft, RightNow, Taleo etc.) to third-party applications (did you know that over a third of our customers actually use us to integrate SAP?), mainframes and a variety of technologies. We will talk about some emerging trends and problems that our users are solving with the product: cloud integration, B2B consolidation and mobile-enablement. I will also briefly touch upon the exciting projects we are doing with Oracle Event Processing, in the domain of "Fast Data" and "Big Data". Last but not least, I will be joined on stage by Venktesh Maudgalya, Director at Electronic Arts. Venktesh will bring his customer perspective and explain how EA leveraged Oracle SOA Suite to implement iHub, the massive integration hub that interconnects all their applications (E-BusinessSuite, Hyperion, Demantra, Peoplesoft, Salesforce.com, Kronos, Teradata, GXS etc.) and carries 3/4 of their revenue flows. I just picked up my badge and will be kicking off the festivities tomorrow talking to partners in a pre-OOW briefing at the Oracle Headquarters - see you next week! PS: if you're going to tweet about Oracle SOA Suite next week please make sure to use the #oraclesoa and #oow hashtags so that we can track and amplify your tweets!

    Read the article

  • JSP Include: one large bean or bean for each include

    - by shylynx
    I want to refactor a webapp that consists of very distorted JSPs and servlets. Because we can't switch to a web framework easily, we have to keep JSPs and servlets, and now we are unsure how to include pages in one another and how to set up the jsp:useBean directives effectively. As a first step we want to decouple the code for the core actions and the bean creation into servlets. The servlets should forward to their corresponding pages, which should use the bean. The problem here is that each JSP consists of different sub- and sub-sub-JSPs that are included in one another. Here is a shortened extract (because reality is more complex): head header top navigation actionspanel main header actionspanel foot footer Moreover, each JSP (including the header and footer) uses dynamic data. For example, title and actionspanel can change on each page reload or have links and labels that depend on the processing by the preceding servlet. I know that jsp include directives should only be used for static content and should be avoided for dynamic content. But here we have very large pages that consist of many parts. Now the core questions: Should I use one big bean for each page, so that each bean also holds data for header and footer besides its core data, so that each subsequent included JSP uses the same bean directive? For example: DirectoryJSP <- DirectoryBean CompareJSP <- CompareBean Or should I use one bean for each JSP, so that each bean only holds the data for one JSP and its own purpose. For example: DirectoryJSP <- DirectoryBean HeaderJSP <- HeaderBean FooterJSP <- FooterBean CompareJSP <- CompareBean HeaderJSP <- HeaderBean FooterJSP <- FooterBean In the second case: should the subsequent beans be a member of the corresponding parent bean, so that only the parent bean is attached as an attribute to the request? Or should each bean be attached to the request?
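
    As a purely illustrative sketch of the second option (the bean classes, attribute names and page names below are invented, not taken from the question), a controller servlet would put one request-scoped bean per fragment into the request, and each included JSP would pick up only its own bean:

        <%-- page.jsp -- hypothetical sketch of "one bean per JSP fragment".
             The controller servlet is assumed to have set the request attributes:
               request.setAttribute("headerBean", headerBean);
               request.setAttribute("directoryBean", directoryBean);
               request.setAttribute("footerBean", footerBean);            --%>
        <jsp:include page="header.jsp"/>

        <jsp:useBean id="directoryBean" type="com.example.DirectoryBean" scope="request"/>
        <h1>${directoryBean.title}</h1>

        <jsp:include page="footer.jsp"/>

        <%-- header.jsp -- reads only its own bean --%>
        <jsp:useBean id="headerBean" type="com.example.HeaderBean" scope="request"/>
        <div class="header">${headerBean.title}</div>

    The first option would differ only in that header.jsp and footer.jsp would read nested properties of the single page bean (for example ${directoryBean.header.title}); either way, the servlet remains the only place that builds the beans.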

    Read the article

  • Elementary OS boots to a terminal (other OS) [on hold]

    - by Benjamin Watson
    Im new to this site, please forgive me if I missed some posting protocol of some sort. I am attempting to install Luna on my samsung s2 laptop (a8 amd radeon 7640g) and when I click on try luna, it just pulls up a terminal after the insignia (curvy E). When I install it, same issue. CTRL-ALT-f7 reveals this (hand typed, sorry if there's typos) Starting preload: *starting CUPS printing spooler/server *stopping save kernel messages preload. fsck from util-linux 2.20.1 fsck from util-linux 2.20.1 dosfsck 3.0.12, 29 oct 2011 FAT32, LFN /dev/sda1: 3 files, 245/189518 clusters /dev/sda2: clean, 133841/30294016 files, 2529529/121164544 blocks Skipping profile in /etc/apparmor.d/disable: usr.sbin.rsyslogd *starting AppArmor profiles speech-dispacher disabled; edit /etc/default/speech-dispenser *stopping system V initialisation compatibility *starting system V runlevel compatability *starting apci daemon *starting anac(h)ronistic cron *starting save kernal messages *starting ntp server ntpd *starting regular background program processing damon *starting deferred execution scheduler *stopping anac(h)ronistic cron *starting LightDM Display Manager *starting bluetooth daemon *starting mDNS/DNS-SD daemon *starting CPU interrupts balancing daemon *stopping Send an event to indicate plymouth is up saned disabled ; edit /etc/default/saned *starting network connection manager *starting crash report submission daemon *checking battery state... That's it. I can't make heads or tails of it. Please note that while I've been running linux for about a year, I'm still fairly new to all of this, so try to be detailed in your explanations and/or descriptions of what I need to do. Any/all help would be appreciated. Thank you for your time.

    Read the article

  • SQLAuthority News – Download Whitepaper – Choosing a Tabular or Multidimensional Modeling Experience in SQL Server 2012 Analysis Services

    - by pinaldave
    Data modeling is the most important task for any BI professional. As a matter of fact, the biggest challenge is organizing disparate data into an analytic model that effectively and efficiently supports reporting and analysis. SQL Server 2012 introduces the BI Semantic Model (BISM), a single model that can support a broad range of reporting and analysis while blending two Analysis Services modeling experiences behind the scenes. Multidimensional modeling – enables BI professionals to create sophisticated multidimensional cubes using traditional online analytical processing (OLAP). Tabular modeling – provides self-service data modeling capabilities to business and data analysts. As data modeling evolves and business needs grow, new technologies and tools are emerging to help end users make the necessary adjustments to their reporting and analysis. This white paper provides practical guidance to help you decide between the two SQL Server 2012 Analysis Services modeling experiences – tabular or multidimensional. Do let me know your opinion in a comment. In simple words – I would like to know when you would use tabular modeling and when multidimensional modeling? Download Choosing a Tabular or Multidimensional Modeling Experience in SQL Server 2012 Analysis Services Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Business Intelligence, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, T SQL, Technology

    Read the article

  • XOLO X900–First mobile phone with Intel Power

    - by Rekha
    The XOLO X900 is XOLO's offering of the world's first smartphone with the power of Intel Inside®, the result of Intel shaking hands with LAVA International Ltd., one of India's fastest growing handset brands. The R&D centres are in Shenzhen (China) and Bangalore (India). The smartphone offers fast web browsing with the 1.6 GHz Intel processor and smooth multi-tasking using Intel's patented Hyper-Threading technology. It has optimum battery usage, a 4.03” hi-resolution 1024x600 pixel LCD screen to ensure crisp text and vibrant images, an HDMI output port for TV, full HD 1080p playback and dual speakers. It has an 8MP HD camera with certain DSLR-like features, allowing you to click up to 10 photos in less than a second. 3D and HD gaming is immensely realistic with the 400 MHz Graphics Processing Unit. The operating system used here is Android 2.3 (Gingerbread), upgradable to Android 4.0. It has GPS and rear and front cameras of 8MP and 1.3MP respectively. Accelerometer, gyroscope, magnetometer, ambient light sensor and proximity sensor are all enabled in this smartphone. Intel's smartphone venture is beginning in India first. It is said to be available for sale in India from April 23, 2012 onwards, at a best-buy price of approximately INR 22,000. The smartphone will be available at the Indian retail chain Croma, and in other retail stores and online stores from early May. The company is launching the smartphone in India first and a more powerful handset in China later this year. Depending on their success in India and China, Intel is planning to enter the European and US markets. Till then, Intel smartphones are only for Indian buyers. You can find more technical information on XOLO's site.

    Read the article

  • What naughty ways are there of driving traffic?

    - by Tom Wright
    OK, so this is purely for my intellectual curiosity and I'm not interested in illegal methods (no botnets please). But say, for instance, that some organisation incentivised link sharing in a bid to drive publicity. How could I drive traffic to my link? Obviously I could spam all my friends on social networking sites, which is what they want me to do, but that doesn't sound as fun as trying to game the system. (Not that I necessarily dispute the merit of this particular campaign.) The ideas I've come up with so far (in order of increasing deviousness) include: Link-dropping - This is too close to what they want me to do to be devious, but I've done it here (sorry) and I've done it on Twitter. I'm subverting it slightly by focusing on the game aspects rather than their desired message. AdWords - Not very devious at all, but effectively free with the vouchers I've accrued. That said, I must be pretty poor at choosing keywords, because I've seen very few hits (~5) so far. Browser testing websites - The target has a robots txt which prevents browsershots from processing it, but I got around this by including it in an iframe on a page that I hosted. But my creative juices have run dry I'm afraid. Does anyone have any cheeky/devious/cunning/all-of-the-above idea for driving traffic to my page?

    Read the article

  • How can a non-technical person learn to write a spec for small projects?

    - by Joseph Turian
    How can a non-technical person learn to write specs for small projects? A friend of mine is trying to outsource some development on a statistics project. In particular, he does a lot of work in excel, and wants to outsource the creation of scripts to do what he now does by hand. However, my friend is extremely non-technical. He is poor at writing technical specs. When he does write a spec, it is written the way you would describe doing something in excel (go to this cell and then copy the value to that cell). It is also overly verbose, and does examples several times. I'm not sure if he properly describes corner cases. The first project he outsourced was a failure. I think he overdescribed some details, but underdescribed corner cases. That and/or the coder he hired didn't think through the corner cases and ask appropriate questions. I'm not sure. I got on IM with him and it took me half an hour to dig out a description that should have taken five minutes or less to describe. I wrote the scripts for him at the end, but didn't examine why his process with the coder failed. He has asked me for help. However, I refuse to get involved, because taking his spec and translating it into clear requirements is 10x more work than executing on a clearly written spec. What is the right way for him to learn? Are there resources he could use? Are there ways he can learn from small, low-pressure practice projects with coders? [edit: Most of his scripts are statistical and data processing oriented. e.g. take this column and run an average over it. Remove these rows under these conditions. So the challenge is different than spec'ing a web app.]

    Read the article

  • Java Spotlight Episode 102: Freescale on Embedded Java and Java Embedded @ JavaOne

    - by Roger Brinkley
    An interview with Michael O'Donnell of Freescale on Embedded Java and Embedded Java @ JavaOne. Part of this podcast was recorded live at the JavaOne 2012 Glassfish Party at the Thirsty Bear. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link:  Java Spotlight Podcast in iTunes. Show Notes News Oracle Java ME Embedded 3.2 Java Embedded Server 7.0 Events Oct 3-4, Java Embedded @ JavaONE, San Francisco Oct 15-17, JAX London Oct 30-Nov 1, Arm TechCon, Santa Clara Oct 22-23, Freescale Technology Forum - Japan, Tokyo Oct 31, JFall, Netherlands Nov 2-3, JMagreb, Morocco Nov 13-17, Devoxx, Belgium Feature InterviewFreescale is the global leader in embedded processing solutions, advancing the automotive, consumer, industrial and networking markets. From microprocessors and microcontrollers to sensors, analog ICs and connectivity – our technologies are the foundation to the innovations that make our world greener, safer, healthier and more connected. Michael O'Donnell, is the Director of Software Ecosystem Alliances. The upcoming Freescale Technology Forum - Japan in Tokyo, Japan is an excellent way for developers to learn more about Freescale and Java. What’s Cool Glassfish Party - 6th year Geek Bike Ride

    Read the article

  • Management Reporter Installation – Lessons Learned

    - by Ryan McBee
    After successfully completing several installations of Management Reporter this year, I wanted to share a few lessons learned that should help you. First, you will want to make sure that you install Management Reporter under a domain account as opposed to a local system or network service account. Management Reporter gives you the option to install under these accounts, but it is a best practice to use a domain account. Upon installation of Management Reporter, you will want to make sure that Directory Browsing is enabled within the IIS server of your site or you will have problems when you go to use Management Reporter. By default, it will be disabled in Server 2008 R2 and you will need to make the setting change under the Actions pane shown below. Lastly, you will want to make sure that SQL Server is running under a domain account. I have had multiple situations where reports have been stuck in the Queued status rather than the Processing status in Management Reporter. After reviewing resolution 5 of KB 2298248, it was determined that running SQL Server under a domain account is the way to go.

    Read the article

  • Implementing a post-notification function to perform custom validation

    - by Alejandro Sosa
    Introduction The Oracle Workflow Notification System can be extended to perform extra validation or processing via PLSQL procedures when the notification is being responded to. These PLSQL procedures are called post-notification functions since they are executed after a notification action such as Approve, Reject, Reassign or Request Information is performed. The standard signature for the post-notification function is: procedure <procedure_name> (itemtype in varchar2, itemkey in varchar2, actid in varchar2, funcmode in varchar2, resultout in out nocopy varchar2); Modes The post-notification function provides the parameter 'funcmode', which will have the following values: 'RESPOND', 'VALIDATE' and 'RUN' when a notification is responded to (Approve, Reject, etc.); 'FORWARD' for a notification being forwarded to another user; 'TRANSFER' for a notification being transferred to another user; 'QUESTION' for a request for more information from one user to another; 'ANSWER' for a response to a request for more information; 'TIMEOUT' for a timed-out notification; 'CANCEL' when the notification is being re-executed in a loop. Context Variables Oracle Workflow provides the post-notification function with context information that corresponds to the current notification being acted upon: WF_ENGINE.context_nid - The notification ID. WF_ENGINE.context_new_role - The new role to which the action on the notification is directed. WF_ENGINE.context_user_comment - Comments appended to the notification. WF_ENGINE.context_user - The user who is responsible for taking the action that updated the notification's state. WF_ENGINE.context_recipient_role - The role currently designated as the recipient of the notification. This value may be the same as the value of the WF_ENGINE.context_user variable, or it may be a group role of which the context user is a member. WF_ENGINE.context_original_recipient - The role that has ownership of and responsibility for the notification. This value may differ from the value of the WF_ENGINE.context_recipient_role variable if the notification has previously been reassigned. Example Let us assume there is an EBS transaction that can only be approved by certain people; thus any attempt to transfer or delegate such a notification should be allowed only to users SPIERSON or CBAKER. The way to implement this functionality would be as follows: Edit the corresponding workflow definition in Workflow Builder and open the notification. 
In the Function Name field, enter the name of the procedure where the custom code is handled, for instance TEST_PACKAGE.Post_Notification. In PLSQL, create the corresponding package TEST_PACKAGE with a procedure named Post_Notification, as follows: procedure Post_Notification (itemtype in varchar2, itemkey in varchar2, actid in varchar2, funcmode in varchar2, resultout in out nocopy varchar2) is l_count number; begin if funcmode in ('TRANSFER','FORWARD') then select count(1) into l_count from WF_ROLES where WF_ENGINE.context_new_role in ('SPIERSON','CBAKER'); --and/or any other conditions if l_count<1 then WF_CORE.TOKEN('ROLE', WF_ENGINE.context_new_role); WF_CORE.RAISE('WFNTF_TRANSFER_FAIL'); end if; end if; end Post_Notification; Launch the workflow process with the changed notification and attempt to reassign or transfer it. When trying to reassign the notification to user CBROWN, the screen would look like the one below: Check the Workflow API Reference Guide, section Post-Notification Functions, to see all the standard, seeded WF_ENGINE variables available for extending notification processing.

    Read the article

  • When and why you should use void (instead of i.e. bool/int)

    - by Jonas
    I occasionally run into methods where a developer chose to return something which isn't critical to the function. I mean, when looking at the code, it apparently works just as nicely as a void and, after a moment of thought, I ask "Why?" Does this sound familiar? Sometimes I would agree that most often it is better to return something like a bool or int, rather than just do a void. I'm not sure though, in the big picture, about the pros and cons. Depending on the situation, returning an int can make the caller aware of the number of rows or objects affected by the method (e.g., 5 records saved to MSSQL). If a method like "InsertSomething" returns a boolean, I can have the method designed to return true on success, else false. The caller can choose whether or not to act on that information. On the other hand, may it lead to a less clear purpose for a method call? Bad coding often forces me to double-check the method content. If it returns something, it tells you that you have to do something with the returned result. Another issue would be, if the method implementation is unknown to you, what did the developer decide to return that isn't critical to the function? Of course you can comment it. The return value has to be processed, when the processing could have ended at the closing bracket of the method. What happens under the hood? Did the called method return false because of a thrown error, or due to the evaluated result? What are your experiences with this? How would you act on this?
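
    To make the trade-off concrete, here is a small, hypothetical Java sketch (the repository class and method names are invented for illustration, not taken from the post): the int-returning variant tells the caller how many records were actually saved, the boolean variant lets the caller act on success or ignore it, and the void variant has to surface failure some other way, typically as an exception.

        // Hypothetical illustration of the "InsertSomething" trade-off described above.
        import java.util.ArrayList;
        import java.util.List;

        class RecordRepository {
            private final List<String> storage = new ArrayList<>();

            // Variant 1: void -- failure must be signalled by an exception (or not at all).
            void insertSomething(String record) {
                if (record == null || record.isEmpty()) {
                    throw new IllegalArgumentException("record must not be empty");
                }
                storage.add(record);
            }

            // Variant 2: boolean -- the caller may act on success/failure, or ignore it.
            boolean tryInsert(String record) {
                if (record == null || record.isEmpty()) {
                    return false;
                }
                return storage.add(record);
            }

            // Variant 3: int -- the caller learns how many records were actually saved.
            int insertAll(List<String> records) {
                int saved = 0;
                for (String r : records) {
                    if (tryInsert(r)) {
                        saved++;
                    }
                }
                return saved;
            }
        }

        public class ReturnValueDemo {
            public static void main(String[] args) {
                RecordRepository repo = new RecordRepository();
                repo.insertSomething("first");                 // nothing to check, nothing to ignore
                if (!repo.tryInsert("")) {                     // caller chooses to act on the result
                    System.out.println("insert failed");
                }
                int saved = repo.insertAll(List.of("a", "", "b"));
                System.out.println(saved + " records saved");  // prints "2 records saved"
            }
        }

    None of this settles which style is clearer in a given codebase; it only shows what information each signature makes available to the caller.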

    Read the article

  • Let's introduce the Oracle Enterprise Data Quality family!

    - by Sarah Zanchetti
    The Oracle Enterprise Data Quality family of products helps you to achieve maximum value from your business applications by delivering fit-for-purpose data. OEDQ is a state-of-the-art collaborative data quality profiling, analysis, parsing, standardization, matching and merging product, designed to help you understand, improve, protect and govern the quality of the information your business uses, all from a single integrated environment. The Oracle Enterprise Data Quality products are: Oracle Enterprise Data Quality Profile and Audit Oracle Enterprise Data Quality Parsing and Standardization Oracle Enterprise Data Quality Match and Merge Oracle Enterprise Data Quality Address Verification Server Oracle Enterprise Data Quality Product Data Parsing and Standardization Oracle Enterprise Data Quality Product Data Match and Merge Also, the following are some of the key features of OEDQ: Integrated data profiling, auditing, cleansing and matching Browser-based client access Ability to handle all types of data – for example customer, product, asset, financial, operational Connection to any JDBC-compliant data sources and targets Multi-user project support (role-based access, issue tracking, process annotation, and version control) Services Oriented Architecture (SOA) - support for designing processes that may be exposed to external applications as a service Designed to process large data volumes A single repository to hold data along with gathered statistics and project tracking information, with shared access Intuitive graphical user interface designed to help you solve real-world information quality issues quickly Easy, data-led creation and extension of validation and transformation rules Fully extensible architecture allowing the insertion of any required custom processing If you need to learn more about EDQ, or get assistance for any kind of issue, the Oracle Technology Network offers a huge range of resources on Oracle software. Discuss technical problems and solutions on the Discussion Forums. Get hands-on step-by-step tutorials with Oracle By Example. Download Sample Code. Get the latest news and information on any Oracle product. You can also get further help and information with Oracle software from: My Oracle Support Oracle Support Services An Information Center is available, where you can find technical information and fast solutions to the most common, already solved issues: Information Center: Oracle Enterprise Data Quality [ID 1555073.2]

    Read the article

  • How do I get network working on a Dell n5010 - Ubuntu 10.04

    - by cyberroger
    Hey. I just formatted a Dell n5010 and installed Ubuntu 10.04. Now, when I try to access the Internet it doesn't find the available networks. Information about it: My network is set to broadcast its SSID; I already typed lspci, and it doesn't return my wireless card; I tried installing the Windows drivers from the manufacturer's CD using ndiswrapper, but it says it can't complete the installation because it can't find some "Device" folder; I don't know what else to do, I switch the wireless button on/off and it just can't see any wireless connection, including mine. Here is the information about the network hardware: *-network UNCLAIMED description: Network controller product: Broadcom Corporation vendor: Broadcom Corporation physical id: 0 bus info: pci@0000:12:00.0 version: 01 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list configuration: latency=0 resources: memory:fbc00000-fbc03fff And this: 00:00.0 Host bridge: Intel Corporation Core Processor DRAM Controller (rev 18) 00:02.0 VGA compatible controller: Intel Corporation Core Processor Integrated Graphics Controller (rev 18) 00:16.0 Communication controller: Intel Corporation 5 Series/3400 Series Chipset HECI Controller (rev 06) 00:1a.0 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 06) 00:1b.0 Audio device: Intel Corporation 5 Series/3400 Series Chipset High Definition Audio (rev 06) 00:1c.0 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 1 (rev 06) 00:1c.1 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 2 (rev 06) 00:1c.2 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 3 (rev 06) 00:1c.4 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 5 (rev 06) 00:1d.0 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 06) 00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev a6) 00:1f.0 ISA bridge: Intel Corporation Mobile 5 Series Chipset LPC Interface Controller (rev 06) 00:1f.2 SATA controller: Intel Corporation 5 Series/3400 Series Chipset 6 port SATA AHCI Controller (rev 06) 00:1f.3 SMBus: Intel Corporation 5 Series/3400 Series Chipset SMBus Controller (rev 06) 00:1f.6 Signal processing controller: Intel Corporation 5 Series/3400 Series Chipset Thermal Subsystem (rev 06) 12:00.0 Network controller: Broadcom Corporation Device 4727 (rev 01) 13:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller (rev 02) Thanks!

    Read the article
