Search Results

Search found 24498 results on 980 pages for 'lock pages in memory'.


  • Reviewing Oracle ADF Enterprise Application Development Made Simple Book

    - by Grant Ronald
    Although I was a technical reviewer of Oracle ADF Enterprise Application Development - Made Simple (by Sten Vesterli), it is nice to get the finished article in your hands as a real, tangible book. Personally, on a sun lounger with a Dan Brown book I can read 300 pages a day, but technical books are a different beast and I find it hard to get through them with the same vigour.  However, I'm up to chapter 7 in Sten's book and so far it's holding my interest.  He writes in an almost conversational tone and I really like the comparisons to "real world" concepts - like page templates being like gingerbread cookie cutters.  Personally I like to be able to compare or size up a new concept against something I already know. I'll post a full review next week, but the good news is that 212 pages in, I'm still reading!

    Read the article

  • SEO Service - Refresh SEO

    - by Dan
    I've been approached to possibly take over the SEO/marketing work for a site. The owner is currently using a paid service at http://refreshseo.com/ and paying around $80 p/m. From what I can make out, all RefreshSEO does is automatically generate keyword-rich content pages and attach them to the site. These pages aren't actually linked to from within the site. So I'm wondering two things: has anyone had any experience with this particular company or similar services - has it been worth it? And how do you think the recent Google Panda updates impact this kind of strategy? Thanks in advance

    Read the article

  • Do database management systems exist?

    - by Bakaburg
    I want to build a database for my non-profit association, and I was planning to do it by myself. But then I realized that I really don't have the time to build a solid, secure system. So I was thinking: just as CMSes exist, maybe there are also database management systems in the same sense - a layer of abstraction over the database that lets you manage data, manage access to data, create widgets with the data and expand it. Maybe with a frontend to use this data and a backend to manage it. Essentially a CMS based not on pages and posts but on data! Moreover, I would like a standard solution, because my IT management position ends this year, so I need something that will be easy to use and extend even by someone who is not a developer. Does something exist that fits my needs? PS: I would really like a modern and visually pleasant solution, JavaScript- and Ajax-heavy, that relies as little as possible on the server and on page reloads.

    Read the article

  • Setting up page goals in Analytics when using progressive enhancement to load content using jQuery .load

    - by sam
    I'm using jQuery .load to pull content from other pages into my homepage. So that Google can still see what's going on, I've made the <a> tags point to the real pages but override them in JavaScript, so instead of navigating to that page it just loads that page's content into the main page. Normally I would just make the page /contact.html a goal. Can I still get it to work as a goal if the content is being loaded in? Can I do something like: when the user clicks <a href="contact.html" id="load-contact">contact</a>, it logs the click on the <a> tag as a goal, rather than the actual page being visited?
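
    One common approach from that era is to record a "virtual pageview" when the link is clicked and then define the goal against that virtual URL. Below is a minimal sketch, assuming the classic asynchronous ga.js tracker (the _gaq queue) is already installed on the page; the #load-contact id comes from the question, while the #main container and the /virtual/contact path are purely illustrative (with the newer analytics.js you would call ga('send', 'pageview', '/virtual/contact') instead):

        $('#load-contact').on('click', function (e) {
            e.preventDefault();                                  // stop the normal navigation
            $('#main').load(this.href);                          // pull contact.html into the page, as described above
            _gaq.push(['_trackPageview', '/virtual/contact']);   // log a virtual pageview that Analytics can match
        });

    In Analytics the goal would then be configured as a URL destination goal matching /virtual/contact rather than /contact.html.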

    Read the article

  • Google Webmaster Tools Data Highlighter says "Failed to load data, please try again later"

    - by George Garside
    I seem to be unable to access the data highlighter in Google Webmaster Tools since I attempted to start a new highlight on a page. Clicking the red Start Highlighting button to open the tagger did nothing, so I refreshed. Now, the page loads without the middle content section, then a few seconds later shows the following error: Failed to load data, please try again later. I can't get any of the middle section to load, even the list of current pages/page sets that have been highlighted—this error shows. I thought it may be a Google service outage, but other sites' data highlighters work fine. It also seems coincidental that it stopped working after I attempted to start highlighting—I was able to list the existing pages and page sets fine before that, and still am able to access the service on other sites. I've tried clearing browser data and have tried Google Chrome as well—same problem. What's happened?

    Read the article

  • The Future of the Database Begins

    - by Thanos Terentes Printzios
    For more than three-and-a-half decades, Oracle has defined database innovation. With our leading technologies, Oracle customers have been able to out-think and out-perform their competition. Soon organizations will be able to do that even faster. With the introduction of the Oracle Database In-Memory Option it will be possible to perform TRUE real-time, ad-hoc, analytic queries on your organization’s business data as it exists at that moment and receive the results immediately. Imagine your sales team being able to know the total sales they have made as of right now -- not last week, or even last night, but right now. Imagine innovation that accelerates business decision making to real-time speeds. That's the power of Oracle Database In-Memory. Watch Larry Ellison to find out what this and the other new features of Oracle Database 12c will do for you. Register Now for the Live Webcast

    Read the article

  • Javascript Implementation Patterns for Server-side MVC Websites

    - by tmo256
    I'm looking for information on common patterns for initializing and executing Javascript page by page in a "traditional" server-side MVC website architecture. A few months ago, my development team began, but abandoned, a major re-architecture of our company's primary web app, including a full front-end redesign. In the process, there was some debate about the architecture of the Javascript in the current version of the site, and whether it fit into a clear, modern design pattern. Now I've returned to the process of overhauling the front end of this and several other MVC websites (Ruby on Rails and MVC.net) to implement a responsive framework (Bootstrap), and in the process will again need to review, then revamp and update, a lot of Javascript. These web applications are NOT single-page Javascript applications (in fact, we are ripping out a lot of Ajax) or designed to require a Javascript MVC pattern; these apps are basically brochure, catalog and administrative sites that follow a server-side MVC pattern. The vast majority of the Javascript required is behavioral, pre-built plugins (jQuery and Bootstrap, et al.) that execute on specific DOM nodes. I'm going to give a very brief (as brief as I can be) run-down of the current architecture only in order to illustrate the scope and type of paradigm I'm talking about. Hopefully, it will help you understand the nature of the patterns I'm looking for, but I'm not looking for commentary on the specifics of this code. What I've done in the past is relatively straightforward and easy to maintain, but, as mentioned above, some of the other developers don't like the current architecture. Currently, on document ready, I execute whatever global Javascript needs to occur on every page, and then call a page-specific init function to initialize node-specific functionality, retrieving the init method from a JS object. On each page load, something like this will happen: $(document).ready(function(){ $('header').menuAction(); App.pages.executePage('home','show'); //dynamic from framework request object }); And the main App javascript is like App = { usefulGlobalVar: 0, pages: { executePage: function(action, controller) { // if exists, App.pages[action][controller].init() }, home: { show: { init: function() { $('#tabs').tabs(); //et al. }, normalizeName: function() { // dom-specific utility function that // doesn't require a full-blown component/class/module } }, edit: ... }, user_profile: ... } } Any common features and functionality requiring modularization or componentizing is done as needed with prototyping. For common implementation of plugins, I often extend jQuery, so I can easily initialize a plugin with the same options throughout the site. For example, $('[data-tabs]').myTabs() with this code in a utility javascript file: (function($) { $.fn.myTabs = function() { this.tabs( { //...common options }); }; })(jQuery); Pointers to articles, books or other discussions would be most welcome. Again, I am looking for a site-wide implementation pattern, NOT a JS MVC framework or general how-tos on creating JS classes or components. Thanks for your help!
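
    For readers skimming the flattened code above, here is a minimal, runnable restatement of the per-page init registry described in the question. The home/show page and the myTabs plugin wrapper come from the question itself; the parameter naming, the existence checks and the option values are illustrative only (and the .tabs() call assumes jQuery UI, as in the original snippet):

        // utility file: wrap a plugin so it is always initialized with the same options site-wide
        (function ($) {
            $.fn.myTabs = function () {
                return this.tabs({ collapsible: false });   // common options (illustrative)
            };
        })(jQuery);

        var App = {
            usefulGlobalVar: 0,
            pages: {
                // look up App.pages[controller][action].init() and run it if it exists
                executePage: function (controller, action) {
                    var page = this[controller] && this[controller][action];
                    if (page && typeof page.init === 'function') { page.init(); }
                },
                home: {
                    show: {
                        init: function () { $('[data-tabs]').myTabs(); }
                    }
                }
            }
        };

        $(document).ready(function () {
            App.pages.executePage('home', 'show');   // controller/action emitted by the server-side framework
        });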

    Read the article

  • How to do a cacheable redirection?

    - by John Doe
    When users enter my website example.com, their "preferred" language is detected and they are redirected (using a 301 Moved Permanently redirection) to example.com/en/ (for English), example.com/it/ (for Italian), etc. It works perfectly, but when I analyzed my website with the Google Page Speed tool it gave me the following advice. Many pages, especially mobile pages, redirect users to a different URL, for instance from www.example.com to m.example.com. Making this redirect cacheable by the user's browser can speed up page load times for repeat visitors to a site. And later it says We recommend using a 302 redirect with a cache lifetime of one day. The redirect should include a Vary: User-Agent header as well as a Cache-Control: private header. So my question is: how can I do a "cacheable" redirection in PHP? Would the following be enough? header("HTTP/1.0 302 Moved Temporarily"); header("Location: example.com/whatever"); exit;

    Read the article

  • Approach to Software Development Architecture

    - by john ryan
    Hi, I am planning to standardize the way we create our new projects. Currently we are using a 3-tier architecture, where we have a ClassLibrary project that includes our Data Access Layer and Business Layer. Something like: Solution ClassLibrary >ClassLibrary Project : >DAL(folder) > DAL Classes >BAL(folder) > BAL Classes This class library DLL is referenced by our presentation layer project, which is the application (web/desktop). Something like: Solution WebUniversitySystem >Libraries(folder) > ClassLibrary.dll >WebUniversitySystem(Project): >Reference ClassLibrary.dll >Pages etc... Now what I am planning to do is something like: Solution WebUniversitySystem >DataAccess(Project) >BusinesLayer(Project) >Reference DAL >WebUniversitySystem(Project): >Reference BAL >Pages etc... Is this OK? Or is there a better approach that we can follow? Thanks and regards

    Read the article

  • Joomla and Google Analytics advanced options in tracking code

    - by miako
    I want to insert the Google Analytics tracking code in my Joomla site, so I registered on Google's official site and saw there is an advanced tab with three more options than the standard one. Do I have to check "I want to track dynamic pages" and "I want to track PHP pages"? Do these options give me better results, or are they necessary for a dynamic PHP-based site like Joomla? Does anyone know the installation process? I didn't manage to make it work by following this. Also, where do I place the tracking code? Because of some bugs, some say it is better just after the <body> tag, whereas others say just before the </body> tag. Thank you
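
    For reference, the snippet Google was handing out at the time was the asynchronous ga.js loader, which Google recommended placing at the bottom of the <head> section (the older synchronous snippet is the one usually placed just before </body>). A sketch with a placeholder property ID -- in Joomla this would typically be pasted into the template's index.php or added through a plugin/module:

        <script type="text/javascript">
          var _gaq = _gaq || [];
          _gaq.push(['_setAccount', 'UA-XXXXXXX-X']);   // replace with your own web property ID
          _gaq.push(['_trackPageview']);
          (function () {
            // load ga.js asynchronously so it does not block page rendering
            var ga = document.createElement('script');
            ga.type = 'text/javascript';
            ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            var s = document.getElementsByTagName('script')[0];
            s.parentNode.insertBefore(ga, s);
          })();
        </script>

    Either placement will record pageviews; the asynchronous snippet simply starts loading earlier when placed in the <head>.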

    Read the article

  • Does an onboard video affect the X windows configuration?

    - by Timothy
    Does the onboard video on the motherboard affect the X Window System configuration? My system has both onboard and PCIe video. The onboard video is an NVIDIA GeForce 7025 GPU (onboard graphics, max. shared memory up to 512MB under the OS via TurboCache). I have a PCIe dual-head video card installed with two monitors. The video card is a GeForce 8400 GS with 512MB of memory. When installing Ubuntu 12.04, only one monitor worked. When pulling up System Settings - Displays, it shows a laptop. This is a desktop PC. I did get both monitors to work with the NVIDIA driver using TwinView -- a complicated process! When checking the NVIDIA settings now, it shows the monitors disabled. The NVIDIA X Server Settings tool does show the GPU and all the information. I was thinking it's seeing the onboard video on the motherboard. Why else would it show a laptop?

    Read the article

  • Make my website's dynamically loaded data available to the Facebook Open Graph Object Scraper

    - by fvaliquette
    Here is the design of my web site: the user enters myWebsite.com/a/1; .htaccess rules redirect to myWebsite.com/b; the ExtJS JavaScript library then loads, extracts the value from the URL (in this case "1"), loads ./xml/1.xml, sets the Open Graph data (title, type, image, etc.) from 1.xml, and loads the data that will be shown to the user from 1.xml into the website. My question is: how can I make the Open Graph data available to Facebook? Facebook does not load my ExtJS JavaScript library before extracting the Open Graph object values from the HTML. Is there an easy solution to this problem? The only solutions I have found are to make static web pages or pages rendered dynamically on the server side, but I would like to avoid these since my web page implementation is already finished and I would like to avoid reworking it.

    Read the article

  • How to send packets via a PPTP VPN tunnel?

    - by Phill
    I'm trying to send certain port traffic through my ppp0 interface, which is a PPTP VPN tunnel. First, I'm using a wireless USB interface: I connect to my access point, then I initiate my VPN. There is a connection, but I do not channel all connections through it, nor do I want to. So, say I want to channel all port 80 packets through my VPN (interface dev ppp0). I first run iptables -t mangle -A OUTPUT -p tcp --dport 80 -j MARK --set-mark 0xa to mark the correct packets, then I add a routing table named vpn_table, and then I add ip route add default dev ppp0 table vpn_table. When I do that, traffic begins to dribble through ppp0, but no pages load. I suppose I must have caused some sort of conflict, or the route I'm adding in vpn_table isn't quite right. I'm not sure; I think I'm marking the packets correctly, but I can't be sure of that either. UPDATE: I think I've got part of the issue solved: running tcpdump -i ppp0 showed me that there were indeed outgoing requests via ppp0, but there is never a response, and pages do not load when using that interface. I'm still missing something.

    Read the article

  • PHP is not working properly on Ubuntu 13.10 and mcrypt is missing in phpMyAdmin

    - by mohamad
    I've upgraded from Ubuntu 13.04 to 13.10, but I cannot work with PHP pages or phpMyAdmin. I tried installing LAMP on Ubuntu with sudo apt-get install lamp-server^ phpmyadmin and did all of the configuration correctly. After installation I added the line Include /etc/phpmyadmin/apache.conf to /etc/apache2/apache2.conf and then restarted apache2, but at the bottom of the phpMyAdmin page there is this error: The mcrypt extension is missing. Please check your PHP configuration. I've checked and mcrypt is in my PHP configuration, but phpMyAdmin still reports it as missing. The other problem is that on PHP pages it seems like there is no PHP at all and everything is treated as HTML, because lots of PHP lines are printed in text boxes, like: <? echo $row['details']; ?> Can anybody tell me what I should do?

    Read the article

  • June Edition - Oracle Database Insider

    - by jgelhaus
    Now available.  The June edition of the Oracle Database Insider includes: NEWS June 10: Oracle CEO Larry Ellison Live on the Future of Database Performance At a live webcast on June 10 at Oracle’s headquarters, Oracle CEO Larry Ellison is expected to announce the upcoming availability of Oracle Database In-Memory, which dramatically accelerates business decision-making by processing analytical queries in memory without requiring any changes to existing applications. Read More New Study Confirms Capital Expenditure Savings with Oracle Multitenant A new study finds that Oracle Multitenant, an option of Oracle Database 12c, drives significant savings in capital expenditures by enabling the consolidation of a large number of databases on the same number or fewer hardware resources. Read More VIDEO Oracle Database 12c: Multitenant Environment with Tom Kyte Tom Kyte discusses Oracle Multitenant, followed by a demo of the multitenant architecture that includes moving a pluggable database (PDB) from one multitenant container database to another, cloning a PDB, and creating a new PDB.  and much more.

    Read the article

  • Pre order Nokia Lumia 900 from AT&T for $99.99 and Walmart for $49.99

    - by Gopinath
    Nokia Lumia 900, the flagship Windows Phone smartphone from Nokia, is available for pre-order from AT&T stores. With a two-year contract you can grab the phone by paying $99.99 online, and orders are expected to ship a day or two before the official launch in AT&T stores across the US. Walmart, in an aggressive move, is selling the Nokia Lumia 900 for just $49.99 with a two-year contract, so you save $50 more. In January of this year, Nokia unveiled its plans to launch the Lumia 900 exclusively for the American market. The Nokia Lumia 900 features a 4.3 inch ClearBlack display with a resolution of 800 x 480 pixels, a 1.4 GHz Snapdragon processor, Windows Phone 7.5 (Mango) OS, an 8 megapixel rear camera with f2.2/28mm Carl Zeiss lens and dual LED flash, auto-focus and HD (720p) video recording, a 1 megapixel front-facing camera for video calls, 512 MB RAM, 16 GB internal memory, 14.5 GB user memory and more. Pre-order Nokia Lumia 900 from AT&T and Walmart

    Read the article

  • Proper way to create and work with a subdomain?

    - by Genadinik
    My site got affected by Panda, and I am trying to see if making a subdomain would work. The site is comehike.com, and I created a subdomain, which is currently empty, at hiking.comehike.com. I have a directory /outdoors that has some high-quality, hand-written articles. I want to put those into the new subdomain to see what would happen. My questions are: Should I just copy and paste the files for those pages into the new subdomain's folder, and just change all the links in all my pages from the original domain to the new subdomain? Should I just do a 301 redirect to the new subdomain? Since test.site.com and www.site.com are different domains, will the new page have to start from scratch in terms of PageRank and its rankings in the SERPs?

    Read the article

  • Développement d'applications professionnelles avec Android 2 by Reto Meier, reviewed by verdvaine yan

    I have just read "Développement d'applications professionnelles avec Android 2" by Reto Meier, an engineer at Google. [IMG]http://images-eu.amazon.com/images/P/274402452X.08.LZZZZZZZ.jpg[/IMG] I find it a very good complement to the tutorials available on the Internet. It covers a lot of topics, and the page count is not padded out with screenshots! What I particularly appreciated are all the little pieces of information drawn from his experience that he sprinkles throughout the pages. Have you read it? If so, how does it compare to other books on the subject? Are you going to read it?...

    Read the article

  • Better drivers for SiS 650/740 integrated video?

    - by Bart van Heukelom
    I installed Xubuntu 10.10 on an old box today and the graphical performance is horrid. According to lspci, the video card is this: 01:00.0 VGA compatible controller: Silicon Integrated Systems [SiS] 65x/M650/740 PCI/AGP VGA Display Adapter (prog-if 00 [VGA controller]) Subsystem: ASUSTeK Computer Inc. Device 8081 Flags: 66MHz, medium devsel, IRQ 11 BIST result: 00 Memory at f0000000 (32-bit, prefetchable) [size=128M] Memory at e7800000 (32-bit, non-prefetchable) [size=128K] I/O ports at d800 [size=128] Expansion ROM at <unassigned> [disabled] Capabilities: <access denied> Kernel modules: sisfb Is there a way to make it faster? Alternative drivers? The additional drivers tool shows nothing. I'm specifically interested in improving Java's Java2D rendering speed, because I'll be running a "stat screen" written in that language on it.

    Read the article

  • Best wiki engine to use?

    - by Ross
    Hi, I'm looking to set up a wiki as a simple CMS for a resource page. Mostly just PDFs and Word documents will be hosted, but the two main features I'm looking for are the ability to restrict pages based on user privileges and blog-style comments between the users. From what I've researched, MediaWiki can easily do the first part, restricting users, but I haven't had much luck finding any plugins for comments. I'm trying to avoid the discussion-style pages from Wikipedia and have comments just under the article instead. So far I'm leaning towards trying Tiki out; any other recommendations?

    Read the article

  • Object-Oriented Operating System

    - by nmagerko
    As I thought about writing an operating system, I came across a point that I really couldn't figure out on my own: Can an operating system truly be written in an Object-Oriented Programming (OOP) Language? Being that these types of languages do not allow for direct accessing of memory, wouldn't this make it impossible for a developer to write an entire operating system using only an OOP Language? Take, for example, the Android Operating System that runs many phones and some tablets in use around the world. I believe that this operating system uses only Java, an Object-Oriented language. In Java, I have been unsuccessful in trying to point at and manipulate a specific memory address that the run-time environment (JRE) has not assigned to my program implicitly. In C, C++, and other non-OOP languages, I can do this in a few lines. So this makes me question whether or not an operating system can be written in an OOP, especially Java. Any counterexamples or other information is appreciated.

    Read the article

  • Solving Big Problems with Oracle R Enterprise, Part II

    - by dbayard
    Part II – Solving Big Problems with Oracle R Enterprise In the first post in this series (see https://blogs.oracle.com/R/entry/solving_big_problems_with_oracle), we showed how you can use R to perform historical rate of return calculations against investment data sourced from a spreadsheet.  We demonstrated the calculations against sample data for a small set of accounts.  While this worked fine, in the real world the problem is much bigger because the amount of data is much bigger.  So much bigger that our approach in the previous post won't scale to meet the real-world needs. From our previous post, here are the challenges we need to conquer: the actual data that needs to be used lives in a database, not in a spreadsheet; the actual data is much, much bigger - too big to fit into the normal R memory space and too big to want to move across the network; the overall process needs to run fast - much faster than a single processor; the actual data needs to be kept secure - another reason not to move it from the database and across the network; and the process of calculating the IRR needs to be integrated with other database ETL activities, so that IRRs can be calculated as part of the data warehouse refresh processes. In this post, we will show how we moved from a sample data environment to working with full-scale data.  This post is based on actual work we did for a financial services customer during a recent proof-of-concept. Getting started with the Database: At this point, we have some sample data and our IRR function.  We were at a similar point in our customer proof-of-concept exercise - we had sample data but we did not have the full customer data yet.  So our database was empty.  But this was easily rectified by leveraging the transparency features of Oracle R Enterprise (see https://blogs.oracle.com/R/entry/analyzing_big_data_using_the).  The following code shows how we took our sample data SimpleMWRRData and easily turned it into a new Oracle database table called IRR_DATA via ore.create().  The code also shows how we can access the database table IRR_DATA as if it were a normal R data.frame named IRR_DATA. If we go to sql*plus, we can also check out our new IRR_DATA table. At this point, we now have our sample data loaded in the database as a normal Oracle table called IRR_DATA, so we proceeded to test our R function against database data. As our first test, we retrieved the data for a single account from the IRR_DATA table, pulled it into local R memory, then called our IRR function.  This worked.  No SQL coding required! Going from Crawling to Walking: Now that we have shown our R code working with database-resident data for a single account, we wanted to experiment with doing this for multiple accounts.  In other words, we wanted to implement the split-apply-combine technique we discussed in our first post in this series.  Fortunately, Oracle R Enterprise provides a very scalable way to do this with a function called ore.groupApply().  You can read more about ore.groupApply() here: https://blogs.oracle.com/R/entry/analyzing_big_data_using_the1 Here is an example of how we ask ORE to take our IRR_DATA table in the database, split it by the ACCOUNT column, apply a function that calls our SimpleMWRR() calculation, and then combine the results. (If you are following along at home, be sure to have installed our myIRR package on your database server via "R CMD INSTALL myIRR".)
The interesting thing about ore.groupApply is that the calculation is not actually performed in my desktop R environment from which I am running.  What actually happens is that ore.groupApply uses the Oracle database to perform the work.  And the Oracle database is what actually splits the IRR_DATA table by ACCOUNT.  Then the Oracle database takes the data for each account and sends it to an embedded R engine running on the database server to apply our R function.  Then the Oracle database combines all the individual results from the calls to the R function. This is significant because now the embedded R engine only needs to deal with the data for a single account at a time.  Regardless of whether we have 20 accounts or 1 million accounts or more, the R engine that performs the calculation does not care.  Given that normal R has a finite amount of memory to hold data, the ore.groupApply approach overcomes the R memory scalability problem since we only need to fit the data from a single account in R memory (not all of the data for all of the accounts). Additionally, the IRR_DATA does not need to be sent from the database to my desktop R program.  Even though I am invoking ore.groupApply from my desktop R program, because the actual SimpleMWRR calculation is run by the embedded R engine on the database server, the IRR_DATA does not need to leave the database server- this is both a performance benefit because network transmission of large amounts of data take time and a security benefit because it is harder to protect private data once you start shipping around your intranet. Another benefit, which we will discuss in a few paragraphs, is the ability to leverage Oracle database parallelism to run these calculations for dozens of accounts at once. From Walking to Running ore.groupApply is rather nice, but it still has the drawback that I run this from a desktop R instance.  This is not ideal for integrating into typical operational processes like nightly data warehouse refreshes or monthly statement generation.  But, this is not an issue for ORE.  Oracle R Enterprise lets us run this from the database using regular SQL, which is easily integrated into standard operations.  That is extremely exciting and the way we actually did these calculations in the customer proof. As part of Oracle R Enterprise, it provides a SQL equivalent to ore.groupApply which it refers to as “rqGroupEval”.  To use rqGroupEval via SQL, there is a bit of simple setup needed.  Basically, the Oracle Database needs to know the structure of the input table and the grouping column, which we are able to define using the database’s pipeline table function mechanisms. Here is the setup script: At this point, our initial setup of rqGroupEval is done for the IRR_DATA table.  The next step is to define our R function to the database.  We do that via a call to ORE’s rqScriptCreate. Now we can test it.  The SQL you use to run rqGroupEval uses the Oracle database pipeline table function syntax.  The first argument to irr_dataGroupEval is a cursor defining our input.  You can add additional where clauses and subqueries to this cursor as appropriate.  The second argument is any additional inputs to the R function.  The third argument is the text of a dummy select statement.  The dummy select statement is used by the database to identify the columns and datatypes to expect the R function to return.  The fourth argument is the column of the input table to split/group by.  
The final argument is the name of the R function as you defined it when you called rqScriptCreate(). The Real-World Results: In our real customer proof-of-concept, we had more sophisticated calculation requirements than shown in this simplified blog example.  For instance, we had to perform the rate of return calculations for 5 separate time periods, so the R code was enhanced to do so.  In addition, some accounts needed a time-weighted rate of return to be calculated, so we extended our approach and added an R function to do that.  And finally, there were also a few more real-world data irregularities that we needed to account for, so we added logic to our R functions to deal with those exceptions.  For the full-scale customer test, we loaded the customer data onto a Half-Rack Exadata X2-2 Database Machine.  As our half-rack had 48 physical cores (and 96 threads if you consider hyperthreading), we wanted to take advantage of that CPU horsepower to speed up our calculations.  To do so with ORE, it is as simple as leveraging the Oracle Database Parallel Query features.  Looking at the SQL used in the customer proof, notice that we use a parallel hint on the cursor that is the input to our rqGroupEval function.  That is all we need to do to enable Oracle to use parallel R engines. Watching this SQL in the Real-Time SQL Monitor when we ran it during the proof of concept, you can notice a few things: the SQL completed in 110 seconds (1.8 minutes); we calculated rates of return for 5 time periods for each of 911k accounts (the number of actual rows returned by the IRRSTAGEGROUPEVAL operation); we accessed 103m rows of detailed cash flow/market value data (the number of actual rows returned by the IRR_STAGE2 operation); we ran with 72 degrees of parallelism spread across 4 database servers; most of our 110 seconds was spent in the "External Procedure call" event; on average, we performed 8,200 executions of our R function per second (110s/911k accounts); on average, each execution was passed 110 rows of data (103m detail rows/911k accounts); on average, we did 41,000 single time period rate of return calculations per second (each of the 8,200 executions of our R function did rate of return calculations for 5 time periods); and on average, we processed over 900,000 rows of database data in R per second (103m detail rows/110s). R + Oracle R Enterprise: Best of R + Best of Oracle Database: This blog post series started by describing a real customer problem: how to perform a lot of calculations on a lot of data in a short period of time.  While standard R proved to be a very good fit for writing the necessary calculations, the challenge of working with a lot of data in a short period of time remained. This blog post series showed how Oracle R Enterprise enables R to be used in conjunction with the Oracle Database to overcome the data volume and performance issues (as well as simplifying the operations and security issues).  It also showed that we could calculate 5 time periods of rates of return for almost a million individual accounts in less than 2 minutes.
In a future post, we will take the same R function and show how Oracle R Connector for Hadoop can be used in the Hadoop world.  In that next post, instead of having our data in an Oracle database, our data will live in Hadoop, and we will show how to use the Oracle R Connector for Hadoop and other Oracle Big Data Connectors to move data between Hadoop, R, and the Oracle Database easily.

    Read the article

  • How to setup ATI Mobility Radeon HD 4650?

    - by Thi An
    I'm currently using Ubuntu 12.04 64bit. After installing ATI/AMD proprietary FGLRX graphics drivers via Additional Drivers, I checked the status of my VGA card using lspci -v. Here's the output: 02:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI M96 [Mobility Radeon HD 4650] (prog-if 00 [VGA controller]) Subsystem: Dell Device 0456 Flags: bus master, fast devsel, latency 0, IRQ 46 Memory at d0000000 (32-bit, prefetchable) [size=256M] I/O ports at 2000 [size=256] Memory at cfef0000 (32-bit, non-prefetchable) [size=64K] [virtual] Expansion ROM at cfe00000 [disabled] [size=128K] Capabilities: Kernel driver in use: fglrx_pci Kernel modules: fglrx, radeon As mentioned in the title, my VGA card is 1GB and yet my computer only recognizes 256MB. My question is: "How to make my computer fully recognize the capacity of my ATI Mobility Radeon HD 4650 (1GB)?"

    Read the article

  • Why not use unsafe/unmanaged code in C#?

    - by user613326
    There is an option in C# to execute code as unsafe/unchecked. It's generally not advised to do so, as managed code is much safer and overcomes a lot of problems. However, I am wondering: if you're sure your code won't cause errors and you know how to handle memory, then why (if you like fast code) follow the general advice? I am wondering this since I wrote a program for a video camera, which required some extremely fast bitmap manipulation. I wrote some fast graphical algorithms myself, and they work excellently on the bitmaps using unmanaged code. Now I wonder in general: if you're sure you don't have memory leaks or risks of crashes, why not use unmanaged code more often? PS, my background: I kind of rolled into this programming world and I work alone (I have done so for a few years), so I hope this software design question isn't that strange. I don't really have other people, like a teacher, to ask such things.

    Read the article

  • How to recover Ubuntu 11.04, which is still on the hard disk

    - by Yaskadeva
    I had installed Ubuntu 11.04 as a separate OS (not inside Windows), meaning every time I booted I got a boot screen where I could select Ubuntu or Windows. But then I formatted my Windows partition. Since then, the 38 GB of disk space that was used by Ubuntu is missing: Ubuntu is still there, but as the partition is an ext type, Windows is not able to access it, and I am not able to boot into it. I need Ubuntu; I can install a new version, but that disk space is being wasted and I do not know what to do. Please reply ASAP.

    Read the article
