Search Results

Search found 235 results on 10 pages for 'reduction'.

Page 6/10 | < Previous Page | 2 3 4 5 6 7 8 9 10  | Next Page >

  • SQLAuthority News – Whitepaper Download – Using Star Join and Few-Outer-Row Optimizations to Improve Data Warehousing Queries

    - by pinaldave
    Database sizes are growing every day. Many organizations now have more than a terabyte of data in their systems, and performance is always part of the issue. Microsoft is paying attention to this and focusing on improving performance for data warehousing. Microsoft recently released a whitepaper on performance tuning for data warehousing. Here is the abstract from the official site: In this white paper we discuss two of the new features introduced in SQL Server 2008, Star Join and Few-Outer-Row optimizations. These two features are in SQL Server 2008 R2 as well. We test the performance of SQL Server 2008 on a set of complex data warehouse queries designed to highlight the effect of these two features and observed a significant performance gain over SQL Server 2005 (without these two features). The results observed also apply to SQL Server 2008 R2. On average, about 75 percent of the query execution time has been reduced, compared to SQL Server 2005. We also include data that shows a reduction in the number of rows processed and improved balance in parallel queries, both of which highlight the important role the Star Join and Few-Outer-Row features played. I encourage everyone interested in data warehousing to read it and see what tricks they can learn. Using Star Join and Few-Outer-Row Optimizations to Improve Data Warehousing Queries Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • Turn off Windows Defender on your builds

    - by george_v_reilly
    I've spent some time this evening profiling a Python application on Windows, trying to find out why it was so much slower than on Mac or Linux. The application is an in-house build tool which reads a number of config files, then writes some output files. Using the RunSnakeRun Python profile viewer on Windows, two things immediately leapt out at me: we were running os.stat a lot and file.close was really expensive. A quick test convinced me that we were stat-ing the same files over and over. It was a combination of explicit checks and implicit code, like os.walk calling os.path.isdir. I wrote a little cache that memoizes the results, which brought the cost of the os.stats down from 1.5 seconds to 0.6. Figuring out why closing files was so expensive was harder. I was writing 77 files, totaling just over 1MB, and it was taking 3.5 seconds. It turned out that it wasn't the UTF-8 codec or newline translation. It was simply that closing those files took far longer than it should have. I decided to try a different profiler, hoping to learn more. I downloaded the Windows Performance Toolkit. I recorded a couple of traces of my application running, then I looked at them in the Windows Performance Analyzer, whereupon I saw that in each case, the CPU spike of my app was followed by a CPU spike in MsMpEng.exe. What's MsMpEng.exe? It's Microsoft's antimalware engine, at the heart of Windows Defender. I added my build tree to the list of excluded locations, and my runtime halved. The 3.5 seconds of file closing dropped to 60 milliseconds, a 98% reduction. The moral of this story is: don't let your virus checker run on your builds.
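    To make the memoization idea above concrete, here is a minimal sketch of a stat cache (my own illustration, not the author's actual code; the helper name is made up). It simply remembers each os.stat result, which is only safe while the build tree is not being modified:

        import os

        _stat_cache = {}

        def cached_stat(path):
            # Return a cached os.stat result; fall back to the real call on a miss.
            # Assumes the files are not changing while the build tool runs.
            try:
                return _stat_cache[path]
            except KeyError:
                result = _stat_cache[path] = os.stat(path)
                return result

    Call sites then use cached_stat(path) in place of os.stat(path); a cache along these lines is the kind of thing the author describes.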

    Read the article

  • How do graphics programmers deal with rendering vertices that don't change the image?

    - by canisrufus
    So, the title is a little awkward. I'll give some background, and then ask my question. Background: I work as a web GIS application developer, but in my spare time I've been playing with map rendering and improving data interchange formats. I work only in 2D space. One interesting issue I've encountered is that when you're rendering a polygon at a small scale (zoomed way out), many of the vertices are redundant. An extreme case would be that you have a polygon with 500,000 vertices that only takes up a single pixel. If you're sending this data to the browser, it would make sense to omit ~499,999 of those vertices. One way we achieve that is by rendering an image on a server and sending it as a PNG: voila, it's a point. Sometimes, though, we want data sent to the browser where it can be rendered with SVG (or canvas, or webgl) so that it can be interactive. The problem: It turns out that, using modern geographic data sets, it's very easy to overload SVG's rendering abilities. In an effort to cope with those limitations, I'm trying to figure out how to visually losslessly reduce a data set for a given scale and map extent (and, if necessary, for a known map pixel width and height). I got a great reduction in data size just using the Douglas-Peucker algorithm, and I believe I was able to get it to keep the polygons true to within one pixel. Unfortunately, Douglas-Peucker doesn't preserve topology, so it changed how borders between polygons got rendered. I couldn't readily find other algorithms to try out and adapt to the purpose, but I don't have much CS/algorithm background and might not recognize them if I saw them.
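    Since the post leans on it, here is a minimal recursive sketch of Douglas-Peucker simplification (my own illustration, not the author's code; the tolerance would come from the pixel size at the target scale). As noted above, it simplifies each line independently and does not preserve shared borders between polygons:

        import math

        def point_line_distance(p, a, b):
            # Perpendicular distance from point p to the infinite line through a and b.
            (px, py), (ax, ay), (bx, by) = p, a, b
            dx, dy = bx - ax, by - ay
            if dx == 0 and dy == 0:
                return math.hypot(px - ax, py - ay)
            return abs(dy * px - dx * py + bx * ay - by * ax) / math.hypot(dx, dy)

        def douglas_peucker(points, tolerance):
            # Keep the endpoints; recurse around the farthest point if it exceeds tolerance.
            if len(points) < 3:
                return list(points)
            index, dmax = 0, 0.0
            for i in range(1, len(points) - 1):
                d = point_line_distance(points[i], points[0], points[-1])
                if d > dmax:
                    index, dmax = i, d
            if dmax > tolerance:
                left = douglas_peucker(points[:index + 1], tolerance)
                right = douglas_peucker(points[index:], tolerance)
                return left[:-1] + right
            return [points[0], points[-1]]

    Topology-preserving alternatives (for example, simplifying the shared edges of a polygon mesh rather than each ring separately) are what the question is effectively asking for.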

    Read the article

  • Final agenda - Oracle Exadata & Manageability Partner Community Forum at OpenWorld

    - by Javier Puerta
    Just a few days to go until Oracle OpenWorld and our Exadata & Manageability Partner Community Forum for EMEA partners. The event will take place on the afternoon of Monday, October 1st, 2012, during the Oracle OpenWorld week. For all partners who have confirmed their attendance, the final detailed agenda is below. I look forward to meeting again in San Francisco with all of you who can attend, and hope that you will find the sessions useful for your business.
    FINAL AGENDA - Oracle Exadata & Manageability EMEA Partner Community Forum at Oracle OpenWorld 2012 in San Francisco, USA - Monday, October 1st, 2012
    15:30 Reception of participants - networking coffee served
    16:00 Welcome - Hans-Peter Kipfer, VP Engineered Systems, Oracle EMEA
    16:10 Next challenges in building and managing clouds - Javier Cabrerizo, VP, Global Business Development for Exadata, Oracle Corp.
    16:30 Partner experience 1: IT modernization, simplification and cost reduction. The case of a customer in Transportation & Logistics with custom applications and SAP. The Technological Renewal Model built by aligning the innovation of Oracle's Engineered Systems and Capgemini's service delivery excellence has resulted in significant cost savings for the client. - Francisco Bermúdez, Country Leader Infrastructure Services, Capgemini, Spain
    16:55 Partner experience 2: The Nvision cloud project. NCloud is an innovative design that combines advanced technical solutions, virtualization, and dynamic management of IT resources, providing a complete "as-a-Service" offering for Infrastructure, Database, Middleware, and Applications. - Dmitry Krasilov, Head of Oracle Competence Center, Nvision Group, Russia
    17:20 Partner experience 3: From Exadata Ready to Exadata Optimized - an ISV experience. The experience of WeDo Technologies in the process and benefits that started as an Exadata Ready certification and ended up as Exadata Optimized. - Miguel Alves, Product Business Solutions Manager, WeDo Technologies, Portugal
    17:45 Next steps in engaging with Oracle - Cengiz Yilmaz, Director Partner Strategy, Oracle EMEA Engineered Systems; Patrick Rood, Manageability Partner Business, Oracle EMEA
    18:00 Wrap-up & networking
    Time and location: Monday, October 1st, 2012, 15:30 - 18:00 PST, Grand Hyatt San Francisco, 345 Stockton Street, San Francisco (Conference Theater). It is a 15-minute walk from the OOW Moscone Center; see directions here.

    Read the article

  • SQL Rally Pre-Con: Data Warehouse Modeling – Making the Right Choices

    - by Davide Mauri
    As you may have already learned from my old post or Adam's or Kalen's posts, there will be two SQL Rally events in Northern Europe. At the Stockholm SQL Rally, together with my friend Thomas Kejser, I'll be delivering a pre-con on Data Warehouse Modeling: Data warehouses play a central role in any BI solution. It's the back end upon which everything in years to come will be created. For this reason, it must be rock solid and yet flexible at the same time. To develop such a data warehouse, you must have a clear idea of its architecture, a thorough understanding of the concepts of Measures and Dimensions, and a proven, engineered way to build it so that quality and stability can go hand-in-hand with cost reduction and scalability. In this workshop, Thomas Kejser and Davide Mauri will share all the information they have learned since they started working with data warehouses, giving you the guidance and tips you need to start your BI project in the best way possible: avoiding errors, making implementation effective and efficient, paving the way for a winning Agile approach, and helping you define how your team should work so that your BI solution will stand the test of time. You'll learn: data warehouse architecture and justification; Agile methodology; dimensional modeling, including Kimball vs. Inmon, SCD1/SCD2/SCD3, Junk and Degenerate Dimensions, and Huge Dimensions; best practices, naming conventions, and lessons learned; loading the data warehouse, including loading Dimensions and loading Facts (Full Load, Incremental Load, Partitioned Load); data warehouses and Big Data (Hadoop); unit testing; and tracking historical changes and managing large sizes. With all the Self-Service BI hype, the data warehouse is becoming more and more central every day: if everyone is able to analyze data using self-service tools, it's better for them to rely on correct, uniform and coherent data. Already 50 people have registered for the workshop and seats are limited, so don't miss this unique opportunity to attend a workshop that is truly a unique combination of years and years of experience! http://www.sqlpass.org/sqlrally/2013/nordic/Agenda/PreconferenceSeminars.aspx See you there!

    Read the article

  • Too much I/O in the morning ?

    - by steveh99999
    An interesting little improvement on a SQL 2005 system I encountered recently. Some background: this system had a fairly 'traditional OLTP' workload, i.e. heavily used during the day until around 9pm, then a batch window for several hours, then not much activity in the early hours, until the normal workload resumed the following morning. Using perfmon, I noticed that every morning we would see a big spike in SQL Server I/O when the application started to be used. As it was 2005, I decided to look at what tables were in cache before and after the overnight batch processing ran (using the DMV equivalent of dbcc memusage that I posted earlier). Before the batch (chart omitted in this excerpt), the contents of the data cache were split fairly evenly between my 'important/heavily used' tables. After this, some application batch processing, backups, DBCC checks and reindexes were run - a fairly standard batch, I'd suggest. Afterwards, most of the cache was being used by a table I'd described as 'unimportant'. Why? Well, that table was the last to be reindexed - purely by luck, as the reindexing stored procedure looped through all application tables in alphabetical order. When the application starts to be used again, all this 'unimportant' data has to be replaced in cache by data that is heavily used. So we changed the overnight reindex scripts: the most heavily accessed tables are now the last to be reindexed. Obvious really, but we did see a significant reduction in early-morning I/O after changing the order of our reindexing.

    Read the article

  • News from OpenWorld: Oracle Announces Identity Governance Suite

    - by Tanu Sood
    At OpenWorld, Oracle today announced the release of Oracle Identity Governance Suite. An end-to-end access governance solution, Oracle Identity Governance Suite addresses compliance, governance and identity administration requirements. Built on Oracle's unique platform approach to Identity Management, the suite offers a single, comprehensive platform for access request, provisioning, role lifecycle management, access certification, closed-loop remediation and privileged account management. The suite offers benefits like a dramatic reduction in administration (and help desk) overhead, cost-effective compliance enforcement and reporting, enhanced user experience and analytics-driven insight. More details are available in the announcement and on our website. Additional resources: Oracle Identity Governance Datasheet; Oracle Privileged Account Manager; Integrated Identity Governance Whitepaper; Gartner Magic Quadrant for User Provisioning; and the Oracle Identity Management online communities: Blog, Facebook and Twitter.

    Read the article

  • What kinds of demos are good to make for a software engineer job

    - by user23012
    I have created my CV site and have been sending out my demos for a while now, but most of my demos are either from my course or games-related, since my course was a games programming course. I was wondering what kinds of demos are good for showing off my skills in programming in general. These are what I already have: Pennies: just a simple game, the first coursework I did. Compiler: coursework for a compiler-writing module. Pongout: a basic pong game in 68k using colour detection. Snake: snake in 68k, same thing as the pong. Game Cube Maze: GameCube work. BeatmyBot: basic AI. Basic platformer game: a 2D game with different types of collision. Turing Lambda Simulation: my dissertation - a Turing machine simulated in Miranda, with alpha and beta reduction and SKI calculus simulated in the Turing machine. What I am asking here is what kinds of demos are good to add or have. I have been looking and have hit a tough spot: I can't think of anything to make other than games. So for a general graduate software engineer, what types would be good examples? EDIT: responding to the comments below - as for languages, my main one is C++, followed by Java, Erlang and a bit of Haskell.

    Read the article

  • K-12 and Cloud considerations

    - by user736511
    Much like every other Public Sector organization, school districts in the US and Canada are under tremendous pressure to deliver consistent and modern services while operating with reduced budgets, IT personnel shortages, and staff attrition.  Electronic/remote learning and the need for immediate access to resources such as grades, calendars, curricula etc. are straining IT environments that were already burdened with meeting privacy requirements imposed by both regulators and parents/students.  One area viewed as a solution to at least some of the challenges is the use of "Cloud" in education.  Although the concept of "Cloud" is nothing new in education, with many providers supplying educational material over the web, school districts defer previously in-house-hosted services to established commercial vendors to accommodate document sharing, app hosting, and even e-mail.  Doing so, however, does not reduce an important risk: that of privacy.  As always, Cloud implementations are viewed in a skeptical manner because of the perceived reduction in sensitive data management and protection thereof, although with a careful approach and the right tooling, the benefits realized by Clouds can expand to security and privacy.   Oracle's comprehensive approach to data privacy and identity management ensures that the necessary tools are available to support regulations, operational efficiencies and strong security regardless of where the sensitive data is stored - on premise or in a Cloud.  Common management tools, role-based access controls, access policy management and engineered systems provided by Oracle can be the foundational pieces on which school districts build their Cloud implementations without having to worry about security itself. Their biggest challenge, and it is a positive one, is how to best take advantage of Oracle's DB Security and IDM functionality to reduce operational costs while enabling modern applications and data delivery to those who need access to it. For more information please refer to http://www.oracle.com/us/products/middleware/identity-management/overview/index.html and http://www.oracle.com/us/products/database/security/overview/index.html.

    Read the article

  • Drink Milk or Got a Pet? Watch what IDEXX Laboratories and Oracle do for you

    - by Ruma Sanyal
    IDEXX Laboratories is the global market leader in diagnostics and IT for animal health [with 50,000 veterinary practices worldwide], and water and milk quality. Watch this video where Brett Curtis, Senior System Administrator from IDEXX, discusses their business applications and laboratory information management systems. IDEXX uses Oracle WebLogic Server, SOA Suite, Coherence, Enterprise Manager and more. Enterprise Manager is used to manage their entire stack and has enabled IDEXX to achieve an astounding 90% reduction in time to find root cause of problems in their application infrastructure.

    Read the article

  • Cost effective way to provide static media content

    - by james
    I'd like to be able to deliver around 50MB of static content, either in about 30 individual files up to 10MB or grouped into 3 compressed files, around 5k to 20k times a day. Ideally I'd like to put some sort of very basic security around providing the data to ensure that a request is from the expected source, but if tossing the security for a big reduction in price is possible then it's an option. Does anyone have any suggestions other than what I've found: Google AppEngine is $0.12/GB & I believe has a file size limit of 10MB so I'd have to break the data up a bit. So a rough calculation would seem to be that this would cost me about $30 to $120 a day. Or I've seen something like what seems to be just public static content delivery with no type of logic capabilities like Usenet.nl at what I think calculates to about $0.025/GB which would cost me about $6 to $25 a day. Any idea if I'm going about these calculations right & if there might be a better option for just static content on a decently high volume delivery? Again some basic security would be great but if cost is greatly reduced without it then I'm up for that.
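    For what it's worth, the rough figures in the question check out; a quick back-of-the-envelope sketch using the quoted per-GB rates (illustrative only):

        payload_gb = 50 / 1024.0                   # ~50MB per delivery
        for requests_per_day in (5000, 20000):
            daily_gb = payload_gb * requests_per_day
            print(requests_per_day, round(daily_gb * 0.12, 2), round(daily_gb * 0.025, 2))
        # ~5k/day  -> ~244 GB/day: ~$29 at $0.12/GB, ~$6 at $0.025/GB
        # ~20k/day -> ~977 GB/day: ~$117 at $0.12/GB, ~$24 at $0.025/GB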

    Read the article

  • INETA NorAm Component Code Challenge

    - by Chris Williams
    Want to win a trip to TechEd 2011? INETA NorAm is hosting a contest with our partners to see who can build a .NET application making effective use of reusable components to solve a problem. The Rules: Any .NET application (WinForms, ASP.NET, WPF, Silverlight, Windows Phone 7, etc.) built in the last year (since 1/1/2010) using at least 1 component from at least 1 approved vendor. Then make a 3-5 minute Camtasia video showing your entry and describing what component(s) you used and why your application is cool. Our judges will review the submissions and the best two will win a scholarship to Tech·Ed 2011, May 16-19 in Atlanta, GA, including airfare, hotel, and conference pass. The Judging: Entries will be judged on four criteria: effective use of a component to solve a problem/display data; innovative use of components; impact using components (i.e. reduction in lines of code written, increased productivity, etc.); and most creative use of a component. Timeline: Hurry! The submission deadline is March 15, 2011 at midnight Eastern Standard Time. More information can be found on the INETA Component Code Challenge page: http://ineta.org/CodeChallenge/Default.aspx

    Read the article

  • get eigenvalue pca with java

    - by Muhamad Burhanudin
    I am trying to use PCA to reduce dimensionality, and I use Jama to help with the matrix work, but I have a problem getting the eigenvalues with Jama. For example, I have 2 images of dimension 100x100, and I create a single matrix of 2 images x (100x100), so there are 20,000 pixels. How do I do the reduction with the eigenvalues? This is a sample of my code:

        public static void main(String[] args) {
            BufferedImage bi;
            int[] rgb;
            int R, G, B;
            double[][] gray = new double[500][500];
            String[] baris = new String[1000];
            try {
                bi = ImageIO.read(new File("D:\\c.jpg"));
                int[][] pixelData = new int[bi.getHeight() * bi.getWidth()][3];
                int counter = 0;
                for (int i = 0; i < bi.getHeight(); i++) {
                    for (int j = 0; j < bi.getWidth(); j++) {
                        gray[i][j] = getPixelData(bi, i, j);
                        counter++;
                    }
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
            Matrix matrix = new Matrix(gray);
            PCA pca = new PCA(matrix);
            pca.getEigenvalue(6);
            String n = pca.toString();
            System.err.println("nilai n " + n);
        }

        private static int getPixelData(BufferedImage bi, int x, int y) {
            int argb = bi.getRGB(y, x);
            int r, g, b;
            int gray;
            int rgb[] = new int[]{
                (argb >> 16) & 0xff, // red
                (argb >> 8) & 0xff,  // green
                (argb) & 0xff        // blue
            };
            r = rgb[0];
            g = rgb[1];
            b = rgb[2];
            gray = (r + g + b) / 3;
            System.out.println("gray: " + gray);
            return gray;
        }

    When I try to show the eigenvalues with this code:

        PCA pca = new PCA(matrix);
        pca.getEigenvalue(6);
        String n = pca.toString();
        System.err.println("nilai n " + n);

    the result is: nilai n PCA@c3e9e9 (the default toString(), not the values). Can you tell me how to get the eigenvalues and reduce the dimensionality?
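    Not an answer tied to the Jama API used above, but for reference, the usual eigenvalue-based PCA recipe looks like this minimal NumPy sketch (my own illustration): center the data, eigendecompose the covariance matrix, sort the eigenvalues, and project onto the top-k eigenvectors:

        import numpy as np

        def pca_reduce(X, k):
            # X: (n_samples, n_features) matrix, e.g. 2 images x 10000 gray pixels.
            X = X - X.mean(axis=0)                  # center each feature
            cov = np.cov(X, rowvar=False)           # covariance matrix of the features
            eigvals, eigvecs = np.linalg.eigh(cov)  # eigh is for symmetric matrices
            order = np.argsort(eigvals)[::-1]       # largest eigenvalues first
            components = eigvecs[:, order[:k]]      # top-k principal components
            return eigvals[order], X.dot(components)  # eigenvalues + reduced data

    With only a couple of images and thousands of pixels per image, the full covariance matrix is huge; in practice an SVD of the centered data (np.linalg.svd) gives the same components far more cheaply.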

    Read the article

  • Get to Know a Candidate (6 of 25): Jill Stein&ndash;Green Party

    - by Brian Lanham
    DISCLAIMER: This is not a post about "Romney" or "Obama". This is not a post about whom I am voting for. Information sourced from Wikipedia. Stein is a physician with degrees from Harvard College and Harvard Medical School. She serves on the boards of Greater Boston Physicians for Social Responsibility and MassVoters for Fair Elections, and has been active with the Massachusetts Coalition for Healthy Communities. Jill Stein advocates a "Green New Deal" in which renewable energy jobs would be created to address climate change and environmental issues with the objective of employing "every American willing and able to work". Citing the research of Dr. Phillip Harvey, Professor of Law & Economics at Rutgers University, as evidence of the successful economic effects of the 1930s' New Deal projects, Stein would fund the plan with a 30% reduction in the U.S. military budget, returning US troops home, and increasing taxes on areas such as capital gains, offshore tax havens and multimillion-dollar real estate. Stein plans on impacting what she sees as a growing convergence of environmental crises in water, soil, fisheries and forests through the creation of sustainable infrastructure based in clean renewable energy generation and sustainable-communities principles such as increasing intra-city mass transit and inter-city railroads, creating 'complete streets' that safely encourage bike and pedestrian traffic, and regional food systems based on sustainable organic agriculture. The Green Party of the United States was founded in 1991 as a voluntary association of state green parties. With its founding, the Green Party of the United States became the primary national Green organization in the United States, eclipsing the Greens/Green Party USA, which emphasized non-electoral movement building. The Green Party of the United States of America emphasizes environmentalism, non-hierarchical participatory democracy, social justice, respect for diversity, peace and nonviolence. Their "Ten Key Values," which are described as non-authoritative guiding principles, are as follows: grassroots democracy; social justice and equal opportunity; ecological wisdom; nonviolence; decentralization; community-based economics; feminism and gender equality; respect for diversity; personal and global responsibility; and future focus and sustainability. The Green Party does not accept donations from corporations. Thus, the party's platforms and rhetoric critique any corporate influence and control over government, media, and American society at large. Stein has access to 403 electoral votes and is a write-in candidate in GA, IN, and MS. Learn more about Jill Stein and the Green Party on Wikipedia.

    Read the article

  • Weekly Cloud Roundup 2012-15

    - by Alan Smith
    Filtering the informative, insightful and quirky from the fire hose of cloud-based hype. Irving Wladawsky-Berger provides some great insight into The Complex Transition to the Cloud, sharing his views on the slow adoption of cloud computing in organizations: "…a prediction by the research firm Gartner that while cloud computing will continue to grow at almost 20 percent a year, it will account for less than 5 percent of total IT spending in 2015." With a more positive mindset, Balaji Viswanathan highlights 7 Salient Trends and Directions in Cloud Computing that could be shaping the industry over the next few years. Cloud computing also looks to save energy: "A small business with 100 users that moved the Microsoft applications to the cloud could cut energy use and carbon emissions by 90%. Large organizations with 10,000 users saw a 30% reduction." More on that story here. The expansion of Windows Azure has been in the news with the announcement of "East US" and "West US" datacenters; this was covered by Visual Studio Magazine and Mary-Jo, and according to thenextweb.com Microsoft is also building a $112 million data center in Wyoming. The cloud price war is still in full swing, with Joe Panettieri discussing the pricing of Windows Azure and Office 365 and asking How Low Can It Go?

    Read the article

  • Encrypt SSD or not?

    - by JamesBradbury
    My desktop machine is running Ubuntu 12.04 (and will probably stay with it until the next LTS). I've got a new 120GB SSD on the way to sit alongside my existing 420GB spinning disk. If it makes any difference, I'll be dual-booting with Windows 7 across both disks too. I've read some helpful answers here about /home setup and enabling TRIM, which I intend to follow. So most of my /home will be on the SSD, with only photos, videos and music on the spinning disk. The question is whether, when I reinstall Ubuntu from CD or USB, I should encrypt the SSD? Specifically: I'm reading that drive wear isn't much of an issue with modern SSDs, as they last decades even if you spam them. Is this true? How big a performance reduction will encrypting cause (I have an i7 Sandybridge, so I guess it can cope)? Is it more important from a security point of view to encrypt an SSD? I think I read somewhere that it may be hard to reliably wipe data. By all means answer even if you only know about one of those things.

    Read the article

  • Price Drop for Processor based License on Exalytics

    - by Mike.Hallett(at)Oracle-BI&EPM
    - 33% reduction in the list "per processor" license pricing for the Oracle BI Foundation Suite
    - New capacity-based licensing which allows customers to think big and start small, significantly lowering the entry price point for an Exalytics
    Oracle BI Software List Price changes: In response to new powerful platforms like the in-memory Oracle Exalytics with 40 CPU cores (counted under Oracle pricing policy as 20 "processors"), the list price of "Oracle BI Foundation Suite" (BIFS) is reduced by 33%, from $450K per processor to $300K per processor.
    Capacity-based licensing on Exalytics (Trusted Partitions): "Capacity-based pricing" for the BIFS, Endeca, Essbase and TimesTen for Exalytics software is now available for Exalytics systems. This is delivered using "Oracle VM" (OVM). We still ship a full Exalytics machine to all customers, but they may choose to use and license only a subset of the processors installed in the machine. Customers can license Exalytics software in units of 5 "processors": 5, 10, 15 or the full capacity of 20. As the customer's implementation and workload increases, it is a simple matter to license additional processors and, using OVM, make them available to the BI or EPM application.
    Endeca Information Discovery now available on Exalytics: Oracle has also announced the certification of "Oracle Endeca Information Discovery" (EID) on the Exalytics machine. EID can be licensed alone or in combination with the BIFS and TimesTen for an Exalytics stack, and also participates in the capacity-based pricing outlined above. The Exalytics hardware is the perfect platform for EID, and provides superb power and performance for this in-memory hybrid of text search and analytics.
    For more information: Oracle Price Lists; Oracle Partitioning Policy; and a discussion by Mark Rittman (Rittman Mead Consulting Ltd.) on Oracle Trusted Partitions for Oracle Engineered Systems, Oracle Exalytics and Updated BI Foundation Pricing.

    Read the article

  • Impact of Service Oriented Architecture (SOA) on Business and IT Operations

    The impact of Service Oriented Architecture (SOA) on business and IT operations varies from company to company. I think more and more companies are starting to view SOA as just another technology that they can incorporate in an existing or new system. One of the driving factors in using SOA is the reduction in maintenance costs and the decrease in the time needed to bring products to market. These cost reductions and the reduced turnaround time can be converted directly into increased profitability, because less expenditure is needed to maintain or create new systems. My personal perspective on SOA is that it is great for what it is actually intended to do. SOA allows systems to be distributed across networks or even the world while ensuring enterprise processing consistency and data integrity and preventing code duplication. This being said, a lot of preparation and work goes into properly designing and implementing an SOA, especially if an enterprise wants to take full advantage of its benefits. Even though SOA has recently gotten a lot of hype about its benefits, it is not a perfect fit for all situations. At the end of the day, SOA is just another tool in my tool belt that I can pull from to create solutions that meet the business's needs. Based on current industry trends, SOA appears to be a very solid technology to use moving forward, especially as more and more companies shift towards cloud-based computing. It is important to remember that SOA is one of many technologies that can be used in creating business solutions, and I think more time will be spent in the future evaluating whether SOA is the right technology for a solution once the initial hype around SOA has calmed down.

    Read the article

  • JavaScript and callback nesting

    - by Jake King
    A lot of JavaScript libraries (notably jQuery) use chaining, which allows the reduction of this:

        var foo = $(".foo");
        foo.stop();
        foo.show();
        foo.animate({ top: 0 });

    to this:

        $(".foo").stop().show().animate({ top: 0 });

    With proper formatting, I think this is quite a nice syntactic capability. However, I often see a pattern which I don't particularly like, but which appears to be a necessary evil in non-blocking models. This is the ever-present nesting of callback functions:

        $(".foo").animate({
            top: 0,
        }, {
            callback: function () {
                $.ajax({
                    url: 'ajax.php',
                }, {
                    callback: function () {
                        ...
                    }
                });
            }
        });

    And it never ends. Even though I love the ease non-blocking models provide, I hate the odd nesting of function literals it forces upon the programmer. I'm interested in writing a small JS library as an exercise, and I'd love to find a better way to do this, but I don't know how it could be done without feeling hacky. Are there any projects out there that have resolved this problem before? And if not, what are the alternatives to this ugly, meaningless code structure?

    Read the article

  • Series On Embedded Development (Part 1)

    - by user12612705
    This is the first in a series of entries on developing applications for the embedded environment. Most of this information is relevant to any type of embedded development (and even for desktop and server too), not just Java. This information is based on a talk Hinkmond Wong and I gave at JavaOne 2012 entitled Reducing Dynamic Memory in Java Embedded Applications. One thing to remember when developing embedded applications is that memory matters. Yes, memory matters in desktop and server environments as well, but there's just plain less of it in embedded devices. So I'm going to be talking about saving this precious resource as well as another precious resource, CPU cycles...and a bit about power too. CPU matters too, and again, in embedded devices, there's just plain less of it. What you'll find, no surprise, is that there's a trade-off between performance and memory. To get better performance, you need to use more memory, and to save more memory, you need to use more CPU cycles. I'll be discussing three Memory Reduction Categories: - Optionality, both build-time and runtime. Optionality is about providing options so you can get rid of the stuff you don't need and include the stuff you do need. - Tunability, which is about providing options so you can tune your application by trading performance for size, and vice-versa. - Efficiency, which is about balancing size savings with performance.

    Read the article

  • Customer Highlight: NTT DOCOMO

    - by jeckels
    NTT DOCOMO is the largest mobile operator in Japan, and serves over 13 million smartphone customers. Due to their growing data processing and scalability needs, they turned to Oracle's Cloud Application Foundation products for an integrated solution. At Oracle OpenWorld 2012, we first showcased NTT DOCOMO as a customer who was utilizing Oracle Coherence to process mobile data at a rate of 700,000 events per second (and then using Hadoop for distributed processing of big data). Overall, this led to a 50% cost reduction due to the ultra-high-velocity processing of their customers' events. Recently, on October 7th, 2013, Oracle and NTT DOCOMO were proud to again announce a partnership around another key component of Oracle CAF: WebLogic Server. WebLogic was recently deployed as the application platform of choice to run DOCOMO's mission-critical data system ALADIN, which connects nationwide shops and information centers. ALADIN, which also utilizes Oracle Database and Oracle Tuxedo, is based on Java Platform, Enterprise Edition (Java EE), which has allowed the company to operate smoothly while minimizing the additional development and modification associated with the migration of application server products. We look forward to continuing to partner with NTT DOCOMO, and are proud that Oracle Cloud Application Foundation products are providing the mission-critical solutions - at scale - that DOCOMO requires. Want to learn more about how CAF products are working in the real world? Join us for a FREE Virtual Developer Day on November 5th from 9am-1pm Pacific Time! REGISTER NOW

    Read the article

  • Ball to Ball Collision - Detection and Handling

    - by Simucal
    With the help of the Stack Overflow community I've written a pretty basic, but fun, physics simulator. You click and drag the mouse to launch a ball. It will bounce around and eventually stop on the "floor". The next big feature I want to add is ball-to-ball collision. The ball's movement is broken up into an x and y speed vector. I have gravity (a small reduction of the y vector each step) and friction (a small reduction of both vectors on each collision with a wall). The balls honestly move around in a surprisingly realistic way. I guess my question has two parts: What is the best method to detect ball-to-ball collision? Do I just have an O(n^2) loop that iterates over each ball and checks every other ball to see if its radius overlaps? What equations do I use to handle the ball-to-ball collisions? Physics 101: How does it affect the two balls' x/y speed vectors? What is the resulting direction the two balls head off in? How do I apply this to each ball? Handling the collision detection of the "walls" and the resulting vector changes was easy, but I see more complications with ball-ball collisions. With walls I simply had to take the negative of the appropriate x or y vector and off it would go in the correct direction. With balls I don't think it is that way. Some quick clarifications: for simplicity I'm OK with a perfectly elastic collision for now; also, all my balls have the same mass right now, but I might change that in the future. In case anyone is interested in playing with the simulator I have made so far, I've uploaded the source here (EDIT: check the updated source below). Edit - resources I have found useful: 2D ball physics with vectors: 2-Dimensional Collisions Without Trigonometry.pdf; 2D ball collision detection example: Adding Collision Detection. Success! I have the ball collision detection and response working great! Relevant code for collision detection:

        for (int i = 0; i < ballCount; i++) {
            for (int j = i + 1; j < ballCount; j++) {
                if (balls[i].colliding(balls[j])) {
                    balls[i].resolveCollision(balls[j]);
                }
            }
        }

    This will check for collisions between every ball but skip redundant checks (if you have to check whether ball 1 collides with ball 2, then you don't need to check whether ball 2 collides with ball 1; it also skips checking a ball for collisions with itself).
    Then, in my ball class I have my colliding() and resolveCollision() methods:

        public boolean colliding(Ball ball) {
            float xd = position.getX() - ball.position.getX();
            float yd = position.getY() - ball.position.getY();
            float sumRadius = getRadius() + ball.getRadius();
            float sqrRadius = sumRadius * sumRadius;
            float distSqr = (xd * xd) + (yd * yd);
            if (distSqr <= sqrRadius) {
                return true;
            }
            return false;
        }

        public void resolveCollision(Ball ball) {
            // get the mtd
            Vector2d delta = (position.subtract(ball.position));
            float d = delta.getLength();
            // minimum translation distance to push balls apart after intersecting
            Vector2d mtd = delta.multiply(((getRadius() + ball.getRadius()) - d) / d);

            // resolve intersection --
            // inverse mass quantities
            float im1 = 1 / getMass();
            float im2 = 1 / ball.getMass();

            // push-pull them apart based off their mass
            position = position.add(mtd.multiply(im1 / (im1 + im2)));
            ball.position = ball.position.subtract(mtd.multiply(im2 / (im1 + im2)));

            // impact speed
            Vector2d v = (this.velocity.subtract(ball.velocity));
            float vn = v.dot(mtd.normalize());

            // sphere intersecting but moving away from each other already
            if (vn > 0.0f) return;

            // collision impulse
            float i = (-(1.0f + Constants.restitution) * vn) / (im1 + im2);
            Vector2d impulse = mtd.multiply(i);

            // change in momentum
            this.velocity = this.velocity.add(impulse.multiply(im1));
            ball.velocity = ball.velocity.subtract(impulse.multiply(im2));
        }

    Source Code: Complete source for ball to ball collider. Binary: Compiled binary in case you just want to try bouncing some balls around. If anyone has some suggestions for how to improve this basic physics simulator let me know! One thing I have yet to add is angular momentum so the balls will roll more realistically. Any other suggestions? Leave a comment!
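    For anyone trying to follow the resolveCollision() code above, the impulse line is the standard impulse-based response along the contact normal; roughly, j = -(1 + e) * (v_rel . n) / (1/m1 + 1/m2), where n is the unit vector between the two centers (the normalized mtd), v_rel is the relative velocity of the balls, and e is the restitution constant (1 for a perfectly elastic collision). Each ball's velocity then changes by (j / m) along n, in opposite directions. With e = 1 and equal masses this reduces to the two balls exchanging the components of their velocities along n, which is exactly what you expect from a perfectly elastic collision.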

    Read the article

  • Is the switch to Dvorak worth it?

    - by Kevin Weil
    To those who were experienced (70 WPM, say) typists before the switch to Dvorak: were you faster after switching? There are a couple of good SO threads on Dvorak, but they are more about how to learn, or about the reduction in typing pain, than about speed before and after. I know it will take me 1-2 months to feel comfortable, but I want to know if I should expect to be faster afterward. I am a programmer and type maybe 90-110 WPM on QWERTY. EDIT: I agree that coding is not typically IO-bound, and that a minimum typing speed is sufficient. This is half from curiosity, but it will be an undertaking to achieve QWERTY parity, so I want to know if I should at least expect some asymptotic improvement.

    Read the article

  • Stochastic calculus library in python

    - by LeMiz
    Hello, I am looking for a Python library that would allow me to compute stochastic calculus stuff, like the (conditional) expectation of a random process whose diffusion I would define. I had a look at SimPy (simpy.sourceforge.net), but it does not seem to cover my needs. This is for quick prototyping and experimentation. In Java, I used with some success the (now inactive) http://martingale.berlios.de/Martingale.html library. The problem is not difficult in itself, but there are a lot of non-trivial, boilerplate things to do (efficient memory use, variance reduction techniques, and so on). Ideally, I would be able to write something like this (just illustrative):

        def my_diffusion(t, dt, past_values, world, **kwargs):
            W1, W2 = world.correlated_brownians_pair(correlation=kwargs['rho'])
            X = past_values[-1]
            sigma_1 = kwargs['sigma1']
            sigma_2 = kwargs['sigma2']
            dX = kwargs['mu'] * X * dt + sigma_1 * W1 * X * math.sqrt(dt) + sigma_2 * W2 * X * X * math.sqrt(dt)
            return X + dX

        X = RandomProcess(diffusion=my_diffusion, x0=1.0)
        print X.expectancy(T=252, dt=1./252., N_simul=50000,
                           world=World(random_generator='sobol'),
                           sigma1=0.3, sigma2=0.01, rho=-0.1)

    Does someone know of something else, other than reimplementing it in numpy for example?

    Read the article
