Search Results

Search found 10041 results on 402 pages for 'persistent world'.


  • SPARC T4-4 Delivers World Record Performance on Oracle OLAP Perf Version 2 Benchmark

    - by Brian
    Oracle's SPARC T4-4 server delivered world record performance with subsecond response time on the Oracle OLAP Perf Version 2 benchmark, using Oracle Database 11g Release 2 running on Oracle Solaris 11. The SPARC T4-4 server achieved throughput of 430,000 cube-queries/hour with an average response time of 0.85 seconds and a median response time of 0.43 seconds. This was achieved using only 60% of the available CPU resources, leaving plenty of headroom for future growth. The SPARC T4-4 server operated on an Oracle OLAP cube with a 4 billion row fact table of sales data containing 4 dimensions. This represents as many as 90 quintillion aggregate rows (90 followed by 18 zeros).
    Performance Landscape
    Oracle OLAP Perf Version 2 Benchmark, 4 billion fact table rows:
    System: SPARC T4-4 | Queries/hour: 430,000 | Users*: 7,300 | Average response time: 0.85 sec | Median response time: 0.43 sec
    * Users - the supported number of users with a given think time of 60 seconds
    Configuration Summary and Results
    Hardware Configuration:
    - SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz, and 1 TB memory
    - Data storage: 1 x Sun Fire X4275 (using COMSTAR), 2 x Sun Storage F5100 Flash Array (each with 80 FMODs)
    - Redo storage: 1 x Sun Fire X4275 (using COMSTAR with 8 HDD)
    Software Configuration:
    - Oracle Solaris 11 11/11
    - Oracle Database 11g Release 2 (11.2.0.3) with Oracle OLAP option
    Benchmark Description
    The Oracle OLAP Perf Version 2 benchmark is a workload designed to demonstrate and stress the Oracle OLAP product's core features of fast query, fast update, and rich calculations on a multi-dimensional model to support enhanced data warehousing. The bulk of the benchmark entails running a number of concurrent users, each issuing typical multidimensional queries against an Oracle OLAP cube consisting of a number of years of sales data with fully pre-computed aggregations. The cube has four dimensions: time, product, customer, and channel. Each query user issues approximately 150 different queries. One query chain may ask for total sales in a particular region (e.g., South America) for a particular time period (e.g., Q4 of 2010), followed by additional queries which drill down into sales for individual countries (e.g., Chile, Peru, etc.), with further queries drilling down into individual stores, etc. Another query chain may ask for yearly comparisons of total sales for some product category (e.g., major household appliances) and then issue further queries drilling down into particular products (e.g., refrigerators, stoves, etc.), particular regions, particular customers, etc. Results from version 2 of the benchmark are not comparable with version 1; the primary difference is the type of queries along with the query mix.
    Key Points and Best Practices
    Since typical BI users are likely to issue similar queries with different constants in the where clauses, setting the init.ora parameter "cursor_sharing" to "force" will provide additional query throughput and a larger number of potential users. Except for this setting, together with making full use of available memory, out-of-the-box performance for the OLAP Perf workload should provide results similar to what is reported here. For a given number of query users with zero think time, the main measured metrics are the average query response time, the median query response time, and the query throughput. A derived metric is the maximum number of users the system can support achieving the measured response time, assuming some non-zero think time. 
The calculation of the maximum number of users follows from the well-known response-time law N = (rt + tt) * tp where rt is the average response time, tt is the think time and tp is the measured throughput. Setting tt to 60 seconds, rt to 0.85 seconds and tp to 119.44 queries/sec (430,000 queries/hour), the above formula shows that the T4-4 server will support 7,300 concurrent users with a think time of 60 seconds and an average response time of 0.85 seconds. For more information see chapter 3 from the book "Quantitative System Performance" cited below. -- See Also Quantitative System Performance Computer System Analysis Using Queueing Network Models Edward D. Lazowska, John Zahorjan, G. Scott Graham, Kenneth C. Sevcik external local Oracle Database 11g – Oracle OLAP oracle.com OTN SPARC T4-4 Server oracle.com OTN Oracle Solaris oracle.com OTN Oracle Database 11g Release 2 oracle.com OTN Disclosure Statement Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 11/2/2012.
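    Restating the arithmetic above in symbols (these are the figures quoted in the text, rounded to 7,300 as the text does):

        N = t_p \,(r_t + t_t)
          = 119.44\ \tfrac{\text{queries}}{\text{s}} \times (0.85\ \text{s} + 60\ \text{s})
          \approx 7268 \approx 7300\ \text{users}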

    Read the article

  • Real Time BI in the Real World

    - by tobin.gilman(at)oracle.com
    One of my favorite BI offerings from Oracle is a solution called Oracle Real Time Decisions.  Whenever I mention this product in customer meetings, eyes light up.  There are some fascinating examples of customers using it to up-sell, cross-sell, increase customer retention, and reduce risk in real time, with off-the-charts return on investment. I plan to share some of those stories in a future blog.  In this post, however, I want to share some far more common real time analytics use case scenarios that are being addressed with widely deployed Oracle BI and data integration technologies. Not all real time BI applications require continuous learning, predictive modeling, and data mining.  Many simply require the ability to integrate, aggregate, and access information that is current (typically within a few minutes or a few seconds).  The use cases are infinite.  A few I've seen:
    - Purchasing agents need to match demand against available inventory
    - Manufacturing planners need to monitor current parts and material against scheduled build plans
    - Airline agents need to match ticket demand against flight schedules
    - Human resources managers need to track the status of global hiring requisitions against current headcount authorizations... you get the idea.
    One way of doing this is to run reports or federated queries directly against transactional systems.  That approach can be viable if you only need to access simple data sets on rare occasions.  High volume and complex queries can quickly bog down the performance of mission critical transactional systems.  There is an architecturally simple way of solving the problem, and it's being applied by real companies around the world to solve real needs in real time.  Cbeyond, an Atlanta, GA based provider of voice, data and mobile business applications, delivers real time information to its call center agents as they are interacting with their customers. The data they need resides in production CRM and other transactional systems, but instead of reporting directly off those systems, data is first moved to an operational data store (ODS).  Rather than running data intensive, time consuming, and performance degrading batch ETL routines to populate the ODS, Cbeyond uses Oracle GoldenGate software to incrementally capture and move only the changed records from the log files of the transactional systems every few minutes.  
    There is no impact on transactional system performance, and the information needed by call center representatives is up to date.  Oracle Business Intelligence software presents the information to service reps in a rich, visual, and highly interactive format. Avea is similar to Cbeyond.  They are a telecommunications company that integrates billing and customer information in an ODS accessed by their call center agents in real time, using Oracle GoldenGate and Oracle Business Intelligence.  They've taken it a step further by using the ODS to feed a data warehouse.  The operational data store provides the current information needed by call center agents during "in flight" customer interactions.  The data warehouse is used for more sophisticated analysis of historical data.  For maximum performance, both the ODS and the data warehouse run on the Oracle Exadata Database Machine. These are practical illustrations of companies addressing real time reporting and analysis needs using established business intelligence/data warehousing methodologies and tools common to many IT departments.  If real time BI could benefit your organization, you may already be closer than you thought to having the pieces in place to solve the problem.  Give us a shout if you are interested in learning more or if you have an interesting use of or approach to real-time BI.

    Read the article

  • Programming challenge: can you code a hello world program as a Palindrome?

    - by Assaf Lavie
    So the puzzle is to write a hello world program in your language of choice, where the program's source file, read as a string, has to be a palindrome. To be clear, the output has to be exactly "Hello, World". Edit: Well, with comments it seems trivial (not that I thought of it myself of course [sigh].. hat tip to cobbal). So new rule: no comments. Edit: I feel kind of bad editing someone else's question to say this, but it will eliminate a lot of non-palindromes that keep popping up, and I'm tired of seeing the same simple mistake over and over. The following is NOT a palindrome: ()() The following IS a palindrome: ())( Brackets, parentheses, and anything else that must match are a major barrier to palindrome-ing, yes, but that doesn't mean you can ignore them and post non-palindrome answers. Languages represented thus far: C, C++, Bash, elisp, C#, Perl, sh, Windows shell, Java, Common Lisp, Awk, Ruby, Brainfuck, Funge, Python, Machine Language, HQ9+, Assembly, TCL, J, php, Haskell, io, TeX, APL, Javascript, mIRC Script, Basic, Orc, Fortran, Unlambda, Pseudo-code, Befunge, CFML, Lua, INTERCAL, VBScript, HTML, sed, PostScript, GolfScript, REBOL, SQL
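    A quick way to check a submission before posting is to compare the source file, as a string, against its reverse. Below is a minimal checker sketch (my illustration, not part of the original question; the file name is just a placeholder):

        # check_palindrome.py - report whether a source file reads the same
        # forwards and backwards, character by character.
        import sys

        def is_palindrome_source(path):
            with open(path, encoding="utf-8") as f:
                src = f.read()
            return src == src[::-1]

        if __name__ == "__main__":
            path = sys.argv[1] if len(sys.argv) > 1 else "hello.py"  # placeholder file name
            print("palindrome" if is_palindrome_source(path) else "NOT a palindrome")

    By this test, "()()" fails (its reverse is ")()(" ), while "())(" passes, which is exactly the distinction drawn above.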

    Read the article

  • Oracle Open World 2012 run-up: mark your calendars

    - by Eric Bezille
    Now less than a month before Oracle's major event, held as every year in San Francisco at the end of September and beginning of October, speculation is running high about the announcements that will be unveiled there... Without lifting the veil, I encourage you to look at the topics of the keynotes that will be given by Larry Ellison, Mark Hurd, Thomas Kurian (head of software development) and John Fowler (head of systems development), to give you a foretaste.
    Oracle Strategy and Roadmaps
    Beyond the plenary sessions, which will give you a precise view of the strategy, and for those who will be on site, I encourage you not to miss the deep-dive sessions taking place during the week. Here is a selection:
    "Accelerate your Business with the Oracle Hardware Advantage", with John Fowler, Monday October 1, 3:15pm-4:15pm
    "Why Oracle Software Runs Best on Oracle Hardware", with Bradley Carlile, head of benchmarks, Monday October 1, 12:15pm-1:15pm
    "Engineered Systems - from Vision to Game-changing Results", with Robert Shimp, Monday October 1, 1:45pm-2:45pm
    "Database and Application Consolidation on SPARC Supercluster", with Hugo Rivero, a manager in the hardware and software integration teams, Monday October 1, 4:45pm-5:45pm
    "Oracle's SPARC Server Strategy Update", with Masood Heydari, head of SPARC server development, Tuesday October 2, 10:15am-11:15am
    "Oracle Solaris 11 Strategy, Engineering Insights, and Roadmap", with Markus Flierl, head of Solaris development, Wednesday October 3, 10:15am-11:15am
    "Oracle Virtualization Strategy and Roadmap", with Wim Coekaerts, head of Oracle VM and Oracle Linux development, Monday October 1, 12:15pm-1:15pm
    "Big Data: The Big Story", with Jean-Pierre Dijcks, head of Big Data product development, Monday October 1, 3:15pm-4:15pm
    "Scaling with the Cloud: Strategies for Storage in Cloud Deployments", with Christine Rogers, Principal Product Manager, and Chris Wood, Senior Product Specialist, Storage, Monday October 1, 10:45am-11:45am
    Customer Experiences and Testimonials
    While Oracle Open World is an opportunity to talk directly with Oracle's development teams, it is also a chance to exchange with customers and experts who have implemented our technologies and to benefit from their experience, for example:
    "Oracle Optimized Solution for Siebel CRM at ACCOR", with testimonials from Eric Wyttynck, IT Director Multichannel & CRM, and Pascal Massenet, VP Loyalty & CRM systems, on the benefits not only for the business but also for the project and IT, Wednesday October 3, 1:15pm-2:15pm
    "Tips from AT&T: Oracle E-Business Suite, Oracle Database, and SPARC Enterprise", with feedback from Oracle experts, Tuesday October 2, 11:45am-12:45pm
    "Creating a Maximum Availability Architecture with SPARC SuperCluster", with the testimonial of Carte Wright, Database Engineer at CKI, Wednesday October 3, 11:45am-12:45pm
    "Multitenancy: Everybody Talks It, Oracle Walks It with Pillar Axiom Storage", with the testimonial of Stephen Schleiger, Manager Systems Engineering at Navis, Monday October 1, 1:45pm-2:45pm
    "Oracle Exadata for Database Consolidation: Best Practices", with feedback from the Oracle experts who took part in an implementation for a major banking customer, Monday October 1, 4:45pm-5:45pm
    "Oracle Exadata Customer Panel: Packaged Applications with Oracle Exadata", moderated by Tim Shetler, VP Product Management, Tuesday October 2, 1:15pm-2:15pm
    "Big Data: Improving Nearline Data Throughput with the StorageTek SL8500 Modular Library System", with the testimonial of CSC's CTO, Alan Powers, Thursday October 4, 12:45pm-1:45pm
    "Building an IaaS Platform with SPARC, Oracle Solaris 11, and Oracle VM Server for SPARC", with the testimonials of Syed Qadri, Lead DBA, and Michael Arnold, System Architect, of US Cellular, Tuesday October 2, 10:15am-11:15am
    "Transform Data Center TCO with Oracle Optimized Servers: A Customer Panel", with testimonials notably from AT&T and Liberty Global, Tuesday October 2, 11:45am-12:45pm
    "Data Warehouse and Big Data Customers' View of the Future", with The Nielsen Company US, Turkcell, GE Retail Finance, and Allianz Managed Operations and Services SE, Monday October 1, 4:45pm-5:45pm
    "Extreme Storage Scale and Efficiency: Lessons from a 100,000-Person Organization", the testimonial of Oracle's internal IT on the transformation and migration of our entire storage infrastructure, Tuesday October 2, 1:15pm-2:15pm
    Discussions with User Groups and Oracle Development Teams
    If you plan to arrive early enough, you can also meet the user groups as early as Sunday, or the Oracle development teams every evening, on topics such as:
    "To Exalogic or Not to Exalogic: An Architectural Journey", with Todd Sheetz, Manager of DBA and Enterprise Architecture, Veolia Environmental Services, Sunday September 30, 2:30pm-3:30pm
    "Oracle Exalytics and Oracle TimesTen for Exalytics Best Practices", with Mark Rittman, of Rittman Mead Consulting Ltd, Sunday September 30, 10:30am-11:30am
    "Introduction of Oracle Exadata at Telenet: Bringing BI to Warp Speed", with Rudy Verlinden & Eric Bartholomeus, IT infrastructure managers at Telenet, Sunday September 30, 1:15pm-2:00pm
    "The Perfect Marriage: Sun ZFS Storage Appliance with Oracle Exadata", with Melanie Polston, Director, Data Management, at Novation, and Charles Kim, Managing Director at Viscosity, Sunday September 30, 9:00am-10:00am
    "Oracle's Big Data Solutions: NoSQL, Connectors, R, and Appliance Technologies", with Jean-Pierre Dijcks and the Oracle development teams, Monday October 1, 6:15pm-7:00pm
    Test and Evaluate the Solutions
    And finally, you can even try out the technologies at the Oracle DemoGrounds (1133 Moscone South for the Oracle systems, OS, and virtualization area) and in the Hands-on Labs, such as:
    "Deploying an IaaS Environment with Oracle VM", Tuesday October 2, 10:15am-11:15am
    "Virtualize and Deploy Oracle Applications in Minutes with Oracle VM: Hands-on Lab", Tuesday October 2, 11:45am-12:45pm (it is strongly recommended to complete the previous Hands-on Lab before taking this one)
    "x86 Enterprise Cloud Infrastructure with Oracle VM 3.x and Sun ZFS Storage Appliance", Wednesday October 3, 5:00pm-6:00pm
    "StorageTek Tape Analytics: Managing Tape Has Never Been So Simple", Wednesday October 3, 1:15pm-2:15pm
    "Oracle's Pillar Axiom 600 Storage System: Power and Ease", Monday October 1, 12:15pm-1:15pm
    "Enterprise Cloud Infrastructure for SPARC with Oracle Enterprise Manager Ops Center 12c", Monday October 1, 1:45pm-2:45pm
    "Managing Storage in the Cloud", Tuesday October 2, 5:00pm-6:00pm
    "Learn How to Write MapReduce on Oracle's Big Data Platform", Monday October 1, 12:15pm-1:15pm
    "Oracle Big Data Analytics and R", Tuesday October 2, 1:15pm-2:15pm
    "Reduce Risk with Oracle Solaris Access Control to Restrain Users and Isolate Applications", Monday October 1, 10:45am-11:45am
    "Managing Your Data with Built-In Oracle Solaris ZFS Data Services in Release 11", Monday October 1, 4:45pm-5:45pm
    "Virtualizing Your Oracle Solaris 11 Environment", Tuesday October 2, 1:15pm-2:15pm
    "Large-Scale Installation and Deployment of Oracle Solaris 11", Wednesday October 3, 3:30pm-4:30pm
    In conclusion, a very rich week in prospect, one that will let you cover all the topics at the heart of your concerns, from strategy to implementation... It is a week that needs preparing for, so tailor your agenda from the more than 2,000 sessions, of which I have given you only an excerpt here and all of which you can find online.

    Read the article

  • Designing a persistent asynchronous TCP protocol

    - by dogglebones
    I have got a collection of web sites that need to send time-sensitive messages to host machines all over my metro area, each on its own generally dynamic IP. Until now, I have been doing this the way of the script kiddie:
    - Each host machine runs an (s)FTP server, or an HTTP(s) server, and correspondingly has a certain port opened up by its gateway.
    - Each host machine runs a program that watches a certain folder and automatically opens or prints or exec()s when a new file of a given extension shows up.
    - Dynamic IP addresses are accommodated using a dynamic DNS service.
    - Each web site does cURL or fsockopen or whatever and communicates directly with its recipient as needed.
    This approach has been surprisingly reliable; however, obvious issues have come up and the situation needs to be addressed. As stated, these messages are time-sensitive and failures need to be detected within minutes of submission by end users. What I'm doing is building a messaging protocol. It will run on a machine and connection in my control. As far as the service is concerned, there is no distinction between web site and host machine -- there is only one device sending a message to another device. So that's where I'm at right now. I've got a skeleton server and a skeleton client. They can negotiate high-quality authentication and encryption. The (TCP) connection is persistent and asynchronous, and can handle delimited (i.e., read until \r\n or whatever) as well as length-prefixed (i.e., read exactly n bytes) messages. Unless somebody gives me a better idea, I think I'll handle messages as byte arrays. So I'm looking for suggestions on how to model the protocol itself -- at the application level. I'll mostly be transferring XML and DLM type files, as well as control messages for things like "handshake" and "is so-and-so online?" and so forth. Is there anything really stupid in my train of thought? Or anything I should read about before I get started? Stuff like that -- please and thanks.
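    For the length-prefixed framing mentioned above, a common convention is a fixed-size big-endian length header followed by the payload bytes. Here is a minimal sketch of that idea (illustrative only, not the poster's code; the 4-byte header and the use of asyncio streams are my assumptions):

        import asyncio
        import struct

        HEADER = struct.Struct(">I")  # assumed 4-byte big-endian length prefix

        async def send_message(writer: asyncio.StreamWriter, payload: bytes) -> None:
            # Frame the message: length header first, then the raw bytes.
            writer.write(HEADER.pack(len(payload)) + payload)
            await writer.drain()

        async def read_message(reader: asyncio.StreamReader) -> bytes:
            # Read exactly the header, then exactly the announced number of bytes.
            (length,) = HEADER.unpack(await reader.readexactly(HEADER.size))
            return await reader.readexactly(length)

    Control messages such as "handshake" or "is so-and-so online?" can then ride in the same frames, for example with a one-byte type tag ahead of the body.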

    Read the article

  • Create non-persistent cookie with FormsAuthenticationTicket

    - by Marcus
    Hello! I'm having trouble creating a non-persistent cookie using the FormsAuthenticationTicket. I want to store userdata in the ticket, so I can't use the FormsAuthentication.SetAuthCookie() or FormsAuthentication.GetAuthCookie() methods. Because of this I need to create the FormsAuthenticationTicket and store it in an HttpCookie. My code looks like this:

        DateTime expiration = DateTime.Now.AddDays(7);

        // Create ticket
        FormsAuthenticationTicket ticket = new FormsAuthenticationTicket(2, user.Email, DateTime.Now, expiration, isPersistent, userData, FormsAuthentication.FormsCookiePath);

        // Create cookie
        HttpCookie cookie = new HttpCookie(FormsAuthentication.FormsCookieName, FormsAuthentication.Encrypt(ticket));
        cookie.Path = FormsAuthentication.FormsCookiePath;
        if (isPersistent)
            cookie.Expires = expiration;

        // Add cookie to response
        HttpContext.Current.Response.Cookies.Add(cookie);

    When the variable isPersistent is true everything works fine and the cookie is persisted. But when isPersistent is false the cookie seems to be persisted anyway. I sign on in a browser window, close it, open the browser again, and I am still logged in. How do I set the cookie to be non-persistent? Is a non-persistent cookie the same as a session cookie? Is the cookie information stored in the session data on the server, or is the cookie transferred in every request/response to the server? Thanks in advance! /Marcus

    Read the article

  • LUKS with LVM, mount is not persistent after reboot

    - by linxsaga
    I have created a logical volume and used LUKS to encrypt it, but while rebooting the server I get an error message (below), after which I have to enter the root password and disable the /etc/fstab entry. So the mount of the LUKS partition is not persistent across reboots. I have this setup on RHEL6 and am wondering what I could be missing; I want the LV to be mounted on reboot. Later I would want to replace the device name with a UUID. Error message on reboot: "Give root password for maintenance (or type Control-D to continue):" Here are the steps from the beginning:

        [root@rhel6 ~]# pvcreate /dev/sdb
          Physical volume "/dev/sdb" successfully created
        [root@rhel6 ~]# vgcreate vg01 /dev/sdb
          Volume group "vg01" successfully created
        [root@rhel6 ~]# lvcreate --size 500M -n lvol1 vg01
          Logical volume "lvol1" created
        [root@rhel6 ~]# lvdisplay
          --- Logical volume ---
          LV Name                /dev/vg01/lvol1
          VG Name                vg01
          LV UUID                nX9DDe-ctqG-XCgO-2wcx-ddy4-i91Y-rZ5u91
          LV Write Access        read/write
          LV Status              available
          # open                 0
          LV Size                500.00 MiB
          Current LE             125
          Segments               1
          Allocation             inherit
          Read ahead sectors     auto
          - currently set to     256
          Block device           253:0
        [root@rhel6 ~]# cryptsetup luksFormat /dev/vg01/lvol1
        WARNING!
        ========
        This will overwrite data on /dev/vg01/lvol1 irrevocably.
        Are you sure? (Type uppercase yes): YES
        Enter LUKS passphrase:
        Verify passphrase:
        [root@rhel6 ~]# mkdir /house
        [root@rhel6 ~]# cryptsetup luksOpen /dev/vg01/lvol1 house
        Enter passphrase for /dev/vg01/lvol1:
        [root@rhel6 ~]# mkfs.ext4 /dev/mapper/house
        mke2fs 1.41.12 (17-May-2010)
        Filesystem label=
        OS type: Linux
        Block size=1024 (log=0)
        Fragment size=1024 (log=0)
        Stride=0 blocks, Stripe width=0 blocks
        127512 inodes, 509952 blocks
        25497 blocks (5.00%) reserved for the super user
        First data block=1
        Maximum filesystem blocks=67633152
        63 block groups
        8192 blocks per group, 8192 fragments per group
        2024 inodes per group
        Superblock backups stored on blocks:
            8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
        Writing inode tables: done
        Creating journal (8192 blocks): done
        Writing superblocks and filesystem accounting information: done
        This filesystem will be automatically checked every 21 mounts or 180 days, whichever comes first. Use tune2fs -c or -i to override.
        [root@rhel6 ~]# mount -t ext4 /dev/mapper/house /house

    PS: HERE I have successfully mounted:

        [root@rhel6 ~]# ls /house/
        lost+found
        [root@rhel6 ~]# vim /etc/fstab      -> entry as follows
        /dev/mapper/house    /house    ext4    defaults    1 2
        [root@rhel6 ~]# vim /etc/crypttab   -> entry as follows
        house    /dev/vg01/lvol1    password
        [root@rhel6 ~]# mount -o remount /house
        [root@rhel6 ~]# ls /house/
        lost+found
        [root@rhel6 ~]# umount /house/
        [root@rhel6 ~]# mount -a            -> SUCCESSFUL AGAIN
        [root@rhel6 ~]# ls /house/
        lost+found

    Please let me know if I am missing anything here. Thanks in advance.

    Read the article

  • A training world nugget for being taught by the best

    - by Testas
    June represents an exciting time for the SQL Server community, with events all over the country in the next few months and plenty of knowledge to be gained from willing speakers enthusiastically sharing what they know. Furthermore, Paul Randal and Kimberly Tripp will be conducting their highly recommended immersion events at London Heathrow in June. There are other big names within SQL Server who will be teaching this year. The company I used to work for, QA, has excellent trainers teaching SQL Server whom I would always recommend. Occasionally a big-name speaker will teach a course, unbeknownst to the community. Solid Quality Mentors is such a company, whose staff teach at QA offices from time to time. And I know from conversation with Itzik Ben-Gan that he will be teaching Advanced T-SQL at QA offices in London during the week of Oct 3-7. A link to the course details can be found here: http://www.qa.com/training-courses/technical-it-training/microsoft/microsoft-sql-server/microsoft-sql-server-2008-and-r2/advanced-t-sql-querying,-programming-and-tuning-for-sql-server-2005--2008 So if you want to be taught by the best experts, consider checking www.QA.com for their advanced SQL courses; you could find yourself being taught by the best in the business in their field. Chris  

    Read the article

  • Potentially The World’s Filthiest PC [Video]

    - by Jason Fitzpatrick
    We’re confident we’ve seen some dusty PC cases in our day, but nothing we’ve ever cleaned produced the sheer volume of smoke-bomb-like dust this neglected tower spews out. That noise you hear, about 1:15 into the video, is the sound of the compressor motor kicking back on to top off the pressure tank: behold, a PC so filthy the compressor cleaning it out needs to take a break! [via Geeks Are Sexy]

    Read the article

  • World Class Training For Them, an Amazon Gift Certificate For You

    - by Adam Machanic
    We have just two weeks to go before Paul Randal and Kimberly Tripp touch down in the Boston area to deliver their famous SQL Server Immersions course. This is going to be a truly fantastic SQL Server learning experience, and we're hoping a few more people will join in the fun. This is where you come in: we have a few vacant seats remaining and we need your help spreading the word. Simply tell your friends and colleagues about the course and e-mail me (adam [at] bostonsqltraining [dot] com) the names...(read more)

    Read the article

  • Podcast Show Notes: Architecture in a Post-SOA World

    - by Bob Rhubart
    All three segments of my conversation with Oracle ACE Director Hajo Normann, SOA author Jeff Davies, and enterprise architect Pat Shepherd are now available. This conversation was recorded on March 9, 2010, and covered a lot of territory, from the lingering fear of SOA among many in IT, to the misinformation behind that fear, to a discussion of the future of enterprise architecture. Listen to Part 1 | Listen to Part 2 | Listen to Part 3. If you’d like to engage any of the panelists in your own conversation, the links below will help: Hajo Normann is a SOA architect and consultant at EDS in Frankfurt (Blog | LinkedIn | Oracle Mix | Oracle ACE Profile | Books). Jeff Davies is a Senior Product Manager at Oracle, and is the primary author of The Definitive Guide to SOA: Oracle Service Bus (Homepage | Blog | LinkedIn | Oracle Mix). Pat Shepherd is an enterprise architect with the Oracle Enterprise Solutions Group (Oracle Mix | LinkedIn | Blog). New panelists and new topics coming next week, so stay tuned: RSS   Technorati Tags: oracle, otn, arch2arch, architect, community, enterprise architecture, podcast, soa, service-oriented architecture del.icio.us Tags: oracle, otn, arch2arch, architect, community, enterprise architecture, podcast, soa, service-oriented architecture

    Read the article

  • Autoscaling in a modern world&hellip;. Part 3

    - by Steve Loethen
    The Wasabi Hands-on Labs give you a good look at the basic mechanics, but I don’t find the setup too practical.  Using a local console application to host the Autoscaler and rules files is probably (IMHO) the least likely architecture.  Far more common would be hosting in a service on premise (if you want to have the Autoscaler local) or, most likely, hosting it in an Azure role of its own.  I chose to go the Azure route. First step was to get the rules.xml and the services.xml files into the cloud.  I tend to be a “one step at a time” sort of guy, so running the console application with the rules sitting in an Azure-hosted set of blobs seemed to be the logical first step.  Here are the steps:
    1) Create a container in the storage account you wish to use.  The name does not matter; you will get a chance to set the container name (as well as the file names) in the app.config.
    2) Copy the two files from where you created them to your container.  I used the same files I had locally.  I made the container public to eliminate security issues, but in the final application a bit of security needs to be applied (one problem at a time).  The content type was set to text/xml.  I found one reference claiming the importance of this step, and it makes sense.
    3) Adjust the app.config to set the location of the files.  This will let you set all the storage account and key information needed to reach into the cloud from your console application.  The sections of your app.config will look like this:

        <rulesStores>
          <add name="Blob Rules Store" type="Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling.Rules.Configuration.BlobXmlFileRulesStore, Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling, Version=5.0.1118.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
               blobContainerName="[ContainerName]" blobName="rules.xml"
               storageAccount="DefaultEndpointsProtocol=https;AccountName=[StorageAccount];AccountKey=[AccountKey]"
               monitoringRate="00:00:30" certificateThumbprint="" certificateStoreLocation="LocalMachine" checkCertificateValidity="false" />
        </rulesStores>
        <serviceInformationStores>
          <add name="Blob Service Information Store" type="Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling.ServiceModel.Configuration.BlobXmlFileServiceInformationStore, Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling, Version=5.0.1118.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
               blobContainerName="[ContainerName]" blobName="services.xml"
               storageAccount="DefaultEndpointsProtocol=https;AccountName=[StorageAccount];AccountKey=[AccountKey]"
               monitoringRate="00:00:30" certificateThumbprint="" certificateStoreLocation="LocalMachine" checkCertificateValidity="false" />
        </serviceInformationStores>

    Once I had the files up in the sky, I renamed the local copies just to make myself feel better about the application using the correct set of rules and services.  Deploy the web role to the cloud.  Once it is up and running, start the console application.  You should find the application scales up and down in response to the buttons on the web site.  Tune in next time for moving the hosting of the Autoscaler to a worker role, discussions on getting the logging information into diagnostics and into storage, and a set of discussions about certs and how they play a role.
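    As an aside on step 2: if you would rather script the upload (and the text/xml content type) than set it by hand, something along these lines works with the current azure-storage-blob Python package. This is a sketch using a modern SDK of my choosing, not the tool the author used, and the connection string and container name reuse the placeholders from the app.config above:

        # Upload rules.xml and services.xml to the rules container with content type text/xml.
        from azure.storage.blob import BlobServiceClient, ContentSettings

        CONNECTION_STRING = "DefaultEndpointsProtocol=https;AccountName=[StorageAccount];AccountKey=[AccountKey]"
        CONTAINER = "[ContainerName]"  # placeholder, matching the app.config above

        service = BlobServiceClient.from_connection_string(CONNECTION_STRING)
        container = service.get_container_client(CONTAINER)

        for name in ("rules.xml", "services.xml"):
            with open(name, "rb") as data:
                container.upload_blob(
                    name,
                    data,
                    overwrite=True,
                    content_settings=ContentSettings(content_type="text/xml"),
                )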

    Read the article

  • Nokia vs. The World

    - by Michael B. McLaughlin
    I’m looking forward to the launch of the Nokia Lumia 920. Why? Well, it stacks up better than the competition for one thing. Then there’s also that security problem that certain other phones have. Mostly, though, it’s because I love my Lumia 900 and the 920, with Windows Phone 8, will be even better. Before I got my Lumia 900, I just took it as given that smart phone cameras couldn’t be good. The Lumia taught me that smart phone cameras can be good if the manufacturer treats them as an important component worth spending time and money on (rather than some thing that consumers expect such that they’d better throw one in). I’m extremely pleased with the quality of pictures that my Lumia 900 gives me as well as the range of settings it provides (you can delve in to tell it a film speed, an f-stop, and a whole range of other settings). And the image stabilization features in the Lumia 920 deliver far better results than the others. Nokia has had great maps for a long time and they continue to improve. Even better, they made a deal that puts many of their excellent maps into Windows Phone 8 itself. There are still Nokia-exclusive features such as Nokia City Lens, of course. But by giving the core OS a great set of fundamental map data and technologies, they help ensure that customers know that buying a Windows Phone 8 will give them a great map experience no matter who made the phone. I’ll be getting a 920, myself, but the HTC and Samsung devices that have been announced have some compelling features, too, and it’s great to know that people who buy one of these won’t need to worry about where their maps might lead them. I’m looking forward to the NFC capabilities and Qi wireless charging my Lumia 920 will have. With the availability of DirectX and C++ programming on Windows Phone 8, I’m also excited about all the great games that will be added to the Windows Phone environment. I love my Xbox Phone. I love my Office phone. I love my Facebook phone. I love my GPS phone. I love my camera phone. I love my SkyDrive phone. In short, I love my Windows Phone!

    Read the article

  • Open World Day 1 Continued

    - by Antony Reynolds
    A Day in the Life of an Oracle OpenWorld Attendee, Part II
    A couple of things I forgot to mention about yesterday's OpenWorld. First, I attended a presentation on SOA Suite and virtualization which explained how Oracle Virtual Assembly Builder (OVAB) can be used to accelerate the deployment of an Enterprise Deployment Guide (EDG) compliant SOA Suite infrastructure.  OVAB provides the ability to introspect a deployed software component such as WebLogic Server, SOA Suite or other components, extract the configuration, and package it up for rapid deployment into an Oracle Virtual Machine.  OVAB allows multiple machines to be configured and connections made between the machines and outside resources such as databases.  That by itself is pretty cool and has been available for a while in OVAB.  What is new is that Oracle has done this for an EDG compliant installation and made it available as an OVAB assembly for customers to use, significantly accelerating the rollout of an EDG deployment.  A real help for customers standing up EDG environments, particularly in test, dev and QA environments. The other thing I forgot to mention was the most memorable demo I saw at OpenWorld.  This was done by my co-author Matt Wright, who was showcasing the products of his company Rubicon Red.  They showed a really cool application called OneSpot which puts all the information about a single user's business processes in one spot!  Apparently a customer suggested the name.  It allows business flows to be defined that map onto events.  As events occur, the status of the business flow is updated to reflect the change.  The interface is strongly reminiscent of social media sites and provides a graphical view of business flows.  So how does this differ from BPEL and BPM process flows?  The OneSpot process flow is more like a BAM process flow: it is based on events arriving from multiple sources, and is focused on the client's view of the process, not the actual business process.  This is important because it allows an end user to get a view of where his current business flow is and what actions, if any, are required of him.  This by itself is great, but better still is that OneSpot has a real-time updating view of events that have occurred (BAM style, no need to refresh the browser).  This means that as new events occur the end user can see them and jump to the business flow or take other appropriate actions.  Under the covers OneSpot makes use of Oracle Human Workflow to provide a forms interface, but this is not the HWF GUI you know!  The HWF GUI screens are much prettier and have more of a social media feel about them, due to their use of images and pulling in relevant related information.  If you are at OOW I strongly recommend you visit Matt or John at the Rubicon Red stand and ask, no, demand a demo of OneSpot!

    Read the article

  • Open World 2012

    - by jeffrey.waterman
    For those of you fortunate enough to be attending this year's Oracle OpenWorld, here is a session I recommend carving time out of your hectic schedule to attend: Public Sector General Session (session ID#: GEN8536), Wednesday, October 3, 10:15 a.m.–11:15 a.m., Westin San Francisco, Metropolitan III Room. Speakers: Mark Johnson, SVP Oracle Public Sector; Peter Doolan, CTO Oracle Public Sector; Robert Livingston, founding partner of the Livingston Group and former member of the US Congress. Join Mark Johnson for an update on Oracle in government. Mark will be joined by Peter Doolan and Robert Livingston to discuss current topics facing governments and how Oracle can help organizations achieve their goals. I'll be posting more interesting sessions as I peruse the conference agenda over the next week or so.  If you see an interesting session, please feel free to share your suggestions in the comments section.

    Read the article

  • Java Slick2d - How to translate mouse coordinates to world coordinates

    - by Corey
    I am translating in my main class render method. How do I get the mouse position where my mouse actually is after I scroll the screen?

        public void render(GameContainer gc, Graphics g) throws SlickException {
            float centerX = 800/2;
            float centerY = 600/2;
            g.translate(centerX, centerY);
            g.translate(-player.playerX, -player.playerY);
            gen.render(g);
            player.render(g);
        }

        playerX = 800 /2 - sprite.getWidth();
        playerY = 600 /2 - sprite.getHeight();

    (Image to help with explanation.) I tried implementing a camera, but it seems no matter what I can't get the mouse position. I was told to do worldX = mouseX + camX; but it didn't work; the mouse was still off. Here is my Camera class if that helps:

        public class Camera {
            public float camX;
            public float camY;
            Player player;

            public void init() {
                player = new Player();
            }

            public void update(GameContainer gc, int delta) {
                Input input = gc.getInput();
                if(input.isKeyDown(Input.KEY_W)) {
                    camY -= player.speed * delta;
                }
                if(input.isKeyDown(Input.KEY_S)) {
                    camY += player.speed * delta;
                }
                if(input.isKeyDown(Input.KEY_A)) {
                    camX -= player.speed * delta;
                }
                if(input.isKeyDown(Input.KEY_D)) {
                    camX += player.speed * delta;
                }
            }
        }

    Code used to convert the mouse:

        worldX = (int) (mouseX + cam.camX);
        worldY = (int) (mouseY + cam.camY);
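    Reading the two translate calls in the render method above, the screen-to-world relationship they imply is the following (just the algebra of the posted code, assuming no additional scaling; whether this resolves the offset the poster sees is untested):

        \mathrm{screen} = \mathrm{world} + (\mathrm{center} - \mathrm{player})
        \quad\Longrightarrow\quad
        \mathrm{world} = \mathrm{mouse} - \mathrm{center} + \mathrm{player}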

    Read the article

  • Isometric screen to 3D world coordinates efficiently

    - by Justin
    Been having a difficult time transforming 2D screen coordinates to 3D isometric space. This is the situation where I am working in 3D but I have an orthographic camera. Then my camera is positioned at (100, 200, 100), Where the xz plane is flat and y is up and down. I've been able to get a sort of working solution, but I feel like there must be a better way. Here's what I'm doing: With my camera at (0, 1, 0) I can translate my screen coordinates directly to 3D coordinates by doing: mouse2D.z = (( event.clientX / window.innerWidth ) * 2 - 1) * -(window.innerWidth /2); mouse2D.x = (( event.clientY / window.innerHeight) * 2 + 1) * -(window.innerHeight); mouse2D.y = 0; Everything okay so far. Now when I change my camera back to (100, 200, 100) my 3D space has been rotated 45 degrees around the y axis and then rotated about 54 degrees around a vector Q that runs along the xz plane at a 45 degree angle between the positive z axis and the negative x axis. So what I do to find the point is first rotate my point by 45 degrees using a matrix around the y axis. Now I'm close. So then I rotate my point around the vector Q. But my point is closer to the origin than it should be, since the Y value is not 0 anymore. What I want is that after the rotation my Y value is 0. So now I exchange my X and Z coordinates of my rotated vector with the X and Z coordinates of my non-rotated vector. So basically I have my old vector but it's y value is at an appropriate rotated amount. Now I use another matrix to rotate my point around the vector Q in the opposite direction, and I end up with the point where I clicked. Is there a better way? I feel like I must be missing something. Also my method isn't completely accurate. I feel like it's within 5-10 coordinates of where I click, maybe because of rounding from many calculations. Sorry for such a long question.
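    For reference, the y-axis rotation the description above leans on is the textbook matrix (not taken from the poster's code):

        R_y(\theta) =
        \begin{pmatrix}
          \cos\theta & 0 & \sin\theta \\
          0 & 1 & 0 \\
          -\sin\theta & 0 & \cos\theta
        \end{pmatrix},
        \qquad \mathbf{p}' = R_y(45^\circ)\,\mathbf{p}

    With the camera at (100, 200, 100), the "about 54 degrees" tilt around Q works out to arctan(sqrt(2)) ≈ 54.7°, the classic isometric elevation angle.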

    Read the article

  • Autoscaling in a modern world&hellip;. Part 1

    - by Steve Loethen
    It has been a while since I have had time to sit down and blog.  I need to make sure I take the time; it helps me to focus on technology and not let the administrivia keep me from doing the things I love. I have been focusing on the cloud for the last couple of years, specifically the PaaS platform from Microsoft called Azure.  Time to dig in. I wanted to explore autoscaling.  Autoscaling is not a native part of Azure.  The platform has the needed connection points: you can write code that looks at the health and performance of your application components and reacts to needed scaling changes.  But that means you have to write all the code.  Luckily, an add-on to the Enterprise Library provides a lot of code that gets you a long way toward being able to autoscale without having to start from scratch. The tool set is primarily composed of an Autoscaler object that you need to host.  This object, when hosted and configured, looks at the performance criteria you specify and adjusts your application based on your needs.  Sounds perfect. I started with a set of HOLs that gave me a good basis to understand the mechanics.  I worked through labs 1 and 2 just to get the feel, but let's start our saga at the end of lab 3.  Lab 3 ends with a web application hosted in Azure and a console app running on premise.  The web app has a few buttons on it.  One set adds messages to a queue, another removes them.  A second set of buttons drives processor utilization to 100%.  If you want to guess, a safe bet is that the Autoscaler is configured to react to a queue that has filled up or to high CPU usage.  We will continue our saga in the next post…

    Read the article

  • Autoscaling in a modern world&hellip;. Part 4

    - by Steve Loethen
    Now that I have the rules and services XML files in the cloud, it is time to sever the bonds of earth and live totally in the cloud.  I have to host the Autoscaling object in Azure as well, point it to the rules, tell it the management certs, and get out of the way. A couple of questions.  Where to host?  The most obvious place to me was a worker role: a simple, single-purpose worker role, doing nothing but watching my app.  Here are the steps I used.
    1) Created a project.  A separate project from my web site; I wanted to be able to run the web in the cloud and the autoscaler locally for debugging purposes.  Seemed like the easiest way.
    2) Add the Wasabi block to the project.
    3) Configure the settings.  I used the same settings used for the console app.  It points to the same web role and uses the same rules file.
    4) Make sure the certificate needed to manage the role is added to the cert store in the sky ("LocalMachine" and "My" are default locations).
    I ran the worker role in the local fabric.  It worked.  I then published to the cloud, and verified it worked again.  Here is what my code looked like.

        public override bool OnStart()
        {
            Trace.WriteLine("Set Default Connection Limit", "Information");
            // Set the maximum number of concurrent connections
            ServicePointManager.DefaultConnectionLimit = 12;

            Trace.WriteLine("Set up configuration change code", "Information");
            // set up config
            CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
                configSetter(RoleEnvironment.GetConfigurationSettingValue(configName)));

            Trace.WriteLine("Get current diagnostic configuration", "Information");
            // Get current diagnostic configuration
            DiagnosticMonitorConfiguration dmc = DiagnosticMonitor.GetDefaultInitialConfiguration();

            Trace.WriteLine("Set Diagnostic Buffer Size", "Information");
            // Set Diagnostic Buffer size
            dmc.Logs.BufferQuotaInMB = 4;

            Trace.WriteLine("Set log transfer period", "Information");
            // Set log transfer period
            dmc.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);

            Trace.WriteLine("Set log verbosity", "Information");
            // Set log filter to verbose
            dmc.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;

            Trace.WriteLine("Start the diagnostic monitor", "Information");
            // Start the diagnostic monitor
            DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", dmc);

            Trace.WriteLine("Get the current Autoscaler from the EntLib Container", "Information");
            // Get the current Autoscaler from the EntLib Container
            scaler = EnterpriseLibraryContainer.Current.GetInstance<Autoscaler>();

            Trace.WriteLine("Start the autoscaler", "Information");
            // Start the autoscaler
            scaler.Start();

            Trace.WriteLine("call the base class OnStart", "Information");
            // call the base class OnStart
            return base.OnStart();
        }

        public override void OnStop()
        {
            Trace.WriteLine("Stop the Autoscaler", "Information");
            // Stop the Autoscaler
            scaler.Stop();
        }

    I did have to turn on some basic logging for Wasabi, which I will cover in the next post.  This let me figure out that I hadn't done the certificate step.

    Read the article

  • Autoscaling in a modern world&hellip;. last chapter

    - by Steve Loethen
    As we all know as coders, things like logging are never important.  Our code will work right the first time.  So, you can understand my surprise when, the first time I deployed the autoscaling worker role to the actual Azure fabric, it did not scale.  I mean, it worked on my machine.  How dare the datacenter argue with that.  So, how did I track down the problem?  (Turns out, it was not so much code as lack of the right certificate.)  When I ran it locally in the developer fabric, I was able to see a wealth of information: lots of periodic status info every time the autoscaler came around to check on my rules and decide to act or not.  But that information was not making it to Azure storage.  The diagnostics were not being transferred to where I could easily see and use them to track down why things were not being cooperative.  After a bit of digging, I discovered the problem.  You need to add a bit of extra configuration to get the correct information stored for you.  I added the following to my app.config:

        <system.diagnostics>
          <sources>
            <source name="Autoscaling General" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch">
              <listeners>
                <add name="AzureDiag" />
                <remove name="Default" />
              </listeners>
            </source>
            <source name="Autoscaling Updates" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch">
              <listeners>
                <add name="AzureDiag" />
                <remove name="Default" />
              </listeners>
            </source>
          </sources>
          <switches>
            <add name="SourceSwitch" value="Verbose, Information, Warning, Error, Critical" />
          </switches>
          <sharedListeners>
            <add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" name="AzureDiag" />
          </sharedListeners>
          <trace>
            <listeners>
              <add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" name="AzureDiagnostics">
                <filter type="" />
              </add>
            </listeners>
          </trace>
        </system.diagnostics>

    Suddenly all the rich tracing info I needed was filling up my storage account.  After a few cycles of attempting to scale, I identified the cert problem, uploaded a correct certificate, and away it went.  I hope this was helpful.

    Read the article
