Search Results

Search found 9064 results on 363 pages for 'big twisty'.


  • BI&EPM in Focus June 2013

    - by Mike.Hallett(at)Oracle-BI&EPM
    Analyst Report from Ovum: BI bites into a bigger slice of Oracle’s Red Stack
    Customer stories:
    - INC Research Ensures 24/7 Enterprise Application Availability and Supports Rapid Expansion in Asia with Managed Cloud Services – Hyperion Planning, PeopleSoft, E-Business Suite, SOA Suite
    - PL Developments Improves Quality and Demand Planning Accuracy, Streamlines Compliance as It Moves into Manufacturing – Hyperion Planning, OBIEE, E-Business Suite Release 12.1, Agile, Demantra
    - Kiabi Provides Store Managers with Monthly Earnings Statements in Four Business Days to Support Continued Retail Growth – Hyperion Planning, Hyperion Financial Reporting, Hyperion Smart View for Office
    - Speedy Cash Improves Global Financial Budgeting and Forecasting to Support Continued Company Growth – Hyperion Planning, Essbase, Hyperion Smart View for Office, Hyperion Financial Management
    - Grupo Sports World Automates and Reduces Budget Consolidation Time by 33% for 30 Fitness Centers – Hyperion Planning
    - Jupiter Shop Channel Automates Budgeting Processes, Enhances Visibility of Project Investments to Support Strategic Decision-Making – Hyperion Planning
    - GENBAND Saves US$1.25 Million Annually with Automated Global Trade Management, Gains Compliance Assurance – Hyperion Financial Management, E-Business Suite
    - Aldar Properties Consolidates and Simplifies Group Planning and Reporting for Business and Finance Structures with Integrated ERP and Business Intelligence – Hyperion Planning, Essbase, Data Integrator, OBIEE, E-Business Suite, SUN
    Link to Complete Archive
    Enterprise Performance Management:
    - Hyperion EPM 11.1.2.3 Webcast Tutorials
    - EPM Blog: Three Technologies CFOs Need to Know About
    - The CFO as Catalyst for Change - Part 1
    - The CFO as Catalyst for Change - Part 2
    - Actions Speak Louder in Scorecards
    - Unlocking Business Potential with Enterprise Performance Management
    Business Intelligence:
    - Oracle Database 12c is launched
    - Analysis: How to Take Big Data Advantage of Oracle Database 12c by Data-informed.com

    Read the article

  • How to migrate an ASP.NET MVC 3 or MVC 4 project to ASP.NET MVC 5?

    - by Anirudha
    Originally posted on: http://geekswithblogs.net/anirugu/archive/2013/10/16/how-to-migrate-asp.net-mvc-3--mvc4-project-to.aspx
    Soon you will see a new version of MVC in VS2013: MVC5 is incorporated in VS2013, and MVC3 is not supported in it (I confirmed this on Channel 9 recently). So people who have installed only VS2013, or who don't have an older version of Visual Studio, will get in trouble with a project that is still on MVC3. The errors happen because the MVC4 and MVC5 installations don't contain the DLLs that are used by version 3 of ASP.NET MVC.
    Don't panic. You want to upgrade your project, and here is a trick to solve the issue.
    When you open the project, you will see that some DLLs in References have a yellow icon. This means those DLLs are missing, or not found in your configuration or system. Note their names, remove them from References, and re-add them via Add Reference; I tell you to remove them first so Visual Studio will not prevent you from adding the new version of the same assembly. Add all those assemblies; they will be the following: System.Web.Mvc and the Razor and WebPages DLLs. Remember that in MVC3 we used older versions of these assemblies.
    When you are done adding all the assemblies, open web.config. There are 2 web.config files in an MVC project: one in the root folder and a second in the Views folder. You need to update the assembly version numbers in both. This is not a big deal if you know the names of the assemblies: if your web.config shows an assembly version such as 3.0.0.0, then the 3 should be replaced with 4 or 5 according to the version you installed (see the sketch below). The same change needs to be applied for all the affected DLLs in both web.config files.
    Note: in the VS template, views go in the ~/Views folder, but if someone uses folders other than Views for views, and those folders also have a web.config, remember to update those too - for example, areas/Views and themes/Views, which contain a web.config, also need to be updated with the newer assembly version number. Otherwise your project will compile with no warnings or errors, but it certainly will not work.
    After doing these things you can compile your project and it will work as it should. Thanks for reading my post. Follow me on FB and Twitter to stay updated.
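    As a hedged illustration of the version bump described above, the root web.config assembly entry might change like this (the exact sections and version numbers vary by project; treat this as a sketch, not the complete file):

        <!-- Before (MVC 3): -->
        <add assembly="System.Web.Mvc, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
        <!-- After (MVC 4; use Version=5.0.0.0 for MVC 5): -->
        <add assembly="System.Web.Mvc, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />

    An alternative to editing every version number by hand is a binding redirect in the root web.config, which forwards requests for the old assembly version to the new one:

        <runtime>
          <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
            <dependentAssembly>
              <assemblyIdentity name="System.Web.Mvc" publicKeyToken="31BF3856AD364E35" />
              <bindingRedirect oldVersion="1.0.0.0-4.0.0.0" newVersion="5.0.0.0" />
            </dependentAssembly>
          </assemblyBinding>
        </runtime>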

    Read the article

  • Oracle Systems and Solutions at OpenWorld Tokyo 2012

    - by ferhat
    Oracle OpenWorld Tokyo and JavaOne Tokyo start next week, on April 4th. We will cover Oracle systems and Oracle Optimized Solutions in several keynote talks and general sessions. The full schedule can be found here. Come by the DemoGrounds to learn more about mission-critical integration and optimization of the complete Oracle stack. Our Oracle Optimized Solutions experts will be on hand to discuss one-on-one several of Oracle's systems solutions and technologies. Oracle Optimized Solutions are proven blueprints that eliminate integration guesswork by combining best-in-class hardware and software components to deliver complete system architectures that are fully tested, and include documented best practices that reduce integration risks and deliver better application performance. And because they are highly flexible by design, Oracle Optimized Solutions can be implemented as an end-to-end solution or easily adapted into existing environments.

    Oracle Optimized Solutions, Servers, Storage, and Oracle Solaris - Sessions, Keynotes, and General Session Talks

    Wednesday April 4
    - 9:00 - 11:15 | Keynote: ENGINEERED FOR INNOVATION - Engineered Systems - Mark Hurd, President, Oracle; Takao Endo, President & CEO, Oracle Corporation Japan; John Fowler, EVP of Systems, Oracle; Ed Screven, Chief Corporate Architect, Oracle (English Session) [K1-01]
    - 11:50 - 12:35 | Simplifying IT: Transforming the Data Center with Oracle's Engineered Systems - Robert Shimp, Group VP, Product Marketing, Oracle (English Session) [S1-01]
    - 15:20 - 16:05 | Introducing Tiered Storage Solution for low cost Big Data Archiving [S1-33]
    - 16:30 - 17:15 | Simplifying IT - IT System Consolidation that also Accelerates Business Agility [S1-42]

    Thursday April 5
    - 9:30 - 11:15 | Keynote: Extreme Innovation - Larry Ellison, Chief Executive Officer, Oracle (English Session) [K2-01]
    - 11:50 - 13:20 | General Session: Server and Storage Systems Strategy - John Fowler, EVP of Systems, Oracle (English Session) [G2-01]
    - 16:30 - 17:15 | Top 5 Reasons why ZFS Storage appliance is "The cloud storage" - by SAKURA Internet Inc [L2-04]
    - 16:30 - 17:15 | The UNIX based Exa* Performance IT Integration Platform - SPARC SuperCluster [S2-42]
    - 17:40 - 18:25 | Full stack solutions of hardware and software with SPARC SuperCluster and Oracle E-Business Suite to minimize business cost while maximizing agility, performance, and availability [S2-53]

    Friday April 6
    - 9:30 - 11:15 | Keynote: Oracle Fusion Applications & Cloud - Robert Shimp, Group VP, Product Marketing; Anthony Lye, Senior VP (English Session) [K3-01]
    - 11:50 - 12:35 | IT at Oracle: The Art of IT Transformation to Enable Business Growth (English Session) [S3-02]
    - 13:00 - 13:45 | ZFS Storage Appliance: Architecture of high efficiency and high performance [S3-13]
    - 14:10 - 14:55 | Why "Niko Niko doga" chose ZFS Storage Appliance to support their growing requirements and storage infrastructure - by DWANGO Co, Ltd. [S3-21]
    - 15:20 - 16:05 | Osaka University: Lower TCO and higher flexibility for student study by Virtual Desktop - by Osaka University [S3-33]

    Oracle Developer Sessions with Oracle Systems and Oracle Solaris

    Friday April 6
    - 13:00 - 13:45 | Oracle Solaris 11 Developers [D3-03]
    - 13:00 - 14:30 | Oracle Solaris Tuning Contest (Hands-On Lab) [D3-04]
    - 14:00 - 14:35 | How to build high performance and high security Oracle Database environment with Oracle SPARC/Solaris (English Session) [D3-13]
    - 15:00 - 15:45 | IT Assets preservation and constructive migration with Oracle Solaris virtualization [D3-24]
    - 16:00 - 17:30 | The best packaging system for cloud environment - Creating an IPS package [D3-34]

    Follow Oracle Infrared on Twitter, Facebook, Google+, and LinkedIn to catch the latest news, developments, announcements, and inside views from Oracle Optimized Solutions.

    Read the article

  • Finally home - and something fully off topic

    - by Mike Dietrich
    Arrived at Munich Pasing last night at 0:50am ... finally :-) On Sunday I left the Dylan Hotel in Dublin (thanks to the staff there as well: you were REALLY helpful!!) around 7:30pm to go to the port - and came home on Tuesday morning at 1:15am. So all together 29:45hrs door-to-door - not bad for nearly 2000km relying only on public transport. And it could have been faster if there had been seats left on earlier TGVs. But I don't complain at all ;-)
    Just checked the website of Dublin Airport - it currently says:
    "17.00pm: Latest on flight disruptions at Dublin Airport. The IAA have advised us that based on the latest Volcanic Ash Advisory Centre London Dublin Airport will remain closed for all inbound and outbound commercial flights until 20.00hours. This effectively means that no flights will land or take off at Dublin Airport until then. A further update will be posted this afternoon."
    When traveling I always have my iPod with me. It has gotten a bit old now (I think I bought it 3 years ago, in November 2007) but it has a 160GB hard disk in it, so it fits most of my music collection (not the entire collection anymore, as I'm currently re-ripping everything to Apple Lossless because, at least to my ears, it makes a big difference - but I listen to good ol' vinyl as well ... and I don't download compressed music ;-) ). The battery of my little travel companion is still good for more than 20 hours of continuous music playback - and a band from Texas called Midlake was in my ears for most of the whole journey. I hadn't heard of them before, until I asked a lady at a Munich store a few weeks ago what she was playing on the speakers in the shop. She was amazed and came back with the CD cover, but I hesitated to buy it as I always want to listen to the tunes first - and on that day I had no time left to do so. But in Dublin I had a bit of spare time on Saturday, and I always enter record stores - and Tower Records was the sort of store I really enjoy, so I spent nearly two hours there - leaving with 3 Midlake CDs in my bag. So if you are interested, just listen to those tunes, which may remind some people of Fleetwood Mac. As I said in the title, fully off topic ;-)

    Read the article

  • Silverlight Cream for June 16, 2010 -- #884

    - by Dave Campbell
    In this Issue: Zoltan Arvai, Emiel Jongerius, Charles Petzold, Adam Kinney, Deepesh Mohnani, Timmy Kokke, and Damon Payne.

    Shoutouts:
    - Andy Beaulieu reported his Coding4Fun: Shuffleboard Game for WP7 has been posted -- big ol' tutorial and 6 videos of WP7 goodness
    - Karl Shifflett announced Three New WPF and Silverlight Designer Videos Posted
    - Charles Petzold has a cool Flip-Number Clock in Silverlight posted... cool demo, and the source.

    From SilverlightCream.com:
    - Data Driven Applications with MVVM Part II: Messaging, Unit Testing, and Live Data Sources - Zoltan Arvai has part 2 of his Data-Driven Apps with MVVM up, and this one also includes Messaging, Unit Testing, and Live WCF Data... good tutorial and all the code.
    - Silverlight DataContext Changed Event and Trigger - Emiel Jongerius takes a hard swing at the lack of DataContextChanged... his solution involves two attached properties instead of one... check it out and see what you think!
    - Orientation Strategies for Windows Phone 7 - Charles Petzold is discussing WP7 orientation... showing the problems you can get involved in, and how to work through them... and you might be surprised at how he does it :) ... pretty cool as usual, Charles!
    - Debugging the TranslateZoomRotate WPF Behavior in Blend - Adam Kinney talks through a bug reported about the WPF TranslateZoomRotate behavior. Again, it's WPF, but it's in Blend, and ya never know when the solution might apply.
    - I want my app to look like the Zune client - Deepesh Mohnani demonstrates using the Cosmopolitan theme to get his app to have the same look as the Zune client.
    - MVVM Project and Item Templates - Timmy Kokke is continuing with his cool SilverAmp media player, using it to expand upon the new Blend and Silverlight 4 features. This episode touches very lightly on cranking up a new MVVM project in Blend.
    - Great Features for MVVM Friendly Objects Part 0: Favor Composition Over Inheritance - Damon Payne has the first part up of a series he's working on with 'MVVM Friendly' features... he's building out a lot of the infrastructure in this post for the ones that follow... all good stuff.

    Stay in the 'Light!
    Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream
    Join me @ SilverlightCream | Phoenix Silverlight User Group
    Technorati Tags: Silverlight, Silverlight 3, Silverlight 4, Windows Phone, MIX10

    Read the article

  • Oracle and Eloqua Welcome Compendium’s Content Marketing

    - by Mike Stiles
    Yesterday, Oracle announced its acquisition of Compendium, a cloud-based content marketing provider that helps companies plan, produce and deliver engaging content across multiple channels throughout their customers' lifecycle. Why? Because every part of the above paragraph speaks to where modern marketing is and where it’s headed. Customers have now been empowered, thanks to the Internet and particularly social, with access to almost limitless amounts of information about companies and products. This includes the especially influential voices of friends and objective acquaintances that have experience with the product or brand. With mobile, this info is available instantly in the palm of their hand. All of this research and influence mind you, is taking place long before a prospect will ever engage with the brand itself or one of its sales reps. So how does a brand effectively insert itself into these conversations and this flow of the customer journey? Now, more than ever, marketers must deliver relevant and engaging content across multiple channels and throughout the entire customer journey to be useful, helpful, and influential. Compendium has a data-driven content marketing platform that lines up relevant content with customer data and personas so brands can accelerate the conversion of prospects. Now think about combining that with the Oracle Eloqua Marketing Cloud, part of Oracle's comprehensive CX solution. Marketers will be able to automate content delivery across channels by aligning persona-based content with customers' digital body language. Better customer engagement, improved sales lead quality, better return on marketing investment, and higher customer loyalty. Now we’re talking. Does data-driven content marketing have an impact? Compendium customer CVENT is a SaaS company specializing in meetings management tech. They wanted to increase leads & ad performance on their blog and dramatically increase their content. They also wanted to manage the creation, workflow, promotion and distribution of that content. With Compendium, CVENT created over 9,000 content elements, and sales-ready leads grew 325%. So Oracle Eloqua helps you target audiences, know buyers, and automate multi-channel marketing campaigns. Compendium lets you plan, publish, manage and measure content across content types and channels. Now kick it up yet another notch with Oracle’s Analytics, Big Data and Social solutions, and you’re using your marketing dollars to reach the right people in the right place at the right time with the right content. And as if that weren’t enough, your customers will love you for it. @mikestiles

    Read the article

  • SQL SERVER – Transaction Log Full – Transaction Log Larger than Data File – Notes from Fields #001

    - by Pinal Dave
    I am very excited to announce a new series on this blog – Notes from Fields. I have been blogging for almost 7 years on this blog and it has been a wonderful experience. Though I have extensive experience with SQL and databases, it is always a good idea to consult experts for their advice and opinion. Following the same thought process, I have started this new series of Notes from Fields. In this series we will have notes from various experts in the database world. My friends at Linchpin People have graciously decided to support me in my new initiative. Linchpin People are database coaches and wellness experts for a data driven world. In this very first episode of the Notes from Fields series, database expert Tim Radney (partner at Linchpin People) explains a very common issue DBAs and developers face in their careers: when the database log fills up your hard drive, or your database log is larger than your data file. Read about Tim's experience in his own words.
    As a consultant, I encounter a number of common issues with clients. One of the more common things I encounter is finding a user database in the FULL recovery model that does not take regular transaction log backups, or has never had a transaction log backup. When I find this, usually the transaction log is several times larger than the data file. Finding this issue is very significant to me in that it allows me to discuss service level agreements with the client. I get to ask questions such as: are nightly full backups sufficient, or do they need point-in-time recovery? This conversation gets the customer thinking about their disaster recovery and high availability solutions. This issue is also very prominent on SQL Server forums, and usually has a title like "Help, my transaction log has filled up my disk" or "Help, my transaction log is many times the size of my database".
    In cases where the client only needs the previous night's full backup, I am able to change the recovery model to SIMPLE and shrink the transaction log using DBCC SHRINKFILE (2,1), or by specifying the transaction log file name using DBCC SHRINKFILE (file_name, target_size). When the client needs point-in-time recovery, then in most cases I will still end up switching the client to the SIMPLE recovery model to truncate the transaction log, followed by a full backup. I will then schedule a SQL Agent job to take regular transaction log backups with an interval determined by the client to meet their service level agreements. It should also be noted that typically when I find an overgrown transaction log, the virtual log file count is also out of control. My cleanup will always take that into account as well. That is a subject for a future blog post.
    If your SQL Server is facing any issue we can Fix Your SQL Server. Additional reading:
    - Monitoring SQL Server Database Transaction Log Space Growth – DBCC SQLPERF(logspace)
    - SQL SERVER – How to Stop Growing Log File Too Big
    - Shrinking Truncate Log File – Log Full
    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
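    To make the two scenarios above concrete, here is a hedged T-SQL sketch built from the commands Tim names (database, file and path names are placeholders; check your actual log file name in sys.database_files before running anything like this):

        -- Scenario 1: the client only needs the previous night's full backup.
        ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;
        DBCC SHRINKFILE (MyDatabase_log, 1024);   -- target size in MB

        -- Scenario 2: the client needs point-in-time recovery.
        -- Truncate the log via SIMPLE, switch back to FULL, take a full backup,
        -- then schedule regular log backups to keep the log under control.
        ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;
        DBCC SHRINKFILE (MyDatabase_log, 1024);
        ALTER DATABASE MyDatabase SET RECOVERY FULL;
        BACKUP DATABASE MyDatabase TO DISK = N'D:\Backup\MyDatabase.bak';
        BACKUP LOG MyDatabase TO DISK = N'D:\Backup\MyDatabase.trn';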

    Read the article

  • Problem installing Ubuntu on my HP Pavilion g4-1226se

    - by vivek Verma
    Dual booting error on an HP Pavilion g4-1226se
    Dear Sir or Madam,
    My name is Vivek Verma. I use an HP laptop, model HP Pavilion g4-1226se, purchased in February 2012, with Windows 7 Home Basic 64-bit already installed on it. Now I want to install Ubuntu 12.04 LTS or 13.10 alongside it. I have tried many times to install it from a live CD or a USB installer, using many different live CDs and pen drives, but it never succeeds, and now I am in a very big problem.
    When I boot from the CD or USB drive to install Ubuntu, the laptop screen goes almost black (the brightness drops very low and there is very low visibility) and shows nothing. When I tilt the screen I can make out a graphics option on the Ubuntu installation screen, and when I press the "dual boot with settings" button and continue, the laptop shuts down after 2 to 5 minutes.
    The HP service center people say my laptop hardware has no problem and told me to contact Ubuntu tech support. So please help me if possible.
    My laptop configuration is here:
    - Product Name: g4-1226se; Product Number: QJ551EA
    - Microprocessor: 2.4 GHz Intel Core i5-2430M; 3 MB L3 cache
    - Memory: 4 GB DDR3 (max upgradeable to 4 GB DDR3)
    - Video Graphics: Intel HD 3000 (up to 1.65 GB)
    - Display: 35.5 cm (14.0") High-Definition LED-backlit BrightView Display (1366 x 768)
    - Hard Drive: 500 GB SATA (5400 rpm)
    - Multimedia Drive: SuperMulti DVD±R/RW with Double Layer Support
    - Network: Integrated 10/100 BASE-T Ethernet LAN; 802.11 b/g/n wireless
    - Sound: Altec Lansing speakers
    - Keyboard: Full size island-style keyboard with home roll keys
    - Pointing Device: TouchPad supporting Multi-Touch gestures with On/Off button
    - Card Slots: Multi-Format Digital Media Card Reader for Secure Digital cards, Multimedia cards
    - External Ports: 1 VGA, 1 headphone-out, 1 microphone-in, 3 USB 2.0, 1 RJ45
    - Dimensions: 34.1 x 23.1 x 3.56 cm; Weight: starting at 2.1 kg
    - Power: 65W AC Power Adapter, 6-cell Lithium-Ion (Li-Ion)
    - In the box: Webcam with Integrated Digital Microphone (VGA)
    - Operating System: Windows 7 Home Basic 64-bit (genuine)
    Sir, please help me if possible.
    Name: Vivek Verma
    Contact no.: +919911146737
    Email: [email protected]

    Read the article

  • Best Platform/Engine for turn based Client/Server Android game

    - by Paradine
    I'm currently designing a turn-based game for tablets, initially for Android, with porting to iOS considered later in the design. I'm having trouble narrowing down the available technologies enough to even know where to spend my research time. I am hoping that if I explain what I am trying to achieve, someone may be able to suggest a platform and/or engine. I've looked into some of the open source engines ( http://www.cuteandroid.com/ten-open-source-android-2d-or-3d-game-engine-for-android-developers ) and some appear to handle much of what I might require - although with a higher focus on graphics than I need. Mages looks interesting, although development appears to have ceased. If I could somehow leverage GoogleApps, that would be excellent.
    Here is what I am trying to achieve (a sketch of the resolution flow follows this list):
    - PvP turn-based strategy game over the internet - minimal animation and bandwidth required
    - Players match up online using a MetaGame system
    - MatchID created on the Resolution Server and the game starts
    - Clients have a 30-second countdown to select a MoveString
    - Clients send a small, secure, timestamped and MatchID-tagged MoveString to the Resolution Server
    - Resolution Server looks up the MoveString for each player, resolves it, and updates the players' status in the MatchID record on the server
    - Resolution Server updates the client views
    - Repeat until victory conditions are met - MatchID closed, rewards earned in the MetaGame
    There will also need to be a full social and account system and a metagame backend - but this could be running on separate system(s). The tablet in offline mode would be for catalog browsing and perhaps single-player AI - but I'm focusing on the Resolution Server at this point. I'm not even certain whether I would be looking at an Android app or a WebApp at this stage! I want a custom GUI, so I guess an app - but maybe, as I have little animation, a WebApp might also work. Probably some combination of both.
    There will be very small overhead in data between client and server - essentially a small text string every 30 seconds sent to the Resolution Server, which looks up the effect and applies it to the opponent's string and determines some results to apply to the match. The client view is updated minimally with the results (only 5 in-game integers are tracked) - perhaps triggering small animations/popups on the client to show the end result, e.g. an explosion.
    If you have suggestions for a good technology or platform for best achieving the Resolution Server, I'd love to hear them. Also, if you have experience with open source engines and could narrow down which (if any) might be most suitable, that would be a big help. Thanks in advance.
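    Purely as an illustration of the resolution flow sketched above - all names, rules, and numbers here are hypothetical placeholders, not a platform recommendation:

        // Hypothetical Resolution Server turn step, written in C# for brevity.
        using System;
        using System.Collections.Generic;

        public sealed class Match
        {
            public Guid MatchId { get; } = Guid.NewGuid();
            public int HealthA = 100, HealthB = 100;   // two of the few in-game integers tracked
        }

        public static class ResolutionServer
        {
            // Hypothetical rules table mapping a MoveString to its effect.
            static readonly Dictionary<string, int> MoveDamage =
                new Dictionary<string, int> { { "ATTACK", 10 }, { "DEFEND", 0 }, { "CHARGE", 5 } };

            // Called when the 30-second window closes with both players' MoveStrings.
            public static void ResolveTurn(Match match, string moveA, string moveB)
            {
                // Each move is resolved against the opponent's move.
                int toB = MoveDamage[moveA] - (moveB == "DEFEND" ? 5 : 0);
                int toA = MoveDamage[moveB] - (moveA == "DEFEND" ? 5 : 0);
                match.HealthB -= Math.Max(0, toB);
                match.HealthA -= Math.Max(0, toA);
                // ...persist the updated integers, push minimal results to both clients,
                // and close the MatchId when a victory condition is met...
            }
        }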

    Read the article

  • Summit Old, Summit New, Summit Borrowed...

    - by Rob Farley
    PASS Summit is coming up, and I thought I'd post a few things.

    Summit Old...
    At the PASS Summit, you will get the chance to hear presentations by the SQL Server establishment. Just about every big name in the SQL Server world is a regular at the PASS Summit, so you will get to hear and meet people like Kalen Delaney (@sqlqueen) (who just recently got awarded MVP status for the 20th year running), and people from all around the world, such as the UK's Chris Webb (@technitrain) or Pinal Dave (@pinaldave) from India. Almost all the household names in SQL Server will be there, including a large contingent from Microsoft. The PASS Summit is by far the best place to meet the legends of SQL Server. And they're not all old. Some are, but most of them are younger than you might think.

    ...Summit New...
    The hottest topics are often about the newest technologies (such as SQL Server 2012). But you will almost certainly learn new stuff about older versions too. But that's not what I wanted to pick on for this point. There are many new speakers at every PASS Summit, and content that has not been covered in other places. This year, for example, LobsterPot's Roger Noble (@roger_noble) is giving a presentation for the first time. He's a regular around the Australian circuit, but this is his first time presenting to a US audience. New Zealand's Paul White (@sql_kiwi) is attending his first PASS Summit, and will be giving over four hours of incredibly deep stuff that has never been presented anywhere in the US before (I can't say the world, because he did present similar material in Adelaide earlier in the year).

    ...Summit Borrowed...
    No, I'm not talking about plagiarism – the talks you'll hear are all their own work. But you will get a lot of stuff you'll be able to take back and apply at work. The PASS Summit sessions are not full of sales pitches telling you about how great things could be if only you'd buy some third-party vendor product. It's simply not that kind of conference, and PASS doesn't allow that kind of talk to take place. Instead, you'll be taught techniques, and be able to download scripts and slides to let you perform that magic back at work when you get home. You will definitely find plenty of ideas to borrow at the PASS Summit.

    ...Summit Blue
    Yeah – and there's karaoke. Blue - Jason - SQL Karaoke - YouTube

    Read the article

  • Upcoming presentations by me at Windows Azure Events

    - by ScottGu
    I recently blogged about a big wave of improvements we recently released for Windows Azure. I also delivered a keynote on June 7th that discussed and demoed the enhancements – you can watch a recorded version of it online. Over the next few weeks I'll be doing several more speaking events about Windows Azure in North America and Europe. Below are details on some of the upcoming events and how you can sign up to attend one in person:

    Scottsdale, Arizona on June 19th, 2012
    Attend this FREE all-day event in Scottsdale, Arizona on Tuesday, June 19th to learn more about Windows Azure, ASP.NET, Web API and SignalR. I'll be doing a 2 hour presentation on Windows Azure, followed by Scott Hanselman on ASP.NET and Web API, and Brady Gaster on SignalR. Learn more about the event and register to attend here.

    Cambridge, United Kingdom on June 21st, 2012
    Attend this FREE two-hour event in Cambridge (UK) the evening of Thursday, June 21st. I'll be covering the new Windows Azure release – expect lots of demos and audience participation. Learn more about the event and register to attend here.

    London, United Kingdom on June 22nd, 2012
    Attend the FREE all-day Microsoft Cloud Day conference in London (UK) on Friday, June 22nd to learn about Windows Azure and Windows 8. I'll be kicking off the event with a two hour keynote, and will be followed by some other fantastic speakers. Learn more about the conference and register to attend here.

    TechEd Europe in Amsterdam, Netherlands on June 26th, 2012
    I'll be at TechEd Europe this year, where I'll be presenting on Windows Azure. I'll be in the general session keynote and also have a foundation track session on Windows Azure on Tuesday, June 26th. Learn more about TechEd Europe and register to attend here.

    Amsterdam, Netherlands on June 26th, 2012
    Not attending TechEd Europe but near Amsterdam and still want to see me talk? The good news is that the leaders of the Windows Azure User Group NL have set up a FREE event during the evening of Tuesday, June 26th where I'll be presenting along with Clemens Vasters. Learn more about the event and register to attend here.

    Dallas, Texas on July 10th, 2012
    I'll be in Dallas, Texas on Tuesday, July 10th, presenting at a FREE all-day Microsoft Cloud Summit. I'll kick off the day with a keynote, which will be followed by a great set of additional Windows Azure talks as well as a "Grill the Gu" Q&A session with me over lunch. Learn more about the event and register to attend here.

    Additional Events
    I'll be doing many more events and talks in the months ahead – I'll blog details of additional conferences/events I'm doing as they are fixed. Hope to see some of you at the above ones!

    Scott

    Read the article

  • SQL SERVER – NTFS File System Performance for SQL Server

    - by pinaldave
    Note: Before practicing any of the suggestions in this article, consult your IT infrastructure admin; applying a suggestion without proper testing can damage your system.

    Question: "Pinal, we have 80 GB of data including all the database files, and our data is on an NTFS file system. We have proper backups set up. Do you have any suggestions for improving our NTFS file system performance? Our SQL Server box is running only SQL Server and nothing else. Please advise."

    When I receive questions like the one I have just listed above, it often sends me into deep thought. Honestly, I know a lot, but there are plenty of things I believe can be built with the community's knowledge base. Today I need you to help me to complete this list. I will start the list and you help me complete it.

    NTFS File System Performance Best Practices for SQL Server:
    - Disable indexing on disk volumes
    - Disable generation of 8.3 names (command: FSUTIL BEHAVIOR SET DISABLE8DOT3 1)
    - Disable last file access time tracking (command: FSUTIL BEHAVIOR SET DISABLELASTACCESS 1)
    - Keep some space empty (let us say 15%, for reference) on the drive if possible (only on the Filestream data storage volume)
    - Defragment the volume
    - Add your suggestions here…

    The one which often gets a pretty big debate going is the NTFS allocation size. I have seen that on the disk volume which stores filestream data, increasing the allocation unit size from 4K to 64K reduces fragmentation. Again, I suggest you attempt this only after proper testing on your server. Every system is different and the files stored are different. This is where I would like to request you to share your experience related to NTFS allocation size. If you do not agree with any of the above suggestions, leave a comment with a reference and I will modify the list. Please note that the above list was prepared assuming SQL Server is the only application running on the computer system. The next question is whether all of this is still relevant for SSDs – I personally have no experience with SSDs and large databases, so I will refrain from commenting.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
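    As a small, hedged aside to the allocation size discussion above: you can check a volume's current allocation unit (cluster) size with the same fsutil tool used in the list - look for the "Bytes Per Cluster" value in the output (run from an elevated command prompt; the drive letter is just an example). The allocation unit size itself can only be chosen when the volume is formatted, for example with FORMAT E: /FS:NTFS /A:64K.

        FSUTIL FSINFO NTFSINFO E: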

    Read the article

  • Handling Coding Standards at Work (I'm not the boss)

    - by Josh Johnson
    I work on a small team, around 10 devs. We have no coding standards at all. There are certain things that have become the norm, but some ways of doing things are completely disparate. My big one is indentation. Some use tabs, some use spaces, some use a different number of spaces, which creates a huge problem. I often end up with conflicts when I merge because someone used their IDE to auto format and they use a different character to indent than I do. I don't care which we use; I just want us all to use the same one. Or else I'll open a file and some lines have curly brackets on the same line as the condition while others have them on the next line. Again, I don't mind which one, so long as they are all the same. I've brought up the issue of standards to my direct manager, one on one and in group meetings, and he is not overly concerned about it (there are several others who share the same view as myself). I brought up my specific concern about indentation characters and he thought a better solution would be to "create some kind of script that could convert all that when we push/pull from the repo." I suspect that he doesn't want to change, and this solution seems overly complicated and prone to maintenance issues down the road (also, this addresses only one manifestation of a larger issue). Have any of you run into a similar situation at work? If so, how did you handle it? What would be some good points to help sell my boss on standards? Would starting a grassroots movement to create coding standards, among those of us who are interested, be a good idea? Am I being too particular? Should I just let it go? Thank you all for your time.
    Note: Thanks everyone for the great feedback so far! To be clear, I don't want to dictate One Style To Rule Them All. I'm willing to concede my preferred way of doing something in favor of what suits everyone the best. I want consistency and I want this to be a democracy. I want it to be a group decision that everyone agrees on. True, not everyone will get their way, but I'm hoping that everyone will be mature enough to compromise for the betterment of the group.
    Note 2: Some people are getting caught up in the two examples I gave above. I'm more after the heart of the matter. It manifests itself in many examples: naming conventions, huge functions that should be broken up, whether something goes in a util or a service, whether something should be a constant or injected, whether we should all use different versions of a dependency or the same, whether an interface should be used for a given case, how unit tests should be set up, what should be unit tested, and (Java specific) whether we should use annotations or external config. I could go on.

    Read the article

  • TechEd 2012: Day 3 – Morning TFS

    - by Tim Murphy
    My morning sessions for day three were dominated by Team Foundation Server. This has been a hot topic for our clients lately, so this topic really struck a chord.
    The speaker for the first session was from Boeing. It was nice to hear how a company mixes both agile and waterfall project management. The approaches that he presented were very pragmatic. For their needs, reporting is the crucial part of their decision to use TFS. This was interesting, since this is probably the last aspect that most shops would think about.
    The challenge of getting users to adopt TFS was brought up by the audience. As with the other discussion points, he took a very level-headed stance. The approach he was prescribing was to eat the elephant a bite at a time instead of all at once. If you try to convert your entire shop at once, the culture shock will most likely kill the effort.
    Another key point he reminded us of is that you need to make sure that standards and compliance are taken into account when you set up TFS. If you don't implement a tool and processes around it that comply with the standards bodies that govern your business, you are in for a world of hurt.
    Ultimately, the reason they chose TFS was that it was the first tool that incorporated all the ALM features that they needed. It also reduced licensing costs, compared with buying all the different tools they would otherwise need to complete the same tasks. They got to this point by doing an industry evaluation. Although TFS came out on top, he said that it still has a big gap in the Java area. Of course, in this market there are vendors helping to close that gap.
    The second session was on how continuous feedback in agile is a new focus in VS2012. The problems they intended to address included cycle time, average time to repair, and root cause analysis. The speakers fired features at us as if they were firing a machine gun. I will just say that I am looking forward to digging into the product after seeing this presentation. Beyond that, I will simply list some of the key features that caught my attention:
    - Ability to link documents into tasks as artifacts
    - Web access portal
    - PowerPoint storyboards
    - Exploratory testing
    - Request feedback (allows users to record notes, screen shots and video/audio)
    See you after the second half.
    del.icio.us Tags: TechEd, TechEd 2012, TFS, Team Foundation Server

    Read the article

  • Cocos2d: Changing b2Body x val every frame causes jitter

    - by Joey Green
    So, I have a jumping mechanism similar to what you would see in Doodle Jump, where the character jumps and you use the accelerometer to make the character change direction left or right. I have a player object with a position and a Box2D b2Body with a position. I'm changing the player's X position via the accelerometer and the Y position according to Box2D. Pseudocode for this is as follows:

        // accelerometer acceleration
        player.position = new X
        // world update
        physicsWorld->step()
        // this will get me the new Y according to the physics simulation,
        // so we keep the body's Y value but change X to the new X from the accelerometer data
        playerPhysicsBody.position = new pos(player.position.x, keepYval)
        player.position = playerPhysicsBody.position

    Now this is simplifying my code, but I'm doing the position conversion back and forth via multiplying or dividing by PTM_. Well, I'm getting a weird jitter effect after a big jump in acceleration data. So, my questions are:
    1) Is this the right approach - having the accelerometer control the X position and Box2D control the Y position, and just syncing everything up every frame?
    2) Is there some issue with updating a b2Body's X position every frame?
    3) Any idea what might be creating this jitter effect?
    I've collected some data while running the game. "Pre-body" is before I set the X value on the b2Body in my update method, after I call world->step(). "Post" is of course afterwards. As you can see, there is definitely a pattern.
    2012-06-19 08:14:13.118 Game[1073:707] pre-body pos 5.518720~24.362963
    2012-06-19 08:14:13.120 Game[1073:707] post-body pos 5.060156~24.362963
    2012-06-19 08:14:13.131 Game[1073:707] player velocity x: -31.833529
    2012-06-19 08:14:13.133 Game[1073:707] delta 0.016669
    2012-06-19 08:14:13.135 Game[1073:707] pre-body pos 5.060156~24.689455
    2012-06-19 08:14:13.137 Game[1073:707] post-body pos 5.502138~24.689455
    2012-06-19 08:14:13.148 Game[1073:707] player velocity x: -31.833529
    2012-06-19 08:14:13.150 Game[1073:707] delta 0.016667
    2012-06-19 08:14:13.151 Game[1073:707] pre-body pos 5.502138~25.006948
    2012-06-19 08:14:13.153 Game[1073:707] post-body pos 5.043575~25.006948
    2012-06-19 08:14:13.165 Game[1073:707] player velocity x: -31.833529
    2012-06-19 08:14:13.167 Game[1073:707] delta 0.016644
    2012-06-19 08:14:13.169 Game[1073:707] pre-body pos 5.043575~25.315441
    2012-06-19 08:14:13.170 Game[1073:707] post-body pos 5.485580~25.315441
    2012-06-19 08:14:13.180 Game[1073:707] player velocity x: -31.833529
    2012-06-19 08:14:13.182 Game[1073:707] delta 0.016895
    2012-06-19 08:14:13.185 Game[1073:707] pre-body pos 5.485580~25.614935
    2012-06-19 08:14:13.188 Game[1073:707] post-body pos 5.026768~25.614935
    2012-06-19 08:14:13.198 Game[1073:707] player velocity x: -31.833529
    2012-06-19 08:14:13.199 Game[1073:707] delta 0.016454
    2012-06-19 08:14:13.207 Game[1073:707] pre-body pos 5.026768~25.905428
    2012-06-19 08:14:13.211 Game[1073:707] post-body pos 5.469213~25.905428
    2012-06-19 08:14:13.217 Game[1073:707] acceleration x -0.137421
    2012-06-19 08:14:13.223 Game[1073:707] player velocity x: -65.022644
    2012-06-19 08:14:13.229 Game[1073:707] delta 0.016603
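    For what it's worth, here is a hedged C++ sketch (standard Box2D 2.x API; PTM_RATIO stands for the usual cocos2d points-to-meters constant, and accelerometerX/targetVelocityX are placeholders for however you map the accelerometer reading) contrasting the teleport approach described above with driving the X velocity instead, which keeps the solver in charge of the motion and tends to avoid teleport-induced jitter:

        // Option 1: what the post describes - overwrite X each frame, keep the solver's Y.
        // SetTransform teleports the body, which can fight the solver and produce jitter.
        b2Vec2 pos = playerBody->GetPosition();
        pos.x = accelerometerX / PTM_RATIO;   // new X derived from accelerometer data
        playerBody->SetTransform(pos, playerBody->GetAngle());

        // Option 2: drive X through velocity instead, so the solver integrates the motion.
        b2Vec2 vel = playerBody->GetLinearVelocity();
        vel.x = targetVelocityX;              // X velocity derived from accelerometer data
        playerBody->SetLinearVelocity(vel);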

    Read the article

  • A Technique for Performing Cross-host Upgrades to FMW 11gR1

    - by reza.shafii
    The main tool used for the upgrade of iAS 10g mid-tier (data not stored in 10g meta-data repository schemas) environments to Fusion Middleware (FMW) 11gR1 is the FMW Upgrade Assistant (UA). This tool performs what we call an out-of-place upgrade, which in a nutshell means the following:
    - The upgrade is performed by pointing the UA to a 10g source topology as well as an 11g destination topology.
    - The destination topology must be created, using the standard FMW 11g installation and configuration process, prior to the execution of the UA.
    - The UA carries over all of the required changes from the source environment to the destination.
    This approach has a number of advantages rooted in the fact that the source environment - which is presumably working well and serving its needs - is not disturbed during the upgrade process, as the UA only performs read-only operations on it.
    The UA today can only perform such out-of-place upgrades when the source and destination topologies reside on the same machine. This can sometimes be an issue when the host on which the iAS 10g environment is installed is running at full capacity and installing new hardware for the purpose of the upgrade (in most cases what would be needed is extra memory) is completely infeasible. In such cases, upgrade across a different host is still possible by using the following technique:
    1. Back up your source environment and restore it on to a target machine. The backup and restore procedures for the iAS 10.1.2 components are described within this section of the release's Administration Guide. As described in the docs, the Oracle Application Server Backup and Recovery Tool provides capabilities for backing up the installation on one machine and restoring it on another, which is exactly what you want to do for the purpose of a cross-host upgrade.
    2. Ensure that the restored environment on your target host is fully functional.
    3. Go through the upgrade steps on the target machine to perform the out-of-place upgrade using the UA.
    Although this process does add another big step to the overall upgrade process, it does make it possible to perform a cross-host upgrade to 11gR1 when necessary. The easiest approach would of course be to find a way of ensuring that the required hardware capacity for the upgrade is available on the original 10g host. Techniques such as scheduling the upgrade at low-traffic times and/or temporarily stopping other processes running on the machine to clear up some memory might provide the sufficient memory needed to perform the out-of-place upgrade and save you the need for the backup/restore technique I have described in this post.

    Read the article

  • C# Domain-Driven Design Sample Released

    - by Artur Trosin
    In this post I want to announce that the NDDD Sample application(s) have been released and share the work with you. You can access it here: http://code.google.com/p/ndddsample.
    NDDDSample matches, from a functionality perspective, DDDSample 1.1.0, which is Java-based and a joint effort by Eric Evans' company Domain Language and the Swedish software consulting company Citerus. But because NDDDSample is based on .NET technologies, those two implementations cannot be matched directly. However, concepts, practices, values, and patterns - especially DDD - are cross-language and cross-platform :). Implementing the .NET version of the application was an interesting journey, because as a .NET developer I now better understand the differences, positive and negative, between these two platforms. Even though there are differences, they can be overcome; in many cases it was not so hard to match a Java lib/framework with a .NET one during the implementation. Here is the technology stack:
    1. .NET 3.5 - framework
    2. VS.NET 2008 - IDE
    3. ASP.NET MVC 2.0 - for administration and tracking UI
    4. WCF - communication mechanism
    5. NHibernate - ORM
    6. Rhino Commons - NHibernate session management, base classes for in-memory unit tests
    7. SQLite - database
    8. Windsor - inversion of control container
    9. Windsor WCF facility - for better integration with NHibernate
    10. MvcContrib - and in particular its Castle WindsorControllerFactory, in order to enable IoC for controllers
    11. WPF - for the incident logging application
    12. Moq - mocking lib used for unit tests
    13. NUnit - unit testing framework
    14. Log4net - logging framework
    15. Cloud based on Azure SDK
    These are not the latest technologies, tools and libs at the moment, but if anyone thinks it would be useful to migrate the sample to the latest current technologies and versions, please comment. The cloud version of the application is based on the Azure emulated environment provided by the SDK, so it hasn't been tested in a 'real' Azure scenario (we just do not have access to it). Thanks to the participants: Eugen Gorgan, who was involved directly in development; Ruslan Rusu and Victor Lungu, who spent their free time discussing .NET-specific decisions; and Eugen Navitaniuc, who helped with Java-related questions. Also, a big thank you to Cornel Cretu, who designed a nice logo and helped with some browser incompatibility issues.
    Any review and feedback are welcome! Thank you, Artur Trosin

    Read the article

  • Playing NSF music in FMOD.net

    - by Tesserex
    So, as the title says, I want to be able to play NSF files using FMOD, because my project already uses FMOD and I'd rather not replace it. This will involve figuring out how existing players and emulators work and porting one. I haven't yet found an existing player that uses FMOD. My starting point is the MyNes source from http://sourceforge.net/projects/mynes/. There are two big steps between here and what I'm looking for:
    1. MyNes plays from a ROM, not NSF. So, I have to rip out the APU and get it to play NSF files.
    2. The MyNes APU uses SlimDX, so I have to convert that to FMOD.NET.
    I am really stuck on how to go about either of these, because I'm not that familiar with audio formats and it's hard finding resources online. So here are a few questions:
    1. From what I can tell from the NSF spec at http://kevtris.org/nes/nsfspec.txt, it just contains the relevant memory section of the ROM, plus the header. If anyone can verify or correct this, that would be great.
    2. The emulator APU uses data from the rest of the emulator to play, including things like cycle counts. I'm not sure what replaces this in a standalone player. Can't I just load all the music data at once into a stream and play it?
    3. Joining #1 and #2, does the header data from the NSF substitute for some of the ROM data in the emulator code?
    4. Using FMOD, will I be following the usercreatedsound example for loading a stream? And does this format count as PCM? Specifically, MyNes says PCM8.
    Any tips on loading/playing the stream in FMOD are appreciated. As an aside, I don't really understand the loading/playing sections of the spec I linked at all. They seem to apply to 6502 systems/emulators only and not to my situation.
    I know it's a long shot for anyone here to have enough experience in this area to help, but anything you can provide is definitely appreciated. A link to an existing .NET library that does this would be even better, but I don't believe one exists.
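    Since the usercreatedsound pattern comes up in question 4 above, here is a heavily hedged C# sketch of that pattern with the FMOD Ex .NET wrapper. Exact struct fields and createSound overloads vary between wrapper versions, and FillPcm() is a hypothetical stand-in for whatever the ported APU produces; treat this as the shape of a solution, not working NSF playback:

        using System;
        using System.Runtime.InteropServices;

        class NsfStreamSketch
        {
            static FMOD.System  system  = null;
            static FMOD.Sound   sound   = null;
            static FMOD.Channel channel = null;
            // Keep a reference to the delegate so the GC cannot collect it mid-playback.
            static FMOD.SOUND_PCMREADCALLBACK readCallback;

            static void Main()
            {
                FMOD.Factory.System_Create(ref system);
                system.init(32, FMOD.INITFLAGS.NORMAL, IntPtr.Zero);

                readCallback = new FMOD.SOUND_PCMREADCALLBACK(PcmRead);

                FMOD.CREATESOUNDEXINFO exinfo = new FMOD.CREATESOUNDEXINFO();
                exinfo.cbsize           = Marshal.SizeOf(exinfo);
                exinfo.numchannels      = 1;                        // NES APU output is mono
                exinfo.defaultfrequency = 44100;
                exinfo.format           = FMOD.SOUND_FORMAT.PCM8;   // MyNes outputs PCM8
                exinfo.length           = 44100;                    // 1 second of 8-bit mono samples
                exinfo.decodebuffersize = 4410;                     // samples handed out per read
                exinfo.pcmreadcallback  = readCallback;

                system.createSound((string)null,
                    FMOD.MODE.OPENUSER | FMOD.MODE.LOOP_NORMAL | FMOD.MODE.CREATESTREAM,
                    ref exinfo, ref sound);
                system.playSound(FMOD.CHANNELINDEX.FREE, sound, false, ref channel);
                // ...pump system.update() in the main loop while the APU emulation runs...
            }

            // FMOD calls this whenever the stream needs more PCM data.
            static FMOD.RESULT PcmRead(IntPtr soundraw, IntPtr data, uint datalen)
            {
                byte[] samples = FillPcm(datalen);   // hypothetical: datalen bytes from the APU
                Marshal.Copy(samples, 0, data, (int)datalen);
                return FMOD.RESULT.OK;
            }

            static byte[] FillPcm(uint count) { return new byte[count]; } // placeholder: silence
        }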

    Read the article

  • Sqlite & Entity Framework 4

    - by Dane Morgridge
    I have been working on a few client app projects in my spare time that need to persist small amounts of data, and I have been looking for an easy-to-use embedded database. I really like db4o, but I'm not wanting to open source this particular project, so it was not an option. Then I remembered that there is an ADO.NET provider for sqlite. Being a fan of sqlite in general, I downloaded it and gave it an install. The installer added tooling support for both Visual Studio 2008 & 2010, which is nice because I am working almost exclusively in 2010 at the moment. I noticed that the provider also had support for Entity Framework, but not specifically v4. I created a database using the tools that get installed with Visual Studio and all seemed to work fine. I went on to create an Entity Framework context and selected the sqlite database, and to my surprise it worked without any problems. The model showed up just like it would for any database, and so I started to write a little code to test and then.. BAM!.. Exception. "Mixed mode assembly is built against version 'v2.0.50727' of the runtime and cannot be loaded in the 4.0 runtime without additional configuration information." A quick bit of searching on Bing found the answer. To get it working, you need to include the following code in your web.config file:

        <startup useLegacyV2RuntimeActivationPolicy="true">
          <supportedRuntime version="v4.0" />
        </startup>

    And then everything magically works. Entity Framework 4 features worked, like lazy loading, and even the POCO templates worked. The only thing that didn't work was model first development. The SQL generated was for SQL Server and of course wouldn't run on sqlite without some modifications. The only other oddity I found was that in order to have an auto-incrementing id, you have to use the full integer data type for sqlite; a regular int won't do the trick. This translates to an Int64, or a long, when working with it in Entity Framework. Not a big deal, but something you need to be aware of. All in all, I am quite impressed with the Entity Framework support I found with sqlite. I wasn't really expecting much at all, and I was pleasantly surprised. I downloaded the ADO.NET sqlite provider from http://sqlite.phxsoftware.com/. If you want to use an embedded database with Entity Framework, give it a look. It will be well worth your time.
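    To illustrate the Int64 note above, here is a minimal sketch (entity and property names are hypothetical) of what an entity keyed by sqlite's auto-incrementing "integer" column looks like on the .NET side - the key surfaces as long, not int:

        // Hypothetical POCO for use with the sqlite ADO.NET provider and EF4.
        public class Customer
        {
            public long Id { get; set; }     // sqlite "integer" primary key maps to Int64
            public string Name { get; set; }
        }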

    Read the article

  • Silverlight Cream for May 20, 2010 -- #866

    - by Dave Campbell
    In this Issue: Mike Snow, Victor Gaudioso, Ola Karlsson, Josh Twist(-2-), Yavor Georgiev, Jeff Wilcox, and Jesse Liberty.

    Shoutouts:
    - Frank LaVigne has an interesting observation on his site: The Big Take-Away from MIX10
    - Rishi has updated all his work, including a release of nRoute with the latest bits: nRoute Samples Revisited
    - Looks like I posted one of Erik Mork's links two days in a row :) ... that's because I meant to post this one: Silverlight Week – How to Choose a Mobile Platform
    - Just in case you missed it (and for me to find it easily), Scott Guthrie has an excellent post up on Silverlight 4 Tools for VS 2010 and WCF RIA Services Released

    From SilverlightCream.com:
    - Silverlight Tip of the Day #23 – Working with Strokes and Shapes - Mike Snow's Silverlight Tip of the Day number 23 is up, covering strokes and shapes -- as in dotted and dashed lines.
    - New Silverlight Video Tutorial: How to Fire a Visual State based upon the value of a Boolean Variable - Victor Gaudioso's latest video tutorial is up, on selecting and firing a visual state based on a boolean... project included.
    - Simultaneously calling multiple methods on a WCF service from Silverlight - Ola Karlsson details a problem he had where he was calling multiple WCF services to pull all his data and had problems... it turned out to be a blocking call, and he found the solution in the forums and details it all out for us... actually, a search at SilverlightCream.com would have found one of the better posts listed once you knew the problem :)
    - Securing Your Silverlight Applications - Josh Twist has an article in MSDN on Silverlight security. He talks about Windows, forms, and .NET authorization, then WCF, WCF Data, cross domain, and XAP files. He also has some good external links.
    - Template/View selection with MEF in Silverlight - Josh Twist points out that this next article is just a simple demonstration, but he's discussing, and provides code for, a MEF-driven ViewModel navigation scheme with animation on the navigation.
    - Workaround for accessing some ASMX services from Silverlight 4 - Are you having problems hitting your asmx web service with Silverlight 4? Yeah... others are too! Yavor Georgiev at the Silverlight Web Services Team blog has a post up about it... why it's a sometimes problem and a workaround for it.
    - Using Silverlight 4 features to create a Zune-like context menu - Jeff Wilcox used Silverlight 4 and the Toolkit to create some samples of menus, then demonstrates a duplication of the Zune menu.
    - You Already Are A Windows Phone 7 Programmer - Jesse Liberty is demonstrating the fact that Silverlight developers are WP7 developers by creating a Silverlight and a WP7 app side by side using the same code... this is a closer look at the Silverlight TV presentation he did.

    Stay in the 'Light!
    Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream
    Join me @ SilverlightCream | Phoenix Silverlight User Group
    Technorati Tags: Silverlight, Silverlight 3, Silverlight 4, Windows Phone, MIX10

    Read the article

  • Towards Database Continuous Delivery – What Next after Continuous Integration? A Checklist

    - by Ben Rees
    .dbd-banner p{ font-size:0.75em; padding:0 0 10px; margin:0 } .dbd-banner p span{ color:#675C6D; } .dbd-banner p:last-child{ padding:0; } @media ALL and (max-width:640px){ .dbd-banner{ background:#f0f0f0; padding:5px; color:#333; margin-top: 5px; } } -- Database delivery patterns & practices STAGE 4 AUTOMATED DEPLOYMENT If you’ve been fortunate enough to get to the stage where you’ve implemented some sort of continuous integration process for your database updates, then hopefully you’re seeing the benefits of that investment – constant feedback on changes your devs are making, advanced warning of data loss (prior to the production release on Saturday night!), a nice suite of automated tests to check business logic, so you know it’s going to work when it goes live, and so on. But what next? What can you do to improve your delivery process further, moving towards a full continuous delivery process for your database? In this article I describe some of the issues you might need to tackle on the next stage of this journey, and how to plan to overcome those obstacles before they appear. Our Database Delivery Learning Program consists of four stages, really three – source controlling a database, running continuous integration processes, then how to set up automated deployment (the middle stage is split in two – basic and advanced continuous integration, making four stages in total). If you’ve managed to work through the first three of these stages – source control, basic, then advanced CI, then you should have a solid change management process set up where, every time one of your team checks in a change to your database (whether schema or static reference data), this change gets fully tested automatically by your CI server. But this is only part of the story. Great, we know that our updates work, that the upgrade process works, that the upgrade isn’t going to wipe our 4Tb of production data with a single DROP TABLE. But – how do you get this (fully tested) release live? Continuous delivery means being always ready to release your software at any point in time. There’s a significant gap between your latest version being tested, and it being easily releasable. Just a quick note on terminology – there’s a nice piece here from Atlassian on the difference between continuous integration, continuous delivery and continuous deployment. This piece also gives a nice description of the benefits of continuous delivery. These benefits have been summed up by Jez Humble at Thoughtworks as: “Continuous delivery is a set of principles and practices to reduce the cost, time, and risk of delivering incremental changes to users” There’s another really useful piece here on Simple-Talk about the need for continuous delivery and how it applies to the database written by Phil Factor – specifically the extra needs and complexities of implementing a full CD solution for the database (compared to just implementing CD for, say, a web app). So, hopefully you’re convinced of moving on the the next stage! The next step after CI is to get some sort of automated deployment (or “release management”) process set up. But what should I do next? What do I need to plan and think about for getting my automated database deployment process set up? Can’t I just install one of the many release management tools available and hey presto, I’m ready! If only it were that simple. Below I list some of the areas that it’s worth spending a little time on, where a little planning and prep could go a long way. 
It’s also worth pointing out that this should really be an evolving process. Depending on your starting point, of course, it can be a long journey from your current setup to a full continuous delivery pipeline. If you’ve got a CI mechanism in place, you’re certainly a long way down that path. Nevertheless, we’d recommend evolving your process incrementally. Pages 157 and 129-141 of the book on Continuous Delivery (by Jez Humble and Dave Farley) have some great guidance on building up a pipeline incrementally: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912

For now, in this post, we’ll look at the following areas for your checklist:

- You and Your Team
- Environments
- The Deployment Process
- Rollback and Recovery
- Development Practices

You and Your Team

It’s a cliché in the DevOps community that “It’s not all about processes and tools, really it’s all about a culture”. As stated in this DevOps report from Puppet Labs: “DevOps processes and tooling contribute to high performance, but these practices alone aren’t enough to achieve organizational success. The most common barriers to DevOps adoption are cultural: lack of manager or team buy-in, or the value of DevOps isn’t understood outside of a specific group”. Like most clichés, there’s truth in there – if you want to set up a database continuous delivery process, you need to get your boss, your department, your company (if relevant) onside. Why? Because it’s an investment with the benefits coming way down the line. But the benefits are huge – for HP, in the book A Practical Approach to Large-Scale Agile Development: How HP Transformed LaserJet FutureSmart Firmware, these are summarized as:

- 2008 to present: overall development costs reduced by 40%
- Number of programs under development increased by 140%
- Development costs per program down 78%
- Firmware resources now driving innovation increased by a factor of 8 (from 5% working on new features to 40%)

But what does this mean? It means that, when moving to the next stage, to make that extra investment in automating your deployment process, it helps a lot if everyone is convinced that this is a good thing – that they understand the benefits of automated deployment and are willing to make the effort to transform to a new way of working. Incidentally, if you’re ever struggling to convince someone of the value, I’d strongly recommend just buying them a copy of this book – a great read, and a very practical guide to how it can really work at a large org.

I’ve spoken to many customers who have implemented database CI who describe their deployment process as “the point where automation breaks down. Up to that point, the CI process runs, untouched by human hand, but as soon as that’s finished we revert to manual.” This deployment process can involve, for example, a DBA manually comparing an environment (say, QA) to production, creating the upgrade scripts, reading through them, checking them against an Excel document emailed to him/her the night before, turning to page 29 in his/her notebook to double-check how replication is switched off and on for deployments, and so on and so on. Painful, error-prone and lengthy. But the point is, if this is something like your deployment process, telling your DBA “We’re changing everything you do and your toolset next week, to automate most of your role – that’s okay, isn’t it?” isn’t likely to go down well.
There’s some work here to bring him/her onside – to explain what you’re doing, why there will still be control of the deployment process, and so on. Or of course, if you’re the DBA looking after this process, you have to do a similar job in reverse. You may have researched and worked out how you’d like to change your methodology to start automating your painful release process, but do the dev team know this? What if they have to start producing different artifacts for you? Will they be happy with this? Worth talking to them to find out.

As well as talking to your DBA/dev team, the other group to get involved before implementation is your manager. And possibly your manager’s manager too. As mentioned, unless there’s buy-in “from the top”, you’re going to hit problems when the implementation starts to get rocky (and what tool/process implementations don’t get rocky?!). You need to have support from someone senior in your organisation – someone you can turn to when you need help with a delayed implementation, lack of resources or lack of progress.

Actions:

- Get your DBA involved (or whoever looks after live deployments) and discuss what you’re planning to do or, if you’re the DBA yourself, get the dev team up-to-speed with your plans.
- Get your boss involved too and make sure he/she is bought in to the investment.

Environments

Where are you going to deploy to? Really this question is – what environments do you want set up for your deployment pipeline? Assume everyone has “Production”, but do you have a QA environment? Dedicated development environments for each dev? Proper pre-production? I’ve seen every setup under the sun, and there is often a big difference between “What we want, to do continuous delivery properly” and “What we’re currently stuck with”. Some of these differences are:

- What we want: Each developer with their own dedicated database environment. What we’ve got: A single shared “development” environment, used by everyone at once.
- What we want: An integration box used to test the integration of all check-ins via the CI process, along with a full suite of unit tests running on that machine. What we’ve got: In fact, if you have a CI process running, you’re likely to have some sort of integration server running (even if you don’t call it that!). Whether you have a full suite of unit tests running is a different question…
- What we want: A separate QA environment used explicitly for manual testing prior to release. What we’ve got: “We just test on the dev environments, or maybe pre-production.”
- What we want: A proper pre-production (or “staging”) box that matches production as closely as possible. What we’ve got: Hopefully a pre-production box of some sort. But does it match production closely!?
- What we want: A production environment reproducible from source control. What we’ve got: A production box which has drifted significantly from anything in source control.

The big question is – how much time and effort are you going to invest in fixing these issues? In reality this just involves figuring out which new databases you’re going to create and where they’ll be hosted – VMs? Cloud-based? What about size/data issues – what data are you going to include on dev environments? Does it need to be masked to protect access to production data? And often the amount of work here really depends on whether you’re working on a new, greenfield project, or trying to update an existing, brownfield application.
There’s a world of difference between starting from scratch with 4 or 5 clean environments (reproducible from source control, of course!), and trying to re-purpose and tweak a set of existing databases, with all of their surrounding processes and quirks. But for a proper release management process, ideally you have:

- Dedicated development databases.
- An integration server used for testing continuous integration and running unit tests. [NB: This is the point up to which deployments are automatic, without human intervention. Each deployment after this point is a one-click (but human) action.]
- QA – QA engineers use a one-click deployment process to automatically* deploy chosen releases to QA for testing.
- Pre-production – the environment you use to test the production release process.
- Production.

* A note on the use of the word “automatic” – when carrying out automated deployments, this does not mean that the deployment is happening without human intervention (i.e. that something is just deploying over and over again). It means that the process of carrying out the deployment is automatic, in that it’s not a person manually running through a checklist or set of actions. The deployment still requires a single click from a user.

Actions:

- Get your environments set up and ready.
- Set access permissions appropriately.
- Make sure everyone understands what the environments will be used for (it’s not a “free-for-all” with all environments to be accessed, played with and changed by development).

The Deployment Process

As described earlier, most existing database deployment processes are pretty manual. The following is a description of a process we hear very often when we ask customers “How do your database changes get live? How does your manual process work?”:

1. Check pre-production matches production (use a schema compare tool, like SQL Compare). Sometimes done by taking a backup from production and restoring it in to pre-prod.
2. Again using a schema compare tool, find the differences between the latest version of the database ready to go live (i.e. what the team have been developing) and pre-production. This generates an upgrade script.
3. A user (generally, the DBA) reviews the script. This often involves manually checking updates against a spreadsheet or similar.
4. Run the script on pre-production, and check there are no errors (i.e. it upgrades pre-production to what you hoped).
5. If all is working, run the script on production.*

* This assumes there’s no problem with production drifting away from pre-production in the interim time period (i.e. someone has hacked something in to the production box without going through the proper change management process). This difference could undermine the validity of your pre-production deployment test. Red Gate is currently working on a free tool to detect this problem – sign up here at www.sqllighthouse.com, if you’re interested in testing early versions.

There are several variations on this process – some better, some much worse! How do you automate this? In particular, step 3 – surely you can’t automate a DBA checking through a script to verify that everything is in order!? The key point here is to plan what you want in your new deployment process. There are so many options. At one extreme is pure continuous deployment – whenever a dev checks something in to source control, the CI process runs (including extensive and thorough testing!), before the deployment process kicks in and automatically deploys that change to the live box. Not for the faint-hearted – and really not something we recommend.
At the other extreme, you might be more comfortable with a semi-automated process – the pre-production/production matching process is automated (with an error thrown if these environments don’t match), followed by a manual intervention, allowing for script approval by the DBA. Once he/she clicks “Okay, I’m happy for that to go live”, the latter stages automatically take the script through to live. And anything in between, of course – and other variations. But we’d strongly recommend sitting down with a whiteboard and your team, and spending a couple of hours mapping out “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintaining some sort of continuous control over the process?”

NB: Most of what we’re discussing here is about production deployments. It’s important to note that you will also need to map out a deployment process for earlier environments (for example QA). However, these are likely to be less onerous, and many customers opt for a much more automated process for these boxes.

Actions:

- Sit down with your team and a whiteboard, and draw out the answers to the questions above for your production deployments – “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintaining some sort of continuous control over the process?”
- Repeat for earlier environments (QA and so on).

Rollback and Recovery

If only every deployment went according to plan! Unfortunately they don’t – and when things go wrong, you need a rollback or recovery plan for what you’re going to do in that situation. Once you move to a more automated database deployment process, you’re far more likely to be deploying more frequently than before – no longer once every 6 months, maybe now once per week, or even daily. Hence the need for a quick rollback or recovery process becomes paramount, and should be planned for. NB: These are mainly scenarios for handling rollbacks after the transaction has been committed. If a failure is detected during the transaction, the whole transaction can just be rolled back, no problem. There are various options, which we’ll explore in subsequent articles, such as:

- Immediately restore from backup.
- Have a pre-tested rollback script (remembering that really this is a “roll-forward” script – there’s not really such a thing as a rollback script for a database!).
- Have fallback environments – for example, using a blue-green deployment pattern.

Different options have pros and cons – some are easier to set up, some require more investment in infrastructure, and of course some work better than others (the key issue with using backups is the loss of the interim transaction data that has been added between the failed deployment and the restore). The best mechanism will be primarily dependent on how your application works and how much you need a cast-iron failsafe mechanism.

Actions:

- Work out an appropriate rollback strategy based on how your application and business works, your appetite for investment and your requirements for a completely failsafe process.

Development Practices

This is perhaps the more difficult area for people to tackle. The process by which you can deploy database updates is intrinsically linked with the patterns and practices used to develop that database and the linked application.
So you need to decide whether you want to implement some changes to the way your developers actually develop the database (particularly schema changes) to make the deployment process easier. A good example is the pattern “branch by abstraction”. Explained nicely here by Martin Fowler, this is a process that can be used to make significant database changes (e.g. splitting a table) in a step-wise manner so that you can always roll back, without data loss – by making incremental updates to the database backward compatible (there’s a short sketch of this stepwise approach at the end of this article). Slides 103-108 of the following slide deck, from Niek Bartholomeus, explain the process: https://speakerdeck.com/niekbartho/orchestration-in-meatspace

As these slides show, by making a significant schema change in multiple steps – where each step can be rolled back without any loss of new data – this affords the release team the opportunity to have zero-downtime deployments with considerably less stress (because if an increment goes wrong, they can roll back easily). There are plenty more great patterns that can be implemented – the book Refactoring Databases, by Scott Ambler and Pramod Sadalage, is a great read if this is a direction you want to go in: http://www.amazon.com/Refactoring-Databases-Evolutionary-paperback-Addison-Wesley/dp/0321774515

But the question is – how much of this investment are you willing to make? How often are you making significant schema changes that would require these best practices? Again, there’s a difference here between migrating old projects and starting afresh – with the latter it’s much easier to instigate best practice from the start.

Actions:

- For your business, work out how far down the path you want to go, amending your database development patterns to “best practice”. It’s a trade-off between implementing quality processes, and the necessity to do so (depending on how often you make complex changes).
- Socialise these changes with your development group. No-one likes having “best practice” changes imposed on them, so it’s good to introduce these ideas and the rationale behind them early.

Summary

The next stages of implementing a continuous delivery pipeline for your database changes (once you have CI up and running) require a little pre-planning, if you want to get the most out of the work and for the implementation to go smoothly. We’ve covered some of the checklist of areas to consider – mainly “getting the team ready for the changes that are coming” and “planning out your pipeline, environments, patterns and practices for development” – though there will be more detail, depending on where you’re coming from and where you want to get to.

This article is part of our database delivery patterns & practices series on Simple-Talk. Find more articles for version control, automated testing, continuous integration & deployment.
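As promised above, here is a minimal sketch of the “branch by abstraction” idea applied to a schema change – splitting address data out of a Customer table. The table and column names are invented for illustration, the SQL is T-SQL-flavoured, and a generic DB-API-style connection object is assumed; treat it as a sketch of the stepwise pattern, not a drop-in migration.

    # A sketch of splitting address data out of a Customer table in
    # backward-compatible increments. Each step can ship as its own
    # deployment and be abandoned or reversed without losing data,
    # because every step leaves the previous release's code working.
    MIGRATION_STEPS = [
        # 1. Purely additive: create the new table alongside the old column.
        """CREATE TABLE CustomerAddress (
               CustomerId INT NOT NULL PRIMARY KEY,
               AddressLine1 NVARCHAR(200) NULL
           )""",
        # 2. Backfill existing data into the new structure.
        """INSERT INTO CustomerAddress (CustomerId, AddressLine1)
           SELECT Id, AddressLine1 FROM Customer""",
        # 3. Keep old and new in step while both code paths are live.
        """CREATE TRIGGER trg_Customer_AddressSync ON Customer
           AFTER UPDATE AS
           UPDATE ca
           SET ca.AddressLine1 = c.AddressLine1
           FROM CustomerAddress ca
           JOIN inserted c ON c.Id = ca.CustomerId""",
        # 4. (Separate application releases: move reads, then writes,
        #    over to CustomerAddress.)
        # 5. Only once nothing references the old column any more:
        "DROP TRIGGER trg_Customer_AddressSync",
        "ALTER TABLE Customer DROP COLUMN AddressLine1",
    ]

    def deploy(connection, up_to):
        # Apply the first up_to steps; stopping at any boundary leaves a
        # working, backward-compatible database.
        for statement in MIGRATION_STEPS[:up_to]:
            connection.execute(statement)

The point of the ordering is that stopping (or rolling back) after any individual step leaves both the old and the new code paths working – which is exactly what makes frequent, low-stress deployments possible.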

    Read the article

  • IDC Analyst Report Touts Oracle–Accenture Strategic Initiative

    - by kristin.jellison
Hi there, partners! Oracle Engineered Systems have been getting some love lately, and we want to share it with you! The market intelligence and advisory firm IDC recently released a report lauding Oracle and Accenture’s strategic initiative to bring the performance and flexibility of Oracle Engineered Systems to clients. The report, “Oracle and Accenture Strategic Alliance Places Big Bet on Engineered Systems,” by Steve White, reflects a largely positive analysis of the relationship. White notes that the alliance is “one of the largest in the industry.”

Under the relationship, Accenture has incorporated Oracle Engineered Systems – including Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, Oracle SuperCluster, and Oracle Exalytics In-Memory Machine – into its leading datacenter transformation consulting services. Together, the two companies have also created bespoke platforms, such as the Accenture Foundation Platform for Oracle, which helps clients accelerate deployments on Oracle Fusion Middleware, running on Oracle Exalogic Elastic Cloud and Oracle Exadata Database Machine.

Oracle Engineered Systems deliver a single, engineered platform – from server to storage and networking. This makes it easier and cheaper for Accenture clients around the world to prepare their datacenters for managing, processing and analyzing the massive amounts of data they (rightly) anticipate seeing in the next decade. The new solutions can help reduce the effort and cost of migrating any vendor database to an Oracle Engineered Systems platform, which can lower the cost of ownership by up to 50 percent.

For its part, Accenture has built a team of 300 consultants to implement and increase the flexibility and stability of client datacenters. This move further expands one of the fastest-growing full-service Oracle Enterprise solutions. Over 52,000 Accenture consultants are qualified to implement, upgrade and outsource the Oracle product suite. Accenture is a Diamond-level member of Oracle PartnerNetwork (OPN).

For Oracle Partners, this update should give you at least two things to walk away with. First, this initiative is showing signs of success. As Marty Cole, group chief executive for Accenture’s Technology growth platform, put it: “We are seeing an increasing number of clients recognizing the value of consolidating their databases and taking advantage of the cost and performance benefits delivered by these solutions.” The pipeline is there – and not just for Accenture. Use this example to show your clients that investments in Oracle Engineered Systems are on the rise.

Second, recognize that Oracle Engineered Systems represent one of the biggest platforms for growth that Oracle has to offer partners. As part of the agreement, Accenture is able to provide:

- Platform Readiness Assessments
- Platform Implementation
- App Rationalization
- Database Rationalization
- Managed Services

These are all enablement opportunities you can offer customers under Oracle’s partner programs – to continue building the value of their investments, and the value of your relationship with Oracle. Take a read through the IDC report. To learn more about the partnership, see this press release. Happy selling!

The OPN Communications Team

    Read the article

  • Infinite detail inside Perlin noise procedural mapping

    - by Dave Jellison
I am very new to game development, but I was able to scour the internet to figure out Perlin noise enough to implement a very simple 2D tile infinite procedural world. Here’s the question – and I think the answer is more conceptual than code-based.

I understand the concept of “I plug in (x, y) and get back p from the Perlin noise” (I’ll call it p). p will always be the same value for the same (x, y), as long as the Perlin algorithm parameters haven’t changed (like altering the number of octaves, et cetera). What I want to do is be able to zoom into a square and generate smaller squares inside of the already-generated overhead tile of terrain. Let’s say I have a jungle tile in the overhead terrain, but I want to zoom in and maybe see a small river tile that would only be a creek – not large enough to be a full “big tile” of water in the overhead. Of course, I want the same net effect as a Perlin equation inside a Perlin equation, if that makes sense (i.e. I want two people playing the game with the same settings to get the same terrain and details every time).

I can conceptually wrap my head around the large tile being based on a “zoomed out” coordinate, leaving enough room to drill into, but this approach doesn’t make sense in my head (maybe I’m wrong). I’m guessing that with this approach my overhead terrain would lose all of the cohesiveness delivered by the Perlin noise. Imagine I calculate (0, 0) as overhead tile 1 and then, to the east of that, I plug in (50, 0). OK, great, I now have 49 pixels of detail I could then “drill down” into. The issue I have in my head with this approach (without attempting it) is that there’s no guarantee from my Perlin noise that (0, 0) would be a good neighbor to (50, 0), as they could have wildly different “elevations” or p values coming back from the Perlin equation when I generate the overhead map.

I think I can conceive of using the Perlin noise for the overhead tile and then reusing the p value as a seed for the “detail” level of noise once I zoom in. That would ensure my detail Perlin is always the same configuration for (0, 0), (1, 0), etc. ad nauseam, but I’m not sure if there are better approaches out there or if this is a sound approach at all.
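To make the alternative I keep circling around concrete: because the noise function is continuous and deterministic, “zooming in” can simply be sampling the same field at fractional coordinates between the overhead tiles, with the higher octaves supplying the finer features – no second seeded noise layer is strictly required (though the p-as-seed idea would also stay deterministic). Below is a minimal, self-contained sketch of that; value noise stands in for Perlin to keep it short (any continuous, deterministic noise behaves the same way here), and the hash constants are arbitrary.

    import math

    def hash01(ix, iy, seed):
        # Deterministic integer hash -> value in [0, 1). Same (ix, iy, seed)
        # always gives the same result, on any machine.
        h = (ix * 374761393 + iy * 668265263 + seed * 144665) & 0xFFFFFFFF
        h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
        return (h ^ (h >> 16)) / 4294967296.0

    def smooth(t):
        return t * t * (3.0 - 2.0 * t)  # smoothstep fade between lattice points

    def value_noise(x, y, seed=0):
        # Continuous noise: bilinear blend of hashed values at the four
        # surrounding integer lattice points.
        ix, iy = math.floor(x), math.floor(y)
        u, v = smooth(x - ix), smooth(y - iy)
        a = hash01(ix, iy, seed)
        b = hash01(ix + 1, iy, seed)
        c = hash01(ix, iy + 1, seed)
        d = hash01(ix + 1, iy + 1, seed)
        top = a + (b - a) * u
        bottom = c + (d - c) * u
        return top + (bottom - top) * v

    def terrain(x, y, octaves=8, seed=0):
        # Fractal noise (fBm): each octave doubles frequency and halves
        # amplitude, so low octaves set the big shapes and high octaves
        # add the fine detail.
        total, amplitude, frequency, weight = 0.0, 1.0, 1.0, 0.0
        for _ in range(octaves):
            total += value_noise(x * frequency, y * frequency, seed) * amplitude
            weight += amplitude
            amplitude *= 0.5
            frequency *= 2.0
        return total / weight  # normalised to [0, 1)

    # Overhead map: one sample per big tile.
    overhead = [[terrain(x, y) for x in range(16)] for y in range(16)]

    # Zooming into tile (12, 7): sample the SAME field at fractional
    # coordinates inside that tile. The low octaves keep the zoomed view
    # consistent with the overhead jungle tile; the high octaves are where
    # creek-sized features come from. Two players with identical settings
    # always see identical detail, because nothing here is random at runtime.
    detail = [[terrain(12 + i / 32.0, 7 + j / 32.0) for i in range(32)]
              for j in range(32)]

If zooming really is just denser sampling of one deterministic field, neighbouring tiles stay cohesive by construction – so is the seeded “noise inside noise” layer ever actually needed, or only when I want detail that is statistically independent of the overhead terrain?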

    Read the article

  • Free eBook with SQL Server performance tips and nuggets

    - by Claire Brooking
I’ve often found that the kind of tips that turn out to be helpful are the ones that encourage me to make a small step outside of a routine. No dramatic changes – just a quick suggestion that changes an approach. When I was a languages student at university, one of the best tips I spotted came from outside the lecture halls, and it ended up saving me time (and lots of huffing and puffing) – the use of a rainbow of sticky notes to mark well-used pages and letter categories in my dictionary. Simple, but armed with a heavy dictionary that could double up as a step stool, those markers were surprisingly handy.

When the Simple-Talk editors told me about a book they were planning that would give a series of tips for developers on how to improve database performance, we all agreed it needed to contain a good range of pointers for big-hitter performance topics. But we wanted to include some of the smaller, time-saving nuggets too. We hope we’ve struck a good balance. The 45 Database Performance Tips eBook covers different tips to help you avoid code that saps performance, whether that’s the ‘gotchas’ to be aware of when using Object to Relational Mapping (ORM) tools, or what to be aware of for indexes, database design, and T-SQL. The eBook is also available to download with SQL Prompt from Red Gate.

We often hear that it’s the productivity-boosting side of SQL Prompt that makes it useful for everyday coding. So when a member of the SQL Prompt team mentioned an idea to make the most of tab history, a new feature in SQL Prompt 6 for SQL Server Management Studio, we were intrigued. Now SQL Prompt can save tabs we have been working on in SSMS as a way to maintain an active template for queries we often recycle. When we need to reuse the same code again, we search for our saved tab (we can also customize its name to speed up the search) to get started.

We hope you find the eBook helpful and, as always on Simple-Talk, we’d love to hear from you too. If you have a performance tip for SQL Server you’d like to share, email Melanie on the Simple-Talk team ([email protected]) and we’ll publish a collection in a follow-up post.

    Read the article

  • WebCenter Spaces 11g PS2 Template Customization

    - by javier.ductor(at)oracle.com
Recently, we were involved in a WebCenter Spaces customization project. The customer sent us a prototype website in HTML, and we had to transform Spaces to give it the same look and feel as the prototype.

Prototype:

First of all, we downloaded a virtual machine with WebCenter Spaces 11g PS2, the same version as the customer’s. The next step was to download the ExtendWebCenterSpaces application. This is a WebCenter application that customizes several elements of WebCenter Spaces: templates, skins, the landing page, etc. JDeveloper configuration is needed; we followed the steps described in the Extended Guide, and this blog was helpful too. After that, we deployed the application using the WebLogic console, created a new group space and assigned the ExtendedWebCenterSpaces template (portalCentricSiteTemplate) to it. The result was this:

As you may see, there was a big difference from the prototype, which meant a lot of work on our side. So we needed to create a new Spaces template with its own skin.

Using JDeveloper, we created a new template based on the default template. We removed all the menus we did not need and inserted ‘include’ tags for the header, breadcrumb and footers. Each of these elements was defined in an isolated jspx file. In the beginning, we faced a problem: we had to add code from the prototype (in plain HTML) to jspx templates (JSF language). We managed to work around this issue using ‘verbatim’ tags with ‘CDATA’ sections surrounding the HTML code in the header, breadcrumb and footers (see the sketch at the end of this post).

Once the template was modified, we added CSS styles to the default skin. So we had some styles from portalCentricSiteTemplate plus styles from the customer’s prototype. Then we re-deployed the application, assigned the template to a new group space and checked the result. After testing, we usually found issues, so we had to make some modifications to the application; then it was necessary to re-deploy the application and restart the Spaces server. Because of this, the testing process takes quite a bit of time.

In addition to the template and skin customization, we also customized the Landing Page using this application, and the Members task flow using another application: SpacesTaskflowCustomizationApplication. I will talk about that task flow customization in a future entry. After some issues and workarounds, we eventually managed to get this look and feel in Spaces:

P.S. In this customization I was working with Francisco Vega and Jose Antonio Labrada, consultants from the Oracle Malaga International Consulting Centre.
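To illustrate the verbatim/CDATA workaround mentioned above, a header include might look roughly like the sketch below. This is only an illustrative fragment – the div structure, ids and image path are invented, not the customer’s actual markup:

    <!-- header.jspx fragment: the f:verbatim tag tells the JSF renderer to
         pass the wrapped content through untouched, and the CDATA section
         stops the XML parser from trying to interpret the raw HTML. -->
    <f:verbatim>
      <![CDATA[
        <div id="siteHeader">
          <img src="/images/logo.png" alt="Company logo"/>
          <ul class="mainMenu">
            <li>Home</li>
            <li>Products</li>
            <li>Contact</li>
          </ul>
        </div>
      ]]>
    </f:verbatim>

With this approach, the prototype’s HTML can be dropped into the template includes largely unchanged, which is what made the transformation manageable.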

    Read the article

< Previous Page | 244 245 246 247 248 249 250 251 252 253 254 255  | Next Page >