Search Results

Search found 5641 results on 226 pages for 'maintenance plan'.


  • How to Omit the Page Number From the First Page of a Word 2013 Document Without Using Sections

    - by Lori Kaufman
    Normally, the first page, or cover page, of a document does not have a page number or other header or footer text. You can avoid putting a page number on the first page using sections, but there is an easier way to do this. If you don’t plan to use sections in any other part of your document, you may want to avoid using them completely. We will show you how to easily take the page number off the cover page and start the page numbering at one on the second page of your document by simply using a footer (or a header) and changing one setting.

    Click the Page Layout tab. In the Page Setup section of the Page Layout tab, click the Page Setup dialog box launcher icon in the lower, right corner of the section. On the Page Setup dialog box, click the Layout tab and select the Different first page check box in the Headers and footers section so there is a check mark in the box. Click OK. You’ll notice there is no page number on the first page of your document now.

    However, you might want the second page to be page one of your document, only to find it is currently page two. To change the page number on the second page to one, click the Insert tab. In the Header & Footer section of the Insert tab, click Page Number and select Format Page Numbers from the drop-down menu. On the Page Number Format dialog box, select Start at in the Page numbering section, enter 0 in the edit box, and click OK. This allows the second page of your document to be labeled as page one.

    You can use the same Page Number drop-down menu in the Header & Footer section of the Insert tab to add page numbers to your document as well: easily insert formatted page numbers at the top or bottom of the page or in the page margins. Use the same menu to remove page numbers from your document.

    Read the article

  • Glenn Fiedler's fixed timestep with fake threads

    - by kaoD
    I've implemented Glenn Fiedler's Fix Your Timestep! quite a few times in single-threaded games. Now I'm facing a different situation: I'm trying to do this in JavaScript. I know JS is single-threaded, but I plan on using requestAnimationFrame for the rendering part. This leaves me with two independent fake threads: simulation and rendering (I suppose requestAnimationFrame isn't really threaded, is it? I don't think so, it would BREAK JS.) Timing in these threads is independent too: dt for simulation and render is not the same.

    If I'm not mistaken, simulation should be up to Fiedler's while loop end. After the while loop, accumulator < dt, so I'm left with some unspent time (dt) in the simulation thread. The problem comes in the draw/interpolation phase:

        const double alpha = accumulator / dt;
        State state = currentState * alpha + previousState * ( 1.0 - alpha );
        render( state );

    In my render callback, I have the current timestamp, from which I can subtract the last-simulated-in-physics timestamp to get a dt for the current frame. Should I just forget about this dt and draw using the physics thread's dt? It seems weird, since, well, I want to interpolate for the unspent time between simulation and render too, right?

    Of course, I want simulation and rendering to be completely independent, but I can't get around the fact that in Glenn's implementation the renderer produces time and the simulation consumes it in discrete dt-sized chunks. A similar question was asked in Semi Fixed-timestep ported to javascript, but that question doesn't really get to the point, and the answers there point to removing physics from the render thread (which is what I'm trying to do) or just keeping physics in the render callback too (which is what I'm trying to avoid.)
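    For concreteness, here is a minimal JS sketch (not from Fiedler's article) of driving the fixed-step accumulator entirely from the requestAnimationFrame callback; simulate, interpolate, and render are hypothetical placeholders for your own physics and drawing code:

        // Sketch: fixed-timestep simulation with render interpolation, all on
        // the single JS thread; requestAnimationFrame "produces" time and the
        // while loop consumes it in discrete DT-sized chunks.
        var DT = 1000 / 60;              // fixed simulation step, in milliseconds
        var accumulator = 0;
        var lastTime = null;
        var previousState = { x: 0 };
        var currentState  = { x: 0 };

        function simulate(state, dt) {       // hypothetical physics step
            return { x: state.x + 0.05 * dt };
        }

        function interpolate(a, b, alpha) {  // blend states for drawing
            return { x: a.x * (1 - alpha) + b.x * alpha };
        }

        function render(state) {             // hypothetical draw call
            // e.g. draw a sprite at state.x
        }

        function frame(now) {
            if (lastTime !== null) accumulator += now - lastTime;
            lastTime = now;

            while (accumulator >= DT) {      // consume unspent time in DT chunks
                previousState = currentState;
                currentState = simulate(currentState, DT);
                accumulator -= DT;
            }

            render(interpolate(previousState, currentState, accumulator / DT));
            requestAnimationFrame(frame);
        }
        requestAnimationFrame(frame);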

    Read the article

  • Resolve Instructional Webcast Series — E-Business Suite Payables Period Close

    - by user793044
    Resolve Instructional Webcast Series — New Product Specific Troubleshooting Topics. For E-Business we have coming up:
    Title: Resolve—Best Practices for E-Business Suite Payables Period Close
    Date: Nov 7, 2012
    Time: 8:00 am MT - 3:00 pm GMT - 10:00 am Eastern - 8:30 pm India - 7:00 am Pacific
    This one-hour webcast shows you how to use 3 main recommendations:
    1. Period Close Helper: a new diagnostic to identify and resolve period close issues.
    2. Master Generic Datafix Diagnostic (MGD) usage in proactive/reactive mode.
    3. Recommended Patch Collection (RPC) uptake.
    This session will help customers plan and complete their month-on-month period close activities successfully, and to approach period close proactively, so they can avoid last-minute hassles and prevent delays in meeting period close deadlines. Customers who are involved in the Payables period close activities (both functional and technical) will benefit most from the webcast.
    Join us. Leverage this opportunity to learn Support Best Practices that help you resolve the issues you face with your Oracle products. Oracle Support experts provide live demonstrations of proactive resources. You will see how working proactively helps you work more efficiently, from using the right tools to providing the right information on service requests, so you can get answers faster. Register for sessions now.
    Questions? Contact Oracle’s "Get Proactive" team today. WORK SMART. SOLVE FAST. RESOLVE.

    Read the article

  • Host your own private git repository via SSH

    - by kerry
    If you are like me you have tons of projects you would like to keep private but track with git, but do not want to pay a git host for a private plan. One of the problems is that most hosts scale their plans by project instead of by user. Luckily, it is easy to host your own git repositories on any el cheapo host that provides ssh access. In the interest of full disclosure, I learned this trick from this blog post. I decided to recreate it in case the source material vanishes for some reason.

    To set up your host, login via ssh and run the following commands:

        mkdir -p ~/git/yourprojectname.git
        cd ~/git/yourprojectname.git
        git --bare init

    Then in your project directory (on your local machine):

        # setup your user info
        git config --global user.name "Firstname Lastname"
        git config --global user.email "[email protected]"

        # initialize the workspace
        git init
        git add .
        git commit -m "initial commit"
        git remote add origin ssh://[email protected]/~/git/yourprojectname.git
        git push origin master

    It’s that easy! To keep from entering your password every time, add your public key to the server: generate your key with ‘ssh-keygen -t rsa‘ on your local machine, then add the contents of the generated file to ~/.ssh/authorized_keys on your server.
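    A minimal sketch of that last step, assuming OpenSSH on both machines (ssh-copy-id ships with most Linux distributions; the manual variant works anywhere):

        # on your local machine
        ssh-keygen -t rsa                   # accept the defaults; creates ~/.ssh/id_rsa.pub
        ssh-copy-id [email protected]       # or, if ssh-copy-id isn't available:
        cat ~/.ssh/id_rsa.pub | ssh [email protected] 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'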

    Read the article

  • SQLAuthority News – Download SQL Server 2008 R2 Upgrade Technical Reference Guide

    - by pinaldave
    I recently came across a very interesting white paper written for Microsoft by Solid Quality Mentors. A successful upgrade to SQL Server 2008 R2 should be smooth and trouble-free. To achieve that smooth transition, you must plan sufficiently for the upgrade and match the plan to the complexity of your database application. Otherwise, you risk costly and stressful errors and upgrade problems. The SQL Server 2008 R2 Upgrade Technical Reference Guide is one of the best and most comprehensive reference guides I have seen on the subject of SQL Server 2008 R2 upgrades. It discusses all the various upgrade subjects one would want to see. You can find the reasons to upgrade over here: Why upgrade to SQL Server 2008 R2. The white paper itself: SQL Server 2008 R2 Upgrade Guide. Here is the quick list of contents of the white paper:
    1. Upgrade Planning and Deployment
    2. Management and Development Tools
    3. Relational Databases
    4. High Availability
    5. Database Security
    6. Full-Text Search
    7. Service Broker
    8. Transact-SQL Queries
    9. Notification Services
    10. SQL Server Express
    11. Analysis Services
    12. Data Mining
    13. Integration Services
    14. Reporting Services
    15. Other Microsoft Applications and Platforms
    Appendix 1: Version and Edition Upgrade Paths
    Appendix 2: Upgrade Planning Deployment and Tasks Checklist
    This white paper is indeed huge, with 490 pages and 151,956 words. As I said, it is one of the most comprehensive white papers ever published on the subject. Just reading this white paper, one can learn a lot about SQL Server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • New ZFSSA code release - April 2012

    - by user12620172
    A new version of the ZFSSA code was released over the weekend. In case you have missed a few, we are now on code 2011.1.2.1. This minor update is very important for our friends with the older SAS1 cards on the older 7x10 systems. This 2.1 minor release was made specifically for them, and fixes the issue that their SAS1 card had with the last major release. They can now go ahead and upgrade straight from the 2010.Q3.2.1 code directly to 2011.1.2.1. If you are on a 7x20 series and already running 2011.1.2.0, there is no real reason to upgrade to 1.2.1, as it's really only the Pandora SAS1 HBA fix. If you are not already on 1.2.0, then go ahead and upgrade all the way to 2011.1.2.1. I hope everyone out there is having a good April so far. For my next blog, the plan is to build on the Analytics tips I did last week and expand on which Analytics you really want to keep your eyes on, and also how to set up alerts to watch them for you. You can read more and keep up on your releases here: https://wikis.oracle.com/display/FishWorks/Software+Updates
    Steve

    Read the article

  • OpenWorld 2012—Is Almost Here!

    - by Scott McNeil
    With OpenWorld fast approaching, I thought I would take this opportunity to look at some of the “must see” database manageability activities and sessions happening this year. Here's a quick run down:

    Oracle Database Manageability: Download all the details for sessions, hands-on labs, and demos (PDF)

    Keynotes:
    Sunday, September 30: Hardware and Software, Engineered to Work Together: Why It’s A Different Approach. Larry Ellison, CEO, Oracle
    Monday, October 1: Shift Complexity. Hosted by Mark Hurd, President, Oracle, with Andrew Mendelsohn, Senior Vice President, Database Server Technologies, Oracle

    IOUG SIG Sunday: Database Performance Tuning: Getting the Best out of Oracle Enterprise Manager Cloud Control 12c (session ID# CON6511)

    Oracle DEMOgrounds (floor plan: Moscone South):
    Automatic Application and SQL Tuning
    Automatic Performance Diagnostics
    Complete Database Lifecycle Management
    Data Masking and Data Subsetting
    Database Testing with Oracle Real Application Testing
    Oracle Enterprise Manager Cloud Control 12c Overview
    Oracle Exadata Management

    Hands-on Labs:
    Database Performance Testing, Data Masking, and Subsetting (session ID# HOL10720)
    Database Performance Tuning Hands-on Lab (session ID# HOL10393)

    Sessions:
    What’s Next for Oracle Database? (session ID# GEN8259)
    Building and Managing a Private Oracle Database Cloud (session ID# GEN11421)
    Using Oracle Enterprise Manager to Manage Your Own Private Cloud (session ID# GEN11423)
    Extreme Database Management with the Latest Generation of Database Technology (session ID# CON9547)

    Oracle OpenWorld Music Festival: New this year is Oracle’s first annual Oracle OpenWorld Music Festival, featuring some of today's breakthrough musicians from around the country and the world. It's five nights of back-to-back performances in the heart of San Francisco, free to registered attendees. See the lineup.

    Not heading to OpenWorld? Watch it live!
    Stay Connected: Twitter | Facebook | YouTube | Linkedin | Newsletter
    Download the Oracle Enterprise Manager Cloud Control 12c Mobile app

    Read the article

  • How should I configure TRIM Support for LVM logical volumes?

    - by Zack Perry
    I am setting up a notebook for software demo purposes. The machine has an Intel Core i7 CPU, 8GB RAM, a 128GB SSD, and runs Ubuntu 12.04 LTS 64bit desktop. As it is, the SSD is configured with a single volume group, with /boot, /swap, and / all in their respective logical volumes. They collectively consume 30GB of space. I plan to use the remainder for logical volumes for KVM guests, all running Ubuntu 12.04 Server. I would like to ensure that the SSD is utilized optimally. Although on this site there is some great info about setting up TRIM support for file system setups that do not involve LVM, I have not found an explicit guide for my planned setup. I did find this page, which talks about adding issue_discards in /etc/lvm/lvm.conf, but in said file on my machine I didn't find the cited content. I double-checked man lvm.conf(5) and didn't see any mention of this option either. Thus, I'm not sure what to do. Furthermore, even supposing that adding the option is the right thing to do, should I still add mount options such as noatime etc. in my machine's /etc/fstab? Any tips, pointers, and/or further guidance are greatly appreciated.
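    Not an authoritative answer, but for reference, a sketch of the two pieces usually suggested together (assuming your LVM build honors issue_discards and your filesystem supports the discard mount option; the volume names are placeholders):

        # /etc/lvm/lvm.conf -- inside the existing "devices" section;
        # passes discards down to the SSD when LVs are removed or resized
        devices {
            issue_discards = 1
        }

        # /etc/fstab -- example ext4 root on LVM with online TRIM plus noatime
        /dev/mapper/vg0-root  /  ext4  discard,noatime,errors=remount-ro  0  1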

    Read the article

  • Attention NYC Area Marketers: Don't Miss This Executive Breakfast on Brand Building in the Digital Era

    - by Christie Flanagan
    Presenting and Managing Digital Content – A New Approach. Reach Your Audiences Where They Are with Multi-Channel Marketing.

    Attention marketers in the greater New York City area! Oracle Platinum Partner, Bluenog, invites you to an executive breakfast seminar on brand building in the digital era. In an age where consumers are spending increasing amounts of their time online, interacting, communicating and being influenced by other brands, you too must go online with a coordinated plan. And, given the hundreds, if not thousands, of places that might be relevant, having the right content and the right tools is critical. This two-part presentation will focus on the growing need for content and connection in building and maintaining your brand, as well as the role of technology in helping you maintain brand consistency, reach and interaction while simplifying delivery to web, tablet, mobile, and social audiences.

    Location: Oracle Offices, 520 Madison Ave, 30th Floor, New York, NY 10022
    Day/Time: Thursday, May 3, 2012, 9:30 AM to 11:30 AM

    About the Speakers:
    Michelle Pujadas is an award-winning marketer and communicator who has worked with more than 125 companies to help them package, launch and expand their brand presence, online and off. Michelle is the Founder and co-CEO of Zer0 to 5ive, a strategic marketing and communications firm that focuses on B2B and B2C technology companies, with offices in NY, Philadelphia and Chicago.
    Peter Conrad, the E 2.0 Practice Director for Bluenog, focuses on translating exciting visions for user experiences into well executed technical implementations leveraging advanced WebCenter technology from Oracle. Bluenog provides the systems and professional services today's forward-looking marketing organizations need to convert content, business capabilities, and communications into productive interactions with customers and prospects.

    Agenda:
    09:30am Arrival, Registration & Breakfast
    10:00am Brand Building through Content and Connection, presented by Michelle Pujadas, Founder and co-CEO of Zer0 to 5ive
    10:30am Leveraging Technology for Brand Reach, Consistency and Interaction, presented by Peter Conrad, E2.0 Practice Director at Bluenog
    11:15am Q&A
    11:30am Adjourn

    Read the article

  • Ask the Readers: How Fast is Your Internet Connection?

    - by Mysticgeek
    The federal government recently announced a broadband initiative that calls for 100 million homes to have 100Mbps Internet connections by the year 2020. This got us wondering: how fast is your current Internet connection?
    Photo by roland
    When it comes to the speed of our Internet connection, we all want the maximum possible. The FCC recently announced their National Broadband Plan, an initiative to improve the Internet infrastructure in the United States and provide higher speeds to everyone. You’ve also undoubtedly heard the news about Google getting into the mix with their program to bring ultra high-speed fiber broadband to 50,000 users in select cities. While we wait for those programs to come to fruition, we thought it would be cool to check out what kinds of speeds you’re getting now.
    Test Your Internet Connection Speed
    There are several sites out there you can use to test your Internet speed, but probably the best site is Speedtest.net. It’s easy to use, and allows you to test download and upload speeds to and from various locations in the US and throughout the world. If you already know the speeds you’re getting, leave a comment and let us know. If you use Speedtest.net, just keep in mind that our comment system won’t allow you to paste their result links, but you can simply tell us what you get in the results. We’re especially interested in the results of those of you who have Verizon FiOS or Comcast’s “Ultra” service. Leave a comment and join in the discussion!

    Read the article

  • How To Make Hundreds of Complex Photo Edits in Seconds With Photoshop Actions

    - by Eric Z Goodnight
    Have a huge folder of images needing tweaks? A few hundred adjustments may seem like a big, time-consuming job, but read on to see how Photoshop can do repetitive tasks automatically, even if you don’t know how to program!

    Photoshop Actions are a simple way to program simple routines in Photoshop, and are a great time saver, allowing you to re-perform tasks over and over, saving you minutes or hours depending on the job you have to work on. See how any bunch of images, and even some fairly complicated photo tweaking, can be done automatically to even hundreds of images at once.

    When Can I Use Photoshop Actions?
    Photoshop Actions are a way of recording the tools, menus, and keys pressed while using the program. Each time you use a tool, adjust a color, or use the brush, it can be recorded and played back over any file Photoshop can open. While it isn’t perfect and can get very confused if not set up correctly, it can automate editing hundreds of images, saving you hours and hours if you have big jobs with complex edits. The image illustrated above is a template for a polaroid-style picture frame. If you had several hundred images, it would actually be a simple matter to use Photoshop Actions to create hundreds of new images inside the frame in almost no time at all. Let’s take a look at how a simple folder of images and some image editing automation can turn lots of work into a simple and easy job.

    Creating a New Action
    Actions is a default part of the “Essentials” panel set Photoshop begins with. If you can’t see the panel button under the “History” button, you can find Actions by going to Window > Actions or pressing Alt + F9. Click the menu button in the Actions panel and choose to create a “New Set” in order to begin creating your own custom Actions. Name your action set whatever you want; names are not relevant, you’ll simply want to make it obvious that you have created it. Click OK. Look back in the Actions panel: you’ll see your new set has been added to the list. Click it to highlight it before going on. Click the menu button again to create a “New Action” in your new set. If you care to name your action, go ahead. Name it after whatever it is you’re hoping to do: change the canvas size, tint all your pictures blue, send your image to the printer in high quality, or run multiple filters on images. The name is for your own usage, so do what suits you best. Note that you can simplify your process by creating shortcut keys for your actions. If you plan to do hundreds of edits with your actions, this might be a good idea; if you plan to record an action to use every time you use Photoshop, this might even be an invaluable step. When you create a new Action, Photoshop automatically begins recording everything you do. It does not record the time in between steps, but rather only the data from each step, so take your time when recording and make sure you create your actions the way you want them. The square button stops recording, and the circle button starts recording again. With these basics ready, we can take a look at a sample Action.

    Recording a Sample Action
    Photoshop will remember everything you input into it when it is recording, even specific photographs you open, so begin recording your action when your first photo is already open. Once your first image is open, click the record button. Using the File > Place command to insert the polaroid image can be easier for Actions to deal with. Photoshop can record with multiple open files, but it often gets confused when you try it; keep your recordings as simple as possible to ensure your success. When the image is placed in, simply press Enter to render it. Select your background layer in your layers panel; your recording should be following along with no trouble. Double-click this layer to create a new layer from it. Allow it to be renamed “Layer 0” and press OK. Move the “polaroid” layer to the bottom by selecting it and dragging it down below “Layer 0” in the layers panel. Right-click “Layer 0” and select “Create Clipping Mask.” The JPG image is cropped to the layer below it, and all the actions described here are being recorded perfectly and are reproducible. Cursor actions, like the eraser, brush, or bucket fill, don’t record well, because the computer uses your mouse movements and coordinates, which may need to change from photo to photo. Set your photograph layer to a “Screen” blending mode; this will make the image disappear where it runs over the white parts of the polaroid image. With your image layer (Layer 0) still selected, navigate to Edit > Transform > Scale. You can use the mouse to resize your Layer 0, but Actions work better with absolute numbers. Visit the Width and Height adjustments in the top options panel, click the chain icon to link them together, and adjust them numerically. Depending on your needs, you may need to use more or less than 30%. Your image will resize to your specifications. Press Enter to render, or click the check box in the top right of your application. Ctrl + click on your bottom layer, or “polaroid” in this case, to create a selection of the bottom layer, then navigate to Image > Crop in order to crop down to your bottom layer selection. Your image is now resized to your bottommost layer, and Photoshop is still recording to that effect. For additional effect, we can navigate to Image > Image Rotation > Arbitrary to rotate our image by a small tilt. Choosing 3 degrees clockwise, we click OK to render our choice. Our image is rotated, and this step is recorded.

    Photoshop will even record when you save your files. With your recording still going, find File > Save As. You can easily tell Photoshop to save in a new folder, other than the one you have been working in, so that your files aren’t overwritten. Navigate to any folder you wish, but do not change the filename. If you change the filename, Photoshop will record that name and save all your images under whatever you type. However, you can change your filetype without recording an absolute filename. Use the pulldown tab and select a different filetype, in this instance PNG. Simply click “Save” to create a new PNG based on your actions. Photoshop will record the destination and the change in filetype. If you didn’t edit the name of your file, it will always use the variable filename of any image you open. (This is very important if you want to edit hundreds of images at once!) Click File > Close or the red “X” in the corner to close your file; Photoshop can record that as well. Since we have already saved our image as a JPG, click “NO” to not overwrite your original image. Photoshop will also record your choice of “NO” for subsequent images. In your Actions panel, click the stop button to complete your action. You can always click the record button to add more steps later, if you want. This is how your new action looks with its steps expanded. Curious how to put it into effect? Read on to see how simple it is to use that recording you just made.

    Editing Lots of Images with Your New Action
    Open a large number of images: as many as you care to work with. Your action should work immediately with every image on screen, although you may have to test and re-record, depending on how you did. Actions don’t require any programming knowledge, but often can get confused or work in a counter-intuitive way, so record your action until it is perfect. If it works once without errors, it’s likely to work again and again! Find the “Play” button in your Actions panel. With your custom action selected, click “Play” and your routine will edit, save, and close each file for you. Keep pressing “Play” for each open file, and it will keep saving and creating new files until you run out of work you need to do. And in mere moments, a complicated stack of work is done. Photoshop Actions can be very complicated, far beyond what is illustrated here, and can even be combined with scripts and other actions, creating automated creation of potentially very complex files, or applying filters to an entire portfolio of digital photos.

    Have questions or comments concerning Graphics, Photos, Filetypes, or Photoshop? Send your questions to [email protected], and they may be featured in a future How-To Geek Graphics article. Image Credits: All images copyright Stephanie Pragnell and author Eric Z Goodnight, protected under Creative Commons.

    Read the article

  • Keep Your Eye on the Ball

    - by [email protected]
    With the FIFA World Cup 2010 in South Africa almost a week underway, soccer fans all around the world are talking about at least two things: that typical vuvuzela sound, and the new Jabulani ball, saying it moves unpredictably and is difficult to handle, with the altitude of the World Cup stadiums somehow also a contributing factor.
    (Picture taken from http://www.flickr.com/photos/warrenski/4143923059/ under a Creative Commons license)
    Although FIFA states that it hasn't received any official complaints, the end users don't seem to be very happy with this new ball. This brings me to a comparison with IT management and testing. When you're introducing a new product, in IT terms a new application, you would like to test all possible scenarios that your end users could be using and experiencing. However, that's a very time- and resource-intensive process to repeat for every application change or update. It's like getting ready for the big game with no game plan.
    That's why a new approach has been developed, one based on the 80/20 rule: testing 80% of the application will cost about 20% of the effort. The remaining 20% of your application will not be tested before deployment, but monitored with a real user monitoring solution immediately after deployment. These tools track all user experiences, including error messages and the performance and availability metrics from an end user perspective. Should any anomaly occur, you would be able to repair it quickly so you and your end users can get back into the game. These real user sessions can be easily converted into testing scripts, so the 80% of application testing can be complemented with the remaining 20%.
    The Oracle Enterprise Manager 11g group of products offers both the real user monitoring solution, with Oracle Real User Experience Insight, and the required testing solution, with Oracle Application Testing Suite. Visit our Oracle Enterprise Manager 11g resource center and find out how its Business-Driven IT Management approach will help you keep your eye on your business ball.
    Happy World Cup.

    Read the article

  • Unused Indexes Gotcha

    - by DavidWimbush
    I'm currently looking into dropping unused indexes to reduce unnecessary overhead, and I came across a very good point in the excellent SQL Server MVP Deep Dives book that I haven't seen highlighted anywhere else. I was thinking it was simply a case of dropping indexes that didn't show as being used in the DMV sys.dm_db_index_usage_stats (assuming a solid representative workload had been run since the last service start). But Rob Farley points out that the DMV only shows indexes whose pages have been read or updated. An index that isn't listed in the DMV might still be useful by providing metadata to the Query Optimizer and thus streamlining query plans. For example, if you have a query like this:

        select  au.author_name
                , count(*) as books
        from    books b
                inner join authors au on au.author_id = b.author_id
        group by au.author_name

    If you have a unique index on authors.author_name, the Query Optimizer will realise that each author_id will have a different author_name, so it can produce a plan that just counts the books by author_id and then adds the author name to each row in that small table. If you delete that index, the query will have to join all the books with their authors and then apply the GROUP BY: a much more expensive query. So be cautious about dropping apparently unused unique indexes.
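    For reference, a sketch of the kind of DMV check involved, using only documented columns of sys.dm_db_index_usage_stats; treat the output as candidates to investigate, not a drop list, for exactly the reason above:

        -- Indexes with no recorded reads since the last service start.
        -- An index can be absent from the DMV and still matter, so review
        -- unique indexes and constraints by hand before dropping anything.
        SELECT  o.name AS table_name,
                i.name AS index_name,
                us.user_seeks, us.user_scans, us.user_lookups, us.user_updates
        FROM    sys.indexes i
                JOIN sys.objects o ON o.object_id = i.object_id
                LEFT JOIN sys.dm_db_index_usage_stats us
                       ON us.object_id = i.object_id
                      AND us.index_id = i.index_id
                      AND us.database_id = DB_ID()
        WHERE   o.type = 'U'  -- user tables only
          AND   (us.index_id IS NULL
                 OR us.user_seeks + us.user_scans + us.user_lookups = 0);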

    Read the article

  • Caveats with the runAllManagedModulesForAllRequests in IIS 7/8

    - by Rick Strahl
    One of the nice enhancements in IIS 7 (and now 8) is the ability to intercept non-managed, i.e. non-ASP.NET-served, requests from within ASP.NET managed modules. This opened up a ton of new functionality that can be applied across non-managed content using .NET code. I thought I had a pretty good handle on how IIS 7's Integrated mode pipeline works, but when I put together some samples last night I realized that the way managed and unmanaged requests fire into the pipeline is downright confusing, especially when it comes to the runAllManagedModulesForAllRequests attribute. There are a number of settings that can affect whether a managed module receives non-ASP.NET content requests, such as static files or requests from other frameworks like PHP or classic ASP, and this is the topic of this blog post.

    Native and Managed Modules
    The Integrated mode IIS pipeline for IIS 7 and later, as the name suggests, allows for integration of ASP.NET pipeline events in the IIS request pipeline. Natively, IIS runs unmanaged code, and there is a host of native-mode modules that handle the core behavior of IIS. If you set up a new IIS site or application without managed code support, only the native modules are supported and fired, without any interaction between native and managed code. If you use the Integrated pipeline with managed code enabled, however, things get a little more confusing, as both native modules and .NET managed modules can fire against the same IIS request. If you open up the IIS Modules dialog you see both managed and unmanaged modules: native modules point at physical files on disk, while managed modules point at .NET types and files referenced from the GAC or the current project's BIN folder. Both native and managed modules can co-exist and execute side by side on the same request. When running in IIS 7, the IIS pipeline actually instantiates the ASP.NET runtime (via the System.Web.PipelineRuntime class), which, unlike the core HttpRuntime classes in ASP.NET, receives notification callbacks when IIS Integrated mode events fire. The IIS pipeline is smart enough to detect whether managed handlers are attached; if there are none, these notifications don't fire, improving performance.

    The good news for .NET devs is that ASP.NET-style modules can be used for just about every kind of IIS request. All you need to do is create a new web application, enable ASP.NET on it, and then attach managed handlers. Handlers can look at ASP.NET content (i.e. ASPX pages, MVC, WebAPI etc. requests) as well as non-ASP.NET content, including static content like HTML files, images, javascript and css resources. It's very cool that this capability has been surfaced. However, with that functionality comes a lot of responsibility. Because every request passes through the ASP.NET pipeline if managed modules (or handlers) are attached, there are possible performance implications: running through the ASP.NET pipeline does add some overhead.

    ASP.NET and Your Own Modules
    When you create a new ASP.NET project, the Visual Studio templates typically create the modules section like this:

        <system.webServer>
          <validation validateIntegratedModeConfiguration="false" />
          <modules runAllManagedModulesForAllRequests="true">
          </modules>
        </system.webServer>

    The interesting thing here is the runAllManagedModulesForAllRequests="true" flag, which seems to indicate that it controls whether any registered modules always run. Realistically, though, this flag does not control whether managed code is fired for all requests or not. Rather, it is an override for the preCondition flag on a particular handler. With the flag set to the default true setting, you can assume that pretty much every IIS request you receive ends up firing through your ASP.NET module pipeline, and every module you have configured is accessed even by non-managed requests like static files. In other words, your module will have to handle all requests.

    Now so far, so obvious. What's not quite so obvious is what happens when you set runAllManagedModulesForAllRequests="false". You would probably expect that non-ASP.NET requests immediately stop being funnelled through the ASP.NET module pipeline. But that's not what actually happens. For example, if I create a module like this:

        <add name="SharewareModule" type="HowAspNetWorks.SharewareMessageModule" />

    by default it will fire against ALL requests regardless of the runAllManagedModulesForAllRequests flag. Even if runAllManagedModulesForAllRequests="false", the module is fired. Not quite expected. So what is runAllManagedModulesForAllRequests really good for? It's essentially an override for the managedHandler preCondition. If I declare my handler in web.config like this:

        <add name="SharewareModule" type="HowAspNetWorks.SharewareMessageModule" preCondition="managedHandler" />

    and runAllManagedModulesForAllRequests="false", my module only fires against managed requests. If I switch the flag to true, my module ends up handling all IIS requests that are passed through from IIS. The moral of the story here is that if you intend to only look at ASP.NET content, you should always set the preCondition="managedHandler" attribute to ensure that only managed requests are fired on this module. But even if you do this, realize that runAllManagedModulesForAllRequests="true" can override this setting.

    runAllManagedModulesForAllRequests and Http Application Events
    Another place the runAllManagedModulesForAllRequests attribute matters is the global HttpApplication object (typically in global.asax) and the Application_XXXX events that you can hook up there. While those events are dynamically hooked up to the application class, they basically behave as if they were set with the preCondition="managedHandler" configuration switch. The end result is that with runAllManagedModulesForAllRequests="true" you'll see every HTTP request passed through the Application_XXXX events, and you only see ASP.NET requests with the flag set to "false".

    What's all that mean? Configuring an application to handle requests for both ASP.NET and other content can be tricky, especially if you need to mix modules that might require both. A couple of things are important to remember. If your module doesn't need to look at every request, by all means set preCondition="managedHandler" on it. This will at least allow it to respond to the runAllManagedModulesForAllRequests="false" flag and then only process ASP.NET requests. Also, look really carefully at whether you actually need runAllManagedModulesForAllRequests="true" in your applications, as set by the default new-project templates in Visual Studio. Part of the reason this is the default is that it was required for the initial versions of IIS 7 and ASP.NET 2 in order to handle MVC extensionless URLs. However, if you are running IIS 7 or later and .NET 4.0, you can use the ExtensionlessUrlHandler instead to get MVC functionality without requiring runAllManagedModulesForAllRequests="true":

        <handlers>
          <remove name="ExtensionlessUrlHandler-Integrated-4.0" />
          <add name="ExtensionlessUrlHandler-Integrated-4.0"
               path="*."
               verb="GET,HEAD,POST,DEBUG,PUT,DELETE,PATCH,OPTIONS"
               type="System.Web.Handlers.TransferRequestHandler"
               preCondition="integratedMode,runtimeVersionv4.0" />
        </handlers>

    Oddly, this is the default for Visual Studio 2012 MVC template apps, so I'm not sure why the default template still adds runAllManagedModulesForAllRequests="true". It should be enabled only if there's a specific need to access non-ASP.NET requests.

    As a side note, it's interesting that when you access a static HTML resource, you can actually write into the Response object and get the output to show, which is trippy. I haven't looked closely to see how this works: whether ASP.NET just fires directly into the native output stream, or whether the static requests are re-routed directly through the ASP.NET pipeline once a managed code module is detected. This doesn't work for all non-ASP.NET resources; for example, I can't do the same with classic ASP requests, but it makes for an interesting demo when injecting HTML content into a static HTML page :-)

    Note that on the original Windows Server 2008 and Vista (IIS 7.0) you might need a hotfix in order for ExtensionlessUrlHandler to work properly for MVC projects. On my live server I needed it (about 6 months ago), but others have observed that the latest service updates have integrated this functionality and the hotfix is not required. On IIS 7.5 and later I've not needed any patches for things to just work.

    Plan for non-ASP.NET Requests
    It's important to remember that if you write a .NET module to run on IIS 7, there's no way for you to prevent non-ASP.NET requests from hitting your module. So make sure you plan to support requests to extensionless URLs and to static resources like files. Luckily, ASP.NET creates a full Request and full Response object for you even for non-ASP.NET content. So even for static files, and even for classic ASP for example, you can look at Request.FilePath or Request.ContentType (in post-handler pipeline events) to determine what content you are dealing with.
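    As an illustration of that (not from the original post), here's a minimal module sketch that filters itself down to static HTML requests in a post-handler event and exits immediately for everything else; the class name and injected comment are made up, and it would still be registered in <modules> like the SharewareModule example above:

        using System;
        using System.Web;

        // Sketch: a managed module that only acts on static .htm/.html requests.
        public class HtmlStampModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.PostRequestHandlerExecute += (sender, e) =>
                {
                    var ctx = ((HttpApplication)sender).Context;

                    // Filter first, work second: bail out unless this is static HTML.
                    string path = ctx.Request.FilePath;
                    if (!path.EndsWith(".htm", StringComparison.OrdinalIgnoreCase) &&
                        !path.EndsWith(".html", StringComparison.OrdinalIgnoreCase))
                        return;

                    // Demonstrates writing into the Response of a non-ASP.NET request.
                    ctx.Response.Write("<!-- served via the managed pipeline -->");
                };
            }

            public void Dispose() { }
        }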
    As always with module design, make sure you check in your code for the conditions that make the module applicable, and if a filter fails, exit immediately: minimize the code that runs if your module doesn't need to process the request.

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in IIS7, ASP.NET

    Read the article

  • My experience working with Teradata SQL Assistant

    - by Kevin Shyr
    Originally posted on: http://geekswithblogs.net/LifeLongTechie/archive/2014/05/28/my-experience-working-with-teradata-sql-assistant.aspx

    To this date, I still haven't figured out how to "toggle" between my query windows. It seems like unless I click that "new" button on top, whatever SQL I generate from a right-click just overrides the current SQL in the window. I'm probably missing a "generate new SQL in new window" setting.
    By default, Teradata SQL Assistant doesn't execute just the SQL query I highlighted; there is a setting I have to change first.
    I'm not really happy that the SQL Assistant and SQL Administrator are different apps. I'm still trying to get used to the fact that I can't quickly look up a table's keys/relationships while writing a query; I have to switch between windows.
    LOVE the execution plan / explanation. I think that part is better done than MS SQL in some ways.
    The error messages could be better.
    I feel that the Teradata .NET provider has a lower limit on the size of the query command it sends over than others do, though I don't have any hard data to support my claim. One of my queries in SSRS was passing multi-valued parameters to another query and got the error "Teradata 3577 row size or sort key size overflow". Searches on this error say the solution is to cast the result columns into smaller data types, but I found that the real problem was that the parameter passed into the WHERE clause could not be too large.
    I wish Teradata SQL Assistant would remember the window size I just adjusted to. Every time I execute a query, the result set, query, and exec log panes auto-adjust back to their default sizes. In SSMS, if I adjust the result set area to be smaller, it stays like that if I execute a query in the same window.

    Read the article

  • ExaLogic 2.01 ppt & training & Installation check-list & tips & Web tier roadmap

    - by JuergenKress
    For partners with an ExaLogic opportunity or an ExaLogic demo center, we plan to offer a hands-on ExaLogic bootcamp. If you want to attend, please make sure that you add your details to our wiki: ExaLogic checklist
    Exalogic Installation checklist 08.2012.pdf
    Exalogic Installation Tips and Tricks 08.2012.pdf
    Oracle FMW Web Tier Roadmap .pptx (Oracle and Partner confidential)
    ExaLogic Vision CVC 08.2012.pptx
    Online Launch Event: Introducing Oracle Exalogic Elastic Cloud Software 2.0 Webcast Replay
    For the complete ExaLogic partner kit, please visit the WebLogic Community Workspace (WebLogic Community membership required).
    Exalogic Distribution Rights Update: Oracle has recently modified the criteria for obtaining Distribution Rights (resell rights) for Oracle Exadata Database Machine and Exalogic Elastic Cloud. Partners will NO longer be required to be specialized in these products or in their underlying product sets in order to attain Distribution Rights. There are, however, competency criteria that partners must meet, and partners must still apply for the respective Distribution Rights. Please note, there are no changes to the criteria to become EXADATA or EXALOGIC Specialized. The list of criteria is available on the Sell tab of the Exalogic Elastic Cloud Knowledge Zone.
    WebLogic Partner Community: For regular information, become a member of the WebLogic Partner Community: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.
    Blog Twitter LinkedIn Mix Forum Wiki
    Technorati Tags: ExaLogic, Exalogic training, education, training, Exalogic roadmap, exalogic installation, WebLogic Community, Oracle, OPN, Jürgen Kress

    Read the article

  • Technology stack for CRUD apps [closed]

    - by Panoy
    In the past years, I have been using VB6 + MySQL when developing CRUD applications. Now I am learning how to develop web applications, as my plan is to go the "browser/web app" path every time I build a CRUD app. I'm leaning toward Ruby on Rails + MySQL/PostgreSQL/any NoSQL database now. I would like to know what other technology/tools to include in my architecture when developing these web apps. I'm asking for your input with regard to the UI, database and reporting stack/toolset. Currently I have these in mind:
    UI = jQuery, jQueryUI (add your comments for other good UI stacks)
    database = will be considering NoSQL or simply an RDBMS
    reporting tool = I'm clueless here
    Will it also make sense to use a NoSQL database on these CRUD applications? I am assuming that the data would balloon later on. The desktop/native app route is an option only if there is a requirement that, in my limited experience, I believe a web app can't solve; for example, those imaging apps/document forms and point-of-sale systems. I believe that web apps are gaining ground now, and I find it most fun and intriguing to play and experiment with them. Please share your suggestions!

    Read the article

  • Beat the Post-Holiday Blues with a dose of BIWA

    - by mdonohue
    You know it's coming, so why not plan ahead? Come and join like-minded professionals at BIWA Summit 2013. Early Bird Registration ends December 14th. This event, focused on Business Intelligence, Data Warehousing and Analytics, is hosted by the BIWA SIG of the IOUG on January 9 and 10, at the Hotel Sofitel, near Oracle headquarters in Redwood City, California. Be sure to check out the many featured speakers, including Oracle executives Balaji Yelamanchili, Vaishnavi Sashikanth, and Tom Kyte, and Ari Kaplan, sports analyst, as well as the many other speakers. Hands-on labs will give you the opportunity to try out much of the Oracle software for yourself (be sure to bring a laptop capable of running Windows Remote Desktop). Check out the Schedule page for the list of over 40 sessions on all sorts of BIWA-related topics. See the BIWA Summit 2013 web site for details and be sure to register soon, while early bird rates still apply. Klaus and Nikos will be presenting the ever-popular "Getting the Best Performance from your Business Intelligence Publisher Reports and Implementation", and we will run two sessions of the BI Publisher hands-on lab for building reports and data models. Hope to see you there.

    Read the article

  • A Kingdom To Conquer: Character Sketches

    - by George Clingerman
    Still not 100% sold on my title, so it remains a working title for now, but here's a series of character sketches I've done for a turn-based strategy game I'm playing at making. I've been sketching these on various pieces of paper throughout the last two weeks and just finished the last of them today (my plan was for 16 different types of units and, well, now I have them, so I consider that done!). Pretty rough sketches for now, but I'm pretty happy with the art style overall. I was wrestling for quite a while with just HOW I wanted the game to look, and then I finally stumbled across Art Baltazar and I was like, THAT'S IT! There are a few characters I need to re-do a bit more; I feel they're a bit TOO much like some of the characters that inspired them, but I'm happy that the ideas are finally sketched out. I've also been playing a bit in Inkscape, working on making these sketches digital. A pretty new experience for me, since I'm not used to working with vector images, but I think I'll get the hang of it. Here's the Knight all vectorized. Now if I could just start making some progress on the actual game itself…

    Read the article

  • Developing a long pannable, sprite-animated Windows Store app

    - by Groo
    I am creating my first Windows Store app in XAML, and I cannot seem to find a proper example for the requirements I have (I have spent a couple of days fiddling around, so I apologize if I missed something obvious). The basic idea of the app is to have a large scrollable canvas which would lazily start animating visible parts of the view as soon as the user stops panning over certain content (with some audio played as well). My original idea was to use a StackPanel to add a bunch of custom controls, each of which would then animate itself once visible (with a short delay), but I have a couple of concerns:
    1. If the entire canvas is ~50 screen widths wide, is it feasible to load all content at the beginning, or do I need to plan on doing some lazy loading during scrolling? For example, when I select a certain region in the Bing Travel app, it seems to lazily load tiles as I scroll it towards the end.
    2. Since content is stretched 100% vertically, and these animations are vectorized to be resolution independent, I am not sure if XAML (CompositionTarget) will be able to handle this, or whether I have to go for DirectX (MonoGame or C++) to get rid of flicker.
    Even better, is there an example for Windows 8 which uses a 100% vertically sized GridView with custom animated controls inside?
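    Not an authoritative answer, but one common sketch for the lazy-animation half of this: hook the ScrollViewer's ViewChanged event and only start each child's animation once panning settles with the child in view. Here, contentPanel, AnimatedTile, HasAnimated and BeginEntranceAnimation are hypothetical stand-ins for your own custom control:

        // requires: using System.Linq; using Windows.UI.Xaml.Controls;
        private void OnViewChanged(object sender, ScrollViewerViewChangedEventArgs e)
        {
            if (e.IsIntermediate) return;  // user is still panning; wait until it settles

            var scroller = (ScrollViewer)sender;
            double viewLeft  = scroller.HorizontalOffset;
            double viewRight = viewLeft + scroller.ViewportWidth;

            double x = 0;  // running left edge of each child in the StackPanel
            foreach (var tile in contentPanel.Children.OfType<AnimatedTile>())
            {
                double right = x + tile.ActualWidth;
                bool visible = right >= viewLeft && x <= viewRight;
                if (visible && !tile.HasAnimated)
                    tile.BeginEntranceAnimation();  // start animation (and audio) lazily
                x = right;
            }
        }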

    Read the article

  • TRADACOMS Support in B2B

    - by Dheeraj Kumar
    TRADACOMS is an early EDI standard, predominantly used in the retail sector of the United Kingdom. It is similar to the EDIFACT messaging system, involving an ecs file for translation and validation of messages. The slight differences between EDIFACT and TRADACOMS are:
    1. TRADACOMS is a simpler version than EDIFACT.
    2. There is no Functional Acknowledgment in TRADACOMS.
    3. Since it is just a business message to be sent to the trading partner, the various reference numbers at STX, BAT, MHD level need not be persisted in B2B, as no business logic gets derived from them.
    Considering this, in AS11 B2B this can be handled out of the box using the positional flat file document plugin. Since the STX and BAT segments, which define the envelope details and are part of the transaction, have to be sent from the back-end application itself, as there are no document protocol parameters defined in B2B. These would include some of the identifiers like SenderCode, SenderName, RecipientCode, RecipientName, and reference numbers. Additionally, batching in this case can be achieved by sending all the messages of a batch in a single xml from the back-end application, with the total number of messages in the batch carried in the EOB (batch trailer) segment. In the inbound scenario, we can identify the document based on the start position and end position of the incoming document. However, there is a plan to identify the incoming document based on the TRADACOMS standard instead of start/end position. Please email [email protected] if you need a working sample.

    Read the article

  • Application Lifecycle Management Tools

    - by John K. Hines
    Leading a team comprised of three former teams means that we have three of everything.  Three places to gather requirements, three (actually eight or nine) places for customers to submit support requests, three places to plan and track work. We’ve been looking into tools that combine these features into a single product.  Not just Agile planning tools, but those that allow us to look in a single place for requirements, work items, and reports. One of the interesting choices is Software Planner by Automated QA (the makers of Test Complete).  It's a lovely tool with real end-to-end process support.  We’re probably not going to use it for one reason – cost.  I’m sure our company could get a discount, but it’s on a concurrent user license that isn’t cheap for a large number of users.  Some initial guesswork had us paying over $6,000 for 3 concurrent users just to get started with the Enterprise version.  Still, it’s intuitive, has great Agile capabilities, and has a reputation for excellent customer support. At the moment we’re digging deeper into Rational Team Concert by IBM.  Reading the docs on this product makes me want to submit my resume to Big Blue.  Not only does RTC integrate everything we need, but it’s free for up to 10 developers.  It has beautiful support for all phases of Scrum.  We’re going to bring the sales representative in for a demo. This marks one of the few times that we’re trying to resist the temptation to write our own tool.  And I think this is the first time that something so complex may actually be capably provided by an external source.   Hooray for less work! Technorati tags: Scrum Scrum Tools

    Read the article

  • What is the most time-effective way to monitor & manage threats from bots and/or humans?

    - by CheeseConQueso
    I'm usually overwhelmed by the amount of tools that hosting companies provide to track and quantify traffic data and statistics. I'm equally overwhelmed by the countless flavors of malicious 'attacks' that target any and every web site known to man. The security methods used to protect both the back and front end of a website are well documented and straightforward in terms of ease of implementation and application, but the army of autonomous bots knows no boundaries and will always find a niche of a website to infest. So what can be done to handle the inevitable swarm of bots that pound your domain with brute force? Whenever I look at the error logs for my domains, there are always thousands of entries that look like bots trying to sneak SQL code into the database by tricking the variables in the URL into giving them schema information or private data within the database. My barbaric and time-consuming plan of defense is just to monitor visitor statistics for those obvious patterns of abuse and ban the offending IPs or ranges of IPs accordingly. Aside from that, I don't know what else I could do to prevent all of the ping-pong going on all day. Are there any good tools that automatically monitor this background activity (specifically activity that throws errors on the web & db server) and proactively deal with these sources of mayhem?
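    As a stopgap, even the "barbaric" approach can be partly scripted. A rough sketch for an Apache-style access log; the log path and the injection patterns are assumptions you would tune against your own logs:

        # Count requests per IP whose query strings look like SQL injection probes,
        # then review the top offenders before banning anything.
        grep -Ei 'union(%20|\+)select|information_schema|xp_cmdshell' /var/log/apache2/access.log \
            | awk '{print $1}' \
            | sort | uniq -c | sort -rn | head -20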

    Read the article

  • MySQL Connect in 4 Days - Sessions From Users and Customers

    - by Bertrand Matthelié
    Let’s review today the conference sessions where users and customers will describe their use of MySQL as well as best practices. Remember you can plan your schedule with Schedule Builder.
    Saturday, 11.30 am, Room Golden Gate 7: MySQL and Hadoop—Chris Schneider, Ning.com
    Saturday, 1.00 pm, Room Golden Gate 7: Thriving in a MySQL Replicated World—Ed Presz and Andrew Yee, Ticketmaster
    Saturday, 1.00 pm, Room Golden Gate 8: Rick’s RoTs (Rules of Thumb)—Rick James, Yahoo!
    Saturday, 2.30 pm, Room Golden Gate 3: Scaling Pinterest—Yashwanth Nelapati and Evrhet Milam, Pinterest
    Saturday, 4.00 pm, Room Golden Gate 3: MySQL Pool Scanner: An Automated Service for Host Management—Levi Junkert, Facebook
    Sunday, 10.15 am, Room Golden Gate 3: Big Data Is a Big Scam (Most of the Time)—Daniel Austin, PayPal
    Sunday, 11.45 am, Room Golden Gate 3: MySQL at Twitter: Development and Deployment—Jeremy Cole and Davi Arnaut, Twitter
    Sunday, 1.15 pm, Room Golden Gate 3: CERN’s MySQL-as-a-Service Deployment with Oracle VM: Empowering Users—Dawid Wojcik and Eric Grancher, DBA, CERN
    Sunday, 2.45 pm, Room Golden Gate 3: Database Scaling at Mozilla—Sheeri Cabral, Mozilla
    Sunday, 5.45 pm, Room Golden Gate 4: MySQL Cluster Carrier Grade Edition @ El Chavo, Latin America’s #1 Facebook Game—Carlos Morales, Playful Play
    You can check out the full program here as well as in the September edition of the MySQL newsletter. Not registered yet? You can still save US$300 over the on-site fee: Register Now!

    Read the article

  • UPK and the Oracle Unified Method can be used to deploy Oracle-Based Business Solutions

    - by Emily Chorba
    Originally developed to support Oracle's acquisition strategy, the Oracle Unified Method (OUM) defines a common implementation language across all of Oracle's products and technologies. OUM is a flexible, scalable, and evolving body of knowledge that combines existing best practices and field experience with an industry standard framework that includes the latest thinking around agile implementation and cloud computing.
    Strong, proven methods are essential to ensuring successful enterprise IT projects, both within Oracle and for our customers and partners. OUM provides a collection of repeatable processes that are the basis for agile implementations of Oracle enterprise business solutions. OUM also provides a structure for tracking progress and managing cost and risks, and it is applicable to any size or type of IT project. While OUM is a plan-based method, including overview material, task and artifact descriptions, and templates, it is intended to be tailored to support the appropriate level of ceremony (or agility) required for each project. Guidance is provided for identifying the minimum subset of tasks, tailoring the approach, executing iterative and incremental planning, and applying agile techniques, including support for managing projects using Scrum. Supplemental guidance provides specific support for Oracle products, such as UPK.
    OUM is available to Oracle employees, partners, and customers:
    Internal use at Oracle: Employees can download OUM from MyDesktop.
    OUM Partner Program: OUM is available free of charge to Oracle PartnerNetwork (OPN) Diamond, Platinum, and Gold partners as a benefit of membership. These partners may download OUM from the Oracle Unified Method Knowledge Zone on OPN.
    OUM Customer Program: The OUM Customer Program allows customers to obtain copies of the method for their internal use by contracting with Oracle for a services engagement of two weeks or longer. Customers who have a signed contract with Oracle and meet the engagement qualification criteria, as published on the Customer tab of the OUM website, are permitted to download the current release of OUM for their perpetual use. They may obtain subsequent releases published during a renewable, three-year access period.
    To learn more about OUM, visit:
    OUM Blog
    OUM on LinkedIn
    OUM on Twitter
    Emily Chorba, Principal Product Manager, Oracle User Productivity Kit

    Read the article
