Search Results

Search found 31902 results on 1277 pages for 'sql backup'.

Page 916 of 1277

  • Excel 2013 Data Explorer and GeoFlow make 3-D maps quick and easy

    - by John Paul Cook
    Excel add-ins Data Explorer and GeoFlow work well together, mainly because they just work. Simple, fast, and powerful. I started Excel 2013, used Data Explorer to search for, examine, and then download latitude-longitude data, and finally used GeoFlow to plot an interactive 3-D visualization. I didn’t use any fancy Excel commands, and the entire process took less than 3 minutes. You can download the GeoFlow preview from here. It can also be used with Office 365. Start by clicking the DATA EXPLORER...(read more)

    Read the article

  • FREE goodies if you are a UK based software house already live on the Windows Azure Platform

    - by Eric Nelson
    In the UK we have seen some fantastic take-up around the Windows Azure Platform and we have lined up some great stuff in 2011 to help companies fully exploit the Cloud – but we need you to tell us what you are up to! Once you tell us about your plans around Windows Azure, you will get access to FREE benefits including email-based developer support and a free monthly allowance of Windows Azure, SQL Azure and AppFabric from Jan 2011 – and more! (This offer is referred to as Cloud Essentials and is explained here.) And… we will be able to plan the right amount of activity to continue to help early adopters through 2011.
    Step 1: Sign up your company to Microsoft Platform Ready (you will need a Windows Live ID to do this)
    Step 2: Add your applications. For each application, state your intention around Windows Azure (and SQL etc. if you so wish)
    Step 3: Verify your application works on the Windows Azure Platform
    Step 4 (Optional): Test your application works on the Windows Azure Platform. Download the FREE test tool, test your application with it, and upload the successful results.
    Step 5: Revisit the MPR site in early January to get details of Cloud Essentials and other benefits
    P.S. You might want some background on the “fantastic take-up” bit:
    We helped over 3,000 UK companies deploy test applications during the beta phase of Windows Azure
    We directly trained over 1,000 UK developers during 2010
    We already have over 100 UK applications profiled on the Microsoft Platform Ready site
    And in a recent survey of UK ISVs you all look pretty excited about the Cloud – 42% already offer their solution on the Cloud or plan to.

    Read the article

  • Financial Transparency is Good for Community

    - by ArnieRowland
    I was recently in a conversation with several people who had previously organized one or more community events. The topic evolved into a discussion of sponsors and, eventually, fundraising. Being able to adequately raise the necessary funds is critical to producing a successful event. Many vendors will readily provide products for raffles and giveaways (SWAG), but the success of the event hangs on being able to raise cold, hard cash. Venues and equipment have to be rented, refreshments and lunches...(read more)

    Read the article

  • Investigating on xVelocity (VertiPaq) column size

    - by Marco Russo (SQLBI)
      In January I published an article about how to optimize high-cardinality columns in VertiPaq. In the meantime, VertiPaq has been rebranded to xVelocity: the official name is now “xVelocity in-memory analytics engine (VertiPaq)”, but using either xVelocity or VertiPaq when we talk about Analysis Services has the same meaning. In this post I’ll show how to investigate the column sizes of an existing Tabular database so that you can find the most important columns to optimize. A first approach is to look in the DataDir of Analysis Services for the folder containing the database, then look for the biggest files in all subfolders; the file names contain the names of the most expensive columns. However, this heuristic process is not very efficient. A better approach is to use a DMV that provides the exact information. For example, by using the following query (open SSMS, open an MDX query window on the database you are interested in, and execute it) you will see all database objects sorted by used size in descending order:
      SELECT * FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS ORDER BY used_size DESC
      Look at the first rows to understand which are the most expensive columns in your tabular model. The interesting data provided are:
      TABLE_ID: the name of the object – it can also be a dictionary or an index
      COLUMN_ID: the column the object belongs to – you may also see ID_TO_POS and POS_TO_ID, which refer to internal indexes
      RECORDS_COUNT: the number of rows in the column
      USED_SIZE: the memory used by the object
      By looking at the ratio between USED_SIZE and RECORDS_COUNT you can understand what you can do to optimize your tabular model. Your options are:
      Remove the column. Yes, if it contains data you will never use in a query, simply remove the column from the tabular model.
      Change granularity. If you are tracking time and you included milliseconds but seconds would be enough, round the data source column to the nearest second. If you have a floating-point number but two decimals are good enough (e.g. a temperature), round the number to the nearest decimal that is relevant to you.
      Split the column. Create two or more columns that have to be combined together to produce the original value. This technique is described in the VertiPaq optimization article.
      Sort the table by that column. When you read the data source, you might consider sorting data by this column so that the compression is more efficient. However, this technique works better on columns that don’t have too many distinct values, and you will probably move the problem to another column. Sorting data starting from the lower-density columns (those with a small number of distinct values) and moving to the higher-density columns (those with high cardinality) is the technique that provides the best compression ratio.
      After the optimization you should be able to reduce the used size and improve the count/size ratio you measured before. If you are interested in a longer discussion about internal storage in VertiPaq and you want to understand why this approach can save you space (and time), you can attend my 24 Hours of PASS session “VertiPaq Under the Hood” on March 21 at 08:00 GMT.
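      Set off as a runnable query, the DMV call above looks like this (a sketch: run it from an MDX query window in SSMS connected to the Tabular database; the DMV and column names are those given in the post):

      -- Returns every column segment with its row count and size, largest first,
      -- so the most expensive columns of the model surface at the top.
      SELECT TABLE_ID, COLUMN_ID, RECORDS_COUNT, USED_SIZE
      FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS
      ORDER BY USED_SIZE DESC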

    Read the article

  • Hardware Virtualization no longer required for Windows 7 XP Mode

    - by Jonathan Kehayias
    One of my frustrations in upgrading to Windows 7 last year was that Virtual PC no longer worked since I didn’t have Hardware Virtualization on my CPU.  This really drove my transition entirely to VMware Workstation on my personal laptop.  I recently reinstalled my work laptop (with permission) on Windows 7 Enterprise and figured I’d give XP Mode a look since this machine has Hardware Virtualization enabled.  I was surprised to find that Hardware Virtualization was no longer required,...(read more)

    Read the article

  • PASS Summit Survey

    - by andyleonard
    One item mentioned in the PASS Board Q & A was the PASS Summit survey, which should be in your Inbox today if you attended the PASS Summit 2013. Charlotte? Did you enjoy having the Summit in Charlotte? If so, the PASS Summit Survey is your primary and most effective means of communicating this fact to the PASS Board and PASS HQ. The same holds if you didn’t like the Summit in Charlotte. Would you rather have the Summit remain forever in Seattle? Would you like to see the Summit in Seattle two...(read more)

    Read the article

  • Strange date relationships with #PowerPivot

    - by Marco Russo (SQLBI)
    A reader of my PowerPivot book highlighted a strange behavior of the relationship between a datetime column and a Calendar table. Long story short: it seems that PowerPivot automatically rounds the date to the “nearest day”, but instead of simply removing the time (truncating the decimal part of the number internally used to represent a datetime value), a rounding function seems to be used, moving the date to the next day if the time part contains a PM time. As you can imagine, this becomes particularly...(read more)
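    The truncate-versus-round distinction the post describes can be sketched in T-SQL, where a datetime likewise maps to a day-based float (illustrative only; PowerPivot’s internal representation is assumed to behave analogously):

    -- A PM time pushes ROUND to the next day, while FLOOR keeps the same day.
    DECLARE @d DATETIME = '2012-03-15 18:30:00';
    SELECT CONVERT(DATETIME, FLOOR(CONVERT(FLOAT, @d))) AS truncated,  -- 2012-03-15 00:00
           CONVERT(DATETIME, ROUND(CONVERT(FLOAT, @d), 0)) AS rounded; -- 2012-03-16 00:00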

    Read the article

  • New White Papers Available

    - by mattande
    New Master Data Services white papers are now available on MSDN. For an application-agnostic overview of Master Data Management, see Organizational Approaches to Master Data Management. For the steps needed to configure Master Data Services to work with a SharePoint workflow, see SharePoint Workflow Integration with Master Data Services....(read more)

    Read the article

  • Should vendors have an express queue for people who have a clue? What passes for support today?

    - by Greg Low
    It's good to see some airports that have queues for people who travel frequently and know what they're doing. But I'm left thinking that IT vendors need to have something similar. BigPond (part of Telstra) in Australia have recently introduced new 42Mbps modems on their 3G network. It's actually just a pair of 21Mbps modems linked together, but the idea is cute. Around most of the country they work pretty well. In the middle of the CBD in Melbourne, however, at present they just don't work. Having...(read more)

    Read the article

  • What are the Crappy Code Games - Tips on how to win?

    - by simonsabin
    This is part of a series on the Crappy Code Games: The background | Who can enter? | What are the challenges? | What are the prizes? | Why should I attend? | Tips on how to win
    Tips on how to win
    Each test has some different aspect that will define how you win. In this post we will give you some tips on how to try and win. As a starter, why not watch some of the sessions from previous SQLBits: Storage sessions | Sessions on IO
    The background | Who can enter? | What are the challenges? | What are the prizes? | Why should...(read more)

    Read the article

  • Is there a canonical source supporting "all-surrogates"?

    - by user61852
    Background
    The "all-PKs-must-be-surrogates" approach is not present in Codd's relational model or any SQL standard (ANSI, ISO or other). Canonical books seem to avoid this restriction too. Oracle's own data dictionary schema uses natural keys in some tables and surrogate keys in other tables. I mention this because these people must know a thing or two about RDBMS design. PPDM (Professional Petroleum Data Management Association) recommends the same things the canonical books do. Use surrogate keys as primary keys when:
    There are no natural or business keys
    Natural or business keys are bad (change often)
    The value of the natural or business key is not known at the time of inserting the record
    Multicolumn natural keys (usually several FKs) exceed three columns, which makes joins too verbose
    Also, I have not found a canonical source that says natural keys need to be immutable. All I find is that they need to be very stable, i.e. they need to change only on very rare occasions, if ever. I mention PPDM because these people must know a thing or two about RDBMS design too. The origins of the "all-surrogates" approach seem to come from recommendations from some ORM frameworks. It's true that the approach allows for rapid database modeling by not having to do much business analysis, but at the expense of maintainability and readability of the SQL code. Much provision is made for something that may or may not happen in the future (the natural PK changed, so we will have to use the RDBMS cascade-update functionality) at the expense of day-to-day tasks like having to join more tables in every query and having to write code for importing data between databases, an otherwise very straightforward procedure (due to the need to avoid PK collisions and having to create stage/equivalence tables beforehand). Another argument is that indexes based on integers are faster, but that has to be supported with benchmarks. Obviously, long, varying varchars are not good for PKs, but indexes based on short, fixed-length varchars are almost as fast as integers.
    The questions
    Is there any canonical source that supports the "all-PKs-must-be-surrogates" approach?
    Has Codd's relational model been superseded by a newer relational model?
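    To make the two designs under discussion concrete, here is a minimal sketch in T-SQL (hypothetical tables, purely for illustration):

    -- Natural key as the primary key:
    CREATE TABLE Country (
        IsoCode     CHAR(2) PRIMARY KEY,      -- stable business key
        CountryName VARCHAR(100) NOT NULL
    );
    -- Surrogate key as the primary key, with the natural key kept unique:
    CREATE TABLE CountrySurrogate (
        CountryId   INT IDENTITY(1,1) PRIMARY KEY,  -- surrogate
        IsoCode     CHAR(2) NOT NULL UNIQUE,        -- natural key still enforced
        CountryName VARCHAR(100) NOT NULL
    );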

    Read the article

  • Working with Reporting Services Filters–Part 5: OR Logic

    - by smisner
    When you combine multiple filters, Reporting Services uses AND logic. Once upon a time, there was actually a drop-down list for selecting AND or OR between filters, which was very confusing to people because often it was grayed out. Now that selection is gone, but no matter; it wouldn’t help us solve the problem that I want to describe today. As with many problems, Reporting Services gives us more than one way to apply OR logic in a filter. If I want a filter to include this value OR that value for the same field, one approach is to set up the filter to use the IN operator as I explained in Part 1 of this series. But what if I want to base the filter on two different fields? I need a different solution. Using the AdventureWorksDW2008R2 database, I have a report that lists product sales. Let’s say that I want to filter this report to show only products that are Bikes (a category) OR products for which sales were greater than $1,000 in a year. If I set up the filter like this:
    Expression      Data Type   Operator   Value
    [Category]      Text        =          Bikes
    [SalesAmount]               >          1000
    then AND logic is used, which means that both conditions must be true. That’s not the result I want. Instead, I need to set up the filter like this:
    Expression                                                                              Data Type   Operator   Value
    =Fields!EnglishProductCategoryName.Value = "Bikes" OR Fields!SalesAmount.Value > 1000   Boolean     =          =True
    The OR logic needs to be part of the expression so that it can return a Boolean value that we test against the Value. Notice that I have used =True rather than True for the value. The filtered report appears below. Any non-bike product appears only if the total sales exceed $1,000, whereas Bikes appear regardless of sales. (You can’t see it in this screenshot, but Mountain-400-W Silver, 38 has sales of $923 in 2007 but gets included because it is in the Bikes category.)
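    For comparison, the same OR condition can be sketched as T-SQL against AdventureWorksDW2008R2 (an illustrative query, not taken from the post; the joins and aggregation level are assumptions):

    -- A product qualifies if it is a Bike OR its total sales exceed $1,000.
    SELECT p.EnglishProductName, SUM(f.SalesAmount) AS TotalSales
    FROM dbo.FactInternetSales AS f
    JOIN dbo.DimProduct AS p ON p.ProductKey = f.ProductKey
    JOIN dbo.DimProductSubcategory AS s ON s.ProductSubcategoryKey = p.ProductSubcategoryKey
    JOIN dbo.DimProductCategory AS c ON c.ProductCategoryKey = s.ProductCategoryKey
    GROUP BY p.EnglishProductName, c.EnglishProductCategoryName
    HAVING c.EnglishProductCategoryName = 'Bikes' OR SUM(f.SalesAmount) > 1000;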

    Read the article

  • Monitoring Baseline

    - by Grant Fritchey
    Knowing what's happening on your servers is important; that's monitoring. Knowing what happened on your server is establishing a baseline. You need to do both. I really enjoyed this blog post by Ted Krueger (blog|twitter). It's not enough to know what happened in the last hour or yesterday; you need to compare today to last week, especially if you released software this weekend. You need to compare today to 30 days ago in order to begin to establish future projections. How your data has changed over the last 30 days is a great indicator of how it's going to change over the next 30. No, it's not perfect, but predicting the future is not exactly a science; just ask your local weatherman. Red Gate's SQL Monitor can show you the last week, the last 30 days, the last year, or all the data you've collected (if you choose to keep a year's worth of data or more, please have PLENTY of storage standing by). You have a lot of choice and control here over how much data you store. Here's the configuration window showing how you can set this up: This is for version 2.3 of SQL Monitor, so if you're running an older version, you might want to update. The key point is that a baseline simply represents a moment in time on your server. The ability to compare now to then is what you're looking for in order to really have a useful baseline, as Ted lays out so well in his post.

    Read the article

  • In SQLCMD mode, should CONNECT be an implicit batch separator?

    - by Greg Low
    Hi Folks, I've been working with SQLCMD mode again today and one thing about it always bites me. If I execute a script like:
    :CONNECT SERVER1
    SELECT @@VERSION;
    :CONNECT SERVER2
    SELECT @@VERSION;
    :CONNECT SERVER3
    SELECT @@VERSION;
    I'm sure I'm not the only person who would be surprised to see all three SELECT commands executed against SERVER3 and none executed against SERVER1 or SERVER2. If you think that's odd behavior, here's where to vote: https://connect.microsoft.com/SQLServer/feedback/details/611144/sqlcmd-connect-to-a-different-server-should-be-an-implicit-batch-separator#detail...(read more)
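    The behavior follows from :CONNECT being applied per batch, with the last :CONNECT in a batch winning. A minimal sketch of the workaround (assuming you want each SELECT to run against its own server): add an explicit GO after each statement so every :CONNECT governs its own batch.

    :CONNECT SERVER1
    SELECT @@VERSION;  -- runs on SERVER1
    GO
    :CONNECT SERVER2
    SELECT @@VERSION;  -- runs on SERVER2
    GO
    :CONNECT SERVER3
    SELECT @@VERSION;  -- runs on SERVER3
    GO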

    Read the article

  • PASS Summit 2012 Women In Technology Luncheon

    - by AllenMWhite
    My final stint at the Summit Blogger's Table(tm) is for the annual WIT luncheon. I do appreciate the honor that PASS conferred on me by inviting me to the "table" for the event; it's been a lot of fun (even if there were some moments that weren't). Newly elected board member Wendy Pastrick is the MC for this year's luncheon, and the panel consists of Stefanie Higgins, Denise McInerny, Kevin Kline, Jen Stirrup and Kendra Little. I'm pleased to say that I know each one of them except Stefanie Higgins,...(read more)

    Read the article

  • Certification Notes: 70-583 Designing and Developing Windows Azure Applications

    - by BuckWoody
    Last Updated: 02/01/2011
    It’s time for another certification, and we’ve just released the 70-583 exam on Windows Azure. I’ve blogged my “study plans” here before for other certifications, so I thought I would do the same for this one. I’ll also need to take exams 70-513 and 70-516, but I’ll post my notes on those separately. None of these are “brain dumps” or any questions from the actual tests – just the books, links and notes I have from my studies. I’ll update these references as I’m studying, so bookmark this site and watch my Twitter and Facebook posts for when I’ll update them, or just subscribe to the RSS feed. A “Green” color on the check-block means I’ve done that part so far, red means I haven’t.
    First, I need to refresh my memory on some basic coding, so along with the Azure-specific information I’m reading the following general programming books:
    Introducing Microsoft .NET (Pro-Developer): link
    Head First C#, 2E: A Learner's Guide to Real-World Programming with Visual C# and .NET: link
    Microsoft Visual C# 2008 Step by Step: link
    [ ] The first place to start is at the official site for the certification: link
    [ ] On that page you’ll find several resources, and the first you should follow is the “Save to my learning” so you have a place to track everything. Then click the “Related Learning Plans” link and follow the videos and read the documentation in each of those bullets. There are six areas on the learning plan that you should focus on – make sure you open the learning plan to drill into the specifics.
    [ ] Designing Data Storage Architecture (18%) – Books I’m Reading: / Links: / My Notes:
    [ ] Optimizing Data Access and Messaging (17%) – Books I’m Reading: / Links: / My Notes:
    [ ] Designing the Application Architecture (19%) – Books I’m Reading: Applied Architecture Patterns on the Microsoft Platform: link / Links: / My Notes:
    [ ] Preparing for Application and Service Deployment (15%) – Books I’m Reading: / Links: / My Notes:
    [ ] Investigating and Analyzing Applications (16%) – Books I’m Reading: / Links: / My Notes:
    [ ] Designing Integrated Solutions (15%) – Books I’m Reading: Applied Architecture Patterns on the Microsoft Platform (2nd mention) / Links: / My Notes:

    Read the article

  • Microsoft BI Conference 2011 in Lisbon

    - by AlbertoFerrari
    Anyone interested in BI from Portugal or Spain should not miss the Microsoft BI Conference 2011 in Lisbon: one full day (March 25, 2011) with three tracks on Business Intelligence: Decision Makers, BI Pros, and Intro to BI. I am going to present two sessions on PowerPivot: one is a nice deep dive into DAX for BI pros, the other is about self-service BI for decision makers. Titles and the complete agenda will be published in the next few days, but I suggest saving the date. The full event is free and it...(read more)

    Read the article

  • PASS: Budget Status

    - by Bill Graziano
    Our budget situation is a little different this year than in years past.  We were late getting an initial budget approved.  There are a number of different reasons this occurred.  We had different competing priorities and the budget got pushed down the list.  And that’s completely my fault for not making the budget a higher priority and getting it completed on time. That left us with initial budget approval in early August rather than prior to June 30th.  Even after that there were a number of small adjustments that needed to be made.  And one large glaring mistake that needed to be fixed.  We had a typo in the budget that made it through twelve versions of review.  In my defense I can only say that the cell was red so of course it had to be negative!  And that’s one more mistake I can add to my long and growing list of Mistakes I’ll Never Make Again. Last week we passed a revised budget (version 17) with this corrected.  This is the version we’re cleaning up and posting to the web site this week or next.

    Read the article

  • Cell Transitions in Excel 2013 Preview–Fixed

    - by simonsabin
    If you’ve downloaded Excel 2013 and been working with it, you may have noticed the new cell transition feature. I'm not sure why they put it in; it feels a bit like the Aero interface, which I understand has been dropped in Windows 8. What you may have found is that the transition is buggy: Excel hangs, or the transition is jumpy. Well, I found the fix on http://answers.microsoft.com/en-us/office/forum/office_home-excel/hardware-acceleration-problem-with-excel-2013/894da202-48c0-4442-a371-955587c1b7c0 For...(read more)

    Read the article

  • Analysing Indexes - count *

    - by GrumpyOldDBA
    In my presentations on indexing I have always said that you should explore the advantages of covering your clustered index with a secondary index. In circumstances where you might want to just return values from the PK (assuming it's your clustered index), a secondary index will be more efficient, especially when the row size is wide. Any operation on a clustered index will always return the entire row, so select ID from dbo.mytable where ID is the clustered PK integer will return not just the...(read more)
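    A minimal sketch of the idea (hypothetical table and index names): give the PK-only query a narrow index to read instead of the wide clustered rows.

    -- The clustered PK stores whole rows; scanning it reads every wide row.
    -- A narrow nonclustered index on ID alone covers SELECT ID queries
    -- with far fewer page reads.
    CREATE NONCLUSTERED INDEX IX_mytable_ID ON dbo.mytable (ID);
    SELECT ID FROM dbo.mytable;  -- can now be satisfied from the narrow index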

    Read the article

  • JNDI Datasource definition in Tomcat 6.0

    - by romaintaz
    Hi all,
    I want to define a DataSource to an Oracle database on my Tomcat 6.0. So, in conf/server.xml (yes, I know that this DataSource will be available for all the webapps in Tomcat, but it's not a problem here), I've set this Resource:
    <GlobalNamingResources>
      <Resource name="hibernate/HibernateDS"
                auth="Container"
                type="javax.sql.DataSource"
                url="jdbc:oracle:thin:@myserver:1542:foo"
                username="foo"
                password="bar"
                driverClassName="oracle.jdbc.OracleDriver"
                maxActive="50"
                maxIdle="10"
                validationQuery="select 1 from dual"/>
    Then, in the web.xml of my application, I set a resource-ref element:
    <resource-ref>
      <description>Hibernate Datasource</description>
      <res-ref-name>hibernate/HibernateDS</res-ref-name>
      <res-type>javax.sql.DataSource</res-type>
      <res-auth>Container</res-auth>
    </resource-ref>
    Finally, as Hibernate is used to manage the database connection, I have a webapps/mywebapp/WEB-INF/classes/hibernate.cfg.xml that creates a session-factory using the JNDI DataSource:
    <hibernate-configuration>
      <session-factory>
        <property name="connection.datasource">java:comp/env/hibernate/HibernateDS</property>
        ...
    However, when I start my Tomcat server, I get an error:
    INFO [net.sf.hibernate.util.NamingHelper] JNDI InitialContext properties:{}
    INFO [net.sf.hibernate.connection.DatasourceConnectionProvider] Using datasource: java:comp/env/hibernate/HibernateDS
    INFO [net.sf.hibernate.transaction.TransactionFactoryFactory] Transaction strategy: net.sf.hibernate.transaction.JDBCTransactionFactory
    INFO [net.sf.hibernate.transaction.TransactionManagerLookupFactory] No TransactionManagerLookup configured (in JTA environment, use of process level read-write cache is not recommended)
    WARN [net.sf.hibernate.cfg.SettingsFactory] Could not obtain connection metadata
    org.apache.tomcat.dbcp.dbcp.SQLNestedException: Cannot create JDBC driver of class '' for connect URL 'null'
        at org.apache.tomcat.dbcp.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1150)
        at org.apache.tomcat.dbcp.dbcp.BasicDataSource.getConnection(BasicDataSource.java:880)
        at net.sf.hibernate.connection.DatasourceConnectionProvider.getConnection(DatasourceConnectionProvider.java:59)
        at net.sf.hibernate.cfg.SettingsFactory.buildSettings(SettingsFactory.java:84)
        at net.sf.hibernate.cfg.Configuration.buildSettings(Configuration.java:1172)
        ...
    Caused by: java.lang.NullPointerException
        at sun.jdbc.odbc.JdbcOdbcDriver.getProtocol(JdbcOdbcDriver.java:507)
        at sun.jdbc.odbc.JdbcOdbcDriver.knownURL(JdbcOdbcDriver.java:476)
        at sun.jdbc.odbc.JdbcOdbcDriver.acceptsURL(JdbcOdbcDriver.java:307)
        at java.sql.DriverManager.getDriver(DriverManager.java:253)
        at org.apache.tomcat.dbcp.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1143)
        ... 11 more
    Do you have any idea why Hibernate is not able to construct the session-factory? What is wrong in my configuration?

    Read the article

  • Rounding functions in DAX

    - by Marco Russo (SQLBI)
    Today I prepared a table of the many rounding functions available in DAX (yes, it’s part of the book we’re writing), so that I have a complete schema of the best function to use, depending on the rounding operation I need to do. Here is the list of functions used, and then the results shown for a relevant set of values:
    FLOOR = FLOOR( Tests[Value], 0.01 )
    TRUNC = TRUNC( Tests[Value], 2 )
    ROUNDDOWN = ROUNDDOWN( Tests[Value], 2 )
    MROUND = MROUND( Tests[Value], 0.01 )
    ROUND = ROUND( Tests[Value], 2 )...(read more)
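    For readers coming from T-SQL, rough analogues of these roundings can be sketched as follows (illustrative only; the DAX and T-SQL functions differ in places, so treat the mapping as approximate):

    DECLARE @v DECIMAL(10,4) = 1.2567;
    SELECT FLOOR(@v / 0.01) * 0.01 AS floor_to_hundredth, -- ~ FLOOR( value, 0.01 )      => 1.25
           ROUND(@v, 2, 1)         AS truncated,          -- ~ TRUNC / ROUNDDOWN( value, 2 ) => 1.25
           ROUND(@v, 2)            AS rounded;            -- ~ ROUND( value, 2 ) / MROUND( value, 0.01 ) => 1.26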

    Read the article

  • Perspective in Modeling

    - by drsql
    Your task: model a database that represents a suburban block. You survey the area and see the following houses (pictures culled from Wikipedia here). So you look at the houses and start modeling roofs, windows, lawns, driveways, mailboxes, porches, etc. You get done, and with your 30+ tables you are feeling great, right? I know I would be. “I knocked this out of the park! We can capture everything about these houses. I…am…a…superhero database modeler,” I think, “I will get a big...(read more)

    Read the article

  • Speakers, Please Check Your Time

    - by AjarnMark
    Woodrow Wilson was once asked how long it would take him to prepare for a 10 minute speech. He replied "Two weeks". He was then asked how long it would take for a 1 hour speech. "One week", he replied. 2 hour speech? "I'm ready right now," he replied. Whether that is a true story or an urban legend, I don’t really know, but either way, it is a poignant reminder for all speakers, and particularly apropos this week leading up to the PASS Community Summit. (Cross-posted to the PASS Professional Development Virtual Chapter blog #PASSProfDev.)
    What’s the point of that story? Simply this…if you have plenty of time to do your presentation, you don’t need to prepare much because it is easy to throw in more and more material to stretch out to your allotted time. But if you are on a tight time constraint, then it will take significant preparation to distill your talk down to only the essential points.
    I have attended seven of the last eight North American Summit events, and every one of them has been fantastic. The speakers are great, the material is timely and relevant, and the networking opportunities are awesome. And every year, there is one little thing that just bugs me…speakers going over their allotted time. Why does it bother me so? Well, if you look at a typical schedule for a Summit, you’ll see that there are six or more sessions going on at the same time, and only 15 minutes to move from one to another. If you’re trying to maximize your training dollar by attending something during every session time slot, and you don’t want to be the last guy trying to squeeze into the middle of the row, then those 15 minutes can be critical. All the more so if you need to stop and use the bathroom or if you have to hike to the opposite end of the convention center. It is really a bad position to find yourself having to choose between learning the last key points of Speaker A who is going over time, and getting over to Speaker B on time so you don’t miss her key opening remarks.
    And frankly, I think it is just rude. Yes, the speakers are the function; after all, they are bringing the content that the rest of us are paying to learn. But it is also an honor to be given the opportunity to speak at a conference like this, and no one speaker is so important that the conference would be a disaster without him. Speakers know when they submit their abstract, long before the conference, how much time they will have. It has been the same pattern at the Summit for at least the last eight years. Program Sessions are 75 minutes long. Some speakers who have a good track record, and meet other qualifying criteria, are extended an invitation to present a Spotlight Session, which is 90 minutes (a 20% increase). So there really is no excuse. It’s not like you were promised a 2-hour segment and then discovered when you got here that it was only 75 minutes. In fact, it’s not like PASS advertised 90-minute sessions for everyone and then a select few were cut back to only 75. As a speaker, you know well before you get here which type of session you are doing and how long it is, so as a professional, you should plan accordingly.
    Now you might think that this only happens to rookies, but I’ll tell you that some of the worst offenders are big-name veterans who draw huge attendance numbers for their sessions. Some attendees blow this off as, “Hey, it’s so-and-so, and I’d stay here for hours and listen to him/her talk.” To which I would reply, “Then they should have submitted for a pre- or post-conference day-long seminar instead, but don’t try to squeeze your day-long talk into a 90-minute session.” Now I don’t really believe that these speakers are being malicious or just selfishly trying to extend their time in the spotlight. I think that most of them are merely being undisciplined and did not trim their presentation sufficiently, or allowed themselves to get off-track (often in a generous attempt to help someone in the audience with a question or problem that really should have been noted for further discussion after the session).
    So here is my recommendation…my plea, even. TRIM THE FAT! Now. Before it’s too late. Before you even get on the airplane, take a long, hard look at your presentation and eliminate some of the points that you originally thought you had to make, but in reality are not truly crucial to your main topic. Delete a few slides. Test your demos and have them already scripted rather than typing them during your talk. It is better to cut out too much and end up with plenty of time at the end for Questions & Answers. And you can always keep some notes on the stuff that you cut out so that you could fill it back in at the end as bonus material if you really do end up with a whole bunch of time on your hands. But I don’t think you will. And if you do, that will look even better to the audience as it will look like you’re giving them something extra that not every audience gets. And they will thank you for that.

    Read the article
