Search Results

Search found 32223 results on 1289 pages for 'sql 2012'.


  • How do I calculate the offset, in hours, of a given timezone from UTC in ruby?

    - by esilver
    I need to calculate the offset, in hours, of a given timezone from UTC in Ruby. This line of code had been working for me, or so I thought:

        offset_in_hours = (TZInfo::Timezone.get(self.timezone).current_period.offset.utc_offset).to_f / 3600.0

    But it turns out that was returning the standard offset, not the DST offset. For example, assume self.timezone = "America/New_York". If I run the above line, offset_in_hours = -5, not -4 as it should be, given that the date today is April 1, 2012. Can anyone advise me how to calculate offset_in_hours from UTC, given a valid timezone string in Ruby, in a way that accounts for both standard time and daylight saving time? Thanks!
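
    A hedged sketch of one fix: TZInfo's utc_total_offset (the period's utc_offset plus its std_offset) tracks the wall-clock offset, DST included:

        require 'tzinfo'

        # utc_total_offset = utc_offset + std_offset, so it reflects DST.
        tz = TZInfo::Timezone.get("America/New_York")
        offset_in_hours = tz.current_period.utc_total_offset / 3600.0
        # => -4.0 on 2012-04-01 (EDT); -5.0 in winter (EST)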

    Read the article

  • DOS Batch file - Copy file based on filename elements

    - by user1848356
    I need to sort a lot of files based on their filenames, and I would like to use a batch file to do it. I know what I want, but I am not sure of the correct syntax. Examples of the filenames I work with (they all start in the same directory):

        2012_W34_Sales_Store001.pdf
        2012_W34_Sales_Store002.pdf
        2012_W34_Sales_Store003.pdf
        2012_November_Sales_Store001.pdf
        2012_November_Sales_Store002.pdf
        2012_November_Sales_Store003.pdf

    I would like to extract the pieces of information located between the "_" signs and put each one into a different variable. The length of the information contained between the "_" signs will be different every time. Example:

        var1="2012"
        var2="W34"   (or November)
        var3="Sales"
        var4="001"

    If I can do this, I can then copy the files to the appropriate directory using:

        move %var1%_%var2%_%var3%_%var4%.pdf z:\%var3%\%var4%\%var1%\%var2%

    It would need to loop, because I have Store001 to Store050. Also, there is not only the Sales report; many other reports are available. I hope I am clear. Please help me realize this batch file!
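
    A sketch of one way to do it, assuming the names always carry four underscore-separated fields and the z:\ layout from the question: for /f with delims=_. splits each name, and delayed expansion strips the literal "Store" prefix to get "001":

        @echo off
        setlocal EnableDelayedExpansion
        for %%F in (*_*_*_*.pdf) do (
            rem tokens: %%a=year  %%b=week/month  %%c=report  %%d=StoreNNN
            for /f "tokens=1-4 delims=_." %%a in ("%%~nxF") do (
                set "store=%%d"
                set "store=!store:Store=!"
                if not exist "z:\%%c\!store!\%%a\%%b" md "z:\%%c\!store!\%%a\%%b"
                move "%%F" "z:\%%c\!store!\%%a\%%b\"
            )
        )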

    Read the article

  • regular expression on replace method of js not working

    - by user950146
    Why is this not working?

        var value = arr[row][col].replace(new RegExp('"', 'g'), '""');

    Error (copied directly from the IE8 debugger):

        Webpage error details
        User Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.2; Tablet PC 2.0)
        Timestamp: Tue, 10 Apr 2012 11:22:01 UTC
        Message: Object doesn't support this property or method
        Line: 1041  Char: 25  Code: 0  URI: http://example.com/?
        (the same message is logged three times)
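
    A hedged sketch of the likely cause: "Object doesn't support this property or method" on .replace usually means arr[row][col] is not a string, so coercing first sidesteps it:

        // .replace exists only on strings; if the cell can hold a number,
        // null, or another object, coerce it first. A regex literal also
        // avoids the quoting gymnastics of new RegExp:
        var cell  = arr[row][col];
        var value = String(cell == null ? "" : cell).replace(/"/g, '""');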

    Read the article

  • How to group a period of time into yearly periods ? (split timespan into yearly periods)

    - by user315648
    I have a range of two datetimes:

        DateTime start = new DateTime(2012, 4, 1);
        DateTime end   = new DateTime(2016, 7, 1);

    And I wish to get all periods between them, grouped by year, meaning the output has to be:

        2012-04-01 - 2012-12-31
        2013-01-01 - 2013-12-31
        2014-01-01 - 2014-12-31
        2015-01-01 - 2015-12-31
        2016-01-01 - 2016-07-01

    Preferably the output would be an IList<Tuple<DateTime,DateTime>>. How would you do this? Is there any way to do it with LINQ? Oh, and handling daylight saving time is not absolutely critical, but surely a bonus. Thanks!
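
    A sketch of one LINQ construction (calendar years only; it ignores DST, per the question): one tuple per year in the range, clamped to the overall start and end dates.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        IList<Tuple<DateTime, DateTime>> periods =
            Enumerable.Range(start.Year, end.Year - start.Year + 1)
                      .Select(y => Tuple.Create(
                          y == start.Year ? start : new DateTime(y, 1, 1),
                          y == end.Year   ? end   : new DateTime(y, 12, 31)))
                      .ToList();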

    Read the article

  • Darwin kernel architecture and OS X, 64bit on 32bit kernel, how does this work?

    - by overscore
    OS X Lion (10.7) mostly runs 64-bit binaries, as reported by Activity Monitor. Given this, and the fact that my laptop runs a 32-bit version of EFI and thus also a 32-bit kernel, how does the architecture mixing work in general?

        Darwin Kernel Version 11.3.0: Thu Jan 12 18:48:32 PST 2012; root:xnu-1699.24.23~1/RELEASE_I386

    Normally one would run 32-bit binaries on x86_64, but the other way around would require pushing the CPU into 64-bit mode, which AFAIK cannot be undone. I hope this question is clear enough.

    Read the article

  • Python Continue Loop

    - by Rob B.
    I am using the following code from this tutorial (http://jeriwieringa.com/blog/2012/11/04/beautiful-soup-tutorial-part-1/):

        from bs4 import BeautifulSoup

        soup = BeautifulSoup(open("43rd-congress.html"))
        final_link = soup.p.a
        final_link.decompose()

        trs = soup.find_all('tr')
        for tr in trs:
            for link in tr.find_all('a'):
                fulllink = link.get('href')
                print fulllink  # print in terminal to verify results
            tds = tr.find_all("td")
            # "try" because the table is not well formatted; this allows the
            # program to continue after encountering an error.
            try:
                # isolate each item by its column and convert it to a string
                names = str(tds[0].get_text())
                years = str(tds[1].get_text())
                positions = str(tds[2].get_text())
                parties = str(tds[3].get_text())
                states = str(tds[4].get_text())
                congress = tds[5].get_text()
            except:
                print "bad tr string"
                continue  # move on to the next item after an error
            print names, years, positions, parties, states, congress

    However, I get an error saying that 'continue' is not properly in the loop on line 27. I am using Notepad++ and Windows PowerShell. How do I make this code work?
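
    As a hedged note on the error itself: Python raises "SyntaxError: 'continue' not properly in loop" when the continue statement sits lexically outside any loop, which in Notepad++ usually comes from mixed tabs and spaces dedenting the try/except out of the for loop. A minimal sketch (process() is a hypothetical stand-in):

        for tr in trs:
            try:
                process(tr)           # hypothetical work on one row
            except Exception:
                print "bad tr string"
                continue              # inside the for loop: legal

        # If the try/except is dedented out of the loop, the same 'continue'
        # triggers "SyntaxError: 'continue' not properly in loop".
        # In Notepad++, View > Show Symbol > Show White Space and TAB helps
        # spot the mixed indentation.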

    Read the article

  • Regex for Date DD-MM-YYYY

    - by Josh M
    I have the following expression, which will be used for date validation in the HTML5 "pattern" attribute:

        ?:0[1-9]|1[0-9]|2[0-9])|(?:(?!02)(?:0[1-9]|1[0-2])-(?:30))|(?:(?:0[13578]|1[02])-31))-(?:(?:0[1-9]|1[0-2])-(?:19|20)[0-9]{2}

    I want it to allow only valid dates, using "-" as the separator: up to the 29th in February if it's a leap year, and 30/31 for the other months respectively. Currently it only allows years starting with 2 (2012) and months up to 12 (December), but it limits the day to 29 regardless of the month. Can anybody help me fix it?
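
    For reference, a sketch of one common construction for dd-mm-yyyy (assuming years 1900-2099 and "-" as the separator; the HTML5 pattern attribute anchors the expression implicitly):

        (?:(?:0[1-9]|1\d|2[0-8])-(?:0[1-9]|1[0-2])|(?:29|30)-(?:0[13-9]|1[0-2])|31-(?:0[13578]|1[02]))-(?:19|20)\d{2}|29-02-(?:(?:19|20)(?:0[48]|[2468][048]|[13579][26])|2000)

    The first alternative handles days valid in every month, the second adds 29/30 for everything but February, the third adds the 31-day months, and the final branch allows 29-02 only in leap years (two-digit year parts divisible by 4, plus 2000, which correctly excludes 1900).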

    Read the article

  • No image displayed in Fancybox

    - by seansean11
    I am having trouble getting fancybox to display its corresponding images on a website that I'm building (http://www.nomadicdrift.com/test/kaniwa#events). It's a custom one-page portfolio theme that I set up on the WordPress platform. If you follow the link to the events section you will see one figure item in a gallery-like position. I have this image set up to work as a fancybox gallery, but when you click on it, it opens the fancybox interface without placing an image in the frame, even though it should. So this is the problem: the images do not show up in fancybox, and instead I see just the frame. Here is the HTML that I'm displaying:

        <figure>
            <a class="fancybox" data-fancybox-type="Fashion Show de Paris, France"
               href="http://www.nomadicdrift.com/test/kaniwa/wp-content/uploads/2012/07/NM2.jpg">
                <img class="attachment-evento wp-post-image" width="231" height="191"
                     title="NM" alt="NM"
                     src="http://www.nomadicdrift.com/test/kaniwa/wp-content/uploads/2012/07/NM2-231x191.jpg">
            </a>
            <figcaption>
                <a class="fancybox" data-fancybox-type="Fashion Show de Paris, France"
                   href="http://www.nomadicdrift.com/test/kaniwa/wp-content/uploads/2012/07/CEDESAN.jpg">
                </a>
                <h4>Fashion Show de Paris, France</h4>
            </figcaption>
        </figure>

    I'm not going to bore you with the PHP that I used to get that output, because I think the problem lies elsewhere. I have tried to set up a simple standard fancybox gallery on the site as well, but it gives me the same problem, leading me to believe that the problem is deeper than the HTML markup. I have also successfully used this same markup for a one-thumbnail fancybox gallery on another site. I thought maybe it was due to some conflict in the .js files I'm using; I tried uninstalling all of my plugins/addons (which aren't too many) one by one and still had the same result. I have all of my personal JavaScript in the functions.js file, which is where I call the fancybox plugin using the standard $("a.fancybox").fancybox();. I have installed this plugin before on other sites and have searched extensively for an answer, so any help is greatly appreciated. Thanks, Sean
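
    One hedged guess at the culprit (an assumption, not confirmed by the markup alone): in fancyBox 2, data-fancybox-type is reserved for the content type (image, iframe, ajax, ...), so filling it with a caption can leave fancybox unable to work out how to load the href, giving an empty frame. The caption belongs in title and the grouping in data-fancybox-group, roughly:

        <!-- sketch: the "events" group name is arbitrary -->
        <a class="fancybox" data-fancybox-group="events" data-fancybox-type="image"
           title="Fashion Show de Paris, France"
           href="http://www.nomadicdrift.com/test/kaniwa/wp-content/uploads/2012/07/NM2.jpg">
            <img class="attachment-evento wp-post-image" width="231" height="191"
                 title="NM" alt="NM"
                 src="http://www.nomadicdrift.com/test/kaniwa/wp-content/uploads/2012/07/NM2-231x191.jpg">
        </a>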

    Read the article

  • Bug in variadic function template specialization with MSVC?

    - by Andrei Tita
    I'm using the Visual Studio Nov 2012 CTP, which supports variadic templates (among other things). The following code:

        template<int, typename... Args>
        void myfunc(Args... args) { }

        template<>
        void myfunc<1, float>(float) { }

    produces the following errors:

        error C2785: 'void myfunc(Args...)' and 'void myfunc(float)' have different return types
        error C2912: explicit specialization 'void myfunc(float)' is not a specialization of a function template

    (yeah, the first one is pretty funny). So my questions are: 1) Am I writing legal C++11 here? 2) If yes, is there a way to find out whether this is a known bug in MSVC before submitting it?
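
    Whatever the answer to (1), a common workaround sketch is to dispatch through a class template and specialize that, which is unambiguously legal C++11:

        // Workaround sketch: route the call through a class template,
        // which compilers of that era specialized more reliably than
        // function templates.
        template<int N, typename... Args>
        struct myfunc_impl {
            static void call(Args... args) { }
        };

        template<>
        struct myfunc_impl<1, float> {
            static void call(float) { }
        };

        template<int N, typename... Args>
        void myfunc(Args... args) { myfunc_impl<N, Args...>::call(args...); }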

    Read the article

  • jquery datepicker - retrieve events and show on hover in dialog - allow the a href to be clicked to access individual event entries

    - by paul724
    Please see this jsfiddle: http://jsfiddle.net/paul724/HXb6v/

    What is working: the datepicker displays correctly and picks up the CSS for today's date and on hover; the dialog box follows the mouse; on click the dialog box "stops" and displays a link.

    What I want to achieve: when a table cell is hovered over, the title of the dialog box should show the date, e.g. "Mon 8th Oct 2012", and the HTML of the dialog box should show the events for that day in list format (there is code that successfully retrieves the first row in function getSelectedDates()). getSelectedDates() needs to be called in the hover event, showing multiple events for that date in the dialog box. I hope that if we can display the date being hovered over in the title of the dialog, then we can use the same information to retrieve the rows from the database and populate the HTML of the dialog for that day.
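
    A sketch under stated assumptions (#datepicker and #dialog are hypothetical IDs; jQuery UI renders each day cell as a td carrying zero-based data-month and data-year attributes):

        // Build the hovered date from the cell's data attributes and push
        // it into the dialog title; the same date can feed getSelectedDates().
        $('#datepicker').on('mouseenter', 'td[data-handler="selectDay"]', function () {
            var $td  = $(this),
                date = new Date(+$td.attr('data-year'),
                                +$td.attr('data-month'),   // zero-based month
                                +$td.text());
            $('#dialog').dialog('option', 'title',
                $.datepicker.formatDate('D d M yy', date)); // e.g. "Mon 8 Oct 2012"
            // getSelectedDates() could be invoked here with `date` to pull
            // that day's rows and fill the dialog's html.
        });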

    Read the article

  • set an input value from jquery function

    - by user1499220
    I'm simply trying to set a value for an input, but it doesn't work. Here is my jQuery function:

        function display_form() {
            try {
                $('#start_date').val('2012/08/21');
            } catch (e) {
                alert('error');
            }
            window.alert($('[name=start_date]').val());
        }

    This shows "undefined". Here is my form input:

        <input id="start_date" name="start_date" type="text" size="6" />

    Should I put the id of the form that I want to use somewhere? I just followed what the jQuery documentation says, but it doesn't seem to work. Could anyone help me?
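
    A hedged sketch: .val() returning undefined means the selector matched nothing, most often because the script ran before the input existed in the DOM.

        // Run after the DOM is ready, and quote the attribute selector:
        $(function () {
            $('#start_date').val('2012/08/21');
            window.alert($('input[name="start_date"]').val());  // "2012/08/21"
        });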

    Read the article

  • PHP strtotime() setting date to default

    - by Paddyd
    I have a script which reads values from an Excel sheet and inserts them into an SQL table. My problem is that when the script reads some of the date fields, it sets them to a default value: 1970-01-01 01:00:00. I have noticed that this happens when it comes across a date where the day is greater than 12, e.g. 13/05/2012 18:52:33 (dd/mm/yyyy hh:mm:ss). I thought this might be because the script takes the day for the month field (i.e. American format) and sees it as an invalid value, so I set the default timezone to my own, hoping that would resolve the problem, but it made no difference:

        date_default_timezone_set('Europe/Dublin');

    Here is an example of what my code is doing. Excel field value: 01/05/2012 18:32:45. Script:

        $start_time = $data->val($x,5);
        echo $start_time = date("Y-m-d H:i:s", strtotime($start_time));

    Output: 2012-01-05 18:32:45. This is the exact format I am looking for (except that the day and month fields are switched around, i.e. American date format), but it doesn't work for the kind of dates I mentioned above.
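
    A sketch of one fix, assuming PHP >= 5.3 for DateTime::createFromFormat:

        // strtotime() treats xx/xx/yyyy as American m/d/y, so 13/05/2012
        // fails (no month 13), strtotime() returns false, and
        // date(..., false) renders the epoch. Parsing with an explicit
        // format avoids the guesswork:
        $start_time = $data->val($x, 5);                 // "13/05/2012 18:52:33"
        $dt = DateTime::createFromFormat('d/m/Y H:i:s', $start_time);
        echo $dt->format('Y-m-d H:i:s');                 // 2012-05-13 18:52:33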

    Read the article

  • Variable pre-fixes, Visual Studio 2010 onwards?

    - by thedixon
    I'm a bit bewildered on this subject, as I consider variable prefixes a thing of the past, but with Visual Studio 2010 onwards (I'm currently using 2012), do people still do this, and why? I only ask because, these days, you can hover over any variable and it will tell you its type and scope. There's literally no need for prefixes for the sake of readability. By this I mean:

        string strHello
        int intHello

    And I'm being language/tool biased here, as Visual Studio takes a lot of the legwork out for you in terms of seeing exactly what type a variable is, including after conversions in the code. This is not a "general programming" question.

    Read the article

  • Check if the current date is between two dates + mysql select query

    - by kj7
    I have the following table:

        id  dateStart   dateEnd     active
        1   2012-11-12  2012-12-31  0
        2   2012-11-12  2012-12-31  0

    I want to check whether today's date falls between dateStart and dateEnd. Here is my query:

        $todaysDate = "2012-26-11";
        $db = Zend_Registry::get("db");
        $result = $db->fetchAll("SELECT * FROM `table` WHERE active=0 AND {$todaysDate} between dateStart and dateEnd");
        return $result;

    But it's not working. Any solution? Thanks in advance.
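
    A sketch of two likely fixes, assuming Zend_Db's positional parameter binding: the date must be Y-m-d ("2012-11-26", not "2012-26-11"), and it must reach SQL as a quoted string; interpolated unquoted, it is just the arithmetic 2012 - 26 - 11.

        $todaysDate = date('Y-m-d');   // e.g. "2012-11-26"
        $result = $db->fetchAll(
            "SELECT * FROM `table` WHERE active = 0 AND ? BETWEEN dateStart AND dateEnd",
            array($todaysDate)
        );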

    Read the article

  • C: Why does gcc allow char array initialization with string literal larger than array?

    - by Ashwin
        int main() {
            char a[7] = "Network";
            return 0;
        }

    A string literal in C is terminated internally with a nul character, so the above code should give a compilation error, since the actual length of the string literal "Network" is 8 and it cannot fit in a char[7] array. However, gcc (even with -Wall) on Ubuntu compiles this code without any error or warning. Why does gcc allow this and not flag it as a compilation error? gcc only gives a warning (still no error!) when the char array is smaller than the string literal. For example, it warns on:

        char a[6] = "Network";

    [Related] Visual C++ 2012 gives a compilation error for char a[7]:

        1>d:\main.cpp(3): error C2117: 'a' : array bounds overflow
        1>          d:\main.cpp(3) : see declaration of 'a'
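
    For reference, this looks like a genuine C/C++ difference rather than a gcc bug; a sketch of the rule:

        /* C11 6.7.9p14 permits this: successive bytes of the literal
           initialize the array, "including the terminating null character
           if there is room" -- and if there is no room, the '\0' is
           simply dropped.                                                */
        char a[7] = "Network";   /* {'N','e','t','w','o','r','k'}, no '\0' */
        char b[6] = "Network";   /* genuinely too small: gcc warns         */
        /* C++ has no such allowance, which is why Visual C++ (compiling
           the file as C++) rejects char a[7].                             */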

    Read the article

  • Iterating through facebook comments JSON object failing

    - by user1594304
    I tried the option of students.item["http://www.myurl.com"].comments.data.length; however, the item["http://www.myurl.com"] call is not working. If I take the URL out of the JSON object and write the iterator with students.comments.data, it works. Here is my code; any help highly appreciated.

        var students = {
            "http://www.myurl.com": {
                "comments": {
                    "data": [
                        {
                            "id": "123456778",
                            "from": { "name": "XYZ", "id": "1000005" },
                            "message": "Hey",
                            "can_remove": false,
                            "created_time": "2012-09-03T03:16:01+0000",
                            "like_count": 0,
                            "user_likes": false
                        }
                    ]
                }
            }
        };

        var i = 0;
        var arrayObject = new Array();
        alert("Parsing 2: " + students.item["http://www.myurl.com"].comments.data.length);
        for (i = 0; i < students.item["http://www.myurl.com"].comments.data.length; i++) {
            alert("Parsing 1: " + i);
            arrayObject.push(students.item["http://www.myurl.com"].comments.data[i].id);
            arrayObject.push(students.item["http://www.myurl.com"].comments.data[i].message);
            arrayObject.push(students.item["http://www.myurl.com"].comments.data[i].created_time);
        }
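
    A hedged sketch of the likely fix: the object has no item member, so bracket notation alone should index the URL-named property:

        // students["http://www.myurl.com"] reaches the property directly;
        // .item[...] looks up a non-existent member and fails.
        var data = students["http://www.myurl.com"].comments.data;
        for (var i = 0; i < data.length; i++) {
            arrayObject.push(data[i].id,
                             data[i].message,
                             data[i].created_time);
        }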

    Read the article

  • Easiest way to keep SSRS child elements in the same relative position when the parent is re-positioned?

    - by Mac
    I am trying to revise the layout of an SSRS report where I have several textboxes that are child elements of a rectangle. When I reposition the parent rectangle down by x, all of the child textboxes maintain the same absolute position; their "Location" (defined relative to the parent) decreases by x, and I then need to reposition the child textboxes. Additionally, if any of these ever has a negative "Location", the parent rectangle is repositioned back up by x. What is the easiest way to move everything in unison? I can Control-click everything and then drag them or use the arrow keys, but I want to position everything with precision, and the "Location" field in the Properties window disappears when selecting more than one item. Is there a way I can avoid individually computing and typing in every "Location" value every time I have a small layout change? I am using SSRS (11.0.3360.12) within the Visual Studio 2012 Shell. Thanks!

    Read the article

  • What's the best way to convert a date and time into a timestamp using php?

    - by user1267980
    I need to convert a date and time into a timestamp with PHP. The following code shows what I'm currently using:

        <?php
        $date = "2012-06-29 10:50";
        $timestamp = strtotime($date);
        echo $timestamp;
        ?>

    However, when I test the timestamp in an online converter (http://www.epochconverter.com), the resulting date is 29th June 2012, 8:50, i.e. two hours earlier. Is it possible that the strtotime() function isn't completely accurate and is just an estimate of the time? If so, are there better methods for getting the exact time? Thanks.
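
    For what it's worth, strtotime() is exact; a hedged sketch of why the two hours appear:

        <?php
        // strtotime() interprets the string in the script's default
        // timezone, while epochconverter.com displays UTC. A two-hour gap
        // just means the server runs at UTC+2. Pinning the zone makes the
        // intent explicit:
        date_default_timezone_set('UTC');
        $date = "2012-06-29 10:50";
        echo strtotime($date);   // Unix timestamp for 10:50 UTC
        ?>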

    Read the article

  • getResponseHeader('last-modified'); does not change value

    - by telexper
        var page = 'data/appointments/<? echo $_SESSION['name']; ?><? echo $_SESSION['last']; ?>App.html';
        var lM;

        function checkModified() {
            $.get(page, function (a, a, x) {
                var mod = x.getResponseHeader('last-modified');
                alert(lM);
                alert("page " + mod);
            });
        }

    When I alert the last-modified header from my page, it outputs the same value, even after I delete all my cookies and cache, and even after I delete the file from the server and replace it. It still outputs one value: Tue, Oct 23, 2012 3:37:41 GMT.
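
    A hedged sketch of one likely cause: IE answers repeated GETs for the same URL from its cache, so the header never changes; jQuery's cache: false option appends a unique _=timestamp parameter to force a fresh request:

        function checkModified() {
            $.ajax({
                url: page,
                cache: false,   // appends _=<timestamp>, defeating the cache
                success: function (data, status, xhr) {
                    alert("page " + xhr.getResponseHeader('last-modified'));
                }
            });
        }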

    Read the article

  • date enable disable

    - by Newbie
    I have an "Add New Product" page, and a field "start_week" in the database table "jos_stockmovement" which contains data for the year and week. On my "Add New Product" page I have a dropdown list named "Start Week" which holds the year and week, for example "2012/36". How do I disable past weeks? All I want to enable is the current week. Here's my code:

        <td colspan="99">
            <%= fld.select :start_week,
                options_for_select(
                    StockMovement.order("year DESC, week DESC").map { |val|
                        [ "#{ val.year }/#{ val.week }", val.id ]
                    },
                    :selected => @product.start_week
                ),
                :class => "ddl_SW" %>
        </td>
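
    A sketch of one way to filter, assuming year and week are integer columns and ISO week numbering (Date#cweek):

        <%# keep only the current and future weeks when building the options %>
        <% today = Date.today %>
        <%= fld.select :start_week,
            options_for_select(
                StockMovement.where("year > :y OR (year = :y AND week >= :w)",
                                    :y => today.year, :w => today.cweek)
                             .order("year DESC, week DESC")
                             .map { |val| [ "#{ val.year }/#{ val.week }", val.id ] },
                :selected => @product.start_week
            ),
            :class => "ddl_SW" %>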

    Read the article

  • read after lseek always return 0

    - by Wins
    I'm having an issue reading a byte from a file after performing lseek:

        long cur_position = lseek(fd, length * -1, SEEK_CUR);  /* rewind file reading by length bytes */
        printf("cur_position: %ld\n", cur_position);           /* %ld: cur_position is a long */

        int num_byte = read(fd, &byte, 1);
        printf("num_byte = %i\n", num_byte);

    With the initial variable length set to 34, the above code prints cur_position 5 (so there are definitely at least 34 bytes after lseek returns), but num_byte, returned from read, is always 0, even though there are still more bytes to read. Does anyone know the reason num_byte always returns 0, or did I make a mistake in the above code? Just for information, the above code was run on the following machine:

        $ uname -srvpio
        Linux 3.2.0-24-generic #39-Ubuntu SMP Mon May 21 16:52:17 UTC 2012 x86_64 x86_64 GNU/Linux
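
    A debugging sketch, reusing fd, length, and byte from the question: read() returns 0 only at end-of-file, so comparing the post-seek position with the file size usually localizes the problem.

        #include <stdio.h>
        #include <sys/stat.h>
        #include <unistd.h>

        off_t pos = lseek(fd, -(off_t)length, SEEK_CUR);   /* check for (off_t)-1 too */
        struct stat st;
        if (pos != (off_t)-1 && fstat(fd, &st) == 0)
            fprintf(stderr, "pos=%lld size=%lld\n",
                    (long long)pos, (long long)st.st_size);
        /* If pos == st.st_size, the descriptor really is at end-of-file
           despite the rewind (e.g. fd was repositioned elsewhere, or
           length did not hold the value expected).                       */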

    Read the article

  • DMV {dm_os_ring_buffers} - Queries to help pinpoint current Issues / usual usage patterns

    - by NeilHambly
    I've been running some queries (below) to help me identify time-sensitive performance issues around memory and CPU. I didn't want to load additional overhead onto the system (unless absolutely necessary) using traces or Profiler; naturally we have various methods to do this: Perfmon counters, DBCC, DMVs, etc. One quick way I like is to run a few DMV queries (normally back in seconds) against the DMV dm_os_ring_buffers to help me find those recent, specific time periods when the system has been substantially changed in some way.

    This one helps me identify when I'm experiencing timeout errors (1222); modify the code to look for other errors, as highlighted below:

        DECLARE @ts_now BIGINT, @dt_max BIGINT, @dt_min BIGINT

        SELECT @ts_now = cpu_ticks / CONVERT(FLOAT, cpu_ticks_in_ms)
        FROM sys.dm_os_sys_info

        SELECT @dt_max = MAX(timestamp), @dt_min = MIN(timestamp)
        FROM sys.dm_os_ring_buffers
        WHERE ring_buffer_type = N'RING_BUFFER_EXCEPTION'

        SELECT record_id,
               DATEADD(ms, -1 * (@ts_now - [timestamp]), GETDATE()) AS EventTime,
               y.Error,
               UserDefined,
               b.description AS NormalizedText
        FROM (
              SELECT record.value('(./Record/@id)[1]', 'int')                   AS record_id,
                     record.value('(./Record/Exception/Error)[1]', 'int')       AS Error,
                     record.value('(./Record/Exception/UserDefined)[1]', 'int') AS UserDefined,
                     TIMESTAMP
              FROM (
                    SELECT TIMESTAMP, CONVERT(XML, record) AS record
                    FROM sys.dm_os_ring_buffers
                    WHERE ring_buffer_type = N'RING_BUFFER_EXCEPTION'
                    AND record LIKE '%<Exception>%'
                   ) AS x
             ) AS y
        INNER JOIN sys.sysmessages b ON y.Error = b.error
        WHERE b.msglangid = 1033 AND y.Error = 1222
        ORDER BY record_id DESC

    Sample output:

        record_id  EventTime         Error  UserDefined  NormalizedText
        15199195   18/03/2010 14:00  1222   0            Lock request time out period exceeded.
        15199194   18/03/2010 14:00  1222   0            Lock request time out period exceeded.
        15199193   18/03/2010 14:00  1222   0            Lock request time out period exceeded.
        15199192   18/03/2010 14:00  1222   0            Lock request time out period exceeded.
        15199191   18/03/2010 14:00  1222   0            Lock request time out period exceeded.
    This one helps me identify when I have unusually high processing (> 50%) or a high number of page faults:

        SELECT record.value('(./Record/@id)[1]', 'int') AS record_id,
               record.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int')          AS SystemIdle,
               record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int')  AS SQLProcessUtilization,
               record.value('(./Record/SchedulerMonitorEvent/SystemHealth/UserModeTime)[1]', 'bigint')     AS UserModeTime,
               record.value('(./Record/SchedulerMonitorEvent/SystemHealth/KernelModeTime)[1]', 'bigint')   AS KernelModeTime,
               record.value('(./Record/SchedulerMonitorEvent/SystemHealth/PageFaults)[1]', 'bigint')       AS PageFaults,
               record.value('(./Record/SchedulerMonitorEvent/SystemHealth/WorkingSetDelta)[1]', 'bigint')  AS WorkingSetDelta,
               record.value('(./Record/SchedulerMonitorEvent/SystemHealth/MemoryUtilization)[1]', 'int')   AS MemoryUtilization,
               TIMESTAMP
        FROM (
              SELECT TIMESTAMP, CONVERT(XML, record) AS record
              FROM sys.dm_os_ring_buffers
              WHERE ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'
              AND record LIKE '%<SystemHealth>%'
             ) AS x

    Example, showing entries > 50% SQL CPU:

        record_id  SystemIdle  SQLProcessUtilization  UserModeTime  KernelModeTime  PageFaults  WorkingSetDelta  MemoryUtilization  TIMESTAMP
        111916     66          29                     36718750      1374843750      21333       -40960           100                7991061289
        111917     54          41                     50156250      1954062500      26914       -28672           100                7991121290
        111918     57          39                     42968750      1838437500      30096       20480            100                7991181290
        111919     41          53                     43906250      2530156250      22088       -4096            100                7991241307
        111920     48          45                     40937500      2124062500      26395       8192             100                7991301310
        111921     52          43                     35625000      2052812500      21996       155648           100                7991361311
        111922     40          55                     36875000      2637343750      33355       -262144          100                7991421311
        111923     36          58                     44843750      2786562500      47019       28672            100                7991481311
        111924     31          64                     53437500      3046562500      31027       61440            100                7991541314
        111925     36          57                     43906250      2711250000      37074       -8192            100                7991601317
        111926     52          43                     43437500      2060156250      29176       20480            100                7991661318
        111927     71          24                     33750000      1141250000      14478       16384            100                7991721320
        111928     71          23                     34531250      1116250000      12711       -20480           100                7991781320
        111929     53          36                     46562500      1714062500      26684       200704           100                7991841323

    Finally, one to provide some understanding of the level of memory-state changes that are occurring:

        SELECT record.value('(./Record/@id)[1]', 'int') AS 'record_id',
               record.value('(./Record/ResourceMonitor/Notification)[1]', 'VARCHAR(100)') AS 'ReservedMemory',
               record.value('(./Record/ResourceMonitor/Indicators)[1]', 'int') AS 'Indicators',
               record.value('(./Record/ResourceMonitor/Effect/@state)[1]', 'VARCHAR(100)') + ' - '
                   + record.value('(./Record/ResourceMonitor/Effect/@reversed)[1]', 'VARCHAR(100)') + ' - '
                   + record.value('(./Record/ResourceMonitor/Effect)[1]', 'VARCHAR(100)') AS 'APPLY-HIGHPM',
               record.value('(./Record/ResourceMonitor/Effect/@state)[2]', 'VARCHAR(100)') + ' - '
                   + record.value('(./Record/ResourceMonitor/Effect/@reversed)[2]', 'VARCHAR(100)') + ' - '
                   + record.value('(./Record/ResourceMonitor/Effect)[2]', 'VARCHAR(100)') AS 'APPLY-HIGHPM',
               record.value('(./Record/ResourceMonitor/Effect/@state)[3]', 'VARCHAR(100)') + ' - '
                   + record.value('(./Record/ResourceMonitor/Effect/@reversed)[3]', 'VARCHAR(100)') + ' - '
                   + record.value('(./Record/ResourceMonitor/Effect)[3]', 'VARCHAR(100)') AS 'REVERT_HIGHPM',
               record.value('(./Record/MemoryNode/ReservedMemory)[1]', 'int') AS 'ReservedMemory',
               record.value('(./Record/MemoryNode/CommittedMemory)[1]', 'int') AS 'CommittedMemory',
               record.value('(./Record/MemoryNode/SharedMemory)[1]', 'int') AS 'SharedMemory',
               record.value('(./Record/MemoryNode/AWEMemory)[1]', 'int') AS 'AWEMemory',
               record.value('(./Record/MemoryNode/SinglePagesMemory)[1]', 'int') AS 'SinglePagesMemory',
               record.value('(./Record/MemoryNode/CachedMemory)[1]', 'int') AS 'CachedMemory',
               record.value('(./Record/MemoryRecord/MemoryUtilization)[1]', 'int') AS 'MemoryUtilization',
               record.value('(./Record/MemoryRecord/TotalPhysicalMemory)[1]', 'int') AS 'TotalPhysicalMemory',
               record.value('(./Record/MemoryRecord/AvailablePhysicalMemory)[1]', 'int') AS 'AvailablePhysicalMemory',
               record.value('(./Record/MemoryRecord/TotalPageFile)[1]', 'int') AS 'TotalPageFile',
               record.value('(./Record/MemoryRecord/AvailablePageFile)[1]', 'int') AS 'AvailablePageFile',
               record.value('(./Record/MemoryRecord/TotalVirtualAddressSpace)[1]', 'bigint') AS 'TotalVirtualAddressSpace',
               record.value('(./Record/MemoryRecord/AvailableVirtualAddressSpace)[1]', 'bigint') AS 'AvailableVirtualAddressSpace',
               record.value('(./Record/MemoryRecord/AvailableExtendedVirtualAddressSpace)[1]', 'bigint') AS 'AvailableExtendedVirtualAddressSpace',
               TIMESTAMP
        FROM (
              SELECT TIMESTAMP, CONVERT(XML, record) AS record
              FROM sys.dm_os_ring_buffers
              WHERE ring_buffer_type = N'RING_BUFFER_RESOURCE_MONITOR'
              AND record LIKE '%<ResourceMonitor>%'
             ) AS x

    Read the article

  • Laissez les bon temps rouler! (Microsoft BI Conference 2010)

    - by smisner
    "Laissez les bons temps rouler" is a Cajun phrase that I heard frequently when I lived in New Orleans in the mid-1990s. It means "Let the good times roll!" and encapsulates a feeling of happy expectation. As I met with many of my peers and new acquaintances at the Microsoft BI Conference last week, this phrase kept running through my mind as people spoke about their plans in their respective businesses, the benefits and opportunities that the recent releases in the BI stack are providing, and their expectations about the future of the BI stack. Notwithstanding some jabs here and there to point out the platform is neither perfect now nor will be anytime soon (along with admissions that the competitors are also not perfect), and notwithstanding several missteps by the event organizers (which I don't care to enumerate), the overarching mood at the conference was positive. It was a refreshing change from the doom and gloom hovering over several conferences that I attended in 2009. Although many people expect economic hardships to continue over the coming year or so, everyone I know in the BI field is busier than ever and expects to stay busy for quite a while. Self-Service BI Self-service was definitely a theme of the BI conference. In the keynote, Ted Kummert opened with a look back to a fairy tale vision of self-service BI that he told in 2008. At that time, the fairy tale future was a time when "every end user was able to use BI technologies within their job in order to move forward more effectively" and transitioned to the present time in which SQL Server 2008 R2, Office 2010, and SharePoint 2010 are available to deliver managed self-service BI. This set of technologies is presumably poised to address the needs of the 80% of users that Kummert said do not use BI today. He proceeded to outline a series of activities that users ought to be able to do themselves--from simple changes to a report like formatting or an addtional data visualization to integration of an additional data source. The keynote then continued with a series of demonstrations of both current and future technology in support of self-service BI. Some highlights that interested me: PowerPivot, of course, is the flagship product for self-service BI in the Microsoft BI stack. In the TechEd keynote, which was open to the BI conference attendees, Amir Netz (twitter) impressed the audience by demonstrating interactivity with a workbook containing 100 million rows. He upped the ante at the BI keynote with his demonstration of a future-state PowerPivot workbook containing over 2 billion records. It's important to note that this volume of data is being processed by a server engine, and not in the PowerPivot client engine. (Yes, I think it's impressive, but none of my clients are typically wrangling with 2 billion records at a time. Maybe they're thinking too small. This ability to work quickly with large data sets has greater implications for BI solutions than for self-service BI, in my opinion.) Amir also demonstrated KPIs for the future PowerPivot, which appeared to be easier to implement than in any other Microsoft product that supports KPIs, apart from simple KPIs in SharePoint. (My initial reaction is that we have one more place to build KPIs. Great. It's confusing enough. I haven't seen how well those KPIs integrate with other BI tools, which will be important for adoption.) One more PowerPivot feature that Amir showed was a graphical display of the lineage for calculations. 
(This is hugely practical, especially if you build up calculations incrementally. You can more easily follow the logic from calculation to calculation. Furthermore, if you need to make a change to one calculation, you can assess the impact on other calculations.) Another product demonstration will be available within the next 30 days--Pivot for Reporting Services. If you haven't seen this technology yet, check it out at www.getpivot.com. (It definitely has a wow factor, but I'm skeptical about its practicality. However, I'm looking forward to trying it out with data that I understand.) Michael Tejedor (twitter) demonstrated a feature that I think is really interesting and not emphasized nearly enough--overshadowed by PowerPivot, no doubt. That feature is the Microsoft Business Intelligence Indexing Connector, which enables search of the content of Excel workbooks and Reporting Services reports. (This capability existed in MOSS 2007, but was more cumbersome to implement. The search results in SharePoint 2010 are not only cooler, but more useful by describing whether the content is found in a table or a chart, for example.) This may yet be the dawning of the age of self-service BI - a phrase I've heard repeated from time to time over the last decade - but I think BI professionals are likely to stay busy for a long while, and need not start looking for a new line of work. Kummert repeatedly referenced strategic BI solutions in contrast to self-service BI to emphasize that self-service BI is not a replacement for the services that BI professionals provide. After all, self-service BI does not appear magically on user desktops (or whatever device they want to use). A supporting infrastructure is necessary, and grows in complexity in proportion to the need to simplify BI for users. It's one thing to hear the party line touted by Microsoft employees at the BI keynote, but it's another to hear from the people who are responsible for implementing and supporting it within an organization. Rob Collie (blog | twitter), Kasper de Jonge (blog | twitter), Vidas Matelis (site | twitter), and I were invited to join Andrew Brust (blog | twitter) as he led a Birds of a Feather session at TechEd entitled "PowerPivot: Is It the BI Deal-Changer for Developers and IT Pros?" I would single out the prevailing concern in this session as the issue of control. On one side of this issue were those who were concerned that they would lose control once PowerPivot is implemented. On the other side were those who believed that data should be freely accessible to users in PowerPivot, and even acknowledgment that users would get the data they want even if it meant they would have to manually enter into a workbook to have it ready for analysis. For another viewpoint on how PowerPivot played out at the conference, see Rob Collie's observations. Collaborative BI I have been intrigued by the notion of collaborative BI for a very long time. Before I discovered BI, I was a Lotus Notes developer and later a manager of developers, working in a software company that enabled collaboration in the legal industry. Not only did I help create collaborative systems for our clients, I created a complete project management from the ground up to collaboratively manage our custom development work. In that case, collaboration involved my team, my client contacts, and me. I was also able to produce my own BI from that system as well, but didn't know that's what I was doing at the time. 
Only in recent years has SharePoint begun to catch up with the capabilities that I had with Lotus Notes more than a decade ago. Eventually, I had the opportunity at that job to formally investigate BI as another product offering for our software, and the rest - as they say - is history. I built my first data warehouse with Scott Cameron (who has also ventured into the authoring world by writing Analysis Services 2008 Step by Step and was at the BI Conference last week where I got to reminisce with him for a bit) and that began a career that I never imagined at the time. Fast forward to 2010, and I'm still lauding the virtues of collaborative BI, if only the tools will catch up to my vision! Thus, I was anxious to see what Donald Farmer (blog | twitter) and Rita Sallam of Gartner had to say on the subject in their session "Collaborative Decision Making." As I suspected, the tools aren't quite there yet, but the vendors are moving in the right direction. One thing I liked about this session was a non-Microsoft perspective of the state of the industry with regard to collaborative BI. In addition, this session included a better demonstration of SharePoint collaborative BI capabilities than appeared in the BI keynote. Check out the video in the link to the session to see the demonstration. One of the use cases that was demonstrated was linking from information to a person, because, as Donald put it, "People don't trust data, they trust people." The Microsoft BI Stack in General A question I hear all the time from students when I'm teaching is how to know what tools to use when there is overlap between products in the BI stack. I've never taken the time to codify my thoughts on the subject, but saw that my friend Dan Bulos provided good insight on this topic from a variety of perspectives in his session, "So Many BI Tools, So Little Time." I thought one of his best points was that ideally you should be able to design in your tool of choice, and then deploy to your tool of choice. Unfortunately, the ideal is yet to become real across the platform. The closest we come is with the RDL in Reporting Services which can be produced from two different tools (Report Builder or Business Intelligence Development Studio's Report Designer), manually, or by a third-party or custom application. I have touted the idea for years (and publicly said so about 5 years ago) that eventually more products would be RDL producers or consumers, but we aren't there yet. Maybe in another 5 years. Another interesting session that covered the BI stack against a backdrop of competitive products was delivered by Andrew Brust. Andrew did a marvelous job of consolidating a lot of information in a way that clearly communicated how various vendors' offerings compared to the Microsoft BI stack. He also made a particularly compelling argument about how the existence of an ecosystem around the Microsoft BI stack provided innovation and opportunities lacking for other vendors. Check out his presentation, "How Does the Microsoft BI Stack...Stack Up?" Expo Hall I had planned to spend more time in the Expo Hall to see who was doing new things with the BI stack, but didn't manage to get very far. Each time I set out on an exploratory mission, I got caught up in some fascinating conversations with one or more of my peers. I find interacting with people that I meet at conferences just as important as attending sessions to learn something new. There were a couple of items that really caught me eye, however, that I'll share here. 
Pragmatic Works. Whether you develop SSIS packages, build SSAS cubes, or author SSRS reports (or all of the above), you really must take a look at BI Documenter. Brian Knight (twitter) walked me through the key features, and I must say I was impressed. Once you've seen what this product can do, you won't want to document your BI projects any other way. You can download a free single-user database edition, or choose from more feature-rich standard or professional editions. Microsoft Press ebooks. I also stopped by the O'Reilly Media booth to meet some folks that one of my acquisitions editors at Microsoft Press recommended. In case you haven't heard, Microsoft Press has partnered with O'Reilly Media for distribution and publishing. Apart from my interest in learning more about O'Reilly Media as an author, an advertisement in their booth caught me eye which I think is a really great move. When you buy Microsoft Press ebooks through the O'Reilly web site, you can receive it in any (or all) of the following formats where possible: PDF, epub, .mobi for Kindle and .apk for Android. You also have lifetime DRM-free access to the ebooks. As someone who is an avid collector of books, I fnd myself running out of room for storage. In addition, I travel a lot, and it's hard to lug my reference library with me. Today's e-reader options make the move to digital books a more viable way to grow my library. Having a variety of formats means I am not limited to a single device, and lifetime access means I don't have to worry about keeping track of where I've stored my files. Because the e-books are DRM-free, I can copy and paste when I'm compiling notes, and I can print pages when necessary. That's a winning combination in my mind! Overall, I was pleased with the BI conference. There were many more sessions that I couldn't attend, either because the room was full when I got there or there were multiple sessions running concurrently that I wanted to see. Fortunately, many of the sessions are accessible for viewing online at http://www.msteched.com/2010/NorthAmerica along with the TechEd sessions. You can spot the BI sessions by the yellow skyline on the title slide of the presentation as shown below. Share this post: email it! | bookmark it! | digg it! | reddit! | kick it! | live it!

    Read the article

  • PASS: Bylaw Changes

    - by Bill Graziano
    While you’re reading this, a post should be going up on the PASS blog on the plans to change our bylaws.  You should be able to find our old bylaws, our proposed bylaws and a red-lined version of the changes.  We plan to listen to feedback until March 31st.  At that point we’ll decide whether to vote on these changes or take other action. The executive summary is that we’re adding a restriction to prevent more than two people from the same company on the Board and eliminating the Board’s Officer Appointment Committee to have Officers directly elected by the Board.  This second change better matches how officer elections have been conducted in the past. The Gritty Details Our scope was to change bylaws to match how PASS actually works and tackle a limited set of issues.  Changing the bylaws is hard.  We’ve been working on these changes since the March board meeting last year.  At that meeting we met and talked through the issues we wanted to address.  In years past the Board has tried to come up with language and then we’ve discussed and negotiated to get to the result.  In March, we gave HQ guidance on what we wanted and asked them to come up with a starting point.  Hannes worked on building us an initial set of changes that we could work our way through.  Discussing changes like this over email is difficult wasn’t very productive.  We do a much better job on this at the in-person Board meetings.  Unfortunately there are only 2 or 3 of those a year. In August we met in Nashville and spent time discussing the changes.  That was also the day after we released the slate for the 2010 election. The discussion around that colored what we talked about in terms of these changes.  We talked very briefly at the Summit and again reviewed and revised the changes at the Board meeting in January.  This is the result of those changes and discussions. We made numerous small changes to clean up language and make wording more clear.  We also made two big changes. Director Employment Restrictions The first is that only two people from the same company can serve on the Board at the same time.  The actual language in section VI.3 reads: A maximum of two (2) Directors who are employed by, or who are joint owners or partners in, the same for-profit venture, company, organization, or other legal entity, may concurrently serve on the PASS Board of Directors at any time. The definition of “employed” is at the sole discretion of the Board. And what a mess this turns out to be in practice.  Our membership is a hodgepodge of interlocking relationships.  Let’s say three Board members get together and start a blog service for SQL Server bloggers.  It’s technically for-profit.  Let’s assume it makes $8 in the first year.  Does that trigger this clause?  (Technically yes.)  We had a horrible time trying to write language that covered everything.  All the sample bylaws that we found were just as vague as this. That led to the third clause in this section.  The first sentence reads: The Board of Directors reserves the right, strictly on a case-by-case basis, to overrule the requirements of Section VI.3 by majority decision for any single Director’s conflict of employment. We needed some way to handle the trivial issues and exercise some judgment.  It seems like a public vote is the best way.  This discloses the relationship and gets each Board member on record on the issue.   In practice I think this clause will rarely be used.  
I think this entire section will only be invoked for actual employment issues and not for small side projects.  In either case we have the mechanisms in place to handle it in a public, transparent way. That’s the first and third clauses.  The second clause says that if your situation changes and you fall afoul of this restriction you need to notify the Board.  The clause further states that if this new job means a Board members violates the “two-per-company” rule the Board may request their resignation.  The Board can also  allow the person to continue serving with a majority vote.  I think this will also take some judgment.  Consider a person switching jobs that leads to three people from the same company.  I’m very likely to ask for someone to resign if all three are two weeks into a two year term.  I’m unlikely to ask anyone to resign if one is two weeks away from ending their term.  In either case, the decision will be a public vote that we can be held accountable for. One concern that was raised was whether this would affect someone choosing to accept a job.  I think that’s a choice for them to make.  PASS is clearly stating its intent that only two directors from any one organization should serve at any time.  Once these bylaws are approved, this policy should not come as a surprise to any potential or current Board members considering a job change.  This clause isn’t perfect.  The biggest hole is business relationships that aren’t defined above.  Let’s say that two employees from company “X” serve on the Board.  What happens if I accept a full-time consulting contract with that company?  Let’s assume I’m working directly for one of the two existing Board members.  That doesn’t violate section VI.3.  But I think it’s clearly the kind of relationship we’d like to prevent.  Unfortunately that was even harder to write than what we have now.  I fully expect that in the next revision of the bylaws we’ll address this.  It just didn’t make it into this one. Officer Elections The officer election process received a slightly different rewrite.  Our goal was to codify in the bylaws the actual process we used to elect the officers.  The officers are the President, Executive Vice-President (EVP) and Vice-President of Marketing.  The Immediate Past President (IPP) is also an officer but isn’t elected.  The IPP serves in that role for two years after completing their term as President.  We do that for continuity’s sake.  Some organizations have a President-elect that serves for one or two years.  The group that founded PASS chose to have an IPP. When I started on the Board, the Nominating Committee (NomCom) selected the slate for the at-large directors and the slate for the officers.  There was always one candidate for each officer position.  It wasn’t really an election so much as the NomCom decided who the next person would be for each officer position.  Behind the scenes the Board worked to select the best people for the role. In June 2009 that process was changed to bring it line with what actually happens.  An Officer Appointment Committee was created that was a subset of the Board.  That committee would take time to interview the candidates and present a slate to the Board for approval.  The majority vote of the Board would determine the officers for the next two years.  In practice the Board itself interviewed the candidates and conducted the elections.  That means it was time to change the bylaws again. Section VII.2 and VII.3 spell out the process used to select the officers.  
We use the phrase “Officer Appointment” to separate it from the Director election, but the end result is that the Board elects the officers.  Section VII.3 starts:

Officers shall be appointed bi-annually by a majority of all the voting members of the Board of Directors.

Everything else revolves around that sentence.  We use the word “appoint,” but they truly are elected.  The bylaws spell out the details for term limits, minimum requirements for President (one prior term as an officer), tie breakers and filling vacancies.

In practice we will hold an election for President, then an election for EVP and then an election for VP of Marketing.  That means losing candidates will be able to fall down the ladder and run for the next open position.  Another point to note is that officers aren’t at-large directors, so a sitting officer who loses all three elections is off the Board.  Making Board member votes public will help with the transparency of this approach.

This process has a number of positives and negatives.  The biggest concern I expect to hear is that our members don’t directly choose the officers.  I’m going to try to list all the positives and negatives of this approach.

Many non-profits value continuity and are slower to change than a business.  On the plus side this promotes that.  On the negative side this promotes that.  If we change too slowly, members complain that we aren’t responsive.  If we change too quickly, we make mistakes and fail at various things.  We’ve been criticized for both of those lately, so I’m not entirely sure where to draw the line.  My rough assumption at this point is that we’re going too slowly on governance and too quickly on becoming “more than a Summit.”

This approach creates competition in the officer elections.  If you are an at-large director there is no consequence to losing an election.  If you are an officer, the only way to stay on the Board is to win an officer election or an at-large election.  If you are an officer and lose an election, you can always run for the next office down.  This makes it very easy for multiple people to contest an election.

There is value in a person moving through the officer positions up to the Presidency, and having the Board select the officers promotes this.  The downside is that it takes a LOT of time to get to the Presidency, and we’ve had good people struggle with burnout.  We’ve had lots of discussion around this.  The process as we’ve described it here makes it possible for someone to move quickly through the ranks but doesn’t prevent people from working their way up through each role.

We talked long and hard about having the officers elected directly by the members.  We had a self-imposed deadline to complete these changes prior to the elections this summer, and our original goal was to make the bylaws reflect our actual process rather than create a new one.  I believe we accomplished that goal, but we ran out of time to consider member elections in the detail they need.

Member elections for officers would require solving several problems.  We would need a way for candidates to fall through the election; that fall-through is what promotes competition.  Without it, few people would risk an election and we’d be back to one candidate per slot.  We would need to do this without holding multiple rounds of elections.  We may be able to copy what other organizations are doing, but I was surprised at how little I could find about how other organizations handle this.
We would also need a way for people who lose an officer election to win an at-large election.  Otherwise we’ll have very little competition for the officer positions.

This brings me to an area where I think we as a Board haven’t done a good job.  We haven’t built a strong process to tell you who is doing a good job and who isn’t.  This is a double-edged sword.  I don’t want to highlight Board members who are failing; that’s not a good way to get people to volunteer and run for the Board.  But I also need a way to let the members make an informed choice about who is doing a good job and who would make a good officer.  Encouraging Board members to blog, publishing minutes and making votes public all help in that regard, but they aren’t the final answer.  I don’t know what the final answer is yet.  I do know that the Board members themselves are uniquely positioned to know which other Board members are doing good work.  They know who speaks up in meetings, who works to build consensus, who has good ideas and who works with the members.

What I Could Do Better

Writing this post, I’ve learned a lot about how we communicated with our members.  The next time we revise the bylaws I’d do a few things differently.  The biggest change would be to provide better documentation.  The March 2009 minutes provide a very detailed look into what changes we wanted to make to the bylaws.  Looking back, I’m a little surprised at how closely they matched our final changes and covered the various arguments.  If you just read those you’d get 90% of what we eventually changed; nearly everything else was just implementation details.  I’d also consider publishing a scope document defining exactly what we were doing and why.  It really helped that we had a limited, defined goal in mind, but I don’t think we did a good job communicating that goal outside the meeting minutes.

That said, I wish I’d blogged more after the August and January meetings.  It would have helped more people know that this change was coming and be ready for it.

Conclusion

These changes address two big concerns that the Board had.  First, they prevent a single organization from dominating the Board.  Second, they codify and clearly spell out how officers are elected.  This is the process that was previously followed, but it was somewhat murky; these changes bring clarity and explain exactly what the Board will do.

We’re going to listen to feedback until March 31st.  At that time we’ll decide whether to approve these changes.  I’m also assuming that we’ll start another round of changes in the next year or two.  Are there other issues in the bylaws that we should tackle in the future?
