Search Results

Search found 36186 results on 1448 pages for 'sql 11'.


  • What does this mean?

    - by LeonixSolutions
    I found this in some code examples while googling: $sql = 'INSERT INTO users (username,passwordHash) VALUES (?,?)'; It's new to me, but I would guess that it is a substitution method, equivalent to $sql = "INSERT INTO users (username,passwordHash) VALUES ($username,$passwordHash)"; or $sql = 'INSERT INTO users (username,passwordHash) VALUES (' . $username . ',' . $passwordHash . ')'; Would that be correct? Is this actual PHP syntax, or was the author just trying to simplify his example?
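
    The ? markers are not plain string substitution; they are placeholders for a prepared statement (for example in PDO or mysqli), and the driver binds the values separately from the SQL text, so no manual quoting or escaping is needed. Below is a minimal sketch of how such a query is typically run with PDO; the DSN, credentials and example values are placeholders invented for this illustration.

        <?php
        // Hypothetical connection details, for illustration only.
        $pdo = new PDO('mysql:host=localhost;dbname=example', 'user', 'secret');

        // Example values; in real code these would come from the registration form.
        $username     = 'alice';
        $passwordHash = '<hashed password>';

        // The SQL is prepared once, with ? placeholders instead of inlined values.
        $sql  = 'INSERT INTO users (username, passwordHash) VALUES (?, ?)';
        $stmt = $pdo->prepare($sql);

        // The actual values are supplied at execution time and sent separately
        // from the SQL text, which is what makes this safe against SQL injection.
        $stmt->execute(array($username, $passwordHash));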

    Read the article

  • Database suggestion

    - by superartsy
    I am working on an enterprise-level product that is designed around the SQL Server Express database (views, concurrent users, stored procedures, CASE, IF). Though we don't use any advanced SQL Server features, if the 4 GB database size limit in Express ends up being a limitation, customers can move to higher editions of SQL Server. However, my problem is that SQL Express deployment is not easy and the installer is huge. This is a major drawback for someone looking to try our product: you don't want end users to walk away from a product because the download is huge. Does anyone have a recommendation for a database that has a smaller footprint but all the features of Express, and which can be migrated to Express?

    Read the article

  • Multiple Forms with Different Actions using Submit Buttons

    - by Adam Coulson
    I have two forms in my index.php and am trying to get them to submit and POST their data separately. Form 1: echo '<form action="process.php?type=news" method="post">'; echo '<table cellspacing="0"><tr><th>'; echo 'Add News to News Feed'; echo '</th></tr><tr><td>'; echo 'Title:<br /><input name="newstitle" type="text" id="add" style="height:16px;width:525px;" size="80" maxlength="186" /><br />'; echo 'Main Body Text:<br /><textarea name="newsfeed" id="add" style="width:525px;height:78px;" maxlength="2000" ></textarea>'; echo '<input style="float:right;margin-top:5px;" id="button" type="submit" value="Submit" />'; echo '</td></tr></table></from>'; And Form 2: echo '<form action="process.php?type=suggest" method="post">'; echo '<table cellspacing="0"><tr><th>'; echo 'Suggest Additions to the Intranet'; echo '</th></tr><tr><td>'; echo '<textarea name="suggest" id="add" style="width:330px;height:60px;" maxlength="800" ></textarea>'; echo '<input style="float:right;margin-top:5px;" id="button" type="submit" value="Submit" />'; echo '</td></tr></table></from>'; I want both of these to post and run their own action after pressing the submit button, but currently the second form submits to the first form's action. How can I fix this? EDIT: I am using .php for both the index and process pages and echoing the forms onto the page. Here is the process.php code: $type=$_REQUEST['type']; $suggest=$_POST['suggest']; $newstitle=$_POST['newstitle']; $news=mysql_real_escape_string($_POST['newsfeed']); if ($type == "news") { $sql=mysql_query("SELECT * FROM newsfeed WHERE title = ('$newstitle')"); $number_of_rows = mysql_num_rows($sql); if ($number_of_rows > 0) { echo 'This Title is Already Taken'; } else { $sql="INSERT INTO newsfeed (title, news) VALUES ('$newstitle','$news')"; if (!mysql_query($sql,$con)) { die('Error: ' . mysql_error()); } header('Location: index.php'); exit; } } elseif ($type == "suggest") { $sql="INSERT INTO suggestions VALUES ('$suggest')"; if (!mysql_query($sql,$con)) { die('Error: ' . mysql_error()); } header('Location: index.php'); exit; }
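
    One thing worth checking in markup like the above is whether each form is actually closed: both snippets end their tables with </from> instead of </form>, so the browser never sees the end of the first form and treats the second set of fields (and its submit button) as part of it, which would explain the second form posting to the first form's action. A simplified sketch of two independently submitting forms is shown below; the markup is trimmed down and only the field names and action URLs are kept from the question.

        <?php
        // First form: posts its own fields to process.php?type=news.
        echo '<form action="process.php?type=news" method="post">';
        echo '  Title: <input name="newstitle" type="text" maxlength="186" /><br />';
        echo '  <textarea name="newsfeed" maxlength="2000"></textarea><br />';
        echo '  <input type="submit" value="Submit" />';
        echo '</form>';   // closed with </form>, not </from>

        // Second form: a separate element, so its submit button posts only
        // to process.php?type=suggest.
        echo '<form action="process.php?type=suggest" method="post">';
        echo '  <textarea name="suggest" maxlength="800"></textarea><br />';
        echo '  <input type="submit" value="Submit" />';
        echo '</form>';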

    Read the article

  • PHP Syntax Error

    - by Jeff Cameron
    I'm trying to update a table in PHP and I keep getting syntax errors. Here's what I've got: if (isset($_POST['inspect'])) { // get gis_id from pole table to update fm_poles $sql = "select gis_id from poles where pole_number = '".$_GET['polenumber']."'"; $rs = pg_query($sql) or die('Query failed: ' . pg_last_error()); $gisid = $row['gis_id']; pg_free_result($rs); // update fm_poles $sql = "update fm_poles set inspect ='".$_POST['inspect']."',co_date = '".$_POST['co_date']."',size = '".$_POST['size']."',date = ".$_POST['date'].",brand ='".$_POST['brand']."',backspan = ".$_POST['backspan']." WHERE gis_id = ".$gisid.""; print $sql."<BR>\n"; $rs = pg_query($sql) or die('Query failed: ' . pg_last_error()); pg_free_result($rs); } This is the query it prints: update fm_poles set inspect ='20120208',co_date = '20030710',size = '30-5',date = 0,brand ='test',backspan = 300 WHERE gis_id = And this is the error message: Query failed: ERROR: syntax error at end of input at character 129
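
    The printed statement ends with WHERE gis_id = and nothing after it, which matches the "syntax error at end of input" message: $gisid is empty because $row is read without a row ever being fetched from the result set. Below is a rough sketch of just the lookup step with an explicit fetch; it also uses pg_query_params so the value from $_GET is bound as a parameter instead of being concatenated into the SQL. This is an illustrative fragment, not the poster's full code.

        <?php
        // Look up gis_id for the submitted pole number. pg_query_params binds
        // $_GET['polenumber'] as $1, so no manual quoting is required.
        $rs = pg_query_params('select gis_id from poles where pole_number = $1',
                              array($_GET['polenumber']))
              or die('Query failed: ' . pg_last_error());

        // A row must be fetched before $row['gis_id'] can be read.
        $row = pg_fetch_assoc($rs);
        if ($row === false) {
            die('No pole found for that pole number');
        }
        $gisid = $row['gis_id'];
        pg_free_result($rs);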

    Read the article

  • How do I split the output from mysqldump into smaller files?

    - by lindelof
    I need to move entire tables from one MySQL database to another. I don't have full access to the second one, only phpMyAdmin access, and I can only upload (compressed) SQL files smaller than 2 MB. But the compressed output from a mysqldump of the first database's tables is larger than 10 MB. Is there a way to split the output from mysqldump into smaller files? I cannot use split(1) since I cannot cat(1) the files back together on the remote server. Or is there another solution I have missed? Edit: The --extended-insert=FALSE option to mysqldump, suggested by the first poster, yields a .sql file that can then be split into importable files, provided that split(1) is called with a suitable --lines option. By trial and error I found that bzip2 compresses the .sql files by a factor of 20, so I needed to figure out roughly how many lines of SQL correspond to 40 MB.
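
    As an aside, the same line-based splitting can be done with a small script when no shell tools are available on the machine holding the dump. The sketch below is one possible approach, assuming the dump was made with --extended-insert=FALSE so that each INSERT occupies a single line; like split(1) with --lines, it cuts purely on line count, and the file names and chunk size are arbitrary choices for the example.

        <?php
        // Split dump.sql into numbered files of at most $linesPerChunk lines each.
        $source        = 'dump.sql';   // hypothetical input file
        $linesPerChunk = 50000;        // tune so each compressed chunk stays under the limit

        $in    = fopen($source, 'r') or die("cannot open $source");
        $chunk = 0;
        $count = 0;
        $out   = null;

        while (($line = fgets($in)) !== false) {
            // Start a new chunk file when needed.
            if ($out === null || $count >= $linesPerChunk) {
                if ($out !== null) {
                    fclose($out);
                }
                $chunk++;
                $out   = fopen(sprintf('dump_part_%03d.sql', $chunk), 'w');
                $count = 0;
            }
            fwrite($out, $line);
            $count++;
        }

        if ($out !== null) {
            fclose($out);
        }
        fclose($in);
        echo "Wrote $chunk file(s)\n";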

    Read the article

  • Best Data Access Methods for New Web Application

    - by user98454
    I'm building a new web application project and am confused by the numerous methods of performing data access. The back end is SQL Server, and I'm unsure whether to use LINQ to SQL or traditional ADO.NET. What are the advantages and disadvantages of LINQ to SQL compared with ADO.NET? If it is ADO.NET, what is the best way to retrieve data: calling stored procedures or calling T-SQL code directly? In short, what is the cleanest, most efficient and most professional way of creating a DAL for a web application in ASP.NET? Thanks

    Read the article

  • Hibernate pagination mechanism

    - by haicnpmk44
    I am trying to use Hibernate pagination for my query (PostgreSQL). I set setFirstResult(0) and setMaxResults(20) on my query. My code is like below: Session session = getSessionFactory().getCurrentSession(); session.beginTransaction(); Query query = session.createQuery("select id , customer_name , address from tbl_customers "); query.setFirstResult(0); query.setMaxResults(20); List<T> entities = query.list(); session.getTransaction().commit(); But when viewing the Hibernate SQL log, I still see the full SQL query: Hibernate: select customer0_.id as id9_, customer0_.customer_name as dst2_9_, customer0_.addres as dst3_9_ from tbl_customers customer0_ Why is there no LIMIT or OFFSET in the query in the Hibernate pagination SQL log? Does anyone know how the Hibernate pagination mechanism works? I guess that Hibernate will select all the data, put it into a ResultSet, and then page within the ResultSet - is that right?

    Read the article

  • Table in Drupal with edit link

    - by user544079
    I have a table created in Drupal with the edit link pointing to the input form. The problem is that it only shows the last row's values in the $email and $comment variables. Can anyone suggest how to modify the table display so that the edit link points to the corresponding record? function _MYMODULE_sql_to_table($sql) { $html = ""; // execute sql $resource = db_query($sql); // fetch database results in an array $results = array(); while ($row = db_fetch_array($resource)) { $results[] = $row; $email = $row['email']; $comment = $row['comment']; drupal_set_message('Email: '.$email. ' comment: '.$comment); } // ensure results exist if (!count($results)) { $html .= "Sorry, no results could be found."; return $html; } // create an array to contain all table rows $rows = array(); // get a list of column headers $columnNames = array_keys($results[0]); // loop through results and create table rows foreach ($results as $key => $data) { // create row data $row = array( 'edit' => l(t('Edit'),"admin/content/test/$email/$comment/ContactUs", $options=array()),); // loop through column names foreach ($columnNames as $c) { $row[] = array( 'data' => $data[$c], 'class' => strtolower(str_replace(' ', '-', $c)), ); } // add row to rows array $rows[] = $row; } // loop through column names and create headers $header = array(); foreach ($columnNames as $c) { $header[] = array( 'data' => $c, 'class' => strtolower(str_replace(' ', '-', $c)), ); } // generate table html $html .= theme('table', $header, $rows); return $html; } // then you can call it in your code... function _MYMODULE_some_page_callback() { $html = ""; $sql = "select * from {contactus}"; $html .= _MYMODULE_sql_to_table($sql); return $html; }
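
    The edit link always ends up with the last row's values because $email and $comment are overwritten on every pass of the initial while loop, and the link is only built afterwards from whatever they last held. Building the link from the current row inside the foreach avoids that. Below is a rough sketch of just that loop, assuming each result row has email and comment columns as in the code above; it is an illustration of the idea rather than a drop-in replacement for the whole function.

        <?php
        // Build one table row per result, taking the link values from the
        // current row ($data) instead of from variables set in an earlier loop.
        foreach ($results as $key => $data) {
            $row = array(
                'edit' => l(t('Edit'),
                            "admin/content/test/{$data['email']}/{$data['comment']}/ContactUs"),
            );

            foreach ($columnNames as $c) {
                $row[] = array(
                    'data'  => $data[$c],
                    'class' => strtolower(str_replace(' ', '-', $c)),
                );
            }

            $rows[] = $row;
        }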

    Read the article

  • How to append an integer to a const char value on the iPhone?

    - by Warrior
    I have a SQL query statement which is used to display the contents of a table. The SQL statement consists of a WHERE clause that needs a numeric value appended to it (1, 2, 3, etc.), depending on the previously selected content. I have the numeric value as an int and I want to append it to the SQL statement, which is a const char. How can I combine the two values? Please help me out. My query is: select * from Book where id=1; and I have the id value as an integer. Thanks.

    Read the article

  • In MSSQL Server 2005 Dev Edition, I faced index corruption

    - by tranhuyhung
    Hi all, When running stored procedures in MSSQL Server, I found it failed and the DBMS (MSSQL Server 2005 Dev Edition) notified that some indexes are corrupted. Please advice me, here below is DBCC logs: DBCC results for 'itopup_dev'. Service Broker Msg 9675, State 1: Message Types analyzed: 14. Service Broker Msg 9676, State 1: Service Contracts analyzed: 6. Service Broker Msg 9667, State 1: Services analyzed: 3. Service Broker Msg 9668, State 1: Service Queues analyzed: 3. Service Broker Msg 9669, State 1: Conversation Endpoints analyzed: 0. Service Broker Msg 9674, State 1: Conversation Groups analyzed: 0. Service Broker Msg 9670, State 1: Remote Service Bindings analyzed: 0. DBCC results for 'sys.sysrowsetcolumns'. There are 1148 rows in 14 pages for object "sys.sysrowsetcolumns". DBCC results for 'sys.sysrowsets'. There are 187 rows in 2 pages for object "sys.sysrowsets". DBCC results for 'sysallocunits'. There are 209 rows in 3 pages for object "sysallocunits". DBCC results for 'sys.sysfiles1'. There are 2 rows in 1 pages for object "sys.sysfiles1". DBCC results for 'sys.syshobtcolumns'. There are 1148 rows in 14 pages for object "sys.syshobtcolumns". DBCC results for 'sys.syshobts'. There are 187 rows in 2 pages for object "sys.syshobts". DBCC results for 'sys.sysftinds'. There are 0 rows in 0 pages for object "sys.sysftinds". DBCC results for 'sys.sysserefs'. There are 209 rows in 1 pages for object "sys.sysserefs". DBCC results for 'sys.sysowners'. There are 15 rows in 1 pages for object "sys.sysowners". DBCC results for 'sys.sysprivs'. There are 135 rows in 1 pages for object "sys.sysprivs". DBCC results for 'sys.sysschobjs'. There are 817 rows in 21 pages for object "sys.sysschobjs". DBCC results for 'sys.syscolpars'. There are 2536 rows in 71 pages for object "sys.syscolpars". DBCC results for 'sys.sysnsobjs'. There are 1 rows in 1 pages for object "sys.sysnsobjs". DBCC results for 'sys.syscerts'. There are 0 rows in 0 pages for object "sys.syscerts". DBCC results for 'sys.sysxprops'. There are 12 rows in 4 pages for object "sys.sysxprops". DBCC results for 'sys.sysscalartypes'. There are 27 rows in 1 pages for object "sys.sysscalartypes". DBCC results for 'sys.systypedsubobjs'. There are 0 rows in 0 pages for object "sys.systypedsubobjs". DBCC results for 'sys.sysidxstats'. There are 466 rows in 15 pages for object "sys.sysidxstats". DBCC results for 'sys.sysiscols'. There are 616 rows in 6 pages for object "sys.sysiscols". DBCC results for 'sys.sysbinobjs'. There are 23 rows in 1 pages for object "sys.sysbinobjs". DBCC results for 'sys.sysobjvalues'. There are 1001 rows in 376 pages for object "sys.sysobjvalues". DBCC results for 'sys.sysclsobjs'. There are 14 rows in 1 pages for object "sys.sysclsobjs". DBCC results for 'sys.sysrowsetrefs'. There are 0 rows in 0 pages for object "sys.sysrowsetrefs". DBCC results for 'sys.sysremsvcbinds'. There are 0 rows in 0 pages for object "sys.sysremsvcbinds". DBCC results for 'sys.sysxmitqueue'. There are 0 rows in 0 pages for object "sys.sysxmitqueue". DBCC results for 'sys.sysrts'. There are 1 rows in 1 pages for object "sys.sysrts". DBCC results for 'sys.sysconvgroup'. There are 0 rows in 0 pages for object "sys.sysconvgroup". DBCC results for 'sys.sysdesend'. There are 0 rows in 0 pages for object "sys.sysdesend". DBCC results for 'sys.sysdercv'. There are 0 rows in 0 pages for object "sys.sysdercv". DBCC results for 'sys.syssingleobjrefs'. There are 317 rows in 2 pages for object "sys.syssingleobjrefs". 
DBCC results for 'sys.sysmultiobjrefs'. There are 3607 rows in 37 pages for object "sys.sysmultiobjrefs". DBCC results for 'sys.sysdbfiles'. There are 2 rows in 1 pages for object "sys.sysdbfiles". DBCC results for 'sys.sysguidrefs'. There are 0 rows in 0 pages for object "sys.sysguidrefs". DBCC results for 'sys.sysqnames'. There are 91 rows in 1 pages for object "sys.sysqnames". DBCC results for 'sys.sysxmlcomponent'. There are 93 rows in 1 pages for object "sys.sysxmlcomponent". DBCC results for 'sys.sysxmlfacet'. There are 97 rows in 1 pages for object "sys.sysxmlfacet". DBCC results for 'sys.sysxmlplacement'. There are 17 rows in 1 pages for object "sys.sysxmlplacement". DBCC results for 'sys.sysobjkeycrypts'. There are 0 rows in 0 pages for object "sys.sysobjkeycrypts". DBCC results for 'sys.sysasymkeys'. There are 0 rows in 0 pages for object "sys.sysasymkeys". DBCC results for 'sys.syssqlguides'. There are 0 rows in 0 pages for object "sys.syssqlguides". DBCC results for 'sys.sysbinsubobjs'. There are 0 rows in 0 pages for object "sys.sysbinsubobjs". DBCC results for 'TBL_BONUS_TEMPLATES'. There are 0 rows in 0 pages for object "TBL_BONUS_TEMPLATES". DBCC results for 'TBL_ROLE_PAGE_GROUP'. There are 18 rows in 1 pages for object "TBL_ROLE_PAGE_GROUP". DBCC results for 'TBL_BONUS_LEVELS'. There are 0 rows in 0 pages for object "TBL_BONUS_LEVELS". DBCC results for 'TBL_SUPERADMIN'. There are 1 rows in 1 pages for object "TBL_SUPERADMIN". DBCC results for 'TBL_ADMIN_ROLES'. There are 11 rows in 1 pages for object "TBL_ADMIN_ROLES". DBCC results for 'TBL_ADMIN_USER_ROLE'. There are 42 rows in 1 pages for object "TBL_ADMIN_USER_ROLE". DBCC results for 'TBL_BONUS_CALCULATION_HISTORIES'. There are 0 rows in 0 pages for object "TBL_BONUS_CALCULATION_HISTORIES". DBCC results for 'TBL_MERCHANT_MOBILES'. There are 0 rows in 0 pages for object "TBL_MERCHANT_MOBILES". DBCC results for 'TBL_ARCHIVE_EXPORTED_SOFTPINS'. There are 16030918 rows in 35344 pages for object "TBL_ARCHIVE_EXPORTED_SOFTPINS". DBCC results for 'TBL_ARCHIVE_LOGS'. There are 280 rows in 2 pages for object "TBL_ARCHIVE_LOGS". DBCC results for 'TBL_ADMIN_USERS'. There are 29 rows in 1 pages for object "TBL_ADMIN_USERS". DBCC results for 'TBL_SYSTEM_ALERT_GROUPS'. There are 4 rows in 1 pages for object "TBL_SYSTEM_ALERT_GROUPS". DBCC results for 'TBL_EXPORTED_TRANSACTIONS'. There are 7848 rows in 89 pages for object "TBL_EXPORTED_TRANSACTIONS". DBCC results for 'TBL_SYSTEM_ALERTS'. There are 968 rows in 9 pages for object "TBL_SYSTEM_ALERTS". DBCC results for 'TBL_SYSTEM_ALERT_GROUP_MEMBERS'. There are 1 rows in 1 pages for object "TBL_SYSTEM_ALERT_GROUP_MEMBERS". DBCC results for 'TBL_ESTIMATED_TIME'. There are 11 rows in 1 pages for object "TBL_ESTIMATED_TIME". DBCC results for 'TBL_SYSTEM_ALERT_MEMBERS'. There are 0 rows in 1 pages for object "TBL_SYSTEM_ALERT_MEMBERS". DBCC results for 'TBL_COMMISSIONS'. There are 10031 rows in 106 pages for object "TBL_COMMISSIONS". DBCC results for 'TBL_CATEGORIES'. There are 3 rows in 1 pages for object "TBL_CATEGORIES". DBCC results for 'TBL_SERVICE_PROVIDERS'. There are 11 rows in 1 pages for object "TBL_SERVICE_PROVIDERS". DBCC results for 'TBL_CATEGORY_SERVICE_PROVIDER'. There are 11 rows in 1 pages for object "TBL_CATEGORY_SERVICE_PROVIDER". DBCC results for 'TBL_PRODUCTS'. There are 73 rows in 6 pages for object "TBL_PRODUCTS". DBCC results for 'TBL_MERCHANT_KEYS'. There are 291 rows in 30 pages for object "TBL_MERCHANT_KEYS". DBCC results for 'TBL_POS_UNLOCK_KEYS'. 
There are 0 rows in 0 pages for object "TBL_POS_UNLOCK_KEYS". DBCC results for 'TBL_POS'. There are 0 rows in 0 pages for object "TBL_POS". DBCC results for 'TBL_IMPORT_BATCHES'. There are 3285 rows in 84 pages for object "TBL_IMPORT_BATCHES". DBCC results for 'TBL_IMPORT_KEYS'. There are 2 rows in 1 pages for object "TBL_IMPORT_KEYS". DBCC results for 'TBL_PRODUCT_COMMISSION_TEMPLATES'. There are 634 rows in 4 pages for object "TBL_PRODUCT_COMMISSION_TEMPLATES". DBCC results for 'TBL_POS_SETTLE_TRANSACTIONS'. There are 0 rows in 0 pages for object "TBL_POS_SETTLE_TRANSACTIONS". DBCC results for 'TBL_CHANGE_KEY_SOFTPINS'. There are 0 rows in 1 pages for object "TBL_CHANGE_KEY_SOFTPINS". DBCC results for 'TBL_POS_RETURN_TRANSACTIONS'. There are 0 rows in 0 pages for object "TBL_POS_RETURN_TRANSACTIONS". DBCC results for 'TBL_POS_SOFTPINS'. There are 0 rows in 0 pages for object "TBL_POS_SOFTPINS". DBCC results for 'TBL_POS_MENUS'. There are 0 rows in 0 pages for object "TBL_POS_MENUS". DBCC results for 'TBL_COMMISSION_TEMPLATES'. There are 23 rows in 1 pages for object "TBL_COMMISSION_TEMPLATES". DBCC results for 'TBL_DOWNLOAD_TRANSACTIONS'. There are 170820 rows in 1789 pages for object "TBL_DOWNLOAD_TRANSACTIONS". DBCC results for 'TBL_IMPORT_TEMP_SOFTPINS'. There are 0 rows in 1 pages for object "TBL_IMPORT_TEMP_SOFTPINS". DBCC results for 'TBL_REGIONS'. There are 2 rows in 1 pages for object "TBL_REGIONS". DBCC results for 'TBL_SOFTPINS'. There are 9723677 rows in 126611 pages for object "TBL_SOFTPINS". DBCC results for 'sysdiagrams'. There are 0 rows in 0 pages for object "sysdiagrams". DBCC results for 'TBL_SYNCHRONIZE_TRANSACTIONS'. There are 9302 rows in 53 pages for object "TBL_SYNCHRONIZE_TRANSACTIONS". DBCC results for 'TBL_SALEMEN'. There are 32 rows in 1 pages for object "TBL_SALEMEN". DBCC results for 'TBL_RESERVATION_SOFTPINS'. There are 131431 rows in 1629 pages for object "TBL_RESERVATION_SOFTPINS". DBCC results for 'TBL_SYNCHRONIZE_TRANSACTION_ITEMS'. There are 5345 rows in 16 pages for object "TBL_SYNCHRONIZE_TRANSACTION_ITEMS". DBCC results for 'TBL_ACCOUNTS'. There are 1 rows in 1 pages for object "TBL_ACCOUNTS". DBCC results for 'TBL_SYNCHRONIZE_TRANSACTION_SOFTPIN'. There are 821988 rows in 2744 pages for object "TBL_SYNCHRONIZE_TRANSACTION_SOFTPIN". *DBCC results for 'TBL_EXPORTED_SOFTPINS'. Msg 8928, Level 16, State 1, Line 1 Object ID 1716917188, index ID 1, partition ID 72057594046119936, alloc unit ID 72057594050838528 (type In-row data): Page (1:677314) could not be processed. See other errors for details. Msg 8939, Level 16, State 7, Line 1 Table error: Object ID 1716917188, index ID 1, partition ID 72057594046119936, alloc unit ID 72057594050838528 (type In-row data), page (1:677314). Test (m_freeData = PAGEHEADSIZE && m_freeData <= (UINT)PAGESIZE - m_slotCnt * sizeof (Slot)) failed. Values are 15428 and 7240. There are 2267937 rows in 6133 pages for object "TBL_EXPORTED_SOFTPINS". CHECKDB found 0 allocation errors and 2 consistency errors in table 'TBL_EXPORTED_SOFTPINS' (object ID 1716917188).* DBCC results for 'TBL_DOWNLOAD_SOFTPINS'. There are 7029404 rows in 17999 pages for object "TBL_DOWNLOAD_SOFTPINS". DBCC results for 'TBL_MERCHANT_BALANCE_CREDIT_PAID'. There are 0 rows in 0 pages for object "TBL_MERCHANT_BALANCE_CREDIT_PAID". DBCC results for 'TBL_ARCHIVE_SOFTPINS'. There are 44015040 rows in 683692 pages for object "TBL_ARCHIVE_SOFTPINS". DBCC results for 'TBL_ACCOUNT_BALANCE_LOGS'. There are 0 rows in 0 pages for object "TBL_ACCOUNT_BALANCE_LOGS". 
DBCC results for 'TBL_BLOCK_BATCHES'. There are 23 rows in 1 pages for object "TBL_BLOCK_BATCHES". DBCC results for 'TBL_BLOCK_BATCH_SOFTPIN'. There are 396 rows in 1 pages for object "TBL_BLOCK_BATCH_SOFTPIN". DBCC results for 'TBL_MERCHANTS'. There are 290 rows in 22 pages for object "TBL_MERCHANTS". DBCC results for 'TBL_DOWNLOAD_TRANSACTION_ITEMS'. There are 189296 rows in 1241 pages for object "TBL_DOWNLOAD_TRANSACTION_ITEMS". DBCC results for 'TBL_BLOCK_BATCH_CONDITIONS'. There are 23 rows in 1 pages for object "TBL_BLOCK_BATCH_CONDITIONS". DBCC results for 'TBL_SP_ADVERTISEMENTS'. There are 6 rows in 1 pages for object "TBL_SP_ADVERTISEMENTS". DBCC results for 'TBL_SERVER_KEYS'. There are 1 rows in 1 pages for object "TBL_SERVER_KEYS". DBCC results for 'TBL_ARCHIVE_DOWNLOAD_SOFTPINS'. There are 27984122 rows in 60773 pages for object "TBL_ARCHIVE_DOWNLOAD_SOFTPINS". DBCC results for 'TBL_ACCOUNT_BALANCE_REQUESTS'. There are 0 rows in 0 pages for object "TBL_ACCOUNT_BALANCE_REQUESTS". DBCC results for 'TBL_MERCHANT_TERMINALS'. There are 633 rows in 4 pages for object "TBL_MERCHANT_TERMINALS". DBCC results for 'TBL_SP_PREFIXES'. There are 6 rows in 1 pages for object "TBL_SP_PREFIXES". DBCC results for 'TBL_DIRECT_TOPUP_TRANSACTIONS'. There are 43 rows in 1 pages for object "TBL_DIRECT_TOPUP_TRANSACTIONS". DBCC results for 'TBL_MERCHANT_BALANCE_REQUESTS'. There are 19367 rows in 171 pages for object "TBL_MERCHANT_BALANCE_REQUESTS". DBCC results for 'TBL_ACTION_LOGS'. There are 133714 rows in 1569 pages for object "TBL_ACTION_LOGS". DBCC results for 'sys.queue_messages_1977058079'. There are 0 rows in 0 pages for object "sys.queue_messages_1977058079". DBCC results for 'sys.queue_messages_2009058193'. There are 0 rows in 0 pages for object "sys.queue_messages_2009058193". DBCC results for 'TBL_CODES'. There are 98 rows in 1 pages for object "TBL_CODES". DBCC results for 'TBL_MERCHANT_BALANCE_LOGS'. There are 183498 rows in 3178 pages for object "TBL_MERCHANT_BALANCE_LOGS". DBCC results for 'TBL_MERCHANT_CHANNEL_TEMPLATE'. There are 397 rows in 2 pages for object "TBL_MERCHANT_CHANNEL_TEMPLATE". DBCC results for 'sys.queue_messages_2041058307'. There are 0 rows in 0 pages for object "sys.queue_messages_2041058307". DBCC results for 'TBL_VNPTEPAY'. There are 0 rows in 0 pages for object "TBL_VNPTEPAY". DBCC results for 'TBL_PAGE_GROUPS'. There are 10 rows in 1 pages for object "TBL_PAGE_GROUPS". DBCC results for 'TBL_PAGE_GROUP_PAGE'. There are 513 rows in 2 pages for object "TBL_PAGE_GROUP_PAGE". DBCC results for 'TBL_ACCOUNT_CHANNEL_TEMPLATE'. There are 0 rows in 0 pages for object "TBL_ACCOUNT_CHANNEL_TEMPLATE". DBCC results for 'TBL_PAGES'. There are 148 rows in 3 pages for object "TBL_PAGES". CHECKDB found 0 allocation errors and 2 consistency errors in database 'itopup_dev'. repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (itopup_dev). DBCC execution completed. If DBCC printed error messages, contact your system administrator.

    Read the article

  • Error when installing AppFabric 1.1 on Server 2012 64bit

    - by no9
    I am trying to install AppFabric 1.1 on 64bit Windows Server 2012 R2. All updates have been installed and updates are turned ON .NET Framework 4.0 is installed .NET Framework 3.5 is installed IIS is installed Windows Powershell 3.0 should already be included in Server 2012 I am getting the following error: 2014-03-21 11:02:34, Information Setup ===== Logging started: 2014-03-21 11:02:34+01:00 ===== 2014-03-21 11:02:34, Information Setup File: c:\6c4006b0b3f6dee1bf616f1967\setup.exe 2014-03-21 11:02:34, Information Setup InternalName: Setup.exe 2014-03-21 11:02:34, Information Setup OriginalFilename: Setup.exe 2014-03-21 11:02:34, Information Setup FileVersion: 1.1.2106.32 2014-03-21 11:02:34, Information Setup FileDescription: Setup.exe 2014-03-21 11:02:34, Information Setup Product: Microsoft(R) Windows(R) Server AppFabric 2014-03-21 11:02:34, Information Setup ProductVersion: 1.1.2106.32 2014-03-21 11:02:34, Information Setup Debug: False 2014-03-21 11:02:34, Information Setup Patched: False 2014-03-21 11:02:34, Information Setup PreRelease: False 2014-03-21 11:02:34, Information Setup PrivateBuild: False 2014-03-21 11:02:34, Information Setup SpecialBuild: False 2014-03-21 11:02:34, Information Setup Language: Language Neutral 2014-03-21 11:02:34, Information Setup 2014-03-21 11:02:34, Information Setup OS Name: Windows Server 2012 R2 Standard 2014-03-21 11:02:34, Information Setup OS Edition: ServerStandard 2014-03-21 11:02:34, Information Setup OSVersion: Microsoft Windows NT 6.2.9200.0 2014-03-21 11:02:34, Information Setup CurrentCulture: sl-SI 2014-03-21 11:02:34, Information Setup Processor Architecture: AMD64 2014-03-21 11:02:34, Information Setup Event Registration Source : AppFabric_Setup 2014-03-21 11:02:34, Information Setup 2014-03-21 11:02:34, Information Setup Microsoft.ApplicationServer.Setup.Upgrade.V1UpgradeSetupModule : Initiating V1.0 Upgrade module. 2014-03-21 11:02:34, Information Setup Microsoft.ApplicationServer.Setup.Upgrade.V1UpgradeSetupModule : V1.0 is not installed. 2014-03-21 11:02:54, Information Setup Microsoft.ApplicationServer.Setup.Upgrade.V1UpgradeSetupModule : Initiating V1 Upgrade pre-install. 2014-03-21 11:02:54, Information Setup Microsoft.ApplicationServer.Setup.Upgrade.V1UpgradeSetupModule : V1.0 is not installed, not taking backup. 2014-03-21 11:02:55, Information Setup Executing C:\Windows\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe with commandline -iru. 2014-03-21 11:02:55, Information Setup Return code from aspnet_regiis.exe is 0 2014-03-21 11:02:55, Information Setup Process.Start: C:\Windows\system32\msiexec.exe /quiet /norestart /i "c:\6c4006b0b3f6dee1bf616f1967\Microsoft CCR and DSS Runtime 2008 R3.msi" /l*vx "C:\Users\Administrator\AppData\Local\Temp\AppServerSetup1_1(2014-03-21 11-02-55).log" 2014-03-21 11:02:57, Information Setup Process.ExitCode: 0x00000000 2014-03-21 11:02:57, Information Setup Windows features successfully enabled. 
2014-03-21 11:02:57, Information Setup Process.Start: C:\Windows\system32\msiexec.exe /quiet /norestart /i "c:\6c4006b0b3f6dee1bf616f1967\Packages\AppFabric-1.1-for-Windows-Server-64.msi" ADDDEFAULT=Worker,WorkerAdmin,CacheService,CacheAdmin,Setup /l*vx "C:\Users\Administrator\AppData\Local\Temp\AppServerSetup1_1(2014-03-21 11-02-57).log" LOGFILE="C:\Users\Administrator\AppData\Local\Temp\AppServerSetup1_1_CustomActions(2014-03-21 11-02-57).log" INSTALLDIR="C:\Program Files\AppFabric 1.1 for Windows Server" LANGID=en-US 2014-03-21 11:03:45, Information Setup Process.ExitCode: 0x00000643 2014-03-21 11:03:45, Error Setup AppFabric installation failed because installer MSI returned with error code : 1603 2014-03-21 11:03:45, Error Setup 2014-03-21 11:03:45, Error Setup AppFabric installation failed because installer MSI returned with error code : 1603 2014-03-21 11:03:45, Error Setup 2014-03-21 11:03:45, Information Setup Microsoft.ApplicationServer.Setup.Core.SetupException: AppFabric installation failed because installer MSI returned with error code : 1603 2014-03-21 11:03:45, Information Setup at Microsoft.ApplicationServer.Setup.Installer.WindowsInstallerProxy.GenerateAndThrowSetupException(Int32 exitCode, LogEventSource logEventSource) 2014-03-21 11:03:45, Information Setup at Microsoft.ApplicationServer.Setup.Installer.WindowsInstallerProxy.Invoke(LogEventSource logEventSource, InstallMode installMode, String packageIdentity, List`1 updateList, List`1 customArguments) 2014-03-21 11:03:45, Information Setup at Microsoft.ApplicationServer.Setup.Installer.MsiInstaller.InstallSelectedFeatures() 2014-03-21 11:03:45, Information Setup at Microsoft.ApplicationServer.Setup.Installer.MsiInstaller.Install() 2014-03-21 11:03:45, Information Setup at Microsoft.ApplicationServer.Setup.Client.ProgressPage.StartAction() 2014-03-21 11:03:45, Information Setup 2014-03-21 11:03:45, Information Setup === Summary of Actions === 2014-03-21 11:03:45, Information Setup Required Windows components : Completed Successfully 2014-03-21 11:03:45, Information Setup IIS Management Console : Completed Successfully 2014-03-21 11:03:45, Information Setup Microsoft CCR and DSS Runtime 2008 R3 : Completed Successfully 2014-03-21 11:03:45, Information Setup AppFabric 1.1 for Windows Server : Failed 2014-03-21 11:03:45, Information Setup Hosting Services : Failed 2014-03-21 11:03:45, Information Setup Caching Services : Failed 2014-03-21 11:03:45, Information Setup Hosting Administration : Failed 2014-03-21 11:03:45, Information Setup Cache Administration : Failed 2014-03-21 11:03:45, Information Setup Microsoft Update : Skipped 2014-03-21 11:03:45, Information Setup Microsoft Update : Skipped 2014-03-21 11:03:45, Information Setup 2014-03-21 11:03:45, Information Setup ===== Logging stopped: 2014-03-21 11:03:45+01:00 ===== I have tried this solution but no success: http://stackoverflow.com/questions/11205927/appfabric-installation-failed-because-installer-msi-returned-with-error-code-1 My system enviroment variable PSModulesPath has this value: C:\Windows\System32\WindowsPowerShell\v1.0\Modules I have also followed this link with no success: http://jefferytay.wordpress.com/2013/12/11/installing-appfabric-on-windows-server-2012/ Any help would be greatly appreciated !

    Read the article

  • Installer Reboots at "Detecting hardware" (disks and other hardware) on all recent Server Installs

    - by Ryan Rosario
    I have a very frustrating problem with my PC. I cannot install any recent version of Ubuntu Server (or even Desktop) since 9.04 even using the text-based installer. I boot from a USB stick created by Unetbootin (I also tried other methods such as startup disk creator with no difference). On the Server installer, it gets to "Detecting Hardware" (the second one about disks and all other hardware, not network hardware) and then either hangs at 0% (waited 24 hours), or reboots after a minute or two. My system (late 2007): ASUS P5NSLI motherboard Intel Core 2 Duo E6600 2.4Ghz 2 x 1GB Corsair 667MHz RAM nVidia GeForce 6600 I have unplugged everything (including the only hard disk, CD-ROMs and floppy). I have only one stick of RAM (tried each one to no avail) and am booting the installer from a USB stick (booting from CD-ROM yields the same problem). I also tried several of the boot options (nomodeset, nousb, acpi=off, noapic, i915.modeset=1/0, xforcevesa) in all combinations) to no avail. The only active parts of my system are the video card, mouse, keyboard and USB stick. I have also updated the BIOS to the most recent version. (FWIW, on the Desktop installer, I get a black screen after hitting the Install option.) Even after removing "quiet" I am unable to see what kernel panic is occurring (or not occurring) to cause the install to crash. I am only able to save the debug logs via a simple webserver in the installer. After the last line (I repeatedly refreshed), the server stops responding and the installer hangs or reboots: Jan 2 01:04:03 main-menu[302]: INFO: Menu item 'disk-detect' selected Jan 2 01:04:04 kernel: [ 309.154372] sata_nv 0000:00:0e.0: version 3.5 Jan 2 01:04:04 kernel: [ 309.154409] sata_nv 0000:00:0e.0: Using SWNCQ mode Jan 2 01:04:04 kernel: [ 309.154531] sata_nv 0000:00:0e.0: setting latency timer to 64 Jan 2 01:04:04 kernel: [ 309.164442] scsi0 : sata_nv Jan 2 01:04:04 kernel: [ 309.167610] scsi1 : sata_nv Jan 2 01:04:04 kernel: [ 309.167762] ata1: SATA max UDMA/133 cmd 0x9f0 ctl 0xbf0 bmdma 0xd400 irq 10 Jan 2 01:04:04 kernel: [ 309.167774] ata2: SATA max UDMA/133 cmd 0x970 ctl 0xb70 bmdma 0xd408 irq 10 Jan 2 01:04:04 kernel: [ 309.167948] sata_nv 0000:00:0f.0: Using SWNCQ mode Jan 2 01:04:04 kernel: [ 309.168071] sata_nv 0000:00:0f.0: setting latency timer to 64 Jan 2 01:04:04 kernel: [ 309.171931] scsi2 : sata_nv Jan 2 01:04:04 kernel: [ 309.173793] scsi3 : sata_nv Jan 2 01:04:04 kernel: [ 309.173943] ata3: SATA max UDMA/133 cmd 0x9e0 ctl 0xbe0 bmdma 0xe800 irq 11 Jan 2 01:04:04 kernel: [ 309.173954] ata4: SATA max UDMA/133 cmd 0x960 ctl 0xb60 bmdma 0xe808 irq 11 Jan 2 01:04:04 kernel: [ 309.174061] pata_amd 0000:00:0d.0: version 0.4.1 Jan 2 01:04:04 kernel: [ 309.174160] pata_amd 0000:00:0d.0: setting latency timer to 64 Jan 2 01:04:04 kernel: [ 309.177045] scsi4 : pata_amd Jan 2 01:04:04 kernel: [ 309.178628] scsi5 : pata_amd Jan 2 01:04:04 kernel: [ 309.178801] ata5: PATA max UDMA/133 cmd 0x1f0 ctl 0x3f6 bmdma 0xf000 irq 14 Jan 2 01:04:04 kernel: [ 309.178811] ata6: PATA max UDMA/133 cmd 0x170 ctl 0x376 bmdma 0xf008 irq 15 Jan 2 01:04:04 net/hw-detect.hotplug: Detected hotpluggable network interface eth0 Jan 2 01:04:04 net/hw-detect.hotplug: Detected hotpluggable network interface lo Jan 2 01:04:04 kernel: [ 309.485062] ata3: SATA link down (SStatus 0 SControl 300) Jan 2 01:04:04 kernel: [ 309.633094] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300) Jan 2 01:04:04 kernel: [ 309.641647] ata1.00: ATA-8: ST31000528AS, CC38, max UDMA/133 Jan 2 01:04:04 kernel: 
[ 309.641658] ata1.00: 1953525168 sectors, multi 1: LBA48 NCQ (depth 31/32) Jan 2 01:04:04 kernel: [ 309.657614] ata1.00: configured for UDMA/133 Jan 2 01:04:04 kernel: [ 309.657969] scsi 0:0:0:0: Direct-Access ATA ST31000528AS CC38 PQ: 0 ANSI: 5 Jan 2 01:04:04 kernel: [ 309.658482] sd 0:0:0:0: Attached scsi generic sg0 type 0 Jan 2 01:04:04 kernel: [ 309.658588] sd 0:0:0:0: [sda] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB) Jan 2 01:04:04 kernel: [ 309.658812] sd 0:0:0:0: [sda] Write Protect is off Jan 2 01:04:04 kernel: [ 309.658823] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Jan 2 01:04:04 kernel: [ 309.658918] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 2 01:04:04 kernel: [ 309.675630] sda: sda1 sda2 Jan 2 01:04:04 kernel: [ 309.676440] sd 0:0:0:0: [sda] Attached SCSI disk Jan 2 01:04:05 kernel: [ 309.969102] ata2: SATA link down (SStatus 0 SControl 300) Jan 2 01:04:05 kernel: [ 310.281137] ata4: SATA link down (SStatus 0 SControl 300) Anybody have any additional ideas I could try? I am getting ready to just toss the motherboard.

    Read the article

  • Master Note for Generic Data Warehousing

    - by lajos.varady(at)oracle.com
    ++++++++++++++++++++++++++++++++++++++++++++++++++++ The complete and the most recent version of this article can be viewed from My Oracle Support Knowledge Section. Master Note for Generic Data Warehousing [ID 1269175.1] ++++++++++++++++++++++++++++++++++++++++++++++++++++In this Document   Purpose   Master Note for Generic Data Warehousing      Components covered      Oracle Database Data Warehousing specific documents for recent versions      Technology Network Product Homes      Master Notes available in My Oracle Support      White Papers      Technical Presentations Platforms: 1-914CU; This document is being delivered to you via Oracle Support's Rapid Visibility (RaV) process and therefore has not been subject to an independent technical review. Applies to: Oracle Server - Enterprise Edition - Version: 9.2.0.1 to 11.2.0.2 - Release: 9.2 to 11.2Information in this document applies to any platform. Purpose Provide navigation path Master Note for Generic Data Warehousing Components covered Read Only Materialized ViewsQuery RewriteDatabase Object PartitioningParallel Execution and Parallel QueryDatabase CompressionTransportable TablespacesOracle Online Analytical Processing (OLAP)Oracle Data MiningOracle Database Data Warehousing specific documents for recent versions 11g Release 2 (11.2)11g Release 1 (11.1)10g Release 2 (10.2)10g Release 1 (10.1)9i Release 2 (9.2)9i Release 1 (9.0)Technology Network Product HomesOracle Partitioning Advanced CompressionOracle Data MiningOracle OLAPMaster Notes available in My Oracle SupportThese technical articles have been written by Oracle Support Engineers to provide proactive and top level information and knowledge about the components of thedatabase we handle under the "Database Datawarehousing".Note 1166564.1 Master Note: Transportable Tablespaces (TTS) -- Common Questions and IssuesNote 1087507.1 Master Note for MVIEW 'ORA-' error diagnosis. 
For Materialized View CREATE or REFRESHNote 1102801.1 Master Note: How to Get a 10046 trace for a Parallel QueryNote 1097154.1 Master Note Parallel Execution Wait Events Note 1107593.1 Master Note for the Oracle OLAP OptionNote 1087643.1 Master Note for Oracle Data MiningNote 1215173.1 Master Note for Query RewriteNote 1223705.1 Master Note for OLTP Compression Note 1269175.1 Master Note for Generic Data WarehousingWhite Papers Transportable Tablespaces white papers Database Upgrade Using Transportable Tablespaces:Oracle Database 11g Release 1 (February 2009) Platform Migration Using Transportable Database Oracle Database 11g and 10g Release 2 (August 2008) Database Upgrade using Transportable Tablespaces: Oracle Database 10g Release 2 (April 2007) Platform Migration using Transportable Tablespaces: Oracle Database 10g Release 2 (April 2007)Parallel Execution and Parallel Query white papers Best Practices for Workload Management of a Data Warehouse on the Sun Oracle Database Machine (June 2010) Effective resource utilization by In-Memory Parallel Execution in Oracle Real Application Clusters 11g Release 2 (Feb 2010) Parallel Execution Fundamentals in Oracle Database 11g Release 2 (November 2009) Parallel Execution with Oracle Database 10g Release 2 (June 2005)Oracle Data Mining white paper Oracle Data Mining 11g Release 2 (March 2010)Partitioning white papers Partitioning with Oracle Database 11g Release 2 (September 2009) Partitioning in Oracle Database 11g (June 2007)Materialized Views and Query Rewrite white papers Oracle Materialized Views  and Query Rewrite (May 2005) Improving Performance using Query Rewrite in Oracle Database 10g (December 2003)Database Compression white papers Advanced Compression with Oracle Database 11g Release 2 (September 2009) Table Compression in Oracle Database 10g Release 2 (May 2005)Oracle OLAP white papers On-line Analytic Processing with Oracle Database 11g Release 2 (September 2009) Using Oracle Business Intelligence Enterprise Edition with the OLAP Option to Oracle Database 11g (July 2008)Generic white papers Enabling Pervasive BI through a Practical Data Warehouse Reference Architecture (February 2010) Optimizing and Protecting Storage with Oracle Database 11g Release 2 (November 2009) Oracle Database 11g for Data Warehousing and Business Intelligence (August 2009) Best practices for a Data Warehouse on Oracle Database 11g (September 2008)Technical PresentationsA selection of ObE - Oracle by Examples documents: Generic Using Basic Database Functionality for Data Warehousing (10g) Partitioning Manipulating Partitions in Oracle Database (11g Release 1) Using High-Speed Data Loading and Rolling Window Operations with Partitioning (11g Release 1) Using Partitioned Outer Join to Fill Gaps in Sparse Data (10g) Materialized View and Query Rewrite Using Materialized Views and Query Rewrite Capabilities (10g) Using the SQLAccess Advisor to Recommend Materialized Views and Indexes (10g) Oracle OLAP Using Microsoft Excel With Oracle 11g Cubes (how to analyze data in Oracle OLAP Cubes using Excel's native capabilities) Using Oracle OLAP 11g With Oracle BI Enterprise Edition (Creating OBIEE Metadata for OLAP 11g Cubes and querying those in BI Answers) Building OLAP 11g Cubes Querying OLAP 11g Cubes Creating Interactive APEX Reports Over OLAP 11g CubesSelection of presentations from the BIWA website:Extreme Data Warehousing With Exadata  by Hermann Baer (July 2010) (slides 2.5MB, recording 54MB)Data Mining Made Easy! 
Introducing Oracle Data Miner 11g Release 2 New "Work flow" GUI   by Charlie Berger (May 2010) (slides 4.8MB, recording 85MB )Best Practices for Deploying a Data Warehouse on Oracle Database 11g  by Maria Colgan (December 2009)  (slides 3MB, recording 18MB, white paper 3MB )

    Read the article

  • Installing Exchange 2013 CU1

    - by marc dekeyser
    Originally posted on: http://geekswithblogs.net/marcde/archive/2013/08/01/installing-exchange-2013-cu1.aspxBefore you begin Download the following software: · UCMA 4.0: http://www.microsoft.com/en-us/download/details.aspx?id=34992 · Office 2010 filter packs 64 bit: http://www.microsoft.com/en-us/download/details.aspx?id=17062 · Office 2010 filter packs SP1 64 bit: http://www.microsoft.com/en-us/download/details.aspx?id=26604 Prerequisite installation Step 1 : Open Windows Powershell     Step 2: Enter following string to start prerequisite installation for a multirole server – Install-WindowsFeature AS-HTTP-Activation, Desktop-Experience, NET-Framework-45-Features, RPC-over-HTTP-proxy, RSAT-Clustering, RSAT-Clustering-CmdInterface, RSAT-Clustering-Mgmt, RSAT-Clustering-PowerShell, Web-Mgmt-Console, WAS-Process-Model, Web-Asp-Net45, Web-Basic-Auth, Web-Client-Auth, Web-Digest-Auth, Web-Dir-Browsing, Web-Dyn-Compression, Web-Http-Errors, Web-Http-Logging, Web-Http-Redirect, Web-Http-Tracing, Web-ISAPI-Ext, Web-ISAPI-Filter, Web-Lgcy-Mgmt-Console, Web-Metabase, Web-Mgmt-Console, Web-Mgmt-Service, Web-Net-Ext45, Web-Request-Monitor, Web-Server, Web-Stat-Compression, Web-Static-Content, Web-Windows-Auth, Web-WMI, Windows-Identity-Foundation   Step 3: restart the server   Shutdown.exe /r /t 60     Step 4: Install the UCMA Runtime Setup Navigate to the folder holding the prerequisite downloads and double click the “UCMARunTimeSetup”     Step 5: Accept the Run prompt     Step 6: Click the left click on "Next (button)" in "Microsoft Unified Communications Managed API 4.0, Runtime Setup"     Step 7: Left click on "I have read and accept the license terms. (check box)" in "Microsoft Unified Communications Managed API 4.0, Runtime Setup"     Step 8: Left click on "Install (button)" in "Microsoft Unified Communications Managed API 4.0, Runtime Setup"     Step 9: Left click on "Finish (button)" in "Microsoft Unified Communications Managed API 4.0, Runtime Setup"     Step 10: Start the Office 2010 filter pack installation     Step 11: Left click on "Run (button)" in "Open File - Security Warning"     Step 12: Left click on "Microsoft Filter Pack 2.0 (button)" as it hides in the background by default.     Step 13: Left click on "Next (button)" in "Microsoft Filter Pack 2.0"     Step 14: Left click on "I accept the terms in the License Agreement (check box)" in "Microsoft Filter Pack 2.0"     Step 15: Left click on "Next (button)" in "Microsoft Filter Pack 2.0"     Step 16: Left click on "OK (button)" in "Microsoft Filter Pack 2.0"     Step 17: Start the installation of the Office 2010 Filterpack SP1.     Step 18: Left click on "Run (button)" in "Open File - Security Warning"     Step 19: Left click on "Click here to accept the Microsoft Software License Terms. (check box)" in "Microsoft Office 2010 Filter Pack Service Pack 1 (SP1)"     Step 20: Left click on "Continue (button)" in "Microsoft Office 2010 Filter Pack Service Pack 1 (SP1)"     Step 21: (?21/?06/?2013 11:23:25) User left click on "OK (button)" in "Microsoft Office 2010 Filter Pack Service Pack 1 (SP1)"     Step 22: Left click on "Windows PowerShell (button)"     Step 23: restart the server. 
Shutdown.exe /r /t 60   Step 24: Left click on "Close (button)" in "You're about to be signed off"     Installing Exchange server 2013 Step 1: Navigate to the Exchange 2013 CU1 extracted location and run setup.exe Left click on "next (button)" in "Exchange Server Setup" Step 2: Left click on "next (button)" in "Exchange Server Setup" Step 3: Left click on "Exchange Server Setup (window)" in "Exchange Server Setup" Step 4: Left click on "Exchange Server Setup (window)" in "Exchange Server Setup" a Step 5: User left click on "next (button)" in "Exchange Server Setup" Step 6: Left click on "I accept the terms in the license agreement" in "Exchange Server Setup" Step 7: Left click on "next (button)" in "Exchange Server Setup" Step 8: Left click on "next (button)" in "Exchange Server Setup" Step 9: Select "Mailbox role” in "Exchange Server Setup" Step 10: Select "Client Access role" in "Exchange Server Setup" Step 11: Left click on "next (button)" in "Exchange Server Setup" Step 12: Left click on "next (button)" in "Exchange Server Setup" Step 13: Choose the installation path and left click on "next (button)" in "Exchange Server Setup" Step 14: Leave malware scanning on by making sure the radio button is on “No”and left click on "Exchange Server Setup (window)" in "Exchange Server Setup"                   Step 15: Left click on "finish (button)" in "Exchange Server Setup" Step 16: Restart the server. Shutdown.exe /r /t 60

    Read the article

  • IPS Facets and Info files

    - by mkupfer
    One of the unusual things about IPS is its "facet" feature. For example, if you're a developer using the foo library, you don't install a libfoo-dev package to get the header files. Intead, you install the libfoo package, and your facet.devel setting controls whether you get header files. I was reminded of this recently when I tried to look at some documentation for Emacs Org mode. I was surprised when Emacs's Info browser said it couldn't find the top-level Info directory. I poked around in /usr/share but couldn't find any info files. $ ls -l /usr/share/info ls: cannot access /usr/share/info: No such file or directory Was I was missing a package? $ pkg list -a | egrep "info|emacs" editor/gnu-emacs 23.1-0.175.0.0.0.2.537 i-- editor/gnu-emacs/gnu-emacs-gtk 23.1-0.175.0.0.0.2.537 i-- editor/gnu-emacs/gnu-emacs-lisp 23.1-0.175.0.0.0.2.537 --- editor/gnu-emacs/gnu-emacs-no-x11 23.1-0.175.0.0.0.2.537 --- editor/gnu-emacs/gnu-emacs-x11 23.1-0.175.0.0.0.2.537 i-- system/data/terminfo 0.5.11-0.175.0.0.0.2.1 i-- system/data/terminfo/terminfo-core 0.5.11-0.175.0.0.0.2.1 i-- text/texinfo 4.7-0.175.0.0.0.2.537 i-- x11/diagnostic/x11-info-clients 7.6-0.175.0.0.0.0.1215 i-- $ Hmm. I didn't have the gnu-emacs-lisp package. That seemed an unlikely place to stick the Info files, and pkg(1) confirmed that the info files were not there: $ pkg contents -r gnu-emacs-lisp | grep info usr/share/emacs/23.1/lisp/info-look.el.gz usr/share/emacs/23.1/lisp/info-xref.el.gz usr/share/emacs/23.1/lisp/info.el.gz usr/share/emacs/23.1/lisp/informat.el.gz usr/share/emacs/23.1/lisp/org/org-info.el.gz usr/share/emacs/23.1/lisp/org/org-jsinfo.el.gz usr/share/emacs/23.1/lisp/pcvs-info.el.gz usr/share/emacs/23.1/lisp/textmodes/makeinfo.el.gz usr/share/emacs/23.1/lisp/textmodes/texinfo.el.gz $ Well, if I have what look like the right packages but don't have the right files, the next thing to check are the facets. The first check is whether there is a facet associated with the Info files: $ pkg contents -m gnu-emacs | grep usr/share/info dir facet.doc.info=true group=bin mode=0755 owner=root path=usr/share/info file [...] chash=[...] facet.doc.info=true group=bin mode=0444 owner=root path=usr/share/info/mh-e-1 [...] file [...] chash=[...] facet.doc.info=true group=bin mode=0444 owner=root path=usr/share/info/mh-e-2 [...] [...] Yes, they're associated with facet.doc.info. Now let's look at the facet settings on my desktop: $ pkg facet FACETS VALUE facet.locale.en* True facet.locale* False facet.doc.man True facet.doc* False $ Oops. I've got man pages and various English documentation files, but not the Info files. Let's fix that: # pkg change-facet facet.doc.info=True Packages to update: 970 Variants/Facets to change: 1 Create boot environment: No Create backup boot environment: Yes Services to change: 1 DOWNLOAD PKGS FILES XFER (MB) Completed 970/970 181/181 9.2/9.2 PHASE ACTIONS Install Phase 226/226 PHASE ITEMS Image State Update Phase 2/2 PHASE ITEMS Reading Existing Index 8/8 Indexing Packages 970/970 # Now we have the info files: $ ls -F /usr/share/info a2ps.info dir@ flex.info groff-2 regex.info aalib.info dired-x flex.info-1 groff-3 remember ...

    Read the article

  • List of all states from COMPOSITE_INSTANCE, CUBE_INSTANCE, DLV_MESSAGE tables

    - by Deepak Arora
    In many of my engagements I get asked repeatedly about the states of composites in 11g and how to decipher them, especially when we are troubleshooting issues around purging. I have compiled a list of all the states from the COMPOSITE_INSTANCE, CUBE_INSTANCE, DLV_MESSAGE and MEDIATOR_INSTANCE tables. These are the primary tables that are used when working with BPEL composites and how they are used with the ECID.

    Composite State Values

    COMPOSITE_INSTANCE states (State - Description):
    0 - Running
    1 - Completed
    2 - Running with faults
    3 - Completed with faults
    4 - Running with recovery required
    5 - Completed with recovery required
    6 - Running with faults and recovery required
    7 - Completed with faults and recovery required
    8 - Running with suspended
    9 - Completed with suspended
    10 - Running with faults and suspended
    11 - Completed with faults and suspended
    12 - Running with recovery required and suspended
    13 - Completed with recovery required and suspended
    14 - Running with faults, recovery required, and suspended
    15 - Completed with faults, recovery required, and suspended
    16 - Running with terminated
    17 - Completed with terminated
    18 - Running with faults and terminated
    19 - Completed with faults and terminated
    20 - Running with recovery required and terminated
    21 - Completed with recovery required and terminated
    22 - Running with faults, recovery required, and terminated
    23 - Completed with faults, recovery required, and terminated
    24 - Running with suspended and terminated
    25 - Completed with suspended and terminated
    26 - Running with faulted, suspended, and terminated
    27 - Completed with faulted, suspended, and terminated
    28 - Running with recovery required, suspended, and terminated
    29 - Completed with recovery required, suspended, and terminated
    30 - Running with faulted, recovery required, suspended, and terminated
    31 - Completed with faulted, recovery required, suspended, and terminated
    32 - Unknown
    64 -

    Any value in the range of 32 to 63 indicates that the composite instance state has not been enabled, but the instance state is updated for faults, aborts, etc.
    CUBE_INSTANCE states (State - Description):
    0 - STATE_INITIATED
    1 - STATE_OPEN_RUNNING
    2 - STATE_OPEN_SUSPENDED
    3 - STATE_OPEN_FAULTED
    4 - STATE_CLOSED_PENDING_CANCEL
    5 - STATE_CLOSED_COMPLETED
    6 - STATE_CLOSED_FAULTED
    7 - STATE_CLOSED_CANCELLED
    8 - STATE_CLOSED_ABORTED
    9 - STATE_CLOSED_STALE
    10 - STATE_CLOSED_ROLLED_BACK

    DLV_MESSAGE states (State - Description):
    0 - STATE_UNRESOLVED
    1 - STATE_RESOLVED
    2 - STATE_HANDLED
    3 - STATE_CANCELLED
    4 - STATE_MAX_RECOVERED

    Since the Invoke_Messages table no longer exists in 11g, the DLV_TYPE column is what distinguishes a new message (Invoke) from a callback (DLV):

    DLV_TYPE states (State - Description):
    1 - Invoke Message
    2 - DLV Message

    MEDIATOR_INSTANCE states (State - Description):
    0 - No faults but there still might be running instances
    1 - At least one case is aborted by user
    2 - At least one case is faulted (non-recoverable)
    3 - At least one case is faulted and one case is aborted
    4 - At least one case is in recovery required state
    5 - At least one case is in recovery required state and at least one is aborted
    6 - At least one case is in recovery required state and at least one is faulted
    7 - At least one case is in recovery required state, one faulted and one aborted
    >=8 and <16 - Running
    >=16 - Stale

    In my next blog posting I will walk through the lifecycle of a BPEL process using the above states for the following use cases: a new BPEL process (initial Receive activity) and a callback BPEL process (mid-level Receive activity). As always, comments and questions welcome! Deepak

    Read the article

  • Get Oracle Linux Certified at Much Reduced Price

    - by Antoinette O'Sullivan
    You have already heard the great news that you can now prove your knowledge on Oracle Linux 5 and 6 with the new Oracle Certified Associate, Oracle Linux 5 and 6 System Administrator exam. Until December 21th 2013, this exam is in beta phase so you can get a fully-fledged certification at a much reduced price; for example $50 in the United States or 39 euros in the euro zone. Establishing What You Need to Know Your first step is to click on the Exam Topics tab on the certification page. You will see a list of topics that you will be tested on during the certification exam. These are the areas that you need to improve your knowledge on, if you are not already expert. Registering For a Certification Exam On the certification page, click on Register for this Exam. The Pearson VUE site guides you through signing up for an event at a date and location to suit you. Preparing to Take an Exam On the certification page, click on the Exam Preparation tab. This indicates the recommended training that can help you prepare to sit the exam. The recommended training for this certification is the Oracle Linux System Administration course. You can take this very popular 5-day live instructor-led course as a: Live Virtual Event: Take the training from your own desk, no travel required. Choose from a selection of events already on the schedule to suit different timezones. In-Class: Travel to an education center to take this class. Below is a selection of events already on the schedule.  Location  Date  Delivery Language  Brussels, Belgium  18 November 2013  English  London, England  16 December 2013  English   Manchester, England  27 January 2014  English  Reading, England  12 May 2014  English  Milan, Italy  31 March 2014  Italian   Rome, Italy  10 February 2014  Italian  Utrecht, Netherlands  18 November 2013  Dutch Warsaw, Poland   9 December 2013  Polish  Bucharest, Romania  20 January 2014  Romanian  Ankara, Turkey  12 January 2014  Turkish  Istanbul, Turkey  16 December 2013  Turkish  Panjim, India  4 November 2013  English  Jakarta, Indonesia  9 December 2013  English  Kuala Lumpur, Malaysia  25 November 2013  English  Makati City, Philippines  11 November 2013  English  Singapore  25 November 2013  English  Bangkok, Thailand  11 November 2013  English  Casablanca, Morocco  16 December 2013  English  Muscat, Oman  2 March 2014  English  Johannesburg, South Africa  17 February 2014  English  Tunis, Tunisia  31 March 2014  French  Canberra, Australia 25 November 2013   English  Melbourne, Australia  19 May 2014  English  Sydney, Australia  20 January 2014  English  Mississauga, Canada  24 February 2014  English Ottawa, Canada   28 April 2014  English  Belmont, CA, United States  10 February 2014  English  Irvine, CA, United States  12 May 2014  English  San Francisco, CA, United States  18 November 2013  English  Chicago, IL, United States  14 April 2014  English  Cambridge, MA, United States  18 November 2013  English  Roseville, MA, United States  2 December 2013  English  Edison, NJ, United States  10 March 2014  English   Pittsburg, PA, United States  9 December 2013  English   Reston, VA, United States 13 January 2014   English For more information on the Oracle Linux curriculum, go to http://oracle.com/education/linux.

    Read the article

  • Keepin’ It Simple with StorageTek SL150

    - by Kristin Rose
    Are your customers’ archive and data protection environments getting out of hand? Are they looking for a little simplicity in their lives? How about some scalability? Or are they looking for a way to save on capital and operational expenses? If you answered yes to any of these, then Oracle's new StorageTek SL150 Modular Tape Library is the product for you. It beats the competition in terms of simplicity, scalability and savings, and provides some seriously wallet-friendly revenue opportunities for you. If the long-term service annuities on the SL150 aren’t convincing enough, then the resale margins, rebates and follow-on revenue from modular upgrades will be! The SL150 simplifies StorageTek’s tape portfolio by replacing three products with one scalable solution that provides an entry point for repeat business within accounts. The SL150 expands your potential storage customer base to smaller companies with low cost, simple upgrades and streamlined management that help alleviate key customer pain points. With the SL150, your customers will be able to simplify growth of their archive and data protection environments with small entry configurations and 10x growth, something that would require multiple box swaps across up to three product categories with competitive products. With the SL150, Oracle can help you provide greater customer satisfaction with Simplicity, Scalability and Savings! We know you’re probably wondering how you can get started and sell this new and magnificent product… Well, look no further, because the only thing you need to do is complete the SL150 Guided Learning Paths (GLPs). For some extra insight, watch the video below on the new StorageTek SL150 modular tape library, and don’t forget to ‘tweet’ this post and share it on Facebook to spread the good news! Wishing you Simplicity, Scalability and Savings, The OPN Communications Team

    Read the article

  • Am I right about the differences between Floyd-Warshall, Dijkstra's and Bellman-Ford algorithms?

    - by Programming Noob
    I've been studying the three and I'm stating my inferences from them below. Could someone tell me if I have understood them accurately enough or not? Thank you.
    1. Dijkstra's algorithm is used only when you have a single source and you want to know the smallest path from one node to another, but it fails in cases such as graphs with negative edge weights.
    2. Floyd-Warshall's algorithm is used when any of the nodes can be a source, so you want the shortest distance to reach any destination node from any source node. This only fails when there are negative cycles (this is the most important one. I mean, this is the one I'm least sure about).
    3. Bellman-Ford is used like Dijkstra's, when there is only one source. It can handle negative weights and it works the same way as Floyd-Warshall's, except for one source, right?
    If you need to have a look, the corresponding algorithms are (courtesy Wikipedia):
    Bellman-Ford:
    procedure BellmanFord(list vertices, list edges, vertex source)
        // This implementation takes in a graph, represented as lists of vertices
        // and edges, and modifies the vertices so that their distance and
        // predecessor attributes store the shortest paths.

        // Step 1: initialize graph
        for each vertex v in vertices:
            if v is source then v.distance := 0
            else v.distance := infinity
            v.predecessor := null

        // Step 2: relax edges repeatedly
        for i from 1 to size(vertices)-1:
            for each edge uv in edges:    // uv is the edge from u to v
                u := uv.source
                v := uv.destination
                if u.distance + uv.weight < v.distance:
                    v.distance := u.distance + uv.weight
                    v.predecessor := u

        // Step 3: check for negative-weight cycles
        for each edge uv in edges:
            u := uv.source
            v := uv.destination
            if u.distance + uv.weight < v.distance:
                error "Graph contains a negative-weight cycle"
    Dijkstra:
    function Dijkstra(Graph, source):
        for each vertex v in Graph:            // Initializations
            dist[v] := infinity                // Unknown distance function from source to v
            previous[v] := undefined           // Previous node in optimal path from source

        dist[source] := 0                      // Distance from source to source
        Q := the set of all nodes in Graph     // All nodes in the graph are unoptimized - thus are in Q

        while Q is not empty:                  // The main loop
            u := vertex in Q with smallest distance in dist[]   // Start node in first case
            if dist[u] = infinity:
                break                          // all remaining vertices are inaccessible from source

            remove u from Q
            for each neighbor v of u:          // where v has not yet been removed from Q
                alt := dist[u] + dist_between(u, v)
                if alt < dist[v]:              // Relax (u,v,a)
                    dist[v] := alt
                    previous[v] := u
                    decrease-key v in Q        // Reorder v in the Queue
        return dist
    Floyd-Warshall:
    /* Assume a function edgeCost(i,j) which returns the cost of the edge from i to j
       (infinity if there is none).
       Also assume that n is the number of vertices and edgeCost(i,i) = 0 */

    int path[][];
    /* A 2-dimensional matrix. At each step in the algorithm, path[i][j] is the shortest path
       from i to j using intermediate vertices (1..k-1). Each path[i][j] is initialized to
       edgeCost(i,j). */

    procedure FloydWarshall ()
        for k := 1 to n
            for i := 1 to n
                for j := 1 to n
                    path[i][j] = min ( path[i][j], path[i][k]+path[k][j] );
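    To make point 2 concrete, here is a small, self-contained C# sketch (my own illustration, not part of the original question) that runs Floyd-Warshall on a four-vertex graph with a negative edge but no negative cycle. The graph, weights and names are assumptions chosen purely for the example; the point is that negative edges are handled fine, while a negative cycle would reveal itself afterwards as a negative entry on the diagonal (dist[i, i] < 0), which is exactly the situation where the all-pairs result can no longer be trusted.
    using System;

    class FloydWarshallDemo
    {
        const double INF = double.PositiveInfinity;

        // Returns the all-pairs shortest-path matrix; a value dist[i, i] < 0 afterwards
        // signals that vertex i lies on a negative cycle.
        static double[,] FloydWarshall(double[,] w)
        {
            int n = w.GetLength(0);
            var dist = (double[,])w.Clone();

            for (int k = 0; k < n; k++)
                for (int i = 0; i < n; i++)
                    for (int j = 0; j < n; j++)
                        if (dist[i, k] + dist[k, j] < dist[i, j])
                            dist[i, j] = dist[i, k] + dist[k, j];

            return dist;
        }

        static void Main()
        {
            // Adjacency matrix: 4 vertices, one negative edge (2 -> 3 with weight -2),
            // but no negative cycle.
            double[,] w =
            {
                { 0,   3,   INF, 7   },
                { 8,   0,   2,   INF },
                { 5,   INF, 0,  -2   },
                { 2,   INF, INF, 0   }
            };

            var dist = FloydWarshall(w);

            // Print the all-pairs shortest distances.
            for (int i = 0; i < 4; i++)
            {
                for (int j = 0; j < 4; j++)
                    Console.Write($"{dist[i, j],6}");
                Console.WriteLine();
            }

            // Negative-cycle check: a vertex whose self-distance dropped below zero
            // sits on a negative cycle, and paths through it are meaningless.
            for (int i = 0; i < 4; i++)
                if (dist[i, i] < 0)
                    Console.WriteLine($"Negative cycle through vertex {i}");
        }
    }
    Running it prints the distance matrix with no warning; change the 3 -> 0 edge weight from 2 to -4 and the cycle 2 -> 3 -> 0 -> 1 -> 2 sums to -1, so the diagonal check fires, which is the case the asker is worried about.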

    Read the article

  • ASP.NET MVC CRUD Validation

    - by Ricardo Peres
    One thing I didn't cover in my previous post on ASP.NET MVC CRUD with AJAX was how to return model validation information to the client. We want to send any model validation errors to the client in the JSON object that contains the ProductId, RowVersion and Success properties; specifically, if there are any errors, we will add an extra Errors collection property. Here's how:
    [HttpPost]
    [AjaxOnly]
    [Authorize]
    public JsonResult Edit(Product product)
    {
        if (this.ModelState.IsValid == true)
        {
            using (ProductContext ctx = new ProductContext())
            {
                Boolean success = false;

                ctx.Entry(product).State = (product.ProductId == 0) ? EntityState.Added : EntityState.Modified;

                try
                {
                    success = (ctx.SaveChanges() == 1);
                }
                catch (DbUpdateConcurrencyException)
                {
                    ctx.Entry(product).Reload();
                }

                return (this.Json(new { Success = success, ProductId = product.ProductId, RowVersion = Convert.ToBase64String(product.RowVersion) }));
            }
        }
        else
        {
            Dictionary<String, String> errors = new Dictionary<String, String>();

            foreach (KeyValuePair<String, ModelState> keyValue in this.ModelState)
            {
                String key = keyValue.Key;
                ModelState modelState = keyValue.Value;

                foreach (ModelError error in modelState.Errors)
                {
                    errors[key] = error.ErrorMessage;
                }
            }

            return (this.Json(new { Success = false, ProductId = 0, RowVersion = String.Empty, Errors = errors }));
        }
    }
    As for the view, we need to slightly change the onSuccess JavaScript handler on the Single view:
    function onSuccess(ctx)
    {
        if (typeof (ctx.Success) != 'undefined')
        {
            $('input#ProductId').val(ctx.ProductId);
            $('input#RowVersion').val(ctx.RowVersion);

            if (ctx.Success == false)
            {
                var errors = '';

                if (typeof (ctx.Errors) != 'undefined')
                {
                    for (var key in ctx.Errors)
                    {
                        errors += key + ': ' + ctx.Errors[key] + '\n';
                    }

                    window.alert('An error occurred while updating the entity: the model contained the following errors.\n\n' + errors);
                }
                else
                {
                    window.alert('An error occurred while updating the entity: it may have been modified by third parties. Please try again.');
                }
            }
            else
            {
                window.alert('Saved successfully');
            }
        }
        else
        {
            if (window.confirm('Not logged in. Login now?') == true)
            {
                document.location.href = '<%: FormsAuthentication.LoginUrl %>?ReturnURL=' + document.location.pathname;
            }
        }
    }
    The logic is as follows:
    If the Edit action method is called for a new entity (the ProductId is 0) and it is valid, the entity is saved, and the JSON result contains a Success flag set to true, a ProductId property with the database-generated primary key and a RowVersion with the server-generated ROWVERSION;
    If the model is not valid, the JSON result will contain the Success flag set to false and the Errors collection populated with all the model validation errors;
    If the entity already exists in the database (ProductId not 0) and the model is valid, but the stored ROWVERSION is different from the one on the view, the result will set the Success property to false and will return the current (as loaded from the database) value of the ROWVERSION in the RowVersion property.
    In a future post I will talk about the possibilities that exist for performing model validation, stay tuned!
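    One thing the post takes for granted is where the model validation errors come from: this.ModelState.IsValid is driven by whatever rules are declared on the Product class, which the post does not show. As a rough, hypothetical sketch only (the property names, rules and messages below are my assumptions, not the author's actual model), the data annotations might look something like this:
    using System;
    using System.ComponentModel.DataAnnotations;

    public class Product
    {
        public Int32 ProductId { get; set; }

        // Required and length rules: if these fail, ModelState.IsValid is false and
        // the Edit action returns the Errors dictionary shown above instead of saving.
        [Required(ErrorMessage = "The product name is mandatory")]
        [StringLength(50, ErrorMessage = "The product name cannot exceed 50 characters")]
        public String Name { get; set; }

        [Range(0, 9999.99, ErrorMessage = "The price must be between 0 and 9999.99")]
        public Decimal Price { get; set; }

        // Concurrency token used by the RowVersion / DbUpdateConcurrencyException logic.
        [Timestamp]
        public Byte[] RowVersion { get; set; }
    }
    With attributes like these in place, a request that fails validation never reaches SaveChanges: the action drops into the else branch, and the Errors dictionary it builds is what ends up in the alert produced by onSuccess.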

    Read the article

  • I spy a Live Framework portal

    - by jamiet
    Those that have followed my blogs for a while may know that I have a slightly banal interest in Windows Live and, more specifically, the Live Services developer platform; if that doesn't sound interesting to you then stop reading now. My interest mainly stems from the Live Mesh technology that was announced a couple of years ago and the data synchronisation platform API that underpins it; that platform is called the Live Framework, or LiveFX for short. At the Professional Developers Conference (PDC) 2008 Microsoft made LiveFX available to the public as a Tech Preview and I spent some time learning to use it and also built a few test apps on it too. In August 2009 an announcement came that that tech preview was getting shut down: "At the Professional Developer Conference 2008, we gave the developer community access to the technical preview of the Live Framework. The Live Framework is core to our vision of providing you with a consistent programming interface. Now we are working to integrate existing services, controls and the Live Framework into the next release of Windows Live. Your feedback continues to help us build the best possible offerings for Windows Live users, for you and for your customers." Since then news on LiveFX has disappeared, save for a throwaway session at PDC09, and I was hoping that news was going to appear at this week's MIX conference but nothing was forthcoming. Instead, today I stumbled upon an unannounced portal for future LiveFX applications on Microsoft's Azure portal at http://live.azure.com. Check it out: I consider this to be very good news. This Azure portal was built after the LiveFX tech preview was decommissioned, so seeing Live Services exist so prominently alongside Microsoft's other cloud efforts like Windows Azure and SQL Azure vindicates my early investment in the platform and gives me hope that we're going to see something get released very very soon. I believe that the potential uses for this platform are extremely compelling and I'm looking forward to trying some out in the near future. I am also expecting LiveFX to have a heavy dependency on the OData protocol that I talked about yesterday in my post OData.org updated - gives clues about future sql azure enhancements, so you can tell where my interest in that stems from. In case you're wondering, the projects that you see listed above (Basic List Sample, JT-proj etc…) are projects that I built on the old Tech Preview platform, so clearly that stuff has not gone for good, which is also good news; not just because it means I'll have access to the code I wrote before, but I also assume it means that LiveFX won't have changed much since its tech preview incarnation. I know there are other LiveFX buffs out there and hopefully this news reaches some of them. If you are one of them then please put a comment below and let me know your thoughts! @Jamiet

    Read the article

  • SQLSaturday 33 Observations

    - by Geoff N. Hiten
    Along with a lot of my colleagues, I went to SQLSaturday #33 in Charlotte this last weekend.  Overall a really good event, especially for a first-time organizer.  There is some controversy over certain events where my name got mentioned, so I thought I would clear the air. Before I get to the core controversy, let's get the details out of the way.  The Microsoft offices in Charlotte were an excellent venue for this event.  I really appreciated the Microsoft employees that helped out by letting us in and out of normally secure areas.  This is definitely above and beyond on their part. Thanks to the organizers (especially Greg and Peter) for the great hospitality they showed to the speakers.  Now for the specifics.  Like most events of this type, there was a raffle at the end for some cool swag.  As a speaker I got raffle tickets just like any other attendee.  The raffle was clearly promoted as "must be present to win".  The problem is that, for various reasons, the raffle kicked off immediately after the last speaker finished in the largest room.  That room was across the parking lot from all the other rooms for the event.  I happened to have one of the last sessions of the day, and not in the main room.  I also ran long since the audience was very interactive and there were a lot of follow-up questions.  (BTW, thanks to everyone who came and stayed for my session.  Sorry it cost you the chance to win too.)  My name was drawn for a very nice piece of swag (iPod Touch if you insist).  Since I wasn't there, I didn't win. Several folks mentioned I was still speaking and was "here" (as in at the event), just not "here in the room". Yes, I was mad when I found out about it. I think that was handled poorly.  I personally lost out, as did my audience (dunno if anyone specific lost anything, but it is the idea that counts).  It was a mistake. Mistakes happen.  Nobody acted maliciously.  Heck, the guys running the event who made the decision are my friends and remain so.  I got over my mad.  We talked about this privately and we are all OK with what happened.  I am not going to let a gadget get in the way of a couple of good friendships. I think the mistake was mostly due to a lack of unity between the venue buildings.  Pam Shaw had a similar challenge in Tampa a few weeks ago, including a speaker who ran long on the last session (not me that time).  She had a couple of teenage volunteers to act as gofers/runners.  They counted heads in sessions, pointed people to last-minute room and session changes, and generally helped connect the organizers to what was actually happening.  Note that this was not Pam's first SQLSaturday event.  She knew, but the knowledge had not been institutionalized.  We (the SQL community in general and SQLSaturday organizers in particular) now know how essential gofers are to success. I know I spent most of this post focusing on the controversy, but I wanted to clear everything up.  I don't want to let a minor mistake, made in good faith, overshadow what was a tremendously good event for the community. As for the iPod Touch, someone in the SQL community is enjoying it, so it is not a total loss.  And if losing out on it is the price I pay so we can learn this, then that is what a community leader does.  Consider it a gift.  Besides, I really wanted a Zune 120 :)

    Read the article

  • Some thoughts on email hosting for one’s own domain

    - by jamiet
    I have used the same email providers for my own domains for a few years now; however, I am considering moving over to a new provider. In this post I'll share my current thoughts and hopefully I'll get some feedback that might help me decide what to do next.
    What I use today
    I have three email addresses that I use primarily (I have changed the domains in this blog post as I don't want to give them away to spammers):
    [email protected] – My personal account that I give out to family and friends and which I use to register on websites
    [email protected] – An account that I use to catch email from the numerous mailing lists that I am on
    [email protected] – I am a self-employed consultant, so this is an account that I hand out to my clients, my accountant, and other work-related organisations
    Those two domains (jtpersonaldomain.com & jtworkdomain.com) are both managed at http://domains.live.com, which is a fantastic service provided by Microsoft that, for some perplexing reason, they never bother telling anyone about. It offers multiple accounts (I have seven at jtpersonaldomain.com, though as already stated I only use two of them) which are accessed via Outlook.com (formerly Hotmail.com), along with usage reporting plus a few other odds and sods that I never use. Best of all, it's totally free. In addition, given that I have both domains hosted using http://domains.live.com, I can link my various accounts together and switch between them at Outlook.com without having to log in and out: N.B. You'll notice that there are two other accounts listed there in addition to the three I already mentioned. One is my mum's account, which helps me provide IT support/spam filtering services to her, and the other is the donation account for AdventureWorks on Azure. I find that linking feature to be very handy indeed. Finally, http://domains.live.com is the epitome of "it just works". I set up jtworkdomain.com at http://domains.live.com over three years ago and I am pretty certain I haven't been back there even once to administer it.
    Proposed changes
    OK, so if I like http://domains.live.com so much, why am I considering changing? Well, I earn my corn in the Microsoft ecosystem and if I'm reading the tea-leaves correctly it's looking increasingly likely that the services that I'm going to have to be familiar with in the future are all going to be running on top of and alongside Windows Azure Active Directory and Office 365 respectively. It's clear to me that Microsoft is pushing its customers toward cloud services and, like it or lump it, data integration developers like me may have to come along for the ride. I don't think the day is too far off when we can log into Windows Azure SQL Database (aka SQL Azure), Team Foundation Service, Dynamics etc… using the same credentials that are currently used for Office 365, and over time I would expect those things to get integrated together a lot better – that integration will be based upon a Windows Azure Active Directory identity. This should not come as a surprise; in my opinion Microsoft's whole enterprise play over the past 15 or 20 years can be neatly summarised as "get people onto Windows Server and Active Directory then upsell from there" – in the not-too-distant future the only difference is that they're trying to do it in the cloud. I want to get familiar with these services and hence I am considering moving jtworkdomain.com onto Office 365.
    I'll lose the convenience of easily being able to switch to that account at Outlook.com and moreover I'll have to start paying for it (I think it'll be about fifty quid a year – not a massive amount, but it's quite a bit more than free), but increasingly this is beginning to look like a move I have to make. So that's where my head is at right now. Anyone have any relevant thoughts or experiences to share? Please let me know in the comments below. @Jamiet

    Read the article

< Previous Page | 863 864 865 866 867 868 869 870 871 872 873 874  | Next Page >