Search Results

Search found 145 results on 6 pages for 'emc'.


  • Hyper-V Live Migration across Sites!

    - by Ryan Roussel
    One of the great sessions I sat in on at Tech Ed this week was stretching a Windows 2008 R2 Hyper-V Failover Cluster across sites. With this ability, you could implement a Hyper-V cluster where you could migrate or even Live Migrate VMs across sites. With this area's propensity for hurricanes, this will be a very popular topic for me over the next few months. While this technology is possible today, it is also very complicated and can be very expensive to implement.

    First, your WAN connection has to support the ability to trunk your VLAN across both sites in order to Live Migrate. This means you can't use a Layer 3 routed connection like MPLS. It has to be a Metro Ethernet connection or "dark fiber" (unused fiber already in the ground that can be leased from various providers). Both of these connections allow you to trunk Layer 2 across your WAN. Cisco does have the ability to trunk Layer 2 across a routed connection by muxing the traffic, but this is only available in their Nexus product line, which has a very steep price tag. If you are stuck with MPLS or the like and Nexus switching is not a realistic possibility, you will have to implement a multi-subnet cluster, in which case Live Migration won't be possible. However, you can still fail over VMs to the remote site with some planning and manual intervention. The consideration here is that the VMs will be on a different subnet once migrated, so you will have to change the IP addressing of your VMs. This also has ramifications for DNS and name resolution when controlling your downtime. DHCP with reservations for your VMs is the preferred method to achieve the IP changes, as this automates that part of the process.

    Secondly, you will have to have a mechanism to replicate your storage across both sites. Many SAN vendors natively support hardware-based synchronous and asynchronous replication. Some even support Cluster Shared Volumes, which were introduced in 2008 R2. If your SANs do not support this natively, there are alternative file-based replication products, either software-based like Double-Take or hardware appliances like those from EMC. Be sure to check with your vendor on support for Disk Majority if you're replicating your quorum disk between SANs.

    The last consideration is the ability to maintain quorum for your cluster. If your replication provider does not support Disk Majority through replication, you will have to explore Node Majority with File Share Witness. This will affect your design: a 3-node cluster with 1 node at the remote site and the FSW at the production site would not be able to maintain quorum if the production site were lost. Microsoft best practice is to implement an even-node cluster with 2 nodes at each site and the FSW at a third site (the sketch below makes this arithmetic concrete).

    And there you have it. While some considerations and research go into implementing this solution, even a multi-subnet solution would be invaluable to organizations implementing "warm" DR sites.
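    To make the quorum arithmetic concrete, here is a minimal sketch (Python, purely illustrative; the vote counts come from the two scenarios described above):

        def has_quorum(surviving_votes, total_votes):
            # A cluster keeps quorum while a strict majority of votes survives.
            return surviving_votes > total_votes // 2

        # Scenario 1: 3 nodes + FSW at the production site = 4 votes.
        # Losing production takes 2 nodes and the FSW with it, leaving 1 vote.
        print(has_quorum(1, 4))  # False -> cluster loses quorum

        # Scenario 2: 2 nodes per site + FSW at a third site = 5 votes.
        # Losing either site leaves 2 nodes + FSW = 3 votes.
        print(has_quorum(3, 5))  # True -> cluster stays up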

    Read the article

  • Certify September Updates

    - by Sadia2
    We have added some release and platform certifications to MOS Certify.

    Applications: Oracle Demantra 12.2.2, 7.3.1.5, 7.3.1.4, 7.3.0.2.0, 7.3.0.0.0
    Collaboration Technologies: Oracle Beehive 2.0.1.8.0
    Database: Oracle Database Client 12.1.0.1.0, Oracle Clusterware 11.2.0.4.0, Oracle Database 11.2.0.4.0, Oracle Real Application Clusters 11.2.0.4.0
    E-Business Suite: Oracle E-Business Suite 12.2.2, 12.1.3, 12.1.2, 12.1.1, 12.0.6, 11.5.10.2
    Edge Applications: Oracle AutoVue 20.2.2, 20.2.1, 20.2.0
    Enterprise Manager: Enterprise Manager Base Platform - OMS 12.1.0.3.0, Oracle Real User Experience Insight 12.1.0.4.0, 12.1.0.3.0, 12.1.0.1, 11.1
    FSGBU Insurance Group: Oracle Health Insurance Claims 2.13.3.0.0
    Fusion Middleware: Oracle Business Intelligence Applications 11.1.1.7.1, 7.9.6.4.0, Oracle Discoverer 11.1.1.6.0, Discoverer Administrator 11.1.1.6.0, Discoverer Desktop 11.1.1.6.0, Oracle JDK 1.7.0_40, 1.7.0_25, Oracle JRE 1.7.0_40, 1.7.0_25, Oracle JRockit 6u45 R28.2.7+, Oracle WebCenter Sites 11.1.1.8.0, Oracle WebCenter Sites: Community-Gadgets 11.1.1.8.0, Oracle WebCenter Sites: CIP for File Systems and MS SharePoint 11.1.1.8.0, Oracle WebCenter Sites: CIP for EMC Documentum 11.1.1.8.0
    JD Edwards EnterpriseOne: JD Edwards EnterpriseOne Business Services Server 9.1.3.0, 9.1.2.0, 9.1.0.0, JD Edwards EnterpriseOne Mobile Applications 9.1.2.0
    Oracle Fusion Applications: Oracle Fusion Applications 11.1.7.0.0
    Primavera GBU: Primavera Unifier 9.13.0.0
    Siebel Enterprise: Siebel Application Server 8.2.2.4.0, 8.2.2.3.0, 8.2.2.2.0, 8.1.1.10.0, 8.1.1.9.0, Siebel Database Server 8.2.2.3.0, 8.1.1.10.0, 8.1.1.9.0, Siebel Remote Client 8.2.2.4.0, 8.2.2.3.0, 8.2.2.2.0, 8.1.1.11.0, 8.1.1.10.0, 8.1.1.9.0, Siebel Tools Client 8.2.2.4.0, 8.2.2.2.0, 8.1.1.11.0, 8.1.1.9.0, Siebel SSO Integration 8.2.2.4.0, 8.2.2.3.0, 8.2.2.2.0, 8.1.1.11.0, 8.1.1.10.0, 8.1.1.9.0

    Read the article

  • PHP: Building A Stock Index Using Yahoo Finance [on hold]

    - by Jeremy
    I have the following code which is pulling data, but it is not outputting properly.

    <?php
    class YahooStock {
        public function getQuotes() {
            $stocks = array();
            $result = array();
            $s = file_get_contents("http://finance.yahoo.com/d/quotes.csv?s=AMZN+CRM+CNQR+CTL+CTXS+DWRE+EMC+GOOG+HP+IBM+JIVE+LNKD+MKTO+MSFT+N+NFLX+NOW+ORCL+RAX+SAP+T+VEEV+VMW+VZ+WDAY&f=npf6&e=.csv");
            $data = explode(',', $s);
            $result = $data;
            return $result;
        }
    }

    $objYahooStock = new YahooStock;
    foreach ($objYahooStock->getQuotes() as $code => $result) {
        echo 'Name:'  . $result[0] . '<br />';
        echo 'Price:' . $result[1] . '<br />';
        echo 'Float:' . $result[2] . '<br />';
    }
    ?>

    The output looks like it is separating every character with a comma instead of each column:

    Name:" Price:A Float:m
    Name: Price:I Float:n
    Name:3 Price:3 Float:2
    Name: Price: Float:

    Any help is appreciated!
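    For reference, the symptom above is consistent with explode(',', $s) flattening the whole multi-line CSV into one array of fields, so each loop iteration gets a single string and $result[0] is just its first character. A minimal sketch of row-by-row CSV parsing (written in Python to keep this page's new examples in one language; the URL and the f=npf6 field order are taken from the question, the ticker list is shortened, and Yahoo has since retired this quotes.csv endpoint, so treat it as illustrative):

        import csv
        import urllib.request

        # Historical endpoint from the question; f=npf6 requests name, price, float.
        URL = ("http://finance.yahoo.com/d/quotes.csv"
               "?s=AMZN+CRM+EMC+GOOG+IBM&f=npf6&e=.csv")

        with urllib.request.urlopen(URL) as resp:
            text = resp.read().decode("utf-8", errors="replace")

        # Split into rows first, then let csv handle quoted names containing commas.
        for row in csv.reader(text.splitlines()):
            if len(row) >= 3:
                print("Name:", row[0])
                print("Price:", row[1])
                print("Float:", row[2])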

    Read the article

  • Event ID 9331 MSExchangeSA & Event ID 9335 MSExchangeSA

    - by George
    I get these two Exchange 2010 Global Address Book-related event IDs:

    Event ID 9331 MSExchangeSA
    OABGen encountered error 80004005 (internal ID 50101f1) accessing the public folder database while generating the offline address list for address list '/'.
    -\Default Offline Address List

    and

    Event ID 9335 MSExchangeSA
    OABGen encountered error 80004005 while cleaning the offline address list public folders under /o=xxxxx xxxx/cn=addrlists/cn=oabs/cn=Default Offline Address List. Please make sure the public folder database is mounted and replicas exist of the offline address list folders. No offline address lists have been generated. Please check the event log for more information.
    -\Default Offline Address List

    It is Exchange 2010 SP2 sitting on Windows 2008 Enterprise Edition. Essentially the issue is that the Global Address Book is not being updated on Outlook clients. We are using Outlook 2007 and 2010.

    So far I have tried running the following command:

    Update-FileDistributionService -Identity ExchangeServer -Type "OAB"

    And I tried this solution as well:

    1) Make sure the Microsoft Exchange System Attendant is running. It will be set to start automatically by default, but it doesn't; this is a known issue. Start this service manually. When it is running, you will not get an error when trying to update the GAL.
    2) "Apply" any changes made to any address lists before the GAL will update Outlook properly. In Organization Configuration - Mailbox in EMC, view the properties of the Default Global Address Book in the Offline Address Book tab. In the properties window, select the Address Lists tab. This shows which address lists make up the GAL.
    3) Close the properties window and select the Address Lists tab in Organization Configuration - Mailbox. Right-click each address list used by the default GAL and click "Apply" (make sure the "Immediately" radio button is checked).
    4) Last, go back to the Offline Address Book tab, right-click the GAL and select "Update". After a few send/receives in the Outlook clients, their Global Address List should update to show the latest changes.

    Neither one of those solutions helped, so I am not really sure what to do here. Also, I am aware of changing the registry on each local computer, but that would be close to impossible as we have 8 offices in 3 different countries. Any suggestions?

    EDIT 7.XII.2012 @ 10.35: I forgot to mention that we did rebuild the address book and that didn't help.

    Read the article

  • Certify August Updates

    - by Sadia2
    We have added some release and platform certifications to MOS Certify.

    Applications: Oracle Demantra Demand Management 7.3.1.5, Oracle Demantra Predictive Trade Planning 7.3.1.5, Oracle Demantra Sales and Operations Planning 7.3.1.5
    Database: Oracle Database Client 12.1.0.1.0, 11.2.0.4.0, Oracle Clusterware 11.2.0.4.0, Oracle Database 11.2.0.4.0, Oracle Real Application Clusters 11.2.0.4.0
    E-Business Suite: Oracle E-Business Suite 12.1.3, 12.1.2, 12.1.1, 12.0.6, 11.5.10.2
    Edge Applications: Oracle Transportation Management 6.3.2
    Enterprise Manager: Oracle Application Management Pack for Oracle E-Business Suite 12.1.0.1.0
    Fusion Middleware: Discoverer Administrator 11.1.1.6.0, Discoverer Desktop 11.1.1.6.0, Forms Builder 11.1.1.6.0, Oracle Application Development Framework 11.1.1.6.0, Oracle Application Development Runtime 11.1.1.6.0, Oracle Business Intelligence Publisher 11.1.1.6.0, Oracle Directory Services Manager 11.1.1.6.0, Oracle Forms 11.1.1.6.0, Oracle GoldenGate 11.1.1.1.0, 11.1.1.1.2, 11.1.1.1.1, Oracle GoldenGate Application Adapters 11.1.1.1.1, Oracle Identity and Access Management 11.1.2.0.0, 11.1.2.1.0, Oracle Identity Federation 11.1.1.6.0, Oracle Real-Time Decision Load Generator 11.1.1.7.0, Oracle Real-Time Decision Studio 11.1.1.7.0, Oracle Real-Time Decisions 11.1.1.6.0, Oracle Reports 11.1.1.6.0, Oracle Segmentation Server 11.1.1.6.0, Oracle Virtual Directory 11.1.1.6.0, Oracle Web Cache 11.1.1.6.0, Oracle WebCenter Content Imaging 11.1.1.8.0, Oracle WebCenter Content Inbound Refinery Server 11.1.1.8.0, Oracle WebCenter Content Records 11.1.1.8.0, Oracle WebCenter Content Rights 11.1.1.8.0, Oracle WebCenter Content UI 11.1.1.8.0, Oracle WebCenter Enterprise Capture 11.1.1.8.0, Oracle WebCenter Portal 11.1.1.8.0, Oracle WebCenter Sites 11.1.1.8.0, Oracle WebCenter Sites: CIP for EMC Documentum 11.1.1.8.0, Oracle WebCenter Sites: CIP for File Systems and MS SharePoint 11.1.1.8.0, Oracle WebCenter Sites: Community-Gadgets 11.1.1.8.0, Oracle WebCenter Sites: Explorer 11.1.1.8.0, Oracle WebCenter Sites: Developer Tools 11.1.1.8.0, Oracle WebCenter Universal Content Management 11.1.1.8.0, Reports Builder 11.1.1.6.0
    FSGBU Insurance Group: Oracle Health Insurance Claims 2.13.3.0.0, 2.13.2.0.0, 2.13.1.0.0
    JD Edwards EnterpriseOne: JD Edwards EnterpriseOne Tools 9.1.3.0, 9.1.2.0, 9.1.0.0
    JD Edwards World: JD Edwards World Service Enablement A93SE, A931SE
    PeopleSoft: PeopleSoft PeopleTools 8.52
    Siebel Enterprise: Siebel Application Server 8.2.2.4.0, 8.2.2.3.0, 8.2.2.2.0, 8.1.1.11.0, 8.1.1.10.0, 8.1.1.9.0, Siebel CRM Desktop Client 8.2.2.4.0, 8.2.2.3.0, 8.2.2.2.0, 8.1.1.11.0, 8.1.1.10.0, 8.1.1.9.0, Siebel Database Server 8.2.2.4.0, 8.2.2.3.0, 8.2.2.2.0, 8.1.1.11.0, 8.1.1.10.0, 8.1.1.9.0, Siebel HI Web Client 8.2.2.2.0, 8.1.1.9.0, Siebel Gateway Server 8.2.2.4.0, 8.2.2.3.0, 8.2.2.2.0, 8.1.1.11.0, 8.1.1.10.0, 8.1.1.9.0, Siebel Outlook Add-in Client 8.2.2.2.0, Siebel Remote Client 8.2.2.4.0, 8.2.2.3.0, 8.2.2.2.0, 8.1.1.11.0, 8.1.1.10.0, 8.1.1.9.0, Siebel Tools Client 8.2.2.4.0, 8.2.2.3.0, 8.2.2.2.0, 8.1.1.11.0, 8.1.1.10.0, 8.1.1.9.0, Siebel Web Server Extension 8.2.2.4.0, 8.2.2.3.0, 8.2.2.2.0, 8.1.1.11.0, 8.1.1.10.0, 8.1.1.9.0

    Read the article

  • Socket connection to a telnet-based server hangs on read

    - by mixwhit
    I'm trying to write a simple socket-based client in Python that will connect to a telnet server. I can test the server by telnetting to its port (5007), and entering text. It responds with a NAK (error) or an AK (success), sometimes accompanied by other text. Seems very simple. I wrote a client to connect and communicate with the server, but it hangs on the first attempt to read the response. The connection is successful. Queries like getsockname and getpeername are successful. The send command returns a value that equals the number of characters I'm sending, so it seems to be sending correctly. But in the end, it always hangs when I try to read the response.

    I've tried using both file-based objects like readline and write (via socket.makefile), as well as using send and recv. With the file object I tried making it with "rw" and reading and writing via that object, and later tried one object for "r" and another for "w" to separate them. None of these worked. I used a packet sniffer to watch what's going on. I'm not versed in all that I'm seeing, but during a telnet session I can see my typed text and the server's text coming back. During my Python socket connection, I can see my text going to the server, but packets back don't seem to have any text in them.

    Any ideas on what I'm doing wrong, or any strategies to try? Here's the code I'm using (in this case, it's with send and recv):

    #!/usr/bin/python
    host = "localhost"
    port = 5007
    msg = "HELLO EMC 1 1"
    msg2 = "HELLO"

    import socket
    import sys

    try:
        skt = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    except socket.error, e:
        print("Error creating socket: %s" % e)
        sys.exit(1)

    try:
        skt.connect((host, port))
    except socket.gaierror, e:
        print("Address-related error connecting to server: %s" % e)
        sys.exit(1)
    except socket.error, e:
        print("Error connecting to socket: %s" % e)
        sys.exit(1)

    try:
        print(skt.send(msg))
        print("SEND: %s" % msg)
    except socket.error, e:
        print("Error sending data: %s" % e)
        sys.exit(1)

    while 1:
        try:
            buf = skt.recv(1024)
            print("RECV: %s" % buf)
        except socket.error, e:
            print("Error receiving data: %s" % e)
            sys.exit(1)
        if not len(buf):
            break
        sys.stdout.write(buf)
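    One hedged guess at the hang, for reference: line-oriented telnet servers typically wait for a line terminator before replying, and skt.send(msg) sends no newline, so the server may never answer and recv() blocks forever. A minimal sketch of the same client with a CRLF terminator and a read timeout (Python 3 here; the terminator and the timeout are assumptions to test against this particular server, not a confirmed fix):

        import socket

        host, port = "localhost", 5007
        msg = "HELLO EMC 1 1"

        with socket.create_connection((host, port), timeout=10) as skt:
            # Terminate the command the way a telnet client would (Enter sends CRLF).
            skt.sendall((msg + "\r\n").encode("ascii"))
            data = b""
            while True:
                try:
                    chunk = skt.recv(1024)
                except socket.timeout:
                    break   # no more data within the window; stop instead of hanging
                if not chunk:
                    break   # server closed the connection
                data += chunk

        print("RECV: %s" % data.decode("ascii", errors="replace"))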

    Read the article

  • Looking for advice on Hyper-V storage replication

    - by Notre1
    I am designing a 2-host Hyper-V R2 cluster with 6-10 guests stored on an SMB iSCSI SAN device (probably Promise VessRAID). I will be getting at least two of the SAN devices and need to eliminate the storage as a single point of failure. Ideally, that would involve real-time failover for the storage, like Windows failover clustering does for the hosts. This design will be used at around six of our sites, and I would like to allow for us to eventually set up a cluster at a colocation site and replicate each site's VMs there for DR. (Ideally a live multi-site cluster, but a manual import of the VMs would be fine for this sort of DR.)

    The tools that come with enterprise SANs, like EMC and NetApp, seem to be the most commonly used items for a Hyper-V cluster, but I can't afford their prices with my budget. Outside of them, the two tools that seem to be most common for Hyper-V storage replication are SteelEye (now SIOS) DataKeeper Cluster Edition and Double-Take Availability. Originally, I was planning on using Cluster Shared Volumes (CSV), but it seems like replication support for these is either not available or brand new in both these products. It looks like CSVs are supported in Double-Take 5.22 (see this discussion), but I don't think I want to run something that new in production. Right now, it seems like the best option for me is not to implement CSVs, implement some sort of storage replication, and upgrade to CSVs at a later date once replicating them is more mature. I would love to have live migration, and CSVs are not required for live migration if you are using one LUN per VM, so I guess this is what I'll do.

    I would prefer to stick to using the Microsoft Windows Server and Hyper-V tools and features as much as possible. From that standpoint, SteelEye looks more appealing than Double-Take because they make the DataKeeper volume(s) available to the Failover Clustering Manager, and then failover clustering is all configured and managed through the native Microsoft tools. Double-Take says that "clustered Hyper-V hosts are not supported," and Double-Take Availability itself seems to be what is used for the actual clustering and failover.

    Does anyone know if any of these replication tools work with more than two hosts in the cluster? All the information I can find on the web only uses two hosts in their examples. Are there any better tools than SteelEye and Double-Take for doing what I am trying to do, which is to eliminate the storage as a single point of failure? Neverfail, AppAssure, and DataCore all seem to offer similar functionality, but they don't seem to be as popular as SteelEye and Double-Take.

    I have seen a number of people suggest using StarWind iSCSI SAN software for the shared storage, which includes replication (and CSV replication at that). There are a couple of reasons I have not seriously considered this route: 1) The company I work for is exclusively a Dell shop, and Dell does not have any servers that I can pack with more than six 3.5" SATA drives. 2) In the future, it could be advantageous for us to not be locked into a particular brand or type of storage, and third-party replication software all allows replication to heterogeneous storage devices.

    I am pretty new to iSCSI and clustering, so please let me know if it looks like I am planning something that goes against best practices or am overlooking/missing something.

    Read the article

  • Symantec Protection Suite and System Recovery 2011 Desktop Edition

    - by rihatum
    I am re-posting this as my previous question was being treated as if I were "shopping or seeking product recommendations", even though I was not. BTW, they have deleted my comments too, which were not offensive in nature. Anyway, I have re-phrased some parts of my question and I hope SF admins "do not modify / edit" this one; I will be most grateful for that. I have a lot of respect for the people who visit this site and help others!

    Just to clarify, and to go by SF rules: I am not seeking someone to design this solution. I am simply seeking real-world examples, experiences, technical expert opinions / suggestions, any tips or tricks they may have, or any problems they may have faced while doing something similar to the above with these products. I am also not asking for capacity planning for storage; we have done some research and I am seeking expert assurance / suggestions.

    We (our company) are planning to deploy Symantec Endpoint Protection and Symantec Desktop Recovery 2011 Desktop Edition to our 3000-4000 workstations (Windows 7, 32- and 64-bit), with a few hundred on Windows XP 32/64-bit. I have read the implementation guide for SEP and have read the tech notes for Desktop Recovery 2011. Our team has planned to deploy this as follows:

    1 x dedicated SQL 2008 R2 for Symantec Endpoint Protection (instead of using the embedded database)
    1 x dedicated SQL 2008 R2 for Symantec Desktop Recovery 2011 (instead of using the embedded database)
    1 x dedicated W2K8 R2 box for the SEPM (Symantec Endpoint Protection Manager - management app)
    1 x dedicated W2K8 R2 box for the Symantec Desktop Recovery 2011 management application

    Agent deployment: As per the Symantec documentation for both of the above, an agent can be pushed via the management application (provided no firewalls are blocking the required ports, etc.; we have the Windows firewall disabled already).

    Server hardware:
    Per SQL server: 16GB RAM + SAS disks + dual Xeon, RAID-10 for the SQL DB, or I can always mount a LUN from our existing Hitachi or EMC SAN.
    SEPM server: 16GB RAM + SAS disks + dual Xeon
    System Recovery management server: 16GB RAM + SAS disks + dual Xeon

    Above is the initial plan we have for 3000-4000 client workstations (Windows). Now my questions :-)

    a) If we had these users distributed amongst two sites with an AD DC / GC in each site, how would I restrict the SEPM and Desktop management solution to only check for users in their respective site?

    b) At present all users are in one building, but we are going to move some departments to a new location (with dedicated connectivity). How would we control which SEPM / management server is responsible for which site?

    c) We have NetBackup in our environment backing up other servers. I am planning to protect these 4 (2 x SQL, 1 x SEPM, 1 x System Recovery management server) via NetBackup, or I can use System Recovery 2011 Server Edition on all 4 of these boxes as well. (Licensing is not an issue as we have the complete Symantec portfolio included in our license.)

    d) Now, saving desktop backups: what strategies have you implemented? Any best practice recommendations for a large user base? I was thinking to either mount a LUN from our Hitachi SAN on the Symantec Recovery server itself, or back up to the user's hard drive locally and then copy it over to a network location. Suggestions welcome :-)

    If you have anything to add / correct, that will be really helpful before diving into the actual implementation phase. Will be most grateful for your suggestions, recommendations and corrections with the above. Many thanks!

    Read the article

  • Week in Geek: FBI Back Door in OpenBSD Edition

    - by Asian Angel
    This week we learned how to migrate bookmarks from Delicious to Diigo, fix annoying arrows, play old-school DOS games, schedule smart computer shutdowns, use breaks in Microsoft Word to better format documents, check the condition of hard-disks using Linux disk utilities, & what the Linux fstab is and how it works. Photo by Jameson42.

    Random Geek Links

    Another week with extra news link goodness to help keep you up to date. Photo by justmakeit.

    Report of FBI back door roils OpenBSD community: Allegations that the FBI surreptitiously placed a back door into the OpenBSD operating system have alarmed the computer security community, prompting calls for an audit of the source code and claims that the charges must be a hoax.
    Fortinet: Job outlook improving for cybercrooks: In an ironic twist in the job market, more positions will open up for developers who can write customized malware packers, people who can break CAPTCHA codes, and distributors who can spread malicious code, according to Fortinet.
    Enisa: Malware for smartphones is a 'serious risk': Businesses and consumers are at risk of data breaches through smartphone use, according to the European Network and Information Security Agency.
    The trick with the f: Google and Microsoft web sites distribute malware: Last week, Google's DoubleClick advertising platform and Microsoft's rad.msn.com online ad network briefly distributed malware to other web sites in the form of advertising banners.
    New scam tactic: Fake disk defraggers: It would appear that scammers are trying out new programs to see which might best confuse potential victims and evade detection by legitimate antivirus software.
    Microsoft closes IE and Stuxnet holes: As previously announced, Microsoft has released 17 security updates to close 40 security holes. All four Windows holes so far disclosed in connection with Stuxnet have now been closed.
    Microsoft Offers H.264 Support to Firefox on Windows via Add-On: The new HTML5 Extension for Windows Media Player Firefox Plug-in add-on from Microsoft offers users that are running Firefox on Windows 7 H.264 support for HTML5 video playback.
    Google proclaims Chrome business-ready: Google has announced that Chrome is ready for corporate use.
    Microsoft Tells Exchange Customers to Think Twice Before Opting for Google Message Continuity: This week, Microsoft is telling companies still running Exchange 2010's precursors that they should carefully consider the implications of embracing Google Message Continuity.
    Who Google has in mind for its Chrome OS users: Steven Vaughan-Nichols explains why he feels that Chrome OS will be ideal for either office-workers or people who need a computer, but do not know the first thing about how to use one safely.
    Oracle takes office suite to the cloud: Oracle has introduced Cloud Office 1.0, a cloud-based version of its office suite, which is aimed at web and mobile users.
    Mozilla pays premiums for reports of vulnerabilities: The Mozilla Foundation has followed Google's example by expanding its rewards program for reports of vulnerabilities in its Web applications.
    Who bought those 882 Novell patents? Not just Microsoft: The mysterious CPTN Holdings, the organization that bought the 882 Novell patents as part of the terms of the Attachmate acquisition of Novell, has been unmasked (Microsoft, Apple, EMC and Oracle).
    Appeals court: Feds need warrants for e-mail: Police must obtain search warrants before perusing Internet users' e-mail records, a federal appeals court ruled today in a landmark decision that struck down part of a 1986 law allowing warrantless access.

    Geek Video of the Week

    What happens when someone plays a wicked prank by shoveling crazy snow paths that lead to dead ends or turn back on themselves? Watch to find out! Photo by CollegeHumor.
    Janitor Snow Shoveling Prank

    Random TinyHacker Links

    The Oatmeal on Cat vs Internet: What lengths will our poor neglected kitty hero have to go to in order to get some attention?
    Guide On Using JoliCloud With Windows: JoliCloud is a nifty operating system that's made for people who need a light-weight OS that's mostly cloud based. Check this guide on using it with Windows.
    Use Cameyo to Easily Create Portable Programs: Here's a nifty tool to make portable apps out of programs in Windows. Check out the guide to do it.
    Better Family Tech Support: A nice new site by Google to help members of family understand how computers work.
    Track Your Stolen Mobile Phone With F-Secure: A useful anti-theft tool for your mobile phone.

    Super User Questions

    Another week with great answers to popular questions from Super User.
    What Chrome password manager fits my requirements?
    What's the best way to be able to reimage windows computers?
    Could you suggest feature-rich disk-based personal backup program for linux (and I've seen a few)?
    What is IPv6 and why should I care?
    Is there any way to find out what programs are trying to connect to Internet on windows?

    How-To Geek Weekly Article Recap

    Here are our hottest articles full of geeky goodness from this past week at HTG.
    20 OS X Keyboard Shortcuts You Might Not Know
    Microsoft Security Essentials 2.0 Kills Viruses Dead. Download It Now.
    Is Your Desktop Printer More Expensive Than Printing Services?
    Ask the Readers: Would You Be Willing to Give Windows Up and Use a Different O.S.?
    The Twelve Days of Geekmas

    One Year Ago on How-To Geek

    Enjoy reading through our latest batch of retro-geek goodness from one year ago.
    Macrium Reflect is a Free and Easy To Use Backup Utility
    How To Turn a Physical Computer Into A Virtual Machine with Disk2vhd
    How To Restore Windows 7 from a System Image
    How To Manage Hard Drive Space Used by Windows 7 Backup and Restore
    How To Manage Hibernate Mode in Windows 7

    The Geek Note

    That is all we have for you this week, so see you back here again after the holidays! Got a great tip? Send it in to us at [email protected]. Photo by mitjamavsar.

    Read the article

  • Conversation as User Assistance

    - by ultan o'broin
    Applications User Experience members (Erika Webb, Laurie Pattison, and I) attended the User Assistance Europe Conference in Stockholm, Sweden. We were impressed with the thought leadership and practical application of ideas in Anne Gentle's keynote address "Social Web Strategies for Documentation". After the conference, we spoke with Anne to explore the ideas further.

    Anne Gentle (left) with Applications User Experience Senior Director Laurie Pattison

    In Anne's book, Conversation and Community: The Social Web for Documentation, she explains how user assistance is undergoing a seismic shift. The direction is away from the old print manuals and online help concept towards a web-based, user community-driven solution using social media tools. User experience professionals now have a vast range of such tools to start and nurture this "conversation": blogs, wikis, forums, social networking sites, microblogging systems, image and video sharing sites, virtual worlds, podcasts, instant messaging, mashups, and so on.

    That user communities are a rich source of user assistance is not a surprise, but the extent of available assistance is. For example, we know from the Consortium for Service Innovation that there has been an "explosion" of user-generated content on the web. User-initiated community conversations provide as much as 30 times the number of official help desk solutions for consortium members! The growing reliance on user community solutions is clearly a user experience issue. Anne says that user assistance as conversation "means getting closer to users and helping them perform well. User-centered design has been touted as one of the most important ideas developed in the last 20 years of workplace writing. Now writers can take the idea of user-centered design a step further by starting conversations with users and enabling user assistance in interactions."

    Some of Anne's favorite examples of this paradigm shift from the world of traditional documentation to community conversation include:

    Writer Bob Bringhurst's blog about Adobe InDesign and InCopy products and Adobe's community help
    The Microsoft Development Network Community Center
    The former Sun (now Oracle) OpenDS wiki, NetBeans Ruby and other community approaches to engage diverse audiences using screencasts, wikis, and blogs
    Cisco's customer support wiki, EMC's community, as well as Symantec and Intuit's approaches
    The efforts of Ubuntu, Mozilla, and the FLOSS community generally

    Adobe Writer Bob Bringhurst's Blog

    Oracle is not without a user community conversation too. Besides the community discussions and blogs around documentation offerings, we have the My Oracle Support Community forums, Oracle Technology Network (OTN) communities, wiki, blogs, and so on. We have the great work done by our user groups and customer councils. Employees like David Haimes reach out, and enthusiastic non-employee gurus like Chet Justice (OracleNerd), Floyd Teter and Eddie Awad provide great "how-to" information too.

    But what does this paradigm shift mean for existing technical writers as users turn away from the traditional printable PDF manual deliverables? We asked Anne after the conference. The writer role becomes one of conversation initiator or enabler. The role evolves, along with the process, as the users define their concept of user assistance and terms of engagement with the product instead of having it pre-determined. It is largely a case now of "inventing the job while you're doing it, instead of being hired for it", Anne said. There is less emphasis on formal titles. Anne mentions that her own title is "Content Stacker" at OpenStack; others use titles such as "Content Curator" or "Community Lead". However, the role remains one essentially about communications, "but of a new type--interacting with users, moderating, curating content, instead of sitting down to write a manual from start to finish."

    Clearly then, this role is open to more than professional technical writers. Product managers who write blogs, developers who moderate forums, support professionals who update wikis, and rock star programmers with a penchant for YouTube are ideal. Anyone with the product knowledge, empathy for the user, and a flair for relationships on the social web can join in. Some even perform these roles already but do not realize it. Anne feels the technical communicator space will move from hiring new community conversation professionals (who are already active in the space through blogging, tweets, wikis, and so on) to retraining some existing writers over time. Our own research reveals that the established proponents of community user assistance even set employee performance objectives for internal content curators about the amount of community content delivered by people outside the organization!

    To take advantage of the conversations on the web as user assistance, enterprises must first establish where on the spectrum their community lies. "What is the line between community willingness to contribute and the enterprise objectives?" Anne asked. "The relationship with users must be managed and also measured." Anne believes that the process can start with a "just do it" approach. Begin by reaching out to existing user groups, individual bloggers and tweeters, forum posters, early adopter program participants, conference attendees, customer advisory board members, and so on. Use analytical tools to measure the level of conversation about your products and services to show a return on investment (ROI), winning management support.

    Anne emphasized that success with the community model is dependent on lowering the technical and motivational barriers so that users can readily contribute to the conversation. Simple tools must be provided, and guidelines, if any, must be straightforward but not mandatory. The conversational approach is one where traditional style and branding guides do not necessarily apply. Tools and infrastructure help users to create content easily, to search and find the information online, read it, rate it, translate it, and participate further in the content's evolution. Recognizing contributors by using ratings on forums, giving out Twitter kudos, conference invitations, visits to headquarters, free products, preview releases, and so on, also encourages the adoption of the conversation model.

    The move to conversation as user assistance is not free, but there is a business ROI. The conversational model means that customer service is enhanced, as user experience moves from a functional to a valued, emotional level. Studies show a positive correlation between loyalty and financial performance (Consortium for Service Innovation, 2010), and as customer experience and loyalty become key differentiators, user experience professionals cannot afford not to explore the model's possibilities. The digital universe (measured at 1.2 million petabytes in 2010) is doubling every 12 to 18 months, and 70 percent of that universe consists of user-generated content (IDC, 2010). Conversation as user assistance cannot be ignored but must be embraced. It is a time to manage for abundance, not scarcity. Besides, the conversation approach certainly sounds more interesting, rewarding, and fun than the traditional model!

    I would like to thank Anne for her time and thoughts, and recommend that all user assistance professionals read her book. You can follow Anne on Twitter at: http://www.twitter.com/annegentle.

    Oracle's Acrolinx IQ deployment was used to author this article.

    Read the article

  • Upgrading Team Foundation Server 2008 to 2010

    - by Martin Hinshelwood
    I am sure you will have seen my posts on upgrading our internal Team Foundation Server from TFS2008 to TFS2010 Beta 2, RC and RTM, but what about a fresh upgrade of TFS2008 to TFS2010 using the RTM version of TFS? One of our clients is taking the plunge with TFS2010, so I have the job of doing the upgrade. It is sometimes very useful to have a team member that starts work when most of the Sydney workers are heading home, as I can do the upgrade without impacting them. The downside is that if you have any blockers then you can be pretty sure that everyone who can deal with your problem is asleep.

    I am starting with an existing blank installation of TFS 2010, but Adam Cogan let slip that he was the one that did the install, so I thought it prudent to make sure that it was OK.

    Verifying Team Foundation Server 2010

    We need to check that TFS 2010 has been installed correctly. First, check the Admin console and have a root about for any errors.

    Figure: Even the SQL Setup looks good. I don't know how Adam did it!

    Backing up the Team Foundation Server 2008 Databases

    As we are moving from one server to another (the recommended method) we will be taking a backup of our TFS2008 databases and restoring them to the SQL Server for the new TFS2010 server. Do not just detach and reattach; this will cause problems with the version of the database. If you are running a test migration you just need to create a backup of the TFS 2008 databases, but if you are doing the live migration then you should stop IIS on the TFS 2008 server before you back up the databases. This will stop any inadvertent check-ins or changes to TFS 2008.

    Figure: Stop IIS before you take a backup to prevent any TFS 2008 changes being written to the database.

    It is good to leave a little time between taking the TFS 2008 server offline and commencing the upgrade, as there is always one developer who has not finished and starts screaming. This time it was John Liu that needed 10 more minutes to make his changes and check in, so I always give it 30 minutes and see if anyone screams.

    John Liu [SSW] said: are you doing something to TFS :-O
    MrHinsh [SSW UK][VS ALM MVP] said: I have stopped TFS 2008 as per my emails
    John Liu [SSW] said: haven't finish check in @_@ can we have it for 10mins? :)
    MrHinsh [SSW UK][VS ALM MVP] said: TFS 2008 has been started
    John Liu [SSW] said: I love you!
    - IM conversation at TFS Upgrade +25 minutes

    After John confirmed that he had everything done I turned IIS off again and made a cup of tea. There were no more screams, so the upgrade could continue.

    Figure: Backup all of the databases for TFS and include the Reporting Services, just in case.
    Figure: Check that all the backups have been taken

    Once you have your backups, you need to copy them to your new TFS2010 server and restore them. This is a good way to proceed, as if we have any problems, or just plain run out of time, then you just turn the TFS 2008 server back on and all you have lost is one upgrade day, and not 10 developer days.

    As per the rules, you should record the number of files and the total number of areas and iterations before the upgrade so you have something to compare to:

    TFS2008 file count:
        Type 1: 1845
        Type 2: 15770
    Areas & Iterations: 139

    You can use this to verify that the upgrade was successful. It should, however, be noted that the numbers in TFS 2010 will be bigger. This is due to some of the sorting out that TFS does during the upgrade process.
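    A tiny sketch of that verification step (Python, illustrative; the "before" numbers are the ones recorded above, the "after" numbers are the post-upgrade figures reported further down, and the rule is the one stated: areas and iterations must match exactly, while the file count may legitimately grow):

        before = {"files": 1845 + 15770, "areas_and_iterations": 139}
        after  = {"files": 19288,        "areas_and_iterations": 139}

        # Areas & iterations must survive the upgrade unchanged.
        assert after["areas_and_iterations"] == before["areas_and_iterations"]

        # TFS 2010 reshuffles version control items, so files may grow but must not shrink.
        assert after["files"] >= before["files"]

        print("Upgrade stats look sane.")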
    Restore Team Foundation Server 2008 Databases

    Restoring the databases is much more time consuming than just attaching them, as you need to do them one at a time. But you may be taking a backup of an operational database and need to restore all your databases to a particular point in time instead of to the latest. I am doing latest unless I encounter any problems.

    Figure: Restore each of the databases to either the latest or a specific point in time.
    Figure: Restore all of the required databases

    Now that all of your databases are restored, you need to upgrade them to Team Foundation Server 2010.

    Upgrade Team Foundation Server 2008 Databases

    This is probably the easiest part of the process. You need to call a fire-and-forget command that will go off to the database specified, find the TFS 2008 databases and upgrade them to 2010. During this process all 6 of the main TFS 2008 databases are merged into the TfsVersionControl database, upgraded, and then the database is renamed to TFS_[CollectionName]. The rename is only the database and not the physical files, so it is worth going back and renaming the physical file as well. This keeps everything neat and tidy.

    If you plan to keep the old TFS 2008 server around, for example if you are doing a test migration first, then you will need to change the TFS GUID. This GUID is unique to each TFS instance and is preserved when you upgrade. This GUID is used by the clients, and they can get a little confused if there are two servers with the same one.

    To kick off the upgrade you need to open a command prompt, change the path to "C:\Program Files\Microsoft Team Foundation Server 2010\Tools" and run the "import" command in "tfsconfig":

        TfsConfig import /sqlinstance:<Previous TFS Data Tier>
                         /collectionName:<Collection Name>
                         /confirmed

        Imports a TFS 2005 or 2008 data tier as a new project collection.
        Important: This command should only be executed after adequate backups have
        been performed. After you import, you will need to configure portal and
        reporting settings via the administration console.

        EXAMPLES
        --------
        TfsConfig import /sqlinstance:tfs2008sql /collectionName:imported /confirmed
        TfsConfig import /sqlinstance:tfs2008sql\Instance /collectionName:imported /confirmed

        OPTIONS
        -------
        sqlinstance     The sql instance of the TFS 2005 or 2008 data tier. The TFS
                        databases at that location will be modified directly and will
                        no longer be usable as previous version databases. Ensure you
                        have back-ups.
        collectionName  The name of the new Team Project Collection.
        confirmed       Confirm that you have backed-up databases before importing.

    This command will automatically look for the TfsIntegration database and verify that all the other required databases exist. In this case it took around 5 minutes to complete the upgrade, as the total database size was under 700MB. This was unlike the upgrade of SSW's production database with over 17GB of data, which took a few hours. At the end of the process you should get no errors and no warnings:

        The Upgrade operation on the ApplicationTier feature has completed. There were 0 errors and 0 warnings.

    As this is a new server and not a pure upgrade, there should not be a problem with the GUID. If you think at any point you will be doing this more than once, for example doing a test migration, or merging many TFS 2008 instances into a single one, then you should go back and rename the physical TfsVersionControl.mdf file to the same as the new collection.
    This will avoid confusion later down the line. To do this, detach the new collection from the server and rename the physical files. Then reattach and change the physical file locations to match the new name. You can follow http://www.mssqltips.com/tip.asp?tip=1122 for a more detailed explanation of how to do this.

    Figure: Stop the collection so TFS does not take a wobbly when we detach the database.

    When you try to start the new collection again you will get a conflict with project names and will be required to remove the Test Upgrade collection. This is fine; it just needs to be detached.

    Figure: Detaching the test upgrade from the new Team Foundation Server 2010 so we can start the new collection again.

    You will now be able to start the new upgraded collection, and you are ready for testing. Do you remember the stats we took off the TFS 2008 server?

    TFS2008 file count:
        Type 1: 1845
        Type 2: 15770
    Areas & Iterations: 139

    Well, now we need to compare them to the TFS 2010 stats, remembering that there will probably be more files under source control.

    TFS2010 file count:
        Type 1: 19288
    Areas & Iterations: 139

    Lovely: the number of iterations is the same, and the number of files is bigger. Just what we were looking for.

    Testing the upgraded Team Foundation Server 2010 Project Collection

    Can we connect to the new collection and project?

    Figure: We can connect to the new collection and project.
    Figure: Make sure you can connect to the upgraded projects and that you can see all of the files.
    Figure: Team Web Access is there and working.

    Note that for Team Web Access you now use the same port and URL as for TFS 2010. So in this case, as I am running on the local box, you need to use http://localhost:8080/tfs which will redirect you to http://localhost:8080/tfs/web for the web access. If you need to connect with a Visual Studio 2008 client you will need to use the full path of the new collection, http://[servername]/tfs/[collectionname], and this will work with all of your collections. With Visual Studio 2005 you will only be able to connect to the Default collection, and in both VS2008 and VS2005 you will need to install the forward compatibility updates:

    Visual Studio Team System 2005 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010
    Visual Studio Team System 2008 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010

    To make sure that you have everything up to date, make sure that you run SSW Diagnostics and get all green ticks.

    Upgrade Done!

    At this point you can send out a notice to everyone that the upgrade is complete and give them the connection details. You need to remember that at this stage we have a 2008 project upgraded to run under TFS 2010, but it is still running under the same process template it was running before. You can only "enable" 2010 features in a process template; you can't upgrade one. So what to do? Well, you need to create a new project and migrate the things you want to keep across. Source code is easy: you can move or branch it. But Work Items are more difficult, as you can't move them between projects. This instance is complicated further as the old project uses the Conchango/EMC Scrum for Team System template, and I will need to write a script/application to get the work items across with their attachments intact. That is my next task!

    Technorati Tags: TFS 2010,TFS 2008,VS ALM

    Read the article

  • CodePlex Daily Summary for Wednesday, April 14, 2010

CodePlex Daily Summary for Wednesday, April 14, 2010

New Projects
- bitly.net: A bitly (using Version 3 of their APIs) client for .NET (Version 3.5)
- Chord Sheet Editor Add-In for Word: Transpose music chord sheets (guitar chord sheets, etc.) in Microsoft Word using this VSTO Add-In.
- CloudSponge.Net: Simple .Net wrapper for www.cloudsponge.com's REST API.
- Database Searcher: This is a small tool for searching a typed value inside all type matching columns and rows of a database. For connecting the database a .NET data p...
- Edu Math: Program Edu Math aims to make it easier to perform complicated calculations and mathematical analyses.
- fluent AOP: This project is not yet published
- FNA Fractal Numerical Algorithm for a new encryption technology: FNA Fractal Numerical Algorithm for a new encryption technology is a symmetrical encryption method based on two algorithms that I developed for: 1....
- Image viewer cum editor: This is a project on image viewing and editing. The project has the following features. VIEWER: Album, Password security for albums, Inbuilt Browser...
- JEngine - Tile Map Editor v1: JEngine - Tile Map Editor v1
- Jeremy Knight: Code samples, snippets, etc. from my personal blog.
- lcskey: lcs test code
- moldme: testesds ssdfsdfs
- NanoPrompt: NanoPrompt makes it more pleasant to work on a command-line. Features: syntax-highlighting; graphical output possible; up to 12 "displays" (cha...
- nirvana: for test
- OffInvoice Add-in for MS Office 2010: Project Description: The project is based on the ability to extend functionality in the Microsoft Office 2010 suite.
- PowerSlim - Acceptance Testing for Enterprise Applications: PowerSlim makes it possible to use PowerShell in acceptance testing. It is a small but powerful plugin for the Fitnesse acceptance testing fram...
- Proxi [Proxy Interface]: Proxi is a light-weight library that allows generating dynamic proxies using different providers. By utilizing Proxi, frameworks and libraries can ...
- Reality show about ASP.NET development: This application is created using ASP.NET and Microsoft SQL Server for demo purposes with the following target goals: example of usage fo...
- RecordLogon.vbs login script: RecordLogon.vbs is a script applied at logon via Group or Local policy. It records specific user and computer information and writes the data to a ...
- SpaceGameApplet: A java game ;)
- SpaceShipsGame: A game with space ships ";..;"
- SysHard: Info for Linux system.
- System Etheral™ - Developer: SE Dev (System Etheral™ - Developer) is an OS (Operating System) that is a bit like UNIX but it is for you to edit! We have not given you much but w...
- TimeSheet Reporting Silverlight: TimeSheet Reporting application in Silverlight. Contains a data grid containing combo boxes bound to different data sources like Members and Proje...
- TrayBird: A minimalistic twitter client for windows.
- Twitter4You: This application for windows is a communication for twitter!
- WCF RIA Services (+ PRISM + MVVM) LoB Application: WCF RIA Services sample LoB application (case study) built on PRISM with Entity Framework Model. It's a simple application for a fictive company Te...

New Releases
- Bluetooth Radar: Version 1.9: Change Search and Close icons; add Device Detail view
- CloudSponge.Net: Alpha: Initial alpha release, very limited testing; includes CloudSponge.dll and Sponge.exe (simple cmd line utility to import contacts and test the API)
- Global Assembly Cache comparison tool: GAC Compare version 3.1: Version 3.1 added export-assemblies-to-directory functionality
- HTML Ruby: 6.21.2: Some style adjustments; ruby text spacing is spaced out to keep Firefox responsive; status bar is back
- JEngine - Tile Map Editor v1: JEngine - Tile Map Editor V1: Description soon
- Jeremy Knight: SQL Padding Functions v1.0: The entire scripts, including "if exists" logic, for SQL Padding Functions are included in this download.
- jqGrid ASP.Net MVC Control: Version 1.1.0.0: UPDATE 14-04: Fixed a small problem with the custom column renderers controller, and added a new example for a cascading-dropdownlist grid column. A...
- JulMar MVVM Helpers + Behaviors: Version 1.06: This version is an update to MVVM Helpers that is built on Visual Studio 2010 RTM. It includes some minor updates to classes and a few new convert...
- lcskey: v 1.0: v1.0 basically runs; not tested in detail
- LINQ To Blippr: LINQ to Blippr: Download to test out and play around with LINQ to Blippr, based on blog posts: http://consultingblogs.emc.com/jonsharratt
- LINQ to XSD: 1.1.0: The LINQ to XSD technology provides .NET developers with support for typed XML programming. LINQ to XSD contributes to the LINQ project (.NET Langu...
- LINQ to XSD: 2.0.0: It is the same code as version 1.1 but compiled for .NET Framework 4.0. Requirements: .NET Framework 4.0.
- LocoSync: LocoSync v0.1r2010.04.12: Second alpha version of LocoSync. Download, unzip and run setup. It will download the .NET Framework if needed. It will create an icon in the start ...
- mojoPortal: 2.3.4.2: See the release note on mojoportal.com: http://www.mojoportal.com/mojoportal-2342-released.aspx
- NanoPrompt: Setup (.NET 4.0) - 20100414-A Nightly: The setup for NanoPrompt 0.Xa for Intel-80386- (32 or 64 bits) or Intel-Itanium-compatible targets with installed .NET-Framework 4.0 Client Profile...
- Neural Cryptography in F#: Neural Cryptography 0.0.5: This release provides the basic functionality that this project was supposed to have from the very beginning: it can hash strings using neural netw...
- NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Class Libraries, version 1.0.1.121: The NodeXL class libraries can be used to display network graphs in .NET applications. To include a NodeXL network graph in a WPF desktop or Windo...
- nRoute Framework: nRoute.Toolkit Release Version 0.4: Note, both "nRoute.Framework (x3)" and "nRoute.Toolkit (x3)" zip files contain binaries for Web, Desktop and Mobile targets. Also this release wa...
- Numina Application/Security Framework: Numina.Framework Core 50381: Rebuilt using .NET 4 RTM. One minor change made to the web.config file: added System.Data.Linq to the assemblies list.
- PokeIn Comet Ajax Library: PokeIn Lib and Sample: Great sample with useful comet ajax library! .Net 2.0. Note: It was very easy to build this project with Visual Studio 10 ;)
- Powershell Zip File Export/Import Cmdlet Module: PowershellZip 0.1.0.3: Powershell-Zip 0.1.0.3 contains the cmdlets Export-Zip and Import-Zip
- PowerSlim - Acceptance Testing for Enterprise Applications: PowerSlim 0.1: Just PowerSlim. http://vlasenko.org/2010/04/09/howto-setup-powerslim-step-by-step/
- RDA Collaboration Team Projects: SharePoint BPOS Logging Framework: RDA's SharePoint BPOS logging framework is a very lightweight WSP Builder project that provides the following items: a site feature that creates a...
- RecordLogon.vbs login script: LogonSearchGadget: This is the Windows Gadget companion for RecordLogon.
- RecordLogon.vbs login script: LogonSearchTool.hta: This is the HTA standalone script that runs inside of an IE window. The HTA is what presents the data that recordlogon.vbs creates. Please remember...
- RecordLogon.vbs login script: recordlogon.vbs: This is the main script that grabs the logon and computer information and dumps the info as text files to a defined folder share. Make sure to chec...
- Rensea Image Viewer: RIV 0.4.3: New release of RIV. Added many, many features! You would love it. You would need .NET Framework 4.0 to make it run. With separated RIV up-loader, to...
- SharePoint Site Configurator Feature: SharePoint Site Configurator V2.0: Updated for SharePoint 2010 and added quite a lot of new functions. Compatible with SP2010, MOSS and WSS 3.0
- Sharp Tests Ex: Sharp Tests Ex 1.0.0RC2: #TestsEx (Sharp Tests Extensions) is a set of extensible extensions. The main target is writing short assertions where the Visual...
- SQL Server Extended Properties Quick Editor: Release 1.6.2: What's new in 1.6.2: Fixed several errors in LinqToSQL generated classes, solved generating EntitySet members. It's highly recommended to download and...
- SSRS SDK for PHP: SugarCRM Sample for SSRSReport: The zip file contains a sample SugarCRM module that shows how the SSRS SDK for PHP can be used to add simple reporting capabilities to the SugarCRM...
- System Etheral™ - Developer: System Etheral Dev v1.00: Comes with a VERY basic text editor and the ability to shut down. Hopefully we will have a lot more stuff in version 1.01! But this is fine for now....
- Text to HTML: 0.4.2.0: Thanks to Martin Lemburg for reporting the errors and for his suggestions! Version changes: substitution of the German special characters:...
- TimeSheet Reporting Silverlight: v1.0 Source Code: Source code
- Twitter4You: Twitter 4 You - Version 1.0 (TESTER): Serial code: http://joeynl.blogspot.com/2010/04/test-version-of-t4yv1.html Thanks JoeyNL
- VCC: Latest build, v2.1.30413.0: Automatic drop of latest build
- VisioAutomation: VisioAutomation 2.5.1: Moved to Visual Studio 2010 (still using .NET Framework 3.5). Changes since 2.5.0: solution and projects are all based on Vi...

Most Popular Projects: Rawr; WBFS Manager; AJAX Control Toolkit; Microsoft SQL Server Product Samples: Database; Silverlight Toolkit; Windows Presentation Foundation (WPF); ASP.NET; Microsoft SQL Server Community & Samples; PHPExcel; Facebook Developer Toolkit

Most Active Projects: Rawr; AutoPoco; patterns & practices – Enterprise Library; GMap.NET - Great Maps for Windows Forms & Presentation; Farseer Physics Engine; NB_Store - Free DotNetNuke Ecommerce Catalog Module; BeanProxy; jQuery Library for SharePoint Web Services; BlogEngine.NET; Facebook Developer Toolkit

    Read the article

  • CodePlex Daily Summary for Thursday, May 13, 2010

CodePlex Daily Summary for Thursday, May 13, 2010

New Projects
- C# 4.0: VS 2010
- Collaborative Rich Text Edit Prototype Silverlight on Windows Azure: Working like Google Doc but based on Silverlight/Windows Azure. Multiple users can edit one rich text document simultaneously. This is a POC and my...
- Consulting Assigment: Consulting project for the Project Management course at UCR
- CS 105 MP Splitter: Simple program to help with splitting up MPs for grading.
- DipEngine - graphic engine written on Xen for XNA: DIPEngine makes it easier for developers to use it in your game. It's developed in C# and using Xen for XNA
- DotNetAgent: Agent Framework
- Extending C# editor - Outlining, classification: Extensions to the VS C# editor's current feature set. Outlining for switch, for(each), if etc. statements. Classification types for method parameters and...
- Floe IRC Client: Floe is an IRC client written entirely in C#, using the .NET 4.0 framework with WPF technology. It is inspired largely by both XChat and mIRC, and ...
- Gestalt.Net: Gestalt.Net is a simple to use library for application configuration. It allows for more flexibility, easier to use, and more powerful than the app...
- GIM.OnlineChess: An implementation of online long-term chess
- GoogleTrail: Create and export using the Google map to develop trek trails, and export those trails as Garmin Custom Map compatible maps
- Grassidi: What could Grassidi be?
- HodsAudio: Personal web based music library.
- Hospital Manage System: Free Hospital Manage System
- Html Reader: Html Reader makes it easier for WPF WebBrowser users to access the Document property, and retrieve information from HTML pages. You'll no longer ha...
- Intellitouch.BackOffice: This project is a control for a menu.
- Jank: Jank
- KooBoo Image Galery: It's a module for kooboo that implements an image gallery admin and view. Developed for kooboo 2.1.1.0
- MoonyDesk (windows desktop widgets): Windows desktop plugin based widgets system written on WPF
- MSNSharp Application with Video Conferencing Feature: In MSN World, users don't communicate directly with each other. I examined P2P open source video/audio conference systems. After researching, I found Vi...
- mysample: study sample site
- NoteFlyBot: An Intelligent Note Program is: a 'Post-it' note that accepts Natural Language commands to upload content to EMC Repositories; start workflow...
- OpenGraphNET: OpenGraphNET is a simple parser for the Open Graph protocol created by Facebook. More information on this can be found at opengraphprotocol.org.
- POCO - ADO.NET Entity Framework 4.0: The sample shows how to retrieve the data using POCO using EDM 4.0. Database used: Northwind (http://www.microsoft.com/downloads/details.aspx?Fa...
- sqlmx: sqlmx
- Upgrade Log Parser for SharePoint 2010: This is a simple Upgrade Log Parser for SharePoint 2010. It will add a node in the upgrade section of the central administration and deploy a pag...

New Releases
- AMP (Adamo Media Player): AMP (Adamo Media Player): First release, version 0.1.0. Installation instructions: unzip the package, run Setup.exe, follow setup instructions. File includes .NET Framework ...
- BeanProxy: BeanProxy 2.8: BeanProxy is a C# (.NET 3.5) library housing classes that facilitate unit testing. Any non-static, public interface/class/or abstract class can be...
- Begtostudy-Test: Test V1: Download to test
- CBM-Command: 2010-05-13: Release Notes - 2010-05-13. New features: Configuration Manager is complete. Changes: had to put CBM-Command on a big-time diet. Ran out of room on the...
- CS 105 MP Splitter: CS 105 MP Splitter: Uses SharpZipLib for unzipping (http://www.icsharpcode.net/OpenSource/SharpZipLib/). Uses ILMerge to merge it all into a single executable (http://w...
- DipEngine - graphic engine written on Xen for XNA: DipGUI: Provides simple controls for using it
- Event Scavenger: Collector service update - version 3.2.3: Added additional functionality to check if the host/machine is available on the network before trying to open the event log. Also added code to try to use ...
- ExcelExportLib: ExcelExportLib 1.4.0.0: Solution converted to Visual Studio 2010. Added support for formulas in cells; added support for cell naming; added support for cell comments...
- F# Project Extender: V0.9.1.0 (VS2008,VS2010): F# project extender for Visual Studio 2008 and Visual Studio 2010. What's new: now supports both Visual Studio 2010 and Visual Studio 2008; fixed...
- Floe IRC Client: Floe IRC Client 2010-05: Initial release. Note: The .NET Framework v4.0 is required to run this app. It can be downloaded here: http://www.microsoft.com/downloads/details....
- Fluent Assertions: Release 1.2.1: This is a small release with two improvements: You can now use the Enumeration() extension method on a Func<IEnumerable> before calling ShouldThrow<T...
- Gestalt.Net: Gestalt.Net 1.0: Initial release of Gestalt.Net, a simple to use library for application configuration. It allows for more flexibility, easier to use, and more powe...
- GPdotNET - Tree Based Genetic Programming Tool: GPdotNETv0.9: This is the same version as the previous one, but compiled with Visual Studio 2010 and .NET 4.0. You no longer need the CTP of ParallelFX.
- HKGolden Express: HKGoldenExpress (Build 201005122025): New features: (None). Bug fix: (None). Improvements: HKGolden Express now parses JSON data rather than an XML document from the HKGolden API. This shoul...
- Jobping Url Shortener: Deploy Code 0.3: Contains only the files for running version 0.3 of the code.
- Jobping Url Shortener: Source Code 0.3: Feature added: restriction placed on the domains that the shortener will shorten. Our installation will only shorten www.jobping.com urls. Bug fix...
- KooBoo Image Galery: Beta 1: This is the first release and it is divided into some parts: Module Install - is the admin module and a default view; Plugin - returns the gallery as...
- LinkedIn® for Windows Mobile: LinkedIn for Windows Mobile v0.8.5: Changes in release 0.8.5: allow for multiple update types; add search by company; show profiles of all people in an update.
- MDownloader: MDownloader-0.15.13.58780: Fixed Uploading.com dead links detection; fixed links detection at pages and in FilesTube results; fixed RSS channel checking; enabled by def...
- Microsoft - Domain Oriented N-Layered .NET 4.0 App Sample (Microsoft Spain): V0.8 - N-Layer DDD Sample App (VS.2010 RTM compat): Required software (Microsoft base software needed for the development environment): Visual Studio 2010 RTM & .NET 4.0 RTM (final versions), Unity Applic...
- MoonyDesk (windows desktop widgets): MoonyDesk: MoonyDesk alpha release
- MSNSharp Application with Video Conferencing Feature: MSNVideoChat: The MSN Video Chat application's exe and screenshot are attached.
- NazTek.Extension.Clr35: NazTek.Extension.Clr35 Binary: FxCop compliant codebase
- NazTek.Extension.Clr35: NazTek.Extension.Clr35 Signed Binary: Signed binary
- NazTek.Extension.Clr35: NazTek.Extension.Clr35 Signed Source: Signed source
- NazTek.Extension.Clr35: NazTek.Extension.Clr35 Source: FxCop compliant codebase
- patterns & practices Web Client Developer Guidance: Web Client Software Factory 2010 Guide: If the right-side pane of the chm file is not displayed correctly, do the following: 1) Download the WCSF2010Guide.chm file. 2) Start the windows explo...
- Powershell Scripts for Admins: BizTalk PowerShell Module: BizTalk PowerShell Module. Available commands: Get-Applications, Start-Application, Stop-Application, Get-HostInstances, Start-HostInstance, Stop-Hos...
- Reusable Library: V1.0.8: A collection of reusable abstractions for enterprise application developers
- Reusable Library Demo: V1.0.6: A demonstration of reusable abstractions for enterprise application developers
- Robot Shootans: Robot Shootans 0.9: This is the second public release of the game. All instructions for play are in the game itself. Known issues: bullet collision is still a little...
- Rx Contrib: V1.2: Bug fix; API changes: public static ReactiveQueue<T> CreateConcurrentQueue( ConcurrentPublicationBehavior concurrentPublicationBeha...
- SqlDiffFramework-A Visual Differencing Engine for Dissimilar Data Sources: SqlDiffFramework 1.0.1.0: Maintenance release. An embedded user control inadvertently assumed US regional and language settings; with non-US settings the application crashed ...
- Transcriber: Transcriber V0.5.0: Pre-release
- Upgrade Log Parser for SharePoint 2010: Upgrade Log Parser for SharePoint 2010: Introduction: I did an upgrade from MOSS 2007 to SharePoint 2010, and SP2010 generated such a large log file that I really didn't feel like scrolling ...
- VCC: Latest build, v2.1.30512.0: Automatic drop of latest build
- Visual Leak Detector for Visual C++ 2008/2010: v2.0a: Renamed vld dll files. Problem with the MSVC 2010 Unicode library fixed.
- VsTortoise - a TortoiseSVN add-in for Microsoft Visual Studio: VsTortoise Build 24 Beta: Build 24 (beta). New: added "Open Modified File..." to Solution Explorer context menus. VsTortoise.SolutionExplorerSelectedItemsOpenModifiedFile com...
- WatchersNET.TagCloud: WatchersNET.TagCloud 01.05.00: What's new: Custom Tags: tag name and tag URL can be localized; new option to set the start date to grab search referrals from; new option to choose th...
- Wavelet analysis: Wavelet analysis: First public version of the project.
- XNA Shooter Engine: ModelViewer Alpha 1 Preview: ModelViewer is a utility for previewing HLSL shaders and XNA BasicEffects using the XSE rendering engine. It can also be used with the Microsoft XN...
- Yet another developer blog - Examples: XML and XSLT transformation in ASP.NET MVC example: This sample application shows how to perform XSL transformation of an XML file in ASP.NET MVC (using dedicated HtmlHelper extensions or an ActionResult)....

Most Popular Projects: WBFS Manager; Rawr; AJAX Control Toolkit; Microsoft SQL Server Product Samples: Database; Silverlight Toolkit; Windows Presentation Foundation (WPF); patterns & practices – Enterprise Library; Microsoft SQL Server Community & Samples; PHPExcel; ASP.NET

Most Active Projects: patterns & practices – Enterprise Library; Mirror Testing System; Rawr; BlogEngine.NET; PHPExcel; white; jQuery Library for SharePoint Web Services; Microsoft Biology Foundation; Farseer Physics Engine; Shake - C# Make

    Read the article

  • How add loading image using jquery?

    - by user244394
I'm working on a form, <form id="myform" class="form"> </form>, that gets submitted to the server using jQuery ajax. How can I refresh the form on success to show the updated form information, and add a spinner until the form loads? Here is my HTML and jQuery snippet:

<div class="container">
  <div class="page-header">
    <div class="span2">
      <!--Sidebar content-->
      <img src="img/emc_logo.png" title="EMC" >
    </div>
    <div class="span6">
      <h2 class="form-wizard-heading">Configuration</h2>
    </div>
  </div>
  <form id="myform" class="form"> </form>
</div> <!-- /container -->

<!-- Modal -->
<div id="myModal" class="modal hide fade" tabindex="-1" role="dialog" aria-labelledby="myModalLabel" aria-hidden="true">
  <div class="modal-header">
    <button type="button" class="close" data-dismiss="modal" aria-hidden="true">&times;</button>
    <h3 id="myModalLabel">Configuration Changes</h3>
    <p><span class="label label-important">Please review your changes before submitting. Submitting the changes will result in rebooting the cluster</span></p>
  </div>
  <div class="modal-body">
    <table class="table table-condensed table-striped" id="display"></table>
  </div>
  <div class="modal-footer">
    <button id="cancel" class="btn" data-dismiss="modal" aria-hidden="true">Close</button>
    <button id="save" class="btn btn-primary">Save changes</button>
  </div>
</div>

//Jquery part
$(document).ready(function () {
  $('input').hover(function () {
    $(this).popover('show')
  });
  // On mouseout destroy popout
  $('input').mouseout(function () {
    $(this).popover('destroy')
  });
  $('#myform').on('submit', function (ev) {
    ev.preventDefault();
    var data = $(this).serializeObject();
    json_data = JSON.stringify(data);
    $('#myModal').modal('show');
    $.each(data, function (key, val) {
      var tablefeed = $('<tr><td>' + key + '</td><td id="' + key + '">' + val + '</td><tr>').appendTo('#display');
    });
    $(".modal-body").html(tablefeed);
  });
  $("#cancel").click(function () {
    $("#display").empty();
  });
  $(function () {
    $("#save").click(function () {
      // validate and process form here
      alert("button submitted" + json_data);
      $.ajax({
        type: "POST",
        url: "somefile.json.",
        data: json_data,
        contentType: 'application/json',
        success: function (data, textStatus, xhr) {
          console.log(arguments);
          console.log(xhr.status);
          alert("Your form has been submitted: " + textStatus + xhr.status);
        },
        error: function (jqXHR, textStatus, errorThrown) {
          alert(jqXHR.responseText + " - " + errorThrown + " : " + jqXHR.status);
        }
      });
    });
  });
});
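Not part of the original question, but a minimal sketch of one way to do what is being asked, assuming a #spinner element (for example an animated GIF) exists in the page; the selector names and the reload-via-load() approach are illustrative assumptions, not the poster's code:

// Show a spinner for the duration of the ajax call, then refresh the form on success
$("#save").click(function () {
  $('#spinner').show();
  $.ajax({
    type: "POST",
    url: "somefile.json",
    data: json_data,
    contentType: 'application/json',
    success: function (data, textStatus, xhr) {
      // Reload just the form fragment so the page shows the updated values
      $('#myform').load(window.location.href + ' #myform > *');
    },
    complete: function () {
      $('#spinner').hide(); // runs after both success and error
    }
  });
});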

    Read the article

  • Getting text position while parsing pdf with Quartz 2D

    - by Koteg
Hi guys, another question regarding pdf parsing... Just read PDF Reference version 1.7, "5.3.1 Text-Positioning Operators", and I am a little bit confused. I wrote some code to get the transformation matrix and the initial text position.

CGPDFOperatorTableSetCallback (table, "MP", &op_MP);   //Define marked-content point
CGPDFOperatorTableSetCallback (table, "DP", &op_DP);   //Define marked-content point with property list
CGPDFOperatorTableSetCallback (table, "BMC", &op_BMC); //Begin marked-content sequence
CGPDFOperatorTableSetCallback (table, "BDC", &op_BDC); //Begin marked-content sequence with property list
CGPDFOperatorTableSetCallback (table, "EMC", &op_EMC); //End marked-content sequence

//Text state operators
CGPDFOperatorTableSetCallback(table, "Tc", &op_Tc);
CGPDFOperatorTableSetCallback(table, "Tw", &op_Tw);
CGPDFOperatorTableSetCallback(table, "Tz", &op_Tz);
CGPDFOperatorTableSetCallback(table, "TL", &op_TL);
CGPDFOperatorTableSetCallback(table, "Tf", &op_Tf);
CGPDFOperatorTableSetCallback(table, "Tr", &op_Tr);
CGPDFOperatorTableSetCallback(table, "Ts", &op_Ts);

//Text showing operators
CGPDFOperatorTableSetCallback(table, "TJ", &op_TJ);
CGPDFOperatorTableSetCallback(table, "Tj", &op_Tj);
CGPDFOperatorTableSetCallback(table, "'", &op_apostrof);
CGPDFOperatorTableSetCallback(table, "\"", &op_double_apostrof);

//Text positioning operators
CGPDFOperatorTableSetCallback(table, "Td", &op_Td);
CGPDFOperatorTableSetCallback(table, "TD", &op_TD);
CGPDFOperatorTableSetCallback(table, "Tm", &op_Tm);
CGPDFOperatorTableSetCallback(table, "T*", &op_T);

//Text object operators
CGPDFOperatorTableSetCallback(table, "BT", &op_BT); //Begin text object
CGPDFOperatorTableSetCallback(table, "ET", &op_ET); //End text object

So this is the output after application launch:

2010-09-02 15:09:23.041 testSearch[8251:207] op_BT begin Integer value: 0
2010-09-02 15:09:23.043 testSearch[8251:207] op_BT end
2010-09-02 15:09:23.043 testSearch[8251:207] op_Tf begin Integer value: 1
2010-09-02 15:09:23.044 testSearch[8251:207] op_Tf end
2010-09-02 15:09:23.044 testSearch[8251:207] op_Tm begin Float value: 557.364197
2010-09-02 15:09:23.045 testSearch[8251:207] op_Tm end
2010-09-02 15:09:23.045 testSearch[8251:207] op_TJ begin
2010-09-02 15:09:23.046 testSearch[8251:207] Array string value [0]: F
2010-09-02 15:09:23.046 testSearch[8251:207] Array integer value [1]: 94985208
2010-09-02 15:09:23.047 testSearch[8251:207] Array string value [2]: r
2010-09-02 15:09:23.047 testSearch[8251:207] Array integer value [3]: 94985208
2010-09-02 15:09:23.048 testSearch[8251:207] Array string value [4]: o
2010-09-02 15:09:23.048 testSearch[8251:207] Array integer value [5]: 94985208
2010-09-02 15:09:23.049 testSearch[8251:207] Array string value [6]: m s
2010-09-02 15:09:23.049 testSearch[8251:207] Array integer value [7]: 94985208
2010-09-02 15:09:23.049 testSearch[8251:207] Array string value [8]: a
2010-09-02 15:09:23.050 testSearch[8251:207] Array integer value [9]: 94985208
2010-09-02 15:09:23.050 testSearch[8251:207] Array string value [10]: m
2010-09-02 15:09:23.051 testSearch[8251:207] Array integer value [11]: 94985208
2010-09-02 15:09:23.051 testSearch[8251:207] Array string value [12]: p
2010-09-02 15:09:23.052 testSearch[8251:207] Array integer value [13]: 94985208
2010-09-02 15:09:23.053 testSearch[8251:207] Array string value [14]: l
2010-09-02 15:09:23.054 testSearch[8251:207] Array integer value [15]: 94985208
2010-09-02 15:09:23.055 testSearch[8251:207] Array string value [16]: e t
2010-09-02 15:09:23.055 testSearch[8251:207] Array integer value [17]: 94985208
2010-09-02 15:09:23.057 testSearch[8251:207] Array string value [18]: o r
2010-09-02 15:09:23.057 testSearch[8251:207] Array integer value [19]: 94985208
2010-09-02 15:09:23.058 testSearch[8251:207] Array string value [20]: e
2010-09-02 15:09:23.058 testSearch[8251:207] Array integer value [21]: 94985208
2010-09-02 15:09:23.059 testSearch[8251:207] Array string value [22]: s
2010-09-02 15:09:23.059 testSearch[8251:207] Array integer value [23]: 94985208
2010-09-02 15:09:23.060 testSearch[8251:207] Array string value [24]: u
2010-09-02 15:09:23.061 testSearch[8251:207] Array integer value [25]: 94985208
2010-09-02 15:09:23.061 testSearch[8251:207] Array string value [26]: l
2010-09-02 15:09:23.062 testSearch[8251:207] Array integer value [27]: 94985208
2010-09-02 15:09:23.062 testSearch[8251:207] Array string value [28]: t
2010-09-02 15:09:23.063 testSearch[8251:207] op_TJ end

If someone is familiar with the text matrix and the text-positioning operators, it would be nice to explain how all those things work. How to calculate the text position (or glyph?) using Tm (the transformation matrix and other data)?
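This is not from the original thread, but the positioning rules from the PDF Reference (version 1.7, sections 5.3.1 and 5.3.3) can be sketched as follows. The Tm operator takes six operands a b c d e f and sets both the text matrix and the text line matrix, so e and f are the X/Y origin of the line in unscaled text space (the 557.364197 logged above is one of those operands):

T_m = T_{lm} = \begin{pmatrix} a & b & 0 \\ c & d & 0 \\ e & f & 1 \end{pmatrix}

Each glyph is then placed using the text rendering matrix, which combines the font size T_{fs} (from Tf), the horizontal scaling T_h (Tz/100), the text rise T_{rise} (Ts), the current text matrix and the current transformation matrix:

T_{rm} = \begin{pmatrix} T_{fs} T_h & 0 & 0 \\ 0 & T_{fs} & 0 \\ 0 & T_{rise} & 1 \end{pmatrix} \times T_m \times CTM

After a glyph with horizontal displacement w_0 (its width divided by 1000) is shown, T_m is translated horizontally by

t_x = \left( \left( w_0 - \frac{T_j}{1000} \right) T_{fs} + T_c + T_w \right) T_h

where T_j is the number taken from a TJ array (the integer values logged above), T_c is the character spacing (Tc) and T_w the word spacing (Tw). So to track glyph positions, accumulate these translations into T_m inside the Tj/TJ callbacks and multiply by the CTM to obtain device-space coordinates.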

    Read the article

  • Python regex to parse text file, get the items in list and count the list

    - by Nemo
I have a text file which contains some data. I'm particularly interested in finding the count of the number of items in v_dims. The v_dims pattern in my text file looks like this:

v_dims={
  "Sales", "Product Family", "Sales Organization", "Region", "Sales Area",
  "Sales office", "Sales Division", "Sales Person", "Sales Channel",
  "Sales Order Type", "Sales Number", "Sales Person", "Sales Quantity", "Sales Amount"
}

So I'm thinking of getting all the elements in v_dims and dumping them into a Python list, then computing len(mylist) to get the count of the items. The challenge is in getting all the elements of v_dims from my text file and putting them in an empty list. I'm particularly interested in the items in v_dims in my text file; the text file has data in the form of the v_dims pattern I showed in my original post, and some data has nested patterns of v_dims. Thanks. Here's what I have tried, and failed. Any help is appreciated. TIA.

import re
fname = "C:\Users\XXXX\Test.mrk"
with open(fname, "r") as fo:
    content_as_string = fo.read()
match = re.findall(r'v_dims={\"(.+?)\"}', content_as_string)

Though I have a big text file, here's a snippet of the structure of my text file: version "1"; // Computer generated object language file object 'MRKR' "Main" { Data_Type=2, HeaderBlock={ Version_String="6.3 (25)" }, Printer_Info={ Orientation=0, Page_Width=8.50000000, Page_Height=11.00000000, Page_Header="", Page_Footer="", Margin_type=0, Top_Margin=0.50000000, Left_Margin=0.50000000, Bottom_Margin=0.50000000, Right_Margin=0.50000000 }, Marker_Options={ Close_All="TRUE", Hide_Console="FALSE", Console_Left="FALSE", Console_Width=217, Main_Style="Maximized", MDI_Rect={ 0, 0, 892, 1063 } }, Dives={ { Dive="A", Windows={ { View_Index=0, Window_Info={ Window_Rect={ 0, -288, 400, 1008 }, Window_Style="Maximized Front", Window_Name="Theater [Previous Qtr Diveplan-Dive A]" }, Dependent_bool="FALSE", Colset={ Dive_Type="Normal", Dimension_Name="Theater", Action_List={ Actions={ { Action_Type="Select", select_type=5 }, { Action_Type="Select", select_type=0, Key_Names={ "Theater" }, Key_Indexes={ { "AMERICAS" } } }, { Action_Type="Focus", Focus_Rows="True" }, { Action_Type="Dimensions", v_dims={ "Theater", "Product Family", "Division", "Region", "Install at Country Name", "Connect Home Type", "Connect In Type", "SymmConnect Enabled", "Connect Home Refusal Reason", "Sales Order Channel Type", "Maintained By Group", "PS Flag", "Avalanche Flag", "Product Item Family" }, Xtab_Bool="False", Xtab_Flip="False" }, { Action_Type="Select", select_type=5 }, { Action_Type="Select", select_type=0, Key_Names={ "Theater", "Product Family", "Division", "Region", "Install at Country Name", "Connect Home Type", "Connect In Type", "SymmConnect Enabled", "Connect Home Refusal Reason", "Sales Order Channel Type", "Maintained By Group", "PS Flag", "Avalanche Flag" }, Key_Indexes={ { "AMERICAS", "ATMOS", "Latin America CS Division", "37000 CS Region", "Mexico", "", "", "", "", "DIRECT", "EMC", "N", "0" } } } } }, Num_Palette_cols=0, Num_Palette_rows=0 }, Format={ Window_Type="Tabular", Tabular={ Num_row_labels=8 } } } } } }, Widget_Set={ Widget_Layout="Vertical", Go_Button=1, Picklist_Width=0, Sort_Subset_Dimensions="TRUE", Order={ } }, Views={ { Data_Type=1, dbname="Previous Qtr Diveplan", diveline_dbname="Current Qtr Diveplan", logical_name="Current Qtr Diveplan", cols={ { name="Total TSS installs", column_type="Calc[Total TSS installs]", output_type="Number", format_string="."
}, { name="TSS Valid Connectivity Records", column_type="Calc[TSS Valid Connectivity Records]", output_type="Number", format_string="." }, { name="% TSS Connectivity Record", column_type="Calc[% TSS Connectivity Record]", output_type="Number" }, { name="TSS Not Applicable", column_type="Calc[TSS Not Applicable]", output_type="Number", format_string="." }, { name="TSS Customer Refusals", column_type="Calc[TSS Customer Refusals]", output_type="Number", format_string="." }, { name="% TSS Refusals", column_type="Calc[% TSS Refusals]", output_type="Number" }, { name="TSS Eligible for Physical Connectivity", column_type="Calc[TSS Eligible for Physical Connectivity]", output_type="Number", format_string="." }, { name="TSS Boxes with Physical Connectivty", column_type="Calc[TSS Boxes with Physical Connectivty]", output_type="Number", format_string="." }, { name="% TSS Physical Connectivity", column_type="Calc[% TSS Physical Connectivity]", output_type="Number" } }, dim_cols={ { name="Model", column_type="Dimension[Model]", output_type="None" }, { name="Model", column_type="Dimension[Model]", output_type="None" }, { name="Connect In Type", column_type="Dimension[Connect In Type]", output_type="None" }, { name="Connect Home Type", column_type="Dimension[Connect Home Type]", output_type="None" }, { name="SymmConnect Enabled", column_type="Dimension[SymmConnect Enabled]", output_type="None" }, { name="Theater", column_type="Dimension[Theater]", output_type="None" }, { name="Division", column_type="Dimension[Division]", output_type="None" }, { name="Region", column_type="Dimension[Region]", output_type="None" }, { name="Sales Order Number", column_type="Dimension[Sales Order Number]", output_type="None" }, { name="Product Item Family", column_type="Dimension[Product Item Family]", output_type="None" }, { name="Item Serial Number", column_type="Dimension[Item Serial Number]", output_type="None" }, { name="Sales Order Deal Number", column_type="Dimension[Sales Order Deal Number]", output_type="None" }, { name="Item Install Date", column_type="Dimension[Item Install Date]", output_type="None" }, { name="SYR Last Dial Home Date", column_type="Dimension[SYR Last Dial Home Date]", output_type="None" }, { name="Maintained By Group", column_type="Dimension[Maintained By Group]", output_type="None" }, { name="PS Flag", column_type="Dimension[PS Flag]", output_type="None" }, { name="Connect Home Refusal Reason", column_type="Dimension[Connect Home Refusal Reason]", output_type="None", col_width=177 }, { name="Cust Name", column_type="Dimension[Cust Name]", output_type="None" }, { name="Sales Order Channel Type", column_type="Dimension[Sales Order Channel Type]", output_type="None" }, { name="Sales Order Type", column_type="Dimension[Sales Order Type]", output_type="None" }, { name="Part Model Key", column_type="Dimension[Part Model Key]", output_type="None" }, { name="Ship Date", column_type="Dimension[Ship Date]", output_type="None" }, { name="Model Number", column_type="Dimension[Model Number]", output_type="None" }, { name="Item Description", column_type="Dimension[Item Description]", output_type="None" }, { name="Customer Classification", column_type="Dimension[Customer Classification]", output_type="None" }, { name="CS Customer Name", column_type="Dimension[CS Customer Name]", output_type="None" }, { name="Install At Customer Number", column_type="Dimension[Install At Customer Number]", output_type="None" }, { name="Install at Country Name", column_type="Dimension[Install at Country Name]", 
output_type="None" }, { name="TLA Serial Number", column_type="Dimension[TLA Serial Number]", output_type="None" }, { name="Product Version", column_type="Dimension[Product Version]", output_type="None" }, { name="Avalanche Flag", column_type="Dimension[Avalanche Flag]", output_type="None" }, { name="Product Family", column_type="Dimension[Product Family]", output_type="None" }, { name="Project Number", column_type="Dimension[Project Number]", output_type="None" }, { name="PROJECT_STATUS", column_type="Dimension[PROJECT_STATUS]", output_type="None" } }, Available_Columns={ "Total TSS installs", "TSS Valid Connectivity Records", "% TSS Connectivity Record", "TSS Not Applicable", "TSS Customer Refusals", "% TSS Refusals", "TSS Eligible for Physical Connectivity", "TSS Boxes with Physical Connectivty", "% TSS Physical Connectivity", "Total Installs", "All Boxes with Valid Connectivty Record", "% All Connectivity Record", "Overall Refusals", "Overall Refusals %", "All Eligible for Physical Connectivty", "Boxes with Physical Connectivity", "% All with Physical Conectivity" }, Remaining_columns={ { name="Total Installs", column_type="Calc[Total Installs]", output_type="Number", format_string="." }, { name="All Boxes with Valid Connectivty Record", column_type="Calc[All Boxes with Valid Connectivty Record]", output_type="Number", format_string="." }, { name="% All Connectivity Record", column_type="Calc[% All Connectivity Record]", output_type="Number" }, { name="Overall Refusals", column_type="Calc[Overall Refusals]", output_type="Number", format_string="." }, { name="Overall Refusals %", column_type="Calc[Overall Refusals %]", output_type="Number" }, { name="All Eligible for Physical Connectivty", column_type="Calc[All Eligible for Physical Connectivty]", output_type="Number" }, { name="Boxes with Physical Connectivity", column_type="Calc[Boxes with Physical Connectivity]", output_type="Number" }, { name="% All with Physical Conectivity", column_type="Calc[% All with Physical Conectivity]", output_type="Number" } }, calcs={ { name="Total TSS installs", definition="Total[Total TSS installs]", ts_flag="Not TS Calc" }, { name="TSS Valid Connectivity Records", definition="Total[PS Boxes w/ valid connectivity record (1=yes)]", ts_flag="Not TS Calc" }, { name="% TSS Connectivity Record", definition="Total[PS Boxes w/ valid connectivity record (1=yes)] /Total[Total TSS installs]", ts_flag="Not TS Calc" }, { name="TSS Not Applicable", definition="Total[Bozes w/ valid connectivity record (1=yes)]-Total[Boxes Eligible (1=yes)]-Total[TSS Refusals]", ts_flag="Not TS Calc" }, { name="TSS Customer Refusals", definition="Total[TSS Refusals]", ts_flag="Not TS Calc" }, { name="% TSS Refusals", definition="Total[TSS Refusals]/Total[PS Boxes w/ valid connectivity record (1=yes)]", ts_flag="Not TS Calc" }, { name="TSS Eligible for Physical Connectivity", definition="Total[TSS Eligible]-Total[Exception]", ts_flag="Not TS Calc" }, { name="TSS Boxes with Physical Connectivty", definition="Total[PS Physical Connectivity] - Total[PS Physical Connectivity, SymmConnect Enabled=\"Capable not enabled\"]", ts_flag="Not TS Calc" }, { name="% TSS Physical Connectivity", definition="Total[Boxes w/ phys conn]/Total[Boxes Eligible (1=yes)]", ts_flag="Not TS Calc" }, { name="Total Installs", definition="Total[Total Installs]", ts_flag="Not TS Calc" }, { name="All Boxes with Valid Connectivty Record", definition="Total[Bozes w/ valid connectivity record (1=yes)]", ts_flag="Not TS Calc" }, { name="% All Connectivity Record", 
definition="Total[Bozes w/ valid connectivity record (1=yes)]/Total[Total Installs]", ts_flag="Not TS Calc" }, { name="Overall Refusals", definition="Total[Overall Refusals]", ts_flag="Not TS Calc" }, { name="Overall Refusals %", definition="Total[Overall Refusals]/Total[Bozes w/ valid connectivity record (1=yes)]", ts_flag="Not TS Calc" }, { name="All Eligible for Physical Connectivty", definition="Total[Boxes Eligible (1=yes)]-Total[Exception]", ts_flag="Not TS Calc" }, { name="Boxes with Physical Connectivity", definition="Total[Boxes w/ phys conn]-Total[Boxes w/ phys conn,SymmConnect Enabled=\"Capable not enabled\"]", ts_flag="Not TS Calc" }, { name="% All with Physical Conectivity", definition="Total[Boxes w/ phys conn]/Total[Boxes Eligible (1=yes)]", ts_flag="Not TS Calc" } }, merge_type="consolidate", merge_dbs={ { dbname="connectivityallproducts.mdl", diveline_dbname="/DI_PSREPORTING/connectivityallproducts.mdl" } }, skip_constant_columns="FALSE", categories={ { name="Geography", dimensions={ "Theater", "Division", "Region", "Install at Country Name" } }, { name="Mappings and Flags", dimensions={ "Connect Home Type", "Connect In Type", "SymmConnect Enabled", "Connect Home Refusal Reason", "Sales Order Channel Type", "Maintained By Group", "Customer Installable", "PS Flag", "Top Level Flag", "Avalanche Flag" } }, { name="Product Information", dimensions={ "Product Family", "Product Item Family", "Product Version", "Item Description" } }, { name="Sales Order Info", dimensions={ "Sales Order Deal Number", "Sales Order Number", "Sales Order Type" } }, { name="Dates", dimensions={ "Item Install Date", "Ship Date", "SYR Last Dial Home Date" } }, { name="Details", dimensions={ "Item Serial Number", "TLA Serial Number", "Part Model Key", "Model Number" } }, { name="Customer Infor", dimensions={ "CS Customer Name", "Install At Customer Number", "Customer Classification", "Cust Name" } }, { name="Other Dimensions", dimensions={ "Model" } } }, Maintain_Category_Order="FALSE", popup_info="false" } } };

    Read the article

  • Dojo - How to position tooltip close to text?

    - by user244394
Like the title says, I want to be able to display the tooltip close to the text; currently it is displayed far away in the cell. To be noted: the tooltip positions correctly for large text, and only fails for small text. In Dojo, how can I position the tooltip close to the text? I have this bit of a code snippet that displays the tooltip in the grid cells. Screenshot attached. HTML:

<div class="some_app claro"></div>
...

com.c.widget.EnhancedGrid = function (aParent, options) {
  var grid, options;
  this.theParentApp = aParent;

  dojo.require("dojox.grid.EnhancedGrid");
  dojo.require("dojox.grid.enhanced.plugins.Menu");
  dojo.require("dojox.grid.enhanced.plugins.Selector");
  dojo.require("dojox.grid.enhanced.plugins.Pagination");
  dojo.require("dojo.store.Memory");
  dojo.require("dojo.data.ObjectStore");
  dojo.require("dojo._base.xhr");
  dojo.require("dojo.domReady!");
  dojo.require("dojo.date.locale");
  dojo.require("dojo._base.connect");
  dojo.require("dojox.data.JsonRestStore");
  dojo.require("dojo.data.ItemFileReadStore");
  dojo.require("dijit.Menu");
  dojo.require("dijit.MenuItem");
  dojo.require('dijit.MenuSeparator');
  dojo.require('dijit.CheckedMenuItem');
  dojo.require('dijit.Tooltip');
  dojo.require('dojo/query');
  dojo.require("dojox.data.QueryReadStore");

  // main initialization function
  this.init = function (options) {
    var me = this;
    // default options
    var defaultOptions = {
      widgetName: ' Enhancedgrid',
      render: true,              // immediately render the grid
      draggable: true,           // disables column dragging
      containerNode: false,      // the Node to hold the Grid (optional)
      mashupUrl: false,          // the URL of the mashup (required)
      rowsPerPage: 20,           // default number of items per page
      columns: false,            // columns (required)
      width: "100%",             // width of grid
      height: "100%",            // height of grid
      rowClass: function (rowData) {},
      onClick: function () {},
      headerMenu: false,         // adding a menu pop-up for the header
      selectedRegionMenu: false, // adding a menu pop-up for the rows
      menusObject: false,        // object to start up the menus using the plug-in
      sortInfo: false,           // the column default sort
      infiniteScrolling: false   // if true, the enhanced grid will have infinite scrolling
    };
    // merge user provided options
    me.options = jQuery.extend({}, defaultOptions, options);
    // check we have the minimum required options
    if (!me.options.mashupUrl) { throw ("You must supply a mashupUrl"); }
    if (!me.options.columns) { throw ("You must supply columns"); }
    // mark the columns for formatting based on their data type
    me.preProcessColumns();
    // create the contextual menu
    me.createMenu();
    // create the grid object and return
    me.createGrid();
  };

  // Loading the data into the grid.
  this.loadData = function () {
    var me = this;
    if (!me.options.infiniteScrolling) {
      var xhrArgs = {
        url: me.options.mashupUrl,
        handleAs: "json",
        load: function (data) {
          var store = new dojo.data.ItemFileReadStore({ data: { items: eval("data." + me.options.dataRoot) } });
          store.fetch({
            onComplete: function (items, request) {
              if (me.grid.selection !== null) { me.grid.selection.clear(); }
              me.grid.setStore(store);
            },
            onError: function (error) { me.onError(error); }
          });
        },
        error: function (error) { me.onError(error); }
      };
      dojo.xhrGet(xhrArgs);
    } else {
      dojo.declare('NotificationQueryReadStore', dojox.data.QueryReadStore, {
        // hacked -- override to map to the proper data structure from the mashup
        _xhrFetchHandler: function (data, request, fetchHandler, errorHandler) {
          // TODO: need to have error handling here when data has "error" data structure
          // remap the data object before it is processed by the super method
          var dataRoot = eval("data." + me.options.dataRoot);
          var dataTotal = eval("data." + me.options.dataTotal);
          data = { numRows: dataTotal, items: dataRoot };
          // call the super method to process the mapped data and set the rowcount for proper display
          this.inherited(arguments);
        }
      });
      var queryStore = new NotificationQueryReadStore({
        url: me.options.mashupUrl,
        urlPreventCache: true,
        requestMethod: "get",
        onError: function (error) { me.onError(error); }
      });
      me.grid.setStore(queryStore);
    }
  };

  this.preProcessColumns = function () {
    var me = this;
    var options = me.options;
    for (i = 0; i < this.options.columns.length; i++) {
      if (this.options.columns[i].formatter == null) {
        switch (this.options.columns[i].datatype) {
          case "string":
            this.options.columns[i].formatter = me.formatString;
            break;
          case "date":
            this.options.columns[i].formatter = me.formatDate;
            var todayDate = new Date();
            var gmtTime = c.util.Date.parseDate(todayDate.toString()).toString();
            var gmtval = gmtTime.substring(gmtTime.indexOf('GMT'), (gmtTime.indexOf('(') - 1));
            this.options.columns[i].name = this.options.columns[i].name + " (" + gmtval + ")";
        }
      }
      if (this.options.columns[i].sortDefault) {
        me.options.sortInfo = i + 1;
      }
    }
  };

  // create the GRID object using the supplied options
  this.createGrid = function () {
    var me = this;
    var options = me.options;
    // create a new grid
    this.grid = new dojox.grid.EnhancedGrid({
      width: options.width,
      height: options.height,
      query: { id: "*" },
      keepSelection: true,
      formatterScope: this,
      structure: options.columns,
      columnReordering: options.draggable,
      rowsPerPage: options.rowsPerPage,
      //sortInfo: options.sortInfo,
      plugins: {
        menus: options.menusObject,
        selector: { "row": "multi", "cell": "disabled" },
      },
      // allow the user to decide if a column is sortable by setting sortable = true / false
      canSort: function (col) {
        if (options.columns[Math.abs(col) - 1].sortable) return true;
        else return false;
      },
      // change the row colors depending on the severity column
      onStyleRow: function (row) {
        var grid = me.grid;
        var item = grid.getItem(row.index);
        if (item && options.rowClass(item)) {
          row.customClasses += " " + options.rowClass(item);
          if (grid.selection.selectedIndex == row.index) {
            row.customClasses += " dojoxGridRowSelected";
          }
          grid.focus.styleRow(row);
          grid.edit.styleRow(row);
        }
      },
      onCellMouseOver: function (e) {
        // var pos = dojo.position(this, true);
        // alert(pos);
        console.log(e.rowIndex + " cell node :" + e.cellNode.innerHTML);
        // var pos = dojo.position(this, true);
        console.log(" pos :" + e.pos);
        if (e.cellNode.innerHTML != "") {
          dijit.showTooltip(e.cellNode.innerHTML, e.cellNode);
        }
      },
      onCellMouseOut: function (e) {
        dijit.hideTooltip(e.cellNode);
      },
      onHeaderCellMouseOver: function (e) {
        if (e.cellNode.innerHTML != "") {
          dijit.showTooltip(e.cellNode.innerHTML, e.cellNode);
        }
      },
      onHeaderCellMouseOut: function (e) {
        dijit.hideTooltip(e.cellNode);
      },
    });

  // ADDED CODE FOR TOOLTIP
  var gridTooltip = new Tooltip({
    connectId: "grid1",
    selector: "td",
    position: ["above"],
    getContent: function (matchedNode) {
      var childNode = matchedNode.childNodes[0];
      if (childNode.nodeType == 1 && childNode.className == "user") {
        this.position = ["after"];
        this.open(childNode);
        return false;
      }
      if (matchedNode.className && matchedNode.className == "user") {
        this.position = ["after"];
      } else {
        this.position = ["above"];
      }
      return matchedNode.textContent;
    }
  });

  ...

  // Construct the grid
  this.buildGrid = function () {
    var datagrid = new com.emc.widget.EnhancedGrid(this, {
      Url: "/dge/api/-resultFormat=json&id=" + encodeURIComponent(idUrl),
      dataRoot: "Root.ATrail",
      height: '100%',
      columns: [
        { name: 'Time', field: 'Time', width: '20%', datatype: 'date', sortable: true, searchable: true, hidden: false },
        { name: 'Type', field: 'Type', width: '20%', datatype: 'string', sortable: true, searchable: true, hidden: false },
        { name: 'User ID', field: 'UserID', width: '20%', datatype: 'string', sortable: true, searchable: true, hidden: false }
      ]
    });
    this.grid = datagrid;
  };
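Not from the original thread, but one way to tackle this, sketched under assumptions: dijit.showTooltip anchors the tooltip to the node you pass it, so handing it the whole cell node makes the tooltip hug the cell border rather than the text, which is why short text looks "far away". Wrapping the cell text in an inline span and anchoring to that span should pull the tooltip in close; the class name and structure below are assumptions, and the third parameter of dijit.showTooltip is the preferred position list:

onCellMouseOver: function (e) {
  if (e.cellNode.innerHTML !== "") {
    // Anchor to an inline wrapper around the text instead of the full-width cell,
    // so the tooltip appears right next to short text as well
    var wrapper = e.cellNode.querySelector("span.cellText");
    if (!wrapper) {
      wrapper = document.createElement("span");
      wrapper.className = "cellText";
      wrapper.innerHTML = e.cellNode.innerHTML;
      e.cellNode.innerHTML = "";
      e.cellNode.appendChild(wrapper);
    }
    dijit.showTooltip(wrapper.innerHTML, wrapper, ["after", "above"]);
  }
},
onCellMouseOut: function (e) {
  dijit.hideTooltip(e.cellNode.querySelector("span.cellText") || e.cellNode);
}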

    Read the article

  • SQL Server 2012 - AlwaysOn

    - by Claus Jandausch
    Ich war nicht nur irritiert, ich war sogar regelrecht schockiert - und für einen kurzen Moment sprachlos (was nur selten der Fall ist). Gerade eben hatte mich jemand gefragt "Wann Oracle denn etwas Vergleichbares wie AlwaysOn bieten würde - und ob überhaupt?" War ich hier im falschen Film gelandet? Ich konnte nicht anders, als meinen Unmut kundzutun und zu erklären, dass die Fragestellung normalerweise anders herum läuft. Zugegeben - es mag vielleicht strittige Punkte geben im Vergleich zwischen Oracle und SQL Server - bei denen nicht unbedingt immer Oracle die Nase vorn haben muss - aber das Thema Clustering für Hochverfügbarkeit (HA), Disaster Recovery (DR) und Skalierbarkeit gehört mit Sicherheit nicht dazu. Dieses Erlebnis hakte ich am Nachgang als Einzelfall ab, der so nie wieder vorkommen würde. Bis ich kurz darauf eines Besseren belehrt wurde und genau die selbe Frage erneut zu hören bekam. Diesmal sogar im Exadata-Umfeld und einem Oracle Stretch Cluster. Einmal ist keinmal, doch zweimal ist einmal zu viel... Getreu diesem alten Motto war mir klar, dass man das so nicht länger stehen lassen konnte. Ich habe keine Ahnung, wie die Microsoft Marketing Abteilung es geschafft hat, unter dem AlwaysOn Brading eine innovative Technologie vermuten zu lassen - aber sie hat ihren Job scheinbar gut gemacht. Doch abgesehen von einem guten Marketing, stellt sich natürlich die Frage, was wirklich dahinter steckt und wie sich das Ganze mit Oracle vergleichen lässt - und ob überhaupt? Damit wären wir wieder bei der ursprünglichen Frage angelangt.  So viel zum Hintergrund dieses Blogbeitrags - von meiner Antwort handelt der restliche Blog. "Windows was the God ..." Um den wahren Unterschied zwischen Oracle und Microsoft verstehen zu können, muss man zunächst das bedeutendste Microsoft Dogma kennen. Es lässt sich schlicht und einfach auf den Punkt bringen: "Alles muss auf Windows basieren." Die Überschrift dieses Absatzes ist kein von mir erfundener Ausspruch, sondern ein Zitat. Konkret stammt es aus einem längeren Artikel von Kurt Eichenwald in der Vanity Fair aus dem August 2012. Er lautet Microsoft's Lost Decade und sei jedem ans Herz gelegt, der die "Microsoft-Maschinerie" unter Steve Ballmer und einige ihrer Kuriositäten besser verstehen möchte. "YOU TALKING TO ME?" Microsoft C.E.O. Steve Ballmer bei seiner Keynote auf der 2012 International Consumer Electronics Show in Las Vegas am 9. Januar   Manche Dinge in diesem Artikel mögen überspitzt dargestellt erscheinen - sind sie aber nicht. Vieles davon kannte ich bereits aus eigener Erfahrung und kann es nur bestätigen. Anderes hat sich mir erst so richtig erschlossen. Insbesondere die folgenden Passagen führten zum Aha-Erlebnis: “Windows was the god—everything had to work with Windows,” said Stone... “Every little thing you want to write has to build off of Windows (or other existing roducts),” one software engineer said. “It can be very confusing, …” Ich habe immer schon darauf hingewiesen, dass in einem SQL Server Failover Cluster die Microsoft Datenbank eigentlich nichts Nenneswertes zum Geschehen beiträgt, sondern sich voll und ganz auf das Windows Betriebssystem verlässt. Deshalb muss man auch die Windows Server Enterprise Edition installieren, soll ein Failover Cluster für den SQL Server eingerichtet werden. Denn hier werden die Cluster Services geliefert - nicht mit dem SQL Server. 
Er ist nur lediglich ein weiteres Server Produkt, für das Windows in Ausfallszenarien genutzt werden kann - so wie Microsoft Exchange beispielsweise, oder Microsoft SharePoint, oder irgendein anderes Server Produkt das auf Windows gehostet wird. Auch Oracle kann damit genutzt werden. Das Stichwort lautet hier: Oracle Failsafe. Nur - warum sollte man das tun, wenn gleichzeitig eine überlegene Technologie wie die Oracle Real Application Clusters (RAC) zur Verfügung steht, die dann auch keine Windows Enterprise Edition voraussetzen, da Oracle die eigene Clusterware liefert. Welche darüber hinaus für kürzere Failover-Zeiten sorgt, da diese Cluster-Technologie Datenbank-integriert ist und sich nicht auf "Dritte" verlässt. Wenn man sich also schon keine technischen Vorteile mit einem SQL Server Failover Cluster erkauft, sondern zusätzlich noch versteckte Lizenzkosten durch die Lizenzierung der Windows Server Enterprise Edition einhandelt, warum hat Microsoft dann in den vergangenen Jahren seit SQL Server 2000 nicht ebenfalls an einer neuen und innovativen Lösung gearbeitet, die mit Oracle RAC mithalten kann? Entwickler hat Microsoft genügend? Am Geld kann es auch nicht liegen? Lesen Sie einfach noch einmal die beiden obenstehenden Zitate und sie werden den Grund verstehen. Anders lässt es sich ja auch gar nicht mehr erklären, dass AlwaysOn aus zwei unterschiedlichen Technologien besteht, die beide jedoch wiederum auf dem Windows Server Failover Clustering (WSFC) basieren. Denn daraus ergeben sich klare Nachteile - aber dazu später mehr. Um AlwaysOn zu verstehen, sollte man sich zunächst kurz in Erinnerung rufen, was Microsoft bisher an HA/DR (High Availability/Desaster Recovery) Lösungen für SQL Server zur Verfügung gestellt hat. Replikation Basiert auf logischer Replikation und Pubisher/Subscriber Architektur Transactional Replication Merge Replication Snapshot Replication Microsoft's Replikation ist vergleichbar mit Oracle GoldenGate. Oracle GoldenGate stellt jedoch die umfassendere Technologie dar und bietet High Performance. Log Shipping Microsoft's Log Shipping stellt eine einfache Technologie dar, die vergleichbar ist mit Oracle Managed Recovery in Oracle Version 7. Das Log Shipping besitzt folgende Merkmale: Transaction Log Backups werden von Primary nach Secondary/ies geschickt Einarbeitung (z.B. Restore) auf jedem Secondary individuell Optionale dritte Server Instanz (Monitor Server) für Überwachung und Alarm Log Restore Unterbrechung möglich für Read-Only Modus (Secondary) Keine Unterstützung von Automatic Failover Database Mirroring Microsoft's Database Mirroring wurde verfügbar mit SQL Server 2005, sah aus wie Oracle Data Guard in Oracle 9i, war funktional jedoch nicht so umfassend. Für ein HA/DR Paar besteht eine 1:1 Beziehung, um die produktive Datenbank (Principle DB) abzusichern. Auf der Standby Datenbank (Mirrored DB) werden alle Insert-, Update- und Delete-Operationen nachgezogen. Modi Synchron (High-Safety Modus) Asynchron (High-Performance Modus) Automatic Failover Unterstützt im High-Safety Modus (synchron) Witness Server vorausgesetzt     Zur Frage der Kontinuität Es stellt sich die Frage, wie es um diesen Technologien nun im Zusammenhang mit SQL Server 2012 bestellt ist. Unter Fanfaren seinerzeit eingeführt, war Database Mirroring das erklärte Mittel der Wahl. 
Ich bin kein Produkt Manager bei Microsoft und kann hierzu nur meine Meinung äußern, aber zieht man den SQL AlwaysOn Team Blog heran, so sieht es nicht gut aus für das Database Mirroring - zumindest nicht langfristig. "Does AlwaysOn Availability Group replace Database Mirroring going forward?” “The short answer is we recommend that you migrate from the mirroring configuration or even mirroring and log shipping configuration to using Availability Group. Database Mirroring will still be available in the Denali release but will be phased out over subsequent releases. Log Shipping will continue to be available in future releases.” Damit wären wir endlich beim eigentlichen Thema angelangt. Was ist eine sogenannte Availability Group und was genau hat es mit der vielversprechend klingenden Bezeichnung AlwaysOn auf sich?   SQL Server 2012 - AlwaysOn Zwei HA-Features verstekcne sich hinter dem “AlwaysOn”-Branding. Einmal das AlwaysOn Failover Clustering aka SQL Server Failover Cluster Instances (FCI) - zum Anderen die AlwaysOn Availability Groups. Failover Cluster Instances (FCI) Entspricht ungefähr dem Stretch Cluster Konzept von Oracle Setzt auf Windows Server Failover Clustering (WSFC) auf Bietet HA auf Instanz-Ebene AlwaysOn Availability Groups (Verfügbarkeitsgruppen) Ähnlich der Idee von Consistency Groups, wie in Storage-Level Replikations-Software von z.B. EMC SRDF Abhängigkeiten zu Windows Server Failover Clustering (WSFC) Bietet HA auf Datenbank-Ebene   Hinweis: Verwechseln Sie nicht eine SQL Server Datenbank mit einer Oracle Datenbank. Und auch nicht eine Oracle Instanz mit einer SQL Server Instanz. Die gleichen Begriffe haben hier eine andere Bedeutung - nicht selten ein Grund, weshalb Oracle- und Microsoft DBAs schnell aneinander vorbei reden. Denken Sie bei einer SQL Server Datenbank eher an ein Oracle Schema, das kommt der Sache näher. So etwas wie die SQL Server Northwind Datenbank ist vergleichbar mit dem Oracle Scott Schema. Wenn Sie die genauen Unterschiede kennen möchten, finden Sie eine detaillierte Beschreibung in meinem Buch "Oracle10g Release 2 für Windows und .NET", erhältich bei Lehmanns, Amazon, etc.   Windows Server Failover Clustering (WSFC) Wie man sieht, basieren beide AlwaysOn Technologien wiederum auf dem Windows Server Failover Clustering (WSFC), um einerseits Hochverfügbarkeit auf Ebene der Instanz zu gewährleisten und andererseits auf der Datenbank-Ebene. Deshalb nun eine kurze Beschreibung der WSFC. Die WSFC sind ein mit dem Windows Betriebssystem geliefertes Infrastruktur-Feature, um HA für Server Anwendungen, wie Microsoft Exchange, SharePoint, SQL Server, etc. zu bieten. So wie jeder andere Cluster, besteht ein WSFC Cluster aus einer Gruppe unabhängiger Server, die zusammenarbeiten, um die Verfügbarkeit einer Applikation oder eines Service zu erhöhen. Falls ein Cluster-Knoten oder -Service ausfällt, kann der auf diesem Knoten bisher gehostete Service automatisch oder manuell auf einen anderen im Cluster verfügbaren Knoten transferriert werden - was allgemein als Failover bekannt ist. Unter SQL Server 2012 verwenden sowohl die AlwaysOn Avalability Groups, als auch die AlwaysOn Failover Cluster Instances die WSFC als Plattformtechnologie, um Komponenten als WSFC Cluster-Ressourcen zu registrieren. Verwandte Ressourcen werden in eine Ressource Group zusammengefasst, die in Abhängigkeit zu anderen WSFC Cluster-Ressourcen gebracht werden kann. 
Failover Cluster Instances (FCI)

A SQL Server Failover Cluster Instance (FCI) is a single SQL Server instance operated in a failover cluster consisting of several Windows Server Failover Clustering (WSFC) nodes, thus providing HA (high availability) at the instance level. Using a multi-subnet FCI, remote DR (disaster recovery) can be supported as well. A further option for remote DR is to run a database hosted under FCI in an availability group - more on this later.

FCI and the WSFC foundation

FCI, used for local high availability of instances, resembles the outdated architecture of a cold cluster (active-passive). Under SQL Server 2008 this technology was called SQL Server 2008 Failover Clustering and used the Windows Server Failover Cluster. In SQL Server 2012, Microsoft has gathered this base technology under the AlwaysOn label - but it remains the classic active-passive configuration. A failover proceeds as follows:

- Unless a hardware or system error has occurred, all dirty pages in the buffer cache are written to disk
- All corresponding SQL Server services in the resource group are stopped on the active node
- Ownership of the resource group is transferred to another node of the FCI
- The new owner of the resource group starts its SQL Server services
- Client application connection requests are automatically redirected to the new active node under the same virtual network name (VNN)

Depending on the time of the last checkpoint, the number of dirty pages in the buffer cache that still have to be written to disk can lead to unpredictably long failover times. To throttle this number, SQL Server 2012 has a new capability called indirect checkpoints, which resembles the Fast-Start MTTR Target feature of the Oracle database, available since Oracle9i.
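As a minimal sketch - the database name is hypothetical - indirect checkpoints are enabled per database via the new TARGET_RECOVERY_TIME option:

    -- Cap the recovery target at 60 seconds, throttling how many dirty
    -- pages may accumulate in the buffer cache between checkpoints
    ALTER DATABASE Sales SET TARGET_RECOVERY_TIME = 60 SECONDS;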
SQL Server Multi-Subnet Clustering

Conceptually, a SQL Server multi-subnet failover cluster corresponds to an Oracle RAC stretch cluster - but only at first glance. Unlike RAC, in a local SQL Server failover cluster only one node is active for a given database at any time. For data replication between geographically distant sites, Microsoft relies on third-party storage mirroring solutions.

The improvement a SQL Server 2012 implementation brings to this scenario is simply that a VLAN (virtual local area network) configuration is no longer required, as it used to be. The following diagram shows how this is handled with SQL Server 2012: in site A and site B, HA is provided by a local active-passive cluster in each case.

Particular attention must be paid to configuration and tuning here, otherwise completely unacceptable failover times result. The reason is that client-side downtime no longer depends on the pure failover time alone, but additionally on the duration of DNS replication between the DNS servers (remember, we are talking about multi-subnet clustering). How quickly the clients pick up the updated DNS information matters as well. Special settings for the node heartbeat, for HostRecordTTL (host record time-to-live), and for the intersite replication frequency of Active Directory Sites and Services become necessary:

- Default TTL in Windows Server 2008 R2: 20 minutes; recommended setting: 1 minute
- DNS update replication frequency in a Windows environment: 180 minutes; recommended setting: 15 minutes (the minimum value)

Looking at these values, one has to conclude that even an optimal configuration cannot meet the rigid HA and DR SLAs (service level agreements) of today's business-critical applications, for it implies a client-side failover experience of 16 minutes in total. An excerpt from the SQL Server 2012 online documentation:

Cons: If a cross-subnet failover occurs, the client recovery time could be 15 minutes or longer, depending on your HostRecordTTL setting and the setting of your cross-site DNS/AD replication schedule.

We have now reached the point in our considerations that explains why I used the "Windows was the God ..." quote earlier. The unconditional dependency on Windows increasingly becomes a problem, because it raises the complexity of a Microsoft-based solution instead of reducing it. And complexity is the last thing CIOs want these days. In fairness to SQL Server 2012 and AlwaysOn, failover times of this length are not a "must" but a "can". Yet even a "can" may have unforeseeable and costly consequences at the wrong moment. The unpredictability, in turn, is caused by the many components involved in the implementation and their dependencies - for example, three cluster solutions (two from Microsoft, one from a third party). However you turn it, there is no way around this fact - entirely independent of the duration of any downtime or failover.

In contrast to AlwaysOn and the stretch cluster variant presented here, a corresponding Oracle implementation avoids this kind of complexity caused by multiple dependencies. The difference is made by database-integrated mechanisms such as Fast Application Notification (FAN) and Fast Connection Failover (FCF). For Oracle MAA configurations (Maximum Availability Architecture), inter-site failover times in the range of seconds are not unusual. If you follow the link to the Oracle MAA, you will also find a number of customer case studies - another important differentiator from AlwaysOn, because the Oracle technology has already proven itself many times over in highly critical environments.

Availability Groups

The so-called availability groups are - besides FCI - the other building block of AlwaysOn.

Note: Before we take a closer look at them, recall once more that a SQL Server database does not mean the same thing as an Oracle database but rather corresponds to an Oracle schema; the SQL Server Northwind database, for example, is comparable to the Oracle Scott schema.

An availability group consists of a set of several user databases that are treated as one group in the event of a failover.
An availability group supports one set of primary databases (the primary replica) and one to four sets of corresponding secondary databases (secondary replicas).

Not every SQL Server database can be placed in an AlwaysOn availability group, however. SQL Server specialist Michael Otey lists the following requirements in his SQL Server Pro article:

- Availability groups must be created with user databases; system databases cannot be used
- The databases must be in read-write mode; read-only databases are not supported
- The databases in an availability group must be multi-user databases
- They must not use the AUTO_CLOSE feature
- They must use the full recovery model, and a full backup must exist
- A given database can belong to only one availability group, and that database must not be configured for database mirroring
- Microsoft also recommends that the directory path of a database be identical on the primary and the secondary servers

As you can see, availability groups are not suited to covering HA and DR completely. The distinction between the instance level (FCI) and the database level (availability groups) is highly significant. I was recently told that availability groups let you do without shared storage and thereby save costs. Fair enough - you can of course build an installation purely on availability groups without FCI, but you should then be aware of everything that is left unprotected, and of what that in turn means for disaster recovery (DR) and the SLAs (service level agreements). In short, in practice there is hardly a way around combining both AlwaysOn products, with all the complexity that entails.

Availability Groups and WSFC

AlwaysOn depends on Windows Server Failover Clustering (WSFC) to monitor and manage the current roles of the availability replicas in an availability group and to decide how a failover event affects them. The following diagram shows the relationship between availability groups and WSFC.

The availability mode is a property of each availability replica, so synchronous and asynchronous replicas can be mixed:

Availability mode:

- Asynchronous-commit mode: the primary replica commits transactions without waiting for the secondary
- Synchronous-commit mode: the primary replica waits for the commit of the secondary replica

Failover types:

- Automatic
- Manual
- Forced (with possible data loss)

Synchronous-commit mode allows a planned, manual failover without data loss as well as an automatic failover without data loss; asynchronous-commit mode allows only a forced, manual failover with possible data loss.

SQL Server knows no separate switchover concept as in Oracle Data Guard; all role transitions are referred to as failover. In fact, SQL Server does not support a switchover for asynchronous connections - there is only the forced failover with possible data loss. A capability similar to the switchover under Oracle Data Guard simply does not exist.
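To make the moving parts concrete, the following sketch creates an availability group with mixed availability modes. Node names, endpoints, and databases are hypothetical, the databases must satisfy the requirements listed above, and the database mirroring endpoints are assumed to already exist:

    -- Run on the instance that will host the primary replica (NODE1)
    CREATE AVAILABILITY GROUP AG_Sales
    FOR DATABASE Sales, Orders
    REPLICA ON
        N'NODE1' WITH (ENDPOINT_URL      = N'TCP://node1.corp.local:5022',
                       AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                       FAILOVER_MODE     = AUTOMATIC),
        N'NODE2' WITH (ENDPOINT_URL      = N'TCP://node2.corp.local:5022',
                       AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                       FAILOVER_MODE     = AUTOMATIC),
        -- asynchronous remote replica: manual (forced) failover only
        N'NODE3' WITH (ENDPOINT_URL      = N'TCP://node3.corp.local:5022',
                       AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
                       FAILOVER_MODE     = MANUAL);

    -- Run on each secondary instance to join it to the group
    -- ALTER AVAILABILITY GROUP AG_Sales JOIN;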
SQL Server FCI with Availability Groups

Besides availability groups, a second failover level can be set up by implementing SQL Server FCI (on shared storage) with WSFC. An availability replica can then be hosted either on a standalone instance or on an FCI instance. To be clear: the availability groups themselves require no shared storage. This combination can be used to provide local HA at the instance level and DR at the database level through availability groups. The following diagram shows this scenario.

Caution! This is not an equivalent of Oracle RAC plus Data Guard, even if the picture may convey that impression - all secondary nodes in the FCI are purely passive. There is also a further, serious limitation: SQL Server Failover Cluster Instances (FCI) do not support automatic AlwaysOn failover for availability groups. Any availability replica hosted under FCI can only be configured for manual failover.

Readable Secondary Replicas

One or more availability replicas in an availability group can be configured for read access while running as a secondary replica. This resembles Oracle Active Data Guard, but there are restrictions. All queries against the secondary database are automatically mapped to the snapshot isolation level, which is based on row versioning. Microsoft attempted to emulate Oracle's MVRC (multi-version read consistency) with it; in reality, SQL Server snapshot isolation is better compared with Oracle Flashback. The snapshot isolation level implementation is a feature bolted on after the fact, not an inherent part of the database kernel as in Oracle's case. (I will write another blog post on this shortly, when I look at the new SQL Server 2012 core licensing.)
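Read access is a per-replica property. A minimal sketch against the hypothetical group from above:

    -- Allow read-only connections on NODE3 whenever it acts as a secondary
    ALTER AVAILABILITY GROUP AG_Sales
    MODIFY REPLICA ON N'NODE3' WITH (
        SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));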
In practice, this mapping to the snapshot isolation level creates serious restrictions that you should be aware of before going into production:

- If an active transaction exists on the primary database at the moment a readable secondary replica is added to the availability group, the row versions on the corresponding secondary database will not be fully available right away. An active transaction on the primary replica must first complete (commit or rollback) and its transaction record must be processed on the secondary replica. Until then, the isolation level mapping on the secondary database is incomplete and queries are temporarily blocked. Microsoft puts it like this: "This is needed to guarantee that row versions are available on the secondary replica before executing the query under snapshot isolation as all isolation levels are implicitly mapped to snapshot isolation." (SQL Storage Engine Blog: AlwaysOn: I just enabled Readable Secondary but my query is blocked?) Fundamentally, this means an active readable replica cannot be added to the availability group without temporarily quiescing the primary replica.

- Because read operations are mapped to the snapshot isolation transaction level, the cleanup of ghost records on the primary replica can be blocked by transactions on one or more secondary replicas - for example, by a long-running query on the secondary replica. The cleanup is also blocked when the connection to the secondary replica drops or the data exchange is interrupted, and log truncation is prevented in this state as well. If this condition persists for a longer time, Microsoft recommends removing the secondary replica from the availability group - which poses a serious downtime problem.

- The read-only workload on the secondary replicas can block incoming DDL changes. Although the read operations hold no shared locks thanks to row versioning, they do take Sch-S locks (schema stability locks), which can block DDL changes applied by redo operations. If DDL is blocked by a concurrent read workload and the threshold of the 'recovery interval' setting (a SQL Server configuration option) is exceeded, SQL Server raises the event sqlserver.lock_redo_blocked, upon which Microsoft recommends killing the blocking readers - with no regard whatsoever for the availability of the application.

None of these restrictions exists with Oracle Active Data Guard.

Backups on Secondary Replicas

On secondary replicas, backups (BACKUP DATABASE via Transact-SQL) can only be created as copy-only backups of a full database, files, or filegroups. Incremental backups are not supported - a serious shortfall compared with the backup support for physical standbys under Oracle Data Guard. Note: a possible workaround via snapshots remains just that, a workaround. A further limitation compared with Oracle Data Guard is that a backup on a secondary replica cannot run if the replica cannot communicate with the primary replica; in addition, the secondary replica must be synchronized, or currently synchronizing, for the backup to be taken on it.
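As a sketch - database and path are hypothetical - the only form of BACKUP DATABASE a secondary replica accepts is copy-only:

    -- Works on a secondary replica: full copy-only backup
    BACKUP DATABASE Sales
        TO DISK = N'R:\Backup\Sales_copyonly.bak'
        WITH COPY_ONLY;

    -- Fails on a secondary replica: differential (incremental) backups
    -- are not supported there
    -- BACKUP DATABASE Sales TO DISK = N'R:\Backup\Sales_diff.bak'
    --     WITH DIFFERENTIAL;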
Comparing Microsoft AlwaysOn with the Oracle MAA

I come back to the question mentioned at the beginning, put to me several times - "When, if ever, will Oracle offer something comparable to AlwaysOn?" - and my brief irritation at it. If you have read this blog post up to this point, you now know the answer I gave. Not every point applies to everyone, nor was that the deeper purpose of my answer. If, for example, no multi-subnet setup is involved, all related criticism is moot for now - which does not mean it could not become relevant again tomorrow (never say never). Into some other pitfalls you will not necessarily step in a test environment, but only in live operation - all the more so if you are unaware of potential problems and run no dedicated tests. And anyone who wants to position AlwaysOn successfully will have no interest in pointing out possible weak spots and the proverbial devil in the details. That is not an accusation - it is only human. It is also understandable to concentrate first of all on "what works" and "what runs well" instead of on "what can lead to problems" or "what does not work". Who wants to be the killjoy? Speaking for myself, I would rather know about all possible limitations in advance than experience them painfully first-hand after a short spell in an intact world.

I am convinced you feel no differently. Below, therefore, is a summary of all the points I consider absolutely worth mentioning in comparison with the Oracle MAA (Maximum Availability Architecture), should you consider an evaluation of Microsoft AlwaysOn.

1. AlwaysOn is a complex technology

The SQL Server AlwaysOn stack is composed of three different technologies:

- Windows Server Failover Clustering (WSFC)
- SQL Server Failover Cluster Instances (FCI)
- SQL Server Availability Groups

Such a solution cannot be called seamless, as the many restrictions documented by Microsoft themselves attest. While earlier SQL Server versions were evolving toward HA/DR technologies of their own (such as database mirroring), Microsoft now recommends migrating away from them. Why this about-face? It does not lead to a consistent and robust portfolio of HA/DR technology for business-critical environments. Does the answer lie in my thesis that "Windows was the God ..." still applies, and that nobody wants to see the drawbacks of the all-too-tight coupling with Windows? Decide for yourself ...

2. Failover Cluster Instances - no RAC equivalent

The SQL Server and Windows Server clustering technology is still based on the outdated active-passive model and leads to a waste of system resources. Looking at only two nodes, the full added value of an active-active cluster (such as Real Application Clusters), which Oracle developed ten years ago, is not immediately apparent. But once you know the benefits of scaling by simply adding further cluster nodes, which then all work together as a single logical system, you understand what lies behind the motto "pay as you grow". An active-active cluster is also about high availability - and a failover completes faster than in an active-passive model - but it is about more than that. It should be noted here that Oracle 11g Standard Edition already includes the use of Oracle RAC on up to four sockets at no extra charge. If you want to run it on Windows, you do not need the Windows Server Enterprise Edition, since Oracle 11g ships its own clusterware; you get high availability plus scalability and can use the cheaper Windows Server Standard Edition.

3. SQL Server multi-subnet clustering - dependency on third-party storage mirroring

The SQL Server multi-subnet clustering architecture supports building a stretch cluster, but it is based on the active-passive model. The truly problematic part, however, is that protecting the database relies on third-party storage mirroring technology, with no integration between Windows Server Failover Clustering (WSFC) and the underlying mirroring technology. When a failover occurs at the instance level in the cluster, there is no coordination with a possible failover at the storage array level.
4. Availability groups - four, or really only two?

A primary replica allows up to four secondary replicas within an availability group, but only two of them in synchronous-commit mode. While this is an advantage over the strict 1:1 model of database mirroring, SQL Server 2012 still falls far behind Oracle Data Guard with its up to 30 direct standby destinations - and many more possible through cascading destinations. This also makes Oracle Active Data Guard suitable for providing reader-farm scalability to internet-based businesses; with AlwaysOn availability groups this is not possible.

5. Availability groups - no asynchronous switchover

Availability group technology is also positioned as a suitable instrument for administrative tasks such as upgrades or maintenance. You must be aware of one grave deficit, however: in asynchronous availability mode, the only role transition available is the forced failover with data loss! To avoid losing data during planned maintenance, you have to configure the synchronous availability mode, which in turn has serious consequences for WAN deployments. Following this thought to its conclusion, availability group technology cannot be used effectively for planned maintenance in such an environment.

6. Automatic failover - not always possible

Both SQL Server FCI and availability groups support automatic failover. But if you want to combine the two, the result will be no automatic failover: their interplay in a failover situation leads to race conditions, which is why this configuration no longer permits an automatic failover to a replica in an availability group. Here, once again, the deeper problem of AlwaysOn shows itself - a composite of different technologies with a dependency on Windows.

7. Problematic RTO (Recovery Time Objective)

Microsoft positions the SQL Server multi-subnet clustering architecture as a viable HA/DR architecture. Considering the issues around DNS replication and possible client-side waits of up to 16 minutes, however, strict RTO (recovery time objective) requirements cannot be met. Unlike Oracle, SQL Server has no database-integrated technologies such as Oracle Fast Application Notification (FAN) or Oracle Fast Connection Failover (FCF).

8. Problematic RPO (Recovery Point Objective)

SQL Server permits forced failover, but offers no way to automatically transfer the last bits of data from the old to the new primary replica when the availability mode was asynchronous. Oracle Data Guard provides exactly this through its flush redo feature, ensuring zero data loss and the best possible RPO even in forced failover situations.
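To illustrate points 5 and 8 in Transact-SQL terms, these are the two role transitions, run on the target secondary and using the hypothetical group name from earlier:

    -- Planned, manual failover without data loss: requires a
    -- synchronized synchronous-commit secondary
    ALTER AVAILABILITY GROUP AG_Sales FAILOVER;

    -- The only role transition available for an asynchronous secondary:
    -- forced failover with possible data loss
    ALTER AVAILABILITY GROUP AG_Sales FORCE_FAILOVER_ALLOW_DATA_LOSS;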
9. Readable secondary replicas with limitations

Because readable secondary replicas operate at the snapshot isolation transaction level, they carry restrictions that affect the primary database. The cleanup of ghost records on the primary database is influenced by long-running queries on the readable secondary. The readable secondary database cannot be added to the availability group while there are active transactions on the primary database. In addition, DDL changes on the primary database can be blocked by queries on the secondary, and incremental backups are not supported there.

None of these restrictions exists under Oracle Data Guard.
