Search Results

Search found 5462 results on 219 pages for 'continue'.


  • E-Business Suite 12.1.3 Data Masking Certified with Enterprise Manager 12c

    - by Elke Phelps (Oracle Development)
    Following up on our prior announcement for EM 11g, we're pleased to announce the certification of the E-Business Suite 12.1.3 Data Masking Template for the Data Masking Pack with Enterprise Manager Cloud Control 12c. You can use the Oracle Data Masking Pack with Oracle Enterprise Manager Cloud Control 12c to scramble sensitive data in cloned E-Business Suite environments. Due to data dependencies, scrambling E-Business Suite data is not a trivial task. The data needs to be scrubbed in such a way that the application continues to function. You may scramble data in E-Business Suite cloned environments with EM12c using the following template: E-Business Suite 12.1.3 Data Masking Template for Data Masking Pack with EM12c (Patch 14407414).

    What does data masking do in E-Business Suite environments? Application data masking does the following:
    De-identify the data: Scramble identifiers of individuals, also known as personally identifiable information or PII. Examples include information such as name, account, address, location, and driver's license number.
    Mask sensitive data: Mask data that, if associated with personally identifiable information (PII), would cause privacy concerns. Examples include compensation, health and employment information.
    Maintain data validity: Provide a fully functional application.

    How can EBS customers use data masking? The Oracle E-Business Suite Template for Data Masking Pack can be used in situations where confidential or regulated data needs to be shared with other non-production users who need access to some of the original data, but not necessarily every table. Examples of non-production users include internal application developers or external business partners such as offshore testing companies, suppliers or customers. The template works with the Oracle Data Masking Pack and Oracle Enterprise Manager to obscure sensitive E-Business Suite information that is copied from production to non-production environments. The Oracle E-Business Suite Template for Data Masking Pack is applied to a non-production environment with the Enterprise Manager Cloud Control Data Masking Pack. When applied, the template creates an irreversibly scrambled version of your production database for development and testing.

    What's new with EM 12c? Some of the execution steps may also be performed with the EM Command Line Interface (EM CLI). Support for EM CLI is a new feature of the E-Business Suite Release 12.1.3 template for EM 12c.

    Is there a charge for this? Yes. You must purchase licenses for the Oracle Data Masking Pack plug-in. The Oracle E-Business Suite 12.1.3 Template for the Data Masking Pack is included with the Oracle Data Masking Pack license. You can contact your Oracle account manager for more details about licensing.

    References: Additional details and requirements are provided in the following My Oracle Support Note: Using Oracle E-Business Suite Release 12.1.3 Template for the Data Masking Pack with Oracle Enterprise Manager 12.1.0.2 Data Masking Tool (Note 1481916.1); and in Masking Sensitive Data in the Oracle Database Real Application Testing User's Guide 11g Release 2 (11.2). Related Articles: Scrambling Sensitive Data in E-Business Suite.

    Read the article

  • SQL SERVER – Puzzle to Win Print Book – Functions FIRST_VALUE and LAST_VALUE with OVER clause and ORDER BY

    - by pinaldave
    Sometimes an interesting feature and a smart audience make a total difference. For the last two days I have been writing about the SQL Server 2012 functions FIRST_VALUE and LAST_VALUE. Please read the following posts before you continue, as today's question is based on them: Introduction to FIRST_VALUE and LAST_VALUE; Introduction to FIRST_VALUE and LAST_VALUE with OVER clause. In a comment on the second post I received an excellent question from Nilesh Molankar. He asks what will happen if we change a few things in the T-SQL. I really like this question, as this kind of question makes us sharp and helps us perform in critical situations when needed. We recently published the SQL Server Interview Questions book. I promise that a future version of this book will include this question. Instead of repeating his question, I am going to ask something very similar. Let us first run the following query (read yesterday's blog post for more detail):

    USE AdventureWorks
    GO
    SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
    FIRST_VALUE(SalesOrderDetailID) OVER (PARTITION BY SalesOrderID
        ORDER BY SalesOrderDetailID
        ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) FstValue,
    LAST_VALUE(SalesOrderDetailID) OVER (PARTITION BY SalesOrderID
        ORDER BY SalesOrderDetailID
        ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) LstValue
    FROM Sales.SalesOrderDetail s
    WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
    ORDER BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty
    GO

    Here is the resultset of the above query. Now let us change the ORDER BY clause of the OVER clause in the above query and see the new result.

    USE AdventureWorks
    GO
    SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
    FIRST_VALUE(SalesOrderDetailID) OVER (PARTITION BY SalesOrderID
        ORDER BY OrderQty
        ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) FstValue,
    LAST_VALUE(SalesOrderDetailID) OVER (PARTITION BY SalesOrderID
        ORDER BY OrderQty
        ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) LstValue
    FROM Sales.SalesOrderDetail s
    WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
    ORDER BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty
    GO

    Now let us look at the result and get ready for an interesting question:

    Puzzle: You can see that rows 2, 3, 4, and 5 have the same SalesOrderID = 43667. The FIRST_VALUE is 78 and the LAST_VALUE is 77. If these functions were returning the minimum and maximum values, they should have given 77 and 80 respectively instead of 78 and 77. Also, the value of FIRST_VALUE (78) is greater than the value of LAST_VALUE (77). Why? Explain in detail.

    Hint: Let me give you a simple hint. Just for simplicity I have changed the order of the columns selected in the SELECT and in the ORDER BY (at the end). This will not change the resultset itself, just the order of the columns as well as the order of the rows. The data remains the same.

    USE AdventureWorks
    GO
    SELECT s.OrderQty, s.SalesOrderID, s.SalesOrderDetailID,
    FIRST_VALUE(SalesOrderDetailID) OVER (PARTITION BY SalesOrderID
        ORDER BY OrderQty
        ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) FstValue,
    LAST_VALUE(SalesOrderDetailID) OVER (PARTITION BY SalesOrderID
        ORDER BY OrderQty
        ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) LstValue
    FROM Sales.SalesOrderDetail s
    WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
    ORDER BY s.OrderQty, s.SalesOrderID, s.SalesOrderDetailID
    GO

    The above query returns the following result. Now I am very sure all of you have figured out the solution. Here is the second hint – pay attention to rows 2, 3, 4, and 10.
    Hint 2: T-SQL Enhancements: FIRST_VALUE() and LAST_VALUE(); MSDN: FIRST_VALUE and LAST_VALUE.

    Rules: Leave a comment with your detailed answer before the Nov 15 blog post. Open worldwide (wherever Amazon ships books). If you blog about the puzzle's solution and you win, you win an additional surprise gift as well.

    Prizes: A print copy of my new book SQL Server Interview Questions (Amazon|Flipkart). If you already have this book, you can opt for any of my other books: SQL Wait Stats (Amazon|Flipkart|Kindle) and SQL Programming (Amazon|Flipkart|Kindle).

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
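    For readers new to these functions, here is a minimal, self-contained baseline sketch (a hypothetical table, not the puzzle data) showing what happens when the ORDER BY key inside OVER is unique: with full-window framing, every row in a partition reports the same first and last values.

    CREATE TABLE #Demo (GroupId INT, SortKey INT, Val INT);
    INSERT #Demo VALUES (1, 1, 10), (1, 2, 20), (1, 3, 30);
    SELECT GroupId, SortKey, Val,
        FIRST_VALUE(Val) OVER (PARTITION BY GroupId ORDER BY SortKey
            ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) FstValue,
        LAST_VALUE(Val) OVER (PARTITION BY GroupId ORDER BY SortKey
            ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) LstValue
    FROM #Demo;
    DROP TABLE #Demo;

    -- Every row returns FstValue = 10 and LstValue = 30, because SortKey
    -- orders the window unambiguously. Keep that in mind while working
    -- through the puzzle above.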

    Read the article

  • SAB BizTalk Archiving Pipeline Component - Codeplex

    - by Stuart Brierley
    In an effort to give a little more to the BizTalk development community, I have created my first Codeplex project. The SAB BizTalk Archiving Pipeline Component was written using Visual Studio 2010, with BizTalk Server 2010 intended as the target platform. It is currently at version 0.1, meaning that I have not yet completed all the intended functionality and have so far carried out a limited number of tests. It does, however, archive files within the bounds of the functionality so far implemented and seems to be stable in use. It is based on a recent evolution of a basic archiving component that I wrote in the past, and it is my hope that it will continue to evolve in the coming months. This work was inspired by some old posts by Gilles Zunino and Jeff Lynch.

    You can download the documentation, source code or component dll from Codeplex, but to give you a taste here is the first section of the documentation to whet your appetite:

    SAB BizTalk Archiving Pipeline Component

    The SAB BizTalk Archiving Pipeline Component has been developed to allow custom pipelines to be created that can archive messages at any stage of pipeline processing. It works in both receive and send pipelines and will archive messages to file based on the configuration applied to the component in the BizTalk Administration Console. The Archiving Pipeline Component has been coded for use with BizTalk Server 2010; use with other versions of BizTalk has not been tested. The component is supplied as a dll file that should be placed in the BizTalk Server Pipeline Components folder. It can then be used when developing custom pipelines to be deployed as part of your BizTalk Server applications. This version of the component allows you to use a number of generic messaging macros and also a small number that are specific to the FILE adapter. It is intended to extend these macros to cover context properties from other adapters in future releases.

    Archive Pipeline Parameters
    As with all pipeline components, the following parameters can be set when creating your custom pipeline and at runtime via the administration console.

    Enabled: Enables and disables the archive process. True: messages will be archived. False: messages will be passed to the next stage in the pipeline without any archive processing.

    File Name: The file name of the archived message. Allows the component to build the archive filename at run-time, based on the values entered, the permitted macros, and data extracted from the message context properties. e.g. %FileReceivedFileName%-%InterchangeSequenceNumber%

    File Mask: The extension to be added to the File Name following all macro processing. e.g. .xml

    File Path: The path on which the archived message should be saved. Allows the component to build the archive directory at run-time, based on the values entered, the permitted macros, and data extracted from the message context properties. e.g. C:\Archive\%ReceivePortName%\%Year%\%Month%\%Day%\ or \\ArchiveShare\%ReceivePortName%\%Date%\

    Overwrite: Enables and disables existing file overwrites. True: any existing file with the same File Path/Name combination (following macro replacement) will be overwritten. False: any existing file with the same File Path/Name combination (following macro replacement) will not be overwritten; the current message will instead be archived with a GUID appended to the File Name.
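    To make the parameters concrete, here is a hypothetical configuration (the port name, file name, and expansion details below are invented for illustration and may differ from the component's actual macro behavior):

    Enabled:   True
    File Name: %FileReceivedFileName%-%InterchangeSequenceNumber%
    File Mask: .xml
    File Path: C:\Archive\%ReceivePortName%\%Year%\%Month%\%Day%\
    Overwrite: False

    For a message received on a port named RcvInvoices on 14 May 2011, with a received file name of Invoice123 and an interchange sequence number of 1, the component would then write the archive to something like C:\Archive\RcvInvoices\2011\05\14\Invoice123-1.xml.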

    Read the article

  • Will You Accept This Rose?

    - by user715249
    Ashley, Bentley and the Masked Man. If these names mean anything to you we know where you'll be on Monday night – planted in front of your television awaiting the villain's return and what is sure to be the most dramatic rose ceremony yet on the Bachelorette. If you're the Oracle PartnerNetwork Communications Team you'll be spending your Monday night putting the final touches on the most exciting Partner Kickoff Event yet. Listen in as Judson tells you more.

    Starting at 6:00 AM PT on Tuesday, June 29th, partners – and potential partners – can tune in to watch the excitement unfold at partner.oracle.com. The storyline for FY12 will continue to unfold with a special role being outlined for our ISV partners. SPOILER ALERT: OPN has made an investment in how we'll go to market together – trust us – you don't want to get this news from the highlight reel. While we won't be sending anyone home from the show, we do promise an exciting hour which will gear you up to go to market with Oracle in the new fiscal year. The Oracle PartnerNetwork FY12 Kickoff is being held live 5 times and will include a 'date card' message for each region.

    EMEA Kickoff – Tuesday, June 29, at 6 a.m. PT / 2 p.m. BT
    LAD Kickoff – Tuesday, June 29, at 8 a.m. PT / noon DT
    North America Kickoff – Tuesday, June 29, at 10 a.m. PT / 1 p.m. ET
    Japan Kickoff – Tuesday, June 29, at 6 p.m. PT / Wednesday, June 30, at 10 a.m. JT (Tokyo)
    APAC Kickoff – Tuesday, June 29, at 8 p.m. PT / Wednesday, June 30, at 11 a.m. SGT (Singapore) / 1 p.m. AET (Sydney)

    We'll be taking your questions live throughout the show – we hope you'll "accept our rose" and join us on this amazing journey. The OPN Communications Team

    Read the article

  • Give a session on C++ AMP – here is how

    - by Daniel Moth
    Ever since presenting on C++ AMP at the AMD Fusion conference in June, then the Gamefest conference in August, and the BUILD conference in September, I've had numerous requests about my material from folks that want to re-deliver the same session. The C++ AMP session I put together has evolved over the 3 presentations to its final form that I used at BUILD, so that is the one I recommend you base yours on. Please get the slides and the recording from channel9 (I'll refer to slide numbers below). This is how I've been presenting the C++ AMP session:

    Context
    (slide 3, 04:18-08:18) Start with a demo, on my dual-GPU machine. I've been using the N-Body sample (for VS 11 Developer Preview).
    (slide 4) Use an nvidia slide that has additional examples of performance improvements that customers enjoy with heterogeneous computing.
    (slide 5) Talk a bit about the differences today between CPU and GPU hardware, leading to the fact that these will continue to co-exist and that GPUs are great for data parallel algorithms, but not much else today. One is a jack of all trades and the other is a number cruncher.
    (slide 6) Use the APU example from amd, as one indication that the hardware space is still in motion, emphasizing that the C++ AMP solution is a data parallel API, not a GPU API. It has a future proof design for hardware we have yet to see.
    (slide 7) Provide more meta-data, as blogged about when I first introduced C++ AMP.

    Code
    (slides 9-11) Introduce C++ AMP coding with a simplistic array-addition algorithm – the slides speak for themselves.
    (slides 12-13) index<N>, extent<N>, and grid<N>.
    (slides 14-16) array<T,N>, array_view<T,N> and a comparison between them.
    (slide 17) parallel_for_each.
    (slides 18, 21) restrict.
    (slides 19-20) actual restrictions of restrict(direct3d) – the slides speak for themselves.
    (slide 22) bring it all together with a matrix multiplication example.
    (slides 23-24) accelerator, and accelerator_view.
    (slides 26-29) Introduce tiling, incl. tiled matrix multiplication [tiling probably deserves a whole session instead of 6 minutes!].

    IDE
    (slides 34, 37) Briefly touch on the concurrency visualizer. It supports GPU profiling, but enhancements specific to C++ AMP we hope will come in the Beta timeframe, which is when I'll be spending more time talking about it.
    (slides 35-36, 51:54-59:16) Demonstrate the GPU debugging experience in VS 11.

    Summary
    (slide 39) Re-iterate some of the points of slide 7, and add the point that the C++ AMP spec will be open for other compiler vendors to implement, even on other platforms (in fact, Microsoft is actively working on that).
    (slide 40) Links to content – see slide – including where all your questions should go: http://social.msdn.microsoft.com/Forums/en/parallelcppnative/threads.

    "But I don't have time for a full blown session, I only need 2 (or just 1, or 3) C++ AMP slides to use in my session on related topic X" If all you want is a small number of slides, you can take some from the session above and customize them. But because I am so nice, I have created some slides for you, including talking points in the notes section. Download them here. Comments about this post by Daniel Moth welcome at the original blog.
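    If you want a concrete snippet to show alongside slides 9-11, here is a minimal array-addition sketch in the spirit of the session's opening example (my own sketch, not the session's actual demo code; it uses restrict(amp), the keyword that replaced the Developer Preview's restrict(direct3d) in later releases):

    #include <amp.h>
    #include <iostream>
    using namespace concurrency;

    int main()
    {
        const int size = 5;
        int a[size] = {1, 2, 3, 4, 5};
        int b[size] = {10, 20, 30, 40, 50};
        int sum[size];

        // Wrap the CPU arrays in array_views so the runtime can copy
        // them to an accelerator on demand.
        array_view<const int, 1> av_a(size, a);
        array_view<const int, 1> av_b(size, b);
        array_view<int, 1> av_sum(size, sum);
        av_sum.discard_data(); // no need to copy sum's initial contents

        // Run one thread per element on the default accelerator.
        parallel_for_each(av_sum.extent, [=](index<1> idx) restrict(amp)
        {
            av_sum[idx] = av_a[idx] + av_b[idx];
        });

        av_sum.synchronize(); // copy results back to the sum array
        for (int i = 0; i < size; i++)
            std::cout << sum[i] << std::endl;
    }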

    Read the article

  • Industry perspectives on managing content

    - by aahluwalia
    Earlier this week I was noodling over a topic for my first blog post. My intention for this blog is to bring a practitioner's perspective on ECM to the community; to share and collaborate on best practices and approaches that address today's business problems. Reviewing my past 14 years of experience with web technologies, I wondered what topic would serve as a good "conversation starter". During this time, I received a call from a friend who was seeking insights on how content management applies to specific industries. She approached me because she vaguely remembered that I had worked in the Health Insurance industry in the recent past. She wanted me to tell her about the specific business needs of this industry. She was in for quite a surprise when she found out that I had spent the better part of a decade managing content within the Health Insurance industry – and I discovered a great topic for my first blog post! I offer some insights from Health Insurance and invite my fellow practitioners to share their insights from other industries. What does content management mean to these industries? What can solution providers be aware of when offering solutions to these industries?

    The United States health care system relies heavily on private health insurance, which is the primary source of coverage for approximately 58% of Americans. In the late 19th century, "accident insurance" began to be available, which operated much like modern disability insurance. In the late 20th century, traditional disability insurance evolved into modern health insurance programs. The first thing a solution provider must be aware of about the Health Insurance industry is that it tends to be transaction intensive. These are the companies that manage and administer our health plans and process our claims when we visit our health care providers. It helps to keep in mind that they are in the business of delivering health insurance and not technology. You may find the mindset conservative in comparison to the IT industry; however, the Health Insurance industry has benefited and will continue to benefit from the efficiency that technology brings to traditionally paper-driven processes.

    We are all aware of the significant impact that the healthcare reform bill has had on the Health Insurance industry. Insurers are under a great deal of pressure to explore ways to reduce their administrative costs and increase operational efficiency. Overall, administrative costs of health insurance include the insurer's cost to administer the health plan, the costs borne by employers, health-care providers, governments and individual consumers. Inefficiencies plague health insurance, owing largely to the absence of standardized processes across the industry. To address this, industry leaders have come together to establish standards and invest in initiatives to help their healthcare provider partners transition to the next generation of healthcare technology. The move to online services and paperless explanation of benefits are some manifestations of technological advancement in health insurance. Several companies have adopted Toyota's LEAN methodology or Six Sigma principles to improve quality, reduce waste and excessive costs, thereby increasing the value of their plan offerings. A growing number of health insurance companies have transformed their business systems in the past decade alone and adopted some form of content management to reduce the costs involved in administering health plans.
The key strategy has been to convert paper documents and forms into electronic formats, automate the content development process and securely distribute content to various audiences via diverse marketing channels, including web and mobile. Enterprise content management solutions can enable document capture of claim forms, manage digital assets, integrate with Enterprise Resource Planning (ERP) and Human Capital Management (HCM) solutions, build Business Process Management (BPM) processes, define retention and disposition instructions to comply with state and federal regulations and allow eBusiness and Marketing departments to develop and deliver web content to multiple websites, mobile devices and portals. Content can be shared securely within and outside the organization using Information Rights Management.  At the end of the day, solution providers who can translate strategic goals into solutions that maximize process automation, increase ease of use and minimize IT overhead are likely to be successful in today's health insurance environment.

    Read the article

  • AdventureWorks2012 now available for all on SQL Azure

    - by jamiet
    Three days ago I tweeted this: "Idea. MSFT could host read-only copies of all the [AdventureWorks] DBs up on #sqlazure for the SQL community to use. RT if agree #sqlfamily" — Jamie Thomson (@jamiet) March 24, 2012

    Evidently I wasn't the only one that thought this was a good idea because, as you can see from the screenshot, that tweet has so far been retweeted more than fifty times. Clearly there is a desire to see the AdventureWorks databases made available for the community to noodle around on, so I am pleased to announce that as of today you can do just that – [AdventureWorks2012] now resides on SQL Azure and is available for anyone, absolutely anyone, to connect to and use* for their own means.

    *By use I mean "issue some SELECT statements". You don't have permission to issue INSERTs, UPDATEs, DELETEs or EXECUTEs I'm afraid – if you want to do that then you can get the bits and host it yourself.

    This database is free for you to use but SQL Azure is of course not free, so before I give you the credentials please lend me your eyes for a short while longer. AdventureWorks on Azure is being provided for the SQL Server community to use, and so I am hoping that that same community will rally around to support this effort by making a voluntary donation to support the upkeep which, going on current pricing, is going to be $119.88 per year. If you would like to contribute to keep AdventureWorks on Azure up and running for that full year please donate via PayPal to [email protected]. Any amount, no matter how small, will help. If those 50+ people that retweeted me beforehand all contributed $2 then that would just about be enough to keep this up for a year. If the community contributes more than we need then there are a number of additional things that could be done:

    Host additional databases (Northwind anyone??)
    Host in more datacentres (this first one is in Western Europe)
    Make a charitable donation

    That last one, a charitable donation, is something I would really like to do. The SQL Community have proved before that they can make a significant contribution to charitable organisations through purchasing the SQL Server MVP Deep Dives book, and I harbour hopes that AdventureWorks on Azure can continue in that vein. So please, if you think AdventureWorks on Azure is something that is worth supporting, please make a contribution.

    OK, with the prickly subject of begging for cash out of the way, let me share the details that you need to connect to [AdventureWorks2012] on SQL Azure:

    Server:   mhknbn2kdz.database.windows.net
    Database: AdventureWorks2012
    User:     sqlfamily
    Password: sqlf@m1ly

    That user sqlfamily has all the permissions required to enable you to query away to your heart's content. Here is the code that I used to set it up:

    CREATE USER sqlfamily FOR LOGIN sqlfamily;
    CREATE ROLE sqlfamilyrole;
    EXEC sp_addrolemember 'sqlfamilyrole','sqlfamily';
    GRANT VIEW DEFINITION ON Database::AdventureWorks2012 TO sqlfamilyrole;
    GRANT VIEW DATABASE STATE ON Database::AdventureWorks2012 TO sqlfamilyrole;
    GRANT SHOWPLAN TO sqlfamilyrole;
    EXEC sp_addrolemember 'db_datareader','sqlfamilyrole';

    You can connect to the database using SQL Server Management Studio (instructions to do that are provided at Walkthrough: Connecting to SQL Azure via the SSMS) or you can use the web interface at https://mhknbn2kdz.database.windows.net.

    Lastly, just for a bit of fun I created a table up there called [dbo].[SqlFamily] into which you can leave a small calling card.
    Simply execute the following SQL statement (changing the values of course):

    INSERT [dbo].[SqlFamily]([Name],[Message],[TwitterHandle],[BlogURI])
    VALUES ('Your name here','Some Message','your twitter handle (optional)','Blog URI (optional)');

    [Id] is an IDENTITY field and there is a default constraint on [DT], hence there is no need to supply a value for those. Note that you only have INSERT permissions, not UPDATE or DELETE, so make sure you get it right first time! Any offensive or distasteful remarks will of course be deleted :)

    Thank you for reading this far and have fun using AdventureWorks on Azure. I hope it proves to be useful for some of you. @jamiet

    AdventureWorks on Azure - Provided by the SQL Server community, for the SQL Server community!
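    If you want a starting point once connected, a couple of read-only queries along these lines should work (assuming the stock AdventureWorks2012 schema; the table names below come from the standard sample database):

    -- A few people from the Person schema
    SELECT TOP (5) FirstName, LastName
    FROM Person.Person
    ORDER BY LastName;

    -- How many sales orders does the sample contain?
    SELECT COUNT(*) AS OrderCount
    FROM Sales.SalesOrderHeader;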

    Read the article

  • Data Connection Library in SharePoint Server 2010

    - by Wayne
    What is a Data Connection Library in SharePoint Server 2010? A Data Connection Library in Microsoft SharePoint Server 2010 is a library that can contain two kinds of data connections: an Office Data Connection (ODC) file or a Universal Data Connection (UDC) file. Microsoft InfoPath 2010 uses data connections that comply with the Universal Data Connection (UDC) file schema and typically have either a *.udcx or *.xml file name extension. Data sources described by these data connections are stored on the server and can be used in standard form templates and browser-enabled form templates.

    How to create a SharePoint Server Data Connection Library:
    1. Browse to a SharePoint Server 2010 site on which you have at least Design permissions. If you are on the root site, create a new site before you continue with the next step.
    2. On the Site Actions menu, click More Options.
    3. On the Create page, click Library under Filter By, and then click Data Connection Library.
    4. On the right side of the Create page, type a name for the library, and then click the Create button.
    5. Copy the URL of the new data connection library.

    How to create a new data connection file in InfoPath:
    1. Open InfoPath Designer 2010, click Blank Form, and then click Design Form.
    2. On the Data tab, click Data Connections, and then click Add.
    3. In the Data Connection Wizard, click Create a new connection to, click Receive data, and then click Next.
    4. Click the kind of data source that you are connecting to, such as Database, Web service, or SharePoint library or list, and then click Next.
    5. Complete the remaining steps in the Data Connection Wizard to configure your data connection, and then click Finish to return to the Data Connections dialog box.
    6. In the Data Connections dialog box, click Convert to Connection File.
    7. In the Convert Data Connection dialog box, enter the URL of the data connection library that you previously copied (delete "Forms/AllItems.aspx" and anything following it from the URL), enter a name for the data connection file at the end of the URL, and then click OK. It will take a few moments to convert and save the data connection file to the library.
    8. Confirm that the data connection was converted successfully by examining the Details section of the Data Connections dialog box while the name of the converted data connection is selected.
    9. Browse to the SharePoint data connection library, click the drop-down next to the name of the data connection, click Approve/Reject, click Approved, and then click OK.

    Take advantage of the SharePoint Data Connection Library and other useful features of the SharePoint family of products, which includes SharePoint Foundation 2010, MOSS 2007, and free SharePoint templates and web parts.

    Read the article

  • First Day of Data Integration Track at Oracle OpenWorld 2012

    - by Irem Radzik
    OpenWorld started at full speed for us today with a great set of sessions in the Data Integration track. After the exciting keynote session on Oracle Database 12c in the morning, Brad Adelberg, VP of Development for Data Integration products, presented Oracle's data integration product strategy. His session highlighted the new requirements for data integration to achieve pervasive and continuous access to trusted data. The new requirements and product focus areas presented in this session are:

    Provide access to any data at any source, on premise or on cloud
    Enable zero downtime operations and maximum performance
    Leverage real-time data for accurate business insights
    Ensure high quality data is used across the enterprise

    During the session Brad walked through how Oracle's data integration products – Oracle Data Integrator, Oracle GoldenGate, Oracle Enterprise Data Quality, and Oracle Data Service Integrator – deliver on these requirements and how recent product releases build on this strategy.

    Soon after Brad's session we heard from a panel of Oracle GoldenGate customers – St. Jude Medical, Equifax, and Bank of America – how they achieved zero downtime operations using Oracle GoldenGate. The panel presented different use cases of GoldenGate, from active-active replication to offloading reporting. Especially St. Jude Medical's implementation, which involves the alert management system for patients who use their pacemakers, reminded me that in some cases downtime of mission-critical systems can be a matter of life or death. It is very comforting to hear that GoldenGate delivers highly-reliable continuous availability for life-saving medical systems.

    In the afternoon, Nick Wagner from the Product Management team and I followed the customer panel with a review of Oracle GoldenGate 11gR2's new features. Many of the questions we received from the audience were about GoldenGate's new Integrated Capture for Oracle Database and the enhanced Conflict Management features, as well as how GoldenGate compares to Oracle Streams. In addition to giving details on GoldenGate's unique capability to capture changed data with a direct integration to the Oracle DBMS engine, we reminded the audience that enhancements to Oracle GoldenGate will continue, while Streams will be primarily maintained.

    Last but not least, Tim Garrod and Ryan Fonnett from Raymond James presented a unified real-time data integration solution using Oracle Data Integrator and GoldenGate for their operational data store (ODS). The ODS supports application services across the enterprise, and providing timely data is a critical requirement. In this solution, Oracle GoldenGate does the log-based change data capture for Oracle Data Integrator's near real-time data integration between heterogeneous systems. As Raymond James' ODS supports mission-critical services for their advisors, the project team had to set up this integration environment to be highly available. During the session, Ryan and Tim explained how they use ODI to enable automated process execution and "always-on" integration processes. Their presentation included two demonstrations that focused on CDC patterns deployed with ODI and the automated multi-instance execution and monitoring. We are very grateful to Tim and Ryan for their very well prepared presentation at OpenWorld this year.

    Day 2 (Tuesday) will also be a busy day in our track.
    In addition to the Fusion Middleware Innovation Awards ceremony at 11:45 a.m. at Moscone West 3001, we have the following DI sessions:

    Real-World Operational Reporting Customer Panel – 11:45 a.m., Moscone West 3005
    Oracle Data Integrator Product Update and Future Strategy – 1:15 p.m., Moscone West 3005
    High-volume OLTP with Oracle GoldenGate: Best Practices from Comcast – 1:15 p.m., Moscone West 3005
    Everything You Need to Know about Monitoring Oracle GoldenGate – 5 p.m., Moscone West 3005

    If you are at OpenWorld please join us in these sessions. For a full review of the data integration track at OpenWorld please see our Focus-On document.

    Read the article

  • Thoughts on Thoughts on TDD

    Brian Harry wrote a post entitled Thoughts on TDD that I thought I was going to let lie, but I find that I need to write a response. I find myself in agreement with Brian on many points in the post, but I disagree with his conclusion. Not surprisingly, I agree with the things that he likes about TDD. Focusing on the usage rather than the implementation is really important, and this is important whether you use TDD or not. And YAGNI was a big theme in my Seven Deadly Sins of Programming series.

    Now, on to what he doesn't like. He says that he finds it inefficient to have tests that he has to change every time he refactors. Here is where we part company. If you are having to do a lot of test rewriting (say, more than a couple of minutes' work to get back to green) *often* when you are refactoring your code, I submit that either you are testing things that you don't need to test (internal details rather than external behavior), your code perhaps isn't as decoupled as it could be, or maybe you need a visit to refactorers anonymous. I also like to refactor like crazy, but as we all know, the huge downside of refactoring is that we often break things. Important things. Subtle things. Which makes refactoring risky. *Unless* we have a set of tests that have great coverage. And TDD (or Example-based Design, which I prefer as a term) gives those to us. Now, I don't know what sort of coverage Brian gets with the unit tests that he writes, but I do know that for the majority of the developers I've worked with – and I count myself in that bucket – the coverage of unit tests written afterwards is considerably inferior to the coverage of unit tests that come from TDD.

    For me, it all comes down to the answer to the following question: How do you ensure that your code works now and will continue to work in the future? I'm willing to put up with a little inefficiency on the front side to get that benefit later. It's not the writing of the code that's the expensive part, it's everything else that comes after. I don't think that stepping through test cases in the debugger gets you what you want. You can verify what the current behavior is, sure, and do it fairly cheaply, but you don't help the guy in the future who doesn't know what conditions were important if he has to change your code.

    The second thing he doesn't like is backing into an architecture (go read to see what he means). I've certainly had to work with code that was like this before, and it's a nightmare – the code that nobody wants to touch. But that's not at all the kind of code that you get with TDD, because if you're doing it right you're using the "write a failing test, make it pass, refactor" approach. Now, you may miss some useful refactorings and generalizations this way, but if you do, you can refactor later because you have the tests that make it safe to do so, and your code tends to be easy to refactor because the same things that make code easy to write unit tests for make it easy to refactor.

    I also think Brian is missing an important point. We aren't all as smart as he is. I'm reminded a bit of the lesson of Intentional Programming, Charles Simonyi's paradigm for making programming easier. I played around with Intentional Programming when it was young, and came to the conclusion that it was a pretty good thing if you were as smart as Simonyi is, but it was pretty much a disaster if you were an average developer.
    In this case, TDD gives you a way to work your way into a good, flexible, and functional architecture when you don't have somebody of Brian's talents to help you out. And that's a good thing.

    Read the article

  • How to "apt-get -f install" without deleting software?

    - by Jeggy
    I know Guitar Pro doesn't support 64-bit, but I did get it to work with this command:

    jeggy@jeggy-XPS:~$ sudo dpkg --force-architecture -i GuitarPro6-rev9063.deb
    [sudo] password for jeggy:
    Selecting previously unselected package guitarpro6:i386.
    (Reading database ... 285729 files and directories currently installed.)
    Unpacking guitarpro6:i386 (from GuitarPro6-rev9063.deb) ...
    dpkg: dependency problems prevent configuration of guitarpro6:i386:
     guitarpro6:i386 depends on gksu.
    dpkg: error processing guitarpro6:i386 (--install):
     dependency problems - leaving unconfigured
    Processing triggers for bamfdaemon ...
    Rebuilding /usr/share/applications/bamf.index...
    Processing triggers for desktop-file-utils ...
    Processing triggers for gnome-menus ...
    Errors were encountered while processing:
     guitarpro6:i386

    Even after that error the program works perfectly fine, and updating and adding PPAs to the system works great, but when I try to install some other software I get this error:

    jeggy@jeggy-XPS:~$ sudo apt-get install elinks
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    You might want to run 'apt-get -f install' to correct these:
    The following packages have unmet dependencies:
     elinks : Depends: libfsplib0 (>= 0.9) but it is not going to be installed
              Depends: liblua50 (>= 5.0.3) but it is not going to be installed
              Depends: liblualib50 (>= 5.0.3) but it is not going to be installed
              Depends: libtre5 but it is not going to be installed
              Depends: elinks-data (= 0.12~pre5-7ubuntu1) but it is not going to be installed
     guitarpro6:i386 : Depends: gksu:i386 but it is not going to be installed
    E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    And whenever I run 'apt-get -f install' I get this:

    jeggy@jeggy-XPS:~$ sudo apt-get -f install
    [sudo] password for jeggy:
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Correcting dependencies... Done
    The following packages were automatically installed and are no longer required:
      dconf-gsettings-backend:i386 python-levenshtein python-indicate libav-tools
      libstartup-notification0:i386 libxmuu1:i386 libavfilter-extra-2 libbabl-0.0-0
      libgegl-0.0-0 libgconf2-4:i386 python-vobject libgtk-3-0:i386 libpam-cap:i386
      python-utidylib libdconf0:i386 python-iniparse python-xmpp
      libpam-gnome-keyring:i386 libxcb-util0:i386 python-farstream
    Use 'apt-get autoremove' to remove them.
    The following packages will be REMOVED:
      guitarpro6:i386
    0 upgraded, 0 newly installed, 1 to remove and 7 not upgraded.
    1 not fully installed or removed.
    After this operation, 84,0 MB disk space will be freed.
    Do you want to continue [Y/n]? y
    (Reading database ... 286979 files and directories currently installed.)
    Removing guitarpro6:i386 ...
    dpkg: warning: while removing guitarpro6:i386, directory '/opt/GuitarPro6/updater' not empty so not removed.
    dpkg: warning: while removing guitarpro6:i386, directory '/opt/GuitarPro6/Data/Soundbanks' not empty so not removed.
    Processing triggers for bamfdaemon ...
    Rebuilding /usr/share/applications/bamf.index...
    Processing triggers for desktop-file-utils ...
    Processing triggers for gnome-menus ...

    And now Guitar Pro is deleted. How can I install Guitar Pro and still be able to install other software afterwards?
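    (One possible way out, offered as an untested sketch: since the root complaint is the unmet gksu:i386 dependency, explicitly installing the 32-bit dependency first – assuming multiarch is enabled and the package is available in the archives – should let dpkg configure guitarpro6 instead of apt proposing its removal:

    sudo apt-get install gksu:i386
    sudo dpkg --configure -a

    After that, 'apt-get -f install' should have nothing left to "fix".)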

    Read the article

  • T-SQL Tuesday #025 – CHECK Constraint Tricks

    - by Most Valuable Yak (Rob Volk)
    Allen White (blog | twitter), marathoner, SQL Server MVP and presenter, and all-around awesome author is hosting this month's T-SQL Tuesday on sharing SQL Server Tips and Tricks. And for those of you who have attended my Revenge: The SQL presentation, you know that I have 1 or 2 of them. You'll also know that I don't recommend using anything I talk about in a production system, and will continue that advice here… although you might be sorely tempted. Suffice it to say I'm not using these examples myself, but I think they're worth sharing anyway.

    Some of you have seen or read about SQL Server constraints and have applied them to your table designs… unless you're a vendor ;)… and may even use CHECK constraints to limit numeric values, or length of strings, allowable characters and such. CHECK constraints can, however, do more than that, and can even provide enhanced security and other restrictions. One tip or trick that I didn't cover very well in the presentation is using constraints to do unusual things; specifically, limiting or preventing inserts into tables. The idea was to use a CHECK constraint in a way that didn't depend on the actual data:

    -- create a table that cannot accept data
    CREATE TABLE dbo.JustTryIt(a BIT NOT NULL PRIMARY KEY,
        CONSTRAINT chk_no_insert CHECK (GETDATE()=GETDATE()+1))
    INSERT dbo.JustTryIt VALUES(1)

    I'll let you run that yourself, but I'm sure you'll see that this is a pretty stupid table to have, since the CHECK condition will always be false, and therefore will prevent any data from ever being inserted. I can't remember why I used this example but it was for some vague and esoteric purpose that applies to about, maybe, zero people. I come up with a lot of examples like that.

    However, if you realize that these CHECKs are not limited to column references, and if you explore the SQL Server function list, you could come up with a few that might be useful. I'll let the names describe what they do instead of explaining them all:

    CREATE TABLE NoSA(a int not null,
        CONSTRAINT CHK_No_sa CHECK (SUSER_SNAME()<>'sa'))
    CREATE TABLE NoSysAdmin(a int not null,
        CONSTRAINT CHK_No_sysadmin CHECK (IS_SRVROLEMEMBER('sysadmin')=0))
    CREATE TABLE NoAdHoc(a int not null,
        CONSTRAINT CHK_No_AdHoc CHECK (OBJECT_NAME(@@PROCID) IS NOT NULL))
    CREATE TABLE NoAdHoc2(a int not null,
        CONSTRAINT CHK_No_AdHoc2 CHECK (@@NESTLEVEL>0))
    CREATE TABLE NoCursors(a int not null,
        CONSTRAINT CHK_No_Cursors CHECK (@@CURSOR_ROWS=0))
    CREATE TABLE ANSI_PADDING_ON(a int not null,
        CONSTRAINT CHK_ANSI_PADDING_ON CHECK (@@OPTIONS & 16=16))
    CREATE TABLE TimeOfDay(a int not null,
        CONSTRAINT CHK_TimeOfDay CHECK (DATEPART(hour,GETDATE()) BETWEEN 0 AND 1))
    GO

    -- log in as sa or a sysadmin server role member, and try this:
    INSERT NoSA VALUES(1)
    INSERT NoSysAdmin VALUES(1)
    -- note the difference when using sa vs. non-sa
    -- then try it again with a non-sysadmin login

    -- see if this works:
    INSERT NoAdHoc VALUES(1)
    INSERT NoAdHoc2 VALUES(1)
    GO

    -- then try this:
    CREATE PROCEDURE NotAdHoc @val1 int, @val2 int AS
    SET NOCOUNT ON;
    INSERT NoAdHoc VALUES(@val1)
    INSERT NoAdHoc2 VALUES(@val2)
    GO
    EXEC NotAdHoc 2,2

    -- which values got inserted?
    SELECT * FROM NoAdHoc
    SELECT * FROM NoAdHoc2

    -- and this one just makes me happy :)
    INSERT NoCursors VALUES(1)
    DECLARE curs CURSOR FOR SELECT 1
    OPEN curs
    INSERT NoCursors VALUES(2)
    CLOSE curs
    DEALLOCATE curs
    INSERT NoCursors VALUES(3)
    SELECT * FROM NoCursors

    I'll leave the ANSI_PADDING_ON and TimeOfDay tables for you to test on your own; I think you get the idea.
(Also take a look at the NoCursors example, notice anything interesting?)  The real eye-opener, for me anyway, is the ability to limit bad coding practices like cursors, ad-hoc SQL, and sa use/abuse by using declarative SQL objects.  I'm sure you can see how and why this would come up when discussing Revenge: The SQL.;) And the best part IMHO is that these work on pretty much any version of SQL Server, without needing Policy Based Management, DDL/login triggers, or similar tools to enforce best practices. All seriousness aside, I highly recommend that you spend some time letting your mind go wild with the possibilities and see how far you can take things.  There are no rules! (Hmmmm, what can I do with rules?) #TSQL2sDay
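    If you want to see exactly what got created while experimenting, a quick follow-up sketch (my addition, using the standard sys.check_constraints catalog view) lists the constraint definitions from the examples above:

    -- List the CHECK constraint definitions created above
    SELECT name, definition
    FROM sys.check_constraints
    WHERE name LIKE 'CHK[_]%';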

    Read the article

  • Welcome 2011

    - by PSteele
    About this time last year, I wrote a blog post about how January of 2010 was almost over and I hadn't done a single blog post. Ugh… history repeats itself.

    2010 in Review
    If I look back at 2010, it was a great year in terms of technology and development:
    Visited Redmond to attend the MVP Summit in February. Had a great time with the MS product teams and got to connect with some really smart people.
    Continued my work on Visual Studio Magazine's "C# Corner" column. About mid-year, the column changed from an every-other-month print column to an every-other-month print column along with bi-monthly web-only articles. Needless to say, this kept me even busier and away from my blog.
    Participated in another GiveCamp! Thanks to the wonderful leadership of Michael Eaton and all of his minions, GiveCamp 2010 was another great success. Planning for GiveCamp 2011 will be starting soon…
    I switched to DVCS full time. After years of being a loyal SVN user, I got bit by the DVCS bug. I played around with both Mercurial and Git and finally settled on Mercurial. Its seamless integration with Windows Explorer along with its wealth of plugins made me fall in love. I can't imagine going back to using a centralized version control system.
    Continued to work with the awesome group of talent at SRT Solutions. Very proud that SRT won its third consecutive FastTrack award!
    Jumped off the BlackBerry train and am enjoying the smooth ride of Android. It was time to replace the old BlackBerry Storm so I did some research and settled on the Motorola DroidX. I couldn't be happier. Android is a slick OS and the DroidX is a sweet piece of hardware. Been dabbling in some Android development with both Eclipse and IntelliJ IDEA (I like IntelliJ IDEA a lot better!).

    2011 Plans
    On January 1st I was pleasantly surprised to get an email from the Microsoft MVP program letting me know that I had received the MVP award again for my community work in 2010. I'm honored and humbled to be recognized by Microsoft as well as my peers!
    I'll continue to do some Android development. I'm currently working on a simple app to get my feet wet. It may even make its way into the Android Market.
    I've got a project that could really benefit from WPF, so I'll be diving into WPF this year. I've played around with WPF a bit in the past – simple demos and learning exercises – but this will give me a chance to build an entire application in WPF. I'm looking forward to the increased freedom that a WPF UI should give me.
    I plan on blogging a lot more in 2011!

    Technorati Tags: Android, MVP, Mercurial, WPF, SRT, GiveCamp

    Read the article

  • Seizing the Moment with Mobility

    - by Kathryn Perry
    A guest post by Hernan Capdevila, Vice President, Oracle Fusion Apps

    Mobile devices are forcing a paradigm shift in the workplace – they're changing the way businesses can do business and the type of cultures they can nurture. As our customers talk about their mobile needs, we hear them saying they want instant-on access to enterprise data so workers can be more effective at their jobs anywhere, anytime. They also are interested in being more cost effective from an IT point of view. The mobile revolution – with the idea of BYOD (bring your own device) – has added an interesting dynamic because previously IT was driving the employee device strategy and ecosystem. That's been turned on its head with the consumerization of IT. Now employees are figuring out how to use their personal devices for work purposes and IT has to figure out how to adapt.

    Blurring the Lines between Work and Personal Life
    My vision of where businesses will be five years from now is that our work lives and personal lives will be more interwoven. In turn, enterprises will have to determine how to make employees' work lives fit more into the fabric of their personal lives. And personal devices like smartphones are going to drive significant business value because they let us accomplish things very incrementally. I can be sitting on a train or in a taxi and be productive. At the end of any meeting, I can capture ideas and tasks or follow up with people in real time. Mobile devices enable this notion of seizing the moment – capitalizing on opportunities that might otherwise have slipped away because we're not connected. For the industry shapers out there, this is game changing. The lean and agile workforce is definitely the future. This notion of the board sitting down with the executive team to lay out strategic objectives for a three- to five-year plan, bringing in HR to determine how they're going to staff the strategic activities, kicking off the execution, and then revisiting the plan in three to five years to create another three- to five-year plan is yesterday's model. Businesses that continue to approach innovation in that way are in the dinosaur age. Today it's about incremental planning and incremental execution, which requires a lot of cohesion and synthesis within the workforce. There needs to be this interweaving notion within the workforce about how ideas cascade down, how people engage, how they stay connected, and how insights are shared.

    How to Survive and Thrive in Today's Marketplace
    The notion of Facebook isn't new. We lived it in pre-Internet days with America Online and Prodigy – Facebook is just the renaissance of these services in a more viral and pervasive way. And given the trajectory of the consumerization of IT, with people bringing their personal tooling to work, the enterprise has no option but to adapt. The sooner businesses realize this from a top-down point of view, the sooner they will be able to really drive significant innovation and adapt to the marketplace. There are a small number of companies right now (I think it's closer to 20% rather than 80%, but the number is expanding) that are able to really innovate in this incremental marketplace. So from a competitive point of view, there's no choice but to be social and stay connected. By far the majority of users on Facebook and LinkedIn are mobile users – people on iPhones, smartphones, Android phones, and tablets. It's not the couch people, right? It's the on-the-go people – those people at the coffee shops.
Usually when you're sitting at your desk on a big desktop computer, typically you have better things to do than to be on Facebook. This is a topic I'm extremely passionate about because I think mobile devices are game changing. Mobility delivers significant value to businesses – it also brings dramatic simplification from a functional point of view and transforms our work life experience. Hernan CapdevilaVice President, Oracle Applications Development

    Read the article

  • App Stores – In All Things, It's Quality Over Quantity

    - by D'Arcy Lussier
    Everybody has an opinion about Windows 8. People love it, people hate it, people are meh about it, people are apparently buying it from Microsoft stores in NYC as if it were water before a natural disaster… if there's one thing that Microsoft product launches do well, it's the ability to bring out strong emotional responses. Over at eweek.com, Don Reisinger wrote a post entitled 5 good and bad things about Windows 8. Yes, another opinion piece on Windows 8. I figured since this one had good and bad it might be worthwhile to read. I then came across #10 on his list, and figured "What the hell… might as well post a bit of a rant on Windows 8 myself!" Here's #10:

    10. Bad: Too few apps
    Unfortunately, Microsoft wasn't able to get too many developers to start producing applications for its Windows 8 Store. Microsoft hasn't yet released official numbers, but some have said that the marketplace has less than 8,000 programs. Considering Apple's App Store has 100 times that, it's about time Microsoft starts leaning on developers to get more programs into its store.

    Believe me, Microsoft *has* been leaning on developers to get apps into the store. I've been asked at least 5 or 6 times by 5 or 6 different friends at Microsoft whether I was going to write a Windows 8 app. I think Microsoft felt they had to try and address the number of apps available in their marketplace, since some people (like Don) would draw comparisons to the number of apps in the Apple marketplace. I feel for Microsoft in this, since the number of apps in a marketplace is an empty stat.

    Quality over Quantity
    I have an iPad that my family (wife, 10yo daughter, 3yo daughter) use. We all have our own apps installed on it. In addition, my wife has an iPhone 4S that she also installs apps on. As someone whose kids often ask whether they can buy/download an app, I can tell you that the vast majority of the vast catalogue of iOS marketplace apps are crap! Do you realize how many "free" games are out there, only to really be not-free because you have to purchase in-game content to make the game actually playable? And how about searching – with such a vast array of apps and such high numbers of craptastic ones, trying to find something is incredibly difficult and can be frustrating. I would rather see Microsoft launch with 8,000 high quality apps in its store than 800,000 that are mostly junk.

    Too Few Apps?!
    And seriously, 8,000 is not a small number. How many iOS apps have I actually bought between the iPad and iPhone? I'll be generous and say 30… heck, let's round it up to 40. It's not like I have 10,000 apps installed on my iPad, nor will that ever happen! So if people have, at the *launch* of a new platform ecosystem, EIGHT THOUSAND apps to choose from, I don't see that as a fail at all! It should be noted that most of the most common apps (Netflix, Skype, etc.) are available for Windows 8 at launch – I guess I'll have to wait a few weeks for My Pony Ranch and all its clones to start showing up; pity.

    Let's Check Back in a Year
    So look, let's check back in a year's time and see what the app store looks like. My hope is that Microsoft doesn't continue to push quantity over quality. Even knowing the optics that the number of apps in the store carries, and the pressure to catch the Apple and Android marketplaces, I hope Microsoft avoids the scenario where a good percentage of apps in the Windows Store are utter rubbish and finding the gems becomes cumbersome.
    But if that happens, we can thank guys like Don who raised the false issue of app count at launch for it.

    Read the article

  • Project Euler 17: (Iron)Python

    - by Ben Griswold
    In my attempt to learn (Iron)Python out in the open, here's my solution for Project Euler Problem 17. As always, any feedback is welcome.

    # Euler 17
    # http://projecteuler.net/index.php?section=problems&id=17
    # If the numbers 1 to 5 are written out in words:
    # one, two, three, four, five, then there are
    # 3 + 3 + 5 + 4 + 4 = 19 letters used in total.
    # If all the numbers from 1 to 1000 (one thousand)
    # inclusive were written out in words, how many letters
    # would be used?
    #
    # NOTE: Do not count spaces or hyphens. For example, 342
    # (three hundred and forty-two) contains 23 letters and
    # 115 (one hundred and fifteen) contains 20 letters. The
    # use of "and" when writing out numbers is in compliance
    # with British usage.

    import time
    start = time.time()

    def to_word(n):
        h = { 1 : "one", 2 : "two", 3 : "three", 4 : "four",
              5 : "five", 6 : "six", 7 : "seven", 8 : "eight",
              9 : "nine", 10 : "ten", 11 : "eleven", 12 : "twelve",
              13 : "thirteen", 14 : "fourteen", 15 : "fifteen",
              16 : "sixteen", 17 : "seventeen", 18 : "eighteen",
              19 : "nineteen", 20 : "twenty", 30 : "thirty",
              40 : "forty", 50 : "fifty", 60 : "sixty",
              70 : "seventy", 80 : "eighty", 90 : "ninety",
              100 : "hundred", 1000 : "thousand" }

        word = ""

        # Reverse the digits so position (ones, tens,
        # hundreds, ...) can be easily determined
        a = [int(x) for x in str(n)[::-1]]

        # Thousands position
        if (len(a) == 4 and a[3] != 0):
            # This can only be one thousand based
            # on the problem/method constraints
            word = h[a[3]] + " thousand "

        # Hundreds position
        if (len(a) >= 3 and a[2] != 0):
            word += h[a[2]] + " hundred"
            # Add "and" if the tens or ones position is
            # occupied with a non-zero value.
            # Note: routine is broken up this way for [my] clarity.
            if (len(a) >= 2 and a[1] != 0):     # catch 10 - 99
                word += " and"
            elif len(a) >= 1 and a[0] != 0:     # catch 1 - 9
                word += " and"

        # Tens and ones position
        tens_position_value = 99
        if (len(a) >= 2 and a[1] != 0):
            # Calculate the tens position value from the
            # first and second elements in the array,
            # e.g. (8 * 10) + 1 = 81
            tens_position_value = int(a[1]) * 10 + a[0]
            if tens_position_value <= 20:
                # If the tens position value is 20 or less
                # there's an entry in the hash. Use it and there's
                # no need to consider the ones position
                word += " " + h[tens_position_value]
            else:
                # Determine the tens position word by
                # multiplying by 10 first, e.g. 8 * 10 = h[80].
                # We will pick up the ones position word in
                # the next part of the routine
                word += " " + h[(a[1] * 10)]

        if (len(a) >= 1 and a[0] != 0 and tens_position_value > 20):
            # Deal with the ones position where the tens position is
            # greater than 20 or we have a single digit number
            word += " " + h[a[0]]

        # Remove all spaces, since spaces (and hyphens)
        # are not counted
        return word.replace(" ","")

    def to_word_length(n):
        return len(to_word(n))

    print sum([to_word_length(i) for i in xrange(1,1001)])
    print "Elapsed Time:", (time.time() - start) * 1000, "millisecs"
    a = raw_input('Press return to continue')
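    A quick way to sanity-check the routine against the examples given in the problem statement (these two asserts are my addition, not part of the original solution):

    assert to_word_length(342) == 23   # "three hundred and forty-two"
    assert to_word_length(115) == 20   # "one hundred and fifteen"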

    Read the article

  • Expanding the Partner Ecosystem with Third-Party Plug-ins

    - by Joe Diemer
    Oracle Enterprise Manager’s extensibility capabilities are designed to allow customers and partners to adapt Enterprise Manager for management of heterogeneous environments with plug-ins and connectors.  Third-party developers continue to take advantage of Oracle Enterprise Manager’s Extensibility Development Kit (EDK) to build plug-ins for Enterprise Manager 12c, such as F5’s BIG-IP Plug-in and Entuity’s Eye of the Storm Network Management Plug-In.  Partners can also validate their plug-ins through the Oracle Validated Integration (OVI) program, which assures customers that the plug-in has been tested and is functionally and technically sound, is designed in a reliable and standardized manner, and operates and performs as documented.  Two very recent examples are the beta versions of Blue Medora's VMware vSphere plug-in and the NetApp Storage plug-in.

VMware vSphere Plug-in by Blue Medora

Blue Medora, an Oracle Partner Network (OPN) “Gold” member, just announced that it is now signing up customers to try a beta version of its new VMware vSphere plug-in for Enterprise Manager 12c.  According to Blue Medora, the vSphere plug-in monitors critical VMware metrics (CPU, memory, disk, network, etc.) at the Host, VM, Cluster and Resource Pool levels.  It has minimal performance impact via an “agentless” approach that requires no installation directly on VMware servers.  It has discovery capabilities for VMware Datacenters, ESX Hosts, Clusters, Virtual Machines, and Datastores.  It offers integration of native VMware events into Enterprise Manager, and it provides over 300 VMware-related health, availability, performance, and configuration metrics.  It comes with more than 30 out-of-the-box pre-defined thresholds and can manage VMware via a series of jobs split between cluster, host and VM target types. The company reports that the Enterprise Manager 12c plug-in supports vSphere versions 4.0, 4.5 and 5.0.  Supported platforms include Linux 64-bit, Windows, AIX, and Solaris SPARC and x86.  Information about the plug-in, including how to sign up for the beta, is available at their web site at http://bluemedora.com after selecting the "Products" tab.

NetApp Storage Plug-in

NetApp believes the combination of storage system monitoring with comprehensive management of Oracle systems with Enterprise Manager will help customers reduce the cost and complexity of managing applications that rely on NetApp storage and Oracle technologies.  So NetApp built a plug-in, which it reports provides comprehensive availability and performance information for NetApp storage systems.  Using the plug-in, Oracle Enterprise Manager customers with NetApp storage solutions can track the association between databases and storage components and thereby respond to faults and IO performance bottlenecks quickly.  With the latest configuration management capabilities, one can also perform drift analysis to make sure all storage systems are configured according to established gold standards.  The company is also now signing up beta customers, which can be done at the NetApp Communities site at https://communities.netapp.com/groups/netapp-storage-system-plug-in-for-oem12c-beta.

Learn More about Enterprise Manager Extensibility

More plug-ins from other partners are coming soon, and I'll be reporting on them here.
To learn more about Enterprise Manager and how customers and partners can build plug-ins using the EDK to manage a multi-vendor data center, go to http://oracle.com/enterprisemanager in the Heterogeneous Management solution area.  The site also lists the plug-ins available with information on how to obtain them.  More info about the Oracle Validated Integration program can be found at the OPN Enterprise Manager Knowledge Zone in the "Develop" tab.

    Read the article

  • Ubuntu 12.04 LTS initramfs-tools dependency issue

    - by Mike
    I know this has been asked several times, but each issue and resolution seems different. I've tried almost everything I could think of, but I can't fix this. I have a VM (VMware I think) running 12.04.03 LTS which has stuck dependencies. The VM is on a rented host, running a live system so I don't want to break it (further).

uname -a
Linux support 3.5.0-36-generic #57~precise1-Ubuntu SMP Thu Jun 20 18:21:09 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

Some more:

sudo apt-get update
[sudo] password for tracker:
Reading package lists... Done
Building dependency tree
Reading state information... Done
You might want to run ‘apt-get -f install’ to correct these.
The following packages have unmet dependencies.
 initramfs-tools : Depends: initramfs-tools-bin (< 0.99ubuntu13.1.1~) but 0.99ubuntu13.3 is installed
E: Unmet dependencies. Try using -f.

sudo apt-get install -f
Reading package lists... Done
Building dependency tree
Reading state information... Done
Correcting dependencies... Done
The following extra packages will be installed:
  initramfs-tools
The following packages will be upgraded:
  initramfs-tools
1 upgraded, 0 newly installed, 0 to remove and 2 not upgraded.
2 not fully installed or removed.
Need to get 0 B/50.3 kB of archives.
After this operation, 0 B of additional disk space will be used.
Do you want to continue [Y/n]? Y
dpkg: dependency problems prevent configuration of initramfs-tools:
 initramfs-tools depends on initramfs-tools-bin (<< 0.99ubuntu13.1.1~); however:
  Version of initramfs-tools-bin on system is 0.99ubuntu13.3.
dpkg: error processing initramfs-tools (--configure):
 dependency problems - leaving unconfigured
No apport report written because the error message indicates it's a follow-up error from a previous failure.
dpkg: dependency problems prevent configuration of apparmor:
 apparmor depends on initramfs-tools; however:
  Package initramfs-tools is not configured yet.
dpkg: error processing apparmor (--configure):
 dependency problems - leaving unconfigured
No apport report written because the error message indicates it's a follow-up error from a previous failure.
Errors were encountered while processing:
 initramfs-tools
 apparmor
E: Sub-process /usr/bin/dpkg returned an error code (1)

If I look at the policy behind initramfs-tools / bin I get:

apt-cache policy initramfs-tools
initramfs-tools:
  Installed: 0.99ubuntu13.1
  Candidate: 0.99ubuntu13.3
  Version table:
     0.99ubuntu13.3 0
        500 http://gb.archive.ubuntu.com/ubuntu/ precise-updates/main amd64 Packages
 *** 0.99ubuntu13.1 0
        100 /var/lib/dpkg/status
     0.99ubuntu13 0
        500 http://gb.archive.ubuntu.com/ubuntu/ precise/main amd64 Packages

apt-cache policy initramfs-tools-bin
initramfs-tools-bin:
  Installed: 0.99ubuntu13.3
  Candidate: 0.99ubuntu13.3
  Version table:
 *** 0.99ubuntu13.3 0
        500 http://gb.archive.ubuntu.com/ubuntu/ precise-updates/main amd64 Packages
        100 /var/lib/dpkg/status
     0.99ubuntu13 0
        500 http://gb.archive.ubuntu.com/ubuntu/ precise/main amd64 Packages

So the issue seems to be I have 0.99ubuntu13.3 for initramfs-tools-bin yet 0.99ubuntu13.1 for initramfs-tools, and can't upgrade to 0.99ubuntu13.3. I've performed apt-get clean/autoclean/install -f/upgrade -f many times but they won't resolve. I can think of only 2 other 'solutions':

1. Edit the dpkg dependency list to trick it into doing the installation with a broken dependency. This seems very dodgy and it would be a last resort.
2. Downgrade both initramfs-tools and initramfs-tools-bin to 0.99ubuntu13 from the precise/main sources and hope that would get them in step.
However I'm not sure if this will be possible, or whether it would introduce more issues. I'm also not sure how this situation arose in the first place. /boot was 96% full; it's now 56% full (it's tiny - 64MB ... this is what I got from the hosting company). Can anyone offer advice please?

    Read the article

  • Do MORE with WebCenter - Webcast Overview & TIES Tour

    - by Michael Snow
    Today's post is from Michelle Huff, Senior Director, Product Management, Oracle WebCenter.

In case you missed it, I presented on a webcast yesterday focused on how you can “Do More with Oracle WebCenter – Expand Beyond Content Management.” As you may remember, we rebranded Oracle’s Enterprise Content Management (ECM) Suite, which some people knew by the wonderfully techie three-letter acronyms -- UCM, URM & IPM -- as Oracle WebCenter Content last year. Since it’s a unified ECM platform, I’ve seen many customers over the years continue to expand the number of content-centric solutions and application integrations powered by WebCenter throughout their organizations. But did you know WebCenter also provides portal, collaboration and web experience management capabilities as well? This enables you to leverage your existing investment in the WebCenter platform, as well as the information you’re managing, to create engaging sites, collaborative spaces, or self-service portals and composite applications. In the webcast I walked through six different ways that you can do more with WebCenter:

1. Collaborative content contribution and sharing environment
2. Share content across intranets and extranets
3. Combine content in composite applications
4. Create targeted online experiences
5. Manage interactive social experiences
6. Optimize multi-channel customer experiences

Joining me on the call was Greg Utecht with TIES. TIES is a joint powers cooperative owned by 46 Minnesota school districts, representing 514 schools, that provides software applications, hardware and software, internet service and professional development designed by educators for education. I had a lot of fun over the past few days talking with Greg about the TIES implementation and future plans with WebCenter. He joined me on the call for a little Q&A to explain how he's using WebCenter today for their iContent implementation for document management, records management and archiving, and how they have expanded their implementation to create a collaborative space called their HRPay System with WebCenter to facilitate collaboration and better engage their users within the school districts. During our conversation a few questions came from the audience about their implementation. They were curious to see how the system looked – so let’s take a peek.

The first screenshot shows the screen that a human resources or payroll worker in one of the member districts would see upon logging in, based on their credentials and role in their district.

The next shows the result of clicking on the SUBSCRIBE link on the main page. It allows the user to subscribe to parts of the portal, which will e-mail him/her when those are updated in any way.

The next shows the screen that a human resources or payroll worker would see upon clicking on the Resources link.

The next shows the screen that appears upon clicking on the Finance Advisory link. It shows the discussion threads and document sharing areas.

The next shows the screen that appears when the forum topic on the preceding screen is clicked.

The next shows the screen portlet up close with shared documents.

The last shows the screen that appears when a shared document is clicked on. Note that there is also a download button and an update button, meaning people can work on these collaboratively.

If you missed the webcast, check it out! You can watch the replay OnDemand HERE.
If you attended the webcast, thanks for joining - I hope you learned a little from the session. I learned that kids are getting digital report cards today! Wow, have times changed with technology. Uh oh, is this when I start saying “You know, back in my day…?”

    Read the article

  • Mass Metadata Updates with Folders

    - by Kyle Hatlestad
    With the release of WebCenter Content PS5, a new folder architecture called 'Framework Folders' was introduced.  It is meant to replace the 'Folders_g' folder architecture.  While the concepts of a folder structure and access to those folders through Desktop Integration Suite remain the same, the underlying architecture of the component has been completely rewritten.  One of the main goals of the new folders is to scale better at large volumes and remove the limit of 1,000 content items or sub-folders within a folder.  Along with the new architecture, it has a new look and a few additional features have been added.  One of those features is Query Folders.  These are folders that are populated simply by a query rather than by literally putting items within the folders.  This is something that the Library has provided, but it always took an administrator to define them through the Web Layout Editor.  Now users can quickly define query folders anywhere within the standard folder hierarchy.

Within Framework Folders is the very handy ability to do metadata updates.  It's similar to the Propagate feature in Folders_g, but there are some key differences that make it very flexible and much more powerful:

- It's used within regular folders and Query Folders.  So the content you're updating doesn't all have to be in the same folder... or in a folder at all.
- The user decides what metadata to propagate.  In Folders_g, the system administrator controls which fields will be propagated using a single administration page.  In Framework Folders, the user decides at that time which fields they want to update.
- You set the value you want on the propagation screen.  In Folders_g, it used the metadata defined on the parent folder to propagate.  With Framework Folders, you supply the new metadata value when you select the fields you want to update.  It does not have to be defined on the parent folder.

Because of these differences, I think the new propagate method is much more useful.  Instead of always having to rely on Archiver or a custom spreadsheet, you can quickly do mass metadata updates right within folders.  Here are the basic steps to perform propagation:

1. First create a folder for the propagation.  You can use a regular folder, but a Query Folder will work as well.
2. Go into the folder to get the results.
3. In the Edit menu, select 'Propagate'.
4. Select the check-box next to each field to update and enter the new value.
5. Click the Propagate button.
6. Once complete, a dialog will appear showing it is complete.

What's also nice is that the process happens asynchronously in the background, which means you can browse to other pages and do other things while it is still working.  You aren't stuck on the page waiting for it to complete.  In addition, you can add a configuration flag to the server to turn on a status indicator icon: set 'FldEnableInProcessIndicator=1' and it will show a working icon as it's doing the propagation.

There is a caveat when using propagation on a Query Folder.  While a propagation on a regular folder will update all of the items within that folder, a Query Folder propagation will only update the first 50 items.  So you may need to run it multiple times depending on the size... and have the query exclude the items as they get updated.

One extra note... Framework Folders is offered as the default folder architecture in the PS5 release of WebCenter Content.
But if you are using WebCenter Content integrated with another product that makes use of folders (WebCenter Portal/Spaces, Fusion Applications, Primavera, etc), you'll need to continue using Folders_g until they are updated to use the new folders.

    Read the article

  • Should I break contract early?

    - by cbang
    About 7 months ago I made the switch from a 5-year permie role (as a support developer in C#) to a contract role. I did this because I was stagnating in my old role. The extra cash contracting is really helping too. Unfortunately my team leader has taken a dislike to me from day 1. He regularly tells me I went out contracting too early, and frequently remarks that people in their 20s have no idea what they are talking about (I am 29). I was recently given the task of configuring our reports via our in-house reporting library. It works off a database-driven criteria base, with controls being loaded as needed. The configs can get fairly complex, with controls having various levels of dependency on each other. I had a short time frame to get 50 reports working, and I was told to just get the basic configuration done, after which they would be handed over to the reporting team for fine tuning, then to the test team. Our updated system was deployed 2 weeks ago, and it turned out that about 15 reports had issues causing incorrect data to be returned. Upon investigation I discovered that the reporting team hadn't even looked at them, and the test team hadn't bothered to test the reports. In spite of this, my team leader has told me that it is 100% my fault. As a result, our help desk got hit hard. I worked back until 2am that night to fix the highest priority issues (on my wedding anniversary!). The next day I arrived at work at 7:45 am to continue with the fixes. I got no thanks, but keep getting repeatedly told by my manager that "I fucked up" and "this is all my fault". I told my team leader I would spend part of my weekend working to fix the remaining issues. His response was "So you fucking should! You fucked it all up!" in front of the rest of the team. I responded "No worries." and left. I spent a decent chunk of my weekend working on it. Within 2 business days of finding out about the issues, I had all the medium and high priority issues fixed. The only comments my team leader has made to me in the last 2 weeks are to tell me how I have caused a big mess, and that it was all my fault. I get this multiple times a day. If I make any jokes to anyone else in the team, I get told not to be a smartass... even though the rest of the team jokes throughout the day. Apart from that, all I get is angry looks any time I am anywhere near the guy. I don't give any response other than "alright" or silence when he starts giving me a hard time. Today we found out that the pilot release for the next stage has been pushed back. My team leader has said this was caused by me (but the higher-ups said no such thing). He also said I have "no understanding of the ramifications of my actions". My question is: should I break contract (I am contracted until June 30) and find another role? No one else in my team will speak up in my favour, as they are contractors too and have no interest in rocking the boat. I could complain to my team leader's boss, but I can't see that helping, as I will still be stuck in the same team. As this is my first contract, I imagine getting the next one will be hard without a reference. I can't figure out if this guy is trying to get me fired up to provoke a confrontation (the guy loves conflict), or if he is just venting anger, or what. Copping this blame day after day is really wearing me down and making me depressed... especially since I have a wife and kid to support.

    Read the article

  • Drinking Our Own Champagne: Fusion Accounting Hub at Oracle

    - by Di Seghposs
    A guest post by Corey West, Senior Vice President, Oracle's Corporate Controller and Chief Accounting Officer

There's no better story to tell than one about Oracle using its own products with blowout success. Here's how this one goes.

As you know, Oracle has increased its share of the software market through a number of high-profile acquisitions. Legally combining companies is a very complicated process -- it can take months to complete, especially for the acquisitions with offices in several countries, each with its own unique laws and regulations. It's a mission-critical and time-sensitive process to roll an acquired company's legacy systems (running vital operations, such as accounts receivable and general ledger (GL)) into the existing systems at Oracle. To date, we've run our primary financial ledgers in E-Business Suite R12 -- and we've successfully met the requirements of the business and closed the books on time every single quarter. But there's always room for improvement, and that comes in the form of Fusion Applications.

We are now live on Fusion Accounting Hub (FAH), which is the first critical step in moving to a full Fusion Financials instance. We started with FAH so that we could design a global chart of accounts. Eventually, every transaction in every country will originate from this global chart of accounts -- it becomes the structure for managing our business more uniformly. In conjunction, we're using Oracle Hyperion Data Relationship Management (DRM) to centralize and automate governance of our global chart of accounts and related hierarchies, which will help us lower our costs and greatly reduce risk.

Each month, we have to consolidate data from our primary general ledgers. We have been able to simplify this process considerably using FAH. We can now submit our primary ledgers running in E-Business Suite (EBS) R12 directly to FAH, eliminating the need for more than 90 redundant consolidation ledgers. Also we can submit incrementally, so if we need to book an adjustment in a primary ledger after close, we can do so without re-opening it and re-submitting. As a result, we have earlier visibility to period-end actuals during the close.

A goal of this implementation, and one that we successfully achieved, is that we are able to use FAH globally with no customization. This means we have the ability to fully deploy ledger sets at the consolidation level, plus we can use standard functionality for currency translation and mass allocations. We're able to use account monitoring and drill-down functionality from the consolidation level all the way through to EBS primary ledgers and sub-ledgers, which allows someone to click through a transaction appearing at the consolidation level clear through to its original source, a significant productivity enhancement when doing research.

We also see a significant improvement in reporting using the Essbase cube and Hyperion Smart View. Specifically, "the addition of an Essbase cube on top of the GL gives us tremendous versatility to automate and speed our elimination process," says Claire Sebti, Senior Director of Corporate Accounting at Oracle.

A highlight of this story is that FAH is running in a co-existence environment. Our plan is to move to Fusion Financials in steps, starting with FAH. Next, our Oracle Financial Services Software subsidiary will move to a full Fusion Financials instance. Then we'll replace our EBS instance with Fusion Financials. This approach allows us to plan in steps, learn as we go, and not overwhelm our teams.
It also reduces the risk that comes with moving the entire instance at once. Maria Smith, Vice President of Global Controller Operations, is confident about how they've positioned themselves to take up more Fusion functionality and is eager to "continue to drive additional efficiency and cost savings." In this story, the happy customers are Oracle controllers, financial analysts, accounting specialists, and our management team that get earlier access to more flexible reporting. "Fusion Accounting Hub simplifies our processes and gives us more transparency into account activity," raves Alex SanJuan, Senior Director, Record to Report Strategic Process Owner. Overall, the team has been very impressed with the usability and functionality of FAH and is pleased with the quantifiable improvements. Claire Sebti states, "Our WD5 close activities have been reduced by at least four hours of system processing time, just for the consolidation group." Fusion Accounting Hub is an inspiring beginning to our Fusion Financials implementation story. There's no doubt it's going to be an international bestseller! Corey West, Senior Vice President, Oracle's Corporate Controller and Chief Accounting Officer

    Read the article

  • SQL SERVER – A Puzzle Part 4 – Fun with SEQUENCE in SQL Server 2012 – Guess the Next Value

    - by pinaldave
    It seems like every weekend I get a new puzzle in my mind. Before continuing, I suggest you read my previous posts here, where I have shared earlier puzzles:

A Puzzle – Fun with SEQUENCE in SQL Server 2012 – Guess the Next Value
A Puzzle Part 2 – Fun with SEQUENCE in SQL Server 2012 – Guess the Next Value
A Puzzle Part 3 – Fun with SEQUENCE in SQL Server 2012 – Guess the Next Value

After reading the above three posts, I am very confident that you will all be ready for the next puzzle. First execute the script I have written here, then guess what the next value will be, as requested in the query.

USE TempDB
GO
-- Create sequence
CREATE SEQUENCE dbo.SequenceID AS DECIMAL(3,0)
    START WITH 1
    INCREMENT BY -1
    MINVALUE 1
    MAXVALUE 3
    CYCLE
    NO CACHE;
GO
SELECT next value FOR dbo.SequenceID;
-- Guess the number
SELECT next value FOR dbo.SequenceID;
-- Clean up
DROP SEQUENCE dbo.SequenceID;
GO

Please note that the starting value is 1, the increment is the negative value -1, the minimum value is 1 and the maximum value is 3. Now let us first think through how this should work. In our example the sequence's starting value is 1 and the increment is -1; this means the next value should decrement from 1 to 0. However, the minimum value is 1, which means the value cannot decrement any further. What will happen here? The natural assumption is that it should throw an error. How many of you are assuming the query will throw an ERROR? Well, you are WRONG! Do not blame yourself; it is my fault, as I have told you only half of the story. If you voted for an error, continue by running the above code in SQL Server Management Studio: instead of erroring out, the second SELECT gives us the result value 3.

To understand the answer, carefully observe the original syntax used to create the SEQUENCE – there is a keyword CYCLE. This keyword cycles the values between the minimum and maximum value, and when one end of the range is exhausted it cycles the values from the other end. As we have a negative increment, when the query reaches the minimum value, the lower end, it will cycle from the maximum value. Here the maximum value is 3, so the next logical value is 3. If your business requirement is that the sequence should throw an error when it reaches the maximum or minimum value, you should not use the keyword CYCLE, and it will behave as just discussed (see the sketch following this post).

I hope you are enjoying the puzzles as much as I am. If you have any interesting puzzle to share, please do share it with me and I will share it on the blog with due credit to you. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
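As mentioned above, here is a quick sketch of the same script without the CYCLE keyword. The sequence name is my own hypothetical choice and the script is my illustration, not part of the original puzzle; it shows the error behavior you get when CYCLE is omitted (NO CYCLE is the default):

USE TempDB
GO
-- Same definition as the puzzle's sequence, but with NO CYCLE
CREATE SEQUENCE dbo.SequenceIDNoCycle AS DECIMAL(3,0)
    START WITH 1
    INCREMENT BY -1
    MINVALUE 1
    MAXVALUE 3
    NO CYCLE
    NO CACHE;
GO
SELECT next value FOR dbo.SequenceIDNoCycle; -- returns 1
SELECT next value FOR dbo.SequenceIDNoCycle; -- raises an error: the sequence has reached its minimum value
-- Clean up
DROP SEQUENCE dbo.SequenceIDNoCycle;
GO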

    Read the article

  • SQL SERVER – Renaming Index – Index Naming Conventions

    - by pinaldave
    If you are a regular reader of this blog, you must be aware that there are two kinds of blog posts: 1) those where I share what I have learned recently, and 2) those where I share what I have learned and request your participation. Today's blog post is one where I need your opinion to make this blog post a good reference for the future.

Background Story

Recently I came across a system where users had changed the names of a few of the tables to match their new standard naming convention. The name of a table should be self-explanatory and should explain its purpose without one having to open it or read the documentation. Well, this is not possible every time, but it should still be the goal of any database modeler. I in no way encourage table names that are too long, like 'ContainsDetailsofNewInvoices'. Maybe the name of the table should be 'Invoices', and the table should contain a column with a New/Processed bit field to indicate whether the invoice is processed (if necessary). Coming back to the original story, the database had several tables whose names were changed.

Story Continues…

To continue the story, let me take a simple example. There was a table with the name 'ReceivedInvoice'; it was changed to the new name 'TblInvoices'. As per their new naming standard they had to prefix every table with 'Tbl' and every view with 'Vw'. Personally, I do not see any need for the prefixes but, again, that issue is not up for discussion here. Now, after changing the name of the table, they faced a very interesting situation. They had a few indexes on the table whose names included the name of the table. Let us take an example.

Old Name of Table: ReceivedInvoice
Old Name of Index: Index_ReceivedInvoice1

Here are the new names:

New Name of Table: TblInvoices
New Name of Index: ???

Well, their dilemma was what the new naming convention for the indexes should be. Here is a quick proposal for an index naming convention. Do let me know your opinion.

If the index is a Primary Clustered Index: PK_TableName
If the index is a Non-clustered Index: IX_TableName_ColumnName1_ColumnName2…
If the index is a Unique Non-clustered Index: UX_TableName_ColumnName1_ColumnName2…
If the index is a Columnstore Non-clustered Index: CL_TableName

Here ColumnName is the column on which the index is created. As there can be only one Primary Key index and one Columnstore index per table, they do not require a ColumnName in the name of the index. The purpose of this new naming convention is to increase readability. When any user comes across an index, they will know the details of the index without opening its properties or definition.

T-SQL Script to Rename Indexes

Here is a quick T-SQL script to rename an index (a worked example applying the proposed convention appears right after this post):

EXEC sp_rename N'SchemaName.TableName.IndexName', N'New_IndexName', N'INDEX';
GO

Your Contribution Please

Well, the organization has already defined the above four guidelines, and personally I follow very similar guidelines too. I have seen many variations, like adding the prefix CL for a Clustered Index and NCL for a Non-clustered Index. I have often seen many not using the UX prefix for a unique index but rather using only the generic IX prefix. Now, do you think they have missed anything in the coding standard? Are NCI and CI prefixes additionally required to describe the index names? I once received a suggestion to even add the fill factor to the index name – which I do not recommend at all. What do you think the ideal name of an index should be, so that it explains all of its most important properties? Additionally, you are welcome to vote if you believe changing the name of an index is just a waste of time and energy.
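Here is a hypothetical worked example of that script, applying the proposed convention to the story above. Purely for illustration, assume Index_ReceivedInvoice1 is a non-clustered index on an InvoiceDate column (the column name is my invention, not from the original system):

-- Rename the old index to follow the proposed IX_ convention
-- after the table has become TblInvoices
EXEC sp_rename N'dbo.TblInvoices.Index_ReceivedInvoice1', N'IX_TblInvoices_InvoiceDate', N'INDEX';
GO

Anyone who now comes across IX_TblInvoices_InvoiceDate immediately knows the table and the key column of the index without opening its definition.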
Note: The purpose of this blog post is to encourage all of you to participate with your ideas. I will write follow-up blog posts in the future compiling all the suggestions. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – Asynchronous Update and Timestamp – Check if Row Values are Changed Since Last Retrieve

    - by pinaldave
    Here is a question I received just this morning. “Pinal, our application is much different from other applications you might have come across. In simple words, I would like to call it an asynchronously updated application. We need your quick opinion about a situation we are facing.

From the business side: We have a bidding system (similar to eBay, but not exactly) where multiple parties bid on one item. During the last few minutes of bidding, many parties try to bid at the same time with the same price. When they hit submit, we would like to check whether the original data which they retrieved has changed or not. If the original data which they retrieved is the same, we will accept their new proposed price. If the original data has changed, they will have to resubmit the data with a new price.

From the technical side: We have a row which we retrieve in our application. Multiple users are retrieving the same row. Some of the users will update the value of the row and submit. However, only the very first user should be allowed to update the row, and all the remaining users will have to re-fetch the row and update it once again. We do not want to lock any record, as that will create other problems. Do you have any solution for this kind of situation?”

Fantastic question. I believe there is a good chance that we can use the timestamp datatype in this kind of application. Before we continue, let us see the following simple example.

USE tempdb
GO
CREATE TABLE SampleTable (ID INT, Col1 VARCHAR(100), TimeStampCol TIMESTAMP)
GO
INSERT INTO SampleTable (ID, Col1)
VALUES (1, 'FirstVal')
GO
SELECT ID, Col1, TimeStampCol
FROM SampleTable st
GO
UPDATE SampleTable
SET Col1 = 'NextValue'
GO
SELECT ID, Col1, TimeStampCol
FROM SampleTable st
GO
DROP TABLE SampleTable
GO

Now let us see the resultset. Here is the simple explanation of the scenario: we created a table with a simple column of the TIMESTAMP datatype. When we inserted the very first value, a timestamp was generated. When we updated any value in that row, the timestamp was updated to a new value. Every single time we update any value in the row, it will generate a new timestamp value.

Now let us apply this to the original question's scenario. In that case, multiple users are retrieving the same row, so everybody will have the same timestamp. Before any user updates any value, they should once again retrieve the timestamp from the table and compare it with the timestamp they have with them. If both timestamps have the same value, the original row has not been updated, and we can safely update the row with the new value. After the initial update, the row will contain a new timestamp. Any subsequent update to the same row should also go through the same process of checking the timestamp value held in memory. In this case, the timestamp from memory will be different from the timestamp in the row. This indicates that the row in the table has changed, and the new update should not be allowed (a minimal sketch of this check follows at the end of this post). I believe timestamp can be very, very useful in this kind of scenario. Is there any better alternative? Please leave a comment with the suggestion and I will post it on the blog with due credit. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
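As referenced above, here is a minimal sketch of that timestamp check, reusing the SampleTable from the example (assume the DROP TABLE at the end has not yet been run). The variable names and values are my own assumptions for illustration; this is one way to implement the compare-and-update, not necessarily how the original application must do it.

-- @OriginalTimeStamp holds the TIMESTAMP (ROWVERSION) value the user read
-- when they first retrieved the row; it is an 8-byte binary value.
DECLARE @OriginalTimeStamp BINARY(8);
SELECT @OriginalTimeStamp = TimeStampCol FROM SampleTable WHERE ID = 1;

-- ... time passes; other users may update the same row ...

-- The update succeeds only if the row still carries the original timestamp.
UPDATE SampleTable
SET Col1 = 'UserProposedValue'
WHERE ID = 1
  AND TimeStampCol = @OriginalTimeStamp;

-- If no row was updated, someone else changed it first:
-- the user must re-fetch the row (and its new timestamp) and resubmit.
IF @@ROWCOUNT = 0
    PRINT 'Row changed since last read - re-fetch and try again.';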

    Read the article

< Previous Page | 147 148 149 150 151 152 153 154 155 156 157 158  | Next Page >