Search Results

Search found 14958 results on 599 pages for 'non technical'.


  • Oracle celebrates a successful Oracle CloudWorld in Bogotá

    - by yaldahhakim
    Written by: Diana Tamayo Tovar. Oracle CloudWorld Bogotá began with scattered showers, rain and strong winds, inviting Colombians to spend a whole day in the social, mobile and complete world of Oracle Cloud. The event took place on November 6th with 807 attendees, 15 media representatives and 65 partners, who gathered to share the business value of Cloud along with Oracle executives and Colombian market leaders. Line-of-business leaders in sales and marketing, customer service and support, HR and talent management, and finance and operations shared their ideas with Colombian customers, giving them a chance to learn, discover and engage with the tools, trends and concepts of Cloud.

    The highlights of the event included keynote speakers such as Bob Evans, Chief Communications Officer, and a customer testimonial session with top business leaders from the Colombian insurance, finance, retail, communications and health industries, who shared their innovation experiences and success stories on workforce empowerment, talent management, cloud security, social engagement and productivity, providing best-case scenarios of how Oracle has helped them transform their business with technologies like cloud, social collaboration and mobile applications. The keynote session was preceded by a customer success story from one of the largest virtual network operators in the country, providing an interesting case study of mobile banking innovation and a great customer testimonial of the importance of cross-industry strategies and cloud technology.

    The event provided five different tracks on the main trends in how companies communicate and engage with different audiences, offering a different perspective on the importance of empowering brands through their customers and of trusting and investing in technology for growth, while Oracle University shared their knowledge with “Oracle Cloud Fundamentals,” a training lesson covering Java Cloud, Database Cloud and other Oracle Cloud product technologies and solutions. The rainy day scenario included sideshows of aerial acrobatics and speed painting performances to recreate the environment of modern and flexible Cloud solutions in a colorful and creative way.

    Oracle CloudWorld Bogotá was a great opportunity to expose invalid cloud myths and address the main concerns of Colombian customers towards cloud, considering IDC Latin America studies stating that 93% of Colombian business leaders are interested in cloud but only 47% understand its business value. Spending a day in the cloud with 6 demoground stations, conference sessions, interesting case studies and customer testimonials will surely widen the endless market opportunities for Colombian customers, leaving them amazed at how Oracle Cloud works towards integration with other environments, non-Oracle applications, social media and mobile devices with a bulletproof security infrastructure.

    Read the article

  • Python Coding standards vs. productivity

    - by Shroatmeister
    I work for a large humanitarian organisation, on a project building software that could help save lives in emergencies by speeding up the distribution of food. Many NGOs desperately need our software and we are weeks behind schedule.

    One thing that worries me in this project is what I think is an excessive focus on coding standards. We write in python/django and use a version of PEP0008, with various modifications e.g. line lengths can go up to 160 chars and all lines should go that long if possible, no blank lines between imports, line wrapping rules that apply only to certain kinds of classes, lots of templates that we must use, even if they aren't the best way to solve a problem etc. etc. One core dev spent a week rewriting a major part of the system to meet the then new coding standards, throwing away several suites of tests in the process, as the rewrite meant they were 'invalid'. We spent two weeks rewriting all the functionality that was lost, and fixing bugs. He is the lead dev and his word carries weight, so he has convinced the project manager that these standards are necessary. The junior devs do as they are told. I sense that the project manager has a strong feeling of cognitive dissonance about all this but nevertheless agrees with it vehemently as he feels unsure what else to do.

    Today I got in serious trouble because I had forgotten to put some spaces after commas in a keyword argument. I was literally shouted at by two other devs and the project manager during a Skype call. Personally I think coding standards are important but also think that we are wasting a lot of time obsessing over them, and when I verbalized this it provoked rage. I'm seen as a troublemaker in the team, a team that is looking for scapegoats for its failings. Since the introduction of the coding standards, the team's productivity has measurably plummeted; however, this only reinforces the obsession, i.e. the lead dev simply blames our non-adherence to standards for the lack of progress. He believes that we can't read each other's code if we don't adhere to the conventions. This is starting to turn sticky.

    Now I am trying to modify various scripts, autopep8, pep8ify and PythonTidy, to try to match the conventions. We also run pep8 against the source code, but there are so many implicit amendments to our standard that it's hard to track them all. The lead dev simply picks faults that the pep8 script doesn't pick up and shouts at us in the next stand-up meeting. Every week there are new additions to the coding standards that force us to rewrite existing, working, tested code. Thank heavens we still have tests (I reverted some commits and fixed a bunch of the ones he removed). All the while there is increasing pressure to meet the deadline.

    I believe a fundamental issue is that the lead dev and another core dev refuse to trust other developers to do their job. But how to deal with that? We can't do our job because we are too busy rewriting everything. I've never encountered this dynamic in a software engineering team. Am I wrong to question their adherence to coding standards? Has anyone else experienced a similar situation and how have they dealt with it successfully? (I'm not looking for a discussion, just actual solutions people have found.)
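    For what it's worth, the written-down half of a modified standard like this can at least be checked mechanically. The sketch below is only an illustration, assuming the pep8/pycodestyle Python API and the 160-character limit mentioned above; the real pain point here, the implicit amendments, is exactly the part no tool can encode.

        import pep8  # shipped as "pycodestyle" in newer releases

        # A minimal sketch: max_line_length mirrors the team's 160-character rule
        # described above; everything else is left at the tool's defaults.
        style = pep8.StyleGuide(max_line_length=160)

        report = style.check_files(['.'])  # recursively check the project tree
        print('%d violations of the written-down rules' % report.total_errors)

    Keeping the agreed rules in one machine-checkable place (or a shared setup.cfg) would at least move arguments about spaces after commas out of stand-up meetings; it does nothing, of course, about rules that exist only in the lead dev's head.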

    Read the article

  • ALC889 - GA-P55-UD4 -no sound - Ubuntu 11.10

    - by george
    I have a computer with a Gigabyte P55A-UD4 motherboard. I have on-board audio - Realtek ALC889. I am using Ubuntu 11.10 and have no sound. Please, please help :). I have tried to install the high definition audio codecs from Realtek but it doesn't work. In the BIOS the Azalia codec is turned on. PS: sorry for my English.

    00:00.0 Host bridge: Intel Corporation Core Processor DRAM Controller (rev 12)
    00:01.0 PCI bridge: Intel Corporation Core Processor PCI Express x16 Root Port (rev 12)
    00:1a.0 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB Universal Host Controller (rev 06)
    00:1a.1 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB Universal Host Controller (rev 06)
    00:1a.2 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB Universal Host Controller (rev 06)
    00:1a.7 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 06)
    00:1b.0 Audio device: Intel Corporation 5 Series/3400 Series Chipset High Definition Audio (rev 06)
    00:1c.0 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 1 (rev 06)
    00:1c.4 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 5 (rev 06)
    00:1c.5 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 6 (rev 06)
    00:1c.6 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 7 (rev 06)
    00:1d.0 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB Universal Host Controller (rev 06)
    00:1d.1 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB Universal Host Controller (rev 06)
    00:1d.2 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB Universal Host Controller (rev 06)
    00:1d.3 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB Universal Host Controller (rev 06)
    00:1d.7 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 06)
    00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev a6)
    00:1f.0 ISA bridge: Intel Corporation 5 Series Chipset LPC Interface Controller (rev 06)
    00:1f.2 SATA controller: Intel Corporation 5 Series/3400 Series Chipset 6 port SATA AHCI Controller (rev 06)
    00:1f.3 SMBus: Intel Corporation 5 Series/3400 Series Chipset SMBus Controller (rev 06)
    01:00.0 VGA compatible controller: nVidia Corporation GT216 [GeForce GT 220] (rev a2)
    01:00.1 Audio device: nVidia Corporation High Definition Audio Controller (rev a1)
    03:00.0 SATA controller: JMicron Technology Corp. JMB362/JMB363 Serial ATA Controller (rev 03)
    03:00.1 IDE interface: JMicron Technology Corp. JMB362/JMB363 Serial ATA Controller (rev 03)
    04:00.0 IDE interface: Marvell Technology Group Ltd. Device 91a3 (rev 11)
    05:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 06)
    06:03.0 IDE interface: Integrated Technology Express, Inc. IT8213 IDE Controller
    06:07.0 FireWire (IEEE 1394): Texas Instruments TSB43AB23 IEEE-1394a-2000 Controller (PHY/Link)
    3f:00.0 Host bridge: Intel Corporation Core Processor QuickPath Architecture Generic Non-core Registers (rev 02)
    3f:00.1 Host bridge: Intel Corporation Core Processor QuickPath Architecture System Address Decoder (rev 02)
    3f:02.0 Host bridge: Intel Corporation Core Processor QPI Link 0 (rev 02)
    3f:02.1 Host bridge: Intel Corporation Core Processor QPI Physical 0 (rev 02)
    3f:02.2 Host bridge: Intel Corporation Core Processor Reserved (rev 02)
    3f:02.3 Host bridge: Intel Corporation Core Processor Reserved (rev 02)

    aplay -l
    card 0: Intel [HDA Intel], device 0: ALC889 Analog [ALC889 Analog]
      Subdevices: 1/1
      Subdevice #0: subdevice #0
    card 0: Intel [HDA Intel], device 1: ALC889 Digital [ALC889 Digital]
      Subdevices: 1/1
      Subdevice #0: subdevice #0
    card 1: NVidia [HDA NVidia], device 3: HDMI 0 [HDMI 0]
      Subdevices: 1/1
      Subdevice #0: subdevice #0
    card 1: NVidia [HDA NVidia], device 7: HDMI 0 [HDMI 0]
      Subdevices: 1/1
      Subdevice #0: subdevice #0
    card 1: NVidia [HDA NVidia], device 8: HDMI 0 [HDMI 0]
      Subdevices: 1/1
      Subdevice #0: subdevice #0
    card 1: NVidia [HDA NVidia], device 9: HDMI 0 [HDMI 0]
      Subdevices: 1/1
      Subdevice #0: subdevice #0

    Read the article

  • What's New in Oracle's EPM System?

    - by jmorourke
    Oracle’s EPM System R11.1.2.2  is now generally available to customers and partners on the download center.  Although the release number doesn’t sound significant, this is a major release of Oracle’s Hyperion EPM Suite with new modules as well as significant enhancements across the suite.  This release was announced back on April 4th as part of Oracle’s Business Analytics Strategy launch, so analytics is a key aspect of the release.  But the three biggest pieces of news in this release are Oracle Hyperion Planning support for the Exalytics In-Memory Machine, the new Project Financial Planning Application and the new Account Reconciliations Manager module. The Oracle Exalytics In-Memory Machine was announced back in October 2011, at Oracle OpenWorld.  It’s the latest installment from Oracle in a line of engineered systems that combine Oracle Sun hardware, with Oracle database and application technologies – in solutions that are designed to provide high scalability and performance for specific tasks.  Exalytics is the first engineered system specifically designed for high performance analytics.  Running in-memory versions of Oracle Essbase, as well as the Oracle TimesTen database and Oracle BI tools, Exalytics provides speed of thought response times for complex analytic processes with advanced visualizations.  Early adopter customers have achieved 5X to 100X faster interactivity and 6X to 10X faster planning cycles.  Hyperion Planning running with Oracle Exalytics will support enterprise-wide planning, budgeting and forecasting with more detailed data, with hundreds to thousands of users across an organization getting speed of thought performance. The new Hyperion Project Financial Planning application delivered with EPM 11.1.2.2 is also great news for Oracle customers.  This application follows on the heels of other special-purpose planning applications that Oracle has delivered for Workforce and Capital Asset planning.  It allows Project Managers to identify project-related expenses and revenues, plan and propose new projects, and track results over time. Finance Managers can evaluate and compare different projects, manage the funding process, monitor and report the actual financial results and impacts of projects and project portfolios. This new application is applicable to capital projects, contract projects and indirect projects like IT and HR projects across all industries.  This application is a great complement to existing Project Management applications, and helps bridge the gap between these applications, and the financial planning and budgeting process. Account reconciliations has to be one of the biggest bottlenecks and risks in the financial close and reporting process, and many organizations rely on spreadsheets and manual processes to perform this critical process.  To help address this problem, Oracle developed an Account Reconciliation Manager module that is being delivered as part of Oracle Hyperion Financial Close Management.   This module helps automate and streamline account reconciliations and eliminates the chances for errors, omissions and fraud.  But unlike standalone account reconciliation packages, it’s integrated with the rest of the Oracle Hyperion Financial Close suite, and can integrate balances from any source system.  
This can help alleviate a major bottleneck in the financial close process, increase accuracy and reduce risk, and can complement existing investments in Hyperion Financial Management, as well as Oracle and non-Oracle transaction processing systems. Other enhancements in this release include an enhanced Web 2.0 interface for Hyperion Planning and Hyperion Financial Management (HFM), configurable dimensionality in HFM, new Predictive Planning feature in Hyperion Planning, new Detailed Profitability feature in Hyperion Profitability and Cost Management, new Smart View interface for Hyperion Strategic Finance, and integration of the Hyperion applications with JD Edwards Financials. For more information about Oracle EPM System R11.1.2.2 check out the links below: Press Release:  http://www.oracle.com/us/corporate/press/1575775 Product Information on O.com:  http://www.oracle.com/us/solutions/business-analytics/overview/index.html Product Information on OTN:  http://www.oracle.com/technetwork/middleware/epm/downloads/index.html Webcast Replay:  http://www.oracle.com/us/go/index.html?Src=7317510&Act=65&pcode=WWMK11054701MPP046 Please contact me if you have any questions or need additional information – [email protected]

    Read the article

  • Guessing Excel Data Types

    - by AjarnMark
    Note to Self: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Jet\4.0\Engines\Excel: TypeGuessRows = 0 means scan everything.

    Note to Others: About 10 years ago I stumbled across this bit of information just when I needed it and it saved my project.  Then a few years later, when it would have been nice but not critical, for some reason I could not find it again anywhere.  Well, now I have stumbled across it again, and to preserve my future self from nightmares and sudden baldness due to pulling my hair out, I have decided to blog it in the hopes that I can find it again this way.

    Here’s the story…  When you query data from an Excel spreadsheet, such as with old-fashioned DTS packages in SQL 2000 (my first reference) or simply with an OLEDB Data Adapter from ASP.NET (recent task), and if you are using the Microsoft Jet 4.0 driver (newer ones may deal with this differently), then you can get funny results where the query reports back that a cell value is null even when you know it contains data. What happens is that Excel doesn’t really have data types.  While you can format information in cells to appear like certain data types (e.g. Date, Time, Decimal, Text, etc.), that is not really defining the cell as being of a certain type like we think of when working with databases.  But, presumably to make things more convenient for the user (programmer), when you issue a query against Excel, the query processor tries to guess what type of data is contained in each column and returns it in an appropriate manner.  This is all well and good IF your data is consistent in every row and matches what the processor guessed.  And, for efficiency’s sake, when the query processor is trying to figure out each column’s data type, it does so by analyzing only the first 8 rows of data (default setting).

    Now here’s the problem: suppose that your spreadsheet contains information about clothing, and one of the columns is Size.  Now suppose that in the first 8 rows, all of your sizes look like 32, 34, 18, 10, and so on, using numbers, but then, somewhere after the 8th row, you have some rows with sizes like S, M, L, XL.  What happens is that by examining only the first 8 rows, the query processor inferred that the column contained numerical data, and then when it hits the non-numerical data in later rows, it comes back blank.  Major bummer, and a real pain to track down if you don’t know that Excel is doing this, because you study the spreadsheet and say, “the data is RIGHT THERE!  WHY doesn’t the query see it?!?!”  And the hair-pulling begins.

    So, what’s a developer to do?  One option is to go to the registry setting noted above and change the DWORD value of TypeGuessRows from the default of 8 to 0 (zero).  Setting this value to zero will force Jet to scan every row in the spreadsheet before making its determination as to what type of data the column contains.  And that means that in the example above, it would have treated the column as a string rather than as numeric, and presto! your query now returns all of the values that you know are in there.

    Of course, there is a caveat… if you are querying large spreadsheets, making Jet scan every row can be quite a performance hit.  You could enter a different number (more than 8) that you believe is a better sampling of rows to make the guess, but you still have the possibility that every row scanned looks alike, but that later rows are different, and that you might get blanks when there really is data there. That’s the type of gamble I really don’t like to take with my data. Anyone with a better approach, or with experience with more recent drivers that handle data types better, please chime in!
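    For what it's worth, the registry change above can also be scripted so it is easy to find and repeat later. The following is a minimal sketch in Python using the standard winreg module; it assumes the script runs with administrative rights, and on 64-bit Windows the 32-bit Jet 4.0 provider's settings may live under the Wow6432Node branch instead.

        import winreg

        # Jet 4.0 Excel driver settings; on 64-bit Windows this may instead be
        # SOFTWARE\Wow6432Node\Microsoft\Jet\4.0\Engines\Excel.
        KEY_PATH = r"SOFTWARE\Microsoft\Jet\4.0\Engines\Excel"

        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                            winreg.KEY_SET_VALUE) as key:
            # 0 = scan every row before guessing each column's data type
            winreg.SetValueEx(key, "TypeGuessRows", 0, winreg.REG_DWORD, 0)

    Keep the caveat above in mind: with TypeGuessRows set to 0, Jet scans the entire sheet before answering, so large spreadsheets pay for the extra accuracy in query time.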

    Read the article

  • Archbeat Link-O-Rama Top 10 Facebook Faves for October 20-26, 2013

    - by OTN ArchBeat
    Here's this week's list of the Top 10 items shared on the OTN ArchBeat Facebook Page from October 27 - November 2, 2013. Visualizing and Process (Twitter) Events in Real Time with Oracle Coherence | Noah Arliss This OTN Virtual Developer Day session explores in detail how to create a dynamic HTML5 Web application that interacts with Oracle Coherence as it’s processing events in real time, using the Avatar project and Oracle Coherence’s Live Events feature. Part of OTN Virtual Developer Day: Harnessing the Power of Oracle WebLogic and Oracle Coherence, November 5, 2013. 9am to 1pm PT / 12pm to 4pm ET / 1pm to 5pm BRT. Register now! HTML5 Application Development with Oracle WebLogic Server | Doug Clarke This free OTN Virtual Developer Day session covers the support for WebSockets, RESTful data services, and JSON infrastructure available in Oracle WebLogic Server. Part of OTN Virtual Developer Day: Harnessing the Power of Oracle WebLogic and Oracle Coherence, November 5, 2013. 9am to 1pm PT / 12pm to 4pm ET / 1pm to 5pm BRT. Register now! Video: ADF BC and REST services | Frederic Desbiens Spend a few minutes with Oracle ADF principal product manager Frederic Desbiens and learn how to publish ADF Business Components as RESTful web services. One Client Two Clusters | David Felcey "Sometimes its desirable to have a client connect to multiple clusters, either because the data is dispersed or for instance the clusters are in different locations for high availability," says David Felcey. David shows you how in this post, which includes a simple example. Exceptions Handling and Notifications in ODI | Christophe Dupupet Oracle Fusion Middleware A-Team director Christophe Dupupet reviews the techniques that are available in Oracle Data Integrator to guarantee that the appropriate individuals are notified in the event that ODI processes are impacted by network outages or other mishaps. Securing WebSocket applications on Glassfish | Pavel Bucek WebSocket is a key capability standardized into Java EE 7. Many developers wonder how WebSockets can be secured. One very nice characteristic for WebSocket is that it in fact completely piggybacks on HTTP. In this post Pavel Bucek demonstrates how to secure WebSocket endpoints in GlassFish using TLS/SSL. Oracle Coherence, Split-Brain and Recovery Protocols In Detail | Ricardo Ferreira Ricardo Ferreira's article "provides a high level conceptual overview of Split-Brain scenarios in distributed systems," focusing on a "specific example of cluster communication failure and recovery in Oracle Coherence." Non-programmatic Authentication Using Login Form in JSF (For WebCenter & ADF) | JayJay Zheng Oracle ACE JayJay Zheng shares an approach that "avoids the programmatic authentication and works great for having a custom login page developed in WebCenter Portal integrated with OAM authentication." The latest article in the Industrial SOA series looks at mobile computing and how companies are developing SOA to go. http://pub.vitrue.com/PUxT Tech Article: SOA in Real Life: Mobile Solutions The ACE Director Thing | Dr. Frank Munz Frank Munz finally gets around to blogging about achieving Oracle ACE Director status and shares some interesting insight into what will change—and what won't—thanks to that new status. A good, short read for those interested in learning more about the Oracle ACE program. Thought for the Day "Even if you're on the right track, you'll get run over if you just sit there." 
— Will Rogers, American humorist (November 4, 1879 – August 15, 1935) Source: brainyquote.com

    Read the article

  • Securing Flexfield Value Sets in EBS 12.2

    - by Sara Woodhull
    Release 12.2 includes a new feature: flexfield value set security. This new feature gives you additional options for ensuring that different administrators have non-overlapping responsibilities, which in turn provides checks and balances for sensitive activities.  Separation of Duties (SoD) is one of the key concepts of internal controls and is a requirement for many regulations including: Sarbanes-Oxley (SOX) Act Health Insurance Portability and Accountability Act (HIPAA) European Union Data Protection Directive. Its primary intent is to put barriers in place to prevent fraud or theft by an individual acting alone. Implementing Separation of Duties requires minimizing the possibility that users could modify data across application functions where the users should not normally have access. For flexfields and report parameters in Oracle E-Business Suite, values in value sets can affect functionality such as the rollup of accounting data, job grades used at a company, and so on. Controlling access to the creation or modification of value set values can be an important piece of implementing Separation of Duties in an organization. New Flexfield Value Set Security feature Flexfield value set security allows system administrators to restrict users from viewing, adding or updating values in specific value sets. Value set security enables role-based separation of duties for key flexfields, descriptive flexfields, and report parameters. For example, you can set up value set security such that certain users can view or insert values for any value set used by the Accounting Flexfield but no other value sets, while other users can view and update values for value sets used for any flexfields in Oracle HRMS. You can also segregate access by Operating Unit as well as by role or responsibility.Value set security uses a combination of data security and role-based access control in Oracle User Management. Flexfield value set security provides a level of security that is different from the previously-existing and similarly-named features in Oracle E-Business Suite: Function security controls whether a user has access to a specific page or form, as well as what operations the user can do in that screen. Flexfield value security controls what values a user can enter into a flexfield segment or report parameter (by responsibility) during routine data entry in many transaction screens across Oracle E-Business Suite. Flexfield value set security (this feature, new in Release 12.2) controls who can view, insert, or update values for a particular value set (by flexfield, report, or value set) in the Segment Values form (FNDFFMSV). The effect of flexfield value set security is that a user of the Segment Values form will only be able to view those value sets for which the user has been granted access. Further, the user will be able to insert or update/disable values in that value set if the user has been granted privileges to do so.  Flexfield value set security affects independent, dependent, and certain table-validated value sets for flexfields and report parameters. Initial State of the Feature upon Upgrade Because this is a new security feature, it is turned on by default.  When you initially install or upgrade to Release 12.2.2, no users are allowed to view, insert or update any value set values (users may even think that their values are missing or invalid because they cannot see the values).  
You must explicitly set up access for specific users by enabling appropriate grants and roles for those users.We recommend using flexfield value set security as part of a comprehensive Separation of Duties strategy. However, if you choose not to implement flexfield value set security upon upgrading to or installing Release 12.2, you can enable backwards compatibility--users can access any value sets if they have access to the Values form--after you upgrade. The feature does not affect day-to-day transactions that use flexfields.  However, you must either set up specific grants and roles or enable backwards compatibility before users can create new values or update or disable existing values. For more information, see: Release 12.2 Flexfield Value Set Security Documentation Update for Patch 17305947:R12.FND.C (Document 1589204.1) R12.2 TOI: Implement and Use Application Object Library (AOL) - Flexfields Security and Separation of Duties for Value Sets (recorded training)

    Read the article

  • Top 10 Linked Blogs of 2010

    - by Bill Graziano
    Each week I send out a SQL Server newsletter and include links to interesting blog posts.  I’ve linked to over 500 blog posts so far in 2010.  Late last year I started storing those links in a database so I could do a little reporting.  I tend to link to posts related to the OLTP engine.  I also try to link to the individual blogger in the group blogs.  Unfortunately that wasn’t possible for the SQLCAT and CSS blogs.  I also have a real weakness for posts related to PASS. These are the top 10 blogs that I linked to during the year, ordered by the number of posts I linked to.

    Paul Randal – Paul writes extensively on the internals of the relational engine.  Lots of great posts around transactions, the transaction log, disaster recovery, corruption, indexes and DBCC.  I also linked to many of his SQL Server myths posts.

    Glenn Berry – Glenn writes very interesting posts on how hardware affects SQL Server.  I especially like his posts on the various CPU platforms.  These aren’t necessarily topics that I’m searching for but I really enjoy reading them.

    The SQLCAT Team – This Microsoft team focuses on the largest and most interesting SQL Server installations.  They regularly publish white papers and best practices.

    SQL Server CSS Team – These are the top engineers from the Microsoft Customer Service and Support group.  These are the folks you finally talk to after your case has been escalated about 20 times.  They write about the interesting problems they find.

    Brent Ozar – The posts I linked to mostly focused on the relational engine: CPU, NUMA, SSD drives, performance monitoring, etc.  But Brent writes about a real variety of topics including blogging, social networking, speaking, the MCM, SQL Azure and anything else that seems to strike his fancy.  His posts are always well written and thought provoking.

    Jeremiah Peschka – A number of Jeremiah’s posts weren’t about SQL Server.  He’s very active in the “NoSQL” area and I linked to a number of those posts.  I think it’s important for people to know what other technologies are out there.

    Brad McGehee – Brad writes about being a DBA, including maintenance plans, DBA checklists, compression and audit.

    Thomas LaRock – I linked to a variety of posts from PBM to networking to 24 Hours of PASS to TDE.  Just a real variety of topics.  Tom always writes with an interesting style, usually mixing in a movie theme and/or bacon.

    Aaron Bertrand – Many of my links this year were Denali features.  He also had a great series on bad habits to kick.

    Michael J. Swart – This last one surprised me.  There are some well known SQL Server bloggers below Michael on this list.  I linked to posts on indexes, hierarchies, transactions and I/O performance and a variety of other engine related posts.  All are interesting and well thought out.  Many of his non-SQL posts are also very good.  He seems to have an interest in puzzles and other brain teasers.  Michael, I won’t be surprised again!

    Read the article

  • Bad Spot to Be In: Playing Catch-up with Mobile Advertising

    - by Mike Stiles
    You probably noticed, there’s a mass migration going on from online desktop/laptop usage to smartphone/tablet usage.  It’s an indicator of how we live our lives in the modern world: always on the go, with no intention of being disconnected while out there. Consequently, paid as it relates to mobile advertising is taking the social spotlight. eMarketer estimated that in 2013, US adults would spend about 2 hours, 21 minutes a day on mobile, not counting talking time. More people in the world own smartphones than own toothbrushes (bad news I suppose if you’re marketing toothpaste). They’re using those mobile devices to access social networks, consuming at least 17% of their mobile time on them. Frankly, you don’t need a deep dive into mobile usage stats to know what’s going on. Just look around you in any store, venue or coffee shop. It’s really obvious…our mobile devices are now where we “are,” so that’s where marketers can increasingly reach us. And it’s a smart place for them to do just that. Mobile devices can be viewed more and more as shopping facilitators. Usually when someone is on mobile, they are not in passive research mode. They are likely standing near a store or in front of a product, using their mobile to seek reassurance that buying that product is the right move. They are the hottest of hot prospects. Consider that 4 out of 5 consumers use smartphones to shop, 52% of Americans use mobile devices for in-store for research, 70% of mobile searches lead to online action inside of an hour, and people that find you on mobile convert at almost 3x the rate as those that find you on desktop or laptop. But what are marketers doing? Enter statistics from Mary Meeker’s latest State of the Internet report. Common sense says you buy advertising where people are spending their eyeball time, right? But while mobile is 20% of media use and rising, the ad spend there is 4%. Conversely, while print usage is at 5% and falling, ad spend there is 19%. We all love nostalgia, but come on. There are reasons marketing dollar migration to mobile has not matched user migration, including the availability of mobile ad products and the ability to measure user response to mobile ads. But interesting things are happening now. First came Facebook’s mobile ad, which let app developers pay to get potential downloads. Then their mobile ad network was announced at F8, allowing marketers to target users across non-Facebook apps while leveraging the wealth of diverse data Facebook has on those users, a big deal since Nielsen has pointed out mobile apps make up 89% of the media time spent on mobile. Twitter has a similar play in motion with their MoPub acquisition. And now mobile deeplinks have arrived, which can take users straight to sub-pages of mobile apps for a faster, more direct shopper/researcher user experience. The sooner the gratification, the smoother and faster the conversion. To be clear, growth in mobile ad spending is well underway. After posting $13.1 billion in 2013, Gartner expects global mobile ad spending to reach $18 billion this year, then go to $41.9 billion by 2017. Cheap smartphones and data plans are spreading worldwide, further fueling the shift to mobile. Mobile usage in India alone should grow 400% by 2018. And, of course, there’s the famous statistic that mobile should overtake desktop Internet usage this year. How can we as marketers mess up this opportunity? Two ways. 
    We could position ourselves in perpetual “catch-up” mode and keep spending ad dollars where the public used to be. And we could annoy mobile users with horrid old-school marketing practices. Two-thirds of users told Forrester they think interruptive in-app ads are more annoying than TV ads. Make sure your brand’s social marketing technology platform is delivering a crystal-clear picture of your social connections so the mobile touch point is highly relevant, mobile-optimized, and delivering real value and satisfying experiences. Otherwise, all we’ve done is find a new way to be unwanted. @mikestiles @oraclesocial
    Photo: Kate Mallatratt, freeimages.com

    Read the article

  • Barcodes and Bugs

    - by Tim Dexter
    A great mail from Mike at Browning last week. He has been through the wringer getting his BIP barcoding sorted out, but he's now out of the woods. Here's the final result. By way of explanation, an excerpt from Mike's email: "This is an example of the GS1_128 carton shipping labels we are now producing with BIP in our web application for our vendors who drop ship products to our dealers. It produces 4 labels per printed page, in PDF format, on peel & stick label paper. Each label has a unique carton number, and a unique carton serial number in the SSCC-18 barcode. This example is for Cabelas (each customer has slightly different GS1-128 label format requirements – custom template for each - a pain!). I am using custom java encoders I wrote for the UPC and SSCC-18 barcodes, and a standard encoder (code128b) for the ShipTo zip barcode. Is there any way yet to get around that SUPER ANNOYING bug when opening the rtf template in MS Word, where it replaces my xsl code text in the barcode fields with gibberish??? Every time I open it I have to re-enter all the xsl code. Not only to be able to read & edit it, but also to get it to work in BIP (BIP doesn’t like the gibberish if I upload the template that has it)."

    Mike's last point, regarding the annoying bug in the template builder, is one that I have experienced occasionally. The development team have looked at it and found it to be an issue with MS Word and not a plugin problem. That's all well and good, but how can you get around it? Well, you can take advantage of the font mapping that BIP offers to get the barcodes into the PDF output. As many of you know, to get a barcode font to appear in the PDF output you need to use the xdo.cfg file in the template builder config directory. You would normally have an entry such as this to map a barcode font so that it renders in the PDF output when testing from the template builder plugin:

        <font family="Code 128" style="normal" weight="normal">
          <truetype path="C:\windows\fonts\128R00.TTF" />
        </font>

    Mike's issue is only present when the form field is highlighted with a barcode font; the other fields in the template are OK. What you can do to get around the issue is to bend the config entry so that you avoid using the barcode font in the template at all. Change the entry to something like:

        <font family="Calibri" style="normal" weight="normal">
          <truetype path="C:\windows\fonts\128R00.TTF" />
        </font>

    Note that we are mapping Calibri, a human-readable and non-'erroring' font in the template, to the Code 128 barcode font. Where you used to highlight the field with the barcode font in MS Word, you now use the Calibri font instead. At run time, BIP will look for the Calibri font mapping and will drop in the Code 128 font. Of course, Calibri is just an example; you need to pick a font that you are not going to use anywhere else in the layout.

    Read the article

  • Use Your Favorite Wallpapers in Windows 7 Starter Edition

    - by Asian Angel
    If you have Windows 7 Starter Edition installed on your netbook, the default wallpaper can get old. If you are tired of looking at the default wallpaper, then join us today as we look at changing it with Oceanis Change Background Windows 7.

    Special Notes: This information is quoted directly from the website and needs to be kept in mind when using Oceanis Change Background Windows 7: If the Oceanis Change Background Windows 7 program no longer works properly after installing some Windows Updates, then uninstall and reinstall the Oceanis Change Background Windows 7 program to have it run properly again. If you ever do an in-place upgrade to another, higher-level edition of Windows 7 in the future, then be sure to uninstall this Oceanis Change Background Windows 7 program first to avoid incompatibility issues with it in the new edition of Windows 7. It was designed to only work in Windows 7 Starter edition.

    Before: There it is…the default wallpaper everyone with the Starter Edition gets stuck with. Some people may not mind it, but if you are one of the people who really wants something different then get ready to rejoice.

    After: The install file for Oceanis is contained in a zip file so you will need to unzip it to get started. The install process is quick and simple but you will need to do a system restart afterwards. Once you have restarted your computer this is what your screen will look like…do not panic and think that this is all there is to it. This is just the Starter Screen and can be easily changed… Note: Oceanis will auto-start with Windows each time. Using either the Desktop Icon or the Start Menu Entry, open up the Oceanis Main Window. You will see the set of four default wallpapers shown here. At this point the best thing to do is browse for the appropriate folder where you have all of those wonderful new wallpapers just waiting to be used. Note: We found Stretch to be the best Picture Position setting on our system. For our example we had three ready and waiting. We decided to try out the Wallpaper Slideshow feature first. We chose a time frame and saved our changes. Here are our three wallpapers as they switched through. This can be much more interesting than the default wallpaper. There was only one quirk that we encountered while using the Slideshow Setting. On occasion if we minimized a non-maximized window there would be a leftover partial image in place of the window. Our suggestion? Go with one wallpaper at a time and the settings shown below. These are the settings that we had terrific luck with…only one picture selected, Picture Position = Stretch, & Change Picture Every = Every Day. Using these settings, the Starter Edition acted just like any of the other editions with regard to wallpaper management.

    Conclusion: If you have grown tired of looking at the default wallpaper in Windows 7 Starter Edition then you will certainly appreciate what Oceanis Change Background Windows 7 can do to fix that problem. For more ways to customize your Windows 7 Starter Edition, be sure to check out how to personalize Windows 7 Starter.

    Links: Download Oceanis Change Background Windows 7

    Read the article

  • Where to start with game development?

    - by steven_desu
    I asked this earlier in this thread at stackoverflow.com. One of the early comments redirected me here to gamedev.stackexchange.com, so I'm reposting here. Searching for related questions I found a number of very specific questions, but I'm afraid the specifics have proved fruitless for me and after 4 hours on Google I'm no closer than I started, so I felt reaching out to a community might be in order. First, my goal: I've never made a game before, although I've muddled over the possibility several times. I decided to finally sit down and start learning how to code games, use game engines, etc. All so that one day (hopefully soon) I'll be able to make functional (albeit simple) games. I can start adding complexity later, for now I'd be glad to have a keyboard-controlled camera moving in a 3D world with no interaction beyond that. My background: I've worked in SEVERAL programming languages ranging from PHP to C++ to Java to ASM. I'm not afraid of any challenges that come with learning the new syntax or limitations inherent in a new language. All of my past programming experience, however, has been strictly non-graphical and usually with little or extremely simple interaction during execution. I've created extensive and brilliant algorithms for solving logical and mathematical problems as well as graphing problems. However in every case input was either defined in a file, passed form an HTML form, or typed into the console. Real-time interaction with the user is something with which I have no experience. My question: Where should I start in trying to make games? Better yet- where should I start in trying to create a keyboard-navigable 3D environment? In searching online I've found several resources linking to game engines, graphics engines, and physics engines. Here's a brief summary of my experiences with a few engines I tried: Unreal SDK: The tutorial videos assume that you already have in-depth knowledge of 3D modeling, graphics engines, animations, etc. The "Getting Started" page offers no formal explanation of game development but jumps into how Unreal can streamline processes it assumes you're already familiar with. After downloading the SDK and launching it to see if the tools were as intuitive as they claimed, I was greeted with about 60 buttons and a blank void for my 3D modeling. Clicking on "add volume" (to attempt to add a basic cube) I was met with a menu of 30 options. Panicking, I closed the editor. Crystal Space: The website seemed rather informative, explaining that Crystal Space was just for graphics and the companion software, CEL, provided entity logic for making games. A demo game was provided, which was built using "CELStart", their simple tool for people with no knowledge of game programming. I launched the game to see what I might look forward to creating. It froze several times, the menus were buggy, there were thousands of graphical glitches, enemies didn't respond to damage, and when I closed the game it locked up. Gave up on that engine. IrrLicht: The tutorial assumes I have Visual Studio 6.0 (I have Visual Studio 2010). Following their instructions I was unable to properly import the library into Visual Studio and unable to call any of the functions that they kept using. Manually copying header files, class files, and DLLs into my project's folder - the project failed to properly compile. Clearly I'm not off to a good start and I'm going in circles. Can someone point me in the right direction? 
Should I start by downloading a program like Blender and learning 3D modeling, or should I be learning how to use a graphics engine? Should I look for an all-inclusive game engine, or is it better to try and code my own game logic? If anyone has actually made their own games, I would prefer to hear how they got their start. Also- taking classes at my school is not an option. Nothing is offered.

    Read the article

  • Adding custom interfaces to your mock instance.

    - by mehfuzh
    Previously, I made a post showing how you can leverage the dependent interfaces that are implemented by JustMock during the creation of a mock instance. It was an informative post that lets you understand how JustMock behaves internally for classes or interfaces that implement other interfaces. But the question remains: how do you add your own custom interface to your target mock? In this post, I am going to show you just that. Today I will not start with a dummy class as usual; rather, I will use two of the most common interfaces in the .NET framework and create a mock combining both. Before I start, I would like to point out that in the recent release of JustMock we have extended Mock.Create<T>(..) with support for additional settings through a closure. You can add your own custom interfaces, specify directly the real constructor that should be called, or even set the behavior of your target. Fast-forwarding directly to the point, here is the test code for creating a mock that combines IDisposable and ICloneable using the above-mentioned changeset:

        var myMock = Mock.Create<IDisposable>(x => x.Implements<ICloneable>());
        var myMockAsClonable = myMock as ICloneable;
        bool isCloned = false;

        Mock.Arrange(() => myMockAsClonable.Clone()).DoInstead(() => isCloned = true);

        myMockAsClonable.Clone();

        Assert.True(isCloned);

    Here, we are creating the target mock for IDisposable while also implementing ICloneable. Finally, we use "as" to get the ICloneable reference, then arrange it, act on it and assert that the expectation is met. This is a very rudimentary example; you can do the same for a given class:

        var realItem = Mock.Create<RealItem>(x =>
        {
            x.Implements<IDisposable>();
            x.CallConstructor(() => new RealItem(0));
        });
        var iDispose = realItem as IDisposable;

        iDispose.Dispose();

    Here, I am also calling the real constructor for the RealItem class. Note that you can implement custom interfaces only on non-sealed classes, or else it will end up with a proper exception. Also, this feature doesn't require the profiler; if you are agile or running inside the Silverlight runtime, feel free to try it with the JM add-in turned off :-).

    TIP: The ability to specify the real constructor can be a useful productivity boost when code changes, since you can re-factor the usage with one click using your favorite re-factoring tool.

    That's it for now, and I hope that helps. Enjoy!!

    Read the article

  • You Probably Already Have a “Private Cloud”

    - by BuckWoody
    I’ve mentioned before that I’m not a fan of the word “Cloud”. It’s too marketing-oriented, gimmicky and non-specific. A better definition (in many cases) is “Distributed Computing”. That means that some or all of the computing functions are handled somewhere other than under your specific control. But there is a current use of the word “Cloud” that does not necessarily mean that the computing is done somewhere else. In fact, it’s a vector of Cloud Computing that can better be termed “Utility Computing”. This has to do with the provisioning of a computing resource. That means the setup, configuration, management, balancing and so on that is needed so that a user – which might actually be a developer – can do some computing work. To that person, the resource is just “there” and works like they expect, like the phone system or any other utility. The interesting thing is, you can do this yourself. In fact, you probably already have been, or are now. It’s got a cool new trendy term – “Private Cloud”, but the fact is, if you have your setup automated, the HA and DR handled, balancing and performance tuning done, and a process wrapped around it all, you can call yourself a “Cloud Provider”. A good example here is your E-Mail system. your users – pretty much your whole company – just logs into e-mail and expects it to work. To them, you are the “Cloud” provider. On your side, the more you automate and provision the system, the more you act like a Cloud Provider. Another example is a database server. In this case, the “end user” is usually the development team, or perhaps your SharePoint group and so on. The data professionals configure, monitor, tune and balance the system all the time. The more this is automated, the more you’re acting like a Cloud Provider. Lots of companies help you do this in your own data centers, from VMWare to IBM and many others. Microsoft's offering in this is based around System Center – they have a “cloud in a box” provisioning system that’s actually pretty slick. The most difficult part of operating a Private Cloud is probably the scale factor. In the case of Windows and SQL Azure, we handle this in multiple ways – and we're happy to share how we do it. It’s not magic, and the algorithms for balancing (like the one we started with called Paxos) are well known. The key is the knowledge, infrastructure and people. Sure, you can do this yourself, and in many cases such as top-secret or private systems, you probably should. But there are times where you should evaluate using Azure or other vendors, or even multiple vendors to spread your risk. All of this should be based on client need, not on what you know how to do already. So congrats on your new role as a “Cloud Provider”. If you have an E-mail system or a database platform, you can just put that right on your resume.

    Read the article

  • Kill all the project files!

    - by jamiet
    Like many folks I’m a keen podcast listener, and yesterday my commute was filled by listening to Scott Hunter being interviewed on .NET Rocks about the next version of ASP.NET. One thing Scott said really struck a chord with me. I don’t remember the full quote but he was talking about how the ASP.NET project file (i.e. the .csproj file) is going away. The rationale being that the main purpose of that file is to list all the other files in the project, and that’s something that the file system is pretty good at. In Scott’s own words (that someone helpfully put in the comments): “A file that lists files is really redundant when the OS already does this.” Romeliz Valenciano correctly pointed out on Twitter that there will still be a project.json file; however, there will no longer be a need to keep a list of files in a project file. I suspect project.json will simply contain a list of exclusions where necessary, rather than the current approach where the project file is a list of inclusions.

    On the face of it this seems like a pretty good idea. I’ve long been a fan of convention over configuration and this is a great example of that. Instead of listing all the files in a separate file, just treat all the files in the directory as being part of the project. Ostensibly the approach is: if it’s in the directory, it’s part of the project. Simple.

    Now I’m not an ASP.NET developer, far from it, but it did occur to me that the same approach could be applied to the two Visual Studio project types that I am most familiar with, SSIS & SSDT. Like many people I’ve long been irritated by SSIS projects that display a faux file system inside Solution Explorer. As you can see in the screenshot below, the project has Miscellaneous and Connection Managers folders but no such folders exist on the file system. This may seem like a minor thing but it means useful Solution Explorer features like Show All Files and Open Folder in Windows Explorer don’t work, and quite frankly it makes me feel like a second-class citizen in the Microsoft ecosystem. I’m a developer, treat me like one. Don’t try and hide the detail of how a project works under the covers, show it to me. I’m a big boy, I can handle it! Would it not be preferable to simply treat all the .dtsx files in a directory as being part of a project? I think it would; that’s pretty much all the .dtproj file does anyway (that, and present things in a non-alphabetic order – something else that wildly irritates me), so why not just get rid of the .dtproj file?

    In the case of SSDT the .sqlproj actually does a whole lot more than simply list files, because it also states the BuildAction of each file (Build, NotInBuild, Post-Deployment, etc…), but I see no reason why the convention over configuration approach can’t help us there either. Want to know which is the Post-Deployment script? Well, it’s the one called Post-DeploymentScript.sql! Simple!

    So that’s my new crusade. Let’s kill all the project files (well, the .dtproj & .sqlproj ones anyway). Are you with me? @Jamiet
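    To make the convention-over-configuration idea concrete, here is a toy sketch (nothing Microsoft ships, just an illustration) of the rule argued for above: the project is simply whatever matching files sit in the directory, with an optional exclusion list playing the role a slimmed-down project.json might.

        from pathlib import Path

        def project_members(project_dir, pattern="*.dtsx", exclusions=()):
            """Every file matching the pattern in the directory is part of the
            project; only the exclusions need to be spelled out anywhere."""
            excluded = set(exclusions)
            return sorted(p for p in Path(project_dir).glob(pattern)
                          if p.name not in excluded)

        # Example: an SSIS-style project is just the .dtsx files in its folder.
        for package in project_members("MySsisProject", exclusions=("Scratch.dtsx",)):
            print(package.name)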

    Read the article

  • Xsigo and Oracle's Storage

    - by Philippe Deverchère
    Xsigo, a virtual network infrastructure provider, has recently been acquired by Oracle. Following this acquisition, one might ask why it is important to Oracle and how Oracle's storage is going to benefit in the long term from this virtualized infrastructure layer. Well, the first thing to understand is that Virtual Networking addresses both network and storage connectivity. Oracle Virtual Networking, as the Xsigo technology is now called, connects any server to any network and storage, so this is not just about connecting servers to the Internet or Intranet. To a large extent, it is also about connecting servers to NAS and SAN storage. Connecting servers to storage has become increasingly complex in the past few years because of the strong emergence of virtualization at the Operating System level. 50% of enterprise workloads are now virtualized, up from 18% in 2009, resulting in a strong consolidation of various applications in a high density server footprint. At the same time, server I/O capability increased 8x in the last 8 years. All this has pushed IT administrators to multiply the number of I/O connections in the back-end of their physical servers, resulting in a messy and very hard to manage networking infrastructure. Here is a typical view of a rack back-end when no virtual networking is used. We consider that today:
    - 75% of users have ten or more Ethernet ports per server
    - 85% of users have two or more SAN ports per server
    - 58% have had to add connectivity to a server specifically for VMs
    - 65% consider cable reduction a priority
    The average is 12 or more ports per server, resulting in an extremely complex infrastructure to manage. What Oracle wants to achieve with its Oracle Virtual Networking offering is pretty simple. The objective is to eliminate the complexity through a dramatic reduction of cabling between servers and storage/networks. It is also to provide a software-based management system so that any server can be connected to any network or any storage, on demand, and without physical intervention on the infrastructure. At the end of the day, the picture on the left shows what one wants to get for the back-end of customers' racks: just a couple of connections on each physical server to provide a simple, agile and fast network infrastructure for both storage and networking access. This is exactly what the Oracle Virtual Networking solution does. It transforms a complex, error-prone, difficult to manage and expensive networking infrastructure into a simple, high performance and agile solution for the data center. Practically speaking, and for the sake of simplicity, imagine that each server just hosts a minimal number of physical InfiniBand HCAs (Host Channel Adapters) with two links (for redundancy) onto the Oracle Fabric Interconnect director. Using the Oracle Fabric Manager software, you'll then be able to create virtual NICs and HBAs (called vNICs and vHBAs) that will be seen by the servers as standard NICs and HBAs, and associate them to networks and storage systems which are physically connected to the back-end of the director through standard Fibre Channel and Ethernet GbE/10GbE ports. In addition to this incredibly simple "at-a-click" connectivity capability, the Oracle Virtual Networking solution offers powerful features such as network isolation, Quality of Service, advanced performance monitoring and non-disruptive reconfiguration, migration and scalability of networking infrastructure. 
So let's go back now to our initial question: why is Oracle Virtual Networking especially important to Oracle's storage solutions? After all, one could connect any storage to the back-end of the Oracle Fabric Interconnect directors, right? The answer is pretty simple: since Oracle owns both the virtualized networking infrastructure and the storage (ZFS-SA, Pillar Axiom and tape), it is possible to imagine several ways in the future to add value when it comes to connecting storage to a virtualized storage network: enhanced storage capabilities, converged management between storage and network, improved diagnostic capabilities and optimized integration resulting in higher performance and unique features/functions. Of course, all this is not going to be done overnight, and the future will tell us which evolutions come first. But there is little doubt that the integration of Xsigo within Oracle is going to create opportunities for Oracle's storage!

    Read the article

  • Nec USB 3.0 controller drops connection - Ubuntu 12.04.1

    - by Tom
    I have some serious problems with Technaxx pci-e 302p card. It has uPD720200a NEC chip with 4020 firmware. BIOS recognises it. Sometimes it recognises devices and system mounts them and are functional for few minutes, other times they can't be mount and error occours. After fresh install card worked fine, but after kernel and firmware update it behaves as mentioned. Outputs: uname -a Linux asd-GA-MA770-UD3 3.2.0-30-generic #48-Ubuntu SMP Fri Aug 24 16:52:48 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux lspci -vvv USB controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 04) (prog-if 30 [XHCI]) Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+ Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0, Cache Line Size: 64 bytes Interrupt: pin A routed to IRQ 17 Region 0: Memory at fd8fe000 (64-bit, non-prefetchable) [size=8K] Capabilities: <access denied> Kernel driver in use: xhci_hcd lsusb Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 008 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 009 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub Bus 001 Device 003: ID 07d1:3c0a D-Link System DWA-140 RangeBooster N Adapter(rev.B2) [Ralink RT3072] Bus 005 Device 004: ID 1997:1221 Bus 005 Device 003: ID 15c2:003c SoundGraph Inc. lsmod Module Size Used by nls_iso8859_1 12713 0 nls_cp437 16991 0 vfat 17585 0 fat 61512 1 vfat vesafb 13844 1 saa7134_alsa 18602 1 rfcomm 47604 0 bnep 18281 2 bluetooth 180104 10 rfcomm,bnep tda827x 18182 1 snd_hda_codec_hdmi 32474 1 ir_lirc_codec 12859 0 lirc_dev 19204 1 ir_lirc_codec tda8290 22616 1 arc4 12529 2 snd_hda_codec_realtek 224173 1 ir_mce_kbd_decoder 12777 0 ir_sony_decoder 12510 0 ir_jvc_decoder 12507 0 tuner 27428 1 ir_rc6_decoder 12507 0 snd_hda_intel 33773 5 rt2800usb 22684 0 rt2800lib 58925 1 rt2800usb crc_ccitt 12667 1 rt2800lib rt2x00usb 20762 1 rt2800usb rt2x00lib 51144 3 rt2800usb,rt2800lib,rt2x00usb mac80211 506816 3 rt2800lib,rt2x00usb,rt2x00lib ir_rc5_decoder 12507 0 rc_avermedia_m135a 12526 0 rc_imon_pad 12505 0 ir_nec_decoder 12507 0 cfg80211 205544 2 rt2x00lib,mac80211 snd_hda_codec 127706 3 snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel snd_ctxfi 111202 2 snd_hwdep 13668 1 snd_hda_codec imon 32839 0 snd_pcm 97188 5 saa7134_alsa,snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec,snd_ctxfi snd_seq_midi 13324 0 saa7134 181851 1 saa7134_alsa videobuf_dma_sg 19354 2 saa7134_alsa,saa7134 snd_rawmidi 30748 1 snd_seq_midi snd_seq_midi_event 14899 1 snd_seq_midi joydev 17693 0 rc_core 26412 13 ir_lirc_codec,ir_mce_kbd_decoder,ir_sony_decoder,ir_jvc_decoder,ir_rc6_decoder,ir_rc5_decoder,rc_avermedia_m135a,rc_imon_pad,ir_nec_decoder,imon,saa7134 snd_seq 61896 2 snd_seq_midi,snd_seq_midi_event fglrx 3263886 101 videobuf_core 26390 2 saa7134,videobuf_dma_sg v4l2_common 16454 2 tuner,saa7134 videodev 98259 3 tuner,saa7134,v4l2_common sp5100_tco 13791 0 snd_timer 29990 2 snd_pcm,snd_seq snd_seq_device 14540 3 snd_seq_midi,snd_rawmidi,snd_seq snd 78855 28 
saa7134_alsa,snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel,snd_hda_codec,snd_ctxfi,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device v4l2_compat_ioctl32 17128 1 videodev tveeprom 21249 1 saa7134 i2c_piix4 13301 0 soundcore 15091 1 snd edac_core 53746 0 serio_raw 13211 0 snd_page_alloc 18529 3 snd_hda_intel,snd_ctxfi,snd_pcm edac_mce_amd 23709 0 wmi 19256 0 mac_hid 13253 0 ppdev 17113 0 parport_pc 32866 1 k10temp 13166 0 lp 17799 0 parport 46562 3 ppdev,parport_pc,lp usb_storage 49198 0 uas 18180 0 usbhid 47199 0 hid 99559 1 usbhid firewire_ohci 41000 0 firewire_core 63558 1 firewire_ohci crc_itu_t 12707 1 firewire_core floppy 70365 0 pata_atiixp 13204 2 r8169 62099 0 dmesg | tail after plugging to usb 3.0 port [ 834.871296] sd 9:0:0:0: rejecting I/O to offline device [ 834.871308] sd 9:0:0:0: rejecting I/O to offline device [ 834.871319] sd 9:0:0:0: rejecting I/O to offline device [ 834.871330] sd 9:0:0:0: rejecting I/O to offline device [ 834.871530] sd 9:0:0:0: [sdd] Unhandled error code [ 834.871536] sd 9:0:0:0: [sdd] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK [ 834.871545] sd 9:0:0:0: [sdd] CDB: Read(10): 28 00 0e 8e 48 0a 00 00 3e 00 [ 834.871564] end_request: I/O error, dev sdd, sector 244205578 [ 834.875497] sd 8:0:0:1: Device offlined - not ready after error recovery [ 834.885339] usb 9-2: USB disconnect, device number 2 Are there any other outputs need for answering? I'll post them ASAP. I could of course reject updating the system but I think it's halfway solution. Any help appreciated. BTW USB 2.0 and 1.1 ports run well, card itself runs under win7 as charm. Tom
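
    A few generic checks can help narrow down where the disconnect happens – none of this is specific to the Technaxx card, and the usb9 path below is an assumption based on the lsusb output above (bus 009 being the USB 3.0 root hub):

        # Watch kernel messages while plugging the drive into the USB 3.0 port
        tail -f /var/log/kern.log

        # Show the bus topology and the negotiated speed of each device
        lsusb -t

        # Rule out USB autosuspend as the cause of the drop-outs
        cat /sys/bus/usb/devices/usb9/power/control
        echo on | sudo tee /sys/bus/usb/devices/usb9/power/control

    If the disconnects coincide with an autosuspend or a speed renegotiation in the log, that at least separates a power-management issue from a firmware one.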

    Read the article

  • Handling null values and missing object properties in Silverlight 4

    - by PeterTweed
    Before Silverlight 4 to bind a data object to the UI and display a message associated with either a null value or if the binding path was wrong, you would need to write a Converter.  In Silverlight 4 we find the addition of the markup extensions TargetNullValue and FallbackValue that allows us to display a value when a null value is found in the bound to property and display a value when the property being bound to is not found. This post will show you how to use both markup extensions. Steps: 1. Create a new Silverlight 4 application 2. In the body of the MainPage.xaml.cs file replace the MainPage class with the following code:     public partial class MainPage : UserControl     {         public MainPage()         {             InitializeComponent();             this.Loaded += new RoutedEventHandler(MainPage_Loaded);         }           void MainPage_Loaded(object sender, RoutedEventArgs e)         {             person p = new person() { NameValue = "Peter Tweed" };             this.DataContext = p;         }     }       public class person     {         public string NameValue { get; set; }         public string TitleValue { get; set; }     } This code defines a class called person with two properties.  A new instance of the class is created, only defining the value for one of the properties and bound to the DataContext of the page. 3.  In the MainPage.xaml file copy the following XAML into the LayoutRoot grid:         <Grid.RowDefinitions>             <RowDefinition Height="60*" />             <RowDefinition Height="28*" />             <RowDefinition Height="28*" />             <RowDefinition Height="30*" />             <RowDefinition Height="154*" />         </Grid.RowDefinitions>         <Grid.ColumnDefinitions>             <ColumnDefinition Width="86*" />             <ColumnDefinition Width="314*" />         </Grid.ColumnDefinitions>         <TextBlock Grid.Row="1" Height="23" HorizontalAlignment="Left" Margin="32,0,0,0" Name="textBlock1" Text="Name Value:" VerticalAlignment="Top" />         <TextBlock Grid.Row="2" Height="23" HorizontalAlignment="Left" Margin="32,0,0,0" Name="textBlock2" Text="Title Value:" VerticalAlignment="Top" />         <TextBlock Grid.Row="3" Height="23" HorizontalAlignment="Left" Margin="32,0,0,0" Name="textBlock3" Text="Non Existant Value:" VerticalAlignment="Top" />         <TextBlock Grid.Column="1" Grid.Row="1" Height="23" HorizontalAlignment="Left" Name="textBlock4" Text="{Binding NameValue, TargetNullValue='No Name!!!!!!!'}" VerticalAlignment="Top" Margin="6,0,0,0" />         <TextBlock Grid.Column="1" Grid.Row="2" Height="23" HorizontalAlignment="Left" Name="textBlock5" Text="{Binding TitleValue, TargetNullValue='No Title!!!!!!!'}" VerticalAlignment="Top" Margin="6,0,0,0" />         <TextBlock Grid.Column="1" Grid.Row="3" Height="23" HorizontalAlignment="Left" Margin="6,0,0,0" Name="textBlock6" Text="{Binding AgeValue, FallbackValue='No such property!'}" VerticalAlignment="Top" />    This XAML defines three textblocks – two of which use the TargetNull and one that uses the FallbackValue markup extensions.  4. Run the application and see the person name displayed as defined for the person object, the expected string displayed for the TargetNullValue when no value exists for the boudn property and the expected string displayed for the FallbackValue when the property bound to is not found on the bound object. It's that easy!

    Read the article

  • DAC pack up all your troubles

    - by Tony Davis
    Visual Studio 2010, or perhaps its apparently-forthcoming sister, "SQL Studio", is being geared up to become the natural way for developers to create databases. Central to this drive is the introduction of 'data-tier application components', or DACs. Applications are developed as normal but when it comes to deployment, instead of supplying the DBA with a bunch of scripts to create the required database objects, the developer creates a single DAC Package ("DAC Pack"); a zipped XML file containing all the database objects needed by the application, along with versioning information, policies for deployment, and so on. It's an intriguing prospect. Developers can work on their development database using their existing tools and source control, and then package up the changes into a single DACPAC for deployment and management. DBAs get an "application level view" of how their instances are being used and the ability to collectively, rather than individually, manage the objects. The DBA needing to manage a large number of relatively small databases can use "DAC snapshots" to get a quick overview of what has changed across all the databases they manage. The reason that DAC packs haven't caused more excitement is that they can only be pushed to SQL Server 2008 R2, and they must be developed or inspected using Visual Studio 2010. Furthermore, what we see right now in VS2010 is more of a 'work-in-progress' or 'vision of the future', with serious shortcomings and restrictions that render it unsuitable for anything but small 'non-critical' departmental databases. The first problem is that DAC packs support a limited set of schema objects (corresponding closely to the features available on 'Azure'). This means that Service Broker queues, CLR Objects, and perhaps most critically security (permissions, certificates etc.), are off-limits. Applications that require these objects will need to add them via a post-deployment TSQL script, rather defeating the whole idea. More worrying still is the process for altering a database with a DAC pack. The grand 'collective' philosophy, whereby a single XML file can be used for deploying and managing builds and changes, extends, unfortunately, to database upgrades. Any change to a database object will result in the creation of a new database, copying the data from the old version, nuking the previous one, and then renaming the new one. Simple eh? The problem is that even something as trivial as adding a comment to a stored procedure in a 5GB database will require the server to find at least twice as much space, as well sufficient elbow-room in the transaction log for copying the largest table. Of course, you'll need to take the database offline for the full course of the deployment, which is likely to take a long time if there is a lot of data. This upgrade/rename process breaks the log chain, makes any subsequent full restore operation highly complicated, and will also break log shipping. As with any grand vision, the devil is always in the detail. It's hard to fathom why Microsoft hasn't used a SQL Compare-style approach to the upgrade process, altering a database with a change script, and this will surely be adopted in the near future. Something had to be in place for VS2010, but right now DAC packs only make sense for Azure. For this, they're cute, but hardly compelling. Nevertheless, DBAs would do well to get familiar with VS 2010 and DAC packs. Like it or not, they're both coming. Cheers, Tony.

    Read the article

  • Lubuntu upgrade to 13.04 killed sound with ALSA. How to troubleshoot?

    - by Sven
    After upgrading to 13.04 from 12.10 Lubuntu lost audio playback after unplugging usb soundcard (Polycom) and plugging it back in. Volume control was gray and leading to pulseaudio mixer (not installed) so I uninstalled the pulseaudio package. I also removed and reinstalled the alsa-base package. After restart I have the alsamixer back everything seemingly as usual(volume 100%, unmute) but every sound program gets me errors no matter what device I select. aplay -L: null Discard all samples (playback) or generate zero samples (capture) pulse PulseAudio Sound Server default:CARD=NVidia HDA NVidia, ALC662 rev1 Analog Default Audio Device sysdefault:CARD=NVidia HDA NVidia, ALC662 rev1 Analog Default Audio Device front:CARD=NVidia,DEV=0 HDA NVidia, ALC662 rev1 Analog Front speakers surround40:CARD=NVidia,DEV=0 HDA NVidia, ALC662 rev1 Analog 4.0 Surround output to Front and Rear speakers surround41:CARD=NVidia,DEV=0 HDA NVidia, ALC662 rev1 Analog 4.1 Surround output to Front, Rear and Subwoofer speakers surround50:CARD=NVidia,DEV=0 HDA NVidia, ALC662 rev1 Analog 5.0 Surround output to Front, Center and Rear speakers surround51:CARD=NVidia,DEV=0 HDA NVidia, ALC662 rev1 Analog 5.1 Surround output to Front, Center, Rear and Subwoofer speakers surround71:CARD=NVidia,DEV=0 HDA NVidia, ALC662 rev1 Analog 7.1 Surround output to Front, Center, Side, Rear and Woofer speakers iec958:CARD=NVidia,DEV=0 HDA NVidia, ALC662 rev1 Digital IEC958 (S/PDIF) Digital Audio Output hdmi:CARD=NVidia,DEV=0 HDA NVidia, HDMI 0 HDMI Audio Output dmix:CARD=NVidia,DEV=0 HDA NVidia, ALC662 rev1 Analog Direct sample mixing device dmix:CARD=NVidia,DEV=1 HDA NVidia, ALC662 rev1 Digital Direct sample mixing device dmix:CARD=NVidia,DEV=3 HDA NVidia, HDMI 0 Direct sample mixing device dsnoop:CARD=NVidia,DEV=0 HDA NVidia, ALC662 rev1 Analog Direct sample snooping device dsnoop:CARD=NVidia,DEV=1 HDA NVidia, ALC662 rev1 Digital Direct sample snooping device dsnoop:CARD=NVidia,DEV=3 HDA NVidia, HDMI 0 Direct sample snooping device hw:CARD=NVidia,DEV=0 HDA NVidia, ALC662 rev1 Analog Direct hardware device without any conversions hw:CARD=NVidia,DEV=1 HDA NVidia, ALC662 rev1 Digital Direct hardware device without any conversions hw:CARD=NVidia,DEV=3 HDA NVidia, HDMI 0 Direct hardware device without any conversions plughw:CARD=NVidia,DEV=0 HDA NVidia, ALC662 rev1 Analog Hardware device with all software conversions plughw:CARD=NVidia,DEV=1 HDA NVidia, ALC662 rev1 Digital Hardware device with all software conversions plughw:CARD=NVidia,DEV=3 HDA NVidia, HDMI 0 Hardware device with all software conversions default:CARD=Communicator Default Audio Device sysdefault:CARD=Communicator Default Audio Device front:CARD=Communicator,DEV=0 Polycom Communicator, USB Audio Front speakers surround40:CARD=Communicator,DEV=0 Polycom Communicator, USB Audio 4.0 Surround output to Front and Rear speakers surround41:CARD=Communicator,DEV=0 Polycom Communicator, USB Audio 4.1 Surround output to Front, Rear and Subwoofer speakers surround50:CARD=Communicator,DEV=0 Polycom Communicator, USB Audio 5.0 Surround output to Front, Center and Rear speakers surround51:CARD=Communicator,DEV=0 Polycom Communicator, USB Audio 5.1 Surround output to Front, Center, Rear and Subwoofer speakers surround71:CARD=Communicator,DEV=0 Polycom Communicator, USB Audio 7.1 Surround output to Front, Center, Side, Rear and Woofer speakers iec958:CARD=Communicator,DEV=0 Polycom Communicator, USB Audio IEC958 (S/PDIF) Digital Audio Output dmix:CARD=Communicator,DEV=0 Polycom 
Communicator, USB Audio Direct sample mixing device dsnoop:CARD=Communicator,DEV=0 Polycom Communicator, USB Audio Direct sample snooping device hw:CARD=Communicator,DEV=0 Polycom Communicator, USB Audio Direct hardware device without any conversions plughw:CARD=Communicator,DEV=0 Polycom Communicator, USB Audio Hardware device with all software conversions etc/asound.conf: defaults.ctl.card 1 defaults.pcm.card 1 defaults.pcm.device 1 Following gets same result with both devices. aplay -vv -D front:CARD=NVidia,DEV=0 "Release the Pressure.wav": Playing WAVE 'Release the Pressure.wav' : Signed 16 bit Little Endian, Rate 44100 Hz, Mono aplay: set_params:1087: Channels count non available Guayadeque mp3 playback: AL lib: alsa_open_playback: Could not open playback device 'default': No such file or directory 21:32:14: Error: Gstreamer error 'Configured audiosink playbackbin is not working.' Audacious: ALSA error: snd_mixer_attach failed: No such file or directory. ALSA error: snd_pcm_open failed: No such device. So How do I fix my audio? UPDATE: I removed the usb soundcard and got rid of all alsa config. Everything is working as before the install but it sure feels fragile.
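
    One detail worth double-checking, offered as a guess rather than a confirmed fix: the /etc/asound.conf above sets defaults.pcm.device 1, but the Polycom card only exposes DEV=0 in the aplay -L listing, which would match the "No such device" errors. A minimal test along these lines might help:

        # Confirm which index the Communicator card actually has
        cat /proc/asound/cards

        # Test the USB card directly, bypassing the defaults
        # (card/device names taken from the aplay -L output above)
        aplay -D plughw:CARD=Communicator,DEV=0 "Release the Pressure.wav"

        # If that works, point the defaults at device 0 in /etc/asound.conf:
        #   defaults.ctl.card 1
        #   defaults.pcm.card 1
        #   defaults.pcm.device 0

    Using plughw rather than front also lets ALSA convert the mono WAV instead of failing with "Channels count non available".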

    Read the article

  • Master Data Management for Location Data - Oracle Site Hub

    - by david.butler(at)oracle.com
    Most MDM discussions cover key domains such as customer, supplier, product, service, and reference data. It is usually understood that these domains have complex structures and hundreds if not thousands of attributes that need governing. Location, on the other hand, strikes most people as address data. How hard can that be? But for many industries, locations are complex, and site information is critical to efficient operations and relevant analytics. Retail stores and malls, bank branches, construction sites come to mind. But one of the best industries for illustrating the power of a site mastering application is Oil & Gas.   Oracle's Master Data Management solution for location data is the Oracle Site Hub. It is a location mastering solution that enables organizations to centralize site and location specific information from heterogeneous systems, creating a single view of site information that can be leveraged across all functional departments and analytical systems.   Let's take a look at the location entities the Oracle Site Hub can manage for the Oil & Gas industry: organizations, property, land, buildings, roads, oilfield, service center, inventory site, real estate, facilities, refineries, storage tanks, vendor locations, businesses, assets; project site, area, well, basin, pipelines, critical infrastructure, offshore platform, compressor station, gas station, etc. Any site can be classified into multiple hierarchies, like organizational hierarchy, operational hierarchy, geographic hierarchy, divisional hierarchies and so on. Any site can also be associated to multiple clusters, i.e. collections of sites, and these can be used as a foundation for driving reporting, analysis, organize daily work, etc. Hierarchies can also be used to model entities which are structured or non-structured collections of nodes, like for example routes, pipelines and more. The User Defined Attribute Framework provides the needed infrastructure to add single row attributes groups like well base attributes (well IDs, well type, well structure and key characterizing measures, and more) and well geometry, and multi row attribute groups like well applications, permits, production data, activities, operations, logs, treatments, tests, drills, treatments, and KPIs. Site Hub can also model areas, lands, fields, basins, pools, platforms, eco-zones, and stratigraphic layers as specific sites, tracking their base attributes, aliases, descriptions, subcomponents and more. Midstream entities (pipelines, logistic sites, pump stations) and downstream entities (cylinders, tanks, inventories, meters, partner's sites, routes, facilities, gas stations, and competitor sites) can also be easily modeled, together with their specific attributes and relationships. Site Hub can store any type of unstructured data associated to a site. This could be stored directly or on an external content management solution, like Oracle Universal Content Management. Considering a well, for example, Site Hub can store any relevant associated multimedia file such as: CAD drawings of the well profile, structure and/or parts, engineering documents, contracts, applications, permits, logs, pictures, photos, videos and more. For any site entity, Site Hub can associate all the related assets and equipments at the site, as well as all relationships between sites, between a site and multiple parties, and between a site and any purchasable or sellable item, over time. 
Items can be equipment, instruments, facilities, services, products, production entities, production facilities (pipelines, batteries, compressor stations, gas plants, meters, separators, etc.), support facilities (rigs, roads, transmission or radio towers, airstrips, etc.), supplier products and services, catalogs, and more. Items can just be associated to sites using standard Site Hub features, or they can be fully mastered by implementing Oracle Product Hub. Site locations (addresses or geographical coordinates) are also managed with out-of-the-box address geo-coding capabilities coupled with Google Maps integration to deliver powerful mapping capabilities and spatial data analysis. Locations can be shared between different sites. Centered on the site location, any site can also have associated areas. Site Hub can master any site location specific information, like for example cadastral, ownership, jurisdictional, geological, seismic and more, and any site-centric area specific information, like for example economical, political, risk, weather, logistic, traffic information and more. Now if anyone ever asks you why locations need MDM, think about how all these Oil & Gas entities and attributes would translate into your business locations. To learn more about Oracle's full MDM solution for the digital oil field, here is a link to Roberto Negro's outstanding whitepaper: Oracle Site Master Data Management for mastering wells and other PPDM entities in a digital oilfield context  
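
    To make the hierarchy idea above a little more concrete, here is a toy sketch in plain Python – purely illustrative, and not a representation of Site Hub's actual data model – of a single site that belongs to more than one hierarchy at the same time:

        # Illustrative only: a site classified into multiple hierarchies.
        class Site:
            def __init__(self, name, site_type, attributes=None):
                self.name = name
                self.site_type = site_type          # e.g. "well", "pipeline", "refinery"
                self.attributes = attributes or {}  # user-defined attribute groups
                self.parents = {}                   # hierarchy name -> parent Site

            def add_to_hierarchy(self, hierarchy, parent):
                self.parents[hierarchy] = parent

            def path(self, hierarchy):
                node, chain = self, []
                while node is not None:
                    chain.append(node.name)
                    node = node.parents.get(hierarchy)
                return " / ".join(reversed(chain))

        basin = Site("Permian Basin", "basin")
        field = Site("Field A-12", "oilfield")
        well = Site("Well 7", "well", {"well_type": "producer", "status": "active"})

        field.add_to_hierarchy("geographic", basin)
        well.add_to_hierarchy("geographic", field)
        well.add_to_hierarchy("operational", Site("District North", "district"))

        print(well.path("geographic"))    # Permian Basin / Field A-12 / Well 7
        print(well.path("operational"))   # District North / Well 7

    The same site node answers both a geographic question and an operational one, which is the essence of classifying one mastered location into multiple hierarchies.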

    Read the article

  • Content Challenge: You Can Only Get it Here

    - by Mike Stiles
    Part of the content conundrum for brands is figuring out what kind of content customers would find cool, desirable, and relevant. The mere fact many brands have no idea what this content might be is, in itself, pretty alarming. You’d have to have a pretty thorough lack of involvement with and understanding of your customers to not know what they might like. But despite what should be a great awakening in which consumers are using every technology and trick in the book to shield themselves from ads and commercials, brand self-obsession continues as marketers concentrate on their message, their campaign, what they want to say, and what they want social users to do. When individuals conduct themselves in that same fashion on Facebook and Twitter, it gets tiresome and starts losing value pretty quickly. Their posts eventually get hidden. Conversely, friends who post things that consistently entertain or inform, with little self-marketing desperation involved, win the coveted “show all updates” setting. Of course brands are going to use social to market. It’s pretty much the point of having social in the marketing mix. And yes, people who follow a brand’s Twitter account or “Like” a brand’s Facebook Page implicitly state they want to know what’s going on with that brand’s products and services. But if you have a Facebook friend that assumes you want every one of her posts to be about what wine she likes (Mitsubishi’s current campaign is even based around weeding out pretentious Facebook friends, then running them over), then you know how it must feel for your fans and followers to get a sales pitch for your crackers or whatever you’re selling every single time. Is there such a thing as content that doesn’t sell but that still advances the brand and makes the consumer more involved and valuable? Of course. And perhaps there are no better companies than enterprise brands to do it. Enterprise organizations are large enough to go beyond a product and engage readers/viewers at higher, broader levels…communicating expertise across entire sectors, subjects and industries. You’re going from pitchman to news source, and getting full credit for it as the presenter. A recent GigaOM article pointed out the success a San Francisco-based startup called Crunchyroll is having. Their niche (and they proudly admit it’s a niche) is providing Japanese anime, Korean drama and Asian live action content to countries that can’t get it any other way via licensing deals. Shows are available in HD and on the same day they air in the host country. Crunchyroll not only gets 8 million viewers a month, they have 100,000 paying subscribers at $7-12/month. Got a point, Mike? I do happen to have one. Crunchyroll illustrates the content opportunity enterprise companies have…which is to determine your “area,” the interest graph of your customers, then provide content that speaks to and satisfies those interests that can’t be found anywhere else. At least not in the same style, or of the same quality, or with the same authority. Do what no one else is doing. Provide what no one else is providing in your sector. If underserved users are willing to pay monthly for access to awkwardly moving cartoon dragons, imagine the audience you could attract with free, useful, non-sales content in your customers’ area of interest. It’s an audience you’ll want in place when the time does come to put out that marketing message. A content challenge is better than a content conundrum any day.

    Read the article

  • Project Euler 17: (Iron)Python

    - by Ben Griswold
    In my attempt to learn (Iron)Python out in the open, here’s my solution for Project Euler Problem 17.  As always, any feedback is welcome.

    # Euler 17
    # http://projecteuler.net/index.php?section=problems&id=17
    # If the numbers 1 to 5 are written out in words:
    # one, two, three, four, five, then there are
    # 3 + 3 + 5 + 4 + 4 = 19 letters used in total.
    # If all the numbers from 1 to 1000 (one thousand)
    # inclusive were written out in words, how many letters
    # would be used?
    #
    # NOTE: Do not count spaces or hyphens. For example, 342
    # (three hundred and forty-two) contains 23 letters and
    # 115 (one hundred and fifteen) contains 20 letters. The
    # use of "and" when writing out numbers is in compliance
    # with British usage.
    import time
    start = time.time()

    def to_word(n):
        h = { 1 : "one", 2 : "two", 3 : "three", 4 : "four", 5 : "five",
              6 : "six", 7 : "seven", 8 : "eight", 9 : "nine", 10 : "ten",
              11 : "eleven", 12 : "twelve", 13 : "thirteen", 14 : "fourteen",
              15 : "fifteen", 16 : "sixteen", 17 : "seventeen", 18 : "eighteen",
              19 : "nineteen", 20 : "twenty", 30 : "thirty", 40 : "forty",
              50 : "fifty", 60 : "sixty", 70 : "seventy", 80 : "eighty",
              90 : "ninety", 100 : "hundred", 1000 : "thousand" }

        word = ""

        # Reverse the numbers so position (ones, tens,
        # hundreds,...) can be easily determined
        a = [int(x) for x in str(n)[::-1]]

        # Thousands position
        if (len(a) == 4 and a[3] != 0):
            # This can only be one thousand based
            # on the problem/method constraints
            word = h[a[3]] + " thousand "

        # Hundreds position
        if (len(a) >= 3 and a[2] != 0):
            word += h[a[2]] + " hundred"

            # Add "and" string if the tens or ones
            # position is occupied with a non-zero value.
            # Note: routine is broken up this way for [my] clarity.
            if (len(a) >= 2 and a[1] != 0):
                # catch 10 - 99
                word += " and"
            elif len(a) >= 1 and a[0] != 0:
                # catch 1 - 9
                word += " and"

        # Tens and ones position
        tens_position_value = 99
        if (len(a) >= 2 and a[1] != 0):
            # Calculate the tens position value per the
            # first and second element in array
            # e.g. (8 * 10) + 1 = 81
            tens_position_value = int(a[1]) * 10 + a[0]
            if tens_position_value <= 20:
                # If the tens position value is 20 or less
                # there's an entry in the hash. Use it and there's
                # no need to consider the ones position
                word += " " + h[tens_position_value]
            else:
                # Determine the tens position word by
                # dividing by 10 first. E.g. 8 * 10 = h[80]
                # We will pick up the ones position word later in
                # the next part of the routine
                word += " " + h[(a[1] * 10)]

        if (len(a) >= 1 and a[0] != 0 and tens_position_value > 20):
            # Deal with ones position where tens position is
            # greater than 20 or we have a single digit number
            word += " " + h[a[0]]

        # Trim the empty spaces off both ends of the string
        return word.replace(" ","")

    def to_word_length(n):
        return len(to_word(n))

    print sum([to_word_length(i) for i in xrange(1,1001)])
    print "Elapsed Time:", (time.time() - start) * 1000, "millisecs"
    a=raw_input('Press return to continue')

    Read the article

  • Dropped impression 25 days after restructure

    - by Hamid
    Our website is a non-English property website (moshaver.com), similar to rightmove.co.uk. In September 2012 our website was adversely affected by Panda, causing our incoming Google clicks to drop from around 3,000 to less than a thousand. We were hoping that Google would eventually realize that we are not a spam website and things would get better. However, in August 2013 we were almost sure that we needed to do something, so we started to restructure our web content. We used the canonical tag to remove our search results pages and point to our listing pages, and the noindex tag to remove listing pages which do not have any properties at the moment. We also changed title tags to more friendly ones, in addition to other changes. Our changes were effective on 10th August. As shown in the graph taken from the Google Analytics Search Engine Optimization section, these changes have resulted in an increase in the number of times Google displayed our results in its search results. Our impressions almost doubled starting 15th August. However, as the graph shows, from that date our CTR dropped from around 15% to 8%. This might have been because of our changed title tags (so people were less likely to click on them), or it might be normal for increased impressions. This situation continued until 10th September, when our impressions decreased dramatically to less than a thousand. This is almost 30% of our original impressions (before the website restructure) and 15% of the new impressions. At the same time our CTR increased dramatically to around 50%. I have two theories for this increase. The first one is that these statistics are less accurate for lower impressions. The second one is that Google is now only displaying our results for queries directly related to our website (our name, our url), and not for general terms, such as "apartments in a specific city". The second theory also explains the dramatic decrease in impressions. After digging into the analytics data a little more, I constructed the following table. It displays the breakdown of our impressions, clicks and CTR in different Google products (web and image) and in total. What I understand from this table is that most of our increased impressions after the restructure were in the image search section. I don't think users of image search would be looking for content on our website. Furthermore, it shows that the drop in our web search CTR is not as dramatic as the drop in our overall CTR (-30% compared to -60%). I thought posting it here might help you understand the situation better. Is it possible that Google has tested our new structure for 25 days, and then decided to decrease our impressions because of the new low CTR? Or should we look for another factor? If this is the case, how long does it usually take for Google to give us another chance? It has been one month since our impressions dropped.
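
    For anyone unfamiliar with the two tags mentioned above, the markup involved is generic – the URL below is a placeholder, not moshaver.com's actual structure:

        <!-- On a search results page, point Google at the canonical listing page -->
        <link rel="canonical" href="http://www.example.com/listings/tehran/" />

        <!-- On a listing page that currently has no properties, keep it out of the index -->
        <meta name="robots" content="noindex, follow" />

    The canonical link consolidates ranking signals onto the listing page, while noindex, follow keeps empty pages out of the index without cutting off crawling of the links on them.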

    Read the article

< Previous Page | 524 525 526 527 528 529 530 531 532 533 534 535  | Next Page >