Search Results

Search found 8253 results on 331 pages for 'secure coding'.


  • My Code Kata–A Solution Kata

    - by Glav
    There are many developers and coders out there who like to do code katas to keep their coding ability up to scratch and to practice their skills. I think it is a good idea. While I like the concept, I find them dead boring and of minimal purpose. Yes, they serve to hone your skills, but that's about it. They are often quite abstract, in that they usually focus on a small problem set requiring specific solutions. That is fair enough, as that is how they are designed, but again, I find them quite boring. What I personally like to do is go for something a little larger and a little more fun. It takes a little more time and is not as easily executed as a kata, but it serves the same purpose from a practice perspective and allows me to continue to solve some problems that are not directly part of the initial goal. This means I can cover a broader learning range and have a bit more fun. If I am lucky, sometimes they even end up being useful tools.

    With that in mind, I thought I'd share my current 'kata'. It is not really a code kata, as it is too big; I prefer to think of it as a 'solution kata'. The code is on bitbucket here. What I wanted to do was create a kind of simplistic virtual world where I can create a player, or a class, stuff it into the world, and see if it survives and can navigate its way to the exit. Requirements were pretty simple:

    - Must be able to define a map to describe the world using simple X,Y co-ordinates (Z co-ordinates as well if you feel like getting clever).
    - Should have the concept of entrances, exits, solid blocks, and potentially other materials (again, if you want to get clever).
    - A coder should be able to easily write a class which will act as an inhabitant of the world.
    - An inhabitant will receive stimulus from the world in the form of the surrounding environment and be able to make a decision on an action, which it passes back to the world for processing.
    - At a minimum, an inhabitant will have sight and speed characteristics which determine how far it can 'see' in the world and how fast it can move.
    - Coders who write a really bad inhabitant should not adversely affect the rest of the world.
    - Should allow multiple inhabitants in the world.

    So that was the solution I set out to act as a practice solution and a little bit of fun. It had some interesting problems to solve and I figured, if it turned out OK, I could potentially use it as a 'developer test' for interviews: ask a potential coder to write a class for an inhabitant, show the coder the map they will navigate, but also mention that we will use their code to navigate a map they have not yet seen that is a little more complex. I have been playing with the solution for a short time now and have it working in basic concepts. Below is a screen shot using a very basic console visualiser that shows the map, boundaries, blocks, entrance, exit and players/inhabitants. The yellow asterisks '*' are the players, green 'O' the entrance, purple '^' the exit, and maroon/browny '#' are solid blocks. The players can move around at different speeds, knock into each other, and make directional movement decisions based on what they see and who is around them. It has been quite fun to write, and it is also quite fun to develop different players to inject into the world. The code below shows a really simple implementation of an inhabitant that can work out what to do based on stimulus from the world. It is pretty simple and just tries to move in some direction if there is nothing blocking the path.
    public class TestPlayer : LivingEntity
    {
        public TestPlayer()
        {
            Name = "Beta Boy";
            LifeKey = Guid.NewGuid();
        }

        public override ActionResult DecideActionToPerform(EcoDev.Core.Common.Actions.ActionContext actionContext)
        {
            try
            {
                var action = new MovementAction();
                // Move forward if we can; otherwise try left, then back, then right.
                if (actionContext.Position.ForwardFacingPositions.Length > 0)
                {
                    if (CheckAccessibilityOfMapBlock(actionContext.Position.ForwardFacingPositions[0]))
                    {
                        action.DirectionToMove = MovementDirection.Forward;
                        return action;
                    }
                }
                if (actionContext.Position.LeftFacingPositions.Length > 0)
                {
                    if (CheckAccessibilityOfMapBlock(actionContext.Position.LeftFacingPositions[0]))
                    {
                        action.DirectionToMove = MovementDirection.Left;
                        return action;
                    }
                }
                if (actionContext.Position.RearFacingPositions.Length > 0)
                {
                    if (CheckAccessibilityOfMapBlock(actionContext.Position.RearFacingPositions[0]))
                    {
                        action.DirectionToMove = MovementDirection.Back;
                        return action;
                    }
                }
                if (actionContext.Position.RightFacingPositions.Length > 0)
                {
                    if (CheckAccessibilityOfMapBlock(actionContext.Position.RightFacingPositions[0]))
                    {
                        action.DirectionToMove = MovementDirection.Right;
                        return action;
                    }
                }
                return action;
            }
            catch (Exception ex)
            {
                World.WriteDebugInformation("Player: " + Name, string.Format("Player generated exception: {0}", ex.Message));
                throw; // rethrow as-is; 'throw ex' would reset the stack trace
            }
        }

        private bool CheckAccessibilityOfMapBlock(MapBlock block)
        {
            // A null block (open space) or any enterable block type counts as accessible.
            if (block == null
                || block.Accessibility == MapBlockAccessibility.AllowEntry
                || block.Accessibility == MapBlockAccessibility.AllowExit
                || block.Accessibility == MapBlockAccessibility.AllowPotentialEntry)
            {
                return true;
            }
            return false;
        }
    }

    It is simple and it seems to work well. The world implementation itself decides the stimulus context that is passed to the inhabitant to make an action decision. All movement is carried out on separate threads and timed appropriately to be as fair as possible and to cater for additional skills such as speed, and eventually maybe stamina and strength, with actions like fighting. It is pretty fun to make up random maps and see how your inhabitant does. You can download the code from here. Along the way I have played with parallel extensions to make the compute-intensive stuff spread across all cores, had to heavily factor in visibility of methods and properties (so design of classes was paramount), and worked out movement algorithms that play fairly in the world and properly favour the players with higher abilities, as well as a host of other issues. So that is my 'solution kata'. If I keep going with it, I may develop a web interface for it where people can upload assemblies and watch their player within a web browser visualiser, and maybe even a map designer. What do you do to keep the fires burning?
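    For anyone wanting to experiment, here is a hedged sketch of a second inhabitant built against the same contract shown above. It reuses only the types visible in the TestPlayer sample (LivingEntity, ActionContext, MovementAction, MovementDirection, MapBlock and its accessibility values); any other detail of the EcoDev API is an assumption. It always prefers turning right, a naive right-hugging heuristic that walks the perimeter of many simple maps.

    public class RightHandPlayer : LivingEntity
    {
        public RightHandPlayer()
        {
            Name = "Wall Hugger";
            LifeKey = Guid.NewGuid();
        }

        public override ActionResult DecideActionToPerform(EcoDev.Core.Common.Actions.ActionContext actionContext)
        {
            // Prefer right, then forward, then left, then back. A true wall-follower
            // would also track heading; this is deliberately the simplest version.
            var action = new MovementAction();
            if (CanMove(actionContext.Position.RightFacingPositions))
                action.DirectionToMove = MovementDirection.Right;
            else if (CanMove(actionContext.Position.ForwardFacingPositions))
                action.DirectionToMove = MovementDirection.Forward;
            else if (CanMove(actionContext.Position.LeftFacingPositions))
                action.DirectionToMove = MovementDirection.Left;
            else if (CanMove(actionContext.Position.RearFacingPositions))
                action.DirectionToMove = MovementDirection.Back;
            return action;
        }

        private bool CanMove(MapBlock[] facing)
        {
            // Same accessibility rules as TestPlayer, applied to the nearest block.
            return facing.Length > 0
                && (facing[0] == null
                    || facing[0].Accessibility == MapBlockAccessibility.AllowEntry
                    || facing[0].Accessibility == MapBlockAccessibility.AllowExit
                    || facing[0].Accessibility == MapBlockAccessibility.AllowPotentialEntry);
        }
    }

    Dropping two such players into the same map makes the different movement strategies visible immediately in the console visualiser.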

    Read the article

  • A New Threat To Web Applications: Connection String Parameter Pollution (CSPP)

    - by eric.maurice
    Hi, this is Shaomin Wang. I am a security analyst in Oracle's Security Alerts Group. My primary responsibility is to evaluate the security vulnerabilities reported externally by security researchers on Oracle Fusion Middleware and to ensure timely resolution through the Critical Patch Update. Today, I am going to talk about a serious type of attack: Connection String Parameter Pollution (CSPP). Earlier this year, at the Black Hat DC 2010 conference, two Spanish security researchers, Jose Palazon and Chema Alonso, unveiled a new class of security vulnerabilities which target insecure dynamic connections between web applications and databases. The attack, called Connection String Parameter Pollution (CSPP), specifically exploits the semicolon-delimited database connection strings that are constructed dynamically based on user inputs from web applications. CSPP, if carried out successfully, can be used to steal user identities and hijack web credentials. CSPP is a high-risk attack because of the relative ease with which it can be carried out (low access complexity) and the potential results it can have (high impact).

    In today's blog, we are going to first look at what connection strings are and then review the different ways connection string injections can be leveraged by malicious hackers. We will then discuss how CSPP differs from traditional connection string injection, and the measures organizations can take to prevent this kind of attack.

    In web applications, a connection string is a set of values that specifies the information needed to connect to a backend data repository, in most cases a database. The connection string is passed to a provider or driver to initiate a connection. Vendors or manufacturers write their own providers for different databases. Since there are many different providers and each provider has multiple ways to make a connection, there are many different ways to write a connection string. Here are some examples of connection strings for Oracle Data Provider for .NET (ODP.NET; manufacturer: Oracle; type: .NET Framework class library):

    - Using TNS: Data Source = orcl; User ID = myUsername; Password = myPassword;
    - Using integrated security: Data Source = orcl; Integrated Security = SSPI;
    - Using the Easy Connect naming method: Data Source = username/password@//myserver:1521/my.server.com
    - Specifying pooling parameters: Data Source = myOracleDB; User Id = myUsername; Password = myPassword; Min Pool Size = 10; Connection Lifetime = 120; Connection Timeout = 60; Incr Pool Size = 5; Decr Pool Size = 2;

    There are many variations of connection strings, but the majority are key/value pairs delimited by semicolons. Attacks on connection strings are not new (see, for example, this SANS White Paper on Securing SQL Connection Strings). Connection strings are vulnerable to injection attacks when dynamic string concatenation is used to build connection strings based on user input. When the user input is not validated or filtered, and malicious text or characters are not properly escaped, an attacker can potentially access sensitive data or resources. For a number of years now, vendors, including Oracle, have created connection string builder classes to help developers generate valid connection strings and potentially prevent this kind of vulnerability. Unfortunately, not all application developers use these utilities, because they are not aware of the danger posed by this kind of attack.
    So how are Connection String Parameter Pollution (CSPP) attacks different from traditional connection string injection attacks? First, let's look at what parameter pollution attacks are. Parameter pollution is a technique which typically involves appending repeated parameters to request strings to attack the receiving end. Much of the public attention around parameter pollution was initiated by a presentation on HTTP Parameter Pollution attacks by Stefano Di Paola and Luca Carettoni, delivered at the 2009 OWASP AppSec conference in Poland. In HTTP Parameter Pollution attacks, an attacker submits additional parameters in HTTP GET/POST to a web application, and if these parameters have the same name as an existing parameter, the web application may react in different ways depending on how the web application and web server deal with multiple parameters with the same name.

    When applied to connection strings, the rule for the majority of database providers is the "last one wins" algorithm: if a KEYWORD=VALUE pair occurs more than once in the connection string, the value associated with the LAST occurrence is used. This opens the door to some serious attacks. By way of example, in a web application, a user enters a username and password, and a connection string is generated to connect to the backend database:

    Data Source = myDataSource; Initial Catalog = db; Integrated Security = no; User ID = myUsername; Password = XXX;

    In the password field, if the attacker enters "xxx; Integrated Security = true", the connection string becomes:

    Data Source = myDataSource; Initial Catalog = db; Integrated Security = no; User ID = myUsername; Password = xxx; Integrated Security = true;

    Under the "last one wins" principle, the web application will then try to connect to the database using the operating system account under which the application is running, bypassing normal authentication. CSPP poses serious risks for unprepared organizations. It can be particularly dangerous if an Enterprise Systems Management web front-end is compromised, because attackers can then gain access to control panels to configure databases, system accounts, etc.

    Fortunately, organizations can take steps to prevent this kind of attack. CSPP falls into the injection category of attacks, like Cross-Site Scripting or SQL Injection, which are made possible when inputs from users are not properly escaped or sanitized. Escaping is a technique used to ensure that characters (mostly from user inputs) are treated as data, not as characters that are meaningful to the interpreter's parser. Software developers need to become aware of the danger of these attacks and learn about the defense mechanisms they need to introduce in their code. As well, software vendors need to provide templates or classes to facilitate coding and eliminate developers' guesswork in protecting against such vulnerabilities. Oracle has introduced the OracleConnectionStringBuilder class in Oracle Data Provider for .NET. Using this class, developers can employ a configuration file to provide the connection string and/or dynamically set the values through key/value pairs. It makes creating connection strings less error-prone and easier to manage, and ultimately using the OracleConnectionStringBuilder class provides better security against injection into connection strings.
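    To make the contrast concrete, here is a minimal C# sketch; it is my illustration, not code from the researchers or from Oracle's documentation. OracleConnectionStringBuilder is ODP.NET's actual class, but the helper method names and the surrounding program are assumptions.

    // BuildUnsafe concatenates raw user input, so a password of
    // "xxx; Integrated Security = true" appends a second keyword that "wins"
    // under the last-one-wins rule. BuildSafe uses ODP.NET's
    // OracleConnectionStringBuilder, which quotes values so they stay data.
    using Oracle.DataAccess.Client;

    class ConnectionStringDemo
    {
        // VULNERABLE: dynamic string concatenation from unvalidated user input.
        static string BuildUnsafe(string user, string password)
        {
            return "Data Source = orcl; User ID = " + user + "; Password = " + password + ";";
        }

        // SAFER: each value is treated as a single value, never as new key/value pairs.
        static string BuildSafe(string user, string password)
        {
            var builder = new OracleConnectionStringBuilder();
            builder["Data Source"] = "orcl";
            builder["User Id"] = user;
            builder["Password"] = password; // a semicolon here is quoted, not parsed
            return builder.ConnectionString;
        }
    }

    With the unsafe version, BuildUnsafe("myUsername", "xxx; Integrated Security = true") yields exactly the polluted string shown above, while the builder emits the password as one quoted value, so the provider never sees a second Integrated Security keyword.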
    For more information:
    - The OracleConnectionStringBuilder documentation is located at http://download.oracle.com/docs/cd/B28359_01/win.111/b28375/OracleConnectionStringBuilderClass.htm
    - Oracle has developed a publicly available course on preventing SQL injection. The Server Technologies Curriculum course "Defending Against SQL Injection Attacks!" is located at http://st-curriculum.oracle.com/tutorial/SQLInjection/index.htm
    - The OWASP web site also provides a number of useful resources. It is located at http://www.owasp.org/index.php/Main_Page

    Read the article

  • SQL SERVER – Thinking about Deprecated, Discontinued Features and Breaking Changes while Upgrading to SQL Server 2012 – Guest Post by Nakul Vachhrajani

    - by pinaldave
    Nakul Vachhrajani is a Technical Specialist and systems development professional with iGATE, with more than 7 years of total IT experience. Nakul is an active blogger with BeyondRelational.com (150+ blogs), and can also be found on forums at SQLServerCentral and BeyondRelational.com. Nakul has also been a guest columnist for SQLAuthority.com and SQLServerCentral.com. Nakul presented a webcast on the "Underappreciated Features of Microsoft SQL Server" at the Microsoft Virtual Tech Days Exclusive Webcast series (May 02-06, 2011) on May 06, 2011. He is also the author of a research paper on database upgrade methodologies, which was published in a CSI journal distributed nationwide. In addition to his passion for SQL Server, Nakul contributes to academia out of personal interest: he visits various colleges and universities as external faculty to judge project activities carried out by students. Disclaimer: The opinions expressed herein are his own personal opinions and do not represent his employer's view in any way. Blog | LinkedIn | Twitter | Google+

    Let us hear the thoughts of Nakul in first person:

    Those who have been following my blogs will be aware that I am currently running a series on the database engine features that have been deprecated in Microsoft SQL Server 2012. Based on the response I received, I was quite surprised to learn that most of the audience took these to be breaking changes, when in fact they are not! That is when I decided to write a little piece on how to plan your database upgrade so that it works with the next version of Microsoft SQL Server. Please note that the recommendations made in this article are high-level markers and are intended to help you think through the specific steps you would need to take to upgrade your database.

    Refer to the documentation - understand the terms

    Change is the only constant in this world. Therefore, whenever customer requirements or newer architectures and designs require software vendors to make a change to keywords, functions, etc., they ensure that they give their end users sufficient time to migrate to the new standards before dropping the old ones. Microsoft does this too with its Microsoft SQL Server product. Whenever a new SQL Server release is announced, it comes with lists of the following kinds of features:

    Breaking changes
    - Changes that would break applications, scripts or functionality based on an earlier version of Microsoft SQL Server
    - Mostly features whose behavior has been changed with newer architectures and designs in mind
    - Lesson: these are the changes you need to be most worried about!

    Discontinued features
    - Features that are no longer available in the associated version of Microsoft SQL Server
    - These features were "deprecated" in the prior release
    - Lesson: without addressing these changes, your database may not be compliant with, or may not work on, the version of Microsoft SQL Server under consideration

    Deprecated features
    - Features that are still available in the current version of Microsoft SQL Server, but are scheduled for removal in a future version
    - These may be removed in either the next version or any other future version of Microsoft SQL Server
    - The features listed for deprecation will compose the list of discontinued features in the next version of SQL Server
    - Lesson: plan to make the changes required to remove or replace usage of deprecated features with the latest recommended replacements

    Once a feature appears on the list, it moves from the bottom to the top: it is first marked "deprecated" and then "discontinued". A "breaking change" comes later in the product life cycle. What this means is that if you want to know which features will not work with SQL Server 2012 (and you are currently using SQL Server 2008 R2), you need to refer to the lists of breaking changes and discontinued features in SQL Server 2012.

    Use the tools!

    There are a lot of tools and technologies around us, but I rarely find teams using these tools religiously and to the best of their potential. Below are the top two tools, from Microsoft, that I use every time I plan a database upgrade.

    The SQL Server Upgrade Advisor: Ever since SQL Server 2005 was announced, Microsoft has provided a small, very lightweight tool called the SQL Server Upgrade Advisor. The Upgrade Advisor analyzes installed components from earlier versions of SQL Server, and then generates a report that identifies issues to fix either before or after you upgrade. The analysis examines objects that can be accessed, such as scripts, stored procedures, triggers, and trace files. Upgrade Advisor cannot analyze desktop applications or encrypted stored procedures. Refer to the links towards the end of the post to learn how to get the Upgrade Advisor.

    The SQL Server Profiler: Another great tool is one most SQL Server developers and administrators use often: the SQL Server Profiler. SQL Server Profiler can monitor the "Deprecation" event category, which contains:
    - Deprecation announcement: equivalent to features to be deprecated in a future release of SQL Server
    - Deprecation final support: equivalent to features to be deprecated in the next release of SQL Server
    You can learn more using the links towards the end of the post; a small code sketch at the end of this post shows another way to surface the same information.

    A basic checklist

    There are a lot of finer points to take care of when upgrading your database, but it is worthwhile to identify a few basic steps to make your database compliant with the next version of SQL Server:
    - Monitor the current application workload (on a test bed) via the Profiler to identify usage of features marked as deprecated. If none appear, you are all set! (This almost never happens.)
    - Note down all the offending queries and feature usages
    - Run analysis sessions using the SQL Server Upgrade Advisor on your database
    - Based on the inputs from the analysis report and Profiler trace sessions, incorporate solutions for the breaking changes first, then incorporate solutions for the discontinued features
    - Revisit and document the upgrade strategy for your deployment scenarios
    - Revisit the fall-back (i.e. rollback) strategies in case the upgrades fail; because some programming changes depend upon the SQL Server version, this may need to be done in consultation with the development teams
    - Before any other enhancements are incorporated by the development team, send the database changes to QA. The QA strategy should involve a comparison between an environment running the old version of SQL Server and one running the new version; because minimal application changes have gone in (essential changes for SQL Server version compliance only), this should be possible
    - As an ongoing activity, keep incorporating the changes recommended by the deprecated features list
    - As a DBA, update your coding standards to ensure that developers are using ANSI-compliant code; this code will require a change only if the ANSI standard changes

    Remember this: change management is a continuous process. Keep revisiting the product release notes and incorporate recommended changes to stay prepared for the next release of SQL Server. May the power of SQL Server be with you!

    Links referenced in this post
    - Breaking changes in SQL Server 2012: Link
    - Discontinued features in SQL Server 2012: Link
    - Get the Upgrade Advisor from the Microsoft Download Center: Link
    - Upgrade Advisor page on MSDN: Link
    - Profiler: Review T-SQL code to identify objects no longer supported by Microsoft: Link
    - Upgrading to SQL Server 2012 by Vinod Kumar: Link

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Upgrade
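    As a complement to the Profiler approach described above, here is a minimal C# sketch of another way to spot deprecated-feature usage: SQL Server exposes a "Deprecated Features" performance counter object through the sys.dm_os_performance_counters view, where instance_name identifies the feature and cntr_value counts its uses since the server started. This is my illustration, not code from Nakul's post; the connection string and output format are assumptions to adapt for your environment.

    using System;
    using System.Data.SqlClient;

    class DeprecationCheck
    {
        static void Main()
        {
            // Illustrative connection string; point this at your test-bed instance.
            const string connStr = "Data Source=.;Initial Catalog=master;Integrated Security=true";
            const string sql =
                @"SELECT instance_name, cntr_value
                  FROM sys.dm_os_performance_counters
                  WHERE object_name LIKE '%Deprecated Features%'
                    AND cntr_value > 0;";

            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand(sql, conn))
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    // Each row is one deprecated feature observed since startup.
                    while (reader.Read())
                        Console.WriteLine("{0}: {1} use(s)", reader.GetString(0).Trim(), reader.GetInt64(1));
                }
            }
        }
    }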

    Read the article

  • California State University Standardizes 23 Campuses on Oracle PeopleSoft

    - by Noelia Gomez
    The largest university system in the United States consolidates financial management, and will consolidate human capital management, to improve efficiency and reduce costs.

    California State University (CSU) is standardizing its 23-campus system and the Office of the Chancellor on Oracle PeopleSoft applications. CSU is the largest public university system in the United States. CSU awards 90,000 degrees per year and, since its creation in 1961, has awarded almost 2.6 million. As part of its strategic plan, "Access to Excellence", CSU has committed to taking advantage of technology to meet future education needs and has created the Common Management Systems (CMS). The mission of CMS is to provide efficient, effective, quality service to the students, faculty and staff of CSU's 23 campuses and the Office of the Chancellor. In an effort to improve efficiency and reduce costs across the system, CSU has consolidated its financial management processes and systems on PeopleSoft Financial Management. To provide a unified view of financial operations, CSU has consolidated 22 campuses onto a single PeopleSoft Financials system. These applications include: Accounting, Billing, Accounts Payable, Accounts Receivable and Purchasing. CSU also uses Oracle Business Intelligence Enterprise Edition for analysis and reporting. CSU is consolidating on PeopleSoft Human Capital Management (HCM) in pursuit of several strategic objectives, including attracting and retaining top staff and faculty. CSU's consolidated financial system is now the largest single higher-education deployment of PeopleSoft Financials in the United States. A centralization of this scale is unprecedented, and yet it was completed on time and within budget. Once complete, CSU will also be the largest higher-education deployment of PeopleSoft HCM.
    CSU also uses PeopleSoft Campus Solutions to manage its student academic operations, including class scheduling and registration, fee calculation and collection, financial aid awarding, and assessment of student academic progress. Oracle PeopleSoft applications are designed to address the most complex business requirements. They provide comprehensive business and industry solutions, enabling organizations to increase productivity, accelerate business performance and lower total cost of ownership. Oracle's PeopleSoft Campus Solutions is a complete software suite designed for the changing needs of higher-education institutions. Oracle continues to work with CSU and with colleges and universities around the world to deliver the most responsive and comprehensive student administration and development systems available today. Unisys is implementing the PeopleSoft applications for CSU and providing access for students, faculty and university staff through the Unisys Hosted Secure Private Cloud solution. Unisys is a Platinum-level member of Oracle PartnerNetwork (OPN).

    "In order to continue meeting the goals of our strategic plan, we believe it is essential to standardize our operational processes - both in the back office and wherever our students are every day - to maximize our investment in technology and reduce costs," said Larry Furukawa-Schlereth, Chief Financial Officer, Sonoma State University, and chair of the CSU Global Executive Management Committee, California State University. "Having a single view of all important information in one system helps California State University continue to offer excellent education opportunities at an affordable cost while helping shape the future quality of civic and economic life in California."

    Learn more about PeopleSoft here:
    - Oracle's PeopleSoft Applications
    - Oracle in Higher Education
    - Oracle's PeopleSoft Financial Management
    - Oracle's PeopleSoft Human Capital Management
    - Oracle's PeopleSoft Campus Solutions
    - Oracle Human Capital Management Blog
    - Oracle HCM on Twitter
    - Oracle Higher Education on Facebook
    - Oracle Higher Education on Diigo

    Read the article

  • The Data Scientist

    - by BuckWoody
    A new term - well, perhaps not that new - has come up and I'm actually very excited about it. The term is Data Scientist, and since it's new, it's fairly undefined. I'll explain what I think it means, and why I'm excited about it. In general, I've found the term deals at its most basic with analyzing data. Of course, we all do that, and in that definition the term itself is redundant; there is no science I know of that does not work with analyzing lots of data. But the term seems to refer to more than the common practices of looking at data visually, putting it in a spreadsheet or report, or even using simple coding to examine data sets. The term Data Scientist (as far as I can make out this early in its use) is someone who has a strong understanding of data sources, relevance (statistical and otherwise) and processing methods, as well as front-end displays of large sets of complicated data. Some - but not all - Business Intelligence professionals have these skills. In other cases, senior developers, database architects or others fill these needs, but in my experience many lack the strong mathematical skills needed to make these choices properly.

    I've divided the knowledge base for someone who would wear this title into three large segments. It remains to be seen whether a given Data Scientist would be responsible for knowing all these areas or would specialize. There are pretty high requirements on the math side, specifically in graduate-degree-level statistics, but in my experience a company will only have a few of these folks, so they are expected to know quite a bit in each of these areas.

    Persistence

    The first area is finding, cleaning and storing the data. In some cases, no cleaning is done prior to storage - the data is just identified, and the cleansing is done in a later step. This area is where the professional would be able to tell if a particular data set should be stored in a Relational Database Management System (RDBMS), across a set of key/value pair storage (NoSQL) or in a file system like HDFS (part of the Hadoop landscape) or other methods. Or do you examine the stream of data without storing it in another system at all? This is an important decision - it's a foundational choice that deals not only with the expense of purchasing systems, or of using Cloud Computing (PaaS, SaaS or IaaS) to source them, but also with the skill sets and other resources needed to care for and feed the system for a long time. The Data Scientist sets something into motion that will probably outlast his or her career at a company or organization. Often these choices are made by senior developers, database administrators or architects in a company, but each of these sometimes has a certain bias towards making a decision one way or another. The Data Scientist would examine these choices in light of the data itself, starting perhaps even before the business requirements are created. The business may not even be aware of all the strategic and tactical data sources that they have access to.

    Processing

    Once the decision is made to store the data, the next set of decisions is based around how to process the data. An RDBMS scales well to a certain level and provides a high degree of ACID compliance, as well as offering a well-known set-based language to work with the data. In other cases, scale should be spread among multiple nodes (as in the case of Hadoop landscapes or NoSQL offerings) or even across a Cloud provider like Windows Azure Table Storage.
    In fact, in many cases - most of the ones I'm dealing with lately - the data should be split among multiple types of processing environments. This is a newer idea. Many data professionals simply pick a methodology (RDBMS with star schemas, NoSQL, etc.) and put all data there, regardless of its shape, processing needs and so on. A Data Scientist is familiar not only with the various processing methods, but with how they work, so that they can choose the right one for a given need. This is a huge time commitment, hence the need for a dedicated title like this one.

    Presentation

    This is where the need for a Data Scientist is most often already being filled, with more or less success. The latest Business Intelligence systems are quite good at letting you create amazing graphics, but it's the data behind the graphics that is the most important component of truly effective displays. This is where the mathematics requirement of the Data Scientist title is the most unforgiving. In fact, someone without a good foundation in statistics is not a good candidate for creating reports; even a basic level of statistics can be dangerous. Anyone who works in analyzing data will tell you that there are multiple errors possible when data just seems right - and when basic statistics bears out that you're on the right track - errors that are only solvable when you understand why the statistical formula works the way it does. And there are lots of ways of presenting data. Sometimes all you need is a "yes" or "no" answer that can only come after heavy analysis work; in that case, a simple e-mail might be all the reporting you need. In other cases, complex relationships and multiple components require a deep understanding of the various graphical methods of presenting data. Knowing which kind of chart, color, graphic or shape conveys a particular datum best is essential knowledge for the Data Scientist.

    Why I'm excited

    I love this area of study. I like math, stats, and computing technologies, but it goes beyond that. I love what data can do - how it can help an organization. I've been fortunate enough in my professional career these past two decades to work with lots of folks who perform this role at companies from aerospace to medical firms, from manufacturing to retail. Interestingly, the size of the company really isn't germane here. I worked with one very small bio-tech (cryogenics) company that worked deeply with analysis of complex interrelated data. So watch this space. No, I'm not leaving Azure or distributed computing or Microsoft. In fact, I think I'm perfectly situated to investigate this role further. We have a huge set of tools, from RDBMS to Hadoop, to allow me to explore. And I'm happy to share what I learn along the way.

    Read the article

  • Beginner’s Guide to Flock, the Social Media Browser

    - by Asian Angel
    Do you want a browser that can work as a social hub from the first moment you start it up? If you love the idea of a browser that is ready to go out of the box, then join us as we look at Flock.

    During the Install Process

    When you are installing Flock there are two install windows that you should watch for. The first one lets you choose between the "Express Setup" and the "Custom Setup". We recommend the "Custom Setup". Once you have selected the "Custom Setup" you can choose which of the following options will be enabled. Notice the "anonymous usage statistics" option at the bottom; you can choose to leave this enabled or disable it based on your comfort level.

    The First Look

    When you start Flock up for the first time it will open with three tabs. All three are of interest, especially if this is your first time using Flock. With the first tab you can jump right into logging in to and activating your favorite social services within Flock. This page is set to display each time you open Flock unless you deselect the option in the lower left corner. The second tab provides a very nice overview of Flock and its built-in social management power. The third and final page can be considered a "Personal Page". You can make some changes to the content displayed for quick and easy access and/or monitoring of "Twitter Search, Favorite Feeds, Favorite Media, Friend Activity, & Favorite Sites". Use the "Widget Menu" in the upper left corner to select the "Personal Page Components" that you would like to use. In the upper right corner there is a built-in "Search Bar" and buttons for "Posting to Your Blog & Uploading Media". To help personalize the "My World Page" just a bit more, you can even change the text to your name or whatever best suits your needs.

    The Flock Toolbar

    The "Flock Toolbar" is full of social account management goodness. In order from left to right the buttons are: My World (Homepage), Open People Sidebar, Open Media Bar, Open Feeds Sidebar, Webmail, Open Favorites Sidebar, Open Accounts and Services Sidebar, Open Web Clipboard Sidebar, Open Blog Editor, & Open Photo Uploader. The buttons will be highlighted with a blue background to help indicate which area you are in. The first area will display a listing of people that you are watching/following at the services shown here. Clicking on the "Media Bar Button" will display the "Media Slider Bar" above your "Tab Bar". Notice that there is a built-in "Search Bar" on the right side. Any photos, etc. clicked on will be opened in the currently focused tab below the "Media Bar". Here is a listing of the "Media Streams" available for viewing. By default Flock comes with a small selection of pre-subscribed RSS feeds. You can easily unsubscribe, rearrange, add custom folders, or add non-categorized feeds as desired. RSS feeds subscribed to here can be viewed combined together as a single feed (clickable links) in the "My World Page", or can be viewed individually in a new tab. Very nice! Next on the "Flock Toolbar" is the "Webmail Button". You can set up access to your favorite "Yahoo!, Gmail, & AOL Mail" accounts from here. The "Favorites Sidebar" combines your "Browser History & Bookmarks" into one convenient location. The "Accounts and Services Sidebar" gives you quick and easy access to get logged into your favorite social accounts. Clicking on any of the links will open that particular service's login page in a new tab. Want to store items such as photos, links, and text to add into a blog post or tweet later on?
    Just drag and drop them into the "Web Clipboard Sidebar" for later access. Clicking on the "Blog Editor Button" will open up a separate blogging window to compose your posts in. If you have not yet logged into or set up a blog account in Flock, you will see a message window telling you so. The "Blogging Window" is nice, simple, and straightforward. If you are not already logged into your photo account(s), you will see a similar message window when you click on the "Photo Uploader Button". Clicking "OK" will open the "Accounts and Services Sidebar" with compatible photo services highlighted in a light yellow color. Log in to your favorite service to start uploading all those great images.

    After Setting Up

    Here is what our browser looked like after setting up some of our favorite services. The Twitter feed is certainly looking nice and easy to read through. Some tweaking in the "RSS Feeds Sidebar" makes for a perfect reading experience. Keeping up with our e-mail is certainly easy to do too. A look back at the "Accounts and Services Sidebar" shows that all of our accounts are actively logged in (green dot on the right side). Going back to our "My World Page", you can see how nice everything looks for monitoring our "Friend Activity & Favorite Feeds". Moving on to regular browsing, everything is looking very good. Flock is a perfect choice for anyone wanting a browser and social hub all built into a single app.

    Conclusion

    Anyone who loves keeping up with their favorite social services while browsing will find using Flock to be a wonderful experience. You literally get the best of both worlds with this browser.

    Links
    - Download Flock
    - The Official Flock Extensions Homepage
    - The Official Flock Toolbar Homepage

    Read the article

  • Five Ways Enterprise 2.0 Can Transform Your Business - Q&A from the Webcast

    - by [email protected]
    A few weeks ago, Vince Casarez and I presented with KMWorld on the Five Ways Enterprise 2.0 Can Transform Your Business. It was an enjoyable, interactive webcast in which Vince and I discussed the ways Enterprise 2.0 can transform your business and, more importantly, highlighted key customer examples of how to do so. If you missed the webcast, you can catch a replay here. We had a lot of audience participation in some of the polls we conducted and in the Q&A session. We weren't able to address all of the questions during the broadcast, so we have attempted to answer them here:

    Q: Which area within your firm focuses on Web 2.0? Meaning, do you find new departments developing just to manage the Web 2.0 (Twitter, Facebook, etc.) user experience, or are you restructuring current departments?
    A: There are three distinct efforts within Oracle. The first is around delivery of these Web 2.0 services for enterprise deployments. This is the focus of the WebCenter team. The second effort is injecting these Web 2.0 services into use cases that drive the different enterprise applications. This effort is focused on how to manage these external services and bring them into a cohesive flow for marketing programs, customer care, and purchasing. The third effort is how we consume these services internally to enhance Oracle's business delivery. It leverages the technologies and use cases of the first two but also pushes the envelope with regard to future directions of those two areas.

    Q: In a business, Web 2.0 is mostly like action logs. How can we leverage the official process practice versus the logs of a recent action? Example: a system configuration modified last night on a call-out versus the official practice that everybody would use in the morning.
    A: The key thing to remember is that most Web 2.0 actions / activity streams today are based on collaboration and communication type actions, at least with public social sites like Facebook and Twitter. What we're delivering as part of the WebCenter Suite is not just these types of activities but also enterprise application activities. These enterprise application activities come from different application modules: purchasing, HR, order entry, sales opportunity, etc. The actions within these systems are normally tied to a business object or process: purchase order/customer, employee or department, customer and supplier, customer and product, respectively. Therefore, the activities or "logs", as you name them, can be "typed" so that as a viewer, you can filter or decide to see only certain types of information. In your example, you could have a view that only showed you recent "configuration" changes, and this could be right next to a view that showed the items to be watched every morning.

    Q: It's great to hear about customers using the software, but is there any plan for future webinars to show what the products/installs look like? That would be very helpful.
    A: We don't have a webinar planned to show off the install process. However, we have a viewlet that's posted on Oracle Technology Network.
    You can see it here: http://www.oracle.com/technetwork/testcontent/wcs-install-098014.html
    And we've got excellent documentation that walks you through the steps here: http://download.oracle.com/docs/cd/E14571_01/install.1111/e12001/install.htm
    And there's a whole set of demos and examples of what WebCenter can do at this URL: http://www.oracle.com/technetwork/middleware/webcenter/release11-demos-097468.html

    Q: How do you anticipate managing metadata across the enterprise to make content findable?
    A: We need to first make sure we are all talking about the same thing when we use a word like "metadata". Here's why. For a developer, metadata means information that describes key elements of the portal or application and what the portal or application can do. For content systems, metadata means key terms that provide a taxonomy or folksonomy for the information that is being indexed, ordered, and managed. For business intelligence systems, metadata means key terms that provide labels for groups of data that most non-mathematicians need to understand. And for SOA, metadata means labels for the parts of processes that business owners should understand, connecting to development terminology. There are also requirements for metadata to be available to the team building these new solutions, as well as requirements to make this metadata available to the running system. These requirements are often separated as "design time" and "run time" respectively. So clearly, a general goal of managing metadata across the enterprise is very challenging. We've invested a huge amount of resources in Oracle Metadata Services (MDS) to be able to provide a more generic system for all of these elements. No other vendor has anything like this technology foundation in their products. This provides a huge benefit to our customers, as they will now be able to find content, processes, people, and information from a common set of search interfaces with consistent enterprise-wide results.

    Q: Can you give your definition of the terms "document" and "content", please?
    A: Content applies to a broad category of information, from Word documents, presentations and reports through attachments to invoices and/or purchase orders. Content is essentially any type of digital asset, including images, video, and voice. A document is just one type of content.

    Q: Do you have special integration tools to realize an interaction between UCM and WebCenter Spaces/Services?
    A: Yes, we've dedicated a whole team of engineers to exploit the key features of Oracle UCM within WebCenter. While ensuring that WebCenter can connect to other non-Oracle systems, we've made sure that with the combined set of Oracle technology, no other solution can match the combined power and integration. This is part of the Oracle Fusion Middleware strategy, which is to provide best-in-class capabilities for Content and Portals. When combined together, the synergy between the two products enables users to quickly add capabilities when they are needed. For example, simple document sharing is part of the combined product offering, but if legal discovery or archiving is required, the Oracle UCM product includes these capabilities, which can be quickly added. There's no need to move content around or add another system to support this; it's just a feature that gets turned on within Oracle UCM.
    Q: All customers have some interaction with their applications and have many older versions. How do you see some of these new Enterprise 2.0 capabilities adding value to existing enterprise application deployments?
    A: Just as Service-Oriented Architectures allowed the processes of different application systems to be connected and work together, a similar approach is needed for these Enterprise 2.0 capabilities. Oracle WebCenter is built on a core architecture that allows for SOA of these Enterprise 2.0 services, so that one set of scalable services can be used and integrated directly into any type of application. In this way, users can get immediate value out of the Enterprise 2.0 capabilities without having to wait for the next major release or upgrade. These centrally managed WebCenter services expose a set of standard interfaces that make it extremely easy to add them into existing applications, no matter what technology the application has been implemented in.

    Q: We've heard about Oracle's next-generation applications, called "Fusion Applications". Can you tell me how all this works together?
    A: Oracle WebCenter powers the core collaboration and social computing services found within Fusion Applications. It is the core user experience technology for how all the application screens have been implemented. And the core concept of task flows allows all the Fusion Applications modules to be adaptable and composable by business users and IT without needing to be a professional developer. Oracle WebCenter is at the heart of the new Fusion Applications. In addition, the same patterns and technologies are now being added to the existing applications, including JD Edwards, Siebel, PeopleSoft, and eBusiness Suite. The core technology gives all these customers a much smoother upgrade path to Fusion Applications. They get the immediate benefit of injecting new user interactions into their existing applications without having to completely move to Fusion Applications. And then when the time comes, their users will already be well versed in how the new capabilities work.

    Q: Does any of this work with non-Oracle software? Other databases? Other application servers? etc.
    A: We have made sure that Oracle WebCenter delivers the broadest set of development choices, so that no matter what technology your developers are using, WebCenter capabilities can be quickly and easily added to the site or application. In addition, we have certified Oracle WebCenter to run against non-Oracle databases like DB2 and SQL Server. We have stated plans for certification against MySQL as well. Later in CY 2011, Oracle will provide certification on non-Oracle application servers such as WebSphere and JBoss.

    Q: How do we balance user and IT requirements with regard to Enterprise 2.0 technologies?
    A: Wrong decisions are often made because employee knowledge is not tapped efficiently, and opportunities to innovate are often missed because the right people do not work together. Collaboration among workers in the right business context is critical for success. While standalone Enterprise 2.0 technologies can improve collaboration for collaboration's sake, using social collaboration tools in the context of business applications and processes will improve business responsiveness and lead companies to a more competitive position. As these systems become more mission-critical, it is essential that they maintain the highest level of performance and availability while scaling to support larger communities.
    Q: What are the ways in which Enterprise 2.0 can improve business responsiveness?
    A: With a wide range of Enterprise 2.0 tools in the marketplace, CIOs need to deploy solutions that will meet the requirements of users as well as address the requirements of IT. Workers want a next-generation user experience that is personalized and aggregates their daily tools and tasks, while IT needs to ensure the solution is secure, scalable, flexible, reliable and easily integrated with existing systems. An open and integrated approach to deploying portals, content management, and collaboration can enhance your business by addressing both the needs of knowledge workers for better information and the IT mandate to conserve resources by simplifying, consolidating and centralizing infrastructure and administration.

    Read the article

  • CodePlex Daily Summary for Sunday, June 01, 2014

    Popular Releases

    - Sandcastle Help File Builder: Help File Builder and Tools v2014.5.31.0: General information. IMPORTANT: On some systems, the content of the ZIP file is blocked and the installer may fail to run. Before extracting it, right-click on the ZIP file, select Properties, and click on the Unblock button if it is present in the lower right corner of the General tab in the properties dialog. This release completes removal of the branding transformations and implements the new VS2013 presentation style that utilizes the new lightweight website format. Several breaking cha...
    - Tooltip Web Preview: ToolTip Web Preview: Version 1.0
    - Database Helper: Release 1.0.0.0: First release of Database Helper
    - CoMaSy: CoMaSy 1.0.2: Contact Management System
    - Image View Slider: Image View Slider: This is a .NET component created using VB.NET. With it you can add an image viewer with several properties to your application form. We welcome anyone to improve it freely. Try this out! Authors: Steven Renaldo Antony, Yustinus Arjuna Purnama Putra, Andre Wijaya P, Martin Lidau. PBK GENAP 2014 - TI UKDW
    - Aspose for Apache POI: Missing Features of Apache POI WP - v 1.1: Release contains examples of features missing from the Apache POI WP SDK in comparison with Aspose.Words for dealing with Microsoft Word. What's new? The following examples: Insert Picture in Word Document; Insert Comments; Set Page Borders; Mail Merge from XML Data Source; Moving the Cursor. Feedback and suggestions: many more examples are yet to come here, so keep visiting us. Raise your queries and suggest more examples via Aspose Forums or via this social coding site.
    - babelua: V1.5.6.0: V1.5.6.0 - 2014.5.30. New features: supports quick-cocos2d-x projects; supports text search in the scripts folder (you can use this function in the Search Result window)
    - SEToolbox: 01.032.014 Release 1: Added a fix for loading game textures for icons that caused 'Unable to read beyond the end of the stream'. Added a new Resource Report, which displays all in-game resources in a concise report. Added a temp directory cleaner, to keep excess files from building up. Fixed use of colors on the windows, to work better with desktop schemes. Added base support for multilingual resources. This will allow loading of the Space Engineers resources to show localized names, and display localized date a...
    - ClosedXML - The easy way to OpenXML: ClosedXML 0.71.2: More memory and performance improvements. Fixed an issue with pivot table field order.
    - Fancontroller: Fancontroller: Initial release
    - Vi-AIO SearchBar: Vi – AIO Search Bar: Version 1.0
    - Top Verses ( Ayat Emas ): Binary Top Verses: This is the bin folder of the component; the .dll component is inside.
    - Traditional Calendar Component: Traditional Calendar Converter: Duta Wacana Christian University. This file contains the Traditional Calendar Component and a demo application that uses it. The component was made with .NET Framework 4, and the programming language is C#.
    - SQLSetupHelper: 1.0.0.0: First stable version of SQL SetupHelper
    - Composite Iconote: Composite Iconote: This composite was made with Microsoft Visual Studio 2013. Requirement: to develop this composite or use this component in your application, your computer must have .NET Framework 4.5 or newer.
    - Magick.NET: Magick.NET 6.8.9.101: Magick.NET linked with ImageMagick 6.8.9.1. Breaking changes: int/short Set methods of WritablePixelCollection are now unsigned; the Q16 build no longer uses HDRI, switch to the new Q16-HDRI build if you need HDRI.
    - fnr.exe - Find And Replace Tool: 1.7: Bug fixes. Refactored logic for encoding text values to the command line to handle common edge cases where a find/replace operation works in the GUI but not in the command line. Fix for a bug where the selection in the Encoding drop-down differed when generating the command line in some cases (reported at https://findandreplace.codeplex.com/workitem/34). Fix for "Backslash inserted before dot in replacement text" (reported at https://findandreplace.codeplex.com/discussions/541024). Fix for finding/replacing...
    - VG-Ripper & PG-Ripper: VG-Ripper 2.9.59: Changes: NEW: Added support for 'GokoImage.com' links; NEW: Added support for 'ViperII.com' links; NEW: Added support for 'PixxxView.com' links; NEW: Added support for 'ImgRex.com' links; NEW: Added support for 'PixLiv.com' links; NEW: Added support for 'imgsee.me' links; NEW: Added support for 'ImgS.it' links
    - Toolbox for Dynamics CRM 2011/2013: XrmToolBox (v1.2014.5.28): XrmToolBox updates (v1.2014.5.28): fix for connecting to a connection with custom authentication without a saved password. New tool! Solution Components Mover (v1.2014.5.22): transfer solution components from one solution to another. Import/Export NN relationships (v1.2014.3.7): allows you to import and export many-to-many relationships. Tool updates: Attribute Bulk Updater (v1.2014.5.28), Audit Center (v1.2014.5.28), View Layout Replicator (v1.2014.5.28), Scrip...
    - Microsoft Ajax Minifier: Microsoft Ajax Minifier 5.10: Fix for Issue #20875 (echo switch doesn't work for CSS). CSS should honor the SASS source-file comments. JS should allow multi-line comment directives.

    New Projects

    - Cet MicroWPF: WPF-like library for simple graphic-UI applications using Netduino (Plus) 2 and the FTDI FT800 Eve board.
    - Fakemons: Some Fakemons, powered by XML, XSLT, CSS and JavaScript
    - Fling OS: Fling OS is a C# operating system project aiming to create a new, managed operating system from the ground up.
    - MudRoom: Experimental tool sets for mud parsing and area definition
    - OOP-2112110158: My name's NgocDung
    - RoslynResearch: Roslyn research
    - THD - Control de Usuarios: User and permission management
    - WPF Kinect User Controls: WPF Kinect User Control project provides simple Tilt and Skeleton Tracking Parameter controls.

    Read the article

  • Recap: Oracle Fusion Middleware Strategies Driving Business Innovation

    - by Harish Gaur
Hasan Rizvi, Executive Vice President of Oracle Fusion Middleware & Java, took the stage on Tuesday to discuss how Oracle Fusion Middleware helps enable business innovation. Through a series of product demos and customer showcases, Hasan demonstrated how Oracle Fusion Middleware is a complete platform to harness the latest technological innovations (cloud, mobile, social and Fast Data) throughout the application lifecycle. Fig 1: Oracle Fusion Middleware is the foundation of business innovation.

This session included 4 demonstrations to illustrate these strategies: 1. Build and deploy native mobile applications using Oracle ADF Mobile. 2. Empower business users to model processes, design user interfaces and have a rich mobile experience for process interaction using Oracle BPM Suite PS6. 3. Create a collaborative user experience and integrate social sign-on using Oracle WebCenter Portal, Oracle WebCenter Content, Oracle Social Network & Oracle Identity Management 11g R2. 4. Deploy and manage business applications on Oracle Exalogic.

Nike, LA Department of Water & Power and Nintendo joined Hasan on stage to share how their organizations are leveraging Oracle Fusion Middleware to enable business innovation.

Managing Performance in the World of Social and Mobile. How do you provide predictable scalability and performance for an application that monitors the active lifestyle of 8 million users on a daily basis? Nike's answer is Oracle Coherence, a component of Oracle Fusion Middleware, together with Oracle Exadata. Fig 2: Oracle Coherence-enabled data grid improves performance of Nike+ Digital Sports Platform.

Nicole Otto, Sr. Director of Consumer Digital Technology, discussed the vision of the Nike+ platform, a platform which represents a shift for NIKE from a "product" to a "product +" experience. There are currently nearly 8 million users in the Nike+ system who are using digitally-enabled Nike+ devices. Once data from a Nike+ device is transmitted to the Nike+ application, users access their metrics via the Nike+ website or the Nike mobile application, seeing statistics around their daily active lifestyle, and can even engage in socially compelling experiences to compare, compete or collaborate with their friends. Nike expects the number of users to grow significantly this year, which will drive an explosion of data and potential new experiences.

To deal with this challenge, Nike envisioned building a shared platform that would drive a consumer-centric model for the company. Nike built this new platform using Oracle Coherence and Oracle Exadata. Using Coherence, Nike built a data grid tier as a distributed cache, thereby providing low-latency access to the most recent and relevant data for consumers. Nicole discussed how the Nike+ Digital Sports Platform is unique in the way that it utilizes the Coherence grid: Nike takes advantage of Coherence as a traditional cache using both cache-aside and cache-through patterns (a minimal sketch of the cache-aside pattern appears at the end of this article). This new tier has enabled Nike to create a horizontally scalable, distributed, event-driven processing architecture. Current data grid volume is approximately 150,000 requests per minute, with about 40 million objects at any given time on the grid.

Improving Customer Experience Across Multiple Channels. Customer experience is on top of every CIO's mind. Customer experience needs to be consistent and secure across the multiple devices consumers may use. This is the challenge Matt Lampe, CIO of Los Angeles Department of Water & Power (LADWP), was faced with.
Despite being the largest utilities company in the country, LADWP had been relying on a 38-year-old customer information system for serving its customers. Their prior system had been unable to keep up with growing customer demands. Last year, LADWP embarked on a journey to improve the customer experience for 1.6 million LADWP customers using the Oracle WebCenter platform. Figure 3: Multi-channel & multi-lingual LADWP.com built using Oracle WebCenter & Oracle Identity Management platform.

Matt shed light on his efforts to drive customer self-service across 3 dimensions: a new website, a new IVR platform and a new bill payment service. LADWP has built a new portal to increase customer self-service while reducing transactions via IVR. LADWP's website is powered by Oracle WebCenter Portal and is accessible from desktop and mobile devices. By leveraging Oracle WebCenter, LADWP eliminated the need to build, format, and maintain individual mobile applications or websites for different devices. Their entire content is managed using Oracle WebCenter Content and secured using Oracle Identity Management. This new portal turned their paper-based processes into web-based workflows for customers. This includes automation of self-service implemented through My Account, like Bill Pay, Payment History, Bill History and Usage Analysis. LADWP's solution went live in April 2012. Matt indicated that LADWP's self-service portal has greatly improved customer satisfaction. In a JD Power Associates website satisfaction survey, results indicate rankings have climbed by 25+ points, marking a remarkable increase in user experience.

Bolstering Performance and Simplifying Manageability of Business Applications. Ingvar Petursson, Senior Vice President of IT at Nintendo America, joined Hasan on stage to discuss their choice of Exalogic. Nintendo had significant new requirements coming their way for business systems, both internal and external, in the years to come, especially with new products like the Wii U on the horizon this holiday season. Nintendo needed a platform that could give them performance, availability and ease of management as they deploy business systems. Ingvar selected Engineered Systems for two reasons: 1. High performance. 2. Ease of management. Figure 4: Nintendo relies on Oracle Exalogic to run ATG eCommerce, Oracle E-Business Suite and several business applications.

Nintendo made a decision to run their business applications (ATG eCommerce, E-Business Suite) and several Fusion Middleware components on the Exalogic platform. What impressed Ingvar were the "stress" testing results during evaluation: Oracle Exalogic could handle their 3-year load estimates for many functions, which was better than Nintendo expected, without any hardware expansion.

Faster Processing of Big Data. Middleware plays an increasingly important role in Big Data. Last year at OpenWorld we announced the introduction of Oracle Data Integrator for Hadoop and Oracle Loader for Hadoop, which help move, transform and load data between the Big Data Appliance and Exadata. This year, we've added new capabilities to find, filter, and focus data using Oracle Event Processing. This product can natively integrate with the Big Data Appliance or run standalone. Hasan briefly discussed how NTT Docomo, the largest mobile operator in Japan, leverages Oracle Event Processing & Oracle Coherence to process mobile data (from 13 million smartphone users) at a speed of 700K events per second before feeding it to Hadoop for distributed processing of big data.
Figure 5: Mobile traffic data processing at NTT Docomo with Oracle Event Processing & Oracle Coherence    
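For readers curious what the cache-aside pattern mentioned above looks like in practice, here is a minimal, generic sketch written against the Coherence for .NET client API. It is not Nike's implementation: the cache name, the UserMetrics type and the database loader are hypothetical stand-ins.

// A generic cache-aside sketch using the Coherence for .NET client API.
// NOT Nike's implementation: the cache name, UserMetrics type and the
// database loader below are hypothetical stand-ins.
using Tangosol.Net;

public class UserMetrics
{
    public string UserId { get; set; }
    public int DailyFuelPoints { get; set; }
}

public class MetricsRepository
{
    // Coherence exposes a named, partitioned cache to the client.
    private readonly INamedCache cache = CacheFactory.GetCache("user-metrics");

    public UserMetrics GetMetrics(string userId)
    {
        // 1. Try the data grid first (the low-latency path).
        var metrics = (UserMetrics)cache[userId];
        if (metrics != null)
            return metrics;

        // 2. On a miss, fall back to the system of record...
        metrics = LoadMetricsFromDatabase(userId);

        // 3. ...and populate the grid for subsequent readers.
        cache[userId] = metrics;
        return metrics;
    }

    private UserMetrics LoadMetricsFromDatabase(string userId)
    {
        // Placeholder for the real database call.
        return new UserMetrics { UserId = userId, DailyFuelPoints = 0 };
    }
}

In a real deployment the cached type would also need to be registered with Coherence's serialization configuration, and the cache-through variant instead plugs the database load into the grid itself so that misses are loaded transparently.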

    Read the article

  • Can Microsoft Build Appliances?

    - by andrewbrust
Billy Hollis, my Visual Studio Live! colleague and fellow Microsoft Regional Director, said recently, and I am paraphrasing, that the computing world, especially on the consumer side, has shifted from one of building hardware and software that makes things possible to do, to building products and technologies that make things easy to do. Billy crystallized things perfectly, as he often does.

In this new world of "easy to do," Apple has done very well and Microsoft has struggled. In the old world, customers wanted a Swiss Army Knife, with the most gimmicks and gadgets possible. In the new world, people want elegant cutlery. They may want cake cutters and utility knives too, but they don't want one device that works for all three tasks. People don't want tools, they want utensils. People don't want machines. They want appliances.

Microsoft Appliances: They Do Exist

Microsoft has built a few appliance-like devices. I would say Xbox 360 is an appliance. It's versatile, mind you, but it's the kind of thing you plug in, turn on and use, as opposed to set up, tune, and open up to upgrade the internals. Windows Phone 7 is an appliance too. It's a true smartphone, unlike Windows Mobile, which was a handheld computer with a radio stack. Zune is an appliance too, and a nice one. It hasn't attained much traction in the market, but that's probably because the seminal consumer computing appliance -- the iPod -- got there so much more quickly.

In the embedded world, Mediaroom, Microsoft's set-top product for the cable industry (used by AT&T U-Verse and others), is an appliance. So is Microsoft's Sync technology, used in Ford automobiles. Even on the enterprise side, Microsoft has an appliance: SQL Server Parallel Data Warehouse Edition (PDW) combines Microsoft software with select OEMs' server, networking and storage hardware. You buy the appliance units from the OEMs, plug them in, connect them and go.

I would even say that Bing is an appliance. Not in the hardware sense, mind you. But from the software perspective, it's a single-purpose product that you visit or run, use and then move on. You don't have to install it (except the iOS and Android native apps, where it's pretty straightforward), you don't have to customize it, you don't have to program it. Basically, you just use it.

Microsoft Appliances that Should Exist

But Microsoft builds a bunch of things that are not appliances. Media Center is not an appliance, and it most certainly should be. Instead, it's an app that runs on Windows 7. It runs full-screen and you can use this configuration to conceal the fact that Windows is under it, but eventually something will cause you to abandon that masquerade (like Patch Tuesday).

The next version of Windows Home Server won't, in my opinion, be an appliance either. Now that the Drive Extender technology is gone, and users can't just add and remove drives into and from a single storage pool, the product is much more like an IT server and less like an appliance-premised one. Much has been written about this decision by Microsoft. I'll just sum it up in one word: pity.

Microsoft doesn't have anything remotely appliance-like in the tablet category, either. Until it does, it likely won't have much market share in that space either. And of course, the bulk of Microsoft's product catalog on the business side is geared to enterprise machines and not personal appliances.

Appliance DNA: They Gotta Have It

The consumerization of IT is real, because businesspeople are consumers too.
They appreciate the fit and finish of appliances at home, and they increasingly feel entitled to have it at work too. Secure and reliable push email in a smartphone is necessary, but it isn't enough. People want great apps and a pleasurable user experience too. The full Microsoft Office product is needed at work, but a PC with a keyboard and mouse, or maybe a touch screen that uses a stylus (or requires really small fingers), to run Office isn't enough either. People want a flawless touch experience available for the times they want to read and take quick notes. Until Microsoft realizes this fully and internalizes it, it will suffer defeats in the consumer market and even setbacks in the business market. Think about how slow the Office upgrade cycle is…now imagine if the next version of Office had a first-class alternate touch UI and consider the possible acceleration in adoption rates.

Can Microsoft make the appliance switch? Can the appliance mentality become pervasive at the company? Can Microsoft hasten its release cycles dramatically and shed the "some assembly required" paradigm upon which many of its products are based? Let's face it, the chances that Microsoft won't make this transition are significant.

But there are also encouraging signs, and they should not be ignored. The appliances we have already discussed, especially Xbox, Zune and Windows Phone 7, are the most obvious in this regard. The fact that SQL Server has an appliance SKU now is a more subtle but perhaps also more significant outcome, because that product sits so smack in the middle of Microsoft's enterprise stack. Bing is encouraging too, especially given its integrated travel, maps and augmented reality capabilities. As Bing gains market share, Microsoft has tangible proof that it can transform and win, even when everyone outside the company, and many within it, would bet otherwise.

That Great Big Appliance in the Sky

Perhaps the most promising (and evolving) proof points toward the appliance mentality, though, are Microsoft's cloud offerings -- Azure and BPOS/Office 365. While the cloud does not represent a physical appliance (quite the opposite, in fact), its ability to make acquisition, deployment and use of technology simple for the user is absolutely an embodiment of the appliance mentality and spirit. Azure is primarily a platform-as-a-service offering; it doesn't just provide infrastructure. SQL Azure does likewise for databases. And Office 365 does likewise for SharePoint, Exchange and Lync. You don't administer, tune and manage servers; instead, you create databases or site collections or mailboxes and start using them. Upgrades come automatically, and it seems like releases will come more frequently. Fault tolerance and content distribution are just there. No muss. No fuss. You use these services; you don't have to set them up and think about them. That's how appliances work.

To me, these signs point out that Microsoft has the full capability of transforming itself. But there's a lot of work ahead. Microsoft may say they're "all in" on the cloud, but the majority of the company is still oriented around its old products and models. There needs to be a wholesale cultural transformation in Redmond. It can happen, but product management, program management, the field and executive ranks must unify in the effort. So must partners, and even customers. New leaders must rise up and Microsoft must be able to see itself as a winner.
If Microsoft does this, it could lock in decades of new success, and be a standard business school case study for doing so. If not, the company will have missed an opportunity, and may see its undoing.

    Read the article

  • 14+ WordPress Portfolio Themes

    - by Edward
There are various portfolio themes for WordPress out there; with this collection we are trying to help you choose the best one. These themes can be used to create any type of personal, photography, art or corporate portfolio.

Display 3 in 1. Display 3 in 1 – Business & Portfolio WordPress Theme. Features a fantastic 3D image slideshow that can be controlled from your backend with a custom tool. The theme has a huge WordPress custom backend (8 additional admin pages) that makes customization easy for those who don't know much about coding or WordPress. Price: $40. View Demo | Download

DeepFocus. Tempting features such as automatic separation of blog and portfolio content by template, publishing of the most important information on the homepage, styles to choose from and many more. It also provides page templates for blog, portfolio, blog archive, tags etc., and lets you manage everything from one place. Price: $39 (package includes more than 55 themes). View Demo | Download

SimplePress. Simple, yet awesome. One of the best portfolio themes. Price: $39 (package includes more than 55 themes). View Demo | Download

Graphix. Graphix is one of the best WordPress portfolio themes. It is most suited to aspiring designers, developers, artists and photographers who'd like a framework theme with a great-looking portfolio and a feature-rich blog. It has a theme options page, 5 color styles, SEO options, featured content blocks, a drop-down multi-level menu, social profile link custom widgets, custom posts, custom page templates etc. Price: $69 Single & $149 Developer Package. View Demo | Download

Bizznizz. It boasts many features such as a custom homepage, custom post types, custom widgets, portfolio templates, alternative styles and many more. View Demo | Download

Showtime. The ultimate WordPress theme for creating your web portfolio. It has 3 different styles for you to choose from. Price: $40. View Demo | Download

Montana WP Horizontal Portfolio Theme. Montana Theme – WP Horizontal Portfolio Theme, best suited for creative studios to showcase design, photography, illustration, paintings and art. Price: $30. View Demo | Download

OverALL. OverALL Premium WordPress Blog & Portfolio Theme is low priced & has tons of amazing features. Price: $17. View Demo | Download

Habitat. Habitat – Blog and Portfolio Theme. Unique portfolio sorting/filtering with a custom jQuery script (each entry supports multiple images or a video). Multiple featured images for each post to generate individual slideshows per post, or the option to directly embed video content from YouTube, Vimeo, Hulu etc. Price: $35. View Demo | Download

Fresh Folio. Fresh Folio from WooThemes can be used as both a portfolio and a premium WordPress theme. The theme is a remix of the Fresh News Theme and Proud Folio Theme which combines all the best elements of the respective blog and portfolio style themes. View Demo | Download

Fresh Folio features:
- Can be used to create an impressive portfolio.
- 7 diverse theme styles to choose from (default, blue, red, grunge light, grunge floral, antique, blue creamer, nightlife).
- The template will automatically (visually) separate your blog & portfolio content, making this an amazing theme for aspiring designers, developers, artists, photographers etc.
- Unique page template types for the portfolio, blog, blog archives, tags & search results.
- Integrated Theme Options (for WordPress) to tweak the layout, colour scheme etc. for the theme.
- Optional automatic image resize, which is used to dynamically create the thumbnails and featured images.
- Includes widget-enabled sidebars.

eGallery. eGallery is a theme made to transform your WordPress blog into a fully functional online portfolio. The theme is perfectly designed to emphasize the artwork you choose to showcase. The design has been greatly enhanced using JavaScript, and is easy to implement. Price: $39 (package includes more than 55 themes). View Demo | Download

ProudFolio. ProudFolio is a premium portfolio WordPress theme from WooThemes. The theme is for designers, developers, artists and photographers who would like a showcase theme that serves as both a portfolio and a blog. ProudFolio puts a strong emphasis on the portfolio pieces, allowing for decent-sized thumbnails, huge fullscreen views via Lightbox, and full details on the single page. The theme file also contains a choice of three different background images and color schemes. Price: $70 Single, $150 Developer License. View Demo | Download

ProudFolio features:
- The template will automatically (visually) separate your blog & portfolio content.
- A unique homepage layout, which publishes only the most important information.
- Unique page templates for the portfolio, blog, blog archives, tags & search results.
- Integrated Theme Options (for WordPress) to tweak the layout, colour scheme etc. for the theme.
- Built-in video panel, which you can use to publish any web-based Flash videos.
- Automatic image resize, which is used to dynamically create the thumbnails and featured images.
- Custom page templates for archives, sitemap & image gallery.
- Built-in Gravatar support for authors & comments.
- Integrated banner management script to display randomized banner ads of your choice site-wide.
- Pretty drop-down navigation everywhere.
- Widget-enabled sidebars.

Portfolio WordPress Theme. A FREE WordPress theme designed for web portfolios and (for now) just for web portfolios. It comes with an administrative panel from where you can edit the head quote text, edit all theme colors, font families and font sizes, and fill in a curriculum vitae and display it on a special page. The theme demo and download can be found here.

Viz | Biz. Viz | Biz is a premium WordPress photo gallery and portfolio theme designed specifically for photographers, graphic designers and web designers who want to display their creative work online, market their services, as well as have a typical text blog, using the power and flexibility of WordPress. It is priced at $79.95.

Viz | Biz theme features:
- Premium quality portfolio template.
- Custom logo uploader to replace the standard graphic with your own unique look from the WP Dashboard.
- Integrated blog component (front images are custom fields and thumbnails, but you can also have a typical blog).
- Four tabbed feature areas (About Me, Services, Recent Posts, and Tags).
- Two home page feature photos (you choose which photos to feature using a WP category).
- Manage your online portfolio through the WordPress CMS.
- Crop two sizes of your work: one for the front page thumbnails and another full-size version, and upload to WP.
- Search engine optimized.

Related posts: 14 WordPress Photo Blog & Portfolio Themes; 6 PhotoBlog Portfolio WordPress Themes; Professional WordPress Business Themes

    Read the article

  • Advanced Continuous Delivery to Azure from TFS, Part 1: Good Enough Is Not Great

    - by jasont
The folks over on the TFS / Visual Studio team have been working hard at releasing a steady stream of new features for their new hosted Team Foundation Service in the cloud. One of the most significant features released was simple continuous delivery of your solution into your Azure deployments. The original announcement from Brian Harry can be found here.

Team Foundation Service is a great platform for .NET developers who are used to working with TFS on-premises. I've been using it since it became available at the //BUILD conference in 2011, and when I recently came to work at Stackify, it was one of the first changes I made. Managing work items is much easier than with the tool we were using previously, although there are some limitations (more on that in another blog post). However, when continuous deployment was made available, it blew my mind. It was the killer feature I didn't know I needed. Not to say that I wasn't previously an advocate for continuous delivery; just that it was always a pain to set up and configure. Having it hosted – and a one-click setup – well, that's just the best thing since sliced bread. It made perfect sense: my source code is in the cloud, and my deployment is in the cloud. Great! I can queue up a build from my iPad or phone and just let it go! I quickly tore through the quick setup and saw it all work… sort of.

This will be the first in a three-part series on how to take the building block of Team Foundation Service continuous delivery and build a CD model that will actually work for any team deploying something more advanced than a "Hello World" example.

Part 1: Good Enough Is Not Great
Part 2: A Model That Works: Branching and Multiple Deployment Environments
Part 3: Other Considerations: SQL, Custom Tasks, Etc

Good Enough Is Not Great

There. I've said it. I certainly hope no one on the TFS team is offended, but it's the truth. Let's take a look under the hood and understand how it works, and then why it's not enough to handle real-world CD as-is.

How it works (note that I've skipped a couple of steps; I already have my accounts set up and something deployed to Azure): The first step is to establish some OAuth magic between your Azure management portal and your TFS instance. You do this via the management portal. Once it's done, you have a new build process template in your TFS instance. (Image lifted from the documentation.)

From here, you'll get the usual prompts for security, allowing access, etc. But you'll also get to pick which solution in your source control to build. Here's what the bulk of the build definition looks like. All I've had to do is add in the solution to build (notice that mine is from a specific branch – Release – more on that later) and change the configuration. I trigger the build, and voila! I have an Azure deployment a few minutes later. The beauty of this is that it's all in the cloud and I'm not waiting for my machine to compile and upload the package. (I also had to enable the build definition first – by default it is created in a disabled state, probably a good thing since it will trigger on every.single.checkin by default.)

I get to see a history of deployments from the Azure portal, and can link into TFS to see the associated changesets and work items. You'll notice also that this build definition automatically put my code in the Staging slot of my Azure deployment – more on this soon. For now, I can VIP swap and be in production. (P.S. I hate VIP swap and "production" and "staging" in Azure. More on that later too.)

That's it. That's the default out-of-box experience. Easy, right? But it's full of room for improvement, so let's get into that….

The Problems

Nothing is perfect (except my code – it's always perfect), and neither is continuous deployment without a bit of work to help it fit your dev team's process. So what are the issues?

Issue 1: Staging vs QA vs Prod vs whatever other environments your team may have. This, for me, is the big hairy one. Remember how this automatically deployed to staging rather than prod for us? There are a couple of issues with this model: If I want to deliver to prod, it requires intervention on my part after deployment (via a VIP swap). If I truly want to promote between environments (i.e. Nightly Build –> Stable QA –> Production) I likely have configuration changes between each environment, such as database connection strings, and this process (and the VIP swap) doesn't account for this. Yet. (A sketch of per-environment configuration appears at the end of this post.)

Issue 2: Branching and delivering on every check-in. As I mentioned above, I have set this up to target a specific branch – Release – of my code. For the purposes of this example, I have adopted the "basic" branching strategy as defined by the ALM Rangers. This basically establishes a "Main" trunk where you branch off Dev and Release branches. Granted, the Release branch is usually the only thing you will deploy to production, but you certainly don't want to roll to production automatically when you merge to the Release branch and check in (unless you like the thrill of it, and in that case, I like your style, cowboy….). Rather, you have nightly build and QA environments, or if you've adopted the feature-branch model you have environments for those. Those are the environments you want to continuously deploy to. But that takes us back to Issue 1: we currently have a 1:1 mapping of solution to Azure deployment target.

Issue 3: SQL and other custom tasks. Let's be honest and address the elephant in the room: I need to get some sleep because I see an elephant in the room. But seriously, I can't think of an application I have touched in the last 10 years that doesn't need to consider SQL changes when deploying code and upgrading an environment. Microsoft seems perfectly content to ignore this elephant for now: yes, they've added Data Tier Applications. But let's be honest with ourselves again: no one really uses it, and it's not suitable for anything more complex than a Hello World sample project database. Why? Because it doesn't fit well into a great source control story. Developers make stored procedure and table changes all day long while coding complex applications, and if someone forgets to go update the DACPAC before the automated deployment, you have a broken build until it's completed. Developers – not just DBAs – also like to work with SQL in SQL tools, not in Visual Studio. I'm really picking on SQL because that's generally the biggest concern that I hear. But we need to account for any custom tasks as well in the build process.

The Solutions… ?

We've taken a look at how this all works, and addressed the shortcomings. In my next post (which I promise will be very, very soon), I will detail how I've overcome these shortcomings and used this foundation to create a mature, flexible model for deploying my app – any version, any time, to any environment.
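As an aside on Issue 1, here is one minimal way to keep settings like connection strings out of the package so the same build can be promoted between environments. This is a hedged sketch, not the solution this series will land on: the "DbConnection" setting name is hypothetical, and its values would live in each environment's ServiceConfiguration.*.cscfg file.

// A minimal per-environment configuration sketch for an Azure role.
// The "DbConnection" setting name is hypothetical; define it in each
// environment's ServiceConfiguration.*.cscfg so that only config, not
// code, changes between Nightly, QA and Production.
using Microsoft.WindowsAzure.ServiceRuntime;

public static class AppConfig
{
    public static string DbConnection
    {
        get
        {
            // Under the Azure fabric, read the active service configuration...
            if (RoleEnvironment.IsAvailable)
                return RoleEnvironment.GetConfigurationSettingValue("DbConnection");

            // ...otherwise fall back to app/web.config for local development.
            return System.Configuration.ConfigurationManager
                         .AppSettings["DbConnection"];
        }
    }
}

With this in place, a VIP swap or a promotion to another hosted service picks up the target environment's settings automatically rather than whatever was baked into the build.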

    Read the article

  • Building dynamic OLAP data marts on-the-fly

    - by DrJohn
At the forthcoming SQLBits conference, I will be presenting a session on how to dynamically build an OLAP data mart on-the-fly. This blog entry is intended to clarify exactly what I mean by an OLAP data mart, why you may need to build them on-the-fly, and finally outline the steps needed to build them dynamically. In subsequent blog entries, I will present exactly how to implement some of the techniques involved.

What is an OLAP data mart? In data warehousing parlance, a data mart is a subset of the overall corporate data provided to business users to meet specific business needs. Of course, the term does not specify the technology involved, so I coined the term "OLAP data mart" to identify a subset of data which is delivered in the form of an OLAP cube which may be accompanied by the relational database upon which it was built. To clarify, the relational database is specifically created and loaded with the subset of data, and then the OLAP cube is built and processed to make the data available to the end-users via standard OLAP client tools.

Why build OLAP data marts? Market research companies sell data to their clients to make money. To gain competitive advantage, market research providers like to "add value" to their data by providing systems that enhance analytics, thereby allowing clients to make best use of the data. As such, OLAP cubes have become a standard way of delivering added value to clients. They can be built on-the-fly to hold specific data sets and meet particular needs and then hosted on a secure intranet site for remote access, or shipped to clients' own infrastructure for hosting. Even better, they support a wide range of different tools for analytical purposes, including the ever-popular Microsoft Excel.

Extension Attributes: The Challenge. One of the key challenges in building multiple OLAP data marts based on the same 'template' is handling extension attributes. These are attributes that meet the client's specific reporting needs, but do not form part of the standard template. Now clearly, these extension attributes have to come into the system via additional files and ultimately be added to relational tables so they can end up in the OLAP cube. However, processing these files and filling dynamically altered tables with SSIS is a challenge, as SSIS packages tend to break as soon as the database schema changes. There are two approaches to this: (1) dynamically build an SSIS package in memory to match the new database schema using C#, or (2) have the extension attributes provided as name/value pairs so the file's schema does not change and can easily be loaded using SSIS. The problem with the first approach is the complexity of writing an awful lot of complex C# code. The problem with the second approach is that name/value pairs are useless to an OLAP cube, so they have to be pivoted back into a proper relational table somewhere in the data load process WITHOUT breaking SSIS. How this can be done will be part of a future blog entry.

What is involved in building an OLAP data mart? There are a great many steps involved in building OLAP data marts on-the-fly. The key point is that all the steps must be automated to allow for the production of multiple OLAP data marts per day (i.e. many thousands, each with its own specific data set and attributes). Now most of these steps have a great deal in common with standard data warehouse practices. The key difference is that the databases are all built to order.

The only permanent database is the metadata database (shown in orange) which holds all the metadata needed to build everything else (i.e. client orders, configuration information, connection strings, client-specific requirements and attributes etc.). The staging database (shown in red) has a short life: it is built, populated and then ripped down as soon as the OLAP data mart has been populated. In the diagram below, the OLAP data mart comprises the two blue components: the data mart, which is a relational database, and the OLAP cube, which is an OLAP database implemented using Microsoft Analysis Services (SSAS). The client may receive just the OLAP cube or both components together, depending on their reporting requirements.

So, in broad terms, the steps required to fulfil a client order are as follows (a sketch of Step 2 in C# appears after the list):

Step 1: Prepare metadata
- Create a set of database names unique to the client's order
- Modify all package connection strings to be used by SSIS to point to the new databases and file locations

Step 2: Create relational databases
- Create the staging and data mart relational databases using dynamic SQL and set the database recovery mode to SIMPLE, as we do not need the overhead of logging anything
- Execute SQL scripts to build all database objects (tables, views, functions and stored procedures) in the two databases

Step 3: Load staging database
- Use SSIS to load all data files into the staging database in a parallel operation
- Load extension files containing name/value pairs. These will provide client-specific attributes in the OLAP cube.

Step 4: Load data mart relational database
- Load the data from staging into the data mart relational database, again in parallel where possible
- Allocate surrogate keys and use SSIS to perform surrogate key lookup during the load of fact tables

Step 5: Load extension tables & attributes
- Pivot the extension attributes from their native name/value pairs into proper relational tables
- Add the extension attributes to the views used by the OLAP cube

Step 6: Deploy & process OLAP cube
- Deploy the OLAP database directly to the server using a C# script task in SSIS
- Modify the connection string used by the OLAP cube to point to the data mart relational database
- Modify the cube structure to add the extension attributes to both the data source view and the relevant dimensions
- Remove any standard attributes that are not required
- Process the OLAP cube

Step 7: Backup and drop databases
- Drop the staging database, as it is no longer required
- Back up the data mart relational and OLAP databases and ship these to the client's infrastructure
- Drop the data mart relational and OLAP databases from the build server
- Mark the order complete

Start processing the next order, ad infinitum.

So my future blog posts and my forthcoming session at the SQLBits conference will all focus on some of the more interesting aspects of building OLAP data marts on-the-fly, such as handling the load of extension attributes and how to dynamically alter the structure of an OLAP cube using C#.
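To make Step 2 concrete, here is a minimal sketch, assuming a metadata-supplied order ID and a build-server connection string (both hypothetical), of creating the throw-away databases with dynamic SQL from C#. It is illustrative only, not the author's actual implementation.

// Minimal sketch of Step 2: create the order-specific staging and data
// mart databases with dynamic SQL. Database names, server name and the
// connection string are hypothetical placeholders.
using System.Data.SqlClient;

public static class DatabaseBuilder
{
    public static void CreateOrderDatabases(string orderId)
    {
        using (var conn = new SqlConnection(
            "Server=BUILDSRV;Integrated Security=SSPI"))
        {
            conn.Open();
            foreach (string db in new[] { "Staging_" + orderId,
                                          "DataMart_" + orderId })
            {
                Execute(conn, "CREATE DATABASE [" + db + "]");
                // SIMPLE recovery: no logging overhead for throw-away builds.
                Execute(conn, "ALTER DATABASE [" + db + "] SET RECOVERY SIMPLE");
            }
        }
    }

    private static void Execute(SqlConnection conn, string sql)
    {
        using (var cmd = new SqlCommand(sql, conn))
            cmd.ExecuteNonQuery();
    }
}

The same pattern extends to running the object-creation scripts against each new database; the more interesting pivot of name/value extension attributes back into relational tables is the subject the author defers to a future post.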

    Read the article

  • Grow Your Business with Security

    - by Darin Pendergraft
Author: Kevin Moulton. Kevin Moulton has been in the security space for more than 25 years, and with Oracle for 7 years. He manages the East Enterprise Security Sales Consulting Team. He is also a Distinguished Toastmaster. Follow Kevin on Twitter at twitter.com/kevin_moulton, where he sometimes tweets about security, but might also tweet about running, beer, food, baseball, football, good books, or whatever else grabs his attention. Kevin will be a regular contributor to this blog, so stay tuned for more posts from him.

It happened again! There I was, reading something interesting online, and realizing that a friend might find it interesting too. I clicked on the little email link, thinking that I could easily forward this to my friend, but no! Instead, a new screen popped up where I was asked to create an account. I was expected to create a User ID and password, not to mention providing some personally identifiable information, just for the privilege of helping that website spread their word. Of course, I didn't want to have to remember a new account and password, I didn't want to provide the requisite information, and I didn't want to waste my time. I gave up, closed the web page, and moved on to something else. I was left with a bad taste in my mouth, and my friend might never find her way to this interesting website. If you were this content provider, would this be the outcome you were looking for?

A few days later, I had a similar experience, but this one went a little differently. I was surfing the web when I happened upon some little tchotchke that I just had to have. I added it to my cart. When I went to buy the item, I was again brought to a page to create an account. Groan! But wait! On this page, I also had the option to sign in with my OpenID account, my Facebook account, my Yahoo account, or my Google account. I have all of those! No new account to create, no new password to remember, and no personally identifiable information to be given to someone else (I've already given it all to those other guys, after all). In this case, the vendor was easy to deal with, and I happily completed the transaction. That pleasant experience will bring me back again.

This is where security can grow your business. It's a differentiator. You've got to have a presence on the web, and that presence has to take into account all the smartphones everyone's carrying, and the tablets that took over Cyber Monday this year. If you are a company that a customer can deal with securely, and do so easily, then you are a company customers will come back to again and again.

I recently had a need to open a new bank account. Every bank has a web presence now, but they are certainly not all the same. I wanted one that I could deal with easily using my laptop, but I also wanted 2-factor authentication in case I had to log in from a shared machine, and I wanted an app for my iPad. I found a bank with all three, and that's who I am doing business with.

Let's say, for example, that I'm in a regular Texas Hold'em game on Friday nights, so I move a couple of hundred bucks from checking to savings on Friday afternoons. I move a similar amount each week and I do it from the same machine. The bank trusts me, and they trust my machine. Most importantly, they trust my behavior. This is adaptive authentication. There should be no reason for my bank to make this transaction difficult for me. Now let's say that I log in from a Starbucks in Uzbekistan and transfer $2,500. What should my bank do now?

Should they stop the transaction? Should they call my home number? (My former bank did exactly this once when I was taking money out of an ATM on a business trip, even though I had provided my cell phone number as my primary contact. When I asked them why they called my home number rather than my cell, they told me that their "policy" is to call the home number. If I'm on the road, what exactly is the use of trying to reach me at home to verify my transaction?) But, back to Uzbekistan… Should my bank assume that I am happily at home in New Jersey, and someone is trying to hack into my account? Perhaps they think they are protecting me, but I wouldn't be very happy if I happened to be traveling on business in Central Asia.

What if my bank were to automatically analyze my behavior and calculate a risk score? Clearly, this scenario would be outside of my typical behavior, so my risk score would necessitate something more than a simple login and password. Perhaps, in this case, a one-time password to my cell phone would prove that this is not just some hacker halfway around the world. (A toy illustration of such a risk score appears at the end of this post.)

But what if you're not a bank? Do you need this level of security? If you want to be a business that is easy to deal with while also protecting your customers, then of course you do. You want your customers to trust you, but you also want them to enjoy doing business with you. Make it easy for them to do business with you, and they'll come back, and perhaps even Tweet about it, or Like you, and then their friends will follow.

How can Oracle help? Oracle has the technology and expertise to help you to grow your business with security. Oracle Adaptive Access Manager will help you to prevent fraud while making it easier for your customers to do business with you by providing the risk analysis I discussed above, step-up authentication, and much more. Oracle Mobile and Social Access Service will help you to secure mobile access to applications by expanding on your existing back-end identity management infrastructure, and allowing your customers to transact business with you using the social media accounts they already know. You also have device fingerprinting and metrics to help you to grow your business securely. Security is not just a cost anymore. It's a way to set your business apart. With Oracle's help, you can be the business that everyone's tweeting about.

Image courtesy of Flickr user shareski
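To make the adaptive authentication idea concrete, here is a purely illustrative toy in C#. It is not Oracle Adaptive Access Manager's algorithm; every signal, weight and threshold below is invented for the sake of the example.

// A toy risk score, NOT Oracle Adaptive Access Manager's algorithm.
// All signals, weights and thresholds here are invented.
public static class RiskScorer
{
    public static int Score(bool knownDevice, bool usualCountry,
                            decimal amount, decimal typicalAmount)
    {
        int score = 0;
        if (!knownDevice)  score += 40; // unfamiliar device fingerprint
        if (!usualCountry) score += 40; // login far from normal locations
        if (amount > 5 * typicalAmount)
            score += 20;                // transfer is atypically large
        return score;
    }

    public static bool RequiresStepUp(int score)
    {
        // Above this threshold, step up the authentication: for example,
        // send a one-time password to the user's cell phone first.
        return score >= 50;
    }
}

The Friday-night poker transfer scores 0 and sails through; the Uzbekistan transfer trips all three signals and triggers the one-time password, which is exactly the behavior the post argues for.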

    Read the article

  • Get Application Title from Windows Phone

    - by psheriff
In a Windows Phone application that I am currently developing, I needed to be able to retrieve the Application Title of the phone application. You can set the Deployment Title in the Properties of your Windows Phone application; however, getting to this value programmatically can be a little tricky. This article assumes that you have Visual Studio 2010 and the Windows Phone tools installed along with it. The Windows Phone tools must be downloaded separately and installed with Visual Studio 2010. You may also download the free Visual Studio 2010 Express for Windows Phone developer environment.

The WMAppManifest.xml File

First off, you need to understand that when you set the Deployment Title in the Properties window of your Windows Phone application, this title actually gets stored in an XML file located under the \Properties folder of your application. This XML file is named WMAppManifest.xml. A portion of this file is shown in the following listing:

<?xml version="1.0" encoding="utf-8"?>
<Deployment xmlns="http://schemas.microsoft.com/windowsphone/2009/deployment"
            AppPlatformVersion="7.0">
  <App xmlns=""
       ProductID="{71d20842-9acc-4f2f-b0e0-8ef79842ea53}"
       Title="Mobile Time Track"
       RuntimeType="Silverlight"
       Version="1.0.0.0"
       Genre="apps.normal"
       Author="PDSA, Inc."
       Description="Mobile Time Track"
       Publisher="PDSA, Inc.">
    ...
  </App>
</Deployment>

Notice the "Title" attribute in the <App> element in the above XML document. This is the value that gets set when you modify the Deployment Title in the Properties window of your Phone project. The only value you can set from the Properties window is the Title. All of the other attributes you see here must be set by going into the XML file and modifying them directly. Note that this information duplicates some of the information that you can also set from the Assembly Information… button in the Properties window. Why Microsoft did not just use that information, I don't know.

Reading Attributes from WMAppManifest

I searched all over the namespaces and classes within the Windows Phone DLLs and could not find a way to read the attributes within the <App> element. Thus, I had to resort to good old-fashioned XML processing. First off, I created a WinPhoneCommon class and added two static methods, as shown in the snippet below:

public class WinPhoneCommon
{
  /// <summary>
  /// Returns the Application Title
  /// from the WMAppManifest.xml file
  /// </summary>
  /// <returns>The application title</returns>
  public static string GetApplicationTitle()
  {
    return GetWinPhoneAttribute("Title");
  }

  /// <summary>
  /// Returns the Application Description
  /// from the WMAppManifest.xml file
  /// </summary>
  /// <returns>The application description</returns>
  public static string GetApplicationDescription()
  {
    return GetWinPhoneAttribute("Description");
  }

  ... GetWinPhoneAttribute method here ...
}

In your Windows Phone application you can now simply call WinPhoneCommon.GetApplicationTitle() or WinPhoneCommon.GetApplicationDescription() to retrieve the Title or Description properties from the WMAppManifest.xml file, respectively. You'll notice that each of these methods makes a call to the GetWinPhoneAttribute method.
This method is shown in the following code snippet:

/// <summary>
/// Gets an attribute from the Windows Phone WMAppManifest.xml file.
/// To use this method, add a reference to the System.Xml.Linq DLL
/// </summary>
/// <param name="attributeName">The attribute to read</param>
/// <returns>The attribute's value</returns>
private static string GetWinPhoneAttribute(string attributeName)
{
  string ret = string.Empty;

  try
  {
    // Load the manifest and locate the single <App> element
    XElement xe = XElement.Load("WMAppManifest.xml");
    var attr = (from manifest in xe.Descendants("App")
                select manifest).SingleOrDefault();
    if (attr != null)
      ret = attr.Attribute(attributeName).Value;
  }
  catch
  {
    // Ignore errors in case this method is called
    // from design time in VS.NET
  }

  return ret;
}

I love using the new LINQ to XML classes contained in System.Xml.Linq.dll. When I did a Bing search, the only samples I found for reading attribute information from WMAppManifest.xml used either XmlReader or XmlReaderSettings objects. These are fine and work, but involve a little extra code. Instead of using these, I added a reference to System.Xml.Linq.dll, then added two using statements to the top of the WinPhoneCommon class:

using System.Linq;
using System.Xml.Linq;

Now, with just a few lines of LINQ to XML code, you can navigate to the App element and extract the appropriate attribute that you pass into the GetWinPhoneAttribute method. Notice that I added a little bit of exception-handling code in this method. I ignore the exception in case you call this method in the Loaded event of a user control. At design time you cannot access the WMAppManifest file, and thus an exception would be thrown.

Summary

In this article you learned how to retrieve the attributes from the WMAppManifest.xml file. I use this technique to grab information that I would otherwise have to hard-code in my application. Getting the Title or Description for your Windows Phone application is easy with just a little bit of LINQ to XML code.

NOTE: You can download the complete sample code at my website, http://www.pdsa.com/downloads. Choose Tips & Tricks, then "Get Application Title from Windows Phone" from the drop-down.

Good luck with your coding,
Paul Sheriff

** SPECIAL OFFER FOR MY BLOG READERS ** Visit http://www.pdsa.com/Event/Blog for a free video on Silverlight entitled Silverlight XAML for the Complete Novice - Part 1.
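As a quick usage sketch (an addition for illustration, not part of the original article): with the WinPhoneCommon class above in place, the title can be pushed into the UI from a page constructor. The ApplicationTitle TextBlock name is a hypothetical stand-in for whatever element your page template defines.

// Hypothetical usage: show the manifest title on the main page.
// Assumes a TextBlock named ApplicationTitle in MainPage.xaml.
using Microsoft.Phone.Controls;

public partial class MainPage : PhoneApplicationPage
{
    public MainPage()
    {
        InitializeComponent();

        // Pulls the Deployment Title (e.g. "Mobile Time Track")
        // straight from WMAppManifest.xml at runtime.
        ApplicationTitle.Text = WinPhoneCommon.GetApplicationTitle();
    }
}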

    Read the article

  • Guide to MySQL & NoSQL, Webinar Q&A

    - by Mat Keep
Yesterday we ran a webinar discussing the demands of next-generation web services and how blending the best of relational and NoSQL technologies enables developers and architects to deliver the agility, performance and availability needed to be successful. Attendees posted a number of great questions to the MySQL developers, serving to provide additional insights into areas like auto-sharding and cross-shard JOINs, replication, performance, client libraries, etc. So I thought it would be useful to post those below, for the benefit of those unable to attend the webinar.

Before getting to the Q&A, there are a couple of other resources that may be useful to those looking at NoSQL capabilities within MySQL:
- On-Demand webinar (coming soon!)
- Slides used during the webinar
- Guide to MySQL and NoSQL whitepaper
- MySQL Cluster demo, including NoSQL interfaces, auto-sharding, high availability, etc.

So here is the Q&A from the event.

Q. Where does MySQL Cluster fit in to the CAP theorem?
A. MySQL Cluster is flexible. A single Cluster will prefer consistency over availability in the presence of network partitions. A pair of Clusters can be configured to prefer availability over consistency. A full explanation can be found on the MySQL Cluster & CAP Theorem blog post.

Q. Can you configure the number of replicas? (the slide used a replication factor of 1)
A. Yes. A cluster is configured by an .ini file. The option NoOfReplicas sets the number of originals and replicas: 1 = no data redundancy, 2 = one copy, etc. Usually there's no benefit in setting it >2.

Q. Interestingly, most (if not all) of the NoSQL databases recommend having 3 copies of data (the replication factor).
A. Yes, with configurable quorum-based reads and writes. MySQL Cluster does not need a quorum of replicas online to provide service. Systems that require a quorum need >2 replicas to be able to tolerate a single failure. Additionally, many NoSQL systems take liberal inspiration from the original GFS paper, which described a 3-replica configuration. MySQL Cluster avoids the need for a quorum by using a lightweight arbitrator. You can configure more than 2 replicas, but this is a tradeoff between incrementally improved availability and linearly increased cost.

Q. Can you have cross-node-group JOINs? Wouldn't that run into the risk of flooding the network?
A. MySQL Cluster 7.2 supports cross-node-group joins. A full cross-join can require a large amount of data transfer, which may bottleneck on network bandwidth. However, for more selective joins, typically seen with OLTP and light analytic applications, cross-node-group joins give a great performance boost and network bandwidth saving over having the MySQL Server perform the join.

Q. Are the details of the benchmark available anywhere? According to my calculations it results in approx. 350k ops/sec per processor, which is the largest number I've seen lately.
A. The details are linked from Mikael Ronstrom's blog. The benchmark uses a benchmarking tool we call flexAsynch, which runs parallel asynchronous transactions. It involved 100-byte reads, of 25 columns each. Regarding the per-processor ops/s, MySQL Cluster is particularly efficient in terms of throughput per node. It uses lock-free, minimal-copy message passing internally, and maximizes ID cache reuse. Note also that these are in-memory tables; there is no need to read anything from disk.

Q. Is access control (like table-level access control) planned to be supported for NoSQL access mode?
A. Currently we have not seen much need for full SQL-like access control (which has always been overkill for web apps and telco apps). So we have no plans, though especially with memcached it is certainly possible to turn on connection-level access control. But specifically, table-level controls are not planned.

Q. How is the performance of the memcached API with MySQL against memcached+MySQL or any other object cache like Ehcache with a MySQL DB?
A. With the memcache API we generally see a memcached response in less than 1 ms, and a small cluster with one memcached server can handle tens of thousands of operations per second.

Q. Can .NET access the memcached API?
A. Yes, just use a .NET memcache client such as the enyim or BeIT memcache libraries. (A short enyim sketch appears at the end of this post.)

Q. Is the row-level locking applicable when you update a column through the memcached API?
A. An update that comes through memcached uses a row lock and then releases it immediately. Memcached operations like "INCREMENT" are actually pushed down to the data nodes. In most cases the locks are not even held long enough for a network round trip.

Q. Has anyone published an example using something like PHP? I am assuming that you just use the PHP memcached extension to hook into the memcached API. Is that correct?
A. Not that I'm aware of, but absolutely, you can use it with PHP or any of the other drivers.

Q. For beginners, we need more examples.
A. Take a look here for a fully worked example.

Q. Can I access MySQL using COBOL (Open Cobol) or C, and if so, where can I find the coding libraries etc.?
A. There is a COBOL implementation that works well with MySQL, but I do not think it is Open Cobol. Also, there is a MySQL C client library that is a standard part of every MySQL distribution.

Q. Is there a place to go to find help when testing and/or implementing the NoSQL access?
A. If using Cluster, then you can use the [email protected] alias or post on the MySQL Cluster forum.

Q. Are there any white papers on this?
A. Yes, there is more detail in the MySQL Guide to NoSQL whitepaper.

If you have further questions, please don't hesitate to use the comments below!
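Following up the .NET question above, here is a hedged sketch of talking to MySQL Cluster's memcached interface with the enyim client. The host, port, key and value are hypothetical placeholders; the memcached endpoint itself must be set up per the MySQL Cluster memcached documentation.

// A sketch (not an official MySQL example) of using the enyim .NET client
// against MySQL Cluster's memcached API. The host/port, key and value are
// hypothetical placeholders.
using System;
using Enyim.Caching;
using Enyim.Caching.Configuration;
using Enyim.Caching.Memcached;

class ClusterMemcacheDemo
{
    static void Main()
    {
        var config = new MemcachedClientConfiguration();
        config.AddServer("clusterhost", 11211); // hypothetical endpoint

        using (var client = new MemcachedClient(config))
        {
            // The write is applied to the underlying NDB data nodes.
            client.Store(StoreMode.Set, "user:42:name", "Alice");

            // Reads come straight back from the data nodes.
            Console.WriteLine(client.Get<string>("user:42:name"));
        }
    }
}

The same key/value pairs remain visible through SQL and the other NoSQL interfaces, which is the blended relational/NoSQL story the webinar was making.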

    Read the article

  • CodePlex Daily Summary for Friday, August 22, 2014

CodePlex Daily Summary for Friday, August 22, 2014

Popular Releases

QuickMon: Version 3.22: This release adds two important changes. 1. Config variables at the monitor pack level (global to the entire monitor pack for all Collectors). 2. The QuickMon (Windows) service now automatically reloads monitor packs that have been changed since it was started. This means you don't have to restart the service for changes to take effect.

SSIS ReportGeneratorTask: ReportGenerator Task 1.8: New version of the SSIS Report Generator Task that supports SQL Server 2008, 2012 and 2014. In addition to minor bug fixes, Multi-Value Parameters and Execution Information were integrated. The complete variable and parameter assignment is now a string and can be set dynamically.

Corefig for Windows Server 2012 Core and Hyper-V Server 2012: Corefig 1.1.2 ISO: Fixes: updated Hyper-V scripts to use version 2 of the WMI tree; updated the Hyper-V check for saved VMs to look for the proper identifier; fixed text issues with the licensing tab (thanks to briangw for rooting this problem out). Enhancements: new (and improved) version number in Corefig.psd1.

Outlook 2013 Backup Add-In: Outlook Backup Add-In 1.3: Changelog for the new version: Added a button in the config window to reset the last backup time (this will trigger the backup after closing Outlook). Minimum interval set to 0 (backup at each closing of Outlook). Catch exception when a data store entry is corrupt. Added two parameters (prefix and suffix) to automatically rename the backup file. Updated VSTO runtime to 10.0.50325. Upgraded project to Visual Studio 2013. Added optional command to run after backup (e.g. pack backup files, ...). Add...

babelua: 1.6.7.0: V1.6.7.0 - 2014.8.21. New feature: added a file search window (Ctrl+1 or Alt+L), like the file search in VC Assistant. Stability improvements: performance improvement when BabeLua loads/unloads; performance improvement when the debugger loads Lua files.

File Explorer for WPF: FileExplorer3_20August2014: Please see the Aug 14 update.

Open NFe: RDI Open NFe 3.0 (alpha): Update to layout 3.10 of NFe.

ODBC Connect: v1.0: ODBC Connect executables for both 32-bit and 64-bit ODBC data sources.

MSSQL Deployment Tool: Microsoft SQL Deploy Tool v1.3.1: MicrosoftSqlDeployTool: v1.3.1.38348. What's changed? Updated namespace and assembly name. Bug fixing.

SharePoint 2013 Search Query Tool: SharePoint 2013 Search Query Tool v2.1: Layout improvements. Bug fixes. Stores auth method and user name. Moved experimental settings to the Advanced box.

CtrlAltStudio Viewer: CtrlAltStudio Viewer 1.2.2.41183 Alpha: This alpha of the CtrlAltStudio Viewer provides some preliminary Oculus Rift DK2 support. For more details, see the release notes linked to below. Release notes: http://ctrlaltstudio.com/viewer/release-notes/1-2-2-41183-alpha. Support info: http://ctrlaltstudio.com/viewer/support. Privacy policy: http://ctrlaltstudio.com/viewer/privacy. Disclaimer: This software is not provided or supported by Linden Lab, the makers of Second Life.

HDD Guardian: HDD Guardian 0.6.1: New: package now includes smartctl 6.3. Removed: standard notification e-mail; now you have to set your mail server to send e-mail alerts. Bugfixes: USB detection error; custom e-mail server settings issue; bottom panel displays a wrong ATA error count.

VG-Ripper & PG-Ripper: VG-Ripper 2.9.62: Changes. NEW: Added support for 'MadImage.org', 'ImgSpot.org', 'ImgClick.net', 'Imaaage.com', 'Image-Bugs.com', 'Pictomania.org', 'ImgDap.com' and 'FileSpit.com' links. FIXED: 'ImgSee.me' links.

Magick.NET: Magick.NET 7.0.0.0001: Magick.NET linked with ImageMagick 7 Beta.

CMake Tools for Visual Studio: CMake Tools for Visual Studio 1.2: This release adds the following new features and bug fixes from CMake Tools for Visual Studio 1.1: Added support for CMake 3.0. Added support for word completion. Added IntelliSense support for the CMAKE_HOST_SYSTEM_INFORMATION command. Fixed syntax highlighting for tokens beginning with escape sequences. Fixed an issue uninstalling CMake Tools for Visual Studio after Visual Studio has been uninstalled.

GW2 Personal Assistant Overlay: GW2 Personal Assistant Overlay 1.1: Overview: 1.1 is the second 'stable' release of the GW2 Personal Assistant Overlay. This version includes just a couple of very minor features and some minor bug fixes. For details regarding installation, setup, and general use, see Documentation. Note: if you were using a previous version, you will probably want to copy over the following user settings files: GW2PAO.DungeonSettings.xml, GW2PAO.EventSettings.xml, GW2PAO.WvWSettings.xml, GW2PAO.ZoneCompletionSettings.xml. New features: added new "No...

Fluentx: Fluentx v1.5.3: Added a few more extension methods.

fastJSON: v2.1.2: 2.1.2 - bug fix: circular references

JPush.NET: JPush Server SDK 1.2.1 (For JPush V3): Assembly: 1.2.1.24728. JPush REST API version: v3. JPush documentation reference. .NET Framework: v4.0 or above. Sample class: JPushClientV3. 2014 August 15th.

SEToolbox: SEToolbox 01.043.008 Release 1: Changed ship/station names to use the new DisplayName instead of Beacon/Antenna. Fixed issue with updated SE binaries 01.043.018 using new voxel material definitions.

New Projects

1thManage: GDT for everyone

CreateProjectOnCodePlex: This is the first project for CoderCamps.

HEAD FIRST C# LAB 1: A DAY AT THE RACES: This has been provided for educational purposes and general discussion to improve coding practices associated with the resources detailed within Head First C#.

Introduce Audit logging to your EF application using Repository & Unit of Work: Introduce auditing in your application that uses Entity Framework by utilizing the Repository and Unit of Work design patterns.

License Registration (C++): Allows creating a demo version and activating (or not) a module.

MS Word SharepointWiki Plugin: The scope of the plugin is to enable posting to a SharePoint Wiki from within MS Word with formatted text and images.

Send My Zip: This app will help you send zipped files and then send an email with the password information. This project is currently in setup mode and only available

winhttp: This is a project for HTTP/HTTPS download.

Wix Builder: WixBuilder focuses on easily generating a WiX script from a project output, then compiling and linking it into an MSI installer using the WiX Toolset.

XiamiSig

    Read the article

  • How to select the first occurrence in the auto-completion menu by pressing Enter?

    - by janoChen
    Every time there is a pop-up menu, I select the first occurrence and press Enter, but nothing happens (the word is not completed with the selected occurrence). The only way is to press Tab until you reach the term for a second time. Is there a way of selecting the first occurrence by pressing Enter (or another Vim hotkey)? My .vimrc:

    " SHORTCUTS
    nnoremap <F4> :set filetype=html<CR>
    nnoremap <F5> :set filetype=php<CR>
    nnoremap <F3> :TlistToggle<CR>
    " press space to turn off highlighting and clear any message already displayed.
    nnoremap <silent> <Space> :nohlsearch<Bar>:echo<CR>
    " set buffers commands
    nnoremap <silent> <M-F8> :BufExplorer<CR>
    nnoremap <silent> <F8> :bn<CR>
    nnoremap <silent> <S-F8> :bp<CR>
    " open NERDTree with start directory: D:\wamp\www
    nnoremap <F9> :NERDTree /home/alex/www<CR>
    " open MRU
    nnoremap <F10> :MRU<CR>
    " open current file (silently)
    nnoremap <silent> <F11> :let old_reg=@"<CR>:let @"=substitute(expand("%:p"), "/", "\\", "g")<CR>:silent!!cmd /cstart <C-R><C-R>"<CR><CR>:let @"=old_reg<CR>
    " open current file in localhost (default browser)
    nnoremap <F12> :! start "http://localhost" file:///"%:p""<CR>
    " open Vim's default Explorer
    nnoremap <silent> <F2> :Explore<CR>
    nnoremap <C-F2> :%s/\.html/.php/g<CR>
    " REMAPPING
    " map leader to ,
    let mapleader = ","
    " remap ` to '
    nnoremap ' `
    nnoremap ` '
    " remap increment numbers
    nnoremap <C-kPlus> <C-A>
    " COMPRESSION
    function Js_css_compress ()
        let cwd = expand('<afile>:p:h')
        let nam = expand('<afile>:t:r')
        let ext = expand('<afile>:e')
        if -1 == match(nam, "[\._]src$")
            let minfname = nam.".min.".ext
        else
            let minfname = substitute(nam, "[\._]src$", "", "g").".".ext
        endif
        if ext == 'less'
            if executable('lessc')
                cal system( 'lessc '.cwd.'/'.nam.'.'.ext.' &')
            endif
        else
            if filewritable(cwd.'/'.minfname)
                if ext == 'js' && executable('closure-compiler')
                    cal system( 'closure-compiler --js '.cwd.'/'.nam.'.'.ext.' > '.cwd.'/'.minfname.' &')
                elseif executable('yuicompressor')
                    cal system( 'yuicompressor '.cwd.'/'.nam.'.'.ext.' > '.cwd.'/'.minfname.' &')
                endif
            endif
        endif
    endfunction
    autocmd FileWritePost,BufWritePost *.js :call Js_css_compress()
    autocmd FileWritePost,BufWritePost *.css :call Js_css_compress()
    autocmd FileWritePost,BufWritePost *.less :call Js_css_compress()
    " GUI
    " taglist right side
    let Tlist_Use_Right_Window = 1
    " hide tool bar
    set guioptions-=T
    " remove scroll bars
    set guioptions+=LlRrb
    set guioptions-=LlRrb
    " set the initial size of window
    set lines=46 columns=180
    " set default font
    set guifont=Monospace
    " set guifont=Monospace\ 10
    " show line number
    set number
    " set default theme
    colorscheme molokai-2
    " encoding
    set encoding=utf-8
    setglobal fileencoding=utf-8 bomb
    set fileencodings=ucs-bom,utf-8,latin1
    " SCSS syntax highlight
    au BufRead,BufNewFile *.scss set filetype=scss
    " LESS syntax highlight
    syntax on
    au BufNewFile,BufRead *.less set filetype=less
    " Haml syntax highlight
    "au! BufRead,BufNewFile *.haml
    "setfiletype haml
    " Sass syntax highlight
    "au! BufRead,BufNewFile *.sass
    "setfiletype sass
    " set filetype indent
    filetype indent on
    " for snipMate to work
    filetype plugin on
    " show breaks
    set showbreak=----->
    " coding format
    set tabstop=4
    set shiftwidth=4
    set linespace=1
    " CONFIG
    " set location of ctags
    let Tlist_Ctags_Cmd='D:\ctags58\ctags.exe'
    " keep the buffer around when left
    set hidden
    " enable matchit plugin
    source $VIMRUNTIME/macros/matchit.vim
    " folding
    set foldmethod=marker
    set foldmarker={,}
    let g:FoldMethod = 0
    map <leader>ff :call ToggleFold()<cr>
    fun! ToggleFold()
        if g:FoldMethod == 0
            exe 'set foldmethod=indent'
            let g:FoldMethod = 1
        else
            exe 'set foldmethod=marker'
            let g:FoldMethod = 0
        endif
    endfun
    " save and restore folds when a file is closed and re-opened
    "au BufWrite ?* mkview
    "au BufRead ?* silent loadview
    " auto-open NERDTree every time Vim is invoked
    au VimEnter * NERDTree /home/alex/www
    " set omnicomplete
    autocmd FileType python set omnifunc=pythoncomplete#Complete
    autocmd FileType javascript set omnifunc=javascriptcomplete#CompleteJS
    autocmd FileType html set omnifunc=htmlcomplete#CompleteTags
    autocmd FileType css set omnifunc=csscomplete#CompleteCSS
    autocmd FileType xml set omnifunc=xmlcomplete#CompleteTags
    autocmd FileType php set omnifunc=phpcomplete#CompletePHP
    autocmd FileType c set omnifunc=ccomplete#Complete
    " Remove trailing white-space once the file is saved
    au BufWritePre * silent g/\s\+$/s///
    " Use CTRL-S for saving, also in Insert mode
    noremap <C-S> :update!<CR>
    vnoremap <C-S> <C-C>:update!<CR>
    inoremap <C-S> <C-O>:update!<CR>
    " DEFAULT
    set nocompatible
    source $VIMRUNTIME/vimrc_example.vim
    "source $VIMRUNTIME/mswin.vim
    "behave mswin
    " disable creation of swap files
    set noswapfile
    " no backups while editing
    set nowritebackup
    " disable creation of backups
    set nobackup
    " no file change pop up warning
    autocmd FileChangedShell * echohl WarningMsg | echo "File changed shell." | echohl None
    set diffexpr=MyDiff()
    function MyDiff()
        let opt = '-a --binary '
        if &diffopt =~ 'icase' | let opt = opt . '-i ' | endif
        if &diffopt =~ 'iwhite' | let opt = opt . '-b ' | endif
        let arg1 = v:fname_in
        if arg1 =~ ' ' | let arg1 = '"' . arg1 . '"' | endif
        let arg2 = v:fname_new
        if arg2 =~ ' ' | let arg2 = '"' . arg2 . '"' | endif
        let arg3 = v:fname_out
        if arg3 =~ ' ' | let arg3 = '"' . arg3 . '"' | endif
        let eq = ''
        if $VIMRUNTIME =~ ' '
            if &sh =~ '\<cmd'
                let cmd = '""' . $VIMRUNTIME . '\diff"'
                let eq = '"'
            else
                let cmd = substitute($VIMRUNTIME, ' ', '" ', '') . '\diff"'
            endif
        else
            let cmd = $VIMRUNTIME . '\diff'
        endif
        silent execute '!' . cmd . ' ' . opt . arg1 . ' ' . arg2 . ' > ' . arg3 . eq
    endfunction
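    A mapping commonly used for exactly this (offered here as a suggestion; it is not part of the original question) is to have Enter accept the highlighted entry while the popup menu is open:

    " accept the currently highlighted completion with Enter;
    " pumvisible() is true while the popup menu is open, and CTRL-Y ('yes')
    " inserts the selected match instead of adding a newline
    inoremap <expr> <CR> pumvisible() ? "\<C-y>" : "\<CR>"
    " optional: show the menu even for a single match and only insert the
    " longest common text, which makes the first entry easier to pick
    set completeopt=longest,menuone

    Because the mapping uses <expr>, it is re-evaluated on every keypress, so Enter behaves normally whenever the completion menu is not showing.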

    Read the article

  • CodePlex Daily Summary for Tuesday, July 02, 2013

    CodePlex Daily Summary for Tuesday, July 02, 2013

    Popular Releases

    - Mastersign.Expressions: v0.4.2: Added support for if(<cond>, <true-part>, <false-part>); fixed a multithreading issue with rand(); improved the demo application.
    - NB_Store - Free DotNetNuke Ecommerce Catalog Module: NB_Store v2.3.6 Rel0: v2.3.6 is now DNN6 and DNN7 compatible. Important: during update this install will overwrite the menu.xml setting; if you have changed it, make a backup before you upgrade and reapply your changes after the upgrade. Please view the documentation (system requirements, skill requirements, downloads and documents, step-by-step guide to a working store) if you are installing and configuring this module for the first time. Please ask all questions in the Discussions tab.
    - Document.Editor: 2013.26: What's new for Document.Editor 2013.26: new Insert Chart; improved user interface; minor bug fixes, improvements and speed-ups.
    - Wsus Package Publisher: Release V1.2.1307.01: Fixed an issue in the UI where approvals were not shown correctly in the 'Report' tab.
    - DirectX Tool Kit: July 2013 (July 1, 2013): VS 2013 Preview projects added, plus updates for DirectXMath 3.05 vectorcall; added use of sRGB WIC metadata for JPEG, PNG, and TIFF; SaveToWIC functions updated with a new optional setCustomProps parameter and an error check with optional targetFormat.
    - Core Server 2012 Powershell Script Hyper-v Manager: new_root.zip: Version 1.0.
    - JSON Toolkit: 4.1.736: Improved stringify performance; new serializing feature; new anonymous type support in constructors.
    - DotNetNuke® IFrame: IFrame 04.05.00: New DNN6/7 manifest file and Azure compatibility.
    - VidCoder: 1.5.2 Beta: Fixed a crash on presets with an invalid bitrate.
    - Gardens Point LEX: version 1.2.1: The main distribution is a zip file containing the binary executable, documentation, source code and the examples. Changes: version 1.2.1 has new facilities for defining and manipulating character classes, which make the construction of large Unicode character classes more convenient. The runtime code for performing automaton backup has been re-implemented and is now faster for scanners that need backup. Source code: the distribution contains a complete VS2010 project for the appli...
    - ZXMAK2: Version 2.7.5.7: Fixed TZX emulation (Bruce Lee, Zynaps); fixed ATM 16 colors for border; added memory module PROFI 512K; added PROFI V03 ROM image; fixed PROFI 3.XX config.
    - Twitter image Downloader: Twitter Image Downloader 2 with Installer: Application file with InstallShield and the .NET 4.0 redistributable.
    - Ultimate Music Tagger: 1.0.0.0: First release of Ultimate Music Tagger.
    - BlackJumboDog: Ver5.9.2 (2013.06.28): Two changes, one SMTP-related and one HTTPS-related.
    - Outlook 2013 Add-In: Configuration Form: This new version refactors the code a bit, moves configuration from the main form to a separate form (click the little "gear" icon to access it) to gain more space to display items, adds an option to show past-day appointments from the selected day (previous in time, that is), and adds some tooltips. You will have to uninstall the previous version (Add/Remove Programs) if you had installed it ...
    - Terminals: Version 3.0 - Release: Changes since version 2.0: choose 100% portable or installed version; removed the connection warning when running the RDP 8 (Windows 8) client; fixed Active Directory search; extended Active Directory search with LDAP filters; fixed single-instance mode when running on a Windows terminal server; merged the usage of tags and groups; added a column-sorting option in tables; no UAC prompts on Windows 7; completely new file persistence data layer; new MS SQL persistence layer (store data in a SQL database)...
    - NuGet: NuGet 2.6: Released June 26, 2013. Release notes: http://docs.nuget.org/docs/release-notes/nuget-2.6
    - Python Tools for Visual Studio: 2.0 Beta: We're pleased to announce the release of Python Tools for Visual Studio 2.0 Beta. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including CPython/IronPython, edit/IntelliSense/debug/profile, cloud, HPC, IPython, and cross-platform debugging support. For a quick overview of the general IDE experience, please watch this video: http://www.youtube.com/watch?v=TuewiStN...
    - Player Framework by Microsoft: Player Framework for Windows 8 and WP8 (v1.3 beta): Preview: new MPEG-DASH adaptive streaming plugin for Windows Azure Media Services. Preview: new Ultraviolet CFF plugin. Preview: new WP7 version with WP8 compatibility (source code only). Source code is now available via CodePlex Git. Misc bug fixes and improvements: WP8 only: added optional fullscreen and mute buttons to the default XAML; JS only: protecting currentTime from returning infinity (some videos would cause currentTime to be infinity, which could cause errors in plugins expectin...
    - AssaultCube Reloaded: 2.5.8: SERVER OWNERS: note that the default maprot has changed once again. Linux has Ubuntu 11.10 32-bit precompiled binaries and Ubuntu 10.10 64-bit precompiled binaries, but you can compile your own as the download also contains the source. If you are using Mac or another operating system, please wait while we continue to try to package for those OSes, or better yet, try to compile it; if that fails, download a virtual machine. The server pack is ready for both Windows and Linux, but you might need to compi...

    New Projects

    - ALM Rangers DevOps Tooling and Guidance: Practical tooling and guidance that will enable teams to realize a faster deployment based on continuous feedback.
    - Core Server 2012 Powershell Script Hyper-v Manager: Free Core Server 2012 PowerShell scripts and batch files that replace the non-existent Hyper-V manager, vmconnect and mstsc.
    - Enhanced Deployment Service (EDS): A web-service-based utility designed to extend the deployment capabilities of administrators with the Microsoft Deployment Toolkit.
    - ExtendedDialogBox: A DialogBox library.
    - Jazdy: This project is here only because we wanted to take advantage of a public git server.
    - Mon Examen: This web interface is meant to administer examinations.
    - neet: summary.
    - Orchard Multi-Choice Voting: A multiple-choice voting Orchard module.
    - Particle Swarm Optimization Solving Quadratic Assignment Problem: Solving QAP using PSO algorithms, with the addition of some modifications.
    - Porjects: 23123123.
    - PPL Power Pack: PPL Power Pack.
    - Property Builder: A Visual Studio tool for speeding up the process of coding class property getters and setters.
    - RedRuler for Redline: I tried some on-screen rulers and none of them helped me measure UI elements quickly based on the Redline, so I created this handy RedRuler tool.
    - Royale Living: Mahindra Royale community portal.
    - Search and booking Hotel or Tours: A research project by TDT students following the MVC 4 model.
    - SystemBuilder.Show: A helper tool, used after you create your project in Visual Studio, to create the respective objects and interface.
    - TalentDesk: New project.
    - Tcmplex: The training center teaches many different kinds of courses, such as English, French, computer hardware and computer software.
    - TFS Reporting Guide: Provides guidance and samples to enable TFS users to generate reports based on WIT data.
    - Umbraco AdaptiveImages: Adaptive Images package for Umbraco.
    - VirtualNet - An IL-code interpreter/emulator written in C++/Assembly: VirtualNet is an interpreter/emulator for running .NET code natively without having to install the .NET Framework.
    - Visual Blocks: A visual IDE.
    - Visual Studio and Cloud Based Mobile Device Testing: Practical guidance enabling the field to remove blockers to adoption and to use and extend Perfecto Mobile cloud device testing within the context of VS.
    - Windows 8 Time Picker for Windows Phone: A Windows Phone implementation of the Time Picker control found in the Windows 8.1 Alarms app.
    - SmallBasic (Chinese edition).

    Read the article

  • Evaluating Solutions to Manage Product Compliance? Don't Wait Much Longer

    - by Kerrie Foy
    Depending on severity, product compliance issues can cause all sorts of problems, from runaway budgets to business closures. But effective policies and safeguards can create a strong foundation for innovation, productivity, market penetration and competitive advantage. If you've been putting off a systematic approach to product compliance, it is time to reconsider that decision, or indecision. Why now?

    No matter the industry, companies face a litany of worldwide and regional regulations that require proof of product compliance and environmental friendliness for market access. For example, Restriction of Hazardous Substances (RoHS) is a regulation that restricts the use of six dangerous materials used in the manufacture of electronic and electrical equipment. RoHS was originally adopted by the European Union in 2003 for implementation in 2006, and it has evolved over time through various regional versions for North America, China, Japan, Korea, Norway and Turkey. In addition, the RoHS directive allowed for material exemptions used in medical devices, but that exemption ends in 2014. Additional regulations worth watching are the Battery Directive, Waste Electrical and Electronic Equipment (WEEE), and Registration, Evaluation, Authorization and Restriction of Chemicals (REACH) directives. Additional evolving regulations are coming from governing bodies like the Food and Drug Administration (FDA) and the International Organization for Standardization (ISO).

    Corporate sustainability initiatives are also gaining urgency and influencing product design. In a survey of 405 corporations in the Global 500 by the Carbon Disclosure Project, co-written by PwC (CDP Global 500 Climate Change Report 2012, entitled Business Resilience in an Uncertain, Resource-Constrained World), 48% of the respondents indicated they saw potential to create new products and business services as a response to climate change. Just 21% reported a dedicated budget for the research. However, the report goes on to explain that those few companies are winning over new customers and driving additional profits by exploiting their abilities to adapt to environmental needs. The article cites Dell as an example: Dell has invested in research to develop new products designed to reduce its customers' emissions by more than 10 million metric tons of CO2e per year. This reduction in emissions should save Dell's customers over $1 billion per year as a result! Over time we expect to see many additional companies prove that eco-design provides marketplace benefits through differentiation and direct customer value.

    How do you meet compliance requirements and also successfully invest in eco-friendly designs? No doubt companies struggle to answer this question. After all, the journey may involve transforming business models, go-to-market strategies, supply networks, quality assurance policies and compliance processes per the rapidly evolving global and regional directives. There may be limited executive focus on the initiative, an inability to quantify noncompliance, or not enough resources to justify the investment. To make things even more difficult, compliance responsibility can be a passionate topic within an organization, making the prospect of change on an enterprise scale problematic and time-consuming. Without a single source of truth for product data and without proper processes in place, ensuring product compliance burgeons into a crushing task that is cost-prohibitive and overwhelming to an organization. With all the overhead, certain markets or demographics become simply inaccessible. Therefore, the risk to consumer goodwill and satisfaction, revenue, business continuity, and market potential is too great not to solve the compliance challenge.

    Companies are beginning to adapt and even thrive in today's highly regulated and transparent environment by implementing systematic approaches to product compliance that are more than functional bandages but revenue-generating engines. Consider partnering with Oracle to help you address your compliance needs. Many of the world's most innovative leaders and pioneers are leveraging Oracle's Agile Product Lifecycle Management (PLM) portfolio of enterprise applications to manage the product value chain, centralize product data, automate processes, and launch more eco-friendly products to market faster.

    In particular, the Agile Product Governance & Compliance (PG&C) solution provides out-of-the-box functionality to integrate actionable regulatory information into the enterprise product record from the ideation to the disposal/recycling phase. Agile PG&C makes it possible to efficiently manage compliance per corporate green initiatives as well as regional and global directives.

    Options are critical, but so is ease of use. Anyone who's grappled with compliance policy knows legal interpretation plays a major role in determining how an organization responds to regulation. Agile PG&C gives you the freedom to configure product compliance per your needs, while maintaining rigorous control over the product record in an easy-to-use interface that facilitates adoption efforts. It allows you to assign regulations as specifications for a part or BOM roll-up. Each specification has a threshold value that alerts you to a non-compliance issue if the threshold value is exceeded. Set however many regulations as specifications you need to make sure a product can be sold in your target countries. Another option is to follow one of our leading consumer electronics customers and define your own "catch-all" specification to ensure compliance in all markets. You can give your suppliers secure access to enter their component data, or integrate a third party's data. With Agile PG&C you are able to design compliance into your products earlier, reducing cost and improving quality downstream when the stakes are higher.

    Agile PG&C is a comprehensive solution that makes product compliance more reliable and efficient. Throughout product lifecycles, use the solution to support full material disclosures, efficiently manage declarations with your suppliers, feed compliance data into a corrective action if a product must be changed, and swiftly satisfy audits by showing all due diligence tracked in one solution.

    Given the compounding regulation and consumer focus on urgent environmental issues, now is the time to act. Implementing an enterprise, systematic approach to product compliance is a competitive investment. From the start, Agile Product Governance & Compliance enables companies to confidently design for compliance and sustainability, reduce the cost of compliance, minimize the risk of business interruption, deliver responsible products, and inspire new innovation. Don't wait any longer! To find out more about Agile Product Governance & Compliance, download the data sheet, contact your sales representative, or call Oracle at 1-800-633-0738.

    Many thanks to Shane Goodwin, Senior Manager, Oracle Agile PLM Product Management, for contributions to this article.

    Read the article

  • Elegance, thy Name is jQuery

    - by SGWellens
    So, I'm browsing through some questions over on the Stack Overflow website and I found a good jQuery question just a few minutes old. Here is a link to it. It was a tough question; I knew that by answering it, I could learn new stuff and reinforce what I already knew: reading is good, doing is better. Maybe I could help someone in the process too.

    I cut and pasted the HTML from the question into my Visual Studio IDE and went back to Stack Overflow to reread the question. Dang, someone had already answered it! And it was a great answer. I never even had a chance to start analyzing the issue. Now I know what a one-legged man feels like in an ass-kicking contest. Nevertheless, since the question and answer were so interesting, I decided to dissect them and learn as much as possible.

    The HTML consisted of some divs separated by h3 headings. Note the elements are laid out sequentially with no programmatic grouping:

    <h3 class="heading">Heading 1</h3>
    <div>Content</div>
    <div>More content</div>
    <div>Even more content</div>
    <h3 class="heading">Heading 2</h3>
    <div>some content</div>
    <div>some more content</div>
    <h3 class="heading">Heading 3</h3>
    <div>other content</div>
    </form></body>

    The requirement was to wrap a div around each h3 heading and the subsequent divs, grouping them into sections. Why? I don't know; I suppose if you screen-scraped some HTML from another site, you might want to reformat it before displaying it on your own. Anyways…

    Here is the marvelously succinct posted answer:

    $('.heading').each(function(){
        $(this).nextUntil('.heading').andSelf().wrapAll('<div class="section">');
    });

    I was familiar with all the parts except for nextUntil and andSelf. But I'll analyze the whole answer for completeness. I'll do this by rewriting the posted answer in a different style and adding a boat-load of comments:

    function Test()
    {
        // $Sections is a jQuery object and it will contain three elements
        var $Sections = $('.heading');

        // use each to iterate over each of the three elements
        $Sections.each(function ()
        {
            // $this is a jQuery object containing the current element
            // being iterated
            var $this = $(this);

            // nextUntil gets the following sibling elements until it reaches
            // an element with the CSS class 'heading';
            // andSelf adds in the source element (this) to the collection
            $this = $this.nextUntil('.heading').andSelf();

            // wrap the elements with a div
            $this.wrapAll('<div class="section" >');
        });
    }

    The code here doesn't look nearly as concise and elegant as the original answer. However, unless you and your staff are jQuery masters, during development it really helps to work through algorithms step by step. You can step through this code in the debugger and examine the jQuery objects to make sure one step is working before proceeding to the next. It's much easier to debug and troubleshoot when each logical coding step is a separate line of code.

    Note: You may think the original code runs much faster than this version. However, the time difference is trivial, not enough to worry about: less than 1 millisecond (tested in IE and FF).

    Note: You may want to jam everything into one line because it results in less traffic being sent to the client. That is true. However, most Internet servers now compress HTML and JavaScript by stripping out comments and white space (go to Bing or Google and view the source). This feature should be enabled on your server: let the server compress your code, you don't need to do it.

    Free Career Advice: Creating maintainable code is Job One - Maximum Priority - The Prime Directive. If you find yourself suddenly transferred to customer support, it may be that the code you are writing is not as readable as it could be and not as readable as it should be. Moving on…

    I created a CSS class to enhance the results:

    .section
    {
        background-color: yellow;
        border: 2px solid black;
        margin: 5px;
    }

    Here is the rendered output before… and after the jQuery code runs. Pretty cool!

    But, while playing with this code, the logic of nextUntil began to bother me: what happens in the last section? What stops elements from being collected, since there are no more elements with the .heading class? The answer is nothing. In this case it stopped collecting elements because it was at the end of the page. But what if there were additional HTML elements? I added an anchor tag and another div to the HTML:

    <h3 class="heading">Heading 1</h3>
    <div>Content</div>
    <div>More content</div>
    <div>Even more content</div>
    <h3 class="heading">Heading 2</h3>
    <div>some content</div>
    <div>some more content</div>
    <h3 class="heading">Heading 3</h3>
    <div>other content</div>
    <a>this is a link</a>
    <div>unrelated div</div>
    </form></body>

    The code as-is will include both the anchor and the unrelated div. This isn't what we want. My first attempt to correct this used the filter parameter of the nextUntil function:

    nextUntil('.heading', 'div')

    This will only collect div elements. But it merely skipped the anchor tag and still collected the unrelated div. The problem is we need a way to tell the nextUntil function when to stop. CSS selectors to the rescue!

    nextUntil('.heading, a')

    This tells nextUntil to stop collecting elements when it gets to an element with the .heading class OR when it gets to an anchor tag. In this case it solved the problem. FYI: the comma operator in a CSS selector allows multiple criteria. Bingo!

    One final note: we could have broken the code down even more. We could have replaced the andSelf function here:

    $this = $this.nextUntil('.heading, a').andSelf();

    With this:

    // get all the following siblings and then add the current item
    $this = $this.nextUntil('.heading, a');
    $this = $this.add(this);

    But in this case, the andSelf function reads real nice. In my opinion. Here's a link to a jsFiddle if you want to play with it.

    I hope someone finds this useful. Steve Wellens
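    For reference, the pieces above combine into a final version of the original one-liner with the improved stop selector:

    $('.heading').each(function () {
        // stop at the next heading OR the first anchor, then include the heading itself
        $(this).nextUntil('.heading, a').andSelf().wrapAll('<div class="section">');
    });

    (One caveat for readers using newer jQuery: andSelf was renamed addBack in jQuery 1.8, so current code would call addBack() instead.)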

    Read the article

  • Developing a Cost Model for Cloud Applications

    - by BuckWoody
    Note - please pay attention to the date of this post. As much as I attempt to make the information below accurate, the nature of distributed computing means that components, units and pricing will change over time. The definitive costs for Microsoft Windows Azure and SQL Azure are located here, and are more accurate than anything you will see in this post: http://www.microsoft.com/windowsazure/offers/

    When writing software that is run on a Platform-as-a-Service (PaaS) offering like Windows Azure / SQL Azure, one of the questions you must answer is how much the system will cost. I will not discuss the comparisons between on-premise costs (which are nigh impossible to calculate accurately) and cloud costs, but instead focus on creating a general model for estimating costs for a given application.

    You should be aware that there are (at this writing) two billing mechanisms for Windows and SQL Azure: "pay-as-you-go" or consumption, and "subscription" or commitment. Conceptually, you can consider the former a pay-as-you-go cell phone plan, where you pay by the unit used (at a slightly higher rate), and the latter a standard cell phone plan, where you commit to a contract and thus pay lower rates. In this post I'll stick with the pay-as-you-go mechanism for simplicity, which should be the maximum cost you would pay. From there you may be able to get a lower cost with the other mechanism. In any case, the model you create should hold.

    Developing a good cost model is essential. As a developer or architect, you'll most certainly be asked how much something will cost, and you need a reliable way to estimate that. Businesses and organizations have been used to paying for servers, software licenses, and other infrastructure as an up-front cost, and power, the people who run the systems, and so on as an ongoing (and sometimes not factored) cost. When presented with a new paradigm like distributed computing, they may not understand the true cost/value proposition, and that's where the architect and developer can guide the conversation to a choice based on the features of the application versus the true costs.

    The two big buckets of use-types for these applications are customer-based and steady-state. In the customer-based use type, each successful use of the program results in a sale or income for your organization. Perhaps you've written an application that provides the spot price of foo, and your customer pays for the use of that application. In that case, once you've estimated your cost for a successful traversal of the application, you can build that into the price you charge the user. It's a standard restaurant model, where the price of the meal is determined by the cost of making it, plus any profit you can make.

    In the second use-type, the application will be used by a more-or-less constant number of processes or users, and no direct revenue is attached to the system. A typical example is a customer-tracking system used by the employees within your company. In this case, the cost model is often created "in reverse" - meaning that you pilot the application, monitor the use (and costs), and that cost is held steady. This is where the comparison with an on-premise system becomes necessary, even though it is more difficult to estimate those on-premise true costs. For instance, do you know exactly how much the air conditioning costs, or what it costs to have a team of system administrators? This may sound trivial, but that, along with the insurance for the building, the wiring, and every other part of the system, is in fact a cost to the business.

    There are three primary methods that I've been successful with in estimating the cost. None are perfect; all are demand-driven. The general process is to lay out a matrix of:

    - components
    - units
    - cost per unit

    and then multiply that by the usage of the system, based on which components you use in the program. That sounds a bit simplistic, but using those metrics in a calculation becomes more detailed. In all of the methods that follow, you need to know your application. The components for a PaaS include computing instances, storage, transactions, bandwidth and, in the case of SQL Azure, database size. In most cases, architects start with the first model and progress through the other methods to gain accuracy.

    Simple Estimation

    The simplest way to calculate costs is to architect the application (even in UML or on paper, no coding involved) and then estimate which of the components you'll use and how much of each will be used. Microsoft provides two tools to do this. One is a simple slider application located here: http://www.microsoft.com/windowsazure/pricing-calculator/ The other is a tool you download to create a "Return on Investment" (ROI) spreadsheet, which has the advantage of leading you through various questions to estimate what you plan to use, located here: https://roianalyst.alinean.com/msft/AutoLogin.do?d=176318219048082115 You can also just create a spreadsheet yourself with a structure like this:

    Program Element | Azure Component | Unit of Measure | Cost Per Unit | Estimated Use of Component | Total Cost Per Component | Cumulative Cost

    Of course, the consideration with this model is that it is difficult to predict a system that is not running or hasn't even been developed. Which brings us to the next model type.

    Measure and Project

    A more accurate model is to actually write the code for the application using the Software Development Kit (SDK), which can run entirely disconnected from Azure. The code should be instrumented to estimate the use of the application components, logging to a local file on the development system. A series of unit and integration tests should be run, which will create load on the test system. You can use standard development concepts to track this usage, and even use Windows Performance Monitor counters. The best place to start with this method is the Windows Azure Diagnostics subsystem in your code, which you can read more about here: http://blogs.msdn.com/b/sumitm/archive/2009/11/18/introducing-windows-azure-diagnostics.aspx This set of APIs greatly simplifies tracking the application, and in fact you can use this information for more than just a cost model. After you have the tracking logs, you can plug the numbers into any of the tools above, which should give a representative cost or, in some cases, a unit cost.

    The consideration with this model is that the SDK fabric is not a one-to-one comparison with performance on the actual Windows Azure fabric. Those differences are usually small, but they do need to be considered. Also, you may not be able to accurately predict the load on the system, which might lead to an architectural change, which changes the model. This leads us to the next, most accurate method for a cost model.

    Sample and Estimate

    Using standard statistical and other predictive math, once the application is deployed you will get a bill each month from Microsoft for your Azure usage. The bill is quite detailed, and you can export the data from it to do analysis and, using methods like regression and so on, project out into the future what the costs will be. I normally advise that the architect also extrapolate a unit cost from those metrics. This is the information that should be reported back to the executives who pay the bills: the past cost, future projected costs, and unit cost "per click" or "per transaction", as your case warrants. The challenge here is in the model itself: statistical methods are not foolproof, and a large sample (in this case I recommend the entire population, not a smaller sample) is key.

    References and Tools

    Articles:
    - http://blogs.msdn.com/b/patrick_butler_monterde/archive/2010/02/10/windows-azure-billing-overview.aspx
    - http://technet.microsoft.com/en-us/magazine/gg213848.aspx
    - http://blog.codingoutloud.com/2011/06/05/azure-faq-how-much-will-it-cost-me-to-run-my-application-on-windows-azure/
    - http://blogs.msdn.com/b/johnalioto/archive/2010/08/25/10054193.aspx
    - http://geekswithblogs.net/iupdateable/archive/2010/02/08/qampa-how-can-i-calculate-the-tco-and-roi-when.aspx

    Other Tools:
    - http://cloud-assessment.com/
    - http://communities.quest.com/community/cloud_tools
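    As a concrete (and purely illustrative) sketch of the matrix calculation described above, the following C# fragment multiplies estimated usage by unit cost per component and keeps a cumulative total; the component names and prices here are placeholders I made up, not actual Azure rates, so always take real figures from the official pricing page:

    using System;
    using System.Collections.Generic;

    // One row of the cost matrix: component, unit of measure,
    // cost per unit, and estimated monthly usage in that unit.
    class CostLine
    {
        public string Component;
        public string Unit;
        public decimal CostPerUnit;
        public decimal EstimatedUnits;
        public decimal Total { get { return CostPerUnit * EstimatedUnits; } }
    }

    class CostModel
    {
        static void Main()
        {
            // Placeholder figures for illustration only.
            var lines = new List<CostLine>
            {
                new CostLine { Component = "Compute",      Unit = "instance-hour", CostPerUnit = 0.12m, EstimatedUnits = 1460 },
                new CostLine { Component = "Storage",      Unit = "GB-month",      CostPerUnit = 0.15m, EstimatedUnits = 50 },
                new CostLine { Component = "Transactions", Unit = "10k batch",     CostPerUnit = 0.01m, EstimatedUnits = 400 },
                new CostLine { Component = "Bandwidth",    Unit = "GB out",        CostPerUnit = 0.15m, EstimatedUnits = 120 },
            };

            decimal cumulative = 0m;
            foreach (var line in lines)
            {
                cumulative += line.Total;
                Console.WriteLine("{0,-12} {1,10:C}  (running total {2,10:C})",
                                  line.Component, line.Total, cumulative);
            }
        }
    }

    Dividing the cumulative total by the expected number of transactions then yields the "per click" unit cost the article recommends reporting.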

    Read the article

  • Developing Mobile Applications: Web, Native, or Hybrid?

    - by Michelle Kimihira
    Authors: Joe Huang, Senior Principal Product Manager, Oracle Mobile Application Development Framework, and Carlos Chang, Senior Principal Product Director.

    The proliferation of mobile devices and platforms represents a game-changing technology shift on a number of levels. Companies must decide not only the best strategic use of mobile platforms, but also how to most efficiently implement them. Inevitably, this conversation devolves to the developers, who face the task of developing and supporting mobile applications - not a simple task in light of the number of devices and platforms. Essentially, developers can choose from the following three application approaches, each with its own set of pros and cons.

    - Native applications: This refers to apps built for and installed on a specific platform, such as iOS or Android, using a platform-specific software development kit (SDK). For example, apps for Apple's iPhone and iPad are designed to run specifically on iOS and are written in Xcode/Objective-C. Android has its own variation of Java, Windows uses C#, and so on. Native apps written for one platform cannot be deployed on another. Native apps offer fast performance and access to native device services, but require additional resources to develop and maintain each platform, which can be expensive and time-consuming.
    - Mobile Web applications: Unlike native apps, mobile Web apps are not installed on the device; rather, they are accessed via a Web browser. These are server-side applications that render HTML, typically adjusting the design depending on the type of device making the request. There are no programming-language constraints for writing server-side apps - they can be written in Java, C, PHP, etc.; it doesn't matter. Instead, the server detects what type of mobile browser is pinging the server and adjusts accordingly. For example, it can deliver fully JavaScript- and CSS-enabled content to smartphone browsers, while downgrading gracefully to basic HTML for feature-phone browsers. Mobile Web apps work across platforms, but are limited to what you can do through a browser and require Internet connectivity. For certain types of applications, these constraints may not be an issue. Oracle supports mobile Web applications via ADF Faces (for tablets) and ADF Mobile browser (Trinidad) for smartphones and feature phones.
    - Hybrid applications: As the name implies, hybrid apps combine technologies from native and mobile Web apps to gain the benefits of each. For example, these apps are installed on a device, like their pure native app counterparts, while the user interface (UI) is based on HTML5. This UI runs locally within the native container, which usually leverages the device's browser engine. The advantage of using HTML5 is a consistent, cross-platform UI that works well on most devices. Combining this with the native container, which is installed on-device, provides mobile users with access to local device services, such as camera, GPS, and local device storage. Native apps may offer greater flexibility in integrating with device-native services. However, since hybrid applications already provide the device integrations that typical enterprise applications need, this is typically less of an issue. The new Oracle ADF Mobile release is an HTML5-and-Java hybrid framework that targets mobile app development to iOS and Android from one code base.

    So, which is the best approach? The short answer: the best choice depends on the type of application you are developing. For instance, animation-intensive apps such as games would favor native apps, while hybrid applications may be better suited for enterprise mobile apps because they provide multi-platform support. Just for starters, the following issues must be considered when choosing a development path.

    - Application complexity: How complex is the application? A quick app that accesses a database or Web service for some data to display? You can keep it simple, and a mobile Web app may suffice. However, for mobile/field-worker types of applications that support mission-critical functionality, hybrid or native applications are typically needed.
    - Richness of user interactivity: What type of user experience is required for the application? A mobile browser-based app optimized for a mobile UI may suffice for quick-lookup or productivity applications. However, a hybrid/native application would typically be required to deliver the highly interactive user experiences needed for field-worker applications - for example, interactive BI charts/graphs, maps, voice/e-mail integration, etc. In the most extreme cases, like gaming applications, native applications may be necessary to deliver the highly animated and graphically intensive user experience.
    - Performance: What type of performance is required by the application functionality? For instance, for real-time lookup of data over the network, mobile app performance depends on network latency and server infrastructure capabilities. If consistent performance is required, data would typically need to be cached, which is supported on hybrid or native applications only.
    - Connectivity and availability: What sort of connectivity will your application require? Does the app require Web access all the time in order to always retrieve the latest data from the server? Or do the requirements dictate offline support? While native and hybrid apps can be built to operate offline, mobile Web apps require Web connectivity.
    - Multi-platform requirements: The terms "consumerization of IT" and BYOD (bring your own device) effectively mean that the line between consumer and enterprise devices has become blurred. Employees are bringing their personal mobile devices to work and often expect them to work on the corporate network and access back-office applications. Even if companies restrict access to the big dogs (iPad, iPhone, Android phones and tablets, possibly Windows Phone and tablets), trying to support each platform natively will require increasing resources and domain expertise with each new language/platform - and let's not forget the maintenance costs involved in upgrading to new versions of each platform. Where multi-platform support is needed, mobile Web or hybrid apps probably have the advantage. Going native and trying to support multiple operating systems may be cost-prohibitive with existing resources and developer skills.
    - Device-services access: If your app needs to access local device services, such as the camera, the contacts app, the accelerometer, etc., then your choices are limited to native or hybrid applications.
    - Fragmentation: Apple controls Apple iOS, and the only concern is which version of iOS is running on any given device. Not so Android, which is open source. There are many, many versions and variants of Android running on different devices, which can be a nightmare for app developers trying to support different devices running different flavors of Android. (Is it an Amazon Kindle Fire? A Samsung Galaxy? A Barnes & Noble Nook?) This is a nightmare scenario for native apps; on the other hand, a mobile Web or hybrid app, when properly designed, can shield you from these complexities because it is based on common frameworks.
    - Resources: How many developers can you dedicate to building and supporting mobile application development? What are their existing skill sets? If you're considering native application development due to the complexity of the application under development, factor in the costs of becoming proficient in each platform's OS and programming language. Add another platform, and that's another language, another SDK. On the other side of the equation, mobile Web or hybrid applications are simpler to make and readily support more platforms, but there may be performance trade-offs.

    Conclusion: This only scratches the surface. However, I hope to have suggested some food for thought in choosing your mobile development strategy. Do your due diligence: search the Web, read up on mobile, talk to peers, attend events. The development team at Oracle is working hard on mobile technologies to help customers extend enterprise applications to mobile faster and more effectively. To learn more about what Oracle has to offer, check out the Oracle ADF Mobile (hybrid) and ADF Faces/ADF Mobile browser (mobile Web) solutions from Oracle.

    Additional Information
    - Blog: ADF Blog
    - Product information on OTN: ADF Mobile
    - Product information on Oracle.com: Oracle Fusion Middleware
    - Follow us on Twitter and Facebook
    - Subscribe to our regular Fusion Middleware Newsletter

    Read the article

  • The Great Divorce

    - by BlackRabbitCoder
    I have a confession to make: I've been in an abusive relationship for more than 17 years now. Yes, I am not ashamed to admit it, but I'm finally doing something about it.

    I met her in college; she was new and sexy and amazingly fast -- and I'd never met anything like her before. Her style and her power captivated me and I couldn't wait to learn more about her. I took a chance on her, and though I learned a lot from her -- and will always be grateful for my time with her -- I think it's time to move on.

    Her name was C++, and she so outshone my previous love, C, that any thoughts of going back evaporated in the heat of this new romance. She promised me she'd be gentle and not hurt me the way C did. She promised me she'd clean up after herself better than C did. She promised me she'd be less enigmatic and easier to keep happy than C was. But I was deceived. Oh sure, as far as truth goes, it wasn't a complete lie. To some extent she was more fun, more powerful, safer, and easier to maintain. But it just wasn't good enough -- or at least it's not good enough now.

    I loved C++; some part of me still does. It's my first love among programming languages, and I recognize its raw power, its blazing speed, and its improvements over its predecessor. But with today's hardware, at speeds we could only dream of twenty years ago, that need for speed -- at the cost of all else -- has died, and that has left my feelings for C++ moribund.

    If I ever need to write an operating system or a device driver, then I might need that speed. But 99% of the time I don't. I'm a business-type programmer, and chances are 90% of you are too, and even the ones who need speed at all costs may be surprised by how much you sacrifice for it.

    That's not to say that I don't want my software to perform, and it's not to say that in the business world we don't care about speed or that our job is somehow less difficult or technical. There are many times we write programs to handle millions of real-time updates, thousands of financial transactions, or trading algorithms where every second counts. But if I choose to write my code in C++ purely for speed, chances are I'll never notice the speed increase -- and equally true, chances are it will be far more prone to crash and far less easy to maintain. Nearly without fail, it's the macro-optimizations you need, not the micro-optimizations. If I choose to write an O(n²) algorithm when I could have used an O(n) algorithm -- that can kill me. If I choose to go to the database to load a piece of unchanging data every time instead of caching it on first load -- that too can kill me. And if I cross the network multiple times for pieces of data instead of getting it all at once -- yes, that can also kill me. But choosing an overly powerful and dangerous mid-level language to squeeze out every last drop of performance will realistically not make stock orders process any faster, and more likely than not will open the system up to more risk of crashes and resource leaks.

    And that's when my love for C++ began to die. When I noticed that I didn't need that speed anymore. That that speed was really kind of a lie. Sure, I can be super efficient and pack bits in a byte instead of using separate boolean values. Sure, I can use an unsigned char instead of an int. But in the grand scheme of things it doesn't matter as much as you think it does. The key is maintainability, and that's where C++ failed me.

    I like to tell the other developers I work with that there are two levels of correctness in coding:

    1. Is it immediately correct?
    2. Will it stay correct?

    That is, you can hack together any piece of code and make it correct to satisfy the task at hand, but if a new developer can't come in tomorrow and make a fairly significant change to it without jeopardizing that correctness, it won't stay correct.

    Some people laugh at me when I say I now prefer maintainability over speed. But that is exactly the point. If you focus solely on speed, you tend to produce code that is much harder to maintain over the long haul, and that's a load of technical debt most shops can't afford to carry; they end up completely scrapping code before its time. When good code is written well for maintainability, though, it can be correct both now and in the future.

    And you know the best part? My new love is nearly as fast as C++, and in some cases even faster -- and better than that, I know C# will treat me right. Her creators have poured hundreds of thousands of hours into making her the sexy beast she is today. They made her easy to understand and not an enigmatic mess. They made her consistent, not moody and amorphous. And they made her perform as fast as I care to go by optimizing her both at compile time and at run time.

    Her code is so elegant and easy on the eyes that I'm not worried where she will run to or what she'll pull behind my back. She is powerful enough to handle all my tasks, fast enough to execute them with blazing speed, maintainable enough that I can rely on even fairly new peers to modify my work, and rich enough to allow me to satisfy any need. C# doesn't ask me to clean up her messes! She cleans up after herself, and she tries to make my life easier by taking on most of those optimization tasks C++ asked me to take upon myself.

    Now, there are many of you who would say that I am the cause of my own grief, that it was my fault C++ didn't behave because I didn't pay enough attention to her. That I alone caused the pain she inflicted on me. And to some extent, you have a point. But she was so high-maintenance, requiring me to know every twist and turn of her vast and unrestrained power, that any wrong term or bout of forgetfulness was met with painful reminders that she wasn't going to watch my back when I made a mistake. But C#, she loves me when I'm good, and she loves me when I'm bad, and together we make beautiful code that is both fast and safe.

    So that's why I'm leaving C++ behind. She says she's changing for me, but I have no interest in what C++0x may bring. Oh, I'll still keep in touch, and maybe I'll see her now and again when she brings her problems to my door and asks for some attention -- for I always have a soft spot for her, you see. But she's out of my house now. I have three kids and a dog and a cat, and all require me to clean up after them; why should I have to clean up after my programming language as well?
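    As an editorial aside: the "cache it on first load" macro-optimization the author alludes to is nearly free in C#. A minimal sketch (my illustration, not the author's code; LoadCountriesFromDatabase is a hypothetical stand-in for a real query):

    using System;
    using System.Collections.Generic;

    static class ReferenceData
    {
        // Lazy<T> runs the factory exactly once, on first access, in a
        // thread-safe way; every later access returns the cached list
        // instead of hitting the database again.
        private static readonly Lazy<IReadOnlyList<string>> _countries =
            new Lazy<IReadOnlyList<string>>(LoadCountriesFromDatabase);

        public static IReadOnlyList<string> Countries
        {
            get { return _countries.Value; }
        }

        // Hypothetical stand-in for the real database call.
        private static IReadOnlyList<string> LoadCountriesFromDatabase()
        {
            Console.WriteLine("Loading from database (happens only once)...");
            return new[] { "Australia", "Canada", "Japan" };
        }
    }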

    Read the article

  • Pixel Shader Giving Black output

    - by Yashwinder
    I am coding in C# using Windows Forms and the SlimDX API to show the effect of a pixel shader. When I set the pixel shader I get a black output screen, but if I do not use the pixel shader then my image renders on the screen. I have the following C# code:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Windows.Forms;
    using System.Runtime.InteropServices;
    using SlimDX.Direct3D9;
    using SlimDX;
    using SlimDX.Windows;
    using System.Drawing;
    using System.Threading;

    namespace WindowsFormsApplication1
    {
        // Vertex structure.
        [StructLayout(LayoutKind.Sequential)]
        struct Vertex
        {
            public Vector3 Position;
            public float Tu;
            public float Tv;

            public static int SizeBytes
            {
                get { return Marshal.SizeOf(typeof(Vertex)); }
            }

            public static VertexFormat Format
            {
                get { return VertexFormat.Position | VertexFormat.Texture1; }
            }
        }

        static class Program
        {
            public static Device D3DDevice;       // Direct3D device.
            public static VertexBuffer Vertices;  // Vertex buffer object used to hold vertices.
            public static Texture Image;          // Texture object to hold the image loaded from a file.
            public static int time;               // Used for rotation calculations.
            public static float angle;            // Angle of rotation.
            public static Form1 Window = new Form1();
            public static string filepath;
            static VertexShader vertexShader = null;
            static ConstantTable constantTable = null;
            static ImageInformation info;

            [STAThread]
            static void Main()
            {
                filepath = "C:\\Users\\Public\\Pictures\\Sample Pictures\\Garden.jpg";
                info = new ImageInformation();
                info = ImageInformation.FromFile(filepath);

                PresentParameters presentParams = new PresentParameters();

                // Below are the required bare minimum, needed to initialize the D3D device.
                presentParams.BackBufferHeight = info.Height;      // BackBufferHeight, set to the Window's height.
                presentParams.BackBufferWidth = info.Width + 200;  // BackBufferWidth, set to the Window's width.
                presentParams.Windowed = true;
                presentParams.DeviceWindowHandle = Window.panel2.Handle; // DeviceWindowHandle, set to the Window's handle.

                // Create the device.
                D3DDevice = new Device(new Direct3D(), 0, DeviceType.Hardware, Window.Handle,
                                       CreateFlags.HardwareVertexProcessing, presentParams);

                // Create the vertex buffer and fill with the triangle vertices. (Non-indexed)
                // Remember 3 vertices for a triangle, 2 tris per quad = 6.
                Vertices = new VertexBuffer(D3DDevice, 6 * Vertex.SizeBytes, Usage.WriteOnly,
                                            VertexFormat.None, Pool.Managed);
                DataStream stream = Vertices.Lock(0, 0, LockFlags.None);
                stream.WriteRange(BuildVertexData());
                Vertices.Unlock();

                // Create the texture.
                Image = Texture.FromFile(D3DDevice, filepath);

                // Turn off culling, so we see the front and back of the triangle.
                D3DDevice.SetRenderState(RenderState.CullMode, Cull.None);

                // Turn off lighting.
                D3DDevice.SetRenderState(RenderState.Lighting, false);

                ShaderBytecode sbcv = ShaderBytecode.CompileFromFile(
                    "C:\\Users\\yashwinder singh\\Desktop\\vertexShader.vs",
                    "vs_main", "vs_1_1", ShaderFlags.None);
                constantTable = sbcv.ConstantTable;
                vertexShader = new VertexShader(D3DDevice, sbcv);

                ShaderBytecode sbc = ShaderBytecode.CompileFromFile(
                    "C:\\Users\\yashwinder singh\\Desktop\\pixelShader.txt",
                    "ps_main", "ps_3_0", ShaderFlags.None);
                PixelShader ps = new PixelShader(D3DDevice, sbc);

                VertexDeclaration vertexDecl = new VertexDeclaration(D3DDevice, new[]
                {
                    new VertexElement(0, 0, DeclarationType.Float3, DeclarationMethod.Default,
                                      DeclarationUsage.PositionTransformed, 0),
                    new VertexElement(0, 12, DeclarationType.Float2, DeclarationMethod.Default,
                                      DeclarationUsage.TextureCoordinate, 0),
                    VertexElement.VertexDeclarationEnd
                });

                Application.EnableVisualStyles();
                MessagePump.Run(Window, () =>
                {
                    // Clear the backbuffer to a black color.
                    D3DDevice.Clear(ClearFlags.Target | ClearFlags.ZBuffer, Color.Black, 1.0f, 0);

                    // Begin the scene.
                    D3DDevice.BeginScene();

                    // Setup the world, view and projection matrices.
                    //D3DDevice.VertexShader = vertexShader;
                    //D3DDevice.PixelShader = ps;

                    // Render the vertex buffer.
                    D3DDevice.SetStreamSource(0, Vertices, 0, Vertex.SizeBytes);
                    D3DDevice.VertexFormat = Vertex.Format;

                    // Setup our texture. Using Textures introduces the texture stage states,
                    // which govern how Textures get blended together (in the case of multiple
                    // Textures) and lighting information.
                    D3DDevice.SetTexture(0, Image);

                    // Now drawing 2 triangles, for a quad.
                    D3DDevice.DrawPrimitives(PrimitiveType.TriangleList, 0, 2);

                    // End the scene.
                    D3DDevice.EndScene();

                    // Present the backbuffer contents to the screen.
                    D3DDevice.Present();
                });

                if (Image != null) Image.Dispose();
                if (Vertices != null) Vertices.Dispose();
                if (D3DDevice != null) D3DDevice.Dispose();
            }

            private static Vertex[] BuildVertexData()
            {
                Vertex[] vertexData = new Vertex[6];
                vertexData[0].Position = new Vector3(-1.0f,  1.0f, 0.0f); vertexData[0].Tu = 0.0f; vertexData[0].Tv = 0.0f;
                vertexData[1].Position = new Vector3(-1.0f, -1.0f, 0.0f); vertexData[1].Tu = 0.0f; vertexData[1].Tv = 1.0f;
                vertexData[2].Position = new Vector3( 1.0f,  1.0f, 0.0f); vertexData[2].Tu = 1.0f; vertexData[2].Tv = 0.0f;
                vertexData[3].Position = new Vector3(-1.0f, -1.0f, 0.0f); vertexData[3].Tu = 0.0f; vertexData[3].Tv = 1.0f;
                vertexData[4].Position = new Vector3( 1.0f, -1.0f, 0.0f); vertexData[4].Tu = 1.0f; vertexData[4].Tv = 1.0f;
                vertexData[5].Position = new Vector3( 1.0f,  1.0f, 0.0f); vertexData[5].Tu = 1.0f; vertexData[5].Tv = 0.0f;
                return vertexData;
            }
        }
    }

    And my pixel shader and vertex shader code are as follows:

    // Pixel shader input structure
    struct PS_INPUT
    {
        float4 Position : POSITION;
        float2 Texture  : TEXCOORD0;
    };

    // Pixel shader output structure
    struct PS_OUTPUT
    {
        float4 Color : COLOR0;
    };

    // Global variables
    sampler2D Tex0;

    // Name: Simple Pixel Shader
    // Type: Pixel shader
    // Desc: Fetch texture and blend with constant color
    PS_OUTPUT ps_main( in PS_INPUT In )
    {
        PS_OUTPUT Out;                             // create an output pixel
        Out.Color = tex2D(Tex0, In.Texture);       // do a texture lookup
        Out.Color *= float4(0.9f, 0.8f, 0.0f, 1);  // do a simple effect
        return Out;                                // return output pixel
    }

    // Vertex shader input structure
    struct VS_INPUT
    {
        float4 Position : POSITION;
        float2 Texture  : TEXCOORD0;
    };

    // Vertex shader output structure
    struct VS_OUTPUT
    {
        float4 Position : POSITION;
        float2 Texture  : TEXCOORD0;
    };

    // Global variables
    float4x4 WorldViewProj;

    // Name: Simple Vertex Shader
    // Type: Vertex shader
    // Desc: Vertex transformation and texture coord pass-through
    VS_OUTPUT vs_main( in VS_INPUT In )
    {
        VS_OUTPUT Out;                                   // create an output vertex
        Out.Position = mul(In.Position, WorldViewProj);  // apply vertex transformation
        Out.Texture = In.Texture;                        // copy original texcoords
        return Out;                                      // return output vertex
    }
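    One detail worth checking (an observation on the code above, not a confirmed fix): in the render loop the shader assignments are commented out, and the WorldViewProj constant is never uploaded, so with the vertex shader enabled every vertex would be transformed by an all-zero matrix, leaving only the black clear color. A minimal sketch of wiring the shaders up inside the loop, before drawing, might look like this (Matrix.Identity stands in for a real world-view-projection matrix):

    // sketch only: enable the compiled shaders and upload the matrix constant
    D3DDevice.VertexDeclaration = vertexDecl;   // use the declaration rather than VertexFormat
    D3DDevice.VertexShader = vertexShader;
    D3DDevice.PixelShader = ps;
    constantTable.SetValue(D3DDevice, "WorldViewProj", Matrix.Identity);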

    Read the article
