Search Results

Search found 9501 results on 381 pages for 'dynamic invoke'.


  • Forcing an External Activation with Service Broker

    - by Davide Mauri
    In these last days I’ve been working quite a lot with Service Broker, a technology I’m really happy to work with, since it can give a lot of satisfaction. The scale-out solutions one can easily build are simply astonishing. I’m helping a company to build a very scalable and – yet almost inexpensive – invoicing system that has to be able to scale out using commodity hardware. To offload the work from the main server to satellite “compute nodes” (yes, I’ve borrowed this term from PDW) we’re using Service Broker and the External Activator application available in the SQL Server Feature Pack. For those who are not used to working with SSB, External Activation is a feature that allows you to intercept the arrival of a message in a queue right from your application code. http://msdn.microsoft.com/en-us/library/ms171617.aspx (Look for “Event-Based Activation”) To make life even easier, Microsoft released the External Activator application, which saves you from writing even this code. http://blogs.msdn.com/b/sql_service_broker/archive/tags/external+activator/ The External Activator application can be configured to execute your own application so that each time a message – an invoice in my case – arrives in the target queue, the configured application is executed and the invoice is calculated. A very nice feature of the External Activator is that it can automatically execute as many instances of the configured application as needed, in order to process as many messages as your system can handle. This helps a lot in creating a scale-out solution, leaving to the developer only a fraction of the problems that usually come with asynchronous programming. Developers are also shielded from Service Broker itself, since everything can be encapsulated in Stored Procedures, so that – for them – developing such a scale-out asynchronous solution is not much more complex than executing a bunch of Stored Procedures. Now, if everything works correctly, you don’t have to bother about anything else. You put messages in the queue and your application, invoked by the External Activator, processes them. But what happens if, for some reason, your application fails to process the messages – for example, if it crashes? The messages are safe in the queue, so you just need to process them again. But your application is invoked by the External Activator, so now the question is: how do you wake that app up? Service Broker will engage the activation process only if certain conditions are met: http://msdn.microsoft.com/en-us/library/ms171601.aspx But how can we invoke the activation process manually, without having to wait for another message to arrive (the arrival of a new message is one of the conditions that can fire the activation process)?
    The “trick” is to do manually what the activation process does: send a system message to the queue in charge of handling External Activation messages:

    declare @conversationHandle uniqueidentifier;
    declare @n xml = N'
    <EVENT_INSTANCE>
      <EventType>QUEUE_ACTIVATION</EventType>
      <PostTime>' + CONVERT(CHAR(24), GETDATE(), 126) + '</PostTime>
      <SPID>' + CAST(@@SPID AS VARCHAR(9)) + '</SPID>
      <ServerName>[your_server_name]</ServerName>
      <LoginName>[your_login_name]</LoginName>
      <UserName>[your_user_name]</UserName>
      <DatabaseName>[your_database_name]</DatabaseName>
      <SchemaName>[your_queue_schema_name]</SchemaName>
      <ObjectName>[your_queue_name]</ObjectName>
      <ObjectType>QUEUE</ObjectType>
    </EVENT_INSTANCE>';

    begin dialog conversation @conversationHandle
        from service [<your_initiator_service_name>]
        to service '<your_event_notification_service>'
        on contract [http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]
        with encryption = off, lifetime = 6000;

    send on conversation @conversationHandle
        message type [http://schemas.microsoft.com/SQL/Notifications/EventNotification] (@n);

    end conversation @conversationHandle;

    That’s it! Put the code in a Stored Procedure and you can add to your application a button that says “Force Queue Processing” (or something similar) to start the activation process whenever you need it (which should not happen too frequently, but it may happen). PS: I know that the “fire-and-forget” technique (ending the conversation without waiting for an answer) is not a best practice, but in this case I don’t see how it can hurt, so I decided to stay very close to the KISS principle.
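    For context, the program that the External Activator launches is typically just a small console application that drains the queue and exits. The following C# sketch (not from the original post; the queue name, connection string, and ProcessInvoice helper are hypothetical) shows the general shape of such a program:

    using System;
    using System.Data.SqlClient;

    class InvoiceProcessor
    {
        static void Main()
        {
            // Hypothetical connection string and queue name.
            const string connectionString =
                "Server=.;Database=Invoicing;Integrated Security=true";

            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();

                // Drain the queue, then exit; the External Activator starts
                // the program again when new messages arrive. A production
                // version would wrap the RECEIVE and the processing in a
                // transaction so a crash does not lose the message.
                while (true)
                {
                    using (var command = new SqlCommand(
                        @"WAITFOR (RECEIVE TOP (1)
                                 conversation_handle, message_type_name, message_body
                             FROM dbo.InvoiceQueue), TIMEOUT 5000;", connection))
                    using (SqlDataReader reader = command.ExecuteReader())
                    {
                        if (!reader.Read())
                            return; // queue empty: exit until re-activated

                        string messageType = reader.GetString(1);
                        if (messageType == "http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog")
                            continue; // skip dialog control messages

                        ProcessInvoice((byte[])reader[2]);
                    }
                }
            }
        }

        // Hypothetical: the application-specific invoice calculation.
        static void ProcessInvoice(byte[] messageBody)
        {
        }
    }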


  • Oracle Products Reflect Key Trends Shaping Enterprise 2.0

    - by kellsey.ruppel(at)oracle.com
    Following up on his predictions for 2011, we asked Enterprise 2.0 veteran Andy MacMillan to map out the ways Oracle solutions are at the forefront of industry trends--and how Oracle customers can benefit in the coming year.

    1. Increase organizational awareness | Oracle WebCenter Suite
    Oracle WebCenter Suite provides a unique set of capabilities to drive organizational awareness. In particular, the expansive activity graph connects users directly to key enterprise applications, activities, and interests. In this way, applicable and critical business information is automatically and immediately visible--in the context of key tasks--via real-time dashboards and comprehensive reporting. Oracle WebCenter Suite also integrates key E2.0 services, such as blogs, wikis, and RSS feeds, into critical business processes, including back-office systems of record such as ERP and CRM systems.

    2. Drive online customer engagement | Oracle Real-Time Decisions
    With more and more business being conducted on the Web, driving increased online customer engagement becomes critical to success. This effort is usually spearheaded by an increasingly important executive role, the Head of Online, who usually reports directly to the CMO. To help manage the Web experience online, Oracle solutions are driving a new kind of intelligent social commerce by combining Oracle Universal Content Management, Oracle WebCenter Services, and Oracle Real-Time Decisions with leading e-commerce and product recommendation capabilities. Oracle Real-Time Decisions provides multichannel recommendations for content, products, and services--including seamless integration across Web, mobile, and social channels. The result: happier customers, increased customer acquisition and retention, and improvement in critical success metrics such as shopping cart abandonment.

    3. Easily build composite applications | Oracle Application Development Framework
    Thanks to the shared user experience strategy across Oracle Fusion Middleware, Oracle Fusion Applications, and many other Oracle Applications, customers can easily create real, customer-specific composite applications using Oracle WebCenter Suite and Oracle Application Development Framework. Oracle Application Development Framework provides modular user interface components for building rich, social composite applications. In addition, a broad set of components spanning BPM, SOA, ECM, and beyond can be quickly and easily incorporated into composite applications.

    4. Integrate records management into a global content platform | Oracle Enterprise Content Management 11g
    Oracle Enterprise Content Management 11g provides leading records management capabilities as part of a unified ECM platform for managing records, documents, Web content, digital assets, enterprise imaging, and application imaging. This unique strategy provides comprehensive records management in a consistent, cost-effective way, and enables organizations to consolidate ECM repositories and connect ECM to critical business applications.

    5. Achieve ECM at extreme scale | Oracle WebLogic Server and Oracle Exadata
    To support the high-performance demands of a unified and rationalized content platform, Oracle has pioneered highly scalable and high-performing ECM infrastructures. Two innovations in particular helped make this happen: the core ECM platform moved to an Enterprise Java architecture, so organizations can now use Oracle WebLogic Server for enhanced scalability and manageability, and Oracle Enterprise Content Management 11g can leverage Oracle Exadata for extreme performance and scale. Likewise, Oracle Exalogic--Oracle's foundation for cloud computing--enables extreme performance for processor-intensive capabilities such as content conversion or dynamic Web page delivery.

    Learn more about Oracle's Enterprise 2.0 solutions.


  • MacGyver Moments

    - by Geoff N. Hiten
    Denny Cherry tagged me to write about my best MacGyver Moment.  Usually I ignore blogosphere fluff and just use this space to write what I think is important.  However, #MVP10 just ended and I have a stronger sense of community.  Besides, where else would I mention that my second best MacGyver moment was making a BIOS jumper out of a soda can?  Aluminum is conductive and I didn't have any real jumpers lying around. My best moment is probably my entire home computer network.  Every system but one is hand-built, usually cobbled together out of spare parts and 'adapted' from its original purpose. My Primary Domain Controller is a Dell 2300.  The Service Tag indicates it was shipped to the original owner in 1999.  The box has a PERC/1 RAID controller.  I acquired this from a previous employer for $50.  It runs Windows Server 2003 Enterprise Edition and does DNS, DHCP, and RADIUS services as a bonus.  RADIUS authentication is used for VPN and Wireless access.  It is nice to sign in once and be done with it. The Secondary Domain Controller is an old desktop: dual P-III 933 with some extra drives. My VPN box is a P-II 250 with 384MB of RAM and a 21 GB hard drive.  I did a P-to-V to my Hyper-V box a year or so ago and retired the hardware again.  Dynamic DNS lets me connect no matter how often Comcast shuffles my IP. The Hyper-V box is a desktop system with 8GB RAM and an AMD Athlon 5000+ processor.  It cost me less than $500 to put together nearly two years ago.  I reasoned that if Vista and Windows 2008 were the same code, then Vista 64-bit certification meant the drivers for Vista would load into Windows 2008.  Turns out I was right. Later I added three 1TB drives but wasn't too happy with how that turned out.  I recovered two of the drives yesterday and am building an iSCSI storage unit (much thanks to StarWind - great product).  I am using an old AMD 1.1GHz box with 1.5 GB RAM (cobbled together from three old PCs) as my storage server.  The Hyper-V box is slated for an OS rebuild to 2008 R2 once I get the storage system worked out, maybe in a week or two. A couple of DLink Gigabit switches tie everything together. Add in the Vonage box, the three PCs, the Wireless-N Access Point, the two notebooks, and the Xbox, and you have gone from MacGyver to darn near Rube Goldberg. The only thing I really spend money on is power supplies and fans.  I buy top-of-the-line for both.  I even pull and crimp my own cables. Oh, and if my kids hose up a PC, I have all of their data on a server elsewhere.  Every PC and laptop is pretty much interchangeable for email and basic workstation tasks.  That helps a lot too. Of course I will tag SQLVariant.


  • Employee Info Starter Kit: Project Mission

    - by Mohammad Ashraful Alam
    Employee Info Starter Kit is an open source ASP.NET project template that is intended to address different types of real world challenges faced by web application developers when performing common CRUD operations. Using a single database table ‘Employee’, it illustrates how to utilize Microsoft ASP.NET 4.0, Entity Framework 4.0 and Visual Studio 2010 effectively in that context. Employee Info Starter Kit is highly influenced by the ‘Pareto Principle’, or 80-20 rule: it aims to enable a web developer to gain 80% of the productivity with 20% of the effort with respect to learning curve and production.

    User Stories
    The user-end functionality of this starter kit is pretty simple and straightforward, focused on performing CRUD operations on employee records:
    - Creating a new employee record
    - Reading existing employee records
    - Updating an existing employee record
    - Deleting existing employee records

    Key Technology Areas
    - ASP.NET 4.0
    - Entity Framework 4.0
    - T-4 Template
    - Visual Studio 2010

    Architectural Objective
    There is no universal architecture that can be considered the best for all sorts of applications. Based on requirements, constraints and environment, the architecture of one application can differ from another, and trade-off factors are one of the important considerations when deciding on a particular architectural solution. As stated above, Employee Info Starter Kit targets 80% productivity with 20% effort. “Productivity” as the architectural objective also encompasses other trade-off factors, such as testability, flexibility and performance. Fortunately, Microsoft .NET Framework 4.0 and Visual Studio 2010 include lots of great features that have been used cleverly in this project to keep these trade-offs to a minimum.

    Why Employee Info Starter Kit is Not a Framework
    Application frameworks are really great for productivity, and some of them are unavoidable in this modern age. However, relying on too many frameworks can be overkill for a project, as frameworks are typically designed to serve a wide range of usages and are less customizable or editable. On the other hand, having implementation patterns can be useful for developers, as it enables them to adjust the application on demand. Employee Info Starter Kit provides hundreds of “connected” snippets and implementation patterns to demonstrate problem solutions in an actual production environment. It also includes Visual Studio T-4 templates that generate thousands of lines of repetitive data access and business logic layer code in literally a few seconds on the fly; the generated code is fully mock-testable due to language support for partial methods and the latest support for mock testing in Entity Framework.

    Why Employee Info Starter Kit is Different than Other Open-source Web Applications
    Software development is one of the most rapidly growing industries around the globe, and its technologies are updated very frequently to meet greater challenges over time. There are literally thousands of community web sites, blogs and forums dedicated to providing support for adopting new technologies. While some are really great for learning new technologies quickly, in most cases they are either too “simple and brief” to be used in real world scenarios, or too “complex and detailed” – typically focused on achieving a product goal (such as a CMS or e-Commerce site) from an "end user" perspective, with a long learning curve for the corresponding technology. Employee Info Starter Kit, as a web project, is basically "developer" oriented and takes a hybrid approach, “simple and detailed”: a deliberately simple domain is used to illustrate most of the architectural and implementation challenges faced by web application developers, so that anyone can quickly dive deep into the corresponding new technology or concept.

    Roadmap
    Since its first release in 2008 on MSDN Code Gallery, Employee Info Starter Kit has gained huge popularity in the ASP.NET community, with 150,000+ downloads since then. Encouraged by this great response, we have a strong commitment to the community to keep supporting it with respect to the latest technologies. Currently hosted on CodePlex, this community driven project is planned to have a wide range of individual editions, each focused on a selected application architecture, framework or platform, such as ASP.NET WebForms, ASP.NET Dynamic Data, ASP.NET MVC, jQuery Ajax (RIA), Silverlight (RIA), Azure Service Platform (Cloud), Visual Studio Automated Test etc. See here for the full list of current and future editions.
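    To illustrate the partial-method hook mentioned above, here is a minimal C# sketch (not taken from the starter kit itself; the type and member names are hypothetical). The generated half of the class declares a partial method, and hand-written code can optionally implement it:

    using System;

    // Generated file (hypothetical): a T-4 template emits the hook declaration.
    public partial class EmployeeRepository
    {
        // If no hand-written implementation exists, calls to this are removed at compile time.
        partial void OnEmployeeSaving(Employee employee);

        public void Save(Employee employee)
        {
            OnEmployeeSaving(employee); // extensibility hook
            // ... generated persistence code would go here ...
        }
    }

    // Hand-written file: the same partial class supplies the hook's body.
    public partial class EmployeeRepository
    {
        partial void OnEmployeeSaving(Employee employee)
        {
            if (string.IsNullOrEmpty(employee.Name))
                throw new ArgumentException("Employee name is required.");
        }
    }

    public class Employee
    {
        public string Name { get; set; }
    }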


  • Token based Authentication for WCF HTTP/REST Services: Authentication

    - by Your DisplayName here!
    This post shows some of the implementation techniques for adding token and claims based security to HTTP/REST services written with WCF. For the theoretical background, see my previous post.

    Disclaimer
    The framework I am using/building here is not the only possible approach to tackle the problem. Based on customer feedback and requirements, the code has gone through several iterations to a point where we think it is ready to handle most of the situations.

    Goals and requirements
    - The framework should be able to handle typical scenarios like username/password based authentication, as well as token based authentication
    - The framework should allow adding new supported token types
    - Should work with the WCF web programming model, either self-hosted or IIS hosted
    - Service code can rely on an IClaimsPrincipal on Thread.CurrentPrincipal that describes the client using claims-based identity

    Implementation overview
    In WCF the main extensibility point for this kind of security work is the ServiceAuthorizationManager. It gets invoked early enough in the pipeline, has access to the HTTP protocol details of the incoming request, and can set Thread.CurrentPrincipal. The job of the SAM is simple:
    - Check the Authorization header of the incoming HTTP request
    - Check if a “registered” token (more on that later) is present
    - If yes, validate the token using a security token handler, create the claims principal (including claims transformation) and set Thread.CurrentPrincipal
    - If no, set an anonymous principal on Thread.CurrentPrincipal. By default, anonymous principals are denied access – so the request ends here with a 401 (more on that later).
    To wire up the custom authorization manager you need a custom service host – which in turn needs a custom service host factory. The full object model looks like this:

    Token handling
    A nice piece of existing WIF infrastructure is the security token handler. Its job is to deserialize a received security token into a CLR representation, validate the token and turn the token into claims. The way this works with WS-Security based services is that WIF passes the name/namespace of the incoming token to WIF’s security token handler collection. This in turn finds out which token handler can deal with the token and returns the right instance. For HTTP based services we can do something very similar: the scheme on the Authorization header gives the service a hint how to deal with an incoming token. So the only missing link is a way to associate a token handler (or multiple token handlers) with a scheme, and we are (almost) done. WIF already includes token handlers for a variety of tokens like username/password or SAML 1.1/2.0. The accompanying sample has an implementation of a Simple Web Token (SWT) token handler, and as soon as JSON Web Tokens are ready, simply adding a corresponding token handler will add support for that token type, too. All supported schemes/token types are organized in a WebSecurityTokenHandlerCollectionManager and passed into the host factory/host/authorization manager. Adding support for basic authentication against a membership provider would e.g. look like this (in global.asax):

    var manager = new WebSecurityTokenHandlerCollectionManager();
    manager.AddBasicAuthenticationHandler((username, password) =>
        Membership.ValidateUser(username, password));

    Adding support for Simple Web Tokens with a scheme of Bearer (the current OAuth2 scheme) requires passing in an issuer, audience and signature verification key:

    manager.AddSimpleWebTokenHandler(
        "Bearer",
        "http://identityserver.thinktecture.com/trust/initial",
        "https://roadie/webservicesecurity/rest/",
        "WFD7i8XRHsrUPEdwSisdHoHy08W3lM16Bk6SCT8ht6A=");

    In some situations, SAML tokens may be used as well. The following configures SAML support for a token coming from ADFS2:

    var registry = new ConfigurationBasedIssuerNameRegistry();
    registry.AddTrustedIssuer(
        "d1 c5 b1 25 97 d0 36 94 65 1c e2 64 fe 48 06 01 35 f7 bd db",
        "ADFS");

    var adfsConfig = new SecurityTokenHandlerConfiguration();
    adfsConfig.AudienceRestriction.AllowedAudienceUris.Add(
        new Uri("https://roadie/webservicesecurity/rest/"));
    adfsConfig.IssuerNameRegistry = registry;
    adfsConfig.CertificateValidator = X509CertificateValidator.None;

    // token decryption (read from config)
    adfsConfig.ServiceTokenResolver =
        IdentityModelConfiguration.ServiceConfiguration.CreateAggregateTokenResolver();

    manager.AddSaml11SecurityTokenHandler("SAML", adfsConfig);

    Transformation
    The custom authorization manager will also try to invoke a configured claims authentication manager. This means that the standard WIF claims transformation logic can be used here as well – and, even better, can be shared with e.g. a “surrounding” web application.

    Error handling
    A WCF error handler takes care of turning “access denied” faults into 401 status codes, and a message inspector adds the registered authentication schemes to the outgoing WWW-Authenticate header when a 401 occurs. The next post will conclude with authorization as well as the source code download.

    (Wanna learn more about federation, WIF, claims, tokens etc.? Click here.)
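    To make the authorization manager's job concrete, here is a minimal sketch of what such a CheckAccessCore override could look like. This is not the post's actual code; the class name and the TryValidateToken helper are hypothetical stand-ins for the token handler lookup described above:

    using System.Net;
    using System.Security.Principal;
    using System.ServiceModel;
    using System.ServiceModel.Channels;
    using System.Threading;

    public class WebTokenAuthorizationManager : ServiceAuthorizationManager
    {
        protected override bool CheckAccessCore(OperationContext operationContext)
        {
            // 1. Check the Authorization header of the incoming HTTP request.
            var request = (HttpRequestMessageProperty)operationContext
                .RequestContext.RequestMessage.Properties[HttpRequestMessageProperty.Name];
            string header = request.Headers[HttpRequestHeader.Authorization];

            // 2./3. If a registered token is present, validate it via the matching
            // token handler and set the resulting claims principal.
            IPrincipal principal;
            if (!string.IsNullOrEmpty(header) && TryValidateToken(header, out principal))
            {
                Thread.CurrentPrincipal = principal;
                return true;
            }

            // 4. Otherwise set an anonymous principal; by default anonymous callers
            // are denied, which the error handler described below turns into a 401.
            Thread.CurrentPrincipal = new GenericPrincipal(new GenericIdentity(string.Empty), null);
            return false;
        }

        // Hypothetical helper: picks the token handler registered for the header's
        // scheme, validates the token, and runs claims transformation.
        private bool TryValidateToken(string authorizationHeader, out IPrincipal principal)
        {
            principal = null;
            return false; // placeholder
        }
    }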


  • Implementing features in an Entity System

    - by Bane
    After asking two questions on Entity Systems (1, 2), and reading some articles on them, I think that I understand them much better than before. But I still have some uncertainties, mainly about building a Particle Emitter, an Input system, and a Camera. I obviously still have some problems understanding Entity Systems, and they might apply to a whole other range of objects, but I chose these three because they are very different concepts and should cover pretty big ground, and help me understand Entity Systems and how to handle problems like these myself as they come along. I am building an engine in JavaScript, and I've implemented most of the core features, which include: input handling, a flexible animation system, a particle emitter, math classes and functions, scene handling, a camera and a renderer, and a whole bunch of other things that engines usually support. Then I read Byte56's answer, which got me interested in making the engine into an Entity System one. It would still remain an HTML5 game engine with the basic Scene philosophy, but it should support dynamic creation of entities from components. These are some of the definitions from the previous questions, updated:

    An Entity is an identifier. It doesn't have any data, it's not an object, it's a simple id that represents an index in the Scene's list of all entities (which I actually plan to implement as a component matrix).

    A Component is a data holder, but with methods that can operate on that data. The best example is a Vector2D, or a "Position" component. It has data: x and y, but also some methods that make operating on the data a bit easier: add(), normalize(), and so on.

    A System is something that can operate on a set of entities that meet certain requirements; usually they (the entities) need to have a specified (by the system itself) set of components to be operated upon. The system is the "logic" part, the "algorithm" part; all the functionality supplied by components is purely for easier data management.

    The problem I have now is fitting my old engine concept into this new programming paradigm. Let's start with the simplest one, a Camera. The camera has a position property (Vector2D), a rotation property and some methods for centering it around a point. Each frame, it is fed to a renderer, along with a scene, and all the objects are translated according to its position. Then the scene is rendered. How could I represent this kind of object in an Entity System? Would the camera be an entity or simply a component? A combination (see my answer)?

    Another issue that is bothering me is implementing a Particle Emitter. For what exactly I mean by that, you can check out my video of it: http://youtu.be/BObargIMQsE. The problem I have with this is, again, what should be what. I'm pretty sure that particles themselves shouldn't be entities, as I want to support 10k+ of them, and creating that many entities would be a heavy blow on my performance, I believe. Or maybe not? Depends on the implementation, but anyone with experience: please, do answer.

    The last bit I want to talk about, which is also bugging me the most, is how input should be handled. In my current version of the engine, there is a class called Input. It's a handler that subscribes to the browser's events, such as keypresses and mouse position changes, and it also maintains an internal state. Then, the player class has a react() method, which accepts an input object as an argument. The advantage of this is that the input object can be serialized into JSON and then shared over the network, allowing for smooth multiplayer simulations. But how does this translate into an Entity System?
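    For reference, here is a minimal sketch of the Entity/Component/System split described in the definitions above (written in C# for brevity, though the engine in question is JavaScript; all names are illustrative). An entity is just an integer, components are plain data keyed by entity id, and a system iterates over the entities that carry the components it requires - a camera, for instance, could then be an ordinary entity with a Position component plus a camera-specific component:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Components: plain data.
    class Position { public float X, Y; }
    class Velocity { public float X, Y; }

    class Scene
    {
        private int _nextId;
        private readonly Dictionary<Type, Dictionary<int, object>> _store =
            new Dictionary<Type, Dictionary<int, object>>();

        // An entity is nothing but an id.
        public int CreateEntity() { return _nextId++; }

        public void Add<T>(int entity, T component)
        {
            Dictionary<int, object> byEntity;
            if (!_store.TryGetValue(typeof(T), out byEntity))
                _store[typeof(T)] = byEntity = new Dictionary<int, object>();
            byEntity[entity] = component;
        }

        public T Get<T>(int entity) { return (T)_store[typeof(T)][entity]; }

        // All entities that carry both required component types.
        public IEnumerable<int> With<T1, T2>()
        {
            Dictionary<int, object> first, second;
            if (!_store.TryGetValue(typeof(T1), out first) ||
                !_store.TryGetValue(typeof(T2), out second))
                yield break;
            foreach (int id in first.Keys.Where(second.ContainsKey))
                yield return id;
        }
    }

    // A system: pure logic over entities with a required set of components.
    static class MovementSystem
    {
        public static void Update(Scene scene, float dt)
        {
            foreach (int e in scene.With<Position, Velocity>())
            {
                var p = scene.Get<Position>(e);
                var v = scene.Get<Velocity>(e);
                p.X += v.X * dt;
                p.Y += v.Y * dt;
            }
        }
    }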


  • Big Data – Operational Databases Supporting Big Data – Key-Value Pair Databases and Document Databases – Day 13 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned the importance of the Relational Database and the NoSQL database in the Big Data story. In this article we will understand the role of Key-Value Pair Databases and Document Databases in supporting the Big Data story. Now we will see a few examples of the operational databases:
    - Relational Databases (Yesterday’s post)
    - NoSQL Databases (Yesterday’s post)
    - Key-Value Pair Databases (This post)
    - Document Databases (This post)
    - Columnar Databases (Tomorrow’s post)
    - Graph Databases (Tomorrow’s post)
    - Spatial Databases (Tomorrow’s post)

    Key Value Pair Databases
    Key Value Pair Databases are also known as KVP databases. A key is a field name, an attribute, an identifier. The content of that field is its value, the data that is being identified and stored. They are a very simple implementation of NoSQL database concepts. They do not have a schema, hence they are very flexible as well as scalable. The disadvantage of Key Value Pair (KVP) databases is that they do not follow the ACID (Atomicity, Consistency, Isolation, Durability) properties. Additionally, they require data architects to plan for data placement, replication as well as high availability. In KVP databases the data is stored as strings. Here is a simple example of what a Key Value database looks like:

    Key     | Value
    --------|-----------
    Name    | Pinal Dave
    Color   | Blue
    Twitter | @pinaldave
    Name    | Nupur Dave
    Movie   | The Hero

    As the number of users grows in a Key Value Pair database, it starts getting difficult to manage the entire database. As there is no specific schema or set of rules associated with the database, there is a chance that the database grows exponentially as well. It is very crucial to select the right Key Value Pair database, one which offers an additional set of tools to manage the data and provides finer control over various business aspects of the same.

    Riak
    Riak is one of the most popular Key Value databases. It is known for its scalability and performance with high volume, high velocity data. Additionally, it implements a mechanism for collecting keys and values which further helps to build a manageable system. We will further discuss Riak in future blog posts. Key Value databases are a good choice for social media, communities, and caching layers for connecting other databases. In simpler words, whenever we require flexibility of data storage keeping scalability in mind, KVP databases are good options to consider.

    Document Database
    There are two different kinds of document databases: 1) those that store full document content (web pages, Word docs, etc.) and 2) those that store document components. It is the second kind of document database we are talking about here. They use JavaScript Object Notation (JSON) and Binary JSON for the structure of the documents. JSON is a very easy format to understand, and it is very easy for applications to write. There are two major JSON structures used by document databases: 1) name/value pairs and 2) ordered lists. MongoDB and CouchDB are two of the most popular open source non-relational document databases.

    MongoDB
    MongoDB databases are composed of collections. Each collection is built of documents, and each document is composed of fields. MongoDB collections can be indexed for optimal performance. The MongoDB ecosystem is highly available and supports query services as well as MapReduce. It is often used in high volume content management systems.

    CouchDB
    CouchDB databases are composed of documents, which consist of fields and attachments (known as the description). It supports ACID properties. The main attraction point of CouchDB is that it will continue to operate even when network connectivity is sketchy; due to this nature, CouchDB prefers local data storage. A document database is a good choice when users have to generate dynamic reports from elements which are changing very frequently. A good example of document usage is real-time analytics in social networking or content management systems.

    Tomorrow
    In tomorrow’s blog post we will discuss various other operational databases supporting Big Data.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
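    As an illustration of the two JSON structures mentioned above, here is a small hypothetical document (reusing the key/value examples from earlier in the post) that combines name/value pairs with an ordered list:

    {
        "name": "Pinal Dave",
        "color": "Blue",
        "twitter": "@pinaldave",
        "movies": ["The Hero"]
    }

    The first three fields are name/value pairs; "movies" holds an ordered list, which is how a document database naturally stores multiple values for one field.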


  • Fastest pathfinding for static node matrix

    - by Sean Martin
    I'm programming a route finding routine in VB.NET for an online game I play, and I'm searching for the fastest pathfinding algorithm for my map type. The game takes place in space, with thousands of solar systems connected by jump gates. The game devs have provided a DB dump containing a list of every system and the systems it can jump to. The map isn't quite a node tree, since some branches can jump to other branches - more of a matrix. What I need is a fast pathfinding algorithm. I have already implemented an A* routine and a Dijkstra's; both find the best path but are too slow for my purposes - a search that considers about 5000 nodes takes over 20 seconds to compute. A similar program on a website can do the same search in less than a second. This website claims to use D*, which I have looked into, but that algorithm seems more appropriate for dynamic maps rather than one that does not change - unless I misunderstand its premise. So is there something faster I can use for a map that is not your typical tile/polygon base? GBFS? Perhaps a DFS? Or have I likely got some problem with my A* - maybe poorly chosen heuristics or movement costs? Currently my movement cost is the length of the jump (the DB dump has solar system coordinates as well), and the heuristic is a quick euclidean calculation from the node to the goal. In case anyone has some optimizations for my A*, here is the routine that consumes about 60% of my processing time, according to my profiler. The coordinateData table contains a list of every system's coordinates, and neighborNode.distance is the distance of the jump.

    Private Function findDistance(ByVal startSystem As Integer, ByVal endSystem As Integer) As Integer
        'hCount += 1
        'If hCount Mod 0 = 0 Then
        '    Return hCache
        'End If
        'Initialize variables to be filled
        Dim x1, x2, y1, y2, z1, z2 As Integer
        'LINQ queries for solar system data
        Dim systemFromData = From result In jumpDataDB.coordinateDatas
                             Where result.systemId = startSystem
                             Select result.x, result.y, result.z
        Dim systemToData = From result In jumpDataDB.coordinateDatas
                           Where result.systemId = endSystem
                           Select result.x, result.y, result.z
        'LINQ execute
        'Fill variables with solar system data for from and to system
        For Each solarSystem In systemFromData
            x1 = (solarSystem.x)
            y1 = (solarSystem.y)
            z1 = (solarSystem.z)
        Next
        For Each solarSystem In systemToData
            x2 = (solarSystem.x)
            y2 = (solarSystem.y)
            z2 = (solarSystem.z)
        Next
        Dim x3 = Math.Abs(x1 - x2)
        Dim y3 = Math.Abs(y1 - y2)
        Dim z3 = Math.Abs(z1 - z2)
        'Calculate distance and round
        'Dim distance = Math.Round(Math.Sqrt(Math.Abs((x1 - x2) ^ 2) + Math.Abs((y1 - y2) ^ 2) + Math.Abs((z1 - z2) ^ 2)))
        Dim distance = firstConstant * Math.Min(secondConstant * (x3 + y3 + z3), Math.Max(x3, Math.Max(y3, z3)))
        'Dim distance = Math.Abs(x1 - x2) + Math.Abs(z1 - z2) + Math.Abs(y1 - y2)
        'hCache = distance
        Return distance
    End Function

    And the main loop, the other 30%:

    'Begin search
    While openList.Count() <> 0
        'Set current system and move node to closed
        currentNode = lowestF()
        move(currentNode.id)
        For Each neighborNode In neighborNodes
            If Not onList(neighborNode.toSystem, 0) Then
                If Not onList(neighborNode.toSystem, 1) Then
                    Dim newNode As New nodeData()
                    newNode.id = neighborNode.toSystem
                    newNode.parent = currentNode.id
                    newNode.g = currentNode.g + neighborNode.distance
                    newNode.h = findDistance(newNode.id, endSystem)
                    newNode.f = newNode.g + newNode.h
                    newNode.security = neighborNode.security
                    openList.Add(newNode)
                    shortOpenList(OLindex) = newNode.id
                    OLindex += 1
                Else
                    Dim proposedG As Integer = currentNode.g + neighborNode.distance
                    If proposedG < gValue(neighborNode.toSystem) Then
                        changeParent(neighborNode.toSystem, currentNode.id, proposedG)
                    End If
                End If
            End If
        Next
        'Check to see if done
        If currentNode.id = endSystem Then
            Exit While
        End If
    End While

    If clarification is needed on my spaghetti code, I'll try to explain.


  • St. Louis ALT.NET

    - by Brian Schroer
    I’m a huge fan of the St. Louis .NET User Group and a regular attendee of their meetings, but I always wished there was a local group that discussed more advanced .NET topics. (That’s not a criticism of the group - I appreciate that they want to serve developers with a broad range of skill levels.) That’s why I was thrilled when Nicholas Cloud started a St. Louis ALT.NET group in 2010. Here’s the “about us” statement from the group’s web site:

    The ALT.NET community is a loosely coupled, highly cohesive group of like-minded individuals who believe that the best developers do not align themselves with platforms and languages, but with principles and ideas. In 2007, David Laribee created the term "ALT.NET" to explain this "alternative" view of the Microsoft development universe--a view that challenged the "Microsoft-only" approach to software development. He distilled his thoughts into four key developer characteristics which form the basis of the ALT.NET philosophy:
    - You're the type of developer who uses what works while keeping an eye out for a better way.
    - You reach outside the mainstream to adopt the best of any community: Open Source, Agile, Java, Ruby, etc.
    - You're not content with the status quo. Things can always be better expressed, more elegant and simple, more mutable, higher quality, etc.
    - You know tools are great, but they only take you so far. It's the principles and knowledge that really matter. The best tools are those that embed the knowledge and encourage the principles (e.g. Resharper.)
    The St. Louis ALT.NET meetup group is a place where .NET developers can learn, share, and critique approaches to software development on the .NET stack. We cater to the highest common denominator, not the lowest, and want to help all St. Louis .NET developers achieve a superior level of software craftsmanship.

    I don’t see a lot of ALT.NET talk in blogs these days. The movement was harmed early on by the negative attitudes of some of its early leaders, including jerk moves like the Entity Framework “vote of no confidence”, but I do see occasional mentions of local groups like the St. Louis one. I think ALT.NET has been successful at bringing some of its ideas into the .NET world, including heavily influencing ASP.NET MVC and raising the general level of software craftsmanship for developers working on the Microsoft stack. The ideas and ideals live on; they’re just not branded as “this is ALT.NET!” In the past 18 months, St. Louis ALT.NET meetups have discussed topics like:
    - NHibernate
    - F# and other functional languages
    - AOP
    - CoffeeScript
    - “How Ruby Is Making Me a Stronger C# Developer”
    - Using rake for builds
    - CQRS
    - .NET dynamic programming
    - micro web frameworks – Nancy & Jessica
    - Git

    ALT.NET doesn’t mean (to me, anyway) “alternatives to .NET”, but “alternatives for .NET”. We look at how things are done in Ruby and other languages/platforms, but always with the idea “What can I learn from this to take back to my ‘day job’ with .NET?”. Meetings are held at 7PM on the fourth Wednesday of each month at the offices of Professional Employment Group. PEG is located at 999 Executive Parkway (Suite 100 – lower level) in Creve Coeur (south of Olive off of Mason Road - here's a map). Food is not supplied (sorry if you’re a big fan of the Papa John’s Crust-Lovers’ Pizza that’s a staple of user group meetings), but attendees are encouraged to come early and bring/share beer, so that’s cool. Thanks to Nick for organizing, and to Professional Employment Group for lending their offices. Please visit the meetup site for more information.


  • PASS Summit 2011 &ndash; Part IV

    - by Tara Kizer
    This is the final blog in my PASS Summit 2011 series.  Well okay, a mini-series, I guess. On the last day of the conference, I attended Keith Elmore’s and Boris Baryshnikov’s (both from Microsoft) “Introducing the Microsoft SQL Server Code Named “Denali” Performance Dashboard Reports”, Jeremiah Peschka’s (blog|twitter) “Rewrite your T-SQL for Great Good!”, and Kimberly Tripp’s (blog|twitter) “Isolated Disasters in VLDBs”.

    Keith and Boris talked about the lifecycle of a session, figuring out the running time and the waiting time.  They pointed out the transient nature of the reports.  You could be drilling into one to uncover a problem, but the session may have ended by the time you’ve drilled all the way down.  Also, the reports are for troubleshooting live problems, not historical ones; you can use Management Data Warehouse for historical troubleshooting.  The reports provide similar benefits to the Activity Monitor; however, Activity Monitor doesn’t provide context sensitive drill through. One thing I learned in Keith’s and Boris’ session was that the buffer cache hit ratio should really never be below 87% due to the read-ahead mechanism in SQL Server.  When a page is read, SQL Server will read the entire extent.  So for every page read, you get 7 more reads.  If you need any of those 7 extra pages, well, they are already in cache.

    I had a lot of fun in Jeremiah’s session about refactoring code, plus I learned a lot.  His slides were visually presented in a fun way, which just made for a more upbeat presentation.  Jeremiah says that before you start refactoring, you should look at your system.  Investigate missing or too many indexes, out-of-date statistics, and other areas that could be leading to your code running slow.  He talked about code standards.  He suggested using common abbreviations for aliases instead of one-letter aliases.  I’m a big offender of one-letter aliases, but he makes a good point.  He said that join order does not matter to the optimizer, but it does matter to those who have to read your code.  Now let’s get into refactoring!
    - Eliminate useless things – useless/unneeded joins and columns.  If you don’t need it, get rid of it!
    - Instead of using DISTINCT/JOIN, replace with EXISTS
    - Simplify your conditions; use UNION or better yet UNION ALL instead of OR to avoid a scan and use indexes for each union query
    - Branching logic – instead of IF this, IF that, and on and on… use dynamic SQL (sp_executesql, please!) or use a parameterized query in the application (see the sketch at the end of this post)
    - Correlated subqueries – YUCK! Replace with a join
    - Eliminate repeated patterns

    Last, but certainly not least, was Kimberly’s session.  Kimberly is my favorite speaker.  I attended her two-day pre-conference seminar at PASS Summit 2005 as well as a SQL Immersion Event last December.  Did I mention she’s my favorite speaker?  Okay, enough of that. Kimberly’s session was packed with demos.  I had seen some of it in the SQL Immersion Event, but it was very nice to get a refresher on these, especially since I’ve got a VLDB with some growing pains.  One key takeaway from her session is the idea of using a log shipping solution with a load delay, such as 6, 8, or 24 hours behind the primary.  In the case of, say, an accidentally dropped table in a VLDB, we could retrieve it from the secondary database rather than waiting an eternity for a restore to complete.  Kimberly let us know that in SQL Server 2012 (it finally has a name!), online rebuilds are supported even if there are LOB columns in your table.  This will simplify custom code that intelligently figures out if an online rebuild is possible. There was actually one last time slot for sessions that day, but I had an airplane to catch and my kids to see!
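    As a quick illustration of the “parameterized query in the application” suggestion above, here is a minimal C# sketch (the table, column and connection string are hypothetical). Instead of IF/ELSE branches that each concatenate a slightly different WHERE clause, the optional filter is bound once as a parameter:

    using System;
    using System.Collections.Generic;
    using System.Data;
    using System.Data.SqlClient;

    class CustomerQuery
    {
        static IList<string> GetCustomers(string connectionString, string region)
        {
            var names = new List<string>();
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(
                "SELECT Name FROM dbo.Customer " +
                "WHERE @Region IS NULL OR Region = @Region;", connection))
            {
                // NULL means "no filter"; the statement text never changes,
                // so the plan can be cached and reused.
                command.Parameters.Add("@Region", SqlDbType.NVarChar, 50).Value =
                    (object)region ?? DBNull.Value;
                connection.Open();
                using (SqlDataReader reader = command.ExecuteReader())
                {
                    while (reader.Read())
                        names.Add(reader.GetString(0));
                }
            }
            return names;
        }
    }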


  • HPCM 11.1.2.2.x - How to find data in an HPCM Standard Costing database

    - by Jane Story
    When working with a Hyperion Profitability and Cost Management (HPCM) Standard Costing application, there can often be a requirement to check data or allocated results using reporting tools, e.g. Smartview. To do this, you are retrieving data directly from the Essbase databases related to your HPCM model. For information, running reports is covered in Chapter 9 of the HPCM User documentation. The aim of this blog is to provide a quick guide to finding this data for reporting in the HPCM generated Essbase database in v11.1.2.2.x of HPCM. In order to retrieve data from an HPCM generated Essbase database, it is important to understand each of the following dimensions in the Essbase database and where data is located within them:
    - Measures dimension – identifies Measures
    - AllocationType dimension – identifies Direct Allocation data or Genealogy Allocation data
    - Point Of View (POV) dimensions – there must be at least one, maximum of four
    - Business dimensions: Stage Business dimensions (identified by the Stage prefix) and Intra-Stage dimensions (identified by the _Intra suffix)

    Essbase outlines and reporting are explained in the documentation here: http://docs.oracle.com/cd/E17236_01/epm.1112/hpm_user/ch09s02.html
    For additional details on reporting measures, please review this section of the documentation: http://docs.oracle.com/cd/E17236_01/epm.1112/hpm_user/apas03.html
    Reporting requirements in HPCM quite often start with identifying non balanced items in the Stage Balancing report. The following documentation link provides help with identifying some of the items within the Stage Balancing report: http://docs.oracle.com/cd/E17236_01/epm.1112/hpm_user/generatestagebalancing.html

    The following are some types of data upon which you may want to report:
    - Stage Data: Direct Input, Assigned Input Data, Assigned Output Data
    - Idle Cost/Revenue
    - Unassigned Cost/Revenue
    - Over Driven Cost/Revenue
    - Direct Allocation Data
    - Genealogy Allocation Data

    Stage Data
    Stage Data consists of:
    - Direct Input, i.e. input data, the starting point of your allocation, e.g. in Stage 1
    - Assigned Input Data, i.e. the cost/revenue received from a prior stage (i.e. stage 2 and higher)
    - Assigned Output Data, i.e. for each stage, the data that will be assigned forward (assigned post stage data)
    Reporting on this data is explained in the documentation here: http://docs.oracle.com/cd/E17236_01/epm.1112/hpm_user/ch09s03.html
    Dimension selection for Stage Data:
    - Measures: Direct Input: CostInput, RevenueInput; Assigned Input (from previous stages): CostReceivedPriorStage, RevenueReceivedPriorStage; Assigned Output (to subsequent stages): CostAssignedPostStage, RevenueAssignedPostStage
    - AllocationType: DirectAllocation
    - POV: one member from each POV dimension
    - Stage Business dimensions: any members for the stage business dimensions for the stage you wish to see the Stage data for
    - All other dimensions: NoMember

    Idle/Unassigned/OverDriven
    To view Idle, Unassigned or Overdriven Cost/Revenue, first select the stage for which you want to view this data. If multiple stages have unassigned/idle, resolve the earliest first and re-run the calculation, as differences in early stages will create unassigned/idle in later stages.
    Dimension selection for Idle/Unassigned/OverDriven:
    - Measures: Idle: IdleCost, IdleRevenue; Unassigned: UnAssignedCost, UnAssignedRevenue; Overdriven: OverDrivenCost, OverDrivenRevenue
    - AllocationType: DirectAllocation
    - POV: one member from each POV dimension
    - Dimensions in the stage with Unassigned/Idle/OverDriven cost: all the Stage Business dimensions in that stage. Zoom in on each dimension to find which individual members have Unassigned/Idle/OverDriven data.
    - All other dimensions: NoMember

    Direct Allocation Data
    Direct allocation data shows the data received by a destination intersection from a source intersection where a direct assignment(s) exists. Reporting on direct allocation data is explained in the documentation here: http://docs.oracle.com/cd/E17236_01/epm.1112/hpm_user/ch09s04.html
    You would select the following to report direct allocation data:
    - Measures: CostReceivedPriorStage
    - AllocationType: DirectAllocation
    - POV: one member from each POV dimension
    - Stage Business dimensions: any members for the SOURCE stage business dimensions and the DESTINATION stage business dimensions for the direct allocations for the stage you wish to report on
    - All other dimensions: NoMember

    Genealogy Allocation Data
    Genealogy allocation data shows the indirect data relationships between stages. Genealogy calculations run in the HPCM Reporting database only. Reporting on genealogy data is explained in the documentation here: http://docs.oracle.com/cd/E17236_01/epm.1112/hpm_user/ch09s05.html
    - Measures: CostReceivedPriorStage
    - AllocationType: GenealogyAllocation (IndirectAllocation in 11.1.2.1 and prior versions)
    - POV: one member from each POV dimension
    - Stage Business dimensions: any stage business dimension members from the STARTING stage, the INTERMEDIATE stage(s) and the ENDING stage in the genealogy
    - All other dimensions: NoMember

    Notes
    If you still don’t see data after checking the above, check that the calculation has been run. Here are a couple of indicators that might help with that:
    - Note the size of the Essbase cube before and after calculations to ensure that a calculation was run against the database you are examining.
    - Export the Essbase data to a text file to confirm that some data exists.
    - Examine the date and time on the task area to see when, if any, calculations were run and what choices were used (e.g. Genealogy choices).
    If data does not exist in places where you are expecting it, it could be that:
    - No calculations/genealogy were run
    - No calculations were successfully run
    - The model/data at feeder locations were either absent or incompatible, resulting in no allocation, e.g. no driver data

    Smartview Invocation from HPCM
    From version 11.1.2.2.350 of HPCM (this version will be GA shortly), it is possible to directly invoke Smartview from HPCM. There is guided navigation before the Smartview invocation, and it is then possible to see the selected value(s) in Smartview.

    Click to Download HPCM 11.1.2.2.x - How to find data in an HPCM Standard Costing database (Right click or option-click the link and choose "Save As..." to download this pdf file)


  • IPsec tunnel to Android device not created even though there is an IKE SA

    - by Quentin Swain
    I'm trying to configure a VPN tunnel between an Android device running 4.1 and a Fedora 17 Linux box running strongSwan 5.0. The device reports that it is connected, and strongSwan statusall shows that there is an IKE SA, but it doesn't display a tunnel. I used the instructions for iOS in the wiki to generate certificates and configure strongSwan. Since Android uses a modified version of racoon this should work, and since the connection is partly established I think I am on the right track. I don't see any errors about not being able to create the tunnel. This is the configuration for the strongSwan connection:

    conn android2
        keyexchange=ikev1
        authby=xauthrsasig
        xauth=server
        left=96.244.142.28
        leftsubnet=0.0.0.0/0
        leftfirewall=yes
        leftcert=serverCert.pem
        right=%any
        rightsubnet=10.0.0.0/24
        rightsourceip=10.0.0.2
        rightcert=clientCert.pem
        ike=aes256-sha1-modp1024
        auto=add

    This is the output of strongswan statusall:

    Status of IKE charon daemon (strongSwan 5.0.0, Linux 3.3.4-5.fc17.x86_64, x86_64):
      uptime: 20 minutes, since Oct 31 10:27:31 2012
      malloc: sbrk 270336, mmap 0, used 198144, free 72192
      worker threads: 8 of 16 idle, 7/1/0/0 working, job queue: 0/0/0/0, scheduled: 7
      loaded plugins: charon aes des sha1 sha2 md5 random nonce x509 revocation constraints pubkey pkcs1 pkcs8 pgp dnskey pem openssl fips-prf gmp xcbc cmac hmac attr kernel-netlink resolve socket-default stroke updown xauth-generic
    Virtual IP pools (size/online/offline):
      android-hybrid: 1/0/0
      android2: 1/1/0
    Listening IP addresses:
      96.244.142.28
    Connections:
      android-hybrid: %any...%any IKEv1
      android-hybrid: local: [C=CH, O=strongSwan, CN=vpn.strongswan.org] uses public key authentication
      android-hybrid: cert: "C=CH, O=strongSwan, CN=vpn.strongswan.org"
      android-hybrid: remote: [%any] uses XAuth authentication: any
      android-hybrid: child: dynamic === dynamic TUNNEL
      android2: 96.244.142.28...%any IKEv1
      android2: local: [C=CH, O=strongSwan, CN=vpn.strongswan.org] uses public key authentication
      android2: cert: "C=CH, O=strongSwan, CN=vpn.strongswan.org"
      android2: remote: [C=CH, O=strongSwan, CN=client] uses public key authentication
      android2: cert: "C=CH, O=strongSwan, CN=client"
      android2: remote: [%any] uses XAuth authentication: any
      android2: child: 0.0.0.0/0 === 10.0.0.0/24 TUNNEL
    Security Associations (1 up, 0 connecting):
      android2[3]: ESTABLISHED 10 seconds ago, 96.244.142.28[C=CH, O=strongSwan, CN=vpn.strongswan.org]...208.54.35.241[C=CH, O=strongSwan, CN=client]
      android2[3]: Remote XAuth identity: android
      android2[3]: IKEv1 SPIs: 4151e371ad46b20d_i 59a56390d74792d2_r*, public key reauthentication in 56 minutes
      android2[3]: IKE proposal: AES_CBC_256/HMAC_SHA1_96/PRF_HMAC_SHA1/MODP_1024

    The output of ip -s xfrm policy:

    src ::/0 dst ::/0 uid 0
        socket in action allow index 3851 priority 0 ptype main share any flag (0x00000000)
        lifetime config:
          limit: soft 0(bytes), hard 0(bytes)
          limit: soft 0(packets), hard 0(packets)
          expire add: soft 0(sec), hard 0(sec)
          expire use: soft 0(sec), hard 0(sec)
        lifetime current:
          0(bytes), 0(packets)
          add 2012-10-31 13:29:08 use -
    src ::/0 dst ::/0 uid 0
        socket out action allow index 3844 priority 0 ptype main share any flag (0x00000000)
        lifetime config:
          limit: soft 0(bytes), hard 0(bytes)
          limit: soft 0(packets), hard 0(packets)
          expire add: soft 0(sec), hard 0(sec)
          expire use: soft 0(sec), hard 0(sec)
        lifetime current:
          0(bytes), 0(packets)
          add 2012-10-31 13:29:08 use -
    src ::/0 dst ::/0 uid 0
        socket in action allow index 3835 priority 0 ptype main share any flag (0x00000000)
        lifetime config:
          limit: soft 0(bytes), hard 0(bytes)
          limit: soft 0(packets), hard 0(packets)
          expire add: soft 0(sec), hard 0(sec)
          expire use: soft 0(sec), hard 0(sec)
        lifetime current:
          0(bytes), 0(packets)
          add 2012-10-31 13:29:08 use -
    src ::/0 dst ::/0 uid 0
        socket out action allow index 3828 priority 0 ptype main share any flag (0x00000000)
        lifetime config:
          limit: soft 0(bytes), hard 0(bytes)
          limit: soft 0(packets), hard 0(packets)
          expire add: soft 0(sec), hard 0(sec)
          expire use: soft 0(sec), hard 0(sec)
        lifetime current:
          0(bytes), 0(packets)
          add 2012-10-31 13:29:08 use -
    src 0.0.0.0/0 dst 0.0.0.0/0 uid 0
        socket in action allow index 3819 priority 0 ptype main share any flag (0x00000000)
        lifetime config:
          limit: soft 0(bytes), hard 0(bytes)
          limit: soft 0(packets), hard 0(packets)
          expire add: soft 0(sec), hard 0(sec)
          expire use: soft 0(sec), hard 0(sec)
        lifetime current:
          0(bytes), 0(packets)
          add 2012-10-31 13:29:08 use 2012-10-31 13:29:39
    src 0.0.0.0/0 dst 0.0.0.0/0 uid 0
        socket out action allow index 3812 priority 0 ptype main share any flag (0x00000000)
        lifetime config:
          limit: soft 0(bytes), hard 0(bytes)
          limit: soft 0(packets), hard 0(packets)
          expire add: soft 0(sec), hard 0(sec)
          expire use: soft 0(sec), hard 0(sec)
        lifetime current:
          0(bytes), 0(packets)
          add 2012-10-31 13:29:08 use 2012-10-31 13:29:22
    src 0.0.0.0/0 dst 0.0.0.0/0 uid 0
        socket in action allow index 3803 priority 0 ptype main share any flag (0x00000000)
        lifetime config:
          limit: soft 0(bytes), hard 0(bytes)
          limit: soft 0(packets), hard 0(packets)
          expire add: soft 0(sec), hard 0(sec)
          expire use: soft 0(sec), hard 0(sec)
        lifetime current:
          0(bytes), 0(packets)
          add 2012-10-31 13:29:08 use 2012-10-31 13:29:20
    src 0.0.0.0/0 dst 0.0.0.0/0 uid 0
        socket out action allow index 3796 priority 0 ptype main share any flag (0x00000000)
        lifetime config:
          limit: soft 0(bytes), hard 0(bytes)
          limit: soft 0(packets), hard 0(packets)
          expire add: soft 0(sec), hard 0(sec)
          expire use: soft 0(sec), hard 0(sec)
        lifetime current:
          0(bytes), 0(packets)
          add 2012-10-31 13:29:08 use 2012-10-31 13:29:20

    So an xfrm policy isn't being created for the connection, even though there is an SA between the device and strongSwan.
    Executing ip -s xfrm policy on the Android device results in the following output:

    src 0.0.0.0/0 dst 10.0.0.2/32 uid 0
        dir in action allow index 40 priority 2147483648 share any flag (0x00000000)
        lifetime config:
          limit: soft (INF)(bytes), hard (INF)(bytes)
          limit: soft (INF)(packets), hard (INF)(packets)
          expire add: soft 0(sec), hard 0(sec)
          expire use: soft 0(sec), hard 0(sec)
        lifetime current:
          0(bytes), 0(packets)
          add 2012-10-31 13:42:08 use -
        tmpl src 96.244.142.28 dst 25.239.33.30
          proto esp spi 0x00000000(0) reqid 0(0x00000000) mode tunnel
          level required share any
          enc-mask 00000000 auth-mask 00000000 comp-mask 00000000
    src 10.0.0.2/32 dst 0.0.0.0/0 uid 0
        dir out action allow index 33 priority 2147483648 share any flag (0x00000000)
        lifetime config:
          limit: soft (INF)(bytes), hard (INF)(bytes)
          limit: soft (INF)(packets), hard (INF)(packets)
          expire add: soft 0(sec), hard 0(sec)
          expire use: soft 0(sec), hard 0(sec)
        lifetime current:
          0(bytes), 0(packets)
          add 2012-10-31 13:42:08 use -
        tmpl src 25.239.33.30 dst 96.244.142.28
          proto esp spi 0x00000000(0) reqid 0(0x00000000) mode tunnel
          level required share any
          enc-mask 00000000 auth-mask 00000000 comp-mask 00000000
    src 0.0.0.0/0 dst 0.0.0.0/0 uid 0
        dir 4 action allow index 28 priority 0 share any flag (0x00000000)
        lifetime config:
          limit: soft (INF)(bytes), hard (INF)(bytes)
          limit: soft (INF)(packets), hard (INF)(packets)
          expire add: soft 0(sec), hard 0(sec)
          expire use: soft 0(sec), hard 0(sec)
        lifetime current:
          0(bytes), 0(packets)
          add 2012-10-31 13:42:04 use 2012-10-31 13:42:08
    src 0.0.0.0/0 dst 0.0.0.0/0 uid 0
        dir 3 action allow index 19 priority 0 share any flag (0x00000000)
        lifetime config:
          limit: soft (INF)(bytes), hard (INF)(bytes)
          limit: soft (INF)(packets), hard (INF)(packets)
          expire add: soft 0(sec), hard 0(sec)
          expire use: soft 0(sec), hard 0(sec)
        lifetime current:
          0(bytes), 0(packets)
          add 2012-10-31 13:42:04 use 2012-10-31 13:42:08
    src 0.0.0.0/0 dst 0.0.0.0/0 uid 0
        dir 4 action allow index 12 priority 0 share any flag (0x00000000)
        lifetime config:
          limit: soft (INF)(bytes), hard (INF)(bytes)
          limit: soft (INF)(packets), hard (INF)(packets)
          expire add: soft 0(sec), hard 0(sec)
          expire use: soft 0(sec), hard 0(sec)
        lifetime current:
          0(bytes), 0(packets)
          add 2012-10-31 13:42:04 use 2012-10-31 13:42:06
    src 0.0.0.0/0 dst 0.0.0.0/0 uid 0
        dir 3 action allow index 3 priority 0 share any flag (0x00000000)
        lifetime config:
          limit: soft (INF)(bytes), hard (INF)(bytes)
          limit: soft (INF)(packets), hard (INF)(packets)
          expire add: soft 0(sec), hard 0(sec)
          expire use: soft 0(sec), hard 0(sec)
        lifetime current:
          0(bytes), 0(packets)
          add 2012-10-31 13:42:04 use 2012-10-31 13:42:07

    Logs from charon:

    00[DMN] Starting IKE charon daemon (strongSwan 5.0.0, Linux 3.3.4-5.fc17.x86_64, x86_64)
    00[KNL] listening on interfaces:
    00[KNL] em1
    00[KNL] 96.244.142.28
    00[KNL] fe80::224:e8ff:fed2:18b2
    00[CFG] loading ca certificates from '/etc/strongswan/ipsec.d/cacerts'
    00[CFG] loaded ca certificate "C=CH, O=strongSwan, CN=strongSwan CA" from '/etc/strongswan/ipsec.d/cacerts/caCert.pem'
    00[CFG] loading aa certificates from '/etc/strongswan/ipsec.d/aacerts'
    00[CFG] loading ocsp signer certificates from '/etc/strongswan/ipsec.d/ocspcerts'
    00[CFG] loading attribute certificates from '/etc/strongswan/ipsec.d/acerts'
    00[CFG] loading crls from '/etc/strongswan/ipsec.d/crls'
    00[CFG] loading secrets from '/etc/strongswan/ipsec.secrets'
    00[CFG] loaded RSA private key from '/etc/strongswan/ipsec.d/private/clientKey.pem'
    00[CFG] loaded IKE secret for %any
    00[CFG] loaded EAP secret for android
    00[CFG] loaded EAP secret for android
    00[DMN] loaded plugins: charon aes des sha1 sha2 md5 random nonce x509 revocation constraints pubkey pkcs1 pkcs8 pgp dnskey pem openssl fips-prf gmp xcbc cmac hmac attr kernel-netlink resolve socket-default stroke updown xauth-generic
    08[NET] waiting for data on sockets
    16[LIB] created thread 16 [15338]
    16[JOB] started worker thread 16
    11[CFG] received stroke: add connection 'android-hybrid'
    11[CFG] conn android-hybrid
    11[CFG] left=%any
    11[CFG] leftsubnet=(null)
    11[CFG] leftsourceip=(null)
    11[CFG] leftauth=pubkey
    11[CFG] leftauth2=(null)
    11[CFG] leftid=(null)
    11[CFG] leftid2=(null)
    11[CFG] leftrsakey=(null)
    11[CFG] leftcert=serverCert.pem
    11[CFG] leftcert2=(null)
    11[CFG] leftca=(null)
    11[CFG] leftca2=(null)
    11[CFG] leftgroups=(null)
    11[CFG] leftupdown=ipsec _updown iptables
    11[CFG] right=%any
    11[CFG] rightsubnet=(null)
    11[CFG] rightsourceip=96.244.142.3
    11[CFG] rightauth=xauth
    11[CFG] rightauth2=(null)
    11[CFG] rightid=%any
    11[CFG] rightid2=(null)
    11[CFG] rightrsakey=(null)
    11[CFG] rightcert=(null)
    11[CFG] rightcert2=(null)
    11[CFG] rightca=(null)
    11[CFG] rightca2=(null)
    11[CFG] rightgroups=(null)
    11[CFG] rightupdown=(null)
    11[CFG] eap_identity=(null)
    11[CFG] aaa_identity=(null)
    11[CFG] xauth_identity=(null)
    11[CFG] ike=aes256-sha1-modp1024
    11[CFG] esp=aes128-sha1-modp2048,3des-sha1-modp1536
    11[CFG] dpddelay=30
    11[CFG] dpdtimeout=150
    11[CFG] dpdaction=0
    11[CFG] closeaction=0
    11[CFG] mediation=no
    11[CFG] mediated_by=(null)
    11[CFG] me_peerid=(null)
    11[CFG] keyexchange=ikev1
    11[KNL] getting interface name for %any
    11[KNL] %any is not a local address
    11[KNL] getting interface name for %any
    11[KNL] %any is not a local address
    11[CFG] left nor right host is our side, assuming left=local
    11[CFG] loaded certificate "C=CH, O=strongSwan, CN=vpn.strongswan.org" from 'serverCert.pem'
    11[CFG] id '%any' not confirmed by certificate, defaulting to 'C=CH, O=strongSwan, CN=vpn.strongswan.org'
    11[CFG] added configuration 'android-hybrid'
    11[CFG] adding virtual IP address pool 'android-hybrid': 96.244.142.3/32
    13[CFG] received stroke: add connection 'android2'
    13[CFG] conn android2
    13[CFG] left=96.244.142.28
    13[CFG] leftsubnet=0.0.0.0/0
    13[CFG] leftsourceip=(null)
    13[CFG] leftauth=pubkey
    13[CFG] leftauth2=(null)
    13[CFG] leftid=(null)
    13[CFG] leftid2=(null)
    13[CFG] leftrsakey=(null)
    13[CFG] leftcert=serverCert.pem
    13[CFG] leftcert2=(null)
    13[CFG] leftca=(null)
    13[CFG] leftca2=(null)
    13[CFG] leftgroups=(null)
    13[CFG] leftupdown=ipsec _updown iptables
    13[CFG] right=%any
    13[CFG] rightsubnet=10.0.0.0/24
    13[CFG] rightsourceip=10.0.0.2
    13[CFG] rightauth=pubkey
    13[CFG] rightauth2=xauth
    13[CFG] rightid=(null)
    13[CFG] rightid2=(null)
    13[CFG] rightrsakey=(null)
    13[CFG] rightcert=clientCert.pem
    13[CFG] rightcert2=(null)
    13[CFG] rightca=(null)
    13[CFG] rightca2=(null)
    13[CFG] rightgroups=(null)
    13[CFG] rightupdown=(null)
    13[CFG] eap_identity=(null)
    13[CFG] aaa_identity=(null)
    13[CFG] xauth_identity=(null)
    13[CFG] ike=aes256-sha1-modp1024
    13[CFG] esp=aes128-sha1-modp2048,3des-sha1-modp1536
    13[CFG] dpddelay=30
    13[CFG] dpdtimeout=150
    13[CFG] dpdaction=0
    13[CFG] closeaction=0
    13[CFG] mediation=no
    13[CFG] mediated_by=(null)
    13[CFG] me_peerid=(null)
    13[CFG] keyexchange=ikev0
    13[KNL] getting interface name for %any
    13[KNL] %any is not a local address
    13[KNL] getting interface name for 96.244.142.28
    13[KNL] 96.244.142.28 is on interface em1
    13[CFG] loaded certificate "C=CH, O=strongSwan, CN=vpn.strongswan.org" from 'serverCert.pem'
    13[CFG] id '96.244.142.28' not confirmed by certificate, defaulting to 'C=CH, O=strongSwan, CN=vpn.strongswan.org'
    13[CFG] loaded certificate "C=CH, O=strongSwan, CN=client" from 'clientCert.pem'
    13[CFG] id '%any' not confirmed by certificate, defaulting to 'C=CH, O=strongSwan, CN=client'
    13[CFG] added configuration 'android2'
    13[CFG] adding virtual IP address pool 'android2': 10.0.0.2/32
    08[NET] received packet: from 208.54.35.241[32235] to 96.244.142.28[500]
    15[CFG] looking for an ike config for 96.244.142.28...208.54.35.241
    15[CFG] candidate: %any...%any, prio 2
    15[CFG] candidate: 96.244.142.28...%any, prio 5
    15[CFG] found matching ike config: 96.244.142.28...%any with prio 5
    01[JOB] next event in 29s 999ms, waiting
    15[IKE] received NAT-T (RFC 3947) vendor ID
    15[IKE] received draft-ietf-ipsec-nat-t-ike-02 vendor ID
    15[IKE] received draft-ietf-ipsec-nat-t-ike-02\n vendor ID
    15[IKE] received draft-ietf-ipsec-nat-t-ike-00 vendor ID
    15[IKE] received XAuth vendor ID
    15[IKE] received Cisco Unity vendor ID
    15[IKE] received DPD vendor ID
    15[IKE] 208.54.35.241 is initiating a Main Mode IKE_SA
    15[IKE] IKE_SA (unnamed)[1] state change: CREATED => CONNECTING
    15[CFG] selecting proposal:
    15[CFG] proposal matches
    15[CFG] received proposals: IKE:AES_CBC_256/HMAC_SHA1_96/PRF_HMAC_SHA1/MODP_1024, IKE:AES_CBC_256/HMAC_MD5_96/PRF_HMAC_MD5/MODP_1024, IKE:AES_CBC_128/HMAC_SHA1_96/PRF_HMAC_SHA1/MODP_1024, IKE:AES_CBC_128/HMAC_MD5_96/PRF_HMAC_MD5/MODP_1024, IKE:3DES_CBC/HMAC_SHA1_96/PRF_HMAC_SHA1/MODP_1024, IKE:3DES_CBC/HMAC_MD5_96/PRF_HMAC_MD5/MODP_1024, IKE:DES_CBC/HMAC_SHA1_96/PRF_HMAC_SHA1/MODP_1024, IKE:DES_CBC/HMAC_MD5_96/PRF_HMAC_MD5/MODP_1024
    15[CFG] configured proposals: IKE:AES_CBC_256/HMAC_SHA1_96/PRF_HMAC_SHA1/MODP_1024, IKE:AES_CBC_128/AES_CBC_192/AES_CBC_256/3DES_CBC/CAMELLIA_CBC_128/CAMELLIA_CBC_192/CAMELLIA_CBC_256/HMAC_MD5_96/HMAC_SHA1_96/HMAC_SHA2_256_128/HMAC_SHA2_384_192/HMAC_SHA2_512_256/AES_XCBC_96/AES_CMAC_96/PRF_HMAC_MD5/PRF_HMAC_SHA1/PRF_HMAC_SHA2_256/PRF_HMAC_SHA2_384/PRF_HMAC_SHA2_512/PRF_AES128_XCBC/PRF_AES128_CMAC/MODP_2048/MODP_2048_224/MODP_2048_256/MODP_1536/MODP_4096/MODP_8192/MODP_1024/MODP_1024_160
    15[CFG] selected proposal: IKE:AES_CBC_256/HMAC_SHA1_96/PRF_HMAC_SHA1/MODP_1024
    15[NET] sending packet: from 96.244.142.28[500] to 208.54.35.241[32235]
    04[NET] sending packet: from 96.244.142.28[500] to 208.54.35.241[32235]
    15[MGR] checkin IKE_SA (unnamed)[1]
    15[MGR] check-in of IKE_SA successful.
    08[NET] received packet: from 208.54.35.241[32235] to 96.244.142.28[500]
    08[NET] waiting for data on sockets
    07[MGR] checkout IKE_SA by message
    07[MGR] IKE_SA (unnamed)[1] successfully checked out
    07[NET] received packet: from 208.54.35.241[32235] to 96.244.142.28[500]
    07[LIB] size of DH secret exponent: 1023 bits
    07[IKE] remote host is behind NAT
    07[IKE] sending cert request for "C=CH, O=strongSwan, CN=strongSwan CA"
    07[ENC] generating NAT_D_V1 payload finished
    07[NET] sending packet: from 96.244.142.28[500] to 208.54.35.241[32235]
    07[MGR] checkin IKE_SA (unnamed)[1]
    07[MGR] check-in of IKE_SA successful.
04[NET] sending packet: from 96.244.142.28[500] to 208.54.35.241[32235] 08[NET] received packet: from 208.54.35.241[35595] to 96.244.142.28[4500] 10[IKE] ignoring certificate request without data 10[IKE] received end entity cert "C=CH, O=strongSwan, CN=client" 10[CFG] looking for XAuthInitRSA peer configs matching 96.244.142.28...208.54.35.241[C=CH, O=strongSwan, CN=client] 10[CFG] candidate "android-hybrid", match: 1/1/2/2 (me/other/ike/version) 10[CFG] candidate "android2", match: 1/20/5/1 (me/other/ike/version) 10[CFG] selected peer config "android2" 10[CFG] certificate "C=CH, O=strongSwan, CN=client" key: 2048 bit RSA 10[CFG] using trusted ca certificate "C=CH, O=strongSwan, CN=strongSwan CA" 10[CFG] checking certificate status of "C=CH, O=strongSwan, CN=client" 10[CFG] ocsp check skipped, no ocsp found 10[CFG] certificate status is not available 10[CFG] certificate "C=CH, O=strongSwan, CN=strongSwan CA" key: 2048 bit RSA 10[CFG] reached self-signed root ca with a path length of 0 10[CFG] using trusted certificate "C=CH, O=strongSwan, CN=client" 10[IKE] authentication of 'C=CH, O=strongSwan, CN=client' with RSA successful 10[ENC] added payload of type ID_V1 to message 10[ENC] added payload of type SIGNATURE_V1 to message 10[IKE] authentication of 'C=CH, O=strongSwan, CN=vpn.strongswan.org' (myself) successful 10[IKE] queueing XAUTH task 10[IKE] sending end entity cert "C=CH, O=strongSwan, CN=vpn.strongswan.org" 10[NET] sending packet: from 96.244.142.28[4500] to 208.54.35.241[35595] 04[NET] sending packet: from 96.244.142.28[4500] to 208.54.35.241[35595] 10[IKE] activating new tasks 10[IKE] activating XAUTH task 10[NET] sending packet: from 96.244.142.28[4500] to 208.54.35.241[35595] 04[NET] sending packet: from 96.244.142.28[4500] to 208.54.35.241[35595] 01[JOB] next event in 3s 999ms, waiting 10[MGR] checkin IKE_SA android2[1] 10[MGR] check-in of IKE_SA successful. 08[NET] received packet: from 208.54.35.241[35595] to 96.244.142.28[4500] 08[NET] waiting for data on sockets 12[MGR] checkout IKE_SA by message 12[MGR] IKE_SA android2[1] successfully checked out 12[NET] received packet: from 208.54.35.241[35595] to 96.244.142.28[4500] 12[MGR] checkin IKE_SA android2[1] 12[MGR] check-in of IKE_SA successful. 08[NET] received packet: from 208.54.35.241[35595] to 96.244.142.28[4500] 16[MGR] checkout IKE_SA by message 16[MGR] IKE_SA android2[1] successfully checked out 16[NET] received packet: from 208.54.35.241[35595] to 96.244.142.28[4500] 08[NET] waiting for data on sockets 16[IKE] XAuth authentication of 'android' successful 16[IKE] reinitiating already active tasks 16[IKE] XAUTH task 16[NET] sending packet: from 96.244.142.28[4500] to 208.54.35.241[35595] 04[NET] sending packet: from 96.244.142.28[4500] to 208.54.35.241[35595] 16[MGR] checkin IKE_SA android2[1] 01[JOB] next event in 3s 907ms, waiting 16[MGR] check-in of IKE_SA successful. 
08[NET] received packet: from 208.54.35.241[35595] to 96.244.142.28[4500] 09[MGR] checkout IKE_SA by message 09[MGR] IKE_SA android2[1] successfully checked out 09[NET] received packet: from 208.54.35.241[35595] to 96.244.142.28[4500] .8rS 09[IKE] IKE_SA android2[1] established between 96.244.142.28[C=CH, O=strongSwan, CN=vpn.strongswan.org]...208.54.35.241[C=CH, O=strongSwan, CN=client] 09[IKE] IKE_SA android2[1] state change: CONNECTING => ESTABLISHED 09[IKE] scheduling reauthentication in 3409s 09[IKE] maximum IKE_SA lifetime 3589s 09[IKE] activating new tasks 09[IKE] nothing to initiate 09[MGR] checkin IKE_SA android2[1] 09[MGR] check-in of IKE_SA successful. 09[MGR] checkout IKE_SA 09[MGR] IKE_SA android2[1] successfully checked out 09[MGR] checkin IKE_SA android2[1] 09[MGR] check-in of IKE_SA successful. 01[JOB] next event in 3s 854ms, waiting 08[NET] waiting for data on sockets 08[NET] received packet: from 208.54.35.241[35595] to 96.244.142.28[4500] 14[MGR] checkout IKE_SA by message 14[MGR] IKE_SA android2[1] successfully checked out 14[NET] received packet: from 208.54.35.241[35595] to 96.244.142.28[4500] 14[IKE] processing INTERNAL_IP4_ADDRESS attribute 14[IKE] processing INTERNAL_IP4_NETMASK attribute 14[IKE] processing INTERNAL_IP4_DNS attribute 14[IKE] processing INTERNAL_IP4_NBNS attribute 14[IKE] processing UNITY_BANNER attribute 14[IKE] processing UNITY_DEF_DOMAIN attribute 14[IKE] processing UNITY_SPLITDNS_NAME attribute 14[IKE] processing UNITY_SPLIT_INCLUDE attribute 14[IKE] processing UNITY_LOCAL_LAN attribute 14[IKE] processing APPLICATION_VERSION attribute 14[IKE] peer requested virtual IP %any 14[CFG] assigning new lease to 'android' 14[IKE] assigning virtual IP 10.0.0.2 to peer 'android' 14[NET] sending packet: from 96.244.142.28[4500] to 208.54.35.241[35595] 14[MGR] checkin IKE_SA android2[1] 14[MGR] check-in of IKE_SA successful. 04[NET] sending packet: from 96.244.142.28[4500] to 208.54.35.241[35595] 08[NET] waiting for data on sockets 01[JOB] got event, queuing job for execution 01[JOB] next event in 91ms, waiting 13[MGR] checkout IKE_SA 13[MGR] IKE_SA android2[1] successfully checked out 13[MGR] checkin IKE_SA android2[1] 13[MGR] check-in of IKE_SA successful. 01[JOB] got event, queuing job for execution 01[JOB] next event in 24s 136ms, waiting 15[MGR] checkout IKE_SA 15[MGR] IKE_SA android2[1] successfully checked out 15[MGR] checkin IKE_SA android2[1] 15[MGR] check-in of IKE_SA successful.

    Read the article

  • WPF: Timers

    - by Ilya Verbitskiy
Sooner or later your WPF application will need to execute something periodically, and today I would like to discuss how to do that. There are two possible solutions. You can use the classic System.Threading.Timer class or the System.Windows.Threading.DispatcherTimer class, which is part of WPF. I have created an application to show you how to use both APIs.

Let's take a look at how you can implement a timer using the System.Threading.Timer class. First of all, it has to be initialized.

```csharp
private Timer timer;

public MainWindow()
{
    // Form initialization code

    timer = new Timer(OnTimer, null, Timeout.InfiniteTimeSpan, Timeout.InfiniteTimeSpan);
}
```

Timer's constructor accepts four parameters. The first one is the callback method which is executed when the timer ticks. I will show it to you soon. The second parameter is a state object which is passed to the callback; it is null because there is nothing to pass this time. The third parameter is the amount of time to delay before the callback is invoked for the first time. I use the System.Threading.Timeout helper class to represent an infinite timeout, which simply means the timer is not going to start at the moment. The fourth parameter represents the time interval between invocations of the callback; an infinite timespan here means the callback will not fire periodically.

Well, the timer has been created. Let's take a look at how you can start it.

```csharp
private void StartTimer(object sender, RoutedEventArgs e)
{
    timer.Change(TimeSpan.Zero, new TimeSpan(0, 0, 1));

    // Disable the start buttons and enable the reset button.
}
```

The timer is started by calling its Change method. It accepts two arguments: the amount of time to delay before invoking the callback method, and the time interval between invocations of the callback. TimeSpan.Zero means we start the timer immediately, and TimeSpan(0, 0, 1) tells the timer to tick every second.

There is one method that hasn't been shown yet: the callback method OnTimer, which does a simple task. It shows the current time in the center of the screen. Unfortunately you cannot simply write something like this:

```csharp
clock.Content = DateTime.Now.ToString("hh:mm:ss");
```

The reason is that Timer runs the callback method on a separate thread, and it is not possible to access GUI controls from a non-GUI thread. You can avoid the problem using the System.Windows.Threading.Dispatcher class.

```csharp
private void OnTimer(object state)
{
    Dispatcher.Invoke(() => ShowTime());
}

private void ShowTime()
{
    clock.Content = DateTime.Now.ToString("hh:mm:ss");
}
```

You can build a similar application using the System.Windows.Threading.DispatcherTimer class. The class represents a timer which is integrated into the Dispatcher queue. This means that your callback method is executed on the GUI thread, and you can write code which updates your GUI components directly.

```csharp
private DispatcherTimer dispatcherTimer;

public MainWindow()
{
    // Form initialization code

    dispatcherTimer = new DispatcherTimer { Interval = new TimeSpan(0, 0, 1) };
    dispatcherTimer.Tick += OnDispatcherTimer;
}
```

The dispatcher timer has a nicer and cleaner API. All you need to do is specify the tick interval and a Tick event handler. Then you just call the Start method to start the timer.

```csharp
private void StartDispatcher(object sender, RoutedEventArgs e)
{
    dispatcherTimer.Start();

    // Disable the start buttons and enable the reset button.
}
```

And, since the Tick event handler is executed on the GUI thread, the code which sets the actual time is straightforward.

```csharp
private void OnDispatcherTimer(object sender, EventArgs e)
{
    ShowTime();
}
```

We're almost done. Let's take a look at how to stop the timers. It is easy with the DispatcherTimer:

```csharp
dispatcherTimer.Stop();
```

And slightly more complicated with the Timer; you use the Change method again.

```csharp
timer.Change(Timeout.InfiniteTimeSpan, Timeout.InfiniteTimeSpan);
```

What is the best way to add a timer to an application? The DispatcherTimer has a simple interface, but its advantages are disadvantages at the same time. You should not use it if your Tick event handler executes time-consuming operations, because it freezes your window while it is executing the event handler method. You should think about using System.Threading.Timer in that case. The code is available on GitHub.
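For reference, a minimal window for the handlers above might look like the sketch below. The control names (clock, the two start buttons) match the code in this post, but the exact markup is an assumption, since the original layout lives only in the linked GitHub repository.

```xml
<!-- Minimal sketch of the window markup assumed by the handlers above.
     The class name and layout are assumptions; the original sample is on GitHub. -->
<Window x:Class="TimersDemo.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="Timers" Width="300" Height="200">
    <StackPanel VerticalAlignment="Center">
        <!-- "clock" is the label updated by ShowTime() -->
        <Label x:Name="clock" HorizontalAlignment="Center" FontSize="32" />
        <Button Content="Start Timer" Click="StartTimer" />
        <Button Content="Start DispatcherTimer" Click="StartDispatcher" />
    </StackPanel>
</Window>
```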

    Read the article

  • Creating shapes on the fly

    - by Bertrand Le Roy
Most Orchard shapes get created from part drivers, but they are a lot more versatile than that. They can actually be created from pretty much anywhere, including from templates. One example can be found in the Layout.cshtml file of the TheThemeMachine theme:

```csharp
WorkContext.Layout.Footer.Add(New.BadgeOfHonor(), "5");
```

What this is really doing is creating a new shape called BadgeOfHonor and injecting it into the Footer global zone (which has not yet been defined at this point, which in itself is quite awesome) with an ordering rank of "5". We can actually come up with something simpler, if we want to render the shape inline instead of sending it into a zone:

```csharp
@Display(New.BadgeOfHonor())
```

Now let's try something a little more elaborate and create a new shape for displaying a date and time:

```csharp
@Display(New.DateTime(date: DateTime.Now, format: "d/M/yyyy"))
```

For the moment, this throws a "Shape type DateTime not found" exception because the system has no clue how to render a shape called "DateTime" yet. The BadgeOfHonor shape above was rendering something because there is a template for it in the theme: Themes/TheThemeMachine/Views/BadgeOfHonor.cshtml. We need to provide a template for our new shape to get rendered. Let's add a DateTime.cshtml file into our theme's Views folder in order to make the exception go away:

```
Hi, I'm a date time shape.
```

Now we're just missing one thing. Instead of displaying some static text, which is not very interesting, we can display the actual time that got passed into the shape's dynamic constructor. Those parameters get added to the template's Model, so they are easy to retrieve:

```csharp
@(((DateTime)Model.date).ToString(Model.format))
```

Now that may remind you a little of WebForms' user controls. That's a fair comparison, except that these shapes are much more flexible (you can add properties on the fly as necessary), and the actual rendering is decoupled from the "control". For example, any theme can override the template for a shape, and you can use alternates, wrappers, etc. Most importantly, there is no lifecycle and protocol abstraction like there was in WebForms. I think this is a real improvement over previous attempts at similar things.
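As a small extension of the template above, here is a sketch of a DateTime.cshtml that tolerates a shape created without a format parameter. The fallback format is an assumption for illustration, not part of the original post:

```csharp
@{
    // Fall back to a default format when the shape was created without one.
    // The default value "d/M/yyyy" is an assumption, not from the original post.
    var format = (string)Model.format ?? "d/M/yyyy";
}
@(((DateTime)Model.date).ToString(format))
```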

    Read the article

  • Deploying Data Mining Models using Model Export and Import, Part 2

    - by [email protected]
In my last post, Deploying Data Mining Models using Model Export and Import, we explored using DBMS_DATA_MINING.EXPORT_MODEL and DBMS_DATA_MINING.IMPORT_MODEL to enable moving a model from one system to another. In this post, we'll look at two distributed scenarios that make use of this capability, plus a tip for easily moving models from one machine to another using only Oracle Database, not an external file transport mechanism such as FTP.

In the first scenario, consider a company with geographically distributed business units, each collecting and managing their data locally for the products they sell. Each business unit has in-house data analysts that build models to predict which products to recommend to customers in their space. A central telemarketing business unit also uses these models to score new customers locally using data collected over the phone. Since the models recommend different products, each customer is scored using each model. This is depicted in Figure 1.

Figure 1: Target instance importing multiple remote models for local scoring

In the second scenario, consider multiple hospitals that collect data on patients with certain types of cancer. The data collection is standardized, so each hospital collects the same patient demographic and other health / tumor data, along with the clinical diagnosis. Instead of each hospital building its own models, the data is pooled at a central data analysis lab where a predictive model is built. Once completed, the model is distributed to hospitals, clinics, and doctor offices, which can score patient data locally.

Figure 2: Multiple target instances importing the same model from a source instance for local scoring

Since this blog focuses on model export and import, we'll only discuss what is necessary to move a model from one database to another. Here, we use the package DBMS_FILE_TRANSFER, which can move files between Oracle databases. The script is fairly straightforward, but requires setting up a database link and directory objects. We saw how to create directory objects in the previous post. To create a database link to the source database from the target, we can use, for example:

```sql
create database link SOURCE1_LINK connect to <schema> identified by <password> using 'SOURCE1';
```

Note that 'SOURCE1' refers to the service name of the remote database entry in your tnsnames.ora file. From SQL*Plus, first connect to the remote database and export the model. Note that the model_file_name does not include the .dmp extension; this is because export_model appends "01" to this name. Next, connect to the local database, invoke DBMS_FILE_TRANSFER.GET_FILE, and import the model. Note that "01" is eliminated in the target system file name.

```sql
connect <source_schema>/<password>@SOURCE1_LINK;

BEGIN
  DBMS_DATA_MINING.EXPORT_MODEL ('EXPORT_FILE_NAME' || '.dmp',
                                 'MY_SOURCE_DIR_OBJECT',
                                 'name =''MY_MINING_MODEL''');
END;

connect <target_schema>/<password>;

BEGIN
  DBMS_FILE_TRANSFER.GET_FILE ('MY_SOURCE_DIR_OBJECT',
                               'EXPORT_FILE_NAME' || '01.dmp',
                               'SOURCE1_LINK',
                               'MY_TARGET_DIR_OBJECT',
                               'EXPORT_FILE_NAME' || '.dmp' );

  DBMS_DATA_MINING.IMPORT_MODEL ('EXPORT_FILE_NAME' || '.dmp',
                                 'MY_TARGET_DIR_OBJECT');
END;
```

To clean up afterward, you may want to drop the exported .dmp file at the source and the transferred file at the target. For example:

```sql
utl_file.fremove('&directory_name', '&model_file_name' || '.dmp');
```
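The script above assumes the directory objects already exist; creating them was covered in the previous post. As a reminder, a sketch of what that looks like follows, where the filesystem path and the grantee schema are illustrative assumptions:

```sql
-- Illustrative sketch: create a directory object on each instance and
-- grant the schema that runs the transfer access to it.
-- The path and grantee are assumptions for this example.
CREATE OR REPLACE DIRECTORY MY_TARGET_DIR_OBJECT AS '/u01/app/oracle/dm_export';
GRANT READ, WRITE ON DIRECTORY MY_TARGET_DIR_OBJECT TO <target_schema>;
```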

    Read the article

  • Oracle Enterprise Manager Cloud Control 12c Release 2 (12.1.0.2) Now Available!

    - by Javier Puerta
Oracle Enterprise Manager Cloud Control 12c Release 2 (12.1.0.2) is now available on OTN on ALL platforms. This is the first major release since the launch of Enterprise Manager 12c in October of 2011, and the first ever Enterprise Manager release available on all platforms simultaneously. This is primarily a stability release, which addresses many of the issues and much of the feedback reported by early adopters. In addition, this release contains many new features and enhancements in areas across the board.

New Capabilities and Features

Enhanced management capabilities for enterprise private clouds:

- New capabilities to allow customers to build and manage a Java Platform-as-a-Service (PaaS) cloud based on Oracle WebLogic Server, including guided setup of the PaaS cloud, self-service provisioning, automatic scale out, and metering and chargeback.
- Enhanced lifecycle management capabilities for Oracle WebLogic Server environments: combining in-context multiple domain, patching and configuration file synchronizations.
- Integrated hardware-software management for Oracle Exalogic Elastic Cloud through features such as rack schematics visualization and integrated monitoring of all hardware and software components.

The latest management capabilities for business-critical applications include:

- Business Application Management: a new Business Application (BA) target type and dashboard with flexible definitions provides a logical view of an application's business transactions, end-user experiences and the cloud infrastructure the monitored application is running on.
- Enhanced User Experience Reporting: Oracle Real User Experience Insight has been enhanced to provide reporting capabilities on client-side issues for applications running in the cloud, and has been more tightly coupled with Oracle Business Transaction Management to help ensure that real-time user experience and transaction tracing data is provided to users in context.

Several key improvements address ease of administration, reporting and extensibility for massively scalable cloud environments, including dynamic groups, self-updateable monitoring templates, bulk operations against many events, etc.

New and Revised Plug-Ins

Several plug-ins have been updated as a part of this release, resulting in either new versions or revisions. Revised plug-ins contain only bug fixes, while new plug-ins incorporate both bug fixes and new functionality.

- Enterprise Manager for Oracle Database: 12.1.0.2 (revision)
- Enterprise Manager for Oracle Fusion Middleware: 12.1.0.3 (new)
- Enterprise Manager for Chargeback and Capacity Planning: 12.1.0.3 (new)
- Enterprise Manager for Oracle Fusion Applications: 12.1.0.3 (new)
- Enterprise Manager for Oracle Virtualization: 12.1.0.3 (new)
- Enterprise Manager for Oracle Exadata: 12.1.0.3 (new)
- Enterprise Manager for Oracle Cloud: 12.1.0.4 (new)

Installation and Upgrade

- All major platforms have been released simultaneously (Linux 32 / 64 bit, Solaris (SPARC), Solaris x86-64, IBM AIX 64-bit, and Windows x86-64 (64-bit)).
- Enterprise Manager 12.1.0.2 is a complete release that includes both the EM OMS and Agent versions of 12.1.0.2.
- Installation options available with EM 12.1.0.2: users can do a fresh install or an upgrade from versions EM 10.2.0.5, 11.1, or 12.1.0.1 (Bundle Patch 1 not mandatory).
- Upgrading to EM 12.1.0.2 from EM 12.1.0.1 is not a patch application (similar to Bundle Patch 1) but is achieved through a 1-system upgrade.

Documentation

The Oracle Enterprise Manager Cloud Control Introduction Document provides a broad overview of capabilities and highlights "What's New" in EM 12.1.0.2. All updated Oracle Enterprise Manager documentation can be found on OTN.

Customer Webcast - EM 12c Installation and Upgrade: this webcast is for customers who are interested in learning how to successfully deploy or upgrade to EM 12.1.0.2. Customer Webcast - Installation and Upgrade - September 21 (registration and info on OTN starting September 12).

Enterprise Manager 12c R2 Resources:

- OTN Download Page
- Upgrade Guide

    Read the article

  • Python Coding standards vs. productivity

    - by Shroatmeister
I work for a large humanitarian organisation, on a project building software that could help save lives in emergencies by speeding up the distribution of food. Many NGOs desperately need our software and we are weeks behind schedule.

One thing that worries me in this project is what I think is an excessive focus on coding standards. We write in python/django and use a version of PEP0008, with various modifications, e.g. line lengths can go up to 160 chars and all lines should go that long if possible, no blank lines between imports, line wrapping rules that apply only to certain kinds of classes, lots of templates that we must use even if they aren't the best way to solve a problem, etc.

One core dev spent a week rewriting a major part of the system to meet the then new coding standards, throwing away several suites of tests in the process, as the rewrite meant they were 'invalid'. We spent two weeks rewriting all the functionality that was lost, and fixing bugs. He is the lead dev and his word carries weight, so he has convinced the project manager that these standards are necessary. The junior devs do as they are told. I sense that the project manager has a strong feeling of cognitive dissonance about all this but nevertheless agrees with it vehemently, as he feels unsure what else to do.

Today I got in serious trouble because I had forgotten to put some spaces after commas in a keyword argument. I was literally shouted at by two other devs and the project manager during a Skype call. Personally I think coding standards are important, but I also think that we are wasting a lot of time obsessing over them, and when I verbalized this it provoked rage. I'm seen as a troublemaker in the team, a team that is looking for scapegoats for its failings.

Since the introduction of the coding standards, the team's productivity has measurably plummeted; however, this only reinforces the obsession, i.e. the lead dev simply blames our non-adherence to standards for the lack of progress. He believes that we can't read each other's code if we don't adhere to the conventions. This is starting to turn sticky.

Now I am trying to modify various scripts, autopep8, pep8ify and PythonTidy, to try to match the conventions. We also run pep8 against the source code, but there are so many implicit amendments to our standard that it's hard to track them all. The lead dev simply picks faults that the pep8 script doesn't pick up and shouts at us in the next stand-up meeting. Every week there are new additions to the coding standards that force us to rewrite existing, working, tested code. Thank heavens we still have tests (I reverted some commits and fixed a bunch of the ones he removed). All the while there is increasing pressure to meet the deadline.

I believe a fundamental issue is that the lead dev and another core dev refuse to trust other developers to do their job. But how to deal with that? We can't do our job because we are too busy rewriting everything.

I've never encountered this dynamic in a software engineering team. Am I wrong to question their adherence to coding standards? Has anyone else experienced a similar situation, and how have they dealt with it successfully? (I'm not looking for a discussion, just actual solutions people have found.)

    Read the article

  • Oracle OpenWorld 2012: The Best Just Gets Better

    - by kellsey.ruppel
For almost 30 years, Oracle OpenWorld has been the world's premier learning event for Oracle customers, developers, and partners. With more than 2,000 sessions providing best practices, demos, tips and tricks, and product insight from Oracle, customers, partners, and industry experts, Oracle OpenWorld provides more educational and networking opportunities than any other event in the world.

2011 Facts

- Attendees from 117 Countries
- Used Filtered Tap Water to Eliminate 22 Tons of Plastic Bottles
- Diverted Enough Trash to Fill 37 Dump Trucks
- 45,000+ Total Registered Attendees

Oracle OpenWorld 2012: The Best Just Gets Better. What's New? What's Different?

This year Oracle OpenWorld will include the Executive Edge @ OpenWorld (replacing Leaders Circle), the Customer Experience Summit @ OpenWorld, JavaOne, MySQL Connect, and the expanded Oracle PartnerNetwork Exchange @ OpenWorld. More than 50,000 customers and partners will attend OpenWorld to see Oracle's newest hardware and software products at work, and learn more about our server and storage, database, middleware, industry, and applications solutions.

New This Year: The Executive Edge @ Oracle OpenWorld (Oct 1 - 2)

New at Oracle OpenWorld this year, the Executive Edge @ OpenWorld (replacing Leaders Circle) will bring together customer, partner and Oracle executives for two days of keynote presentations, summits targeted to customer industries and organizational roles, roundtable discussions, and great new networking opportunities.

The Customer Experience Revolution Is Here! Customer Experience Summit @ Oracle OpenWorld (Oct 3 - 5)

This dynamic new program offers more than 60 keynotes, roundtables and networking sessions exploring trends, innovations and best practices to help companies succeed with a customer experience-driven business strategy.

All Things Java: JavaOne (Sep 30 - Oct 4)

JavaOne is the world's most important event for the Java developer community. Technical sessions cover topics that span the breadth of the Java universe, with keynotes from the foremost Java visionaries and expert-led hands-on learning opportunities.

Are you innovating with Oracle Fusion Middleware?

If you are, then you need to know that the Call for Nominations for the 2012 Oracle Fusion Middleware Innovation Awards is open now through July 17, 2012. Jointly sponsored by Oracle, AUSOUG, IOUG, OAUG, ODTUG, QUEST, and UKOUG, the Oracle Fusion Middleware Innovation Awards honor organizations creatively using Oracle Fusion Middleware to deliver unique value to their enterprise. Winning customers and partners will be hosted at Oracle OpenWorld 2012, where they can connect with Oracle executives, network with peers, and be featured in an upcoming edition of Oracle Magazine. Be sure to submit your WebCenter use case today!

Oracle Music Festival

This year, the first-ever Oracle Music Festival will debut, running from September 30 to October 4. In the tradition of great live music events like Coachella and SXSW, the streets of San Francisco, from 7:00 p.m. to 1:00 a.m. for five nights-into-days, will vibrate with the music of some of today's hottest name acts, emerging and local bands, and scratching DJs. Outdoor venues and clubs near Moscone Center and the Zone (including 111 Minna, DNA, Mezzanine, Roe, Ruby Skye, Slim's, the Taylor Street Café, Temple, Union Square, and Yerba Buena Gardens) will showcase acts that range from reggae to rock, punk to ska, R&B to country, indie to honky-tonk. After a full day of sessions and networking, you'll be primed for some late-night relaxation and rocking out at one or more of these sets. Please note that with awesome acts, thousands of music devotees, and a limited number of venues each night, access to Festival events is on a first-come, first-served basis. Join us at the Oracle Music Festival: it's going to be epic!

Save $500 on Registration with Early Bird Pricing

Early Bird pricing ends July 13! Save up to $500 on registration fees by registering by Friday.

Will you be attending Oracle OpenWorld 2012? We hope to see you there! Be sure to follow @oraclewebcenter on Twitter for more information and use hashtags #webcenter and #oow!

    Read the article

  • Archbeat Link-O-Rama Top 10 Facebook Faves for October 20-26, 2013

    - by OTN ArchBeat
    Here's this week's list of the Top 10 items shared on the OTN ArchBeat Facebook Page from October 27 - November 2, 2013. Visualizing and Process (Twitter) Events in Real Time with Oracle Coherence | Noah Arliss This OTN Virtual Developer Day session explores in detail how to create a dynamic HTML5 Web application that interacts with Oracle Coherence as it’s processing events in real time, using the Avatar project and Oracle Coherence’s Live Events feature. Part of OTN Virtual Developer Day: Harnessing the Power of Oracle WebLogic and Oracle Coherence, November 5, 2013. 9am to 1pm PT / 12pm to 4pm ET / 1pm to 5pm BRT. Register now! HTML5 Application Development with Oracle WebLogic Server | Doug Clarke This free OTN Virtual Developer Day session covers the support for WebSockets, RESTful data services, and JSON infrastructure available in Oracle WebLogic Server. Part of OTN Virtual Developer Day: Harnessing the Power of Oracle WebLogic and Oracle Coherence, November 5, 2013. 9am to 1pm PT / 12pm to 4pm ET / 1pm to 5pm BRT. Register now! Video: ADF BC and REST services | Frederic Desbiens Spend a few minutes with Oracle ADF principal product manager Frederic Desbiens and learn how to publish ADF Business Components as RESTful web services. One Client Two Clusters | David Felcey "Sometimes its desirable to have a client connect to multiple clusters, either because the data is dispersed or for instance the clusters are in different locations for high availability," says David Felcey. David shows you how in this post, which includes a simple example. Exceptions Handling and Notifications in ODI | Christophe Dupupet Oracle Fusion Middleware A-Team director Christophe Dupupet reviews the techniques that are available in Oracle Data Integrator to guarantee that the appropriate individuals are notified in the event that ODI processes are impacted by network outages or other mishaps. Securing WebSocket applications on Glassfish | Pavel Bucek WebSocket is a key capability standardized into Java EE 7. Many developers wonder how WebSockets can be secured. One very nice characteristic for WebSocket is that it in fact completely piggybacks on HTTP. In this post Pavel Bucek demonstrates how to secure WebSocket endpoints in GlassFish using TLS/SSL. Oracle Coherence, Split-Brain and Recovery Protocols In Detail | Ricardo Ferreira Ricardo Ferreira's article "provides a high level conceptual overview of Split-Brain scenarios in distributed systems," focusing on a "specific example of cluster communication failure and recovery in Oracle Coherence." Non-programmatic Authentication Using Login Form in JSF (For WebCenter & ADF) | JayJay Zheng Oracle ACE JayJay Zheng shares an approach that "avoids the programmatic authentication and works great for having a custom login page developed in WebCenter Portal integrated with OAM authentication." The latest article in the Industrial SOA series looks at mobile computing and how companies are developing SOA to go. http://pub.vitrue.com/PUxT Tech Article: SOA in Real Life: Mobile Solutions The ACE Director Thing | Dr. Frank Munz Frank Munz finally gets around to blogging about achieving Oracle ACE Director status and shares some interesting insight into what will change—and what won't—thanks to that new status. A good, short read for those interested in learning more about the Oracle ACE program. Thought for the Day "Even if you're on the right track, you'll get run over if you just sit there." 
— Will Rogers, American humorist (November 4, 1879 – August 15, 1935) Source: brainyquote.com

    Read the article

  • Inspiring a co-worker to adopt better coding practices?

    - by Aaronaught
In the Handling my antiquated coworker question, various people discussed strategies for dealing with coworkers who are unwilling to integrate their workflow with the team's. I'd like, if possible, to learn some strategies for "teaching" a coworker who is merely ignorant of modern techniques and tools, and possibly a little apathetic.

I've started working with a programmer who until recently has been working in relative isolation, in a different part of the company. He has extensive domain knowledge and, most importantly, he has demonstrated good problem-solving skills, something which many candidates seem to lack. However, the actual (C#) code I've seen is a throwback to the VB6 days: procedural structure, Hungarian notation, global variables (abuse of static), no interfaces, no tests, non-use of generics, throwing System.Exception... you get the idea.

This programmer is a fair bit older than I am and, by first impressions at least, doesn't actively seek positive change. I'm not going to say resistant to change, because I think that is largely an issue of how the topic gets broached, and I want to be prepared. Programmers tend to be stubborn people, and going in with guns blazing and instituting rip-it-to-shreds code reviews and strictly-enforced policies is very likely not going to produce the end result that I want. If this were a new hire, a junior programmer, I wouldn't think twice about taking a "mentor" stance, but I'm extremely wary of treating an experienced employee as a clueless newbie (which he's not; he just hasn't kept pace with certain advancements in the field).

How might I go about raising this developer's code quality standard the Dale Carnegie way, through gentle persuasion and non-material incentives? What would be the best strategy for effecting subtle, gradual changes without creating an adversarial situation? Have other people, especially lead developers, been in this type of situation before? Which strategies were successful at stimulating interest and creating a positive group dynamic? Which strategies weren't successful and would be better to avoid?

Clarifications: I really feel that several people are answering based on personal feelings without actually reading all of the details of the question. Please note the following, which should have been implied but I am now making explicit:

- This coworker is only my "senior" by virtue of age. I never said that his title, sphere of influence, or years at the organization exceed mine, and in fact, none of those things are true. He's a LOB programmer who's been absorbed into the main development shop. That's it.
- I am not a new hire, junior programmer, or other naïve idiot with grand plans to transform the company overnight. I am basically in charge of the software process, but as many who've worked as "leads" will know, responsibilities don't always correlate precisely with the org chart.
- I'm not asking people how to get my way, come hell or high water. I could do that if I wanted to, with the net result being that this person would become resentful and/or quit. Please try to understand that I am looking for a social, cooperative method of driving change.
- The mention of "...global variables... no tests... throwing System.Exception" was intended to demonstrate that the problems are not just superficial or aesthetic. Practices that may work for relatively small CRUD apps do not necessarily work for large enterprise apps, and in fact, none of the code so far has actually passed the integration tests.

Please try to take the question at face value, accept that I actually know what I'm talking about, and either answer the question that I actually asked or move on.

P.S. My sincerest gratitude to those who did offer constructive advice rather than arguing with the premise. I'm going to leave this open for a while longer, as I'm hoping to hear more in the way of real-world experiences.

    Read the article

  • Unit Testing Framework for XQuery

    - by Knut Vatsendvik
This posting provides a unit testing framework for XQuery using Oracle Service Bus. It allows you to write a test case to run your XQuery transformations in an automated fashion. When the test case is run, the framework returns any differences found in the response. The complete code sample with install instructions can be downloaded from here.

Writing a Unit Test

You start a new test case by creating a Proxy Service in Workshop, which comes with Oracle Service Bus. In the General Configuration page, select the Service Type to be Messaging Service. In the Message Type Configuration page, link both the Request & Response Message Type to the TestCase element of the UnitTest.xsd schema.

The TestCase element consists of the following child elements:

- The ID and optional Name elements are simply used for reference.
- The Transformation element is the XQuery resource to be executed.
- The Input elements represent the input to run the XQuery with.
- The Output element represents the expected output.
- The XMLDiff element represents any differences found.

The input and output XML documents are also represented as XQuery resources, where the XQuery function takes no arguments and returns the XML document. Why not pass the test data with the TestCase? Passing an XML structure inside another XML structure is not very easy, or at least not very human readable. Therefore it was chosen to represent the test data as a loadable resource in the OSB. However, you are free to go ahead with another approach if wanted. A sample on input is shown here.

Modeling the Message Flow

The next step is to model the message flow of the Proxy Service. In the Request Pipeline, create a stage node that loads the test case input data. For this, specify a dynamic XQuery expression that evaluates at runtime to the name of a pre-registered XQuery resource. The expression is of course set by the input data from the test case.

Add a Run stage node. Assign the result of the XQuery that is to be run to a context variable. Define a mapping for each of the input variables added in the previous stage.

Add a Compare stage. As with the input data, load the expected output data. Do a compare using the provided XMLDiff XQuery, where the first argument is the loaded output test data, and the second argument is the result from the Run stage. Any differences found are written back into the test case XMLDiff element. In case of any unexpected failure while processing, add an Error Handler to the Pipeline to capture the fault.

To pass back the result, add an Insert action in the Response Pipeline. A sample on output is shown here.
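To make the shape of such a document concrete, a hand-written TestCase instance might look like the sketch below. The element names follow the description above, while the namespace and the resource names are assumptions for this example; the real schema is in the downloadable UnitTest.xsd.

```xml
<!-- Sketch of a TestCase instance; the namespace and resource names
     are assumptions, the authoritative definition is UnitTest.xsd. -->
<TestCase xmlns="http://example.com/unittest">
  <ID>1</ID>
  <Name>Order transformation smoke test</Name>
  <!-- XQuery resource to execute -->
  <Transformation>Transformations/OrderToInvoice</Transformation>
  <!-- Pre-registered XQuery resources returning the test documents -->
  <Input>TestData/OrderInput</Input>
  <Output>TestData/ExpectedInvoice</Output>
  <!-- Populated by the framework with any differences found -->
  <XMLDiff/>
</TestCase>
```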

    Read the article

  • JDK bug migration: components and subcomponents

    - by darcy
One subtask of the JDK migration from the legacy bug tracking system to JIRA was reclassifying bugs from a three-level taxonomy in the legacy system (product, category, subcategory) to a fundamentally two-level scheme in our customized JIRA instance (component, subcomponent). In the JDK JIRA system, there is technically a third, project-level classification, but by design a large majority of JDK-related bugs were migrated into a single "JDK" project. In the end, over 450 legacy subcategories were simplified into about 120 subcomponents in JIRA. The 120 subcomponents are distributed among 17 components. A rule of thumb used was that a subcategory had to have at least 50 bugs in it for it to be retained.

Below is a listing of the component / subcomponent classification of the JDK JIRA project, along with some notes and guidance on which OpenJDK email addresses cover different areas. Eventually, a separate incidents project to host new issues filed at bugs.sun.com will use a slightly simplified version of this scheme.

The preponderance of bugs and subcomponents for the JDK are in library-related areas, with components named foo-libs and subcomponents primarily named after packages. While there was an overall condensation of subcomponents in the migration, in some cases long-standing informal divisions in core libraries based on naming conventions in the description were promoted to formal subcomponents. For example, hundreds of bugs in the java.util subcomponent whose descriptions started with "(coll)" were moved into java.util:collections. Likewise, java.lang bugs starting with "(reflect)" and "(proxy)" were moved into java.lang:reflect.

- client-libs (predominantly discussed on 2d-dev, awt-dev, and swing-dev): 2d, demo, java.awt, java.awt:i18n, java.beans (see beans-dev), javax.accessibility, javax.imageio, javax.sound (see sound-dev), javax.swing
- core-libs (see core-libs-dev): java.io, java.io:serialization, java.lang, java.lang.invoke, java.lang:class_loading, java.lang:reflect, java.math, java.net, java.nio (discussed on nio-dev), java.nio.charsets, java.rmi, java.sql, java.sql:bridge, java.text, java.util, java.util.concurrent, java.util.jar, java.util.logging, java.util.regex, java.util:collections, java.util:i18n, javax.annotation.processing, javax.lang.model, javax.naming (JNDI), javax.script, javax.script:javascript, javax.sql, org.openjdk.jigsaw (see jigsaw-dev)
- security-libs (see security-dev): java.security, javax.crypto (JCE: includes SunJCE/MSCAPI/UCRYPTO/ECC), javax.crypto:pkcs11 (JCE: PKCS11 only), javax.net.ssl (JSSE, includes javax.security.cert), javax.security, javax.smartcardio, javax.xml.crypto, org.ietf.jgss, org.ietf.jgss:krb5
- other-libs: corba, corba:idl, corba:orb, corba:rmi-iiop, javadb, other (when no other subcomponent is more appropriate; use judiciously)

Most of the subcomponents in the xml component are related to JAXP.

- xml: jax-ws, jaxb, javax.xml.parsers (JAXP), javax.xml.stream (JAXP), javax.xml.transform (JAXP), javax.xml.validation (JAXP), javax.xml.xpath (JAXP), jaxp (JAXP), org.w3c.dom (JAXP), org.xml.sax (JAXP)

For OpenJDK, most JVM-related bugs are connected to the HotSpot Java virtual machine.

- hotspot (see hotspot-dev): build, compiler (see hotspot-compiler-dev), gc (garbage collection, see hotspot-gc-dev), jfr (Java Flight Recorder), jni (Java Native Interface), jvmti (JVM Tool Interface), mvm (Multi-Tasking Virtual Machine), runtime (see hotspot-runtime-dev), svc (serviceability), test
- core-svc (see serviceability-dev): debugger, java.lang.instrument, java.lang.management, javax.management, tools

The full JDK bug database contains entries related to legacy virtual machines that predate HotSpot as well as retired APIs.

- vm-legacy: jit (Sun Exact VM), jit_symantec (Symantec VM, before Exact VM), jvmdi (JVM Debug Interface), jvmpi (JVM Profiler Interface), runtime (Exact VM Runtime)

Notable command line tools in the $JDK/bin directory have corresponding subcomponents.

- tools: appletviewer, apt (see compiler-dev), hprof, jar, javac (see compiler-dev), javadoc(tool) (see compiler-dev), javah (see compiler-dev), javap (see compiler-dev), jconsole, launcher, updaters (timezone updaters, etc.), visualvm

Some aspects of JDK infrastructure directly affect JDK Hg repositories, but others do not.

- infrastructure: build (see build-dev and build-infra-dev), licensing (covers updates to the third-party readme, licenses, and similar files), release_eng (release engineering), staging (staging of web pages related to JDK releases)

The specification component encompasses the formal language and virtual machine specifications.

- specification: language (The Java Language Specification), vm (The Java Virtual Machine Specification)

The code for the deploy and install areas is not currently included in OpenJDK.

- deploy: deployment_toolkit, plugin, webstart
- install: auto_update, install, servicetags

In the JDK, there are a number of cross-cutting concerns whose organization is essentially orthogonal to other areas. Since these areas generally have dedicated teams working on them, it is easier to find bugs of interest if these bugs are grouped first by their cross-cutting component rather than by the affected technology.

- docs: doclet, guides, hotspot, release_notes, tools, tutorial
- embedded: build, hotspot, libraries
- globalization: locale-data, translation
- performance: hotspot, libraries

The list of subcomponents will no doubt grow over time, but my inclination is to resist that growth, since the addition of each subcomponent makes the system as a whole more complicated and harder to use. When the system gets closer to being externalized, I plan to post more blog entries describing recommended use of various custom fields in the JDK project.

    Read the article

  • Logging connection strings

If you use some of the dynamic features of SSIS, such as package configurations or property expressions, then trying to work out where your connections are pointing can sometimes be a bit confusing. You will work it out in the end, but it can be useful to explicitly log this information so that when things go wrong you can just review the logs. You may wish to develop this idea further and encapsulate such logging into a custom task, but for now let's keep it simple and use the Script Task. The Script Task code below will raise an Information event showing the name and connection string for a connection.

```vb
Imports System
Imports Microsoft.SqlServer.Dts.Runtime

Public Class ScriptMain

    Public Sub Main()

        Dim fireAgain As Boolean

        ' Get the connection string, we need to know the name of the connection
        Dim connectionName As String = "My OLE-DB Connection"
        Dim connectionString As String = Dts.Connections(connectionName).ConnectionString

        ' Format the message and log it via an information event
        Dim message As String = String.Format("Connection ""{0}"" has a connection string of ""{1}"".", _
            connectionName, connectionString)
        Dts.Events.FireInformation(0, "Information", message, Nothing, 0, fireAgain)

        Dts.TaskResult = Dts.Results.Success

    End Sub

End Class
```

Building on that example, it is probably more flexible to log all connections in a package, as shown in the next example.

```vb
Imports System
Imports Microsoft.SqlServer.Dts.Runtime

Public Class ScriptMain

    Public Sub Main()

        Dim fireAgain As Boolean

        ' Loop through all connections in the package
        For Each connection As ConnectionManager In Dts.Connections

            ' Get the connection string and log it via an information event
            Dim message As String = String.Format("Connection ""{0}"" has a connection string of ""{1}"".", _
                connection.Name, connection.ConnectionString)
            Dts.Events.FireInformation(0, "Information", message, Nothing, 0, fireAgain)

        Next

        Dts.TaskResult = Dts.Results.Success

    End Sub

End Class
```

Using the Information event makes the output readily available in the designer, for example in the Visual Studio Output window (Ctrl+Alt+O) or the package designer Execution Results tab, and it also allows you to readily control the logging by choosing which events to log in the normal way.

Now, before somebody starts commenting that this is a security risk, I would like to highlight good practice for building connection managers. Firstly, the Password property, or any other similar sensitive property, is always defined as write-only, and secondly, the connection string property only uses the public properties to assemble the connection string value when requested. In other words, the connection string will never contain the password. I have seen a couple of cases where this is not true, but that was just bad development by third parties; you won't find anything like that in the box from Microsoft.

Whilst writing this code it made me wish that there was a custom log entry that you could just turn on that did this for you, but alas connection managers do not even seem to support custom events. It did however remind me of a very useful event that is often overlooked and fits rather well alongside connection string logging: the Execute SQL Task's custom ExecuteSQLExecutingQuery event. To quote the help reference:

Custom Messages for Logging - Provides information about the execution phases of the SQL statement. Log entries are written when the task acquires connection to the database, when the task starts to prepare the SQL statement, and after the execution of the SQL statement is completed. The log entry for the prepare phase includes the SQL statement that the task uses.

It is the last part that is so useful: how often have you used an expression to derive a SQL statement and wanted to log it to make sure the correct SQL is being returned? You need to turn it on, as by default no custom log events are captured, but I'll refer you to a walkthrough on setting up the logging for ExecuteSQLExecutingQuery by Jamie.

    Read the article

  • Seizing the Moment with Mobility

    - by Kathryn Perry
A guest post by Hernan Capdevila, Vice President, Oracle Fusion Apps

Mobile devices are forcing a paradigm shift in the workplace; they're changing the way businesses can do business and the type of cultures they can nurture. As our customers talk about their mobile needs, we hear them saying they want instant-on access to enterprise data so workers can be more effective at their jobs anywhere, anytime. They also are interested in being more cost effective from an IT point of view. The mobile revolution, with the idea of BYOD (bring your own device), has added an interesting dynamic because previously IT was driving the employee device strategy and ecosystem. That's been turned on its head with the consumerization of IT. Now employees are figuring out how to use their personal devices for work purposes and IT has to figure out how to adapt.

Blurring the Lines between Work and Personal Life

My vision of where businesses will be five years from now is that our work lives and personal lives will be more interwoven. In turn, enterprises will have to determine how to make employees' work lives fit more into the fabric of their personal lives. And personal devices like smartphones are going to drive significant business value because they let us accomplish things very incrementally. I can be sitting on a train or in a taxi and be productive. At the end of any meeting, I can capture ideas and tasks or follow up with people in real time. Mobile devices enable this notion of seizing the moment, capitalizing on opportunities that might otherwise have slipped away because we're not connected.

For the industry shapers out there, this is game changing. The lean and agile workforce is definitely the future. This notion of the board sitting down with the executive team to lay out strategic objectives for a three- to five-year plan, bringing in HR to determine how they're going to staff the strategic activities, kicking off the execution, and then revisiting the plan in three to five years to create another three- to five-year plan is yesterday's model. Businesses that continue to approach innovating in that way are in the dinosaur age. Today it's about incremental planning and incremental execution, which requires a lot of cohesion and synthesis within the workforce. There needs to be this interweaving notion within the workforce about how ideas cascade down, how people engage, how they stay connected, and how insights are shared.

How to Survive and Thrive in Today's Marketplace

The notion of Facebook isn't new. We lived it in pre-Internet days with America Online and Prodigy; Facebook is just the renaissance of these services in a more viral and pervasive way. And given the trajectory of the consumerization of IT, with people bringing their personal tooling to work, the enterprise has no option but to adapt. The sooner businesses realize this from a top-down point of view, the sooner they will be able to really drive significant innovation and adapt to the marketplace. There are a small number of companies right now (I think it's closer to 20% rather than 80%, but the number is expanding) that are able to really innovate in this incremental marketplace. So from a competitive point of view, there's no choice but to be social and stay connected.

By far the majority of users on Facebook and LinkedIn are mobile users: people on iPhones, smartphones, Android phones, and tablets. It's not the couch people, right? It's the on-the-go people, those people at the coffee shops. Usually when you're sitting at your desk on a big desktop computer, you typically have better things to do than to be on Facebook.

This is a topic I'm extremely passionate about because I think mobile devices are game changing. Mobility delivers significant value to businesses; it also brings dramatic simplification from a functional point of view and transforms our work life experience.

Hernan Capdevila
Vice President, Oracle Applications Development

    Read the article

  • What's up with LDoms: Part 9 - Direct IO

    - by Stefan Hinker
    In the last article of this series, we discussed the most general of all physical IO options available for LDoms: root domains.  Now, let's have a short look at the next level of granularity: virtualizing individual PCIe slots.  In LDoms terminology, this feature is called "Direct IO" or DIO.  It is very similar to root domains, but instead of reassigning ownership of a complete root complex, it only moves a single PCIe slot or endpoint device to a different domain.  Let's look again at the hardware available to mars in the original configuration:

    root@sun:~# ldm ls-io
    NAME                         TYPE   BUS    DOMAIN   STATUS
    ----                         ----   ---    ------   ------
    pci_0                        BUS    pci_0  primary
    pci_1                        BUS    pci_1  primary
    pci_2                        BUS    pci_2  primary
    pci_3                        BUS    pci_3  primary
    /SYS/MB/PCIE1                PCIE   pci_0  primary  EMP
    /SYS/MB/SASHBA0              PCIE   pci_0  primary  OCC
    /SYS/MB/NET0                 PCIE   pci_0  primary  OCC
    /SYS/MB/PCIE5                PCIE   pci_1  primary  EMP
    /SYS/MB/PCIE6                PCIE   pci_1  primary  EMP
    /SYS/MB/PCIE7                PCIE   pci_1  primary  EMP
    /SYS/MB/PCIE2                PCIE   pci_2  primary  EMP
    /SYS/MB/PCIE3                PCIE   pci_2  primary  OCC
    /SYS/MB/PCIE4                PCIE   pci_2  primary  EMP
    /SYS/MB/PCIE8                PCIE   pci_3  primary  EMP
    /SYS/MB/SASHBA1              PCIE   pci_3  primary  OCC
    /SYS/MB/NET2                 PCIE   pci_3  primary  OCC
    /SYS/MB/NET0/IOVNET.PF0      PF     pci_0  primary
    /SYS/MB/NET0/IOVNET.PF1      PF     pci_0  primary
    /SYS/MB/NET2/IOVNET.PF0      PF     pci_3  primary
    /SYS/MB/NET2/IOVNET.PF1      PF     pci_3  primary

    All of the "PCIE" type devices are available for DIO, with a few limitations.  If the device is a slot, the card in that slot must support the DIO feature; the documentation lists all such cards.  Moving a slot to a different domain works just like moving a PCIe root complex (a minimal command sketch follows below).  Again, this is not a dynamic process and includes reboots of the affected domains.  The resulting configuration is nicely shown in a diagram in the Admin Guide.

    There are several important things to note and consider here:

    - The domain receiving the slot/endpoint device turns into an IO domain in LDoms terminology, because it now owns some physical IO hardware.
    - Solaris will create nodes for this hardware under /devices.  This includes entries for the virtual PCIe root complex (pci_0 in the diagram) and anything between it and the actual endpoint device.  It is very important to understand that all of this PCIe infrastructure is virtual only!  Only the actual endpoint devices are true physical hardware.
    - There is an implicit dependency between the guest owning the endpoint device and the root domain owning the real PCIe infrastructure: only if the root domain is up and running will the guest domain have access to the endpoint device.
    - The root domain is still responsible for resetting and configuring the PCIe infrastructure (root complex, PCIe level configurations, error handling etc.) because it owns this part of the physical infrastructure.
    - This also means that if the root domain needs to reset the PCIe root complex for any reason (typically a reboot of the root domain), it will reset and thus disrupt the operation of the endpoint device owned by the guest domain.  The result in the guest is not predictable.  I recommend configuring the resulting behaviour of the guest using domain dependencies, as described in the Admin Guide chapter "Configuring Domain Dependencies" (a sketch of such a configuration follows at the end of this entry).

    Please consult the Admin Guide section "Creating an I/O Domain by Assigning PCIe Endpoint Devices" for all the details!

    As you can see, there are several restrictions for this feature.  It was introduced in LDoms 2.0, mainly to allow the configuration of guest domains that need access to tape devices.
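    For illustration, here is a minimal sketch of such a reassignment, assuming we hand the occupied slot /SYS/MB/PCIE3 from the listing above to the guest domain mars.  The slot choice is an assumption for this example; verify slot names, card support and reboot requirements against your own configuration and the Admin Guide.

    # Release the slot from the root domain; on the primary domain this
    # starts a delayed reconfiguration, so primary must be rebooted.
    root@sun:~# ldm remove-io /SYS/MB/PCIE3 primary
    root@sun:~# reboot

    # After the reboot, assign the slot to the guest domain, which must
    # be stopped (bound or inactive) for the change.
    root@sun:~# ldm stop-domain mars
    root@sun:~# ldm add-io /SYS/MB/PCIE3 mars
    root@sun:~# ldm start-domain mars

    # The DOMAIN column of ldm ls-io should now show mars as the owner.
    root@sun:~# ldm ls-io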
    Today, with the higher number of PCIe root complexes and the availability of SR-IOV, the need to use this feature is declining.  I personally do not recommend using it, mainly because of the drawbacks of the dependencies on the root domain and because it can be replaced with SR-IOV (although with similar limitations).  This was a rather short entry, more for completeness.  I believe that DIO can usually be replaced by SR-IOV, which is much more flexible.  I will cover SR-IOV in the next section of this blog series.
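    As a footnote to the dependency discussion above, here is a minimal sketch of a domain dependency that makes the guest's behaviour predictable when its root domain resets.  It assumes mars received its endpoint device from the primary domain; failure-policy=reset is just one of the available policies (ignore, panic, reset, stop).

    # Make primary the master domain of mars ...
    root@sun:~# ldm set-domain master=primary mars
    # ... and define what happens to its slave domains if primary fails:
    root@sun:~# ldm set-domain failure-policy=reset primary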

    Read the article
