Search Results



  • Introduction to WebCenter Personalization: "The Conductor"

    - by Steve Pepper
    There are some new faces in the town of WebCenter with the latest 11g PS3 release. A new component has introduced itself as "Oracle WebCenter Personalization", a.k.a. WCP, to simplify delivery of a personalized experience and content to end users. This posting reviews one of the primary components within WCP: "The Conductor".

    The Conductor: This ain't just an ordinary cloud...
    One of the founding principles behind WebCenter Personalization was to provide an open client-side API that remains independent of the technology invoking it, in addition to independence from the architecture running it. The Conductor delivers this, and much, much more. The Conductor is the engine behind WebCenter Personalization that allows flow-based documents, called "Scenarios", to be managed and executed on the server side through a well-published RESTful API. The Conductor also supports an extensible model for custom provider integration that can be easily invoked within a Scenario to promote seamless integration with existing business assets.

    Introducing the Scenario Conductor
    Scenarios are declarative, offline-authored documents created using the custom Personalization JDeveloper bundle included with WebCenter. A Scenario contains one or more statements that can:
    - Create variables that are scoped to the current execution context
    - Iterate over collections, or loop until a specific condition is met
    - Execute one or more statements when a condition is met
    - Invoke other scenarios that exist within the same namespace
    - Invoke a data provider that integrates with custom applications
    Once a variable is assigned within the Scenario's execution context, it can be referenced anywhere within the same Scenario using the common Expression Language syntax used in J2EE web containers. Scenarios are then published and tested on the Integrated WebLogic Server domain, or published remotely to other domains running WebCenter Personalization.

    Various Client-side Models
    The Conductor server API is built upon RESTful services that support a wide variety of clients able to communicate over HTTP. The Conductor supports the following client-side models:
    - REST: Popular browser-based languages can be used to manage and execute Conductor Scenarios. There are other public methods to retrieve configured provider metadata that can be used by custom applications. The Conductor currently supports XML and JSON for its API syntax.
    - Java: WebCenter Personalization delivers a robust and lightweight Java client with the popular Jersey framework as its foundation. It has never been easier to write a remote Java client to manage remote RESTful services.
    - Expression Language (EL): Allows the results of Scenario execution to control your user interface or embed personalized content using the session-scoped managed bean. The EL client can also be used in straight JSP pages with minimal configuration.

    Extensible Provider Framework
    The Conductor supports a pluggable provider framework for integrating custom code with Scenario execution. There are two types of providers supported by the Conductor:
    Function Provider: Function Providers are simple annotated Java classes with static methods that are meant to serve as utilities. Some common uses would include object creation or instantiation, data transformation, and the like. Function Providers can be invoked using the common EL syntax from variable assignments, conditions, and loops.
    For example: ${myUtilityClass:doStuff(arg1,arg2)} If you are familiar with EL Functions, Function Providers are based on the same concept.
    Data Provider: Like Function Providers, Data Providers are annotated Java classes, but they must adhere to a much stricter object model. Data Providers have access to a wealth of Conductor services, such as a namespace-scoped configuration API that can be managed by Oracle Enterprise Manager, the Scenario execution context for expression resolution, and more. Oracle ships with three out-of-the-box data providers that support integration with standardized content servers (CMIS), federated profile properties through the Properties Service, and the WebCenter Activity Graph.

    Useful References
    If you are looking to immediately get started writing your own application using WebCenter Personalization Services, you will find the following references helpful in getting you on your way:
    - Personalizing WebCenter Applications
    - Authoring Personalized Scenarios in JDeveloper
    - Using Personalization APIs Externally
    - Implementing and Calling Function Providers
    - Implementing and Calling Data Providers


  • Cleaner ClientID's with ASP.NET 4.0

    - by amaniar
    A common complaint we have had when using ASP.NET Web Forms is the inability to control the ID attributes being rendered in the HTML markup when using server controls. Our interface engineers want to be able to predict the IDs of controls, thereby having more control over their client-side code for selecting/manipulating elements by ID or using CSS to target them. While playing with the just-released VS2010 and .NET 4.0 I discovered some real cool improvements. One of them is the ability to now have full control over the ID being rendered for server controls. ASP.NET 4.0 controls now have a new ClientIDMode property which gives the developer complete control over the IDs being rendered, making it easy to write JavaScript and CSS against the rendered HTML. By default the ClientIDMode is set to Predictable, which results in clean and predictable IDs by concatenating the IDs of the parent and child controls. So the following markup:

        <asp:Content ID="ParentContainer" ContentPlaceHolderID="MainContentPlaceHolder" runat="server">
            <asp:Label runat="server" ID="MyLabel">My Label</asp:Label>
        </asp:Content>

    Will render:

        <span id="ParentContainer_MyLabel">My Label</span>

    Instead of something like this (current):

        <span id="ctl00_ParentContainer_MyLabel">My Label</span>

    Other modes include AutoID (renders IDs like it currently does in .NET 3.5), Static (renders the ID exactly as specified in the code) and Inherit (defers the mode to the parent control). So now I can write my jQuery selector as:

        $("#ParentContainer_MyLabel").text("My new Text");

    Instead of:

        $('#<%= this.MyLabel.ClientID %>').text("My new Text");

    Scott Mitchell has a great article about this new feature: http://bit.ly/ailEJ2 Am excited about this and some other improvements. Many thanks to the ASP.NET team for listening!
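    If you want the same behavior everywhere, ClientIDMode can also be set once at the application level instead of per control. A minimal sketch, assuming a standard ASP.NET 4.0 web.config (the GridView name below is purely illustrative):

        <!-- web.config: make Predictable the default for every page -->
        <system.web>
          <pages clientIDMode="Predictable" />
        </system.web>

        <!-- ...or per control, in markup -->
        <asp:GridView ID="ProductsGrid" runat="server" ClientIDMode="Static" />

        // ...or in C# code-behind, equivalent to the markup above
        ProductsGrid.ClientIDMode = System.Web.UI.ClientIDMode.Static;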


  • The Minimalist Approach to Content Governance - Request Phase

    - by Kellsey Ruppel
    Originally posted by John Brunswick. For each project, regardless of size, it is critical to understand the required ownership, business purpose, prerequisite education / resources needed to execute, and success criteria around it. Without doing this, there is no way to get a handle on the content life-cycle, resulting in a mass of orphaned material. This lowers the quality of end user experiences. The good news is that by using a simple process in this request phase, we will not have to revisit this phase unless something drastic changes in the project. For each of the elements mentioned above in this stage, the why, how (technically focused) and impact are outlined with the intent of providing the most value to a small team.

    1. Ownership
    Why - Without ownership information it will not be possible to track and manage any of the content and take advantage of many features of enterprise content management technology. To hedge against this, we need to ensure that both an individual and their group or department within the organization are associated with the content.
    How - Apply metadata that indicates the owner and department or group that has responsibility for the content.
    Impact - It is possible to keep the content system optimized by running native reports against the metadata and acting on them based on what has been outlined for success criteria. This will maximize end user experience, as content will be faster to locate and more relevant to the user by virtue of working through a smaller collection.

    2. Business Purpose
    Why - This simple step will weed out requests that have tepid justification, as users will most likely not spend the effort to request resources if they do not have a real need.
    How - Use a simple online form to collect the request and workflow it to management, native to the content system.
    Impact - Minimizes the amount of user-generated content that is of low value to the organization.

    3. Prerequisite Education / Resources Needed
    Why - If a project cannot be properly staffed, the probability of its success is going to be low. By outlining the resources needed - in both skill set and duration - it will cause the requesting party to think critically about the commitment needed to complete their project and what gap must be closed with regard to education of those resources.
    How - In the simple request form outlined above, resources and a commitment to fulfilling any needed education should be included, with a brief acceptance clause that outlines the requesting party's commitment.
    Impact - This stage acts as a formal commitment to ensuring that resources are able to execute on the vision for the project.

    4. Success Criteria
    Why - Similar to the business purpose, this is a key element in helping to determine if the project and its respective content should continue to exist if it does not meet its intended goal.
    How - Set a review point for the project content that will check the progress against the originally outlined success criteria and then determine the fate of the content. This can even include logic that will tell the content system to remove items that have not been opened by any users in X amount of time.
    Impact - This ensures that projects and their contents do not live past their useful lifespans. Just as with orphaned content, non-relevant information will slow users' access to relevant materials for their jobs.
Request Phase Summary With a simple form that outlines the ownership of a project and its content, business purpose, education and resources, along with success criteria, we can ensure that an enterprise content management system will stay clean and relevant to end users - allowing it to deliver the most value possible. The key here is to make it straightforward to make the request and let the content management technology manage as much as possible through metadata, retention policies and workflow. Doing these basic steps will allow project content to get off to a great start in the enterprise! Stay tuned for the next installment - the "Create Phase" - covering security access and workflow involved in content creation, enabling a practical layer of governance over our enterprise content repository.


  • SQL to select random mix of rows fairly [migrated]

    - by Matt Sieker
    Here's my problem: I have a set of tables in a database populated with data from a client that contains product information. In addition to the basic product information, there is also information about the manufacturer, and categories for those products (a product can be in one or more categories). These categories are referred to as "Product Categories"; the data also records which stores each product is available at. These tables are updated once a week from a feed from the customer. Since some of the product categories are the same, or closely related for our purposes, there is another level of categories called "General Categories"; a general category can have one or more product categories. For the scope of these tables, here are some rough numbers:

    Data tables:
    - Products: 475,000
    - Manufacturers: 1,300
    - Stores: 150
    - General Categories: 245
    - Product Categories: 500

    Mapping tables:
    - Product Category -> Product: 655,000
    - Stores -> Products: 50,000,000

    Now, for the actual problem: As part of our software, we need to select n random products, given a store and a general category. However, we also need to ensure a good mix of manufacturers, as in some categories a single manufacturer dominates, and selecting rows purely at random causes the results to strongly favor that manufacturer. The solution currently in place, which works for most cases, involves selecting all of the rows that match the store and category criteria, partitioning them on manufacturer, and including their row number from within their partition, then selecting from that where the row number for that manufacturer is less than n, and using ROWCOUNT to clamp the total rows returned to n. The query looks something like this:

        SET ROWCOUNT 6

        select p.Id, GeneralCategory_Id, Product_Id,
               ISNULL(m.DisplayName, m.Name) AS Vendor,
               MSRP, MemberPrice, FamilyImageName
        from (select p.Id, gc.Id GeneralCategory_Id, p.Id Product_Id,
                     ctp.Store_id, Manufacturer_id,
                     ROW_NUMBER() OVER (PARTITION BY Manufacturer_id ORDER BY NEWID()) AS 'VendorOrder',
                     MSRP, MemberPrice, FamilyImageName
              from GeneralCategory gc
              inner join GeneralCategoriesToProductCategories gctpc ON gc.Id = gctpc.GeneralCategory_Id
              inner join ProductCategoryToProduct pctp on gctpc.ProductCategory_Id = pctp.ProductCategory_Id
              inner join Product p on p.Id = pctp.Product_Id
              inner join StoreToProduct ctp on p.Id = ctp.Product_id
              where gc.Id = @GeneralCategory and ctp.Store_id = @StoreId
                and p.Active = 1 and p.MemberPrice > 0) p
        inner join Manufacturer m on m.Id = p.Manufacturer_id
        where VendorOrder <= 6
        order by NEWID()

        SET ROWCOUNT 0

    Running this query with an execution plan shows that for the majority of these tables it's doing a Clustered Index Seek. There are two operations that take up roughly 90% of the time:
    - Index Seek (Nonclustered) on StoreToProduct: 17%. This table just contains the key of the store and the key of the product. It seems that NHibernate decided not to make a composite key when making this table, but I'm not concerned about this at this point, as compared to the other seek...
    - Clustered Index Seek on Product: 69%. I really have no clue how I could make this one more performant.
    On categories without a lot of products, performance is acceptable (<50ms); however, larger categories can take a few hundred ms, with the largest category (which has about 170k products) taking 3s.
    It seems I have two ways to go from this point:
    - Somehow optimize the existing query and table indices to lower the query time. As almost every expensive operation is already a clustered index seek, I don't know what could be done there. The inner query could be tuned to not return all of the possible rows for that category, but I am unsure how to do this and maintain the requirements (random products, with a good mix of manufacturers).
    - Denormalize this data for the purpose of this query when doing the once-a-week import. However, I am unsure how to do this and maintain the requirements.
    Does anyone have any input on either of these items?


  • Managing software projects - advice needed

    - by Callum
    I work for a large government department as part of an IT team that manages and develops websites as well as stand-alone web applications. We're running into problems somewhere in the SDLC that don't rear their ugly head until time and budget are starting to run out. We try to be "Agile" (software specifications are not as thorough as possible, clients have direct access to the developers any time they want) and we are also in a reasonably peculiar position in that we are not allowed to make profit from the services we provide. We only service the divisions within our government department, and can only charge for the time and effort we actually put in to a project. So if we deliver a project that we have over-quoted on, we will only invoice for the actual time spent.

    Our software specifications are not as thorough as they could be, but they always include at a minimum:
    - Wireframe mockups for every form view
    - A data dictionary of all field inputs
    - Descriptions of any business rules that affect the system
    - Descriptions of the outputs

    I'm new to software management, but I've overseen enough software projects now to know that as soon as users start observing demos of the system, they start making a huge amount of requests like "Can we add a few more fields to this report... can we redesign the look of this interface... can we send an email at this part of the workflow... can we take this button off this view... can we make this function redirect to a different screen... can we change some text on this screen... can we create a special account where someone can log in and get access to X... this report takes too long to run, can it be optimised... can we remove this step in the workflow... there's got to be a better image we can put here..." etc. etc. Some changes are tiny and can be implemented reasonably quickly, but there could be up to 50-100 or so of such requests during the course of the SDLC. Other change requests are what clients claim they "just assumed would be part of the system" even if not explicitly spelled out in the spec.

    We are having a lot of difficulty managing this process. With no experienced software project managers in our team, we need to come up with a better way to both internally identify whether work being requested is "out of spec", and be able to communicate this to a client in such a manner that they can understand why what they are asking for is "extra" work. We need a way to track this work and be transparent with it. In the spirit of Agile development, where we are not spec'ing software systems into the ground and back again before development begins, and bearing in mind that clients have access to any developer any time they want, I am looking for some tips and pointers from experienced software project managers on how to handle this sort of "scope creep" problem: tracking it, being transparent with it, and communicating it to clients such that they understand it. Happy to clarify anything as needed. I really appreciate anyone who takes the time to offer some advice. Thanks.


  • How to deal with overly aggressive "Link Take Down Demands"?

    - by Eoin
    I've been receiving a large number of emails recently requesting I clean up link spam on my forum. Initially the emails were very polite and professional, and I was happy to remove the links. Recently the emails have gotten very abrasive; here is a particularly rude example:

        From: [email protected]
        To: [email protected]
        Hi,
        This is the second time we are reaching out to you regarding your link to our site hxxp://www.company-two.com from hxxp://www.my-forum.com/some-topic-id. We really do need to remove this link. We have to report to Google any link we were unable to remove, and I wouldn't want to have to include your site in the list. Could you please remove our link from this page and any other page on your site?
        Thank You,
        Name Changed

    Behind the superficial pleasantries I feel there is some very real maliciousness:
    - Note the email address: "DMCA Violations". I don't see how the DMCA is involved here, except as a phrase which tends to strike fear in many people. Also, the address doesn't match the company being linked to at all. How am I to trust they are truly operating on behalf of company-two when they don't even use one of its email addresses?
    - My email is hidden by PrivacyPost. While a service with legitimate uses, I feel it's highly unprofessional for communications between two companies.
    - The claim "This is the second time...": every email I've received has started like this, but a check of my spam filters has never revealed a first mail. Initially I gave them the benefit of the doubt; by now, though, it's clear this is a cheap ploy to start me off on the defensive.
    - And finally, worst of all: the threats of reporting me to Google if I don't do everything they ask.

    I sent a polite reply asking for more information. I have no idea if the email address was even valid, but I never received any response. Much later I got this follow-up mail:

        From: [email protected]
        To: [email protected]
        Hi,
        This is the final time we are reaching out to you regarding your link to our site hxxp://www.company-two.com from hxxp://www.my-forum.com/some-topic-id. We will soon be reporting to Google any link we were unable to remove, and currently your site will have to be on the list. Could you please remove our link from this page and any other page on your site? I appreciate your urgent attention to this matter.
        Thank You,
        Name Changed

    This time the from address was more personal, though still not obviously connected to the spammed company. Let's be honest: I don't for one second believe that the companies were the victim of a 3rd-party spammer as they claim. The links in question were generated well over a year ago, and I firmly believe the companies were directly responsible for the spam links in question, a type of spam that has plagued my forum. Now they have the audacity to demand I spend my time cleaning up their mess, using threats to ensure they get their way. Have recent changes in Google's algorithms meant all the cash they spent spamming the web has now turned into a liability? If so, I can see why these companies are all of a sudden running scared. Frankly, cleaning up my forum is a good thing, but the threats they are using sicken me. So my question here is specifically about the threats: Are they valid, and would such reports to Google destroy my page rankings? Is there a way I can report this abusive behaviour to Google?


  • Finalists for Community Manager of the Year Announced

    - by Mike Stiles
    For as long as brand social has been around, there's still an amazing disparity from company to company on the role of Community Manager. At some brands, they are the lead social innovators. At others, the task has been relegated to interns who are at the company temporarily. Some have total autonomy and trust. Others must get chain-of-command permission each time they engage. So what does a premier, "worth their weight in gold" Community Manager look like? More than anyone else in the building, they have the most intimate knowledge of who the customer is. They live on the front lines and are the first to detect problems and opportunities. They are sincere, raving fans of the brand themselves and are trusted advocates for the others. They're fun to be around. They aren't salespeople. Give me one Community Manager who's been at the job 6 months over 5 focus groups any day. Because, not unlike in speed dating, they must immediately learn how to make a positive, lasting impression on fans so they'll want to return and keep the relationship going. They're informers and entertainers, with a true belief in the value of the brand's proposition. Internally, they live at the mercy of the resources allocated toward social. Many, whose managers don't understand the time involved in properly curating a community, are tasked with 2 or 3 too many of them. 63% of CMs will spend over 30 hours a week on one community. They come to intuitively know the value of the relationships they're building, even if it can't always be shown in a bar graph to the C-suite. Many must communicate how the customer feels to executives that simply don't seem to want to hear it. Some can get the answers fans want quickly; others are frustrated in their ability to respond within an impressive timeframe. In short, in a corporate world coping with sweeping technological changes, amidst business school doublespeak, pie charts, decks, strat sessions and data points, the role of the Community Manager is the most... human. They are the true emotional connection to the real-life customer. Which is why we sought to find a way to recognize and honor who they are, what they do, and how well they have defined the position as social grows and integrates into the larger organization. Meet our 3 finalists for Community Manager of the Year.

    Jeff Esposito with Vistaprint: Jeff manages and heads up content strategy for all social networks and blogs. He also crafts company-wide policies surrounding the social space. Vistaprint won the NEDMA Gold Award for Twitter Strategy in 2010 and 2011, and a Bronze in 2011 for Social Media Strategy. Prior to Vistaprint, Jeff was Media Relations Manager with the Long Island Ducks. He graduated from Seton Hall University with a BA in English and a minor in Classical Studies.

    Stacey Acevero with Vocus: In addition to social management, Stacey blogs at Vocus on influential marketing and social media, and blogs at PRWeb on public relations and SEO. She's been named one of the #Nifty50 Women in Tech on Twitter 2 years in a row, as well as included in the 15 up-and-coming PR pros to watch in 2012.

    Carly Severn with the San Francisco Ballet: Carly drives engagement, widens the fanbase and generates digital content for America's oldest professional ballet company. Managed properties include Facebook, Twitter, Tumblr, Pinterest, Instagram, YouTube and G+. Prior to joining the SF Ballet, Carly was Marketing & Press Coordinator at The Fitzwilliam Museum at Cambridge, where she graduated with a degree in English.
We invite you to join us at the first annual Oracle Social Media Summit November 14 and 15 at the Wynn in Las Vegas where our finalists will be featured. Over 300 top brand marketers, agency executives, and social leaders & innovators will be exploring how social is transforming business. Space is limited and the information valuable, so get more info and get registered as soon as possible at the event site.


  • Is this Hybrid of Interface / Composition kosher?

    - by paul
    I'm working on a project in which I'm considering using a hybrid of interfaces and composition as a single thing. What I mean by this is having a contain*ee* class be used as a front for functionality implemented in a contain*er* class, where the container exposes the containee as a public property. Example (pseudocode):

        class Visibility(lambda doShow, lambda doHide, lambda isVisible)
            public method Show() {...}
            public method Hide() {...}
            public property IsVisible
            public event Shown
            public event Hidden

        class SomeClassWithVisibility
            private member visibility = new Visibility(doShow, doHide, isVisible)
            public property Visibility with get() = visibility
            private method doShow() {...}
            private method doHide() {...}
            private method isVisible() {...}

    There are three reasons I'm considering this:
    - The language in which I'm working (F#) has some annoyances w.r.t. implementing interfaces the way I need to (unless I'm missing something) and this will help avoid a lot of boilerplate code.
    - The containee classes could really be considered properties of the container class(es); i.e. there seems to be a fairly strong has-a relationship.
    - The containee classes will likely implement code which would have been pretty much the same when implemented in all the container classes, so why not do it once in one place? In the above example, this would include managing and emitting the Shown/Hidden events.
    Does anyone see any issues with this Composiface/Intersition method, or know of a better way?

    EDIT 2012.07.26 - It seems a little background information is warranted: Where I work, we have a bunch of application front-ends that have limited access to system resources -- they need access to these resources to fully function. To remedy this we have a back-end application that can access the needed resources, with which the front-ends can communicate. (There is an API written for the front-ends for accessing back-end functionality as though it were part of the front-end.) The back-end program is out of date and its functionality is incomplete. It has made the transition from company to company a couple of times and we can't even compile it anymore. So I'm trying to rewrite it in my spare time. I'm trying to update things to make a nice(r) interface/API for the front-ends (while allowing for backwards compatibility with older front-ends), hopefully something full of OOPy goodness. The thing is, I don't want to write the front-end API after I've written pretty much the same code in F# for implementing the back-end; so, what I'm planning on doing is applying attributes to classes/methods/properties that I would like to have code for in the API, then generating this code from the F# assembly using reflection. The method outlined in this question is a possible alternative I'm considering instead of implementing straight interfaces on the classes in F#, because they're kind of a bear:
    - In order to access something of an interface that has been implemented in a class, you have to explicitly cast an instance of that class to the interface type. This would make things painful when getting calls from the front-ends.
    - If you don't want to have to do this, you have to call out all of the interface's methods/properties again in the class, outside of the interface implementation (which is separate from regular class members), and call the implementation's members. This is basically repeating the same code, which is what I'm trying to avoid!
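    For readers more comfortable in C#, here is a minimal sketch of the same containee-as-front pattern (type and member names mirror the pseudocode above; the delegates are stand-ins for the F# lambdas):

        using System;

        public class Visibility
        {
            private readonly Action doShow;
            private readonly Action doHide;
            private readonly Func<bool> isVisible;

            public event EventHandler Shown;
            public event EventHandler Hidden;

            public Visibility(Action doShow, Action doHide, Func<bool> isVisible)
            {
                this.doShow = doShow;
                this.doHide = doHide;
                this.isVisible = isVisible;
            }

            public bool IsVisible { get { return isVisible(); } }

            public void Show()
            {
                doShow();
                var handler = Shown;   // shared event plumbing lives here, once
                if (handler != null) handler(this, EventArgs.Empty);
            }

            public void Hide()
            {
                doHide();
                var handler = Hidden;
                if (handler != null) handler(this, EventArgs.Empty);
            }
        }

        public class SomeClassWithVisibility
        {
            private readonly Visibility visibility;

            public SomeClassWithVisibility()
            {
                visibility = new Visibility(DoShow, DoHide, GetVisible);
            }

            // the containee is exposed as a public property instead of the
            // class implementing an IVisibility interface
            public Visibility Visibility { get { return visibility; } }

            private void DoShow() { /* ... */ }
            private void DoHide() { /* ... */ }
            private bool GetVisible() { return true; }
        }

    Callers then write something.Visibility.Show() with no interface cast anywhere, which is exactly the pain point raised for F#.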


  • Monitoring Windows Azure Service Bus Endpoint with BizTalk 360?

    - by Michael Stephenson
    I'm currently working with a customer who is undergoing an initiative to expose some of their line-of-business applications to external partners and SaaS applications, and as part of this we have been looking at using the Windows Azure Service Bus. For the first part of the project we were focused on some synchronous request-response scenarios where an external application would use the Service Bus relay functionality to get data from some internal applications. When we were looking at the operational monitoring side of the solution it was obvious that although most of the normal server monitoring capabilities would be required for the on-premise components, we would have to look at new approaches to validate that the operation of the service from outside of the organization was working as expected. A number of months ago one of my colleagues, Elton Stoneman, wrote about an approach I have introduced with a number of clients in the past, where we implement a diagnostics service in each service component we build. This service allows us to make a call which flexes some of the working parts of the system to prove it is working within any SLA. This approach is discussed in the following article: http://geekswithblogs.net/EltonStoneman/archive/2011/12/12/the-value-of-a-diagnostics-service.aspx

    In our solution we wanted to take the same approach, but we had to consider that the service clients were external to the service. We also had to consider that when going through the Windows Azure Service Bus, most standard monitoring solutions do not give you an easy way to check the endpoint from the outside. In a previous article I have described how you can use BizTalk 360 to monitor things using a custom extension to the Web Endpoint Manager, and I felt that we could use this approach to provide an excellent way to monitor our Service Bus endpoint. The previous article is available at the following link: http://geekswithblogs.net/michaelstephenson/archive/2012/09/12/150696.aspx

    The Monitoring Solution
    BizTalk 360 currently has an easy way to hook up the endpoint manager to a URL which it will then call, and if a successful response is returned it considers the endpoint to be in a healthy state. We take advantage of this by creating an ASP.NET web page which is called by BizTalk 360, and behind this page we implement the functionality to call the diagnostics service on our Service Bus endpoint. The ASP.NET page can include logic to work out how to handle the response from the diagnostics service. For example, if the overall result of the diagnostics service was successful but the call took longer than a certain amount of time, then we could return an error and indicate the service is taking too long. The diagnostics service, which is hosted in the line-of-business application, allows us to ping a simple message through the Azure Service Bus relay to the WCF services in the LOB application, and we then get a response back indicating that the service is working fine. To implement this I used the exact same approach I described in my previous post: create a custom web page which calls the diagnostics service and then returns an HTTP response code depending on the error condition returned, or a 200 if it was successful.
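    A minimal sketch of such a check page, written as an HTTP handler. DiagnosticsClient and DiagnosticsResult below are hypothetical stand-ins for the WCF proxy and contract of your own diagnostics service; the real types are whatever your LOB application exposes through the relay:

        // HealthCheck.ashx.cs - the endpoint BizTalk 360 polls.
        using System;
        using System.Diagnostics;
        using System.Web;

        // Hypothetical stand-ins for the generated WCF proxy and its result type.
        public class DiagnosticsResult { public bool Success { get; set; } }
        public class DiagnosticsClient : IDisposable
        {
            public DiagnosticsResult RunChecks()
            {
                // the real implementation pings the LOB service through the
                // Service Bus relay and returns its self-check result
                return new DiagnosticsResult { Success = true };
            }
            public void Dispose() { }
        }

        public class HealthCheck : IHttpHandler
        {
            // fail the check if the diagnostics call exceeds the SLA
            private static readonly TimeSpan SlaThreshold = TimeSpan.FromSeconds(5);

            public void ProcessRequest(HttpContext context)
            {
                var stopwatch = Stopwatch.StartNew();
                try
                {
                    using (var client = new DiagnosticsClient())
                    {
                        var result = client.RunChecks();
                        stopwatch.Stop();

                        if (!result.Success)
                            Fail(context, "Diagnostics reported a failure");
                        else if (stopwatch.Elapsed > SlaThreshold)
                            Fail(context, "Diagnostics succeeded but exceeded the SLA");
                        else
                            context.Response.StatusCode = 200; // healthy
                    }
                }
                catch (Exception ex)
                {
                    Fail(context, "Diagnostics call failed: " + ex.Message);
                }
            }

            private static void Fail(HttpContext context, string reason)
            {
                context.Response.StatusCode = 500; // BizTalk 360 flags the endpoint
                context.Response.Write(reason);
            }

            public bool IsReusable { get { return true; } }
        }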
    One of the limitations of this approach is that the competing consumer pattern for listening to messages from the Service Bus means you cannot guarantee which server will process your diagnostics check message; but with BizTalk 360 you could simply add multiple endpoint checks, so that it accesses the individual on-premise web servers directly to ensure that each server is working fine, and then check that messages can also be processed through the cloud.

    Conclusion
    It took me about 15 minutes to get a proof of concept of this up and running, which was able to monitor our web services that had been exposed via Windows Azure Service Bus. I was then able to inherit all of the monitoring benefits of BizTalk 360 to provide an enterprise-class monitoring solution for our cloud-enabled API.


  • Squibbly: LibreOffice Integration Framework for the Java Desktop

    - by Geertjan
    Squibbly is a new framework for Java desktop applications that need to integrate with LibreOffice, or, more generally, need office features as part of a Java desktop solution that could include, for example, JavaFX components. Why is the framework called Squibbly? Because I needed a unique-ish name, because "squibble" sounds a bit like "scribble" (which is what one does with text documents, etc.), and because of the many absurd definitions in the Urban Dictionary for the apparently real word "squibble", e.g., "A name for someone who is squibblish in nature." And, another e.g., "A squibble is a small squabble. A squabble is a little skirmish." But the real reason is the first definition (and definitely not the fourth definition): "Taking a small portion of another persons something, such as a small hit off of a pipe, a bite of food, a sip of a drink, or drag of a cigarette." In other words, I took (or "squibbled") a small portion of LibreOffice, i.e., OfficeBean, and integrated it into a NetBeans Platform application. Now anyone can add new features to it, to do anything they need, such as create a legislative software system as Propylon has done with their own solution on the NetBeans Platform. For me, the starting point was Chuk Munn Lee's similar solution from some years ago. However, he uses reflection a lot in that solution, because he didn't want to bundle the related JARs with the application. I understand that benefit, but I find it even more beneficial to not require the user to specify the LibreOffice location, since all the necessary JARs and native libraries (currently 32-bit Linux only, by the way) are bundled with the application. Plus, hundreds of lines of reflection code, as in Chuk's solution, is not fun to work with at all. It's a work in progress, a proof of concept only - just the result of a few hours of work to get the basic integration to work. Several problems remain, some of them potentially unsolvable, starting with these, but others will be added here as I identify them:
    - Window management problems. I'd like to let the user have multiple LibreOffice applications and documents open at the same time, each in a new TopComponent. However, I haven't figured out how to do that. Right now, each application is opened into the same TopComponent, replacing the currently open application. I don't know the OfficeBean API well enough, e.g., should a single OfficeBean be shared among multiple TopComponents or should each of them have their own instance of it?
    - Focus problems. When putting the application behind other applications and then switching back to the application, typing text becomes impossible. When closing a TopComponent and reopening it, the content is lost completely. Somehow the loss of focus, and then the return of focus, disables something. No idea how to fix that.
    The project is checked into the location below, which isn't public yet, so you can't access it yet. Once it's publicly available, it would be great to get some code contributions and tweaks, etc.: https://java.net/projects/squibbly The source structure bundles the OfficeBean JARs and native libraries (currently for Linux 32-bit only) with the application. Ultimately, it would be cool to integrate or share code with http://joeffice.com!


  • Migrating VB6 to HTML5 is not a fiction - Customer success story

    - by Webgui
    All of you VB developers, past or present, would probably find it hard to believe that old VB code can be migrated and modernized into the latest .NET-based HTML5 without having to rewrite the application. But we have been working on such tools for the past couple of years and already have several real-world applications that were fully 'transposed' from VB6. The solution is called Instant CloudMove and its main tool is called the TranspositionStudio. It is a unique solution that relies on the concept of transposition. Transposition comes from mathematics and music and refers to exchanging elements while everything else remains the same, or moving an element as-is from one environment to another. This means that we are taking the source code and putting it in a modern technological environment with relatively few adjustments. The concept is based on a set of Mapping Expressions, which are basically links between an element in the source environment and one in the target environment that has the same functionality. About 95% of the code is usually mapped out-of-the-box, and the rest is handled with easy-to-use mapping tools designed for Visual Studio developers, providing them with a familiar environment and concepts for completing the mapping and allowing them to extend and customize existing mapping expressions. The solution is also based on a circular workflow that enables developers to make any changes as required until the result is satisfying. As opposed to existing migration solutions that offer automation but are usually a "black box" to the user, the transposition concept enables full visibility, flexibility and control over the code and process at all times, allowing you to also add or change functionality or upgrade the UI within the process and tools.

    This is exactly the case with our customer's aging VB6 PMS (Property Management System), which needed a technological update as well as a design refresh. The decision was to move the VB6 application, which had about 1 million lines of code, to the latest web technology. Since the application was initially written 13 years ago and has had many upgrades since, the code was very patchy and included unused sections. As a result, the company, Mihshuv Group, considered rewriting the entire application in Java, since it already had the knowledge. A rewrite would allow starting with a clean slate and designing functionality, database architecture and UI without any constraints. On the other hand, a rewrite entails long and detailed specification work as well as thorough QA, and this translates into a long project with high risk and costs. So the company looked for a migration solution as an alternative; the research led to Gizmox, and after examining the technology it was decided to perform a hybrid project: an automatic transposition of the core of the VB6 application (200,000 lines of code), while the UI redesign, new functionality, deletion of unused code and the rewrite of about 140 reports with Crystal Reports would be done manually using Visual WebGui development tools. The migration part of the project was completed in 65 days by 3 developers from Mihshuv Group, guided by Gizmox migration experts, while the rewrite and UI upgrade tasks took about the same. So in only a few months Mihshuv Group produced an up-to-date product, written in the latest web technology, with a modern, friendly UI and improved functionality.
    Screenshots (not reproduced here): the guest selection screen of the original VB6 PMS, and the guest selection screen on the new web-based PMS.

    Compared to the initial plan to rewrite the entire application in Java, the hybrid migration/rewrite approach taken by Mihshuv Group using Gizmox technology proved to be a great decision. In terms of time and cost there were substantial savings: a project that was priced at a year at minimum (without taking into account the huge risk and uncertainty) became a project of only a few months. More about this and other customer stories can be found here


  • New PeopleSoft HCM 9.1 On Demand Standard Edition provides a complete set of IT services at a low, predictable monthly cost

    - by Robbin Velayedam
    At Oracle Open World last month, Oracle announced that we are extending our On Demand offerings with the general availability of PeopleSoft On Demand Standard Edition. Standard Edition represents Oracle’s commitment to providing customers a choice of solutions, technology, and deployment options commensurate with their business needs and future growth. The Standard Edition offering complements the traditional On Demand offerings (Enterprise and Professional Editions) by focusing on a low, predictable monthly cost model that scales with the size of your business.   As part of Oracle's open cloud strategy, customers can freely move PeopleSoft licensed applications between on premise and the various  on demand options as business needs arise.    In today’s business climate, aggressive and creative business objectives demand more of IT organizations. They are expected to provide technology-based solutions to streamline business processes, enable online collaboration and multi-tasking, facilitate data mining and storage, and enhance worker productivity. As IT budgets remain tight in a recovering economy, the challenge becomes how to meet these demands with limited time and resources. One way is to eliminate the variable costs of projects so that your team can focus on the high priority functions and better predict funding and resource needs two to three years out. Variable costs and changing priorities can derail the best laid project and capacity plans. The prime culprits of variable costs in any IT organization include disaster recovery, security breaches, technical support, and changes in business growth and priorities. Customers have an immediate need for solutions that are cheaper, predictable in cost, and flexible enough for long-term growth or capacity changes. The Standard Edition deployment option fulfills that need by allowing customers to take full advantage of the rich business functionality that is inherent to PeopleSoft HCM, while delegating all application management responsibility – such as future upgrades and product updates – to Oracle technology experts, at an affordable and expected price. Standard Edition provides the advantages of the secure Oracle On Demand hosted environment, the complete set of PeopleSoft HCM configurable business processes, and timely management of regular updates and enhancements to the application functionality and underlying technology. Standard Edition has a convenient monthly fee that is scalable by number of employees, which helps align the customer’s overall cost of ownership with its size and anticipated growth and business needs. In addition to providing PeopleSoft HCM applications' world class business functionality and Oracle On Demand's embassy-grade security, Oracle’s hosted solution distinguishes itself from competitors by offering customers the ability to transition between different deployment and service models at any point in the application ownership lifecycle. As our customers’ business and economic climates change, they are free to transition their applications back to on-premise at any time. HCM On Demand Standard Edition is based on configurability options rather than customizations, requiring no additional code to develop or maintain. This keeps the cost of ownership low and time to production less than a month on average. Oracle On Demand offers the highest standard of security and performance by leveraging a state-of-the-art data center with dedicated databases, servers, and secured URL all within a private cloud. 
Customers will not share databases, environments, platforms, or access portals with other customers because we value how mission critical your data are to your business. Oracle’s On Demand also provides a full breadth of disaster recovery services to provide customers the peace of mind that their data are secure and that backup operations are in place to keep their businesses up and running in the case of an emergency. Currently we have over 50 PeopleSoft customers delegating us with the management of their applications through Oracle On Demand. If you are a customer interested in learning more about the PeopleSoft HCM 9.1 Standard Edition and how it can help your organization minimize your variable IT costs and free up your resources to work on other business initiatives, contact Oracle or your Account Services Representative today.


  • Getting Started with Cloud Computing

    - by juanlarios
    You've likely heard about how Office 365 and Windows Intune are great applications to get you started with Cloud Computing. Many of you emailed me asking for more info on what Cloud Computing is, including the distinction between "Public Cloud" and "Private Cloud". I want to address these questions and help you get started. Let's begin with a brief set of definitions and some places to find more info; however, an excellent place where you can always learn more about Cloud Computing is the Microsoft Virtual Academy.

    Public Cloud computing means that the infrastructure to run and manage the applications users are taking advantage of is run by someone else and not you. In other words, you do not buy the hardware or software to run your email or other services being used in your organization - that is done by someone else. Users simply connect to these services from their computers and you pay a monthly subscription fee for each user that is taking advantage of the service. Examples of Public Cloud services include Office 365, Windows Intune, Microsoft Dynamics CRM Online, Hotmail, and others.

    Private Cloud computing generally means that the hardware and software to run services used by your organization is run on your premises, with the ability for business groups to self-provision the services they need based on rules established by the IT department. Generally, Private Cloud implementations today are found in larger organizations, but they are also viable for small and medium-sized businesses, since they generally allow an automation of services and reduction in IT workloads when properly implemented. Having the right management tools, like System Center 2012, to implement and operate Private Cloud is important in order to be successful.

    So - how do you get started? The first step is to determine what makes the most sense to your organization. The nice thing is that you do not need to pick Public or Private Cloud - you can use elements of both where it makes sense for your business - the choice is yours. When you are ready to try and purchase Public Cloud technologies, the Microsoft Volume Licensing web site is a good place to find links to each of the online services. In particular, if you are interested in a trial for each service, you can visit the following pages: Office 365, CRM Online, Windows Intune, and Windows Azure. For Private Cloud technologies, start with some of the courses on Microsoft Virtual Academy, then download and install the Microsoft Private Cloud technologies, including Windows Server 2008 R2 Hyper-V and System Center 2012, in your own environment and take them for a spin. Also, keep up to date with the Canadian IT Pro blog to learn about events Microsoft is delivering, such as the IT Virtualization Boot Camps, and more to get you started with these technologies hands-on.

    Finally, I want to ask for your help to allow the team at Microsoft to continue to provide you what you need. Twice a year, through something we call "The Global Relationship Study", they reach out and contact you to see how they're doing and what Microsoft could do better. If you get an email from "Microsoft Feedback" with the subject line "Help Microsoft Focus on Customers and Partners" between March 5th and April 13th, please take a little time to tell them what you think.

    Cloud Computing Resources:
    - Microsoft Server and Cloud Computing site - information on Microsoft's overall cloud strategy and products.
    - Microsoft Virtual Academy - free online training to help improve your IT skillset.
    - Office 365 Trial/Info page - get more information or try it out for yourself.
    - Office 365 Videos - see how businesses like yours have used Office 365 to transition to the cloud.
    - Windows Intune Trial/Info - get more information or try it out for yourself.
    - Microsoft Dynamics CRM Online page - information on trying and licensing Microsoft Dynamics CRM Online.

    Additional Resources You May Find Useful:
    - Springboard Series - your destination for technical resources, free tools and expert guidance to ease the deployment and management of your Windows-based client infrastructure.
    - TechNet Evaluation Center - try some of our latest Microsoft products for free, like System Center 2012 pre-release products, and evaluate them before you buy.
    - AlignIT Manager Tech Talk Series - a monthly streamed video series with a range of topics for both infrastructure and development managers. Ask questions and participate real-time or watch the on-demand recording.
    - Tech·Days Online - discover what's next in technology and innovation with Tech·Days session recordings, hands-on labs and Tech·Days TV.


  • EMEA Analytics & Data Integration Oracle Partner Forum

    - by milomir.vojvodic
    MONDAY 12TH NOVEMBER, 2012 IN LONDON (UK). For Oracle Partners across Europe, the Middle East and Africa: come to hear the latest news from Oracle OpenWorld about Oracle BI & Data Integration, and propel your business growth as an Oracle partner. This event should appeal to BI or Data Integration specialized partners, executives, sales, pre-sales and solution architects, with a choice of participation in the plenary day and then a set of special interest (technical) sessions. The follow-on breakout sessions from the 13th November provide deeper dives and technical training for those of you who wish to stay for more detailed and hands-on workshops.

    Keynote: Andrew Sutherland, SVP Oracle Technology

    Hot agenda items will include:
    - The Fusion Middleware Stack: Engineered to work together
    - A complete Analytics and Data Integration Solution Architecture: Big Data and Little Data combined
    - In-Memory Analytics for Extreme Insight
    - Latest Product Development Roadmap for Data Integration and Analytics

    Venue: Oracle's London City (Moorgate) offices. Places are limited; register from this link. Note: Registration for the conference and the deeper dives and technical training is free of charge to OPN member partners, but you will be responsible for your own travel and hotel expenses.

    Event Schedule: During this event you can learn about partner success stories, participate in an array of break-out sessions, exchange information with other partners and enjoy a vibrant panel discussion.

    Nov. 12th: Day 1 Main Plenary Session, full day, starting 10.30 am; Oracle-hosted dinner in the evening.
    Nov. 13th onwards:
    - Architecture Masterclass: IM Reference Architecture - Big Data and Little Data combined (1 day)
    - BI-Apps Bootcamp (4 days)
    - Oracle GoldenGate workshop (1 day)
    - Oracle Data Integrator and Oracle Enterprise Data Quality workshop (1 day)

    For further information and detail download the Agenda (pdf) or contact Michael Hallett at [email protected] and Milomir Vojvodic at [email protected]


  • Moving StarterSTS to the (Azure) Cloud

    - by Your DisplayName here!
    Quite a few people asked me about an Azure version of StarterSTS. While I kinda knew what I had to do to make the move, I couldn't find the time. Until recently. This blog post briefly documents the necessary changes and design decisions for the next version of StarterSTS, which will work both on-premise and on Azure.

    Providers
    Fortunately StarterSTS is already based on the idea of "providers". Authentication, roles and claims generation is based on the standard ASP.NET provider infrastructure. This makes the migration to different data stores less painful. In my case I simply moved the ASP.NET provider database to SQL Azure and still use the standard SQL Server based membership, roles and profile providers. In addition StarterSTS has its own providers to abstract resource access for certificates, relying party registration, client certificate registration and delegation. So I only had to provide new implementations. Signing and SSL keys now go in the Azure certificate store, and user mappings (client certificates and delegation settings) have been moved to Azure table storage. The one thing I didn't anticipate when I originally wrote StarterSTS was the need to also encapsulate configuration. Currently configuration is "locked" to the standard .NET configuration system. The new version will have a pluggable SettingsProvider, with versions for .NET configuration as well as Azure service configuration. If you want to externalize these settings into e.g. a database, it is now just a matter of supplying a corresponding provider. Moving between the on-premise and Azure versions will be just a matter of using different providers.

    URL Handling
    Another thing that's substantially different on Azure (and load-balanced scenarios in general) is the handling of URLs. In farm scenarios, the standard APIs like ASP.NET's Request.Url return the current (internal) machine name, but you typically need the address of the external-facing load balancer. There's a hotfix for WCF 3.5 (included in v4) that fixes this for WCF metadata. This was accomplished by using the HTTP Host header to generate URLs instead of the local machine name. I now use the same approach for generating WS-Federation metadata as well as information card files.

    New Features
    I introduced a cache provider. Since we now have slightly more expensive lookups (e.g. relying party data from table storage), it makes sense to cache certain data in the front end. The default implementation uses the ASP.NET web cache and can be easily extended to use products like memcached or AppFabric Caching. Starting with the relying party provider, I now also provide a read/write interface. This allows building management interfaces on top of this provider. I also include a (very) simple web page that allows working with the relying party provider data. I guess I will use the same approach for other providers in the future as well. I am also doing some work on the tracing and health monitoring area. Especially important for the Azure version. Stay tuned.
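    A minimal sketch of what such a pluggable settings provider might look like (the interface name and members are assumptions; StarterSTS's actual contract may differ):

        // Sketch of a pluggable settings provider (names are assumptions,
        // not the actual StarterSTS contract).
        using System.Configuration;
        using Microsoft.WindowsAzure.ServiceRuntime;

        public interface ISettingsProvider
        {
            string GetSetting(string key);
        }

        // On-premise: back the settings with standard .NET configuration.
        public class ConfigFileSettingsProvider : ISettingsProvider
        {
            public string GetSetting(string key)
            {
                return ConfigurationManager.AppSettings[key];
            }
        }

        // Azure: back the settings with the service configuration (.cscfg),
        // which can be changed without redeploying the role.
        public class AzureSettingsProvider : ISettingsProvider
        {
            public string GetSetting(string key)
            {
                return RoleEnvironment.GetConfigurationSettingValue(key);
            }
        }

    Swapping the provider is then the only difference between the on-premise and Azure deployments, which is the point of the design.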

    Read the article

  • Setting up your project

    - by ssoolsma
    Before any coding we first make sure that the project is set up correctly. (Please note that this blog is all about how I do it, and in case I forget, I can return here and read how I used to do it. Maybe you'll come up with some ideas for yourself too.) In this series we will create a minigolf scoring card. Please note that we will eventually create a fully functional application which you cannot use unless you pay me a lot of money! (And I mean a lot!)
    1. Download and install the appropriate tools. Download the following:
    - TestDriven.Net (free version at the bottom of the download page)
    - NUnit
    TestDriven is a Visual Studio plugin for many unit test frameworks, which allows you to run / test code very easily with a right click -> Run Test. NUnit is the test framework of choice; it works seamlessly with TestDriven.
    2. Create your project. Fire up Visual Studio and create your data-access project: MidgetWidget.DataAccess is its name. (I chose MidgetWidget as the name for the solution.) Also, make sure that the MidgetWidget.DataAccess project is a C# Class Library. Hit OK to create the solution. (In the example above, the checkbox "Create directory for solution" is checked, because I'm pointing the location to the root of c:\development, where I want MidgetWidget to be created.)
    3. Set up the database. You should have thought about a database by the time you reach this point. Let's assume that you've created a database as follows:
    Table name: LoginKey. Fields: Id (PK), KeyName (uniqueidentifier), StartDate (datetime), EndDate (datetime)
    Table name: Party. Fields: Id (PK), Key (uniqueidentifier), Created (datetime)
    Table name: Person. Fields: Id (PK), PartyId (int), Name (varchar)
    Table name: Score. Fields: Id (PK), TrackId (int), PersonId (int), Strokes (int)
    Table name: Track. Fields: Id (PK), Name (varchar)
    A few things to note about the database setup: I've singularized all table names ("Person", not "Persons"), because in a few minutes, when this is in our code, we refer to the database objects as single rows. We retrieve a single Person, not a single "Persons", from the database.
    4. Create the Entity Framework model. In your solution tree create a new folder and call it "DataModel". Inside this folder: Add New Item -> choose ADO.NET Entity Data Model. Name it "Entities.edmx" and hit "Add". Once the edmx is added, open it (double click), right-click the white area and choose "Update model from database…". Now point it to your database (I include sensitive data in the connection string) and select all the tables. After that hit "Finish" and let the Entity Framework do its code generation. Et voilà, after a few seconds you have set up your entity model. Next post we will start building the data access (a small taste of what querying the model will look like follows below). I'm off to the beach.
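    Once the model is generated, querying it is straightforward LINQ to Entities. A minimal sketch, assuming the designer generated a context class named Entities with singular entity sets as above (the exact generated names may differ):

    using System.Collections.Generic;
    using System.Linq;

    public class PersonQueries
    {
        // Fetch the names of everyone registered for a given party.
        public List<string> GetPersonNamesForParty(int partyId)
        {
            using (var context = new Entities())
            {
                return context.Person
                              .Where(p => p.PartyId == partyId)
                              .Select(p => p.Name)
                              .ToList();
            }
        }
    }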

    Read the article

  • Customer Loyalty vs. Customer Engagement: Who Cares?

    - by Jeb Dasteel-Oracle
    Have you read the recent Forbes OracleVoice blog titled Customer Loyalty is Dead. Long Live Engagement!? If you haven't, take a look. This article prompted lots of conversation in the social realm. Many who read the article voiced their reactions to the headline, and now I'm jumping in to add my view. Customer loyalty is still key. It's the effect, and engagement is the cause. We at least know that to be true for our customers. We are in an age where customers are demanding to be heard. We need them to be actively involved, or engaged, as well. Greater levels of customer engagement, properly targeted, positively correlate with satisfaction. Our data has shown us this over and over. Satisfied customers are more loyal and more willing to vocalize their satisfaction through referencing, and are more likely to purchase again, all of which in turn drives incremental revenue – from the customer doing the referencing AND the customer on the receiving end of that reference. Turning this around completely, if we begin to see the level of a customer's engagement start to wane, this is an indicator that their satisfaction, loyalty, and future revenue are likely at risk. At Oracle, we've put in place many programs to target, encourage, and then track engagement, allowing us to measure engagement as a determinant of loyalty. Some of these programs include our Key Accounts, solution design and architecture, and Executive Sponsorship programs, as well as executive advisory boards. Specific programs allow us to engage specific contacts within specific customer organizations (based on role) and then systematically track their engagement activities over time, alongside tracking customer satisfaction, loyalty, referenceability, and incremental revenue contribution. Continuous measurement of engagement allows us to better understand customer views of what it means to partner with a provider, and to adjust program participation to better meet the needs of the partnership. We can also track across customer segments, and design new programs that are even more effective than the ones we have in place today. In case you missed any of my previous Forbes articles, I've included links below for easy access:
    - Award-Winning Companies Put Customers First
    - The Power of Peer Networks: 5 Reasons to Get (and Stay) Involved
    - Technology At Work: Traveling In Style
    - Customer Central: 8 Strategies for Putting Customers at the Core of Your Business
    - Technology at Work: Five Companies Doing IT Right

    Read the article

  • Silverlight Binding with multiple collections

    - by George Evjen
    We're designing some sport-specific applications. In one of our views we have a grid view that is bound to an observable collection of Teams. This is pretty straightforward in terms of getting Teams bound to the GridView:

    <telerik:RadGridView Grid.Row="0" Grid.Column="0" x:Name="UsersGrid"
                         ItemsSource="{Binding TeamResults}"
                         SelectedItem="{Binding SelectedTeam, Mode=TwoWay}">
        <telerik:RadGridView.Columns>
            <telerik:GridViewDataColumn Header="Name/Group" DataMemberBinding="{Binding TeamName}" MinWidth="150"></telerik:GridViewDataColumn>
        </telerik:RadGridView.Columns>
    </telerik:RadGridView>

    We use the observable collection of Teams as our items source and then bind the TeamName property to the first column. You can set the binding to Mode=TwoWay; we use a dialog where we edit the selected item, so our binding here is not set to two-way. The issue comes when we want to bind to a property that holds another collection. To continue with our code from above, we have an observable collection of Teams, and within that collection we have a collection of KeyPeople. We get this collection using RIA Services with the code below:

    return _TeamsRepository.All().Include("KeyPerson");

    Here we are getting all the Teams and also including the KeyPerson entity. So when we are done with our load we will end up with an observable collection of Teams with a navigation property / entity of KeyPerson. Within this KeyPerson entity is a list of people associated with that particular team. This list currently holds ten or more people bound to a particular team, but we just want to display the head coach in the column next to the team name. The issue becomes: how do we bind to this included entity? I have found about three different ways to solve this issue, and the way that seemed to fit us best is to utilize the features within RIA Services. We can create client-side properties that will do the work for us. In the client-side library we will create a partial class of Team, ending up with a file named Team.shared.cs. The code below is what we will put into our partial Team class:

    public KeyPerson Coach
    {
        get
        {
            if (this.KeyPerson != null && this.KeyPerson.Any())
            {
                return this.KeyPerson.Where(x => x.RelationshipType == "HeadCoach").FirstOrDefault();
            }
            return null;
        }
    }

    We return just the person that is the head coach and are then able to bind that and any other additional properties that we need:

    <telerik:GridViewDataColumn Header="Coach" DataMemberBinding="{Binding Coach.Name}" MinWidth="150"></telerik:GridViewDataColumn>

    There are other ways that we could have solved this issue, but we felt that creating a partial class through RIA Services best suited our needs.
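    For completeness, one of those other ways would be a value converter that pulls the head coach out of the bound collection at binding time. A rough sketch, reusing the entity names above (this is our illustration, not the post's code):

    using System;
    using System.Collections.Generic;
    using System.Globalization;
    using System.Linq;
    using System.Windows.Data;

    // Converts a Team's KeyPerson collection into the head coach entity for binding.
    public class HeadCoachConverter : IValueConverter
    {
        public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
        {
            var keyPeople = value as IEnumerable<KeyPerson>;
            if (keyPeople == null)
            {
                return null;
            }
            return keyPeople.FirstOrDefault(x => x.RelationshipType == "HeadCoach");
        }

        public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
        {
            // One-way binding only; there is nothing sensible to convert back.
            throw new NotImplementedException();
        }
    }

    The trade-off is that the filtering logic moves into presentation plumbing and has to be wired up as a XAML resource, which is partly why the shared partial class felt cleaner to us.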

    Read the article

  • Writing a method to 'transform' an immutable object: how should I approach this?

    - by Prog
    (While this question has to do with a concrete coding dilemma, it's mostly about the best way to design a function.) I'm writing a method that should take two Color objects and gradually transform the first Color into the second one, creating an animation. The method will be in a utility class. My problem is that Color is an immutable object. That means that I can't do color.setRGB or color.setBlue inside a loop in the method. What I can do is instantiate a new Color and return it from the method. But then I won't be able to gradually change the color. So I thought of three possible solutions:
    1- The client code includes the method call inside a loop. For example:

    int duration = 1500; // duration of the animation in milliseconds
    int steps = 20;      // how many 'cycles' the animation will take
    for (int i = 0; i < steps; i++)
        color = transformColor(color, targetColor, duration, steps);

    And the method would look like this:

    Color transformColor(Color original, Color target, int duration, int steps) {
        int redDiff = target.getRed() - original.getRed();
        int redAddition = redDiff / steps;
        int newRed = original.getRed() + redAddition;
        // same for green and blue ..
        Thread.sleep(duration / steps); // exception handling omitted
        return new Color(newRed, newGreen, newBlue);
    }

    The disadvantage of this approach is that the client code has to "do part of the method's job" and include a for loop. The method doesn't do its work entirely on its own, which I don't like.
    2- Make a mutable Color subclass with methods such as setRed, and pass objects of this class into transformColor. Then it could look something like this:

    void transformColor(MutableColor original, Color target, int duration) {
        final int STEPS = 20;
        int redDiff = target.getRed() - original.getRed();
        int redAddition = redDiff / STEPS;
        int newRed = original.getRed() + redAddition;
        // same for green and blue ..
        for (int i = 0; i < STEPS; i++) {
            original.setRed(original.getRed() + redAddition);
            // same for green and blue ..
            Thread.sleep(duration / STEPS); // exception handling omitted
        }
    }

    Then the calling code would usually look something like this:

    // The method will usually transform colors of JComponents
    JComponent someComponent = ... ;
    // setting the Color in the JComponent to be a MutableColor
    Color mutableColor = new MutableColor(someComponent.getForeground());
    someComponent.setForeground(mutableColor);
    // later, transforming the Color in the JComponent
    transformColor((MutableColor)someComponent.getForeground(), new Color(200,100,150), 2000);

    The disadvantages are the need to create a new class, MutableColor, and the need to do casting.
    3- Pass into the method the actual mutable object that holds the color. Then the method could call object.setColor or similar on every iteration of the loop. Two disadvantages: A- Not so elegant. Passing in the object that holds the color just to transform the color feels unnatural. B- While most of the time this method will be used to transform colors inside JComponent objects, other kinds of objects may have colors too. So the method would need to be overloaded to receive other types, or receive Objects and have instanceof checks inside. Not optimal.
    Right now I like solution #2 the most, solution #1 less, and solution #3 the least. However, I'd like to hear your opinions and suggestions regarding this.

    Read the article

  • MySQL Connector/Net 6.6.2 has been released

    - by fernando
    MySQL Connector/Net 6.6.2, a new version of the all-managed .NET driver for MySQL, has been released. This is the first of two beta releases intended to introduce users to the new features in the release. This release is feature complete and should be stable enough for users to understand the new features and how we expect them to work. As is the case with all non-GA releases, it should not be used in any production environment. It is appropriate for use with MySQL server versions 5.0-5.6. It is now available in source and binary form from http://dev.mysql.com/downloads/connector/net/#downloads and mirror sites (note that not all mirror sites may be up to date at this point; if you can't find this version on some mirror, please try again later or choose another download site). The 6.6 version of MySQL Connector/Net brings the following new features:
    * Stored routine debugging
    * Entity Framework 4.3 Code First support
    * Pluggable authentication (third parties can now plug new authentication mechanisms into the driver)
    * Full Visual Studio 2012 support: everything from Server Explorer to IntelliSense and the stored routine debugger
    Stored Procedure Debugging
    -------------------------------------------
    We are very excited to introduce stored procedure debugging into our Visual Studio integration. It works in a very intuitive manner: simply click 'Debug Routine' from Server Explorer. You can debug stored routines, functions and triggers. Some of the new features in this release include:
    * Besides normal breakpoints, you can define conditional and pass-count breakpoints.
    * The debugger editor now shows colorizing.
    * You can now change the values of locals in a function scope (previously this caused a deadlock due to functions executing within their own transaction).
    * You can now also debug triggers for 'replace' SQL statements.
    * In general, anything related to locals, watches, breakpoints, stepping and the call stack should work in a similar way to C#'s Visual Studio debugger.
    Some limitations remain, due to the current debugger architecture:
    * Some MySQL functions cannot currently be debugged (get_lock, release_lock, begin, commit, rollback, set transaction level).
    * Only one debug session may be active on a given server.
    The debugger is feature complete at this point. We look forward to your feedback.
    Documentation
    -------------------------------------
    The documentation is still being developed and will be readily available soon (before Beta 2). You can view the current Connector/Net documentation at http://dev.mysql.com/doc/refman/5.5/en/connector-net.html. You can find our team blog at http://blogs.oracle.com/MySQLOnWindows. You can also post questions on our forums at http://forums.mysql.com/. Enjoy and thanks for the support!
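    For anyone trying the beta, basic driver usage is unchanged from earlier 6.x releases. A minimal connect-and-query sketch (the connection details here are placeholders, not defaults):

    using System;
    using MySql.Data.MySqlClient;

    class ConnectorSmokeTest
    {
        static void Main()
        {
            // Placeholder credentials and schema; substitute your own server details.
            const string connectionString =
                "server=localhost;user=demo;password=demo;database=test";

            using (var connection = new MySqlConnection(connectionString))
            {
                connection.Open();
                using (var command = new MySqlCommand("SELECT VERSION()", connection))
                {
                    // Prints the version of the MySQL server we connected to.
                    Console.WriteLine(command.ExecuteScalar());
                }
            }
        }
    }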

    Read the article

  • Pulling My Hair Out - PHP Forms [migrated]

    - by Joe Turner
    Hello and good morning to all. This is my second post on this subject, because the first time things still didn't work, and I have now literally been trying to solve this for about 4/5 days straight... I have a file called 'edit.php'; in this file is a form:

    <?php
    $company = $_POST["company"];
    $phone = $_POST["phone"];
    $colour = $_POST["colour"];
    $email = $_POST["email"];
    $website = $_POST["website"];
    $video = $_POST["video"];
    $image = $_POST["image"];
    $extension = $_POST["extension"];
    ?>
    <form method="post" action="generate.php"><br>
    <input type="text" name="company" placeholder="Company Name" /><br>
    <input type="text" name="slogan" placeholder="Slogan" /><br>
    <input class="color {required:false}" name="colour" placeholder="Company Colour"><br>
    <input type="text" name="phone" placeholder="Phone Number" /><br>
    <input type="text" name="email" placeholder="Email Address" /><br>
    <input type="text" name="website" placeholder="Full Website - Include http://" /><br>
    <input type="text" name="video" placeholder="Video URL" /><br>
    <input type="submit" value="Generate QuickLinks" style="background:url(images/submit.png) repeat-x; color:#FFF"/>
    </form>

    Then, when the form is submitted, it creates a file using the variables that have been input. The fields that have been filled in go on to become links. I need to be able to say 'if a field is left blank, then put XXX in as a default value'. Does anyone have any ideas? I really think I have tried everything. Below is a snippet from the .php file that generates the links...

    <?php
    $File = "includes/details.php";
    $Handle = fopen($File, 'w');
    $Data = "<div id='logo'>
    <img width='270px' src='images/logo.png' />
    <h1 style='color:#$_POST[colour]'>$_POST[company]</h1>
    <h2>$_POST[slogan]</h2>
    </div>
    <ul>
    <li><a class='full-width button' href='tel:$_POST[phone]'>Phone Us</a></li>
    <li><a class='full-width button' href='mailto:$_POST[email]'>Email Us</a></li>
    <li><a class='full-width button' href='$_POST[website]'>View Full Website</a></li>
    <li><a class='full-width button' href='$_POST[video]'>Watch Us</a></li>
    </ul>
    \n";

    I really do look forward to any response...

    Read the article

  • WebLogic How-to Videos: Install, Upgrade, & Patch

    - by Ruma Sanyal
    Here is another great YouTube video by our product manager Monica Riccelli. She talks about installers now being standardized in Oracle for greater consistency – no more WebLogic native installers. Also, the JDK is no longer part of the WebLogic install. The various installers she discusses include OUI, ZIP, OEPE, Coherence and more. Monica then takes us through a step-by-step install process. After the install process is complete, the video takes us through the configuration wizard. The ZIP installer is then discussed, highlighting both its strengths (it is the smallest downloadable option, easy to use, and very popular with our customers) and its limitations (it is for development only and not to be used in production). Monica then covers the configuration wizard, its usage, and when to use WLST scripts. The video then discusses NodeManager and its usage, and how to reconfigure a WebLogic domain on upgrade – through our GUI tools or through the command line interface. Lastly, it highlights OPatch – a patch application tool used by our customers and standardized across all Oracle products. Really detailed video. Check it out!

    Read the article

  • Multichannel Digital Engagement: Find Out How Your Organization Measures Up

    - by Michael Snow
    This article was originally published in the September 2013 edition of the Oracle Information InDepth Newsletter, Oracle WebCenter Edition. Thanks to mobile and social technologies, interactive online experiences are now commonplace. Not only that, they give consumers more choices, influence, and control than ever before. So how can you make your organization stand out? The key building blocks for delivering exceptional cross-channel digital experiences are outlined below. Also, a new assessment tool is available to help you measure your organization's ability to deliver such experiences.
    - A clearly defined digital strategy. The customer journey is growing increasingly complex, encompassing multiple touchpoints and channels. It used to be easy to map marketing efforts to specific offline channels; for example, a direct mail piece with an offer to visit a store for a discounted purchase. Now it is more difficult to cultivate and track such clear cause-and-effect relationships. To deliver an integrated digital experience in this more complex world, organizations need a clearly defined and comprehensive digital marketing strategy that is backed up by an integrated set of software, middleware, and hardware solutions.
    - Strong support for business agility and speed-to-market. As both IT and marketing executives know, speed-to-market and business agility are key to competitive advantage. That means marketers need solutions to support the rapid implementation of online marketing initiatives, plus the flexibility to adapt quickly to a changing marketplace. And IT needs tools with the performance, scalability, and ease of integration to support marketing efforts. Both teams benefit when business users are empowered to implement marketing initiatives on their own, with minimal IT intervention.
    - The ability to deliver relevant, personalized content. Delivering a one-size-fits-all online customer experience is no longer acceptable. Customers expect you to know who they are, including their preferences and past relationship with your brand. That means delivering the most relevant content from the moment a visitor enters your site. To make that happen, you need a powerful rules engine so that marketers and business users can easily define site visitor segments and deliver content accordingly. That includes both implicit targeting that is based on the user's behavior, and explicit targeting that takes a user's profile information into account. Ideally, the rules engine can also intelligently weight recommendations when multiple segments apply to a specific customer.
    - Support for social interactivity. With the advent of Facebook and LinkedIn, visitors expect to participate in and contribute to your web presence, and to share their experience on their own social networks. That requires easy incorporation of user-generated content such as comments, ratings, reviews, polls, and blogs; seamless integration with third-party social networking sites; and support for social login, which helps to remove barriers to social participation.
    - The ability to deliver connected, multichannel experiences that include powerful, flexible mobile capabilities. By 2015, mobile usage is projected to surpass that of PCs and other wired devices. In other words, mobile is an essential element in delivering exceptional online customer experiences. This requires the creation and management of mobile experiences that are optimized for delivery to the thousands of different devices that are in use today. Just as important, organizations must be able to easily extend their traditional web presence to the mobile channel and deliver highly personalized and relevant multichannel marketing initiatives, while minimizing the time and effort required to manage mobile sites.
    Are you curious to know how your organization measures up when it comes to delivering an engaging, multichannel digital experience? If so, take this brief, 15-question online assessment and see how your organization scores in the areas of digital strategy, digital agility, relevance and personalization, social interactivity, and multichannel experience.

    Read the article

  • "The connection has timed out" - Please help!

    - by gon
    I recently installed a fresh Ubuntu 12.04 LTS on a desktop. The installation itself was successful (other than a 'grub rescue' issue that I encountered but fixed), but this connection problem is really giving me a headache. Symptoms:
    1. When I open the Firefox browser and try to connect to a website, it hangs for a while saying "Connecting..." but eventually loads an error page, "The connection has timed out".
    2. It's not a browser problem (and I tried setting the ipv6 thing to "true" at about:config), because running "sudo apt-get install [some-random-package]" at a terminal fails too ("E: Unable to locate package [package]"). All other operations that need internet access are not working.
    3. I certainly see a wired network (called "eth1") in the Network Manager, and it says "Connection Established" after disconnecting and then connecting again.
    I have tried almost everything that could be found from google search results, still no luck. Their problems slightly differ from mine, or the solutions just don't work. By the way, the machine didn't have internet access when installing Ubuntu 12.04. (I ignored the message that I need internet to install Ubuntu.) Could this be a problem? I'm sorry, I don't remember if internet worked or not on the previous version of Ubuntu. :( I would really appreciate your help... I don't even know what more to do if this fails too. Thanks!!
    Thanks for your comment. Here is the result of ifconfig:
    eth0      Link encap:Ethernet  HWaddr 78:ac:c0:3d:b2:b9
              inet addr:10.10.65.185  Bcast:10.10.65.255  Mask:255.255.255.0
              inet6 addr: fe80::7aac:c0ff:fe3d:b2b9/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:3907 errors:0 dropped:0 overruns:0 frame:0
              TX packets:771 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:393118 (393.1 KB)  TX bytes:73472 (73.4 KB)
              Interrupt:16
    eth1      Link encap:Ethernet  HWaddr 78:ac:c0:3d:b2:b8
              UP BROADCAST MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
              Interrupt:17
    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:4 errors:0 dropped:0 overruns:0 frame:0
              TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:204 (204.0 B)  TX bytes:204 (204.0 B)
    route -n:
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    0.0.0.0         10.10.65.1      0.0.0.0         UG    0      0        0 eth0
    10.10.65.0      0.0.0.0         255.255.255.0   U     1      0        0 eth0
    169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 eth0
    /etc/resolv.conf:
    # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
    # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
    nameserver 8.8.8.8
    nameserver 8.8.4.4
    nameserver 10.81.1.8
    nameserver 10.1.2.10
    nameserver 127.0.0.1
    search yamatake.local
    /etc/network/interfaces:
    auto lo
    iface lo inet loopback
    #auto eth0
    #iface eth0 inet dhcp
    #auto eth1
    #iface eth1 inet dhcp
    And I'll also include the result of 'sudo lshw -C network' in case it might help:
    *-network
         description: Ethernet interface
         product: NetXtreme BCM5764M Gigabit Ethernet PCIe
         vendor: Broadcom Corporation
         physical id: 0
         bus info: pci@0000:02:00.0
         logical name: eth0
         version: 10
         serial: 78:ac:c0:3d:b2:b9
         size: 100Mbit/s
         capacity: 1Gbit/s
         width: 64 bits
         clock: 33MHz
         capabilities: pm vpd msi pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
         configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.121 duplex=full firmware=5764m-v3.35 ip=10.10.65.185 latency=0 link=yes multicast=yes port=twisted pair speed=100Mbit/s
         resources: irq:93 memory:fc000000-fc00ffff
    *-network
         description: Ethernet interface
         product: NetXtreme BCM5764M Gigabit Ethernet PCIe
         vendor: Broadcom Corporation
         physical id: 0
         bus info: pci@0000:01:00.0
         logical name: eth1
         version: 10
         serial: 78:ac:c0:3d:b2:b8
         size: 100Mbit/s
         capacity: 1Gbit/s
         width: 64 bits
         clock: 33MHz
         capabilities: pm vpd msi pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
         configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.121 duplex=full firmware=5764m-v3.35 latency=0 link=no multicast=yes port=twisted pair speed=100Mbit/s
         resources: irq:94 memory:fb000000-fb00ffff

    Read the article

  • Using NSpec at various architectural layers

    - by nono
    Having read the quick start at nspec.org, I realized that NSpec might be a useful tool in a scenario which was becoming a bit cumbersome with NUnit alone. I'm adding OAuth (or rather, DotNetOpenAuth) to a website and quickly made a mess of writing test methods such as:

    [Test]
    public void UserIsLoggedInLocallyPriorToInvokingExternalLoginAndExternalLoginSucceedsAndExternalProviderIdIsNotAlreadyAssociatedWithUserAccount()
    {
        ...
    }

    ... and I wound up with maybe a dozen permutations of this theme: the user already being logged in locally or not, the external login succeeding or failing, and so on. Not only were the method names unwieldy, but every test needed a setup that contained parts in common with a different set of other tests. I realized that NSpec's incremental setup capabilities would work great for this, and for a while I was trucking along wonderfully, with code like:

    act = () => { actionResult = controller.ExternalLoginCallback(returnUrl); };

    context["The user is already logged in"] = () =>
    {
        before = () => identity.Setup(x => x.IsAuthenticated).Returns(true);

        context["The external login succeeds"] = () =>
        {
            before = () => oauth.Setup(x => x.VerifyAuthentication(It.IsAny<string>()))
                .Returns(new AuthenticationResult(true, providerName, "provideruserid", "username", new Dictionary<string, string>()));

            context["External login already exists for current user"] = () =>
            {
                before = () => authService.Setup(x => x.ExternalLoginExistsForUser(It.IsAny<string>(), It.IsAny<string>(), It.IsAny<string>())).Returns(true);

                it["Should add 'login successful' alert"] = () =>
                {
                    var alerts = (IList<Alert>)controller.TempData[TempDataKeys.AlertCollection];
                    alerts[0].Message.should_be_same("Login successful");
                    alerts[0].AlertType.should_be(AlertType.Success);
                };

                it["Should return a redirect result"] = () => actionResult.should_cast_to<RedirectToRouteResult>();
            };

            context["External login already exists for another user"] = () =>
            {
                before = () => authService.Setup(x => x.ExternalLoginExistsForAnyOtherUser(It.IsAny<string>(), It.IsAny<string>(), It.IsAny<string>())).Returns(true);

                it["Adds an error alert"] = () =>
                {
                    var alerts = (IList<Alert>)controller.TempData[TempDataKeys.AlertCollection];
                    alerts[0].Message.should_be_same("The external login you requested is already associated with a different user account");
                    alerts[0].AlertType.should_be(AlertType.Error);
                };

                it["Should return a redirect result"] = () => actionResult.should_cast_to<RedirectToRouteResult>();
            };

    This approach seemed to work magnificently until I prepared to write test code for my ApplicationServices layer, to which I delegate viewmodel manipulation from my MVC controllers, and which coordinates the operations of the lower data repository layer:

    public void CreateUserAccountFromExternalLogin(RegisterExternalLoginModel model)
    {
        throw new NotImplementedException();
    }

    public void AssociateExternalLoginWithUser(string userName, string provider, string providerUserId)
    {
        throw new NotImplementedException();
    }

    public string GetLocalUserName(string provider, string providerUserId)
    {
        throw new NotImplementedException();
    }

    I have no idea what in the world to name the test class or the test methods, or even whether I should fold the testing for this layer into the test class from the large code snippet above, so that a single feature or user action could be tested without regard to architectural layering. I can't find any tutorials or blog posts which cover more than simple examples, so I would appreciate any recommendations or pointers in the right direction. I would even welcome "your question is invalid"-type answers, as long as some explanation is provided.
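    One plausible shape for a service-layer spec, staying in NSpec's context/it style (the class name, fake repository, and constructor wiring below are illustrative assumptions, not code from the question):

    using NSpec;

    class describe_AccountServices : nspec
    {
        // Hypothetical system under test; in practice you'd hand it fake repositories.
        AccountServices services;

        void before_each()
        {
            services = new AccountServices(new FakeUserRepository());
        }

        void when_associating_an_external_login_with_a_user()
        {
            it["records the provider and provider user id against the user"] = () =>
            {
                services.AssociateExternalLoginWithUser("bob", "Google", "123");
                services.GetLocalUserName("Google", "123").should_be("bob");
            };
        }
    }

    Naming the spec class after the service under test sidesteps the permutation-heavy NUnit method names, while feature-level specs that cut across layers can still live alongside the controller specs if that proves more natural.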

    Read the article

< Previous Page | 486 487 488 489 490 491 492 493 494 495 496 497  | Next Page >