Search Results

Search found 14354 results on 575 pages for 'existing records'.


  • We've completed the first iteration

    - by CliveT
    There are a lot of features in C# that are implemented by the compiler and not by the underlying platform. One such feature is the lambda expression. Since local variables cannot be accessed once the current method activation finishes, the compiler has to go out of its way to generate a new class which acts as a home for any variable whose lifetime needs to be extended past the activation of the procedure. Take the following example:

        Random generator = new Random();
        Func<int> func = () => generator.Next(10);

    In this case, the compiler generates a new class called <>c__DisplayClass1 which is marked with the CompilerGenerated attribute.

        [CompilerGenerated]
        private sealed class <>c__DisplayClass1
        {
            // Fields
            public Random generator;

            // Methods
            public int b__0()
            {
                return this.generator.Next(10);
            }
        }

    Two quick comments on this:

    (i) A display was the means by which compilers for languages like Algol recorded the various lexical contours of nested procedure activations on the stack. I imagine that this is what has led to the name.

    (ii) It is a shame that the same attribute is used to mark all compiler-generated classes, as it makes it hard to figure out what they are being used for. Indeed, you could imagine optimisations that the runtime could perform if it knew that certain classes corresponded to certain high-level concepts.

    We can see that the local variable generator has been turned into a field of the class, and the body of the lambda expression has been turned into a method of the new class. The code that builds the Func<int> object simply constructs an instance of this class and initialises the fields to their initial values.

        <>c__DisplayClass1 class2 = new <>c__DisplayClass1();
        class2.generator = new Random();
        Func<int> func = new Func<int>(class2.b__0);

    Reflector already contains code to spot this pattern and reproduce the form containing the lambda expression, so this example is correctly decompiled.

    The use of compiler-generated code is even more spectacular in the case of iterators. C# introduced the idea of a method that can automatically store its state between calls, so that it can pick up where it left off. The code expresses the logical flow with yield return and yield break denoting the places where the method should return a particular value and be prepared to resume.

        {
            yield return 1;
            yield return 2;
            yield return 3;
        }

    Of course, there was already a .NET pattern for expressing the idea of returning a sequence of values with the computation proceeding lazily (in the sense that the work for the next value is executed on demand). This is expressed by the IEnumerable/IEnumerator interfaces, with the enumerator's Current property for fetching the current value and its MoveNext method for forcing the computation of the next value. The sequence is terminated when MoveNext returns false. The C# compiler links these two ideas together: a method that returns IEnumerable (or IEnumerator) and uses the yield keyword causes the compiler to produce the implementation of an iterator. Take the following piece of code.

        IEnumerable<int> GetItems()
        {
            yield return 1;
            yield return 2;
            yield return 3;
        }

    The compiler implements this by defining a new class that implements a state machine. This has an integer state that records which yield point we should go to if we are resumed. It also has a field that records the Current value of the enumerator and a field for recording the thread.
    This latter value is used for optimising the creation of iterator instances.

        [CompilerGenerated]
        private sealed class <GetItems>d__0 : IEnumerable<int>, IEnumerable, IEnumerator<int>, IEnumerator, IDisposable
        {
            // Fields
            private int <>1__state;
            private int <>2__current;
            public Program <>4__this;
            private int <>l__initialThreadId;

    The body of the original method gets converted into the code to construct and initialise this new class.

        private IEnumerable<int> GetItems()
        {
            <GetItems>d__0 d__ = new <GetItems>d__0(-2);
            d__.<>4__this = this;
            return d__;
        }

    When the class is constructed we set the state, which was passed through as -2, and record the current thread.

        public <GetItems>d__0(int <>1__state)
        {
            this.<>1__state = <>1__state;
            this.<>l__initialThreadId = Thread.CurrentThread.ManagedThreadId;
        }

    The state needs to be set to 0 to represent a valid enumerator, and this is done in the GetEnumerator method, which optimises for the usual case where the returned enumerator is only used once.

        IEnumerator<int> IEnumerable<int>.GetEnumerator()
        {
            if ((Thread.CurrentThread.ManagedThreadId == this.<>l__initialThreadId)
                && (this.<>1__state == -2))
            {
                this.<>1__state = 0;
                return this;
            }

    The state machine itself is implemented inside the MoveNext method.

        private bool MoveNext()
        {
            switch (this.<>1__state)
            {
                case 0:
                    this.<>1__state = -1;
                    this.<>2__current = 1;
                    this.<>1__state = 1;
                    return true;
                case 1:
                    this.<>1__state = -1;
                    this.<>2__current = 2;
                    this.<>1__state = 2;
                    return true;
                case 2:
                    this.<>1__state = -1;
                    this.<>2__current = 3;
                    this.<>1__state = 3;
                    return true;
                case 3:
                    this.<>1__state = -1;
                    break;
            }
            return false;
        }

    At each stage, the current value of the state is used to determine how far we got, and then we generate the next value, which we return after recording the next state. Finally we return false from MoveNext to signify the end of the sequence. Of course, that example was really simple: the original method body didn't have any local variables. Any local variables need to live between the calls to MoveNext, so they need to be transformed into fields in much the same way as we saw for the lambda expression. More complicated MoveNext methods are required to deal with resources that need to be disposed when the iterator finishes, and sometimes the compiler uses a temporary variable to hold the return value.

    Why all of this explanation? We've implemented the decompilation of iterators in the current EAP version of Reflector (7). This contrasts with previous versions, where all you could do was look at the MoveNext method and try to figure out the control flow. There's a fair amount we have to do. We have to spot the use of a compiler-generated class that implements the enumerator pattern. We need to go to the class and figure out the fields corresponding to the local variables. We then need to go to the MoveNext method and try to break it into the various possible states and spot the state transitions. We can then take these pieces and put them back together into an object model that uses yield return to show the transition points. After that Reflector can carry on optimising using its usual optimisations. The pattern matching is currently a little too sensitive to changes in the code generation, and we only do a limited analysis of the MoveNext method to determine use of the compiler-generated fields.
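    To make the point about hoisted locals concrete, here is a minimal, self-contained sketch (it is not taken from Reflector or from the post; the method and variable names are illustrative) of an iterator whose local variable has to become a field of the generated state-machine class:

        using System;
        using System.Collections.Generic;

        class IteratorLocalsDemo
        {
            // 'count' is a local variable, but its value must survive between calls to
            // MoveNext on the generated enumerator, so the compiler hoists it into a
            // field of the generated class (alongside the state and current-value
            // fields shown above) rather than leaving it on the stack.
            static IEnumerable<int> CountTo(int limit)
            {
                int count = 0;
                while (count < limit)
                {
                    count++;
                    yield return count;
                }
            }

            static void Main()
            {
                foreach (int i in CountTo(3))
                    Console.WriteLine(i);   // prints 1, 2, 3
            }
        }

    Decompiling the MoveNext generated for something like this means mapping the hoisted field for count back onto a local variable before the yield return form can be reconstructed.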
    In some ways, it is a pity that iterators are compiled away and there is no metadata that reflects the original intent. Without it, we are always going to be dependent on our knowledge of the compiler's implementation. For example, we have noticed that the Async CTP changes the way that iterators are code-generated, so we'll have to do some more work to support that. However, with that warning in place, we seem to do a reasonable job of decompiling the iterators that are built into the framework. Hopefully, the EAP will give us a chance to find examples where we don't spot the pattern correctly or regenerate the wrong code, so we can improve things. Please give it a go, and report any problems.

    Read the article

  • Identity Globe Trotters (Sep Edition): The Social Customer

    - by Tanu Sood
    Welcome to the inaugural edition of our monthly series - Identity Globe Trotters. Starting today, the last Friday of every month, we will explore regional commentary on Identity Management. We will invite guest contributors from around the world to share their opinions and experiences around Identity Management and highlight regional nuances, specific drivers, solutions and more. Today's feature is contributed by Michael Krebs, Head of Business Development at esentri consulting GmbH, an SOA-specialized Oracle Gold Partner based in Ettlingen, Germany. In his current role, Krebs is dealing with the latest developments in Enterprise Social Networking and the integration of Social Media within business processes.

    By Michael Krebs

    The relevance of "easy sign-on" in the age of the "Social Customer"

    With the growth of Social Networks, the time people spend within those closed "eco-systems" is growing year by year. With social networks looking to integrate search engines, as Facebook announced some weeks ago, their relevance will continue to grow in contrast to the more conventional search engines. This is one of the reasons why users' social network accounts are becoming more and more like a virtual fingerprint. With the growing relevance of social networks, a simple way for customers to get in touch with, say, customer care or contract departments will be crucial for sales processes in critical markets. Customers want to have one single point of contact and also an easy "login method" with no dedicated usernames, passwords or proprietary accounts. The golden rule in the future social-media-driven markets will be: the lower the complexity of the initial contact, the better a company can profit from social networks. If, for example, you can offer a smart way for an existing customer to use self-service portals, the cost of providing phone support can be lowered significantly.

    Recruiting and Hiring of "Digital Natives"

    Another particular example is "social" recruiting processes. The so-called "digital natives" don't want to type their profile facts and CVs into proprietary systems. Why not use the actual LinkedIn profile? In the German-speaking region, the market for professional social networks is dominated by XING, the equivalent of LinkedIn. A few weeks back, this network also opened up its interfaces for integrating social sign-ons or using profile data for recruiting purposes. In the European (and especially the German) employment market, where the number of young candidates is shrinking because of the low birth rate in the region, it will become essential to use social-media-supported hiring processes to find and on-board the rare talents. In fact, you will see traditional recruiting websites integrated with social hiring to attract the best talent in a market where the pool of potential candidates has decreased dramatically over the years.

    Identity Management as a key factor in the Customer Experience process

    To create the biggest value for customers and also future employees, companies need to connect their HCM or CRM systems with powerful Identity Management solutions. With the highly efficient Oracle (social & mobile enabling) Identity Management solution, enterprises can combine easy sign-on with secure connections to the backend infrastructure. This combination enables a "one-stop" service with personalized content for customers and talents. In addition, companies can collect valuable data for the enrichment of their CRM data.
    The goal is to enrich the so-called "Customer Experience" via all available customer channels and contact points. Those systems have already gained importance in the B2C markets and will gradually spread out to B2B channels in the near future.

    Conclusion: Central and "Social" Identity Management is key to Customer Experience Management and Talent Management

    For a seamless delivery of "Customer Experience Management" and a modern way of recruiting the best talent, companies need to integrate social sign-on capabilities with a modern CX and Talent Management infrastructure. This lowers the barrier for existing and future customers or employees to get in touch with sales, support or human resources. Identity Management is the technology enabler and backbone for a modern Customer Experience infrastructure. Oracle Identity Management solutions provide the opportunity to secure social applications and connect them with modern CX solutions. In the end, companies benefit from "best of breed" processes and solutions for enriching customer experience without compromising security.

    About esentri: esentri is a provider of enterprise social networking and brings the benefits of social network communication into business environments. As one key strength, esentri uses Oracle Identity Management solutions for delivering social and mobile access for Oracle's CRM and HCM solutions.

    …..End Guest Post….

    With new and enhanced features optimized to secure the new digital experience, the recently announced Oracle Identity Management 11g Release 2 enables organizations to securely embrace cloud, mobile and social infrastructures and reach new user communities to help further expand and develop their businesses.

    Additional Resources:
    - Oracle Identity Management 11gR2 release
    - Oracle Identity Management website
    - Datasheet: Mobile and Social Access (pdf)
    - IDM at OOW: Focus on Identity Management
    - Facebook: OracleIDM
    - Twitter: OracleIDM

    We look forward to your feedback on this post and welcome your suggestions for topics to cover in Identity Globe Trotters, published the last Friday of every month!

    Read the article

  • PHP API to trade products from eshop through REST/xml

    - by Donatas Veikutis
    I need an algorithm, a PHP API example, or an existing solution for building a system to exchange large amounts of B2B product information as XML. At the moment I am trying to use the Slim framework to build this system, but I need some documentation on what the architecture should look like. The system requirements are simple:
    - a user authenticates with a username and password
    - they can then see which product groups are assigned to them
    - they can then see all products with their information (price, title, description, images, specifications, etc.)

    I think the easiest way would be to find a free PHP API for this and edit some of the code, but I did not find anything.

    Read the article

  • Sharing Authentication Across Subdomains using cookies

    - by Jordan Reiter
    I know that in general cookies themselves are not considered robust enough to store authentication information. What I am wondering is if there is an existing design pattern or framework for sharing authentication across subdomains without having to use something more complex like OpenID. Ideally, the process would be that the user visits abc.example.org, logs in, and continues on to xyz.example.org where they are automatically recognized (ideally, the reverse should also be possible -- a login via xyz means automatic login at abc). The snag is that abc.example.org and xyz.example.org are both on different servers and different web application frameworks, although they can both use a shared database. The web application platforms include PHP, ColdFusion, and Python (Django), although I'm also interested in this from a more general perspective (i.e. language agnostic).
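    One language-agnostic pattern is to have whichever site performs the login set a cookie scoped to the parent domain (Domain=.example.org) whose value is a signed ticket that every subdomain can verify, either with a shared secret or by looking the token up in the shared database. The sketch below uses C# purely for illustration; the ticket format, helper names and the shared secret are assumptions, not an existing framework:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        // Issue and validate a "userId|expiry|signature" ticket carried in a cookie
        // scoped to ".example.org" so both the abc and xyz subdomains receive it.
        static class SsoTicket
        {
            // Hypothetical shared secret; in practice it would come from configuration
            // and be known to all participating applications.
            static readonly byte[] Secret = Encoding.UTF8.GetBytes("replace-with-shared-secret");

            public static string Issue(string userId, DateTime expiresUtc)
            {
                string payload = userId + "|" + expiresUtc.ToString("o");
                return payload + "|" + Sign(payload);
            }

            public static bool TryValidate(string ticket, out string userId)
            {
                userId = null;
                string[] parts = ticket.Split('|');
                if (parts.Length != 3) return false;

                string payload = parts[0] + "|" + parts[1];
                if (Sign(payload) != parts[2]) return false;    // tampered
                DateTime expiresUtc = DateTime.Parse(parts[1], null,
                    System.Globalization.DateTimeStyles.RoundtripKind);
                if (expiresUtc < DateTime.UtcNow) return false; // expired

                userId = parts[0];
                return true;
            }

            static string Sign(string payload)
            {
                using (var hmac = new HMACSHA256(Secret))
                    return Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(payload)));
            }
        }

    Each application (PHP, ColdFusion or Django) would implement the same Issue/TryValidate logic against the same secret or database table; because the cookie's domain is .example.org, a login at abc.example.org is automatically visible to xyz.example.org and vice versa.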

    Read the article

  • How can I determine if a cube is adjacent to another cube, and optimize its buffers if so?

    - by Christian Frantz
    I'm trying to optimize the rendering of a collection of cubes (based on an answer I was given to another question I asked). I understand the logic behind occlusion culling, but I'm having trouble with the code. When I create a cube, I want to determine whether that cube is touching another existing cube, and if so I don't want to generate the redundant data in my vertex or index buffers. I'm planning on making a method that I call from my cube constructor so that every time I create a cube these checks are made, and neither of the touching (occluded) faces is ever drawn. How would I go about this?
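    One common approach (a sketch under assumptions, not taken from the question or any specific engine) is to track every cube's integer grid cell in a HashSet and, when a cube is created, only emit geometry for the faces whose neighbouring cell is empty; the EmitFace call below is a placeholder for your own vertex/index buffer generation:

        using System;
        using System.Collections.Generic;

        class CubeWorld
        {
            // Which grid cells already contain a cube.
            private readonly HashSet<Tuple<int, int, int>> occupied =
                new HashSet<Tuple<int, int, int>>();

            // Offsets to the six face-adjacent neighbours (+x, -x, +y, -y, +z, -z).
            private static readonly int[][] Neighbours =
            {
                new[] {  1, 0, 0 }, new[] { -1, 0, 0 },
                new[] { 0,  1, 0 }, new[] { 0, -1, 0 },
                new[] { 0, 0,  1 }, new[] { 0, 0, -1 },
            };

            public void AddCube(int x, int y, int z)
            {
                occupied.Add(Tuple.Create(x, y, z));

                for (int face = 0; face < 6; face++)
                {
                    int nx = x + Neighbours[face][0];
                    int ny = y + Neighbours[face][1];
                    int nz = z + Neighbours[face][2];

                    // A face shared by two solid cubes can never be seen,
                    // so only emit it when the adjacent cell is empty.
                    if (!occupied.Contains(Tuple.Create(nx, ny, nz)))
                        EmitFace(x, y, z, face);
                }
            }

            private void EmitFace(int x, int y, int z, int face)
            {
                // Placeholder: append this face's 4 vertices and 6 indices to your buffers.
            }
        }

    Note that adding a new cube also hides the face of the existing neighbour that now touches it, so a complete solution either rebuilds the neighbour's geometry or removes that face from the buffers as well.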

    Read the article

  • How will Deja-Dup operate when backing up to an external USB drive?

    - by Little Bobby Tables
    I want to set up regular backups, and deja-dup seems like a nice tool. However, I want to put my backups on an external USB drive that I have, not on a remote network location. Naturally, this drive is not always connected. If I configure deja-dup to back up to a directory on this drive (e.g. /media/extention/backup), what would happen? Will it prompt me to connect the drive when it is missing (the desired behavior), or just fail silently? Is there some way to tweak it to do so? I can roll my own cron-based backup script that checks whether this drive is mounted, but I would really prefer to use an existing, integrated tool.

    Read the article

  • Incorporating GPL Code in my Open Source Project

    - by rutherford
    I have downloaded a currently inactive GPL project with a view to updating it and releasing the completed codebase as open source. I'm not really a fan of the GPL though, and would rather licence my project under BSD. What are my options? Is it just a case of keeping any existing untouched code under the GPL while any updated stuff can be BSD (messy)? The source will essentially be the same codebase, i.e. there is no logical separation between the two, and they certainly can't be split into anything resembling different libraries. Are my only realistic options to either GPL the whole thing or seek the original author's permission to release everything under BSD?

    Read the article

  • Open source ASP.NET MVC project for a SaaS application

    - by DotnetDude
    I am working on a personal project that offers a service online. I'd like to put this out to the public. I don't want to reinvent the wheel, so I'd rather use an existing template/open-source project and add my service-specific functionality. The features I am looking for are:
    - Support for different roles (I need an admin role, plus customer and preferred customer roles)
    - An admin section where admins can manage user accounts and log in with a user's credentials to provide support
    - Customer pages that are role-specific (e.g. some functionality can be used by preferred customers but not by non-preferred ones)
    - Preferably a pricing/plans page with payment gateway integration

    These are some of the basic pages available on most online service sites. Is there an MVC 3 (preferably 4) project written in C# that I can use as a shell to build upon? Thanks
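    Whatever template you end up with, the role-specific parts usually come down to ASP.NET MVC's built-in AuthorizeAttribute plus a role provider. A minimal sketch, with "Admin", "Customer" and "PreferredCustomer" as illustrative role names (not from any particular open-source project):

        using System.Web.Mvc;

        [Authorize(Roles = "Admin")]
        public class AdminController : Controller
        {
            // Only admins reach this: user management, impersonation for support, etc.
            public ActionResult Users()
            {
                return View();
            }
        }

        public class AccountController : Controller
        {
            // Available to any authenticated customer.
            [Authorize(Roles = "Customer, PreferredCustomer")]
            public ActionResult Dashboard()
            {
                return View();
            }

            // Available only to preferred customers.
            [Authorize(Roles = "PreferredCustomer")]
            public ActionResult PreferredOffers()
            {
                return View();
            }
        }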

    Read the article

  • 3D game editor with C++ support and Bullet Physics

    - by Raghav Bali
    I am very new to game development, hence the question. Is there any open-source game editor that uses the Bullet Physics engine and has C++ code-editing support? I have read about Blender, but it allows Python-based editing (if I am not wrong here). I need an editor to quickly draw the needed objects and concentrate more on the logic of their interactions (mostly magnetism-based). This (my project) is part of an already existing legacy codebase in C++, which is forcing the dependency. Also, is there any physics engine which provides support for magnetism? Any help will surely be great. Thanks

    Read the article

  • Clock is Ticking for OBI 10g Supported Customers to move to OBI 11g

    - by Mike.Hallett(at)Oracle-BI&EPM
    Now is a perfect time for Partners to approach existing OBI 10g customers, to encourage and help them to upgrade to all the great new features in the current OBI 11g, including the ability to go mobile, to run in-memory on Exalytics, and to get tighter integration with Hyperion applications, Strategic Scorecards and Essbase. Under the Oracle Lifetime Support Policy, "normal" support for Oracle BI Suite version 10gR3 will end in July 2013. The final point release of Oracle Business Intelligence EE & Publisher 10gR3 was 10.1.3.4.x, which was generally available from July 2008 and will end "Premier Support" in July 2013. From then, customers may purchase "Extended Support" until July 2015, and after that "Sustaining Support" indefinitely.

    For more information:
    - Upgrade to Oracle Business Intelligence Enterprise Edition 11g (article on Technet)
    - Planning to Upgrade from Oracle BI 10g to BI 11g? (at docs.oracle.com)
    - Oracle Business Intelligence Enterprise Edition 11g
    - Oracle Lifetime Support Policy

    Read the article

  • Can not start webhttrack

    - by Sitati
    I am trying to start webhttrack on Ubuntu 12.04 (Precise Pangolin), but all I get is the opening screen. When I press any button on the page it gives me the following error:

        Oops! Google Chrome could not connect to barney:8081
        Suggestions:
        Try reloading: barney:8081/server/step2.html

    I tried to run it from a terminal window and got this output:

        webhttrack
        /usr/bin/webhttrack(4405:( launching /usr/bin/x-www-browser
        /usr/bin/webhttrack(4405:( spawning regular browser..
        Created new window in existing browser session.
        /usr/bin/webhttrack(4405:( browser exited
        /usr/bin/webhttrack: line 167: 4422 Killed ${BINPATH}/htsserver "${DISTPATH}/" path "${HOME}/websites" lang "${LANGN}" $@

    If anyone can help me, I'd appreciate it.

    Read the article

  • Tag link suggestion plugin for wordpress?

    - by Emerson
    Hi, every time I write a post I make sure I add links to words that I have tags for. For example: "The economy of Brazil has improved in the last few years". This ensures that when people re-post my content, a lot of back-links will be created to my tags. This is quite a lot of work to do manually for every post. It would be cool if there was a plugin that would suggest tags to be applied when they match existing words in the text of the post. Is there such a thing?

    Read the article

  • Existential CAML - does an item exist?

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved.

    More CAML and existence. In "SharePoint List Issues" and "Passing the CAML thru the EYE of the NEEDLE" we saw how to use CAML to return a subset of a list and also how to check the existence of lists, fields, defaults, and values.

    Here is a general function that may be used to get a subset of a list by comparing a "Text" type field to a given value. The function is pretty smart. It can be used to check existence or to return a collection of items that may be further processed. It handles non-existing fields and replaces them with the ubiquitous "Title", but only once!

        /// Build an SPQuery that returns a selected set of columns from a List.
        /// titleField must be a "Text" type field.
        /// When the titleField parameter is empty (""), "Title" is assumed.
        /// When the title parameter is empty (""), All is assumed.
        /// When the columnNames parameter is null, the query returns all the fields.
        /// When the rowLimit parameter is 0, the query returns all the items;
        /// with a non-zero value, the query returns at most rowLimit items.
        ///
        /// Usage: to check if an item titled "Blah" exists in your list, do:
        ///   colNames = {"Title"}
        ///   col = GetListItemColumnByTitle(myList, "", "Blah", colNames, 1)
        ///   Check col.Count; if > 0 the item exists and is in the collection.
        private static SPListItemCollection GetListItemColumnByTitle(SPList list, string titleField, string title, string[] columnNames, uint rowLimit)
        {
            try
            {
                char QT = Convert.ToChar((int)34);
                SPQuery query = new SPQuery();
                if (title != "")
                {
                    string tf = titleField;
                    if (titleField == "") tf = "Title";
                    tf = CAMLThisName(list, tf, "Title");
                    StringBuilder titleQuery = new StringBuilder("<Where><Eq><FieldRef Name=");
                    titleQuery.Append(QT);
                    titleQuery.Append(tf);
                    titleQuery.Append(QT);
                    titleQuery.Append("/><Value Type=");
                    titleQuery.Append(QT);
                    titleQuery.Append("Text");
                    titleQuery.Append(QT);
                    titleQuery.Append(">");
                    titleQuery.Append(title);
                    titleQuery.Append("</Value></Eq></Where>");
                    query.Query = titleQuery.ToString();
                }
                // Null check added so the behaviour matches the comment above.
                if (columnNames != null && columnNames.Length != 0)
                {
                    StringBuilder sb = new StringBuilder("");
                    bool TitleAlreadyIncluded = false;
                    foreach (string columnName in columnNames)
                    {
                        string tst = CAMLThisName(list, columnName, "Title");
                        // Allow Title only once
                        if (tst != "Title" || !TitleAlreadyIncluded)
                        {
                            sb.Append("<FieldRef Name=");
                            sb.Append(QT);
                            sb.Append(tst);
                            sb.Append(QT);
                            sb.Append("/>");
                            if (tst == "Title") TitleAlreadyIncluded = true;
                        }
                    }
                    query.ViewFields = sb.ToString();
                }
                if (rowLimit > 0)
                {
                    query.RowLimit = rowLimit;
                }
                SPListItemCollection col = list.GetItems(query);
                return col;
            }
            catch (Exception ex)
            {
                //Console.WriteLine("GetListItemColumnByTitle" + ex.ToString());
                //sw.WriteLine("GetListItemColumnByTitle" + ex.ToString());
                return null;
            }
        }

    Here I called it for a list in which "Author" (it is the internal name for "Created") and "Blah" do not exist. The list of column names is:

        string[] columnNames = {"Test Column1", "Title", "Author", "Allow Multiple Ratings", "Blah"};

    So if I use this call, I get all the items for which "01-STD MIL_some" has the value of 1. The fields returned are: "Test Column1", "Title", and "Allow Multiple Ratings".
    Because "Title" was already included and the default for non-existing fields is "Title", it was not replicated for the 2 non-existing fields.

        SPListItemCollection col = GetListItemColumnByTitle(masterList, "01-STD MIL_some", "1", columnNames, 0);

    The following call checks whether there are any items where "01-STD MIL_some" has the value of "1". Note that I limited the number of returned items to 1.

        SPListItemCollection col = GetListItemColumnByTitle(masterList, "01-STD MIL_some", "1", columnNames, 1);

    The code also uses the CAMLThisName function, which checks for the existence of a field and returns its InternalName. This is yet another useful function that I use again and again.

        /// <summary>
        /// Return a field's internal name (CAMLName)
        /// or the "default" name that you passed.
        /// To check existence pass "" or some funny name like "mud in your eye".
        /// </summary>
        public static string CAMLThisName(SPList list, string name, string def)
        {
            String CAMLName = def;
            SPField fld = GetFieldByName(list, name);
            if (fld != null)
            {
                CAMLName = fld.InternalName;
            }
            return CAMLName;
        }

    That's all folks!

    Read the article

  • Habanero

    - by csharp-source.net
    An Enterprise Application Framework for .NET that is ideally suited for developing applications in an agile manner. The framework is used for producing an application from the data layer through to the front end. Free open source under the LGPL license, it includes ORM, code generation and runtime UI generation to create one application for the desktop & web.

    Features:
    * ORM: map database tables to objects in code
    * Persist property values to and from the database
    * Define all mapping in a single XML file
    * Switch between database vendors with one setting
    * Support for MySQL, MS SQL Server, MS Access, Oracle, PostgreSQL, SQLite, Firebird
    * FireStarter GUI class definitions XML manager
    * Generate user interfaces and map properties to controls
    * Develop for both desktop (with Windows Forms) and web (with Gizmox' Visual WebGUI)
    * Generate new projects and code files
    * Generate UI forms from templates
    * Reverse engineer class definitions from existing databases
    * Support variable data sources, including an in-memory database

    Ships with FireStarter, a free database reverse-engineering, domain modelling and code generation tool.

    Read the article

  • audio controls in xfce 4.8

    - by Peter
    I am seeing several questions similar to mine, but none of the answers are sufficient. I am pretty green with Ubuntu, so here goes: I was just automatically upgraded to Xfce 4.8 for Ubuntu Studio. The volume control no longer works in my panel. When I launch 'mixer' I don't see any settings, either. When I try to run "linux audio configuration" I get an error: "JACK can only be configured with a loaded and stopped studio. Please create a new studio or load and stop an existing one." I understand that I can change the volume using the command line, but I can't understand why I got upgraded to something that fails on basic features. I am much less likely to recommend Ubuntu to others as a result. Thanks!

    Read the article

  • New Paper on the PeopleSoft Interaction Hub-PeopleTools Relationship

    - by Matthew Haavisto
    A new paper has just been published that explains the relationships and dependencies between the PeopleSoft Interaction Hub (formerly the PeopleSoft Applications Portal) and PeopleTools. This paper will help you understand which versions of the Hub work with which versions of Tools. The paper contains information on how new customers can install the PeopleSoft Interaction Hub, and on how existing PeopleSoft Interaction Hub customers can apply PIH 9.1 Feature Pack 1 functionality if they are on an earlier version. It also describes how PeopleSoft Interaction Hub releases are aligned with PeopleTools releases, the general upgrade process within the Feature Pack model, and how customers can expect this to work with subsequent feature packs, maintenance packs, and bundles. You can get the paper from Oracle support.

    Read the article

  • Internal and external API architecture

    - by Tacomanator
    The company I work for maintains a successful SaaS product that grew "organically" over the years. We are planning to expand the line with a suite of new products that will share data with the existing product. To support this, we are looking to consolidate business logic into a single place: a web service layer. The WS layer will be used by:
    - the web applications
    - a tool to import data
    - a tool to integrate with other client software (not an API per se)

    We also want to create an API that can be used by those of our customers who are capable of using it to create their own integrations. We are struggling with the following question: should the internal API (aka the WS layer) and the external API be one and the same, with security and permission settings to control what can be done by whom, or should they be two separate applications, where the external API just calls the internal API like any other application? So far in our debate it seems that separating them may be more secure, but will add overhead. What have others done in a similar situation?
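    One way to picture the "two separate applications" option: the external API is a thin facade that authorizes the caller, delegates to the internal service layer, and maps internal models onto a smaller public DTO, so the internal API can keep evolving without breaking the published contract. All type names below are illustrative assumptions, not an existing codebase:

        using System;

        // Internal WS layer, shared by the web apps, the import tool and the integrations.
        public interface IOrderService
        {
            Order GetOrder(int orderId);
        }

        // Public-facing API surface offered to customers.
        public class PublicOrderApi
        {
            private readonly IOrderService orders;
            private readonly Func<string, int, bool> canReadOrder; // customer-level permission check

            public PublicOrderApi(IOrderService orders, Func<string, int, bool> canReadOrder)
            {
                this.orders = orders;
                this.canReadOrder = canReadOrder;
            }

            public OrderDto GetOrder(string apiKey, int orderId)
            {
                if (!canReadOrder(apiKey, orderId))
                    throw new UnauthorizedAccessException("API key is not entitled to this order.");

                Order internalOrder = orders.GetOrder(orderId);

                // Expose only the fields that belong to the public contract.
                return new OrderDto { Id = internalOrder.Id, Total = internalOrder.Total };
            }
        }

        public class Order    { public int Id; public decimal Total; public string InternalNotes; }
        public class OrderDto { public int Id; public decimal Total; }

    The overhead is one extra hop and some mapping code; in exchange, the external surface can be versioned and throttled independently, and a security review only has to look at the facade.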

    Read the article

  • ecommerce options for 5-6 products

    - by Calum
    Hi, we're looking to develop a simple e-commerce solution to sell 5-6 products. We'd rather not have to use PayPal's buttons ("buy it now!") if there's an existing alternative, but due to budget/time constraints we also don't want to roll our own. Are there any small, basic e-commerce solutions available that would allow this? I did look at Foxy Cart, but the monthly fee was a bit of a turn-off. (I must sound extremely fussy, I'm aware!) Something like Zen would just be overkill for our needs. Thanks for any suggestions.

    Read the article

  • Upcoming Customer WebCast: Adapters and JCA Transport in Oracle Service Bus 11g

    - by MariaSalzberger
    There is an upcoming webcast planned for September 19th that will show how to implement services using a JCA adapter in Oracle Service Bus 11g. The session will help you utilize existing resources like samples and information centers for adapters in the context of Oracle Service Bus. Topics covered in the webcast are:
    - JCA Transport overview / inbound and outbound scenarios using JCA adapters
    - Implementation of an end-to-end use case using an inbound file adapter and an outbound database adapter in Oracle Service Bus
    - How to find information on supported adapters in a certain version of OSB 11g
    - Available adapter samples for OSB and SOA
    - How to use SOA adapter samples for Oracle Service Bus
    - A live demo of an adapter sample implementation in Oracle Service Bus
    - Information Centers for adapters and Oracle Service Bus

    The presentation recording can be found here after the webcast. Select "Oracle Fusion Middleware" as the product. (https://support.oracle.com/rs?type=doc&id=740966.1) The schedule for future webcasts can be found in the above-mentioned document as well.

    Read the article

  • Changelog Management

    - by Gnial0id
    I'm currently developing a WinForms application. In order to inform the client about the improvements and corrections made in the latest version, I would like to manage and display a changelog. I mostly found existing changelogs on websites (the term "changelog" is widely used) or explanations of how to manage release numbers, which I don't care about. So, these are my questions:
    - How do I manage a changelog (using XML, plain text in the app, etc.) in a desktop application?
    - How do I present it to the user (an external website, or inside the WinForms application)?
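    One lightweight option, sketched below with an assumed file name (changelog.xml) and an assumed XML shape ("release" elements with a "version" attribute and "change" children), is to ship the changelog next to the executable and render it in a small dialog:

        using System.Text;
        using System.Windows.Forms;
        using System.Xml.Linq;

        static class ChangelogViewer
        {
            public static void Show(string path)
            {
                // Flatten the XML changelog into readable text.
                XDocument doc = XDocument.Load(path);
                var text = new StringBuilder();

                foreach (XElement release in doc.Descendants("release"))
                {
                    text.AppendLine("Version " + (string)release.Attribute("version"));
                    foreach (XElement change in release.Elements("change"))
                        text.AppendLine("  - " + change.Value);
                    text.AppendLine();
                }

                // Show it in a simple read-only dialog inside the WinForms app.
                var form = new Form { Text = "What's new", Width = 480, Height = 360 };
                form.Controls.Add(new TextBox
                {
                    Multiline = true,
                    ReadOnly = true,
                    ScrollBars = ScrollBars.Vertical,
                    Dock = DockStyle.Fill,
                    Text = text.ToString()
                });
                form.ShowDialog();
            }
        }

    Calling ChangelogViewer.Show("changelog.xml") after an update (for example, when a stored "last seen version" differs from the current assembly version) covers both questions: the file is easy to maintain in source control, and the user sees it inside the application.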

    Read the article

  • #OOW 2012 @PARIS...talking Oracle and Clouds, and Optimized Datacenter

    - by Eric Bezille
    For those of you who want to get the most out of Oracle technologies to evolve your IT to the next wave, I encourage you to register for the upcoming Oracle Optimized Datacenter event that will take place in Paris on November 28th. You will get the opportunity to exchange with Oracle experts and with customers who have successfully evolved their IT by leveraging Oracle technologies. You will also get the latest news on some of the Oracle systems announcements made during OOW 2012. During this event we will give an update about Oracle and Clouds, from private to public and hybrid models. So in preparing this session, I thought it was a good starting point to take stock of Cloud Computing in France, and of CIO requirements in particular. Starting in 2009 with the first Cloud Camp in Paris, the market has evolved, but the basics are still the same: think hybrid.

    From Traditional IT to Clouds

    One size doesn't fit all, and for big companies that already have an IT in place, there will be parts eligible for external (public) cloud, and parts that will be required to stay inside the firewalls, so the ability to integrate both sides is key. Nonetheless, one of the major impacts of the Cloud Computing trend on IT, reported by Forrester, is the pressure it puts on CIOs to evolve towards the same model that end-users are now used to in their day-to-day life, where self-service and flexibility are paramount. This is what is driving IT to transform itself into "a Global Service Provider", or for some into "IT is the Business" (see: Gartner Identifies Four Futures for IT and CIO), and for both models toward a Private Cloud Service Provider. In this journey, there is still a big difference between most existing external Clouds and a firm's IT: the number of applications that a CIO has to manage. Most cloud providers today are overly specialized, but at the end of the day there are really few business processes that rely on only one application. So CIOs have to combine everything together, external and internal. And for the internal parts that they will have to evolve into a Private Cloud, the scope can be very large. This will often require CIOs to move from their traditional approach to more disruptive ones; the time has come to introduce new standards and processes if they want to succeed. So let's have a look at the different Cloud models, what type of users they are addressing, what value they bring and, most importantly, what needs to be done by the Cloud Provider and what is left over to the user.

    IaaS, PaaS, SaaS: what's provided and what needs to be done

    First of all, the Cloud Provider will have to provide all the infrastructure needed to deliver the service. And the more value IT wants to provide, the more IT will have to deliver and integrate: from disks to applications. As we can see in the above picture, providing pure IaaS leaves a lot for the end-user to cover, which is why the end-user targeted by this Cloud Service is IT people. If you want to bring more value to developers, you need to provide them with a development platform that is ready to use, which is what PaaS stands for, by providing not only processing power, storage and OS, but also the Database and Middleware platform. SaaS is the last mile of the Cloud, providing an application ready to use by business users; the remaining part for the end-users is configuring and specifying the application for their specific usage.
    In addition to that, there are common challenges encompassing all types of Cloud Services:
    - Security: covering all aspects, not only user management but also data flows and data privacy
    - Charge back: measuring what is used and by whom
    - Application management: providing capabilities not only to deploy, but also to upgrade, from the OS for IaaS, the Database and Middleware for PaaS, to a full Business Application for SaaS
    - Scalability: the ability to evolve ALL the components of the Cloud Provider stack as needed
    - Availability: the ability to cover "always on" requirements
    - Efficiency: providing an infrastructure that leverages shared resources in an efficient way and still complies with SLAs (performance, availability, scalability, and ability to evolve)
    - Automation: providing the orchestration of ALL the components across the whole service life-cycle (deployment, growth & shrink (elasticity), upgrades, ...)
    - Management: providing monitoring, configuration and self-service up to the end-users

    Oracle Strategy and Clouds

    For CIOs to succeed in their Private Cloud implementation means that they cover all those aspects for each component life-cycle that they selected to build their Cloud. That's where a multi-vendor layered approach comes up short in terms of efficiency. That's the reason why Oracle focuses on taking care of all those aspects directly at the engineering level, to truly provide efficient Cloud Services solutions for IaaS, PaaS and SaaS. We go as far as embedding software functions in hardware (at the storage and processor level, ...) to ensure the best SLA with the highest efficiency. The beauty of it, as we rely on standards, is that the Oracle components that you are running today in-house are exactly the same ones that we are using to build Clouds, bringing you flexibility, reversibility and a fast path to adoption. With Oracle Engineered Systems (Exadata, Exalogic & SPARC SuperCluster, more specifically, when talking about Cloud), we deliver all those components, hardware and software, already engineered together at the Oracle factory, with a single pane of glass for the management of ALL the components through Oracle Enterprise Manager, and with high availability, scalability and the ability to evolve by design. To give you a feeling of what that brings just in terms of implementation project timeline, for example with Oracle SPARC SuperCluster, we have a consistent track record of having the system plugged into an existing datacenter and ready in a week. This includes Oracle Database, OS, virtualization, Database Storage (Exadata Storage Cells in this case), Application Storage, and all network configuration. This strategy enables CIOs to build Cloud Services very quickly, taking out not only the complexity of integrating everything together but also the automation and evolution complexity and cost. I invite you to discuss all those aspects in regard to your particular context face-to-face on November 28th.

    Read the article

  • Release Notes for 11/28/2012

    This week we wrapped up a set of work to improve the actions and navigation within the project tabs. Now each tab in a project has a more consistent interaction experience: the navigation and filter activities are on the left side and action-based links are on the right. For example, on the Issue Tracker tab, the Basic and Advanced filters are on the left and the ability to create a new issue and subscribe to the project are on the right. Have ideas on how to improve CodePlex? Please visit our suggestions page! Vote for existing ideas or submit a new one. As always, you can reach out to the CodePlex team on Twitter @codeplex or reach me directly @mgroves84

    Read the article

  • How do I fix these compiler errors in Apple Crunch?

    - by BluFire
    I've been looking around and I finally got the full source code for a game called Apple Crunch from Google Code. But when I put it into my project, the class files contained many errors such as "cannot be resolved to a type", "the constructor is undefined" and "the method method() is undefined for the type Sprite". I downloaded the source directly from the command line and noticed errors popping up in my project. Since I couldn't figure out how to import the actual folder into my workspace (it wouldn't show up under existing projects) I decided to copy and overwrite the folders into the project. The errors were still there, so I looked at the class files and noticed that the classes with errors extended RokonActivity. I then proceeded to add the Rokon library to the libs folder in the hope of fixing the errors. Sadly it didn't work, and now I don't know what to do to fix the errors. How do I fix the errors without having to manually change the code? The source code should be fully functional, so why are there errors?

    Read the article

  • How can I change the wallpaper using a Python script?

    - by furtelwart
    I want to change my wallpaper in Ubuntu 11.10 (with Unity) in a small Python script. I found the possibility to change it via the gconf-editor in /desktop/gnome/background/picture_filename. With python-gconf, I'm able to change the necessary values. Apparently, the gconf string is not read out. If I change it (either via a script or via gconf-editor), the wallpaper remains and in the menu of "Change wallpaper", the old wallpaper is shown. How am I able to change the wallpaper for Unity via a Python script?

    The following code does work. Apparently, the gsettings are only applied if some Gtk code is executed.

        #!/usr/bin/python
        # -*- coding: utf-8 -*-
        from gi.repository import Gtk, Gio

        class BackgroundChanger():
            SCHEMA = 'org.gnome.desktop.background'
            KEY = 'picture-uri'

            def change_background(self, filename):
                gsettings = Gio.Settings.new(self.SCHEMA)
                print(gsettings.get_string(self.KEY))
                print(gsettings.set_string(self.KEY, "file://" + filename))
                Gtk.Window()
                print(gsettings.get_string(self.KEY))

        if __name__ == "__main__":
            BackgroundChanger().change_background("/home/user/existing.jpg")

    Read the article

  • Google I/O 2010 - Connect enterprise apps w/ Google Docs

    Google I/O 2010 - Connecting your enterprise applications with Google Docs and Sites (Enterprise 201)
    Eric Bidelman, Vijay Bangaru, Matthew Tonkin (Memeo Inc)
    Learn how your organization can harness the power of Google Docs and Sites directly from within your existing enterprise systems using our extensive APIs. Integrate with data from behind the firewall using Secure Data Connector. Upload, share, collaborate, and sync any file to Docs. Even automate the creation of project and team workspaces in a single click in Sites from within your CRM. For all I/O 2010 sessions, please go to code.google.com

    Read the article
