Search Results

Search found 14545 results on 582 pages for 'custom macros'.


  • NSTableView Drag and Drop not working

    - by macatomy
    Hi, I'm trying to set up very basic drag and drop for my NSTableView. The table view has a single column (with a custom cell). The column is bound to an NSArrayController, and the array controller's content array is bound to an NSArray on my controller object. The data displays fine in the table. I connected the dataSource and delegate outlets of the table view to my controller object, and then implemented these methods:

        - (BOOL)tableView:(NSTableView *)aTableView writeRowsWithIndexes:(NSIndexSet *)rowIndexes toPasteboard:(NSPasteboard *)pboard {
            NSLog(@"dragging");
            return YES;
        }

        - (NSDragOperation)tableView:(NSTableView *)tv validateDrop:(id <NSDraggingInfo>)info proposedRow:(NSInteger)row proposedDropOperation:(NSTableViewDropOperation)op {
            return NSDragOperationEvery;
        }

        - (BOOL)tableView:(NSTableView *)aTableView acceptDrop:(id <NSDraggingInfo>)info row:(NSInteger)row dropOperation:(NSTableViewDropOperation)operation {
            return YES;
        }

    I also registered the drag types in -awakeFromNib:

        #define MyDragType @"MyDragType"

        - (void)awakeFromNib {
            [super awakeFromNib];
            [_myTable registerForDraggedTypes:[NSArray arrayWithObjects:MyDragType, nil]];
        }

    The problem is that the -tableView:writeRowsWithIndexes:toPasteboard: method is never called. I've looked at a bunch of examples and I can't figure out anything I'm doing wrong. Could the problem be that I'm using a custom cell? Is there something I'm supposed to override in the cell subclass to enable this functionality? EDIT: Confirmed. Switching the custom cell for a regular NSTextFieldCell made dragging work. Now, how do I make drag and drop work with my custom cell?
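    A possible lead to chase (not from the original thread, just a hedged sketch): NSTableView decides whether a mouse-down may begin a drag by hit-testing the cell, so a custom NSCell subclass that doesn't participate in hit testing can swallow the drag. Overriding the hit-test to report the content area, assuming a cell class named MyCustomCell, might restore the behaviour:

        // Hypothetical NSCell subclass; the key is reporting a content-area hit
        // so the table view is willing to start a drag from this cell.
        @implementation MyCustomCell

        - (NSUInteger)hitTestForEvent:(NSEvent *)event
                               inRect:(NSRect)cellFrame
                               ofView:(NSView *)controlView
        {
            return NSCellHitContentArea;
        }

        @end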

    Read the article

  • How to change type of information for a Title column in SharePoint MOSS 2007 List?

    - by Ruba
    I created a calendar in SharePoint MOSS 2007 that is connected to my Outlook. I added a custom column “Person” to this list, and the type of information in this column is "Person or Group". In SharePoint I can hide the Title column and, in the Calendar View, show this Person field as the Month View Title, so I can see on the calendar who is working that day. The problem is in Outlook. It seems like Outlook doesn’t know a thing about custom fields: in Outlook I can see only the Title and a few other fields. I could rename the Title field to Person, but I can’t change the type of information that it contains - by default it is a text field, and there is no way to change it to Person or Group. If I could change those “default” column types, then I think my problem would be solved. I know it is possible. I created a custom list, but this list also has those “sticky” Title, Created By and Modified By columns that can’t be changed or removed. Maybe I have to create a custom list with some other program or code? Please help! Thanks in advance!

    Read the article

  • What's the best way to read a UDT from a database with Java?

    - by Lukas Eder
    I thought I knew everything about UDTs and JDBC until someone on SO pointed out some details of the Javadoc of java.sql.SQLInput and java.sql.SQLData to me. The essence of that hint was (from SQLInput):

        An input stream that contains a stream of values representing an instance of an SQL structured type or an SQL distinct type. This interface, used only for custom mapping, is used by the driver behind the scenes, and a programmer never directly invokes SQLInput methods.

    This is quite the opposite of what I am used to doing (an approach that is also used, and stable, in production systems with the Oracle JDBC driver):
    • Implement SQLData and provide this implementation in a custom mapping to ResultSet.getObject(int index, Map mapping).
    • The JDBC driver then calls back on my custom type using the SQLData.readSQL(SQLInput stream, String typeName) method. I implement this method and read each field from the SQLInput stream.
    • In the end, getObject() returns a correctly initialised instance of my SQLData implementation holding all data from the UDT.

    To me, this seems like the perfect way to implement such a custom mapping. Good reasons for going this way:
    • I can use the standard API, instead of using vendor-specific classes such as oracle.sql.STRUCT, etc.
    • I can generate source code from my UDTs, with appropriate getters/setters and other properties.

    My questions:
    • What do you think about my approach, implementing SQLData? Is it viable, even if the Javadoc states otherwise?
    • What other ways of reading UDTs in Java do you know of? E.g. what does Spring do? What does Hibernate do? What does JPA do? What do you do?

    Addendum: UDT support and integration with stored procedures is one of the major features of jOOQ. jOOQ aims at hiding the more complex "JDBC facts" from client code, without hiding the underlying database architecture. If you have similar questions like the above, jOOQ might provide an answer to you.
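    For readers unfamiliar with the approach described above, here is a minimal sketch of an SQLData implementation (the type and its fields are illustrative, not from a real schema); fields must be read and written in the order they are declared in the SQL type:

        import java.sql.SQLData;
        import java.sql.SQLException;
        import java.sql.SQLInput;
        import java.sql.SQLOutput;

        public class AddressType implements SQLData {
            private String street;
            private int number;
            private String sqlTypeName;

            @Override
            public String getSQLTypeName() {
                return sqlTypeName;
            }

            @Override
            public void readSQL(SQLInput stream, String typeName) throws SQLException {
                // The driver calls back here when getObject() maps the UDT.
                sqlTypeName = typeName;
                street = stream.readString();
                number = stream.readInt();
            }

            @Override
            public void writeSQL(SQLOutput stream) throws SQLException {
                stream.writeString(street);
                stream.writeInt(number);
            }
        }

    The custom mapping is then registered with a map passed to getObject(int, Map), e.g. mapping.put("MY_SCHEMA.ADDRESS_TYPE", AddressType.class), where the type name is a placeholder.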

    Read the article

  • iPhone: TabView + TableView

    - by OneTrickPonySoft
    I think I'm missing something simple, but I can't figure out exactly what it is. I'm trying to set up an app with a UITabBarController, and one of the tabs will have a UITableView and UISearchBar (but no navigation controller). I set up the UITabBarController with all the tabs in Interface Builder, and the views are in their own xib files. The xib file for the tab with the UITableView is set up and connected as follows.

    Stuff in the browser:
    • File's Owner (class is my custom class, a child of UITableViewController)
    • view - View
    • View (class UIView, referencing outlet view - File's Owner), containing:
      • UITableView (if I try to set references to the data source / delegate, the app breaks)
      • UISearchBar (unconfigured at the moment)

    This setup displays all the items and doesn't lock up, but I can't assign a dataSource without it crashing when I try to load the tab with the UITableView. What should I do to get data into this table, either in IB or code? My ideas are as follows:
    1. Implement a custom UITableView class, hook it up to the table view in IB or to the custom table view controller in Xcode.
    2. Pound head or laptop against the wall until it works.

    Update: Here's the error the simulator pushes to the console when I connect the table view's data source and delegate to File's Owner (whose class is my custom table view controller):

        2/14/09 6:59:12 PM TabBarWillbeRight[33172] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[UIViewController tableView:numberOfRowsInSection:]: unrecognized selector sent to instance 0x523760'
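    For reference, a hedged sketch of what the dataSource side usually looks like (the exception above means the object wired up as dataSource is behaving as a plain UIViewController, so the first thing to verify is that File's Owner's class really is the custom subclass); the backing array name is an assumption:

        - (NSInteger)tableView:(UITableView *)tableView
         numberOfRowsInSection:(NSInteger)section {
            return [self.items count];   // 'items' is an assumed backing array
        }

        - (UITableViewCell *)tableView:(UITableView *)tableView
                 cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            static NSString *cellId = @"Cell";
            UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:cellId];
            if (cell == nil) {
                cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault
                                               reuseIdentifier:cellId] autorelease];
            }
            cell.textLabel.text = [self.items objectAtIndex:indexPath.row];
            return cell;
        }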

    Read the article

  • Checking parameter classes in PHP

    - by Professor Elm
    At the moment, I have the following:

        <?php if ($custom != ""): ?>
        <div class="head-row5" id="custom">
            <?php print $custom?>
        </div><?php //head-row5 ?>
        <?php endif; ?>

    What is returned is something like:

        <div class="block block-block" id="block-block-15">
          <div class="title">
            <h3></h3>
          </div>
          <div class="content"><p style="text-align: center;"><a href="website linkage" id="T2roll"><span class="alt">Name of the linked page</span></a></p>
          </div>

    What I'm trying to do is find a way to print this out ONLY if a certain condition on its contents is true. How would I go about doing this? I believe that I'm going to have to check the elements of this $custom parameter by class. However, I have no idea how to go about doing this.
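    One way to approach this (a sketch, not from the original post; the id value "T2roll" is taken from the sample output above): parse $custom with DOMDocument and print it only when the element you care about is present:

        <?php
        if ($custom != "") {
            $doc = new DOMDocument();
            @$doc->loadHTML($custom);   // '@' silences warnings on loose HTML

            $xpath = new DOMXPath($doc);
            // Match by id; to match by class instead, something like
            // //div[contains(concat(' ', @class, ' '), ' content ')] works.
            $matches = $xpath->query("//a[@id='T2roll']");

            if ($matches->length > 0) {
                echo '<div class="head-row5" id="custom">' . $custom . '</div>';
            }
        }
        ?>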

    Read the article

  • Iframe form not submitting in IE but working in Firefox

    - by Younes
    I have got a form that posts values to a page in a wizard. When I load this form in an iframe, everything works fine in Firefox: it takes me to the second step of the wizard and keeps the values I filled in. When I test this in Internet Explorer, I don't get to the second step; instead it returns me to the first step of the wizard with all fields blank. When I check this in Fiddler, I see that I'm getting a different response when posting the form in the iframe from Firefox compared to Internet Explorer. How can I make this work for all browsers? What am I doing wrong? This is what I get back from Fiddler:

    Firefox:
    • POST (session 1) - result 302, www.dmg.eu /brugman/budgetplanner/aanmelden.php, body 0, Caching: no-store, no-cache, must-revalidate, post-check=0, pre-check=0, Expires: Thu, 19 Nov 1981 08:52:00 GMT, Content-Type: text/html; charset=UTF-8, process firefox:6116
    • GET (session 2) - result 200, www.dmg.eu /brugman/budgetplanner/, body 40.677, same caching and content type, process firefox:6116

    Internet Explorer:
    • POST (session 73) - result 302, www.dmg.eu /brugman/budgetplanner/aanmelden.php, body 0, same caching and content type, process iexplore:536
    • GET (session 74) - result 302, www.dmg.eu /brugman/budgetplanner/, body 0, same caching and content type, process iexplore:536

    So after the POST redirect, Firefox's GET returns 200 with a 40.677-byte body, while IE's GET returns another 302 with an empty body. Hope someone knows what the diff is :).

    Read the article

  • How should child views of UIScrollView report their bounds for contentSize?

    - by Mike
    I'm looking more for advice on the correct design for a view. What I have is a UIScrollView that contains one or more custom views I have created. My problem is: who reports to the scroll view what its contentSize should be? I have the following:

        UIView
        +- UIScrollView
           +- CustomView 1, with dynamic height depending on data
           +- CustomView 2, with dynamic height depending on data

    The UIViewController creates new instances of the custom views with data and then adds them as subviews to the UIScrollView. The problem I'm having is how to set the value of the scroll view's contentSize. Right now I'm not doing that, and the contents of the scroll view are clipped, with no scrolling possible. Should the custom view call [parent setContentSize:] in its drawRect:? Should the UIViewController query the custom view after creation to get its bounds and then call setContentSize? Should I subclass the UIScrollView to override addSubview to query each subview's height? Is there something else I'm missing? I hope I explained that properly. I'm new to this and still getting a handle on things. One option is sketched below.
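    One workable split of responsibilities (a sketch under the assumption that the view controller owns layout, which keeps drawRect: free of side effects): after creating and adding the custom views, let the controller stack them and derive the content size from the result:

        // In the view controller, after adding all custom views:
        - (void)layoutScrollContent {
            CGFloat y = 0.0f;
            for (UIView *subview in [self.scrollView subviews]) {
                CGRect frame = subview.frame;
                frame.origin.y = y;        // stack the views vertically
                subview.frame = frame;
                y += frame.size.height;    // accumulate the total height
            }
            self.scrollView.contentSize =
                CGSizeMake(self.scrollView.bounds.size.width, y);
        }

    Having each custom view call [parent setContentSize:] from drawRect: couples the views to their container and redoes work on every redraw, so pulling the logic up into the controller is usually the cleaner design.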

    Read the article

  • What MS technology to use for HTTP service returning XML?

    - by Borek
    I need to create a service that:
    • accepts HTTP requests (with query string or HTTP POST parameters)
    • does some processing on the requests (checking whether the request is valid, authentication, etc.)
    • reads data from a custom store (another HTTP call in our case)
    • returns the result as custom XML (defined with XSD)

    I'm trying to think of the various MS technologies that could help me and how well they would suit this scenario (a pretty standard one, I guess). The tasks above are relatively separate; this is what comes to mind:

    HTTP front-end:
    • ASP.NET Web Forms
    • ASP.NET MVC (seems more appropriate here as I won't need server controls, view state, etc.)
    • WCF? I don't know much about it or how well it would suit my task.

    Custom logic on the server: this will probably be generic C# code in all cases (sometimes "plugged into" or called from MVC controllers, or some equivalent place in other technologies).

    Reading data from internal data stores: as said, this is another HTTP server in our case. Options that come to mind:
    • Just read the data using something like WebClient
    • (Just theoretically) implement a LINQ provider
    • (Just even more theoretically) implement an EF provider

    Output the data as custom XML:
    • LINQ to XML
    • Serialization? Is it flexible enough? Does WCF provide some tools for this?
    • Some "OXM" - an Object/XML mapper, if there is something like that for .NET

    I may be wrong in many of my assumptions; this is just a quick list that comes to mind after quick research. Some general notes / questions:
    • Testing is important
    • A solution with a clear domain model would be much preferred over one without
    • Can Entity Framework actually help somewhere in my scenario? If so, where and how?
    • Would WCF be an appropriate technology for this? I don't know much about it.
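    To make the MVC option above concrete, here is a hedged sketch (controller, backend client and element names are all assumptions, not a real API) of an action that validates the request, reads from the backing store, and shapes the reply with LINQ to XML:

        using System.Linq;
        using System.Web;
        using System.Web.Mvc;
        using System.Xml.Linq;

        public class FeedController : Controller
        {
            private readonly IBackendClient backendClient; // wraps the internal HTTP call

            public ActionResult Items(string customerId)
            {
                if (string.IsNullOrEmpty(customerId))
                    throw new HttpException(400, "customerId is required");

                var items = backendClient.FetchItems(customerId);

                var doc = new XDocument(
                    new XElement("Items",
                        from i in items
                        select new XElement("Item",
                            new XAttribute("id", i.Id),
                            new XElement("Name", i.Name))));

                // Shape follows the XSD-defined schema; returned as raw XML.
                return Content(doc.ToString(), "text/xml");
            }
        }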

    Read the article

  • Wireshark Dissector: How to Identify Missing UDP Frames?

    - by John Dibling
    How do you identify missing UDP frames in a custom Wireshark dissector? I have written a custom dissector for the CQS feed (reference page). One of our servers gaps when receiving this feed. According to Wireshark, some UDP frames are never received. I know that the frames were sent, because all of our other servers are gap-free. A CQS frame consists of multiple messages, each having its own sequence number. My custom dissector provides the following data to Wireshark:
    • cqs.frame_gaps - the number of gaps within a UDP frame (always zero)
    • cqs.frame_first_seq - the first sequence number in a UDP frame
    • cqs.frame_expected_seq - the first sequence number expected in the next UDP frame
    • cqs.frame_msg_count - the number of messages in this UDP frame

    And I am displaying each of these values in custom columns (screenshot omitted). I tried adding code to my dissector that simply saves the last-processed sequence number (as a local static) and flags gaps when the dissector processes a frame where current_sequence != (previous_sequence + 1). This did not work, because the dissector can be called in random-access order, depending on where you click in the GUI. So you could process frame 10, then frame 15, then frame 11, etc. Is there any way for my dissector to know if the frame that came before it (or the frame that follows) is missing? The dissector is written in C. (See also a companion post on serverfault.com.)
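    The usual answer to the random-access problem is: do the sequential bookkeeping only on the first pass (dissectors are called strictly in capture order then), persist the verdict per frame, and re-read it on later visits. A sketch, with the caveat that names follow the newer wmem-era API (older releases spell these p_add_proto_data(pinfo->fd, ...) and allocate with se_alloc), and the two helpers are hypothetical:

        #include <epan/packet.h>

        typedef struct {
            guint32 gap_before;   /* sequence numbers missing before this frame */
        } cqs_frame_state_t;

        static int proto_cqs = -1;     /* set by proto_register_protocol() */
        static guint32 expected_seq;   /* running counter, valid on first pass only */

        static void
        dissect_cqs_seq(tvbuff_t *tvb, packet_info *pinfo, proto_tree *tree)
        {
            cqs_frame_state_t *state;

            if (!pinfo->fd->visited) {
                /* First, linear pass over the capture: the counter is trustworthy. */
                guint32 first_seq = cqs_get_first_seq(tvb);   /* hypothetical helper */
                guint32 msg_count = cqs_get_msg_count(tvb);   /* hypothetical helper */

                state = wmem_new(wmem_file_scope(), cqs_frame_state_t);
                state->gap_before = first_seq - expected_seq;
                expected_seq = first_seq + msg_count;

                p_add_proto_data(wmem_file_scope(), pinfo, proto_cqs, 0, state);
            } else {
                /* Random access from the GUI: recall the stored verdict. */
                state = (cqs_frame_state_t *)
                    p_get_proto_data(wmem_file_scope(), pinfo, proto_cqs, 0);
            }

            /* ... add state->gap_before to the tree as cqs.frame_gaps ... */
        }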

    Read the article

  • WPF: Command routing for keyboard shortcuts

    - by Sprotty
    Basically I want to create a keyboard shortcut which is valid within the scope of a window, and not just enabled when focus is within the control that binds it. In more detail: I have a window which has 3 controls:
    1. a toolbar
    2. a textbox
    3. a custom control

    The toolbar has a button bound to the command CustomCommands.CmdA and linked to Ctrl-T. My custom control can process CmdA. When I run the app and click on my custom control, CmdA is enabled and works fine; Ctrl-T also causes the command to fire. However, when I select the text box, my custom command CmdA becomes disabled. I can rectify this by setting the command target for CmdA's button: now when I select the textbox, CmdA is still enabled. But the keyboard shortcut Ctrl-T does nothing. Is there any easy way to change the scope of keyboard shortcuts? Or do I need to catch the keypress somewhere lower down and work out which command it relates to and route it myself (if so, is there a framework within which to do this)? Many thanks, Simon
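    One hedged possibility (element and command names assumed from the description above): register the gesture on the window itself and give the binding the same CommandTarget trick that fixed the button, so Ctrl-T stays routed at the custom control even while the TextBox has focus:

        // In the window's constructor, after InitializeComponent():
        var binding = new KeyBinding(CustomCommands.CmdA, Key.T, ModifierKeys.Control);
        binding.CommandTarget = myCustomControl;   // hypothetical field name
        this.InputBindings.Add(binding);

    This widens the gesture's scope from "focused element" to "anywhere in the window" without hand-rolling key handling.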

    Read the article

  • Django: Paginator + raw SQL query

    - by Silver Light
    Hello! I'm using the Django Paginator everywhere on my website and even wrote a special template tag to make it more convenient. But now I have reached a state where I need to make a complex custom raw SQL query that, without a LIMIT, will return about 100K records. How can I use the Django Paginator with a custom query? Simplified example of my problem:

    My model:

        class PersonManager(models.Manager):
            def complicated_list(self):
                from django.db import connection
                cursor = connection.cursor()
                # The real query is much more complex
                cursor.execute("""SELECT * FROM `myapp_person`""")
                result_list = []
                for row in cursor.fetchall():
                    result_list.append(row[0])
                return result_list

        class Person(models.Model):
            name = models.CharField(max_length=255)
            surname = models.CharField(max_length=255)
            age = models.IntegerField()
            objects = PersonManager()

    The way I use pagination with the Django ORM:

        all_objects = Person.objects.all()
        paginator = Paginator(all_objects, 10)
        try:
            page = int(request.GET.get('page', '1'))
        except ValueError:
            page = 1
        try:
            persons = paginator.page(page)
        except (EmptyPage, InvalidPage):
            persons = paginator.page(paginator.num_pages)

    This way, Django gets very smart and adds a LIMIT to the query when executing it. But when I use the custom manager:

        all_objects = Person.objects.complicated_list()

    all data is selected, and only then is the Python list sliced, which is VERY slow. How can I make my custom manager behave similarly to the built-in one?
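    A common pattern for this (a sketch, not the only way): Paginator only ever needs a count and a slice, so a small wrapper can defer the LIMIT/OFFSET to the moment a page is actually requested:

        from django.core.paginator import Paginator
        from django.db import connection

        class RawPaginatorList(object):
            """Wraps a raw SQL statement so Paginator can count and slice it."""

            def __init__(self, sql, params=()):
                self.sql = sql
                self.params = params
                self._count = None

            def count(self):
                if self._count is None:
                    cursor = connection.cursor()
                    cursor.execute("SELECT COUNT(*) FROM (%s) AS sub" % self.sql,
                                   self.params)
                    self._count = cursor.fetchone()[0]
                return self._count

            __len__ = count

            def __getitem__(self, key):
                # Paginator slices with object_list[bottom:top]
                cursor = connection.cursor()
                cursor.execute("%s LIMIT %d OFFSET %d"
                               % (self.sql, key.stop - key.start, key.start),
                               self.params)
                return cursor.fetchall()

        all_objects = RawPaginatorList("SELECT * FROM `myapp_person`")
        paginator = Paginator(all_objects, 10)

    Only the COUNT(*) query and one page's worth of rows ever hit the database.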

    Read the article

  • JSTL <c:out> where the element name contains a space character...

    - by Shane
    I have an array of values being made available, but unfortunately some of the variable names include a space. I cannot work out how to simply output these in the page. I know I'm not explaining this well (I'm the JSP designer, not the Java coder), so hopefully this example will illustrate what I'm trying to do:

        <c:out value="${x}"/>

    outputs to the page (artificially wrapped) as:

        {width=96.0, orderedheight=160.0, instructions=TEST ONLY. This is a test., productId=10132, publication type=ns, name=John}

    I can output the name by using <c:out value="${x.name}"/>, no problems. The issue is when I try to get the "publication type"... because it has a space, I can't seem to get <c:out> to display it. I have tried:

        <!-- error parsing custom action attribute: -->
        <c:out value="${x.publication type}"/>

        <!-- error occurred while evaluating custom action attribute: -->
        <c:out value="${x.publication+type}"/>

        <!-- error occurred while parsing custom action attribute: -->
        <c:out value="${x.'publication type'}"/>

        <!-- error occurred while parsing custom action attribute: -->
        <c:out value="${x.publication%20type}"/>

    I know the real solution is to get the variable names formatted correctly (i.e. without spaces), but I can't get the code updated for quite a while. Can this be done? Any help greatly appreciated.
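    For what it's worth, the JSP EL grammar has an escape hatch for exactly this case: bracket notation accepts an arbitrary quoted string as the key, spaces included, so the following should work without renaming anything:

        <c:out value="${x['publication type']}"/>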

    Read the article

  • MSI install sequence - run DB scripts before services start

    - by marc_s
    Folks, we're running into some sequencing trouble with our MSI install. As part of our app, we install a bunch of services and allow the user to pick whether to start them right away or later. When they start right away, they seem to start too early in the install sequence - before our database manager has had a chance to update the database. Right now, our custom action to run the database updater looks like this - it's run after "InstallFinalize", very late in the process:

        <InstallExecuteSequence>
          <RemoveExistingProducts After='InstallInitialize' />
          <Custom Action='RunDbUpdateManagerAction' After='InstallFinalize'>&DbUpdateManager=3</Custom>
        </InstallExecuteSequence>

    What would be the more appropriate step to run after or before, to make sure the DB scripts are executed before any of the installed services start up? Is there a "BeforeServiceStart" step?

    EDIT: Just defining the Before='StartServices' attribute on the <Custom> tag didn't solve my problem. I am assuming the issue is this: the custom action has an "inner text", which represents a condition, and this condition is "&DbUpdateManager=3". From what I can deduce from trial & error, this probably means "the DbUpdateManager feature must be published". Now, trouble is: "PublishFeature" comes way at the end in the install sequence, just before "InstallFinalize", and definitely AFTER InstallServices / StartServices. So when I specify the Before='StartServices' requirement, the condition "DbUpdateManager feature must be published" isn't true yet, so DbUpdateManager doesn't get executed :-( I tried removing the condition - in that case, my DbUpdateManager sometimes doesn't execute at all, sometimes more than once - no real clear pattern as to what happens when... Any more ideas? Is there a way I could check for a condition "the DbUpdateManager feature is installed" which would be true after the "InstallFiles" step? Marc
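    Two hedged observations that may help (a sketch, not a verified fix). First, the &DbUpdateManager=3 condition tests the feature's action state, which Windows Installer fixes at costing time (CostFinalize), not at PublishFeature, so it should evaluate the same anywhere after costing. Second, an immediate custom action placed before StartServices actually runs while the execution script is being generated, i.e. before any files are copied; for the updater to see installed files it would need to be a deferred action scheduled between InstallFiles and StartServices. The FileKey and exe name below are hypothetical:

        <CustomAction Id='RunDbUpdateManagerAction'
                      FileKey='DbUpdateManagerExe'
                      ExeCommand=''
                      Execute='deferred' Impersonate='no' Return='check' />

        <InstallExecuteSequence>
          <RemoveExistingProducts After='InstallInitialize' />
          <!-- deferred: runs in the execution script after files exist on disk -->
          <Custom Action='RunDbUpdateManagerAction' Before='StartServices'>&amp;DbUpdateManager=3</Custom>
        </InstallExecuteSequence>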

    Read the article

  • Calling an event from the original thread?

    - by user311883
    Hi all, here is my problem: I have a class containing an object that raises an event, and in that event handler I raise a custom event from my class. Unfortunately the original object raises its event from another thread, so my event is also raised on another thread. This causes an exception when my custom event handler tries to access controls. Here is a code sample to better understand:

        class MyClass
        {
            // Original object
            private OriginalObject myObject;

            // My event
            public delegate void StatsUpdatedDelegate(object sender, StatsArgs args);
            public event StatsUpdatedDelegate StatsUpdated;

            public MyClass()
            {
                // Original object event
                myObject.AnEvent += new EventHandler(myObject_AnEvent);
            }

            // This event is called on another thread
            private void myObject_AnEvent(object sender, EventArgs e)
            {
                // Raise my custom event here
                StatsArgs args = new StatsArgs(..........);
                StatsUpdated(this, args);
            }
        }

    So when, on my Windows Form, I try to update a control from the StatsUpdated event, I get a cross-thread exception because it was raised on another thread. What I want is to raise my custom event on the original class's thread, so controls can be used within it. Can anyone help me?
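    A common way out (a sketch using SynchronizationContext; the StatsArgs constructor arguments are elided as in the original): capture the context of the thread that creates MyClass - the UI thread, if the form creates it - and post the event through it:

        using System.Threading;

        class MyClass
        {
            private OriginalObject myObject;
            private readonly SynchronizationContext context;

            public delegate void StatsUpdatedDelegate(object sender, StatsArgs args);
            public event StatsUpdatedDelegate StatsUpdated;

            public MyClass()
            {
                // Capture the creating thread's context (a
                // WindowsFormsSynchronizationContext when constructed on the UI thread).
                context = SynchronizationContext.Current ?? new SynchronizationContext();
                myObject.AnEvent += new EventHandler(myObject_AnEvent);
            }

            private void myObject_AnEvent(object sender, EventArgs e)
            {
                StatsArgs args = new StatsArgs(/* ... */);
                StatsUpdatedDelegate handler = StatsUpdated;
                if (handler != null)
                {
                    // Post marshals the delegate onto the captured thread,
                    // so subscribers can touch controls safely.
                    context.Post(delegate { handler(this, args); }, null);
                }
            }
        }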

    Read the article

  • What is the standard way to bundle OSGi dependent libraries?

    - by Chris
    Hi, I have a project that references a number of open source libraries, some new, some not so new. That said, they are all stable and I wish to stick with my chosen versions until I have time to migrate to the newer versions (I tested hsqldb 2.0 yesterday and it contains many API changes). One of the libraries I wish to embed is Jasper Reports, but as you all surely know, it comes with a mountain of supporting jar files and I only need a (known) subset of that mountain, therefore I am planning to custom-bundle all of my dependent libraries. So:

    1. Does everyone custom-make their own OSGi bundles for the open-source libraries they are using, or is there a master source of OSGi versions of common libraries?
    2. Also, I was thinking that it would be far simpler for each of my bundles simply to embed their dependent jars within the bundle itself. Is this possible?
    3. If I choose to embed the 3rd party foc libraries within a bundle, I assume I will need to produce 2 jar files: one without the embedded libraries (for loading via the classpath with the standard classloader), and one OSGi version that includes the embedded library. Should I therefore choose a bundle name like <<myprojectname>>-<<subproject>>-osgi-1.0.0.jar?
    4. If I cannot embed the open source libraries and choose to custom-bundle them (via bnd), should I choose a unique bundle name to avoid conflict with a possible official bundle? E.g. <<myprojectname>>-<<3rdpartylibname>>-<<3rdpartylibversion>>.jar?
    5. My non-OSGi-enabled project currently scans for custom plugins via the META-INF folders in my various plugin jars, using Service.providers(...). If I go OSGi, will this mechanism still work?
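    On question 2: yes, embedding is possible. OSGi's Bundle-ClassPath header lets a bundle load classes from jars nested inside itself, and bnd can copy them in with Include-Resource. A hedged manifest sketch (bundle and jar names are illustrative):

        Bundle-ManifestVersion: 2
        Bundle-SymbolicName: com.example.reporting
        Bundle-Version: 1.0.0
        Bundle-ClassPath: ., lib/jasperreports.jar, lib/commons-digester.jar
        Export-Package: com.example.reporting.api

    The leading "." keeps the bundle's own classes visible alongside the embedded jars.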

    Read the article

  • Problem with include in PHP

    - by EquinoX
    I have a PHP file called Route.php, which is located in /var/www/api/src/fra/custom/Action, and the files I want to include are in /var/www/api/src/fra/custom/. So what I have inside Route.php is an absolute path to the two PHP files I want to include:

        <?php
        include '/var/www/api/src/frapi/custom/myneighborlists.php';
        include '/var/www/api/src/frapi/custom/mynodes.php';
        ............
        ...........
        ?>

    These two files each contain a very large array that I want to use in Route.php. When I do var_dump($global) it just returns NULL. What am I missing here?

    UPDATE: I did an echo in the included file and it prints something, so the file is being included... however, I can't access the array... when I do a var_dump on the array, it just returns NULL! I did add a global $myarray inside the function in which I want to access the array from the other PHP file.

    Sample myneighborlists.php:

        <?php
        $myarray = array(
            1 => array(3351 => 179),
            2 => array(3264 => 172, 3471 => 139),
            3 => array(3467 => 226),
            4 => array(3309 => 211, 3469 => 227),
            5 => array(3315 => 364, 3316 => 144, 3469 => 153),
            6 => array(3305 => 273, 3309 => 171),
            7 => array(3267 => 624, 3354 => 465, 3424 => 411, 3437 => 632),
            8 => array(3302 => 655, 3467 => 212),
            9 => array(3305 => 216, 3306 => 148, 3465 => 505),
            10 => array(3271 => 273, 3472 => 254),
            11 => array(3347 => 273, 3468 => 262),
            12 => array(3310 => 237, 3315 => 237));
        ?>
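    For what it's worth, the scoping rule that usually bites here (a sketch; whether it applies depends on where the include actually runs): include pulls $myarray in at the scope of the include statement, so a function body only sees it via global or $GLOBALS, and if the include itself happens inside a function, the array never reaches file scope at all:

        <?php
        // At file scope: defines $myarray globally
        include '/var/www/api/src/frapi/custom/myneighborlists.php';

        function neighboursOf($id)
        {
            global $myarray;   // without this line, $myarray is NULL in here
            return isset($myarray[$id]) ? $myarray[$id] : null;
        }

        var_dump(neighboursOf(1));   // array(3351 => 179) from the sample data
        ?>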

    Read the article

  • Defining EditText imeOptions when using InputMethodManager.showSoftInput(View, int, ResultReceiver)

    - by TuomasR
    In my application I have a custom view which requires some text input. As the view itself doesn't contain any actual views (it's a Surface with custom drawing being done), I have a FrameLayout which contains the custom view and, underneath it, an EditText view. When the user performs a specific action, the custom view is hidden and the EditText takes over for user input. This works fine, but android:imeOptions seem to be ignored for this view. I'm currently doing this:

        InputMethodManager inputMethodManager = (InputMethodManager) parent.getSystemService(Context.INPUT_METHOD_SERVICE);
        EditText t = (EditText) parent.findViewById(R.id.DummyEditor);
        t.setImeOptions(EditorInfo.IME_ACTION_DONE);
        inputMethodManager.showSoftInput(t, 0, new ResultReceiver(mHandler) {
            @Override
            protected void onReceiveResult(int resultCode, Bundle resultData) {
                // We're done
                System.out.println("Editing done: " + ((EditText) parent.findViewById(R.id.DummyEditor)).getText());
            }
        });

    It seems that setImeOptions(EditorInfo.IME_ACTION_DONE) has no effect. I've also tried adding the option to the layout XML with android:imeOptions="actionDone". No help. Any ideas?
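    A hedged guess at what is going wrong: the IME reads the editor's EditorInfo when input starts, so options changed while the editor is already attached can be stale, and imeOptions are generally only honoured for an explicit text input type. Forcing the IME to re-query before showing it may help:

        EditText t = (EditText) parent.findViewById(R.id.DummyEditor);
        t.setInputType(InputType.TYPE_CLASS_TEXT);   // imeOptions need a text type
        t.setImeOptions(EditorInfo.IME_ACTION_DONE);
        t.requestFocus();

        InputMethodManager imm = (InputMethodManager)
                parent.getSystemService(Context.INPUT_METHOD_SERVICE);
        imm.restartInput(t);        // make the IME re-read the EditorInfo
        imm.showSoftInput(t, 0);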

    Read the article

  • Persisting object changes from child form to parent form based on button press.

    - by Shyran
    I have created a form that is used for both adding and editing a custom object. Which mode the form takes is determined by an enum value passed from the calling code, along with an object of the custom type. All of my controls are data-bound to specific properties of the custom object. When the form is in Add mode this works great, since as the controls are updated with data, the underlying object is as well. However, in Edit mode I keep two variables of the custom object supplied by the calling code: the original, and a temporary one made through deep copying. The controls are then bound to the temporary copy; this makes it easy to discard the changes if the user clicks the Cancel button. What I want to know is how to persist those changes back to the original object if the user clicks the OK button, since there is now a disconnect because of the deep copying. I am trying to avoid implementing an internal property on the Add/Edit form if I can. Below is an example of my code:

        public AddEditCustomerDialog(Customer customer, DialogMode mode)
        {
            InitializeComponent();
            InitializeCustomer(customer, mode);
        }

        private void InitializeCustomer(Customer customer, DialogMode mode)
        {
            this.customer = customer;
            if (mode == DialogMode.Edit)
            {
                this.Text = "Edit Customer";
                this.tempCustomer = ObjectCopyHelper.DeepCopy(this.customer);
                this.customerListBindingSource.DataSource = this.tempCustomer;
                this.phoneListBindingSource.DataSource = this.tempCustomer.PhoneList;
            }
            else
            {
                this.customerListBindingSource.DataSource = this.customer;
                this.phoneListBindingSource.DataSource = this.customer.PhoneList;
            }
        }
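    One way to close the loop without exposing an internal property (a sketch; Customer's members and the stored mode field are illustrative assumptions): let the OK handler copy the edited values from the temporary copy back onto the original, which the caller still holds a reference to:

        private void okButton_Click(object sender, EventArgs e)
        {
            if (this.mode == DialogMode.Edit)   // assumes mode was stored in a field
            {
                // Copy edited values back; the caller's reference sees the changes.
                this.customer.Name = this.tempCustomer.Name;
                this.customer.PhoneList = this.tempCustomer.PhoneList;
                // ... remaining properties ...
            }

            this.DialogResult = DialogResult.OK;
        }

    A CopyFrom(Customer) method on Customer, kept next to the deep-copy helper, would keep the property list in one place and spare the dialog from knowing each member.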

    Read the article

  • UIComponent in SWC

    - by mustISignUp
    In Flash, if I create a custom MovieClip and compile it to a SWC, I can use it in .fla files (by linking to the .swc):

        var mcInstance = new CustomMovieClip();
        addChild(mcInstance);

    All the arrangement of graphics on the custom movie clip's layers is preserved. If I subclass UIComponent and compile to a SWC, I can use the custom class in my .fla file, but the new instance doesn't seem to construct the children arranged on the layers. I know that the correct way to make a custom component is to have the two frames - the first to specify the bounding box, the second frame for assets - and that the first graphic in frame 1 is removed at runtime. But I'm not really trying to make a reusable component - I just want to use the UIComponent class (it seems to have some nice extensions to Sprite). As I really want some hand-positioned layers inside the component, I figured I could have the bounding box as the first element on frame 1 (knowing that it would be removed), but any other items I put on frame 1 would be preserved - buttons, images, lines, etc. Is this possible?

    Read the article

  • Hey, Google: It’s Time to Add Multi-Window Multitasking To Android

    - by Chris Hoffman
    In 2012, Google’s Dianne Hackborn threatened to revoke CyanogenMod’s access to the Android Market if they moved forward with adding “Cornerstone” multitasking to their custom ROM. Samsung has since created their own multi-window multitasking feature. Dianne Hackborn said this “is something that needs to be done at the mainline platform level” so apps wouldn’t break. She was right — Android needs this as a standard feature and it’s time for Google to provide it.

    Doesn’t Android Have Multitasking?

    Android originally stood out from Apple’s iOS with its powerful multitasking. Applications can continue running in the background while you’re using another application. This makes Android powerful — you can even have BitTorrent clients downloading files in the background while using another app. Android still kept the design of a single app on screen at a time. This made a lot of sense when Android only ran on smartphones with small screens. Today, Android runs on everything from smaller smartphones all the way up to huge “phablets” like the Galaxy Note. Android has gone beyond phones and runs on 12-inch tablets, convertibles with keyboard docks, laptops, and even Android desktops. Android isn’t just a phone operating system.

    Samsung’s Multi-Window Isn’t Good Enough

    Samsung has tried to add value to Android by adding a multi-window feature. When you’re using a high-end phone like the Galaxy Note or Galaxy S, or a Galaxy tablet, you have the ability to run certain apps side-by-side with each other. There are big problems here. This only works on Samsung devices, and only on specific Samsung devices. To add support for this feature in a way that doesn’t break other apps, Samsung’s multi-window feature also only works with specific apps. You can’t just run any app in multi-window view, only the apps on the Multi Window bar Samsung provides. This prevents third-party apps from breaking, which is what Google was worried about with CyanogenMod’s Cornerstone feature. A feature that only works with a handful of apps on specific devices from a single manufacturer isn’t good enough. This feature needs to work on every Android device — or at least ones with suitably large screens and powerful enough internals. It needs to be an Android platform feature so application developers can ensure their apps will work properly with it on every device. Android developers shouldn’t have to add support for each manufacturer’s own multi-window feature if other manufacturers decide to copy Samsung.

    Floating Apps Are a Dirty Hack

    Floating apps also enable real multitasking. Remember that Android allows apps to run in the background while you’re using an app in the foreground. These apps can present interfaces that appear floating above the current app — think of it like using “always on top” to make a window always appear over every other app on a desktop operating system. You can install floating apps to browse the web, take notes, chat, and watch videos while using any app. Only apps specifically designed to run as floating apps will work, so you have to seek them out. Floating apps are also awkward to use because they float over the app you’re using, blocking parts of its interface. Microsoft added floating-window support to Skype for Android. You can have a video conversation and the other person’s face will always appear on your screen, even when you leave the Skype app. Microsoft is using more of Android’s multi-window multitasking power than Google is.

    Custom ROMs and Root-Only Tweaks Aren’t Acceptable

    Some custom ROMs are adding this feature to Android. Google threatened to revoke CyanogenMod’s access to the Android Market (now known as Google Play) if they added this feature because it could potentially break third-party apps. Today, other custom ROMs are working on split-screen multitasking. Samsung added their own version to their own devices. You can also get this feature by using a root-only Xposed Framework tweak known as XMultiWindow. If you have root access, you can get multi-window multitasking for any app on your device. This shouldn’t require rooting your device or installing a custom ROM. These third-party solutions often have awkward interfaces and bugs. We need an integrated, supported solution that works the same on every device.

    Why Multi-Window is Important

    Microsoft’s Windows 8.1 stands out among tablet operating systems for its powerful multitasking support, allowing you to view several apps side-by-side at the same time. Apple is also reported to be working on adding side-by-side apps to the iPad with iOS 8. On every competitor’s operating system, you’ll be able to view a web page while you write an email, watch a video while you browse the web, or chat with someone while you do anything else. But Android has still remained frozen in time. Despite all Android’s underlying power — and despite the way Android allows apps to adapt to different screen sizes — Google is resisting adding this feature. Large-screen Android tablets like the Nexus 10 (remember that tablet Google hasn’t updated in over 18 months?) need this feature. So do huge phones, convertibles, laptops, and Android desktops. If tablets are the future of personal computing, we should be able to do more than one thing at a time on our tablets’ big screens. Microsoft, Samsung, and even Apple are realizing this — now it’s Google’s turn.

    Image Credit: Sergey Galyonkin on Flickr, Karlis Dambrans on Flickr

    Read the article

  • T4 Template error - Assembly Directive cannot locate referenced assembly in Visual Studio 2010 project

    - by CodeSniper
    I ran into the following error recently in Visual Studio 2010 while trying to port Phil Haack’s excellent T4CSS template, which was originally built for Visual Studio 2008.

    The Problem

        Error Compiling transformation: Metadata file 'dotless.Core' could not be found

    In “T4 speak”, this simply means that you have an Assembly directive in your T4 template but the T4 engine was not able to locate or load the referenced assembly. In the case of the T4CSS template, this was a showstopper for making it work in Visual Studio 2010. On a side note: the T4CSS template is a sweet little wrapper to allow you to use DotLessCss to generate static .css files from .less files rather than using their default HttpHandler or command-line tool. If you haven't tried DotLessCSS yet, go check it out now! In short, it is a tool that allows you to templatize and program your CSS files so that you can use variables, expressions, and mixins within your CSS, which enables rapid changes and a lot of developer-flexibility as you evolve your CSS and UI.

    Back to our regularly scheduled program… Anyhow, this post isn't about DotLessCss, it's about the T4 templates and the errors I ran into when converting them from Visual Studio 2008 to Visual Studio 2010. In VS2010, there were quite a few changes to the T4 template engine; most were excellent changes, but this one bit me with T4CSS:

        “Project assemblies are no longer used to resolve template assembly directives.”

    In VS2008, if you wanted to reference a custom assembly in your T4 template (.tt file) you would simply right click on your project, choose Add Reference and select that assembly. Afterwards you were allowed to use the following syntax in your T4 template to tell it to look at the local references:

        <#@ assembly name="dotless.Core.dll" #>

    This told the engine to look in the “usual place” for the assembly, which is your project references. However, this is exactly what they changed in VS2010. They now basically sandbox the T4 engine to keep your T4 assemblies separate from your project assemblies. This can come in handy if you want to support different versions of an assembly referenced both by your T4 templates and your project.

    Who broke the build? Oh, Microsoft did!

    In our case, this change causes a problem since the templates are no longer compatible when upgrading to VS 2010 – thus it's a breaking change. So, how do we make this work in VS 2010? Luckily, Microsoft now offers several options for referencing assemblies from T4 templates:

    1. GAC your assemblies and use a Namespace Reference or Fully Qualified Type Name
    2. Use a hard-coded Fully Qualified UNC path
    3. Copy the assembly to the Visual Studio "Public Assemblies Folder" and use a Namespace Reference or Fully Qualified Type Name
    4. Use or define a Windows Environment Variable to build a Fully Qualified UNC path
    5. Use a Visual Studio macro to build a Fully Qualified UNC path

    Options #1 & 2 were already supported in Visual Studio 2008, so if you want to keep your templates compatible with both Visual Studio versions, then you would have to adopt one of these approaches.

    Yakkety Yak, use the GAC!

    Option #1 requires an additional pre-build step to GAC the referenced assembly, which could be a pain. But, if you go that route, then after you GAC, all you need is a simple type name or namespace reference such as:

        <#@ assembly name="dotless.Core" #>

    Hard Coding ain't that hard!

    The other option of using hard-coded paths in Option #2 is pretty impractical in most situations since each developer would have to use the same local project folder paths, or modify this setting each time for their local machines as well as for production deployment. However, if you want to go that route, simply use the following assembly directive style:

        <#@ assembly name="C:\Code\Lib\dotless.Core.dll" #>

    Let's go Public!

    Option #3, the Visual Studio Public Assemblies Folder, is the recommended place to put commonly used tools and libraries that are only needed for Visual Studio. Think of it like a VS-only GAC. This is likely the best place for something like dotLessCSS and is my preferred solution. However, you will need to either use an installer or a pre-build action to copy the assembly to the right folder location. Normally this is located at: C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\PublicAssemblies

    Once you have copied your assembly there, you use the type name or namespace syntax again:

        <#@ assembly name="dotless.Core" #>

    Save the Environment!

    Option #4, using a Windows Environment Variable, is interesting for enterprise use where you may have standard locations for files, but less useful for demo-code, frameworks, and products where you don't have control over the local system. The syntax for including an environment variable in your assembly directive looks like the following, just as you would expect:

        <#@ assembly name="%mypath%\dotless.Core.dll" #>

    “mypath” is a Windows environment variable you set up that points to some fully qualified UNC path on your system. In the right situation this can be a great solution, such as one where you use an msi installer for deployment, or where you have a pre-existing environment variable you can re-use.

    OMG Macros!

    Finally, Option #5 is a very nice option if you want to keep your T4 template’s assembly reference local and relative to the project or solution without muddying-up your dev environment or GAC with extra deployments. An example looks like this:

        <#@ assembly name="$(SolutionDir)lib\dotless.Core.dll" #>

    In this example, I’m using the “SolutionDir” VS macro so I can reference an assembly in a “/lib” folder at the root of the solution. This is just one of the many macros you can use. If you are familiar with creating Pre/Post-build Event scripts, you can use their dialog to look at all of the different VS macros available. This option gives the best solution for local assemblies without the hassle of extra installers or other setup before the build. However, it's still not compatible with Visual Studio 2008, so if you have a T4 template you want to use with both, then you may have to create multiple .tt files, one for each IDE version, or require the developer to set a value in the .tt file manually. I’m not sure if T4 templates support any form of compiler switches like “#if (VS2010)” statements, but it would definitely be nice in this case to switch between this option and one of the ones more compatible with VS 2008.

    Conclusion

    As you can see, we went from 3 options with Visual Studio 2008 to 5 options (plus one problem) with Visual Studio 2010. As a whole, I think the changes are great, but the short-term growing pains during the migration may be annoying until we get used to our new-found power. Hopefully this all made sense and was helpful to you. If nothing else, I’ll just use it as a reference the next time I need to port a T4 template to Visual Studio 2010.

    Happy T4 templating, and “May the fourth be with you!”

    Read the article

  • Amazon Web Services (AWS) Plug-in for Oracle Enterprise Manager

    - by Anand Akela
    Contributed by Sunil Kunisetty and Daniel Chan

    Introduction and Architecture

    As more and more enterprises deploy some of their non-critical workload on Amazon Web Services (AWS), it’s becoming critical to monitor those public AWS resources alongside their on-premise resources. Oracle recently announced the Oracle Enterprise Manager Plug-in for Amazon Web Services (AWS), which allows you to achieve that goal. The on-premise Oracle Enterprise Manager (EM12c) acts as a single tool to get a comprehensive view of your public AWS resources as well as your private cloud resources. By deploying the plug-in within your Cloud Control environment, you gain the following management features:
    • Monitor EBS, EC2 and RDS instances on Amazon Web Services
    • Gather performance metrics and configuration details for AWS instances
    • Raise alerts and violations based on thresholds set on monitoring
    • Generate reports based on the gathered data

    Users of this plug-in can leverage rich Enterprise Manager features such as system promotion, incident generation based on thresholds, integration with 3rd party ticketing applications, etc. AWS monitoring via this plug-in is enabled via the Amazon CloudWatch API, and users of this plug-in are responsible for supplying credentials for accessing AWS and the CloudWatch API. This plug-in can only be deployed on an EM12c R2 platform, and the agent version should be at minimum 12c R2. (Architecture diagram omitted; it shows EM12c monitoring Amazon Elastic Block Store (EBS), Amazon Elastic Compute Cloud (EC2) and Amazon Relational Database Service (RDS) via the CloudWatch API.)

    Here are a few key features:
    • Rich and exhaustive list of metrics. Metrics can be gathered from an Agent running outside AWS.
    • Critical configuration information.
    • Custom home pages with charts and AWS configuration information.
    • Incident generation based on thresholds set on monitoring data.

    Discovery and Monitoring

    AWS instances can be added to EM12c either via the EM12c User Interface (UI) or the EM12c Command Line Interface (EMCLI) by providing the AWS credentials (Secret Key and Access Key Id) as well as resource-specific properties as target properties. Here is a quick mapping of target types and properties for each AWS resource:
    • EBS Resource - target type "Amazon EBS Service"; properties: CloudWatch base URI, EC2 base URI, Period, Volume Id, Proxy Server and Port
    • EC2 Resource - target type "Amazon EC2 Service"; properties: CloudWatch base URI, EC2 base URI, Period, Instance Id, Proxy Server and Port
    • RDS Resource - target type "Amazon RDS Service"; properties: CloudWatch base URI, RDS base URI, Period, Instance Id, Proxy Server and Port

    Proxy server and port are optional and are only needed if the agent is behind a firewall. Here is an emcli example to add an EC2 target. Please read the Installation and Readme guide for more details and step-by-step instructions to deploy the plug-in and add the AWS instances.

        ./emcli add_target \
            -name="<target name>" \
            -type="AmazonEC2Service" \
            -host="<host>" \
            -properties="ProxyHost=<proxy server>;ProxyPort=<proxy port>;EC2_BaseURI=http://ec2.<region>.amazonaws.com;BaseURI=http://monitoring.<region>.amazonaws.com;InstanceId=<EC2 instance Id>;Period=<data point period>" \
            -subseparator=properties="="

        ./emcli set_monitoring_credential \
            -set_name="AWSKeyCredentialSet" \
            -target_name="<target name>" \
            -target_type="AmazonEC2Service" \
            -cred_type="AWSKeyCredential" \
            -attributes="AccessKeyId:<access key id>;SecretKey:<secret key>"

    The emcli utility is found under the ORACLE_HOME of the EM12c install. Once the instance is discovered, the target will show up under the 'All Targets' list under 'Amazon EC2 Service'. Once the instances are added, one can navigate to the custom home pages for these resource types. The custom home pages not only include critical metrics, but also vital configuration parameters and incidents raised for these instances. By mapping the configuration parameters as instance properties, we can slice-and-dice and group various AWS instances by leveraging the EM12c Config Search feature. The following configuration properties and metrics are collected for these resource types:
    • EBS Resource - configuration properties: Volume Id, Volume Type, Device Name, Size, Availability Zone. Metrics: Response (Status); Utilization (QueueLength, IdleTime); Volume Statistics (ReadBandwidth, WriteBandwidth, ReadThroughput, WriteThroughput); Operation Statistics (ReadSize, WriteSize, ReadLatency, WriteLatency)
    • EC2 Resource - configuration properties: Instance ID, Owner Id, Root Device Type, Instance Type, Availability Zone. Metrics: Response (Status); CPU Utilization (CPUUtilization); Disk I/O (DiskReadBytes, DiskWriteBytes, DiskReadOps, DiskWriteOps, DiskReadRate, DiskWriteRate, DiskIOThroughput, DiskReadOpsRate, DiskWriteOpsRate, DiskOperationThroughput); Network I/O (NetworkIn, NetworkOut, NetworkInRate, NetworkOutRate, NetworkThroughput)
    • RDS Resource - configuration properties: Instance ID, Database Engine Name, Database Engine Version, Database Instance Class, Allocated Storage Size, Availability Zone. Metrics: Response (Status); Disk I/O (ReadIOPS, WriteIOPS, ReadLatency, WriteLatency, ReadThroughput, WriteThroughput); DB Utilization (BinLogDiskUsage, CPUUtilization, DatabaseConnections, FreeableMemory, ReplicaLag, SwapUsage)

    Custom Home Pages

    As mentioned above, we have custom home pages for these target types that include basic configuration information, last 24 hours availability, top metrics and the incidents generated. (Screenshots of the EBS, EC2 and RDS instance home pages omitted.)

    Further Reading:
    1) AWS Plugin download
    2) Installation and Read Me
    3) Screenwatch on SlideShare
    4) Extensibility Programmer's Guide
    5) Amazon Web Services

    Read the article

  • Conversion of BizTalk Projects to Use the New WCF-SAP Adaptor

    - by Geordie
    We are in the process of upgrading our BizTalk environment from BizTalk 2006 R2 to BizTalk 2010. The SAP adaptor in BizTalk 2010 is an all-new and more powerful WCF-SAP adaptor. When my colleagues tested out the new adaptor, they discovered that the format of the data extracted from SAP was not identical to the old adaptor's. This is not a big deal if the structure of the messages from SAP is simple. In this case we were receiving the delivery and invoice iDocs, and both structures are complex, especially the delivery document. Over the past few years I have tweaked the delivery mapping to remove bugs from the original mapping. The idea of redoing these maps did not appeal and, due to the current workload, was not even an option. I opted for the rather crude alternative of pulling in the iDoc in the new typed format and then adding a static map at the start of the orchestration to convert the data to the old schema.

    Note the WCF-SAP data formats (the 'ReceiveIdocFormat' field on the binding tab of the configuration dialog box):
    • Typed: returns an XML document with the hierarchy represented in XML and all fields represented by XML tags.
    • RFC: returns an XML document with the hierarchy represented in XML but the iDoc lines in flat file format.
    • String: returns the iDoc in a format that is closest to the original flat file format but is still wrapped with some top level XML tags. The files also contained some strange characters at the end of each line.

    I started with the invoice document, and it was quite straightforward to add the mapping, but this is where my problems started. The orchestrations for these documents are dynamic and so require the identity of the partner to correctly configure the orchestration. The partner identity is in the EDI_DC40 segment of the iDoc. In the old project the RECPRN node of the segment was promoted. The code to set a variable to the partner ID was now failing. After a lot of head scratching I discovered the problem was due to the addition of namespaces to the fields in the EDI_DC40 segment. To overcome this I needed to use an XPath query with a Namespace Manager. This had to be done in custom code. I now tried to repeat the process with the delivery document. Unfortunately, when we tried to get sample typed data from SAP, an exception was thrown:

        The adapter "WCF-SAP" raised an error message. Details "Microsoft.ServiceModel.Channels.Common.XmlReaderGenerationException: The segment or group definition E2EDKA1001 was not found in the IDoc metadata. The UniqueId of the IDoc type is: IDOCTYP/3/DESADV01/ZASNEXT1/640. For Receive operations, the SAP adapter does not support unreleased segments.

    Our guess is that when the WCF-SAP adaptor tries to download the data it retrieves a data schema from SAP, and for some reason the schema does not match the data. This may be due to the version of SAP we are running or due to a customization. Either way, resolving this problem did not look easy. While researching this problem I found an article showing me how to get the data from SAP using the WCF-SAP adaptor without any XML tags: http://blogs.msdn.com/b/adapters/archive/2007/10/05/receiving-idocs-getting-the-raw-idoc-data.aspx

    Reproduction of Mustansir's blog: Since the WCF based SAP Adapter is ... well, WCF based, all data flowing in and out of the adapter is encapsulated within a SOAP message. Which means there are those pesky xml tags all over the place.
    If you want to receive an Idoc from SAP, you can receive it in "Typed" format (in which case each column in each segment of the idoc appears within its own xml tag), or you can receive it in "String" format (in which case there are just 2 xml tags at the top, the raw xml data in string/flat file format, and the 2 closing xml tags). In "String" format, an incoming idoc (for ORDERS05, containing 5 data records) would look like:

        <ReceiveIdoc ><idocData>EDI_DC40 8000000000001064985620 E2EDK01005 800000000000106498500000100000001 E2EDK14 8000000000001064985000002000000020111000 E2EDK14 8000000000001064985000003000000020081000 E2EDK14 80000000000010649850000040000000200710 E2EDK14 80000000000010649850000050000000200600</idocData></ReceiveIdoc>

    (I have trimmed part of the control record so that it fits cleanly here on one line.) Now, you're only interested in the IDOC data, and don't care much for the XML tags. It isn't that difficult to write your own pipeline component, or even some logic in the orchestration, to remove the tags, right? Well, you don't need to write any extra code at all - the WCF adapter can help you here! During the configuration of your one-way Receive Location using WCF-Custom, navigate to the Messages tab. Under the section "Inbound BizTalk Message Body", select the "Path" radio button, and:
    (a) Enter the body path expression as:

        /*[local-name()='ReceiveIdoc']/*[local-name()='idocData']

    (b) Choose "String" for the Node Encoding.

    What we've done is used an XPATH to pull out the value of the "idocData" node from the XML. Your Receive Location will now emit text containing only the idoc data. You can at this point, for example, put in the Flat File Pipeline component to convert the flat text into a different xml format based on some other schema you already have, and receive your version of the xml formatted message in your orchestration.

    This was potentially a much easier solution than adding the static maps to the orchestrations, and it overcame the issue with ‘Typed’ delivery documents. Not quite so fast…

    Note: When I followed Mustansir’s blog, the characters at the end of each line disappeared. After configuring the adaptor and passing the iDoc data into the original flat file receive pipelines, I was receiving exceptions:

        There was a failure executing the receive pipeline: "PAPINETPipelines.DeliveryFlatFileReceive, CustomerIntegration2.PAPINET.Pipelines, Version=1.0.0.0, Culture=neutral, PublicKeyToken=4ca3635fbf092bbb" Source: "Pipeline " Receive Port: "recSAP_Delivery" URI: "D:\CustomerIntegration2\SAP\Delivery\*.xml" Reason: An error occurred when parsing the incoming document: "Unexpected data found while looking for: 'Z2EDPZ7' The current definition being parsed is E2EDP07GRP. The stream offset where the error occured is 8859. The line number where the error occured is 23. The column where the error occured is 0.".

    Although the new flat file looked the same as the old one, there was a difference: in the original file all lines in the document were exactly 1064 characters long, while in the new file all lines were truncated at the last alphanumeric character. The final piece of the puzzle was to add a custom pipeline component to pad all lines to 1064 characters. This component was added to the decode stage of the custom delivery and invoice flat file disassembler pipelines.
    Execute method of the custom pipeline component:

        public IBaseMessage Execute(IPipelineContext pc, IBaseMessage inmsg)
        {
            // Convert the stream to a string
            Stream s = null;
            IBaseMessagePart bodyPart = inmsg.BodyPart;

            // NOTE: inmsg.BodyPart.Data is implemented only as a setter in the http
            // adapter API (and as a getter and setter for the file adapter), so use
            // GetOriginalDataStream to get the data instead.
            if (bodyPart != null)
                s = bodyPart.GetOriginalDataStream();

            string newMsg = string.Empty;
            string strLine;
            try
            {
                StreamReader sr = new StreamReader(s);
                strLine = sr.ReadLine();
                while (strLine != null)
                {
                    // Pad every line out to the fixed 1064-character record length
                    strLine = strLine.PadRight(1064, ' ') + "\r\n";
                    newMsg += strLine;
                    strLine = sr.ReadLine();
                }
                sr.Close();
            }
            catch (IOException)
            {
                throw new Exception("Error occurred trying to pad the message to 1064 characters");
            }

            // Convert back to a stream and assign it to the Data property
            inmsg.BodyPart.Data = new MemoryStream(Encoding.UTF8.GetBytes(newMsg));
            // Reset the position of the stream to zero
            inmsg.BodyPart.Data.Position = 0;
            return inmsg;
        }

    Read the article

  • Partner Blog Series: PwC Perspectives - "Is It Time for an Upgrade?"

    - by Tanu Sood
    Is your organization debating their next step with regard to Identity Management? While all the stakeholders are well aware that the one-size-fits-all doesn’t apply to identity management, just as true is the fact that no two identity management implementations are alike. Oracle’s recent release of Identity Governance Suite 11g Release 2 has innovative features such as a customizable user interface, shopping cart style request catalog and more. However, only a close look at the use cases can help you determine if and when an upgrade to the latest R2 release makes sense for your organization. This post will describe a few of the situations that PwC has helped our clients work through. “Should I be considering an upgrade?” If your organization has an existing identity management implementation, the questions below are a good start to assessing your current solution to see if you need to begin planning for an upgrade: Does the current solution scale and meet your projected identity management needs? Does the current solution have a customer-friendly user interface? Are you completely meeting your compliance objectives? Are you still using spreadsheets? Does the current solution have the features you need? Is your total cost of ownership in line with well-performing similar sized companies in your industry? Can your organization support your existing Identity solution? Is your current product based solution well positioned to support your organization's tactical and strategic direction? Existing Oracle IDM Customers: Several existing Oracle clients are looking to move to R2 in 2013. If your organization is on Sun Identity Manager (SIM) or Oracle Identity Manager (OIM) and if your current assessment suggests that you need to upgrade, you should strongly consider OIM 11gR2. Oracle provides upgrade paths to Oracle Identity Manager 11gR2 from SIM 7.x / 8.x as well as Oracle Identity Manager 10g / 11gR1. The following are some of the considerations for migration: Check the end of product support (for Sun or legacy OIM) schedule There are several new features available in R2 (including common Helpdesk scenarios, profiling of disconnected applications, increased scalability, custom connectors, browser-based UI configurations, portability of configurations during future upgrades, etc) Cost of ownership (for SIM customers)\ Customizations that need to be maintained during the upgrade Time/Cost to migrate now vs. waiting for next version If you are already on an older version of Oracle Identity Manager and actively maintaining your support contract with Oracle, you might be eligible for a free upgrade to OIM 11gR2. Check with your Oracle sales rep for more details. Existing IDM infrastructure in place: In the past year and half, we have seen a surge in IDM upgrades from non-Oracle infrastructure to Oracle. If your organization is looking to improve the end-user experience related to identity management functions, the shopping cart style access request model and browser based personalization features may come in handy. Additionally, organizations that have a large number of applications that include ecommerce, LDAP stores, databases, UNIX systems, mainframes as well as a high frequency of user identity changes and access requests will value the high scalability of the OIM reconciliation and provisioning engine. Furthermore, we have seen our clients like OIM's out of the box (OOB) support for multiple authoritative sources. 
For organizations looking to integrate applications that do not have an exposed API, the Generic Technology Connector framework supported by OIM helps quickly generate a custom connector using the OOB wizard. Similarly, organizations that need not only flexible on-boarding of disconnected applications but also strict access management to these applications through approval flows will find the flexible disconnected application profiling feature an extremely useful, time-saving tool. Organizations looking to develop custom connectors for home-grown or industry-specific applications will likewise find that the Identity Connector Framework support in OIM allows them to build and test a custom connector independently before integrating it with OIM. Lastly, most of our clients considering an upgrade to OIM 11gR2 have also expressed interest in the browser-based configuration feature, which allows an administrator to quickly customize the user interface without adding any custom code. Better yet, any code customizations made to the product are portable across future upgrades, which most of our clients view as a big time and money saver.

Below are some upgrade methodologies we adopt based on client priorities and the scale of the implementation. For illustration purposes, we have assumed that the client is currently on Oracle Waveset (formerly Sun Identity Manager).

Integrated deployment: Typically used when a client wants to split the implementation so that the current IDM platform continues to handle the front-end workflows while OIM takes over the back-office operations incrementally. Once all the back-office operations have moved to OIM, the front-end workflows are migrated to OIM as well.

Parallel deployment: Typically used when a distinct line can be drawn between the functionality each platform supports. For example, the current IDM implementation handles the password reset functionality while OIM takes over the access provisioning and RBAC functions.

Cutover deployment: Typically recommended when a client has a smaller, less complex implementation and it makes sense to leverage the migration tools to move over immediately.

What does this mean for YOU? There are many variables to consider when making upgrade decisions. For most customers, there is no "easy" button. Organizations looking to upgrade, or considering a new vendor, should start by mapping their requirements to product features. The recommended approach is to take stock of both the short-term and long-term objectives; understand the product features, future roadmap, maturity and level of commitment from R&D; and build the implementation plan accordingly. As we said at the beginning, there is no one-size-fits-all with Identity Management. So arm yourself with knowledge, engage in industry discussions, bring in your business stakeholders and start building your implementation roadmap. In the next post we will discuss best practices for R2 implementations, covering the do's and don'ts and sharing our thoughts on making implementations successful.

Meet the writers:

Dharma Padala is a Director in the Advisory Security practice within PwC. He has been implementing medium to large scale Identity Management solutions across multiple industries, including the utility, health care, entertainment, retail and financial sectors.
Dharma has 14 years of experience delivering IT solutions, of which the past 8 have been spent implementing Identity Management solutions.

Scott MacDonald is a Director in the Advisory Security practice within PwC. He has consulted for several clients across multiple industries, including financial services, health care, automotive and retail. Scott has 10 years of experience delivering Identity Management solutions.

John Misczak is a member of the Advisory Security practice within PwC. He has experience implementing multiple Identity and Access Management solutions, specializing in Oracle Identity Manager and the Business Process Execution Language (BPEL).

Praveen Krishna is a Manager in the Advisory Security practice within PwC. Over the last decade Praveen has helped clients plan, architect and implement Oracle identity solutions across diverse industries. His experience spans security topics from network and infrastructure to application and data, and he brings a holistic point of view to problem solving.

Jenny (Xiao) Zhang is a member of the Advisory Security practice within PwC. She has consulted across multiple industries, including financial services, entertainment and retail. Jenny has three years of experience delivering IT solutions, of which the past year and a half has been spent implementing Identity Management solutions.

    Read the article

  • WebSocket Applications using Java: JSR 356 Early Draft Now Available (TOTD #183)

    - by arungupta
WebSocket provides a full-duplex, bi-directional communication protocol over a single TCP connection. JSR 356 is defining a standard API for creating WebSocket applications in the Java EE 7 platform. This Tip Of The Day (TOTD) will provide an introduction to WebSocket and show how the JSR is evolving to support the programming model.

First, a little primer on WebSocket! WebSocket is a combination of the IETF RFC 6455 protocol and the W3C JavaScript API (still a Candidate Recommendation). The protocol defines an opening handshake and data transfer. The API enables Web pages to use the WebSocket protocol for two-way communication with a remote host. Unlike HTTP, there is no need to create a new TCP connection and send a slew of headers for every message exchanged between client and server. The WebSocket protocol defines basic message framing, layered over TCP. Once the initial handshake happens via HTTP Upgrade, the client and server can send messages to each other, each independently of the other. There are no pre-defined message exchange patterns such as request/response or one-way between client and server; these need to be explicitly defined on top of the basic protocol.

The communication between client and server is fairly symmetric, with two differences:

- A client initiates a connection to a server that is listening for a WebSocket request.
- A client connects to one server using a URI, whereas a server may listen to requests from multiple clients on the same URI.

Other than these two differences, the client and server behave symmetrically after the opening handshake; in that sense, they are considered "peers". After a successful handshake, clients and servers transfer data back and forth in conceptual units referred to as "messages". On the wire, a message is composed of one or more frames. Application frames carry payload intended for the application and can contain text or binary data. Control frames carry data intended for protocol-level signaling.

Now let's talk about the JSR! The Java API for WebSocket is being developed as JSR 356 in the Java Community Process and will define a standard API for building WebSocket applications. The JSR will provide support for:

- Creating WebSocket Java components to handle bi-directional WebSocket conversations
- Initiating and intercepting WebSocket events
- Creating and consuming WebSocket text and binary messages
- Defining WebSocket protocols and content models for an application
- Configuring and managing WebSocket sessions (timeouts, retries, cookies, connection pooling)
- Specifying how a WebSocket application will work within the Java EE security model

Tyrus is the Reference Implementation for JSR 356 and is already integrated in the GlassFish 4.0 promoted builds.

And finally, some code! The API allows WebSocket endpoints to be created using annotations or an interface. This TOTD shows a simple sample using annotations; a subsequent blog will show more advanced samples. A POJO can be converted to a WebSocket endpoint by specifying @WebSocketEndpoint and @WebSocketMessage:

@WebSocketEndpoint(path="/hello")
public class HelloBean {

    @WebSocketMessage
    public String sayHello(String name) {
        return "Hello " + name + "!";
    }
}

@WebSocketEndpoint marks this class as a WebSocket endpoint listening at the URI defined by the path attribute. @WebSocketMessage identifies the method that will receive the incoming WebSocket message; its first parameter is injected with the payload of the incoming message.
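To see what this annotation-driven model amounts to, here is a minimal, self-contained sketch of how a container could locate the @WebSocketMessage method and inject the payload. The annotation types below are local stand-ins defined purely for illustration; they are not the draft API's types, and this is not how Tyrus is actually implemented:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class DispatchSketch {

    // Hypothetical stand-ins for the draft API annotations, defined locally
    // so this sketch compiles on its own.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    @interface WebSocketEndpoint { String path(); }

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface WebSocketMessage { }

    @WebSocketEndpoint(path = "/hello")
    public static class HelloBean {
        @WebSocketMessage
        public String sayHello(String name) {
            return "Hello " + name + "!";
        }
    }

    // Toy dispatcher: find the @WebSocketMessage method on the endpoint and
    // invoke it with the message payload as its first (and only) parameter.
    static Object dispatch(Object endpoint, String payload) throws Exception {
        for (Method m : endpoint.getClass().getMethods()) {
            if (m.isAnnotationPresent(WebSocketMessage.class)) {
                return m.invoke(endpoint, payload);
            }
        }
        throw new IllegalStateException("no @WebSocketMessage method found");
    }

    public static void main(String[] args) throws Exception {
        // Simulate a text message arriving at the /hello endpoint.
        System.out.println(dispatch(new HelloBean(), "Duke")); // prints "Hello Duke!"
    }
}

Running the main method simulates a text frame arriving at the endpoint and prints "Hello Duke!"; the real container does the equivalent lookup and injection behind the scenes.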
Returning to the HelloBean example: the payload there is assumed to be text-based. It can also be of type byte[] if the payload is binary, and a custom object may be used if the decoders attribute is specified on @WebSocketEndpoint; this attribute provides a list of classes that define how the custom object is decoded. The method can also take an optional Session parameter, injected by the runtime, which captures the conversation between the two endpoints. The return type of the method can be String, byte[] or a custom object; for a custom object, the encoders attribute on @WebSocketEndpoint needs to define how it is encoded.

The client side is an index.jsp with embedded JavaScript. The JSP body looks like:

<div style="text-align: center;">
    <form action="">
        <input onclick="say_hello()" value="Say Hello" type="button">
        <input id="nameField" name="name" value="WebSocket" type="text"><br>
    </form>
</div>
<div id="output"></div>

The markup is relatively straightforward: an HTML form with a button that invokes the say_hello() method, a text field named nameField, and a div placeholder for displaying the output. Now, let's take a look at the JavaScript code:

<script language="javascript" type="text/javascript">
    var wsUri = "ws://localhost:8080/HelloWebSocket/hello";
    var websocket = new WebSocket(wsUri);
    websocket.onopen = function(evt) { onOpen(evt) };
    websocket.onmessage = function(evt) { onMessage(evt) };
    websocket.onerror = function(evt) { onError(evt) };

    function init() {
        output = document.getElementById("output");
    }

    function say_hello() {
        websocket.send(nameField.value);
        writeToScreen("SENT: " + nameField.value);
    }

This application is deployed as "HelloWebSocket.war" (download here) on GlassFish 4.0 promoted build 57, so the WebSocket endpoint is listening at "ws://localhost:8080/HelloWebSocket/hello". A new WebSocket connection is initiated by specifying the URI to connect to. The JavaScript API defines callback methods that are invoked when the connection is opened (onOpen), closed (onClose), an error is received (onError), or a message from the endpoint arrives (onMessage). The client API has several send methods that transmit data over the connection; this particular script sends text data in the say_hello method using nameField's value from the HTML shown earlier. Each click on the button sends the textbox content to the endpoint over the WebSocket connection and receives a response produced by the sayHello method shown above.

How to test this out?

1. Download the entire source project here or just the WAR file.
2. Download GlassFish 4.0 build 57 or later and unzip it.
3. Start GlassFish with "asadmin start-domain".
4. Deploy the WAR file with "asadmin deploy HelloWebSocket.war".
5. Access the application at http://localhost:8080/HelloWebSocket/index.jsp.

After clicking the "Say Hello" button, the output shows the sent message followed by the endpoint's response.

Here are some references for you:

- WebSocket: the protocol (IETF RFC 6455) and the JavaScript API (W3C)
- JSR 356, Java API for WebSocket: specification (Early Draft) and implementation (already integrated in the GlassFish 4 promoted builds)

Subsequent blogs will discuss the following topics (not necessarily in this order):

- Binary data as payload
- Custom payloads using encoder/decoder (a rough sketch follows below)
- Error handling
- Interface-driven WebSocket endpoints
- Java client API
- Client and server configuration
- Security
- Subprotocols
- Extensions
- Other topics from the API
- Capturing WebSocket on-the-wire messages
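As a preview of the encoder/decoder topic, here is a minimal sketch of what a custom text payload round-trip could look like. The TextDecoder/TextEncoder interfaces below are local stand-ins modeled loosely on the decoder/encoder pattern the draft describes; they are not the actual JSR 356 types, whose package and signatures may still change:

public class CodecSketch {

    // Hypothetical stand-ins for the spec's decoder/encoder contracts.
    interface TextDecoder<T> {
        boolean willDecode(String s); // can this string be decoded into T?
        T decode(String s);
    }

    interface TextEncoder<T> {
        String encode(T t);
    }

    // A simple payload object carried as "name:count" on the wire.
    static class Greeting {
        final String name;
        final int count;
        Greeting(String name, int count) { this.name = name; this.count = count; }
    }

    static class GreetingCodec implements TextDecoder<Greeting>, TextEncoder<Greeting> {
        @Override
        public boolean willDecode(String s) {
            return s != null && s.contains(":");
        }

        @Override
        public Greeting decode(String s) {
            String[] parts = s.split(":", 2);
            return new Greeting(parts[0], Integer.parseInt(parts[1]));
        }

        @Override
        public String encode(Greeting g) {
            return g.name + ":" + g.count;
        }
    }

    public static void main(String[] args) {
        GreetingCodec codec = new GreetingCodec();
        String wire = "Duke:3"; // text frame as received from the peer
        if (codec.willDecode(wire)) {
            Greeting g = codec.decode(wire);     // Greeting("Duke", 3)
            System.out.println(codec.encode(g)); // prints "Duke:3"
        }
    }
}

With the real API, a class playing this role would be listed in the decoders and encoders attributes of @WebSocketEndpoint so that a message-handling method could accept and return Greeting directly.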

    Read the article
