Search Results

Search found 21998 results on 880 pages for 'custom msbuild task'.


  • Django: Paginator + raw SQL query

    - by Silver Light
    Hello! I'm using the Django Paginator everywhere on my website and even wrote a special template tag to make it more convenient. But now I've reached a point where I need to make a complex custom raw SQL query that, without a LIMIT, would return about 100K records. How can I use the Django Paginator with a custom query? Simplified example of my problem. My model:

        class PersonManager(models.Manager):
            def complicated_list(self):
                from django.db import connection
                # The real query is much more complex
                cursor = connection.cursor()
                cursor.execute("""SELECT * FROM `myapp_person`""")
                result_list = []
                for row in cursor.fetchall():
                    result_list.append(row[0])
                return result_list

        class Person(models.Model):
            name = models.CharField(max_length=255)
            surname = models.CharField(max_length=255)
            age = models.IntegerField()
            objects = PersonManager()

    The way I use pagination with the Django ORM:

        all_objects = Person.objects.all()
        paginator = Paginator(all_objects, 10)
        try:
            page = int(request.GET.get('page', '1'))
        except ValueError:
            page = 1
        try:
            persons = paginator.page(page)
        except (EmptyPage, InvalidPage):
            persons = paginator.page(paginator.num_pages)

    This way Django gets very smart and adds a LIMIT to the query when executing it. But when I use the custom manager:

        all_objects = Person.objects.complicated_list()

    all data is selected, and only then is the Python list sliced, which is VERY slow. How can I make my custom manager behave similarly to the built-in one?
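
    A common approach (a sketch, not from the original thread): Paginator only needs count()/len() and slice support from its object_list, so a small wrapper can translate page slices into SQL LIMIT/OFFSET. The RawSqlList class and the COUNT(*) subquery below are illustrative assumptions, not Django APIs:

        from django.db import connection

        class RawSqlList(object):
            """Lazy list over a raw query; Paginator calls count() and slices it."""
            def __init__(self, sql, params=()):
                self.sql = sql
                self.params = params

            def count(self):
                cursor = connection.cursor()
                cursor.execute("SELECT COUNT(*) FROM (%s) AS sub" % self.sql, self.params)
                return cursor.fetchone()[0]

            def __len__(self):
                return self.count()

            def __getitem__(self, key):
                # Paginator slices object_list[bottom:top]; push that into SQL
                offset, limit = key.start, key.stop - key.start
                cursor = connection.cursor()
                cursor.execute("%s LIMIT %d OFFSET %d" % (self.sql, limit, offset), self.params)
                return cursor.fetchall()

        # usage: paginator = Paginator(RawSqlList("SELECT * FROM `myapp_person`"), 10)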

    Read the article

  • I want to run two or more procedures in parallel

    - by binod gyawali
    I have a list of procedures. The procedures are not dependent upon each other, so what I need to do is run the independent procedures in parallel. I have 4 procedures that are to be run in parallel; when they have all run successfully, I need to move on to the next task. These procedures create about 10 tables. The next task is to execute a second set of procedures. I have made a table where I describe the dependency of these procedures on the tables created above. After any one of the first 4 procedures completes, I should come to this second set of procedures and find those whose dependency tables have already been created; any procedure whose dependent tables are complete should then be executed. Running the 4 procedures in parallel is done by DTS, but the difficulty for me is handing off from the first 4 procedures to the second set. Please help me complete this task. Thanks in advance.
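
    One possible shape for the hand-off (a sketch only; the table and column names below are hypothetical, not the poster's schema): keep a dependency table mapping each second-stage procedure to the tables it needs, and have each parallel branch run a dispatcher step that executes whatever has become runnable:

        -- Assumed table: dbo.ProcDependency(ProcName sysname, RequiredTable sysname, Executed bit)
        DECLARE @proc sysname;

        -- Pick a procedure that has not run and whose required tables all exist
        SELECT TOP (1) @proc = d.ProcName
        FROM dbo.ProcDependency d
        GROUP BY d.ProcName
        HAVING MAX(CASE WHEN OBJECT_ID(d.RequiredTable) IS NULL THEN 1 ELSE 0 END) = 0
           AND MAX(CAST(d.Executed AS int)) = 0;

        IF @proc IS NOT NULL
        BEGIN
            EXEC @proc;  -- EXEC accepts a procedure name held in a variable
            UPDATE dbo.ProcDependency SET Executed = 1 WHERE ProcName = @proc;
        END

    In a real DTS/Agent setup the claim-and-update would need locking (or an atomic UPDATE ... OUTPUT) so two branches never run the same procedure.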

    Read the article

  • Silverlight - C# and LINQ - Ordering By Multiple Fields

    - by user70192
    Hello, I have an ObservableCollection of Task objects. Each Task has the following properties: AssignedTo, Category, Title, AssignedDate. I'm giving the user an interface to select which of these fields they want to sort by. In some cases, a user may want to sort by up to three of these properties. How do I dynamically build a LINQ statement that will allow me to sort by the selected fields in either ascending or descending order? Currently, I'm trying the following, but it appears to only sort by the last sort applied:

        var sortedTasks = from task in tasks select task;

        if (userWantsToSortByAssignedTo == true)
        {
            if (sortByAssignedToDescending == true)
                sortedTasks = sortedTasks.OrderByDescending(t => t.AssignedTo);
            else
                sortedTasks = sortedTasks.OrderBy(t => t.AssignedTo);
        }

        if (userWantsToSortByCategory == true)
        {
            if (sortByCategoryDescending == true)
                sortedTasks = sortedTasks.OrderByDescending(t => t.Category);
            else
                sortedTasks = sortedTasks.OrderBy(t => t.Category);
        }

    Is there an elegant way to dynamically append order clauses to a LINQ statement?
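
    For reference, the usual answer here (a sketch reusing the flag and property names from the question): apply the first key with OrderBy/OrderByDescending and every subsequent key with ThenBy/ThenByDescending, so earlier keys are preserved instead of replaced:

        IOrderedEnumerable<Task> sorted = null;

        if (userWantsToSortByAssignedTo)
        {
            sorted = sortByAssignedToDescending
                ? tasks.OrderByDescending(t => t.AssignedTo)
                : tasks.OrderBy(t => t.AssignedTo);
        }

        if (userWantsToSortByCategory)
        {
            if (sorted == null)
                sorted = sortByCategoryDescending
                    ? tasks.OrderByDescending(t => t.Category)
                    : tasks.OrderBy(t => t.Category);
            else
                sorted = sortByCategoryDescending
                    ? sorted.ThenByDescending(t => t.Category)   // secondary key, primary preserved
                    : sorted.ThenBy(t => t.Category);
        }

        IEnumerable<Task> result = tasks;
        if (sorted != null)
            result = sorted;  // fall back to the unsorted list if no sort was requested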

    Read the article

  • Worklight console app, update

    - by jarkko
    We're using Worklight 6.1.0.0 / WebSphere 8.0.0.2 (ND/AIX). This seemed pretty close to my question too, but for version 6.0. I've successfully done an uninstall/install of our Worklight console WAR package. However, there is some extra work in re-deploying adapters and such, so I was looking for a way to just update the console. Among the Ant tasks there is a target 'minimal-update', which sounds like what I'm looking for (is it?). However, when all the other pieces fell into place, I got an error mapping the datasources:

        ADMA0007E: A validation error occurred in task Mapping resource references to resources. The Java Naming and Directory Interface (JNDI) name is not specified for resource reference jdbc/WorklightDS in module Worklight with EJB name .

    The contents of the 'minimal-update' task are pretty much the same as for 'install'. I tried running it as an update from the WebSphere admin console (but I should use the Ant task - right?), which gave me a wizard screen to map jdbc/WorklightDS from the package to jdbc/WorklightDS on the server. This left me wondering how I could express that mapping when using the Ant task.

    Read the article

  • WPF: Command routing for keyboard shortcuts

    - by Sprotty
    Basically I want to create a keyboard shortcut which is valid within the scope of a window, and not just enabled when focus is within the control that binds it. In more detail: I have a window which has 3 controls - a toolbar, a textbox, and a custom control. The toolbar has a button bound to the command CustomCommands.CmdA and linked to Ctrl-T. My custom control can process CmdA. When I run the app and click on my custom control, CmdA is enabled and works fine, and Ctrl-T also causes the command to fire. However, when I select the text box, my custom command CmdA becomes disabled. I can rectify this by setting the command target for CmdA's button: now when I select the textbox, CmdA is still enabled, but the keyboard shortcut Ctrl-T does nothing. Is there an easy way to change the scope of keyboard shortcuts? Or do I need to catch the keypress somewhere lower down, work out which command it relates to, and route it myself (if so, is there a framework within which to do this?) Many thanks, Simon
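
    For what it's worth, one window-scoped approach (a sketch; InputBinding only exposes CommandTarget from .NET 4 on, and myCustomControl is an assumed field name): register the KeyBinding on the window itself and point its CommandTarget at the custom control, so Ctrl-T routes there regardless of focus:

        // In the Window's constructor, after InitializeComponent()
        var binding = new KeyBinding(CustomCommands.CmdA, Key.T, ModifierKeys.Control);
        binding.CommandTarget = myCustomControl; // route the command here even when the textbox has focus
        this.InputBindings.Add(binding);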

    Read the article

  • MSI install sequence - run DB scripts before services start

    - by marc_s
    Folks, we're running into some sequencing trouble with our MSI install. As part of our app, we install a bunch of services and allow the user to pick whether to start them right away or later. When they start right away, they seem to start too early in the install sequence - before our database manager has had a chance to update the database. Right now, our custom action to run the database updater looks like this - it's being run after "InstallFinalize" - very late in the process:

        <InstallExecuteSequence>
            <RemoveExistingProducts After='InstallInitialize' />
            <Custom Action='RunDbUpdateManagerAction' After='InstallFinalize'>
                &DbUpdateManager=3
            </Custom>
        </InstallExecuteSequence>

    What would be the more appropriate step to run after or before, to make sure the DB scripts are executed before any of the installed services start up? Is there a "BeforeServiceStart" step?

    EDIT: Just defining the Before='StartServices' attribute on the <Custom> tag didn't solve my problem. I am assuming the issue is this: the custom action has an "inner text", which represents a condition, and this condition is: &DbUpdateManager=3. From what I can deduce from trial & error, this probably means "the DbUpdateManager feature must be published". Now, trouble is: "PublishFeature" comes way at the end of the install sequence, just before "InstallFinalize", and definitely AFTER InstallServices / StartServices. So when I specify the Before='StartServices' requirement, the condition "DbUpdateManager feature must be published" isn't true yet, so DbUpdateManager doesn't get executed :-( I tried removing the condition - in that case, my DbUpdateManager sometimes doesn't execute at all, sometimes more than once - no real clear pattern as to what happens when. Any more ideas? Is there a way I could check for a condition "the DbUpdateManager feature is installed" which would be true after the "InstallFiles" step? Marc
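
    One commonly suggested arrangement (a hedged sketch, not a verified answer to this thread): as I understand it, the &DbUpdateManager=3 condition tests the feature's planned action state (3 = install local), which is resolved during costing rather than by PublishFeature, so it should already be usable before StartServices:

        <InstallExecuteSequence>
            <RemoveExistingProducts After='InstallInitialize' />
            <!-- Sketch: run the DB updater after files are on disk but before services start.
                 Note the & must be escaped as &amp; in actual .wxs source. -->
            <Custom Action='RunDbUpdateManagerAction' Before='StartServices'>
                &amp;DbUpdateManager=3
            </Custom>
        </InstallExecuteSequence>

    Since the action changes machine state at that point in the script, it may also need to be marked deferred so it runs with the installer's elevated rights.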

    Read the article

  • java.lang.IllegalArgumentException: View not attached to window manager

    - by alex2k8
    I have an activity that starts an AsyncTask and shows a progress dialog for the duration of the operation. The activity is declared NOT to be recreated by rotation or keyboard slide:

        <activity android:name=".MyActivity"
                  android:label="@string/app_name"
                  android:configChanges="keyboardHidden|orientation">
            <intent-filter>
            </intent-filter>
        </activity>

    Once the task completes, I dismiss the dialog, but on some phones (framework: 1.5, 1.6) this error is thrown:

        java.lang.IllegalArgumentException: View not attached to window manager
        at android.view.WindowManagerImpl.findViewLocked(WindowManagerImpl.java:356)
        at android.view.WindowManagerImpl.removeView(WindowManagerImpl.java:201)
        at android.view.Window$LocalWindowManager.removeView(Window.java:400)
        at android.app.Dialog.dismissDialog(Dialog.java:268)
        at android.app.Dialog.access$000(Dialog.java:69)
        at android.app.Dialog$1.run(Dialog.java:103)
        at android.app.Dialog.dismiss(Dialog.java:252)
        at xxx.onPostExecute(xxx$1.java:xxx)

    My code is:

        final Dialog dialog = new AlertDialog.Builder(context)
            .setTitle("Processing...")
            .setCancelable(true)
            .create();

        final AsyncTask<MyParams, Object, MyResult> task = new AsyncTask<MyParams, Object, MyResult>() {
            @Override
            protected MyResult doInBackground(MyParams... params) {
                // Long operation goes here
            }

            @Override
            protected void onPostExecute(MyResult result) {
                dialog.dismiss();
                onCompletion(result);
            }
        };
        task.execute(...);

        dialog.setOnCancelListener(new OnCancelListener() {
            @Override
            public void onCancel(DialogInterface arg0) {
                task.cancel(false);
            }
        });
        dialog.show();

    From what I have read (http://bend-ing.blogspot.com/2008/11/properly-handle-progress-dialog-in.html) and seen in the Android sources, it looks like the only possible situation to get that exception is when the activity was destroyed. But as I have mentioned, I forbid activity recreation for the basic events. So any suggestions are very appreciated.
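
    A common mitigation (a sketch, not a confirmed root-cause fix): guard the dismiss() so it is skipped when the dialog is no longer attached or the activity is going away. MyActivity.this is the assumed enclosing activity:

        @Override
        protected void onPostExecute(MyResult result) {
            // Only dismiss if the dialog is still up and the activity is still alive
            if (dialog.isShowing() && !MyActivity.this.isFinishing()) {
                dialog.dismiss();
            }
            onCompletion(result);
        }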

    Read the article

  • JSTL <c:out> where the element name contains a space character...

    - by Shane
    I have an array of values being made available, but unfortunately some of the variable names include a space. I cannot work out how to simply output these in the page. I know I'm not explaining this well (I'm the JSP designer, not the Java coder), so hopefully this example will illustrate what I'm trying to do:

        <c:out value="${x}"/>

    outputs to the page (artificially wrapped) as:

        {width=96.0, orderedheight=160.0, instructions=TEST ONLY. This is a test., productId=10132, publication type=ns, name=John}

    I can output the name by using <c:out value="${x.name}"/> no problems. The issue is when I try to get the "publication type"... because it has a space, I can't seem to get <c:out> to display it. I have tried:

        <!-- error parsing custom action attribute: -->
        <c:out value="${x.publication type}"/>

        <!-- error occurred while evaluating custom action attribute: -->
        <c:out value="${x.publication+type}"/>

        <!-- error occurred while parsing custom action attribute: -->
        <c:out value="${x.'publication type'}"/>

        <!-- error occurred while parsing custom action attribute: -->
        <c:out value="${x.publication%20type}"/>

    I know the real solution is to get the variable names formatted correctly (i.e. without spaces), but I can't get the code updated for quite a while. Can this be done? Any help greatly appreciated.
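
    For the record, EL bracket notation handles keys that are not valid Java identifiers, so a map key containing a space can be quoted inside brackets (a small sketch, assuming x is a Map):

        <c:out value="${x['publication type']}"/>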

    Read the article

  • Call event from original thread?

    - by user311883
    Hi all, here is my problem: I have a class containing an object that raises an event, and in that event handler I raise a custom event from my class. But unfortunately the original object raises its event on another thread, and so my event is also raised on that thread. This causes an exception when my custom event handler tries to access controls. Here is a code sample to better understand:

        class MyClass
        {
            // Original object
            private OriginalObject myObject;

            // My event
            public delegate void StatsUpdatedDelegate(object sender, StatsArgs args);
            public event StatsUpdatedDelegate StatsUpdated;

            public MyClass()
            {
                // Original object event
                myObject.AnEvent += new EventHandler(myObject_AnEvent);
            }

            // This handler is called on another thread
            private void myObject_AnEvent(object sender, EventArgs e)
            {
                // Raise my custom event here
                StatsArgs args = new StatsArgs(..........);
                StatsUpdated(this, args);
            }
        }

    So when, on my Windows form, I try to update a control from the StatsUpdated event, I get a cross-thread exception because it was raised on another thread. What I want to do is raise my custom event on the original class's thread, so controls can be used within it. Can anyone help me?
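
    One standard pattern (a sketch, not the poster's code): capture the UI thread's SynchronizationContext when MyClass is constructed on that thread, then Post the event back through it:

        private readonly SynchronizationContext uiContext;

        public MyClass()
        {
            // Assumes MyClass is constructed on the UI thread
            uiContext = SynchronizationContext.Current;
            myObject.AnEvent += new EventHandler(myObject_AnEvent);
        }

        private void myObject_AnEvent(object sender, EventArgs e)
        {
            StatsArgs args = new StatsArgs(/* ... */);
            StatsUpdatedDelegate handler = StatsUpdated;
            if (handler == null)
                return;
            if (uiContext != null)
                uiContext.Post(state => handler(this, args), null);  // marshal to the UI thread
            else
                handler(this, args);
        }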

    Read the article

  • What is the standard way to bundle OSGi dependent libraries?

    - by Chris
    Hi, I have a project that references a number of open source libraries, some new, some not so new. That said, they are all stable and I wish to stick with my chosen versions until I have time to migrate to the newer versions (I tested hsqldb 2.0 yesterday and it contains many API changes). One of the libraries I wish to embed is Jasper Reports, but as you all surely know, it comes with a mountain of supporting jar files and I only need a (known) subset of the mountain, therefore I am planning to custom-bundle all of my dependent libraries. So:

        1. Does everyone custom-make their own OSGi bundles for the open-source libraries they are using, or is there a master source of OSGi versions of common libraries?

        2. Also, I was thinking that it would be far simpler for each of my bundles simply to embed their dependent jars within the bundle itself. Is this possible?

        3. If I choose to embed the 3rd party FOC libraries within a bundle, I assume I will need to produce 2 jar files: one without the embedded libraries (for libraries to be loaded via the classpath via the standard classloader), and one OSGi version that includes the embedded library. Should I therefore choose a bundle name like this: <<myprojectname>>-<<subproject>>-osgi-1.0.0.jar?

        4. If I cannot embed the open source libraries and choose to custom-bundle them (via bnd), should I choose a unique bundle name to avoid conflict with a possible official bundle? e.g. <<myprojectname>>-<<3rdpartylibname>>-<<3rdpartylibversion>>.jar?

        5. My non-OSGi-enabled project currently scans for custom plugins via scanning the META-INF folders in my various plugin jars via Service.providers(...). If I go OSGi, will this mechanism still work?
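
    On question 2 (embedding jars): OSGi supports this via the Bundle-ClassPath manifest header. A minimal sketch, assuming the dependent jars are packaged under a lib/ folder inside the bundle (names illustrative):

        Bundle-SymbolicName: com.example.myproject
        Bundle-Version: 1.0.0
        Bundle-ClassPath: .,lib/jasperreports.jar,lib/commons-collections.jar

    With bnd, the Include-Resource instruction can copy the jars into the bundle and Bundle-ClassPath makes them visible to the bundle's classloader.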

    Read the article

  • How to programmatically compare permissions of a login/user in SQL Server 2005

    - by titanium
    There's a login/user in SQL Server who is having a problem importing accounts on the production server. I don't know what method he is using. According to the one importing, the import works fine on the development server, but when he did the same import in production it gave him errors. Below are the errors he gets for each account:

        2009-06-05 18:01:05.8254 ERROR [engine-1038] Task [1038:00001 - Members]: Step 1.0 [<Insert step description>]: Task.RunStep(): StoreRow has failed
        2009-06-05 18:01:05.9035 ERROR [engine-1038] Task [1038:00001 - Members]: Step 1.0 [<Insert step description>]: Task.RunStep(): StoreRow exception: Exception caught while storing Data. [Microsoft][ODBC SQL Server Driver][SQL Server]'ACCOUNT1' is not a valid login or you do not have permission.

    Please note that 'ACCOUNT1' is not the real account name; I changed it for security reasons. Using SQL Server Management Studio (SSMS), I viewed the permissions of the login/user performing the import on the development server and on production for comparison, and found no difference. My question is: Is there a way to programmatically query the server-level and database-level permissions of a particular login/user so I can compare/contrast for any differences?
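
    One way to get a diffable permission listing on SQL Server 2005 (a sketch - run it on each server, substitute the real login name, and compare the output):

        -- Server-level permissions for the login
        SELECT pr.name, pe.class_desc, pe.permission_name, pe.state_desc
        FROM sys.server_permissions pe
        JOIN sys.server_principals pr ON pe.grantee_principal_id = pr.principal_id
        WHERE pr.name = 'ACCOUNT1';

        -- Database-level permissions for the mapped user (run inside the relevant database)
        SELECT pr.name, pe.class_desc, pe.permission_name, pe.state_desc
        FROM sys.database_permissions pe
        JOIN sys.database_principals pr ON pe.grantee_principal_id = pr.principal_id
        WHERE pr.name = 'ACCOUNT1';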

    Read the article

  • Wait between tasks with SingleThreadExecutor

    - by Lord.Quackstar
    I am trying to (simply) make a blocking task queue, where when a task is submitted the method waits until it has finished executing. The hard part, though, is the wait. Here's my 12:30 AM code that I think is overkill:

        public void sendMsg(final BotMessage msg) {
            try {
                Future task;
                synchronized(msgQueue) {
                    task = msgQueue.submit(new Runnable() {
                        public void run() {
                            sendRawLine("PRIVMSG " + msg.channel + " :" + msg.message);
                        }
                    });
                    // Add a separate wait so the next Runnable doesn't get executed yet,
                    // but the one above unblocks
                    msgQueue.submit(new Runnable() {
                        public void run() {
                            try {
                                Thread.sleep(Controller.msgWait);
                            } catch (InterruptedException e) {
                                log.error("Wait to send message interrupted", e);
                            }
                        }
                    });
                }
                // Block until done
                task.get();
            } catch (ExecutionException e) {
                log.error("Couldn't schedule send message to be executed", e);
            } catch (InterruptedException e) {
                log.error("Wait to send message interrupted", e);
            }
        }

    As you can see, there's a lot of extra code there just to make it wait 1.7 seconds between tasks. Is there an easier and cleaner solution out there, or is this it?
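
    For comparison, a simpler shape (a sketch): since a single-threaded executor runs tasks in submission order, the throttle delay can live at the end of the same Runnable, so no second dummy task or synchronized block is needed:

        public void sendMsg(final BotMessage msg) {
            Future<?> task = msgQueue.submit(new Runnable() {
                public void run() {
                    sendRawLine("PRIVMSG " + msg.channel + " :" + msg.message);
                    try {
                        Thread.sleep(Controller.msgWait); // throttle before the next queued task runs
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
            try {
                task.get(); // block the caller until this message has been sent
            } catch (ExecutionException e) {
                log.error("Couldn't send message", e);
            } catch (InterruptedException e) {
                log.error("Wait to send message interrupted", e);
            }
        }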

    Read the article

  • iPhone: Same Rows Repeated in Each Section of Grouped UITableView

    - by Rank Beginner
    I have an app that is a list of tasks, like a to-do list. The user configures the tasks and they go into the SQLite db. The list is displayed in a tableview. The SQL table in question consists of: taskid int, groupname varchar, taskname varchar, lastcompleted datetime, nextdue datetime, weighting integer. I currently have it working by creating an array from each column in the SQL table. In the tableView:cellForRowAtIndexPath: method, I create the controls for each task by binding their values to the array for each column. I want to add configurable task groups that should display as the section titles. I got the task groups to display as the section headers. My problem is that all the task rows are repeated in each group under each header. How do I get the correct rows to show up only under the correct section? I'm really new to development, period, and took on a hobby of trying to teach myself how to develop iPhone apps. So, pretty please, be a little more detailed than you normally would with a professional developer. :)
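
    The usual fix (a sketch; groupNames and tasksByGroup are assumed properties, not from the post): store one array of tasks per group and index both by the section, so each section only reports and renders its own rows:

        // groupNames: NSArray of section titles; tasksByGroup: NSDictionary of NSArrays keyed by group name
        - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
            NSString *group = [self.groupNames objectAtIndex:section];
            return [[self.tasksByGroup objectForKey:group] count];
        }

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"TaskCell"];
            if (cell == nil) {
                // pre-ARC style, matching the era of the question
                cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault
                                               reuseIdentifier:@"TaskCell"] autorelease];
            }
            NSString *group = [self.groupNames objectAtIndex:indexPath.section];
            NSDictionary *task = [[self.tasksByGroup objectForKey:group] objectAtIndex:indexPath.row];
            cell.textLabel.text = [task objectForKey:@"taskname"]; // assumed key
            return cell;
        }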

    Read the article

  • Persisting object changes from child form to parent form based on button press.

    - by Shyran
    I have created a form that is used for both adding and editing a custom object. Which mode the form takes is provided by an enum value passed from the calling code. I also pass in an object of the custom type. All of my controls are data bound to the specific properties of the custom object. When the form is in Add mode, this works great: as the controls are updated with data, the underlying object is as well. However, in Edit mode, I keep two variables of the custom object supplied by the calling code: the original, and a temporary one made through deep copying. The controls are then bound to the temporary copy; this makes it easy to discard the changes if the user clicks the Cancel button. What I want to know is how to persist those changes back to the original object if the user clicks the OK button, since there is now a disconnect because of the deep copying. I am trying to avoid implementing an internal property on the Add/Edit form if I can. Below is an example of my code:

        public AddEditCustomerDialog(Customer customer, DialogMode mode)
        {
            InitializeComponent();
            InitializeCustomer(customer, mode);
        }

        private void InitializeCustomer(Customer customer, DialogMode mode)
        {
            this.customer = customer;
            if (mode == DialogMode.Edit)
            {
                this.Text = "Edit Customer";
                this.tempCustomer = ObjectCopyHelper.DeepCopy(this.customer);
                this.customerListBindingSource.DataSource = this.tempCustomer;
                this.phoneListBindingSource.DataSource = this.tempCustomer.PhoneList;
            }
            else
            {
                this.customerListBindingSource.DataSource = this.customer;
                this.phoneListBindingSource.DataSource = this.customer.PhoneList;
            }
        }
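
    One way to close the loop without exposing an internal property (a sketch; the OK handler and the Customer members copied below are illustrative assumptions): on OK, copy the edited values from the temp object back onto the original instance, so every existing reference to it sees the changes:

        private void okButton_Click(object sender, EventArgs e)
        {
            if (this.mode == DialogMode.Edit)
            {
                // Copy edited values back by hand; Name and PhoneList are assumed members
                this.customer.Name = this.tempCustomer.Name;
                this.customer.PhoneList.Clear();
                foreach (Phone phone in this.tempCustomer.PhoneList)
                    this.customer.PhoneList.Add(phone);
            }
            this.DialogResult = DialogResult.OK;
        }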

    Read the article

  • Defining EditText imeOptions when using InputMethodManager.showSoftInput(View, int, ResultReceiver)

    - by TuomasR
    In my application I have a custom view which requires some text input. As the view in itself doesn't contain any actual views (it's a Surface with custom drawing being done), I have a FrameLayout which contains the custom view and, underneath it, an EditText view. When the user does a specific action, the custom view is hidden and the EditText takes over for user input. This works fine, but android:imeOptions seem to be ignored for this view. I'm currently doing this:

        InputMethodManager inputMethodManager = (InputMethodManager) parent.getSystemService(Context.INPUT_METHOD_SERVICE);
        EditText t = (EditText) parent.findViewById(R.id.DummyEditor);
        t.setImeOptions(EditorInfo.IME_ACTION_DONE);
        inputMethodManager.showSoftInput(t, 0, new ResultReceiver(mHandler) {
            @Override
            protected void onReceiveResult(int resultCode, Bundle resultData) {
                // We're done
                System.out.println("Editing done: " + ((EditText) parent.findViewById(R.id.DummyEditor)).getText());
            }
        });

    It seems that setImeOptions(EditorInfo.IME_ACTION_DONE) has no effect. I've also tried adding the option to the layout XML with android:imeOptions="actionDone". No help. Any ideas?
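
    Two things often bite here (a hedged sketch, not a confirmed fix): imeOptions are read when the input connection is (re)started, and they tend to be ignored unless the view also has an explicit inputType. So set both, restart the input connection, then show the keyboard:

        EditText t = (EditText) parent.findViewById(R.id.DummyEditor);
        t.setInputType(InputType.TYPE_CLASS_TEXT); // imeOptions are often ignored without an input type
        t.setImeOptions(EditorInfo.IME_ACTION_DONE);
        inputMethodManager.restartInput(t);        // force the IME to re-read the editor info
        inputMethodManager.showSoftInput(t, 0);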

    Read the article

  • problem with include in PHP

    - by EquinoX
    I have a PHP file called Route.php which is located in /var/www/api/src/fra/custom/Action, and the files I want to include are in /var/www/api/src/fra/custom/. So what I have inside Route.php is an absolute path to the two PHP files I want to include:

        <?php
        include '/var/www/api/src/frapi/custom/myneighborlists.php';
        include '/var/www/api/src/frapi/custom/mynodes.php';
        ............
        ...........
        ?>

    These two files each contain a large array that I want to use in Route.php. When I do var_dump($global) it just returns NULL. What am I missing here?

    UPDATE: I did an echo on the included file and it prints something, so it is indeed included. However, I can't get access to that array... when I do a var_dump on the array, it just returns NULL! I did add a global $myarray inside the function in which I want to access the array from the other PHP file.

    Sample myneighborlists.php:

        <?php
        $myarray = array(
            1 => array(3351 => 179),
            2 => array(3264 => 172, 3471 => 139),
            3 => array(3467 => 226),
            4 => array(3309 => 211, 3469 => 227),
            5 => array(3315 => 364, 3316 => 144, 3469 => 153),
            6 => array(3305 => 273, 3309 => 171),
            7 => array(3267 => 624, 3354 => 465, 3424 => 411, 3437 => 632),
            8 => array(3302 => 655, 3467 => 212),
            9 => array(3305 => 216, 3306 => 148, 3465 => 505),
            10 => array(3271 => 273, 3472 => 254),
            11 => array(3347 => 273, 3468 => 262),
            12 => array(3310 => 237, 3315 => 237));
        ?>
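
    A minimal illustration of the scoping rule involved (a sketch; neighborsOf is a hypothetical helper): a top-level include defines $myarray in the global scope, so inside any function it must be pulled in with global (or read via $GLOBALS), otherwise var_dump shows NULL:

        <?php
        include '/var/www/api/src/frapi/custom/myneighborlists.php';

        function neighborsOf($id) {
            global $myarray;  // without this line, $myarray is NULL inside the function
            return isset($myarray[$id]) ? $myarray[$id] : null;
        }

        var_dump(neighborsOf(1)); // array(1) { [3351] => int(179) }
        ?>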

    Read the article

  • Foreach loop and tasks.

    - by Scott Chamberlain
    I know from the coding guidelines that I have read that you should not do

        for (int i = 0; i < 5; i++)
        {
            Task.Factory.StartNew(() => Console.WriteLine(i));
        }
        Console.ReadLine();

    as it will write five 5s. I understand that, and I think I understand why it is happening. I know the solution is just to do

        for (int i = 0; i < 5; i++)
        {
            int localI = i;
            Task.Factory.StartNew(() => Console.WriteLine(localI));
        }
        Console.ReadLine();

    However, is something like this OK to do?

        foreach (MyClass myClass in myClassList)
        {
            Task.Factory.StartNew(() => myClass.DoAction());
        }
        Console.ReadLine();

    Or do I need to do the same thing I did in the for loop?

        foreach (MyClass myClass in myClassList)
        {
            MyClass localMyClass = myClass;
            Task.Factory.StartNew(() => localMyClass.DoAction());
        }
        Console.ReadLine();

    Read the article

  • UIComponent in SWC

    - by mustISignUp
    In Flash, if I create a custom MovieClip and compile it to a SWC, I can use it in .fla files (by linking to the .swc):

        var mcInstance = new CustomMovieClip();
        addChild(mcInstance);

    All the arrangement of graphics on the custom MovieClip's layers is preserved. If I subclass UIComponent and compile to a SWC, I can use the custom class in my .fla file, but the new instance doesn't seem to construct the children arranged on the layers. I know that the correct way to make a custom component is to have the two frames: the first to specify the bounding box, the second frame for assets, with the first graphic in frame 1 being removed at runtime. But I'm not really trying to make a reusable component - I just want to use the UIComponent class (it seems to have some nice extensions to Sprite). As I really want some hand-positioned layers inside the component, I figured I could have the bounding box as the first element on frame 1 (knowing that it would be removed), but any other items I put on frame 1 would be preserved - buttons, images, lines, etc. Is this possible?

    Read the article

  • How to access property in sub method after it runs?

    - by Warren
    I'm having a hard time working this one out, but I think it should be pretty simple. Basically, I have this method which talks to a web service, and I need to return some data from the inner block - the "authCode". What am I doing wrong? How can I get the authCode out of the manager callback, or can I create a block or something to ensure that the manager block runs first? Am I even using the right words - blocks, sub-methods??? Please help :)

        - (NSString *)getAuthCodeEXAMPLE
        {
            __block NSString *returnString = @"nothing yet!";
            NSURL *baseURL = [NSURL URLWithString:BaseURLString];
            NSDictionary *parametersGetAuthCode = @{@"req": @"getauth"};
            AFHTTPSessionManager *manager = [[AFHTTPSessionManager alloc] initWithBaseURL:baseURL];
            manager.responseSerializer = [AFJSONResponseSerializer serializer];
            [manager POST:APIscript parameters:parametersGetAuthCode success:^(NSURLSessionDataTask *task, id responseObject) {
                if ([task.response isKindOfClass:[NSHTTPURLResponse class]]) {
                    NSHTTPURLResponse *r = (NSHTTPURLResponse *)task.response;
                    if ([r statusCode] == 200) {
                        self.returnedData = responseObject;
                        NSString *authCode = [self.returnedData authcode];
                        NSLog(@"Authcode: %@", authCode);
                        returnString = authCode;
                    }
                }
            } failure:^(NSURLSessionDataTask *task, NSError *error) {
                UIAlertView *alertView = [[UIAlertView alloc] initWithTitle:@"Error"
                                                                    message:[error localizedDescription]
                                                                   delegate:nil
                                                          cancelButtonTitle:@"Ok"
                                                          otherButtonTitles:nil];
                [alertView show];
            }];
            // this currently returns "nothing yet!"
            return returnString;
        }
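
    The usual resolution (a sketch, keeping the AFNetworking calls from the question): the success block runs after the method has already returned, so instead of returning a string, take a completion block and hand the auth code to it. The "authcode" key lookup below is an assumption about the response shape:

        - (void)getAuthCodeWithCompletion:(void (^)(NSString *authCode, NSError *error))completion
        {
            NSURL *baseURL = [NSURL URLWithString:BaseURLString];
            NSDictionary *parameters = @{@"req": @"getauth"};
            AFHTTPSessionManager *manager = [[AFHTTPSessionManager alloc] initWithBaseURL:baseURL];
            manager.responseSerializer = [AFJSONResponseSerializer serializer];
            [manager POST:APIscript parameters:parameters success:^(NSURLSessionDataTask *task, id responseObject) {
                // Hand the result to the caller once it actually exists
                completion([responseObject objectForKey:@"authcode"], nil);
            } failure:^(NSURLSessionDataTask *task, NSError *error) {
                completion(nil, error);
            }];
        }

    The caller then does its work inside the block: [self getAuthCodeWithCompletion:^(NSString *authCode, NSError *error) { /* use authCode here */ }];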

    Read the article

  • Hey, Google: It’s Time to Add Multi-Window Multitasking To Android

    - by Chris Hoffman
    In 2012, Google’s Dianne Hackborn threatened to revoke CyanogenMod’s access to the Android Market if they moved forward with adding “Cornerstone” multitasking to their custom ROM. Samsung has since created their own multi-window multitasking feature. Dianne Hackborn said this “is something that needs to be done at the mainline platform level” so apps wouldn’t break. She was right — Android needs this as a standard feature and it’s time for Google to provide it.

    Doesn’t Android Have Multitasking?

    Android originally stood out from Apple’s iOS with its powerful multitasking. Applications can continue running in the background while you’re using another application. This makes Android powerful — you can even have BitTorrent clients downloading files in the background while using another app. Android still kept the design of a single app on screen at a time. This made a lot of sense when Android only ran on smartphones with small screens. Today, Android runs on everything from smaller smartphones all the way up to huge “phablets” like the Galaxy Note. Android has gone beyond phones and runs on 12-inch tablets, convertibles with keyboard docks, laptops, and even Android desktops. Android isn’t just a phone operating system.

    Samsung’s Multi-Window Isn’t Good Enough

    Samsung has tried to add value to Android by adding a multi-window feature. When you’re using a high-end phone like the Galaxy Note or Galaxy S, or a Galaxy tablet, you have the ability to run certain apps side-by-side with each other. There are big problems here. This only works on Samsung devices, and only on specific Samsung devices. To add support for this feature in a way that doesn’t break other apps, Samsung’s multi-window feature also only works with specific apps. You can’t just run any app in multi-window view, only the apps on the Multi Window bar Samsung provides. This prevents third-party apps from breaking, which is what Google was worried about with CyanogenMod’s Cornerstone feature. A feature that only works with a handful of apps on specific devices from a single manufacturer isn’t good enough. This feature needs to work on every Android device — or at least ones with suitably large screens and powerful enough internals. It needs to be an Android platform feature so application developers can ensure their apps will work properly with it on every device. Android developers shouldn’t have to add support for each manufacturer’s own multi-window feature if other manufacturers decide to copy Samsung.

    Floating Apps Are a Dirty Hack

    Floating apps also enable real multitasking. Remember that Android allows apps to run in the background while you’re using an app in the foreground. These apps can present interfaces that appear floating above the current app — think of it like using “always on top” to make a window always appear over every other app on a desktop operating system. You can install floating apps to browse the web, take notes, chat, and watch videos while using any app. Only apps specifically designed to run as floating apps will work, so you have to seek them out. Floating apps are also awkward to use because they float over the app you’re using, blocking parts of its interface. Microsoft added floating-window support to Skype for Android. You can have a video conversation and the other person’s face will always appear on your screen, even when you leave the Skype app. Microsoft is using more of Android’s multi-window multitasking power than Google is.

    Custom ROMs and Root-Only Tweaks Aren’t Acceptable

    Some custom ROMs are adding this feature to Android. Google threatened to revoke CyanogenMod’s access to the Android Market (now known as Google Play) if they added this feature because it could potentially break third-party apps. Today, other custom ROMs are working on split-screen multitasking. Samsung added their own version to their own devices. You can also get this feature by using a root-only Xposed Framework tweak known as XMultiWindow. If you have root access, you can get multi-window multitasking for any app on your device. This shouldn’t require rooting your device or installing a custom ROM. These third-party solutions often have awkward interfaces and bugs. We need an integrated, supported solution that works the same on every device.

    Why Multi-Window is Important

    Microsoft’s Windows 8.1 stands out among tablet operating systems for its powerful multitasking support, allowing you to view several apps side-by-side at the same time. Apple is also reported to be working on adding side-by-side apps to the iPad with iOS 8. On every competitor’s operating system, you’ll be able to view a web page while you write an email, watch a video while you browse the web, or chat with someone while you do anything else. But Android’s still remained frozen in time. Despite all Android’s underlying power — and despite the way Android allows apps to adapt to different screen sizes — Google is resisting adding this feature. Large-screen Android tablets like the Nexus 10 (remember that tablet Google hasn’t updated in over 18 months?) need this feature. So do huge phones, convertibles, laptops, and Android desktops. If tablets are the future of personal computing, we should be able to do more than one thing at a time on our tablets’ big screens. Microsoft, Samsung, and even Apple are realizing this — now it’s Google’s turn.

    Image Credit: Sergey Galyonkin on Flickr, Karlis Dambrans on Flickr

    Read the article

  • Amazon Web Services (AWS) Plug-in for Oracle Enterprise Manager

    - by Anand Akela
    Contributed by Sunil Kunisetty and Daniel Chan

    Introduction and Architecture

    As more and more enterprises deploy some of their non-critical workload on Amazon Web Services (AWS), it's becoming critical to monitor those public AWS resources alongside their on-premise resources. The recently announced Oracle Enterprise Manager Plug-in for Amazon Web Services (AWS) allows you to achieve that goal. The on-premise Oracle Enterprise Manager (EM12c) acts as a single tool to get a comprehensive view of your public AWS resources as well as your private cloud resources. By deploying the plug-in within your Cloud Control environment, you gain the following management features:

        Monitor EBS, EC2 and RDS instances on Amazon Web Services
        Gather performance metrics and configuration details for AWS instances
        Raise alerts and violations based on thresholds set on monitoring
        Generate reports based on the gathered data

    Users of this Plug-in can leverage the rich Enterprise Manager features such as system promotion, incident generation based on thresholds, integration with 3rd party ticketing applications, etc. AWS monitoring via this Plug-in is enabled via the Amazon CloudWatch API, and users of this Plug-in are responsible for supplying credentials for accessing AWS and the CloudWatch API. This Plug-in can only be deployed on an EM12c R2 platform, and the agent version should be at minimum 12c R2.

    The original post includes a pictorial view of the overall architecture, covering Amazon Elastic Block Store (EBS), Amazon Elastic Compute Cloud (EC2) and Amazon Relational Database Service (RDS).

    Here are a few key features:

        Rich and exhaustive list of metrics
        Metrics can be gathered from an Agent running outside AWS
        Critical configuration information
        Custom home pages with charts and AWS configuration information
        Generate incidents based on thresholds set on monitoring data

    Discovery and Monitoring

    AWS instances can be added to EM12c either via the EM12c User Interface (UI) or the EM12c Command Line Interface (EMCLI) by providing the AWS credentials (Secret Key and Access Key Id) as well as resource-specific properties as target properties. Here is a quick mapping of target types and properties for each AWS resource:

        EBS Resource | target type: Amazon EBS Service | properties: CloudWatch base URI, EC2 base URI, Period, Volume Id, Proxy Server and Port
        EC2 Resource | target type: Amazon EC2 Service | properties: CloudWatch base URI, EC2 base URI, Period, Instance Id, Proxy Server and Port
        RDS Resource | target type: Amazon RDS Service | properties: CloudWatch base URI, RDS base URI, Period, Instance Id, Proxy Server and Port

    Proxy server and port are optional and are only needed if the agent is within the firewall. Here is an emcli example to add an EC2 target. Please read the Installation and Readme guide for more details and step-by-step instructions to deploy the plug-in and add the AWS instances.

        ./emcli add_target \
            -name="<target name>" \
            -type="AmazonEC2Service" \
            -host="<host>" \
            -properties="ProxyHost=<proxy server>;ProxyPort=<proxy port>;EC2_BaseURI=http://ec2.<region>.amazonaws.com;BaseURI=http://monitoring.<region>.amazonaws.com;InstanceId=<EC2 instance Id>;Period=<data point period>" \
            -subseparator=properties="="

        ./emcli set_monitoring_credential \
            -set_name="AWSKeyCredentialSet" \
            -target_name="<target name>" \
            -target_type="AmazonEC2Service" \
            -cred_type="AWSKeyCredential" \
            -attributes="AccessKeyId:<access key id>;SecretKey:<secret key>"

    The emcli utility is found under the ORACLE_HOME of the EM12c install. Once the instance is discovered, the target will show up in the 'All Targets' list under 'Amazon EC2 Service'. Once the instances are added, one can navigate to the custom home pages for these resource types. The custom home pages not only include critical metrics, but also vital configuration parameters and incidents raised for these instances. By mapping the configuration parameters as instance properties, we can slice-and-dice and group various AWS instances by leveraging the EM12c Config search feature. The following configuration properties and metrics are collected for these resource types:

        EBS Resource
            Configuration properties: Volume Id, Volume Type, Device Name, Size, Availability Zone
            Metrics - Response: Status; Utilization: QueueLength, IdleTime; Volume Statistics: ReadBandwidth, WriteBandwidth, ReadThroughput, WriteThroughput; Operation Statistics: ReadSize, WriteSize, ReadLatency, WriteLatency
        EC2 Resource
            Configuration properties: Instance ID, Owner Id, Root Device Type, Instance Type, Availability Zone
            Metrics - Response: Status; CPU Utilization: CPU Utilization; Disk I/O: DiskReadBytes, DiskWriteBytes, DiskReadOps, DiskWriteOps, DiskReadRate, DiskWriteRate, DiskIOThroughput, DiskReadOpsRate, DiskWriteOpsRate, DiskOperationThroughput; Network I/O: NetworkIn, NetworkOut, NetworkInRate, NetworkOutRate, NetworkThroughput
        RDS Resource
            Configuration properties: Instance ID, Database Engine Name, Database Engine Version, Database Instance Class, Allocated Storage Size, Availability Zone
            Metrics - Response: Status; Disk I/O: ReadIOPS, WriteIOPS, ReadLatency, WriteLatency, ReadThroughput, WriteThroughput; DB Utilization: BinLogDiskUsage, CPUUtilization, DatabaseConnections, FreeableMemory, ReplicaLag, SwapUsage

    Custom Home Pages

    As mentioned above, we have custom home pages for these target types that include basic configuration information, last 24 hours availability, top metrics and the incidents generated. The original post shows snapshots of the EBS, EC2 and RDS instance home pages.

    Further Reading:

        1) AWS Plugin download
        2) Installation and Read Me
        3) Screenwatch on SlideShare
        4) Extensibility Programmer's Guide
        5) Amazon Web Services

    Read the article

  • CodePlex Daily Summary for Sunday, May 16, 2010

    CodePlex Daily Summary for Sunday, May 16, 2010

    New Projects

        3D Calculator: 3D Calc is a simple calculator application for Windows Phone 7, the purpose of this project is to demo the 3D animations capabilities of WP7 and sh...
        azaleas: Azaleas
        Blueset Studio Opensource Projects: Only for Opensource projects form Blueset Studio.
        Breck: A Phoenix and Jumper Moneky Production: Breck is a first person non-violent shooter developed in C++ and Dark GDK. After the main game is developed we are looking into making a sequel or...
        Discuz! Forum SDK: This project is use to login in and post or reply topic on discuz forum.
        Dominion.NET: Evolving Dominion source code originally written in VB6 and posted by "jatill" on Collectible Card Game Headquarters. Migration of the design and s...
        EkspSys2010-ITR: A mini project for the course Experimental System devolopment in spring 2010
        Facebook Graph Toolkit: This project is a .Net implementation of the Facebook Graph API. The aim of this project is to be a replacement to the existing Facebook Toolkit (h...
        iFree: This is a solution for Vietnamese network social
        InfoPath Editor for Developer: InfoPath Editor for developer allows user to modify the html text directly inside InfoPath designer or filler and push the change back to InfoPath ...
        iZeit: Run your own online calendar, with blog integration, recurrence, todo list and categories.
        machgos dotNet Tests: Just some little test-projects for learning
        mim: TBA
        MinePost: MinePost is a game made for the first 48 hour Reddit Game Jam.
        Mockina: Mockina is a mock framework. Expression tree syntax is used to specify which members to mock, both public and non-public. The code is easy to under...
        MSBuild Launch Pad (mPad): This is just another shell extension for MSBuild to enable quick execution of MSBuild scripts via Windows Explorer context menu. (C) 2010 Lex Li
        Peacock: A browser like tabbed application
        PrimeCalculation: PrimeCalculation is a .NET app to calculate primes in a given range. Speed on Core2Duo 2,4GHZ: Found all primes from 0 to 1 billion in 35 seconds (...
        Slightly Silverlight: A Framework that leverages Silverlight for processing, business logic but standard HTML for the presentation layer.
        Stopwatch: Stopwatch is a tool for measuring the time. To start and pause stopwatch you only need to press a key on the keyboard. An additional context menu a...
        YAXLib: Yet Another XML Serialization Library for the .NET Framework: YAXLib is an XML Serialization library which helps you structure freely the XML result, choose among private and public fields to be serialized, an...

    New Releases

        Activate Your Glutes: v1.0.3.0: This release is a migration to VS2010, .Net 4, MVC2 and Entity Framework 4. The code has also been considerably cleaned up - taking advantage of E...
        AnyCAD: AnyCAD.Free.ENU.v1.1: http://www.anycad.net Modeling •2D: Line, Rectangle, Arc, Arch, Circle, Spline, Polygon •Feature: Extrude, Loft, Chamfer, Sweep, Revol •Boolean: ...
        Blueset Studio Opensource Projects: 多功能计算器 3.5: 稳定版本。
        Code for Rapid C# Windows Development eBook: LLBLGen LINQPad Data Context Driver Ver 1.0.0.0: First release of a Static LLBLGen Pro Data Context Driver for LINQPad I recommend LINQPad 4 as it seems more stable with this driver than LINQPad 2.
        DSQLT - Dynamic SQL Templates: Release 1.2. Some behaviour has changed!!: Attention. Some behaviour has changed! Now its necessary to use WildCards in the pattern-parameter for DSQLT.AllSourceContains DSQLT.Databases DSQ...
        FDS AutoCAD plug-in: FDS to AutoCAD plug-in: Basic functionality was implemented. Some routines like setting fds executable location are still not automated.
        Feature Builder Guidance Extensions: FBGX 2 - Standalone FX: Background: The Feature Builder Guidance is extensible and displays guidance content supplied by all the Feature Builder Guidance Extensions (FBGX...
        Floe IRC Client: Floe IRC Client 2010-05 R3: - You can now right click on the input box to get options for toggling bold, underline, colors, etc. - The size of the nickname column is now saved...
        Floe IRC Client: Floe IRC Client 2010-05 R4: - A user's channel status now appears next to their nick when they talk (e.g. @Nick or +Nick) - Fixed an error where certain kinds of network probl...
        HD-Trailers.NET Downloader: HD-Trailers.NET Downloader v1.0: Version 1.0 Thanks to Wolfgang for all his help. I let this project languish for too long while focusing on other things, but his involvement has ...
        InfoPath Editor for Developer: InfoPath Editor Beta 1: Intial Release: Can load InfoPath inner html. Can edit InfoPath inner html. InfoPath 2007 only.
        LinkSharp: LinkSharp 0.1.0: First release of LinkSharp. Set up iis, and use the sql script to create a new database.
        PowerAuras: PowerAuras V3.0.0F: This version adds better integration with GTFO New Flags Added PvP flag In 5-Man Instance In Raid Instance In Battleground In Arena
        Rx Contrib: V1.4: Add the ability to catch internal exception and the ability to publish error by queue adapters
        SEO SiteMap: SEO SiteMap RC1: -
        SevenZipLib Library: v9.13.2: Stable release associated with 7z.dll 9.13 beta. Ability to create and update archives not implemente yet.
        Silverlight / WPF Controls: Upload, FlipPanel, DeepZoom, Animation, Encryption: Code Camp Demonstration: This code example demonstrates MVVM/MEF with WPF with attached properties, security and custom ICommand class.
        SQL Data Capture - Black Box Application Testing: SQLDataCapture V1.2: Added Entity Framework Support to CRUD generator (Insert Stored Procedure) and switched to VS 2010 for development.
        Stopwatch: Stopwatch 0.1: Stopwatch Release 0.1
        VCC: Latest build, v2.1.30515.0: Automatic drop of latest build
        Yet another developer blog - Examples: Asynchronous TreeView in ASP.NET MVC: This sample application shows how to use jQuery TreeView plugin for creating an asynchronous TreeView in ASP.NET MVC. This application is accompani...

    Most Popular Projects

        Rawr
        WBFS Manager
        AJAX Control Toolkit
        Microsoft SQL Server Product Samples: Database
        Silverlight Toolkit
        Windows Presentation Foundation (WPF)
        patterns & practices – Enterprise Library
        Microsoft SQL Server Community & Samples
        PHPExcel
        ASP.NET

    Most Active Projects

        patterns & practices – Enterprise Library
        Rawr
        PHPExcel
        BlogEngine.NET
        Microsoft Biology Foundation
        Customer Portal Accelerator for Microsoft Dynamics CRM
        Windows Azure Command-line Tools for PHP Developers
        Mirror Testing System
        N2 CMS
        StyleCop

    Read the article

  • Conversion of BizTalk Projects to Use the New WCF-SAP Adaptor

    - by Geordie
    We are in the process of upgrading our BizTalk environment from BizTalk 2006 R2 to BizTalk 2010. The SAP adaptor in BizTalk 2010 is an all-new and more powerful WCF-SAP adaptor. When my colleagues tested out the new adaptor they discovered that the format of the data extracted from SAP was not identical to the old adaptor. This is not a big deal if the structure of the messages from SAP is simple. In this case we were receiving the delivery and invoice iDocs. Both these structures are complex, especially the delivery document. Over the past few years I have tweaked the delivery mapping to remove bugs from the original mapping. The idea of redoing these maps did not appeal and, due to the current workload, was not even an option. I opted for a rather crude alternative of pulling in the iDoc in the new typed format and then adding a static map at the start of the orchestration to convert the data to the old schema.

    Note the WCF-SAP data formats (on the binding tab of the configuration dialog box this is the 'ReceiveIdocFormat' field):

        Typed: Returns an XML document with the hierarchy represented in XML and all fields represented by XML tags.
        RFC: Returns an XML document with the hierarchy represented in XML but the iDoc lines in flat file format.
        String: Returns the iDoc in a format that is closest to the original flat file format but is still wrapped with some top-level XML tags. The files also contained some strange characters at the end of each line.

    I started with the invoice document and it was quite straightforward to add the mapping, but this is where my problems started. The orchestrations for these documents are dynamic and so require the identity of the partner to be able to correctly configure the orchestration. The partner identity is in the EDI_DC40 segment of the iDoc. In the old project the RECPRN node of the segment was promoted. The code to set a variable to the partner ID was now failing. After a lot of head scratching I discovered the problem was due to the addition of namespaces to the fields in the EDI_DC40 segment. To overcome this I needed to use an xPath query with a Namespace Manager. This had to be done in custom code.

    I now tried to repeat the process with the delivery document. Unfortunately, when we tried to get sample typed data from SAP, an exception was thrown:

        The adapter "WCF-SAP" raised an error message. Details "Microsoft.ServiceModel.Channels.Common.XmlReaderGenerationException: The segment or group definition E2EDKA1001 was not found in the IDoc metadata. The UniqueId of the IDoc type is: IDOCTYP/3/DESADV01/ZASNEXT1/640. For Receive operations, the SAP adapter does not support unreleased segments.

    Our guess is that when the WCF-SAP adaptor tries to download the data it retrieves a data schema from SAP, and for some reason the schema does not match the data. This may be due to the version of SAP we are running or due to a customization. Either way, resolving this problem did not look easy. When doing some research on this problem I found an article showing me how to get the data from SAP using the WCF-SAP adaptor without any XML tags: http://blogs.msdn.com/b/adapters/archive/2007/10/05/receiving-idocs-getting-the-raw-idoc-data.aspx

    Reproduction of Mustansir's blog:

    Since the WCF based SAP Adapter is ... well, WCF based, all data flowing in and out of the adapter is encapsulated within a SOAP message. Which means there are those pesky xml tags all over the place. If you want to receive an Idoc from SAP, you can receive it in "Typed" format (in which case each column in each segment of the idoc appears within its own xml tag), or you can receive it in "String" format (in which case there are just 2 xml tags at the top, the raw xml data in string/flat file format, and the 2 closing xml tags). In "String" format, an incoming idoc (for ORDERS05, containing 5 data records) would look like:

        <ReceiveIdoc><idocData>EDI_DC40 8000000000001064985620 E2EDK01005 800000000000106498500000100000001 E2EDK14 8000000000001064985000002000000020111000 E2EDK14 8000000000001064985000003000000020081000 E2EDK14 80000000000010649850000040000000200710 E2EDK14 80000000000010649850000050000000200600</idocData></ReceiveIdoc>

    (I have trimmed part of the control record so that it fits cleanly here on one line.) Now, you're only interested in the IDOC data, and don't care much for the XML tags. It isn't that difficult to write your own pipeline component, or even some logic in the orchestration to remove the tags, right? Well, you don't need to write any extra code at all - the WCF Adapter can help you here! During the configuration of your one-way Receive Location using WCF-Custom, navigate to the Messages tab. Under the section "Inbound BizTalk Message Body", select the "Path" radio button, and:

        (a) Enter the body path expression as: /*[local-name()='ReceiveIdoc']/*[local-name()='idocData']
        (b) Choose "String" for the Node Encoding.

    What we've done is used an XPATH to pull out the value of the "idocData" node from the XML. Your Receive Location will now emit text containing only the idoc data. You can at this point, for example, put the Flat File Pipeline component to convert the flat text into a different xml format based on some other schema you already have, and receive your version of the xml formatted message in your orchestration.

    This was potentially a much easier solution than adding the static maps to the orchestrations, and it overcame the issue with 'Typed' delivery documents. Not quite so fast...

    Note: When I followed Mustansir's blog, the characters at the end of each line disappeared. After configuring the adaptor and passing the iDoc data into the original flat file receive pipelines, I was receiving exceptions:

        There was a failure executing the receive pipeline: "PAPINETPipelines.DeliveryFlatFileReceive, CustomerIntegration2.PAPINET.Pipelines, Version=1.0.0.0, Culture=neutral, PublicKeyToken=4ca3635fbf092bbb" Source: "Pipeline " Receive Port: "recSAP_Delivery" URI: "D:\CustomerIntegration2\SAP\Delivery\*.xml" Reason: An error occurred when parsing the incoming document: "Unexpected data found while looking for: 'Z2EDPZ7' The current definition being parsed is E2EDP07GRP. The stream offset where the error occured is 8859. The line number where the error occured is 23. The column where the error occured is 0.".

    Although the new flat file looked the same as the old one, there was a difference: in the original file all lines in the document were exactly 1064 characters long, while in the new file all lines were truncated at the last alphanumeric character. The final piece of the puzzle was to add a custom pipeline component to pad all the lines to 1064 characters. This component was added to the decode node of the custom delivery and invoice flat file disassembler pipelines.

    The Execute method of the custom pipeline component:

        public IBaseMessage Execute(IPipelineContext pc, IBaseMessage inmsg)
        {
            // Convert the stream to a string
            Stream s = null;
            IBaseMessagePart bodyPart = inmsg.BodyPart;

            // NOTE inmsg.BodyPart.Data is implemented only as a setter in the http adapter API and a
            // getter and setter for the file adapter. Use GetOriginalDataStream to get data instead.
            if (bodyPart != null)
                s = bodyPart.GetOriginalDataStream();

            string newMsg = string.Empty;
            string strLine;
            try
            {
                StreamReader sr = new StreamReader(s);
                strLine = sr.ReadLine();
                while (strLine != null)
                {
                    // Pad every line back out to 1064 characters and restore the line ending
                    strLine = strLine.PadRight(1064, ' ') + "\r\n";
                    newMsg += strLine;
                    strLine = sr.ReadLine();
                }
                sr.Close();
            }
            catch (IOException ex)
            {
                throw new Exception("Error occurred trying to pad the message to 1064 characters", ex);
            }

            // Convert back to a stream and set it to the Data property
            inmsg.BodyPart.Data = new MemoryStream(Encoding.UTF8.GetBytes(newMsg));
            // Reset the position of the stream to zero
            inmsg.BodyPart.Data.Position = 0;
            return inmsg;
        }

    Read the article

  • Converting projects to use Automatic NuGet restore

    - by terje
    Originally posted on: http://geekswithblogs.net/terje/archive/2014/06/11/converting-projects-to-use-automatic-nuget-restore.aspx

    Download tool

    In version 2.7 of NuGet, automatic NuGet restore was introduced, meaning you no longer need to clutter your MSBuild project files with NuGet target information. Visual Studio and TFS 2013 Build have this enabled by default. However, if your project was created before this was introduced, and/or if you have used "Enable NuGet Package Restore" afterwards, you now have a series of unwanted things in your projects, and a series of project files that have been modified - and - you no longer either want or need this! You might also get into some unwanted issues due to these modifications. This is an MSBuild modification that was needed only before NuGet 2.7!

    So: DON'T USE THIS FUNCTION!!!

    There is an issue (https://nuget.codeplex.com/workitem/4019) on the NuGet project site to get this function removed, renamed or at least moved farther away from the top level (please help vote it up!). The response seems to be that it WILL BE removed, around version 3.0. This function does nothing you need after the introduction of NuGet 2.7. What is also unfortunate is the naming of it - it implies that it is needed; it is not, and what is worse, there is no corresponding function to remove what it does! So to fix this, use the tool named IFix, which will fix the issue for you - all free of course, and the code is open source. Also report issues there: https://github.com/OsirisTerje/IFix

    IFix information

    DOWNLOAD HERE

    This command line tool installs using an MSI and adds itself to the system path. If you work in a team, you will probably need to use the tool multiple times: anyone in the team may at any time use the "Enable NuGet Package Restore" function and mess up your project again. The IFix program can be run either in check mode, where it does not write anything back - it only checks if you have any issues - or in fix mode, where it will also perform the necessary fixes for you. The IFix program is used like this:

        IFix <command> [-c/--check] [-f/--fix] [-v/--verbose]

    The command in this case is "nugetrestore". It will do a check from the location where it is called, and run through all subfolders from that location. So "IFix nugetrestore --check" will do the check, and "IFix nugetrestore --fix" will perform the changes, for all files and folders below the current working directory. (Note that --check can be replaced with just -c, and --fix with -f, and so on.)

    BEWARE: When you run the fix option, all solutions to be affected must be closed in Visual Studio!

    So, if you just want to DO it, then run

        IFix nugetrestore --check

    to see if you have issues, then

        IFix nugetrestore --fix

    to fix them.

    How does it work

    IFix nugetrestore checks and optionally fixes four issues that the older enabling of NuGet restore introduced. The issues are related to the MSBuild process, and the fixes are:

        Deleting the nuget.targets file.
        Deleting the nuget.exe that is located under the .nuget folder.
        Removing all references to nuget.targets in the solution file.
        Removing all properties and target imports of nuget.targets inside the csproj files.

    IFix fixes these issues in this same sequence. The first step, removing the nuget.targets file, is the most critical one: all instances of the nuget.targets file within the scope of a solution have to be removed, and in addition it has to be done with the solution closed in Visual Studio. If Visual Studio finds a nuget.targets file, the csproj files will be automatically messed up again. This means the removal process above might need to be done multiple times, especially when you're working with a team, since the solution context menu still has the "Enable NuGet Package Restore" function and someone on the team might inadvertently use it at any time. It can be a good idea to add this check to a check-in policy - if you run TFS standard version control - but that will have no effect if you use TFS Git version control, of course. So, better be prepared to run the IFix check from time to time. Or, even better, install IFix on your build servers, and add a call to IFix nugetrestore --check in the TFS build script.

    How does it look

    As a first example, I have run the IFix program from the top of a set of git repositories, so it spans multiple repositories with multiple solutions. The result from the check option is as follows (the original post shows a screenshot of the console output):

    We see the four red lines; there is one for each of the four checks we talked about in the previous section. The fact that they are red means we have that particular issue. The first section (above the first red text line) is the nuget targets section. Notice No. 1: it says it has found no paths to copy. What IFix does here is check if there are any defined paths to other NuGet galleries; if there are, those are copied over to the nuget.config file, which is where they should be in version 2.7 and above. No. 2 says it has found the particular nuget.targets file. No. 3 states it HAS found some other NuGet galleries defined in the targets file, which it would then like to copy to the config file. No. 4 is the section for nuget.exe files: it lists those it has found and which it would like to delete. No. 5 states it has found a reference to nuget.targets in the solution file. This reference comes from the fact that the .nuget folder is a solution folder, and the items within are described in the solution file. It then checks the csproj files, and as can be seen from the last red line, it has found issues in 96 out of 198 csproj files. There are two possible issues in a csproj file. No. 6 is the first one, the most common and most important one: an "Import Project" section. This is the section that calls the nuget.targets file. No. 7 is another issue, which seems to sometimes be there, sometimes not: a RestorePackages property, which also should go away.

    Now, if we run the IFix nugetrestore --fix command, and then the check again after that, the result is: All green!
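
    For reference, this is roughly what the old function left behind in each .csproj, and what IFix strips out again (a sketch - the exact relative path varies by project depth):

        <PropertyGroup>
          <RestorePackages>true</RestorePackages>
        </PropertyGroup>
        <!-- ... -->
        <Import Project="$(SolutionDir)\.nuget\NuGet.targets"
                Condition="Exists('$(SolutionDir)\.nuget\NuGet.targets')" />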


  • Partner Blog Series: PwC Perspectives - "Is It Time for an Upgrade?"

    - by Tanu Sood
    Is your organization debating its next step with regard to Identity Management? While all the stakeholders are well aware that one-size-fits-all doesn't apply to identity management, it is just as true that no two identity management implementations are alike. Oracle's recent release of Identity Governance Suite 11g Release 2 has innovative features such as a customizable user interface, a shopping-cart-style request catalog, and more. However, only a close look at the use cases can help you determine if and when an upgrade to the latest R2 release makes sense for your organization. This post describes a few of the situations that PwC has helped our clients work through.

    "Should I be considering an upgrade?"

    If your organization has an existing identity management implementation, the questions below are a good start to assessing your current solution and seeing whether you need to begin planning for an upgrade:

    • Does the current solution scale and meet your projected identity management needs?
    • Does the current solution have a customer-friendly user interface?
    • Are you completely meeting your compliance objectives?
    • Are you still using spreadsheets?
    • Does the current solution have the features you need?
    • Is your total cost of ownership in line with well-performing, similar-sized companies in your industry?
    • Can your organization support your existing identity solution?
    • Is your current product-based solution well positioned to support your organization's tactical and strategic direction?

    Existing Oracle IDM Customers: Several existing Oracle clients are looking to move to R2 in 2013. If your organization is on Sun Identity Manager (SIM) or Oracle Identity Manager (OIM), and if your current assessment suggests that you need to upgrade, you should strongly consider OIM 11gR2. Oracle provides upgrade paths to Oracle Identity Manager 11gR2 from SIM 7.x / 8.x as well as Oracle Identity Manager 10g / 11gR1. The following are some of the considerations for migration:

    • The end-of-product-support schedule (for Sun or legacy OIM)
    • The new features available in R2 (including common Helpdesk scenarios, profiling of disconnected applications, increased scalability, custom connectors, browser-based UI configurations, portability of configurations during future upgrades, etc.)
    • Cost of ownership (for SIM customers)
    • Customizations that need to be maintained during the upgrade
    • Time/cost to migrate now vs. waiting for the next version

    If you are already on an older version of Oracle Identity Manager and actively maintaining your support contract with Oracle, you might be eligible for a free upgrade to OIM 11gR2. Check with your Oracle sales rep for more details.

    Existing IDM infrastructure in place: In the past year and a half, we have seen a surge in IDM upgrades from non-Oracle infrastructure to Oracle. If your organization is looking to improve the end-user experience related to identity management functions, the shopping-cart-style access request model and browser-based personalization features may come in handy. Additionally, organizations that have a large number of applications – including ecommerce, LDAP stores, databases, UNIX systems, and mainframes – as well as a high frequency of user identity changes and access requests will value the high scalability of the OIM reconciliation and provisioning engine. Furthermore, we have seen our clients like OIM's out-of-the-box (OOB) support for multiple authoritative sources.
    For organizations looking to integrate applications that do not have an exposed API, the Generic Technology Connector framework supported by OIM helps quickly generate a custom connector using the OOB wizard. Similarly, organizations that need not only flexible on-boarding of disconnected applications but also strict access management to these applications using approval flows will find the flexible disconnected application profiling feature an extremely useful tool that provides a high degree of time savings. Organizations looking to develop custom connectors for home-grown or industry-specific applications will likewise find that the Identity Connector Framework support in OIM allows them to build and test a custom connector independently before integrating it with OIM. Lastly, most of our clients considering an upgrade to OIM 11gR2 have also expressed interest in the browser-based configuration feature that allows an administrator to quickly customize the user interface without adding any custom code. Better yet, any code customizations made to the product are portable across future upgrades, which is viewed as a big time and money saver by most of our clients.

    Below are some upgrade methodologies we adopt based on client priorities and the scale of implementation. For illustration purposes, we have assumed that the client is currently on Oracle Waveset (formerly Sun Identity Manager).

    Integrated Deployment: An integrated deployment is typically used where a client wants to split the implementation: the current IDM continues to handle the front-end workflows while OIM takes over the back-office operations incrementally. Once all the back-office operations have moved completely to OIM, the front-end workflows are migrated to OIM.

    Parallel Deployment: This deployment is typically used where a distinct line can be drawn between which functionality each platform supports. For example, the current IDM implementation handles the password reset functionality while OIM takes over the access provisioning and RBAC functions.

    Cutover Deployment: A cutover deployment is typically recommended where a client has smaller, less complex implementations and it makes sense to leverage the migration tools to move them over immediately.

    What does this mean for YOU? There are many variables to consider when making upgrade decisions. For most customers, there is no 'easy' button. Organizations looking to upgrade or considering a new vendor should start by mapping their requirements to product features. The recommended approach is to take stock of both the short-term and long-term objectives; understand product features, the future roadmap, maturity, and the level of commitment from R&D; and build the implementation plan accordingly. As we said in the beginning, there is no one-size-fits-all with Identity Management. So, arm yourself with the knowledge, engage in industry discussions, bring in business stakeholders, and start building your implementation roadmap. In the next post we will discuss best practices for R2 implementations, covering the dos and don'ts and sharing our thoughts on making implementations successful.

    Meet the Writers:

    Dharma Padala is a Director in the Advisory Security practice within PwC. He has been implementing medium to large scale Identity Management solutions across multiple industries including utility, health care, entertainment, retail and financial sectors.
    Dharma has 14 years of experience in delivering IT solutions, of which he has spent the past 8 years implementing Identity Management solutions.

    Scott MacDonald is a Director in the Advisory Security practice within PwC. He has consulted for several clients across multiple industries including financial services, health care, automotive and retail. Scott has 10 years of experience in delivering Identity Management solutions.

    John Misczak is a member of the Advisory Security practice within PwC. He has experience implementing multiple Identity and Access Management solutions, specializing in Oracle Identity Manager and Business Process Execution Language (BPEL).

    Praveen Krishna is a Manager in the Advisory Security practice within PwC. Over the last decade Praveen has helped clients plan, architect and implement Oracle identity solutions across diverse industries. His experience includes delivering security across diverse topics like network, infrastructure, application and data, where he brings a holistic point of view to problem solving.

    Jenny (Xiao) Zhang is a member of the Advisory Security practice within PwC. She has consulted across multiple industries including financial services, entertainment and retail. Jenny has three years of experience in delivering IT solutions, of which she has spent the past year and a half implementing Identity Management solutions.

