Search Results

Search found 2694 results on 108 pages for 'michael ellick ang'.


  • University Choices For Programmers

    - by Michael
    I've noticed that the majority of eminent hackers seem to have come from prestigious universities. How true is this, and is it important to have this type of background to become prominent in the programming field? I don't necessarily have the means to attend a top school, but I have the desire to work among the best. Is it possible without coming from a highly-regarded program? Is graduate study at a good school more important than undergraduate in this regard?

    Read the article

  • JavaScript function to Redirect parent of IFrame to specified URL

    - by Michael Freidgeim
        /// <summary>
        /// Redirects parent of IFrame to specified URL.
        /// If current page doesn't have a parent, redirects itself.
        /// </summary>
        /// <param name="page"></param>
        /// <param name="url"></param>
        public static void NavigateParentToUrl(Page page, string url)
        {
            string script = @"
                try {
                    var sUrl = '" + url + @"';
                    if (self.parent.frames.length != 0)
                        self.parent.location = sUrl;
                    else
                        self.location = sUrl;
                } catch (e) {  // bind the error to 'e'; 'catch (Exception)' in the original was a C# idiom, not JavaScript
                }";
            page.ClientScript.RegisterStartupScript(TypeForClientScript(), MethodBase.GetCurrentMethod().Name, script, true);
        }
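
    A minimal usage sketch (my illustration: ClientScriptHelper is an assumed name for the class hosting the method above, and the page and URL are hypothetical). Note that because url is spliced into a JavaScript string literal, the caller should escape or validate it:

        // Hypothetical code-behind handler on a page hosted inside an IFrame.
        protected void SaveButton_Click(object sender, EventArgs e)
        {
            // ... persist the form data ...

            // Redirect the top-level window, not just the hosting IFrame.
            ClientScriptHelper.NavigateParentToUrl(Page, ResolveUrl("~/Confirmation.aspx"));
        }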

    Read the article

  • C#/.NET Little Wonders: Use Cast() and OfType() to Change Sequence Type

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here.

    We’ve seen how the Select() extension method lets you project a sequence from one type to a new type, which is handy for getting just parts of items, or building new items. But what happens when the items in the sequence are already the type you want, but the sequence itself is typed to an interface or super-type instead of the sub-type you need? For example, you may have a sequence of Rectangle stored in an IEnumerable<Shape> and want to consider it an IEnumerable<Rectangle> sequence instead. Today we’ll look at two handy extension methods, Cast<TResult>() and OfType<TResult>(), which help you with this task.

    Cast<TResult>() – Attempt to cast all items to type TResult

    So, the first thing we can do would be to attempt to create a sequence of TResult from every item in the source sequence. Typically we’d do this if we had an IEnumerable<T> where we knew that every item was actually a TResult, where TResult inherits/implements T. For example, assume the typical Shape example classes:

        // abstract base class
        public abstract class Shape { }

        // a basic rectangle
        public class Rectangle : Shape
        {
            public int Width { get; set; }
            public int Height { get; set; }
        }

    And let’s assume we have a sequence of Shape where every Shape is a Rectangle…

        var shapes = new List<Shape>
        {
            new Rectangle { Width = 3, Height = 5 },
            new Rectangle { Width = 10, Height = 13 },
            // ...
        };

    To get the sequence of Shape as a sequence of Rectangle, of course, we could use a Select() clause, such as:

        // select each Shape, cast it to Rectangle
        var rectangles = shapes
            .Select(s => (Rectangle)s)
            .ToList();

    But that’s a bit verbose, and fortunately there is already a facility built in and ready to use in the form of the Cast<TResult>() extension method:

        // cast each item to Rectangle and store in a List<Rectangle>
        var rectangles = shapes
            .Cast<Rectangle>()
            .ToList();

    However, we should note that if anything in the list cannot be cast to a Rectangle, you will get an InvalidCastException thrown at runtime. Thus, if our Shape sequence had a Circle in it, the call to Cast<Rectangle>() would have failed. As such, you should only do this when you are reasonably sure of what the sequence actually contains (or are willing to handle an exception if you’re wrong).

    Another handy use of Cast<TResult>() is using it to convert an IEnumerable to an IEnumerable<T>. If you look at the signature, you’ll see that the Cast<TResult>() extension method actually extends the older, object-based IEnumerable interface instead of the newer, generic IEnumerable<T>. This is your gateway method for being able to use LINQ on older, non-generic sequences. For example, consider the following:

        // the older, non-generic collections are a sequence of object
        var shapes = new ArrayList
        {
            new Rectangle { Width = 3, Height = 13 },
            new Rectangle { Width = 10, Height = 20 },
            // ...
        };

    Since this is an older, object-based collection, we cannot use the LINQ extension methods on it directly.
    For example, if I wanted to query the Shape sequence for only those Rectangles whose Width is > 5, I can’t do this:

        // compiler error, Where() operates on IEnumerable<T>, not IEnumerable
        var bigRectangles = shapes.Where(r => r.Width > 5);

    However, I can use Cast<Rectangle>() to treat my ArrayList as an IEnumerable<Rectangle> and then do the query!

        // ah, that’s better!
        var bigRectangles = shapes.Cast<Rectangle>().Where(r => r.Width > 5);

    Or, if you prefer, in LINQ query expression syntax:

        var bigRectangles = from s in shapes.Cast<Rectangle>()
                            where s.Width > 5
                            select s;

    One quick warning: Cast<TResult>() only attempts to cast, it won’t perform a cast conversion. That is, consider this:

        var intList = new List<int> { 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89 };

        // casting ints to longs, this should work, right?
        var asLong = intList.Cast<long>().ToList();

    Will the code above work? No, you’ll get an InvalidCastException. Remember that Cast<TResult>() is an extension of IEnumerable, thus it treats the source as a sequence of object, which means that it will box every int as an object as it enumerates over it, and there is no cast conversion from object to long, and thus the cast fails. In other words, a cast from int to long will succeed because there is a conversion from int to long, but a cast from int to object to long will not, because you can only unbox an item by casting it to its exact type. For more information on why cast-converting boxed values doesn’t work, see this post on The Dangers of Casting Boxed Values (here).

    OfType<TResult>() – Filter sequence to only items of type TResult

    So, we’ve seen how we can use Cast<TResult>() to change the type of our sequence when we expect all the items of the sequence to be of a specific type. But what do we do when a sequence contains many different types, and we are only concerned with a subset of a given type? For example, what if a sequence of Shape contains Rectangle and Circle instances, and we just want to select all of the Rectangle instances? Well, let’s say we had this sequence of Shape:

        var shapes = new List<Shape>
        {
            new Rectangle { Width = 3, Height = 5 },
            new Rectangle { Width = 10, Height = 13 },
            new Circle { Radius = 10 },
            new Square { Side = 13 },
            // ...
        };

    Well, we could filter down to the rectangles using Where(), like:

        var onlyRectangles = shapes.Where(s => s is Rectangle).ToList();

    But fortunately, an easier way has already been written for us in the form of the OfType<T>() extension method:

        // returns only a sequence of the shapes that are Rectangles
        var onlyRectangles = shapes.OfType<Rectangle>().ToList();

    Now that we have a sequence of only the Rectangles in the original sequence, we can also use this to chain other queries that depend on Rectangles, such as:

        // select only Rectangles, then filter to only those more than
        // 5 units wide...
        var onlyBigRectangles = shapes.OfType<Rectangle>()
            .Where(r => r.Width > 5)
            .ToList();

    The OfType<Rectangle>() will filter the sequence to only the items that are of type Rectangle (or a subclass of it), and that results in an IEnumerable<Rectangle>; we can then apply the other LINQ extension methods to query that list further. Just as Cast<TResult>() is an extension method on IEnumerable (and not IEnumerable<T>), the same is true for OfType<T>(). This means that you can use OfType<TResult>() on object-based collections as well.
    For example, given an ArrayList containing Shapes, as below:

        // object-based collections are a sequence of object
        var shapes = new ArrayList
        {
            new Rectangle { Width = 3, Height = 5 },
            new Rectangle { Width = 10, Height = 13 },
            new Circle { Radius = 10 },
            new Square { Side = 13 },
            // ...
        };

    We can use OfType<Rectangle>() to filter the sequence to only Rectangle items (and subclasses), and then chain other LINQ expressions, since we will then be of type IEnumerable<Rectangle>:

        // OfType() converts the sequence of object to a new sequence
        // containing only Rectangle or sub-types of Rectangle.
        var onlyBigRectangles = shapes.OfType<Rectangle>()
            .Where(r => r.Width > 5)
            .ToList();

    Summary

    So now we’ve seen two different ways to get a sequence of a superclass or interface down to a more specific sequence of a subclass or implementation. The Cast<TResult>() method casts every item in the source sequence to type TResult, and the OfType<TResult>() method selects only those items in the source sequence that are of type TResult. You can use these to downcast sequences, or adapt older types and sequences that only implement IEnumerable (such as DataTable, ArrayList, etc.).

    Technorati Tags: C#,CSharp,.NET,LINQ,Little Wonders,OfType,Cast,IEnumerable<T>
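
    As a quick aside on the DataTable case mentioned above, here is a minimal sketch (with hypothetical table and column names) of Cast<T>() acting as the gateway to LINQ over DataRowCollection, which only implements the non-generic IEnumerable:

        using System.Data;
        using System.Linq;

        var orders = new DataTable("Orders");
        orders.Columns.Add("ProductName", typeof(string));
        orders.Columns.Add("Quantity", typeof(int));
        orders.Rows.Add("Widget", 3);
        orders.Rows.Add("Gadget", 12);

        // DataRowCollection is only IEnumerable, so Cast<DataRow>()
        // is what lets the LINQ operators apply.
        var bigOrders = orders.Rows.Cast<DataRow>()
            .Where(row => (int)row["Quantity"] > 5)
            .Select(row => (string)row["ProductName"])
            .ToList();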

    Read the article

  • C#: Handling Notifications: inheritance, events, or delegates?

    - by James Michael Hare
    Oftentimes as developers we have to design a class where we get notification when certain things happen. In older object-oriented code this would often be implemented by overriding methods; with events, delegates, and interfaces, however, we have far more elegant options. So, when should you use each of these methods, and what are their strengths and weaknesses?

    Now, for the purposes of this article, when I say notification I'm just talking about ways for a class to let a user know that something has occurred. This can be through any programmatic means such as inheritance, events, delegates, etc.

    So let's build some context. I'm sitting here thinking about a provider-neutral messaging layer for the place I work, and I got to the point where I needed to design the message subscriber which will receive messages from the message bus. Basically, what we want is to be able to create a message listener and have it be called whenever a new message arrives. Now, back before the flood we would have done this via inheritance and an abstract class:

        // using inheritance - omitting argument null checks and halt logic
        public abstract class MessageListener
        {
            private ISubscriber _subscriber;
            private bool _isHalted = false;
            private Thread _messageThread;

            // assign the subscriber and start the messaging loop
            public MessageListener(ISubscriber subscriber)
            {
                _subscriber = subscriber;
                _messageThread = new Thread(MessageLoop);
                _messageThread.Start();
            }

            // user will override this to process their messages
            protected abstract void OnMessageReceived(Message msg);

            // handle the looping in the thread
            private void MessageLoop()
            {
                while (!_isHalted)
                {
                    // as long as processing, wait 1 second for message
                    Message msg = _subscriber.Receive(TimeSpan.FromSeconds(1));
                    if (msg != null)
                    {
                        OnMessageReceived(msg);
                    }
                }
            }
            ...
        }

    It seems so odd to write this kind of code now. Does it feel odd to you? Maybe it's just because I've gotten so used to delegation that I really don't like the feel of this. To me it is akin to saying that if I want to drive my car I need to derive a new instance of it just to put myself in the driver's seat. And yet, unquestionably, five years ago I would have probably written the code as you see above.

    To me, inheritance is a flawed approach for notifications for several reasons:

    - Inheritance is one of the HIGHEST forms of coupling.
    - You can't seal the listener class because it depends on sub-classing to work.
    - Because C# does not allow multiple inheritance, I've spent my one inheritance implementing this class.
    - Every time you need to listen to a bus, you have to derive a class, which leads to lots of trivial sub-classes.
    - The act of consuming a message should be a separate responsibility from the act of listening for a message (SRP).
    - Inheritance is such a strong statement (this IS-A that) that it should only be used in building type hierarchies, not for overriding use-specific behaviors and notifications.

    Chances are, if a class needs to be inherited to be used, it most likely is not designed as well as it could be in today's modern programming languages. So let's look at the other tools available to us for getting notified instead. Here are a few other choices to consider:

    - Have the listener expose a MessageReceived event.
    - Have the listener accept a new IMessageHandler interface instance.
    - Have the listener accept an Action<Message> delegate.
    Really, all of these are different forms of delegation. Now, .NET events are a bit heavier than the other types of delegates in terms of run-time execution, but they are a great way to allow others using your class to subscribe to your events:

        // using event - omitting argument null checks and halt logic
        public sealed class MessageListener
        {
            private ISubscriber _subscriber;
            private bool _isHalted = false;
            private Thread _messageThread;

            // assign the subscriber and start the messaging loop
            public MessageListener(ISubscriber subscriber)
            {
                _subscriber = subscriber;
                _messageThread = new Thread(MessageLoop);
                _messageThread.Start();
            }

            // user will subscribe to this event to process their messages
            public event Action<Message> MessageReceived;

            // handle the looping in the thread
            private void MessageLoop()
            {
                while (!_isHalted)
                {
                    // as long as processing, wait 1 second for message
                    Message msg = _subscriber.Receive(TimeSpan.FromSeconds(1));
                    if (msg != null && MessageReceived != null)
                    {
                        MessageReceived(msg);
                    }
                }
            }
        }

    Note, now we can seal the class to avoid changes, and the user just needs to provide a message handling method:

        theListener.MessageReceived += CustomReceiveMethod;

    However, personally I don't think events hold up as well in this case because events are largely optional. To me, what is the point of a listener if you create one with no event listeners? So in my mind, use events when handling the notification is optional.

    So how about delegation via interface? I personally like this method quite a bit. Basically what it does is similar to the inheritance method mentioned first, but better, because it makes it easy to split the part of the class that doesn't change (the base listener behavior) from the part that does change (the user-specified action after receiving a message). So assume we had an interface like:

        public interface IMessageHandler
        {
            void OnMessageReceived(Message receivedMessage);
        }

    Our listener would look like this:

        // using delegation via interface - omitting argument null checks and halt logic
        public sealed class MessageListener
        {
            private ISubscriber _subscriber;
            private IMessageHandler _handler;
            private bool _isHalted = false;
            private Thread _messageThread;

            // assign the subscriber and start the messaging loop
            public MessageListener(ISubscriber subscriber, IMessageHandler handler)
            {
                _subscriber = subscriber;
                _handler = handler;
                _messageThread = new Thread(MessageLoop);
                _messageThread.Start();
            }

            // handle the looping in the thread
            private void MessageLoop()
            {
                while (!_isHalted)
                {
                    // as long as processing, wait 1 second for message
                    Message msg = _subscriber.Receive(TimeSpan.FromSeconds(1));
                    if (msg != null)
                    {
                        _handler.OnMessageReceived(msg);
                    }
                }
            }
        }

    And they would call it by creating a class that implements IMessageHandler and passing that instance into the constructor of the listener. I like that this alleviates the issues of inheritance and essentially forces you to provide a handler on construction (as opposed to events). Well, this is good, but personally I think we could go one step further. While I like this better than events or inheritance, it still forces you to implement a specific method name. What if that name collides?
    Furthermore, if you have lots of these, you end up either with large classes inheriting multiple interfaces to implement one method each, or lots of small classes. Also, if you had one class that wanted to manage messages from two different subscribers differently, it wouldn't be able to, because the interface can't be overloaded. This brings me to using delegates directly. In general, every time I think about creating an interface for something, if that interface contains only one method, I start thinking a delegate is a better approach. Now, that said, delegates don't accomplish everything an interface can. Obviously, having the interface allows you to refer to the classes that implement the interface, which can be very handy. In this case, though, really all you want is a method to handle the messages. So let's look at a method delegate:

        // using delegation via delegate - omitting argument null checks and halt logic
        public sealed class MessageListener
        {
            private ISubscriber _subscriber;
            private Action<Message> _handler;
            private bool _isHalted = false;
            private Thread _messageThread;

            // assign the subscriber and start the messaging loop
            public MessageListener(ISubscriber subscriber, Action<Message> handler)
            {
                _subscriber = subscriber;
                _handler = handler;
                _messageThread = new Thread(MessageLoop);
                _messageThread.Start();
            }

            // handle the looping in the thread
            private void MessageLoop()
            {
                while (!_isHalted)
                {
                    // as long as processing, wait 1 second for message
                    Message msg = _subscriber.Receive(TimeSpan.FromSeconds(1));
                    if (msg != null)
                    {
                        _handler(msg);
                    }
                }
            }
        }

    Here the MessageListener now takes an Action<Message>. For those of you unfamiliar with the pre-defined delegate types in .NET, that is a method with the signature: void SomeMethodName(Message). The great thing about delegates is that they give you a lot of power. You could create an anonymous delegate, a lambda, or specify any other method, as long as it satisfies the Action<Message> signature. This way, you don't need to define an arbitrary helper class or give the method a specific name.

    Incidentally, we could combine both the interface and delegate approaches to allow maximum flexibility. Doing this, the user could either pass in a delegate or specify a delegate interface:

        // using delegation - give users choice of interface or delegate
        public sealed class MessageListener
        {
            private ISubscriber _subscriber;
            private Action<Message> _handler;
            private bool _isHalted = false;
            private Thread _messageThread;

            // assign the subscriber and start the messaging loop
            public MessageListener(ISubscriber subscriber, Action<Message> handler)
            {
                _subscriber = subscriber;
                _handler = handler;
                _messageThread = new Thread(MessageLoop);
                _messageThread.Start();
            }

            // passes the interface method as a delegate using a method group
            public MessageListener(ISubscriber subscriber, IMessageHandler handler)
                : this(subscriber, handler.OnMessageReceived)
            {
            }

            // handle the looping in the thread
            private void MessageLoop()
            {
                while (!_isHalted)
                {
                    // as long as processing, wait 1 second for message
                    Message msg = _subscriber.Receive(TimeSpan.FromSeconds(1));
                    if (msg != null)
                    {
                        _handler(msg);
                    }
                }
            }
        }

    This is the approach I tend to prefer because it allows the user of the class to choose whichever method works best for them.
    You may be curious about the actual performance of these different methods:

        Enter iterations:
        1000000

        Inheritance took 4 ms.
        Events took 7 ms.
        Interface delegation took 4 ms.
        Lambda delegate took 5 ms.

    Before you get too caught up in the numbers, however, keep in mind that this is performance over 1,000,000 iterations. Since they are all under 10 ms, that boils down to fractions of a microsecond per iteration, so really any of them is a fine choice performance-wise. As such, the choice of what to do really boils down to what you're trying to accomplish. Here are my guidelines:

    - Inheritance should be used only when defining a collection of related types with implementation-specific behaviors; it should not be used as a hook for users to add their own functionality.
    - Events should be used when subscription is optional or multi-cast is desired.
    - Interface delegation should be used when you wish to refer to implementing classes by the interface type, or if the type requires several methods to be implemented.
    - Delegate method delegation should be used when you only need to provide one method and do not need to refer to implementers by the interface name.
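
    As a quick illustration of the combined approach above (a sketch of my own: SqlSubscriber and LoggingHandler are hypothetical implementations of the ISubscriber and IMessageHandler interfaces from the post), the same listener can then be constructed either way:

        // with a lambda...
        var lambdaListener = new MessageListener(
            new SqlSubscriber("connection string here"),
            msg => Console.WriteLine("Received: " + msg));

        // ...or with an interface implementation, whichever suits the caller.
        var interfaceListener = new MessageListener(
            new SqlSubscriber("connection string here"),
            new LoggingHandler());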

    Read the article

  • How to fix "[Errno 13] Permission denied" in mailman mailing lists

    - by Michael
    After migrating domains from one Plesk server onto another, I got several of those mails every day (the target mailbox does not exist, so I get those as undeliverable mail bounces):

        Return-Path: <[email protected]>
        Received: (qmail 26460 invoked by uid 38); 26 May 2012 12:00:02 +0200
        Date: 26 May 2012 12:00:02 +0200
        Message-ID: <20120526100002.xyzxx.qmail@lvpsxxx-xx-xx-xx.dedicated.hosteurope.de>
        From: [email protected] (Cron Daemon)
        To: [email protected]
        Subject: Cron <list@lvpsxxx-xx-xx-xx> [ -x /usr/lib/mailman/cron/senddigests ] && /usr/lib/mailman/cron/senddigests
        Content-Type: text/plain; charset=ANSI_X3.4-1968
        X-Cron-Env: <SHELL=/bin/sh>
        X-Cron-Env: <HOME=/var/list>
        X-Cron-Env: <PATH=/usr/bin:/bin>
        X-Cron-Env: <LOGNAME=list>

        List: xyzxyz: problem processing /var/lib/mailman/lists/xyzxyz/digest.mbox:
            [Errno 13] Permission denied: '/var/lib/mailman/archives/private/xyzxyz'

    I tried to fix the permissions myself, but the problem still exists.

    Read the article

  • Search engine optimization Links

    - by Michael Freidgeim
    Below are a few links that I used for my search engine optimization research:

    - http://websearch.about.com/od/designforsearch/ss/tendesigntips.htm
    - Keyword Selection Guidelines
    - Where To Use Keywords
    - Google Search Engine Optimization: http://websearch.about.com/od/keywordsandphrases/a/sitedesign.htm
    - http://en.wikipedia.org/wiki/Search_engine_optimization
    - http://www.google.com/support/webmasters/bin/answer.py?answer=35291

    Read the article

  • BizTalk Testing Series - The xpath Function

    - by Michael Stephenson
    Background

    While the xpath function in a BizTalk orchestration is a very powerful feature, I have often come across the situation where someone has hard-coded an xpath expression in an orchestration. If you have read some of my previous posts about testing, I've tried to get across a general theme of test-driven or test-assisted development approaches, where the underlying principle is that you're building up your solution from small, well-tested units that are put together, and the resulting solution is usually quite robust. You will be finding more bugs within your unit tests and fewer outside of your team.

    The thing I don't like about the xpath function's usual usage is when you come across an orchestration which has something like the below snippet in an expression or assign shape:

        string result = xpath(myMessage, "string(//Order/OrderItem/ProductName)");

    My main issue with this is that the xpath statement is hard-coded in the orchestration, and you don't really know it works until you are running the orchestration. Some of the problems you end up with are:

    - You waste time with lengthy debugging of the orchestration when your statement isn't working.
    - You might not know the function isn't working quite as expected because the testable unit around it is big.
    - You are much more open to regression issues if your schema changes.

    Approach to Testing

    The technique I usually follow is to hold the xpath statement as a constant in a helper class, or to format a constant with a helper function to get the actual xpath statement. It is then used by the orchestration like follows:

        string result = xpath(myMessage, MyHelperClass.ProductNameXPathStatement);

    Because the xpath statement is available outside of the orchestration, it now becomes testable in its own right. This means:

    - I can test it in its own right.
    - I'm less likely to waste time tracking down problems caused by an error in the statement.
    - I can reduce the risk of regression issues.

    I'm now able to implement some testing around my xpath statements, which usually looks like the following:

    - The test will use a sample xml file.
    - The sample will be validated against the schema.
    - The test will execute the xpath statement and then check the results are as expected.

    Walk-through

    BizTalk uses the XPathNavigator internally behind the xpath function to implement your queries, usually via the navigator's Select or Evaluate methods. In the sample (link at bottom) I have a small solution which contains a schema from which I have generated a sample instance. I will then use this instance as the basis for my tests.

    [The post's diagrams show: the helper class encapsulating the xpath expressions, with helper functions that format an expression to inject an index into the xpath query for a repeating node; a test class with functions that execute queries against the sample xml file; a couple of helper functions which execute the xpath expressions in a similar way to BizTalk (you could have a proper helper class to do this if you wanted); and the BizTalk expression editor, where these constants and functions can be used alongside the xpath function.]
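
    A minimal sketch of such a test (my illustration, not the downloadable sample: MyHelperClass.ProductNameXPathStatement, the sample file name, and the expected value are assumed, the schema-validation step is omitted for brevity, and NUnit is just one framework choice). It uses XPathNavigator the same way BizTalk's xpath function does:

        using System.Xml.XPath;
        using NUnit.Framework;

        [TestFixture]
        public class ProductNameXPathTests
        {
            [Test]
            public void ProductNameStatement_ReturnsExpectedValue()
            {
                // load the sample instance generated from the schema
                var document = new XPathDocument("SampleOrder.xml");
                XPathNavigator navigator = document.CreateNavigator();

                // evaluate the same constant the orchestration uses
                var result = (string)navigator.Evaluate(MyHelperClass.ProductNameXPathStatement);

                Assert.AreEqual("Widget", result);
            }
        }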
    Conclusion

    I hope you can see that with very little effort you can make your life much easier by testing xpath statements outside of an orchestration rather than hard-coding them directly into the orchestration. This can also save you a lot of pain longer term, because your build should break if your schema changes unexpectedly, causing these xpath tests to fail, whereas tests around the orchestration will be more difficult to troubleshoot and work out the cause of the problem.

    Sample Link

    The sample is available from the following link: http://code.msdn.microsoft.com/testbtsxpathfunction

    Other Tools

    On the subject of using the xpath function, if you don't already use it, the below tool is very useful for creating your xpath statements (thanks BizBert): http://www.bizbert.com/bizbert/2007/11/30/XPath+The+Hidden+Language+Of+BizTalk.aspx

    Read the article

  • I want to build a Virtual Machine, are there any good references?

    - by Michael Stum
    I'm looking to build a Virtual Machine as a platform-independent way to run some game code (essentially scripting). The Virtual Machines that I'm aware of in games are rather old: Infocom's Z-Machine, LucasArts' SCUMM, id Software's Quake 3. As a .NET developer, I'm familiar with the CLR and looked into the CIL instructions to get an overview of what you actually implement on a VM level (vs. the language level). I've also dabbled a bit in 6502 assembler during the last year. The thing is, now that I want¹ to implement one, I need to dig a bit deeper. I know that there are stack-based and register-based VMs, but I don't really know which one is better at what, and whether there are more, or hybrid, approaches. I need to deal with memory management, decide which low-level types are part of the VM, and understand why stuff like ldstr works the way it does. My only reference book (apart from the Z-Machine stuff) is the CLI Annotated Standard, but I wonder if there is better, more general/fundamental reading on VMs. Basically something like the Dragon Book, but for VMs? I'm aware of Donald Knuth's Art of Computer Programming, which uses a register-based VM, but I'm not sure how applicable that series still is, especially since it's still unfinished. Clarification: The goal is to build a specialized VM. For example, Infocom's Z-Machine contains opcodes for setting the background color or playing a sound. So I need to figure out how much goes into the VM as opcodes vs. the compiler that takes a script (language TBD) and generates the bytecode from it, but for that I need to understand what I'm really doing. ¹ I know, modern technology would allow me to just interpret a high-level scripting language on the fly. But where is the fun in that? :) It's also a bit hard to google, because "Virtual Machines" is nowadays often associated with VMware-type OS virtualization...
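
    To make the stack-based vs. register-based distinction concrete, here is the kind of minimal stack-based core I have in mind (a sketch of my own in C#; the opcode set is made up for illustration):

        using System;
        using System.Collections.Generic;

        // a minimal stack-based core: instructions name no registers;
        // they pop operands from and push results onto an evaluation stack
        enum OpCode : byte { Push, Add, Mul, Print, Halt }

        static class TinyVm
        {
            public static void Run(byte[] code)
            {
                var stack = new Stack<int>();
                int pc = 0;
                while (true)
                {
                    switch ((OpCode)code[pc++])
                    {
                        case OpCode.Push: stack.Push(code[pc++]); break; // operand is inline in the bytecode
                        case OpCode.Add: stack.Push(stack.Pop() + stack.Pop()); break;
                        case OpCode.Mul: stack.Push(stack.Pop() * stack.Pop()); break;
                        case OpCode.Print: Console.WriteLine(stack.Peek()); break;
                        case OpCode.Halt: return;
                    }
                }
            }
        }

        // (2 + 3) * 4, compiled by hand:
        // TinyVm.Run(new byte[] { (byte)OpCode.Push, 2, (byte)OpCode.Push, 3,
        //                         (byte)OpCode.Add, (byte)OpCode.Push, 4,
        //                         (byte)OpCode.Mul, (byte)OpCode.Print, (byte)OpCode.Halt });

    A register-based design would instead encode source and destination register indices in each instruction, trading fewer, larger instructions for the compact opcode stream above.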

    Read the article

  • Camera lookAt target changes when rotating parent node

    - by Michael IV
    I have the following issue. I have a camera with a lookAt method which works fine. I have a parent node to which I parent the camera. If I rotate the parent node while keeping the camera's lookAt on the target, the camera's lookAt changes too. That is not what I want to achieve. I need it to work like in Adobe AE when you parent a camera to a null object: when the null object is rotated, the camera starts orbiting around the target while still looking at the target. What I do currently is multiply the parent's model matrix with the camera's model matrix, which is calculated from the lookAt() method. I am sure I need to decompose (or recompose) one of the matrices before multiplying them. The parent model or the camera model? Can anyone here show the right way of doing it? UPDATE: The parent is just a node. The child is the camera. The parented camera in After Effects works like this: if you rotate the parent node while the camera looks at the target, the camera actually starts orbiting around the target based on the parent rotation. In my case the parent rotation also changes the camera's lookAt direction, which is NOT what I want. Hope it is clear now.
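
    A sketch of the behavior I am after (using System.Numerics purely for illustration; the exact fix depends on the engine's matrix conventions): apply the parent's rotation to the camera's position only, then rebuild the view matrix with lookAt, so the aim direction never inherits the parent's orientation:

        using System.Numerics;

        // rotate the camera's offset from the target by the parent's rotation,
        // then look at the target from the new position: the camera orbits,
        // but its forward vector is recomputed to point at the target
        static Matrix4x4 OrbitView(Vector3 target, Vector3 cameraPos, Matrix4x4 parentRotation)
        {
            Vector3 offset = Vector3.Transform(cameraPos - target, parentRotation);
            Vector3 cameraWorldPos = target + offset;
            return Matrix4x4.CreateLookAt(cameraWorldPos, target, Vector3.UnitY);
        }

    In other words, the parent contributes to the camera's position, but the camera's orientation is rebuilt from lookAt every frame rather than multiplied in from the parent's matrix.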

    Read the article

  • BizTalk 360 Alarms, How do you configure yours?

    - by Michael Stephenson
    Originally posted on: http://geekswithblogs.net/michaelstephenson/archive/2013/06/18/153157.aspx

    I've recently written a guest post for BizTalk 360 on their blog about how customers may configure BizTalk 360 alarms to optimize getting the right information to the right type of support people. These are my thoughts on how users of BizTalk 360 can get the best value out of its alarms:

    http://blogs.biztalk360.com/what-are-the-different-types-of-alarms-alerts-you-should-configure-in-biztalk360/

    Read the article

  • Why prefer a wildcard to a type discriminator in a Java API (Re: Effective Java)

    - by Michael Campbell
    In the generics section of Bloch's Effective Java (which handily is the "free" chapter available to all: http://java.sun.com/docs/books/effective/generics.pdf), he says:

        If a type parameter appears only once in a method declaration, replace it with a wildcard.

    (See pages 31-33 of that pdf.) The signature in question is:

        public static void swap(List<?> list, int i, int j)

    vs.

        public static <E> void swap(List<E> list, int i, int j)

    He then proceeds to use a private static "helper" function with an actual type parameter to perform the work. The helper function signature is EXACTLY that of the second option. Why is the wildcard preferable, since you need to NOT use a wildcard to get the work done anyway? I understand that in this case, since he's modifying the list, you can't add to a collection through an unbounded wildcard; so why use it at all?

    Read the article

  • SBUG --> UK Connected Systems User Group

    - by Michael Stephenson
    Following a recent user group meeting, we have decided that the UK SOA/BPM User Group will be renamed the UK Connected Systems User Group. The reasons for this are as follows:

    1. Other user groups who cover the same topics as us are all called something similar.
    2. We feel the name change will help to increase user group membership.

    The focus and topics of the user group will remain the same.

    Read the article

  • C#/.NET Little Wonders: The Nullable static class

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here.

    Today we’re going to look at an interesting Little Wonder that can be used to mitigate what could be considered a Little Pitfall. The Little Wonder we’ll be examining is the System.Nullable static class. No, not the System.Nullable<T> class, but a static helper class with one particularly useful method that we will examine… but first, let’s look at the Little Pitfall that makes this wonder so useful.

    Little Pitfall: Comparing nullable value types using <, >, <=, >=

    Examine this piece of code; without examining it too deeply, what’s your gut reaction as to the result?

        int? x = null;

        if (x < 100)
        {
            Console.WriteLine("True, {0} is less than 100.",
                x.HasValue ? x.ToString() : "null");
        }
        else
        {
            Console.WriteLine("False, {0} is NOT less than 100.",
                x.HasValue ? x.ToString() : "null");
        }

    Your gut would be to say true, right? It would seem to make sense that a null integer is less than the integer constant 100. But the result is actually false! The null value is not less than 100 according to the less-than operator.

    It looks even more outrageous when you consider that this also evaluates to false:

        int? x = null;

        if (x < int.MaxValue)
        {
            // ...
        }

    So, are we saying that null is less than every valid int value? If that were true, null should be less than int.MinValue, right? Well… no:

        int? x = null;

        // um... hold on here, x is NOT less than min value?
        if (x < int.MinValue)
        {
            // ...
        }

    So what’s going on here? If we use greater-than instead of less-than, we see the same little dilemma:

        int? x = null;

        // once again, null is not greater than anything either...
        if (x > int.MinValue)
        {
            // ...
        }

    It turns out that four of the comparison operators (<, <=, >, >=) are designed to return false anytime at least one of the arguments is null when comparing System.Nullable wrapped types that expose the comparison operators (short, int, float, double, DateTime, TimeSpan, etc.). What’s even odder is that even though the two equality operators (== and !=) work correctly, >= and <= have the same issue as < and > and return false even if both System.Nullable wrapped operator-comparable types are null!

        DateTime? x = null;
        DateTime? y = null;

        if (x <= y)
        {
            Console.WriteLine("You'd think this is true, since both are null, but it's not.");
        }
        else
        {
            Console.WriteLine("It's false because <=, <, >, >= don't work on null.");
        }

    To make matters even more confusing, take for example your usual check to see if something is less than, greater than, or equal:

        int? x = null;
        int? y = 100;

        if (x < y)
        {
            Console.WriteLine("X is less than Y");
        }
        else if (x > y)
        {
            Console.WriteLine("X is greater than Y");
        }
        else
        {
            // We fall into the "equals" assumption, but clearly null != 100!
            Console.WriteLine("X is equal to Y");
        }

    Yes, this code outputs “X is equal to Y” because both the less-than and greater-than operators return false when a Nullable wrapped operator-comparable type is null. This violates a lot of our assumptions, because we assume that if something is not less than something, and it’s not greater than something, it must be equal.
    So keep in mind that the only two comparison operators that work on Nullable wrapped types where at least one is null are the equals (==) and not-equals (!=) operators:

        int? x = null;
        int? y = 100;

        if (x == y)
        {
            Console.WriteLine("False, x is null, y is not.");
        }

        if (x != y)
        {
            Console.WriteLine("True, x is null, y is not.");
        }

    Solution: The Nullable static class

    So we’ve seen that <, <=, >, and >= have some interesting and perhaps unexpected behaviors that can trip up a novice developer who isn’t expecting the kinks that System.Nullable<T> types with comparison operators can throw. How can we easily mitigate this?

    Well, obviously, you could do null checks before each check, but that starts to get ugly:

        if (x.HasValue)
        {
            if (y.HasValue)
            {
                if (x < y)
                {
                    Console.WriteLine("x < y");
                }
                else if (x > y)
                {
                    Console.WriteLine("x > y");
                }
                else
                {
                    Console.WriteLine("x == y");
                }
            }
            else
            {
                Console.WriteLine("x > y because y is null and x isn't");
            }
        }
        else if (y.HasValue)
        {
            Console.WriteLine("x < y because x is null and y isn't");
        }
        else
        {
            Console.WriteLine("x == y because both are null");
        }

    Yes, we could probably simplify this logic a bit, but it’s still horrendous! So what do we do if we want to consider null less than everything and be able to properly compare Nullable<T> wrapped value types?

    The key is the System.Nullable static class. This class is a companion to the System.Nullable<T> class and provides a few helper methods for Nullable<T> wrapped types, including the static Compare<T>() method. What’s so big about the static Compare<T>() method? It implements an IComparer-compatible comparison on Nullable<T> types. Why do we care? Well, if you look at the MSDN description for how IComparer works, you’ll read:

        Comparing null with any type is allowed and does not generate an exception when using IComparable. When sorting, null is considered to be less than any other object.

    This is what we probably want! We want null to be less than everything! So now we can change our logic to use the Nullable.Compare<T>() static method:

        int? x = null;
        int? y = 100;

        if (Nullable.Compare(x, y) < 0)
        {
            // Yes! x is null, y is not, so x is less than y according to Compare().
            Console.WriteLine("x < y");
        }
        else if (Nullable.Compare(x, y) > 0)
        {
            Console.WriteLine("x > y");
        }
        else
        {
            Console.WriteLine("x == y");
        }

    Summary

    So, when doing math comparisons between two numeric values where one of them may be a null Nullable<T>, consider using the System.Nullable.Compare<T>() method instead of the comparison operators. It will treat null as less than any value, and it will avoid logic consistency problems when relying on < returning false to indicate >= is true, and so on.

    Technorati Tags: C#,C-Sharp,.NET,Little Wonders,Little Pitfalls,Nullable
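
    Since Nullable.Compare<T>() is IComparer-compatible, it also slots directly into sorting, which is a natural follow-on use (a small sketch of my own, not from the original post; it assumes the usual System and System.Collections.Generic usings):

        var values = new List<int?> { 5, null, 2, null, 8 };

        // Nullable.Compare<T> matches the Comparison<T> delegate shape,
        // so nulls sort before all real values: null, null, 2, 5, 8
        values.Sort(Nullable.Compare);

        foreach (var v in values)
        {
            Console.WriteLine(v.HasValue ? v.ToString() : "null");
        }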

    Read the article

  • Silverlight Cream for November 27, 2011 -- #1176

    - by Dave Campbell
    In this Issue: Matt Eland, Parag Joshi, Jerrel Blankenship, and Joost van Schaik.

    Above the Fold: WP7: "Safe event detachment base class for Windows Phone 7 behaviors" by Joost van Schaik

    Shoutouts: Michael Palermo's latest Desert Mountain Developers is up. Michael Washington's latest Visual Studio #LightSwitch Daily is up.

    From SilverlightCream.com:

    - 31 Days of Mango | Day #22: App Connect. Matt Eland takes the reins of Jeff's blog for Day 22 and is talking about App Connect... App Connect allows apps to be listed on Quick Cards relative to an app's subject matter, and Quick Cards are items that appear in searches to let users find out more info... check out the blog post if you're not familiar with this.
    - 31 Days of Mango | Day #21: Sockets. Jeff's Day 21 is written by Parag Joshi, and is on sockets... building a WP7 app for posting restaurant orders to a Silverlight OOB app running on a host machine... good-sized tutorial and discussion, plus a project to download and play with.
    - 31 Days of Mango | Day #20: Creating Ringtones. Jerrel Blankenship has Day 20 for Jeff Blankenburg's 31 Days of Mango and is discussing ringtones... how to create and save a custom ringtone for your user.
    - Safe event detachment base class for Windows Phone 7 behaviors. Joost van Schaik revisits his Safe Event Detachment pattern for WP7 and built a base class to take care of the initialization involved, to be kind to us, the developers... code included.

    Stay in the 'Light!

    Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream
    Join me @ SilverlightCream | Phoenix Silverlight User Group

    Technorati Tags: Silverlight, Silverlight 3, Silverlight 4, Windows Phone, MIX10

    Read the article

  • Do programmers at non-software companies need the same things as at software companies?

    - by Michael
    There is a lot of evidence that things like offices, multiple screens, administration rights on your own computer, and being allowed to install whatever software you want are great for productivity while developing. However, the studies I've seen tend toward companies that sell software; there, keeping the programmers productive is paramount to the company's profitability. At companies that produce software simply to support their primary function, however, programming is merely a support role. Do the same rules apply at a company that only uses the software it produces to support its business, where a lot of a programmer's work is maintenance?

    Read the article

  • Anomaly with bash PS1 definition

    - by Michael Wiles
    My root and admin user both have the same .bashrc file. The prompt section of the .bashrc is the following:

        if [ "$color_prompt" = yes ]; then
            PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ '
        else
            PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
        fi
        unset color_prompt force_color_prompt

        # If this is an xterm set the title to user@host:dir
        case "$TERM" in
        xterm*|rxvt*)
            PS1="\[\e]0;${debian_chroot:+($debian_chroot)}\u@\h: \w\a\]$PS1"
            ;;
        *)
            ;;
        esac

    But the problem is that the admin user and root user have different prompts. admin's prompt is:

        admin@hostname:~$

    and root's prompt is:

        root@hostname:/home#

    So it seems root is using the "xterm" version and admin is not. Why does the .bashrc file have this difference in prompts? How do I get the admin user to also use the xterm version? How would I test that condition? If I run echo $TERM while running as the admin user I get xterm, so as far as I can tell, it should be using the xterm version for the admin user.

    Read the article

  • WebCenter Innovation Award Winners

    - by Michael Snow
    Of course, here on our WebCenter blog we’d like to highlight and brag about our great WebCenter winners.

    The 2012 WebCenter Innovation Award Winners

    University of Louisville
    Location: Louisville, KY, USA
    Industry: Higher Education
    Fusion Middleware Products: WebCenter Portal, WebCenter Content, JDeveloper, WebLogic, Oracle BI, Oracle IdM

    The University of Louisville is a state-supported research university. It has implemented WebCenter as part of the LOUI (Louisville Informatics Institute) Initiative, a statewide informatics network which will improve public healthcare and lower cost through the use of novel technology and next-generation analytics, decision support, and innovative outcomes-based payment systems.

    ----------

    News Limited
    Country/Region: Australia
    Industry: News/Media
    FMW Products: WebCenter Sites

    A single platform running websites for 50% of Australia's newspapers: News Corp is running half of Australia's newspaper websites on this shared platform powered by Oracle WebCenter Sites, and they have overtaken their nearest competitors and are now leading in terms of monthly page impressions. At peak they have over 250 editors on the system publishing in real time. Sites include www.newsspace.com.au, www.news.com.au, www.theaustralian.com.au and many others.

    ----------

    Life Technologies Corp.
    Country/Region: Carlsbad, CA, USA
    Industry: Life Sciences
    FMW Products: WebCenter Portal, SOA Suite

    Life Technologies Corp. is a global biotechnology tools company dedicated to improving the human condition with innovative life science products. They were awarded an innovation award for their solution utilizing WebCenter Portal for remotely monitoring and repairing biotech instruments. They deployed WebCenter as a portal that accesses Life Technologies' cloud-based service monitoring system, where all customer-deployed instruments can be remotely monitored and proactively repaired. The portal provides alerts from these cloud-based monitoring services directly to the customer and to Life Technologies field engineers, and provides insight into the instruments and services customers purchased for the purpose of analyzing and anticipating future customer needs and creating targeted sales and service programs.

    ----------

    China Mobile Jiangsu
    Country/Region: Jiangsu, China
    Industry: Telecommunications
    FMW Products: WebCenter Portal, WebCenter Content, JDeveloper, SOA Suite, IdM

    China Mobile Jiangsu is one of the biggest subsidiaries of China Mobile, with over 25,000 employees and 40 million mobile subscribers. They were awarded an innovation award for their new employee platform, powered by WebCenter Portal, designed to serve their 25,000+ employees and help them drive collaboration and productivity. The JSMCC (China Mobile Jiangsu) Employee Enterprise Portal and Collaboration Platform is one of China Mobile’s most important IT innovation projects. The new platform is designed to serve JSMCC’s 25,000+ employees, to help them improve working efficiency, to change their traditional working mode to social ways, and to encourage employees in business collaboration and innovation. The solution is built on top of the Oracle WebCenter Portal Framework and WebCenter Spaces, while also leveraging WebLogic Server, UCM, OID, OAM, SES, IRM and Oracle Database 11g.
    By providing rich collaboration services, knowledge management services, sensitive-document protection services, unified user identity management services, unified information search services and personalized information integration capabilities, the working efficiency of JSMCC employees has been greatly improved. Main functionality: information portal, office automation integration, personal space, group space, team collaboration with Web 2.0 services, a unified search engine for multiple data sources, document management and protection, and SSO for multiple platforms.

    ----------

    LADWP – Los Angeles Department of Water and Power
    Country/Region: US – Los Angeles, CA
    Industry: Public Utility
    FMW Products: WebCenter Portal, WebCenter Content, JDeveloper, SOA Suite, IdM

    The Los Angeles Department of Water and Power (LADWP) is the largest public utility company in the United States, with over 1.6 million customers. LADWP provides water and power for millions of residential and commercial customers in Southern California, and also bills most of these customers for sanitation services provided by another city department.

    The new infrastructure consists of:

    - Oracle WebCenter Portal, including a mobile portal
    - Oracle WebCenter Content for content management and Digital Asset Management (DAM)
    - Oracle OAM (IDM, OVD, OAM) integrated with AD for enterprise identity management
    - Oracle Siebel for CRM
    - Oracle DB
    - Oracle SOA Suite for integration of various subsystems and back-end systems

    The new portal's features include:

    - Complete graphical redesign based on best practices in UI design for high usability
    - Customer self-service implemented through MyAccount (bill pay, payment history, bill history, usage analysis, service request management)
    - Financial assistance programs (CRM, WebCenter)
    - Customer rebate programs (CRM, WebCenter)
    - Turn on/off/transfer of services (commercial and residential)
    - Outage reporting
    - eNotification (SMS, email)
    - Multilingual (English and Spanish), using WebCenter multi-language support
    - Section 508 (ADA) compliant
    - Search, using WebCenter SES (Secured Enterprise Search)
    - Distributed authorship in WebCenter Content
    - Mobile access (any mobile browser)

    Read the article

  • Stuck with Documentum Still? Do MORE with Oracle WebCenter!

    - by Michael Snow
    WEBCAST TODAY!! 03/22/12

    Do you need to lower costs? Raise productivity? Foster innovation? Improve online engagement? But you’re still stuck with Documentum? Step away from the ledge; there is hope. Let us help you.

    Top 4 Content Imperatives

    - Lower Costs: reduce labor, maintenance fees, storage and electrical consumption
    - Raise Productivity: automation and integration, communication, findability
    - Foster Innovation: enable collaboration, expertise location
    - Improve Online Engagement: enable user-driven, dynamic marketing initiatives

    With the coming technology wave we see four content imperatives. Every organization has had to reduce costs; cost cutting has become a way of life. Everyone is working three jobs as positions are eliminated. And so we have to reduce labor, reduce maintenance, and reduce the money we are wasting on things like storing content that is redundant or no longer useful.

    We also, to fill that gap, need to raise productivity. Knowledge workers represent the fastest-growing segment of the workforce, accounting for 40%-75% of the employees at organizations in sectors like financial services, life sciences, healthcare and retail. What’s more, their wages total 18 percent of the United States GDP. And so we can’t afford information systems that don’t let our top performers be the best they can be. We look to automate the content processes, provide ways to integrate that content into our processes, provide communication to make decisions, and make content more findable so people can make the right decision and move the process forward.

    And really, to get ourselves out of the current financial status, we can only cut costs so far. We have to innovate out of economic tough times, to find new products and new markets. And to enable the innovation process, we have to enable collaboration and expertise location. So much of innovation is about building on innovations that have come before. To solve problems, we have to be able to find what our organization has already created. We find that problems we need to solve have already been solved if we can find the right document, the right person. So we have to provide systems that enable us to stand on the shoulders of our organization’s accomplishments.

    Good content drives great marketing. Online engagement is growing as an absolute necessity for modern, growing marketing organizations that require business users to be enabled for dynamic marketing content creation, updates and targeted content creation and management.

    Unfortunately, if you are currently stuck with Documentum, you are really lacking in your Web Experience Management capabilities. Documentum previously used FatWire for web publishing. Now FatWire is part of Oracle.
    Oracle provides powerful web engagement capabilities:

    - Increase sales and loyalty by optimizing online engagement
    - Create, manage and moderate contextually relevant, targeted and interactive online experiences
    - Optimize customer engagement across web, mobile and social channels
    - Manage a large-scale, multichannel, global online presence with integration to enterprise applications
    - Enable business users to control their content and make their own updates
    - Publish content from native files; enable navigation of project documents, procedures, policy information
    - Enable content display and updates from existing web applications; one click to drag-and-drop content management functionality

    So you get the ability to self-publish information and make it navigable, to move the process of publishing from IT to business users, and the ability to address a whole new area of user engagement with web experience management.

    So… if you are still stuck with Documentum and don’t know what to do, contact us. Not only will Oracle help you step away from the ledge, but with the MoveOff Documentum program we are also offering you a way out: trade in your Documentum licenses for a 100% credit on Oracle WebCenter. How’s that for a nice bonus? It’s time to stop maintaining Documentum, and to start innovating with Oracle WebCenter.

    Learn More Here!

    To learn more about what Oracle WebCenter can offer you today, join us for a webcast; your eyes will be opened to all that’s possible: Do More with WebCenter: Extend Beyond Content Management

    Read the article

  • C#/.NET Little Wonders – Cross Calling Constructors

    - by James Michael Hare
    Just a small post today; it’s the final iteration before our release and things are crazy here! This is another little tidbit that I love using, and it should be fairly common knowledge, yet I’ve noticed many times that less experienced developers tend to have redundant constructor code when they overload their constructors.

    The Problem – repetitive code is less maintainable

    Let’s say you were designing a messaging system, and you want to create a class to represent the properties for a Receiver, so perhaps you design a ReceiverProperties class to represent this collection of properties. Perhaps you decide to make ReceiverProperties immutable, and so you have several constructors that you can use for alternative construction:

        // Constructs a set of receiver properties.
        public ReceiverProperties(ReceiverType receiverType, string source, bool isDurable, bool isBuffered)
        {
            ReceiverType = receiverType;
            Source = source;
            IsDurable = isDurable;
            IsBuffered = isBuffered;
        }

        // Constructs a set of receiver properties with buffering on by default.
        public ReceiverProperties(ReceiverType receiverType, string source, bool isDurable)
        {
            ReceiverType = receiverType;
            Source = source;
            IsDurable = isDurable;
            IsBuffered = true;
        }

        // Constructs a set of receiver properties with buffering on and durability off.
        public ReceiverProperties(ReceiverType receiverType, string source)
        {
            ReceiverType = receiverType;
            Source = source;
            IsDurable = false;
            IsBuffered = true;
        }

    Note: keep in mind this is just a simple example for illustration, and in some cases default parameters can also help clean this up, but they have issues of their own.

    While strictly speaking there is nothing wrong with this code, logically it suffers from maintainability flaws. Consider what happens if you add a new property to the class: you have to remember to guarantee that it is set appropriately in every constructor. This can cause subtle bugs, and it becomes even uglier when the constructors do more complex logic or error handling, or when there are numerous potential overloads (especially if you can’t easily see them all in one screen’s height).

    The Solution – cross-calling constructors

    I’d wager nearly everyone knows how to call your base class’s constructor, but you can also cross-call to one of the constructors in the same class by using the this keyword in the same way you use base to call a base constructor:

        // Constructs a set of receiver properties.
        public ReceiverProperties(ReceiverType receiverType, string source, bool isDurable, bool isBuffered)
        {
            ReceiverType = receiverType;
            Source = source;
            IsDurable = isDurable;
            IsBuffered = isBuffered;
        }

        // Constructs a set of receiver properties with buffering on by default.
        public ReceiverProperties(ReceiverType receiverType, string source, bool isDurable)
            : this(receiverType, source, isDurable, true)
        {
        }

        // Constructs a set of receiver properties with buffering on and durability off.
        public ReceiverProperties(ReceiverType receiverType, string source)
            : this(receiverType, source, false, true)
        {
        }

    Notice, there is much less code. In addition, the code you have contains no repetitive logic. You can define the main constructor that takes all arguments, and the remaining constructors with defaults simply cross-call the main constructor, passing in the defaults.
    Yes, in some cases default parameters can ease some of this for you, but default parameters only work for compile-time constants (null, string and number literals). For example, if you were creating a TradingDataAdapter that relied on an implementation of ITradingDao, which is the data access object used to retrieve records from the database, you might want two constructors: one that takes an ITradingDao reference, and a default constructor which constructs a specific ITradingDao for ease of use:

        public TradingDataAdapter(ITradingDao dao)
        {
            _tradingDao = dao;

            // other constructor logic
        }

        public TradingDataAdapter()
        {
            _tradingDao = new SqlTradingDao();

            // same constructor logic as above
        }

    As you can see, this isn’t something we can solve with a default parameter, but we could with cross-calling constructors:

        public TradingDataAdapter(ITradingDao dao)
        {
            _tradingDao = dao;

            // other constructor logic
        }

        public TradingDataAdapter()
            : this(new SqlTradingDao())
        {
        }

    So in cases like this, where you have constructors whose defaults are not compile-time constants, default parameters can’t help you, and cross-calling constructors is one of your best options.

    Summary

    When you have just one constructor doing the job of initializing the class, you can consolidate all your logic and error handling in one place, thus ensuring that your behavior will be consistent across the constructor calls. This makes the code more maintainable and even easier to read. There will be some cases where cross-calling constructors may be sub-optimal or not possible (if, for example, the overloaded constructors take completely different types and are not just “defaulting” behaviors). You can also use default parameters, of course, but default parameter behavior in a class hierarchy can be problematic (default values are not inherited and in fact can differ), so sometimes multiple constructors are actually preferable. Regardless of why you may need multiple constructors, consider cross-calling where you can to reduce redundant logic and clean up the code.

    Technorati Tags: C#,.NET,Little Wonders
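
    One design note worth adding (a sketch of my own; ValidatingReceiverProperties is an illustrative name): because a cross-called constructor body runs before the calling constructor's body, argument validation placed in the main constructor automatically covers every overload:

        public sealed class ValidatingReceiverProperties
        {
            public string Source { get; private set; }
            public bool IsDurable { get; private set; }

            // all validation lives here, so every overload inherits it
            public ValidatingReceiverProperties(string source, bool isDurable)
            {
                if (source == null) throw new ArgumentNullException("source");
                Source = source;
                IsDurable = isDurable;
            }

            // the cross-call runs the checks above before this (empty) body executes
            public ValidatingReceiverProperties(string source)
                : this(source, false)
            {
            }
        }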

    Read the article

  • Is there any reason to use "container" classes?

    - by Michael
I realize the term "container" is misleading in this context; if anyone can think of a better term, please edit it in. In legacy code I occasionally see classes that are nothing but wrappers for data, something like:

class Bottle
{
    int height;
    int diameter;
    Cap capType;
    // getters/setters, maybe a constructor
}

My understanding of OO is that classes are structures for data together with the methods that operate on that data, which seems to preclude objects of this type. To me they are nothing more than structs and kind of defeat the purpose of OO. I don't think they're necessarily evil, though they may be a code smell. Is there a case where such objects would be necessary? If they are used often, does that make the design suspect?

    Read the article

  • Going Direct to Consumer in Consumer Goods – Live Webcast April 12

    - by Michael Seback
Going Direct to Consumer is top of mind with executives in the Consumer Goods (CG) industry today. Join our live webcast on Thursday, April 12 to learn what CG companies worldwide are thinking as they deploy their direct-to-consumer strategies in an effort to better engage with today's empowered consumer. Hear Jon Copestake, Chief Consumer Goods Analyst at the Economist Intelligence Unit, and Oracle discuss the findings and industry trends. Some key findings include:

- Pushing traditional media through new media channels is not enough to reach today's more plugged-in, product-savvy consumer
- CG companies are experimenting with new ways to establish and enhance direct, two-way relationships with their target consumers across multiple channels
- Survey respondents and other CG executives see their nascent e-commerce efforts as complementary to, not competing with, existing retail channels

Register to attend on April 12, 8:00 a.m. PT / 11:00 a.m. ET

    Read the article

  • "Page Size" and "Orientation" are disabled when printing from some applications

    - by Michael
There are many different Print dialogs, but one is very common and is used by GIMP, Shutter, Evolution and Simple Scan. In all these apps the "Page Size" and "Orientation" options are disabled. The same dialog in Firefox, Thunderbird and GEdit works OK. I program in Gambas3, which uses this dialog in conjunction with the GTK+ library, and it also has these options disabled. If I use the QT4 library then a different print dialog is displayed with no problems. Has anybody else noticed this problem and found a solution?

    Read the article

  • Creating a Training Lab on Windows Azure

    - by Michael Stephenson
Originally posted on: http://geekswithblogs.net/michaelstephenson/archive/2013/06/17/153149.aspx

This week we are preparing for a training course that Alan Smith will be running for the support teams at one of my customers around Windows Azure. In order to facilitate the training lab we have a few prerequisites to handle. One of the biggest is that, although the support team all have MSDN accounts, the local desktops they work on are not ideal for running most of the labs, and we want to give them some additional developer background training around Azure. Some recent Azure announcements really help us in this area:

- MSDN software can now be used on an Azure VM
- You don't pay for Azure VMs when they are no longer used

Since the support team only have limited experience of Windows Azure and the organisation also has an Enterprise Agreement, we decided it would be best value for money to spin up a training lab in a subscription on the EA; we can then turn the machines off when we are done, and spin them back up when the users need to do additional lab work once the training course is completed.

In order to achieve this I wanted to create a PowerShell script which would set up my training lab. The aim was to create 18 VMs based on a prebuilt template with Visual Studio and the Azure development tools. The script I used is described below.

The Start & Variables

The below text will set up the PowerShell environment and some variables which I will use elsewhere in the script. It will also import the Azure PowerShell cmdlets. You can see below that I will need to download my publisher settings file and know some details from my Azure account. At this point I will assume you have a basic understanding of Azure and PowerShell, so already know how to do this.

Set-ExecutionPolicy Unrestricted
cls

$startTime = get-date
Import-Module "C:\Program Files (x86)\Microsoft SDKs\Windows Azure\PowerShell\Azure\Azure.psd1"

# Azure Publisher Settings
$azurePublisherSettings = '<Your settings file>.publishsettings'

# Subscription Details
$subscriptionName = "<Your subscription name>"
$defaultStorageAccount = "<Your default storage account>"

# Affinity Group Details
$affinityGroup = '<Your affinity group>'
$dataCenter = 'West Europe' # From Get-AzureLocation

# VM Details
$baseVMName = 'TRN'
$adminUserName = '<Your admin username>'
$password = '<Your admin password>'
$size = 'Medium'
$vmTemplate = '<The name of your VM template image>'
$rdpFilePath = '<File path to save RDP files to>'
$machineSettingsPath = '<File path to save machine info to>'

Functions

In the next section of the script I have some functions which are used to perform certain actions. The first is called CreateVM.
This will do the following actions:

- If the VM already exists, delete it
- Create the cloud service
- Create the VM from the template I have created
- Add an endpoint so we can RDP to the machines, all over the same port
- Download the RDP file so the trainees have a shortcut they can use to access the machine
- Write the machine's settings to a log file

function CreateVM($machineNo)
{
    # Specify a name for the new VM and its cloud service
    # (the service name is assigned first, since the existence check below needs it)
    $machineName = "$baseVMName-$machineNo"
    $serviceName = "bupa-azure-train-$machineName"
    Write-Host "Creating VM: $machineName"

    # Get the Azure VM Image
    $myImage = Get-AzureVMImage $vmTemplate

    # If the VM already exists, delete it so it can be re-created
    # (-ErrorAction SilentlyContinue keeps the first run from failing when nothing exists yet)
    $existingVm = Get-AzureVM -Name $machineName -ServiceName $serviceName -ErrorAction SilentlyContinue
    if ($existingVm -ne $null)
    {
        Write-Host "VM already exists so deleting it"
        Remove-AzureVM -Name $machineName -ServiceName $serviceName
    }

    Write-Host "Creating Service"
    Remove-AzureService -Force -ServiceName $serviceName -ErrorAction SilentlyContinue
    New-AzureService -Location $dataCenter -ServiceName $serviceName

    Write-Host "Creating VM: $machineName"
    New-AzureQuickVM -Windows -Name $machineName -ServiceName $serviceName -ImageName $myImage.ImageName -InstanceSize $size -AdminUsername $adminUserName -Password $password

    Write-Host "Updating the RDP endpoint for $machineName"
    Get-AzureVM -Name $machineName -ServiceName $serviceName `
        | Add-AzureEndpoint -Name RDP -Protocol TCP -LocalPort 3389 -PublicPort 550 `
        | Update-AzureVM

    Write-Host "Get the RDP file for machine $machineName"
    $machineRDPFilePath = "$rdpFilePath\$machineName.rdp"
    Get-AzureRemoteDesktopFile -Name $machineName -ServiceName $serviceName -LocalPath "$machineRDPFilePath"

    WriteMachineSettings "$machineName" "$serviceName"
}

The DeleteMachineSettings function is used to delete the log file before we start re-running the process.

function DeleteMachineSettings()
{
    Write-Host "Deleting the machine settings output file"
    [System.IO.File]::Delete("$machineSettingsPath")
}

The WriteMachineSettings function will get the VM and then record its details to the log file. The log file matters because it lets me hand our infrastructure team the details they need to configure access to all of the VMs.

function WriteMachineSettings([string]$vmName, [string]$vmServiceName)
{
    Write-Host "Writing to the machine settings output file"

    $vm = Get-AzureVM -Name $vmName -ServiceName $vmServiceName
    $vmEndpoint = Get-AzureEndpoint -VM $vm -Name RDP

    $sb = new-object System.Text.StringBuilder
    $sb.Append("Service Name: ")
    $sb.Append($vm.ServiceName)
    $sb.Append(", ")
    $sb.Append("VM: ")
    $sb.Append($vm.Name)
    $sb.Append(", ")
    $sb.Append("RDP Public Port: ")
    $sb.Append($vmEndpoint.Port)
    $sb.Append(", ")
    $sb.Append("Public IP (VIP): ")
    $sb.Append($vmEndpoint.Vip)
    $sb.AppendLine("")
    [System.IO.File]::AppendAllText($machineSettingsPath, $sb.ToString())
}
# end functions

Rest of Script

The rest of the script is really just the part that orchestrates the actions we want to happen.
It will load the publisher settings, select the Azure subscription, and then loop around the CreateVM function to create 16 VMs:

Import-AzurePublishSettingsFile $azurePublisherSettings
Set-AzureSubscription -SubscriptionName $subscriptionName -CurrentStorageAccount $defaultStorageAccount
Select-AzureSubscription -SubscriptionName $subscriptionName

DeleteMachineSettings

"Starting creating Bupa International Azure Training Lab"

$numberOfVMs = 16

for ($index = 1; $index -le $numberOfVMs; $index++)
{
    $vmNo = "$index"
    CreateVM($vmNo)
}

"Finished creating Bupa International Azure Training Lab"

# Give it a Minute
Start-Sleep -s 60

$endTime = get-date
"Script run time " + ($endTime - $startTime)

Conclusion

As you can see, there is nothing too fancy about this script, but in our case of creating a small, isolated training lab that is not connected to our corporate network, we can easily use it to provision the lab. I'm sure that if this is of use to anyone, you can easily modify it to do other things with the lab environment too. A couple of points to note: there are soft limits in Azure on the number of cores and services your subscription can use, so you may need to contact the Azure support team to have the limit increased.

In terms of the real business value of this approach: it was not possible to use the existing desktops for the training, and getting internal virtual machines would have been relatively expensive and time-consuming for our ops team. With the Azure option we are able to spin these machines up for a temporary period during the training course and then throw them away when we are done. We expect the cost of this test lab to be very small, especially considering we have EA pricing. As a ballpark, I think my 18-VM training lab environment will cost in the region of $80 per day on our EA. This is a fraction of the cost of creating a single VM on premise.
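As a rough sanity check on that ballpark (the hourly rate here is an assumption for illustration, not a quoted Azure price): if a Medium instance cost on the order of $0.18 per hour at the time, then 18 VMs running around the clock come to roughly 18 × 0.18 × 24 ≈ $78 per day, which lines up with the $80 estimate. And since stopped VMs no longer accrue compute charges, shutting the lab down outside training hours would cut even that figure substantially.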

    Read the article

  • Oracle is #1 in Life Sciences!

    - by Michael Snow
Guest post today by: John Klinke, Senior Principal Product Manager, Oracle WebCenter Content

Based on the announcement last week at EMC World about Documentum for Life Sciences, it looks like EMC is starting to have regrets about pulling out of the life sciences space over the last few years. Certainly their content management customers and partners in life sciences have noticed their retreat. Many of them are now talking to us about WebCenter Content since they've seen the writing on the wall regarding Documentum's decline, including falling revenue, shrinking investment, departure of key executives, and EMC's auditing of existing customers.

While EMC has been neglecting the life sciences industry over the last few years, Oracle has been increasing its investment and commitment by providing best-of-breed solutions to enable pharmaceutical, medical device, biotech and CRO companies to improve productivity and drive innovation. As a result, according to IDC Health Insights, Oracle is #1 in life sciences. From research and development through clinical development and manufacturing to sales and marketing, Oracle provides the solutions that life sciences companies depend on to accelerate R&D, expedite clinical trials, and speed time to market.

Specifically for Oracle WebCenter, our life sciences business is booming thanks to our comprehensive offerings led by Oracle WebCenter Content, our 21 CFR Part 11 compliant enterprise content management platform. Unlike Documentum, WebCenter Content is all about keeping the cost of ownership low: through simplicity, flexibility, and out-of-the-box integrations. WebCenter Content is a single, comprehensive ECM platform that can handle all your content management needs, from controlled documents to digital asset management, records management and document imaging and capture. And it is much more flexible, letting you make configuration changes instead of customizations to meet your business needs. It also saves you money by being pre-integrated with the rest of the Oracle Fusion Middleware technology stack and with leading enterprise applications like Siebel (including Siebel CTMS), Primavera, E-Business Suite, JD Edwards and PeopleSoft.

So if you think EMC's announcement last week was too little and too late, I'm happy to report that Oracle is here to help. Back in October, we announced our Move Off Documentum offer, which provides a 100% trade-in credit for your Documentum licenses when you purchase Oracle WebCenter, and the good news is, this offer is still available for a limited time. So stop maintaining Documentum and start innovating with Oracle WebCenter. For more details see www.oracle.com/moveoff/documentum.

    Read the article

  • When reversing a Google Analytics e-commerce transaction is the per-unit price positive or negative?

    - by Michael Glenn
Google's own instructions for reversing an e-commerce transaction seem to contradict themselves regarding the unit price. The instructions state:

"The item field has a positive per-unit price and a negative quantity."

Yet the code sample has a negative per-unit price and a negative quantity:

_gaq.push(['_addItem',
  '1234',          // order ID - necessary to associate item with transaction
  'DD44',          // SKU/code - required
  'T-Shirt',       // product name
  'Olive Medium',  // category or variation
  '-11.99',        // unit price - required
  '-1'             // quantity - required
]);

Which is correct?

    Read the article
