Search Results

Search found 10661 results on 427 pages for 'move semantics'.


  • Get Pop Up Notifications for Your RSS Feeds with Feed Notifier

    - by DigitalGeekery
    Are you looking for a way to get updates from your favorite websites right to your desktop? If so, you'll want to check out Feed Notifier. This free Windows application runs in the system tray and delivers pop-up notifications to your desktop when your subscribed RSS feeds are updated. Download and install Feed Notifier. (Download link below) When you are finished installing, the Feed Notifier Preferences window will open. Click on the Add… button to add an RSS feed. Copy and paste the feed URL into the text box and click Next. Choose your polling interval. This is how often your feed will be checked for new items. You can set your polling interval for days, hours, minutes, or even seconds. Click Finish. At your configured interval, Feed Notifier will check your feeds for new items. If new items are present, they will pop up above your system tray. You'll get an intro portion of the article. Simply click the headline in the feed pop-up to open the full article in your default browser.

    Setting Preferences: Open the preferences of Feed Notifier by going to Start > All Programs > Feed Notifier, or by right-clicking on the system tray icon and selecting Preferences. On the Pop-ups tab you can configure the duration in seconds that each article stays displayed on your screen. The default is five seconds. You can also change the size of the display, the theme, and the amount of content displayed. The Options tab offers additional configuration such as article caching and using a proxy server. The Filter tab allows you to filter in or out certain content. To add a filter, click Add…, then type in the filter rule. You can even choose to apply it to only certain feeds. Click OK. Feed Notifier will display on the Filters tab the number of times the filter is applied. Click OK when finished. You can scroll through the articles by using the forward and back buttons at the lower left, or use the play / pause buttons to move through the articles in a slideshow-type fashion.

    Feed Notifier is a nice way to get your updated feeds directly to your desktop in a timely fashion. It supports all RSS and Atom feeds and features a clean look and feel with plenty of customizable options. Download Feed Notifier

    Read the article

  • How does the Trash Can work? Where can I find an official specification / documentation / reference about it?

    - by MestreLion
    When trying to manage the trash can on mounted NTFS volumes, I ended up reading FreeDesktop.org's reference on it. Poking around and doing some tests, I realized Ubuntu/Gnome does not follow the spec 100%. Here's why: For non-/ partitions, it always uses <driveroot>/.Trash-<uid>; it never uses <driveroot>/.Trash/<uid>, even when I created it in advance. While this works, it's annoying: if I have 15 users, I end up with 15 .Trash-xxx folders on my drive, while the other approach would still give a single folder (with 15 sub-folders). That "pollution" on my drives is very unpleasant. And the spec says "If an $topdir/.Trash directory is absent, an $topdir/.Trash-$uid directory is to be used". Well, it IS present, so why does it never use it? Root trash does not work, at least not out of the box. Open Nautilus as root and click on Trash: it gives an error. Try to delete any file, and it says it "can't move it to the trash". OK, I know this can be fixed by creating /root/.local/share. But the spec says "A "home trash" directory SHOULD be automatically created for any new user. If this directory is needed for a trashing operation but does not exist, the implementation SHOULD automatically create it, without any warnings or delays." Why the error then? A bug? Why must I change /etc/fstab entries for mounted volumes, adding options like uid and gid, if the volumes are already mounted as RW for everyone? These are just some examples of deviation from the standard. So, the question is: "If Ubuntu does not adhere 100% to the spec, HOW exactly does the trash work? WHERE can I find a technical reference about Ubuntu's implementation of the trash?" By the way: if Ubuntu does happen to follow the spec, please tell me what I am doing wrong, especially regarding the /.Trash-<uid> vs /.Trash/<uid> issue. Thanks!

    EDIT: Some more info: If a given filesystem has no support for the sticky bit (VFAT, NTFS), it probably doesn't have support for permissions either (at least VFAT surely doesn't). So what prevents one user from purging / restoring other users' .Trash-xxx folders? If one can read/write his own trash, he can also do the same for the whole drive, including others' trashes, can't he? Or does Gnome have any "extra" protection on .Trash-xxx folders on VFAT/NTFS filesystems? If Linux can "emulate" file permissions on NTFS mounts by editing the /etc/fstab uid and gid options, can it also "emulate" the sticky bit? I would really like to use the /.Trash/xxx format... For the root issue: for the / partition, I can trash as root, and it goes to /root/.local/share/Trash. But if I click on Nautilus's "Trash" (as root), I get an error. Don't you? So files are correctly trashed, but I can't access them. All I can do is manually "purge" them (by deleting the files in /root/.local/share/Trash), but restoring would be very tricky (opening the info files and manually moving things, etc.). For non-/ partitions (or at least for VFAT/NTFS), I cannot even trash as root: it does not create a .Trash-0 folder, it simply says "Cannot trash, want to permanently delete?" Why? About fstab: I use it for a permanent mount for my NTFS partitions. I have several, and if not "pre-mounted" they really clutter the desktop and/or Nautilus. I'd rather have them pre-mounted, integrated into my filesystem, at mount points like /data, /windows/xp, /windows/vista, and so on, and leave /media and its "mount/unmount" flexibility just for truly removable drives. So, if Ubuntu/Gnome truly follows the spec, is there any way to fix the root issues and to "emulate" the sticky bit for (at least) my fstab'ed NTFS fixed partitions? (A small sketch of the directory layout and fstab options in question follows below.)
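    For reference, here is a minimal sketch of the two mechanisms being discussed: creating a spec-style $topdir/.Trash directory with the sticky bit on the volume root, and an fstab entry that assigns ownership on an NTFS mount. The device, mount point, uid and gid values are placeholders, and whether ntfs-3g actually honors the sticky bit is exactly the open question above.

      # Shared, spec-style trash directory on the volume root
      # (mode 1777 = world-writable with the sticky bit, like /tmp)
      sudo mkdir /data/.Trash
      sudo chmod 1777 /data/.Trash

      # Example /etc/fstab line giving the NTFS volume an owner and group
      # (placeholder values, for illustration only)
      /dev/sda5  /data  ntfs-3g  defaults,uid=1000,gid=1000,umask=007  0  0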

    Read the article

  • Lease Accounting Closed for Comment

    - by Theresa Hickman
    December 15, 2010 marked the last day to send public comments to FASB and IASB on lease accounting. June 2011 is the deadline for the final consideration of the Leases Exposure Draft that will be given to standard setters in order to create a new lease accounting standard. Landlords, lessees, retailers, the airline industry, etc. are all worried right now about the changes to lease accounting. They feel the changes will be too costly and complex without adding significant improvement to the quality and relevance of financial statements. In a nutshell, IASB and FASB want to abolish operating leases, where the lessee records the periodic payments as an expense over time. The proposed changes will mean that the accounting for leases will move from the P&L and hit both the lessee's and lessor's balance sheets. For companies that occupy a lot of property, this could significantly increase their liabilities, not to mention front-load much of the costs that they were able to spread out over time before. Why are IASB and FASB doing this? Their goal is to have consistent accounting for both lessees and lessors, with higher quality financial statements. Leasing is one of four major projects being undertaken by the IASB and FASB in order to complete convergence between US GAAP and IFRS. I spoke to our resident accounting expert Seamus Moran about this to better understand how it might impact accounting software. He reminded me that the proposed changes to both US GAAP and IFRS in respect to leases are just that: "proposed." It is still inappropriate to account for leases the way they are being proposed, and we still need to account for them in accordance with the current regulations, which is what current accounting software programs, such as E-Business Suite Release 12.1 and prior and PeopleSoft Enterprise, support. The FASB (US GAAP) and IASB (IFRS) exposure drafts (EDs) that outline the proposal have been published. The FASB edition was published on August 17th, with comments due by December 15th. The IASB edition was published on the same date, and comments were due in London by the same deadline. Exposure drafts are the method both the FASB and the IASB use to solicit General Acceptance, the "GA" in GAAP. Both Boards will consider the input they have received, and perhaps revise the proposal. The proposal has come in for some criticism, both from the finance houses and from the users of the leased assets. There is, given the opposition to it, an excellent chance that the Leasing proposal will be modified or rewritten. We will know this in about six months, the usual time it takes for the FASB and IASB to digest the comments they receive. If they feel the proposal has General Acceptance, they will issue the final Standard at that time; if not, they will issue a revised proposal with another year of comments and drafting. Oracle participates in the standard setting process and is fully aware of the leasing proposal. We have designs that would reflect the proposal in hand. These designs will be finalized when the proposal is finalized. It is likely that customers will develop new financial arrangements if the proposal is finalized, and we are working with customers and partners to stay in touch with people's business responses to the proposal. The IASB and FASB are aware that ERP companies will have to revise their software, and that the companies filing results under IFRS or under US GAAP will have to implement such software.
The form and timing of the release of the updated software will depend on the schedule of the take up of the new standard, the complexity of the standard, and the releases supported at the time the standard becomes effective.

    Read the article

  • Java EE @ No Fluff Just Stuff Tour

    - by reza_rahman
    If you work in the US and still don't know what the No Fluff Just Stuff (NFJS) Tour is, you are doing yourself a very serious disfavor. NFJS is by far the cheapest and most effective way to stay up to date through some world class speakers and talks. This is most certainly true for US enterprise Java developers in particular. Following the US cultural tradition of old-fashioned roadshows, NFJS is basically a set program of speakers and topics offered at major US cities year round. Many now famous world class technology speakers can trace their humble roots to NFJS. Via NFJS you basically get to have amazing training without paying for an expensive venue, lodging or travel. The events are usually on the weekends so you don't need to even skip work if you want (a great feature for consultants on tight budgets and deadlines). I am proud to share with you that I recently joined the NFJS troupe. My hope is that this will help solve the lingering problem of effectively spreading the Java EE message here in the US. For NFJS I hope my joining will help beef up perhaps much desired Java content. In any case, simply being accepted into this legendary program is an honor I could have perhaps only dreamed of a few years ago. I am very grateful to Jay Zimmerman for seeing the value in me and the Java EE content. The current speaker line-up consists of the likes of Neal Ford, Venkat Subramaniam, Nathaniel Schutta, Tim Berglund and many other great speakers. I actually had my tour debut on April 4-5 with the NFJS New York Software Symposium - basically a short train commute away from my home office. The show is traditionally one of the smaller ones and it was not that bad for a start. I look forward to doing a few more in the coming months (more on that a bit later). I had four talks back to back (really my most favorite four at the moment). The first one was a talk on JMS 2 - some of you might already know JMS is one of my most favored Java EE APIs. The slides for the talk are posted below: What’s New in Java Message Service 2 from Reza Rahman The next talk I delivered was my Cargo Tracker/Java EE + DDD talk. This talk basically overviews DDD and describes how DDD maps to Java EE using code examples/demos from the Cargo Tracker Java EE Blue Prints project. Applied Domain-Driven Design Blue Prints for Java EE from Reza Rahman The third talk I delivered was our flagship Java EE 7/8 talk. As you may know, currently the talk is basically about Java EE 7. I'll probably slowly evolve this talk to gradually transform it into a Java EE 8 talk as we move forward (I'll blog about that separately shortly). The following is the slide deck for the talk: JavaEE.Next(): Java EE 7, 8, and Beyond from Reza Rahman My last talk for the show was my JavaScript+Java EE 7 talk. This talk is basically about aligning EE 7 with the emerging JavaScript ecosystem (specifically AngularJS). The slide deck for the talk is here: JavaScript/HTML5 Rich Clients Using Java EE 7 from Reza Rahman Unsurprisingly this talk was well-attended. The demo application code is posted on GitHub. The code should be a helpful resource if this development model is something that interests you. Do let me know if you need help with it but the instructions should be fairly self-explanatory. My next NFJS show is the Central Ohio Software Symposium in Columbus on June 6-8 (sorry for the late notice - it's been a really crazy few weeks). Here's my tour schedule so far, I'll keep you up-to-date as the tour goes forward: June 6 - 8, Columbus Ohio. 
June 24 - 27, Denver, Colorado (UberConf) - my most extensive agenda on the tour so far. July 18 - 20, Austin, Texas. I hope you'll take this opportunity to get some updates on Java EE, as well as the other awesome content on the tour.

    Read the article

  • Lessons Building KeyRef (a .NET developer learning Rails)

    - by Liam McLennan
    Just because I like to build things, and I like to learn, I have been working on a keyboard shortcut reference site. I am using this as an opportunity to improve my ruby and rails skills. The first few days were frustrating. Perhaps the learning curve of all the fun new toys was a bit excessive. Finally tonight things have really started to come together. I still don’t understand the rails built-in testing support but I will get there. Interesting Things I Learned Tonight RubyMine IDE Tonight I switched to RubyMine instead of my usual Notepad++. I suspect RubyMine is a powerful tool if you know how to use it – but I don’t. At the moment it gives me errors about some gems not being activated. This is another one of those things that I will get to. I have also noticed that the editor functions significantly differently to the editors I am used to. For example, in visual studio and notepad++ if you place the cursor at the start of a line and press left arrow the cursor is sent to the end of the previous line. In RubyMine nothing happens. Haml Haml is my favourite view engine. For my .NET work I have been using its non-union Mexican CLR equivalent – nHaml. Multiple CSS Classes To define a div with more than one css class haml lets you chain them together with a ‘.’, such as: .span-6.search_result contents of the div go here Indent Consistency I also learnt tonight that both haml and nhaml complain if you are not consistent about indenting. As a consequence of the move from notepad++ to RubyMine my haml views ended up with some tab indenting and some space indenting. For the view to render all of the indents within a view must be consistent. Sorting Arrays I guessed that ruby would be able to sort an array alphabetically by a property of the elements so my first attempt was: Application.all.sort {|app| app.name} which does not work. You have to supply a comparer (much like .NET). The correct sort is: Application.all.sort {|a,b| a.name.downcase <=> b.name.downcase} MongoMapper Find by Id Since document databases are just fancy key-value stores it is essential to be able to easily search for a document by its id. This functionality is so intrinsic that it seems that the MongoMapper author did not bother to document it. To search by id simply pass the id to the find method: Application.find(‘4c19e8facfbfb01794000002’) Rails And CoffeeScript I am a big fan of CoffeeScript so integrating it into this application is high on my priorities. My first thought was to copy Dr Nic’s strategy. Unfortunately, I did not get past step 1. Install Node.js. I am doing my development on Windows and node is unix only. I looked around for a solution but eventually had to concede defeat… for now. Quicksearch The front page of the application I am building displays a list of applications. When the user types in the search box I want to reduce the list of applications to match their search. A quick googlebing turned up quicksearch, a jquery plugin. You simply tell quicksearch where to get its input (the search textbox) and the list of items to filter (the divs containing the names of applications) and it just works. Here is the code: $('#app_search').quicksearch('.search_result'); Summary I have had a productive evening. The app now displays a list of applications, allows them to be sorted and links through to an application page when an application is selected. Next on the list is to display the set of keyboard shortcuts for an application.
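    As an aside on that sort: assuming Application.all returns a plain Ruby Array here, sort_by gives the same case-insensitive ordering with a single key block instead of a two-argument comparator. This is just an alternative sketch, not code from the project:

      # Equivalent to the comparator version above, assuming app.name is a String
      sorted = Application.all.sort_by { |app| app.name.downcase }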

    Read the article

  • How to write simple code using TDD [migrated]

    - by adeel41
    My colleagues and I do a small TDD kata practice every day for 30 minutes. For reference, this is the link to the exercise: http://osherove.com/tdd-kata-1/ The objective is to write better code using TDD. This is the code I've written: public class Calculator { public int Add( string numbers ) { const string commaSeparator = ","; int result = 0; if ( !String.IsNullOrEmpty( numbers ) ) result = numbers.Contains( commaSeparator ) ? AddMultipleNumbers( GetNumbers( commaSeparator, numbers ) ) : ConvertToNumber( numbers ); return result; } private int AddMultipleNumbers( IEnumerable<int> getNumbers ) { return getNumbers.Sum(); } private IEnumerable<int> GetNumbers( string separator, string numbers ) { var allNumbers = numbers .Replace( "\n", separator ) .Split( new string[] { separator }, StringSplitOptions.RemoveEmptyEntries ); return allNumbers.Select( ConvertToNumber ); } private int ConvertToNumber( string number ) { return Convert.ToInt32( number ); } }

    and the tests for this class are: [TestFixture] public class CalculatorTests { private int ArrangeAct( string numbers ) { var calculator = new Calculator(); return calculator.Add( numbers ); } [Test] public void Add_WhenEmptyString_Returns0() { Assert.AreEqual( 0, ArrangeAct( String.Empty ) ); } [Test] [Sequential] public void Add_When1Number_ReturnNumber( [Values( "1", "56" )] string number, [Values( 1, 56 )] int expected ) { Assert.AreEqual( expected, ArrangeAct( number ) ); } [Test] public void Add_When2Numbers_AddThem() { Assert.AreEqual( 3, ArrangeAct( "1,2" ) ); } [Test] public void Add_WhenMoreThan2Numbers_AddThemAll() { Assert.AreEqual( 6, ArrangeAct( "1,2,3" ) ); } [Test] public void Add_SeparatorIsNewLine_AddThem() { Assert.AreEqual( 6, ArrangeAct( @"1
2,3" ) ); } }

    Now I'll paste the code they have written: public class StringCalculator { private const char Separator = ','; public int Add( string numbers ) { const int defaultValue = 0; if ( ShouldReturnDefaultValue( numbers ) ) return defaultValue; return ConvertNumbers( numbers ); } private int ConvertNumbers( string numbers ) { var numberParts = GetNumberParts( numbers ); return numberParts.Select( ConvertSingleNumber ).Sum(); } private string[] GetNumberParts( string numbers ) { return numbers.Split( Separator ); } private int ConvertSingleNumber( string numbers ) { return Convert.ToInt32( numbers ); } private bool ShouldReturnDefaultValue( string numbers ) { return String.IsNullOrEmpty( numbers ); } }

    and the tests: [TestFixture] public class StringCalculatorTests { [Test] public void Add_EmptyString_Returns0() { ArrangeActAndAssert( String.Empty, 0 ); } [Test] [TestCase( "1", 1 )] [TestCase( "2", 2 )] public void Add_WithOneNumber_ReturnsThatNumber( string numberText, int expected ) { ArrangeActAndAssert( numberText, expected ); } [Test] [TestCase( "1,2", 3 )] [TestCase( "3,4", 7 )] public void Add_WithTwoNumbers_ReturnsSum( string numbers, int expected ) { ArrangeActAndAssert( numbers, expected ); } [Test] public void Add_WithThreeNumbers_ReturnsSum() { ArrangeActAndAssert( "1,2,3", 6 ); } private void ArrangeActAndAssert( string numbers, int expected ) { var calculator = new StringCalculator(); var result = calculator.Add( numbers ); Assert.AreEqual( expected, result ); } }

    Now the question is: which one is better? My point is that we do not need so many small methods initially, because StringCalculator has no subclasses, and secondly the code itself is so simple that we don't need to break it up so much that it gets confusing from having so many small methods.
Their point is that code should read like English, that it is better to break the code up early rather than do the refactoring later, and that when they do refactor it will be much easier to move these small methods into separate classes. My counterpoint is that we never decided the code was difficult to understand, so why are we breaking it up so early? So I need a third person's opinion to understand which option is better. (A sketch of the kind of extraction they have in mind follows below.)
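To make the "easier to move into separate classes later" argument concrete, here is a purely illustrative sketch (it is not code from either solution, and the NumberParser name is invented for this example) of how the small private methods could be lifted out of StringCalculator if the kata ever grows:

  using System;
  using System.Collections.Generic;
  using System.Linq;

  // Hypothetical extraction of the parsing concern into its own class.
  public class NumberParser
  {
      private const char Separator = ',';

      public IEnumerable<int> Parse( string numbers )
      {
          if ( String.IsNullOrEmpty( numbers ) )
              return Enumerable.Empty<int>();
          return numbers.Split( Separator ).Select( n => Convert.ToInt32( n ) );
      }
  }

  public class StringCalculator
  {
      private readonly NumberParser _parser = new NumberParser();

      public int Add( string numbers )
      {
          // The small, well-named methods move across almost unchanged.
          return _parser.Parse( numbers ).Sum();
      }
  }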

    Read the article

  • Undefined fireball movement behavior

    - by optimisez
    Demonstration video: What I am trying to do is, after the player shoots 10 fireballs, delete all the fireball objects and recreate a new set of 10 fireball objects. I did that, but there is a weird bug: sometimes a fireball comes out from the top of the screen and moves to the right after shooting a few times. All 10 fireballs should follow the player at all times, and every fireball should come out from the player, even after a new set of fireballs is recreated. Any ideas on how to fix it?

    Variables:

      typedef struct gameObject{
          float X;
          float Y;
          int length;
          int height;
          bool action;
      };

      // Fireball
      #define FIREBALL_NUM 10
      LPDIRECT3DTEXTURE9 fireball = NULL;
      RECT fireballRect;
      gameObject *fireballDest = new gameObject[FIREBALL_NUM];
      int iFireBallAnimation;
      int fireballCount = 0;

    Set up fireball:

      void setUpFireBall()
      {
          // Initialize destination rectangle, rectangle height and length
          for (int i = 0; i < FIREBALL_NUM; i++)
          {
              fireballDest[i].X = 0;
              fireballDest[i].Y = 0;
              fireballDest[i].length = fireballRect.right - fireballRect.left;
              fireballDest[i].height = fireballRect.bottom - fireballRect.top;
          }
          iFireBallAnimation = fireballRect.right - fireballRect.left;

          // Initialize boolean
          for (int i = 0; i < FIREBALL_NUM; i++)
          {
              fireballDest[i].action = false;
          }
      }

    Initialize fireball:

      void initFireball()
      {
          hr = D3DXCreateTextureFromFileEx(d3dDevice, "fireball.png", 512, 512, D3DX_DEFAULT, NULL, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, D3DX_DEFAULT, D3DX_DEFAULT, D3DCOLOR_XRGB(255, 255, 0), NULL, NULL, &fireball);

          // Initialize source rectangle
          fireballRect.left = 0;
          fireballRect.top = 256;
          fireballRect.right = 64;
          fireballRect.bottom = 320;

          setUpFireBall();
      }

    Update fireball:

      void update()
      {
          updateAnimation();
          updateAI();
          updatePhysics();
          updateGameState();
      }

      void updatePhysics()
      {
          motion();
          collison();
      }

      void motion()
      {
          playerMove();
          playerJump();
          playerGravity();
          shootFireball();
          fireballFollowPlayer();
      }

      void shootFireball()
      {
          if (keyArr['Z'])
              fireballDest[fireballCount].action = true;

          if (fireballDest[fireballCount].action)
          {
              fireballDest[fireballCount].X += 10;
              if (fireballDest[fireballCount].X > 800)
                  fireballCount++;
          }
      }

      void fireballFollowPlayer()
      {
          for (int i = 0; i < FIREBALL_NUM; i++)
          {
              if (fireballDest[i].action == false)
              {
                  fireballDest[i].X = playerDest.X - 30;
                  fireballDest[i].Y = playerDest.Y - 14;
              }
          }
      }

      void updateGameState()
      {
          // When no more fireballs are left, rearm the fireballs
          if (fireballCount == FIREBALL_NUM)
          {
              delete[] fireballDest;
              fireballDest = new gameObject[10];
              fireballCount = 0;
              setUpFireBall();
          }
      }

    Render fireball:

      void renderFireball()
      {
          for (int i = 0; i < FIREBALL_NUM; i++)
          {
              if (fireballDest[i].action)
                  sprite->Draw(fireball, &fireballRect, NULL, &D3DXVECTOR3(fireballDest[i].X, fireballDest[i].Y, 0), D3DCOLOR_XRGB(255,255, 255));
          }
      }
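    As an aside, the rearm step described above (delete all fireballs, then recreate the set) can also be expressed without delete[] / new by resetting each element in place. This is only a sketch of that alternative, using the same struct and globals as above; it is not claimed to be the fix for the spawn-from-the-top bug:

      // Reset one fireball in place so it goes back to trailing the player.
      void resetFireball(gameObject &f)
      {
          f.X = 0;
          f.Y = 0;
          f.length = fireballRect.right - fireballRect.left;
          f.height = fireballRect.bottom - fireballRect.top;
          f.action = false;   // inactive fireballs snap to the player each frame
      }

      // Rearm the whole set without reallocating the array.
      void rearmAllFireballs()
      {
          for (int i = 0; i < FIREBALL_NUM; i++)
              resetFireball(fireballDest[i]);
          fireballCount = 0;
      }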

    Read the article

  • Use your iPhone or iPod Touch as a Boxee Remote

    - by DigitalGeekery
    Are you a Boxee user looking for a remote control solution? Well, you might not need to look any further than your pocket. The free Boxee Remote App turns your iPhone or iPod Touch into a simple and easy-to-use Boxee remote. The Boxee Remote App works over WiFi, so there is no need to buy or install additional hardware on your PC. Plus, you don't even need to be within line of sight for it to work.

    Using the Boxee Remote App: Download the free Boxee Remote App from the App Store and install it on your iPhone or iPod Touch. (See download link below.) Next, make sure you have Boxee running on your PC. Select the Boxee icon to open the app. The first time you log in you'll be greeted by an introduction screen that explains the two modes. Click Continue. When opened in "Button" mode, you'll be presented with 4 directional buttons, an "OK" button, and a back arrow button that works like the Esc key does in Boxee. Button mode behaves just like a normal remote: touching the directional buttons moves your on-screen selection right, left, up, and down, and tapping the OK button opens or selects an item. To enter "Gesture" mode, tap the Gesture button along the top of the screen. Gesture mode works similarly to a touch pad or trackball on a laptop. You drag the Boxee icon with your thumb or finger across the screen to move around within Boxee. The icon will turn red while being dragged or touched. Simply tap the icon to select. The Settings button allows you to manually add or delete a host computer, or adjust the sensitivity of the controls. If you need to enter text, such as entering logon credentials for an app, the on-screen keyboard will pop up. While watching a video you'll have on-screen Stop and Pause buttons along with a volume slider.

    The Boxee Remote App is simple and easy to use. As long as you can connect via WiFi, you can use it to control any instance of Boxee running on any computer on your network. Download the Boxee Remote App

    Read the article

  • Building on someone else's DefaultButton Silverlight work...

    - by KyleBurns
    This week I was handed a "simple" requirement - have a search screen execute its search when the user pressed the Enter key instead of having to move hands from keyboard to mouse and click Search.  That is a reasonable request that has been met for years both in Windows and Web apps.  I did a quick scan for code to pilfer and found Patrick Cauldwell's Blog posting "A 'Default Button' In Silverlight".  This posting was a great start and I'm glad that the basic work had been done for me, but I ran into one issue - when using bound textboxes (I'm a die-hard MVVM enthusiast when it comes to Silverlight development), the search was being executed before the textbox I was in when the Enter key was pressed updated its bindings.  With a little bit of reflection work, I think I have found a good generic solution that builds upon Patrick's to make it more binding-friendly.  Also, I wanted to set the DefaultButton at a higher level than on each TextBox (or other control for that matter), so the use of mine is intended to be set somewhere such as the LayoutRoot or other high level control and will apply to all controls beneath it in the control tree.  I haven't tested this on controls that treat the Enter key special themselves in the mix. The real change from Patrick's solution here is that in the KeyUp event, I grab the source of the KeyUp event (in my case the textbox containing search criteria) and loop through the static fields on the element's type looking for DependencyProperty instances.  When I find a DependencyProperty, I grab the value and query for bindings.  Each time I find a binding, UpdateSource is called to make sure anything bound to any property of the field has the opportunity to update before the action represented by the DefaultButton is executed. Here's the code: public class DefaultButtonService { public static DependencyProperty DefaultButtonProperty = DependencyProperty.RegisterAttached("DefaultButton", typeof (Button), typeof (DefaultButtonService), new PropertyMetadata (null, DefaultButtonChanged)); private static void DefaultButtonChanged(DependencyObject d, DependencyPropertyChangedEventArgs e) { var uiElement = d as UIElement; var button = e.NewValue as Button; if (uiElement != null && button != null) { uiElement.KeyUp += (sender, arg) => { if (arg.Key == Key.Enter) { var element = arg.OriginalSource as FrameworkElement; if (element != null) { UpdateBindings(element); } if (button.IsEnabled) { button.Focus(); var peer = new ButtonAutomationPeer(button); var invokeProv = peer.GetPattern(PatternInterface.Invoke) as IInvokeProvider; if (invokeProv != null) invokeProv.Invoke(); arg.Handled = true; } } }; } } public static DefaultButtonService GetDefaultButton(UIElement obj) { return (DefaultButtonService) obj.GetValue(DefaultButtonProperty); } public static void SetDefaultButton(DependencyObject obj, DefaultButtonService button) { obj.SetValue(DefaultButtonProperty, button); } public static void UpdateBindings(FrameworkElement element) { element.GetType().GetFields(BindingFlags.Public | BindingFlags.Static).ForEach(field => { if (field.FieldType.IsAssignableFrom(typeof(DependencyProperty))) { try { var dp = field.GetValue(null) as DependencyProperty; if (dp != null) { var binding = element.GetBindingExpression(dp); if (binding != null) { binding.UpdateSource(); } } } // ReSharper disable EmptyGeneralCatchClause catch (Exception) // ReSharper restore EmptyGeneralCatchClause { // swallow exceptions } } }); } }

    Read the article

  • Node.js Lockstep Multiplayer Architecture

    - by Wakaka
    Background I'm using the lockstep model for a multiplayer Node.js/Socket.IO game in a client-server architecture. User input (mouse or keypress) is parsed into commands like 'attack' and 'move' on the client, which are sent to the server and scheduled to be executed on a certain tick. This is in contrast to sending state data to clients, which I don't wish to use due to bandwidth issues. Each tick, the server will send the list of commands on that tick (possibly empty) to each client. The server and all clients will then process the commands and simulate that tick in exactly the same way. With Node.js this is actually quite simple due to possibility of code sharing between server and client. I'll just put the deterministic simulator in the /shared folder which can be run by both server and client. The server simulation is required so that there is an authoritative version of the simulation which clients cannot alter. Problem Now, the game has many entity classes, like Unit, Item, Tree etc. Entities are created in the simulator. However, for each class, it has some methods that are shared and some that are client-specific. For instance, the Unit class has addHp method which is shared. It also has methods like getSprite (gets the image of the entity), isVisible (checks if unit can be seen by the client), onDeathInClient (does a bunch of stuff when it dies only on the client like adding announcements) and isMyUnit (quick function to check if the client owns the unit). Up till now, I have been piling all the client functions into the shared Unit class, and adding a this.game.isServer() check when necessary. For instance, when the unit dies, it will call if (!this.game.isServer()) { this.onDeathInClient(); }. This approach has worked pretty fine so far, in terms of functionality. But as the codebase grew bigger, this style of coding seems a little strange. Firstly, the client code is clearly not shared, and yet is placed under the /shared folder. Secondly, client-specific variables for each entity are also instantiated on the server entity (like unit.sprite) and can run into problems when the server cannot instantiate the variable (it doesn't have Image class like on browsers). So my question is, is there a better way to organize the client code, or is this a common way of doing things for lockstep multiplayer games? I can think of a possible workaround, but it does have its own problems. Possible workaround (with problems) I could use Javascript mixins that are only added when in a browser. Thus, in the /shared/unit.js file in the /shared folder, I would have this code at the end: if (typeof exports !== 'undefined') module.exports = Unit; else mixin(Unit, LocalUnit); Then I would have /client/localunit.js store an object LocalUnit of client-side methods for Unit. Now, I already have a publish-subscribe system in place for events in the simulator. To remove the this.game.isServer() checks, I could publish entity-specific events whenever I want the client to do something. For instance, I would do this.publish('Death') in /shared/unit.js and do this.subscribe('Death', this.onDeathInClient) in /client/localunit.js. But this would make the simulator's event listeners list on the server and the client different. Now if I want to clear all subscribed events only from the shared simulator, I can't. 
Of course, it is possible to create two event subscription systems - one client-specific and one shared - but now the publish() method would have to do if (!this.game.isServer()) { this.publishOnClient(event); }. All in all, the workaround off the top of my head seems pretty complicated for something as simple as separating the client and shared code. Thus, I wonder if there is an established and simpler method for better code organization, hopefully specific to Node.js games.
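For completeness, here is a minimal sketch of what the mixin helper mentioned above could look like; the post never defines it, so this implementation (and the contents of LocalUnit) are assumptions made purely for illustration:

  // Hypothetical mixin helper: copies client-only methods onto the shared class.
  function mixin(targetClass, methods) {
    for (var name in methods) {
      if (methods.hasOwnProperty(name)) {
        targetClass.prototype[name] = methods[name];
      }
    }
  }

  // /client/localunit.js (illustrative only)
  var LocalUnit = {
    onDeathInClient: function () { /* announcements, sprite cleanup, ... */ },
    isVisible: function () { /* client-side visibility check */ return true; }
  };

  // At the end of /shared/unit.js, as described in the workaround:
  if (typeof exports !== 'undefined') module.exports = Unit;
  else mixin(Unit, LocalUnit);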

    Read the article

  • User Productivity Kit - Powerful Packages (Part 1)

    - by [email protected]
    User Productivity Kit provides the ability to create a variety of content types including robust topics on system process and web pages with formatted text and graphics. There are times when you want to enhance content with media types not naively created by User Productivity Kit, media types such as video, custom animations, forms, and more. One method of doing this is to maintain these media files on a web server - separate from the User Productivity Kit player content and link to the files using absolute URLs such as http://myserver/overview.html. While this will get you going, you won't benefit from the content management capabilities of the UPK Developer. Features such as check-in / check-out, history, document properties, folder permissions and more are not available to this external content. Further, if you ever need to move that content to a server with a different name or domain, you'd need to update all your links. UPK version 3.1 introduced a new document type - the package. A package is a group of folders and files that you manage in the Developer library as a single document. These package documents work in the same manner as any other document in the library and you can use all of the collaborative content development features you see with other document types. Packages can be used for anything from single Word documents, PDF files, and graphics to more intricate sets of inter-related files commonly seen with HTML files and their graphics, style sheets, and JavaScript files. The structure of the files and folders within a package will always be preserved so this means that any relative links between files in the package will work. For example, an HTML file containing an image tag with a relative link to a graphic elsewhere in the same package will continue to function properly both when viewed in the Developer and when published to outputs such as the UPK Player. Once you start to use packages, you'll soon discover that there is a lot of existing content that can be re-purposed by placing it into UPK packages. Packages are easily created by selecting File...New...Package. Files can be added in a number of ways including the "Add Files" button, copy & paste from Windows Explorer, and drag & drop. To use one of the files in the package, just create a link to the file in the package you want to target. This is supported throughout the Developer in places such as section & topic concepts, frame links and hyperlinks in web pages. A little more challenging is determining how to structure packages in your library. As I mentioned earlier, a package can contain anything from a single file to dozens of files and folders. So what should you do? You could create a package for each file. You could create one package for all your files. But which one is right? Well, there's not a right and wrong answer to this question. There are advantages and disadvantages to each. The right decision will be influenced by the package files themselves, the structure of the content in the library, the size and working style of the development team, how content is shared between different outlines and more. The first consideration can be assessed the quickest. If the content to be placed in the package is composed of multiple files and those files reference each other, they should be in the same package. There are loads of examples of this type of content. 
HTML files with graphics and style sheets, HTML files with embedded Flash movies, and Word documents saved as HTML are all examples where the content is composed of multiple files and the files reference each other in some way. Content like this should always be placed in a single package so that the relative links between the files are preserved and play properly in the UPK Player. In upcoming posts, I'll explain additional considerations.

    Read the article

  • How would I handle input with a Game Component?

    - by Aufziehvogel
    I am currently having problems from finding my way into the component-oriented XNA design. I read an overview over the general design pattern and googled a lot of XNA examples. However, they seem to be right on the opposite site. In the general design pattern, an object (my current player) is passed to InputComponent::update(Player). This means the class will know what to do and how this will affect the game (e.g. move person vs. scroll text in a menu). Yet, in XNA GameComponent::update(GameTime) is called automatically without a reference to the current player. The only XNA examples I found built some sort of higher-level Keyboard engine into the game component like this: class InputComponent: GameComponent { public void keyReleased(Keys); public void keyPressed(Keys); public bool keyDown(Keys); public void override update(GameTime gameTime) { // compare previous state with current state and // determine if released, pressed, down or nothing } } Some others went a bit further making it possible to use a Service Locator by a design like this: interface IInputComponent { public void downwardsMovement(Keys); public void upwardsMovement(Keys); public bool pausedGame(Keys); // determine which keys pressed and what that means // can be done for different inputs in different implementations public void override update(GameTime); } Yet, then I am wondering if it is possible to design an input class to resolve all possible situations. Like in a menu a mouse click can mean "click that button", but in game play it can mean "shoot that weapon". So if I am using such a modular design with game components for input, how much logic is to be put into the InputComponent / KeyboardComponent / GamepadComponent and where is the rest handled? What I had in mind, when I heard about Game Components and Service Locator in XNA was something like this: use Game Components to run the InputHandler automatically in the loop use Service Locator to be able to switch input at runtime (i.e. let player choose if he wants to use a gamepad or a keyboard; or which shall be player 1 and which player 2). However, now I cannot see how this can be done. First code example does not seem flexible enough, as on a game pad you could require some combination of buttons for something that is possible on keyboard with only one button or with the mouse) The second code example seems really hard to implement, because the InputComponent has to know in which context we are currently. Moreover, you could imagine your application to be multi-layered and let the key-stroke go through all layers to the bottom-layer which requires a different behaviour than the InputComponent would have guessed from the top-layer. The general design pattern with passing the Player to update() does not have a representation in XNA and I also cannot see how and where to decide which class should be passed to update(). At most time of course the player, but sometimes there could be menu items you have to or can click I see that the question in general is already dealt with here, but probably from a more elobate point-of-view. At least, I am not smart enough in game development to understand it. I am searching for a rather code-based example directly for XNA. And the answer there leaves (a noob like) me still alone in how the object that should receive the detected event is chosen. Like if I have a key-up event, should it go to the text box or to the player?
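    As a rough illustration of the "Game Component plus Service Locator" combination described above, the sketch below registers a keyboard handler both as a component (so its Update runs automatically each frame) and as a service (so it can be swapped for a gamepad handler at runtime). Components.Add, Services.AddService and Services.GetService are the stock XNA Game members; IInputComponent and KeyboardInputComponent are assumed names along the lines of the snippets above, not part of the framework:

      public class MyGame : Microsoft.Xna.Framework.Game
      {
          protected override void Initialize()
          {
              // Run the input handler automatically in the game loop...
              var keyboardInput = new KeyboardInputComponent(this);
              Components.Add(keyboardInput);

              // ...and expose it behind an interface so the active input source
              // can be swapped (keyboard vs. gamepad, player 1 vs. player 2).
              Services.AddService(typeof(IInputComponent), keyboardInput);

              base.Initialize();
          }
      }

      // A consumer (player, menu, ...) resolves the current input source itself,
      // so it decides what a key press means in its own context.
      public class Player
      {
          public void HandleInput(Microsoft.Xna.Framework.Game game)
          {
              var input = (IInputComponent)game.Services.GetService(typeof(IInputComponent));
              if (input.keyDown(Microsoft.Xna.Framework.Input.Keys.Left))
              {
                  // move the player left
              }
          }
      }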

    Read the article

  • SQL 2012 Licensing Thoughts

    - by Geoff N. Hiten
    The only thing more controversial than new Federal Tax plans is new Licensing plans from Microsoft.  In both cases, everyone calculates several numbers.  First, will I pay more or less under this plan?  Second, will my competition pay more or less than now?  Third, will <insert interesting person/company here> pay more or less?  Not that items 2 and 3 are meaningful, that is just how people think. Much like tax plans, the devil is in the details, so lets see how this looks.  Microsoft shows it here: http://www.microsoft.com/sqlserver/en/us/future-editions/sql2012-licensing.aspx First up is a switch from per-socket to per-core licensing.  Anyone who didn’t see something like this coming should rapidly search for a new line of work because you are not paying attention.  The explosion of multi-core processors has made SQL Server a bargain.  Microsoft is in business to make money and the old per-socket model was not going to do that going forward. Per-core licensing also simplifies virtualization licensing.  Physical Core = Virtual Core, at least for licensing.  Oversubscribe your processors, that’s your lookout.  You still pay for  what is exposed to the VM.  The cool part is you can seamlessly move physical and virtual workloads around and the licenses follow.  The catch is you have to have Software Assurance to make the licenses mobile.  Nice touch there. Let’s have a moment of silence for the late, unlamented, largely ignored Workgroup Edition.  To quote the Microsoft  FAQ:  “Standard becomes our sole edition for basic database needs”.  Considering I haven’t encountered a singe instance of SQL Server Workgroup Edition in the wild, I don’t think this will be all that controversial. As for pricing, it looks like a wash with current per-socket pricing based on four core sockets.  Interestingly, that is the minimum core count Microsoft proposes to swap to transition per-socket to per-core if you are on Software Assurance.  Reading the fine print shows that if you are using more, you will get more core licenses: From the licensing FAQ. 15. How do I migrate from processor licenses to core licenses?  What is the migration path? Licenses purchased with Software Assurance (SA) will upgrade to SQL Server 2012 at no additional cost. EA/EAP customers can continue buying processor licenses until your next renewal after June 30, 2012. At that time, processor licenses will be exchanged for core-based licenses sufficient to cover the cores in use by processor-licensed databases (minimum of 4 cores per processor for Standard and Enterprise, and minimum of 8 EE cores per processor for Datacenter). Looks like the folks who invested in the AMD 12-core chips will make out like bandits. Now, on to something new: SQL Server Business Intelligence Edition. Yep, finally a BI-specific SKU licensed for server+CAL configurations only.  Note that Enterprise Edition still supports the complete feature set; the BI Edition is intended for smaller shops who want to use the full BI feature set but without needing Enterprise Edition scale (or costs).  No, you don’t get ColumnStore, Compression, or Partitioning in the BI Edition.  Those are Enterprise scale features, ThankYouVeryMuch.  Then again, your starting licensing costs are about one sixth of an Enterprise Edition system (based on an 8 core server). The only part of the message I am missing is if the current Failover Licensing Policy will change.  Do we need to fully or partially license failover servers?  That is a detail I definitely want to know.

    Read the article

  • Screen Aspect Ratio

    - by Bill Evjen
    Jeffrey Dean, Pixar Aspect Ratio is very important to home video. What is aspect ratio – the ratio from the height to the width 2.35:1 The image is 2.35 times wide as it is high Pixar uses this for half of our movies This is called a widescreen image When modified to fit your television screen They cut this to fit the box of your screen When a comparison is made huge chunks of picture is missing It is harder to find what is going on when these pieces are missing The whole is greater than the pieces themselves. If you are missing pieces – you are missing the movie The soul and the mood is in the film shots. Cutting it to fit a screen, you are losing 30% of the movie Why different aspect ratios? Film before the 1950s 1.33:1 Academy Standard There were all aspects of images though. There was no standard. Thomas Edison developed projecting images onto a wall/screen He didn’t patent it as he saw no value in it. Then 1.37:1 came about to add a strip of sound This is the same size as a 35mm film Around 1952 – TV comes along NTSC Television followed the Academy Standard (4x3) Once TV came out, movie theater attendance plummets So Film brought forth color to combat this. Also early 3D Also Widescreen was brought forth. Cinema-Scope Studios at the time made movies bigger and bigger There was a Napoleon movie that was actually 4x1 … really wide. 1.85:1 Academy Flat 2.35:1 Anamorphic Scope (aka Panavision/Cinemascope) Almost all movies are made in these two aspect ratios Pixar has done half in one and half in the other Why choose one over the other? Artist choice It is part of the story the director wants to tell Can we preserve the story outside of the theaters? TVs before 1998 – they were very square Now TVs are very wide Historical options Toy Story released as it was and people cut it in a way that wasn’t liked by the studio Pan and Scan is another option Cut and then scan left or right depending on where the action is Frame Height Pixar can go back and animate more picture to account for the bottom/top bars. You end up with more sky and more ground The characters seem to get lost in the picture You lose what the director original intended Re-staging For animated movies, you can move characters around – restage the scene. It is a new completely different version of the film This is the best possible option that Pixar came up with They have stopped doing this really as the demand as pretty much dropped off Why not 1.33 today? There has been an evolution of taste and demands. VHS is a linear item The focus is about portability and not about quality Most was pan and scan and the quality was so bad – but people didn’t notice DVD was introduced in 1996 You could have more content – two versions of the film You could have the widescreen version and the 1.33 version People realized that they are seeing more of the movie with the widescreen High Def Televisions (16x9 monitors) This was introduced in 2005 Blu-ray Disc was introduced in 2006 This is all widescreen You cannot find a square TV anymore TVs are roughly 1.85:1 aspect ratio There is a change in demand Users are used to black bars and are used to widescreen Users are educated now What’s next for in-flight entertainment? High Def IFE Personal Electronic Devices 3D inflight

    Read the article

  • Windows Azure SDK 1.3 addresses early adopter feedback

    - by Eric Nelson
    At the end of November 2010 we released a new version of the Windows Azure SDK which contains many new features driven by the great feedback of early adopters plus a shiny new portal. New Portal implemented in Silverlight: The new portal is implemented using Silverlight and replaces the (IMHO rather clunky) original HTML + JavaScript portal. It is 100% better although does still have a few bugs. Enjoy! P.S. You can if you wish still use the old portal:   New runtime functionality: The following functionality is now generally available through the Windows Azure SDK and Windows Azure Tools for Visual Studio and the new Windows Azure Management Portal: Elevated Privileges and Full IIS. You can now run a portion or all of your code in Web and Worker roles with elevated administrator privileges. The Web role now provides Full IIS functionality, which enables multiple IIS sites per Web role and the ability to install IIS modules. Remote Desktop functionality enables you to connect to a running instance of your application or service in order to monitor activity and troubleshoot common problems. Windows Server 2008 R2 Roles: Windows Azure now supports Windows Server 2008 R2 in its Web, worker and VM roles. This new support enables you to take advantage of the full range of Windows Server 2008 R2 features such as IIS 7.5, AppLocker, and enhanced command-line and automated management using PowerShell Version 2.0. New runtime functionality – in beta: Windows Azure Virtual Machine Role: Support for more types of new and existing Windows applications will soon be available with the introduction of the Virtual Machine (VM) role. You can move more existing applications to Windows Azure, reducing the need to make costly code or deployment changes. Extra Small Windows Azure Instance, which is priced at $0.05 per compute hour, provides developers with a cost-effective training and development environment. Developers can also use the Extra Small instance to prototype cloud solutions at a lower cost. Windows Azure Connect: (formerly Project Sydney), which enables a simple and easy-to-manage mechanism to set up IP-based network connectivity between on-premises and Windows Azure resources, is the first Windows Azure Virtual Network feature that we’re making available as a CTP. You can sign up for any of the betas via the Windows Azure Management Portal. Improved processes and simplified operations New portal! (see above) Access to new diagnostic information including the ability to click on a role to see role type, deployment time and last reboot time A new sign-up process that dramatically reduces the number of steps needed to sign up for Windows Azure. New scenario based Windows Azure Platform forums to help answer questions and share knowledge more efficiently. Multiple Service Administrators: Windows Azure now supports multiple Windows Live IDs to have administrator privileges on the same Windows Azure account. The objective is to make it easy for a team to work on the same Windows Azure account while using their individual Windows Live IDs.   Related Links Please also let us know through Microsoft Platform Ready if and when you intend to build an application using the Windows Azure Platform. Or indeed if you already have (Well done). You will get access to some great benefits if you do (more on that in a future post). It also really helps us better understand the demand out there which directly impacts how we will plan the next six months of activities around the Windows Azure Platform. 
Visit Microsoft Platform Ready to tell us about your plans for your applications UK based? Interested in the Windows Azure Platform? Join http://ukazure.ning.com Get started with the Windows Azure Platform http://bit.ly/startazure

    Read the article

  • Simple OpenGL program major slowdown at high resolution

    - by Grieverheart
    I have created a small OpenGL 3.3 (Core) program using freeglut. The whole geometry is two boxes and one plane with some textures. I can move around like in an FPS and that's it. The problem is I face a big slowdown in fps when I make my window large (i.e. above 1920x1080). I have monitored GPU usage in full-screen and it shows a GPU load of nearly 100% and a Memory Controller load of ~85%. When at 600x600, these numbers are at about 45%; my CPU is also at full load. I use deferred rendering at the moment, but even with forward rendering the slowdown was nearly as severe. I can't imagine my GPU is not powerful enough for something this simple when I play many games at 1080p (I have a GeForce GT 120M btw). Below are my shaders.

    First Pass #VS #version 330 core uniform mat4 ModelViewMatrix; uniform mat3 NormalMatrix; uniform mat4 MVPMatrix; uniform float scale; layout(location = 0) in vec3 in_Position; layout(location = 1) in vec3 in_Normal; layout(location = 2) in vec2 in_TexCoord; smooth out vec3 pass_Normal; smooth out vec3 pass_Position; smooth out vec2 TexCoord; void main(void){ pass_Position = (ModelViewMatrix * vec4(scale * in_Position, 1.0)).xyz; pass_Normal = NormalMatrix * in_Normal; TexCoord = in_TexCoord; gl_Position = MVPMatrix * vec4(scale * in_Position, 1.0); } #FS #version 330 core uniform sampler2D inSampler; smooth in vec3 pass_Normal; smooth in vec3 pass_Position; smooth in vec2 TexCoord; layout(location = 0) out vec3 outPosition; layout(location = 1) out vec3 outDiffuse; layout(location = 2) out vec3 outNormal; void main(void){ outPosition = pass_Position; outDiffuse = texture(inSampler, TexCoord).xyz; outNormal = pass_Normal; }

    Second Pass #VS #version 330 core uniform float scale; layout(location = 0) in vec3 in_Position; void main(void){ gl_Position = mat4(1.0) * vec4(scale * in_Position, 1.0); } #FS #version 330 core struct Light{ vec3 direction; }; uniform ivec2 ScreenSize; uniform Light light; uniform sampler2D PositionMap; uniform sampler2D ColorMap; uniform sampler2D NormalMap; out vec4 out_Color; vec2 CalcTexCoord(void){ return gl_FragCoord.xy / ScreenSize; } vec4 CalcLight(vec3 position, vec3 normal){ vec4 DiffuseColor = vec4(0.0); vec4 SpecularColor = vec4(0.0); vec3 light_Direction = -normalize(light.direction); float diffuse = max(0.0, dot(normal, light_Direction)); if(diffuse > 0.0){ DiffuseColor = diffuse * vec4(1.0); vec3 camera_Direction = normalize(-position); vec3 half_vector = normalize(camera_Direction + light_Direction); float specular = max(0.0, dot(normal, half_vector)); float fspecular = pow(specular, 128.0); SpecularColor = fspecular * vec4(1.0); } return DiffuseColor + SpecularColor + vec4(0.1); } void main(void){ vec2 TexCoord = CalcTexCoord(); vec3 Position = texture(PositionMap, TexCoord).xyz; vec3 Color = texture(ColorMap, TexCoord).xyz; vec3 Normal = normalize(texture(NormalMap, TexCoord).xyz); out_Color = vec4(Color, 1.0) * CalcLight(Position, Normal); }

    Is it normal for the GPU to be used that much under the described circumstances? Is it due to poor performance of freeglut? I understand that the problem could be specific to my code, but I can't paste the whole code here; if you need more info, please tell me.

    Read the article

  • More Free Apps Bound for the Marketplace

    - by Scott Kuhl
    Microsoft has announced they are raising the limit on free applications a developer can submit from 5 to 100. But what does that really mean? First, let's look at the reason for the limitation. The iTunes Store and the Android Market both have a lot more applications available than the Windows Phone Marketplace, but that says nothing about the quality of those applications. I attended a couple of pre-launch events and Microsoft representatives were clearly told to send a message: we don't want a bunch of junky applications that do nothing but spam the marketplace. That was the reason for the 5 free application limit. Okay, so what has the result been? Well, there are still fart apps, but there is no sign of a developer flooding the marketplace with 1500 wallpaper applications or 1000 copies of the same application all pointed at different RSS feeds. On the other hand, there are developers who want to release real free apps but are constrained by the 5 app limit. So why did Microsoft change its mind? Is it to get the count of applications up, or is it to make developers happy? Windows Phone Marketplace is growing fast but it's a long way behind the other guys. I don't think Microsoft wants 100,000 apps to show up in the next 3 months if they are loaded with copycat apps; those numbers will get picked apart quickly and the press will start complaining about the same problems the Android Market has. I do think the bump was at developer request. Microsoft is usually good about listening to developer feedback, but has been pretty slow about it at times. And from a financial perspective, there will be more apps that Microsoft has to review but will see no profit on, at least not until they bake in an advertising model connected to Bing. Ultimately, what does this mean for the future? Well, there are developers out there looking to release more than 5 simple free apps, so I think we will see more hobby apps. And there are developers out there trying to make money from advertising instead of sales, so I think we will see more of those also. But the category that I think will grow the fastest is free versions of paid applications that are the same as the trial version of the application. While technically that makes no sense, it's purely a marketing move: free apps get downloaded a lot more than paid apps, even with a trial mode. It always surprises me how little consumers are willing to spend on mobile apps. How many reviews of applications have you seen that say something like "a bit pricey at $1.99"? Really? Have you looked at how much you spend on your phone and plan? I always thought the trial mode baked into Windows Marketplace was a good idea, so I'm not sure how the more open free market will play out. In the long run, though, I won't be surprised to see a Bing mobile ad model show up so Microsoft can capitalize on the more open and free Windows Marketplace. Bonus: The Oatmeal on How I Feel About Buying Apps

    Read the article

  • Iterative and Incremental Principle Series 4: Iteration Planning – (a.k.a What should I do today?)

    - by llowitz
    Welcome back to the fourth in a five-part series on applying the Iterative and Incremental principle. During the last segment, we discussed how the Implementation Plan includes the number of iterations for a project, but not the specifics of what will occur during each iteration. Today, we will explore Iteration Planning and discuss how and when to plan your iterations. As mentioned yesterday, OUM prescribes initially planning your project approach at a high level by creating an Implementation Plan. As the project moves through the lifecycle, the plan is progressively refined. Specifically, the details of each iteration are planned prior to the iteration start. The Iteration Plan starts by identifying the iteration goal. An example of an iteration goal during the OUM Elaboration Phase may be to complete the RD.140.2 Create Requirements Specification for a specific set of requirements. Another project may determine that their iteration goal is to focus on a smaller set of requirements, but to complete both the RD.140.2 Create Requirements Specification and the AN.100.1 Prepare Analysis Specification. In an OUM project, the Iteration Plan needs to identify both the iteration goal – how far along the implementation lifecycle you plan to be – and the scope of work for the iteration. Since each iteration typically ranges from 2 weeks to 6 weeks, it is important to identify a scope of work that is achievable, yet challenging, given the iteration goal and timeframe. OUM provides specific guidelines and techniques to help prioritize the scope of work based on criteria such as risk, complexity, customer priority and dependency. In OUM, this prioritization helps focus early iterations on the high-risk, architecturally significant items, helping to mitigate overall project risk. Central to the prioritization is the MoSCoW (Must Have, Should Have, Could Have, and Won't Have) list. The result of the MoSCoW prioritization is an Iteration Group – a scope of work to be worked on as a group during one or more iterations. As I mentioned in yesterday's blog, it is pointless to plan my daily exercise in advance since several factors, including the weather, influence what exercise I perform each day. Therefore, every morning I perform Iteration Planning. My "Iteration Plan" includes the type of exercise for the day (run, bike, elliptical), whether I will exercise outside or at the gym, and how many interval sets I plan to complete. I use several factors to prioritize the type of exercise that I perform each day. Since running outside is my highest priority, I try to complete it early in the week to minimize the risk of not meeting my overall goal of doing it twice each week. Regardless of the specific exercise I select, I follow the guidelines in my Implementation Plan by applying the 6-minute interval sets. Just as in OUM, the iteration goal should be in the context of the overall Implementation Plan, and the iteration goal should move the project closer to achieving the phase milestone goals. The Implementation Plan details the strategy of what I plan to do and keeps me on track, while the Iteration Plan affords me the flexibility to juggle what I do each day based on external influences, thus maximizing my overall success. Tomorrow I'll conclude the series on applying the Iterative and Incremental approach by discussing how to manage the iteration duration and highlighting some benefits of applying this principle.

    Read the article

  • A story of Murphy&ndash;my technical issues at TechDays Switzerland #chtd

    - by Laurent Bugnion
    I had two sessions at the recent Swiss TechDays. While the first one (Advanced Development for Windows Phone 8) went extremely well (I think), I had a very annoying technical issue at the beginning of my second session. First let me add that I talked to Microsoft about it, and I hope they will change a few things in the room assignment for next year. My two sessions were one right after the other, with only a 15-minute break to change rooms. I don't mind having two sessions so close to each other, but I would really like them to be in the same room in order to avoid having to move my laptops (plural, that will become important later) and redoing the tech check. That being said, I am guilty of not checking where my talks would be until the day before the conference, and when I did notice, it was too late to change it. After my first session, I quickly moved to the other room and set up my main laptop, a Dell Precision. We tested the video output (VGA) and didn't notice anything special. The projectors are using a fairly high resolution (kudos to the Basel conference center for not having old-school 1024x768 projectors anymore; that makes Blend really hard to demo ;) but since everything went great during the first talk, I was not worried. In fact I even had some time to chat with some early attendees about my Microsoft Surface and the Samsung Slate 7, which I had carried with me in addition to the Precision. I just thought it would be nice to show the hardware that Windows 8 can run on, without thinking any further. When the session started, I immediately noticed that the main screen was not showing anything. I thought I had just forgotten to switch to "duplicate" for the video output, and did that with a quick Win-P. However it didn't "hold": after 2 seconds, it reverted back to a black display for my attendees. Then I started to really worry. We tried everything – switching from VGA to HDMI, changing the resolution, setting the projector as primary display – but nothing did the trick. The projector was just refusing to show my screen. Now, to show you how desperate I started to be, I even considered using the "extend" setting (which worked just fine) and presenting from one of the feedback monitors on the floor, but really it was super cumbersome. Eventually came my last resort: I started my Samsung Slate 7, which by chance has Visual Studio 12 and Blend 5 installed, plugged the HDMI projector into the dock (yes, I had the dock with me, which I usually don't!), connected it to the internet (had to enter a long password for that), loaded the source code from my main machine using a USB stick and… finally started to give my presentation. All in all I think we lost about 10 minutes – among the most horrible minutes of my whole life, truly (yes I am blessed, I didn't have that many horrible minutes in my life ;). I really want to apologize to my attendees. We joked a bit during the attempts to resolve the issue, and the reactions I had after the session were all very nice and sympathetic. Only a handful of people left my session while I was having the issues, and I really don't blame them (who knew how long the problem would last!!). But still, I have probably spoken at more than 60 sessions over the years, and this was by far my most painful moment. What did I learn? Well, from now on I will always have my slate ready with the latest source code, an internet connection, and every tool I might need during the presentation.
This way, if I detect even a hint that the Precision might not work, I will just switch to the Slate. The experience of presenting on the slate is actually not bad at all, it is just a bit slow for my taste, but it does work. By the way, I will be posting the code and slides for my sessions very soon, I just need to "clean it and zip it". Stay tuned, and thanks again for your patience in that horrible circumstance. Cheers Laurent   Laurent Bugnion (GalaSoft)

    Read the article

  • October in Review

    - by Richard Bingham
    With OpenWorld over, October was time to get back to serious work for everyone, including the Fusion Applications Developer Relations team. Don't forget the OpenWorld content is still available, including presentation downloads, for a limited period of time, so be sure to grab anything you found useful or take another scan for anything you might have missed. Of all the announcements, the continued evolution of the Oracle Cloud services for extending and integrating with Fusion Applications is increasing in popularity, and certainly the Cloud Marketplace is something we're becoming involved in. More details to follow. Fusion Concepts: Last week Vik from our team started the new "Fusion Concepts" series of articles, providing those new to Fusion Applications with an explanation of the architectural basics, with the aim of reducing the learning curve and laying the platform for more efficient and effective development. The series began with an insightful first post on the different schemas that exist in the Fusion Applications database. Look out for upcoming posts on multi-lingual entities, profile options, look-ups and more. New Learning Resources: Our YouTube channel continued to expand with more 'how to' videos on using page composer, extending the Simplified UI (aka FUSE), and integrating BI reports and analytics. Also, the Oracle Learning Library is now well established as a central resource for knowledge, with thousands of tutorials, videos, and documents. Of particular note are the great new extensibility-related videos added by the CRM Product Management team, including more on the ever-expanding capabilities of Application Composer. To see some examples of these, search using the keyword 'customization' or the product 'Sales Cloud'. Finally on learning resources, as Oliver mentioned, the Oracle Press book on Fusion Application Customization and Extensibility is now available for pre-order on Amazon (due out 1st Jan). Out and About: October also saw us attend the annual Apps Conference held by the UK Oracle User Group in London. Interestingly, there was an Applications Transformation stream of sessions and content that included Fusion Applications with all the latest in the Oracle Applications evolution, as always focused around the three tenets of social, mobile, and cloud. Read more in Richard's post-event write-up. Other teams around Oracle have also been busy. Angelo from the Platform Technical Services group has done quite a bit of work using web services with Fusion SaaS and has published many interesting findings on his blog; it's definitely recommended reading if you are working on any related integration projects. The middleware-for-applications group has built a new tool called "AppAdvantage" offering an online assessment of your use of Fusion Middleware technologies with Oracle Applications. As the popularity of integrating cloud applications with on-premises systems continues to grow, leveraging existing middleware technologies (and licenses) to support the integration solution is likely to be of paramount importance. Similarly, the "Build Enterprise Application Extensions with Ease" section of the related webpage has AppsUX director Killan Evers speaking about customization using the composer tools. Both are useful resources for those just getting started with a move to Fusion Applications.
The Oracle A-Team, specialists in middleware technical architecture, always publish superb content via their 'chronicles' site, now with a substantial amount specifically related to Fusion Applications. Click on the Fusion Applications menu on the top right of their homepage to see more. Last month of particular note was an article on customizing the timeout pop-up message that shows to inactive users, providing design-time insight and easy-to-follow steps. Finally if you're looking at using Oracle Middleware and Cloud to tailor and extend your applications then you may also be interested in this new blog post on the roadmap for Oracle SOA and the latest on-demand Cloud Development webcast.

    Read the article

  • Handling "related" work within a single agile work item

    - by Tesserex
    I'm on a project team of 4 devs, myself included. We've been having a long discussion on how to handle extra work that comes up in the course of a single work item. This extra work is usually things that are slightly related to the task, but not always necessary to accomplish the goal of the item (that may be an opinion). Examples include, but are not limited to: refactoring the code changed by the work item; refactoring code neighboring the code changed by the item; re-architecting the larger code area around the ticket (for example, if an item has you changing a single function, you realize the entire class could now be redone to better accommodate this change); and improving the UI on a form you just modified. When this extra work is small we don't mind. The problem is when this extra work causes a substantial extension of the item beyond the original feature point estimation. Sometimes a 5 point item will actually take 13 points of time. In one case we had a 13 point item that in retrospect could have been 80 points or more. There are two options going around in our discussion for how to handle this. The first is to accept the extra work in the same work item and write it off as a mis-estimation. Arguments for this have included: we plan for "padding" at the end of the sprint to account for this sort of thing; always leave the code in better shape than you found it, and don't check in half-assed work; if we leave refactoring for later, it's hard to schedule and may never get done; and you are in the best mental "context" to handle this work now, since you're waist deep in the code already, so it's better to get it out of the way now and be more efficient than to lose that context when you come back later. The second is to draw a line for the current work item and say that the extra work goes into a separate ticket. Arguments for this include: having a separate ticket allows for a new estimation, so we aren't lying to ourselves about how many points things really are, or having to admit that all of our estimations are terrible; the sprint "padding" is meant for unexpected technical challenges that are direct barriers to completing the ticket requirements, not for side items that are just "nice-to-haves"; if you want to schedule refactoring, just put it at the top of the backlog; there is no way for us to properly account for this stuff in an estimation, since it seems somewhat arbitrary when it comes up (a code reviewer might say "those UI controls (which you actually didn't modify in this work item) are a bit confusing, can you fix that too?", which is like an hour, but they might say "Well, if this control now inherits from the same base class as the others, why don't you move all of this (hundreds of lines of) code into the base and rewire all this stuff, the cascading changes, etc.?", and that takes a week); it "contaminates the crime scene" by adding unrelated work into the ticket, making our original feature point estimates meaningless; and in some cases the extra work postpones a check-in, causing blocking between devs. Some of us are now saying that we should decide on some cut-off, like if the additional stuff is less than 2 FP it goes in the same ticket, and if it's more, make it a new ticket. Since we're only a few months into using Agile, what's the opinion of all the more seasoned Agile veterans around here on how to handle this?

    Read the article

  • Hell and Diplomacy: Notes on Software Integration

    - by ericajanine
    Well, I'm getting cabin fever and short-timer's ADD all at the same time. I haven't been anywhere outside of my greater city area in FOREVER and I'm only days away from my vacation. I have brainlock because the last few days have been non-stop defusing amazingly hostile conversations. I think I'll write about that. So then, I "do" software. At the end of the day, software is pretty straightforward. Software is that thing we love and try to make do things not currently in play, in existence. If a process around getting software to do something is broken (like most actually are), then we should acknowledge it and move on. We are professionals. We are helpful beyond the normal call of duty. We live and breathe making life better for the apps that are active in the world. But above all--the shocker: We are SERVICE. In a service frame of mind, all perspectives shift to what is best overall for system stabilization vs. what must be in production to meet business objectives. It doesn't matter how much you like or dislike the creator of said software. It doesn't matter what time you went to bed last night or if your mate appreciates your Death March attitude. Getting a product in, and when, is an age-old dilemma in a software environment where more than, say, 3 people are involved. We know this. Taking a servant's perspective eliminates the drama surrounding what a group of half-baked developers forgot to tell each other in the 11th hour about their trampling changes before check-in. We, my counterparts in society, get paid to deal with that drama. I get paid to defuse that drama and make everything integrate as smoothly as possible. At the end of the day, attacking someone over a minor detail not only makes things worse, it's against the whole point of our real existence. Being in support or software integration means you are to keep your eyes on the end game. That end game? It's making a solution work for all stakeholders, not just you or your immediate superior. Development and technology groups exist because business groups need them to exist and solve their issues. The end game? Doing what is best for those business groups, ultimately. Period. Note: that does not mean you let your business users solely dictate when and if something gets changed in an environment you ultimately own. That's just crazy. Software and its environments are legitimately owned by those who manage them directly, no matter how important a business group believes it is to the existence of mankind. So, you both negotiate the terms of changing that environment and only do so upon that negotiation. Diplomacy is in order. So, to finish my thoughts: if you have no ability to keep your mouth shut in a situation where a business or development group truly needs your help to make something work, even beyond a deadline, find another profession. Beating someone up verbally because they screw up means a service attitude is not at the forefront of your motivation for doing what is ultimately their work and their product. Software, especially integration, requires a strong will and a soft touch to keep it on track. Not a hammer covered in broken glass.

    Read the article

  • Creating a Corporate Data Hub

    - by BuckWoody
    The Windows Azure Marketplace has a rich assortment of data and software offerings for you to use – a type of Software as a Service (SaaS) for IT workers, not necessarily for end-users. Among those offerings is the "Data Hub" – a codename for a project that ironically actually does what the codename says. In many of our organizations, we have multiple data quality issues. Finding data is one problem, but finding it just once is often a bigger problem. Lots of departments and even individuals have stored the same data more than once, and in some cases, made changes to one of the copies. It's difficult to know which location or version of the data is authoritative. Then there's the problem of accessing the data. It's fairly straightforward to publish a database, share, or other location internally to store the data. But then you have to figure out who owns it, how it is controlled, and pass out the various connection strings to those who want to use it. And then you need to figure out how to let folks access the internal data externally – bringing up all kinds of security issues. Finally, in many cases our user community wants us to combine data from the internal sources with external data, bringing the security, connection-string, and exploration issues up all over again. Enter the Data Hub. This is an online offering where you assign an administrator and data stewards. You import the data into the service, and it's available to you – and only you and your organization, if you wish. The basic steps for this service are to set up the portal for your company, assign administrators and permissions, and then assign data areas and import data into them. From there you make them discoverable, and then you have multiple options by which you or your users can access that data. You're then able, if you wish, to combine that data with other data in one location. So how does all that work? What about security? Is it really that easy? And can you really move the data definition off to the Subject Matter Experts (SMEs) who know the particular data stack better than the IT team does? Well, nothing good is easy – but using the Data Hub is actually pretty simple. I'll give you a link in a moment where you can sign up and try this yourself. Once you sign up, you assign an administrator. From there you'll create data areas, and then use a simple interface to bring the data in. All of this is done in a portal interface – nothing to install, configure, update or manage. After the data is entered, and you've assigned metadata to describe it, your users have multiple options to access it. They can simply use the portal – which actually has powerful visualizations you can use on any platform, even mobile phones or tablets. Your users can also hit the data with Excel – which gives them ultimate flexibility for display, all while using an authoritative, single reference for the data. Since the service is online, they can do this wherever they are – given the proper authentication and permissions. You can also hit the service with simple API calls, like this one from C#: http://msdn.microsoft.com/en-us/library/hh921924  You can make HTTP calls instead of code, and the data can even be exposed as an OData Feed. As you can see, there are a lot of options. You can check out the offering here: http://www.microsoft.com/en-us/sqlazurelabs/labs/data-hub.aspx and you can read the documentation here: http://msdn.microsoft.com/en-us/library/hh921938
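
    Since the data can be exposed as an OData feed, a plain HTTP call is often the quickest way to try out access. Here is a rough sketch only – the feed URL, table name, and credential handling are illustrative placeholders rather than the service's actual endpoints – using PowerShell 3.0 or later:

        # Hypothetical feed address; in practice you would copy the OData URL shown for your data set in the portal.
        $feedUrl = "https://example-datahub.cloudapp.net/odata/Vendors"
        # Prompt for the account that was granted access to the data area.
        $cred = Get-Credential
        # Invoke-RestMethod parses the OData/Atom response into PowerShell objects.
        $rows = Invoke-RestMethod -Uri $feedUrl -Credential $cred
        $rows | Select-Object -First 10

    The same request works from any HTTP client, or from code such as the C# sample linked above.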

    Read the article

  • PowerShell: Read Excel to Create Inserts

    - by BuckWoody
    I’m writing a series of articles on how to migrate “departmental” data into SQL Server. I also hold workshops on the entire process – from discovering that the data exists to the modeling process and then how to design the Extract, Transform and Load (ETL) process. Finally I write about (and teach) a few methods on actually moving the data. One of those options is to use PowerShell. There are a lot of ways even with that choice, but the one I show is to read two columns from the spreadsheet and output statements that would insert the data using a stored procedure. Of course, you could re-write this as INSERT statements, out to a text file for bcp, or even use a database connection in the script to move the data directly from Excel into SQL Server. This snippet won’t run on your system, of course – it assumes a Microsoft Office Excel 2007 spreadsheet located at c:\temp called VendorList.xlsx. It looks for a tab in that spreadsheet called Vendors. The statement that does the writing just uses one column: Vendor Code. Here’s the breakdown of what I’m doing: In the first block, I connect to Microsoft Office Excel. That connection string is specific to Excel 2007, so if you need a different version you’ll need to look that up. In the second block I set up a selection from the entire spreadsheet based on that tab. Note that if you’re only after certain data you shouldn’t get the whole spreadsheet – that’s just good practice. In the next block I create the text I want, inserting the Vendor Code field as I go. Finally I close the connection. Enjoy!

        $ExcelConnection = New-Object -com "ADODB.Connection"
        $ExcelFile = "c:\temp\VendorList.xlsx"
        $ExcelConnection.Open("Provider=Microsoft.ACE.OLEDB.12.0;Data Source=$ExcelFile;Extended Properties=Excel 12.0;")
        $strQuery = "Select * from [Vendors$]"
        $ExcelRecordSet = $ExcelConnection.Execute($strQuery)
        do {
            Write-Host "EXEC sp_InsertVendors '" $ExcelRecordSet.Fields.Item("Vendor Code").Value "'"
            $ExcelRecordSet.MoveNext()
        } Until ($ExcelRecordSet.EOF)
        $ExcelConnection.Close()

    Script Disclaimer, for people who need to be told this sort of thing: Never trust any script, including those that you find here, until you understand exactly what it does and how it will act on your systems. Always check the script on a test system or Virtual Machine, not a production system. All scripts on this site are performed by a professional stunt driver on a closed course. Your mileage may vary. Void where prohibited. Offer good for a limited time only. Keep out of reach of small children. Do not operate heavy machinery while using this script. If you experience blurry vision, indigestion or diarrhea during the operation of this script, see a physician immediately.
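
    As a rough sketch of that last option – using a database connection to push the rows straight into SQL Server – the loop above could execute a parameterized INSERT instead of printing EXEC statements. This assumes the $ExcelRecordSet opened in the snippet above; the server, database, table, and column names are made up for illustration, and the same disclaimer applies:

        $SqlConnection = New-Object System.Data.SqlClient.SqlConnection
        $SqlConnection.ConnectionString = "Server=MYSERVER;Database=VendorDB;Integrated Security=True"
        $SqlConnection.Open()
        do {
            $SqlCommand = $SqlConnection.CreateCommand()
            # Parameterized to avoid quoting problems with whatever happens to be in the spreadsheet.
            $SqlCommand.CommandText = "INSERT INTO dbo.Vendor (VendorCode) VALUES (@VendorCode)"
            [void]$SqlCommand.Parameters.AddWithValue("@VendorCode", $ExcelRecordSet.Fields.Item("Vendor Code").Value)
            [void]$SqlCommand.ExecuteNonQuery()
            $ExcelRecordSet.MoveNext()
        } Until ($ExcelRecordSet.EOF)
        $SqlConnection.Close()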

    Read the article

  • Oracle Retail Mobile Point-of-Service

    - by David Dorf
    When most people discuss mobile in retail, they immediately go to shopping applications. While I agree the consumer side of mobile is huge, I believe it's also important to arm store associates with mobile tools. There are around a dozen major roll-outs of mobile POS to chain retailers, and all have been successful. This does not, however, signal the demise of traditional registers. Retailers will adopt mobile POS slowly and reduce the number of fixed registers over time, but there's likely to be a combination of both for the foreseeable future. Even Apple retains at least one fixed register in every store; you just have to know where to look. The business benefits of mobile POS are pretty straightforward: 1. Faster checkout. Walmart's CFO recently reported that for every second they shave off the average transaction time, they can potentially save $12M a year in labor. I think it's more likely that labor will be redeployed to enhance the customer experience. 2. Smarter associates. The sales associates on the floor need the same access to information that consumers have, if not more. They need ready access to product details, reviews, inventory, etc. to meet consumer expectations. In a recent study, 40% of consumers said a savvy store associate can impact their final product selection more than a website. 3. Lower costs. Mobile POS hardware (iPod touch + sled) costs about a fifth of a fixed register, not to mention the reclaimed space that can be used for product displays. But almost all mobile POS solutions can claim those benefits equally. Where there's differentiation is on the technical side. Oracle recently announced availability of the Oracle Retail Mobile Point-of-Service, and it has three big technology advantages in the market: 1. Portable. We used a popular open-source component called PhoneGap that abstracts the app from the underlying OS and hardware so that iOS, Android, and other platforms can be supported. Further, we used Web technologies such as HTML5 and JavaScript, which are commonly known by many programmers, as opposed to Objective-C, for which developers are more difficult to find. The screen can adjust to different form factors and sizes, just like you see with browsers. In the future when a new, zippy device gets released, retailers will have the option to move to that device more easily than if they used a native app. 2. Flexible. Our Mobile POS is free with the Oracle Retail Point-of-Service product. Retailers can use any combination of fixed and mobile registers, and those ratios can change as required. Perhaps start with 1 mobile and 4 fixed per store, then transition over time to 4 mobile and 1 fixed without any additional software licenses. Our scalable solution supports lots of combinations. 3. Consistent. Because our Mobile POS is fully integrated with our traditional POS, the same business logic is reused. Third-party mobile POS solutions often handle pricing, promotions, and tax calculations separately, leading to possible inconsistencies within the store. That won't happen with Oracle's solution. For many retailers, mobile POS can lower costs, improve customer service, and generally enhance a consumer's in-store experience. Apple led the way, but lots of other retailers are discovering the many benefits of adding mobile capabilities in their stores. Just be sure to examine both the business and technology benefits so you get the most value from your solution for the longest period of time.

    Read the article

< Previous Page | 373 374 375 376 377 378 379 380 381 382 383 384  | Next Page >