Search Results

Search found 26285 results on 1052 pages for 'grant back'.


  • Retrieving upcoming calendar events from a Google Calendar

    - by brian_ritchie
    Google has a great cloud-based calendar service that is part of their Gmail product. Besides using it as a personal calendar, you can use it to store events for display on your web site. The calendar is accessible through Google's GData API, for which they provide a C# SDK. Here's some code to retrieve the upcoming entries from the calendar:

        public class CalendarEvent
        {
            public string Title { get; set; }
            public DateTime StartTime { get; set; }
        }

        public class CalendarHelper
        {
            public static CalendarEvent[] GetUpcomingCalendarEvents(int numberofEvents)
            {
                CalendarService service = new CalendarService("youraccount");
                EventQuery query = new EventQuery();
                query.Uri = new Uri(
                    "http://www.google.com/calendar/feeds/userid/public/full");
                query.FutureEvents = true;
                query.SingleEvents = true;
                query.SortOrder = CalendarSortOrder.ascending;
                query.NumberToRetrieve = numberofEvents;
                query.ExtraParameters = "orderby=starttime";
                var events = service.Query(query);
                return (from e in events.Entries
                        select new CalendarEvent()
                        {
                            StartTime = (e as EventEntry).Times[0].StartTime,
                            Title = e.Title.Text
                        }).ToArray();
            }
        }

    There are a few special "tricks" to make this work: the "SingleEvents" flag will flatten out recurring events; "FutureEvents", "SortOrder", and the "orderby" parameter will get the upcoming events; and "NumberToRetrieve" will limit the amount coming back. I then use LINQ to Objects to put the results into my own DTO for use by my model. It is always a good idea to place data into your own DTO for use within your MVC model. This protects the rest of your code from changes to the underlying calendar source or API.
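    As a quick illustration of the DTO point, here is a minimal sketch of how the helper might be consumed from an ASP.NET MVC controller; the controller and action names are hypothetical and not part of the original post.

        // Hypothetical controller showing one way to hand the CalendarEvent DTOs to a view.
        // Only the DTO crosses this boundary, so the view never sees the GData types.
        using System.Web.Mvc;

        public class HomeController : Controller
        {
            public ActionResult Upcoming()
            {
                // Pull the next five events from the shared calendar.
                CalendarEvent[] events = CalendarHelper.GetUpcomingCalendarEvents(5);
                return View(events);
            }
        }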

    Read the article

  • Developing add-ins for multiple versions of Office

    - by Pranav
    Do you want to develop an add-in targeting multiple versions of Office? Do you have basic questions like "Is it possible?" and "How do I do it?" Then you've come to the right place. A few months back, I got a requirement to develop add-ins for Outlook 2003 and Outlook 2007. The functionality for both versions is the same. A doubt struck me: when the functionality is the same, why would I develop two add-ins separately? Why don't I make a single build for both versions of Office? So I started searching for techniques to develop add-ins that work in both (2003 and 2007) and read many articles written by VSTO experts in their blogs, the official VSTO blog, MSDN, forums and what not. Misha says: Theoretically, you can develop an add-in for multiple versions of Microsoft Office by catering to the lowest common denominator. This means if you use an Excel 2003 add-in template in Visual Studio 2008, you would be able to develop and debug this with Excel 2007. However, if you try this, you may meet these error messages: "You cannot debug or run this project, because the required version of the Microsoft Office application is not installed.", followed by "Unable to start debugging." You can develop an Office 2003 add-in on a system where Office 2007 is installed. The following procedure demonstrates how to update your Visual Studio debugging options to use Microsoft Outlook 2007 to debug an add-in targeting Microsoft Outlook 2003:

    1. On the Project menu, click ProjectName Properties.
    2. Click the Debug tab.
    3. In the Start Action pane, click the "Start external program" radio button.
    4. Click the file browser button and navigate to %ProgramFiles%\Microsoft Office\Office12.
    5. Choose Outlook.exe and click Open.
    6. Press F5 to debug your add-in.

    For more details, go through this article in Misha Shneerson's blog. There are also some tips and tricks to be followed, and things one needs to take care of while developing add-ins targeting multiple versions of Office, in Andrew's blog. Have a look at this too; you might find it interesting and useful. http://blogs.msdn.com/andreww/archive/2007/06/15/can-you-build-one-add-in-for-multiple-versions-of-office.aspx Here is an MSDN article on Running Solutions in Different Versions of Microsoft Office: http://msdn.microsoft.com/en-us/library/bb772080.aspx Hope this helps!
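    To illustrate the single-build idea, here is a minimal, hypothetical sketch (not from the article) of branching add-in behavior on the host Outlook version at runtime; it assumes the add-in keeps a reference to the Outlook Application object it receives at startup.

        // Illustrative only: adjust add-in behavior based on the running Outlook version.
        // Assumes "app" is the Outlook Application object handed to the add-in at startup.
        private void ConfigureForHostVersion(Microsoft.Office.Interop.Outlook.Application app)
        {
            // Application.Version starts with "11." for Outlook 2003 and "12." for Outlook 2007.
            string version = app.Version;

            if (version.StartsWith("12."))
            {
                // Outlook 2007: features from the 12.0 object model are available.
            }
            else
            {
                // Outlook 2003 (or older): stick to the lowest common denominator.
            }
        }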

    Read the article

  • ArchBeat Link-o-Rama for July 3, 2013

    - by Bob Rhubart
    Industrial SOA Chapter 5: Enterprise Service Bus
    Enterprise Service Bus, the fifth and latest addition to the Industrial SOA article series, answers some of the most important questions surrounding the use of an ESB.
    Industrial SOA Chapter 4: SOA Maturity
    The fourth article in the Industrial SOA series, SOA Maturity offers "an exploration of the fundamentals of applying a factory approach to modern service-oriented software development."
    Using the Exalytics Summary Advisor and Oracle BI Apps 7.9.6.4 | Mark Rittman
    Oracle ACE Director Mark Rittman's post revisits "the use of the Summary Advisor, with my BI Apps installation bumped-up to version 7.9.6.4, and the Exalytics environment patched up to 11.1.1.6.9, the latest patch release we've applied to that environment."
    Part 1 - 12c Database and WLS - Overview | Steve Felts
    Steve Felts shares a handy table that "maps the Oracle 12c Database features supported with various combinations of currently available WLS releases, 11g and 12c Drivers, and 11g and 12c Databases."
    Developers WebCast: Deploy Highly-Available Custom Services on Your Data Grid Products - July 11
    Oracle Coherence Sr. Architect Brian Oliver hosts this free July 11 webcast for developers to show you how to "create and deploy customized, highly-available services for your data grid, and how real-time data processing will allow you to provide unmatched end-user experiences."
    A checklist for OIM go live | Daniel Gralewski
    FMW A-Team solution architect Daniel Gralewski's list is intended to complement the Oracle Identity Manager documentation. His post "provides tips on a few topics that are not part of the documentation."
    How Many ODI Master Repositories Should We Have? | Christophe Dupupet
    FMW solution architect Christophe Dupupet provides a simple approach, along with best practices, for the architecture of ODI repositories in a corporate environment.
    Distinguish EA from enterprise wide solution architecture | John Wu
    My buddy Tony Meyer, who did a great presentation recently at the Cleveland-area Enterprise Architect / Solution Architect Meet-up, recommends this Toolbox article by John Wu.
    YouTube: Oracle Fusion Applications Developer Tips
    If you work with Fusion Applications you'll want to check out the tips and tricks for building extensions, customizations, and integrations now available on the new Oracle Fusion Middleware Developer Relations YouTube channel.
    The CX Factor: Wooing and wowing customers in the digital age
    "There was a time when 'customer experience' was limited to what happened to you when you walked into a store, restaurant, or other place of business or when you called a business on the telephone. But that was back when you could still smoke on airplanes."
    Thought for the Day
    "If you had to identify, in one word, the reason why the human race has not achieved, and never will achieve, its full potential, that word would be 'meetings.'" — Dave Barry (Born July 3, 1947) Source: brainyquote.com

    Read the article

  • Upgrading to Ubuntu 11.04 failed

    - by Rupert
    Today Ubuntu asked me to upgrade to 11.04. The installation went completely fine until right at the end, when the following packages failed: install-info, ubuntu-standard. The installer hung so I had to shut it down manually. Ubuntu still works fine but it says that the upgrade didn't work properly so I am hesitant to restart it until I have resolved the problem in case I can't get back in. I am running Ubuntu inside the latest version of Virtual Box and was previously running version 10.10. I have tried installing install-info manually with apt-get but I get the following error:

        Unhandled exception: [#<SystemStackError: stack level too deep>]
        /usr/local/ruby/lib/ruby/gems/1.9.1/gems/ZenTest-4.5.0/lib/autotest.rb:842:in `block in <class:Autotest>': undefined method `backtrace' for [#<SystemStackError: stack level too deep>]:Array (NoMethodError)
            from /usr/local/ruby/lib/ruby/gems/1.9.1/gems/ZenTest-4.5.0/lib/autotest.rb:828:in `[]'
            from /usr/local/ruby/lib/ruby/gems/1.9.1/gems/ZenTest-4.5.0/lib/autotest.rb:828:in `block in hook'
            from /usr/local/ruby/lib/ruby/gems/1.9.1/gems/ZenTest-4.5.0/lib/autotest.rb:828:in `each'
            from /usr/local/ruby/lib/ruby/gems/1.9.1/gems/ZenTest-4.5.0/lib/autotest.rb:828:in `any?'
            from /usr/local/ruby/lib/ruby/gems/1.9.1/gems/ZenTest-4.5.0/lib/autotest.rb:828:in `hook'
            from /usr/local/ruby/lib/ruby/gems/1.9.1/gems/ZenTest-4.5.0/lib/autotest.rb:344:in `rescue in run'
            from /usr/local/ruby/lib/ruby/gems/1.9.1/gems/ZenTest-4.5.0/lib/autotest.rb:320:in `run'
            from /usr/local/ruby/lib/ruby/gems/1.9.1/gems/ZenTest-4.5.0/lib/autotest.rb:241:in `run'
            from /usr/local/ruby/lib/ruby/gems/1.9.1/gems/ZenTest-4.5.0/bin/autotest:6:in `<top (required)>'
            from /usr/local/ruby/bin/autotest:19:in `load'
            from /usr/local/ruby/bin/autotest:19:in `<main>'
        dpkg: error processing install-info (--configure):
         subprocess installed post-installation script returned error exit status 1
        dpkg: dependency problems prevent configuration of info:
         info depends on install-info; however:
          Package install-info is not configured yet.
        dpkg: error processing info (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of ubuntu-standard:
         ubuntu-standard depends on info; however:
          Package info is not configured yet.
        dpkg: error processing ubuntu-standard (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates it's a follow-up error from a previous failure.
        No apport report written because the error message indicates it's a follow-up error from a previous failure.
        Errors were encountered while processing:
         install-info
         info
         ubuntu-standard
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Any ideas on what I should try next?

    Read the article

  • 0xC0017011 and other error messages - what is the error message text?

    Recently there was a bug raised against BIDS Helper which originated in my Expression Editor control. Thankfully the person that raised it kindly included a screenshot, so I had the error code (HRESULT 0xC0017011) and a stack trace that pointed the finger firmly at my control, but no error message text. The code itself looked fine, so I searched on the error code but got no results. I'd expected to get a hit from Books Online with the Integration Services Error and Message Reference topic at the very least, but no joy. There is however a more accurate and definitive reference, namely the header file that defines all these codes, dtsmsg.h, which you can find at:

        C:\Program Files (x86)\Microsoft SQL Server\110\SDK\Include\dtsmsg.h

    Looking the code up in the header file gave me a much more useful error message:

        ////////////////////////////////////////////////////////////////////////////
        // The parameter is sensitive
        //
        // MessageId: DTS_E_SENSITIVEPARAMVALUENOTALLOWED
        //
        // MessageText:
        //
        // Accessing value of the parameter variable for the sensitive parameter "%1!s!" is not allowed. Verify that the variable is used properly and that it protects the sensitive information.
        //
        #define DTS_E_SENSITIVEPARAMVALUENOTALLOWED ((HRESULT)0xC0017011L)

    Unfortunately I'd forgotten all about this. By the time I had remembered about it, the person who raised the issue had managed to narrow it down to something to do with having a sensitive parameter. Putting that together with the error message I'd finally found, a quick poke around in the code and I found the new GetSensitiveValue method, which seemed to do the trick. The HResult fields are also listed online, but that page only shows the short error message, and it doesn't include the all-important HRESULT value itself. So let this be a lesson to you (and me!): if you need to check an SSIS error, go straight to the horse's mouth - dtsmsg.h. This is particularly true when working with early builds or CTP releases, when we expect the documentation to be a bit behind. There is also a programmatic approach to getting better SSIS error messages. I should take another look at the error handling in the control, or the way it is hosted in BIDS Helper. I suspect that if I use an implementation of Microsoft.SqlServer.Dts.Runtime.Wrapper.IDTSInfoEvents100 I could catch the error itself and get the full error message text, which I could then report back. This would obviously be a better user experience and also make it easier to diagnose any issues like this in the future. See ExprssionEvaluator.cs for an example of this in use in the Expression Editor control.
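    To make the "go straight to dtsmsg.h" advice a little more concrete, here is a small, hypothetical helper (not part of BIDS Helper) that scans the header for a given HRESULT and prints the comment block above the matching #define; the path and the trailing "L" suffix in the search pattern are assumptions based on the excerpt above.

        // Hypothetical lookup tool: find the #define for an HRESULT in dtsmsg.h and
        // print the descriptive comment block that precedes it. Adjust the path for
        // your SQL Server SDK version.
        using System;
        using System.IO;

        class DtsMsgLookup
        {
            static void Main()
            {
                string header = @"C:\Program Files (x86)\Microsoft SQL Server\110\SDK\Include\dtsmsg.h";
                string code = "0xC0017011";
                string[] lines = File.ReadAllLines(header);

                for (int i = 0; i < lines.Length; i++)
                {
                    if (lines[i].TrimStart().StartsWith("#define") && lines[i].Contains(code + "L"))
                    {
                        // Walk back over the preceding comment block, then print it and the #define.
                        int start = i;
                        while (start > 0 && lines[start - 1].TrimStart().StartsWith("//"))
                            start--;
                        for (int j = start; j <= i; j++)
                            Console.WriteLine(lines[j]);
                        return;
                    }
                }
                Console.WriteLine("Code {0} not found in {1}", code, header);
            }
        }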

    Read the article

  • Agile PLM 9.3 Service Pack 2 (SP2 or 9.3.0.2) is released along with AUT 1.6.2.0 and AutoVue 20 for Agile PLM

    - by Shane Goodwin
    Oracle released Agile PLM 9.3 SP2 on June 14 and the Agile installer for AutoVue 20 for Agile PLM on April 30. Also available are the new versions of AUT and Averify - 1.6.3 for both tools. 9.3 SP2 is a combined English and NLS release for use on any version of 9.3.0. SP2 contains many bug fixes and rolls up several Hot Fixes - please review the Readme for all the details. In addition, this release also addresses some scalability issues when working with very large Exports and Reports. When exporting very large BOMs, the export module will now release objects more efficiently to reduce the amount of memory consumed on the Application Server. Administrators can also control the maximum row limits for Users versus system processes, like ACS. Several out-of-the-box BOM reports have also been changed to use a new row limit option. The combination of all these changes will provide more stability on the application server for customers managing very large datasets. 9.3 SP2 also adds support for Oracle Database 11gR2 for Windows, Oracle Internet Directory (OID) and Oracle Access Manager (OAM). Please note that currently the Variant Patch is not intended to be released for SP2. Customers running the Variant Patch should remain on 9.3.0.0 or 9.3.0.1. Back in April, we also released the AutoVue 20 for Agile PLM installer. AutoVue 20 has many new features which will help Agile PLM customers. Large multi-page Word documents and 2D CAD documents will open more quickly to the first page or first rendition. Memory usage is lower when working with 3D models. There are many new formats supported for MCAD, 2D CAD, and EDA. AutoVue 20 is immediately available for Windows and Linux platforms. The new software can be found in E-Delivery or Metalink / Oracle Support:

    - AutoVue 20 for Agile PLM is on E-Delivery with part number B58963-01
    - Oracle Agile PLM 9.3 Service Pack 2 (9.3.0.2): My Oracle Support Patch ID 9782736
    - AVERIFY 1.6.3: My Oracle Support Patch ID 9791892
    - AUT 1.6.3: My Oracle Support Patch ID 9791908
    - Agile PLM 9.3 SP2 Documentation is available on the OTN Agile Documentation Page

    Read the article

  • Stagnating in programming

    - by Coder
    Time after time this question has come up in my mind, but up until today I hadn't thought about it much. I have been programming for maybe around 8 years now, and for the last two years it seems I'm not as keen to pick up new technologies anymore. Maybe that's burnout or something, but I'd say it's experience and what I like that's stopping me from running after the latest and greatest. I'm a C++ developer; by this I mean, I love close-to-the-metal programming. I have no problems tracing problems through assembly, using tools like WinDbg or HexView. When I use constructs, I think about how they are realized underneath, how the bits are set and unset under the hood. I love battling with complex threading problems and doing everything the hardcore way, even by hand if the regular solutions seem half baked. But I also love the C++0x stuff, and use it a lot. And all C++ code, as long as it's not cumbersome compared to the C counterparts; sometimes I also fall back to a sort of "Super C" if the C++ way is ugly. And then there are all the other developers who seem to be way more forward looking: .NET 4.0, MVC, WPF, all those Microsoft X#s, LINQ languages, XML and XSLT, mobile devices and so on. I have done a considerable amount of .NET, SQL, and ASPX programming, but the further I go, the less I want to try those technologies. Is that bad? Almost every day I hear people saying that managed code is the only way forward, WPF is the way to go. I hear that C++ is godawful, and you can't code anything in it that's somewhat stable. But I don't buy it. With the experience I have, and the knowledge of how native code is compiled and executes, I can say I find it extremely rare that C++ code is unstable, or leaks, or causes crashes that take more than 30 seconds to identify and fix. And to tell the truth, I've seen enough problems with other "cool" languages that I'd say C++ is even more stable and production proof than the safe languages, at least for me. The only thing that scares me in C++ is new frameworks; I don't trust them, and I use them extra sparingly. STL - yes, ATL - very sparingly, everything else... Well, not very keen on it. Most huge problems I've run into were all related to frameworks, not the language itself. Some overridden operator here, bad hierarchy there, poor class design here, mystical castings there. Other than that, C/C++ (yes, I use them together) still seems a very controlled and stable way to develop applications. Am I stagnating? Should I switch professions, or force myself into all that marketing hype? Are there more developers who feel the same way?

    Read the article

  • View the Real Links Behind Shortened URLs in Chrome

    - by Asian Angel
    When you encounter shortened URLs there is always that worry in the back of your mind about where they really lead. Now you can get a "sneak peek" at the real links behind those URLs with the View Thru extension for Google Chrome. The URL shortening services officially supported at this time are: bit.ly, cli.gs, ff.im, goo.gl, is.gd, nyti.ms, ow.ly, post.ly, su.pr, & tinyurl.com.

    Before: When you encounter a shortened URL you are pretty much on your own in deciding whether to trust that link or not. It would really be nice if you could just hover your mouse over those links and know where they will lead ahead of time.

    After: Once you have the extension installed you are ready to access that link viewing goodness. Please note that you will need to reload any pages that were open prior to installing the extension. For our first example we chose a shortened URL from "bit.ly". As you can see, the entire link behind the shortened URL is displayed very nicely... no hidden surprises there! Note: There are no options to worry with for the extension. Another perfect result for the "goo.gl" URL shown below. View Thru will certainly remove a lot of the stress related to clicking on shortened URLs.

    Bonus Find: Just out of curiosity we looked for a shortened URL not listed as being officially supported at this time. We found one with the "http://nyti.ms/" domain and View Thru showed the link perfectly... so be sure to give it a try on other services too.

    Conclusion: If you worry about where a shortened URL will really lead you, then the View Thru extension can help alleviate that stress.

    Links: Download the View Thru extension (Google Chrome Extensions)

    Read the article

  • Finding the Twins when Implementing Catmull-Clark subdivision using Half-Edge mesh [migrated]

    - by Ailurus
    Note: The description became a little longer than expected. Do you know a readable implementation of this algorithm using this mesh? Please let me know! I'm trying to implement Catmull-Clark subdivision using Matlab (because later on the results have to be compared with some other stuff already implemented in Matlab). The first try was with a Vertex-Face mesh; the algorithm works but it is of course not very efficient (since you need neighbouring information for edges and faces). Therefore, I'm now using a Half-Edge mesh (info), see also the paper of Lutz Kettner. Wikipedia link to the idea behind Catmull-Clark SDV: Wiki. My problem lies in finding the Twin HalfEdges; I'm just not sure how to do this. Below I'm describing my thoughts on the implementation, trying to keep it concise.

    Half-Edge mesh (using indices to Vertices/HalfEdges/Faces):
    Vertex (x, y, z, Outgoing_HalfEdge)
    HalfEdge (HeadVertex (or TailVertex, which one should I use?), Next, Face, Twin)
    Face (HalfEdge)

    To keep it simple for now, assume that every face is a quadrilateral. The actual mesh is a list of Vertices, HalfEdges and Faces. The new mesh will consist of NewVertices, NewHalfEdges and NewFaces, like this (note: Number_... is the number of ...):
    NumberNewVertices: Number_Faces + Number_HalfEdges/2 + Number_Vertices
    NumberNewHalfEdges: 4 * 4 * Number_Faces
    NumberNewFaces: 4 * Number_Faces

    Catmull-Clark:
    Find the FacePoint (centroid) of each Face: just average the x, y, z values of the vertices, save as a NewVertex.
    Find the EdgePoint of each HalfEdge: to prevent duplicates (each HalfEdge has a Twin which would result in the same EdgePoint), only calculate EdgePoints of the HalfEdge which has the lowest index of the pair.
    Update the old Vertices.

    Ok, now all the new Vertices are calculated (however, their Outgoing_HalfEdge is still unknown). The next step is to save the new HalfEdges and Faces. This is the part causing me problems! Loop through each old Face; there are 4 new Faces to be created (because of the quadrilateral assumption). First create the 4 new HalfEdges per new Face, starting at the FacePoint and going to the EdgePoint; next a new HalfEdge from the EdgePoint to an updated Vertex; another new one from the updated Vertex to the next EdgePoint; finally the fourth new HalfEdge from that EdgePoint back to the FacePoint. The HeadVertex of each new HalfEdge is known, the Next HalfEdge too. The Face is also known (since it is the new face you're creating!). Only the Twin HalfEdge is unknown; how should I know this? By the way, while looping through the Vertices of the new Face, assign the Outgoing_HalfEdge to the Vertices. This is probably the place to find out which HalfEdge is the Twin. Finally, after the 4 new HalfEdges are created, save the Face with the index of the last newly created HalfEdge. I hope this is clear; if needed I can post my (obviously not-yet-finished) Matlab code.
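    For reference, one common way to pair twins in a half-edge structure (a sketch, not from the original post, and assuming flat index arrays with hypothetical TailVertex/HeadVertex accessors and a Twin array) is to key each half-edge on its ordered (tail, head) vertex pair and match it against the opposite ordering:

        // Sketch of twin pairing for a half-edge mesh. Assumes TailVertex(he) and
        // HeadVertex(he) return vertex indices and Twin[] stores the twin index.
        var openEdges = new Dictionary<(int tail, int head), int>();

        for (int he = 0; he < halfEdgeCount; he++)
        {
            int tail = TailVertex(he);
            int head = HeadVertex(he);

            // If the opposite orientation was already seen, the two half-edges are twins.
            if (openEdges.TryGetValue((head, tail), out int twin))
            {
                Twin[he] = twin;
                Twin[twin] = he;
                openEdges.Remove((head, tail));
            }
            else
            {
                openEdges.Add((tail, head), he);
            }
        }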

    Read the article

  • Looking for tips on managing complexity with SCM repositories

    - by Philip Regan
    I am a solo developer in my department and I have a lot of individual projects, all created and managed by me. I started using SVN at ProjectLocker via Versions on the Mac a couple of years ago when the variety of projects started getting unwieldy.

    Scenario 1: Now I have a process that is of reasonable complexity, so it can be broken up into multiple smaller applications, and they all share files. In one phase, there is a single shared file—a constants file—that is shared between a Cocoa app and an iPhone app framework. In the second phase, the iPhone app framework will be used to create individual apps of the same ilk—controller classes and what not will all be the same—but with different content in each. The problem that I am running across is that the file in the first phase is in one repository with the application that started it, and the app framework is in a second, separate repository.

    Scenario 2: I have another application framework that partially relies on code from an open source project. This is all internal, non-commercial work, but again, the application framework is going to be used to create a variety of unique products and processes. So, now I have an internally managed repository and an externally managed one out of my control. I make little changes to the open source code to meet the needs of my framework when there is an update I download, but I never commit back into the external repository (though, now that I think about it, I don't think I'm committing it to mine either. Oops).

    The Problem: I have all of this set up on my production Mac quite nicely, but duplicating and subsequently maintaining that environment on my laptop has been challenging. For Scenario 1, I've thought of merging these two projects together into the same repository because they are, for all intents and purposes, inextricably linked. But for Scenario 2, I think I'm stuck just managing files as best I can.

    The Question: I'm wondering if anyone has any tips on how to manage either of these situations, as well as other complex SCM scenarios, when it comes to linking various files from various repositories together. My familiarity with SVN only comes from my work with Versions. It's been great, but I'm a little out of my depth here.

    Read the article

  • Log Debug Messages without Debug Serial on Shipped Device

    - by Kate Moss' Open Space
    Debug messages are one of the ancient but still useful ways of resolving problems. Messages are redirected to PB if KITL is enabled; otherwise they go to the default debug port, usually a serial port on most platforms, but it really depends on how OEMWriteDebugString and OEMWriteDebugByte are implemented. For many reasons, we may not want to have a debug serial port - for example, we don't have enough spare serial ports, or it can affect performance. So some BSP designers decide to dump the messages to some other medium; it could be a log file, shared memory or any solution that is suitable for the need. In CE 5.0 and earlier, the OAL and kernel are linked into one binary; in other words, you can use whatever function is in the kernel, such as SC_CreateFileW to access the filesystem from the OAL, even though this is strongly not recommended. But since the OAL became a standalone executable in CE 6.0, we can no longer use this back door, only the interface exported in NKGlobal, which provides just enough for the OAL and no more. Accessing the filesystem or using sync objects to communicate with other drivers or applications is not even an option. Sounds like the kernel locks itself up; of course, the OAL is in kernel space, so you can still do whatever you want to hack into the kernel, but once again, that not only makes for a dirty solution but also a fragile one. So isn't there an elegant solution? Let's see how a debug message prints out. In private\winceos\COREOS\nk\kernel\printf.c, OutputDebugStringW is the one pumping out the messages; most of the code is for error handling and serialization, but what is really interesting is the following code piece:

        if (g_cInterruptsOff) {
            OEMWriteDebugString ((unsigned short *)str);
        } else {
            g_pNKGlobal->pfnWriteDebugString ((unsigned short *)str);
        }
        CELOG_OutputDebugString(dwActvProcId, dwCurThId, str);

    It outputs the message to the default debug output (redirected to KITL when available) or to the OAL when needed, but note the highlighted part: it also invokes CELOG_OutputDebugString. Follow the thread to private\winceos\COREOS\nk\logger\CeLogInstrumentation.c, and this function dumps whatever input it gets to CELOG. So whatever the debug message is, we always get a clone in CELOG. Generally speaking, all of the debug messages are logged to CELOG already, so what you need to do is use celogflush.exe with the CELZONE_DEBUG zone, and then view the data using the Readlog tool. Here is some information about these tools:

    CELOG - http://msdn.microsoft.com/en-us/library/ee479818.aspx
    READLOG - http://msdn.microsoft.com/en-us/library/ee481220.aspx

    Also, for the advanced reader, I encourage you to dig into private\winceos\COREOS\nk\celog\celogdll, the source of CELOG.DLL, and use it as a starting point to create a more lightweight debug message logger for your own device!

    Read the article

  • Nagging As A Strategy For Better Linking: -z guidance

    - by user9154181
    The link-editor (ld) in Solaris 11 has a new feature that we call guidance that is intended to help you build better objects. The basic idea behind guidance is that if (and only if) you request it, the link-editor will issue messages suggesting better options and other changes you might make to your ld command to get better results. You can choose to take the advice, or you can disable specific types of guidance while acting on others. In some ways, this works like an experienced friend leaning over your shoulder and giving you advice — you're free to take it or leave it as you see fit, but you get nudged to do a better job than you might have otherwise. We use guidance to build the core Solaris OS, and it has proven to be useful, both in improving our objects, and in making sure that regressions don't creep back in later. In this article, I'm going to describe the evolution in thinking and design that led to the implementation of the -z guidance option, as well as give a brief description of how it works.

    The guidance feature issues non-fatal warnings. However, experience shows that once developers get used to ignoring warnings, it is inevitable that real problems will be lost in the noise and ignored or missed. This is why we have a zero tolerance policy against build noise in the core Solaris OS. In order to get maximum benefit from -z guidance while maintaining this policy, I added the -z fatal-warnings option at the same time. Much of the material presented here is adapted from the arc case: PSARC 2010/312 Link-editor guidance

    The History Of Unfortunate Link-Editor Defaults

    The Solaris link-editor is one of the oldest Unix commands. It stands to reason that this would be true — in order to write an operating system, you need the ability to compile and link code. The original link-editor (ld) had defaults that made sense at the time. As new features were needed, command line option switches were added to let the user use them, while maintaining backward compatibility for those who didn't. Backward compatibility is always a concern in system design, but is particularly important in the case of the tool chain (compilers, linker, and related tools), since it is a basic building block for the entire system.

    Over the years, applications have grown in size and complexity. Important concepts like dynamic linking that didn't exist in the original Unix system were invented. Object file formats changed. In the case of System V Release 4 Unix derivatives like Solaris, the ELF (Extensible Linking Format) was adopted. Since then, the ELF system has evolved to provide tools needed to manage today's larger and more complex environments. Features such as lazy loading, and direct bindings have been added. In an ideal world, many of these options would be defaults, with rarely used options that allow the user to turn them off. However, the reality is exactly the reverse: For backward compatibility, these features are all options that must be explicitly turned on by the user. This has led to a situation in which most applications do not take advantage of the many improvements that have been made in linking over the last 20 years. If their code seems to link and run without issue, what motivation does a developer have to read a complex manpage, absorb the information provided, choose the features that matter for their application, and apply them? Experience shows that only the most motivated and diligent programmers will make that effort.
    We know that most programs would be improved if we could just get you to use the various whizzy features that we provide, but the defaults conspire against us. We have long wanted to do something to make it easier for our users to use the linkers more effectively. There have been many conversations over the years regarding this issue, and how to address it. They always break down along the following lines:

    Change ld Defaults

    Since the world would be a better place if the newer ld features were the defaults, why not change things to make it so? This idea is simple, elegant, and impossible. Doing so would break a large number of existing applications, including those of ISVs, big customers, and a plethora of existing open source packages. In each case, the owner of that code may choose to follow our lead and fix their code, or they may view it as an invitation to reconsider their commitment to our platform. Backward compatibility, and our installed base of working software, is one of our greatest assets, and not something to be lightly put at risk. Breaking backward compatibility at this level of the system is likely to do more harm than good. But, it sure is tempting.

    New Link-Editor

    One might create a new linker command, not called 'ld', leaving the old command as it is. The new one could use the same code as ld, but would offer only modern options, with the proper defaults for features such as direct binding. The resulting link-editor would be a pleasure to use. However, the approach is doomed to niche status. There is a vast pile of existing code in the world built around the existing ld command, that reaches back to the 1970's. ld use is embedded in large and unknown numbers of makefiles, and is used by name by compilers that execute it. A Unix link-editor that is not named ld will not find a majority audience no matter how good it might be. Finally, a new linker command will eventually cease to be new, and will accumulate its own burden of backward compatibility issues.

    An Option To Make ld Do The Right Things Automatically

    This line of reasoning is best summarized by a CR filed in 2005, entitled "6239804 make it easier for ld(1) to do what's best". The idea is to have a '-z best' option that unchains ld from its backward compatibility commitment, and allows it to turn on the "best" set of features, as determined by the authors of ld. The specific set of features enabled by -z best would be subject to change over time, as requirements change. This idea is more realistic than the other two, but was never implemented because it has some important issues that we could never answer to our satisfaction:

    The -z best proposal assumes that the user can turn it on, and trust it to select good options without the user needing to be aware of the options being applied. This is a fallacy. Features such as direct bindings require the user to do some analysis to ensure that the resulting program will still operate properly. A user who is willing to do the work to verify that what -z best does will be OK for their application is capable of turning on those features directly, and therefore gains little added benefit from -z best.

    The intent is that when a user opts into -z best, they understand that -z best is subject to sometimes incompatible evolution. Experience teaches us that this won't work. People will use this feature, the meaning of -z best will change, code that used to build will fail, and then there will be complaints and demands to retract the change.
    When (not if) this occurs, we will of course defend our actions, and point at the disclaimer. We'll win some of those debates, and lose others. Ultimately, we'll end up with -z best2 (-z better), or other compromises, and our goal of simplifying the world will have failed.

    The -z best idea rolls up a set of features that may or may not be related to each other into a unit that must be taken wholesale, or not at all. It could be that only a subset of what it does is compatible with a given application, in which case the user is expected to abandon -z best and instead set the options that apply to their application directly. In doing so, they lose one of the benefits of -z best, that if you use it, future versions of ld may choose a different set of options, and automatically improve the object through the act of rebuilding it.

    I drew two conclusions from the above history:

    1. For a link-editor, backward compatibility is vital. If a given command line linked your application 10 years ago, you have every reason to expect that it will link today, assuming that the libraries you're linking against are still available and compatible with their previous interfaces.
    2. For an application of any size or complexity, there is no substitute for the work involved in examining the code and determining which linker options apply and which do not. These options are largely orthogonal to each other, and it can be reasonable not to use any or all of them, depending on the situation, even in modern applications. It is a mistake to tie them together.

    The idea for -z guidance came from consideration of these points. By decoupling the advice from the act of taking the advice, we can retain the good aspects of -z best while avoiding its pitfalls:

    - -z guidance gives advice, but the decision to take that advice remains with the user who must evaluate its merit and make a decision to take it or not. As such, we are free to change the specific guidance given in future releases of ld, without breaking existing applications. The only fallout from this will be some new warnings in the build output, which can be ignored or dealt with at the user's convenience.
    - It does not couple the various features given into a single "take it or leave it" option, meaning that there will never be a need to offer "-zguidance2", or other such variants as things change over time. Guidance has the potential to be our final word on this subject.
    - The user is given the flexibility to disable specific categories of guidance without losing the benefit of others, including those that might be added to future versions of the system.

    Although -z fatal-warnings stands on its own as a useful feature, it is of particular interest in combination with -z guidance. Used together, the guidance turns from advice to hard requirement: The user must either make the suggested change, or explicitly reject the advice by specifying a guidance exception token, in order to get a build. This is valuable in environments with high coding standards.

    ld Command Line Options

    The guidance effort resulted in new link-editor options for guidance and for turning warnings into fatal errors. Before I reproduce that text here, I'd like to highlight the strategic decisions embedded in the guidance feature:

    - In order to get guidance, you have to opt in. We hope you will opt in, and believe you'll get better objects if you do, but our default mode of operation will continue as it always has, with full backward compatibility, and without judgement.
    - Guidance suggestions always offer specific advice, and not vague generalizations.
    - You can disable some guidance without turning off the entire feature. When you get guidance warnings, you can choose to take the advice, or you can specify a keyword to disable guidance for just that category. This allows you to get guidance for things that are useful to you, without being bothered about things that you've already considered and dismissed.
    - As the world changes, we will add new guidance to steer you in the right direction. All such new guidance will come with a keyword that lets you turn it off.
    - In order to facilitate building your code on different versions of Solaris, we quietly ignore any guidance keywords we don't recognize, assuming that they are intended for newer versions of the link-editor.

    If you want to see what guidance tokens ld does and does not recognize on your system, you can use the ld debugging feature as follows:

        % ld -Dargs -z guidance=foo,nodefs
        debug:
        debug: Solaris Linkers: 5.11-1.2275
        debug:
        debug: arg[1]  option=-D:  option-argument: args
        debug: arg[2]  option=-z:  option-argument: guidance=foo,nodefs
        debug: warning: unrecognized -z guidance item: foo

    The -z fatal-warnings option is straightforward, and generally useful in environments with strict coding standards. Note that the GNU ld already had this feature, and we accept their option names as synonyms:

        -z fatal-warnings | nofatal-warnings
        --fatal-warnings | --no-fatal-warnings

    The -z fatal-warnings and the --fatal-warnings option cause the link-editor to treat warnings as fatal errors. The -z nofatal-warnings and the --no-fatal-warnings option cause the link-editor to treat warnings as non-fatal. This is the default behavior.

    The -z guidance option is defined as follows:

        -z guidance[=item1,item2,...]

    Provide guidance messages to suggest ld options that can improve the quality of the resulting object, or which are otherwise considered to be beneficial. The specific guidance offered is subject to change over time as the system evolves. Obsolete guidance offered by older versions of ld may be dropped in new versions. Similarly, new guidance may be added to new versions of ld. Guidance therefore always represents current best practices. It is possible to enable guidance, while preventing specific guidance messages, by providing a list of item tokens, representing the class of guidance to be suppressed. In this way, unwanted advice can be suppressed without losing the benefit of other guidance. Unrecognized item tokens are quietly ignored by ld, allowing a given ld command line to be executed on a variety of older or newer versions of Solaris. The guidance offered by the current version of ld, and the item tokens used to disable these messages, are as follows.

    Specify Required Dependencies
    Dynamic executables and shared objects should explicitly define all of the dependencies they require. Guidance recommends the use of the -z defs option, should any symbol references remain unsatisfied when building dynamic objects. This guidance can be disabled with -z guidance=nodefs.

    Do Not Specify Non-Required Dependencies
    Dynamic executables and shared objects should not define any dependencies that do not satisfy the symbol references made by the dynamic object. Guidance recommends that unused dependencies be removed. This guidance can be disabled with -z guidance=nounused.

    Lazy Loading
    Dependencies should be identified for lazy loading.
    Guidance recommends the use of the -z lazyload option should any dependency be processed before either a -z lazyload or -z nolazyload option is encountered. This guidance can be disabled with -z guidance=nolazyload.

    Direct Bindings
    Dependencies should be referenced with direct bindings. Guidance recommends the use of the -B direct, or -z direct options should any dependency be processed before either of these options, or the -z nodirect option is encountered. This guidance can be disabled with -z guidance=nodirect.

    Pure Text Segment
    Dynamic objects should not contain relocations to non-writable, allocable sections. Guidance recommends compiling objects with Position Independent Code (PIC) should any relocations against the text segment remain, and neither the -z textwarn or -z textoff options are encountered. This guidance can be disabled with -z guidance=notext.

    Mapfile Syntax
    All mapfiles should use the version 2 mapfile syntax. Guidance recommends the use of the version 2 syntax should any mapfiles be encountered that use the version 1 syntax. This guidance can be disabled with -z guidance=nomapfile.

    Library Search Path
    Inappropriate dependencies that are encountered by ld are quietly ignored. For example, a 32-bit dependency that is encountered when generating a 64-bit object is ignored. These dependencies can result from incorrect search path settings, such as supplying an incorrect -L option. Although benign, this dependency processing is wasteful, and might hide a build problem that should be solved. Guidance recommends the removal of any inappropriate dependencies. This guidance can be disabled with -z guidance=nolibpath.

    In addition, -z guidance=noall can be used to entirely disable the guidance feature. See Chapter 7, Link-Editor Quick Reference, in the Linker and Libraries Guide for more information on guidance and advice for building better objects.

    Example

    The following example demonstrates how the guidance feature is intended to work. We will build a shared object that has a variety of shortcomings:

    - Does not specify all its dependencies
    - Specifies dependencies it does not use
    - Does not use direct bindings
    - Uses a version 1 mapfile
    - Contains relocations to the readonly allocable text (not PIC)

    This scenario is sadly very common — many shared objects have one or more of these issues.

        % cat hello.c
        #include <stdio.h>
        #include <unistd.h>

        void hello(void)
        {
                printf("hello user %d\n", getpid());
        }

        % cat mapfile.v1
        # This version 1 mapfile will trigger a guidance message

        % cc hello.c -o hello.so -G -M mapfile.v1 -lelf

    As you can see, the operation completes without error, resulting in a usable object.
    However, turning on guidance reveals a number of things that could be better:

        % cc hello.c -o hello.so -G -M mapfile.v1 -lelf -zguidance
        ld: guidance: version 2 mapfile syntax recommended: mapfile.v1
        ld: guidance: -z lazyload option recommended before first dependency
        ld: guidance: -B direct or -z direct option recommended before first dependency
        Undefined                       first referenced
         symbol                             in file
        getpid                              hello.o  (symbol belongs to implicit dependency /lib/libc.so.1)
        printf                              hello.o  (symbol belongs to implicit dependency /lib/libc.so.1)
        ld: warning: symbol referencing errors
        ld: guidance: -z defs option recommended for shared objects
        ld: guidance: removal of unused dependency recommended: libelf.so.1
        warning: Text relocation remains        referenced
            against symbol                  offset      in file
        .rodata1 (section)                  0xa         hello.o
        getpid                              0x4         hello.o
        printf                              0xf         hello.o
        ld: guidance: position independent (PIC) code recommended for shared objects
        ld: guidance: see ld(1) -z guidance for more information

    Given the explicit advice in the above guidance messages, it is relatively easy to modify the example to do the right things:

        % cat mapfile.v2
        # This version 2 mapfile will not trigger a guidance message
        $mapfile_version 2

        % cc hello.c -o hello.so -Kpic -G -Bdirect -M mapfile.v2 -lc -zguidance

    There are situations in which the guidance does not fit the object being built. For instance, you want to build an object without direct bindings:

        % cc -Kpic hello.c -o hello.so -G -M mapfile.v2 -lc -zguidance
        ld: guidance: -B direct or -z direct option recommended before first dependency
        ld: guidance: see ld(1) -z guidance for more information

    It is easy to disable that specific guidance warning without losing the overall benefit from allowing the remainder of the guidance feature to operate:

        % cc -Kpic hello.c -o hello.so -G -M mapfile.v2 -lc -zguidance=nodirect

    Conclusions

    The linking guidelines enforced by the ld guidance feature correspond rather directly to our standards for building the core Solaris OS. I'm sure that comes as no surprise. It only makes sense that we would want to build our own product as well as we know how. Solaris is usually the first significant test for any new linker feature. We now enable guidance by default for all builds, and the effect has been very positive. Guidance helps us find suboptimal objects more quickly. Programmers get concrete advice for what to change instead of vague generalities. Even in the cases where we override the guidance, the makefile rules to do so serve as documentation of the fact.

    Deciding to use guidance is likely to cause some up front work for most code, as it forces you to consider using new features such as direct bindings. Such investigation is worthwhile, but does not come for free. However, the guidance suggestions offer a structured and straightforward way to tackle modernizing your objects, and once that work is done, for keeping them that way. The investment is often worth it, and will repay you in terms of better performance and fewer problems. I hope that you find guidance to be as useful as we have.

    Read the article

  • Sony VAIO is booting directly into Windows without showing grub

    - by Rohan Dhruva
    I bought a new Sony Vaio S series laptop. It uses Insyde H2O BIOS EFI, and trying to install Linux on it is driving me crazy.

        root@kubuntu:~# parted /dev/sda print
        Model: ATA Hitachi HTS72756 (scsi)
        Disk /dev/sda: 640GB
        Sector size (logical/physical): 512B/4096B
        Partition Table: gpt

        Number  Start   End     Size    File system  Name                          Flags
         1      1049kB  274MB   273MB   fat32        EFI system partition          hidden
         2      274MB   20.8GB  20.6GB  ntfs         Basic data partition          hidden, diag
         3      20.8GB  21.1GB  273MB   fat32        EFI system partition          boot
         4      21.1GB  21.3GB  134MB                Microsoft reserved partition  msftres
         5      21.3GB  342GB   320GB   ntfs         Basic data partition
         6      342GB   358GB   16.1GB  ext4         Basic data partition
         7      358GB   374GB   16.1GB  ntfs         Basic data partition
         8      374GB   640GB   266GB   ntfs         Basic data partition

    What is surprising is that there are 2 EFI system partitions on the disk. The sda2 partition is a 20GB recovery partition which loads Windows with a basic recovery interface. This is accessible by pressing the "ASSIST" button as opposed to the normal power button. I presume that the sda1 EFI System Partition (ESP) loads into this recovery. The sda3 ESP has more fleshed out entries for Microsoft Windows, which actually goes into Windows 7 (as confirmed by bcdedit.exe on Windows). Ubuntu is installed on sda6, and during installation I chose sda3 as my boot partition. The installer correctly created a sda3/EFI/ubuntu/grubx64.efi application. The real problem: for the life of me, I can't set it to be the default! I tried creating a sda3/startup.nsh which called grubx64.efi, but it didn't help -- on rebooting, the system still boots into Windows. I tried using efibootmgr, and that shows as if it worked:

        root@kubuntu:~# efibootmgr
        BootCurrent: 0000
        BootOrder: 0000,0001
        Boot0000* EFI USB Device
        Boot0001* Windows Boot Manager

        root@kubuntu:~# efibootmgr --create --gpt --disk /dev/sda --part 3 --write-signature --label "GRUB2" --loader "\\EFI\\ubuntu\\grubx64.efi"
        BootCurrent: 0000
        BootOrder: 0002,0000,0001
        Boot0000* EFI USB Device
        Boot0001* Windows Boot Manager
        Boot0002* GRUB2

        root@kubuntu:~# efibootmgr
        BootCurrent: 0000
        BootOrder: 0002,0000,0001
        Boot0000* EFI USB Device
        Boot0001* Windows Boot Manager
        Boot0002* GRUB2

    However, on rebooting, as you guessed, the machine rebooted directly back into Windows. The only things I can think of are:

    1. The sda1 partition is somehow being used.
    2. Overwrite /EFI/Boot/bootx64.efi and /EFI/Microsoft/Boot/bootmgfw.efi with grubx64.efi [but this seems really radical].

    Can anyone please help me out? Thanks -- any help is greatly appreciated, as this issue is driving me crazy!

    Read the article

  • Creating the Completely Customized World Just for YOU

    - by divya.malik
    OK, so not a customized world, but do you know what goes into creating that customized web storefront for you? How do you get those additional offers from vendors when you call in for service or when you are browsing a storefront? This is what has been happening behind the scenes. When a customer calls into a contact center for service, at the end of the conversation they are offered a new product or service. What just transpired was that the CRM system that was in place routed the call to the right agent, the agent got the pop-up screen with the customer information, and the call request was handled. Then came the decision point to cross-sell and up-sell. The agent got some recommended offers that were created based on analyzed data (this data had been put into a data warehouse, modeled, profiled, and rules were implemented, e.g. people with profile X like product Y). But with this system, what happens is that analytics can be applied to a very small subset. Now comes Real Time Decisioning (RTD), which helps companies make optimal decisions in the context of transactional systems. It enables companies to improve business processes with real-time intelligence on every single transaction. RTD is like a service plug-in that you put at the back of your transactional systems and that you ping to get a recommendation. It listens to business process flows and data moving through the process, gets all that data, processes all that you can do with that data, and gives out various offers. It takes a process-centric view of analytics rather than just a data-centric view. It continuously observes and learns from ever-changing customer behavior and applies those insights to providing real-time decisions and recommendations at any customer touch point. At Oracle we define Real Time Decisioning as "the solution that addresses a business issue faced by all organizations: how to make accurate decisions, using the most up-to-date information, in real time... consistently and in large volumes". Here is a video on recommendation engines that are benefiting from real time decisioning today; see how it is helping online vendors.

    Read the article

  • Transform Your Portal Experience and Optimize Online Engagement

    - by Christie Flanagan
    Does your portal environment foster collaboration between your business and your customers? Are you effectively managing your customer, employee, and partner relationships and engagement? Can your users access information through Web, mobile, and social channels? Online engagement solutions give organizations the ability to listen and respond to their customers, provide targeted experiences, and encourage interaction among customers and employees. Join us for a webcast on Thursday, April 12, 2012 at 10 a.m. PT / 1 p.m. ET, where Sachin Agarwal, Senior Director of Product Management and Kellsey Ruppel, Senior Product Marketing Manager for Oracle WebCenter, will tell you how to transform your portal experience and optimize online engagement. With Oracle WebCenter, you can:

    - Deliver an optimized online experience for your users
    - Create contextually relevant, targeted online experiences
    - Provide intuitive and secure access to back-office applications
    - Manage and moderate interactive, multichannel social interactions

    Register today and learn how to make your portals more interactive and engaging across multiple channels.

    Read the article

  • Organizing Git repositories with common nested sub-modules

    - by André Caron
    I'm a big fan of Git sub-modules. I like to be able to track a dependency along with its version, so that you can roll back to a previous version of your project and have the corresponding version of the dependency to build safely and cleanly. Moreover, it's easier to release our libraries as open source projects as the history for libraries is separate from that of the applications that depend on them (and which are not going to be open sourced). I'm setting up a workflow for multiple projects at work, and I was wondering how it would be if we took this approach to a bit of an extreme instead of having a single monolithic project. I quickly realized there is a potential can of worms in really using sub-modules.

    Suppose a pair of applications, studio and player, and dependent libraries core, graph and network, where dependencies are as follows:

    - core is standalone
    - graph depends on core (sub-module at ./libs/core)
    - network depends on core (sub-module at ./libs/core)
    - studio depends on graph and network (sub-modules at ./libs/graph and ./libs/network)
    - player depends on graph and network (sub-modules at ./libs/graph and ./libs/network)

    Suppose that we're using CMake and that each of these projects has unit tests and all the works. Each project (including studio and player) must be able to be compiled standalone to perform code metrics, unit testing, etc. The thing is, after a recursive git submodule fetch, you get the following directory structure:

        studio/
        studio/libs/                    (sub-module depth: 1)
        studio/libs/graph/
        studio/libs/graph/libs/         (sub-module depth: 2)
        studio/libs/graph/libs/core/
        studio/libs/network/
        studio/libs/network/libs/       (sub-module depth: 2)
        studio/libs/network/libs/core/

    Notice that core is cloned twice in the studio project. Aside from wasting disk space, I have a build system problem because I'm building core twice and I potentially get two different versions of core.

    Question

    How do I organize sub-modules so that I get the versioned dependency and standalone build without getting multiple copies of common nested sub-modules?

    Possible solution

    If the library dependency is somewhat of a suggestion (i.e. in a "known to work with version X" or "only version X is officially supported" fashion) and potential dependent applications or libraries are responsible for building with whatever version they like, then I could imagine the following scenario:

    - Have the build system for graph and network tell them where to find core (e.g. via a compiler include path).
    - Define two build targets, "standalone" and "dependency", where "standalone" is based on "dependency" and adds the include path to point to the local core sub-module.
    - Introduce an extra dependency: studio on core.

    Then, studio builds core, sets the include path to its own copy of the core sub-module, then builds graph and network in "dependency" mode. The resulting folder structure looks like:

        studio/
        studio/libs/                    (sub-module depth: 1)
        studio/libs/core/
        studio/libs/graph/
        studio/libs/graph/libs/         (empty folder, sub-modules not fetched)
        studio/libs/network/
        studio/libs/network/libs/       (empty folder, sub-modules not fetched)

    However, this requires some build system magic (I'm pretty confident this can be done with CMake) and a bit of manual work on the part of version updates (updating graph might also require updating core and network to get a compatible version of core in all projects). Any thoughts on this?

    Read the article

  • MPEG-2 playback inconsistent

    - by DustByte
    Many years ago I gave up on Linux because video playback was choppy. Now I'm back, and video playback is still playing up... I have two MPEG files: good.mpg and bad.mpg. Here is some information about the two files, using avprobe: My machine is Intel Core 2 Duo E8400 @ 3.00GHz x 2, 64-bit. I do not know what graphics card I have. I run Ubuntu 12.04. So far I have had no problems with YouTube and playback of various video files, including playback of the file good.mpg, included in the avprobe snapshot above. However, the file bad.mpg gives me a headache! The file bad.mpg was produced by a respectable "Old-video-tapes-to-DVD" company. I converted over 10 Video-8 tapes to MPEG through them, and today I collected my hard drive containing the MPEG files. Unfortunately I have problems watching them! Here are some details: Using Totem Movie Player 3.0.1 works well for several seconds, then it gets choppy and the playback is not at all smooth. Also, the player easily freezes for a while when trying to jump to another position in the file. Most strangely though, the total time is shown as 0:42 (42 seconds) instead of the true 00:39:11. The VLC media player is doing a better job. It shows the correct total length, but as soon as I jump in the video to a new position, it stalls. Playback also stalls after 30 seconds if I press play and leave it. Using Handbrake and choosing bad.mpg as the source gives me: There is only one title to choose, and it is 6 min 53 seconds. I would have guessed the full 39 minutes of the video should have shown. Lastly, putting the file bad.mpg in Dropbox and viewing it on my iPad with the Dropbox app seems fine (disregard the lack of easy jumping forward due to real-time encoding when streaming it). My question is simple: What is going on?! Why do I have problems playing the MPEG-2 files I just paid good money for (the issue with bad.mpg applies to all the files I had encoded)? Is it an issue with my particular Linux machine? The graphics card? But why has everything worked fine so far, and why doesn't the good.mpg file cause any problems?

    Read the article

  • GLM Velocity Vectors - Basic Maths to Simulate Steering

    - by Reanimation
    UPDATE - Code updated below but I still need help adjusting my math. I have a cube rendered on the screen which represents a car (or similar). Using Projection/Model matrices and GLM I am able to move it back and forth along the axes and rotate it left or right. I'm having trouble with the vector mathematics to make the cube move forwards no matter what its current orientation is (i.e. if it's rotated right 30 degrees, then when it moves forwards it should travel along that 30-degree heading rather than along the original axis). I hope I've explained that correctly. This is what I've managed to do so far in terms of using GLM to move the cube: glm::vec3 vel; //velocity vector void renderMovingCube(){ glUseProgram(movingCubeShader.handle()); GLuint matrixLoc4MovingCube = glGetUniformLocation(movingCubeShader.handle(), "ProjectionMatrix"); glUniformMatrix4fv(matrixLoc4MovingCube, 1, GL_FALSE, &ProjectionMatrix[0][0]); glm::mat4 viewMatrixMovingCube; viewMatrixMovingCube = glm::lookAt(camOrigin, camLookingAt, camNormalXYZ); vel.x = cos(rotX); vel.y=sin(rotX); vel*=moveCube; //move cube ModelViewMatrix = glm::translate(viewMatrixMovingCube,globalPos*vel); //bring ground and cube to bottom of screen ModelViewMatrix = glm::translate(ModelViewMatrix, glm::vec3(0,-48,0)); ModelViewMatrix = glm::rotate(ModelViewMatrix, rotX, glm::vec3(0,1,0)); //manually turn glUniformMatrix4fv(glGetUniformLocation(movingCubeShader.handle(), "ModelViewMatrix"), 1, GL_FALSE, &ModelViewMatrix[0][0]); //pass matrix to shader movingCube.render(); //draw glUseProgram(0); } keyboard input: void keyboard() { char BACKWARD = keys['S']; char FORWARD = keys['W']; char ROT_LEFT = keys['A']; char ROT_RIGHT = keys['D']; if (FORWARD) //W - move forwards { globalPos += vel; //globalPos.z -= moveCube; BACKWARD = false; } if (BACKWARD)//S - move backwards { globalPos.z += moveCube; FORWARD = false; } if (ROT_LEFT)//A - turn left { rotX +=0.01f; ROT_LEFT = false; } if (ROT_RIGHT)//D - turn right { rotX -=0.01f; ROT_RIGHT = false; } Where am I going wrong with my vectors? I would like to change the direction of the cube (which works), but then move forwards in that direction.
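    One common way to get this behaviour is to recompute the forward vector from the heading every frame and accumulate the position along it, instead of translating along a fixed world axis. The following is only a minimal sketch, assuming GLM, rotation about the Y axis only, and a model that faces -Z at heading 0; the Car struct and the names heading, forward, move, turn and modelMatrix are illustrative and not taken from the code above:

        #include <cmath>
        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        struct Car {
            glm::vec3 position{0.0f};
            float heading = 0.0f; // yaw in radians

            // Forward vector derived from the current heading. These signs match
            // glm::rotate about +Y for a model that faces -Z at heading 0; flip
            // them if your model faces a different direction.
            glm::vec3 forward() const {
                return glm::vec3(-std::sin(heading), 0.0f, -std::cos(heading));
            }

            void move(float amount)  { position += forward() * amount; } // W / S
            void turn(float radians) { heading += radians; }             // A / D

            // Translate to the accumulated position, then rotate about Y, so the
            // cube spins in place and always slides along the direction it faces.
            glm::mat4 modelMatrix() const {
                glm::mat4 m(1.0f);
                m = glm::translate(m, position);
                m = glm::rotate(m, heading, glm::vec3(0.0f, 1.0f, 0.0f));
                return m;
            }
        };

    With a layout like this the keyboard handler only calls move() and turn(), and the render function multiplies the view matrix by modelMatrix(). The important part is that the translation is accumulated along forward(), which is recomputed from the heading each time, rather than along a fixed axis.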

    Read the article

  • ArchBeat Link-o-Rama for 2012-06-14

    - by Bob Rhubart
    Duke's Choice Award Nominations Close Friday! | The Java Source The Duke's Choice Awards celebrate extreme innovation in the world of Java technology. Nominate an individual, group, or company that shows the best in Java innovation. Nominate at Java.net/dukeschoice. Nominations are open until this Friday, June 15. Whole Lotta Virtualization Goin' On | Rick Ramsey The OTN Garage's Rick Ramsey shares a list of recent Virtualization articles available on OTN, along with a link to a video by The Killer, Mr Jerry Lee Lewis. A Pragmatic Path to Navigating your Infrastructure to the Cloud | The WebLogic Server Blog Ruma Sanyal offers an overview of a recent Oracle webcast featuring Gartner VP and Distinguished Analyst Andy Butler and Vice President and Gartner Fellow Massimo Pezzini. Migrating C/C++ embedded SQL code | Tom Laszewski Cloud migration expert Tom Laszewski explains the how-to in 5 easy steps. Aetna Dumps Its Siloed Enterprise Architecture for SOA | CIO.com CIO writer Stephanie Overby tells the story of how one major health insurance provider put the "Enterprise" back in Enterprise Architecture. (H/T to Joe McKendrick for this story.) Downloading specific video renditions in WebCenter Content | Kyle Hatlestad How-to from Oracle WebCenter & ADF A-Team blogger Kyle Hatlestad. Eclipse DemoCamp - June 2012 - Redwood Shores, CA Location: Oracle HQ - 10 Twin Dolphin Drive, Redwood Shores, CA Date and Time: Wednesday, June 13, 2012, from 6pm - 9pm. Agenda: The evolution of Java persistence, Doug Clarke, EclipseLink Project Lead, Oracle Integrating BIRT into Applications, Ashwini Verma, Actuate Corporation Leveraging OSGi In The Enterprise, Kamal Muralidharan, Lead Engineer, eBay Developing Rich ADF Applications with Java EE, Greg Stachnick, Oracle NVIDIA® Nsight™ Eclipse Edition, Goodwin (Tech lead - Visual tools), Eugene Ostroukhov (Senior engineer – Visual tools) 2012 Oracle Fusion Middleware Innovation Awards - Win a FREE Pass to Oracle OpenWorld 2012 in San Francisco Share your use of Oracle Fusion Middleware solutions and how they help your organization drive business innovation. You just might win a free pass to Oracle OpenWorld 2012 in San Francisco. Deadline for submissions is July 17, 2012. BI Architecture Master Class for Partners – Oracle Architecture Unplugged Date: June 21, 2012 No slides, no fluff. This workshop will be highly interactive and is aimed at Oracle OPN member partners who are IT Architects and BI+W specialists. The focus will be on architectural issues and considerations. DevOps: Evolving to Handle Disruption | JP Morgenthal The subject of DevOps came up this week during an OTN ArchBeat podcast interview with Ron Batra and James Baty on the role of the cloud architect (that program will be available in a few weeks). Morgenthal's article for InfoQ offers a good overview of what DevOps is and how it works. Thought for the Day "Elegance is not a dispensable luxury but a factor that decides between success and failure." — Edsger Dijkstra Source: softwarequotes.com

    Read the article

  • Travelling MVP #2: Community event at Bucharest, Romania

    - by DigiMortal
    My second trip was to DevReach, with two stops. My first stop was at Bucharest, where I met with my friend Dimitar Georgiev, who is one of the authors of the Gym Realm service. Romanian MVP Andrei Ignat was our host there and organized a meeting with local community guys. With me – a first in my life – was one more guy from Estonia visiting DevReach, and he made the whole trip with me. Bucharest We arrived in Bucharest on 29.09 at night. We stayed at Hotel Michelangelo. It's a small hotel with nice rooms, free WiFi and very good service. Although my room was on the first floor there was no street noise. We visited one restaurant that offers national cuisine and it was really great. The next day we went out with local guys and had some beers in the “old town”. Bucharest “old town” is nice and cozy. There are many bars open and I am sure everybody will find a place there they like. After supper we visited a warm karaoke bar where we had beers with local guys. Andrei Ignat – karaoke star Agu Suur and Andrei Ion Rinea enjoying karaoke and tequila Community event The next day we had the community event. I gave my session about ASP.NET Web API and Dimitar spoke about how to port ASP.NET web applications to a cloud environment. Sessions were held in a study class of a local company. Dimitar Georgiev speaking about porting web apps to Windows Azure. As it was a usual community evening and not some bigger event, there were about 12 guys attending from Bucharest. There were both IT-PROs and developers, and one nice thing about the Bucharest community is that they listen to you very well and ask questions if something is unclear or if you slide away from a topic they are interested in. Okay, we tried to keep up a good tempo so people stayed awake, and I think we succeeded. After the sessions we all went together to the local Piranha pub near the event venue. We had some beers with local guys and talked with them on different technology topics. It was another good and interesting evening in Bucharest. I want to go back there for sure. As it was my first trip to Bucharest and I mostly gathered experiences, I think my next community trip there will be way stronger. I take it as a challenge. Plus – I have some new friends there and I want to meet them too – be it a community event or not. :)

    Read the article

  • A Bad Day at Work

    - by TehGrumpyCoder
    There's lots of ways of having a bad day at work... I suppose for many people, just being *at* work makes it a bad day, but I happen to be one of those people that found a way to do something I like for a living. I've always said "if you're not having fun, what's the point?" ... on the latest Zune podcast, they were interviewing someone from the WP7 team and he said their mantra is "It's not done until it's fun" ... I like that too. But, even when you're doing what you like for a living, it can get tedious. There were times that I didn't look forward to going out and playing guitar on a Friday or Saturday night, and some nights I was looking at my watch just waiting for it to be over. Well, that was today... like Steve Martin in "The Jerk" ... the first hour was like a regular hour, but then the rest of the morning was like a day, and the afternoon has been like a week. I've got a list of stuff I need to get into my head, and it's tough when the highest technology you have during 9 hours of your day is .NET 2.0 and you can only run what IT installed. I get wrapped around the power take-off reading something and dearly want to write some code to try, but with the state of technology here, it's like trying to teach jazz chords to someone that showed up for their lesson with that stupid plastic guitar from Guitar Hero. I tried to watch a training video... downloaded it zipped so maybe it wouldn't be noticed like it might if I streamed it. Then nothing on this machine would play the video... dang! Well, if someone doesn't take me out on the drive tonight or back in tomorrow, maybe it'll be a better day... or maybe I'll d/l a bunch of training videos in a different format, or bring in a decent viewer, or download them to my Zune maybe... that would work. I suppose at age 61 there are worse things than feeling stifled... for instance, so far I've lived 2 years longer than my father... but at the same time, he's the one that pointed out that in my first letter home from Boot Camp "He's complaining, he's fine"... guess he had my number :) I think he'd appreciate "Teh Grumpy Coder"

    Read the article

  • How important is knowing functionality before coding?

    - by minusSeven
    I work for a software development company where the development work has been offshored to us. The onshore team handles the support and talks directly to the clients. We never talk to the clients directly; we just talk to people from the onshore team, who talk directly to the clients. When requirements come, the onshore team talks to the clients, makes requirement documents, and informs us. We make design documents after studying the requirements (we follow the traditional waterfall model). But there is one problem in the whole process: nobody on either the offshore or the onshore team understands the functionality of the application completely. We just know it's a big, complex web app handling complex order processing, catalog management, campaign management and other activities. We struggle with the design documents as the requirements are not clear. It then goes into a series of questions and answers back and forth between the onshore team, the offshore team and the clients. We would often be told to understand the functionality from the code. But that's usually not feasible, as the code base is huge and even understanding a simple menu item takes days if not weeks. We tried asking the clients to give us a knowledge transfer about the application, but to no avail. Our manager would often tell us to start coding even if the design document is not complete or the requirements are not clear. We would start by coding the part of the requirement that seems clear and wait for the rest. This usually delays the deployment by a month. In extreme cases we would have very few errors in development and production, but the clients would say that's not what they asked for. That would start a blame game and a series of change requests, and we would end up developing something very different. My question is: how would you do development work if you don't know the functionality of the app fully? UPDATE: About the development methodology – it isn't really my choice, and I am not my team's lead; it is just the way it began. I tried to tell people about the advantages of agile, but to no avail. Besides, I don't think my team has the necessary mindset to work in an agile environment.

    Read the article

  • TechEd 2010 Day Three: The Database Designer (Isn't)

    - by BuckWoody
    Yesterday at TechEd 2010 here in New Orleans I worked the front booth, answering general SQL Server questions for the masses. I was actually a little surprised to find most of the questions I got were from folks that wanted to know more about Stream Insight and Master Data Services. In past conferences I've been asked a lot of "free consulting" questions, about problems folks have had from older products. I don't mind that a bit - in fact, I'm always happy to help in any way I can. But this time people are really interested in the new features in the product, and I like that they are thinking ahead, not just having to solve problems in production. My presentation was on "Database Design in an Hour". We had the usual fun, and Sideshow Bob made an appearance - I kid you not. The guy in the back of the room looked just like Sideshow Bob, so I quickly held a "best hair" contest, and he won. During the presentation, I explained the tools you can use to design databases. I also explained that the "Database Designer" tool in SQL Server Management Studio (SSMS) isn't truly a designer - it uses non-standard notation, doesn't have a meta-data dictionary, and worst of all, it works at the physical level. In other words, whatever you do in SSMS will automatically change the field/table/relationship structures in the database. We fixed this in SSMS 2008 and higher by adding an option to block that, but the tool is not a good design function nonetheless. To be fair, no one I know of at Microsoft recommends it as one - but I was shocked to hear so many developers in the room defending it as a good tool. I think the main issue for someone who doesn't have to work with relational systems a great deal is that it can be difficult to figure out foreign keys. The syntax makes them look "backwards", so it's just easier to grab a field and place it on the table you want to point to. There are options. You can download a couple of free tools (CA has a community edition of ER-WIN, Quest has one, and Embarcadero also has one) and if you design more than one or two databases a year, it may be worth buying a true design tool. For years I used Visio, but we changed it so that it doesn't forward-engineer (create the DDL) any more, so it isn't a true design tool either. So investigate those free and not-so-free tools. You'll find they help you in your job - but stay away from the Database Designer in SSMS. Or I'll send Sideshow Bob over there to straighten you out.

    Read the article

  • Why is wearing jeans considered unprofessional?

    - by Gopinath
    When I started my career 9 years ago I used to wear casual wear to the office – jeans & T-shirts all 5 days. The environment at the workplace in those days encouraged me to be casual, and many of my colleagues used to come in jeans. We had just started our careers, and in those days it was perfectly fine to be in casuals. As I moved up the ladder, I started feeling the discomfort of wearing jeans at work. During client visits, senior managers' meetings and consultations I was the odd man in the crowd, as the rest of them were in formals. In order to be one among the professionals I was forced to change my dressing style and start wearing formals. But the question "Why is wearing jeans to the workplace considered unprofessional?" has lingered in my mind till today. I got the answer to my question from a discussion thread on Quora: When they were invented, jeans were associated with blue-collar work. They were meant to get muddy and gross and take lots of abuse without falling apart, even if you wore the same pair every day. The people who bought them were the ones whose lives required durable clothing. And another commenter says… A professional image is critical to cementing business relationships, and part of that is, for right or wrong, how you dress. Jeans are typically associated with "kicking back", relaxation, leisure, informality, and even a slightly rebellious flavor. The style and condition of the jeans are a consideration, as we often wear jeans into advanced states of being worn down, with tearing, etc., in a way that we generally do not with other clothing items. I agree with this theory even though it may be centuries old. If you want to look like a professional and be treated like a professional, it's better to dress up in formals. These days I make a point to be in formals at the workplace. Not everyone is Steve Jobs, who could get away with jeans and a turtleneck, right? CC Image credit flickr/exey

    Read the article

  • Somewhere to get inspiration - Pair up the creative with the tech

    - by Morten Bergfall
    I am a somewhat green developer; some work experience, last year of school. Like most of you, I am constantly working on an assortment of personal projects. Since my mind often has a somewhat drifting characteristic, I am not always able to keep the projects in check. After some time they all exhibit the moral fiber of Vikings, harlots and chain-letter-knitters. This includes constant forking, round-abouting, eating of school assignments of rather mundane, and hence pretty yawn-inducing, specifications, and of course quite a bit of gathering of folder dust. Well, on to my question... is there a place, forum, or something with the purpose of linking people who have ideas with the people actually able to bring said ideas to life? Of course, I know of the professional ones, like rent-a-coder and such. And there seem to be a lot of open source projects available for participation. What I'm looking for doesn't really fit into any of those categories... the form would be somewhat like rent-a-coder, but this is ideas & inspiration, not bubble-sort-my-quarterly-for-a-buck. The possibilities for developing bonds, spicy code, and plain old fun seem quite real. As I see it, the main benefit would be that we (that is, the tech flipside of the proverbial eCoin) get something worthwhile to do, rather than squeeze the last creative grain out of our code-heavy brains. To give it some perspective: My last project consists of an absurd jQuery plugin that includes animated PNG robots migrating from Google Earth to drag an HTML element of your choosing onto the map, where it gets colored, only to be dragged back by this poorly animated robot... Often, the line between the creative and the tech is blurred, to say the least. I wouldn't think that would be a problem. Think of someone who has developed a nifty little Windows application and then sees the possibility of broader use, perhaps some sort of networking functionality. This fellow sadly lacks the skill to implement this. So he, she or it would then seek a developer with the know-how and they could complete this project together. So, do any of you know of such a place, or can you nudge me in the right direction? And yes, I understand completely that I should be dedicating myself to doing school work, or applying for mundane developer positions, so please.... :-) UPDATE Sadly, I'm situated in Oslo, Norway, and the number of developers is somewhat limited... and I have had quite some, ahem, personality issues with the ones who are available ;-) So I feel I must go deeper; search the multitude of the web...

    Read the article

< Previous Page | 741 742 743 744 745 746 747 748 749 750 751 752  | Next Page >