Search Results

Search found 7517 results on 301 pages for 'fast debugger'.

Page 207/301

  • Four Emerging Payment Stories

    - by David Dorf
    The world of alternate payments has been moving fast of late.  Innovation in this area will help both consumers and retailers, but probably hurt the banks (at least that's the plan).  Here are four recent news items in this area: Dwolla, a start-up in Iowa, is trying to make credit cards obsolete.  Twelve guys in Des Moines are using $1.3M they raised to allow businesses to skip the credit card networks and avoid the fees.  Today they move about $1M a day across their network with an average transaction size of $500. Instead of charging merchants 2.9% plus $.30 per transaction, Dwolla charges a quarter -- yep, that coin featuring George Washington. Dwolla (Web + Dollar = Dwolla) avoids the credit networks and connects directly to bank accounts using the bank's ACH network.  They are signing up banks and merchants targeting both B2B and C2B as well as P2P payments.  They leverage social networks to notify people they have a money transfer, and also have a mobile app that uses GPS location. However, all is not rosy.  There have been complaints about unexpected chargebacks and with debit fees being reduced by the big banks, the need is not as pronounced.  The big banks are working on their own network called clearXchange that could provide stiff competition. VeriFone just bought European payment processor Point for around $1B.  By itself this would not have caught my attention except for the fact that VeriFone also announced the acquisition of GlobalBay earlier this month.  In addition to their core business of selling stand-beside payment terminals, with GlobalBay they get employee-operated mobile selling tools and with Point they get a very big payment processing platform. MasterCard and Intel announced a partnership around payments, starting with PayPass, MasterCard's new payment technology.  Intel will lend its expertise to add additional levels of security, which seems to be the biggest barrier for consumer adoption.  Everyone is scrambling to get their piece of cash transactions, which still represents 85% of all transactions. Apple was awarded another mobile payment patent further cementing the rumors that the iPhone 5 will support NFC payments.  As usual, Apple is upsetting the apple cart (sorry) by moving control of key data from the carriers to Apple.  With Apple's vast number of iTunes accounts, they have a ready-made customer base to use the payment infrastructure, which I bet will slowly transition people away from credit cards and toward cheaper ACH.  Gary Schwartz explains the three step process Apple is taking to become a payment processor. Below is a picture I drew representing payments in the retail industry. There's certainly a lot of innovation happening.

    Read the article

  • Tom Cruise: Meet Fusion Apps UX and Feel the Speed

    - by ultan o'broin
    Unfortunately, I am old enough to remember, and now to admit that I really loved, the movie Top Gun. You know the one - Tom Cruise, US Navy F-14 ace pilot, Mr Maverick, crisis of confidence, meets woman, etc., etc. Anyway, one of the more memorable lines (there were a few) was: "I feel the need, the need for speed." I was reminded of Tom Cruise recently. Paraphrasing a certain Senior Vice President talking about Oracle Fusion Applications and user experience at an all-hands meeting, I heard that: Applications can never be too easy to use. Performance can never be too fast. Developers, assume that your code is always "on". Perfect. You cannot overstate the user experience importance of application speed to users, or at least their perception of speed. We all want that super speed of execution and performance, and increasingly so as enterprise users bring the expectations of consumer IT into the work environment. Sten Vesterli (@stenvesterli), an Oracle Fusion Applications User Experience Advocate, also addressed the speed point artfully at an Oracle Usability Advisory Board meeting in Geneva. Sten asked us, the next time we Googled something, to think about the message saying that Google has found hundreds of thousands or millions of results for us in a split second (for example, About 8,340,000 results (0.23 seconds)). Now, how many results can we see and how many can we use immediately? Yet this simple message communicating the total results available to us works a special magic about speed, delight, and excitement that Google has made its own in the search space. And, guess what? The Oracle Application Development Framework table component relies on a similar "virtual performance boost", says Sten, when it displays the first 50 records in a table, and uses a scrollbar indicating the total size of the data record set. The user scrolls and the application automatically retrieves more records as needed. Application speed and its perception by users are worth bearing in mind the next time you're at a customer site and the IT Department demands that you retrieve every record from the database. Just think of... Dave Ensor: I'll give you all the rows you ask for in one second. If you promise to use them. (Again, hat tip to Sten.) And then maybe think of... Tom Cruise. And if you want to read about the speed of Oracle Fusion Applications, and what that really means in terms of user productivity for your entire business, then check out the Oracle Applications User Experience Oracle Fusion Applications white papers on the usable apps website.
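
    To make the "first 50 records plus a scrollbar sized for the full set" idea concrete, here is a minimal sketch in C#. This is not ADF code; RecordWindow and WindowedRecordSource are illustrative names only. The point is that the initial render pays for a cheap count plus one small page, and further windows are fetched only as the user scrolls.

        // Illustrative sketch of a "virtual performance boost" table source.
        // These types are hypothetical, not part of any Oracle ADF API.
        using System.Collections.Generic;
        using System.Linq;

        public class RecordWindow<T>
        {
            public long TotalCount { get; set; }   // drives the scrollbar size
            public IList<T> Rows { get; set; }      // only the rows currently on screen
        }

        public class WindowedRecordSource<T>
        {
            private const int WindowSize = 50;      // "first 50 records" behavior
            private readonly IQueryable<T> _source;

            public WindowedRecordSource(IQueryable<T> source)
            {
                _source = source;
            }

            // Called on initial render and again whenever the user scrolls
            // past the rows already fetched.
            public RecordWindow<T> FetchWindow(int startRow)
            {
                return new RecordWindow<T>
                {
                    TotalCount = _source.LongCount(),
                    Rows = _source.Skip(startRow).Take(WindowSize).ToList()
                };
            }
        }

    The user sees the total immediately, which is what creates the perception of speed, even though only a tiny fraction of the data has actually been retrieved.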

    Read the article

  • The Minimalist Approach to Content Governance - Retire Phase

    - by Kellsey Ruppel
     Originally posted by John Brunswick. Good news - the Retire Phase is actually more fun than the Manage Phase. During the Retire Phase our content management team should not have to track down content creators if the Request Phase of this process was completed successfully. The ownership meta data, success criteria and time stamp that was applied to the original content submission will help to manage content at the end of the content life cycle. The Retire Phase will provide the opportunity for us to prune irrelevant content items through archiving or deletion, keeping the content system clear of irrelevant information, streamlining users ability to browse and search for content.   1. Act on Metrics Established during the Request Phase Why - Some information is only relevant for a given amount of time. In Content Platform Migration Strategy - Artifacts vs Perishable Content we examined two content types - Artifacts and Perishable content. Understanding the differences between Artifacts and Perishable content will allow us to explicitly respect their various lifespans. Additionally, some content may have been part of a project that failed to meet the success criteria outlined in the Request Phase. Any content that did not meet the metrics outlined in the Request Phase should be considered for deletion. How - Thankfully by adhering to to The Minimalist Approach to Content Governance our content should have some level of meta data associated with it that will allow us to quickly sort and understand how to deal with it. Content Management Systems like Oracle's Universal Content Management (UCM) natively allow you to create and save advanced searches that can use content meta data like folders, author, expiration date, security settings and custom meta data to pull back listings of content for examination. Additionally, analytics are available for all content items that allow us to determine if the usage is meeting success criteria that may have been previously outlined during the request phase. The lists that are produced from these approaches can be quickly reviewed for each project with the content owners and based on the nature of the content and success criteria undergo archiving or deletion. Impact - Retiring content that is no longer relevant will allow end users to have fast and relevant access to information across your enterprise. As we mentioned in our first post in this series - it is easy to quickly start producing content, but the challenge is ensuring that the environment is easy to navigate and use on the third week and during the third year. The light level of effort that was placed into the Request Phase of this process will set us up to keep content clean and relevant for a long time to come. With an up-to-date content repository users will be able to quickly find access to the information that is critical to their work processes. You might not get a holiday named in your honor managing the content system, but will appreciate their quick access to quality information.

    Read the article

  • WCF REST Service Activation Errors when AspNetCompatibility is enabled

    - by Rick Strahl
    I’ve been struggling with an interesting problem with WCF REST since last night and I haven’t been able to track it down. I have a WCF REST Service set up, and when accessing the .SVC file it crashes with a version mismatch for System.ServiceModel:

        Server Error in '/AspNetClient' Application.
        Could not load type 'System.ServiceModel.Activation.HttpHandler' from assembly 'System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.
        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: System.TypeLoadException: Could not load type 'System.ServiceModel.Activation.HttpHandler' from assembly 'System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.
        Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

        Stack Trace:
        [TypeLoadException: Could not load type 'System.ServiceModel.Activation.HttpHandler' from assembly 'System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.]
           System.RuntimeTypeHandle.GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMarkHandle stackMark, Boolean loadTypeFromPartialName, ObjectHandleOnStack type) +0
           System.RuntimeTypeHandle.GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark, Boolean loadTypeFromPartialName) +95
           System.RuntimeType.GetType(String typeName, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark) +54
           System.Type.GetType(String typeName, Boolean throwOnError, Boolean ignoreCase) +65
           System.Web.Compilation.BuildManager.GetType(String typeName, Boolean throwOnError, Boolean ignoreCase) +69
           System.Web.Configuration.HandlerFactoryCache.GetTypeWithAssert(String type) +38
           System.Web.Configuration.HandlerFactoryCache.GetHandlerType(String type) +13
           System.Web.Configuration.HandlerFactoryCache..ctor(String type) +19
           System.Web.HttpApplication.GetFactory(String type) +81
           System.Web.MaterializeHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +223
           System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +184

        Version Information: Microsoft .NET Framework Version:4.0.30319; ASP.NET Version:4.0.30319.1

    What’s really odd about this is that it crashes only if it runs inside of IIS (it works fine in Cassini) and only if ASP.NET Compatibility is enabled in web.config:

        <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true" />

    Arrrgh!!!!! After some experimenting and some help from Glenn Block and his teammates I was able to track down the problem in ApplicationHost.config. Specifically, the problem was that there were multiple *.svc mappings in the ApplicationHost.config file, and the older 2.0 runtime-specific versions weren’t marked for the proper runtime. Because these handlers show up at the top of the list they execute first, resulting in assembly load errors for the wrong version of the assembly. To fix this problem I ended up making a couple of changes in applicationhost.config.
    In the machine-level root’s handler mappings I had an entry that looked like this:

        <add name="svc-Integrated" path="*.svc" verb="*" type="System.ServiceModel.Activation.HttpHandler, System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" preCondition="integratedMode" />

    and it needs to be changed to this:

        <add name="svc-Integrated" path="*.svc" verb="*" type="System.ServiceModel.Activation.HttpHandler, System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" preCondition="integratedMode,runtimeVersionv2.0" />

    Notice the explicit runtime version assignment in the preCondition attribute, which is key to keeping ASP.NET 4.0 from executing that handler. The point is that the runtime version needs to be set explicitly so that the various *.svc handlers don’t fire purely in the order defined, which in the case of a .NET 4.0 app with the original setting would result in an incompatible version of System.ServiceModel being loaded. What made this really hard to track down is that even when looking in the debugger when launching the Web app, the AppDomain assembly loads showed System.ServiceModel V4.0 starting up just fine. Apparently the ASP.NET runtime load occurs at a different point, and that’s when things break. So how did this break? According to the Microsoft folks it’s some older tools that got installed that change the default service handlers. There’s a blog entry that points at this problem with more detail: http://blogs.iis.net/webtopics/archive/2010/04/28/system-typeloadexception-for-system-servicemodel-activation-httpmodule-in-asp-net-4.aspx Note that I tried running aspnet_regiis and that did not fix the problem for me. I had to manually change the entries in applicationhost.config. © Rick Strahl, West Wind Technologies, 2005-2011. Posted in AJAX, ASP.NET, WCF

    Read the article

  • How to Code Faster (Without Sacrificing Quality)

    - by ashes999
    I've been a professional coder for several years. The comments about my code have generally been the same: writes great code, well-tested, but could be faster. So how do I become a faster coder, without sacrificing quality? For the sake of this question, I'm going to limit the scope to C#, since that's primarily what I code (for fun) -- or Java, which is similar enough in many ways that matter. Things that I'm already doing: Write the minimal solution that will get the job done Write a slew of automated tests (prevents regressions) Write (and use) reusable libraries for all kinds of things Use well-known technologies where they work well (eg. Hibernate) Use design patterns where they fit into place (eg. Singleton) These are all great, but I don't feel like my speed is increasing over time. I do care, because if I can do something to increase my productivity (even by 10%), that's 10% faster than my competitors. (Not that I have any.) Besides which, I've consistently gotten this feedback from my managers -- whether it was small-scale Flash development or enterprise Java/C++ development. Edit: There seem to be a lot of questions about what I mean by fast, and how I know I'm slow. Let me clarify with some more details. I worked in small and medium-sized teams (5-50 people) in various companies over various projects and various technologies (Flash, ASP.NET, Java, C++). The observation of my managers (which they told me directly) is that I'm "slow." Part of this is because a significant number of my peers sacrificed quality for speed; they wrote code that was buggy, hard to read, hard to maintain, and difficult to write automated tests for. My code generally is well-documented, readable, and testable. At Oracle, I would consistently solve bugs slower than other team members. I know this, because I would get comments to that effect; this means that other (yes, more senior and experienced) developers could do my work in less time than it took me, at nearly the same quality (readability, maintainability, and testability). Why? What am I missing? How can I get better at this? My end goal is simple: if I can make product X in 40 hours today, and I can improve myself somehow so that I can create the same product in 20, 30, or even 38 hours tomorrow, that's what I want to know -- how do I get there? What process can I use to continually improve? I had thought it was about reusing code, but that's not enough, it seems.

    Read the article

  • SQL SERVER – DMV – sys.dm_exec_query_optimizer_info – Statistics of Optimizer

    - by pinaldave
    Incredibly, SQL Server has so much information to share with us. Every single day, I am amazed by this SQL Server technology. Sometimes I find several interesting pieces of information just by querying a few of the DMVs. And when I present this info in front of my client during a performance tuning consultancy, they are surprised by my findings. Today, I am going to share one of the hidden gems among the DMVs with you, the one which I frequently use to understand what's going on under the hood of SQL Server. SQL Server keeps a record of most of the operations of the Query Optimizer. We can learn many interesting details about the optimizer which can be utilized to improve the performance of the server.

        SELECT * FROM sys.dm_exec_query_optimizer_info
        WHERE counter IN ('optimizations', 'elapsed time', 'final cost',
            'insert stmt', 'delete stmt', 'update stmt',
            'merge stmt', 'contains subquery', 'tables',
            'hints', 'order hint', 'join hint',
            'view reference', 'remote query', 'maximum DOP',
            'maximum recursion level', 'indexed views loaded',
            'indexed views matched', 'indexed views used',
            'indexed views updated', 'dynamic cursor request',
            'fast forward cursor request')

    All occurrence values are cumulative and are set to 0 at system restart. All values for value fields are set to NULL at system restart. I have removed a few of the internal counters from the script above, and kept only documented details. Let us check the result of the above query. As you can see, there is so much vital information revealed by the above query. I can easily say so many things about how many times the Optimizer was triggered and what the average time taken by it to optimize my queries was. Additionally, I can also determine how many times update, insert or delete statements were optimized. I was able to quickly figure out that my client was overusing the Query Hints using this dynamic management view. If you have been reading my blog, I am sure you are aware of my series related to SQL Server Views, SQL SERVER – The Limitations of the Views – Eleven and more…. With this, I can take a quick look and figure out how many times Views were used in various solutions within the query. Moreover, you can easily know what fraction of the optimizations has been involved in tuning the server. For example, the following query tells me what fraction of the total optimizations involved a view reference. As this also includes system Views and DMVs, the number is a bit higher on my machine.

        SELECT
            (SELECT CAST (occurrence AS FLOAT) FROM sys.dm_exec_query_optimizer_info WHERE counter = 'view reference') /
            (SELECT CAST (occurrence AS FLOAT) FROM sys.dm_exec_query_optimizer_info WHERE counter = 'optimizations')
            AS ViewReferencedFraction

    Reference : Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL DMV, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • SQL SERVER – Maximize Database Performance with DB Optimizer – SQL in Sixty Seconds #054

    - by Pinal Dave
    Performance tuning is an interesting concept and everybody evaluates it differently. Every developer and DBA has a different opinion about how one should do performance tuning. I personally believe performance tuning is a three-step process: understanding the query, identifying the bottleneck, and implementing the fix. When we are working with a large database application and it suddenly starts to slow down, we are all under stress about how we can get the database back to normal speed. Most of the time we do not have enough time to do a deep analysis of what is going wrong, as well as what will fix the problem. Our primary goal at that time is just to fix the database problem as fast as we can. However, one very important thing we need to keep in mind is that a quick fix should not create any further issues with other parts of the system. When time is of the essence and we still want a deep analysis of our system to give us the best solution, we often tend to make mistakes. Sometimes we make mistakes because we do not have proper time to analyze the entire system. Here is what I do when I face such a situation – I take the help of DB Optimizer. It is a fantastic tool and does superlative performance tuning of the system. Every time I talk about a performance tuning tool, the initial reaction of people is that they do not want to try it, as they believe it requires a lot of learning before they can use it. That is absolutely not the case with DB Optimizer. It is a very easy-to-use and intuitive tool. One can get going with the product in no time. Here is a quick video I have built where I demonstrate how we can identify what index is missing for a query and how we can quickly create that index. All three steps of query tuning are completed in less than 60 seconds. If you are into performance tuning and query optimization you should download DB Optimizer and give it a go. Let us see the same concept in the following SQL in Sixty Seconds Video: You can download DB Optimizer and reproduce the same Sixty Seconds experience. Related Tips in SQL in Sixty Seconds: Performance Tuning – Part 1 of 2 – Getting Started and Configuration Performance Tuning – Part 2 of 2 – Analysis, Detection, Tuning and Optimizing What would you like to see in the next SQL in Sixty Seconds video? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Interview Questions and Answers, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology, Video Tagged: Identity

    Read the article

  • Oracle Financial Analytics for SAP Certified with Oracle Data Integrator EE

    - by denis.gray
    Two days ago Oracle announced the release of Oracle Financial Analytics for SAP.  With the amount of press this has garnered in the past two days, there's a key detail that can't be missed.  This release is certified with Oracle Data Integrator EE - now making the combination of Data Integration and Business Intelligence a force to contend with.  Within the Oracle Press Release there were two important bullets: ·         Oracle Financial Analytics for SAP includes a pre-packaged ABAP code compliant adapter and is certified with Oracle Data Integrator Enterprise Edition to integrate SAP Financial Accounting data directly with the analytic application.  ·         Helping to integrate SAP financial data and disparate third-party data sources is Oracle Data Integrator Enterprise Edition which delivers fast, efficient loading and transformation of timely data into a data warehouse environment through its high-performance Extract Load and Transform (E-LT) technology. This is very exciting news, demonstrating Oracle's overall commitment to Oracle Data Integrator EE.   This is a great way to start off the new year and we look forward to building on this momentum throughout 2011.   The following links contain additional information and media responses about the Oracle Financial Analytics for SAP release. IDG News Service (Also appeared in PC World, Computer World, CIO: "Oracle is moving further into rival SAP's turf with Oracle Financial Analytics for SAP, a new BI (business intelligence) application that can crunch ERP (enterprise resource planning) system financial data for insights." Information Week: "Oracle talks a good game about the appeal of an optimized, all-Oracle stack. But the company also recognizes that we live in a predominantly heterogeneous IT world" CRN: "While some businesses with SAP Financial Accounting already use Oracle BI, those integrations had to be custom developed. The new offering provides pre-built integration capabilities." ECRM Guide:  "Among other features, Oracle Financial Analytics for SAP helps front-line managers improve financial performance and decision-making with what the company says is comprehensive, timely and role-based information on their departments' expenses and revenue contributions."   SAP Getting Started Guide for ODI on OTN: http://www.oracle.com/technetwork/middleware/data-integrator/learnmore/index.html For more information on the ODI and its SAP connectivity please review the Oracle® Fusion Middleware Application Adapters Guide for Oracle Data Integrator11g Release 1 (11.1.1)

    Read the article

  • A debugging experience with "highly compatible" ASP.NET 4.5

    - by Jeff
    I have to admit that I will pretty much upgrade software for no reason other than being on the latest version. I won't do it if it's super expensive (Adobe gets money from me about once every three or four years at best), but particularly with frameworks and stuff generally available as part of my MSDN subscription, I'll be bleeding edge. CoasterBuzz was running on the MVC 4 framework pretty much as soon as they did a "go live" license for it. I didn't really jump in head-first with Windows 8 and Visual Studio 2012, in part because I just wasn't interested in doing the reinstalls for each new version. Turns out there weren't that many revisions anyway. But when the final versions were released a week and a half ago, I jumped in. I saw on one of the Microsoft sites that .Net 4.5 was a "highly compatible in-place update" to the framework. Good enough for me. I was obviously running it by default in Windows 8, and installed it on my production server. I suppose it's "highly compatible," except when it isn't. Three of my sites are running with various flavors of the MVC version of POP Forums. All of them stopped working under ASP.NET 4.5. It was not immediately obvious what the problem might be beyond an exception indicating that there were no repository classes registered with Ninject, which I use for dependency injection in the forums. This was made all the more weird by the fact that it ran fine locally in the dev Web host. My first instinct was to spin up a Windows Server VM on my local box and put the remote debugger on it. (Side note: running multiple VM's on a Retina MacBook Pro with 16 gigs of RAM is pretty much the most awesome thing ever. I can't believe this computer is for real, and not a 50-pound tower under my desk.) What might have been going on in IIS that doesn't happen in Visual Studio? In the debugging process, I realized that I might be looking in the wrong place. POP Forums creates a Ninject container using a method called from a PreApplicationStartMethod attribute, and at that time registers a module (what Ninject uses to map interfaces to implementations) that maps all of the core dependencies. It also creates an instance of an HttpModule that originally hosted the "services" (search indexing, mailer, etc.), but now just records errors. That's all well and good, but the actual repository mapping, where data is actually read or persisted, happens in Application_Start() in global.asax. The idea there is that you can swap out the SqlSingleWebServer repos for something tuned for multiple servers, Oracle or something else. Of course, if I used something like StructureMap, which does convention-based mapping for dependency injection (a class implementing ISettingsRepository called SettingsRepository is automagically mapped), I wouldn't have to worry about it. In any case, the HttpModule, being instantiated before Application_Start() gets to run, would throw because there was no repo mapped where it could get settings from the database. This makes total sense. The fix is sort of a hack, where I don't setup the innards of the HttpModule until a call to its BeginRequest is made. I say it's a hack, because its primary function, logging exceptions, won't work until the app has warmed up. Still, this brings up an interesting question about the race condition, and what changed in 4.5 when it's running in IIS. 
In ASP.NET 4, it would appear that the code called via the PreApplicationStartMethod was either failing silently, and running again later, or it was getting to that code after Application_Start was called. In any case, weird thing. The real pain point I'm experiencing now is a bug in MVC 4 that is extremely serious because it renders the mobile/alternate view functionality very much broken.
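
    As a rough illustration of the "don't set up the innards until BeginRequest" workaround described above, here is a minimal C# sketch. This is not the actual POP Forums source; DeferredErrorLoggingModule, IErrorLog and ServiceLocator.Resolve are hypothetical names standing in for the module and the dependency-resolution call. The idea is simply that Init() does nothing expensive, and the first request, which by definition runs after Application_Start(), wires up the database-backed pieces.

        using System;
        using System.Web;

        // Hypothetical error-logging module illustrating the deferred-setup hack.
        public class DeferredErrorLoggingModule : IHttpModule
        {
            private bool _initialized;
            private IErrorLog _errorLog;   // assumed interface; resolved from the container later

            public void Init(HttpApplication context)
            {
                // Runs very early (possibly before Application_Start has mapped the
                // repositories), so only hook events here - no database access.
                context.BeginRequest += OnBeginRequest;
                context.Error += OnError;
            }

            private void OnBeginRequest(object sender, EventArgs e)
            {
                if (_initialized) return;
                // By the time the first request arrives, Application_Start has run
                // and the repository mappings exist, so resolving is now safe.
                _errorLog = ServiceLocator.Resolve<IErrorLog>(); // assumed resolver, not a real API
                _initialized = true;
            }

            private void OnError(object sender, EventArgs e)
            {
                var app = (HttpApplication)sender;
                // Until the app has warmed up, errors are simply not recorded -
                // which is exactly the trade-off described above.
                if (_errorLog != null)
                    _errorLog.Record(app.Server.GetLastError());
            }

            public void Dispose() { }
        }

    It is a hack rather than a fix for the underlying race, but it keeps the module from touching the database before the repositories are registered.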

    Read the article

  • Oracle AIM, Oracle ABF, and Siebel Results Roadmap Officially Retired as of January 31, 2011

    - by tom.spitz
    It seems somehow appropriate that the first entry of the Oracle® Unified Method (OUM) blog is about the retirement of several of our legacy methods, most notably AIM Foundation. If you're reading this, you're probably aware that Oracle has been developing OUM to support the entire Enterprise IT lifecycle, including support for the successful implementation of every Oracle product. As Oracle has continued to acquire new companies and technologies, it has become essential that we also create a single, unified language and approach for implementation - across the Oracle ecosystem. With the release of OUM 5.1 in 2009, OUM provided full support for all enterprise application implementation projects including Oracle E-Business Suite R12, Siebel CRM, PeopleSoft Enterprise, and JD Edwards EnterpriseOne projects. In 2010, we released OUM training that supports the use of OUM on these types of projects. That support represented a major milestone in the evolution of OUM and enabled implementers to transition to OUM. Consequently, we announced a staggered retirement schedule for Oracle's legacy methods. On January 31, 2011 we announced the retirement of: Oracle Application Implementation Method (AIM), Oracle AIM for Business Flows (ABF), and Siebel Results Roadmap. Later this year, we will announce the retirement of Compass - the legacy PeopleSoft method - and Data Warehouse Method Fast Track. OUM is available free of charge to Oracle Gold, Platinum, and Diamond partners through the Oracle Partner Network (OPN) [OUM on OPN]. The OUM Customer Program allows customers to obtain copies of the method for their internal use by contracting with Oracle for an engagement of two weeks or longer meeting some additional minimum criteria. There will be more retirement announcements in the coming months. For now it's "Adios AIM." Thanks for the memories...

    Read the article

  • Exadata support for ACFS (and thus, 10gR2) now available!

    - by Robert Freeman
    Really? Exadata, ACFS and 10gR2? If you work with Exadata you are probably aware that ACFS has not been supported - until now! ACFS is now supported on Exadata if you are running Grid Infrastructure version 12.1.0.2 or later. This new support is described in MOS note 1326938.1. Also, Exadata support for ACFS is mentioned in MOS note 888828.1, which is the king of all Exadata notes on MOS. The upshot is that you can now run Oracle Database 10gR2 on Exadata using ACFS as the storage for the Oracle Database.
    Don't Over React and just Throw Everything on ACFS!
    First, let's be clear that ACFS is not an alternative for running your Exadata databases on ASM. If you are running any production or non-production performance-sensitive Oracle databases on 11.2 or 12.1, then you should be running them on ASM disks that are associated with the storage cells. The use case for ACFS is generally limited to the following: Running any Oracle 10gR2 databases on Exadata. Running Oracle 11gR2 development or test databases that require rapid cloning, and that do not require the performance benefits of the Exadata storage cells. If you are running Oracle Database 12c and you need snapshot/clone kinds of capabilities, then you should be using Oracle Multitenant and the features present in that option (remember though that Multitenant is a licensed option).
    The Fine Print
    There are some requirements that you will need to meet if you are going to run ACFS on Exadata. These are: You have to use Oracle Linux. You must use GI 12.1.0.2 or later. If you wish to use HCC then you must apply the fix for bug 19136936 to your system. This bug and its associated patch do not appear on MOS (as of the time that I wrote this), so you will need to open an SR and get support to provide the patch for you.
    The Best Use Case for ACFS
    Even though Oracle Database 10gR2 is at end of life, it remains in use in a large number of places. This has caused problems when choosing to implement Exadata as a consolidation platform, or when choosing it during a hardware refresh process. Now that ACFS is supported, Exadata has become even more flexible and affords customers greater flexibility when migrating to Exadata and Engineered Systems. While all of the features of Exadata might not be available to a 10.2.0.4 database, certainly the improved processing capabilities of Exadata - with its fast as heck InfiniBand network fabric, additional memory, reduced power requirements and a whole host of other features - justify moving these databases to Exadata now. This will also make it easier to upgrade these databases when the time comes!

    Read the article

  • Best advice for setting up Ubuntu on my mother's computer

    - by idealmachine
    Intended use My mother had an old Compaq desktop computer running Windows 98, which she used for occasional Web browsing and playing cards. The name of her card game is Hoyle Card Games 3. Although I had to repair it several times over the last 10 years, it worked fine until it finally died at the end of last year. Hardware specifications A relative brought up a newer computer soon afterward: Operating system: Windows XP Asus K8N motherboard (with broken on-board sound; getting a sound card) Athlon 64? processor (don't remember the clock speed) 512 MB RAM Hope the graphics card works... Replacement sound card will be one of: Ensoniq ES1370 AudioPCI Diamond Monster Sound MX300 (Aureal chipset) Sound Blaster Audigy 2 SE Peripherals HP Scanjet 3400c scanner (USB connected) HP LaserJet multi-function printer (parallel port connected, and printing works with a PCL driver) Same serial mouse as old computer Question I had set up an SSH/VNC connection to allow for remotely working out problems. Or so I thought. A month later, the computer would not boot, rendering the SSH connection useless and an OS reinstall necessary. Unfortunately, I have neither the original Windows disc nor the product key. Unless I were to pay $200 for a full Windows 7 Home Premium license for my computer, I would not be able to re-install Windows XP on hers. I consider myself an advanced Linux user, having used Debian for years. So here are my questions. I have only one day to decide whether to use Ubuntu or buy Windows: A quick search leads me to believe all the hardware listed above is supposed to work with Linux, but am I mistaken? Would Ubuntu/Xubuntu suffice (specify which one if it matters), or would I be better off paying the $200 necessary for Windows XP? Is the card game likely to run on Wine? I believe the minimum system requirement is Windows 95. Failing Wine compatibility, will VirtualBox run fast enough on such a computer (Windows 98 as the guest OS)? Are there any free card games just as good? She plays mainly Bridge, Poker, and Solitaire. Is there any "Large Fonts" option for those with poor vision? The lack of it would be a big disadvantage. BONUS: Although I would probably replace the old mouse upon a move to Ubuntu, is it even possible to get a serial mouse working?

    Read the article

  • Dell XPS 15 (L502x) sound problem

    - by lauramolenaar
    I have a problem with my sound on my Dell XPS 15. First, when I had Windows 7, my sound was pretty good (I have a JBL 2.1 speaker system with Waves Maxx audio), but since I installed Ubuntu 12.04 it sounded very cheap and as if I put my laptop in a tin can. I've already tried installing alsa-hda-dkms from the alsa-daily ppa (http://ppa.launchpad.net/ubuntu-audio-dev/alsa-daily). This is my audio controller:

        laura@laura-XPS-L502X:~$ lspci -v | grep -A7 -i audio
        00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 05)
            Subsystem: Dell Device 04b6
            Flags: bus master, fast devsel, latency 0, IRQ 51
            Memory at f1c00000 (64-bit, non-prefetchable) [size=16K]
            Capabilities: <access denied>
            Kernel driver in use: snd_hda_intel
            Kernel modules: snd-hda-intel

    I hope you can help me, and you can always ask me for more information. Result of aplay -l:

        **** List of PLAYBACK Hardware Devices ****
        card 0: PCH [HDA Intel PCH], device 0: ALC665 Analog [ALC665 Analog]
          Subdevices: 0/1
          Subdevice #0: subdevice #0
        card 0: PCH [HDA Intel PCH], device 1: ALC665 Digital [ALC665 Digital]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 0: PCH [HDA Intel PCH], device 3: HDMI 0 [HDMI 0]
          Subdevices: 1/1
          Subdevice #0: subdevice #0

    Result of aplay -L:

        default
            Playback/recording through the PulseAudio sound server
        sysdefault:CARD=PCH
            HDA Intel PCH, ALC665 Analog
            Default Audio Device
        front:CARD=PCH,DEV=0
            HDA Intel PCH, ALC665 Analog
            Front speakers
        surround40:CARD=PCH,DEV=0
            HDA Intel PCH, ALC665 Analog
            4.0 Surround output to Front and Rear speakers
        surround41:CARD=PCH,DEV=0
            HDA Intel PCH, ALC665 Analog
            4.1 Surround output to Front, Rear and Subwoofer speakers
        surround50:CARD=PCH,DEV=0
            HDA Intel PCH, ALC665 Analog
            5.0 Surround output to Front, Center and Rear speakers
        surround51:CARD=PCH,DEV=0
            HDA Intel PCH, ALC665 Analog
            5.1 Surround output to Front, Center, Rear and Subwoofer speakers
        surround71:CARD=PCH,DEV=0
            HDA Intel PCH, ALC665 Analog
            7.1 Surround output to Front, Center, Side, Rear and Woofer speakers
        iec958:CARD=PCH,DEV=0
            HDA Intel PCH, ALC665 Digital
            IEC958 (S/PDIF) Digital Audio Output
        hdmi:CARD=PCH,DEV=0
            HDA Intel PCH, HDMI 0
            HDMI Audio Output
        dmix:CARD=PCH,DEV=0
            HDA Intel PCH, ALC665 Analog
            Direct sample mixing device
        dmix:CARD=PCH,DEV=1
            HDA Intel PCH, ALC665 Digital
            Direct sample mixing device
        dmix:CARD=PCH,DEV=3
            HDA Intel PCH, HDMI 0
            Direct sample mixing device
        dsnoop:CARD=PCH,DEV=0
            HDA Intel PCH, ALC665 Analog
            Direct sample snooping device
        dsnoop:CARD=PCH,DEV=1
            HDA Intel PCH, ALC665 Digital
            Direct sample snooping device
        dsnoop:CARD=PCH,DEV=3
            HDA Intel PCH, HDMI 0
            Direct sample snooping device
        hw:CARD=PCH,DEV=0
            HDA Intel PCH, ALC665 Analog
            Direct hardware device without any conversions
        hw:CARD=PCH,DEV=1
            HDA Intel PCH, ALC665 Digital
            Direct hardware device without any conversions
        hw:CARD=PCH,DEV=3
            HDA Intel PCH, HDMI 0
            Direct hardware device without any conversions
        plughw:CARD=PCH,DEV=0
            HDA Intel PCH, ALC665 Analog
            Hardware device with all software conversions
        plughw:CARD=PCH,DEV=1
            HDA Intel PCH, ALC665 Digital
            Hardware device with all software conversions
        plughw:CARD=PCH,DEV=3
            HDA Intel PCH, HDMI 0
            Hardware device with all software conversions

    Read the article

  • Spritebatch node animation appears to be broken in cocos2d-x 2.0.3

    - by George Host
    Hi, I have spent approximately 2 days trying to get this to work, doing Google searches left and right, and I did get it working except for sprite batch nodes. So in my class I am able to load kuwalio_stand.png, and I tested kuwalio_walk1.png, 2 and 3 from the FrameCache(). They work for sure, 100%. I run this code and it does not animate. Does anyone else have the same issue with sprite batch nodes?

        cocos2d::CCSprite * player = Player::create();
        player->setPosition(cocos2d::CCPointMake(0.0f, 0.0f));
        player->setDisplayFrame(cocos2d::CCSpriteFrameCache::sharedSpriteFrameCache()->spriteFrameByName("kuwalio_stand.png"));
        player->setTag(PlayerTag);

        cocos2d::CCAnimation * walk = cocos2d::CCAnimation::create();
        cocos2d::CCSpriteFrame * walk1 = cocos2d::CCSpriteFrameCache::sharedSpriteFrameCache()->spriteFrameByName("kuwalio_walk1.png");
        cocos2d::CCSpriteFrame * walk2 = cocos2d::CCSpriteFrameCache::sharedSpriteFrameCache()->spriteFrameByName("kuwalio_walk2.png");
        cocos2d::CCSpriteFrame * walk3 = cocos2d::CCSpriteFrameCache::sharedSpriteFrameCache()->spriteFrameByName("kuwalio_walk3.png");
        walk->addSpriteFrame(walk1);
        walk->addSpriteFrame(walk2);
        walk->addSpriteFrame(walk3);

        cocos2d::CCAnimate * actionWalk = cocos2d::CCAnimate::create(walk);
        cocos2d::CCRepeatForever * actionRepeat = cocos2d::CCRepeatForever::create(actionWalk);
        walk->setDelayPerUnit(0.1f);
        actionWalk->setDuration(10.1f);
        player->runAction(actionRepeat);

        // Change camera to a soft follow camera.
        this->runAction(cocos2d::CCFollow::create(player));
        mSceneSpriteBatchNode->addChild(player);

        // Have the CCNode object run its virtual update function as fast as possible.
        // Every frame for this layer.
        this->scheduleUpdate();

    Counter example without the sprite batch node...

        cocos2d::CCSprite * sprite = cocos2d::CCSprite::create("kuwalio_walk1.png");
        this->addChild(sprite, 0);
        sprite->setPosition(cocos2d::CCPointMake(60, 60));
        sprite->retain();

        cocos2d::CCAnimation * actionAnimation = cocos2d::CCAnimation::create();
        actionAnimation->setDelayPerUnit(0.01f);
        actionAnimation->retain();
        actionAnimation->addSpriteFrameWithFileName("kuwalio_walk1.png");
        actionAnimation->addSpriteFrameWithFileName("kuwalio_walk2.png");
        actionAnimation->addSpriteFrameWithFileName("kuwalio_walk3.png");

        cocos2d::CCAnimate * a = cocos2d::CCAnimate::create(actionAnimation);
        a->setDuration(0.10f);
        cocos2d::CCRepeatForever * actionRepeat = cocos2d::CCRepeatForever::create(a);
        sprite->runAction(actionRepeat);

    Read the article

  • .NET development on a “Retina” MacBook Pro

    - by Jeff
    The rumor that Apple would release a super high resolution version of its 15” laptop has been around for quite awhile, and one I watched closely. After more than three years with a 17” MacBook Pro, and all of the screen real estate it offered, I was ready to replace it with something much lighter. It was a fantastic machine, still doing 6 or 7 hours after 460 charge cycles, but I wanted lighter and faster. With the SSD I put in it, I was able to sell it for $750. The appeal of higher resolution goes way back, when I would plug into a projector and scale up. Consolas, as it turns out, is a nice looking font for code when it’s bigger. While I have mostly indifference for iOS, I have to admit that a higher dot pitch on the iPhone and iPad is pretty to look at. So I ordered the new 15” “Retina” model as soon as the Apple Store went live with it, and got it seven days later. I’ve been primarily using Parallels as my VM of choice from OS X for about five years. They recently put out an update for compatibility with the display, though I’m not entirely sure what that means. I figured there would have to be some messing around to get the VM to look right. The combination that seems to work best is this: Set the display in OS X to “more room,” which is roughly the equivalent of the 1920x1200 that my 17” did. It’s not as stunning as the text at the default 1440x900 equivalent (in OS X), but it’s still quite readable. Parallels still doesn’t entirely know what to do with the high resolution, though what it should do is somehow treat it as native. That flaw aside, I set the Windows 7 scaling to 125%, and it generally looks pretty good. It’s not really taking advantage of the display for sharpness, but hopefully that’s something that Parallels will figure out. Screen tweaking aside, I got the base model with 16 gigs of RAM, so I give the VM 8. I can boot a Windows 7 VM in 9 seconds. Nine seconds! The Windows Experience Index scores are all 7 and above, except for graphics, which are both at 6. Again, that’s in a VM. It’s hard to believe there’s something so fast in a little slim package like that. Hopefully this one gets me at least three years, like the last one.

    Read the article

  • links for 2010-04-14

    - by Bob Rhubart
    Why business needs should shape IT architecture - McKinsey Quarterly - Business Technology - Organization "Too often, efforts to fix architecture issues remain rooted in a company’s IT practices, culture, and leadership. The reason, in part, is that the chief architect—the overall IT-architecture program leader—is frequently selected from within the technical ranks, bringing deep IT know-how but little direct experience or influence in leading a business-wide change program. A weak linkage to the business creates a void that limits the quality of the resulting IT architecture and the organization’s ability to enforce and sustain the benefits of implementation over time." -- Helge Buckow and Stéphane Rey (tags: architecture it technology enterprise mckinsey) Eric Maurice: April 2010 Critical Patch Update Released Eric Maurice offers the details on April 2010 Critical Patch Update (CPUApr2010), "the first one to include security fixes for Oracle Solaris" (tags: oracle otn database fusionmiddleware peoplesoft security) @shivmohan: Oracle – OAF – Oracle Application Framework – OA Framework "For all the PL/SQL and Oracle Forms developers out there, start planning your evolution. Sure PL/SQL and Forms will be around for some time, but you need to add more skills to your stack if you want to stay current (employable)." -- Shivmohan Purohit (tags: oracle otn application framework) @ORACLENERD: APEX Architecture Oracle ACE Chet Justice offer a "short list of potential architectures" for Oracle APEX, based on his experience with a client. (tags: oracle otn oracleace apex architecture) Luis Moreno Campos: Why is Exadata so fast? "You could find a lot of tech doc around oracle.com, but the bottom line is that the vision to even build a V2 and place it as an OLTP and DW (general purpose) machine is just pure genius." -- Luis Moreno Campos (tags: oracle otn exadata database) Edwin Biemond: Resetting Weblogic datasources with ANT Oracle ACE and Whitehorses architect Edwin Biemond shares an ANT script "to fire some WLST and Python commandos" to correct invalid database session states. (tags: oracle otn oracleace database ANT Python) @deltalounge: The future of MySQL with Oracle Peter Paul van de Beek has compiled an informative collection of Edward Scriven quotes, from various publications, on Oracle's plans for MySQL. (tags: oracle otn database mysql) Cristobal Soto: Coherence Special Interest Group: First Meeting in Toronto, Upcoming Events in New York and California Cameron Purdy, Patrick Peralta, and others are speaking at upcoming Coherence SIG events. Cristobal Soto shares the details. (tags: oracle otn coherence sig grid appserver)

    Read the article

  • Computer Visionaries 2014 Kinect Hackathon

    - by T
    Originally posted on: http://geekswithblogs.net/tburger/archive/2014/08/08/computer-visionaries-2014-kinect-hackathon.aspxA big thank you to Computer Vision Dallas and Microsoft for putting together the Computer Visionaries 2014 Kinect Hackathon that took place July 18th and 19th 2014.  Our team had a great time and learned a lot from the Kinect MVP's and Microsoft team.  The Dallas Entrepreneur Center was a fantastic venue. In total, 114 people showed up to form 15 teams. Burger ITS & Friends team members with Ben Lower:  Shawn Weisfeld, Teresa Burger, Robert Burger, Harold Pulcher, Taylor Woolley, Cori Drew (not pictured), and Katlyn Drew (not pictured) We arrived Friday after a long day of work/driving.  Originally, our idea was to make a learning game for kids.  It was intended to be multi-simultaneous players dragging and dropping tiles into a canvas area for kids around 5 years old. We quickly learned that we were limited to two simultaneous players. After working on the game for the rest of the evening and into the next morning we decided that a fast multi-player game with hand gestures was not going to happen without going beyond what was provided with the API. If we were going to have something to show, it was time to switch gears. The next idea on the table was the Photo Anywhere Kiosk. The user can use voice and hand gestures to pick a place they would like to be.  After the user says a place (or anything they want) and then the word "search", the app uses Bing to display a bunch of images for him/her to choose from. With the use of hand gesture (grab and slide to move back and forth and push/pull to select an image) the user can get the perfect image to pose with. I couldn't get a snippet with the hand but when a the app is in use, a hand shows up to cue the user to use their hand to control it's movement. Once they chose an image, we use the Kinect background removal feature to super impose the user on that image. When they are in the perfect position, they say "save" to save the image. Currently, the image is saved in the images folder on the users account but there are many possibilities such as emailing it, posting to social media, etc.. The competition was great and we were honored to be recognized for third place. Other related posts: http://jasongfox.com/computer-visionaries-2014-incredible-success/ A couple of us are continuing to work on the kid's game and are going to make it a Windows 8 multi-player game without Kinect functionality. Stay tuned for more updates.

    Read the article

  • The Whole Enchilada — Fusion Supply Chain in the Cloud

    - by Kathryn Perry
    A guest post by Tyra Crockett, Senior Manager at Oracle No other vendor can offer everything in the cloud the way Oracle can. You can get HR from Workday and CRM from Salesforce, but you can get the whole enchilada—HCM, CRM and ERP—all from Oracle on one platform. If you’re thinking about using Oracle's Cloud Services to implement the newest Oracle Fusion Supply Chain applications, this post is for you. Point #1: The Oracle Cloud Applications Services portfolio includes ERP cloud services which are flexible and can adapt to fill your supply chain needs. For example, you might be opening a small distribution facility in California, but don’t have the time or IT resources to warrant a full scale supply chain implementation. You can use Oracle’s Cloud to implement the Oracle Fusion Supply Chain applications you need without an increase in IT staff or hardware. Then as your business grows, you can add more features and applications to your cloud.   Point #2: Whether you’re implementing a slice of the Fusion Procurement pie, or the entire ERP portfolio, you want to be up and running fast with low upfront costs and investment risks. That’s where you can trust a world-class technology organization like Oracle. Your SaaS subscription-based deployment model will take away the headaches associated with determining your software costs. You also will be able to eliminate expensive customizations and configure your deployment as you like, saving you time and money during the initial stages and upon upgrade. Point #3: Another great benefit of operating your Oracle Fusion Supply Chain in the cloud is the opportunity to standardize your processes across your entire supply chain. You can institute processes in San Francisco and be confident they will be followed in Mexico City and Hong Kong. Point #4: If data security is a concern – and it is for most of us – Oracle-managed cloud services give you the comfort of knowing that your data will always be there when you need it. You will not have to manage the IT services associated with patching and upgrade. They will be taken care of automatically. This enables you to focus on what you do best: managing your business. Point #5: Cloud services aren’t an either/or proposition. You might have very good business reasons for choosing a hybrid model -- running some applications in the cloud and others on premise. That allows you to leverage your own IT department, when and where you need to, and shift focus when necessary. I urge you to take a hard look at the Oracle Fusion Supply Chain applications running in the cloud. These solutions running alongside your existing legacy systems can solve your toughest business challenges as you move forward in the 21st century.

    Read the article

  • How To Approach 360 Degree Snake

    - by Austin Brunkhorst
    I've recently gotten into XNA and must say I love it. As sort of a hello world game I decided to create the classic game "Snake". The 90 degree version was very simple and easy to implement. But as I try to make a version of it that allows 360 degree rotation using left and right arrows, I've come into sort of a problem. What I'm doing now stems from the 90 degree version: iterating through each snake body part beginning at the tail, and ending right before the head. This works great when moving every 100 milliseconds. The problem with this is that it makes for a choppy style of gameplay, as technically the game progresses at only 6 fps rather than its potential 60. I would like to move the snake every game loop. But unfortunately, because the snake moves at the rate of its head's size, it goes way too fast. This would mean that the head would need to move at a much smaller increment such as (2, 2) in its direction rather than what I have now (32, 32). Because I've been working on this game off and on for a couple of weeks while managing school, I think that I've been thinking too hard on how to accomplish this. It's probably a simple solution, I'm just not catching it. Here's some pseudo code for what I've tried based off of what makes sense to me. I can't really think of another way to do it.

        for(int i = SnakeLength - 1; i > 0; i--){
            current = SnakePart[i], next = SnakePart[i - 1];
            current.x = next.x - (current.width * cos(next.angle));
            current.y = next.y - (current.height * sin(next.angle));
            current.angle = next.angle;
        }
        SnakeHead.x += cos(SnakeAngle) * SnakeSpeed;
        SnakeHead.y += sin(SnakeAngle) * SnakeSpeed;

    This produces something like this: Code in Action. As you can see, each part always stays behind the head and doesn't make a "Trail" effect. A perfect example of what I'm going for can be found here: Data Worm. Not the viewport rotation but the trailing effect of the triangles. Thanks for any help!
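
    For reference, one common way to get a Data Worm-style trail (this sketch is not from the original question; Segment and SnakeTrail are illustrative names and the constants are assumptions) is to record the head's position every frame and place each body part a fixed distance back along that recorded path, rather than computing it from the part ahead of it:

        // Illustrative C# sketch of a position-history trail, under assumed values
        // for segment spacing (32) and per-frame head speed (2).
        using System;
        using System.Collections.Generic;

        public class Segment { public float X, Y, Angle; }

        public class SnakeTrail
        {
            private struct Point { public float X; public float Y; }

            private readonly List<Point> _history = new List<Point>();
            private const float SegmentSpacing = 32f;  // distance between body parts
            private const float SnakeSpeed = 2f;       // head movement per frame

            public Segment Head { get; } = new Segment();
            public List<Segment> Body { get; } = new List<Segment>();

            public void Update(float snakeAngle)
            {
                // Move the head a small step every frame (smooth, full frame rate).
                Head.Angle = snakeAngle;
                Head.X += (float)Math.Cos(snakeAngle) * SnakeSpeed;
                Head.Y += (float)Math.Sin(snakeAngle) * SnakeSpeed;

                // Remember where the head has been, newest position first.
                _history.Insert(0, new Point { X = Head.X, Y = Head.Y });

                // Each body part sits a fixed distance back along the recorded path,
                // so it naturally follows every curve the head made.
                int step = (int)(SegmentSpacing / SnakeSpeed);
                for (int i = 0; i < Body.Count; i++)
                {
                    int index = Math.Min((i + 1) * step, _history.Count - 1);
                    Body[i].X = _history[index].X;
                    Body[i].Y = _history[index].Y;
                }

                // Trim the history so it doesn't grow without bound.
                int needed = (Body.Count + 1) * step;
                if (_history.Count > needed)
                    _history.RemoveRange(needed, _history.Count - needed);
            }
        }

    Because the body samples the head's actual path instead of snapping directly behind it, the segments sweep through curves and produce the trailing effect rather than a rigid line.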

    Read the article

  • Add "My Dropbox" to Your Windows 7 Start Menu

    - by The Geek
    Over here at How-To Geek, we’re huge fans of Dropbox, the amazingly fast online file sync utility, but we’d be even happier if we could natively add it to the Windows 7 Start Menu, where it belongs. And today, that’s what we’ll do. Yep, that’s right. You can add it to the Start Menu… using a silly hack to the Libraries feature and renaming the Recorded TV library to a different name. It’s not a perfect solution, but you can access your Dropbox folder this way and it just seems to belong there. First things first, head into the Customize Start Menu panel by right-clicking on the start menu and using Properties, then make sure that Recorded TV is set to “Display as a link”. Next, right-click on Recorded TV, choose Rename, and then change it to something else like My Dropbox.   Now you’ll want to right-click on that button again, and choose Properties, where you’ll see the Library locations in the list… the general idea is that you want to remove Recorded TV, and then add your Dropbox folder. Oh, and you’ll probably want to make sure to set “Optimize this library for” to “General Items”. At this point, you can just click on My Dropbox, and you’ll see, well, Your Dropbox! (no surprise there). Yeah, I know, it’s totally a hack. But it’s a very useful one! Also, if you aren’t already using Dropbox, you should really check it out: 2 GB for free, accessible via the web from anywhere, and you can sync to multiple desktops.

    Read the article

  • Analysis Services Tabular books #ssas #tabular

    - by Marco Russo (SQLBI)
    Many people are looking for books about Analysis Services Tabular. Today there are two books available, and they complement each other:

    Microsoft SQL Server 2012 Analysis Services: The BISM Tabular Model by Marco Russo, Alberto Ferrari and Chris Webb
    Applied Microsoft SQL Server 2012 Analysis Services: Tabular Modeling by Teo Lachev

    The book I wrote with Alberto and Chris is a complete guide to creating tabular models and has good coverage of DAX, including how to use it to enrich a semantic model with calculated columns and measures, and how to use it to query a Tabular model. In my experience, DAX as a query language is a very interesting option for custom analytical applications that require a fast calculation engine, or simply for standard reports running in Reporting Services and accessing a Tabular model. You can freely preview the table of contents and read some excerpts from the book on Safari Books Online. The book is at the printer and should ship by mid-July, so it will very soon be on the shelves of all the people who have already preordered it!

    Teo Lachev’s book covers the full spectrum of Tabular models provided by Microsoft: starting with self-service BI, you have users creating a model with PowerPivot for Excel, publishing it to PowerPivot for SharePoint, and exploring data by using Power View; then the PowerPivot for Excel model can be imported into a Tabular model and published in Analysis Services, adding more control over the model through row-level security and partitioning, for example. Teo’s book follows a step-by-step approach, describing each feature, which is very good for a beginner who is new to PowerPivot and/or to BISM Tabular. If you need to get the big picture and start using the products that are part of the new Microsoft wave of BI, Teo’s book is for you.

    After you read Teo’s book, or if you already have a certain confidence with PowerPivot or BISM Tabular and want to go deeper into internals, best practices, and design patterns specifically in BISM Tabular, then our book is a suggested read: it contains several chapters about DAX, includes discussions of the new opportunities in data model design offered by Tabular models, and also provides examples of optimizations you can obtain in DAX and best practices in data modeling and queries.

    It might seem strange for an author to review a book that might seem to compete with his own, but in reality these two books complement each other and are not alternatives. If you have any doubt, buy both: you will not be disappointed! Moreover, Amazon usually offers a deal when you buy three books, including Visualizing Data with Microsoft Power View, another good choice for getting all the details about Power View.

    Read the article

  • Is your test method self-validating?

    - by mehfuzh
    Writing state-of-the-art unit tests that can validate every part of your framework is challenging and interesting at the same time; it's like becoming a samurai. One of the key concepts is to keep our tests in sync as the underlying code changes, and to break them down into the smallest units possible. This also means we should avoid embedding multiple conditions in a single test. Let's consider the following example of transferring funds.

    [Fact]
    public void ShouldAssertTranserFunds()
    {
        var currencyService = Mock.Create<ICurrencyService>();
        //// current rate
        Mock.Arrange(() => currencyService.GetConversionRate("AUS", "CAD")).Returns(0.88f);

        Account to = new Account { Currency = "AUS", Balance = 120 };
        Account from = new Account { Currency = "CAD" };

        AccountService accService = new AccountService(currencyService);

        Assert.Throws<InvalidOperationException>(() => accService.TranferFunds(to, from, 200f));

        accService.TranferFunds(to, from, 100f);

        Assert.Equal(from.Balance, 88);
        Assert.Equal(20, to.Balance);
    }

    At first look it seems OK, but as you look more closely it is actually doing two tasks in one test: the Assert.Throws call validates the exception for an invalid fund transfer, and the final asserts check whether the currency conversion was made successfully. The name of the test itself is also pretty vague. The first rule for writing unit tests is that they should reflect the inner workings of the target code, so that just by looking at their names their purpose is self-explanatory. An obscure name for a test method not only increases the chances of cluttering the test code, it also invites adding multiple paths into it, which eventually makes things as messy as possible. I would rather have two test methods that explicitly describe their intent and are more self-validating:

    ShouldThrowExceptionForInvalidTransferOperation
    ShouldAssertTransferForExpectedConversionRate

    This type of breakdown also helps us pinpoint reported bugs easily, rather than wasting time debugging something more general, and it can minimize confusion among team members. Finally, we should always make our tests F.I.R.S.T. (Fast, Independent, Repeatable, Self-validating, Timely) [Bob Martin – Clean Code]. Only this will be enough to ensure our tests are as simple and clean as possible. Hope that helps
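
    As a rough sketch of that split, reusing the ICurrencyService, Account, and AccountService types and the Mock.Create/Mock.Arrange calls from the example above (the balances and the 0.88 rate are the same assumptions as before), the two tests might look like this:

    [Fact]
    public void ShouldThrowExceptionForInvalidTransferOperation()
    {
        var currencyService = Mock.Create<ICurrencyService>();
        Mock.Arrange(() => currencyService.GetConversionRate("AUS", "CAD")).Returns(0.88f);

        Account to = new Account { Currency = "AUS", Balance = 120 };
        Account from = new Account { Currency = "CAD" };

        AccountService accService = new AccountService(currencyService);

        // Single concern: transferring more than the available balance must throw.
        Assert.Throws<InvalidOperationException>(() => accService.TranferFunds(to, from, 200f));
    }

    [Fact]
    public void ShouldAssertTransferForExpectedConversionRate()
    {
        var currencyService = Mock.Create<ICurrencyService>();
        Mock.Arrange(() => currencyService.GetConversionRate("AUS", "CAD")).Returns(0.88f);

        Account to = new Account { Currency = "AUS", Balance = 120 };
        Account from = new Account { Currency = "CAD" };

        AccountService accService = new AccountService(currencyService);

        accService.TranferFunds(to, from, 100f);

        // Single concern: the balances after a valid transfer at the arranged rate.
        Assert.Equal(from.Balance, 88);
        Assert.Equal(20, to.Balance);
    }

    Each test now has a single reason to fail, which is exactly the self-validating property the post is arguing for.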

    Read the article

  • Is inline SQL still classed as bad practice now that we have Micro ORMs?

    - by Grofit
    This is a bit of an open-ended question, but I wanted some opinions. I grew up in a world where inline SQL scripts were the norm; then we were all made very aware of SQL injection issues and of how fragile the SQL was when string manipulation was done all over the place. Then came the dawn of the ORM, where you explained the query to the ORM and let it generate its own SQL, which in a lot of cases was not optimal but was safe and easy. Another good thing about ORMs and database abstraction layers was that the SQL was generated with its database engine in mind, so I could use Hibernate/NHibernate with MSSQL or MySQL and my code never changed; it was just a configuration detail.

    Now fast forward to the current day, where micro ORMs seem to be winning over more developers, and I wonder why we have seemingly taken a U-turn on the whole inline SQL subject. I must admit I do like the idea of having no ORM config files and being able to write my query in a more optimal manner, but it feels like I am opening myself back up to the old vulnerabilities such as SQL injection, and I am also tying myself to one database engine, so if I want my software to support multiple database engines I would need to do some more string hackery, which starts to make the code unreadable and more fragile. (Just before someone mentions it: I know you can use parameter-based arguments with most micro ORMs, which offers protection from SQL injection in most cases.)

    So what are people's opinions on this sort of thing? I am using Dapper as my micro ORM in this instance and NHibernate as my regular ORM, although most in each field are quite similar.

    What I term inline SQL is SQL strings within source code. There used to be design debates over SQL strings in source code detracting from the fundamental intent of the logic, which is why statically typed LINQ-style queries became so popular: it is still just one language, whereas with, say, C# and SQL on one page you have two languages intermingled in your raw source code.

    Just to clarify, SQL injection is only one of the known issues with using SQL strings; I already mentioned you can stop it with parameter-based queries. But I also highlight other issues with having SQL queries ingrained in your source code, such as the lack of DB vendor abstraction and the loss of any compile-time error capture on string-based queries. These are all issues we managed to sidestep with the dawn of ORMs and their higher-level querying functionality, such as HQL or LINQ (not all of the issues, but most of them). So I am less focused on the individual issues and more on the bigger picture: is it now becoming more acceptable to have SQL strings directly in your source code again, given that most micro ORMs use this mechanism?

    Here is a similar question which has a few different viewpoints, although it is more about inline SQL without the micro ORM context: http://stackoverflow.com/questions/5303746/is-inline-sql-hard-coding
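
    For reference, the parameter-based style mentioned above looks roughly like this with Dapper. This is only a sketch: the Account type, the Accounts table, and the connection string are illustrative assumptions, not anything from the question.

    // Sketch of a parameterized inline query with Dapper.
    // The Account type, Accounts table, and connection string are assumptions.
    using System.Data.SqlClient;
    using System.Linq;
    using Dapper;

    public class Account
    {
        public int Id { get; set; }
        public string Currency { get; set; }
        public decimal Balance { get; set; }
    }

    public static class AccountQueries
    {
        public static Account GetById(string connectionString, int id)
        {
            using (var connection = new SqlConnection(connectionString))
            {
                // The SQL string lives in the source code, but the value is sent
                // as a parameter rather than concatenated, so injection is avoided.
                return connection.Query<Account>(
                    "select Id, Currency, Balance from Accounts where Id = @Id",
                    new { Id = id }).FirstOrDefault();
            }
        }
    }

    Even with the parameters, the other concerns the question raises still apply: the string is not checked at compile time, and the SQL dialect is tied to one vendor.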

    Read the article

  • How to stream H264 Video from camera over FTP?

    - by Jay
    I bought a h264 security camera system last year and set it up to ftp video to my computer. I was able to get the video to play (even though it played a little fast) on Ubuntu 11.04 using mplayer. A few months ago, I did a fresh install of 12.04 and I cannot seem to get the video to play with mplayer, smplayer or VLC. I have the restricted formats video packages installed and when playing with any of the players, all I get is a gray video. When calling mplayer from the command line to play the video with no options, I get a lot of these errors:

    [h264 @ 0x7f278c61f280]concealing 1320 DC, 1320 AC, 1320 MV errors
    No pts value from demuxer to use for frame!
    pts after filters MISSING

    I'm not a video expert and have been coming up with a lot of dead ends when Googling for this. Could someone offer some advice about how to play these videos? Here is the output of mediainfo for a sample file.

    mediainfo -f sec-cam01-m-20120921-212454.h264

    General
    Count : 278
    Count of stream of this kind : 1
    Kind of stream : General
    Kind of stream : General
    Stream identifier : 0
    Count of video streams : 1
    Video_Format_List : AVC
    Video_Format_WithHint_List : AVC
    Codecs Video : AVC
    Complete name : sec-cam01-m-20120921-212454.h264
    File name : sec-cam01-m-20120921-212454
    File extension : h264
    Format : AVC
    Format : AVC
    Format/Info : Advanced Video Codec
    Format/Url : http://developers.videolan.org/x264.html
    Format/Extensions usually used : avc h264
    Commercial name : AVC
    Internet media type : video/H264
    Codec : AVC
    Codec : AVC
    Codec/Info : Advanced Video Codec
    Codec/Url : http://developers.videolan.org/x264.html
    Codec/Extensions usually used : avc h264
    File size : 1097315
    File size : 1.05 MiB
    File size : 1 MiB
    File size : 1.0 MiB
    File size : 1.05 MiB
    File size : 1.046 MiB
    File last modification date : UTC 2012-09-22 01:27:12
    File last modification date (local) : 2012-09-21 21:27:12

    Video
    Count : 205
    Count of stream of this kind : 1
    Kind of stream : Video
    Kind of stream : Video
    Stream identifier : 0
    Format : AVC
    Format/Info : Advanced Video Codec
    Format/Url : http://developers.videolan.org/x264.html
    Commercial name : AVC
    Format profile : [email protected]
    Format settings : 1 Ref Frames
    Format settings, CABAC : No
    Format settings, CABAC : No
    Format settings, ReFrames : 1
    Format settings, ReFrames : 1 frame
    Format settings, GOP : M=1, N=3
    Internet media type : video/H264
    Codec : AVC
    Codec : AVC
    Codec/Family : AVC
    Codec/Info : Advanced Video Codec
    Codec/Url : http://developers.videolan.org/x264.html
    Codec profile : [email protected]
    Codec settings : 1 Ref Frames
    Codec settings, CABAC : No
    Codec_Settings_RefFrames : 1
    Width : 704
    Width : 704 pixels
    Height : 480
    Height : 480 pixels
    Pixel aspect ratio : 1.000
    Display aspect ratio : 1.467
    Display aspect ratio : 3:2
    Standard : NTSC
    Resolution : 8
    Resolution : 8 bits
    Colorimetry : 4:2:0
    Color space : YUV
    Chroma subsampling : 4:2:0
    Bit depth : 8
    Bit depth : 8 bits
    Scan type : Progressive
    Scan type : Progressive
    Interlacement : PPF
    Interlacement : Progressive

    Edit: Here is a sample video using the same encoding: https://www.dropbox.com/s/l5acwzy8rtqn9xe/sec-cam08-m-20121118-105815.h264 (not the same video as mediainfo output)
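
    For context, the "No pts value from demuxer" message is typical of a raw H.264 elementary stream, which carries no container timestamps, so the player has to be told (or has to guess) a frame rate. A hedged example of how one might work around that, assuming the camera records at roughly 30 fps (adjust to the actual rate):

    # Force the raw H.264 demuxer and a frame rate in mplayer (30 fps is an assumption):
    mplayer -demuxer h264es -fps 30 sec-cam01-m-20120921-212454.h264

    # Or wrap the raw stream into an MP4 container without re-encoding:
    ffmpeg -framerate 30 -i sec-cam01-m-20120921-212454.h264 -c copy sec-cam01.mp4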

    Read the article

  • Tile engine Texture not updating when numbers in array change

    - by Corey
    I draw my map from a txt file. I am using Java with the Slick2D library. When I print the array, the number changes in the array, but the texture doesn't change.

    public class Tiles {
        public Image[] tiles = new Image[5];
        public int[][] map = new int[64][64];
        public Image grass, dirt, fence, mound;
        private SpriteSheet tileSheet;
        public int tileWidth = 32;
        public int tileHeight = 32;

        public void init() throws IOException, SlickException {
            tileSheet = new SpriteSheet("assets/tiles.png", tileWidth, tileHeight);
            grass = tileSheet.getSprite(0, 0);
            dirt = tileSheet.getSprite(7, 7);
            fence = tileSheet.getSprite(2, 0);
            mound = tileSheet.getSprite(2, 6);
            tiles[0] = grass;
            tiles[1] = dirt;
            tiles[2] = fence;
            tiles[3] = mound;

            int x = 0, y = 0;
            BufferedReader in = new BufferedReader(new FileReader("assets/map.dat"));
            String line;
            while ((line = in.readLine()) != null) {
                String[] values = line.split(",");
                x = 0;
                for (String str : values) {
                    int str_int = Integer.parseInt(str);
                    map[x][y] = str_int;
                    //System.out.print(map[x][y] + " ");
                    x++;
                }
                //System.out.println("");
                y++;
            }
            in.close();
        }

        public void update(GameContainer gc) {
        }

        public void render(GameContainer gc) {
            for (int y = 0; y < map.length; y++) {
                for (int x = 0; x < map[0].length; x++) {
                    int textureIndex = map[x][y];
                    Image texture = tiles[textureIndex];
                    texture.draw(x * tileWidth, y * tileHeight);
                }
            }
        }
    }

    Mouse picking (where I change the number in the array):

    Input input = gc.getInput();
    gc.getInput().setOffset(cameraX - 400, cameraY - 300);
    float mouseX = input.getMouseX();
    float mouseY = input.getMouseY();

    double mousetileX = Math.floor((double) mouseX / tiles.tileWidth);
    double mousetileY = Math.floor((double) mouseY / tiles.tileHeight);
    double playertileX = Math.floor(playerX / tiles.tileWidth);
    double playertileY = Math.floor(playerY / tiles.tileHeight);

    double lengthX = Math.abs((float) playertileX - mousetileX);
    double lengthY = Math.abs((float) playertileY - mousetileY);
    double distance = Math.sqrt((lengthX * lengthX) + (lengthY * lengthY));

    if (input.isMousePressed(Input.MOUSE_LEFT_BUTTON) && distance < 4) {
        System.out.println("Clicked");
        if (tiles.map[(int) mousetileX][(int) mousetileY] == 1) {
            tiles.map[(int) mousetileX][(int) mousetileY] = 0;
        }
    }

    I never ask a question until I have tried to figure it out myself, and I have been stuck on this problem for two weeks, so please don't just tell me to use a debugger. I don't know what is wrong: all my textures work when I render them; the display just doesn't update when the number in the array changes. I have been debugging, and printing the array shows the number changing like it should, so it's not a problem with my mouse-picking code. It is a problem with my textures, but I don't know what, because they all render correctly. That is why I need help.

    Read the article
