Search Results

Search found 17041 results on 682 pages for 'architecture and design'.


  • Gamify your Web

    - by Isabel F. Peñuelas
    Yesterday Valencia welcomed the Gamification World Congress, which I followed virtually through #GWC2012. BBVA, Iberia, Ligeresa, Axe, Wayra, ESADE, GlaxoSmithKline, Macmillan, Gamisfaction, Nomaders and Blaffin were among the companies presenting success stories on gaming. It has been proven that people remember things more easily when an emotion is involved. The marketing expectations around Gamification techniques have a lot to do with Neuromarketing theories. There are also a lot of expectations around internal enterprise Gamification. On the public Web, some sectors are taking the lead in following the trend. The Gartner analyst Brian Burke opened another recent Gamification event in Madrid by reminding the audience that "Gamification is mostly about Engagement", and this can be applied both to customers and to employees.
    Gamification and Banking. The experience of the Spanish financial group BBVA, which just launched BBVA Game, was also presented a week ago at the BBVA Innovation Centre during the event "Gamification & Banking: a fad or a serious business?". One of the objectives of BBVA Game was to double the number of registered users. "People who like the efficiency of the online channel still want to keep one-to-one contact with the brand," explained Bernardo Crespo. Another interesting piece of data from the BBVA presentation was that only 20% of Spanish users (out of the total holders of bank accounts in the country) are familiar with using a web site to consult their bank accounts; the project also aims to reverse this situation by helping people learn, making heavy use of video in the gaming context. In general, the banking presenters seem to agree that Gamification techniques are helping to increase the time spent on the Web.
    Gamification and Health. Using Gamification techniques for chronic illness rehabilitation was another topic of the World Congress. Here you can find some ideas and experiences: What can games do for the health (in Spanish). I have personally started my own mental-health gaming project at http://www.lumosity.com/
    Gamification in the Enterprise. I really recommend reading this excellent post by Ultan ÓBroin, my introduction to Gamification and Applications. Employee motivation and learning are experiencing a 360º turn, and it looks like some of us will soon become the Dragon of the Year instead of the Employee of the Year.
    Using Web 2.0 Tools for Gamification Projects. What type of tools do we need for a quick-win Gamification project? To a certain extent, Gamification can be considered an evolution of the participative Web. Badging, avatars, points and awards, leaderboards, progress charts, virtual currencies, gifting, and giving challenges and quests are common components and elements. The Web is offering new development frameworks for that purpose, such as this Avatar Framework from PayPal or Badgeville, to include in web applications. Besides, tools to create communities around a game are required to comment, share and vote on players, as well as for efficient multimedia management. Thanks to its entirely open architecture, its community features, and its multimedia and imaging solutions, WebCenter is where I see a tool helping brands to succeed.
    Link to Sources & Recommended Readings: YouTube Video of BBVAGame presentation; Where To Apply Gamification In Your Incentive; Jim Calhoun Cancer Challenge Ride and Walk. For my Spanish readers: El aburrimiento es el enemigo número uno del éxito

    Read the article

  • Oracle AutoVue Key Highlights from Oracle OpenWorld 2012

    - by Celine Beck
    We closed another successful Oracle OpenWorld for AutoVue. Thanks to everyone who joined us this year. As usual, from customer presentations to evening networking activities, there was enough to keep us busy during the entire event. Here is a summary of some of the key highlights of the conference:
    Sessions: We had two AutoVue-specific sessions during Oracle OpenWorld this year. The first session was part of the Product Lifecycle Management track and covered how AutoVue can be used to help drive effective decision making and streamline design-for-manufacturing processes. Attendees had the opportunity to learn from customer speaker GLOBALFOUNDRIES how they have been leveraging Oracle AutoVue within Agile PLM to enable a high degree of collaboration during the exceptionally creative phases of their product development processes, securely, without risking valuable intellectual property. If you are interested, you can download the presentation by visiting launch.oracle.com/?plmopenworld2012. AutoVue was also featured as part of the Utilities track. This session focused on how visualization solutions play a critical role in the effective plant optimization and configuration strategies defined by owners and operators of power generation facilities. Attendees learnt how, integrated with document management systems and enterprise applications like Oracle Primavera and Asset Lifecycle Management, AutoVue improves change management processes; minimizes risks by providing access to accurate engineering drawings that capture and reflect the as-maintained status of assets; and allows customers to drive complex maintenance projects to successful completion.
    Augmented Business Visualization for Agile PLM: During Oracle OpenWorld, we also showcased an Augmented Business Visualization-based solution for Oracle Agile PLM. An Augmented Business Visualization (ABV) solution is one where your structured data (from Oracle Agile PLM, for instance) and your unstructured data (documents, designs, 3D models, etc.) come together to allow you to make better decisions (check out our blog posts on the topic: Augment the Value of Your Data (or Time to replace the "attach" button) and Context is Everything). As part of Agile PLM, the idea is to support more effective decision-making by turning 3D assemblies into color-coded reports, and by streamlining business processes like Engineering Change Management through the automatic creation of engineering change requests in Agile PLM directly from documents being viewed in AutoVue. More on this coming soon... probably during the Oracle Value Chain Summit, to be held Feb. 4-6, 2013 in San Francisco! Mark your calendars and stay tuned for more information! And thanks again for joining us at Oracle OpenWorld!

    Read the article

  • New Release Overview Part 2

    - by brian.harrison
    To continue our discussion of the next release of WCI, let's take a look at a few other new features that have been developed and tested.
    Password Management. With customer implementations increasingly facing outward, we were finding that these customers wanted to use the native users within the portal, because they did not want to provide an externally facing LDAP server. However, the portal does not provide anything close to the same level of password policy that a standard LDAP environment would provide. With that being the case, we made the decision to provide the same kind of password policies directly within WCI that a standard LDAP environment would have. Password Expiration - In how many days will a password expire, forcing the user to change it? Also, how many days prior to expiration will the user be notified that their password is about to expire? Password Rotation - How many of your previous passwords will you not be able to reuse when changing your password? Password Policies - What are the requirements for the password that is being created by the user? Number of characters, numbers required, symbols required, capitalization required. Easily Configurable - Configuration is handled through the Portal Settings utility within Administration. All options are available on the main page of the utility. In addition to the configuration options mentioned above, there has also been a complete rewrite of the Change Password screen to provide better information to the user when they are changing their password. The Change Password screen will now provide a red light/green light listing of all the policies the user must meet for the changed password to be accepted. As the user types the password, the red lights change to green lights as the policies are met. In addition, text will show next to the password text box stating which policy has not been met yet. NOTE: The password policy functionality is not enforced within the User Editor page in Administration. We did not want to remove the option for Administrators to change a user's password on the fly in the case of a password reset situation.
    Miscellaneous Features. In addition to the Password Management feature, there are a few other features related to WCI that should be mentioned. Consolidated Installer - Instead of having up to 12 or 13 different installers, one for each of the main products and separate services, we are going to provide only two installers: one that will be used for Collaboration and its respective images, and a second that will contain WCI and all of the relevant services required for a WCI architecture, as well as the IDK, .NET App Accelerator, SharePoint Console, and all Content Web Services and Identity Services. Updated Documentation - Most of us are aware that the documentation hasn't been properly kept up to date over the last couple of releases. We are doing everything we can to remedy this with the next release by consolidating and reviewing everything that is available. We are making sure to fill in the gaps, add documentation for all of the functionality, and clear out anything that is no longer valid for the newly released version. I hope that you enjoyed reading through this new release information. Next time we will start to talk about the new functionality that will be available within the next release of Collaboration.
If there is anything in particular that you would like to get more detail about, then please don't hesitate to send me a comment.

    Read the article

  • Fair Comments

    - by Tony Davis
    To what extent is good code self-documenting? In one of the most entertaining sessions I saw at the recent PASS summit, Jeremiah Peschka (blog | twitter) got a laugh out of a sleepy post-lunch audience with the following remark: "Some developers say good code is self-documenting; I say, get off my team." I silently applauded the sentiment. It's not that all comments are useful, but that I mistrust the basic premise that "my code is so clearly written, it doesn't need any comments". I've read many pieces describing the road to self-documenting code, and my problem with most of them is that they feed the myth that comments in code are a sign of weakness. They aren't; in fact, used correctly, I'd say they are essential. Regardless of how far intelligent naming can get you in describing what the code does, or how well any accompanying unit tests can explain to your fellow developers why it works that way, it's no excuse not to document fully the public interfaces to your code. Maybe I just mixed with the wrong crowd while learning my favorite language, but when I open a stored procedure I lose the will even to read it unless I see a big Phil Factor- or Jeff Moden-style header summarizing in plain English what the code does, how it fits into the broader application, and a usage example. This public interface describes the high-level process and should explain the role of the code, clearly, for fellow developers, language non-experts, and even any non-technical stakeholders in the project. When you step into the body of the code, the low-level details, then I agree that the rules are somewhat different; especially when code is subject to frequent refactoring that can quickly render comments redundant or misleading. At their worst, here, inline comments are sticking plaster to cover up the scars caused by poor naming conventions, failure in clarity when mapping a complex domain into code, or just by not entirely understanding the problem (// this is the clever part). If you design and refactor your code carefully so that it is as simple as possible, your functions do one thing only, you avoid having two completely different algorithms in the same piece of code, and your functions, classes and variables are intelligently named, then, yes, the need for inline comments should be minimal. And yet, even given this, I'd still argue that many languages (T-SQL certainly being one) just don't lend themselves to readability when performing even moderately complex tasks. If the algorithm is complex, I still like to see the occasional helpful comment. Please, therefore, be as liberal as you see fit in the detail of the comments you apply to this editorial, for like code it is bound to increase its clarity and usefulness. Cheers, Tony.
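    As a rough, language-neutral illustration of the kind of header described above (sketched here in C#; the class, method and parameters are invented purely for the example):

        using System;

        public static class OrderMaintenance
        {
            /// <summary>
            /// Archives completed orders older than the given cut-off date.
            /// </summary>
            /// <remarks>
            /// Part of the nightly maintenance job: it keeps the live Orders table small so
            /// that reporting queries stay fast. Rows are copied to OrdersArchive before being
            /// deleted, in batches, so the job can be stopped and resumed safely.
            /// </remarks>
            /// <param name="cutOffDate">Orders completed before this date are archived.</param>
            /// <param name="batchSize">Maximum number of rows moved per transaction.</param>
            /// <example>
            /// int moved = OrderMaintenance.ArchiveCompletedOrders(new DateTime(2010, 1, 1), 500);
            /// </example>
            public static int ArchiveCompletedOrders(DateTime cutOffDate, int batchSize)
            {
                // Implementation deliberately elided; the header, not the body, is the point here.
                throw new NotImplementedException();
            }
        }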

    Read the article

  • SharpDX and game engines, back to zero?

    - by Baboon
    I'm a desktop developer (I mainly do WPF for a living) but I want to make games as a hobby. So a few months ago, I started reading blogs, gamedevSE, you name it. I understand that in the C++/DirectX world you have engines such as Unity3D, with designers and whatnot, but as much as I'm ok with spending months understanding game development, I'd prefer to stay with my comfy C#. So I thought I'd develop my first games in C#/DirectX through SharpDX. But then I can't use game engines anymore, since they're made for C++/DirectX and not SharpDX (ok, I could do P/Invokes, but that defeats the purpose of SharpDX). I do know about XNA, and I also know I can't publish to the marketplace with it (and quite frankly, I don't really want to learn an API that is in jeopardy). So how do you reconcile writing games in C# with using existing game engines instead of reinventing the wheel? Wait for ports? So I've found out the following:
    1. After doing the first tutorials of MOgre and digging around, it seems MOgre gives you the worst of both worlds: you can't port it with Mono because it directly references a C++/CLI dll (Ogre), and C++/CLI isn't supported by Mono. And since it's not C++ itself, you can't make a pure build. Which, as far as I understand, means I'd be stuck on Windows (and not even WinRT/Metro compatible), without the capability of porting anything to mobiles or other OSes (Mac/Linux). Even though it looks really nice to develop with MOgre, I'd like to learn something a little more open to future broadening.
    2. On the other hand, MonoGame seems to be a rewrite of XNA with SharpDX, which sounds very promising: Mono allows me to easily port my games to other platforms, mobiles included; SharpDX allows me to access the latest DirectX versions; XNA would be my first choice if only MS showed some hope for its future. It really looks like MonoGame is nothing other than XNA on SharpDX.
    3. Axiom looks good too, but I lack info on the subject (and pages with poor design don't give me a good feeling about an API...).
    4. XNA with SunBurn looks good: it should be portable to Mono (can anyone with experience give us feedback on that?), thus multi-platform, and it's then marketplace-able since Mono itself is.
    Did I miss something, or are 2) and 4) my best options (aside from the fact Mono doesn't support any XAML)?

    Read the article

  • Offre d’emploi – Job Offer - Montreal

    - by guybarrette
    I'm currently helping a client plan its management systems re-architecture and they are looking to hire a full-time .NET developer. It's a small 70-person company located in Old Montreal; you'll be the sole dev there and you'll use the latest technologies in rewriting their core systems. Here's the job offer, translated from the original French:
    Software designer and senior .NET programmer-analyst (permanent, full-time position). Employer: Traductions Serge Bélair inc., Montreal QC. TRSB, a fast-growing translation firm with one of the most skilled and diversified in-house teams of professionals in the Canadian translation industry, is looking to fill the following position:
    The .NET software designer and programmer-analyst will be responsible for the design, complete development and implementation of a customized turnkey solution to meet the company's needs. He or she will carry out the design, programming, documentation, testing, troubleshooting and maintenance of the company's new operations management system, which relies on databases and offers great flexibility for producing reports. If suppliers or consultants are needed to complete the project, he or she will be responsible for finding the required resources, communicating with them and seeing that the work gets done. He or she will also be asked to update and maintain the applications currently used by the company until the new application can be put into service.
    The main tasks of the senior designer and programmer-analyst will be: design and develop a new operations management system based on the company's operational needs; find the required external and internal resources; handle communications and follow-up with external suppliers (e.g. programmers, analysts or architects); take responsibility for putting the new operations management system in place; resolve problems related to the new operations management system; provide support on weekday evenings and weekends (as needed), mainly using remote-access tools; keep the operations management system documentation up to date; perform other related tasks.
    Requirements: Bachelor's degree in computer science or equivalent; at least 5 years of relevant experience; 2+ years of C# programming experience; excellent knowledge of database-driven web application programming; excellent knowledge of structured development methodologies and iterative programming techniques; ability to gather information and write analysis documents.
    Technical specializations: Essential - object-oriented design and programming with C#, ASP.NET, .NET Framework 3.5, AJAX. Important - Silverlight 3, WCF, LINQ, SQL Server, Team Foundation Server. Asset - Entity Framework, MVC, jQuery, MySQL, QuickBooks, Telerik tool suite.
    Technologies used: C# 4.0, Visual Studio 2010, Team Foundation Server 2010, LINQ, ASP.NET, ASP.NET MVC, jQuery, WCF, Silverlight 4, SQL Server 2008, MySQL, QuickBooks, Telerik tool suite.
    Desired qualities: bilingual, spoken and written; strong sense of responsibility; autonomy; initiative; drive to excel; leadership and decision-making skills; high motivation; thoroughness and attention to detail; good organizational skills; flexibility and a good capacity to adapt to change. Previous experience developing software with process workflows and invoicing modules, bridging databases of different types (e.g. QuickBooks and SQL), and translation productivity tools would be a significant asset.
    Excellent working conditions: very competitive salary and benefits, and a stimulating, pleasant work environment in Old Montreal. Send your CV and cover letter to [email protected]
    TRSB, 276, rue Saint-Jacques, bureau 900, Montréal (Québec) H2Y 1N3. (The masculine form is used in the original posting solely to keep the text readable.)

    Read the article

  • Oracle Big Data Software Downloads

    - by Mike.Hallett(at)Oracle-BI&EPM
    Companies have been making business decisions for decades based on transactional data stored in relational databases. Beyond that critical data lies a potential treasure trove of less structured data: weblogs, social media, email, sensors, and photographs that can be mined for useful information. Oracle offers a broad, integrated portfolio of products to help you acquire and organize these diverse data sources and analyze them alongside your existing data to find new insights and capitalize on hidden relationships. Oracle Big Data Connectors downloads are here, and include: Oracle SQL Connector for Hadoop Distributed File System Release 2.1.0; Oracle Loader for Hadoop Release 2.1.0; Oracle Data Integrator Companion 11g; Oracle R Connector for Hadoop v 2.1. Oracle Big Data Documentation: The Oracle Big Data solution offers an integrated portfolio of products to help you organize and analyze your diverse data sources alongside your existing data to find new insights and capitalize on hidden relationships. Oracle Big Data, Release 2.2.0 - E41604_01 zip (27.4 MB); Integrated Software and Big Data Connectors User's Guide (HTML, PDF). Oracle Data Integrator (ODI) Application Adapter for Hadoop: Apache Hadoop is designed to handle and process data that typically comes from non-relational data sources, in volumes beyond what relational databases handle. Typical processing in Hadoop includes data validation and transformations that are programmed as MapReduce jobs. Designing and implementing a MapReduce job usually requires expert programming knowledge. However, when you use Oracle Data Integrator with the Application Adapter for Hadoop, you do not need to write MapReduce jobs. Oracle Data Integrator uses Hive and the Hive Query Language (HiveQL), a SQL-like language for implementing MapReduce jobs. Employing familiar and easy-to-use tools and pre-configured knowledge modules (KMs), the application adapter provides the following capabilities: loading data into Hadoop from the local file system and HDFS; performing validation and transformation of data within Hadoop; loading processed data from Hadoop to an Oracle database for further processing and generating reports. Oracle Database Loader for Hadoop: Oracle Loader for Hadoop is an efficient and high-performance loader for fast movement of data from a Hadoop cluster into a table in an Oracle database. It pre-partitions the data if necessary and transforms it into a database-ready format. Oracle Loader for Hadoop is a Java MapReduce application that balances the data across reducers to help maximize performance. Oracle R Connector for Hadoop: Oracle R Connector for Hadoop is a collection of R packages that provide: interfaces to work with Hive tables, the Apache Hadoop compute infrastructure, the local R environment, and Oracle database tables; and predictive analytic techniques, written in R or Java as Hadoop MapReduce jobs, that can be applied to data in HDFS files. You install and load this package as you would any other R package.
Using simple R functions, you can perform tasks such as: Access and transform HDFS data using a Hive-enabled transparency layer Use the R language for writing mappers and reducers Copy data between R memory, the local file system, HDFS, Hive, and Oracle databases Schedule R programs to execute as Hadoop MapReduce jobs and return the results to any of those locations Oracle SQL Connector for Hadoop Distributed File System Using Oracle SQL Connector for HDFS, you can use an Oracle Database to access and analyze data residing in Hadoop in these formats: Data Pump files in HDFS Delimited text files in HDFS Hive tables For other file formats, such as JSON files, you can stage the input in Hive tables before using Oracle SQL Connector for HDFS. Oracle SQL Connector for HDFS uses external tables to provide Oracle Database with read access to Hive tables, and to delimited text files and Data Pump files in HDFS. Related Documentation Cloudera's Distribution Including Apache Hadoop Library HTML Oracle R Enterprise HTML Oracle NoSQL Database HTML Recent Blog Posts Big Data Appliance vs. DIY Price Comparison Big Data: Architecture Overview Big Data: Achieve the Impossible in Real-Time Big Data: Vertical Behavioral Analytics Big Data: In-Memory MapReduce Flume and Hive for Log Analytics Building Workflows in Oozie

    Read the article

  • Oracle Coherence 3.5 : Create Internet-scale applications using Oracle's high-performance data grid

    - by frederic.michiara
    Oracle Coherence. Coherence provides replicated and distributed (partitioned) data management and caching services on top of a reliable, highly scalable peer-to-peer clustering protocol. Coherence has no single points of failure; it automatically and transparently fails over and redistributes its clustered data management services when a server becomes inoperative or is disconnected from the network. When a new server is added, or when a failed server is restarted, it automatically joins the cluster and Coherence fails services back to it, transparently redistributing the cluster load. Coherence includes network-level fault tolerance features and a transparent soft-restart capability that enables servers to self-heal. For those looking for an easy read and a good first approach to Oracle Coherence, I would recommend the following book:
    Overview of Oracle Coherence 3.5: Build scalable web sites and enterprise applications using a market-leading data grid product. Design and implement your domain objects to work most effectively with Coherence and apply Domain-Driven Design (DDD) to Coherence applications. Leverage Coherence events and continuous queries to provide real-time updates to client applications. Successfully integrate various persistence technologies, such as JDBC, Hibernate, or TopLink, with Coherence. Filled with numerous examples that provide best practice guidance, and a number of classes you can readily reuse within your own applications.
    This book is targeted at architects and developers, and since our team is more about Solutions Architects than developers, I found the book valuable in helping to better understand Oracle Coherence and its value. The only point on which I may not agree with the authors: Oracle Coherence is not an alternative to Oracle RAC for providing high availability; rather, combining Oracle RAC and Oracle Coherence will help architects and customers reach a higher level of service and availability. This book is available at https://www.packtpub.com/oracle-coherence-3-5/book
    To see the table of contents: https://www.packtpub.com/toc/oracle-coherence-35-table-contents
    Discover a sample chapter: https://www.packtpub.com/sites/default/files/6125_Oracle%20Coherence_SampleChapter.pdf
    Also read articles from the authors on http://www.packtpub.com/ : Working with Aggregators in Oracle Coherence 3.5; Working with Value Extractors and Simplifying Queries in Oracle Coherence 3.5; Querying the Data Grid in Coherence 3.5: Obtaining Query Results and Using Indexes; Installing Coherence 3.5 and Accessing the Data Grid: Part 1; Installing Coherence 3.5 and Accessing the Data Grid: Part 2
    For more information on Oracle Coherence: What Oracle Coherence Can Do for You... : http://www.oracle.com/technology/products/coherence/coherencedatagrid/coherence_solutions.html ; Oracle Coherence on OTN: http://www.oracle.com/technology/products/coherence/index.html ; Oracle Coherence Knowledge Base: http://coherence.oracle.com/display/COH/Oracle+Coherence+Knowledge+Base+Home

    Read the article

  • What have my fellow Delphi programmers done to make Eclipse/Java more like Delphi?

    - by Robert Oschler
    I am a veteran Delphi programmer working on my first real Android app. I am using Eclipse and Java as my development environment. The thing I miss the most, of course, is Delphi's VCL components and the associated IDE tools for design-time editing and code creation. Fortunately I am finding Eclipse to be one hell of an IDE, with its lush context-sensitive help, deep auto-complete and code wizard facilities, and other niceties. This is a huge double treat since it is free. However, here is an example of something in the Eclipse/Java environment that will give a Delphi programmer pause. I will use the simple case of adding an "on-click" code stub for an OK button.
    DELPHI: Drop a button on a form. Double-click the button on the form and fill in the code that will fire when the button is clicked.
    ECLIPSE: Drop a button on a layout in the graphical XML file editor. Add the View.OnClickListener interface to the containing class's "implements" list if it is not there already (Command+1 on Macs, Ctrl+1 on PCs, I believe). Use Eclipse to automatically add the code stub for the unimplemented methods needed to support the View.OnClickListener interface, thus creating the event handler function stub. Find the stub and fill it in. However, if you have more than one possible click event source then you will need to inspect the View parameter to see which View element triggered the OnClick() event, thus requiring a case statement to handle multiple click event sources. NOTE: I am relatively new to Eclipse/Java, so if there is a much easier way of doing this please let me know.
    Now that workflow isn't all that terrible, but again, that's just the simplest of use cases. Ratchet up the amount of extra work and thinking for a more complex component (aka widget) and the large number of properties/events it might have. It won't be long before you dearly miss the Delphi intelligent property editor and other designers. Eclipse tries to cover this ground by having an extensive list of properties in the menu that pops up when you right-click over a component/widget in the XML graphical layout editor. That's a huge and welcome assist, but it's just not even close to the convenience of the Delphi IDE. Let me be very clear: I absolutely am not ranting, nor do I want to start a Delphi vs. Java ideology discussion. Android/Eclipse/Java is what it is and there is a lot that impresses me. What I want to know is what other Delphi programmers who made the switch to the Eclipse/Java IDE have done to make things more Delphi-like, and not just to make component/widget event code creation easier but for any programming task. For example: clever tips/tricks, Eclipse plugins you found, other ideas? Any great blog posts or web resources on the topic are appreciated too. -- roschler

    Read the article

  • Monogame/SharpDX - Shader parameters missing

    - by Layoric
    I am currently working on a simple game that I am building in Windows 8 using MonoGame (develop3d). I am using some shader code from a tutorial (made by Charles Humphrey) and having an issue populating a 'texture' parameter, as it appears to be missing. Edit: I have also tried 'Texture2D' and using it with a register(t0); still no luck. I'm not well versed in writing shaders, so this might be caused by a more obvious problem. I have debugged through MonoGame's content processor to see how this shader is being parsed; all the non-'texture' parameters are there and look to be loading correctly. Edit: This seems to go back to the D3D compiler. Shader code below:

    #include "PPVertexShader.fxh"

    float2 lightScreenPosition;
    float4x4 matVP;
    float2 halfPixel;
    float SunSize;
    texture flare;

    sampler2D Scene: register(s0)
    {
        AddressU = Clamp;
        AddressV = Clamp;
    };

    sampler Flare = sampler_state
    {
        Texture = (flare);
        AddressU = CLAMP;
        AddressV = CLAMP;
    };

    float4 LightSourceMaskPS(float2 texCoord : TEXCOORD0) : COLOR0
    {
        texCoord -= halfPixel;

        // Get the scene
        float4 col = 0;

        // Find the sun's position in the world and map it to the screen space.
        float2 coord;
        float size = SunSize / 1;
        float2 center = lightScreenPosition;

        coord = .5 - (texCoord - center) / size * .5;
        col += (pow(tex2D(Flare, coord), 2) * 1) * 2;

        return col * tex2D(Scene, texCoord);
    }

    technique LightSourceMask
    {
        pass p0
        {
            VertexShader = compile vs_4_0 VertexShaderFunction();
            PixelShader = compile ps_4_0 LightSourceMaskPS();
        }
    }

    I've removed default values, as they are currently not supported in MonoGame, and also changed ps and vs to 4.0 instead of 2.0. Could this be causing the issue? As I debug through the 'DXConstantBufferData' constructor (from within the MonoGameContentProcessing project), I find that the 'flare' parameter does not exist. All others seem to be getting created fine. Any help would be appreciated.
    Update 1: I have discovered that the SharpDX D3D compiler is what seems to be ignoring this parameter (perhaps by design?). ConstantBufferDescription.VariableCount seems not to count the texture variable.
    Update 2: The SharpDX function 'GetConstantBuffer(int index)' returns the parameters (minus textures), which makes it impossible to set values for these variables within the shader. Does anyone know if this is normal for DX11 / Shader Model 4.0? Or am I missing something else?
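    A side note, hedged since I have not verified it against this exact MonoGame build: under Shader Model 4, textures and samplers are bound as shader resources (the t#/s# registers) rather than as constant buffer variables, so a reflection pass over the constant buffers alone would be expected to miss 'flare'. A rough SharpDX sketch of where the texture should show up instead; the 'bytecode' variable (a SharpDX.D3DCompiler.ShaderBytecode holding the compiled shader) is an assumption for illustration:

        // Sketch only: enumerate resource bindings instead of constant buffer variables.
        using SharpDX.D3DCompiler;

        using (var reflection = new ShaderReflection(bytecode))
        {
            for (int i = 0; i < reflection.Description.BoundResources; i++)
            {
                InputBindingDescription binding = reflection.GetResourceBindingDescription(i);
                // 'flare' is expected to appear here as a Texture binding, and the samplers
                // (Scene, Flare) as Sampler bindings, each with a bind slot number.
                System.Diagnostics.Debug.WriteLine(binding.Name + " : " + binding.Type);
            }
        }

    If that is the case, the texture would need to be bound to the matching resource slot (or set through the effect's parameter collection once the content pipeline exposes it), rather than written into the constant buffer data.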

    Read the article

  • Appropriate response when client empowered with CMS destroys content to his own will

    - by dukeofgaming
    So, I just recently closed a website project that was pretty much The Oatmeal's Design Hell, but with content. The client loved the site at the beginning, but started getting other people involved and mercilessly bombarding us with their opinions. We delivered a carefully thought-out content strategy (which the client approved) and extremely curated copywriting that took us four months, after at least 5 requirement changes (new content, new objectives for the business, changed offerings, new mindfaps, etc.) that required us to rewrite the content about 3 times. The client never gave timely feedback even though we kept the process open for him and his people to see (content being developed transparently in Google Docs). Near the end of the project he still wanted to make changes but wanted us to finish already (there are not enough words in the world to even try to make sense of this). So I explained to him the obvious implications of the never-ending requirement changes and advised him to take the time to gather his thoughts with his own team, and to see the new content introduced as a new content maintenance project. He happily accepted, but on the day of training/delivery things went very wrong, and we have no idea why. The client didn't even allow the site to be up for a week with the content we developed for him, and quickly replaced us with a Joomla-savvy intern so that he could completely destroy the content with shallow, unstructured, tasteless and plain wordsmithing (and I'm not even being visceral). Worst insult of all, he revoked our access to his server and the deployed CMS not even 10 minutes after being given his administrator account (we realized the day after that he did it in our own office; the nerve!). Everybody involved in the team is enraged and insulted. I never want to see this happen again. So, to try to make sense of this situation and avoid it in the future with new clients, I have two concrete questions: Is there even an appropriate course of action with a client like this, or is he just not worth the trouble of analyzing (blindly hoping this never happens again)? And, in the exercise of trying to blame ourselves instead of the client and take this as a lesson of... something, how should we set expectations for new clients about the working terms, process and final product, so that they are discouraged from mauling the content at will once they get the codes to the nukes (access to the CMS)?

    Read the article

  • BoundingBox created from mesh to origin, making it bigger

    - by Gunnar Södergren
    I'm working on a level-based survival game and I want to design my scenes in Maya and export them as a single model (with multiple meshes) into XNA. My problem is that when I try to create bounding boxes (for collision purposes) for each of the meshes, they are calculated from the origin to the far end of the current mesh, so to speak. I'm thinking that it might have something to do with the position each mesh brings from Maya, and that it's interpreted wrongly... or something. Here's the code for when I create the boxes:

    private static BoundingBox CreateBoundingBox(Model model, ModelMesh mesh)
    {
        Matrix[] boneTransforms = new Matrix[model.Bones.Count];
        model.CopyAbsoluteBoneTransformsTo(boneTransforms);

        BoundingBox result = new BoundingBox();
        foreach (ModelMeshPart meshPart in mesh.MeshParts)
        {
            BoundingBox? meshPartBoundingBox = GetBoundingBox(meshPart, boneTransforms[mesh.ParentBone.Index]);
            if (meshPartBoundingBox != null)
                result = BoundingBox.CreateMerged(result, meshPartBoundingBox.Value);
        }
        result = new BoundingBox(result.Min, result.Max);
        return result;
    }

    private static BoundingBox? GetBoundingBox(ModelMeshPart meshPart, Matrix transform)
    {
        if (meshPart.VertexBuffer == null)
            return null;

        Vector3[] positions = VertexElementExtractor.GetVertexElement(meshPart, VertexElementUsage.Position);
        if (positions == null)
            return null;

        Vector3[] transformedPositions = new Vector3[positions.Length];
        Vector3.Transform(positions, ref transform, transformedPositions);

        for (int i = 0; i < transformedPositions.Length; i++)
        {
            Console.WriteLine(" " + transformedPositions[i]);
        }

        return BoundingBox.CreateFromPoints(transformedPositions);
    }

    public static class VertexElementExtractor
    {
        public static Vector3[] GetVertexElement(ModelMeshPart meshPart, VertexElementUsage usage)
        {
            VertexDeclaration vd = meshPart.VertexBuffer.VertexDeclaration;
            VertexElement[] elements = vd.GetVertexElements();

            Func<VertexElement, bool> elementPredicate = ve => ve.VertexElementUsage == usage && ve.VertexElementFormat == VertexElementFormat.Vector3;
            if (!elements.Any(elementPredicate))
                return null;

            VertexElement element = elements.First(elementPredicate);

            Vector3[] vertexData = new Vector3[meshPart.NumVertices];
            meshPart.VertexBuffer.GetData((meshPart.VertexOffset * vd.VertexStride) + element.Offset, vertexData, 0, vertexData.Length, vd.VertexStride);

            return vertexData;
        }
    }

    Here's a link to a picture of the mesh (the model holds six meshes, but I'm only rendering one and its bounding box to make it clearer): http://www.gsodergren.se/portfolio/wp-content/uploads/2011/10/Screen-shot-2011-10-24-at-1.16.37-AM.png The mesh that I'm referring to is the cube-like one. The cylinder is a completely different model and not part of any bounding box calculation. I've double- (and triple-) checked that this mesh corresponds to this bounding box. Any thoughts on what I'm doing wrong?
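    For reference, a minimal sketch of how these helpers are intended to be called after loading the model; the asset name and the dictionary are assumptions added purely for illustration:

        // Usage sketch only: build one box per mesh after loading the exported Maya model.
        Model levelModel = Content.Load<Model>("level");
        var meshBoxes = new System.Collections.Generic.Dictionary<string, BoundingBox>();

        foreach (ModelMesh mesh in levelModel.Meshes)
        {
            // Each box should end up around its own mesh,
            // not stretched back toward the model origin.
            meshBoxes[mesh.Name] = CreateBoundingBox(levelModel, mesh);
        }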

    Read the article

  • Non-standard installation (installing Linux from Linux)

    - by Evan Plaice
    So, here's my setup. I have one partition with the newest version installed, a second partition with an older version installed (as a backup just in case), a swap partition that both share, and a boot partition so the bootloader doesn't need to be set up after each upgrade. Partitions: sda1 ext3 /boot; sda2 ext4 / (current version); sda3 ext4 / (old version); sda4 swap /swap; sda5 ntfs (contains folders symbolically linked to /home on /). So far it has been a very good setup. I can create new bootloaders without screwing it up, and adding my personal files into a new install is as simple as creating some symbolic links (the partition is NTFS in case I need to load Windows on the system again).
    Here's the issue. I'd like to be able to drop the install into /distro on the current version and install a new version over / on the old version, effectively replacing/upgrading it. The goal is to be able to just swap out new versions as they are released while maintaining redundancy in case I don't like the update. So far I have: downloaded the install.iso, created a folder in /distro, copied the install.iso into /distro, and extracted vmlinuz and initrd.lz into /distro. Then I modified /boot/grub/menu.lst with the following entry:
    title Install Linux
    root (hd0,1)
    kernel /distro/vmlinuz
    initrd /distro/initrd.lz
    vmlinuz loads perfectly, but it says it can't find initrd.lz on boot. I have also tried to uncompress the image with: unlzma < initrd.lz > initrd.img and updated the menu.lst file to match, but that doesn't work either. I'm assuming that vmlinuz (the Linux kernel) loads, fires up the virtual filesystem by creating a ramdisk (initrd), mounts the iso, and launches the installer. Am I missing something here?
    Update: First, I wanted to say that the accepted answer would have been the best option if I were doing a normal Ubuntu install. Unfortunately, I was installing Linux Mint (which lacks the script needed to make debootstrap work). So the problem with the above approach was that I was missing the command that vmlinuz (the Linux kernel) needed to execute to boot into LiveCD mode. By looking in the /boot/grub/grub.cfg file I found what I was missing. Although this method will work, it requires that the installation files reside on their own partition. I took the easy route and used unetbootin to drop the LiveCD onto a USB drive and booted from that. Like I said before, debootstrap would have been the ideal solution here. Even though I couldn't use it, I wrote down the steps it would have taken to use it.
    Step One: Format sda3 (the partition with the old copy of Linux that's being overwritten). I used gparted to format it as ext4 from within the current Linux install. How this is done varies based on what tools you prefer to use.
    Step Two: Mount the newly formatted partition (we'll call the mount ubuntu for simplicity): sudo mkdir /mnt/ubuntu ; sudo mount -o loop /dev/sda3 /mnt/ubuntu
    Step Three: Get debootstrap: sudo apt-get install debootstrap
    Step Four: Mount the install disk (replace ubuntu.iso with the name of your install disk): sudo mkdir /media/cdrom ; sudo mount -o loop ~/ubuntu.iso /media/cdrom
    Step Five: Install the OS using debootstrap (replace feisty with the version you're installing and amd64 with your processor's architecture): sudo debootstrap --arch amd64 feisty /mnt/ubuntu file:/media/cdrom
    The settings here vary. While I loaded debootstrap using an install iso, you can also have debootstrap automatically download and install it from a repository link (while most of these repositories contain Debian versions, I'm still not clear whether Ubuntu has similar repositories). Here is a list of the Debian package repositories and their mirrors. This is how you'd deploy debootstrap if you were doing it directly from a repository: sudo debootstrap --arch amd64 squeeze /mnt/debian http://ftp.us.debian.org/debian
    Here's the link that I primarily used to figure this out.

    Read the article

  • Using MVP, how to create a view from another view, linked with the same model object

    - by Dinaiz
    Background: We use the Model-View-Presenter design pattern along with the abstract factory pattern and the "signal/slot" pattern in our application, to fulfill two main requirements: enhance testability (very lightweight GUI, every action can be simulated in unit tests) and make the "view" totally independent from the rest, so we can change the actual view implementation without changing anything else. In order to do so, our code is divided into four layers: Core, which holds the model; Presenter, which manages interactions between the view interfaces (see below) and the core; View Interfaces, which define the signals and slots for a view, but not the implementation; and Views, the actual implementations of the views. When the presenter creates or deals with views, it uses an abstract factory and only knows about the view interfaces. It does the signal/slot binding between view interfaces. It doesn't care about the actual implementation. In the "views" layer, we have a concrete factory which deals with implementations. The signal/slot mechanism is implemented using a custom framework built upon boost::function. Really, what we have is something like this: http://martinfowler.com/eaaDev/PassiveScreen.html. Everything works fine.
    The problem: However, there's a problem I don't know how to solve. Let's take a very simple drag-and-drop example. I have two container views (ContainerView1, ContainerView2). ContainerView1 has an ItemView1. I drag the ItemView1 from ContainerView1 to ContainerView2. ContainerView2 must create an ItemView2, of a different type, but which "points" to the same model object as ItemView1. So ContainerView2 gets a callback for the drop action with ItemView1 as a parameter, and it calls ContainerPresenter2, passing it the view. In this case we are dealing only with views. In MVP-PV, views aren't supposed to know anything about the presenter or the model, right? How can I create ItemView2 from ItemView1 without knowing which model object ItemView1 is representing? I thought about adding an "itemId" to every view, this id being the id of the core object the view represents. So, in pseudo-code, ContainerPresenter2 would do something like: itemView2 = abstractWidgetFactory.createItemView2(); this.add(itemView2, itemView1.getCoreObjectId()). I won't go too much into the details; that just works. The problem I have here is that those itemIds are just like pointers, and pointers can dangle. Imagine that by mistake I delete itemView1, and this deletes coreObject1. ItemView2 will then hold a coreObjectId which refers to an invalid coreObject. Isn't there a more elegant and "bulletproof" solution? Even though I never did Objective-C or Mac OS X programming, I couldn't help but notice that our framework is very similar to the Cocoa framework. How do they deal with this kind of problem? I couldn't find more in-depth information about that on Google. If someone could shed some light on this... I hope this question isn't too confusing.
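    Not from the original discussion, just a rough sketch of one common way to make the id-based approach safer, written in C# for brevity since the question is language-agnostic (all class and method names are invented for illustration): views keep only an opaque id, and presenters resolve it through a registry that can tell a live core object from a stale id instead of handing back a dangling reference.

        // Hypothetical registry shared by the presenters; all names are invented for this sketch.
        using System;
        using System.Collections.Generic;

        public sealed class CoreObjectRegistry<T> where T : class
        {
            private readonly Dictionary<Guid, T> objects = new Dictionary<Guid, T>();

            public Guid Register(T coreObject)
            {
                Guid id = Guid.NewGuid();
                objects[id] = coreObject;
                return id;
            }

            public void Unregister(Guid id)
            {
                objects.Remove(id);
            }

            // Returns false for a stale id, so a presenter can refuse to create ItemView2
            // instead of wiring it to a core object that no longer exists.
            public bool TryResolve(Guid id, out T coreObject)
            {
                return objects.TryGetValue(id, out coreObject);
            }
        }

    With something like this, ContainerPresenter2 would call TryResolve with the id carried by ItemView1 and simply decline the drop when the core object has already been deleted.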

    Read the article

  • SharePoint 2010 Hosting :: Hiding SharePoint 2010 Ribbon From Anonymous Users

    - by mbridge
    The user interface improvements in SharePoint 2010 as a whole are truly amazing. Microsoft has brought this already impressive product leaps and bounds in terms of accessibility, standards, and usability. One thing you might be aware of is the new and quite useful "ribbon" control that appears by default at the top of every SharePoint 2010 master page. Here's a sneak peek: You'll see this ribbon not only in the 2010 web interface, but also throughout the entire family of Office products coming out this year. Even SharePoint Designer 2010 makes use of the ribbon in a very flexible and useful way.
    Hiding The Ribbon. In SharePoint 2010, the ribbon is used almost exclusively for content creation and site administration. It doesn't make much sense to show the ribbon on a public-facing internet site (in fact, it can really detract from your site's design when it appears), so you'll probably want to hide the ribbon when users aren't logged in. Here's how it works:

    <SharePoint:SPSecurityTrimmedControl PermissionsString="ManagePermissions" runat="server">
        <div id="s4-ribbonrow" class="s4-pr s4-ribbonrowhidetitle">
            <!-- Ribbon code appears here... -->
        </div>
    </SharePoint:SPSecurityTrimmedControl>

    In your master page, find the SharePoint ribbon by looking for the line of code that begins with <div id="s4-ribbonrow">. Place the SPSecurityTrimmedControl code around your ribbon to conditionally hide it based on user permissions. In our example, we've hidden the ribbon from any user who doesn't have the ManagePermissions ability, which is going to be almost any user short of a site administrator.
    Other Permission Levels. You can specify different permission levels for the SPSecurityTrimmedControl, allowing you to configure exactly who can see the SharePoint 2010 ribbon. Basically, this control will hide anything inside of it when users don't have the specified PermissionString. The available options include:
    1. List Permissions - ManageLists, CancelCheckout, AddListItems, EditListItems, DeleteListItems, ViewListItems, ApproveItems, OpenItems, ViewVersions, DeleteVersions, CreateAlerts, ViewFormPages
    2. Site Permissions - ManagePermissions, ViewUsageData, ManageSubwebs, ManageWeb, AddAndCustomizePages, ApplyThemeAndBorder, ApplyStyleSheets, CreateGroups, BrowseDirectories, CreateSSCSite, ViewPages, EnumeratePermissions, BrowseUserInfo, ManageAlerts, UseRemoteAPIs, UseClientIntegration, Open, EditMyUserInfo
    3. Personal Permissions - ManagePersonalViews, AddDelPrivateWebParts, UpdatePersonalWebParts
    You can use this control to hide anything in your master page or on related page layouts, so be sure to keep it in mind when you're trying to hide/show things conditionally based on user permission.
    The One Catch. You may notice that the login control (or welcome control) is actually inside the ribbon by default in SharePoint 2010. You'll probably want to pull this control out of the ribbon and place it elsewhere on your page. Just look for the line of code that looks like this:

    <wssuc:Welcome id="IdWelcome" runat="server" EnableViewState="false"/>

    Move this code out of the ribbon and into another location within your master page. Save your changes, check in and approve all files, and anonymous users will never know your site is built on SharePoint 2010!

    Read the article

  • BizTalk 2009 - BizTalk Benchmark Wizard: Running a Test

    - by StuartBrierley
    The BizTalk Benchmark Wizard is a utility that can be used to gain some validation of a BizTalk installation, giving a level of guidance on whether it is performing as might be expected. It should be used after BizTalk Server has been installed and before any solutions are deployed to the environment. This will ensure that you are getting consistent and clean results from the BizTalk Benchmark Wizard. The BizTalk Benchmark Wizard applies load to the BizTalk Server environment under a choice of specific scenarios. During these scenarios, performance counter information is collected and assessed against statistics that are appropriate to the BizTalk Server environment. For details on installing the Benchmark Wizard see my previous post. The BizTalk Benchmark Wizard provides two simple test scenarios, one for messaging and one for orchestrations, which can be used to test your BizTalk implementation.
    Messaging: Loadgen generates a new XML message and sends it over NetTCP. A WCF-NetTCP receive location receives the XML document from Loadgen. The PassThruReceive pipeline performs no processing and the message is published by the EPM to the MessageBox. The WCF one-way send port, which is the only subscriber to the message, retrieves the message from the MessageBox. The PassThruTransmit pipeline provides no additional processing. The message is delivered to the back-end WCF service by the WCF NetTCP adapter.
    Orchestrations: Loadgen generates a new XML message and sends it over NetTCP. A WCF-NetTCP receive location receives the XML document from Loadgen. The XMLReceive pipeline performs no processing and the message is published by the EPM to the MessageBox. The message is delivered to a simple orchestration which consists of a receive location and a send port. The WCF one-way send port, which is the only subscriber to the orchestration message, retrieves the message from the MessageBox. The PassThruTransmit pipeline provides no additional processing. The message is delivered to the back-end WCF service by the WCF NetTCP adapter.
    Below is a quick outline of how to run the BizTalk Benchmark Wizard on a single server, although it should be noted that this is not ideal, as the server is then both generating and processing the load. In order to separate this load out you should run the "Indigo" service on a separate server. To start the BizTalk Benchmark Wizard click Start > All Programs > BizTalk Benchmark Wizard > BizTalk Benchmark Wizard. On this screen click Next; you will then get the following pop-up window. Check the server and database names and select the "check prerequisites" check-box before pressing OK. The wizard will then check that the appropriate test scenarios are installed. You should then choose the test scenario that you wish to run (messaging or orchestration) and the architecture that most closely matches your environment. You will then be asked to confirm the host server for each of the host instances. Next you will be presented with the prepare screen. You will need to start the Indigo service before pressing the Test Indigo Service button. If you are running the Indigo service on a separate server you can enter the server name here. To start the Indigo service click Start > All Programs > BizTalk Benchmark Wizard > Start Indigo Service. While the test is running you will be presented with two speed-dial type displays, one for the received messages per second and one for the processed messages per second.
The green dial shows the current rate and the red dial shows the overall average rate.  Optionally you can view the CPU usage of the various servers involved in processing the tests. For my development environment I expected low results and this is what I got.  Although looking at the online high scores table and comparing to the quad core system listed, the results are perhaps not really that bad. At some time I may look at what improvements I can make to this score, but if you are interested in that now take a look at Benchmark your BizTalk Server (Part 3).

    Read the article

  • Engagement: Don’t Forget Your Employees!

    - by Kellsey Ruppel
    By Mark Brown, Sr. Director, Oracle WebCenter. This week we want to focus on employee engagement, and how it is critical to your business. Today we hear and read a great deal about "customer engagement", and rightly so; it is those customers, whether they be traditional paying customers, citizens, students, club members, or whoever it is, that are "paying the bills". A more engaged customer is more likely to make it easier to pay those bills by buying more, giving good reviews, or spreading the word of how wonderful their experience was. But what about those who are providing those services, those who design and make those goods; why is it that all too often they are left out of conversations concerning engagement? In fact, it is critical that we consider our employees as customers, since they are using internal systems that run your organization the same way customers use external systems. Studies have shown that an organization in which the employees feel "engaged", or better able to make decisions, do their jobs, and connect with their peers, delivers a better return to its stakeholders (shareholders). On the surface this seems obvious: happy employees are more productive employees. But it leads to the question: how many of our existing policies, systems and processes are actually reducing that level of engagement? Let's look at a couple of examples. If posting new information that may be of great value to everyone in the larger organization is hard to do because we use an antiquated system, then we're making it hard to share and increasing the potential for duplicate work. If it is not trivially obvious how to create and publish this post, then chances are very high that I'll put it at the bottom of my queue. And finally, when critical information is spread across various systems, intranet sites, workgroups and people's inboxes, then it is very hard to learn and grow from that information. These may sound trivial, but how often do we push things off not because they are intellectually challenging (we may have the answer at our fingertips), but because it is hard to make that information readily available? If an engaged employee is a productive employee, then what can we do to increase their level of engagement? We can start by looking for opportunities to provide self-documenting, self-service solutions. Our newer employees grew up using simplified web interfaces every day, and they loathe calling a help desk unless it is the last resort. Sadly, many of our enterprise applications have not kept pace, and we all still have processes that are based on sending an email, like discount approvals, vacation requests, or even offer-letter approvals. My suggestion is to pick one highly visible, high-impact process where employees are either reluctant to execute the process or openly complain about how cumbersome it is, and look at the mechanism for that process. If there are better ways, streamlined steps, or better UIs that could be done, then you have a candidate for reconfiguring that process and making it more engaging. Looking to better engage your employees? Start here!

    Read the article

  • Dual Screen will only mirror after 12.04 upgrade

    - by Ne0
    I have been using Ubuntu with a dual screen for years now; after upgrading to 12.04 LTS I cannot get my dual screen working properly.
    Graphics:
    01:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI RV350 AR [Radeon 9600]
    01:00.1 Display controller: Advanced Micro Devices [AMD] nee ATI RV350 AR [Radeon 9600] (Secondary)
    I noticed I was using open-source drivers and attempted to install the official binaries using the methods in this thread. Output:
    liam@liam-desktop:~$ sudo apt-get install fglrx fglrx-amdcccle
    Reading package lists... Done Building dependency tree Reading state information... Done The following packages will be upgraded: fglrx fglrx-amdcccle 2 upgraded, 0 newly installed, 0 to remove and 12 not upgraded. Need to get 45.1 MB of archives. After this operation, 739 kB of additional disk space will be used. Get:1 http://gb.archive.ubuntu.com/ubuntu/ precise/restricted fglrx i386 2:8.960-0ubuntu1 [39.2 MB] Get:2 http://gb.archive.ubuntu.com/ubuntu/ precise/restricted fglrx-amdcccle i386 2:8.960-0ubuntu1 [5,883 kB] Fetched 45.1 MB in 1min 33s (484 kB/s) (Reading database ... 328081 files and directories currently installed.) Preparing to replace fglrx 2:8.951-0ubuntu1 (using .../fglrx_2%3a8.960-0ubuntu1_i386.deb) ... Removing all DKMS Modules Error! There are no instances of module: fglrx 8.951 located in the DKMS tree. Done. Unpacking replacement fglrx ... Preparing to replace fglrx-amdcccle 2:8.951-0ubuntu1 (using .../fglrx-amdcccle_2%3a8.960-0ubuntu1_i386.deb) ... Unpacking replacement fglrx-amdcccle ... Processing triggers for ureadahead ... ureadahead will be reprofiled on next reboot Setting up fglrx (2:8.960-0ubuntu1) ... update-alternatives: warning: forcing reinstallation of alternative /usr/lib/fglrx/ld.so.conf because link group i386-linux-gnu_gl_conf is broken. update-alternatives: warning: skip creation of /etc/OpenCL/vendors/amdocl64.icd because associated file /usr/lib/fglrx/etc/OpenCL/vendors/amdocl64.icd (of link group i386-linux-gnu_gl_conf) doesn't exist. update-alternatives: warning: skip creation of /usr/lib32/libaticalcl.so because associated file /usr/lib32/fglrx/libaticalcl.so (of link group i386-linux-gnu_gl_conf) doesn't exist. update-alternatives: warning: skip creation of /usr/lib32/libaticalrt.so because associated file /usr/lib32/fglrx/libaticalrt.so (of link group i386-linux-gnu_gl_conf) doesn't exist. update-alternatives: warning: forcing reinstallation of alternative /usr/lib/fglrx/ld.so.conf because link group i386-linux-gnu_gl_conf is broken. update-alternatives: warning: skip creation of /etc/OpenCL/vendors/amdocl64.icd because associated file /usr/lib/fglrx/etc/OpenCL/vendors/amdocl64.icd (of link group i386-linux-gnu_gl_conf) doesn't exist. update-alternatives: warning: skip creation of /usr/lib32/libaticalcl.so because associated file /usr/lib32/fglrx/libaticalcl.so (of link group i386-linux-gnu_gl_conf) doesn't exist. update-alternatives: warning: skip creation of /usr/lib32/libaticalrt.so because associated file /usr/lib32/fglrx/libaticalrt.so (of link group i386-linux-gnu_gl_conf) doesn't exist. update-initramfs: deferring update (trigger activated) update-initramfs: Generating /boot/initrd.img-3.2.0-25-generic-pae Loading new fglrx-8.960 DKMS files... Building only for 3.2.0-25-generic-pae Building for architecture i686 Building initial module for 3.2.0-25-generic-pae Done. fglrx: Running module version sanity check.
    - Original module - No original module exists within this kernel - Installation - Installing to /lib/modules/3.2.0-25-generic-pae/updates/dkms/ depmod....... DKMS: install completed. update-initramfs: deferring update (trigger activated) Processing triggers for bamfdaemon ... Rebuilding /usr/share/applications/bamf.index... Setting up fglrx-amdcccle (2:8.960-0ubuntu1) ... Processing triggers for initramfs-tools ... update-initramfs: Generating /boot/initrd.img-3.2.0-25-generic-pae Processing triggers for libc-bin ... ldconfig deferred processing now taking place
    liam@liam-desktop:~$ sudo aticonfig --initial -f
    aticonfig: No supported adapters detected
    When I attempt to get my settings back to what they were before upgrading, I get this message:
    requested position/size for CRTC 81 is outside the allowed limit: position=(1440, 0), size=(1440, 900), maximum=(1680, 1680)
    and
    GDBus.Error:org.gtk.GDBus.UnmappedGError.Quark._gnome_2drr_2derror_2dquark.Code3: requested position/size for CRTC 81 is outside the allowed limit: position=(1440, 0), size=(1440, 900), maximum=(1680, 1680)
    Any ideas on what I need to do to fix this issue?

    Read the article

  • Step Away From That Computer! You’re Not Qualified to Use It!

    - by Michael Sorens
    Most things tend to come with warnings and careful instructions these days, but sadly not for one of the most ubiquitous appliances of all: your computer. If a chainsaw is missing its instructions, you’re well advised not to use it, even though you probably know roughly how it’s supposed to work. I confess, there are days when I feel the same way about computers. Long ago, in the early days of the computer age, it was possible to know everything about computers. But today, it is challenging to be fully knowledgeable even in one small area, and most people aren’t as savvy as they like to think. And, if I may borrow from Edwin Abbott Abbott’s classic Flatland, that includes me. And you. Need an example of what I mean? Take a look at almost any recent month’s batch of Windows updates. Just two quick questions for you: Do you need all of those updates? Is it safe to install all of those updates? I do software design and development for a living on Windows and the .NET platform, but I will be quite candid: I often have little clue what the heck some of those updates are going to do or why they are needed. So, if you do not know why they are needed or what they do, how do you know if they are safe? Of course, one can sidestep both questions by accepting Microsoft’s recommended Windows Update setting of “install updates automatically”. That leads you to infer that you need all of them (which is not always the case) and, more significantly, that they are safe. Quite safe. Ah, lest reality intrude upon such a pretty picture! Sadly, there is no such thing as risk-free software installation, and payloads from Windows Update are no exception. Earlier this year, a Windows Secrets Patch Watch article touted this headline: Keep this troublesome kernel update on hold. It discusses KB 2862330, a security update originally published more than 4 months earlier, and yet the article still recommends not installing it! Most people simply do not have the time, resources, or interest to go about figuring out which updates to install, postpone, or skip for safety reasons. Windows Secrets Patch Watch is the best service I have encountered for getting advice, but it is still no panacea, and using the service effectively requires a degree of computer literacy that I still think is beyond a good number of people. Which brings us full circle: Step Away From That Computer! You’re Not Qualified to Use It!

    Read the article

  • SOA Community Newsletter June 2013

    - by JuergenKress
    Dear SOA partner community member Thanks for showing us your interest in rerunning the Fusion Middleware Summer Camps! Following your suggestions, we are happy to announce the 3rd edition of our advanced Fusion Middleware training. The camps will take place from August 26th to 30th, 2013, in Lisbon, Portugal. Topics will include Adaptive Case Management (ACM) as part of BPM Suite, B2B, Advanced SOA and SOA Governance. Please make sure you plan and book your seat in advance (booking is on a first come, first served basis!). Thanks for all your efforts to become certified and Specialized. All experts who achieved the SOA Suite 11g Essentials or BPM Suite 11g Certified Implementation Specialist certification can download a logo for their blog or business card at the Competence Center. All companies that achieved a SOA or BPM specialization can request a plaque for their office. As part of our Industrial SOA article series we published “Canonizing a Language for Architecture” in the Service Technology Magazine and on Oracle Technology Network. If you write books or a blog, make sure you share it with us! Cloud Computing is the hottest topic in IT, and especially as an architect you should be aware of the concepts and technology, so I highly recommend Thomas Erl’s latest book, “Cloud Computing”. In the BPM space, Adaptive Case Management (ACM) is the hottest topic; with BPM PS6, the backend ACM functionality and an ACM sample application are available. You can even combine this hype with Customer Experience. The BPM section in this newsletter reflects the high importance of the topic and includes a BPM PS6 video showing the process lifecycle, the BPM Resource Kit, Functional Testing, an Introduction to Web Forms, a Customized Workspace Application and an Instance Patching Demo. B2B is also becoming more and more popular in the Oracle SOA Suite. If you could not attend the training organized in May, we offer you an additional B2B training as part of the Summer Camps, or you can download the B2B training material from our SOA Community Workspace (SOA Community membership required). Thanks to all for sharing valuable SOA content with our community! Special thanks to ec4u for the new reference of SOA Suite and AIA Foundation Pack at a Swiss insurance company. It is time to submit a SOA and BPM reference request today! In this edition of the newsletter you will see the second part of Guido and Ronald's OSB article series, and Kathiravan Udayakumar has published an exclusive article on SOA Suite best practice. If you want to submit your content for the next edition of the newsletter, please feel free to send it to me. The A-Team is an excellent contributor to best practices; make sure you visit the new A-Team page and read their articles, such as Getting to know Maven. Also on the SOA side, we have published many new articles from the community: Oracle SOA Suite for the Busy IT Professional by Frank Munz, SOA Suite Knowledge - Polyglot Service Implementation with Groovy by Alexander Suchier, QA82 Analyzer - Automated Quality Assurance for Oracle SOA Suite Projects, Verifying the Target by Anthony Reynolds, and a new book called Oracle SOA Governance 11g Implementation by Luis Augusto Weir. Two new SOA on-demand training courses, the Oracle Business Rules Self-Study Course and the Introduction to Human Workflow online course, are available now! Make use of the summer time and get trained - hope to see you in Lisbon for the Summer Camps! 
Jürgen Kress Oracle SOA & BPM Partner Adoption EMEA To read the newsletter please visit http://tinyurl.com/soanewsJune2013 (OPN Account required) To become a member of the SOA Partner Community please register at http://www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: SOA Community newsletter,SOA Community,Oracle,OPN,Jürgen Kress,SOA,BPM

    Read the article

  • Cutting Edge versus Just Average? Your SOA, Got BPM? by Mala Ramakrishnan

    - by JuergenKress
    Service Oriented Architecture (SOA) has completely transformed IT since it was introduced well over a decade ago. Organizations have been re-plumbing their infrastructure for reusability, efficiency and gain, and succeeding with it. Best practices have emerged, and people and technology have matured. We have got better at delivering mission-critical applications and services on a stable platform. Yet, there is this one secret that sets some SOA customers apart from the others. These companies grow and revolutionize their business, not just transform their IT infrastructure. The differences seem subtle to an untrained eye examining these organizations externally. And from within the company, it’s a bit like an ant sitting on an elephant, hard to differentiate between the IT trunk and business tail. What is it that some organizations do differently that makes them succeed beyond SOA? These organizations pull in business people more and more to weigh in on their IT decisions. They wrestle to understand process over services. They don’t settle easily when bridging business metrics and IT performance. They anguish over business requirements not translating seamlessly and quickly into IT. IT is not just an enabler but a pillar that revolutionizes their business. Okay, I’ll give it to you. These organizations layer Business Process Management (BPM) on top of their SOA. Think about the lifeblood business processes in your own organization. If you are FedEx, this would be shipping and handling. If you are Stanford Hospital, this would be patient case management: from on-boarding through discharge and follow-up care. If you are Wells Fargo, this would be loan origination. Now think about how your SOA ties into your business process. Can you decouple your business processes from your SOA so that the two can transform and change independently of each other? Can you forecast success metrics for your business process, make the changes across the board, and then look back over different periods of time to see if you are on track? Are your critical business processes entrenched in the minds of a few experts in your organization, or does everyone from the receptionist to your enterprise architect to your CEO understand what they can do to revolutionize them? Business Process Management is a superset of SOA. It is the process of getting your business to articulate business value and metrics and having them implemented in IT without any loss in translation. It is the act of extracting the business processes from the minds of experts and the IT applications in your organization and valuing them as assets for performance and gain. BPM is stepping outside your SOA and moving your organization to the next level of innovation. Oracle is accelerating BPM across industries with the latest launch. Join us to understand how BPM can give your organization a cutting edge over your SOA. SOA & BPM Partner Community For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: SOA,BPM,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • Overview of getting and setting the URL and parts of the URL using angularjs and/or Javascript

    - by Sandy Good
    Getting and setting the URL, and different parts of the URL, is a basic part of application design: for page navigation, deep linking, providing a link to the user, querying data, and passing information to other pages. Both AngularJS and JavaScript provide ways to get/set the URL and parts of the URL. I'm looking for the following information: Situation: Show a simple URL in the browser address bar to the user. Provide a more detailed URL with string parameters to the page that the user will not see. In other words, two different URLs will be used: a simple one that the user sees in the browser, and a more detailed one available to the page on load. Get URL info with PHP when the page initially loads, but don't reload the PHP page when the user needs more detailed info that is already loaded but not displayed yet. Set the URL with a more detailed URL for deep linking as the user drills down to more specific information. Get URL info in a controller or JavaScript when AngularJS detects a change in the URL with routing. Hash or Query String or Both? Should I use a hash # in the URL, a string ?= or both? Here is what I currently know and what I want: A Query String HTTP:\\www.name.com?mykey=itemID will prevent AngularJS from reloading the page. So, I can change the URL by adding/changing the string at the end, thereby providing new info to the page, and keep the page from reloading. I can change the URL and force a page reload with: window.location.href = "#Store/" + argUserPubId + "?itemID=home"; If home is the itemID string, I want the code to simply load the page and not display more detailed information. If there is a real itemID in the URL query string, I want the code to display the more detailed information. Code from AngularJS will run either from the controller specified in the routing, or a controller specified in the HTML, or both. The AngularJS code specified in the routing seems to run first, before the code specified in the HTML. A different URL for the page can be used in the AngularJS routing templateUrl: than the URL that was sent to the browser address bar. when('/Store/:StoreId', { templateUrl: function(params){return 'Client_Pages/Stores.php?storeID=' + params.StoreId;}, controller: 'storeParseData' }). The above code detects http:\\www.name.com\Store\StoreID in the browser, but SENDS http:\\www.name.com\Client_Pages/Stores.php?storeID=StoreID to the page. In the above code, a function is used for the AngularJS routing templateUrl: to set the templateUrl dynamically. So, when the user clicks something to see details of an item, how should I configure the URL? Should I use AngularJS $location or window.location.href? Should I use a longer URL with more parameters, a hash bang, or a query string? Should I use: http:\\www.name.com\Store\StoreID\ItemID or http:\\www.name.com\Store\StoreID#ItemID or http:\\www.name.com\Store\StoreID?ItemID or http:\\www.name.com\Store#StoreID?ItemID or Something else?
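    One possible direction is sketched below. This is only an illustrative sketch, assuming AngularJS 1.x with ngRoute: the /Store/:StoreId route, the storeParseData controller, the Client_Pages/Stores.php template and the itemID parameter are taken from the question, while reloadOnSearch, $location, $routeParams and the $routeUpdate event are standard ngRoute/AngularJS APIs. The idea is to keep the simple path in the address bar, build the detailed server URL inside templateUrl, and change only the query string when drilling into an item so the route is not reloaded.

    // Illustrative sketch only (AngularJS 1.x + ngRoute), not a definitive answer.
    var app = angular.module('storeApp', ['ngRoute']);

    app.config(function ($routeProvider) {
      $routeProvider.when('/Store/:StoreId', {
        // Simple URL in the address bar; detailed URL sent to the server for the template.
        templateUrl: function (params) {
          return 'Client_Pages/Stores.php?storeID=' + params.StoreId;
        },
        controller: 'storeParseData',
        // Changing only the query string (?itemID=...) will not re-instantiate the controller.
        reloadOnSearch: false
      });
    });

    app.controller('storeParseData', function ($scope, $routeParams, $location) {
      $scope.storeId = $routeParams.StoreId;

      // Drill into an item: update the query string for deep linking, with no page reload.
      $scope.showItem = function (itemId) {
        $location.search('itemID', itemId);
      };

      // Fires when the URL changes but the route is not reloaded (because reloadOnSearch is false).
      $scope.$on('$routeUpdate', function () {
        var itemId = $location.search().itemID;
        if (itemId && itemId !== 'home') {
          // load and display the detailed information for itemId here
        }
      });
    });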

    Read the article

  • Is inline SQL still classed as bad practice now that we have Micro ORMs?

    - by Grofit
    This is a bit of an open-ended question, but I wanted some opinions. I grew up in a world where inline SQL scripts were the norm; then we were all made very aware of SQL injection issues and of how fragile the SQL was when doing string manipulation all over the place. Then came the dawn of the ORM, where you explained the query to the ORM and let it generate its own SQL, which in a lot of cases was not optimal but was safe and easy. Another good thing about ORMs or database abstraction layers was that the SQL was generated with its database engine in mind, so I could use Hibernate/NHibernate with MSSQL or MySQL and my code never changed; it was just a configuration detail. Now fast forward to the current day, where Micro ORMs seem to be winning over more developers, and I was wondering why we have seemingly taken a U-turn on the whole inline SQL subject. I must admit I do like the idea of no ORM config files and being able to write my query in a more optimal manner, but it feels like I am opening myself back up to the old vulnerabilities such as SQL injection, and I am also tying myself to one database engine, so if I want my software to support multiple database engines I would need to do some more string hackery, which seems to make code unreadable and more fragile. (Just before someone mentions it, I know you can use parameter-based arguments with most micro ORMs, which offers protection in most cases from SQL injection.) So what are people's opinions on this sort of thing? I am using Dapper as my Micro ORM in this instance and NHibernate as my regular ORM in this scenario, however most in each field are quite similar. What I term as inline SQL is SQL strings within source code. There used to be design debates over SQL strings in source code detracting from the fundamental intent of the logic, which is why statically typed LINQ-style queries became so popular: it is still just one language, but with, say, C# and SQL in one file you now have two languages intermingled in your raw source code. Just to clarify, SQL injection is just one of the known issues with using SQL strings; I already mentioned that you can stop this from happening with parameter-based queries. However, I highlight other issues with having SQL queries ingrained in your source code, such as the lack of DB vendor abstraction, as well as losing any level of compile-time error capture on string-based queries. These are all issues which we managed to sidestep with the dawn of ORMs and their higher-level querying functionality, such as HQL or LINQ etc. (not all of the issues, but most of them). So I am less focused on the individually highlighted issues and more on the bigger picture: is it now becoming more acceptable to have SQL strings directly in your source code again, as most Micro ORMs use this mechanism? Here is a similar question which has a few different viewpoints, although it is more about inline SQL without the micro ORM context: http://stackoverflow.com/questions/5303746/is-inline-sql-hard-coding
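    To make the parameter-based point concrete, here is a minimal, hypothetical sketch: the Product table, its columns and the connection string are invented for illustration, while Query<T> with an anonymous parameter object is the standard Dapper call. The SQL string still lives inline in the source, but the user-supplied value travels as a bound parameter rather than being concatenated into the string.

    // A minimal, hypothetical sketch assuming Dapper and SQL Server; the Product table,
    // its columns and the connection string are invented for illustration only.
    using System.Collections.Generic;
    using System.Data.SqlClient;
    using Dapper;

    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public decimal Price { get; set; }
    }

    public class ProductRepository
    {
        private readonly string _connectionString;

        public ProductRepository(string connectionString)
        {
            _connectionString = connectionString;
        }

        public IEnumerable<Product> FindCheaperThan(decimal maxPrice)
        {
            using (var connection = new SqlConnection(_connectionString))
            {
                // The SQL string is inline, but @MaxPrice is sent as a bound parameter,
                // so the user-supplied value is never concatenated into the SQL text.
                return connection.Query<Product>(
                    "SELECT Id, Name, Price FROM Product WHERE Price < @MaxPrice",
                    new { MaxPrice = maxPrice });
            }
        }
    }

    Called as new ProductRepository(connectionString).FindCheaperThan(20m), this keeps the query readable while still avoiding string concatenation; vendor portability, however, remains the caller's problem, exactly as the question points out.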

    Read the article

  • Today at Oracle OpenWorld 2012

    - by Scott McNeil
    We have another full day of great Oracle OpenWorld keynotes, sessions, demos and customer presentations in the Scene and Be Heard theater. Here's a quick rundown of what's happening today with Oracle Enterprise Manager 12c: Download the Oracle Enterprise Manager 12c OpenWorld schedule (PDF)
    Oracle Enterprise Manager Cloud Control 12c (and Private Cloud) General Sessions - Tues 2 Oct, 2012 (Time | Title | Location)
    11:45 AM - 12:45 PM | General Session: Using Oracle Enterprise Manager to Manage Your Own Private Cloud | Moscone South - 103*
    1:15 PM - 2:15 PM | General Session: Breakthrough Efficiency in Private Cloud Infrastructure | Moscone West - 3014
    Conference Sessions - Tues 2 Oct, 2012 (Time | Title | Location)
    10:15 AM - 11:15 AM | Oracle Exadata/Oracle Enterprise Manager 12c: Journey into Oracle Database Cloud | Moscone West - 3018
    10:15 AM - 11:15 AM | Bulletproof Your Application Upgrades with Secure Data Masking and Subsetting | Moscone West - 3020
    10:15 AM - 11:15 AM | Oracle Enterprise Manager 12c: Architecture Deep Dive, Tips, and Techniques | Moscone South - 303
    11:45 AM - 12:45 PM | RDBMS Forensics: Troubleshooting with Active Session History | Moscone West - 3018
    11:45 AM - 12:45 PM | Building and Operationalizing Your Data Center Environment with Oracle Exalogic | Moscone South - 309
    11:45 AM - 12:45 PM | Securely Building a National Electronic Health Record: Singapore Case Study | Westin San Francisco - Concordia
    1:15 PM - 2:15 PM | Managing Heterogeneous Environments with Oracle Enterprise Manager | Moscone West - 3018
    1:15 PM - 2:15 PM | Complete Oracle WebLogic Server Management with Oracle Enterprise Manager 12c | Moscone South - 309
    1:15 PM - 2:15 PM | Database Lifecycle Management with Oracle Enterprise Manager 12c | Moscone West - 3020
    1:15 PM - 2:15 PM | Best Practices, Key Features, Tips, Techniques for Oracle Enterprise Manager 12c Upgrade | Moscone South - 307
    1:15 PM - 2:15 PM | Enterprise Cloud with CSC’s Foundation Services for Oracle and Oracle Enterprise Manager 12c | Moscone South - 236
    5:00 PM - 6:00 PM | Deep Dive 3-D on Oracle Exadata Management: From Discovery to Deployment to Diagnostics | Moscone West - 3018
    5:00 PM - 6:00 PM | Everything You Need to Know About Monitoring and Troubleshooting Oracle GoldenGate | Moscone West - 3005
    5:00 PM - 6:00 PM | Oracle Enterprise Manager 12c: The Nerve Center of Oracle Cloud | Moscone West - 3020
    5:00 PM - 6:00 PM | Advanced Management of Oracle E-Business Suite with Oracle Enterprise Manager | Moscone West - 2016
    5:00 PM - 6:00 PM | Oracle Enterprise Manager 12c Cloud Control Performance Pages: Falling in Love Again | Moscone West - 3014
    Hands-on Labs - Tues 2 Oct, 2012 (Time | Title | Location)
    10:15 AM - 12:45 PM | Managing the Cloud with Oracle Enterprise Manager 12c | Marriott Marquis - Salon 5/6
    1:15 PM - 2:15 PM | Database Performance Tuning Hands-on Lab | Marriott Marquis - Salon 5/6
    Scene and Be Heard Theater Sessions - Tues 2 Oct, 2012 (Time | Title | Location)
    10:30 AM - 10:50 AM | Start Small, Grow Big: Hands-On Oracle Private Cloud—A Step-by-Step Guide | Moscone South Exhibition Hall - Booth 2407
    12:30 PM - 12:50 PM | Blue Medora’s Oracle Enterprise Manager Plug-in for VMware vSphere Monitoring | Moscone South Exhibition Hall - Booth 2407
    Demos (Demo | Location)
    Application and Infrastructure Testing | Moscone West - W-092
    Automatic Application and SQL Tuning | Moscone South, Left - S-042
    Automatic Fault Diagnostics | Moscone South, Left - S-036
    Automatic Performance Diagnostics | Moscone South, Left - S-033
    Complete Care for Oracle Using My Oracle Support | Moscone South, Left - S-031
    Complete Cloud Lifecycle Management | Moscone North, Upper Lobby - N-019
    Complete Database Lifecycle Management | Moscone South, Left - S-030
    Comprehensive Infrastructure as a Service via Oracle Enterprise Manager | Moscone South, Left - S-045
    Data Masking and Data Subsetting | Moscone South, Left - S-034
    Database Testing with Oracle Real Application Testing | Moscone South, Left - S-041
    Identity Management Monitoring with Oracle Enterprise Manager | Moscone South, Right - S-212
    Mission-Critical, SPARC-Powered Infrastructure as a Service | Moscone South, Center - S-157
    Oracle E-Business Suite, Siebel, JD Edwards, and PeopleSoft Management | Moscone West - W-084
    Oracle Enterprise Manager Cloud Control 12c Overview | Moscone South, Left - S-039
    Oracle Enterprise Manager: Complete Data Center Management | Moscone South, Left - S-040
    Oracle Exadata Management | Moscone South, Center -
    Oracle Exalogic Management | Moscone South, Center -
    Oracle Fusion Applications Management | Moscone West - W-018
    Oracle Real User Experience Insight | Moscone South, Right - S-226
    Oracle WebLogic Server Management and Java Diagnostics | Moscone South, Right - S-206
    Platform as a Service Using Oracle Enterprise Manager | Moscone North, Upper Lobby - N-020
    SOA Management | Moscone South, Right - S-225
    Self-Service Application Testing on Private and Public Clouds | Moscone West - W-110
    Oracle OpenWorld Music Festival: New this year is Oracle’s first annual Oracle OpenWorld Music Festival, featuring some of today's breakthrough musicians from around the country and the world. It's five nights of back-to-back performances in the heart of San Francisco—free to registered attendees. See the lineup Not Heading to OpenWorld—Watch it Live! Stay Connected: Twitter | Facebook | YouTube | Linkedin | Newsletter Download the Oracle Enterprise Manager Cloud Control 12c Mobile app

    Read the article

  • Unification of TPL TaskScheduler and RX IScheduler

    - by JoshReuben
    using System;
    using System.Collections.Generic;
    using System.Reactive.Concurrency;
    using System.Security;
    using System.Threading;
    using System.Threading.Tasks;
    using System.Windows.Threading;

    namespace TPLRXSchedulerIntegration
    {
        public class MyScheduler : TaskScheduler, IScheduler
        {
            private readonly Dispatcher _dispatcher;
            private readonly DispatcherScheduler _rxDispatcherScheduler;
            //private readonly TaskScheduler _tplDispatcherScheduler;
            private readonly SynchronizationContext _synchronizationContext;

            public MyScheduler(Dispatcher dispatcher)
            {
                _dispatcher = dispatcher;
                _rxDispatcherScheduler = new DispatcherScheduler(dispatcher);
                //_tplDispatcherScheduler = FromCurrentSynchronizationContext();
                _synchronizationContext = SynchronizationContext.Current;
            }

            #region RX

            public DateTimeOffset Now
            {
                get { return _rxDispatcherScheduler.Now; }
            }

            public IDisposable Schedule<TState>(TState state, DateTimeOffset dueTime, Func<IScheduler, TState, IDisposable> action)
            {
                return _rxDispatcherScheduler.Schedule(state, dueTime, action);
            }

            public IDisposable Schedule<TState>(TState state, TimeSpan dueTime, Func<IScheduler, TState, IDisposable> action)
            {
                return _rxDispatcherScheduler.Schedule(state, dueTime, action);
            }

            public IDisposable Schedule<TState>(TState state, Func<IScheduler, TState, IDisposable> action)
            {
                return _rxDispatcherScheduler.Schedule(state, action);
            }

            #endregion

            #region TPL

            /// Simply posts the tasks to be executed on the associated SynchronizationContext
            [SecurityCritical]
            protected override void QueueTask(Task task)
            {
                _dispatcher.BeginInvoke((Action)(() => TryExecuteTask(task)));
                //TryExecuteTaskInline(task, false);
                //task.Start(_tplDispatcherScheduler);
                //m_synchronizationContext.Post(s_postCallback, (object)task);
            }

            /// The task will be executed inline only if the call happens within the associated SynchronizationContext
            [SecurityCritical]
            protected override bool TryExecuteTaskInline(Task task, bool taskWasPreviouslyQueued)
            {
                if (SynchronizationContext.Current != _synchronizationContext)
                {
                    SynchronizationContext.SetSynchronizationContext(_synchronizationContext);
                }
                return TryExecuteTask(task);
            }

            // not implemented
            [SecurityCritical]
            protected override IEnumerable<Task> GetScheduledTasks()
            {
                return null;
            }

            /// Implements the MaximumConcurrencyLevel property for this scheduler class.
            /// By default it returns 1, because a <see cref="T:System.Threading.SynchronizationContext"/> based
            /// scheduler only supports execution on a single thread.
            public override Int32 MaximumConcurrencyLevel
            {
                get { return 1; }
            }

            //// preallocated SendOrPostCallback delegate
            //private static SendOrPostCallback s_postCallback = new SendOrPostCallback(PostCallback);
            //// this is where the actual task invocation occurs
            //private static void PostCallback(object obj)
            //{
            //    Task task = (Task) obj;
            //    // calling ExecuteEntry with double execute check enabled because a user implemented SynchronizationContext could be buggy
            //    task.ExecuteEntry(true);
            //}

            #endregion
        }
    }

    What Design Pattern did I use here?
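    For context, here is a small hypothetical usage sketch (UpdateUi and the WPF Application.Current.Dispatcher reference are assumptions for illustration, not part of the original post) showing the point of the unification: the same MyScheduler instance can be handed to the TPL as a TaskScheduler and to Rx as an IScheduler, so both frameworks marshal work onto the same Dispatcher.

    // Hypothetical usage sketch; assumes a WPF application, the MyScheduler class above, and
    // using directives for System, System.Reactive.Linq, System.Threading, System.Threading.Tasks
    // and System.Windows. UpdateUi() stands in for any work that must run on the UI thread.
    var scheduler = new MyScheduler(Application.Current.Dispatcher);

    // TPL side: queue work through the TaskScheduler implementation.
    Task.Factory.StartNew(
        () => UpdateUi(),
        CancellationToken.None,
        TaskCreationOptions.None,
        scheduler);

    // Rx side: observe a sequence through the IScheduler implementation.
    Observable.Interval(TimeSpan.FromSeconds(1))
        .ObserveOn(scheduler)
        .Subscribe(tick => UpdateUi());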

    Read the article

< Previous Page | 578 579 580 581 582 583 584 585 586 587 588 589  | Next Page >