Search Results

Search found 23062 results on 923 pages for 'multiple models'.

Page 461/923

  • More Stuff less Fluff

    - by brendonpage
    Originally posted on: http://geekswithblogs.net/brendonpage/archive/2013/11/08/more-stuff-less-fluff.aspx

    YAGNI – "You Aren't Going To Need It". This is an acronym commonly used in software development to remind developers to only write what they need. This acronym exists because software developers have gotten into the habit of writing everything they need to solve a problem and then everything they think they're going to possibly need in the future. Since we can't predict the future this results in a large portion of the code that we write never being used. That extra code causes unnecessary complexity, which makes it harder to understand and harder to modify when we inevitably have to write something that we didn't think of.

    I've known about YAGNI for some time now but I never really got it. The words made sense and the idea was clear but the concept never sank in. I was one of those devs who'd happily write a ton of code in the anticipation of future needs. In my mind this was an essential part of writing high quality code. I didn't realise that in doing so I was actually writing low quality code.

    If you are anything like me you are probably thinking "Lies and propaganda! High quality code needs to be future proof." I agree! But what makes code future proof? If we could see into the future the answer would be simple, code that allows for or meets all future requirements. Since we can't see the future the best we can do is write code that can easily adapt to future requirements, this means writing flexible code. Flexible code is: Fast to understand. Fast to add to. Fast to modify. To be flexible code has to be simple, this means only making it as complex as it needs to be to meet those 3 criteria. That is high quality code. YAGNI! The art is in deciding where to place the seams (abstractions) that will give you flexibility without making decisions about future functionality. Robert C Martin explains it very nicely, he says a good architecture allows you to defer decisions because if you can defer a decision then you have the flexibility to change it.

    I've recently had a YAGNI experience which brought this all into perspective. I was working on a new project which had multiple clients that connect to a server hosted in the cloud. I was tasked with adding a feature to the desktop client that would allow users to capture items that would then be saved to the cloud. My immediate thought was "Hey we have multiple clients so I should build a web service for these items, that way we can access them from other clients", so I went to work and this is what I created. I stood back and gazed upon what I'd created with a warm fuzzy feeling. It was beautiful! Then the time came for the team to use the design I'd created for another feature with a new entity. Let's just say that they didn't get the same warm fuzzy feeling that I did when they looked at the design. After much discussion they eventually got it through to me that I'd bloated the design based on an assumption of future functionality. After much more discussion we cut the design down to the following. This design gives us future flexibility with no extra work, it is as complex as it needs to be. It has been a couple of months since this incident and we still haven't needed to access either of the entities from other clients. Using the simpler design allowed us to do more stuff with less stuff!

    Read the article

  • Security in Robots and Automated Systems

    - by Roger Brinkley
    Alex Dropplinger posted a Freescale blog on Securing Robotics and Automated Systems where she asks the question, “How should we secure robotics and automated systems?”. My first thought on this was duh, make sure your robot is running Java. Java's built-in services for authentication, authorization, encryption/confidentiality, and the like can be leveraged and benefit robotic or autonomous implementations. Leveraging these built-in services and pluggable encryption models of Java makes adding security to an existing bot implementation much easier. But then I thought I should ask an expert on robotics, so I fired the question off to Paul Perrone of Perrone Robotics. Paul has built automated vehicles and other forms of embedded devices, like automated monitoring of commercial vehicles on highways. He says that most of the work that robots do now is autonomous, so it isn't a problem in the short term. But long term projects like collision avoidance technology in automobiles are going to require it. Some of the work he's doing with his Java-based MAX, a set of software building blocks containing a wide range of low level and higher level software modules that developers can use to build simple to complex robot and automation applications faster and cheaper, already provides some support for JAUS compliance and, because it is based on Java, access to standards-based security APIs. But, as Paul explained to me, "the bottom line is…it depends on the criticality level of the bot, its network connectivity, and whether or not standards compliance is required."

    Read the article

  • Protecting Consolidated Data on Engineered Systems

    - by Steve Enevold
    In this time of reduced budgets and cost cutting measures in Federal, State and Local governments, the requirement to provide services continues to grow. Many agencies are looking at consolidating their infrastructure to reduce cost and meet budget goals. Oracle's engineered systems are ideal platforms for accomplishing these goals. These systems provide unparalleled performance that is ideal for running applications and databases that traditionally run on separate dedicated environments. However, putting multiple critical applications and databases in a single architecture makes security more critical. You are putting a concentrated set of sensitive data on a single system, making it a more tempting target. The environments were previously separated by iron, so now you need to provide assurance that one group, department, or application's information is not visible to other personnel or applications resident in the Exadata system. Administration of the environments requires formal separation of duties so an administrator of one application environment cannot view or negatively impact others. Also, these systems need to be in protected environments just like other critical production servers. They should be in a data center protected by physical controls, network firewalls, intrusion detection and prevention, etc.

    Exadata also provides unique security benefits, including a reduced attack surface achieved by minimizing packages and services to only those required. In addition to reducing the possible system areas someone may attempt to infiltrate, Exadata has the following features:
    • InfiniBand, which functions as a secure private backplane
    • IPTables to perform stateful packet inspection for all nodes (Cellwall implements firewall services on each cell using IPTables)
    • Hardware accelerated encryption for data at rest on storage cells

    Oracle is uniquely positioned to provide the security necessary for implementing Exadata because security has been a core focus since the company's beginning. In addition to the security capabilities inherent in Exadata, Oracle security products are all certified to run in an Exadata environment.

    Database Vault
    Oracle Database Vault helps organizations increase the security of existing applications and address regulatory mandates that call for separation of duties, least privilege and other preventive controls to ensure data integrity and data privacy. Oracle Database Vault proactively protects application data stored in the Oracle database from being accessed by privileged database users. A unique feature of Database Vault is the ability to segregate administrative tasks, including when a command can be executed, or ensuring that the DBA can manage the health of the database and objects but may not see the data.

    Advanced Security helps organizations comply with privacy and regulatory mandates by transparently encrypting all application data or specific sensitive columns, such as credit cards, social security numbers, or personally identifiable information (PII). By encrypting data at rest and whenever it leaves the database over the network or via backups, Oracle Advanced Security provides the most cost-effective solution for comprehensive data protection.

    Label Security is a powerful and easy-to-use tool for classifying data and mediating access to data based on its classification. Designed to meet public-sector requirements for multi-level security and mandatory access control, Oracle Label Security provides a flexible framework that both government and commercial entities worldwide can use to manage access to data on a "need to know" basis in order to protect data privacy and achieve regulatory compliance.

    Data Masking reduces the threat of someone in the development organization taking data that has been copied from production to the development environment for testing, upgrades, etc., by irreversibly replacing the original sensitive data with fictitious data so that production data can be shared safely with IT developers or offshore business partners.

    Audit Vault and Database Firewall
    Oracle Audit Vault and Database Firewall serves as a critical detective and preventive control across multiple operating systems and database platforms to protect against the abuse of legitimate access to databases, which is responsible for almost all data breaches and cyber attacks.

    Consolidation, cost-savings, and performance can now be achieved without sacrificing security. The combination of built-in protection and Oracle’s industry-leading data protection solutions make Exadata an ideal platform for Federal, State, and local governments and agencies.

    Read the article

  • CodePlex Daily Summary for Wednesday, July 30, 2014

    CodePlex Daily Summary for Wednesday, July 30, 2014

    Popular Releases

    • Ghostscript.NET: Ghostscript.NET v.1.1.9: fixed problem with the PDF invisible layers (the optional content groups which will be left unmarked if processtrailerattrs is not executed); fixed text rasterization problem for some pdf's - it seems that the 'pdfopen begin' did not initialize everything required to render pdf properly, so we replaced it with the 'runpdfopen' method which corrects everything (problem reported by "xatabhk"); changed GhostscriptRasterizer methods to support Stream instead of the MemoryStream; fixed...
    • Recaptcha for .NET: Recaptcha for .NET v1.5: What's new: minor bug fixes; support for legacy .NET Framework 4.0 and ASP.NET MVC 4; support for .NET Framework 4.5.1.
    • Azure Storage Explorer: Azure Storage Explorer 6 Preview 1: Welcome to Azure Storage Explorer 6 Preview 1. This is the first release of the latest Azure Storage Explorer, code-named Phoenix. What's new? Open Source: now being run as a full open source project, full source code on CodePlex, collaboration encouraged! Updated Code Base: brand-new code base (WPF/C#/.NET 4.5), Visual Studio 2013 solution (previously VS2010), uses the Task Parallel Library (TPL) for asynchronous background operat...
    • Wsus Package Publisher: release v1.3.1407.29: Updated WPP to recognize the very latest console version. Some files were missing from the latest release of WPP, which led to a crash when trying to make a custom update. Added a workaround to avoid clipboard modification when double-clicking on a label when creating a custom update. Added the ability to publish detectoids. (This feature is still in a BETA phase. Packages relying on these detectoids to determine which computers need to be updated may apply to all computers.)
    • VG-Ripper & PG-Ripper: PG-Ripper 1.4.32: changes: NEW: added support for 'ImgMega.com', 'ImgCandy.net', 'ImgPit.com' and 'Img.yt' links; FIXED: 'Radikal.ru', 'ImageTeam.org', 'ImgSee.com' and 'Img.yt' links.
    • Asp.Net MVC-4, Entity Framework and JQGrid Demo with Todo List WebApplication: Asp.Net MVC-4, Entity Framework and JQGrid Demo: a simple Todo List web application. Overview: TodoList is a simple web application to create, store and modify Todo tasks to be maintained by the users, which comprises the following fields (Task Name, Task Description, Severity, Target Date, Task Status). The TodoList web application is created using the MVC-4 architecture, code-first Entity Framework (ORM) and jqGrid for displaying the data.
    • Waterfox: Waterfox 31.0 Portable: New features in Waterfox 31.0: added support for Unicode 7.0; experimental support for WebCL. New features in Firefox 31.0: add the search field to the new tab page; support of Prefer:Safe http header for parental control; mozilla::pkix as default certificate verifier; block malware from downloaded files; audio/video .ogg and .pdf files handled by Firefox if no application specified. Changed: removal of the CAPS infrastructure for specifying site-sp...
    • SuperSocket, an extensible socket server framework: SuperSocket 1.6.3: fixed an exception when collecting a server's status after it has been stopped; fixed a bug that could cause an exception when sending data after the connection had already dropped; fixed the log4net missing issue for a QuickStart project; fixed a warning in a QuickStart project.
    • Ynote Classic: Ynote Classic 2.8.5 Beta: several changes - multiple carets and multiple selections, improved startup time, improved syntax highlighting, search improvements, shell command, improved stability.
    • TEBookConverter: 1.2: Fixed: could not start conversion in some cases. Fixed: progress shown during conversion was truncated. Fixed: stopping conversion didn't reset the program title.
    • SharePoint 2010 & 2013 Google Maps V3 WebPart: SPGoogleMap webpart - SharePoint 2013 - July 2014: Google API key support added. The webpart does not need it, but if you have one you can use it.
    • QuieNet: Version 2.0: replaced the autoplay prevention mechanism: instead of replacing the player itself, only the function that starts the player is replaced. This only works for video players for now, and live streams are handled as before.
    • XamlImageConverter: Xaml Image Converter 3.11: Improvements: ASP.NET 64bit support for html2pdf; attribute to suppress parallel execution; Ghostscript rendering; no need for a snapshot for an imagemap, you can use the original svg image.
    • Automatic Parallel Computing: APC SDK 2.1: Features: integration with Amazon DynamoDB. Includes: Investment Club Benchmark, Investor Ranking Benchmark.
    • CatchException (Manage Exception): CatchMe Exception Version 1.0: Code Source.
    • DnnFoundation Skin for DnnCMS: DnnC DnnFoundation Skin: first release of the DnnC DnnFoundation Skin.
    • QND Operations Manager SNMP Monitoring: QND.SNMP.Library version 1.0.0.103: fixed bug #1815.
    • FlMML customized: FlMML customized c.s.30938: (Japanese release notes about LFO behaviour; text garbled in the original.)
    • CS-Script Source: Release v3.8.4: CSScript.Evaluator is migrated to Mono v3.3.0; added aggregating //css_ignore_ns from the imported scripts. cs-script.7z - CS-Script Suite (binaries, documentation, samples); cs-script.ExtensionPack.7z - CS-Script Extension Pack (additional binaries and samples); cs-scriptDocs.7z - CS-Script Documentation.
    • DotSpatial: DotSpatial 1.7: DotSpatial.Full includes all DotSpatial libraries, extensions and the DemoMap application; DotSpatial.Core includes only the DotSpatial core libraries. The entire list of changes is in the issue tracker. Main changes: improved common stability, optimized memory and speed when loading and rendering shapefiles, fixed some memory leaks in rasters and shape layers. Simplified plugin infrastructure. Now there are predefined implementations for all required components (IStatusControl, IDockManager, IHead...

    New Projects

    • Admin QuikView for Dynamics CRM 2013: Admin QuickView gives you a quick hierarchical snapshot of all active Business Units, Teams, Security Roles and Users in your Dynamics CRM Organization.
    • All bots: Windows Phone app to chat with bots from http://www.pandorabots.com/
    • Calculator Proj: calculator with power, hexadecimal and binary options and more...
    • Computational Network Toolkit (CNTK): computational networks (CNs) generalize models that can be described as a series of computational steps such as DNN, CNN, RNN, LSTM, and maximum entropy models.
    • DelegateExpressionizer: library intended to decompile delegate code at runtime and build an appropriate expression tree.
    • Hystaspes: Logistics Management System: Logistics Management System
    • Kinect Experiments: this repository contains various code samples, proof-of-concepts and utilities for Kinect for Windows (v1.8 and v2)
    • ppcs: Placement Project Version 1.0.0.0
    • Reusable MVC Partial View with JQuery Datatables: reusable jQuery DataTables integrated within a Partial View of MVC 4.0; a reusable control/view written entirely in C# and jQuery; my aim was to create a MVC...
    • Shared Code Project Template for Visual Studio 2013 Update 2: this project contains the source code for a Shared Project template for VS 2013 which extends the sharing of code to all project types besides universal apps.
    • Text word search: this project is a sample for searching words in a notepad or a word file through the WPF platform.
    • Trace And Watch: Trace And Watch is a Pintool developed to assist in finding 32-bit integer errors.
    • Vtron Automatic tester screen parameters: VTRON
    • Walli: Walli is an open source SalesForce integrated development environment (IDE) application written in .NET.
    • Windows Kirlian WiFi App: illustrates the radio field generated by WiFi
    • WOMPS - What's new On My PLEX Server: purpose of this project is to send a user a weekly or daily email of all new content added to their Plex library.

    Read the article

  • What is Cloud Computing?

    - by joelvarty
    This is a question that we discuss quite often at Edentity.  It’s one of those things, kind of like “web services”, where the terminology has been thrown around by a ton of people and means a lot of different things. Here’s my favorite diagram so far, which is a visual breakdown of the material presented here by NIST, visualized by the folks at the Cloud Security Alliance.     What I like about this diagram is that it shows several different ways that we can differentiate our definitions of cloud computing, starting from the essential characteristics, of which “Broad Network Access” and “On-Demand Self-Service” (which are often used on their own to define cloud computing) are but a couple of the things that help make something “cloud”. The most important section from my point of view is the middle one – the Service Models.  This represents the different ways that cloud computing can be exposed from the ground up.  It can be an Infrastructure, a Platform or a piece of Software that an end user interacts with. This is the future, folks. more later - joel

    Read the article

  • How do I get the correct values from glReadPixels in OpenGL 3.0?

    - by NoobScratcher
    I'm currently trying to implement mouse selection in my game editor and I ran into a little problem: when I look at the values stored in &pixel[0], &pixel[1], &pixel[2], &pixel[3] I get r: 0 g: 0 b: 0 a: 0. As you can see, I'm not able to get the correct values from glReadPixels(). My 3D models are colored red using glColor3f(255,0,0). I was hoping someone could help me figure this out. Here is the source code:

        case WM_LBUTTONDOWN:
        {
            GetCursorPos(&pos);
            ScreenToClient(hwnd, &pos);
            GLenum err = glGetError();
            while (glGetError() != GL_NO_ERROR) { cerr << err << endl; }
            glReadPixels(pos.x, SCREEN_HEIGHT - 1 - pos.y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, &pixel[0]);
            cerr << "r: " << (int)pixel[0] << endl;
            cerr << "g: " << (int)pixel[1] << endl;
            cerr << "b: " << (int)pixel[2] << endl;
            cerr << "a: " << (int)pixel[3] << endl;
            cout << pos.x << endl;
            cout << pos.y << endl;
        }
        break;

    I use: Win32 API, OpenGL 3.0, C++.
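    Not part of the original question, but for comparison, a minimal sketch of how the read-back is often done for color picking (reusing the question's pos, hwnd, SCREEN_HEIGHT and pixel names): request as many components as you print, read from the buffer that was actually rendered to before swapping, and check glGetError() after the call rather than before it. Note also that glColor3f() takes floats in the range 0 to 1, so glColor3f(1.0f, 0.0f, 0.0f) is the idiomatic way to get pure red (255 is simply clamped to 1.0).

        // Sketch only: assumes the scene was just rendered and SwapBuffers()
        // has NOT been called yet, and that pixel is an unsigned char[4].
        GetCursorPos(&pos);
        ScreenToClient(hwnd, &pos);

        glReadBuffer(GL_BACK);                    // read the buffer that was just drawn
        glPixelStorei(GL_PACK_ALIGNMENT, 1);      // avoid row-alignment surprises
        glReadPixels(pos.x, SCREEN_HEIGHT - 1 - pos.y, 1, 1,
                     GL_RGBA, GL_UNSIGNED_BYTE, &pixel[0]);   // ask for 4 components if you print 4

        GLenum err = glGetError();                // check AFTER the call that can fail
        if (err != GL_NO_ERROR) { cerr << "glReadPixels error: " << err << endl; }

        cerr << "r: " << (int)pixel[0] << " g: " << (int)pixel[1]
             << " b: " << (int)pixel[2] << " a: " << (int)pixel[3] << endl;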

    Read the article

  • Identifying a stolen computer by comparing identification info from Launchpad bugs

    - by Kangarooo
    I sold my old laptop to neighbours and it was stolen from them. Well, I think I have found the thief, so I want to check his computer's ID and compare it to the IDs in my old Launchpad bugs. How can I find, from my bugs in Launchpad:
    • Motherboard
    • HDD
    • Something else that can help identify it
    • Maybe how to recover or find some overwritten files (because there is Windows on it now)
    I found that one of my Launchpad bugs has an LSPCI log autogenerated from bug 682846 https://launchpadlibrarian.net/70611231/Lspci.txt but I don't see any ID that can be used to identify specifically my computer. This could be used to identify many machines of the same model. Or did I miss something in there? And what commands should I use to get all the identification info on that computer in one go, fast? Just lspci? How do I get the same lspci output as in that Launchpad link? Testing lspci on my computer now, I don't get that much info. Also, I'm now doing a search on my external HDD where I have many backups, and maybe I have a result from lspci there. So what keywords would help in searching for the short and full lspci reports I've done? I might have done sudo lshw somefilename

    Read the article

  • What is UVIndex and how do I use it on OpenGL?

    - by Delta
    I am a noob in OpenGL ES 2.0 (for WebGL) and I'm trying to draw a simple model I've made with a 3D tool and exported to .fbx format. I've been able to draw some models that only have: a vertex buffer, an index buffer for the vertices, a normal buffer and a texture coordinate buffer, but this model now has a "UVIndex" and I'm not sure where I'm supposed to put this UVIndex. My code looks like this:

        GL.bindBuffer(GL.ARRAY_BUFFER, this.Model.House.VertexBuffer);
        GL.vertexAttribPointer(this.Shader.TextureAndLighting.Attribute["vPosition"], 3, GL.FLOAT, false, 0, 0);
        GL.bindBuffer(GL.ARRAY_BUFFER, this.Model.House.NormalBuffer);
        GL.vertexAttribPointer(this.Shader.TextureAndLighting.Attribute["vNormal"], 3, GL.FLOAT, false, 0, 0);
        GL.bindBuffer(GL.ARRAY_BUFFER, this.Model.House.TexCoordBuffer);
        GL.vertexAttribPointer(this.Shader.TextureAndLighting.Attribute["TexCoord"], 2, GL.FLOAT, false, 0, 0);
        GL.bindBuffer(GL.ELEMENT_ARRAY_BUFFER, this.Model.House.IndexBuffer);
        GL.bindTexture(GL.TEXTURE_2D, this.Texture.HTex1);
        GL.activeTexture(GL.TEXTURE0);
        GL.drawElements(GL.TRIANGLES, this.Model.House.IndexBuffer.Length, GL.UNSIGNED_SHORT, 0);

    But my model renders totally incorrectly, and I think it has to do with the fact that I am ignoring this "UVIndex" in the .fbx file; since I've never drawn any model that uses this UVIndex I really have no clue what to do with it. This is the json file containing the model's data: http://pastebin.com/raw.php?i=G294TVmz
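    As an aside (not from the original question): FBX exporters commonly store texture coordinates indirectly, as a UV array plus a separate UVIndex array per polygon-vertex, while drawElements() can only use one index buffer for all attributes. A common workaround is to "de-index" the model on load, emitting flat per-vertex arrays that share a single new index. The sketch below shows the idea in C++ for clarity; the layout assumption (posIndex and uvIndex being parallel, one entry per polygon-vertex) should be checked against the exported JSON.

        // Hedged sketch: expand separately indexed attributes into flat arrays
        // that can be drawn with a single element/index buffer.
        #include <cstddef>
        #include <cstdint>
        #include <vector>

        struct Expanded {
            std::vector<float>    positions;  // 3 floats per emitted vertex
            std::vector<float>    uvs;        // 2 floats per emitted vertex
            std::vector<uint16_t> indices;    // trivial 0..N-1 index buffer
        };

        Expanded deindex(const std::vector<float>& positions,    // packed x,y,z
                         const std::vector<float>& uvs,          // packed u,v
                         const std::vector<uint16_t>& posIndex,  // per polygon-vertex
                         const std::vector<uint16_t>& uvIndex) { // per polygon-vertex
            Expanded out;
            for (std::size_t i = 0; i < posIndex.size(); ++i) {
                const std::size_t p = posIndex[i] * 3, t = uvIndex[i] * 2;
                out.positions.insert(out.positions.end(),
                                     {positions[p], positions[p + 1], positions[p + 2]});
                out.uvs.insert(out.uvs.end(), {uvs[t], uvs[t + 1]});
                out.indices.push_back(static_cast<uint16_t>(i));  // new shared index
            }
            return out;
        }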

    Read the article

  • The effects of Agile Programming can alter the five desirable properties of modeling tools and techniques

    The effects of Agile Programming can alter the five desirable properties of modeling tools and techniques as documented by Pfleeger. The agile methodology does promote human understanding and communication through the use of short iterative software development life cycles, which force stakeholders to review the project and adjust it for any requirement changes.  Due to the consistent evaluation of a project and its requirements, processes are continually being refined, upgraded, and compared against other alternatives to ensure the best design is delivered to the client. Due to the short repetitive development cycles, increased time is devoted to process management because requirements and designs could be constantly changing. This requires additional forecasting, monitoring, and planning for each iteration. Because things can change so rapidly, automated guidance in performing processes must be updated for each iteration because the environment and the available reusable processes could change. In addition, the original guidance and suggestions for the project also need to be updated to account for these changes as well.   In essence the automation of process execution is supported by the agile methodology because during every iteration all processes must be tested and evaluated to ensure process integrity and compliance with the customer’s requirements. I do not think the agile approach diminishes modeling; in fact I think it increases modeling, because before the start of every development cycle the models must be checked for accuracy against the changed requirements. So in essence the time saved by spending less on initial model design is regained as the project completes each iteration.

    Read the article

  • MVC + 3 tier; where ViewModels come into play?

    - by mikhairu
    I'm designing a 3-tiered application using ASP.NET MVC 4. I used the following resources as a reference.
    • CodeProject: MVC + N-tier + Entity Framework
    • Separating data access in ASP.NET MVC

    I have the following design so far.
    • Presentation Layer (PL) (main MVC project, where the M of MVC was moved to the Data Access Layer): MyProjectName.Main
        Views/ Controllers/ ...
    • Business Logic Layer (BLL): MyProjectName.BLL
        ViewModels/ ProjectServices/ ...
    • Data Access Layer (DAL): MyProjectName.DAL
        Models/ Repositories.EF/ Repositories.Dapper/ ...

    Now, PL references BLL and BLL references DAL. This way a lower layer does not depend on the one above it. In this design PL invokes a service of the BLL. PL can pass a View Model to BLL and BLL can pass a View Model back to PL. Also, BLL invokes the DAL layer and the DAL layer can return a Model back to BLL. BLL can in turn build a View Model and return it to PL. Up to now this pattern has been working for me. However, I've run into a problem where some of my ViewModels require joins on several entities. In the plain MVC approach, in the controller I used a LINQ query to do joins and then select new MyViewModel(){ ... }. But now, in the DAL I do not have access to where ViewModels are defined (in the BLL). This means I cannot do joins in the DAL and return them to the BLL. It seems I have to do separate queries in the DAL (instead of joins in one query) and the BLL would then use the result of these to build a ViewModel. This is very inconvenient, but I don't think I should be exposing the DAL to ViewModels. Any ideas how I can solve this dilemma? Thanks.

    Read the article

  • OUM is Flexible and Scalable

    - by user535886
    Flexible and Scalable
    Traditionally, projects have been focused on satisfying the contents of a requirements document or rigorously conforming to an existing set of work products. Often, especially where iterative and incremental techniques have not been employed, these requirements may be inaccurate, the previous deliverables may be flawed, or the business needs may have changed since the start of the project. Fitness for business purpose, derived from the Dynamic Systems Development Method (DSDM) framework, refers to the focus of delivering necessary functionality within a required timebox. The solution can be more rigorously engineered later, if such an approach is acceptable. Our collective experience shows that applying fit-for-purpose criteria, rather than tight adherence to requirements specifications, results in an information system that more closely meets the needs of the business. In OUM, this principle is extended to refer to the execution of the method processes themselves. Project managers and practitioners are encouraged to scale OUM to be fit-for-purpose for a given situation. It is rarely appropriate to execute every activity within OUM. OUM provides guidance for determining the core set of activities to be executed, the level of detail targeted in those activities and their associated tasks, and the frequency and type of end user deliverables. The project workplan should be developed from this core. The plan should then be scaled up, rather than tailored down, to the level of discipline appropriate to the identified risks and requirements. Even at the task level, models and work products should be completed only to the level of detail required for them to be fit-for-purpose within the current iteration or, at the project level, to suit the business needs of the enterprise and to meet the contractual obligations that govern the project. OUM provides well-defined templates for many of its tasks. Use of these templates is optional as determined by the context of the project. Work products can easily be a model in a repository, a prototype, a checklist, a set of application code, or, in situations where a high degree of agility is warranted, simply the tacit knowledge contained in the brain of an analyst or practitioner. For further reading on agility, see Balancing Agility and Discipline: A Guide for the Perplexed.

    Read the article

  • Xna GS 4 Animation Sample bone transforms not copying correctly

    - by annonymously
    I have a person model that is animated and a suitcase model that is not. The person can pick up the suitcase and it should move to the location of the hand bone of the person model. Unfortunately the suitcase doesn't follow the animation correctly. It moves with the hand's animation but its position is under the ground and way too far to the right. I haven't scaled any of the models myself. Thank you. The source code (forgive the rough prototype code):

        Matrix[] tran = new Matrix[man.model.Bones.Count]; // The absolute transforms from the animation player
        man.model.CopyAbsoluteBoneTransformsTo(tran);

        Vector3 suitcasePos, suitcaseScale, tempSuitcasePos = new Vector3(); // Placeholders for the Matrix Decompose
        Quaternion suitcaseRot = new Quaternion();

        // The transformation of the right hand bone is decomposed
        tran[man.model.Bones["HPF_RightHand"].Index].Decompose(out suitcaseScale, out suitcaseRot, out tempSuitcasePos);

        suitcasePos = new Vector3();
        suitcasePos.X = tempSuitcasePos.Z; // The axes are inverted for some reason
        suitcasePos.Y = -tempSuitcasePos.Y;
        suitcasePos.Z = -tempSuitcasePos.X;

        suitcase.Position = man.Position + suitcasePos; // The actual Suitcase properties
        suitcase.Rotation = man.Rotation + new Vector3(suitcaseRot.X, suitcaseRot.Y, suitcaseRot.Z);

    I am also copying the bone transforms from the animation player in the Person class like so:

        // The transformations from the AnimationPlayer
        Matrix[] skinTrans = new Matrix[model.Bones.Count];
        skinTrans = player.GetBoneTransforms();

        // copy each transformation to its corresponding bone
        for (int i = 0; i < skinTrans.Length; i++)
        {
            model.Bones[i].Transform = skinTrans[i];
        }

    Read the article

  • Implementing Service Level Agreements in Enterprise Manager 12c for Oracle Packaged Applications

    - by Anand Akela
    Contributed by Eunjoo Lee, Product Manager, Oracle Enterprise Manager. Service Level Management, or SLM, is a key tool in the proactive management of any Oracle Packaged Application (e.g., E-Business Suite, Siebel, PeopleSoft, JD Edwards E1, Fusion Apps, etc.). The benefits of SLM are that administrators can utilize representative Application transactions, which are constantly and automatically running behind the scenes, to verify that all of the key application and technology components of an Application are available and performing to expectations. A single transaction can verify the availability and performance of the underlying Application Tech Stack in a much more efficient manner than by monitoring the same underlying targets individually. In this article, we’ll be demonstrating SLM using Siebel Applications, but the same tools and processes apply to any of the Packaged Applications mentioned above. In this demonstration, we will log into the Siebel Application, navigate to the Contacts View, update a contact phone record, and then log out. This transaction exposes availability and performance metrics of multiple Siebel Servers, multiple Components and Component Groups, and the Siebel Database - in a single unified manner. We can then monitor and manage these transactions like any other target in EM 12c, including placing pro-active alerts on them if the transaction is either unavailable or is not performing to required levels.

    The first step in the SLM process is recording the Siebel transaction. The following screenwatch demonstrates how to record a Siebel transaction using an EM tool called “OpenScript”. A completed recording is called a “Synthetic Transaction”.

    The second step in the SLM process is uploading the Synthetic Transaction into EM 12c, and creating Generic Service Tests. We can create a Generic Service Test to execute our synthetic transactions at regular intervals to evaluate the performance of various business flows. As these transactions are running periodically, it is possible to monitor the performance of the Siebel Application by evaluating the performance of the synthetic transactions. The process of creating a Generic Service Test is detailed in the next screenwatch. EM 12c provides a guided workflow for all of the key creation steps, including configuring the Service Test, uploading of the Synthetic Test, determining the frequency of the Service Test, establishing beacons, and selecting performance and usage metrics, just to name a few.

    The third and final step in the SLM process is the creation of Service Level Agreements (SLAs). Service Level Agreements allow Administrators to utilize the previously created Service Tests to specify expected service levels for Application availability, performance, and usage. SLAs can be created for different time periods and for different Service Tests. This last screenwatch demonstrates the process of creating an SLA, as well as highlights the Dashboards and Reports that Administrators can use to monitor Service Test results.

    Hopefully, this article provides you with a good starting point for creating Service Level Agreements for your E-Business Suite, Siebel, PeopleSoft, JD Edwards E1, or Fusion Applications. Enterprise Manager Cloud Control 12c, with the Application Management Suites, represents a quick and easy way to implement Service Level Management capabilities at customer sites.

    Read the article

  • Fun with Python

    - by dotneteer
    I am taking a class on Coursera recently. My formal education is in physics. Although I have been working as a developer for over 18 years and have learnt a lot of programming on the job, I still would like to gain some systematic knowledge in computer science. Coursera courses taught by Stanford professors provided me a wonderful chance. The three languages recommended for assignments are Java, C and Python. I am fluent in Java and have done some projects using C++/MFC/ATL in the past, but I would like to try something different this time. I first started with pure C. Soon I discovered that I have to write a lot of code outside the problem that I try to solve because of the very limited C standard library. For example, to read a list of values from a file, I have to read character by character until I hit a delimiter. If I need a list that can grow, I have to create a data structure myself, something that I have taken for granted in .Net or Java. Out of frustration, I switched to Python. I was pleasantly surprised to find that Python is very easy to learn. The tutorial on the official Python site has exactly the right pace for me, someone with experience in another programming language. After a couple of hours on the tutorial and a few more minutes of toying with IDLE, I was in business. I like the “batteries included” philosophy that gives me everything that I need out of the box. For someone from a C# or Java background, curly braces are replaced by a colon (:) and indentation. Although I tend to miss the colon from time to time, I found that the idea of indentation-based blocks is actually very nice once I get used to it. I also like the features of multiple assignment and multiple return values. When I need to return a by-product, I just add it to the list of returns.

    When would I use Python? I would use Python if I need to compute anything quickly. The language is very easy to use. Python has a good collection of libraries (packages). The REPL of the interpreter allows me to test ideas quickly before committing them to a script. Lots of computer science work has been ported from Lisp to Python. Some universities are even teaching SICP in Python.

    When wouldn’t I use Python? I mostly would not use it in a managed environment, such as IronPython or Jython. Both .Net and Java already have a rich library, so one has to make a choice which library to use. If we use the managed runtime library, the code will be tied to the particular runtime and thus not be portable. If we use the Python library, then we will face the relatively long start-up time. For this reason, I would not recommend using IronPython for WP7 development. The only situation where I see merit with managed Python is in a server application where I can preload Python so that the start-up time is not a concern. Using Python as a managed glue language is overkill most of the time. A managed Scheme could be a better glue language as it is small enough to start up very fast.

    Read the article

  • Trying to install Canon LBP7750Cdn driver on Ubuntu 12.04

    - by Gideon
    I'm new to Ubuntu/Linux and had significant difficulties while attempting to configure my printer to work. The automatic driver pairing wizard which Ubuntu uses to identify and install the appropriate drivers did not find my printer's driver. I managed to get it to print when I manually selected the generic configuration and checked the PCL6 option. However, the printer driver wizard does provide a list of Canon printers and actually does list my printer as LBP7750C (minus the "dn" at the end; I'm assuming that's because duplex ability and networking are not present on all the models - I'm not sure if this could be the source of the problem), but on selecting this option and trying to print I receive this message: Idle - /usr/lib/cups/filter/foomatic-rip failed. I searched for this problem, which other users might have encountered, but while there were plenty of such cases, they all had different resolutions and were all related to HP printers. Canon actually does provide a driver for my printer, but it comes with no installation instructions unless you consider yourself an experienced CUPS guru. Seriously. If anyone can help me solve this foomatic-rip failed problem I'd be really grateful - and I'm sure many other folks would be too. [BTW, can't Canonical fix this type of thing for the next Ubuntu release? It seems like a small problem but it causes many problems and countless hours of lost production time.] Thanks in advance.

    Read the article

  • Efficiently separating Read/Compute/Write steps for concurrent processing of entities in Entity/Component systems

    - by TravisG
    Setup
    I have an entity-component architecture where Entities can have a set of attributes (which are pure data with no behavior) and there exist systems that run the entity logic which act on that data. Essentially, in somewhat pseudo-code:

        Entity {
            id;
            map<id_type, Attribute> attributes;
        }

        System {
            update();
            vector<Entity> entities;
        }

    A system that just moves along all entities at a constant rate might be:

        MovementSystem extends System {
            update() {
                for each entity in entities
                    position = entity.attributes["position"];
                    position += vec3(1,1,1);
            }
        }

    Essentially, I'm trying to parallelise update() as efficiently as possible. This can be done by running entire systems in parallel, or by giving each update() of one system a couple of components so different threads can execute the update of the same system, but for a different subset of entities registered with that system.

    Problem
    In reality, these systems sometimes require that entities interact (read/write data from/to) each other, sometimes within the same system (e.g. an AI system that reads state from other entities surrounding the current processed entity), but sometimes between different systems that depend on each other (e.g. a movement system that requires data from a system that processes user input). Now, when trying to parallelize the update phases of entity/component systems, the phases in which data (components/attributes) from Entities are read and used to compute something, and the phase where the modified data is written back to entities, need to be separated in order to avoid data races. Otherwise the only way (not taking into account just "critical section"ing everything) to avoid them is to serialize parts of the update process that depend on other parts. This seems ugly.

    To me it would seem more elegant to be able to (ideally) have all processing running in parallel, where a system may read data from all entities as it wishes, but doesn't write modifications to that data back until some later point. The fact that this is even possible is based on the assumption that modification write-backs are usually very small in complexity, and don't require much performance, whereas computations are very expensive (relatively). So the overhead added by a delayed-write phase might be evened out by more efficient updating of entities (by having threads work a greater percentage of the time instead of waiting).

    A concrete example of this might be a system that updates physics. The system needs to both read and write a lot of data to and from entities. Optimally, there would be a system in place where all available threads update a subset of all entities registered with the physics system. In the case of the physics system this isn't trivially possible because of race conditions. So without a workaround, we would have to find other systems to run in parallel (which don't modify the same data as the physics system), otherwise the remaining threads are waiting and wasting time. However, that has disadvantages:
    • Practically, the L3 cache is pretty much always better utilized when updating a large system with multiple threads, as opposed to multiple systems at once, which all act on different sets of data.
    • Finding and assembling other systems to run in parallel can be extremely time consuming to design well enough to optimize performance.
    • Sometimes, it might even not be possible at all because a system just depends on data that is touched by all other systems.

    Solution?
    In my thinking, a possible solution would be a system where reading/updating and writing of data is separated, so that in one expensive phase, systems only read data and compute what they need to compute, and then in a separate, performance-wise cheap, write phase, attributes of entities that needed to be modified are finally written back to the entities.

    The Question
    How might such a system be implemented to achieve optimal performance, as well as making programmer life easier? What are the implementation details of such a system and what might have to be changed in the existing EC-architecture to accommodate this solution?
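    Not from the post, but as a concrete illustration of the delayed-write idea described above, here is a minimal C++ sketch (names like PositionStore and queueWrite are invented for the example): systems read a stable snapshot during update(), push their modifications into a pending list, and a cheap single-threaded commit applies them once all systems have finished. A real implementation would likely use per-thread write buffers or double-buffered component arrays instead of a single mutex, but the phase separation is the same.

        #include <cstddef>
        #include <mutex>
        #include <thread>
        #include <utility>
        #include <vector>

        struct Vec3 { float x{}, y{}, z{}; };

        class PositionStore {
        public:
            explicit PositionStore(std::size_t count) : current_(count) {}

            // Read phase: any thread may read the stable snapshot.
            const Vec3& read(std::size_t entity) const { return current_[entity]; }

            // Compute phase: writes are only queued, never applied immediately.
            void queueWrite(std::size_t entity, const Vec3& value) {
                std::lock_guard<std::mutex> lock(mutex_);  // cheap next to the compute work
                pending_.emplace_back(entity, value);
            }

            // Write phase: applied after all systems have finished their update().
            void commit() {
                for (const auto& [entity, value] : pending_) current_[entity] = value;
                pending_.clear();
            }

        private:
            std::vector<Vec3> current_;
            std::vector<std::pair<std::size_t, Vec3>> pending_;
            std::mutex mutex_;
        };

        // Example "movement system" update over a slice of entities.
        void movementUpdate(PositionStore& positions, std::size_t begin, std::size_t end) {
            for (std::size_t e = begin; e < end; ++e) {
                Vec3 p = positions.read(e);    // read the snapshot
                p.x += 1; p.y += 1; p.z += 1;  // expensive computation would go here
                positions.queueWrite(e, p);    // defer the write
            }
        }

        int main() {
            PositionStore positions(1000);
            std::thread a(movementUpdate, std::ref(positions), 0, 500);
            std::thread b(movementUpdate, std::ref(positions), 500, 1000);
            a.join(); b.join();
            positions.commit();                // cheap, single-threaded write phase
        }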

    Read the article

  • Upgrading Oracle Siebel CRM Application Without Downtime

    - by Doug Reid
    Oracle’s Siebel Customer Relationship Management (CRM) software helps organizations differentiate their businesses to achieve top- and bottom-line growth. Siebel CRM delivers comprehensive solutions that are tailored to more than 20 different industries. As Siebel CRM implementations have evolved into mission-critical, operational business processes that must operate 24/7, companies are finding it increasingly difficult to afford the downtime typically required to perform an in-place upgrade. Without these upgrades, businesses lose out on critical new features and functionality. With Oracle GoldenGate, customers don’t have to choose between upgrades and outages. Oracle GoldenGate allows Siebel CRM customers to perform upgrades with zero downtime. Now Siebel customers can always take advantage of the latest innovations in customer relationship management without having to worry about potential lost revenue due to downtime. Oracle GoldenGate provides three different deployment models for Siebel CRM zero downtime upgrades that are designed to meet differing customer requirements. These range from a basic unidirectional model, which is designed to work out-of-the-box, to the most sophisticated active-active model for phased migrations. If you have mission-critical Siebel CRM implementations I recommend that you watch the screencast below to learn how you can begin taking advantage of all the latest Siebel enhancements without having any downtime. This screencast is also available on Oracle Media Network and Oracle's YouTube channel. For even more details I recommend reading the whitepaper Upgrading Siebel CRM with Zero Downtime.

    Read the article

  • WebLogic Server Performance and Tuning: Part II - Thread Management

    - by Gokhan Gungor
    WebLogic Server, like any other Java application server, provides resources that your applications use to provide services. Unfortunately none of these resources are unlimited and they must be managed carefully. One of these resources is threads, which are pooled to provide better throughput and performance along with fast response times and to avoid deadlocks. Threads are the execution points through which WebLogic Server delivers its power and executes work. Managing threads is very important because it may affect the overall performance of the entire system. In releases prior to WebLogic Server 9.0 we had multiple execute queues and user-defined thread pools. There were different queues for different types of work, each with a fixed number of execute threads. Tuning these thread pools and finding the proper number of threads was time consuming and required many trials. WebLogic Server 9.0 and the following releases use a single thread pool and a single priority-based execute queue. All types of work are executed in this single thread pool. Its size (thread count) is automatically decreased or increased (self-tuned). The new “self-tuning” system simplifies getting the proper number of threads and utilizing them.

    Work managers allow your applications to run concurrently in multiple threads. A work manager is a mechanism that allows you to manage and utilize threads and create rules/guidelines to follow when assigning requests to threads. We can set a scheduling guideline or prioritize requests with a work manager and then associate this work manager with one or more applications. At run-time, WebLogic Server uses these guidelines to assign pending work/requests to execution threads. The position of a request in the execute queue is determined by its priority. There is a default work manager that is provided. The default work manager should be sufficient for most applications. However there can be cases where you want to change this default configuration. Your application(s) may be providing services that need a mixture of fast response times and long-running processes like batch updates. However, wrong configuration of work managers can lead to a performance penalty when you were expecting an improvement.

    We can define/configure work managers at:
    • Domain level: config.xml
    • Application level: weblogic-application.xml
    • Component level: weblogic-ejb-jar.xml or weblogic.xml (for a specific web application use weblogic.xml)

    We can use the following predefined rules/constraints to manage the work:
    • Fair Share Request Class: Specifies the average thread-use time required to process requests. The default is 50.
    • Response Time Request Class: Specifies a response time goal in milliseconds.
    • Context Request Class: Assigns request classes to requests based on context information.
    • Min Threads Constraint: Guarantees the number of threads the server will allocate to requests.
    • Max Threads Constraint: Limits the number of concurrent threads executing requests.
    • Capacity Constraint: Causes the server to reject requests only when it has reached its capacity.

    Let’s create a work manager for our application for long-running work. Go to the WebLogic console and select Environment | Work Managers from the domain structure tree. Click the New button, select Work Manager and click Next. Enter the name for the work manager and click Next. Then select the managed server instance(s) or clusters from the available targets (the ones that your long-running application is deployed to) and finish.
    Click on MyWorkManager, open the Configuration tab, check Ignore Stuck Threads and save. This will prevent WebLogic from treating long-running processes (those taking more than a specified time) as stuck, and allow the process to finish.

    Read the article

  • Dirt Cheap Bi-Directional Antenna Wirelessly Extends Your LAN

    - by Jason Fitzpatrick
    If you’re looking for an effective way to link remote LANs without the hassle of laying cable, this DIY bi-directional antenna is a quick (and cheap) method for bringing internet access to outbuildings and other locations. Tinker Danilo Larizza needed to share internet access between apartments that are relatively close together but not hardwired–ruling out simply sharing the access via existing LAN infrastructure. His solution combines a simple scrap wire antenna array mounted inside a plastic food bin (seen here with the cover removed to show the antenna) and some coaxial cable to link the antenna to two routers. Our favorite part about his build is that he constructed the pair to establish if the antenna setup would even work in his location and intended to buy commercial antennas if it did; his Tupperware models worked so well, however, they’re now the permanent solution. Hit up the link below for more information about the project. 2.4 Ghz Directive Biquad Antenna [via Hack A Day]

    Read the article

  • Open a File Browser From Your Current Command Prompt/Terminal Directory

    - by The Geek
    Ever been doing some work at the command line when you realized… it would be a lot easier if I could just use the mouse for this task? One command later, you’ll have a window open to the same place that you’re at. This same tip works in more than one operating system, so we’ll detail how to do it in every way we know how.

    Open a File Browser in Windows
    We’ve actually covered this before when we told you how to open an Explorer window from the command prompt’s current directory, but we’ll briefly review: Just type the following command into your command prompt: explorer . Note: You could actually just type “start .” instead. And you’ll then see a file browsing window set to the same directory you were previously at. And yes, this screenshot is from Vista, but it works the same in every version of Windows. If that wasn’t good enough, you should really read how you can navigate in the File Open/Save dialogs with just the keyboard—now that’s a Stupid Geek Trick!

    Open a File Browser in Linux
    For this exercise, we’re going to assume that you’re using Gnome under a Linux flavor like Ubuntu, because that’s the most common. From your terminal window, just type in the following command: nautilus . And the next thing you know, you’ll have a file browser window open at the current location. You’ll see some type of error message at the prompt, but you can pretty much ignore that. You can also use “gnome-open .” if you want.

    Open Finder in Mac OS X
    All the Mac computers in this office are running Linux, so we haven’t had a chance to verify, but you should be able to use the following command on OS X to open Finder in the current terminal location: open .

    Open Dolphin on Linux KDE4
    dolphin .

    Got any extra tips to help out your fellow readers? How do you do the same thing in KDE3? What about OS X? Leave your savvy advice in the comments, and maybe we’ll update the article. Or not. Either way, it’ll help somebody!

    Read the article

  • Laptop Charger Not Recognised Properly on Samsung NP900X3F

    - by user193732
    Firstly thanks for your time. Secondly, I'm having an issue with my power charger on my Samsung Series 9 NP900X3F. When I boot into Ubuntu with the charger plugged in it recognises it as charging. When I unplug the charger after this it still says it is charging. If I suspend in Ubuntu then plug/unplug during this suspended state it recognises it, but not during normal running. If I knew a little more I'm sure I could grab logs and find out what the difference between wake-on-suspend and normal running is, but alas I need help! I also am having issues with my keyboard backlight via the fn keys, but that I care about far less. Thank you very much.

        Linux mikey-900X3F 3.12.0-031200rc1-generic #201309161735 SMP Mon Sep 16 21:38:21 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

    (I upgraded my kernel version to remove heinous horizontal artefacts I was getting.) Happy to list more info about my system, I'm a bit of a noob. I did try searching, however I can't find any questions at all about my system or related models with the same issue.

    Read the article

  • New CAM Editor v2.3 with Open-XDX for Open Data APIs

    - by drrwebber
    Creating actual working XML exchanges, loading data from data stores, generating XML, testing, integrating with web services and then deployment delivery takes a lot of coding and effort. Then there is writing the documentation, models and schema, doing naming and design rule (NDR) checks, and packaging all this together (such as for NIEM IEPD use). What if there was a tool that helped you do all that easily and simply? Welcome to the new Open-XDX and the CAM Editor! Open-XDX uses code-free techniques in combination with CAM templates and visual drag and drop to rapidly design your XML exchange. Then Open-XDX will automatically generate all the SQL for you, read the database data, generate and populate the valid output XML, and filter with parameters. To complete the processing solution, Open-XDX works with web services and JDBC database connections as a callable module that can be deployed plug and play with your middleware stack, all with just a few lines of Java code (about 5 actually). You can build either Query/Response or Publish/Subscribe services from existing data stores to XML literally in minutes. To see a demonstration of using Open-XDX with a MySQL data store and integration with an Oracle WebLogic server, please see this short video (a few minutes) - http://youtube.com/user/TheCameditor There is also a Quick Guide available that provides more technical insights along with a sample pack download of templates and SQL that you can try for yourself. Head on over to our project resource site to learn more, download the latest CAM Editor and see links to all the resources and materials. We look forward to seeing how the developer community is able to jump-start information sharing initiatives using this new innovative approach.

    Read the article

  • Cannot log in to Dashboard / Unable to find the server at mykeystoneurl

    - by neo0
    I installed Dashboard following this guide: http://wiki.openstack.org/OpenStackDashboard Everything went fine, but when I run the server I cannot log in with the username and password from the DATABASES config in local_settings.py. Here's my config:

        DATABASES = {
            'default': {
                'ENGINE': 'django.db.backends.mysql',
                'NAME': 'dashboarddb',
                'USER': 'nova',
                'PASSWORD': 'nova',
                'HOST': 'localhost',
                'default-character-set': 'utf8'
            },
        }

    When I run the Dashboard server and enter the username + password, it returns this error in the browser: Unable to find the server at mykeystoneurl (HTTP 400). And on the command line:

        DEBUG:openstack_dashboard.settings:Running in debug mode without debug_toolbar.
        DEBUG:openstack_dashboard.settings:Running in debug mode without debug_toolbar.
        Validating models...
        0 errors found
        Django version 1.3.1, using settings 'openstack_dashboard.settings'
        Development server is running at http://0.0.0.0:8888/
        Quit the server with CONTROL-C.
        Request returned failure status.
        Traceback (most recent call last):
          File "/home/us/horizon/.venv/src/python-keystoneclient/keystoneclient/client.py", line 121, in request
            body = json.loads(body)
          File "/usr/lib/python2.7/json/__init__.py", line 326, in loads
            return _default_decoder.decode(s)
          File "/usr/lib/python2.7/json/decoder.py", line 366, in decode
            obj, end = self.raw_decode(s, idx=_w(s, 0).end())
          File "/usr/lib/python2.7/json/decoder.py", line 384, in raw_decode
            raise ValueError("No JSON object could be decoded")
        ValueError: No JSON object could be decoded
        [06/Mar/2012 15:20:03] "POST /auth/login/ HTTP/1.1" 200 3735

    I also tried logging in as "admin" with the password "password" or "secrete" but it didn't work. What's wrong? Thank you!

    Read the article

  • Tellago Studios presenting SOA Governance on the Microsoft platform using SO-Aware at Microsoft TechReady

    - by Vishal
    Hi there, Microsoft is hosting the first edition of their annual TechReady conference. TechReady is an internal Microsoft conference, but Microsoft invited Tellago Studios to present a session about how to enable Agile SOA Governance on the Microsoft platform using our recently released product: SO-Aware. As part of our session, we will take a look at the current challenges that organizations face when enabling SOA governance capabilities on the Microsoft platform and how organizations can benefit from more agile, lightweight and modern SOA governance models. The session will provide a practical view of the role of Tellago Studios' SO-Aware as an essential technology to enable native SOA governance on the Microsoft platform. We will explore in detail important capabilities of SO-Aware such as:
    • Centralized service repository
    • Centralized configuration management
    • Service testing
    • Monitoring
    • Transparent integration with technologies such as Visual Studio, BizTalk Server, Windows Server & Azure AppFabric, among many others

    But the fun doesn't stop there..... As part of this session, we will showcase for the first time our upcoming SO-Aware Test Workbench product, which enables load and functional web service testing capabilities on the Microsoft technology stack. SO-Aware Test Workbench provides developers with a visually rich environment to model and control the execution of load and functional tests in a SOA infrastructure. This tool includes the first native WCF load testing engine, allowing developers to transparently load test applications built on Microsoft's service oriented technologies such as WCF, BizTalk Server or the Windows Server or Azure AppFabric.

    Read the article

  • Unable To Get Sound Working to External Speaker on HP TouchSmart 320 on 11.04 or 11.10

    - by Schof
    This is an HP TouchSmart 320, model number 320-1200m. I'm using Ubuntu 11.04. Hardware information:

        root@hp320:/home/mpower# aplay -l
        **** List of PLAYBACK Hardware Devices ****
        card 0: Generic [HD-Audio Generic], device 0: STAC92xx Analog [STAC92xx Analog]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        root@hp320:~$ cat /proc/asound/card0/codec#0 | grep Codec
        Codec: IDT 92HD91BXX

    Sound to the headphone jack works properly, but sound to the built-in speakers does not work. I have installed Windows, and with Windows 7 installed all audio hardware works properly, so this isn't a hardware fault. I've looked at https://help.ubuntu.com/community/HdaIntelSoundHowto and have been unable to find my card in http://www.kernel.org/doc/Documentation/sound/alsa/HD-Audio-Models.txt. I have tried almost every conceivable model in the "options snd-hda-intel model=MODEL" line I added to /etc/modprobe.d/alsa-base.conf.

    Update 2011-11-09 2:31 PM PST: I've gone to Control Center - Sound Preferences to try settings that might make sound work. The "Hardware" tab shows one device: "Internal Audio 1 Output / 1 Input Analog Stereo Duplex." There are two output profiles listed in the selection box at the bottom of the tab: Analog Stereo Duplex and Analog Stereo Output. Neither causes sound to come from the speakers. I've also run alsamixer on the command line and ensured that everything is set to maximum and nothing is muted.

    Update 2011-11-09 5:15 PM PST: I've replicated the exact same symptoms in 11.10.

    Update 2011-11-10 11:31 AM PST: I've filed a bug in Launchpad: https://launchpad.net/ubuntu/+source/alsa-driver/+bug/888703

    Read the article
