Search Results

Search found 19815 results on 793 pages for 'control'.

Page 629/793

  • Caching items in Orchard

    - by Bertrand Le Roy
    Orchard has its own caching API that, while built on top of ASP.NET's caching feature, adds a couple of interesting twists. In addition to its usual work, the Orchard cache API must transparently separate the cache entries by tenant, but beyond that it also offers a more modern API. Here is, for example, how I'm using the API in the new version of my Favicon module:

        _cacheManager.Get("Vandelay.Favicon.Url", ctx => {
            ctx.Monitor(_signals.When("Vandelay.Favicon.Changed"));
            var faviconSettings = ...;
            return faviconSettings.FaviconUrl;
        });

    There is no need for any code to test for the existence of the cache entry or to later fill that entry. Seriously, how many times have you written code like this:

        var faviconUrl = (string)cache["Vandelay.Favicon.Url"];
        if (faviconUrl == null) {
            faviconUrl = ...;
            cache.Add("Vandelay.Favicon.Url", faviconUrl, ...);
        }

    Orchard's cache API takes that control flow and internalizes it into the API so that you never have to write it again. Notice how even casting the object from the cache is no longer necessary, as the type can be inferred from the return type of the lambda. The lambda itself is of course only hit when the cache entry is not found. In addition to fetching the object we're looking for, it also sets up the dependencies to monitor. You can monitor anything that implements IVolatileToken. Here, we are monitoring a specific signal ("Vandelay.Favicon.Changed") that can be triggered by other parts of the application like so:

        _signals.Trigger("Vandelay.Favicon.Changed");

    In other words, you don't explicitly expire the cache entry. Instead, something happens that triggers the expiration. Other implementations of IVolatileToken include absolute expiration and monitoring of the files under a virtual path, but you can also come up with your own.
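
    To see where the trigger fits, here is a minimal sketch of how both halves might sit together in one service class. The FaviconSettingsService class and its two helper methods are hypothetical names added for the illustration; only ICacheManager, ISignals, Get, When and Trigger come from the article above, and the Orchard.Caching namespace is an assumption.

        using Orchard.Caching; // assumed namespace for ICacheManager, ISignals, IVolatileToken

        public class FaviconSettingsService {
            private readonly ICacheManager _cacheManager;
            private readonly ISignals _signals;

            public FaviconSettingsService(ICacheManager cacheManager, ISignals signals) {
                _cacheManager = cacheManager;
                _signals = signals;
            }

            // Cached read: the lambda only runs when the entry is missing or has expired.
            public string GetFaviconUrl() {
                return _cacheManager.Get("Vandelay.Favicon.Url", ctx => {
                    ctx.Monitor(_signals.When("Vandelay.Favicon.Changed"));
                    return LoadFaviconUrlFromSettings(); // hypothetical helper
                });
            }

            // Write path: changing the setting fires the signal, which expires the cache entry.
            public void UpdateFaviconUrl(string newUrl) {
                SaveFaviconUrlToSettings(newUrl);    // hypothetical helper
                _signals.Trigger("Vandelay.Favicon.Changed");
            }
        }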

    Read the article

  • The Internet Key Wave MW833UP is not recognized in Ubuntu

    - by gio900
    I can't use my Onda MW833UP... :( Any advice? Here is something that someone else may understand:

        ~$: lsb_release -a
        No LSB modules are available.
        Distributor ID: Ubuntu
        Description:    Ubuntu 11.10
        Release:        11.10
        Codename:       oneiric

        ~$: lsusb
        Bus 001 Device 005: ID 1ee8:0012

        ~$: dmesg
        [ 22.709475] cdc_acm 1-1:1.0: ttyACM0: USB ACM device
        [ 22.714856] usbcore: registered new interface driver cdc_acm
        [ 22.714866] cdc_acm: USB Abstract Control Model driver for USB modems and ISDN adapters
        [ 23.520490] ieee80211 phy0: wl_ops_bss_info_changed: arp filtering: enabled true, count 1 (implement)
        [ 24.244530] usbcore: registered new interface driver usbserial
        [ 24.244575] USB Serial support registered for generic
        [ 24.244673] usbcore: registered new interface driver usbserial_generic
        [ 24.244681] usbserial: USB Serial Driver core
        [ 24.265879] USB Serial support registered for GSM modem (1-port)
        [ 24.285680] usbcore: registered new interface driver option
        [ 24.285691] option: v0.7.2:USB Driver for GSM modems
        [ 24.425878] EXT4-fs (sda9): re-mounted. Opts: errors=remount-ro,commit=600
        [ 24.736540] EXT4-fs (sda8): re-mounted. Opts: commit=600
        [ 35.705796] Easy slow down manager: checking for SABI support.
        [ 35.706002] Easy slow down manager: SABI is supported (f5189)
        [ 36.060099] usbcore: deregistering interface driver uvcvideo
        [ 139.508061] CE: hpet increased min_delta_ns to 20113 nsec
        [ 6798.378917] usb 1-1: USB disconnect, device number 5
        [ 6809.108232] usb 1-1: new high speed USB device number 6 using ehci_hcd
        [ 6809.242692] scsi5 : usb-storage 1-1:1.0
        [ 6810.241257] scsi 5:0:0:0: CD-ROM Onda Datacard CD-ROM 0001 PQ: 0 ANSI: 0
        [ 6810.241841] scsi 5:0:0:1: Direct-Access Onda Storage 0001 PQ: 0 ANSI: 0
        [ 6810.271410] sr0: scsi3-mmc drive: 0x/0x caddy
        [ 6810.272099] sr 5:0:0:0: Attached scsi CD-ROM sr0
        [ 6810.272852] sr 5:0:0:0: Attached scsi generic sg1 type 5
        [ 6810.279954] sd 5:0:0:1: [sdb] Attached SCSI removable disk
        [ 6810.281210] sd 5:0:0:1: Attached scsi generic sg2 type 0
        [ 6810.380591] sr0: CDROM (ioctl) error, command: Xpwrite, Read disk info 51 00 00 00 00 00 00 00 02 00
        [ 6810.380617] sr: Sense Key : Hardware Error [current]
        [ 6810.380625] sr: Add. Sense: No additional sense information
        [ 6810.613937] sr0: CDROM (ioctl) error, command: Xpwrite, Read disk info 51 00 00 00 00 00 00 00 02 00
        [ 6810.613972] sr: Sense Key : Hardware Error [current]
        [ 6810.613984] sr: Add. Sense: No additional sense information
        [ 6810.673716] usb 1-1: USB disconnect, device number 6
        [ 6815.572142] usb 1-1: new high speed USB device number 7 using ehci_hcd
        [ 6815.706828] cdc_acm 1-1:1.0: ttyACM0: USB ACM device

    The last 3 lines are where I inserted the Internet key, then reconnected it.

        usb-device
        T: Bus=01 Lev=01 Prnt=01 Port=00 Cnt=01 Dev#= 7 Spd=480 MxCh= 0
        D: Ver= 2.00 Cls=ef(misc ) Sub=02 Prot=01 MxPS=64 #Cfgs= 1
        P: Vendor=1ee8 ProdID=0012 Rev=00.01
        S: Manufacturer=Onda
        S: Product=MW833UP
        S: SerialNumber=9230B35D870F9CB7AE684EACC5C12BE5EC33B26E
        C: #Ifs= 2 Cfg#= 1 Atr=a0 MxPwr=500mA
        I: If#= 0 Alt= 0 #EPs= 1 Cls=02(commc) Sub=02 Prot=01 Driver=cdc_acm
        I: If#= 1 Alt= 0 #EPs= 2 Cls=0a(data ) Sub=00 Prot=00 Driver=cdc_acm

    Then there is /dev/ttyACM0. When the key is connected to the USB port, everything that will meant...

    Read the article

  • Survey: Your Plans for Adopting New Firefox Releases?

    - by Steven Chan (Oracle Development)
    Mozilla has committed to releasing new Firefox versions every six weeks. Mozilla released Firefox 5 this week. With this release, Mozilla states that Firefox 4 is End-of-Life and will not receive any additional security updates. In a comment thread on Mike Kaply's blog article discussing these new Firefox policies, Asa Dotzler from Mozilla stated: ... Enterprise has never been (and I'll argue, shouldn't be) a focus of ours. Until we run out of people who don't have sysadmins and enterprise deployment teams looking out for them, I can't imagine why we'd focus at all on the kinds of environments you care so much about. In a later comment, he added: ... A minute spent making a corporate user happy can better be spent making many regular users happy. I'd much rather Mozilla spending its limited resources looking out for the billions of users that don't have enterprise support systems already taking care of them.

    Asa then confirmed that every new Firefox release will put the previous one into End-of-Life: As for John's concern, "By the time I validate Firefox 5, what guarantee would I have that Firefox 5 won't go EOL when Firefox 6 is released?" He has the opposite of guarantees that won't happen. He has my promise that it will happen. Firefox 6 will be the EOL of Firefox 5. And Firefox 7 will be the EOL for Firefox 6. He added: "You're basically saying you don't care about corporations." Yes, I'm basically saying that I don't care about making Firefox enterprise friendly.

    Kev Needham, Channel Manager at Mozilla, later stated to PC Mag: The Web and Web browsers continue to evolve rapidly. Mozilla's focus is on providing users with the best Web experience possible, and Firefox needs to evolve at the pace the Web's users and developers expect. By releasing small, focused updates more often, we are able to deliver improved security and stability even as we introduce new features, which is better for our users, and for the Web. We recognize that this shift may not be compatible with a large organization's IT Policy and understand that it is challenging to organizations that have effort-intensive certification policies. However, our development process is geared toward delivering products that support the Web as it is today, while innovating and building future Web capabilities. Tying Firefox product development to an organizational process we do not control would make it difficult for us to continue to innovate for our users and the betterment of the Web.

    Your feedback is needed for E-Business Suite certifications: Mozilla's new support policy has significant implications for enterprise users of Firefox with Oracle E-Business Suite. We are reviewing the implications for our certification and support policies for Firefox now. It would be very helpful if you could let me know about your organisation's plans for Firefox in light of this new information. Please feel free to drop me a private email, or post a comment here if that's appropriate.

    Read the article

  • Oracle OpenWorld - Events of Interest

    - by Larry Wake
    I mentioned the "Focus On Oracle Solaris" document the other day, which lists many of the Solaris-related events at Oracle OpenWorld this year; today I thought I'd highlight a few sessions you might find interesting.

    Monday, October 1st:
    4:45 PM - Get Proactive: Best Practices for Maintaining and Upgrading Oracle Solaris (Moscone South 252)
    This session covers best practices for upgrading and patching and how to take advantage of unique technologies in Oracle Solaris 10 and 11. Learn how to get maximum value from My Oracle Support for both reactive and proactive requirements. Understand the benefits of secure remote access and how Oracle Support experts use collaborative shared sessions combined with Oracle Solaris technologies such as DTrace.

    Tuesday, October 2nd:
    10:15 AM - How to Increase Performance and Agility with an Open Data Center Fabric (Moscone South 200)
    If you haven't had a chance to hear about Xsigo Systems, this is a golden opportunity while you're at OpenWorld. Now part of Oracle, Xsigo's network virtualization technology is designed to increase both application performance and management efficiency through a combination of software-defined network technology and the industry's fastest fabric, allowing data centers to converge Ethernet and Fibre Channel connectivity onto a single fabric, reducing complexity by 70 percent and CapEx by 50 percent while providing more I/O bandwidth to your applications.

    Wednesday, October 3rd:
    10:15 AM - General Session: Oracle Solaris 11 Strategy, Engineering Insights, and Roadmap (Moscone South 103)
    Markus Flierl, head of Oracle Solaris Core Engineering, will outline the strategy and roadmap for Oracle Solaris, how Oracle Solaris 11 is being deployed in cloud computing, and the unique optimizations in Oracle Solaris 11 for the Oracle stack. The session also offers a sneak peek at the latest technology under development in Oracle Solaris, and what customers can expect to see in the coming updates.

    Plus, there are several Hands-On Labs:
    Monday, October 1st:
    10:45 AM - 11:45 AM - Reduce Risk with Oracle Solaris Access Control to Restrain Users and Isolate Applications (Marriott Marquis - Salon 14/15)
    4:45 PM - 5:45 PM - Managing Your Data with Built-In Oracle Solaris ZFS Data Services in Release 11 (Marriott Marquis - Salon 14/15)
    Tuesday, October 2nd:
    1:15 PM - 2:15 PM - Virtualizing Your Oracle Solaris 11 Environment (Marriott Marquis - Salon 10/11)
    Wednesday, October 3rd:
    3:30 PM - 4:30 PM - Large-Scale Installation and Deployment of Oracle Solaris 11 (Marriott Marquis - Salon 14/15)

    There's plenty more--see the "Focus On Oracle Solaris" guide. See you next week in San Francisco!

    Read the article

  • Understanding exceptional cases

    - by Justin
    I've been studying the use of exceptions in various PHP projects (such as Doctrine and Zend Framework). Exceptions seem to be thrown when unordinary input/state occurs. A perfect example is Doctrine throwing an exception when you try to use an invalid query string. I think the creators of the Doctrine API understood that, first, you can't query data by using an invalid DQL statement, and a developer should immediately be warned that an error has occurred, rather than letting execution continue with the possibility of an error code going unchecked. I also bet that this simplifies reading the code. I can't think of a situation where you would want to use an invalid DQL statement, except unit testing. Since this is true, it's better to avoid plaguing a bunch of code with null/error checks and use exceptions. I've read in books that exceptions shouldn't be thrown when validating user input. I've seen examples of where that guideline is broken. One example is the Zend Framework: if you supply an invalid controller or action name, an exception is thrown. Unlike Doctrine, the user has more direct control over this sort of input. I know you can configure an error controller and set up a 404 message or what have you, but I'm curious why they have used an exception in this scenario? I guess you can argue that the Zend Framework does not know how to continue processing the request. One last example: I wrote a function to return some HTML based on a given resource type. This resource type is hard-coded and sent when a user interacts with a web site (such as clicking a button to display the form to input data). I don't expect users to be mucking around with the request type. Under normal operating conditions, the resource type should be valid. To clean up some logic, I was going to throw an exception if a particular form wasn't found. This is mainly to find the correct form associated with a resource type so proper validation can occur. Does this sound like a valid use case for an exception? Right now it's pretty trivial, but I do plan to implement a RESTful consumer, and reusing a function to map resources to their validation services would be very useful. I can then catch the exception and, based on the consumer, return an error message suitable for the request type...
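
    For reference, the pattern being weighed here -- throw when a hard-coded resource type has no matching form, instead of returning null and checking everywhere -- looks roughly like the sketch below. The question is about PHP; this is just a minimal, language-neutral illustration written in C#, and the ResourceFormRegistry, IForm and UnknownResourceTypeException names are made up for the example.

        using System;
        using System.Collections.Generic;

        // Hypothetical names, for illustration only.
        public class UnknownResourceTypeException : Exception {
            public UnknownResourceTypeException(string resourceType)
                : base($"No form is registered for resource type '{resourceType}'.") { }
        }

        public interface IForm { /* validation entry points would live here */ }

        public class ResourceFormRegistry {
            private readonly Dictionary<string, Func<IForm>> _forms = new Dictionary<string, Func<IForm>>();

            public void Register(string resourceType, Func<IForm> factory) => _forms[resourceType] = factory;

            // Under normal operation the type is always known, so an unknown type is
            // treated as a programming error and surfaced immediately via an exception.
            public IForm GetForm(string resourceType) {
                if (_forms.TryGetValue(resourceType, out var factory))
                    return factory();
                throw new UnknownResourceTypeException(resourceType);
            }
        }

    A caller higher up (for example, the code handling the RESTful consumer mentioned above) can then catch UnknownResourceTypeException and translate it into an error message appropriate for the request type, rather than every call site checking for a null form.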

    Read the article

  • BIDS Helper 1.6 Beta Release (now with SQL 2012 support!)

    - by Darren Gosbell
    The beta for BIDS Helper 1.6 was just released. We have not updated the version notification just yet, as we would like to get some feedback on people's experiences with the SQL 2012 version. So if you are using SQL 2012, go grab it and let us know how you go (you can post a comment on this blog post or on the BIDS Helper site itself). This is the first release that supports SQL 2012 and consequently also the first release that runs in Visual Studio 2010. A big thanks to Greg Galloway for doing the bulk of the work on this release.

    Please note that if you are doing an xcopy deploy you will need to unblock the files you download or you will get a cryptic error message. This appears to be caused by a security update to either Visual Studio or the .Net framework – the xcopy deploy instructions have been updated to show you how to do this.

    Below are the notes from the release page.
    ======
    This beta release is the first to support SQL Server 2012 (in addition to SQL Server 2005, 2008, and 2008 R2). Since it is marked as a beta release, we are looking for bug reports in the next few months as you use BIDS Helper on real projects.

    In addition to getting all existing BIDS Helper functionality working appropriately in SQL Server 2012 (SSDT), the following features are new:
    Analysis Services Tabular Smart Diff
    Tabular Actions Editor
    Tabular HideMemberIf
    Tabular Pre-Build

    Fixes and Updates:
    The Unused Datasets feature for Reporting Services now accounts for new features in Reporting Services 2008 R2 like Lookups and new features in Reporting Services 2012.
    SSIS: emit an informational message when a variable has an expression defined and EvaluateAsExpression = False
    SSAS: roles reports points to wrong server
    SSIS: Variable Copy / Move broken in v1.5
    "Unused DataSets Report" not showing up in Context menu on VS2005 if Solution Folders used
    SSAS Tabular: Create a UI for managing actions
    SSAS Tabular: Smart Diff improvements for new schema and Tabular models
    SSIS: Copy/Move Variable erroring due to custom Control Flow item icon
    SSIS Performance Visualization: Index out of range
    Fixing bugs in AggManager when aggregation design IDs don't match names

    The exe downloads are a self-extracting installer; the zip downloads allow for an xcopy deploy. Make sure to note the updated xcopy deploy instructions for SQL Server 2012.

    Read the article

  • Simple thruster like behaviour when rotating sprite

    - by ensamgud
    I'm prototyping some 2D game concepts with XNA and have added some basic keyboard inputs to control a triangle sprite. When I press key up the sprite accelerates in its current facing direction; when I release the key it brakes down. For rotation, when I press the left/right keys I rotate the sprite. Currently the sprite immediately changes direction when I rotate it. What I want is for it to keep moving in the same direction when I rotate, until I hit key up, adding thrust in whatever direction the sprite is pointing at. This would simulate thrusters on a classic space shooter like Asteroids. I'm adding an image to describe the behaviour I'm after and some code samples of how I'm doing things at the moment.

    This is my player struct, holding information of the sprite:

        public struct PlayerData
        {
            public Vector2 Position;      // where to draw the sprite
            public Vector2 Direction;     // travel direction of sprite
            public float Angle;           // rotation of sprite
            public float Velocity;
            public float Acceleration;
            public float Decelleration;
            public float RotationAcceleration;
            public float RotationDecceleration;
            public float TopSpeed;
            public float Scale;
        }

    This is how I'm currently handling thrusting / braking (when pressing/releasing key up) (simplified, removed some bounds checking etc):

        player.Velocity += player.Acceleration * 0.1f;
        player.Velocity -= player.Acceleration * 0.1f;

    And when I rotate the sprite left and right:

        player.Angle -= player.RotationAcceleration * 0.1f;
        player.Angle += player.RotationAcceleration * 0.1f;

    This runs in the update loop, keeps the direction updated and updates the position:

        Vector2 up = new Vector2(0f, -1f);
        Matrix rotMatrix = Matrix.CreateRotationZ(player.Angle);
        player.Direction = Vector2.Transform(up, rotMatrix);
        player.Direction *= player.Velocity;
        player.Position += player.Direction;

    I am following along various beginner tutorials and haven't found any describing this, but I have tried some on my own without success. Do I need to change my velocity and acceleration fields to Vectors instead of floats to accomplish this type of movement? I realise my Angle and the Direction vector are currently tied together and I need to disconnect these somehow to be able to rotate freely without changing the direction of the movement, but I can't quite figure out how to do this while keeping the acceleration/deceleration functional. Would appreciate an explanation rather than pure code samples. Thanks,
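
    The short answer to the question above is yes: store the momentum itself as a vector that persists between frames, and let the angle affect only where new thrust is added (and how the sprite is drawn). Rotating then changes nothing until the thrust key adds acceleration along the new facing. A minimal sketch of that idea follows, written against the same XNA types; the VelocityVector field, the keyboardState variable and the elapsedSeconds value are assumptions added for the illustration, not part of the original code.

        // Inside the update loop (sketch only):
        Vector2 facing = Vector2.Transform(new Vector2(0f, -1f),
                                           Matrix.CreateRotationZ(player.Angle));

        if (keyboardState.IsKeyDown(Keys.Up))
        {
            // Thrust: accelerate along the current facing; existing momentum is preserved.
            player.VelocityVector += facing * player.Acceleration * elapsedSeconds;
        }
        else
        {
            // Optional drag instead of an instant brake; tweak or remove to taste.
            player.VelocityVector *= 0.99f;
        }

        // Clamp to top speed.
        if (player.VelocityVector.Length() > player.TopSpeed)
            player.VelocityVector = Vector2.Normalize(player.VelocityVector) * player.TopSpeed;

        // The angle no longer feeds into the position update; it only affects drawing and thrust.
        player.Position += player.VelocityVector * elapsedSeconds;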

    Read the article

  • SharePoint Thoughts

    - by Tim Murphy
    I was listening to .NET Rocks episode #713 and it got me thinking about a number of SharePoint-related topics. I have been working with SharePoint since the 2001 product came out and have watched it evolve over the years. Today SharePoint is one of the most powerful and flexible products in the market. Of course that doesn't mean there isn't room for improvement (a lot of improvement, in fact), and with much power comes much responsibility.

    My main gripe these days is that you have to develop on a server instance. This adds a real barrier to entry for developers. You either have to run VMware or Hyper-V on your developer machine or actually develop on your dev server for most tasks. Yes, there is a way to set up a Windows 7 machine with the SharePoint components, but it is very hackish. Beyond that, the tools in VS2010 are a great leap forward from past generations. Not requiring a separate package creation tool is not the least of the improvements. Better workflow and web part development have also eased the burden of many developers.

    The other thing the show brought up in my thoughts was more around usage. Users want to be able to self-serve everything without regard to what effect that has on leveraging their data from a corporate perspective. My coworkers who work on Lotus Notes ask why the user can't just do whatever they want. Part of the reason is that those features have not been built, but the other part is that giving them those features is often like giving an infant a loaded handgun. You can do it, but it doesn't make it the smart thing to do.

    As with any tool that is going to be used in the enterprise, it should be subject to governance. If controls are not in place then, as they said in the episode of DNR, the document libraries (and I believe SharePoint in general) start to look as disarrayed and unusable as a shared drive. Consider these factors before giving in to every whim of the users. You should be able to explain to them the tradeoffs of giving them full control versus being able to leverage the information they collect to the benefit of the organization.

    These are just a couple of the thoughts that were triggered by the show. I'm sure there are more discussions that can be had. Feel free to leave your comments about the pros and cons of SharePoint.

    del.icio.us Tags: .NET Rocks,SharePoint,software development

    Read the article

  • Should accessible members of an internal class be internal too?

    - by Jeff Mercado
    I'm designing a set of APIs for some applications I'm working on. I want to keep the code style consistent in all the classes I write, but I've found that there are a few inconsistencies that I'm introducing and I don't know what the best way to resolve them is. My example here is specific to C#, but this would apply to any language with similar mechanisms.

    There are a few classes that I need for implementation purposes that I don't necessarily want to expose in the API, so I make them internal wherever needed. Generally what I would do is design the class as I normally would (e.g., make members public/protected/private where necessary) and change the visibility level of the class itself to internal. So I might have a few classes that look like this:

        internal interface IMyItem
        {
            ItemSet AddTo(ItemSet set);
        }

        internal class _SmallItem : IMyItem
        {
            private readonly /* parameters */;
            public _SmallItem(/* small item parameters */) { /* ... */ }
            public ItemSet AddTo(ItemSet set) { /* ... */ }
        }

        internal abstract class _CompositeItem : IMyItem
        {
            private readonly /* parameters */;
            public _CompositeItem(/* composite item parameters */) { /* ... */ }
            public abstract object UsefulInformation { get; }
            protected void HelperMethod(/* parameters */) { /* ... */ }
        }

        internal class _BigItem : _CompositeItem
        {
            private readonly /* parameters */;
            public _BigItem(/* big item parameters */) { /* ... */ }
            public override object UsefulInformation { get { /* ... */ } }
            public ItemSet AddTo(ItemSet set) { /* ... */ }
        }

    In another generated class (part of a parser/scanner), there is a structure that contains fields for all possible values it can represent. The generated class is internal too, but I have control over the visibility of the members and decided to make them internal as well.

        internal partial struct ValueType
        {
            internal string String;
            internal ItemSet ItemSet;
            internal IMyItem MyItem;
        }

        internal class TokenValue
        {
            internal static int EQ(ItemSetScanner scanner) { /* ... */ }
            internal static int NAME(ItemSetScanner scanner, string value) { /* ... */ }
            internal static int VALUE(ItemSetScanner scanner, string value) { /* ... */ }
            //...
        }

    To me, this feels odd because in the first set of classes, I didn't necessarily have to make some members public; they very well could have been made internal. Internal members of an internal type can only be accessed internally anyway, so why make them public? I just don't like the idea that the way I write my classes has to change drastically (i.e., change all uses of public to internal) just because the class is internal. Any thoughts on what I should do here? It makes sense to me that I might want to make some members of a class declared public, internal. But it's less clear to me when the class is declared internal.

    Read the article

  • Segmentation and Targeting: Your Tools for Personalizing the Online Customer Experience

    - by Christie Flanagan
    In order to deliver the kind of personalized and engaging online experiences that customers expect today, look to segmentation and targeting. Segmentation is the practice of dividing your site visitors into distinct groups based on shared characteristics or behavior – for example, a segment may consist of site visitors who have visited pages related to a certain product type, or it may consist of visitors within the same age group or geographic area. The idea is that those within a segment are more likely to have common needs, problems or interests that can be served by your business. Targeting is the process by which the most relevant content, whether an article, a promotion or some other piece of content, is delivered to your visitors based on their segment membership.

    Segmentation and targeting are used to drive greater engagement on your web presence by delivering content to your site visitors that is tailored to their interests, behavior or other attributes. You may have a number of different goals for your segmentation and targeting efforts:
    Up-sell or cross-sell to your customers
    Conduct A/B testing on your offers and creative
    Offer discounts, promotions or other incentives for the time and duration that you specify
    Make it easier to find relevant information about products and services
    Create a premium content model

    There are two different approaches you can take toward segmentation and targeting for your online customer experience initiatives. The first is more of a manual process, in which marketers manage the process of determining which segments to create and which content to target to those segments. The benefit of this approach is that it gives marketers a high level of control over the whole process, which works well when you have a thorough understanding of your segments and which content is most likely to serve their needs. Tools for marketer-managed segmentation and targeting are often built right in to your WEM platform, as they are with Oracle WebCenter Sites. The downside is that the more segments and content you have, the more time-consuming and complicated it can be to manage manually. The second approach relies on predictive intelligence to automate the segmentation and targeting process. This allows optimization of the process to occur in real time. This approach helps reduce the burden of manual segmentation and targeting and can result in new insights into segments that you may never have thought of on your own. It also provides you with the capability to quickly test new offers and promotions on your site. Predictive segmentation and targeting can be achieved by using Oracle WebCenter Sites and Oracle Real-Time Decisions together.

    Get a taste of how Oracle WebCenter Sites and Oracle Real-Time Decisions combine to deliver powerful capabilities for predictive segmentation and targeting by watching this on-demand webcast introducing Oracle WebCenter Sites 11g or by reading IDC's take on the latest release of Oracle's web experience management solution. Be sure to return to the Oracle WebCenter blog on Thursday for a closer look at how to optimize the online customer experience using these two products together.
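
    To make the marketer-managed approach a bit more concrete, here is a tiny, generic sketch of the two ideas: a segment as a rule over visitor attributes, and targeting as picking the content mapped to the first matching segment. It is deliberately not tied to the Oracle WebCenter Sites or Oracle Real-Time Decisions APIs; all names here are made up for illustration.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Illustrative only – not a product API.
        public record Visitor(int Age, string Region, IReadOnlyCollection<string> ViewedProductTypes);

        public class Segment
        {
            public string Name { get; }
            private readonly Func<Visitor, bool> _rule;
            public Segment(string name, Func<Visitor, bool> rule) { Name = name; _rule = rule; }
            public bool Matches(Visitor visitor) => _rule(visitor);
        }

        public static class Targeting
        {
            // Returns the content mapped to the first segment the visitor belongs to,
            // falling back to a default item when no segment matches.
            public static string ChooseContent(
                Visitor visitor,
                IEnumerable<(Segment Segment, string Content)> rules,
                string defaultContent) =>
                rules.Where(r => r.Segment.Matches(visitor))
                     .Select(r => r.Content)
                     .DefaultIfEmpty(defaultContent)
                     .First();
        }

    For example, a "cycling enthusiasts" segment could be defined as visitors whose ViewedProductTypes contains "bikes" and mapped to a cycling promotion banner, while everyone else gets the default banner. The predictive approach described above effectively replaces these hand-written rules with models that learn the mappings from behavior.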

    Read the article

  • Lenovo W520 back usb port not working

    - by jaudette
    The USB port in the back of my laptop (on the right side when viewed from the user's perspective) is not working. Does anybody know if we can get this port working, and what the port number is? Here is my lsusb, if it can help:

        % lsusb
        Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 003 Device 002: ID 046d:c01e Logitech, Inc. MX518 Optical Mouse
        Bus 003 Device 003: ID 046d:c318 Logitech, Inc. Illuminated Keyboard
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 001 Device 003: ID 0765:5001 X-Rite, Inc. Huey PRO Colorimeter
        Bus 001 Device 004: ID 147e:2016 Upek Biometric Touchchip/Touchstrip Fingerprint Sensor
        Bus 001 Device 005: ID 0a5c:217f Broadcom Corp. Bluetooth Controller
        Bus 001 Device 006: ID 04f2:b217 Chicony Electronics Co., Ltd Lenovo Integrated Camera (0.3MP)
        Bus 002 Device 003: ID 17ef:1003 Lenovo Integrated Smart Card Reader

    I am running 12.10, upgraded from 12.04, but it did not work in 12.04 either. The two USB ports on the left work just fine.

    EDIT: I just updated my BIOS from 1.32 to 1.39, no change in behaviour. The port does not even power up my devices.

    EDIT 2: Booted up Windows, and the port is working. I went into the device manager and looked at the USB settings. I found my USB drive on Port 2, Hub 3; I just don't know how that relates to the Bus and Device numbers of Linux. In Windows, the smart card reader was on the USB hub located at port 1, hub 2; the fingerprint reader and Bluetooth were on the USB hub located at port 1, hub 1.

    EDIT 3: Went and looked at this SO post. Tried to look at my kern.log file with tail -f /var/log/kern.log. Got some activity when plugging/removing devices in other ports, but nothing happens when connecting a device into that port. It really looks disabled. Looked at /sys/bus/usb/devices/usb1/power/control (and the same for usb2-usb4) and they are all set to auto. As expected, usb4 has version 3.00; the others (usb1-usb3) are 2.00.

    Read the article

  • TransportWithMessageCredential & Service Bus – Introduction

    - by Michael Stephenson
    Recently we have been working on a project using the Windows Azure Service Bus to expose line-of-business applications. One of the topics we discussed a lot was the security aspects of the solution. Most of the samples you see for the Windows Azure Service Bus use the shared secret with the Access Control Service to protect the service bus endpoint, but one of the problems we found with this scenario is that any claims resulting from credentials supplied by the client are not passed through to the service listening to the service bus endpoint. As an example of this, we originally were hoping that we could give two different clients their own shared secret key and the issuer for each would indicate which client it was. If the claims had flowed to the listening service then we could check that a message sent by client one was of a type they are allowed to send. Unfortunately this claim doesn't flow to the listening service, so we were unable to implement this scenario.

    We had also seen samples suggesting that changing the relayClientAuthenticationType attribute would allow you to authenticate the client within the service itself rather than with ACS. While this was interesting, it wasn't exactly what we wanted. Removing the step where access to the relay endpoint is protected by authentication against ACS means that anyone could send messages via the service bus to the on-premise listening service, which would then authenticate clients. In our scenario we certainly didn't want to allow clients to skip the ACS authentication step, because this could open up two attack opportunities for an attacker. The first would allow an attacker to send messages through to our on-premise servers and potentially cause a denial-of-service situation. In the second case, with the same kind of attack, by running lots of messages through the service bus which were then rejected, the attacker would cause us to incur charges per message on our Windows Azure account.

    The correct way to implement our desired scenario is to combine one of the common options for authenticating against ACS (so the service bus endpoint cannot be accessed by an unauthenticated caller) with the normal WCF security features, using the TransportWithMessageCredential security option. Looking around I could not find any guidance on how to implement this correctly, so on the back of setting this up I decided to write a couple of articles to walk through a couple of the common scenarios you may be interested in. These are available on the following links:
    Walkthrough - Combining shared secret and username token
    Walkthrough - Combining shared secret and certificates
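
    For orientation, a rough sketch of the client-side setup being described might look like the following. This assumes the Microsoft.ServiceBus WCF relay bindings of that era; the contract name and issuer values are illustrative, the exact property names are from memory, and the linked walkthroughs should be treated as the authoritative version.

        using System.ServiceModel;
        using Microsoft.ServiceBus;

        // Relay access is still protected by ACS (RelayAccessToken), while the message
        // itself carries a username credential that the on-premise service validates.
        var binding = new NetTcpRelayBinding(
            EndToEndSecurityMode.TransportWithMessageCredential,
            RelayClientAuthenticationType.RelayAccessToken);
        binding.Security.Message.ClientCredentialType = MessageCredentialType.UserName;

        var address = new EndpointAddress(
            ServiceBusEnvironment.CreateServiceUri("sb", "mynamespace", "myservice"));

        var factory = new ChannelFactory<IMyLineOfBusinessService>(binding, address); // hypothetical contract

        // ACS shared secret used to authenticate to the relay endpoint.
        factory.Endpoint.Behaviors.Add(new TransportClientEndpointBehavior
        {
            TokenProvider = TokenProvider.CreateSharedSecretTokenProvider("issuer-name", "issuer-key")
        });

        // Message credential flowed to, and validated by, the listening service.
        factory.Credentials.UserName.UserName = "client-one";
        factory.Credentials.UserName.Password = "client-one-password";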

    Read the article

  • Configuring trace file size and number in WebCenter Content 11g

    - by Kyle Hatlestad
    Lately I've been doing a lot of debugging using the System Output tracing in WebCenter Content 11g. This is built-in tracing in the content server which provides a great level of detail on what's happening under the hood. You can access the settings as well as a view of the tracing by going to Administration -> System Audit Information. From here, you can select the tracing sections to include. Some of my personal favorites are searchquery, systemdatabase, userstorage, and indexer. Usually I'm trying to find out some information regarding a search, database query, or user information. Besides debugging, it's also very helpful for performance tuning.

    One of the nice tricks with the tracing is that it honors the wildcard (*) character. So you can put in 'schema*' and gather all of the schema-related tracing. And you can notice that if you select 'all' and update, it changes to just a *.

    To view the tracing in real time, you simply go to the 'View Server Output' page and the latest tracing information will be at the bottom. This works well if you're looking at something pretty discrete and the system isn't getting much activity. But if you've got a lot of tracing going on, it would be better to go after the trace log file itself. By default, the log files can be found in the <content server instance directory>/data/trace directory. You'll see it named 'idccs_<managed server name>_current.log'. You may also find previous trace logs that have rolled over. In this case they will be identified by a date/time stamp in the name. By default, the server will rotate the logs after they reach 1MB in size. And it will keep the most recent 10 logs before they roll off and get deleted. If your server is in a cluster, then the trace file should be configured to be local to the node per the recommended configuration settings.

    If you're doing some extensive tracing and need to capture all of the information, there are a couple of configuration flags you can set to control the logs:

        #Change log size to 10MB and number of logs to 20
        FileSizeLimit=10485760
        FileCountLimit=20

    This is set by going to Admin Server -> General Configuration and entering them in the Additional Configuration Variables section. Restart the server and it should take on the new logging settings.

    Read the article

  • Screen brightness dull after upgrade to Ubuntu 14.04

    - by user288426
    After upgrading to Ubuntu 14.04 I found that I could not increase screen brightness. I'm using a Samsung NC110 netbook. Initially the function key to modify brightness did not respond at all. After implementing the first part of the fix, the key came alive and the brightness bar could be modified. Yet even with maximum brightness indicated, the screen still remained very dull. The second part of the fix cures that problem, at least for this type of machine.

    The first part of the fix was copied and modified in line with my experience from the following post: How to control Brightness

    Open a terminal (Ctrl + Alt + T). Then type sudo nano /etc/default/grub. It will ask for your password. Type it in. Around the 11th line, there will be something like:

        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"

    Change it to:

        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi_osi=Linux acpi_backlight=vendor"

    Save the file by Ctrl+O followed by Ctrl+X. Then run sudo update-grub in the terminal. Reboot and see if backlight adjustment works.

    Then I needed to modify the rc.local file; see the following post to understand the procedure: problem with adjusting brightness Ubuntu 14.04

    In my case I had 2 folders listed under /sys/class/backlight, which were intel_backlight and samsung. I realized that the samsung folder is the governing one, so I had to modify the check for max brightness to:

        cat /sys/class/backlight/samsung/max_brightness

    In my case the max value obtained is 8. Besides putting this into rc.local, I also had to uncomment the first line to get this working. My rc.local under /etc/ now looks as follows:

        !/bin/sh -e
        #
        # rc.local
        #
        # This script is executed at the end of each multiuser runlevel.
        # Make sure that the script will "exit 0" on success or any other
        # value on error.
        #
        # In order to enable or disable this script just change the execution
        # bits.
        #
        # By default this script does nothing.

        echo 8 > /sys/class/backlight/samsung/brightness
        exit 0

    Now I can modify brightness on my netbook and also get the screen up to its maximum brightness. Hope this is helpful.

    Read the article

  • Oracle Virtualization Friday Spotlight - November 8, 2013

    - by Monica Kumar
    Hands-on Private Cloud Simulator In One Hour
    Submitted by: Doan Nguyen, Senior Principal Product Marketing Director

    My aeronautics instructor used to say, "you can't appreciate flying until you take flight." To clarify, this is not about gearing up in a flying squirrel suit and hopping off a cliff (topic for another blog!) but rather about flying an airplane. The idea is to get hands-on with the controls in the cockpit and experience flight before you actually fly a real plane. After the initial 40 hours of flight time, the concept sank in and it really made sense. This concept is what inspired our technical experts to put together the hands-on lab for a private cloud deployment and management self-service model. Yes, we are comparing the lab to a flight simulator! Let's look at the parallels:

    To get trained to fly, starting in the simulator gets you off the ground quicker. There is no need to have a real plane to begin with. In a hands-on lab, there is no need for a real server with networking and real storage installed. All you need is your laptop.

    The simulator is pre-configured, pre-flight check done. Similarly, in a hands-on lab, Oracle VM and Oracle Enterprise Manager are pre-configured and assembled using Oracle VM VirtualBox as the container. Software installations are not needed.

    After time spent training at the controls, you can really appreciate the practical experience of flying. Along the same lines, the hands-on lab is a guided learning path, without the encumbrances of hardware setup and software installation, so you can learn about cloud deployment and management. However, unlike the simulator training, your time investment with the lab is only about an hour and not 40 hours!

    This hands-on lab takes you through private cloud deployment and management using Oracle VM and Oracle Enterprise Manager Cloud Control 12c in an Infrastructure as a Service (IaaS) model. You will first configure the IaaS cloud as the cloud administrator and then deploy guest virtual machines (VMs) as a self-service user. Then you are ready to take flight into the cloud! Why not step into the cockpit now?

    Read the article

  • Does the Ubuntu One sync work?

    - by bisi
    I have been on this for several hours now, trying to get a simple second folder to sync with my (paid) account. I cannot tell you how many times I removed all devices, removed stored passwords, killed all processes of u1, logged out and back in online...and still, the tick in the file browser (Synchronize this folder) is loading and loading and loading. Also, I have logged out, rebooted countless times. And this is after me somehow managing to get the u1 preferences to finally "connect" again. I have also checked the status of your services, and none are close to what I am experiencing. And I have checked the suggested related questions above! So please, just confirm whether it is a problem on my side, or a problem on your side. EDIT: In the mean time, here is what has changed, on top of what is mentioned just above. • My files went from 0MB to 71.9MB, and is still rising. • My first folder of 400.2MB is being filled with the data as I write this. The second folder has the folder sub-structure in place. • Both folders now show in the File Browser that it will be synchronized. I believe that right now, it is all back to normal and working fine, and I guess that's what a good night's sleep can do ;). And we're now only back to the point where synchronizing is slow, but will pick up with the release of Natty (https://wiki.ubuntu.com/UbuntuOne/FAQ/WhyIsItTakingSoLongForMyFilesToSync). But to get to the questions: My about says I use 11.04, Natty Narwhal, but I am quite sure the last distribution I installed was 10.10. Folder A is 400.2MB, and Folder B is 29.5MB I am on a DSL line, behind a regular fritz.box setting. No proxy servers in use, and I did not install any particular firewall features. No physical firewall, just the router (on which I have a TV signal as well), and 2 switches to get to this floor. Status: inactive The ubuntuone-indicator runs the same window as when I click on my name on the top-right corner and select Ubuntu one, or in the Control Center choose Ubuntu one. It wasn't supposed to go further than this was it?

    Read the article

  • Professional immigration

    - by etranger
    Hello all, does anyone here have practical advice on professional relocation from Russia to Europe? The reasons behind making such a decision are perhaps beyond the subject, so I'll stick to the practical part. Having done some of the "common stuff" for finding a job, I am now facing two serious problems:

    I am a "dual-class" person, with a university degree in marketing and multiple years of self-studied computer competence (hence my writing here). I have professional experience in both areas. I don't currently hold a European work permit. From what I can see, this results in the typical HR person throwing out my CV as either being "overqualified" or "too much trouble with making the permit".

    I do have the skills and character to start my own business, but it requires start-up capital that I don't have: over the last years I had to pay high bills for the medical treatment of a family member, who has since passed away. Now I'm almost out of debt. As you can probably guess, English is not a problem, and I'm open to new languages, but the first steps of entering the market, or the society, are the problematic part. I live close to Norway and am trying to get some professional contacts there, but it hasn't given me any practical prospects so far. Any advice is greatly appreciated.

    EDIT: I am currently making my living off web site development and occasional consulting services, both in IT and marketing. For purely geographic reasons I'm dealing with clients that reside in the same city where I live, pop. 350 000. Being quite local, market requirements for web sites are simple and stable — clients need to control navigation, write articles in a word-like editor, upload illustrations and place ad banners, all with no additional programming. As many web developers do, I'm using my own content management system that fits these expectations. I have also started developing a newer version of this system that has better support for international environments, but I'm too distant from the real market demand in Europe to speak of the right track here. Technically it's based on PHP/MySQL and uses XSLT for templating. It allows for quick website deployment and has a neat architecture, the lack of which made me abandon similar open-source solutions (Joomla and the like). Deployment time from rasterized design proofs is normally under 6-8 working hours; I don't know how that compares to worldwide practice.

    EDIT 2: Can anyone share what the Norwegian (Scandinavian) web solutions market currently demands?

    Read the article

  • A little gem from MPN&ndash;FREE online course on Architectural Guidance for Migrating Applications to Windows Azure Platform

    - by Eric Nelson
    I know a lot of technical people who work in partners (ISVs, System Integrators etc). I know that virtually none of them would think of going to the Microsoft Partner Network (MPN) learning portal to find some deep and high quality technical content. Instead they would head to MSDN, Channel 9, msdev.com etc. I am one of those people :-) Hence imagine my surprise when I stumbled upon this little gem Architectural Guidance for Migrating Applications to Windows Azure Platform (your company, and hence your Live ID, need to be a member of MPN – which is free to join). This is first-class stuff – and represents about 4 hours, which is really 8 if you stop and ponder :)

    Course Structure
    The course is divided into eight modules. Each module explores a different factor that needs to be considered as part of the migration process.

    Module 1: Introduction: This section provides an introduction to the training course, highlighting the values of the Windows Azure Platform for developers.

    Module 2: Dynamic Environment: This section goes into detail about the dynamic environment of the Windows Azure Platform. This session will explain the difference between current development states and the Windows Azure Platform environment, detail the functions of roles, and highlight development considerations to be aware of when working with the Windows Azure Platform.

    Module 3: Local State: This session details the local state of the Windows Azure Platform. This section details the different types of storage within the Windows Azure Platform (Blobs, Tables, Queues, and SQL Azure). The training will provide technical guidance on local storage usage, how to write to blobs, how to effectively use table storage, and other authorization methods.

    Module 4: Latency and Timeouts: This session goes into detail explaining the considerations surrounding latency, timeouts and how to assess an IT portfolio.

    Module 5: Transactions and Bandwidth: This session details the performance metrics surrounding transactions and bandwidth in the Windows Azure Platform environment. This session will detail the transactions and bandwidth costs involved with the Windows Azure Platform and mitigation techniques that can be used to properly manage those costs.

    Module 6: Authentication and Authorization: This session details authentication and authorization protocols within the Windows Azure Platform. This session will detail information around web methods of authorization, web identification, Access Control benefits, and a walkthrough of the Windows Identity Foundation.

    Module 7: Data Sensitivity: This session details data considerations that users and developers will experience when placing data into the cloud. This section of the training highlights these concerns, and details the strategies that developers can take to increase the security of their data in the cloud.

    Module 8: Summary: Provides an overall review of the course.

    Read the article

  • S11 launched

    - by unixman
    Now that Oracle Solaris 11 is out, it's time to do 2 things -- 1) it's time to see what's in it, what's new and why it's important, and then assess why it might make sense to begin evaluating it for your needs, and 2) it's time to acknowledge, give thanks to and congratulate all the R&D personnel, architects, engineers, designers and testers who've put so much effort and energy into helping make Solaris 11 (and SunOS 5.11) what it has become -- starting way back circa 2004 and, more importantly, culminating in the recent years and months -- staying focused on the execution, unwavering in the face of various challenges.

    For #1 above, here are a few good things to get going with:
    - Watch the product launch replay
    - Visit the Solaris 11 Spotlight section on oracle.com
    - Get comfortable through introductory videos and detailed "how-to" guides (ex: how to create and publish IPS packages), white papers on the new default root file system, ZFS, and reap the benefits brought on by the fundamental shift in easing the administration experience
    - Look at the next level of software lifecycle management that is enabled by technologies such as Automated Installer and Image Packaging System -- which dramatically address patch management-related challenges
    - Understand how we continue to innovate in areas of service intelligence, reliability and availability
    - Start to evaluate enhancements in virtualization capabilities -- whether influenced by the need to consolidate or motivated by the need to have increased service mobility across physical systems, leveraging hardware-level abstractions
    - Gain more control over your network-centric services through enhancements in network resource management, observability and I/O performance
    - Look beyond your existing infrastructure with confidence that you can re-host and transition to newer systems with the use of Solaris 10 zones running on top of Solaris 11
    - Relish in the fact that you can do all this, get your data to be secure and encrypted and more, on both SPARC and x86-based systems
    - Stay informed by keeping an eye on relevant blogs, which we've begun turning up recently
    - Go through a hands-on lab
    - Sign up to take a class or just opt to watch various videos to begin to raise your comfort level with these technologies

    For #2 above -- there are many ways to do that. One way is to just say "thanks" with an email, a post, or a simple card, similar to this one seen at a Barnes and Noble store recently. The front of the card is followed by what's inside... and as the saying goes, now more than ever, "it's what's inside that counts." And here's the inside of the card:

    So, what are you waiting for? Go download and try it out, and please let us know what you think of it!

    Read the article

  • RTS Game Style Application [closed]

    - by Daniel Wynand van Wyk
    My question may seem somewhat odd, but I hope that my specifications will clarify EXACTLY what it is that I am after. I need some help choosing the right tooling for a particular endeavour. My background is in desktop application development and large back-end systems. I have worked primarily on the Microsoft stack using C# and the .Net framework. My goal is to develop a 2D, RTS style, interactive office simulation. The simulation will model various office spaces, office equipment, employees and their interactions with one another. The idea is to abstract the concept of an office completely. Under the hood the application will do many things that are nothing like a game. This includes P2P networking, VPN tunnelling, streaming video, instant messaging, document collaboration, remote screen sharing, file-sharing, virus scanning, VOIP, document scanning, faxing, emailing, distributed computing, content management and much more! A somewhat similar thing has been attempted by IBM, where they created a virtual office in second life. If their attempt was a game, the game-play would be notably horrible, to say the least! The users/players will drive and control my application through the various objects modelled in the simulation. A single application capable of performing all of these various tasks would be a nightmare to navigate for even the most expert user. Using the concept of a game, I can easily separate functionality by assigning them to objects that relate 1-1 with their real world counter-parts. This can greatly simplify computing for novice users, with many added benefits in terms of visibility, transparency of process and centralized configuration. My hope is to make complex computing tasks accessible to all kinds of users and to greatly reduce the cognitive load associated with using the many different utilities and applications inside office settings. The complexity is therefore limited to the complexity of the space in which you find yourself. I want the application to target as many platforms as possible and run on computers that have no accelerated graphics capabilities. The simulation won't contain any of the fancy eye-candy you find in modern games, to the contrary, my "game" will purposefully be clean and simple. The closest thing I could imagine would be an old game like "Theme Hospital" or the first instalment of "The Sims". All the content will be pre-created and not user-generated like Second Life. New functionality will be added via a plugin system. Given my background and nature of my "game", I would like to spend most of my time writing code that does not have to do with the simulated office, as the "game" is really just a glorified application menu. I have done much reading about existing engines, frameworks and tools. I need the help of an experienced game developer who has tried and tested various products over the years who can guide me in the right direction given my very particular needs. I would appreciate any help I can get!

    Read the article

  • "Untrusted packages could compromise your system's security." appears while trying to install anything

    - by maria
    Hi, I've freshly installed Ubuntu 10.04 on a new computer. I'm trying to install the applications I need on it (my old computer is broken and I have to send it in for service). I've managed to install texlive, but then I can't install anything else. All the software I want is what I had successfully installed on my old computer (with the same version of Ubuntu), so I don't understand why the terminal says the following (sorry, the terminal talks half English, half Polish, but I hope it's enough):

        maria@marysia-ubuntu:~$ sudo aptitude install emacs
        Czytanie list pakietów... Gotowe
        Budowanie drzewa zaleznosci
        Odczyt informacji o stanie... Gotowe
        Reading extended state information
        Initializing package states... Gotowe
        The following NEW packages will be installed:
          emacs emacs23{a} emacs23-bin-common{a} emacs23-common{a} emacsen-common{a}
        0 packages upgraded, 5 newly installed, 0 to remove and 0 not upgraded.
        Need to get 23,9MB of archives. After unpacking 73,8MB will be used.
        Do you want to continue? [Y/n/?] Y
        WARNING: untrusted versions of the following packages will be installed!
        Untrusted packages could compromise your system's security.
        You should only proceed with the installation if you are certain that
        this is what you want to do.
          emacs emacs23-bin-common emacsen-common emacs23-common emacs23
        Do you want to ignore this warning and proceed anyway?
        To continue, enter "Yes"; to abort, enter "No"

    I was trying to install other editors as well, with the same result. Since I was certain that the packages I wanted to install were secure, I finally entered "Yes". The installation ended successfully, but the editor doesn't understand any .tex file (the .tex files are definitely fine):

        this is pdfTeX, Version 3.1415926-1.40.10 (TeX Live 2009/Debian)
        restricted \write18 enabled.
        entering extended mode
        (./Szarfi.tex
        ! Undefined control sequence.
        l.2 \documentclass
                           {book}
        ?

    What's more, I've realised that in the Synaptic Package Manager there is no package marked as supported by Canonical... Any tips? Thanks in advance

    Read the article

  • Nifty popup fails to register

    - by Snailer
    I'm new to Nifty GUI, so I'm following a tutorial here for making popups. For now, I'm just trying to get a very basic "test" popup to show, but I get multiple errors and none of them make much sense. To show a popup, I believe it is necessary to first have a Nifty Screen already showing, which I do. So here is the ScreenController for the working Nifty Screen:

        public class WorkingScreen extends AbstractAppState implements ScreenController {

            // Main is my jme SimpleApplication
            private Main app;
            private Nifty nifty;
            private Screen screen;

            public WorkingScreen() {}

            public void equip(String slotstr) {
                int slot = Integer.valueOf(slotstr);
                System.out.println("Equipping item in slot "+slot);
                // Here's where it STOPS working.
                app.getPlayer().registerPopupScreen(nifty);
                System.out.println("Registered new popup");
                Element ele = nifty.createPopup(app.getPlayer().POPUP);
                System.out.println("popup is " +ele);
                nifty.showPopup(nifty.getCurrentScreen(), ele.getId(), null);
            }

            @Override
            public void initialize(AppStateManager stateManager, Application app) {
                super.initialize(stateManager, app);
                this.app = (Main)app;
            }

            @Override
            public void update(float tpf) { /** jME update loop! */ }

            public void bind(Nifty nifty, Screen screen) {
                this.nifty = nifty;
                this.screen = screen;
            }

    When I call equip(0) the system prints Equipping item in slot 0, then a lot of errors and none of the subsequent println()'s. Clearly it botches somewhere in Player.registerPopupScreen(Nifty nifty). Here's the method:

        public final String POPUP = "Test Popup";

        public void registerPopupScreen(Nifty nifty) {
            System.out.println("Attempting new popup");
            PopupBuilder b = new PopupBuilder(POPUP) {{
                childLayoutCenter();
                backgroundColor("#000a");
                panel(new PanelBuilder() {{
                    id("List");
                    childLayoutCenter();
                    height(percentage(75));
                    width(percentage(50));
                    control(new ButtonBuilder("TestButton") {{
                        label("TestButton");
                        width("120px");
                        height("40px");
                        align(Align.Center);
                    }});
                }});
            }};
            System.out.println("PopupBuilder success.");
            b.registerPopup(nifty);
            System.out.println("Registerpopup success.");
        }

    Because that first println() doesn't show, it looks like this method isn't even called at all!

    Edit: After removing all calls on the Player object, the popup works. It seems I'm not "allowed" to access the player from the ScreenController. Unfortunate, since I need information on the player for the popup. Is there a workaround?

    Read the article

  • Oredev 2012: Summary and source code

    - by Laurent Bugnion
    This week, I had the pleasure of being invited to speak at Oredev, a really cool conference taking place in Malmo, Sweden. The whole event is awesome, including a very special dinner on Monday with a sauna and a swim in the 6-degree-cold Baltic Sea, and a reception with dinner at the town hall, with the mayor himself in attendance. Considering Malmo is a town of 300'000 inhabitants, it is a pretty nice occasion, and the historical building itself is really worth seeing. For those interested, I placed my pictures on my Flickr account.
    I gave a workshop on Tuesday morning about Windows 8 development with XAML/C#, and then a session on Wednesday about MVVM in Windows Phone 8 and Windows 8, of course using MVVM Light. I was very nervous because I reworked some of my demos as recently as this morning, in the wake of the Build conference last week and the release of both the Windows Phone SDK and MVVM Light V4.1. Everything went well, however, and judging by the people I talked to after the talk, and by Twitter, it was well received.
    Before my talk on Tuesday, I had the pleasure of seeing a talk by Iris Classon (@irisclasson) on the challenges of being a "n00b" and a woman in software development. I especially appreciated her research and conclusions on the lack of women in our industry, a topic that is dear to my heart (because I want the best possible future for my two daughters, and also because I really enjoy working with women on projects and getting a different insight on the art of software development).
    I really want to thank the excellent organizing committee for their hard work and their fantastic welcome to Malmo. In particular, Emily Holweck did a wonderful job and was super helpful throughout the preparation and the conference itself. I took a few pictures during my stay, all with the new Nokia Lumia 920, and hope you will enjoy them too.
    The source code and the slides…
    The source code is available for download from SkyDrive. You will find the following:
    Windows 8 workshop slides.
    MVVM Applied slides.
    Source code package with:
    Win8Demo: The demo I built during the 4-hour workshop, with some light MVVM, web services (JSON), GridView, design-time data (Blend / Visual Studio designer), Bing Maps integration, the location sensor, and Search pane integration.
    SemanticZoomSample: a sample I put together to demonstrate the SemanticZoom control, with two GridViews and of course full design-time data for Blend work. Due to time constraints, I was not able to show this demo during the workshop, but I am publishing it anyway, hoping it will be useful to someone.
    PictureUploader: The demo I built during my 50-minute session about MVVM Applied in Windows Phone 8 and Windows 8. Code sharing, design-time data, and MVVM Light are used in the Windows Phone 8 and Windows 8 apps.
    And in video…
    You can also see the video of my MVVM talk thanks to the good services of the Oredev team! MVVM Applied in Windows Phone and Windows 8 from Øredev Conference on Vimeo.
    Laurent Bugnion (GalaSoft)

    Read the article

  • ATI Radeon HD7000 Series (Laptop) - Switch Mode Between ATI & Intel Integrated GPU. Stuck on Boot Screen On Intel GPU Selection Mode

    - by Monkey Drone
    Laptop specs: HP Pavilion G6-2020SE
    GPUs: 1) ATI HD7000 series, 2) Intel integrated
    OS installed: Ubuntu 12.04 (64-bit), with the ATI graphics drivers installed from the AMD website.
    Note: the graphics drivers work fine in 3D mode. The laptop runs a little hot, as expected, since the discrete GPU is in use.
    Observation: the AMD Catalyst Control Centre lets me choose whether to run the system on the high-end (ATI) GPU or on the integrated Intel one (for better battery life). While I am on the high-end GPU choice, Ubuntu works fine.
    Problem: when I switch to Intel mode in the AMD CCC and reboot the machine, Ubuntu goes into 'Low Graphics Mode'. The problem is not that it goes into low-graphics mode; that is completely expected, since I am no longer using the ATI GPU but the integrated Intel one. The problem starts with selecting one of the options on that screen: I have no mouse pointer (I even tried plugging in an external USB mouse) and no keyboard input, so I am left unable to choose any option and boot into Ubuntu. The only thing I can do is switch to a terminal and re-enable the ATI GPU from the command line, after which Ubuntu works fine again.
    Is it a bug that no mouse or keyboard is available during Ubuntu's startup when it is launched in low-graphics mode? Any suggestions on how to get past that? My palms are sweating as I write this, because the ATI GPU is really heating up my laptop. I don't want to boot into Windows or keep it around any longer than necessary. Please advise with help and directions. Sincerely, MonkeyD
    Edit 1: The answer by Celso has helped me switch to Intel, giving me decent battery life. Kudos to Celso. Now I can at least use my laptop for the time being without having it burn the hair off my skin. I am still looking for an answer to my original question: why is lightdm not working properly when I switch to the Intel GPU while using the official ATI HD7000-series drivers provided by AMD?

    Read the article

  • Developing a Support Plan for Cloud Applications

    - by BuckWoody
    Last week I blogged about developing a high-availability plan. The specifics of a given plan aren't as simple as "Step 1, then Step 2," because in a hybrid environment (which most of us have) the situation changes the requirements. There are those who look for simple "template" solutions, but unless you settle on a single vendor and a single way of doing things, that's not really viable. The same holds true for support.
    As I've mentioned before, I'm not fond of the term "cloud", and would rather use the term "Distributed Computing". That being said, more people understand the former, so I'll just use that for now. What I mean by Distributed Computing is leveraging another system or setup to perform all or some of a computing function. If this definition holds true, then you're essentially creating a partnership with a vendor to run some of your IT, whether that be IaaS, PaaS or SaaS, or, more often, a mix.
    In your on-premises systems, you're the first and sometimes only line of support. That changes when you bring in a cloud vendor. For Windows Azure, we have support plans that you can pay for if you like: http://www.windowsazure.com/en-us/support/plans/
    You're not off the hook entirely, however. You still need to create a plan to support your users in their applications, especially for the parts you control. The last thing they want to hear is "That's vendor X's problem - you'll have to call them." I find that this is often the last thing architects think about in a solution. It's fine to put off the support question prior to deployment, but I would hold off on calling it "production" until you have that plan in place.
    There are lots of examples, like this one: http://www.va-interactive.com/inbusiness/editorial/sales/ibt/customer.html some of which are technology-specific. Once again, this is an "it depends" kind of approach. While it would be nice if there were just something in a box we could buy, it just doesn't work that way in a hybrid system. You have to know your options and apply them appropriately.

    Read the article
