Search Results

Search found 3159 results on 127 pages for 'tom martin'.

Page 17/127 | < Previous Page | 13 14 15 16 17 18 19 20 21 22 23 24  | Next Page >

  • Silverlight 4, MVVM and Test-Driven Development

    - by Martin Hinshelwood
    As part of his UK tour, Microsoft's Jesse Liberty will be talking in Edinburgh for an evening on Silverlight 4. [Register Now, there are some places left] The Talk: MVVM and Silverlight to build test-driven programs; understanding Refactoring and Dependency Injection; a walk-through of a non-trivial application. The Speaker: Jesse Liberty, Silverlight Geek, is a Developer Community Program Manager for Microsoft (US). Lately he has been focused on component-based, test-driven, cross-platform line-of-business application development, and has led the development of the open source Silverlight HyperVideo Platform. Liberty is the author of over two dozen books, and his blog is a required resource for Silverlight programmers. His twenty years of programming experience include stints as a Distinguished Software Engineer at AT&T, Vice President of Human-Computer Interaction at Citibank, and Software Architect at PBS/Learning Link. The Venue: We are meeting at Microsoft's offices in Edinburgh in Waterloo Place. This is the building on the corner of North Bridge at the east end of Princes Street. Parking can be found at the nearby Greenside Row car park, which is just off Leith Walk (used for the Omni Centre). The venue is approximately 2-3 minutes' walk from Edinburgh Waverley train station. The Agenda: 18:30 Doors open; 19:00 Welcome; 19:10 Part 1; 20:00 Break; 20:10 Part 2; 20:50 Feedback and Prizes; 21:00 End. [Register Now, there are some places left] Technorati Tags: Silverlight,MVVM,TDD

    Read the article

  • Creating a branch for every Sprint

    - by Martin Hinshelwood
    There are a lot of developers using version control these days, but a feature of version control called branching is very poorly understood and remains unused by most developers in favour of Labels. Most developers think that branching is hard and complicated. It's not! What is hard and complicated is a bad branching strategy. Just like a bad software architecture, a bad branch architecture, or one that is not adhered to, can prove fatal to a project. When I was at Aggreko we had a fairly successful Feature branching strategy (although the developers hated it) that meant that we could have multiple feature teams working at the same time without impacting each other. Now, this had to be carefully orchestrated as it was a Business Intelligence team and many of the BI artefacts do not lend themselves to merging. Today at SSW I am working on a Scrum team delivering a product that will be used by many hundreds of developers. SSW SQL Deploy takes much of the pain out of upgrading production databases when you are not using the Database projects in Visual Studio. With Scrum each Scrum Team works for a fixed period of time on a single sprint. You can have one or more Scrum Teams involved in delivering a product, but all the work must be merged and tested, ready to be shown to the Product Owner at the Sprint Review meeting at the end of the current Sprint. So, what does this mean for a branching strategy? We have been using a “Main” (sometimes called “Trunk”) line and doing a branch for each sprint. It’s like Feature Branching, but with only ONE feature in operation at any one time, so there are no conflicts. Figure: DEV folder containing the Development branches. I know that some folks advocate applying a Label at the start of each Sprint and then rolling back if you need to, but I have always preferred the security of a branch.
    Like:
    - Being able to create a release from Main that has Sprint3 code even while Sprint4 is being worked on.
    - Being sure I can always create a stable build on request.
    - Being able to guarantee a version (labels are not auditable).
    - Being able to abandon the sprint without having to delete the code (rare I know, but it would be a mess if it happened).
    - Being able to see the flow of change sets through to a safe release.
    - It helps you find invalid dependencies when merging to Main, as there may be some file that is in everyone’s Sprint branch but never got checked in. (We had this at the merge of Sprint2.)
    - If you are always operating in this way as a standard, it makes it easier to then add more Scrum teams in the future.
    - Muscle memory of this way of working.
    Don’t Like:
    - Additional DB space for the branches.
    - Baseless merging between sprint branches when changes are directly ported. Note: I do not think we will ever attempt this!
    - Maybe a bit tougher to see the history between sprint branches, since the changes go up through Main and down to another sprint branch. Note: what you would have to do is see which Sprint the changes were made in and then check the history of the same file in that Sprint. A little bit of added complexity that you would have to do anyway with multiple teams.
    - Over time, you can end up with a lot of old unused sprint branches. Perhaps destroying with /keephistory can help in this case. Note: we ALWAYS delete the Sprint branch after it has been merged into Main. That is the theory anyway, and as you can see from the images Sprint2 has already been deleted.
Why take the chance of having a problem rolling back or wanting to keep some of the code, when you can just abandon a branch and start a new one? It just seems easier and less painful to use a branch to me! What do you think?   Technorati Tags: TFS,TFS2010,Software Development,ALM,Branching
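    For illustration, here is a minimal sketch of how a sprint branch like this could be opened and retired from the tf.exe command line. The $/Product/DEV paths and the Sprint4 name are placeholders, not the team's actual server paths, and the exact options should be checked against your TFS version:

        tf branch $/Product/DEV/Main $/Product/DEV/Sprint4
        tf checkin /comment:"Open the Sprint4 branch from Main"

        rem ... the team works on the Sprint4 branch for the length of the sprint ...

        tf merge $/Product/DEV/Sprint4 $/Product/DEV/Main /recursive
        tf checkin /comment:"Merge Sprint4 back into Main for the Sprint Review"

        rem once the merge has been verified in Main, retire the old sprint branch
        tf destroy /keephistory $/Product/DEV/Sprint4

    Destroy is irreversible, so it would only be run after the Sprint Review merge has been confirmed in Main.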

    Read the article

  • FMW Workshop Program, May and June 2010

    - by [email protected]
    FMW WORKSHOP PROGRAM, May and June 2010.
    Enterprise 2.0 (Workshop | Date | Location):
    Enterprise 2.0 and Enterprise Social Networks (WebCenter Spaces) | 03/05/10 | Madrid
    Digitisation (IP/M) | 10/05/10 | Madrid
    Document Management and Records Management (UCM/URM) | 11/05/10 | Barcelona
    Web Content Management and Portals (UCM + WC Suite) | 25/05/10 | Barcelona
    Document Management and Records Management (UCM/URM) | 19/05/10 | Madrid
    Web Content Management and Portals (UCM + WC Suite) | 31/05/10 | Madrid
    Service Oriented Architecture (SOA) (Workshop | Date | Location):
    Building Business Models with BPEL 11g | 13/05/10 | Madrid
    Business Process Automation with Oracle BPM | 20/05/10 | Madrid
    Oracle WebLogic | 27/05/10 | Madrid
    SOA Lifecycle Management on an Enterprise Repository | 11/05/10 | Madrid
    High-Performance Application Development with Oracle Coherence | 18/05/10 | Madrid
    Data Integration Platform (ODI) | 25/05/10 | Madrid
    Business Activity Monitoring 11g (BAM) | 13/05/10 | Barcelona
    Register:

    Read the article

  • AS3 Stage3D Mouse click problem?

    - by Martin K
    I have a problem with mouse interaction and Stage3D. The only way I found to register for mouse clicks and interact with Stage3D is to add a mouse eventListener directly to the .stage. However, this means that any time I click anywhere in the Flash application the mouse click fires, even if there is an overlaid 2D menu where the user intended to click. I.e. I have a 3D application running in the background, which listens to clicks, and I have some floating user interface elements in the foreground; ideally, if I clicked a button in the foreground, that would NOT fire a click event that the Stage3D would register. Any idea how to solve this problem?

    Read the article

  • Cannot Unbind Super Key from Unity

    - by Tom Thorogood
    Due to a graphics card compatibility issue using CrunchBang, I was told that my best option would be to move to 12.04 LTS. I'm trying to get everything configured and personalized the way I'm used to, but am having some issues with unbinding default Unity shortcuts. I'm used to having all my shortcuts routed through the super key (T for Terminal, W for Web, Up for increased opacity, and so on). I've followed instructions to install compizconfig-settings-manager, and did an advanced search for all keyboard shortcuts bound to the super key, including the Unity shortcuts, but Unity still seems to listen for that keypress, and thus neither compiz nor the keybindings set up in system prefs - keyboard receive the commands I give them. (I also tried simply changing the Unity launcher key instead of disabling it, as shown below; neither worked.)

    Read the article

  • Sony Vaio Webcam

    - by Martin H
    I have an in-built webcam in my Sony Vaio VGN-FE21M. lsusb shows me the device: Bus 001 Device 003: ID 0ac8:c002 Z-Star Microelectronics Corp. Visual Communication Camera VGP-VCC1, and it is working within Skype most of the time. Sometimes, however, lsusb shows me the exact same output, but trying to test my cam in v4l2ucp I get the error: Unable to open file /dev/video0 No such file or directory. A reboot fixes the problem, but I just can't pinpoint what the difference is between a working and a not-working webcam and the time/instance this occurs. It would probably be a fix if I could unmount and remount the cam, but how can I do this with in-built devices? Any other advice is welcome as well.
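    One possible workaround (a sketch only, and the "1-4" port id is just an example): an internal USB camera can usually be power-cycled in software by unbinding and rebinding it through sysfs instead of physically unplugging it:

        # find the camera's port id (something like "1-4") in the USB device tree
        lsusb -t
        ls /sys/bus/usb/drivers/usb/

        # detach and re-attach the device, which should recreate /dev/video0
        echo '1-4' | sudo tee /sys/bus/usb/drivers/usb/unbind
        echo '1-4' | sudo tee /sys/bus/usb/drivers/usb/bind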

    Read the article

  • ResponseStatusLine protocol violation

    - by Tom Hines
    I parse/scrape a few web page every now and then and recently ran across an error that stated: "The server committed a protocol violation. Section=ResponseStatusLine".   After a few web searches, I found a couple of suggestions – one of which said the problem could be fixed by changing the HttpWebRequest ProtocolVersion to 1.0 with the command: 1: HttpWebRequest req = (HttpWebRequest)HttpWebRequest.Create(strURI); 2: req.ProtocolVersion = HttpVersion.Version10;   …but that did not work in my particular case.   What DID work was the next suggestion I found that suggested the use of the setting: “useUnsafeHeaderParsing” either in the app.config file or programmatically. If added to the app.config, it would be: 1: <!-- after the applicationSettings --> 2: <system.net> 3: <settings> 4: <httpWebRequest useUnsafeHeaderParsing ="true"/> 5: </settings> 6: </system.net>   If done programmatically, it would look like this: C++: 1: // UUHP_CPP.h 2: #pragma once 3: using namespace System; 4: using namespace System::Reflection; 5:   6: namespace UUHP_CPP 7: { 8: public ref class CUUHP_CPP 9: { 10: public: 11: static bool UseUnsafeHeaderParsing(String^% strError) 12: { 13: Assembly^ assembly = Assembly::GetAssembly(System::Net::Configuration::SettingsSection::typeid); //__typeof 14: if (nullptr==assembly) 15: { 16: strError = "Could not access Assembly"; 17: return false; 18: } 19:   20: Type^ type = assembly->GetType("System.Net.Configuration.SettingsSectionInternal"); 21: if (nullptr==type) 22: { 23: strError = "Could not access internal settings"; 24: return false; 25: } 26:   27: Object^ obj = type->InvokeMember("Section", 28: BindingFlags::Static | BindingFlags::GetProperty | BindingFlags::NonPublic, 29: nullptr, nullptr, gcnew array<Object^,1>(0)); 30:   31: if(nullptr == obj) 32: { 33: strError = "Could not invoke Section member"; 34: return false; 35: } 36:   37: FieldInfo^ fi = type->GetField("useUnsafeHeaderParsing", BindingFlags::NonPublic | BindingFlags::Instance); 38: if(nullptr == fi) 39: { 40: strError = "Could not access useUnsafeHeaderParsing field"; 41: return false; 42: } 43:   44: if (!(bool)fi->GetValue(obj)) 45: { 46: fi->SetValue(obj, true); 47: } 48:   49: return true; 50: } 51: }; 52: } C# (CSharp): 1: using System; 2: using System.Reflection; 3:   4: namespace UUHP_CS 5: { 6: public class CUUHP_CS 7: { 8: public static bool UseUnsafeHeaderParsing(ref string strError) 9: { 10: Assembly assembly = Assembly.GetAssembly(typeof(System.Net.Configuration.SettingsSection)); 11: if (null == assembly) 12: { 13: strError = "Could not access Assembly"; 14: return false; 15: } 16:   17: Type type = assembly.GetType("System.Net.Configuration.SettingsSectionInternal"); 18: if (null == type) 19: { 20: strError = "Could not access internal settings"; 21: return false; 22: } 23:   24: object obj = type.InvokeMember("Section", 25: BindingFlags.Static | BindingFlags.GetProperty | BindingFlags.NonPublic, 26: null, null, new object[] { }); 27:   28: if (null == obj) 29: { 30: strError = "Could not invoke Section member"; 31: return false; 32: } 33:   34: // If it's not already set, set it. 
35: FieldInfo fi = type.GetField("useUnsafeHeaderParsing", BindingFlags.NonPublic | BindingFlags.Instance); 36: if (null == fi) 37: { 38: strError = "Could not access useUnsafeHeaderParsing field"; 39: return false; 40: } 41:   42: if (!Convert.ToBoolean(fi.GetValue(obj))) 43: { 44: fi.SetValue(obj, true); 45: } 46:   47: return true; 48: } 49: } 50: }   F# (FSharp): 1: namespace UUHP_FS 2: open System 3: open System.Reflection 4: module CUUHP_FS = 5: let UseUnsafeHeaderParsing(strError : byref<string>) : bool = 6: // 7: let assembly : Assembly = Assembly.GetAssembly(typeof<System.Net.Configuration.SettingsSection>) 8: if (null = assembly) then 9: strError <- "Could not access Assembly" 10: false 11: else 12: 13: let myType : Type = assembly.GetType("System.Net.Configuration.SettingsSectionInternal") 14: if (null = myType) then 15: strError <- "Could not access internal settings" 16: false 17: else 18: 19: let obj : Object = myType.InvokeMember("Section", BindingFlags.Static ||| BindingFlags.GetProperty ||| BindingFlags.NonPublic, null, null, Array.zeroCreate 0) 20: if (null = obj) then 21: strError <- "Could not invoke Section member" 22: false 23: else 24: 25: // If it's not already set, set it. 26: let fi : FieldInfo = myType.GetField("useUnsafeHeaderParsing", BindingFlags.NonPublic ||| BindingFlags.Instance) 27: if(null = fi) then 28: strError <- "Could not access useUnsafeHeaderParsing field" 29: false 30: else 31: 32: if (not(Convert.ToBoolean(fi.GetValue(obj)))) then 33: fi.SetValue(obj, true) 34: 35: // Now return true 36: true VB (Visual Basic): 1: Option Explicit On 2: Option Strict On 3: Imports System 4: Imports System.Reflection 5:   6: Public Class CUUHP_VB 7: Public Shared Function UseUnsafeHeaderParsing(ByRef strError As String) As Boolean 8:   9: Dim assembly As [Assembly] 10: assembly = [assembly].GetAssembly(GetType(System.Net.Configuration.SettingsSection)) 11:   12: If (assembly Is Nothing) Then 13: strError = "Could not access Assembly" 14: Return False 15: End If 16:   17: Dim type As Type 18: type = [assembly].GetType("System.Net.Configuration.SettingsSectionInternal") 19: If (type Is Nothing) Then 20: strError = "Could not access internal settings" 21: Return False 22: End If 23:   24: Dim obj As Object 25: obj = [type].InvokeMember("Section", _ 26: BindingFlags.Static Or BindingFlags.GetProperty Or BindingFlags.NonPublic, _ 27: Nothing, Nothing, New [Object]() {}) 28:   29: If (obj Is Nothing) Then 30: strError = "Could not invoke Section member" 31: Return False 32: End If 33:   34: ' If it's not already set, set it. 35: Dim fi As FieldInfo 36: fi = [type].GetField("useUnsafeHeaderParsing", BindingFlags.NonPublic Or BindingFlags.Instance) 37: If (fi Is Nothing) Then 38: strError = "Could not access useUnsafeHeaderParsing field" 39: Return False 40: End If 41:   42: If (Not Convert.ToBoolean(fi.GetValue(obj))) Then 43: fi.SetValue(obj, True) 44: End If 45:   46: Return True 47: End Function 48: End Class   Technorati Tags: C++,CPP,VB,Visual Basic,F#,FSharp,C#,CSharp,ResponseStatusLine,protocol violation

    Read the article

  • How to enable Unity 3D support in 12.04 using open-source drivers for RadeonHD cards?

    - by martin
    As the title says, I can't enable Unity 3D support when I'm using open-source drivers (xorg-edgers). I have an XFX Radeon HD 6950, by the way. If I install the proprietary 12.3 drivers from AMD it works, but I get poorer 2D performance than with the open-source drivers and I also get some freezes and lock-ups at random. So because of this I'm trying the open-source drivers, and so far no issues at all, except this one. Running this command $ /usr/lib/nux/unity_support_test -p shows this: OpenGL vendor string: VMware, Inc. OpenGL renderer string: Gallium 0.4 on llvmpipe (LLVM 0x300) OpenGL version string: 2.1 Mesa 8.0.2 Not software rendered: no Not blacklisted: yes GLX fbconfig: yes GLX texture from pixmap: yes GL npot or rect textures: yes GL vertex program: yes GL fragment program: yes GL vertex buffer object: yes GL framebuffer object: yes GL version is 1.4+: yes Unity 3D supported: no And this command $ lspci -nn | grep VGA shows: 01:00.0 VGA compatible controller [0300]: Advanced Micro Devices [AMD] nee ATI Cayman PRO [Radeon HD 6950] [1002:6719] So, is this normal? Do I need to go back to proprietary drivers to enable Unity 3D? If anyone can give me help, I'd much appreciate it.
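    For what it's worth, the "Gallium 0.4 on llvmpipe" renderer string above means Mesa has fallen back to software rendering, so the radeon 3D driver is not being picked up at all. A few hedged checks (the package names are the usual Ubuntu 12.04 ones) that may show why:

        glxinfo | grep "OpenGL renderer"      # from the mesa-utils package; should mention CAYMAN, not llvmpipe
        dmesg | grep -i -E "radeon|firmware"  # look for missing firmware or KMS errors
        sudo apt-get install --reinstall libgl1-mesa-dri linux-firmware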

    Read the article

  • When there's no TCO, when to worry about blowing the stack?

    - by Cedric Martin
    Every single time there's a discussion about a new programming language targeting the JVM, there are inevitably people saying things like: "The JVM doesn't support tail-call optimization, so I predict lots of exploding stacks". There are thousands of variations on that theme. Now I know that some languages, like Clojure for example, have a special recur construct that you can use. What I don't understand is: how serious is the lack of tail-call optimization? When should I worry about it? My main source of confusion probably comes from the fact that Java is one of the most successful languages ever, and quite a few of the JVM languages seem to be doing fairly well. How is that possible if the lack of TCO is really of any concern?
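    To make the trade-off concrete, a small Java sketch (illustrative only): the recursive version is written tail-recursively, but without TCO the JVM still pushes a frame per call and will throw StackOverflowError once the input is deep enough, while the hand-converted loop runs in constant stack space. In practice that is why the missing optimisation mostly matters for deeply recursive algorithms and for languages that lean on recursion for iteration.

        public class TailCallDemo {
            // Tail call in shape only: each call still gets its own stack frame on the JVM.
            static long sumRec(long n, long acc) {
                if (n == 0) return acc;
                return sumRec(n - 1, acc + n);
            }

            // The rewrite a tail-call-optimising compiler would do for you: constant stack.
            static long sumLoop(long n) {
                long acc = 0;
                while (n > 0) { acc += n; n--; }
                return acc;
            }

            public static void main(String[] args) {
                System.out.println(sumLoop(1000000));    // fine
                System.out.println(sumRec(1000000, 0));  // likely StackOverflowError with the default stack size
            }
        }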

    Read the article

  • Using a Predicate as a key to a Dictionary

    - by Tom Hines
    I really love Linq and Lambda Expressions in C#.  I also love certain community forums and programming websites like DaniWeb. A user on DaniWeb posted a question about comparing the results of a game that is like poker (5-card stud), but is played with dice. The question stemmed around determining what was the winning hand.  I looked at the question and issued some comments and suggestions toward a potential answer, but I thought it was a neat homework exercise. [A little explanation] I eventually realized not only could I compare the results of the hands (by name) with a certain construct – I could also compare the values of the individual dice with the same construct. That piece of code eventually became a Dictionary with the KEY as a Predicate<int> and the Value a Func<T> that returns a string from the another structure that contains the mapping of an ENUM to a string.  In one instance, that string is the name of the hand and in another instance, it is a string (CSV) representation of of the digits in the hand. An added benefit is that the digits re returned in the order they would be for a proper poker hand.  For instance the hand 1,2,5,3,1 would be returned as ONE_PAIR (1,1,5,3,2). [Getting to the point] 1: using System; 2: using System.Collections.Generic; 3:   4: namespace DicePoker 5: { 6: using KVP_E2S = KeyValuePair<CDicePoker.E_DICE_POKER_HAND_VAL, string>; 7: public partial class CDicePoker 8: { 9: /// <summary> 10: /// Magical construction to determine the winner of given hand Key/Value. 11: /// </summary> 12: private static Dictionary<Predicate<int>, Func<List<KVP_E2S>, string>> 13: map_prd2fn = new Dictionary<Predicate<int>, Func<List<KVP_E2S>, string>> 14: { 15: {new Predicate<int>(i => i.Equals(0)), PlayerTie},//first tie 16:   17: {new Predicate<int>(i => i > 0), 18: (m => string.Format("Player One wins\n1={0}({1})\n2={2}({3})", 19: m[0].Key, m[0].Value, m[1].Key, m[1].Value))}, 20:   21: {new Predicate<int>(i => i < 0), 22: (m => string.Format("Player Two wins\n2={2}({3})\n1={0}({1})", 23: m[0].Key, m[0].Value, m[1].Key, m[1].Value))}, 24:   25: {new Predicate<int>(i => i.Equals(0)), 26: (m => string.Format("Tie({0}) \n1={1}\n2={2}", 27: m[0].Key, m[0].Value, m[1].Value))} 28: }; 29: } 30: } When this is called, the code calls the Invoke method of the predicate to return a bool.  The first on matching true will have its value invoked. 1: private static Func<DICE_HAND, E_DICE_POKER_HAND_VAL> GetHandEval = dh => 2: map_dph2fn[map_dph2fn.Keys.Where(enm2fn => enm2fn(dh)).First()]; After coming up with this process, I realized (with a little modification) it could be called to evaluate the individual values in the dice hand in the event of a tie. 1: private static Func<List<KVP_E2S>, string> PlayerTie = lst_kvp => 2: map_prd2fn.Skip(1) 3: .Where(x => x.Key.Invoke(RenderDigits(dhPlayerOne).CompareTo(RenderDigits(dhPlayerTwo)))) 4: .Select(s => s.Value) 5: .First().Invoke(lst_kvp); After that, I realized I could now create a program completely without “if” statements or “for” loops! 1: static void Main(string[] args) 2: { 3: Dictionary<Predicate<int>, Action<Action<string>>> main = new Dictionary<Predicate<int>, Action<Action<string>>> 4: { 5: {(i => i.Equals(0)), PlayGame}, 6: {(i => true), Usage} 7: }; 8:   9: main[main.Keys.Where(m => m.Invoke(args.Length)).First()].Invoke(Display); 10: } …and there you have it. :) ZIPPED Project

    Read the article

  • Are project managers useful in Scrum?

    - by Martin Wickman
    There are three roles defined in Scrum: Team, Product Owner and Scrum Master. There is no project manager; instead, the project manager's job is spread across the three roles. For instance:
    - The Scrum Master: Responsible for the process. Removes impediments.
    - The Product Owner: Manages and prioritizes the list of work to be done to maximize ROI. Represents all interested parties (customers, stakeholders).
    - The Team: Self-manages its work by estimating and distributing it among themselves. Responsible for meeting its own commitments.
    So in Scrum, there is no longer a single person responsible for project success. There is no command-and-control structure in place. That seems to baffle a lot of people, specifically those not used to agile methods, and of course, PMs. I'm really interested in this and what your experiences are, as I think this is one of the things that can make or break a Scrum implementation. Do you agree with Scrum that a project manager is not needed? Do you think such a role is still required? Why?

    Read the article

  • Problems with subversion (in gnome keyring, maybe), user=null

    - by Tom Brito
    I'm having a problem with my subversion in Ubuntu, and it's happening only on my computer; my colleagues are working fine. It asks for a password for user "(null)": Password for '(null)' GNOME keyring: and after entering the password it shows: svn: OPTIONS of 'http://10.0.203.3/greenfox': authorization failed: Could not authenticate to server: rejected Basic challenge (http://10.0.203.3) What could be causing this? (Again: it's just on my computer; the svn server is ok.)
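    A couple of things that often clear this up (the paths are the Subversion defaults; the username is a placeholder): wipe the locally cached credentials and, if the GNOME keyring integration is the culprit, turn it off so svn falls back to its own auth cache:

        # throw away cached HTTP credentials for the server
        rm -rf ~/.subversion/auth/svn.simple

        # in ~/.subversion/config, under [auth], disable the keyring integration:
        #   password-stores =

        # then retry, forcing the right username
        svn ls http://10.0.203.3/greenfox --username your.username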

    Read the article

  • Free training at Northwest Cadence

    - by Martin Hinshelwood
    Even though I have only been at Northwest Cadence for a short time I have already done so much. What I really wanted to do was let you guys know about a bunch of FREE training that NWC offers. These sessions are at a fantastic time for the UK, as 9am PST (Seattle time) is around 5pm GMT. It's a fantastic way to finish off your Fridays and, with the lack of love for developers in the UK set to continue, I would love some of you guys to get some from the US instead. There are really two offerings. The first is something called Coffee Talks that take you through an hour's worth of detail in a specific category. Coffee Talks These coffee talks have some superb topics and you can get excellent interaction with the presenter as they are kind of informal. (Date | Day | Time | Topic | Register Here)
    01/04/11 | Tuesday | 8:30AM - 9:30AM PST | Real World Business and Technical Benefits of ALM with TFS 2010 | 150656
    01/28/11 | Friday | 9:00AM - 10:00AM PST | The Full Testing Experience: Professional Quality Assurance with Visual Studio 2010 | 152810
    02/11/11 | Friday | 9:00AM - 10:00AM PST | Visual Source Safe to Team Foundation Server | 152844
    02/25/11 | Friday | 2:00PM - 3:00PM PST | The Full Testing Experience: Professional Quality Assurance with Visual Studio 2010 | 152816
    03/11/11 | Friday | 9:00AM - 10:00AM PST | Lab Manager: The Ultimate “No More No Repro” Tool | 152809
    03/25/11 | Friday | 9:00AM - 10:00AM PST | The Full Testing Experience: Professional Quality Assurance with Visual Studio 2010 | 152838
    04/08/11 | Friday | 9:00AM - 10:00AM PST | Visual Source Safe to Team Foundation Server | 152846
    04/22/11 | Friday | 9:00AM - 10:00AM PST | The Full Testing Experience: Professional Quality Assurance with Visual Studio 2010 | 152839
    05/06/11 | Friday | 2:00PM - 3:00PM PST | Real World Business and Technical Benefits of ALM with TFS 2010 | 150657
    05/20/11 | Friday | 9:00AM - 10:00AM PST | The Full Testing Experience: Professional Quality Assurance with Visual Studio 2010 | 152842
    06/03/11 | Friday | 9:00AM - 10:00AM PST | Visual Source Safe to Team Foundation Server | 152847
    06/17/11 | Friday | 9:00AM - 10:00AM PST | The Full Testing Experience: Professional Quality Assurance with Visual Studio 2010 | 152843
    ALM Training Engagement Program Microsoft has released a new program to bring free Visual Studio 2010 Training Sessions to select customers on Microsoft Visual Studio products and how Application Lifecycle Management (ALM) solutions can help drive greater business impact. For more details on this program, please see the process chart below. To get started send an email to us; this training is paid for by Microsoft and you would need to commit to 4 sessions in order to get accepted into the program. So these have more hoops to jump through to get them, but the content is much more formal and centres around adoption.

    Read the article

  • dpkg behaving strangely?

    - by Tom Henderson
    When I use apt to get a package, I have been receiving the same error message. Here is an example trying to install wicd (which is already installed): Reading package lists... Building dependency tree... Reading state information... wicd is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded. 3 not fully installed or removed. After this operation, 0B of additional disk space will be used. Setting up tex-common (2.06) ... debconf: unable to initialize frontend: Dialog debconf: (Dialog frontend requires a screen at least 13 lines tall and 31 columns wide.) debconf: falling back to frontend: Readline Running mktexlsr. This may take some time... done. No packages found matching texlive-base. dpkg: error processing tex-common (--configure): subprocess installed post-installation script returned error exit status 1 dpkg: dependency problems prevent configuration of texlive-binaries: texlive-binaries depends on tex-common (>= 2.00); however: Package tex-common is not configured yet. dpkg: error processing texlive-binaries (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of dvipng: dvipng depends on texlive-base-bin; however: Package texlive-base-bin is not installed. Package texlive-binaries which provides texlive-base-bin is not configured yet. dpkg: error processing dvipng (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. No apport report written because the error message indicates its a followup error from a previous failure. Errors were encountered while processing: tex-common texlive-binaries dvipng E: Sub-process /usr/bin/dpkg returned an error code (1) I'm not sure if this is a problem with apt or with dpkg, but it certainly doesn't look good!
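    The root cause in that log is tex-common's post-installation script failing because texlive-base is missing, which then leaves texlive-binaries and dvipng unconfigured. A hedged sequence that usually untangles this kind of dependency chain:

        sudo apt-get install -f              # let apt repair broken/missing dependencies
        sudo apt-get install texlive-base    # provide the package tex-common is looking for
        sudo dpkg --configure -a             # re-run the pending configuration steps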

    Read the article

  • Upgrading Visual Studio 2010

    - by Martin Hinshelwood
    I have been running Visual Studio 2010 as my main development studio on my development computer since the RC was released. I need to upgrade that to the RTM, but first I need to remove it. Microsoft have done a lot of work to make this easy, and it works. It's as easy as uninstalling from the Control Panel. I have had many previous versions of Visual Studio 2010 on this same computer with no need to rebuild to remove all the bits. Figure: Run the uninstall from the Control Panel to remove Visual Studio 2010 RC. Figure: The uninstall removes everything for you. Figure: A green tick means everything went OK. If you get a red cross, try installing the RTM anyway; it should warn you about what was not uninstalled properly and you can remove it manually. Once you have the VS2010 RC uninstalled, installing should be a breeze. The install for 2010 is much faster than 2008, which could take all day and then some on slower computers. This takes around 20 minutes even on my small laptop. I always do a full install as, although I have to do C#, I sometimes get to use a proper programming language: VB.NET. Seriously, there is nothing worse than trying to open a project and finding the other developer has used something you don't have. It's not their fault. It's yours! Save yourself the angst and install fully; it's only 5.9GB. Figure: I always select all of the options. Now go forth and develop! Preferably in VB.NET…   Technorati Tags: Visual Studio,VS2010,VS 2010

    Read the article

  • Legal responsibility for embedding code

    - by Tom Gullen
    On our website we have an HTML5 arcade. Each game has an "embed this game on your website" copy + paste code box. We've done the approval process for games as strictly and safely as possible; we don't actually think it is possible to have any malicious code in the games. However, we are aware that there's a bunch of people out there smarter than us who might be able to exploit it. For webmasters wanting to copy + paste our games on their websites, we want to warn them that they are doing it at their own risk - but could we be held responsible if, for instance, a malicious game was hosted on an important website and it stole their users' credentials and caused them damage? I'm wondering if having an HTML comment in the copy + paste code saying "Use at your own risk" is sufficient.

    Read the article

  • A Multi-Channel Contact Center Can Reduce Total Cost of Ownership

    - by Tom Floodeen
    In order to remain competitive in today’s market, CRM customers need to provide a feature-rich, superior call center experience to their customers across all communication channels while improving their service agent productivity. They also require their call center to be deeply integrated with their CRM system, and they need to implement all this quickly, seamlessly, and without breaking the bank. Oracle’s Siebel Customer Relationship Management (CRM) is the world’s leading application suite for automated customer-facing operations for Sales and Marketing and for managing all aspects of providing service to customers. Oracle’s Contact On Demand (COD) is a world-class, carrier-grade, hosted multi-channel contact center solution that can be deployed in days without up-front capital expenditures or integration costs. Agents can work efficiently from anywhere in the world with 360-degree views into customer interactions and real-time business intelligence. Customers gain from rapid and personalized sales and service, while organizations can dramatically reduce costs and increase revenues. Oracle’s latest update of Siebel CRM now comes pre-integrated with Oracle’s Contact On Demand. This solution seamlessly runs a fully-functional contact center provided by a single vendor, significantly reducing your total cost of ownership. This solution supports Siebel 7.8 and higher for Voice and Siebel 8.1 and higher for Voice and Siebel CRM Chat. The impressive feature list of Oracle’s COD solution includes a full-control CTI toolbar with Voice, Chat, and Click to Dial features. It also includes context-sensitive screens, automated desktops, built-in IVR, multidimensional routing, supervisor and quality monitoring, and instant provisioning. The solution also ships with an Extensible Web Services interface for implementing more complex business processes. Click here to learn how to reduce complexity and total cost of ownership of your contact center. Contact Ann Singh at [email protected] for additional information.

    Read the article

  • Is there a language between C and C++?

    - by Robert Martin
    I really like the simple and transparent nature of C: when I write C code I feel unencumbered by "leaky abstractions" and can almost always make a shrewd guess as to the assembly I'm producing. I also like the simple, familiar syntax for C. However, C doesn't have these simple, helpful doodads that C++ offers like classes, simplified non-cstring handling, etc. I know that it's all possible to implement in C using jump tables and the like, but that's a bit wordy at times, and not very type-safe for various reasons. I'm not a fan of the over-emphasis on objects in C++, though, and I'm gun shy of the 'new' operator and the like. C++ seems to have just a few too many hiccups to, for instance, be used as a system programming language. Does there exist a language that sits between C and C++ on the scale of widgets and doodads? Disclaimer: I mean this as purely a factual question. I do not intend to anger you because I don't share your view that C{,++} is good enough to do whatever I'm planning.

    Read the article

  • Cuda vs OpenCL - opinions

    - by Martin Beckett
    Interested in people's opinions of CUDA vs OpenCL following NVIDIA's CUDA 4 release. I had originally gone with OpenCL since cross-platform, open standards are a good thing(tm). I assumed NVIDIA would fall into line as they had done with OpenGL. But having talked to some NVIDIA people, they (naturally) claim that they will concentrate on CUDA, and that OpenCL is hampered by having committees and having to please everyone - like OpenGL. And with the new tools and libs in CUDA it's hard to argue with that. I'm in a fairly technical market so I can require the users to have particular HW.

    Read the article

  • How to ignore certain coding standard errors in PHP CodeSniffer

    - by Tom
    We have a PHP 5 web application and we're currently evaluating PHP CodeSniffer in order to decide whether forcing code standards improves code quality without causing too much of a headache. If it seems good we will add an SVN pre-commit hook to ensure all new files committed on the dev branch are free from coding standard smells. Is there a way to configure PHP CodeSniffer to ignore a particular type of error, or to get it to treat a certain error as a warning instead? Here is an example to demonstrate the issue: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> </head> <body> <div> <?php echo getTabContent('Programming', 1, $numX, $numY); if (isset($msg)) { echo $msg; } ?> </div> </body> </html> And this is the output of PHP_CodeSniffer:
        > phpcs test.php
        --------------------------------------------------------------------------------
        FOUND 2 ERROR(S) AND 1 WARNING(S) AFFECTING 3 LINE(S)
        --------------------------------------------------------------------------------
         1 | WARNING | Line exceeds 85 characters; contains 121 characters
         9 | ERROR   | Missing file doc comment
        11 | ERROR   | Line indented incorrectly; expected 0 spaces, found 4
        --------------------------------------------------------------------------------
    I have an issue with the "Line indented incorrectly" error. I guess it happens because I am mixing the PHP indentation with the HTML indentation. But this makes it more readable, doesn't it? (Taking into account that I don't have the resources to move to an MVC framework right now.) So I'd like to ignore it please.
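    Assuming the default PEAR standard is producing those messages (run phpcs -s to see the exact sniff code behind each one), newer PHP_CodeSniffer releases let you point --standard at a custom ruleset.xml that excludes or relaxes individual sniffs. A sketch, with the sniff names to be confirmed against your own -s output:

        <?xml version="1.0"?>
        <ruleset name="OurStandard">
            <description>PEAR, minus the indentation check that fights inline HTML.</description>
            <rule ref="PEAR">
                <!-- drop the "Line indented incorrectly" sniff entirely -->
                <exclude name="PEAR.WhiteSpace.ScopeIndent"/>
            </rule>
            <!-- keep the long-line check but relax the limit -->
            <rule ref="Generic.Files.LineLength">
                <properties>
                    <property name="lineLimit" value="120"/>
                </properties>
            </rule>
        </ruleset>

    The pre-commit hook would then run phpcs --standard=/path/to/ruleset.xml on each committed file.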

    Read the article

  • Guidance: How to lay out your files for an Ideal Solution

    - by Martin Hinshelwood
    Creating a solution and having it maintainable over time is an art and not a science. I like being pedantic and having a place for everything, no matter how small. For setting up the Areas to run multiple projects under one solution see my post on When should I use Areas in TFS instead of Team Projects, and for an explanation of branching see Guidance: A Branching strategy for Scrum Teams. Update 17th May 2010 – We are currently trialling running a single Sprint branch to improve our history. Whenever I set up a new Team Project I implement the basic version control structure. I put “readme.txt” files in the folder structure explaining the different levels, and a solution file called “[Client].[Product].sln” located at “$/[Client]/[Product]/DEV/Main” within version control. Developers should add any projects they need to create to that solution in the format “[Client].[Product].[ProductArea].[Assembly]” and they will be picked up and built automatically when you set up Automated Builds using Team Foundation Build. All test projects need to be done using MSTest to get proper IDE and Team Foundation Build integration out-of-the-box, and be named for the assembly they are testing, with a naming convention of “[Client].[Product].[ProductArea].[Assembly].Tests”. Here is a description of the folder layout; this content should be replicated in readme files under version control in the relevant locations so that even developers new to the project can see how to do it. Figure: The Team Project level - at this level there should be a folder for each of the products that you are building if you are using Areas correctly in TFS 2010. You should try very hard to avoid spaces as these things always end up in a URL eventually e.g. "Code Auditor" should be "CodeAuditor". Figure: Product Level - At this level there should be only 3 folders (DEV, RELEASE and SAFE), all of which should be in capitals. These folders represent the three stages of your application production line. Each of them may contain multiple branches, but this format leaves all of your branches at the same level. Figure: The DEV folder is where all of the Development branches reside. The DEV folder will contain the "Main" branch and all feature branches if they are being used. The DEV designation specifies that all code in every branch under this folder has not been released or made ready for release. And feature branches MUST merge (Forward Integrate) from Main and stabilise prior to merging (Reverse Integration) back down into Main and being decommissioned. Figure: In the Feature branching scenario only merges are allowed onto Main, no development can be done there. Once we have a mature product it is important that new features being developed in parallel are kept separate. This would most likely be used if we had more than one Scrum team working on a single product. Figure: When we are ready to do a release of our software we will create a release branch that is then stabilised prior to deployment. This protects the serviceability of our released code, allowing developers to fix bugs and re-release an existing version. Figure: All bugs found on a release are fixed on the release. All bugs found in a release are fixed on the release and a new deployment is created. After the deployment is created the bug fixes are then merged (Reverse Integration) into the Main branch. We do this so that we separate out our development from our production-ready code. Figure: SAFE or RTM is a read-only record of what you actually released. 
Labels are not immutable so are useless in this circumstance. When we have completed stabilisation of the release branch and we are ready to deploy to production, we create a read-only copy of the code for reference. In some cases this could be a regulatory concern, but in most cases it protects the company building the product from legal entanglements based on what you did or did not release. Figure: This allows us to reference any particular version of our application that was ever shipped. In addition I am an advocate of having a single solution with all the Project folders directly under the “Trunk”/“Main” folder and using the full name for the project folders. Figure: The ideal solution. If you must have multiple solutions, because you need to use more than one version of Visual Studio, name the solutions “[Client].[Product][VSVersion].sln” and have them reside in the same folder as the other solution. This makes it easier for Automated Builds and improves the discoverability of your code and its dependencies. Send me your feedback!   Technorati Tags: VS ALM,VSTS Developing,VS 2010,VS 2008,TFS 2010,TFS 2008,TFBS
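    Pulling those rules together, the version control tree ends up looking something like this (the [Client]/[Product] placeholders are the post's own convention; the branch names under RELEASE and SAFE are only examples):

        $/[Client]/[Product]
            readme.txt
            DEV
                Main
                    [Client].[Product].sln
                    [Client].[Product].[ProductArea].[Assembly]
                    [Client].[Product].[ProductArea].[Assembly].Tests
                Feature-X          (optional feature branches; forward integrate from Main, reverse integrate back)
            RELEASE
                Release-1.0        (stabilise here; merge bug fixes back down to Main)
            SAFE
                Release-1.0        (read-only record of exactly what shipped)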

    Read the article

  • Adventures in Scrum: Lesson 1 – The failed Sprint

    - by Martin Hinshelwood
    I recently had a conversation with a product owner who wanted to have the Scrum team broken up into smaller units so that less time was wasted on the Scrum Ceremonies! Their complaint was around the need in Scrum to have the entire “Team” (7+-2) involved in the sizing of the work during the “Sprint Planning Meeting”. The standard flippant answer of all Scrum professionals, “Well that's not Scrum”, does not get you any brownie points in these situations. The response could be “Well we are not doing Scrum then”, which in turn leads to “We are doing Scrum…But, we have split the scrum team into units of 2/3 so that they can concentrate on a specific area of work”. While this may work, it is not Scrum and should not be called so… It is just a form of Agile. Don’t get me wrong at this stage, there is nothing wrong with Agile, just don’t call it Scrum. The reason that the Product Owner wants to do this is that, in effect, through a number of miscommunications and failings in our implementation of Scrum, there was NO unit of potentially Shippable software at the end of the first sprint. It does not matter to them that most Scrum teams will fail the first Sprint, even those that are high performing teams. Remember, it is the product owner’s money! We should NOT break up scrum teams into smaller units for the purpose of having fewer people tied up in the Scrum Ceremonies. The amount of backlog the Team selects is solely up to the Team… Only the Team can assess what it can accomplish over the upcoming Sprint. - Scrum Guide, Scrum.org The entire team must accept the work, and in order to understand what they can accept they must be free to size it as a team. This both encourages common understanding and increases visibility on why team members think a task is of a particular size. This has the benefit of increasing the knowledge of the entire team in the problem domain. A new Team often first realizes that it will either sink or swim as a Team, not individually, in this meeting. The Team realizes that it must rely on itself. As it realizes this, it starts to self-organize to take on the characteristics and behaviour of a real Team. - Scrum Guide, Scrum.org This paragraph goes to the why of having the whole team at the meeting; the goal of Scrum is to produce a unit of potentially shippable software at the end of every Sprint. In order to achieve this we need high performing teams, and this is what Scrum as a framework has been optimised to produce. I think that our Product Owner is understandably upset over losing two weeks' work and is losing sight of the end goal of Scrum in the failures of the moment. As the man spending the money, I completely understand his perspective, and I think that we should not have started Scrum on an internal project, but selected a customer that is open to the ideas and complications of Scrum. So, what should we have NOT done on our first Scrum project?
    - Should not have had 3 interns as the only on-site resource – This led to bad practices as the experienced guys were not there helping and correcting as they usually would.
    - Should not have had the only experienced guys offsite – With both the experienced technical guys in completely different time zones it was difficult to get time for questions. Helping the guys on site was just plain impossible.
    - Should not have used a part-time ScrumMaster – Although the ScrumMaster attended all of the Ceremonies, because they are only in 2 full days of the week it makes it difficult for the team to raise impediments as they go.
    - Should not have used a proxy product owner – This was probably the worst decision that was made, mainly because the proxy product owner did not have the same vision as the product owner. While Scrum does not explicitly reject the idea of a Proxy Product Owner, I do not think it works very well in practice. The “single wringable neck” needs to contain both the Money and the Vision, as well as attending the required meetings.
    I will be bringing all of these things up at the Sprint Retrospective and we will learn from our mistakes and move on. Do, Inspect then Adapt…   Technorati Tags: Scrum,Sprint Planing,Sprint Retrospective,Scrum.org,Scrum Guide,Scrum Ceremonies,Scrummaster,Product Owner Need Help? Professional Scrum Developer Training SSW has six Professional Scrum Developer Trainers who specialise in training your developers in implementing Scrum with Microsoft's Visual Studio ALM tools.

    Read the article

  • Google not indexing new forum

    - by Tom Gullen
    We installed a new forum a few months ago. The URL is: https://www.scirra.com/forum I've 301'd the old topics/threads, as well as included all the new URLs in the sitemap. Yet they are still not appearing. Webmaster Tools is showing: 139,512 URLs submitted, 50,544 URLs indexed, and it has been stuck there for quite some time. There has also been a massive drop in indexed pages since we updated the forum. Any help much appreciated.

    Read the article

  • SSW Scrum Rule: Do you know to use clear task descriptions?

    - by Martin Hinshelwood
    When you create tasks in Scrum you are doing this within a time box, and you tend to add only the information you need to remember what the task is. And the entire Team was at the meeting and was involved in the discussions around the task, so why do you need more? Once you have accepted a task you should then add as much information as possible so that anyone can pick up that task; what if your numbers come up? Will you be in to work the next day? Figure: What if your numbers come up in the lottery? What if the Team runs a syndicate and all your numbers come up? The point is that anything can happen and you need to protect the integrity of the project, the company and the Customer. Add as much information to the task as you think is necessary for anyone to work on the task. If you need to add rich text and images you can do this by attaching an email to the task. Figure: Bad example, there is not enough information for a non-team member to complete this task. Figure: Julie provided a lot more information and another team should be able to pick this up. This has been published as Do you know to ensure that relevant emails are attached to tasks in our Rules to Better Scrum using TFS.   Technorati Tags: Scrum,SSW Rules,TFS 2010

    Read the article

  • Privacy Protection in Oracle IRM 11g

    - by martin.abrahams
    Another innovation in Oracle IRM 11g is an in-built privacy policy challenge. By design, one of the many things that Oracle IRM does, of course, is collect audit information about how and where sealed documents are being used - user names, machine identifiers and so on. Many customers consider that this has privacy implications that the user should be invited to accept as a condition of service use - for the protection of both the user and the service from avoidable controversy. So, in 11g IRM, when a new user connects to a server for the first time, they can expect to see the following privacy policy dialog. The dialog provides a configurable URL that the customer can use to publish the privacy policy for their IRM service. The policy might clarify what data is being collected and stored, what use that data might be put to, and so on, as required by the service owner's legal advisers. In previous releases you could construct an equivalent capability, and some customers did, but this innovation makes it much easier to do - you simply write a privacy policy and publish it as a web page for which the dialog automatically provides a link. This is another example of how Oracle IRM anticipates not just the security requirements of a customer, but also the broader requirements of service provisioning.

    Read the article
