Search Results

Search found 41038 results on 1642 pages for 'personal software process'.

Page 69/1642 | < Previous Page | 65 66 67 68 69 70 71 72 73 74 75 76  | Next Page >

  • Visual Studio Continued Excitement

    - by Daniel Moth
    As you know Visual Studio 2012 RTM’d and then became available in August (Soma’s blog posts told you that here and here), and the VS2012 launch was earlier this month (Soma also told you that here). Every time a release of Visual Studio takes place I am very excited, since this has been my development tool of choice for almost my entire career (from many years before I joined Microsoft). I am doubly excited with this release since it is the second version of Visual Studio that I have worked on and contributed major features to, now that I’ve been in the developer division (DevDiv) for over 4 years. Additionally, I am very excited about the new era that VS2012 starts with VSUpdate for continued customer value: instead of waiting for the next major version of VS to get new features, there is new infrastructure to enable friction-free updates. The first update will ship before the end of this year, and you can read more about it at Brian’s blog post. I also noticed that a CTP of the first quarterly update is available to download here. In the last two months, the VS2012 family of products we all worked on in DevDiv shipped, coinciding with the end of the Microsoft financial/review year, and naturally followed by a couple of organizational changes (e.g. see Jason’s blog post)… On a personal level, this meant that I was very lucky to have an opportunity present itself to me that I simply could not turn down, so I grabbed it! I’ll still be working on Visual Studio, but not in the Parallel Computing part of the C++ team; instead I will be PM-leading the VS Diagnostics team… Stay tuned for details of what is coming in that space, in the VS updates and in the next major VS release, as I am able to share them… Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Oracle Java Embedded Client 1.1 Released

    - by Roger Brinkley
    Yesterday an update release of Oracle Java Embedded Client (OJEC) 1.1 quietly slipped out the door for general availability. Until last year it was pretty difficult to get your hands on either a Connected Limited Device Configuration (CLDC) implementation for small devices or a Connected Device Configuration (CDC) implementation for medium devices without a substantial initial commitment. But with the release of OJWC (CLDC) and OJEC (CDC) last year that has changed. OJEC 1.1 is a binary distribution designed for installation on medium configurations: a mid-range processor requiring a low startup time, seamless upgrades, and a cost-sensitive hardware environment, anywhere from 3.5 MB to 8 MB. There are headless as well as headed versions available. It is intended for devices such as Blu-ray Disc players, set-top boxes, residential gateways, VoIP phones, and similar. From a software point of view, OJEC is the Java runtime platform implementation of Connected Device Configuration (CDC v1.1, JSR-218), Foundation Profile (FP v1.1, JSR-219), and Personal Basis Profile (PBP v1.1, JSR-217), and includes the optional packages RMI (JSR 66), JDBC (JSR 169), XML API for Java ME (JSR 280), and Java TV (JSR-927). New to this release are support for the XML API (JSR 280) and a number of bug fixes and performance enhancements, including improved Just-In-Time (JIT) compilation for the x86 chipset architecture. The platforms supported include ARMv5, ARMv6/ARMv7, MIPS 32 74K, and x86 in headless mode. For embedded developers there are a number of advantages to using Java, and if you have shied away from the Java ME edition in the past I would encourage you to look into the updated version of OJEC 1.1.

    Read the article

  • When is it ever ok to write your own development tools? (editor into IDE)

    - by mario
    So I'm foremost using a text editor for coding. It's a very bare-bones editor; it provides mostly just syntax highlighting. But on rare occasions I also need to debug something, and that's when I have to resort to an IDE (mostly NetBeans, but I got a fiddly Eclipse/Aptana setup working as a second fallback). For general use, however, IDEs don't feel workable to me. It's a visual thing, being used to console UIs etc. And switching back and forth between a text editor and an IDE is slightly cumbersome too. That's why I'm considering extending the editor, not really into a full-fledged IDE, but at the very least integrating a debug feature. Since I'm working in PHP, it doesn't seem like that much effort. The DBGp protocol allows the debug handler to be externalized from the editor, so it's just minor integration work and figuring out how to shoehorn a breakpoint feature into the editor (joe, btw). And while I've also got time to do that, I'm wondering if this is really worthwhile. In this case it's not a needed development tool; it's just for convenience. And the cause for doing it is basically just not liking the existing solution. While over time I might extend and adapt this debugger thing, it will initially be as circumstantial as Eclipse. It inevitably starts out as a poor development tool. Furthermore there is likely not much reuse. (Okay, this is not an important point. Most such software exists sans much of a use case. And also obviously, similar extensions already exist for emacs and vim, so it cannot be completely pointless.) But what's a general guideline on attempting to concoct custom development tools, particularly if they are not really needed but satisfy personal preferences? (Usability enhancement not certain.)
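    To make the "externalized debug handler" idea concrete, here is a minimal sketch (my own, not from the question) of the editor side of a DBGp session. It assumes the PHP engine (e.g. Xdebug) is configured to connect back to the editor on port 9000, the historical default, and it only prints the engine's init packet and then lets the script run; a real integration would map this onto the editor's breakpoint commands.

      # Minimal DBGp listener sketch. DBGp frames engine messages as
      # "<length> NUL <xml payload> NUL"; commands from our side are
      # NUL-terminated text lines. Port and behaviour are assumptions.
      import socket

      def read_packet(conn):
          buf = b""
          while not buf.endswith(b"\x00"):      # read the ASCII length prefix
              buf += conn.recv(1)
          length = int(buf[:-1])
          data = b""
          while len(data) < length + 1:         # payload plus its trailing NUL
              data += conn.recv(length + 1 - len(data))
          return data[:-1].decode("utf-8")

      srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
      srv.bind(("127.0.0.1", 9000))
      srv.listen(1)
      conn, _ = srv.accept()
      print(read_packet(conn))                  # <init ...> packet from the engine
      conn.sendall(b"run -i 1\x00")             # resume execution
      print(read_packet(conn))                  # <response command="run" ...>
      conn.close()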

    Read the article

  • Please list here your deliberate practices in software development...

    - by JDelage
    What are your deliberate practices in relation to your work as a software developer / professional, or as a CS student? Deliberate practices are exercises and repetitions targeted specifically at an individual's weak points and meant to consistently stretch and grow someone's ability. The concept was described in this Anders Ericsson paper. To qualify as deliberate practice, the exercise must satisfy the following:
      - Is not inherently enjoyable.
      - Is not play or paid practice.
      - Is relevant to the skill being developed.
      - Is not simply watching the skill being performed.
      - Requires effort and attention from the learner.
      - Often involves activities selected by a coach or teacher to facilitate learning.
    Please answer with one practice per answer. I'll seed the question with one possible answer.

    Read the article

  • Using CheckPoint SNX with RSA SecurID Software Token to connect to VPN

    - by Vinnie
    I have a fairly specific issue that I'm hoping someone else out in the community has tackled with success. My company uses CheckPoint VPN clients on Windows XP machines with RSA SecurID software to generate the tokens. The beauty is that once you generate a token code in the software, you can enter it on any machine trying to connect via VPN and, with your username, get connected. So, I've got Ubuntu 10.10 32-bit on a tower and formerly on a laptop. Through several posts around the web, I was able to get SNX installed on the laptop, plug in my server connection information and be asked for a password, only to have the connection fail. I used debug mode and was able to see that the application was trying and failing to write a registry value, but I believe that to be a symptom of a different issue, even though I tried to find a way to remedy that. I'm wondering if anyone out there is on a similar configuration and was able to connect with SNX using an RSA token? If so, what steps did you take to set it up, and what problems/solutions did you encounter?

    Read the article

  • License Requirements for Including Dual-Licensed Open-Source Software

    - by Rick Roth
    How do you opt into one software license and not the other when the distributor gives the consumer more than one choice? For example, I would like to use the DataTables JavaScript library in my web application. According to their web site, "DataTables is dual licensed under the GPL v2 license or a BSD (3-point) license." Furthermore, the source code of the JavaScript library has this text that calls out both licenses:

      /**
       * @summary DataTables
       * @description Paginate, search and sort HTML tables
       * @version 1.9.4
       * @file jquery.dataTables.js
       * @author Allan Jardine (www.sprymedia.co.uk)
       * @contact www.sprymedia.co.uk/contact
       *
       * @copyright Copyright 2008-2012 Allan Jardine, all rights reserved.
       *
       * This source file is free software, under either the GPL v2 license or a
       * BSD style license, available at:
       *   http://datatables.net/license_gpl2
       *   http://datatables.net/license_bsd
       *
       * This source file is distributed in the hope that it will be useful, but
       * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
       * or FITNESS FOR A PARTICULAR PURPOSE. See the license files for details.
       *
       * For details please refer to: http://www.datatables.net
       */

    Finally, the web pages with the licensing text (e.g. the DataTables BSD license page) have this statement: "DataTables is made available under both the GPL v2 license and a BSD (3-point) style license. You can select which one you wish to use the DataTables code under." My specific question is "how do you select which one you want to use." In my case, I want to use only the BSD license and I want to make it explicitly clear that I do not opt into the GPL v2 license in any way. How do you do that and have it hold up to legal challenge?

    Read the article

  • Post build events using ROBOCOPY instead of XCOPY

    - by Vizioz Limited
    I don't know about you, but for a long time I have used XCOPY statements in my Visual Studio post-build events to copy my Umbraco files from the project folders to the local version of the website associated with the project. For the last few months we have been building a website framework for a client, who has subsequently sold the site to 5 clients, each with a different skin and some variations in their functional requirements. So, we now have a single-source solution that builds and copies the site files into 5 separate local websites, which enables us to easily test them all. What we had found was that this process was starting to slow down our build and was reaching 30-45 seconds on a high-spec quad-core machine (and slower on others). Today I asked Colin to create separate Solution Configurations within Visual Studio so that while we were developing we could target a single site, and when we wanted to test all sites, we could target "ALL" and the post-build script would then copy the files to all sites. This worked well, and with a couple of other optimisations, our build was now taking about 10 seconds for a single site. Then Colin came across ROBOCOPY and suggested that maybe this would be a suitable alternative to XCOPY. Well, I had not heard of it.. (shock horror some of you shout; some, I am sure like me, are also wondering what it is!) ROBOCOPY is new in Windows Vista & Windows 7 (you can also download it for XP & Windows 2003) and it has a lot of additional features; the two that were most interesting to us were:
      /MIR = Mirror a folder tree
      /XD = Exclude Directories
      /NP = No Progress (i.e. it does not give you a chart of its results, which just fills up your Output window!)
    So, we set about implementing ROBOCOPY. We decided to use the /MIR switch on all folders that we knew were always stored in our project folders:
      - images
      - css
      - masterpages
      - xslt
    And for other files we just used the straight ROBOCOPY functionality. We also decided to exclude all the .SVN directories using the /XD switch, and finally we added the /NP switch as mentioned above. The beauty of all of this is the /MIR functionality, as it means that only files that have changed will be copied across, which greatly speeds up the process, especially on the images folders which previously copied across on every build. Now, if we add a new image to the project it will be copied across automatically and then never again, unless we change it of course! The build time now for all sites is approximately 4 seconds, and for a single site, 2 seconds. I would highly recommend taking the time to make the same optimisations to your build processes if you have not done so already.
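    For anyone who would rather drive the same copy from a script than from a post-build event, here is a minimal sketch (mine, not from the post) that invokes ROBOCOPY with the /MIR, /XD and /NP switches described above; the source and site paths are made-up examples, and it assumes ROBOCOPY's convention that exit codes below 8 mean success.

      # Mirror the project folders to several local site folders with robocopy,
      # using the /MIR, /XD and /NP switches described above. Paths are examples.
      import subprocess
      import sys

      SOURCE = r"C:\Projects\SiteFramework\Web"           # hypothetical project folder
      SITES = [r"C:\Sites\ClientA", r"C:\Sites\ClientB"]  # hypothetical target sites

      for site in SITES:
          result = subprocess.run([
              "robocopy", SOURCE, site,
              "/MIR",          # mirror the tree: copy changes, delete removed files
              "/XD", ".svn",   # skip Subversion metadata directories
              "/NP",           # no per-file progress output
          ])
          # robocopy exit codes 0-7 mean success (0 = nothing to copy); 8+ is an error
          if result.returncode >= 8:
              sys.exit("robocopy failed for %s (exit code %d)" % (site, result.returncode))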

    Read the article

  • make-like build tools for data?

    - by miku
    Make is a standard tool for building software. But make decides whether a target needs to be regenerated by comparing file modification times. Are there any proven, preferably small tools that handle builds not for software but for data? Something that regenerates targets not only on mod times but on certain other properties (e.g. completeness). (Or alternatively some paper that describes such a tool.) As an illustration, I'd like to automate the following process:
      - get data (e.g. a tarball) from some regularly updated source
      - copy it somewhere if it's not there (based e.g. on some filename scheme)
      - convert the files to a different format (but only if there aren't successfully converted ones there, e.g. from a previous attempt - custom comparison routine)
      - for each file, find a certain data element and fetch some additional file from, say, a URL, but only if that hasn't been downloaded yet (decide on existence of the file and file "freshness")
      - finally compute something (e.g. a word count for something identifiable, and store it in the database, but only if the DB does not have an entry for that exact ID yet)
    Observations:
      - there are different stages
      - each stage is usually simple to compute or implement in isolation
      - each stage may be simple, but the data volume may be large
      - each stage may produce a few errors
      - each stage may have different signals on when (re)processing is needed
    Requirements:
      - builds should be interruptible and idempotent (== robust)
      - when interrupted, already processed objects should be reused to speed up the next run
      - data paths should be easy to adjust (simple syntax, nothing new to learn, an internal DSL would be ok)
      - some form of dependency graph that describes the process would be nice for later visualizations
      - should leverage existing programs, if possible
    I've done some research on make alternatives like rake and have worked a lot with ant and maven in the past. All these tools naturally focus on code and software builds, not on data builds. A system we have in place now for a task similar to the above is pretty much just shell scripts, which are compact (and are an ok glue for a variety of other programs written in other languages), so I wonder if worse is better?
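    To sketch what such a data-build runner could look like, here is a minimal example of mine (not from the question): each stage declares its own "is this already done?" predicate, so a target can be skipped based on completeness or database state rather than modification times. The stage names, URL and commands are invented.

      # Tiny data-build runner: every stage carries its own up-to-date check, so
      # re-running it is decided by completeness rather than file mtimes.
      import os
      import subprocess
      from dataclasses import dataclass, field
      from typing import Callable, List

      @dataclass
      class Stage:
          name: str
          is_done: Callable[[], bool]      # custom "target is current" predicate
          run: Callable[[], None]          # does the work; should be idempotent
          deps: List[str] = field(default_factory=list)

      def build(stages):
          by_name = {s.name: s for s in stages}
          visited = set()
          def visit(name):
              if name in visited:
                  return
              stage = by_name[name]
              for dep in stage.deps:
                  visit(dep)               # build prerequisites first
              if not stage.is_done():      # skip work that is already complete
                  print("running", name)
                  stage.run()
              visited.add(name)
          for s in stages:
              visit(s.name)

      # Invented two-stage example: fetch a tarball only if it is missing, then
      # "convert" it (here just an extraction, standing in for a real conversion)
      # only if the result is absent or empty.
      stages = [
          Stage("fetch",
                is_done=lambda: os.path.exists("data.tar.gz"),
                run=lambda: subprocess.run(
                    ["curl", "-sLo", "data.tar.gz", "http://example.org/data.tar.gz"],
                    check=True)),
          Stage("convert",
                is_done=lambda: os.path.exists("data.csv") and os.path.getsize("data.csv") > 0,
                run=lambda: subprocess.run(["tar", "xzf", "data.tar.gz"], check=True),
                deps=["fetch"]),
      ]
      build(stages)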

    Read the article

  • tfs 2010 RC Agile Process template update New Task progress report

    Maybe my next post will just be about why I am so excited and impressed with the out-of-the-box templates. But, for this first blog with my new focus, I thought I would just walk through the process I went through to create a task progress report (to enhance the out-of-the-box Agile template). So, I started with the MSF for Agile Development 5.0 RC template. After reviewing the template, I came away pretty excited about many of the new reports. I am especially excited about the Reporting Services reports. The big advantage I see here is that these query the Warehouse directly instead of the Analysis Services cube, which means that they are much closer to real-time, which I find very important for reports like Burndown and task status. One report that I focused on right away was the User Story Progress Report. An overview is shown below. This report is very useful, but a lot of our internal managers really prefer to manage at the task level and either don't have stories in TFS or would like to view this type of report for tasks in addition to the User Stories. So, what did I do?

    Step 1: Download the Agile template. In VS 2010 RC, open Process Template Manager from Team->Team Project Collection Settings. Download the MSF for Agile Development template to your local file system. A project template is a folder of XML files. There is a ProcessTemplate.xml in the root and then a bunch of directories for things like Work Item Definitions and Queries, Reports, Shared Documents and Source Control Settings.

    Step 2: Copy the folder. My plan here is to make a new template with all of my modifications. You can also just enhance/update the MSF template. However, I think it is cleaner, when you start making modifications, to make your own template. So, copy the folder and name it with your new template name.

    Step 3: Change the template name. Open ProcessTemplate.xml and change the <name> of the template.

    Step 4: Copy the rdl of the report you want to use as a starting point. In my case, I copied Stories Progress.rdl and named the file Task Progress Breakdown.rdl. I reviewed the requirements for the new report with some of the users here and came up with this plan: it should show tasks and be expandable to show subtasks, and it should add Assigned To and Estimated Finish Date as 2 extra columns.

    Step 5: Walk through the existing report to understand how it works. The main thing that I do here is try to get the SQL to run in SQL Management Studio, so I can walk through the process of building up the data for the report. After analyzing this particular report I found a couple of very useful things. One, this report is already built to display subtasks if I just flip the IncludeTasks flag to 1. So, if you are using Stories and have tasks assigned to each story, this might give you everything you want. For my purposes, I did make that change to the Stories Progress report, as I find it a more useful report when I can see the tasks that comprise each story. But I still wanted a task-only version with the additional fields.

    Step 6: Update the report definition. I tend to work on rdl directly as XML in Visual Studio. Especially when I am just altering an existing report, I find it easier than trying to deal with the BI Studio designer. For my report I made the following changes.
      - Updated fields: removed Stack Rank and replaced it with Priority (since we don't use Stack Rank); added FinishDate and AssignedTo.
      - Changed the root deliverable SQL to pull @Task instead of @DeliverableCategory and added a join to CurrentWorkItemView for FinishDate and AssignedTo:

        SELECT cwi.[System_Id] AS ID FROM [CurrentWorkItemView] cwi
            WHERE cwi.[System_WorkItemType] IN (@Task)
            AND cwi.[ProjectNodeGUID] = @ProjectGuid

        SELECT lh.SourceWorkItemID AS ID FROM FactWorkItemLinkHistory lh
            INNER JOIN [CurrentWorkItemView] cwi ON lh.TargetWorkItemID = cwi.[System_Id]
            WHERE lh.WorkItemLinkTypeSK = @ParentWorkItemLinkTypeSK
                AND lh.RemovedDate = CONVERT(DATETIME, '9999', 126)
                AND lh.TeamProjectCollectionSK = @TeamProjectCollectionSK
                AND cwi.[System_WorkItemType] NOT IN (@DeliverableCategory)

      - Added AssignedTo and FinishDate columns to the @Rollups table.
      - Added two columns to the table used for column headers:

        <Tablix Name="ProgressTable">
          <TablixBody>
            <TablixColumns>
              <TablixColumn><Width>2.7625in</Width></TablixColumn>
              <TablixColumn><Width>0.5125in</Width></TablixColumn>
              <TablixColumn><Width>3.4625in</Width></TablixColumn>
              <TablixColumn><Width>0.7625in</Width></TablixColumn>
              <TablixColumn><Width>1.25in</Width></TablixColumn>
              <TablixColumn><Width>1.25in</Width></TablixColumn>
            </TablixColumns>

      - Added cells for the two new headers.
      - Added cells to the data table to include the two new values (Assigned To & Finish Date).
      - Changed a number of widths so that the report displays landscape and has room for the two additional columns.
      - Set the value of the IncludeTasks parameter to 1:

        <ReportParameter Name="IncludeTasks">
          <DataType>Integer</DataType>
          <DefaultValue>
            <Values>
              <Value>=1</Value>
            </Values>
          </DefaultValue>
          <Prompt>IncludeTasks</Prompt>
          <Hidden>true</Hidden>
        </ReportParameter>

      - Changed a few descriptions of how the report should be used.

    This is the resulting report; I have attached the final rdl.

    Step 7: Update ReportTasks.xml. The last step before the template is ready for use is to update the ReportTasks.xml file in the Reports folder. This file defines the reports that are available in the template.

        <report name="Task Progress Breakdown" filename="Reports\Task Progress Breakdown.rdl" folder="Project Management" cacheExpiration="30">
          <parameters>
            <parameter name="ExplicitProject" value="" />
          </parameters>
          <datasources>
            <reference name="/Tfs2010ReportDS" dsname="TfsReportDS" />
          </datasources>
        </report>

    Step 8: Upload the template. Open the Process Template Manager just like Step 1 and upload the new template.

    That's it. One other note: if you want to add this report to an existing team project, you will have to go into Report Manager (the Reporting Services portal) and upload the rdl to that project's directory.

    Read the article

  • Fun with RadCaptcha for ASP.NET AJAX and OCR software

    A friend of mine was evaluating OCR software and finally decided to go with FineReader. I was curious what would happen if we put the RadCaptcha control in. Would the advanced OCR manage to decode it or not? At first he showed me a test run with the RadCaptcha demo description, to get an idea of the basic output. Naturally, the captured description text was no problem - only a few characters were misread but then corrected with the spellcheck. Next, the real test was performed. These were only a couple of the results, but there is no need to post the rest of the tests - none of the RadCaptcha images were recognized by the OCR software. Here are the CaptchaImage settings used in the tests: Background Noise Level: Low (default value); Line Noise Level: Low (default value); Font Warp Factor: Low (Medium is the default value).

    Read the article

  • Who practices, or is likely to practice, the IEEE Software Engineering? [closed]

    - by user72757
    There is an interesting issue in software engineering which I'd like to explore. The issue is, firstly, what is and what is not software engineering; secondly, if software engineering is what the IEEE defines it to be, what are good examples of companies which practice SE? Detailed question: Software engineering (SE) is the application of a systematic, disciplined, quantifiable approach to the design, development, operation, and maintenance of software, and the study of these approaches; that is, the application of engineering to software. [updated definition, originating in 610.12-1990 - IEEE Standard Glossary of Software Engineering Terminology] If we consider as SE only those approaches that 100% match the above definition, we naturally get to SWEBOK (Software Engineering Body of Knowledge), which is created by the IEEE and the ACM. I'm seeking the answer to this: How can I find a company outside the defence industry which practices SE as defined by the IEEE? Clues: SE originates in the 1968 NATO conference. The Software Engineering Institute (SEI) is based in the US at Carnegie Mellon University. Funding of the SEI is largely done by the US DoD. The defence industry uses SE and sometimes has a partnership with the IEEE (as in the case of Boeing). Possible decomposition of my big question into smaller chunks: a) Where is anyone who acknowledges the IEEE Software Engineering standards at work and perhaps even uses some of them? http://cs.hbg.psu.edu/cmpsc487/IEEEStds_List.htm b) Where can I find a person or a company building around SWEBOK? http://www.computer.org/portal/web/swebok/html/contents c) What is an example of a company professionally using CSDP (apart from those at the IEEE website)? Does anyone have any possible contribution to this question?

    Read the article

  • Bruce Lee Software development.

    - by DesigningCode
    "Styles tend to not only separate men - because they have their own doctrines and then the doctrine became the gospel truth that you cannot change. But if you do not have a style, if you just say: Well, here I am as a human being, how can I express myself totally and completely? Now, that way you won't create a style, because style is a crystallization. That way, it's a process of continuing growth."- Bruce Lee This is kind of how I see software development. What I enjoyed in the the early days of Agile, things seemed very dynamic, people were working out all manner of ways of doing things. It was technique oriented, it was very fluid and people were finding all kinds of good ways of doing things.  Now when I look at the world of “Agile” it seems more crystalized.  In fact that seemed to be a goal, to crystalize the goodness so everyone can share.   I think mainly because it seems a heck of a lot easier to market.  People are more willing to accept a well defined doctrine and drink the Kool Aid.   Its more “corporate” or “professional”. But the process of crystalizing the goodness actually makes it bad.   But luckily in the world of software development there are still many people who are more focused on “how can I express myself totally and completely”.   We are seeing expressive languages, expressive frameworks, tooling that helps you to better express yourself, design techniques that allow you to better express your intent.    I love that stuff! So beware, be very cautious of anyone offering you new age wisdom based on crystals!

    Read the article

  • Need for gksudo for "Install new software" in eclipse

    - by Captain Giraffe
    I have been developing for Android in Eclipse for a while now, and my experience with the Eclipse environment on Ubuntu 10.10 has not been a smooth one. With the repo install of Eclipse I have had to sudo eclipse to install the required components for Android development (a big red flag for me). I tried today to install updates for the Eclipse and Android platform, and my Eclipse installation seems to have broken horribly. I can no longer find any of the URLs for new software if I gksudo it; if I run it in user mode it fails (as it always has) with permissions problems. I have chowned user:user all my Eclipse- and Android-related private/user files. This is a system running Ubuntu 10.10 with GNOME 2.x. On my Kubuntu 11.10 install it works a lot better. Is there an easy fix to this? Is the repo version of Eclipse broken? Should I do a clean install for just my user? (If so, can I retain my previously installed software? The installation process is very time consuming.) I saw there was a previous post here recommending this for new installations.

    Read the article

  • Restoring GRUB2 on Software RAID 0 after Windows 7 wiped it using Ubuntu 10.10 LiveCD

    - by unknownthreat
    I have installed Ubuntu 10.10 on my system. However, I needed to install Windows 7 back, and I expected that it would alter GRUB, and it did. Right now, my partition on my software RAID 0 looks like this: nvidia_acajefec1 is Ubuntu 10.10 and nvidia_acajefec3 is Windows 7. I've been following some guides around and I always get stuck at GRUB not being able to detect the usual RAID content. I've tried running "sudo grub" and then "root (hd0,0)", but GRUB complains it couldn't find my hard disk. So I tried "find (hd0,0)", and it complains that it couldn't find anything. So I tried "find /boot/grub/stage1", and it said "file not found". Here's the text from the console:

      ubuntu@ubuntu:~$ grub
      Probing devices to guess BIOS drives. This may take a long time.
      [ Minimal BASH-like line editing is supported. For the first word, TAB lists possible command completions. Anywhere else TAB lists the possible completions of a device/filename. ]
      grub> root (hd0,0)
      root (hd0,0)
      Error 21: Selected disk does not exist
      grub> find /boot/grub/stage1
      find /boot/grub/stage1
      Error 15: File not found

    Fortunately, I got one person suggesting that what I've been trying to do is for GRUB Legacy, not GRUB2. So I went to the suggested website (http://grub.enbug.org/Grub2LiveCdInstallGuide), tried to look around, and tried:

      ubuntu@ubuntu:~$ sudo fdisk -l
      Unable to seek on /dev/sda

    This is just step 2 of the instructions in http://grub.enbug.org/Grub2LiveCdInstallGuide and I cannot proceed because it cannot seek /dev/sda. However:

      ubuntu@ubuntu:~$ sudo dmraid -r
      /dev/sdb: nvidia, "nvidia_acajefec", stripe, ok, 488397166 sectors, data@ 0
      /dev/sda: nvidia, "nvidia_acajefec", stripe, ok, 488397166 sectors, data@ 0

    So what now? Do you have an idea for how to make fdisk see my RAID array on the live CD (Ubuntu 10.10)? Honestly, I am lost, very lost, in trying to restore GRUB2 on this software RAID 0 system right now.

    Read the article

  • Can defect containment metrics be readily applied at an organizational level when there is only a consistent organizational process framework?

    - by Thomas Owens
    Defect containment metrics, such as total defect containment effectiveness (TDCE) and phase containment effectiveness (PCE), can be used to give a good indicator of the quality of the process. TDCE covers the defects that are found at some point between requirements and the release of a product into the field, indicating the overall effectiveness of the entire process at finding and removing defects. PCE provides more detail at each phase of the software development life cycle and shows how the defect detection and removal techniques are working. Applying these metrics makes sense at a level where you have a well-defined process and methodology for product development, often a project. However, some organizations provide a process framework that is tailored at the project level. This process framework would include the necessary guidance for meeting certifications (ISO 9001, CMMI), practices for incorporating known good techniques (agile methods, Lean, Six Sigma), and requirements for legal or regulatory reasons. However, the specific details of how to gather requirements, design the system, produce the software, conduct testing, and release are left to the product development teams. Is there any effective way to apply defect containment metrics at an organizational level when only a process framework exists at the organizational level? If not, what might be some ideas for metrics that can be distilled from each project (each using a tailored process that fits into the organizational process framework) that capture defect containment, so as to discuss the ability of the process to find and remove defects? The end goal of such a metric would be to consolidate the defect containment practices of a large number of ongoing projects and report to management. The target audience would be people in roles such as the chief software engineer and the chief engineer (of all engineering disciplines) for the organization. Although project-specific data would be available, the idea is to produce something that quantifies the general effectiveness of all tailored processes across all ongoing projects. I would suspect that this data would also be presented as part of CMMI, ISO, or similar audits to demonstrate process quality.
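    As a rough illustration of what such a roll-up could look like, here is a small sketch of mine (not from the question) that aggregates per-project defect counts into organization-level TDCE and per-phase containment figures. It uses one common formulation of the metrics - TDCE as pre-release defects over all defects including field-reported ones, and phase containment as defects caught in the phase that introduced them over all defects introduced in that phase - and the project data is invented.

      # Aggregate defect-containment numbers from several projects, each running
      # its own tailored process, into organization-level figures. Formulas are
      # one common formulation; all counts below are invented:
      #   TDCE       = defects found before release / (pre-release + field defects)
      #   PCE(phase) = defects caught in the phase that introduced them
      #                / all defects introduced in that phase
      from collections import defaultdict

      # Per-project raw counts: (phase_introduced, phase_found) -> count.
      # A phase_found of "field" means the defect escaped to the customer.
      projects = {
          "project_a": {("design", "design"): 12, ("design", "test"): 3,
                        ("code", "code"): 40, ("code", "test"): 10, ("code", "field"): 2},
          "project_b": {("design", "design"): 5, ("design", "field"): 1,
                        ("code", "code"): 25, ("code", "test"): 6},
      }

      introduced = defaultdict(int)   # org-wide defects introduced per phase
      contained = defaultdict(int)    # org-wide defects caught in their own phase
      pre_release = 0
      field = 0

      for counts in projects.values():
          for (origin, found), n in counts.items():
              introduced[origin] += n
              if found == origin:
                  contained[origin] += n
              if found == "field":
                  field += n
              else:
                  pre_release += n

      print("Org TDCE: %.2f" % (pre_release / (pre_release + field)))
      for phase in introduced:
          print("Org PCE(%s): %.2f" % (phase, contained[phase] / introduced[phase]))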

    Read the article

  • Why do we have non-free software in the official repositories?

    - by fluteflute
    The Ubuntu wiki describes the "sections" of the official Ubuntu repositories as follows:
      - Main - Officially supported software.
      - Restricted - Supported software that is not available under a completely free license.
      - Universe - Community maintained software, i.e. not officially supported software.
      - Multiverse - Software that is not free.
    I thought that software in the Ubuntu repositories had to be open source; however, doesn't the description of Multiverse directly contradict this?

    Read the article

  • Ubuntu 12.04 LTS: Error message "Failed to execute child process"

    - by Ron
    I am an Ubuntu newbie and just started working with Ubuntu (version 12.04 LTS) a couple of days ago. I wanted to add a launcher icon to the desktop for launching an application I previously installed. Up to now I can only launch it by typing setsid matlab -desktop into my terminal. Now there is the following problem with execution via the desktop icon: whenever I click the desktop icon, I get the following error message: "Failed to execute child process". I would like to add a screenshot, but unfortunately as a new user I am not allowed to... The .desktop file, which I added to the desktop via drag'n'drop from the main menu, also has permission to execute. I also tried to look for advice on the error message "Failed to execute child process..." but could not find anything useful. Now does anybody have an idea what I am missing? Sorry if this is a stupid question ;) ...but as I just said: I just started with Ubuntu... Thanks to everybody in advance for their help! :) And let me know if you should need any more information... Regards, Ron
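    For comparison, a minimal desktop entry for the same command might look like the sketch below; the Name and Icon values are made up, and "Failed to execute child process" usually means the program named in the Exec line cannot be found or run by the desktop session, so using a full path to matlab is worth trying if it is not on the session's PATH. On 12.04 the .desktop file also needs the execute bit set ("Allow executing file as program" in its properties).

      [Desktop Entry]
      Type=Application
      Name=MATLAB
      Comment=Start MATLAB with its desktop UI
      # If matlab is not on the PATH of the desktop session, use its full path here
      Exec=setsid matlab -desktop
      Terminal=false
      Icon=matlab
      Categories=Development;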

    Read the article

  • software architecture (OO design) refresher course

    - by PeterT
    I am lead developer and team lead in a small RAD team. Deadlines are tight and we have to release often, which we do, and this is what keeps the business happy. While we (the development team) are trying to maintain the quality of the code (clean and short methods), I can't help but notice that the overall quality of the OO design and architecture is getting worse over time - the library we are working on is gradually reducing itself to a "bag of functions". Well, we try to use design patterns, but since we don't really have much time for design as such we are mostly using the creational ones. I have read Code Complete, Design Patterns (GoF and enterprise), The Pragmatic Programmer, and many books from the Effective XXX series. Should I re-read them, as I read them a long time ago and have forgotten quite a lot, or have other / better OO design and software architecture books been published since then which I should definitely read? Any ideas or recommendations on how I can get the situation under control and start improving the architecture? The way I see it, I will start improving the architectural / design quality of the software components I am working on and then will start helping other team members once I find what is working for me.

    Read the article

  • Cloning existing software for commercial purposes - legal implications

    - by user2036256
    I have been asked to clone some existing software for a company. Basically it's an old 16-bit DOS console app, which was supplied free of charge in, I believe, the late 80's. Having replaced the machine that needs to run it with a box running Win7 x64, they can't get it to work; it crashes every couple of minutes under DOSBox. The company that supplied it appears to no longer exist - if it did, the company asking me to do this would almost certainly know about it. It's undetermined whether they have gone entirely or are just trading under a different name. If the latter, they seem to have withdrawn from the market related to this product (because again, niche area, we should know about everyone there). What is the status of this with regard to copyright etc.? The main concern for the company involved is that they want an identical interface to what they already have, so I would have to clone it entirely. Having no source code or indication of the underlying mechanisms, these would be written from scratch. Is an interface covered by copyright? Does that still hold 30 years later? What is the assumed license when none at all is provided? Under UK law, would I be under any serious risk were I to take on the project? How would this pan out if I then decided to sell the software on to other companies? Thanks

    Read the article

  • OpenWorld Session: BPM Analytics - Process Dashboards, BAM and Intelligent Optimization

    - by Ajay Khanna
    Blog contributed by Payal Srivastava, Oracle Product Management. The margin of error for the business is shrinking dramatically. Business needs to accomplish more with less, i.e. minimal investment with quick ROI. Learn how you can leverage Oracle BPM Suite and complementary technologies to create a robust analytics capability that provides visibility into operations for C-level executives and operational managers. We will talk about the BPM analytics options available today that will not only enhance visibility but also allow you to intelligently optimize the business process at design time as well as run time. The session will share some exciting things on our roadmap. Come meet the Oracle team members from Product Development (Avinash Dabholkar, Eric Hsiao) and Product Management (Payal Srivastava) at the session. We would like to hear your questions/comments about our offering and roadmap. BPM Analytics: Process Dashboards, BAM and Intelligent Optimization, Moscone South 308, 10/3/12 @ 11:45am - CON 8598

    Read the article

  • Software design of a browser-based strategic MMO game

    - by Mehran
    I wonder if there are any known, tested software designs for Travian-like browser-based strategic MMO games? I mean, how would they implement the server of such games, or what is stored in the database and what is stored in RAM? Is the state of the world stored in one piece or is it distributed among a number of storage nodes? Does anyone know a resource to study the problems and solutions of creating such games? [UPDATE] As suggested in the comments, I'm going to give an example of how I would design such a project, even though I'm not sure if I'm proposing the right one. Having stored the world state in MongoDB, I would implement an event collection in which all changes to the world are registered. Changes that are meant to happen in the future will come with an action date set to the future, and those that are to be carried out immediately will be set to now. Having this datastore as the central point of the system, players will issue their actions as events inserted into the datastore. At the other end of the system, I'll have constantly running software taking events out of the datastore which are due to be carried out and not done yet. Executing an event means applying some update to the world's state and thus the datastore. As scalable as this design sounds, I'm not sure if it will be worth implementing. For one, it is pointless to cache the datastore, as most updates happen once without any follow-ups. For instance, if you have the growth of resources in your game, you'll be updating the whole world state periodically, in which case, having incorporated a cache, you are keeping the whole world in RAM (which most likely is impossible). So can someone come up with a better design?
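    Just to make the event-collection idea above concrete (this is my own sketch, not a recommendation from the question), the consumer side might look roughly like this; it assumes pymongo and an invented schema where each event document carries due, done, type and payload fields.

      # Sketch of the event-consumer side of the design above: repeatedly claim
      # events whose action date has arrived and apply them to the world state.
      # Assumes pymongo; the "events"/"world" collections and fields are invented.
      import time
      from datetime import datetime, timezone
      from pymongo import MongoClient

      db = MongoClient()["game"]

      def apply_event(event):
          # Example handler: a resource-growth tick adds resources to one village.
          if event["type"] == "grow_resources":
              db.world.update_one({"_id": event["village_id"]},
                                  {"$inc": {"resources.wood": event["amount"]}})

      while True:
          now = datetime.now(timezone.utc)
          # Atomically claim one due, unprocessed event so several workers can coexist.
          event = db.events.find_one_and_update(
              {"due": {"$lte": now}, "done": False},
              {"$set": {"done": True, "processed_at": now}})
          if event is None:
              time.sleep(0.5)          # nothing due yet; poll again shortly
              continue
          apply_event(event)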

    Read the article

  • Software management for 2 programmers

    - by kajo
    My very good friend and I run a small business. We have a company and we develop web apps using Scala. We started 3 months ago and we have a lot of work now. We cannot afford to employ another programmer because we can't pay him now. Until now we have tried to manage the entire development process very simply. We use Excel sheets for simple bug tracking and we work on client requests on the fly. We have no plan for the next week or anything similar. But now I find this very inefficient and useless. I am trying to find some rules or some methodology for a small team, or for only two guys. For example, Scrum is, IMO, ill-suited to us. There are a lot of roles (ScrumMaster, Product Owner, Team...) and it seems overkill. Can you advise me? Do you have any experience with software management in small teams? Is any current agile development methodology a fit for a pair of programmers? Is there any software for simple management and bug tracking, maybe a wiki, or time management for two coders? Thanks a lot for sharing.

    Read the article

  • How to share code as open source?

    - by Ethel Evans
    I have a little program that I wrote for a local group to handle a somewhat complicated scheduling issue for scheduling multiple meetings in multiple locations that change weekly according to certain criteria. It's a niche need, but I wouldn't be surprised if there are other groups that could use software like this. In fact, we've had requests from others for directions on starting a group like this, and if their groups get as big, they might also want special software to help with scheduling. I plan to continue developing the program and eventually make it an online web app, but a very simple alpha version is completed as a console app. I'd like to make it available as open source, but I have no idea what kind of process I should go through first. Right now, all I have is Java code, not even unit-tested thoroughly. I haven't shown the code to anyone else. There is no documentation. I don't know where I would put the code so others could access it. I don't know anything about licensing it. I don't know what kind of support people will expect from me if I release it as open source. I have no idea what else I should worry about. Can someone outline for me (or post an article(s) that outlines) the process of taking open source software from "coded" to "completed / available"? I really don't want to embarrass myself by doing things weirdly.

    Read the article
