Search Results

Search found 31421 results on 1257 pages for 'software performance'.


  • What is an elegant way to install non-repository software in 12.04?

    - by Tomas
    Perhaps I missed something when Canonical removed the "Create launcher" option from the right-click menu, because I've really been missing that little guy. For me, it was the preferred way to install software that comes not in a .deb but in a tar.gz, for example. (Note: the tar.gz contains a folder with already-compiled files; I'm NOT compiling from source.) I just downloaded the new Eclipse IDE and extracted the tar.gz to my /usr folder. Now I'd like to add it to my desktop and dash so it can be started easily. Intuitively, I would right-click the desktop, create a launcher, and then copy the .desktop file to /usr/share/applications. However, creating a launcher this way is no longer possible. My question: how would you install an already-compiled tar.gz downloaded from the internet? Below are a few approaches I've seen, but all of them are more time-consuming than the right-click option was. If you have any better ideas, please let me know. Thanks!

    Manual copy, creating the .desktop file by hand: simply extract the archive to /usr, then create a new text file containing something along the lines of:

        [Desktop Entry]
        Version=1.0
        Type=Application
        Terminal=false
        Exec="/usr/local/eclipse42/eclipse"
        Name="Eclipse 4.2"
        Icon=/home/tomas/icons/eclipse.svg

    Rename this file to eclipse42.desktop, make it executable, and copy it to /usr/share/applications.

    Manual copy, creating the .desktop file via GUI: fossfreedom has elaborated on this in "How can I create launchers on my desktop?" Basically it involves the command gnome-desktop-item-edit --create-new ~/Desktop. After creating the launcher, copy it to /usr/share/applications.

    Read the article

  • How should I determine my rates for writing custom software?

    - by Carson Myers
    For custom software that will likely take a year or more to develop, how should I determine what to charge as a consultant? I'm having a hard time coming up with a number, and searches online turn up vastly different figures (between $55/hr and $300/hr). I don't want to shoot too low, because the project is going to take so much of my time (and I'm deferring my education for it). I also don't want to shoot too high and get unpleasant looks and demands for justification. FWIW I live in Canada and have approximately 10 years of development experience. I've read the "take your salary and divide it by 1000" rule of thumb (so a $100,000 salary would suggest $100/hr), but the thing is I don't have a salary. Currently I'm just doing fairly small programming tasks for a friend who is starting a marketing company, pricing each task fairly arbitrarily. I don't know what I would make over the course of a year doing that, but it would be incredibly low. My responsibilities on the project would be the architecture, programming, database, server, and to some degree the UX. It's going to be a public-facing web service, so I will also need to put a lot of effort into security and scalability. Any advice or experience?

    Read the article

  • "System.Data.OracleClient requires Oracle client software version 8.1.7 or greater." Error Message

    - by Jandost Khoso
    Quick resolution: give full permissions to AUTHENTICATED USERS on the following folders:

        a) ORACLE_HOME
        b) Program Files\ORACLE

    Also check your PATH. You might have several Oracle clients installed on your system, with your .NET application picking up a home with an inappropriate client. What your .NET application needs to load is an OCI.DLL with a file version of 8.1.7 or greater. According to the MSDN document "Oracle and ADO.NET": "The .NET Framework Data Provider for Oracle provides access to an Oracle database using the Oracle Call Interface (OCI) as provided by Oracle Client software. The functionality of the data provider is designed to be similar to that of the .NET Framework data providers for SQL Server, OLE DB, and ODBC." The MSDN document "System Requirements (Oracle)" says: "The .NET Framework Data Provider for Oracle requires Microsoft Data Access Components (MDAC) version 2.6 or later. MDAC 2.8 SP1 is recommended. You must also have Oracle 8i Release 3 (8.1.7) Client or later installed." Both the .NET Framework Data Provider for Oracle and the Oracle Data Provider for .NET are data providers for accessing an Oracle database. The former ships with the .NET Framework and requires Oracle client version 8.1.7 or later; the latter is provided by Oracle and requires Oracle client version 9.2 or later. The Oracle Data Provider for .NET (ODP.NET) features optimized ADO.NET data access to the Oracle database and lets developers take advantage of advanced Oracle database functionality, including Real Application Clusters, XML DB, and advanced security. See the document "Comparing the Microsoft .NET Framework 1.1 Data Provider for Oracle and the Oracle Data Provider for .NET" for more information about the differences.
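
    Not from the original post, but as a quick way to act on the "check your PATH" advice, a minimal C# sketch along these lines (the class name is just a placeholder) reports the first oci.dll your PATH would surface, and its file version:

        using System;
        using System.Diagnostics;
        using System.IO;

        class OciPathCheck
        {
            static void Main()
            {
                // The OS resolves oci.dll via the DLL search order; PATH is searched
                // after the application and system directories, so this is a heuristic.
                string path = Environment.GetEnvironmentVariable("PATH") ?? "";
                foreach (string dir in path.Split(';'))
                {
                    string candidate;
                    try { candidate = Path.Combine(dir.Trim(), "oci.dll"); }
                    catch (ArgumentException) { continue; } // skip malformed PATH entries
                    if (File.Exists(candidate))
                    {
                        FileVersionInfo info = FileVersionInfo.GetVersionInfo(candidate);
                        Console.WriteLine("{0} -> file version {1}", candidate, info.FileVersion);
                        return; // the first hit wins the PATH portion of DLL resolution
                    }
                }
                Console.WriteLine("No oci.dll found on PATH.");
            }
        }

    If the version printed is below 8.1.7, or a different Oracle home than expected comes first, reordering PATH entries is usually the fix.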

    Read the article

  • Software development process for a part time University project for 1 developer?

    - by Pricey
    I will soon be doing a part-time university project. The time frame is around 8 months, with approximately 10-15 hours a week spent working on it and a review by a tutor each quarter. My question: what software development process would you recommend for a course that requires you to manage yourself as well as the project? I wanted to use a weekly or bi-weekly iterative approach, but most processes seem tailored to teams. I am looking at XP (Extreme Programming) or Scrum as something less conventional for university work, though I don't know a lot about Scrum yet. One question I have: can you say you are doing XP without pair programming? My tutor seems to think I have to stick to all the practices, otherwise I can't claim to be doing it (never mind that I am working alone). We can have external user input as well, but given the small timescales of part-time work it may be more beneficial for me to act as the user too, which is not what I'd prefer, considering how easily I can get lost in the design.

    Read the article

  • Is it correct to fix bugs without adding new features when releasing software for system testing?

    - by Pratik
    This question is for experienced testers or test leads. Here is a scenario from a software project: say the dev team has completed the first iteration of 10 features and released it for system testing. The test team has created test cases for these 10 features and estimated 5 days for testing. The dev team, of course, cannot sit idle for 5 days, so they start building 10 new features for the next iteration. During this time the test team finds defects and raises bugs. The bugs are prioritized, and some of them have to be fixed before the next iteration. The catch is that the test team will not accept a new release containing new features, or changes to existing features, until all those bugs are fixed. Their argument: how can we guarantee a stable release for testing if new features are introduced alongside the bug fixes? They also cannot rerun all their regression test cases every iteration. Apparently this is proper testing process according to ISTQB. It means the dev team has to maintain a branch solely for bug fixing and another branch where development continues, with extra merging overhead, especially around refactoring and architectural changes. Do you agree that this is a common testing principle? Is the test team's concern valid? Have you encountered this in practice on your projects?

    Read the article

  • Manic Monday - More OpenWorld Solaris Sessions: Developers, Cloud, Customer Insights, Hardware Optimization

    - by Larry Wake
    We're overflowing with Monday sessions - literally more than one person can take in. Learn more about what's new in Oracle Solaris Studio, hear about the latest x86 and SPARC hardware optimizations, get some insights on cloud deployment strategies, and find out from your peers what they're doing with Oracle Solaris. If you're an OpenWorld attendee, go to Schedule Builder to guarantee your space in any session or lab. See yesterday's blog post and the "Focus on Oracle Solaris" guide for even more sessions.

    Monday, October 1st:

    10:45 AM - Maximizing Your SPARC T4 Oracle Solaris Application Performance (CON6382, Marriott Marquis - Golden Gate C3)
    Hear how customers and commercial software partners have reached peak performance on SPARC T4 servers and engineered systems with Oracle Solaris Studio and its latest tools for analyzing, reporting, and improving runtime performance:
    - Autoparallelizing, high-performance compilers
    - Performance Analyzer (used to find performance hotspots)
    - Thread Analyzer (to expose data races and deadlocks)
    - Code Analyzer (used to discover latent memory corruption issues)

    10:45 AM - Cloud Formation: Implementing IaaS in Practice with Oracle Solaris (CON8787, Moscone South 302)
    Decisions, decisions - in the same time slot, we've got a session that covers why Oracle Solaris is the ideal OS for public or private clouds, IaaS or PaaS, with built-in features for elastic infrastructure, unrivaled security, superfast installation and deployment, nonstop availability, and crystal-clear observability. This session will include a customer study on how Oracle Solaris is used in the cloud today to implement the Oracle stack.

    12:15 PM - Customer Insight: Oracle Solaris on Oracle Exadata, Oracle Exalogic, and SPARC SuperCluster (CON8760, Moscone South 270)
    Hear from customers what benefits they have realized from using the Oracle stack on Oracle Exadata and Oracle's SPARC SuperCluster, and from using Oracle Solaris on those engineered systems, taking advantage of built-in lightweight OS virtualization (Zones), enterprise reliability and scale, and other key features.

    1:45 PM - Case Study: Mobile Tornado Uses Oracle Technology for Better RAS and TCO (CON4281, Moscone West 2005)
    Mobile Tornado develops and markets instant communication platforms, replacing traditional radio networks with cellular networks. Its critical concern is uptime. Find out how they've used Oracle Solaris, Netra SPARC T4, and Oracle Solaris Cluster, including Oracle Solaris ZFS and Zones, for their Oracle Database deployments to improve reliability and drive down cost.

    3:15 PM - Technical Panel: Developing High Performance Applications on Oracle Solaris (CON7196, Marriott Marquis - Golden Gate C2)
    Engineers from the Oracle Solaris, Oracle Database, and Oracle Tuxedo development teams, and Oracle ISV Engineering, discuss how they develop high-performance enterprise applications that take advantage of Oracle's SPARC and x86 servers, with Oracle Solaris Studio and new Oracle Solaris 11 features. Topics will include developer tools, parallel frameworks, best practices, and methodologies, as well as insights and case studies on parallelizing and optimizing application performance on Oracle Solaris. Bring your best questions!

    3:15 PM - x86 Power Management with Oracle Solaris: Current State, Opportunities, and Future (CON6271, Moscone West 2012)
    Another option for this time slot: learn how Intel Xeon and Oracle Solaris work together to reduce server power consumption. This presentation addresses some of the recent power management improvements in Oracle Solaris, opportunities to further improve energy efficiency, and some future directions for Oracle Solaris power management.

    Read the article

  • Would it be possible to create an open source software library, entirely developed and moderated by an open community?

    - by Steven Jeuris
    Call it democratic software development, or open source on steroids if you will. I'm not just talking about the possibility of providing a patch that can be approved by the library owner. Think more along the lines of how Stack Exchange works: anyone can post code, and through community moderation it is cleaned up, and eventually valid code ends up in the final library. For complex libraries an elaborate system would probably have to be created, but for a simple library I believe this is already possible, even within the Stack Exchange platform. Take a library of extension methods for .NET, for example. Everybody goes their own way and implements their own subset of what they feel is important, open-source library or not. People want to share their code, but there is no suitable platform for it. extensionmethod.net is the result of answering this call for extension methods, but the framework hopelessly falls short; there is no order or structure at all. You don't know whether an idea is any good until you try it, so I decided to create an Extension Methods proposal on Area51. I believe that with proper moderation, the site could be more than a Q&A site, and an actual library (or subsets of it) could be extracted from it. Has anything like this been attempted before? Are there platforms better suited for this?

    Read the article

  • Will proprietary software-based sound enhancements work with Ubuntu? (BeatsAudio, Dolby)

    - by LiveWireBT
    This question is about mainstream or gamer-grade software-based audio/sound enhancements found in highly integrated computing and entertainment systems like laptops, tablets, and smartphones. These are mostly marketed with fancy badges of well-known audio-related brands on the product or packaging, while remaining vague about the actual implementation or components used, and poorly differentiated from the general audio capabilities of the system or device. This question is not about actual hardware like speakers. If your headphones are not properly detected, or your speakers are assigned wrongly, work partially, or not at all, then your sound card or chip is not properly detected and you should take a look at troubleshooting audio issues. This question is also not about enthusiast or recording-grade hardware like recording interfaces, amplifiers, and DACs in a variety of form factors. And this question is also not about audio encoding and playback of different audio formats like Dolby Digital, Dolby TrueHD, and DTS; most of these may be subject to patents and licensing - see restricted formats. If you are just searching for an equalizer, please take a look at this question: "Is there any Sound enhancers/equalizer?" Simply speaking: every feature where, on Windows, you would flip a switch or check a box in a fancy-looking interface to change the sound from neutral to fancy.

    Read the article

  • What are the options for retraining formally as a software engineer?

    - by Matt Harrison
    I'm a self-taught programmer. I have a good undergraduate degree in Architecture (building, not software). I was always a science/maths kid and got consistently good grades in those subjects. However, I became indecisive at undergraduate level and switched between Physics, Chemistry, and Art before sticking with Architecture, mainly out of a desperate need to finish any degree. As soon as I graduated, I ditched architecture and started writing code again professionally. I've now been a programmer for 3 years and have progressed very quickly. I'm ambitious: I want to work for the top companies in this field at some point, and I've realised I need a Computer Science education to be taken seriously (based on the job ads of the big tech firms). I've applied for a few MSc programs in Computer Science, but they've all rejected me because of my BA. It's not an option for me to quit my job and go back to do another 3-year undergraduate degree in CS. I know I can study at this level, because I've read most of the books on the reading lists of the UK CS courses I can find, and I have that knowledge now; it's just that I can't prove it on an application form. What options are available to me?

    Read the article

  • WebCenter Content Web Search Performance: Do you really need that folder path info?

    - by Nicolas Montoya
    End-users want content at their fingertips, at the speed of thought if possible. When running search operations in the WebCenter Content web interface, every second or fraction of a second of improvement matters. While doing some trace analysis (systemdatabase tracing) on a customer environment, we came across SQL queries that were being triggered unnecessarily. These were related to determining the folder path for every entry in the search result set. However, this folder path was not even being used in the information displayed in the user interface.

    Why was the folder path information being collected when it was not even displayed in the UI? We found that the configuration parameter 'FolderPathInSearchResults' was set to 'true' under Administration > Admin Server > General Configuration > Additional Configuration Variables.

    When executing a quick search by keyword, we were getting 100 out of 2280 entries on the first page of the result set. With 'FolderPathInSearchResults' set to 'true', the following queries appear in the systemdatabase tracing. First, 100 executions of a query on the FolderFiles table - one for each of the documents displayed on the first page:

        >systemdatabase/6  12.13 11:17:48.188  IdcServer-199  1.45 ms. SELECT * FROM FolderFiles WHERE dDocName='SLC02VGVUSORAC140641' AND fLinkRank=0 [Executed. Returned row(s): true]

    Then, 382 executions of a query on the folders tables - most of the documents matching the keyword criteria sit at a folder depth of three or four:

        >systemdatabase/6  12.13 11:17:48.114  IdcServer-199  2.57 ms. SELECT FolderFolders.*, FolderMetaDefaults.* FROM FolderFolders, FolderMetaDefaults WHERE FolderFolders.fFolderGUID=FolderMetaDefaults.fFolderGUID(+) AND ((FolderFolders.fFolderGUID = '1EB8E527E19B09ED3FE82EE310AEA13A')) [Executed. Returned row(s): true]

    By setting 'FolderPathInSearchResults' to 'false', the above queries were no longer reported in the Server Output System Audit Information.

    Now, let's consider a practical scenario:

    - Search result set page size = 100
    - Average folder depth per document in the search result set = 5

    The number of folder-path-related queries will be 100 + 5*100 = 600. If each query takes slightly over 3 ms, you would spend about 2000 ms (2 seconds) of server time just to gather this information. The overall performance impact goes beyond server execution time, as this information also has to travel from the server to the browser. If the documents are nested deeper in the folder hierarchy, additional hundreds of queries may be executed. If the folder path is not displayed in the end-user interface profile, your system may be better off with the 'FolderPathInSearchResults' configuration parameter disabled.

    Read the article

  • My game seems to be incompatible with recording software. What could be causing this?

    - by Lewis Wakeford
    I've just finished a little game-dev project for university, and I need to record a video to accompany my submission (just in case they can't get my source to work). Basically, my game doesn't work at all when FRAPS or Bandicam attempts to attach to it: I get a black screen and a stream of GL_INVALID_OPERATION messages from my error-reporting code. Dxtory can't seem to hook into it at all; it doesn't display its FPS counter or anything. My game logic appears to be running correctly according to the debug traces; it just seems like all the GL library calls break. I don't know a huge amount about how these programs operate, so I don't really know what I could be doing to cause this. I've heard they read from the OpenGL framebuffers, so maybe I'm doing something wrong there? I'm letting GLFW and GLEW do all the low-level initialization, but I have successfully recorded other projects with the same setup and recording software. Essentially: has anyone run into something like this before, or do you know anything about how these programs work that could give a clue as to the cause of the issue?

    Read the article

  • Why is a software development life-cycle so inefficient?

    - by user87166
    Currently the software development life cycle followed at the IT company I work for is:

    1. The "Business" works with a solution manager to build a Business Requirements document.
    2. The solution manager works with the program manager (PM) to build a Functional Spec.
    3. The PM works with the engineering lead to develop a release plan, and with the engineering team to develop technical specifications.
    4. If any clarifications are required, developers contact the PM, who contacts the solution manager, who contacts the business - and all the way back - introducing a latency of nearly 24 hours and massive email chains for every clarification.
    5. By the time the tech spec is done, nearly a month has passed in back and forth.
    6. Two weeks then go to development while test writes test cases.
    7. Code is dropped formally to test, and test starts raising bugs. Even if there is one root cause behind ten different issues, and it's easily fixed, developers are not allowed to give fresh code to test for the next week.
    8. After 2-3 such drops, the code is given to the ops team as a "golden drop" (2 months have now passed since the beginning).
    9. The ops team deploys the code to a staging environment. If it runs stably for a week, it is promoted to UAT, and after 2 weeks of that it is promoted to prod. If any bugs are found here - well, applying for a visa requires less paperwork.

    This entire process is followed even if a single SSRS report is to be released. How do other companies handle such requirements? I'm wondering why the business cannot just hand the requirements to developers, have the developers build and deploy to UAT themselves, expose it to the business who raise functional bugs, and, after fixing those, promote to prod (even for more complex work).

    Read the article

  • Is there a massive other side to software development which I've somehow missed, revolving entirely around Microsoft?

    - by Aerovistae
    I'm still a beginning programmer; I've been at it for 2 years. I've learned to work with a few languages, a bit of web development technology, and a handful of libraries, frameworks, and IDEs. But over the past two years (and long before I even started, really), I keep hearing references to these... things. A million of them. Things such as C#, ADO, SOAP, ASP, ASP.NET, the .NET Framework, the CLR, F#, etc. I've read their Wikipedia articles, in depth, multiple times, and they all mention a million other things on that list, but I just can't seem to grasp what it all is. The only thing I've taken away with any certainty is that Microsoft is behind all of it. It sounds almost like a conspiracy. Are all these technologies just for developing on the Windows platform? What is .NET? Do some software developers dedicate their entire careers just to that side of things? Why would I want to get into it, and what advantage does... whatever it is... have over all the other technologies out there? I hope this makes sense. It's a broad question, but inside it there's a very specific question asking about something I don't know the name of. Hopefully you can grasp my confusion.

    Read the article

  • Can it be a good idea to lease a house rather than a standard office-space for a software development shop? [closed]

    - by hamlin11
    Our lease on our US-based office space is up in July, so evaluating our office situation is back on my radar. Two of our partners rather like the idea of leasing a house instead of standard office space. We have 4 partners and one employee. I'm against the idea at this moment in time.

    Pros, as I see them:
    - Easier to get a good location (minimize commutes)
    - All partners/employees have dogs; easier to work longer hours without dog duties pulling people back home
    - More comfortable bathroom situation
    - Residential internet rates
    - Control of the thermostat
    - Clients don't come to our office, so this would not change our image
    - The additional comfort level should facilitate a significantly higher percentage of time "in the zone" for programmers and artists

    Cons, as I see them:
    - Additional bills to pay (house cleaning, yard, utilities, gas, electric)
    - Additional time overhead in dealing with those bills
    - Additional overhead to deal with issues that maintenance would have handled in a standard office space
    - Residential neighbors to contend with

    The equation starts to look a little nasty when factoring in the potential time overhead, especially for issues that a maintenance crew would handle at a standard office complex. Can this be a good thing for a software development shop?

    Read the article

  • Any empirical evidence on the efficacy of CMMI?

    - by mehaase
    I am wondering if there are any studies that examine the efficacy of software projects in CMMI-oriented organizations. For example, are CMMI organizations more likely to finish projects on time and/or on budget than non-CMMI organizations?

    Edit for clarification: CMMI stands for "Capability Maturity Model Integration". It's developed by the Software Engineering Institute at Carnegie Mellon University (SEI-CMU). It's not a certification, but there are various companies that will "appraise" your organization to various CMMI levels, such as level 2 and level 3. (I believe CMMI level 1 is an animalistic, Hobbesian free-for-all that nobody aspires to. In other words, everybody is at least CMMI level 1, even if you've never heard of CMMI before.) I'm definitely not an expert, but I believe an organization can be appraised at CMMI levels within different scopes of work: i.e. service delivery, software development, foobaring, etc. My question is focused on the software development appraisal: is an organization that has been appraised at CMMI Level X for software projects more likely to finish a software project on time and on budget than another organization that has not been appraised at CMMI Level X? However, in the absence of hard data about software-oriented CMMI, I'd be interested in the effect CMMI appraisals have on other activities as well. I originally asked the question because I've seen various empirical studies of software development (e.g. the essays in The Mythical Man-Month refer to numerous such studies, as does McConnell's Code Complete), so I know there are organizations performing empirical studies of software development.

    Read the article

  • Performance Improvement in .NET 4.5: Multicore Just-in-Time (JIT)

    - by anobre
    Hi everyone! While reading up on the performance improvements in .NET 4.5, I came across something extremely interesting: Multicore Just-in-Time (JIT). The theory is very simple: why not use multiple cores for JIT compilation? And beyond that, would it be possible to compile methods in a particular order, starting with those most likely to be executed? It sounds a little crazy, but that is exactly what Multicore JIT does - and best of all, in an extremely simple way. ASP.NET 4.5 applications get it by default. Elsewhere, it takes just two lines of code: one indicating the folder where the profile file will be stored, and another to start the process. The profile is the file responsible for recording the order in which methods are compiled, so that those most likely to run early are compiled first. The code for this process:

        ProfileOptimization.SetProfileRoot(@"C:\ProfileRoot");
        ProfileOptimization.StartProfile("profile");

    This compilation optimization is only noticeable after the profile has been created, so nothing will be visible on the first run. At the end of the process, a file with the chosen name (in this case, "profile") will be created in the folder indicated as the root. Hope this tip helps! Cheers!
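
    Not from the original post: a minimal, self-contained sketch (the folder and profile names are just placeholders) showing where these calls typically go - at the very top of Main, before the bulk of the application is JIT-compiled:

        using System.Runtime;

        class Program
        {
            static void Main(string[] args)
            {
                // Must run before the methods you want profiled are first JITted.
                // The root folder must already exist and be writable by the process.
                ProfileOptimization.SetProfileRoot(@"C:\ProfileRoot");
                ProfileOptimization.StartProfile("Startup.profile");

                // ... normal application startup continues here ...
            }
        }

    On the first run the profile is only recorded; on later runs the runtime reads it back and pre-compiles the recorded methods in the background on spare cores.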

    Read the article

  • Grub gives messages about the boot sector being used by other software. What should I do?

    - by Bobble
    This only happens with one of my computers: an elderly laptop that has had a long and varied history with several operating systems, but in its retirement is acting as a server for my home network using Ubuntu 12.04. It is a single-boot system; no other systems are installed. Every so often, whenever there is a grub upgrade, I notice a message like this:

        Setting up grub-common (1.99-21ubuntu3.4) ...
        Installing new version of config file /etc/grub.d/00_header ...
        Setting up grub2-common (1.99-21ubuntu3.4) ...
        Setting up grub-pc-bin (1.99-21ubuntu3.4) ...
        Setting up grub-pc (1.99-21ubuntu3.4) ...
        /usr/sbin/grub-setup: warn: Sector 32 is already in use by FlexNet; avoiding it.
        This software may cause boot or other problems in future. Please ask its authors
        not to store data in the boot track.
        Installation finished. No error reported.

    Should I be worried about this? What (if anything) should I do about it?

    Read the article

  • How can I use Performance Counters in C# to monitor 4 processes with the same name?

    - by Waffles
    I'm trying to create a performance counter that can monitor the processor time of applications, one of which is Google Chrome. However, I noticed that the processor time I get for Chrome is unnaturally low. Looking in the Task Manager, I realized the problem: Chrome has more than one process running under the exact same name, but each process has a different working set size and thus (I would assume) a different processor time. I tried doing this:

        // Get all processes running under the same name, and make a
        // performance counter for each one.
        Process[] toImport = Process.GetProcessesByName("chrome");
        instances = new PerformanceCounter[toImport.Length];
        for (int i = 0; i < instances.Length; i++)
        {
            PerformanceCounter toPopulate = new PerformanceCounter(
                "Process", "% Processor Time", toImport[i].ProcessName, true);
            //Console.WriteLine(toImport[i].ProcessName + "#" + i);
            instances[i] = toPopulate;
        }

    But that doesn't seem to work at all - I just monitor the same process several times over. Can anyone tell me of a way to monitor separate processes with the same name?
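
    For what it's worth, a minimal sketch of the usual workaround (not from the original post; "chrome" is just the example process name): the "Process" counter category gives same-named processes distinct instance names ("chrome", "chrome#1", "chrome#2", ...), which can be enumerated directly instead of going through Process.GetProcessesByName:

        using System;
        using System.Diagnostics;
        using System.Linq;
        using System.Threading;

        class SameNameProcessMonitor
        {
            static void Main()
            {
                // Same-named processes appear as "chrome", "chrome#1", "chrome#2", ...
                var category = new PerformanceCounterCategory("Process");
                string[] names = category.GetInstanceNames()
                    .Where(n => n == "chrome" || n.StartsWith("chrome#"))
                    .ToArray();

                var counters = names
                    .Select(n => new PerformanceCounter("Process", "% Processor Time", n, true))
                    .ToArray();

                // "% Processor Time" is computed between two samples,
                // so the first NextValue() always returns 0.
                foreach (var c in counters) c.NextValue();
                Thread.Sleep(1000);

                foreach (var c in counters)
                    Console.WriteLine("{0}: {1:F1} %", c.InstanceName, c.NextValue());
            }
        }

    One caveat: instance names can shift as processes start and exit, so a long-running monitor typically re-enumerates the names and uses the category's "ID Process" counter to map an instance name back to a PID.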

    Read the article

  • The Incremental Architect's Napkin - #1 - It's about the money, stupid

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/24/the-incremental-architectacutes-napkin---1---itacutes-about-the.aspx

    Software development is an economic endeavor. A customer is only willing to pay for value. What makes a software valuable is required to become a trait of the software. We as software developers thus need to understand and then find a way to implement requirements. Whether, or to what extent, a customer really can know beforehand what's going to be valuable for him/her in the end is a topic of constant debate. Some aspects of the requirements might be less foggy than others. Sometimes the customer does not know what he/she wants. Sometimes he/she is certain to want something - but then is not happy when it's delivered.

    Nevertheless, requirements exist. And developers will only be paid if they deliver value. So we had better focus on doing that. Although it might sound trivial, I think it's important to state the corollary: we need to be able to trace anything we do as developers back to some requirement. You decide to use Go as the implementation language? Well, what's the customer requirement this decision is linked to? You decide to use WPF as the GUI technology? What's the customer requirement? You decide in favor of a layered architecture? What's the customer requirement? You decide to put code in three classes instead of just one? What's the customer requirement behind that? You decide to use MongoDB over MySql? What's the customer requirement behind that? And so on.

    I'm not saying any of these decisions are wrong. I'm just saying: whatever you decide, be clear about the requirement that's driving your decision. You have to be able to answer the question: why do you think X will deliver more value to the customer than the alternatives? Customers are not interested in romantic ideals of hard-working, well-meaning, quality-focused craftsmen. They don't care how and why you work - as long as what you deliver fulfills their needs. They want to trust you to recognize this as your top priority - and then deliver. That's all.

    Fundamental aspects of requirements

    If you're like me, you're probably not used to such scrutiny. You want to be trusted as a professional developer - and decide quite a few things following your gut feeling, or by relying on "established practices". That's OK in general and most of the time - but still, I think we should be more conscious about our decisions. That would make us more responsible, even more professional. But without further guidance it's hard to reason about the myriad decisions we have to make over the course of a software project.

    What I found helpful in this situation is structuring requirements into fundamental aspects. Instead of one large heap of requirements, there are then smaller blobs, and it's easier to check whether a decision falls within their scope. Sure, every project has its very own requirements. But all of them belong to just three major categories, I think: any requirement pertains either to functionality, to non-functional aspects, or to sustainability. For short, I call these aspects:

    Functionality, because such requirements describe which transformations a software should offer. For example: a calculator should be able to add and multiply real numbers; an auction website should enable you to set up an auction anytime, or to find auctions to bid on.

    Quality, because such requirements describe how the functionality is supposed to work, e.g. fast or secure. For example: a calculator should be able to calculate the sine of a value much faster than you could in your head; an auction website should accept bids from millions of users.

    Security of Investment, because functionality and quality need not just be delivered in any way. It's important to the customer to get them quickly - and not only today, but over the course of several years. This aspect introduces time into the "requirements equation".

    Security of Investment (SoI) sure is a non-functional requirement. But I think it's important not to subsume it under the Quality (Q) aspect, because SoI has quite special properties. For one, SoI for software means something completely different from what it means for hardware. If you buy hardware (a car, a hair dryer), you find it a worthwhile investment if the hardware does not change its functionality or quality over time. A car still running smoothly, with hardly any rust spots, after 10 years of daily usage would be a very secure investment. So for hardware (or material products, if you like), "unchangeability" in the face of usage is desirable.

    With software you want the contrary. Software that cannot be changed is a waste. SoI for software means "changeability". You want to be sure that the software you buy/order today can be changed, adapted, and improved over an unforeseeable number of years, so as to fit changes in its usage environment.

    But that's not the only reason why the SoI aspect is special. On top of changeability[1] (or evolvability) comes immeasurability. Evolvability cannot readily be measured by counting something. Whether the changeability is as high as the customer wants it cannot be determined by looking at metrics like Lines of Code, Cyclomatic Complexity, or Afferent Coupling. They may give a hint... but they are far, far from precise. That's because of the nature of changeability: it's different from performance or scalability. Also, a customer cannot tell upfront "how much" evolvability he/she wants. Whether requirements regarding Functionality (F) and Q have been met, a customer can tell you very quickly and very precisely: a calculation is missing, the calculation takes too long, the calculation time degrades with increased load, the calculation is accessible to the wrong users, and so on. All of that is very, or at least comparatively, easy to determine. But changeability... that's a whole different thing. Nevertheless, over time the customer will develop a feeling for whether changeability is good enough or degrading. He/she just has to watch the frequency of "WTF"s from developers ;-)

    F and Q are "timeless" requirement categories. Customers want us to deliver on them now. Just focusing on the now, though, is rarely beneficial in the long run. So SoI adds a counterweight to the requirements picture. Customers want SoI - whether they know it or not, whether they state it explicitly or not.

    In closing

    A customer's requirements are not monolithic. They are not all made the same. Rather, they fall into different categories. We as developers need to recognize these categories when confronted with a requirement - and take them into account. Only then can we make truly professional decisions, i.e. conscious and responsible ones.

    [1] I call this fundamental trait of software "changeability" and not "flexibility" to distinguish to whom it's a concern. "Flexibility" to me means the software as is can easily be adapted to a change in its environment, e.g. by tweaking some config data or adding a library which gets picked up by a plug-in engine. "Flexibility" thus is a matter of some user. "Changeability", on the other hand, to me means the software can easily be changed in its structure to adapt it to new requirements. That's a matter of the software developer.

    Read the article

  • USB software protection dongle for Java with an SDK which is cross-platform “for real”. Does it exist?

    - by Unai Vivi
    What I'd like to ask is whether anybody knows of a hardware USB dongle for software protection which offers truly complete out-of-the-box API support for cross-platform Java deployments. Its SDK should:

    - provide one jar (not one library per OS & bitness), ready to be added to a project as a library;
    - contain, in that jar, all the native stuff for the various OSes and bitnesses;
    - let you, from the application's point of view, continue to write (API calls) once and run everywhere, without having to care where the end-user will run the software;
    - itself deal with loading the appropriate native library.

    Does such a thing exist? With everything I've tried so far, you get different APIs and compiled libraries for win32, linux32, win64, linux64, etc. (or you even have to compile stuff yourself on the target machine) - but hey, we're doing Java here: we don't know (and don't care) where the program will run! And we can't expect the end-user to be a software engineer who will tweak (and break!) his Linux server, link libraries, mess with gcc, litter the filesystem, and so on. In general, Java support (in a transparent, cross-platform fashion) is quite bad with the dongle SDKs I've evaluated so far (e.g. KeyLok and SecuTech's UniKey). I even purchased (no free evaluation kit available) SecureMetric SDKs & dongles - they should have been "soooo" straightforward to integrate, according to the marketing material :\ - and they were the worst ever: SecureDongle X has no 64-bit support, and SecureDongle SD is not cross-platform at all. So, has anyone out there been through this and found the ultimate Java security USB dongle for cross-platform deployments?

    Note: the software is low-volume, high-value; the application is off-line (an intranet with no internet access), so no online-activation alternatives and the like.

    -- EDIT

    Tried out HASP dongles (used to be called "Aladdin") and added them to the no-no list: here, too, there is no out-of-the-box (out-of-the-jar) support. For example, the end Linux user has to manually put the .so library (the specific file for the appropriate bitness) in the right place on his filesystem and export an environment variable accordingly.

    -- EDIT 2

    I really don't understand all the negativity and all the downvoting: is this a taboo topic? Is it so hard to understand that a freelance developer has to put food on the table every day to feed his family and pay the bills at the end of the month? Please don't talk about "adding value" as a supplier, because that would be off-topic. Furthermore, I'm not in direct contact with end-customers; there's an intermediate reselling entity, and it's this entity I want to prevent from selling copies of the software without sharing the revenue.

    -- EDIT 3

    I'd like to emphasize that the question is looking for a technical answer, not one about opinions on business models, philosophical lucubrations on the concept of value, resellers' reliability, etc. I cannot change resellers: this isn't a "general purpose" kind of software but a very vertical one, and (for reasons not worth explaining here) I must go through them. I just need to prevent the "we sold 2 copies, here's your share [bwahaha, we sold 10]" scenario.

    Read the article

  • Development-led security vs administration-led security in a software product?

    - by haylem
    There are cases where you have the opportunity, as a developer, to enforce stricter security features and protections in a software product, though they could very well be managed at an environmental level (i.e., the operating system would take care of them). Where would you say you draw the line, and what elements do you factor into your decision?

    Concrete examples

    User management is the OS's responsibility: not exactly a security feature, but a similar case - Google Chrome used to not allow separate profiles. The stated reason (though it now supports multiple profiles for the same OS user) used to be that user management was the operating system's responsibility.

    Disabling web-form fields: a recurrent request I see addressed online is to have auto-completion disabled on form fields. Auto-completion didn't exist in old browsers, and it was a welcome feature when introduced, for people who needed to fill in forms often. But it also brought security concerns, and so some browsers started to implement, on top of the (obviously needed) setting in their own preferences panel, an autocomplete attribute for form or input fields. This has now been introduced into the upcoming HTML5 standard. For browsers that do not honor this attribute, strange hacks* are offered, like generating unique IDs and names for fields to keep them from being suggested in future forms (which comes with another herd of issues, like polluting your local auto-fill cache, and not preventing a password from being stored in it but instead probably duplicating its occurrences).

    In this particular case, and others, I'd argue that this is a user setting and that it's the user's desire and the user's responsibility to enable or disable auto-fill (by disabling the feature altogether). And if it is based on an internal policy and security requirement in a corporate environment, then substitute the administrator for the user in the above. I assume it could be counter-argued that the user may want to access non-critical applications (or sites) with this handy feature enabled, and critical applications with it disabled. But then I'd think that's what security zones are for (in some browsers), or a sign that you need a more secure (and dedicated) environment or account for those applications.

    * I obviously don't deny the ingenuity of the people who were forced to find workarounds, just the necessity of said workarounds.

    Questions

    That was a tad long-winded, so I guess my questions are: would you in general consider it to be the application's (hence, the developer's) responsibility? Where do you draw the line, if not in the "general" case?

    Read the article

  • Develop web site from existing software or cherry pick and use a web framework?

    - by erisco
    A small team and I are tasked with developing a web site. The client has referenced a particular open source project (we'll call it X) when describing some of the features. Because of this, the team wants to start with X and adapt it to satisfy the client. I have looked at X and its code, and in my opinion that would be unwise. However, my experience is limited, and I could really benefit from the insights of others so that I can figure out what I should be asserting as the right direction for the team. My red flags are going up, and this is why: X was developed in the earlier days of PHP; 500-line blocks of code are the norm; global variables are abundant; giant switch statements decide which page is shown; and there is no clear mapping between a URL and where the code for that page sits. From a feature-set standpoint, X is actually software specialized for a different task, and it carries dozens of features we don't need as core assumptions. We would be unable to adapt X through its plugin system. That said, there are a few features which can be mapped, with some modification, to suit our purposes. I believe this is the attraction the team feels. I would feel comfortable if, instead of using X directly, we lifted what is salvageable and useful to us. We could then use that code, and the same third-party libraries X uses, in a new code base built on top of a PHP web framework (specifically Agavi, so you understand what I mean by 'web framework'). The web framework gives us a strong MVC structure and provides the common facilities for web development, or adapters to work with third-party libraries that do so. We would also have a clean slate feature-wise to work from, which means we can work additively instead of subtractively. Because the code base would be better structured and contain none of what we don't need, it would be easier to document, which is a critical requirement of our client. So to summarize: the team wants to use X, whereas I want to take the salvageable bits from X and use a web framework instead. I want to bounce this opinion off others' experiences so that I can be better informed. Thanks for your insight.

    Read the article

  • Remote desktop software where the client need not install anything

    - by allentown
    I am primarily a Macintosh user, and I can usually walk a client through any troubles they may have, because I have a Macintosh in front of me. If they are on a different OS version, things are close enough, or I can remember enough, to get by. When trying to help clients on Windows, I get stuck. I do not have access to Windows, and even if I did, there are far too many versions of Outlook, all with their various esoteric settings and checkboxes, for me to ever see exactly what the client is seeing. I mostly just need to help them with email setup. Something like copilot.com may do the trick. What is the simplest remote-control software out there? Ideally, it would accomplish these:

    - No software needed on the remote end, or only a single .exe that they can toss when done.
    - Mac-based software on my end (I do have ARD, which supports VNC).
    - Free :) - if possible, that would be really nice.
    - A port-forwarding proxy run by the vendor. There is no way I can get the user to alter their router, or even plug directly into their WAN for a short time.

    On the Mac, I just have the client open iChat and this is all built in, proxying through AIM; I'm looking for the same between Windows and Mac.

    Read the article

  • SQL Azure Federation - how much data before performance benefits?

    - by Donald Hughes
    To avoid premature optimization, I don't want to implement SQL Azure's Federation too early. Is there a rule of thumb for how much data a table would need to hold before seeing performance benefits from sharding? I know there won't be a precise answer, as there are too many variables to consider, especially with much of SQL Azure's resources being hidden/unknown. To put it into several more concrete examples, would Federation improve performance in any of the table scenarios below?

    - 100,000 rows (~200 MB)
    - 1,000,000 rows (~2 GB)
    - 10,000,000 rows (~20 GB)
    - 100,000,000 rows (~200 GB)

    For the sake of elaboration, we can assume this is the largest table to be federated. It consists of order details and is joined to an orders table with a 'customer_id' foreign key, which would be the distribution key. This is a fairly standard multi-tenant CRUD order-entry system, with a typical assortment of reporting needs (customer order totals by day/month/year, etc.).

    Read the article

  • Zero-channel RAID for High Performance MySQL Server (IBM ServeRAID 8k) : Any Experience/Recommendation?

    - by prs563
    We are getting an IBM rack-mount server that comes with the IBM ServeRAID 8k storage controller, featuring zero-channel RAID and 256 MB of battery-backed cache. It supports RAID 10, which we need for our high-performance MySQL server with 4 x 15,000 RPM 300 GB SAS HDDs. This is mission-critical and we want as much bandwidth and performance as possible. Is this a good card, or should we replace it with another IBM RAID card?

    The IBM ServeRAID 8k SAS Controller option provides 256 MB of battery-backed 533 MHz DDR2 standard-power memory in a fixed mounting arrangement. The device attaches directly to the IBM planar and provides full RAID capability.

    - Manufacturer: IBM
    - Manufacturer part #: 25R8064
    - Cost Central item #: 10025907
    - Product description: IBM ServeRAID 8k SAS - storage controller (zero-channel RAID) - plug-in module
    - Buffer size: 256 MB
    - Supported devices: disk array (RAID)
    - Max storage devices: 8
    - RAID levels: RAID 0, RAID 1, RAID 5, RAID 6, RAID 10, RAID 1E
    - Manufacturer warranty: 1 year

    Read the article
