Search Results

Search found 5312 results on 213 pages for 'hand e food'.


  • How can a non-technical person learn to write a spec for small projects?

    - by Joseph Turian
    How can a non-technical person learn to write specs for small projects? A friend of mine is trying to outsource some development on a statistics project. In particular, he does a lot of work in Excel, and wants to outsource the creation of scripts to do what he now does by hand. However, my friend is extremely non-technical and is poor at writing technical specs. When he does write a spec, it is written the way you would describe doing something in Excel (go to this cell and then copy the value to that cell). It is also overly verbose and repeats examples several times, and I'm not sure he properly describes corner cases.

    The first project he outsourced was a failure. I think he overdescribed some details but underdescribed corner cases. That, and/or the coder he hired didn't think through the corner cases and ask appropriate questions; I'm not sure. I got on IM with him and it took me half an hour to dig out a description that should have taken five minutes or less. I wrote the scripts for him in the end, but didn't examine why his process with the coder failed.

    He has asked me for help. However, I refuse to get involved, because taking his spec and translating it into clear requirements is 10x more work than executing on a clearly written spec. What is the right way for him to learn? Are there resources he could use? Are there ways he can learn from small, low-pressure practice projects with coders?

    [edit: Most of his scripts are statistical and data-processing oriented, e.g. take this column and run an average over it, or remove these rows under these conditions. So the challenge is different from spec'ing a web app.]
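
    For illustration, here is how unambiguous such a spec becomes once each spec line maps to one operation. This is a minimal sketch, assuming the data is a table of rows keyed by column name; the column names and values are hypothetical, not from the actual project:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class SpecExample
        {
            static void Main()
            {
                // Hypothetical table: one dictionary per row, keyed by column name.
                var rows = new List<Dictionary<string, double>>
                {
                    new() { ["Revenue"] = 120.0, ["Units"] = 3 },
                    new() { ["Revenue"] = 80.0,  ["Units"] = 0 },
                    new() { ["Revenue"] = 200.0, ["Units"] = 7 },
                };

                // Spec line 1: "Remove every row where Units equals 0."
                var kept = rows.Where(r => r["Units"] != 0).ToList();

                // Spec line 2: "Report the average of Revenue over the remaining rows."
                double avgRevenue = kept.Average(r => r["Revenue"]);
                Console.WriteLine($"Average revenue: {avgRevenue}"); // 160
            }
        }

    Each spec line above corresponds to exactly one operation, which is the level of precision the coder needed and the half-hour IM conversation had to reconstruct.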

    Read the article

  • Learning to be a good developer: what parts can you skip over?

    - by Andrew M
    I have set myself the goal of becoming a decent developer by this time next year. By this I mean full experience of the development 'lifecycle,' a few good apps/sites/webapps under my belt, and most importantly being able to work at a steady pace without getting sidelined for hours by some should-know-this-already technique.

    I'm not starting from scratch. I've written a lot of HTML/CSS, SQL, JavaScript, Python and VB.NET, and studied other languages like C and Java. I know about things like OOP, design patterns, TDD, complexity, computational linguistics, pointers/references, functional programming, and other academic/theoretical matters. It's just that I can't say I've really done these things yet.

    So I want to get up to speed, and I want to know what things I can leave till a later date. For instance, studying algorithms and the maths behind them is interesting and all, but so far I've hardly needed to write anything but the most basic nested loops. Investigating assembly to get a clearer picture of low-level operations would be cool... but I imagine it rarely impinges on daily work. On the other hand, looking at a functional programming language might help me write programs that are more comprehensible and less prone to hidden failures (at the moment I'm finding the biggest difficulty is when the complexity of the app exceeds my capacity to understand it - for instance, passing data around was fine... until I had to start doing it with AJAX, which was a painful step up). I could spend time working through case studies of design patterns, but I'm not sure how many of them get used in 'real life.'

    I'm a programmer with basic abilities - what skills should I focus on developing? (My Unix skills are also very weak, as is my knowledge of Windows configuration... not sure how much time I should spend on those.)

    Read the article

  • How to handle growing QA reporting requirements?

    - by Phillip Jackson
    Some background: Our company is growing very quickly - in 3 years we've tripled in size and there are no signs of stopping any time soon. Our marketing department has expanded and our IT requirements have as well. When I first arrived everything was managed in Dreamweaver and Excel spreadsheets, and we've worked hard to implement bug tracking, version control, continuous integration, and multi-stage deployment. It's been a long hard road, and now we need to get more organized.

    The problem at hand: Management would like to track, per developer, who is generating the most issues at the QA stage (post unit testing, regression, and post-production issues specifically). This becomes a fine balance because many issues can't be reported granularly (e.g. per URL or per "page"), yet that's how management would like reporting to be broken down. Further, severity has to be taken into account. We have drafted standards for each of these areas specific to our environment. Developers don't want to be nicked for 100+ instances of an issue if it was a problem with an include or inheritance. I had a suggestion to "score" bugs based on severity... but nobody likes that. We can't enter issues for every individual module affected by a global issue.

    [UPDATED] The actual questions: How do medium-sized businesses and code shops handle bug tracking, reporting, and providing useful metrics to management? What kinds of KPIs are better metrics for employee performance? What is the most common way to provide per-developer reporting as far as time-to-close, reopens, etc.? Do large enterprises ignore the efforts of individuals and rather focus on the team?

    Some other questions: Is this too granular a level of reporting? Is this considered 'blame culture'? If you were a developer working in this environment, what would you define as a measurable goal for this year to track your progress, with the reward for achieving the goal being a bonus?
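
    For what it's worth, the "score bugs based on severity" suggestion is cheap to make concrete. This is a hedged sketch under stated assumptions - the severity weights, the Issue shape, and the rule that a global issue is entered once per root cause are hypothetical choices, not established practice:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        enum Severity { Cosmetic, Minor, Major, Critical }

        // One record per root-cause issue: a global include bug is entered
        // once, not once per page it breaks.
        record Issue(string Developer, Severity Severity);

        class QaScoring
        {
            // Hypothetical weights; the drafted in-house standards would set these.
            static readonly Dictionary<Severity, int> Weight = new()
            {
                [Severity.Cosmetic] = 1,
                [Severity.Minor]    = 2,
                [Severity.Major]    = 5,
                [Severity.Critical] = 13,
            };

            static void Main()
            {
                var issues = new List<Issue>
                {
                    new("alice", Severity.Major),
                    new("alice", Severity.Cosmetic),
                    new("bob",   Severity.Critical),
                };

                var scores = issues
                    .GroupBy(i => i.Developer)
                    .Select(g => new { Dev = g.Key, Score = g.Sum(i => Weight[i.Severity]) })
                    .OrderByDescending(s => s.Score);

                foreach (var s in scores)
                    Console.WriteLine($"{s.Dev}: {s.Score}"); // bob: 13, alice: 6
            }
        }

    Whether a weighted sum is a fair KPI is exactly what the question disputes; the sketch only shows that the mechanics are trivial once the weights are agreed.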

    Read the article

  • 2D map/plane with nodes overlaid that supports panning, scaling and clicking on nodes

    - by garlicman
    I'm trying my hand at Android development and seem to be running into an invisible ceiling in trying to get what I want accomplished. Basically I'm trying to create an app that renders a 2D surface map that I can (pinch) zoom and pan. I'll have to place nodes on the surface of the map that scale/zoom and pan in relation to the surface.

    I started out with a 2D ImageView approach and got as far as pinch zoom, pan and laying out nodes as relative ImageViews, but all the methods I tried for getting X, Y, W, H for the 2D surface were always off for some reason. Additionally, I was never able to scale the node ImageViews correctly, and as a result never got far enough to work out their scaled X, Y offsets.

    So I decided to go back to 3D rendering. Conceptually, pan/zoom is camera manipulation, so I don't have to mess with how to scale the 2D map or the nodes. But I need a starting point or sample to get me going that's close to what I'm trying to achieve; a sample of a translucent spinning cube isn't helping as much as I need it to. Any tips? Links, insults and sympathy are all welcome!
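
    The "pan/zoom is camera manipulation" idea reduces to two functions, whatever the rendering API. This is a minimal sketch of that math, not Android-specific code; the struct and field names are made up for illustration:

        using System;

        // Map and nodes stay in fixed world coordinates; panning and zooming
        // only change the camera, i.e. one world<->screen transform.
        struct Camera2D
        {
            public float PanX, PanY; // world point visible at the screen's top-left
            public float Zoom;       // screen pixels per world unit

            public (float X, float Y) WorldToScreen(float wx, float wy)
                => ((wx - PanX) * Zoom, (wy - PanY) * Zoom);

            public (float X, float Y) ScreenToWorld(float sx, float sy)
                => (sx / Zoom + PanX, sy / Zoom + PanY);
        }

        class CameraDemo
        {
            static void Main()
            {
                var cam = new Camera2D { PanX = 100, PanY = 50, Zoom = 2.0f };

                // A node pinned at world (120, 80) draws at screen (40, 60)...
                var s = cam.WorldToScreen(120, 80);

                // ...and a tap at that screen point maps back to the node,
                // which is all that clicking on nodes requires.
                var w = cam.ScreenToWorld(s.X, s.Y);
                Console.WriteLine($"screen=({s.X}, {s.Y}) world=({w.X}, {w.Y})");
            }
        }

    Pinch-zoom then just scales Zoom around the pinch focus and adjusts PanX/PanY so the point under the fingers stays put; nodes never need rescaling because they are drawn through the same transform as the map.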

    Read the article

  • Versioning millions of files with distributed SCM

    - by C. Lawrence Wenham
    I'm looking into the feasibility of using off-the-shelf distributed SCMs such as Git or Mercurial to manage millions of XML files. Each file would be a commercial transaction, such as a purchase order, that would be updated perhaps 10 times during the lifecycle of the transaction until it is "done" and changes no more. And by "manage", I mean that the SCM would be used not just to version the files, but also to replicate them to other machines for redundancy and transfer of IP.

    Let's suppose, for the sake of example, that a goal is to provide good performance if it were handling the volume of orders that Amazon.com claimed to have at its peak in December 2010: about 150,000 orders per minute. We're expecting the system to be distributed over many servers in order to get reasonable performance. We're also planning to use solid-state drives exclusively.

    There is a reason why we don't want to use an RDBMS for primary storage, but it's a bit beyond the scope of this question. Does anyone have first-hand experience with the performance of distributed SCMs under such a load, and what strategies were used? Open source preferred, since the final product is to be FOSS, too.
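
    One strategy that tends to come up for spreading that write volume over many servers is sharding by a stable hash of the transaction ID, so each file always lands in the same small repository. This is purely a hypothetical sketch of that routing, not something the question settled on:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        class RepoSharder
        {
            // Hypothetical shard count; in practice sized so each repository
            // stays small enough for the SCM to handle comfortably.
            const int ShardCount = 256;

            // Stable, deterministic routing: the same order ID always maps to
            // the same repository, on every server, with no lookup table.
            static int ShardFor(string transactionId)
            {
                byte[] hash = SHA1.HashData(Encoding.UTF8.GetBytes(transactionId));
                return BitConverter.ToUInt16(hash, 0) % ShardCount;
            }

            static void Main()
            {
                string id = "PO-2010-12-000042"; // made-up purchase order ID
                Console.WriteLine($"{id} -> repository shard {ShardFor(id)}");
            }
        }

    At 150,000 orders per minute, 256 shards would each see roughly 10 commits per second, which is the kind of per-repository arithmetic any feasibility test would need to start from.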

    Read the article

  • Caching strategies for entities and collections

    - by Rob West
    We currently have an application framework in which we automatically cache both entities and collections of entities at the business layer (using .NET cache). So the method GetWidget(int id) checks the cache using a key GetWidget_Id_{0} before hitting the database, and the method GetWidgetsByStatusId(int statusId) checks the cache using GetWidgets_Collections_ByStatusId_{0}. If the objects are not in the cache they are retrieved from the database and added to the cache.

    This approach is obviously quick for read scenarios, and as a blanket approach is quick for us to implement, but requires large numbers of cache keys to be purged when CRUD operations are carried out on entities. Obviously, as additional methods are added this impacts performance and the benefits of caching diminish.

    I'm interested in alternative approaches to handling caching of collections. I know that NHibernate caches a list of the identifiers in the collection rather than the actual entities. Is this an approach other people have tried - what are the pros and cons? In particular I am looking for options that optimise performance and can be implemented automatically through boilerplate generated code (we have our own code generation tool). I know some people will say that caching needs to be done by hand each time to meet the needs of the specific situation, but I am looking for something that will get us most of the way automatically.
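
    For reference, the NHibernate-style scheme mentioned above can be sketched in a few lines. This is a hedged illustration only - the key formats and class names mirror the question, a plain Dictionary stands in for the .NET cache, and the data access calls are stubs:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Widget { public int Id; public int StatusId; public string Name; }

        class IdListCachingRepository
        {
            readonly Dictionary<string, object> cache = new(); // stand-in cache

            public Widget GetWidget(int id)
            {
                string key = $"GetWidget_Id_{id}";
                if (cache.TryGetValue(key, out var hit)) return (Widget)hit;
                var w = QueryDatabaseById(id);
                cache[key] = w;
                return w;
            }

            public List<Widget> GetWidgetsByStatusId(int statusId)
            {
                string listKey = $"GetWidgets_Ids_ByStatusId_{statusId}";
                if (cache.TryGetValue(listKey, out var hit))
                    // Collection hit: the cache holds ids only; resolve each
                    // entity through its own per-id entry.
                    return ((List<int>)hit).Select(GetWidget).ToList();

                var widgets = QueryDatabaseByStatus(statusId);
                cache[listKey] = widgets.Select(w => w.Id).ToList();
                foreach (var w in widgets) cache[$"GetWidget_Id_{w.Id}"] = w;
                return widgets;
            }

            // Editing a widget now invalidates one entity key; cached id lists
            // stay valid unless membership itself changes (insert, delete, or
            // a StatusId move), which shrinks the purge surface per CRUD call.
            public void UpdateWidget(Widget w) => cache[$"GetWidget_Id_{w.Id}"] = w;

            List<Widget> QueryDatabaseByStatus(int statusId) => new(); // stub
            Widget QueryDatabaseById(int id) => new() { Id = id };     // stub
        }

    The trade-off is an extra per-entity lookup on every collection hit in exchange for far fewer keys to purge on writes, which is the pro/con balance the question is asking about.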

    Read the article

  • Code Reuse is (Damn) Hard

    - by James Michael Hare
    Being a development team lead, the task of interviewing new candidates was part of my job. Like any typical interview, we started with some easy questions to get them warmed up and help calm their nerves before hitting the hard stuff. One of those easier questions was almost always: "Name some benefits of object-oriented development." Nearly every time, the candidate would chime in with a plethora of canned answers which typically included: "it helps ease code reuse." Of course, this is a gross oversimplification. Tools only ease reuse; it's developers who ultimately make code reusable or not, regardless of the language or methodology.

    But it did get me thinking... we always used to say that as part of our mantra as to why Object-Oriented Programming was so great. With polymorphism, inheritance, encapsulation, etc. we in essence set up the concepts to help facilitate reuse as much as possible. And yes, as a developer now of many years, I unquestioningly held that belief for ages before it really struck me how my views on reuse have jaded over the years. In fact, in many ways Agile rightly eschews reuse as taking a backseat to developing what's needed for the here and now. It used to be I was in complete opposition to that view, but more and more I've come to see the logic in it. Too many times I've seen developers (myself included) get lost in design paralysis trying to come up with the perfect abstraction that would stand for all time. Nearly without fail, all of these pieces of code become obsolete in a matter of months or years.

    It's not that I don't like reuse – it's just that reuse is hard. In fact, reuse is DAMN hard. Many times it is just a distraction that eats up architect and developer time, and worse yet can be counter-productive and force wrong decisions. Now don't get me wrong, I love the idea of reusable code when it makes sense. These are the few cases where you are designing something that is inherently reusable. The problem is, most business-class code is inherently unfit for reuse!

    Furthermore, code that is reusable will often fail to be reused if you don't have the proper framework in place for effective reuse: standardized versioning, building, releasing, and documenting of the components. That should always be standard across the board when promoting reusable code. All of this is hard, and it should only be done when you have code that is truly reusable, or you will be exerting a large amount of development effort for very little bang for your buck.

    But my goal here is not to get into how to reuse (that is a topic unto itself) but what should be reused. First, let's look at an extension method. There are many times where I want to kick off a thread to handle a task, and when I want to rein that thread in, of course I want to do a Join on it. But what if I only want to wait a limited amount of time and then Abort? Well, I could of course write that logic out by hand each time, but it seemed like a great extension method:

        public static class ThreadExtensions
        {
            public static bool JoinOrAbort(this Thread thread, TimeSpan timeToWait)
            {
                bool isJoined = false;

                if (thread != null)
                {
                    isJoined = thread.Join(timeToWait);

                    if (!isJoined)
                    {
                        thread.Abort();
                    }
                }
                return isJoined;
            }
        }

    When I look at this code, I can immediately see things that jump out at me as reasons why this code is very reusable. Some of them are standard OO principles, and some are kind-of home-grown litmus tests:

    - Single Responsibility Principle (SRP) – The only reason this extension method need change is if the Thread class itself changes (one responsibility).
    - Stable Dependencies Principle (SDP) – This method only depends on classes that are more stable than it is (System.Threading.Thread), and is itself very stable, hence other classes may safely depend on it. It is also not dependent on any business domain, and thus isn't subject to changes as the business itself changes.
    - Open-Closed Principle (OCP) – This class is inherently closed to change.
    - Small and Stable Problem Domain – This method only cares about System.Threading.Thread.
    - All-or-None Usage – A user of a reusable class should want the functionality of that class, not parts of that functionality. That's not to say they must use every method, but they shouldn't be using a method just to get half of its result.
    - Cost of Reuse vs. Cost to Recreate – Since this class is highly stable and minimally complex, we can offer it up for reuse very cheaply by promoting it as "ready-to-go" and already unit tested (important!) and available through a standard release cycle (very important!).

    Okay, all seems good there. Now let's look at an entity and DAO. I don't know about you all, but there have been times I've been in organizations that get the grand idea that all DAOs and entities should be standardized and shared. While this may work for small or static organizations, it's near ludicrous for anything large or volatile.

        namespace Shared.Entities
        {
            public class Account
            {
                public int Id { get; set; }
                public string Name { get; set; }
                public Address HomeAddress { get; set; }
                public int Age { get; set; }
                public DateTime LastUsed { get; set; }
                // etc, etc, etc...
            }
        }

        ...

        namespace Shared.DataAccess
        {
            public class AccountDao
            {
                public Account FindAccount(int id)
                {
                    // dao logic to query and return account
                }

                ...
            }
        }

    Now to be fair, I'm not saying there doesn't exist an organization where some entities may be extremely static and unchanging. But at best such entities and DAOs will be problematic cases of reuse. Let's examine those same tests:

    - Single Responsibility Principle (SRP) – The reasons to change for these classes will be strongly dependent on what the definition of an account is, which can change over time and may have multiple influences depending on the number of systems an account can cover.
    - Stable Dependencies Principle (SDP) – This method depends on the data model beneath it, which in turn is largely dependent on the business definition of an account, which can be inherently unstable.
    - Open-Closed Principle (OCP) – This class is not really closed for modification. Every time the account definition changes, you'd need to modify this class.
    - Small and Stable Problem Domain – The definition of an account is inherently unstable and in fact may be very large. What if you are designing a system that aggregates account information from several sources?
    - All-or-None Usage – What if your view of the account encompasses data from 3 different sources, but you only care about one of those sources or one piece of data? Should you have to take the hit of looking up all the other data? On the other hand, should you have ten different methods returning portions of data in the chunks people tend to ask for? Neither is really a great solution.
    - Cost of Reuse vs. Cost to Recreate – DAOs are really trivial to rewrite, and unless your definition of an account is EXTREMELY stable, the cost to promote, support, and release a reusable account entity and DAO is usually far higher than the cost to recreate as needed.

    It's no accident that my case for reuse was a utility class and my case for non-reuse was an entity/DAO. In general, the smaller and more stable an abstraction is, the higher its level of reuse. When I became the lead of the Shared Components Committee at my workplace, one of the original goals we looked at satisfying was to find (or create), version, release, and promote a shared library of common utility classes, frameworks, and data access objects. Now, of course, many of you will point to NHibernate and Entity for the latter, but we were looking at larger, macro collections of data that span multiple data sources of varying types (databases, web services, etc.).

    As we got deeper and deeper into the details of how to manage and release these items, it quickly became apparent that while the case for reuse was typically a slam dunk for utilities and frameworks, the data access objects just didn't "smell" right. We ended up having session after session of design meetings to try and find the right way to share these data access components. When someone asked me why it was taking so long to iron out the shared entities, my response was quite simple: "Reuse is hard..." And that's when I realized that while reuse is an awesome goal and we should strive to make code maintainable, often you end up creating far more work for yourself than necessary by trying to force code to be reusable that inherently isn't.

    Think about the times you've worked in a company where, in the design session, people fight over the best way to implement a class to make it maximally reusable, extensible, and any other buzzwordable. Then think about how quickly that design became obsolete. Many times I set out to do a project and think, "yes, this is the best design, I can extend it easily!" only to find out the business requirements change COMPLETELY in such a way that the design is rendered invalid. Code, in general, tends to rust and age over time. As such, writing reusable code can often be difficult, many times ends up being a futile exercise, and worse yet sometimes makes the code harder to maintain because it obfuscates the design in the name of extensibility or reusability.

    So what do I think are reusable components?

    - Generic utility classes – these tend to be small classes that assist in a task and have no business context whatsoever.
    - Implementation abstraction frameworks – home-grown frameworks that try to isolate changes to third-party products you may be depending on (like writing a messaging abstraction layer for publishing/subscribing that is independent of whether you use JMS, MSMQ, etc.).
    - Simplification and uniformity frameworks – to some extent this is similar to an abstraction framework, but there may be one chosen provider plus a development-shop mandate to perform certain complex tasks in a certain way. Or, perhaps, to simplify and dumb down a complex task for the average developer (such as implementing a particular development shop's method of encryption).

    And what are less reusable?

    - Application and business layers – tend to fluctuate a lot as requirements change and new features are added, so tend to be an unstable dependency. May be reused across applications but are also very volatile.
    - Entities and data access layers – these tend to be tuned to the scope of the application, so reusing them can be hard unless the abstraction is very stable.

    So what's the big lesson? Reuse is hard. In fact it's damn hard. And much of the time I'm not convinced we should focus too hard on it. If you're designing a utility or framework, then by all means design it for reuse. But you must also set down a good versioning, release, and documentation process to maximize your chances. For anything else, design it to be maintainable and extendable, but don't waste the effort on reusability for something that most likely will be obsolete in a year or two anyway.

    Read the article

  • A mechanism to include site title in every page, but not in <title> element

    - by Saeed Neamati
    Each site can have a name, for example "site x". Each page also has a name (or title) that should appear in the <title> tag in the header. However, many websites out there use the combination site name - page name as the value for the <title> tag, which I find a little short of semantic.

    On the other hand, if you only include the page title in the <title> tag, search engines won't find your site by its name. For example, if your site's name is Thought Results and you don't include it in page titles, then a search for Thought Results won't find your site in SERPs.

    Thus I'm searching for a mechanism that both includes the site title (not the page title) in every page, and also includes only the page title in the <title> tag, to get more semantic results. Is there any way to achieve this?

    Read the article

  • With which class should I start Test-Driven Development of a card game application? And what would be the next 5 to 7 tests?

    - by Maxis
    I have started to write a card game application. Some model classes:

    - CardSuit, CardValue, Card
    - Deck, IDeckCreator, RegularDeckCreator, DoubleDeckCreator
    - Board
    - Hand

    and some game classes:

    - Turn, TurnHandler
    - IPlayer, ComputerPlayer, HumanPlayer
    - IAttackStrategy, SimpleAttachStrategy, IDefenceStrategy, SimpleDefenceStrategy
    - GameData, Game

    are already written. My idea is to create an engine where two computer players could play the game, and then later I could add the UI part.

    For some time now I have been reading about Test-Driven Development (TDD), and I have the idea of starting the application again from scratch, as currently I have a tendency to write code that isn't needed yet but seems usable in future. The code also doesn't have any tests, and it is hard to add them now. It seems TDD could improve all these issues - a minimum of needed code, good test coverage, and it could also help me arrive at the right application design.

    But I have one problem - I can't decide where to start TDD. Should I start from the bottom - the Card-related classes - or somewhere at the top - Game, TurnHandler, ...? With which class would you start? And what would be the next 5 to 7 tests? (Use the card game you know best.) I would like to start TDD with your help and then continue on my own!
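
    To make the "start from the bottom" option concrete, here is a hedged sketch of what test number one could look like, using class names from the list above. In TDD the test comes first, so Deck, Card and RegularDeckCreator would only gain the Create/Draw/Count/Cards members once a test demands them - those member names are guesses, not your actual API:

        using NUnit.Framework;

        [TestFixture]
        public class DeckTests
        {
            [Test]
            public void RegularDeckCreator_Creates52UniqueCards()
            {
                IDeckCreator creator = new RegularDeckCreator();
                Deck deck = creator.Create();

                Assert.AreEqual(52, deck.Count);
                CollectionAssert.AllItemsAreUnique(deck.Cards);
            }

            [Test]
            public void DrawingACard_RemovesItFromTheDeck()
            {
                Deck deck = new RegularDeckCreator().Create();
                Card top = deck.Draw();

                Assert.AreEqual(51, deck.Count);
                CollectionAssert.DoesNotContain(deck.Cards, top);
            }
        }

    Natural follow-ups in the same style: DoubleDeckCreator yields 104 cards, drawing from an empty deck fails loudly, a Hand receives dealt cards, and only then a TurnHandler test that drives two ComputerPlayers - roughly the 5 to 7 tests asked about.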

    Read the article

  • Access Denied

    - by Tony Davis
    When Microsoft executives wake up in the night screaming, I suspect they are having a nightmare about their own version of Frankenstein's monster. Created with the best of intentions, without thinking too hard about the long-term strategy, and having long outlived its usefulness, the monster still lives on, occasionally wreaking vengeance on the innocent. Its name is Access: a living synthesis of disparate body parts that is resistant to all attempts at a mercy killing.

    In 1986, Microsoft had no database products, and needed one for their new OS/2 operating system, the successor to MSDOS. They bought exclusive rights to Sybase DataServer, and were also intent on developing a desktop database to capture Ashton-Tate's dominance of that market with dBase. This project, first called 'Omega' and later 'Cirrus', eventually spawned two products: Visual Basic in 1991 and Access in late 1992. Whereas Visual Basic battled with PowerBuilder for dominance in the client-server market, Access easily won the desktop database battle, with dBase III and DataEase falling away.

    Access did an excellent job of abstracting and simplifying the task of building small database applications in a short amount of time, for a small number of departmental users, and often for a transient requirement. There is an excellent front end and forms generator. We not only see it in Access, but parts of it also reappear in SSMS. It's good. A business user can pull together useful reports without relying on extensive technical support. A skilled Access programmer can deliver a fairly sophisticated application while the traditional client-server programmer is still sharpening his pencil. Even for the SQL Server programmer, the forms generator of Access is useful for sketching out application designs.

    So far, so good, but here's where the problems start: Access ties together two different products, and the backend of Access is the bugbear. The limitations of Jet/ACE are well known and documented. They range from MDB files that are prone to corruption, especially as they grow in size, to pathetic security and "copy and paste" backups. The biggest problem, though, was an infamous lack of scalability. Because Microsoft never realized how long the product would last, they put little energy into improving the beast. Microsoft 'ate their own dog food' by using Access for Microsoft Exchange and Outlook. They choked on it. For years, scalability and performance problems with Exchange Server have been laid at the door of the Jet Blue engine on which it relies. Substantial development work in Exchange 2010 was required just to improve the engine and storage schema so that it more efficiently handled the reading and writing of mails. The alternative of using SQL Server just never panned out.

    The Jet engine was designed to limit concurrent users to a small number (10-20). When Access applications outgrew this, bitter experience proved that there really is no easy upgrade path from Access to SQL Server, beyond rewriting the whole lot from scratch. The various initiatives to do this never quite bridged the cultural gulf between Access and a true relational database.

    So, what are the obvious alternatives for small, strategic database applications? I know many users who, for simple 'list maintenance' requirements, are very happy using Excel databases. Surely, now that PowerPivot has led the way, it is time for Microsoft to offer a new RAD package for database application development; namely an Excel-based front end for SQL Server Express. In that way, we'll have a powerful and familiar front end to a scalable database, and a clear upgrade path when an app takes off and needs to go enterprise.

    Cheers,
    Tony.

    Read the article

  • Elementary OS boots to a terminal (other OS) [on hold]

    - by Benjamin Watson
    I'm new to this site; please forgive me if I missed some posting protocol of some sort. I am attempting to install Luna on my Samsung S2 laptop (A8 AMD Radeon 7640G), and when I click on Try Luna, it just pulls up a terminal after the insignia (the curvy E). When I install it, same issue. CTRL-ALT-F7 reveals this (hand typed, sorry if there are typos):

        Starting preload:
        *starting CUPS printing spooler/server
        *stopping save kernel messages preload.
        fsck from util-linux 2.20.1
        fsck from util-linux 2.20.1
        dosfsck 3.0.12, 29 oct 2011 FAT32, LFN
        /dev/sda1: 3 files, 245/189518 clusters
        /dev/sda2: clean, 133841/30294016 files, 2529529/121164544 blocks
        Skipping profile in /etc/apparmor.d/disable: usr.sbin.rsyslogd
        *starting AppArmor profiles
        speech-dispacher disabled; edit /etc/default/speech-dispenser
        *stopping system V initialisation compatibility
        *starting system V runlevel compatability
        *starting apci daemon
        *starting anac(h)ronistic cron
        *starting save kernal messages
        *starting ntp server ntpd
        *starting regular background program processing damon
        *starting deferred execution scheduler
        *stopping anac(h)ronistic cron
        *starting LightDM Display Manager
        *starting bluetooth daemon
        *starting mDNS/DNS-SD daemon
        *starting CPU interrupts balancing daemon
        *stopping Send an event to indicate plymouth is up
        saned disabled ; edit /etc/default/saned
        *starting network connection manager
        *starting crash report submission daemon
        *checking battery state...

    That's it. I can't make heads or tails of it. Please note that while I've been running Linux for about a year, I'm still fairly new to all of this, so try to be detailed in your explanations and/or descriptions of what I need to do. Any/all help would be appreciated. Thank you for your time.

    Read the article

  • Is browser fingerprinting a viable technique for identifying anonymous users?

    - by SMrF
    Is browser fingerprinting a sufficient method for uniquely identifying anonymous users? What if you incorporate biometric data like mouse gestures or typing patterns?

    The other day I ran into the Panopticlick experiment EFF is running on browser fingerprints. Of course I immediately thought of the privacy repercussions and how it could be used for evil. But on the other hand, this could be used for great good and, at the very least, it's a tempting problem to work on.

    While researching the topic I found a few companies using browser fingerprinting to fight fraud. And after sending out a few emails, I can confirm at least one major dating site is using browser fingerprinting as but one mechanism to detect fake accounts. (Note: they have found it's not unique enough to act as an identity when scaling up to millions of users. But my programmer brain doesn't want to believe them.)

    Here is one company using browser fingerprints for fraud detection and prevention: http://www.bluecava.com/

    Here is a pretty comprehensive list of stuff you can use as unique identifiers in a browser: http://browserspy.dk/
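
    For intuition, the core of the technique is just concatenating many individually weak signals and hashing them into one key. A toy sketch under that assumption - the signal list is hypothetical and far shorter than what Panopticlick or BrowserSpy actually collect:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        class FingerprintDemo
        {
            // Combine whatever signals the server can observe into one stable key.
            static string Fingerprint(params string[] signals) =>
                Convert.ToHexString(
                    SHA256.HashData(Encoding.UTF8.GetBytes(string.Join("|", signals))));

            static void Main()
            {
                string fp = Fingerprint(
                    "Mozilla/5.0 (Windows NT 6.1; rv:8.0)", // User-Agent header
                    "1920x1080x24",                          // screen geometry
                    "Europe/Berlin",                         // timezone
                    "Arial,Verdana,Tahoma");                 // detected font list
                Console.WriteLine(fp);
                // No single line above identifies anyone; EFF's point is that
                // the combination often carries enough bits to be unique.
            }
        }

    Typing cadence or mouse-gesture features would enter the same way, as extra signals, though being noisy they are usually matched approximately rather than hashed exactly.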

    Read the article

  • Setup for mounting kerberized nfs home directory - gssd not finding valid kerberos ticket

    - by janm
    Our home directories are exported via kerberized NFS, so the user needs a valid Kerberos ticket to be able to mount their home. This setup works fine with our existing clients and server. Now we want to add an 11.10 client, and have thus set up LDAP and Kerberos together with pam_mount. The LDAP authentication works and users can log in via ssh; however, their homes cannot be mounted.

    When pam_mount is configured to mount as root, gssd does not find a valid Kerberos ticket and the mount fails:

        Nov 22 17:34:26 zelda rpc.gssd[929]: handle_gssd_upcall: 'mech=krb5 uid=0 enctypes=18,17,16,23,3,1,2 '
        Nov 22 17:34:26 zelda rpc.gssd[929]: handling krb5 upcall (/var/lib/nfs/rpc_pipefs/nfs/clnt2)
        Nov 22 17:34:26 zelda rpc.gssd[929]: process_krb5_upcall: service is '<null>'
        Nov 22 17:34:26 zelda rpc.gssd[929]: getting credentials for client with uid 0 for server purple.physcip.uni-stuttgart.de
        Nov 22 17:34:26 zelda rpc.gssd[929]: CC file '/tmp/krb5cc_65678_Ku2226' being considered, with preferred realm 'PURPLE.PHYSCIP.UNI-STUTTGART.DE'
        Nov 22 17:34:26 zelda rpc.gssd[929]: CC file '/tmp/krb5cc_65678_Ku2226' owned by 65678, not 0
        Nov 22 17:34:26 zelda rpc.gssd[929]: WARNING: Failed to create krb5 context for user with uid 0 for server purple.physcip.uni-stuttgart.de
        Nov 22 17:34:26 zelda rpc.gssd[929]: doing error downfall

    When pam_mount is instead configured with the noroot=1 option, it cannot mount the volume at all:

        Nov 22 17:33:58 zelda sshd[2226]: pam_krb5(sshd:auth): user phy65678 authenticated as [email protected]
        Nov 22 17:33:58 zelda sshd[2226]: Accepted password for phy65678 from 129.69.74.20 port 51875 ssh2
        Nov 22 17:33:58 zelda sshd[2226]: pam_unix(sshd:session): session opened for user phy65678 by (uid=0)
        Nov 22 17:33:58 zelda sshd[2226]: pam_mount(mount.c:69): Messages from underlying mount program:
        Nov 22 17:33:58 zelda sshd[2226]: pam_mount(mount.c:73): mount: only root can do that
        Nov 22 17:33:58 zelda sshd[2226]: pam_mount(pam_mount.c:521): mount of /Volumes/home/phy65678 failed

    So how can we allow users of a specific group to perform NFS mounts? If this does not work, can we make pam_mount use root but pass the correct uid?

    Read the article

  • What is a good basic/flexible cms for a small website? [closed]

    - by Samuel
    Possible Duplicate: Which Content Management System (CMS) should I use?

    I'm designing a very basic portfolio website for an artist. It features a blog, portfolio, CV and contact page. I've hand-coded the basics of this site in PHP/Java, as it is a very small website (and I like coding by hand). But I need a simple CMS backend for the dynamic parts of the website (the blog/portfolio).

    The big systems (Ruby, Joomla, WordPress) are far too invasive for my liking (and frankly a bit beyond my capabilities). WordPress, for example, requires too much adaptation of the design to the WordPress structure, and Ruby is far too extensive for a simple site like this (in my opinion).

    So what I'm looking for is a (preferably open source) CMS that has a simple backend for the artist to use as a blogger, with a MySQL database for the content, that will allow me to insert content with simple tags (using Smarty tags, for example), but is otherwise not too invasive or demanding in terms of the required page structure. Does anyone know of a good CMS that fits this description?

    P.S.: I have tried PHPNews and CMS Made Simple, but PHPNews was a little too basic (though very close to what I'm looking for) and CMS Made Simple was way too slow (but otherwise also pretty close to what I wanted, though a bit too extensive).

    Read the article

  • C#.NET (AForge) against Java (JavaCV, JMF) for video processing

    - by Leron
    I'm starting to get really confused as I look deeper and deeper into the video processing world and search for optimal choices. This is the reason for posting this and some other questions: to try to navigate myself in the best possible way.

    I really like Java - working with Java, coding with Java - and at the same time C# is not that different from Java, and Visual Studio is maybe the best IDE I've worked with. So even though I really want to do my projects in Java so I can become a better and better Java programmer, I'm also really attracted to video processing, and even though I'm still at the beginning of this journey I want to take the right path. So I'm really in doubt whether Java can be used in a production environment for serious video processing software.

    As the title says, I have already been looking at maybe the two most used technologies for video processing in Java - JMF and JavaCV - and I'm starting to think that even though they are used and provide some functionality, when it comes to real work on a real project they're not the first thing that comes to one's mind - I mean, to someone with a professional opinion about this. On the other hand, I haven't had the time to investigate the .NET (C# specifically) options, but even AForge looks like a lot more serious a library than those provided for Java.

    So in general, either way I'm going to spend a lot of time learning some technology and trying to do something that makes sense with it, and my plan is for that work to eventually become my headline project - to represent my skills and eventually help me find a job in the field. So I really don't want to spend time learning something that will give me the programming result I want but at the same time is not needed in real-world development.

    So what is your opinion - which language/technology is better for this specific purpose? Which one is worth more in the terms I specified above?

    Read the article

  • Is unit testing development or testing?

    - by Rubio
    I had a discussion with a testing manager about the role of unit and integration testing. She requested that developers report what they have unit and integration tested, and how. My perspective is that unit and integration testing are part of the development process, not the testing process. Beyond semantics, what I mean is that unit and integration tests should not be included in the testing reports, and systems testers should not be concerned about them. My reasoning is based on two things.

    Unit and integration tests are planned and performed against an interface and a contract, always. Regardless of whether you use formalized contracts, you still test what e.g. a method is supposed to do, i.e. a contract. In integration testing you test the interface between two distinct modules. The interface and the contract determine when the test passes. But you always test a limited part of the whole system. Systems testing, on the other hand, is planned and performed against the system specifications. The spec determines when the test passes.

    I don't see any value in communicating the breadth and depth of unit and integration tests to the (systems) tester. Suppose I write a report that lists what kind of unit tests are performed on a particular business layer class. What is he/she supposed to take away from that? Judging what should and shouldn't be tested from that is a false conclusion, because the system may still not function the way the specs require even though all unit and integration tests pass.

    This might seem like useless academic discussion, but if you work in a strictly formal environment as I do, it's actually important in determining how we do things. Anyway, am I totally wrong? (Sorry for the long post.)

    Read the article

  • Modern techniques for spriting

    - by DevilWithin
    Hello, I would like to know the workflow for making modern 2D game artwork. How are the assets made nowadays? Bitmap? Vector-based? Hand-drawn and painted? Drawn digitally? Modeled in 3D and exported to bitmaps?

    I would like some information on programs as well, for fine-looking art. Why does Flash's vector art style look good in most games? How do I make equivalent graphics with external tools? Or graphics equally good and not vector-based, anyway. Any special hints for animating?

    An answer oriented towards a one-man-army indie developer with little experience but some artistic sense would be appreciated! Not a complete dummy with paint programs, but also not a master at all - I just need efficient ways to achieve results. Thanks.

    NOTE: Pixel art is not the goal of this question; nothing related to direct pixel manipulation should be brought up here - but you're free to do exactly that :)

    Read the article

  • Issue in understanding how to compare the performance of classifiers using ROC

    - by user1214586
    I am trying to demystify pattern recognition techniques and have understood a few of them. I am trying to design a classifier M. A gesture is classified based on the Hamming distance between the sample time series y and the training time series x. The results of the classifier are probabilistic values.

    There are 3 classes/categories with labels A, B, C which classify hand gestures, with 100 samples for each class to be classified (single feature, data length = 100). The data are different time series (x coordinate vs time). The training set is used to assign probabilities indicating which gesture has occurred how many times. So, out of 10 training samples, if gesture A appeared 6 times then the probability that a gesture falls under category A is P(A) = 0.6, and similarly P(B) = 0.3 and P(C) = 0.1.

    Now I am trying to compare the performance of this classifier with a Bayes classifier, k-NN, principal component analysis (PCA) and a neural network. On what basis, with what parameters and by what method should I compare them if I use ROC or cross-validation? The features for my classifier are the probabilistic values for the ROC plot, so what would the features be for k-NN, Bayes classification and PCA? Is there code for this that would be useful? What should the value of k be if there are 3 classes of gestures? Please help. I am in a fix.
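
    Whatever the classifier, the ROC machinery itself is uniform, which is what makes the comparison fair: for one class (say A, one-vs-rest) you need each sample's score for A and its true label, then you sweep a threshold. A minimal sketch with made-up scores and labels:

        using System;
        using System.Linq;

        class RocSketch
        {
            static void Main()
            {
                // (score the classifier assigned to class A, sample truly is A?)
                var samples = new (double Score, bool IsA)[]
                {
                    (0.9, true), (0.8, true), (0.7, false), (0.6, true),
                    (0.5, false), (0.4, true), (0.3, false), (0.2, false),
                };
                int pos = samples.Count(s => s.IsA);
                int neg = samples.Length - pos;

                // One ROC point per threshold t: predict "A" when Score >= t.
                foreach (double t in samples.Select(s => s.Score)
                                            .Distinct().OrderByDescending(x => x))
                {
                    double tpr = samples.Count(s => s.Score >= t && s.IsA) / (double)pos;
                    double fpr = samples.Count(s => s.Score >= t && !s.IsA) / (double)neg;
                    Console.WriteLine($"t={t:0.0}  TPR={tpr:0.00}  FPR={fpr:0.00}");
                }
            }
        }

    k-NN (the fraction of the k neighbours voting A), naive Bayes (the posterior for A) and a PCA-then-classify pipeline each produce their own per-sample scores, but they all feed the identical loop above, so the resulting curves - ideally cross-validated - are directly comparable.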

    Read the article

  • Should we write detailed architecture design or just an outline when designing a program?

    - by EpsilonVector
    When I'm doing design for a task, I keep fighting the nagging feeling that, aside from being a general outline, it's going to be more or less ignored in the end. I'll give you an example: I was writing a frontend for a device that has read/write operations. It made perfect sense in the class diagram to give it a read and a write function. Yet when it came down to actually writing them, I realized they were literally the same function with just one line of code changed (a read vs a write function call), so to avoid code duplication I ended up implementing a do_io function with a parameter that distinguishes between operations. Goodbye, original design.

    This is not a terribly disruptive change, but it happens often and can happen in more critical parts of the program as well, so I can't help wondering if there's a point to designing in more detail than a general outline, at least when it comes to the program's architecture (obviously when you are specifying an API you have to spell everything out). This might be just the result of my inexperience in doing design, but on the other hand we have agile methodologies which sort of say "we give up on planning far ahead, everything is going to change in a few days anyway", which is often how I feel. So, how exactly should I "use" design?
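
    As an aside, the read/write collapse described above is small enough to sketch, and it suggests the outline can survive even when the internals change. A hypothetical reconstruction (the device calls are stubs):

        using System;

        enum IoDirection { Read, Write }

        class DeviceFrontend
        {
            // The class diagram's promise still holds: callers see read and write.
            public int Read(byte[] buffer)  => DoIo(IoDirection.Read, buffer);
            public int Write(byte[] buffer) => DoIo(IoDirection.Write, buffer);

            // The implementation that actually emerged: one body, one line varying.
            int DoIo(IoDirection dir, byte[] buffer)
            {
                if (buffer == null) throw new ArgumentNullException(nameof(buffer));
                // ...shared setup (open device, prepare transfer)...
                int transferred = dir == IoDirection.Read
                    ? DeviceReadCall(buffer)
                    : DeviceWriteCall(buffer);
                // ...shared teardown (check status, release device)...
                return transferred;
            }

            int DeviceReadCall(byte[] b)  => b.Length; // stub for the real driver call
            int DeviceWriteCall(byte[] b) => b.Length; // stub for the real driver call
        }

    Read that way, the design wasn't ignored so much as kept at the interface level, which is one answer to how much detail an upfront design usefully carries.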

    Read the article

  • Ubuntu 12.04.1 Update Manager and Synaptic Package Manager not working

    - by Ashoke
    I recently installed Ubuntu 12.04.1, 64-bit, on an Intel quad-core system. There is a red circle with a white 'minus' sign in the upper right-hand corner. At the end of a long message displayed below the red circle, it says in grey that INSTALLED PACKAGES HAVE UNMET DEPENDENCIES. None of the following is working:

    1. Ubuntu Software Center: not opening.

    2. Update Manager:

        Could not initialize the package information
        An unresolvable problem occurred while initializing the package information.
        Please report this bug against the 'update-manager' package and include the following error message:
        'E:Encountered a section with no Package: header,
        E:Problem with MergeList /var/lib/apt/lists/in.archive.ubuntu.com_ubuntu_dists_precise-updates_multiverse_binary-i386_Packages,
        E:The package lists or status file could not be parsed or opened.'

    3. Synaptic Package Manager:

        An error occurred
        The following details are provided:
        E: Encountered a section with no Package: header
        E: Problem with MergeList /var/lib/apt/lists/in.archive.ubuntu.com_ubuntu_dists_precise-updates_multiverse_binary-i386_Packages
        E: The package lists or status file could not be parsed or opened.
        E: _cache-open() failed, please report.

    Otherwise, the computer is working fine with broadband WiFi internet. Please help.

    Read the article

  • Dissing Architects, or "What's wrong with the coffee?"

    - by Bob Rhubart
    In my conversations with people in architect roles, tales of animosity, disrespect, and outright hostility aren't uncommon. And it's clear that in more than a few organizations architects regularly face a tough uphill climb. For architects with the requisite combination of technical, organizational, and people skills, that rough treatment is grossly undeserved. But tales of unqualified people in positions up and down the IT food chain are also easy to come by. So what's the other side of the architect story? Are some architects tarnishing the role and making life miserable for their more qualified colleagues?

    The various quotes included below were culled from a variety of sources. The criticism is harsh, and the people behind these quotes clearly have issues with architects. Still, whether based on mere opinion or actual experience, the comments shed some light on behaviors that should raise red flags for anyone pursuing a career as an architect. If you're an architect, and you've ever noticed that your coffee tastes like window cleaner, or your car is repeatedly keyed, or no one ever holds the elevator for you, maybe you need to do a little soul searching...

    Those Who Can, Code; Those Who Can't, Architect | Joe Winchester [May 18, 2007]

    "At the moment there seems to be an extremely unhealthy obsession in software with the concept of architecture. A colleague of mine, a recent graduate, told me he wished to become a software architect. He was drawn to the glamour of being able to come up with grandiose ideas - sweeping generalized designs, creating presentations to audiences of acronym addicts, writing esoteric academic papers, speaking at conferences attended by headless engineers on company expense accounts hungrily seeking out this year's grail, and creating e-mails with huge cc lists from people whose signature footer is more interesting than the content. I tried to re-orient him into actually doing some coding, to join a team that has a good product and keen users both of whom are pushing requirements forward, to no avail. Somehow the lure of being an architecture astronaut was too strong and I lost him to the dark side."

    Don't Let Architecture Astronauts Scare You | Joel Spolsky [April 21, 2001]

    "It's very hard to get them to write code or design programs, because they won't stop thinking about Architecture. They're astronauts because they are above the oxygen level, I don't know how they're breathing. They tend to work for really big companies that can afford to have lots of unproductive people with really advanced degrees that don't contribute to the bottom line. Remember that the architecture people are solving problems that they think they can solve, not problems which are useful to solve."

    Non Coding Architects Suck | Richard Henderson [May 24, 2010]

    "If a guy with a badge saying 'system architect' looks blank on low-level issues then he is not an architect, he is a business-analyst who went on a course. He will probably wax lyrical on all things high-level and 'important.' He will produce lovely object hierarchies without a clue to implementation. He will have a moustache and play golf."

    Architects Play Golf | Sunir Shah [August 15, 2012]

    "Often arrogant architects are difficult to get a hold of during the implementation phase because they no longer feel the need to stick around. Especially around midnight when most of the poor sob [sic] developers are still banging away. After all, they've already solved the problem--the rest is just an implementation exercise."

    Engineer vs Architect (part of a discussion on the IT Architect Network Group on LinkedIn)

    "[An] architect spends his time producing white papers full of acronyms he does not understand but that impress his boss, [while the] engineer keeps his head down and does the actual job."

    Architects Don't Code | [Author Unknown]

    "Faulty belief: System Architects don't need to code anymore. They know what they are talking about by virtue of the fact that they are System Architects."

    Read the article

  • WPF Applications – Handling the Unhandled

    - by David Totzke
    Instead of just letting your application crash, you can attach a method to the DispatcherUnhandledException event and one to AppDomain.CurrentDomain.UnhandledException. You wire these up in the code-behind of your application, which by default is App.xaml.cs. You can log these errors or throw up a message Don Box and tell the user what happened. Then you shut down the app gracefully. You shut it down because something bad happened that you weren't expecting, and at this point there is no guarantee as to the state of the stack or memory or anything really. All bets are off.

    If, on the other hand, the method for the UnhandledException is empty, and the method for the DispatcherUnhandledException handler ends up in a call to a method called LogError(), and the LogError() method is FUCKING EMPTY, and you just swallow the exceptions and keep on running, then, not so much. I spent nearly a day trying to track down a bug that would have been obvious had something been logged, or if it had just crashed. It's my own fault I suppose. I knew these were hooked up. I just never suspected that there wouldn't be any implementation at all. Live and learn.

    Customs Man at Heathrow: Anything to declare, Sir?
    Jekyll and Hyde: Man has not evolved an inch from the slime that spawned him.
    Customs Man at Heathrow: Very Good, Sir.

    I tend to agree.

    Dave

    Just because I can…
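
    For reference, a minimal sketch of that wiring in App.xaml.cs - the event names are the real WPF/.NET ones, while LogError stands in for whatever sink the application uses (the empty version of which is exactly the bug described above):

        using System;
        using System.Windows;
        using System.Windows.Threading;

        public partial class App : Application
        {
            protected override void OnStartup(StartupEventArgs e)
            {
                base.OnStartup(e);

                // Exceptions thrown on the UI (dispatcher) thread land here.
                DispatcherUnhandledException += OnDispatcherUnhandledException;

                // Everything else - worker threads, finalizers - lands here.
                AppDomain.CurrentDomain.UnhandledException += OnDomainUnhandledException;
            }

            void OnDispatcherUnhandledException(object sender,
                DispatcherUnhandledExceptionEventArgs e)
            {
                LogError(e.Exception);            // actually log it
                MessageBox.Show("Something went wrong. The application will close.");
                e.Handled = true;                 // stop WPF from rethrowing...
                Shutdown(-1);                     // ...but still exit gracefully
            }

            void OnDomainUnhandledException(object sender, UnhandledExceptionEventArgs e)
                => LogError(e.ExceptionObject as Exception);

            void LogError(Exception ex)
            {
                // Hypothetical sink; the one hard rule is that this body
                // must never be left empty.
                Console.Error.WriteLine(ex);
            }
        }

    Handled = true plus an explicit Shutdown keeps the "log, tell the user, then quit" contract: the process still ends, but on your terms rather than the runtime's.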

    Read the article

  • How to talk a client out of a Flash website?

    - by bunglestink
    I have recently been doing a bunch of web side projects through word-of-mouth recommendations only. Although I am much more of a programmer than a designer by any means, my design skills are not terrible, and I do not hate dealing with UI like many programmers. As a result, I find myself lured into a bunch of side projects where, aside from a minimal back end for content administration, most of the programming is on front-end interfaces (read javascript/css).

    By far the biggest frustration I have had is convincing clients that they do not want Flash. Aside from the fact that I really do not enjoy Flash "development", there are many practical reasons why Flash is not desirable (lack of compatibility across devices, decreased client accessibility, plug-in requirements, increased development time, etc.). Instead of just flat out telling the clients "I will not build you a Flash website", I would much rather use tactics to convince/explain to them that this is not what they actually want - i.e., that Flash will not meet their requirements any better than standard HTML/CSS/JS, and will distract users from their content.

    What kind of first-hand experience do others have with this? How do you explain to someone that JavaScript/CSS/AJAX is usually a better option for most websites? Why do people want to use Flash so badly to begin with? This question pertains to clients who do not have any technical reasons for wanting Flash, but just want it because they think it makes pretty websites.

    Read the article

  • When and why you should use void (instead of e.g. bool/int)

    - by Jonas
    I occasionally run into methods where a developer chose to return something that isn't critical to the function. I mean, when looking at the code, it apparently works just as well as a void, and after a moment of thought I ask, "Why?" Does this sound familiar?

    Sometimes I would agree that it is often better to return something like a bool or int rather than just use a void. I'm not sure, though, about the pros and cons in the big picture. Depending on the situation, returning an int can make the caller aware of the number of rows or objects affected by the method (e.g., 5 records saved to MSSQL). If a method like "InsertSomething" returns a boolean, I can have the method designed to return true on success, else false; the caller can choose to act or not on that information.

    On the other hand, might it make the purpose of a method call less clear? Bad coding often forces me to double-check the method content. If it returns something, it tells you that the method is of a type you have to do something with the returned result. Another issue: if the method implementation is unknown to you, what did the developer decide to return that isn't function-critical? Of course you can comment it.

    A return value has to be processed, when the processing could have ended at the closing bracket of the method. What happens under the hood? Did the called method get false because of a thrown error, or did it return false due to the evaluated result?

    What are your experiences with this? How would you act on this?
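
    To pin down the trade-off, here is a hedged sketch built around the question's own "InsertSomething" example; the repository internals are hypothetical stubs:

        using System;

        class WidgetRepository
        {
            // Option 1: void. Failure must surface as an exception, so a
            // caller can never mistake "error" for "evaluated to false".
            public void InsertSomething(string name)
            {
                if (name == null) throw new ArgumentNullException(nameof(name));
                // ...insert; throw on failure...
            }

            // Option 2: a return value. Callers learn the outcome (e.g. rows
            // affected, like "5 records saved to MSSQL") but are free to
            // ignore it, and a bare false stays ambiguous unless documented.
            public int InsertMany(string[] names)
            {
                int affected = 0;
                foreach (var n in names)
                    if (n != null) { /* ...insert... */ affected++; }
                return affected;
            }
        }

    The "what happens under the hood" worry from the question is the deciding line: if a false could mean either a failure or a legitimate result, an exception plus void (or a richer result type) says it more honestly than a bool.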

    Read the article

  • What is the ideal length of a method?

    - by iPhoneDeveloper
    In object-oriented programming there is no exact rule on the maximum length of a method, but I still found these two quotes somewhat contradictory, so I would like to hear what you think.

    In Clean Code: A Handbook of Agile Software Craftsmanship, Robert Martin says: "The first rule of functions is that they should be small. The second rule of functions is that they should be smaller than that. Functions should not be 100 lines long. Functions should hardly ever be 20 lines long." And he gives an example from Java code he saw from Kent Beck: "Every function in his program was just two, or three, or four lines long. Each was transparently obvious. Each told a story. And each led you to the next in a compelling order. That's how short your functions should be!"

    This sounds great, but on the other hand, in Code Complete, Steve McConnell says something very different: the routine should be allowed to grow organically up to 100-200 lines, and decades of evidence say that routines of such length are no more error-prone than shorter routines. And he gives a reference to a study that says routines 65 lines or longer are cheaper to develop.

    So while there are diverging opinions about the matter, is there a functional best practice for determining the ideal length of a method for you?

    Read the article
