Daily Archives

Articles indexed Tuesday February 22 2011

Page 12 of 14

  • Leading a not-so-good team

    - by vinoth
    How would you manage if you were allocated a team of five with, say, four incompetent programmers and asked to lead it? Obviously you can't code for the four of them (you can, but that is not a good idea; at least, I burned out doing that). Have you come across these kinds of situations? Edit: I think I sounded rude by choosing the wrong word (incompetent) to describe my problem. To rephrase the question: how do you deal with people who do not complete assigned tasks (for whatever reason, ranging from incompetence to 'I don't care')?

    Read the article

  • Video documentary on the open source culture?

    - by explorest
    Hello, I'm looking for some videos on these subjects: a movie/documentary detailing the origin, history, and current state of the open source culture, and a movie/documentary on how open source software actually gets developed: what the technical workflows are, and how people create projects, recruit contributors, build a community, assign roles, track issues, assimilate newcomers, and so on. Could someone suggest a title?

    Read the article

  • Architecture/pattern resources for small applications and tools

    - by s73v3r
    I was wondering if anyone had any resources or advice on using architecture patterns like MVVM/MVC/MVP on small applications and tools, as opposed to large, enterprisey ones. EDIT: Most of the information I see on application architecture is aimed at large enterprise applications, but I'm just writing small programs and tools. Is it generally worthwhile to take on the overhead of an MVC/MVVM framework for these, or would I be better off keeping it simple?

    Read the article

  • Data migration - dangerous or essential?

    - by MRalwasser
    The software development department of my company is facing the problem that data migrations are considered potentially dangerous, especially by my managers. The background is that our customers are using a large amount of data of poor quality. The reasons for this are only partly related to our software quality; they mostly lie in the history of the data: most of it was migrated from predecessor systems, some bugs caused (mostly business-level) inconsistencies in the data records, and accidental misentries on the customer's side (which our software wrongly allowed) added more. The most important counter-arguments from my managers are that faulty data may turn into even worse data, that data troubles may alarm some managers on the customer's side, and that some of the customer's processes may stop working because they have adapted to our system's quirks. Personally, I consider data migration an integral part of software development: data migration is to data what refactoring is to code. I think it is essential for creating software that evolves; without it, we would have to build painful software that works around a bad data structure. I am asking you: what are your thoughts on data migration, especially in real-life cases and not only from a developer's perspective? Do you have any arguments against my managers' opinions? How does your company deal with data migrations and the difficulties they cause? Any other interesting thoughts that belong to this topic?

    Read the article

  • Computer Science Degrees and Real-World Experience

    - by Steven Elliott Jr
    Recently, at a family reunion-type event, I was asked by a high school student how important it is to get a computer science degree in order to get a job as a programmer, in lieu of actual programming experience. The kid has been working with Python and the Blender project, as he's into making games and the like; it sounds like he has some decent programming chops. Now, as someone who has gone through a computer science degree, my initial response is to say, "You absolutely MUST get a computer science degree in order to get a job as a programmer!" However, as I thought about this, I was unsure whether my initial reaction was due in part to my own suffering as a CS student or because I feel that this is actually the case. For my part, I can say that I rarely use anything I learned in college in terms of the extremely hard math, algorithms, and so on, but I did come away with a decent attitude and the willingness to work through tough problems. I just don't know what to tell this kid; I feel like I should tell him to do the CS degree, but I have hired so many programmers who majored in things like English, Philosophy, and other liberal arts degrees, and even some who never went to college. In fact, my best developer falls into this latter category: he got started writing software for his church or something, and then it took off into a passion. So, while I know this is one of those juicy potential downvote questions, I am curious what everyone else thinks about this topic. What would you tell a high school kid in this situation? Perhaps if he or she already knows a good deal of programming and loves it, a CS degree isn't necessary and a liberal arts degree could expand his horizons. I know one of the creators of the Django web framework was an American Literature major, and he is obviously a pretty gifted developer. Anyway, thanks for the consideration.

    Read the article

  • Licensing issues when using code from samples that come with an SDK

    - by Andrey
    Samples that come with an SDK are intended to demonstrate best practices and common reusable code, so it seems perfectly valid to take code from them. But samples usually come under licenses; for example, a lot of samples from Microsoft are released under the Microsoft Public License (MS-PL). How can I use code from samples if they are under rather strict licenses?

    Read the article

  • What are the standard practices for database access in .net?

    - by Gulshan
    I have seen weird database access practices in .NET. I have seen stored procedures for every database task. I have seen every database property name preceded by its table name. I have seen a fully separate data layer/.dll containing little or no business logic. I have seen a separate data access layer used alongside an ORM, playing the same role. And each time I have been told, "These are the standards you have to maintain." So, what are the real standards for data access in .NET? What are the rules you follow?

    Read the article

  • kill -SIGABRT does not generate a core file from a daemon started from crontab [closed]

    - by Guma
    I am running CentOS 5.5 and working on a server application for which I sometimes need to force a core dump so I can see what is going on. If I start my server from a shell and send it SIGABRT with kill, a core file is created. If I start the same program from crontab and then send it the same signal, the server is killed but no core file is generated. Does anyone know why that is, and what needs to be added to my code or changed in the system settings to allow core file generation? Just as a side note: I have ulimit set to unlimited in /etc/profile, and I have set kernel.core_uses_pid = 1 and kernel.core_pattern=/var/cores/%h-%e-%p.core in /etc/sysctl.conf. Also, my server app was added to crontab under the same login ID I use when running it from a shell. Any help greatly appreciated.
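
    The likely catch is that /etc/profile is only read by login shells, so a daemon launched by cron never inherits the unlimited core ulimit. A minimal sketch of one fix, assuming the crontab entry can be edited directly (the path and schedule below are placeholders); alternatively, the daemon itself can call setrlimit(RLIMIT_CORE, ...) early in main():

      # Hypothetical crontab line: raise the core-size limit in the same shell
      # that launches the daemon, since cron does not source /etc/profile.
      @reboot ulimit -c unlimited && /usr/local/bin/myserver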

    Read the article

  • Internal Libraries (Subversion Externals, 'library' branch, or just another folder)

    - by Ntsc
    We are currently working on multiple projects that need to share internal libraries, and the internal libraries are updated continually. Currently only one project needs to be stable, but soon we will need to have both projects stable at any given time. What is the best way to manage internal libraries in SVN? Currently we are using the 'just another folder' approach, like so: trunk\project1, trunk\project2, trunk\libs. It causes a major headache when a shared library is updated for project1 and project2 is then broken until the parts that use the library are updated. So after doing some research on SVN externals I thought of this: trunk\project1\libs (external to trunk\libs @ some revision), trunk\project2\libs (external to trunk\libs @ a different revision), trunk\libs. I'm a little worried about how externals work with commits, and about making library commits so complicated that I am the only one capable of doing them (mostly worried about branches with externals, as we use branches extensively). On top of that, we have multiple programming languages within each project, some of which don't support per-project library directories (at least not easily), so we would need to check out on a per-project basis instead of checking out the trunk. There is also the 'vendor'-style branching of libraries, but it has the same problem as above, where the library would have to be a subfolder of each project, and it is maybe a little too complicated for how few projects we have. Any insight would be nice. I've spent quite a bit of time reading the Subversion book and feel like I'm getting nowhere.
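
    A hedged sketch of the pinned-externals approach (the revision number and paths are illustrative; the ^/ relative-URL syntax needs Subversion 1.5 or later):

      # Pin project1's copy of the shared libraries to a known-good revision.
      svn propset svn:externals "-r 1234 ^/trunk/libs libs" trunk/project1
      svn commit -m "Pin libs external at r1234" trunk/project1
      svn update trunk/project1
      # Bump the pinned revision later, once project1 has been adapted to the new libs.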

    Read the article

  • What's the most productive coding environment?

    - by Ubiguchi
    I was speaking with an ex-colleague the other day about the most productive way to write code, and he said he found it best "to CIMP, or Code In My Pants". When I asked him exactly what he meant, he explained he found it best to work at home, coding at his own pace, dressed comfortably (in his pants), and communicating with his team through email, IM, or the telephone. Digesting his approach (which he describes to clients as the Complete Integrated Method of Programming), I realised my coding is also more productive in an isolated environment, which made me wonder whether the software industry has got it all wrong: should development really be done by dispersed teams of individuals, or are there advantages to geographical herding that make up for the added interruptions it brings? So has business got it wrong? Should development occur predominantly among geographically isolated individuals to increase productivity, or are there real reasons why herding developers together makes sense?

    Read the article

  • When using membership provider, do you use the user ID or the username?

    - by Chris
    I've come across this in a couple of different applications that I've worked on. They all used the ASP.NET Membership Provider for user accounts and for controlling access to certain areas, but when we've gotten down into the code I've noticed that in one we pass around the string-based username, like "Ralph Waters", while in another we pass around the Guid-based user ID from the membership table. Both seem to work: you can write methods that get by username or get by user ID, but both have felt somewhat "funny". When you pass a string like "Ralph Waters", you're essentially passing two separate words that make up a unique identifier; with a Guid, you're passing around a string/number combination that can be cast and is unique. So my question is this: when using the Membership Provider, which do you use to get back to the user, the username or the user ID? Thanks all!

    Read the article

  • How would it be if browsers natively *hosted* JavaScript frameworks? [closed]

    - by João Ramos
    More than 20 million websites nowadays are running jQuery, and more than a fifth of the top million websites are doing so. What if we, as designers and developers, could take advantage of locally cached JavaScript libraries like jQuery, Prototype, script.aculo.us, etc.? Wouldn't it be great if we could provide users with faster websites and experiences? EDIT: My apologies, I meant to ask how it would be if browsers hosted JavaScript frameworks, not merely supported them. Actually, my previous wording didn't make sense; sorry for that.
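
    Browsers don't bundle frameworks today, but a common approximation of the idea is loading jQuery from a shared public CDN, so one cached download is reused by every site that references the same URL. A sketch (the version number and the local fallback path are just examples):

      <!-- One cached copy serves every site that references this exact URL -->
      <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.5.0/jquery.min.js"></script>
      <!-- Fall back to a local copy if the CDN is unreachable -->
      <script>window.jQuery || document.write('<script src="/js/jquery.min.js"><\/script>')</script>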

    Read the article

  • Why does Chrome video performance substantially degrade after waking from suspend in 10.10?

    - by Grant Heaslip
    Note: for some more details, some of which may not be true given what I've figured out, see this post. When I first boot my computer, video performance in Chrome (both native H.264 HTML5 on YouTube and Vimeo, and in Flash) is perfectly reasonable: CPU usage stays low, everything works correctly, and the video is silky smooth. But for whatever reason, if I suspend my computer and then wake it up, video performance plummets. Full-screen HTML5 video is choppy at best, and full-screen Flash video basically brings my computer to its knees (I'm talking less than a frame a second, and a five-second lead time to leave full screen after hitting Esc). Restarting Chrome doesn't fix this; I need to completely restart my machine before performance goes back to normal. Video performance in other applications, such as Movie Player, doesn't seem to be affected at all by the suspend cycle; it's only Chrome. I'm using a Lenovo X201 with an Intel GMA HD graphics chipset and Intel components all around (I don't need any proprietary drivers). This didn't happen in 10.04, and I haven't changed anything that I think would have caused it. It's possible that a Chrome release could have caused this, but it seems less likely than a regression between 10.04 and 10.10. Any ideas? EDIT: In response to Georg's comment, logging in and out doesn't fix it. Restarting Compiz or switching to Metacity (at least by using "compiz/metacity --replace & disown"; am I doing it right?) doesn't help (actually, it seemed to help somewhat with Flash once, but I haven't been able to reproduce this). I'm not sure about GDM; when I use "sudo restart gdm" I get kicked back to the Linux console, which I have no idea how to get out of. Also, I want to make very clear that this isn't just a case of Flash sucking (it does, but that's beside the point). I'm seeing the same general problem with HTML5 videos, and Flash performs better on my Nexus One than it does on my Core i5 laptop. There's something screwy going on with Chrome and/or 10.10.

    Read the article

  • How can I recover from a frozen app when connecting over VNC?

    - by Pablojim
    I have an HTPC running 10.10. It has no keyboard or mouse attached, so I connect over VNC for administration and manual tasks. This usually works fine, but every so often an app running in fullscreen mode crashes. All I want to be able to do is kill this app from VNC. Is there a keyboard shortcut that will do this while preserving the VNC connection? If I use Ctrl + Alt + F1 I see the terminal, but I can no longer control it from VNC, and trying to return to the desktop doesn't work either. Is there an easy process I can follow?
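
    If no shortcut works, one workaround sketch is to kill the frozen program from another machine over SSH (the application name below is just a placeholder):

      # From any other machine on the LAN:
      ssh user@htpc
      pkill frozen-app          # or: pkill -9 frozen-app if it ignores SIGTERM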

    Read the article

  • Why doesn't my graphics card software support 1280*1024?

    - by Allwar
    Hi, I have an external monitor, a 20" 1280x1024 display. In Windows 7 it works fine at that resolution, but in Ubuntu it doesn't. Example: in Windows I connect it and activate it; done. In Ubuntu I connect it and the only resolutions available are the ones my 12" 1366x768 laptop screen supports. My laptop is an Asus 1201N. If I force it to use 1280x1024, both screens crash and I have to force a reboot. When I force it, I only force the external monitor; the laptop is already at its maximum of 1366x768. I connect it through VGA. (The graphics card supports 1280x1024 in Windows 7. #Fail) alvar@alvars-laptop:~$ disper -l display DFP-0: HSD121PHW1 resolutions: 320x175, 320x200, 360x200, 320x240, 400x300, 416x312, 512x384, 640x350, 576x432, 640x400, 680x384, 720x400, 640x480, 720x450, 640x512, 700x525, 800x512, 840x525, 800x600, 960x540, 832x624, 1024x768, 1366x768 display CRT-0: CRT-0 resolutions: 320x240, 400x300, 512x384, 680x384, 640x480, 800x600, 1024x768, 1152x864, 1360x768
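
    One thing worth trying from a terminal is registering the missing mode by hand with cvt and xrandr. This is only a sketch: the output name must match whatever xrandr itself reports, the modeline numbers should come from your own cvt output, and with the proprietary NVIDIA driver (which the DFP-0/CRT-0 names suggest is in use) this may not work at all, in which case nvidia-settings is the place to add the resolution instead.

      # Generate a modeline for 1280x1024 at 60 Hz and register it for the VGA output.
      cvt 1280 1024 60
      xrandr --newmode "1280x1024_60.00"  109.00  1280 1368 1496 1712  1024 1027 1034 1063 -hsync +vsync
      xrandr --addmode VGA-0 "1280x1024_60.00"     # replace VGA-0 with the name xrandr shows
      xrandr --output VGA-0 --mode "1280x1024_60.00"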

    Read the article

  • Video recording in cheese is slow

    - by Gaurav Butola
    Cheese records at a painfully low frame rate; video recording is really slow, almost unusable. How can I increase the fps in Cheese? I have an HCL laptop with a built-in 1.3 MP camera, a 2.47 GHz i3 processor, and 2 GB of RAM, running Maverick 32-bit. I installed Camorama from the Software Centre, which records video at a fine frame rate, but I can't seem to use it due to some error, so I suspect there is something to be tweaked in Cheese itself.

    Read the article

  • Are there plans for handwriting recognition?

    - by Patrick
    This is a big feature when it comes to putting Ubuntu onto tablets. Currently, the Netbook edition works great for that purpose and the pen digitiser is perfect, but handwriting recognition would be a real dealmaker (especially for my business; we could actually move to Linux) to compete with the Windows offering. CellWriter exists, but it only handles character and keyboard input (and I don't know about multitouch on the keyboard). It also needs to handle print and cursive, because character mode can be slow and uncomfortable (unless you're writing passwords). Lastly, CellWriter needs to have some default letter shapes rather than having to be trained from the start. There is a software package called MyScript (by Vision Objects) that handles all four modes (keyboard, character, print, cursive) plus calculator and fullscreen, but it's only free as a trial. Still, it would be nice to see it in the For Purchase section and the trial in the free section of the Software Centre. The only other options are for Chinese/Japanese/Korean characters. What would really make a difference for us is the integration of a formal API with the OS that activates automatically when running on a tablet, passes ink data to whatever recognition system is installed, and has something available (however rudimentary) to use it.

    Read the article

  • My files disappeared from the UbuntuOne synced folder

    - by Junji
    I set up an Ubuntu One account on PC1 (Ubuntu 10.10) and the same account on PC2 (Ubuntu 10.04). I did the following: created a file named maverick.txt in PC1's ~/Ubuntu One/log, and created a file named venus.txt in PC2's ~/Ubuntu One/log. Both files appeared on one.ubuntu.com. A few hours later, those two files had disappeared from PC1's Ubuntu One/log, from PC2's Ubuntu One/log, and from one.ubuntu.com. So my files are gone forever. Why did this happen? Is there any way to recover those files?

    Read the article

  • High load (and high temp) with idle processes

    - by Nanne
    I've got a semi-old laptop (Toshiba Satellite A110-228) that's been appointed 'laptop for the kids' by my sister. I've installed Ubuntu Netbook edition (10.10) on it because of the lack of memory, and it seems to work fine except for some heat issues, which were never a problem under Windows. It looks like I've got something similar to this problem: the load is generally 1 or higher (sometimes it's stuck at 0.80, but that's still way too high), while top/htop show only a couple of percent CPU use (which isn't too shocking, as I'm not doing anything). At this point all the software is stock, and I'd like to keep it that way because it's supposed to be the easy-to-maintain kids' computer. Now I'd like to find out: What could be the cause of the high load? Could it be, as this thread implies, some driver, or are there other things to check? How can I see what is really keeping the system hot and bothered, and check what is running? I'd like to pinpoint the culprit. What further steps should I take for debugging? The big bad internet leads me to believe that it might be the graphics drivers. The laptop has an Intel 945M chipset, but that doesn't seem to be one of the problem children in this regard (I read a lot about ATI drivers that need a special install). I'd not only welcome hints to directly solve this (duh) but also help in starting to debug what is going on. I am really hesitant to install an older kernel, as I want the machine to stay stock and easily upgradeable (because I don't live near it, it should run without me ;) ). As an afterthought: to keep the whole thing cooler, can I 'amp up' the fan control? It only kicks into "airplane" mode when the computer is at 95 Celsius, which is a tad late for my taste. (Screenshots of top and powertop output omitted.)
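
    A few starting points for the debugging part of the question, as a sketch:

      # Processes stuck in uninterruptible sleep (state D) inflate the load
      # average without using CPU; often a sign of a misbehaving driver or disk.
      ps -eo state,pid,cmd | grep '^D'
      # Interrupt counts, busiest processes, and per-process wakeups/power draw:
      cat /proc/interrupts
      top -b -n 1 | head -n 20
      sudo powertop
      # Temperatures (needs the lm-sensors package):
      sensors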

    Read the article

  • Sharing a mounted Truecrypt volume via Samba

    - by user10492
    I've been banging my head against the wall on this one for a while. I have a partition on a drive encrypted with TrueCrypt. In Windows, I mount it locally and share it on my local network. I'm trying to do the same in Ubuntu 10.10 but am running into permission issues. The TrueCrypt volume has odd permissions, and I just can't seem to figure out a way to access it (insecurely, of course) over my network; other, non-TrueCrypt volume shares work just fine. To recap: I mount a TrueCrypt volume in Ubuntu and set up a Samba share as normal. The share shows up on my local network, but accessing it gives 'permission denied'. Mounting it as a network drive using a password does not seem to work either. Any insight would be greatly appreciated.
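
    A hedged sketch of the usual workaround when the TrueCrypt volume is FAT- or NTFS-formatted: pass ownership options through TrueCrypt's --fs-options switch so the mounted files belong to a normal user instead of root, then share that mount point. The device, mount point, uid/gid, and share name below are placeholders.

      # Mount the volume so a regular user (uid/gid 1000) owns the files.
      truecrypt --fs-options="uid=1000,gid=1000,umask=002" /dev/sdb1 /media/secure

      # Illustrative /etc/samba/smb.conf share stanza:
      # [secure]
      #     path = /media/secure
      #     read only = no
      #     valid users = yourname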

    Read the article

  • How to get Google Earth installed via .DEB?

    - by George Edison
    Now I've really messed things up. A long time ago, I installed Google Earth via a binary installer from Google (v5.1, I think). Google now has version 6 available as a .deb, so I decided to install that. However, that seems to have messed up both installations, and now no matter what I do, I can't get Google Earth to run. Here's what I do: sudo apt-get purge google-earth-stable, followed by sudo dpkg -i --force-overwrite google-earth-stable_current_amd64.deb. I thought that would work, but when I run google-earth I get: /usr/bin/google-earth: 43: ./googleearth-bin: not found. How can I get it installed now?
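
    A hedged diagnostic sketch: the error suggests /usr/bin/google-earth is still the wrapper script left behind by the old binary installer, pointing at a relative googleearth-bin that no longer exists. Checking what owns that script and clearing out the old installation before reinstalling the .deb is worth a try (the old install directory varies; /opt/google-earth is just one common location):

      dpkg -S /usr/bin/google-earth        # which package, if any, owns the launcher
      head -n 5 /usr/bin/google-earth      # see where the old wrapper points
      # Remove leftovers from the old installer, then reinstall the .deb cleanly.
      sudo rm -rf /opt/google-earth
      sudo dpkg -i google-earth-stable_current_amd64.deb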

    Read the article

  • Wifi doesn't connect if I'm upstairs

    - by user11233
    Hello there. I'm finding that wifi connects when I'm in the living room, where the internet router is, but if I try to connect upstairs it's a no-go. Booting into Windows 7 (dual boot :D) I find that wifi does connect. I find this very confusing and have had it since the install. Please help! Kind regards. #sudo lshw -c network *-network description: Wireless interface product: PRO/Wireless 5100 AGN [Shiloh] Network Connection vendor: Intel Corporation physical id: 0 bus info: pci@0000:02:00.0 logical name: wlan0 version: 00 serial: 00:21:5d:1a:63:58 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=iwlagn driverversion=2.6.35-26-generic firmware=8.24.2.12 ip=10.0.0.4 latency=0 link=yes multicast=yes wireless=IEEE 802.11abg resources: irq:49 memory:de200000-de201fff Hope it helps! Update: I just tried to connect to the internet upstairs and to my surprise it does connect. I don't know exactly why. The only thing I've done recently is enter the commands suggested by Oli. Two things I've done on my own: 1) installed Jupiter and 2) installed laptop-mode-tools. I really don't know if these can affect my internet, but they're the only things I can think of. Thanks for all the help, really appreciate it :) PS: If someone thinks it might be helpful for others, I would like to help find a 'real' solution to this problem. Just post what I need to provide here and I will try my best to help.
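
    For tracking down a 'real' solution, comparing the signal level downstairs and upstairs is a reasonable first step, and power saving on the card (relevant given Jupiter and laptop-mode-tools) can hurt weak connections. A sketch:

      # Signal quality/level for the current connection (run it in both locations).
      iwconfig wlan0 | grep -i 'quality\|signal'
      # Nearby access points and their signal strengths:
      sudo iwlist wlan0 scan | grep -i 'essid\|quality'
      # Check whether power management is on for the card, and try turning it off:
      iwconfig wlan0 | grep -i 'power management'
      sudo iwconfig wlan0 power off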

    Read the article

  • How to change/create password keyring

    - by sadmicrowave
    When I try to remote-desktop to the server from another Ubuntu machine using the Remote Desktop Viewer, it asks me to enter the password, which I do, and then the viewer pane just goes black. When I come back and look at my server, it is saying that the password keyring no longer matches the password used to log in to the machine and asking me to re-enter the password. When I type in the password it doesn't take it; it just keeps popping up the same message over and over. I found a thread explaining that I should go to System > Preferences > Passwords and Encryption Keys, right-click on the keyring, and click Set as Default. I did that and the problem persists. I tried changing the password, but it told me that my original password was incorrect (even though it is the password I use to log in and to provide root authentication when asked), so I deleted the keyring in the hope of adding a new one, but there is no place in the GUI to add one. Can I add a new keyring through the command line? If so, how?
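
    One hedged command-line option on 10.x is simply to remove the old keyring files and let GNOME create a fresh one at the next login (with libpam-gnome-keyring installed, the new login keyring should pick up the current login password automatically). Note this discards any passwords stored in the old keyring; the backup directory name below is arbitrary.

      # GNOME 2 keeps keyrings here; back them up, then remove them and log out/in.
      mkdir -p ~/keyring-backup && cp ~/.gnome2/keyrings/*.keyring ~/keyring-backup/
      rm ~/.gnome2/keyrings/*.keyring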

    Read the article

  • Recommended flexible website solution?

    - by Omega
    My site has a MyBB forum installation, and that is pretty much all I need: forums. However, I need a homepage and a couple of other static pages for showing relevant information, links, etc. I don't need anything fancy; all I need is something very flexible regarding theme and style editing, plus a couple of simple modules, like public polls. That's all. I am very visually oriented, and I am looking for something that lets me edit pretty much every aspect of the site. These are mainly static pages, so I don't need something very complex. Some people tell me to use Dreamweaver, but quite honestly that is not what I am looking for, even though it does offer a lot of flexibility. I want something like Drupal, or some other simple web platform that is deeply editable in terms of graphics and style. What would you recommend? Thank you.

    Read the article

  • Is it possible to mod_rewrite BASED on the existence of a file/directory and uniqueID?

    - by JM4
    My site currently forces all non-www pages to use www. Ultimately, I am able to handle all unique subdomains and parse them correctly, but I am trying to achieve the following (ideally with mod_rewrite): when a consumer visits www.site.com/john4, the server processes that request as www.site.com/index.php?Agent=john4. Our requirements are: the URL should continue to show www.site.com/john4 even though it was internally rewritten to www.site.com/index.php?Agent=john4; and if a file (of any extension) or a directory exists with that name, the entire process stops and it serves that file instead. For example, www.site.com/file would pull up www.site.com/file.php if file.php existed on the server, and www.site.com/pages would go to www.site.com/pages/index.php if the pages directory exists. Thank you ahead of time; I am completely at a loss right now.
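
    A hedged .htaccess sketch of the usual pattern, assuming index.php sits in the document root and reusing the Agent parameter name from the question:

      RewriteEngine On
      # Serve real files and directories untouched.
      RewriteCond %{REQUEST_FILENAME} -f [OR]
      RewriteCond %{REQUEST_FILENAME} -d
      RewriteRule ^ - [L]
      # /file -> /file.php when that script exists.
      RewriteCond %{REQUEST_FILENAME}.php -f
      RewriteRule ^(.+)$ $1.php [L]
      # Anything else becomes an Agent lookup; the browser keeps showing /john4.
      RewriteRule ^([A-Za-z0-9_-]+)/?$ index.php?Agent=$1 [L,QSA]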

    Read the article
