Search Results

Search found 37 results on 2 pages for 'dsimcha'.

Page 1/2 | 1 2  | Next Page >

  • Ubuntu Natty: 32-bit userland, 64-bit kernel?

    - by dsimcha
    I'm trying to manually install a 64-bit kernel for 32-bit Ubuntu. I have my reasons for doing so, but they're too complicated to explain here. Prior to Natty, this worked fine. Now, on Natty, I get the following error message when I try doing it the same way:

        dsimcha@dsimcha-laptop:~$ sudo dpkg -i --force-architecture linux-image-2.6.38-8-server_2.6.38-8.42_amd64.deb
        [sudo] password for dsimcha:
        dpkg: error processing linux-image-2.6.38-8-server_2.6.38-8.42_amd64.deb (--install):
         cannot access archive: No such file or directory
        Errors were encountered while processing:
         linux-image-2.6.38-8-server_2.6.38-8.42_amd64.deb
        dsimcha@dsimcha-laptop:~$ cd Downloads/
        dsimcha@dsimcha-laptop:~/Downloads$ sudo dpkg -i --force-architecture linux-image-2.6.38-8-server_2.6.38-8.42_amd64.deb
        dpkg: warning: overriding problem because --force enabled:
         package architecture (amd64) does not match system (i386)
        (Reading database ... 159153 files and directories currently installed.)
        Preparing to replace linux-image-2.6.38-8-server:amd64 2.6.38-8.42 (using linux-image-2.6.38-8-server_2.6.38-8.42_amd64.deb) ...
        Done.
        Unpacking replacement linux-image-2.6.38-8-server:amd64 ...
        Examining /etc/kernel/postrm.d .
        run-parts: executing /etc/kernel/postrm.d/initramfs-tools 2.6.38-8-server /boot/vmlinuz-2.6.38-8-server
        run-parts: executing /etc/kernel/postrm.d/zz-update-grub 2.6.38-8-server /boot/vmlinuz-2.6.38-8-server
        dpkg: dependency problems prevent configuration of linux-image-2.6.38-8-server:amd64:
         linux-image-2.6.38-8-server:amd64 depends on initramfs-tools (>= 0.36ubuntu6).
         linux-image-2.6.38-8-server:amd64 depends on coreutils | fileutils (>= 4.0); however:
          Package coreutils:amd64 is not installed.
         linux-image-2.6.38-8-server:amd64 depends on module-init-tools (>= 3.3-pre11-4ubuntu3); however:
         linux-image-2.6.38-8-server:amd64 depends on wireless-crda; however:
        dpkg: error processing linux-image-2.6.38-8-server:amd64 (--install):
         dependency problems - leaving unconfigured
        Errors were encountered while processing:
         linux-image-2.6.38-8-server:amd64

    When I try the dependencies manually, I get, for example:

        dsimcha@dsimcha-laptop:~/Downloads$ sudo dpkg -i --force-architecture coreutils_8.5-1ubuntu6_amd64.deb
        dpkg: warning: overriding problem because --force enabled:
         package architecture (amd64) does not match system (i386)
        dpkg: error processing coreutils_8.5-1ubuntu6_amd64.deb (--install):
         coreutils:amd64 8.5-1ubuntu6 (Multi-Arch: no) is not co-installable with coreutils:i386 8.5-1ubuntu6 (Multi-Arch: no) which is currently installed
        Errors were encountered while processing:
         coreutils_8.5-1ubuntu6_amd64.deb

    Has anyone had any success installing 64-bit kernels on 32-bit Natty? If so, how can this be done?

    Read the article

  • Surviving MATLAB and R as a Hardcore Programmer

    - by dsimcha
    I love programming in languages that seem geared towards hardcore programmers. (My favorites are Python and D.) MATLAB is geared towards engineers and R is geared towards statisticians, and it seems like these languages were designed by people who aren't hardcore programmers and don't think like hardcore programmers. I always find them somewhat awkward to use, and to some extent I can't put my finger on why. Here are some issues I have managed to identify:

    (Both): The extreme emphasis on vectors and matrices to the extent that there are no true primitives.
    (Both): The difficulty of basic string manipulation.
    (Both): Lack of or awkwardness in support for basic data structures like hash tables and "real", i.e. type-parametric and nestable, arrays.
    (Both): They're really, really slow even by interpreted language standards, unless you bend over backwards to vectorize your code.
    (Both): They seem not to be designed to interact with the outside world. For example, both are fairly bulky programs that take a while to launch and seem not to be designed to make simple text filter programs easy to write. Furthermore, the lack of good string processing makes file I/O in anything but very standard forms nearly impossible.
    (Both): Object orientation seems to have a very bolted-on feel. Yes, you can do it, but it doesn't feel much more idiomatic than OO in C.
    (Both): No obvious, simple way to get a reference type. No pointers or class references. For example, I have no idea how you would roll your own linked list in either of these languages.
    (MATLAB): You can't put multiple top-level functions in a single file, encouraging very long functions and cut-and-paste coding.
    (MATLAB): Integers apparently don't exist as a first-class type.
    (R): The basic built-in data structures seem way too high-level and poorly documented, and never seem to do quite what I expect given my experience with similar but lower-level data structures.
    (R): The documentation is spread all over the place and virtually impossible to browse or search. Even D, which is often knocked for bad documentation and is still fairly alpha-ish, is substantially better as far as I can tell.
    (R): At least as far as I'm aware, there's no good IDE for it. Again, even D, a fairly alpha-ish language with a small community, does better.

    In general, I also feel like MATLAB and R could be easily replaced by plain old libraries in more general-purpose languages, if sufficiently comprehensive libraries existed. This is especially true in newer general-purpose languages that include lots of features for library writers. Why do R and MATLAB seem so weird to me? Are there any other major issues that you've noticed that may make these languages come off as strange to hardcore programmers? When their use is necessary, what are some good survival tips?

    Edit: I'm seeing one issue in some of the answers I've gotten. I have a strong personal preference, when I analyze data, for having one script that incorporates the whole pipeline. This implies that a general-purpose language needs to be used. I hate having to write a script to "clean up" the data and spit it out, then another to read it back in a completely different environment, etc. Using MATLAB/R for some of my work and a completely different language with a completely different address space and way of thinking for the rest is a huge source of friction. Furthermore, I know there are glue layers that exist, but they always seem to be horribly complicated and a source of friction.

    Read the article

  • Code Smell: Inheritance Abuse

    - by dsimcha
    It's been generally accepted in the OO community that one should "favor composition over inheritance". On the other hand, inheritance does provide both polymorphism and a straightforward, terse way of delegating everything to a base class unless explicitly overridden and is therefore extremely convenient and useful. Delegation can often (though not always) be verbose and brittle. The most obvious and IMHO surest sign of inheritance abuse is violation of the Liskov Substitution Principle. What are some other signs that inheritance is The Wrong Tool for the Job even if it seems convenient?
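
    For illustration, here is a minimal sketch in Python of the Liskov Substitution Principle violation mentioned above, using the classic (hypothetical) Rectangle/Square pair rather than anything from the question itself. Square "is-a" Rectangle only until a caller relies on Rectangle's contract that width and height vary independently.

        # Hypothetical example, not a definitive rule: inheritance that looks
        # convenient but quietly changes the base class's contract.
        class Rectangle:
            def __init__(self, width, height):
                self._width, self._height = width, height

            @property
            def width(self): return self._width

            @width.setter
            def width(self, value): self._width = value

            @property
            def height(self): return self._height

            @height.setter
            def height(self, value): self._height = value

            def area(self): return self._width * self._height

        class Square(Rectangle):
            # Keeps width == height, which silently breaks code written for Rectangle.
            def __init__(self, side):
                super().__init__(side, side)

            @Rectangle.width.setter
            def width(self, value): self._width = self._height = value

            @Rectangle.height.setter
            def height(self, value): self._width = self._height = value

        def double_width(rect):
            # Written against Rectangle: doubling the width should double the area.
            old = rect.area()
            rect.width *= 2
            assert rect.area() == 2 * old, "LSP violation: not substitutable"

        double_width(Rectangle(3, 4))   # fine
        double_width(Square(3))         # AssertionError: the area quadrupled

    Composition (a Square holding a Rectangle, or two unrelated classes) avoids the broken substitutability at the cost of some delegation boilerplate.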

    Read the article

  • What counts as reinventing the wheel?

    - by dsimcha
    Do the following scenarios count as "reinventing the wheel" in your book? A solution exists, but not in the language you want to use, and existing solutions can't be interfaced with the language you want to use in a clean, idiomatic way. In principle you could get an existing library to do what you wanted with heavy modification, but you think it would probably be easier to just start from scratch. What you're writing has the same one-line description as stuff that's already been done, but you're targeting a different niche. For example, maybe your problem has been solved a zillion times before, but in a way that's inefficient for large datasets and your code works well for large datasets.

    Read the article

  • Is loose coupling w/o use cases an anti-pattern?

    - by dsimcha
    Loose coupling is, to some developers, the holy grail of well-engineered software. It's certainly a good thing when it makes code more flexible in the face of changes that are likely to occur in the foreseeable future, or avoids code duplication. On the other hand, efforts to loosely couple components increase the amount of indirection in a program, thus increasing its complexity, often making it more difficult to understand and often making it less efficient. Do you consider a focus on loose coupling without any use cases for the loose coupling (such as avoiding code duplication or planning for changes that are likely to occur in the foreseeable future) to be an anti-pattern? Can loose coupling fall under the umbrella of YAGNI?
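
    As a concrete illustration of the indirection being questioned, here is a hypothetical Python sketch (the GreeterInterface/DefaultGreeter names are made up for this example): an abstract interface plus factory wrapped around something that only ever has one implementation, next to the direct version.

        # Hypothetical sketch of "loose coupling without a use case".
        from abc import ABC, abstractmethod

        class GreeterInterface(ABC):
            @abstractmethod
            def greet(self, name): ...

        class DefaultGreeter(GreeterInterface):
            def greet(self, name):
                return "Hello, " + name

        def greeter_factory():
            return DefaultGreeter()   # the only implementation there will ever be

        print(greeter_factory().greet("world"))

        # Versus the direct version, which is shorter and no harder to change later:
        def greet(name):
            return "Hello, " + name

        print(greet("world"))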

    Read the article

  • Generalist Languages: Dying or Alive and Well?

    - by dsimcha
    Around here, it seems like there's somewhat of a consensus that generalist programming languages (those that try to be good at everything: supporting multiple paradigms, both very high- and very low-level programming, etc.) are a bad idea, and that it's better to pick the right tool for the job and use lots of different languages. I see three major areas where this is flawed:

    1. Interfacing multiple languages is always at least a source of friction and is sometimes practically impossible. How severe a problem this is depends on how fine-grained the interfacing is. Near the boundary between the two languages, though, you're basically limited to the intersection of their features, and you have to care about things like binary interfaces that you usually wouldn't. Passing complex data structures (i.e. not just primitives and arrays of primitives) between languages is almost always a hassle. Furthermore, shifting between different syntaxes, different conventions, etc. can be confusing and annoying, though this is a fairly minor complaint.

    2. Requirements are never set in stone. I hate picking a language thinking it's the right tool for the job, then realizing that, when some new requirement surfaces, it's actually a terrible choice for that requirement. This has happened to me several times before, usually when working with languages that are very slow, very domain-specific and/or have very poor concurrency/parallelism support.

    3. When you program in a language for a while, you start to build up a personal toolbox of small utility functions/classes/programs. The value of these goes down drastically if you're forced to use a different language than the one you've accumulated all this code in.

    What am I missing here? Why shouldn't more focus be placed on generalist languages? Are generalist languages as a category dying or alive and well?

    Read the article

  • How employable am I as a programmer?

    - by dsimcha
    I'm currently a Ph.D. student in Biomedical Engineering with a concentration in computational biology and am starting to think about what I want to do after graduate school. I feel like I've accumulated a lot of programming skills while in grad school, but taken a very non-traditional path to learning all this stuff. I'm wondering whether I would have an easy time getting hired as a programmer and could fall back on that if I can't find a good job directly in my field, and if so, whether I would qualify for a more prestigious position than "code monkey".

    Things I Have Going For Me:
    - Approximately 4 years of experience programming as part of my research. I believe I have a solid enough grasp of the fundamentals that I could pick up new languages and technologies pretty fast, and could demonstrate this in an interview.
    - Good math and statistics skills.
    - An extensive portfolio of open source work (and the knowledge that working on these projects implies): I wrote a statistics library in D, mostly from scratch. I wrote a parallelism library (parallel map, reduce, foreach, task parallelism, pipelining, etc.) that is currently in review for adoption by the D standard library. I wrote a 2D plotting library for D against the GTK Cairo backend; I currently use it for most of the figures I make for my research. I've contributed several major performance optimizations to the D garbage collector. (Most of these were low-hanging fruit, but it still shows my knowledge of low-level issues like memory management, pointers and bit twiddling.) I've contributed lots of miscellaneous bug fixes to the D standard library and could show the change logs to prove it. (This demonstrates my ability to read other people's code.)

    Things I Have Going Against Me:
    - Most of my programming experience is in D and Python. I have very little to virtually no experience in the more established, "enterprise-y" languages like Java, C# and C++, though I have learned a decent amount about these languages from small, one-off projects and discussions about language design in the D community.
    - In general, I have absolutely no knowledge of "enterprise-y" technologies. I've never used a framework before, possibly because most reusable code for scientific work and for D tends to call itself a "library" instead.
    - I have virtually no formal computer science/software engineering training. Almost all of my knowledge comes from talking to programming geek friends, reading blogs, forums, StackOverflow, etc.
    - I have zero professional experience with the official title of "developer", "software engineer", or something similar.

    Read the article

  • Open Source: Is Testing/Bug Reporting A Major Contribution?

    - by dsimcha
    When evaluating contributions to open source projects, does testing the code on various real-world inputs, reducing a large number of complicated bugs to small test cases and filing good bug reports count as a significant contribution? I've done this for several open-source projects (specifically D compilers) where I wanted to help out but the codebase was too complicated to learn my way around in the amount of spare time I have. I'm interested in both the perspective of the main developers (those that write the code and fix the bugs) and from the perspective of employers (in case I want to put it on my resume at some point).

    Read the article

  • Consistency vs. Usability?

    - by dsimcha
    When designing an API, consistency often aids usability. However, sometimes the two conflict, as when an extra API feature can be added to streamline a common case. There seems to be somewhat of a divide over what to do here. Some designs (the Java standard library comes to mind) favor consistency even if it makes common cases more verbose. Others (the Python standard library comes to mind) favor usability even if it means treating the common case as "special" to make it easier. What is your opinion on how consistency and usability should be balanced?
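
    A small, hypothetical Python illustration of the trade-off (the config/timeout names are invented for this sketch): dict.get() is a usability-driven special case for the very common "missing key means use a default" pattern, while the "consistent" alternative spells out the same thing through the general exception mechanism.

        config = {"retries": 3}

        # Usability-oriented: a convenience method for a very common need.
        timeout = config.get("timeout", 30)

        # Consistency-oriented: the same result via the general-purpose mechanism.
        try:
            timeout = config["timeout"]
        except KeyError:
            timeout = 30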

    Read the article

  • Why Java as a First Language?

    - by dsimcha
    Why is Java so popular as a first language to teach beginners? To me it seems like a terrible choice: It's statically typed. Static typing isn't useful unless you care a lot about either performance or scaling to large projects. It requires tons of boilerplate to get the simplest code up and running. Try explaining "Hello, world" to someone who's never programmed before. It only handles the middle levels of abstraction well and is single-paradigm, thus leaving out a lot of important concepts. You can't program at a very low level (pointers, manual memory management) or a very high level (metaprogramming, macros) in it. In general, Java's biggest strength (i.e. the reason people use it despite the shortcomings of the language per se) is its libraries and tool support, which is probably the least important attribute for a beginner language. In fact, while useful in the real world, these may be negatives from a pedagogical perspective, as they can discourage learning to write code from scratch.

    Read the article

  • Clarify the Single Responsibility Principle.

    - by dsimcha
    The Single Responsibility Principle states that a class should do one and only one thing. Some cases are pretty clear cut. Others, though, are difficult because what looks like "one thing" when viewed at a given level of abstraction may be multiple things when viewed at a lower level. I also fear that honoring the Single Responsibility Principle at the lower levels can produce excessively decoupled, verbose ravioli code, where more lines are spent creating tiny classes for everything and plumbing information around than actually solving the problem at hand. How would you describe what "one thing" means? What are some concrete signs that a class really does more than "one thing"?
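
    A minimal, hypothetical Python sketch of the abstraction-level problem (Report, FakeDb and the method names are made up for this example): at one level the class does "one thing" (produce a report); at a lower level it clearly does three (query, format, deliver).

        # Hypothetical: "one thing" at a high level, three things at a lower level.
        class Report:
            def __init__(self, db):
                self.db = db

            def run(self, user_id):
                rows = self.db.fetch_orders(user_id)            # 1. data access
                body = "\n".join(                               # 2. formatting
                    f"{r['item']}: {r['total']}" for r in rows)
                self.send_email(user_id, body)                  # 3. delivery

            def send_email(self, user_id, body):
                print(f"emailing report to user {user_id}:\n{body}")

        class FakeDb:
            def fetch_orders(self, user_id):
                return [{"item": "widget", "total": 9.99}]

        Report(FakeDb()).run(user_id=42)

        # Splitting this strictly (OrderRepository, ReportFormatter, EmailSender, ...)
        # honors SRP at the lower level, at the cost of more classes and plumbing.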

    Read the article

  • Old Visioneer 5800 on Windows 7 64-bit?

    - by dsimcha
    Does anyone know of a way to get an old Visioneer 5800 scanner that's supposed to work with nothing past Windows XP SP1 to work on Windows 7 64-bit? I don't care about all the bells and whistles, just the basic features. Is there any kind of generic TWAIN interface that can be used?

    Read the article

  • Asus MyCinema U3100mini Choppy

    - by dsimcha
    I'm running an Asus MyCinema U3100mini ATSC on Windows 7 64-bit. When I play live TV in Windows Media Center, it's very choppy and uses 500+ MB of RAM, I'm guessing due to the hard drive buffering functionality. Is there any way to disable the live TV pause buffer completely? If not, can anyone recommend alternative software that: Works with the MyCinema. Is lightweight and not horribly bloated with features I'll never use like Windows Media Center is. Edits: This is a dual boot system. I've discovered that the tuner actually works fine on XP. It also works fine on my other computer, which has slower hardware and also runs Windows 7 64-bit. The problem actually seems to be with playback at large screen sizes, not with hard drive buffering. Everything works fine below a certain window size and fails for large windows or full screens. Also, the same thing seems to happen whether playing live or recorded TV. As far as the obvious stuff goes, I have the latest video drivers from ATI for my Radeon x1050.

    Read the article

  • Non-Windows, non-Unix-like OS's?

    - by dsimcha
    Since most operating systems I've heard of besides Windows seem to derive their heritage from Unix, I've been curious whether any OS's with the following characteristics exist: Not generally considered Unix-like, i.e. wasn't designed with Unix compatibility as a primary goal, doesn't use X11 as its default GUI in the most common distributions, doesn't support Unix commands by default, etc. Not in the Windows NT family. Is a modern production operating system, not a purely legacy operating system, a research/hobby project or an OS that's still in an alpha state. Is targeted at commodity x86/x64 PC hardware.

    Read the article

  • Windows Media Center Buffering and Asus MyCinema U3100mini

    - by dsimcha
    I'm running an Asus MyCinema U3100mini ATSC on Windows 7 64-bit. When I play live TV in Windows Media Center, it's very choppy and uses 500+ MB of RAM, I'm guessing due to the hard drive buffering functionality. Is there any way to disable the live TV pause buffer completely? If not, can anyone recommend alternative software that: Works with the MyCinema. Is lightweight and not horribly bloated with features I'll never use like Windows Media Center is.

    Read the article

  • Triple-Boot + 4 partition Limit

    - by dsimcha
    I just bought a new hard drive so that I could convert my XP-only machine into an XP-Ubuntu-Windows 7 triple boot machine. Since the drive is absurdly huge (1 TB) I wouldn't mind throwing ReactOS into the mix, too. I just found out that master boot records are limited to 4 entries, meaning 4 primary partitions. I had Windows XP set up on my old drive as a boot partition, a program files partition and a media partition. Since I really didn't want to install XP from scratch, I cloned this setup on my new drive. This leaves me one MBR partition entry for installing Windows 7, Ubuntu and ReactOS. I'd like to avoid having to install XP from scratch like the plague, partly because it's supposed to be a safety net in case things go wrong with my other OS's and because I've invested a lot of time getting it set up exactly the way I like it. Here are the options I've considered and why I don't like them:

    - Install Windows 7 on my media partition. This would work, but I prefer to keep my media partition completely separate from any OS, so that I can reformat an OS partition without affecting my media partition at all.
    - Use wubi or something to install Ubuntu in the same partition as something else. Again, this is brittle.
    - Move all my media to a logical drive on an extended partition. Create another logical drive on this extended partition for Ubuntu. The problem here is that extended partitions are rather brittle--if you nuke one, it renders the rest useless.
    - Just put the old drive back in my computer and run XP off it. Use the new one for the other OS's. The problem here is that the old drive is slower and uses extra power, generates extra heat, etc.

    Can anyone suggest any other possibilities that I may have overlooked?

    Read the article

  • Why no Win16 support in 64-bit Windows?

    - by dsimcha
    My understanding (from Wikipedia) is that the x64 instruction set supports executing 16-bit protected mode code from long mode, but cannot execute real mode code without being switched out of long mode because long mode lacks virtual 8086 mode. Therefore, it stands to reason that real mode DOS apps can't be run in Win64 w/o software emulation or dynamic translation. However, why was support for Win16 protected-mode apps excluded when support for them seems (at least at first glance) to be reasonably implementable and is included in newer versions of Win32? Was it just a matter of demand not being high enough to justify implementation costs (and the win32 version was already implemented), or is there a good technical reason?

    Read the article

  • Unreliable resume from suspend?

    - by dsimcha
    My desktop PC (home-built) resumes from suspend somewhat unreliably. I'd say that it resumes successfully about 85-90% of the time and hangs with a blank screen 5-10% of the time. As far as I can tell, the success or failure of the resume is completely random. I doubt it's a software problem because I triple boot Windows 7, Windows XP and Ubuntu and it's similar under all 3 operating systems. If it matters, my system is overclocked, though other than the resume-from-suspend issue, it's definitely rock stable. What are some of the obvious suspects that would cause random, sporadic failures to resume from suspend?

    Read the article

  • Python serialize lexical closures?

    - by dsimcha
    Is there a way to serialize a lexical closure in Python using the standard library? pickle and marshal appear not to work with lexical closures. I don't really care about the details of binary vs. string serialization, etc., it just has to work. For example:

        def foo(bar, baz):
            def closure(waldo):
                return baz * waldo
            return closure

    I'd like to just be able to dump instances of closure to a file and read them back. Edit: One relatively obvious way that this could be solved is with some reflection hacks to convert lexical closures into class objects and vice-versa. One could then convert to classes, serialize, unserialize, convert back to closures. Heck, given that Python is duck typed, if you overloaded the function call operator of the class to make it look like a function, you wouldn't even really need to convert it back to a closure and the code using it wouldn't know the difference. If any Python reflection API gurus are out there, please speak up.
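
    As a minimal sketch of the class-based workaround the Edit describes, written by hand rather than via reflection: a picklable callable object that captures baz as instance state instead of as a free variable. Multiplier is a made-up name for this example, not part of any library.

        import pickle

        # Hypothetical stand-in for the closure returned by foo(bar, baz).
        class Multiplier:
            def __init__(self, baz):
                self.baz = baz

            def __call__(self, waldo):
                return self.baz * waldo

        f = Multiplier(3)
        data = pickle.dumps(f)   # works: Multiplier is an importable, module-level class
        g = pickle.loads(data)
        print(g(14))             # 42 -- behaves like the original closure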

    Read the article

  • Why is thread local storage so slow?

    - by dsimcha
    I'm working on a custom mark-release style memory allocator for the D programming language that works by allocating from thread-local regions. It seems that the thread local storage bottleneck is causing a huge (~50%) slowdown in allocating memory from these regions compared to an otherwise identical single threaded version of the code, even after designing my code to have only one TLS lookup per allocation/deallocation. This is based on allocating/freeing memory a large number of times in a loop, and I'm trying to figure out if it's an artifact of my benchmarking method. My understanding is that thread local storage should basically just involve accessing something through an extra layer of indirection, similar to accessing a variable via a pointer. Is this incorrect? How much overhead does thread-local storage typically have? Note: Although I mention D, I'm also interested in general answers that aren't specific to D, since D's implementation of thread-local storage will likely improve if it is slower than the best implementations.
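
    This isn't D, but as a rough illustration of isolating the cost of that extra layer of indirection, here is a hypothetical Python micro-benchmark comparing attribute access through threading.local with access to an ordinary object; the absolute numbers say nothing about D's TLS implementation, only the shape of the comparison being described.

        import threading
        import timeit

        tls = threading.local()
        tls.counter = 0          # only initialized for the thread running this code

        class Box:
            pass

        plain = Box()
        plain.counter = 0

        def bump_tls():
            tls.counter += 1     # goes through the per-thread lookup

        def bump_plain():
            plain.counter += 1   # ordinary attribute access

        print("thread-local:", timeit.timeit(bump_tls, number=1_000_000))
        print("plain object:", timeit.timeit(bump_plain, number=1_000_000))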

    Read the article

  • Linux 2.6.31 Scheduler and Multithreaded Jobs

    - by dsimcha
    I run massively parallel scientific computing jobs on a shared Linux computer with 24 cores. Most of the time my jobs are capable of scaling to 24 cores when nothing else is running on this computer. However, it seems like when even one single-threaded job that isn't mine is running, my 24-thread jobs (which I set for high nice values) only manage to get ~1800% CPU (using Linux notation). Meanwhile, about 500% of the CPU cycles (again, using Linux notation) are idle. Can anyone explain this behavior and what I can do about it to get all of the 23 cores that aren't being used by someone else? Notes: In case it's relevant, I have observed this on slightly different kernel versions, though I can't remember which off the top of my head. The CPU architecture is x64. Is it at all possible that the fact that my 24-core jobs are 32-bit and the other jobs I'm competing w/ are 64-bit is relevant? Edit: One thing I just noticed is that going up to 30 threads seems to alleviate the problem to some degree. It gets me up to ~2100% CPU.

    Read the article
